
INSTRUMENTAL ANALYSIS AND SEPARATIONS MEASUREMENT
LECTURE 4

The output from analytical equipment must be converted to useful (publishable!) information. All measurements include at least some error, and a good understanding of the sources of error is necessary to obtain reliable data. In a recent survey, LC/GC magazine ranked the sources of error in chemical analysis by the percentage of respondents choosing a specific cause. The survey indicates that sample processing is one of the most likely sources of error.

Figure 1. Relative Frequency of the Error Causes Encountered in Chemical Analysis, Shown as the Percentage of Respondents Choosing a Category.

Errors are commonly classified as random or systematic. Random errors have no assignable cause, and they affect the precision of a measurement. They are usually estimated by repeated sampling, in other words replication, and common statistical procedures such as analysis of variance are used to report the precision of the data. Systematic errors can be classified as instrument errors (uncalibrated scales or glassware, etc.), method errors (unknown side reactions, precipitation of reactants, impurities in reagents, etc.), or operator errors (math errors, sloppy measurement, etc.). Systematic errors can also be classified as either constant or proportional.

Table 1. Possible Sources of Systematic Errors.
STANDARDS & CALIBRATION

Calibration is the procedure that allows the analyst to estimate the concentration, mass, or volume of an analyte from the instrument response. Almost all analytical methods require some type of calibration.

Blanks

A blank is a simulated matrix that is as close to the sample matrix as possible but does not contain any of the analytes. A blank is used to correct for background effects and to account for interferences that may be due to the matrix itself or to added reagents.

External Standards

External standards are used to make the typical calibration curves commonly used in instrumental analysis. Several standards of known concentration (including a blank) are made up in the same matrix as the analytes, the standards are introduced into the instrument, and the resulting data are used to correlate instrument readings with analyte concentration. This correlation is known as a standard curve. It is desirable for the curve to be linear over a large concentration range, but this is not absolutely necessary; if the standard curve is not linear, however, many more standard points are needed to establish an accurate relationship between instrument response and analyte concentration. Often it is not possible to make up external standards that match the sample matrix exactly, and other calibration methods must be used.

Internal Standards

An internal standard is a compound that is added to all blanks, calibration standards, and samples in a known and constant amount. Instead of a normal calibration curve, a response ratio is computed for each standard and sample: calibration involves plotting the ratio of the analyte signal to the internal standard signal. The internal standard must be known to be absent from the sample matrix, so that the only source of the standard is the added amount.
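The external-standard procedure can be sketched in a few lines of code: fit a straight line to the standards' readings, then invert the standard curve to estimate an unknown concentration. All concentrations and responses below are made-up illustration values, not data from the lecture.

```python
# Hypothetical external-standard calibration: least-squares fit of
# response vs. concentration, then inversion of the standard curve.

def linear_fit(x, y):
    """Ordinary least-squares fit of y = m*x + b; returns (m, b)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    m = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
        sum((xi - mean_x) ** 2 for xi in x)
    b = mean_y - m * mean_x
    return m, b

# Standards: concentrations (ppm), including a blank, and instrument responses.
conc = [0.0, 1.0, 2.0, 4.0, 8.0]
resp = [0.02, 0.51, 1.00, 1.98, 4.01]

m, b = linear_fit(conc, resp)

# Invert the standard curve for an unknown sample reading.
unknown_resp = 1.45
unknown_conc = (unknown_resp - b) / m
```

In practice the fit would come with confidence intervals and a check of linearity, but the inversion step is the essence of reading a concentration off a standard curve.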
A properly chosen internal standard can compensate for several types of both random and systematic errors. The internal standard can be added to the sample before any sample preparation steps; in this case, variations in extraction, concentration, or other processing steps are reflected in the changes in the internal standard. It is important to choose an internal standard that behaves the same way as the analytes throughout the sample preparation process. Internal standards can also be added to the prepared sample just before it is analyzed; the standard is then used to adjust for variations in the analytical procedure, such as variation in detector output or in the mass of sample placed in the instrument.

Standard Addition Methods

Standard addition methods are commonly used in the analysis of complex samples. Where sample matrix effects are significant, this is the best choice of calibration method. Several increments of a standard, chemically identical to the analyte of interest, are added directly to the samples. In its simplest form, a known amount of the analyte standard, called a "spike," is added to a sample, and the instrument readings of the samples with and without the spike are compared. To the sample without the spike, a small amount of water or solvent is added so that the volumes of the two samples are equal. The concentration of the analyte in the original sample can then be estimated by:

c_x = (S_1 · c_s · V_s) / ((S_2 − S_1) · V_x)

where:
S_1 = instrument reading without the spike
S_2 = instrument reading with the spike
V_s = volume of the spike
V_x = volume of the sample
c_s = concentration of the spike
c_x = concentration of the analyte in the sample

Detection Limits

The detection limit is the smallest concentration of an analyte that can be detected with a given degree of confidence.
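The single-spike formula above is simple enough to compute directly. The readings, volumes, and spike concentration below are hypothetical illustration values.

```python
# Single-spike standard addition:
#   c_x = S1 * c_s * V_s / ((S2 - S1) * V_x)

def standard_addition(S1, S2, c_s, V_s, V_x):
    """Estimate the analyte concentration in the original sample."""
    return S1 * c_s * V_s / ((S2 - S1) * V_x)

S1  = 0.40    # instrument reading, unspiked sample
S2  = 0.70    # instrument reading, spiked sample
c_s = 10.0    # spike concentration, ppm
V_s = 1.0     # spike volume, mL
V_x = 10.0    # sample volume, mL

c_x = standard_addition(S1, S2, c_s, V_s, V_x)   # ~1.33 ppm
```

Because the spike experiences the same matrix effects as the analyte, the ratio of readings cancels most matrix interference, which is why standard addition is preferred for complex samples.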
The detection limit is commonly based on a signal-to-noise ratio of 3, where noise is defined as the width of the baseline, as shown in Figure 2. A related concept is the limit of quantitation: the smallest signal that can be converted to an accurate measurement of an analyte. The limit of quantitation can be defined in various ways, but it is commonly set at a signal-to-noise ratio of 10; at this limit, the analytical precision should be better than ±3%.

Figure 2. Graph Showing the Measurement of Signal-to-Noise Ratio for a Detector Signal. Signal A is Below the Detection Limit and Signal B is Above the Limit.

Sensitivity

If the relationship between the detector response and the analyte concentration is linear, it can be described by the equation:

y = mx + b, that is, detector response = (slope of the calibration curve) × (concentration) + y-intercept

The slope 'm' of the calibration curve is a measure of the sensitivity: if a small change in analyte concentration produces a relatively large change in detector output, the method is sensitive. Sensitivity is a measure of how easily the detector distinguishes small differences in analyte concentration, whereas the detection limit is a measure of simply how small a concentration can be detected.

Validation of an Analytical Method

Once an analytical method has been developed, it needs to be validated. The validation process ensures that the results are reproducible and accurate. Important parameters to consider during validation include specificity, selectivity, intermediate precision, reproducibility, repeatability, accuracy, bias, linearity, dynamic range, limit of detection, limit of quantification, recovery, robustness, ruggedness, and stability. Specificity is a measure of how the method responds to compounds other than the analyte.
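The S/N = 3 and S/N = 10 conventions can be turned into concentration limits if a linear calibration slope is available to convert signal into concentration. The slope and noise values below are hypothetical, a sketch of the arithmetic rather than a measured system.

```python
# Signal-to-noise based detection and quantitation limits.
# Assumes a linear calibration: signal = m * concentration.

m = 0.5          # calibration slope (signal units per ppm), hypothetical
noise = 0.006    # baseline noise width (signal units), hypothetical

lod_signal = 3 * noise      # smallest detectable signal (S/N = 3)
loq_signal = 10 * noise     # smallest quantifiable signal (S/N = 10)

lod_conc = lod_signal / m   # detection limit in concentration units
loq_conc = loq_signal / m   # quantitation limit in concentration units
```

Note how the two figures of merit interact: a steep slope (high sensitivity) pushes both limits to lower concentrations for the same noise level.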
A method that responds to only a limited number of compounds is termed selective. Reproducibility refers to the precision of measurements among different laboratories, while intermediate precision refers to precision within the same laboratory over the long term. Repeatability is the precision obtained (using the same laboratory and equipment) when samples from different matrices and with different concentrations are analyzed. As discussed previously, the accuracy of a method is based on how closely the analytical value approaches the true value. Linearity refers to the linear dependence of detector response on analyte concentration. The dynamic range of a method is the interval between the highest and lowest concentrations of the analyte that can be determined with accuracy and precision. The limit of detection is the smallest amount of an analyte that can be detected; it is often defined as the detector signal that is 3 times the average background signal (noise). The detector signal that is large enough to produce a quantifiable concentration of an analyte is called the limit of quantification; it is often defined as a signal 10 times the background noise level in a blank sample. Recovery is often estimated by adding a known amount of an analyte to a sample; it is defined as the ratio between the amount estimated by the method and the known amount added. Robustness refers to how strongly the results respond to small changes in the analytical process; for example, if a small change in the pH of the matrix produced a large change in the analytical results, the method would not be considered robust. Ruggedness refers to the precision of results obtained under a variety of conditions, such as different laboratories, different operators, or different instruments. The related term stability is essentially the same as ruggedness.
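The recovery calculation mentioned above is a one-line ratio. The spiked and found amounts below are hypothetical illustration values.

```python
# Recovery check: spike a sample with a known amount of analyte and
# compare the amount the method finds with the amount added.

added = 50.0      # ppb of analyte spiked into the sample (hypothetical)
found = 46.5      # ppb recovered by the method, above the unspiked level

recovery_pct = 100.0 * found / added   # percent recovery
```

Method-validation protocols typically repeat this at several concentration levels and accept the method only if recovery stays within a stated window (often something like 80-120%, depending on the analyte and matrix).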
If a new method meets the minimum acceptable levels for all these parameters, it can be considered a reliable method.

STATISTICS REVIEW

Required Sample Size

The value of the results from chemical analysis depends on the accuracy and precision of the data. Generally, the more samples you take, the higher the accuracy and precision you can report. The question, then, is how many samples you need to take in order to obtain the necessary precision. Estimating the required number of samples is an important step in any sampling plan or experimental design, and it is very important that you collect that number. Sometimes the estimated sample size may seem like too much work, but IF YOU ARE NOT WILLING TO COLLECT THE REQUIRED NUMBER OF SAMPLES, THEN DON'T COLLECT ANY DATA OR DON'T DO THE EXPERIMENT.

The formula for estimating the required sample size is:

n = (z_{α/2})² σ² / d²

where:
n = the required sample size
z_{α/2} = the critical z value for the chosen confidence level
σ = the standard deviation
d = the allowable error

For a typical 95% confidence interval, z_{α/2} = 1.96. The standard deviation is usually somewhat difficult to estimate, but a reasonable estimate will do. The allowable error is determined simply by how accurate you think the results need to be.

For example: You are investigating the effect of storage conditions on the level of methylpyrazine in roasted peanuts. The average concentration in your samples is 56 parts per billion (ppb), and the standard deviation is 7.5 ppb. Using a 95% confidence interval, you wish to show that your treatment increases the pyrazine content by at least 10%. In this case, the allowable error is 10% of 56 ppb, or 5.6 ppb.

n = (1.96)²(7.5)²/(5.6)² = 6.89, so you need 7 samples (replications).

It is common for experimental designs to use 4 replications. This estimated sample size shows, however, that 4 replications would not be enough to demonstrate a 10% difference between treatments, so the results would likely be inconclusive.
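The peanut example above can be reproduced directly from the sample-size formula; the only subtlety is that n must be rounded up, since you cannot collect a fraction of a replicate.

```python
# Required sample size: n = z^2 * sigma^2 / d^2, rounded up.
import math

def required_n(z, sigma, d):
    """Smallest whole number of samples meeting the precision target."""
    return math.ceil(z ** 2 * sigma ** 2 / d ** 2)

# Values from the methylpyrazine example in the text:
# z = 1.96 (95% confidence), sigma = 7.5 ppb, d = 5.6 ppb.
n = required_n(1.96, 7.5, 5.6)   # 6.89 rounds up to 7 replications
```

Running the same function with a tighter allowable error shows how quickly the workload grows: halving d to 2.8 ppb quadruples the required n.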
Therefore, there is no point in doing the experiment with only 4 replications.

CHEMISTRY REVIEW

Significant Figures

The number of digits needed to express results consistent with the precision of the measurement is called the number of significant figures. The digit "0" can be significant, or it can simply indicate the magnitude of a number. For example, 0.01 has one significant figure; the zeros simply indicate the magnitude of the number as "one hundredth." While 0.01 has only one significant figure, 0.10 has two: there is no difference in precision among 0.10, 0.12, and 0.13, all of which have 2 significant figures. In this case, the last zero is significant.

Examples of numbers with 3 significant figures: 0.00312, 312, 12.0, 1.10, 1.01, 12,400, 93,200,000

In fact, it is not really possible to determine the number of significant figures in 93,200,000, because there is no indication whether the zeros were actually measured as zero or are placeholders indicating the magnitude of the number. A better way to represent such a number is scientific notation:

9.32 × 10^7 has 3 significant figures
9.3200 × 10^7 has 5 significant figures

This representation clearly indicates the precision of the measurement. Note that the number 93,200,000.0 has 9 significant figures: the zero after the decimal point indicates that the value was measured to the nearest "tenth," so all of the digits are significant.

Many experiments require some processing of the data using addition, subtraction, multiplication, division, logarithms, etc. Calculations and reports such as this are often seen:

3.56 g × 1.7 = 6.052 grams

Is it correct to report the value to the nearest thousandth of a gram? No. Neither of the values used in the calculation has that degree of precision, so the final value cannot have that precision. As a general rule, the reported value should have the precision of the least precise value in the calculation. In this case, 1.7, which has 2 significant figures, should be used as the basis for reporting the precision of the final value:

3.56 g × 1.7 = 6.1 grams (2 significant figures)

Note: if the digit following the last significant figure is 5 or greater, the number is rounded up; otherwise it is rounded down.

Trick question: How many significant figures should you report in this calculation?

25.15 × 0.0135 × 100% = 33.95025%

The reported value should contain 3 significant figures (34.0%), based on the value 0.0135. The 100% plays no role in the number of significant figures, since it is only used to move the decimal point.

Note: If you take the average of 3 numbers:

(12.12 + 12.54 + 12.19)/3 = 12.28

you report 4 significant figures. The divisor "3" is an exact number and does not play a role in determining the number of significant figures.

Units of Measure

The following are a few units of measure often used in chromatography that you may not have used previously.

Mass
Milligram (mg): one thousandth of a gram
Microgram (µg): one millionth of a gram
Nanogram (ng): one billionth of a gram
Picogram (pg): one trillionth of a gram
Femtogram (fg): one quadrillionth of a gram

Pressure Units
Conversion factors (to convert from the units shown across the top of the table to the units shown in the left-hand column, multiply by the values shown).
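Returning to the significant-figure examples above, rounding a computed value to a chosen number of significant figures can be sketched with a small helper. This is only a rounding utility; it does not inspect the inputs to decide how many figures are justified, which remains the analyst's job.

```python
# Round x to `sig` significant figures using the magnitude of x.
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

two_fig   = round_sig(6.052, 2)      # reported to 2 significant figures
three_fig = round_sig(33.95025, 3)   # reported to 3 significant figures
```

The helper works by shifting the rounding position according to floor(log10|x|), so 6.052 rounds at the tenths place while 33.95025 rounds at the same *relative* precision, one order of magnitude higher.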
Sources:

Christensen, H. B. Statistics Step by Step; Houghton Mifflin Co.: Boston, 1977; 643 pages.
Fifield, F. W.; Kealey, D. Principles and Practices of Analytical Chemistry, 5th ed.; Blackwell Science, Ltd.: Oxford, 2000; 562 pages.
Moldoveanu, S. C.; David, V. Sample Preparation in Chromatography, 1st ed.; Elsevier Science: Amsterdam, 2000; 930 pages.
Lichon, M. J. Sample Preparation. In Handbook of Food Analysis; Nollet, L. M. L., Ed.; Marcel Dekker, Inc.: New York, 1996; 1088 pages.
Skoog, D. A.; Holler, F. J.; Nieman, T. A. Principles of Instrumental Analysis; 1998; 849 pages.
LC/GC Magazine Online: http://www.chromatographyonline.com/lcgc/