What does "reasonably certain" mean?

In the case of repeated measurements where the error appears to be random, you must use a statistical method to determine the uncertainty. This method is discussed in the next section; a quick preview is sketched below.
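As a preview, here is a minimal sketch in Python, assuming the usual statistical recipe of reporting the mean of the repeated readings as the best value and the standard error of the mean as the uncertainty. The readings themselves are made-up numbers, used only to show the arithmetic.

```python
# Minimal sketch: mean +/- standard error of the mean for repeated readings.
# The readings below are hypothetical, chosen only to illustrate the calculation.
from statistics import mean, stdev
from math import sqrt

readings = [9.81, 9.79, 9.84, 9.80, 9.82]   # made-up repeated measurements

best_value = mean(readings)                          # best estimate: the mean
uncertainty = stdev(readings) / sqrt(len(readings))  # standard error of the mean

print(f"{best_value:.2f} +/- {uncertainty:.2f}")
```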

However, even if you have only one data point, you still want to estimate the uncertainty. I can only give you this advice: use your common sense.

Make sure you have a clear definition of what you are measuring and estimate the range within which you can measure it. What you want to avoid is either overestimating or underestimating the uncertainty in your value. (See the comments below.)


In both of the above cases, I am talking about random errors. For systematic errors, you need more information to determine how much "skew" is going on in your experiment. You should try to identify these errors and estimate how much error they produce. Sources such as the Handbook of Chemistry and Physics can give you an approximation.




Overestimating may mean that all of your data points fall within the error range, but it also makes your result useless. Consider this example: I could estimate that the speed of a car is 50 +/- 50 mph. That certainly covers almost any speed the car could be going (from 0 to 100 mph), but the data tells me nothing!
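One way to see how little this estimate says is to compute the relative uncertainty. A quick sketch, using just the numbers from the example above:

```python
# Relative uncertainty of the 50 +/- 50 mph estimate from the example above.
speed = 50.0        # mph, the estimated value
uncertainty = 50.0  # mph, the estimated uncertainty

relative = uncertainty / speed
print(f"relative uncertainty = {relative:.0%}")   # 100% -- the estimate carries no information
```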

Underestimating is more common in undergraduate labs. The problem is that students want a quick and dirty "rule", such as "the smallest division equals the uncertainty". However, blind application of this "rule" leads to nonsensical results!

This is an actual experiment done in the lab:


Students are asked to find the current at which they see a dot on their screens.  The students change the current until they see a dot and then report that the current is:
18.5 +/- 0.5 A
They explain that 0.5 A is the smallest division on the scale and that they can therefore determine the current to that precision. Although the students can read the scale to that precision, that is not what they are being asked. The question is, "Over what range of currents do you see a dot?" It turns out that the dot is still visible from 18.0 A to 20.0 A. So what they should have written was 19 +/- 1 A, twice the error obtained by blindly applying their "rule".
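A short sketch of the arithmetic behind that better answer, assuming the usual recipe of reporting the midpoint of the observed range as the value and half the range as the uncertainty:

```python
# Value and uncertainty from the full range of currents that show a dot.
low, high = 18.0, 20.0          # currents (A) over which the dot is still visible

value = (low + high) / 2        # midpoint: 19.0 A
uncertainty = (high - low) / 2  # half the range: 1.0 A

print(f"{value:.0f} +/- {uncertainty:.0f} A")   # 19 +/- 1 A
```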