# Combining experimental error in a mean

I understand the rules for combining experimental errors in sums, differences and ratios (as explained here), but what happens to an experimental error when you average it?

Say, ruler measurements of the length of beetles that are all values like $12.3\,\text{cm} \pm 2\,\text{mm}$.
I can find plenty of explanation on the web about how to add and multiply values like that together, but what happens to the error when you take an average of (say) 10 measurements?


The formulas on the link you quote assume that the measurements are

• independent and

• come from a Gaussian distribution around the true value (which one wants to measure).

They're essentially relying on the fact that the sum of two (or more) independent Gaussian random variables is again Gaussian, with a mean equal to the sum of the means and a variance (the 'square of the error') equal to the sum of the variances of the summed distributions. See also this section on Wikipedia: http://en.wikipedia.org/wiki/Normal_distribution#Combination_of_two_independent_random_variables
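A quick Monte Carlo check of this variance-addition rule (a sketch; the means, standard deviations and sample size are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(0)

# Two independent Gaussian "measurements": means 10 and 5, std devs 0.3 and 0.4.
n = 200_000
x = [random.gauss(10.0, 0.3) for _ in range(n)]
y = [random.gauss(5.0, 0.4) for _ in range(n)]

# Their sum should again be Gaussian with mean 10 + 5 = 15 and
# variance 0.3**2 + 0.4**2 = 0.25, i.e. a standard deviation of 0.5.
s = [a + b for a, b in zip(x, y)]
print(round(statistics.mean(s), 2))   # close to 15.0
print(round(statistics.stdev(s), 2))  # close to 0.5
```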

You'll notice that the product of two Gaussian random variables is no longer Gaussian, so for products the formulas rely on a Gaussian approximation.

For the ratio, the resulting distribution is a Cauchy distribution, whose variance does not even exist (its tails do not fall off quickly enough as you go to $\pm\infty$), so the formula is definitely an approximation in this case.
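You can see the pathology directly by sampling: the ratio of two independent standard normals is Cauchy distributed, and the sample is dominated by rare, huge outliers whenever the denominator comes close to zero (a sketch with arbitrary sample sizes):

```python
import random
import statistics

random.seed(1)

# Ratio of two independent standard normals is Cauchy distributed:
# its variance is undefined, so moment-based error formulas cannot be exact.
n = 100_000
ratios = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]

# Rare near-zero denominators produce enormous outliers...
print(max(abs(r) for r in ratios))

# ...while the median of |ratio| stays near 1, the Cauchy scale parameter.
print(round(statistics.median(abs(r) for r in ratios), 2))
```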

When you average $n$ measurements, $n$ has no uncertainty assigned, so (assuming all your measurements have the same uncertainty $\sigma$) the uncertainty of the average is the uncertainty of a single measurement divided by $\sqrt{n}$, i.e. $\sigma_{\text{mean}} = \sigma/\sqrt{n}$ (this follows from the formula for the sum of two measurements).
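Applied to the beetle example: with $\sigma = 2\,\text{mm} = 0.2\,\text{cm}$ and $n = 10$, the averages of repeated 10-measurement experiments should scatter with $0.2/\sqrt{10} \approx 0.063\,\text{cm}$. A simulation sketch (the noise model and trial count are illustrative assumptions):

```python
import random
import statistics

random.seed(2)

# Each "measurement" is the true length 12.3 cm plus Gaussian noise with
# sigma = 0.2 cm (the 2 mm ruler uncertainty from the question).
true_length, sigma, n = 12.3, 0.2, 10

# Repeat the 10-measurement experiment many times and look at the
# spread of the resulting averages.
trials = 50_000
averages = [
    statistics.mean(random.gauss(true_length, sigma) for _ in range(n))
    for _ in range(trials)
]

# The averages scatter with sigma / sqrt(n) = 0.2 / sqrt(10) ~ 0.063 cm.
print(round(statistics.stdev(averages), 3))
```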

While this looks like one could achieve arbitrarily good precision just by making a sufficiently large number of measurements, keep in mind that this assumes the deviations of the measurements from the true value are purely 'statistical' (randomly distributed).

In practice, you'll find sources of measurement bias (which are common to all measurements, so the assumption of independence is no longer justified), such as the fact that you'll always take the value of the closest ruler mark.
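A minimal illustration of why such a bias survives averaging (the specific true length is a made-up example, not from the question):

```python
import statistics

# Suppose the true length is 12.34 cm but the ruler only has 1 mm marks and
# we always read off the nearest mark. Every reading is then 12.3 cm, and no
# amount of averaging recovers the missing 0.04 cm.
true_length = 12.34
readings = [round(true_length, 1) for _ in range(1000)]
bias = statistics.mean(readings) - true_length
print(round(bias, 2))  # -0.04: the systematic error survives averaging
```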
