Solved – Standard error of mean vs error propagation

I'm confused about how to correctly calculate the final uncertainty when averaging measurements that each have their own internal errors.

Say I have 3 voltage measurements: (1.232 ± 0.001) V, (1.197 ± 0.001) V, and (1.292 ± 0.001) V. The uncertainties here are due to the precision limit on the voltmeter.

I want to plot the mean of these three measurements as a point on a graph, with an associated error bar for the voltage V. I've read that the final error is just (max value - min value) / N, but this ignores the uncertainties within the original measurements.

I've also seen something that suggested the final uncertainty is simply the uncertainty of 1 measurement divided by the square root of N, so in my case 0.001/sqrt(3). But this value seems like too small of an error.
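For concreteness, here is a quick numeric check of both candidate error bars on the three values above (plain Python; nothing assumed beyond the numbers in the question):

```python
import math

# The three measurements and the instrument precision from the question.
values = [1.232, 1.197, 1.292]
sigma_inst = 0.001
n = len(values)

mean = sum(values) / n

# Candidate 1: range-based estimate, (max - min) / N.
err_range = (max(values) - min(values)) / n

# Candidate 2: instrument precision reduced by averaging, sigma / sqrt(N).
err_inst = sigma_inst / math.sqrt(n)

print(mean, err_range, err_inst)
```

The two candidates disagree by a factor of about 50 here, which is exactly the puzzle: the scatter between the readings is far larger than the voltmeter's stated precision.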

Is one of these methods correct, or do I need to somehow combine these uncertainties? If so, how?

Hi: your question sounds like you might have a time series, and I think the answer depends critically on what 0.001 represents. If you know that, or can find it out, the rest is probably not so difficult.

In general statistical modelling, there is usually an underlying population model that makes assumptions about how the measurements are generated. Once that is given, any statistic can hopefully be constructed from first principles (but maybe not; see below for more on that).

For example, suppose the underlying model were, say,

$y_{it} = \mu_i + \epsilon_{it} \quad \forall\, i = 1, 2, 3$

so the population is actually three different populations, each with its own mean and standard deviation, where $\epsilon_{it}$ has some distribution, say $N(0, \sigma^2_{\epsilon_i})$, and the $\mu_{i}$ are the numbers you stated, namely (1.232, 1.197, 1.292). Then the statistics are pretty straightforward, depending on whether you take an average, a sum, or whatever.
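As a sketch, this model can be simulated directly. The noise level `sigma_eps` below is an assumption; it simply reuses the 0.001 precision stated in the question as the standard deviation of $\epsilon_{it}$:

```python
import random

# Tiny simulation of the model y_it = mu_i + eps_it.
# The mu values come from the question; sigma_eps is an assumed noise s.d.
random.seed(0)
mus = [1.232, 1.197, 1.292]
sigma_eps = 0.001

# Five simulated readings per group, keyed by group index i = 1, 2, 3.
samples = {i: [mu + random.gauss(0.0, sigma_eps) for _ in range(5)]
           for i, mu in enumerate(mus, start=1)}
for i, ys in samples.items():
    print(i, ys)
```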

For example, in the case above, if you wanted to know the standard deviation of a new value generated from group 1, with known mean, say 1.232, then it's $\sqrt{0.001}$ (reading 0.001 as the variance $\sigma^2_{\epsilon_1}$), because that's the standard deviation of $\epsilon_{1t}$.

On the other hand, maybe the true known $\mu_{1}$ is really 1.21, and the observed value 1.232 represents a random observation, and suppose you don't know $\sigma_{1}$. In that case, you can estimate the standard deviation of a new observation from group 1 using the sample of measurements that came from group 1.
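A sketch of that estimate: the group-1 readings below are made up for illustration (only 1.232 comes from the question), and since $\mu_{1}$ is taken as known, deviations are measured from $\mu_{1}$ itself rather than from the sample mean:

```python
# Hypothetical repeated readings from group 1; illustrative values only.
group1 = [1.232, 1.205, 1.218, 1.196]
mu1 = 1.21  # assumed known true mean

# With a known mean, use deviations from mu1 directly
# (no Bessel correction needed, so divide by n rather than n - 1).
sigma1_hat = (sum((y - mu1) ** 2 for y in group1) / len(group1)) ** 0.5
print(sigma1_hat)
```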

Or maybe the measurements come from one population with an overall known $\mu$, say equal to 1.2, rather than a $\mu_{i}$. So there are a lot of possibilities, and maybe one of them applies to your problem. I hope that helps, and I can try to give specifics if such a description is possible.
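Under that single-population reading, a minimal sketch of the usual error bar for the mean, taken from the scatter of the measurements themselves (the standard error of the mean):

```python
import math
import statistics

# Single-population view: all three measurements estimate one overall mu,
# and the scatter between them estimates the combined noise.
values = [1.232, 1.197, 1.292]
mean = statistics.mean(values)
s = statistics.stdev(values)        # sample s.d. (Bessel-corrected, n - 1)
sem = s / math.sqrt(len(values))    # standard error of the mean
print(mean, sem)
```

Note that this error bar is dominated by the spread between readings, not the 0.001 instrument precision, which suggests the voltmeter precision is not the main source of uncertainty here.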

P.S.: Note that if you take the $t$ subscript out of the model description above, there's a whole non-time-series area called analysis of variance that delves into this in much, much deeper detail and calculates more complex statistics. Based on your question, I assumed that wasn't what you wanted. If it is, then I'm not the person to explain it.
