I have a question that seems basic, but I found two alternative answers online, so I thought I should ask for advice. I have an experiment where each subject makes decisions about target words in two conditions: the target word is preceded either by a related word (the related condition) or by an unrelated word (the unrelated condition). We measure response times to the target words. My goal is to calculate the standard error associated with the difference between these two condition means. Since each subject performs the same number of decisions in the related and unrelated conditions, the observations are paired.
Which formula is appropriate for calculating the standard error of the difference between the two condition means? When looking for this online, I found the two alternatives I paste below. Do they express the same computation, or are there reasons to choose one over the other? Are these formulas appropriate when the observations are paired? And should I calculate the mean and SE for each subject separately (and then average across subjects), or can I just pool all subjects together?
Thanks for your help!
Best Answer
The formulas are equivalent:
$$\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}} = \sqrt{\frac{\sigma_1^2}{\sqrt{n_1}^2} + \frac{\sigma_2^2}{\sqrt{n_2}^2}} = \sqrt{\left(\frac{\sigma_1}{\sqrt{n_1}}\right)^2 + \left(\frac{\sigma_2}{\sqrt{n_2}}\right)^2} = \sqrt{SE_1^2 + SE_2^2}$$
- The first equality should be clear: $n_i = \sqrt{n_i}^2$.
- The second equality just pulls the square outside the fraction.
- The third equality applies the formula for the standard error of the mean (also given in the question).
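A minimal numerical check of this equivalence, assuming hypothetical response-time data; the variable names and simulated values are illustrative only, not from the question:

```python
import numpy as np

# Hypothetical per-condition response times (ms) for illustration.
rng = np.random.default_rng(0)
related = rng.normal(500, 50, size=40)
unrelated = rng.normal(530, 50, size=40)

n1, n2 = len(related), len(unrelated)
s1, s2 = related.std(ddof=1), unrelated.std(ddof=1)

# Form 1: sqrt(sigma1^2 / n1 + sigma2^2 / n2)
se_diff_a = np.sqrt(s1**2 / n1 + s2**2 / n2)

# Form 2: sqrt(SE1^2 + SE2^2), with SE_i = sigma_i / sqrt(n_i)
se1, se2 = s1 / np.sqrt(n1), s2 / np.sqrt(n2)
se_diff_b = np.sqrt(se1**2 + se2**2)

print(se_diff_a, se_diff_b)          # identical up to floating-point error
assert np.isclose(se_diff_a, se_diff_b)
```

Both forms compute the same quantity, so the choice between them is purely one of convenience.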