Let's say I have an array of values drawn from a normal distribution with mean 50 and standard deviation 10. Using python:

`d = np.random.normal(50, 10, 1000)`

I take a single random sample of size n = 10 from this distribution:

`s = np.random.choice(d, 10)`

What process do I go through to get the best estimate of the population mean from the sample, and an estimation of the margin of error?

Obviously I know the population mean and standard deviation in this case, but let's pretend I don't.

I could also take many samples and compute the sampling distribution of the mean, but let's say I can't do that either.

So I just have this single sample. What process do I go through and can I estimate how often my estimate of the population mean will be wrong?


#### Best Answer

I'm assuming infinite population size.

The best estimate of the population mean is the sample mean, $\bar{x}$.

The best estimate of the population standard deviation is the sample standard deviation, $s = \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2}$.
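One pitfall worth flagging: NumPy's `np.std` divides by $n$ by default, not $n-1$. A quick sketch (the sample values are made up for illustration) showing that `ddof=1` matches the formula above:

```python
import numpy as np

# hypothetical sample of n = 10 values
x = np.array([52.1, 47.3, 60.8, 44.9, 55.2, 49.7, 58.4, 46.1, 51.0, 53.6])
n = len(x)

# the n-1 formula from the answer, written out
s_manual = np.sqrt(((x - x.mean()) ** 2).sum() / (n - 1))

# equivalent built-in: ddof=1 switches the divisor from n to n-1
s_builtin = x.std(ddof=1)
```

`s_manual` and `s_builtin` agree; plain `x.std()` would be slightly smaller.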

Since the sample size is small (n = 10, well under 30) and the population standard deviation is unknown, I would use the t-distribution to construct the interval.

$\bar{x} \pm t \left(\frac{s}{\sqrt{n}}\right)$

where $t$ is the critical value from the $t_{n-1}$ distribution. In your case with n = 10 and a desired two-sided confidence level of 95%, $t_{9} = 2.262$ (1.833 is the one-tailed value).
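The whole procedure can be sketched end to end in Python. This assumes `scipy.stats` is available for the critical value, which goes beyond the question's NumPy-only code; the seed is arbitrary, just for reproducibility:

```python
import numpy as np
from scipy import stats

np.random.seed(0)                        # arbitrary seed for reproducibility
d = np.random.normal(50, 10, 1000)       # the "population" from the question
s = np.random.choice(d, 10)              # single sample of n = 10

n = len(s)
xbar = s.mean()                          # best estimate of the population mean
sd = s.std(ddof=1)                       # sample sd, n-1 in the denominator
t_crit = stats.t.ppf(0.975, df=n - 1)    # two-sided 95% critical value, ~2.262
margin = t_crit * sd / np.sqrt(n)        # margin of error

print(f"mean = {xbar:.2f}, 95% CI = ({xbar - margin:.2f}, {xbar + margin:.2f})")
```

If the procedure is repeated with fresh samples, an interval built this way should miss the true mean about 5% of the time, which is exactly the "how often will I be wrong" the question asks about.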

