Let $(X_i)_{i}$ be a sequence of i.i.d. **positive** random variables with mean $1$ and variance $\sigma^2$. Let $\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i$.

My question is: can we bound $\mathbb{E}(1/\bar{X}_n)$ as a function of $\sigma$ and $n$?

There seems to be a strategy based on a Taylor expansion that might work, but

- I'm not sure which hypotheses need to be met;
- whether it works in this case; and
- whether we can say something definite about $\bar{X}_n$ itself, or whether we need the central limit theorem and can only make the claim for the normal approximation.

More details about the Taylor expansion. According to this Wikipedia article,

$$\mathbb{E}(f(X)) \approx f(\mu_X) + \frac{f''(\mu_X)}{2}\sigma_X^2$$

So in my case, with $f(x) = 1/x$ (so $f''(1) = 2$), $\mu_{\bar{X}_n} = 1$, and $\operatorname{Var}(\bar{X}_n) = \sigma^2/n$, it would give:

$$\mathbb{E}(1/\bar{X}_n) \approx 1 + \frac{\sigma^2}{n}$$
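To see how this second-order approximation behaves, here is a small numerical sketch. The Gamma family is my own choice for concreteness (it is positive, can be normalized to mean $1$, and the mean $\bar{X}_n$ has a closed-form inverse moment); nothing below is specific to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: X_i ~ Gamma(shape=a, scale=1/a), so E[X_i] = 1
# and Var(X_i) = sigma^2 = 1/a.
a, n = 4.0, 50
sigma2 = 1.0 / a

# For this family the answer is exact: S = X_1 + ... + X_n ~ Gamma(n*a, 1/a),
# and E[1/S] = a / (n*a - 1) for n*a > 1, hence E[1/Xbar] = n*a / (n*a - 1).
exact = n * a / (n * a - 1)

# Second-order Taylor approximation around the mean (f(x) = 1/x, f''(1) = 2):
taylor = 1 + sigma2 / n

# Monte Carlo estimate as a sanity check
xbar = rng.gamma(shape=a, scale=1 / a, size=(200_000, n)).mean(axis=1)
mc = float(np.mean(1.0 / xbar))

print(f"exact={exact:.6f}  taylor={taylor:.6f}  monte-carlo={mc:.6f}")
```

For these parameters the exact value is $200/199 \approx 1.005025$ against the Taylor value $1.005$, so the approximation error is of smaller order than the correction term itself.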

I'm trying to find a formal proof of a similar result, or the hypotheses under which it holds. Maybe references?

Thanks

EDIT: if needed, we can assume the $(X_i)_i$ are discrete: there exist $v_1<\cdots<v_K$ such that $\mathbb{P}(X=v_k)=p_k$ and $\sum_k p_k = 1$. In this case we know that $\bar{X}_n \geq v_1$. Although I believe something can be said in the general case.

PS: this is almost a cross-post of this on Math.SE.


#### Best Answer

You cannot bound that expectation in terms of $\sigma$ and $n$ alone. That's because there is the distinct possibility that the expectation does not exist at all (or is $\infty$). See the question "I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?". If the conditions given there are fulfilled for the density of $X_1$, they will also be fulfilled for the density of $\bar{X}_n$. If densities do not exist but probability mass functions do, things are simpler, since your assumptions prohibit a probability atom at zero; a probability density, however, can still be positive at zero even when $P(X > 0) = 1$.
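A concrete instance of the density-at-zero problem, already at $n=1$: if $X \sim \operatorname{Exp}(1)$ (mean $1$, variance $1$, $P(X>0)=1$, yet the density $e^{-x}$ equals $1$ at the origin), then $\mathbb{E}(1/X) = \int_0^\infty x^{-1}e^{-x}\,dx$ diverges at the lower limit. The numeric sketch below (my own illustration, not from the original answer) shows the partial integral over $[\delta, 1]$ growing like $\log(1/\delta)$:

```python
import numpy as np

# X ~ Exp(1): near zero the integrand e^{-x}/x behaves like 1/x, so the
# integral over [delta, 1] grows like log(1/delta) as delta -> 0.
for delta in (1e-2, 1e-4, 1e-6):
    x = np.geomspace(delta, 1.0, 200_001)   # log-spaced grid for accuracy
    y = np.exp(-x) / x
    partial = np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))  # trapezoid rule
    print(f"delta={delta:.0e}  integral over [delta, 1] = {partial:.3f}")
```

Each factor-of-$100$ reduction of $\delta$ adds roughly $\log 100 \approx 4.6$ to the partial integral, so $\mathbb{E}(1/X) = \infty$.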

For a useful bound you will at least need to restrict the common distribution of $X_1, \dotsc, X_n$ much more.

**EDIT:**

After your new information, and with $v_1 > 0$, the expectation of $1/\bar{X}_n$ certainly exists (irrespective of whether $K$ is finite), since $\bar{X}_n \ge v_1$ gives $1/\bar{X}_n \le 1/v_1$. And, since the function $x \mapsto 1/x$ is convex for $x > 0$, we can use Jensen's inequality to conclude that $\mathbb{E}(1/\bar{X}_n) \ge 1/\mathbb{E}(\bar{X}_n) = 1$.
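Both bounds are easy to check numerically. The support points and probabilities below are made-up illustrative values (chosen so that $v_1 > 0$ and the mean is exactly $1$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discrete example: support v_1 < v_2 < v_3 with v_1 = 0.5 > 0,
# probabilities chosen so that E[X] = 1 (here Var(X) = sigma^2 = 0.125).
v = np.array([0.5, 1.0, 1.5])
p = np.array([0.25, 0.50, 0.25])
n = 20

# Since every X_i >= v_1, Xbar_n >= v_1 and hence 1/Xbar_n <= 1/v_1, so the
# expectation exists; Jensen gives the lower bound E[1/Xbar_n] >= 1.
xbar = rng.choice(v, p=p, size=(100_000, n)).mean(axis=1)
mc = float(np.mean(1.0 / xbar))

print(f"Monte Carlo E[1/Xbar_n] = {mc:.5f}  (bounds: 1 <= . <= {1/v[0]:.1f})")
```

The estimate sits just above the Jensen lower bound of $1$, consistent with the Taylor correction $\sigma^2/n = 0.125/20 = 0.00625$, and well below the crude upper bound $1/v_1 = 2$.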
