The central moments of a probability distribution $p(x)$ are defined as:
$$\theta_n = \langle (x - \langle x \rangle)^n \rangle$$
while the non-central (raw) moments are simply:
$$\mu_n = \langle x^n \rangle$$
By the binomial theorem, we have:
$$\theta_n = \sum_{k=0}^n \binom{n}{k} (-1)^{n-k} \mu_k \mu_1^{n-k}$$
which allows us to compute the central moments from the non-central moments. Is there an inverse to this expression, giving the non-central moments $mu_n$ from the central moments $theta_n$?
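As a sanity check, the forward formula can be verified numerically on sample moments. This is a minimal sketch assuming NumPy; the gamma sample is an arbitrary choice of a skewed distribution:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=100_000)  # any skewed sample works

# Non-central (raw) sample moments mu_0 .. mu_4
mu = [np.mean(x**k) for k in range(5)]

for n in range(5):
    # Central moment computed directly from the sample
    theta_direct = np.mean((x - mu[1])**n)
    # Central moment via the binomial formula
    theta_formula = sum(comb(n, k) * (-1)**(n - k) * mu[k] * mu[1]**(n - k)
                        for k in range(n + 1))
    assert np.isclose(theta_direct, theta_formula)
```

Note that for sample moments the identity holds exactly (up to floating-point rounding), since it is an algebraic identity, not an asymptotic one.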
Best Answer
One can write:
$$\mu_n = \langle (x - \langle x \rangle + \langle x \rangle)^n \rangle$$
By the binomial theorem:
$$\mu_n = \sum_{k=0}^n \binom{n}{k} \theta_k \mu_1^{n-k}$$
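The inverse formula can be checked numerically the same way. A minimal sketch assuming NumPy; the exponential sample is an arbitrary choice:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100_000)

mean = x.mean()  # mu_1, the first non-central moment
# Central sample moments theta_0 .. theta_4 (theta_0 = 1, theta_1 ~ 0)
theta = [np.mean((x - mean)**k) for k in range(5)]

for n in range(5):
    # Raw moment computed directly from the sample
    mu_direct = np.mean(x**n)
    # Raw moment recovered from the central moments
    mu_formula = sum(comb(n, k) * theta[k] * mean**(n - k)
                     for k in range(n + 1))
    assert np.isclose(mu_direct, mu_formula)
```

The key observation is that the sum runs over *all* central moments up to order $n$, including $\theta_0 = 1$ and $\theta_1 = 0$, so the $k = 0$ term contributes $\mu_1^n$ and the $k = 1$ term vanishes.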