# Central Limit Theorem for random vectors under weak dependence

The central limit theorem generalizes to the multivariate case, but this is possible thanks to the i.i.d. hypothesis on the random vectors involved in the sum.

In fact, if you sum a set of i.i.d. random vectors, you obtain normality for each marginal, i.e. for each vector component. Then, given independence, you can easily derive the joint distribution (the product of Gaussian PDFs is a Gaussian PDF). Please correct me if I am wrong.
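The i.i.d. multivariate CLT can be checked numerically. A minimal sketch (all names and parameter choices are illustrative): average many i.i.d. 2-d vectors with independent, deliberately non-Gaussian components, and check that the scaled mean has the covariance the CLT predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500        # vectors per sum
reps = 2000    # independent replications
# i.i.d. 2-d vectors with independent, non-Gaussian components:
# each component is a centered exponential (mean 0, variance 1)
samples = rng.exponential(1.0, size=(reps, n, 2)) - 1.0

# scaled mean: sqrt(n) * (mean of the n vectors), one draw per replication
scaled_means = np.sqrt(n) * samples.mean(axis=1)

# the CLT predicts N_2(0, I): the empirical covariance of the draws
# should be close to the 2x2 identity
emp_cov = np.cov(scaled_means.T)
print(emp_cov)
```

With these sample sizes the empirical covariance matches the identity to within a few percent, even though each summand is strongly skewed.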

Now consider a situation where the central limit theorem still holds for the marginals, but there is some dependence between the components of the vectors included in the sum. Is there a way to check whether this dependence is weak, and to derive the joint distribution in a similar way?

I see that weak dependence mainly concerns stochastic processes, where a time variable is involved. But what about the joint distribution of vector components that all exist at the same time?

I think that if it is known how each vector component is calculated, then there is a way to study the dependence structure and, if it is weak, to derive the limiting distribution for the whole vector. Is this possible?


Look at pages 21-22 here. The construction there is for Markov chains, but essentially the only assumption used is weak dependence. My answer here is also useful.

Here I present the gist of it: say $\{X_t\}_{t \geq 1}$ is a $p$-variate process with weak dependence, where $X_t = (X_t^{(1)}, X_t^{(2)}, \dots, X_t^{(p)})^T$. Now suppose the random mean vector from $n$ "samples" is $$\theta_n = \dfrac{1}{n} \sum_{t=1}^{n} X_t = (\theta_n^{(1)}, \theta_n^{(2)}, \dots, \theta_n^{(p)})^T\,.$$

If a central limit theorem holds for each component, then for $i = 1, \dots, p$ there exists $\sigma^2_i > 0$ such that, as $n \to \infty$, $$\sqrt{n}\,(\theta_n^{(i)} - \theta^{(i)}) \overset{d}{\to} N(0, \sigma^2_i)\,.$$

Here $\theta^{(i)}$ is the true mean of the $i$th component.

Let $(t_1, t_2, \dots, t_p)$ be an arbitrary vector of constants in $\mathbb{R}^p$. Under weak dependence, the univariate sequence $\sum_{i=1}^{p} t_i X_t^{(i)}$ is itself weakly dependent, so it too satisfies a CLT: $$\sum_{i=1}^{p} t_i \sqrt{n}\,(\theta_n^{(i)} - \theta^{(i)}) \overset{d}{\to} N(0, \sigma^2_t)$$ for some $\sigma^2_t > 0$. Since this holds for every linear combination, the Cramér-Wold theorem gives a $p \times p$ positive definite matrix $\Sigma$ such that, as $n \to \infty$, $$\sqrt{n}\,(\theta_n - \theta) \overset{d}{\to} N_p(0, \Sigma)\,.$$
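The joint CLT above can be illustrated with a simulation. A sketch under illustrative assumptions (a bivariate VAR(1) with non-Gaussian innovations, chosen so that both cross-component and temporal dependence are present, and so that the limiting $\Sigma$ has a closed form to compare against):

```python
import numpy as np

rng = np.random.default_rng(1)

# Bivariate VAR(1): X_t = A X_{t-1} + eps_t, with dependence across time
# (through A) and between components (off-diagonal entries of A).
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
n, reps, burn = 1000, 2000, 200

# simulate all replications in parallel: x has one row per replication
x = np.zeros((reps, 2))
total = np.zeros((reps, 2))
for t in range(burn + n):
    # centered uniform innovations, scaled to unit variance (non-Gaussian)
    eps = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, 2))
    x = x @ A.T + eps
    if t >= burn:
        total += x

# one draw of sqrt(n) * (theta_n - theta) per replication (theta = 0 here)
draws = np.sqrt(n) * total / n
emp_cov = np.cov(draws.T)

# long-run covariance of a VAR(1) with identity innovation covariance:
# Sigma = (I - A)^{-1} (I - A^T)^{-1}
I = np.eye(2)
Sigma = np.linalg.inv(I - A) @ np.linalg.inv(I - A.T)
print(emp_cov)
print(Sigma)
```

The empirical covariance of the scaled means tracks the analytic long-run covariance $\Sigma$, not the stationary covariance of $X_t$: the temporal dependence inflates the variance of the mean.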

Here $\Sigma = \lim_{n \to \infty} n \operatorname{Cov}(\theta_n)$. For Markov chains this breaks down nicely and uses the stationarity assumption that $\operatorname{Cov}(X_1, X_{1+k}) = \operatorname{Cov}(X_t, X_{t+k})$ for all $t > 1$. This may not hold for all weakly dependent processes.
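Under that stationarity assumption, $\Sigma$ can be estimated by summing sample autocovariances over lags. A univariate sketch (an AR(1) process, with an illustrative truncation lag `K`; for AR(1) with coefficient $\phi$ and unit innovations the long-run variance is $1/(1-\phi)^2$, which gives something exact to check against):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stationary AR(1): Cov(X_t, X_{t+k}) depends only on the lag k, so the
# long-run variance is gamma(0) + 2 * sum_{k>=1} gamma(k).
phi, n, burn, K = 0.5, 200_000, 1_000, 30

eps = rng.normal(size=burn + n)
x = np.empty(burn + n)
x[0] = eps[0]
for t in range(1, burn + n):
    x[t] = phi * x[t - 1] + eps[t]
x = x[burn:]          # drop burn-in so the series is near-stationary

xc = x - x.mean()
def gamma(k):
    """Sample autocovariance at lag k."""
    return xc[:n - k] @ xc[k:] / n

# truncated autocovariance sum, cut off at lag K
sigma2_hat = gamma(0) + 2 * sum(gamma(k) for k in range(1, K + 1))
sigma2_true = 1.0 / (1.0 - phi) ** 2   # analytic long-run variance = 4
print(sigma2_hat, sigma2_true)
```

Note the estimate targets the long-run variance (4 here), which is three times the stationary variance $1/(1-\phi^2) = 4/3$; ignoring the autocovariance terms would badly understate the uncertainty of the mean.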
