I am confused about the formal definition of independence and how it differs from the law of total expectation.

I thought that $E[u|x]=E[u]$ formally means independence of $u$ and $x$.

However, when I have $E[u|x]=a$ (a constant) and apply the law of total expectation,

$E[u]=E[E[u|x]]=a$

I obtain $E[u]=a$

Don't I then have $a=E[u|x]$ and $a=E[u]$, which implies $E[u|x]=E[u]$ and hence independence between $x$ and $u$?

Something does not add up, as this would mean that we can always establish independence between two variables. Where is my error in thinking?


#### Best Answer

It's impossible to read minds, so to locate the error in your thinking, let's work through as simple an example as possible using the most basic definitions and axioms of probability.

Consider a sample space $\Omega = \{au, bu, cu, av, bv, cv\}$ ("$au$" etc. are just names of six abstract things) where all subsets are considered measurable. Define a probability $\mathbb{P}$ on $\Omega$ in terms of its values on the atoms via

$$\mathbb{P}(au) = \mathbb{P}(cu) = p,\quad \mathbb{P}(bu) = r-2p;\qquad \mathbb{P}(av) = \mathbb{P}(cv) = q,\quad \mathbb{P}(bv) = 1-r-2q$$

where $p,q,r$ are any numbers for which all six probabilities are positive. For instance, we may take $p=1/6$, $q=1/8$, and $r=1/2$. Because the sum of the six given probabilities is unity, this defines a valid probability measure.
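As a quick sanity check, the measure can be verified numerically. This is just a sketch using the example values $p=1/6$, $q=1/8$, $r=1/2$ and exact arithmetic via Python's `fractions` module:

```python
from fractions import Fraction

# Example values from the text: p = 1/6, q = 1/8, r = 1/2.
p, q, r = Fraction(1, 6), Fraction(1, 8), Fraction(1, 2)

# Probabilities of the six atoms of the sample space Omega.
P = {
    "au": p, "bu": r - 2 * p, "cu": p,
    "av": q, "bv": 1 - r - 2 * q, "cv": q,
}

assert all(prob > 0 for prob in P.values())  # all six probabilities are positive
assert sum(P.values()) == 1                  # they sum to unity
```

With these values the atom probabilities are $1/6, 1/6, 1/6$ and $1/8, 1/4, 1/8$.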

Define the random variables $U$ and $X$ as

$$U(\omega) = -1,\ 0,\ 1$$

depending on whether the initial letter in the name of $omega$ is $a$, $b$, or $c$ respectively; and

$$X(\omega) = 0,\ 1$$

depending on whether the final letter in the name of $omega$ is $u$ or $v$ respectively.

This can be neatly summarized in a $3$ by $2$ table of probabilities, headed by values of $U$ and $X$, whose interpretation I trust is evident:

$$\begin{array}{r|cc} & X=0 & X=1 \\ \hline U=-1 & p & q \\ U=0 & r-2p & 1-r-2q \\ U=1 & p & q \end{array}$$

It is then easy to compute the following:

$\mathbb{P}(X=0) = \mathbb{P}(\{au,bu,cu\}) = p + (r-2p) + p = r$ (sum the left column in the table).

$\mathbb{P}(U=-1) = \mathbb{P}(\{au,av\}) = p+q$ (sum the top row in the table).
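Both marginals can be recovered mechanically from the atom probabilities. A sketch, again assuming the example values $p=1/6$, $q=1/8$, $r=1/2$:

```python
from fractions import Fraction

p, q, r = Fraction(1, 6), Fraction(1, 8), Fraction(1, 2)
P = {"au": p, "bu": r - 2*p, "cu": p, "av": q, "bv": 1 - r - 2*q, "cv": q}

# U is determined by the first letter of the atom's name, X by the last letter.
U = {"a": -1, "b": 0, "c": 1}
X = {"u": 0, "v": 1}

# Marginal P(X = 0): sum over atoms whose name ends in "u".
P_X0 = sum(pr for atom, pr in P.items() if X[atom[1]] == 0)
# Marginal P(U = -1): sum over atoms whose name starts with "a".
P_Um1 = sum(pr for atom, pr in P.items() if U[atom[0]] == -1)

assert P_X0 == r        # equals r = 1/2
assert P_Um1 == p + q   # equals p + q = 7/24
```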

**Independence means nothing other than probabilities multiply.** This would imply, among other things, that

$$p = \mathbb{P}(au) = \mathbb{P}(X=0, U=-1) = \mathbb{P}(X=0)\,\mathbb{P}(U=-1) = r(p+q)$$

(investigate the top left entry in the table). But this is rarely the case; for instance, $1/6 \ne (1/2)(1/6 + 1/8)$. Therefore **$X$ and $U$ are not independent** (except for some special combinations of $p$, $q$, and $r$). However,
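The failure of the product rule is easy to check with exact arithmetic; a sketch using the same example values:

```python
from fractions import Fraction

p, q, r = Fraction(1, 6), Fraction(1, 8), Fraction(1, 2)

# Joint probability of the top-left cell versus the product of the marginals.
joint = p              # P(X=0, U=-1) = P({au})
product = r * (p + q)  # P(X=0) * P(U=-1)

assert joint == Fraction(1, 6)      # = 8/48
assert product == Fraction(7, 48)
assert joint != product             # probabilities do not multiply: not independent
```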

$$\mathbb{E}(U) = \mathbb{P}(U=-1)(-1) + \mathbb{P}(U=0)(0) + \mathbb{P}(U=1)(1) = 0.$$

That is, the expectation of $U$ is zero. (This should be obvious from the symmetry: the chance that $U=1$ balances the chance that $U=-1$, regardless of the value of $X$.) Moreover,

$$\mathbb{E}(U|X=0) = \mathbb{P}(U=-1\mid X=0)(-1) + \cdots + \mathbb{P}(U=1\mid X=0)(1) = \frac{-p + p}{r} = 0,$$

and similarly $$\mathbb{E}(U|X=1) = 0.$$ That is, the conditional expectation of $U$, which is itself a *function* of $X$, has the constant value zero. In symbols, we have found that

$$\mathbb{E}(U|X) = 0 = \mathbb{E}(U),$$

and we have a simple, explicit example showing why **constant conditional expectation does not imply independence.**
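Putting the pieces together, the whole counterexample fits in a few lines; a sketch, again assuming the example values of $p$, $q$, $r$:

```python
from fractions import Fraction

p, q, r = Fraction(1, 6), Fraction(1, 8), Fraction(1, 2)
P = {"au": p, "bu": r - 2*p, "cu": p, "av": q, "bv": 1 - r - 2*q, "cv": q}
U = {"a": -1, "b": 0, "c": 1}  # value of U from the atom's first letter
X = {"u": 0, "v": 1}           # value of X from the atom's last letter

def cond_exp_U(x):
    """E(U | X = x): sum of U-weighted joint probabilities, renormalized by P(X = x)."""
    atoms = [a for a in P if X[a[1]] == x]
    px = sum(P[a] for a in atoms)
    return sum(P[a] * U[a[0]] for a in atoms) / px

E_U = sum(P[a] * U[a[0]] for a in P)  # unconditional expectation

assert cond_exp_U(0) == 0
assert cond_exp_U(1) == 0
assert E_U == 0  # E(U|X) = E(U) = 0, even though X and U are not independent
```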
