Let $\left\{X_{n}\right\}_{n\geq 0}$ be a homogeneous Markov chain with state space $E$ and transition matrix $P$. Let $\tau$ be the first time $n$ for which $X_{n}\neq X_{0}$, where $\tau=+\infty$ if $X_{n}=X_{0}$ for all $n\geq 0$. We need to compute $E[\tau \mid X_{0}=i]$ in terms of $p_{ii}$.
My solution
For any $l>0$, suppose that $l$ is the first time for which $X_{n}\neq X_{0}$; that is, for every $m$ with $1\leq m<l$ we have $X_{m}=X_{0}$, but $X_{l}\neq X_{0}$. The probability of this happening is the product of the probabilities $P_{ii}^{m}$ over all such $m$, multiplied by $(1-P_{ii}^{l})$. This product is then multiplied by the value of $l$ and summed over all possible values of $l$, i.e. from $1$ to $\infty$, to find the expectation.
The expected value therefore is $(1-P_{ii}^{1})+2(1-P^{2}_{ii})P_{ii}^{1}+3(1-P_{ii}^{3})P_{ii}^{2}P_{ii}^{1}+\dots=\sum_{j=1}^{\infty}\left(j\,(1-P^{j}_{ii})\prod_{k=0}^{j-1}P^{k}_{ii}\right)$, where $P^{0}_{ii}=1$.
Is this correct?
Best Answer
You are thinking along the right lines but it seems your notation is obscuring things (see bottom of this answer).
The event $\tau = k$ means we remain in state $X_0$ for $k-1$ steps (which has probability $p_{ii}^{k-1}$) and then jump to another state on step $k$ (which has probability $1-p_{ii}$). Consequently, $$ \mathbb{P}(\tau = k) = p_{ii}^{k-1}(1-p_{ii}). $$
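To make this concrete, here is a small simulation sketch (my own addition; the 3-state transition matrix and the value $p_{ii}=0.7$ are arbitrary illustrative choices) that estimates $\mathbb{P}(\tau=k)$ empirically and compares it with $p_{ii}^{k-1}(1-p_{ii})$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain; state 0 plays the role of i, so p_ii = 0.7.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.5, 0.25, 0.25]])
i = 0
p_ii = P[i, i]

def sample_tau(start, n_max=10_000):
    """Return the first n >= 1 with X_n != X_0 (capped at n_max)."""
    state = start
    for n in range(1, n_max + 1):
        state = rng.choice(len(P), p=P[state])
        if state != start:
            return n
    return n_max  # stands in for "infinity" in this sketch

taus = np.array([sample_tau(i) for _ in range(100_000)])
for k in range(1, 6):
    print(k, (taus == k).mean(), p_ii ** (k - 1) * (1 - p_ii))
```

With 100,000 samples the empirical frequencies agree with the geometric pmf to about two decimal places.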
This "waiting time for an event to happen (in a time-homogeneous setting)" distribution is called the geometric distribution.
The expectation can be evaluated with a "differentiate under the integral" trick:
$$ \mathbb{E}[\tau] = \sum_{k\geq 1} k\, p_{ii}^{k-1}(1-p_{ii}) \\ = (1-p_{ii})\sum_{k\geq 1} \frac{d}{dp} \left. p^k \right|_{p=p_{ii}} \\ = (1-p_{ii})\frac{d}{dp}\left. \sum_{k\geq 1} p^k \right|_{p=p_{ii}} \\ = (1-p_{ii})\cdot\frac{1}{(1-p_{ii})^2} \\ = \frac{1}{1-p_{ii}}, $$ where exchanging the derivative and the infinite sum is fine because all terms are non-negative, and the last two steps use the geometric series $\sum_{k\geq 1} p^k = \frac{p}{1-p}$ (for $|p|<1$), whose derivative is $\frac{1}{(1-p)^2}$.
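As a quick sanity check on the closed form (again my own numerical sketch, with an arbitrary $p_{ii}=0.7$), truncating the series reproduces $1/(1-p_{ii})$:

```python
import numpy as np

p_ii = 0.7                     # arbitrary illustrative value
k = np.arange(1, 1000)         # truncation of the infinite sum
series = np.sum(k * p_ii ** (k - 1) * (1 - p_ii))
print(series, 1 / (1 - p_ii))  # both are approximately 3.3333
```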
- Usually "homogeneous" means time homogeneous for Markov chains.
- With the usual notation for transition matrices $P = (p_{ij})_{i,j \in \mathcal{S}}$, $P^k$ is the $k$-step transition probability matrix (just the matrix power). You appear to use $P^k$ for the transition probability at time step $k$ (which is different to the usual convention).
With this understanding, taking $P^j_{ii}=p_{ii}$ (in your notation) makes your attempt look right.
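For concreteness, coding your expression literally under that reading (every $P^{k}_{ii}$ with $k\geq 1$ set to one constant $p_{ii}$, and $P^{0}_{ii}=1$; the value $0.7$ is again an arbitrary choice) gives the same answer as above:

```python
import math

p_ii = 0.7  # arbitrary illustrative value

def P_step(k):
    """Per-step quantity in the question's notation: P^0_ii = 1, else p_ii."""
    return 1.0 if k == 0 else p_ii

# sum_{j>=1} j * (1 - P^j_ii) * prod_{k=0}^{j-1} P^k_ii, truncated at j = 999
total = sum(j * (1 - P_step(j)) * math.prod(P_step(k) for k in range(j))
            for j in range(1, 1000))
print(total, 1 / (1 - p_ii))  # both are approximately 3.3333
```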