I am using the `markovchain` package in R.

My transition matrix looks like this:

```
> transition_matrix
     Arriving Playing.on.Phone Paying.Attention Writing.Notes Listening Kicked.Out
[1,]        0              0.5             0.50           0.0         0       0.00
[2,]        0              0.0             0.99           0.0         0       0.01
[3,]        0              0.8             0.00           0.2         0       0.00
[4,]        0              0.0             0.00           0.0         1       0.00
[5,]        0              0.0             0.00           1.0         0       0.00
[6,]        0              0.0             0.00           0.0         0       1.00
```

Now I build a `markovchain` object:

```
mcstates <- new("markovchain", states = colnames(transition_matrix),
                transitionMatrix = transition_matrix, name = "state")
```

I set the initial distribution as:

```
init <- c(1, 0, 0, 0, 0, 0)
```

After 10 steps

```
> init * (mcstates ^ 10)
     Arriving Playing.on.Phone Paying.Attention Writing.Notes Listening Kicked.Out
[1,]        0        0.1573841        0.1947628     0.3309517 0.2886897 0.02821181
```

After 100 steps

```
> init * (mcstates ^ 100)
     Arriving Playing.on.Phone Paying.Attention Writing.Notes Listening Kicked.Out
[1,]        0     4.361078e-06     5.396834e-06     0.4807651 0.4759563 0.04326881
```

After 1000 steps

```
> init * (mcstates ^ 1000)
     Arriving Playing.on.Phone Paying.Attention Writing.Notes Listening Kicked.Out
[1,]        0     1.163927e-51     1.440359e-51     0.4807692 0.4759615 0.04326923
```

The distribution no longer changes between 100 and 1000 steps, so repeated multiplication appears to have converged.
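For reference, the same power iteration can be reproduced in base R without the package. This is a minimal sketch, assuming the transition matrix shown above; `init %*% P^n` is the matrix arithmetic that `init * (mcstates ^ n)` performs:

```r
# Base-R sketch of the power iteration, assuming the transition matrix
# from the question.
states <- c("Arriving", "Playing.on.Phone", "Paying.Attention",
            "Writing.Notes", "Listening", "Kicked.Out")
P <- matrix(c(0, 0.5, 0.50, 0.0, 0, 0.00,
              0, 0.0, 0.99, 0.0, 0, 0.01,
              0, 0.8, 0.00, 0.2, 0, 0.00,
              0, 0.0, 0.00, 0.0, 1, 0.00,
              0, 0.0, 0.00, 1.0, 0, 0.00,
              0, 0.0, 0.00, 0.0, 0, 1.00),
            nrow = 6, byrow = TRUE, dimnames = list(states, states))
init <- c(1, 0, 0, 0, 0, 0)

dist <- init
for (i in 1:1000) dist <- dist %*% P   # repeated right-multiplication by P
round(dist, 7)
# Writing.Notes ~ 0.4807692, Listening ~ 0.4759615, Kicked.Out ~ 0.0432692
```

Note that `Writing.Notes` and `Listening` form a deterministic two-state cycle (each maps to the other with probability 1), so the split between them depends on the parity of the step count; even powers settle on the values above, while odd powers swap the two.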

However, when I try to calculate the steady state:

```
> steadyStates(mcstates)
     Arriving Playing.on.Phone Paying.Attention Writing.Notes Listening  Kicked.Out
[1,]        0     8.211848e-16     1.055809e-15     0.5170262 0.5170262 -0.03405231
[2,]        0     0.000000e+00     0.000000e+00     0.0000000 0.0000000  1.00000000
```

I have two questions:

How is this steady state different from the stationary distribution I reach by repeatedly multiplying by the transition matrix?

Why is there a negative probability in the steady-state solution?

Any insight on this would be greatly appreciated.


#### Best Answer

I write this post as the author of the markovchain package. In August 2016 I merged a fix to the package that should close this issue. Basically, the transition matrix above is composed of more than one closed class, and numerical issues can arise when solving the eigenvalue problem in that situation. So we decided to find the steady-state distribution within each closed class and then merge the results together. HTH
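The issue the answer describes can be sketched in base R (assuming the transition matrix from the question). Eigenvalue 1 of `t(P)` has multiplicity two because there are two closed classes, {Writing.Notes, Listening} and {Kicked.Out}, so a generic eigen-solver may return an arbitrary basis of that two-dimensional eigenspace, including vectors with negative entries after normalisation, which explains the `-0.034` in the output. Solving within each closed class separately, as the fix does, sidesteps the problem:

```r
# Base-R sketch, assuming the transition matrix from the question.
states <- c("Arriving", "Playing.on.Phone", "Paying.Attention",
            "Writing.Notes", "Listening", "Kicked.Out")
P <- matrix(c(0, 0.5, 0.50, 0.0, 0, 0.00,
              0, 0.0, 0.99, 0.0, 0, 0.01,
              0, 0.8, 0.00, 0.2, 0, 0.00,
              0, 0.0, 0.00, 0.0, 1, 0.00,
              0, 0.0, 0.00, 1.0, 0, 0.00,
              0, 0.0, 0.00, 0.0, 0, 1.00),
            nrow = 6, byrow = TRUE, dimnames = list(states, states))

# Eigenvalue 1 appears twice: one stationary distribution per closed class,
# so the eigenspace is two-dimensional and its basis is not unique.
sum(abs(eigen(t(P))$values - 1) < 1e-8)

# Solve pi %*% Psub = pi restricted to one closed class at a time; within a
# single closed class the stationary distribution is unique and non-negative.
steady_in_class <- function(P, cls) {
  Psub <- P[cls, cls, drop = FALSE]
  e <- eigen(t(Psub))
  v <- Re(e$vectors[, which.min(abs(e$values - 1))])
  v / sum(v)
}
steady_in_class(P, c("Writing.Notes", "Listening"))  # 0.5 0.5
steady_in_class(P, "Kicked.Out")                     # 1
```

Padded with zeros on the remaining states, these per-class solutions, (0.5, 0.5) on {Writing.Notes, Listening} and 1 on {Kicked.Out}, are the two rows a corrected `steadyStates()` should return.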
