Simulation of a Markov Chain

The following code excerpt is from julia.quantecon.org. I have added an explanation to each line of code.

using Distributions

function mc_sample_path(P; init=1, sample_size=1000)
    # transition matrix, P, should be a square matrix 
    @assert size(P)[1] == size(P)[2] 
    
    # N is the number of states (the number of rows of P)
    N = size(P)[1]
    # dists[i] is the transition distribution out of state i: a Categorical
    # distribution over the next state, with probabilities taken from row i of P
    # (see the short Categorical example after this function)
    dists = [Categorical(P[i, :]) for i in 1:N]

    # allocate X to store the sample path
    X = fill(0, sample_size)
    # the first entry of the path is the initial state
    X[1] = init
    
    # simulate forward: at each step, draw the next state from the transition
    # distribution of the previous state
    for t in 2:sample_size
        dist = dists[X[t-1]]
        X[t] = rand(dist)
    end 
    return X 
end 
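To make the role of Categorical concrete: each dists[i] above is a discrete distribution over the state indices 1 through N, and calling rand on it returns one index drawn with those probabilities. A small standalone sketch (my illustration, not part of the lecture excerpt):

using Distributions

d = Categorical([0.4, 0.6])          # a distribution over the indices 1 and 2
draws = rand(d, 10_000)              # 10,000 independent draws, each a 1 or a 2
count(draws .== 1) / length(draws)   # ≈ 0.4

The code below simulates a path of ten million states from the chain with transition matrix P and reports the fraction of time spent in state 1: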

P = [0.4 0.6; 0.2 0.8]
X = mc_sample_path(P, sample_size = 10_000_000)
μ1 = count(X .== 1) / length(X)

Output:

0.250002

Note that for a long series drawn from this chain, the fraction of the sample taking value 1 is about 0.25 and the fraction taking value 2 is about 0.75. These long-run fractions coincide with the stationary distribution of P.
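As a cross-check on these sample fractions, the stationary distribution (a row vector \(\psi^*\) satisfying \(\psi^* = \psi^* P\), defined formally below) can be computed without simulation as the normalized left eigenvector of P for eigenvalue 1. A minimal sketch using the standard LinearAlgebra library, not part of the lecture excerpt:

using LinearAlgebra

P = [0.4 0.6; 0.2 0.8]

# ψ* = ψ*P means ψ*' is a right eigenvector of P' with eigenvalue 1
vals, vecs = eigen(P')
i = argmin(abs.(vals .- 1))          # index of the eigenvalue closest to 1
ψ_star = real.(vecs[:, i])
ψ_star = ψ_star / sum(ψ_star)        # normalize so the entries sum to 1
# ψ_star ≈ [0.25, 0.75], matching the sample fractions above

The QuantEcon.jl package also provides stationary_distributions(MarkovChain(P)) for the same computation.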

A distribution \(\psi^*\) (written as a row vector) is called stationary for P if $$ \psi^* = \psi^* P. $$

Note that in the example above, $$ \psi^{*} = \begin{bmatrix} 0.25 & 0.75 \end{bmatrix}. $$
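This can be verified with one line of matrix algebra (my addition, not from the lecture):

using LinearAlgebra

P = [0.4 0.6; 0.2 0.8]
ψ_star = [0.25 0.75]       # a 1×2 row vector, so that ψ* P is defined

ψ_star * P ≈ ψ_star        # true: multiplying by P leaves ψ* unchanged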

Reference

https://julia.quantecon.org/introduction_dynamics/finite_markov.html