
komm.MarkovChain

Finite-state homogeneous discrete-time Markov chain. It is defined by a finite set of states $\mathcal{S}$ and a transition matrix $P$ over $\mathcal{S}$. Here, for simplicity, the set of states is taken as $\mathcal{S} = [0 : |\mathcal{S}|)$, where $|\mathcal{S}|$ denotes the cardinality of the set of states. For more details, see YG14, Ch. MCS and GS97, Ch. 11.

Parameters:

  • transition_matrix (ArrayLike)

    The transition matrix $P$ over the set of states $\mathcal{S}$. It must be a $|\mathcal{S}| \times |\mathcal{S}|$-matrix where $P_{i,j}$ is the probability of transitioning from state $i \in \mathcal{S}$ to state $j \in \mathcal{S}$. Its elements must be non-negative and each row must sum to $1$.

Examples:

  1. Consider the finite-state homogeneous discrete-time Markov chain depicted in the figure below.

    [Figure: state transition diagram of a finite-state homogeneous discrete-time Markov chain.]

    Its set of states is $\mathcal{S} = \{ 0, 1, 2 \}$ and its transition matrix is $$ P = \begin{bmatrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{bmatrix}. $$

    >>> chain = komm.MarkovChain([
    ...     [1/2, 1/4, 1/4],
    ...     [1/2,   0, 1/2],
    ...     [1/4, 1/4, 1/2],
    ... ])
    

num_states int property

The number of states in the Markov chain, $|\mathcal{S}|$.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.num_states
3

accessible_states_from() cached

Computes the subset of states that are accessible from a given state. State $j \in \mathcal{S}$ is accessible from state $i \in \mathcal{S}$, denoted by $i \to j$, if there exists $n \geq 0$ such that $(P^n)_{i,j} > 0$.

Parameters:

  • state (int)

    A state $i \in \mathcal{S}$.

Returns:

  • set[int]

    The subset of all states accessible from state $i$.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.accessible_states_from(1)
{0, 1, 2}
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2, 1/2,   0],
...     [  0,   0,   1],
... ])
>>> chain.accessible_states_from(0)
{0, 1}
>>> chain.accessible_states_from(2)
{2}
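
For illustration, the accessibility relation can be reproduced with a plain breadth-first search over the directed graph that has an edge $i \to j$ whenever $P_{i,j} > 0$. The helper below is a standalone sketch, not part of komm's API; note that a state is always accessible from itself, since $n = 0$ gives $(P^0)_{i,i} = 1 > 0$.

```python
from collections import deque

def accessible_from(P, i):
    # BFS over the directed graph with an edge u -> v whenever P[u][v] > 0.
    # The start state is always included, since (P^0)_{i,i} = 1 > 0.
    visited = {i}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in visited:
                visited.add(v)
                queue.append(v)
    return visited

P = [
    [1/2, 1/2,   0],
    [1/2, 1/2,   0],
    [  0,   0,   1],
]
print(accessible_from(P, 0))  # {0, 1}
print(accessible_from(P, 2))  # {2}
```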

communicating_classes() cached

Computes the communicating classes of the Markov chain. A communicating class is a subset of states such that every state in the class is accessible from every other state in the class. In other words, two states $i, j \in \mathcal{S}$ are in the same communicating class if and only if $i \to j$ and $j \to i$. The set of all communicating classes forms a partition of $\mathcal{S}$.

Returns:

  • list[set[int]]

    A list of all communicating classes in the Markov chain.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.communicating_classes()
[{0, 1, 2}]
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2, 1/2,   0],
...     [  0,   0,   1],
... ])
>>> chain.communicating_classes()
[{0, 1}, {2}]
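
The partition into communicating classes follows directly from mutual reachability: $i$ and $j$ share a class if and only if each is reachable from the other. The standalone sketch below makes the definition concrete (komm's actual implementation may differ):

```python
from collections import deque

def reach(P, i):
    # States reachable from i, via BFS over edges with positive probability.
    seen = {i}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def communicating_classes(P):
    # Group i with every j such that i -> j and j -> i.
    n = len(P)
    reachable = [reach(P, i) for i in range(n)]
    classes, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        cls = {j for j in reachable[i] if i in reachable[j]}
        classes.append(cls)
        assigned |= cls
    return classes

P = [
    [1/2, 1/2,   0],
    [1/2, 1/2,   0],
    [  0,   0,   1],
]
print(communicating_classes(P))  # [{0, 1}, {2}]
```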

is_irreducible() cached

Returns whether the Markov chain is irreducible. A Markov chain is irreducible if $i \to j$ for all $i, j \in \mathcal{S}$. Equivalently, a Markov chain is irreducible if it has only one communicating class.

Returns:

  • bool

    True if the Markov chain is irreducible, False otherwise.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.is_irreducible()
True
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2, 1/2,   0],
...     [  0,   0,   1],
... ])
>>> chain.is_irreducible()
False
>>> chain = komm.MarkovChain([
...     [0, 1, 0],
...     [0, 0, 1],
...     [1, 0, 0],
... ])
>>> chain.is_irreducible()
True

stationary_distribution() cached

Computes the stationary distribution of an irreducible Markov chain. The stationary distribution $\pi$ is the unique pmf over $\mathcal{S}$ satisfying $\pi P = \pi$.

Note

This method is only implemented for irreducible chains.

Returns:

  • Array1D[floating]

    The stationary distribution $\pi$ of the Markov chain.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.stationary_distribution()
array([0.4, 0.2, 0.4])
>>> chain = komm.MarkovChain([
...     [0, 1, 0],
...     [0, 0, 1],
...     [1, 0, 0],
... ])
>>> chain.stationary_distribution()
array([0.33333333, 0.33333333, 0.33333333])
>>> chain = komm.MarkovChain([
...     [1, 0, 0],
...     [0, 1, 0],
...     [0, 0, 1],
... ])
>>> chain.stationary_distribution()
Traceback (most recent call last):
...
NotImplementedError: method is only implemented for irreducible chains
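
As a numerical cross-check, $\pi$ can be approximated by power iteration on the lazy chain $(P + I)/2$, which has the same stationary distribution as $P$ and is aperiodic whenever $P$ is irreducible, so the iteration converges even for periodic chains like the cycle above. This is only an illustrative sketch, not komm's algorithm:

```python
def stationary(P, iters=1000):
    # Power iteration on the lazy chain (P + I)/2: it shares pi with P
    # and converges even when P itself is periodic.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [0.5 * pi[j] + 0.5 * sum(pi[i] * P[i][j] for i in range(n))
              for j in range(n)]
    return pi

P = [
    [1/2, 1/4, 1/4],
    [1/2,   0, 1/2],
    [1/4, 1/4, 1/2],
]
pi = stationary(P)
print([round(x, 6) for x in pi])  # [0.4, 0.2, 0.4]
```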

transient_states() cached

Returns the subset $\mathcal{T} \subseteq \mathcal{S}$ of transient states of the Markov chain. A state $i \in \mathcal{S}$ is transient if there exists a state $j \in \mathcal{S}$ such that $i \to j$ but $j \not\to i$.

Returns:

  • set[int]

    The subset $\mathcal{T}$ of transient states in the Markov chain.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.transient_states()
set()
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2,   0, 1/2],
...     [  0,   0,   1],
... ])
>>> chain.transient_states()
{0, 1}

recurrent_states() cached

Returns the subset $\mathcal{R} \subseteq \mathcal{S}$ of recurrent states of the Markov chain. A state $i \in \mathcal{S}$ is recurrent if it is not transient.

Returns:

  • set[int]

    The subset $\mathcal{R}$ of recurrent states in the Markov chain.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.recurrent_states()
{0, 1, 2}
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2,   0, 1/2],
...     [  0,   0,   1],
... ])
>>> chain.recurrent_states()
{2}

is_regular() cached

Returns whether the Markov chain is regular. A Markov chain is regular if there exists a positive integer $n$ such that all entries of $P^n$ are positive. A finite-state Markov chain is regular if and only if it is irreducible and aperiodic.

Returns:

  • bool

    True if the Markov chain is regular, False otherwise.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.is_regular()
True
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2, 1/2,   0],
...     [  0,   0,   1],
... ])
>>> chain.is_regular()
False
>>> chain = komm.MarkovChain([
...     [0, 1, 0],
...     [0, 0, 1],
...     [1, 0, 0],
... ])
>>> chain.is_regular()
False

index_of_primitivity()

Computes the index of primitivity of a regular Markov chain. The index of primitivity is the smallest positive integer $n$ such that all entries of $P^n$ are positive.

Notes
  • This method only applies to regular Markov chains.

Returns:

  • int

    The index of primitivity of the Markov chain.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.index_of_primitivity()
2
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2, 1/2,   0],
...     [  0,   0,   1],
... ])
>>> chain.index_of_primitivity()
Traceback (most recent call last):
...
ValueError: chain is not regular
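
A direct way to compute this quantity is to take successive powers of $P$ until every entry is positive, stopping at Wielandt's bound $(|\mathcal{S}| - 1)^2 + 1$, beyond which a regular chain must already have an all-positive power. The sketch below returns `None` for non-regular chains instead of raising, unlike komm:

```python
def index_of_primitivity(P):
    # Smallest n with all entries of P^n positive; Wielandt's bound
    # (|S| - 1)^2 + 1 caps the search, so None means "not regular".
    size = len(P)
    bound = (size - 1) ** 2 + 1
    Q = [row[:] for row in P]  # Q holds P^n across iterations
    for n in range(1, bound + 1):
        if all(x > 0 for row in Q for x in row):
            return n
        Q = [[sum(Q[i][k] * P[k][j] for k in range(size))
              for j in range(size)] for i in range(size)]
    return None

regular = [
    [1/2, 1/4, 1/4],
    [1/2,   0, 1/2],
    [1/4, 1/4, 1/2],
]
reducible = [
    [1/2, 1/2,   0],
    [1/2, 1/2,   0],
    [  0,   0,   1],
]
print(index_of_primitivity(regular))    # 2
print(index_of_primitivity(reducible))  # None
```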

absorbing_states() cached

Returns the subset $\mathcal{A} \subseteq \mathcal{S}$ of absorbing states of the Markov chain. A state $i \in \mathcal{S}$ is absorbing if $P_{i,i} = 1$.

Returns:

  • set[int]

    The subset $\mathcal{A}$ of absorbing states in the Markov chain.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.absorbing_states()
set()
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2,   0, 1/2],
...     [  0,   0,   1],
... ])
>>> chain.absorbing_states()
{2}
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2, 1/2,   0],
...     [  0,   0,   1],
... ])
>>> chain.absorbing_states()
{2}

is_absorbing() cached

Returns whether the Markov chain is absorbing. A Markov chain is absorbing if it has at least one absorbing state and, from every state, it is possible to reach an absorbing state (not necessarily in one step).

Returns:

  • bool

    True if the Markov chain is absorbing, False otherwise.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.is_absorbing()
False
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2,   0, 1/2],
...     [  0,   0,   1],
... ])
>>> chain.is_absorbing()
True
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2, 1/2,   0],
...     [  0,   0,   1],
... ])
>>> chain.is_absorbing()
False

mean_number_of_visits() cached

Computes the mean number of visits from each transient state to each transient state. This is the expected number of times the chain, starting from transient state $i \in \mathcal{T}$, visits transient state $j \in \mathcal{T}$ before being absorbed.

Notes
  • This method only applies to absorbing Markov chains.
  • This corresponds to the fundamental matrix $N$ of the Markov chain.

Returns:

  • Array2D[floating]

    A $|\mathcal{T}| \times |\mathcal{T}|$-matrix $N$ where entry $N_{i,j}$ is the mean number of visits from $i \in \mathcal{T}$ to $j \in \mathcal{T}$.

Examples:

>>> chain = komm.MarkovChain([
...     [  1,   0,   0,   0],
...     [1/2,   0, 1/2,   0],
...     [  0, 1/2,   0, 1/2],
...     [  0,   0,   0,   1],
... ])
>>> chain.mean_number_of_visits()
array([[1.33333333, 0.66666667],
       [0.66666667, 1.33333333]])
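
The values above match the textbook construction $N = (I - Q)^{-1}$, where $Q$ is the block of $P$ restricted to the transient states $\mathcal{T} = \{1, 2\}$. For this example $Q$ is $2 \times 2$, so the closed-form inverse suffices; the snippet below is an independent check, not komm's code:

```python
def fundamental_matrix_2x2(Q):
    # N = (I - Q)^{-1} via the closed-form inverse of a 2x2 matrix.
    a, b = 1 - Q[0][0], -Q[0][1]
    c, d = -Q[1][0], 1 - Q[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Transitions among the transient states {1, 2} of the 4-state example.
Q = [[  0, 1/2],
     [1/2,   0]]
N = fundamental_matrix_2x2(Q)
print([[round(x, 8) for x in row] for row in N])
# [[1.33333333, 0.66666667], [0.66666667, 1.33333333]]
```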

mean_time_to_absorption() cached

Computes the mean time to absorption from each transient state. This is the expected number of steps until the chain, starting from transient state $i \in \mathcal{T}$, is absorbed.

Notes
  • This method only applies to absorbing Markov chains.
  • This is obtained by adding all the entries in the $i$-th row of the fundamental matrix $N$.

Returns:

  • Array1D[floating]

    A $|\mathcal{T}|$-vector $\mathbf{t}$ where $\mathbf{t}_i$ is the mean time to absorption from $i \in \mathcal{T}$.

Examples:

>>> chain = komm.MarkovChain([
...     [  1,   0,   0,   0],
...     [1/2,   0, 1/2,   0],
...     [  0, 1/2,   0, 1/2],
...     [  0,   0,   0,   1],
... ])
>>> chain.mean_time_to_absorption()
array([2., 2.])
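
Equivalently, $\mathbf{t} = N \mathbf{1}$. Using the fundamental matrix of the example above (hardcoded here so the snippet is self-contained):

```python
# Fundamental matrix of the 4-state example (see mean_number_of_visits).
N = [[4/3, 2/3],
     [2/3, 4/3]]
# Mean time to absorption: sum each row of N.
t = [sum(row) for row in N]
print([round(x, 9) for x in t])  # [2.0, 2.0]
```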

absorption_probabilities() cached

Computes the absorption probabilities from each transient state to each absorbing state. This is the probability that the chain, starting from transient state $i \in \mathcal{T}$, is absorbed in absorbing state $j \in \mathcal{A}$.

Notes
  • This method only applies to absorbing Markov chains.
  • This corresponds to the matrix $B = N R$, where $N$ is the fundamental matrix and $R$ is the submatrix of $P$ row-indexed by $\mathcal{T}$ and column-indexed by $\mathcal{A}$.

Returns:

  • Array2D[floating]

    A $|\mathcal{T}| \times |\mathcal{A}|$-matrix $B$ where $B_{i,j}$ is the absorption probability from $i \in \mathcal{T}$ to $j \in \mathcal{A}$.

Examples:

>>> chain = komm.MarkovChain([
...     [  1,   0,   0,   0],
...     [1/2,   0, 1/2,   0],
...     [  0, 1/2,   0, 1/2],
...     [  0,   0,   0,   1],
... ])
>>> chain.absorption_probabilities()
array([[0.66666667, 0.33333333],
       [0.33333333, 0.66666667]])
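
For the example above, $\mathcal{T} = \{1, 2\}$ and $\mathcal{A} = \{0, 3\}$, giving $R = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/2 \end{bmatrix}$. The product $B = N R$ can then be checked by hand (values hardcoded so the snippet is self-contained):

```python
# Fundamental matrix N and transient-to-absorbing block R of the
# 4-state example above.
N = [[4/3, 2/3],
     [2/3, 4/3]]
R = [[1/2,   0],
     [  0, 1/2]]
# B = N @ R, written out as explicit sums.
B = [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
print([[round(x, 8) for x in row] for row in B])
# [[0.66666667, 0.33333333], [0.33333333, 0.66666667]]
```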

period() cached

Computes the period of a given state. The period of a state $i \in \mathcal{S}$ is the largest integer $d$ such that $(P^n)_{i,i} = 0$ whenever $n$ is not divisible by $d$.

Parameters:

  • state (int)

    A state $i \in \mathcal{S}$.

Returns:

  • int

    The period of the state $i$.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.period(1)
1
>>> chain = komm.MarkovChain([
...     [0, 1, 0],
...     [0, 0, 1],
...     [1, 0, 0],
... ])
>>> chain.period(0)
3
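
The period can be cross-checked as the gcd of the return times observed among the first few powers of $P$; for small chains like these, powers up to $|\mathcal{S}|^2$ are enough to pin down the gcd. The following is a standalone sketch, not komm's implementation:

```python
from math import gcd

def period(P, i):
    # gcd of the return times k <= |S|^2 with (P^k)_{i,i} > 0.
    n = len(P)
    d = 0
    Q = [row[:] for row in P]  # Q holds P^k across iterations
    for k in range(1, n * n + 1):
        if Q[i][i] > 0:
            d = gcd(d, k)
        Q = [[sum(Q[r][m] * P[m][c] for m in range(n))
              for c in range(n)] for r in range(n)]
    return d

aperiodic = [
    [1/2, 1/4, 1/4],
    [1/2,   0, 1/2],
    [1/4, 1/4, 1/2],
]
cycle = [
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
]
print(period(aperiodic, 1))  # 1
print(period(cycle, 0))      # 3
```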

is_aperiodic() cached

Returns whether the Markov chain is aperiodic. A Markov chain is aperiodic if all states have period $1$.

Returns:

  • bool

    True if the Markov chain is aperiodic, False otherwise.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.is_aperiodic()
True
>>> chain = komm.MarkovChain([
...     [1/2, 1/2,   0],
...     [1/2, 1/2,   0],
...     [  0,   0,   1],
... ])
>>> chain.is_aperiodic()
True
>>> chain = komm.MarkovChain([
...     [0, 1, 0],
...     [0, 0, 1],
...     [1, 0, 0],
... ])
>>> chain.is_aperiodic()
False

simulate()

Returns random samples from the Markov chain.

Parameters:

  • steps (int)

    The number of steps to simulate.

  • initial_state (int)

    The state $i \in \mathcal{S}$ from which to start the simulation.

Returns:

  • output (Array1D[integer])

    A 1D-array of integers in $\mathcal{S}$ representing the states visited during the simulation. The length of the array is equal to steps.

Examples:

>>> chain = komm.MarkovChain([
...     [1/2, 1/4, 1/4],
...     [1/2,   0, 1/2],
...     [1/4, 1/4, 1/2],
... ])
>>> chain.simulate(initial_state=1, steps=10)
array([1, 2, 1, 2, 2, 0, 2, 2, 2, 0])
>>> chain = komm.MarkovChain([
...     [0, 1, 0],
...     [0, 0, 1],
...     [1, 0, 0],
... ])
>>> chain.simulate(initial_state=2, steps=10)
array([2, 0, 1, 2, 0, 1, 2, 0, 1, 2])
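
As a sanity check on long simulations: for an irreducible, aperiodic chain, the empirical visit frequencies approach the stationary distribution (here $[0.4, 0.2, 0.4]$). The pure-Python loop below mimics the sampling step but is not komm's implementation:

```python
import random

def simulate(P, initial_state, steps, rng):
    # Record the current state, then draw the next one from row P[x].
    states = list(range(len(P)))
    x = initial_state
    out = []
    for _ in range(steps):
        out.append(x)
        x = rng.choices(states, weights=P[x])[0]
    return out

P = [
    [1/2, 1/4, 1/4],
    [1/2,   0, 1/2],
    [1/4, 1/4, 1/2],
]
rng = random.Random(42)
samples = simulate(P, initial_state=0, steps=100_000, rng=rng)
freqs = [samples.count(s) / len(samples) for s in range(3)]
print([round(f, 2) for f in freqs])  # close to [0.4, 0.2, 0.4]
```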

simulate_until_absorption()

Returns random samples from the Markov chain until an absorbing state is reached.

Note

This method only applies to absorbing Markov chains.

Parameters:

  • initial_state (int)

    The state $i \in \mathcal{S}$ from which to start the simulation.

Returns:

  • output (Array1D[integer])

    A 1D-array of integers in $\mathcal{S}$ representing the states visited during the simulation.

Examples:

>>> chain = komm.MarkovChain([
...     [  1,   0,   0,   0],
...     [1/2,   0, 1/2,   0],
...     [  0, 1/2,   0, 1/2],
...     [  0,   0,   0,   1],
... ])
>>> chain.simulate_until_absorption(initial_state=1)
array([1, 2, 1, 2, 3])