Def A Markov chain is said to admit a limiting probability distribution if the following conditions are satisfied:
1. the limits
$$\pi_j = \lim_{n \to \infty} p_{ij}^{(n)}$$
exist for all states $j \in S$ and are independent of the starting state $i$, and
2. $\sum_{j \in S} \pi_j = 1$.
Remark Condition 2 is always satisfied in the case of a finite state space $S$, provided that the limits exist (condition 1), since we can exchange the limit and the (finite) summation:
$$\sum_{j \in S} \pi_j = \sum_{j \in S} \lim_{n \to \infty} p_{ij}^{(n)} = \lim_{n \to \infty} \sum_{j \in S} p_{ij}^{(n)} = 1.$$
For intuition, think of a finite Markov chain with its transition matrix $P$ being an $N \times N$ stochastic matrix. What happens if we start from any probability vector $\mu$ and reapply the right matrix multiplication $\mu \mapsto \mu P$? In the limit we expect $\mu P^n$ to lie in the eigenspace corresponding to the eigenvalue of greatest absolute value. But using the Gershgorin circle theorem it is easy to see that the spectrum of a stochastic matrix is contained in $\{|\lambda| \le 1\}$, and the eigenvalue $1$ always exists, since in any stochastic matrix the vector $\mathbf{1} = (1, \dots, 1)^T$ satisfies $P\mathbf{1} = \mathbf{1}$ (every row sums to one).
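A minimal numerical sketch of this intuition; the $3 \times 3$ matrix below is an arbitrary toy example chosen for illustration, not one from these notes:

```python
import numpy as np

# Toy stochastic matrix (assumption for illustration): rows sum to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Spectral check: every eigenvalue lies in the closed unit disk,
# and 1 is an eigenvalue because P @ ones = ones.
eigvals = np.linalg.eigvals(P)
print("spectrum:", np.round(eigvals, 4), " max |lambda| =", np.abs(eigvals).max())

# Start from an arbitrary probability (row) vector and reapply mu <- mu P.
mu = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    mu = mu @ P
print("mu P^50 ≈", np.round(mu, 6))   # converges to the limiting distribution
```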
Then it’s natural to define the following:
Def (Stationary measure) We say that a measure $\pi$ is an invariant measure of the Markov chain with transition matrix $P$ if it solves the left eigenvector equation
$$\pi = \pi P, \qquad \text{i.e.} \qquad \pi_j = \sum_{i \in S} \pi_i p_{ij} \quad \text{for all } j \in S.$$
If moreover $\sum_{i \in S} \pi_i = 1$, it is called an invariant probability distribution.
We now show that if a finite Markov chain admits a limiting probability distribution (even if it depends on the starting state $i$), then the limiting distribution is also stationary.
Theorem Let $S$ be finite. Suppose that for some $i \in S$ the following limits exist:
$$\pi_j = \lim_{n \to \infty} p_{ij}^{(n)} \qquad \text{for all } j \in S;$$
then $\pi = (\pi_j)_{j \in S}$ is an invariant distribution. Proof First we check that it is normalized:
$$\sum_{j \in S} \pi_j = \sum_{j \in S} \lim_{n \to \infty} p_{ij}^{(n)} = \lim_{n \to \infty} \sum_{j \in S} p_{ij}^{(n)} = 1,$$
and
$$\pi_j = \lim_{n \to \infty} p_{ij}^{(n+1)} = \lim_{n \to \infty} \sum_{k \in S} p_{ik}^{(n)} p_{kj} = \sum_{k \in S} \left( \lim_{n \to \infty} p_{ik}^{(n)} \right) p_{kj} = \sum_{k \in S} \pi_k p_{kj},$$
where the exchanges are justified by the finiteness of the summation.
So in general, if the state space is infinite, the limits need not form a stationary distribution. Consider the symmetric random walk on $\mathbb{Z}$: the limits are $\pi_j = \lim_{n \to \infty} p_{ij}^{(n)} = 0$ for all $j$, which trivially satisfies $\pi = \pi P$, but is not a probability measure!
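The decay can be made concrete: for the symmetric walk, $p_{00}^{(2n)} = \binom{2n}{n} 4^{-n} \sim 1/\sqrt{\pi n} \to 0$ by Stirling. A small sketch of this computation:

```python
from math import comb, pi, sqrt

# Return probabilities of the symmetric random walk on Z decay to 0,
# so every limit lim_n p_ij^(n) is 0: not a probability distribution.
for n in (10, 100, 1000, 10000):
    p = comb(2 * n, n) / 4 ** n
    print(f"n={n:6d}  p_00^(2n)={p:.6f}  1/sqrt(pi n)={1 / sqrt(pi * n):.6f}")
```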
Other possibilities are:
- Stationary distributions may not be unique if the Markov chain is not irreducible, since any convex combination of the stationary distributions of the closed classes is again a valid stationary distribution. A trivial example is $P = I$ (any distribution is invariant). Another counterexample arises if the chain is irreducible but transient: consider the one-dimensional random walk with transition probabilities $p_{i,i+1} = p$, $p_{i,i-1} = q = 1 - p$, $p \ne q$; then we have two stationary measures, $\pi_i = 1$ and $\pi_i = (p/q)^i$.
- Stationary distributions may not exist (e.g. the symmetric walk on $\mathbb{Z}$, whose invariant measure is not normalizable).
- Limiting distributions do not exist if the Markov chain is periodic; a trivial example is the MC with transition matrix
$$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
(see the sketch after this list).
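A minimal sketch of the periodic example: $P^n$ oscillates between $P$ and the identity, so $\lim_n p_{ij}^{(n)}$ does not exist, even though $\pi = (1/2, 1/2)$ is a perfectly good stationary distribution.

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
for n in (1, 2, 3, 4):
    print(f"P^{n} =\n{np.linalg.matrix_power(P, n)}")   # oscillates, no limit
print("stationary check:", np.array([0.5, 0.5]) @ P)    # = (0.5, 0.5)
```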
What are the conditions under which the limiting distribution exists? When is it unique? When does a stationary distribution exist? When is it unique?
The best we can hope for is that the limiting distribution exists and coincides with the unique stationary distribution.
For a fixed state $k$, we define for each $i \in S$ the expected time spent in $i$ between visits to $k$:
$$\gamma_i^k = \mathbb{E}_k\left[\sum_{n=0}^{T_k - 1} \mathbf{1}_{\{X_n = i\}}\right],$$
where $T_k = \inf\{n \ge 1 : X_n = k\}$ is the first return time to $k$.
Remark Observe that
$$\gamma_k^k = 1 \qquad \text{and} \qquad \sum_{i \in S} \gamma_i^k = \mathbb{E}_k[T_k] = m_k.$$
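The definition lends itself to a Monte Carlo sketch; the 3-state chain is the same toy example used above (an illustrative assumption), with $k = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
k, trials = 0, 20_000
gamma = np.zeros(3)
for _ in range(trials):
    x = k
    while True:
        gamma[x] += 1                  # count times 0, 1, ..., T_k - 1
        x = rng.choice(3, p=P[x])
        if x == k:                     # returned to k: excursion over
            break
gamma /= trials
print("gamma^k ≈", np.round(gamma, 4))        # gamma_k^k = 1 by construction
print("sum = E_k[T_k] ≈", round(gamma.sum(), 4))
```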
Theorem (Existence of an invariant measure) Let $P$ be irreducible and recurrent. Then
- $\gamma^k = (\gamma_i^k)_{i \in S}$ is invariant (i.e. it satisfies $\gamma^k = \gamma^k P$),
- $0 < \gamma_i^k < \infty$ for all $i \in S$.
Proof The observation $\gamma_k^k = 1$ follows from the definition, since in the window $0 \le n \le T_k - 1$ the chain visits $k$ only at time $n = 0$. Exchanging the expected value and the summation (the exchange is justified by MON) we get:
$$\gamma_i^k = \mathbb{E}_k\left[\sum_{n=0}^{\infty} \mathbf{1}_{\{X_n = i, \, T_k > n\}}\right] = \sum_{n=0}^{\infty} \mathbb{P}_k(X_n = i, \, T_k > n).$$
Let's check that $\gamma^k$ is invariant:
$$\sum_{i \in S} \gamma_i^k \, p_{ij} = \sum_{i \in S} \sum_{n=0}^{\infty} \mathbb{P}_k(X_n = i, \, T_k > n) \, p_{ij};$$
now the event $\{X_n = i, \, T_k > n\}$ depends only on information up to time $n$, since:
$$\{T_k > n\} = \{X_1 \ne k, \, X_2 \ne k, \dots, X_n \ne k\} \in \sigma(X_0, \dots, X_n);$$
thus, using the Markov property,
$$\mathbb{P}_k(X_{n+1} = j, \, X_n = i, \, T_k > n) = \mathbb{P}_k(X_n = i, \, T_k > n) \, p_{ij}.$$
Using this identity,
$$\sum_{i \in S} \gamma_i^k \, p_{ij} = \sum_{n=0}^{\infty} \mathbb{P}_k(X_{n+1} = j, \, T_k \ge n+1) = \mathbb{E}_k\left[\sum_{m=1}^{T_k} \mathbf{1}_{\{X_m = j\}}\right] = \mathbb{E}_k\left[\sum_{m=0}^{T_k - 1} \mathbf{1}_{\{X_m = j\}}\right] - \mathbb{E}_k\left[\mathbf{1}_{\{X_0 = j\}}\right] + \mathbb{E}_k\left[\mathbf{1}_{\{X_{T_k} = j\}}\right];$$
the other term is
$$\mathbb{E}_k\left[\mathbf{1}_{\{X_{T_k} = j\}}\right] = \mathbb{E}_k\left[\mathbf{1}_{\{X_0 = j\}}\right] = \delta_{kj},$$
where $X_{T_k} = X_0 = k$ almost surely, since the chain is recurrent ($\mathbb{P}_k(T_k < \infty) = 1$). Hence $\sum_i \gamma_i^k p_{ij} = \gamma_j^k$.
Since the chain is irreducible, for each state $i$ there exist $n, m \ge 0$ such that $p_{ki}^{(n)} > 0$ and $p_{ik}^{(m)} > 0$; then, using the invariance $\gamma^k = \gamma^k P^n$,
$$\gamma_i^k \ge \gamma_k^k \, p_{ki}^{(n)} = p_{ki}^{(n)} > 0 \qquad \text{and} \qquad 1 = \gamma_k^k \ge \gamma_i^k \, p_{ik}^{(m)}, \quad \text{i.e.} \quad \gamma_i^k \le \frac{1}{p_{ik}^{(m)}} < \infty.$$
And our theorem is proven.
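For a finite chain, $\gamma^k$ can also be computed exactly rather than by simulation: killing the chain on hitting $k$ gives a substochastic "taboo" matrix $Q$, and $\gamma_i^k = \big[P_{k,\cdot}\,(I - Q)^{-1}\big]_i$ for $i \ne k$. This identity is not stated in the notes; it is a standard consequence of $\gamma_i^k = \sum_n \mathbb{P}_k(X_n = i, T_k > n)$. A sketch verifying the theorem's claims on the toy chain:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
k = 0
rest = [i for i in range(3) if i != k]
Q = P[np.ix_(rest, rest)]                 # chain killed on hitting k
gamma = np.ones(3)                        # gamma_k^k = 1
gamma[rest] = P[k, rest] @ np.linalg.inv(np.eye(2) - Q)
print("gamma^k   =", np.round(gamma, 6))
print("gamma^k P =", np.round(gamma @ P, 6))   # invariance: equals gamma^k
print("0 < gamma_i^k < inf:", np.all((gamma > 0) & np.isfinite(gamma)))
```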
Clearly, if we scale $\gamma^k$ by a positive constant we get a new invariant measure. If this invariant measure is normalizable, then we can find an invariant probability distribution. This is possible precisely when $\sum_i \gamma_i^k = \mathbb{E}_k[T_k] = m_k < \infty$, i.e. when the chain is positive recurrent.
Theorem Let $P$ be irreducible. Then the following conditions are equivalent:
1. All states are positive recurrent.
2. Some state $k$ is positive recurrent.
3. There exists an invariant distribution $\pi$.
Moreover, when 3 holds we have $\pi_i = 1/m_i$ for all $i \in S$.
Proof 1 ⇒ 2 is obvious. (We include the full equivalence to show that positive recurrence is also a class property!) 2 ⇒ 3: call $k$ a positive recurrent state. Since this state is recurrent and the chain irreducible, the vector $\gamma^k$ is an invariant measure, and since $k$ is positive recurrent it is also normalizable ($\sum_i \gamma_i^k = m_k < \infty$), so that
$$\pi_i = \frac{\gamma_i^k}{m_k}$$
is an invariant distribution. 3 ⇒ 1: since $\pi$ is an invariant distribution,
$$\sum_{i \in S} \pi_i = 1,$$
so there must exist at least one state $k$ with $\pi_k > 0$. But since the chain is irreducible, this implies $\pi_i > 0$ for all $i$ (indeed $\pi_i \ge \pi_k p_{ki}^{(n)} > 0$ for a suitable $n$). Choose any $k$ and set $\lambda_i = \pi_i / \pi_k$; this is an invariant measure of an irreducible chain with $\lambda_k = 1$, so that by the uniqueness theorem below
$$\lambda_i \ge \gamma_i^k \qquad \text{for all } i \in S,$$
but then
$$m_k = \sum_{i \in S} \gamma_i^k \le \sum_{i \in S} \frac{\pi_i}{\pi_k} = \frac{1}{\pi_k} < \infty,$$
so $k$ is positive recurrent, and since $k$ was arbitrary, all states are positive recurrent. Finally, the chain is then recurrent, so by the uniqueness theorem $\lambda_i = \gamma_i^k$ exactly, and $m_k = \sum_i \lambda_i = 1/\pi_k$, i.e. $\pi_k = 1/m_k$.
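The identity $\pi_i = 1/m_i$ can be sanity-checked on the toy chain by estimating the mean return times $m_i$ by simulation and comparing with the eigenvector stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

vals, vecs = np.linalg.eig(P.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = v / v.sum()                          # stationary distribution

def mean_return_time(i, trials=20_000):
    total = 0
    for _ in range(trials):
        x, t = i, 0
        while True:                       # simulate one return excursion
            x = rng.choice(3, p=P[x])
            t += 1
            if x == i:
                break
        total += t
    return total / trials

for i in range(3):
    print(f"state {i}: 1/m_i ≈ {1 / mean_return_time(i):.4f}   pi_i = {pi[i]:.4f}")
```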
Theorem (Uniqueness of an invariant measure) Let $P$ be irreducible and let $\lambda$ be an invariant measure for $P$ with $\lambda_k = 1$. Then $\lambda_i \ge \gamma_i^k$ for all $i \in S$. If $P$ is recurrent then $\lambda_i = \gamma_i^k$, so that the invariant measure is unique up to a positive constant factor (once we set $\lambda_k = 1$).
Proof For each $j \in S$ we have, using $\lambda_k = 1$,
$$\lambda_j = \sum_{i_1 \in S} \lambda_{i_1} p_{i_1 j} = p_{kj} + \sum_{i_1 \ne k} \lambda_{i_1} p_{i_1 j};$$
we use the same strategy as in the proofs for hitting probabilities and mean hitting times, repeated substitution:
$$\lambda_j = p_{kj} + \sum_{i_1 \ne k} p_{k i_1} p_{i_1 j} + \sum_{i_1, i_2 \ne k} \lambda_{i_2} p_{i_2 i_1} p_{i_1 j} = \mathbb{P}_k(X_1 = j, \, T_k \ge 1) + \mathbb{P}_k(X_2 = j, \, T_k \ge 2) + R_2,$$
where $R_n$ is a positive quantity. By repeating this substitution one gets:
$$\lambda_j \ge \sum_{m=1}^{n} \mathbb{P}_k(X_m = j, \, T_k \ge m) \qquad \text{for every } n;$$
taking the limit $n \to \infty$, and exchanging expected value and sum by (MON), one gets the definition of $\gamma_j^k$:
$$\lambda_j \ge \sum_{m=1}^{\infty} \mathbb{P}_k(X_m = j, \, T_k \ge m) = \mathbb{E}_k\left[\sum_{m=1}^{T_k} \mathbf{1}_{\{X_m = j\}}\right] = \gamma_j^k.$$
If in addition $P$ is also recurrent, then $\gamma^k$ is also stationary, and we can define a new stationary measure
$$\mu_i = \lambda_i - \gamma_i^k$$
(we can do this since $\lambda_i \ge \gamma_i^k$, so $\mu \ge 0$). Since $\lambda$ and $\gamma^k$ agree on state $k$, we have $\mu_k = 0$; so, by the same irreducibility argument used in the existence theorem, choosing for each $i$ an $n$ with $p_{ik}^{(n)} > 0$,
$$0 = \mu_k = \sum_{j \in S} \mu_j \, p_{jk}^{(n)} \ge \mu_i \, p_{ik}^{(n)},$$
hence $\mu_i = 0$ for all $i$, so that $\lambda = \gamma^k$.
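As a closing sanity check of uniqueness on the toy chain (an illustrative assumption, $k = 0$): the left-eigenvector invariant measure, rescaled so that $\lambda_k = 1$, coincides with $\gamma^k$ from the taboo-matrix computation above.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
k = 0

# Invariant measure from the eigenproblem, normalized at state k.
vals, vecs = np.linalg.eig(P.T)
lam = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
lam /= lam[k]

# gamma^k from the expected occupation times (see the existence sketch).
rest = [i for i in range(3) if i != k]
Q = P[np.ix_(rest, rest)]
gamma = np.ones(3)
gamma[rest] = P[k, rest] @ np.linalg.inv(np.eye(2) - Q)

print("lambda  =", np.round(lam, 6))
print("gamma^k =", np.round(gamma, 6))   # identical: uniqueness up to scaling
```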