Exercise 3.1: Let $G$ be a (finite) graph with vertices $v_1, \ldots, v_p$. Assume that some power of the probability matrix $M(G)$ defined by $(3.1)$ has positive entries. (It's not hard to see that this is equivalent to $G$ being connected and containing at least one cycle of odd length, but you don't have to show this.) Let $d_k$ denote the degree (number of incident edges) of vertex $v_k$. Let $D = d_1 + d_2 + \ldots + d_p = 2q - r$, where $G$ has $q$ edges and $r$ loops. Start at any vertex of $G$ and do a random walk on the vertices of $G$ as defined in the text. Let $p_k(\ell)$ denote the probability of ending up at vertex $v_k$ after $\ell$ steps. Assuming the Perron–Frobenius theorem (Theorem 3.3), show that $$\lim_{\ell \to \infty} p_k(\ell) = \frac{d_k}{D}.$$ This limiting probability distribution on the set of vertices of $G$ is called the stationary distribution of the random walk.

For reference, the cited result (Theorem 3.3, a form of the Perron–Frobenius theorem) states: Let $B$ be a nonnegative irreducible square matrix. If $\rho$ is the maximum absolute value of the eigenvalues of $B$, then $\rho > 0$, and $\rho$ is itself an eigenvalue of $B$. Moreover, there is an eigenvector for $\rho$ (unique up to multiplication by a positive real number) all of whose entries are positive.
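Just to make Theorem 3.3 concrete for myself, here is a quick numerical check with NumPy. The matrix $B$ below is my own toy example (the adjacency matrix of a triangle), not anything from the book:

```python
import numpy as np

# Toy example: adjacency matrix of a triangle, which is
# nonnegative and irreducible, so Theorem 3.3 applies.
B = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

vals, vecs = np.linalg.eig(B)
rho = max(abs(vals))             # spectral radius (max |eigenvalue|)
i = np.argmax(vals.real)         # rho is itself an eigenvalue
v = vecs[:, i].real
v = v / v.sum()                  # normalize; all entries come out positive

print(rho)   # 2.0 for this matrix (eigenvalues are 2, -1, -1)
print(v)     # positive eigenvector, proportional to (1, 1, 1)
```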

I was wondering if anyone could give me a hint as to where to go with this problem. So far I have only made one observation: the limit $\lim_{\ell \to \infty} p_k(\ell) = \frac{d_k}{D}$ should be the same regardless of where I start. So let $\alpha = \begin{pmatrix} \alpha_1 & \dots & \alpha_p \end{pmatrix}$, where $\alpha_1 + \dots + \alpha_p = 1$ and $\alpha_i$ is the probability of starting the random walk at vertex $v_i$. Then $\alpha M(G)$ is the row vector whose $k$th entry is the probability of being at vertex $v_k$ after one step, given the starting distribution $\alpha$; more generally, $\alpha M(G)^\ell$ gives the distribution after $\ell$ steps. So the values of $\alpha_i$ should not matter in the limit.

However, repeatedly multiplying $\alpha M(G)$ by $M(G)$ does not produce a nice closed-form expression. Even if I set $\alpha = \begin{pmatrix} 1 & 0 & \dots & 0 \end{pmatrix}$, so that $\alpha M(G)$ is just the first row of $M(G)$ (as a row vector), the expression still does not simplify nicely under further right-multiplication by $M(G)$.
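As a numerical sanity check (not part of the exercise), I did verify the claimed limit on a small example. The graph below is my own choice: a triangle $v_1 v_2 v_3$ with a pendant vertex $v_4$ attached to $v_1$, which is connected and contains an odd cycle, so some power of $M(G)$ is positive:

```python
import numpy as np

# Adjacency matrix of a triangle v1-v2-v3 with a pendant vertex v4 on v1.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)

d = A.sum(axis=1)            # degrees d_k = (3, 2, 2, 1)
D = d.sum()                  # D = 2q - r (here q = 4 edges, r = 0 loops, D = 8)
M = A / d[:, None]           # probability matrix M(G): M[i, j] = (# edges i-j) / d_i

alpha = np.array([1.0, 0.0, 0.0, 0.0])   # start the walk at v1
for _ in range(200):                     # iterate alpha <- alpha M(G)
    alpha = alpha @ M

print(alpha)       # ≈ [0.375, 0.25, 0.25, 0.125]
print(d / D)       # stationary distribution d_k / D: [0.375, 0.25, 0.25, 0.125]
```

Numerically the iterates do converge to $d_k/D$ no matter which vertex I start from, which is reassuring, but I still don't see how to prove it.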

Any hints/insights would be greatly appreciated!
