I have a bit of a formal problem with a property of the expected win rate expressed via Markov chains.
Let $(X_i)_{i \geq 0}$ be a discrete-time stochastic process on a countable (here finite) state space that has the elementary Markov property with respect to $\mathbb{P}$ and is homogeneous, so:
$$\mathbb{P}(X_n = i_n|X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = \mathbb{P}(X_{n} = i_{n}|X_{n-1} = i_{n-1}) = \mathbb{P}(X_{1} = i_{1}|X_{0} = i_{0}) =: p_{i_0 i_1}$$
This need only hold whenever we can condition on the past states without dividing by zero. We then consider such a five-state Markov chain in which state $5$ is a winning condition and state $2$ is a losing condition. We define
$$\tau_i := \min\{n \geq 0: X_n = i\}$$
as the random variable giving the first hitting time of state $i$ (including the starting time). The winning probability when starting in state $i$ is thus
$$g_i := \mathbb{P}(\tau_5 < \tau_2 | X_0 = i)$$
The state $1$ has transition probabilities $p_{12} = \frac{1}{2} = p_{13}$, meaning that
$$\begin{array}{lcl} g_1 &=& \mathbb{P}(\tau_5 < \tau_2|X_0 = 1) \\ &=& \frac{\mathbb{P}(\tau_5 < \tau_2,X_0 = 1)}{\mathbb{P}(X_0 = 1)} \\ &=& \frac{\mathbb{P}(\tau_5 < \tau_2,X_0 = 1,X_1 = 2)}{\mathbb{P}(X_0 = 1)} + \frac{\mathbb{P}(\tau_5 < \tau_2,X_0 = 1, X_1 = 3)}{\mathbb{P}(X_0 = 1)} \\ &=& \frac{\mathbb{P}(\tau_5 < \tau_2,X_0 = 1,X_1 = 2)}{\mathbb{P}(X_0 = 1,X_1 = 2)}\frac{\mathbb{P}(X_0 = 1, X_1 = 2)}{\mathbb{P}(X_0 = 1)} + \frac{\mathbb{P}(\tau_5 < \tau_2,X_0 = 1, X_1 = 3)}{\mathbb{P}(X_0 = 1,X_1 = 3)}\frac{\mathbb{P}(X_0 = 1, X_1 = 3)}{\mathbb{P}(X_0 = 1)} \\ &=& \mathbb{P}(\tau_5 < \tau_2|X_0 = 1, X_1 = 2) \mathbb{P}(X_1 = 2| X_0 = 1) + \mathbb{P}(\tau_5 < \tau_2|X_0 = 1, X_1 = 3) \mathbb{P}(X_1 = 3| X_0 = 1) \\ &=& g_2 p_{12} + g_3 p_{13} \end{array}$$
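For concreteness, the first-step identity $g_1 = g_2 p_{12} + g_3 p_{13}$ can be sanity-checked by simulation. Only $p_{12} = p_{13} = \tfrac{1}{2}$ is given above; all other transition probabilities below are made up purely for illustration (state $5$ absorbs as a win, state $2$ as a loss):

```python
import random

random.seed(0)

# Hypothetical 5-state chain: only p12 = p13 = 1/2 is from the question,
# the rows for states 3 and 4 are invented for this example.
P = {1: [(2, 0.5), (3, 0.5)],
     3: [(2, 0.5), (4, 0.5)],
     4: [(2, 0.5), (5, 0.5)]}

def win(start):
    """Run the chain from `start` until absorption; True iff it hits 5 before 2."""
    s = start
    while s not in (2, 5):
        r, acc = random.random(), 0.0
        for nxt, p in P[s]:
            acc += p
            if r < acc:
                s = nxt
                break
    return s == 5

def estimate(start, n=100_000):
    """Monte Carlo estimate of g_start = P(tau_5 < tau_2 | X_0 = start)."""
    return sum(win(start) for _ in range(n)) / n

g1, g3 = estimate(1), estimate(3)
# g_2 = 0 since state 2 loses immediately, so the identity reads g_1 = p13 * g_3.
print(g1, 0.5 * 0 + 0.5 * g3)  # both estimates should be close to each other
```

For this made-up chain one can also compute by hand that $g_4 = \tfrac12$, $g_3 = \tfrac14$, $g_1 = \tfrac18$, which the simulation reproduces up to Monte Carlo error.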
What is tripping me up is not that we can use homogeneity and the Markov property for $\tau_5 < \tau_2$ on the left-hand side (I have already proven that), but rather:
Does the above identity still hold if the starting probability (here $\mathbb{P}(X_0 = 1)$) is zero? Can we justify the questionable algebra involving division by zero?
The reason I am asking for this special case to be "proven" is that right after this derivation, we use these equations to solve the linear system they generate (excluding the absorbing states).
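To illustrate what "solving the system" means concretely: for a chain with $p_{12} = p_{13} = \tfrac12$ (from above) and hypothetical transitions $p_{32} = p_{34} = p_{42} = p_{45} = \tfrac12$ (my own made-up choice, just so the system is triangular and solvable by back-substitution), the equations $g_i = \sum_j p_{ij} g_j$ with boundary values $g_2 = 0$, $g_5 = 1$ can be solved exactly:

```python
from fractions import Fraction

half = Fraction(1, 2)

# Boundary values at the absorbing states: losing at 2, winning at 5.
g = {2: Fraction(0), 5: Fraction(1)}

# First-step equations g_i = sum_j p_ij * g_j, solved by back-substitution.
# p12 = p13 = 1/2 is from the question; the other rows are hypothetical.
g[4] = half * g[5] + half * g[2]   # g_4 = p45 g_5 + p42 g_2
g[3] = half * g[4] + half * g[2]   # g_3 = p34 g_4 + p32 g_2
g[1] = half * g[3] + half * g[2]   # g_1 = p13 g_3 + p12 g_2

print(g[1], g[3], g[4])  # 1/8 1/4 1/2
```

In general the transient states can form cycles, in which case one solves the full linear system $(I - Q)g = b$ rather than substituting row by row; the point here is only to show the shape of the system being referred to.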
My guess for resolving this is that we could define the conditional probability to be $0$ in that case, which aligns with the intuition that $g_1 = 0$, but that would just make other steps in the above derivation questionable, if I am not mistaken. More generally:
How does one circumvent the problem of dealing with ill-defined conditional probabilities when proving properties of certain Markov chains? Is there a general approach to this problem?
Thank you for your attention!