
I have a rather complex game whose expected value I need to find. Given is a binary random variable with success probability $p$ (say, an unfair coin that lands heads with probability $p$ and tails with probability $1-p$, where $p \ll 0.5$). The rules of the game are the following: you start by betting 1 euro, with a multiplier of 1 and 15 tosses. Each time you land heads (the less likely option) you receive 15 more tosses (added to the ones you have remaining) and your multiplier increases by 1. Landing tails gives you nothing; you have just used up one toss. The multiplier can grow up to 5: each head landed after reaching a multiplier of 5 still adds 15 new tosses, but does not increase the multiplier further. I am seeking the expected multiplier of the game.

I started by building a Markov chain table $A[i,j]$ for the probability that toss $j$ is played with multiplier $i$: $A[1,1]=1$, $A[1,2]=1-p$, $A[2,2]=p$, $A[1,3]=(1-p)^2$, etc. Since we start with 15 tosses, each $A[i,\,15+i]=0$. But I am still stuck on deriving the average MP from the table. Any help will be much appreciated!

Thanks!

EDIT: I forgot to say that at each toss you win your bet x multiplier, so I am actually looking for the EV of the game. I think it can be expressed as the average MP of the game times the average number of tosses, so in the end the EV should be their product. I know how to find the average number of tosses; my main issue is the average MP.
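
For concreteness, here is a minimal Monte Carlo sketch of the rules as stated (Python; the function names and the convention that each toss pays at the multiplier in effect when the toss is made are assumptions, since the question does not pin the timing down):

```python
import random

def play(p, t=15, cap=5, rng=random):
    """Play one game; return (total payout for a 1-euro bet, tosses played).
    Assumed convention: each toss pays bet * multiplier at the multiplier
    in effect when the toss is made."""
    left, mult, payout, played = t, 1, 0, 0
    while left > 0:
        left -= 1
        played += 1
        payout += mult                 # win bet x multiplier on this toss
        if rng.random() < p:           # heads: new batch of tosses
            left += t
            mult = min(mult + 1, cap)  # multiplier is capped at 5
    return payout, played

def estimate(p, n_games=200_000):
    games = [play(p) for _ in range(n_games)]
    ev = sum(g[0] for g in games) / n_games
    avg_tosses = sum(g[1] for g in games) / n_games
    avg_mp = ev / avg_tosses           # toss-weighted average multiplier
    return ev, avg_mp, avg_tosses

print(estimate(0.02))                  # needs p < 1/15 so games end quickly
```

This directly estimates the EV, the average number of tosses, and the average MP in the product decomposition above, so any closed-form answer can be checked against it.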

  • If I understand correctly: the term "multiplier" is confusing, as it doesn't seem to be multiplying anything. It's just an index that ratchets up to $5$, or stops earlier if you throw too many consecutive tails; is that correct? And what does the betting have to do with anything?
    – lulu
    Commented Jan 15, 2021 at 18:57
  • Regardless, if you want to do it via Markov methods, you'll need to keep track of how many tosses you have left. Thus, if $S(i, n)$ denotes the state in which the counter shows $i<5$ and you have $n$ tosses left, then you can transition to $S(i, n-1)$ by throwing $T$ or to $S(i+1, n+14)$ if you throw $H$. Or maybe it is $S(i+1, n+15)$; that's not clear to me. I'm ignoring the betting aspect because I don't see how it enters into anything (but perhaps I am misreading).
    – lulu
    Commented Jan 15, 2021 at 19:09
  • You are absolutely right; I forgot to say that at each toss you win your bet x multiplier, so I am actually looking for the EV of the game (see the EDIT above). I know how to find the average number of tosses; my main issue is the average MP. Thanks! Commented Jan 16, 2021 at 16:03

2 Answers


Foreword.

Probably this answer is not what you expect. It actually aims at the more complicated task of computing the probability that the game ends at the $(n+1)$-th batch of trials. Knowing this probability, one can of course compute the expected value of any quantity that depends only on the number of played batches. Yet computing an expected value often does not require knowing the individual probabilities, and that may be the case here as well. Nevertheless, I have decided to post the answer, since it reveals some beauty and may be useful for other problems.

Preliminaries.

Let $\tilde n$ be a weak composition of a non-negative integer $n$ into $n+1$ non-negative parts: $$ \tilde n=(n_0,n_1,\ldots,n_{n}),\tag1 $$ and define the exceedance of the composition as: $$ X(\tilde n)=\sum_{m=0}^{n}\mathbf{1}_{N_m>m}\tag2 $$ where $\mathbf1_A$ is the indicator function and $N_m=\sum_{i=0}^m n_i$ are the partial sums of $\tilde n$. Put simply, the exceedance is the number of indices $m$, ranging from $0$ to $n$, for which the inequality $N_m>m$ holds. The exceedance can take on the $n+1$ values from $0$ to $n$ (observe that $N_n=n$, so the inequality always fails at $m=n$).
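
In code, definition $(2)$ reads as follows (a minimal Python helper; the tuple in the assertion is just an arbitrary example):

```python
def exceedance(parts):
    """X(n~) from (2): the number of indices m with N_m > m,
    where N_m are the partial sums of the composition."""
    total, x = 0, 0
    for m, n_m in enumerate(parts):
        total += n_m
        x += total > m
    return x

# partial sums N = (2, 2, 3, 3): N_m > m holds for m = 0, 1, 2 but not m = 3
assert exceedance((2, 0, 1, 0)) == 3
```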

Solution.

Let $t$ be the length of a batch of trials (in your example $t=15$). Then the probability that the game ends at the $(n+1)$-th batch is:

$$W(n,t)\left(pq^{t-1}\right)^n q^t,\tag3 $$ where $q=1-p$ is the probability of tossing a tail and $W(n,t)$ is the number of ways to distribute the $n$ heads among the $t(n+1)$ tosses so that there are always enough previously tossed heads to proceed to each next batch.

Let $n_i$ in $(1)$ be the number of heads tossed in the $i$-th batch. Then the above-mentioned condition is simply: $$ X(\tilde n)=n.\tag4 $$

It follows then that $$ W(n,t)=\frac1{n+1}\binom{t(n+1)}n.\tag5 $$

The formula can be obtained in the following way. There are $\binom{t(n+1)}n$ ways in total to distribute $n$ heads among $t(n+1)$ tosses. Now consider some particular distribution and cyclically shift it by $k$ batches ($0\le k \le n$), so that $n'_i=n_{(i+k)\bmod (n+1)}$. It turns out that all $n+1$ cyclically shifted compositions have distinct exceedances, so that exactly one of them has the required exceedance $n$; this is a form of the cycle lemma (an excellent proof can be found here). This explains the prefactor $\frac1{n+1}$.
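
Formula $(5)$ is easy to verify by brute force for small parameters. A sketch (Python; the tested ranges of $t$ and $n$ are an arbitrary choice):

```python
from itertools import combinations
from math import comb

def W_brute(n, t):
    """Count placements of n heads among t*(n+1) tosses such that, for
    every m < n, batches 0..m contain at least m+1 heads, i.e. there are
    always enough heads to unlock the next batch (condition (4))."""
    total = t * (n + 1)
    count = 0
    for heads in combinations(range(total), n):
        batch = [0] * (n + 1)
        for k in heads:
            batch[k // t] += 1        # batch index of this head
        partial, ok = 0, True
        for m in range(n):
            partial += batch[m]
            if partial <= m:          # N_m <= m: the game would have ended
                ok = False
                break
        count += ok
    return count

def W_formula(n, t):
    return comb(t * (n + 1), n) // (n + 1)   # formula (5)

for t in (2, 3):
    for n in range(5):
        assert W_brute(n, t) == W_formula(n, t)
print("formula (5) confirmed for t = 2, 3 and n = 0..4")
```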

Appendix.

The overall probability that the game ends after a finite number of batches is: $$ P_t(q)=q^t\sum_{n=0}^\infty\frac{1}{n+1}\binom{t(n+1)}n\left(pq^{t-1}\right)^n.\tag6 $$

The following figure demonstrates the behavior of the function $P_t(q)$ for $t=1,2,3,4$.

[Figure: $P_t(q)$ plotted against $q$ for $t=1,2,3,4$]

Observe that $P_t(q)=1$ only if $p\le\frac1t$ (the inequality being strict for $t=1$). This means that for larger $p$ there is a positive probability that the game never ends. This is, however, irrelevant for the expected value of a bounded quantity such as the capped multiplier.
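
This threshold behavior can also be checked numerically by summing the series $(6)$ directly. A small sketch (Python; evaluating in log space and the 5000-term cutoff are implementation choices, not part of the answer):

```python
from math import lgamma, log, exp

def P(t, p, terms=5000):
    """Partial sum of series (6): the probability that the game ever ends.
    Binomial coefficients are handled via lgamma to avoid overflow."""
    q = 1.0 - p
    log_x = log(p) + (t - 1) * log(q)        # log of p * q^(t-1)
    s = 0.0
    for n in range(terms):
        N = t * (n + 1)
        log_term = (lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)
                    - log(n + 1) + n * log_x)
        s += exp(log_term)
    return q ** t * s

for t in (2, 3, 4):
    below, above = 0.9 / t, 1.5 / t          # p on either side of 1/t
    print(t, round(P(t, below), 6), round(P(t, above), 6))
# P is ~1 below the threshold p = 1/t and strictly less than 1 above it
```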

Hope this helps.

  • Thanks! This will surely be very helpful for me! Commented Jan 22, 2021 at 10:18
  • Returning from your follow-up problem to this initial context, I wonder if there is a simple interpretation of the threshold probability being $p=1/t$. Commented Jan 27, 2021 at 19:58
  • @Semiclassical It is intuitively clear that the game ends with certainty only if the expected number of heads per batch is not larger than 1, but I am not sure whether that is rigorous enough.
    – user
    Commented Jan 28, 2021 at 8:57

Solution: it turned out the Markov chain approach was pretty successful after all.

Calculating the probabilities for the first 60 tosses (taking into account that A[i, 15+i] = 0) and summing them per column (that is, per multiplier), one finds the average number of tosses played at each multiplier. Instead of calculating a ton of probabilities to approximate the last multiplier, 5, one can simply obtain its average number of tosses by subtracting the sum of the first four from the expected total number of tosses. That total is 15/(1 - 15p), which follows from the infinite geometric series: each toss spawns 15p expected extra tosses, so E[T] = 15 + 15p·E[T]. Lastly, dividing each multiplier's average number of tosses by the expected total gives the probability of each MP, and the average MP is simply their expectation.
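
A sketch of this recipe (Python; the state space of (multiplier, tosses left) follows lulu's comment under the question, and all names are mine, so this is one possible implementation rather than the exact table computed above):

```python
def avg_multiplier(p, t=15, cap=5):
    """Toss-weighted expected multiplier, following the recipe above.
    Requires t*p < 1 so that E[tosses] = t/(1 - t*p) is finite."""
    assert t * p < 1, "for p >= 1/t the expected number of tosses diverges"
    q = 1.0 - p
    dist = {(1, t): 1.0}            # dist[(mult, tosses left)] = probability
    tosses_at = [0.0] * (cap + 1)   # expected tosses played at each multiplier
    # A toss at multiplier m < cap can only be one of the first t*(cap-1)
    # tosses (with m-1 heads you have at most t*m tosses in total), so this
    # horizon captures levels 1..cap-1 exactly.
    for _ in range(t * (cap - 1)):
        new = {}
        for (m, r), pr in dist.items():
            if r == 0:
                continue                        # this branch already ended
            if m < cap:
                tosses_at[m] += pr              # one toss played at level m
            key = (m, r - 1)                    # tails: just lose a toss
            new[key] = new.get(key, 0.0) + pr * q
            key = (min(m + 1, cap), r - 1 + t)  # heads: new batch, bump mult
            new[key] = new.get(key, 0.0) + pr * p
        dist = new
    e_tosses = t / (1.0 - t * p)                # geometric-series total
    tosses_at[cap] = e_tosses - sum(tosses_at[1:cap])  # subtraction trick
    return sum(m * tosses_at[m] for m in range(1, cap + 1)) / e_tosses

p = 0.02
avg_mp = avg_multiplier(p)
print(avg_mp, avg_mp * 15 / (1 - 15 * p))   # average MP and the resulting EV
```

The final print multiplies the average MP by the expected number of tosses, which is exactly the product decomposition of the EV described in the question's EDIT.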

P.S. It's a naive approach that gives you the answer, but the reasoning behind it is much more beautiful, so thanks to user for shedding light on it!


