Players A and B play the following number-guessing game:
Player A draws a natural number $X$ from $1,2,\cdots,N$ at random.
Player B guesses a number $Y$ in $1,2,\cdots, N$.
Player A says "ok" if the guessed number $Y$ equals $X$. If $X > Y$, player A says "up". If $X<Y$, player A says "down".
Player B keeps guessing until getting the right number.
Now let $M$ be the number of guesses that player B needed before player A said "ok". Player B earns $X \cdot r^M$ points, where $r$ is a pre-determined real number in $(0,1)$.
I want to find an optimal strategy for player B, i.e. one that maximizes the expected number of points earned.
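For concreteness, assuming the draw is uniform on $1,2,\cdots,N$ (the statement only says "at random"), one way to set this up is through a value function $V(i,j)$, the optimal expected payoff given that $X$ is known to lie in $i,i+1,\cdots,j$ (notation mine). It satisfies
$$V(i,j) = \max_{i \leq y \leq j} \frac{r}{j-i+1}\Big( y + (y-i)\,V(i,y-1) + (j-y)\,V(y+1,j) \Big),$$
with $V \equiv 0$ on empty ranges: the factor $r$ charges for the current guess, and the three terms correspond to $X = y$, $X < y$ ("down"), and $X > y$ ("up"). The value of the whole game is then $V(1,N)$.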
I defined an array $a$, where $a(i,j)$ is the optimal guess (the maximizing $y$ above) when it is known that $i \leq X \leq j$. What makes the problem hard is that the game is not translation-invariant: if $i > j$, then $a(i,i+k)$ need not equal $a(j,j+k) + i - j$. This happens because the points earned when the game ends are proportional to $X$.
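Here is a minimal memoized-DP sketch of that recursion in Python; the function name `guess_table` and the uniform-draw assumption are mine, not part of the problem statement.

```python
def guess_table(N, r):
    """Dynamic program for the guessing game, assuming X is uniform
    on {1, ..., N}.  Returns (V, a): V[(i, j)] is the optimal expected
    payoff given X in {i, ..., j}, and a[(i, j)] is the optimal guess."""
    V, a = {}, {}

    def solve(i, j):
        if i > j:                       # empty range contributes nothing
            return 0.0
        if (i, j) not in V:
            n = j - i + 1
            best_val, best_y = -1.0, i
            for y in range(i, j + 1):
                # the current guess multiplies every branch by r;
                # the three terms are X == y, X < y ("down"), X > y ("up")
                val = (r / n) * (y
                                 + (y - i) * solve(i, y - 1)
                                 + (j - y) * solve(y + 1, j))
                if val > best_val:
                    best_val, best_y = val, y
            V[(i, j)], a[(i, j)] = best_val, best_y
        return V[(i, j)]

    solve(1, N)
    return V, a
```

Solving from $(1,N)$ populates every subinterval $(i,j)$ along the way, so the full table $a(i,j)$ is available afterwards for experiments.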
What I want to prove is the following inequality: if $i > j$ (and both intervals fit inside $1,2,\cdots,N$), then $$a(i,i+k) \leq a(j,j+k) + i - j.$$
This seems intuitively clear: the smaller $i$ is, the more player B benefits from guessing a larger number, since the payoff is proportional to $X$ and the relative spread of possible payoffs across the interval is larger when the interval starts lower.
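One sanity check before hunting for a proof is to brute-force the inequality for small $N$ over a few values of $r$. The snippet below reuses `guess_table` from the sketch above (again my own code, not part of the problem).

```python
def check_inequality(N, r):
    """Brute-force check of a(i, i+k) <= a(j, j+k) + i - j for i > j.
    Returns the first counterexample (i, j, k) found, or None."""
    _, a = guess_table(N, r)
    for k in range(N):
        for j in range(1, N - k + 1):
            for i in range(j + 1, N - k + 1):
                if a[(i, i + k)] > a[(j, j + k)] + (i - j):
                    return (i, j, k)
    return None

for r in (0.1, 0.5, 0.9, 0.99):
    print(r, check_inequality(30, r))   # None means no counterexample found
```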
I am stuck on coming up with a rigorous proof.
Thank you for any form of help, hint, or solution.