I am working on some theory related to controls in the context of stochastic games, and I am a bit confused on some terminologies for zero-sum games.
Suppose we have a zero-sum game with two players. Let the first player's input be $U \in \mathbb{R}^{m}$; this player's goal is to minimize the payoff function $J(U,V)$. Let the second player's input be $V \in \mathbb{R}^{\ell}$; this player's goal is to maximize the payoff function $J(U,V)$. In my context, the first player is called the "controller" and the second player is called the "stopper".
I am currently reading a paper that states the following:
The upper game is a scheme in which the stopper chooses $V$ based on the information it has on the control $U$, and the upper value is defined as
$$ \mathcal{V}^+ = \inf_{U\in\mathbb{R}^{m}}\sup_{V\in\mathbb{R}^{\ell}} J(U,V). $$
Similarly, the lower game is a scheme in which the controller chooses $U$ based on the information it has on the stopper's input $V$, and the lower value is defined by
$$ \mathcal{V}^{-} = \sup_{V\in\mathbb{R}^{\ell}}\inf_{U\in\mathbb{R}^{m}} J(U,V). $$
Now I know that $\mathcal{V}^{-} \leq \mathcal{V}^{+}$ always holds (this is the max-min inequality), and that the two values coincide exactly when the game has a saddle point (equilibrium), which is what the minimax theorem guarantees under suitable conditions.
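To make the two orders of optimization concrete, here is a minimal numerical sketch on a finite (matrix) version of the game, where the controller's and stopper's pure choices are row and column indices; the payoff matrix `J` below is made up purely for illustration:

```python
import numpy as np

# Hypothetical 2x2 payoff matrix: the controller picks the row i
# (trying to minimize J), the stopper picks the column j (trying
# to maximize J).
J = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Upper value inf_U sup_V J(U, V): for each fixed row (controller
# move), the stopper responds with the best column; the controller
# then picks the row whose worst-case payoff is smallest.
V_plus = J.max(axis=1).min()   # -> 1.0

# Lower value sup_V inf_U J(U, V): for each fixed column (stopper
# move), the controller responds with the best row; the stopper
# then picks the column whose guaranteed payoff is largest.
V_minus = J.min(axis=0).max()  # -> 0.0

print(V_minus, V_plus)  # max-min inequality: V_minus <= V_plus
```

In this example the two values differ ($\mathcal{V}^{-} = 0 < 1 = \mathcal{V}^{+}$), so the game has no saddle point in pure strategies: whoever must commit first is at a disadvantage, which is exactly the gap between the lower and upper games.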
My question is about the definitions of the upper and lower games. Looking at the expression for the upper value, how does it make sense that the upper game is one in which the stopper plays based on the information it has about the controller, when the inner optimization (which is carried out first) is over $V$ and the outer optimization is over $U$? The way I am interpreting the upper value is that the stopper first optimizes the payoff over $V$, and then, given the stopper's optimal input, the controller optimizes the resulting payoff over $U$; so the stopper acts first, and the controller best-responds afterwards.
Can someone please explain this to me?