Let us consider the following two-player game between Chooser and Guesser. There is a finite set $\Omega$ and $k$ probability distributions on $\Omega$, denoted by $ \mathcal{P} =\{P_1,\ldots,P_k\} $.
Chooser picks a distribution $Q\in \mathcal{P} $ and presents to Guesser an element $X\in\Omega$ drawn according to $Q$. Guesser has pre-committed to a function $\psi:\Omega\to\mathcal{P}$ according to which he will infer the identity of $Q$. The loss suffered by Guesser is the probability that his inference is incorrect, under Chooser's adaptive (worst-case) choice of $Q$: $$ L(\psi) = \max_{Q\in \mathcal{P}} \sum_{x\in\Omega}Q(x)\boldsymbol{1}[ Q \neq \psi(x) ]. $$
We say that $\psi^*$ is minimax-optimal (for $\mathcal{P}$) if $$ L(\psi^*) = \min_{ \psi\in \mathcal{P}^\Omega } L(\psi). $$
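To make the two definitions above concrete, here is a small Python sketch (my own illustration, not part of the formal setup): each $P_i$ is a dict mapping points of $\Omega$ to probabilities, a rule $\psi$ is a dict mapping each $x$ to the index of the guessed distribution, `worst_case_loss` evaluates $L(\psi)$, and `minimax_rule` finds a minimax-optimal deterministic rule by brute force over all $k^{|\Omega|}$ elements of $\mathcal{P}^\Omega$ (feasible only for tiny examples). The names and data representation are arbitrary.

```python
from itertools import product

# Toy representation: each distribution is a dict mapping x in Omega to Q(x);
# a rule psi is a dict mapping each x in Omega to the *index* of the guessed P_i.

def worst_case_loss(dists, psi):
    """L(psi): the largest probability of guessing wrong, over Q in the family."""
    return max(
        sum(p for x, p in q.items() if psi[x] != i)  # mass on which psi errs when Q = dists[i]
        for i, q in enumerate(dists)
    )

def minimax_rule(dists, omega):
    """Find a minimax-optimal deterministic rule by exhausting all k^|Omega| of them."""
    best_psi, best_loss = None, float("inf")
    for guesses in product(range(len(dists)), repeat=len(omega)):
        psi = dict(zip(omega, guesses))
        loss = worst_case_loss(dists, psi)
        if loss < best_loss:
            best_psi, best_loss = psi, loss
    return best_psi, best_loss
```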
For binary hypothesis testing, it is an immediate consequence of the Neyman-Pearson Lemma that the maximum-likelihood rule yields the minimax-optimal inference. Namely, for $k=2$, the choice $\psi(x)=\mathrm{argmax}\{P_1(x),P_2(x)\}$ is easily seen to be minimax-optimal. For $k=3$, this is no longer the case. Indeed, one can easily construct 3 distributions $P_1,P_2,P_3$ such that for all $x\in\Omega$, $P_3(x) < \max\{P_1(x),P_2(x)\}$. In this case, the choice $\psi(x)=\mathrm{argmax}\{P_1(x),P_2(x),P_3(x)\}$ will never select $P_3$, and hence the adaptive adversary will always choose $Q=P_3$, forcing the maximal possible loss of $1$.
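For concreteness, here is a self-contained sketch of one such family on $\Omega=\{1,2,3\}$ (the particular probabilities, and the helper name `loss`, are my own hypothetical choices for illustration). It checks that the maximum-likelihood rule never guesses $P_3$, so its worst-case loss is $1$, while an exhaustive search over all $3^3=27$ deterministic rules turns up a rule with worst-case loss $0.625$ for these numbers:

```python
from itertools import product

# Hypothetical example (my own numbers) on Omega = {1, 2, 3} with
# P3(x) < max(P1(x), P2(x)) for every x.
P1 = {1: 0.5,  2: 0.5,   3: 0.0}
P2 = {1: 0.0,  2: 0.5,   3: 0.5}
P3 = {1: 0.25, 2: 0.375, 3: 0.375}
dists, omega = [P1, P2, P3], [1, 2, 3]

def loss(psi):
    """Worst-case error probability of the rule psi (a dict x -> guessed index)."""
    return max(sum(p for x, p in q.items() if psi[x] != i) for i, q in enumerate(dists))

# Maximum-likelihood rule: at each x, guess the index with the largest likelihood.
ml = {x: max(range(3), key=lambda i: dists[i][x]) for x in omega}
print("ML rule:", ml, "loss:", loss(ml))          # loss 1.0 -- P3 is never guessed

# Exhaustive search over all 3^3 = 27 deterministic rules.
best = min((dict(zip(omega, g)) for g in product(range(3), repeat=3)), key=loss)
print("best rule:", best, "loss:", loss(best))    # 0.625 < 1 for these numbers
```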
Question. What is known about this problem? What is known about the minimax decision rule for $k>2$?
Update: My claims about the implications of the Neyman-Pearson Lemma are incorrect; in particular, the maximum-likelihood $\psi$ is not always minimax-optimal. See the answer by Mr X.