My question is strongly related to this one. Google's quantum supremacy claim uses Random Circuit Sampling. The principle is the following: a realistic noise model for random circuits performed on a noisy quantum computer is the depolarising channel. That is, if $|\psi\rangle\!\langle\psi|$ is the pure state one would have obtained by applying a random circuit $C$ on a perfect quantum computer, the final state when executing this circuit on a noisy device is: $$\rho=\lambda|\psi\rangle\!\langle\psi|+\frac{1-\lambda}{2^n}I_{2^n}$$ Using this state, one can compute the Cross-Entropy Benchmarking (XEB) score, which can be seen as a random variable whose expectation is equal to $\frac{1+\lambda}{2^n}$. It was then believed that sampling from a distribution over the $n$-bit strings that gives an XEB of $\frac{1+b}{2^n}$ for some $b>0$ is exponentially hard for a classical computer.
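As a sanity check on that expectation value, here is a small NumPy sketch of my own (not taken from any of the cited papers). It uses a normalised complex Gaussian vector as a stand-in for the output state of a Haar-random circuit, samples bitstrings from the depolarised mixture $\rho$, and verifies that the empirical mean of the ideal probabilities $p(x)$ lands near $\frac{1+\lambda}{2^n}$, so that $\lambda$ can be recovered from the XEB score:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10                      # number of qubits (small, for a quick check)
dim = 2 ** n
lam = 0.5                   # depolarising fidelity lambda

# Stand-in for the output state of a Haar-random circuit:
# a normalised complex Gaussian vector.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
p = np.abs(psi) ** 2        # ideal output distribution p(x)

# Sampling from rho = lam |psi><psi| + (1 - lam) I / 2^n:
# with probability lam draw from p, otherwise draw uniformly.
n_samples = 200_000
from_ideal = rng.random(n_samples) < lam
samples = np.where(
    from_ideal,
    rng.choice(dim, size=n_samples, p=p),
    rng.integers(dim, size=n_samples),
)

# For a Porter-Thomas-like p, sum_x p(x)^2 is about 2 / 2^n, so the
# mean of p(x) over the samples should be close to (1 + lam) / 2^n.
mean_p = p[samples].mean()
lam_est = (dim * mean_p - 1) / (dim * np.sum(p ** 2) - 1)

print(f"mean p(x)        = {mean_p:.3e}")
print(f"(1 + lam) / 2^n  = {(1 + lam) / dim:.3e}")
print(f"recovered lambda = {lam_est:.3f}")
```

The recovered $\lambda$ matches the one used for sampling, which is exactly why the XEB score is taken as a fidelity estimate under this noise model.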
This claim was also strengthened by Aaronson and Gunn who, from my understanding, prove a version of this very assumption. In a very recent paper by Aaronson and Hung, they restate that this problem is hard for a classical computer (once again, from my understanding).
On the other hand, there have been a substantial number of objections to this claim. IBM states that, using a different algorithm than the one considered by Google, a supercomputer would have been able to obtain similar results. Some papers claim that they actually managed to do so using a 60-GPU cluster, a result later improved to produce uncorrelated bitstrings with a 512-GPU cluster, which seems to contradict the aforementioned hardness claims. Finally, Aharonov et al. designed a classical algorithm that runs in polynomial time and is able to sample from the output distribution of a random circuit applied on a noisy quantum computer (once again, from my understanding). They note however that, due to the large constants in the algorithm's running time, this does not contradict the various supremacy claims that have been made.
How can all these statements be true at the same time? Can the XEB test be easily passed classically in the $50$–$60$ qubit regime, or do these algorithms tackle a different problem?