
My question is strongly related to this one. Google's quantum supremacy claim uses Random Circuit Sampling. The principle is as follows: a realistic noise model for random circuits performed on a noisy quantum computer is the depolarising channel. That is, if $|\psi\rangle\!\langle\psi|$ is the pure state one would have obtained when applying a random circuit $C$ on a perfect quantum computer, the final state when executing this circuit on a noisy device is: $$\rho=\lambda|\psi\rangle\!\langle\psi|+\frac{1-\lambda}{2^n}I_{2^n}$$ Using this state, one can compute the Cross-Entropy Benchmarking (XEB) score, which can be seen as a random variable whose expectation is equal to $\frac{1+\lambda}{2^n}$. It was then believed that sampling from a distribution over the $n$-bit strings that gives an XEB of $\frac{1+b}{2^n}$ for $b>0$ was exponentially hard for a classical computer.
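
To fix ideas, here is a small numerical sketch of this model (pure NumPy; a Haar-random state stands in for the output of a random circuit, and the values of $n$, $\lambda$ and the sample count are arbitrary choices of mine). The empirical mean of $p_{\text{ideal}}(x)$ over bitstrings sampled from $\rho$ indeed matches $\frac{1+\lambda}{2^n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                       # number of qubits, kept small so all 2^n amplitudes fit in memory
d = 2 ** n
lam = 0.4                    # depolarising parameter lambda (arbitrary value for this sketch)
num_samples = 200_000

# Haar-random state as a stand-in for the ideal output of a random circuit C
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
p_ideal = np.abs(psi) ** 2   # ideal (noiseless) output distribution

# Sampling from rho = lam |psi><psi| + (1 - lam)/2^n I:
# with probability lam draw from p_ideal, otherwise draw uniformly at random
from_ideal = rng.random(num_samples) < lam
samples = np.where(from_ideal,
                   rng.choice(d, size=num_samples, p=p_ideal),
                   rng.integers(0, d, size=num_samples))

# XEB-style statistic: the average ideal probability of the observed bitstrings
print(f"empirical mean of p_ideal(x): {p_ideal[samples].mean():.3e}")
print(f"predicted (1 + lam) / 2^n   : {(1 + lam) / d:.3e}")
```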

This claim was further strengthened by Aaronson and Gunn who, from my understanding, argue for this very assumption. In a very recent paper, Aaronson and Hung restate that this problem is hard for a classical computer (once again, from my understanding).

On the other hand, there have been a substantial number of objections to this claim. IBM states that, using an algorithm different from the one considered by Google, a supercomputer would have been able to obtain similar results; some papers claim that they actually managed to do it using a 60-GPU cluster, which was then improved to produce uncorrelated bitstrings with a 512-GPU cluster, which seems to contradict the aforementioned papers' claims. Finally, Aharonov et al. designed a classical algorithm that runs in polynomial time and is able to sample from the distribution of a random circuit applied on a noisy quantum computer (once again, from my understanding). They note, however, that due to the large constants in the algorithm's running time, this does not contradict the various supremacy claims that have been made.

How can all these statements be true at the same time? Can the XEB test be easily passed in the $50$–$60$ qubit regime, or do these algorithms tackle a different problem?




I think that the original rationale for using the linear cross-entropy (XEB) score as a metric to claim quantum computational supremacy was valid, but we may now be at a point where the continued use of linear XEB for random circuit sampling on transmon-qubit architectures to score and claim quantum advantage is not as justified as it was maybe in 2019, for at least two reasons:

  1. It was known from the beginning that classical verification of cross-entropy scores scales exponentially with the number of qubits. This is as true today as it was in 2019. We still have no way to efficiently verify a set of strings generated by a quantum computer (or, for that matter, by the algorithms of Aharonov et al.). But, with about 60 or so qubits, this was hoped and expected to be in the goldilocks zone of not being too hard to verify nor too easy to spoof classically. If we used many more qubits than that (say, more than 100), we could not even classically calculate the linear XEB score, as the back-of-the-envelope sketch after this list illustrates.

  2. Much of the work you mentioned, for example IBM's initial response, required exponential resources not just to verify but also to even generate samples - whereas a quantum computer (even with a dilution refrigerator) would use exponentially fewer resources to generate the samples. But what Aharonov et al. showed was that a classical computer could generate noisy samples from random circuit sampling using resources that grow only polynomially - even though it still takes exponential resources to verify them and calculate the score.
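
To make the end of point 1 concrete, here is a back-of-the-envelope sketch (assuming a naive approach that stores one complex128 statevector, i.e. 16 bytes per amplitude; more sophisticated simulators avoid storing the full vector, but their cost still grows exponentially with the number of qubits) of the memory needed just to hold the ideal amplitudes from which the linear XEB score would be computed:

```python
# Memory needed to hold one complex128 statevector of n qubits, which is what a
# naive calculation of the ideal probabilities (and hence the linear XEB score) requires.
for n in (53, 60, 100):
    num_bytes = (2 ** n) * 16          # 16 bytes per complex128 amplitude
    print(f"n = {n:3d}: {num_bytes / 2**50:.3e} PiB")
```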

There might be a handful of remaining loopholes to consider - for example, if we could keep the depth of our RCS circuits constant, the Aharonov et al. result might not carry through. I also don't know the implications of the recent work for Boson Sampling experiments.

Another frustration is that, without cross-entropy benchmarking, we don't have a good answer to the question of what other ways we have to prove that we've gone beyond classical computational resources in the NISQ era. Shor's algorithm is out, as it requires error correction. Some neat approaches of Kahanamoku-Meyer et al. might eventually be viable, but there's perhaps a long way to go.

I also like the new results of Chen et al. on the NISQ complexity class, suggesting that there likely still is an exponential advantage for some carefully chosen problems even in the presence of noise - but instantiating these problems seems a bit tough for now. For example, the Bernstein-Vazirani problem requires $O(1)$ quantum queries but $O(n)$ classical queries (using perfect qubits); with NISQ devices this becomes $O(\log n)$ queries - still an exponential separation.
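
As a point of reference for that separation, here is a small noiseless sketch (a pure-NumPy statevector simulation with a hidden string $s$ of my own choosing; it does not attempt to reproduce the $O(\log n)$ NISQ bound of Chen et al.) contrasting the $n$-query classical strategy with the single-query quantum Bernstein-Vazirani routine:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                    # number of (perfect) qubits, kept small
s = rng.integers(0, 2, size=n)           # hidden string encoded by the oracle

def oracle(x):
    """Classical oracle f(x) = s.x mod 2; each call counts as one query."""
    return int(np.dot(s, x) % 2)

# Classical strategy: n queries, one unit vector each, recovering one bit of s per query.
e = np.eye(n, dtype=int)
s_classical = np.array([oracle(e[i]) for i in range(n)])

# Quantum Bernstein-Vazirani, simulated as a statevector: a single query to the
# phase oracle (-1)^(s.x), followed by Hadamards on every qubit, yields |s> with certainty.
d = 2 ** n
bits = (np.arange(d)[:, None] >> np.arange(n)[::-1]) & 1   # bitstring of each basis state
state = np.full(d, 1 / np.sqrt(d))                         # H^n applied to |0...0>
state = state * (-1) ** (bits @ s % 2)                     # the one and only oracle query
H_n = (-1) ** (bits @ bits.T % 2) / np.sqrt(d)             # matrix of H tensored n times
state = H_n @ state
s_quantum = bits[np.argmax(np.abs(state) ** 2)]

print("hidden s             :", s)
print("classical (n queries):", s_classical)
print("quantum   (1 query)  :", s_quantum)
```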

  • Thanks for the answer and the reference! There's still a point that I struggle with though: you say that Aharonov et al.'s algorithm is able to generate samples that would pass the XEB test if we were able to compute the XEB in the first place. However, the same is true for quantum computers. Thus, it seems to me that on this point, NISQ devices and classical computers are on par. However, you describe the use of the XEB test as "not as justified as it was maybe in 2019". Why is it "not as justified" instead of ruled out straight away?
    – Tristan Nemoz
    Commented Mar 7, 2023 at 15:06
  • Well, you said it - "due to [the] large constant in the algorithm running time, this does not contradict the various supremacy claims that have been made". If your definition of quantum computational supremacy requires the problem to be solved to be classically intractable in the limit, those constants get washed out in the limit. But some people are defending the position that quantum computational supremacy was only a statement about the here-and-now, and not about the limit as $n$ goes to infinity.
    Commented Mar 7, 2023 at 15:23
  • Oh ok, I thought that I had misunderstood the reason why the authors described their algorithm as irrelevant for quantum supremacy. So the real problem here is about the running time of the algorithm, not the verification of it. Thanks!
    – Tristan Nemoz
    Commented Mar 7, 2023 at 15:25
