$\begingroup$

More than a year ago, a couple of scientists made a splash by presenting a classical algorithm that simulated Sycamore's circuits in less than a week on a small GPU cluster. Moreover, their simulations produced exact results, not estimates.

This was a big runtime cut, from Google's claim of 10,000 years on a supercomputer to ~5 days on a small GPU cluster. The runtime can be cut further if more computational resources are used.

As far as I know, the paper was submitted to a Physical Review journal and uploaded to arXiv more than a year ago. However, I can't find the published version.

Does anyone know the status of the paper? Also, if the proposed algorithm turns out to be correct, does it mean we can dismiss Google's claim of quantum supremacy until they repeat the experiment with more qubits, higher circuit depth, and some sort of solid verification that the produced results are indeed correct and not just noise?

$\endgroup$
  • $\begingroup$ Please see Aaronson's latest blog post for a quick assessment. From a time perspective, maybe not, but from a CO2 or other perspective, very much so. $\endgroup$ Commented Aug 21, 2022 at 0:00
  • $\begingroup$ @MarkS Interesting discussion, thanks a lot. Aaronson mentions an even more recent paper by the same authors, and he also mentions another paper by Gao et al. Neither of them is published. It seems that when it comes to quantum sampling, verifiability is an issue. I wonder, if any of these papers pass scrutiny, could they help verify some of the results, with subsequent extrapolation to higher qubit counts? $\endgroup$
    – MonteNero
    Commented Aug 21, 2022 at 0:40
  • $\begingroup$ I haven’t studied the papers — but what task did they perform? Didn’t they “spoof” Sycamore and generate an output that had a cross-entropy fidelity greater than 0? I don’t think they did anything with Google’s output strings, per se. $\endgroup$ Commented Aug 21, 2022 at 2:11
  • $\begingroup$ I don't want to give the impression that I understand what I'm talking about. To me, the Pan and Zhang paper seems to be about a proper quantum simulator plus additional post-processing tricks to get a proper XEB. The paper by Gao et al. is more about how to get good metrics without doing a quantum simulation. So of the two, the latter is more "spoofy" and kind of hacks the XEB. $\endgroup$
    – MonteNero
    Commented Aug 21, 2022 at 19:16

1 Answer

$\begingroup$

The paper simulating the random circuit sampling task was published in PRL. An even larger-scale simulation verified the 53-qubit, 20-cycle Google results directly. Regarding repeating the experiments, updated benchmarks were announced at APS using up to 70 qubits and 26 cycles with similar fidelity, which AFAIK are well beyond current simulation methods.
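For context on what "verified" means here: these experiments are scored with the linear cross-entropy benchmark (XEB), $F_{\text{XEB}} = 2^n \langle p_{\text{ideal}}(x_i) \rangle - 1$, where the $x_i$ are the sampled bitstrings and $p_{\text{ideal}}$ is the ideal circuit's output distribution. Below is a toy sketch of that estimator only — it is not the method of any of the papers discussed, and the "ideal" distribution here is a synthetic Porter-Thomas-like stand-in rather than a real circuit:

```python
import numpy as np

def linear_xeb(samples, p_ideal, n_qubits):
    """Linear XEB estimate: 2^n * mean ideal probability of samples, minus 1."""
    d = 2 ** n_qubits
    return d * np.mean(p_ideal[samples]) - 1.0

rng = np.random.default_rng(0)
n = 4
d = 2 ** n

# Synthetic stand-in for a random circuit's output distribution
# (exponentially distributed weights, resembling Porter-Thomas statistics).
p_ideal = rng.exponential(size=d)
p_ideal /= p_ideal.sum()

# Samples drawn from the ideal distribution score well above 0...
good = rng.choice(d, size=200_000, p=p_ideal)
# ...while uniform samples (pure noise) score near 0.
noise = rng.integers(0, d, size=200_000)

print(linear_xeb(good, p_ideal, n))
print(linear_xeb(noise, p_ideal, n))
```

The point of the benchmark is exactly the gap these two cases illustrate: a sampler correlated with the ideal distribution pushes $F_{\text{XEB}}$ above 0, while an uncorrelated one stays at 0 — which is also why "spoofing" strategies target the score rather than the sampling task itself.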

$\endgroup$
