
I read in the Sparse Blossom paper: "A surface code superconducting quantum computer with a million physical qubits will generate measurement data at a rate of around 1 terabit per second".

I was wondering: what measurement workflow leads to so much data being generated?


1 Answer


In a surface code, half the qubits are measurement qubits used to measure the stabilizers. On a superconducting chip you can run one surface code cycle, measuring all the stabilizers, in less than a microsecond. Note that you want to measure as fast as possible, to catch errors before they accumulate into logical errors.

One million qubits, times one million cycles per second, times a measurement qubit density of one half, gives half a trillion measurements per second. At one bit per measurement, that is 0.5 terabits per second.
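For concreteness, here is that arithmetic as a minimal sketch (the qubit count, cycle time, measurement qubit fraction, and one bit per measurement are simply the assumptions stated above):

```python
# Rough data-rate estimate for a surface-code superconducting device.
# Assumptions (from the answer above): one million physical qubits,
# one surface code cycle per microsecond, half the qubits being
# measurement qubits, and one bit produced per stabilizer measurement.

n_qubits = 1_000_000        # physical qubits
cycle_time_s = 1e-6         # seconds per surface code cycle
measure_fraction = 0.5      # fraction of qubits that are measurement qubits

measurements_per_second = n_qubits * measure_fraction / cycle_time_s
bits_per_second = measurements_per_second  # one bit per measurement

print(f"{measurements_per_second:.1e} measurements/s")  # 5.0e+11
print(f"{bits_per_second / 1e12:.2f} Tbit/s")           # 0.50 Tbit/s
```

This is the same order of magnitude as the "around 1 terabit per second" figure quoted from the Sparse Blossom paper in the question.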

  • To put 500 Gb/s into perspective, a common interface for GPUs in use for the last several years, PCIe 4.0 x16, can transfer about 256 Gb/s. There are two other facts that are quite nice as well: 1) You don't have to do all your decoding in one place. 2) The syndrome is pretty compressible: for a surface code under circuit noise of $10^{-3}$, the syndrome flip rate is about 1%, corresponding to about 1/10th of a bit of entropy per syndrome bit. So a good compression algorithm can get roughly a 10x compression ratio (see the sketch after these comments)!
  • (This is also an indication that the surface code is not a very good code: error correction, in a sense, is compression of the distribution of errors, so a syndrome with low entropy per bit is indicative that your code is not performing very well. Of course, this totally neglects other considerations like layout and logical operations.)
  • Another way to tell that the surface code is verbose with its data, which I know @chrispattison is also familiar with, is to notice that it gives a huge amount of soft information. You can pretty accurately estimate how likely it is that a surface code has failed. Activating that information, by concatenating even an error detecting code on top of the surface code, substantially improves its efficiency at correcting errors.
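To illustrate the entropy argument from the first comment above (the ~1% syndrome flip rate is the figure quoted there; the rest is just the standard binary entropy formula), here is a minimal sketch:

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy, in bits, of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

flip_rate = 0.01  # ~1% syndrome flip rate quoted in the comment
h = binary_entropy(flip_rate)
print(f"entropy per syndrome bit: {h:.3f} bits")    # ~0.081 bits
print(f"best possible compression: {1 / h:.1f}x")   # ~12.4x
```

The entropy bound gives an ideal compression ratio of about 12x, so "about 10x" for a practical compressor is consistent with that limit.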
