
I am new to quantum error correction, and I am curious about the motivation for simulating multi-round syndrome extraction circuits of a quantum error-correcting code.

The purpose of single-round simulation of a quantum code is clear to me. My understanding is that, similarly to classical code simulation, given a codeword of qubits, some errors may flip the codeword. We obtain the syndrome by multiplying the parity check matrix with the corrupted codeword, and then a decoder predicts the error from the syndrome. At the circuit level, the error probability of each qubit differs depending on the exact design of the encoding and measurement circuits.
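For concreteness, here is a minimal sketch of that single-round picture for the classical 3-bit repetition code; the matrix H, the codeword, and the lookup decoder are illustrative choices, not taken from the question:

```python
# A minimal sketch of the single-round picture for the classical [3, 1]
# repetition code; H, the codeword, and the lookup decoder are illustrative.
import numpy as np

# Parity check matrix of the 3-bit repetition code (H @ codeword = 0 mod 2).
H = np.array([[1, 1, 0],
              [0, 1, 1]])

codeword = np.array([1, 1, 1])      # the encoded all-ones codeword
error = np.array([0, 1, 0])         # a single bit-flip on the middle bit
received = (codeword + error) % 2

# Since H annihilates codewords, the syndrome of the received word only
# depends on the error: H @ received = H @ error (mod 2).
syndrome = (H @ received) % 2       # -> [1, 1]

# A toy decoder: look up the most likely error for each syndrome.
lookup = {
    (0, 0): np.array([0, 0, 0]),
    (1, 0): np.array([1, 0, 0]),
    (1, 1): np.array([0, 1, 0]),
    (0, 1): np.array([0, 0, 1]),
}
correction = lookup[tuple(syndrome)]
assert np.array_equal((received + correction) % 2, codeword)
```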

In multi-round measurement simulation, the "syndrome" is the parity change of each measurement outcome between consecutive rounds (called a "detector event" in the stabilizer circuit simulator Stim, or an "error-sensitive event" in some papers). The "codeword" is the set of error mechanisms occurring in the circuit (the "detector error model" in Stim), and the "parity check matrix" (the decoding graph) records which error mechanisms flip which detectors. Sometimes the simulation also tracks a logical observable, i.e. whether the accumulated errors cause a logical error of the code. For more information, refer to the papers 1, 2, 3.
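As a concrete illustration of this workflow, the sketch below uses Stim's built-in surface-code circuit generator together with PyMatching as the decoder; the code task, distance, number of rounds, noise strengths, and shot count are illustrative assumptions:

```python
# A sketch of the multi-round workflow: build a noisy circuit, extract its
# detector error model, sample detection events, and decode them.
import numpy as np
import pymatching
import stim

circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=3,
    rounds=3,
    after_clifford_depolarization=0.001,
    before_measure_flip_probability=0.001,
    after_reset_flip_probability=0.001,
)

# The circuit-level "parity check matrix": which error mechanisms flip
# which detectors and which logical observables.
dem = circuit.detector_error_model(decompose_errors=True)
matcher = pymatching.Matching.from_detector_error_model(dem)

# Sample detection events (the multi-round "syndrome") together with the
# true logical observable flips for each shot.
shots = 10_000
sampler = circuit.compile_detector_sampler()
detection_events, observable_flips = sampler.sample(
    shots, separate_observables=True)

# Decode every shot and count how often the prediction misses.
predictions = matcher.decode_batch(detection_events)
num_mistakes = np.sum(np.any(predictions != observable_flips, axis=1))
print("logical error rate:", num_mistakes / shots)
```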

My question is: while single-round simulation already demonstrates the ability of a code or a decoder, what is the purpose of multi-round measurement simulation? Does it reflect something about operating the code on a real quantum computer?


2 Answers


A single round of noise ("code capacity") is a good model for a perfect sender transmitting to a perfect receiver over a noisy quantum channel. It's a bad model for a quantum computer built out of imperfect parts trying to preserve its quantum information despite those imperfections ("circuit noise"). Circuit noise is the more important problem, because it's what determines whether you can even build a good quantum computer in the first place.

Note that circuit noise is also distinct from doing multiple rounds of correction while assuming that measuring a stabilizer is an atomic operation ("phenomenological noise", or "phenom noise"). Phenom noise is a much better approximation of circuit noise than code capacity noise, but it still falls short in several respects. For example, it insufficiently penalizes large stabilizers.
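If it helps to see the distinction concretely, here is a rough sketch of how the three noise models might be set up with Stim's generated surface-code circuits; the code task, distance, and error rates are illustrative assumptions:

```python
# A rough sketch of the three noise models, using Stim's generated circuits.
import stim

p, d = 0.01, 5

# Code capacity: one round, noise only on data qubits, perfect measurement.
code_capacity = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=d, rounds=1,
    before_round_data_depolarization=p,
)

# Phenomenological: data noise each round plus noisy measurement outcomes,
# but each stabilizer measurement is still treated as an atomic operation.
phenom = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=d, rounds=d,
    before_round_data_depolarization=p,
    before_measure_flip_probability=p,
)

# Circuit-level: every gate, reset, and measurement inside the extraction
# circuit is noisy, so larger stabilizers pay for their deeper circuits.
circuit_level = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=d, rounds=d,
    after_clifford_depolarization=p,
    before_measure_flip_probability=p,
    after_reset_flip_probability=p,
)
```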


This paper is a good read.

The key point is that measurement is itself a noisy process, so you cannot fully trust the extracted syndrome in any given round. If the outcome of a stabilizer measurement is sometimes read incorrectly, then an error chain of any length can go undetected when measurement errors (also called time-like errors, ghost defects, ...) happen at its extremities.

By doing multi-round syndrome extraction (typically a number of rounds proportional to the code's distance), you make it so that measurement errors at the same location would have to chain in time (i.e. happen in every round) for the same data-qubit error chain to remain undetected, which happens far less frequently.
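To illustrate this "chain in time" picture, the sketch below inspects the detector error model of a small phenomenological-noise circuit and classifies each two-detector error mechanism as space-like (two detectors in the same round) or time-like (the same detector in consecutive rounds); the code task, distance, rounds, and error rates are illustrative assumptions:

```python
# A sketch of the "chain in time" picture: a data error flips two different
# detectors within a round, while a measurement error flips the *same*
# detector in two consecutive rounds.
import stim

circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=3,
    rounds=3,
    before_round_data_depolarization=0.001,
    before_measure_flip_probability=0.001,
)

# Detector index -> (x, y, t) coordinates assigned by the generator.
coords = circuit.get_detector_coordinates()

time_like = space_like = 0
for instruction in circuit.detector_error_model().flattened():
    if instruction.type != "error":
        continue
    dets = [t.val for t in instruction.targets_copy()
            if t.is_relative_detector_id()]
    if len(dets) != 2:
        continue  # skip boundary and hyper-edge mechanisms
    (x0, y0, t0), (x1, y1, t1) = coords[dets[0]], coords[dets[1]]
    if (x0, y0) == (x1, y1):
        time_like += 1   # same stabilizer in consecutive rounds: a measurement error
    else:
        space_like += 1  # two detectors in the same round: looks like a data error

print(f"{space_like} space-like and {time_like} time-like error mechanisms")
```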

Multi-round simulation shows the ability of the {code + decoder} combination to still behave as expected despite measurement errors occurring. As these errors will almost certainly occur in practice, such simulations are more realistic than their single-round counterparts.

