
My understanding is that any QEC code can detect and correct a certain number of physical errors on its qubits, and this number is set by the code distance $d$. If too many errors occur in a given round of error correction, we can no longer guarantee that we have decoded to the right state, i.e. we may have a logical error.

To get a notion of time, I will look at the number of gates in my circuit. Doesn't this mean that if we wait sufficiently long (i.e. have a sufficiently deep circuit), we will have at least one round where we have more than $d$ errors with arbitrarily high probability?

Does this mean that all QEC codes must eventually fail?


2 Answers


if we wait sufficiently long (i.e. have a sufficiently deep circuit), we will have at least one round where we have more than d errors with arbitrarily high probability?

Sort of! If you're using a surface code, then to survive $r$ rounds you'll need to pick a distance $d = \Theta(\log r)$. If you hold $d$ constant and keep increasing $r$, you'll eventually break things. The same goes for any other fault-tolerant construction that's divided into repeating rounds.
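To get a feel for that $d = \Theta(\log r)$ scaling, here is a rough back-of-envelope sketch. It assumes the common below-threshold heuristic that the logical error rate per round behaves like $p_L \approx A\,(p/p_{th})^{(d+1)/2}$; the constants `A` and `p_th` below are illustrative placeholders, not measured values.

```python
def min_distance(rounds, p=1e-3, p_th=1e-2, target=0.01, A=0.1):
    """Smallest odd surface-code distance d such that the total
    logical-error budget over `rounds` rounds stays below `target`,
    using the heuristic p_L ~ A * (p/p_th)**((d+1)/2) per round.
    A and p_th are illustrative, not measured values."""
    d = 3
    while rounds * A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

# Multiplying the number of rounds by 1000 only adds O(1) to the
# required distance -- the growth is logarithmic in r:
for r in (10**3, 10**6, 10**9):
    print(r, min_distance(r))
```

The point of the sketch is the last loop: each thousand-fold increase in circuit depth costs only a constant additive increase in distance, which is why a modest $d$ survives very deep circuits.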

But there are more complicated fault-tolerant constructions, which use larger and larger code blocks to maintain the coding rate, that don't have this issue. The details of what you are doing will still vary as the problem grows in scope, but the space overhead limits to a constant. See "Fault-Tolerant Quantum Computation with Constant Overhead" and "Constant overhead quantum fault-tolerance with quantum expander codes".

  • So the idea is that as long as the error correction circuit itself is not exponential in $d$, we are okay to use them?
    – Josph
    Commented May 29 at 10:14
  • @Josph Yes, as long as the overhead increases slowly, it's not a big deal. Not ideal, but doable.
    Commented May 29 at 10:28

This is a question of which variable you fix and which one you vary.

Fixed code distance

If you fix the code distance $d$, then for a given physical error probability, error correction code, and correction scheme: yes, at some point a sufficiently deep circuit will cause your scheme to fail.
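As a sketch of that failure mode: at fixed distance, each round carries some constant logical error rate, so the probability of surviving the whole circuit decays exponentially with depth. The per-round rate below is an illustrative placeholder, not a measured value.

```python
def survival_probability(depth, p_logical=1e-5):
    """Probability that no logical error occurs over `depth` rounds,
    at a fixed per-round logical error rate (illustrative value)."""
    return (1 - p_logical) ** depth

# Exponential decay at fixed distance: roughly 0.99 after 1e3 rounds,
# 0.90 after 1e4 rounds, and only ~0.37 after 1e5 rounds.
for depth in (10**3, 10**4, 10**5):
    print(depth, survival_probability(depth))
```

No matter how small the fixed per-round rate is, this product goes to zero as the depth grows, which is the precise sense in which a fixed-distance scheme "must eventually fail".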

Fixed logical circuit

But a different perspective would be:

Given a fixed logical circuit of a certain depth and a given physical error probability (which must be below threshold), you can (in principle) choose an error correction code with a code distance that pushes the logical error probability low enough for your circuit to do what it should.

Code distance scaling

For example, the $[[n^2, 1, n]]$ surface code can be scaled to whatever code distance you need (though at the cost of quadratically many data qubits). Alternatively, you could concatenate codes to increase the code distance.
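For concreteness, here is a sketch of that quadratic cost, assuming the common rotated surface-code layout with $d^2$ data qubits plus $d^2 - 1$ measurement ancillas.

```python
def surface_code_qubits(d):
    """Qubit counts for a distance-d rotated surface code:
    d*d data qubits plus d*d - 1 measurement ancillas."""
    data = d * d
    total = data + (data - 1)
    return data, total

# d=3 -> (9, 17), d=5 -> (25, 49), d=7 -> (49, 97): each step up in
# distance roughly doubles the qubit bill at small d.
for d in (3, 5, 7):
    data, total = surface_code_qubits(d)
    print(f"d={d}: {data} data qubits, {total} total")
```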

By the way: code distance $d$ means that $d - 1$ errors can be detected and $\left\lfloor \frac{d-1}{2} \right\rfloor$ errors can be corrected.
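That detect/correct relationship is a one-liner each; a minimal sketch:

```python
def detectable_errors(d):
    """A distance-d code can detect up to d - 1 errors."""
    return d - 1

def correctable_errors(d):
    """...and correct up to floor((d - 1) / 2) of them."""
    return (d - 1) // 2

# A distance-3 code detects 2 errors but corrects only 1;
# distance 5 detects 4 and corrects 2.
```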

