
I am trying to implement single-shot error correction for the surface code with data + measurement errors (both with probability p), using the (built-in) BP+OSD decoder. I am mostly following these papers:

  1. https://journals.aps.org/prxquantum/pdf/10.1103/PRXQuantum.4.020332
  2. https://arxiv.org/pdf/2309.11719.pdf

If I understand correctly, this is the algorithm: to get each point in the threshold figure, one should average over N iterations. Each such iteration has an inner loop of n rounds, as follows:

One starts with Hx (for example) of the surface code, but for each syndrome bit another 'fake' data qubit is added to account for measurement errors, such that: H = (Hx, I).
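A minimal sketch of this construction in numpy, using a toy check matrix as a stand-in for the real surface-code Hx (the toy matrix below is an assumption for illustration, not an actual surface-code matrix):

```python
import numpy as np

# Toy stand-in for Hx: 3 checks on 4 data qubits (NOT a real surface-code matrix).
Hx = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=np.uint8)

n_checks, n_data = Hx.shape

# Append one 'fake' data qubit per check: flipping fake qubit i flips
# syndrome bit i, which is exactly what a measurement error on check i does.
H = np.hstack([Hx, np.eye(n_checks, dtype=np.uint8)])

print(H.shape)  # (3, 7)
```

The identity block means the decoder treats a measurement flip as just another correctable "qubit" error, which is the whole point of the construction.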

Now, for rounds 1 to n-1:

  1. Error = draw an error on all data qubits (real and fake) with probability p.
  2. If not the first round: Error_eff = (Error + residual error) % 2.
  3. Calculate the syndrome and get a correction vector.
  4. residual error = (Error_eff + correction vector) % 2.

Then feed this residual error into the next round, and so on.

For round n: the same thing, but now the 'fake' data qubits do not get an error (i.e. the final round of measurements is assumed perfect).
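The inner loop above can be sketched as follows. This is an assumption-laden illustration: `decode` is a hypothetical stand-in for the BP+OSD decoder (it maps a syndrome to a correction on all real + fake qubits), and the update rule follows the steps as described in the question:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_inner_loop(H, n_data, decode, n_rounds, p):
    """Sketch of the n-round inner loop.

    H       : check matrix with fake qubits appended, shape (n_checks, n_total)
    n_data  : number of real data qubits (first n_data columns of H)
    decode  : hypothetical decoder stub, syndrome -> correction on all qubits
    """
    n_total = H.shape[1]
    residual = np.zeros(n_total, dtype=np.uint8)
    for t in range(n_rounds):
        # Draw iid errors on all qubits (real and fake) with probability p.
        error = (rng.random(n_total) < p).astype(np.uint8)
        if t == n_rounds - 1:
            error[n_data:] = 0  # final round: perfect measurements
        # In the first round residual is zero, so this is just the fresh error.
        effective = (error + residual) % 2
        syndrome = (H @ effective) % 2
        correction = decode(syndrome)
        residual = (effective + correction) % 2
    return residual
```

Whether the residual on the fake qubits should be carried between rounds is exactly the subtlety the author clarifies in the edit below the question.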

Now, I'm puzzled: the length of the effective error + correction is the number of all data qubits (real and fake), whereas the length of the logical operator that needs to be checked is only the number of real data qubits. So, how do you check for a logical error? Moreover, in the second paper mentioned above, they set N = 10^4 and n = 100. It thus seems that they average only over N iterations, but nonetheless, in their figure 3 the error rate goes down to 10^-6, meaning they must also include the inner loop in the error-rate calculation.

So, what is the correct algorithm to account for measurement errors in a single-shot decoder like this?

******* EDIT *******

So I talked with the author and I have an answer: first, after obtaining the residual error, the next round takes Error(t) + residual_error(t-1), without the residual on the measurement 'fake' qubits. Second, the error rate is defined per logical qubit, so it is actually the average logical error divided by the number of logical qubits.
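The two clarifications can be sketched as small helpers (names and signatures here are my own illustration, not from the papers):

```python
import numpy as np

def next_round_input(error_t, residual_prev, n_data):
    """Per the author's clarification: carry the residual into the next
    round only on the real data qubits; the fake-qubit part is dropped."""
    residual_prev = residual_prev.copy()
    residual_prev[n_data:] = 0
    return (error_t + residual_prev) % 2

def per_qubit_logical_rate(logical_failures, n_iterations, n_logical):
    """Error rate defined per logical qubit: average failures per
    iteration, divided by the number of logical qubits."""
    return logical_failures / (n_iterations * n_logical)
```

With N = 10^4 iterations and the rate normalized per logical qubit, rates below 1/N become representable, consistent with the 10^-6 values in figure 3.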

  • Didn't read these two papers, but I don't believe single-shot decoding is possible for the 2D surface code... you need to go to higher dimensions (3D toric code...) to get checks on the syndromes themselves.
    – unknown
    Commented Oct 25, 2023 at 17:21

1 Answer


Now, I'm puzzled: the length of the effective error + correction is the number of all data qubits (real and fake), whereas the length of the logical operator that needs to be checked is only the number of real data qubits. So, how do you check for a logical error?

You can check whether the real-qubit part of the residual error (effective error + correction) commutes with the logical operator. Pseudocode:

if (residual_error[:n_real] @ logical) % 2 == 1:
    logical_error_count += 1
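As a runnable version of that check, assuming the residual error and the logical operator are binary numpy vectors (the arrays below are illustrative, not from the papers):

```python
import numpy as np

def has_logical_error(residual_error, logical, n_real):
    """True if the real-qubit part of the residual error anticommutes
    with the logical operator, i.e. their binary overlap is odd."""
    return int(residual_error[:n_real] @ logical) % 2 == 1

# Illustrative vectors: 4 real qubits + 2 fake qubits.
logical = np.array([1, 1, 1, 1], dtype=np.uint8)
residual_even = np.array([1, 0, 1, 0, 1, 1], dtype=np.uint8)  # overlap 2 -> no error
residual_odd = np.array([1, 0, 0, 0, 1, 1], dtype=np.uint8)   # overlap 1 -> error
```

The fake-qubit entries are simply sliced off before the dot product, since measurement "errors" cannot contribute to a logical fault on the data qubits.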

In their figure 3 the error rate goes down to 10^-6, meaning they also include the inner loop in the calculation of the error rate.

I think they are including the inner loop, but you should check with the authors to be sure.

  • OK, so I talked with the author and I have an answer: first, after the residual error, the next round takes Error(t) + residual_error(t-1), without the residual on the measurement 'fake' qubits. Second, the error rate is defined per logical qubit, so it is actually the average logical error divided by the number of logical qubits. Commented Nov 9, 2023 at 7:48
