
The Knill-Laflamme QEC conditions are stated this way:

We consider a code space $C$ and its associated projector $P_C$.

We consider a noise acting on our system: $E(\rho)=\sum_a E_a \rho E_a^{\dagger}$

A necessary & sufficient condition for the existence of an error correction operation correcting $E$ is the Knill-Laflamme QEC condition:

$$P_C E_i^{\dagger} E_j P_C=\alpha_{ij} P_C $$

where $\alpha_{ij}$ are the coefficients of a Hermitian matrix.

To me this mathematical condition is a little "abstract". I would like to get some feeling for what it means geometrically on the Hilbert space.

In Nielsen & Chuang it seems to mean that the different errors $E_i$ are associated with orthogonal subspaces in a way that preserves orthogonality (two orthogonal vectors of the code space will still be orthogonal after an error). But I don't understand why this is equivalent to the Knill-Laflamme QEC condition.


3 Answers


We can try to geometrically interpret the Knill–Laflamme conditions on a code-space $\mathcal C$, as follows.$\def\ket#1{\lvert#1\rangle}\def\bra#1{\langle#1\rvert}$

Images of the code-space under an error operation

First, consider any individual error operator $E_j$: this is a Kraus operator of some transformation of the system, and so is not necessarily even unitary.

For any input pure state $\ket\psi \in \mathcal{C}$ and for an arbitrary operator $E_j$, the vector $E_j \ket{\psi}$ will represent some distorted image of the input state. You might wonder whether different states $\ket\psi \in \mathcal C$ could be affected in different ways: specifically, whether some vectors $E_j \ket\psi$ will have different norms for various states $\ket\psi$. The Knill–Laflamme condition indicates that this isn't possible: as $P_{\mathcal C} E_{\!\!\:j}^\dagger E_{\!\!\:j}^{\phantom\dagger} \!P_{\mathcal C} = \alpha_{\!\!\:j,j} P_{\mathcal C}$ for some scalar $\alpha_{\!\!\:j,j}$ (which we can show is a non-negative real), it follows that $\lVert E_j \ket\psi \rVert^2 = \alpha_{\!\!\:j,j}$ for all $\ket\psi \in \mathcal C$.
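As a concrete sanity check of this uniform-rescaling property, here is a NumPy sketch of my own (not part of the original argument; the code and error weight are illustrative): for the 3-qubit bit-flip code, $\lVert E_j \ket\psi \rVert^2$ takes the same value $\alpha_{j,j}$ for every code state.

```python
import numpy as np

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

zero = np.zeros(8); zero[0] = 1.0    # |000>
one  = np.zeros(8); one[7] = 1.0     # |111>

p = 0.1
E1 = np.sqrt(p) * kron_all([X, I, I])   # Kraus operator sqrt(p) * X1 (illustrative)

# Sample random code states a|000> + b|111> and record ||E1 |psi>||^2.
rng = np.random.default_rng(0)
norms = []
for _ in range(5):
    a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = a * zero + b * one
    psi /= np.linalg.norm(psi)
    norms.append(np.linalg.norm(E1 @ psi) ** 2)

# Every code state is rescaled by the same alpha_11 = p.
print(np.allclose(norms, p))   # True
```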

So: for any set of errors $\{E_j\}_j$ satisfying the Knill–Laflamme conditions, each error $E_j$ produces some faithful image of the original code-space $\mathcal C$, i.e., one proportional to a unitary transformation of $\mathcal C$, even if $E_j$ is not itself proportional to a unitary.

Images of the code-space under different error operations

Next, consider how these images of the code-space compare for two different error operators $E_j$ and $E_k$, and under what conditions we can hope to recover a state which has been affected by one of the errors.

  • In some cases — as with distinct single-qubit Pauli operators on a code with distance 3 or greater — the subspaces $E_{\!\!\:j}\!\: \mathcal C$ and $E_k \mathcal C$ will be orthogonal to one another, and therefore perfectly distinguishable. In particular, we would then have $E_j\ket\psi \perp E_k\ket\varphi$ for any $\ket\psi,\ket\varphi \in \mathcal C$, so that $P_{\mathcal C} E_{\!\!\:j}^\dagger E_{\!\!\:k}^{\phantom\dagger}\!\!\: P_{\mathcal C} = 0$. If all of the error operators $\{E_j\}_{j}$ are 'orthogonal' in this sense, there would then be no problem in correcting the errors: we could simply measure which of the subspaces $E_r \!\: \mathcal C$ we were in, and then perform a unitary transformation which maps $E_r \!\:\mathcal C \mapsto \mathcal C$. This corresponds to the case where $\alpha_{j,k} = 0$ for any $j \ne k$.

  • However, it is also possible (for instance, in a degenerate code as Patrick Fuentes indicates in his answer) to have errors $E_j$ and $E_k$ in which $E_{\!\!\:j}\!\: \mathcal C$ and $E_k \mathcal C$ are not orthogonal: the unit vectors in those subspaces may have non-zero overlap, or the two subspaces may even be the same. In this case, the only hope for us to be able to correct the error is that — to the extent that the effect of $E_j$ and $E_k$ on the code-space cannot be distinguished — the two images in a sense agree.

What I mean by this is: suppose that you had a state $\ket\psi \in \mathcal C$ which (unbeknownst to you) was affected by the error $E_j$, yielding the state $\ket{\psi'} = E_j \ket{\psi}$. Suppose that you then tried to determine whether it was affected instead by the error $E_k$, by measuring whether the state was in the space $E_k \mathcal C$. In general, this would disturb the state $\ket{\psi'}$. However, if $\ket{\psi'}$ collapsed to the state $\ket{\psi''} = E_k \ket{\psi}$, there would be no problem: you could simply correct the state as though $E_k$ had happened, and thereby recover $\ket{\psi}$.

This is what is guaranteed by the Knill–Laflamme conditions: that for any state $\ket{\psi} \in \mathcal C$, the projection of $E_{\!\!\:j}\!\: \ket{\psi}$ onto $E_k \mathcal C$ will be proportional to $E_k \ket{\psi}$. Furthermore, that the constant of proportionality $\alpha_{j,k}$ involved is the same for all $\ket{\psi} \in \mathcal C$.

Thus, the Knill–Laflamme conditions imply that, to the extent that we cannot perfectly distinguish the effects of $E_j$ and $E_k$ on the entire subspace $\mathcal C$, the effects of $E_j$ and $E_k$ on individual states in $\mathcal C$ are also indistinguishable, so that we cannot confuse one error $E_j$ for another error $E_k$ plus some transformation on the original encoded state.
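This "agreeing images" picture can be checked numerically. Below is a sketch of my own (assumptions: the 3-qubit bit-flip code, with the degenerate pair $E_j = X_1 Z_2 Z_3$ and $E_k = X_1$, which is not an example from this answer): projecting $E_j\ket\psi$ onto $E_k\mathcal C$ yields $\alpha_{j,k}\, E_k\ket\psi$ with a state-independent constant.

```python
import numpy as np

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

zero = np.zeros(8); zero[0] = 1.0    # |000>
one  = np.zeros(8); one[7] = 1.0     # |111>

Ej = kron_all([X, Z, Z])   # X1 Z2 Z3
Ek = kron_all([X, I, I])   # X1

# Projector onto E_k C = span{E_k|000>, E_k|111>}.
v0, v1 = Ek @ zero, Ek @ one
Pk = np.outer(v0, v0) + np.outer(v1, v1)

rng = np.random.default_rng(1)
alphas = []
for _ in range(5):
    a, b = rng.normal(size=2)
    psi = a * zero + b * one
    psi /= np.linalg.norm(psi)
    proj = Pk @ (Ej @ psi)            # projection of the corrupted state onto E_k C
    target = Ek @ psi
    alphas.append(target @ proj / (target @ target))

# The proportionality constant alpha_{jk} is the same for every code state.
print(np.allclose(alphas, alphas[0]))   # True (here alpha_{jk} = 1)
```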

In summary

The Knill–Laflamme conditions on a set of error operators $\{E_j\}_j$ on a code-space $\mathcal C$ indicate that any one error $E_j$ affects the entire space in a uniform way (rescaling it uniformly and otherwise transforming it unitarily); and that, to the extent that the effects of two errors $E_j$ and $E_k$ cannot be operationally distinguished from one another, their effects on the code-space are the same as one another, so that you can measure which error occurred and then correct the state as though that error actually is the one that did occur.

  • Thank you for your answer. I have a little question: what exactly do you mean by "|ψ⟩∈C could be affected differently"? Could you define a transformation that would act "the same" on all states? I guess you will tell me "proportional to a unitary", but then why do you say it acts the same on all states, in contrast to another transformation? I am a little confused. For the second part: I understood your first point (very nice!); for the second, I roughly see what you mean, but I need to work more on it atm. Commented Aug 29, 2019 at 13:18
  • Fair question: the wording is a little vague there. Basically, I have in mind that the effect of the operator $E_j$ will be the same on the norms of those states: for instance, there won't be a vector $\lvert \psi \rangle \in \mathcal C$ for which $E_j \lvert \psi \rangle = \mathbf 0$ unless all states $\lvert \varphi \rangle \in \mathcal C$ have $E_j \lvert \varphi \rangle = \mathbf 0$. I will try to make my wording clearer there. Commented Aug 29, 2019 at 13:29
  • @Niel de Beaudrap I started writing the same intuitive explanation yesterday and wanted to finish it today, but you beat me to it. You did a very good job (probably better than I would have). What surprised me is that I came up with it on my own at some point, and I have never seen it from any other source. Guess this explanation is not new after all.
    – oleg
    Commented Aug 29, 2019 at 16:06
  • @oleg: When I first saw this picture, I came up with it on my own as well --- though I doubt that I was the first to do so. Some things lie just below the surface, and are available to anyone who can find the way to peel off the top layer. But while the idea doesn't leap straight out at me while skimming Knill and Laflamme's original article [ arxiv.org/abs/quant-ph/9604034 ], some of the commentary (e.g. the top of page 16) suggests that this view may have been available from the very beginning for someone who read it carefully. Commented Aug 29, 2019 at 16:18
  • @Niel de Beaudrap Huh, I actually have never read the original paper. Glancing at it now I see that it is rife with explanations. I really shouldn't be surprised. Thanks again!
    – oleg
    Commented Aug 29, 2019 at 20:11

Let me begin by reformulating the Knill-Laflamme Theorem; looking at it from a different angle might help make the matter a little less abstract.

The Knill-Laflamme Theorem for the conditions of quantum error correction can be stated as follows:

Let $C$ be a quantum error correcting code defined as a subspace of the $2^n$-dimensional Hilbert space $\mathcal{H}_2^{\otimes n}$, and let $\mathcal{E} \subset \mathbb{C}^{2^n\times2^n}$ be a set of errors. Then $C$ can correct $\mathcal{E}$ if and only if $$\langle\psi_i|E_a^\dagger E_b|\psi_j\rangle = c_{ab}\delta_{ij} ,$$ where $\{|\psi_i\rangle\}$ is a basis of the subspace that defines the code $C \subset \mathcal{H}_2^{\otimes n}$ and $E_a, E_b \in \mathcal{E}$.

By itself, the above statement provides information about the degeneracy of the code. By computing all the possible values of the parameter $c_{ab}$ (this can be done by selecting any basis vector $|\psi_i\rangle$ of $C$ and setting $i=j$), placing them in a matrix $C_{ab}$, and then computing its rank, we can determine whether $C$ is degenerate or non-degenerate. If $\mathrm{rank}(C_{ab})$ is less than maximal, the quantum code is degenerate, which means that different errors from the error set $\mathcal{E}$ map to the same syndrome and have the same effect on the codewords, and are thus reversible via the same recovery operation. If $C_{ab}$ is full rank, each error $E_a$ in the error set $\mathcal{E}$ maps to a unique error syndrome.
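As an illustration (a numerical sketch of my own, not part of this answer; the code and error sets are my choices), the rank test can be carried out directly: build $C_{ab}$ from a single code word of the 3-qubit bit-flip code, once for the non-degenerate set $\{I, X_1, X_2, X_3\}$ and once for the degenerate pair $\{X_1, X_1 Z_2 Z_3\}$.

```python
import numpy as np

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

zero = np.zeros(8); zero[0] = 1.0    # |000>, a code word of the bit-flip code

def c_matrix(errors, psi):
    """c_ab = <psi| E_a^dagger E_b |psi> for one fixed code word."""
    n = len(errors)
    C = np.zeros((n, n))
    for a, Ea in enumerate(errors):
        for b, Eb in enumerate(errors):
            C[a, b] = psi @ (Ea.T @ (Eb @ psi))
    return C

nondeg = [kron_all(o) for o in ([I, I, I], [X, I, I], [I, X, I], [I, I, X])]
deg = [kron_all([X, I, I]), kron_all([X, Z, Z])]

print(np.linalg.matrix_rank(c_matrix(nondeg, zero)))   # 4: full rank, non-degenerate
print(np.linalg.matrix_rank(c_matrix(deg, zero)))      # 1: rank-deficient, degenerate
```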

Placing the matter of degeneracy aside, the following corollary of the Theorem might help in understanding the meaning of the Theorem itself.

For any code $C$ defined as a subspace of $\mathcal{H}_2^{\otimes n}$, if $C$ is capable of correcting all the errors in a set $\mathcal{E} \subset \mathbb{C}^{2^n\times2^n}$, then $C$ will be able to correct all the errors within the linear span of $\mathcal{E}$.

Essentially what this corollary tells us is that if a quantum error correcting code can correct all the errors in the set $\mathcal{E}$, it will also be able to correct all possible linear combinations of the elements of that set. If we now consider that the Pauli matrices form a basis for the space $\mathbb{C}^{2\times2}$, then the $n$-fold Pauli group forms a basis for $(\mathbb{C}^{2\times2})^{\otimes n} = \mathbb{C}^{2^n\times2^n}$. This means that every matrix in $\mathbb{C}^{2^n\times2^n}$ can be expressed as a linear combination of elements of the $n$-fold Pauli group, which by the above corollary gives us the groundbreaking conclusion that to design good quantum error correcting codes we need only consider errors that are elements of the $n$-fold Pauli group.
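A quick numerical check of the single-qubit case of this basis claim (my own sketch; the random matrix is illustrative): any $2\times2$ matrix is recovered from its Hilbert-Schmidt coefficients $\mathrm{tr}(P^\dagger M)/2$ in the Pauli basis.

```python
import numpy as np

# Pauli basis of C^{2x2}.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = [I, X, Y, Z]

rng = np.random.default_rng(2)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # arbitrary single-qubit "error"

# Hilbert-Schmidt coefficients: tr(P^dagger M) / 2, since tr(P^dagger P) = 2.
coeffs = [np.trace(P.conj().T @ M) / 2 for P in paulis]
recon = sum(c * P for c, P in zip(coeffs, paulis))

print(np.allclose(recon, M))   # True: M is a linear combination of Paulis
```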

Hopefully this approach to the Theorem aids in understanding its utility and importance.

  • Hello. I come back to this answer one year later. Let's assume $\mathrm{rank}(c_{ab})$ is maximal while $c_{ab}$ is not the identity. I agree that $c$ will be invertible, but I do not exactly see the connection with the fact that I can "invert" the effect of the error. Commented Dec 9, 2020 at 6:50
  • I mean: I understand with examples what may happen. With $c_{ab}$ the identity, I don't see any problem to correct. When $c_{ab}$ is not the identity, I also have examples in mind where there wouldn't be any problem. Indeed, two different errors give you the same syndrome: by measuring, you "enforce" a given error to be converted to another one that you can then correct. But even so, I don't have much more intuition. One example that disturbs me intuitively is given in my previous comment. Commented Dec 9, 2020 at 6:52

$ \def\dg{^\dagger} \def\0{\bar0} \def\1{\bar1} \def\vk{\varphi_k} \def\vl{\varphi_l} \def\vm{\varphi_m} \def\ket#1{\left\lvert#1\right\rangle} \def\bra#1{\left\langle#1\right\rvert} \def\braket#1#2{\left\langle#1\middle\vert#2\right\rangle} \def\ketbra#1#2{\left\lvert#1\middle\rangle\!\middle\langle#2\right\rvert} $

Resource

Section 3 in this lecture by Daniel Gottesman explains the Knill-Laflamme criterion with the 9-qubit Shor code as an example.


Breakdown of the Knill-Laflamme criteria

Simplification

$P_C$ is the projector onto the coding subspace $C$. If we have a code that protects one qubit of information, we will have two states in our coding subspace representing logical zero $\ket\0$ and logical one $\ket\1$. As an example, in the 3-qubit bit-flip code, $\ket\0 = \ket{000}$ and $\ket\1 = \ket{111}$. In that case, $P_C = \ketbra\0\0 + \ketbra\1\1$. Generally, $$P_C = \sum_k\ketbra\vk\vk$$ where $\ket\vk \in \text{basis}(C)$. So, an alternative way to state the criteria:

\begin{align} \sum_{k'l'} \ketbra{\varphi_{k'}}{\varphi_{k'}} E_i\dg E_j \ketbra{\varphi_{l'}}{\varphi_{l'}} &= \alpha_{ij} \sum_m \ketbra\vm\vm \\ \Rightarrow \bra\vk\left( \sum_{k'l'}\ketbra{\varphi_{k'}}{\varphi_{k'}} E_i\dg E_j \ketbra{\varphi_{l'}}{\varphi_{l'}} \right)\ket\vl &= \alpha_{ij}\bra\vk\left( \sum_m\ketbra\vm\vm \right)\ket\vl \\ \Rightarrow\bra\vk E_i\dg E_j\ket\vl &= \alpha_{ij} \braket\vk\vl = \alpha_{ij} \delta_{kl} \end{align}

Preserve orthogonality between the transformed states

If we encounter an error, we should always be able to discern the transformed states, i.e. $E_i\ket\0$ should remain orthogonal to $E_i\ket\1$. Not only that, but different errors should also map the basis states orthogonally: $\left(E_i\ket\0\right)\dg E_j\ket\1 = \bra\0 E_i\dg E_j\ket\1 = 0$. Otherwise, we won't be able to discern which error took place.

Different errors map to different subspaces

If any two different errors map the coding subspace to mutually orthogonal error subspaces (sufficient but not necessary), then the coding scheme fulfils the criteria with $\bra\vk E_i\dg E_j \ket\vk = 0 = \alpha_{ij}$ for $i \neq j$. In such cases, we can easily detect which error took place.

But it is not necessary that different correctable errors should always map to different subspaces. This phenomenon is explained later.

Linear combination of correctable errors

If a code is able to correct a set of errors $\{E_i\}$, it should also be able to correct any linear combination of those errors. For example, the 3-qubit bit-flip code can correct errors with Kraus operators $\left\{\sqrt{(1-p)^3}I, \sqrt{p(1-p)^2}X_1, \sqrt{p(1-p)^2}X_2, \sqrt{p(1-p)^2}X_3 \right\}$, where $p$ is the probability of a single-qubit bit-flip error. Any code that is able to correct $\{I, X_1, X_2, X_3\}$ will also be able to correct the above-mentioned set of operators, since its elements are just scaled versions of those.
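This linearity can be simulated directly (a sketch of my own, not part of the answer; the random weights are illustrative): a superposition of correctable bit-flip errors collapses, under the syndrome measurement, onto a single $E_a\ket\psi$, which is then undone by the corresponding recovery.

```python
import numpy as np

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

zero = np.zeros(8); zero[0] = 1.0    # |000>
one  = np.zeros(8); one[7] = 1.0     # |111>
errors = [kron_all(o) for o in ([I, I, I], [X, I, I], [I, X, I], [I, I, X])]

rng = np.random.default_rng(3)
a, b = rng.normal(size=2)
psi = a * zero + b * one
psi /= np.linalg.norm(psi)

# Corrupt |psi> with a random linear combination of the correctable errors.
weights = rng.normal(size=4)
weights /= np.linalg.norm(weights)
corrupted = sum(w * (E @ psi) for w, E in zip(weights, errors))

# Syndrome measurement: suppose the outcome is the subspace X1*C.
E1 = errors[1]
P1 = np.outer(E1 @ zero, E1 @ zero) + np.outer(E1 @ one, E1 @ one)
collapsed = P1 @ corrupted
collapsed /= np.linalg.norm(collapsed)   # state has collapsed to (+/-) X1|psi>

# Recovery: X1 is its own inverse.
recovered = E1 @ collapsed
print(np.isclose(abs(recovered @ psi), 1.0))   # True: |psi> restored up to sign
```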

Invariance of $\alpha_{ij}$ on basis

The linearity of correctable errors also shows why the $\alpha_{ij}$ is needed in the Knill-Laflamme criteria instead of just $\delta_{ij}$. For example, with $E_1 = \sqrt{p(1-p)^2}X_1$ we see that $\bra\0 E_1\dg E_1 \ket\0 = \bra\1 E_1\dg E_1 \ket\1 = p(1-p)^2 = \alpha_{11}$. Notice how the value of $\alpha_{ij}$ depends only on $i$ and $j$, not on the basis state: $\bra\vk E_i\dg E_j\ket\vk = \alpha_{ij}$. Recall that unitary operations preserve inner products. Here, the inner product of $E_i\ket\vk$ and $E_j\ket\vk$, namely $\bra\vk E_i\dg E_j \ket\vk$, remains constant whatever $\ket\vk$ is. That means we can choose a different basis for the errors, which might map the errors to orthogonal subspaces.

A good example of this is the $Z_1$ and $Z_2$ errors in the 9-qubit Shor code. Both are correctable by the Shor code. Interestingly, $\bra\vk Z_1\dg Z_2 \ket\vk = 1$, meaning they are not orthogonal errors. But if we take a different basis $B_1 = \frac{Z_1+Z_2}{2}$ and $B_2 = \frac{Z_1 - Z_2}{2}$, then since $Z_1$ and $Z_2$ commute, $\bra\vk B_1\dg B_2 \ket\vk = \tfrac14\bra\vk (Z_1^2 - Z_2^2) \ket\vk = \tfrac14\bra\vk (I-I) \ket\vk = 0$, so these errors are orthogonal. Both of these cases are handled well by the Knill-Laflamme criteria.
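Both computations can be verified on the actual 9-qubit Shor code (a numerical sketch of my own, building only the logical-zero code word):

```python
import numpy as np

I = np.eye(2); Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# One block of the Shor code: (|000> + |111>)/sqrt(2).
ket000 = np.zeros(8); ket000[0] = 1.0
ket111 = np.zeros(8); ket111[7] = 1.0
plus = (ket000 + ket111) / np.sqrt(2)

# Logical zero of the 9-qubit Shor code: three such blocks (512-dim vector).
logical0 = np.kron(np.kron(plus, plus), plus)

Z1 = kron_all([Z] + [I] * 8)       # Z on qubit 1
Z2 = kron_all([I, Z] + [I] * 7)    # Z on qubit 2

# Z1 and Z2 act identically on the code space: overlap 1.
print(np.isclose(logical0 @ Z1 @ (Z2 @ logical0), 1.0))    # True

# In the rotated basis B1, B2 the errors become orthogonal.
B1 = (Z1 + Z2) / 2
B2 = (Z1 - Z2) / 2
print(np.isclose(logical0 @ B1.T @ (B2 @ logical0), 0.0))  # True
```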

Hermitian operator $[\alpha_{ij}]$

The fact that $[\alpha_{ij}]$ is a Hermitian matrix can easily be shown by swapping the indices $i$ and $j$ in the criteria equation: $\alpha_{ij} = \bra\vk E_i\dg E_j \ket\vk = \left(\bra\vk E_j\dg E_i \ket\vk\right)^* = \alpha_{ji}^*$.

One might falsely think that different errors need to be mapped to different subspaces for them to be corrected, i.e. that $\alpha_{ij} = 0$ is always true if $i \neq j$. Not necessarily. For example, the 3-qubit bit-flip code can correct both the $X_1$ error and the $X_1 Z_2 Z_3$ error. Both errors map $\ket\0 = \ket{000}$ to $\ket{100}$ and $\ket\1 = \ket{111}$ to $\ket{011}$. So, which error has occurred is hardly of importance. What matters is that both of them can be corrected by applying the $X_1$ operation. This is possible because both errors have an identical effect on the coding subspace, and this is reflected in the value of $\alpha_{ij}$: $\bra\vk E_i\dg E_j \ket\vk = \bra\vk X_1\dg X_1 Z_2 Z_3 \ket\vk = 1 = \alpha_{ij}$.
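This identical action is easy to confirm numerically (a short sketch of my own):

```python
import numpy as np

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

zero = np.zeros(8); zero[0] = 1.0    # |000>
one  = np.zeros(8); one[7] = 1.0     # |111>

E1 = kron_all([X, I, I])   # X1
E2 = kron_all([X, Z, Z])   # X1 Z2 Z3

# Both errors act identically on the code words, hence on the code space.
print(np.allclose(E1 @ zero, E2 @ zero))   # True: both give |100>
print(np.allclose(E1 @ one, E2 @ one))     # True: both give |011>
```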

  • Please post this as a comment or add more information. Commented Dec 9, 2022 at 16:40
  • @MartinVesely noted. I'm a new user of qcstack, so I don't have the required +50 reputation to comment. I will add more information instead (also using OP's notation). Thanks.
    – Shawon
    Commented Jan 8, 2023 at 9:09
