I am quite stumped by the fact that the roadmaps for quantum computers given by IBM, Google, IonQ, etc. seem to project linear or even exponential growth in the number of qubits in their machines.

Naively, I would assume that bigger systems are much harder to engineer, especially if one wants to be able to entangle all of their qubits, because only the undisturbed outcome is correct. Given a fidelity where $P[0_i] = x$ means that no error occurs on qubit $i$ with probability $x$, it would seem to me that an entangled system of size $n$ remains error-free only with probability $P[0_1, ..., 0_n] = x^n$. The probability of avoiding any error thus decays exponentially whenever $x < 1.0$, which is of course true for all practical systems.
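To put some purely illustrative numbers on this, here is a minimal sketch assuming a hypothetical per-qubit success probability of $x = 0.999$:

```python
# Probability that an n-qubit entangled register remains error-free,
# assuming independent per-qubit success probability x (illustrative value).
x = 0.999

for n in (10, 100, 1_000, 10_000):
    p_ok = x ** n
    print(f"n = {n:6d}: P[no error] ~ {p_ok:.3g}")
# Roughly 0.99, 0.90, 0.37 and 4.5e-05 respectively.
```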

ECCs mitigate this to some extent, but (as far as I can tell) only by a polynomial factor, and they require even more qubits. It looks like building larger systems of the same fidelity becomes exponentially harder, and simply scaling an existing design will result in a system with exponentially lower fidelity. Most interesting algorithmic building blocks, like the QFT, require entanglement between all of the involved qubits. More qubits without the ability to produce larger entangled states seem to offer only a limited advantage, because that would be practically equivalent to using two separate computers or executing circuits in sequence.

Optimism about improvements in basic gate fidelity and error correction aside, is there anything I am missing about errors in entangled states that justifies these roadmaps? To state this more precisely: is there any theoretical effect or algorithm that can be exploited so that the error probability of an entangled system grows only polynomially, or at least subexponentially, in the size of the system?


1 Answer


It is true that fidelity decays exponentially in the course of quantum computation. This is indeed a major limitation of NISQ computers that imposes a stringent "depth budget". In order to overcome the decay, we need gates with fidelity so close to one that the decay is negligible over the course of quantum algorithms we intend to run. As you anticipated in your question, this is achieved using quantum error correction.
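To get a feel for the resulting depth budget, here is a rough back-of-the-envelope sketch; the gate fidelity and the target below are illustrative placeholders, not measured values:

```python
import math

# Rough NISQ "depth budget": how many gates can be applied before the
# overall circuit fidelity drops below some target? (Illustrative numbers.)
gate_fidelity = 0.999      # assumed gate fidelity
target_fidelity = 0.5      # tolerate at most ~50% fidelity loss

max_gates = math.log(target_fidelity) / math.log(gate_fidelity)
print(f"~{max_gates:.0f} gates before fidelity drops below {target_fidelity}")
# ~693 gates
```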

However, quantum error correction gives rise to exponential suppression of errors, not merely polynomial mitigation. Moreover, the asymptotic resource cost of achieving this is polylogarithmic in the size of the computation and the target failure probability. This enables the creation of logical gates with arbitrarily high fidelity and thus allows us to overcome the exponential decay of fidelity. In practice, a key challenge in building a quantum computer is that while the asymptotic overhead of quantum error correction is modest, the constant factors are fairly high.

Quantitatively, for a given quantum error correcting code there exist constants $c_1$ and $c_2$ and a threshold error rate $\epsilon_T$ such that the logical error rate $\epsilon_L$ is

$$ \epsilon_L \approx c_1 \left(c_2\frac{\epsilon_P}{\epsilon_T}\right)^{\left\lfloor\frac{d+1}{2}\right\rfloor}\tag1 $$

where $\epsilon_P$ is the physical error rate and $d$ is the code distance (cf. equation $(2)$ on p. 11 of this paper). Importantly, the resources required to build and operate a quantum computer using a quantum error correcting code of distance $d$ are polynomial in $d$. For example, in the case of the Surface Code the number of physical qubits required to encode one logical qubit scales as $O(d^2)$. Thus, theoretically, if the physical error rate $\epsilon_P$ can be brought down below threshold, then arbitrarily high logical fidelity should be achievable.
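To build intuition for equation $(1)$, here is a small sketch that evaluates it for a few code distances; the constants $c_1$, $c_2$, the threshold and the physical error rate below are illustrative placeholders rather than values taken from the cited paper:

```python
# Logical error rate per equation (1), with illustrative constants.
c1, c2 = 0.1, 1.0            # placeholder constants
eps_T = 1e-2                 # assumed threshold error rate
eps_P = 1e-3                 # assumed physical error rate (below threshold)

for d in (3, 5, 7, 11, 15):
    eps_L = c1 * (c2 * eps_P / eps_T) ** ((d + 1) // 2)
    n_phys = d ** 2          # surface-code-style O(d^2) qubits per logical qubit
    print(f"d = {d:2d}: eps_L ~ {eps_L:.1e}, physical qubits per logical ~ {n_phys}")
# Each increase of the distance by 2 suppresses the logical error rate by
# another factor of eps_P / eps_T, while the qubit cost grows only as d^2.
```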

For a concrete example, suppose we have a computer operating below threshold (so that $c_2\epsilon_P < \epsilon_T$) and we wish to run an algorithm consisting of $k$ gates on $q$ logical qubits, with an overall failure probability of at most $p_f>0$. In order to meet these requirements, we need $\epsilon_L$ such that

$$ kq\epsilon_L \approx 1 - (1 - \epsilon_L)^{kq} < p_f, $$

where the approximation holds for small $\epsilon_L$ (by the union bound, $kq\epsilon_L$ is in fact an upper bound on the failure probability).

Substituting from $(1)$, the requirement becomes

$$ \log (kqc_1) + \left\lfloor\frac{d+1}{2}\right\rfloor \log \left(c_2\frac{\epsilon_P}{\epsilon_T}\right) \lt \log p_f. $$

Since $\left\lfloor\frac{d+1}{2}\right\rfloor \ge \frac{d}{2}$ and the logarithm multiplying it is negative, this is satisfied whenever

$$ d \gt \frac{2\log \frac{kqc_1}{p_f}}{\log \frac{\epsilon_T}{c_2\epsilon_P}} = O\left(\log \frac{kq}{p_f}\right), $$

so a code distance logarithmic in $kq/p_f$ suffices.

We see that the asymptotic overhead of quantum error correction is very modest. For example, using the Surface Code the number of physical qubits needed to run an algorithm with the above parameters is $O(q\log^2 \frac{kq}{p_f})$.
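Putting the pieces together, here is a rough resource-estimation sketch along the lines of the calculation above; the constants and error rates are again placeholders, and only the scaling is the point:

```python
import math

# Estimate the code distance and physical qubit count needed to run an
# algorithm of k gates on q logical qubits with failure probability < p_f.
# Constants and rates below are illustrative placeholders.
c1, c2 = 0.1, 1.0
eps_T, eps_P = 1e-2, 1e-3
k, q, p_f = 10**9, 100, 0.01

# d > 2 log(k q c1 / p_f) / log(eps_T / (c2 eps_P)), rounded up to an odd integer
d = 2 * math.log(k * q * c1 / p_f) / math.log(eps_T / (c2 * eps_P))
d = math.ceil(d) | 1                 # surface code distances are odd
physical_qubits = q * d ** 2         # O(d^2) physical qubits per logical qubit

print(f"code distance d ~ {d}, physical qubits ~ {physical_qubits:,}")
# With these placeholder numbers: d ~ 25 and roughly 6e4 physical qubits.
```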

For an informative discussion of exponential suppression of errors in concatenated codes see section 10.6.1 in Nielsen & Chuang, especially the text around equation $(10.113)$ on page 481. For a recent experimental demonstration of exponential suppression of errors, see this paper (disclosure: I'm a co-author).

  • How is it possible to have so many authors? How did you all collaborate? (Sorry, I am not casting a vote on this answer because my poor grasp of quantum computing doesn't allow me to understand it.)
    – user27286
    Commented Feb 14, 2021 at 19:58
  • Well, a real quantum computer is a very complex system, so I suppose it's all out of necessity. That said, note that this isn't even close to the scope of collaboration required in certain other physics experiments; see for example this paper. The way it happens in practice is, I suppose, as anywhere else: via lots of meetings, emails, talks, delegation, task-group forming, code reviews, document writing, etc.
    Commented Feb 14, 2021 at 20:13
  • I see. I am new to all these things, so please don't mind if I said something offensive. The paper you mentioned has two groups of people; compared to that, this is fewer. I found it interesting, which is why I asked.
    – user27286
    Commented Feb 14, 2021 at 20:19
  • I will have to re-read that section of Nielsen & Chuang to fully understand this, because I don't yet see the connection between the threshold theorem and the increasing number of entangled qubits. It is clear that a fault-tolerant code of arbitrary fidelity can be constructed if a basic level of fidelity can be achieved, but how this limits the impact of more qubits adding more chances of interaction with the environment isn't clear to me. My question is not about gate fidelity, but about how many qubits have to remain error-free if the entire system is entangled. Or are these equivalent questions?
    – midor
    Commented Feb 15, 2021 at 23:05
  • Regarding limiting the impact of more qubits increasing the chances of interaction with the environment: this works the same way as in classical ECCs. On one hand, as we increase the number of physical qubits per logical qubit, our scheme benefits from increasing redundancy. On the other hand, the error rate per qubit remains constant. In the classical case, suppose you have noisy bits that spontaneously flip with probability $p$. Encoding one logical bit as $k$ noisy physical bits and decoding by majority vote (the repetition code) reduces the error probability to $\approx p^{\lfloor\frac{k+1}{2}\rfloor}$; see the short simulation sketch below.
    Commented Feb 15, 2021 at 23:22
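As a quick sanity check of the repetition-code comment above, here is a small Monte Carlo sketch; the flip probability and trial count are purely illustrative:

```python
import random

# Encode one bit as k noisy copies that each flip with probability p,
# decode by majority vote, and estimate the residual logical error rate.
def logical_error_rate(p, k, trials=200_000):
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(k))
        if flips > k // 2:          # majority of the k copies flipped
            failures += 1
    return failures / trials

p = 0.05
for k in (1, 3, 5, 7):
    print(f"k = {k}: logical error ~ {logical_error_rate(p, k):.2e}")
# The logical error rate drops roughly as p^((k+1)/2) for odd k.
```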
