
Fault-tolerant quantum computation promises to strongly suppress errors by scaling up the size of the system. Right now, different physical implementations of prototype quantum computers have very different time scales at which they can perform physical operations, ranging from nanoseconds to milliseconds.

Talking to experts, I hear two views. One is that only the error rate matters, since the logical qubits will not be able to suppress errors for long enough to complete the computation anyway. The other is that if logical operations are not fast enough (say, a microsecond), then the combined overheads of error correction and slow gates will push any practical advantage for fault-tolerant systems very far into the future.

How should I think about the time it takes to perform logical operations?

Is it true that the overhead of quantum error correction means the logical cycle time (the time each layer of logical gates takes) will likely be at least an order of magnitude slower than the physical cycle time?
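
To make this concrete, here is a rough back-of-envelope sketch of why one might expect such a gap. The code distance, syndrome-round time, and physical gate time are numbers I picked for illustration, not established figures:

```python
# Rough sketch: why a logical cycle is plausibly much slower than a physical one.
# All numbers (syndrome-round time, code distance, gate time) are assumptions.

syndrome_round_time = 1e-6   # assumed ~1 us per surface-code syndrome round
code_distance = 25           # assumed distance for a long computation

# A lattice-surgery logical operation takes on the order of d syndrome rounds
# so that the logical measurement result is itself fault tolerant.
logical_cycle_time = code_distance * syndrome_round_time

physical_gate_time = 50e-9   # assumed physical two-qubit gate time
print(f"logical cycle ~ {logical_cycle_time * 1e6:.0f} us, "
      f"physical gate ~ {physical_gate_time * 1e9:.0f} ns, "
      f"ratio ~ {logical_cycle_time / physical_gate_time:.0f}x")
```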

How deep are logical circuits in the standard imagined applications of quantum computation? It is easy to find estimates for, say, the number of gates or T gates, but what are some estimates of the depth? Is it believed that actual large-scale quantum computations can be performed in parallel, so that it really is the depth that determines the runtime?
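
For context on what "depth is the deciding factor" would mean, here is a toy comparison of serial versus layer-parallel execution; the T-count, T-depth, and logical cycle time are made-up illustrative numbers, not estimates from the literature:

```python
# Toy comparison: total gate count vs. circuit depth as the thing that sets
# wall-clock time. T-count, T-depth, and logical cycle time are all made up.

t_count = 1e9           # total T gates (assumed)
t_depth = 1e7           # sequential layers of T gates (assumed)
logical_cycle = 25e-6   # seconds per logical layer (assumed)

serial_runtime = t_count * logical_cycle     # if every gate ran one after another
parallel_runtime = t_depth * logical_cycle   # if each layer runs fully in parallel

print(f"serial: {serial_runtime / 3600:.1f} h, "
      f"layer-parallel: {parallel_runtime / 60:.1f} min")
```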


1 Answer


See the papers "Time-optimal quantum computation" and "Flexible layout of surface code computations using AutoCCZ states".

For Clifford operations, time is not a concern at all. Because of gate teleportation and the ability to backdate Pauli corrections in stabilizer circuits, Clifford operations can be precomputed in parallel elsewhere in the machine. At large scales it makes more sense to think of Clifford operations as having a spacetime cost, rather than a time cost. They will occupy some number of qubits for some amount of time, but those qubits never have to be important ones near a computational bottleneck.
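
As a rough illustration of what "backdating Pauli corrections" means in practice, here is a minimal Pauli-frame-tracking sketch. The gate set and the little demo at the end are my own illustrative choices, not code from the referenced papers:

```python
# Minimal sketch of Pauli frame tracking: instead of applying X/Z corrections
# from gate teleportation on the quantum hardware, record them classically and
# commute ("backdate") them through later Clifford gates as P -> C P C^dagger.

class PauliFrame:
    def __init__(self, n):
        self.x = [0] * n   # pending X correction on each qubit
        self.z = [0] * n   # pending Z correction on each qubit

    def record_correction(self, qubit, x=0, z=0):
        """A teleportation step says 'apply X^x Z^z later'; just note it."""
        self.x[qubit] ^= x
        self.z[qubit] ^= z

    # Conjugation rules for a few Clifford gates (phases ignored; they don't
    # affect which correction is pending).
    def h(self, q):
        self.x[q], self.z[q] = self.z[q], self.x[q]

    def s(self, q):
        self.z[q] ^= self.x[q]

    def cnot(self, c, t):
        self.x[t] ^= self.x[c]
        self.z[c] ^= self.z[t]

    def flips_z_measurement(self, q):
        """A pending X flips the outcome of a Z-basis measurement of qubit q."""
        return self.x[q]

# Demo: a deferred X correction on qubit 0 is pushed through H and CNOT purely
# in classical software; the hardware never pauses to apply it.
frame = PauliFrame(2)
frame.record_correction(0, x=1)   # teleportation demanded an X on qubit 0
frame.h(0)                        # X becomes Z under H
frame.cnot(0, 1)                  # Z on the control stays on the control
print(frame.x, frame.z)           # -> [0, 0] [1, 0]
```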

For non-Clifford operations, what matters is linear chains where each gate doesn't commute with the previous one in the chain. Each step in the chain requires a measurement to be decoded, to inform a decision about what to do next somewhere in the quantum computer, to get the next measurement in the sequence. There's no known way to resolve a step of the chain faster than you can run that loop. Doing quantum computation limited by this loop is doing "reaction limited" quantum computation.
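
Here is a toy sketch of the resulting time floor, with an assumed reaction time and an assumed chain length (both numbers are purely illustrative):

```python
# Toy model of the reaction-limited floor: a chain of non-Clifford gates where
# each gate's basis choice depends on the decoded result of the previous one.
# Extra qubits or factories cannot shorten the chain; it is inherently serial.

reaction_time = 10e-6   # assumed: measure + decode + feed back, in seconds
chain_length = 1e8      # assumed: longest chain of sequentially dependent gates

minimum_runtime = chain_length * reaction_time
print(f"reaction-limited floor: ~{minimum_runtime / 3600:.2f} hours")
```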

At large scales, quantum computations optimized for time don't look like (A); they look like (B):

[Figure: schematic comparison of computation style (A) with the time-optimal style (B)]

So it's all about how fast you can do an X-vs-Z measurement, and how fast your decoder can error-correct the result. There's no error correction "cycle" on the critical path, because you can arrange things so that no two-qubit gates are in the critical part of the reaction loop.
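
For a sense of what goes into that loop, here is an illustrative breakdown; every latency below is an assumed number, not a measurement of any real system:

```python
# Illustrative breakdown of one pass around the reaction loop.

measurement_readout = 0.5e-6   # one layer of physical measurements (assumed)
decoder_latency = 5e-6         # classically decode that measurement (assumed)
feedback_latency = 4e-6        # round trip to pick X vs Z for the next step (assumed)

reaction_time = measurement_readout + decoder_latency + feedback_latency
print(f"reaction time ~ {reaction_time * 1e6:.1f} us per dependent non-Clifford step")
```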

That said, running reaction limited requires a lot of magic state factories (for superconducting qubits, something like 20 of them, covering millions of physical qubits in total). When fault-tolerant quantum computing first starts working, space will be too much of a constraint to have that many factories. So the rate at which you can do non-Clifford gates will initially be set by the inability to fit enough factories, rather than by the dependencies between non-Clifford gates. Initially you will see computational styles more similar to the ones proposed in "A Game of Surface Codes".
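
A rough sketch of that early-machine bottleneck, with assumed factory footprints and production rates (all numbers invented for illustration):

```python
# Rough sketch of the early-machine regime: not enough room for factories to
# keep pace with the reaction-limited rate. Footprints and rates are assumed.

physical_qubits_available = 1_000_000   # assumed early fault-tolerant machine
factory_footprint = 150_000             # assumed physical qubits per factory
factory_period = 100e-6                 # assumed seconds per magic state, per factory

factories = physical_qubits_available // factory_footprint
supply_rate = factories / factory_period   # magic states produced per second
reaction_rate = 1 / 10e-6                  # steps per second if reaction limited (assumed)

print(f"{factories} factories supply ~{supply_rate:,.0f} states/s, "
      f"vs ~{reaction_rate:,.0f}/s to stay reaction limited")
```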
