Defining Logical Qubits: Criteria for Resilient Quantum Computation

Dr. Krysta M. Svore

As an industry, we are all collectively committed to bringing scaled quantum computing to fruition. Understanding what it will take to reach this goal is crucial not just for measuring industry progress, but also for developing a robust strategy to build a quantum machine and a quantum-ready community. That’s why in June 2023, we proposed that quantum computing must graduate through three implementation levels to achieve utility scale: Level 1 Foundational, Level 2 Resilient, Level 3 Scale. All quantum computing technologies today are at Level 1, and while numerous NISQ machines have been developed, they do not offer practical quantum advantage. True utility will only come from orchestrating resilient quantum computation across a sea of logical qubits, something that, to the best of our knowledge, can only be achieved through fault tolerance and error correction. And it has not yet been demonstrated.

The next step toward practical quantum advantage, and Level 3 Scale, is to demonstrate resilient quantum computation on a logical qubit.  Resilience in this context means the ability to show that quantum error correction helps—rather than hinders—non-trivial quantum computation. However, an important element of this non-triviality is the interaction between logical qubits and the entanglement it generates, which means resilience of just one logical qubit will not be enough.  Therefore, demonstrating two logical qubits performing an error-corrected computation that outperforms the same computation on physical qubits will mark the first demonstration of a resilient quantum computation in our field’s history.

Before our industry can declare victory on reaching Level 2 Resilient quantum computing by performing such a demonstration on a given hardware platform, it’s important to agree on what such a demonstration entails, and on the path from there to Level 3 Scale.


Defining a logical qubit

The most meaningful definition of a logical qubit hinges on what one can do with that qubit: demonstrating a qubit that can only remain idle, that is, be preserved in memory, is not as meaningful as demonstrating a non-trivial operation. We therefore define a logical qubit as one that, at a minimum, allows some non-trivial, encoded computation to be performed on it.

A significant challenge in formally defining a logical qubit is accounting for distinct hardware platforms; the definition should not favor one platform over another. To address this, we propose a set of criteria that mark the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a “logical qubit”.

Entrance criteria to Level 2

Graduating to Level 2 Resilient quantum computing is achieved when fewer errors are observed on the output of a logical, error-corrected quantum circuit than on the analogous physical circuit without error correction.[1] We also require that a resilient-level demonstration include some uniquely “quantum” feature; otherwise, it reduces to a merely novel demonstration of probabilistic bits.

Arguably the most natural “quantum” feature to demonstrate in this regard is entanglement. A demonstration of the resilient level of quantum computation should then satisfy the following criteria:

  1. demonstrates a convincingly large separation between the logical error rate of a non-trivial logical circuit and the physical error rate of its physical counterpart;
  2. corrects at least all individual circuit faults; and
  3. generates entanglement between at least two logical qubits.

Upon satisfaction of these criteria, the term “logical qubit” can then be used to refer to the encoded qubits involved.
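To make the first two criteria concrete, here is a minimal sketch of the kind of separation being asked for. It is a toy, not a Level 2 demonstration: it uses a classical distance-3 repetition code under an i.i.d. bit-flip model with an assumed physical error rate, and it involves no entanglement, so it cannot satisfy criterion 3.

```python
import random

def physical_failure(p: float) -> bool:
    """One unencoded bit flips with probability p."""
    return random.random() < p

def logical_failure(p: float) -> bool:
    """Distance-3 repetition code: majority vote fails only when two
    or more of the three copies flip, so every single fault in a run
    is corrected (criterion 2)."""
    flips = sum(random.random() < p for _ in range(3))
    return flips >= 2

def rate(fail, p: float, shots: int = 200_000) -> float:
    """Monte Carlo estimate of a failure rate."""
    return sum(fail(p) for _ in range(shots)) / shots

p = 0.01  # assumed physical error rate, for illustration only
print(f"physical error rate ~ {rate(physical_failure, p):.5f}")  # ~1e-2
print(f"logical error rate  ~ {rate(logical_failure, p):.5f}")   # ~3e-4
```

Under these assumptions the encoded error rate lands roughly thirty times below the physical one, which is the shape of separation criterion 1 asks a real, entangling, quantum-coded demonstration to exhibit.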

The distinction between the Resilient and Scale levels is worth emphasizing: a proof-of-principle demonstration of resilience must be convincing, but it does not require a fully scaled machine. For this reason, a resilient-level demonstration may use certain forms of post-selection, meaning the ability to accept only those runs that satisfy specific criteria. Importantly, the chosen post-selection method must not replace error correction altogether, as error correction is central to the type of resilience that Level 2 aims to demonstrate.
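To see why that restriction matters, the same toy repetition-code model (all parameters assumed) contrasts the two approaches: correction keeps every run and repairs single faults, while pure post-selection achieves an even lower error rate only by discarding the very runs that correction would have fixed.

```python
import random

def shot(p: float):
    """One shot of a 3-bit repetition code under i.i.d. bit flips.
    Returns (syndrome_is_trivial, majority_vote_is_correct)."""
    flips = [random.random() < p for _ in range(3)]
    trivial = flips[0] == flips[1] == flips[2]   # no fault detected
    corrected = sum(flips) <= 1                  # majority vote succeeds
    return trivial, corrected

p, shots = 0.05, 200_000                         # assumed toy parameters
runs = [shot(p) for _ in range(shots)]

# Error correction: keep every run, fix all single faults.
ec_error = 1 - sum(ok for _, ok in runs) / shots

# Pure post-selection: keep only runs whose syndrome is trivial.
kept = [ok for trivial, ok in runs if trivial]
ps_error = 1 - sum(kept) / len(kept)

print(f"correction:     100% of runs kept, error {ec_error:.5f}")
print(f"post-selection: {len(kept)/shots:.1%} of runs kept, error {ps_error:.6f}")
```

Because the accepted fraction shrinks exponentially as circuits grow, post-selection alone cannot stand in for error correction at scale, even though it can look impressive on a small circuit.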

Measuring progress across Level 2

Once entrance to the Resilient Level is achieved, as an industry we need to be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale; the requirements to reach practical quantum advantage at Level 3 include upwards of 1,000 logical qubits operating at a mega-rQOPS with logical error rates better than 10⁻¹². It is therefore critical to be able to understand advancements within Level 2 toward these requirements.

Inspired in part by DiVincenzo’s criteria, we propose to measure progress along four axes: universality, scalability, fidelity, and composability. For each axis we offer the following ideas on how to measure it, with the hope that the community will build on them:

  1. Universality: A universal quantum computer requires both Clifford and non-Clifford operations. Is there a set of high-fidelity Clifford-complete logical operations? Is there a set of high-fidelity universal logical operations? A typical strategy is to design the former, which can then be used in conjunction with a noisy non-Clifford state to realize a universal set of logical operations; a toy sketch of such a gadget appears after this list. Of course, different hardware and approaches to fault tolerance may employ different strategies.
  2. Scalability: At its core, the resource requirements for advantage must be reasonable (i.e., a very small fraction of the Earth’s resources or a person’s lifetime). More technically, does the resource overhead required scale polynomially with the target logical error rate of any quantum algorithm? Note that some hardware may achieve very high fidelity but have a limited number of physical qubits, so that improving the error-correcting code in the most obvious way (increasing the code distance) may be difficult; a back-of-envelope overhead model also appears after this list.
  3. Fidelity: Logical error rates of all operations must improve with code strength. More strictly, is the logical error rate better than the physical error rate, i.e., is each operation fidelity “sub-pseudothreshold”? Progress on this axis can be measured with Quantum Characterization, Verification, and Validation (QCVV) performed at the logical level, or by engaging in operational tasks such as Bell inequality violations and self-testing protocols.
  4. Composability: Are the fault-tolerant gadgets for all logical operations composable? It is not sufficient to demonstrate operations separately; rather, it is crucial to demonstrate their composition into richer circuits and eventually more powerful algorithms. Moreover, the performance of the circuits must be bounded by the performance of their components in the expected way. Metrics along these lines will enable us to check which logical circuits can be run, and with what expected fidelity.
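On the universality axis, the Clifford-plus-magic-state strategy mentioned above can be illustrated with a small statevector sketch in Python/NumPy. This is a schematic under idealized assumptions (a noiseless, unencoded magic state and perfect Clifford operations; the function names are ours): it shows the standard gate-teleportation gadget in which a non-Clifford T gate is enacted using only a CNOT, a Z-basis measurement, a conditional Clifford S fix-up, and one consumed magic state T|+⟩.

```python
import numpy as np

S = np.diag([1, 1j])                          # Clifford phase gate
T = np.diag([1, np.exp(1j * np.pi / 4)])      # non-Clifford gate we want

def t_via_magic_state(psi, rng=np.random.default_rng()):
    """Apply T to the single-qubit state `psi` using only Clifford
    operations plus one consumed magic state T|+>."""
    magic = T @ (np.array([1, 1]) / np.sqrt(2))   # ancilla in T|+>
    state = np.kron(psi, magic)                   # ordering: |data, ancilla>
    cnot = np.array([[1, 0, 0, 0],                # control = data,
                     [0, 1, 0, 0],                # target = ancilla
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    state = cnot @ state
    p0 = abs(state[0])**2 + abs(state[2])**2      # P(ancilla measures 0)
    if rng.random() < p0:                         # outcome 0: T already applied
        return np.array([state[0], state[2]]) / np.sqrt(p0)
    data = np.array([state[1], state[3]]) / np.sqrt(1 - p0)
    return S @ data                               # outcome 1: Clifford fix-up

psi = np.array([0.6, 0.8])                        # arbitrary normalized input
out = t_via_magic_state(psi)
assert np.isclose(abs(np.vdot(out, T @ psi)), 1.0)  # equal up to global phase
```

On the scalability axis, a back-of-envelope surface-code-style model (the formula and the constants A, p, and p_th below are illustrative assumptions, not measured values) shows the desired behavior: the code distance, and hence the physical-qubit overhead, needs to grow only polylogarithmically as the target logical error rate tightens.

```python
# Heuristic model (assumption): logical error per round
#   p_L(d) = A * (p / p_th) ** ((d + 1) / 2)
# with ~2 * d^2 physical qubits per surface-code logical qubit.
A, p, p_th = 0.1, 1e-3, 1e-2   # assumed constants for illustration

def distance_for(target: float) -> int:
    """Smallest odd code distance whose modeled error rate meets target."""
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

for target in (1e-6, 1e-9, 1e-12):
    d = distance_for(target)
    print(f"target {target:.0e}: distance {d}, ~{2 * d * d} physical qubits")
```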

Criteria to advance from Level 2 to Level 3 Scale

The exit from the resilient level of logical computation will be marked by large-depth, high-fidelity computations involving upwards of hundreds of logical qubits. For example, a logical, fault-tolerant computation on ~100 or more logical qubits, with a universal set of composable logical operations and an error rate of ~10⁻⁸ or better, will be necessary. At Level 3, the performance of a quantum supercomputer can then be measured in reliable quantum operations per second (rQOPS). Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1,000 logical qubits operating at a mega-rQOPS with a logical error rate of 10⁻¹² or better.
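As a rough sanity check on those targets (assuming, for simplicity, that rQOPS is the product of logical-qubit count and logical clock rate; the exact metric may be defined differently):

```python
# Back-of-envelope only: assumes rQOPS = logical_qubits * logical_clock_hz.
logical_qubits = 1_000
target_rqops = 1_000_000                  # one mega-rQOPS

logical_clock_hz = target_rqops / logical_qubits
print(f"implied logical clock rate: {logical_clock_hz:.0f} Hz")  # 1000 Hz

# At 1e-12 error per logical operation, one error is expected roughly
# every 1e12 operations, i.e. every 1e12 / 1e6 = 1e6 seconds of running.
seconds_per_error = 1e12 / target_rqops
print(f"~{seconds_per_error / 86_400:.1f} days between expected logical errors")
```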

Conclusion

It’s no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts us on a path to ultimately achieving practical quantum advantage. Together as a community we have an opportunity to help measure progress across Level 2, and to introduce benchmarks for the industry. If you have ideas or feedback on criteria to enter Level 2, or on how to measure progress, we’d love to hear from you.

 

[1] Our criteria build on and complement those of both DiVincenzo (DiVincenzo, David P. (2000). “The Physical Implementation of Quantum Computation”. Fortschritte der Physik 48 (9–11): 771–783) and Gottesman (Gottesman, Daniel (2016). “Quantum fault tolerance in small experiments”. arXiv:1610.03507, https://arxiv.org/abs/1610.03507), who have previously outlined important criteria for achieving quantum computing and its fault tolerance.
