The rapid advance of quantum computing towards a technology with real applications is reflected in the growing number of qubits these devices have available for computation. In 1998, laboratory prototypes could boast just two: enough for a proof of principle but little more. Today that figure has risen to 72 in the latest device reported by Google. Given that the number of states available in principle to a system of N qubits is 2^N, this is an enormous difference. Holding that many qubits in entangled states is a herculean feat of quantum engineering.
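To get a feel for what 2^N means in practice, here is a rough back-of-the-envelope sketch (purely illustrative, with a conventional 16 bytes per complex amplitude assumed) of how much classical memory a full N-qubit state vector would need:

```python
# Illustrative only: classical memory needed to store a full state vector
# of N qubits, assuming 16 bytes (one complex amplitude) per basis state.
for n_qubits in (2, 20, 50, 72):
    n_states = 2 ** n_qubits            # 2^N basis states
    memory_gb = n_states * 16 / 1e9     # bytes converted to gigabytes
    print(f"{n_qubits:>2} qubits: {n_states:.2e} states, ~{memory_gb:.2e} GB")
```

Two qubits need a few tens of bytes; 72 qubits would need tens of trillions of gigabytes, which is why no classical machine can track such a state exactly.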
It’s not surprising, then, that media reports tend to focus on the number of qubits a quantum computer has at its disposal as the figure of merit. The qubit count is also commonly regarded as the determinant of the machine’s capabilities, most famously in the widely repeated claim that 50 qubits marks the threshold of “quantum supremacy”: the point at which a quantum computer becomes capable of things that are, to all intents and purposes, impossible for classical devices.
The problem is that this is all misleading. What a quantum computer can and can’t accomplish depends on many things, of which the qubit count is just one. For one thing, the quality of the qubits is critical: how noisy they are, and how likely they are to incur errors. There is also the question of their heterogeneity. Qubits manufactured from superconducting circuits will generally differ in their precise characteristics and performance, whereas quantum computers that use trapped-ion qubits benefit from having them all identical. And because qubits can only be kept coherent for short times before quantum decoherence scrambles them, how fast they can be switched determines how many logic operations you can perform in the time available. The power of the device then also depends on the number of successive gate operations your algorithm needs: its so-called depth.
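As a crude illustration of that trade-off between coherence time and gate speed, the sketch below estimates how many sequential gates fit inside the coherence window. The numbers are placeholders chosen for illustration, not the measured specifications of any real device:

```python
# Back-of-the-envelope estimate of how many sequential gate operations
# (circuit "depth") fit inside a qubit's coherence window.
# Placeholder numbers for illustration, not real device specs.
coherence_time_us = 100.0    # assumed coherence time, in microseconds
gate_time_ns = 200.0         # assumed gate duration, in nanoseconds

max_depth = coherence_time_us * 1_000 / gate_time_ns
print(f"Roughly {max_depth:.0f} sequential gates before decoherence takes over")
```

Halve the gate time or double the coherence time and the usable depth doubles, without adding a single qubit.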
There is also the question of connectivity: does every qubit couple with every other, or are they for instance coupled only to two neighbours in a linear array?
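Connectivity matters because a two-qubit gate between qubits that aren’t directly coupled has to be routed via extra SWAP operations, each of which costs gates and depth. A toy sketch (illustrative, not tied to any particular device) makes the overhead concrete:

```python
# Toy illustration of connectivity overhead on a linear chain: a two-qubit
# gate between distant qubits must first be routed by swapping states along
# the chain, whereas all-to-all connectivity needs no extra operations.
def swaps_needed_linear(i: int, j: int) -> int:
    """Extra SWAP gates needed to bring qubits i and j adjacent on a linear array."""
    return max(abs(i - j) - 1, 0)

print(swaps_needed_linear(0, 1))   # neighbours: 0 extra SWAPs
print(swaps_needed_linear(0, 10))  # far apart: 9 extra SWAPs, each adding gates and depth
```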
The performance of a quantum computer therefore needs a better figure of merit than a crude counting of qubits. Researchers at IBM have suggested one, which they call the “quantum volume” – an attempt to fold all of these features into a single number. Even this isn’t a way of declaring which of two devices “performs better” across the board: it quantifies the power of a device for a particular computation. Performance will depend on what you’re asking the machine to do, and particular architectures and hardware will work better for some tasks than for others (see here).
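To sketch the idea: one early formulation from the IBM researchers takes roughly the form V_Q = max over n of [min(n, d(n))]^2, where n is the number of qubits used and d(n) ≈ 1/(n·ε) is the circuit depth achievable at an effective error rate ε, which itself folds in gate quality and connectivity overhead. The published metric has since been refined, so the Python below is an illustration of the principle rather than the official definition; the function name and the numbers fed into it are my own placeholders.

```python
# Illustrative sketch of an early "quantum volume"-style metric:
# V_Q = max over n of [min(n, achievable_depth(n))]^2, with
# achievable_depth(n) ~ 1 / (n * eps_eff).  Not the official, refined
# definition, just enough to show that qubit count alone isn't decisive.
def quantum_volume_estimate(num_qubits: int, eps_eff: float) -> int:
    best = 0.0
    for n in range(1, num_qubits + 1):
        achievable_depth = 1.0 / (n * eps_eff)   # depth before errors dominate
        best = max(best, min(n, achievable_depth) ** 2)
    return int(best)

# More qubits don't automatically win if the effective error rate is higher:
print(quantum_volume_estimate(72, 0.01))   # many, noisier qubits  -> 100
print(quantum_volume_estimate(20, 0.001))  # fewer, cleaner qubits -> 400
```

In this toy calculation a smaller machine with cleaner qubits comes out ahead of a larger, noisier one, which is exactly the point of moving beyond a raw qubit count.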
As a result, a media tendency to present quantum computation as a competition between rivals – IBM vs Google, superconducting qubits vs trapped ions – does the field no favours. Of course one can’t deny that competitiveness exists, as well as a degree of commercial secrecy – this is a business with huge stakes, after all. But no one expects any overall “winner” to be anointed. It’s unfortunate, then, that this is how things look if we judge from the “qubit counter” created by MIT Tech Review. As a rough-and-ready timeline of how the applied tech of the field is evolving, this might be just about defensible. But some fear that this sort of presentation does more harm than good, and we should certainly not see it as a guide to who is currently “in the lead”.