# Quantum Supremacy

A few days ago, the Financial Times reported that “Google claims to have reached quantum supremacy”. The paper in question, available here, explains how they reached this milestone, and how they proved it. It does raise the question, though: what *is* quantum supremacy?

The experiment Google conducted is essentially an applied physics experiment: until quantum supremacy is actually demonstrated, there may be some “hidden laws of physics” that prevent it. As Feynman once said: “if you think you understand quantum mechanics, you don’t understand quantum mechanics”. Engineers have a knack for pushing what they build just beyond the boundaries of what science can explain and understand. This experiment is one of those occasions where they did just that.

The experiment had a quantum computer generate random numbers with a certain verifiable pattern, and compared the time this took with the time a classical computer would need to do the same thing. Now, classical computers are inherently incapable of generating true random numbers, while quantum computers can, so at some level the comparison started out skewed in favor of the quantum computer. Regardless, both the quantum computer and the classical computer had a number of calculations to do to arrive at the interference pattern the quantum computer generated.

You can compare this with the famous two-slit experiment Feynman liked to use to illustrate quantum mechanics, in which photons going through two slits interfere with themselves and generate a pattern on the photosensitive film behind the slits. A similar interference pattern was expected in the random numbers generated by the quantum computer, due to the way they were produced using a number of interconnected qubits. To produce the same type of pattern, a classical computer doing something equivalent basically has a lot of linear algebra to compute.
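To get a feel for what that linear algebra looks like, here’s a toy sketch of my own (not Google’s actual circuit): evolve a small statevector with random two-qubit gates on neighboring qubits, then look at the output distribution. Some bitstrings come out far more probable than others — that skewed, speckled distribution is the “interference pattern” in question.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4                              # toy register; Sycamore used 53 qubits

def random_unitary(dim):
    """A random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def apply_gate(state, gate, targets):
    """Contract a gate onto `targets` -- this is the matrix arithmetic a
    classical simulator has to grind through for every single gate."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, targets, list(range(len(targets))))
    shape = psi.shape
    psi = (gate @ psi.reshape(2 ** len(targets), -1)).reshape(shape)
    return np.moveaxis(psi, list(range(len(targets))), targets).reshape(-1)

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                     # start in |0000>
for _ in range(8):                 # a few "cycles" of neighbor-pair gates
    for pair in [(0, 1), (2, 3), (1, 2)]:
        state = apply_gate(state, random_unitary(4), list(pair))

probs = np.abs(state) ** 2
print(probs.round(3))              # non-uniform: some bitstrings dominate
```

At 4 qubits this is trivial; the point is that the statevector doubles in size with every qubit added, which is exactly what makes simulating 53 of them so painful.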

The paper goes into some detail on how they verified the accuracy with which they could read the values from the qubits, and the fidelity of the quantum computer’s operation. In doing so, they found that one of the 54 qubits didn’t work, so they left it out of the experiment. The quantum computer itself was designed as a lattice (kinda like a net) of qubits, connected to each other with a kind of bridge. Each qubit is connected to up to four neighboring qubits. The bridges are used to implement gates, allowing the computer to perform certain basic operations.
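The lattice idea can be sketched in a few lines (a hedged illustration — the real Sycamore layout differs in detail, and the grid shape and dead-qubit position here are my assumptions): qubits on a grid, each coupled to at most four neighbors, with one broken qubit removed, leaving the 53 that ran the experiment.

```python
rows, cols = 6, 9                  # 54 sites (assumed grid shape)
dead = (5, 8)                      # hypothetical position of the dead qubit
working = [(r, c) for r in range(rows) for c in range(cols) if (r, c) != dead]

def neighbors(q):
    """Qubits reachable over one bridge: up, down, left, right."""
    r, c = q
    cand = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [p for p in cand if p in working]

print(len(working))                             # 53
print(max(len(neighbors(q)) for q in working))  # 4
```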

Classical computers as we know them today are Von Neumann machines: they have random access memory that allows the central processing unit (CPU) to access any part of memory, load it into its registers, and operate on those registers. Typically, a CPU has at least eight general-purpose registers that it can use with any operation and access quickly; many computers have far more than that. A quantum computer, or at least *this* quantum computer, doesn’t have that: instead, it operates on pairs of qubits, but you can’t arbitrarily select the qubits in a pair. You can arbitrarily select the first qubit, but the second one has to be right next to it and connected to it with a bridge, and you can only activate one bridge per qubit at a time. This still allows you to do a whole bunch of operations at the same time, and any single operation only takes a fraction of a fraction of a second (a few nanoseconds), but it’s still rather limiting. The potential of future quantum computers is much greater than what Google was able to accomplish here (and they say as much in the paper).
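The “one bridge per qubit” constraint means that, within a single cycle, the active pairs can’t share a qubit — in graph terms, they have to form a matching on the lattice. A tiny sketch of one such layer (my own example, not the paper’s actual gate pattern):

```python
# One layer of two-qubit operations on a 2x4 grid, pairing each qubit
# with its right-hand neighbor in columns 0-1 and 2-3.
rows, cols = 2, 4
layer = [((r, c), (r, c + 1))
         for r in range(rows)
         for c in range(0, cols - 1, 2)]

used = [q for pair in layer for q in pair]
assert len(used) == len(set(used))   # no qubit sits on two active bridges
print(layer)
```

Alternating between a few such layers from cycle to cycle is what lets every bridge get used while still operating on many pairs in parallel.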

The way they estimated how much time a classical computer would take to do the same calculations is also interesting: the supercomputer they used, which had 100,000 cores and 250 TB of RAM, couldn’t emulate a quantum computer with 53 qubits: it could simulate up to 43 qubits before running out of memory. So they used the Google data centers to do part of the simulation, in addition to the supercomputer, and combined the results. They estimated that running the 20 cycles they ran on the quantum computer, with the 53 qubits they had to simulate, would take several millennia. The quantum computer did the same in a few minutes.
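The back-of-the-envelope arithmetic for why 43 qubits is about the limit: an n-qubit statevector holds 2**n complex amplitudes, each taking 16 bytes in double precision (my assumption about the representation — other simulation strategies trade memory for time differently).

```python
def statevector_bytes(n, bytes_per_amplitude=16):
    """Memory needed to hold a full n-qubit statevector."""
    return (2 ** n) * bytes_per_amplitude

print(statevector_bytes(43) / 1e12)   # ~140 TB -- close to the 250 TB of RAM
print(statevector_bytes(53) / 1e15)   # ~144 PB -- hopeless for any one machine
```

Every extra qubit doubles the memory bill, which is why the gap between 43 and 53 qubits is a factor of a thousand, not a factor of ten.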

That’s what quantum supremacy basically is: for a class of problems for which quantum computers are inherently better suited than classical computers (which is basically anything that involves linear algebra), a quantum computer can be several orders of magnitude faster than a classical computer at the same task. They showed this by turning the quantum computer into a very expensive random number generator, in which the random numbers followed a certain pattern. They could verify the operation of the quantum computer within a reasonable time as long as they didn’t run too many cycles, but once they ran the full program (the “supremacy regime”), there was just no way to verify with a classical computer that it still came up with the same type of output.

Random numbers are immensely important, and the way they generated these shows that a qubit really does behave the way we’re taught to think it behaves. It doesn’t tell us anything about which interpretation of quantum mechanics is better (so that discussion will go on unabated), but it does show that the promise is real. In the introduction to the paper they talk about a few of those promises: optimization, machine learning, materials science, chemistry, and factorization. The program implemented here doesn’t get us any closer to any of those, and Shor’s factorization algorithm in particular is still a while away (many a cyber security expert I know is breathing a sigh of relief), **but**…

With quantum supremacy attained for at least this particular class of problems, and IBM putting a 53-qubit quantum computer as well as three 20-qubit quantum computers in the cloud, quantum computers that can break a 2048-bit RSA key using Shor’s algorithm are only about a decade away. *We don’t have viable quantum-resistant alternatives to RSA yet!* We do have a viable quantum-resistant alternative to Diffie-Hellman, but that is not enough to implement PKI.
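To make the RSA threat concrete, here’s a toy sketch of the *classical* half of Shor’s algorithm: once you know the multiplicative order r of some a modulo N — the part a quantum computer finds exponentially faster — the factors of N fall out with a couple of gcd computations. Below, r is found by brute force, which only works for tiny N; a real RSA modulus is exactly where that brute force becomes hopeless.

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r == 1 (mod N). Brute force stands in for
    the quantum period-finding step here."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(a, N):
    """Turn the order r into factors of N, when we get lucky with a."""
    r = order(a, N)
    if r % 2:
        return None                 # need an even order; retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                 # trivial square root; retry with another a
    return gcd(y - 1, N), gcd(y + 1, N)

print(shor_classical_part(2, 15))   # -> (3, 5)
```

The gcd step is cheap at any key size; it’s only the period finding that protects RSA today — and that’s precisely the step Shor’s algorithm hands to a quantum computer.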

I’ll try to find some time to explain that later.