Ordinary Computers Can Beat Google’s Quantum Computer After All | Science

If the era of quantum computing began 3 years ago, its rising sun may have been hiding behind a cloud. In 2019, Google researchers claimed they had passed a milestone known as quantum supremacy when their Sycamore quantum computer performed in 200 seconds an abstruse computation they said would tie up a supercomputer for 10,000 years. Now, scientists in China have done the computation in a matter of hours using ordinary processors. A supercomputer, they say, could beat Sycamore outright.

“I think they’re right that if they had access to a big enough supercomputer, they could have simulated the … task in seconds,” says Scott Aaronson, a computer scientist at the University of Texas at Austin. The advance takes some of the luster off Google’s claim, says Greg Kuperberg, a mathematician at the University of California, Davis. “Getting 300 feet from the summit is less exciting than reaching the summit.”

However, the promise of quantum computing remains unaltered, say Kuperberg and others. And Sergio Boixo, lead scientist at Google Quantum AI, said in an email that the Google team knew its lead might not last long. “In our 2019 article, we stated that classical algorithms would be improved,” he said. But “we don’t think this classical approach will keep pace with quantum circuits in 2022 and beyond.”

The “problem” solved by Sycamore was designed to be hard for a conventional computer but as easy as possible for a quantum computer, which manipulates qubits that can be set to 0, 1 or, thanks to quantum mechanics, any combination of 0 and 1 at the same time. Together, Sycamore’s 53 qubits, tiny resonant electrical circuits made of superconducting metal, can encode any number from 0 to 2⁵³ (about 9 quadrillion), or even all of them at once.

Starting with all qubits set to 0, the Google researchers applied a random but fixed set of logical operations, or gates, to individual qubits and pairs of qubits over 20 cycles, then read out the qubits. Roughly speaking, quantum waves representing all possible outputs sloshed among the qubits, and the gates created interference that reinforced some outputs and canceled others, so some outcomes should have appeared more likely than others. Over millions of trials, a spiky output pattern emerged.
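The spiky pattern described above can be seen in a toy NumPy sketch. This is not the Google team's code: it stands in for Sycamore's 20 cycles of random gates with a single random unitary on a small 8-qubit register, which produces the same qualitative effect, a few bit strings ending up far more probable than a uniform spread would allow.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8              # toy register (Sycamore used 53 qubits)
dim = 2 ** n       # number of possible output bit strings

# Start in the all-zeros state |00...0>.
state = np.zeros(dim, dtype=complex)
state[0] = 1.0

# A random unitary via QR decomposition of a complex Gaussian matrix,
# standing in for the circuit's random one- and two-qubit gates.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
d = np.diagonal(r)
q = q * (d / np.abs(d))    # fix phases so the unitary is unbiased
state = q @ state

# Interference makes some outputs likely and cancels others.
probs = np.abs(state) ** 2

# A uniform distribution would put 1/dim on every string; a random
# circuit instead spreads the weights out in a "spiky" pattern.
print(f"uniform weight: {1 / dim:.5f}")
print(f"max weight:     {probs.max():.5f}")
print(f"min weight:     {probs.min():.7f}")
```

Sampling bit strings from `probs` and histogramming them over millions of trials would reproduce the fingerprint-like spikiness the article describes.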

Google researchers said simulating those interference effects would overwhelm even Summit, a supercomputer at Oak Ridge National Laboratory with 9,216 central processing units and 27,648 faster graphics processing units (GPUs). Researchers at IBM, which developed Summit, quickly retorted that if they harnessed every bit of hard drive available to the machine, it could handle the computation in a matter of days. Now, Pan Zhang, a statistical physicist at the Institute of Theoretical Physics of the Chinese Academy of Sciences, and colleagues have shown how to beat Sycamore in a study in press at Physical Review Letters.

Following others, Zhang and colleagues recast the problem as a 3D mathematical array called a tensor network. It consisted of 20 layers, one for each cycle of gates, with each layer comprising 53 points, one for each qubit. Lines connected the points to represent the gates, with each gate encoded in a tensor, a 2D or 4D grid of complex numbers. Running the simulation then reduced essentially to multiplying all the tensors together. “The advantage of the tensor network method is that we can use many GPUs to perform the computations in parallel,” Zhang says.
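The idea of a circuit as a network of tensors can be sketched in a few lines of NumPy. This minimal example (not the authors' code, and tiny compared with their 53-qubit, 20-layer network) encodes two one-qubit Hadamard gates as 2D tensors and one two-qubit controlled-Z gate as a 4D tensor, then multiplies them all along their shared lines with a single `einsum` contraction:

```python
import numpy as np

# A one-qubit gate is a 2x2 tensor; a two-qubit gate is a 2x2x2x2
# tensor (two input legs, two output legs).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
CZ = np.diag([1, 1, 1, -1.0]).reshape(2, 2, 2, 2)  # controlled-Z gate

# The initial state |00> as two rank-1 tensors, one per qubit.
q0 = np.array([1.0, 0.0])
q1 = np.array([1.0, 0.0])

# Contracting the network = multiplying tensors along shared legs.
# Indices: a,b feed the Hadamards; i,j link them to the CZ;
# x,y are the output legs.
amplitudes = np.einsum("a,b,ia,jb,xyij->xy", q0, q1, H, H, CZ)

probs = np.abs(amplitudes) ** 2
print(probs)   # all four outcomes 00, 01, 10, 11 have probability 0.25
```

On real hardware, libraries hand such contractions to GPUs, which is what lets many of the tensor multiplications run in parallel, as Zhang notes. Cutting a line in the network corresponds to dropping one of the shared indices from the contraction.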

Zhang and colleagues also relied on a key insight: Sycamore’s calculation was far from exact, so theirs didn’t need to be either. Sycamore calculated the distribution of outputs with an estimated fidelity of 0.2%, just enough to distinguish the fingerprint-like spikiness from the noise in the circuitry. So Zhang’s team traded accuracy for speed by cutting some lines in its network and eliminating the corresponding gates. Losing just eight lines made the computation 2⁸, or 256, times faster while maintaining a fidelity of 0.37%.

The researchers calculated the output pattern for 1 million of the 9 quadrillion possible number strings, relying on an innovation of their own to obtain a truly random, representative set. The computation took 15 hours on 512 GPUs and yielded the telltale spiky output. “It’s fair to say that Google’s experiment has been simulated on a conventional computer,” says Dominik Hangleiter, a quantum computer scientist at the University of Maryland, College Park. On a supercomputer, the computation would take a few dozen seconds, Zhang says, 10 billion times faster than the Google team estimated.

The advance underscores the pitfalls of racing a quantum computer against a conventional one, the researchers say. “There’s an urgent need for better quantum supremacy experiments,” Aaronson says. Zhang suggests a more practical approach: “We should find some real-world applications to demonstrate the quantum advantage.”

Still, Google’s demonstration wasn’t just hype, the researchers say. Sycamore required far fewer operations and less power than a supercomputer, Zhang notes. And if Sycamore had had slightly higher fidelity, he says, his team’s simulation couldn’t have kept up. As Hangleiter puts it, “Google’s experiment did what it was supposed to do: start this race.”
