About Google’s Self-Proclaimed Quantum Supremacy and its Impact on Artificial Intelligence

Google claimed quantum supremacy, IBM challenged it… but the development is really important for the future of AI.



Last week, Google sparked controversy in the scientific community by claiming that it had achieved the long-anticipated milestone known as quantum supremacy. In a paper published in Nature, Google described the experiments conducted on a new quantum machine, code-named Sycamore, which purportedly demonstrate the famous benchmark. It took only a few hours for IBM, Google's archrival in the race toward quantum dominance, to publish a paper refuting Google's claims, sparking a passionate debate within the computer science community. Despite the controversy surrounding Google's claims, there is no doubt that the release of Sycamore represents a major milestone in demonstrating the viability of quantum systems, and it has profound ramifications across other technology fields. In the case of artificial intelligence (AI), there has been a lot of speculation about how the advent of quantum computing will affect AI programs. However, not many people think about how AI can influence the development of quantum computing. Today, I would like to explore that thesis in more detail from the perspective of the quantum supremacy argument.


What is Quantum Supremacy?

The term quantum supremacy was originally coined in 2012 by John Preskill, a theoretical physicist at the California Institute of Technology, as a generic way to describe the point at which quantum computers can do things that are unachievable by classical computers. The term was immediately embraced by the quantum community, but different experts developed different interpretations of what it means in practice.

The controversy surrounding the term quantum supremacy has to do with the practicality of certain computations. Given enough time, classical computers can solve the same problems as quantum computers. However, the time required for those calculations may prove impractical for any real-world problem. In the case of Google, their paper claims that the Sycamore processor took 200 seconds to perform a calculation that the world's best supercomputer, which happens to be IBM's Summit machine, would need 10,000 years to match. That time doesn't seem very practical. However, IBM claims that its supercomputer can solve the same puzzle in 2.5 days, which seems a bit more practical for some tasks.

The target experiment focused on performing a specialized computation known as “random circuit sampling.” To describe the experiment, imagine a team of programmers composing quantum algorithms from a small dictionary of elementary gate operations. Since each gate has a probability of error, they would want to limit themselves to a modest sequence of about a thousand total gates. Assuming these programmers have no prior experience, they might create what essentially looks like a random sequence of gates, which one could think of as the “hello world” program for a quantum computer. Because there is no structure in random circuits that classical algorithms can exploit, emulating such quantum circuits typically takes an enormous amount of classical supercomputer effort. Each run of a random quantum circuit on a quantum computer produces a bitstring, for example 0000101. Finding the most likely bitstrings for a random quantum circuit on a classical computer becomes exponentially more difficult as the number of qubits (width) and the number of gate cycles (depth) grow. Google’s experiment first ran random simplified circuits from 12 up to 53 qubits, keeping the circuit depth constant. Once the team verified that the system was working, they ran random hard circuits with 53 qubits and increasing depth, until reaching the point where classical simulation became infeasible.
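To make the width/depth structure concrete, here is a minimal sketch of random circuit sampling built on a toy statevector simulator in plain NumPy. All function names are my own; this runs at a scale of a few qubits, nothing like Google's 53-qubit circuits, and is only meant to show why adding qubits and layers makes classical simulation blow up exponentially.

```python
import numpy as np

def random_single_qubit_gate(rng):
    """A (roughly Haar-)random 2x2 unitary via QR decomposition of a random complex matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix phases so the result is uniform

def apply_gate(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, qubit, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_two_qubit(state, gate, q1, q2, n):
    """Apply a 4x4 gate to qubits q1 and q2 of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, (q1, q2), (0, 1))
    state = np.tensordot(gate.reshape(2, 2, 2, 2), state, axes=([2, 3], [0, 1]))
    state = np.moveaxis(state, (0, 1), (q1, q2))
    return state.reshape(-1)

def random_circuit_sample(n=5, depth=10, shots=8, seed=0):
    """Run a random circuit of given width (n) and depth, then sample output bitstrings."""
    rng = np.random.default_rng(seed)
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                               # start in |00...0>
    cz = np.diag([1, 1, 1, -1]).astype(complex)  # entangling controlled-Z gate
    for _ in range(depth):
        for q in range(n):                       # layer of random single-qubit gates
            state = apply_gate(state, random_single_qubit_gate(rng), q, n)
        for q in range(0, n - 1, 2):             # layer of CZ gates on neighbouring pairs
            state = apply_two_qubit(state, cz, q, q + 1, n)
    probs = np.abs(state) ** 2
    probs /= probs.sum()                         # guard against rounding drift
    samples = rng.choice(2 ** n, size=shots, p=probs)
    return [format(s, f"0{n}b") for s in samples]
```

The cost of the classical emulation is visible in the code itself: the statevector has 2^n complex amplitudes, so each added qubit doubles both memory and work per gate layer.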

I think the Google-IBM argument obsesses over an unimportant aspect of what should be considered a major milestone for quantum computing. Maybe Google misspoke and IBM overreacted, but that doesn’t change the fundamentals of the situation. For all intents and purposes, Google’s Sycamore was able to solve a very difficult mathematical problem using a different approach than classical computers. This demonstrates that we can build quantum systems accurate enough to solve problems we couldn’t solve before, and that quantum computing is here to stay.

The evolution of quantum computing technologies is likely to disrupt many computational fields. Among those, nothing intrigues industry experts more than the relationship between quantum computing and AI.


The Bidirectional Relationship Between Quantum Computing and AI

Applying the quantum supremacy argument to AI, we can intuitively assume that the advent of quantum computing will enable the creation of new neural network paradigms that are impossible today. However, recent developments in AI are also influencing the evolution of quantum technologies.


How can Quantum Computing Influence AI? Quantum Neural Networks

Quantum neural networks (QNNs) are an emerging deep learning paradigm that promotes the creation of neural networks that can run on quantum computing architectures. The work on QNNs is still very nascent, but we are already seeing some interesting developments.

In a recent paper titled “Classification with Quantum Neural Networks on Near Term Processors”, Google proposed a QNN model focused on classification tasks. The architecture of a QNN contrasts with that of traditional deep neural networks. Instead of hidden layers, a QNN is formed by entangling actions, or “quantum gates”, on qubits. In a superconducting qubit setup, this could be enacted through a microwave control pulse corresponding to each gate. Google trained its QNN on the famous MNIST dataset, and the results were very impressive.
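To give a flavor of the idea, here is a deliberately tiny single-qubit sketch in NumPy, loosely inspired by the QNN paradigm rather than reproducing Google's architecture. The input is encoded as a rotation gate, a trainable rotation plays the role of a parameterized quantum layer, and the label is read out as a Pauli-Z expectation value; the gradient uses the parameter-shift rule, which is exact for rotation gates.

```python
import numpy as np

def ry(angle):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    """Encode input x as a rotation, apply a trainable gate, read out <Z> in [-1, 1]."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])  # |0> -> data encoding -> trainable gate
    return state[0] ** 2 - state[1] ** 2              # expectation of Pauli-Z

def train(xs, ys, lr=0.2, epochs=200):
    """Fit the single gate parameter to labels in {-1, +1} by gradient descent."""
    theta = 0.1
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            # parameter-shift rule: exact derivative of <Z> w.r.t. the rotation angle
            grad = 0.5 * (predict(x, theta + np.pi / 2) - predict(x, theta - np.pi / 2))
            theta -= lr * 2 * (predict(x, theta) - y) * grad
    return theta
```

For example, training on inputs near 0 labeled +1 and inputs near pi labeled -1 learns a rotation that separates the two classes. A real QNN stacks many such parameterized gates, including entangling ones across qubits, but the encode-entangle-measure loop is the same shape.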


How can AI Influence Quantum Computing? Reinforcement Learning for Quantum Control

While the influence of quantum computing on AI technologies is pretty obvious, not many people think about the opposite relationship. The creation of quantum computing architectures is full of challenges that are hard to solve with traditional discrete computation techniques and are better suited for AI models. One of those problems, known as quantum control optimization, focuses on the design of quantum controls to translate each quantum algorithm into a set of analog control signals that accurately steer the quantum computer around the Hilbert space. The precise choice of these controls ultimately governs the fidelity and speed of each quantum operation. Until now, quantum computing has lacked a universal control framework that facilitates optimization over major experimental non-idealities under systematic constraints, and this gap has constrained the creation of quantum architectures.

In a paper titled “Universal Quantum Control through Deep Reinforcement Learning”, Google proposed a deep reinforcement learning (DRL) method that overcomes some of the major challenges of quantum control optimization. The novelty of this new quantum control paradigm hinges upon the development of a quantum control function and an efficient optimization method based on DRL.

Google’s proposed architecture is based on a three-layer, fully connected neural network known as the policy NN. Similarly, the control cost function is modeled as a second neural network (the value NN), which encodes the discounted future reward. The numerical simulations in Google’s proposed framework showed a 100x reduction in quantum gate errors and reduced gate times for a family of continuously parameterized simulation gates by an average of one order of magnitude compared with traditional approaches using a universal gate set.
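As a rough illustration of the approach, and emphatically not Google's actual implementation, the sketch below trains a tiny softmax policy with REINFORCE to pick discrete control pulses that steer a single qubit's rotation angle toward a target state; a running average of the reward stands in for the value network, and the terminal fidelity plays the role of the control cost. All names and parameters here are my own toy choices.

```python
import numpy as np

def fidelity(angle, target=np.pi):
    """Overlap between the state reached by rotating `angle` and the target state."""
    return np.cos((angle - target) / 2) ** 2

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run_episode(w, rng, steps=8, pulse=np.pi / 8):
    """Roll out one control sequence; return the trajectory and the final fidelity reward."""
    angle, traj = 0.0, []
    for _ in range(steps):
        feats = np.array([np.cos(angle), np.sin(angle), 1.0])  # simple state features
        probs = softmax(w @ feats)  # 3 actions: negative pulse, no pulse, positive pulse
        a = rng.choice(3, p=probs)
        traj.append((feats, a, probs))
        angle += (a - 1) * pulse
    return traj, fidelity(angle)

def train(episodes=2000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros((3, 3))   # policy weights: actions x features
    baseline = 0.0         # running average reward, a stand-in for the value network
    for _ in range(episodes):
        traj, reward = run_episode(w, rng)
        baseline += 0.05 * (reward - baseline)
        for feats, a, probs in traj:
            # REINFORCE: grad of log softmax-policy is (onehot(a) - probs) x feats
            grad = -np.outer(probs, feats)
            grad[a] += feats
            w += lr * (reward - baseline) * grad
    return w
```

Even in this toy setting, the division of labor mirrors the paper's framing: a policy network proposes control signals, and a value estimate of the expected reward shapes the updates that improve gate fidelity.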

As you can see, AI and quantum computing are destined to evolve together to challenge classical computing paradigms. While quantum computing will trigger the creation of new AI models for this novel type of infrastructure, AI will enable the creation of better quantum architectures. In spite of the controversies, Google’s “quantum supremacy” moment represents a major milestone in the evolution of quantum computing and one that can influence the near future of AI technologies.

Original. Reposted with permission.