Our new AI system improves the reliability of quantum computers by accurately identifying errors inside them.
Quantum computers have the potential to revolutionize fundamental physics, drug discovery, and material design if they can be made to operate reliably.
Problems that would take a classical computer billions of years could be solved by a quantum computer in a matter of hours. However, quantum processors are far more susceptible to noise than conventional chips. To make quantum computers dependable, especially at scale, these errors must be accurately identified and corrected.
In a paper published today in Nature, we describe AlphaQubit, an AI-based decoder that identifies quantum computing errors with state-of-the-art accuracy. This collaborative work combined Google DeepMind's machine learning expertise with Google Quantum AI's error correction expertise to accelerate progress toward building a reliable quantum computer.
Accurately identifying errors is a critical step toward making quantum computers capable of performing complex calculations at scale, which would open up many new research areas and scientific breakthroughs.
Correcting errors in quantum computing
By harnessing the unique properties of matter at the smallest scales, such as superposition and entanglement, quantum computers can perform certain classes of complex calculations in far fewer steps than classical computers. The technology relies on qubits, or quantum bits, which can sift through vast numbers of possibilities using quantum interference to arrive at an answer.
Numerous factors, such as heat, vibration, electromagnetic interference, minute hardware defects, and even ever-present cosmic rays, can disrupt a qubit's delicate quantum state.
Quantum error correction offers a way forward by using redundancy: several physical qubits are grouped into a single logical qubit, on which regular consistency checks are performed. A decoder then preserves the quantum information by using these consistency checks to identify and correct errors in the logical qubit.
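As a toy illustration of this idea, consider the classical repetition code, a deliberately simplified stand-in for the surface code used in the paper. A logical bit is stored redundantly across several physical bits, consistency (parity) checks flag disagreements between neighbors, and a simple decoder recovers the logical value by majority vote. All names below are illustrative, not from the paper:

```python
import random

def encode(bit, n=5):
    """Encode one logical bit redundantly across n physical bits."""
    return [bit] * n

def apply_noise(bits, p, rng):
    """Flip each physical bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def consistency_checks(bits):
    """Parity checks between neighboring bits: 1 signals a disagreement."""
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

def decode(bits):
    """Majority vote recovers the logical bit if fewer than half flipped."""
    return int(sum(bits) > len(bits) // 2)

rng = random.Random(0)
noisy = apply_noise(encode(1), p=0.1, rng=rng)
print(consistency_checks(noisy))  # nonzero entries flag error locations
print(decode(noisy))
```

The decoder in a real quantum computer faces a much harder version of this problem: it only ever sees the consistency-check outcomes, never the qubits themselves, and must infer the most likely error pattern from that indirect evidence.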
Nine physical qubits (small gray circles) arranged in a grid of side length three (the code distance) form a logical qubit. At each time step, eight additional qubits perform consistency checks (square and semicircular regions; blue and magenta when failing, gray otherwise), whose results are passed to the neural-network decoder, AlphaQubit. At the end of the experiment, AlphaQubit determines which errors occurred.
Setting up a neural network to decode
AlphaQubit is a neural-network decoder built on Transformers, the deep learning architecture developed at Google that underpins many of today's large language models. Using the consistency checks as input, its task is to correctly predict whether the logical qubit has flipped from its intended state when measured at the end of the experiment. We began by training our model to decode data from a set of 49 qubits inside a Sycamore quantum processor, the central computational unit of the quantum computer. To teach AlphaQubit the general decoding problem, we used a quantum simulator to generate hundreds of millions of samples across a variety of settings and error levels. We then fine-tuned AlphaQubit for a specific decoding task using hundreds of experimental samples from a particular Sycamore processor.
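To make the shape of such training data concrete, here is a minimal sketch of how simulated decoding examples might be generated, again using a repetition-code stand-in with simple bit-flip noise rather than the surface-code simulations used in the paper. Every function and parameter name is hypothetical:

```python
import random

def make_sample(n_qubits, n_rounds, p, rng):
    """Simulate one training example: a history of consistency checks
    plus the label the decoder must predict, i.e. whether the
    logical bit has flipped by the end of the experiment."""
    bits = [0] * n_qubits  # logical 0 stored redundantly
    syndromes = []
    for _ in range(n_rounds):
        # Independent bit-flip noise on each physical qubit per round.
        bits = [b ^ (rng.random() < p) for b in bits]
        # Parity checks between neighbors, recorded each round.
        syndromes.append([bits[i] ^ bits[i + 1]
                          for i in range(n_qubits - 1)])
    label = int(sum(bits) > n_qubits // 2)  # did the logical bit flip?
    return syndromes, label

rng = random.Random(42)
# A tiny dataset spanning two error levels, mimicking the idea of
# pretraining across "a variety of settings and error levels".
dataset = [make_sample(5, 25, p, rng)
           for p in (0.001, 0.01) for _ in range(100)]
print(len(dataset), len(dataset[0][0]))  # 200 samples, 25 rounds each
```

A supervised model trained on pairs like these sees only the syndrome history as input and the final logical flip as the target, which is the setup the paragraph above describes.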
When tested on new Sycamore data, AlphaQubit's accuracy surpassed that of the previous leading decoders. In the largest Sycamore experiments, AlphaQubit makes 6% fewer errors than tensor network methods, which are highly accurate but impractically slow. AlphaQubit also makes 30% fewer errors than correlated matching, an accurate decoder that is fast enough to scale.
Scaling AlphaQubit for future systems
Quantum computing is also expected to grow well beyond today's capabilities. To test AlphaQubit's ability to adapt to larger devices with lower error levels, we trained it using data from simulated quantum systems of up to 241 qubits, more than was available on the Sycamore platform. Once again, AlphaQubit outperformed leading algorithmic decoders, suggesting that it will also work with mid-sized quantum devices in the future.
The system also demonstrated advanced features, such as accepting and reporting confidence levels on its inputs and outputs. These information-rich interfaces could further improve the performance of the quantum processor.
Furthermore, despite being trained on samples that included at most 25 rounds of error correction, AlphaQubit maintained good performance on simulated experiments of up to 100,000 rounds, demonstrating its ability to generalize beyond its training data.
Towards the realization of practical quantum computing
AlphaQubit represents a major milestone in applying machine learning to quantum error correction. But significant challenges remain around speed and scalability. For example, a fast superconducting quantum processor measures each consistency check a million times per second. While AlphaQubit is highly accurate at identifying errors, it is still too slow to correct them in a superconducting processor in real time. As quantum computing grows toward the potentially millions of qubits needed for commercially relevant applications, we will also need more data-efficient ways of training AI-based decoders.
The teams are combining state-of-the-art advances in machine learning and quantum error correction to overcome these challenges and pave the way for reliable quantum computers that can solve some of the world's most difficult problems.