The invention of the abacus in Babylonia around 2000-3000 BC marked the beginning of the human quest to develop ever more powerful devices for computation. From slide rules to personal computers to supercomputers, we have steadily created machines that can help us calculate increasingly difficult quantities. Today, computation is at the heart of every modern technology, from weather forecasting to financial modeling to drug discovery.
But what is computation, and how can its power be quantified? The formal study of computation is intertwined with the development of modern mathematics. A mathematical proof guarantees the truth of a statement from a set of axioms by connecting the two with a series of deductive steps. An algorithm, a notion that traces back to the ninth-century Persian mathematician al-Khwārizmī [1], is one way of proving a statement: it provides a set of instructions that bridge the assumptions to the conclusion through a series of computational steps.
In his 1936 paper, Alan Turing formalized what it means to compute [2]. He proposed what we now call the Turing Machine: an abstract machine that writes and erases symbols on a strip of tape according to a set of rules. Despite its simplicity, the Turing Machine can implement any algorithm and consequently carry out arbitrary computations! Whatever you can do on an abacus, a slide rule, or a supercomputer, you can translate into a computation on a Turing Machine.
As modern computers became a reality in the 20th century, more refined questions arose about the power and efficiency of computation. Is the Turing Machine an efficient model of computation? Are there models of computation that are more powerful or have better practical reach?
These questions made the study of computational complexity a central field of modern computer science. Turing had proved that any computation, irrespective of architectural details and manufacturing, can be reduced to a computation on a Turing Machine. What is more astonishing is that every known model of computation turned out to be reducible to a computation on a Turing Machine with only a mild overhead in resources. This led to the Extended Church-Turing Thesis (ECTT), which asserts that any physically reasonable model of computation can be efficiently simulated on a Turing Machine.
Quantum computing is the only known model of computation that may prove the ECTT false! You see, the Turing Machine is manifestly a 'classical' machine because it operates by the laws of classical physics. A quantum computer, by contrast, fundamentally works under the laws of quantum physics, which allow for new possibilities, such as entanglement, that have no classical counterpart whatsoever. Building quantum computers, however, is tremendously costly and difficult. Are they powerful enough to make the effort worthwhile? Moreover, classical computers have developed so rapidly that their components keep shrinking while their computational power grows. But there is a limit to this trend: if the elementary components of a circuit were made much smaller, quantum effects would become relevant.
By now we have examples of computational tasks that can be run efficiently on a quantum computer yet seem unlikely to be efficient on any classical computer. In fact, they would be practically impossible to run on any classical computer because they would take so long. Examples include the simulation of quantum matter and integer factorization. However, we cannot prove that these tasks are actually hard classically; all we know is that our attempts to find efficient classical algorithms for them have so far failed.
Quantum primacy (formerly known as quantum supremacy) is the event in which a quantum computer efficiently performs a task that would take exponential time on any classical computer. Demonstrating it requires identifying such a task, establishing a solid theoretical guarantee of its classical hardness, and carrying out an experimental demonstration of it on a quantum computer.
The leading candidate task is Random Circuit Sampling: running a quantum circuit composed of many randomly chosen quantum operations and drawing bit-strings from its output distribution. Indeed, Google performed the first experiment for this task in 2019 [3]. Since then, ever larger experiments have been performed.
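To make the task concrete, here is a minimal sketch of Random Circuit Sampling on a tiny instance that a laptop can simulate directly. The gate layout (Haar-random single-qubit gates interleaved with CZ entanglers) is an illustrative assumption, not the exact gate set of the Google experiment; real demonstrations use dozens of qubits, where this kind of brute-force simulation becomes infeasible.

```python
# Minimal sketch of Random Circuit Sampling on a tiny, classically simulable instance.
# NumPy only; the circuit layout is illustrative, not the experiment's actual gate set.
import numpy as np

rng = np.random.default_rng(0)
n_qubits, n_layers, n_samples = 4, 8, 10
dim = 2 ** n_qubits

def haar_random_2x2():
    """Draw a Haar-random single-qubit unitary via QR decomposition."""
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, gate, q):
    """Apply a single-qubit gate to qubit q of the state vector."""
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    return np.moveaxis(psi, 0, q).reshape(dim)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    psi = state.reshape([2] * n_qubits).copy()
    idx = [slice(None)] * n_qubits
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1          # phase flip on the |1,1> component
    return psi.reshape(dim)

# Start in |0...0> and apply alternating layers of random gates and entanglers.
state = np.zeros(dim, dtype=complex)
state[0] = 1.0
for layer in range(n_layers):
    for q in range(n_qubits):
        state = apply_1q(state, haar_random_2x2(), q)
    for q in range(layer % 2, n_qubits - 1, 2):
        state = apply_cz(state, q, q + 1)

# Sample bit-strings from the Born-rule output distribution |<x|psi>|^2.
probs = np.abs(state) ** 2
probs /= probs.sum()
samples = rng.choice(dim, size=n_samples, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])
```

The classical cost of this direct simulation grows exponentially with the number of qubits, which is precisely why sampling from large random circuits is the candidate task for quantum primacy.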
A recent article published in Nature Physics lays a mathematical foundation for the hardness of this experiment [4]. In computer science, we do not have many techniques for proving the hardness of sampling. However, an algorithm due to Stockmeyer implies that if sampling were indeed efficient, then with the help of an NP-oracle (a machine that immediately answers problems in NP) it would be efficient to estimate the output probabilities of quantum computers to within some specific degree of accuracy. The logically equivalent statement (i.e., the contrapositive) is that if estimating the probabilities to that accuracy with the help of the NP-oracle is hard, then sampling is hard.
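Schematically, the argument can be summarized as follows; the notation p_C(x) for the probability of observing bit-string x at the output of circuit C is introduced here only for illustration.

```latex
% Schematic summary of the Stockmeyer-style argument.
% p_C(x): probability of observing bit-string x at the output of circuit C
% (notation introduced here for illustration).
\[
  \text{efficient classical sampling from } p_C
  \;\Longrightarrow\;
  \text{efficient estimation of } p_C(x) \text{ with an NP-oracle (Stockmeyer)}
\]
\[
  \text{Contrapositive:}\qquad
  \text{estimating } p_C(x) \text{ with an NP-oracle is hard}
  \;\Longrightarrow\;
  \text{classical sampling from } p_C \text{ is hard.}
\]
```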
But we know that the output probabilities can encode problems believed to be much harder than anything one can do even with an NP-oracle [5]. If estimating the output probabilities were efficient, then the so-called infinite polynomial hierarchy would collapse to a finite level, which in computer science is believed to be highly implausible.
This paper was the first to give a quantifiable proof of the hardness of estimating output probabilities [4]. A caveat is that we can only prove that estimating the probabilities to a slightly higher degree of accuracy is hard. The proved estimation window of hardness serves as a new hardness criterion for the performance of classical algorithms: it implies the existence of an exponential hardness wall for the approximate classical simulation of most quantum computations. Finally, the paper introduces the Cayley path for interpolating between quantum circuits, which may be of independent interest in quantum cryptography and approximation theory.
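To give a flavor of the construction, below is a minimal numerical sketch of a Cayley-path-style interpolation between two unitaries. The feature it illustrates is that the path stays unitary for every value of the parameter while connecting the two endpoints; the exact parametrization used in the paper may differ, so treat these formulas as an illustrative assumption rather than the paper's definition.

```python
# Minimal sketch of a Cayley-path-style interpolation between two unitaries.
# Illustrative only; the paper's exact parametrization may differ.
import numpy as np

def cayley(h):
    """Cayley transform: maps a Hermitian matrix h to a unitary."""
    eye = np.eye(h.shape[0])
    return (eye + 1j * h) @ np.linalg.inv(eye - 1j * h)

def inverse_cayley(u):
    """Recover a Hermitian h with cayley(h) = u (assumes -1 is not an eigenvalue of u)."""
    eye = np.eye(u.shape[0])
    return -1j * (u - eye) @ np.linalg.inv(u + eye)

def cayley_path(u0, u1, theta):
    """Interpolate from u0 (theta = 0) to u1 (theta = 1); unitary for every real theta."""
    h = inverse_cayley(u0.conj().T @ u1)   # Hermitian "generator" of the path
    return u0 @ cayley(theta * h)

rng = np.random.default_rng(1)
def random_unitary(d):
    """Haar-random d x d unitary via QR decomposition."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Quick check: unitarity along the path and correct endpoints.
u0, u1 = random_unitary(2), random_unitary(2)
for theta in (0.0, 0.5, 1.0):
    u = cayley_path(u0, u1, theta)
    print(theta, np.allclose(u @ u.conj().T, np.eye(2)))
print(np.allclose(cayley_path(u0, u1, 0.0), u0), np.allclose(cayley_path(u0, u1, 1.0), u1))
```

The useful property of such a path, in contrast to an exponential interpolation, is that its matrix entries are low-degree rational functions of the parameter, which is what enables the interpolation arguments used in the hardness proof.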
Our work provides the strongest evidence yet that random circuit sampling is hard for classical computers, suggesting that quantum computers are inherently more powerful than classical ones.
References:
[1] Clegg, Brian. Scientifica Historica: How the World's Great Science Books Chart the History of Knowledge. Ivy Press, 2019, p. 61. ISBN 978-1-78240-879-6.
[2] Turing, Alan Mathison. "On computable numbers, with an application to the Entscheidungsproblem." Proceedings of the London Mathematical Society 2.42 (1936): 230-265.
[3] Arute, Frank, et al. "Quantum supremacy using a programmable superconducting processor." Nature 574.7779 (2019): 505-510.
[4] Movassagh, Ramis. "The hardness of random quantum circuits." Nature Physics (2023): 1-6.
[5] Kondo, Yasuhiro, Ryuhei Mori, and Ramis Movassagh. "Quantum supremacy and hardness of estimating output probabilities of quantum circuits." 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2022.