What will a quantum computer be good for, exactly?

I think we don’t really know yet.
quantum concepts
paper review
Published: May 15, 2023

Is this even a real question?

This post will be a bit different from others, in that I’ll give a high-level review instead of diving into the details of a specific topic. Also, since this is my personal blog, I figured I’d share some personal background on this one.

I got involved in the field of quantum computing relatively recently, in 2021. Before that, while working in theoretical physics, I didn’t even know the field existed, let alone that it was taking off at an industrial scale (yes, I’m known for my breadth of knowledge). After the transition into quantum, I’ve tried to catch up on many levels – from understanding the fundamentals, to getting up to speed with the state of the art in hardware and algorithms, to carrying out some original research. Although I made some progress in all directions, you can imagine how fragmented and incomplete my picture of the field is. While there are still a ton of topics I’d like to understand better, one particular question has been sitting in the back of my mind for quite a while.

What will quantum computers actually be good for?

ChatGPT is probably a good way to summarize folk knowledge. So, I asked it this exact question and got a pretty long list (click to unfold).

  1. Cryptography: Quantum computers have the potential to break many of the currently used encryption methods and could be used to develop new, more secure encryption methods that are resistant to quantum attacks.
  2. Optimization: Quantum computers could be used to solve optimization problems exponentially faster than classical computers, potentially revolutionizing industries ranging from finance to transportation to healthcare.
  3. Chemistry and materials science: Quantum computers can simulate the behavior of molecules and materials at the quantum level, which could help in the development of new drugs and materials.
  4. Machine learning and artificial intelligence: Quantum computers could potentially outperform classical computers in tasks such as image and speech recognition, which could have applications in areas such as autonomous vehicles and robotics.
  5. Financial modeling: Quantum computers could be used to quickly analyze large amounts of financial data and predict stock prices or other market trends.
  6. Supply chain optimization: Quantum computers could be used to optimize supply chain operations, such as routing and scheduling, to improve efficiency and reduce costs.
  7. Weather forecasting: Quantum computers could be used to simulate complex weather patterns and improve the accuracy of weather forecasting.
  8. Energy production: Quantum computers could be used to optimize energy production and distribution, potentially reducing energy costs and improving energy efficiency.
  9. Particle physics: Quantum computers could be used to analyze data from particle accelerators and help physicists understand the behavior of particles at the subatomic level.
  10. Transportation: Quantum computers could be used to optimize transportation networks, including traffic flow, route planning, and logistics.

And indeed, all these topics, and many more, are frequently covered in popular discussions and technical papers on quantum computing. For a non-expert, it would probably be nearly impossible to critically sort through these and resist the impression that quantum computers will be good for everything (and soon). However, my exposure to the field had already made me skeptical about many of the usually proposed applications, and I felt a pressing need to sort this out for myself. I now feel like I’ve mostly done my homework on this one, and here is what I found.

We don’t really know yet what quantum computers will be good for.

And among the zoo of proposed applications, most are rather speculative, while a single one stands out as the most promising. You can read on to see what I have to back up these claims, or jump directly to the very short summary section (Section 10).

Disclaimers

Note

I will specifically focus on quantum computation, not sensing or communication. Moreover, I will look for practical, useful problems, not just any possible demonstration of quantum advantage.

Warning

As mentioned, my expertise in the field is limited, so take my assessment critically. I will cite many sources, but of course they are subject to my selection bias. Also, I’m more than happy to be proven wrong on this one. Feel free to leave feedback.

Note

I do not attempt to make the references comprehensive, e.g. I won’t cite the original work of Grover or Shor. This is a blog post, come on. Instead, I will try to limit citations to those directly relevant to my arguments.

Types of quantum algorithms

While there are a great many specific quantum algorithms (see a pretty comprehensive list at [1]), they can be divided into a few broad categories. Back in 2003 the one and only Peter Shor wrote a short essay, “Why haven’t more quantum algorithms been found?” [2]. There he points out the three main types of quantum algorithms known at the time.

  • Those using the quantum Fourier transform to find periodicity. The most famous example is the integer factoring algorithm due to Shor himself.
  • Algorithms similar or derived from Grover’s search.
  • Algorithms for quantum simulation.

By the way, if you are interested in why more quantum algorithms haven’t been found, then according to Shor it is probably because (1) quantum algorithms are so unusual that we do not have a good way to think about and invent them, or (2) maybe there are just a few quantum algorithms after all. You may wonder how much has changed in the past 20 years. Apparently, not that much. I would add three more items to the list.

  • Big data/machine learning algorithms, e.g. the HHL algorithm.
  • NISQ “algorithms”.
  • Algorithms for quantum data.

The key new addition here is the class of big data/machine learning algorithms. The most important example is the algorithm for linear systems discovered by Harrow, Hassidim and Lloyd, known as the HHL algorithm. Another broad class is NISQ algorithms, of which variational quantum algorithms (VQA) are the primary example. Finally, there is a new paradigm emerging, which asks whether we can do more with a quantum computer if our data is not classical but quantum. I will discuss each of these in turn.

As far as I can tell, that’s basically it. Of course, there has been a great deal of progress since 2003 in finding new applications for the existing primitives, refining their efficiency, and even building unifying frameworks (see e.g. [3]). But the broad classification outlined above still seems to apply. I list a number of omissions in Section 9, and I’m happy to expand it based on feedback.

Wait, isn’t the quantum advantage already here?

Before going through the list sketched above, let’s briefly touch on the quantum advantage experiments. They certainly made a big splash and deserve to be addressed. So far, four groups have reported quantum advantage. The first and best-known result comes from Google [4], then there were two academic groups in China [5]– [6], and last year it was Xanadu [7]. All these quantum advantage experiments were sampling tasks, meaning the goal was to sample from a certain probability distribution that is believed to be hard to sample from classically. While debates continue to this day on whether these experiments can in fact be spoofed by a classical computer, I’m willing to assume that sampling tasks of this kind give a real and short-term achievable quantum advantage.

The problem is, no useful applications of these sampling tasks are known. This claim may sound too strong, as there are papers proposing different uses. For example, here is a review from Xanadu on possible applications of Gaussian Boson Sampling [8]. Unfortunately, these are only heuristics. I’m not familiar with this subfield in any technical detail, so my argument is a behavioral one. Since these sampling experiments are already available, there is a huge incentive to produce practical results, or at least come up with strong theoretical proposals. And this does not seem to be happening. Until I see a surge of activity in this area, I’ll assume these sampling experiments are unfortunately not useful.

I must admit this bothers me. We found something that quantum computers can do better than classical ones – why can’t we use it? There must be a way, right? And if we can’t, is there a good explanation? I don’t know one, but my discomfort was somewhat relieved after reading a perspective by Harrow and Montanaro [9]. There, they present sampling experiments as computational analogs of Bell experiments. Indeed, although we know that quantum correlations can be stronger than classical ones, this does not lead directly to useful applications. Alright, I’ll leave it at that.

NISQ algorithms don’t work

Noisy Intermediate-Scale Quantum (NISQ) algorithms [10] account for most of the prospective applications of quantum computing, or at least you can easily get that impression. There are NISQ algorithms for almost everything – optimization, machine learning, simulation, and even NISQ versions of algorithms such as factoring or HHL. However, these “algorithms” are in fact heuristics that come with many theoretical and practical problems. Let’s take a closer look.

Variational quantum algorithms

The main bundle of NISQ algorithms are variational quantum algorithms (VQA) [11]. The two most studied examples are the Quantum Approximate Optimization Algorithm (QAOA) and the Variational Quantum Eigensolver (VQE). QAOA mostly focuses on ground state preparation for classical Ising Hamiltonians, which in fact covers a huge range of problems related to combinatorial optimization. VQE typically addresses Hamiltonians that arise from physics or chemistry, but largely does the same thing. I think the line between different types of VQA is quite blurry.

Quite generally, variational quantum algorithms aim to find a low energy state of some Hamiltonian \(H\) encoding the problem of interest. They start with a trivial quantum state \(|0\rangle\) and apply a parameterized quantum circuit to it \(U(\theta)|0\rangle\). The resulting energy \[E(\theta)=\langle0|U^\dagger(\theta)HU(\theta)|0\rangle\] is minimized by adjusting parameters \(\theta\) classically.

Parameterized quantum circuits \(U(\theta)\) can be informed by the problem, as in QAOA, which seeks to approximate the adiabatic evolution, or completely problem-agnostic, as in hardware-efficient ansätze.
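To make this concrete, here is a minimal, classically simulated sketch of the variational loop (a toy of my own making; the Hamiltonian, ansatz and optimizer are illustrative assumptions, not taken from any particular paper):

```python
import numpy as np
from scipy.optimize import minimize

# Toy VQA: minimize E(theta) = <0|U†(theta) H U(theta)|0> for the
# two-qubit Ising term H = Z ⊗ Z with a hardware-efficient-style ansatz.
Z = np.diag([1.0, -1.0])
H = np.kron(Z, Z)  # ground states |01>, |10> with energy -1
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(a):
    # single-qubit Y-rotation
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def energy(theta):
    U = CNOT @ np.kron(ry(theta[0]), ry(theta[1]))  # one entangling layer
    psi = U @ np.array([1.0, 0.0, 0.0, 0.0])        # U(theta)|00>
    return psi @ H @ psi                            # E(theta) = <psi|H|psi>

res = minimize(energy, x0=[0.1, 0.1], method="Nelder-Mead")
print(res.fun)  # approaches the ground-state energy -1
```

On real hardware, \(E(\theta)\) would be estimated from repeated measurements rather than computed from the state vector, which is precisely the sampling overhead complained about below.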

Here is my simple-minded and a bit cynical take on the idea behind variational quantum “algorithms”, which I think would be better called heuristics. Real quantum algorithms (without the quotes) typically require circuits that are very deep. The current generation of quantum devices is pretty inaccurate: errors in two-qubit gates are of the order of \(0.1-1\%\). If you apply many gates, there will be nothing but noise at the output, and the computation is not useful. VQAs approach the problem as follows. Alright, we do not know algorithms with shallow circuits, but let’s try to build some. We’ll prepare a quantum circuit that is shallow enough to have a non-zero signal-to-noise ratio, and introduce parameters into it. While we do not know if any values of these parameters correspond to a useful computation, let’s try to adjust them (in a classical optimization loop) so that the circuit does something useful.

I mean, it is not a bad idea, and in many respects it is similar to how classical machine learning works. The problem seems to be that the current hardware only allows circuits so shallow that you can optimize them all you want and no interesting results will follow. Another practical problem is that evaluating the energy function \(E(\theta)\) requires taking a ton of samples, which is slow and expensive. On the theoretical side, the loss landscape of most VQAs appears to be pretty terrible, featuring barren plateaus and many bad local minima. So even with perfect hardware, there are no guarantees of good results.

Quantum machine learning

A typical quantum model for a supervised learning task looks very similar to the VQA instances described above \[E(x,\theta)=\langle0|U^\dagger(x,\theta) H U(x,\theta)|0\rangle \ .\] Here, however, some of the parameters \(x\) are not model “weights” to be adjusted but instead encode the training data. The remaining parameters \(\theta\) are optimized to improve the loss function \(E(x,\theta)\).
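Here is a matching toy example (again entirely my own illustrative construction): a single-qubit model where a scalar data point \(x\) and a weight \(\theta\) enter through the same kind of rotation, fitted to two points with labels \(\pm 1\):

```python
import numpy as np
from scipy.optimize import minimize

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def model(x, theta):
    # E(x, theta) = <0|U†(x,theta) Z U(x,theta)|0> with U = RY(theta) RY(x):
    # the data x enters the circuit exactly like the weight theta does.
    psi = ry(theta[0]) @ ry(x) @ np.array([1.0, 0.0])
    return psi[0] ** 2 - psi[1] ** 2  # <Z>, here equal to cos(x + theta)

xs, ys = np.array([0.2, 2.8]), np.array([1.0, -1.0])  # toy dataset, labels ±1
loss = lambda theta: sum((model(x, theta) - y) ** 2 for x, y in zip(xs, ys))
res = minimize(loss, x0=[0.0], method="Nelder-Mead")
print([round(model(x, res.x), 2) for x in xs])  # close to the labels [1, -1]
```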

Quantum machine learning (QML) [12] sounds quite fancy, but it shares many of the problems of VQA. Additional questions you might ask about QML models are whether they have an edge over classical models in data encoding or expressivity, whether they generalize better, etc. To the best of my understanding, all claims that some QML models are somehow better than their classical counterparts are heuristic, inconclusive, or only hold for extremely artificial datasets. Moreover, let me quote a recent perspective [13] by Schuld and Killoran titled “Is quantum advantage the right goal for quantum machine learning?”

Contrary to commercial expectations – machine learning may turn out to be one of the hardest applications to show a practical quantum advantage for.

Why? By all means take a look at the paper if you are interested, but a short answer is that

Quantum machine learning research is trying to beat large, high-performing algorithms for problems that are conceptually hard to study.

In other words: classical machine learning is so efficient that it sets a very high bar; it’s hard to theoretically analyze how it works, let alone prove quantum advantage over it; and we can’t collect meaningful empirical data on QML because we only have toy hardware so far.

Noisy summary

There are many more versions of NISQ algorithms beyond VQA and QML. However, they all come with significant challenges. I quote an extensive recent review [10]

At the moment of documenting this review, there is no known demonstration of industrially relevant quantum advantage.

On top of that, to the best of my knowledge, there are no theoretical guarantees that NISQ algorithms can lead to quantum advantage at all. So, NISQ algorithms seemed like a low-hanging fruit, but despite all the work of the past years, useful applications have not been demonstrated. Even industry now seems to be growing less optimistic about NISQ and focusing more on fault-tolerant algorithms, and so will we.

Error correction eats polynomial speedups

One of the earliest discovered, best known, and simplest to explain quantum algorithms is due to Lov Grover.

Grover’s algorithm is often described as a search through an unstructured database. When there are \(N\) entries and no structure to the database whatsoever, classically you need to make \(N/2\) queries on average to find what you are looking for. Grover’s algorithm needs only about \(\sqrt{N}\) quantum queries, yielding a quadratic speed-up.
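At small sizes the algorithm is easy to simulate classically. Here is a minimal state-vector sketch (purely illustrative, and of course not an efficient implementation):

```python
import numpy as np

N = 1 << 10   # "database" size
marked = 123  # index of the single marked item

psi = np.full(N, 1 / np.sqrt(N))  # uniform superposition over all N entries

# The optimal number of iterations is about (pi/4) * sqrt(N).
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    psi[marked] *= -1           # oracle: flip the sign of the marked amplitude
    psi = 2 * psi.mean() - psi  # diffusion: inversion about the mean

print(iterations, N // 2)       # ~25 quantum queries vs ~512 classical ones
print(psi[marked] ** 2)         # success probability close to 1
```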

Although Grover’s algorithm is one of the central results in quantum computing, there are many ways to challenge its practical significance [14]. Here I will focus on a particular one, associated with the cost of its fault-tolerant implementation. This line of attack is important because it applies to many other algorithms with polynomial speed-ups, including variants of quantum optimization, Monte Carlo methods, some machine learning tasks, etc.

As discussed in the NISQ section, decoherence and gate errors make it impossible to run deep circuits on current devices. Sure, we expect error rates to go down in the future, but even orders-of-magnitude improvements won’t be enough. The principal answer to this challenge is error correction and fault-tolerant computation. By encoding logical states in many physical qubits it is possible to suppress effective error rates arbitrarily. Asymptotically, the overhead of error correction in terms of the physical-to-logical qubit ratio is polylogarithmic and looks insignificant. However, the constant factors involved can be a dealbreaker. What follows is my coarse rundown based on papers [15] and [16].

In the dominant paradigm of fault-tolerant computing, the bottleneck is the production of magic states. Roughly, one magic state is consumed to perform a single non-trivial computational primitive, such as a Toffoli gate. Producing a single magic state involves hundreds of physical qubits, multiple code cycles, and a lot of classical processing to guide the behavior of the quantum system. Rough estimates show that executing a single Toffoli gate can be 10 orders of magnitude slower than executing a NAND gate, its classical counterpart. This looks troublesome, doesn’t it?

But won’t this huge constant factor become irrelevant for large enough problem sizes? After all, we have a quadratic speed-up. Yes, it will, but you should also consider how big that problem needs to be. Because the quantum computer starts with this huge handicap, it can take years of runtime to catch up with a classical solver. So the problem sizes where you would see quantum advantage are so large that they take too long to solve even for a quantum computer. In the end, this scenario does not look practical.
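Here is a deliberately crude back-of-the-envelope version of this argument, in the spirit of [16]; the per-operation times are my own illustrative assumptions, with the 10-orders-of-magnitude gap taken from the rough estimate above:

```python
# Classical cost ~ (N/2) ops at t_c each; quantum cost ~ sqrt(N) ops at t_q each.
t_c = 1e-9  # ~1 ns per classical NAND-like operation (assumed)
t_q = 10.0  # ~10 s per fault-tolerant Toffoli, i.e. ~10 orders of magnitude slower

# Break-even: (N/2) * t_c = sqrt(N) * t_q  =>  sqrt(N) = 2 * t_q / t_c
sqrt_N = 2 * t_q / t_c
years = sqrt_N * t_q / (3600 * 24 * 365)
print(f"break-even at N ~ {sqrt_N**2:.0e}, quantum runtime ~ {years:.0f} years")
```

With these (admittedly cartoonish) numbers, the break-even point sits at problem sizes whose solution takes millennia on either machine.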

Under very optimistic projections of how quantum computing will progress, problems with higher polynomial speed-ups, say quartic, begin to look feasible. However, it appears that problems with exponential speed-ups are really our best bet, and so we turn to them.

Breaking RSA is not a big deal

It was a true breakthrough when Shor showed that the integer factoring problem can be solved efficiently by a quantum computer, exponentially faster than with any known classical algorithm. While you’d still need a huge fault-tolerant machine to pull that off in practice [17], Shor’s algorithm does defy most of the objections applicable to other quantum algorithms.

Shor’s algorithm factors an integer into primes in time scaling polynomially with the number of digits. It is probably one of the hardest of the archetypical quantum algorithms to explain. However, the difficult part is entirely in the classical pre- and post-processing steps, where the factoring problem is reduced to finding the period of a certain function and the factorization is extracted from an approximate solution.

The key quantum subroutine used is the quantum Fourier transform (QFT), which is simple to sketch. By definition, starting with a computational basis state \(|n\rangle\) it prepares a linear combination of all basis states \(|m\rangle\) with amplitudes given by the coefficients of the discrete Fourier transform.

\[\mathrm{QFT}\,|n\rangle = \frac{1}{\sqrt{N}}\sum_{m=0}^{N-1} e^{2\pi i \frac{nm}{N}}|m\rangle\]

The Fourier transform of a periodic function is peaked at values related to the period, so it exposes the information needed for the original factoring problem. The important part is that the quantum Fourier transform can be implemented efficiently, i.e. by a polynomially-sized quantum circuit.
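Here is a classical toy illustration of that statement, using the ordinary discrete Fourier transform on a function of period \(r\) (toy numbers of my choosing):

```python
import numpy as np

N, r = 256, 8           # sequence length and period (r divides N)
f = np.zeros(N)
f[::r] = 1              # f(n) = 1 whenever n is a multiple of r

# The DFT of this periodic "comb" is itself a comb, peaked at multiples of N/r.
amplitudes = np.fft.fft(f) / np.sqrt(N)
print(np.flatnonzero(np.abs(amplitudes) > 1e-9))  # [0, 32, 64, ...] = multiples of N/r
```

Reading off the spacing \(N/r\) of the peaks reveals the period \(r\); Shor’s classical post-processing does essentially this, with continued fractions handling the case when \(r\) does not divide \(N\).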

The only problem: it is not really useful. Yes, you can break RSA, but so what? There are other encryption schemes that so far seem safe against quantum adversaries. Once quantum computers of the necessary scale are available, the cryptographic world will need to adjust, and there will be some potential for adventures in the meantime. But in the end, the impact will be quite limited. Quoting Matthias Troyer [18]

So factoring might fund most of the field now, it’s not a killer app in the end.

Now, there are other applications of factoring beyond cracking RSA, as there are other period-finding algorithms exploiting the QFT, but these mostly address very special number-theoretic problems. While useful in principle, they seem much too limited in scope to provide the impact we expect from quantum computing.

Big data algorithms are tricky

There is a large class of quantum algorithms that can be broadly described as big data or (fault-tolerant) quantum machine learning algorithms. The initial inspiration, and the key subroutine in many of them, is the HHL algorithm, which efficiently solves large systems of linear equations. Other examples include clustering, support vector machines, principal component analysis, differential equations, etc.

HHL is often described as an algorithm to solve a linear system of \(N\) equations

\[Ax=b\]

in time scaling as \(\log N\), providing an exponential speed-up over classical algorithms. This scaling really requires an explanation, since even writing down the full solution \((x_1,\dots, x_N)\) would take \(O(N)\) time and negate the speed-up.

HHL assumes that the vector \(b\) is available as a quantum state \(|b\rangle\) of \(\log N\) qubits, whose amplitudes encode the entries of \(b\). Similarly, the output of the algorithm is a quantum state \(|x\rangle\), not its individual amplitudes. To produce the solution, one needs to apply the operator \(e^{-iAt}\), which admits an efficient quantum circuit only for very special types of matrices \(A\). Finally, the matrix \(A\) also needs to be well conditioned, meaning that the ratio of its largest to smallest eigenvalue is not too large (does not grow polynomially with \(N\)).
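The following sketch, using a classical solver as a stand-in for the quantum subroutine, just spells out these preconditions (the matrix and vector are arbitrary toy data):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
A = A @ A.T + np.eye(8)          # symmetric positive definite toy matrix
b = rng.standard_normal(8)

b_state = b / np.linalg.norm(b)  # |b>: entries of b as normalized amplitudes (3 qubits)
kappa = np.linalg.cond(A)        # condition number: HHL's cost grows with it
x = np.linalg.solve(A, b_state)  # classical stand-in for the quantum solve
x_state = x / np.linalg.norm(x)  # |x>: HHL outputs this state, not its entries
print(f"condition number kappa = {kappa:.1f}")
# What you can actually extract are a few expectation values <x|M|x>,
# e.g. the weight of the first component rather than the full vector:
print(x_state[0] ** 2)
```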

There are fundamental factors limiting the applicability of big data algorithms. My discussion here is mostly based on a short and wonderfully written commentary by Scott Aaronson, “Read the fine print” [19]. Big data quantum algorithms aim to provide an exponential speed-up for tasks like matrix inversion, which take polynomial time classically. This means the quantum algorithm must run in logarithmic time. But in general, \(\log N\) time is not even sufficient to read in the problem specification, say an \(N\times N\) matrix, or to output a full solution, say an \(N\)-dimensional vector.

Thus, big data algorithms assume that the initial data is encoded in the amplitudes of a quantum state \(|\psi_0\rangle\) of only \(\log N\) qubits, and output a solution in the same form \(|\psi\rangle\). Hence, you need a very efficient way to load or generate the input data in your quantum memory, and you must be able to read out interesting results from a few expectation values of \(|\psi\rangle\) (instead of all of its amplitudes). The operator transforming \(|\psi_0\rangle\) into \(|\psi\rangle\) also requires an efficient quantum circuit implementation and cannot correspond, say, to a general \(N\times N\) matrix.

Finally, after you’ve restricted the problem in all these ways, it becomes harder to rule out that some classical algorithm can take advantage of all the additional structure to run equally fast. This is not just a pedantic remark to wave away. Recently, many of these machine learning algorithms have been “de-quantized”, i.e. efficient classical substitutes for them have been found [20]. While de-quantized algorithms typically impose even more constraints on the problem, e.g. that the linear system be low-rank [21], and can be polynomially slower, they add another layer of subtlety to finding big data algorithms with quantum advantage.

While not insurmountable, all these considerations are highly restrictive in practice. The HHL algorithm appeared in 2009, and in his 2015 essay Scott Aaronson mentions only two (somewhat contrived) end-to-end proposals addressing all the caveats. There are probably more today, but I do not know of a well-established, useful big data problem that is just waiting for an appropriate fault-tolerant machine to appear and rock the world. I’ll still mention an interesting recent proposal [22], which aims to accelerate the training of classical neural networks. I also can’t resist quoting this passage from the paper

Frankly, the core thesis of this work is that a main application of quantum computers may be in the training of classical neural networks.

The boring old stuff: quantum simulation

Quantum dynamics

Quantum simulation is the application that is often cited as having kick-started the field. At the same time, it is still widely believed to have the best shot at useful quantum advantage.

It is a very natural application, since no convoluted procedure for folding and squeezing a classical problem into the quantum domain is required. Instead, it takes a task that is obviously quantum in origin and proposes an efficient way to solve it with a quantum computer.

Quantum simulation is designed to take an initial quantum state \(|\psi_0\rangle\) and carry out its evolution under some Hamiltonian \(H\), i.e. to find

\[|\psi(t)\rangle=e^{-iHt}|\psi_0\rangle \ .\]

For a Hamiltonian which is sparse, e.g. one that consists of not too many local terms \(H=\sum_k{H_k}\), one can use the Trotter-Suzuki approximation \(e^{(A+B)\Delta t}=e^{A\Delta t}e^{B\Delta t}+O(\Delta t^2)\) to reduce the simulation of the full Hamiltonian evolution over a small time step \(\Delta t\) to the simulation of the separate local terms, which is in principle straightforward

\[e^{-i H \Delta t}=\prod_k e^{-i H_k \Delta t}+O(\Delta t^2) \ .\]

Evolution over a finite time \(t\) can then be produced by a sequence of short evolutions, \(e^{-iHt}=\left(e^{-iH\Delta t}\right)^{\frac{t}{\Delta t}}\). The error coming from “Trotterization” of each small time step can be reduced by making \(\Delta t\) smaller, at the cost of a polynomial increase in circuit depth.
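A quick numerical check of this error scaling on a toy pair of non-commuting Hermitian matrices (random data, first-order formula):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A, B = (A + A.conj().T) / 2, (B + B.conj().T) / 2  # two Hermitian "local terms"

t = 1.0
exact = expm(-1j * (A + B) * t)
for steps in (1, 10, 100):
    dt = t / steps
    step = expm(-1j * A * dt) @ expm(-1j * B * dt)  # one first-order Trotter step
    error = np.linalg.norm(np.linalg.matrix_power(step, steps) - exact)
    print(steps, error)  # total error shrinks roughly as 1/steps
```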

While numerous classical methods for the approximate simulation of quantum systems have been developed, with great success in many cases, they are not sufficient in general. This is another important point about the simulation problem: the difficulty of the classical approach is well appreciated, so a quantum computer is really expected to make a difference here.

Is quantum simulation useful? I mean, it obviously is, but how exactly? There are definite implications for fundamental science, such as probing complicated quantum dynamics, new phases of matter, quantum chaos and so on. But what about designing a high-temperature superconductor or a new battery? Unfortunately, I am not aware of a rigorous connection between the ability to do quantum simulation and producing practically useful outcomes. So far it seems to be more about exploring physics and chemistry with new tools, beyond the regimes that current techniques allow. For this reason, it is not clear that analog quantum simulators, which will ultimately be limited in their accuracy, will have applications beyond basic science [23].

Quantum phase estimation

There is another flavor of quantum simulation that leads to more deterministic results. The archetypical algorithm here is the quantum phase estimation (QPE).

Quantum phase estimation (QPE) can be thought of as an efficient quantum circuit to perform a projective energy measurement.

It allows finding eigenvalues and preparing eigenstates of a Hamiltonian \(H\), provided one can efficiently implement the controlled evolution operators \(e^{-iHt}\). Usually, QPE is formulated as an algorithm for finding the eigenvalue of a unitary operator \(U\) given an eigenstate \(|\lambda\rangle\) with an unknown eigenvalue. QPE proceeds by applying powers of \(U\) (\(U, U^2, U^4,\dots\)) to the state \(|\lambda\rangle\), each controlled by its own auxiliary qubit. The state of the auxiliary qubits then contains a lot of information about \(\lambda\), roughly one bit of accuracy per qubit, and this information can be efficiently revealed by performing a quantum Fourier transform on the auxiliary qubits.
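For an exact eigenstate, the textbook outcome distribution of QPE with \(t\) auxiliary qubits can be written in closed form and evaluated directly (toy numbers below):

```python
import numpy as np

t, phi = 6, 0.13        # auxiliary qubits and the eigenphase of U (toy values)
M = 1 << t              # 2^t possible measurement outcomes
k = np.arange(M)
m = np.arange(M)[:, None]
# p(m) = |(1/M) * sum_k exp(2*pi*i*k*(phi - m/M))|^2
p = np.abs(np.exp(2j * np.pi * k * (phi - m / M)).sum(axis=1) / M) ** 2
print(np.argmax(p), M * phi)  # the likeliest outcome is the best t-bit estimate of phi
```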

If the input state \(|\psi\rangle\) is not an eigenstate of \(U\), QPE performs a projective energy measurement: it outputs some energy \(E\) and prepares the corresponding state \(|\lambda_E\rangle\) with probability given by the overlap \(|\langle\psi|\lambda_E\rangle|^2\).

Quantum phase estimation can yield high-accuracy information about the eigenstates of a Hamiltonian. In many practical questions of quantum chemistry, this is exactly what you want to know. There is a caveat, though. The most interesting states in practice are ground and low-energy states, and for QPE to reveal information about them, the initial state of the algorithm needs to have sufficiently high overlap with the low-energy states. In general this preparation problem is QMA-hard (not expected to be efficiently solvable even on a quantum computer), and whether it is solvable in practical scenarios is still not settled conclusively [24].

What did I miss?

While I believe that the list of algorithms we went through captures all the main contenders for quantum advantage, it is of course not at all comprehensive. Here I will mention some omissions. I also plan to extend this section based on feedback.

So here are some of the algorithms I didn’t mention, in no particular order.

  • Quantum annealing. This is a NISQ optimization algorithm working on analog principles, and it is subject to the same criticism as other NISQ algorithms. Mainly, it is a heuristic (no theoretical guarantees), and a quantum advantage has not been demonstrated in practice.
  • As an example of new algorithmic developments, here is a recent paper [25] titled “Exponential quantum speedup in simulating coupled classical oscillators”. To my understanding, it is most similar to the big data algorithms described above and shares the same limitations. In particular, I believe a practical application has not been identified.
  • Quantum computers can be provably efficient when the input data is quantum. E.g. a quantum computer can learn properties of a quantum state more efficiently [26]. While a very interesting avenue, I believe it lacks practical use cases so far.

Short summary

Here is a very short summary and some key references.

While it is easy to get an impression that quantum computers will be good for everything, finding a tangible and useful application appears to be really hard if you resist wishful thinking and insist on solid evidence.

  • There is a NISQ algorithm for anything [10]. However, they all try to make use of short noisy circuits, and this does not seem to work either in practice or in theory. In particular, this applies to NISQ machine learning [13].
  • Algorithms with quadratic (and likely any polynomial) speed-ups become impractical once the cost of error correction is factored in  [15],  [16].
  • While interesting theoretically, Shor’s and related algorithms are limited to a narrow range of number-theoretic problems. Breaking RSA is not a real application.
  • Big data/fault-tolerant machine learning algorithms only work under highly restrictive conditions  [19],  [20], and finding the right applications is an open problem.
  • Quantum simulation looks like the best bet. It is both naturally hard classically, and efficient quantumly. However, both quantum dynamics and quantum phase estimation come with their caveats that remain to be addressed  [24].

If you take this critical view to the extreme, there isn’t yet a single use-case for a quantum computer (even a future, fault-tolerant one) with guaranteed impact. So not only the quantum hardware but quantum algorithms themselves appear to still be under construction 🏗️.

Discussion

Nothing I’ve said here is new

If you think that the perspective taken in this blog post is in any way novel or radical, it is not. While the points I made here are not often voiced or put in writing, many experts have been saying similar things for years. Here are some references.

  • I highly recommend this talk [18] given by Matthias Troyer back in 2014, or a more recent one [27] from 2021. Interestingly, they are pretty similar in content; in particular, Troyer seems to entirely ignore variational algorithms, maybe for a good reason. There is also a recent short write-up by him and collaborators [16].
  • In a 2021 talk, Ryan Babbush [28] (in the conclusions part) says that the community still needs to figure out, with clarity, what quantum computers will be useful for. He says this in the context of early fault-tolerant computation, but I think the point applies more broadly.
  • Here is a piece by the renowned condensed matter physicist Sankar Das Sarma [29], arguing that the potential applications of NISQ are highly overstated.
  • Here are two presentations by Owen Lockwood  [30],  [31] critically assessing the state of NISQ algorithms and NISQ QML. Owen might not have the weight of other people I reference here, but I found his take on things original and informative.
  • Here is a pretty critical LinkedIn post by Victor Galitski [32]. It is again mostly a critique of NISQ, focused on the socio-economic rather than the algorithmic side of things, but still worth a read.
  • Finally, I’ll mention this popular interview with John Preskill [33], where he notes (in the ‘simulation’ section) that quantum simulation is still probably our best-grounded expectation for practical quantum advantage.

I may have a bias problem

Alright, you might have noticed that even this list got increasingly less rigorous. My investigation, which started as a noble search for truth, quickly turned into a confirmation bias exercise. Indeed, I quite quickly started to err on the side of ‘we still don’t really know what quantum computers will be good for’, and enjoyed finding support for this view. While this may not be great journalistic work, I still think this point of view is seriously underrepresented and worth voicing. At the same time, I’m really open to changing my mind, as I have every reason to want the field to succeed, and the sooner the better.

Quantum computing is gonna be a rock star one day

I must also say that in the long run, a radical impact of quantum computing looks inevitable to me. This is a fundamentally new way of processing information, and that must make a difference. As Scott Aaronson has argued, if large-scale quantum computers can never be built for fundamental reasons, that would be a new and revolutionary law of physics. I’d say that, similarly, if we could ‘prove’ that quantum computers cannot be useful, this would be a remarkable new law of nature worth discovering. From what we know now, it looks extremely unlikely. However, use cases for truly novel technologies are hard to forecast.

What looks the most promising at the moment?

In searching for practical quantum advantage, several requirements need to be met.

  1. There must be a problem that a quantum computer can solve efficiently.
  2. Evidence that a classical computer can’t.
  3. Last but not least, the problem must be useful.

If you think about it, this list is as much about the problem we want to solve as it is about the power of quantum algorithms. And finding the right problems, although possible in theory, turns out to be very challenging in practice.

It is exciting to try applying quantum algorithms to problems that appear to have no direct relation to the quantum world whatsoever. Basically, we start with (1) and then try to comply with (2) and (3). Say, we have an idea about how to solve certain large linear systems of equations, and then try to find a subset of those that are useful and intractable classically. And while there may be gems on this path, a lot of evidence now shows that finding the right problems of this kind is really tricky. One early impressive success is Shor’s algorithm, but it may still be the only well-established example.

On the other hand, one can tackle problems that are obviously quantum in origin. Quite recently, people have started to look at cases where the input data is quantum rather than classical; there the quantum advantage is already established, but no wide uses have been suggested yet. I’d say that the simulation of quantum systems for physics and chemistry appears to be our most grounded proposal for where to look for practical quantum advantage. It addresses an obviously important problem, known to be classically hard from decades of intensive research. So (2) and (3) are covered, and (1) also comes naturally. Indeed, people often describe quantum simulation as a native task for a quantum computer. I quote an elegant passage from [23]

This is the ‘native’ and most natural application of quantum computers, where we aim to use a quantum computer to mimic the rules that describe physical microscopic quantum systems. These problems are computationally challenging for the same underpinning reason that quantum computers can be powerful.

Alright, I’ll leave it at that. As usual, any feedback is welcome.


References

[1]
S. Jordan, Quantum Algorithm Zoo, (2022).
[2]
P. W. Shor, Why haven’t more quantum algorithms been found?, Journal of the ACM (JACM) 50, 87 (2003).
[3]
J. M. Martyn, Z. M. Rossi, A. K. Tan, and I. L. Chuang, A Grand Unification of Quantum Algorithms, PRX Quantum 2, (2021).
[4]
F. Arute et al., Quantum supremacy using a programmable superconducting processor, Nature 574, 505 (2019).
[5]
[6]
H.-S. Zhong et al., Quantum computational advantage using photons, Science 370, 1460 (2020).
[7]
L. S. Madsen et al., Quantum computational advantage with a programmable photonic processor, Nature 606, 75 (2022).
[8]
T. R. Bromley, J. M. Arrazola, S. Jahangiri, J. Izaac, N. Quesada, A. D. Gran, M. Schuld, J. Swinarton, Z. Zabaneh, and N. Killoran, Applications of Near-Term Photonic Quantum Computers: Software and Algorithms, Quantum Science and Technology 5, (2019).
[9]
A. W. Harrow and A. Montanaro, Quantum computational supremacy, Nature 549, 203 (2017).
[10]
K. Bharti et al., Noisy intermediate-scale quantum algorithms, Reviews of Modern Physics 94, 015004 (2022).
[11]
M. Cerezo et al., Variational Quantum Algorithms, Nature Reviews Physics 3, 625 (2020).
[12]
J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Quantum machine learning, Nature 549, 195 (2017).
[13]
M. Schuld and N. Killoran, Is quantum advantage the right goal for quantum machine learning?, PRX Quantum 3, (2022).
[14]
[15]
R. Babbush, J. McClean, M. Newman, C. Gidney, S. Boixo, and H. Neven, Focus beyond quadratic speedups for error-corrected quantum advantage, PRX Quantum 2, (2020).
[16]
T. Hoefler, T. Häner, and M. Troyer, Disentangling Hype from Practicality: On Realistically Achieving Quantum Advantage, Communications of the ACM 66, 82 (2023).
[17]
[18]
[19]
S. Aaronson, Read the fine print, Nature Physics 11, 291 (2015).
[20]
E. Tang, Dequantizing algorithms to understand quantum advantage in machine learning, Nature Reviews Physics 4, 692 (2022).
[21]
[22]
J. Liu, M. Liu, J.-P. Liu, Z. Ye, Y. Alexeev, J. Eisert, and L. Jiang, Towards provably efficient quantum algorithms for large-scale machine-learning models, (2023).
[23]
A. J. Daley, I. Bloch, C. Kokail, S. Flannigan, N. Pearson, M. Troyer, and P. Zoller, Practical quantum advantage in quantum simulation, Nature 607, 667 (2022).
[24]
[25]
R. Babbush, D. W. Berry, R. Kothari, R. D. Somma, and N. Wiebe, Exponential quantum speedup in simulating coupled classical oscillators, (2023).
[26]
H.-Y. Huang et al., Quantum advantage in learning from experiments, Science 376, 1182 (2021).
[27]
[28]
[29]
[30]
[31]
[32]
[33]