Did Shor ruin quantum computing?
Quantum computing (QC) is really hard. And, as many people will agree, QC is also no longer fully suited to academia. While most academic QC work focuses on small-scale demonstrations of novel physics, what QC needs is scale and performance engineering, which individual university labs cannot deliver. Thus, in recent years, an ever larger fraction of important QC advances has come from industry, and this trend shows no sign of stopping.
This story makes sense, but it’s incomplete, because many other scientific projects have “outgrown the lab”. Yet instead of commercialising, the communities behind particle accelerators and gravitational wave detectors formed scientific consortia to build their hugely complex machines. So why has this never happened for quantum?
When I posed this conundrum to Charles Tahan, he immediately replied: “Shor’s algorithm, of course”. I think this hypothesis makes a lot of sense. Before Shor’s factoring algorithm, quantum computers were, for all practical purposes, a science toy. After Shor, they were a money-making technology in the making - or a valuable piece of military tech at the very least. Why would governments fund a major international project for something that could be funded by profit-oriented entities instead?
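For readers who haven’t met it: Shor’s algorithm factors large integers in polynomial time, which is what makes it codebreaking material - RSA rests on factoring being hard. Curiously, most of the algorithm is ordinary classical number theory; the quantum computer is needed for exactly one subroutine, finding the multiplicative order of a number modulo N. Here is a toy, purely classical Python sketch of that scaffolding (my own illustration, not anything from Shor’s paper - the brute-force `order` below stands in for the quantum period-finding step, so this version is of course exponentially slow):

```python
# Toy classical illustration of the number theory behind Shor's algorithm.
# The quantum speed-up comes from finding the order r of a mod n in
# polynomial time; here we find it by brute force instead.
import math
import random

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n), for a coprime to n.
    This is the step a quantum computer would do via period finding."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int) -> int:
    """Return a non-trivial factor of an odd composite n."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g          # lucky guess: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 1:
            continue          # need an even order to split n
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue          # a^(r/2) = -1 (mod n) gives only trivial factors
        return math.gcd(y - 1, n)   # (y-1)(y+1) = 0 (mod n) splits n

print(shor_classical(15))     # prints 3 or 5
print(shor_classical(3233))   # 3233 = 53 * 61, a toy "RSA modulus"
```

Everything here runs fast except `order`; make that one step polynomial-time and suddenly public-key cryptography is in trouble - hence the military and commercial interest.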
Still, it is notable that no major QC consortia were formed in the decades between the discovery of Shor’s algorithm and the creation of the first commercial QC companies - why exactly would that be? Nuclear fusion offers another counterexample: there, the profit motive has always coexisted with national and international initiatives.
Perhaps there is another aspect to this story altogether, which is that while QC experiments are hard, they’re still just about easy enough for a single lab to pull off. Those other projects - fusion reactors, space telescopes, gravitational wave detectors and so on - are obviously too massive for any single lab, even if only due to their literal size! Quantum computers, on the other hand, are really quite compact by comparison and, despite decades of research, there are still exciting experiments to be done even with one or two qubits. This is particularly true because the main figure of merit that describes a quantum computer is not its size, but its noise - and keeping the noise low may be easier in small systems built by a handful of people than in large systems built by hundreds.
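To put a rough number on the noise point: under a crude independent-error model (a simplification I’m assuming for illustration, not how real devices actually behave), a circuit of N gates with per-gate error rate p succeeds with probability of roughly (1 - p)^N, so useful circuit depth is capped at around 1/p gates. A few lines of Python make the cliff visible:

```python
# Back-of-the-envelope: success probability of an N-gate circuit
# assuming independent gate errors at rate p (a crude toy model).
for p in (1e-2, 1e-3, 1e-4):
    for n_gates in (100, 1_000, 10_000):
        success = (1 - p) ** n_gates
        print(f"p={p:.0e}, {n_gates:>6} gates -> ~{success:6.1%} success")
```

At p = 1%, anything beyond a few hundred gates is essentially noise - which is why a two-qubit experiment with excellent fidelities can still be more scientifically interesting than a large but noisy machine.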
It’s also difficult to say whether this research fragmentation is good or bad. Perhaps the space has missed out on some investment in well-paced scale engineering, but on the other hand, the fragmentation created opportunities for research into novel platforms that would not have seen the light of day otherwise. It’s notable how rapidly the nuclear fusion space has shifted in recent years from massive collaborations like ITER towards small startups across the globe - perhaps the influx of venture capital into the QC space just gave it a boost it would have needed in any case?
What do you think? Would you rather see more QC startups, or a “CERN for quantum”? And do you have any theories why the latter never happened? Let me know in the comments below!