
Co-author of the paper here.

The “largest integer factored” metric is terrible for several reasons, which is well recognized by researchers and is why people rarely publish those sorts of claims any more. Without net-positive error correction, it’s basically a function of how low your error rate is and how many bits that allows you to compute on with a low chance of error. But as soon as the error rate drops below the fault-tolerance threshold, your storage is suddenly limited only by how many qubits you can build. So you can’t use this metric (if people were even publishing it) to predict anything after fault tolerance is achieved.
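
To put a rough number on the first point: without error correction, a circuit’s success probability decays with gate count, so the per-gate error rate caps how large a circuit (and hence how large a factorization) you can attempt. A minimal sketch, with a purely illustrative error rate:

  # Success probability of a circuit with N gates at per-gate error rate p
  # is roughly (1 - p) ** N, so usable circuit size is capped near ~1/p gates.
  p = 1e-3  # assumed per-gate error rate, illustrative only
  for n_gates in (100, 1000, 10000):
      print(n_gates, (1 - p) ** n_gates)  # ~0.90, ~0.37, ~4.5e-05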

On more meaningful metrics, there has been steady progress.




I thought the number of physical qubits needed to express a logical qubit has stayed pretty much exponential as a function of the error rate, and it’s not actually known if we’re any closer on that metric vs other more easily achieved metrics.

Additionally, I was under the impression that not all QCs being built are capable of executing Shor’s algorithm, which adds additional challenges that aren’t solved.

My final impression is that having the QC algorithm run faster than a classical computer doing the same operation has also not necessarily gotten better. So we don’t even know if we are actually closer to quantum supremacy in terms of actually useful, more general computation, instead of the limited supremacy that Google achieved in 2019 on specific problems. This seems similar to the (still ongoing?) controversy over whether D-Wave built a QC or a quantum annealer capable of accelerating simulations of very specific physical properties.

Is this impression outdated and/or misinformed? Would love to update my priors.


A QC that can't run Shor's algorithm is not a QC, as much as D-Wave screams and stamps their feet that it's not fair.


In principle it can be a QC, just not a general purpose (“Q-Turing complete”) QC. There were useful classical computing machines before there were general purpose Turing-complete classical computers.

In practice, the first useful QCs will probably be Q-Turing complete.


I'm a complete layman when it comes to Quantum Computing, but I thought Shor's algorithm is effectively the most basic use case example of Quantum Computers?

I'm confused here.


Yes, but it doesn't work on modern hardware. Too much noise.

Everyone in the industry is basically fully occupied with coming up with a mechanism for active error correction. But based on what I've heard from the folks on the front lines, it's still 10+ years away at minimum.


There are extant non-general-purpose quantum computers that solve super esoteric, useless math problems specially designed to be easy on these primitive quantum computers but infeasible on any classical computer. (This is “quantum supremacy”, aka “quantum advantage”.) Furthermore, we expect to develop slightly less primitive quantum computers in the medium term that can solve maybe-interesting quantum simulation questions without having the capability to run Shor’s algorithm (at least on interestingly large integers). So no, Shor’s algorithm is not the most basic use case. It’s just one of the cleanest to understand.


Is there any reason to believe a computer that can do Shor's algorithm is actually possible? There's a lot of freaking out trying to build QC-resistant asymmetric encryption, but we don't actually know whether or not it's physically possible to build the requisite error correction to make it feasible in the first place, right? Is there reason to believe that the physical reality of QM doesn't require exponentially more and more qubits for every logical qubit added to the system? Basically, is there any reason to believe that a QC capable of Shor's doesn't degrade to a space/time tradeoff for a classical computer when it comes to integer factorization?


The strong expert consensus is that fault-tolerant quantum computation is physically possible and it’s just a challenging engineering problem. One key insight is the threshold theorems, which show that if you can build individual physical gates with error rates below a finite threshold (typically 0.1-1%), you can stack those gates together to simulate logical operations whose error rates can be made arbitrarily small, shrinking exponentially as you add redundancy.

https://en.wikipedia.org/wiki/Threshold_theorem
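
As a rough illustration of what “exponentially small” buys you, here is a back-of-the-envelope sketch using the standard surface-code scaling heuristic; the threshold and prefactor below are illustrative assumptions, not measured values:

  # Heuristic: p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2)
  # for a distance-d code, valid only below threshold.
  P_THRESHOLD = 1e-2  # assume a ~1% threshold
  A = 0.1             # illustrative prefactor

  def logical_error_rate(p_physical, d):
      """Rough logical error rate for a distance-d code."""
      return A * (p_physical / P_THRESHOLD) ** ((d + 1) / 2)

  for d in (3, 7, 11, 15):
      print(d, logical_error_rate(1e-3, d))  # each step up in d buys another ~100x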

If it helps, there are plenty of historical analogies. Quantum-enhanced sensing took more than half a century between conception and practical use, but it is currently used daily at LIGO.


The strong expert consensus has been reached among a lot of people who would be out of a job if their consensus were the opposite.


Not really accurate. There are tons of tenured profs who are well-positioned to reveal reasons why QCs are fundamentally infeasible (and those kinds of stories play well in the media). You can read about Gil Kalai's arguments here:

https://www.quantamagazine.org/the-argument-against-quantum-...

In any case, I'm happy to bet on this.


Yes, there are people who don't agree with your consensus; that is part of my point: the consensus is not all that universal. And the other part of my point is that if you only count people who have a full-time job developing quantum computers as experts, then you get a massive bias in opinion, as the sceptics are more likely to pick different jobs.

As for my own belief, I don't know how it will pan out, but I'm inherently sceptical of people who are too certain that it is one way or another; the hard evidence simply does not exist yet.

I'm personally most inclined to believe that the universe is computationally classical, and while quantum physics necessitates that there must be quite a lot more information and processing happening than a Newtonian model requires, it is still finite, thus putting a finite limit on what speed-up we can gain from a quantum computer. Most likely this pans out as the error-correction limit being impossible to reach, but there is also the possibility that more complicated quantum states result in higher per-gate error rates.

I'm open to other conjectures, but if you want to dispel mine I would like to see some hard evidence.


Boson sampling _works_. It's now on the error-correction skeptics to provide a framework where that remains true but stops being true when a particular combination of gates is applied, because the “naive” math says it should work fine. It would require a new fundamental physical principle for a quantum system to be sensitive to the nature of the computation involved in generating the distribution being sampled from.


At toy scale, yes, just like quantum computers. And since we haven't seen any news of working boson sampling with thousands of photons I assume that it stops working when you try to scale up, just like quantum computers.

New fundamental physical principles are at this stage required in any case: quantum mechanics and general relativity don't fit together, meaning that at least one of them needs to be overhauled into something that explains the same observations but doesn't have quite the same maths. The lacklustre performance of quantum computers could be a hint to what needs overhauling.


What would be a statement that you would put, say, $100k on, that would be determined by 2040?

And which odds would you need on that?


The existence of a handful of logical qubits. Like I can initialize them, swap them, entangle them, measure them, etc., all performed with essentially perfect fidelity (say, 1e-10 error rate). I’d immediately bet $100k on that by 2040 at 50-50 odds, and probably 80-20 if I had a day or two to review the literature and confirm I wasn’t being adversely selected against.

If you wanted a useful machine, say factoring RSA-2048, you would need to push the date to more like 2050 or 2060.

Most of the uncertainty comes from the economy, engineering cost, public interest in QC, AI doom, etc. If we could somehow make a platonic bet on “is a large QC constructible if we dedicated $10T/yr for 200 years”, then I am over 95-5.


With that kind of statement, it seems fundamentally premature to be deploying PQC on the Internet today when we're 20+ years out from a useful machine. Prognostication this far into the future should not be the basis for making changes today; we're not good at it.


Disagree. I still put a few percentage points of probability on it happening much faster, maybe by 2030 or 2035. The whole internet and a good chunk of the global economy are predicated on secure encryption. A small chance of that being disrupted is worth substantial investment. Deploying PQC protocols for testing is cheap.

Not to mention the fact that we still don’t know if the current candidate PQC protocols are actually secure. Security is mostly a game of back-and-forth over years, so it could take a while.


The coordinated failures implied by a failure of QEC to scale would be new physics; the existing principles don't predict that anything like that should occur.

Which isn't to say that it's impossible, but it would be quite exciting if it were the case.


May not require new physics but may be arbitrarily expensive / impractical to build in reality, no?


Not really, no. If the threshold for fault tolerance is reached, then, absent new physics that makes the noise being corrected adversarially coordinated, quantum computers are scalable.


> I thought the number of physical qubits needed to express a logical qubit has stayed pretty much exponential as a function of the error rate

There is a finite threshold error rate (roughly 0.1-1%) at which producing a single logical qubit requires an unbounded number of physical qubits (infinite overhead). For error rates below the threshold, the overhead becomes much smaller. People expect overheads in the thousands. See Fig. 1 in our paper. https://arxiv.org/pdf/2009.05045
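
To make “overheads in the thousands” concrete, here is a toy estimate in the same spirit; the surface-code heuristic, the threshold, and the ~2d^2 physical-qubits-per-logical-qubit figure are assumptions for illustration, not numbers taken from Fig. 1:

  import math

  def physical_per_logical(p_physical, p_target, p_threshold=1e-2):
      """Toy estimate of physical qubits per logical qubit."""
      if p_physical >= p_threshold:
          return math.inf  # above threshold: no finite overhead
      # smallest odd code distance d meeting the target logical error rate,
      # using the heuristic p_logical ~ 0.1 * (p / p_th) ** ((d + 1) / 2)
      d = 3
      while 0.1 * (p_physical / p_threshold) ** ((d + 1) / 2) > p_target:
          d += 2
      return 2 * d * d  # roughly 2*d^2 physical qubits per logical qubit

  print(physical_per_logical(1e-3, 1e-12))  # ~900
  print(physical_per_logical(2e-3, 1e-12))  # overhead grows as p nears the threshold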

> and it’s not actually known if we’re any closer on that metric vs other more easily achieved metrics.

We are getting lower error rates. But until we cross the error threshold, the overhead for a logical qubit is infinity.

> I was under the impression that not all QCs being built are capable of executing Shor’s algorithm

Correct.

> which adds additional challenges that aren’t solved.

Logical qubits enable general purpose quantum computing, which includes Shor's algorithm. As mentioned, we don't have logical qubits yet, and some people are trying to build less general devices to solve certain math problems in the meantime. But the overall goal for the field is still logical qubits, and there's steady progress on that.

> My final impression is that having the QC algorithm run faster than a classical computer doing the same operation has also not necessarily gotten better

I can't really parse the claim, but I think your impression is wrong. Supremacy has always been a fuzzy bound, since it's defined in terms of the best known classical algorithms. But the supremacy results have gotten more unambiguous over time.


> I can't really parse the claim, but I think your impression is wrong. Supremacy has always been a fuzzy bound, since it's defined in terms of the best known classical algorithms. But the supremacy results have gotten more unambiguous over time.

By that I mean that integer factorization is still slower on quantum hardware than on classical machines, even though the numbers that can be factored have gotten larger. Similarly, with the exception of very specific toy problems constructed specifically to demonstrate quantum supremacy, we haven't achieved supremacy on any interesting and useful problems (I'm not sure whether D-Wave's quantum annealing machine really has any useful applications, though presumably it must; it's also not clear that it's a meaningful step on the path to a QC).


Those two sentences are correct, but don't really support your original points as I understand them.


Which metrics would be more meaningful?


Single most important is probably two-qubit gate error rates, until the fault-tolerance threshold is crossed. Then things like connectivity and logical-qubit counts (where logical-qubit count is roughly proportional to physical-qubit count, but reduced by an overhead that improves with lower error rates and implementation-specific factors).


Predictions as to the first semiprime factored after it has been achieved? Will they publish just RSA-1024, or drop a whole load of factors all at once and eliminate the whole field?


My non-confident impression is that when fault tolerance is first achieved it will still be somewhat expensive to add qubits, so the size of the largest integers that are factorizable will grow roughly like “quantum Moore’s law”. Like maybe they double qubit count every year or two, and I think the length (in digits) of the largest factorizable integer will scale like that, modulo a root or square. I could be confused though.
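
For what it's worth, here is the arithmetic behind that guess as a toy sketch; the ~2n logical qubits per n-bit integer figure is a commonly quoted ballpark for Shor-style circuits (ignoring runtime and magic-state costs), and the starting count and doubling cadence are purely hypothetical:

  # If factoring an n-bit integer takes ~2n logical qubits, then the
  # factorable bit-length grows roughly linearly with logical-qubit count,
  # i.e. it doubles whenever the qubit count doubles.
  START_QUBITS = 10     # hypothetical logical qubits at year 0
  DOUBLING_YEARS = 1.5  # hypothetical "quantum Moore's law" cadence

  for year in range(0, 13, 3):
      qubits = int(START_QUBITS * 2 ** (year / DOUBLING_YEARS))
      print(year, qubits, qubits // 2)  # year, logical qubits, ~bits factorable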



