What You Shouldn't Know About Quantum Computers (arxiv.org)



> Researchers like Jaime Sevilla and Jess Riedel support this timeline, publishing a report in late 2020 that claimed a 90% confidence of RSA-2048 being factored before 2060.

I am skeptical. 36 years is a long time, but in the past 10 years there hasn't been much progress:

    year 2001: factorization of 15 (IBM)
    year 2012: factorization of 21 (University of Bristol)
    year 2019: factorization of 35 attempt, failed (IBM)
https://en.wikipedia.org/wiki/Shor%27s_algorithm#Physical_im...


Co-author of the paper here.

The “largest integer factored” metric is terrible for several reasons, which is well recognized by researchers and is why people rarely publish those sorts of claims any more. Without net-positive error correction, it’s basically a function of how low your error rate is and how many bits that allows you to compute on with low chance of error, but as soon as the error rate crosses the fault-tolerance threshold then suddenly your storage is limited only by how many qubits you can build. So you can’t use this metric (if people were even publishing it) to predict anything after fault tolerance is achieved.

On more meaningful metrics, there has been steady progress.


I thought the number of physical qubits needed to express a logical qubit has stayed effectively exponential and it’s not actually known if we’re any closer on that metric vs other more easily achieved metrics.

Additionally, I was under the impression that not all QCs being built are capable of executing Shor’s algorithm, which adds additional challenges that aren’t solved.

My final impression is that having the QC algorithm run faster than a classical computer doing the same operation has also not necessarily gotten better, so we don’t even know if we are actually closer to quantum supremacy in terms of actually useful, more general computation rather than the limited supremacy that Google achieved in 2019 on specific problems. This is similar to the (still ongoing?) controversy over whether D-Wave built a QC or a quantum annealer capable of accelerating simulations of very specific physical properties.

Is this impression outdated and/or misinformed? Would love to update my priors.


A QC that can't run Shor's algorithm is not a QC, as much as D-Wave screams and stamps their feet that it's not fair.


In principle it can be a QC, just not a general purpose (“Q-Turing complete”) QC. There were useful classical computing machines before there were general purpose Turing-complete classical computers.

In practice, the first useful QCs will probably be Q-Turing complete.


I'm a complete layman when it comes to quantum computing, but I thought Shor's algorithm is effectively the most basic use-case example of quantum computers?

I'm confused here.


Yes, but it doesn't work on modern hardware. Too much noise.

Everyone in the industry is basically fully occupied with coming up with a mechanism for active error correction. But based on what I've heard from the folks on the front lines, it's still 10+ years away at minimum.


There are extant non-general purpose quantum computers that solve super esoteric useless math problems specially designed to be easy on these primitive quantum computers but infeasible on any classical computer. (This is “quantum supremacy” aka “quantum advantage”.) Furthermore, we expect to develop slightly less primitive quantum computers in the medium-term that can solve maybe-interesting quantum simulation questions without having the capability to run Shor’s algorithm (at least on an interestingly large integers). So no, Shor’s algorithm is not the most basic use case. It’s just one of the cleanest to understand.


Is there any reason to believe a computer that can do Shor's algorithm is actually possible? There's a lot of freaking out trying to build QC-resistant asymmetric encryption, but we don't actually know whether or not it's physically possible to build the requisite error corrections to make it feasible in the first place, right? Is there reason to believe that the physical reality of QM doesn't require exponentially more and more qbits for every logical qbit added to the system? Basically, is there any reason to believe that a QC capable of Shor's doesn't degrade to a space / time tradeoff for a classical computer when it comes to integer factorization?


The strong expert consensus is that fault-tolerant quantum computation is physically possible and it’s just a challenging engineering problem. One key insight is the threshold theorems, which show that if you can build individual physical gates with error rate below a finite threshold (typically 0.1-1%), you can just stack these gates together and they simulate logical operations with logical error rates that can be made exponentially small.

https://en.wikipedia.org/wiki/Threshold_theorem
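
To make "exponentially small" concrete, here's a rough back-of-the-envelope sketch (the functional form is the standard surface-code-style heuristic; the constants A and p_th below are illustrative, not measurements from any particular device):

    # p_logical ~ A * (p_physical / p_th) ** ((d + 1) / 2), where d is the code distance.
    # Illustrative constants only; real codes and hardware differ.
    def logical_error_rate(p_physical, d, p_th=0.01, A=0.1):
        return A * (p_physical / p_th) ** ((d + 1) / 2)

    for d in (3, 11, 25):
        print(d, logical_error_rate(1e-3, d))   # ~1e-3, ~1e-7, ~1e-14

Once the physical error rate is below threshold, each increase in code distance multiplies the suppression, so the logical error rate falls off exponentially in d.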

If it helps, there are plenty of historical analogies. Quantum-enhanced sensing took more than half a century between conception and practical use, but it is currently used daily at LIGO.


The strong expert consensus has been reached between a lot of people who would be out of a job if their consensus was the opposite.


Not really accurate. There are tons of tenured profs who are well-positioned to reveal reasons why QCs are fundamentally infeasible (and those kind of stories play well in the media). You can read about Gil Kalai's arguments here:

https://www.quantamagazine.org/the-argument-against-quantum-...

In any case, I'm happy to bet on this.


Yes, there are people who don't agree with your consensus; that is part of my point: the consensus is not all that universal. And the other part of my point is that if you only count people who have a full-time job developing quantum computers as experts, then you get a massive bias in opinion, as the sceptics are more likely to pick different jobs.

As for my own belief, I don't know how it will pan out, but I'm inherently sceptical of people who are too certain that it is one way or another; the hard evidence simply does not exist yet.

I'm personally most inclined to believe that the universe is computationally classical, and while quantum physics necessitates that there must be quite a lot more information and processing happening than a Newtonian model requires, it is still finite, thus putting a finite limit on what speed-up we can gain from a quantum computer. Most likely this pans out as the error correction limit being impossible to reach, but there is also the possibility that more complicated quantum states result in higher per-gate error rates.

I'm open to other conjectures, but if you want to dispel mine I would like to see some hard evidence.


Boson sampling _works_. It's now on the error-correction skeptics to provide a framework where that remains true, but stops being true when a particular combination of gates is applied, because the “naive” math says it should work fine. It would require a new fundamental physical principle to be true, for a quantum system to be sensitive to the nature of the computation involved in generating the distribution being sampled from.


At toy scale, yes, just like quantum computers. And since we haven't seen any news of working boson sampling with thousands of photons I assume that it stops working when you try to scale up, just like quantum computers.

New fundamental physical principles are at this stage required in any case: quantum mechanics and general relativity don't fit together, meaning that at least one of them needs to be overhauled into something that explains the same observations but doesn't have quite the same maths. The lacklustre performance of quantum computers could be a hint as to what needs overhauling.


What would be a statement that you would put, say, $100k on, that would be determined by 2040?

And which odds would you need on that?


The existence of a handful of logical qubits. Like I can initialize them, swap them, entangle them, measure them, etc., all performed with essentially perfect fidelity (say, 1e-10 error rate). I’d immediately bet $100k on that by 2040 at 50-50 odds, and probably 80-20 if I had a day or two to review the literature and confirm I wasn’t being adversely selected against.

If you wanted a useful machine, say factoring RSA-2048, you would need to push the date to more like 2050 or 2060.

Most of the uncertainty comes from the economy, engineering cost, public interest in QC, AI doom, etc. If we could somehow make a platonic bet on “is a large QC constructible if we dedicated $10T/yr for 200 years”, then I am over 95-5.


With that kind of statement, it seems fundamentally immature to be deploying PQC on the Internet today when we're 20+ years out from a useful machine. Prognostication this far into the future should not be the basis for making changes today; we're not good at it.


Disagree. I still put a few percentage points of probability on it happening much faster, maybe by 2030 or 2035. The whole internet and a good chunk of the global economy is predicated on secure encryption. A small chance of that being disrupted is worth substantial investment. Deploying PQC protocols for testing is cheap.

Not to mention the fact that we still don’t know if the current candidate PQC protocols are actually secure. Security is mostly a game of back-and-forth over years, so it could take a while.


The coordinated failures implied by a failure of QEC to scale would be new physics; the existing principles don't predict that anything like that should occur.

Which isn't to say that it's impossible, but it would be quite exciting if it were the case.


May not require new physics but may be arbitrarily expensive / impractical to build in reality, no?


Not really, no. If the breakpoint for fault tolerance is reached, then in the absence of new physics causing the noise that needs to be corrected to be adversarially coordinated, quantum computers are scalable.


> I thought the number of physical qubits needed to express a logical qubit has stayed effectively exponential

There is a finite threshold error rate (roughly 0.1-1%) at which producing a single logical qubit requires an unbounded number of physical qubits (infinite overhead). For error rates below the threshold, the overhead becomes much smaller; people expect overheads in the thousands. See Fig. 1 in our paper. https://arxiv.org/pdf/2009.05045

> and it’s not actually known if we’re any closer on that metric vs other more easily achieved metrics.

We are getting lower error rates. But until we cross the error threshold, the overhead for a logical qubit is infinity.
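
As a purely illustrative calculation (a generic surface-code-style heuristic with made-up constants, not the numbers from our paper), you can see how the overhead behaves once you are below threshold:

    import math

    # Assume p_logical ~ 0.1 * (p / p_th) ** ((d + 1) / 2) and roughly
    # 2 * d^2 physical qubits per logical qubit at code distance d.
    def physical_per_logical(p, p_target, p_th=0.01):
        if p >= p_th:
            return math.inf              # at or above threshold: no finite overhead
        d = 1
        while 0.1 * (p / p_th) ** ((d + 1) / 2) > p_target:
            d += 2                       # code distance is odd
        return 2 * d * d

    print(physical_per_logical(1e-3, 1e-12))   # on the order of a thousand
    print(physical_per_logical(5e-3, 1e-12))   # closer to threshold: roughly 10x more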

> I was under the impression that not all QCs being built are capable of executing Shor’s algorithm

Correct.

> which adds additional challenges that aren’t solved.

Logical qubits enable general purpose quantum computing, which includes Shor's algorithm. As mentioned, we don't have logical qubits yet, and some people are trying to build less general devices to solve certain math problems in the meantime. But the overall goal for the field is still logical qubits, and there's steady progress on that.

> My final impression is that having the QC algorithm run faster than a classical computer doing the same operation has also not necessarily gotten better

I can't really parse the claim, but I think your impression is wrong. Supremacy has always been a fuzzy bound, since it's defined in terms of the best known classical algorithms. But the supremacy results have gotten more unambiguous over time.


> I can't really parse the claim, but I think your impression is wrong. Supremacy has always been a fuzzy bound, since it's defined in terms of the best known classical algorithms. But the supremacy results have gotten more unambiguous over time.

By that I mean that integer factorization is still slower than on classical machines even though the numbers that can be factored have gotten larger. Similarly, with the exception of very specific toy problems specifically constructed to demonstrate quantum supremacy, we haven't achieved supremacy on any interesting and useful problems (I'm not sure whether D-Wave's quantum annealing machine really has any useful applications, though presumably it must; it's also not clear that it's a meaningful step on the path to a QC).


Those two sentences are correct, but don't really support your original points as I understand them.


Which metrics would be more meaningful?


The single most important is probably two-qubit gate error rates, until the fault-tolerance threshold is crossed. Then things like connectivity and logical-qubit counts (where logical qubits are roughly proportional to physical qubits, but suppressed by an overhead that improves with lower error rates and implementation-specific factors).


Predictions as to the first semiprime factored after it has been achieved? Will they publish just RSA-1024, or drop a whole load of factors all at once and eliminate the whole field?


My non-confident impression is that when fault tolerance is first achieved it will still be somewhat expensive to add qubits, so the size of the largest integers that are factorizable will grow roughly like “quantum Moore’s law”. Like maybe they double qubit count every year or two, and I think the length (in digits) of the largest factorizable integer will scale like that, modulo a root or square. I could be confused though.


> I am skeptical. 36 years is a long time, but in the past 10 years there hasn't been much progress:

36 years ago we didn't even know Shor's algorithm existed, and didn't know quantum computers could non-exponentially factor integers at all.

So in a certain sense, we have made infinite progress in the last 36 years. Who knows what could happen in the next 36.

More generally, I think making science predictions that far out is basically impossible.


https://youtu.be/6qD9XElTpCE

> Hear the story of Shor's Algorithm, straight from the source, Peter Shor.

Shor's part of the story starts in '81.

Though I generally agree that it is hard to make predictions that far out. We've got the math to do some higher things, but we still don't have flying cars and fusion power.


Shor being alive in 1981 is not the same thing as saying Shor's algorithm was discovered in 1981.


The seeds of Shor's algorithm were planted when he went to a lecture on negative probability by Feynman in 1981 as a graduate student. Shor's part of the story starts in 1981.


IIRC none of those uses of Shor's algorithm were real (well, maybe the 2019 one was, but that failed).

There's a threshold you need to reach for quantum error correction to work and we are approaching it pretty steadily on a log scale.


They were illegitimate instances of Shor according to [1]:

> Of course this should not be considered a serious demonstration of Shor’s algorithm. It does, however, illustrate the danger in “compiled” demonstrations of Shor’s algorithm. To varying degrees, all previous factorization experiments have benefited from this artifice. While there is no objection to having a classical compiler help design a quantum circuit (indeed, probably all quantum computers will function in this way), it is not legitimate for a compiler to know the answer to the problem being solved. To even call such a procedure compilation is an abuse of language.

[1] https://arxiv.org/pdf/1301.7007


Quantum error correction produces logical qubits that have a smaller but still nonzero error rate. Thus it needs to be applied repeatedly to achieve the error rate necessary to produce meaningful results. For Shor algorithm the error rate needs to decrease exponentially with the number of qubits. Thus even though IBM has hundred-qubit QCs, they have still only managed to use five qubits to factorize the number 21.


>For Shor algorithm the error rate needs to decrease exponentially with the number of qubits

I have never seen a result like that. Do you have a citation?


I have forgotten the source where I stumbled on the requirement for a qubit error rate to run the Shor algorithm. I would greatly value it if someone could put a key reference here.

A recent result considers a generic error model and a subclass for particular kinds of prime numbers [1]. The important result is that the algorithm fails once $\epsilon > c n^{-1/3}$, where $\epsilon$ is the qubit error rate and $n$ the number being factorised. Since an $N$-qubit register holds numbers up to $n = 2^N$, the requirement becomes:

error_rate < const/2^{N/3}

Therefore, the error rate must decrease exponentially with the number of logical qubits with which the computation is made.

[1]: https://arxiv.org/pdf/2306.10072


lots of functions with horizontal asymptotes look ok on a log scale until they don't.


According to Locklin, even factoring 15 requires precompilation where you have to calculate the factors in advance before sending to the QC in order to minimize the number of required gates.


It’s hard to make predictions, especially about the future.


When a prediction has a probability attached to it, it should be possible to check the math. In some sense it isn’t really a prediction about the future, so much as a statement about current information (which is not complete).

Or, possibly, they are using “90% confidence” colloquially as “pretty sure.” If so, that should probably be made more clear. He’s using the fact that the researchers agree as an argument from authority, which is doubly wrong if the apparent authorities here were just giving their hunches.


It’s a quote, and you are gonna laugh when you find out who said it.


Attributed to Yogi Berra, I just don’t think it fits the situation here very well.


We are quite clear in the abstract and the paper that the predictions are conditional on the continuation of the smooth progress seen to date. Contrary to the GP's interpretation, these predictions are holding up well.


"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."


At first I thought that the introduction was hyperbole, but the write-up does actually more or less deliver on it. Written in plain English that anyone can grok, it's a good summary of what quantum computing is, what it isn't, and why you should care. Definitely recommend reading it if, like me, you're not too familiar with the field.


> For example, would quantum computers work by trying all possible answers in parallel? Sorry, no, that's too good to be true: Quantum computers work by choreographing a pattern of interference, where the contributions to the amplitude of each wrong answer cancel each other out, while the contributions to the right answer's amplitude reinforce each other. Only for special problems, as it turns out, do we know how to choreograph such an interference pattern to deliver a huge speedup over the best known classical algorithms. This, in turn, is why we don't expect quantum computers ever to replace classical computers, but “merely” to complement them, accelerating specific tasks like quantum simulation and codebreaking.
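
A tiny numerical illustration of that cancellation (my own sketch, not from the book): apply a Hadamard gate twice and the two paths leading to |1> cancel while the two paths leading to |0> reinforce.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    state = np.array([1.0, 0.0])                   # start in |0>
    after_one = H @ state                          # equal amplitudes on |0> and |1>
    after_two = H @ after_one                      # interference between the two paths
    print(np.round(after_one, 3))                  # [0.707 0.707]
    print(np.round(after_two, 3))                  # [1. 0.]  the |1> amplitude cancelled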

I'm not sure about the field of physics, but in deep learning there are hundreds of papers published every day, and no more than a percent of them try to make the field less mythical; instead they keep inventing buzzwords and claiming positive results that make it even more mythical.


Neither NP nor NP∩coNP is known to be contained in BQP, for those who want a complexity-theory version of the above.

BQP: Bounded-Error Quantum Polynomial-Time, with error probability bounded by 1/3.

BQP is the complexity class thought to contain problems with practical solutions for quantum computers.

IIRC the main restriction is that the transition amplitudes are subject to the Church–Turing thesis and must be computable functions.

Hopefully useful buzzwords for those who want to dig deeper.


Yeah these buzzwords are very useful pointers to wikipedia pages with details on what problems are in which complexity class and what are not.

I'm more familiar with cryptography so the most famous problem in BQP for me is discrete logarithm. Once you have this primitive, the following things are very clear:

1. How Shor's algorithm for factorization works: it consists of a classical algorithm that reduces factorization to calculating the order of a group element (which is a special case of discrete logarithm), then uses a quantum computer to solve the order-finding problem. This breaks RSA. (A classical-only sketch of this reduction is at the end of this comment.)

2. Breaking elliptic-curve cryptography: modern elliptic-curve cryptography constructs an elliptic curve (in the form y^2 = x^3 + Ax + B) and defines a group operation on the points of the curve. It turns out that scalar multiplication is very easy but the discrete logarithm is hard, and that hardness is used to argue that Diffie-Hellman key exchange is hard to break, but what if it's not? Moreover, elliptic-curve keys usually only have 256~512 bits, since that's sufficient to guarantee security in the classical case, compared to RSA with 2048~4096 bits. While it's harder to break elliptic curves using classical methods, it turns out to be even easier for quantum computers.

What a quantum computer is NOT is a parallel computer with 2^n threads running in parallel that collapses to the thread that gives the correct result. That would imply NP ⊆ BQP, which is not known to be true, and no matter how many qubits we build, it won't become any more likely to be true.
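
For concreteness, here is the classical half of the reduction from point 1, with the order-finding step done by brute force (a sketch of my own; the brute-force loop is exactly the part a quantum computer would replace with period finding):

    import math, random

    def order_bruteforce(a, N):
        # Smallest r > 0 with a^r = 1 (mod N); this is the quantum-accelerated step.
        x, r = a % N, 1
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def factor_via_order(N, tries=50):
        for _ in range(tries):
            a = random.randrange(2, N - 1)
            g = math.gcd(a, N)
            if g > 1:
                return g                 # lucky: a already shares a factor with N
            r = order_bruteforce(a, N)
            if r % 2:                    # need an even order
                continue
            y = pow(a, r // 2, N)
            if y == N - 1:               # a^(r/2) = -1 (mod N) gives no information
                continue
            g = math.gcd(y - 1, N)
            if 1 < g < N:
                return g
        return None

    print(factor_via_order(21))          # prints 3 or 7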


That phenomenon isn't specific to physics or deep learning. Academic papers are more and more of a joke these days. Sure, there's plenty of good work being done too, but it's buried in a pile of poorly done research and misleading statistics, written only to chase funding and/or recognition for the author (promotions, graduation, jobs, etc.).


The article suffers from skipping between layers of abstraction in an attempt to make a point, and in doing so ultimately fails to make its initial point.

In the first "Myth" section, that "nobody understands this quantum stuff", the example is used of the transistor. It's true that we have no way of making a classical model of a transistor, and our understanding of how transistors work relies on quantum mechanics.

But we did not invent transistors from quantum physics -- we created transistors long before we had an explanation of how they worked, and we have continued to improve and iterate by making advances in material sciences and experimentation, not by applying first principles. Things like blue LEDs were invented by tinkering rather than by solving Lagrangians.

The final section was of particular interest to me; Gil Kalai's work on quantum error correction is very interesting to me and I am in the camp that believes that quantum computing is not possible in any useful sense; in particular a quantum computer will not be capable of being significantly more powerful than a classical computer, in the quantum supremacy sense.

Here the author reverts to a simplistic argument that "of course" quantum computers are possible, because we can model a quantum computer in a traditional computer. But this sidesteps the main claims, which are whether it is possible to scale error correction to the point where a useful result can be achieved.

Even in the domain of NISQ, which is roughly the equivalent of running a fluid dynamics simulation in a bathtub, we have yet to produce results showing that we can scale significantly better than a classical computer.


> But we did not invent transistors from quantum physics -- we created transistors long before we had an explanation of how they worked, and we have continued to improve and iterate by making advances in material sciences and experimentation, not by applying first principles. Things like blue LEDs were invented by tinkering rather than by solving Lagrangians.

In a similar spirit, ships were built, steam engines made, and applications of superconductivity imagined long before the objects were understood from first principles. Creation often preceded theory; nevertheless, theory is still useful for optimizing designs at later stages.


>I am in the camp that believes that quantum computing is not possible in any useful sense; in particular a quantum computer will not be capable of being significantly more powerful than a classical computer, in the quantum supremacy sense.

Could you elaborate on that? I was under the impression that some companies are using Quantum Computers because quantum supremacy has been demonstrated (such as in molecular modeling for drug discovery or materials science) - but I'm a complete novice here, and would love to learn more. I'm probably quite wrong, and happy to be corrected.

Also, by not possible in a useful sense, do you mean that QCs as they are now versus in N years, or just in general?


When I say that quantum computing is not possible, I mean in general. It's a dead end. Note that this belief is subject to being falsified by progress in the field or a demonstration of error correction at scale.

There are companies selling and companies using "Quantum Computers". For example, D-Wave Systems sells a computer that uses quantum annealing to solve a class of minimization problems.

To date, there is no quantum computer, in a commercial or a research setting, that has demonstrated it can compute anything faster than a classical computer (this includes the D-Wave systems).

The current trend is to try to establish quantum supremacy (and thus falsify the error correction is impossible thesis) through building extremely simple (non-general purpose) quantum systems. Most recently Google announced a successful attempt that has since been invalidated.


There are no quantum computers in practical use at the moment. This essay^ by IBM (incredibly biased towards making quantum computing seem useful) provides many "visions of"'s and "potential to"'s and "may benefit"'s, but nothing in the present tense.

^https://www.ibm.com/quantum/blog/quantum-working-groups


>The final section was of particular interest to me; Gil Kalai's work on quantum error correction is very interesting to me and I am in the camp that believes that quantum computing is not possible in any useful sense; in particular a quantum computer will not be capable of being significantly more powerful than a classical computer, in the quantum supremacy sense.

Maybe his arguments have improved over the decades. Does he now have a coherent argument that doesn't (i) take it as a point of religious faith that noise will be magically correlated in order to break error correcting codes, or (ii) also demonstrate the impossibility of classical computers?


> (i) take it as a point of religious faith that noise will be magically correlated in order to break error correcting codes

This possibility doesn't strike me as any more magical than QM's conjugate variables or its unusual correlations, which also seemed magical to classical physics. Arguably these + contextuality would make noise correlated with the system's configuration in some way, we just don't have a thorough enough understanding of measurement and decoherence to quantify it precisely. I think that's beginning to change, and that we'll have more clarity in the next 10 years or so.


But the reason people believe QM is because of overwhelming empirical evidence, not because it seems right.

You can't just go "QM has many surprising aspects" to "this other theory also has surprising aspects, it's probably true".

If we had been having this conversation before some of the more bizarre quantum effects had been observed it would have been a fair comparison, but we are way past that point.


> You can't just go "QM has many surprising aspects" to "this other theory also has surprising aspects, it's probably true".

I didn't say it's probably true because of this argument, you're saying it's probably false because it seems magical, and that's the implication I'm disputing with that analogy.

I do think it's plausible that noise could be correlated for the reasons I specified, but not because of the "magical" analogy.


> The final section was of particular interest to me; Gil Kalai's work on quantum error correction is very interesting to me and I am in the camp that believes that quantum computing is not possible in any useful sense; in particular a quantum computer will not be capable of being significantly more powerful than a classical computer, in the quantum supremacy sense.

My question too. I've had a vague feeling about this for a long time, waving my hands about thermodynamics with "It must get exponentially harder per qbit to eliminate thermal noise by cooling down closer to absolute zero," and I'd really like to get past my hand-waving and see what the dynamics really are.


>It must get exponentially harder per qbit to eliminate thermal noise by cooling down closer to absolute zero

Why? Cooling a large object is not exponentially harder than cooling a small object.


Surface area is squared, volume is cubed; a larger object has to get hotter to expel the same amount of heat.

Per unit of volume, your body produces more heat than the sun, exactly because it is an exponential function.


Huh? What are you saying is an exponential function?

x^3 is not an exponential function, in the sense relevant here.


You aren't thinking about the exponential decay of emissivity near absolute zero. It isn't linear like we get to assume to make the math easier.

That's why IBM's largest refrigerator can only dissipate tiny amounts of heat when cold.

> enabling close to ~10 mW at 100 mK cooling power, and over 24 W of cooling power at 4 K temperatures. Finally, the weight of the entire system — 6.7 metric tons

They aren't building single huge quantum processors, but networks of easy to cool parts.

IBM hopes that one GoldenEye refrigerator may be able to hold a million qbits but that isn't enough to break RSA.

RAND estimated 890 MWh per key to be broken.

It will be horizontal sprawl, not vertical integration.

Larger objects simply have to get hotter to expel the same watts per volume, or increase surface area.

That is problematic for quantum computers.


> RAND estimated 890 MWh per key to be broken

Can you give a reference? Also, how is heat generated within the quantum processor as operations are unitary?


It seems to me your argument hinges on the ratio between the surface area and the volume dropping to zero as the object gets bigger.

This is only true if the object enlarges in every direction equally. If it spreads out along a flat plane then the ratio is essentially constant, for example.


If the heat produced and the volume are each proportional to the number of qubits (R proportional to cube root of n), and the surface area bounding the qubits is as small as it could be while bounding that much volume (as a pessimistic assumption) (and therefore a sphere) ... hm, the rate that heat passes through a surface by conductance is proportional to the surface area multiplied by the gradient of temperature across the surface, right? If the heat production is uniform within the ball, then... well, the core of the ball would be the hottest... supposing that the surface of the ball is held at a constant temperature (with the system in a steady state, as far as temperature goes)

Let u(x) be the temperature at location x. Let \alpha be the thermal diffusivity (assumed to be constant throughout the material and over time). Assume that within the ball of radius R, \alpha \nabla^2 u = k for k the heat production density divided by the specific heat capacity (assumed to be constant over the range of temperatures involved). For spherically symmetric u(x), a function of just distance from the center...

ok, so, need solutions of Poisson's equation, \nabla^2 u = f, where f is some constant times the indicator function of the ball of radius R? Uh, I was thinking to have a boundary condition at the surface of the ball, fixing a particular temperature there, and seeing what temperature enforced there is enough to produce a small enough temperature at the center of the ball... (In that case I guess f can just be a constant, rather than the indicator function of the ball)

uhh.. does this have an analytic solution? This is ending up as a more difficult computation than I anticipated...

edit: oh, for it to be steady state, the rate of heat going through any sphere centered at the origin, must be equal to the rate of heat produced within the ball that it bounds, so for r < R, (4/3) pi r^3 k going through 4 pi r^2 surface area, so heat transfer per surface area of (1/3) r k , and... uh, this is proportional to the gradient in temperature, and this gradient should be proportional to the derivative of temperature with respect to radius (as, gradient should be in a radial direction)

so, g'(r) ~ r ,

so g(r) - g(0) ~ r^2 .

So... if I haven't messed up too badly, I would think that, the difference in temperature of the center, and the temperature of the surface, should be proportional to (heat production per qubit) * ((radius of ball)^2) ~ (heat production per qubit) * ((number of qubits)^{2/3})

which... given a particular upper bound on working temperatures for the core of the ball, would put an upper bound on the number of qubits if packed in a ball like that. Though, I would imagine that if you instead have the inner (some number) fraction of the ball not have qubits, and not produce heat, then that wouldn't apply. Though this would require the surface area grow faster than (number of qubits)^{2/3} .
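
Follow-up: the uniformly heated ball does have a closed-form steady state (a standard conduction result, assuming constant diffusivity and a fixed surface temperature), and it confirms the scaling above:

    \alpha \nabla^2 u + k = 0 on the ball of radius R, u spherically symmetric, u(R) fixed
    =>  u(r) = u(R) + (k / (6 \alpha)) (R^2 - r^2),
    so  u(0) - u(R) = k R^2 / (6 \alpha)  ~  (heat per qubit) * n^{2/3}  when R ~ n^{1/3}.

So the center-to-surface temperature difference really does grow like n^{2/3} for a solid ball of uniformly heat-producing qubits, which is why spreading out (or leaving the core empty, as suggested above) changes the picture.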


What is the right term for "the ratio scales as a power, rather than linearly"?


(super-linear) polynomial


For GP: You can be more specific too with quadratic, cubic, etc...


Not the size, but the temperature. If you have to cool to a microkelvin for a certain number of qbits to retain coherence, how low do you need to go to add one more qbit, and how much energy will that require?

My thermodynamic instinct says that the cooling effort required rises with the resolving power — which is exponential with the number of qbits. But it's just instinct, not grounded very well in science or engineering.


It's plausible to me that cooling becomes exponentially harder as you aim for lower temperatures, but I don't understand why you think you need lower temperatures for more qbits? The whole point of quantum error correction is that once you reach a constant threshold you can use more iterations of error correction to decrease your logical error rate without decreasing your physical error rate.


That's also plausible. My guess is the same logic would apply — you would need exponentially more error correction for a linear increase in qbits.


They're not talking about lower temperatures, but greater volume.


I had assumed both — greater volume and also lower temperatures. The more qbits you are trying to get to cooperate, the less thermal noise you can tolerate, so the cooler you have to make it.

Yes, more qbits also take up more space, but I hadn't thought of that as a major factor — but it certainly could be if they are physically large! Overall I think the bigger issue is cooling.

The article mentions error correction as an alternative to increased coherence among the qbits. Perhaps what that really means is "to increase meaningful interaction among qbits, make it as cold as you can, and then error correct until you get a meaningful answer." My intuition is that they are both a battle against entropy — the need for error correction will also increase exponentially with the number of qbits, simply because the number of possible combinations increases exponentially.

And the even larger outcome of all this is that if this intuition bears out, quantum computing will have no fundamental advantage over conventional computing — which also has an exponential cost for a linear increase in bits, for computations such as factoring large numbers.


Relevant reading: https://computerhistory.org/blog/why-analog-computers/

tl;dr Experts in a domain are extremely poor at predicting the impact of a technology disruptive to their domain. It's not that they are trying to be dishonest. Just that there are too many fallacies waiting to trap them and they inevitably fall prey to one or more of them.


Chris Ferrie also writes great books about science for babies.

https://www.csferrie.com/books

https://www.amazon.com/Quantum-Computing-Babies-Baby-Univers...


It might be that I have a PhD in physics and a long-term interest in foundations, but I feel like these books are not very good. Often they emphasize the superficial analogies which physicists use rather than the substantial qualities of the theories.

For example, emphasizing spacetime curvature is, in my opinion, kind of a weird way to talk about the actual substance of general relativity (universal coupling of gravity, futility of trying to describe the world with any fixed four dimensional coordinate system). His quantum mechanics book has similar problems. When I try to explain physics to my young child I always try to get at the essence of the ideas, not the superficial pictures.


At first I saw that this was downvoted and assumed maybe it was a different Chris Ferrie, but from the looks of the blog it's the same person.[0]

Maybe other people thought this didn't add much to the discussion, but I found it interesting.

[0] I am Chris Ferrie, father of four and happy husband. My day job is academic research where I follow my curiosity through the world of quantum physics. My passion for communicating science has led from the most esoteric topics of mathematical physics to more recently writing children’s books.[1]

[1] https://www.csferrie.com/about


It is for sure the same Chris Ferrie, it's even in the foreword of the paper:

Yet in all that time I never thought to write, much less did I actually write, a pithy book called “What You Shouldn't Know About Quantum Computers.” My colleague Chris Ferrie did. He's the same guy who coauthored the surprise bestseller “Quantum Computing for Babies.” Now he's back, with something for those babies to read when they're slightly older.

I enjoy his kids' books and read almost all of them to my kids. They aren't "perfect" (whatever that means for a kids' book), but my kids love them and they start to wrap their brains around otherwise inaccessible topics for their age.


Fwiw, there's an event I was just looking at from UCL on the "Future of Quantum Computing" (online webinar), https://www.eventbrite.co.uk/e/the-future-of-quantum-computi...

It's a panel headed by Prof Al-Khalili. I suspect it will be a beginner level presentation.

>Professor Jim Al-Khalili CBE FRS is a theoretical physicist at the University of Surrey where he holds a Distinguished Chair in physics and leads the Quantum Foundations and Technologies Research Group in the School of Mathematics and Physics. As well as his academic work he is a well-known popular science author and broadcaster on BBC radio and television.

No affiliation, just may be of interest.


You have to get to page 25 before it starts being honest about the fact quantum computing is a con.

Important Nuance: the research is real, the science is real, but the narrative being sold about the future of quantum computing to ensnare investors is not a reasonable prediction and is a con.


To be fair, that doesn't really distinguish it from the rest of the industry. I would say the same thing about AI and (especially) blockchain.


> I would say the same thing about AI

AI has commercial applications right now. What commercial application is there for quantum computing right now?


D-wave managed to sell some devices. I'm not clear on what they could be useful for, but they still managed to sell them.


The linked paper touches on quantum error correction but doesn't explain what the state of the art is. Has any team successfully demonstrated a single usable, error-corrected logical qubit yet? I saw this article two months ago https://physicsworld.com/a/why-error-correction-is-quantum-c... discussing the topic, but the writing makes it unclear exactly what was achieved by the various groups. Is recovery from a finite number of errors sufficient to make a usable logical qubit, or is more work still left to be done?


I think this is the SOTA

https://s7d9.scene7.com/is/content/quantum/LQ_ErrorCorrectio...

the corrections significantly decrease the error rate but they need to do a lot of preselection, so it isn't quite useful yet.


Thanks! If I'm reading that correctly, their best error rate with pre- and post-selection is 0.03%, but they end the paper with the statement "A significant milestone will be to demonstrate a universal family of quantum circuits with logical error rates approaching 10^−8." Seems like we're still more than four orders of magnitude off.


IIRC we're just over an order of magnitude away on physical qubit error rates from the point where we should see the exponential gains from error correction that theory predicts.


It's a better use of your time just learning quantum mechanics than reading this book. Even if you give up, you'll at least learn some linear algebra along the way.


Thank God someone other than Scott Aaronson is publishing this kind of content. Shtetl-Optimized is too much of a vanity project to serve as a mainstream resource.


Meanwhile, the author of the foreword is none other than…


> Researchers like Jaime Sevilla and Jess Riedel support this timeline, publishing a report in late 2020 that claimed a 90% confidence of RSA-2048 being factored before 2060.

Delusional, IMO. Without a proper understanding of how the non-linearity of the macroscopic world arises from the unitarity of QM, any scaling projections are wishful thinking. We just don't have a good enough understanding of the measurement problem and decoherence to make such projections.


@dang why was this flagged off the front page? Isn't this a blatantly obvious abuse of the flagging system?


Is this really an appropriate use of arXiv? I thought it was for physics preprints. This might be a neat work, but it appears to be a 143-page blog post in a PDF (unless I missed the references section?)


This is in the arXiv section called “physics and society”, which is specifically intended for high-quality popular physics books, among other things.


Good to know!


The text is also available as a printed book, so I guess a PDF of it fits any reasonable definition of a preprint.


It will never work, the physics is completely wrong.


>It will never work, the physics is completely wrong.

How so?



