
> The final section was of particular interest to me; Gil Kalai's work on quantum error correction interests me a great deal, and I am in the camp that believes quantum computing is not possible in any useful sense; in particular, a quantum computer will not be capable of being significantly more powerful than a classical computer, in the quantum-supremacy sense.

My question too. I've had a vague feeling about this for a long time, waving my hands about thermodynamics with "It must get exponentially harder per qubit to eliminate thermal noise by cooling down closer to absolute zero," and I'd really like to get past my hand-waving and see what the dynamics really are.




>It must get exponentially harder per qubit to eliminate thermal noise by cooling down closer to absolute zero

Why? Cooling a large object is not exponentially harder than cooling a small object.


Surface area scales with the square of linear size while volume scales with the cube, so a larger object has to get hotter to expel the same amount of heat per unit volume.

Per unit of volume, your body produces more heat than the sun, exactly because it is an exponential function.
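
For what it's worth, here is a rough back-of-the-envelope check of that claim and of the area-to-volume scaling behind it (the specific numbers below are my own approximations, not from the article):

  # Rough order-of-magnitude check of the "body vs. sun" claim, plus the
  # surface-area-to-volume scaling it rests on. All numbers are approximate.
  import math

  sun_luminosity_w = 3.8e26          # total power output of the sun (W)
  sun_radius_m = 6.96e8              # solar radius (m)
  sun_volume_m3 = 4 / 3 * math.pi * sun_radius_m**3

  body_power_w = 100.0               # resting human metabolic heat (W), roughly
  body_volume_m3 = 0.07              # ~70 kg at ~1000 kg/m^3

  print(f"sun:  {sun_luminosity_w / sun_volume_m3:.2f} W/m^3")   # ~0.3 W/m^3
  print(f"body: {body_power_w / body_volume_m3:.0f} W/m^3")      # ~1400 W/m^3

  # Surface area grows as L^2 while volume grows as L^3, so the area
  # available to shed heat, per unit of heat-producing volume, shrinks as 1/L.
  for side_m in (1, 10, 100):
      ratio = 6 * side_m**2 / side_m**3
      print(f"cube of side {side_m:>3} m: area/volume = {ratio:.2f} per metre")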


Huh? What are you saying is an exponential function?

x^3 is not an exponential function, in the sense relevant here.


You aren't thinking about the exponential decay of emissivity near absolute zero. It isn't linear, the way we usually get to assume to make the math easier.

That's why IBM's largest refrigerator can only dissipate tiny amounts of heat when cold.

> enabling close to ~10 mW at 100 mK cooling power, and over 24 W of cooling power at 4 K temperatures. Finally, the weight of the entire system — 6.7 metric tons

They aren't building single huge quantum processors, but networks of easy-to-cool parts.

IBM hopes that one GoldenEye refrigerator may be able to hold a million qubits, but that isn't enough to break RSA.

RAND estimated 890 MWh per key to be broken.

It will be horizontal sprawl, not vertical integration.

Larger objects simply have to get hotter to expel the same watts per unit volume, or else increase their surface area.

That is problematic for quantum computers.
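
To make that concrete with the figures quoted above: a commonly cited rule of thumb is that a dilution refrigerator's cooling power at the mixing chamber scales roughly as T^2, so the available budget collapses quickly as you go colder (the T^2 scaling is my assumption here, not something stated in the quote):

  # Sketch of why the "cold" cooling budget collapses so fast. Assumption
  # (mine, not from the quote): mixing-chamber cooling power scales ~ T^2.
  p_ref_w = 10e-3     # ~10 mW at 100 mK, the figure quoted above
  t_ref_k = 0.100

  def cooling_power_w(t_k):
      """Rough P ~ T^2 scaling, anchored at the quoted 100 mK figure."""
      return p_ref_w * (t_k / t_ref_k) ** 2

  for t_mk in (100, 50, 20, 10):
      p_mw = cooling_power_w(t_mk / 1000) * 1e3
      print(f"{t_mk:>3} mK: ~{p_mw:.2f} mW of cooling power available")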


> RAND estimated 890 MWh per key to be broken

Can you give a reference? Also, how is heat generated within the quantum processor, given that the operations are unitary?


It seems to me your argument hinges on the ratio between the surface area and the volume dropping to zero as the object gets bigger.

This is only true if the object enlarges in every direction equally. If it spreads out along a flat plane then the ratio is essentially constant, for example.
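
A quick way to see the difference (illustrative numbers of my own):

  # Surface-to-volume ratio: a ball growing in every direction vs. a slab of
  # fixed thickness spreading out in a plane.
  import math

  print("ball of radius R: area/volume = 3/R, dropping toward zero")
  for r in (1.0, 10.0, 100.0):
      area = 4 * math.pi * r**2
      volume = 4 / 3 * math.pi * r**3
      print(f"  R = {r:>5.0f}: {area / volume:.3f}")

  print("slab of thickness 1, side L: area/volume -> 2, roughly constant")
  t = 1.0
  for side in (1.0, 10.0, 100.0):
      area = 2 * side**2 + 4 * side * t   # top and bottom plus the thin edges
      volume = side**2 * t
      print(f"  L = {side:>5.0f}: {area / volume:.3f}")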


If the heat produced and the volume are each proportional to the number of qubits n (so the radius R is proportional to the cube root of n), and the surface area bounding the qubits is as small as it could be while bounding that much volume (a pessimistic assumption, which makes it a sphere)... hm. The rate that heat passes through a surface by conduction is proportional to the surface area multiplied by the gradient of temperature across that surface, right? If the heat production is uniform within the ball, then the core of the ball would be the hottest, supposing that the surface of the ball is held at a constant temperature (with the system in a steady state, as far as temperature goes).

Let u(x) be the temperature at location x, and let \alpha be the thermal diffusivity (assumed to be constant throughout the material and over time). Assume that within the ball of radius R, in steady state, \alpha \nabla^2 u = -k (negative because heat is being produced and must flow outward), where k is the heat production density divided by the specific heat capacity (assumed to be constant over the range of temperatures involved). Take u(x) to be spherically symmetric, a function of just the distance from the center.

ok, so, need solutions of Poisson's equation, \nabla^2 u = f, where f is some constant times the indicator function of the ball of radius R? I was thinking to impose a boundary condition at the surface of the ball, fixing a particular temperature there, and seeing what temperature enforced there is enough to produce a small enough temperature at the center of the ball... (In that case I guess f can just be a constant, rather than the indicator function of the ball.)

uhh.. does this have an analytic solution? This is ending up as a more difficult computation than I anticipated...

edit: oh, for it to be steady state, the rate of heat going through any sphere centered at the origin must equal the rate of heat produced within the ball that it bounds. So for r < R, (4/3) pi r^3 k goes through 4 pi r^2 of surface area, giving a heat flux per unit area of (1/3) r k, and this flux is proportional to the temperature gradient, which by spherical symmetry is just the derivative of temperature with respect to radius (pointing in the radial direction).

so, writing g(r) for the temperature at radius r: g'(r) ~ r,

so g(r) - g(0) ~ r^2 .

So... if I haven't messed up too badly, I would think that the difference between the temperature at the center and the temperature at the surface should be proportional to (heat production per qubit) * ((radius of ball)^2) ~ (heat production per qubit) * ((number of qubits)^{2/3}), at fixed qubit density.

which... given a particular upper bound on working temperatures for the core of the ball, would put an upper bound on the number of qubits if they are packed in a ball like that. Though, I would imagine that if you instead have the inner (some number) fraction of the ball not hold qubits, and not produce heat, then that wouldn't apply. Though this would require the surface area to grow faster than (number of qubits)^{2/3}.
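
If it helps, here's a small numerical check of that conclusion. For a uniformly heated ball, the steady-state conduction solution comes out to T(center) - T(surface) = q R^2 / (6 kappa), with q the volumetric heat production and kappa the thermal conductivity; the parameter values below are placeholders I made up, just to show the n^{2/3} scaling:

  # Numerical check (with made-up parameter values) that the center-to-surface
  # temperature difference of a uniformly heated ball grows as n^(2/3) when
  # qubit density and heat per qubit are held fixed.
  import math

  heat_per_qubit_w = 1e-6        # placeholder
  qubit_density_per_m3 = 1e9     # placeholder
  kappa_w_per_m_k = 1.0          # thermal conductivity, placeholder

  def delta_t(n_qubits):
      """T(center) - T(surface) for a uniformly heated ball: q R^2 / (6 kappa)."""
      volume = n_qubits / qubit_density_per_m3
      radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
      q = heat_per_qubit_w * qubit_density_per_m3   # W per m^3
      return q * radius**2 / (6 * kappa_w_per_m_k)

  for n in (1e6, 8e6, 64e6):
      # multiplying n by 8 should multiply delta T by 8^(2/3) = 4
      print(f"n = {n:.0e} qubits: delta T = {delta_t(n):.3f} K")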


What is the right term for "the ratio scales as a power, rather than linearly"?


(super-linear) polynomial


For GP: You can also be more specific: quadratic, cubic, etc.


Not the size, but the temperature. If you have to cool to a microkelvin for a certain number of qubits to retain coherence, how low do you need to go to add one more qubit, and how much energy will that require?

My thermodynamic instinct says that the cooling effort required rises with the resolving power, which is exponential in the number of qubits. But it's just instinct, not grounded very well in science or engineering.


It's plausible to me that cooling becomes exponentially harder as you aim for lower temperatures, but I don't understand why you think you need lower temperatures for more qubits. The whole point of quantum error correction is that once you reach a constant threshold, you can use more rounds of error correction to decrease your logical error rate without decreasing your physical error rate.
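
The usual back-of-the-envelope version of that, for anyone following along: below the threshold, the surface-code logical error rate is often approximated as falling like (p/p_th)^((d+1)/2) as the code distance d grows, at a fixed physical error rate p. The numbers below are placeholders, just to show the shape of it:

  # Rough illustration of below-threshold error suppression, with placeholder
  # numbers: logical error rate ~ (p / p_th) ** ((d + 1) / 2) at fixed p.
  p_physical = 1e-3      # fixed physical error rate (placeholder)
  p_threshold = 1e-2     # assumed threshold (placeholder)

  for d in (3, 5, 7, 9, 11):
      p_logical = (p_physical / p_threshold) ** ((d + 1) / 2)
      physical_per_logical = d * d   # rough surface-code overhead
      print(f"distance {d:>2}: ~{physical_per_logical:>3} physical qubits per "
            f"logical qubit, logical error rate ~ {p_logical:.1e}")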


That's also plausible. My guess is the same logic would apply: you would need exponentially more error correction for a linear increase in qubits.


They're not talking about lower temperatures, but greater volume.


I had assumed both: greater volume and also lower temperatures. The more qubits you are trying to get to cooperate, the less thermal noise you can tolerate, so the cooler you have to make it.

Yes, more qubits also take up more space, but I hadn't thought of that as a major factor, though it certainly could be one if they are physically large! Overall I think the bigger issue is cooling.

The article mentions error correction as an alternative to increased coherence among the qubits. Perhaps what that really means is "to increase meaningful interaction among qubits, make it as cold as you can, and then error-correct until you get a meaningful answer." My intuition is that they are both a battle against entropy: the need for error correction will also increase exponentially with the number of qubits, simply because the number of possible combinations increases exponentially.

And the even larger outcome of all this is that if this intuition bears out, quantum computing will have no fundamental advantage over conventional computing — which also has an exponential cost for a linear increase in bits, for computations such as factoring large numbers.



