
Quantum error correction produces logical qubits with a smaller, but still nonzero, error rate. It therefore has to be applied repeatedly to reach the error rate needed for meaningful results. For Shor's algorithm the error rate needs to decrease exponentially with the number of qubits. Thus, even though IBM has hundred-qubit QCs, they have still only managed to use five qubits to factor the number 21.
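
As a rough illustration of why the correction has to be scaled up repeatedly: the usual surface-code heuristic is $\epsilon_L \approx A (\epsilon_p/\epsilon_{th})^{(d+1)/2}$ for code distance d. A minimal Python sketch of that scaling, where A, the threshold, and the physical error rate are purely illustrative assumptions, not IBM's numbers:

    # Why QEC must be applied at ever larger code distance d to push the
    # logical error rate down. Uses the common surface-code heuristic
    #   eps_L ~ A * (eps_p / eps_th) ** ((d + 1) / 2).
    # All constants are illustrative assumptions, not measured values.
    A = 0.1        # assumed prefactor
    EPS_TH = 1e-2  # assumed threshold error rate
    EPS_P = 1e-3   # assumed physical error rate

    def logical_error_rate(d):
        """Heuristic logical error rate of a distance-d surface code."""
        return A * (EPS_P / EPS_TH) ** ((d + 1) / 2)

    def distance_for_target(target):
        """Smallest odd distance whose heuristic rate meets `target`."""
        d = 3
        while logical_error_rate(d) > target:
            d += 2  # surface-code distances are odd
        return d

    for target in (1e-6, 1e-9, 1e-12):
        d = distance_for_target(target)
        print(f"target {target:.0e}: d = {d}, ~{d * d} data qubits per logical qubit")

Each extra factor of ten in target fidelity costs a few more units of code distance, i.e. quadratically more physical qubits per logical qubit.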



>For Shor's algorithm the error rate needs to decrease exponentially with the number of qubits

I have never seen a result like that. Do you have a citation?


I have forgotten the source where I stumbled on this requirement on the qubit error rate for running Shor's algorithm. I would greatly appreciate it if someone could post a key reference here.

A recent result considers a generic error model and a subclass of particular kinds of prime numbers [1]. The key result is that factoring fails whenever $\epsilon > c\,n^{-1/3}$, where $\epsilon$ is the qubit error rate and $n$ is the number being factorised. Since an $N$-qubit register represents numbers up to $n = 2^N$, the bound becomes

$\epsilon < c \cdot 2^{-N/3}$

Therefore, the error rate must decrease exponentially with the number of logical qubits with which the computation is made.
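
To make the scaling concrete, a quick numerical check of the claimed bound (taking the constant c = 1 purely for illustration):

    # Evaluate the claimed bound eps < c * 2**(-N / 3) for a few register
    # sizes N, taking c = 1 purely for illustration.
    for N in (5, 100, 2048):
        bound = 2.0 ** (-N / 3)
        print(f"N = {N:>4} qubits: error rate must be below {bound:.3e}")

Already at N = 100 the bound is below 1e-10, which would be the claimed reason hundred-qubit machines cannot run Shor at scale.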

[1]: https://arxiv.org/pdf/2306.10072



