They were illegitimate instances of Shor's algorithm according to [1]:
> Of course this should not be considered a serious demonstration of Shor’s algorithm. It does, however, illustrate the danger in “compiled” demonstrations of Shor’s algorithm. To varying degrees, all previous factorization experiments have benefited from this artifice. While there is no objection to having a classical compiler help design a quantum circuit (indeed, probably all quantum
> computers will function in this way), it is not legitimate for a compiler to know the answer to the problem being solved. To even call such a procedure compilation is an abuse of language.
Quantum error correction produces logical qubits that have a smaller, but still nonzero, error rate. It therefore needs to be applied repeatedly to reach the error rate required for meaningful results. For Shor's algorithm, the error rate needs to decrease exponentially with the number of qubits. That is why, even though IBM has hundred-qubit quantum computers, they have still only managed to use five qubits to factorize the number 21.
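A minimal numeric sketch of the "applied repeatedly" point, assuming the standard concatenation-style recursion $p_{k+1} \approx C p_k^2$; the prefactor `C` and the physical error rate `p_phys` below are illustrative assumptions, not figures for any real device or code:

```python
# Sketch: repeated (concatenated) error correction suppresses the logical
# error rate double-exponentially, provided the physical rate is below the
# pseudo-threshold 1/C. All constants here are illustrative assumptions.

C = 100.0       # assumed combinatorial prefactor of the code
p_phys = 1e-3   # assumed physical error rate, below the pseudo-threshold 1/C = 1e-2

p = p_phys
for level in range(5):
    print(f"concatenation level {level}: logical error rate ~ {p:.3e}")
    p = C * p * p   # each extra level of encoding roughly squares the error
```

Below the pseudo-threshold the logical error rate falls off very fast with each added level; above it, adding levels only makes things worse.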
I have forgotten the source where I stumbled on the requirement on the qubit error rate needed to run Shor's algorithm. I would greatly appreciate it if someone could add a key reference here.
A recent result considers a generic error model and a particular subclass of prime numbers [1]. The important result is $\epsilon > c\,n^{-1/3}$, where $\epsilon$ is the qubit error rate and $n$ is the number of bits of the factorised number. Since $n = 2^N$, where $N$ is the number of qubits, this becomes:
$$\epsilon < \frac{c}{2^{N/3}}$$
Therefore, the error rate must decrease exponentially with the number of logical qubits used in the computation.
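To make the scaling concrete, here is a small sketch that evaluates the bound for a few qubit counts; the constant $c$ is unknown, so $c = 1$ is a purely illustrative assumption:

```python
# Sketch of the bound derived above: error rate < c / 2**(N/3).
# The constant c is not known here; c = 1 is an illustrative assumption.

c = 1.0
for N in (5, 10, 20, 50, 100):            # number of qubits
    bound = c / 2 ** (N / 3)
    print(f"N = {N:3d} qubits -> required error rate < {bound:.2e}")
```

With $c = 1$, already at a hundred qubits the bound is around $10^{-10}$, far below current physical error rates, which is the point of the argument above.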
There is a threshold that physical error rates need to get below for quantum error correction to work, and we are approaching it fairly steadily on a log scale.
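For context on the threshold claim, a minimal sketch assuming the commonly quoted surface-code heuristic $p_L \approx A\,(p/p_{\mathrm{th}})^{(d+1)/2}$; the prefactor $A$, the threshold $p_{\mathrm{th}}$, and the sample error rates are assumptions for illustration only:

```python
# Sketch of threshold behaviour, assuming the heuristic surface-code scaling
# p_L ~ A * (p / p_th) ** ((d + 1) / 2). All constants are illustrative.

A = 0.1       # assumed prefactor
p_th = 1e-2   # assumed threshold (~1%)

for p in (2e-2, 5e-3, 1e-3):         # physical error rates around the threshold
    for d in (3, 11, 25):            # code distances
        p_L = A * (p / p_th) ** ((d + 1) / 2)
        print(f"p = {p:.0e}, distance {d:2d}: logical error rate ~ {p_L:.2e}")
    print()
```

Below the assumed threshold, increasing the code distance drives the logical error rate down rapidly; above it, adding qubits only makes the logical qubit worse (values above 1 simply mean the encoding is not helping at all).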