You Can't Vibe Code a Quantum Computer
- Belinda Anderton
You cannot open Instagram these days without someone offering you the best prompt in the world, ever, guaranteed, for a product they built over the weekend using AI and a feeling. There is something genuinely comic about the collision between that energy and the threshold theorem in quantum error correction.
This is not a piece about AI being overhyped. It is, however, a convenient hook on which to hang this news. The hype, for once, is roughly in the right postcode.
John Preskill, who coined the term "quantum supremacy" and has spent decades working on fault-tolerant computation, is quoted in the announcement saying he has never been this close. Preskill does not do hype. He does mathematics. The distinction is relevant here.
The failure modes of quantum systems are probabilistic, correlated, and decoherence-dependent. An error rate of p = 0.001 sounds reassuring until you remember what the threshold theorem says happens to that rate under composition across millions of operations. The error budget is not a backlog item. It is a hard physical constraint, and missing it doesn't produce a bug report. It produces a machine that looks like it's computing while generating structured noise.
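A back-of-the-envelope version of that composition, in plain Python, assuming independent errors, which real hardware is not kind enough to guarantee:

```python
import math

# Probability that an uncorrected computation survives n gates, assuming
# independent per-gate errors at rate p. (Illustrative simplification:
# real quantum errors are correlated, which is generally worse.)
p = 0.001

for n_gates in (1_000, 100_000, 1_000_000):
    log10_survive = n_gates * math.log10(1 - p)  # log space: avoids underflow
    print(f"{n_gates:>9,} gates -> P(no error) = 10^{log10_survive:.1f}")

# 1,000 gates survive with probability ~0.37; a million gates with
# probability ~10^-434. Structured noise, as promised.
```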
This is the point at which vibe coding, as a philosophy, runs out of road. It relies on feedback loops: run the code, observe the output, adjust. Quantum error correction does not offer this. You cannot observe a qubit state mid-computation without collapsing it. Measurement is destructive. By the time you see the output, the intermediate states that would have told you what went wrong are gone. You are debugging a black box that destroys its own evidence. This is less a software engineering problem and more a philosophical condition that would have interested Heisenberg, who was, notably, not available on Fiverr. But back to the threshold theorem.
The theorem states that if the physical error rate p falls below a critical value p_c, then arbitrarily long computations can be performed with only polynomial overhead:
p_err(L) <= (p / p_c)^(2^L)
where L is the level of concatenation. Below threshold, errors decrease doubly exponentially with each successive layer of encoding. Above it, they compound. The physics does not take pull requests. It does not have a Discord. Nobody is shipping a fix on Friday.
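Here is that bound made concrete, with an assumed threshold of p_c = 0.01; real thresholds depend on the code and the noise model, so the value is purely illustrative:

```python
# Logical error rate under L levels of concatenation, per the bound above:
# p_err(L) <= (p / p_c) ** (2 ** L). The threshold p_c = 0.01 is an assumed,
# illustrative value; real thresholds depend on the code and noise model.
p_c = 0.01

for p in (0.001, 0.02):  # one rate below threshold, one above
    side = "below" if p < p_c else "above"
    print(f"physical error rate p = {p} ({side} threshold)")
    for L in range(1, 5):
        print(f"  L = {L}: p_err <= {(p / p_c) ** (2 ** L):.3e}")

# Below threshold: 1e-2, 1e-4, 1e-8, 1e-16. Above threshold the "bound"
# explodes past 1: each layer of encoding makes things worse.
```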
I am a mathematician by disposition and a chronic overthinker by nature. Adding a headshot to LinkedIn has me briefly catastrophising about sex trafficking and the deepest corners of the internet, which is less paranoia than it is twenty-five years of knowing exactly how the infrastructure works and what it gets used for. I have been around. I know what computing has brought the world, and not all of it has been worth celebrating. I carry that knowledge the way anyone does who has watched something genuinely transformative also become genuinely terrible in the same lifetime.
I am also, and have always been, fascinated by mechanical things. The engineering inside ordinary objects. The way complexity hides behind simplicity and keeps working anyway. I love the mathematics behind the fax machine specifically: Modified Huffman coding, the V.29 modulation standard, the way a document becomes a frequency, travels down a copper wire as sound, and reassembles itself on the other side as marks on thermal paper. Nobody who ever sent a fax was thinking about any of this. It worked anyway. This is the correct relationship between most people and most technology, and there is nothing wrong with it.
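For the terminally curious, the first stage of that pipeline is small enough to sketch. Modified Huffman run-length encodes each scan line as alternating white and black runs, then maps the run lengths onto fixed Huffman code tables; the sketch below covers only the run-length stage, with the code tables omitted:

```python
def run_lengths(scan_line):
    """Run-length encode one fax scan line (0 = white, 1 = black).

    Modified Huffman assumes each line starts with a white run, so a line
    beginning in black is encoded as a zero-length white run first. The
    real standard then maps each run length to a fixed Huffman codeword;
    that table lookup is omitted here.
    """
    runs, current, length = [], 0, 0
    for pixel in scan_line:
        if pixel == current:
            length += 1
        else:
            runs.append(length)
            current, length = pixel, 1
    runs.append(length)
    return runs

print(run_lengths([0, 0, 0, 1, 1, 0, 0, 0, 0, 1]))  # [3, 2, 4, 1]
```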
Vibe coding is the fax machine made general. Describe what you want, something functional emerges, the abstraction holds well enough for the purpose. LLMs have genuinely changed the texture of knowledge work in ways that are still difficult to fully account for, and the productivity gains are real enough that anyone performing scepticism about them for aesthetic reasons is simply not paying attention. For a very wide range of problems, vibe coding is not just acceptable but optimal.
Which is why, when something comes along that makes me set all of that aside and feel the thing I felt when I named my cat Quantum Europa Tuesday in the eighties because Thomas Dolby was blinding people with science and Scott Bakula was leaping through time and quantum mechanics felt like the most exciting frontier a person could stand at, I pay attention.
A working implementation of Shor's algorithm, the quantum procedure that renders RSA and elliptic curve cryptography computationally trivial, requires on the order of 1,000 logical qubits executing millions of gate operations at fault-tolerant error rates. The logical qubit is not the physical qubit. Under conventional surface code architectures, where each physical qubit connects only to its nearest neighbours on a two-dimensional grid, the overhead is approximately:
n_physical = d^2 * n_logical
where d is the code distance required to achieve the target error rate. At d ~ 30, this gives roughly 900 physical qubits per logical qubit, and a million-qubit machine as the minimum viable product. That figure has sat in the background of quantum computing investment theses for a decade, usually rendered in a font size calibrated to discourage further questions.
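Plugging in the numbers from the text (a minimal sketch; practical surface-code layouts also spend roughly as many qubits again on syndrome measurement, which this approximation ignores):

```python
# Surface-code overhead, per the approximation above:
# n_physical = d^2 * n_logical.
d = 30             # code distance needed for the target logical error rate
n_logical = 1_000  # logical qubits for a cryptographically relevant Shor run

n_physical = d**2 * n_logical
print(f"{n_physical:,} physical qubits")  # 900,000: the million-qubit MVP
```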
The Caltech and Oratomic result, published this March, changes the denominator. Their new error-correction architecture, applied to neutral-atom arrays, encodes logical qubits using as few as five physical qubits each, rather than the thousand required by conventional techniques, reducing the total qubit count for a fault-tolerant machine to somewhere between 10,000 and 20,000. The mechanism is what the researchers call a high-rate code, made possible by a structural property unique to neutral atom systems: optical tweezers can shuttle individual atoms across the full extent of the array and entangle them with distant partners, enabling connectivity that surface codes, constrained to nearest-neighbour interactions, simply cannot achieve.
In surface codes, the encoding rate k/n, where k is the number of logical qubits and n the number of physical qubits, is asymptotically poor: k/n → 0 as n grows. High-rate codes break this. Each physical qubit participates in multiple logical qubits simultaneously, so k/n is meaningfully non-zero. The efficiency gain is not incremental. The reduction in qubit count is up to two orders of magnitude. Two orders of magnitude is not a sprint velocity improvement. It is a different problem.
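The same arithmetic, rerun with the high-rate figures quoted above (a sketch: the naive 5:1 ratio ignores the extra overhead behind the article's 10,000 to 20,000 whole-machine range):

```python
# Encoding rate k/n: logical qubits per physical qubit. The 5:1 figure is
# the paper's best case as quoted above; the naive total below ignores the
# additional overhead that pushes the whole-machine estimate to 10,000-20,000.
n_logical = 1_000

surface_total = 30**2 * n_logical  # d = 30 surface code, from the section above
high_rate_total = 5 * n_logical    # best-case high-rate encoding

print(f"surface code:   {surface_total:>7,} qubits, "
      f"k/n = {n_logical / surface_total:.4f}")
print(f"high-rate code: {high_rate_total:>7,} qubits, "
      f"k/n = {n_logical / high_rate_total:.2f}")
print(f"reduction: {surface_total / high_rate_total:.0f}x")  # 180x
```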
This matters beyond the hardware roadmap, and not in the good way if you happen to rely on the internet for financial transactions, which is everyone. RSA-2048 derives its security from the computational hardness of factoring the product of two large primes. Shor's algorithm dispatches this in polynomial time, O((log N)^3), against the best-known classical approach, the general number field sieve, which runs in sub-exponential time, roughly exp(c (log N)^(1/3) (log log N)^(2/3)).
The asymptotic gap between these two expressions is the entire security model of the internet. It has been sitting there since 1994, when Peter Shor published the algorithm, comfortable in the assumption that the hardware to run it was safely theoretical. The Caltech team notes explicitly that their findings bring forward the moment at which this changes, and they emphasise the urgency of migrating to post-quantum cryptographic standards.
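To put numbers on that gap, a rough sketch at RSA-2048 scale, dropping constants and low-order terms; c = (64/9)^(1/3) ≈ 1.92 is the standard GNFS exponent:

```python
import math

# Rough operation counts for factoring an n-bit modulus N, constants dropped.
# Shor: O((log N)^3). GNFS: exp(c * (ln N)^(1/3) * (ln ln N)^(2/3)), with
# c = (64/9)^(1/3), the standard GNFS exponent. The units differ (quantum
# gates versus classical operations), but the gap dwarfs that distinction.
bits = 2048
ln_N = bits * math.log(2)  # ln N for a 2048-bit modulus
c = (64 / 9) ** (1 / 3)

shor_log10 = 3 * math.log10(ln_N)
gnfs_log10 = c * ln_N ** (1 / 3) * math.log(ln_N) ** (2 / 3) / math.log(10)

print(f"Shor: ~10^{shor_log10:.1f} operations")  # ~10^9.5
print(f"GNFS: ~10^{gnfs_log10:.1f} operations")  # ~10^35
```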
This is why the Caltech result required years of theoretical work, multiple co-authors including the inventor of quantum error correction and the Feynman Professor of Theoretical Physics, and a purpose-built company to commercialise it. Significant engineering challenges remain to combine these capabilities into scalable systems. The path from 10,000-qubit theory to functioning hardware is not a matter of iteration speed. It is a matter of whether you understand, precisely and formally, what you are building, and whether that understanding extends to the physical layer, where the abstraction runs out and the atoms are doing whatever atoms do regardless of your product vision.
The broader industry's relationship with quantum computing has been characterised by a peculiar inversion: the business case is treated as the rigorous part, and the physics is treated as a detail. Investor decks model quantum advantage as though it ships on a roadmap, after design, after engineering, after QA. It does not. It is a physical phenomenon that either satisfies the threshold theorem or doesn't, and the theorem has not read the deck.
The Caltech result is significant precisely because it shifts a constraint that looked fixed. Bringing the required qubit count down by two orders of magnitude is not an optimisation. It is a theoretical result that changes what the hardware problem actually is. But it does not change the fundamental character of the problem, which is that quantum computation is one of the few domains where physical law is directly and non-negotiably load-bearing on the software architecture.
Et c'est ainsi. And so it is, and so it was, long before anyone thought to write a pitch deck about it. The universe has been running fault-tolerant quantum computation since approximately the beginning, in the behaviour of every atom that has ever existed, indifferent to whether we had a sufficient theoretical framework to notice. We are not teaching nature to compute. We are, very slowly and with considerable difficulty, learning to read what it has always been doing. That requires formalism, not feeling. The qubit count just got smaller. The physics, as ever, was already there.
What the fuck is a qubit anyway?
A qubit is not a better bit. This is the most common misunderstanding and it matters here. A classical bit is a switch: it is either 0 or 1. A qubit is a quantum system, typically a single atom or photon, that exists in a superposition of both states simultaneously, described not by a single value but by a probability amplitude:
|ψ⟩ = α|0⟩ + β|1⟩
where α and β are complex numbers satisfying |α|^2 + |β|^2 = 1. The qubit does not secretly have a value that we haven't looked at yet. It genuinely has no definite value until measured, at which point it collapses to 0 with probability |α|^2 or 1 with probability |β|^2, and the superposition is gone. This is not a metaphor. It is what the mathematics says and what experiment has confirmed, repeatedly and to extraordinary precision, for the better part of a century.
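A minimal simulation of that collapse in plain NumPy, a sketch of the Born rule statistics rather than of any real hardware:

```python
import numpy as np

rng = np.random.default_rng(42)

# A qubit |psi> = alpha|0> + beta|1> with |alpha|^2 + |beta|^2 = 1.
# Here an equal superposition: alpha = beta = 1/sqrt(2).
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

def measure(alpha, beta):
    """Born rule: collapse to 0 with probability |alpha|^2, else 1.

    The function returns only a classical bit; the amplitudes are
    gone, which is the whole problem.
    """
    return 0 if rng.random() < abs(alpha) ** 2 else 1

# Each measurement needs a freshly prepared state; you cannot re-measure
# the same qubit to learn more, and the no-cloning theorem forbids copying
# an unknown state, so "fresh" means re-running the whole preparation.
counts = [measure(alpha, beta) for _ in range(10_000)]
print(f"P(1) ~ {sum(counts) / len(counts):.3f}")  # ~0.5, as |beta|^2 predicts
```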
The reason this is simultaneously a physics problem and an engineering problem is that superposition is fantastically fragile. Any interaction with the environment, a stray photon, a vibration, a fluctuation in the electromagnetic field, causes decoherence: the quantum state leaks into the surroundings and the superposition collapses before you wanted it to. Building a quantum computer is therefore an exercise in maintaining a physical system in a state that nature is constantly and aggressively trying to destroy, while also performing precise operations on it, while also not looking at it, because looking destroys it.
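The standard way to put a number on that fragility is a coherence time. Under a pure-dephasing model, the off-diagonal element of the qubit's density matrix, which is the part that encodes the superposition, decays as exp(-t/T2). A sketch with assumed, illustrative timescales:

```python
import math

# Pure-dephasing model: the superposition's off-diagonal amplitude decays
# as exp(-t / T2). Both values below are assumed and illustrative only;
# real coherence and gate times vary enormously by platform.
T2 = 1e-3         # coherence time, seconds
gate_time = 1e-6  # time per gate, seconds

for n_gates in (10, 100, 1_000, 10_000):
    coherence = math.exp(-n_gates * gate_time / T2)
    print(f"{n_gates:>6,} gates: {coherence:.3g} of the superposition left")

# 10 gates: 0.99. 10,000 gates: ~5e-5. The clock is always running.
```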
The engineering challenge is not incidental to the physics. It is the physics, expressed in laboratory conditions. You cannot separate the two the way you can, say, write software without understanding semiconductor fabrication. At the quantum layer, the hardware and the theory are the same conversation. The Caltech result is exciting precisely because it reduces how perfectly you need to control the system in order to correct for the moments when control fails. It does not make the physics easier. It makes the engineering more tractable given the physics as it actually is, which is a meaningful and genuinely difficult thing to have done.
Quantum Europa Tuesday, had she understood any of this, would have been entirely unbothered. She was that kind of cat.