
The Missing Chapter

The Halting Problem

Why no computer can predict all computers — and what that tells us about the limits of knowledge itself

An extension of Jordan Ellenberg's "How Not to Be Wrong"

Chapter 56

The Question That Broke Mathematics

Here's a question that sounds like it should be easy: given a computer program, can you tell whether it will eventually stop running, or loop forever?

Let me make this concrete. Suppose I hand you this program:

x = 1
while x != 0:
    x = x - 1
print("done!")

Will it halt? Obviously yes — x starts at 1, gets decremented to 0, the loop exits, and we print "done!" Easy.

Now what about this one?

x = 7
while x != 1:
    if x % 2 == 0:
        x = x // 2
    else:
        x = 3 * x + 1
print("done!")

Try running it in your head. 7 → 22 → 11 → 34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1. It halts! But the path was wild — shooting up to 52 before crashing back down. This is the famous Collatz conjecture, and while every starting number anyone has ever tried eventually reaches 1, nobody has been able to prove it always does.1
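The mental arithmetic above is easy to check in a few lines of Python (a quick sketch; `collatz_trajectory` is my name for it, not a standard one):

```python
def collatz_trajectory(n):
    """Return the sequence of values visited on the way down to 1."""
    path = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        path.append(n)
    return path

print(collatz_trajectory(7))
# [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```

Note the irony: this function itself is only guaranteed to return if the Collatz conjecture holds for its input.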

In 1936, a twenty-three-year-old English mathematician named Alan Turing asked the ultimate version of this question: can we build a single program — let's call it HALT — that takes any program and any input, and correctly tells us whether that program will eventually stop?

If such a thing existed, it would be spectacularly useful. It would settle the Collatz conjecture, for starters. It would also tell us whether our code has infinite loops (goodbye, frozen laptops), whether a given encryption scheme can be cracked (goodbye, cybercrime), and whether a mathematical proof-checking program will ever find a proof of the Riemann Hypothesis.

Turing proved it can't exist. And the way he proved it is one of the most beautiful arguments in all of mathematics.

· · ·

Try It Yourself

Before we get to Turing's proof, let's see how good you are at detecting halting. The programs start easy, but they get harder — and some of them are programs where nobody on Earth knows the answer.

🎮 Halting Detector Game



Turing's Devastating Argument

Turing's proof is a proof by contradiction — possibly the most famous one ever constructed. It works like this.

Step 1: Assume HALT exists. Suppose, for the sake of argument, that someone has written a program called HALT(P, I) that takes any program P and any input I, and returns "halts" if P eventually stops when run on I, and "loops" if P runs forever.

Step 2: Build a troublemaker. Using HALT as a subroutine, construct a new program called EVIL. EVIL takes a single input — a program P — and does the following:

EVIL(P):
  Ask HALT(P, P) — "Does P halt when given itself as input?"
  If HALT says "halts" → loop forever
  If HALT says "loops" → halt immediately

EVIL is a contrarian. Whatever HALT predicts, EVIL does the opposite. If HALT says a program halts on itself, EVIL loops. If HALT says it loops, EVIL halts. EVIL exists to make HALT wrong.

Step 3: Feed EVIL to itself. Now ask: what happens when we run EVIL(EVIL)?

EVIL calls HALT(EVIL, EVIL) — "Does EVIL halt when given itself as input?"

If HALT says "halts," then EVIL loops forever. But wait — HALT just said EVIL halts. HALT was wrong.

If HALT says "loops," then EVIL halts immediately. But wait — HALT just said EVIL loops. HALT was wrong again.

Either way, HALT gives the wrong answer. But we assumed HALT always gives the right answer. Contradiction.2

Therefore, HALT cannot exist. No program can correctly predict the halting behavior of all possible programs. Period.
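The self-defeating structure can be sketched in Python. This is not Turing's actual construction, and the names `make_evil` and `naive_halt` are mine; the point is that *any* candidate predictor is doomed, because EVIL's behavior is, by construction, the opposite of whatever the predictor says:

```python
def make_evil(halt_predictor):
    """Build the contrarian program for any candidate halting predictor."""
    def evil(p):
        if halt_predictor(p, p) == "halts":
            while True:   # predictor said "halts", so loop forever
                pass
        # predictor said "loops", so halt immediately
    return evil

# Any candidate predictor will do; this one simply always guesses "halts".
def naive_halt(program, inp):
    return "halts"

evil = make_evil(naive_halt)
prediction = naive_halt(evil, evil)
# By construction, evil(evil) does the opposite of whatever was predicted:
actual = "loops" if prediction == "halts" else "halts"
print(prediction, actual)  # halts loops -- the predictor is wrong about EVIL
```

Swap in any predictor you like for `naive_halt`; the same two final lines show it mispredicts its own adversary.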

Walk through it yourself:

🔬 Turing's Proof — Step by Step

HALT(P, I) → "halts" or "loops"   ⇒   EVIL(P): do the opposite   ⇒   EVIL(EVIL): feed itself in   ⇒   💥 Contradiction!

EVIL uses HALT to predict itself, then does the opposite — creating an inescapable contradiction.

· · ·

Hilbert's Dream, Shattered

To understand why this matters, we need to rewind to 1928, when the great German mathematician David Hilbert posed a challenge he called the Entscheidungsproblem — the "decision problem." Hilbert wanted to know: is there a mechanical procedure that can determine the truth or falsity of any mathematical statement?3

This was the crown jewel of Hilbert's program to place all of mathematics on a firm, mechanical foundation. He wanted an algorithm for truth. Feed in a statement — "every even number greater than 2 is the sum of two primes," say — and the algorithm churns away and eventually spits out TRUE or FALSE.

In 1936, two men independently destroyed this dream. Alonzo Church at Princeton used his lambda calculus to show that no such procedure could exist. And Turing, then a young fellow of King's College, Cambridge, invented his abstract computing machines — what we now call Turing machines — and showed the same thing by proving the halting problem was unsolvable.4

The two proofs were different in method but identical in conclusion: there are well-defined mathematical questions that no algorithm can answer. Not because we haven't been clever enough, but because such an algorithm is logically impossible.

If you've read Chapter 53 on Gödel's incompleteness theorems, this should feel familiar — and it's not a coincidence. Gödel showed in 1931 that any sufficiently powerful formal system contains true statements it can't prove. Turing showed in 1936 that there are well-defined computational questions no algorithm can answer. The two results are deeply connected: they're both consequences of the strange loops that arise when systems try to analyze themselves.5

The Connection to Gödel

Gödel's first incompleteness theorem says: any consistent formal system powerful enough to express arithmetic contains true statements it cannot prove. Turing's halting theorem says: no algorithm can decide whether an arbitrary program halts. These are essentially the same wall, encountered from different directions. If you could solve the halting problem, you could decide all the statements Gödel showed were undecidable. The universe of mathematical truth is bigger than the universe of mechanical proof.

· · ·

The Diagonal Runs Deep

You might have noticed that Turing's proof has a certain shape. We built a table — programs versus inputs — and then constructed something that disagreed with every entry on the diagonal. This is Cantor's diagonal argument, recycled. Cantor used it in 1891 to show the real numbers are uncountable. Gödel used a version to build his self-referential sentence. Turing used it to build EVIL.

         P₁    P₂    P₃    P₄
P₁       H     L     H     H
P₂       H     H     L     H
P₃       L     H     L     H
P₄       L     L     H     L
EVIL     L     L     H     H     ?!

(H = halts, L = loops; the entry in row P, column Q records whether P halts on input Q.)

The diagonal entries show what each program does on itself. EVIL flips every diagonal entry — so it can't appear anywhere in the table.

The diagonal is the key. Each program on the diagonal is being asked about itself: does P₁ halt on P₁? Does P₂ halt on P₂? EVIL flips every diagonal answer, which means EVIL can't be any program in the list. But the list was supposed to contain all programs. So our assumption — that HALT exists — must be wrong.

It's the same trick every time: use self-reference to build something that can't be classified by the system. Cantor built a real number that isn't on any list. Gödel built a sentence that says "I am unprovable." Turing built a program that does the opposite of what any halting detector predicts. The diagonal argument is mathematics' favorite weapon against completeness.
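The finite version of the trick fits in a few lines (a toy sketch: real diagonalization runs over an infinite list, but the mechanics are identical):

```python
# Four "rows", each standing in for an infinite binary sequence.
rows = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
]

# Flip the diagonal: bit k of the new row disagrees with rows[k][k].
diagonal_flip = [1 - rows[k][k] for k in range(len(rows))]

print(diagonal_flip)  # [1, 0, 0, 1]
# It differs from row k in position k, so it cannot equal any row.
assert all(diagonal_flip != row for row in rows)
```

Replace bits with halting verdicts and `diagonal_flip` with EVIL, and you have Turing's proof.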

· · ·

Rice's Theorem: It Gets Worse

You might think: okay, so we can't solve the halting problem specifically. But surely we can answer other questions about programs?

In 1953, Henry Gordon Rice proved something devastating: you can't decide any non-trivial semantic property of programs.6 Not whether a program ever prints "hello." Not whether it ever accesses a particular file. Not whether two programs compute the same function. Not whether a program has a security vulnerability. Nothing about what a program does (as opposed to what it looks like) can be algorithmically decided in all cases.

The proof is slick: if you could decide any such property, you could use that to solve the halting problem — which we just showed is impossible. It's turtles all the way down.
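Here is the shape of that reduction in Python. A decider for "ever prints hello" cannot exist, so the sketch only exercises the direction we can actually run; `hello_wrapper` (my name) builds a program that prints "hello" exactly when P halts on I, which is why such a decider would solve the halting problem:

```python
import io
from contextlib import redirect_stdout

def hello_wrapper(program, inp):
    """Return a program that prints "hello" iff `program` halts on `inp`.
    Deciding "ever prints hello" for this wrapper would therefore decide
    whether `program` halts on `inp` -- which is impossible in general."""
    def wrapper():
        program(inp)      # if this loops forever, "hello" is never reached
        print("hello")
    return wrapper

# The direction we can run: a halting program makes the wrapper print.
w = hello_wrapper(lambda n: sum(range(n)), 10)
buf = io.StringIO()
with redirect_stdout(buf):
    w()
print("hello" in buf.getvalue())  # True
```

The same wrapper construction works for any non-trivial behavioral property, which is the whole content of Rice's theorem.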

Your antivirus can't catch all viruses. A virus is just a program with a certain behavior. Rice's theorem says no algorithm can perfectly classify program behavior. Every antivirus must either miss some viruses or flag some innocent programs — usually both.

Your compiler can't perfectly optimize all code. Deciding whether a piece of code is dead (never executes) is undecidable. Compilers use clever heuristics, but they'll always leave optimizations on the table.

You can't write a perfect deadlock detector. Determining whether a concurrent program can deadlock requires knowing its runtime behavior — which is, you guessed it, undecidable in general.

This is not an engineering limitation. It's a mathematical certainty. We can build better and better heuristics, but the gap between what we'd like to know about programs and what we can know is permanent and unbridgeable.

· · ·

The Busy Beaver: Uncomputable Growth

Here's a fun way to stare into the abyss of uncomputability. Consider all possible Turing machines with exactly n states. Some of them halt; some loop forever. Among those that halt, one of them runs longer than any other before stopping. The number of steps it takes is called BB(n) — the nth Busy Beaver number.7

The first few values are known:

The Busy Beaver Function
BB(1) = 1
BB(2) = 6
BB(3) = 21
BB(4) = 107
BB(5) = 47,176,870
BB(6) is currently unknown. The best known lower bound exceeds 10↑↑15, a power tower of fifteen 10s.

Look at that jump from BB(4) to BB(5). It's not just that BB grows fast — it grows faster than any computable function. Faster than exponentials. Faster than towers of exponentials. Faster than any function you could ever write a program to compute. If you could compute BB, you could solve the halting problem — just run every n-state machine for BB(n) steps, and if it hasn't halted by then, it never will.
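Machines this small can be simulated directly. Below is a sketch of the standard 2-state champion, written as a plain transition table (the encoding is mine; the machine is the classic one from Radó's analysis). It halts after six steps with four 1s on the tape:

```python
# The 2-state busy beaver champion.
# (state, symbol) -> (write, move, next_state); state "H" means halt.
BB2 = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

def run(machine, max_steps=10_000):
    """Simulate a Turing machine on a blank tape; return (ones, steps)."""
    tape, pos, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if state == "H":
            return sum(tape.values()), step
    return None  # didn't halt within the step budget

print(run(BB2))  # (4, 6): four 1s written, halting after six steps
```

For n = 5 the bbchallenge project had to do essentially this for every one of the millions of 5-state machines, plus prove non-halting for the ones that never stop; the `max_steps` budget is exactly the part no finite bound can fix in general.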

BB(5) = 47,176,870 was finally determined in 2024 by the collaborative bbchallenge project, which verified every single 5-state Turing machine — a heroic computational effort.8 BB(6) remains unknown, and there's a real sense in which it might always remain unknown: the question is intimately entangled with open problems in mathematics.


The Busy Beaver function grows so fast that even this "not to scale" chart is laughably inadequate. BB(5) is 47 million; BB(6) is beyond astronomical.

· · ·

What the Halting Problem Really Tells Us

There's a tendency, when people first learn about the halting problem, to think it's some obscure technicality. A weird edge case that only matters to theoretical computer scientists. It's not.

The halting problem is the computational equivalent of Gödel's incompleteness theorem. It tells us that the universe of truth is fundamentally larger than the universe of mechanical verification. There are facts about programs — simple, well-defined, yes-or-no facts — that no algorithm can determine.

And this has consequences far beyond computer science. When economists build models of rational agents, they implicitly assume those agents can compute certain things. When biologists simulate protein folding, they trust their simulations terminate. When engineers verify safety-critical software for planes and nuclear reactors, they run up against the halting problem every day — they just use conservative approximations and pray.

The halting problem draws a line. On one side: everything a computer can do, which is already unimaginably vast. On the other side: the things that are well-defined, perfectly meaningful, but forever beyond the reach of any machine.

Turing, at twenty-three, showed us where that line is. And then, because he was Turing, he went on to help win a war, invent modern computing, and lay the foundations of artificial intelligence. But the halting problem might be his most profound legacy. It's the moment mathematics proved — with mathematical certainty — that there are things mathematics can't do.

The universe of truth is bigger than the universe of proof. And that's not a bug. It might be the most important feature of reality there is.

Notes & References

  1. Lagarias, J. C. (2010). The Ultimate Challenge: The 3x+1 Problem. American Mathematical Society. The Collatz conjecture has been verified for all starting values up to approximately 2.95 × 10²⁰, but a proof remains elusive. Paul Erdős famously said, "Mathematics may not be ready for such problems."
  2. Turing, A. M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem." Proceedings of the London Mathematical Society, s2-42(1), 230–265. This is the foundational paper that introduced Turing machines and proved the halting problem undecidable.
  3. Hilbert, D. & Ackermann, W. (1928). Grundzüge der theoretischen Logik. Springer. The Entscheidungsproblem was formally posed here, though Hilbert had been discussing variants of it since at least 1900.
  4. Church, A. (1936). "An Unsolvable Problem of Elementary Number Theory." American Journal of Mathematics, 58(2), 345–363. Church's paper appeared slightly earlier than Turing's, but Turing's formulation using abstract machines proved far more influential for computer science.
  5. For a beautiful exposition of the connections between Gödel, Turing, and self-reference, see Hofstadter, D. R. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books.
  6. Rice, H. G. (1953). "Classes of Recursively Enumerable Sets and Their Decision Problems." Transactions of the American Mathematical Society, 74(2), 358–366.
  7. Radó, T. (1962). "On Non-Computable Functions." Bell System Technical Journal, 41(3), 877–884. Radó introduced the Busy Beaver function and proved it was not computable.
  8. The bbchallenge project (2024). "BB(5) = 47,176,870." See bbchallenge.org for the full verification details and community effort.