The Empire of Exactitude
In Jorge Luis Borges's short story "On Exactitude in Science," the cartographers of an empire grow dissatisfied with maps that merely approximate the territory. They want precision. They demand perfection. And so they create a map at the scale of one to one: every road, every field, every stone reproduced at full size. The map covers the entire empire. It is, of course, perfectly useless. Subsequent generations, less enamored with cartography, let it decay into tatters, inhabited only by beggars and animals.1
Borges wrote this as a joke, but like all good jokes, it contains a serious point. There is a tension—a necessary tension—between fidelity and utility. A map that is perfectly accurate is perfectly useless because it is the territory itself, and you cannot fold the territory into your pocket. You cannot consult it. You cannot stand above it and see the patterns. The very abstraction that makes a map useful also makes it wrong.
This is not a failure of cartography. It is the condition of all knowledge.
The Territory Bites Back
In the 1930s, the philosopher and engineer Alfred Korzybski gave us the phrase that should be tattooed on every statistician's forearm: "The map is not the territory."2 It sounds obvious when stated plainly, but we forget it constantly. We look at the unemployment rate and think we see the economy. We look at test scores and think we see education. We look at a model's R² and think we see truth.
The statistician George Box offered a complementary insight: "All models are wrong, but some are useful."3 This is often quoted as a kind of defensive caveat—a statistician's apology for the imperfection of their craft. But Box meant it as something more profound. The wrongness of models is not a bug to be eliminated. It is the feature that makes them possible. A model that is not wrong in some particular way is not a model at all; it is a replica, and replicas do not generalize.
Every chapter in this collection has been a map. When we explored the Kelly Criterion, we mapped the territory of optimal betting onto a simple logarithmic formula. When we examined the Inspection Paradox, we mapped the experience of waiting onto probability distributions. When we analyzed game theory, we mapped the infinite complexity of human interaction onto payoff matrices. Each map was useful. Each map was wrong.
The question is not whether to use maps—we must, for we cannot navigate the territory itself in its full, unabstracted complexity. The question is which map to use, when to trust it, and when to crumple it up and look at the ground.
Model vs. Reality
Consider the problem of fitting a curve to data. We observe some points—measurements, perhaps, or historical observations—and we want to find a function that describes the underlying pattern. But what kind of function? A line? A parabola? A 20th-degree polynomial that threads through every point like a drunkard stumbling home?
The interactive below demonstrates the fundamental trade-off. Click anywhere to add data points (or use the "Generate Data" button). Then try fitting models of increasing complexity. Watch what happens.
Notice the progression. The linear model is too simple—it cannot capture the curvature of the underlying relationship. This is underfitting: high bias, systematic error that won't go away with more data. The quadratic or cubic model may do better, capturing the essential pattern without chasing noise.
But then we go too far. The 15th-degree polynomial fits your specific data points perfectly, snaking through every observation. And yet it is catastrophically wrong about the underlying pattern. This is overfitting: high variance, extreme sensitivity to the particular sample, poor generalization to new data.
The perfectly fitted model is Borges's map at scale one to one. It reproduces every bump and wrinkle of the observed data—which includes not just the signal we want, but the noise we don't. And in reproducing everything, it explains nothing.
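The trade-off can be seen in a few lines of code. The sketch below is a minimal illustration, not the chapter's interactive: the cubic "truth," the noise level, and the sample sizes are all illustrative assumptions. It fits polynomials of degree 1, 3, and 15 to twenty noisy points and compares error on the training data against error on fresh data drawn from the same underlying curve.

```python
# A minimal sketch of underfitting vs. overfitting. The cubic "truth",
# noise level, and sample sizes are illustrative assumptions.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

def make_data(n):
    """Draw n points from a cubic signal plus Gaussian noise."""
    x = np.sort(rng.uniform(-1, 1, n))
    signal = x**3 - 0.5 * x           # the "territory": a simple cubic
    return x, signal + rng.normal(0, 0.1, n)

x_train, y_train = make_data(20)      # the data we fit
x_test, y_test = make_data(200)       # fresh draws from the same territory

train_err, test_err = {}, {}
for degree in (1, 3, 15):
    fit = Polynomial.fit(x_train, y_train, degree)
    train_err[degree] = np.mean((fit(x_train) - y_train) ** 2)
    test_err[degree] = np.mean((fit(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err[degree]:.4f}, "
          f"test MSE {test_err[degree]:.4f}")
```

Training error can only fall as the degree grows, because each higher-degree model contains the lower ones. Held-out error is the honest judge: it typically falls from degree 1 to 3, then rises again as the 15th-degree fit starts reproducing the noise.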
When Models Attack
History is littered with the wreckage of models that forgot they were maps.
In the years leading up to the 2008 financial crisis, Wall Street used a mathematical model called the Gaussian copula to price mortgage-backed securities and the credit default swaps that insured them.4 The model was elegant. It was tractable. It allowed traders to reduce the complex dependencies between thousands of mortgage defaults to a single correlation parameter. It made the incomprehensible computable.
It was also catastrophically wrong. The Gaussian copula assumed that the dependencies between mortgage defaults could be summarized by stable, knowable correlations, as if defaults were dice rolls whose joint behavior never changed. But mortgages don't default independently, and their correlations don't hold still. When housing prices fall in Florida, they also fall in Arizona and Nevada. When one borrower defaults, their neighbor's house loses value, making default more likely for the neighbor. The correlations weren't stable parameters; they were emergent properties of a system that changed under stress.
The model didn't just fail to predict the crisis. It helped cause it. By making mortgage-backed securities seem safer than they were, the Gaussian copula encouraged more lending, more securitization, more leverage—until the system collapsed under the weight of its own mathematical self-confidence.
During the COVID-19 pandemic, epidemiological models faced the opposite problem. Early models predicted catastrophic death tolls that (thankfully, due to behavioral changes) never materialized.5 Critics pounced: the models were wrong! But this misunderstands the purpose of models in a crisis. The early COVID models weren't prophecies; they were warnings. They said: If we do nothing, this is what happens. They were maps of a territory that changed because people looked at the map.
Both failures—the financial crisis and the COVID controversy—stem from the same confusion: mistaking the map for the territory. In 2008, traders treated the Gaussian copula as if it were the risk itself, rather than a simplified representation of it. In 2020, critics treated epidemiological models as failed prophecies, rather than contingent scenarios.
Goodhart's Law
There is a special danger that arises when maps become targets. The economist Charles Goodhart articulated it in what has become known as Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."6
The logic is elegant in its perversity. A measure is useful because it correlates with something we care about but cannot directly observe. Test scores correlate with learning. GDP correlates with prosperity. Citation counts correlate with scientific impact. But these are correlations, not identities. They work because people aren't trying to optimize them directly.
Once a measure becomes a target—once test scores determine school funding, once GDP growth becomes the metric of political success—people optimize for the measure. And because the measure was only ever an approximation of the real goal, optimization of the measure often comes at the expense of the goal.
Consider a school that wants to improve education. It cannot observe "learning" directly, so it uses test scores as a proxy. This works reasonably well at first. But once test scores become the target—once teachers are evaluated based on them, once funding depends on them—the school starts teaching to the test. Test scores rise. Actual learning may stagnate or decline. The map has become the territory, and the territory has suffered for it.
Drag the slider above to increase "teaching to the test" intensity. Watch what happens to the relationship between test scores and actual learning.
Goodhart's Law explains why so many well-intentioned metrics go wrong. Universities want to improve education, so they optimize for student satisfaction scores—resulting in grade inflation and diminished rigor. Police departments want to reduce crime, so they optimize for reported crime rates—resulting in under-reporting and reclassification. Social media platforms want to show users engaging content, so they optimize for click-through rates—resulting in clickbait and outrage.7
In each case, the metric was a reasonable map of the territory—until it became a target. Then the map was gamed, and the territory suffered.
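The decoupling Goodhart describes can be captured in a deliberately crude toy model. Everything below is an illustrative assumption, not an estimate from real schools: a fixed effort budget is split between genuine instruction and test-specific preparation, learning comes only from instruction, but the test score is inflated directly by prep.

```python
# A deliberately crude toy model of Goodhart's Law. All functional forms
# and constants are illustrative assumptions, not estimates from data.

def outcomes(prep_fraction):
    """Return (true_learning, test_score) for a prep fraction in [0, 1]."""
    instruction = 1.0 - prep_fraction
    learning = instruction                   # learning comes only from instruction
    # The score tracks learning, but test prep inflates it directly:
    score = 0.6 * learning + 0.9 * prep_fraction
    return learning, score

for prep in (0.0, 0.5, 1.0):
    learning, score = outcomes(prep)
    print(f"prep={prep:.1f}: learning={learning:.2f}, score={score:.2f}")
```

Anyone rewarded on the score alone will push prep toward 1, where the score peaks and learning hits zero. The proxy was a decent map of learning at prep = 0; the act of targeting it is what broke it.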
The Art of Mapmaking
So what are we to do? Abandon models? Reject measurement? Return to gut instinct and folk wisdom?
No. Mathematics remains our most powerful mapmaking tool. The problem is not that we use models; it is that we forget their limitations. We fall in love with our abstractions. We mistake the elegance of the formula for the truth of the world.
The chapters of this collection have all been exercises in mapmaking. When we explored the Kelly Criterion, we mapped the territory of optimal betting onto a logarithmic utility function. When we examined Benford's Law, we mapped the distribution of leading digits onto a logarithmic curve. When we analyzed the Inspection Paradox, we mapped the experience of waiting onto the mathematics of length-biased sampling.
Each of these maps was useful in its domain. And each was wrong in ways that mattered outside that domain. The Kelly Criterion assumes you know your edge exactly and can weather any drawdown—assumptions that fail for mortal investors with finite bankrolls and finite nerves. Benford's Law assumes scale-invariance—a property that holds for naturally occurring numbers but not for assigned numbers like prices. The Inspection Paradox assumes a stationary process—a condition that fails when bus schedules adapt to passenger loads.
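The Kelly fragility mentioned above can be made concrete. For a bet paying b-to-1 that wins with probability p, the Kelly fraction is f* = p − (1 − p)/b, and the expected log growth per bet at stake f is g(f) = p·ln(1 + fb) + (1 − p)·ln(1 − f). The specific numbers below are illustrative assumptions: a true 55% edge on an even-money bet, overconfidently estimated as 65%.

```python
# What happens when the Kelly map misreads the territory: an assumed
# example with a true 55% win probability estimated as 65%.
import math

def kelly_fraction(p, b):
    """Kelly fraction for a bet winning with probability p, paying b-to-1."""
    return p - (1 - p) / b

def log_growth(f, p, b):
    """Expected log growth per bet when staking fraction f of the bankroll."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

p_true, b = 0.55, 1.0           # the territory: a modest edge, even money
p_believed = 0.65               # the map: an overconfident estimate

f_opt = kelly_fraction(p_true, b)        # 0.10, optimal for the real edge
f_over = kelly_fraction(p_believed, b)   # 0.30, "optimal" for the imagined edge

print(f"growth at true-optimal stake:  {log_growth(f_opt, p_true, b):+.5f}")
print(f"growth at overconfident stake: {log_growth(f_over, p_true, b):+.5f}")
```

Staking 30% instead of 10% does not merely slow growth: against the true 55% edge, the expected log growth turns negative, and the bettor following the wrong map is on a road to ruin.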
Being "not wrong"—to use Ellenberg's framing—does not mean having the right map. It means knowing which map to use, when to trust it, and when to put it down.
The goal was never the map. It was always navigating the territory. A perfect map is a trap; a useful map is a guide. The mathematician's art lies not in eliminating error—impossible—but in understanding where the error lives and how it behaves.
Putting Down the Map
There is a moment in any serious intellectual journey when you must put down the map and look at the ground. The statistician who never doubts their model is a danger to themselves and others. The economist who treats their equations as revealed truth is not doing economics; they are doing theology with Greek letters.
But the person who refuses to use maps—who insists on navigating purely by intuition, who rejects all abstraction as distortion—is equally lost. The territory is too vast, too complex, too full of hidden patterns that only become visible through the right lens.
The wisdom lies in the movement between map and territory. Use the map to guide your steps. But look up. Check the landmarks. Ask: what would make this map wrong? What is it not showing me? What has changed since the map was drawn?
Borges's cartographers forgot this. In their pursuit of perfect accuracy, they lost all utility. Korzybski reminds us of the gap that must always remain. Box teaches us to live with wrongness, to find the useful kind. Goodhart warns us what happens when we optimize our maps instead of navigating the world.
The 99 chapters before this one have been maps—useful maps, I hope, but maps nonetheless. This final chapter is the reminder that comes at the end of every atlas: the map is not the territory. Go outside. Look around. The territory is still there, in all its messy, unmodeled glory, waiting for you to navigate it.
The mathematics ends here. The journey continues.