The Whiz Kid Goes to War
Robert Strange McNamara (and yes, that was really his middle name) was the kind of man who believed every problem had a number at its heart. At Ford Motor Company in the 1950s, he and his fellow "Whiz Kids" had transformed a chaotic, gut-instinct empire into a gleaming machine of statistical control. They counted everything. They optimized everything. And it worked: Ford went from hemorrhaging money to dominating the American road.[1]
So when President Kennedy tapped McNamara to run the Pentagon in 1961, nobody was surprised that he brought his spreadsheets with him. The Department of Defense was the largest organization on Earth; surely it could benefit from the same rigorous quantitative analysis that had saved Ford. McNamara installed systems analysts, demanded data on every program, and set about measuring war the same way he'd measured automobile production.
He needed a number. A metric that would tell him whether the United States was winning in Vietnam.
He chose body count.
The numbers were beautiful. The war was a disaster.
By 1967, McNamara's own analysts were feeding him kill ratios that implied the U.S. had already eliminated the entire fighting-age male population of North Vietnam, twice over.[2] And yet somehow the enemy kept coming. The numbers said victory was imminent. The jungle said otherwise. American soldiers were capturing the same hills over and over. South Vietnamese villagers were quietly feeding intelligence to the Viet Cong. The political will to continue the war was evaporating at home.
McNamara had measured what was easy to measure (bodies) and treated it as the thing that mattered. He'd mistaken the map for the territory. The body count told him what his weapons were doing. It told him nothing about whether he was winning.
The diagnosis came from Daniel Yankelovich, writing in 1972.[3] He laid out the four steps of what would come to be called the McNamara Fallacy, the systematic substitution of what you can count for what you should count: first, measure whatever can easily be measured; second, disregard what can't easily be measured, or give it an arbitrary quantitative value; third, presume that what can't be measured easily isn't really important; fourth, conclude that what can't be easily measured doesn't exist.
Yankelovich's four steps: from reasonable measurement to epistemic suicide.
The Fallacy Is Everywhere
If the McNamara Fallacy were confined to the Pentagon in the 1960s, it would be a historical curiosity. It's not. It's the defining intellectual failure of modern institutional life.
Teaching to the Test
When the No Child Left Behind Act tied school funding to standardized test scores, schools did exactly what any rational actor would do: they optimized for test scores. They cut art, music, and recess. They drilled reading comprehension strategies instead of fostering a love of reading. They reclassified struggling students as "special needs" to exclude their scores from the averages.[4]
Test scores went up. Education quality? That depends on who's measuring, and with what. The sociologist Donald Campbell had predicted exactly this outcome in 1979: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."[5] Campbell's Law and Goodhart's Law are cousins. The McNamara Fallacy is the family patriarch.
The Hospital Paradox
British hospitals were famously rated on A&E waiting times. The target: see patients within four hours. Hospitals responded brilliantly. Some kept patients in ambulances circling the parking lot; if you haven't technically arrived at A&E, you can't be waiting at A&E. Others moved patients to hallway gurneys labeled "assessment units": technically not A&E, technically not waiting. The measured metric improved. Whether patients got better faster is a different and largely unmeasured question.
The Engagement Trap
Silicon Valley learned McNamara's lesson and proceeded to ignore it completely. Facebook optimized for engagement: time on site, clicks, shares. The algorithm delivered. People were spectacularly engaged. They were also angrier, more polarized, and more anxious than they'd ever been. Internal Facebook research, leaked in 2021, showed that the company knew Instagram was damaging teenage mental health.[6] But "teenage mental health" isn't a metric with a dashboard. Engagement is.
Gaming the Rankings
For decades, the U.S. News & World Report rankings have shaped which universities get the best students, the most donations, and the highest prestige. The formula is public. The variables are measurable. So universities measure them. They spend millions on glossy brochures to increase application volume (which lowers acceptance rate, which boosts rankings). They reclassify spending as "instructional" whether or not it touches a classroom. Temple University's business school simply fabricated test scores for years.[7] Columbia University was caught inflating class-size data in 2022. The rankings improved. Whether anyone learned anything? That's not in the formula.
The map said America was winning. The territory disagreed.
Try It Yourself
Abstract arguments are easy to nod along to. So let's make it concrete. Below is a little institution for you to run. Pick what to optimize, and watch what happens to the things you didn't.
[Interactive: Metric Dashboard Simulator. You're in charge: pick a sector, choose up to two metrics to optimize, then run the simulation. Watch the measured numbers soar, then reveal what actually happened to everything you didn't measure.]
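If you'd rather run the institution in code, here is a minimal sketch of the same idea. The sectors, numbers, and decay rates are invented for illustration, not drawn from any real study: effort flows to whatever you measure, and everything else quietly starves.

```python
import random

def run_institution(optimize, years=10, seed=0):
    """Toy model: optimized (measured) dimensions rise steadily,
    while every unmeasured dimension loses the effort it used to
    get and drifts downward. All quantities are hypothetical."""
    rng = random.Random(seed)
    dimensions = ["test_scores", "art_and_music", "love_of_reading", "wellbeing"]
    state = {d: 50.0 for d in dimensions}      # everything starts at 50/100
    for _ in range(years):
        for d in dimensions:
            if d in optimize:
                state[d] += rng.uniform(4, 8)  # optimized, and gamed, upward
            else:
                state[d] -= rng.uniform(1, 3)  # starved of attention
    return {d: round(v, 1) for d, v in state.items()}

print(run_institution(optimize={"test_scores"}))
```

Run it and the dashboard metric soars past 90 while everything off the dashboard sinks, which is the whole fallacy in four lines of arithmetic.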
The Goodhart Decay
There's a precise mathematical intuition buried in all of this. When a metric is first introduced, it correlates with the thing you actually care about. Hospital wait times correlate with quality of care, at first. Test scores correlate with learning, at first. Body count correlates with military success, at first.
But the moment you attach stakes to the metric (funding, careers, bonuses, survival), you create an incentive to optimize the metric directly, decoupled from the underlying reality. The higher the stakes, the faster the correlation decays. This is Goodhart's Law in action: "When a measure becomes a target, it ceases to be a good measure."[8]
The relationship isn't mysterious. It's a predictable function of incentive strength and time. Try it:
[Interactive: Measurement Corruption Calculator. How fast does a metric stop meaning what it used to mean? Set the stakes and watch the correlation decay.]
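One toy way to model the decay, assuming (purely for illustration) that the metric's correlation with reality falls off exponentially at a rate proportional to the stakes attached to it:

```python
import math

def metric_validity(t, stakes, decay_rate=0.1):
    """Correlation between a metric and the outcome it proxies,
    t periods after stakes were attached to it. Toy assumption:
    exponential decay whose speed scales with incentive strength."""
    return math.exp(-decay_rate * stakes * t)

# Low stakes: the metric stays meaningful for years.
# High stakes: the correlation collapses almost immediately.
for stakes in (1, 5, 10):
    print(stakes, [round(metric_validity(t, stakes), 2) for t in range(5)])
```

The exact functional form is made up; the qualitative shape (starts at 1.0, decays faster as stakes rise) is the point.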
CompStat and the Blue Wall of Numbers
In 1994, New York City Police Commissioner Bill Bratton introduced CompStat, a system that tracked crime statistics precinct by precinct, week by week. Commanders whose numbers looked bad got grilled in high-pressure meetings. Commanders whose numbers looked good got promoted.
Crime in New York dropped dramatically. Or at least, reported crime did. A series of investigations revealed that officers were systematically downgrading crimes: felony assaults became misdemeanors, robberies became "lost property," rapes were marked "unfounded." In one Bronx precinct, a retired officer described commanders telling detectives to persuade victims not to file reports. The metric improved. Public safety, that unmeasured, unmeasurable, maddeningly complex reality, was a different story.
Here's the deep irony: CompStat was designed to fight exactly this problem. It was supposed to replace the old system, where police brass relied on gut instinct and political connections. Numbers were going to bring accountability. And they did, just not the kind anyone intended.
The measurement trap: a self-reinforcing cycle where each attempt to fix the metric makes it less meaningful.
What McNamara Knew (Eventually)
Here's the part that elevates this from a cautionary tale to a tragedy. McNamara figured it out. By 1967, his own private memos to President Johnson expressed deep skepticism that the war could be won. He had seen the gap between his numbers and reality. He understood, at some level, that body count was a mirage.
He didn't say so publicly. He was moved to the World Bank. He didn't speak openly about his doubts until 1995, thirty years and 58,000 American dead (and millions of Vietnamese dead) later, in his memoir In Retrospect. "We were wrong, terribly wrong," he wrote. It was too late, of course. It's always too late by Step 4.
The Deep Lesson
The McNamara Fallacy isn't about being anti-measurement. Measurement is essential. The fallacy is about forgetting that the measurement is not the thing. Every metric is a proxy: a simplified representation of a complex reality. The moment you optimize the proxy, you've begun to decouple it from reality. The question isn't whether to measure. It's whether you still remember what you were trying to measure, and whether you're still checking.
The antidote is not to stop counting. It's to count humbly. Hold every metric loosely. Rotate your metrics before they ossify. And always, always keep asking the question that no dashboard can answer: But is it actually working?
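Counting humbly can even be scheduled. As a sketch (the proxy names and cadence are invented, not a real hospital protocol), rotate which proxy you watch so none ossifies into a target, and periodically run the expensive ground-truth check no dashboard can replace:

```python
def humble_count(period, proxies, audit_every=4):
    """Return which proxy metric to watch this period, plus whether
    the slow ground-truth audit ('is it actually working?') is due.
    Rotating proxies keeps any one of them from becoming a target."""
    active = proxies[period % len(proxies)]
    audit_due = period % audit_every == 0
    return active, audit_due

proxies = ["wait_time", "readmission_rate", "patient_survey"]
for period in range(6):
    print(period, humble_count(period, proxies))
```

The code is trivial on purpose: the discipline is organizational, not computational, and the audit line is the part institutions skip.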
McNamara's numbers were never wrong. The war was never going well. Both of these things were true at the same time, and that should keep you up at night every time someone shows you a chart with all the lines going up.