Two Urns, Two Worlds
Here's a game. I put two urns in front of you. Urn A contains 50 red balls and 50 black balls. If you draw red, you win $100. Which urn do you want to draw from?
Trick question — I haven't told you about Urn B yet. Urn B also contains 100 balls, some red and some black. But I won't tell you the proportions. Could be 100 red. Could be 100 black. Could be any split at all. You have no information.
Now: which urn do you want to draw from?
Urn A: 50 red, 50 black — you know the odds.
Urn B: 100 balls, unknown mix — you know nothing.
If you're like most people — and "most people" here includes professional statisticians, Wall Street traders, and Nobel Prize-winning economists — you picked Urn A. And here's the interesting part: you can't justify this with expected value. If you have no information about Urn B, your best prior guess is that it, too, is 50/50. The expected value of both urns is the same: $50.
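You can check the arithmetic with a quick simulation. This minimal sketch models total ignorance about Urn B as a uniform prior over its possible compositions (the "principle of insufficient reason" that reappears below):

```python
import random

N_TRIALS = 100_000
PAYOFF = 100

# Urn A: a known 50/50 mix.
urn_a_wins = sum(random.random() < 0.5 for _ in range(N_TRIALS))

# Urn B: total ignorance, modeled as a uniform prior over all
# compositions from 0 to 100 red balls; then draw one ball.
urn_b_wins = 0
for _ in range(N_TRIALS):
    reds = random.randint(0, 100)    # the unknown composition
    if random.randint(1, 100) <= reds:
        urn_b_wins += 1

print(f"Urn A expected payoff: ${PAYOFF * urn_a_wins / N_TRIALS:.2f}")
print(f"Urn B expected payoff: ${PAYOFF * urn_b_wins / N_TRIALS:.2f}")
# Both converge to about $50: expected value can't explain the preference.
```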
Yet the preference is overwhelming. In experiment after experiment, roughly 70% of people choose the known urn.1 This isn't stupidity. It's one of the deepest intuitions human beings have about the world, and it took an economist named Frank Knight to figure out why it matters.
The distinction isn't between safe and dangerous. It's between the kind of dangerous where you can do the math — and the kind where there isn't any math to do.
Frank Knight's Big Idea
In 1921, a University of Chicago economist named Frank Knight published a book with one of the most boring titles in the history of important books: Risk, Uncertainty, and Profit.2 The title sounds like something you'd find on a shelf in an accountant's waiting room, right between Tax Code Amendments Quarterly and A Guide to Municipal Bonds. But buried inside was an idea that would reshape how we think about the future.
Knight drew a line that most people — before him and since — have been perfectly happy to blur. He said there are two fundamentally different kinds of not-knowing:
Risk is when you don't know what will happen, but you know the probabilities. A roulette wheel. A coin flip. A well-diversified stock portfolio over a ten-year period. The future is uncertain, sure, but it's uncertain in a way you can calculate. You can write down a probability distribution. You can compute an expected value. You can buy insurance at a price that makes mathematical sense.
Uncertainty is when you don't even know the probabilities. Will artificial intelligence achieve general human-level reasoning by 2030? What's the probability that the European Union will dissolve in the next twenty years? What were the odds, in 2005, that a social networking site for Harvard students would become the dominant communications platform on Earth? These aren't just hard questions. They're questions where the very act of assigning a probability feels like a category error — like asking for the square root of a sandwich.
Not all ignorance is created equal. The kind of not-knowing determines the kind of thinking you need.
Knight's insight was that this isn't a philosophical nicety. It's the whole ballgame. Because the tools that work beautifully in the world of risk — expected value calculations, probability-weighted averages, Monte Carlo simulations — don't just work poorly in the world of uncertainty. They can actively mislead you. They give you a number where a number has no business existing, and then you make decisions based on that number, and then you're surprised when reality doesn't cooperate.
The danger isn't that we can't calculate the future. It's that we calculate it anyway.
The Ellsberg Paradox
In 1961, a young analyst at the RAND Corporation named Daniel Ellsberg — yes, that Daniel Ellsberg, who would later become famous for leaking the Pentagon Papers — published a paper that made the urn game precise.3 Ellsberg was interested in decision theory, and he'd noticed something that bugged him about the then-dominant framework for making rational choices.
The framework, called expected utility theory, said that a rational person should act as if they assign probabilities to every possible outcome and then maximize the weighted average of their utility. It was mathematically gorgeous. It was axiomatically clean. And it predicted that you should be indifferent between Urn A and Urn B.
Here's the logic: if you believe there's a 60% chance of red in Urn B, you should prefer Urn B. If you think it's 40%, you should prefer Urn A. If you're totally ignorant, the "principle of insufficient reason" says you should treat it as 50% — same as Urn A. Any way you slice it, you should have a definite preference or be indifferent. What you should not do is consistently prefer the known urn regardless of which color you're betting on.
But that's exactly what people do. Ellsberg showed that if you ask people to bet on red from either urn, they pick Urn A. Then if you ask them to bet on black from either urn, they still pick Urn A. This is flatly inconsistent with having any probability belief about Urn B. If you think Urn B is more than 50% red, you should prefer it for red bets. If you think it's less than 50% red, you should prefer it for black bets. You can't consistently avoid it for both — unless you're not responding to probability at all. You're responding to something else entirely.
That something else is ambiguity. The discomfort isn't with the odds. It's with the fact that you don't have odds. And Ellsberg's point was: this isn't irrational. This is a perfectly coherent response to a genuinely different kind of epistemic situation.
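Ellsberg's argument is easy to check by brute force. Here's a sketch that sweeps every belief you could hold about Urn B and shows that none of them justifies preferring Urn A on both bets:

```python
# Urn A pays 0.5 * $100 = $50 in expectation on either color.
# Sweep every belief p = P(red in Urn B) and see which bets favor A.
PAYOFF = 100
urn_a_value = 0.5 * PAYOFF

for p in [i / 10 for i in range(11)]:
    prefers_a_for_red = urn_a_value > p * PAYOFF          # bet on red
    prefers_a_for_black = urn_a_value > (1 - p) * PAYOFF  # bet on black
    print(f"p={p:.1f}  A beats B on red: {prefers_a_for_red},  "
          f"on black: {prefers_a_for_black}")
# No value of p makes both columns True: strictly preferring Urn A for
# BOTH bets is incompatible with holding any probability belief about B.
```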
The Key Insight
Ambiguity aversion isn't the same as risk aversion. A risk-averse person dislikes gambles — they'd pay to avoid a coin flip. An ambiguity-averse person doesn't mind gambles with known odds — they mind not knowing the odds in the first place. These are two separate psychological and mathematical phenomena, and confusing them is like confusing a fear of heights with a fear of flying.
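One standard way to formalize that difference is maxmin expected utility, a model due to Gilboa and Schmeidler rather than anything in Ellsberg's paper: the risk-averse agent gets a concave utility function, while the ambiguity-averse agent evaluates the unknown urn by its worst plausible composition. A minimal sketch:

```python
import math

PAYOFF = 100

def u(x):
    return math.sqrt(x)  # concave utility: the agent is risk-averse

# Risk aversion: a sure $50 beats a fair coin flip for $100.
print(f"sure $50: {u(50):.2f}  vs  coin flip: {0.5 * u(PAYOFF) + 0.5 * u(0):.2f}")

# Ambiguity aversion (maxmin): with no single probability for Urn B,
# evaluate it against a whole SET of plausible compositions and take
# the worst case, rather than averaging over one best-guess prior.
plausible_p = [0.0, 0.25, 0.5, 0.75, 1.0]
urn_a = 0.5 * u(PAYOFF)
urn_b = min(p * u(PAYOFF) for p in plausible_p)  # worst case is p = 0
print(f"Urn A: {urn_a:.2f}  vs  Urn B under maxmin: {urn_b:.2f}")
# The second effect needs no curvature in u at all: it's a different animal.
```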
Where the Models Broke
Now let's talk about money. Specifically, let's talk about the moment in September 2008 when the smartest quantitative minds on Wall Street discovered they'd been confusing risk with uncertainty for the better part of a decade.
Here's the story in miniature. Starting in the late 1990s, financial engineers built mathematical models to price complex mortgage-backed securities. The models were, by any technical standard, impressive. They incorporated correlations, default probabilities, recovery rates, and prepayment speeds. They used historical data going back decades. They ran Monte Carlo simulations with millions of paths. And they produced numbers — precise, reassuring numbers — that told banks and investors exactly how risky these securities were.4
The problem was subtle but lethal. The models treated the future of the housing market as a risk — something with a known probability distribution, estimated from historical data. But the situation was actually one of Knightian uncertainty. The United States had never experienced a nationwide decline in housing prices in the postwar era. Not once. So when the models said "the probability of a 30% nationwide decline is essentially zero," they weren't calculating a fact about the world. They were revealing a limitation of their data.
The most famous model was David Li's Gaussian copula function, published in 2000, which provided a seemingly elegant way to model the correlation of mortgage defaults. Banks loved it because it gave them a single number — a correlation parameter — that summarized the dependency structure of thousands of mortgages. The rating agencies loved it because it let them stamp AAA on tranches that were, in retrospect, explosive.
The formula worked beautifully in the domain of risk: when defaults were driven by individual borrower circumstances (job loss, illness, divorce), they were roughly independent, and the correlation parameter was low. But when defaults were driven by a systemic factor — a nationwide housing crash — the true correlations jumped to nearly 1.0, and the model's estimate of "essentially zero" probability became, in fact, "100% happening right now."5
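Here's a toy version of that mechanism: a stripped-down one-factor Gaussian copula, not Li's production model, with invented parameters:

```python
import random
import statistics

# Each loan i defaults when X_i = sqrt(rho)*M + sqrt(1-rho)*e_i falls below
# a threshold chosen to match a 2% individual default probability.
N_LOANS = 400
THRESHOLD = statistics.NormalDist().inv_cdf(0.02)

def p_mass_default(rho, n_sims=5_000):
    """Estimated probability that more than 20% of the pool defaults."""
    sqrt_rho, sqrt_rest = rho ** 0.5, (1 - rho) ** 0.5
    disasters = 0
    for _ in range(n_sims):
        m = random.gauss(0, 1)  # the shared 'state of the economy' factor
        defaults = sum(
            sqrt_rho * m + sqrt_rest * random.gauss(0, 1) < THRESHOLD
            for _ in range(N_LOANS)
        )
        if defaults > 0.20 * N_LOANS:
            disasters += 1
    return disasters / n_sims

for rho in (0.01, 0.30, 0.90):
    print(f"rho={rho:.2f}: P(>20% of pool defaults) ~ {p_mass_default(rho):.4f}")
# With defaults nearly independent, mass default looks impossible. When one
# common factor takes over (a nationwide crash), the 'impossible' is routine.
```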
This is Knight's distinction in action. The banks didn't fail because they took risks. Banks take risks every day; that's their job. They failed because they took uncertainties and, using the alchemy of mathematical modeling, transmuted them into risks. The models didn't eliminate the uncertainty. They just hid it.
As the physicist and finance writer Emanuel Derman put it: "Models are metaphors that compare something we don't fully understand with something we think we do."6 In the world of risk, the metaphor is close enough to be useful. In the world of uncertainty, the metaphor becomes a fairy tale.
Can You Tell the Difference?
Here's the practical challenge: in real life, risk and uncertainty don't come with labels. Nobody hands you a situation and says, "This one's a risk — feel free to use your probability models." The whole difficulty is that you have to figure out which kind of not-knowing you're dealing with before you choose your tools.
Let's develop some intuition with a game.
Risk or Uncertainty?
For each scenario, decide whether it's fundamentally a problem of risk (known or knowable probabilities) or uncertainty (probabilities are inherently unknowable or unreliable).
Try a few: a spin of a roulette wheel. Pricing life insurance for a large pool of 45-year-old nonsmokers. Whether artificial intelligence reaches human-level reasoning by 2030. Whether the dollar collapses in the next twenty years. The first two are risk: repeatable situations with stable, well-charted frequencies. The last two are uncertainty: one-off questions with no group of instances to draw on.
The tricky part is that most interesting situations live somewhere in the middle. The stock market has elements of both: over long periods, historical averages are a decent guide (risk), but the possibility of a complete structural change — a world war, a technological revolution, a collapse of the dollar — introduces genuine uncertainty. And it's exactly in the boundary zone where the biggest mistakes happen, because you're most tempted to use risk-style tools while standing on uncertain ground.
Catastrophes live in the zone where precise mathematical tools get applied to situations that aren't precise at all.
The Uncertainty Toolkit
So if expected value and probability models are the tools for risk, what are the tools for uncertainty? Knight himself pointed toward the answer, and a century of thinkers — from John Maynard Keynes to Nassim Nicholas Taleb — have built on it.
1. Robustness over optimality
In the world of risk, you want the optimal strategy — the one that maximizes expected value. In the world of uncertainty, you want the robust strategy — the one that doesn't blow up no matter what happens. These are different goals, and they usually produce different answers.
Consider choosing a career. A risk-based approach would be to estimate the probability distribution of income for each career and pick the one with the highest expected value. (Congratulations, you're now applying to medical school or going into quantitative finance.) An uncertainty-based approach would be to choose a career that keeps your options open, that builds transferable skills, that doesn't leave you destitute if an entire industry evaporates. These two approaches might give the same answer — but often they don't.
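The split is easy to see with made-up numbers. In the sketch below, the expected-value rule and the maximin (worst-case) rule pick different careers from the same payoff table:

```python
# Hypothetical payoffs for three career strategies across three futures
# to which no one can honestly assign probabilities.
payoffs = {
    #               boom  stagnation  industry collapse
    "specialist": [ 300,     80,          -50 ],
    "generalist": [ 140,    100,           85 ],
    "safe job":   [  90,     90,           80 ],
}

# Risk-style answer: maximize expected value (here, under a uniform prior).
best_ev = max(payoffs, key=lambda k: sum(payoffs[k]))

# Uncertainty-style answer: maximize the worst case (maximin).
best_robust = max(payoffs, key=lambda k: min(payoffs[k]))

print(f"optimal by expected value: {best_ev}")      # specialist (mean 110)
print(f"robust by worst case:      {best_robust}")  # generalist (min 85)
```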
2. Margin of safety
This is Benjamin Graham's great contribution to investing, and it's really an uncertainty principle in disguise.7 Graham didn't say "buy stocks where the expected value is highest." He said "buy stocks where the price is so far below your estimate of value that you'd be fine even if your estimate is significantly wrong." The margin of safety isn't a risk calculation. It's an admission that your model of value might be garbage — an explicit hedge against uncertainty.
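A toy simulation with invented numbers shows the mechanism. Your estimate of value is noisy; the question is how much that noise can hurt you at different demanded discounts:

```python
import random

random.seed(0)
PRICE = 100.0

def simulate(required_margin, n=200_000):
    """Buy whenever our noisy estimate exceeds the price by the margin;
    return (average profit per purchase, fraction of losing purchases)."""
    profits = []
    for _ in range(n):
        true_value = random.gauss(PRICE, 20)         # what it's really worth
        estimate = true_value + random.gauss(0, 20)  # our error-prone model
        if estimate >= PRICE * (1 + required_margin):
            profits.append(true_value - PRICE)
    return sum(profits) / len(profits), sum(p < 0 for p in profits) / len(profits)

for margin in (0.0, 0.25, 0.50):
    avg, losing = simulate(margin)
    print(f"demand a {margin:.0%} discount: avg profit {avg:+.1f}, "
          f"losing buys {losing:.0%}")
# The bigger the margin you demand, the fewer deals you do, and the less
# a badly wrong estimate of value can hurt you.
```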
3. Optionality
Options — the right but not the obligation to do something — are particularly valuable under uncertainty. When you don't know what the future holds, you want to be in a position where you benefit from good surprises without being destroyed by bad ones. Taleb calls this "antifragility," but the core idea is older than the word: it's the asymmetric payoff, where your upside is unlimited but your downside is capped.8
A startup founder has optionality: they've risked a finite amount (a few years, some savings) for a potentially unlimited upside. A person who takes on a huge mortgage to buy the biggest house they can afford has the opposite: they've traded away optionality for optimization.
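The asymmetry is easy to simulate. In the sketch below the probabilities and payoffs are invented, but the shapes match the two cases above: capped downside with open upside, versus steady gains with a rare catastrophe:

```python
import random

random.seed(42)
N = 100_000

def convex():   # optionality: lose a little often, occasionally win big
    return -1 if random.random() > 0.05 else random.expovariate(1 / 40)

def concave():  # optionality traded away: steady gains, rare catastrophe
    return 1 if random.random() > 0.05 else -40

for name, bet in [("convex (founder)", convex), ("concave (max mortgage)", concave)]:
    outcomes = [bet() for _ in range(N)]
    print(f"{name}: mean {sum(outcomes) / N:+.2f}, "
          f"worst outcome {min(outcomes):+.1f}")
# Both are 5% tail bets, but the convex one caps the downside at -1 while
# the concave one risks -40. Under uncertainty, the worst-case line is the
# one that matters.
```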
4. Diversification across models, not just assets
Standard diversification is a risk tool: spread your money across assets so that individual losses cancel out. But under uncertainty, you need to diversify across worldviews. You need to ask not just "what if this stock goes down?" but "what if my entire model of how the economy works is wrong?" This means holding some assets that would only make sense in a world that doesn't look like the one you expect — a little gold, a little Bitcoin, a little farmland — as insurance against the possibility that your understanding of reality has a bug in it.
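A stylized example, with all returns hypothetical: compare a portfolio that diversifies only across assets with one that reserves a slice for a world where the model itself is wrong:

```python
# Hypothetical ten-year gross returns for each asset under two worldviews:
# the world your models assume, and a world where those models are wrong.
returns = {
    #          model right   model wrong
    "stocks":    [2.0,          0.4],
    "bonds":     [1.3,          0.6],
    "gold":      [1.0,          1.8],
}

def grow(weights, world):
    return sum(w * returns[asset][world] for asset, w in weights.items())

asset_diversified = {"stocks": 0.8, "bonds": 0.2, "gold": 0.0}
model_diversified = {"stocks": 0.6, "bonds": 0.2, "gold": 0.2}

for name, w in [("asset-diversified", asset_diversified),
                ("model-diversified", model_diversified)]:
    print(f"{name}: x{grow(w, 0):.2f} if the model is right, "
          f"x{grow(w, 1):.2f} if it is wrong")
# The gold slice costs a little in the world you expect and saves you in
# the world you don't: insurance against a bug in your worldview.
```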
Uncertainty Budget Calculator
How much of your decision-making should be driven by formal models, and how much by robustness thinking? That depends on how unprecedented the situation is, how irreversible a bad outcome would be, and how well your data covers the events you need to survive.
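Here's a back-of-the-envelope sketch of what such a calculator might compute; the inputs and weights are invented for illustration, not a validated formula:

```python
def uncertainty_budget(novelty, irreversibility, data_coverage):
    """Share of a decision to hand to robustness thinking vs. formal models.
    All inputs are scored 0-1; the weights are invented for illustration.
    novelty:         how unprecedented is the situation?
    irreversibility: how hard would a bad outcome be to undo?
    data_coverage:   how well does your data cover events like the one
                     you need to survive? (1.0 = fully covered)
    """
    score = 0.4 * novelty + 0.4 * irreversibility + 0.2 * (1 - data_coverage)
    return min(1.0, max(0.0, score))

# A hand of blackjack: nothing new, easily reversible, perfectly charted.
print(f"blackjack hand:   {uncertainty_budget(0.0, 0.1, 1.0):.0%} robustness")
# Betting the firm on mortgage models calibrated to a crash-free era.
print(f"2007 CDO tranche: {uncertainty_budget(0.8, 0.9, 0.2):.0%} robustness")
```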
Keynes Knew
John Maynard Keynes understood the distinction between risk and uncertainty as well as anyone who ever lived, and he understood it because he'd been burned by ignoring it.
In the early 1920s, Keynes was a currency speculator, and a confident one. He had theories about the relative values of European currencies in the aftermath of World War I, and he leveraged those theories into large positions. In 1920, the markets moved against him so violently that he was nearly wiped out, saved only by a loan from a sympathetic financier.9
This experience transformed Keynes's thinking. In 1937, defending his newly published General Theory, he wrote what might be the most honest paragraph in the history of economics:
"By 'uncertain' knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty... The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence... About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know."10
Keynes's radical claim was that much of economic life doesn't operate in the domain of risk at all. When a business owner decides whether to build a factory, they aren't computing expected values. They can't be — there are too many unknowns, too many factors that resist quantification. Instead, they're making a leap of what Keynes called "animal spirits": a gut-level confidence in the future that is, at bottom, a psychological phenomenon rather than a mathematical one.
This doesn't mean the decision is irrational. It means it's a-rational — operating in a domain where rationality, in the narrow sense of probability-weighted optimization, simply doesn't apply. And Keynes's deep insight was that this is fine. The economy doesn't need every decision to be a solved optimization problem. It needs people to act boldly in the face of irreducible uncertainty, and the financial system's job is to make that boldness feasible.
The Meta-Uncertainty Problem
Here's where things get truly vertiginous. The hardest problem isn't dealing with uncertainty — it's figuring out whether you're dealing with uncertainty or risk in the first place. Call it the meta-uncertainty problem.
Think about it: when Long-Term Capital Management collapsed in 1998, the partners thought they were in the world of risk. They had models. They had decades of data. They had two Nobel Prize winners on the board. Their value-at-risk models told them that the losses they experienced should have occurred once in several billion years.11 The models were right — within their own assumptions. The problem was that the assumptions described a world that was tidier than the real one.
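A back-of-the-envelope comparison shows what that kind of claim rests on. This isn't LTCM's actual model; the heavy-tailed alternative (a Student-t with three degrees of freedom) is an arbitrary illustrative choice:

```python
from scipy.stats import norm, t

SIGMAS = 10           # a '10-sigma' daily loss, roughly LTCM-scale
DAYS_PER_YEAR = 252

p_gauss = norm.sf(SIGMAS)      # survival function: P(loss > 10 sigma)
p_fat = t.sf(SIGMAS, df=3)     # the same event under a heavy-tailed model

print(f"Gaussian model:   once every {1 / (p_gauss * DAYS_PER_YEAR):.1e} years")
print(f"Student-t (df=3): once every {1 / (p_fat * DAYS_PER_YEAR):.1f} years")
# The Gaussian says 'never in the life of the universe'; the fat-tailed
# model says 'every few years'. Same data, different assumed world.
```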
Or consider Fukushima in 2011. The seawall was designed to handle a certain height of tsunami, based on historical records and probabilistic risk analysis. The analysis was correct, given the data. But the data didn't go back far enough — evidence of even larger historical tsunamis was later found in geological sediments. The engineers had been solving a risk problem. The actual problem was one of uncertainty.12
So here's the heuristic I'd propose, and it's deliberately conservative: when in doubt about whether you face risk or uncertainty, assume uncertainty. The cost of treating risk as uncertainty is inefficiency — you'll be too cautious, too diversified, too hesitant. The cost of treating uncertainty as risk is catastrophe. These costs are not symmetric, and that asymmetry should determine your default.
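A stylized calculation, with numbers invented purely for illustration, makes the asymmetry concrete:

```python
# Mistake 1: treat risk as uncertainty -> forgo some upside every year.
# Mistake 2: treat uncertainty as risk -> small annual chance of ruin.
YEARS = 30
overcaution = 0.02   # too cautious: give up 2% of potential gains per year
p_ruin = 0.05        # model-trusting: 5% chance per year of a fatal surprise

kept_upside = (1 - overcaution) ** YEARS
survival = (1 - p_ruin) ** YEARS

print(f"Too cautious for {YEARS} years: keep {kept_upside:.0%} of the upside")
print(f"Model-trusting for {YEARS} years: {survival:.0%} chance of surviving")
# Roughly 55% of the upside versus a ~21% chance of still being solvent:
# the two mistakes are nowhere near symmetric.
```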
The Practitioner's Rule
If your model relies on data from a period that has never included the kind of event you're trying to survive, you're not managing risk. You're papering over uncertainty with arithmetic. The remedy isn't better arithmetic. It's humility about the limits of arithmetic itself.
Donald Rumsfeld's famous taxonomy, but with teeth: the category you're in determines which math you're allowed to use.
Living with Not-Knowing
The deepest lesson of the risk-uncertainty distinction isn't a mathematical technique. It's a disposition toward the world.
We live in a culture that worships quantification. We want numbers for everything — the probability of rain, the chance of a terrorist attack, the likelihood of a startup succeeding, the risk that a new drug causes side effects. And for many of these questions, numbers are exactly the right tool. The probability of rain tomorrow in a specific city? We're pretty good at that. Insurance companies can reliably estimate how many 45-year-old nonsmokers will die next year. Casinos know their edge to the penny.
But for the questions that matter most — the ones that keep us up at night, the ones that determine the long arc of our lives — numbers are often a security blanket rather than a searchlight. "What's the probability that my marriage will last?" is technically a question with a statistical answer (about 50% of first marriages in the US end in divorce), but anyone who's been married knows that the population-level statistic tells you essentially nothing about your marriage. Your marriage is not a draw from an urn. It's a unique, unrepeatable, radically uncertain endeavor.
Frank Knight's distinction gives us permission to say: "I don't know, and moreover, this is the kind of thing where not knowing is the permanent condition, not a temporary gap waiting to be filled with more data." That's not defeatism. It's the starting point for a different kind of wisdom — one that relies on judgment, character, resilience, and optionality rather than expected value and probability-weighted optimization.
Or as Knight himself put it, with the dry understatement of a man who knew he was saying something important: "The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known, while in the case of uncertainty, this is not true, the reason being in general that it is impossible to form a group of instances."13
The world is full of people who can compute an expected value. It is much shorter on people who know when expected values are the wrong tool for the job. Knight's gift to us is the clarity to tell the difference — and the courage to put down the calculator when the situation demands it.