
The Missing Chapter

The Ergodicity Problem

Why the math that economists use to think about risk has been wrong for 300 years—and why it matters for every bet you'll ever make.

≈ 28 min read · Interactive essay

Chapter 1

The Game That Eats Your Money

Here's a game. I flip a fair coin. Heads: your total wealth goes up 50%. Tails: it goes down 40%. You must bet everything each time. Want to play?

Before you answer, let's do what every economist since Daniel Bernoulli has told us to do: compute the expected value. You start with $100. Heads gives you $150. Tails gives you $60. The average:

Expected Value — One Flip
E[W] = 0.5 × $150 + 0.5 × $60 = $105
A 5% expected gain per flip. The math says: play.

Five percent expected gain, every flip, for free. By the logic that has dominated economics for three centuries, you should mortgage your house to play this game. You should play it a thousand times. You should play it forever.

And if you did, you'd go broke.

Not probably broke. Not "there's a chance you go broke." With probability approaching one, you will lose essentially everything. The most sophisticated expected-value calculation in the world told you to play, and playing destroys you.

The expected value is positive. The typical outcome is ruin. These two facts are not in conflict—they're the whole point.

This isn't a trick. There's no hidden assumption, no sleight of hand with infinite sums, no St. Petersburg–style divergence. It's something worse: a case where the standard mathematical framework that economists use to evaluate decisions gives advice that will, with near certainty, destroy the person who follows it.

The concept that explains why is called ergodicity. A physicist named Ole Peters thinks it might be the central error in 300 years of economic theory.1 To understand it, we need to figure out why the coin game goes wrong—and what "wrong" even means here, because the expected value calculation isn't incorrect. It's just answering the wrong question.

[Diagram: +50% takes $100 → $150; −40% takes $150 → $90. One round trip: $100 → $90. Feels symmetric. Isn't.]

The multiplicative trap: a +50% gain followed by a −40% loss doesn't return you to even. You lose 10% every round trip.
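The round-trip arithmetic is easy to verify directly. A minimal sketch in Python (not part of the original essay), which also compares the ensemble expectation with the typical outcome after 100 flips:

```python
# Deterministic check of the multiplicative trap: a +50% gain followed
# by a -40% loss shrinks wealth by 10%.
wealth = 100.0
wealth *= 1.5   # heads: +50% -> $150
wealth *= 0.6   # tails: -40% -> $90
print(wealth)   # 90.0

# Per-flip expected value vs. the typical outcome after n flips.
# With a fair coin, the median path has n/2 heads and n/2 tails.
n = 100
expected = 100.0 * 1.05 ** n             # ensemble average: ~$13,150
median = 100.0 * (1.5 * 0.6) ** (n // 2) # typical path: 0.9^50, about $0.52
print(f"expected ≈ ${expected:,.0f}, median ≈ ${median:.2f}")
```

The expected value and the median diverge by five orders of magnitude in the same game.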

· · ·
Chapter 2

Two Kinds of Average

Imagine a thousand people walk into a room. Each has $1,000. They all play one round of the coin game. After the flip, about 500 of them have $1,500 and 500 have $600. The average wealth in the room is $1,050. After two rounds, it's about $1,102.50. The pool of money in the room is growing.

Now imagine one person—you—playing a thousand rounds in sequence. What happens to you?

These sound like the same question. They are absolutely not the same question.

Ensemble average: The average outcome across many people, each playing once. This is what expected value computes. It answers: "What happens to the group?"

Time average: The average outcome for one person playing many times in sequence. This answers: "What happens to me?"

When these two averages agree, the process is ergodic. When they diverge—and in this game they diverge catastrophically—the process is non-ergodic, and expected value lies to you about your own future.

The ensemble average grows at 5% per round. That's real—the total wealth in the room really does grow. But it grows the way GDP grows in a country with soaring inequality: a tiny number of absurdly lucky players accumulate astronomical sums, while the vast majority are wiped out. Computing the "average wealth" and concluding everyone is fine is like computing the "average net worth" in a room containing Jeff Bezos and 999 bankrupt people and announcing that everyone's a billionaire.

The time average tells a different story. Each round, your wealth gets multiplied by either 1.5 (heads) or 0.6 (tails). Over many rounds, you'll get roughly half of each. The question that governs your fate is: what is the geometric mean of these multipliers?

Time-Average Growth Rate
g = ½ ln(1.5) + ½ ln(0.6) ≈ −0.0527
Negative. You lose about 5.3% per round in the long run.

The geometric mean of 1.5 and 0.6 is √(0.9) ≈ 0.949. Less than one. Every two flips, on average, your wealth shrinks to 90% of what it was. The ensemble says +5%. The trajectory says −5%. Both are correct. They're just answering different questions.2

And you, sitting there with your one life and your one bankroll, are a time-average creature living in an ensemble-average world.
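Both averages can be computed in a couple of lines. A minimal check in Python, using only the numbers above:

```python
import math

# Arithmetic mean of the multipliers (governs the ensemble average):
arith = 0.5 * 1.5 + 0.5 * 0.6          # 1.05 -> +5% per round
# Geometric mean (governs one player's trajectory through time):
geom = math.sqrt(1.5 * 0.6)            # sqrt(0.9) ≈ 0.949
# Time-average growth rate, as in the boxed formula:
g = 0.5 * math.log(1.5) + 0.5 * math.log(0.6)  # ≈ -0.0527

print(arith, geom, g)
```

Note that g is exactly the log of the geometric mean: the two statements "geometric mean below one" and "negative time-average growth rate" are the same fact.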

The Core Insight

In multiplicative dynamics—where gains and losses compound on your current wealth—the geometric mean governs your fate, not the arithmetic mean. The arithmetic mean can be positive while the geometric mean is negative. When this happens, "expected value" describes what happens to a fictional average entity. It says nothing about what will happen to you.

[Diagram: ensemble vs. time. Left: 20 people play one flip each; $1,000 becomes $1,500 or $600, averaging $1,050. Right: one person plays 200 rounds; the trajectory decays toward zero while the ensemble average climbs. Same game. Different questions. Opposite answers.]

Left: what happens across many people (ensemble). Right: what happens to one person over time. The ensemble grows. The individual decays.

· · ·
Chapter 3

See It For Yourself

Theory is nice. Watching yourself go broke is better. The simulator below runs the coin game and tracks 20 individual players (the thin colored lines), along with the ensemble average (the thick gold line). Hit "Play 200 Rounds" and watch the gap between the average and the typical.

The Multiplicative Coin Game
20 players each start with $1,000. Watch their individual trajectories diverge from the ensemble average.
[Interactive simulator: sliders set the gain on heads (+50%) and the loss on tails (−40%); live readouts track the ensemble average, the median player, players broke (<$1, out of 20), and the round count. Everyone starts at $1,000, round 0.]

Notice the pattern? The gold ensemble line climbs steadily upward—5% per round, exactly as the math promised. But the individual players tell a different story. Most of them collapse toward zero. One or two lucky ones soar to absurd heights, dragging the average up while the typical player is ruined.

This is the ergodicity problem in its purest visual form. The "average" is real but irrelevant to any individual. The median—the experience of the typical person—is the time average, and it decays toward zero.

Try adjusting the sliders. Make the gain bigger. Make the loss smaller. You'll find that the ensemble average becomes even more optimistic while individuals still go broke—sometimes faster. The gap between what the average says and what you experience is the measure of non-ergodicity.
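If you'd rather run the experiment offline, here is a bare-bones stand-in for the simulator, sketched in Python (the seed and player count are arbitrary choices, not from the essay):

```python
import random
import statistics

random.seed(42)  # reproducible run; any seed shows the same pattern

def play(rounds: int, start: float = 1000.0) -> float:
    """One player's wealth after betting everything each round."""
    w = start
    for _ in range(rounds):
        w *= 1.5 if random.random() < 0.5 else 0.6
    return w

players = [play(200) for _ in range(20)]
print(f"median: ${statistics.median(players):,.6f}")
print(f"broke (<$1): {sum(w < 1 for w in players)} / 20")
print(f"theoretical ensemble average: ${1000 * 1.05 ** 200:,.0f}")
```

One caveat: with only 20 sampled players, even the sample mean usually falls far short of the theoretical ensemble average, because that average is carried by astronomically rare lucky paths that a small sample almost never contains. That is the non-ergodicity showing up in the statistics themselves.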

· · ·
Chapter 4

Three Hundred Years of Getting It Wrong

In 1713, Nicolas Bernoulli posed a problem that would haunt probability theory for decades. Imagine a game where a coin is flipped until it lands tails. If the first tails appears on flip n, you win 2ⁿ ducats. How much should you pay to play?

The expected value is infinite: ½ × 2 + ¼ × 4 + ⅛ × 8 + … = 1 + 1 + 1 + … = ∞. And yet nobody in their right mind would pay more than about 20 ducats. This was the St. Petersburg Paradox, and it troubled the greatest mathematical minds in Europe.
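The divergence is easy to see term by term: every flip you allow adds exactly one ducat to the expectation, while the median payout stays at 2 ducats (half the time, the very first flip is tails). A small sketch using exact fractions:

```python
from fractions import Fraction

def truncated_ev(n: int) -> Fraction:
    """Expected St. Petersburg payout if the game is capped at n flips.

    Each term (1/2^k) * 2^k contributes exactly 1 ducat, so the
    truncated expectation equals n and grows without bound.
    """
    return sum(Fraction(1, 2 ** k) * 2 ** k for k in range(1, n + 1))

print(truncated_ev(10))   # 10
print(truncated_ev(100))  # 100
```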

Twenty-five years later, Nicolas's cousin Daniel Bernoulli published his solution. People don't value money linearly, he argued. The utility of money is logarithmic—each doubling of wealth feels equally good, regardless of your starting point. A dollar matters more to a pauper than a prince. Maximize expected utility, not expected value, and the paradox dissolves.3

Bernoulli's 1738 paper, "Exposition of a New Theory on the Measurement of Risk," became one of the most influential works in the history of economics. Its framework—maximize expected utility with a concave utility function—is the foundation of decision theory, game theory, and most of modern microeconomics. For nearly three centuries, no one seriously questioned it.

Here's what Ole Peters noticed in 2011, working at the London Mathematical Laboratory, a small independent research institute funded to think about exactly this kind of foundational question.4 He noticed that Bernoulli's logarithm wasn't revealing something about human psychology. It was revealing something about mathematics.

When you take the logarithm of wealth and compute the expected value, you get the time-average growth rate. Exactly. Not approximately, not metaphorically—exactly. The logarithm converts multiplicative dynamics into additive dynamics, and additive dynamics are ergodic. Bernoulli didn't discover a feature of the human soul. He stumbled on a change of variables that fixes a broken calculation.

Bernoulli's "Utility" = Time-Average Growth Rate
E[ln(W)] = time-average growth rate
The logarithm isn't psychology. It's a correction for non-ergodicity.
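The identity in the box can be checked numerically. A seeded sketch in Python (the flip count is arbitrary): the expected per-flip change in ln(W) is compared against the realized growth rate of one long simulated trajectory.

```python
import math
import random

random.seed(0)

# Bernoulli's expected log-utility change per flip of the coin game...
expected_log = 0.5 * math.log(1.5) + 0.5 * math.log(0.6)

# ...equals the long-run growth rate of a single trajectory through time.
T = 200_000
log_w = 0.0
for _ in range(T):
    log_w += math.log(1.5) if random.random() < 0.5 else math.log(0.6)
time_avg = log_w / T

print(expected_log, time_avg)  # both ≈ -0.0527
```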

This is an audacious claim. Peters isn't saying expected utility theory gets wrong answers—it often gets perfectly good ones. He's saying it gets right answers for the wrong reason. And wrong reasons matter, because they break down in exactly the cases where you need them most.

For three centuries, economics has been explaining a mathematical correction as a psychological quirk.

Think about what this means. Every time a behavioral economist documents a "bias"—loss aversion, risk aversion, the endowment effect—they're comparing human behavior to the predictions of expected value theory. When humans deviate, the conclusion is: humans are irrational. But if expected value theory is using the wrong average, then the "irrational" humans might be the ones doing the math correctly. They might be optimizing the time average—the thing that actually governs their fate—while economists are optimizing the ensemble average, a mathematical ghost that no individual will ever experience.5

· · ·
Chapter 5

The Fix: Think in Growth Rates

If expected value is broken for multiplicative processes, what do we use instead? Peters's answer is elegant: maximize the time-average growth rate. For any gamble that compounds on your wealth, compute the expected logarithmic growth rate and ask whether it's positive or negative.

For our coin game with full bet size:

Growth Rate Check
g = ½ ln(1.5) + ½ ln(0.6) ≈ −0.053
Negative → decline is certain. Don't play.

But what if you don't have to bet everything? What if you can choose to bet, say, 10% of your wealth each round—gaining 5% on heads and losing 4% on tails? Now the growth rate is:

Growth Rate — Betting 10%
g = ½ ln(1.05) + ½ ln(0.96) ≈ +0.0040
Positive! By betting less, you turned a losing game into a winning one.

This is the profound surprise. The game didn't change. The probabilities didn't change. The expected value was positive all along. What changed is that by sizing the bet correctly, you aligned the ensemble average with the time average. You made the process approximately ergodic—for you.

The optimal fraction—the one that maximizes your long-run growth rate—is given by the Kelly criterion. For this specific game, Kelly says bet 25% of your wealth each round. Do that and you'll grow steadily, indefinitely. Bet 100% and you'll die. Same game, same edge, opposite outcomes—the only difference is how much you bet.6
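The optimum and the ruin threshold both fall out of a one-line growth function. A short scan in Python (brute force rather than the closed-form Kelly formula, to keep it self-contained):

```python
import math

def growth(f: float) -> float:
    """Time-average growth rate when betting a fraction f each round:
    heads multiplies wealth by (1 + 0.5f), tails by (1 - 0.4f)."""
    return 0.5 * math.log(1 + 0.5 * f) + 0.5 * math.log(1 - 0.4 * f)

# Scan bet fractions from 0% to 100% for the Kelly optimum.
fractions = [f / 1000 for f in range(0, 1001)]
best = max(fractions, key=growth)

print(f"optimal fraction ≈ {best:.3f}")          # ≈ 0.250
print(f"g at optimum     ≈ {growth(best):+.4f}") # ≈ +0.0062
print(f"g at f = 0.5     ≈ {growth(0.5):+.4f}")  # 0: break-even
print(f"g at f = 1.0     ≈ {growth(1.0):+.4f}")  # ≈ -0.0527
```

The closed form agrees: for win fraction b = 0.5 and loss fraction a = 0.4 at even odds, f* = (b − a)/(2ab) = 0.1/0.4 = 0.25, and g crosses zero exactly where (1 + 0.5f)(1 − 0.4f) = 1, i.e. at f = 0.5.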

Try it yourself. The calculator below lets you choose what fraction of your wealth to bet. Watch how the growth rate changes—and notice where it flips from positive to negative.

The Bet-Sizing Calculator
Same coin game (+50% / −40%). How much of your wealth should you bet?
Fraction of Wealth to Bet 100%
Time-Average Growth Rate
−5.27%
per round — you will go broke
Expected Value
+5.0%
Kelly Optimal
25%
After 100 Rounds
$0.005
Verdict
☠️ Ruin
The Bet-Sizing Paradox

Expected value is always maximized by betting 100%. The growth rate is maximized at 25%. And the growth rate goes negative above 50%—meaning any bet larger than half of your wealth will destroy you in this game, even though the expected value is positive at every bet size. This is the ergodicity problem in a single chart.

· · ·
Chapter 6

The Real World Is Non-Ergodic

If ergodicity were just about coin flip games, it would be a curiosity for probability professors. But non-ergodicity is everywhere—it's the default mode of economic life. Almost everything that matters to you compounds multiplicatively.

Why Insurance Is Rational

Standard economics has always squirmed about insurance. The insurance company charges a premium, so buying insurance has negative expected value. The standard explanation: people are "risk-averse" (they have concave utility functions). This is circular—it explains the behavior by postulating a preference that produces the behavior.

The ergodicity explanation is simpler and more powerful: a catastrophic loss in a multiplicative world doesn't just hurt you once. It compounds against you forever. Lose 90% of your wealth and you need a 900% return just to get back to where you started. Insurance isn't soothing your anxious brain. It's protecting your time-average growth rate by capping the worst-case multiplier. You're paying a small drag on your geometric mean to avoid a devastating one.7
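A toy illustration of that trade. The numbers below (fire probability, loss size, premium) are invented for the sketch, not taken from the essay: each year a catastrophe wipes out 70% of your wealth with probability 5%, while an insurer covers it for a flat 4% premium.

```python
import math

p_loss = 0.05        # hypothetical annual probability of catastrophe
kept = 0.30          # fraction of wealth left after the catastrophe
premium_mult = 0.96  # hypothetical premium: pay 4% of wealth each year

# Expected value prefers going uninsured...
ev_uninsured = (1 - p_loss) * 1.0 + p_loss * kept   # 0.965
ev_insured = premium_mult                            # 0.960

# ...but the time-average growth rate prefers the insurance.
g_uninsured = (1 - p_loss) * math.log(1.0) + p_loss * math.log(kept)
g_insured = math.log(premium_mult)

print(f"EV: uninsured {ev_uninsured:.3f} vs insured {ev_insured:.3f}")
print(f"g:  uninsured {g_uninsured:+.4f} vs insured {g_insured:+.4f}")
```

The premium has negative expected value, exactly as standard economics complains, yet it raises the growth rate that governs your actual trajectory.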

Why Inequality Grows

If wealth follows multiplicative dynamics—and it does, because investment returns compound on existing wealth—then the ergodicity framework predicts exactly what we observe: relentless concentration. In the simulator above, after enough rounds, one or two players hold almost all the wealth while the rest are ruined. The ensemble average (GDP per capita) keeps rising. The median person gets poorer.

Start 1,000 people at $100,000 each. Each year, multiply their wealth by a random factor—lognormally distributed, mean +7%, standard deviation 20%. That's roughly the stock market. After 40 years, the average wealth is about $1.5 million. But the median is less than half that—and some players end up with less than they started with.

No one cheated. No one had an unfair advantage. The rules were identical. The inequality emerged from pure multiplicative dynamics plus randomness. This is non-ergodicity as social policy.8
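One way to run that experiment in Python. The lognormal parameters below are one plausible reading of "mean +7%, sd 20%", chosen (as an assumption, not the essay's exact specification) so the average multiplier comes out near 1.07:

```python
import random
import statistics

random.seed(7)  # reproducible; the mean/median gap appears for any seed

# 1,000 people, $100,000 each; every year wealth is multiplied by a
# lognormal factor with E[multiplier] ≈ e^0.07 ≈ 1.07 and log-sd 0.20.
mu, sigma = 0.07 - 0.20 ** 2 / 2, 0.20
people = [100_000.0] * 1000
for _ in range(40):
    people = [w * random.lognormvariate(mu, sigma) for w in people]

print(f"mean:   ${statistics.mean(people):,.0f}")
print(f"median: ${statistics.median(people):,.0f}")
```

Identical rules, identical starting wealth, and the mean still pulls far away from the median: pure multiplicative dynamics plus randomness.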

[Diagram: an ergodic process (gas molecules in a box) spreads evenly, so time average equals ensemble average. A non-ergodic process (wealth after 100 rounds) concentrates in a few hands, so time average ≠ ensemble average.]

Ergodic systems spread evenly over time — one molecule visits every corner. Non-ergodic systems concentrate — one winner takes almost everything.

This reframes the entire inequality debate. Standard economics says inequality is about differences in skill, effort, or luck. Non-ergodicity says inequality is mathematically inevitable in any system with multiplicative dynamics and finite time—even if everyone is identical. It's not a bug. It's a theorem.

Why "Loss Aversion" Might Not Be a Bias

Kahneman and Tversky's prospect theory famously shows that people feel losses about twice as strongly as equivalent gains. Standard behavioral economics calls this a cognitive bias—an irrational quirk of the human brain.

But in a multiplicative world, losses are worse than gains, mathematically. A 50% loss requires a 100% gain to recover. A 90% loss requires a 900% gain. The asymmetry isn't in your head—it's in the arithmetic. Humans who weight losses more heavily than gains aren't being irrational. They're being good intuitive geometers of multiplicative dynamics.9
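The recovery arithmetic in one line: a fractional loss L requires a gain of L/(1 − L) to break even.

```python
# Gain required to recover from a fractional loss L: L / (1 - L).
recoveries = {loss: loss / (1 - loss) for loss in (0.10, 0.50, 0.90)}
for loss, gain in recoveries.items():
    print(f"a {loss:.0%} loss needs a {gain:.0%} gain to break even")
```

A 10% loss needs only an 11% gain, but the required gain blows up as the loss deepens: the asymmetry is built into the arithmetic, not the amygdala.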

Career Decisions

Should you take the stable government job or the startup lottery? Expected value says: go for the startup if the numbers work out. But you only get one career—you're a time-average creature in a one-shot game. The startup might have higher expected value across all parallel universes, but in the one universe you inhabit, the stable job might deliver higher time-average growth. This isn't cowardice. It's arithmetic.

You don't live in the average of all possible worlds. You live in one world, sequentially, and you have to survive each round to play the next.
· · ·
Chapter 7

The Punchline

The ergodicity problem isn't really about economics. It's about a mistake so natural that it took 300 years and a physicist to spot it: confusing the average across possibilities with the trajectory through time.

When a financial advisor tells you the stock market returns "10% on average," ask: average across what? Across all possible histories? Or across time, for you? Because if the returns are volatile—and they always are—those are different numbers. The ensemble average can paint a rosy picture while the typical investor gets slowly ground down by what mathematicians call volatility drag: the geometric penalty you pay for variance in a multiplicative system.10
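The drag is easy to compute from a toy return series, here the "up 50%, down 50%" example from note 10:

```python
import math

returns = [0.50, -0.50]                     # up 50%, then down 50%
arith = sum(returns) / len(returns)         # arithmetic mean: 0.0
growth = math.prod(1 + r for r in returns)  # realized: 0.75, down 25%
geom = growth ** (1 / len(returns)) - 1     # geometric mean: ≈ -13.4%/period

# Approximation from note 10: geometric ≈ mu - sigma^2 / 2
sigma2 = sum((r - arith) ** 2 for r in returns) / len(returns)  # 0.25
print(arith, geom, arith - sigma2 / 2)
```

The advisor's "average return" is the arithmetic 0%; the return you actually compound at is the geometric −13.4%, and the μ − σ²/2 rule predicts the gap to first order.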

This is what Peters has been trying to tell us. Not that expected value is useless—it's the right tool for additive, ergodic processes, and many processes genuinely are ergodic. But for the multiplicative, non-ergodic processes that dominate real financial life—investments, insurance, career gambles, health risks—it's the wrong tool. It's like using a map of London to navigate Tokyo. The map isn't wrong. You're just in the wrong city.

| Question | Ergodic (Additive) | Non-Ergodic (Multiplicative) |
| --- | --- | --- |
| Which average governs? | Arithmetic mean | Geometric mean |
| Ensemble = Time? | Yes | No |
| Example | Salary income | Investment returns |
| Correct tool | Expected value | Expected growth rate |
| Bankruptcy possible? | Only by spending | Yes, by math alone |

Jordan Ellenberg taught us that mathematical thinking is a way of being in the world. Here's one more piece of that: before you take any bet—financial, professional, personal—ask yourself, is this ergodic? Can I replay this enough times for the average outcome to become my outcome? If not—if it's a one-shot deal, or a sequence of bets that compound on each other—then the expected value is a fiction about parallel universes you'll never visit.

The Final Lesson

Maximize the time-average growth rate, not the expected value. Protect against catastrophic losses—not because you're risk-averse, but because ruin is an absorbing state in a multiplicative world. Size your bets with Kelly. Buy insurance when the downside compounds. And never, ever confuse what happens on average with what happens to you.

How not to be wrong: know which average governs your life.

Notes & References

  1. Ole Peters, "The ergodicity problem in economics," Nature Physics, vol. 15, pp. 1216–1221, 2019. Peters's argument has attracted both enthusiastic supporters and sharp critics. For a balanced assessment, see Jason Collins's review at jasoncollins.blog.
  2. This follows from the strong law of large numbers applied to the logarithm of wealth. If the expected log-growth per round is negative, the log of wealth goes to −∞ almost surely. See Peters & Gell-Mann, "Evaluating gambles using dynamics," Chaos, vol. 26, 023103, 2016.
  3. Daniel Bernoulli, "Specimen Theoriae Novae de Mensura Sortis," Commentarii Academiae Scientiarum Imperialis Petropolitanae, 1738. English translation by Louise Sommer in Econometrica, vol. 22, no. 1, pp. 23–36, 1954.
  4. Peters, O., "The time resolution of the St Petersburg paradox," Philosophical Transactions of the Royal Society A, vol. 369, pp. 4913–4931, 2011. The paper that started the ergodicity economics program.
  5. For the argument that ergodicity economics dissolves several "behavioral biases," see Doctor, J.N. et al., "Ergodicity-breaking reveals time optimal economic behavior in humans," PLOS Computational Biology, 16(9), 2020. The claim is controversial—many behavioral economists argue that the biases have independent empirical support beyond expected utility violations.
  6. Kelly, J.L. Jr., "A New Interpretation of Information Rate," Bell System Technical Journal, vol. 35, pp. 917–926, 1956. For the full story, see our companion chapter on the Kelly criterion.
  7. Peters, O. & Adamou, A., "The time interpretation of expected utility theory," arXiv:1801.03680, 2018. They show that most standard results in expected utility theory—including the demand for insurance—can be derived from time-average optimization without assuming a utility function.
  8. Berman, Y., Peters, O. & Adamou, A., "An empirical test of the ergodic hypothesis: wealth distributions in the United States," working paper, London Mathematical Laboratory, 2020. The simple multiplicative model reproduces key features of the U.S. wealth distribution remarkably well.
  9. Peters, O. & Gell-Mann, M., "Evaluating gambles using dynamics," Chaos, 26(2), 2016. They show that "loss aversion" emerges naturally from time-average optimization without needing to postulate it as a psychological feature.
  10. Volatility drag: if returns have arithmetic mean μ and variance σ², the geometric mean is approximately μ − σ²/2. The σ²/2 term is pure destruction from variance. This is why a stock that goes up 50% then down 50% doesn't return to its starting value—it loses 25%.