The Missing Chapter

Public Goods & Free Riders

Why your neighborhood park is underfunded, and what mathematics says about the tragedy of rational selfishness

An extension of Jordan Ellenberg's "How Not to Be Wrong"

Chapter 96

The Streetlight That Never Was

Ten people live on a dark street. A streetlight costs $100. Each resident values having the light at $20. Total value: $200. Cost: $100. Split ten ways, that's $10 apiece for something each resident values at $20. A no-brainer: build the light! But here's the thing: nobody builds it.

This isn't a story about stupidity. It's a story about rationality — the cold, impeccable, self-defeating kind. Each resident does the same calculation: "If the other nine chip in, the light gets built whether I pay or not. And if they don't chip in, my $10 won't save the project anyway. Either way, I'm better off keeping my money." Multiply that logic by ten, and you've got a very dark street.

Economists call this the public goods problem, and once you learn to see it, you'll find it everywhere. It's why your local park is underfunded, why open-source software developers burn out, why Wikipedia has to beg for donations, why climate change negotiations crawl, and why your office kitchen is always out of coffee. It is, arguably, the central problem of civilization itself: how do you get people to contribute to things that benefit everyone?

The answer turns out to involve some beautiful mathematics, some surprising psychology, and one Nobel Prize awarded to a woman who figured out what economists had been getting wrong for decades.1

· · ·
The Definition

What Makes a Good "Public"

A public good has two defining properties, and both matter:

Non-rivalrous: My consumption doesn't diminish yours. If I enjoy the streetlight, that doesn't make it dimmer for you. Compare this to a sandwich: if I eat it, you can't. The streetlight doesn't care how many people use it.

Non-excludable: You can't prevent people from benefiting. The streetlight illuminates the whole street — you can't shine it only on the driveways of people who paid. This is what creates the free-rider problem: if you can't exclude non-payers, why would anyone pay?

Non-excludable and non-rivalrous: public goods (streetlights, defense)
Non-excludable but rivalrous: common-pool resources (fisheries, forests)
Excludable but non-rivalrous: club goods (Netflix, gyms)
Excludable and rivalrous: private goods (sandwiches, shoes)
The four types of goods. Public goods sit in the worst corner for markets: nobody can be charged, and there's plenty to go around.

National defense is the classic example. If the army repels an invasion, you're protected whether you paid taxes or not. Clean air, herd immunity, basic scientific research, the concept of zero — all public goods. The lighthouse was the go-to example for a century, until Ronald Coase spoiled the fun by pointing out that British lighthouses were historically funded by port fees.2

The really interesting cases are the ones where you might not immediately recognize the public good lurking inside. Wikipedia is a public good — non-rivalrous (your reading doesn't slow mine) and non-excludable (no paywall). Open-source software runs most of the internet and is maintained by people who, economically speaking, shouldn't bother. Every time you don't litter, you're providing a public good. The question is: why do we do any of this at all?

· · ·
The Game

The Mathematics of Selfishness

To understand why public goods are so tricky, let's build a game. It's a game that economists have run thousands of times in laboratories around the world, and its results are both depressing and — eventually — hopeful.

Five players each receive $10. Each secretly decides how much to put into a common pool (from $0 to $10), keeping the rest. The pool is multiplied by 1.5× and then split equally among all five players, regardless of how much each contributed.

Let's do the math. Suppose everyone contributes their full $10. The pool is $50. Multiplied by 1.5 gives $75. Split five ways, each person gets $15. Everyone started with $10, so everyone profited $5. Wonderful!

But now think like a free rider. If the other four people each contribute $10, the pool is $40. Multiplied by 1.5 gives $60. Your share: $12. Plus the $10 you kept. Total: $22. By defecting while others cooperated, you earned $22 versus the $15 you'd have gotten by cooperating. Free riding pays.

Your payoff = (10 − c) + 1.5 × C / n

where:
c = your contribution ($0 to $10)
C = the total pool (everyone's contributions, yours included)
n = group size (5 in our game)
1.5 = the multiplication factor, the "social return"

Here's the key insight. Each dollar you contribute costs you $1 but returns only $1.50 / 5 = $0.30 to you personally. You're losing $0.70 on every dollar you give. A rational, self-interested player contributes zero.

But if everyone contributes zero, the pool is $0, and everyone walks away with their original $10 — much worse than the $15 they could have had. This is the Nash equilibrium: everyone defects, even though everyone cooperating would make everyone richer.3

The invisible hand, in this case, is giving everyone the finger.

Try it yourself:

The Public Goods Game
You + 4 AI players. Each starts with $10 per round. Contribute to the common pool, see what happens over 10 rounds.

If you played a few rounds, you probably noticed something. The AI agents don't all defect immediately — some start generous and then pull back. That's not just a game mechanic; it's what happens in real experiments.

· · ·
The Evidence

What Actual Humans Do

The public goods game has been run in laboratories across dozens of countries since the 1980s, and the results are remarkably consistent.4 People don't behave like the Nash equilibrium predicts. They don't contribute zero. But they don't contribute optimally either.

In the first round, people typically contribute 40–60% of their endowment. Generous! Hopeful! But watch what happens over time: contributions decay, round by round, settling toward 10–20% by the final rounds. It's not that people start selfish and stay selfish. They start generous and learn to free-ride — or, more precisely, they watch others free-ride and think, "Why am I the sucker?"

Typical contribution patterns in public goods experiments. Without punishment, cooperation decays. With punishment, it rises. Data pattern based on Fehr & Gächter (2000).

This pattern — initial cooperation followed by decay — tells us something important. People aren't perfectly selfish calculating machines. But they're not unconditional altruists either. Most people are conditional cooperators: they'll contribute if they believe others will too, and they'll defect once they feel exploited. It's not selfishness that kills cooperation; it's the perception of unfairness.
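The decay has a simple mechanical story. If players are conditional cooperators who match only a fraction of what they saw the group give last round, contributions must shrink every round. A toy model of ours (an illustrative sketch, not the experimental protocol):

```python
def simulate_decay(rounds=10, start=5.0, match_rate=0.8):
    """Each round, players contribute match_rate times last round's average.

    start=5.0 models the typical ~50% first-round contribution of a $10
    endowment; match_rate < 1 models "I'll give a bit less than I saw."
    """
    history = [start]
    for _ in range(rounds - 1):
        history.append(match_rate * history[-1])
    return history

path = simulate_decay()
assert path[0] == 5.0                                      # generous start
assert all(a > b for a, b in zip(path, path[1:]))          # monotone decay
assert path[-1] < 1.5   # by round 10, down near the 10-15% range
```

Imperfect matching alone is enough to reproduce the qualitative pattern: no one needs to be purely selfish for cooperation to unravel.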

· · ·
The Arithmetic of the Dilemma

Why Defecting Always "Wins"

Let's be precise about why this is a dilemma. Use the calculator below to see the payoffs for cooperators versus defectors in groups of different sizes.

Free Rider Calculator
[Interactive: adjust group size, multiplication factor, and number of cooperators to see why free riding is individually rational but socially ruinous. With the defaults (group size 5, factor 1.5×, 4 cooperators), a cooperator who contributes $10 earns $12 and the defector who contributes $0 earns $22; if everyone cooperates each earns $15, if everyone defects (the Nash equilibrium) each keeps $10, and $25 of social surplus is lost.]

Notice something crucial: the defector always earns more than the cooperator in the same group, regardless of how many cooperators there are. That's what makes this a true dilemma — not a misunderstanding, not a coordination failure, but a structural trap where individual rationality leads to collective ruin.

The gain from defecting, holding everyone else's contributions fixed, is exactly $10 × (1 − m/n): you keep your $10, but your own dollars would have returned only $10 × m/n to you through the pool. When the multiplication factor m is less than the group size n — which is the standard setup — this gain is always positive. The defector always wins the comparison.

But also notice: when everyone cooperates, each person earns more than when everyone defects. The social optimum beats the Nash equilibrium. This is the knife's edge of the public goods problem: what's best for each individual is worst for the group.

The Social Dilemma, Formally

A public goods game is a social dilemma when the multiplication factor m satisfies 1 < m < n. The lower bound (m > 1) means the group benefits from cooperation. The upper bound (m < n) means each individual benefits from defection. Between these bounds lies the tragedy.
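Both bounds can be read straight off the payoff formula: holding everyone else fixed, defecting gains you $10 × (1 − m/n), while moving the whole group from all-defect to all-cooperate gains each person $10 × (m − 1). A quick check (the helper names are ours):

```python
def defection_gain(m, n, endowment=10):
    """Your gain from cutting your contribution to 0, others held fixed."""
    return endowment * (1 - m / n)

def cooperation_gain(m, endowment=10):
    """Per-person gain when everyone cooperates vs. everyone defects."""
    return endowment * (m - 1)

# In the dilemma zone 1 < m < n, both gains are positive at once:
m, n = 1.5, 5
assert abs(defection_gain(m, n) - 7) < 1e-9   # $22 vs $15, as computed above
assert cooperation_gain(m) == 5.0             # $15 each vs $10 each
```

The tension is visible in the two functions: m < n keeps `defection_gain` positive, m > 1 keeps `cooperation_gain` positive, and the dilemma lives wherever both hold.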

· · ·
The Cure

Punishing the Free Riders

In 2000, Ernst Fehr and Simon Gächter published a paper that changed how economists think about cooperation.5 They modified the standard public goods game by adding a second stage: after seeing everyone's contributions, players could spend their own money to punish low contributors. For every $1 you spent on punishment, the target lost $3.
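The second stage is mechanically simple: each dollar a punisher spends subtracts one dollar from the punisher and three from the target. A sketch of the accounting under that 1:3 ratio (the function name and matrix layout are our own, not Fehr and Gächter's):

```python
def apply_punishment(payoffs, spending, cost=1, impact=3):
    """spending[i][j] is how many dollars player i spends punishing player j."""
    result = list(payoffs)
    for i, row in enumerate(spending):
        for j, dollars in enumerate(row):
            result[i] -= cost * dollars     # the punisher pays the cost...
            result[j] -= impact * dollars   # ...and the target loses triple it
    return result

# Three players; player 0 spends $2 punishing the free rider, player 2.
before = [15, 15, 22]
after = apply_punishment(before, [[0, 0, 2], [0, 0, 0], [0, 0, 0]])
assert after == [13, 15, 16]   # punisher down $2, target down $6
```

Note what the example shows: punishing is a net loss for the punisher, which is exactly why standard theory predicted nobody would do it.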

The result was dramatic. With punishment available, contributions didn't decay — they increased, climbing toward full cooperation over time. People were willing to pay real money, from their own earnings, to punish free riders. And the threat of punishment kept potential defectors in line.

This is deeply puzzling from a standard economics perspective. Punishment is itself a public good — you pay the cost, but everyone benefits from the increased cooperation. A rational free rider wouldn't punish anyone. Yet people do it, reliably, across cultures, even when they'll never interact with the punished player again.6

Humans didn't just evolve to cooperate. They evolved to enforce cooperation.

Fehr and Gächter called it altruistic punishment: you bear a cost to uphold a norm, even when you can't personally benefit. It's altruistic in the sense that it helps the group at a cost to the individual. But it might also feel good — brain imaging studies show that punishing defectors activates reward centers.7 Revenge, it turns out, really is sweet.

· · ·
Beyond Punishment

Ostrom's Eight Principles

Punishment is one solution. But it's crude, and sometimes it backfires — in some cultures, punished people retaliate by punishing cooperators, leading to an ugly downward spiral. There has to be something better.

Elinor Ostrom spent her career studying communities around the world that had figured out how to manage shared resources — irrigation systems in Nepal, fishing villages in Japan, forest commons in Switzerland — without either privatization or government control.8 She won the Nobel Prize in Economics in 2009, the first woman to do so, for demonstrating that the standard story ("public goods require government or they fail") was too simple.

Ostrom identified eight design principles that successful commons institutions share:

01
Clear Boundaries
Who's in the group and what's the resource? No ambiguity.
02
Proportional Rules
Benefits match costs. Those who contribute more get more.
03
Collective Choice
Those affected by rules participate in making them.
04
Monitoring
Behavior is observable. Free riders can be identified.
05
Graduated Sanctions
Mild punishment first, escalating for repeat offenders.
06
Conflict Resolution
Cheap, fast access to dispute resolution mechanisms.
07
Local Autonomy
External authorities respect the community's right to self-organize.
08
Nested Enterprises
For large systems, governance is layered — local within regional within global.

Read that list carefully and you'll notice something: it's a recipe for making cooperation visible and defection costly, without requiring a heavy-handed central authority. Boundaries tell you who to trust. Monitoring tells you who's defecting. Graduated sanctions give people a chance to reform. Collective choice gives rules legitimacy.

Compare these principles to the public goods game. In the basic game, contributions are anonymous, there's no communication, no iteration, no identity — it's the worst-case scenario for cooperation. Ostrom's point was that real communities almost never face that worst case. They have names and faces, reputations and relationships, and they build institutions precisely to avoid the anonymous one-shot game.

· · ·
The Bigger Picture

Taxes, Wikipedia, and Open-Source

The bluntest solution to the public goods problem is the one governments use: compulsory contribution. Taxes are forced cooperation. You can't free-ride on national defense because the IRS won't let you. This works, but it requires a mechanism for enforcement (tax collectors, auditors, courts) and a mechanism for deciding what to fund (democracy, in theory). Both mechanisms can fail in interesting ways.

But some of the most fascinating public goods problems have been solved without compulsion at all. Wikipedia has 60 million articles written by volunteers. Linux powers most of the world's servers, built by people who could have been billing $200/hour. Why?

Partly it's reputation — contributing to open source builds your career. Partly it's intrinsic motivation — people enjoy creating things. Partly it's conditional cooperation at scale — when you see others contributing, you want to contribute too. And partly it's what Ostrom would recognize: these communities have developed their own governance structures, norms, monitoring systems, and graduated sanctions (try vandalizing a Wikipedia article and see how fast the revert comes).

[Diagram: a public good sustained by several mechanisms at once: taxes, punishment, reputation, Ostrom's norms, and talk.]
The public goods problem isn't unsolvable — it's multiply solvable. Different mechanisms work for different contexts.

The public goods problem isn't a counsel of despair. It's a map of the design space. Once you understand why free riding happens — the arithmetic that makes defection individually rational — you can start building institutions that change the arithmetic. Make contributions visible. Make defection costly. Make the group small enough that people care about each other's opinions. Let people talk. These are the tools that turn the Nash equilibrium from zero contribution into something that looks, remarkably, like civilization.

The Bottom Line

The public goods problem is not a flaw in human nature. It's a flaw in the game. Change the rules — add reputation, communication, monitoring, or graduated sanctions — and the same "selfish" humans who free-ride in anonymous one-shot games become enthusiastic cooperators. The math tells us why cooperation breaks down. But the math also tells us how to build it back up.

Notes & References

  1. Elinor Ostrom won the Nobel Memorial Prize in Economic Sciences in 2009, shared with Oliver Williamson, for "her analysis of economic governance, especially the commons." She was the first woman to win the economics Nobel.
  2. Ronald Coase, "The Lighthouse in Economics," Journal of Law and Economics 17, no. 2 (1974): 357–376. Coase showed that most British lighthouses were built and maintained by private enterprise through Trinity House, funded by user tolls at ports.
  3. John Nash's equilibrium concept, from his 1950 Princeton dissertation, describes a state where no player can improve their outcome by unilaterally changing strategy. In the public goods game, zero contribution is the unique Nash equilibrium when the multiplication factor is less than the group size.
  4. For a comprehensive meta-analysis of public goods experiments, see Zelmer, J., "Linear Public Goods Experiments: A Meta-Analysis," Experimental Economics 6 (2003): 299–310. Average initial contributions cluster around 40–60% across hundreds of studies.
  5. Fehr, E. & Gächter, S., "Cooperation and Punishment in Public Goods Experiments," American Economic Review 90, no. 4 (2000): 980–994. One of the most cited papers in experimental economics.
  6. Henrich, J. et al., "Costly Punishment Across Human Societies," Science 312 (2006): 1767–1770. Found that willingness to punish free riders varies across cultures but is present in all 15 societies studied.
  7. de Quervain, D. et al., "The Neural Basis of Altruistic Punishment," Science 305 (2004): 1254–1258. PET imaging showed activation of the dorsal striatum (reward center) when subjects punished defectors.
  8. Ostrom, E., Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 1990). The foundational text that launched the study of polycentric governance.