The Streetlight That Never Was
Ten people live on a dark street. A streetlight costs $100. Each resident values having the light at $20. Total value: $200. Cost: $100. This is a no-brainer — build the light! But here's the thing: nobody builds it.
This isn't a story about stupidity. It's a story about rationality — the cold, impeccable, self-defeating kind. Each resident does the same calculation: "If the others chip in enough to cover the cost, the light gets built whether I pay or not. And if they don't, my $10 won't save the project anyway. Either way, I'm better off keeping my money." Multiply that logic by ten, and you've got a very dark street.
Economists call this the public goods problem, and once you learn to see it, you'll find it everywhere. It's why your local park is underfunded, why open-source software developers burn out, why Wikipedia has to beg for donations, why climate change negotiations crawl, and why your office kitchen is always out of coffee. It is, arguably, the central problem of civilization itself: how do you get people to contribute to things that benefit everyone?
The answer turns out to involve some beautiful mathematics, some surprising psychology, and one Nobel Prize awarded to a woman who figured out what economists had been getting wrong for decades.1
What Makes a Good "Public"
A public good has two defining properties, and both matter:
Non-rivalrous: My consumption doesn't diminish yours. If I enjoy the streetlight, that doesn't make it dimmer for you. Compare this to a sandwich: if I eat it, you can't. The streetlight doesn't care how many people use it.
Non-excludable: You can't prevent people from benefiting. The streetlight illuminates the whole street — you can't shine it only on the driveways of people who paid. This is what creates the free-rider problem: if you can't exclude non-payers, why would anyone pay?
National defense is the classic example. If the army repels an invasion, you're protected whether you paid taxes or not. Clean air, herd immunity, basic scientific research, the concept of zero — all public goods. The lighthouse was the go-to example for a century, until Ronald Coase spoiled the fun by pointing out that British lighthouses were historically funded by port fees.2
The really interesting cases are the ones where you might not immediately recognize the public good lurking inside. Wikipedia is a public good — non-rivalrous (your reading doesn't slow mine) and non-excludable (no paywall). Open-source software runs most of the internet and is maintained by people who, economically speaking, shouldn't bother. Every time you don't litter, you're providing a public good. The question is: why do we do any of this at all?
The Mathematics of Selfishness
To understand why public goods are so tricky, let's build a game. It's a game that economists have run thousands of times in laboratories around the world, and its results are both depressing and — eventually — hopeful.
Five players each receive $10. Each secretly decides how much to put into a common pool (from $0 to $10), keeping the rest. The pool is multiplied by 1.5 and then split equally among all five players, regardless of how much each contributed.
Let's do the math. Suppose everyone contributes their full $10. The pool is $50. Multiplied by 1.5 gives $75. Split five ways, each person gets $15. Everyone started with $10, so everyone profited $5. Wonderful!
But now think like a free rider. If the other four people each contribute $10, the pool is $40. Multiplied by 1.5 gives $60. Your share: $12. Plus the $10 you kept. Total: $22. By defecting while others cooperated, you earned $22 versus the $15 you'd have gotten by cooperating. Free riding pays.
In symbols, your payoff is (10 − c) + 1.5 × C / n, where:

- c: what you put in (0 to 10)
- C: what everyone put in, total
- n: group size (5 in our game)
- 1.5: the multiplication factor — the "social return"
Here's the key insight. Each dollar you contribute costs you $1 but returns only $1.50 / 5 = $0.30 to you personally. You're losing $0.70 on every dollar you give. A rational, self-interested player contributes zero.
But if everyone contributes zero, the pool is $0, and everyone walks away with their original $10 — much worse than the $15 they could have had. This is the Nash equilibrium: everyone defects, even though everyone cooperating would make everyone richer.3
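If you'd rather check the arithmetic than trust it, here's a quick Python sketch (the `payoff` function and its argument names are just illustrative) that plugs the formula above into the scenarios we've discussed: full cooperation, one free rider, and total defection.

```python
def payoff(my_contribution, total_contribution, n=5, multiplier=1.5, endowment=10):
    """Payoff = what you kept + your equal share of the multiplied pool."""
    return (endowment - my_contribution) + multiplier * total_contribution / n

print(payoff(10, 50))  # 15.0  everyone contributes the full $10
print(payoff(0, 40))   # 22.0  you keep your $10 while the other four contribute
print(payoff(0, 0))    # 10.0  everyone defects and keeps the original $10

# Each dollar you contribute changes your own payoff by -1 + 1.5/5 = -$0.70.
print(payoff(1, 1) - payoff(0, 0))  # -0.7
```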
The invisible hand, in this case, is giving everyone the finger.
Try it yourself:
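One way to try it is to simulate it. The sketch below runs ten rounds under assumptions of my own choosing (five conditional cooperators who start at 40 to 60 percent of the endowment, then aim a bit below last round's group average); it's an illustration, not a model of any particular experiment.

```python
import random

N, MULTIPLIER, ENDOWMENT, ROUNDS = 5, 1.5, 10, 10

def play_round(contributions):
    """One round: pool everything, multiply by 1.5, split equally."""
    share = MULTIPLIER * sum(contributions) / N
    return [ENDOWMENT - c + share for c in contributions]

# Conditional cooperators: start fairly generous, then track (and slightly
# undercut) whatever the group averaged last round.
contributions = [random.uniform(4, 6) for _ in range(N)]
for rnd in range(1, ROUNDS + 1):
    payoffs = play_round(contributions)
    avg = sum(contributions) / N
    print(f"round {rnd:2d}: average contribution {avg:4.1f}, average payoff {sum(payoffs) / N:5.2f}")
    contributions = [max(0.0, 0.8 * avg + random.uniform(-1, 1)) for _ in range(N)]
```

Run it a few times. The exact numbers jump around, but the downward drift is stubborn.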
If you played a few rounds, you probably noticed something. The AI agents don't all defect immediately — some start generous and then pull back. That's not just a game mechanic; it's what happens in real experiments.
What Actual Humans Do
The public goods game has been run in laboratories across dozens of countries since the 1980s, and the results are remarkably consistent.4 People don't behave like the Nash equilibrium predicts. They don't contribute zero. But they don't contribute optimally either.
In the first round, people typically contribute 40–60% of their endowment. Generous! Hopeful! But watch what happens over time: contributions decay, round by round, settling toward 10–20% by the final rounds. It's not that people start selfish and stay selfish. They start generous and learn to free-ride — or, more precisely, they watch others free-ride and think, "Why am I the sucker?"
This pattern — initial cooperation followed by decay — tells us something important. People aren't perfectly selfish calculating machines. But they're not unconditional altruists either. Most people are conditional cooperators: they'll contribute if they believe others will too, and they'll defect once they feel exploited. It's not selfishness that kills cooperation; it's the perception of unfairness.
Why Defecting Always "Wins"
Let's be precise about why this is a dilemma. Use the calculator below to see the payoffs for cooperators versus defectors in groups of different sizes.
Notice something crucial: the defector always earns more than the cooperator in the same group, regardless of how many cooperators there are. That's what makes this a true dilemma — not a misunderstanding, not a coordination failure, but a structural trap where individual rationality leads to collective ruin.
More precisely: switching from cooperation to defection raises your own payoff by exactly $10 × (1 − m/n), because you keep your $10 but give up the m/n slice of it that would have come back to you. In our game that's $22 versus $15, a gap of $7. When the multiplication factor m is less than the group size n — which is the standard setup — this gain is always positive. Defecting always wins the comparison.
But also notice: when everyone cooperates, each person earns more than when everyone defects. The social optimum beats the Nash equilibrium. This is the knife's edge of the public goods problem: what's best for each individual is worst for the group.
A public goods game is a social dilemma when the multiplication factor m satisfies 1 < m < n. The lower bound (m > 1) means the group benefits from cooperation. The upper bound (m < n) means each individual benefits from defection. Between these bounds lies the tragedy.
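If you want to check the numbers by hand, here's a rough stand-in for that calculator (function and parameter names are mine): it lists cooperator and defector payoffs for every mixed group of five, then checks that the gain from switching to defection matches the $10 × (1 − m/n) expression above.

```python
def payoffs(k, n=5, m=1.5, e=10):
    """Payoffs in a group of n players where k contribute the full endowment e
    and the other n - k contribute nothing."""
    share = m * e * k / n      # everyone's equal cut of the multiplied pool
    return share, e + share    # (cooperator payoff, defector payoff)

n, m, e = 5, 1.5, 10
for k in range(1, n):          # mixed groups with at least one of each type
    coop, defect = payoffs(k)
    print(f"{k} cooperators, {n - k} defectors: cooperator {coop:5.2f}, defector {defect:5.2f}")

# Same-group comparison: the defector always earns exactly e more (here, $10 more).
# The temptation that matters, though, is switching: a cooperator who defects
# turns k cooperators into k - 1 and gains e * (1 - m/n), no matter what others do.
everyone_cooperates, _ = payoffs(5)   # the $15 from the worked example
_, i_alone_defect = payoffs(4)        # the $22 from the worked example
print(i_alone_defect - everyone_cooperates, e * (1 - m / n))   # 7.0 7.0
```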
Punishing the Free Riders
In 2000, Ernst Fehr and Simon Gächter published a paper that changed how economists think about cooperation.5 They modified the standard public goods game by adding a second stage: after seeing everyone's contributions, players could spend their own money to punish low contributors. For every $1 you spent on punishment, the target lost $3.
The result was dramatic. With punishment available, contributions didn't decay — they increased, climbing toward full cooperation over time. People were willing to pay real money, from their own earnings, to punish free riders. And the threat of punishment kept potential defectors in line.
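To make the cost of punishing concrete, here's a back-of-the-envelope sketch of the second stage under the $1-to-$3 ratio just described (the function and the specific numbers are illustrative, not Fehr and Gächter's exact design):

```python
def apply_punishment(stage_one, punishments, loss_ratio=3):
    """Second stage: punishments[i][j] is what player i spends to punish player j.
    Every dollar spent costs the punisher $1 and the target $3."""
    result = list(stage_one)
    for i, row in enumerate(punishments):
        for j, spent in enumerate(row):
            result[i] -= spent               # the punisher pays the fee
            result[j] -= loss_ratio * spent  # the target loses three times as much
    return result

# Four cooperators earned $12 each, the lone free rider earned $22 (as in the
# worked example). Player 0 spends $2 of their own money to punish player 4.
stage_one = [12, 12, 12, 12, 22]
punishments = [[0, 0, 0, 0, 2], [0] * 5, [0] * 5, [0] * 5, [0] * 5]
print(apply_punishment(stage_one, punishments))   # [10, 12, 12, 12, 16]
```

The punisher ends up worse off than the cooperators who stayed quiet, which is exactly why standard theory says nobody should bother.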
This is deeply puzzling from a standard economics perspective. Punishment is itself a public good — you pay the cost, but everyone benefits from the increased cooperation. A rational free rider wouldn't punish anyone. Yet people do it, reliably, across cultures, even when they'll never interact with the punished player again.6
Humans didn't just evolve to cooperate. They evolved to enforce cooperation.
Fehr and Gächter called it altruistic punishment: you bear a cost to uphold a norm, even when you can't personally benefit. It's altruistic in the sense that it helps the group at a cost to the individual. But it might also feel good — brain imaging studies show that punishing defectors activates reward centers.7 Revenge, it turns out, really is sweet.
Ostrom's Eight Principles
Punishment is one solution. But it's crude, and sometimes it backfires — in some cultures, punished people retaliate by punishing cooperators, leading to an ugly downward spiral. There has to be something better.
Elinor Ostrom spent her career studying communities around the world that had figured out how to manage shared resources — irrigation systems in Nepal, fishing villages in Japan, forest commons in Switzerland — without either privatization or government control.8 She won the Nobel Prize in Economics in 2009, the first woman to do so, for demonstrating that the standard story ("public goods require government or they fail") was too simple.
Ostrom identified eight design principles that successful commons institutions share:

1. Clearly defined boundaries: everyone knows who is entitled to use the resource and who is not.
2. Rules matched to local conditions: the rules for contributing and withdrawing fit the particular resource and community.
3. Collective-choice arrangements: the people affected by the rules can take part in changing them.
4. Monitoring: monitors who are accountable to the users keep track of the resource and of behavior.
5. Graduated sanctions: first offenses draw mild penalties; repeat offenses draw harsher ones.
6. Conflict-resolution mechanisms: cheap, fast, local ways to settle disputes.
7. Minimal recognition of rights to organize: outside authorities respect the community's right to make its own rules.
8. Nested enterprises: larger commons are governed through multiple, linked layers of smaller units.
Read that list carefully and you'll notice something: it's a recipe for making cooperation visible and defection costly, without requiring a heavy-handed central authority. Boundaries tell you who to trust. Monitoring tells you who's defecting. Graduated sanctions give people a chance to reform. Collective choice gives rules legitimacy.
Compare these principles to the public goods game. In the basic game, contributions are anonymous, there's no communication, no iteration, no identity — it's the worst-case scenario for cooperation. Ostrom's point was that real communities almost never face that worst case. They have names and faces, reputations and relationships, and they build institutions precisely to avoid the anonymous one-shot game.
Taxes, Wikipedia, and Open Source
The most blunt solution to the public goods problem is the one governments use: compulsory contribution. Taxes are forced cooperation. You can't free-ride on national defense because the IRS won't let you. This works, but it requires a mechanism for enforcement (tax collectors, auditors, courts) and a mechanism for deciding what to fund (democracy, in theory). Both mechanisms can fail in interesting ways.
But some of the most fascinating public goods problems have been solved without compulsion at all. Wikipedia has 60 million articles written by volunteers. Linux powers most of the world's servers, built by people who could have been billing $200/hour. Why?
Partly it's reputation — contributing to open source builds your career. Partly it's intrinsic motivation — people enjoy creating things. Partly it's conditional cooperation at scale — when you see others contributing, you want to contribute too. And partly it's what Ostrom would recognize: these communities have developed their own governance structures, norms, monitoring systems, and graduated sanctions (try vandalizing a Wikipedia article and see how fast the revert comes).
The public goods problem isn't a counsel of despair. It's a map of the design space. Once you understand why free riding happens — the arithmetic that makes defection individually rational — you can start building institutions that change the arithmetic. Make contributions visible. Make defection costly. Make the group small enough that people care about each other's opinions. Let people talk. These are the tools that turn the Nash equilibrium from zero contribution into something that looks, remarkably, like civilization.
The public goods problem is not a flaw in human nature. It's a flaw in the game. Change the rules — add reputation, communication, monitoring, or graduated sanctions — and the same "selfish" humans who free-ride in anonymous one-shot games become enthusiastic cooperators. The math tells us why cooperation breaks down. But the math also tells us how to build it back up.