The Candle Problem
In 1680, a London merchant insured his warehouse against fire, then — according to legend — started storing gunpowder next to the candles. The insurers were not amused.
Whether that particular story is apocryphal or not, the general pattern was real enough that 17th-century English insurers gave it a name: moral hazard. Not because the insured person was immoral, exactly, but because the mere existence of the insurance policy changed the moral calculus. When someone else is picking up the tab for your catastrophe, the catastrophe starts looking a lot less catastrophic.1
This is one of those ideas that sounds like common sense once you hear it. Of course people behave differently when they're protected from consequences. Every parent knows this. Every teacher who has announced "this won't be on the test" and watched the classroom's attention evaporate knows this. But the mathematical structure underneath moral hazard turns out to be surprisingly rich, surprisingly tricky, and — as we learned in 2008 — surprisingly capable of blowing up the global economy.
Let's start with something simpler than global finance. Let's start with you, a house, and a box of matches.
The Invisible Effort Problem
Suppose you own a house worth $300,000. There's some probability it burns down — let's call it p. But here's the thing: p isn't fixed. It depends on how careful you are. If you're vigilant — checking your wiring, cleaning the dryer vent, not leaving candles unattended while you binge Netflix — maybe p is 0.1%. If you're careless, it might be 1%.
Being careful costs you something. Not money, necessarily, but effort, attention, the mild daily irritation of actually getting up to blow out that candle before bed. Let's call that cost c.
Without insurance, the math is straightforward. Your expected loss from being careless is 1% × $300,000 = $3,000 per year. If the effort cost of being careful is less than the $2,700 you'd save (the difference between the careless and careful expected losses), you'll be careful. The market works. Incentives align. Everyone's happy.
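If you prefer the decision rule as code, here's a minimal sketch using the numbers above. The `coverage` parameter is my own addition, anticipating the next step; everything else is just the arithmetic we just did.

```python
# Decision rule for the homeowner: be careful iff the effort cost
# is less than the expected loss that carefulness avoids.
HOUSE_VALUE = 300_000
P_CAREFUL = 0.001    # 0.1% annual fire probability when vigilant
P_CARELESS = 0.01    # 1% when careless

def chooses_care(effort_cost, coverage=0.0):
    """True if being careful is worth it. `coverage` is the fraction
    of the loss the insurer absorbs (0 = uninsured, 1 = fully insured)."""
    loss_borne = HOUSE_VALUE * (1 - coverage)
    saving = (P_CARELESS - P_CAREFUL) * loss_borne   # expected loss avoided
    return effort_cost < saving

print(chooses_care(1_000))                 # True:  $1,000 < $2,700
print(chooses_care(1_000, coverage=1.0))   # False: full insurance kills the incentive
```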
Now add insurance. You pay a premium, and if the house burns down, the insurer pays to rebuild it. Suddenly your personal expected loss from a fire drops toward zero. And that $2,700 incentive to be careful? It evaporates like morning dew.
This is the core of moral hazard: insurance, by its very nature, reduces the cost of bad outcomes to the insured. And reduced costs mean reduced incentives to prevent those outcomes. The insurer can't watch you blow out the candles every night. They can't observe your effort. All they can observe is whether or not the house burned down — and by then, it's too late.
Economists call this a principal-agent problem. The principal (the insurer) wants the agent (you) to be careful. The agent would prefer to be careful — all else equal, nobody wants their house to burn down — but effort is costly, and when insurance absorbs the financial consequences, "all else" isn't equal anymore.
Kenneth Arrow formalized this in his landmark 1963 paper on health economics.2 He pointed out that if health insurance covers everything at zero cost to the patient, patients will consume more healthcare than is socially optimal. Not because they're greedy — but because the marginal cost to them is zero, so they keep consuming until the marginal benefit is also zero. The invisible hand just got a case of the shakes.
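Here's a toy version of Arrow's argument, with invented numbers. The only thing that matters is the shape: consumption stops where marginal benefit falls to the price the patient actually faces.

```python
# Toy version of Arrow's overconsumption argument. The patient buys
# another unit of care whenever its marginal benefit beats the price
# *they* face, regardless of what the care costs to provide.
marginal_benefit = [100, 60, 30, 10, 2]   # value of each successive unit
TRUE_COST = 40                            # social (resource) cost per unit

def units_consumed(price_faced):
    return sum(1 for mb in marginal_benefit if mb > price_faced)

print(units_consumed(TRUE_COST))  # 2: the socially optimal amount
print(units_consumed(0))          # 5: full coverage, consume until MB hits zero
```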
The Contract Designer's Dilemma
So what do you do about it? You can't just eliminate insurance — the whole point of insurance is that risk-averse people are willing to pay a premium to avoid catastrophic losses. Insurance makes people better off. That's not a bug, it's the feature.
The trick is to design contracts that share the risk rather than transferring all of it. This is where deductibles and co-pays come from. They aren't just ways for insurers to be stingy — they're mechanisms to keep you in the game, to make sure you still have some skin in the outcome.
The Insured's Cost After a Loss
Cost = min(Loss, D) + α · max(Loss − D, 0)
- D — Deductible: you pay the first D dollars
- α — Co-pay rate: your share of costs above the deductible (0 to 1)
At α = 0 with low D, you're barely exposed. At α = 1, you have no insurance at all.
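In code, that formula is a one-liner. A quick sketch, reusing the names from the box above:

```python
def insured_cost(loss, deductible, copay):
    """Out-of-pocket cost: the first `deductible` dollars of the loss,
    plus a `copay` share of everything above it."""
    return min(loss, deductible) + copay * max(loss - deductible, 0)

# A $10,000 loss with a $1,000 deductible and a 20% co-pay:
print(insured_cost(10_000, 1_000, 0.20))  # 1000 + 0.2 * 9000 = 2800.0
print(insured_cost(10_000, 1_000, 0.0))   # 1000: just the deductible
print(insured_cost(10_000, 1_000, 1.0))   # 10000: no insurance at all
```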
The optimal contract sits somewhere in between. Enough cost-sharing to keep the agent's incentives alive, but not so much that the insurance becomes pointless. It's a mathematical tightrope, and the solution depends on things like how risk-averse the agent is, how observable effort is, and how sensitive the loss probability is to effort.3
Try it yourself:
Insurance Contract Designer
Design an insurance contract and watch how it changes behavior. Find the sweet spot between protection and moral hazard.
Notice the shape of that welfare curve. Too little cost-sharing and moral hazard eats you alive — the insured gets reckless, losses spike, and the insurer either goes broke or charges enormous premiums. Too much cost-sharing and you've defeated the purpose of insurance — the insured bears most of the risk and might as well self-insure. The optimum is always in the messy middle.
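If you're curious where a curve like that comes from, here's a stylized model in the same spirit as the widget, though with parameters I've chosen purely for illustration: binary effort, log utility (so the insured is risk-averse), an actuarially fair premium, and a co-pay rate α with no deductible.

```python
import math

WEALTH, LOSS = 100_000, 60_000
P_CAREFUL, P_CARELESS, EFFORT_COST = 0.01, 0.05, 300

def welfare(alpha):
    """Insured's expected log utility under co-pay rate alpha,
    with an actuarially fair premium for the insurer's share."""
    exposure = alpha * LOSS                      # insured's share of a loss
    careful = EFFORT_COST < (P_CARELESS - P_CAREFUL) * exposure
    p = P_CAREFUL if careful else P_CARELESS
    premium = p * (1 - alpha) * LOSS
    effort = EFFORT_COST if careful else 0.0
    return (p * math.log(WEALTH - premium - exposure - effort)
            + (1 - p) * math.log(WEALTH - premium - effort))

for alpha in (0.0, 0.1, 0.2, 0.4, 0.7, 1.0):
    print(f"alpha={alpha:.1f}  welfare={welfare(alpha):.5f}")
```

With these particular numbers, welfare peaks near α = 0.2: just enough exposure to make carefulness pay, and no more.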
When Banks Play With Other People's Money
Everything we've said about houses and candles gets approximately a thousand times more dangerous when you scale it up to the financial system. And in 2008, we got to watch exactly how dangerous in real time.
Here's the setup. A bank takes deposits and makes loans. Some of those loans are risky — they might not get paid back. The riskier the loans, the higher the interest rate the bank can charge, and the higher the potential profit. But also the higher the chance the bank goes bust.
Without any safety net, banks have a natural incentive to be at least somewhat careful. If you make too many bad loans, you fail, your shareholders lose everything, and your career as a banker is over. The threat of ruin concentrates the mind wonderfully.
But now add two things that actually exist in the real world:
First: deposit insurance. After the bank runs of the 1930s, the U.S. created the FDIC to guarantee deposits up to a certain amount.4 This was brilliant — it stopped bank runs cold. If your deposits are guaranteed by the federal government, there's no reason to line up at the bank at 6 AM in a panic. But it also meant depositors stopped caring whether their bank was making reckless bets. Why would they? Their money was safe regardless.
Second: "too big to fail." When a bank is so large and so interconnected that its failure would drag down the entire financial system, the government faces an agonizing choice: let it fail and risk economic catastrophe, or bail it out and send the message that size equals safety. In practice, governments almost always choose the bailout.5
The combination is toxic. Depositors don't monitor the bank because the FDIC protects them. The bank's management knows that if things go well, they keep the profits, and if things go catastrophically, the taxpayer picks up the pieces. This is what economists call a one-way bet: heads I win, tails you lose.6
The banks didn't fail despite being protected. In a real sense, they failed because they were protected.
And the math bears this out. If a bank can borrow cheaply (because depositors don't worry about risk), invest in high-risk assets, and keep the upside while socializing the downside, the rational strategy is to take as much risk as possible. This isn't a moral failure — it's a mathematical inevitability. The incentive structure demands it.
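A back-of-the-envelope version, with made-up numbers: the same bet flips from negative to positive expected value the moment the downside is socialized.

```python
def expected_payoff(p_fail, gain, equity, bailout):
    """Bank's expected payoff from one risky bet. Without a bailout,
    failure wipes out the bank's equity; with one, the downside is
    socialized and the bank's own loss is roughly zero."""
    downside = 0.0 if bailout else -equity
    return (1 - p_fail) * gain + p_fail * downside

# A reckless bet: 40% failure chance, big upside.
print(expected_payoff(0.40, gain=50, equity=100, bailout=False))  # -10.0: don't take it
print(expected_payoff(0.40, gain=50, equity=100, bailout=True))   #  30.0: take it
```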
In the simulator below, you can watch this play out. Run a banking system with and without a bailout guarantee, and see how behavior shifts.
Too Big to Fail Simulator
Each bank chooses a risk level (1–10). Higher risk = higher potential profit but higher chance of failure. Run multiple rounds and compare behavior with and without bailout guarantees.
If you toggled between the two modes, you probably noticed the pattern: without bailouts, banks cluster around moderate risk levels (3–5). With bailouts, they drift toward the extremes (7–10). The guarantee doesn't just change outcomes — it changes choices. And those choices, aggregated across the system, produce the very catastrophe that the guarantee was supposed to prevent.
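For the curious, here's a minimal sketch of the logic behind that pattern. The payoff and failure curves are assumptions chosen so the arithmetic comes out cleanly, not the simulator's actual parameters.

```python
def best_risk(bailout, equity=100.0):
    """The risk level (1-10) that maximizes a bank's expected payoff."""
    def expected(r):
        p_fail = 0.05 * r        # risk 1 -> 5% failure, risk 10 -> 50%
        profit = 10.0 * r        # upside grows linearly with risk
        downside = 0.0 if bailout else -equity
        return (1 - p_fail) * profit + p_fail * downside
    return max(range(1, 11), key=expected)

print(best_risk(bailout=False))  # 5: moderate risk, ruin is a real threat
print(best_risk(bailout=True))   # 10: maximum risk, downside socialized
```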
The Price of Safety
Here's where it gets philosophically interesting, and where a lot of political arguments about moral hazard go wrong. People sometimes talk about moral hazard as if it's a problem to be solved — as if we could find some clever contractual arrangement that gives us all the benefits of insurance with none of the incentive distortion.
We can't. Moral hazard is the price of insurance. It is not a market failure to be corrected; it is a fundamental tradeoff to be managed.
The Fundamental Tradeoff
Full insurance eliminates risk but destroys incentives. No insurance preserves incentives but exposes people to catastrophic loss. Every real insurance contract is a compromise between these two impossibilities.
Think about deposit insurance again. Yes, it creates moral hazard. Banks take more risks when deposits are guaranteed. But the alternative — a world without deposit insurance — is a world of bank runs, panics, and depressions. The bank runs of the 1930s wiped out the savings of millions of Americans. The FDIC stopped that. The moral hazard it introduced is real, but it is vastly less costly than the panic it prevents.7
The same logic applies everywhere moral hazard appears. Seatbelts make any given crash less dangerous to the driver, so people drive slightly more aggressively — the Peltzman effect, named after economist Sam Peltzman, who documented it in 1975.8 But the net effect of seatbelts is still overwhelmingly positive. People drive a little riskier, but they survive crashes they otherwise wouldn't have. The moral hazard exists; it just doesn't outweigh the direct benefit.
Warranties? Sure, some people are rougher on their laptops because AppleCare will replace the screen. But most people would rather have a working laptop than a new screen, so the moral hazard is bounded. Employment contracts? Yes, strong labor protections can reduce effort at the margin. But they also let workers invest in firm-specific skills without fearing arbitrary termination. The tradeoff, again, usually favors the protection.
The Real Lesson
The moral hazard framework doesn't tell you "insurance is bad" or "bailouts are bad." It tells you that every form of protection comes with an incentive cost, and the job of the contract designer — or the policymaker — is to find the combination of coverage and cost-sharing that maximizes total welfare.
Sometimes that means accepting quite a lot of moral hazard (deposit insurance, seatbelts). Sometimes it means imposing significant cost-sharing (health insurance deductibles, bank capital requirements). And sometimes, as in the case of too-big-to-fail banks, it means redesigning the system so the guarantee is less necessary — breaking up the banks, increasing capital buffers, creating resolution mechanisms that let big institutions fail without taking the economy with them.
The 17th-century insurers who coined "moral hazard" understood the problem. Three and a half centuries later, we're still working on the solution. We always will be. And that's okay — because the alternative to managing moral hazard is either a world without insurance or a world where everyone pretends incentives don't exist.
Neither of those worlds is one where anyone would want to live.