Two Brochures, One Surgery
Imagine you're a public health official. A deadly disease is bearing down on your city and will kill 600 people if nothing is done. Your scientists have developed two programs. Now choose.
This is the setup of what may be the most important psychology experiment of the twentieth century—one that revealed not a quirk of human reasoning but a structural crack running through the foundation of rational choice itself.
In 1981, Amos Tversky and Daniel Kahneman presented subjects with exactly this scenario.1 One group saw it framed in terms of lives saved:
Program A: 200 people will be saved.
Program B: There is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no one will be saved.
72% of respondents chose Program A.
Makes sense, right? A bird in the hand. You know 200 people will survive. Why gamble?
Now here's where it gets uncomfortable. A different group of subjects saw the same scenario, but framed in terms of lives lost:
Program C: 400 people will die.
Program D: There is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.
78% of respondents chose Program D.
Now we want to roll the dice. 400 certain deaths? That's unacceptable. Let's take our chances!
But here's the thing that should keep you up at night: Program A and Program C are identical. If 200 of the 600 are saved, 400 die. And Program B and Program D are identical. A 1/3 chance of saving everyone is a 1/3 chance of nobody dying.
Same math. Same outcomes. Same probabilities. Completely opposite preferences. The only thing that changed was the frame—whether we talked about saving or dying, gains or losses, the glass being a third full or two-thirds empty.
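You don't have to take my word for the equivalence. Here is a minimal sketch that writes each program as a probability distribution over deaths (my own encoding, not from the original paper) and checks that the two frames describe the same outcomes:

```python
# Each program as a list of (probability, deaths) pairs, out of 600 people.
programs = {
    "A (200 saved)":       [(1.0, 400)],
    "B (1/3 save all)":    [(1/3, 0), (2/3, 600)],
    "C (400 die)":         [(1.0, 400)],
    "D (1/3 nobody dies)": [(1/3, 0), (2/3, 600)],
}

def expected_deaths(dist):
    """Expected number of deaths for a distribution of (probability, deaths)."""
    return sum(p * d for p, d in dist)

for name, dist in programs.items():
    print(f"{name}: expected deaths = {expected_deaths(dist):.0f}")

# A and C are literally the same distribution; so are B and D.
assert programs["A (200 saved)"] == programs["C (400 die)"]
assert programs["B (1/3 save all)"] == programs["D (1/3 nobody dies)"]
```

All four programs have an expected toll of 400 deaths; the "sure thing" pair and the "gamble" pair differ only in variance, never in the frame.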
This isn't a party trick. This is a fundamental feature of human cognition, and it means that the very concept of "what you prefer" is, in a precise mathematical sense, less stable than you think.
Your Turn in the Frame
Before we go any further into the theory, let's find out how susceptible you are to framing effects. The experiment below will present you with a series of choices. Just go with your gut—there are no wrong answers. (Well, actually, there kind of are. But we'll get to that.)
🧪 The Framing Experiment
Answer each scenario instinctively. Don't overthink it.
The Shape of Feelings
So why does this happen? The answer is one of the great intellectual achievements of the late twentieth century: prospect theory, developed by Kahneman and Tversky in 1979.2 It replaced the classical notion that people evaluate outcomes in terms of final wealth levels with something far more psychologically realistic: people evaluate outcomes relative to a reference point.
This sounds innocuous. It is not.
Classical economics says that if you have $500,000 in total wealth, you should feel the same regardless of whether you started the day with $400,000 (and gained $100,000) or started with $600,000 (and lost $100,000). Prospect theory says: of course you don't. The person who gained is elated. The person who lost is miserable. Same final state, different reference points, totally different psychological reality.
The prospect theory value function: an S-curve centered on a reference point. Losses loom larger than equivalent gains.
The value function has three critical properties:
1. Reference dependence. Outcomes are coded as gains or losses relative to a reference point—usually the status quo, but it can be shifted by expectations, aspirations, or the way a question is framed. Change the reference point, change the entire evaluation.
2. Diminishing sensitivity. The difference between gaining $100 and gaining $200 feels bigger than the difference between gaining $1,100 and gaining $1,200. The value function is concave for gains (risk aversion) and convex for losses (risk seeking). This is why people in the domain of losses will gamble to avoid a sure loss—like those subjects who chose Program D.
3. Loss aversion. The curve is steeper on the loss side. Losing $100 hurts about twice as much as gaining $100 feels good.3 This asymmetry is perhaps the single most robust finding in behavioral economics. It explains why people won't accept a coin flip that pays $110 for heads and costs $100 for tails—even though the expected value is positive.
Mathematically, Kahneman and Tversky proposed a value function of the form

v(x) = x^α for gains (x ≥ 0)
v(x) = −λ(−x)^β for losses (x < 0)

with empirically estimated parameters α ≈ β ≈ 0.88 and λ ≈ 2.25.
That λ ≈ 2.25 is the mathematical signature of loss aversion. It says: whatever you feel when you gain something, you feel about 2.25 times more intensely when you lose the same amount. Your nervous system has an asymmetric alarm system, and the loss alarm is set to a higher volume.
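A sketch of that value function makes the asymmetry tangible. The parameter values below are the estimates Tversky and Kahneman published (α = β = 0.88, λ = 2.25); everything else is my own illustration:

```python
ALPHA = 0.88   # curvature for gains (diminishing sensitivity)
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss-aversion coefficient

def value(x):
    """Subjective value of an outcome x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

# Loss aversion: losing $100 hurts more than twice as much as gaining $100 feels good.
print(value(100), value(-100))  # roughly 57.5 and -129.5

# Diminishing sensitivity: the first $100 of gain matters more than the eleventh.
print(value(200) - value(100))    # bigger step...
print(value(1200) - value(1100))  # ...than this one

# The coin flip from the text: +$110 on heads, -$100 on tails.
# Expected dollar value is +$5, but the subjective value is negative,
# so a loss-averse agent refuses the bet.
print(0.5 * value(110) + 0.5 * value(-100))
```

Run it and the $110/$100 coin flip comes out well below zero in subjective value, which is exactly why most people decline a bet that any expected-value calculation says they should take.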
Feel the Curve
Theory is nice, but feeling it is better. The interactive below lets you explore the prospect theory value function directly. Plug in a gain and a loss, adjust the parameters, drag the reference point, and watch how the psychological "value" of money is profoundly asymmetric.
📈 Prospect Theory Visualizer
Explore how the value function transforms objective outcomes into subjective feelings.
Frames Are Everywhere
Once you see framing effects, you can't unsee them. They're not confined to hypothetical disease scenarios in psychology labs. They're in your supermarket, your doctor's office, and your country's laws.
The Grocery Store
Consider ground beef labeled "90% fat-free" versus "10% fat." These are the same product—the same cow, if you like—described two ways. But studies show that consumers rate the "90% fat-free" version as leaner, healthier, and better tasting.4 Better tasting. The frame changed not just their beliefs about the beef but their subjective sensory experience of eating it. If that doesn't make you question the reliability of restaurant reviews, nothing will.
The Operating Room
The medical framing literature is genuinely frightening. When told a surgery has a "90% survival rate," patients are significantly more likely to consent than when told it has a "10% mortality rate."5 Same knife, same odds, different words, different decision. And it's not just patients—surgeons themselves are influenced by how the statistics are framed. The people who are supposed to be the dispassionate experts, the ones who do this every day, are swayed by whether you say "lives" or "deaths."
Same surgery, same odds. The frame changes the decision.
The Organ Donor Default
Here is perhaps the most consequential framing effect in public policy. In countries where organ donation is the default (you have to opt out), donation rates are above 90%. In countries where you have to opt in, rates are typically below 20%.6 The mathematical content of the choice is identical: check a box or don't. But the default acts as a reference point—deviating from it feels like a loss, and loss aversion keeps you where you are.
This isn't hypothetical. It's the difference between thousands of people receiving transplants or dying on waiting lists. The frame—opt-in versus opt-out, the choice of what counts as "doing nothing"—has a body count.
Data from Johnson & Goldstein (2003). The default option—the "frame" for inaction—dominates the decision.
Nudges, Ethics, and the Mathematician's Dilemma
If frames are this powerful, should we use them on purpose?
Richard Thaler and Cass Sunstein argue yes, in their influential book Nudge.7 Their idea—"libertarian paternalism"—holds that since someone has to choose the default, the arrangement of the cafeteria, the order of options on the form, it might as well be arranged to help people make better choices. Put the salad before the french fries. Make retirement savings opt-out. Frame the healthy choice as the easy choice.
The mathematical point is subtle: there is no "neutral" frame. Every presentation of a choice involves a reference point, a default, a frame. Even the decision to present information as "90% survival" versus "10% mortality" is a choice. You cannot present a number without a frame any more than you can say a sentence without a language. The question is never whether to frame, but how.
But this is also where the ethical ground gets shaky. If framing can nudge people toward saving for retirement, it can also nudge them toward buying overpriced insurance, voting for a particular candidate, or accepting a bad deal. The difference between a "nudge" and "manipulation" is not mathematically sharp. It depends on who's doing the framing, for whose benefit, and with what degree of transparency.
Consider: a car dealer who says "you'll save $2,000 with this package" versus "you'll overpay by $2,000 without it." A political ad that says "97% of scientists agree" versus "3% of scientists dissent." A credit card company that frames a minimum payment as a helpful suggestion rather than a debt trap. Same math, every time. But the framing does real work in the world.
The Debiasing Problem
Can we protect ourselves? The evidence is somewhat discouraging. Simply knowing about framing effects does not make you immune to them.8 Kahneman himself, who spent his career studying these biases, admitted he was still susceptible to them. "My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues," he wrote.
But mathematics offers a partial shield. The discipline of translating a problem into numbers—calculating expected values, writing down the actual probabilities, converting between frames—forces you to see that "200 saved" and "400 die" are the same point in outcome space. The math doesn't care about the frame. It can't. The number 200 doesn't know whether it's counting the living or the dead.
This, I think, is one of the deepest reasons to learn mathematics, and one that doesn't appear in any curriculum guide. Math is not just a tool for solving problems. It is a tool for seeing through frames, for stripping away the emotional clothing that dresses up a choice and revealing the naked structure underneath. You may still feel differently about "90% survival" and "10% mortality." But if you can do the arithmetic, you at least know that the feeling is the frame talking, not the world.
When you suspect you're being framed, do the math. Convert between frames. If "200 saved" and "400 die" are the same thing, write them both down and see which one you were reacting to. The frame you don't see is the one that controls you.
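That habit of converting between frames can be made mechanical. A tiny sketch (the function names are my own, purely illustrative):

```python
def both_frames(saved, total):
    """Restate a 'lives saved' outcome in both the gain and loss frame."""
    return f"{saved} of {total} saved", f"{total - saved} of {total} die"

def survival_and_mortality(survival_rate):
    """Pair a survival rate with its mortality-rate twin."""
    return survival_rate, round(1 - survival_rate, 10)

print(both_frames(200, 600))          # ('200 of 600 saved', '400 of 600 die')
print(survival_and_mortality(0.90))   # (0.9, 0.1)
```

The point is not the code, which is trivial, but the discipline: seeing both descriptions side by side makes it obvious that your reaction belongs to the wording, not the outcome.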
Tversky once said that the study of framing effects made him pessimistic about human rationality but optimistic about human science. We may never be perfectly rational—the value function's asymmetry is probably wired into our neurons by millions of years of evolution, where the cost of missing a predator was always higher than the cost of missing a meal. But we can build frameworks that account for our irrationality, systems that present choices more fairly, and mathematical habits that catch the frame before it catches us.
The same surgery, two brochures. The mathematics says they're identical. Your gut says they're not. And the interesting question—the one that prospect theory opens up and that we're still far from closing—is what to do when your gut and your math disagree.
I'd suggest listening to the math. But then, I'm a mathematician. That's my frame.