Marina's Apartment Hunt
In the spring of 2009, a twenty-six-year-old software engineer named Marina moved to San Francisco with a job offer, two suitcases, and exactly three weeks to find an apartment. If you've never apartment-hunted in San Francisco, imagine speed-dating at gunpoint. Listings appear on Craigslist at 8 a.m. and vanish by noon. You show up to an open house with forty other desperate applicants clutching their credit reports like sacred texts. You see the place — its weird alcove kitchen, its proximity to BART, its roommate who "mostly works nights" — and you have maybe twenty minutes to decide. Say yes and hand over a deposit, or walk away and never see it again.
Marina saw fourteen apartments in eleven days. The third one was lovely: hardwood floors, quiet street, actual closet. But it was only day three. Surely something better was coming. She passed. Apartments four through nine were progressively worse — a converted garage, a unit that smelled faintly of cat, a place where the "bedroom" was separated from the living room by a curtain. By apartment ten, she started to panic. Apartment twelve was decent, not as good as number three, but she grabbed it. She spent the next two years wondering about the one that got away.
Marina's dilemma feels personal. It is also one of the most thoroughly studied problems in the history of mathematics.
It's been called the secretary problem, the sultan's dowry problem, the fussy suitor problem, and about a dozen other names, each reflecting the era and anxieties of whoever was studying it. The core structure is always the same: you see options one at a time, you can't go back, and you want the best one. How do you decide when to stop looking and start choosing?
Mathematicians cracked this in the early 1960s, and the answer is one of those results that feels like it was handed down from some elegant alien civilization.1 It involves no complicated formulas, no advanced prerequisites — just one number that falls out of the mathematics like a gift: 37%.
The Game You Can't Win (But Can Play Optimally)
Let's make this concrete. You're hiring a secretary — this is the 1960s, so bear with the terminology. You have n candidates. They arrive one at a time in random order. After each interview, you must either hire that person on the spot or reject them forever. You can rank any candidate against the ones you've already seen, but you have no idea what's coming next. Your goal: hire the single best candidate out of all n.
Here's the brutal part. You can't hedge. You can't say "let me think about it." You can't call back candidate #3 after seeing candidate #7. Every decision is final. Accept or reject. Now.
If you just pick at random — hire the first person, or the last, or flip a coin halfway through — you'll get the best candidate 1/n of the time. With 100 candidates, that's a 1% chance. Dismal.
But there's a strategy that does dramatically better.
The 37% Rule
Here is the optimal strategy: Look at the first 37% of candidates. Reject them all, no matter how good they are. Then, hire the next candidate who is better than every candidate you've seen so far.
That's it. No complicated scoring system, no weighted averages, no Bayesian updating. Just set a threshold by exploring, then commit to the first thing that exceeds it.
With 10 candidates, you'd reject the first 3 or 4, then pull the trigger on the next one that's the best-so-far. With 100 candidates, reject the first 37 and then pounce. With Marina's 14 apartments, she should have looked at the first 5 without committing, then grabbed the next one better than anything she'd seen. (Apartment number three was only her third — she hadn't explored enough to calibrate.)
And the probability of landing the single best candidate with this strategy? It converges to 1/e — roughly 36.8%. This is astonishingly good. Remember, random guessing gives you 1/n, which shrinks to zero as n grows. The 37% rule gives you a floor of about 37% regardless of how large the pool is.2
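The claim is easy to check empirically. Here's a minimal Monte Carlo sketch (the function names are mine, not from any standard library):

```python
import random

def cutoff_strategy_wins(n, r):
    """One round of the secretary problem with cutoff r.
    Candidates arrive as a random permutation of 0..n-1 (higher is
    better). Reject the first r outright, then hire the first candidate
    who beats everything seen so far. Returns True if the hire is the
    overall best candidate."""
    order = list(range(n))
    random.shuffle(order)
    bar = max(order[:r])  # calibrated during the exploration phase
    for value in order[r:]:
        if value > bar:
            return value == n - 1
    return False  # the best was in the exploration phase: a guaranteed loss

def success_rate(n, r, trials=50_000):
    return sum(cutoff_strategy_wins(n, r) for _ in range(trials)) / trials

random.seed(42)
print(success_rate(100, 37))  # hovers around 0.37, i.e. about 1/e
```

Drop the cutoff to 5 or raise it to 80 and the success rate falls well below 1/e, which is the whole point: roughly 37% exploration is the sweet spot.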
Why 1/e?
Let's see why this magical number falls out. Suppose you have n candidates and your strategy is: reject the first r candidates, then hire the next one who's the best so far.
For the overall best candidate (let's call her candidate B) to be hired, two things must happen:
- Candidate B must appear after position r (otherwise you automatically reject her in the exploration phase).
- None of the candidates between position r+1 and B's position can be a "best-so-far" — because if one were, you'd hire that person before ever reaching B. This means the best of the first B−1 candidates must be among the first r candidates.
Given that B sits at position i, the chance that the best of the first i−1 candidates falls within the first r is r/(i−1). Since B is equally likely to land in any position, the overall success probability is P(r) = (r/n) × (1/r + 1/(r+1) + … + 1/(n−1)), which for large n is approximately (r/n) ln(n/r). Write x = r/n and you're maximizing x ln(1/x); calculus puts the maximum at x = 1/e, where the value of the expression is also 1/e. So the optimal cutoff is to reject the first 1/e ≈ 36.8% of candidates, and the resulting success probability is 1/e ≈ 36.8%. The fraction you reject and the probability of success are the same number. That's the kind of coincidence that makes mathematicians smile for the rest of the day.3
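For finite pools, the success probability of each cutoff can be evaluated exactly, which pins down the best cutoff for any n. A short sketch (the function name is mine):

```python
import math

def win_probability(n, r):
    """Exact chance of hiring the best with cutoff r (1 <= r < n):
    (r/n) * sum over positions i = r+1..n of 1/(i-1)."""
    return (r / n) * sum(1.0 / (i - 1) for i in range(r + 1, n + 1))

n = 100
best_r = max(range(1, n), key=lambda r: win_probability(n, r))
# best_r lands right next to n/e ≈ 36.8, and the optimal win
# probability sits just above 1/e ≈ 0.368 for finite n.
print(best_r, round(win_probability(n, best_r), 4))
```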
The Explore-Exploit Tradeoff
The secretary problem is a pure, crystallized version of something you face constantly: the tension between exploring your options and committing to one. Computer scientists call this the explore-exploit tradeoff, and it shows up everywhere that decisions are sequential and information is incomplete.
Explore too much and you waste your opportunities — Marina's apartment number three went to someone else while she was still "gathering data." Exploit too soon and you commit to mediocrity because you haven't seen what's out there — like marrying your high school sweetheart without ever leaving your hometown. (This is not relationship advice. This is mathematics. These are different things.)
The 37% rule is nature's answer to "how much exploration is enough?" And the answer is surprisingly generous.
You should spend more than a third of your time just looking. Not committing, not agonizing, just calibrating your sense of what's out there. Then, once you've calibrated, act decisively on the first thing that clears your bar.
Where the Math Meets the Mess
The secretary problem is beautiful. It is also, in its pure form, almost comically unrealistic. Real life violates its assumptions in every direction, and understanding how it breaks is just as instructive as the rule itself.
You can sometimes go back
The original problem assumes irrevocable rejection, but in practice, callbacks exist. You can sometimes call apartment #3 and ask if it's still available. When there's even a small chance of recalling past options, the optimal strategy shifts: you can afford to explore longer, because the penalty for passing on a great option is softer. If a recalled candidate says yes about half the time, the optimal approach is to keep looking noncommittally until roughly 61% of the way through the pool.4
Information leaks in
The classic problem assumes you know nothing about the distribution of quality. But you usually do. If you're hiring a software engineer, you have a rough sense of what "great" looks like before seeing a single resume. In the version where you know the distribution, the problem changes from relative ranking to threshold rules — you can set a quality bar in advance and accept the first candidate who clears it.
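The full-information version can be made concrete with backward induction. A minimal sketch, under my own simplifying assumptions: qualities are independent draws from a known Uniform(0, 1) distribution, and you must take the last candidate if you reach her:

```python
def reservation_values(n):
    """v[k] = expected quality you can lock in when k candidates remain
    after the current one, playing optimally. v[0] = 0 (no one left:
    you keep whoever you're forced to take), and
    v[k] = E[max(X, v[k-1])] = (1 + v[k-1]**2) / 2 for X ~ Uniform(0,1).
    Optimal policy: accept the current candidate's quality x
    iff x > v[k]."""
    v = [0.0]
    for _ in range(n - 1):
        v.append((1 + v[-1] ** 2) / 2)
    return v

bars = reservation_values(5)
# The bar rises with the number of candidates still to come: with more
# of the pool ahead, you can afford to be pickier.
```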
The pool size is unknown
Marina didn't know she'd see exactly 14 apartments. When you don't know n, the problem gets harder, but the 1/e rule remains surprisingly robust.5
You might not want the single best
The classic problem optimizes for getting the best candidate and treats everything else as failure. When you optimize for the expected quality of your pick rather than the all-or-nothing chance of the very best, the optimal strategy changes: you should explore less, with a cutoff near √n candidates instead of n/e.6
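A quick simulation illustrates the gap. Here the payoff is the rank of the hire (1 is best), and the two cutoffs compared are √n and n/e; the setup is my own illustration, not the construction in the cited paper:

```python
import math
import random

def hired_rank(n, r):
    """Run the cutoff strategy once; return the rank of the hire
    (1 = best, n = worst). If nobody beats the exploration-phase bar,
    you're stuck with the last candidate."""
    ranks = list(range(1, n + 1))
    random.shuffle(ranks)
    bar = min(ranks[:r])  # lowest rank number = best seen so far
    for rank in ranks[r:]:
        if rank < bar:
            return rank
    return ranks[-1]

def mean_rank(n, r, trials=20_000):
    return sum(hired_rank(n, r) for _ in range(trials)) / trials

random.seed(0)
n = 100
shallow = mean_rank(n, round(math.sqrt(n)))  # cutoff 10
deep = mean_rank(n, round(n / math.e))       # cutoff 37
# The shallow cutoff delivers a much better average hire, even though
# the deep cutoff wins the all-or-nothing game more often.
```

The intuition: a long exploration phase leaves you stuck with the last (random) candidate 37% of the time, which is catastrophic for average quality even though it maximizes the chance of perfection.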
The Dating Problem
Of course mathematicians applied this to romance. With the kind of enthusiasm that only people who solve equations for fun can muster.
If you're going to date, say, 20 serious partners between age 18 and 40, the 37% rule says: date freely and without commitment through your first 7 or so relationships — roughly until age 26. Then get serious about the next person who's better than everyone you've dated before.
The cognitive scientist Peter Todd actually tested this with simulations in the late 1990s and found that the 37% rule performed remarkably well in more realistic settings.7
This is a toy model. Humans are not interchangeable secretary candidates with fixed quality scores. Relationships are bidirectional (the other person has to choose you too). People change over time. Love is not a ranking problem. But the qualitative insight survives: there's real value in a period of deliberate, commitment-free exploration before you start evaluating partners as potential long-term matches.
Parking, and Other Wars of Attrition
The secretary problem has a spatial cousin that you've probably encountered without knowing it: the parking problem. You're driving down a street toward your destination. Parking spots appear one by one. The spots closest to your destination are the most desirable, but they're also the most likely to be taken. If you grab a spot too early, you're in for a long walk. If you drive past everything hoping for a closer spot, you might overshoot entirely and end up circling the block.
Mathematicians have modeled this too, and the structure is eerily similar to the secretary problem.8 The optimal strategy involves a threshold: drive past a certain number of spots to calibrate what's available, then grab the first acceptable one. The twist is that "acceptable" here factors in distance — you're optimizing a cost function (walking distance) rather than a binary win/lose. But the qualitative lesson is identical. You explore, you set a bar, and then you commit.
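One toy formalization (the model and parameters are mine, not from the literature the text cites): spots line the street at distances n, n−1, …, 1 from your destination, each taken independently with probability p. Your policy is a threshold: ignore everything farther than d, grab the first free spot once within d, and pay a fixed penalty for circling the block if every eligible spot is full:

```python
import random

def expected_cost(p, n, d, circle_penalty, trials=20_000):
    """Average cost of the threshold policy 'start grabbing once within
    distance d'. Cost is the walking distance of the spot you take,
    or circle_penalty if every spot within distance d is occupied."""
    total = 0.0
    for _ in range(trials):
        cost = circle_penalty  # pay this unless a free spot turns up
        for dist in range(min(d, n), 0, -1):
            if random.random() > p:  # this spot is free
                cost = dist
                break
        total += cost
    return total / trials

random.seed(1)
# Grabbing early (d=60) means long walks; holding out (d=5) risks circling.
costs = {d: expected_cost(0.9, 100, d, circle_penalty=50) for d in (5, 20, 60)}
```

Sweeping d traces out the same U-shaped tradeoff as the secretary problem: a middling threshold beats both the over-eager and the over-greedy extremes.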
The Courage to Stop
Here's what the secretary problem really teaches you, and it's not the formula.
It teaches you that there is a mathematically provable cost to perpetual browsing. The person who says "I'm still exploring my options" at the 80% mark is not being thorough — they're being irrational. The math is unambiguous: past a certain point, more information doesn't help. It hurts.
It also teaches you that commitment under uncertainty is not recklessness — it's strategy. When you commit to the first thing that clears your calibrated bar, you're not settling. You're executing the optimal policy. You've earned the right to commit because you spent the first third of your search calibrating your judgment.
Even the optimal strategy only wins 37% of the time. The best possible approach, the one that literally cannot be improved upon, fails almost two-thirds of the time. If you're beating yourself up for not finding the perfect apartment, the perfect job, the perfect partner — the universe's own algorithm doesn't find the perfect answer either. It finds the best process and makes peace with the outcome.
Marina, by the way, eventually grew to love her apartment. The roommate who "mostly works nights" turned out to be a jazz musician, and the kitchen, while tiny, had a window that caught the afternoon light. It wasn't the optimal choice. But she stopped looking, and she started living — and that, in the end, is what the math says you should do.