The Missing Chapter

The Algebra of Connections

Why a fax machine is worthless until someone else buys one — and what that tells us about everything.

An extension of Jordan Ellenberg's "How Not to Be Wrong"

Chapter 1

The Machine Nobody Wanted

In 1843, a Scottish inventor named Alexander Bain patented a device that could transmit an image across a wire. He called it a "recording telegraph." It could scan a flat surface — letter by letter, line by line — encode it as electrical signals, and reproduce it on the other end. It was, in every meaningful sense, the first fax machine.1

Nobody cared. The device sat in the patent office like a party invitation addressed to a world that hadn't been born yet. It wasn't that the technology didn't work. It was that a fax machine is useless if you're the only person who has one. You can't fax yourself. Well, you can, but that's just photocopying with extra steps.

This is the fundamental strangeness of networks: the product changes depending on how many other people are using it. A telephone in a world with one telephone is a paperweight. A telephone in a world with a billion telephones is a miracle. The object is the same. The math around it is not.

Most products lose value the more of them exist — that's basic supply and demand. But some products gain value with every additional user. This inversion is so counterintuitive, so contrary to the way we normally think about markets, that economists spent a century trying to pretend it didn't exist.

The story of network effects is a story about a simple mathematical observation that turned out to explain monopolies, languages, VHS tapes, social media empires, and — if you squint — the formation of cities and civilizations themselves. It's also a story about what happens when you mistake a good formula for an exact one.

[Figure: 1 phone → 0 connections · 4 phones → 6 connections · 12 phones → 66 connections]

Triple the users, eleven-fold the connections. The value of a network grows much faster than the network itself.

Chapter 2

Metcalfe's Back-of-the-Napkin Law

Bob Metcalfe didn't set out to discover a law. In the 1980s, he was trying to sell Ethernet — the networking standard he'd co-invented at Xerox PARC — to skeptical corporate buyers. His problem was a bootstrapping nightmare: nobody wanted to buy a networking card for their computer because nobody else had one. Sound familiar?2

To convince purchasing managers, Metcalfe drew a simple graph on a slide. On the x-axis: the number of devices on the network. On the y-axis: the "value" of the network. His claim was bold and clean: the value of a network is proportional to the square of the number of connected users.

Metcalfe's Law
V ∝ n²
More precisely: V ∝ n(n − 1)/2, the number of possible pairwise connections

The reasoning is almost childishly simple. If you have n people on a network, each person can potentially connect with n − 1 others. That gives us n(n − 1) possible directed connections, or n(n − 1)/2 unique pairs. For large n, the minus-one doesn't matter, so the whole thing scales like n².

This is the kind of math that executives love: it's simple, it's dramatic, and it comes with a graph that goes up and to the right in a way that makes investors salivate. A network of 10 users has about 45 possible connections. A network of 100 users has 4,950. A network of 10,000 has nearly 50 million. You've increased the users by a factor of a thousand; you've increased the potential value by a factor of a million.
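The counting behind those figures is a one-line function. A quick sketch in Python (mine, not Metcalfe's slide):

```python
def possible_pairs(n: int) -> int:
    """Unique pairwise connections among n users: n(n - 1) / 2."""
    return n * (n - 1) // 2

# The quadratic blow-up from the text:
for n in (10, 100, 10_000):
    print(f"{n:>6} users -> {possible_pairs(n):>10,} possible pairs")
# 10 -> 45, 100 -> 4,950, 10,000 -> 49,995,000
```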

Metcalfe wasn't being naive — he was being persuasive. And the persuasion worked, not just on corporate buyers but on an entire generation of technology investors who treated quadratic growth as an article of mathematical faith. If your platform has network effects, the logic went, you should burn as much money as humanly possible to acquire users, because each new user adds more value to the network than what you spent to get them. This reasoning funded the dot-com boom, and then, when the bubble burst, it funded the recriminations.

"A network of one is worth nothing. A network of a million is worth a trillion. The only question is how fast you can get from one to the other."
— Silicon Valley conventional wisdom, circa 1999

But here's what Ellenberg would want us to notice, the thing that the breathless technology press almost always skips: Metcalfe's Law isn't really a law. It's a counting argument. It counts the number of possible connections and says the value is proportional to that number. But that assumes every connection is equally valuable, and that every possible connection will actually be used. Both assumptions are spectacularly wrong.

Chapter 3

The n² Problem

Think about your contacts list on your phone. You might have 500 people in there. According to Metcalfe, you're a node in a network and your contribution to its value is roughly proportional to those 500 connections. But how many of those 500 people did you talk to last month? Twenty? Thirty? How many did you talk to last year? Maybe a hundred?

The anthropologist Robin Dunbar spent his career studying primate social groups, and he arrived at a number that has become its own kind of internet meme: 150. That's roughly the maximum number of stable social relationships a human can maintain.3 You can have 5,000 Facebook friends, but Dunbar's number says you're only really connected to about 150 of them. The rest are acquaintances, names you half-remember, people whose wedding you attended in 2014 and haven't spoken to since.

This matters enormously for the math. If each person only maintains a fixed number of connections — call it k — then the total number of active connections in a network of n people isn't n²/2. It's kn/2. That's linear, not quadratic. And the difference between linear and quadratic growth is the difference between a nice business and a world-devouring monopoly.

[Chart: network value versus number of users (n) under three models: Sarnoff (broadcast, n), Odlyzko–Tilly (n·log n), and Metcalfe (all pairs, n²)]

Three models for how network value scales. The truth usually lies somewhere between Sarnoff's modest linearity and Metcalfe's explosive quadratic.

In 2005, Andrew Odlyzko and Benjamin Tilly published a paper arguing that Metcalfe's Law was responsible for inflating — and then destroying — trillions of dollars in telecom valuations during the dot-com bubble.4 Their alternative? Network value scales as n · log(n). The reasoning: if you rank your connections by how much you value them, the most important one is worth a lot, the second is worth a bit less, the third still less, and so on. The value of each additional connection follows something like Zipf's law — the kth connection is worth about 1/k as much as the first. Sum up 1/k from 1 to n, and you get roughly log(n). Multiply by the number of users, and you get n · log(n).
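That harmonic-sum step can be checked directly; a sketch, assuming the Zipf weighting Odlyzko and Tilly describe:

```python
import math

def zipf_value_per_user(n: int) -> float:
    """Value of one user's links when the k-th best is worth 1/k:
    the harmonic sum 1 + 1/2 + ... + 1/(n-1)."""
    return sum(1 / k for k in range(1, n))

n = 10_000
print(zipf_value_per_user(n))  # ~9.79
print(math.log(n))             # ~9.21: the harmonic sum tracks ln(n)
# Total network value: n users x roughly log(n) each, i.e. n * log(n).
```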

This is the mathematician's favorite move: taking an argument that sounds intuitive and poking at its assumptions until you find the one that's load-bearing. Metcalfe's Law assumes flat value per connection. Odlyzko and Tilly assume declining value. The empirical truth depends on the specific network — and that's exactly the kind of nuance that gets lost when formulas become slogans.

Chapter 4

Winner Takes All (Sometimes)

Here's where network effects become genuinely dangerous — not just intellectually but economically. If a network really does become more valuable as it gets bigger, then getting big first is everything. This creates a dynamic economists call tipping: at some critical mass of users, the network becomes self-sustaining. New users join because everyone's already there, which makes even more users join. It's a positive feedback loop, a snowball rolling downhill, and the result is that markets with strong network effects tend toward monopoly.5
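A toy simulation makes the tipping dynamic concrete. Everything here, the logistic response rule and the steepness parameter, is an illustrative assumption of mine, not an empirical model:

```python
import math

def simulate_adoption(initial_share: float, steepness: float = 25.0,
                      steps: int = 50) -> float:
    """Toy tipping model: each period, the adopter share responds
    logistically to the current share. The 50% fixed point is unstable;
    that is the critical mass."""
    share = initial_share
    for _ in range(steps):
        share = 1 / (1 + math.exp(-steepness * (share - 0.5)))
    return share

print(round(simulate_adoption(0.45), 4))  # just below critical mass: collapses toward 0
print(round(simulate_adoption(0.55), 4))  # just above: tips toward 1
```

Start the same network at 45% adoption and it dies; start it at 55% and it takes over. The snowball only needs to clear the crest of the hill.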

Consider the QWERTY keyboard. You learned to type on QWERTY because that's what all the keyboards had. All the keyboards had QWERTY because that's what everyone learned. Is QWERTY the best possible layout? Almost certainly not — the Dvorak layout is measurably faster for English typing. But QWERTY had the installed base, and the installed base is the network.6

In the 1970s, Sony's Betamax and JVC's VHS fought for the home video market. Betamax was technically superior — better resolution, less noise. VHS won because JVC licensed the format broadly, which meant more manufacturers, which meant more tapes in video stores, which meant more customers, which meant more tapes. By the time the dust settled, Beta was dead. The better technology lost to the more connected one. Network effects don't care about quality. They care about quantity.

This pattern repeats with eerie regularity. Microsoft Windows beat technically superior alternatives. Facebook beat MySpace (which had the early lead) by opening to college students in a controlled rollout that created dense, high-value subnetworks before expanding. WhatsApp dominates messaging in countries where it arrived first, while WeChat dominates in China and LINE in Japan — not because any of them is dramatically better, but because your messaging app is only as good as the number of people you know who are on it.

The economist Brian Arthur, working at the Santa Fe Institute in the 1980s, formalized this intuition with what he called increasing returns.7 Classical economics assumes diminishing returns: each additional unit of input produces less additional output. Arthur showed that in technology markets, the opposite often holds. Each additional user produces more value, creating a lock-in effect that can persist even when better alternatives exist. He was nearly drummed out of the profession for suggesting this. The invisible hand of the market, it turned out, could get stuck.

Chapter 5

Try It Yourself

The best way to feel network effects in your bones is to watch them happen. The figures below show how different growth models value a network as it scales, and how the gap between Metcalfe's optimistic curve and the more conservative models widens into a chasm.

Network Value Explorer

How different models value the same network as it grows. At n = 50 users:

  Metcalfe (n²): 1,225 possible connections
  Odlyzko–Tilly (n·log n): ≈196
  Sarnoff (linear): 50
  Metcalfe / Odlyzko ratio: ≈6.3×
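The explorer's numbers are straightforward to reproduce. A sketch of all three models (using the natural log, which is what makes Odlyzko–Tilly come out to about 196 at n = 50):

```python
import math

def metcalfe(n: int) -> float:
    return n * (n - 1) / 2       # all possible pairs

def odlyzko_tilly(n: int) -> float:
    return n * math.log(n)       # Zipf-discounted connection value

def sarnoff(n: int) -> float:
    return n                     # broadcast: value proportional to audience size

n = 50
print(metcalfe(n))                               # 1225.0
print(round(odlyzko_tilly(n)))                   # 196
print(round(metcalfe(n) / odlyzko_tilly(n), 1))  # 6.3
```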

Notice what happens as n climbs toward 1,000. The Metcalfe number rockets into the hundreds of thousands, while Odlyzko–Tilly stays comparatively modest. This is why the choice of model isn't academic — it's the difference between a company being "worth" $10 billion and $100 billion. During the dot-com era, investors were effectively using Metcalfe's exponent to justify valuations. When the actual revenue scaled closer to n · log(n), the correction was brutal.

Chapter 6

The Network Beneath the Network

There's a deeper lesson lurking in all of this, one that goes well beyond technology companies and their valuations. Network effects are really a special case of a much older mathematical idea: externalities. An externality is any time your action changes the payoff for someone else, without them getting a say in the matter. When you join a social network, you make it slightly more valuable for everyone already on it. That's a positive externality. When you leave, you make it slightly less valuable. That's a negative one.

The British economist Arthur Pigou identified this problem in 1920: markets with strong externalities don't reach efficient equilibria on their own.8 If your joining a network creates value for existing users, then from society's perspective, you should join even if it's not privately worth it to you yet. The social value of your membership exceeds the private value. This means networks tend to be underprovided early in their life (nobody wants to go first) and then overprovided late in their life (everyone has to join because everyone else is already there, even if the network has become toxic).

This is the mathematical tragedy of network effects: the same force that makes a network powerful makes it coercive. You don't use Facebook because it's good. You use it because your aunt, your college roommate, and your kid's soccer league are on it, and the cost of not being there exceeds the cost of being there. The economists would say you're trapped in a Nash equilibrium — a state where no individual can improve their situation by changing their behavior, even though everyone might prefer a different world.9

The Coordination Trap

Network effects create situations where everyone agrees the current standard is bad, but nobody can switch because switching alone makes you worse off. The math says: in networks, rational individual choices can lead to collectively irrational outcomes. The keyboard layout you're typing on right now is proof.

Chapter 7

Reed's Law and the Group Explosion

Just when you thought Metcalfe was too optimistic, David P. Reed came along in 1999 with an even more audacious claim.10 Metcalfe counted pairs. Reed counted groups.

Think about it: a network of n people doesn't just allow pairwise connections. It allows the formation of groups — of every possible subset. Your book club, your family group chat, your team Slack channel: these are all subsets of the larger network, and each one has independent value. How many possible subgroups can n people form? Each person is either in the group or out, giving us 2ⁿ possible subsets (minus the trivial ones).

Reed's Law
V ∝ 2ⁿ
Exponential growth from group-forming networks

If Metcalfe's n² funded the dot-com boom, Reed's 2ⁿ would fund the heat death of the universe. A network of just 50 people would have over a quadrillion possible subgroups; one of 300 people, more subgroups than there are atoms in the observable universe. Reed's Law is mathematically correct in the same way that Metcalfe's Law is — it counts a real thing — and equally misleading. Nobody actually forms all possible subgroups. Most potential groups never exist, just as most potential phone calls are never made.

But Reed's insight has a more subtle point worth preserving: the most valuable feature of a network isn't necessarily the pairwise connections (who can call whom) but the group formation (who can organize with whom). Facebook's early growth was driven by pairwise connections — friend graphs. But its defensibility, its real lock-in, came from Groups. WhatsApp's killer feature isn't one-to-one messaging; it's group chats. The mathematics of combinations tells us that group-forming capacity grows faster than pairwise capacity, and the qualitative insight — that groups matter more than pairs — is real even if Reed's literal 2ⁿ is absurd as a valuation formula.
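The gap between pairwise and group-forming capacity is easy to tabulate. A sketch (the 2ⁿ − n − 1 count excludes the empty set and single-person "groups"):

```python
def possible_pairs(n: int) -> int:
    return n * (n - 1) // 2

def possible_groups(n: int) -> int:
    """Subsets of n people, minus the empty set and the n singletons."""
    return 2**n - n - 1

for n in (5, 10, 25, 50):
    print(n, possible_pairs(n), possible_groups(n))
# At n = 5: 10 pairs but 26 groups. By n = 50 the group count passes 10^15.
```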

[Figure: five people. Pairwise connections n(n−1)/2 = 10 pairs; possible groups 2ⁿ − n − 1 = 26 groups]

Five people can form 10 pairs but 26 non-trivial groups. The combinatorial explosion of groups is where the real lock-in lives.

Chapter 8

The Lesson

So what do we take away from all of this? Three things.

First: counting isn't valuing. Metcalfe counted possible connections and called it value. Reed counted possible groups and called it value. Both were right about the counting and wrong about the valuing. The number of possible connections in a network grows quadratically; the number of possible groups grows exponentially; but the value grows at some rate that depends on the specific network, the behavior of its users, and the declining marginal utility of each additional connection. The formula is a starting point, not an answer.

Second: network effects create lock-in, and lock-in isn't always good. The same math that makes networks powerful makes them monopolistic and coercive. Knowing this helps you recognize when you're trapped — when you're using a product not because it's good but because everyone else does. That's not an efficient market. That's a coordination failure dressed up as a market outcome.

Third, and most importantly: beware of simple formulas for complex systems. Metcalfe's Law is elegant. It's also incomplete. The history of mathematical thinking is littered with models that were too simple for the phenomena they described — and network effects, of all things, should remind us that value doesn't live in isolated nodes. It lives in the messy, unpredictable, gloriously human space between them.

"The value of a network isn't in the wires. It's in the conversations that flow through them — and conversations are not a thing that mathematics has ever been good at counting."

Alexander Bain's recording telegraph sat unused for decades, not because it was a bad invention but because it was an invention ahead of its network. When the fax machine finally took off in the 1980s — more than a century after Bain's patent — it wasn't because the technology had improved. It was because enough other people had fax machines. The machine hadn't changed. The math around it finally had.

· · ·

Notes & References

  1. Alexander Bain received British patent no. 9745 for his "Electric Printing Telegraph" on May 27, 1843. The device used a pendulum-based scanning mechanism to transmit images over telegraph wires. See Coopersmith, J. Faxed: The Rise and Fall of the Fax Machine (Johns Hopkins University Press, 2015).
  2. Metcalfe, R. "Metcalfe's Law after 40 Years of Ethernet." IEEE Computer, vol. 46, no. 12 (2013), pp. 26–31. Metcalfe himself has written about the origins of his "law" as a sales pitch rather than a rigorous theorem.
  3. Dunbar, R.I.M. "Neocortex size as a constraint on group size in primates." Journal of Human Evolution, vol. 22, no. 6 (1992), pp. 469–493. The famous "Dunbar's number" of ~150 has been both widely cited and widely debated.
  4. Odlyzko, A. and Tilly, B. "A refutation of Metcalfe's Law and a better estimate for the value of networks and network interconnections." Digital Technology Center, University of Minnesota (2005). They argue that n·log(n) fits empirical data better than n².
  5. Shapiro, C. and Varian, H.R. Information Rules: A Strategic Guide to the Network Economy (Harvard Business School Press, 1999). Still the clearest treatment of tipping points and lock-in in technology markets.
  6. The QWERTY-as-lock-in narrative has been challenged. Liebowitz, S.J. and Margolis, S.E. "The Fable of the Keys," Journal of Law and Economics, vol. 33, no. 1 (1990), pp. 1–25. They argue the evidence for Dvorak's superiority is weaker than commonly claimed.
  7. Arthur, W.B. "Competing Technologies, Increasing Returns, and Lock-In by Historical Events." The Economic Journal, vol. 99, no. 394 (1989), pp. 116–131.
  8. Pigou, A.C. The Economics of Welfare (Macmillan, 1920). Pigou's analysis of externalities laid the groundwork for understanding network effects decades before networks existed.
  9. Nash, J. "Non-Cooperative Games." Annals of Mathematics, vol. 54, no. 2 (1951), pp. 286–295. The concept of Nash equilibrium is central to understanding why inefficient standards persist.
  10. Reed, D.P. "The Law of the Pack." Harvard Business Review (February 2001). Reed argued that group-forming networks scale exponentially, making them far more valuable than Metcalfe's pairwise model suggests.