The Inverse Commons

Having cast a skeptical eye on one prevailing model, let's see if we can build another—a hard-nosed economic explanation of what makes open-source cooperation sustainable.

This is a question that bears examination on a couple of different levels. On one level, we need to explain the behavior of individuals who contribute to open-source projects; on another, we need to understand the economic forces that sustain cooperation on open-source projects like Linux or Apache.

Again, we must first demolish a widespread folk model that interferes with understanding. Over every attempt to explain cooperative behavior there looms the shadow of Garrett Hardin's ``Tragedy of the Commons''.

Hardin famously asks us to imagine a green held in common by a village of peasants, who graze their cattle there. But grazing degrades the commons, tearing up grass and leaving muddy patches, which re-grow their cover only slowly. If there is no agreed-upon (and enforced!) policy to allocate grazing rights that prevents overgrazing, all parties' incentives push them to run as many cattle as quickly as possible, trying to extract maximum value before the commons degrades into a sea of mud.

Most people have an intuitive model of cooperative behavior that goes much like this. The tragedy of the commons actually stems from two linked problems, one of overuse and another of underprovision. On the demand side, the commons situation encourages a race to the bottom by overuse—what economists call a congested-public-goods problem. On the supply side, the commons rewards free-rider behavior—removing or diminishing incentives for individual actors to invest in developing more pasturage.

The tragedy of the commons predicts only three possible outcomes. One is the sea of mud. Another is for some actor with coercive power to enforce an allocation policy on behalf of the village (the communist solution). The third is for the commons to break up as village members fence off bits they can defend and manage sustainably (the property-rights solution).
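Hardin's incentive structure can be made concrete with a toy simulation (a sketch only; the herd sizes, regrowth rate, and payoff numbers below are invented for illustration, not drawn from Hardin):

```python
# Toy model of Hardin's commons: each season, every peasant chooses how
# many cattle to graze. Grass regrows slowly; grazing beyond the regrowth
# rate strips the pasture for everyone. All numbers are illustrative.

def season_payoffs(herds, grass, regrowth=10):
    """Return (per-peasant payoffs, remaining grass) for one season."""
    demand = sum(herds)
    eaten = min(demand, grass)
    share = eaten / demand if demand else 0.0  # grass is rivalrous: shares shrink
    payoffs = [h * share for h in herds]
    grass = max(0.0, grass - eaten) + regrowth
    return payoffs, grass

# Restrained grazing sustains the pasture in a steady state...
grass = 20.0
for _ in range(5):
    payoffs, grass = season_payoffs([5, 5], grass)
print("restrained: grass left =", grass)   # pasture holds at its full level

# ...but each peasant's incentive is to add cattle first: defecting pays
# more in season one, and the race to the bottom strips the commons down
# to bare regrowth—the ``sea of mud''.
grass = 20.0
for _ in range(5):
    payoffs, grass = season_payoffs([20, 20], grass)
print("overgrazed: grass left =", grass)   # pasture stripped to regrowth only
```

Note the trap: in the first overgrazed season each defector collects twice the restrained payoff, which is exactly why no individual peasant can afford restraint without an enforced allocation policy.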

When people reflexively apply this model to open-source cooperation, they expect it to be unstable with a short half-life. Since there's no obvious way to enforce an allocation policy for programmer time over the Internet, this model leads straight to a prediction that the commons will break up, with various bits of software being taken closed-source and a rapidly decreasing amount of work being fed back into the communal pool.

In fact, it is empirically clear that the trend is opposite to this. The trend in breadth and volume of open-source development can be measured by submissions per day at Metalab and SourceForge (the leading Linux source sites) or announcements per day at freshmeat.net (a site dedicated to advertising new software releases). Volume on both is steadily and rapidly increasing. Clearly there is some critical way in which the ``Tragedy of the Commons'' model fails to capture what is actually going on.

Part of the answer certainly lies in the fact that using software does not decrease its value. Indeed, widespread use of open-source software tends to increase its value, as users fold in their own fixes and features (code patches). In this inverse commons, the grass grows taller when it's grazed upon.

That this public good cannot be degraded by overuse takes care of half of Hardin's tragedy, the congested-public-goods problem. It doesn't explain why open source doesn't suffer from underprovision. Why don't people who know the open-source community exists universally exhibit free-rider behavior, waiting for others to do the work they need, or (if they do the work themselves) not bothering to contribute the work back into the commons?
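The contrast between a rivalrous pasture and anti-rival software can be sketched in a few lines (the value functions and constants here are invented for illustration, not measured from any real project):

```python
# Sketch of the ``inverse commons'': unlike pasture, software is not used
# up by use. If even a small fraction of users feed fixes back, total
# value *rises* with the user base. All numbers are illustrative.

def pasture_value(users, capacity=100):
    """A rivalrous good: per-user value falls as use congests it."""
    return min(users, capacity) * max(0.0, 1.0 - users / capacity)

def software_value(users, base=1.0, patch_rate=0.01, patch_value=0.5):
    """An anti-rival good: every user gets the base value, plus the
    accumulated value of patches contributed by other users."""
    patches = users * patch_rate
    return users * (base + patches * patch_value)

for n in (10, 100, 1000):
    print(n, round(pasture_value(n), 1), round(software_value(n), 1))
```

The pasture's total value collapses to zero past its carrying capacity, while the software's value grows faster than linearly in its user base—the grass growing taller as it is grazed.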

Part of the answer lies in the fact that people don't merely need solutions, they need solutions on time. It's seldom possible to predict when someone else will finish a given piece of needed work. If the payoff from fixing a bug or adding a feature is sufficient to any potential contributor, that person will dive in and do it (at which point the fact that everyone else is a free rider becomes irrelevant).

Another part of the answer lies in the fact that the putative market value of small patches to a common source base is hard to capture. Supposing I write a fix for an irritating bug, and suppose many people realize the fix has money value; how do I collect from all those people? Conventional payment systems have high enough overheads to make this a real problem for the sorts of micropayments that would usually be appropriate.

It may be more to the point that this value is not merely hard to capture; in the general case, it's hard even to assign. As a thought experiment, let us suppose that the Internet came equipped with the theoretically ideal micropayment system—secure, universally accessible, zero-overhead. Now let's say you have written a patch labeled ``Miscellaneous Fixes to the Linux Kernel''. How do you know what price to ask? How would a potential buyer, not having seen the patch yet, know what is reasonable to pay for it?

What we have here is almost like a funhouse-mirror image of F. A. Hayek's `calculation problem'—it would take a superbeing, both able to evaluate the functional worth of patches and trusted to set prices accordingly, to lubricate trade.

Unfortunately, there's a serious superbeing shortage, so patch author J. Random Hacker is left with two choices: sit on the patch, or throw it into the pool for free.

Sitting on the patch gains nothing. Indeed, it incurs a future cost—the effort involved in re-merging the patch into the source base in each new release. So the payoff from this choice is actually negative (and multiplied by the rapid release tempo characteristic of open-source projects).

To put it more positively, the contributor gains by passing the maintenance overhead of the patch to the source-code owners and the rest of the project group. He also gains because others will improve on his work in the future. Finally, because he won't have to maintain the patch himself, he will be able to spend more time on other and larger customizations to suit his needs. The same arguments that favor opening source for entire packages apply to patches as well.

Throwing the patch in the pool may gain nothing, or it may encourage reciprocal effort from others that will address some of J. Random's problems in the future. This choice, apparently altruistic, is actually optimally selfish in a game-theoretic sense.
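The two choices can be compared in rough expected-payoff terms (a sketch; the re-merge cost, reciprocity rate, and release count are invented assumptions, chosen only to show the sign of each payoff):

```python
# Sketch of J. Random Hacker's choice, in expected-payoff terms. Sitting
# on a private patch costs a re-merge at every upstream release; throwing
# it in the pool costs nothing further and may attract reciprocal fixes.
# All costs and probabilities below are invented for illustration.

def sit_on_patch(releases, remerge_cost=1.0):
    """Keep the patch private: pay to re-merge it into each new release."""
    return -remerge_cost * releases

def contribute_patch(releases, reciprocity=0.2, fix_value=1.0):
    """Contribute it: maintenance is offloaded upstream, and each release
    may carry others' improvements to your problem areas."""
    return reciprocity * fix_value * releases

releases = 12  # a year of a fast-moving project's release tempo
print("sit:", sit_on_patch(releases))             # negative, worsening with tempo
print("contribute:", contribute_patch(releases))  # zero or better
```

The point is not the particular numbers but the signs: sitting on the patch is strictly negative and scales with release tempo, while contributing is bounded below by zero—so the apparently altruistic choice dominates.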

In analyzing this kind of cooperation, it is important to note that while there is a free-rider problem (work may be underprovided in the absence of money or money-equivalent compensation), it is not one that scales with the number of end users (see the endnote on [ST] for discussion). The complexity and communications overhead of an open-source project is almost entirely a function of the number of developers involved; having more end users who never look at source costs effectively nothing. It may increase the rate of silly questions appearing on the project mailing lists, but this is relatively easily forestalled by maintaining a Frequently Asked Questions list and blithely ignoring questioners who have obviously not read it (and in fact both these practices are typical).
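The scaling claim can be sketched numerically (the cost constants are invented; the n(n-1)/2 term is the familiar count of pairwise communication paths among n developers, as in Brooks's Law):

```python
# Sketch: project overhead grows with the number of developers (roughly
# the n*(n-1)/2 pairwise communication paths), while source-ignoring end
# users add essentially nothing. The constants are invented for illustration.

def project_overhead(developers, users, path_cost=1.0, user_cost=0.0):
    comm_paths = developers * (developers - 1) // 2
    return comm_paths * path_cost + users * user_cost

print(project_overhead(10, 1_000))      # all of the cost is developer links
print(project_overhead(10, 1_000_000))  # a thousandfold more users, same cost
```

Doubling the developers roughly quadruples the overhead; multiplying the passive user base a thousandfold changes nothing, which is why the free-rider worry attaches to contributors, not to users.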

The real free-rider problems in open-source software are more a function of friction costs in submitting patches than anything else. A potential contributor with little stake in the cultural reputation game (see Homesteading the Noosphere [HtN]) may, in the absence of money compensation, think ``It's not worth submitting this fix because I'll have to clean up the patch, write a ChangeLog entry, and sign the FSF assignment papers...''. It's for this reason that the number of contributors (and, at second order, the success of projects) is strongly and inversely correlated with the number of hoops each project makes a contributing user go through. Such friction costs may be political as well as mechanical. Together I think they explain why the loose, amorphous Linux culture has attracted orders of magnitude more cooperative energy than the more tightly organized and centralized BSD efforts—and why the Free Software Foundation has receded in relative importance as Linux has risen.

This is all good as far as it goes. But it is an after-the-fact explanation of what J. Random Hacker does with his patch after he has created it. The other half we need is an economic explanation of how JRH was able to write that patch in the first place, rather than having to work on a closed-source program that might have returned him sale value. What business models create niches in which open-source development can flourish?