Halloween Document I (Version 1.17)
Open Source Software comes in two main flavors -- Open Source (Apache style) and Open Source (Linux/GNU style) -- distinguished along these attributes: zero price, source code available, source code modifiable, public "check-ins" to the core codebase, and whether all derivatives must be free.
The broad categories of licensing include:
Commercial software is classic Microsoft bread-and-butter. It must be purchased, may NOT be redistributed, and is typically only available as binaries to end users.
Limited trial software is usually a functionally limited version of commercial software which is freely distributed and intended to drive purchase of the commercial code. Examples include 60-day time-bombed evaluation products.
Shareware products are fully functional and freely redistributable but have a license that mandates eventual purchase by both individuals and corporations. Many internet utilities (like "WinZip") take advantage of shareware as a distribution method.
Non-commercial use software is freely available to, and redistributable by, non-profit entities. Corporations, etc. must purchase the product. An example of this would be Netscape Navigator.
Royalty-free binaries consist of software which may be freely used and distributed in binary form only. Internet Explorer and NetMeeting binaries fit this model.
Royalty-free libraries are software products whose binaries and source code are freely used and distributed but may NOT be modified by the end customer without violating the license. Examples of this include class libraries, header files, etc.
A small, closed team of developers develops BSD-style open source products & allows free use and redistribution of binaries and code.
Apache takes the BSD-style open source model and extends it by
CopyLeft or GPL (General Public License) based software takes the Open Source license one critical step further. Whereas BSD and Apache style software permits users to "fork" the codebase and apply their own license terms to their modified code (e.g. make it commercial), the GPL license requires that all derivative works must in turn also be GPL code: "You are free to hack this code as long as your derivative is also hackable."
To us, open-source licensing and the rights it grants to users and third parties are primary, and specific development practice varies ad-hoc in a way not especially coupled to our license variations. In this Microsoft taxonomy, on the other hand, the central distinction is who has write access to a privileged central code base.
This reflects a much more centralized view of reality, and a failure of imagination or understanding on the memo author's part. He doesn't fully grok our distributed-development tradition. This is hardly surprising... }
This paper focuses on Open Source Software (OSS). OSS is fundamentally different from the other forms of licensing (in particular "shareware") in two very important respects:
OSS is a concern to Microsoft for several reasons:
A key barrier to entry for OSS in many customer environments has been its perceived lack of quality. OSS advocates contend that the greater code inspection & debugging in OSS software results in higher quality code than commercial software.
The recent case studies are all anecdotal, we're told. But if so, why call them "very dramatic evidence"?
It appears there's a bit of self-protective backing and filling going on in the second sentence. Nevertheless, the first sentence is a huge concession for Microsoft to make (even internally).
In any case, the `anecdotal' claim is false. See Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services.
Here are three pertinent lines from this paper:
"The failure rate of utilities on the commercial versions of UNIX that we tested . . . ranged from 15-43%."
"The failure rate of the utilities on the freely-distributed Linux version of UNIX was second-lowest, at 9%."
"The failure rate of the public GNU utilities was the lowest in our study, at only 7%." }
Note the clever distinction here (which Eric missed in his analysis). "Commercial quality" is defined in the customer's eyes (in Microsoft's own words) rather than by any real code quality. In other words, to Microsoft and the software market in general, a software product has "commercial quality" if it has the look and feel of commercial software products. A product has commercial-quality code if and only if there is a public perception that it is made with commercial-quality code. This means that MS will take seriously any product that has an appealing, commercial-looking appearance, because MS assumes -- rightly so -- that this is what the typical, uninformed consumer uses as the judgment benchmark for what is good code.
TN is probably right. This didn't occur to me because, like most open-source programmers, I consider programs that crash and screw up a lot to be junk no matter how pretty their interfaces are....
Another barrier to entry that has been tackled by OSS is project complexity. OSS teams are undertaking projects whose size & complexity had heretofore been the exclusive domain of commercial, economically-organized/motivated development teams. Examples include the Linux Operating System and Xfree86 GUI.
OSS process vitality is directly tied to the Internet to provide distributed development resources on a mammoth scale. Some examples of OSS project size:
Lines of code: the Linux kernel (x86 only), the Apache web server, the Xfree86 X-windows server, the "K" desktop environment, and a full Linux distribution.
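Figures like these are typically produced by walking a source tree and totaling lines. A minimal sketch using only the standard library (the extension list is an assumption for illustration; real surveys such as sloccount also exclude comments and blank lines):

```python
# Rough sketch of producing a lines-of-code figure for a source tree.
# The extensions counted here are an illustrative assumption.
import os

def count_lines(root, exts=(".c", ".h")):
    """Total lines in source files under `root` whose names end in `exts`."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                with open(os.path.join(dirpath, name), errors="replace") as f:
                    total += sum(1 for _line in f)
    return total
```

Pointing this at a kernel checkout gives a crude size estimate; the interesting observation in the memo is the order of magnitude, not the exact count.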
The OSS process is unique in its participants' motivations and the resources that can be brought to bear on problems. OSS, therefore, has some interesting, non-replicable assets which should be thoroughly understood.
Non-replicable assets -- this implies that Microsoft's modus operandi typically involves copying anything that others do. }
Open source software has roots in the hobbyist and the scientific community and was typified by ad hoc exchange of source code by developers/users.
The largest case study of OSS is the Internet. Most of the earliest code on the Internet was, and still is, based on OSS, as described in an interview with Tim O'Reilly (http://www.techweb.com/internet/profile/toreilly/interview):
TIM O'REILLY: The biggest message that we started out with was, "open source software works." ... BIND has absolutely dominant market share as the single most mission-critical piece of software on the Internet. Apache is the dominant Web server. SendMail runs probably eighty percent of the mail servers and probably touches every single piece of e-mail on the Internet
Free Software Foundation / GNU Project
Credit for the first instance of modern, organized OSS is generally given to Richard Stallman of MIT. In late 1983, Stallman created the Free Software Foundation (FSF) -- http://www.gnu.ai.mit.edu/fsf/fsf.html -- with the goal of creating a free version of the UNIX operating system. The FSF released a series of sources and binaries under the GNU moniker (which recursively stands for "Gnu's Not Unix").
The original FSF / GNU initiatives fell short of their original goal of creating a completely OSS Unix. They did, however, contribute several famous and widely disseminated applications and programming tools used today including:
FSF/GNU software introduced the "copyleft" licensing scheme that not only made it illegal to hide source code from GNU software but also made it illegal to hide the source from work derived from GNU software. The document that described this license is known as the General Public License (GPL).
Wired magazine has the following summary of this scheme & its intent (http://www.wired.com/wired/5.08/linux.html):
The general public license, or GPL, allows users to sell, copy, and change copylefted programs - which can also be copyrighted - but you must pass along the same freedom to sell or copy your modifications and change them further. You must also make the source code of your modifications freely available.
The second clause -- open source code of derivative works -- has been the most controversial (and, potentially the most successful) aspect of CopyLeft licensing.
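The structural difference between permissive and copyleft terms can be sketched as a toy rule, with license names as bare strings. This is an illustration of the obligation's shape only, not legal advice or a real license checker:

```python
# Toy encoding of the copyleft obligation described above.
# License names are bare strings; this is illustrative, not legal advice.

def derivative_license_ok(base, derived):
    """Copyleft (GPL) bases require GPL derivatives; permissive
    (BSD/Apache-style) bases place no constraint on the derivative."""
    if base == "GPL":
        return derived == "GPL"
    return True

print(derivative_license_ok("BSD", "Commercial"))  # permissive base: fork may go commercial
print(derivative_license_ok("GPL", "Commercial"))  # copyleft forbids re-licensing
```

The second call returning false is exactly the "critical step further" the memo describes: the license travels with every derivative.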
Commercial software development processes are hallmarked by organization around economic goals. However, since money is often not the (primary) motivation behind Open Source Software, understanding the nature of the threat posed requires a deep understanding of the process and motivation of Open Source development teams.
This applies in reverse as well, which is why bashing Microsoft qua Microsoft misses the point -- they're a symptom, not the disease itself. I wish more Linux hackers understood this.
On a practical level, this insight means we can expect Microsoft's propaganda machine to be directed against the process and culture of open source, rather than specific competitors. Brace for it... }
Some of the key attributes of Internet-driven OSS teams:
Communication -- Internet Scale
Coordination of an OSS team is extremely dependent on Internet-native forms of collaboration. Typical methods employed run the full gamut of the Internet's collaborative technologies:
OSS projects the size of Linux and Apache are only viable if a large enough community of highly skilled developers can be amassed to attack a problem. Consequently, there is a direct correlation between the size of the project that OSS can tackle and the growth of the Internet.
In addition to the communications medium, another set of factors implicitly coordinate the direction of the team.
Common goals are the equivalent of vision statements which permeate the distributed decision making for the entire development team. A single, clear directive (e.g. "recreate UNIX") is far more efficiently communicated and acted upon by a group than multiple, intangible ones (e.g. "make a good operating system").
Precedence is potentially the most important factor in explaining the rapid and cohesive growth of massive OSS projects such as the Linux Operating System. Because the entire Linux community has years of shared experience dealing with many other forms of UNIX, they are easily able to discern -- in a non-confrontational manner -- what worked and what didn't.
There weren't arguments about the command syntax to use in the text editor -- everyone already used "vi" and the developers simply parcelled out chunks of the command namespace to develop.
Having historical, 20:20 hindsight provides a strong, implicit structure.
More generally, it suggests a serious and potentially exploitable underestimation of the open-source community's ability to enable its own visionary leaders. We didn't get Emacs or Perl or the World Wide Web from 20:20 hindsight -- nor is it correct to view even the relatively conservative Linux kernel design as a backward-looking recreation of past models.
Accordingly, it suggests that Microsoft's response to open source can be wrong-footed by emphasizing innovation in both our actions and the way we represent what we're doing to the rest of the world. }
NatBro points out the need for a commonly accepted skillset as a pre-requisite for OSS development. This point is closely related to the common-precedents phenomenon. From his email:
A key attribute ... is the common UNIX/gnu/make skillset that OSS taps into and reinforces. I think the whole process wouldn't work if the barrier to entry were much higher than it is ... a modestly skilled UNIX programmer can grow into doing great things with Linux and many OSS products. Put another way -- it's not too hard for a developer in the OSS space to scratch their itch, because things build very similarly to one another, debug similarly, etc.
Whereas precedents identify the end goal, the common skillsets attribute describes the number of people who are versed in the process necessary to reach that end.
The Cathedral and the Bazaar
A very influential paper by an open source software advocate -- Eric Raymond -- was first published in May 1997 (http://www.redhat.com/redhat/cathedral-bazaar/). Raymond's paper was expressly cited by (then) Netscape CTO Eric Hahn as a motivation for their decision to release browser source code.
Raymond dissected his OSS project in order to derive rules-of-thumb which could be exploited by other OSS projects in the future. Some of Raymond's rules include:
Every good work of software starts by scratching a developer's personal itch
This summarizes one of the core motivations of developers in the OSS process -- solving an immediate problem at hand faced by an individual developer -- this has allowed OSS to evolve complex projects without constant feedback from a marketing / support organization.
Good programmers know what to write. Great ones know what to rewrite (and reuse).
Raymond posits that developers are more likely to reuse code in a rigorous open source process than in a more traditional development environment because they are always guaranteed access to the entire source all the time.
Widely available open source reduces search costs for finding a particular code snippet.
Plan to throw one away; you will, anyhow.
Quoting Fred Brooks, The Mythical Man-Month, Chapter 11. Because development teams in OSS are often extremely far-flung, many major subcomponents in Linux had several initial prototypes, followed by the selection and refinement of a single design by Linus.
Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.
Raymond advocates strong documentation and significant developer support for OSS projects in order to maximize their benefits.
Code documentation is cited as an area which commercial developers typically neglect which would be a fatal mistake in OSS.
Release early. Release often. And listen to your customers.
This is a classic play out of the Microsoft handbook. OSS advocates will note, however, that their release-feedback cycle is potentially an order of magnitude faster than commercial software's.
But it suggests something else -- that even though the author intellectually grasps the importance of source code releases, he doesn't truly grok how powerful a lever the early release specifically of source code truly is. Perhaps living within Microsoft's assumptions makes that impossible.
The difference here is, in every release cycle Microsoft always listens to its most ignorant customers. This is the key to dumbing down each release cycle of software for further assaulting the non-PC population. Linux and OS/2 developers, OTOH, tend to listen to their smartest customers. This necessarily limits the initial appeal of the operating system, while enhancing its long-term benefits. Perhaps only a monopolist like Microsoft could get away with selling worse products each generation -- products focused so narrowly on the least-technical member of the consumer base that they necessarily sacrifice technical excellence. Linux and OS/2 tend to appeal to the customer who knows greatness when he or she sees it.

The good that Microsoft does in bringing computers to the non-users is outdone by the curse they bring upon the experienced users, because their monopoly position tends to force everyone toward the lowest common denominator, not just the new users.
Note: This means that Microsoft does the heavy lifting of expanding the overall PC marketplace. The great fear at Microsoft is that somebody will come behind them and make products that not only are more reliable, faster, and more secure, but are also easy to use, fun, and make people more productive. That would mean that Microsoft had merely served as a pioneer and taken all the arrows in the back, while we who have better products become a second wave to homestead on Microsoft's tamed territory. Well, sounds like a good idea to me.
So, we ought to take a page from Microsoft's book and listen to the newbies once in a while. But not so often that we lose our technological superiority over Microsoft.
ESR again. I don't agree with TN's apparent assumption that ease-of-use and technical superiority are necessarily mutually exclusive; with good design it's possible to do both. But given limited resources and poor-to-mediocre design skills, they do tend to be set in opposition to each other. Thus there's enough point to TN's analysis to make it worth reproducing here. }
Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
This is probably the heart of Raymond's insight into the OSS process. He paraphrased this rule as "debugging is parallelizable". More in-depth analysis follows.
Once a component framework has been established (e.g. key API's & structures defined), OSS projects such as Linux utilize multiple small teams of individuals independently solving particular problems.
Because the developers are typically hobbyists, the ability to `fund' multiple, competing efforts is not an issue and the OSS process benefits from the ability to pick the best potential implementation out of the many produced.
Note that this is very dependent on:
The core argument advanced by Eric Raymond is that unlike other aspects of software development, code debugging is an activity whose efficiency improves nearly linearly with the number of individuals tasked with the project. There are little/no management or coordination costs associated with debugging a piece of open source code -- this is the key `break' in Brooks' laws for OSS.
Raymond includes Linus Torvald's description of the Linux debugging process:
My original formulation was that every problem will be transparent to somebody. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. Somebody finds the problem, he says, and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge. But the point is that both things tend to happen quickly.
Debugging is parallelizable. Jeff [Dutky <firstname.lastname@example.org>] observes that although debugging requires debuggers to communicate with some coordinating developer, it doesn't require significant coordination between debuggers. Thus it doesn't fall prey to the same quadratic complexity and management costs that make adding developers problematic.
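The contrast between Brooks's quadratic communication cost and Dutky's observation can be made concrete with a toy model. The quantities below are illustrative, not measurements: mutually coordinating developers pay for n(n-1)/2 pairwise channels, while debuggers who each report only to a coordinating maintainer add just n channels:

```python
# Toy model of why "debugging is parallelizable".
# Illustrative numbers only, not measurements of any real project.

def communication_channels(n):
    """Pairwise channels among n mutually coordinating developers: n(n-1)/2."""
    return n * (n - 1) // 2

def debugger_channels(n):
    """Each of n debuggers reports only to one coordinating maintainer."""
    return n

# Coordination cost explodes for co-developers but stays linear for debuggers.
for n in (10, 100, 1000):
    print(n, communication_channels(n), debugger_channels(n))
```

At a thousand participants the coordinated team carries nearly half a million channels while the debugging pool carries a thousand, which is the "break" in Brooks's law the memo concedes.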
One advantage of parallel debugging is that bugs and their fixes are found / propagated much faster than in traditional processes. For example, when the TearDrop IP attack was first posted to the web, less than 24 hours passed before the Linux community had a working fix available for download.
An extension to parallel debugging that I'll add to Raymond's hypothesis is "impulsive debugging". In the case of the Linux OS, implicit to the act of installing the OS is the act of installing the debugging/development environment. Consequently, it's highly likely that if a particular user/developer comes across a bug in another individual's component -- and especially if that bug is "shallow" -- that user can very quickly patch the code and, via internet collaboration technologies, propagate that patch very quickly back to the code maintainer.
Put another way, OSS processes have a very low entry barrier to the debugging process due to the common development/debugging methodology derived from the GNU tools.
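The "propagate that patch back to the maintainer" step usually means mailing a unified diff. A sketch using the standard library -- the file name, contents, and off-by-one "bug" here are all made up for illustration:

```python
# Sketch of the patch a user/developer mails back to a maintainer.
# File name and the off-by-one "bug" are hypothetical examples.
import difflib

original = ["buf = alloc(len)\n", "copy(buf, src, len + 1)\n"]  # buggy line
fixed    = ["buf = alloc(len)\n", "copy(buf, src, len)\n"]      # the fix

patch = "".join(difflib.unified_diff(original, fixed,
                                     fromfile="a/driver.c",
                                     tofile="b/driver.c"))
print(patch)
```

The resulting text is exactly what `patch -p1` on the maintainer's side consumes, which is why a shared GNU-style toolchain keeps the entry barrier to this loop so low.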
Any large-scale development process will encounter conflicts which must be resolved; often resolution is an arbitrary decision made to move the project forward. In commercial teams, the corporate hierarchy and performance-review structure resolve such conflicts. How do OSS teams resolve them?
In the case of Linux, Linus Torvalds is the undisputed `leader' of the project. He's delegated large components (e.g. networking, device drivers, etc.) to several of his trusted "lieutenants" who further de-facto delegate to a handful of "area" owners (e.g. LAN drivers).
Other organizations are described by Eric Raymond: (http://earthspace.net/~esr/writings/homesteading/homesteading-15.html):
Some very large projects discard the `benevolent dictator' model entirely. One way to do this is turn the co-developers into a voting committee (as with Apache). Another is rotating dictatorship, in which control is occasionally passed from one member to another within a circle of senior co-developers (the Perl developers organize themselves this way).
This section provides an overview of some of the key reasons OSS developers seek to contribute to OSS projects.
Solving the Problem at Hand
This is basically a rephrasing of Raymond's first rule of thumb -- "Every good work of software starts by scratching a developer's personal itch".
Many OSS projects -- such as Apache -- started as a small team of developers setting out to solve an immediate problem at hand. Subsequent improvements of the code often stem from individuals applying the code to their own scenarios (e.g. discovering that there is no device driver for a particular NIC, etc.)
The Linux kernel grew out of an educational project at the University of Helsinki. Similarly, many of the components of Linux / GNU system (X windows GUI, shell utilities, clustering, networking, etc.) were extended by individuals at educational institutions.
The most ethereal, and perhaps most profound motivation presented by the OSS development community is pure ego gratification.
In "The Cathedral and the Bazaar", Eric S. Raymond cites:
The "utility function" Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers.
And, of course, "you aren't a hacker until someone else calls you hacker"
Homesteading on the Noosphere
A second paper published by Raymond -- "Homesteading on the Noosphere" (http://sagan.earthspace.net/~esr/writings/homesteading/) -- discusses the difference between economically motivated exchange (e.g. commercial software development for money) and "gift exchange" (e.g. OSS for glory).
"Homesteading" is acquiring property by being the first to `discover' it or by being the most recent to make a significant contribution to it. The "Noosphere" is loosely defined as the "space of all work". Therefore, Raymond posits, the OSS hacker motivation is to lay a claim to the largest area in the body of work. In other words, take credit for the biggest piece of the prize.
From "Homesteading on the Noosphere":
Abundance makes command relationships difficult to sustain and exchange relationships an almost pointless game. In gift cultures, social status is determined not by what you control but by what you give away.
For examined in this way, it is quite clear that the society of open-source hackers is in fact a gift culture. Within it, there is no serious shortage of the `survival necessities' -- disk space, network bandwidth, computing power. Software is freely shared. This abundance creates a situation in which the only available measure of competitive success is reputation among one's peers.
More succinctly (http://www.techweb.com/internet/profile/eraymond/interview):
SIMS: So the scarcity that you looked for was the scarcity of attention and reward?
RAYMOND: That's exactly correct.
This is a controversial motivation, and I'm inclined to believe that at some level altruism `degenerates' into a form of the Ego Gratification argument advanced by Raymond.
A key threat in any large development team -- and one that is particularly exacerbated by the process chaos of an internet-scale development team -- is the risk of code-forking.
Code forking occurs when, over the normal push-and-pull of a development project, multiple, inconsistent versions of the project's code base evolve.
In the commercial world, for example, the strong, singular management of the Windows NT codebase is considered to be one of its greatest advantages over the `forked' codebases found in commercial UNIX implementations (SCO, Solaris, IRIX, HP-UX, etc.).
Forking in OSS -- BSD Unix
Within OSS space, BSD Unix is the best example of forked code. The original BSD UNIX was an attempt by U-Cal Berkeley to create a royalty-free version of the UNIX operating system for teaching purposes. However, Berkeley put severe restrictions on non-academic uses of the codebase.
In order to create a fully free version of BSD UNIX, an ad hoc (but closed) team of developers created FreeBSD. Other developers at odds with the FreeBSD team for one reason or another splintered the OS to create other variations (OpenBSD, NetBSD, BSDI).
There are two dominant factors which led to the forking of the BSD tree:
OK, we've learned something now. This may in fact explain the counterintuitive fact that the projects which open up development the most actually have the least tendency to fork... }
Both of these motivations create a situation where developers may try to force a fork in the code and collect royalties (monetary, or ego) at the expense of the collective BSD society.
(Lack of) Forking in Linux
In contrast to the BSD example, the Linux kernel code base hasn't forked. Some of the reasons why the integrity of the Linux codebase has been maintained include:
Linus Torvalds is a celebrity in the Linux world and his decisions are considered final. By contrast, a similar celebrity leader did NOT exist for the BSD-derived efforts.
Linus is considered by the development team to be a fair, well-reasoned code manager and his reputation within the Linux community is quite strong. However, Linus doesn't get involved in every decision. Often, sub groups resolve their -- often large -- differences amongst themselves and prevent code forking.
In contrast to BSD's closed membership, anyone can contribute to Linux and your "status" -- and therefore ability to `homestead' a bigger piece of Linux -- is based on the size of your previous contributions.
Indirectly this presents a further disincentive to code forking. There is almost no credible mechanism by which the forked, minority code base will be able to maintain the rate of innovation of the primary Linux codebase.
Because derivatives of Linux MUST be available through some free avenue, it lowers the long term economic gain for a minority party with a forked Linux tree.
Ego motivations push OSS developers to plant the biggest stake in the biggest Noosphere. Forking the code base inevitably shrinks the space of accomplishment for any subsequent developers to the new code tree.
What are the core strengths of OSS products that Microsoft needs to be concerned with?
Like our Operating System business, OSS ecosystems have several exponential attributes:
The single biggest constraint faced by any OSS project is finding enough developers interested in contributing their time towards the project. As an enabler, the Internet was absolutely necessary to bring together enough people for an Operating System scale project. More importantly, the growth engine for these projects is the growth in the Internet's reach. Improvements in collaboration technologies directly lubricate the OSS engine.
Put another way, the growth of the Internet will make existing OSS projects bigger and will make OSS projects in "smaller" software categories become viable.
Like commercial software, the most viable single OSS project in many categories will, in the long run, kill competitive OSS projects and `acquire' their IQ assets. For example, Linux is killing BSD Unix and has absorbed most of its core ideas (as well as ideas in the commercial UNIXes). This feature confers huge first-mover advantages on a particular project.
The larger the OSS project, the greater the prestige associated with contributing a large, high quality component to its Noosphere. This phenomenon contributes back to the "winner-take-all" nature of the OSS process in a given segment.
The larger the project, the more development/test/debugging the code receives. The more debugging, the more people who deploy it.
Binaries may die but source code lives forever
One of the most interesting implications of viable OSS ecosystems is long-term credibility.
Long-Term Credibility Defined
Long term credibility exists if there is no way you can be driven out of business in the near term. This forces change in how competitors deal with you.
Note the terminology used here: driven out of business. MS believes that putting other companies out of business is not merely damage -- a byproduct of selling better stuff -- but rather a direct business goal. To put this in perspective: economic theory and the typical honest, customer-oriented businessperson will think of business as a stock-car race -- the fastest car with the most skillful driver wins. Microsoft views business as a demolition derby -- you knock out as many competitors as possible, and try to maneuver things so that your competitors wipe each other out and thereby eliminate themselves. In a stock-car race there are many finishers and thus many drivers get a paycheck. In a demolition derby there is just one survivor. Can you see why Microsoft and freedom of choice are absolutely in two different universes?
For example, Airbus Industries garnered initial long term credibility from explicit government support. Consequently, when bidding for an airline contract, Boeing would be more likely to accept short-term, non-economic returns when bidding against Lockheed than when bidding against Airbus.
Loosely applied to the vernacular of the software industry,
OSS systems are considered credible because the source code is available from potentially millions of places and individuals.
The really interesting thing about these two statements is that they imply that Microsoft should give up on FUD as an effective tactic against us.
Most of us have been assuming that the DOJ antitrust suit is what's keeping Microsoft from hauling out the FUD guns. But if His Gatesness bought this part of the memo, Microsoft may believe that they need to develop a more substantive response because FUD won't work.
This could be both good and bad news. The good news is that Microsoft would give up attack marketing, a weapon which in the past has been much more powerful than its distinctly inferior technology. The bad news is that, against us, giving it up would actually be better strategy; they wouldn't be wasting energy any more and might actually evolve some effective response. }
The likelihood that Apache will cease to exist is orders of magnitude lower than the likelihood that WordPerfect, for example, will disappear. The disappearance of Apache is not tied to the disappearance of binaries (which are affected by purchasing shifts, etc.) but rather to the disappearance of source code and the knowledge base.
Inversely stated, customers know that Apache will be around 5 years from now -- provided there exists some minimal sustained interest from its user/development community.
One Apache customer, in discussing his rationale for running his e-commerce site on OSS, stated, "Because it's open source, I can assign one or two developers to it and maintain it myself indefinitely."
Lack of Code-Forking Compounds Long-Term Credibility
The GPL and its aversion to code forking reassures customers that they aren't riding an evolutionary `dead-end' by subscribing to a particular commercial version of Linux.
By the author's own admission, OSS is bulletproof on this score. On the other hand, the exploding complexity and schedule slippage of the just-renamed Windows 2000 suggest that it is an evolutionary dead end. The author didn't go on to point that out. But we should. }
And the amateurs are
. By Microsoft's own admission, we're actually
Maybe there's a message about the underlying products here? }
In particular, larger, more savvy organizations who rely on OSS for business operations (e.g. ISPs) are comforted by the fact that they can potentially fix a work-stopping bug independent of a commercial provider's schedule (for example, UUNET was able to obtain, compile, and apply the teardrop attack patch to their deployed Linux boxes within 24 hours of the first public attack).
Alternatively stated, "developer resources are essentially free in OSS". Because the pool of potential developers is massive, it is economically viable to simultaneously investigate multiple solutions/versions to a problem and choose the best solution in the end.
For example, the Linux TCP/IP stack was probably rewritten 3 times. Assembly code components in particular have been continuously hand tuned and refined.
OSS = `perfect' API evangelization / documentation
OSS's API evangelization / developer education basically consists of providing the developer with the underlying code. Whereas evangelization of APIs in a closed source model basically defaults to trust, OSS API evangelization lets the developer make up his own mind.
NatBro and Ckindel point out a split in developer capabilities here. Whereas the "enthusiast developer" is comforted by OSS evangelization, novice/intermediate developers -- the bulk of the development community -- prefer the trust model + organizational credibility (e.g. "Microsoft says API X looks this way")
Twenty years of experience in the field tells me not; that, in general, developers prefer code even when their non-technical bosses are naive enough to prefer `trust'. Microsoft, obviously, wants to believe that its `organizational credibility' counts -- I detect some wishful thinking here.
On the other hand, they may be right. We in the open-source community can't afford to dismiss that possibility. I think we can meet it by developing high-quality documentation. In this way, `trust' in name authors (or in publishers of good repute such as O'Reilly or Addison-Wesley) can substitute for `trust' in an API-defining organization. }
Strongly componentized OSS projects are able to release subcomponents as soon as the developer has finished his code. Consequently, OSS projects rev quickly & frequently.
The weaknesses in OSS projects fall into 3 primary buckets:
The biggest roadblock for OSS projects is dealing with exponential growth of management costs as a project is scaled up in terms of rate of innovation and size. This implies a limit to the rate at which an OSS project can innovate.
Starting an OSS project is difficult
From Eric Raymond:
It's fairly clear that one cannot code from the ground up in bazaar style. One can test, debug and improve in bazaar style, but it would be very hard to originate a project in bazaar mode. Linus didn't try it. I didn't either. Your nascent developer community needs to have something runnable and testable to play with.
Raymond's argument can be extended to the difficulty in starting/sustaining a project if there is no clear precedent / goal (or there are too many goals) for the project.
Obviously, there are far more fragments of source code on the Internet than there are OSS communities. What separates "dead source code" from a thriving bazaar?
One article (http://www.mibsoftware.com/bazdev/0003.htm) provides the following credibility criteria:
"....thinking in terms of a hard minimum number of participants is misleading. Fetchmail and Linux have huge numbers of beta testers *now*, but they obviously both had very few at the beginning.
What both projects did have was a handful of enthusiasts and a plausible promise. The promise was partly technical (this code will be wonderful with a little effort) and sociological (if you join our gang, you'll have as much fun as we're having). So what's necessary for a bazaar to develop is that it be credible that the full-blown bazaar will exist!"
I'll posit that some of the key criteria that must exist for a bazaar to be credible include:
The Cathedral and the Bazaar. The distinction he makes between `Large Future Noosphere' and `Scratch a big itch' is particularly telling. }
When describing this problem to JimAll, he provided the perfect analogy of "chasing tail lights". The easiest way to get coordinated behavior from a large, semi-organized mob is to point them at a known target. Having the taillights provides concreteness to a fuzzy vision. In such situations, having a taillight to follow is a proxy for having strong central leadership.
Of course, once this implicit organizing principle is no longer available [...]
Part of the point of open source is to lower the energy barriers that retard innovation. We've found by experience that the `massive management' the author extols is one of the worst of these barriers.
In the open-source world, innovators get to try anything, and the only test is whether users will volunteer to experiment with the innovation and like it once they have. The Internet facilitates this process, and the cooperative conventions of the open-source community are specifically designed to promote it.
The third alternative to [...] (and more effective than either) is an evolving creative anarchy, in which there are a thousand leaders and ten thousand followers linked by a web of peer review and subject to rapid-fire reality checks.

Microsoft cannot beat this. I don't think they can even really understand it, not on a gut level. }
This is possibly the single most interesting hurdle to face the Linux community now that they've achieved parity with the state of the art in UNIX in many respects.
Another interesting thing to observe in the near future of OSS is how well the team is able to tackle the "unsexy" work necessary to bring a commercial grade product to life.
The word `unsexy' reveals an interesting blind spot. It has been my experience that for almost any kind of work, there will be somebody, somewhere, who thinks it's interesting or fulfilling enough to undertake it.
Take the example of Unicode support above. Who's likely to do the best, most thorough job of implementing Unicode support, of the following three people?
It's likely to be either Ana or Jeff (all else, including skill sets, being equal), because they're scratching their itches. It ain't gonna be Joe.
Now, which development model is more likely to pull Ana or Jeff into the development effort -- closed source, or open?
Easy question. }
In the operating systems space, this includes small, essential functions such as power management, suspend/resume, management infrastructure, UI niceties, deep Unicode support, etc.
For Apache, this may mean novice-administrator functionality such as wizards.
Integrative work across modules is the biggest cost encountered by OSS teams. An email memo from Nathan Myhrvold (5/98) points out that of all the aspects of software development, integration work is the most subject to Brooks's Law.
Up till now, Linux has greatly benefited from the integration / componentization model pushed by previous UNIXes. Additionally, the organization of Apache was simplified by the relatively simple, fault tolerant specifications of the HTTP protocol and UNIX server application design.
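Brooks's observation about integration can be made concrete with a little arithmetic. The sketch below is illustrative only (it is not from Myhrvold's memo): pairwise communication paths grow quadratically with team size, and integration work is exactly the work that forces members to coordinate across those paths.

```python
def comm_channels(n):
    """Number of pairwise communication paths in a team of n people.

    Brooks's Law rests on this quadratic growth: adding people to a
    late project adds coordination paths faster than it adds labor,
    and integration work is the least parallelizable of all."""
    return n * (n - 1) // 2

# A 5-person core team has 10 paths; a 50-person mob has over 100x more.
assert comm_channels(5) == 10
assert comm_channels(50) == 1225
```

This is why OSS projects that decompose cleanly into modules (as Linux and Apache do) scale, while tightly integrated work resists the bazaar.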
We can only hope Microsoft continues to believe this, because it would hinder their response. Much will depend on how they interpret innovations such as (for example) the SMPization of the Linux kernel.
Interestingly, the author contradicts himself on this point.

A former Microserf tells me that `throw one away' is actually pretty close to a defined Microsoft policy, but one designed to leverage marketing rather than fix problems. The project he was involved with involved a web-based front-end to Exchange. The resulting first draft (after 14 months of effort) was completely inferior to already existing free web email (Yahoo, Hotmail, etc). The official response to that was [...]
He adds: Internet Explorer 5, just before one of its beta releases had about 300K (yes, 300K) outstanding bugs targeted to be fixed before the beta release. Much of this was accomplished by simply removing large chunks of planned (new) functionality and pushing them to a later (+1-2 years later) release. }
These are weaknesses intrinsic to OSS's design/feedback methodology.
One of the keys to the OSS process is having many more iterations than commercial software (Linux was known to rev its kernel more than once a day!). However, commercial customers tell us they want fewer revs, not more.
This is why commercial Linux distributors exist -- to mediate between the rapid-development process and customers who don't want to follow every twist of it. The kernel may rev once a day, but Red Hat only revs once in six months. }
The Linux OS is not developed for end users but rather for other hackers. Similarly, the Apache web server is implicitly targeted at the largest, most savvy site operators, not the departmental intranet server.
The key thread here is that because OSS doesn't have an explicit marketing / customer feedback component, wishlists -- and consequently feature development -- are dominated by the most technically savvy users.
There are two ways to build in ease of use "from the ground up". One (the Microsoft way) is to design monolithic applications that are defined and dominated by their UIs. This tends to produce Windowsitis -- rigid, clunky, bug-prone monstrosities that are all glossy surface with a hollow interior.
Programs built this way look user-friendly at first sight, but turn out to be huge time and energy sinks in the longer term. They can only be sustained by carpet-bomb marketing, the main purpose of which is to delude users into believing that (a) bugs are features, or that (b) all bugs are really the stupid user's fault, or that (c) all bugs will be abolished if the user bends over for the next upgrade. This approach is fundamentally broken.
The other way is the Unix/Internet/Web way, which is to separate the engine (which does the work) from the UI (which does the viewing and control). This approach requires that the engine and UI communicate using a well-defined protocol. It's exemplified by browser/server pairs -- the engine specializes in being an engine, and the UI specializes in being a UI.
With this second approach, overall complexity goes down and reliability goes up. Further, the interface is easier to evolve/improve/customize, precisely because it's not tightly coupled to the engine. It's even possible to have multiple interfaces tuned to different audiences.
Finally, this architecture leads naturally to applications that are enterprise-ready -- that can be used or administered remotely from the server. This approach works -- and it's the open-source community's natural way to counter Microsoft.
The key point here is that if Microsoft wants to fight the open-source community on UI, let them -- because we can win that battle, too, fighting it our way. They can write ever-more-elaborate Windows monoliths that spot-weld you to your application-server console. We'll win if we write clean distributed applications that leverage the Internet and the Web and make the UI a pluggable/unpluggable user choice that can evolve.
Note, however, that our win depends on the existence of well-defined protocols (such as HTTP) to communicate between UIs and engines. That's why the stuff later in this memo about de-commoditizing protocols is so sinister. We need to guard against that.
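The engine/UI separation described above can be sketched in a few lines. This is a minimal illustration, not anything from the memo; the service, names, and word-count task are all invented for the example. The engine speaks only a well-defined protocol (HTTP + JSON) and knows nothing about any particular UI:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Engine: does the work (here, a trivial word-count service) and exposes
# it only through a well-defined protocol. It knows nothing about any UI.
class EngineHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        text = "free software is free as in freedom"
        body = json.dumps({"words": len(text.split())}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

def run_engine():
    """Start the engine on an ephemeral port; return the server object."""
    server = HTTPServer(("127.0.0.1", 0), EngineHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# UI: any client that speaks the protocol -- a browser, a script, a GUI.
# Swapping or evolving the UI never requires touching the engine.
def ui_report(port):
    with urlopen("http://127.0.0.1:%d/" % port) as resp:
        return "word count: %d" % json.load(resp)["words"]

if __name__ == "__main__":
    engine = run_engine()
    print(ui_report(engine.server_address[1]))
    engine.shutdown()
```

Because the contract between the two halves is just the protocol, multiple interfaces tuned to different audiences can coexist against one engine -- exactly the browser/server pattern cited above.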
The interesting trend to observe here will be the effect that commercial OSS providers (such as RedHat in Linux space, C2Net in Apache space) will have on the feedback cycle.
How can OSS provide the service that consumers expect from software providers?
Product support is typically the first issue prospective consumers of OSS packages worry about and is the primary feature that commercial redistributors tout.
However, the vast majority of OSS projects are supported by the developers of the respective components. Scaling this support infrastructure to the level expected in commercial products will be a significant challenge.
This would have led to a choice of unpalatable (for Microsoft) alternatives. It may be that Apache's informal user-support channels and `organizational credibility' actually produce better results than Microsoft's IIS organization can offer. If that's true, then it's hard to see in principle why the same shouldn't be true of other open-source projects.
The alternative -- that Apache is so good that it doesn't need much support or `organizational credibility' -- is even worse. That would mean that all of Microsoft's heavy-duty support and marketing battalions were just a huge malinvestment, like crumbling Stalinist apartment blocks forty years later.
These two possible explanations imply distinct but parallel strategies for open-source advocates. One is to build software that's so good it just doesn't need much support (but we'd do this anyway, and generally have). The other is to do more intensely what we're already doing along the lines of support mailing lists, newsgroups, FAQs, and other informal but extremely effective channels. A former Microserf adds: "As of NT5 (sorry, Win2K :-) MS is going to claim a huge increase in IIS market share. This is because IIS5 is built directly linked with the NT kernel and handles all external TCP traffic (mail, http, etc). MSOffice is also going to communicate through IIS when talking with NT or Exchange, thus allowing them to add all internal LAN traffic to their usage reports. Let's see if we can pop their balloon before they raise it." }
For the short-medium run, this factor alone will relegate OSS products to the top tiers of the user community.
Hey -- this would be neat....
Perhaps we're fortunate that `organizational credibility' looms so large in the Microsoft world-view. The time and energy they spend worrying about that and believing it's a prerequisite is resources they won't spend doing anything that might be effective against us. }
What does it mean for the Linux community to "sign up" to help build the Corporate Digital Nervous System? How can Linux guarantee backward compatibility with apps written to previous APIs?
The question about backward compatibility is pretty ironic, considering that I've never heard of a program that will run under all of Windows 3.1, Windows 95, Windows 98, and NT 4.0 without change.
The author has been overtaken by events here. He should ask Microsoft's buddies at Intel, who bought a minority stake in Red Hat less than two months after this memo was written. }
In the last 2 years, OSS has taken another twist with the emergence of companies that sell OSS software and, more importantly, hire full-time developers to improve the code base. What's the business model that justifies these salaries?
In many cases, the answers to these questions are similar to "why should I submit my protocol/app/API to a standards body?"
The vendor of OSS-ware provides sales, support, and integration to the customer. Effectively, this transforms the OSS-ware vendor from a package goods manufacturer into a services provider.
The Loss Leader OSS business model can be used for two purposes:
Many OSS startups -- particularly those in Operating Systems space -- view funding the development of OSS products as a strategic loss leader against Microsoft.
Linux distributors, such as RedHat, Caldera, and others, are expressly willing to fund full time developers who release all their work to the OSS community. By simultaneously funding these efforts, Red Hat and Caldera are implicitly colluding and believe they'll make more short term revenue by growing the Linux market rather than directly competing with each other.
An indirect example is O'Reilly & Associates' employment of Larry Wall -- "leader" and full time developer of PERL. The #1 publisher of PERL reference books, of course, is O'Reilly & Associates.
For the short run, especially as the OSS project is at the steepest part of its growth curve, such investments generate positive ROI. Longer term, ROI motivations may steer these developers towards making proprietary extensions rather than releasing OSS.
Commoditizing Downstream Suppliers
This is very closely related to the loss leader business model. However, instead of trying to get marginal service returns by massively growing the market, these businesses increase returns in their part of the value chain by commoditizing downstream suppliers.
The best examples of this currently are the thin server vendors such as Whistle Communications, and Cobalt Micro who are actively funding developers in SAMBA and Linux respectively.
Both Whistle and Cobalt generate their revenue on hardware volume. Consequently, funding OSS enables them to avoid today's PC market where a "tax" must be paid to the OS vendor (NT Server retail price is $800 whereas Cobalt's target MSRP is around $1000).
The earliest Apache developers were employed by cash-strapped ISPs and ICPs.
Another, more recent example is IBM's deal with Apache. By declaring the HTTP server a commodity, IBM hopes to concentrate returns in the more technically arcane application services it bundles with its Apache distribution (as well as hoping to reach Apache's tremendous market share).
First Mover -- Build Now, $$ Later
One of the exponential qualities of OSS -- successful OSS projects swallow less successful ones in their space -- implies a pre-emption business model where, by investing directly in OSS today, they can pre-empt / eliminate competitive projects later -- especially if the project requires API evangelization. This is tantamount to seizing a first mover advantage in OSS.
In addition, the developer scale, iteration rate, and reliability advantages of the OSS process are a blessing to small startups who typically can't afford a large in-house development staff.
Examples of startups in this space include SendMail.com (making a commercially supported version of the sendmail mail transfer agent) and C2Net (making a commercial, encrypted Apache).
Notice that no case of a successful startup originating an OSS project has been observed. In both of these cases, the OSS project existed before the startup was formed.
Sun Microsystems has recently announced that its "JINI" project will be provided via a form of OSS and may represent an application of the pre-emption doctrine.
The next several sections analyze the most prominent OSS projects including Linux, Apache, and now, Netscape's OSS browser.
A second memo titled "Linux OS Competitive Analysis" provides an in-depth review of the Linux OS. Here, I provide a top-level summary of my findings in Linux.
Linux (pronounced "LYNN-ucks") is the #1 market share Open Source OS on the Internet. Linux derives strongly from the 25+ years of lessons learned on the UNIX operating system.
Linux is a real, credible OS + Development process
Linux is a short/medium-term threat in servers
The primary threat Microsoft faces from Linux is against NT Server.
Linux's future strength against NT server (and other UNIXes) is fed by several key factors:
To put it slightly differently: Linux can win if services are open and protocols are simple, transparent. Microsoft can only win if services are closed and protocols are complex, opaque.
To put it even more bluntly: "commodity" services and protocols are good things for customers; they promote competition and choice. Therefore, for Microsoft to win, the customer must lose.
The most interesting revelation in this memo is how close to explicitly stating this logic Microsoft is willing to come. }
Linux is unlikely to be a threat on the desktop
Linux is unlikely to be a threat in the medium-long term on the desktop for several reasons:
Though this is true, it evades an important issue -- which is that Microsoft's own meretriciousness on this score doesn't make its criticism any less valid. Open-source development really is poor at addressing this class of issues, because it doesn't involve systematic ease-of-use-testing with non-hackers.
This genuinely will slow down Linux's advance on the desktop. It is not likely to stall it forever, however -- not if efforts like GNOME and KDE get time to mature. }
first mover advantageare the only ways to defray the perceived cost of switching. This is a dangerous assumption for Microsoft; it may be that the superior reliability and stability of Linux is sufficient.
Even granting the author's presumption, the possibility that Linux can grab a sufficient `first-mover' advantage is not safely foreclosed unless the open-source mode really is incapable of generating innovation -- and we already know that's not true. }
In addition to attacking the general weaknesses of OSS projects (e.g. Integrative / Architectural costs), some specific attacks on Linux are:
All the standard product issues for NT vs. Sun apply to Linux.
What the author is driving at is nothing less than trying to subvert the entire commodity-protocol infrastructure of the Internet. This game is called `embrace and extend'. We've seen Microsoft play this game before, and they're very good at it. When it works, Microsoft wins a monopoly lock. Customers lose. (This standards-pollution strategy is perfectly in line with Microsoft's efforts to corrupt Java and break the Java brand.)

Open-source advocates can counter by pointing out exactly how and why customers lose (reduced competition, higher costs, lower reliability, lost opportunities). Open-source advocates can also make this case by showing the contrapositive -- that is, how open source and open standards increase vendor competition, decrease costs, improve reliability, and create opportunities.

Once again, as Microsoft conceded earlier in the memo, the Internet is our poster child. Our best stop-thrust against embrace-and-extend is to point out that Microsoft is trying to close up the Internet. }
In an attempt to renew its credibility in the browser space, Netscape has recently released its Mozilla source code and is attempting to create an OSS community around it.
Netscape's organization and licensing model is loosely based on the Linux community & GPL with a few differences. First, Mozilla and Netscape Communicator are 2 codebases with Netscape's engineers providing synchronization.
Unlike the full GPL, Netscape reserves the final right to reject / force modifications into the Mozilla codebase and Netscape's engineers are the appointed "Area Directors" of large components (for now).
Capitalize on Anti-MSFT Sentiment in the OSS Community
Relative to other OSS projects, Mozilla is considered to be one of the most direct, near-term attacks on the Microsoft establishment. This factor alone is probably a key galvanizing factor in motivating developers towards the Mozilla codebase.
The availability of Mozilla source code has renewed Netscape's credibility in the browser space to a small degree. As BharatS points out:
"They have guaranteed by releasing their code that they will never disappear from the horizon entirely in the manner that Wordstar has disappeared. Mozilla browsers will survive well into the next 10 years even if the user base does shrink. "
Scratch a big itch
The browser is widely used / disseminated. Consequently, the pool of people who may be willing to solve "an immediate problem at hand" and/or fix a bug may be quite high.
Post parity development
Mozilla is already close to parity with IE4/5. Consequently, there is no strong example to chase to help implicitly coordinate the development team.
Netscape has assigned some of their top developers towards the full time task of managing the Mozilla codebase and it will be interesting to see how this helps (if at all) the ability of Mozilla to push on new ground.
An interesting weakness is the size of the remaining "Noosphere" for the OSS browser.
There are no longer any large, high-profile segments of the stand-alone browser which must be developed. In other words, Netscape has already solved the interesting 80% of the problem. There is little / no ego gratification in debugging / fixing the remaining 20% of Netscape's code.
Linus Torvalds' management of the Linux codebase is arguably directed towards the goal of creating the best Linux. Netscape, by contrast, expressly reserves the right to make code management decisions on the basis of Netscape's commercial / business interests. Instead of creating an important product, the developer's code is being subjugated to Netscape's stock price.
Potentially the single biggest detriment to the Mozilla effort is the level of integration that customers expect from features in a browser. As stated earlier, integration development / testing is NOT a parallelizable activity and therefore is hurt by the OSS process.
In particular, much of the new work for IE5+ is not just integrating components within the browser but continuing integration within the OS. This will be exceptionally painful to compete against.
The contention therefore, is that unlike the Apache and Linux projects which, for now, are quite successful, Netscape's Mozilla effort will:
Keeping in mind that the source code was only released a short time ago (April '98), there is already evidence of waning interest in Mozilla. EXTREMELY unscientific evidence is found in the decline in mailing list volume on Mozilla mailing lists from April to June.
Mozilla Mailing List
Internal mirrors of the Mozilla mailing lists can be found on http://egg.Microsoft.com/wilma/lists
In February of 1995, the most popular server software on the Web was the public domain HTTP daemon developed by NCSA, University of Illinois, Urbana-Champaign. However, development of that httpd had stalled after mid-1994, and many webmasters had developed their own extensions and bug fixes that were in need of a common distribution. A small group of these webmasters, contacted via private e-mail, gathered together for the purpose of coordinating their changes (in the form of "patches"). By the end of February `95, eight core contributors formed the foundation of the original Apache Group. In April 1995, Apache 0.6.2 was released.
During May-June 1995, a new server architecture (code-named Shambhala) was developed which included a modular structure and API for better extensibility, pool-based memory allocation, and an adaptive pre-forking process model. The group switched to this new server base in July and added the features from 0.7.x, resulting in Apache 0.8.8 (and its brethren) in August.
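The "adaptive pre-forking process model" mentioned above can be illustrated with a toy sketch. This is NOT Apache's actual code or algorithm -- the thresholds and names are invented -- but it captures the idea: the parent keeps a cushion of spare idle children, forking ahead of demand so no request ever waits on a fork(), and reaping the surplus when a burst subsides.

```python
from collections import deque

# Illustrative thresholds, in the spirit of Apache's MinSpareServers /
# MaxSpareServers directives (values here are arbitrary).
MIN_SPARE, MAX_SPARE = 2, 8

def regulate(idle, next_id):
    """One maintenance pass over the worker pool.

    Fork spares until at least MIN_SPARE children sit idle; reap idle
    children beyond MAX_SPARE. Returns the new pool and next worker id."""
    idle = deque(idle)
    while len(idle) < MIN_SPARE:   # "fork" a spare child ahead of demand
        idle.append(next_id)
        next_id += 1
    while len(idle) > MAX_SPARE:   # "reap" an excess idle child
        idle.popleft()
    return idle, next_id

# Before any traffic, the pool is topped up so requests never wait on fork():
idle, next_id = regulate([], 0)
assert len(idle) == MIN_SPARE
# After a burst ends, surplus children are reaped back down to the cap:
idle, next_id = regulate(range(20), 20)
assert len(idle) == MAX_SPARE
```

The design choice matters because fork() latency would otherwise land on the first request of every burst; pre-forking trades a little idle memory for consistent response time.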
Less than a year after the group was formed, the Apache server passed NCSA's httpd as the #1 server on the Internet.
The Apache development team consists of about 19 core members plus hundreds of web site administrators around the world who've submitted a bug report / patch of one form or another. Apache's bug data can be found at: http://bugs.apache.org/index.
A description of the code management and dispute resolution procedures followed by the Apache team is found at http://www.apache.org:
There is a core group of contributors (informally called the "core") which was formed from the project founders and is augmented from time to time when core members nominate outstanding contributors and the rest of the core members agree.
Changes to the code are proposed on the mailing list and usually voted on by active members -- three +1 (yes votes) and no -1 (no votes, or vetoes) are needed to commit a code change during a release cycle.
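The voting rule above is simple enough to state as code. A minimal sketch (the member names and the +1/0/-1 encoding are illustrative, not Apache's tooling):

```python
def change_accepted(votes):
    """Apache-style commit rule during a release cycle: at least three
    +1 votes and no -1 vetoes among the active members who voted.

    `votes` maps member name -> vote (+1 yes, 0 abstain, -1 veto)."""
    yes = sum(1 for v in votes.values() if v == 1)
    vetoed = any(v == -1 for v in votes.values())
    return yes >= 3 and not vetoed

# Three yes votes and an abstention: the change commits.
assert change_accepted({"a": 1, "b": 1, "c": 1, "d": 0})
# A single veto blocks it, no matter how much support it has.
assert not change_accepted({"a": 1, "b": 1, "c": 1, "d": -1})
```

The veto-with-supermajority shape is what lets a small core keep quality control while still accepting patches from hundreds of outside contributors.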
Apache has far and away the #1 web site share on the Internet today. Possession of the lion's share of the market provides extremely powerful control over the market's evolution.
In particular, Apache's market share in web server space presents the following competitive hurdles:
3rd Party Support
The number of tools / modules / plug-ins available for Apache has been growing at an increasing rate.
In the short run, IIS soundly beats Apache on SPECweb. Moving forward, as IIS moves into the kernel and takes advantage of deeper integration with NT, this lead is expected to increase further.
Apache, by contrast, is saddled with the requirement to create portable code for all of its OS environments.
HTTP Protocol Complexity & Application services
Part of the reason that Apache was able to get a foothold and take off was because the HTTP protocol is so simple. As more and more features become layered on top of the humble web server (e.g. multi-server transaction support, POD, etc.) it will be interesting to see how the Apache team will be able to keep up.
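The simplicity the memo leans on is easy to demonstrate: an HTTP/1.0 request is a few lines of plain text, and a parser for it fits in a dozen lines. The sketch below is deliberately naive (no header continuation lines, no duplicate headers) and is an illustration, not production code:

```python
def parse_request(raw):
    """Parse the request line, headers, and body of a minimal HTTP request.

    HTTP's framing is just text: a request line, `Name: value` header
    lines, a blank line, then the body -- all separated by CRLF."""
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, path, version = lines[0].split(" ")
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return {"method": method, "path": path, "version": version,
            "headers": headers, "body": body}

req = parse_request("GET /index.html HTTP/1.0\r\nHost: example.org\r\n\r\n")
assert req["method"] == "GET"
assert req["headers"]["Host"] == "example.org"
```

This text-over-newlines design is precisely why a loosely coordinated group of webmasters could patch and extend a server independently; the question is whether that holds as heavier application services get layered on top.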
ASP support, for example is a key driver for IIS in corporate intranets.
Recently, IBM announced its support for the Apache codebase in its WebSphere application server. The actual result of the press furor is still unclear, however:
Some other OSS projects:
In general, a lot more thought / discussion needs to be put into Microsoft's response to the OSS phenomenon. The goal of this document is education and analysis of the OSS process; consequently, in this section I present only a very superficial list of options and concerns.
Where is Microsoft most likely to feel the "pinch" of OSS projects in the near future?
Server vs. Client
The server is more vulnerable to OSS products than the client. Reasons for this include:
Capturing OSS benefits -- Developer Mindshare
The ability of the OSS process to collect and harness the collective IQ of thousands of individuals across the Internet is simply amazing. More importantly, OSS evangelization scales with the size of the Internet much faster than our own evangelization efforts appear to scale.
How can Microsoft capture some of the rabid developer mindshare being focused on OSS products?
Some initial ideas include:
A former Microserf tells me that Microsoft departments see themselves almost as separate organizations. Parallel (and competitive) software development spurs both groups onward. The 'surviving' product is then what MS releases. This internal adversarial approach is taken so far that many crucial components do not have documented APIs -- primarily to ensure that the Dev team is not broken up and moved to other projects. MS is protected against perjury charges by the simple fact that their APIs are not even documented for internal MS use, so they are not holding anything back from competitors. }
Capturing OSS benefits -- Microsoft Internal Processes
What can Microsoft learn from the OSS example? More specifically: how can we recreate the OSS development environment internally? Different reviewers of this paper have consistently pointed out that, internally, we should view Microsoft as an idealized OSS community but, for various reasons, do not:
"a developer at Microsoft working on the OS can't scratch an itch they've got with Excel, neither can the Excel developer scratch their itch with the OS -- it would take them months to figure out how to build & debug & install, and they probably couldn't get proper source access anyway"
"People have to work on their parts independent of the rest so internal abstractions between components are well documented and well exposed/exported as well as being more robust because they have no idea how they are going to be called. The linux development system has evolved into allowing more devs to party on it without causing huge numbers of integration issues because robustness is present at every level. This is great, long term, for overall stability and it shows."
The trick, of course, is to capture these benefits without incurring the costs of the OSS process. These costs are typically the reasons such barriers were erected in the first place:
Extending OSS benefits -- Service Infrastructure
Supporting a platform & development community requires a lot of service infrastructure which OSS can't provide. This includes PDCs, MSDN, ADCU, ISVs, IHVs, etc.
The OSS community's "MSDN" equivalent, of course, is a loose confederation of web sites with API docs of varying quality. MS has an opportunity to really exploit the web for developer evangelization.
Generally, Microsoft wins by attacking the core weaknesses of OSS projects.
De-commoditize protocols & applications
David Stutz makes a very good point: in competing with Microsoft's level of desktop integration, "commodity protocols actually become the means of integration" for OSS projects.
A former Microserf adds: only half of the reason MS sends people to the W3C working groups relates to a desire to improve RFC standards. The other half is to give MS a sneak peek at upcoming standards so they can "extend" them in advance and claim that the `official' standard is `obsolete' when it emerges around the same time as their `extension'.
Once again, open-source advocates' best response is to point out to customers that when things are de-commoditized, vendors gain and customers lose.
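The "extend a commodity protocol" tactic can be sketched abstractly. This is a toy illustration only, assuming a hypothetical "Key: Value" wire format; it is not any real protocol or any actual Microsoft extension. The point is mechanical: a standards-compliant peer interoperates perfectly until one vendor ships an undocumented dialect, at which point only that vendor's software can talk to itself.

```python
def parse_commodity(message: str) -> dict[str, str]:
    """A strict parser for a hypothetical commodity 'Key: Value'
    protocol, as any independent implementation would write it."""
    headers = {}
    for line in message.splitlines():
        key, sep, value = line.partition(": ")
        if not sep:
            # An unknown, undocumented extension is indistinguishable
            # from a corrupt message to a standards-only peer.
            raise ValueError(f"malformed line: {line!r}")
        headers[key] = value
    return headers


standard_msg = "From: alice\nTo: bob"
extended_msg = "From: alice\nTo: bob\nX-Vendor-Blob <opaque framing>"

parse_commodity(standard_msg)      # commodity peers interoperate fine
try:
    parse_commodity(extended_msg)  # the "extended" dialect breaks them
except ValueError as err:
    print("interoperability lost:", err)
```

Once enough traffic uses the extended dialect, the commodity implementations appear "broken," and the protocol has effectively been de-commoditized.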
Some examples of Microsoft initiatives which are extending commodity protocols include:
Make Integration Compelling -- Especially on the server
The rise of specialty servers is a particularly potent and dire long term threat that directly affects our revenue streams. One of the keys to combating this threat is to create integrative scenarios that are valuable on the server platform. David Stutz points out:
The bottom line here is whoever has the best network-oriented integration technologies and processes will win the commodity server business. There is a convergence of embedded systems, mobile connectivity, and pervasive networking protocols that will make the number of servers (especially "specialist servers"??) explode. The general-purpose commodity client is a good business to be in - will it be dwarfed by the special-purpose commodity server business?
Many people provided datapoints, proofreading, thoughtful email, and analysis on both this paper and the Linux analysis:
Started revision table
Folded in comments from JoshCo
More fixes, printed copies for PaulMa review
(This annotated version has been renamed; there's a sequel, the Halloween II document, which marks up a second memo more specifically addressing Linux.)
Microsoft has publicly acknowledged that this memorandum is authentic, but dismissed it as a mere engineering study that does not define Microsoft policy.
However, the list of collaborators mentioned at the end includes some people who are known to be key players at Microsoft, and the document reads as though the research effort had the cooperation of top management; it may even have been commissioned as a policy white paper for Bill Gates's attention (the author seems to have expected that Gates would read it).
Either way, it provides us with a very valuable look past Microsoft's dismissive marketing spin about Open Source at what the company is actually thinking -- which, as you'll see, is an odd combination of astuteness and institutional myopia.
Despite some speculation that this was an intentional leak, that seems quite unlikely. The document is too damning; portions could be considered evidence of anti-competitive practices for the DOJ lawsuit. Also, the author's reaction when initially contacted suggested that Microsoft didn't have its story worked out in advance.
Since the author quoted my analyses of open-source community dynamics (The Cathedral and the Bazaar and Homesteading the Noosphere) extensively, it seems fair that I should respond on behalf of the community. :-)
Here are some notable quotes from the document, with hotlinks to where they are embedded. It's helpful to know that OSS is the author's abbreviation for Open Source Software. FUD, a characteristic Microsoft tactic, is explained here.
How To Read This Document:
Comments in this color, surrounded by curly brackets, are me (Eric S. Raymond). I have highlighted what I believe to be key points in the original text by setting them in a distinct color, and have inserted comments near these key points; you can skim the document by surfing through this comment index in sequence: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
I've embedded a few other comments in brown that aren't associated with key points and aren't indexed. These additional comments are only of interest if you're reading the entire document.
I have otherwise left the document completely as-is (not even correcting typos), so you can read what Bill Gates is reading about Open Source. It's a bit long, but persevere. An accurate fix on the opposition's thinking is worth some effort -- and there are one or two really startling insights buried in the corporatespeak.
I believe that far and away the most dangerous tactic advocated in this memorandum is the one embodied in the sinister phrase "de-commoditize protocols & applications".
If publication of this document does nothing else, I hope it will alert everyone to the stifling of competition, the erosion of consumer choice, the higher costs, and the monopoly lock-in that this tactic implies.
The parallel with Microsoft's attempted hijacking of Java, and its attempts to spoil the potential of this technology, should be obvious.
I have included an extended discussion of this point in my interlinear comments. To prevent this tactic from working, I believe open-source advocates must begin emphasizing these points:
The first (1.1) annotated version of the VinodV memorandum was prepared over the weekend of 31 Oct-1 Nov 1998. It is in recognition of the date, and my fond hope that publishing it will help realize Microsoft's worst nightmares, that I named it the Halloween Document.
The 1.2 version featured cleanup of non-ASCII characters.
The 1.3 version noted Microsoft's acknowledgement of authenticity.
The 1.4 version added a bit more analysis and the section on Threat Assessment.
The 1.5 version added some bits to the preamble.
The 1.6 version added more to one of the comments.
The 1.7 version added the reference to the Fuzz papers.
The 1.8 version added a link to the Halloween II document.
The 1.9 version adds a note about HTTP-DAV support.
The 1.10 version adds more on the question.
The 1.11 version adds perceptive comments from the Learning From Linux page by Tom Nadeau, an OS/2 advocate.
The 1.12 version adds illuminating comments by a former Microserf who wishes to remain nameless.
The 1.13 version adds a comment on work based on some thoughts by Tim Kynerd.
The 1.14 version adds a bit of cleanup.
The 1.15 version removed font changes that made the HTML hard to read on large screens.
The 1.16 version CSS-ized the document and changed the comment color to make life easier for people with red-green color blindness.
The 1.17 version cleaned up some markup that caused rendering problems on some browsers.
Note: some links have died, but have been left as they are for historical reasons.
A French translation is available offsite.}