|
"Spine, Doc Reform - SiSU Markup" (2008) [en] AMISSAH, Ralph
26:
SiSU has evolved; the current implementation focuses on one primary use-case, books and literary writings. However, the concept on which it is based has wider application. Here is a previously posted souvenir from my encounter with an IBM software evaluator in London in June 2004. It came about through a chance encounter with an IBM manager at a Linux Expo, who was curious about my interest in Gnu/Linux given my legal background... on hearing that I also wrote software, he suggested that maybe IBM should have a look at it. I was interested, and the meeting was set up... with an IBM Software Innovations evaluator. His response after the meeting:
27:
“Ralph
Good to meet with you today, I was very impressed with your software.
[colleague's name (also posted to an IBM colleague)] - in summary - Ralph has built an application that runs on linux and takes ASCII documents and pulls them apart in to the smallest constituent parts, storing them as XML, PDF and HTML, the HTML are hyperlinked up so the document can be browsed in its full form. the format and text data created is stored in a database.
This has potential in any place that needs the power of full text search whilst holding the structural concepts of the document i.e. legal, pharma, education, research.. which ones we need to figure out, ...”
78:
# SiSU master 8.0

title:
  main:     "SiSU"
  subtitle: "Markup"

creator:
  author:   "Amissah, Ralph"

date:
  created:   "2002-08-28"
  issued:    "2002-08-28"
  available: "2002-08-28"
  published: "2008-05-22"
  modified:  "2020-04-11"

rights:
  copyright: "Copyright (C) Ralph Amissah 2007, 2020"
  license:   "AGPL 3 (part of SiSU Spine documentation)"

classify:
  topic_register: "electronic documents:SiSU:document:markup;SiSU:document:markup;SiSU:manual:markup;electronic documents:SiSU:manual:markup"
  subject: "ebook, epublishing, electronic book, electronic publishing, electronic document, electronic citation, data structure, citation systems, search"

make:
  auto_num_top_at_level: "1"
  substitute: [ [ "[$]{2}\\{sisudoc\\}", "www.sisudoc.org" ] ]
  bold:    "Debian|SiSU"
  italics: "Linux|GPL|LaTeX|SQL"
  breaks:  "new=:B; break=1"
  home_button_text: "{SiSU}https://sisudoc.org; {sources / git}https://git.sisudoc.org/projects/"
  footer: "{SiSU}https://sisudoc.org; {git}https://git.sisudoc.org/projects"
93:
make:
  auto_num_top_at_level: "1"
  substitute: [ [ "[$]{2}\\{sisudoc\\}", "www.sisudoc.org" ] ]
  bold:    "Debian|SiSU"          # [regular expression of words/phrases to be made bold]
  italics: "Linux|GPL|LaTeX|SQL"  # [regular expression of words/phrases to italicise]
  breaks:  "new=:B; break=1"
  home_button_text: "{SiSU}https://sisudoc.org; {sources / git}https://git.sisudoc.org/gitweb/"
  footer: "{SiSU}https://sisudoc.org; {git}https://git.sisudoc.org"
  headings: text to match for each level (e.g. PART; Chapter; Section; Article; or another: none; BOOK|FIRST|SECOND; none; CHAPTER;)
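The substitute entry pairs a regular expression with replacement text: each match of the pattern in the document body is rewritten with the replacement before output is generated. As a rough sketch of what that pair means (the pattern and replacement are copied from the header above; the helper function, sample sentence, and the use of Python are purely illustrative and not part of SiSU or Spine):

import re

# Pattern/replacement pair from the "substitute:" entry above.
# In the header the backslashes are doubled because the pattern sits
# inside a quoted string; as a raw Python string it reads:
pattern, replacement = r"[$]{2}\{sisudoc\}", "www.sisudoc.org"

def apply_substitution(text: str) -> str:
    """Rewrite every occurrence of the marker "$${sisudoc}" in a block of text."""
    return re.sub(pattern, replacement, text)

sample = "Documentation is published at $${sisudoc} under the AGPL."
print(apply_substitution(sample))
# prints: Documentation is published at www.sisudoc.org under the AGPL.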
222:
{ sm_tux.png 64x80 }image

% various url linked images

{sm_tux.png 64x80 "a better way" }https://www.sisudoc.org/

{sm_GnuDebianLinuxRubyBetterWay.png 100x101 "Way Better - with Gnu/Linux, Debian and Ruby" }https://www.sisudoc.org/

{~^ sm_ruby_logo.png "Ruby" }https://www.ruby-lang.org/en/
230:
"The Wealth of Networks - How Social Production Transforms Markets and Freedom" (2006) [en] BENKLER, Yochai
20:
It is easy to miss these changes. They run against the grain of some of our most basic Economics 101 intuitions, intuitions honed in the industrial economy at a time when the only serious alternative seen was state Communism--an alternative almost universally considered unattractive today. The undeniable economic success of free software has prompted some leading-edge economists to try to understand why many thousands of loosely networked free software developers can compete with Microsoft at its own game and produce a massive operating system--GNU/Linux. That growing literature, consistent with its own goals, has focused on software and the particulars of the free and open-source software development communities, although Eric von Hippel's notion of “user-driven innovation” has begun to expand that focus to thinking about how individual need and creativity drive innovation at the individual level, and its diffusion through networks of like-minded individuals. The political implications of free software have been central to the free software movement and its founder, Richard Stallman, and were developed provocatively and with great insight by Eben Moglen. Free software is but one salient example of a much broader phenomenon. Why can fifty thousand volunteers successfully coauthor Wikipedia, the most serious online alternative to the Encyclopedia Britannica, and then turn around and give it away for free? Why do 4.5 million volunteers contribute their leftover computer cycles to create the most powerful supercomputer on Earth, SETI@Home? Without a broadly accepted analytic model to explain these phenomena, we tend to treat them as curiosities, perhaps transient fads, possibly of significance in one market segment or another. We [pg 6] should try instead to see them for what they are: a new mode of production emerging in the middle of the most advanced economies in the world-- those that are the most fully computer networked and for which information goods and services have come to occupy the highest-valued roles.
96:
An excellent example of a business strategy based on nonexclusivity is IBM's. The firm has obtained the largest number of patents every year from 1993 to 2004, amassing in total more than 29,000 patents. IBM has also, however, been one of the firms most aggressively engaged in adapting its business model to the emergence of free software. Figure 2.1 shows what happened to the relative weight of patent royalties, licenses, and sales in IBM's revenues and revenues that the firm described as coming from “Linux-related services.” Within a span of four years, the Linux-related services category moved from accounting for practically no revenues, to providing double the revenues from all patent-related sources, of the firm that has been the most patent-productive in the United States. IBM has described itself as investing more than a billion dollars in free software developers, hired programmers to help develop the Linux kernel and other free software; and donated patents to the Free Software Foundation. What this does for the firm is provide it with a better operating system for its server business-- making the servers better, faster, more reliable, and therefore more valuable to consumers. Participating in free software development has also allowed IBM to develop service relationships with its customers, building on free software to offer customer-specific solutions. In other words, IBM has combined both supply-side and demand-side strategies to adopt a nonproprietary business model that has generated more than $2 billion yearly of business [pg 47] for the firm. Its strategy is, if not symbiotic, certainly complementary to free software.
120:
Industrial organization literature provides a prominent place for the transaction costs view of markets and firms, based on insights of Ronald Coase and Oliver Williamson. On this view, people use markets when the gains from doing so, net of transaction costs, exceed the gains from doing the same thing in a managed firm, net of the costs of organizing and managing a firm. Firms emerge when the opposite is true, and transaction costs can best be reduced by [pg 60] bringing an activity into a managed context that requires no individual transactions to allocate this resource or that effort. The emergence of free and open-source software, and the phenomenal success of its flagships, the GNU/Linux operating system, the Apache Web server, Perl, and many others, should cause us to take a second look at this dominant paradigm. 18 Free software projects do not rely on markets or on managerial hierarchies to organize production. Programmers do not generally participate in a project because someone who is their boss told them to, though some do. They do not generally participate in a project because someone offers them a price to do so, though some participants do focus on long-term appropriation through money-oriented activities, like consulting or service contracts. However, the critical mass of participation in projects cannot be explained by the direct presence of a price or even a future monetary return. This is particularly true of the all-important, microlevel decisions: who will work, with what software, on what project. In other words, programmers participate in free software projects without following the signals generated by market-based, firm-based, or hybrid models. In chapter 2 I focused on how the networked information economy departs from the industrial information economy by improving the efficacy of nonmarket production generally. Free software offers a glimpse at a more basic and radical challenge. It suggests that the networked environment makes possible a new modality of organizing production: radically decentralized, collaborative, and nonproprietary; based on sharing resources and outputs among widely distributed, loosely connected individuals who cooperate with each other without relying on either market signals or managerial commands. This is what I call “commons-based peer production.”
18.For an excellent history of the free software movement and of open-source development, see Glyn Moody, Rebel Code: Inside Linux and the Open Source Revolution (New York: Perseus Publishing, 2001).
128:
Free software has played a critical role in the recognition of peer production, because software is a functional good with measurable qualities. It can be more or less authoritatively tested against its market-based competitors. And, in many instances, free software has prevailed. About 70 percent of Web server software, in particular for critical e-commerce sites, runs on the Apache Web server--free software. 21 More than half of all back-office e-mail functions are run by one free software program or another. Google, Amazon, and CNN.com, for example, run their Web servers on the GNU/Linux operating system. They do this, presumably, because they believe this peer-produced operating system is more reliable than the alternatives, not because the system is “free.” It would be absurd to risk a higher rate of failure in their core business activities in order to save a few hundred thousand dollars on licensing fees. Companies like IBM and Hewlett Packard, consumer electronics manufacturers, as well as military and other mission-critical government agencies around the world have begun to adopt business and service strategies that rely on and extend free software. They do this because it allows them to build better equipment, sell better services, or better fulfill their public role, even though they do not control the software development process and cannot claim proprietary rights of exclusion in the products of their contributions.
21.Netcraft, April 2004 Web Server Survey, http://news.netcraft.com/archives/web_server_survey.html.
130:
The next major step came when a person with a more practical, rather than prophetic, approach to his work began developing one central component of the operating system--the kernel. Linus Torvalds began to share the early implementations of his kernel, called Linux, with others, under the GPL. These others then modified, added, contributed, and shared among themselves these pieces of the operating system. Building on top of Stallman's foundation, Torvalds crystallized a model of production that was fundamentally [pg 66] different from those that preceded it. His model was based on voluntary contributions and ubiquitous, recursive sharing; on small incremental improvements to a project by widely dispersed people, some of whom contributed a lot, others a little. Based on our usual assumptions about volunteer projects and decentralized production processes that have no managers, this was a model that could not succeed. But it did.
133:
The most surprising thing that the open source movement has shown, in real life, is that this simple model can operate on very different scales, from the small, three-person model I described for simple projects, up to the many thousands of people involved in writing the Linux kernel and the GNU/Linux operating system--an immensely difficult production task. SourceForge, the most popular hosting-meeting place of such projects, has close to 100,000 registered projects, and nearly a million registered users. The economics of this phenomenon are complex. In the larger-scale models, actual organization form is more diverse than the simple, three-person model. In particular, in some of the larger projects, most prominently the Linux kernel development process, a certain kind of meritocratic hierarchy is clearly present. However, it is a hierarchy that is very different in style, practical implementation, and organizational role than that of the manager in the firm. I explain this in chapter 4, as part of the analysis of the organizational forms of peer production. For now, all we need is a broad outline of how peer-production projects look, as we turn to observe case studies of kindred production models in areas outside of software. [pg 68]
146:
First and foremost, the Wikipedia project is self-consciously an encyclopedia-- rather than a dictionary, discussion forum, web portal, etc. Wikipedia's participants [pg 73] commonly follow, and enforce, a few basic policies that seem essential to keeping the project running smoothly and productively. First, because we have a huge variety of participants of all ideologies, and from around the world, Wikipedia is committed to making its articles as unbiased as possible. The aim is not to write articles from a single objective point of view--this is a common misunderstanding of the policy--but rather, to fairly and sympathetically present all views on an issue. See “neutral point of view” page for further explanation. 26
26.Yochai Benkler, “Coase's Penguin, or Linux and the Nature of the Firm,” Yale Law Journal 112 (2001): 369.
170:
Most of the distributed computing projects provide a series of utilities and statistics intended to allow contributors to attach meaning to their contributions in a variety of ways. The projects appear to be eclectic in their implicit social and psychological theories of the motivations for participation in the projects. Sites describe the scientific purpose of the models and the specific scientific output, including posting articles that have used the calculations. In these components, the project organizers seem to assume some degree of taste for generalized altruism and the pursuit of meaning in contributing to a common goal. They also implement a variety of mechanisms to reinforce the sense of purpose, such as providing aggregate statistics about the total computations performed by the project as a whole. However, the sites also seem to assume a healthy dose of what is known in the anthropology of gift literature as agonistic giving--that is, giving intended to show that the person giving is greater than or more important than others, who gave less. For example, most of the sites allow individuals to track their own contributions, and provide “user of the month”-type rankings. An interesting characteristic of quite a few of these is the ability to create “teams” of users, who in turn compete on who has provided more cycles or work units. SETI@home in particular taps into ready-made nationalisms, by offering country-level statistics. Some of the team names on Folding@home also suggest other, out-of-project bonding measures, such as national or ethnic bonds (for example, Overclockers Australia or Alliance Francophone), technical minority status (for example, Linux or MacAddict4Life), and organizational affiliation (University of Tennessee or University of Alabama), as well as shared cultural reference points (Knights who say Ni!). In addition, the sites offer platforms for simple connectedness and mutual companionship, by offering user fora to discuss the science and the social participation involved. It is possible that these sites are shooting in the dark, as far as motivating sharing is concerned. It is also possible, however, that they have tapped into a valuable insight, which is that people behave sociably and generously for all sorts of different reasons, and that at least in this domain, adding reasons to participate--some agonistic, some altruistic, some reciprocity-seeking--does not have a crowding-out effect.
183:
The increasing salience of nonmarket production in general, and peer production in particular, raises three puzzles from an economics perspective. First, why do people participate? What is their motivation when they work for or contribute resources to a project for which they are not paid or directly rewarded? Second, why now, why here? What, if anything, is special about the digitally networked environment that would lead us to believe that peer production is here to stay as an important economic phenomenon, as opposed to a fad that will pass as the medium matures and patterns of behavior settle toward those more familiar to us from the economy of steel, coal, and temp agencies. Third, is it efficient to have all these people sharing their computers and donating their time and creative effort? Moving through the answers to these questions, it becomes clear that the diverse and complex patterns of behavior observed on the Internet, from Viking ship hobbyists to the developers of the GNU/Linux operating system, are perfectly consistent with much of our contemporary understanding of human economic behavior. We need to assume no fundamental change in the nature of humanity; [pg 92] we need not declare the end of economics as we know it. We merely need to see that the material conditions of production in the networked information economy have changed in ways that increase the relative salience of social sharing and exchange as a modality of economic production. That is, behaviors and motivation patterns familiar to us from social relations generally continue to cohere in their own patterns. What has changed is that now these patterns of behavior have become effective beyond the domains of building social relations of mutual interest and fulfilling our emotional and psychological needs of companionship and mutual recognition. They have come to play a substantial role as modes of motivating, informing, and organizing productive behavior at the very core of the information economy. And it is this increasing role as a modality of information production that ripples through the rest of this book. It is the feasibility of producing information, knowledge, and culture through social, rather than market and proprietary relations--through cooperative peer production and coordinate individual action--that creates the opportunities for greater autonomous action, a more critical culture, a more discursively engaged and better informed republic, and perhaps a more equitable global community.
205:
Cooperation in peer-production processes is usually maintained by some combination of technical architecture, social norms, legal rules, and a technically backed hierarchy that is validated by social norms. Wikipedia is the strongest example of a discourse-centric model of cooperation based on social norms. However, even Wikipedia includes, ultimately, a small number of people with system administrator privileges who can eliminate accounts or block users in the event that someone is being genuinely obstructionist. This technical fallback, however, appears only after substantial play has been given to self-policing by participants, and to informal and quasi-formal community-based dispute resolution mechanisms. Slashdot, by contrast, provides a strong model of a sophisticated technical system intended to assure that no one can “defect” from the cooperative enterprise of commenting and moderating comments. It limits behavior enabled by the system to avoid destructive behavior before it happens, rather than policing it after the fact. The Slash code does this by technically limiting the power any given person has to moderate anyone else up or down, and by making every moderator the subject of a peer review system whose judgments are enforced technically-- that is, when any given user is described by a sufficiently large number of other users as unfair, that user automatically loses the technical ability to moderate the comments of others. The system itself is a free software project, licensed under the GPL (General Public License)--which is itself the quintessential example of how law is used to prevent some types of defection from the common enterprise of peer production of software. The particular type of defection that the GPL protects against is appropriation of the joint product by any single individual or firm, the risk of which would make it less attractive for anyone to contribute to the project to begin with. The GPL assures that, as a legal matter, no one who contributes to a free software project need worry that some other contributor will take the project and make it exclusively their own. The ultimate quality judgments regarding what is incorporated into the “formal” releases of free software projects provide the clearest example of the extent to which a meritocratic hierarchy can be used to integrate diverse contributions into a finished single product. In the case of the Linux kernel development project (see chapter 3), it was always within the power of Linus Torvalds, who initiated the project, to decide which contributions should be included in a new release, and which should not. But it is a funny sort of hierarchy, whose quirkiness Steve Weber [pg 105] well explicates. 38 Torvalds's authority is persuasive, not legal or technical, and certainly not determinative. He can do nothing, except persuade others, to prevent them from developing anything they want and adding it to their kernel, or from distributing that alternative version of the kernel. There is nothing he can do to prevent the entire community of users, or some subsection of it, from rejecting his judgment about what ought to be included in the kernel. Anyone is legally free to do as they please. So these projects are based on a hierarchy of meritocratic respect, on social norms, and, to a great extent, on the mutual recognition by most players in this game that it is to everybody's advantage to have someone overlay a peer review system with some leadership.
38.Steve Weber, The Success of Open Source (Cambridge, MA: Harvard University Press, 2004).
247:
Consider the example I presented in chapter 2 of IBM's relationship to the free and open source software development community. IBM, as I explained there, has shown more than $2 billion a year in “Linux-related revenues.” Prior to IBM's commitment to adapting to what the firm sees as the inevitability of free and open source software, the company either developed in house or bought from external vendors the software it needed as part of its hardware business, on the one hand, and its software services-- customization, enterprise solutions, and so forth--on the other hand. In each case, the software development follows a well-recognized supply chain model. Through either an employment contract or a supply contract the [pg 124] company secures a legal right to require either an employee or a vendor to deliver a given output at a given time. In reliance on that notion of a supply chain that is fixed or determined by a contract, the company turns around and promises to its clients that it will deliver the integrated product or service that includes the contracted-for component. With free or open source software, that relationship changes. IBM is effectively relying for its inputs on a loosely defined cloud of people who are engaged in productive social relations. It is making the judgment that the probability that a sufficiently good product will emerge out of this cloud is high enough that it can undertake a contractual obligation to its clients, even though no one in the cloud is specifically contractually committed to it to produce the specific inputs the firm needs in the time-frame it needs it. This apparent shift from a contractually deterministic supply chain to a probabilistic supply chain is less dramatic, however, than it seems. Even when contracts are signed with employees or suppliers, they merely provide a probability that the employee or the supplier will in fact supply in time and at appropriate quality, given the difficulties of coordination and implementation. A broad literature in organization theory has developed around the effort to map the various strategies of collaboration and control intended to improve the likelihood that the different components of the production process will deliver what they are supposed to: from early efforts at vertical integration, to relational contracting, pragmatic collaboration, or Toyota's fabled flexible specialization. The presence of a formalized enforceable contract, for outputs in which the supplier can claim and transfer a property right, may change the probability of the desired outcome, but not the fact that in entering its own contract with its clients, the company is making a prediction about the required availability of necessary inputs in time. When the company turns instead to the cloud of social production for its inputs, it is making a similar prediction. And, as with more engaged forms of relational contracting, pragmatic collaborations, or other models of iterated relations with co-producers, the company may engage with the social process in order to improve the probability that the required inputs will in fact be produced in time. In the case of companies like IBM or Red Hat, this means, at least partly, paying employees to participate in the open source development projects. But managing this relationship is tricky. 
The firms must do so without seeking to, or even seeming to seek to, take over the project; for to take over the project in order to steer it more “predictably” toward the firm's needs is to kill the goose that lays the golden eggs. For IBM and more recently Nokia, supporting [pg 125] the social processes on which they rely has also meant contributing hundreds of patents to the Free Software Foundation, or openly licensing them to the software development community, so as to extend the protective umbrella created by these patents against suits by competitors. As the companies that adopt this strategic reorientation become more integrated into the peer-production process itself, the boundary of the firm becomes more porous. Participation in the discussions and governance of open source development projects creates new ambiguity as to where, in relation to what is “inside” and “outside” of the firm boundary, the social process is. In some cases, a firm may begin to provide utilities or platforms for the users whose outputs it then uses in its own products. The Open Source Development Group (OSDG), for example, provides platforms for Slashdot and SourceForge. In these cases, the notion that there are discrete “suppliers” and “consumers,” and that each of these is clearly demarcated from the other and outside of the set of stable relations that form the inside of the firm becomes somewhat attenuated.
264:
Second Life and Jedi Saga are merely examples, perhaps trivial ones, within the entertainment domain. They represent a shift in possibilities open both to human beings in the networked information economy and to the firms that sell them the tools for becoming active creators and users of their information environment. They are stark examples because of the centrality of the couch potato as the image of human action in television culture. Their characteristics are representative of the shift in the individual's role that is typical of the networked information economy in general and of peer production in particular. Linus Torvalds, the original creator of the Linux kernel [pg 137] development community, was, to use Eric Raymond's characterization, a designer with an itch to scratch. Peer-production projects often are composed of people who want to do something in the world and turn to the network to find a community of peers willing to work together to make that wish a reality. Michael Hart had been working in various contexts for more than thirty years when he--at first gradually, and more recently with increasing speed--harnessed the contributions of hundreds of volunteers to Project Gutenberg in pursuit of his goal to create a globally accessible library of public domain e-texts. Charles Franks was a computer programmer from Las Vegas when he decided he had a more efficient way to proofread those e-texts, and built an interface that allowed volunteers to compare scanned images of original texts with the e-texts available on Project Gutenberg. After working independently for a couple of years, he joined forces with Hart. Franks's facility now clears the volunteer work of more than one thousand proofreaders, who proof between two hundred and three hundred books a month. Each of the thousands of volunteers who participate in free software development projects, in Wikipedia, in the Open Directory Project, or in any of the many other peer-production projects, is living some version, as a major or minor part of their lives, of the possibilities captured by the stories of a Linus Torvalds, a Michael Hart, or The Jedi Saga. Each has decided to take advantage of some combination of technical, organizational, and social conditions within which we have come to live, and to become an active creator in his or her world, rather than merely to accept what was already there. The belief that it is possible to make something valuable happen in the world, and the practice of actually acting on that belief, represent a qualitative improvement in the condition of individual freedom. They mark the emergence of new practices of self-directed agency as a lived experience, going beyond mere formal permissibility and theoretical possibility.
574:
The software industry offers a baseline case because of the proven large scope for peer production in free software. As in other information-intensive industries, government funding and research have played an enormously important role, and university research provides much of the basic science. However, the relative role of individuals, nonprofits, and nonproprietary market producers is larger in software than in the other sectors. First, two-thirds of revenues derived from software in the United States are from services [pg 321] and do not depend on proprietary exclusion. Like IBM's “Linux-related services” category, for which the company claimed more than two billion dollars of revenue for 2003, these services do not depend on exclusion from the software, but on charging for service relationships. 111 Second, some of the most basic elements of the software environment--like standards and protocols--are developed in nonprofit associations, like the Internet Engineering Taskforce or the World Wide Web Consortium. Third, the role of individuals engaged in peer production--the free and open-source software development communities--is very large. Together, these make for an organizational ecology highly conducive to nonproprietary production, whose outputs can be freely usable around the globe. The other sectors have some degree of similar components, and commons-based strategies for development can focus on filling in the missing components and on leveraging nonproprietary components already in place.
111.For the sources of numbers for the software industry, see chapter 2 in this volume. IBM numbers, in particular, are identified in figure 2.1.
607:
The licensing or pooling component is more proactive, and is likely the most significant of the project. BIOS is setting up a licensing and pooling arrangement, “primed” by CAMBIA's own significant innovations in tools, which are licensed to all of the initiative's participants on a free model, with grant-back provisions that perform an openness-binding function similar to copyleft. 124 In coarse terms, this means that anyone who builds upon the [pg 343] contributions of others must contribute improvements back to the other participants. One aspect of this model is that it does not assume that all research comes from academic institutions or from traditional government-funded, nongovernmental, or intergovernmental research institutes. It tries to create a framework that, like the open-source development community, engages commercial and noncommercial, public and private, organized and individual participants into a cooperative research network. The platform for this collaboration is “BioForge,” styled after Sourceforge, one of the major free and open-source software development platforms. The commitment to engage many different innovators is most clearly seen in the efforts of BIOS to include major international commercial providers and local potential commercial breeders alongside the more likely targets of a commons-based initiative. Central to this move is the belief that in agricultural science, the basic tools can, although this may be hard, be separated from specific applications or products. All actors, including the commercial ones, therefore have an interest in the open and efficient development of tools, leaving competition and profit making for the market in applications. At the other end of the spectrum, BIOS's focus on making tools freely available is built on the proposition that innovation for food security involves more than biotechnology alone. It involves environmental management, locale-specific adaptations, and social and economic adoption in forms that are locally and internally sustainable, as opposed to dependent on a constant inflow of commoditized seed and other inputs. The range of participants is, then, much wider than envisioned by PIPRA or the GCP. It ranges from multinational corporations through academic scientists, to farmers and local associations, pooling their efforts in a communications platform and institutional model that is very similar to the way in which the GNU/Linux operating system has been developed. As of this writing, the BIOS project is still in its early infancy, and cannot be evaluated by its outputs. However, its structure offers the crispest example of the extent to which the peer-production model in particular, and commons-based production more generally, can be transposed into other areas of innovation at the very heart of what makes for human development--the ability to feed oneself adequately.
124.Wim Broothaertz et al., “Gene Transfer to Plants by Diverse Species of Bacteria,” Nature 433 (2005): 629.
734:
Another case did not end so well for the defendant. It involved a suit by the eight Hollywood studios against a hacker magazine, 2600. The studios sought an injunction prohibiting 2600 from making available a program called DeCSS, which circumvents the copy-protection scheme used to control access to DVDs, named CSS. CSS prevents copying or any use of DVDs unauthorized by the vendor. DeCSS was written by a fifteen-year-old Norwegian named Jon Johansen, who claimed (though the district court discounted his claim) to have written it as part of an effort to create a DVD player for GNU/Linux-based machines. A copy of DeCSS, together with a story about it, was posted on the 2600 site. The industry obtained an injunction against 2600, prohibiting not only the posting of DeCSS, but also its linking to other sites that post the program--that is, telling users where they can get the program, rather than actually distributing a circumvention program. That decision may or may not have been correct on the merits. There are strong arguments in favor of the proposition that making DVDs compatible with GNU/Linux systems is a fair use. There are strong arguments that the DMCA goes much farther than it needs to in restricting speech of software programmers and Web authors, and so is invalid under the First Amendment. The court rejected these arguments.
"Viral Spiral - How the Commoners Built a Digital Republic of Their Own" (2008) [en] BOLLIER, David
30:
The salience of electronic commerce has, at times, obscured an important fact — that the commons is one of the most potent forces driving innovation in our time. Individuals working with one another via social networks are a growing force in our economy and society. This phenomenon has many manifestations, and goes by many names — “peer production,” “social production,” “smart mobs,” the “wisdom of crowds,” “crowdsourcing,” and “the commons.” 3 The basic point is that socially created value is increasingly competing with conventional markets, as GNU/Linux has famously shown. Through an open, accessible commons, one can efficiently tap into the “wisdom of the crowd,” nurture experimentation, accelerate innovation, and foster new forms of democratic practice.
3.“Social production” and “peer production” are associated with the work of Yale law professor Yochai Benkler, especially in his 2006 book, The Wealth of Networks. “Smart mobs” is a coinage of Howard Rheingold, author of a 2003 book by the same name. “Crowdsourcing” is the name of a blog run by Jeff Howe and the title of a June 2006 Wired article on the topic. “Wisdom of crowds” is a term coined by James Surowiecki and used as the title of his 2004 book.
51:
By the late 1990s, this legal scholarship was in full flower, Internet usage was soaring, and the free software movement produced its first significant free operating system, GNU/Linux. The commoners were ready to take practical action. Lessig, then a professor at Harvard Law School, engineered a major constitutional test case, Eldred v. Reno (later Eldred v. Ashcroft), to try to strike down a twenty-year extension of copyright terms — a case that reached the U.S. Supreme Court in 2002. At the same time, Lessig and a number of his colleagues, including MIT computer scientist Hal Abelson, Duke law professor James Boyle, and Villanova law professor Michael W. Carroll, came together to explore innovative ways to protect the public domain. It was a rare moment in history in which an ad hoc salon of brilliant, civic-minded thinkers from diverse fields of endeavor found one another, gave themselves the freedom to dream big thoughts, and embarked upon practical plans to make them real.
61:
Open business. One of the most surprising recent developments has been the rise of “open business” models. Unlike traditional businesses that depend upon proprietary technology or content, a new breed of businesses see lucrative opportunities in exploiting open, participatory networks. The pioneer in this strategy was IBM, which in 2000 embraced GNU/Linux, the open-source computer operating system, as the centerpiece of its service and consulting business. 16 Dozens of small, Internet-based companies are now exploiting open networks to build more flexible, sustainable enterprises.
16.Steve Lohr, “IBM to Give Free Access to 500 Patents,” New York Times, July 11, 2005. See also Steven Weber, The Success of Open Source (Cambridge, Mass.: Harvard University Press, 2004), pp. 202–3. See also Pamela Samuelson, “IBM’s Pragmatic Embrace of Open Source,” Communications of the ACM 49, no. 21 (October 2006).
118:
Stallman’s atavistic zeal to preserve the hacker community, embodied in the GPL, did not immediately inspire others. In fact, most of the tech world was focused on how to convert software into a marketable product. Initially, the GPL functioned like a spore lying dormant, waiting until a more hospitable climate could activate its full potential. Outside of the tech world, few people knew about the GPL, or cared.~[* The GPL is not the only software license around, of course, although it was, and remains, the most demanding in terms of protecting the commons of code. Other popular open-source licenses include the MIT, BSD, and Apache licenses, but each of these permits, but does not require, that the source code of derivative works also be freely available. The GPL, however, became the license used for Linux, a quirk of history that has had far-reaching implications.]~ And even most techies were oblivious to the political implications of free software.
120:
In 1991, Torvalds was a twenty-one-year-old computer science student at the University of Helsinki, in Finland. Frustrated by the expense and complexity of Unix, and its inability to work on personal computers, Torvalds set out to build a Unix-like operating system on his IBM AT, which had a 33-megahertz processor and four megabytes of memory. Torvalds released a primitive version of his program to an online newsgroup and was astonished when a hundred hackers responded within a few months to offer suggestions and additions. Over the next few years, hundreds of additional programmers joined the project, which he named “Linux” by combining his first name, “Linus,” with “Unix.” The first official release of his program came in 1994. 27
27.One useful history of Torvalds and Linux is Glyn Moody, Rebel Code: Inside Linux and the Open Source Revolution (Cambridge, MA: Perseus, 2001).
121:
The Linux kernel, when combined with the GNU programs developed by Stallman and his free software colleagues, constituted a complete computer operating system — an astonishing and unexpected achievement. Even wizened computer scientists could hardly believe that something as complex as an operating system could be developed by thousands of strangers dispersed around the globe, cooperating via the Internet. Everyone assumed that a software program had to be organized by a fairly small group of leaders actively supervising the work of subordinates through a hierarchical authority system — that is, by a single corporation. Yet here was a virtual community of hackers, with no payroll or corporate structure, coming together in a loose, voluntary, quasi-egalitarian way, led by leaders who had earned the trust and respect of some highly talented programmers.
122:
The real innovation of Linux, writes Eric S. Raymond, a leading analyst of the technology, was “not technical, but sociological”:
123:
Linux was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet. Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers. To the amazement of almost everyone, this worked quite well. 28
28.Eric S. Raymond, “A Brief History of Hackerdom,” http://www.catb.org/~esr/writings/cathedral-bazaar/hacker-history/ar01s06.html.
124:
The Free Software Foundation had a nominal project to develop a kernel, but it was not progressing very quickly. The Linux kernel, while primitive, “was running and ready for experimentation,” writes Steven Weber in his book The Success of Open Source: “Its crude functionality was interesting enough to make people believe that it could, with work, evolve into something important. That promise was critical and drove the broader development process from early on.” 29
29.Steven Weber, The Success of Open Source (Cambridge, MA: Harvard University Press, 2004), p. 100.
125:
There were other powerful forces driving the development of Linux. Throughout the 1990s, Microsoft continued to leverage its monopoly grip over the operating system of personal computers, eventually attracting the attention of the U.S. Department of Justice, which filed an antitrust lawsuit against the company. Software competitors such as Hewlett-Packard, Sun Microsystems, and IBM found that rallying behind an open-source alternative — one that was legally protected against being taken private by anyone else— offered a terrific way to compete against Microsoft.
127:
Given these problems, there was great appeal in a Unix-like operating system with freely available source code. Linux helped address the fragmentation of Unix implementations and the difficulties of competing against the Microsoft monopoly. Knowing that Linux was GPL’d, hackers, academics, and software companies could all contribute to its development without fear that someone might take it private, squander their contributions, or use it in hostile ways. A commons of software code offered a highly pragmatic solution to a market dysfunction.
128:
Stallman’s GNU Project and Torvalds’s Linux software were clearly synergistic, but they represented very different styles. The GNU Project was a slower, more centrally run project compared to the “release early and often” developmental approach used by the Linux community. In addition, Stallman and Torvalds had temperamental and leadership differences. Stallman has tended to be more overbearing and directive than Torvalds, who does not bring a political analysis to the table and is said to be more tolerant of diverse talents. 31
31.Torvalds included a brief essay, “Linux kernel management style,” dated October 10, 2004, in the files of the Linux source code, with the annotation, “Wisdom passed down the ages on clay tablets.” It was included as an epilogue in the book Open Life: The Philosophy of Open Source, by Henrik Ingo, and is available at http://www.openlife.cc/node/43.
129:
So despite their natural affinities, the Free Software Community and the Linux community never found their way to a grand merger. Stallman has applauded Linux’s success, but he has also resented the eclipse of GNU programs used in the operating system by the Linux name. This prompted Stallman to rechristen the program “GNU/Linux,” a formulation that many people now choose to honor.
130:
Yet many hackers, annoyed at Stallman’s political crusades and crusty personal style, committed their own linguistic raid by renaming “free software” as “open source software,” with a twist. As GNU/Linux became more widely used in the 1990s, and more corporations began to seriously consider using it, the word free in “free software” was increasingly seen as a problem. The “free as in free speech, not as in free beer” slogan never quite dispelled popular misconceptions about the intended sense of the word free. Corporate information technology (IT) managers were highly wary about putting mission-critical corporate systems in the hands of software that could be had for free. Imagine telling the boss that you put the company’s fate in the hands of a program you downloaded from the Internet for free!
132:
One response to this issue was the rebranding of free software as “open-source” software. A number of leading free software programmers, most notably Bruce Perens, launched an initiative to set forth a consensus definition of software that would be called “open source.” At the time, Perens was deeply involved with a community of hackers in developing a version of Linux known as the Debian GNU/Linux distribution. Perens and other leading hackers not only wanted to shed the off-putting political dimensions of “free software,” they wanted to help people deal with the confusing proliferation of licenses. A lot of software claimed to be free, but who could really tell what that meant when the terms were so complicated and legalistic?
137:
The Linux world behaves in many respects like a free market or an ecology, a collection of selfish agents attempting to maximize utility which in the process produces a self-correcting spontaneous order more elaborate and efficient than any amount of central planning could have achieved. . . . The utility function Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers. 36
36.Eric Raymond, “The Cathedral and the Bazaar,” available at http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s11.html.
140:
Red Hat, a company founded in 1993 by Robert Young, was the first to recognize the potential of selling a custom version (or “distribution”) of GNU/Linux as a branded product, along with technical support. A few years later, IBM became one of the first large corporations to recognize the social realities of GNU/Linux and its larger strategic and competitive implications in the networked environment. In 1998 IBM presciently saw that the new software development ecosystem was becoming far too variegated and robust for any single company to dominate. It understood that its proprietary mainframe software could not dominate the burgeoning, diversified Internet-driven marketplace, and so the company adopted the open-source Apache Web server program in its new line of WebSphere business software.
141:
It was a daring move that began to bring the corporate and open-source worlds closer together. Two years later, in 2000, IBM announced that it would spend $1 billion to help develop GNU/Linux for its customer base. IBM shrewdly realized that its customers wanted to slash costs, overcome system incompatibilities, and avoid expensive technology “lock-ins” to single vendors. GNU/Linux filled this need well. IBM also realized that GNU/Linux could help it compete against Microsoft. By assigning its property rights to the commons, IBM could eliminate expensive property rights litigation, entice other companies to help it improve the code (they could be confident that IBM could not take the code private), and unleash a worldwide torrent of creative energy focused on GNU/Linux. Way ahead of the curve, IBM decided to reposition itself for the emerging networked marketplace by making money through tech service and support, rather than through proprietary software alone. 38
38.Andrew Leonard, “How Big Blue Fell for Linux,” Salon.com, September 12, 2000, available at http://www.salon.com/tech/fsp/2000/09/12/chapter_7_part_one.print.html. The competitive logic behind IBM’s moves is explored in Pamela Samuelson, “IBM’s Pragmatic Embrace of Open Source,” Communications of the ACM 49, no. 21 (October 2006), and Robert P. Merges, “A New Dynamism in the Public Domain,” University of Chicago Law Review 71, no. 183 (Winter 2004).
142:
It was not long before other large tech companies realized the benefits of going open source. Amazon and eBay both saw that they could not affordably expand their large computer infrastructures without converting to GNU/Linux. GNU/Linux is now used in everything from Motorola cell phones to NASA supercomputers to laptop computers. In 2005, BusinessWeek magazine wrote, “Linux may bring about the greatest power shift in the computer industry since the birth of the PC, because it lets companies replace expensive proprietary systems with cheap commodity servers.” 39 As many as one-third of the programmers working on open-source projects are corporate employees, according to a 2002 survey. 40
39.Steve Hamm, “Linux Inc.,” BusinessWeek, January 31, 2005.
40.Cited by Elliot Maxwell in “Open Standards Open Source and Open Innovation,” note 80, Berlecon Research, Free/Libre Open Source Software: Survey and Study — Firms’ Open Source Activities: Motivations and Policy Implications, FLOSS Final Report, Part 2, at www.berlecon.de/studien/downloads/200207FLOSS_Activities.pdf.
143:
With faster computing speeds and cost savings of 50 percent or more on hardware and 20 percent on software, GNU/Linux has demonstrated the value proposition of the commons. Open source demonstrated that it can be cheaper and more efficacious to collaborate in the production of a shared resource based on common standards than to strictly buy and own it as private property.
144:
But how does open source work without a conventional market apparatus? The past few years have seen a proliferation of sociological and economic theories about how open-source communities create value. One formulation, by Rishab Ghosh, compares free software development to a “cooking pot,” in which you can give a little to the pot yet take a lot — with no one else being the poorer. “Value” is not measured economically at the point of transaction, as in a market, but in the nonmonetary flow of value that a project elicits (via volunteers) and generates (through shared software). 41 Another important formulation, which we will revisit later, comes from Harvard law professor Yochai Benkler, who has written that the Internet makes it cheap and easy to access expertise anywhere on the network, rendering conventional forms of corporate organization costly and cumbersome for many functions. Communities based on social trust and reciprocity are capable of mobilizing creativity and commitment in ways that market incentives often cannot — and this can have profound economic implications. 42 Benkler’s analysis helps explain how a global corps of volunteers could create an operating system that, in many respects, outperforms software created by a well-paid army of Microsoft employees.
41.Rishab Aiyer Ghosh, “Cooking Pot Markets and Balanced Value Flows,” in Rishab Aiyer Ghosh, ed., CODE: Collaborative Ownership and the Digital Economy (Cambridge, MA: MIT Press, 2005), pp. 153–68.
42.See, e.g., Benkler, “Coase’s Penguin, or Linux and the Nature of the Firm,” Yale Law Journal 112, no. 369 (2002); Benkler, “ ‘Sharing Nicely’: On Shareable Goods and the Emergence of Sharing as a Modality of Economic Production,” Yale Law Journal 114, no. 273 (2004).
148:
Nearly twenty years after the introduction of the GPL, free software has expanded phenomenally. It has given rise to countless FOSS software applications, many of which are major viral hits such as Thunderbird (e-mail), Firefox (Web browser), Ubuntu (desktop GNU/Linux), and Asterisk (Internet telephony). FOSS has set in motion, directly or indirectly, some powerful viral spirals such as the Creative Commons licenses, the iCommons/free culture movement, the Science Commons project, the open educational resource movement, and a new breed of open-business ventures. Yet Richard Stallman sees little connection between these various “open” movements and free software; he regards “open” projects as too vaguely defined to guarantee that their work is truly “free” in the free software sense of the term. “Openness and freedom are not the same thing,” said Stallman, who takes pains to differentiate free software from open-source software, emphasizing the political freedoms that lie at the heart of the former. 44
44.Interview with Richard Stallman, January 21, 2008.
191:
The DMCA has been roundly denounced by software programmers, music fans, and Internet users for prohibiting them from making personal copies, fair use excerpts, and doing reverse engineering on software, even with legally purchased products. Using digital rights management systems sanctioned by the DMCA, for example, many CDs and DVDs are now coded with geographic codes that prevent consumers from operating them on devices on other continents. DVDs may contain code to prevent them from running on Linux-based computers. Digital journals may “expire” after a given period of time, wiping out library holdings unless another payment is made. Digital textbooks may go blank at the end of the school year, preventing their reuse or resale.
334:
Initially, the goal was more exploratory and improvisational — an earnest attempt to find leverage points for dealing with the intolerable constraints of copyright law. Fortunately, there were instructive precedents, most notably free software, which by 2000, in its open-source guise, was beginning to find champions among corporate IT managers and the business press. Mainstream programmers and corporations started to recognize the virtues of GNU/Linux and open-source software more generally. Moreover, a growing number of people were internalizing the lessons of Code, that the architecture of software and the Internet really does matter.
366:
For all of its brainpower and commitment, Lessig’s rump caucus might not have gotten far if it had not found a venturesome source of money, the Center for the Public Domain. The center — originally the Red Hat Center — was a foundation created by entrepreneur Robert Young in 2000 following a highly successful initial public offering of Red Hat stock. As the founder of Red Hat, a commercial vendor of GNU/Linux, Young was eager to repay his debt to the fledgling public-domain subculture. He also realized, with the foresight of an Internet entrepreneur, that strengthening the public domain would only enhance his business prospects over the long term. (It has; Young later founded a print-on-demand publishing house, Lulu.com, that benefits from the free circulation of electronic texts, while making money from printing hard copies.)
539:
Another new business using CC licenses is Lulu, a technology company started by Robert Young, the founder of the Linux vendor Red Hat and benefactor of the Center for the Public Domain. Lulu lets individuals publish and distribute their own books, which can be printed on demand or downloaded. Lulu handles all the details of the publishing process but lets people control their content and rights. Hundreds of people have licensed their works under the CC ShareAlike license and Public Domain Dedication, and under the GNU Project’s Free Documentation License. 200
200.Mia Garlick, “Lulu,” Creative Commons blog, May 17, 2006, at http://creativecommons.org/text/lulu.
645:
This history matters, because when Gil was appointed culture minister, he brought with him a rare political sophistication and public veneration. His moral stature and joyous humanity allowed him to transcend politics as conventionally practiced. “Gil wears shoulder-length dreadlocks and is apt to show up at his ministerial offices dressed in the simple white linens that identify him as a follower of the Afro-Brazilian religion candomblé,” wrote American journalist Julian Dibbell in 2004. “Slouching in and out of the elegant Barcelona chairs that furnish his office, taking the occasional sip from a cup of pinkish herbal tea, he looks — and talks — less like an elder statesman than the posthippie, multiculturalist, Taoist intellectual he is.” 257
257.Julian Dibbell, “We Pledge Allegiance to the Penguin,” Wired, November 2004, at http://www.wired.com/wired/archive/12.11/linux_pr.html.
651:
One of the first collaborations between Creative Commons and the Brazilian government involved the release of a special CC-GPL license in December 2003. 260 This license adapted the General Public License for software by translating it into Portuguese and putting it into the CC’s customary “three layers” — a plain-language version, a lawyers’ version compatible with the national copyright law, and a machine-readable metadata expression of the license. The CC-GPL license, released in conjunction with the Free Software Foundation, was an important international event because it gave the imprimatur of a major world government to free software and the social ethic of sharing and reuse. Brazil has since become a champion of GNU/Linux and free software in government agencies and the judiciary. It regards free software and open standards as part of a larger fight for a “development agenda” at the World Intellectual Property Organization and the World Trade Organization. In a related vein, Brazil has famously challenged patent and trade policies that made HIV/AIDS drugs prohibitively expensive for thousands of sick Brazilians.
260.Creative Commons press release, “Brazilian Government First to Adopt New ‘CC-GPL,’ ” December 2, 2003.
728:
It is worth noting that a commons does not necessarily preclude making money from the fruit of the commons; it’s just that any commercial activity cannot interfere with the integrity of social relationships within the commons. In the case of GPL’d software, for example, Red Hat is able to sell its own versions of GNU/Linux only because it does not “take private” any code or inhibit sharing within the commons. The source code is always available to everyone. By contrast, scientists who patent knowledge that they glean from their participation in a scientific community may be seen as “stealing” community knowledge for private gain. The quest for individual profit may also induce ethical corner-cutting, which undermines the integrity of research in the commons.
819:
Free software was one of the earliest demonstrations of the power of online commons as a way to create value. In his classic 1997 essay “The Cathedral and the Bazaar,” hacker Eric S. Raymond provided a seminal analysis explaining how open networks make software development more cost-effective and innovative than software developed by a single firm. 343 A wide-open “bazaar” such as the global Linux community can construct a more versatile operating system than one designed by a closed “cathedral” such as Microsoft. “Given enough eyeballs, all bugs are shallow,” Raymond famously declared. Yochai Benkler gave a more formal economic reckoning of the value proposition of open networks in his pioneering 2002 essay “Coase’s Penguin, or, Linux and the Nature of the Firm.” 344 The title is a puckish commentary on how GNU/Linux, whose mascot is a penguin, poses an empirical challenge to economist Ronald Coase’s celebrated “transaction cost” theory of the firm. In 1937, Coase stated that the economic rationale for forming a business enterprise is its ability to assert clear property rights and manage employees and production more efficiently than contracting out to the marketplace.
343.Eric Raymond, “The Cathedral and the Bazaar,” May 1997, at http://www.catb.org/~esr/writings/cathedral-bazaar. The essay has been translated into nineteen languages to date.
344.Yochai Benkler, “Coase’s Penguin, or, Linux and the Nature of the Firm,” Yale Law Journal 112 (2002): 369, at http://www.benkler.org/CoasesPenguin.html.
832:
The idea that a company can make money by giving away something for free seems so counterintuitive, if not ridiculous, that conventional business people tend to dismiss it. Sometimes they protesteth too much, as when Microsoft’s Steve Ballmer compared the GNU GPL to a “cancer” and lambasted open-source software as having “characteristics of communism.” 352 In truth, “sharing the wealth” has become a familiar strategy for companies seeking to develop new technology markets. The company that is the first mover in an emerging commercial ecosystem is likely to become the dominant player, which may enable it to extract a disproportionate share of future market rents. Giving away one’s code or content can be a great way to become a dominant first mover.
352.Joe Wilcox and Stephen Shankland, “Why Microsoft is wary of open source,” CNET, June 18, 2001; and Graham Lea, “MS’ Ballmer: Linux is communism,” Register (U.K.), July 31, 2000.
833:
Netscape was one of the first to demonstrate the power of this model with its release of its famous Navigator browser in 1994. The free distribution to Internet users helped develop the Web as a social and technological ecosystem, while helping fuel sales of Netscape’s Web server software. (This was before Microsoft arrived on the scene with its Internet Explorer, but that’s another story.) At a much larger scale, IBM saw enormous opportunities for building a better product by using GNU/Linux. The system would let IBM leverage other people’s talents at a fraction of the cost and strengthen its service relationships with customers. The company now earns more than $2 billion a year from Linux-related services. 353
353.Yochai Benkler, The Wealth of Networks (Yale University Press, 2006), Figure 2.1 on p. 47.
978:
As chance had it, Baraniuk’s research group at Rice was just discovering open-source software. “It was 1999, and we were moving all of our workstations to Linux,” he recalled. “It was just so robust and high-quality, even at that time, and it was being worked on by thousands of people.” Baraniuk remembers having an epiphany: “What if we took books and ‘chunked them apart,’ just like software? And what if we made the IP open so that the books would be free to re-use and remix in different ways?”
1043:
History-making citizenship is not without its deficiencies. Rumors, misinformation, and polarized debate are common in this more open, unmediated environment. Its crowning virtue is its potential ability to mobilize the energies and creativity of huge numbers of people. GNU/Linux improbably drew upon the talents of tens of thousands of programmers; certainly our contemporary world with its countless problems could use some of this elixir—platforms that can elicit distributed creativity, specialized talent, passionate commitment, and social legitimacy. In 2005 Joi Ito, then chairman of the board of the Creative Commons, wrote: “Traditional forms of representative democracy can barely manage the scale, complexity and speed of the issues in the world today. Representatives of sovereign nations negotiating with each other in global dialog are limited in their ability to solve global issues. The monolithic media and its increasingly simplistic representation of the world cannot provide the competition of ideas necessary to reach informed, viable consensus.” 447 Ito concluded that a new, not-yet-understood model of “emergent democracy” is likely to materialize as the digital revolution proceeds. A civic order consisting of “intentional blog communities, ad hoc advocacy coalitions and activist networks” could begin to tackle many urgent problems.
447.Joichi Ito, “Emergent Democracy,” chapter 1 in John Lebkowsky and Mitch Ratcliffe, eds., Extreme Democracy (Durham, NC: Lulu.com, 2005), at http://extremedemocracy.com/chapters/Chapter%20One-Ito.pdf.
1055:
As projects like GNU/Linux, Wikipedia, open courseware, open-access journals, open databases, municipal Wi-Fi, collections of CC-licensed content, and other commons begin to cross-link and coalesce, the commons paradigm is migrating from the margins of culture to the center. The viral spiral, after years of building its infrastructure and social networks, may be approaching a Cambrian explosion, an evolutionary leap.
"The Public Domain - Enclosing the Commons of the Mind" (2008) [en] BOYLE, James
371:
Jon Johansen, a 16-year-old Norwegian, was the unwitting catalyst for one of the most important cases interpreting the DMCA. He and two anonymous helpers wrote a program called DeCSS. Depending on whom you listen to, DeCSS is described either as a way of allowing people who use Linux or other open source operating systems to play DVDs on their computers, or as a tool for piracy that threatened the entire movie industry and violated the DMCA.
377:
Let us return to Mr. Johansen, the 16-year-old Norwegian. He and his two anonymous collaborators claimed that they were affected by another limitation imposed by the CSS licensing body. At that time, there was no way to play DVDs on a computer running Linux, or any other free or open source operating system. (I will talk more about free and open source software later.) Let’s say you buy a laptop. A Sony Vaio running Windows, for example. It has a slot in the side for DVDs to slide in and software that comes along with it which allows the DVD reader to decode and play the disk. The people who wrote the software have been licensed by the DVD Copy Control Association and provided with a CSS key. But at the time Mr. Johansen set out to create DeCSS, the licensing body had not licensed keys to any free or open source software developers. Say Mr. Johansen buys the Sony Vaio, but with the Linux operating system on it instead of Windows. The computer is the same. The little slot is still there. Writing an open source program to control the DVD player is trivial. But without the CSS key, there is no way for the player to decode and play the movie. (The licensing authority later did license an open source player, perhaps because they realized its unavailability gave Mr. Johansen a strong defense, perhaps because they feared an antitrust suit, or perhaps because they just got around to it.)
386:
“2600: The Hacker Quarterly has included articles on such topics as how to steal an Internet domain name, how to write more secure ASP code, access other people’s e-mail, secure your Linux box, intercept cellular phone calls, how to put Linux on an Xbox, how to remove spyware, and break into the computer systems at Costco stores and Federal Express. One issue contains a guide to the federal criminal justice system for readers charged with computer hacking. In addition, 2600 operates a web site located at 2600.com (http://www.2600.com), which is managed primarily by Mr. Corley and has been in existence since 1995.”
398:
This was the issue in Reimerdes. True, if I cut through the digital fence on a DVD in order to excerpt a small portion in a critical documentary, I would not be violating your copyright, but I would be violating the anticircumvention provisions. And DeCSS seemed to be a tool for doing what the DMCA forbids. By providing links to it, Mr. Corley and 2600 were “trafficking” in a technology that allows others to circumvent a technological protection measure. DeCSS could, of course, be used for purposes that did not violate copyright—to make the DVD play on a computer running Linux, for example. It enabled various noninfringing fair uses. It could also be used to aid illicit copying. But the alleged violation of the DMCA had nothing to do with that. The alleged violation of the DMCA was making the digital wire cutters available in the first place. So one First Amendment problem with the DMCA can be stated quite simply. It appeared to make it illegal to exercise at least some of the limitations and exceptions copyright law needs in order to pass First Amendment scrutiny. Or did it just make it very, very difficult to exercise those rights legally? I could, after all, make a videotape of the DVD playing on my television, and use that grainy, blurry image in my documentary criticizing the filmmaker. The DMCA would not be violated, though my movie might be painful to watch.
409:
Congress could have passed many laws less restrictive than the DMCA. It could have only penalized the use of programs such as DeCSS for an illicit purpose. If it wished to reach those who create the tools as well as use them, it could have required proof that the creator intended them to be used for illegal purposes. Just as we look at the government’s intention in creating the law, we could make the intent of the software writer critical for the purposes of assessing whether or not his actions are illegal. If I write a novel detailing a clever way to kill someone and you use it to carry out a real murder, the First Amendment does not allow the state to punish me. If I write a manual on how to be a hit man and sell it to you, it may. First Amendment law is generally skeptical of statutes that impose “strict liability” without a requirement of intent. But Judge Kaplan believed that the DMCA made the motives of Mr. Johansen irrelevant, except insofar as they were relevant to the narrowly tailored exceptions of the DMCA, such as encryption research. In other words, even if Mr. Johansen made DeCSS so that he and his friends could watch DVDs they purchased legally on computers running Linux, they could still be liable for breaking the DMCA.
436:
The legal implementation of this conclusion would be simple. It would be unconstitutional to punish an individual for gaining access in order to make a fair use. However, if they cut down the digital fence to make illicit copies, both the cutting and the copying would be illegal. But what about the prohibition of trafficking in digital wire cutters, technologies such as DeCSS? There the constitutional question is harder. I would argue that the First Amendment requires an interpretation of the antitrafficking provisions that comes closer to the ruling in the Sony case. If Mr. Johansen did indeed make DeCSS to play DVDs on his Linux computer, and if that were indeed a substantial noninfringing use, then it cannot be illegal for him to develop the technology. But I accept that this is a harder line to draw constitutionally. About my first conclusion, though, I think the argument is both strong and clear.
659:
The creators of free and open source software were able to use the fact that software is copyrighted, and that the right attaches automatically upon creation and fixation, to set up new, distributed methods of innovation. For example, free and open source software under the General Public License—such as Linux—is a “commons” to which all are granted access. Anyone may use the software without any restrictions. They are guaranteed access to the human-readable “source code,” rather than just the inscrutable “machine code,” so that they can understand, tinker, and modify. Modifications can be distributed so long as the new creation is licensed under the open terms of the original. This creates a virtuous cycle: each addition builds on the commons and is returned to it. The copyright over the software was the “hook” that allowed software engineers to create a license that gave free access and the right to modify and required future programmers to keep offering those freedoms. Without the copyright, those features of the license would not have been enforceable. For example, someone could have modified the open program and released it without the source code—denying future users the right to understand and modify easily. To use an analogy beloved of free software enthusiasts, the hood of the car would be welded shut. Home repair, tinkering, customization, and redesign become practically impossible.
726:
For anyone interested in the way that networks can enable new collaborative methods of production, the free software movement, and the broader but less political movement that goes under the name of open source software, provide interesting case studies. 216 Open source software is released under a series of licenses, the most important being the General Public License (GPL). The GPL specifies that anyone may copy the software, provided the license remains attached and the source code for the software always remains available. 217 Users may add to or modify the code, may build on it and incorporate it into their own work, but if they do so, then the new program created is also covered by the GPL. Some people refer to this as the “viral” nature of the license; others find the term offensive. 218 The point, however, is that the open quality of the creative enterprise spreads. It is not simply a donation of a program or a work to the public domain, but a continual accretion in which all gain the benefits of the program on pain of agreeing to give their additions and innovations back to the communal project.
216.See Glyn Moody, Rebel Code: Linux and the Open Source Revolution (Cambridge, Mass.: Perseus Pub., 2001); Peter Wayner, Free for All: How Linux and the Free Software Movement Undercut the High-Tech Titans (New York: HarperBusiness, 2000); Eben Moglen, “Anarchism Triumphant: Free Software and the Death of Copyright,” First Monday 4 (1999), http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/684/594 [Ed. note: originally published as http://firstmonday.org/issues/issue4_8/index.html, the link has changed].
217.Proprietary, or “binary only,” software is generally released only after the source code has been compiled into machine-readable object code, a form that is impenetrable to the user. Even if you were a master programmer, and the provisions of the Copyright Act, the appropriate licenses, and the DMCA did not forbid you from doing so, you would be unable to modify commercial proprietary software to customize it for your needs, remove a bug, or add a feature. Open source programmers say, disdainfully, that it is like buying a car with the hood welded shut. See, e.g., Wayner, Free for All, 264.
218.See Brian Behlendorf, “Open Source as a Business Strategy,” in Open Sources: Voices from the Open Source Revolution, ed. Chris DiBona et al. (Sebastopol, Calif.: O’Reilly, 1999), 149, 163.
730:
Governments have taken notice. The United Kingdom, for example, concluded last year that open source software “will be considered alongside proprietary software and contracts will be awarded on a value-for-money basis.” The Office of Government Commerce said open source software is “a viable desktop alternative for the majority of government users” and “can generate significant savings. . . . These trials have proved that open source software is now a real contender alongside proprietary solutions. If commercial companies and other governments are taking it seriously, then so must we.” 221 Sweden found open source software to be in many cases “equivalent to—or better than—commercial products” and concluded that software procurement “shall evaluate open software as well as commercial solutions, to provide better competition in the market.” 222
221.“UK Government Report Gives Nod to Open Source,” Desktop Linux (October 28, 2004), available at http://www.desktoplinux.com/news/NS5013620917.html.
222.“Cases of Official Recognition of Free and Open Source Software,” available at http://ec.europa.eu/information_society/activities/opensource/cases/index_en.htm.
739:
Yochai Benkler and I would argue that these questions are fun to debate but ultimately irrelevant. 226 Assume a random distribution of incentive structures in different people, a global network—transmission, information sharing, and copying costs that approach zero—and a modular creation process. With these assumptions, it just does not matter why they do it. In lots of cases, they will do it. One person works for love of the species, another in the hope of a better job, a third for the joy of solving puzzles, and a fourth because he has to solve a particular problem anyway for his own job and loses nothing by making his hack available for all. Each person has their own reserve price, the point at which they say, “Now I will turn off Survivor and go and create something.” But on a global network, there are a lot of people, and with numbers that big and information overhead that small, even relatively hard projects will attract motivated and skilled people whose particular reserve price has been crossed.
226.Benkler’s reasoning is characteristically elegant, even formal in its precision, while mine is clunkier. See Yochai Benkler, “Coase’s Penguin, or, Linux and the Nature of the Firm,” Yale Law Journal 112 (2002): 369–446.
740:
More conventionally, many people write free software because they are paid to do so. Amazingly, IBM now earns more from what it calls “Linux-related revenues” than it does from traditional patent licensing, and IBM is the largest patent holder in the world. 227 It has decided that the availability of an open platform, to which many firms and individuals contribute, will actually allow it to sell more of its services, and, for that matter, its hardware. A large group of other companies seem to agree. They like the idea of basing their services, hardware, and added value on a widely adopted “commons.” This does not seem like a community in decline.
227.Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven, Conn.: Yale University Press, 2006), 46–47.
741:
People used to say that collaborative creation could never produce a quality product. That has been shown to be false. So now they say that collaborative creation cannot be sustained because the governance mechanisms will not survive the success of the project. Professor Epstein conjures up a “central committee” from which insiders will be unable to cash out—a nice mixture of communist and capitalist metaphors. All governance systems—including democracies and corporate boards—have problems. But so far as we can tell, those who are influential in the free software and open source governance communities (there is, alas, no “central committee”) feel that they are doing very well indeed. In the last resort, when they disagree with decisions that are taken, there is always the possibility of “forking the code,” introducing a change to the software that not everyone agrees with, and then letting free choice and market selection converge on the preferred iteration. The free software ecosystem also exhibits diversity. Systems based on GNU/Linux, for example, have distinct “flavors” with names like Ubuntu, Debian, and Slackware, each with passionate adherents and each optimized for a particular concern—beauty, ease of use, technical manipulability. So far, the tradition of “rough consensus and running code” seems to be proving itself empirically as a robust governance system.
787:
The most remarkable and important book on “distributed creativity” and the sharing economy is Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (New Haven, Conn.: Yale University Press, 2006). Benkler sets the idea of “peer production” alongside other mechanisms of market and political governance and offers a series of powerful normative arguments about why we should prefer that future. Comprehensive though this book may seem, it is incomplete unless it is read in conjunction with one of Benkler’s essays: Yochai Benkler, “Coase’s Penguin, or, Linux and the Nature of the Firm,” Yale Law Journal 112 (2002): 369–446. In that essay, Benkler puts forward the vital argument—described in this chapter—about what collaborative production does to Coase’s theory of the firm.
790:
Free and open source software has been a subject of considerable interest to commentators. Glyn Moody’s Rebel Code: Linux and the Open Source Revolution (Cambridge, Mass.: Perseus Pub., 2001), and Peter Wayner’s Free for All: How Linux and the Free Software Movement Undercut the High-Tech Titans (New York: HarperBusiness, 2000), both offer readable and accessible histories of the phenomenon. Eric S. Raymond, The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, revised edition (Sebastopol, Calif.: O’Reilly, 2001), is a classic philosophy of the movement, written by a key participant—author of the phrase, famous among geeks, “given enough eyeballs, all bugs are shallow.” Steve Weber, in The Success of Open Source (Cambridge, Mass.: Harvard University Press, 2004), offers a scholarly argument that the success of free and open source software is not an exception to economic principles but a vindication of them. I agree, though the emphasis that Benkler and I put forward is rather different. To get a sense of the argument that free software (open source software’s normatively charged cousin) is desirable for its political and moral implications, not just because of its efficiency or commercial success, one should read the essays of Richard Stallman, the true father of free software and a fine polemical, but rigorous, essayist. Richard Stallman, Free Software, Free Society: Selected Essays of Richard M. Stallman, ed. Joshua Gay (Boston: GNU Press, 2002). Another strong collection of essays can be found in Joseph Feller, Brian Fitzgerald, Scott A. Hissam, and Karim R. Lakhani, eds., Perspectives on Free and Open Source Software (Cambridge, Mass.: MIT Press, 2005). If you only have time to read a single essay on the subject it should be Eben Moglen’s “Anarchism Triumphant: Free Software and the Death of Copyright,” First Monday 4 (1999), available at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/684/594 [Ed. note: originally published as http://www.firstmonday.dk/issues/issue4_8/moglen/, the link has changed].
"Little Brother" (2008) [en] DOCTOROW, Cory
52:
For me -- for pretty much every writer -- the big problem isn't piracy, it's obscurity (thanks to Tim O'Reilly for this great aphorism). Of all the people who failed to buy this book today, the majority did so because they never heard of it, not because someone gave them a free copy. Mega-hit best-sellers in science fiction sell half a million copies -- in a world where 175,000 attend the San Diego Comic Con alone, you've got to figure that most of the people who “like science fiction” (and related geeky stuff like comics, games, Linux, and so on) just don't really buy books. I'm more interested in getting more of that wider audience into the tent than making sure that everyone who's in the tent bought a ticket to be there.
662:
Hackers blow through those countermeasures. The Xbox was cracked by a kid from MIT who wrote a best-selling book about it, and then the 360 went down, and then the short-lived Xbox Portable (which we all called the “luggable” -- it weighed three pounds!) succumbed. The Universal was supposed to be totally bulletproof. The high school kids who broke it were Brazilian Linux hackers who lived in a favela -- a kind of squatter's slum.
664:
Once the Brazilians published their crack, we all went nuts on it. Soon there were dozens of alternate operating systems for the Xbox Universal. My favorite was ParanoidXbox, a flavor of Paranoid Linux. Paranoid Linux is an operating system that assumes that its operator is under assault from the government (it was intended for use by Chinese and Syrian dissidents), and it does everything it can to keep your communications and documents a secret. It even throws up a bunch of “chaff” communications that are supposed to disguise the fact that you're doing anything covert. So while you're receiving a political message one character at a time, ParanoidLinux is pretending to surf the Web and fill in questionnaires and flirt in chat-rooms. Meanwhile, one in every five hundred characters you receive is your real message, a needle buried in a huge haystack.
666:
Tonight, I'd make the sacrifice. It took about twenty minutes to get up and running. Not having a TV was the hardest part, but eventually I remembered that I had a little overhead LCD projector that had standard TV RCA connectors on the back. I connected it to the Xbox and shone it on the back of my door and got ParanoidLinux installed.
667:
Now I was up and running, and ParanoidLinux was looking for other Xbox Universals to talk to. Every Xbox Universal comes with built-in wireless for multiplayer gaming. You can connect to your neighbors on the wireless link and to the Internet, if you have a wireless Internet connection. I found three different sets of neighbors in range. Two of them had their Xbox Universals also connected to the Internet. ParanoidXbox loved that configuration: it could siphon off some of my neighbors' Internet connections and use them to get online through the gaming network. The neighbors would never miss the packets: they were paying for flat-rate Internet connections, and they weren't exactly doing a lot of surfing at 2AM.
1273:
I held up a laptop Jolu and I had rebuilt the night before, from the ground up. "I trust this machine. Every component in it was laid by our own hands. It's running a fresh out-of-the-box version of ParanoidLinux, booted off of the DVD. If there's a trustworthy computer left anywhere in the world, this might well be it.
1305:
I set up the laptop on a dry bit of rock and booted it from the DVD with her watching. “I'm going to reboot it for every person. This is a standard ParanoidLinux disc, though I guess you'd have to take my word for it.”
1656:
My mailbox overflowed with suggestions from people. They sent me dumps off their phones and their pocket-cameras. Then I got an email from a name I recognized -- Dr Eeevil (three “e”s), one of the prime maintainers of ParanoidLinux.
1660:
> Luckily, it's not hard to strip out the signatures, if you care to. There's a utility on the ParanoidLinux distro you're using that does this -- it's called photonomous, and you'll find it in /usr/bin. Just read the man pages for documentation. It's simple though.
2173:
I went with her into her office to do the scan. I'd expected a stylish, low-powered computer that fit in with her decor, but instead, her spare-bedroom/office was crammed with top-of-the-line PCs, big flat-panel monitors, and a scanner big enough to lay a whole sheet of newsprint on. She was fast with it all, too. I noted with some approval that she was running ParanoidLinux. This lady took her job seriously.
2285:
> Within days of the Xnet launch, we went to work on exploiting ParanoidLinux. The exploits so far have been small and insubstantial, but a break is inevitable. Once we have a zero-day break, you're dead.
2287:
> Even if they don't break ParanoidLinux, there are poisoned ParanoidXbox distros floating around. They don't match the checksums, but how many people look at the checksums? Besides me and you? Plenty of kids are already dead, though they don't know it.
3067:
Whatever -- I didn't have to have anything to do with it, and I got a desk and an office with a storefront, right there on Valencia Street, where we gave away ParanoidXbox CDs and held workshops on building better WiFi antennas. A surprising number of average people dropped in to make personal donations, both of hardware (you can run ParanoidLinux on just about anything, not just Xbox Universals) and cash money. They loved us.
"Two Bits - The Cultural Significance of Free Software" (2008) [en] KELTY, Christopher M.
11:
Sean Doyle and Adrian Gropper opened the doors to this project, providing unparalleled insight, hospitality, challenge, and curiosity. Axel Roch introduced me to Volker Grassmuck, and to much else. Volker Grassmuck introduced me to Berlin’s Free Software world and invited me to participate in the Wizards of OS conferences. Udhay Shankar introduced me to almost everyone I know, sometimes after the fact. Shiv Sastry helped me find lodging in Bangalore at his Aunt Anasuya Sastry’s house, which is called “Silicon Valley” and which was truly a lovely place to stay. Bharath Chari and Ram Sundaram let me haunt their office and cat-5 cables [PAGE xiv] during one of the more turbulent periods of their careers. Glenn Otis Brown visited, drank, talked, invited, challenged, entertained, chided, encouraged, drove, was driven, and gave and received advice. Ross Reedstrom welcomed me to the Rice Linux Users’ Group and to Connexions. Brent Hendricks did yeoman’s work, suffering my questions and intrusions. Geneva Henry, Jenn Drummond, Chuck Bearden, Kathy Fletcher, Manpreet Kaur, Mark Husband, Max Starkenberg, Elvena Mayo, Joey King, and Joel Thierstein have been welcoming and enthusiastic at every meeting. Sid Burris has challenged and respected my work, which has been an honor. Rich Baraniuk listens to everything I say, for better or for worse; he is a magnificent collaborator and friend.
52:
The fifth component, the practice of coordination and collaboration (chapter 7), is the most talked about: the idea of tens or hundreds of thousands of people volunteering their time to contribute to the creation of complex software. In this chapter I show how novel forms of coordination developed in the 1990s and how they worked in the canonical cases of Apache and Linux; I also highlight how coordination facilitates the commitment to adaptability (or modifiability) over against planning and hierarchy, and how this commitment resolves the tension between individual virtuosity and the need for collective control.
62:
Empirically speaking, the actors in my stories are figuring something out, something unfamiliar, troubling, imprecise, and occasionally shocking to everyone involved at different times and to differing extents. 13 There are two kinds of figuring-out stories: the contemporary ones in which I have been an active participant (those of Connexions and Creative Commons), and the historical ones conducted through “archival” research and rereading of certain kinds of texts, discussions, and analyses-at-the-time (those of UNIX, EMACS, Linux, Apache, and Open Systems). Some are stories of technical figuring out, but most are stories of figuring out a problem that appears to have emerged. Some of these stories involve callow and earnest actors, some involve scheming and strategy, but in all of them the figuring out is presented “in the making” and not as something that can be conveniently narrated as obvious and uncontested with the benefit of hindsight. Throughout this book, I tell stories that illustrate what geeks are like in some respects, but, more important, that show them in the midst of figuring things out—a practice that can happen both in discussion and in the course of designing, planning, executing, writing, debugging, hacking, and fixing.
13.The language of “figuring out” has its immediate source in the work of Kim Fortun, “Figuring Out Ethnography.” Fortun’s work refines two other sources, the work of Bruno Latour in Science in Action and that of Hans-Jörg Rheinberger in Toward a History of Epistemic Things. Latour describes the difference between “science made” and “science in the making” and how the careful analysis of new objects can reveal how they come to be. Rheinberger extends this approach through analysis of the detailed practices involved in figuring out a new object or a new process—practices which participants cannot quite name or explain in precise terms until after the fact.
64:
Because the stories I tell here are in fact recent by the standards of historical scholarship, there is not much by way of comparison in terms of the empirical material. I rely on a number of books and articles on the history of the early Internet, especially Janet Abbate’s scholarship and the single historical work on UNIX, Peter Salus’s A Quarter Century of Unix. 16 There are also a couple of excellent journalistic works, such as Glyn Moody’s Rebel Code: Inside Linux and the Open Source Revolution (which, like Two Bits, relies heavily on the novel accessibility of detailed discussions carried out on public mailing lists). Similarly, the scholarship on Free Software and its history is just starting to establish itself around a coherent set of questions. 17
16.In addition to Abbate and Salus, see Norberg and O’Neill, Transforming Computer Technology; Naughton, A Brief History of the Future; Hafner, Where Wizards Stay Up Late; Waldrop, The Dream Machine; Segaller, Nerds 2.0.1. For a classic autodocumentation of one aspect of the Internet, see Hauben and Hauben, Netizens.
17.Kelty, “Culture’s Open Sources”; Coleman, “The Social Construction of Freedom”; Ratto, “The Pressure of Openness”; Joseph Feller et al., Perspectives [pg 315] on Free and Open Source Software; see also http://freesoftware.mit.edu/, organized by Karim Lakhani, which is a large collection of work on Free Software projects. Early work in this area derived both from the writings of practitioners such as Raymond and from business and management scholars who noticed in Free Software a remarkable, surprising set of seeming contradictions. The best of these works to date is Steven Weber, The Success of Open Source. Weber’s conclusions are similar to those presented here, and he has a kind of cryptoethnographic familiarity (that he does not explicitly avow) with the actors and practices. Yochai Benkler’s Wealth of Networks extends and generalizes some of Weber’s argument.
107:
Until the mid-1990s, hacker, geek, and computer nerd designated a very specific type: programmers and lurkers on relatively underground networks, usually college students, computer scientists, and “amateurs” or “hobbyists.” A classic mock self-diagnostic called the Geek Code, by Robert Hayden, accurately and humorously detailed the various ways in which one could be a geek in 1996—UNIX/Linux skills, love/hate of Star Trek, particular eating and clothing habits—but as Hayden himself points out, the geeks of the early 1990s exist no longer. The elite subcultural, relatively homogenous group it once was has been overrun: “The Internet of 1996 was still a wild untamed virgin paradise of geeks and eggheads unpopulated by script kiddies, and the denizens of AOL. When things changed, I seriously lost my way. I mean, all the ‘geek’ that was the Internet [pg 36] was gone and replaced by Xfiles buzzwords and politicians passing laws about a technology they refused to comprehend.” 25
25.See The Geek Code, http://www.geekcode.com/.
111:
Berlin, November 1999. I am in a very hip club in Mitte called WMF. It’s about eight o’clock—five hours too early for me to be a hipster, but the context is extremely cool. WMF is in a hard-to-find, abandoned building in the former East; it is partially converted, filled with a mixture of new and old furnishings, video projectors, speakers, makeshift bars, and dance-floor lighting. A crowd of around fifty people lingers amid smoke and Beck’s beer bottles, [pg 37] sitting on stools and chairs and sofas and the floor. We are listening to an academic read a paper about Claude Shannon, the MIT engineer credited with the creation of information theory. The author is smoking and reading in German while the audience politely listens. He speaks for about seventy minutes. There are questions and some perfunctory discussion. As the crowd breaks up, I find myself, in halting German that quickly converts to English, having a series of animated conversations about the GNU General Public License, the Debian Linux Distribution, open standards in net radio, and a variety of things for which Claude Shannon is the perfect ghostly technopaterfamilias, even if his seventy-minute invocation has clashed heavily with the surroundings.
113:
Before long, I am talking with Volker Grassmuck, founding member of Mikro and organizer of the successful “Wizards of OS” conference, held earlier in the year, which had the very intriguing subtitle “Operating Systems and Social Systems.” Grassmuck is inviting me to participate in a planning session for the next WOS, held at the Chaos Computer Congress, a hacker gathering that occurs each year in December in Berlin. In the following months I will meet a huge number of people who seem, uncharacteristically for artists [pg 38] and activists, strangely obsessed with configuring their Linux distributions or hacking the http protocol or attending German Parliament hearings on copyright reform. The political lives of these folks have indeed mixed up operating systems and social systems in ways that are more than metaphorical.
198:
Perhaps the most familiar and famous of these wars is that between Apple and Microsoft (formerly between Apple and IBM), a conflict that is often played out in dramatic and broad strokes that imply fundamental differences, when in fact the differences are extremely slight. 59 Geeks are also familiar with a wealth of less well-known “holy wars”: EMACS versus vi; KDE versus Gnome; Linux versus BSD; Oracle versus all other databases. 60
59.The Apple-Microsoft conflict was given memorable expression by Umberto Eco in a widely read piece that compared the Apple user interface [pg 320] to Catholicism and the PC user interface to Protestantism (“La bustina di Minerva,” Espresso, 30 September 1994, back page).
60.One entry on Wikipedia differentiates religious wars from run-of-the-mill “flame wars” as follows: “Whereas a flame war is usually a particular spate of flaming against a non-flamy background, a holy war is a drawn-out disagreement that may last years or even span careers” (“Flaming [Internet],” http://en.wikipedia.org/wiki/Flame_war [accessed 16 January 2006]).
209:
In addition to the obvious pleasure with which they deploy the sectarian aspects of the Protestant Reformation, geeks also allow themselves to see their struggles as those of Luther-like adepts, confronted by powerful worldly institutions that are distinct but intertwined: the Catholic Church and absolutist monarchs. Sometimes these comparisons are meant to mock theological argument; sometimes they are more straightforwardly hagiographic. For instance, a 1998 article in Salon compares Martin Luther and Linus Torvalds (originator of the Linux kernel).
298:
A critical point in the emergence of Free Software occurred in 1998-99: new names, new narratives, but also new wealth and new stakes. “Open Source” was premised on dotcom promises of cost-cutting and “disintermediation” and various other schemes to make money on it (Cygnus Solutions, an early Free Software company, playfully tagged itself as “Making Free Software More Affordable”). VA Linux, for instance, which sold personal-computer systems pre-installed with Open Source operating systems, had the largest single initial public offering (IPO) of the stock-market bubble, seeing a 700 percent share-price increase in one day. “Free Software” by contrast fanned kindling flames of worry over intellectual-property expansionism and hitched itself to a nascent legal resistance to the 1998 Digital Millennium Copyright Act and Sonny Bono Copyright Term Extension Act. Prior to 1998, Free Software referred either to the Free Software Foundation (and the watchful, micromanaging eye of Stallman) or to one of thousands of different commercial, avocational, or university-research projects, processes, licenses, and ideologies that had a variety of names: sourceware, freeware, shareware, open software, public domain software, and so on. The term Open Source, by contrast, sought to encompass them all in one movement.
322:
Fomenting Movements
The period from 1 April 1998, when the Mozilla source code was first released, to 1 April 1999, when Zawinski announced its failure, couldn’t have been a headier, more exciting time for participants in Free Software. Netscape’s decision to release the source code was a tremendous opportunity for geeks involved in Free Software. It came in the midst of the rollicking dotcom bubble. It also came in the midst of the widespread adoption of [pg 108] key Free Software tools: the Linux operating system for servers, the Apache Web server for Web pages, the perl and python scripting languages for building quick Internet applications, and a number of other lower-level tools like Bind (an implementation of the DNS protocol) or sendmail for e-mail.
323:
Perhaps most important, Netscape’s decision came in a period of fevered and intense self-reflection among people who had been involved in Free Software in some way, stretching back to the mid-1980s. Eric Raymond’s article “The Cathedral and The Bazaar,” delivered at the Linux Kongress in 1997 and the O’Reilly Perl Conference the same year, had started a buzz among Free Software hackers. It was cited by Frank Hecker and Eric Hahn at Netscape as one of the sources for their thinking about the decision to free Mozilla; Raymond and Bruce Perens had both been asked to consult with Netscape on Free Software strategy. In April of the same year Tim O’Reilly, a publisher of handbooks for Free Software, organized a conference called the Freeware Summit.
332:
All through 1998 and 1999, buzz around Open Source built. Little-known companies such as Red Hat, VA Linux, Cygnus, Slackware, and SuSE, which had been providing Free Software support and services to customers, suddenly entered media and business consciousness. Articles in the mainstream press circulated throughout the spring and summer of 1998, often attempting to make sense of the name change and whether it meant a corresponding change in practice. A front-cover article in Forbes, which featured photos of Stallman, Larry Wall, Brian Behlendorf, and Torvalds (figure 2), was noncommittal, cycling between Free Software, Open Source, and Freeware. 105
105.Josh McHugh, “For the Love of Hacking,” Forbes, 10 August 1998, 94-100.
335:
By December 1999, the buzz had reached a fever pitch. When VA Linux, a legitimate company which actually made something real—computers with Linux installed on them—went public, its shares’ value gained 700 percent in one day, making it the single [pg 112] most valuable initial public offering of the era. VA Linux took the unconventional step of allowing contributors to the Linux kernel to buy into the stock before the IPO, thus bringing at least a partial set of these contributors into the mainstream Ponzi scheme of the Internet dotcom economy. Those who managed to sell their stock ended up benefiting from the boom, whether or not their contributions to Free Software truly merited it. In a roundabout way, Raymond, O’Reilly, Perens, and others behind the name change had achieved recognition for the central role of Free Software in the success of the Internet—and now its true name could be known: Open Source.
341:
The movement, as a practice of discussion and argument, is made up of stories. It is a practice of storytelling: affect- and intellect-laden lore that orients existing participants toward a particular problem, contests other histories, parries attacks from outside, and draws in new recruits. 107 This includes proselytism and evangelism (and the usable pasts of protestant reformations, singularities, rebellion and iconoclasm are often salient here), whether for the reform of intellectual-property law or for the adoption of Linux in the trenches of corporate America. It includes both heartfelt allegiance in the name of social justice as well as political agnosticism stripped of all ideology. 108 Every time Free Software is introduced to someone, discussed in the media, analyzed in a scholarly work, or installed in a workplace, a story of either Free Software or Open Source is used to explain its purpose, its momentum, and its temporality. At the extremes are the prophets and proselytes themselves: Eric Raymond describes Open Source as an evolutionarily necessary outcome of the natural tendency of human societies toward economies of abundance, while Richard Stallman describes it as a defense of the fundamental freedoms of creativity and speech, using a variety of philosophical theories of liberty, justice, and the defense of freedom. 109 Even scholarly analyses must begin with a potted history drawn from the self-narration of geeks who make or advocate free software. 110 Indeed, as a methodological aside, one reason it is so easy to track such stories and narratives is because geeks like to tell and, more important, like to archive such stories—to create Web pages, definitions, encyclopedia entries, dictionaries, and mini-histories and to save every scrap of correspondence, every fight, and every resolution related to their activities. This “archival hubris” yields a very peculiar and specific kind of fieldsite: one in which a kind [pg 115] of “as-it-happens” ethnographic observation is possible not only through “being there” in the moment but also by being there in the massive, proliferating archives of moments past. Understanding the movement as a changing entity requires constantly glancing back at its future promises and the conditions of their making.
107.It is, in the terms of Actor Network Theory, a process of “enrollment” in which participants find ways to rhetorically align—and to disalign—their interests. It does not constitute the substance of their interest, however. See Latour, Science in Action; Callon, “Some Elements of a Sociology of Translation.”
108.Coleman, “Political Agnosticism.”
109.See, respectively, Raymond, The Cathedral and the Bazaar, and Williams, Free as in Freedom.
110.For example, Castells, The Internet Galaxy, and Weber, The Success of Open Source both tell versions of the same story of origins and development.
375:
Throughout the 1970s, the low licensing fees, the inclusion of the source code, and its conceptual integrity meant that UNIX was ported to a remarkable number of other machines. In many ways, academics found it just as appealing, if not more, to be involved in the creation and improvement of a cutting-edge system by licensing and porting the software themselves, rather than by having it provided to them, without the source code, by a company. Peter Salus, for instance, suggests that people experienced the lack of support from Bell Labs as a kind of spur to develop and share their own fixes. The means by which source code was shared, and the norms and practices of sharing, porting, forking, and modifying source code were developed in this period as part of the development of UNIX itself—the technical design of the system facilitates and in some cases mirrors the norms and practices of sharing that developed: operating systems and social systems. 133
133.The simultaneous development of the operating system and the norms for creating, sharing, documenting, and extending it are often referred to as the “UNIX philosophy.” It includes the central idea that one should build on the ideas (software) of others (see Gancarz, The Unix Philosophy and Linux and the UNIX Philosophy). See also Raymond, The Art of UNIX Programming.
396:
Minix was not commercial software, but nor was it Free Software. It was copyrighted and controlled by Tanenbaum’s publisher, Prentice Hall. Because it used no AT&T source code, Minix was also legally independent, a legal object of its own. The fact that it was intended to be legally distinct from, yet conceptually true to UNIX is a clear indication of the kinds of tensions that govern the creation and sharing of source code. The ironic apotheosis of Minix as the pedagogical gold standard for studying UNIX came in 1991-92, when a young Linus Torvalds created a “fork” of Minix, also rewritten from scratch, that would go on to become the paradigmatic piece of Free Software: Linux. Tanenbaum’s purpose for Minix was that it remain a pedagogically useful operating system—small, concise, and illustrative—whereas Torvalds wanted to extend and expand his version of Minix to take full advantage of the kinds of hardware being produced in the 1990s. Both, however, were committed to source-code visibility and sharing as the swiftest route to complete comprehension of operating-systems principles.
402:
According to Don Libes, Bell Labs allowed Berkeley to distribute its extensions to UNIX so long as the recipients also had a license from Bell Labs for the original UNIX (an arrangement similar to the one that governed Lions’s Commentary). 144 From about 1976 until about 1981, BSD slowly became an independent distribution—indeed, a complete version of UNIX—well-known for the vi editor and the Pascal compiler, but also for the addition of virtual memory and its implementation on DEC’s VAX machines. 145 It should be clear that the unusual quasi-commercial status of AT&T’s UNIX allowed for this situation in a way that a fully commercial computer corporation would never have allowed. Consider, for instance, the fact that many UNIX users—students at a university, for instance—could not essentially know whether they were using an AT&T product or something called BSD UNIX created at Berkeley. The operating system functioned in the same way and, except for the presence of copyright notices that occasionally flashed on the screen, did not make any show of asserting its brand identity (that would come later, in the 1980s). Whereas a commercial computer manufacturer would have allowed something like BSD only if it were incorporated into and distributed as a single, marketable, and identifiable product with a clever name, AT&T turned something of a blind eye to the proliferation and spread of AT&T UNIX, and the result was forks in the project: distinct bodies of source code, each an instance of something called UNIX.
144.Libes and Ressler, Life with UNIX, 16-17.
145.A recent court case between the Utah-based SCO—the current owner of the legal rights to the original UNIX source code—and IBM raised yet again the question of how much of the original UNIX source code exists in the BSD distribution. SCO alleges that IBM (and Linus Torvalds) inserted SCO-owned UNIX source code into the Linux kernel. However, the incredibly circuitous route of the “original” source code makes these claims hard to ferret out: it was developed at Bell Labs, licensed to multiple universities, used as a basis for BSD, sold to an earlier version of the company SCO (then known as the Santa Cruz Operation), which created a version called Xenix in cooperation with Microsoft. See the diagram by Eric Lévénez at http://www.levenez.com/unix/. For more detail on this case, see www.groklaw.com.
511:
Openness and open systems are key to understanding the practices of Free Software: the open-systems battles of the 1980s set the context for Free Software, leaving in their wake a partially articulated infrastructure of operating systems, networks, and markets that resulted from figuring out open systems. The failure to create a standard UNIX operating system opened the door for Microsoft Windows NT, but it also set the stage for the Linux-operating-system kernel to emerge and spread. The success of the TCP/IP protocols forced multiple competing networking schemes into a single standard—and a singular entity, the Internet—which carried with it a set of built-in goals that mirror the moral-technical order of Free Software.
608:
The rest of the story is quickly told: Stallman resigned from the AI Lab at MIT and started the Free Software Foundation in 1985; he created a raft of new tools, but ultimately no full UNIX operating system, and issued General Public License 1.0 in 1989. In 1990 he was awarded a MacArthur “genius grant.” During the 1990s, he was involved in various high-profile battles among a new generation of hackers; those controversies included the debate around Linus Torvalds’s creation of Linux (which Stallman insisted be referred to as GNU/Linux), the forking of EMACS into Xemacs, and Stallman’s own participation in—and exclusion from—conferences and events devoted to Free Software. [pg 207]
614:
The Free Software Foundation represents a recognition on his part that individual and communal independence would come at the price of a legally and bureaucratically recognizable entity, set apart from MIT and responsible only to itself. The Free Software Foundation took a classic form: a nonprofit organization with a hierarchy. But by the early 1990s, a new set of experiments would begin that questioned the look of such an entity. The stories of Linux and Apache reveal how these ventures both depended on the work of the Free Software Foundation and departed from the hierarchical tradition it represented, in order to innovate new similarly embedded sociotechnical forms of coordination.
617:
The final component of Free Software is coordination. For many participants and observers, this is the central innovation and essential significance of Open Source: the possibility of enticing potentially huge numbers of volunteers to work freely on a software project, leveraging the law of large numbers, “peer production,” “gift economies,” and “self-organizing social economies.” 261 Coordination in Free Software is of a distinct kind that emerged in the 1990s, directly out of the issues of sharing source code, conceiving open systems, and writing copyright licenses—all necessary precursors to the practices of coordination. The stories surrounding these issues find continuation in those of the Linux operating-system kernel, of the Apache Web server, and of Source Code Management tools (SCMs); together these stories reveal how coordination worked and what it looked like in the 1990s.
261.Research on coordination in Free Software forms the central core of recent academic work. Two of the most widely read pieces, Yochai Benkler’s “Coase’s Penguin” and Steven Weber’s The Success of Open Source, are directed at classic research questions about collective action. Rishab Ghosh’s “Cooking Pot Markets” and Eric Raymond’s The Cathedral and the Bazaar set many of the terms of debate. Josh Lerner’s and Jean Tirole’s “Some Simple Economics of Open Source” was an early contribution. Other important works on the subject are Feller et al., Perspectives on Free and Open Source Software; Tuomi, Networks of Innovation; Von Hippel, Democratizing Innovation.
618:
Coordination is important because it collapses and resolves the distinction between technical and social forms into a meaningful [pg 211] whole for participants. On the one hand, there is the coordination and management of people; on the other, there is the coordination of source code, patches, fixes, bug reports, versions, and distributions—but together there is a meaningful technosocial practice of managing, decision-making, and accounting that leads to the collaborative production of complex software and networks. Such coordination would be unexceptional, essentially mimicking long-familiar corporate practices of engineering, except for one key fact: it has no goals. Coordination in Free Software privileges adaptability over planning. This involves more than simply allowing any kind of modification; the structure of Free Software coordination actually gives precedence to a generalized openness to change, rather than to the following of shared plans, goals, or ideals dictated or controlled by a hierarchy of individuals. 262
262.On the distinction between adaptability and adaptation, see Federico Iannacci, “The Linux Managing Model,” http://opensource.mit.edu/papers/iannacci2.pdf. Matt Ratto characterizes the activity of Linux-kernel developers as a “culture of re-working” and a “design for re-design,” and captures the exquisite details of such a practice both in coding and in the discussion between developers, an activity he dubs the “pressure of openness” that “results as a contradiction between the need to maintain productive collaborative activity and the simultaneous need to remain open to new development directions” (“The Pressure of Openness,” 112-38).
619:
Adaptability does not mean randomness or anarchy, however; it is a very specific way of resolving the tension between the individual curiosity and virtuosity of hackers, and the collective coordination necessary to create and use complex software and networks. No man is an island, but no archipelago is a nation, so to speak. Adaptability preserves the “joy” and “fun” of programming without sacrificing the careful engineering of a stable product. Linux and Apache should be understood as the results of this kind of coordination: experiments with adaptability that have worked, to the surprise of many who have insisted that complexity requires planning and hierarchy. Goals and planning are the province of governance—the practice of goal-setting, orientation, and definition of control—but adaptability is the province of critique, and this is why Free Software is a recursive public: it stands outside power and offers powerful criticism in the form of working alternatives. It is not the domain of the new—after all Linux is just a rewrite of UNIX—but the domain of critical and responsive public direction of a collective undertaking.
620:
Linux and Apache are more than pieces of software; they are organizations of an unfamiliar kind. My claim that they are “recursive publics” is useful insofar as it gives a name to a practice that is neither corporate nor academic, neither profit nor nonprofit, neither governmental nor nongovernmental. The concept of recursive public includes, within the spectrum of political activity, the creation, modification, and maintenance of software, networks, and legal documents. While a “public” in most theories is a body of [pg 212] people and a discourse that give expressive form to some concern, “recursive public” is meant to suggest that geeks not only give expressive form to some set of concerns (e.g., that software should be free or that intellectual property rights are too expansive) but also give concrete infrastructural form to the means of expression itself. Linux and Apache are tools for creating networks by which expression of new kinds can be guaranteed and by which further infrastructural experimentation can be pursued. For geeks, hacking and programming are variants of free speech and freedom of assembly.
621:
From UNIX to Minix to Linux
622:
Linux and Apache are the two paradigmatic cases of Free Software in the 1990s, both for hackers and for scholars of Free Software. Linux is a UNIX-like operating-system kernel, bootstrapped out of the Minix operating system created by Andrew Tanenbaum. 263 Apache is the continuation of the original National Center for Supercomputing Applications (NCSA) project to create a Web server (Rob McCool’s original program, called httpd), bootstrapped out of a distributed collection of people who were using and improving that software.
263.Linux is often called an operating system, which Stallman objects to on the theory that a kernel is only one part of an operating system. Stallman suggests that it be called GNU/Linux to reflect the use of GNU operating-system tools in combination with the Linux kernel. This not-so-subtle ploy to take credit for Linux reveals the complexity of the distinctions. The kernel is at the heart of hundreds of different “distributions”—such as Debian, Red Hat, SuSe, and Ubuntu Linux—all of which also use GNU tools, but [pg 338] which are often collections of software larger than just an operating system. Everyone involved seems to have an intuitive sense of what an operating system is (thanks to the pedagogical success of UNIX), but few can draw any firm lines around the object itself.
623:
Linux and Apache are both experiments in coordination. Both projects evolved decision-making systems through experiment: a voting system in Apache’s case and a structured hierarchy of decision-makers, with Linus Torvalds as benevolent dictator, in Linux’s case. Both projects also explored novel technical tools for coordination, especially Source Code Management (SCM) tools such as Concurrent Versioning System (cvs). Both are also cited as exemplars of how “fun,” “joy,” or interest determine individual participation and of how it is possible to maintain and encourage that participation and mutual aid instead of narrowing the focus or eliminating possible routes for participation.
624:
Beyond these specific experiments, the stories of Linux and Apache are detailed here because both projects were actively central to the construction and expansion of the Internet of the 1990s by allowing a massive number of both corporate and noncorporate sites to cheaply install and run servers on the Internet. Were Linux and Apache nothing more than hobbyist projects with a few thousand [pg 213] interested tinkerers, rather than the core technical components of an emerging planetary network, they would probably not represent the same kind of revolutionary transformation ultimately branded a “movement” in 1998-99.
625:
Linus Torvalds’s creation of the Linux kernel is often cited as the first instance of the real “Open Source” development model, and it has quickly become the most studied of the Free Software projects. 264 Following its appearance in late 1991, Linux grew quickly from a small, barely working kernel to a fully functional replacement for the various commercial UNIX systems that had resulted from the UNIX wars of the 1980s. It has become versatile enough to be used on desktop PCs with very little memory and small CPUs, as well as in “clusters” that allow for massively parallel computing power.
264.Eric Raymond directed attention primarily to Linux in The Cathedral and the Bazaar. Many other projects preceded Torvalds’s kernel, however, including the tools that form the core of both UNIX and the Internet: Paul Vixie’s implementation of the Domain Name System (DNS) known as BIND; Eric Allman’s sendmail for routing e-mail; the scripting languages perl (created by Larry Wall), python (Guido van Rossum), and tcl/tk (John Ousterhout); the X Windows research project at MIT; and the derivatives of the original BSD UNIX, FreeBSD and OpenBSD. On the development model of FreeBSD, see Jorgensen, “Putting It All in the Trunk” and “Incremental and Decentralized Integration in FreeBSD.” The story of the genesis of Linux is very nicely told in Moody, Rebel Code, and Williams, Free as in Freedom; there are also a number of papers—available through Free/Opensource Research Community, http://freesoftware.mit.edu/—that analyze the development dynamics of the Linux kernel. See especially Ratto, “Embedded Technical Expression” and “The Pressure of Openness.” I have conducted much of my analysis of Linux by reading the Linux Kernel Mailing List archives, http://lkml.org. There are also annotated summaries of the Linux Kernel Mailing List discussions at http://kerneltraffic.org.
626:
When Torvalds started, he was blessed with an eager audience of hackers keen on seeing a UNIX system run on desktop computers and a personal style of encouragement that produced enormous positive feedback. Torvalds is often given credit for creating, through his “management style,” a “new generation” of Free Software—a younger generation than that of Stallman and Raymond. Linus and Linux are not in fact the causes of this change, but the results of being at the right place at the right time and joining together a number of existing components. Indeed, the title of Torvalds’s semi-autobiographical reflection on Linux—Just for Fun: The Story of an Accidental Revolutionary—captures some of the character of its genesis.
631:
The fact of Linus Torvalds’s pedagogical embedding in the world of UNIX, Minix, the Free Software Foundation, and the Usenet should not be underestimated, as it often is in hagiographical accounts of the Linux operating system. Without this relatively robust moral-technical order or infrastructure within which it was possible to be at the right place at the right time, Torvalds’s late-night dorm-room project would have amounted to little more than that—but the pieces were all in place for his modest goals to be transformed into something much more significant.
638:
So the system is based on Minix, just as Minix had been based on UNIX—piggy-backed or bootstrapped, rather than rewritten in an entirely different fashion, that is, rather than becoming a different kind of operating system. And yet there are clearly concerns about the need to create something that is not Minix, rather than simply extending or “debugging” Minix. This concern is key to understanding what happened to Linux in 1991.
640:
By all accounts, Prentice Hall was not restrictive in its sublicensing of the operating system, if people wanted to create an “enhanced” [pg 217] version of Minix. Similarly, Tanenbaum’s frequent presence on comp.os.minix testified to his commitment to sharing his knowledge about the system with anyone who wanted it—not just paying customers. Nonetheless, Torvalds’s pointed use of the word free and his decision not to reuse any of the code is a clear indication of his desire to build a system completely unencumbered by restrictions, based perhaps on a kind of intuitive folkloric sense of the dangers associated with cases like that of EMACS. 270
270.Indeed, initially, Torvalds’s terms of distribution for Linux were more restrictive than the GPL, including limitations on distributing it for a fee or for handling costs. Torvalds eventually loosened the restrictions and switched to the GPL in February 1992. Torvalds’s release notes for Linux 0.12 say, “The Linux copyright will change: I’ve had a couple of requests [pg 339] to make it compatible with the GNU copyleft, removing the ‘you may not distribute it for money’ condition. I agree. I propose that the copyright be changed so that it conforms to GNU—pending approval of the persons who have helped write code. I assume this is going to be no problem for anybody: If you have grievances (‘I wrote that code assuming the copyright would stay the same’) mail me. Otherwise The GNU copyleft takes effect as of the first of February. If you do not know the gist of the GNU copyright—read it” (http://www.kernel.org/pub/linux/kernel/Historic/old-versions/RELNOTES-0.12).
641:
The most significant aspect of Torvalds’s initial message, however, is his request: “I’d like to know what features most people would want. Any suggestions are welcome, but I won’t promise I’ll implement them.” Torvalds’s announcement and the subsequent interest it generated clearly reveal the issues of coordination and organization that would come to be a feature of Linux. The reason Torvalds had so many eager contributors to Linux, from the very start, was because he enthusiastically took them off of Tanenbaum’s hands.
643:
Tanenbaum’s role in the story of Linux is usually that of the straw man—a crotchety old computer-science professor who opposes the revolutionary young Torvalds. Tanenbaum did have a certain revolutionary reputation himself, since Minix was used in classrooms around the world and could be installed on IBM PCs (something no other commercial UNIX vendors had achieved), but he was also a natural target for people like Torvalds: the tenured professor espousing the textbook version of an operating system. So, despite the fact that a very large number of people were using or knew of Minix as a UNIX operating system (estimates of comp.os.minix subscribers were at 40,000), Tanenbaum was emphatically not interested in collaboration or collaborative debugging, especially if debugging also meant creating extensions and adding features that would make the system bigger and harder to use as a stripped-down tool for teaching. For Tanenbaum, this point was central: “I’ve been repeatedly offered virtual memory, paging, symbolic links, window systems, and all manner of features. I have usually declined because I am still trying to keep the system simple enough for students to understand. You can put all this stuff in your version, but I won’t [pg 218] put it in mine. I think it is this point which irks the people who say ‘MINIX is not free,’ not the $60.” 271
271.Message-ID: 12667@star.cs.vu.nl.
645:
By contrast, Torvalds’s “fun” project had no goals. Being a cocky nineteen-year-old student with little better to do (no textbooks to write, no students, grants, research projects, or committee meetings), Torvalds was keen to accept all the ready-made help he could find to make his project better. And with 40,000 Minix users, he had a more or less instant set of contributors. Stallman’s audience for EMACS in the early 1980s, by contrast, was limited to about a hundred distinct computers, which may have translated into thousands, but certainly not tens of thousands of users. Tanenbaum’s work in creating a generation of students who not only understood the internals of an operating system but, more specifically, understood the internals of the UNIX operating system created a huge pool of competent and eager UNIX hackers. It was the work of porting UNIX not only to various machines but to a generation of minds as well that set the stage for this event—and this is an essential, though often overlooked component of the success of Linux.
646:
Many accounts of the Linux story focus on the fight between Torvalds and Tanenbaum, a fight carried out on comp.os.minix with the subject line “Linux is obsolete.” 272 Tanenbaum argued that Torvalds was reinventing the wheel, writing an operating system that, as far as the state of the art was concerned, was now obsolete. Torvalds, by contrast, asserted that it was better to make something quick and dirty that worked, invite contributions, and worry about making it state of the art later. Far from illustrating some kind of outmoded conservatism on Tanenbaum’s part, the debate highlights the distinction between forms of coordination and the meanings of collaboration. For Tanenbaum, the goals of Minix were either pedagogical or academic: to teach operating-system essentials or to explore new possibilities in operating-system design. By this model, Linux could do neither; it couldn’t be used in the classroom because [pg 219] it would quickly become too complex and feature-laden to teach, and it wasn’t pushing the boundaries of research because it was an out-of-date operating system. Torvalds, by contrast, had no goals. What drove his progress was a commitment to fun and to a largely inarticulate notion of what interested him and others, defined at the outset almost entirely against Minix and other free operating systems, like FreeBSD. In this sense, it could only emerge out of the context—which set the constraints on its design—of UNIX, open systems, Minix, GNU, and BSD.
272.Message-ID: 12595@star.cs.vu.nl. Key parts of the controversy were reprinted in Dibona et al. Open Sources.
647:
Both Tanenbaum and Torvalds operated under a model of coordination in which one person was ultimately responsible for the entire project: Tanenbaum oversaw Minix and ensured that it remained true to its goals of serving a pedagogical audience; Torvalds would oversee Linux, but he would incorporate as many different features as users wanted or could contribute. Very quickly—with a pool of 40,000 potential contributors—Torvalds would be in the same position Tanenbaum was in, that is, forced to make decisions about the goals of Linux and about which enhancements would go into it and which would not. What makes the story of Linux so interesting to observers is that it appears that Torvalds made no decision: he accepted almost everything.
648:
Tanenbaum’s goals and plans for Minix were clear and autocratically formed. Control, hierarchy, and restriction are after all appropriate in the classroom. But Torvalds wanted to do more. He wanted to go on learning and to try out alternatives, and with Minix as the only widely available way to do so, his decision to part ways starts to make sense; clearly he was not alone in his desire to explore and extend what he had learned. Nonetheless, Torvalds faced the problem of coordinating a new project and making similar decisions about its direction. On this point, Linux has been the subject of much reflection by both insiders and outsiders. Despite images of Linux as either an anarchic bazaar or an autocratic dictatorship, the reality is more subtle: it includes a hierarchy of contributors, maintainers, and “trusted lieutenants” and a sophisticated, informal, and intuitive sense of “good taste” gained through reading and incorporating the work of co-developers.
649:
While it was possible for Torvalds to remain in charge as an individual for the first few years of Linux (1991-95, roughly), he eventually began to delegate some of that control to people who would make decisions about different subcomponents of the kernel. [pg 220] It was thus possible to incorporate more of the “patches” (pieces of code) contributed by volunteers, by distributing some of the work of evaluating them to people other than Torvalds. This informal hierarchy slowly developed into a formal one, as Steven Weber points out: “The final de facto ‘grant’ of authority came when Torvalds began publicly to reroute relevant submissions to the lieutenants. In 1996 the decision structure became more formal with an explicit differentiation between ‘credited developers’ and ‘maintainers.’ . . . If this sounds very much like a hierarchical decision structure, that is because it is one—albeit one in which participation is strictly voluntary.” 273
273.Steven Weber, The Success of Open Source, 164.
652:
By 1995-96, Torvalds and lieutenants faced considerable challenges with regard to hierarchy and decision-making, as the project had grown in size and complexity. The first widely remembered response to the ongoing crisis of benevolent dictatorship in Linux was the creation of “loadable kernel modules,” conceived as a way to release some of the constant pressure to decide which patches would be incorporated into the kernel. The decision to modularize [pg 221] Linux was simultaneously technical and social: the software-code base would be rewritten to allow for external loadable modules to be inserted “on the fly,” rather than all being compiled into one large binary chunk; at the same time, it meant that the responsibility to ensure that the modules worked devolved from Torvalds to the creator of the module. The decision repudiated Torvalds’s early opposition to Tanenbaum in the “monolithic vs. microkernel” debate by inviting contributors to separate core from peripheral functions of an operating system (though the Linux kernel remains monolithic compared to classic microkernels). It also allowed for a significant proliferation of new ideas and related projects. It both contracted and distributed the hierarchy; now Linus was in charge of a tighter project, but more people could work with him according to structured technical and social rules of responsibility.
653:
Creating loadable modules changed the look of Linux, but not because of any planning or design decisions set out in advance. The choice is an example of the privileged adaptability of Linux, resolving the tension between the curiosity and virtuosity of individual contributors to the project and the need for hierarchical control in order to manage complexity. The commitment to adaptability dissolves the distinction between the technical means of coordination and the social means of management. It is about producing a meaningful whole by which both people and code can be coordinated—an achievement vigorously defended by kernel hackers.
654:
The adaptable organization and structure of Linux is often described in evolutionary terms, as something without teleological purpose, but responding to an environment. Indeed, Torvalds himself has a weakness for this kind of explanation.
655:
Let’s just be honest, and admit that it [Linux] wasn’t designed.
658:
And I know better than most that what I envisioned 10 years ago has nothing in common with what Linux is today. There was certainly no premeditated design there. 274
274.Quoted in Zack Brown, “Kernel Traffic #146 for 17Dec2001,” Kernel Traffic, http://www.kerneltraffic.org/kernel-traffic/kt20011217_146.html; also quoted in Federico Iannacci, “The Linux Managing Model,” http://opensource.mit.edu/papers/iannacci2.pdf.
659:
Adaptability does not answer the questions of intelligent design. Why, for example, does a car have four wheels and two headlights? Often these discussions are polarized: either technical objects are designed, or they are the result of random mutations. What this opposition overlooks is the fact that design and the coordination of collaboration go hand in hand; one reveals the limits and possibilities of the other. Linux represents a particular example of such a problematic—one that has become the paradigmatic case of Free Software—but there have been many others, including UNIX, for which the engineers created a system that reflected the distributed collaboration of users around the world even as the lawyers tried to make it conform to legal rules about licensing and practical concerns about bookkeeping and support.
660:
Because it privileges adaptability over planning, Linux is a recursive public: operating systems and social systems. It privileges openness to new directions, at every level. It privileges the right to propose changes by actually creating them and trying to convince others to use and incorporate them. It privileges the right to fork the software into new and different kinds of systems. Given what it privileges, Linux ends up evolving differently than do systems whose life and design are constrained by corporate organization, or by strict engineering design principles, or by legal or marketing definitions of products—in short, by clear goals. What makes this distinction between the goal-oriented design principle and the principle of adaptability important is its relationship to politics. Goals and planning are the subject of negotiation and consensus, or of autocratic decision-making; adaptability is the province of critique. It should be remembered that Linux is by no means an attempt to create something radically new; it is a rewrite of a UNIX operating system, as Torvalds points out, but one that through adaptation can end up becoming something new.
662:
The Apache Web server and the Apache Group (now called the Apache Software Foundation) provide a second illuminating example of the how and why of coordination in Free Software of the 1990s. As with the case of Linux, the development of the Apache project illustrates how adaptability is privileged over planning [pg 223] and, in particular, how this privileging is intended to resolve the tensions between individual curiosity and virtuosity and collective control and decision-making. It is also the story of the progressive evolution of coordination, the simultaneously technical and social mechanisms of coordinating people and code, patches and votes.
681:
Harthill’s injunction to collaborate seems surprising in the context of a mailing list and project created to facilitate collaboration, but the injunction is specific: collaborate by making plans and sharing goals. Implicit in his words is the tension between a project with clear plans and goals, an overarching design to which everyone contributes, as opposed to a group platform without clear goals that provides individuals with a setting to try out alternatives. Implicit in his words is the spectrum between debugging an existing piece of software with a stable identity and rewriting the fundamental aspects of it to make it something new. The meaning of collaboration bifurcates here: on the one hand, the privileging of the autonomous work of individuals which is submitted to a group peer review and then incorporated; on the other, the privileging of a set of shared goals to which the actions and labor of individuals is subordinated. 292
292.Gabriella Coleman captures this nicely in her discussion of the tension between the individual virtuosity of the hacker and the corporate populism of groups like Apache or, in her example, the Debian distribution of Linux. See Coleman, The Social Construction of Freedom.
685:
The technical and social forms that Linux and Apache take are enabled by the tools they build and use, from bug-tracking tools and mailing lists to the Web servers and kernels themselves. One such tool plays a very special role in the emergence of these organizations: Source Code Management systems (SCMs). SCMs are tools for coordinating people and code; they allow multiple people in dispersed locales to work simultaneously on the same object, the same source code, without the need for a central coordinating overseer and without the risk of stepping on each other’s toes. The history of SCMs—especially in the case of Linux—also illustrates the recursive-depth problem: namely, is Free Software still free if it is created with non-free tools?
691:
Both the Apache project and the Linux kernel project use SCMs. In the case of Apache the original patch-and-vote system quickly began to strain the patience, time, and energy of participants as the number of contributors and patches began to grow. From the very beginning of the project, the contributor Paul Richards had urged the group to make use of cvs. He had extensive experience with the system in the FreeBSD project and was convinced that it provided a superior alternative to the patch-and-vote system. Few other contributors had much experience with it, however, so it wasn’t until over a year after Richards began his admonitions that cvs was eventually adopted. However, cvs is not a simple replacement for a patch-and-vote system; it necessitates a different kind of organization. Richards recognized the trade-off. The patch-and-vote system created a very high level of quality assurance and peer review of the patches that people submitted, while the cvs system allowed individuals to make more changes that might not meet the same level of quality assurance. The cvs system allowed branches—stable, testing, experimental—with different levels of quality assurance, while the patch-and-vote system was inherently directed at one final and stable version. As the case of Shambhala [pg 232] exhibited, under the patch-and-vote system experimental versions would remain unofficial garage projects, rather than serve as official branches with people responsible for committing changes.
693:
The Linux kernel has also struggled with various issues surrounding SCMs and the management of responsibility they imply. The story of the so-called VGER tree and the creation of a new SCM called Bitkeeper is exemplary in this respect. 296 By 1997, Linux developers had begun to use cvs to manage changes to the source code, though not without resistance. Torvalds was still in charge of the changes to the official stable tree, but as other “lieutenants” came on board, the complexity of the changes to the kernel grew. One such lieutenant was Dave Miller, who maintained a “mirror” of the stable Linux kernel tree, the VGER tree, on a server at Rutgers. In September 1998 a fight broke out among Linux kernel developers over two related issues: one, the fact that Torvalds was failing to incorporate (patch) contributions that had been forwarded to him by various people, including his lieutenants; and two, as a result, the VGER cvs repository was no longer in synch with the stable tree maintained by Torvalds. Two different versions of Linux threatened to emerge.
296.See Steven Weber, The Success of Open Source, 117-19; Moody, Rebel Code, 172-78. See also Shaikh and Cornford, “Version Management Tools.”
694:
A great deal of yelling ensued, as nicely captured in Moody’s Rebel Code, culminating in the famous phrase, uttered by Larry McVoy: “Linus does not scale.” The meaning of this phrase is that the ability of Linux to grow into an ever larger project with increasing complexity, one which can handle myriad uses and functions (to “scale” up), is constrained by the fact that there is only one Linus Torvalds. By all accounts, Linus was and is excellent at what he does—but there is only one Linus. The danger of this situation is the danger of a fork. A fork would mean one or more new versions would proliferate under new leadership, a situation much like [pg 233] the spread of UNIX. Both the licenses and the SCMs are designed to facilitate this, but only as a last resort. Forking also implies dilution and confusion—competing versions of the same thing and potentially unmanageable incompatibilities.
696:
McVoy was well-known in geek circles before Linux. In the late stages of the open-systems era, as an employee of Sun, he had penned an important document called “The Sourceware Operating System Proposal.” It was an internal Sun Microsystems document that argued for the company to make its version of UNIX freely available. It was a last-ditch effort to save the dream of open systems. It was also the first such proposition within a company to “go open source,” much like the documents that would urge Netscape to Open Source its software in 1998. Despite this early commitment, McVoy chose not to create Bitkeeper as a Free Software project, but to make it quasi-proprietary, a decision that raised a very central question in ideological terms: can one, or should one, create Free Software using non-free tools?
697:
On one side of this controversy, naturally, was Richard Stallman and those sharing his vision of Free Software. On the other were pragmatists like Torvalds claiming no goals and no commitment to “ideology”—only a commitment to “fun.” The tension laid bare the way in which recursive publics negotiate and modulate the core components of Free Software from within. Torvalds made a very strong and vocal statement concerning this issue, responding to Stallman’s criticisms about the use of non-free software to create Free Software: “Quite frankly, I don’t _want_ people using Linux for ideological reasons. I think ideology sucks. This world would be a much better place if people had less ideology, and a whole lot more ‘I do this because it’s FUN and because others might find it useful, not because I got religion.’” 297
297.Linus Torvalds, “Re: [PATCH] Remove Bitkeeper Documentation from Linux Tree,” 20 April 2002, http://www.uwsg.indiana.edu/hypermail/linux/kernel/0204.2/1018.html. Quoted in Shaikh and Cornford, “Version Management Tools.”
698:
Torvalds emphasizes pragmatism in terms of coordination: the right tool for the job is the right tool for the job. In terms of licenses, [pg 234] however, such pragmatism does not play, and Torvalds has always been strongly committed to the GPL, refusing to let non-GPL software into the kernel. This strategic pragmatism is in fact a recognition of where experimental changes might be proposed, and where practices are settled. The GPL was a stable document, sharing source code widely was a stable practice, but coordinating a project using SCMs was, during this period, still in flux, and thus Bitkeeper was a tool well worth using so long as it remained suitable to Linux development. Torvalds was experimenting with the meaning of coordination: could a non-free tool be used to create Free Software?
699:
McVoy, on the other hand, was on thin ice. He was experimenting with the meaning of Free Software licenses. He created three separate licenses for Bitkeeper in an attempt to play both sides: a commercial license for paying customers, a license for people who sell Bitkeeper, and a license for “free users.” The free-user license allowed Linux developers to use the software for free—though it required them to use the latest version—and prohibited them from working on a competing project at the same time. McVoy’s attempt to have his cake and eat it, too, created enormous tension in the developer community, a tension that built from 2002, when Torvalds began using Bitkeeper in earnest, to 2005, when he announced he would stop.
701:
The developer Andrew Tridgell, well known for his work on a project called Samba and his reverse engineering of a Microsoft networking protocol, began a project to reverse engineer Bitkeeper by looking at the metadata it produced in the course of being used for the Linux project. By doing so, he crossed a line set up by McVoy’s experimental licensing arrangement: the “free as long as you don’t copy me” license. Lawyers advised Tridgell to stay silent on the topic while Torvalds publicly berated him for “willful destruction” and a moral lapse of character in trying to reverse engineer Bitkeeper. Bruce Perens defended Tridgell and censured Torvalds for his seemingly contradictory ethics. 298 McVoy never sued Tridgell, and Bitkeeper has limped along as a commercial project, because, [pg 235] much like the EMACS controversy of 1985, the Bitkeeper controversy of 2005 ended with Torvalds simply deciding to create his own SCM, called git.
298.Andrew Orlowski, “‘Cool it, Linus’—Bruce Perens,” Register, 15 April 2005, http://www.theregister.co.uk/2005/04/15/perens_on_torvalds/page2.html.
702:
The story of the VGER tree and Bitkeeper illustrates common tensions within recursive publics, specifically, the depth of the meaning of free. On the one hand, there is Linux itself, an exemplary Free Software project made freely available; on the other hand, however, there is the ability to contribute to this process, a process that is potentially constrained by the use of Bitkeeper. So long as the function of Bitkeeper is completely circumscribed—that is, completely planned—there can be no problem. However, the moment one user sees a way to change or improve the process, and not just the kernel itself, then the restrictions and constraints of Bitkeeper can come into play. While it is not clear that Bitkeeper actually prevented anything, it is also clear that developers recognized it as a potential drag on a generalized commitment to adaptability. Or to put it in terms of recursive publics, only one layer is properly open, that of the kernel itself; the layer beneath it, the process of its construction, is not free in the same sense. It is ironic that Torvalds—otherwise the spokesperson for antiplanning and adaptability—willingly adopted this form of constraint, but not at all surprising that it was collectively rejected.
705:
Novelty, both in the case of Linux and in intellectual property law more generally, is directly related to the interplay of social and technical coordination: goal direction vs. adaptability. The ideal of adaptability promoted by Torvalds suggests a radical alternative to the dominant ideology of creation embedded in contemporary intellectual-property systems. If Linux is “new,” it is new through adaptation and the coordination of large numbers of creative contributors who challenge the “design” of an operating system from the bottom up, not from the top down. By contrast, McVoy represents a moral imagination of design in which it is impossible to achieve novelty without extremely expensive investment in top-down, goal-directed, unpolitical design—and it is this activity that the intellectual-property system is designed to reward. Both are engaged, however, in an experiment; both are engaged in “figuring out” what the limits of Free Software are.
715:
Coordination is a key component of Free Software, and is frequently identified as the central component. Free Software is the result of a complicated story of experimentation and construction, and the forms that coordination takes in Free Software are specific outcomes of this longer story. Apache and Linux are both experiments—not scientific experiments per se but collective social experiments in which there are complex technologies and legal tools, systems of coordination and governance, and moral and technical orders already present.
716:
Free Software is an experimental system, a practice that changes with the results of new experiments. The privileging of adaptability makes it a peculiar kind of experiment, however, one not directed by goals, plans, or hierarchical control, but more like what John Dewey suggested throughout his work: the experimental praxis of science extended to the social organization of governance in the service of improving the conditions of freedom. What gives this experimentation significance is the centrality of Free Software—and specifically of Linux and Apache—to the experimental expansion of the Internet. As an infrastructure or a milieu, the Internet is changing the conditions of social organization, changing the relationship of knowledge to power, and changing the orientation of collective life toward governance. Free Software is, arguably, the best example of an attempt to make this transformation public, to ensure that it uses the advantages of adaptability as critique to counter the power of planning as control. Free Software, as a recursive public, proceeds by proposing and providing alternatives. It is a bit like Kant’s version of enlightenment: insofar as geeks speak (or hack) as scholars, in a public realm, they have a right to propose criticisms and changes of any sort; as soon as they relinquish [pg 240] that commitment, they become private employees or servants of the sovereign, bound by conscience and power to carry out the duties of their given office. The constitution of a public realm is not a universal activity, however, but a historically specific one: Free Software confronts the specific contemporary technical and legal infrastructure by which it is possible to propose criticisms and offer alternatives. What results is a recursive public filled not only with individuals who govern their own actions but also with code and concepts and licenses and forms of coordination that turn these actions into viable, concrete technical forms of life useful to inhabitants of the present.
736:
At about the same time as his idea for a textbook, Rich’s research group was switching over to Linux, and Rich was first learning about Open Source and the emergence of a fully free operating system created entirely by volunteers. It isn’t clear what Rich’s aha! moment was, other than simply when he came to an understanding that such a thing as Linux was actually possible. Nonetheless, at some point, Rich had the idea that his textbook could be an Open Source textbook, that is, a textbook created not just by him, but by DSP researchers all over the world, and made available to everyone to make use of and modify and improve as they saw fit, just like Linux. Together with Brent Hendricks, Yan David Erlich, [pg 249] and Ross Reedstrom, all of whom, as geeks, had a deep familiarity with the history and practices of Free and Open Source Software, Rich started to conceptualize a system; they started to think about modulations of different components of Free and Open Source Software. The idea of a Free Software textbook repository slowly took shape.
751:
The modulated meaning of source code creates all kinds of new questions—specifically with respect to the other four components. In terms of openness, for instance, Connexions modulates this component very little; most of the actors involved are devoted to the ideals of open systems and open standards, insofar as it is a Free Software project of a conventional type. It builds on UNIX (Linux) and the Internet, and the project leaders maintain a nearly fanatical devotion to openness at every level: applications, programming languages, standards, protocols, mark-up languages, interface tools. Every place where there is an open (as opposed to a [pg 256] proprietary) solution—that choice trumps all others (with one noteworthy exception). 310 James Boyle recently stated it well: “Wherever possible, design the system to run with open content, on open protocols, to be potentially available to the largest possible number of users, and to accept the widest possible range of experimental modifications from users who can themselves determine the development of the technology.” 311
310.The most significant exception has been the issue of tools for authoring content in XML. For most of the life of the Connexions project, the XML mark-up language has been well-defined and clear, but there has been no way to write a module in XML, short of directly writing the text and the tags in a text editor. For all but a very small number of possible users, this feels too much like programming, and they experience it as too frustrating to be worth it. The solution (albeit temporary) was to encourage users to make use of a proprietary XML editor (like a word processor, but capable of creating XML content). Indeed, the Connexions project’s devotion to openness was tested by one of the most important decisions its participants made: to pursue the creation of an Open Source XML text editor in order to provide access to completely open tools for creating completely open content.
311.Boyle, “Mertonianism Unbound,” 14.
805:
Perhaps unsurprisingly, the Connexions team spent a great deal of time at the outset of the project creating a pdf-document-creation system that would essentially mimic the creation of a conventional textbook, with the push of a button. 333 But even this process causes a subtle transformation: the concept of “edition” becomes much harder to track. While a conventional textbook is a stable entity that goes through a series of printings and editions, each of which is marked on its publication page, a Connexions document can go through as many versions as an author wants to make changes, all the while without necessarily changing editions. In this respect, the modulation of the concept of source code translates the practices of updating and “versioning” into the realm of textbook writing. Recall the cases ranging from the “continuum” of UNIX versions discussed by Ken Thompson to the complex struggles over version control in the Linux and Apache projects. In the case of writing source code, exactitude demands that the change of even a single character be tracked and labeled as a version change, whereas a [pg 278] conventional-textbook spelling correction or errata issuance would hardly create the need for a new edition.
333.Conventional here is actually quite historically proximate: the system creates a pdf document by translating the XML document into a LaTeX document, then into a pdf document. LaTeX has been, for some twenty years, a standard text-formatting and typesetting language used by some [pg 345] sectors of the publishing industry (notably mathematics, engineering, and computer science). Were it not for the existence of this standard from which to bootstrap, the Connexions project would have faced a considerably more difficult challenge, but much of the infrastructure of publishing has already been partially transformed into a computer-mediated and -controlled system whose final output is a printed book. Later in Connexions’s lifetime, the group coordinated with an Internet-publishing startup called Qoop.com to take the final step and make Connexions courses available as print-on-demand, cloth-bound textbooks, complete with ISBNs and back-cover blurbs.
834:
In the case of shared software source code, one of the principal reasons for sharing it was to reuse it: to build on it, to link to it, to employ it in ways that made building more complex objects into an easier task. The very design philosophy of UNIX well articulates the necessity of modularity and reuse, and the idea is no less powerful in other areas, such as textbooks. But just as the reuse of software is not simply a feature of software’s technical characteristics, the idea of “reusing” scholarly materials implies all kinds of questions that are not simply questions of recombining texts. The ability to share source code—and the ability to create complex software based on it—requires modulations of both the legal meaning of software, as in the case of EMACS, and the organizational form, as in the [pg 290] emergence of Free Software projects other than the Free Software Foundation (the Linux kernel, Perl, Apache, etc.).
849:
The Creative Commons licenses allow authors to grant the use of their work in about a dozen different ways—that is, the license itself comes in versions. One can, for instance, require attribution, prohibit commercial exploitation, allow derivative or modified works to be made and circulated, or some combination of all these. These different combinations actually create different licenses, each of which grants intellectual-property rights under slightly different conditions. For example, say Marshall Sahlins decides to write a paper about how the Internet is cultural; he copyrights the paper (“© 2004 Marshall Sahlins”), he requires that any use of it or any copies of it maintain the copyright notice and the attribution of [pg 295] authorship (these can be different), and he furthermore allows for commercial use of the paper. It would then be legal for a publishing house to take the paper off Sahlins’s Linux-based Web server and publish it in a collection without having to ask permission, as long as the paper remains unchanged and he is clearly and unambiguously listed as author of the paper. The publishing house would not get any rights to the work, and Sahlins would not get any royalties. If he had specified noncommercial use, the publisher would instead have needed to contact him and arrange for a separate license (Creative Commons licenses are nonexclusive), under which he could demand some share of revenue and his name on the cover of the book. 345 But say he was, instead, a young scholar seeking only peer recognition and approbation—then royalties would be secondary to maximum circulation. Creative Commons allows authors to assert, as its members put it, “some rights reserved” or even “no rights reserved.”
345.In December 2006 Creative Commons announced a set of licenses that facilitate the “follow up” licensing of a work, especially one initially issued under a noncommercial license.
916b:
Yochai Benkler "Coase’s Penguin, or Linux and the Nature of the Firm", Yale Law Journal, 112.3, 2002, 369-446.
Mike Gancarz "Linux and the UNIX Philosophy", 2003, Digital Press, Boston.
Glyn Moody "Rebel Code: Inside Linux and the Open Source Revolution", 2001, Perseus, Cambridge, Mass.
Matt Ratto "The Pressure of Openness: The Hybrid Work of Linux Free/Open Source Kernel Developers", 2003, San Diego.
Eric S Raymond "The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary", 2001, 79-135, O’Reilly Press, Sebastopol, Calif.
Maha Shaikh, Tony Cornford "Version Management Tools: CVS to BK in the Linux Kernel", 2003-05-03, Portland, Oregon.
Peter Wayner "Free for All: How LINUX and the Free Software Movement Undercut the High-Tech Titans", 2000, Harper Business, New York.
"Free Culture - How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity" (2004) [en] LESSIG, Lawrence
275:
Finally, we could try to excuse this piracy with the argument that the piracy actually helps the copyright owner. When the Chinese “steal” Windows, that makes the Chinese dependent on Microsoft. Microsoft loses the value of the software that was taken. But it gains users who are used to life in the Microsoft world. Over time, as the nation grows more wealthy, more and more people will buy software rather than steal it. And hence over time, because that buying will benefit Microsoft, Microsoft benefits from the piracy. If instead of pirating Microsoft Windows, the Chinese used the free GNU/Linux operating system, then these Chinese users would not eventually be buying Microsoft. Without piracy, then, Microsoft would lose.
277:
Still, the argument is not terribly persuasive. We don't give the alcoholic a defense when he steals his first beer, merely because that will make it more likely that he will buy the next three. Instead, we ordinarily allow businesses to decide for themselves when it is best to give their product away. If Microsoft fears the competition of GNU/Linux, then Microsoft can give its product away, as it did, for example, with Internet Explorer to fight Netscape. A property right means giving the property owner the right to say who gets access to what - at least ordinarily. And if the law properly balances the rights of the copyright owner with the rights of access, then violating the law is still wrong.
969:
In the Supreme Court, the briefs on our side were about as diverse as it gets. They included an extraordinary historical brief by the Free Software Foundation (home of the GNU project that made GNU/Linux possible). They included a powerful brief about the costs of uncertainty by Intel. There were two law professors' briefs, one by copyright scholars and one by First Amendment scholars. There was an exhaustive and uncontroverted brief by the world's experts in the history of the Progress Clause. And of course, there was a new brief by Eagle Forum, repeating and strengthening its arguments.
1109:
I don't mean to enter that debate here. It is important only to make clear that the distinction is not between commercial and noncommercial software. There are many important companies that depend fundamentally upon open source and free software, IBM being the most prominent. IBM is increasingly shifting its focus to the GNU/Linux operating system, the most famous bit of “free software” - and IBM is emphatically a commercial entity. Thus, to support “open source and free software” is not to oppose commercial entities. It is, instead, to support a mode of software development that is different from Microsoft's. 202
202.Microsoft's position about free and open source software is more sophisticated. As it has repeatedly asserted, it has no problem with “open source” software or software in the public domain. Microsoft's principal opposition is to “free software” licensed under a “copyleft” license, meaning a license that requires the licensee to adopt the same terms on any derivative work. See Bradford L. Smith, “The Future of Software: Enabling the Marketplace to Decide,” Government Policy Toward Open Source Software (Washington, D.C.: AEI-Brookings Joint Center for Regulatory Studies, American Enterprise Institute for Public Policy Research, 2002), 69, available at link #62. See also Craig Mundie, Microsoft senior vice president, The Commercial Software Model, discussion at New York University Stern School of Business (3 May 2001), available at link #63.
1163:
Therefore, in 1984, Stallman began a project to build a free operating system, so that at least a strain of free software would survive. That was the birth of the GNU project, into which Linus Torvalds's “Linux” kernel was added to produce the GNU/Linux operating system.
"Live Systems Manual" (2015) [en] Live Systems Project
25:
● chroot: The chroot program, chroot(8), enables us to run different instances of the GNU/Linux environment on a single system simultaneously without rebooting.
115:
● Linux 2.6 or newer.
165:
● Linux kernel image, usually named vmlinuz*
166:
● Initial RAM disk image (initrd): a RAM disk set up for the Linux boot, containing modules possibly needed to mount the System image and some scripts to do it.
168:
● Bootloader: A small piece of code crafted to boot from the chosen medium, possibly presenting a prompt or menu to allow selection of options/configuration. It loads the Linux kernel and its initrd to run with an associated system filesystem. Different solutions can be used, depending on the target medium and format of the filesystem containing the previously mentioned components: isolinux to boot from a CD or DVD in ISO9660 format, syslinux for HDD or USB drive booting from a VFAT partition, extlinux for ext2/3/4 and btrfs partitions, pxelinux for PXE netboot, GRUB for ext2/3/4 partitions, etc.
169:
You can use live-build to build the system image from your specifications, set up a Linux kernel, its initrd, and a bootloader to run them, all in one medium-dependent format (ISO9660 image, disk image, etc.).
175:
The web interface currently makes no provision to prevent the use of invalid combinations of options, and in particular, where changing an option would normally (i.e. using live-build directly) change defaults of other options listed in the web form, the web builder does not change these defaults. Most notably, if you change --architectures from the default i386 to amd64, you must change the corresponding option --linux-flavours from the default 586 to amd64. See the lb_config man page for the version of live-build installed on the web builder for more details. The version number of live-build is listed at the bottom of the web builder page.
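For reference, when running live-build directly the two options are simply changed together. A minimal sketch (assuming a live-build version in which the amd64 flavour is itself named amd64):

$ lb config --architectures amd64 --linux-flavours amd64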
230:
In order to make the dkms package work, the kernel headers for the kernel flavour used in your image also need to be installed. Instead of manually listing the correct linux-headers package in the package list created above, the selection of the right package can be done automatically by live-build.
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
The generated binary image contains a VFAT partition and the syslinux bootloader, ready to be directly written on a USB device. Once again, using an HDD image is just like using an ISO hybrid one on USB. Follow the instructions in Using an ISO hybrid live image, except use the filename live-image-i386.img instead of live-image-i386.hybrid.iso.
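For instance (with /dev/sdX standing in for the actual device name of your USB stick, which must be checked carefully because its contents are overwritten), one common way to write the image is simply:

$ sudo cp live-image-i386.img /dev/sdX
$ sync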
254:
In a network boot, the client runs a small piece of software which usually resides on the EPROM of the Ethernet card. This program sends a DHCP request to get an IP address and information about what to do next. Typically, the next step is fetching a higher-level bootloader via the TFTP protocol. That could be pxelinux or GRUB, or the client could even boot directly into an operating system like Linux.
255:
For example, if you unpack the generated live-image-i386.netboot.tar archive in the /srv/debian-live directory, you'll find the filesystem image in live/filesystem.squashfs and the kernel, initrd and pxelinux bootloader in tftpboot/.
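A sketch of that unpacking step (assuming the archive sits in the current directory and /srv/debian-live is used as in the example):

$ sudo mkdir -p /srv/debian-live
$ sudo tar -xf live-image-i386.netboot.tar -C /srv/debian-live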
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server
ddns-update-style none;
option domain-name "example.org";
option domain-name-servers ns1.example.org, ns2.example.org;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.1 192.168.0.254;
  filename "pxelinux.0";
  next-server 192.168.0.2;
  option subnet-mask 255.255.255.0;
  option broadcast-address 192.168.0.255;
  option routers 192.168.0.1;
}
267:
Once the guest computer has downloaded and booted a Linux kernel and loaded its initrd, it will try to mount the Live filesystem image through an NFS server.
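The NFS side is not shown in this excerpt; a plausible read-only export of the directory used above would be a line like the following in /etc/exports on the server (options chosen here as an illustrative example), activated with exportfs:

/srv/debian-live *(ro,async,no_root_squash,no_subtree_check)

$ sudo exportfs -rv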
273:
Setting up these three services can be a little tricky. You might need some patience to get all of them working together. For more information, see the syslinux wiki at http://www.syslinux.org/wiki/index.php/PXELINUX or the Debian Installer Manual's TFTP Net Booting section at http://d-i.alioth.debian.org/manual/en.i386/ch04s05.html. They might help, as their processes are very similar.
293:
To boot a webboot image it is enough to have the components mentioned above, i.e. vmlinuz and initrd.img, on a USB stick inside a directory named live/, with syslinux installed as the bootloader. Then boot from the USB stick and type fetch=URL/PATH/TO/FILE at the boot options. live-boot will retrieve the squashfs file and store it in RAM. This way, it is possible to use the downloaded compressed filesystem as a regular live system. For example:
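As an invented illustration (the server address and path below are placeholders, not taken from the manual), with the squashfs file published on a web server the boot option could read:

fetch=http://192.168.1.2/images/filesystem.squashfs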
326:
More information on initial ramfs in Debian can be found in the Debian Linux Kernel Handbook at http://kernel-handbook.alioth.debian.org/ in the chapter on initramfs.
341:
#!/bin/sh
lb config noauto \
    --architectures i386 \
    --linux-flavours 686-pae \
    --binary-images hdd \
    --mirror-bootstrap http://ftp.ch.debian.org/debian/ \
    --mirror-binary http://ftp.ch.debian.org/debian/ \
    "${@}"
436:
One or more kernel flavours will be included in your image by default, depending on the architecture. You can choose different flavours via the --linux-flavours option. Each flavour is suffixed to the default stub linux-image to form each metapackage name which in turn depends on an exact kernel package to be included in your image.
437:
Thus by default, an amd64 architecture image will include the linux-image-amd64 flavour metapackage, and an i386 architecture image will include the linux-image-586 metapackage.
438:
When more than one kernel package version is available in your configured archives, you can specify a different kernel package name stub with the --linux-packages option. For example, suppose you are building an amd64 architecture image and want to add the experimental archive for testing purposes so that you can install the linux-image-3.18.0-trunk-amd64 kernel. You would configure that image as follows:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk
$ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
The proper and recommended way to deploy your own kernel packages is to follow the instructions in the kernel-handbook. Remember to modify the ABI and flavour suffixes appropriately, then include a complete build of the linux and matching linux-latest packages in your repository.
443:
If you opt to build the kernel packages without the matching metapackages, you need to specify an appropriate --linux-packages stub as discussed in Kernel flavour and version. As we explain in Installing modified or third-party packages, it is best if you include your custom kernel packages in your own repository, though the alternatives discussed in that section work as well.
618:
live-build uses syslinux and some of its derivatives (depending on the image type) as bootloaders by default. They can be easily customized to suit your needs.
619:
In order to use a full theme, copy /usr/share/live/build/bootloaders into config/bootloaders and edit the files in there. If you do not want to bother modifying all supported bootloader configurations, providing a local customized copy of just one of the bootloaders, e.g. isolinux in config/bootloaders/isolinux, is enough too, depending on your use case.
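In command form (a sketch, run from the top of the build directory after lb config), copying everything might look like:

$ cp -r /usr/share/live/build/bootloaders config/

or, for a single bootloader only:

$ mkdir -p config/bootloaders
$ cp -r /usr/share/live/build/bootloaders/isolinux config/bootloaders/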
621:
There are many possibilities when it comes to making changes. For instance, syslinux derivatives are configured by default with a timeout of 0 (zero) which means that they will pause indefinitely at their splash screen until you press a key.
622:
To modify the boot timeout of a default iso-hybrid image, just edit a default isolinux.cfg file, specifying the timeout in units of 1/10 of a second. A modified isolinux.cfg to boot after five seconds would be similar to this:
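As a sketch in isolinux.cfg syntax (the surrounding lines are typical defaults included for context, not quoted from this excerpt), five seconds corresponds to a timeout value of 50:

include menu.cfg
default vesamenu.c32
prompt 0
timeout 50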
645:
$ lb config --architectures i386 --linux-flavours 586 \
    --debian-installer live
$ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● Use the “Linux style” of line breaks:
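As a reminder, “Linux style” means ending each line with a bare line feed (LF) rather than a CR+LF pair. One quick check is the file utility (file name here is hypothetical), which reports “with CRLF line terminators” if Windows-style endings have crept in:

$ file manuscript.ssi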
853:
#!/bin/sh
lb config noauto \
    --architectures i386 \
    --linux-flavours 686-pae \
    "${@}"
858:
First, --architectures i386 ensures that on our amd64 build system, we build a 32-bit version suitable for use on most machines. Second, we use --linux-flavours 686-pae because we don't anticipate using this image on much older systems. Third, we have chosen the lxde task metapackage to give us a minimal desktop. And finally, we have added two initial favourite packages: iceweasel and xchat.
"Manuale di Live Systems" (2015) [it] Live Systems Project
25:
● chroot: il programma chroot, chroot(8), rende possibile eseguire diverse istanze dell'ambiente GNU/Linux su un singolo sistema simultaneamente senza riavviare.
115:
● Linux 2.6 o successivi
165:
● Immagine del kernel Linux, comunemente chiamata vmlinuz*
166:
● Initial RAM disk image (initrd): un disco RAM creato per il boot di Linux, contenente i moduli potenzialmente necessari per montare l'immagine di sistema e alcuni script per farlo.
168:
● Bootloader: una piccola porzione di codice predisposto per l'avvio dal supporto scelto, che presenta un prompt o un menu per la selezione di opzioni/configurazioni. Carica il kernel Linux ed il suo initrd da eseguire con un filesystem associato. Possono essere usate diverse soluzioni, in base al supporto di destinazione ed al formato del filesystem contenenti le componenti precedentemente citate: isolinux per il boot da CD o DVD nel formato ISO9660, syslinux per supporti HDD o USB che si avviano da una partizione VFAT, extlinux per le partizioni ext/2/3/4 e btrfs, pxelinux per il netboot PXE, GRUB per partizioni ext2/3/4, ecc.
169:
È possibile usare live-build per creare l'immagine di sistema secondo le proprie specifiche, scegliere un kernel Linux, il suo initrd ed un bootloader per avviarli, tutto in un unico formato che dipende dal mezzo (immagini ISO9660, immagine disco, ecc.)
175:
The web interface currently makes no provision to prevent the use of invalid combinations of options, and in particular, where changing an option would normally (i.e. using live-build directly) change defaults of other options listed in the web form, the web builder does not change these defaults. Most notably, if you change --architectures from the default i386 to amd64, you must change the corresponding option --linux-flavours from the default 586 to amd64. See the lb_config man page for the version of live-build installed on the web builder for more details. The version number of live-build is listed at the bottom of the web builder page.
230:
Per far funzionare il pacchetto dkms vanno anche installati gli header per il kernel utilizzato nell'immagine. Anziché indicare manualmente il pacchetto linux-headers adeguato nell'elenco dei pacchetti creato prima, la selezione può essere fatta automaticamente da live-build.
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
The generated binary image contains a VFAT partition and the syslinux bootloader, ready to be directly written on a USB device. Once again, using an HDD image is just like using an ISO hybrid one on USB. Follow the instructions in Using an ISO hybrid live image, except use the filename live-image-i386.img instead of live-image-i386.hybrid.iso.
254:
In un avvio tramite rete, il client esegue una piccola parte di software che normalmente risiede sulla EPROM della scheda Ethernet. Questo programma invia una richiesta DHCP per ottenere un indirizzo IP e le informazioni su cosa fare in seguito. In genere il passo successivo è ottenere un bootloader di livello superiore attraverso il protocollo TFTP. Questo potrebbe essere pxelinux, GRUB, o anche avviare direttamente un sistema operativo come Linux.
255:
For example, if you unpack the generated live-image-i386.netboot.tar archive in the /srv/debian-live directory, you'll find the filesystem image in live/filesystem.squashfs and the kernel, initrd and pxelinux bootloader in tftpboot/.
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server ddns-update-style none; option domain-name "example.org"; option domain-name-servers ns1.example.org, ns2.example.org; default-lease-time 600; max-lease-time 7200; log-facility local7; subnet 192.168.0.0 netmask 255.255.255.0 { range 192.168.0.1 192.168.0.254; filename "pxelinux.0"; next-server 192.168.0.2; option subnet-mask 255.255.255.0; option broadcast-address 192.168.0.255; option routers 192.168.0.1; }
267:
Una volta che il computer ospite ha scaricato e avviato un kernel Linux e caricato il suo initrd, cercherà di montare l'immagine del filesystem Live tramite un server NFS.
273:
Configurare questi tre servizi può essere un po' problematico, serve un attimo di pazienza per farli funzionare assieme. Per ulteriori informazioni vedere il wiki syslinux http://www.syslinux.org/wiki/index.php/PXELINUX o il manuale del Debian Installer alla sezione per l'avvio TFTP da rete http://d-i.alioth.debian.org/manual/en.i386/ch04s05.html. Ciò può essere d'aiuto, considerato che il procedimento è molto simile.
293:
In order to boot a webboot image it is enough to have the components mentioned above, i.e. vmlinuz and initrd.img in a usb stick inside a directory named live/ and install syslinux as bootloader. Then boot from the usb stick and type fetch=URL/PATH/TO/FILE at the boot options. live-boot will retrieve the squashfs file and store it into ram. This way, it is possible to use the downloaded compressed filesystem as a regular live system. For example:
326:
Si possono trovare maggiori informazioni sui ramfs iniziali nel capitolo su initramfs del Debian Linux Kernel Handbook all'indirizzo http://kernel-handbook.alioth.debian.org/.
341:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ --binary-images hdd \ --mirror-bootstrap http://ftp.ch.debian.org/debian/ \ --mirror-binary http://ftp.ch.debian.org/debian/ \ "${@}"
436:
A seconda dell'architettura, nell'immagine verranno inclusi uno o più tipi di kernel in modo predefinito. È possibile scegliere tipi differenti tramite l'opzione --linux-flavours, ognuno ha come suffisso linux-image che costituisce il nome del metapaccchetto che a sua volta dipende dall'esatto pacchetto del kernel da inserire nell'immagine.
437:
Thus by default, an amd64 architecture image will include the linux-image-amd64 flavour metapackage, and an i386 architecture image will include the linux-image-586 metapackage.
438:
When more than one kernel package version is available in your configured archives, you can specify a different kernel package name stub with the --linux-packages option. For example, supposing you are building an amd64 architecture image and add the experimental archive for testing purposes so you can install the linux-image-3.18.0-trunk-amd64 kernel. You would configure that image as follows:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk $ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
La maniera corretta e raccomandata per collocare i propri pacchetti è di seguire le istruzioni nel kernel-handbook. Ricordarsi di modificare i suffissi per ABI e tipologia in modo appropriato, quindi includere una compilazione completa del pacchetto linux e del corrispondente linux-latest nel repository.
443:
Se si opta per creare i pacchetti del kernel senza i metapacchetti corrispondenti, bisogna specificare un suffisso --linux-packages appropriato come discusso in Tipi e versioni del kernel. Come spiegato in Installare pacchetti modificati o di terze parti, è meglio includere i propri pacchetti del kernel nel proprio repository, sebbene funzionino anche le alternative discusse in tale sezione.
618:
live-build usa syslinux e alcuni dei suoi derivati (a seconda del tipo di immagine) come bootloader predefiniti. Si possono facilmente personalizzare per soddisfare le proprie esigenze.
619:
Per utilizzare un tema completo, copiare /usr/share/live/build/bootloaders in config/bootloaders e modificare i file. Se non si vogliono modificare tutte le configurazioni dei bootloader supportati è sufficiente fornire la copia locale di uno di essi, ad esempio isolinux in config/bootloaders/isolinux può bastare, dipende dalle esigenze.
621:
Quando si tratta di fare modifiche ci sono varie possibilità. Per esempio i derivati di syslinux sono configurati con un timeout impostato a 0 (zero) in modo predefinito, significa che resteranno in pausa al loro splash screen fino a quando non si preme un tasto.
622:
Per modificare il timeout di avvio di un'immagine iso-hybrid modificare un file isolinux.cfg predefinito specificando il timeout in unità di 1/10 di secondo. Un file isolinux.cfg modificato per effettuare il boot dopo cinque secondi sarebbe simile a questo:
645:
$ lb config --architectures i386 --linux-flavours 586 \ --debian-installer live $ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● Utilizzare lo “stile Linux” per le interruzioni di riga:
853:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ "${@}"
858:
Per prima cosa, --architectures i386 assicura che sul nostro sistema amd64 costruiamo una versione a 32-bit utilizzabile sulla maggior parte delle macchine. In secondo luogo, usiamo --linux-flavours 686-pae dato che non prevediamo di usare questa immagine su sistemi troppo vecchi. Terzo, abbiamo scelto il metapacchetto task lxde per avere un desktop minimale. Infine abbiamo aggiunto due pacchetti preferiti: iceweasel e xchat.
"Live Systems Handbuch" (2015) [de] Live Systems Projekt
25:
● chroot: The chroot program, chroot(8), enables us to run different instances of the GNU/Linux environment on a single system simultaneously without rebooting.
115:
● Linux 2.6 or newer.
165:
● Linux kernel image, usually named vmlinuz*
166:
● Initial RAM disk image (initrd): a RAM disk set up for the Linux boot, containing modules possibly needed to mount the System image and some scripts to do it.
168:
● Bootloader: A small piece of code crafted to boot from the chosen medium, possibly presenting a prompt or menu to allow selection of options/configuration. It loads the Linux kernel and its initrd to run with an associated system filesystem. Different solutions can be used, depending on the target medium and format of the filesystem containing the previously mentioned components: isolinux to boot from a CD or DVD in ISO9660 format, syslinux for HDD or USB drive booting from a VFAT partition, extlinux for ext2/3/4 and btrfs partitions, pxelinux for PXE netboot, GRUB for ext2/3/4 partitions, etc.
169:
You can use live-build to build the system image from your specifications, set up a Linux kernel, its initrd, and a bootloader to run them, all in one medium-dependant format (ISO9660 image, disk image, etc.).
175:
The web interface currently makes no provision to prevent the use of invalid combinations of options, and in particular, where changing an option would normally (i.e. using live-build directly) change defaults of other options listed in the web form, the web builder does not change these defaults. Most notably, if you change --architectures from the default i386 to amd64, you must change the corresponding option --linux-flavours from the default 586 to amd64. See the lb_config man page for the version of live-build installed on the web builder for more details. The version number of live-build is listed at the bottom of the web builder page.
230:
In order to make the dkms package work, also the kernel headers for the kernel flavour used in your image need to be installed. Instead of manually listing the correct linux-headers package in above created package list, the selection of the right package can be done automatically by live-build.
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
The generated binary image contains a VFAT partition and the syslinux bootloader, ready to be directly written on a USB device. Once again, using an HDD image is just like using an ISO hybrid one on USB. Follow the instructions in Using an ISO hybrid live image, except use the filename live-image-i386.img instead of live-image-i386.hybrid.iso.
254:
In a network boot, the client runs a small piece of software which usually resides on the EPROM of the Ethernet card. This program sends a DHCP request to get an IP address and information about what to do next. Typically, the next step is getting a higher level bootloader via the TFTP protocol. That could be pxelinux, GRUB, or even boot directly to an operating system like Linux.
255:
For example, if you unpack the generated live-image-i386.netboot.tar archive in the /srv/debian-live directory, you'll find the filesystem image in live/filesystem.squashfs and the kernel, initrd and pxelinux bootloader in tftpboot/.
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server ddns-update-style none; option domain-name "example.org"; option domain-name-servers ns1.example.org, ns2.example.org; default-lease-time 600; max-lease-time 7200; log-facility local7; subnet 192.168.0.0 netmask 255.255.255.0 { range 192.168.0.1 192.168.0.254; filename "pxelinux.0"; next-server 192.168.0.2; option subnet-mask 255.255.255.0; option broadcast-address 192.168.0.255; option routers 192.168.0.1; }
267:
Once the guest computer has downloaded and booted a Linux kernel and loaded its initrd, it will try to mount the Live filesystem image through a NFS server.
273:
Setting up these three services can be a little tricky. You might need some patience to get all of them working together. For more information, see the syslinux wiki at http://www.syslinux.org/wiki/index.php/PXELINUX or the Debian Installer Manual's TFTP Net Booting section at http://d-i.alioth.debian.org/manual/en.i386/ch04s05.html. They might help, as their processes are very similar.
293:
In order to boot a webboot image it is enough to have the components mentioned above, i.e. vmlinuz and initrd.img in a usb stick inside a directory named live/ and install syslinux as bootloader. Then boot from the usb stick and type fetch=URL/PATH/TO/FILE at the boot options. live-boot will retrieve the squashfs file and store it into ram. This way, it is possible to use the downloaded compressed filesystem as a regular live system. For example:
326:
More information on initial ramfs in Debian can be found in the Debian Linux Kernel Handbook at http://kernel-handbook.alioth.debian.org/ in the chapter on initramfs.
341:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ --binary-images hdd \ --mirror-bootstrap http://ftp.ch.debian.org/debian/ \ --mirror-binary http://ftp.ch.debian.org/debian/ \ "${@}"
436:
One or more kernel flavours will be included in your image by default, depending on the architecture. You can choose different flavours via the --linux-flavours option. Each flavour is suffixed to the default stub linux-image to form each metapackage name which in turn depends on an exact kernel package to be included in your image.
437:
Thus by default, an amd64 architecture image will include the linux-image-amd64 flavour metapackage, and an i386 architecture image will include the linux-image-586 metapackage.
438:
When more than one kernel package version is available in your configured archives, you can specify a different kernel package name stub with the --linux-packages option. For example, supposing you are building an amd64 architecture image and add the experimental archive for testing purposes so you can install the linux-image-3.18.0-trunk-amd64 kernel. You would configure that image as follows:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk $ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
The proper and recommended way to deploy your own kernel packages is to follow the instructions in the kernel-handbook. Remember to modify the ABI and flavour suffixes appropriately, then include a complete build of the linux and matching linux-latest packages in your repository.
443:
If you opt to build the kernel packages without the matching metapackages, you need to specify an appropriate --linux-packages stub as discussed in Kernel flavour and version. As we explain in Installing modified or third-party packages, it is best if you include your custom kernel packages in your own repository, though the alternatives discussed in that section work as well.
618:
live-build uses syslinux and some of its derivatives (depending on the image type) as bootloaders by default. They can be easily customized to suit your needs.
619:
In order to use a full theme, copy /usr/share/live/build/bootloaders into config/bootloaders and edit the files in there. If you do not want to bother modifying all supported bootloader configurations, only providing a local customized copy of one of the bootloaders, e.g. isolinux in config/bootloaders/isolinux is enough too, depending on your use case.
621:
There are many possibilities when it comes to making changes. For instance, syslinux derivatives are configured by default with a timeout of 0 (zero) which means that they will pause indefinitely at their splash screen until you press a key.
622:
To modify the boot timeout of a default iso-hybrid image just edit a default isolinux.cfg file specifying the timeout in units of 1/10 seconds. A modified isolinux.cfg to boot after five seconds would be similar to this:
645:
$ lb config --architectures i386 --linux-flavours 586 \ --debian-installer live $ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● Use the “Linux style” of line breaks:
853:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ "${@}"
858:
First, --architectures i386 ensures that on our amd64 build system, we build a 32-bit version suitable for use on most machines. Second, we use --linux-flavours 686-pae because we don't anticipate using this image on much older systems. Third, we have chosen the lxde task metapackage to give us a minimal desktop. And finally, we have added two initial favourite packages: iceweasel and xchat.
"Live システムマニュアル" (2015) [ja] Live システムプロジェクト
25:
● chroot: chroot プログラム。chroot(8) により、単一のシステム上で異なる GNU/Linux 環境を再起動せずに並行して実行できるようになります。
115:
● Linux 2.6 以降。
165:
● Linux カーネルイメージ、通常 vmlinuz* という名前です
166:
● 初期 RAM ディスクイメージ (initrd): Linux ブート用に用意された RAM ディスクで、システムのイメージをマウントするのに必要となる可能性があるモジュールとマウントするためのスクリプトをいくつか収録しています。
168:
● ブートローダ: 選択したメディアからブートするように作られた短いコードの集合で、オプション/設定を選択できるプロンプトやメニューを恐らく提示します。Linux カーネルとその initrd を読み込んでそのシステムのファイルシステム上で実行します。前に言及した構成要素を収録する対象メディアやファイルシステムの形式によっては別の方法があります。isolinux では ISO9660 形式のCDやDVDからのブート、syslinux ではHDDやUSBドライブの VFAT パーティションからのブート、extlinux では ext2/3/4 や btrfs パーティション、pxelinux では PXE netboot、GRUB では ext2/3/4 パーティション、等。
169:
live-build を使って Linux カーネル、initrd、それを実行するためのブートローダを独自仕様で用意して全て1つのメディア特有の形式 (ISO9660 イメージやディスクイメージ等) でシステムのイメージをビルドできます。
175:
ウェブインターフェイスでは現在、オプションの不正な組み合わせを避ける対策を何も取っていません。また、特に、変更すると通常ウェブフォームにある他のオプションのデフォルト値 (つまり live-build を直接使った場合の値) が変わるオプションを変更した場合にウェブビルダーはそのデフォルト値を変更しません。最も顕著な例として、--architectures をデフォルトの i386 から amd64 に変更すると対応するオプション --linux-flavours をデフォルトの 586 から amd64 に変更する必要があります。ウェブビルダーにインストールされている live-build のバージョンやさらなる詳細については lb_config man ページを見てください。live-build のバージョン番号はウェブビルダーのページ下部に記載されています。
230:
dkms パッケージを機能させるためには、そのイメージで利用しているカーネルの種類のカーネルヘッダもインストールする必要があります。正しいパッケージの選択は上記で作成したパッケージ一覧に正しい linux-headers パッケージを手作業により列挙する代わりに live-build により自動的に行うことができます。
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
生成されたバイナリイメージには VFAT パーティションと syslinux ブートローダが収録され、そのままUSB機器に書きこめます。繰り返しますがHDDイメージの使い方はUSBで ISO hybrid イメージを使うのと同様です。{ISO hybrid Live イメージの利用}#using-iso-hybrid の指示に従ってください。live-image-i386.hybrid.iso に代えて live-image-i386.img をファイル名に使う点が異なります。
254:
ネットワーク経由のブートでは、クライアントは通常イーサネットカードの EPROM にある小さなソフトウェアを実行します。このプログラムは DHCP リクエストを送り、IPアドレスと次に行うことについての情報を取得します。次の段階は通常、TFTP プロトコルを経由した高レベルブートローダの取得です。これには pxelinux や GRUB、さらには直接 Linux のようなオペレーティングシステムをブートすることもできます。
255:
例えば生成された live-image-i386.netboot.tar アーカイブを /srv/debian-live ディレクトリに展開すると、live/filesystem.squashfs にファイルシステムのイメージ、カーネルや initrd、pxelinux ブートローダが tftpboot/ にあることがわかるでしょう。
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server ddns-update-style none; option domain-name "example.org"; option domain-name-servers ns1.example.org, ns2.example.org; default-lease-time 600; max-lease-time 7200; log-facility local7; subnet 192.168.0.0 netmask 255.255.255.0 { range 192.168.0.1 192.168.0.254; filename "pxelinux.0"; next-server 192.168.0.2; option subnet-mask 255.255.255.0; option broadcast-address 192.168.0.255; option routers 192.168.0.1; }
267:
ゲストコンピュータが Linux カーネルをダウンロード、ブートして initrd を読み込むと、NFSサーバ経由で Live ファイルシステムのイメージをマウントしようとします。
273:
この3つのサービスの設定にはやや注意が必要かもしれません。全て協調して機能させるまでには忍耐がいくらか必要かもしれません。さらなる情報については http://www.syslinux.org/wiki/index.php/PXELINUX にある syslinux wiki や http://d-i.alioth.debian.org/manual/ja.i386/ch04s05.html にある Debian インストーラマニュアルの TFTP ネットブート節を見てください。方法はとても似ているので手助けになるかもしれません。
293:
ウェブブートイメージの起動は上記で示した構成要素、つまり vmlinuz と initrd.img をUSBメモリの live/ ディレクトリ以下に書き込み、ブートローダとして syslinux をインストールすれば十分です。そしてUSBメモリからブートしてブートオプションに fetch=URL/ファイル/への/パス を入力します。live-boot は squashfs ファイルを取得してRAMに格納します。こうして、ダウンロードした圧縮ファイルシステムを普通の Live システムとして使えるようになります。例えば:
326:
Debian の初期RAMファイルシステムについてのさらなる情報は http://kernel-handbook.alioth.debian.org/ にある Debian Linux カーネルハンドブックの initramfs の章にあります。
341:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ --binary-images hdd \ --mirror-bootstrap http://ftp.ch.debian.org/debian/ \ --mirror-binary http://ftp.ch.debian.org/debian/ \ "${@}"
436:
アーキテクチャによっては、イメージに複数のカーネルをデフォルトで収録することができます。フレーバーは --linux-flavours オプションで選択できます。各フレーバーはデフォルトの短い linux-image に、イメージに収録される実際のカーネルパッケージに依存する各メタパッケージの名前を付加した形式になります。
437:
そうして、デフォルトで amd64 アーキテクチャのイメージは linux-image-amd64 のメタパッケージを収録し、i386 アーキテクチャのイメージは linux-image-586 メタパッケージを収録します。
438:
設定したアーカイブで複数バージョンのカーネルパッケージが利用できる場合、--linux-packages オプションでカーネルパッケージ名の前半部を指定できます。例えば amd64 アーキテクチャのイメージをビルドする際にテスト用に experimental アーカイブを追加すると linux-image-3.18.0-trunk-amd64 カーネルをインストールできます。そのイメージの設定例:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk $ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
自身のカーネルパッケージを配置するための適切で推奨する方法は kernel-handbook の指示に従うことです。パッケージ名のABIとフレーバーの部分を忘れずに適切に変更し、リポジトリに linux の完全なビルドとそれに該当する linux-latest パッケージを収録してください。
443:
該当するメタパッケージ無しでカーネルパッケージをビルドしたい場合は、{カーネルのフレーバー (種類) とバージョン}#kernel-flavour-and-version で説明しているように --linux-packages でパッケージ名の適切な前半部を指定する必要があります。{変更したあるいはサードパーティ製パッケージのインストール}#installing-modified-or-third-party-packages で説明しているように、自身のリポジトリに独自のカーネルパッケージを収録する場合はそのようにするのが最善ですが、別の方法についても説明しています。
618:
live-build は syslinux や (イメージの種類により) その派生物の一部をブートローダとしてデフォルトで利用します。これは要件に合わせて簡単に独自化できます。
619:
全面的なテーマを使うには /usr/share/live/build/bootloaders を config/bootloaders にコピーしてその中のファイルを編集します。サポートしているブートローダ全部の設定変更を望まない場合は、ブートローダの1つ、例えば config/bootloaders/isolinux にある isolinux だけを局所的に地域化したものを提供するのでも、活用方法によりますが十分です。
621:
変更を加えるに至る要因は多々あります。例えば syslinux 派生物ではデフォルトでタイムアウト時間が0に設定されていて、この場合はスプラッシュ画面でキーが押されるまでいつまでも一時停止状態で止まっているということになります。
622:
デフォルトの iso-hybrid イメージのブート時のタイムアウト時間を変更する方法は、デフォルトの isolinux.cfg ファイルを編集して1/10秒単位でタイムアウト時間を指定するだけです。5秒後にブートするように isolinux.cfg を変更する場合は
645:
$ lb config --architectures i386 --linux-flavours 586 \ --debian-installer live $ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● 「Linux 式」で改行します:
851:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ "${@}"
856:
まず、--architectures i386 により必ず amd64 ビルドシステムでほとんどのマシンでの利用に適応する32ビット版をビルドするようにします。次に、相当に古いシステムでのこのイメージの利用を想定しないため --linux-flavours 686-pae を使います。lxde のタスクメタパッケージを選択して最小限のデスクトップを揃えます。最後に、好みのパッケージの初期値として iceweasel と xchat を追加しています。
"Manualul Live Systems" (2015) [ro] Proiectul Live Systems
25:
● chroot: Programul chroot, chroot(8), permite rularea a diferite instante din mediul GNU/Linux pe un singur sistem si in simultan fara a necesita o repornire a sistemului.
115:
● Linux 2.6 or newer.
165:
● Linux kernel image, usually named vmlinuz*
166:
● Initial RAM disk image (initrd): a RAM disk set up for the Linux boot, containing modules possibly needed to mount the System image and some scripts to do it.
168:
● Bootloader: A small piece of code crafted to boot from the chosen medium, possibly presenting a prompt or menu to allow selection of options/configuration. It loads the Linux kernel and its initrd to run with an associated system filesystem. Different solutions can be used, depending on the target medium and format of the filesystem containing the previously mentioned components: isolinux to boot from a CD or DVD in ISO9660 format, syslinux for HDD or USB drive booting from a VFAT partition, extlinux for ext2/3/4 and btrfs partitions, pxelinux for PXE netboot, GRUB for ext2/3/4 partitions, etc.
169:
You can use live-build to build the system image from your specifications, set up a Linux kernel, its initrd, and a bootloader to run them, all in one medium-dependant format (ISO9660 image, disk image, etc.).
175:
The web interface currently makes no provision to prevent the use of invalid combinations of options, and in particular, where changing an option would normally (i.e. using live-build directly) change defaults of other options listed in the web form, the web builder does not change these defaults. Most notably, if you change --architectures from the default i386 to amd64, you must change the corresponding option --linux-flavours from the default 586 to amd64. See the lb_config man page for the version of live-build installed on the web builder for more details. The version number of live-build is listed at the bottom of the web builder page.
230:
In order to make the dkms package work, also the kernel headers for the kernel flavour used in your image need to be installed. Instead of manually listing the correct linux-headers package in above created package list, the selection of the right package can be done automatically by live-build.
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
The generated binary image contains a VFAT partition and the syslinux bootloader, ready to be directly written on a USB device. Once again, using an HDD image is just like using an ISO hybrid one on USB. Follow the instructions in Using an ISO hybrid live image, except use the filename live-image-i386.img instead of live-image-i386.hybrid.iso.
254:
In a network boot, the client runs a small piece of software which usually resides on the EPROM of the Ethernet card. This program sends a DHCP request to get an IP address and information about what to do next. Typically, the next step is getting a higher level bootloader via the TFTP protocol. That could be pxelinux, GRUB, or even boot directly to an operating system like Linux.
255:
For example, if you unpack the generated live-image-i386.netboot.tar archive in the /srv/debian-live directory, you'll find the filesystem image in live/filesystem.squashfs and the kernel, initrd and pxelinux bootloader in tftpboot/.
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server ddns-update-style none; option domain-name "example.org"; option domain-name-servers ns1.example.org, ns2.example.org; default-lease-time 600; max-lease-time 7200; log-facility local7; subnet 192.168.0.0 netmask 255.255.255.0 { range 192.168.0.1 192.168.0.254; filename "pxelinux.0"; next-server 192.168.0.2; option subnet-mask 255.255.255.0; option broadcast-address 192.168.0.255; option routers 192.168.0.1; }
267:
Once the guest computer has downloaded and booted a Linux kernel and loaded its initrd, it will try to mount the Live filesystem image through a NFS server.
273:
Setting up these three services can be a little tricky. You might need some patience to get all of them working together. For more information, see the syslinux wiki at http://www.syslinux.org/wiki/index.php/PXELINUX or the Debian Installer Manual's TFTP Net Booting section at http://d-i.alioth.debian.org/manual/en.i386/ch04s05.html. They might help, as their processes are very similar.
293:
In order to boot a webboot image it is enough to have the components mentioned above, i.e. vmlinuz and initrd.img in a usb stick inside a directory named live/ and install syslinux as bootloader. Then boot from the usb stick and type fetch=URL/PATH/TO/FILE at the boot options. live-boot will retrieve the squashfs file and store it into ram. This way, it is possible to use the downloaded compressed filesystem as a regular live system. For example:
326:
More information on initial ramfs in Debian can be found in the Debian Linux Kernel Handbook at http://kernel-handbook.alioth.debian.org/ in the chapter on initramfs.
341:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ --binary-images hdd \ --mirror-bootstrap http://ftp.ch.debian.org/debian/ \ --mirror-binary http://ftp.ch.debian.org/debian/ \ "${@}"
436:
One or more kernel flavours will be included in your image by default, depending on the architecture. You can choose different flavours via the --linux-flavours option. Each flavour is suffixed to the default stub linux-image to form each metapackage name which in turn depends on an exact kernel package to be included in your image.
437:
Thus by default, an amd64 architecture image will include the linux-image-amd64 flavour metapackage, and an i386 architecture image will include the linux-image-586 metapackage.
438:
When more than one kernel package version is available in your configured archives, you can specify a different kernel package name stub with the --linux-packages option. For example, supposing you are building an amd64 architecture image and add the experimental archive for testing purposes so you can install the linux-image-3.18.0-trunk-amd64 kernel. You would configure that image as follows:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk $ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
The proper and recommended way to deploy your own kernel packages is to follow the instructions in the kernel-handbook. Remember to modify the ABI and flavour suffixes appropriately, then include a complete build of the linux and matching linux-latest packages in your repository.
443:
If you opt to build the kernel packages without the matching metapackages, you need to specify an appropriate --linux-packages stub as discussed in Kernel flavour and version. As we explain in Installing modified or third-party packages, it is best if you include your custom kernel packages in your own repository, though the alternatives discussed in that section work as well.
618:
live-build uses syslinux and some of its derivatives (depending on the image type) as bootloaders by default. They can be easily customized to suit your needs.
619:
In order to use a full theme, copy /usr/share/live/build/bootloaders into config/bootloaders and edit the files in there. If you do not want to bother modifying all supported bootloader configurations, only providing a local customized copy of one of the bootloaders, e.g. isolinux in config/bootloaders/isolinux is enough too, depending on your use case.
621:
There are many possibilities when it comes to making changes. For instance, syslinux derivatives are configured by default with a timeout of 0 (zero) which means that they will pause indefinitely at their splash screen until you press a key.
622:
To modify the boot timeout of a default iso-hybrid image just edit a default isolinux.cfg file specifying the timeout in units of 1/10 seconds. A modified isolinux.cfg to boot after five seconds would be similar to this:
645:
$ lb config --architectures i386 --linux-flavours 586 \ --debian-installer live $ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● Use the “Linux style” of line breaks:
853:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ "${@}"
858:
First, --architectures i386 ensures that on our amd64 build system, we build a 32-bit version suitable for use on most machines. Second, we use --linux-flavours 686-pae because we don't anticipate using this image on much older systems. Third, we have chosen the lxde task metapackage to give us a minimal desktop. And finally, we have added two initial favourite packages: iceweasel and xchat.
"Manual de Live Systems" (2015) [ca] Projecte Live Systems
25:
● chroot: El programa chroot, chroot(8), ens permet executar diferentes instàncies d'un entorn GNU/Linux a la vegada en un sol sistema sense reiniciar.
115:
● Linux 2.6.x o superior.
165:
● Imatge del nucli Linux, generalment s'anomena vmlinuz*
166:
● Imatge del disc RAM inicial (initrd): un disc RAM configurat per a l'arrencada de Linux, que conté els mòduls que possiblement es necessitaran per a muntar la imatge del sistema i algunes seqüències d'ordres per a fer-ho.
168:
● Carregador d'arrencada : Una petita peça de codi dissenyat per a arrencar des del medi triat, possiblement presentant un indicador d'arrencada o un menú per a permetre la selecció d'opcions/configuració. Carrega el nucli de Linux i el seu initrd per a funcionar amb un sistema de fitxers del sistema associat. Es poden utilitzar diverses solucions, en funció del medi de destinació i el format del sistema de fitxers que conté els components esmentats anteriorment: isolinux per a arrencar des de CD o DVD en format ISO9660, syslinux per a una unitat USB o HDD que s'iniciarà des de particions VFAT, extlinux per a particions ext2/3/4 i btrfs, pxelinux per a PXE netboot, GRUB per a particions ext2/3/4, etc.
169:
Es pot utilitzar live-build per a construir la imatge del sistema amb especificacions pròpies, configurar un nucli de Linux, el initrd, i un carregador d'arrencada per a executar-los, tot això en un format depenent del medi (imatge ISO9660, imatge de disc, etc.).
175:
La interfície web actualment no pot prevenir l'ús de combinacions d'opcions no vàlides, i en particular, quan el canvi d'una opció que normalment (és a dir, utilitzant live-build directament) canviaria els valors predeterminats d'altres opcions que figuren en el formulari de la web, el constructor web no canvia aquests valors predeterminats. En particular, si es canvia --architectures del valor per defecte i386 a amd64, s'ha de canviar l'opció corresponent --linux-flavours del valor per defecte 586 a amd64. Veure la pàgina del manual lb_config per a la versió de live-build instal·lada al constructor web per a més detalls. El nombre de la versió de live-build apareix a la part inferior de la pàgina web.
230:
Per tal de fer que el paquet dkms funcioni, s'han d'instal·lar també les capçaleres del nucli per a la variant del nucli de la imatge. En lloc d'enumerar manualment el paquet linux-headers correcte en la llista de paquets creat anteriorment, la selecció del paquet adequat es pot fer automàticament amb live-build.
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
La imatge binària generada conté una partició VFAT i el carregador d'arrencada syslinux, llestos per a ser escrits directament a una memòria USB. Un cop més, donat que l'ús d'una imatge HDD és com utilitzar una imatge ISO híbrida en un USB, seguir les instruccions de Usar una imatge ISO híbrida en viu, però amb el nom de fitxer live-image-i386.img en lloc de live-image-i386.hybrid.iso.
254:
En l'arrencada en xarxa, el client executa una petita peça de programari que normalment es troba a la EPROM de la targeta Ethernet. Aquest programa envia una petició DHCP per a obtenir una adreça IP i la informació sobre què fer a continuació. Per regla general, el següent pas és aconseguir un carregador d'arrencada de més alt nivell a través del protocol TFTP. Podria ser GRUB, pxelinux o fins i tot arrencar directament a un sistema operatiu com Linux.
255:
Per exemple, si es descomprimeix el arxiu live-image-i386.netboot.tar generat al directori /srv/debian-live, es trobarà la imatge del sistema de fitxers a live/filesystem.squashfs i el nucli, initrd i carregador d'arrencada pxelinux a tftpboot/.
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server ddns-update-style none; option domain-name "example.org"; option domain-name-servers ns1.example.org, ns2.example.org; default-lease-time 600; max-lease-time 7200; log-facility local7; subnet 192.168.0.0 netmask 255.255.255.0 { range 192.168.0.1 192.168.0.254; filename "pxelinux.0"; next-server 192.168.0.2; option subnet-mask 255.255.255.0; option broadcast-address 192.168.0.255; option routers 192.168.0.1; }
267:
Un cop l'ordinador ha descarregat, ha arrencat el nucli de Linux i ha carregat el initrd, intentarà muntar la imatge del sistema de fitxers en viu a través d'un servidor NFS.
273:
La configuració d'aquests tres serveis pot ser una mica difícil. És possible que es necessiti una mica de paciència per a aconseguir que tots tres funcionin plegats. Per a obtenir més informació, veure el wiki de syslinux a http://www.syslinux.org/wiki/index.php/PXELINUX o la secció TFTP Net Booting al Manual del Instal·lador de Debian a http://d-i.alioth.debian.org/manual/ca.i386/ch04s05.html. Això pot ajudar, ja que els seus processos són molt similars.
293:
Per a arrencar una imatge webboot és suficient tenir els components esmentats anteriorment, és a dir, vmlinuz i initrd.img en una memòria usb dins d'un directori anomenat live/ i instal·lar syslinux com a gestor d'arrencada. Després, arrencar des de la memòria usb i escriure fetch=URL/RUTA/AL/FITXER a les opcions d'arrencada. live-boot descarregarà l'arxiu squashfs i l'emmagatzemarà en la memòria ram. D'aquesta manera, és possible utilitzar el sistema de fitxers comprimit descarregat com si fos un sistema viu normal. Per exemple:
326:
Més informació sobre ramfs inicial a Debian es pot trobar al Debian Linux Kernel Handbook http://kernel-handbook.alioth.debian.org/ al capítol sobre initramfs.
341:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ --binary-images hdd \ --mirror-bootstrap http://ftp.ch.debian.org/debian/ \ --mirror-binary http://ftp.ch.debian.org/debian/ \ "${@}"
436:
Depenent de l'arquitectura, s'inclouran per defecte en la imatge un o més tipus de nuclis. Es pot triar diferents tipus a través de l'opció --linux-flavours. Cada tipus té un sufix per a l'arrel per defecte linux-image per a formar el nom de cada metapaquet que al seu torn depèn d'un paquet del nucli exacte que s'ha d'incloure en la imatge.
437:
Així, per defecte, una imatge per a l'arquitectura amd64 inclourà el metapaquet linux-image-amd64 i una imatge per a l'arquitectura i386 inclourà el metapaquet linux-image-586.
438:
Quan hi ha més d'una versió del paquet del nucli disponible en els arxius configurats, es pot especificar el nom d'un paquet del nucli amb l'opció --linux-packages. Per exemple, suposem que s'està construint una imatge d'arquitectura amd64 i es vol afegir l'arxiu experimental amb propòsits de fer proves. Perquè es pugui instal·lar el nucli linux-image-3.18.0-trunk-amd64 es podria configurar la imatge de la següent manera:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk $ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
La manera apropiada i recomanable d'implementar els propis paquets del nucli és seguir les instruccions del kernel-handbook. Recordar que s'ha de modificar l'ABI i els sufixos del tipus apropiadament, i a continuació, incloure un conjunt complet dels paquets que corresponen amb linux i linux-latest al repositori.
443:
Si s'opta per construir els paquets del nucli sense els metapaquets a joc, cal especificar una arrel --linux-packages apropiada com s'indica a Tipus i versió del nucli. Com expliquem a Instal·lació de paquets modificats o de tercers, és millor si s'inclouen els paquets del nucli personalitzat en un repositori propi, tot i que les alternatives discutides en aquella secció també funcionen.
618:
live-build utilitza syslinux i alguns dels seus derivats (depenent del tipus d'imatge) com carregadors d'arrencada per defecte. Es poden personalitzar fàcilment per satisfer totes les necessitats.
619:
Per a utilitzar un tema complet, copiar /usr/share/live/build/bootloaders a config/bootloaders i editar els fitxers allí. Si no es vol modificar totes les configuracions dels carregadors d'arrencada disponibles, només cal utilitzar una còpia local personalitzada d'un dels carregadors, per exemple, copiar la configuració d'isolinux a config/bootloaders/isolinux ja és suficient, depenent del cas d'ús.
621:
Hi ha moltes possibilitats a l'hora de fer canvis. Per exemple, els derivats de syslinux estan configurats per defecte amb un temps d'espera de 0 (zero) el que significa que faran una pausa indefinida en la seva pantalla inicial fins que es premi una tecla.
622:
Per a modificar el temps d'espera d'arrencada d'una imatge iso-hybrid es pot editar el fitxer isolinux.cfg especificant el temps d'espera en unitats de segons 1/10. Un fitxer isolinux.cfg modificat per a arrencar després de cinc segons seria semblant a aquest:
645:
$ lb config --architectures i386 --linux-flavours 586 \ --debian-installer live $ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● Utilitzar “l'estil Linux” de salts de línia:
853:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ "${@}"
858:
En primer lloc, amb --architectures i386 s'assegura que al nostre sistema de construcció amd64 podem construir una versió de 32 bits adequada per al seu ús en la majoria de màquines. En segon lloc, fem servir --linux-flavours 686-pae perquè no creiem que utilitzarem aquesta imatge en sistemes molt més vells. En tercer lloc, hem triat la tasca metapaquet lxde per a donar-nos un escriptori mínim. I, finalment, hem afegit dos paquets inicials favorits: iceweasel i xchat.
"Podręcznik Systemów Live" (2015) [pl] Projekt Systemów Live
25:
● chroot: Program chroot, chroot(8), pozwala na uruchomienie różnych instancji środowiska GNU / Linux na jednym systemie bez ponownego uruchomiania go.
115:
● Linux 2.6 lub nowszy.
165:
● Obraz jądra Linuxa, zazwyczaj nazwany vmlinuz*
166:
● Initial RAM disk image (initrd): a RAM disk set up for the Linux boot, containing modules possibly needed to mount the System image and some scripts to do it.
168:
● Bootloader: A small piece of code crafted to boot from the chosen medium, possibly presenting a prompt or menu to allow selection of options/configuration. It loads the Linux kernel and its initrd to run with an associated system filesystem. Different solutions can be used, depending on the target medium and format of the filesystem containing the previously mentioned components: isolinux to boot from a CD or DVD in ISO9660 format, syslinux for HDD or USB drive booting from a VFAT partition, extlinux for ext2/3/4 and btrfs partitions, pxelinux for PXE netboot, GRUB for ext2/3/4 partitions, etc.
169:
You can use live-build to build the system image from your specifications, set up a Linux kernel, its initrd, and a bootloader to run them, all in one medium-dependant format (ISO9660 image, disk image, etc.).
175:
The web interface currently makes no provision to prevent the use of invalid combinations of options, and in particular, where changing an option would normally (i.e. using live-build directly) change defaults of other options listed in the web form, the web builder does not change these defaults. Most notably, if you change --architectures from the default i386 to amd64, you must change the corresponding option --linux-flavours from the default 586 to amd64. See the lb_config man page for the version of live-build installed on the web builder for more details. The version number of live-build is listed at the bottom of the web builder page.
230:
In order to make the dkms package work, also the kernel headers for the kernel flavour used in your image need to be installed. Instead of manually listing the correct linux-headers package in above created package list, the selection of the right package can be done automatically by live-build.
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
The generated binary image contains a VFAT partition and the syslinux bootloader, ready to be directly written on a USB device. Once again, using an HDD image is just like using an ISO hybrid one on USB. Follow the instructions in Using an ISO hybrid live image, except use the filename live-image-i386.img instead of live-image-i386.hybrid.iso.
254:
In a network boot, the client runs a small piece of software which usually resides on the EPROM of the Ethernet card. This program sends a DHCP request to get an IP address and information about what to do next. Typically, the next step is getting a higher level bootloader via the TFTP protocol. That could be pxelinux, GRUB, or even boot directly to an operating system like Linux.
255:
For example, if you unpack the generated live-image-i386.netboot.tar archive in the /srv/debian-live directory, you'll find the filesystem image in live/filesystem.squashfs and the kernel, initrd and pxelinux bootloader in tftpboot/.
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server ddns-update-style none; option domain-name "example.org"; option domain-name-servers ns1.example.org, ns2.example.org; default-lease-time 600; max-lease-time 7200; log-facility local7; subnet 192.168.0.0 netmask 255.255.255.0 { range 192.168.0.1 192.168.0.254; filename "pxelinux.0"; next-server 192.168.0.2; option subnet-mask 255.255.255.0; option broadcast-address 192.168.0.255; option routers 192.168.0.1; }
267:
Once the guest computer has downloaded and booted a Linux kernel and loaded its initrd, it will try to mount the Live filesystem image through a NFS server.
273:
Setting up these three services can be a little tricky. You might need some patience to get all of them working together. For more information, see the syslinux wiki at http://www.syslinux.org/wiki/index.php/PXELINUX or the Debian Installer Manual's TFTP Net Booting section at http://d-i.alioth.debian.org/manual/en.i386/ch04s05.html. They might help, as their processes are very similar.
293:
In order to boot a webboot image it is enough to have the components mentioned above, i.e. vmlinuz and initrd.img in a usb stick inside a directory named live/ and install syslinux as bootloader. Then boot from the usb stick and type fetch=URL/PATH/TO/FILE at the boot options. live-boot will retrieve the squashfs file and store it into ram. This way, it is possible to use the downloaded compressed filesystem as a regular live system. For example:
326:
Więcej informacji na temat początkowych plików ramfs w Debianie można znaleźć w Podręczniku Debiana Linux Kernel na http://kernel-handbook.alioth.debian.org/ w rozdziale initramfs.
341:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ --binary-images hdd \ --mirror-bootstrap http://ftp.ch.debian.org/debian/ \ --mirror-binary http://ftp.ch.debian.org/debian/ \ "${@}"
436:
One or more kernel flavours will be included in your image by default, depending on the architecture. You can choose different flavours via the --linux-flavours option. Each flavour is suffixed to the default stub linux-image to form each metapackage name which in turn depends on an exact kernel package to be included in your image.
437:
Thus by default, an amd64 architecture image will include the linux-image-amd64 flavour metapackage, and an i386 architecture image will include the linux-image-586 metapackage.
438:
When more than one kernel package version is available in your configured archives, you can specify a different kernel package name stub with the --linux-packages option. For example, supposing you are building an amd64 architecture image and add the experimental archive for testing purposes so you can install the linux-image-3.18.0-trunk-amd64 kernel. You would configure that image as follows:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk $ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
The proper and recommended way to deploy your own kernel packages is to follow the instructions in the kernel-handbook. Remember to modify the ABI and flavour suffixes appropriately, then include a complete build of the linux and matching linux-latest packages in your repository.
443:
If you opt to build the kernel packages without the matching metapackages, you need to specify an appropriate --linux-packages stub as discussed in Kernel flavour and version. As we explain in Installing modified or third-party packages, it is best if you include your custom kernel packages in your own repository, though the alternatives discussed in that section work as well.
618:
live-build używa syslinux i niektórych jego pochodnych (w zależności od typu obrazu) w domyślnym programie ładującym (ang. bootloader). Można je łatwo dostosować do własnych potrzeb.
619:
W celu wykorzystania pełnego motywu, skopiuj /usr/share/live/build/bootloaders do config/bootloaders i edytuj tam te pliki. Jeśli nie chcesz się martwić modyfikacją wszystkich obsługiwanych konfiguracji programu ładującego (ang. bootloader), wystarczy też zapewnić lokalną, zmodyfikowaną kopię tylko jednego z nich, np. isolinux w config/bootloaders/isolinux, w zależności od przypadku użycia.
621:
Istnieje wiele możliwości, jeśli chodzi o wprowadzanie zmian. Na przykład, pochodne syslinux mają domyślnie skonfigurowany limit czasowy (ang. timeout) na 0 (zero), co oznacza, że wstrzymają się one na czas nieokreślony na w ich ekranie powitalnym aż do naciśnięcia klawisza.
622:
Aby zmienić limit czasowy podczas rozruchu w domyślnym obrazie iso-hybrid wystarczy zmienić domyślny plik isolinux.cfg określając limit czasu (ang. timeout) w jednostkach 1/10 sekundy. Zmodyfikowany isolinux.cfg uruchamiający rozruch po pięciu sekundach byłby podobny do tego:
645:
$ lb config --architectures i386 --linux-flavours 586 \ --debian-installer live $ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● Używaj zakończeń linii “typowych dla Linuxa”:
853:
#!/bin/sh lb config noauto \ --architectures i386 \ --linux-flavours 686-pae \ "${@}"
858:
Po pierwsze, --architectures i386 zapewnia, że w naszym systemie kompilacji amd64, możemy zbudować 32-bitową wersję odpowiednią do stosowania na większości maszyn. Po drugie, możemy użyć --linux-flavours 686-pae bo nie przewidujemy używania tego obrazu na dużo starszych systemach. Po trzecie, wybraliśmy metapakiet zadania lxde, który daje nam minimalny pulpit. I w końcu, dodaliśmy dwa wstępne ulubione pakiety: iceweasel i xchat.
"Manuel Live Systems" (2015) [fr] Projet Live Systems
25:
● chroot: The chroot program, chroot(8), enables us to run multiple concurrent instances of the GNU/Linux environment on a single system without rebooting.
115:
● Linux 2.6.x or newer.
165:
● Linux kernel image, usually named vmlinuz*
166:
● Initial RAM disk image (initrd): a RAM disk set up for the Linux boot, containing modules possibly needed to mount the system image and some scripts to do it.
168:
● Bootloader: a small piece of code crafted to boot from the chosen medium, possibly presenting a prompt or menu to allow selection of options/configuration. It loads the Linux kernel and its initrd to run with an associated system filesystem. Different solutions can be used, depending on the target medium and the format of the filesystem containing the previously mentioned components: isolinux to boot from a CD or DVD in ISO9660 format, syslinux to boot a hard disk or USB drive from a VFAT partition, extlinux for ext2/3/4 and btrfs partitions, pxelinux for PXE netboot, GRUB for ext2/3/4 partitions, etc.
169:
You can use live-build to build the system image from your specifications, set up a Linux kernel, its initrd and a bootloader to run them, all in one medium-dependent format (ISO9660 image, disk image, etc.).
175:
The web interface currently cannot prevent the use of invalid combinations of options; in particular, where changing an option would (i.e. when using live-build directly) change the default values of other options listed in the web form, the web builder does not change those defaults. Most notably, if you change --architectures from the default i386 to amd64, you must change the corresponding --linux-flavours option from the default 586 to amd64. See the lb_config man page for the version of live-build installed on the web builder for more details. The version number of live-build is listed at the bottom of the web page.
230:
For the dkms package to work, the linux-headers package for the kernel flavour used in your image also needs to be installed. Instead of manually listing the correct linux-headers package in the package list created above, live-build can do this automatically.
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
The generated binary image contains a VFAT partition and the syslinux bootloader, ready to be written directly to a USB stick. Once again, since using an HDD image is just like using an ISO hybrid image on USB, follow the instructions in Using an ISO hybrid live image, but with the filename live-image-i386.img instead of live-image-i386.hybrid.iso.
254:
In a network boot, the client runs a small piece of software that usually resides on the EPROM of the Ethernet card. This program sends a DHCP request to get an IP address and information about what to do next. Typically, the next step is getting a higher-level bootloader via the TFTP protocol. That could be pxelinux, GRUB, or even boot directly to an operating system like Linux.
255:
For example, if you unpack the generated live-image-i386.netboot.tar archive in the /srv/debian-live directory, you will find the filesystem image in live/filesystem.squashfs and the kernel, initrd and pxelinux bootloader in tftpboot/.
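That unpacking step, sketched with the paths given in the paragraph above:

$ mkdir -p /srv/debian-live
$ tar -xvf live-image-i386.netboot.tar -C /srv/debian-live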
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server

ddns-update-style none;
option domain-name "example.org";
option domain-name-servers ns1.example.org, ns2.example.org;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.1 192.168.0.254;
  filename "pxelinux.0";
  next-server 192.168.0.2;
  option subnet-mask 255.255.255.0;
  option broadcast-address 192.168.0.255;
  option routers 192.168.0.1;
}
267:
Once the host computer has downloaded and booted a Linux kernel and loaded its initrd, it will try to mount the live filesystem image through an NFS server.
273:
Setting up these three services can be a little tricky. You might need some patience to get all of them working together. For more information, see the syslinux wiki at http://www.syslinux.org/wiki/index.php/PXELINUX or the Debian Installer Manual's TFTP Net Booting section at http://d-i.alioth.debian.org/manual/fr.i386/ch04s05.html. They might help, as their processes are very similar.
293:
To boot a webboot image it is enough to have the components mentioned above, i.e. vmlinuz and initrd.img, on a USB stick in a directory named live/, and to install syslinux as the bootloader. Then boot from the USB stick and type fetch=URL/PATH/TO/FILE at the boot options. live-boot will download the squashfs file and store it in RAM. This way, it is possible to use the downloaded compressed filesystem as a regular live system. For example:
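The boot-option example itself is not part of this extract. As an illustrative sketch (the host and path in the URL are hypothetical), the option appended at the boot prompt could look like:

fetch=http://192.168.1.10/live/filesystem.squashfs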
326:
More information about the initial ramfs in Debian can be found in the Debian Linux Kernel Handbook at http://kernel-handbook.alioth.debian.org/ in the chapter on initramfs.
341:
#!/bin/sh

lb config noauto \
    --architectures i386 \
    --linux-flavours 686-pae \
    --binary-images hdd \
    --mirror-bootstrap http://ftp.ch.debian.org/debian/ \
    --mirror-binary http://ftp.ch.debian.org/debian/ \
    "${@}"
436:
One or more kernel flavours will be included in your image by default, depending on the architecture. You can choose different flavours via the --linux-flavours option. Each flavour is suffixed to the default stub linux-image to form each metapackage name, which in turn depends on an exact kernel package to be included in your image.
437:
Thus, by default, an amd64 architecture image will include the linux-image-amd64 flavour metapackage, and an i386 architecture image will include the linux-image-586 metapackage.
438:
When more than one kernel package version is available in your configured archives, you can specify a different kernel package name stub with the --linux-packages option. For example, suppose you are building an amd64 architecture image and add the experimental archive for testing purposes so that you can install the linux-image-3.18.0-trunk-amd64 kernel. You would configure that image as follows:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk
$ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
The proper and recommended way to deploy your own kernel packages is to follow the instructions in the kernel-handbook. Remember to modify the ABI and flavour suffixes appropriately, then include a complete build of the linux and matching linux-latest packages in your repository.
443:
If you opt to build the kernel packages without the matching metapackages, you need to specify an appropriate --linux-packages stub as discussed in Kernel flavour and version. As we explain in Installing modified or third-party packages, it is best if you include your custom kernel packages in your own repository, although the alternatives discussed in that section work as well.
618:
live-build uses syslinux and some of its derivatives (depending on the image type) as the default bootloaders. They can be easily customized to suit your needs.
619:
In order to use a full theme, copy /usr/share/live/build/bootloaders into config/bootloaders and edit the files in there. If you do not want to bother modifying all supported bootloader configurations, providing only a local customized copy of one of them, e.g. copying the isolinux configuration to config/bootloaders/isolinux, is enough, depending on your use case.
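A possible way to perform that copy, sketched again under the assumption that live-build installs its bootloader configurations under /usr/share/live/build/bootloaders:

$ cp -r /usr/share/live/build/bootloaders config/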
621:
There are many possibilities when it comes to making changes. For instance, the syslinux derivatives are configured by default with a timeout of 0 (zero), which means that they will pause indefinitely at their splash screen until you press a key.
622:
To modify the boot timeout of a default iso-hybrid image, edit the isolinux.cfg file, specifying the timeout in units of 1/10 second. A modified isolinux.cfg that boots after five seconds would look similar to this:
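Again, the modified isolinux.cfg is not included in this extract; an illustrative sketch (timeout is in 1/10 second, so 50 means five seconds) might be:

include menu.cfg
default vesamenu.c32
prompt 0
timeout 50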
645:
$ lb config --architectures i386 --linux-flavours 586 \
    --debian-installer live
$ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● Use “Linux-style” line endings:
853:
#!/bin/sh

lb config noauto \
    --architectures i386 \
    --linux-flavours 686-pae \
    "${@}"
858:
First of all, --architectures i386 ensures that on our amd64 build system we build a 32-bit version that can be used on most machines. Second, we use --linux-flavours 686-pae because we do not anticipate using this image on very old systems. Third, we have chosen the lxde task metapackage to give us a minimal desktop. And finally, we have added two initial favourite packages: iceweasel and xchat.
"Manual de Live Systems" (2015) [es] Proyecto Live Systems
25:
● chroot: The chroot program, chroot(8), allows different instances of a GNU/Linux environment to run simultaneously on a single system without rebooting.
115:
● Linux 2.6.x or newer.
165:
● Linux kernel image, usually named vmlinuz*
166:
● Initial RAM disk image (initrd): a RAM disk set up for the Linux boot, including the modules possibly needed to mount the system image and some scripts to get it going.
168:
● Bootloader: a small piece of code crafted to boot from the chosen medium, possibly presenting a menu or boot prompt to allow selection of options/configuration. It loads the Linux kernel and its initrd to run with an associated filesystem. Different solutions can be used, depending on the target storage medium and the format of the filesystem containing the previously mentioned components: isolinux to boot from a CD or DVD in ISO9660 format, syslinux to boot from a hard disk or USB drive from a VFAT partition, extlinux for ext2/3/4 and btrfs partitions, pxelinux for PXE netboot, GRUB for ext2/3/4 partitions, etc.
169:
You can use live-build to create the system image from your specifications, include a Linux kernel, its initrd and a bootloader to run them, all in one medium-dependent format (ISO9660 image, disk image, etc.).
175:
The web interface currently cannot prevent the use of invalid combinations of options, and in particular, where changing an option would normally (i.e. using live-build directly) change the defaults of other options listed in the web form, the web builder does not change these defaults. Most notably, if you change --architectures from the default i386 to amd64, you must change the corresponding --linux-flavours option from the default 586 to amd64. See the lb_config man page for the version of live-build installed on the web builder for more details. The version number of live-build is listed at the bottom of the web builder's page.
230:
For the dkms package to work, the kernel-headers for the kernel flavour used also need to be installed. Instead of manually listing the corresponding linux-headers package in the package list created above, live-build can select it automatically.
231:
$ lb config --linux-packages "linux-image linux-headers"
241:
The generated binary image contains a VFAT partition and the syslinux bootloader, ready to be copied directly to a USB device. Again, since using an HDD image is just like using an ISO hybrid image on USB, follow the instructions in Using an ISO hybrid live image, but with the filename live-image-i386.img instead of live-image-i386.hybrid.iso.
254:
In a network boot, the client runs a small piece of software that usually resides on the EPROM of the Ethernet card. This program sends a DHCP request to get an IP address and information about what to do next. Typically, the next step is getting a higher-level bootloader via the TFTP protocol. That could be PXELINUX, GRUB, or even boot directly to an operating system like Linux.
255:
For example, if you unpack the generated live-image-i386.netboot.tar archive in the /srv/debian-live directory, you will find the filesystem image in live/filesystem.squashfs and the kernel, initrd and pxelinux bootloader in tftpboot/.
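Sketched again with the same paths, that unpacking step could be:

$ mkdir -p /srv/debian-live
$ tar -xvf live-image-i386.netboot.tar -C /srv/debian-live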
260:
# /etc/dhcp/dhcpd.conf - configuration file for isc-dhcp-server

ddns-update-style none;
option domain-name "example.org";
option domain-name-servers ns1.example.org, ns2.example.org;
default-lease-time 600;
max-lease-time 7200;
log-facility local7;

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.1 192.168.0.254;
  filename "pxelinux.0";
  next-server 192.168.0.2;
  option subnet-mask 255.255.255.0;
  option broadcast-address 192.168.0.255;
  option routers 192.168.0.1;
}
267:
Once the client computer has downloaded and booted the Linux kernel along with its initrd, it will try to mount the live image filesystem through an NFS server.
273:
Setting up these three services can be a little tricky. You will need some patience to get all of them working together. For more information, see the syslinux wiki at http://www.syslinux.org/wiki/index.php/PXELINUX or the TFTP Net Booting section of the Debian Installer Manual at http://d-i.alioth.debian.org/manual/es.i386/ch04s05.html. This can help, as their processes are very similar.
293:
To boot a webboot image it is enough to copy the components mentioned above, i.e. vmlinuz and initrd.img, to a USB key inside a directory named live/ and to install syslinux as the bootloader. Then boot from the USB key and type fetch=URL/PATH/TO/FILE at the boot options. live-boot will download the squashfs file and store it in RAM. This way, it is possible to use the downloaded compressed filesystem as a regular live system. For example:
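As before, the example itself is not included in this extract; an illustrative sketch (the URL is hypothetical) of the boot option would be:

fetch=http://192.168.1.10/live/filesystem.squashfs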
326:
More information about the initial ramfs in Debian can be found in the Debian Linux Kernel Handbook at http://kernel-handbook.alioth.debian.org/ specifically in the chapter on initramfs.
341:
#!/bin/sh

lb config noauto \
    --architectures i386 \
    --linux-flavours 686-pae \
    --binary-images hdd \
    --mirror-bootstrap http://ftp.ch.debian.org/debian/ \
    --mirror-binary http://ftp.ch.debian.org/debian/ \
    "${@}"
436:
Depending on the architecture, one or more kernel flavours are included in images by default. You can choose among different flavours using the --linux-flavours option. Each flavour is suffixed to the default stub linux-image to form each metapackage name, which in turn depends on the exact kernel package to be included in the image.
437:
Thus, by default, an amd64 architecture image will include the linux-image-amd64 metapackage and an i386 architecture image will include the linux-image-586 metapackage.
438:
When more than one different kernel package version is available in your configured archives, you can specify a different kernel package name with the --linux-packages option. For example, suppose you are building an amd64 architecture image and want to add the experimental archive for testing purposes so that you can install the linux-image-3.18.0-trunk-amd64 kernel. You would configure the image as follows:
439:
$ lb config --linux-packages linux-image-3.18.0-trunk
$ echo "deb http://ftp.debian.org/debian/ experimental main" > config/archives/experimental.list.chroot
442:
The proper and recommended way to deploy your own kernel packages is to follow the instructions in the kernel-handbook. Remember to modify the ABI and flavour suffixes and to include in your repository a complete build of the kernel packages matching the linux and linux-latest packages.
443:
If you opt to build the kernel packages without the matching metapackages, you need to specify an appropriate --linux-packages stub as discussed in Kernel flavour and version. As explained in Installing modified or third-party packages, it is best if you include your custom kernel packages in your own repository, although the alternatives discussed in that section also work.
618:
live-build uses syslinux and some of its derivatives (depending on the image type) as the default bootloaders. They can easily be customized to suit your needs.
619:
In order to use a full theme, copy /usr/share/live/build/bootloaders into config/bootloaders and edit the files there. If you do not want to modify all the available bootloader configurations, providing only a local customized copy of one of them, e.g. copying the isolinux configuration to config/bootloaders/isolinux, is enough, depending on your use case.
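Sketched once more, assuming live-build's default installation path for its bootloader configurations:

$ mkdir -p config/bootloaders
$ cp -r /usr/share/live/build/bootloaders/isolinux config/bootloaders/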
621:
There are many possibilities when it comes to making changes. For instance, the syslinux derivatives are configured by default with a timeout of 0 (zero), which means that they will pause indefinitely at their splash screen until you press a key.
622:
To modify the boot timeout of a default iso-hybrid image, you can edit the isolinux.cfg file, specifying the timeout in units of 1/10 second. A modified isolinux.cfg that boots after five seconds would look like this:
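As before, the isolinux.cfg itself is not part of this extract; a hedged sketch (50 in 1/10 second units, i.e. five seconds) could be:

include menu.cfg
default vesamenu.c32
prompt 0
timeout 50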
645:
$ lb config --architectures i386 --linux-flavours 586 \
    --debian-installer live
$ echo debian-installer-launcher >> config/package-lists/my.list.chroot
749:
● Use “Linux-style” line endings:
853:
#!/bin/sh

lb config noauto \
    --architectures i386 \
    --linux-flavours 686-pae \
    "${@}"
858:
First of all, --architectures i386 ensures that on an amd64 build system a 32-bit version is built, suitable for use on most machines. Second, --linux-flavours 686-pae is used because this image is not expected to be used on much older systems. Third, the lxde task metapackage is chosen to provide a minimal desktop. And finally, two initial favourite packages are added: iceweasel and xchat.
"The Cathedral and the Bazaar" (2002) [en] RAYMOND, Eric S.
3:
Linux is subversive. Who would have thought even five years ago (1991) that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet?
4:
Certainly not I. By the time Linux swam onto my radar screen in early 1993, I had already been involved in Unix and open-source development for ten years. I was one of the first GNU contributors in the mid-1980s. I had released a good deal of open-source software onto the net, developing or co-developing several programs (nethack, Emacs's VC and GUD modes, xlife, and others) that are still in wide use today. I thought I knew how it was done.
5:
Linux overturned much of what I thought I knew. I had been preaching the Unix gospel of small tools, rapid prototyping and evolutionary programming for years. But I also believed there was a certain critical complexity above which a more centralized, a priori approach was required. I believed that the most important software (operating systems and really large tools like the Emacs programming editor) needed to be built like cathedrals, carefully crafted by individual wizards or small bands of mages working in splendid isolation, with no beta to be released before its time.
6:
Linus Torvalds's style of development—release early and often, delegate everything you can, be open to the point of promiscuity—came as a surprise. No quiet, reverent cathedral-building here—rather, the Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who'd take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.
7:
The fact that this bazaar style seemed to work, and work well, came as a distinct shock. As I learned my way around, I worked hard not just at individual projects, but also at trying to understand why the Linux world not only didn't fly apart in confusion but seemed to go from strength to strength at a speed barely imaginable to cathedral-builders.
9:
This is the story of that project. I'll use it to propose some aphorisms about effective open-source development. Not all of these are things I first learned in the Linux world, but we'll see how the Linux world gives them particular point. If I'm correct, they'll help you understand exactly what it is that makes the Linux community such a fountain of good software—and, perhaps, they will help you become more productive yourself.
18:
Perhaps this should have been obvious (it's long been proverbial that “Necessity is the mother of invention”) but too often software developers spend their days grinding away for pay at programs they neither need nor love. But not in the Linux world—which may explain why the average quality of software originated in the Linux community is so high.
22:
Linus Torvalds, for example, didn't actually try to write Linux from scratch. Instead, he started by reusing code and ideas from Minix, a tiny Unix-like operating system for PC clones. Eventually all the Minix code went away or was completely rewritten—but while it was there, it provided scaffolding for the infant that would eventually become Linux.
24:
The source-sharing tradition of the Unix world has always been friendly to code reuse (this is why the GNU project chose Unix as a base OS, in spite of serious reservations about the OS itself). The Linux world has taken this tradition nearly to its technological limit; it has terabytes of open sources generally available. So spending time looking for someone else's almost-good-enough is more likely to give you good results in the Linux world than anywhere else.
29:
But I had a more theoretical reason to think switching might be as good an idea as well, something I learned long before Linux.
42:
Another strength of the Unix tradition, one that Linux pushes to a happy extreme, is that a lot of users are hackers too. Because source code is available, they can be effective hackers. This can be tremendously useful for shortening debugging time. Given a bit of encouragement, your users will diagnose problems, suggest fixes, and help improve the code far more quickly than you could unaided.
45:
In fact, I think Linus's cleverest and most consequential hack was not the construction of the Linux kernel itself, but rather his invention of the Linux development model. When I expressed this opinion in his presence once, he smiled and quietly repeated something he has often said: “I'm basically a very lazy person who likes to get credit for things other people actually do.” Lazy like a fox. Or, as Robert Heinlein famously wrote of one of his characters, too lazy to fail.
46:
In retrospect, one precedent for the methods and success of Linux can be seen in the development of the GNU Emacs Lisp library and Lisp code archives. In contrast to the cathedral-building style of the Emacs C core and most other GNU tools, the evolution of the Lisp code pool was fluid and very user-driven. Ideas and prototype modes were often rewritten three or four times before reaching a stable final form. And loosely-coupled collaborations enabled by the Internet, a la Linux, were frequent.
47:
Indeed, my own most successful single hack previous to fetchmail was probably Emacs VC (version control) mode, a Linux-like collaboration by email with three other people, only one of whom (Richard Stallman, the author of Emacs and founder of the Free Software Foundation) I have met to this day. It was a front-end for SCCS, RCS and later CVS from within Emacs that offered “one-touch” version control operations. It evolved from a tiny, crude sccs.el mode somebody else had written. And the development of VC succeeded because, unlike Emacs itself, Emacs Lisp code could go through release/test/improve generations very quickly.
50:
Early and frequent releases are a critical part of the Linux development model. Most developers (including me) used to believe this was bad policy for larger than trivial projects, because early versions are almost by definition buggy versions and you don't want to wear out the patience of your users.
52:
The most important of these, the Ohio State Emacs Lisp archive, anticipated the spirit and many of the features of today's big Linux archives. But few of us really thought very hard about what we were doing, or about what the very existence of that archive suggested about problems in the FSF's cathedral-building development model. I made one serious attempt around 1992 to get a lot of the Ohio code formally merged into the official Emacs Lisp library. I ran into political trouble and was largely unsuccessful.
53:
But by a year later, as Linux became widely visible, it was clear that something different and much healthier was going on there. Linus's open development policy was the very opposite of cathedral-building. Linux's Internet archives were burgeoning, multiple distributions were being floated. And all of this was driven by an unheard-of frequency of core system releases.
58:
I didn't think so. Granted, Linus is a damn fine hacker. How many of us could engineer an entire production-quality operating system kernel from scratch? But Linux didn't represent any awesome conceptual leap forward. Linus is not (or at least, not yet) an innovative genius of design in the way that, say, Richard Stallman or James Gosling (of NeWS and Java) are. Rather, Linus seems to me to be a genius of engineering and implementation, with a sixth sense for avoiding bugs and development dead-ends and a true knack for finding the minimum-effort path from point A to point B. Indeed, the whole design of Linux breathes this quality and mirrors Linus's essentially conservative and simplifying design approach.
67:
And that's it. That's enough. If “Linus's Law” is false, then any system as complex as the Linux kernel, being hacked over by as many hands as that kernel was, should at some point have collapsed under the weight of unforeseen bad interactions and undiscovered “deep” bugs. If it's true, on the other hand, it is sufficient to explain Linux's relative lack of bugginess and its continuous uptimes spanning months or even years.
69:
One special feature of the Linux situation that clearly helps along the Delphi effect is the fact that the contributors for any given project are self-selected. An early respondent pointed out that contributions are received not from a random sample, but from people who are interested enough to use the software, learn about how it works, attempt to find solutions to problems they encounter, and actually produce an apparently reasonable fix. Anyone who passes all these filters is highly likely to have something useful to contribute.
71:
In practice, the theoretical loss of efficiency due to duplication of work by debuggers almost never seems to be an issue in the Linux world. One effect of a “release early and often” policy is to minimize such duplication by propagating fed-back fixes quickly [JH].
75:
Linus coppers his bets, too. In case there are serious bugs, Linux kernel versions are numbered in such a way that potential users can make a choice either to run the last version designated “stable” or to ride the cutting edge and risk bugs in order to get new features. This tactic is not yet systematically imitated by most Linux hackers, but perhaps it should be; the fact that either choice is available makes both more attractive. [HBS]
170:
Linux and fetchmail both went public with strong, attractive basic designs. Many people thinking about the bazaar model as I have presented it have correctly considered this critical, then jumped from that to the conclusion that a high degree of design intuition and cleverness in the project leader is indispensable.
171:
But Linus got his design from Unix. I got mine initially from the ancestral popclient (though it would later change a great deal, much more proportionately speaking than has Linux). So does the leader/coordinator for a bazaar-style effort really have to have exceptional design talent, or can he get by through leveraging the design talent of others?
173:
Both the Linux and fetchmail projects show evidence of this. Linus, while not (as previously discussed) a spectacularly original designer, has displayed a powerful knack for recognizing good design and integrating it into the Linux kernel. And I have already described how the single most powerful design idea in fetchmail (SMTP forwarding) came from somebody else.
176:
So I believe the fetchmail project succeeded partly because I restrained my tendency to be clever; this argues (at least) against design originality being essential for successful bazaar projects. And consider Linux. Suppose Linus Torvalds had been trying to pull off fundamental innovations in operating system design during the development; does it seem at all likely that the resulting kernel would be as stable and successful as what we have?
184:
So it was with Carl Harris and the ancestral popclient, and so with me and fetchmail. But this has been understood for a long time. The interesting point, the point that the histories of Linux and fetchmail seem to demand we focus on, is the next stage—the evolution of software in the presence of a large and active community of users and co-developers.
185:
In The Mythical Man-Month, Fred Brooks observed that programmer time is not fungible; adding developers to a late software project makes it later. As we've seen previously, he argued that the complexity and communication costs of a project rise with the square of the number of developers, while work done only rises linearly. Brooks's Law has been widely regarded as a truism. But we've examined in this essay a number of ways in which the process of open-source development falsifies the assumptions behind it—and, empirically, if Brooks's Law were the whole picture Linux would be impossible.
188:
The bazaar method, by harnessing the full power of the “egoless programming” effect, strongly mitigates the effect of Brooks's Law. The principle behind Brooks's Law is not repealed, but given a large developer population and cheap communications its effects can be swamped by competing nonlinearities that are not otherwise visible. This resembles the relationship between Newtonian and Einsteinian physics—the older system is still valid at low energies, but if you push mass and velocity high enough you get surprises like nuclear explosions or Linux.
189:
The history of Unix should have prepared us for what we're learning from Linux (and what I've verified experimentally on a smaller scale by deliberately copying Linus's methods [EGCS]). That is, while coding remains an essentially solitary activity, the really great hacks come from harnessing the attention and brainpower of entire communities. The developer who uses only his or her own brain in a closed project is going to fall behind the developer who knows how to create an open, evolutionary context in which feedback exploring the design space, code contributions, bug-spotting, and other improvements come from hundreds (perhaps thousands) of people.
192:
Linux was the first project for which a conscious and successful effort to use the entire world as its talent pool was made. I don't think it's a coincidence that the gestation period of Linux coincided with the birth of the World Wide Web, and that Linux left its infancy during the same period in 1993–1994 that saw the takeoff of the ISP industry and the explosion of mainstream interest in the Internet. Linus was the first person who learned how to play by the new rules that pervasive Internet access made possible.
193:
While cheap Internet was a necessary condition for the Linux model to evolve, I think it was not by itself a sufficient condition. Another vital factor was the development of a leadership style and set of cooperative customs that could allow developers to attract co-developers and get maximum leverage out of the medium.
196:
The “severe effort of many converging wills” is precisely what a project like Linux requires—and the “principle of command” is effectively impossible to apply among volunteers in the anarchist's paradise we call the Internet. To operate and compete effectively, hackers who want to lead collaborative projects have to learn how to recruit and energize effective communities of interest in the mode vaguely suggested by Kropotkin's “principle of understanding”. They must learn to use Linus's Law.[SP]
197:
Earlier I referred to the “Delphi effect” as a possible explanation for Linus's Law. But more powerful analogies to adaptive systems in biology and economics also irresistibly suggest themselves. The Linux world behaves in many respects like a free market or an ecology, a collection of selfish agents attempting to maximize utility which in the process produces a self-correcting spontaneous order more elaborate and efficient than any amount of central planning could have achieved. Here, then, is the place to seek the “principle of understanding”.
198:
The “utility function” Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers. (One may call their motivation “altruistic”, but this ignores the fact that altruism is itself a form of ego satisfaction for the altruist). Voluntary cultures that work this way are not actually uncommon; one other in which I have long participated is science fiction fandom, which unlike hackerdom has long explicitly recognized “egoboo” (ego-boosting, or the enhancement of one's reputation among other fans) as the basic drive behind volunteer activity.
199:
Linus, by successfully positioning himself as the gatekeeper of a project in which the development is mostly done by others, and nurturing interest in the project until it became self-sustaining, has shown an acute grasp of Kropotkin's “principle of shared understanding”. This quasi-economic view of the Linux world enables us to see how that understanding is applied.
201:
Many people (especially those who politically distrust free markets) would expect a culture of self-directed egoists to be fragmented, territorial, wasteful, secretive, and hostile. But this expectation is clearly falsified by (to give just one example) the stunning variety, quality, and depth of Linux documentation. It is a hallowed given that programmers hate documenting; how is it, then, that Linux hackers generate so much documentation? Evidently Linux's free market in egoboo works better to produce virtuous, other-directed behavior than the massively-funded documentation shops of commercial software producers.
202:
Both the fetchmail and Linux kernel projects show that by properly rewarding the egos of many other hackers, a strong developer/coordinator can use the Internet to capture the benefits of having lots of co-developers without having a project collapse into a chaotic mess. So to Brooks's Law I counter-propose the following:
205:
Perhaps this is not only the future of open-source software. No closed-source developer can match the pool of talent the Linux community can bring to bear on a problem. Very few could afford even to hire the more than 200 (1999: 600, 2000: 800) people who have contributed to fetchmail!
213:
This suggests a reason for questioning the advantages of conventionally-managed software development that is independent of the rest of the arguments over cathedral vs. bazaar mode. If it's possible for GNU Emacs to express a consistent architectural vision over 15 years, or for an operating system like Linux to do the same over 8 years of rapidly changing hardware and platform technology; and if (as is indeed the case) there have been many well-architected open-source projects of more than 5 years duration -- then we are entitled to wonder what, if anything, the tremendous overhead of conventionally-managed development is actually buying us.
214:
Whatever it is, it certainly doesn't include reliable execution by deadline, or on budget, or to all features of the specification; it's a rare `managed' project that meets even one of these goals, let alone all three. It also does not appear to be the ability to adapt to changes in technology and economic context during the project lifetime; the open-source community has proven far more effective on that score (as one can readily verify, for example, by comparing the 30-year history of the Internet with the short half-lives of proprietary networking technologies—or the cost of the 16-bit to 32-bit transition in Microsoft Windows with the nearly effortless upward migration of Linux during the same period, not only along the Intel line of development but to more than a dozen other hardware platforms, including the 64-bit Alpha as well).
243:
Relating to your own work process with fear and loathing (even in the displaced, ironic way suggested by hanging up Dilbert cartoons) should therefore be regarded in itself as a sign that the process has failed. Joy, humor, and playfulness are indeed assets; it was not mainly for the alliteration that I wrote of “happy hordes” above, and it is no mere joke that the Linux mascot is a cuddly, neotenous penguin.
258:
In the mean time, however, the open-source idea has scored successes and found backers elsewhere. Since the Netscape release we've seen a tremendous explosion of interest in the open-source development model, a trend both driven by and driving the continuing success of the Linux operating system. The trend Mozilla touched off is continuing at an accelerating rate.
"Democratizing Innovation" (2005) [en] VON HIPPEL, Eric
40:
The practices visible in “open source” software development were important in bringing this phenomenon to general awareness. In these projects it was clear policy that project contributors would routinely and systematically freely reveal code they had developed at private expense (Raymond 1999). However, free revealing of product innovations has a history that began long before the advent of open source software. Allen, in his 1983 study of the eighteenth-century iron industry, was probably the first to consider the phenomenon systematically. Later, Nuvolari (2004) discussed free revealing in the early history of mine pumping engines. Contemporary free revealing by users has been documented by von Hippel and Finkelstein (1979) for medical equipment, by Lim (2000) for semiconductor process equipment, by Morrison, Roberts, and von Hippel (2000) for library information systems, and by Franke and Shah (2003) for sporting equipment. Henkel (2003) has documented free revealing among manufacturers in the case of embedded Linux software.
299:
Contributors to the many open source software projects extant (more than 83,000 were listed on SourceForge.net in 2004) also routinely make the new code they have written public. Well-known open source software products include the Linux operating system software and the Apache web server computer software. Some conditions are attached to open source code licensing to ensure that the code remains available to all as an information commons. Because of these added protections, open source code does not quite fit the definition of free revealing given earlier in this chapter. (The licensing of open source software will be discussed in detail in chapter 7.)
300:
Henkel (2003) showed that free revealing is sometimes practiced by directly competing manufacturers. He studied manufacturers that were competitors and that had all built improvements and extensions to a type of software known as embedded Linux. (Such software is “embedded in” and used to operate equipment ranging from cameras to chemical plants.) He found that these manufacturers freely revealed improvements to the common software platform that they all shared and, with a lag, also revealed much of the equipment-specific code they had written.
328:
A variation of this argument applies to the free revealing among competing manufacturers documented by Henkel (2003). Competing developers of embedded Linux systems were creating software that was specifically designed to run the hardware products of their specific clients. Each manufacturer could freely reveal this equipment-specific code without fear of direct competitive repercussions: it was applicable mainly to specific products made by a manufacturer's client, and it was less valuable to others. At the same time, all would jointly benefit from free revealing of improvements to the underlying embedded Linux code base, upon which they all build their proprietary products. After all, the competitive advantages of all their products depended on this code base's being equal to or better than the proprietary software code used by other manufacturers of similar products. Additionally, Linux software was a complement to hardware that many of the manufacturers in Henkel's sample also sold. Improved Linux software would likely increase sales of their complementary hardware products. (Complement suppliers' incentives to innovate have been modeled by Harhoff (1996).)
336:
Interestingly, successful open source software projects do not appear to follow any of the guidelines for successful collective action projects just described. With respect to project recruitment, goal statements provided by successful open source software projects vary from technical and narrow to ideological and broad, and from precise to vague and emergent (for examples, see goal statements posted by projects hosted on Sourceforge.net). 8 Further, such projects may engage in no active recruiting beyond simply posting their intended goals and access address on a general public website customarily used for this purpose (for examples, see the Freshmeat.net website). Also, projects have shown by example that they can be successful even if large groups---perhaps thousands---of contributors are involved. Finally, open source software projects seem to expend no effort to discourage free riding. Anyone is free to download code or seek help from project websites, and no apparent form of moral pressure is applied to make a compensating contribution (e.g., “If you benefit from this code, please also contribute . . .”).
8. As a specific example of a project with an emergent goal, consider the beginnings of the Linux open source software project. In 1991, Linus Torvalds, a student in Finland, wanted a Unix operating system that could be run on his PC, which was equipped with a 386 processor. Minix was the only software available at that time but it was commercial, closed source, and it traded at US$150. Torvalds found this too expensive, and started development of a Posix-compatible operating system, later known as Linux. Torvalds did not immediately publicize a very broad and ambitious goal, nor did he attempt to recruit contributors. He simply expressed his private motivation in a message he posted on July 3, 1991, to the USENET newsgroup comp.os.minix (Wayner 2000): Hello netlanders, Due to a project I'm working on (in minix), I'm interested in the posix standard definition. [Posix is a standard for UNIX designers. A software using POSIX is compatible with other UNIX-based software.] Could somebody please point me to a (preferably) machine-readable format of the latest posix-rules? Ftp-sites would be nice. In response, Torvalds got several return messages with Posix rules and people expressing a general interest in the project. By early 1992, several skilled programmers contributed to Linux and the number of users increased by the day. Today, Linux is the largest open source development project extant in terms of number of developers.
431:
Interesting examples also exist regarding the impact a commons can have on the value of intellectual property innovators seek to hold apart from it. Weber (2004) recounts the following anecdote: In 1998, Linux developers were building new graphical interfaces for their open source software. One of the most promising of these, KDE, was offered under the General Public License. However, Matthias Ettrich, its developer, had built KDE using a proprietary graphical library called Qt. He felt at the time that this could be an acceptable solution because Qt was of good quality and Troll Tech, owner of Qt, licensed Qt at no charge under some circumstances. However, Troll Tech did require a developer's fee be paid under other circumstances, and some Linux developers were concerned about having code not licensed under the GPL as part of their code. They tried to convince Troll Tech to change the Qt license so that it would be under the GPL when used in free software. But Troll Tech, as was fully within its rights, refused to do this. Linux developers then, as was fully within their rights, began to develop open source alternatives to Qt that could be licensed under the GPL. As those projects moved toward success, Troll Tech recognized that Qt might be surpassed and effectively shut out of the Linux market. In 2000 the company therefore decided to license Qt under the GPL.
455:
Democratization of the opportunity to create is important beyond giving more users the ability to make exactly right products for themselves. As we saw in a previous chapter, the joy and the learning associated with creativity and membership in creative communities are also important, and these experiences too are made more widely available as innovation is democratized. The aforementioned Chris Hanson, a Principal Research Scientist at MIT and a maintainer in the Debian Linux community, speaks eloquently of this in his description of the joy and value he finds from his participation in an open source software community:
485:
Many user innovations require or benefit from complementary products or services, and manufacturers can often supply these at a profit. For example, IBM profits from user innovation in open source software by selling the complement of computer hardware. Specifically, it sells computer servers with open source software pre-installed, and as the popularity of that software goes up, so do server sales and profits. A firm named Red Hat distributes a version of the open source software computer operating system Linux, and also sells the complementary service of Linux technical support to users. Opportunities to provide profitable complements are not necessarily obvious at first glance, and providers often reap benefits without being aware of the user innovation for which they are providing a complement. Hospital emergency rooms, for example, certainly gain considerable business from providing medical care to the users and user-developers of physically demanding sports, but may not be aware of this.
671:
1. As a specific example of a project with an emergent goal, consider the beginnings of the Linux open source software project. In 1991, Linus Torvalds, a student in Finland, wanted a Unix operating system that could be run on his PC, which was equipped with a 386 processor. Minix was the only software available at that time but it was commercial, closed source, and it traded at US$150. Torvalds found this too expensive, and started development of a Posix-compatible operating system, later known as Linux. Torvalds did not immediately publicize a very broad and ambitious goal, nor did he attempt to recruit contributors. He simply expressed his private motivation in a message he posted on July 3, 1991, to the USENET newsgroup comp.os.minix (Wayner 2000): Hello netlanders, Due to a project I'm working on (in minix), I'm interested in the posix standard definition. [Posix is a standard for UNIX designers. A software using POSIX is compatible with other UNIX-based software.] Could somebody please point me to a (preferably) machine-readable format of the latest posix-rules? Ftp-sites would be nice. In response, Torvalds got several return messages with Posix rules and people expressing a general interest in the project. By early 1992, several skilled programmers contributed to Linux and the number of users increased by the day. Today, Linux is the largest open source development project extant in terms of number of developers.
759b:
J. Henkel, "Software Development in Embedded Linux: Informal Collaboration of Competing Firms", in W. Uhr, W. Esswein, E. Schoop (eds.), 2003, Physica.
763b:
G. Hertel, S. Niedner, S. Herrmann, "Motivation of Software Developers in Open Source Projects: An Internet-Based Survey of Contributors to the Linux Kernel", Research Policy, 2003, 1159-1177.
"Free For All - How Linux and the Free Software Movement Undercut the High Tech Titans" (2002) [en] WAYNER, Peter
1:
Free For All - How Linux and the Free Software Movement Undercut the High Tech Titans
Peter Wayner (2002-12-22)
5:
The list should also include the dozens of journalists at places like Slashdot.org, LinuxWorld, Linux magazine, Linux Weekly News, Kernel Traffic, Salon, and the New York Times. I should specifically mention the work of Joe Barr, Jeff Bates, Janelle Brown, Zack Brown, Jonathan Corbet, Elizabeth Coolbaugh, Amy Harmon, Andrew Leonard, Rob Malda, John Markoff, Mark Nielsen, Nicholas Petreley, Harald Radke, and Dave Whitinger. They wrote wonderful pieces that will make a great first draft of the history of the open source movement. Only a few of the pieces are cited directly in the footnotes, largely for practical reasons. The entire body of websites like Slashdot, Linux Journal, Linux World, Kernel Notes, or Linux Weekly News should be required reading for anyone interested in the free software movement.
6:
There are hundreds of folks at Linux trade shows who took the time to show me their products, T-shirts, or, in one case, a cooler filled with beer. Almost everyone I met at the conferences was happy to speak about their experiences with open source software. They were all a great source of information, and I don't even know most of their names.
20:
See http://www.wayner.org/books/ffa/ for the FIRST PDF EDITION. Page layout for this and the original paper edition designed by William Ruoto. Not printed on acid-free paper. Library of Congress Cataloging-in-Publication Data: Wayner, Peter, 1964- Free for all : how Linux and the free software movement undercut the high-tech titans / Peter Wayner. p. cm. ISBN 0-06-662050-3 1. Linux. 2. Operating systems (Computers) 3. Free computer software. I. Title. QA76.76.O63 W394 2000 005.4'469 dc21 00-023919
35:
The last competitor, though, was the most surprising to everyone. Schmalensee saw Linux, a program given away for free, as a big potential competitor. When he said Linux, he really meant an entire collection of programs known as “open source” software. These were written by a loose-knit group of programmers who shared all of the source code to the software over the Internet.
37:
Schmalensee didn't mention that most people thought of Linux as a strange tool created and used by hackers in dark rooms lit by computer monitors. He didn't mention that many people had trouble getting Linux to work with their computers. He forgot to mention that Linux manuals came with subheads like “Disk Druid-like 'fstab editor' available.” He didn't delve into the fact that for many of the developers, Linux was just a hobby they dabbled with when there was nothing interesting on television. And he certainly didn't mention that most people thought the whole Linux project was the work of a mad genius and his weirdo disciples who still hadn't caught on to the fact that the Soviet Union had already failed big-time. The Linux folks actually thought sharing would make the world a better place. Fat-cat programmers who spent their stock-option riches on Porsches and balsamic vinegar laughed at moments like this.
38:
Schmalensee didn't mention these facts. He just offered Linux as an alternative to Windows and said that computer manufacturers might switch to it at any time. Poof. Therefore, Microsoft had competitors. At the trial, the discourse quickly broke down into an argument over what is really a worthy competitor and what isn't. Were there enough applications available for Linux or the Mac? What qualifies as “enough”? Were these really worthy?
39:
Under cross-examination, Schmalensee explained that he wasn't holding up the Mac, BeOS, or Linux as competitors who were going to take over 50 percent of the marketplace. He merely argued that their existence proved that the barriers produced by the so-called Microsoft monopoly weren't that strong. If rational people were investing in creating companies like BeOS, then Microsoft's power wasn't absolute.
40:
Afterward, most people quickly made up their minds. Everyone had heard about the Macintosh and knew that back then conventional wisdom dictated that it would soon fail. But most people didn't know anything about BeOS or Linux. How could a company be a competitor if no one had heard of it? Apple and Microsoft had TV commercials. BeOS, at least, had a charismatic chairman. There was no Linux pitchman, no Linux jingle, and no Linux 30-second spot in major media. At the time, only the best-funded projects in the Linux community had enough money to buy spots on late-night community-access cable television. How could someone without money compete with a company that hired the Rolling Stones to pump excitement into a product launch?
41:
When people heard that Microsoft was offering a free product as a worthy competitor, they began to laugh even louder at the company's chutzpah. Wasn't money the whole reason the country was having a trial? Weren't computer programmers in such demand that many companies couldn't hire as many as they needed, no matter how high the salary? How could Microsoft believe that anyone would buy the supposition that a bunch of pseudo-communist nerds living in their weird techno-utopia where all the software was free would ever come up with software that could compete with the richest company on earth? At first glance, it looked as if Microsoft's case was sinking so low that it had to resort to laughable strategies. It was as if General Motors were to tell the world “We shouldn't have to worry about fixing cars that pollute because a collective of hippies in Ithaca, New York, is refurbishing old bicycles and giving them away for free.” It was as if Exxon waved away the problems of sinking oil tankers by explaining that folksingers had written a really neat ballad for teaching birds and otters to lick themselves clean after an oil spill. If no one charged money for Linux, then it was probably because it wasn't worth buying.
42:
But as everyone began looking a bit deeper, they began to see that Linux was being taken seriously in some parts of the world. Many web servers, it turned out, were already running on Linux or another free cousin known as FreeBSD. A free webserving tool known as Apache had controlled more than 50 percent of the web servers for some time, and it was gradually beating out Microsoft products that cost thousands of dollars. Many of the web servers ran Apache on top of a Linux or a FreeBSD machine and got the job done. The software worked well, and the nonexistent price made it easy to choose.
43:
Linux was also winning over some of the world's most serious physicists, weapons designers, biologists, and hard-core scientists. Some of the nation's top labs had wired together clusters of cheap PCs and turned them into supercomputers that were highly competitive with the best machines on the market. One upstart company started offering “supercomputers” for $3,000. These machines used Linux to keep the data flowing while the racks of computers plugged and chugged their way for hours on complicated simulations.
44:
There were other indications. Linux users bragged that their system rarely crashed. Some claimed to have machines that had been running for a year or more without a problem. Microsoft (and Apple) users, on the other hand, had grown used to frequent crashes. The “Blue Screen of Death” that appears on Windows users' monitors when something goes irretrievably wrong is the butt of many jokes.
45:
Linux users also bragged about the quality of their desktop interface. Most of the uninitiated thought of Linux as a hacker's system built for nerds. Yet recently two very good operating shells called GNOME and KDE had taken hold. Both offered the user an environment that looked just like Windows but was better. Linux hackers started bragging that they were able to equip their girlfriends, mothers, and friends with Linux boxes without grief. Some people with little computer experience were adopting Linux with little trouble.
61:
To the rest of the world, this urge to putter and fiddle with machines is more than a source of marital comedy. Cox is one of the great threats to the continued dominance of Microsoft, despite the fact that he found a way to weld spaghetti to a nonstick pan. He is one of the core developers who help maintain the Linux kernel. In other words, he's one of the group of programmers who helps guide the development of the Linux operating system, the one Richard Schmalensee feels is such a threat to Microsoft. Cox is one of the few people whom Linus Torvalds, the creator of Linux, trusts to make important decisions about future directions. Cox is an expert on the networking guts of the system and is responsible for making sure that most of the new ideas that people suggest for Linux are considered carefully and integrated correctly. Torvalds defers to Cox on many matters about how Linux-based computers talk with other computers over a network. Cox works long and hard to find efficient ways for Linux to juggle multiple connections without slowing down or deadlocking.
62:
The group that works with Cox and Torvalds operates with no official structure. Millions of people use Linux to keep their computers running, and all of them have copies of the source code. In the 1980s, most companies began keeping the source code to their software as private as possible because they worried that a competitor might come along and steal the ideas the source spelled out. The source code, which is written in languages like C, Java, FORTRAN, BASIC, or Pascal, is meant to be read by programmers. Most companies didn't want other programmers understanding too much about the guts of their software. Information is power, and the companies instinctively played their cards close to their chests.
63:
When Linus Torvalds first started writing Linux in 1991, however, he decided to give away the operating system for free. He included all the source code because he wanted others to read it, comment upon it, and perhaps improve it. His decision was as much a radical break from standard programming procedure as a practical decision. He was a poor student at the time, and this operating system was merely a hobby. If he had tried to sell it, he wouldn't have gotten anything for it. He certainly had no money to build a company that could polish the software and market it. So he just sent out copies over the Internet.
67:
Today, about a thousand people regularly work with people like Alan Cox on the development of the Linux kernel, the official name for the part of the operating system that Torvalds started writing back in 1991. That may not be an accurate estimate because many people check in for a few weeks when a project requires their participation. Some follow everything, but most people are just interested in little corners. Many other programmers have contributed various pieces of software such as word processors or spreadsheets. All of these are bundled together into packages that are often called plain Linux or GNU/Linux and shipped by companies like Red Hat or more ad hoc groups like Debian. 1 While Torvalds only wrote the core kernel, people use his name, Linux, to stand for a whole body of software written by thousands of others. It's not exactly fair, but most let it slide. If there hadn't been the Linux kernel, the users wouldn't have the ability to run software on a completely free system. The free software would need to interact with something from Microsoft, Apple, or IBM. Of course, if it weren't for all of the other free software from Berkeley, the GNU project, and thousands of other garages around the world, there would be little for the Linux kernel to do.
1.Linux Weekly News keeps a complete list of distributors. These range from the small, one- or two-man operations to the biggest, most corporate ones like Red Hat: Alzza Linux, Apokalypse, Armed Linux, Bad Penguin Linux, Bastille Linux, Best Linux (Finnish/Swedish), Bifrost, Black Cat Linux (Ukrainian/Russian), Caldera OpenLinux, CCLinux, Chinese Linux Extension, Complete Linux, Conectiva Linux (Brazilian), Debian GNU/Linux, Definite Linux, DemoLinux, DLD, DLite, DLX, DragonLinux, easyLinux, Enoch, Eridani Star System, Eonova Linux, e-smith server and gateway, Eurielec Linux (Spanish), eXecutive Linux, floppyfw, Floppix, Green Frog Linux, hal91, Hard Hat Linux, Immunix, Independence, Jurix, Kha0s Linux, KRUD, KSI-Linux, Laetos, LEM, Linux Cyrillic Edition, LinuxGT, Linux-Kheops (French), Linux MLD (Japanese), LinuxOne OS, LinuxPPC, LinuxPPP (Mexican), Linux Pro Plus, Linux Router Project, LOAF, LSD, Mandrake, Mastodon, MicroLinux, MkLinux, muLinux, nanoLinux II, NoMad Linux, OpenClassroom, Peanut Linux, Plamo Linux, PLD, Project Ballantain, PROSA, QuadLinux, Red Hat, Rock Linux, RunOnCD, ShareTheNet, Skygate, Slackware, Small Linux, Stampede, Stataboware, Storm Linux, SuSE, Tomsrtbt, Trinux, TurboLinux, uClinux, Vine Linux, WinLinux 2000, Xdenu, XTeamLinux, and Yellow Dog Linux.
69:
All of these people work at their own pace. Some work in their homes, like Alan Cox. Some work in university labs. Others work for businesses that use Linux and encourage their programmers to plug away so it serves their needs.
70:
The team is united by mailing lists. The Linux Kernel mailing list hooks up Cox in Britain, Torvalds in Silicon Valley, and the others around the globe. They post notes to the list and discuss ideas. Sometimes verbal fights break out, and sometimes everyone agrees. Sometimes people light a candle by actually writing new code to make the kernel better, and other times they just curse the darkness.
71:
Cox is now one of several people responsible for coordinating the addition of new code. He tests it for compatibility and guides Linux authors to make sure they're working together optimally. In essence, he tests every piece of incoming software to make sure all of the gauges work with the right system of measurement so there will be no glitches. He tries to remove the incompatibilities that marred Zorro.
73:
Other features are not so popular, and they're tackled by the people who need the features. Some people want to hook their Linux boxes up to Macintoshes. Doing that smoothly can require some work in the kernel. Others may want to add special code to enable a special device like a high-speed camera or a strange type of disk drive. These groups often work on their own but coordinate their solutions with the main crowd. Ideally, they'll be able to come up with some patches that solve their problem without breaking some other part of the system.
75:
Each day, Cox and his virtual colleagues pore through the lists trying to figure out how to make Linux better, faster, and more usable. Sometimes they skip out to watch a movie. Sometimes they go for hikes. But one thing they don't do is spend months huddled in conference rooms trying to come up with legal arguments. Until recently, the Linux folks didn't have money for lawyers, and that means they didn't get sidetracked by figuring out how to get big and powerful people like Richard Schmalensee to tell a court that there's no monopoly in the computer operating system business.
78:
The battle between Linux and Microsoft is lining up to be the classic fight between the people like Schmalensee and the people like Cox. On one side are the armies of lawyers, lobbyists, salesmen, and expensive executives who are armed with patents, lawsuits, and legislation. They are skilled at moving the levers of power until the gears line up just right and billions of dollars pour into their pockets. They know how to schmooze, toady, beg, or even threaten until they wear the mantle of authority and command the piety and devotion of the world. People buy Microsoft because it's “the standard.” No one decreed this, but somehow it has come to be.
88:
That's an idyllic picture, and the early success of Linux, FreeBSD, and other free packages makes it tempting to think that the success will build. Today, open source software powers more than 50 percent of the web servers on the Internet, and that is no small accomplishment. Getting thousands, if not millions, of programmers to work together is quite amazing given how quirky programmers can be. The ease of copying makes it possible to think that Alan Cox could get up late and still move the world.
91:
Right now, the free software movement stands at a crucial moment in its history. In the past, a culture of giving and wide-open sharing let thousands of programmers build a great operating system that was, in many ways, better than anything coming from the best companies. Many folks began working on Linux, FreeBSD, and thousands of other projects as hobbies, but now they're waking up to find IBM, Hewlett-Packard, Apple, and all the other big boys pounding on their door. The kids had created something as nice as Linux, and now everyone began to wonder whether they really had enough good stuff to go the distance and last nine innings against the greatest power hitters around.
94:
Linus Torvalds may be on the cover of magazines, but he can't do anything with the wave of a hand. He must charm and cajole the thousands of folks on the Linux mailing list to make a change. Many of the free software projects may generate great code, but they have to beg for computers. The programmers might even surprise him and come up with an even better solution. They've done it in the past. But no money means that no one has to do what anyone says.
96:
But shows that are charming and fresh in a barn can become thin and weak on a big stage on Broadway. The glitches and raw functionality of Linux and free software don't seem too bad if you know that they're built by kids in their spare time. Building real tools for real companies, moms, police stations, and serious users everywhere is another matter. Everyone may be hoping that sharing, caring, and curiosity are enough, but no one knows for certain. Maybe capital will end up winning. Maybe it won't. It's freedom versus assurance; it's wide-open sharing versus stock options; it's cooperation versus intimidation; it's the geeks versus the suits, all in one knockdown, hack-till-you-drop, winner-take-everything fight.
100:
FreeBSD is a close cousin to the Linux kernel and one that predates it in some ways. It descends from a long tradition of research and development of operating systems at the University of California at Berkeley. The name BSD stands for “Berkeley Software Distribution,” the name given to one of the first releases of operating system source code that Berkeley made for the world. That small package grew, morphed, and absorbed many other contributions over the years.
101:
Referring to Linux and FreeBSD as cousins is apt because they share much of the same source code in the same way that cousins share some of the same genes. Both borrow source code and ideas from each other. If you buy a disk with FreeBSD, which you can do from companies like Walnut Creek, you may get many of the same software packages that you get from a disk from Red Hat Linux. Both include, for instance, some of the GNU compilers that turn source code into something that can be understood by computers.
113:
On that January 14, a new member of the WINE list was learning just how volunteering works. The guy posted a note to the list that described his Diamond RIO portable music device that lets you listen to MP3 files whenever you want. “I think the WINE development team should drop everything and work on getting this program to work as it doesn't seem like Diamond wants to release a Linux utility for the Rio,” he wrote.
117:
The WINE clone of the Win32 API is a fascinating example of how open source starts slowly and picks up steam. Bob Amstadt started the project in 1993, but soon turned it over to Alexandre Julliard, who has been the main force behind it. The project, although still far from finished, has produced some dramatic accomplishments, making it possible to run major programs like Microsoft Word or Microsoft Excel on a Linux box without using Windows. In essence, the WINE software is doing a good enough job acting like Windows that it's fooling Excel and Word. If you can trick the cousins, that's not too bad.
118:
The WINE home page (www.winehq.com) estimates that more than 90,000 people use WINE regularly to run programs for Microsoft Windows without buying Windows. About 140 or more people regularly contribute to the project by writing code or fixing bugs. Many are hobbyists who want the thrill of getting their software to run without Windows, but some are corporate programmers. The corporate programmers want to sell their software to the broadest possible marketplace, but they don't want to take the time to rewrite everything. If they can get their software working well with WINE, then people who use Linux or BSD can use the software that was written for Microsoft Windows.
119:
The new user who wanted to get his RIO player working with his Linux computer soon got a rude awakening. Andreas Mohr, a German programmer, wrote back,
122:
Mohr's suggestion was to file a bug report that ranks the usability of the software so the programmers working on WINE can tweak it. This is just the first step in the free software experience. Someone has to notice the problem and fix it. In this case, someone needs to hook up their Diamond RIO MP3 player to a Linux box and try to move MP3 files with the software written for Windows. Ideally, the software will work perfectly, and now all Linux users will be able to use RIO players. In reality, there might be problems or glitches. Some of the graphics on the screen might be wrong. The software might not download anything at all. The first step is for someone to test the product and write up a detailed report about what works and what doesn't.
125:
WINE can't pay anyone, and that means that great ideas sometimes get ignored. The free software community, however, doesn't necessarily see this as a limitation. If the RIO player were truly important, someone else would come along and pick up the project. Someone else would do the work and file a bug report so everyone could use the software. If there's no one else, then maybe the RIO software isn't that important to the Linux community. Work gets done when someone really cares enough to do it.
135:
Still, he told me, “At the time Toy Story was coming out, there was a space shuttle flying with the Debian GNU/Linux distribution on it controlling a biological experiment. People would say 'Are you proud of working at Pixar?' and then I would say my hobby software was running on the space shuttle now. That was a turnaround point when I realized that Linux might become my career.”
138:
In fact, it's a bad idea to see the free software revolution as having much to do with Microsoft. Even if Linux, FreeBSD, and other free software packages win, Microsoft will probably continue to fly along quite happily in much the same way that IBM continues to thrive even after losing the belt of the Heavyweight Computing Champion of the World to Microsoft. Anyone who spends his or her time focused on the image of a ragtag band of ruffians and orphans battling the Microsoft leviathan is bound to miss the real story.
144:
Anyone who tunes in to the battle between Microsoft and the world expecting to see a good old-fashioned fight for marketplace domination is going to miss the real excitement. Sure, Linux, FreeBSD, OpenBSD, NetBSD, Mach, and the thousands of other free software projects are going to come out swinging. Microsoft is going to counterpunch with thousands of patents defended by armies of lawyers. Some of the programmers might even be a bit weird, and a few will be entitled to wear the adjective “ragtag.” But the real revolution has nothing to do with whether Bill Gates keeps his title as King of the Hill. It has nothing to do with whether the programmers stay up late and work in the nude. It has nothing to do with poor grooming, extravagant beards, Coke-bottle glasses, black trench coats, or any of the other stereotypes that fuel the media's image.
180:
Meanwhile, on the other coast, the lawsuit tied up Berkeley and the BSD project for several years, and the project lost valuable energy and time to the legal fight. In the meantime, several other completely free software projects started springing up around the globe. These began in basements and depended on machines that the programmer owned. One of these projects was started by Linus Torvalds and would eventually grow to become Linux, the unstoppable engine of hype and glory. He didn't have the money of the Berkeley computer science department, and he didn't have the latest machines that corporations gave them. But he had freedom and the pile of source code that came from unaffiliated, free projects like GNU that refused to compromise and cut intellectual corners. Although Torvalds might not have realized it at the time, freedom turned out to be the most valuable of all.
185:
One of the people who wanted UNIX was the Finnish student Linus Torvalds, who couldn't afford this tithe. He was far from the first one, and the conflict began long before he started to write Linux in 1991.
192:
The first move to separate Berkeley's version of UNIX from AT&T's control wasn't really a revolution. No one was starting a civil war by firing shots at Fort Sumter or starting a revolution by dropping tea in the harbor. In fact, it started long before the lawsuit and Linux. In 1989, some people wanted to start hooking their PCs and other devices up to the Internet, and they didn't want to use UNIX.
214:
While news traveled quickly to some corners, it didn't reach Finland. Network Release 2 came in June 1991, right around the same time that Linus Torvalds was poking around looking for a high-grade OS to use in experiments. Jolitz's 386BSD came out about six months later as Torvalds began to dig into creating the OS he would later call Linux. Soon afterward, Jolitz lost interest in the project and let it lie, but others came along. In fact, two groups called NetBSD and FreeBSD sprang up to carry the torch.
240:
Any grown-up should take one look at this battle and understand just how the free software movement got so far. While the Berkeley folks were meeting with lawyers and worrying about whether the judges were going to choose the right side, Linus Torvalds was creating his own kernel. He started Linux on his own, and that made him a free man.
248:
In June 1991, soon after Torvalds 3 started his little science project, the Computer Systems Research Group at Berkeley released what they thought was their completely unencumbered version of BSD UNIX known as Network Release 2. Several projects emerged to port this to the 386, and these efforts evolved into the FreeBSD and NetBSD versions of today. Torvalds has often said that he might never have started Linux if he had known that he could just download a more complete OS from Berkeley.
3.Everyone in the community, including many who don't know him, refers to him by his first name. The rules of style prevent me from using that in something as proper as a book.
260:
The core of an OS is often called the “kernel,” which is one of the strange words floating around the world of computers. When people are being proper, they note that Linus Torvalds was creating the Linux kernel in 1991. Most of the other software, like the desktop, the utilities, the editors, the web browsers, the games, the compilers, and practically everything else, was written by other folks. If you measure this in disk space, more than 95 percent of the code in an average distribution lies outside the kernel. If you measure it by user interaction, most people using Linux or BSD don't even know that there's a kernel in there. The buttons they click, the websites they visit, and the printing they do are all controlled by other programs that do the work.
263:
In 1991, Torvalds had a short list of features he wanted to add to the kernel. The Internet was still a small network linking universities and some advanced labs, and so networking was a small concern. He was only aiming at the 386, so he could rely on some of the special features that weren't available on other chips. High-end graphics hardware cards were still pretty expensive, so he concentrated on a text-only interface. He would later fix all of these problems with the help of the people on the Linux kernel mailing list, but for now he could avoid them.
270:
When Torvalds started crafting the Linux kernel, he decided he was going to create a bigger, more integrated version that he called a “monolithic kernel.” This was something of a bold move because the academic community was entranced with what they called “microkernels.” The difference is partly semantic and partly real, but it can be summarized by analogy with businesses. Some companies try to build large, smoothly integrated operations that control all the steps of production. Others try to create smaller operations that subcontract much of the production work to other companies. One is big, monolithic, and all-encompassing, while the other is smaller, fragmented, and heterogeneous. It's not uncommon to find two companies in the same industry taking different approaches and thinking they're doing the right thing.
280:
By the beginning of 1992, Linux was no longer a Finnish student's part-time hobby. Several influential programmers became interested in the code. It was free and relatively usable. It ran much of the GNU code, and that made it a neat, inexpensive way to experiment with some excellent tools. More and more people downloaded the system, and a significant fraction started reporting bugs and suggestions to Torvalds. He rolled them back in and the project snowballed.
284:
This talent for organizing the work of others is a rare commodity, and Torvalds had a knack for it. He was gracious about sharing his system with the world and he never lorded it over anyone. His messages were filled with jokes and self-deprecating humor, most of which were carefully marked with smiley faces (:-)) to make sure that the message was clear. If he wrote something pointed, he would apologize for being a “hothead.” He was always gracious in giving credit to others and noted that much of Linux was just a clone of UNIX. All of this made him easy to read and thus influential.
285:
His greatest trick, though, was his decision to avoid the mantle of power. He wrote in 1992, “Here's my standing on 'keeping control,' in 2 words (three?): I won't. The only control I've effectively been keeping on Linux is that I know it better than anybody else.”
288:
He made it clear that people could vote to depose him at any time. “If people feel I do a bad job, they can do it themselves.” They could just take all of his Linux code and start their own version using Torvalds's work as a foundation.
293:
Torvalds's burgeoning kernel dovetailed nicely with the tools that the GNU project created. All of the work by Stallman and his disciples could be easily ported to work with the operating system core that Torvalds was now calling Linux. This was the power of freely distributable source code. Anyone could make a connection, and someone invariably did. Soon, much of the GNU code began running on Linux. These tools made it easier to create more new programs, and the snowball began to roll.
296:
This freedom also attracted others to the party. They knew that Linux would always be theirs, too. They could write neat features and plug them into the Linux kernel without worrying that Torvalds would yank the rug out from under them. The GPL was a contract that lasted long into the future. It was a promise that bound them together.
297:
The Linux kernel also succeeded because it was written from the ground up for the PC platform. When the Berkeley UNIX hackers were porting BSD to the PC platform, they weren't able to make it fit perfectly. They were taking a piece of software crafted for older computers like the VAX, and shaving off corners and rewriting sections until it ran on the PC.
303:
During the early months of Torvalds's work, the BSD group was stuck in a legal swamp. While the BSD team was involved with secret settlement talks and secret depositions, Linus Torvalds was happily writing code and sharing it with the world on the Net. His life wasn't all peaches and cream, but all of his hassles were open. Professor Andy Tanenbaum, a fairly well-respected and famous computer scientist, got into a long, extended debate with Torvalds over the structure of Linux. He looked down at Linux and claimed that it would have been worth two F's in his class because of its design. This led to a big flame war that was every bit as nasty as the fight between Berkeley and AT&T's USL. In fact, to the average observer it was even nastier. Torvalds returned Tanenbaum's fire with strong words like “fiasco,” “brain-damages,” and “suck.” He brushed off the bad grades by pointing out that Albert Einstein supposedly got bad grades in math and physics. The high-priced lawyers working for AT&T and Berkeley probably used very expensive and polite words to try and hide the shivs they were trying to stick in each other's backs. Torvalds and Tanenbaum pulled out each other's virtual hair like a squawkfest on the Jerry Springer show.
304:
But Torvalds's flame war with Tanenbaum occurred in the open in an Internet newsgroup. Other folks could read it, think about it, add their two cents' worth, and even take sides. It was a wide-open debate that uncovered many flaws in the original versions of Linux and Tanenbaum's Minix. They forced Torvalds to think deeply about what he wanted to do with Linux and consider its flaws. He had to listen to the arguments of a critic and a number of his peers on the Net and then come up with arguments as to why his Linux kernel didn't suck too badly.
306:
The fight between Torvalds and Tanenbaum, however, drew people into the project. Other programmers like David Miller, Ted Ts'o, and Peter da Silva chimed in with their opinions. At the time, they were just interested bystanders. In time, they became part of the Linux brain trust. Soon they were contributing source code that ran on Linux. The argument's excitement forced them to look at Torvalds's toy OS and try to decide whether his defense made any sense. Today, David Miller is one of the biggest contributors to the Linux kernel. Many of the original debaters became major contributors to the foundations of Linux.
308:
To this day, all of the devotees of the various BSDs grit their teeth when they hear about Linux. They think that FreeBSD, NetBSD, and OpenBSD are better, and they have good reasons for these beliefs. They know they were out the door first with a complete running system. But Linux is on the cover of the magazines. All of the great technically unwashed are now starting to use “Linux” as a synonym for free software. If AT&T had never sued, the BSD teams would be the ones reaping the glory. They would be the ones to whom Microsoft turned when it needed a plausible competitor. They would be more famous.
310:
McKusick says, “If you plot the installation base of Linux and BSD over the last five years, you'll see that they're both in exponential growth. But BSD's about eighteen to twenty months behind. That's about how long it took between Net Release 2 and the unencumbered 4.4BSD-Lite. That's about how long it took for the court system to do its job.”
312:
Through the 1990s, the little toy operating system grew slowly and quietly as more and more programmers were drawn into the vortex. At the beginning, the OS wasn't rich with features. You could run several different programs at once, but you couldn't do much with the programs. The system's interface was just text. Still, this was often good enough for a few folks in labs around the world. Some just enjoyed playing with computers. Getting Linux running on their PC was a challenge, not unlike bolting an aftermarket supercharger onto a Honda Civic. But others took the project more seriously because they had serious jobs that couldn't be solved with a proprietary operating system that came from Microsoft or others.
313:
In time, more people started using the system and started contributing their additions to the pot. Someone figured out how to make MIT's free X Window System run on Linux so everyone could have a graphical interface. Someone else discovered how to roll in technology for interfacing with the Internet. That made a big difference because everyone could hack, tweak, and fiddle with the code and then just upload the new versions to the Net.
314:
It goes without saying that all the cool software coming out of Stallman's Free Software Foundation found its way to Linux. Some were simple toys like GNU Chess, but others were serious tools that were essential to the growth of the project. By 1991, the FSF was offering what might be argued were the best text editor and compiler in the world. Others might have been close, but Stallman's were free. These were crucial tools that made it possible for Linux to grow quickly from a tiny experimental kernel into a full-featured OS for doing everything a programmer might want to do.
315:
James Lewis-Moss, one of the many programmers who devote some time to Linux, says that GCC made it possible for programmers to create, revise, and extend the kernel. “GCC is integral to the success of Linux,” he says, and points out that this may be one of the most important reasons why “it's polite to refer to it as GNU/Linux.”
316:
Lewis-Moss points out one of the smoldering controversies in the world of free software: all of the tools and games that came from the GNU project started becoming part of what people simply thought of as plain “Linux.” The name for the small kernel of the operating system soon grew to apply to almost all the free software that ran with it. This angered Stallman, who first argued that a better name would be “Lignux.” When that failed to take hold, he moved to “GNU/Linux.” Some ignored his pleas and simply used “Linux,” which is still a bit unfair. Some feel that “GNU/Linux” is too much of a mouthful and, for better or worse, just plain Linux is an appropriate shortcut. Some, like Lewis-Moss, hold firm to GNU/Linux.
317:
Soon some people were bundling together CD-ROMs with all this software in one batch. The group would try to work out as many glitches as possible so that the purchaser's life would be easier. All boasted strange names like Yggdrasil, Slackware, SuSE, Debian, or Red Hat. Many were just garage projects that never made much money, but that was okay. Making money wasn't really the point. People just wanted to play with the source. Plus, few thought that much money could be made. The GPL, for instance, made it difficult to differentiate the product because it required everyone to share their source code with the world. If Slackware came up with a neat fix that made their version of Linux better, then Debian and SuSE could grab it. The GPL prevented anyone from constraining the growth of Linux.
318:
But only greedy businessmen see sharing and competition as negatives. In practice, the free flow of information enhanced the market for Linux by ensuring that it was stable and freely available. If one key CD-ROM developer gets a new girlfriend and stops spending enough time programming, another distribution will pick up the slack. If a hurricane flattened Raleigh, North Carolina, the home of Red Hat, then another supplier would still be around. A proprietary OS like Windows is like a set of manacles. An earthquake in Redmond, Washington, could cause a serious disruption for everyone.
319:
The competition and the GPL meant that the users would never feel bound to one OS. If problems arose, anyone could always just start a splinter group and take Linux in that direction. And they did. All the major systems began as splinter groups, and some picked up enough steam and energy to dominate. In time, the best splinter groups spun off their own splinter groups and the process grew terribly complicated.
322:
Hall remembers well the moment he discovered Linux. He told Linux Today,
323:
I didn't even know I was involved with Linux at first. I got a copy of Dr. Dobb's Journal, and in there was an advertisement for “get a UNIX operating system, all the source code, and run it on your PC.” And I think it was $99. And I go, “Oh, wow, that's pretty cool. For $99, I can do that.” So I sent away for it, got the CD. The only trouble was that I didn't have a PC to run it on. So I put it on my Ultrix system, took a look at the man pages, directory structure and stuff, and said, “Hey, that looks pretty cool.” Then I put it away in the filing cabinet. That was probably around January of 1994.
325:
At the meeting, Torvalds helped Hall and his boss set up a PC with Linux. This was the first time that Hall actually saw Linux run, and he was pleasantly surprised. He said, “By that time I had been using UNIX for probably about fifteen years. I had used System V, I had used Berkeley, and all sorts of stuff, and this really felt like UNIX. You know . . . I mean, it's kind of like playing the piano. You can play the piano, even if it's a crappy piano. But when it's a really good piano, your fingers just fly over the keys. That's the way this felt. It felt good, and I was really impressed.”
326:
This experience turned Hall into a true convert and he went back to Digital convinced that the Linux project was more than just some kids playing with a toy OS. These so-called amateurs with no centralized system or corporate backing had produced a very, very impressive system that was almost as good as the big commercial systems. Hall was an instant devotee. Many involved in the project recall their day of conversion with the same strength. A bolt of lightning peeled the haze away from their eyes, and they saw.
327:
Hall set out trying to get Torvalds to rewrite Linux so it would work well on the Alpha. This was not a simple task, but it was one that helped the operating system grow a bit more. The original version included some software that assumed the computer was designed like the Intel 386. This was fine when Linux only ran on Intel machines, but removing these assumptions made it possible for the software to run well on all types of machines.
328:
Hall went sailing with Torvalds to talk about the guts of the Linux OS. Hall told me, “I took him out on the Mississippi River, went up and down the Mississippi in the river boat, drinking Hurricanes, and I said to him, 'Linus, did you ever think about porting Linux to a 64-bit processor, like the Alpha?' He said, 'Well, I thought about doing that, but the Helsinki office has been having problems getting me a system, so I guess I'll have to do the PowerPC instead.'
329:
“I knew that was the wrong answer, so I came back to Digital (at the time), and got a friend of mine, named Bill Jackson, to send out a system to Linus, and he received it about a couple weeks after that. Then I found some people inside Digital who were also thinking about porting Linux to an Alpha. I got the two groups together, and after that, we started on the Alpha Linux project.”
331:
Hall also helped start a group called Linux International, which works to make the corporate world safe for Linux. “We help vendors understand the Linux marketplace,” Hall told me. “There's a lot of confusion about what the GPL means. Less now, but still there's a lot of confusion. We helped them find the markets.”
332:
Today, Linux International helps control the trademark on the name Linux and ensures that it is used in an open way. “When someone wanted to call themselves something like 'Linux University,' we said that's bad because there's going to be more than one. 'Linux University of North Carolina' is okay. It opens up the space.”
333:
In the beginning, Torvalds depended heavily on the kindness of strangers like Hall. He didn't have much money, and the Linux project wasn't generating a huge salary for him. Of course, poverty also made it easier for people like Hall to justify giving him a machine. Torvalds wasn't rich monetarily, but he became rich in machines.
334:
By 1994, when Hall met Torvalds, Linux was already far from just a one-man science project. The floppy disks and CD-ROMs holding a version of the OS were already on the market, and this distribution mechanism was one of the crucial unifying forces. Someone could just plunk down a few dollars and get a version that was more or less ready to run. Many simply downloaded their versions for free from the Internet.
336:
In 1994, getting Linux to run was never really as simple as putting the CD-ROM in the drive and pressing a button. Many of the programs didn't work with certain video cards. Some modems didn't talk to Linux. Not all of the printers communicated correctly. Yet most of the software worked together on many standard machines. It often took a bit of tweaking, but most people could get the OS up and running on their computers.
337:
This was a major advance for the Linux OS because most people could quickly install a new version without spending too much time downloading the new code or debugging it. Even programmers who understood exactly what was happening felt that installing a new version was a long, often painful slog through technical details. These CD-ROMs not only helped programmers, they also encouraged casual users to experiment with the system.
340:
Other CD-ROM groups became more commercial. Debian sold its disks to pay for Internet connection fees and other expenses, but they were largely a garage operation. So were groups with names like Slackware, FreeBSD, and OpenBSD. Other groups like Red Hat actually set out to create a burgeoning business, and to a large extent, they succeeded. They took the money and used it to pay programmers who wrote more software to make Linux easier to use.
343:
Slowly but surely, more and more people became aware of Linux, the GNU project, and its cousins like FreeBSD. No one was making much money off the stuff, but the word of mouth was spreading very quickly. The disks were priced reasonably, and people were curious. The GPL encouraged people to share. People began borrowing disks from their friends. Some companies even manufactured cheap rip-off copies of the CD-ROMs, an act that the GPL encouraged.
344:
At the top of the pyramid was Linus Torvalds. Many Linux developers treated him like the king of all he surveyed, but he was like the monarchs who were denuded by a popular constitutional democracy. He had always focused on building a fast, stable kernel, and that was what he continued to do. The rest of the excitement, the packaging, the features, and the toys, were the dominion of the volunteers and contributors.
346:
Torvalds moved to Silicon Valley and took a job with the very secret company Transmeta in order to help design the next generation of computer chips. He worked out a special deal with the company that allowed him to work on Linux in his spare time. He felt that working for one of the companies like Red Hat would give that one version of Linux a special imprimatur, and he wanted to avoid that. Plus, Transmeta was doing cool things.
347:
In January 1999, the world caught up with the pioneers. Schmalensee mentioned Linux on the witness stand during the trial and served official notice to the world that Microsoft was worried about the growth of Linux. The system had been on the company's radar screen for some time. In October 1998, an internal memo from Microsoft describing the threat made its way to the press. Some thought it was just Microsoft's way of currying favor during the antitrust investigation. Others thought it was a serious treatment of a topic that was difficult for the company to understand.
348:
The media followed Schmalensee's lead. Everyone wanted to know about Linux, GNU, open source software, and the magical effects of widespread, unconditional sharing. The questions came in tidal waves, and Torvalds tried to answer them again and again. Was he sorry he gave it all away? No. If he charged anything, no one would have bought his toy and no one would have contributed anything. Was he a communist? No, he was rather apolitical. Don't programmers have to eat? Yes, but they will make their money selling a service instead of getting rich off bad proprietary code. Was Linux going to overtake Microsoft? Yes, if he had his way. “World Domination Soon” became the motto.
349:
But there were also difficult questions. How would the Linux world resist the embrace of big companies like IBM, Apple, Hewlett-Packard, and maybe even Microsoft? These were massive companies with paid programmers and schedules to meet. All the open source software was just as free to them as anyone else. Would these companies use their strength to monopolize Linux?
351:
Many wanted to know when Linux would become easier to use for nonprogrammers. Programmers built the OS to be easy to take apart and put back together again. That's a great feature if you like hacking the inside of a kernel, but that doesn't excite the average computer user. How was the open source community going to get the programmers to donate their time to fix the mundane, everyday glitches that confused and infuriated the nonprogrammers? Was the Linux community going to be able to produce something that a nonprogrammer could even understand?
352:
Others wondered if the Linux world could ever agree enough to create a software package with some coherence. Today, Microsoft users and programmers pull their hair out trying to keep Windows 95, Windows 98, and Windows NT straight. Little idiosyncrasies cause games to crash and programs to fail. Microsoft has hundreds of quality assurance engineers and thousands of support personnel. Still, the little details drive everyone crazy.
353:
New versions of Linux appear as often as daily. People often create their own versions to solve particular problems. Many of these changes won't affect anyone, but they can add up. Is there enough consistency to make the tools easy enough to use?
354:
Many wondered if Linux was right for world domination. Programmers might love playing with source code, but the rest of the world just wants something that delivers the e-mail on time. More important, the latter are willing to pay for this efficiency.
444:
During this time, the relationship between AT&T and the universities was cordial. AT&T owned the commercial market for UNIX and Berkeley supplied many of the versions used in universities. While the universities got BSD for free, they still needed to negotiate a license with AT&T, and companies paid a fortune. This wasn't too much of a problem because universities are often terribly myopic. If they share their work with other universities and professors, they usually consider their sharing done. There may be folks out there without university appointments, but those folks are usually viewed as cranks who can be safely ignored. Occasionally, those cranks write their own OS that grows up to be Linux. The BSD version of freedom was still a far cry from Stallman's, but then Stallman hadn't articulated it yet. His manifesto was still a few years off.
458:
Daniel is basically correct. The BSD code has evolved, or forked, into many different versions with names like FreeBSD, OpenBSD, and NetBSD, while the Linux UNIX kernel released under Stallman's GPL is limited to one fairly coherent package. Still, there is plenty of cross-pollination between the different versions of BSD UNIX. Both NetBSD 1.0 and FreeBSD 2.0, for instance, borrowed code from 4.4BSD-Lite. Also, many versions of Linux come with tools and utilities that came from the BSD project.
459:
But Daniel's point is also clouded with semantics. There are dozens if not hundreds of different Linux distributions available from different vendors. Many differ in subtle points, but some are markedly different. While these differences are often as great as the ones between the various flavors of BSD, the groups do not consider them psychologically separate. They haven't forked politically even though they've split off their code.
468:
Sam Ockman, a Linux enthusiast and the founder of Penguin Computing, remembers the day of the meeting just before Netscape announced it was freeing its source code. “Eric Raymond came into town because of the Netscape thing. Netscape was going to free their software, so we drove down to Transmeta and had a meeting so we could advise Netscape,” he said.
470:
The definition of what was open source grew out of the Debian project, one of the different groups that banded together to press CD-ROMs of stable Linux releases. Groups like these often get into debates about what software to include on the disks. Some wanted to be very pure and only include GPL'ed software. In a small way, that would force others to contribute back to the project because they wouldn't get their software distributed by the group unless it was GPL'ed. Others wanted less stringent requirements that might include quasi-commercial projects that still came with their source code. There were some cool projects out there that weren't protected by GPL, and it could be awfully hard to pass up the chance to integrate them into a package.
486:
Stallman saw this secrecy as a great crime. Computer users should be able to share the source code so they can share ways to make it better. This trade should lead to more information-trading in a great feedback loop. Some folks even used the word “bloom” to describe the explosion of interest and cross-feedback. They're using the word the way biologists use it to describe the way algae can just burst into existence, overwhelming a region of the ocean. Clever insights, brilliant bug fixes, and wonderful new features just appear out of nowhere as human curiosity is amplified by human generosity in a grand explosion of intellectual synergy. The only thing missing from the picture is a bunch of furry Ewoks dancing around a campfire. 8
8.Linux does have many marketing opportunities. Torvalds chose a penguin named Tux as the mascot, and several companies actually manufacture and sell stuffed penguins to the Linux realm. The BSD world has embraced a cute demon, a visual pun on the fact that BSD UNIX uses the word “daemon” to refer to some of the faceless background programs in the OS.
496:
Raymond pointed out that the free source world can do a great job with these nasty bugs. He captured this with the phrase, “Given enough eyeballs, all bugs are shallow,” which he named “Linus's Law” after Linus Torvalds. That is, eventually some programmer would start printing and using the Internet at the same time. After the system crashed a few times, some programmer would care enough about the problem to dig into the free source, poke around, and spot it. Eventually somebody would come along with the time, the energy, and the commitment to diagnose the problem. Raymond is a great admirer of Torvalds and thinks that Torvalds's true genius was organizing an army to work on Linux. The coding itself was a distant second.
503:
The comparison to software was simple. Corporations gathered the tithes, employed a central architect with a grand vision, managed the team of programmers, and shipped a product every once in a while. The Linux world, however, let everyone touch the Source. People would try to fix things or add new features. The best solutions would be adopted by others and the mediocre would fall by the wayside. Many different Linux versions would proliferate, but over time the marketplace of software would coalesce around the best standard version.
516:
Linus Torvalds changed his mind by increasing the speed of sharing, which Raymond characterized as the rule of “release early and often, delegate everything you can, be open to the point of promiscuity.” Torvalds ran Linux as openly as possible, and this eventually attracted some good contributors. In the past, the FSF was much more careful about what it embraced and brought into the GNU project. Torvalds took many things into his distributions and they mutated as often as daily. Occasionally, new versions came out twice a day.
523:
Raymond mixed this experience with his time watching Torvalds's team push the Linux kernel and used them as the basis for his essay on distributing the Source. “Mostly I was trying to pull some factors that I had observed as unconscious folklore so people could take them out and reason about them,” he said.
525:
There is a good empirical reason for the faith in the strength of free source. After all, a group of folks who rarely saw each other had assembled a great pile of source code that was kicking Microsoft's butt in some corners of the computer world. Linux servers were common on the Internet and growing more common every day. The desktop was waiting to be conquered. They had done this without stock options, without corporate jets, without secret contracts, and without potentially illegal alliances with computer manufacturers. The success of the software from the GNU and Linux world was really quite impressive.
531:
Part of this problem is the success of Raymond's metaphor. He said he just wanted to give the community some tools to understand the success of Linux and reason about it. But his two visions of a cathedral and a bazaar had such a clarity that people concentrated more on dividing the world into cathedrals and bazaars. In reality, there's a great deal of blending in between. The most efficient bazaars today are the suburban malls that have one management company building the site, leasing the stores, and creating a unified experience. Downtown shopping areas often failed because there was always one shop owner who could ruin an entire block by putting in a store that sold pornography. On the other side, religion has always been something of a bazaar. Martin Luther effectively split apart Christianity by introducing competition. Even within denominations, different parishes fight for the hearts and souls of people.
532:
The same blurring holds true for the world of open source software. The Linux kernel, for instance, contains many thousands of lines of source code. Some put the number at 500,000. A few talented folks like Alan Cox or Linus Torvalds know all of it, but most are only familiar with the corners of it that they need to know. These folks, who may number in the thousands, are far outnumbered by the millions who use the Linux OS daily.
535:
Second, no one really knows who reads the Linux source code, for the opposite reason. The GNU/Linux source is widely available and frequently downloaded, but that doesn't mean it's read or studied. The Red Hat distribution comes with one CD full of pre-compiled binaries and a second full of source code. Who knows whether anyone ever pops that second CD-ROM into their computer? Everyone is free to do so in the privacy of their own cubicle, so no records are kept.
536:
If I were to bet, I would guess that the ratios of cognoscenti to uninformed users in the Linux and Microsoft worlds are pretty close. Reading the Source just takes too much time and too much effort for many in the Linux world to take advantage of the huge river of information available to them.
583:
The average population, however, is aging quickly. As the software becomes better, it is easier for working stiffs to bring it into the corporate environments. Many folks brag about sneaking Linux into their office and replacing Microsoft on some hidden server. As more and more users find a way to make money with the free software, more and more older people (i.e., over 25) are able to devote some time to the revolution.
584:
I suppose I would like to report that there's a healthy contingent of women taking part in the free source world, but I can't. It would be nice to isolate the free software community from the criticism that usually finds any group of men. By some definition or legal reasoning, these guys must be practicing some de facto discrimination. Somebody will probably try to sue someone someday. Still, the women are scarce and it's impossible to use many of the standard explanations. The software is, after all, free. It runs well on machines that are several generations old and available from corporate scrap heaps for several hundred dollars. Torvalds started writing Linux because he couldn't afford a real version of UNIX. Lack of money or the parsimony of evil, gender-nasty parents who refuse to buy their daughters a computer can hardly be blamed.
587:
This may change in the future if organizations like LinuxChix (www.linuxchix.org) have their way. They run a site devoted to celebrating women who enjoy the open source world, and they've been trying to start up chapters around the world. The site gives members a chance to post their names and biographical details. Of course, several of the members are men and one is a man turning into a woman. The member writes, “I'm transsexual (male-to-female, pre-op), and at the moment still legally married to my wife, which means that if we stay together we'll eventually have a legal same-sex marriage.”
589:
Racial politics, however, are more complicated. Much of the Linux community is spread throughout the globe. While many members come from the United States, major contributors can be found in most countries. Linus Torvalds, of course, came from Finland, one of the more technically advanced countries in the world. Miguel de Icaza, the lead developer of the GNOME desktop, comes from Mexico, a country perceived as technically underdeveloped by many in the United States.
591:
In general, the free source revolution is worldwide and rarely encumbered by racial and national barricades. Europe is just as filled with Linux developers as America, and the Third World is rapidly skipping over costly Microsoft and into inexpensive Linux. Interest in Linux is booming in China and India. English is, of course, the default language, but other languages continue to live thanks to automatic translation mechanisms like Babelfish.
611:
When Linux began to take off, Torvalds moved to Silicon Valley and took a job with the supersecret research firm Transmeta. At Comdex in November 1999, Torvalds announced that Transmeta was working on a low-power computing chip with the nickname “Crusoe.”
612:
There are, of course, some conspiracy theories. Transmeta is funded by a number of big investors including Microsoft cofounder Paul Allen. The fact that they chose to employ Torvalds may be part of a plan, some think, to distract him from Linux development. After all, version 2.2 of the kernel took longer than many expected, although it may have been because its goals were too ambitious. When Microsoft needed a coherent threat to offer up to the Department of Justice, Transmeta courteously made Torvalds available to the world. Few seriously believe this theory, but it is constantly whispered as a nervous joke.
618:
This freedom also extended to programmers at work. In many companies, the computer managers are doctrinaire and officious. They often quickly develop knee-jerk reactions to technologies and use these stereotypes to make technical decisions. Free software like Linux was frequently rejected out of hand by the gatekeepers, who thought something must be wrong with the software if no one was charging for it. These attitudes couldn't stop the engineers who wanted to experiment with the free software, however, because it had no purchase order that needed approval.
625:
The people on the side of the BSD-style license, on the other hand, seem pragmatic, organized, and focused. There are three major free versions of BSD UNIX alone, and they're notable because they each have centrally administered collections of files. The GPL-protected Linux can be purchased from at least six major groups that bundle it together, and each of them includes packages and pieces of software they find all over the Net.
626:
The BSD-license folks are also less cultish. The big poster boys, Torvalds and Stallman, are both GPL men. The free versions of BSD, which helped give Linux much of its foundation, are largely ignored by the press for all the wrong reasons. The BSD teams appear to be fragmented because they are all separate political organizations who have no formal ties. There are many contributors, which means that BSD has no major charismatic leader with a story as compelling as that of Linus Torvalds.
629:
The Apache web server is protected by a BSD-style license that permits commercial reuse of the software without sharing the source code. It is a separate program, however, and many Linux users run the software on Linux boxes. Of course, this devotion to business and relatively quiet disposition isn't always true. Theo de Raadt, the leader of the OpenBSD faction, is fond of making bold proclamations. In his interview with me, he dismissed the Free Software Foundation as terribly misnamed because you weren't truly free to do whatever you wanted with the software.
631:
Someone might point out that Alan Cox, one of the steadfast keepers of the GPL-protected Linux kernels, is neither particularly flashy nor given to writing long manifestos on the Net. Others might say that Brian Behlendorf has been a great defender of the Apache project. He certainly hasn't avoided defending the BSD license, although not in the way that Stallman might have liked. He was, after all, one of the members of the Apache team who helped convince IBM that they could use the Apache web server without danger.
635:
The three BSD projects are well known for keeping control of all the source code for all the software in the distribution. They're very centrally managed and brag about keeping all the source code together in one build tree. The Linux distributions, on the other hand, include software from many different sources. Some include the KDE desktop. Others choose GNOME. Many include both.
637:
Some groups have become very effective marketing forces. Red Hat is a well-run company that has marketing teams selling people on upgrading their software as well as engineering teams with a job of writing improved code to include in future versions. Red Hat packages their distribution in boxes that are sold through normal sales channels like bookstores and catalogs. They have a big presence at trade shows like LinuxExpo, in part because they help organize them.
639:
In many cases, there is no clear spectrum defined between order and anarchy. The groups just have their own brands of order. OpenBSD brags about stopping security leaks and going two years without a root-level intrusion, but some of its artwork is a bit scruffy. Red Hat, on the other hand, has been carefully working to make Linux easy for everyone to use, but they're not as focused on security details.
641:
This disorder is changing a bit now that serious companies like Red Hat and VA Linux are entering the arena. These companies pay full-time programmers to ensure that their products are bug-free and easy to use. If their management does a good job, the open source software world may grow more ordered and actually anticipate more problems instead of waiting for the right person to come along with the time and the inclination to solve them.
650:
Most people quickly become keenly aware of this competition. Each of the different teams creating distributions flags theirs as the best, the most up-to-date, the easiest to install, and the most plush. The licenses mean that each group is free to grab stuff from the other, and this ensures that no one builds an unstoppable lead like Microsoft did in the proprietary OS world. Sure, Red Hat has a large chunk of the mindshare and people think their brand name is synonymous with Linux, but anyone can grab their latest distribution and start making improvements on it. It takes little time at all.
657:
But Stallman is right to distance himself from Soviet-style communism because there are few similarities. There's little central control in Stallman's empire. All Stallman can do to enforce the GNU General Public License is sue someone in court. He, like the Pope, has no great armies ready to keep people in line. None of the Linux companies have much power to force people to do anything. The GNU General Public License is like a vast disarmament treaty. Everyone is free to do what they want with the software, and there are no legal cudgels to stop them. The only way to violate the license is to publish the software and not release the source code.
658:
Many people who approach the free software world for the first time see only communism. Bob Metcalfe, an entrepreneur, has proved himself several times over by starting companies like 3Com and inventing the Ethernet. Yet he looked at the free software world and condemned it with a derisive essay entitled “Linux's '60s technology, open-sores ideology won't beat W2K, but what will?”
660:
The essay makes more confounding points, equating Richard Stallman to Karl Marx for his writing and Linus Torvalds to Vladimir Lenin because of his aim to dominate the software world with his OS. For grins, he compares Eric Raymond to “Trotsky waiting for The People's ice pick” for no clear reason. Before this gets out of hand, he backpedals a bit and claims, “OK, communism is too harsh on Linux. Lenin too harsh on Torvalds [sic].” Then he sets off comparing the world of open source to the tree-hugging, back-to-the-earth movement.
666:
“How about Linux as organic software grown in utopia by spiritualists?” he wonders. “If North America actually went back to the earth, close to 250 million people would die of starvation before you could say agribusiness. When they bring organic fruit to market, you pay extra for small apples with open sores--the Open Sores Movement.”
705:
But numbers like this can't really capture the depth of the gift. Linus Torvalds always likes to say that he started writing Linux because he couldn't afford a decent OS for his machine so he could do some experiments. Who knows how many kids, grown-ups, and even retired people are hacking Linux now and doing some sophisticated computer science experiments because they can? How do we count this beneficence?
711:
Free source code has none of these inefficiencies. Websites like Slashdot, Freshmeat, Linux Weekly News, LinuxWorld, KernelTraffic, and hundreds of other Linux or project-specific portals do a great job of moving the software to the people who can get value from it. People write the code and then other folks discover the value in it. Bad or unneeded code isn't foisted on anyone.
727:
The comparison does offer some insight into life in the free software community. Some conventions like LinuxExpo and the hundreds of install-fests are sort of like parties. One company at a LinuxExpo was serving beer in its booth to attract attention. Of course, Netscape celebrated its decision to launch the Mozilla project with a big party. They then threw another one at the project's first birthday.
728:
But the giving goes beyond the parties and the conferences. Giving great software packages creates social standing in much the same way that giving a lavish feast will establish you as a major member of the tribe. There is a sort of pecking order, and the coders of great systems like Perl or Linux are near the top. The folks at the top of the pyramid often have better luck calling on other programmers for help, making it possible for them to get their jobs done a little better. Many managers justify letting their employees contribute to the free software community because they build up a social network that they can tap to finish their official jobs.
730:
The free source world, on the other hand, is a big free-for-all in both senses of the phrase. The code circulates for everyone to grab, and only those who need it dig in. There's no great connection between programmer and user. People grab software and take it without really knowing to whom they owe any debt. I only know a few of the big names who wrote the code running the Linux box on my desk, and I know that there are thousands of people who also contributed. It would be impossible for me to pay back any of these people because it's hard to keep them straight.
736:
Of course, there's also a certain element of selfishness to the charity. The social prestige that comes from writing good free software is worth a fair amount in the job market. People like to list accomplishments like “wrote driver” or “contributed code to Linux Kernel 2.2” on their résumé. Giving to the right project is a badge of honor because serious folks doing serious work embraced the gift. That's often more valuable and more telling than a plaque or an award from a traditional boss.
738:
Newberry is also a Linux fan. He reads the Kernel list but rarely contributes much to it. He runs various versions of Linux around the house, and none of them were working as well as he wanted with his Macintosh. So he poked around in the software, fixed it, and sent his code off to Alan Cox, who watches over the part of the kernel where his fixes belonged.
739:
“I contributed some changes to the Appletalk stack that's in the Linux Kernel that make it easier for a Linux machine to offer dial-in services for Macintosh users,” he said in an article published in Salon. “As it stands, Mac users have always been able to dial into a Linux box and use IP protocols, but if they wanted to use Appletalk over PPP, the support wasn't really there.”
742:
Of course, all of this justification and rationalization isn't the main reason why Newberry spends so much of his time hacking on Linux. Sure, it may help his company's bottom line. Sure, it might beef up his résumé by letting him brag that he got some code in the Linux kernel. But he also sees this as a bit of charity.
743:
“I get a certain amount of satisfaction from the work . . . but I get a certain amount of satisfaction out of helping people. Improving Linux and especially its integration with Macs has been a pet project of mine for some time,” he says. Still, he sums up his real motivation by saying, “I write software because I just love doing it.” Perhaps we're just lucky that so many people love writing open source software and giving it away.
745:
It's not hard to find bad stories about people who write good code. One person at a Linux conference told me, “The strange thing about Linus Torvalds is that he hasn't really offended anyone yet. All of the other leaders have managed to piss off someone at one time or another. It's hard to find someone who isn't hated by someone else.” While he meant it as a compliment for Torvalds, he sounded as if he wouldn't be surprised if Torvalds did a snotty, selfish, petulant thing. It would just be par for the course.
749:
Occasionally, the fights get interesting. Eric Raymond and Bruce Perens are both great contributors to the open source movement. In fact, both worked together to try to define the meaning of the term. Perens worked with the community that creates the Debian distribution of Linux to come up with a definition of what was acceptable for the community. This definition morphed into a more official version used by the Open Source Initiative. When they got a definition they liked, they published it and tried to trademark the term “open source” in order to make sure it was applied with some consistency. It should be no surprise that all of that hard work pushed them further apart.
777:
The free software world, of course, removes these barriers. If the Hotmail folks had joined the Linux team instead of Microsoft, they would be free to do whatever they wanted with their website even if it annoyed Linus Torvalds, Richard Stallman, and the pope. They wouldn't be rich, but there's always a price.
780:
This love also has a more traditional effect on the hackers who create the free source code. They do it because they love what they're doing. Many of the people in the free source movement are motivated by writing great software, and they judge their success by the recognition they get from equally talented peers. A “nice job” from the right person--like Richard Stallman, Alan Cox, or Linus Torvalds--can be worth more than $100,000 for some folks. It's a strange way to keep score, but for most of the programmers in the free source world it's more of a challenge than money. Any schmoe in Silicon Valley can make a couple of million dollars, but only a few select folks can rewrite the network interface code of the Linux kernel to improve the throughput of the Apache server by 20 percent.
796:
And of course there are thousands of free software projects that are going to get left behind hanging out at the same old pizza joint. There were always going to be thousands left behind. People get excited about new projects, better protocols, and neater code all the time. The old code just sort of withers away. Occasionally someone rediscovers it, but it is usually just forgotten and superseded. But this natural evolution wasn't painful until the successful projects started ending up on the covers of magazines and generating million-dollar deals with venture capitalists. People will always be wondering why their project isn't as big as Linux.
825:
This split is already growing. Red Hat Software employs some of the major Linux contributors like Alan Cox. They get a salary while the rest of the contributors get nothing. Sun, Apple, and IBM employees get salaries, but folks who work on Apache or the open versions of BSD get nothing but the opportunity to hack cool code.
832:
Jeff Bates, an editor at Slashdot, says that Mozilla may have suffered because Netscape was so successful. The Netscape browser was already available for free for Linux. “There wasn't a big itch to scratch,” he says. “We already had Netscape, which was fine for most people. This project interested a smaller group than if we'd not had Netscape--hence why it didn't get as much attention.”
835:
In most cases, the flow is not particularly novel. The companies just choose FreeBSD or some version of Linux for their machines like any normal human being. Many web companies use a free OS like Linux or FreeBSD because they're both cheap and reliable. This is going to grow much more common as companies realize they can save a substantial amount of money over buying seat licenses from companies like Microsoft.
851:
What happens if a bug emerges in some version of the Linux kernel and it makes it into several distributions? It's not really the fault of the distribution creators, because they were just shipping the latest version of the kernel. And it's not really the kernel creators' fault, because they weren't marketing the kernel as ready for everyone to run. They were just floating some cool software on the Net for free. Who's responsible for the bug? Who gets sued?
866:
The free OS also puts Intel's lion's share up for grabs. Linux runs well on Intel chips, but it also runs on chips made by IBM, Motorola, Compaq, and many others. The NetBSD team loves to brag that its software runs on almost all platforms available and is dedicated to porting it to as many as possible. Someone using Linux or NetBSD doesn't care who made the chip inside because the OS behaves similarly on all of them.
868:
This threat shows that the emergence of the free OSs ensures that hardware companies will also face increased competitive pressure. Sure, they may be able to get Microsoft off their back, but Linux may make things a bit worse.
916:
He has a point. Linux is a lot of fun to play with and it is now a very stable OS, but it took a fair number of years to get to this point. Many folks in the free source world like to say things like, “It used to be that the most fun in Linux was just getting it to work.” Companies like Morgan Stanley, Schwab, American Airlines, and most others live and die on the quality of their computer systems. They're quite willing to pay money if it helps ensure that things don't go wrong.
928:
Red Hat has managed to sell enough CD-ROM disks to fund the development of new projects. They've created a good selection of installation tools that make it relatively easy for people to use Linux. They also help pay salaries for people like Alan Cox who contribute a great deal to the evolution of the kernel. They do all of this while others are free to copy their distribution disks verbatim.
929:
McVoy doesn't argue with these facts, but feels that they're just a temporary occurrence. The huge growth of interest in Linux means that many new folks are exploring the operating system. There's a great demand for the hand-holding and packaging that Red Hat offers. In time, though, everyone will figure out how to use the product and the revenue stream should disappear as competition drives out the ability to charge $50 for each disk.
949:
CoSource says that it will try to put together the bounties of many small groups and allow people to pay them with credit cards. It uses the example of a group of Linux developers who would gather together to fund the creation of an open source version of their favorite game. They would each chip in $10, $20, or $50 and when the pot got big enough, someone would step forward. Creating a cohesive political group that could effectively offer a large bounty is a great job for these sites.
961:
On the other hand, forking can hurt the community by duplicating efforts, splitting alliances, and sowing confusion in the minds of users. If Bob starts writing and publishing his own version of Linux out of his house, then he's taking some energy away from the main version. People start wondering if the version they're running is the Missouri Synod version of Emacs or the Christian Baptist version. Where do they send bug fixes? Who's in charge? Distribution groups like Debian or Red Hat have to spend a few moments trying to decide whether they want to include one version or the other. If they include both, they have to choose one as the default. Sometimes they just throw up their hands and forget about both. It's a civil war, and those are always worse than a plain old war.
979:
Of course, good software can have anti-forking effects. Linus Torvalds said in one interview, “Actually, I have never even checked 386BSD out; when I started on Linux it wasn't available (although Bill Jolitz's series on it in Dr. Dobbs Journal had started and were interesting), and when 386BSD finally came out, Linux was already in a state where it was so usable that I never really thought about switching. If 386BSD had been available when I started on Linux, Linux would probably never have happened.” So if 386BSD had been easier to find on the Net and better supported, Linux might never have begun.
1026:
While the three forks of BSD may cooperate more than they compete, the Linux world still likes to look at the BSD world with a bit of contempt. All of the forks look somewhat messy, even if having the freedom to fork is what Stallman and GNU are ostensibly fighting to achieve. The Linux enthusiasts seem to think, “We've got our ducks in a single row. What's your problem?” It's sort of like the Army mentality. If it's green, uniform, and the same everywhere, then it must be good.
1027:
The BSD world lacks the monomaniacal cohesion of Linux, and this seems to hurt its image. The BSD community has always felt that Linux is stealing the limelight that should be shared at least equally between the groups. Linux is really built around a cult of Linus Torvalds, and that makes great press. It's very easy for the press to take photos of one man and put him on the cover of a magazine. It's simple, clean, neat, and perfectly amenable to a 30-second sound bite. Explaining that there's FreeBSD, NetBSD, OpenBSD, and who knows what smaller versions waiting in the wings just isn't as manageable.
1028:
Eric Raymond, a true disciple of Linus Torvalds and Linux, sees it in technical terms. The BSD community is proud of the fact that each distribution is built out of one big source tree. They get all the source code for all the parts of the kernel, the utilities, the editors, and whatnot together in one place. Then they push the compile button and let people work. This is a crisp, effective, well-managed approach to the project.
1029:
The Linux groups, however, are not that coordinated at all. Torvalds only really worries about the kernel, which is his baby. Someone else worries about GCC. Everyone comes up with their own source trees for the parts. The distribution companies like Red Hat worry about gluing the mess together. It's not unusual to find version 2.0 of the kernel in one distribution while another is sporting version 2.2.
1030:
“In BSD, you can do a unified make. They're fairly proud of that,” says Raymond. “But this creates rigidities that give people incentives to fork. The BSD things that are built that way develop new spin-off groups each week, while Linux, which is more loosely coupled, doesn't fork.”
1032:
But this distinction may be semantic. Forking does occur in the Linux realm, but it happens as small diversions that get explained away with other words. Red Hat may choose to use GNOME, while another distribution like SuSE might choose KDE. The users will see a big difference because both tools create virtual desktop environments. You can't miss them. But people won't label this a fork. Both distributions are using the same Linux kernel and no one has gone off and said, “To hell with Linus, I'm going to build my own version of Linux.” Everyone's technically still calling themselves Linux, even if they're building something that looks fairly different on the surface.
1033:
Jason Wright, one of the developers on the OpenBSD team, sees the organization as a good thing. “The one thing that all of the BSDs have over Linux is a unified source tree. We don't have Joe Blow's tree or Bob's tree,” he says. In other words, when they fork, they do it officially, with great ceremony, and make sure the world knows of their separate creations. They make a clear break, and this makes it easier for developers.
1034:
Wright says that this single source tree made it much easier for them to turn OpenBSD into a very secure OS. “We've got the security over Linux. They've recently been doing a security audit for Linux, but they're going to have a lot more trouble. There's not one place to go for the source code.”
1035:
To extend this to political terms, the Linux world is like the 1980s when Ronald Reagan ran the Republican party with the maxim that no one should ever criticize another Republican. Sure, people argued internally about taxes, abortion, crime, and the usual controversies, but they displayed a rare public cohesion. No one criticizes Torvalds, and everyone is careful to pay lip service to the importance of Linux cohesion even as they're essentially forking by choosing different packages.
1037:
John Gilmore, one of the founders of the free software company Cygnus and a firm believer in the advantages of the GNU General Public License, says, “In Linux, each package has a maintainer, and patches from all distributions go back through that maintainer. There is a sense of cohesion. People at each distribution work to reduce their differences from the version released by the maintainer. In the BSD world, each tree thinks they own each program--they don't send changes back to a central place because that violates the ego model.”
1038:
Jordan Hubbard, the leader of FreeBSD, is critical of Raymond's characterization of the BSD world. “I've always had a special place in my heart for that paper because he painted positions that didn't exist,” Hubbard said of Raymond's piece “The Cathedral and the Bazaar.” “You could point to just the Linux community and decide which part was cathedral-oriented and which part was bazaar-oriented.”
1040:
When it comes right down to it, there's even plenty of forking going on about the definition of a fork. When some of the Linux team point at the BSD world and start making fun of the forks, the BSD team gets defensive. The BSD guys always get defensive because their founder isn't on the cover of all the magazines. The Linux team hints that maybe, if they weren't forking, they would have someone with a name in lights, too.
1041:
Hubbard is right. Linux forks just as much; they just call it a distribution, an experimental kernel, or a patch kit. No one has the chutzpah to spin off their own rival political organization. No one has the political clout.
1084:
The most prevalent form of government in these communities is the benign dictatorship. Richard Stallman wrote some of the most important code in the GNU pantheon, and he continues to write new code and help maintain the old software. The world of the Linux kernel is dominated by Linus Torvalds. The original founders always seem to hold a strong sway over the group. Most of the code in the Linux kernel is written by others and checked out by a tight circle of friends, but Torvalds still has the final word on many changes.
1085:
The two of them are, of course, benign dictators, and the two of them don't really have any other choice. Both have a seemingly absolute amount of power, but this power is based on a mixture of personal affection and technical respect. There are no legal bounds that keep all of the developers in line. There are no rules about intellectual property or non-disclosure. Anyone can grab all of the Linux kernel or GNU source code, run off, and start making whatever changes they want. They could rename it FU, Bobux, Fredux, or Meganux and no one could stop them. The old threats of lawyers, guns, and money aren't anywhere to be seen.
1087:
The Debian group has a wonderful pedigree and many praise it as the purest version of Linux around, but it began as a bunch of outlaws who cried mutiny and tossed Richard Stallman overboard. Well, it wasn't really so dramatic. In fact, “mutiny” isn't really the right word when everyone is free to use the source code however they want.
1096:
This army is a diverse bunch. At a recent Linux conference, Jeff Bates, one of the editors of the influential website Slashdot (www.slashdot.org), pointed me toward the Debian booth, which was next to theirs. “If you look in the booth, you can see that map. They put a pushpin in the board for every developer and project leader they have around the world. China, Netherlands, Somalia, there are people coming from all over.”
1100:
Lewis-Moss's job isn't exactly programming, but it's close. He has to download the source code, compile the program, run it, and make sure that the latest version of the source works correctly with the latest version of the Linux kernel and the other parts of the OS that keep a system running. The packager must also ensure that the program works well with the Debian-specific tools that make installation easier. If there are obvious bugs, he'll fix them himself. Otherwise, he'll work with the author on tracking down and fixing the problems.
1104:
The Linux development effort moves slowly forward with thousands of stories like Lewis-Moss's. Folks come along, check out the code, and toss in a few contributions that make it a bit better for themselves. The mailing list debates some of the changes if they're controversial or if they'll affect many people. It's a very efficient system in many ways, if you can stand the heat of the debates.
1108:
While the mailing list looks like an idealized notion of a congress for the Linux kernel development, it is not as perfect as it may seem. Not all comments are taken equally because friendships and political alliances have evolved through time. The Debian group elected a president to make crucial decisions that can't be made by deep argument and consensus. Beyond that, the president doesn't have many other powers.
1109:
While the Linux and GNU worlds are dominated by their one great Sun King, many other open source projects have adopted a more modern government structure that is more like Debian. The groups are still fairly ad hoc and unofficial, but they are more democratic. There's less idolatry and less dependence on one person.
1127:
This seriousness and corporatization are probably the only possible steps that the Apache group could take. They've always been devoted to advancing the members' interests. Many of the other open source projects like Linux were hobbies that became serious. The Apache project was always filled with people who were in the business of building the web. While some might miss the small-town kind of feel of the early years, the corporate structure is bringing more certainty and predictability to the realm. The people don't have to wear suits now that it's a corporation. It just ensures that tough decisions will be made at a predictable pace.
1131:
The next induction ceremony for this pantheon should include Robert Young, the CEO of Red Hat Software, who helped the Linux and the open source world immeasurably by finding a way to charge people for something they could get for free. This discovery made the man rich, which isn't exactly what the free software world is supposed to do. But his company also contributed a sense of stability and certainty to the Linux marketplace, and that was sorely needed. Many hard-core programmers, who know enough to get all of the software for free, are willing to pay $70 to Red Hat just because it is easier. While some may be forever jealous of the millions of dollars in Young's pocket, everyone should realize that bringing Linux to a larger world of computer illiterates requires good packaging and hand-holding. Free software wouldn't be anywhere if someone couldn't find a good way to charge for it.
1132:
The best way to understand why Young ranks with the folks who discovered how to sell sugar water is to go to a conference like LinuxExpo. In the center of the floor is the booth manned by Red Hat Software, the company Young started in Raleigh, North Carolina, after he got through working in the computer-leasing business. Young is in his fifties now and manages to survive despite the fact that most of his company's devotees are much closer to 13. Red Hat bundles together some of the free software made by the community and distributed over the Net and puts it on one relatively easy-to-use CD-ROM. Anyone who wants to install Linux or some of its packages can simply buy a disk from Red Hat and push a bunch of keys. All of the information is on one CD-ROM, and it's relatively tested and pretty much ready to go. If things go wrong, Red Hat promises to answer questions by e-mail or telephone to help people get the product working.
1137:
To make matters worse for Red Hat, the potential competitors don't have to go out onto the Net and reassemble the collection of software for themselves. The GPL specifically forbids people from placing limitations on redistributing the source code. That means that a potential competitor doesn't have to do much more than buy a copy of Red Hat's disk and send it off to the CD-ROM pressing plant. People do this all the time. One company at the exposition was selling copies of all the major Linux distributions like Red Hat, Slackware, and OpenBSD for about $3 per disk. If you bought in bulk, you could get 11 disks for $25. Not a bad deal if you're a consumer.
1146:
Red Hat also added a custom installation utility to make life easier for people who want to add Red Hat to their computer.[12] They could have made this package installation tool proprietary. After all, Red Hat programmers wrote the tool on company time. But Young released it with the GNU General Public License, recognizing that the political value of giving something back was worth much more than the price they could charge for the tool.
[12] Er, I mean to say “add Linux” or “add GNU/Linux.” “Red Hat” is now one of the synonyms for free software.
1147:
This is part of a deliberate political strategy to build goodwill among the programmers who distribute their software. Many Linux users compare the different companies putting together free source software CD-ROMs and test their commitment to the free software ideals. Debian, for instance, is very popular because it is a largely volunteer project that is careful to only include certified free source software on their CD-ROMs. Debian, however, isn't run like a business and it doesn't have the same attitude. This volunteer effort and enlightened pursuit of the essence of free software make the Debian distribution popular among the purists.
1148:
Distributors like Caldera, on the other hand, include nonfree software with their disk. You pay $29.95 to $149.95 for a CD-ROM and get some nonfree software like a word processor tossed in as a bonus. This is a great deal if you're only going to install the software once, but the copyright on the nonfree software prevents you from distributing the CD-ROM to friends. Caldera is hoping that the extras it throws in will steer people toward its disk and get them to choose Caldera's version of Linux. Many of the purists, like Richard Stallman, hate this practice and think it is just a not very subtle way to privatize the free software. If the average user isn't free to redistribute all the code, then there's something evil afoot. Of course, Stallman or any of the other software authors can't do anything about this because they made their software freely distributable.
1150:
Several companies are already making PCs with Linux software installed at the factory. While they could simply download the software from the Net themselves and create their own package, several have chosen to bundle Red Hat's version with their machines. Sam Ockman, the president of Penguin Computing, runs one of those companies.
1151:
Ockman is a recent Stanford graduate in his early twenties and a strong devotee of the Linux and GPL world. He says he started his company to prove that Linux could deliver solid, dependable servers that could compete with the best that Sun and Microsoft have to offer.
1152:
Ockman has mixed feelings about life at Stanford. While he fondly remembers the “golf course-like campus,” he says the classes were too easy. He graduated with two majors despite spending plenty of time playing around with the Linux kernel. He says that the computer science department's hobbled curriculum drove him to Linux. “Their whole CS community is using a stupid compiler for C on the Macintosh,” he says. “Why don't they start you off on Linux? By the time you get to [course] 248, you could hack on the Linux kernel or your own replacement kernel. It's just a tragedy that you're sitting there writing virtual kernels on a Sun system that you're not allowed to reboot.”
1154:
When Ockman had to choose a version of Linux for his Penguin computers, he chose Red Hat. Bob Young's company made the sale because it was playing by the rules of the game and giving software back with a GPL. Ockman says, “We actually buy the box set for every single one. Partially because the customers like to get the books, but also to support Red Hat. That's also why we picked Red Hat. They're the most free of all of the distributions.”
1155:
Debian, Ockman concedes, is also very free and politically interesting, but says that his company is too small to support multiple distributions. “We only do Red Hat. That was a very strategic decision on our part. All of the distributions are pretty much the same, but there are slight differences in this and that. We could have a twelve-person Debian group, but it would just be a nightmare for us to support all of these different versions of Linux.”
1159:
At the LinuxExpo, Red Hat was selling T-shirts, too. One slick number retailing for $19 just said “The Revolution of Choice” in Red Hat's signature old typewriter font. Others for sale at the company's site routinely run for $15 or more. They sucked me in. When I ordered my first Red Hat disk from them, I bought an extra T-shirt to go with the mix.
1161:
Many of the other groups are part of the game. The OpenBSD project sold out of their very fashionable T-shirts with wireframe versions of its little daemon logo soon after the beginning of the LinuxExpo. They continue to sell more T-shirts from their website. Users can also buy CD-ROMs from OpenBSD.
1163:
The most expensive T-shirt at the show came with a logo that imitated one of the early marketing images of the first Star Wars movie. The shirt showed Torvalds and Stallman instead of Han Solo and Luke Skywalker under a banner headline of “OS Wars.” The shirt cost only $100, but “came with free admission to the upcoming Linux convention in Atlanta.”
1167:
Ockman looks at this market competition for T-shirts and sees a genius. He says, “I think Bob Young's absolutely brilliant. Suddenly he realized that there's no future in leasing mainframes. He made a jump after finding college kids in Carolina [using Linux]. For him to make that jump is just amazing. He's a marketing guy. He sat down and figured it out.”
1172:
Young's plan to brand the OS with a veneer of cool produced more success than anyone could imagine. Red Hat is by far the market leader in providing Linux to the masses, despite the fact that many can and do “steal” a low-cost version. Of course, “steal” isn't the right word, because Red Hat did the same thing. “Borrow” isn't right, “grab” is a bit casual, and “join in everlasting communion with the great free software continuum” is just too enthusiastic to be cool.
1186:
The GPL is a powerful force that prevents Red Hat from making many unilateral decisions. There are plenty of distributions that would like to take over the mantle of the most popular version of Linux. It's not hard. The source code is all there.
1189:
There are parts of this conspiracy theory that are already true. Red Hat does dominate the United States market for Linux and it controls a great deal of the mindshare. Their careful growth supported by an influx of cash ensured a strong position in the marketplace.
1194:
Can they squeeze their partners by charging different rates for Linux? Microsoft is known to offer lower Windows prices to their friends. This is unlikely. Anyone can just buy a single Red Hat CD-ROM from a duplicator like CheapBytes. This power play won't work.
1196:
Can they force people to pay a “Red Hat tax” just to upgrade to the latest software? Not likely. Red Hat is going to be a service company, and they're going to compete on having the best service for their customers. Their real competitor will be companies that sell support contracts like LinuxCare. Service industries are hard work. Every customer needs perfect care or they'll go somewhere else next time. Red Hat's honeymoon with the IPO cash will only last so long. Eventually, they're going to have to earn the money to get a return on the investment. They're going to be answering a lot of phone calls and e-mails.
1212:
Still, most users, including the best programmers, end up paying a company like Red Hat, Caldera, or a group like OpenBSD to do some of the basic research in building a Linux system. All of the distribution companies charge for a copy of their software and throw in some support. While the software is technically free, you pay for help to get it to work.
1214:
Of course, the cost of this is debatable. Tivo, for instance, is a company that makes a set-top box for recording television content on an internal hard disk. The average user just sees a fancy, easy-to-use front end, but underneath, the entire system runs on the Linux operating system. Tivo released a copy of the stripped-down version of Linux it ships on its machines on its website, fulfilling its obligation to the GNU GPL. The only problem I've discovered is that the web page (www.tivo.com/linux/) is not particularly easy to find from the home page. If I hadn't known it was there, I wouldn't have found it.
1237:
Free source folks are just as free to share ideas. Many of the rival Linux and BSD distributions often borrow code from each other. While they compete for the hearts and minds of buyers, they're forced by the free source rules to share the code. If someone writes one device driver for one platform, it is quickly modified for another.
1257:
The Linux movement isn't really about nations and it's not really about war in the old-fashioned sense. It's about nerds building software and letting other nerds see how cool their code is. It's about empowering the world of programmers and cutting out the corporate suits. It's about spending all night coding on wonderful, magnificent software with massive colonnades, endless plazas, big brass bells, and huge steam whistles without asking a boss “Mother, may I?” It's very individualistic and peaceful.
1258:
That stirring romantic vision may be moving the boys in the trenches, but the side effects are beginning to be felt in the world of global politics. Every time Linux, FreeBSD, or OpenBSD is installed, several dollars don't go flowing to Seattle. There's a little bit less available for the Microsoft crowd to spend on mega-mansions, SUVs, and local taxes. The local library, the local police force, and the local schools are going to have a bit less local wealth to tax. In essence, the Linux boys are sacking Seattle without getting out of their chairs or breaking a sweat. You won't see this battle retold on those cable channels that traffic in war documentaries, but it's unfolding as we speak.
1294:
The difference in treatment probably did not result from any secret love for Linux or OpenBSD lurking in the hearts of the regulators in the Bureau of Export Affairs at the Department of Commerce. The regulators are probably more afraid of losing a lawsuit brought by Daniel Bernstein. In the latest decision released in May 1999, two out of three judges on an appeals panel concluded that the U.S. government's encryption regulations violated Bernstein's rights of free speech. The government argued that source code is a device, not speech. The case is currently being appealed. The new regulations seem targeted to specifically address the problems the court found with the current regulations.
1309:
Most folks in the free source world may not have big bank accounts. Those are just numbers in a computer anyway, and everyone who can program knows how easy it is to fill a computer with numbers. But the free source world has good software and the source code that goes along with it. How many times a day must Bill Gates look at the blue screen of death that splashes across a Windows computer monitor when the Windows software crashes? How many times does Torvalds watch Linux crash? Who's better off? Who's wealthier?
1313:
There's no question that people like Stallman love life with source code. A deeper question is whether the free source realm offers a wealthier lifestyle for the average computer user. Most people aren't programmers, and most programmers aren't even the hard-core hackers who love to fiddle with the UNIX kernel. I've rarely used the source code to Linux, Emacs, or any of the neat tools on the Net, and many times I've simply recompiled the source code without looking at it. Is this community still a better deal?
1315:
Some grouse that comparing features like this isn't fair to the Mac or Windows world. The GNOME toolkit, they point out, didn't come out of years of research and development. The start button and the toolbar look the same because the GNOME developers were merely copying. The GNU/Linux world didn't create its own OS; it merely cloned all of the hard commercial research that produced UNIX. It's always easier to catch up, but pulling ahead is hard. The folks who want to stay on the cutting edge need to be in the commercial world. It's easy to come up with a list of commercial products and tools that haven't been cloned by an open source dude at the time of this writing: streaming video, vector animation, the full Java API, speech recognition, three-dimensional CAD programs, speech synthesis, and so forth. The list goes on and on. The hottest innovations will always come from well-capitalized start-ups driven by the carrot of wealth.
1320:
Most Linux users don't need to rewrite the source, but they can still benefit from the freedom. If everyone has the freedom, then someone with the ability to rewrite the code will come along, and if the problem is big enough, someone probably will. In other words, only one person has to fly the X-wing fighter down the trench and blow up the Death Star.
1322:
Which is a better world? A polished Disneyland where every action is scripted, or a pile of Lego blocks waiting for us to give them form? Do we want to be entertained or do we want to interact? Many free software folks would point out that free software doesn't preclude you from settling into the bosom of some corporation for a long winter's nap. Companies like Caldera and Linuxcare are quite willing to hold your hand and give you the source code. Many other corporations are coming around to the same notion. Netscape led the way, and many companies like Apple and Sun will follow along. Microsoft may even do the same thing by the time you read this.
1340:
While Stallman didn't have monetary capital, he did have plenty of intellectual capital. By 1991, his GNU project had built many well-respected tools that were among the best in their class. Torvalds had a great example of what the GPL could do before he chose to protect his Linux kernel with the license. He also had a great set of tools that the GNU project created.
1342:
Stallman's reputation also can be worth more than money when it opens the right doors. He continues to be blessed by the implicit support of MIT, and many young programmers are proud to contribute their work to his projects. It's a badge of honor to be associated with either Linux or the Free Software Foundation. Programmers often list these details on their résumés, and the facts have weight.
1346:
Of course, companies like Red Hat lie in a middle ground. The company charges money for support and plows this money back into improving the product. It pays several engineers to devote their time to improving the entire Linux product. It markets its work well and is able to charge a premium for what people are able to get for free.
1360:
He's done his part. The open source movement thrives on the GCC compiler, and Cygnus managed to find a way to make money on the process of keeping the compiler up to date. The free operating systems like Linux or FreeBSD are great alternatives for people today. They're small, fast, and very stable, unlike the best offerings of Microsoft or Apple. If the open software movement continues to succeed and grow, his child could grow up into a world where the blue screen of death that terrorizes Microsoft users is as foreign to them as manual typewriters.
1375:
One group that is locked out of the fray is the Linux community. While software for playing DVD movies exists for Macintoshes and PCs, there's none for Linux. DeCSS should not be seen as a hacker's tool, but merely a device that allows Linux users to watch the legitimate copies of the DVDs that they bought. Locking out Linux is like locking in Apple and Microsoft.
1376:
The battle between the motion picture community and the Linux world is just heating up as I write this. There will be more lawsuits and perhaps more jail time ahead for the developers who produced DeCSS and the people who shared it through their websites.
1377:
Most of the battles are not so dramatic. They're largely technical, and the free source world should win these easily. Open source solutions haven't had the same sophisticated graphical interface as Apple or Windows products. Most of the programmers who enjoy Linux or the various versions of BSD don't need the graphical interface and may not care about it. The good news is that projects like KDE and GNOME are great tools already. The open source world must continue to tackle this area and fight to produce something that the average guy can use.
1379:
Microsoft's greatest asset is the installed base of Windows, and it will try to use this to the best of its ability to defeat Linux. At this writing, Microsoft is rolling out a new version of its Domain Name System (DNS) server, which acts like a telephone book for the Internet. In the past, many of the DNS machines were UNIX boxes because UNIX helped define the Internet. Windows 2000 includes new extensions to DNS that practically force offices to switch over to Windows machines to run DNS. Windows 2000 just won't work as well with an old Linux or UNIX box running DNS.
1380:
This is a typical strategy for Microsoft and one that is difficult, but not impossible, for open source projects to thwart. If the cost of these new servers is great enough, some group of managers is going to create its own open source clone of the modified DNS server. This has happened time and time again, but not always with great success. Linux boxes come with Samba, a program that lets Linux machines act as file servers. It works well and is widely used. Another project, WINE, started with the grand design of cloning all of the much more complicated Windows API used by programmers. It is a wonderful project, but it is far from finished. The size and complexity make a big difference.
1388:
The first cracks are already obvious. Microsoft lost the server market to Apache and Linux on the basis of price and performance. Web server managers are educated computer users who can make their own decisions without having to worry about the need to train others. Hidden computers like this are easy targets, and the free software world will gobble many of them up. More users mean more bug fixes and propagations of better code.
1392:
Of course, free software really isn't free. A variety of companies offering Linux support need to charge something to pay their bills. Distributions like Red Hat or FreeBSD may not cost much, but they often need some customization and hand-holding. Is a business just trading one bill for another? Won't Linux support end up costing the same thing as Microsoft's product?
1395:
Of course, there are also hard numbers. An article in Wired by Andrew Leonard comes with numbers originally developed by the Gartner Group. A 25-person office would cost $21,453 to outfit with Microsoft products and $5,544.70 to outfit with Linux. This estimate is a bit conservative. Most of the Linux cost is debatable because it includes almost $3,000 for 10 service calls to a Linux consultant and about $2,500 for Applixware, an office suite that does much of the same job as Microsoft Office. A truly cheap and technically hip office could make do with the editor built into Netscape and one of the free spreadsheets available for Linux. It's not hard to imagine someone doing the same job for about $3, which is the cost of a cheap knockoff of Red Hat's latest distribution.
1396:
Of course, it's important to realize that free software still costs money to support. But so does Microsoft's. The proprietary software companies also charge to answer questions and provide reliable information. It's not clear that Linux support is any more expensive to offer.
1397:
Also, many offices large and small keep computer technicians on hand. There's no reason to believe that Linux technicians will be any more or less expensive than Microsoft technicians. Both answer questions. Both keep the systems running. At least the Linux tech can look at the source code.
1399:
These users will be the most loyal to Microsoft because they will find it harder than anyone else to move. They can't afford to hire their own Linux gurus to redo the office, and they don't have the time to teach themselves.
1400:
These are the main weaknesses for Microsoft, and the company is already taking them seriously. I think many underestimate how bloody the battle is about to become. If free source software is able to stop and even reverse revenue growth for Microsoft, there are going to be some very rich people with deep pockets who feel threatened. Microsoft is probably going to turn to the same legal system that gave it such grief and find some wedge to drive into the Linux community. Its biggest weapons will be patents and copyright, used to stop the cloners.
1404:
Suddenly, brands like Hewlett-Packard or IBM can mean something when they're slapped on a PC. Any goofball in a garage can put a circuit board in a box and slap on Microsoft Windows. A big company like HP or IBM could do extra work to make sure the Linux distribution on the box worked well with the components and provided a glitch-free existence for the user.
1408:
Despite these gifts, free software will continue to grow on the campuses. Students often have little cash and Microsoft doesn't get any great tax deduction by giving gifts to individual students (that's income). The smartest kids in the dorms will continue to run Linux. Many labs do cutting-edge work that requires customized software. These groups will naturally be attracted to free source code because it makes their life easier. It will be difficult for Microsoft to counteract the very real attraction of free software.
1412:
If things go perfectly for Microsoft, the company will be able to pull out one or two patents from its huge portfolio and use these to sue Red Hat, Walnut Creek, and a few of the other major distributors. Ideally, this patent would cover some crucial part of the Linux or BSD operating system. After the first few legal bills started arriving on the desk of the Red Hat or Walnut Creek CEO, the companies would have to settle by quitting the business. Eventually, all of the distributors of Linux would crumble and return to the small camps in the hills to lick their wounds. At least, that's probably the dream of some of Microsoft's greatest legal soldiers.
1413:
This maneuver is far from a lock for Microsoft because the free software world has a number of good defenses. The first is that the Linux and BSD worlds do a good job of publicizing their advances. Any patent holder must file the patent before someone else publishes the ideas. The Linux discussion groups and source distributions are a pretty good public forum. The ideas and patches often circulate publicly long before they make their way into a stable version of the kernel. That means that the patent holders will need to be much farther ahead than the free software world.
1414:
Linux and the free software world are often the cradle of new ideas. University students use open source software all the time. It's much easier to do way cool things if you've got access to the source. Sure, Microsoft has some smart researchers with great funding, but can they compete with all the students?
1416:
The second defense is adaptability. The free software distributions can simply strip out the offending code. The Linux and BSD disks are very modular because they come from a variety of different sources. The different layers and tools come from different authors, so they are not highly integrated. This makes it possible to remove one part without ruining the entire system.
1418:
It will be pretty difficult for a company like Microsoft to find a patent that will allow it to deal a fatal blow to either the Linux or BSD distributions. The groups will just clip out the offending code and then work around it.
1419:
Microsoft's greatest hope is to lock up the next generation of computing with patents. New technologies like streaming multimedia or Internet audio are still up for grabs. While people have been studying these topics in universities for some time, the Linux community is further behind. Microsoft will try to dominate these areas with crucial patents that affect how operating systems deal with this kind of data. Their success at this is hard to predict. In any event, while they may be able to cripple the adoption of some new technologies like streaming multimedia, they won't be able to smash the entire world.
1422:
This does not preclude the free software world from using some ideas or software. There's no reason why Linux can't run proprietary application software that costs money. Perhaps people will sell licenses for some distributions and patches. Still, the users must shift mental gears when they encounter these packages.
1429:
One of the biggest challenges for the free software community will be developing the leadership to undertake these battles. It is one thing to mess around in a garage with your buddies and hang out in some virtual he-man/Microsoft-haters clubhouse cooking up neat code. It's a very different challenge to actually achieve the world domination that the Linux world muses about. When I started writing the book, I thought that an anthem for the free software movement might be Spinal Tap's “Flower People.” Now I think it's going to be Buffalo Springfield's “For What It's Worth,” which warns, “There's something happening here / What it is ain't exactly clear.”
driver Most computers are designed to work with optional devices like modems, disk drives, printers, cameras, and keyboards. A driver is a piece of software that translates the signals sent by the device into a set of signals that can be understood by the operating system. Most operating systems are designed to be modular, so these drivers can be added as an afterthought whenever a user connects a new device. They are usually designed to have a standard structure so other software will work with them. The driver for each mouse, for instance, translates the signals from the mouse into a standard description that includes the position of the mouse and its direction. Drivers are an important point of debate in the free software community because volunteers must often create the drivers. Most manufacturers write the drivers for Windows computers because these customers make up the bulk of their sales. The manufacturers often avoid creating drivers for Linux or BSD systems because they perceive the market to be small. Some manufacturers also cite the GNU GPL as an impediment because they feel that releasing the source code to their drivers publishes important competitive information.
GNOME The GNU Network Object Model Environment, which might be summarized as “All of the functionality of Microsoft Windows for Linux.” It's actually more. There are many enhancements that make the tool easier to use and more flexible than the prototype from Redmond. See also KDE, another package that accomplishes much of the same. (www.gnome.org)
GNU/Linux The name some people use for Linux as a way of giving credit to the GNU project for its leadership and contribution of code.
Linux The name given to the core of the operating system started by Linus Torvalds in 1991. The word is now generally used to refer to an entire bundle of free software packages that work together. Red Hat Linux, for instance, is a large bundle of software including packages written by many other unrelated projects.
C. Scott Ananian "Questions Not to Ask on Linux-Kernel", 1998-05, [http://lwn.net/980521/a/nonfaq.html].
C. Scott Ananian "A Linux Lament: As Red Hat Prepares to Go Public, One Linux Hacker's Dreams of IPO Glory Are Crushed by the Man", 1999-07-30, [http://www.salon.com/tech/feature/1999/07/30/redhat_shares/index.html].
Zack Brown "The 'Linux' vs. 'GNU/Linux' Debate", 1999-04-13, [http://www.kt.opensrc.org/kt19990408_13.html#editorial].
Rachel Chalmers "Challenges Ahead for the Linux Standards Base", 1999-04, [http://www.linuxworld.com/linuxworld/lw-1999-04/lw-04-lsb.html].
James Coates "A Rebellious Reaction to the Linux Revolution", 1999-04-25, [http://www.chicagotribune.com/business/printedition/article/0,1051,SA-Vo9904250051,00.html].
Mary Lisbeth D'Amico "German Division of Microsoft Protests 'Where Do You Want to Go Tomorrow' Slogan: Linux Site Holds Contest for New Slogan While Case Is Pending", 1999-04-13, [http://www.linuxworld.com/linuxworld/lw-1999-04/lw-04-german.html].
Eric Kidd "Why You Might Want to Use the Library GPL for Your Next Library", 1999-03, [http://www.linuxgazette.com/issue38/kidd.html].
LWN "Linux Beat Windows NT Handily in an Oracle Performance Benchmark", 1999-04-29, [http://rpmfind.net/veillard/oracle/].
Robert McMillan, Nora Mikes "After the 'Sweet Sixteen': Linus Torvalds's Take on the State of Linux", 1999-03, [http://www.linuxworld.com/linuxworld/lw-1999-03/lw03-torvalds.html].
Bob Metcalfe "Linux's '60s Technology: Open-Sores Ideology Won't Beat W2K, but What Will?", 1999-06-19, [http://www.infoworld.com/articles/op/xml/990621opmetcalfe.xml].
Mindcraft "Web and File Server Comparison: Microsoft Windows NT Server 4.0 and Red Hat Linux 5.2 Upgraded to the Linux 2.2.2 Kernel", 1999-04-13, [http://www.mindcraft.com/whitepapers/nts4rhlinux.html].
Eric Raymond "The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary", 1999, O'Reilly.
Alessandro Rubini "Tour of the Linux Kernel Source".
David A Rusling "The Linux Kernel", [http://metalab.unc.edu/mdw/LDP/tlk/tlk-title.html].
Doc Searls "It's an Industry", Linux Journal, 1999-05-21, [http://www.linuxresources.com/articles/conversations/001.html].
Victoria Slind-Flor "Linux May Alter IP Legal Landscape: Some Predict More Contract Work if Alternative to Windows Catches On", National Law Journal, 1999-03-12, [http://www.lawnewsnetwork.com/stories/mar/e030899q.html].
Linus Torvalds "Linus Torvalds: Leader of the Revolution", Linux's History, 1992-07-31, [http://www.li.org/li/linuxhistory.shtml].
Dave Whitenger "Words of a Maddog", 1999-04-19, [http://linuxtoday.com/stories/5118.html].
Riley Williams "Linux Kernel Version History", [http://ps.cus.umist.ac.uk/~rhw/kernel.versions.html].
"Free as in Freedom (2.0) - Richard Stallman and the Free Software Revolution" (2010) [en] WILLIAMS, Sam; STALLMAN, Richard M.
idx:
Stallman, Richard M. 141, 170, 171, 189, 288, 320, 391, 391, 443, 589, 676, 727,
AI Lab, as a programmer, 391,
behavioral disorders, 170,
childhood, 141, 589,
childhood, behavioral disorders, 171,
childhood, first computer program, 189,
Emacs Commune and, 391,
folk dancing, 288, 320,
GNU General Public License, 589,
GNU Linux, 676,
GNU Project, 443,
open source and, 727
idx:
5:
Many facts needed correction, but deeper changes were also needed. Williams, a non-programmer, blurred fundamental technical and legal distinctions, such as that between modifying an existing program's code, on the one hand, and implementing some of its ideas in a new program, on the other. Thus, the first edition said that both Gosmacs and GNU Emacs were developed by modifying the original PDP-10 Emacs, which in fact neither one was. Likewise, it mistakenly described Linux as a “version of Minix.” SCO later made the same false claim in its infamous lawsuit against IBM, and both Torvalds and Tanenbaum rebutted it.
6:
The first edition overdramatized many events by projecting spurious emotions into them. For instance, it said that I “all but shunned” Linux in 1992, and then made “a dramatic about-face” by deciding in 1993 to sponsor Debian GNU/Linux. Both my interest in 1993 and my lack of interest in 1992 were pragmatic means to pursue the same end: to complete the GNU system. The launch of the GNU Hurd kernel in 1990 was also a pragmatic move directed at that same end.
10:
In this edition, the complete system that combines GNU and Linux is always “GNU/Linux,” and “Linux” by itself always refers to Torvalds' kernel, except in quotations where the other usage is marked with “[sic]”.
See http://www.gnu.org/gnu/gnu-linux-faq.html for more explanation of why it is erroneous and unfair to call the whole system “Linux.”
101:
In an information economy increasingly dependent on software and increasingly beholden to software standards, the GPL has become the proverbial “big stick.” Even companies that once derided it as “software socialism” have come around to recognize the benefits. Linux, the kernel developed by Finnish college student Linus Torvalds in 1991, is licensed under the GPL, as are most parts of the GNU system: GNU Emacs, the GNU Debugger, the GNU C Compiler, etc. Together, these tools form the components of the free software GNU/Linux operating system, developed, nurtured, and owned by the worldwide hacker community. Instead of viewing this community as a threat, high-tech companies like IBM, Hewlett Packard, and Sun Microsystems have come to rely upon it, selling software applications and services built to ride atop the ever-growing free software infrastructure. 3
3.Although these applications run on GNU/Linux, it does not follow that they are themselves free software. On the contrary, most of these applications are proprietary software, and respect your freedom no more than Windows does. They may contribute to the success of GNU/Linux, but they don't contribute to the goal of freedom for which it exists.
107:
The mutual success of GNU/Linux and Windows over the last 10 years suggests that both sides on this question are sometimes right. However, free software activists such as Stallman think this is a side issue. The real question, they say, isn't whether free or proprietary software will succeed more, it's which one is more ethical.
112:
The crowd is filled with visitors who share Stallman's fashion and grooming tastes. Many come bearing laptop computers and cellular modems, all the better to record and transmit Stallman's words to a waiting Internet audience. The gender ratio is roughly 15 males to 1 female, and 1 of the 7 or 8 females in the room comes in bearing a stuffed penguin, the official Linux mascot, while another carries a stuffed teddy bear.
299:
Maybe that's why most writers, when describing Stallman, tend to go for the religious angle. In a 1998 Salon.com article titled “The Saint of Free Software,” Andrew Leonard describes Stallman's green eyes as “radiating the power of an Old Testament prophet.” 30 A 1999 Wired magazine article describes the Stallman beard as “Rasputin-like,” 31 while a London Guardian profile describes the Stallman smile as the smile of “a disciple seeing Jesus.” 32
30.See Andrew Leonard, “The Saint of Free Software,” Salon.com (August 1998),
http://www.salon.com/21st/feature/1998/08/cov_31feature.html.
31.See Leander Kahney, “Linux's Forgotten Man,” Wired News (March 5, 1999),
http://www.wired.com/news/print/0,1294,18291,00.html.
32.See “Programmer on moral high ground; Free software is a moral issue for Richard Stallman, who believes in freedom and free software,” London Guardian (November 6, 1999),
http://www.guardian.co.uk/uk/1999/nov/06/andrewbrown.
These are just a small sampling of the religious comparisons. To date, the most extreme comparison has to go to Linus Torvalds, who, in his autobiography - see Linus Torvalds and David Diamond, Just For Fun: The Story of an Accidental Revolutionary (HarperCollins Publishers, Inc., 2001): 58 - writes, “Richard Stallman is the God of Free Software.” Honorable mention goes to Larry Lessig, who, in a footnote description of Stallman in his book - see Larry Lessig, The Future of Ideas (Random House, 2001): 270 - likens Stallman to Moses:...
as with Moses, it was another leader, Linus Torvalds, who finally carried the movement into the promised land by facilitating the development of the final part of the OS puzzle. Like Moses, too, Stallman is both respected and reviled by allies within the movement. He is [an] unforgiving, and hence for many inspiring, leader of a critically important aspect of modern culture. I have deep respect for the principle and commitment of this extraordinary individual, though I also have great respect for those who are courageous enough to question his thinking and then sustain his wrath.
In a final interview with Stallman, I asked him his thoughts about the religious comparisons. “Some people do compare me with an Old Testament prophet, and the reason is Old Testament prophets said certain social practices were wrong. They wouldn't compromise on moral issues. They couldn't be bought off, and they were usually treated with contempt.”
301:
My own first encounter with the legendary Stallman gaze dates back to the March, 1999, LinuxWorld Convention and Expo in San Jose, California. Billed as a “coming out party” for the “Linux” software community, the convention also stands out as the event that reintroduced Stallman to the technology media. Determined to push for his proper share of credit, Stallman used the event to instruct spectators and reporters alike on the history of the GNU Project and the project's overt political objectives.
302:
As a reporter sent to cover the event, I received my own Stallman tutorial during a press conference announcing the release of GNOME 1.0, a free software graphic user interface. Unwittingly, I push an entire bank of hot buttons when I throw out my very first question to Stallman himself: “Do you think GNOME's maturity will affect the commercial popularity of the Linux operating system?”
303:
“I ask that you please stop calling the operating system Linux,” Stallman responds, eyes immediately zeroing in on mine. “The Linux kernel is just a small part of the operating system. Many of the software programs that make up the operating system you call Linux were not developed by Linus Torvalds at all. They were created by GNU Project volunteers, putting in their own personal time so that users might have a free operating system like the one we have today. To not acknowledge the contribution of those programmers is both impolite and a misrepresentation of history. That's why I ask that when you refer to the operating system, please call it by its proper name, GNU/Linux.”
304:
Taking the words down in my reporter's notebook, I notice an eerie silence in the crowded room. When I finally look up, I find Stallman's unblinking eyes waiting for me. Timidly, a second reporter throws out a question, making sure to use the term “GNU/Linux” instead of Linux. Miguel de Icaza, leader of the GNOME project, fields the question. It isn't until halfway through de Icaza's answer, however, that Stallman's eyes finally unlock from mine. As soon as they do, a mild shiver rolls down my back. When Stallman starts lecturing another reporter over a perceived error in diction, I feel a guilty tinge of relief. At least he isn't looking at me, I tell myself.
305:
For Stallman, such face-to-face moments would serve their purpose. By the end of the first LinuxWorld show, most reporters know better than to use the term “Linux” in his presence, and Wired.com is running a story comparing Stallman to a pre-Stalinist revolutionary erased from the history books by hackers and entrepreneurs eager to downplay the GNU Project's overly political objectives. 33 Other articles follow, and while few reporters call the operating system GNU/Linux in print, most are quick to credit Stallman for launching the drive to build a free software operating system 15 years before.
33.See Leander Kahney (1999).
306:
I won't meet Stallman again for another 17 months. During the interim, Stallman will revisit Silicon Valley once more for the August, 1999 LinuxWorld show. Although not invited to speak, Stallman does manage to deliver the event's best line. Accepting the show's Linus Torvalds Award for Community Service - an award named after Linux creator Linus Torvalds - on behalf of the Free Software Foundation, Stallman wisecracks, “Giving the Linus Torvalds Award to the Free Software Foundation is a bit like giving the Han Solo Award to the Rebel Alliance.”
307:
This time around, however, the comments fail to make much of a media dent. Midway through the week, Red Hat, Inc., a prominent GNU/Linux vendor, goes public. The news merely confirms what many reporters such as myself already suspect: “Linux” has become a Wall Street buzzword, much like “e-commerce” and “dot-com” before it. With the stock market approaching the Y2K rollover like a hyperbola approaching its vertical asymptote, all talk of free software or open source as a political phenomenon falls by the wayside.
308:
Maybe that's why, when LinuxWorld follows up its first two shows with a third LinuxWorld show in August, 2000, Stallman is conspicuously absent.
309:
My second encounter with Stallman and his trademark gaze comes shortly after that third LinuxWorld show. Hearing that Stallman is going to be in Silicon Valley, I set up a lunch interview in Palo Alto, California. The meeting place seems ironic, not only because of his absence from the show but also because of the overall backdrop. Outside of Redmond, Washington, few cities offer a more direct testament to the economic value of proprietary software. Curious to see how Stallman, a man who has spent the better part of his life railing against our culture's predilection toward greed and selfishness, is coping in a city where even garage-sized bungalows run in the half-million-dollar price range, I make the drive down from Oakland.
316:
Stallman goes back to tapping away at his laptop. The laptop is gray and boxy, not like the sleek, modern laptops that seemed to be a programmer favorite at the recent LinuxWorld show. Above the keyboard rides a smaller, lighter keyboard, a testament to Stallman's aging hands. During the mid 1990s, the pain in Stallman's hands became so unbearable that he had to hire a typist. Today, Stallman relies on a keyboard whose keys require less pressure than a typical computer keyboard.
328:
During the wait, Stallman practices a few dance steps. His moves are tentative but skilled. We discuss current events. Stallman says his only regret about not attending LinuxWorld was missing out on a press conference announcing the launch of the GNOME Foundation. Backed by Sun Microsystems and IBM, the foundation is in many ways a vindication for Stallman, who has long championed that free software and free-market economics need not be mutually exclusive. Nevertheless, Stallman remains dissatisfied by the message that came out.
329:
“The way it was presented, the companies were talking about Linux with no mention of the GNU Project at all,” Stallman says.
345:
After a brief sigh, Stallman recovers. The moment gives me a chance to discuss Stallman's reputation vis-à-vis the fairer sex. The reputation is a bit contradictory at times. A number of hackers report Stallman's predilection for greeting females with a kiss on the back of the hand. 37 A May 26, 2000 Salon.com article, meanwhile, portrays Stallman as a bit of a hacker lothario. Documenting the free software-free love connection, reporter Annalee Newitz presents Stallman as rejecting traditional family values, telling her, “I believe in love, but not monogamy.” 38
37.See Mae Ling Mak, “A Mae Ling Story” (December 17, 1998),
http://crackmonkey.org/pipermail/crackmonkey/1998-December/001777.html. So far, Mak is the only person I've found willing to speak on the record in regard to this practice, although I've heard this from a few other female sources. Mak, despite expressing initial revulsion at it, later managed to put aside her misgivings and dance with Stallman at a 1999 LinuxWorld show.
38.See Annalee Newitz, “If Code is Free Why Not Me?”, Salon.com (May 26, 2000),
http://www.salon.com/tech/feature/2000/05/26/free_love/print.html.
347:
I mention a passage from the 1999 book Open Sources in which Stallman confesses to wanting to name the GNU kernel after a girlfriend at the time. The girlfriend's name was Alix, a name that fit perfectly with the Unix developer convention of putting an “x” at the end of the names of operating systems and kernels - e.g., “Linux.” Alix was a Unix system administrator, and had suggested to her friends, “Someone should name a kernel after me.” So Stallman decided to name the GNU kernel “Alix” as a surprise for her. The kernel's main developer renamed the kernel “Hurd,” but retained the name “Alix” for part of it. One of Alix's friends noticed this part in a source snapshot and told her, and she was touched. A later redesign of the Hurd eliminated that part. 39
39.See Richard Stallman, “The GNU Operating System and the Free Software Movement,” Open Sources (O'Reilly & Associates, Inc., 1999): 65. [RMS: Williams interpreted this vignette as suggesting that I am a hopeless romantic, and that my efforts were meant to impress some as-yet-unidentified woman. No MIT hacker would believe this, since we learned quite young that most women wouldn't notice us, let alone love us, for our programming. We programmed because it was fascinating. Meanwhile, these events were only possible because I had a thoroughly identified girlfriend at the time. If I was a romantic, at the time I was neither a hopeless romantic nor a hopeful romantic, but rather temporarily a successful one. On the strength of that naive interpretation, Williams went on to compare me to Don Quijote. For completeness' sake, here's a somewhat inarticulate quote from the first edition: “I wasn't really trying to be romantic. It was more of a teasing thing. I mean, it was romantic, but it was also teasing, you know? It would have been a delightful surprise.”]
354:
I decide to bring up the outcast issue again, wondering if Stallman's teenage years conditioned him to take unpopular stands, most notably his uphill battle since 1994 to get computer users and the media to replace the popular term “Linux” with “GNU/Linux.”
378:
Many criticize Stallman for rejecting handy political alliances; some psychologize this and describe it as a character trait. In the case of his well-publicized distaste for the term “open source,” the unwillingness to participate in recent coalition-building projects seems understandable. As a man who has spent the last two decades stumping on behalf of free software, Stallman's political capital is deeply invested in the term. Still, comments such as the “Han Solo” comparison at the 1999 LinuxWorld have only reinforced Stallman's reputation, amongst those who believe virtue consists of following the crowd, as a disgruntled mossback unwilling to roll with political or marketing trends.
380:
[RMS: The term “friends” only partly fits people such as Young, and companies such as Red Hat. It applies to some of what they did, and do: for instance, Red Hat contributes to development of free software, including some GNU programs. But Red Hat does other things that work against the free software movement's goals - for instance, its versions of GNU/Linux contain non-free software. Turning from deeds to words, referring to the whole system as “Linux” is unfriendly treatment of the GNU Project, and promoting “open source” instead of “free software” rejects our values. I could work with Young and Red Hat when we were going in the same direction, but that was not often enough to make them possible allies.]
522:
“As far as I know, that book is still sitting on a shelf somewhere, unusable, uncopyable, just taken out of the system,” Chassell says. “It was quite a good introduction if I may say so myself. It would have taken maybe three or four months to convert [the book] into a perfectly usable introduction to GNU/Linux today. The whole experience, aside from what I have in my memory, was lost.”
531:
As Stallman putters around the front of the room, a few audience members wearing T-shirts with the logo of the Maui FreeBSD Users Group (MFUG) race to set up camera and audio equipment. FreeBSD, a free software offshoot of the Berkeley Software Distribution, the venerable 1970s academic version of Unix, is technically a competitor to the GNU/Linux operating system. Still, in the hacking world, Stallman speeches are documented with a fervor reminiscent of the Grateful Dead and its legendary army of amateur archivists. As the local free software heads, it's up to the MFUG members to make sure fellow programmers in Hamburg, Mumbai, and Novosibirsk don't miss out on the latest pearls of RMS wisdom.
537:
Once again, Stallman quickly segues into the parable of the Xerox laser printer, taking a moment to deliver the same dramatic finger-pointing gestures to the crowd. He also devotes a minute or two to the GNU/Linux name.
547:
For Stallman, the software-patent issue dramatizes the need for eternal hacker vigilance. It also underlines the importance of stressing the political benefits of free software programs over the competitive benefits. Stallman says competitive performance and price, two areas where free software operating systems such as GNU/Linux and FreeBSD already hold a distinct advantage over their proprietary counterparts, are side issues compared to the large issues of user and developer freedom.
548:
This position is controversial within the community: open source advocates emphasize the utilitarian advantages of free software over the political advantages. Rather than stress the political significance of free software programs, open source advocates have chosen to stress the engineering integrity of the hacker development model. Citing the power of peer review, the open source argument paints programs such as GNU/Linux or FreeBSD as better built, better inspected and, by extension, more trustworthy to the average user.
569:
Discussing the St. IGNUcius persona afterward, Stallman says he first came up with it in 1996, long after the creation of Emacs but well before the emergence of the “open source” term and the struggle for hacker-community leadership that precipitated it. At the time, Stallman says, he wanted a way to “poke fun at himself,” to remind listeners that, though stubborn, Stallman was not the fanatic some made him out to be. It was only later, Stallman adds, that others seized the persona as a convenient way to play up his reputation as software ideologue, as Eric Raymond did in a 1999 interview with the Linux.com web site:
570:
When I say RMS calibrates what he does, I'm not belittling or accusing him of insincerity. I'm saying that like all good communicators he's got a theatrical streak. Sometimes it's conscious - have you ever seen him in his St. IGNUcius drag, blessing software with a disk platter on his head? Mostly it's unconscious; he's just learned the degree of irritating stimulus that works, that holds attention without (usually) freaking people out. 84
84.See “Guest Interview: Eric S. Raymond,” Linux.com (May 18, 1999),
http://www.linux.com/interviews/19990518/8/.
572:
That said, Stallman does admit to being a ham. “Are you kidding?” he says at one point. “I love being the center of attention.” To facilitate that process, Stallman says he once enrolled in Toastmasters, an organization that helps members bolster their public-speaking skills and one Stallman recommends highly to others. He possesses a stage presence that would be the envy of most theatrical performers and feels a link to vaudevillians of years past. A few days after the Maui High Performance Computing Center speech, I allude to the 1999 LinuxWorld performance and ask Stallman if he has a Groucho Marx complex - i.e., the unwillingness to belong to any club that would have him as a member. 85 Stallman's response is immediate: “No, but I admire Groucho Marx in a lot of ways and certainly have been in some things I say inspired by him. But then I've also been inspired in some ways by Harpo.”
85.RMS: Williams misinterprets Groucho's famous remark by treating it as psychological. It was intended as a jab at the overt antisemitism of many clubs, which was why they would refuse him as a member. I did not understand this either until my mother explained it to me. Williams and I grew up when bigotry had gone underground, and there was no need to veil criticism of bigotry in humor as Groucho did.
575:
The St. IGNUcius skit ends with a brief inside joke. On most Unix systems and Unix-related offshoots, the primary competitor program to Emacs is vi, pronounced vee-eye, a text-editing program developed by former UC Berkeley student and current Sun Microsystems chief scientist, Bill Joy. Before doffing his “halo,” Stallman pokes fun at the rival program. “People sometimes ask me if it is a sin in the Church of Emacs to use vi,” he says. “Using a free version of vi is not a sin; it is a penance. So happy hacking.” 86
86.The service of the Church of Emacs has developed further since 2001. Users can now join the Church by reciting the Confession of the Faith: “There is no system but GNU, and Linux is one of its kernels.” Stallman sometimes mentions the religious ceremony known as the Foobar Mitzvah, the Great Schism between various rival versions of Emacs, and the cult of the Virgin of Emacs (which refers to any person that has not yet learned to use Emacs). In addition, “vi vi vi” has been identified as the Editor of the Beast.
576:
After a brief question-and-answer session, audience members gather around Stallman. A few ask for autographs. “I'll sign this,” says Stallman, holding up one woman's print out of the GNU General Public License, “but only if you promise me to use the term GNU/Linux instead of Linux” (when referring to the system), “and tell all your friends to do likewise.”
627:
By the end of the 1980s, the GPL was beginning to exert a gravitational effect on the free software community. A program didn't have to carry the GPL to qualify as free software - witness the case of the BSD network utilities - but putting a program under the GPL sent a definite message. “I think the very existence of the GPL inspired people to think through whether they were making free software, and how they would license it,” says Bruce Perens, creator of Electric Fence, a popular Unix utility, and future leader of the Debian GNU/Linux development team. A few years after the release of the GPL, Perens says he decided to discard Electric Fence's homegrown license in favor of Stallman's lawyer-vetted copyright. “It was actually pretty easy to do,” Perens recalls.
657:
Interestingly, the GNU system's completion would stem from one of these trips. In April 1991, Stallman paid a visit to the Polytechnic University in Helsinki, Finland. Among the audience members was 21-year-old Linus Torvalds, who was just beginning to develop the Linux kernel - the free software kernel destined to fill the GNU system's main remaining gap.
667:
The posting drew a smattering of responses and within a month, Torvalds had posted a 0.01 version of his kernel - i.e., the earliest possible version fit for outside review - on an Internet FTP site. In the course of doing so, Torvalds had to come up with a name for the new kernel. On his own PC hard drive, Torvalds had saved the program as Linux, a name that paid its respects to the software convention of giving each Unix variant a name that ended with the letter X. Deeming the name too “egotistical,” Torvalds changed it to Freax, only to have the FTP site manager change it back.
669:
Initially, Linux was not free software: the license it carried did not qualify as free, because it did not allow commercial distribution. Torvalds was worried that some company would swoop in and take Linux away from him. However, as the growing GNU/Linux combination gained popularity, Torvalds saw that sale of copies would be useful for the community, and began to feel less worried about a possible takeover. 105 This led him to reconsider the licensing of Linux.
105.Ibid, p. 94-95.
670:
Neither compiling Linux with GCC nor running GCC with Linux required him legally to release Linux under the GNU GPL, but Torvalds' use of GCC implied for him a certain obligation to let other users borrow back. As Torvalds would later put it: “I had hoisted myself up on the shoulders of giants.” 106 Not surprisingly, he began to think about what would happen when other people looked to him for similar support. A decade after the decision, Torvalds echoes the Free Software Foundation's Robert Chassell when he sums up his thoughts at the time:
106.Ibid, p. 95-97.
671:
You put six months of your life into this thing and you want to make it available and you want to get something out of it, but you don't want people to take advantage of it. I wanted people to be able to see [Linux], and to make changes and improvements to their hearts' content. But I also wanted to make sure that what I got out of it was to see what they were doing. I wanted to always have access to the sources so that if they made improvements, I could make those improvements myself. 107
107.See Linus Torvalds and David Diamond, Just For Fun: The Story of an Accidental Revolutionary (Harper Collins Publishers, Inc., 2001): 94-95.
672:
When it was time to release the 0.12 version of Linux, the first to operate fully with GCC, Torvalds decided to throw his lot in with the free software movement. He discarded the old license of Linux and replaced it with the GPL. Within three years, Linux developers were offering release 1.0 of Linux, the kernel; it worked smoothly with the almost complete GNU system, composed of programs from the GNU Project and elsewhere. In effect, they had completed the GNU operating system by adding Linux to it. The resulting system was basically GNU plus Linux. Torvalds and friends, however, referred to it confusingly as “Linux.”
673:
By 1994, the amalgamated system had earned enough respect in the hacker world to make some observers from the business world wonder if Torvalds hadn't given away the farm by switching to the GPL in the project's initial months. In the first issue of Linux Journal, publisher Robert Young sat down with Torvalds for an interview. When Young asked the Finnish programmer if he felt regret at giving up private ownership of the Linux source code, Torvalds said no. “Even with 20/20 hindsight,” Torvalds said, he considered the GPL “one of the very best design decisions” made during the early stages of the Linux project. 108
108.See Robert Young, “Interview with Linus, the Author of Linux,” Linux Journal (March 1, 1994),
http://www.linuxjournal.com/article/2736.
674:
That the decision had been made with zero appeal or deference to Stallman and the Free Software Foundation speaks to the GPL's growing portability. Although it would take a couple of years to be recognized by Stallman, the explosiveness of Linux development conjured flashbacks of Emacs. This time around, however, the innovation triggering the explosion wasn't a software hack like Control-R but the novelty of running a Unix-like system on the PC architecture. The motives may have been different, but the end result certainly fit the ethical specifications: a fully functional operating system composed entirely of free software.
675:
As his initial email message to the comp.os.minix newsgroup indicates, it would take a few months before Torvalds saw Linux as anything more than a holdover until the GNU developers delivered on the Hurd kernel. As far as Torvalds was concerned, he was simply the latest in a long line of kids taking apart and reassembling things just for fun. Nevertheless, when summing up the runaway success of a project that could have just as easily spent the rest of its days on an abandoned computer hard drive, Torvalds credits his younger self for having the wisdom to give up control and accept the GPL bargain. “I may not have seen the light,” writes Torvalds, reflecting on Stallman's 1991 Polytechnic University speech and his subsequent decision to switch to the GPL. “But I guess something from his speech sunk in.” 109
109.See Linus Torvalds and David Diamond, Just For Fun: The Story of an Accidental Revolutionary (Harper Collins Publishers, Inc., 2001): 59.
676:
Chapter 10 - GNU/Linux
677:
By 1993, the free software movement was at a crossroads. To the optimistically inclined, all signs pointed toward success for the hacker culture. Wired magazine, a funky, new publication offering stories on data encryption, Usenet, and software freedom, was flying off magazine racks. The Internet, once a slang term used only by hackers and research scientists, had found its way into mainstream lexicon. Even President Clinton was using it. The personal computer, once a hobbyist's toy, had grown to full-scale respectability, giving a whole new generation of computer users access to hacker-built software. And while the GNU Project had not yet reached its goal of a fully intact, free GNU operating system, users could already run the GNU/Linux variant.
679:
Or were they? To the pessimistically inclined, each sign of acceptance carried its own troubling countersign. Sure, being a hacker was suddenly cool, but was cool good for a community that thrived on alienation? Sure, the White House was saying nice things about the Internet, even going so far as to register its own domain name, whitehouse.gov, but it was also meeting with the companies, censorship advocates, and law-enforcement officials looking to tame the Internet's Wild West culture. Sure, PCs were more powerful, but in commoditizing the PC marketplace with its chips, Intel had created a situation in which proprietary software vendors now held the power. For every new user won over to the free software cause via GNU/Linux, hundreds, perhaps thousands, were booting up Microsoft Windows for the first time. GNU/Linux had only rudimentary graphical interfaces, so it was hardly user-friendly. In 1993, only an expert could use it. The GNU Project's first attempt to develop a graphical desktop had been abortive.
681:
Finally, there was the curious nature of GNU/Linux itself. Unrestricted by legal disputes (such as BSD faced), GNU/Linux's high-speed evolution had been so unplanned, its success so accidental, that programmers closest to the software code itself didn't know what to make of it. More compilation album than unified project, it was comprised of a hacker medley of greatest hits: everything from GCC, GDB, and glibc (the GNU Project's newly developed C Library) to X (a Unix-based graphic user interface developed by MIT's Laboratory for Computer Science) to BSD-developed tools such as BIND (the Berkeley Internet Naming Daemon, which lets users substitute easy-to-remember Internet domain names for numeric IP addresses) and TCP/IP. In addition, it contained the Linux kernel - itself designed as a replacement for Minix. Rather than developing a new operating system, Torvalds and his rapidly expanding Linux development team had plugged their work into this matrix. As Torvalds himself would later translate it when describing the secret of his success: “I'm basically a very lazy person who likes to take credit for things other people actually do.” 110
110.Torvalds has offered this quote in many different settings. To date, however, the quote's most notable appearance is in the Eric Raymond essay, “The Cathedral and the Bazaar” (May, 1997),
http://www.catb.org/~esr/writings/cathedral-bazaar/.
683:
By late 1993, a growing number of GNU/Linux users had begun to lean toward the latter definition and began brewing private variations on the theme. They began to develop various “distributions” of GNU/Linux and distribute them, sometimes gratis, sometimes for a price. The results were spotty at best.
684:
“This was back before Red Hat and the other commercial distributions,” remembers Ian Murdock, then a computer science student at Purdue University. “You'd flip through Unix magazines and find all these business card-sized ads proclaiming 'Linux.' Most of the companies were fly-by-night operations that saw nothing wrong with slipping a little of their own [proprietary] source code into the mix.”
685:
Murdock, a Unix programmer, remembers being “swept away” by GNU/Linux when he first downloaded and installed it on his home PC system. “It was just a lot of fun,” he says. “It made me want to get involved.” The explosion of poorly built distributions began to dampen his early enthusiasm, however. Deciding that the best way to get involved was to build a version free of additives, Murdock set about putting together a list of the best free software tools available, with the intention of folding them into his own distribution. “I wanted something that would live up to the Linux name,” Murdock says.
686:
In a bid to “stir up some interest,” Murdock posted his intentions on the Internet, including Usenet's comp.os.linux newsgroup. One of the first responding email messages was from rms@ai.mit.edu. As a hacker, Murdock instantly recognized the address. It was Richard M. Stallman, founder of the GNU Project and a man Murdock knew even back then as “the hacker of hackers.” Seeing the address in his mail queue, Murdock was puzzled. Why on Earth would Stallman, a person leading his own operating-system project, care about Murdock's gripes over “Linux” distributions?
688:
“He said the Free Software Foundation was starting to look closely at Linux and that the FSF was interested in possibly doing a Linux [sic] system, too. Basically, it looked to Stallman like our goals were in line with their philosophy.”
689:
Not to overdramatize, the message represented a change in strategy on Stallman's part. Until 1993, Stallman had been content to keep his nose out of Linux affairs. After first hearing of the new kernel, Stallman asked a friend to check its suitability. Recalls Stallman, “He reported back that the software was modeled after System V, which was the inferior version of Unix. He also told me it wasn't portable.”
690:
The friend's report was correct. Built to run on 386-based machines, Linux was firmly rooted to its low-cost hardware platform. What the friend failed to report, however, was the sizable advantage Linux enjoyed as the only free kernel in the marketplace. In other words, while Stallman spent the next year and a half listening to the Hurd developer's reports of rather slow progress, Torvalds was winning over the programmers who would later uproot and replant Linux and GNU onto new platforms.
691:
By 1993, the GNU Project's failure to deliver a working kernel was leading to problems both within the GNU Project and in the free software movement at large. A March, 1993, Wired magazine article by Simson Garfinkel described the GNU Project as “bogged down” despite the success of the project's many tools. 111 Those within the project and its nonprofit adjunct, the Free Software Foundation, remember the mood as being even worse than Garfinkel's article let on. “It was very clear, at least to me at the time, that there was a window of opportunity to introduce a new operating system,” says Chassell. “And once that window was closed, people would become less interested. Which is in fact exactly what happened.” 112
111.See Simson Garfinkel, “Is Stallman Stalled?” Wired (March, 1993).
112.Chassell's concern about there being a 36-month “window” for a new operating system is not unique to the GNU Project. During the early 1990s, free software versions of the Berkeley Software Distribution were held up by Unix System Laboratories' lawsuit restricting the release of BSD-derived software. While many users consider BSD offshoots such as FreeBSD and OpenBSD to be demonstrably superior to GNU/Linux both in terms of performance and security, the number of FreeBSD and OpenBSD users remains a fraction of the total GNU/Linux user population. To view a sample analysis of the relative success of GNU/Linux in relation to other free software operating systems, see the essay by New Zealand hacker, Liam Greenwood, “Why is Linux Successful” (1999),
http://www.freebsddiary.org/linux.php.
697:
Over time, the growing success of GNU together with Linux made it clear that the GNU Project should get on the train that was leaving and not wait for the Hurd. Besides, there were weaknesses in the community surrounding GNU/Linux. Sure, Linux had been licensed under the GPL, but as Murdock himself had noted, the desire to treat GNU/Linux as a purely free software operating system was far from unanimous. By late 1993, the total GNU/Linux user population had grown from a dozen or so enthusiasts to somewhere between 20,000 and 100,000. 114 What had once been a hobby was now a marketplace ripe for exploitation, and some developers had no objection to exploiting it with non-free software. Like Winston Churchill watching Soviet troops sweep into Berlin, Stallman felt an understandable set of mixed emotions when it came time to celebrate the GNU/Linux “victory.” 115
114.GNU/Linux user-population numbers are sketchy at best, which is why I've provided such a broad range. The 100,000 total comes from the Red Hat “Milestones” site,
http://www.redhat.com/about/corporate/milestones.html.
115.I wrote this Winston Churchill analogy before Stallman himself sent me his own unsolicited comment on Churchill:
World War II and the determination needed to win it was a very strong memory as I was growing up. Statements such as Churchill's, “We will fight them in the landing zones, we will fight them on the beaches... we will never surrender,” have always resonated for me.
700:
The Free Software Foundation plays an extremely important role in the future of Debian. By the simple fact that they will be distributing it, a message is sent to the world that Linux [sic] is not a commercial product and that it never should be, but that this does not mean that Linux will never be able to compete commercially. For those of you who disagree, I challenge you to rationalize the success of GNU Emacs and GCC, which are not commercial software but which have had quite an impact on the commercial market regardless of that fact.
701:
The time has come to concentrate on the future of Linux [sic] rather than on the destructive goal of enriching oneself at the expense of the entire Linux community and its future. The development and distribution of Debian may not be the answer to the problems that I have outlined in the Manifesto, but I hope that it will at least attract enough attention to these problems to allow them to be solved. 116
116.See Ian Murdock, A Brief History of Debian, (January 6, 1994): Appendix A, “The Debian Manifesto,”
http://www.debian.org/doc/manuals/project-history/ap-manifesto.en.html.
702:
Shortly after the Manifesto's release, the Free Software Foundation made its first major request. Stallman wanted Murdock to call its distribution “GNU/Linux.” At first, Stallman proposed the term “Lignux” - combining the names Linux and GNU - but the initial reaction was very negative, and this convinced Stallman to go with the longer but less criticized GNU/Linux.
703:
Some dismissed Stallman's attempt to add the “GNU” prefix as a belated quest for credit, never mind whether it was due, but Murdock saw it differently. Looking back, Murdock saw it as an attempt to counteract the growing tension between the GNU Project's developers and those who adapted GNU programs to use with the Linux kernel. “There was a split emerging,” Murdock recalls. “Richard was concerned.”
705:
The programmers who adapted various GNU programs to work with the kernel Linux followed this common path: they considered only their own platform. But when the maintainers-in-charge asked them to help clean up their changes for future maintenance, several of them were not interested. They did not care about doing the correct thing, or about facilitating future maintenance of the GNU packages they had adapted. They cared only about their own versions and were inclined to maintain them as forks.
708:
Now programmers had forked several of the principal GNU packages at once. At first, Stallman says he considered the forks to be a product of impatience. In contrast to the fast and informal dynamics of the Linux team, GNU source-code maintainers tended to be slower and more circumspect in making changes that might affect a program's long-term viability. They also were unafraid of harshly critiquing other people's code. Over time, however, Stallman began to sense that there was an underlying lack of awareness of the GNU Project and its objectives when reading Linux developers' emails.
709:
“We discovered that the people who considered themselves 'Linux users' didn't care about the GNU Project,” Stallman says. “They said, 'Why should I bother doing these things? I don't care about the GNU Project. It [the program]'s working for me. It's working for us Linux users, and nothing else matters to us.' And that was quite surprising, given that people were essentially using a variant of the GNU system, and they cared so little. They cared less than anybody else about GNU.” Fooled by their own practice of calling the combination “Linux,” they did not realize that their system was more GNU than Linux.
710:
For the sake of unity, Stallman asked the maintainers-in-charge to do the work which normally the change authors should have done. In most cases this was feasible, but not in glibc. Short for GNU C Library, glibc is the package that all programs use to make “system calls” directed at the kernel, in this case Linux. User programs on a Unix-like system communicate with the kernel only through the C library.
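To make that mechanism concrete, here is a minimal, hypothetical C sketch (not from the book, and not the glibc port discussed here): a user program calls the C library function write(), and on a GNU/Linux system glibc translates that call into the kernel's write system call, so the program never talks to Linux directly.

code{

/* Illustrative sketch only: the program uses the C library, not the kernel
 * interface directly. On GNU/Linux, glibc turns write() into the kernel's
 * write system call. */
#include <unistd.h>   /* write() is declared by the C library */

int main(void)
{
    const char msg[] = "hello, kernel\n";
    /* glibc issues the underlying system call on file descriptor 1
     * (standard output) and returns the kernel's result to the caller. */
    write(1, msg, sizeof msg - 1);
    return 0;
}

}code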
711:
The changes to make glibc work as a communication channel between Linux and all the other programs in the system were major and ad-hoc, written without attention to their effect on other platforms. For the glibc maintainer-in-charge, the task of cleaning them up was daunting. Instead the Free Software Foundation paid him to spend most of a year reimplementing these changes from scratch, to make glibc version 6 work “straight out of the box” in GNU/Linux.
712:
Murdock says this was the precipitating cause that motivated Stallman to insist on adding the GNU prefix when Debian rolled out its software distribution. “The fork has since converged. Still, at the time, there was a concern that if the Linux community saw itself as a different thing from the GNU community, it might be a force for disunity.”
713:
While some viewed it as politically grasping to describe the combination of GNU and Linux as a “variant” of GNU, Murdock, already sympathetic to the free software cause, saw Stallman's request to call Debian's version GNU/Linux as reasonable. “It was more for unity than for credit,” he says.
716:
In 1996, Murdock, following his graduation from Purdue, decided to hand over the reins of the growing Debian project. He had already been ceding management duties to Bruce Perens, the hacker best known for his work on Electric Fence, a Unix utility released under the GPL. Perens, like Murdock, was a Unix programmer who had become enamored of GNU/Linux as soon as the operating system's Unix-like abilities became manifest. Like Murdock, Perens sympathized with the political agenda of Stallman and the Free Software Foundation, albeit from afar.
719:
According to Perens, Stallman was taken aback by the decision but had the wisdom to roll with it. “He gave it some time to cool off and sent a message that we really needed a relationship. He requested that we call it GNU/Linux and left it at that. I decided that was fine. I made the decision unilaterally. Everybody breathed a sigh of relief.”
720:
Over time, Debian would develop a reputation as the hacker's version of GNU/Linux, alongside Slackware, another popular distribution founded during the same 1993-1994 period. However, Slackware contained some non-free programs, and Debian after its separation from GNU began distributing non-free programs too. 118 Despite labeling them as “non-free” and saying that they were “not officially part of Debian,” proposing these programs to the user implied a kind of endorsement for them. As the GNU Project became aware of these policies, it came to recognize that neither Slackware nor Debian was a GNU/Linux distro it could recommend to the public.
118.Debian Buzz in June 1996 contained non-free Netscape 3.01 in its Contrib section.
721:
Outside the realm of hacker-oriented systems, however, GNU/Linux was picking up steam in the commercial Unix marketplace. In North Carolina, a Unix company billing itself as Red Hat was revamping its business to focus on GNU/Linux. The chief executive officer was Robert Young, the former Linux Journal editor who in 1994 had put the question to Linus Torvalds, asking whether he had any regrets about putting the kernel under the GPL. To Young, Torvalds' response had a “profound” impact on his own view toward GNU/Linux. Instead of looking for a way to corner the GNU/Linux market via traditional software tactics, Young began to consider what might happen if a company adopted the same approach as Debian - i.e., building an operating system completely out of free software parts. Cygnus Solutions, the company founded by Michael Tiemann and John Gilmore in 1990, was already demonstrating the ability to sell free software based on quality and customizability. What if Red Hat took the same approach with GNU/Linux?
722:
“In the western scientific tradition we stand on the shoulders of giants,” says Young, echoing both Torvalds and Sir Isaac Newton before him. “In business, this translates to not having to reinvent wheels as we go along. The beauty of [the GPL] model is you put your code into the public domain. 119 If you're an independent software vendor and you're trying to build some application and you need a modem-dialer, well, why reinvent modem dialers? You can just steal PPP off of Red Hat [GNU/]Linux and use that as the core of your modem-dialing tool. If you need a graphic tool set, you don't have to write your own graphic library. Just download GTK. Suddenly you have the ability to reuse the best of what went before. And suddenly your focus as an application vendor is less on software management and more on writing the applications specific to your customer's needs.” However, Young was no free software activist, and readily included non-free programs in Red Hat's GNU/Linux system.
119.Young uses the term “public domain” loosely here. Strictly speaking, it means “not copyrighted.” Code released under the GNU GPL cannot be in the public domain, since it must be copyrighted in order for the GNU GPL to apply.
723:
Young wasn't the only software executive intrigued by the business efficiencies of free software. By late 1996, most Unix companies were starting to wake up and smell the brewing source code. The GNU/Linux sector was still a good year or two away from full commercial breakout mode, but those close enough to the hacker community could feel it: something big was happening. The Intel 386 chip, the Internet, and the World Wide Web had hit the marketplace like a set of monster waves; free software seemed like the largest wave yet.
724:
For Ian Murdock, the wave seemed both a fitting tribute and a fitting punishment for the man who had spent so much time giving the free software movement an identity. Like many Linux aficionados, Murdock had seen the original postings. He'd seen Torvalds' original admonition that Linux was “just a hobby.” He'd also seen Torvalds' admission to Minix creator Andrew Tanenbaum: “If the GNU kernel had been ready last spring, I'd not have bothered to even start my project.” 120 Like many, Murdock knew that some opportunities had been missed. He also knew the excitement of watching new opportunities come seeping out of the very fabric of the Internet.
120.This quote is taken from the much publicized Torvalds-Tanenbaum “flame war” following the initial release of Linux. In the process of defending his choice of a non-portable monolithic kernel design, Torvalds says he started working on Linux as a way to learn more about his new 386 PC. “If the GNU kernel had been ready last spring, I'd not have bothered to even start my project.” See Chris DiBona et al., Open Sources (O'Reilly & Associates, Inc., 1999): 224.
725:
“Being involved with Linux in those early days was fun,” recalls Murdock. “At the same time, it was something to do, something to pass the time. If you go back and read those old [comp.os.minix] exchanges, you'll see the sentiment: this is something we can play with until the Hurd is ready. People were anxious. It's funny, but in a lot of ways, I suspect that Linux would never have happened if the Hurd had come along more quickly.”
726:
By the end of 1996, however, such “what if” questions were already moot, because Torvalds' kernel had gained a critical mass of users. The 36-month window had closed, meaning that even if the GNU Project had rolled out its Hurd kernel, chances were slim anybody outside the hard-core hacker community would have noticed. Linux, by filling the GNU system's last gap, had achieved the GNU Project's goal of producing a Unix-like free software operating system. However, most of the users did not recognize what had happened: they thought the whole system was Linux, and that Torvalds had done it all. Most of them installed distributions that came with non-free software; with Torvalds as their ethical guide, they saw no principled reason to reject it. Still, a precarious freedom was available for those that appreciated it.
729:
In November, 1995, Peter Salus, a member of the Free Software Foundation and author of the 1994 book, A Quarter Century of Unix, issued a call for papers to members of the GNU Project's “system-discuss” mailing list. Salus, the conference's scheduled chairman, wanted to tip off fellow hackers about the upcoming Conference on Freely Redistributable Software in Cambridge, Massachusetts. Slated for February, 1996, and sponsored by the Free Software Foundation, the event promised to be the first engineering conference solely dedicated to free software and, in a show of unity with other free software programmers, welcomed papers on “any aspect of GNU, Linux, NetBSD, 386BSD, FreeBSD, Perl, Tcl/tk, and other tools for which the code is accessible and redistributable.” Salus wrote:
733:
Despite the falling out, Raymond remained active in the free software community. So much so that when Salus suggested a conference pairing Stallman and Torvalds as keynote speakers, Raymond eagerly seconded the idea. With Stallman representing the older, wiser contingent of ITS/Unix hackers and Torvalds representing the younger, more energetic crop of Linux hackers, the pairing indicated a symbolic show of unity that could only be beneficial, especially to ambitious younger (i.e., below 40) hackers such as Raymond. “I sort of had a foot in both camps,” Raymond says.
739:
Stallman, for his part, doesn't remember any tension at the 1996 conference; he probably wasn't present when Torvalds made that statement. But he does remember later feeling the sting of Torvalds' celebrated “cheekiness.” “There was a thing in the Linux documentation which says print out the GNU coding standards and then tear them up,” says Stallman, recalling one example. “When you look closely, what he disagreed with was the least important part of it, the recommendation for how to indent C code.”
741:
For Raymond, the warm reception other hackers gave to Torvalds' comments confirmed a suspicion: the dividing line separating Linux developers from GNU developers was largely generational. Many Linux hackers, like Torvalds, had grown up in a world of proprietary software. They had begun contributing to free software without perceiving any injustice in non-free software. For most of them, nothing was at stake beyond convenience. Unless a program was technically inferior, they saw little reason to reject it on licensing issues alone. Some day hackers might develop a free software alternative to PowerPoint. Until then, why criticize PowerPoint or Microsoft; why not use it?
752:
Raymond says the response was enthusiastic, but not nearly as enthusiastic as the one he received during the 1997 Linux Kongress, a gathering of GNU/Linux users in Germany the next spring.
754:
Eventually, Raymond would convert the speech into a paper, also titled “The Cathedral and the Bazaar.” The paper drew its name from Raymond's central analogy. Previously, programs were “cathedrals,” impressive, centrally planned monuments built to stand the test of time. Linux, on the other hand, was more like “a great babbling bazaar,” a software program developed through the loose decentralizing dynamics of the Internet.
755:
Raymond's paper associated the Cathedral style, which he and Stallman and many others had used, specifically with the GNU Project and Stallman, thus casting the contrast between development models as a comparison between Stallman and Torvalds. Where Stallman was his chosen example of the classic cathedral architect - i.e., a programming “wizard” who could disappear for 18 months and return with something like the GNU C Compiler - Torvalds was more like a genial dinner-party host. In letting others lead the Linux design discussion and stepping in only when the entire table needed a referee, Torvalds had created a development model very much reflective of his own laid-back personality. From Torvalds' perspective, the most important managerial task was not imposing control but keeping the ideas flowing.
756:
Summarized Raymond, “I think Linus's cleverest and most consequential hack was not the construction of the Linux kernel itself, but rather his invention of the Linux development model.” 124
124.See Eric Raymond, “The Cathedral and the Bazaar” (1997).
759:
When Netscape CEO Jim Barksdale cited Raymond's “Cathedral and the Bazaar” essay as a major influence upon the company's decision, Raymond was instantly elevated to the level of hacker celebrity. He invited a few people to talk, including Larry Augustin, founder of VA Research, which sold workstations with the GNU/Linux operating system pre-installed; Tim O'Reilly, founder of the publisher O'Reilly & Associates; and Christine Peterson, president of the Foresight Institute, a Silicon Valley think tank specializing in nanotechnology. “The meeting's agenda boiled down to one item: how to take advantage of Netscape's decision so that other companies might follow suit?”
768:
Snub or no snub, both O'Reilly and Raymond say the term “open-source” won over just enough summit-goers to qualify as a success. The attendees shared ideas and experiences and brainstormed on how to improve free software's image. Of key concern was how to point out the successes of free software, particularly in the realm of Internet infrastructure, as opposed to playing up the GNU/Linux challenge to Microsoft Windows. But like the earlier meeting at VA, the discussion soon turned to the problems associated with the term “free software.” O'Reilly, the summit host, remembers a comment from Torvalds, a summit attendee.
778:
In addition, Stallman thought that the ideas of “open source” led people to put too much emphasis on winning the support of business. While such support wasn't necessarily bad in itself, he expected that being too desperate for it would lead to harmful compromises. “Negotiation 101 would teach you that if you are desperate to get someone's agreement, you are asking for a bad deal,” he says. “You need to be prepared to say no.” Summing up his position at the 1999 LinuxWorld Convention and Expo, an event billed by Torvalds himself as a “coming out party” for the “Linux” community, Stallman implored his fellow hackers to resist the lure of easy compromise.
780:
Even before the LinuxWorld show, however, Stallman was showing an increased willingness to alienate open source supporters. A few months after the Freeware Summit, O'Reilly hosted its second annual Perl Conference. This time around, Stallman was in attendance. During a panel discussion lauding IBM's decision to employ the free software Apache web server in its commercial offerings, Stallman, taking advantage of an audience microphone, made a sharp denunciation of panelist John Ousterhout, creator of the Tcl scripting language. Stallman branded Ousterhout a “parasite” on the free software community for marketing a proprietary version of Tcl via Ousterhout's startup company, Scriptics. Ousterhout had stated that Scriptics would contribute only the barest minimum of its improvements to the free version of Tcl, meaning it would in effect use that small contribution to win community approval for a much larger amount of non-free software development. Stallman rejected this position and denounced Scriptics' plans. “I don't think Scriptics is necessary for the continued existence of Tcl,” Stallman said to hisses from fellow audience members. 126
126.Ibid.
786:
Stallman's energies would do little to counteract the public-relations momentum of open source proponents. In August of 1998, when chip-maker Intel purchased a stake in GNU/Linux vendor Red Hat, an accompanying New York Times article described the company as the product of a movement “known alternatively as free software and open source.” 128 Six months later, a John Markoff article on Apple Computer was proclaiming the company's adoption of the “open source” Apache server in the article headline. 129
128.See Amy Harmon, “For Sale: Free Operating System,” New York Times (September 28, 1998),
http://www.nytimes.com/library/tech/98/09/biztech/articles/28linux.html.
129.See John Markoff, “Apple Adopts 'Open Source' for its Server Computers,” New York Times (March 17, 1999),
http://www.nytimes.com/library/tech/99/03/biztech/articles/17apple.html.
787:
Such momentum would coincide with the growing momentum of companies that actively embraced the “open source” term. By August of 1999, Red Hat, a company that now eagerly billed itself as “open source,” was selling shares on Nasdaq. In December, VA Linux - formerly VA Research - was floating its own IPO to historic effect. Opening at $30 per share, the company's stock price exploded past the $300 mark in initial trading only to settle back down to the $239 level. Shareholders lucky enough to get in at the bottom and stay until the end experienced a 698% increase in paper wealth, a Nasdaq record. Eric Raymond, as a board member, owned shares worth $36 million. However, these high prices were temporary; they tumbled when the dot-com boom ended.
789:
These methods won great success for open source, but not for the ideals of free software. What they had done to “spread the message” was to omit the most important part of it: the idea of freedom as an ethical issue. The effects of this omission are visible today: as of 2009, nearly all GNU/Linux distributions include proprietary programs, Torvalds' version of Linux contains proprietary firmware programs, and the company formerly called VA Linux bases its business on proprietary software. Over half of all the world's web servers run some version of Apache, and the usual version of Apache is free software, but many of those sites run a proprietary modified version distributed by IBM.
792:
Ironically, the success of open source and open source advocates such as Raymond would not diminish Stallman's role as a leader - but it would lead many to misunderstand what he is a leader of. Since the free software movement lacks the corporate and media recognition of open source, most users of GNU/Linux do not hear that it exists, let alone what its views are. They have heard the ideas and values of open source, and they never imagine that Stallman might have different ones. Thus he receives messages thanking him for his advocacy of “open source,” and explains in response that he has never been a supporter of that, using the occasion to inform the sender about free software.
829:
Four years after “The Cathedral and the Bazaar,” Stallman still chafes over the Raymond critique. He also grumbles over Linus Torvalds' elevation to the role of world's most famous hacker. He recalls a popular T-shirt that began showing up at Linux trade shows around 1999. Designed to mimic the original promotional poster for Star Wars, the shirt depicted Torvalds brandishing a light-saber like Luke Skywalker, while Stallman's face rode atop R2D2. The shirt still grates on Stallman's nerves not only because it depicts him as Torvalds' sidekick, but also because it elevates Torvalds to the leadership role in the free software community, a role even Torvalds himself is loath to accept. “It's ironic,” says Stallman mournfully. “Picking up that sword is exactly what Linus refuses to do. He gets everybody focusing on him as the symbol of the movement, and then he won't fight. What good is it?”
830:
Then again, it is that same unwillingness to “pick up the sword,” on Torvalds' part, that has left the door open for Stallman to bolster his reputation as the hacker community's ethical arbiter. Despite his grievances, Stallman has to admit that the last few years have been quite good, both to himself and to his organization. Relegated to the periphery by the ironic success of the GNU/Linux system because users thought of it as “Linux,” Stallman has nonetheless successfully recaptured the initiative. His speaking schedule between January 2000 and December 2001 included stops on six continents and visits to countries where the notion of software freedom carries heavy overtones - China and India, for example.
831:
Outside the bully pulpit, Stallman has taken advantage of the leverage of the GNU General Public License (GPL), of which he remains the steward. During the summer of 2000, while the air was rapidly leaking out of the 1999 Linux IPO bubble, Stallman and the Free Software Foundation scored two major victories. In July, 2000, Trolltech, a Norwegian software company and developer of Qt, a graphical interface library that ran on the GNU/Linux operating system, announced it was licensing its software under the GPL. A few weeks later, Sun Microsystems, a company that, until then, had been warily trying to ride the open source bandwagon without actually contributing its code, finally relented and announced that it, too, was dual licensing its new OpenOffice 131 application suite under the GNU Lesser General Public License (LGPL) and the Sun Industry Standards Source License (SISSL).
131.Sun was compelled by a trademark complaint to use the clumsy name “OpenOffice.org.”
832:
In the case of Trolltech, this victory was the result of a protracted effort by the GNU Project. The non-freeness of Qt was a serious problem for the free software community because KDE, a free graphical desktop environment that was becoming popular, depended on it. Qt was non-free software, but Trolltech had invited free software projects (such as KDE) to use it gratis. Although KDE itself was free software, users who insisted on freedom couldn't run it, since they had to reject Qt. Stallman recognized that many users would want a graphical desktop on GNU/Linux, and most would not value freedom enough to reject the temptation of KDE, with Qt hiding within. The danger was that GNU/Linux would become a motor for the installation of KDE, and therefore also of non-free Qt. This would undermine the freedom which was the purpose of GNU.
833:
To deal with this danger, Stallman recruited people to launch two parallel counter projects. One was GNOME, the GNU free graphical desktop environment. The other was Harmony, a compatible free replacement for Qt. If GNOME succeeded, KDE would not be necessary; if Harmony succeeded, KDE would not need Qt. Either way, users would be able to have a graphical desktop on GNU/Linux without non-free Qt.
836:
Sun desired to play according to the Free Software Foundation's conditions. At the 1999 O'Reilly Open Source Conference, Sun Microsystems co-founder and chief scientist Bill Joy defended his company's “community source” license, essentially a watered-down compromise letting users copy and modify Sun-owned software but not sell copies of said software without negotiating a royalty agreement with Sun. (With this restriction, the license did not qualify as free, nor for that matter as open source.) A year after Joy's speech, Sun Microsystems vice president Marco Boerries was appearing on the same stage spelling out the company's new licensing compromise in the case of OpenOffice, an office-application suite designed specifically for the GNU/Linux operating system.
869:
What history says about the GNU Project, twenty years from now, will depend on who wins the battle of freedom to use public knowledge. If we lose, we will be just a footnote. If we win, it is uncertain whether people will know the role of the GNU operating system - if they think the system is “Linux,” they will build a false picture of what happened and why.
876:
In an effort to drive that image home, Moglen reflects on a shared moment in the spring of 2000. The success of the VA Linux IPO was still resonating in the business media, and a half dozen issues related to free software were swimming through the news. Surrounded by a swirling hurricane of issues and stories each begging for comment, Moglen recalls sitting down for lunch with Stallman and feeling like a castaway dropped into the eye of the storm. For the next hour, he says, the conversation calmly revolved around a single topic: strengthening the GPL.
886:
The story starts in April, 2000. At the time, I was writing stories for the ill-fated web site BeOpen.com. One of my first assignments was a phone interview with Richard M. Stallman. The interview went well, so well that Slashdot (http://www.slashdot.org), the popular “news for nerds” site owned by VA Software, Inc. (formerly VA Linux Systems and before that, VA Research), gave it a link in its daily list of feature stories. Within hours, the web servers at BeOpen were heating up as readers clicked over to the site.
891:
I read your interview with Richard Stallman on BeOpen with great interest. I've been intrigued by RMS and his work for some time now and was delighted to find your piece which I really think you did a great job of capturing some of the spirit of what Stallman is trying to do with GNU-Linux and the Free Software Foundation.
896:
I have to admit, getting Stallman to participate in an e-book project was an afterthought on my part. As a reporter who covered the open source beat, I knew Stallman was a stickler. I'd already received a half dozen emails at that point upbraiding me for the use of “Linux” instead of “GNU/Linux.”
938:
During the summer, I began to contemplate turning my interview notes into a magazine article. Ethically, I felt in the clear doing so, since the original interview terms said nothing about traditional print media. To be honest, I also felt a bit more comfortable writing about Stallman after eight months of radio silence. Since our telephone conversation in September, I'd only received two emails from Stallman. Both chastised me for using “Linux” instead of “GNU/Linux” in a pair of articles for the web magazine Upside Today. Aside from that, I had enjoyed the silence. In June, about a week after the New York University speech, I took a crack at writing a 5,000-word magazine-length story about Stallman. This time, the words flowed. The distance had helped restore my lost sense of emotional perspective, I suppose.