author     Ralph Amissah <ralph.amissah@gmail.com>  2007-05-22 05:42:13 +0100
committer  Ralph Amissah <ralph.amissah@gmail.com>  2007-05-22 05:42:13 +0100
commit     8ed4d8a89190543f9cfb983ac038d6a60afd6c70 (patch)
tree       84328c76104308b9ab7d27728c9abc66cfb63dee /data
Imported upstream version 1.0.6  (tag: upstream/1.0.6)
Diffstat (limited to 'data')
-rw-r--r--  data/sisu_markup_samples/non-free/README | 32
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/DebTuxRuby3.png | bin 0 -> 24065 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/DebTuxRubySiSU.png | bin 0 -> 23032 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/Gnu_Debian_Linux_Ruby_Better_Way.png | bin 0 -> 33396 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/b_doc.png | bin 0 -> 274 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/ffa.png | bin 0 -> 32992 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/free.for.all.png | bin 0 -> 32992 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture.home.png | bin 0 -> 6931 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture01.png | bin 0 -> 19117 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture02.png | bin 0 -> 30246 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture03.png | bin 0 -> 14840 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture04.png | bin 0 -> 14652 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture05.png | bin 0 -> 12639 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture06.png | bin 0 -> 28936 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture07.png | bin 0 -> 19156 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture08.png | bin 0 -> 28874 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture09.png | bin 0 -> 19331 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture10.png | bin 0 -> 33973 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture11.png | bin 0 -> 39219 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture12.png | bin 0 -> 25890 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture13.png | bin 0 -> 26971 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture14.png | bin 0 -> 26120 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture15.png | bin 0 -> 74370 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture16.png | bin 0 -> 86389 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture17.png | bin 0 -> 115070 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture18.png | bin 0 -> 164985 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture_bcode.png | bin 0 -> 250 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/freeculture_book.png | bin 0 -> 24943 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/gutenberg.home.png | bin 0 -> 5911 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/gutenberg_icon.png | bin 0 -> 10152 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/lessig.jpg | bin 0 -> 8194 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/rdgl.png | bin 0 -> 11164 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/ruby_takes_over.png | bin 0 -> 551086 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/sisu.deb.tux.ruby.png | bin 0 -> 24065 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/sisu.home.png | bin 0 -> 2049 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/sisu.png | bin 0 -> 3260 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/thumb_ruby_takes_over.png | bin 0 -> 24334 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/tux_ruby.png | bin 0 -> 7480 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/wayner.home.png | bin 0 -> 2396 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/wayner.png | bin 0 -> 2396 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler.png | bin 0 -> 44338 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_2_1.png | bin 0 -> 93861 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_6_1.png | bin 0 -> 40234 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_1.png | bin 0 -> 75815 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_3a.png | bin 0 -> 103181 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_3b.png | bin 0 -> 97016 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_4.png | bin 0 -> 89891 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_5.png | bin 0 -> 59459 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_6.png | bin 0 -> 54155 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_9_1.png | bin 0 -> 93565 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/image/won_benkler_book.png | bin 0 -> 16795 bytes
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/dir/skin_sisu.rb | 105
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_gnu.rb | 96
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_gutenberg.rb | 218
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_lessig.rb | 80
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_wayner.rb | 96
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_won_benkler.rb | 78
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/site/skin_sisu.rb | 105
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_countries.yaml | 482
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_country.yaml | 735
-rw-r--r--  data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_lexAddress.yaml | 207
-rw-r--r--  data/sisu_markup_samples/non-free/autonomy_markup0.sst | 199
-rw-r--r--  data/sisu_markup_samples/non-free/autonomy_markup1.sst | 197
-rw-r--r--  data/sisu_markup_samples/non-free/autonomy_markup2.sst | 355
-rw-r--r--  data/sisu_markup_samples/non-free/autonomy_markup3.sst | 202
-rw-r--r--  data/sisu_markup_samples/non-free/free_culture.lawrence_lessig.sst | 4834
-rw-r--r--  data/sisu_markup_samples/non-free/free_for_all.peter_wayner.sst | 3269
-rw-r--r--  data/sisu_markup_samples/non-free/the_cathedral_and_the_bazaar.eric_s_raymond.sst | 592
-rw-r--r--  data/sisu_markup_samples/non-free/the_wealth_of_networks.book_index.yochai_benkler.sst | 1847
-rw-r--r--  data/sisu_markup_samples/non-free/the_wealth_of_networks.yochai_benkler.sst | 2165
-rw-r--r--  data/sisu_markup_samples/non-free/un_contracts_international_sale_of_goods_convention_1980.sst | 783
71 files changed, 16677 insertions(+), 0 deletions(-)
diff --git a/data/sisu_markup_samples/non-free/README b/data/sisu_markup_samples/non-free/README
new file mode 100644
index 0000000..f346a36
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/README
@@ -0,0 +1,32 @@
+Note on sisu markup 2006-11-27, Ralph Amissah
+
+Contains the following books:
+ * "Free Culture", Lawrence Lessig
+ * "The Wealth of Networks", Yochai Benkler
+ * "Free For All", Peter Wayner
+
+The main sisu archive contains:
+ * "Free as in Freedom", Sam Williams (about Richard Stallman)
+
+These documents, provided as markup samples, were published under various
+Creative Commons licenses; check the rights section of each document for the
+copyright and license.
+
+--------
+
+SiSU >= 0.38 ships with document markup samples prepared with a new notation
+for document structure.
+
+This note is to point out that sisu-0.38 should be able to process both the new
+and older markup, and conversion options are included in sisu to make conversion
+between 0.36 and 0.38 markup versions fairly simple. For help see the man pages,
+or type 'sisu --help convert'.
+
+SiSU markup sample Notes:
+SiSU <http://www.jus.uio.no/sisu>
+SiSU markup for 0.16 and later:
+ 0.20.4 header 0~links
+ 0.22 may drop image dimensions (rmagick)
+ 0.23 utf-8 ß
+ 0.38 or later, may use alternative notation for headers, e.g. @title: (instead of 0~title)
+ 0.38 document structure alternative markup, experimental (rad) A,B,C,1,2,3 maps to 1,2,3,4,5,6
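The README above notes that 0.38 introduced an alternative header notation (e.g. `@title:` instead of `0~title`) and that sisu ships conversion options (`sisu --help convert`). As an illustration only, the rewrite amounts to a one-line substitution; the method name below is hypothetical and this is not the converter shipped with sisu:

```ruby
# Hypothetical sketch of the 0.36 -> 0.38 header rewrite the README describes:
# the older "0~key value" header notation becomes "@key: value".
# Not sisu's actual converter; use 'sisu --help convert' for that.
def convert_header_notation(line)
  line.sub(/\A0~(\w+)\s+/) { "@#{$1}: " }
end

puts convert_header_notation('0~title Free Culture')   # => @title: Free Culture
puts convert_header_notation('@title: Free Culture')   # unchanged, already new-style
```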
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/DebTuxRuby3.png b/data/sisu_markup_samples/non-free/_sisu/image/DebTuxRuby3.png
new file mode 100644
index 0000000..327d3ca
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/DebTuxRuby3.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/DebTuxRubySiSU.png b/data/sisu_markup_samples/non-free/_sisu/image/DebTuxRubySiSU.png
new file mode 100644
index 0000000..06109cd
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/DebTuxRubySiSU.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/Gnu_Debian_Linux_Ruby_Better_Way.png b/data/sisu_markup_samples/non-free/_sisu/image/Gnu_Debian_Linux_Ruby_Better_Way.png
new file mode 100644
index 0000000..ce5b883
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/Gnu_Debian_Linux_Ruby_Better_Way.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/b_doc.png b/data/sisu_markup_samples/non-free/_sisu/image/b_doc.png
new file mode 100644
index 0000000..13ca8eb
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/b_doc.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/ffa.png b/data/sisu_markup_samples/non-free/_sisu/image/ffa.png
new file mode 100644
index 0000000..ab2256c
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/ffa.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/free.for.all.png b/data/sisu_markup_samples/non-free/_sisu/image/free.for.all.png
new file mode 100644
index 0000000..ab2256c
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/free.for.all.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture.home.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture.home.png
new file mode 100644
index 0000000..3d47f5d
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture.home.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture01.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture01.png
new file mode 100644
index 0000000..6167da7
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture01.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture02.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture02.png
new file mode 100644
index 0000000..b3e27f2
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture02.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture03.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture03.png
new file mode 100644
index 0000000..294ef07
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture03.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture04.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture04.png
new file mode 100644
index 0000000..11e3723
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture04.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture05.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture05.png
new file mode 100644
index 0000000..01a0978
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture05.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture06.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture06.png
new file mode 100644
index 0000000..cc5bfad
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture06.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture07.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture07.png
new file mode 100644
index 0000000..177745f
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture07.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture08.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture08.png
new file mode 100644
index 0000000..a82c2fe
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture08.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture09.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture09.png
new file mode 100644
index 0000000..a440b83
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture09.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture10.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture10.png
new file mode 100644
index 0000000..db76856
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture10.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture11.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture11.png
new file mode 100644
index 0000000..52d70a9
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture11.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture12.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture12.png
new file mode 100644
index 0000000..140db0f
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture12.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture13.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture13.png
new file mode 100644
index 0000000..3c716c0
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture13.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture14.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture14.png
new file mode 100644
index 0000000..cccfa69
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture14.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture15.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture15.png
new file mode 100644
index 0000000..19db29c
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture15.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture16.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture16.png
new file mode 100644
index 0000000..919a54d
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture16.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture17.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture17.png
new file mode 100644
index 0000000..1f94fc2
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture17.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture18.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture18.png
new file mode 100644
index 0000000..b18dc8d
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture18.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture_bcode.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture_bcode.png
new file mode 100644
index 0000000..c318556
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture_bcode.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/freeculture_book.png b/data/sisu_markup_samples/non-free/_sisu/image/freeculture_book.png
new file mode 100644
index 0000000..89dc002
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/freeculture_book.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/gutenberg.home.png b/data/sisu_markup_samples/non-free/_sisu/image/gutenberg.home.png
new file mode 100644
index 0000000..e6e021c
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/gutenberg.home.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/gutenberg_icon.png b/data/sisu_markup_samples/non-free/_sisu/image/gutenberg_icon.png
new file mode 100644
index 0000000..2f6466b
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/gutenberg_icon.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/lessig.jpg b/data/sisu_markup_samples/non-free/_sisu/image/lessig.jpg
new file mode 100644
index 0000000..7c0f716
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/lessig.jpg
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/rdgl.png b/data/sisu_markup_samples/non-free/_sisu/image/rdgl.png
new file mode 100644
index 0000000..979471d
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/rdgl.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/ruby_takes_over.png b/data/sisu_markup_samples/non-free/_sisu/image/ruby_takes_over.png
new file mode 100644
index 0000000..be93387
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/ruby_takes_over.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/sisu.deb.tux.ruby.png b/data/sisu_markup_samples/non-free/_sisu/image/sisu.deb.tux.ruby.png
new file mode 100644
index 0000000..327d3ca
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/sisu.deb.tux.ruby.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/sisu.home.png b/data/sisu_markup_samples/non-free/_sisu/image/sisu.home.png
new file mode 100644
index 0000000..202d8c4
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/sisu.home.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/sisu.png b/data/sisu_markup_samples/non-free/_sisu/image/sisu.png
new file mode 100644
index 0000000..b449fa6
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/sisu.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/thumb_ruby_takes_over.png b/data/sisu_markup_samples/non-free/_sisu/image/thumb_ruby_takes_over.png
new file mode 100644
index 0000000..13f1582
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/thumb_ruby_takes_over.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/tux_ruby.png b/data/sisu_markup_samples/non-free/_sisu/image/tux_ruby.png
new file mode 100644
index 0000000..f4a86ed
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/tux_ruby.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/wayner.home.png b/data/sisu_markup_samples/non-free/_sisu/image/wayner.home.png
new file mode 100644
index 0000000..debf9b2
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/wayner.home.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/wayner.png b/data/sisu_markup_samples/non-free/_sisu/image/wayner.png
new file mode 100644
index 0000000..debf9b2
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/wayner.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler.png
new file mode 100644
index 0000000..06d5c14
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_2_1.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_2_1.png
new file mode 100644
index 0000000..20887d4
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_2_1.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_6_1.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_6_1.png
new file mode 100644
index 0000000..b959f84
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_6_1.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_1.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_1.png
new file mode 100644
index 0000000..8aa93f8
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_1.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_3a.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_3a.png
new file mode 100644
index 0000000..4c42126
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_3a.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_3b.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_3b.png
new file mode 100644
index 0000000..3ce123a
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_3b.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_4.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_4.png
new file mode 100644
index 0000000..073d810
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_4.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_5.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_5.png
new file mode 100644
index 0000000..b9e778b
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_5.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_6.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_6.png
new file mode 100644
index 0000000..f53daf3
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_7_6.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_9_1.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_9_1.png
new file mode 100644
index 0000000..814ed19
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_9_1.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_book.png b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_book.png
new file mode 100644
index 0000000..986e17e
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/image/won_benkler_book.png
Binary files differ
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/dir/skin_sisu.rb b/data/sisu_markup_samples/non-free/_sisu/skin/dir/skin_sisu.rb
new file mode 100644
index 0000000..66786ce
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/dir/skin_sisu.rb
@@ -0,0 +1,105 @@
+=begin
+ * Name: SiSU - Simple information Structuring Universe - Structured information, Serialized Units
+ * Author: Ralph@Amissah.com
+ * http://www.jus.uio.no/sisu
+ * http://www.jus.uio.no/sisu/SiSU/download
+ * Description: Document skin for SiSU descriptive pages, ...
+ * License: Same as SiSU see http://www.jus.uio.no/sisu
+ * Notes: Site default appearance variables set in defaults.rb
+ Generic site wide modifications set here scribe_skin.rb, and this file required by other "scribes" instead of defaults.rb
+=end
+module SiSU_Viz
+ require SiSU_lib + '/defaults'
+ class Skin
+ #% widget
+ def widget_search
+ true
+ end
+ def widget_promo
+#puts "#{__LINE__} #{__FILE__}"
+ #['sisu','ruby','sisu_search_libre','ruby','open_society']
+ end
+ #% path
+ def path_root
+#puts "#{__LINE__} #{__FILE__}"
+ './sisu/' # the only parameter that cannot be changed here
+ end
+ def path_rel
+#puts "#{__LINE__} #{__FILE__}"
+ '../'
+ end
+ #% url
+ def url_home
+#puts "#{__LINE__} #{__FILE__}"
+ 'http://www.jus.uio.no/sisu/'
+ end
+ def url_site # used in pdf header
+#puts "#{__LINE__} #{__FILE__}"
+ 'http://www.jus.uio.no/sisu'
+ end
+ def url_txt # text to go with url usually stripped url
+#puts "#{__LINE__} #{__FILE__}"
+ 'www.jus.uio.no/sisu/'
+ end
+ def url_home_url
+#puts "#{__LINE__} #{__FILE__}"
+ '../index.html'
+ end
+ #def url_root_http
+ #root server path info, used in document information
+ #end
+ #% color
+ def color_band1
+ '"#ffffff"'
+ end
+ def color_band2
+ '"#ffffff"'
+ end
+ #% text
+ def text_hp
+ '&nbsp;SiSU'
+ end
+ def text_home
+ 'SiSU'
+ end
+ #% icon
+ def icon_home_button
+ 'sisu.png'
+ end
+ def icon_home_banner
+ icon_home_button
+ end
+ #% banner
+ def banner_home_button
+ %{<table summary="home button" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#ffffff"><a href="#{url_site}/">#{png_home}</a></td></tr></table>\n}
+ end
+ def banner_home_and_index_buttons
+ %{<table><tr><td width="20%"><table summary="home and index buttons" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#ffffff"><a href="#{url_site}/" target="_top">#{png_home}</a>#{table_close}</td><td width="60%"><center><center><table summary="buttons" border="1" cellpadding="3" cellspacing="0"><tr><td align="center" bgcolor="#ffffff"><font face="arial" size="2"><a href="toc" target="_top">&nbsp;This&nbsp;text&nbsp;sub-&nbsp;<br />&nbsp;Table&nbsp;of&nbsp;Contents&nbsp;</a></font>#{table_close}</center></center></td><td width="20%">&nbsp;#{table_close}}
+ end
+ def banner_band
+ %{<table summary="band" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#ffffff"><a href="#{url_site}/" target="_top">#{png_home}</a>#{table_close}}
+ end
+ #% credits
+ def credits_splash
+      %{<center><table summary="credits" align="center" bgcolor="#ffffff"><tr><td>#{widget_sisu}#{widget_wayBetter}#{widget_browsers}#{widget_pdfviewers}</td></tr></table></center>}
+ end
+ #% stamp
+ def stamp_stmp
+ "\\copyright Ralph Amissah, released under the GPL \\\\\n ralph@amissah.com \\\\\n www.jus.uio.no/sisu/"
+ end
+ end
+ class TeX
+ def header_center
+ "\\chead{\\href{#{@vz.url_site}/}{www.jus.uio.no/sisu/}}"
+ end
+ def home_url
+ "\\href{#{@vz.url_site}/}{www.jus.uio.no/sisu/}"
+ end
+ def home
+ "\\href{#{@vz.url_site}/}{Ralph Amissah}"
+ end
+ def owner_chapter
+ 'Document owner details'
+ end
+ end
+end
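The skin above customizes site appearance by redefining methods whose defaults live in defaults.rb. A minimal sketch of that override pattern follows, with assumed class and method names rather than SiSU's actual defaults:

```ruby
# Sketch of the skin override pattern (assumed names, not SiSU's defaults.rb):
# a skin redefines only the methods it wants to change; everything else is
# inherited, and callers always go through the methods rather than constants.
class Defaults
  def url_site;    'http://www.example.org'; end
  def color_band1; '"#eeeeee"';              end
end

class Skin < Defaults
  def url_site     # overridden by this skin
    'http://www.jus.uio.no/sisu'
  end
  # color_band1 falls through to the default
end

skin = Skin.new
puts skin.url_site     # => http://www.jus.uio.no/sisu
puts skin.color_band1  # => "#eeeeee"
```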
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_gnu.rb b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_gnu.rb
new file mode 100644
index 0000000..4c35120
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_gnu.rb
@@ -0,0 +1,96 @@
+=begin
+ * Name: SiSU - Simple information Structuring Universe - Structured information, Serialized Units
+ * Author: Ralph Amissah
+ * http://www.jus.uio.no/sisu
+ * http://www.jus.uio.no/sisu/SiSU/download
+ * Description: Free Software Foundation, Gnu sisu skin
+ * License: Same as SiSU see http://www.jus.uio.no/sisu
+ * Notes: Site default appearance variables set in defaults.rb
+ Generic site wide modifications set here scribe_skin.rb, and this file required by other "scribes" instead of defaults.rb
+=end
+module SiSU_Viz
+ require SiSU_lib + '/defaults'
+ class Skin
+ #% widget
+ def widget_promo
+ ['sisu_icon','sisu','sisu_search_libre','open_society','fsf','ruby']
+ end
+ #% home
+ def home_index
+ end
+ def home_toc
+ end
+ #% path
+ def path_root
+ './sisu/' # the only parameter that cannot be changed here
+ end
+ def path_rel
+ '../'
+ end
+ #% url
+ def url_home
+ 'http://www.fsf.org'
+ end
+ def url_site # used in pdf header
+ 'http://www.fsf.org'
+ end
+ def url_txt # text to go with url usually stripped url
+ 'www.fsf.org'
+ end
+ def url_home_url
+ '../index.html'
+ end
+ # color
+ def color_band1
+ '"#000070"'
+ end
+ #% txt
+ def txt_hp
+ 'Free Software Foundation'
+ end
+ def txt_home # this should be the name of the site eg. Lex Mercatoria or if you prefer to see a url the url in text form copy & ...
+ #"www.jus.uio.no/sisu/"
+ 'Free Software Foundation'
+ end
+ #% icon
+ def icon_home_button
+ 'philosophical_gnu.png'
+ end
+ def icon_home_banner
+      icon_home_button
+ end
+ #% banner
+ def banner_home_button
+ %{<table border="0" summary="home button" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#000070"><a href="#{url_site}/">#{png_home}</a></td></tr></table>\n}
+ end
+ def banner_home_and_index_buttons
+ %{<table><tr><td width="20%"><table summary="home and index buttons" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#000070"><a href="#{url_site}/" target="_top">#{png_home}</a>#{table_close}</td><td width="60%"><center><table summary="buttons" border="1" cellpadding="3" cellspacing="0"><tr><td align="center" bgcolor="#f1e8de"><font face="arial" size="2"><a href="toc" target="_top">&nbsp;This&nbsp;text&nbsp;sub-&nbsp;<br />&nbsp;Table&nbsp;of&nbsp;Contents&nbsp;</a></font>#{table_close}</center></td><td width="20%">&nbsp;#{table_close}}
+ end
+ def banner_band
+ %{<table summary="band" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#000070"><a href="#{url_site}/" target="_top">#{png_home}</a>#{table_close}}
+ end
+ #% credits
+ def credits_splash
+      %{<center><table summary="credits" align="center" bgcolor="#ffffff"><tr><td>#{widget_sisu}#{widget_wayBetter}#{widget_browsers}#{widget_pdfviewers}#{table_close}</center>}
+ end
+ end
+ class TeX
+ def header_center
+ "\\chead{\\href{#{@vz.url_site}/}{www.jus.uio.no/sisu/}}"
+ end
+ def home_url
+ "\\href{#{@vz.url_site}/}{www.fsf.org}"
+ end
+ def home
+ "\\href{#{@vz.url_site}/}{Free Software Foundation}"
+ end
+ def owner_chapter
+ "Document owner details"
+ end
+ end
+ class Stamp
+ def stmp
+ "\\copyright Ralph Amissah to be released under the GPL (or QT License equivalent as to be decided) \\\\\n ralph@amissah.com \\\\\n www.jus.uio.no/sisu/"
+ end
+ end
+end
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_gutenberg.rb b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_gutenberg.rb
new file mode 100644
index 0000000..0df9c7f
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_gutenberg.rb
@@ -0,0 +1,218 @@
+=begin
+ * Name: SiSU - Simple information Structuring Universe - Structured information, Serialized Units
+ * Author: Ralph Amissah
+ * http://www.jus.uio.no/sisu
+ * http://www.jus.uio.no/sisu/SiSU/download
+ * Description: Document skin sample prepared for Gutenberg Project (first used with "War and Peace")
+ * License: Same as SiSU see http://www.jus.uio.no/sisu
+ * Notes: Site default appearance variables set in defaults.rb
+ Generic site wide modifications set here scribe_skin.rb, and this file required by other "scribes" instead of defaults.rb
+=end
+module SiSU_Viz
+ require "#{SiSU_lib}/defaults"
+ class Skin
+ #% path
+ def path_root # the only parameter that cannot be changed here
+ './sisu/'
+ end
+ def path_rel
+ '../'
+ end
+ #% url
+ def url_home
+ 'http://www.gutenberg.net'
+ end
+ def url_txt # text to go with url usually stripped url
+ 'www.gutenberg.net'
+ end
+ #% txt
+ def txt_hp
+ 'www.gutenberg.net'
+ end
+ def txt_home
+ 'Gutenberg Project'
+ end
+ #% icon
+ def icon_home_button
+ 'gutenberg.home.png'
+ end
+ def icon_home_banner
+ icon_home_button
+ end
+ #% banner
+ def banner_home_button
+ %{<table summary="home button" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_yellow_dark}><a href="#{url_home}">#{png_home}</a></td></tr></table>\n}
+ end
+ def banner_home_and_index_buttons
+ %{<table><tr><td width="20%"><table summary="home and index buttons" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_yellow_dark}><a href="#{url_home}" target="_top">#{png_home}</a></td></tr></table></td><td width="60%"><center><center><table summary="buttons" border="1" cellpadding="3" cellspacing="0"><tr><td align="center" bgcolor="#f1e8de"><font face="arial" size="2"><a href="toc.html" target="_top">&nbsp;This&nbsp;text&nbsp;sub-&nbsp;<br />&nbsp;Table&nbsp;of&nbsp;Contents&nbsp;</a></font></td></tr></table></center></center></td><td width="20%">&nbsp;</td></tr></table>}
+ end
+ def banner_band
+ %{<table summary="band" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_yellow_dark}><a href="#{url_home}" target="_top">#{png_home}</a>#{table_close}}
+ end
+ #% credits
+ def credits_splash
+      %{<table summary="credits" align="center" bgcolor="#ffffff"><tr><td><font color="black"><center><a href="http://www.gutenberg.net/"><img border="0" align="center" src="../_sisu/image_local/gutenberg_icon.png" alt="Gutenberg Project"><br />Courtesy of The Gutenberg Project</a><br />#{widget_sisu}</center></font></td></tr></table>}
+ end
+ end
+ class TeX
+ def header_center
+ "\\chead{\\href{#{@vz.url_home}}{www.gutenberg.net}}"
+ end
+ def home_url
+ "\\href{#{@vz.url_home}}{www.gutenberg.net}"
+ end
+ def home
+ "\\href{#{@vz.url_home}}{Gutenberg Project}"
+ end
+ def owner_chapter
+ "Document owner details"
+ end
+ def stmp
+ "\\copyright Ralph Amissah, licence GPL \\\\\n www.jus.uio.no/sisu/"
+ end
+ end
+ class Inserts
+ def insert1
+<<CONTENTS
+
+3~ Project Gutenberg~#
+
+4~ Project Gutenberg Notes~#
+
+Copyright laws are changing all over the world, be sure to check the copyright laws for your country before posting these files!!~#
+
+Please take a look at the important information in this header. We encourage you to keep this file on your own disk, keeping an electronic path open for the next readers. Do not remove this.~#
+
+*{It must legally be the first thing seen when opening the book.}* In fact, our legal advisors said we can't even change margins.~#
+
+*{Welcome To The World of Free Plain Vanilla Electronic Texts}*~#
+
+*{Etexts Readable By Both Humans and By Computers, Since 1971}*~#
+
+*{These Etexts Prepared By Hundreds of Volunteers and Donations}*~#
+
+Information on contacting Project Gutenberg to get Etexts, and further information is included below. We need your donations.~#
+
+CONTENTS
+ end
+    def insert2 #note: removed the full stop after http://promo.net/pg and added a space, as this url repeated in the subsequent paragraph broke latex/pdf output; consider modifying the url regexes
+<<CONTENTS
+Project Gutenberg Etexts are usually created from multiple editions, all of which are in the Public Domain in the United States, unless a copyright notice is included. Therefore, we usually do NOT keep any of these books in compliance with any particular paper edition.~#
+
+We are now trying to release all our books one month in advance of the official release dates, leaving time for better editing.~#
+
+Please note: neither this list nor its contents are final till midnight of the last day of the month of any such announcement. The official release date of all Project Gutenberg Etexts is at Midnight, Central Time, of the last day of the stated month. A preliminary version may often be posted for suggestion, comment and editing by those who wish to do so. To be sure you have an up to date first edition [xxxxx10x.xxx] please check file sizes in the first week of the next month. Since our ftp program has a bug in it that scrambles the date [tried to fix and failed] a look at the file size will have to do, but we will try to see a new copy has at least one byte more or less.~#
+
+4~ Information about Project Gutenberg (one page)~#
+
+We produce about two million dollars for each hour we work. The time it takes us, a rather conservative estimate, is fifty hours to get any etext selected, entered, proofread, edited, copyright searched and analyzed, the copyright letters written, etc. This projected audience is one hundred million readers. If our value per text is nominally estimated at one dollar then we produce $2 million dollars per hour this year as we release thirty-six text files per month, or 432 more Etexts in 1999 for a total of 2000+ If these reach just 10% of the computerized population, then the total should reach over 200 billion Etexts given away this year.~#
+
+The Goal of Project Gutenberg is to Give Away One Trillion Etext Files by December 31, 2001. [10,000 x 100,000,000 = 1 Trillion] This is ten thousand titles each to one hundred million readers, which is only ~5% of the present number of computer users. At our revised rates of production, we will reach only one-third of that goal by the end of 2001, or about 3,333 Etexts unless we manage to get some real funding; currently our funding is mostly from Michael Hart's salary at Carnegie-Mellon University, and an assortment of sporadic gifts; this salary is only good for a few more years, so we are looking for something to replace it, as we don't want Project Gutenberg to be so dependent on one person.~#
+
+We need your donations more than ever!~#
+
+All donations should be made to "Project Gutenberg/CMU": and are tax deductible to the extent allowable by law. (CMU = Carnegie-Mellon University).~#
+
+For these and other matters, please mail to:~#
+
+Project Gutenberg~#
+
+P. O. Box 2782~#
+
+Champaign, IL 61825~#
+
+When all other email fails. . .try our Executive Director: Michael S. Hart hart@pobox.com forwards to hart@prairienet.org and archive.org if your mail bounces from archive.org, I will still see it, if it bounces from prairienet.org, better resend later on. . . .~#
+
+We would prefer to send you this information by email.~#
+
+******~#
+
+To access Project Gutenberg etexts, use any Web browser to view http://promo.net/pg This site lists Etexts by author and by title, and includes information about how to get involved with Project Gutenberg. You could also download our past Newsletters, or subscribe here. This is one of our major sites, please email hart@pobox.com, for a more complete list of our various sites.~#
+
+To go directly to the etext collections, use FTP or any Web browser to visit a Project Gutenberg mirror (mirror sites are available on 7 continents; mirrors are listed at http://promo.net/pg ).~#
+
+Mac users, do NOT point and click, typing works better.~#
+
+Example FTP session:~#
+
+ftp metalab.unc.edu~#
+
+login: anonymous~#
+
+password: your@login~#
+
+cd pub/docs/books/gutenberg~#
+
+cd etext90 through etext99 or etext00 through etext01, etc.~#
+
+dir [to see files]~#
+
+get or mget [to get files. . .set bin for zip files]~#
+
+GET GUTINDEX.?? [to get a year's listing of books, e.g., GUTINDEX.99]~#
+
+GET GUTINDEX.ALL [to get a listing of ALL books]~#
+
+***~#
+
+3~ Information prepared by the Project Gutenberg legal advisor** (three pages)~#
+
+4~ THE SMALL PRINT!**FOR PUBLIC DOMAIN ETEXTS~#
+
+Why is this "Small Print!" statement here? You know: lawyers. They tell us you might sue us if there is something wrong with your copy of this etext, even if you got it for free from someone other than us, and even if what's wrong is not our fault. So, among other things, this "Small Print!" statement disclaims most of our liability to you. It also tells you how you can distribute copies of this etext if you want to.~#
+
+5~ *BEFORE!* YOU USE OR READ THIS ETEXT~#
+
+By using or reading any part of this PROJECT GUTENBERG-tm etext, you indicate that you understand, agree to and accept this "Small Print!" statement. If you do not, you can receive a refund of the money (if any) you paid for this etext by sending a request within 30 days of receiving it to the person you got it from. If you received this etext on a physical medium (such as a disk), you must return it with your request.~#
+
+5~ ABOUT PROJECT GUTENBERG-TM ETEXTS~#
+
+This PROJECT GUTENBERG-tm etext, like most PROJECT GUTENBERG-tm etexts, is a "public domain" work distributed by Professor Michael S. Hart through the Project Gutenberg Association at Carnegie-Mellon University (the "Project"). Among other things, this means that no one owns a United States copyright on or for this work, so the Project (and you!) can copy and distribute it in the United States without permission and without paying copyright royalties. Special rules, set forth below, apply if you wish to copy and distribute this etext under the Project's "PROJECT GUTENBERG" trademark.~#
+
+To create these etexts, the Project expends considerable efforts to identify, transcribe and proofread public domain works. Despite these efforts, the Project's etexts and any medium they may be on may contain "Defects". Among other things, Defects may take the form of incomplete, inaccurate or corrupt data, transcription errors, a copyright or other intellectual property infringement, a defective or damaged disk or other etext medium, a computer virus, or computer codes that damage or cannot be read by your equipment.~#
+
+5~ LIMITED WARRANTY; DISCLAIMER OF DAMAGES~#
+
+But for the "Right of Replacement or Refund" described below, [1] the Project (and any other party you may receive this etext from as a PROJECT GUTENBERG-tm etext) disclaims all liability to you for damages, costs and expenses, including legal fees, and [2] YOU HAVE NO REMEDIES FOR NEGLIGENCE OR UNDER STRICT LIABILITY, OR FOR BREACH OF WARRANTY OR CONTRACT, INCLUDING BUT NOT LIMITED TO INDIRECT, CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES, EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGES.~#
+
+If you discover a Defect in this etext within 90 days of receiving it, you can receive a refund of the money (if any) you paid for it by sending an explanatory note within that time to the person you received it from. If you received it on a physical medium, you must return it with your note, and such person may choose to alternatively give you a replacement copy. If you received it electronically, such person may choose to alternatively give you a second opportunity to receive it electronically.~#
+
+THIS ETEXT IS OTHERWISE PROVIDED TO YOU "AS-IS". NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, ARE MADE TO YOU AS TO THE ETEXT OR ANY MEDIUM IT MAY BE ON, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.~#
+
+Some states do not allow disclaimers of implied warranties or the exclusion or limitation of consequential damages, so the above disclaimers and exclusions may not apply to you, and you may have other legal rights.~#
+
+5~ INDEMNITY~#
+
+You will indemnify and hold the Project, its directors, officers, members and agents harmless from all liability, cost and expense, including legal fees, that arise directly or indirectly from any of the following that you do or cause: [1] distribution of this etext, [2] alteration, modification, or addition to the etext, or [3] any Defect.~#
+
+5~ DISTRIBUTION UNDER "PROJECT GUTENBERG-tm"~#
+
+You may distribute copies of this etext electronically, or by disk, book or any other medium if you either delete this "Small Print!" and all other references to Project Gutenberg, or:~#
+
+*{[1]}* Only give exact copies of it. Among other things, this requires that you do not remove, alter or modify the etext or this "small print!" statement. You may however, if you wish, distribute this etext in machine readable binary, compressed, mark-up, or proprietary form, including any form resulting from conversion by word pro- cessing or hypertext software, but only so long as *{EITHER}*:~#
+
+_1 *{[*]}* The etext, when displayed, is clearly readable, and does *not* contain characters other than those intended by the author of the work, although tilde (~), asterisk (*) and underline (_) characters may be used to convey punctuation intended by the author, and additional characters may be used to indicate hypertext links; OR~#
+
+_1 *{[*]}* The etext may be readily converted by the reader at no expense into plain ASCII, EBCDIC or equivalent form by the program that displays the etext (as is the case, for instance, with most word processors); OR~#
+
+_1 *{[*]}* You provide, or agree to also provide on request at no additional cost, fee or expense, a copy of the etext in its original plain ASCII form (or in EBCDIC or other equivalent proprietary form).~#
+
+*{[2]}* Honor the etext refund and replacement provisions of this "Small Print!" statement.~#
+
+*{[3]}* Pay a trademark license fee to the Project of 20% of the net profits you derive calculated using the method you already use to calculate your applicable taxes. If you don't derive profits, no royalty is due. Royalties are payable to "Project Gutenberg Association/Carnegie-Mellon University" within the 60 days following each date you prepare (or were legally required to prepare) your annual (or equivalent periodic) tax return.~#
+
+5~ WHAT IF YOU *WANT* TO SEND MONEY EVEN IF YOU DON'T HAVE TO?~#
+
+The Project gratefully accepts contributions in money, time, scanning machines, OCR software, public domain etexts, royalty free copyright licenses, and every other sort of contribution you can think of. Money should be paid to "Project Gutenberg Association / Carnegie-Mellon University".~#
+
+We are planning on making some changes in our donation structure in 2000, so you might want to email me, hart@pobox.com beforehand.~#
+
+*END THE SMALL PRINT! FOR PUBLIC DOMAIN ETEXTS*Ver.04.29.93*END*~#
+
+<!pn!>
+
+CONTENTS
+ end
+ end
+end
+
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_lessig.rb b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_lessig.rb
new file mode 100644
index 0000000..0a61c70
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_lessig.rb
@@ -0,0 +1,80 @@
+=begin
+ * Name: SiSU - Simple information Structuring Universe - Structured information, Serialized Units
+ * Author: Ralph Amissah
+ * http://www.jus.uio.no/sisu
+ * http://www.jus.uio.no/sisu/SiSU/download
+ * Description: Skin prepared for Free Culture, Lawrence Lessig
+ * arch-tag: skin for an individual document set (lessig - freeculture)
+ * License: Same as SiSU see http://www.jus.uio.no/sisu
+ * $Date$
+ * $Id$
+ * Notes: Site default appearance variables set in defaults.rb
+ Generic site wide modifications set here scribe_skin.rb, and this file required by other "scribes" instead of defaults.rb
+=end
+module SiSU_Viz
+ require SiSU_lib + '/defaults'
+ class Skin
+ #def path_root # the only parameter that cannot be changed here
+ # './sisu/'
+ #end
+ #def path_rel
+ # '../'
+ #end
+ #def url_hp # used by wmap, get rid of ie make it seek home instead
+ # 'http://www.free-culture.cc/'
+ #end
+ def url_home
+ 'http://www.free-culture.cc'
+ end
+ def url_txt # text to go with url usually stripped url
+ 'www.lessig.org'
+ end
+ #def url_root_http
+ #root server path info, used in document information
+ #end
+ def color_band1
+ '"#000000"'
+ end
+ def txt_hp
+ 'www.lessig.org'
+ end
+ def txt_home
+ 'Lawrence Lessig'
+ end
+ def icon_home_button
+ 'freeculture.home.png'
+ end
+ def icon_home_banner
+ icon_home_button
+ end
+ def banner_home_button
+ %{<table summary="home button" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_black}><a href="#{url_home}">#{png_home}</a></td></tr></table>\n}
+ end
+ def banner_home_and_index_buttons
+ %{<table><tr><td width="20%"><table summary="home and index buttons" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_black}><a href="#{url_home}" target="_top">#{png_home}</a></td><td width="40%"><center><table summary="buttons" border="1" cellpadding="3" cellspacing="0"><tr><td align="center" bgcolor="#f1e8de"><font face="arial" size="2"><a href="toc.html" target="_top">&nbsp;This&nbsp;text&nbsp;sub-&nbsp;<br />&nbsp;Table&nbsp;of&nbsp;Contents&nbsp;</a></font>#{table_close}</center></td><td width="20%">&nbsp;#{table_close}}
+ end
+ def banner_band
+ %{<table summary="band" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_black}><a href="#{url_home}" target="_top">#{png_home}</a>#{table_close}}
+ end
+ def credits_splash
+ %{<table summary="credits" align="center" bgcolor="#ffffff"><tr><td><font color="black"><center><img border="0" align="center" src="../_sisu/image_local/freeculture_bcode.png" alt="Free Culture Bar Code"><br />Available at Amazon.com<br /><a href="http://www.amazon.com/exec/obidos/tg/detail/-/1594200068/"><img border="0" align="center" src="../_sisu/image_local/freeculture_book.png" alt="Free Culture at Amazon.com"></a><br />This book is Copyright Lawrence Lessig © 2004<br />Under a Creative Commons License, that permits non-commercial use of this work, provided attribution is given.<br />See <a href="http://www.free-culture.cc/">http://www.free-culture.cc/</a><br /><a href="mailto://lessig@pobox.com">lessig@pobox.com</a><br />#{widget_sisu}</center></font></td></tr></table>}
+ end
+ end
+ class TeX
+ def header_center
+ "\\chead{\\href{#{@vz.url_home}}{lessig.org}}"
+ end
+ def home_url
+ "\\href{#{@vz.url_home}}{lessig.org}"
+ end
+ def home
+ "\\href{#{@vz.url_home}}{Lawrence Lessig}"
+ end
+ def owner_chapter
+ "Document owner details"
+ end
+ def stmp
+ "\\copyright Ralph Amissah, licence GPL \\\\\n www.jus.uio.no/sisu/"
+ end
+ end
+end
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_wayner.rb b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_wayner.rb
new file mode 100644
index 0000000..a1d1541
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_wayner.rb
@@ -0,0 +1,96 @@
+=begin
+ * Name: SiSU - Simple information Structuring Universe - Structured information, Serialized Units
+ * Author: Ralph Amissah
+ * http://www.jus.uio.no/sisu
+ * http://www.jus.uio.no/sisu/SiSU/download
+ * Description: Document skin for "Free For All"
+ * arch-tag: skin for an individual document set (wayner)
+ * License: Same as SiSU see http://www.jus.uio.no/sisu
+ * $Date$
+ * $Id$
+ * Notes: Site default appearance variables set in defaults.rb
+ Generic site wide modifications set here scribe_skin.rb, and this file required by other "scribes" instead of defaults.rb
+=end
+module SiSU_Viz
+ require "#{SiSU_lib}/defaults"
+ class Skin
+ #% path
+ def path_root # the only parameter that cannot be changed here
+ './sisu/'
+ end
+ def path_rel
+ '../'
+ end
+ #% url
+ #def url_hp # used by wmap, get rid of ie make it seek home instead
+ # 'http://www.wayner.org/books/ffa/'
+ #end
+ def url_home
+ 'http://www.wayner.org/books/ffa/'
+ end
+ def url_txt # text to go with url usually stripped url
+ 'www.wayner.org'
+ end
+ #def url_root_http
+ #root server path info, used in document information
+ #end
+ #% color
+ def color_band1
+ '"#000070"'
+ end
+ #% txt
+ def txt_hp
+ 'www.wayner.org'
+ end
+ def txt_home
+ 'Peter Wayner'
+ end
+ #% icon
+ def icon_home_button
+ 'wayner.home.png'
+ end
+ def icon_home_banner
+ icon_home_button
+ end
+ def icon_next
+ 'arrow_next_blue.png'
+ end
+ def icon_previous
+ 'arrow_prev_blue.png'
+ end
+ def icon_up
+ 'arrow_up_blue.png'
+ end
+ #% banner
+ def banner_home_button
+ %{<table summary="home button" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_yellow_dark}><a href="#{url_home}">#{png_home}</a></td></tr></table>\n}
+ end
+ def banner_home_and_index_buttons
+ %{<table><tr><td width="20%"><table summary="home and index buttons" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_yellow_dark}><a href="#{url_home}" target="_top">#{png_home}</a></td></tr></table></td><td width="60%"><center><center><table summary="buttons" border="1" cellpadding="3" cellspacing="0"><tr><td align="center" bgcolor="#f1e8de"><font face="arial" size="2"><a href="toc.html" target="_top">&nbsp;This&nbsp;text&nbsp;sub-&nbsp;<br />&nbsp;Table&nbsp;of&nbsp;Contents&nbsp;</a></font></td></tr></table></center></center></td><td width="20%">&nbsp;</td></tr></table>}
+ end
+ def banner_band
+ %{<table summary="band" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_yellow_dark}><a href="#{url_home}" target="_top">#{png_home}</a>#{table_close}}
+ end
+ #% credits
+ def credits_splash
+      %{<table summary="credits" align="center" bgcolor="#ffffff"><tr><td><font color="black"><center>Available at Amazon.com<br /><a href="http://www.amazon.com/exec/obidos/tg/detail/-/0066620503/"><img border="0" align="center" src="../_sisu/image/free.for.all.png" alt="Free For All at Amazon.com"></a><br />This book is Copyright © 2000 by Peter Wayner.<br />See <a href="http://www.wayner.org/books/ffa/">http://www.wayner.org/books/ffa/</a><br /><a href="mailto://p3@wayner.org">p3@wayner.org</a><br />#{widget_sisu}</center></font></td></tr></table>}
+ end
+ end
+ class TeX
+ def header_center
+ "\\chead{\\href{#{@vz.url_home}}{www.wayner.org}}"
+ end
+ def home_url
+ "\\href{#{@vz.url_home}}{www.wayner.org}"
+ end
+ def home
+ "\\href{#{@vz.url_home}}{Peter Wayner}"
+ end
+ def owner_chapter
+ "Document owner details"
+ end
+ def stmp
+ "\\copyright Ralph Amissah, licence GPL \\\\\n www.jus.uio.no/sisu/"
+ end
+ end
+end
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_won_benkler.rb b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_won_benkler.rb
new file mode 100644
index 0000000..75c1f7d
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/doc/skin_won_benkler.rb
@@ -0,0 +1,78 @@
+=begin
+ * Name: SiSU - Simple information Structuring Universe - Structured information, Serialized Units
+ * Author: Ralph Amissah
+ * http://www.jus.uio.no/sisu
+ * http://www.jus.uio.no/sisu/SiSU/download
+ * Description: Skin prepared for The Wealth of Networks, Yochai Benkler
+ * License: Same as SiSU see http://www.jus.uio.no/sisu
+ * Notes: Site default appearance variables set in defaults.rb
+ Generic site wide modifications set here scribe_skin.rb, and this file required by other "scribes" instead of defaults.rb
+=end
+module SiSU_Viz
+ require SiSU_lib + '/defaults'
+ class Skin
+ #def path_root # the only parameter that cannot be changed here
+ # './sisu/'
+ #end
+ #def rel
+ # '../'
+ #end
+ def url_home
+ 'http://www.benkler.org'
+ end
+ def url_txt # text to go with url usually stripped url
+ 'www.benkler.org'
+ end
+ def color_band1
+ '"#ffffff"'
+ end
+ def txt_hp
+ 'www.benkler.org'
+ end
+ def txt_home
+ 'Yochai Benkler'
+ end
+ def icon_home_button
+ 'won_benkler.png'
+ end
+ def icon_home_banner
+ icon_home_button
+ end
+ def banner_home_button
+ %{<table summary="home button" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_white}><a href="#{url_home}">#{png_home}</a></td></tr></table>\n}
+ end
+ def banner_home_and_index_buttons
+ %{<table><tr><td width="20%"><table summary="home and index buttons" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_white}><a href="#{url_home}" target="_top">#{png_home}</a></td><td width="40%"><center><table summary="buttons" border="1" cellpadding="3" cellspacing="0"><tr><td align="center" bgcolor="#f1e8de"><font face="arial" size="2"><a href="toc.html" target="_top">&nbsp;This&nbsp;text&nbsp;sub-&nbsp;<br />&nbsp;Table&nbsp;of&nbsp;Contents&nbsp;</a></font>#{table_close}</center></td><td width="20%">&nbsp;#{table_close}}
+ end
+ def banner_band
+ %{<table summary="band" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor=#{color_white}><a href="#{url_home}" target="_top">#{png_home}</a>#{table_close}}
+ end
+ def credits_splash
+ %{<table summary="credits" align="center" bgcolor="#ffffff"><tr><td><font color="black"><center>
+<a href="http://www.benkler.org/wonchapters.html">The original pdf is available online</a> at<br /><a href="http://www.benkler.org/">www.benkler.org</a><br />
+<a href="http://www.benkler.org/wealth_of_networks/index.php/Main_Page"><img border="0" align="center" src="../_sisu/image_local/won_benkler_book.png" alt="available at Amazon.com"></a><br />
+available at<br /><a href="http://www.amazon.com/exec/obidos/tg/detail/-/0300110561/">Amazon.com</a> and <br />
+<a href="http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?isbn=0300110561">Barnes & Noble</a><br />
+This book is Copyright Yochai Benkler © 2006<br />
+Under a Creative Commons License, that permits non-commercial use of this work, provided attribution is given.<br />
+<a href="http://creativecommons.org/licenses/by-nc-sa/2.5/">http://creativecommons.org/licenses/by-nc-sa/2.5/</a><br />#{widget_sisu}</center></font></td></tr></table>}
+ end
+ end
+ class TeX
+ def header_center
+ "\\chead{\\href{#{@vz.url_home}}{www.benkler.org}}"
+ end
+ def home_url
+ "\\href{#{@vz.url_home}}{www.benkler.org}"
+ end
+ def home
+ "\\href{#{@vz.url_home}}{Yochai Benkler}"
+ end
+ def owner_chapter
+ "Document owner details"
+ end
+ def stmp
+ "\\copyright Ralph Amissah, licence GPL \\\\\n www.jus.uio.no/sisu/"
+ end
+ end
+end
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/site/skin_sisu.rb b/data/sisu_markup_samples/non-free/_sisu/skin/site/skin_sisu.rb
new file mode 100644
index 0000000..66786ce
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/site/skin_sisu.rb
@@ -0,0 +1,105 @@
+=begin
+  * Name: SiSU - Simple information Structuring Universe - Structured information, Serialized Units
+ * Author: Ralph@Amissah.com
+ * http://www.jus.uio.no/sisu
+ * http://www.jus.uio.no/sisu/SiSU/download
+ * Description: Document skin for SiSU descriptive pages, ...
+ * License: Same as SiSU see http://www.jus.uio.no/sisu
+ * Notes: Site default appearance variables set in defaults.rb
+ Generic site wide modifications set here scribe_skin.rb, and this file required by other "scribes" instead of defaults.rb
+=end
+module SiSU_Viz
+ require SiSU_lib + '/defaults'
+ class Skin
+ #% widget
+ def widget_search
+ true
+ end
+ def widget_promo
+#puts "#{__LINE__} #{__FILE__}"
+ #['sisu','ruby','sisu_search_libre','ruby','open_society']
+ end
+ #% path
+ def path_root
+#puts "#{__LINE__} #{__FILE__}"
+ './sisu/' # the only parameter that cannot be changed here
+ end
+ def path_rel
+#puts "#{__LINE__} #{__FILE__}"
+ '../'
+ end
+ #% url
+ def url_home
+#puts "#{__LINE__} #{__FILE__}"
+ 'http://www.jus.uio.no/sisu/'
+ end
+ def url_site # used in pdf header
+#puts "#{__LINE__} #{__FILE__}"
+ 'http://www.jus.uio.no/sisu'
+ end
+ def url_txt # text to go with url usually stripped url
+#puts "#{__LINE__} #{__FILE__}"
+ 'www.jus.uio.no/sisu/'
+ end
+ def url_home_url
+#puts "#{__LINE__} #{__FILE__}"
+ '../index.html'
+ end
+ #def url_root_http
+ #root server path info, used in document information
+ #end
+ #% color
+ def color_band1
+ '"#ffffff"'
+ end
+ def color_band2
+ '"#ffffff"'
+ end
+ #% text
+ def text_hp
+ '&nbsp;SiSU'
+ end
+ def text_home
+ 'SiSU'
+ end
+ #% icon
+ def icon_home_button
+ 'sisu.png'
+ end
+ def icon_home_banner
+ icon_home_button
+ end
+ #% banner
+ def banner_home_button
+ %{<table summary="home button" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#ffffff"><a href="#{url_site}/">#{png_home}</a></td></tr></table>\n}
+ end
+ def banner_home_and_index_buttons
+ %{<table><tr><td width="20%"><table summary="home and index buttons" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#ffffff"><a href="#{url_site}/" target="_top">#{png_home}</a>#{table_close}</td><td width="60%"><center><center><table summary="buttons" border="1" cellpadding="3" cellspacing="0"><tr><td align="center" bgcolor="#ffffff"><font face="arial" size="2"><a href="toc" target="_top">&nbsp;This&nbsp;text&nbsp;sub-&nbsp;<br />&nbsp;Table&nbsp;of&nbsp;Contents&nbsp;</a></font>#{table_close}</center></center></td><td width="20%">&nbsp;#{table_close}}
+ end
+ def banner_band
+ %{<table summary="band" border="0" cellpadding="3" cellspacing="0"><tr><td align="left" bgcolor="#ffffff"><a href="#{url_site}/" target="_top">#{png_home}</a>#{table_close}}
+ end
+ #% credits
+ def credits_splash
+      %{<center><table summary="credits" align="center" bgcolor="#ffffff"><tr><td>#{widget_sisu}#{widget_wayBetter}#{widget_browsers}#{widget_pdfviewers}</td></tr></table></center>}
+ end
+ #% stamp
+ def stamp_stmp
+ "\\copyright Ralph Amissah, released under the GPL \\\\\n ralph@amissah.com \\\\\n www.jus.uio.no/sisu/"
+ end
+ end
+ class TeX
+ def header_center
+ "\\chead{\\href{#{@vz.url_site}/}{www.jus.uio.no/sisu/}}"
+ end
+ def home_url
+ "\\href{#{@vz.url_site}/}{www.jus.uio.no/sisu/}"
+ end
+ def home
+ "\\href{#{@vz.url_site}/}{Ralph Amissah}"
+ end
+ def owner_chapter
+ 'Document owner details'
+ end
+ end
+end
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_countries.yaml b/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_countries.yaml
new file mode 100644
index 0000000..a68903e
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_countries.yaml
@@ -0,0 +1,482 @@
+# arch-tag: yaml country list
+# Author: Ralph@Amissah.com
+# License: Same as SiSU see http://www.jus.uio.no/sisu
+id: AF
+ name: Afghanistan
+id: AL
+ name: Albania
+id: DZ
+ name: Algeria
+id: AS
+ name: American Samoa
+id: AD
+ name: Andorra
+id: AO
+ name: Angola
+id: AI
+ name: Anguilla
+id: AQ
+ name: Antarctica
+id: AG
+ name: Antigua and Barbuda
+id: AR
+ name: Argentina
+id: AM
+ name: Armenia
+id: AW
+ name: Aruba
+id: AU
+ name: Australia
+id: AT
+ name: Austria
+id: AZ
+ name: Azerbaijan
+id: BS
+ name: Bahamas
+id: BH
+ name: Bahrain
+id: BD
+ name: Bangladesh
+id: BB
+ name: Barbados
+id: BY
+ name: Belarus
+id: BE
+ name: Belgium
+id: BZ
+ name: Belize
+id: BJ
+ name: Benin
+id: BM
+ name: Bermuda
+id: BT
+ name: Bhutan
+id: BO
+ name: Bolivia
+id: BA
+ name: Bosnia and Herzegovina
+id: BW
+ name: Botswana
+id: BV
+ name: Bouvet Island
+id: BR
+ name: Brazil
+id: IO
+ name: British Indian Ocean Territory
+id: BN
+ name: Brunei Darussalam
+id: BG
+ name: Bulgaria
+id: BF
+ name: Burkina Faso
+id: BI
+ name: Burundi
+id: KH
+ name: Cambodia
+id: CM
+ name: Cameroon
+id: CA
+ name: Canada
+id: CV
+ name: Cape Verde
+id: KY
+ name: Cayman Islands
+id: CF
+ name: Central African Republic
+id: TD
+ name: Chad
+id: CL
+ name: Chile
+id: CN
+ name: China
+id: CX
+ name: Christmas Island
+id: CC
+ name: Cocos (Keeling) Islands
+id: CO
+ name: Colombia
+id: KM
+ name: Comoros
+id: CG
+ name: Congo
+id: CK
+ name: Cook Islands
+id: CR
+ name: Costa Rica
+id: HR
+ name: Croatia (Hrvatska)
+id: CU
+ name: Cuba
+id: CY
+ name: Cyprus
+id: CZ
+ name: Czech Republic
+id: CS
+ name: Czechoslovakia
+id: DK
+ name: Denmark
+id: DJ
+ name: Djibouti
+id: DM
+ name: Dominica
+id: DO
+ name: Dominican Republic
+id: TP
+ name: East Timor
+id: EC
+ name: Ecuador
+id: EG
+ name: Egypt
+id: SV
+ name: El Salvador
+id: GQ
+ name: Equatorial Guinea
+id: ER
+ name: Eritrea
+id: EE
+ name: Estonia
+id: ET
+ name: Ethiopia
+id: FK
+ name: Falkland Islands (Malvinas)
+id: FO
+ name: Faroe Islands
+id: FJ
+ name: Fiji
+id: FI
+ name: Finland
+id: FR
+ name: France
+id: FX
+  name: France, Metropolitan
+id: GF
+ name: French Guiana
+id: PF
+ name: French Polynesia
+id: TF
+ name: French Southern Territories
+id: GA
+ name: Gabon
+id: GM
+ name: Gambia
+id: GE
+ name: Georgia
+id: DE
+ name: Germany
+id: GH
+ name: Ghana
+id: GI
+ name: Gibraltar
+id: GB
+ name: Great Britain (UK)
+id: GR
+ name: Greece
+id: GL
+ name: Greenland
+id: GD
+ name: Grenada
+id: GP
+ name: Guadeloupe
+id: GU
+ name: Guam
+id: GT
+ name: Guatemala
+id: GN
+ name: Guinea
+id: GW
+ name: Guinea-Bissau
+id: GY
+ name: Guyana
+id: HT
+ name: Haiti
+id: HM
+ name: Heard and McDonald Islands
+id: HN
+ name: Honduras
+id: HK
+ name: Hong Kong
+id: HU
+ name: Hungary
+id: IS
+ name: Iceland
+id: IN
+ name: India
+id: ID
+ name: Indonesia
+id: IR
+ name: Iran
+id: IQ
+ name: Iraq
+id: IE
+ name: Ireland
+id: IL
+ name: Israel
+id: IT
+ name: Italy
+id: CI
+ name: Ivory Coast
+id: JM
+ name: Jamaica
+id: JP
+ name: Japan
+id: JO
+ name: Jordan
+id: KZ
+ name: Kazakhstan
+id: KE
+ name: Kenya
+id: KI
+ name: Kiribati
+id: KP
+ name: Korea (North)
+id: KR
+ name: Korea (South)
+id: KW
+ name: Kuwait
+id: KG
+ name: Kyrgyzstan
+id: LA
+ name: Laos
+id: LV
+ name: Latvia
+id: LB
+ name: Lebanon
+id: LS
+ name: Lesotho
+id: LR
+ name: Liberia
+id: LY
+ name: Libya
+id: LI
+ name: Liechtenstein
+id: LT
+ name: Lithuania
+id: LU
+ name: Luxembourg
+id: MO
+ name: Macau
+id: MK
+ name: Macedonia
+id: MG
+ name: Madagascar
+id: MW
+ name: Malawi
+id: MY
+ name: Malaysia
+id: MV
+ name: Maldives
+id: ML
+ name: Mali
+id: MT
+ name: Malta
+id: MH
+ name: Marshall Islands
+id: MQ
+ name: Martinique
+id: MR
+ name: Mauritania
+id: MU
+ name: Mauritius
+id: YT
+ name: Mayotte
+id: MX
+ name: Mexico
+id: FM
+ name: Micronesia
+id: MD
+ name: Moldova
+id: MC
+ name: Monaco
+id: MN
+ name: Mongolia
+id: MS
+ name: Montserrat
+id: MA
+ name: Morocco
+id: MZ
+ name: Mozambique
+id: MM
+ name: Myanmar
+id: NA
+ name: Namibia
+id: NR
+ name: Nauru
+id: NP
+ name: Nepal
+id: NL
+ name: Netherlands
+id: AN
+ name: Netherlands Antilles
+id: NT
+ name: Neutral Zone
+id: NC
+ name: New Caledonia
+id: NZ
+ name: New Zealand (Aotearoa)
+id: NI
+ name: Nicaragua
+id: NE
+ name: Niger
+id: NG
+ name: Nigeria
+id: NU
+ name: Niue
+id: NF
+ name: Norfolk Island
+id: MP
+ name: Northern Mariana Islands
+id: 'NO'
+ name: Norway
+id: OM
+ name: Oman
+id: '00'
+ name: Other
+id: PK
+ name: Pakistan
+id: PW
+ name: Palau
+id: PA
+ name: Panama
+id: PG
+ name: Papua New Guinea
+id: PY
+ name: Paraguay
+id: PE
+ name: Peru
+id: PH
+ name: Philippines
+id: PN
+ name: Pitcairn
+id: PL
+ name: Poland
+id: PT
+ name: Portugal
+id: PR
+ name: Puerto Rico
+id: QA
+ name: Qatar
+id: RE
+ name: Reunion
+id: RO
+ name: Romania
+id: RU
+ name: Russian Federation
+id: RW
+ name: Rwanda
+id: GS
+ name: S. Georgia and S. Sandwich Isls.
+id: KN
+ name: Saint Kitts and Nevis
+id: LC
+ name: Saint Lucia
+id: VC
+ name: Saint Vincent and the Grenadines
+id: WS
+ name: Samoa
+id: SM
+ name: San Marino
+id: ST
+ name: Sao Tome and Principe
+id: SA
+ name: Saudi Arabia
+id: SN
+ name: Senegal
+id: SC
+ name: Seychelles
+id: SL
+ name: Sierra Leone
+id: SG
+ name: Singapore
+id: SK
+ name: Slovak Republic
+id: SI
+ name: Slovenia
+id: SB
+ name: Solomon Islands
+id: SO
+ name: Somalia
+id: ZA
+ name: South Africa
+id: ES
+ name: Spain
+id: LK
+ name: Sri Lanka
+id: SH
+ name: St. Helena
+id: PM
+ name: St. Pierre and Miquelon
+id: SD
+ name: Sudan
+id: SR
+ name: Suriname
+id: SJ
+ name: Svalbard and Jan Mayen Islands
+id: SZ
+ name: Swaziland
+id: SE
+ name: Sweden
+id: CH
+ name: Switzerland
+id: SY
+ name: Syria
+id: TW
+ name: Taiwan
+id: TJ
+ name: Tajikistan
+id: TZ
+ name: Tanzania
+id: TH
+ name: Thailand
+id: TG
+ name: Togo
+id: TK
+ name: Tokelau
+id: TO
+ name: Tonga
+id: TT
+ name: Trinidad and Tobago
+id: TN
+ name: Tunisia
+id: TR
+ name: Turkey
+id: TM
+ name: Turkmenistan
+id: TC
+ name: Turks and Caicos Islands
+id: TV
+ name: Tuvalu
+id: UM
+ name: US Minor Outlying Islands
+id: SU
+ name: USSR (former)
+id: UG
+ name: Uganda
+id: UA
+ name: Ukraine
+id: AE
+ name: United Arab Emirates
+id: UK
+ name: United Kingdom
+id: US
+ name: United States
+id: UY
+ name: Uruguay
+id: UZ
+ name: Uzbekistan
+id: VU
+ name: Vanuatu
+id: VA
+ name: Vatican City State (Holy See)
+id: VE
+ name: Venezuela
+id: VN
+ name: Viet Nam
+id: VG
+ name: Virgin Islands (British)
+id: VI
+ name: Virgin Islands (U.S.)
+id: WF
+ name: Wallis and Futuna Islands
+id: EH
+ name: Western Sahara
+id: YE
+ name: Yemen
+id: YU
+ name: Yugoslavia
+id: ZR
+ name: Zaire
+id: ZM
+ name: Zambia
+id: ZW
+ name: Zimbabwe
+
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_country.yaml b/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_country.yaml
new file mode 100644
index 0000000..ebaf8ac
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_country.yaml
@@ -0,0 +1,735 @@
+# arch-tag: yaml country list array
+# Author: Ralph@Amissah.com
+# License: Same as SiSU see http://www.jus.uio.no/sisu
+-
+ - AF
+ - Afghanistan
+-
+ - AL
+ - Albania
+-
+ - DZ
+ - Algeria
+-
+ - AS
+ - American Samoa
+-
+ - AD
+ - Andorra
+-
+ - AO
+ - Angola
+-
+ - AI
+ - Anguilla
+-
+ - AQ
+ - Antarctica
+-
+ - AG
+ - Antigua and Barbuda
+-
+ - AR
+ - Argentina
+-
+ - AM
+ - Armenia
+-
+ - AW
+ - Aruba
+-
+ - AU
+ - Australia
+-
+ - AT
+ - Austria
+-
+ - AZ
+ - Azerbaijan
+-
+ - BS
+ - Bahamas
+-
+ - BH
+ - Bahrain
+-
+ - BD
+ - Bangladesh
+-
+ - BB
+ - Barbados
+-
+ - BY
+ - Belarus
+-
+ - BE
+ - Belgium
+-
+ - BZ
+ - Belize
+-
+ - BJ
+ - Benin
+-
+ - BM
+ - Bermuda
+-
+ - BT
+ - Bhutan
+-
+ - BO
+ - Bolivia
+-
+ - BA
+ - Bosnia and Herzegovina
+-
+ - BW
+ - Botswana
+-
+ - BV
+ - Bouvet Island
+-
+ - BR
+ - Brazil
+-
+ - IO
+ - British Indian Ocean Territory
+-
+ - BN
+ - Brunei Darussalam
+-
+ - BG
+ - Bulgaria
+-
+ - BF
+ - Burkina Faso
+-
+ - BI
+ - Burundi
+-
+ - KH
+ - Cambodia
+-
+ - CM
+ - Cameroon
+-
+ - CA
+ - Canada
+-
+ - CV
+ - Cape Verde
+-
+ - KY
+ - Cayman Islands
+-
+ - CF
+ - Central African Republic
+-
+ - TD
+ - Chad
+-
+ - CL
+ - Chile
+-
+ - CN
+ - China
+-
+ - CX
+ - Christmas Island
+-
+ - CC
+ - Cocos (Keeling) Islands
+-
+ - CO
+ - Colombia
+-
+ - KM
+ - Comoros
+-
+ - CG
+ - Congo
+-
+ - CK
+ - Cook Islands
+-
+ - CR
+ - Costa Rica
+-
+ - HR
+ - Croatia (Hrvatska)
+-
+ - CU
+ - Cuba
+-
+ - CY
+ - Cyprus
+-
+ - CZ
+ - Czech Republic
+-
+ - CS
+ - Czechoslovakia (former)
+-
+ - DK
+ - Denmark
+-
+ - DJ
+ - Djibouti
+-
+ - DM
+ - Dominica
+-
+ - DO
+ - Dominican Republic
+-
+ - TP
+ - East Timor
+-
+ - EC
+ - Ecuador
+-
+ - EG
+ - Egypt
+-
+ - SV
+ - El Salvador
+-
+ - GQ
+ - Equatorial Guinea
+-
+ - ER
+ - Eritrea
+-
+ - EE
+ - Estonia
+-
+ - ET
+ - Ethiopia
+-
+ - FK
+ - Falkland Islands (Malvinas)
+-
+ - FO
+ - Faroe Islands
+-
+ - FJ
+ - Fiji
+-
+ - FI
+ - Finland
+-
+ - FR
+ - France
+-
+ - FX
+ - France, Metropolitan
+-
+ - GF
+ - French Guiana
+-
+ - PF
+ - French Polynesia
+-
+ - TF
+ - French Southern Territories
+-
+ - GA
+ - Gabon
+-
+ - GM
+ - Gambia
+-
+ - GE
+ - Georgia
+-
+ - DE
+ - Germany
+-
+ - GH
+ - Ghana
+-
+ - GI
+ - Gibraltar
+-
+ - GB
+ - Great Britain (UK)
+-
+ - GR
+ - Greece
+-
+ - GL
+ - Greenland
+-
+ - GD
+ - Grenada
+-
+ - GP
+ - Guadeloupe
+-
+ - GU
+ - Guam
+-
+ - GT
+ - Guatemala
+-
+ - GN
+ - Guinea
+-
+ - GW
+ - Guinea-Bissau
+-
+ - GY
+ - Guyana
+-
+ - HT
+ - Haiti
+-
+ - HM
+ - Heard and McDonald Islands
+-
+ - HN
+ - Honduras
+-
+ - HK
+ - Hong Kong
+-
+ - HU
+ - Hungary
+-
+ - IS
+ - Iceland
+-
+ - IN
+ - India
+-
+ - ID
+ - Indonesia
+-
+ - IR
+ - Iran
+-
+ - IQ
+ - Iraq
+-
+ - IE
+ - Ireland
+-
+ - IL
+ - Israel
+-
+ - IT
+ - Italy
+-
+ - CI
+ - Ivory Coast
+-
+ - JM
+ - Jamaica
+-
+ - JP
+ - Japan
+-
+ - JO
+ - Jordan
+-
+ - KZ
+ - Kazakhstan
+-
+ - KE
+ - Kenya
+-
+ - KI
+ - Kiribati
+-
+ - KP
+ - Korea (North)
+-
+ - KR
+ - Korea (South)
+-
+ - KW
+ - Kuwait
+-
+ - KG
+ - Kyrgyzstan
+-
+ - LA
+ - Laos
+-
+ - LV
+ - Latvia
+-
+ - LB
+ - Lebanon
+-
+ - LS
+ - Lesotho
+-
+ - LR
+ - Liberia
+-
+ - LY
+ - Libya
+-
+ - LI
+ - Liechtenstein
+-
+ - LT
+ - Lithuania
+-
+ - LU
+ - Luxembourg
+-
+ - MO
+ - Macau
+-
+ - MK
+ - Macedonia
+-
+ - MG
+ - Madagascar
+-
+ - MW
+ - Malawi
+-
+ - MY
+ - Malaysia
+-
+ - MV
+ - Maldives
+-
+ - ML
+ - Mali
+-
+ - MT
+ - Malta
+-
+ - MH
+ - Marshall Islands
+-
+ - MQ
+ - Martinique
+-
+ - MR
+ - Mauritania
+-
+ - MU
+ - Mauritius
+-
+ - YT
+ - Mayotte
+-
+ - MX
+ - Mexico
+-
+ - FM
+ - Micronesia
+-
+ - MD
+ - Moldova
+-
+ - MC
+ - Monaco
+-
+ - MN
+ - Mongolia
+-
+ - MS
+ - Montserrat
+-
+ - MA
+ - Morocco
+-
+ - MZ
+ - Mozambique
+-
+ - MM
+ - Myanmar
+-
+ - NA
+ - Namibia
+-
+ - NR
+ - Nauru
+-
+ - NP
+ - Nepal
+-
+ - NL
+ - Netherlands
+-
+ - AN
+ - Netherlands Antilles
+-
+ - NT
+ - Neutral Zone
+-
+ - NC
+ - New Caledonia
+-
+ - NZ
+ - New Zealand (Aotearoa)
+-
+ - NI
+ - Nicaragua
+-
+ - NE
+ - Niger
+-
+ - NG
+ - Nigeria
+-
+ - NU
+ - Niue
+-
+ - NF
+ - Norfolk Island
+-
+ - MP
+ - Northern Mariana Islands
+-
+ - 'NO'
+ - Norway
+-
+ - OM
+ - Oman
+-
+ - '00'
+ - Other
+-
+ - PK
+ - Pakistan
+-
+ - PW
+ - Palau
+-
+ - PA
+ - Panama
+-
+ - PG
+ - Papua New Guinea
+-
+ - PY
+ - Paraguay
+-
+ - PE
+ - Peru
+-
+ - PH
+ - Philippines
+-
+ - PN
+ - Pitcairn
+-
+ - PL
+ - Poland
+-
+ - PT
+ - Portugal
+-
+ - PR
+ - Puerto Rico
+-
+ - QA
+ - Qatar
+-
+ - RE
+ - Reunion
+-
+ - RO
+ - Romania
+-
+ - RU
+ - Russian Federation
+-
+ - RW
+ - Rwanda
+-
+ - GS
+ - S. Georgia and S. Sandwich Isls.
+-
+ - KN
+ - Saint Kitts and Nevis
+-
+ - LC
+ - Saint Lucia
+-
+ - VC
+ - Saint Vincent and the Grenadines
+-
+ - WS
+ - Samoa
+-
+ - SM
+ - San Marino
+-
+ - ST
+ - Sao Tome and Principe
+-
+ - SA
+ - Saudi Arabia
+-
+ - SN
+ - Senegal
+-
+ - SC
+ - Seychelles
+-
+ - SL
+ - Sierra Leone
+-
+ - SG
+ - Singapore
+-
+ - SK
+ - Slovak Republic
+-
+ - SI
+ - Slovenia
+-
+ - SB
+ - Solomon Islands
+-
+ - SO
+ - Somalia
+-
+ - ZA
+ - South Africa
+-
+ - ES
+ - Spain
+-
+ - LK
+ - Sri Lanka
+-
+ - SH
+ - St. Helena
+-
+ - PM
+ - St. Pierre and Miquelon
+-
+ - SD
+ - Sudan
+-
+ - SR
+ - Suriname
+-
+ - SJ
+ - Svalbard and Jan Mayen Islands
+-
+ - SZ
+ - Swaziland
+-
+ - SE
+ - Sweden
+-
+ - CH
+ - Switzerland
+-
+ - SY
+ - Syria
+-
+ - TW
+ - Taiwan
+-
+ - TJ
+ - Tajikistan
+-
+ - TZ
+ - Tanzania
+-
+ - TH
+ - Thailand
+-
+ - TG
+ - Togo
+-
+ - TK
+ - Tokelau
+-
+ - TO
+ - Tonga
+-
+ - TT
+ - Trinidad and Tobago
+-
+ - TN
+ - Tunisia
+-
+ - TR
+ - Turkey
+-
+ - TM
+ - Turkmenistan
+-
+ - TC
+ - Turks and Caicos Islands
+-
+ - TV
+ - Tuvalu
+-
+ - UM
+ - US Minor Outlying Islands
+-
+ - SU
+ - USSR (former)
+-
+ - UG
+ - Uganda
+-
+ - UA
+ - Ukraine
+-
+ - AE
+ - United Arab Emirates
+-
+ - UK
+ - United Kingdom
+-
+ - US
+ - United States
+-
+ - UY
+ - Uruguay
+-
+ - UZ
+ - Uzbekistan
+-
+ - VU
+ - Vanuatu
+-
+ - VA
+ - Vatican City State (Holy See)
+-
+ - VE
+ - Venezuela
+-
+ - VN
+ - Viet Nam
+-
+ - VG
+ - Virgin Islands (British)
+-
+ - VI
+ - Virgin Islands (U.S.)
+-
+ - WF
+ - Wallis and Futuna Islands
+-
+ - EH
+ - Western Sahara
+-
+ - YE
+ - Yemen
+-
+ - YU
+ - Yugoslavia
+-
+ - ZR
+ - Zaire
+-
+ - ZM
+ - Zambia
+-
+ - ZW
+ - Zimbabwe
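The two skin files above encode the same country table in two YAML shapes: repeated `id:`/`name:` pairs, and a sequence of two-element sequences. A minimal Ruby sketch (illustrative only; not necessarily how SiSU itself loads these files) of the sequence form, showing why codes such as NO and 00 are quoted in the file:

```ruby
require 'yaml'

# A fragment in the same shape as skin_country.yaml: a sequence of
# [code, name] pairs. 'NO' and '00' must be quoted, because a YAML 1.1
# parser reads unquoted NO as the boolean false and 00 as the integer 0.
doc = <<~YAML
  -
    - AF
    - Afghanistan
  -
    - 'NO'
    - Norway
  -
    - '00'
    - Other
YAML

countries = YAML.load(doc)   # array of [code, name] pairs
lookup    = countries.to_h   # code => name hash

puts lookup['NO']            # the quoted code survives as the string "NO"
```

Unquoted, `YAML.load('NO')` returns `false` rather than a string, which is the reason Norway's code is quoted in the list above.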
diff --git a/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_lexAddress.yaml b/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_lexAddress.yaml
new file mode 100644
index 0000000..6e49b54
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/_sisu/skin/yaml/skin_lexAddress.yaml
@@ -0,0 +1,207 @@
+# arch-tag: yaml addresses used on lexmercatoria
+# Author: Ralph@Amissah.com
+# License: Same as SiSU see http://www.jus.uio.no/sisu
+address:
+ un: "United Nations\n
+ UN Headquarters\n
+ First Avenue at 46th Street\n
+ New York, NY 10017\n
+ web: www.unsystem.org\n
+ web: www.un.org\n\n
+ UN Publications\n
+ tel: 1 800 253-9646\n
+ web: www.un.org/Pubs/sales.htm\n
+ e-mail publications@un.org\n\n"
+ uncitral: "UNCITRAL Secretariat\n
+ P.O. Box 500\n
+ Vienna International Centre\n
+ A-1400 Vienna\n
+ Austria\n\n
+ tel: (43-1)21345-4060 or 4061\n
+ fax: (43-1) 21345-5813\n
+ telex: 135612 unoa\n
+ web: www.uncitral.org\n
+ e-mail uncitral@uncitral.org\n\n
+ UN Publications\n
+ tel: 1 800 253-9646\n
+ web: www.un.org/Pubs/sales.htm\n
+ e-mail publications@un.org\n\n
+ web: www.unsystem.org\n
+ web: www.un.org\n\n
+ United Nations\n
+ UN Headquarters\n
+ First Avenue at 46th Street\n
+ New York, NY 10017\n
+ web: www.unsystem.org\n
+ web: www.un.org\n\n"
+ unece: "United Nations Economic Commission for Europe\n
+ Information Office\n
+ Palais des Nations\n
+ CH-1211 Geneva 10\n
+ Switzerland\n
+ tel: +41 22 971 44 44\n
+ fax: +41 22 917 05 05\n
+ web: www.unece.org\n
+ e-mail info.ece@unece.org\n\n
+ United Nations\n
+ UN Headquarters\n
+ First Avenue at 46th Street\n
+ New York, NY 10017\n
+ web: www.unsystem.org\n
+ web: www.un.org\n\n
+ UN Publications\n
+ tel: 1 800 253-9646\n
+ web: www.un.org/Pubs/sales.htm\n
+ e-mail publications@un.org\n\n"
+ unidroit: "UNIDROIT\n
+ (The International Institute for the Unification of Private Law)\n
+ 28 Via Panisperna,\n
+ 00184 Rome\nItaly\n\n
+ tel.: (39-06) 696 211\n
+ fax: (39-06) 699 41394\n
+ web: http://www.unidroit.org/\n
+ e-mail: unidroit.rome@unidroit.org"
+ icc: "International Chamber of Commerce (ICC),\n
+ The world business organization,
+ 38, Cours Albert 1er,\n
+ 75008 Paris - France.\n\n
+ tel: +33 1 49 53 28 28\n
+ fax: +33 1 49 53 28 59\n
+ e-mail: icc@iccwbo.org\n\n
+ ICC Publishing SA (Paris),\n
+ SA, 38 Cours Albert ler,\n
+ 75008 Paris,\n
+ France.\n\n
+ tel: +33 1 49 53 29 23\n
+ tel: +33 1 49 53 28 89\n
+ fax: +33 1 49 53 29 02\n
+ e-mail: pub@iccwbo.org\n\n
+ ICC Publishing, Inc. (New York)\n
+ 156, Fifth Avenue, Suite 417,\n
+ New York, N.Y. 10010,\n
+ United States.\n\n
+ tel: +1 212 206 1150\n
+ fax: +1 212 633 6025\n
+ e-mail: info@iccpub.net\n\n
+ ICC International Court of Arbitration,\n
+ 38, Cours Albert ler,\n
+ 75008 Paris,\n
+ France.\n\n
+ tel: +33 1 49 53 28 28\n
+ fax: +33 1 49 53 29 33\n
+ e-mail: arb@iccwbo.org"
+ hcpil: "Permanent Bureau of the Hague Conference on Private International Law\n
+ 6 Scheveningseweg\n
+ 2517 KT The Hague\n
+ Netherlands\n
+ tel.: (31/70) 363.33.03\n
+ fax: (31/70) 360.48.67\n
+ cable: CODIP \n
+ web: www.hcch.net"
+ moftec: "No. 2 Dong Chang'an Avenue,\n
+ Beijing,\nChina 100731\n\n
+ tel: (010) 6519 8114\n
+ fax: (010) 6519 8039\n
+ e-mail: moftec@moftec.gov.cn\n
+ web: www.moftec.gov.cn:7777/search.wct?ChannelID=8115\n
+ web: www.moftec.com"
+ china: "No. 2 Dong Chang'an Avenue,\n
+ Beijing,\nChina 100731\n\n
+ tel: (010) 6519 8114\n
+ fax: (010) 6519 8039\n
+ e-mail: moftec@moftec.gov.cn\n
+ web: www.moftec.gov.cn:7777/search.wct?ChannelID=8115\n
+ web: www.moftec.com"
+ eu_pil: "For more information, write to the Secretary of the Commission: Matthias E. STORME, Zuidbroek 49, B-9030 GENT (BELGIUM)\n\n
+ fax: +32-9-236 24 40\n
+ e-mail frw.storme.m@ufsia.ac.be or Matthias.Storme@rug.ac.be\n
+ web: www.ufsia.ac.be/~estorme/CECL.html"
+ wto: "World Trade Organization Centre\n
+ William Rappard,\n
+ Rue de Lausanne 154,\n
+ CH 1211 Geneva 21,\n
+ Switzerland\n\n
+ tel: (41-22) 739 51 11\n
+ fax: (41-22) 731 42 06\n
+ web: www.wto.org\n
+ e-mail: enquiries@wto.org"
+ wta: "World Trade Organization Centre\n
+ William Rappard,\n
+ Rue de Lausanne 154,\n
+ CH 1211 Geneva 21,\n
+ Switzerland\n\n
+ tel: (41-22) 739 51 11\n
+ fax: (41-22) 731 42 06\n
+ web: www.wto.org\n
+ e-mail: enquiries@wto.org"
+ icsid: "ICSID - International Centre for Settlement of Investment Disputes\n
+ 1818 H Street, N.W.,\n
+ Washington, D.C. 20433,\n
+ U.S.A.\n\n
+ tel: (1 202) 458-1534\n
+ fax: (1 202) 522-2615\n
+ web: www.worldbank.org/icsid/\n"
+ wipo: "World Intellectual Property Organisation\n
+ The WIPO headquarters is in Geneva, Switzerland, near the Place des Nations.\n
+ 34, chemin des Colombettes,\n
+ Geneva.\n\n
+ P.O. Box 18, CH-1211 Geneva 20\n
+ tel: 41-22 730 9111\n
+ fax: 41-22 733 5428\n\n
+ Mailing address:\nWIPO\n
+ P.O. Box 18, CH-1211\n
+ Geneva 20\n\n
+ web: www.wipo.org\n\n
+ WIPO has a Liaison Office at the United Nations in New York, U.S.A.\n
+ Address: 2,\n
+ United Nations Plaza,\n
+ Room 560,\n
+ New York, N.Y. 10017\n\n
+ tel: (1-212) 963-6813\n
+ fax: (1-212) 963 4801 \n\n
+ e-mail: WIPO.mail@wipo.int for matters of general interest\n
+ DEVCO.mail@wipo.int for development cooperation matters\n
+ PCT.mail@wipo.int for PCT (Patent Cooperation Treaty) matters\n
+ INTREG.mail@wipo.int for international trademark and design registration matters\n
+ ARBITER.mail@wipo.int for arbitration and mediation matters\n
+ PUBLICATIONS.mail@wipo.int for ordering publications\n
+ PERSONNEL.mail@wipo.int for personnel matters"
+ unctad: ~
+ lcia: "The International Dispute Resolution Centre\n
+ LCIA,\n
+ 8 Breams Buildings,\n
+ Chancery Lane,\n
+ London EC4A 1HP\n
+ England\n\n
+ tel: (44) 0(207) 405 8008\n
+ fax: (44) 0(207) 405 8009\n
+ web: www.lcia-arbitration.com\n
+ e-mail: lcia@lcia-arbitration.com"
+ american.arbitration.association: ~
+ milan.chamber.of.commerce: "The Chamber of National and International Arbitration of Milan\n
+ Milan Chamber of Commerce,\n
+ Palazzo Mezzanotte - Piazza Affari,\n
+ 6 - 20123 Milano,\n
+ Italy\n\n
+ tel: 39 2 8515.4536-4444-4515\n
+ fax: 39 2 8515.4384\n
+ web: www.mi.camcom.it/eng/arbitration.chamber/\n
+ e-mail: camera.arbitrale@mi.camcom.it"
+ afreximbank: "African Export-Import Bank\n
+ World Trade Center,\n
+ 1191 Corniche El-Nil,\n
+ Cairo 11221,\n
+ Egypt\n
+ web: www.afreximbank.com\n
+ tel: 202 580 1800\n
+ fax: 202 578 0276\n
+ e-mail: info@afreximbank.com\n\n"
+ amissah: "Ralph Amissah\n
+ 10 Cameron Court,\n
+ Princes Way,\n
+ London SW19 6QY,\n
+ England\n
+ web: www.amissah.com\n
+ tel: 44 20 8789 3452\n
+ e-mail: ralph@amissah.com\n\n"
+
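The address entries above rely on a YAML quirk: inside a double-quoted scalar, a physical line break (plus the next line's indentation) folds to a single space, so the visible line breaks in the file do not survive, and the literal `\n` escapes supply the real newlines. A small Ruby sketch (illustrative only) of one such entry:

```ruby
require 'yaml'

# Same shape as skin_lexAddress.yaml: a double-quoted scalar folded
# across several physical lines. The source line breaks fold to single
# spaces; the escaped \n sequences become the actual newlines.
doc = <<~'YAML'
  address:
    un: "United Nations\n
      UN Headquarters\n
      New York, NY 10017\n\n"
YAML

addr = YAML.load(doc)['address']['un']
puts addr
```

Note that each output line after the first begins with one space left over from folding, which is harmless for display but worth knowing when post-processing these entries.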
diff --git a/data/sisu_markup_samples/non-free/autonomy_markup0.sst b/data/sisu_markup_samples/non-free/autonomy_markup0.sst
new file mode 100644
index 0000000..472980f
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/autonomy_markup0.sst
@@ -0,0 +1,199 @@
+% SiSU 0.42
+
+@title: Revisiting the Autonomous Contract
+
+@subtitle: Transnational contracting, trends and supportive structures
+
+@creator: Ralph Amissah
+
+@type: article
+
+@subject: international contracts, international commercial arbitration, private international law
+
+@date: 2000-08-27
+
+@italics: /CISG|PICC|PECL|UNCITRAL|UNIDROIT|lex mercatoria|pacta sunt servanda|caveat subscriptor|ex aequo et bono|amiable compositeur|ad hoc/i
+
+@links: {Syntax}http://www.jus.uio.no/sisu/sample/syntax/autonomy_markup0.sst.html
+{The Autonomous Contract}http://www.jus.uio.no/lm/the.autonomous.contract.07.10.1997.amissah/toc.html
+{Contract Principles}http://www.jus.uio.no/lm/private.international.commercial.law/contract.principles.html
+{UNIDROIT Principles}http://www.jus.uio.no/lm/unidroit.international.commercial.contracts.principles.1994.commented/toc.html
+{Sales}http://www.jus.uio.no/lm/private.international.commercial.law/sale.of.goods.html
+{CISG}http://www.jus.uio.no/lm/un.contracts.international.sale.of.goods.convention.1980/doc.html
+{Arbitration}http://www.jus.uio.no/lm/arbitration/toc.html
+{Electronic Commerce}http://www.jus.uio.no/lm/electronic.commerce/toc.html
+
+@level: num_top=1
+
+:A~ Revisiting the Autonomous Contract (Draft 0.90 - 2000-08-27)
+
+:B~ Transnational contract "law", trends and supportive structures
+
+:C~ \copyright Ralph Amissah~{* Ralph Amissah is a Fellow of Pace University, Institute for International Commercial Law. http://www.cisg.law.pace.edu/ <br>RA lectured on the private law aspects of international trade whilst at the Law Faculty of the University of Tromsø, Norway. http://www.jus.uit.no/ <br> RA built the first web site related to international trade law, now known as lexmercatoria.org and described as "an (international | transnational) commercial law and e-commerce infrastructure monitor". http://lexmercatoria.org/ <br> RA is interested in the law, technology, commerce nexus. RA works with the law firm Amissahs.<br>/{[This is a draft document and subject to change.]}/ <br>All errors are very much my own.<br>ralph@amissah.com }~
+
+1~ Reinforcing trends: borderless technologies, global economy, transnational legal solutions?
+
+Revisiting the Autonomous Contract~{ /{The Autonomous Contract: Reflecting the borderless electronic-commercial environment in contracting}/ was published in /{Elektronisk handel - rettslige aspekter, Nordisk årsbok i rettsinformatikk 1997}/ (Electronic Commerce - Legal Aspects. The Nordic yearbook for Legal Informatics 1997) Edited by Randi Punsvik, or at http://www.jus.uio.no/the.autonomous.contract.07.10.1997.amissah/doc.html }~
+
+Globalisation is to be observed as a trend intrinsic to the world economy.~{ As Maria Livanos Cattaui suggests in /{The global economy - an opportunity to be seized}/ in /{Business World}/ the Electronic magazine of the International Chamber of Commerce (Paris, July 1997) at http://www.iccwbo.org/html/globalec.htm <br> "Globalization is unstoppable. Even though it may be only in its early stages, it is already intrinsic to the world economy. We have to live with it, recognize its advantages and learn to manage it.<br>That imperative applies to governments, who would be unwise to attempt to stem the tide for reasons of political expediency. It also goes for companies of all sizes, who must now compete on global markets and learn to adjust their strategies accordingly, seizing the opportunities that globalization offers."}~ Rudimentary economics explains this runaway process as being driven by competition within the business community to achieve efficient production, and to reach and extend available markets.~{To remain successful, being in competition, the business community is compelled to take advantage of the opportunities provided by globalisation.}~ Technological advancement, particularly in transport and communications, has historically played a fundamental role in the furtherance of international commerce, with the Net, technology's latest spatio-temporally transforming offering, linchpin of the "new economy", extending exponentially the global reach of the business community. The Net covers much of the essence of international commerce, providing an instantaneous, low-cost, convergent, global and borderless information centre, marketplace and channel for communications, payments and the delivery of services and intellectual property. The sale of goods, however, involves the separate element of their physical delivery. The Net has raised a plethora of questions and has frequently offered solutions. The increased transparency of borders arising from the Net's ubiquitous nature results in an increased demand for transparency of operation. As economic activities become increasingly global, to reduce transaction costs there is a strong incentive for the "law" that provides for them to do so in a similar dimension. The appeal of transnational legal solutions lies in the potential reduction in complexity, more widely dispersed expertise, and resulting increased transaction efficiency. The Net reflexively offers possibilities for the development of transnational legal solutions, having in a similar vein transformed the possibilities for the promulgation of texts, the sharing of ideas and collaborative ventures. There are, however, likely to be tensions between the legal community's protection of entrenched practices against that which is new (both in law and technology) and the business community's goal to reduce transaction costs.
+
+Within commercial law an analysis of law and economics may assist in developing a better understanding of the relationship between commercial law and the commercial sector it serves.~{ Realists would contend that law is contextual and best understood by exploring the interrelationships between law and the other social sciences, such as sociology, psychology, political science, and economics.}~ "...[T]he importance of the interrelations between law and economics can be seen in the twin facts that legal change is often a function of economic ideas and conditions, which necessitate and/or generate demands for legal change, and that economic change is often governed by legal change."~{ Part of a section cited in Mercuro and Steven G. Medema, /{Economics and the Law: from Posner to Post-Modernism}/ (Princeton, 1997) p. 11, with reference to Karl N. Llewellyn The Effect of Legal Institutions upon Economics, American Economic Review 15 (December 1925) pp 655-683, Mark M. Litchman Economics, the Basis of Law, American Law Review 61 (May-June 1927) pp 357-387, and W. S. Holdsworth A Neglected Aspect of the Relations between Economic and Legal History, Economic History Review 1 (January 1927-1928) pp 114-123.}~ In doing so, however, it is important to be aware that there are several competing schools of law and economics, with different perspectives, levels of abstraction, and analytical consequences of and for the world that they model.~{ For a good introduction see Nicholas Mercuro and Steven G. Medema, /{Economics and the Law: from Posner to Post-Modernism}/ (Princeton, 1997). These include: Chicago law and economics (New law and economics); New Haven School of law and economics; Public Choice Theory; Institutional law and economics; Neoinstitutional law and economics; Critical Legal Studies.}~
+
+Where there is rapid interrelated structural change with resulting new features, rather than concentrating on the traditionally established tectonic plates of a discipline, understanding underlying currents and concepts at their intersections (rather than expositions of history~{ Case overstated, but this is an essential point. It is not helpful to be overly tied to the past. It is necessary to be able to look ahead and explore new solutions, and be aware of the implications of "complexity" (as to the relevance of past circumstances to the present). }~) is the key to commencing meaningful discussions and developing solutions for the resulting issues.~{ The majority of which are beyond the scope of this paper. Examples include: encryption and privacy for commercial purposes; digital signatures; symbolic ownership; electronic intellectual property rights.}~ Interrelated developments are more meaningfully understood through interdisciplinary study, as this instance suggests, of the law, commerce/economics, and technology nexus. In advocating this approach, we should also pay heed to the realisation in the sciences of the limits of reductionism in the study of complex systems, as such systems feature emergent properties that are not evident if broken down into their constituent parts. System complexity exceeds sub-system complexity; consequently, the relevant unit for understanding the system's function is the system, not its parts.~{ Complexity theory is a branch of mathematics and physics that examines non-linear systems in which simple sets of deterministic rules can lead to highly complicated results, which cannot be predicted accurately. A study of the subject is provided by Nicholas Rescher /{Complexity: A Philosophical Overview}/ (New Brunswick, 1998). See also Jack Cohen and Ian Stewart, /{The Collapse of Chaos: Discovering Simplicity in a Complex World}/ (1994). }~ Simplistic dogma should be abandoned for a contextual approach.
+
+1~ Common Property - advocating a common commercial highway
+
+Certain infrastructural underpinnings beneficial to the working of the market economy are not best provided by the business community, but by other actors including governments. In this paper mention is made, for example, of the /{United Nations Convention on the Recognition and Enforcement of Foreign Arbitral Awards}/ (New York, 10 June 1958), which the business community regularly relies upon as the back-stop for their international agreements. Common property can have an enabling value: the Net, basis for the "new" economy, would not be what it is today without much that has been shared on this basis, having permitted /{"Metcalfe's law"}/~{ Robert Metcalfe, founder of 3Com. }~ to take hold. /{Metcalfe's law}/ suggests that the value of a shared technology is exponential to its user base. In all likelihood it applies as much to transnational contract law as to technological networks and standards. The more people who use a network or standard, the more "valuable" it becomes, and the more users it will attract. Key infrastructure should be identified and common property solutions, where appropriate, nurtured, keeping transaction costs to a minimum.
+
+The following general perspective is submitted as worthy of consideration (and support) by the legal, business and academic communities, and governments. *(a)* Abstract goals valuable to a transnational legal infrastructure include certainty and predictability, flexibility, simplicity where possible, and neutrality, in the sense of being without perceived "unfairness" in the global context of their application. This covers the content of the "laws" themselves and the methods used for their interpretation. *(b)* Of law with regard to technology, "rules should be technology-neutral (i.e., the rules should neither require nor assume a particular technology) and forward looking (i.e., the rules should not hinder the use or development of technologies in the future)."~{ /{US Framework for Global Electronic Commerce}/ (1997) http://www.whitehouse.gov/WH/New/Commerce/ }~ *(c)* Desirable abstract goals in developing technological standards and critical technological infrastructure include choice, and that they should be shared and public or "open" as in "open source", and platform and/or program neutral, that is, interoperable. (On security, to forestall suggestions to the contrary, popular open source software tends to be as secure as or more secure than proprietary software.) *(d)* Encryption is an essential part of the mature "new" economy but remains the subject of some governments' restriction.~{ The EU is lifting such restriction, and the US seems likely to follow suit. }~ The availability of (and possibility to develop common transnational standards for) strong encryption is essential for commercial security and trust with regard to all manner of Net communications and electronic commerce transactions, /{vis-à-vis}/ their confidentiality, integrity, authentication, and non-repudiation. That is, encryption is the basis for essential commerce-related technologies, including amongst many others electronic signatures, electronic payment systems and the development of electronic symbols of ownership (such as electronic bills of lading). *(e)* As regards the dissemination of primary materials concerning "uniform standards" in both the legal and technology domains, "the Net" should be used to make them globally available, free. Technology should be similarly used where possible to promote the goals outlined under point (a). Naturally, as a tempered supporter of the market economy,~{ Caveats extending beyond the purview of this paper. It is necessary to be aware that there are other overriding interests, global and domestic, that the market economy is ill suited to providing for, such as the environment, and possibly key public utilities that require long-term planning and high investment. It is also necessary to continue to be vigilant against that which, even if arising as a natural consequence of the market economy, has the potential to disturb or destroy its function, such as monopolies.}~ proprietary secondary materials and technologies do not merit these reservations. Similarly, actors of the market economy would take advantage of the common property base of the commercial highway.
+
+1~ Modelling the private international commercial law infrastructure
+
+Apart from the study of "laws" or the existing legal infrastructure, there are a multitude of players involved in their creation whose efforts may be regarded as being in the nature of systems modelling. Of interest to this paper is the subset of activity of a few organisations that provide the underpinnings for the foundation of a successful transnational contract/sales law. These are not amongst the more controversial legal infrastructure modelling activities, and represent a small but significant part in simplifying international commerce and trade.~{ Look for instance at national customs procedures, and consumer protection.}~
+
+Briefly viewing the wider picture, several institutions are involved as independent actors in systems modelling of the transnational legal infrastructure. Their roles and mandates and the issues they address are conceptually different. These include certain United Nations organs and affiliates such as the United Nations Commission on International Trade Law (UNCITRAL),~{ http://www.uncitral.org/ }~ the World Intellectual Property Organisation (WIPO)~{ http://www.wipo.org/ }~ and recently the World Trade Organisation (WTO),~{ http://www.wto.org/ }~ along with other institutions such as the International Institute for the Unification of Private Law (UNIDROIT),~{ http://www.unidroit.org/ }~ the International Chamber of Commerce (ICC),~{ http://www.iccwbo.org/ }~ and the Hague Conference on Private International Law.~{ http://www.hcch.net/ }~ They identify areas that would benefit from an international or transnational regime and use various tools at their disposal, (including: treaties; model laws; conventions; rules and/or principles; standard contracts), to develop legislative "solutions" that they hope will be subscribed to.
+
+A host of other institutions are involved in providing regional solutions.~{ such as ASEAN http://www.aseansec.org/ the European Union (EU) http://europa.eu.int/ MERCOSUR http://embassy.org/uruguay/econ/mercosur/ and North American Free Trade Agreement (NAFTA) http://www.nafta-sec-alena.org/english/nafta/ }~ Specialised areas are also addressed by appropriately specialised institutions.~{ e.g. large international banks; or in the legal community, the Business Section of the International Bar Association (IBA) with its membership of lawyers in over 180 countries. http://www.ibanet.org/ }~ A result of globalisation is increased competition (also) amongst States, which are active players in the process, identifying and addressing the needs of their business communities over a wide range of areas and managing the suitability to the global economy of their domestic legal, economic, technological and educational~{ For a somewhat frightening peek and illuminating discussion of the role of education in the global economy as implemented by a number of successful States see Joel Spring, /{Education and the Rise of the Global Economy}/ (Mahwah, NJ, 1998). }~ infrastructures. The role of States remains to identify what domestic structural support they must provide to be integrated and competitive in the global economy.
+
+In addition to "traditional" contributors, the technology/commerce/law confluence provides new challenges and opportunities, allowing the emergence of important new players within the commercial field, such as Bolero,~{ http://www.bolero.org/ also http://www.boleroassociation.org/ }~ which, with the backing of international banks and ship-owners, offers electronic replacements for traditional paper transactions, acting as transaction agent for the electronic substitute on behalf of the trading parties. The acceptance of the possibility of applying an institutionally offered lex has opened the door further for other actors, including ad hoc groupings of the business community and/or universities, to find ways to engage and actively participate in providing services for themselves and/or others in this domain.
+
+1~ The foundation for transnational private contract law, arbitration
+
+The market economy drive perpetuating economic globalisation is also active in the development and choice of transnational legal solutions. The potential reward: international sets of contract rules and principles that can be counted on to be consistent and to provide a uniform layer of insulation (with minimal reference back to State law) when applied across the landscape of a multitude of different municipal legal systems. The business community is free to utilise them if available, and if not, to develop them, or seek to have them developed.
+
+The kernel for the development of a transnational legal infrastructure governing the rights and obligations of private contracting individuals was put in place as far back as 1958 by the /{UN Convention on the Recognition and Enforcement of Foreign Arbitral Awards}/ (/{"NY Convention on ICA"}/),~{ at http://www.jus.uio.no/lm/un.arbitration.recognition.and.enforcement.convention.new.york.1958/ }~ now in force in over a hundred States. Together with freedom of contract, the /{NY Convention on ICA}/ made it possible for commercial parties to develop and be governed by their own /{lex}/ in their contractual affairs, should they wish to do so, and guaranteed that provided their agreement was based on international commercial arbitration (/{"ICA"}/), (and not against relevant mandatory law) it would be enforced in all contracting States. This has been given further support by various more recent arbitration rules and the /{UNCITRAL Model Law on International Commercial Arbitration 1985}/,~{ at http://www.jus.uio.no/lm/un.arbitration.model.law.1985/ }~ which now explicitly state that rule based solutions independent of national law can be applied in /{"ICA"}/.~{ Lando, /{Each Contracting Party Must Act In Accordance with Good Faith and Fair Dealing}/ in /{Festskrift til Jan Ramberg}/ (Stockholm, 1997) p. 575. See also UNIDROIT Principles, Preamble 4 a. Also Arthur Hartkamp, The Use of UNIDROIT Principles of International Commercial Contracts by National and Supranational Courts (1995) in UNIDROIT Principles: A New Lex Mercatoria?, pp. 253-260 on p. 255. But see Goode, /{A New International Lex Mercatoria?}/ in /{Juridisk Tidskrift}/ (1999-2000 nr 2) p. 256 and 259. }~
+
+/{"ICA"}/ is recognised as the most prevalent means of dispute resolution in international commerce. Unlike litigation, /{"ICA"}/ survives on its merits as a commercial service to provide for the needs of the business community.~{ /{"ICA"}/, being shaped by market forces and competition, adheres more closely to the rules of the market economy, responding to its needs and catering for them more adequately. }~ It has consequently been more dynamic than national judiciaries in adjusting to the changing requirements of businessmen. Its institutions are quicker to adapt and innovate, including in their ability to cater for transnational contracts. /{"ICA"}/, in taking its mandate from and giving effect to the will of the parties, provides them with greater flexibility and frees them from many of the limitations of municipal law.~{ As examples of this, it seeks to give effect to the parties' agreement upon: the lex mercatoria as the law of the contract; the number of, and persons to be, "adjudicators"; the language of proceedings; the procedural rules to be used; and the finality of the decision. }~
+
+In sum, a transnational/non-national regulatory order governing the contractual rights and obligations of private individuals is made possible by: *(a)* States' acceptance of freedom of contract (public policy excepted); *(b)* Sanctity of contract embodied in the principle pacta sunt servanda; *(c)* Written contractual selection of dispute resolution by international commercial arbitration, whether ad hoc or institutional, usually under internationally accepted arbitration rules; *(d)* Guaranteed enforcement, arbitration where necessary borrowing the State apparatus for law enforcement through the /{NY Convention on ICA}/, which has secured for /{"ICA"}/ a recognition and enforcement regime unparalleled by municipal courts in well over a hundred contracting States; *(e)* Transnational effect or non-nationality, achievable through /{"ICA"}/ accepting the parties' ability to select the basis upon which the dispute will be resolved outside municipal law, such as through the selection of general principles of law or lex mercatoria, or by calling upon the arbitrators to act as amiable compositeur or ex aequo et bono.
+
+This framework provided by /{"ICA"}/ opened the door for the modelling of effective transnational law default rules and principles for contracts, independent of State participation (in their development, application, or choice of law foundation). Today we have greater certainty of content and better control over the desired degree of transnational effect or non-nationality, with the availability of comprehensive insulating rules and principles such as the PICC or /{Principles of European Contract Law}/ (/{"European Principles"}/ or /{"PECL"}/) that may be chosen either together with, or to the exclusion of, a choice of municipal law as governing the contract. For electronic commerce a similar path is hypothetically possible.
+
+1~ "State contracted international law" and/or "institutionally offered lex"? CISG and PICC as examples
+
+An institutionally offered lex ("IoL", uniform rules and principles) appears to have a number of advantages over "State contracted international law" ("ScIL", model laws, treaties and conventions for enactment). The development and formulation of both "ScIL" and "IoL" takes time, the CISG representing a half century of effort~{ /{UNCITRAL Convention on Contracts for the International Sale of Goods 1980}/ see at http://www.jus.uio.no/lm/un.contracts.international.sale.of.goods.convention.1980/ <br>The CISG may be regarded as the culmination of an effort in the field dating back to Ernst Rabel (/{Das Recht des Warenkaufs}/ Bd. I&II (Berlin, 1936-1958), a two-volume study on sales law), followed by the Cornell Project (Cornell Project on Formation of Contracts 1968 - Rudolf Schlesinger, Formation of Contracts. A Study of the Common Core of Legal Systems, 2 vols. (New York, London 1968)) and connected most directly to the UNIDROIT inspired /{Uniform Law for International Sales}/ (ULIS at http://www.jus.uio.no/lm/unidroit.ulis.convention.1964/ and ULF at http://www.jus.uio.no/lm/unidroit.ulf.convention.1964/ ), the main preparatory works behind the CISG (/{Uniform Law on the Formation of Contracts for the International Sale of Goods}/ (ULF) and the /{Convention relating to a Uniform Law on the International Sale of Goods}/ (ULIS), The Hague, 1964). }~ and PICC twenty years.~{ /{UNIDROIT Principles of International Commercial Contracts}/, commonly referred to as the /{UNIDROIT Principles}/ and within this paper as PICC, see at http://www.jus.uio.no/lm/unidroit.contract.principles.1994/ and http://www.jus.uio.no/lm/unidroit.international.commercial.contracts.principles.1994.commented/ <br>The first edition of the PICC was finalised in 1994, 23 years after their first conception, and 14 years after work started on them in earnest.
}~ The CISG by UNCITRAL represents the greatest success for the unification of an area of substantive commercial contract law to date, being currently applied by 57 States,~{ As of February 2000. }~ estimated as representing close to seventy percent of world trade and including every major trading nation of the world apart from England and Japan. To labour the point, the USA, most of the EU (along with Canada, Australia and Russia) and China (ahead of its entry to the WTO) already share the same law in relation to the international sale of goods. "ScIL", however, has additional hurdles to overcome. *(a)* In order to enter into force and become applicable, it must go through the lengthy process of ratification and accession by States. *(b)* Implementation is frequently subject to various reservations. *(c)* Even where a text is widely used, there are usually as many or more States that are exceptions. Success, which is by no means guaranteed, takes time, and for every uniform law that is a success there are several failures.
+
+Institutionally offered lex ("IoL"), comprehensive general contract principles or contract law restatements that create an entire "legal" environment for contracting, has the advantage of being instantly available, becoming effective by choice of the contracting parties at the stroke of a pen. "IoL" is also more easily developed subsequently, in the light of experience and need. Amongst the reasons for their use is the reduction of transaction costs through their provision of a set of default rules, applicable transnationally, that satisfy risk management criteria, being (or becoming) known, tried and tested, and of predictable effect.~{ "[P]arties often want to close contracts quickly, rather than hold up the transaction to negotiate solutions for every problem that might arise." Honnold (1992) on p. 13. }~ The most resoundingly successful "IoL" example to date has been the ICC's /{Uniform Customs and Practices for Documentary Credits}/, which is subscribed to as the default rules for the letters of credit offered by the vast majority of banks in the vast majority of countries of the world. Furthermore, uniform principles allow unification on matters that at the present stage of national and regional pluralism could not be achieved at a treaty level. There are, however, things that only "ScIL" can "engineer" (for example, that which relates to priorities and third party obligations).
+
+*{PICC:}* The arrival of PICC in 1994 was particularly timely. It coincided with the successful attempt at reducing trade barriers represented by the /{World Trade Agreement,}/~{ http://www.jus.uio.no/lm/wta.1994/ }~ and with the start of general Internet use,~{ See Amissah, /{On the Net and the Liberation of Information that wants to be Free}/ in ed. Jens Edvin A. Skoghoy /{Fra institutt til fakultet, Jubileumsskrift i anledning av at IRV ved Universitetet i Tromsø feirer 10 år og er blitt til Det juridiske fakultet}/ (Tromsø, 1996) pp. 59-76 or the same at http://www.jus.uio.no/lm/on.the.net.and.information.22.02.1997.amissah/ }~ which allowed for the exponential growth of electronic commerce and further underscored the transnational tendency of commerce. The arrival of PICC was all the more opportune bearing in mind the years it takes to prepare such an instrument. Whilst there have been some objections, the PICC (and PECL) as contract law restatements cater to the needs of the business community that seeks a non-national or transnational law as the basis of its contracts, and provide a focal point for future development in this direction. Where in the past the business community would have been forced to rely on the ethereal and nebulous lex mercatoria, it is now provided with the opportunity to make use of such a "law" that is readily accessible, has a clear and reasonably well defined content, will become familiar, and can be further developed as required. As such the PICC allow for more universal and uniform solutions. Their future success will depend on such factors as: *(a)* Suitability of their contract terms to the needs of the business community. *(b)* Their becoming widely known and understood. *(c)* Their predictability, evidenced by a reasonable degree of consistency in the results of their application. *(d)* Recognition of their potential to reduce transaction costs.
*(e)* Recognition of their being neutral as between different nations' interests (East, West; North, South). In the international sale of goods the PICC can be used in conjunction with more specific rules and regulations, including (on the parties' election~{ Also consider present and future possibilities for such use of PICC under CISG articles 8 and 9. }~) the CISG, to fill gaps in its provisions.~{ Drobnig, id. p. 228, comment that the CISG precludes recourse to general principles of contract law in Article 7. This does not refer to the situation where parties determine that the PICC should do so, see CISG Article 6. Or that in future the PICC will not be of importance under CISG Articles 8 and 9. }~ Provisions of the CISG would be given precedence over the PICC under the accepted principle of /{specialia generalibus derogant}/,~{ "Special principles have precedence over general ones." See Huet, Synthesis (1995) p. 277. }~ the mandatory content of the PICC excepted. There are many situations that the CISG does not provide for at all, or provides for in less detail than the PICC.
+
+Work on PICC and PECL, under the chairmanship of Professors Bonell and Ole Lando respectively, was wisely cross-pollinated (conceptually and through cross-membership of preparatory committees), as common foundations strengthen both sets of principles. A couple of points should be noted. Firstly, the maintained desirability of a transnational solution does not exclude the desirability of regional solutions, especially if there is choice, and the regional solutions are more comprehensive and easier to keep uniform in application. Secondly, the European Union has powers and influence (within the EU) unparalleled by UNIDROIT that can be utilised in future with regard to the PECL, if the desirability of a common European contract solution is recognised and agreed upon by EU member States. As a further observation, there is, hypothetically at least, nothing to prevent the future development of an alternative, extensive (competing) transnational contract /{lex}/ solution, though the weighty effort already in place in the PICC, and the high investment of time and of independent skilled legal minds necessary to achieve this in a widely acceptable manner, make such a development unlikely. It may however be the case that for electronic commerce some other particularly suitable rules and principles will in time be developed in a similar vein, along the lines of an "IoL".
+
+1~ Contract /{Lex}/ design. Questions of commonweal
+
+The virtues of freedom of contract are acknowledged in this paper in that they allow the international business community to structure business relationships to suit its requirements, and as such reflect the needs and workings of the market economy. However, it is instructive also to explore the limits of the principles of freedom of contract, pacta sunt servanda and caveat subscriptor. These principles rest on free market arguments that parties best understand their own interests, and that the contract they arrive at will be an optimum compromise between their competing interests, it not being for an outsider to regulate or evaluate what a party of its own free will and volition has gained from electing to contract on those terms. This approach to contract is adversarial, based on the conflicting wills of the parties achieving a meeting of minds. It imposes no duty of good faith and fair dealing or of loyalty (including the disclosure of material facts) upon the contracting parties to one another, who are left to protect their own interests. In international commerce, however, this demand that each party protect its own interests can be more costly, and may have a negative and restrictive effect. Also, although this approach is claimed to be neutral in making no judgement as to the contents of a contract, the claim can be misleading.
+
+2~ The neutrality of contract law and information cost
+
+The information problem is a general one that needs to be recognised in its various forms where it arises and addressed where possible.
+
+Adherents to the caveat subscriptor model point to the fact that parties have conflicting interests, and should look out for their own interests. However, information presents particular problems, which are exacerbated in international commerce.~{ The more straightforward cases of various types of misrepresentation apart. }~ As Michael Trebilcock put it: "Even the most committed proponents of free markets and freedom of contract recognise that certain information preconditions must be met for a given exchange to possess Pareto superior qualities."~{ Trebilcock, (1993) p. 102, followed by a quotation of Milton Friedman, from /{Capitalism and Freedom}/ (1962) p. 13. }~ Compared with domestic transactions, the contracting parties are less likely to possess information about each other or about what material facts there may be within the other party's knowledge, and will find such information more difficult and costly to acquire. With resource inequalities, some parties will be in a much better position to determine and access what they need to know, the more so as the more information one already has, the less it costs to identify and obtain any additional information that is required.~{ Trebilcock, (1993) p. 102, note quoted passage of Kim Lane Scheppele, /{Legal Secrets: Equality and Efficiency in the Common Law}/ (1988) p. 25. }~ The converse lot of the financially weaker party makes its problem of high information costs (both actual and relative) near insurmountable. Ignorance may even become a rational choice, as the marginal cost of information remains higher than its marginal benefit. "This, in fact is the economic rationale for the failure to fully specify all contingencies in a contract."~{ See for example Nicholas Mercuro and Steven G. Medema, p. 58 }~ The argument is tied to transaction cost and further elucidates a general role played by underlying default rules and principles.
It also extends further to the value of immutable principles that may help mitigate the problem in some circumstances. More general arguments are presented below.
+
+2~ Justifying mandatory loyalty principles
+
+Given the ability to create alternative solutions and even an independent /{lex}/, the question arises: what limits, if any, should be imposed upon freedom of contract? What protective principles are required? Should protective principles be default rules that can be excluded? Should they be mandatory? Should mandatory law exist only at the level of municipal law?
+
+A kernel of mandatory protective principles with regard to loyalty may be justified as beneficial, and even necessary, for "IoL" to be acceptable in international commerce, in that they (on balance) reflect the collective needs of the international business community. The present author is of the opinion that the duties of good faith and fair dealing and loyalty (or an acceptable equivalent) should be a necessary part of any attempt at the self-legislation or institutional legislation of any contract regime that is based on "rules and principles" (rather than a national legal order). If absent, a requirement for them should be imposed by mandatory international law. Such protective provisions are to be found within the PICC and PECL.~{ Examples include: the deliberately excluded validity (Article 4); the provision on interest (Article 78); impediment (Article 79); and what many believe to be the inadequate coverage of battle of forms (Article 19). }~ As regards PICC: *(a)* The loyalty (and other protective) principles help bring about confidence and foster relations between parties. They provide an assurance in the international arena, where parties are less likely to know each other and may have more difficulty in finding out about each other. *(b)* They better reflect the focus of the international business community on a business relationship from which both sides seek to gain. *(c)* They result in wider acceptability of the principles within both governments and the business community in the pluralistic international community. These protective principles may be regarded as enabling the PICC to better represent the needs of the commonweal. *(d)* Good faith and fair dealing~{ The commented PECL explain "'Good faith' means honesty and fairness in mind, which are subjective concepts... 'fair dealing' means observance of fairness in fact which is an objective test". }~ are fundamental underlying principles of international commercial relations.
*(e)* Reliance only on the varied mandatory law protections of various States does not engender uniformity, which is also desirable with regard to that which can be counted upon as immutable. (Not that this is avoidable, given that mandatory State law remains overriding.) More generally, freedom of contract benefits from these protective principles, which need immutable protection from contractual freedom to serve their function effectively. In seeking a transnational or non-national regime to govern contractual relations, one might suggest this to be the minimum price of freedom of contract that should be insisted upon by mandatory international law, as the limitation which hinders the misuse by one party of unlimited contractual freedom. These principles appear to be an essential basis for acceptability of the autonomous contract (the non-national contract, based on agreed rules and principles/"IoL"). As immutable principles they (hopefully, and this is to be encouraged) become the default standard for the conduct of international business and as such may be looked upon as "common property". Unless immutable, they suffer a fate somewhat analogous to that of "the tragedy of the commons."~{ Special problem regarding common/shared resources discussed by Garrett Hardin in Science (1968) 162 pp. 1243-1248. For short discussion and summary see Trebilcock, (1993) pp. 13-15. }~ It should be recognised that argument over the loyalty principles should be one of degree, as the concept must not be compromised, and needs to be protected (even if this comes at the price of a degree of uncertainty), especially against particularly strong parties, who are the most likely to argue against its necessity.
+
+1~ Problems beyond uniform texts
+
+2~ In support of four objectives
+
+In the formulation of many international legal texts a pragmatic approach was taken. Formulating legislators from different States developed solutions based on suitable responses to factual example circumstances. This was done, successfully, with a view to avoiding arguments over alternative legal semantics and methodologies. However, having arrived at a common text, what then? Given that differences of interpretation can arise and become entrenched, by what means is it possible to foster a sustainable drive towards the uniform application of shared texts? Four principles appear to be desirable and should, insofar as it is possible, be pursued together: *(i)* the promotion of certainty and predictability; *(ii)* the promotion of uniformity of application; *(iii)* the protection of democratic ideals and the ensuring of jurisprudential deliberation; and *(iv)* the retention of efficiency.
+
+2~ Improving the predictability, certainty and uniform application of international and transnational law
+
+The key to the (efficient) achievement of greater certainty and predictability in an international and/or transnational commercial law regime is through the uniform application of shared texts that make up this regime.
+
+Obviously a distinction is to be made between transnational predictability in application, that is "uniform application", and predictability at a domestic level. Where the "uniform law" is applied by a municipal court of State "A" that looks first to its domestic writings, there may be a clear, predictable manner of application, even if not in the spirit of the "Convention". Another State "B" may apply the uniform law in a different way that is equally predictable, being perfectly consistent internally. This, however, defeats much of the purpose of the uniform law.
+
+A first step is for municipal courts to accept the /{UN Convention on the Law of Treaties 1969}/ (in force 1980) as a codification of existing public international law with regard to the interpretation of treaties.~{ This is the position in English law see Lord Diplock in Fothergill v Monarch Airlines [1981], A.C. 251, 282 or see http://www.jus.uio.no/lm/england.fothergill.v.monarch.airlines.hl.1980/2_diplock.html also Mann (London, 1983) at p. 379. The relevant articles on interpretation are Article 31 and 32. }~ A potentially fundamental step towards the achievement of uniform application is through the conscientious following of the admonitions of the interpretation clauses of modern conventions, rules and principles~{ Examples: The CISG, Article 7; The PICC, Article 1.6; PECL Article 1.106; /{UN Convention on the Carriage of Goods by Sea (The Hamburg Rules) 1978}/, Article 3; /{UN Convention on the Limitation Period in the International Sale of Goods 1974}/ and /{1978}/, Article 7; /{UN Model Law on Electronic Commerce 1996}/, Article 3; /{UNIDROIT Convention on International Factoring 1988}/, Article 4; /{UNIDROIT Convention on International Financial Leasing 1988}/, Article 6; also /{EC Convention on the Law Applicable to Contractual Obligations 1980}/, Article 18. }~ to take into account their international character and the need to promote uniformity in their application,~{ For an online collection of articles see the Pace CISG Database http://www.cisg.law.pace.edu/cisg/text/e-text-07.html and amongst the many other articles do not miss Michael Van Alstine /{Dynamic Treaty Interpretation}/ 146 /{University of Pennsylvania Law Review}/ (1998) 687-793. }~ together with all this implies.~{ Such as the CISG provision on interpretation - Article 7. }~ However, the problems of uniform application, being embedded in differences of legal methodology, go beyond the agreement of a common text, and superficial glances at the works of other legal municipalities. 
These include questions related to sources of authority and the techniques applied in developing valid legal argument. Problems with sources include differences in the authority and weight given to: *(a)* legislative history; *(b)* rulings, domestic and international; *(c)* official and other commentaries; *(d)* scholarly writings. There should be an ongoing discussion of legal methodology to determine the methods best suited to addressing the problem of achieving greater certainty, predictability and uniformity in the application of shared international legal texts. With regard to information sharing, again the technology associated with the Net offers potential solutions.
+
+2~ The Net and information sharing through transnational databases
+
+The Net has been a godsend, permitting the collection and dissemination of information on international law. Even with the best intentions to live up to the admonitions of "ScIL" and "IoL" "to take into account their international character and the need to promote uniformity in their application", a difficulty has been in knowing what has been written and decided elsewhere. In discussing solutions, Professor Honnold in /{"Uniform Words and Uniform Application"}/~{ Based on the CISG, and inputs from several professors from different legal jurisdictions, on the problems of achieving the uniform application of the text across different legal municipalities. J. Honnold, /{Uniform Words and Uniform Application: The 1980 Sales Convention and International Juridical Practice}/, in /{Einheitliches Kaufrecht und nationales Obligationenrecht. Referate Diskussionen der Fachtagung}/ am 16/17-2-1987, Hrsg. von P. Schlechtriem (Baden-Baden, Nomos, 1987) pp. 115-147, at pp. 127-128. }~ suggests the following: "General Access to Case-Law and Bibliographic Material: The development of a homogenous body of law under the Convention depends on channels for the collection and sharing of judicial decisions and bibliographic material so that experience in each country can be evaluated and followed or rejected in other jurisdictions." Honnold then goes on to discuss "the need for an international clearing-house to collect and disseminate experience on the Convention", for which need, he writes, there is general agreement. He also discusses information-gathering methods through the use of national reporters, and poses the question "Will these channels be adequate? ..."
+
+The Net, offering inexpensive ways to build databases and to provide global access to information, provides an opportunity to address these problems that was not previously available. The Net extends the reach of the admonitions of the interpretation clauses, providing the medium whereby, if a decision or scholarly writing exists on a particular article or provision of a Convention anywhere in the world, it will be readily available. Whether or not a national court or arbitration tribunal chooses to follow such examples, it should be aware of them. Whatever a national court decides will also become internationally known, and will add to the body of experience on the Convention.~{ Nor is it particularly difficult to set into motion the placement of such information on the Net. With each interested participant publishing for their own interest, the Net could provide the key resources to be utilised in the harmonisation and reaching of common understandings of solutions and uniform application of legal texts. Works from all countries would be available. }~
+
+Such a library would be of interest to the institution promulgating the text, governments, practitioners and researchers alike. It could place at your fingertips: *(a)* Convention texts. *(b)* Implementation details of contracting States. *(c)* The legislative history. *(d)* Decisions generated by the convention around the world (court and arbitral where possible). *(e)* The official and other commentaries. *(f)* Scholarly writings on the Convention. *(g)* Bibliographies of scholarly writings. *(h)* Monographs and textbooks. *(i)* Student study material collections. *(j)* Information on promotional activities, lectures - moots etc. *(k)* Discussion groups/ mailing groups and other more interactive features.
+
+With respect to the CISG such databases are already being maintained.~{ Primary amongst them is the Pace University, Institute of International Commercial Law, CISG Database http://www.cisg.law.pace.edu/ which provides secondary support for the CISG, including a free on-line database of the legislative history, academic writings, and case-law on the CISG, and additional material with regard to the PICC and PECL insofar as they may supplement the CISG. Furthermore, the Pace CISG Project networks with the several other existing Net-based "autonomous" CISG projects. UNCITRAL, under Secretary Gerold Herrmann, has its own database through which it distributes its case-law materials collected from national reporters (CLOUT). }~
+
+Such a database, by ensuring the availability of international materials for use in conjunction with legal practice, helps to support the four principles named earlier. That of efficiency is especially enhanced if there is a single source that can be searched for the information required.
+
+The major remaining obstacle to confidence in this as the great and free panacea that it should be is the cost of translating texts.
+
+2~ Judicial minimalism promotes democratic jurisprudential deliberation
+
+How are liberal democratic ideals to be protected, and international jurisprudential deliberation ensured? As regards judicial method, where court decisions are looked to for guidance, both are fostered by a judicial minimalist approach.
+
+For those of us with a common law background, and others who pay special attention to cases, as invited to by the interpretation clauses, there is scope for discussion as to the most appropriate approach to be taken with regard to judicial decisions. US legal scholar Cass Sunstein's suggestion of judicial minimalism,~{ Cass R. Sunstein, /{One Case at a Time - Judicial Minimalism on the Supreme Court}/ (1999) }~ which despite being developed in a different context~{ His analysis is developed based largely on "hard" constitutional cases of the U.S. }~ is attractive in that it is suited to a liberal democracy in ensuring democratic jurisprudential deliberation. It maintains discussion and debate, and allows for adjustment as appropriate and the gradual development of a common understanding of issues. Much as one may admire farsighted and far-reaching decisions and expositions, there is less chance with the minimalist approach of the (dogmatic) imposition of particular values, whilst information sharing offers the possibility of the percolation of good ideas.~{ D. Stauffer, /{Introduction to Percolation Theory}/ (London, 1985). Percolation represents the sudden dramatic expansion of a common idea or ideas through the reaching of a critical level/mass in the rapid recognition of their power and the making of further interconnections. An epidemic-like infection of ideas. Not quite the way we are used to the progression of ideas within a conservative tradition. }~ Much as we admire the integrity of Dworkin's Hercules,~{ Ronald Dworkin, /{Law's Empire}/ (Harvard, 1986); /{Hard Cases}/ in Harvard Law Review (1988). }~ that he can consistently deliver single solutions suitable across such disparate socio-economic cultures is questionable. In examining the situation his own "integrity" would likely give him pause and prevent him from dictating that he can.~{ Hercules was created for U.S. Federal Cases and the community represented by the U.S. 
}~ This position is maintained as a general principle across international commercial law, despite private (as opposed to public) international commercial law not being an area of particularly "hard" cases of principle, and despite private international commercial law being an area in which, over a long history, it has been demonstrated that lawyers are able to talk a common language to make themselves and their concepts (which are not dissimilar) understood by each other.~{ In 1966, a time when there were greater differences between the legal systems of the States comprising the world economy, Clive Schmitthoff was able to comment that:<br>"22. The similarity of the law of international trade transcends the division of the world between countries of free enterprise and countries of centrally planned economy, and between the legal families of the civil law of Roman inspiration and the common law of English tradition. As a Polish scholar observed, "the law of external trade of the countries of planned economy does not differ in its fundamental principles from the law of external trade of other countries, such as e.g., Austria or Switzerland. Consequently, international trade law specialists of all countries have found without difficulty that they speak a 'common language'<br>23. 
The reason for this universal similarity of the law of international trade is that this branch of law is based on three fundamental propositions: first, that the parties are free, subject to limitations imposed by the national laws, to contract on whatever terms they are able to agree (principle of the autonomy of the parties' will); secondly, that once the parties have entered into a contract, that contract must be faithfully fulfilled (pacta sunt servanda) and only in very exceptional circumstances does the law excuse a party from performing his obligations, viz., if force majeure or frustration can be established; and, thirdly that arbitration is widely used in international trade for the settlement of disputes, and the awards of arbitration tribunals command far-reaching international recognition and are often capable of enforcement abroad."<br>/{Report of the Secretary-General of the United Nations, Progressive Development of the Law of International Trade}/ (1966). Report prepared for the UN by C. Schmitthoff. }~
+
+2~ Non-binding interpretative councils and their co-ordinating guides can provide a focal point for the convergence of ideas - certainty, predictability, and efficiency
+
+A respected central body can provide a guiding influence with respect to: *(a)* the uniform application of texts; *(b)* information management control. Given the growing mass of writing on common legal texts - academic and by way of decisions - we are faced with an information management problem.~{ Future if not current. }~
+
+Supra-national interpretative councils have been called for previously~{ /{UNCITRAL Secretariat}/ (1992) p. 253. Proposed by David (France) at the second UNCITRAL Congress and on a later occasion by Farnsworth (USA). To date the political will backed by the financing for such an organ has not been forthcoming. In 1992 the UNCITRAL Secretariat concluded that "probably the time has not yet come". Suggested also by Louis Sono in /{Uniform laws require uniform interpretation: proposals for an international tribunal to interpret uniform legal texts}/ (1992) 25th UNCITRAL Congress, pp. 50-54. Drobnig, /{Observations in Uniform Law in Practice}/ at p. 306. }~ but have for various reasons been regarded as impracticable to implement, among them the problems associated with getting States to formally agree upon such a body with binding authority.
+
+However it is not necessary to go this route. In relation to "IoL" in such forms as the PICC and PECL it is possible for the promulgators themselves~{ UNIDROIT and the EU }~ to update and clarify the accompanying commentary of the rules and principles, and to extend their work, through having councils with the necessary delegated powers. In relation to the CISG it is possible to do something similar of a non-binding nature, through the production of an updated commentary by an interpretative council (that could try to play the role of Hercules).~{ For references on interpretation of the CISG by a supranational committee of experts or council of "wise men" see Bonell, /{Proposal for the Establishment of a Permanent Editorial Board for the Vienna Sales Convention}/ in /{International Uniform Law in Practice / Le droit uniforme international dans la pratique}/ [Acts and Proceedings of the 3rd Congress on Private Law held by the International Institute for the Unification of Private Law (Rome, 1987)] (New York, 1988) pp. 241-244 }~ With respect, despite some expressed reservations, it is not true that it would have no more authority than a single author writing on the subject. A suitable non-binding interpretative council would provide a focal point for the convergence of ideas. Given the principle of ensuring democratic jurisprudential deliberation, that such a council would be advisory only (except perhaps on the contracting parties' election) would be one of its more attractive features, as it would ensure continued debate and development.
+
+2~ Capacity Building
+
+_1 "... one should create awareness about the fact that an international contract or transaction is not naturally rooted in one particular domestic law, and that its international specifics are best catered for in a uniform law."~{ UNCITRAL Secretariat (1992) p. 255. }~
+
+_{/{Capacity building}/}_ - raising awareness, providing education, creating a new generation of lawyers versed in a relatively new paradigm. Capacity building in international and transnational law is something that relevant institutions (including arbitration institutions), the business community, and far-sighted States should be interested in promoting. Finding means to transcend national boundaries is also to continue in the tradition of seeking the means to break down barriers to legal communication and understanding. However, while the business community seeks and requires greater uniformity in its business relations, there has paradoxically, at a national level, been a trend towards a nationalisation of contract law, and a regionalisation of business practice.~{ Erich Schanze, /{New Directions in Business Research}/ in Børge Dahl & Ruth Nielsen (ed.), /{New Directions in Contract Research}/ (Copenhagen, 1996) p. 62. }~
+
+As an example, the Pace University Institute of International Commercial Law plays a prominent role with regard to capacity building in relation to the CISG and PICC. Apart from the previously mentioned /{CISG Database}/, Pace University organises a large annual moot on the CISG,~{ See http://www.cisg.law.pace.edu/vis.html }~ this year involving students of 79 universities from 28 countries, and respected arbitrators from the world over. Within the moot, the finding of solutions based on the PICC where the CISG is silent is encouraged. Pace University also organises an essay competition~{ See http://www.cisg.law.pace.edu/cisg/text/essay.html }~ on the CISG and/or the PICC, which next year is to be expanded to include the PECL as a further option.
+
+1~ Marketing of transnational solutions
+
+Certain aspects of the Net/web may already be passé, but did you recognise it for what it was, or might become, when it arrived?
+
+As uniform law and transnational solutions are in competition with municipal approaches, to be successful a certain amount of marketing is necessary and may be effective. The approach should involve ensuring that the concept of what they seek to achieve is firmly implanted in the business, legal and academic communities, and engaging the business community and arbitration institutions in capacity building and the development of a new generation of lawyers. Feedback from the business community and arbitrators will also prove invaluable. Whilst it is likely that the business community will immediately be able to recognise their potential advantages, it is less certain that they will find the support of the legal community. The normal reasons would be similar to those usually cited as being the primary constraints on its development: "conservatism, routine, prejudice and inertia" (René David). These are problems associated with gaining the initial foothold of acceptability, also associated with the lower part of an exponential growth curve. In addition the legal community may face tensions arising for various reasons, including the possibility of an increase in world-wide competition.
+
+There are old, well developed legal traditions, with developed infrastructures and roots well established in several countries, that are dependable and known. The question arises: why experiment with alternative, non-extensively tested regimes? The required sophistication is developed in the centres providing legal services, and it may be argued that there is not a pressing need for unification or for transnational solutions, as the traditional way of contracting provides satisfactorily for the requirements of global commerce. The services required will continue to be easily and readily available from existing centres of skill. English law, to take an example, is for various reasons (including perhaps language, familiarity of use, reputation and widespread Commonwealth~{ http://www.thecommonwealth.org/ }~ relations) the premier choice for the law governing international commercial transactions, and is likely to remain so for the foreseeable future. Utilising the Commonwealth as an example, what the "transnational" law (e.g. CISG) experience illustrates, however, is that for States there may be greater advantage to be gained from participation in a horizontally shared area of commercial law than from retaining a traditional vertically integrated commercial law system, based largely, for example, on the English legal system.
+
+Borrowing a term from the information technology sector, it is essential to guard against FUD (fear, uncertainty and doubt) with regard to the viability of new and/or competing transnational solutions that may be spread by their detractors, and, promptly and in the manner required by the free market, to address any real problems that are discerned.
+
+1~ Tools in future development
+
+An attempt should be made by the legal profession to be more contemporary and to keep up to date with developments in technology and the sciences, and to adopt effective tools where suitable to achieve their goals. Technology one way or another is likely to encroach further upon law and the way we design it.
+
+Science works across cultures and is aspired to by most nations as being responsible for the phenomenal success of technology (both are similarly associated with globalisation). Science is extending its scope to (more confidently) tackle complex systems. It would not hurt to be more familiar with relevant scientific concepts and terminology. Certainly lawyers across the globe, myself included, would also benefit much in their conceptual reasoning from an early dose of the philosophy of science.~{ An excellent approachable introduction is provided by A.F. Chalmers, /{What is this thing called Science?}/ (1978, Third Edition 1999). }~ What better than Karl Popper on scientific discovery and the role of "falsification" and the value of predictive probity?~{ Karl R. Popper, /{The Logic of Scientific Discovery}/ (1959). }~ And certainly Thomas Kuhn on scientific advancement and "paradigm shifts"~{ Thomas S. Kuhn, /{The Structure of Scientific Revolutions}/ (1962, 3rd Edition 1976). }~ has his place. Having mentioned Karl Popper, it would not be unwise to go further (outside the realms of the philosophy of science) to study his defence of democracy in both volumes of /{The Open Society and Its Enemies}/.~{ Karl R. Popper, /{The Open Society and Its Enemies: Volume 1, Plato}/ (1945) and /{The Open Society and Its Enemies: Volume 2, Hegel & Marx}/ (1945). }~
+
+Less ambitiously, there are several tools not traditionally in the lawyer's set that may assist in transnational infrastructure modelling. Their potential deserves further exploration and development; to suggest a few by way of example: flow charts, fuzzy thinking, "intelligent" electronic agents and Net collaborations.
+
+In the early 1990s I was introduced to a quantity surveyor and engineer who had reduced the /{FIDIC Red Book}/~{ FIDIC is the International Federation of Consulting Engineers http://www.fidic.com/ }~ to over a hundred pages of intricate flow charts (decision trees), printed horizontally on roughly A4 sized sheets. He was employed by a Norwegian construction firm, who insisted that, based on past experience, they knew that he could, using his charts, consistently arrive in a day at answers to their questions that law firms took weeks to produce. Flow charts can be used to show interrelationships and dependencies, in order to navigate the implications of a set of rules more quickly. They may also be used more pro-actively (and /{ex ante}/ rather than /{ex post}/) in formulating texts, to avoid unnecessary complexity and to arrive at more practical, efficient and elegant solutions.
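The decision-tree idea can be made concrete with a small programmer's sketch. The rule, questions and outcomes below are invented for illustration and are not taken from the FIDIC Red Book; the point is only that, once rules are encoded as a tree, their interrelationships and dependencies can be traversed mechanically.

```python
# Illustrative only: a tiny decision tree for a hypothetical notice-of-claim
# rule (NOT from the FIDIC Red Book). Internal nodes hold a question and
# "yes"/"no" branches; leaves are outcome strings.

TREE = {
    "question": "Was notice of the claim given within 28 days?",
    "yes": {
        "question": "Were supporting particulars submitted?",
        "yes": "Claim may proceed to determination.",
        "no": "Claim proceeds, but assessment is limited to substantiated items.",
    },
    "no": "Claim is time-barred.",
}

def navigate(tree, answers):
    """Walk the tree using a sequence of 'yes'/'no' answers; return the outcome."""
    node = tree
    for answer in answers:
        if isinstance(node, str):  # already at an outcome
            break
        node = node[answer]
    return node

print(navigate(TREE, ["yes", "no"]))
# -> Claim proceeds, but assessment is limited to substantiated items.
```

Even a chart of a hundred such pages is, structurally, just a larger version of this: each answer prunes the remaining questions, which is why a practised user can reach an answer quickly.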
+
+Explore such concepts as "fuzzy thinking"~{ Concept originally developed by Lotfi Zadeh, /{Fuzzy Sets}/, Information Control 8 (1965) pp 338-353. For introductions see Daniel McNeill and Paul Freiberger, /{Fuzzy Logic: The Revolutionary Computer Technology that is Changing our World}/ (1993); Bart Kosko, /{Fuzzy Thinking}/ (1993); Earl Cox, /{The Fuzzy Systems Handbook}/ (New York, 2nd ed. 1999). Perhaps to the uninitiated an unfortunate choice of name, as fuzzy logic and fuzzy set theory are more precise than classical logic and set theory, which comprise a subset of that which is fuzzy (representing those instances where membership is 0% or 100%). The statement is not entirely without controversy, in suggesting the possibility that classical thinking may be subsumed within the realms of an unfamiliar conceptual paradigm that is to take hold of future thinking. In the engineering field much pioneering work on fuzzy rule based systems was done at Queen Mary College by Ebrahim Mamdani in the early and mid-1970s. Time will tell. }~ including fuzzy logic, fuzzy set theory, and fuzzy systems modelling, of which classical logic and set theory are subsets. Both by way of analogy and as a tool, fuzzy concepts are better at coping with complexity and map more closely to judicial thinking and argument in the application of principles and rules. Fuzzy theory provides a method for analysing and modelling principle and rule based systems, even where conflicting principles may apply, permitting, /{inter alia}/, working with competing principles and the contextual assignment of precision to terms such as "reasonableness". Fuzzy concepts should be explored in expert systems, and in future law. Problems of scaling associated with multiple decision trees do not prevent useful applications and structured solutions. The analysis assists in discerning what lawyers are involved with.
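A minimal sketch may illustrate the core idea of fuzzy membership. Classical set theory forces "reasonable" to be simply true or false; a fuzzy set instead assigns a degree of membership between 0.0 and 1.0. The thresholds below are invented purely for the example - in law such precision would be assigned contextually, as noted above.

```python
# Illustrative fuzzy membership function for "reasonable delivery time"
# (in days). Thresholds are invented: fully reasonable up to 14 days,
# fading linearly to entirely unreasonable at 30 days. Classical logic
# corresponds to the special cases where membership is exactly 0.0 or 1.0.

def reasonable_delivery(days, full=14.0, none=30.0):
    """Degree (0.0-1.0) to which a delivery time counts as 'reasonable'."""
    if days <= full:
        return 1.0
    if days >= none:
        return 0.0
    return (none - days) / (none - full)  # linear descent between thresholds

for d in (10, 14, 22, 30):
    print(d, reasonable_delivery(d))
```

A 22-day delivery is thus "reasonable to degree 0.5" rather than flatly reasonable or unreasonable - closer to how a tribunal actually weighs such a term than a binary test is.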
+
+"Intelligent" electronic agents can be expected to gather information on behalf of both the business community and lawyers. In future, electronic agents are likely to be employed to identify and bring to the attention of their principals "invitations to treat" or offers worthy of further investigation. In some cases they will be developed and relied upon as electronic legal agents, operating under a programmed mandate and vested with the authority to enter certain contracts on behalf of their principals. Such a mandate would include the choice of law upon which to contract, and the scenario could be assisted by transnational contract solutions (and catered for in the design of "future law").
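The "programmed mandate" scenario can be sketched in a few lines. Everything here is speculative and invented for illustration - the class names, the price limit, and the idea of listing acceptable governing-law regimes are assumptions, not any existing system - but it shows how a mandate, including the choice of law, could gate an agent's authority to contract.

```python
# Speculative sketch of an electronic agent's mandate check. All names and
# limits are invented. The agent may accept an offer only if it falls
# within its programmed mandate, which includes the governing law.

from dataclasses import dataclass

@dataclass
class Offer:
    goods: str
    price: float
    governing_law: str

@dataclass
class Mandate:
    max_price: float
    acceptable_laws: tuple  # e.g. transnational regimes the principal accepts

    def permits(self, offer):
        """True if the agent is authorised to accept this offer."""
        return (offer.price <= self.max_price
                and offer.governing_law in self.acceptable_laws)

mandate = Mandate(max_price=10_000.0, acceptable_laws=("CISG", "PICC"))
offer = Offer(goods="widgets", price=9_500.0, governing_law="CISG")
print(mandate.permits(offer))  # True
```

The design point is the one made in the text: uniform transnational regimes are far easier to encode in such a mandate than the conflict-of-laws analysis that a choice among many municipal systems would require.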
+
+Another area in which technology is helping to solve legal problems relates to various types of global register and transaction centre. Property registers, including those for patents and moveable property, are an obvious example. Bolero provides an example of how electronic documents can be centrally brokered on behalf of trading parties.
+
+Primary law should be available free on the Net, and this applies also to "IoL" and the static material required for their interpretation. This should be the policy adopted by all institutions involved in contributing to the transnational legal infrastructure. Where possible, larger databases should also be developed and shared. The Net has reduced the cost of dissemination of material to a small fraction of what it was before. Universities now can and should play a more active role. Suitable funding arrangements should be explored that do not result in proprietary systems or the forwarding of specific lobby interests. To promote uniform standards, institutions should also strive to have their hard-copy materials available at a reasonable price. Many appear to be unacceptably expensive given the need for their promotion and for capacity building, amongst students and across diverse States.
+
+Follow the open standards and community standards debate in relation to the development of technology standards and technology infrastructure tools - including operating systems,~{ See for example /{Open Sources : Voices from the Open Source Revolution - The Open Source Story}/ http://www.oreilly.com/catalog/opensources/book/toc.html }~ to discover what if anything it might suggest for the future development of law standards.
+
+1~ As an aside, a word of caution
+
+I end with an arguably gratuitous observation, by way of a reminder and general warning. Gratuitous in the context of this paper because the areas focused upon~{ Sale of goods (CISG), contract rules and principles (PICC), related arbitration, and the promotion of certain egalitarian ideals. }~ were somewhat deliberately selected to fall outside the more contentious and "politically" problematic areas related to globalisation, economics, technology, law and politics.~{ This is not as evident in the area of private international commercial contract law, the chosen focus of this paper, but appears repeatedly in relation to other areas and issues arising out of the economics, technology, law nexus. }~ Gratuitous also because there will be no attempt to concretise or exemplify the possibility suggested.
+
+Fortunately, we are not (necessarily) talking about a zero sum game; however, it is necessary to be able to distinguish and recognise that which may harm. International commerce/trade is competitive, and by its nature not benign, even if it results in an overall improvement in the economic lot of the peoples of our planet. "Neutral tests" such as Kaldor-Hicks efficiency do not require that your interests are benefited one iota, just that, whilst those of others are improved, yours are not made worse. If the measure adopted is overall benefit, it is even more possible that an overall gain may result where your interests are adversely affected - the more so if you have little, and those that gain, gain much. Furthermore such "tests" are based on assumptions which at best are approximations of reality (e.g. that of zero transaction costs, where in fact transaction costs are not zero, and are frequently proportionately higher for the economically weak). At worst they may be manipulated /{ex ante}/ with knowledge of their implications (e.g. engineering to ensure actual or relative~{ Low fixed costs have a "regressive" effect }~ asymmetrical transaction costs). It is important to be careful in a wide range of circumstances related to various aspects of the modelling of the infrastructure for international commerce that have an impact on the allocation of rights and obligations, and especially the allocation of resources, including various types of intellectual property rights. Ask: what is the objective and justification for the protection? How well is the objective met? Are there other consequential effects? Are there other objectives that are worthy of protection? Could the stated objective(s) be achieved in a better way?
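The Kaldor-Hicks point reduces to trivial arithmetic, which a toy example (figures invented) makes plain: a measure passes the test whenever aggregate gains exceed aggregate losses, i.e. the winners *could* compensate the losers - with no requirement that they actually do so, or that any given party be left no worse off.

```python
# Toy illustration (invented figures) of the Kaldor-Hicks compensation
# test discussed above: a measure passes if winners gain more in total
# than losers lose. No actual compensation is required.

def kaldor_hicks_improvement(changes):
    """changes: mapping of party -> gain (+) or loss (-) from a measure."""
    return sum(changes.values()) > 0

# Party C loses 5, yet the measure passes, because A and B together
# gain 25: "overall benefit" coexists with C being made worse off.
measure = {"A": 20, "B": 5, "C": -5}
print(kaldor_hicks_improvement(measure))  # True
```

Note that the test is silent on distribution: it treats a unit of gain to a wealthy party and a unit of loss to a poor one as exactly offsetting, which is part of the caution urged in the text.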
+
+Within a system there are those who benefit from the way it has been, who may oppose change as resulting in loss to them or uncertainty as to their continued privilege. For a stable system that favours such a Select Set to arise initially does not require the conscious manipulation of conditions by the Select Set; rather, it requires only that from the system (set) in place the Select Set emerges as beneficiary. Subsequently the Select Set, having become established as favoured and empowered by its status as beneficiary, will seek to do what it can to influence circumstances to ensure its continued beneficial status: that is, to keep the system operating to its advantage (or tune it to work even better towards this end), usually with little regard to the conditions resulting for other members of the system. Often this will be a question of degree, and the original purpose, or an alternative "neutral" argument, is likely to be used to justify the arrangement. The objective from the perspective of the Select Set is fixed; the means at its disposal may vary. Complexity is not required for such situations to arise, but once they have, subsequent plays by the Select Set tend towards complexity. Furthermore, moves in the interest of the Select Set are more easily obscured/disguised in a complex system. Limited access to information and knowledge is a devastating handicap: without them, change cannot be contemplated, let alone negotiated. Frequently, having information and knowledge is not enough. The protection of self-interest is an endemic part of our system, with the system repeatedly being co-opted to the purposes of those that are able to manipulate it. Membership over time is not static: for example, yesterday's "copycat nations" are today's innovators, keen to protect their intellectual property. This also illustrates the point that what it may take to set success in motion may not be the same as that which is preferred to sustain it. 
Whether these observations appear to be self-evident and/or abstract and out of place with regard to this paper, they have far reaching implications repeatedly observable within the law, technology, and commerce (politics) nexus. Even if not arising much in the context of the selected material for this paper, their mention is justified by way of warning. Suitable examples would easily illustrate how politics arises inescapably as an emergent property from the nexus of commerce, technology, and law.~{ In such circumstances either economics or law on their own would be sufficient to result in politics arising as an emergent property. }~
+
+%% SiSU markup sample Notes:
+% SiSU http://www.jus.uio.no/sisu
+% SiSU markup for 0.16 and later:
+% 0.20.4 header 0~links
+% 0.22 may drop image dimensions (rmagick)
+% 0.23 utf-8 ß
+% 0.38 or later, may use alternative notation for headers, e.g. @title: (instead of 0~title)
+% 0.38 document structure alternative markup, experimental (rad) A,B,C,1,2,3 maps to 1,2,3,4,5,6
+% 0.42 * type endnotes, used e.g. in relation to author
+% 0.51 skins changed, markup unchanged
+% 0.52 declared document type identifier at start of text
+% Output: http://www.jus.uio.no/sisu/autonomy_markup0/sisu_manifest.html
+% SiSU 0.38 experimental (alternative structure) markup used for this document
+% (compare 0.36 standard markup in sisu-examples autonomy_markup4.sst)
diff --git a/data/sisu_markup_samples/non-free/autonomy_markup1.sst b/data/sisu_markup_samples/non-free/autonomy_markup1.sst
new file mode 100644
index 0000000..6639ac0
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/autonomy_markup1.sst
@@ -0,0 +1,197 @@
+% SiSU 0.42
+% alternative markup for document structure and headers
+
+@title: Revisiting the Autonomous Contract
+
+@subtitle: Transnational contracting, trends and supportive structures
+
+@creator: Ralph Amissah*
+
+@type: article
+
+@subject: international contracts, international commercial arbitration, private international law
+
+@date: 2000-08-27
+
+@level: num_top=1
+
+@links: {Syntax}http://www.jus.uio.no/sisu/sample/syntax/autonomy_markup1.sst.html
+{The Autonomous Contract}http://www.jus.uio.no/lm/the.autonomous.contract.07.10.1997.amissah/toc.html
+{Contract Principles}http://www.jus.uio.no/lm/private.international.commercial.law/contract.principles.html
+{UNIDROIT Principles}http://www.jus.uio.no/lm/unidroit.international.commercial.contracts.principles.1994.commented/toc.html
+{Sales}http://www.jus.uio.no/lm/private.international.commercial.law/sale.of.goods.html
+{CISG}http://www.jus.uio.no/lm/un.contracts.international.sale.of.goods.convention.1980/doc.html
+{Arbitration}http://www.jus.uio.no/lm/arbitration/toc.html
+{Electronic Commerce}http://www.jus.uio.no/lm/electronic.commerce/toc.html
+
+:A~ Revisiting the Autonomous Contract <sub>(Draft 0.90 - 2000.08.27 ;)</sub>
+
+:B~ Transnational contract "law", trends and supportive structures
+
+:C~ \copyright Ralph Amissah~{* Ralph Amissah is a Fellow of Pace University, Institute for International Commercial Law. http://www.cisg.law.pace.edu/ <br>RA lectured on the private law aspects of international trade whilst at the Law Faculty of the University of Tromsø, Norway. http://www.jus.uit.no/ <br> RA built the first web site related to international trade law, now known as lexmercatoria.org and described as "an (international | transnational) commercial law and e-commerce infrastructure monitor". http://lexmercatoria.org/ <br> RA is interested in the law, technology, commerce nexus. RA works with the law firm Amissahs.<br>/{[This is a draft document and subject to change.]}/ <br>All errors are very much my own.<br>ralph@amissah.com }~
+
+1~ Reinforcing trends: borderless technologies, global economy, transnational legal solutions?
+
+Revisiting the Autonomous Contract~{ /{The Autonomous Contract: Reflecting the borderless electronic-commercial environment in contracting}/ was published in /{Elektronisk handel - rettslige aspekter, Nordisk årsbok i rettsinformatikk 1997}/ (Electronic Commerce - Legal Aspects. The Nordic yearbook for Legal Informatics 1997) Edited by Randi Punsvik, or at http://www.jus.uio.no/the.autonomous.contract.07.10.1997.amissah/doc.html }~
+
+Globalisation is to be observed as a trend intrinsic to the world economy.~{ As Maria Cattaui Livanos suggests in /{The global economy - an opportunity to be seized}/ in /{Business World}/ the Electronic magazine of the International Chamber of Commerce (Paris, July 1997) at http://www.iccwbo.org/html/globalec.htm <br> "Globalization is unstoppable. Even though it may be only in its early stages, it is already intrinsic to the world economy. We have to live with it, recognize its advantages and learn to manage it.<br>That imperative applies to governments, who would be unwise to attempt to stem the tide for reasons of political expediency. It also goes for companies of all sizes, who must now compete on global markets and learn to adjust their strategies accordingly, seizing the opportunities that globalization offers."}~ Rudimentary economics explains this runaway process as being driven by competition within the business community to achieve efficient production, and to reach and extend available markets.~{To remain successful, being in competition, the business community is compelled to take advantage of the opportunities provided by globalisation.}~ Technological advancement, particularly in transport and communications, has historically played a fundamental role in the furtherance of international commerce, with the Net, technology's latest spatio-temporally transforming offering and linchpin of the "new economy", extending exponentially the global reach of the business community. The Net covers much of the essence of international commerce, providing an instantaneous, low cost, convergent, global and borderless information centre, marketplace and channel for communications, payments and the delivery of services and intellectual property. The sale of goods, however, involves the separate element of their physical delivery. The Net has raised a plethora of questions and has frequently offered solutions. 
The increased transparency of borders arising from the Net's ubiquitous nature results in an increased demand for transparency of operation. As economic activities become increasingly global, there is, in order to reduce transaction costs, a strong incentive for the "law" that provides for them to do so in a similar dimension. The appeal of transnational legal solutions lies in the potential reduction in complexity, more widely dispersed expertise, and resulting increased transaction efficiency. The Net reflexively offers possibilities for the development of transnational legal solutions, having in a similar vein transformed the possibilities for the promulgation of texts, the sharing of ideas and collaborative ventures. There are, however, likely to be tensions between the legal community's protection of entrenched practices against that which is new (both in law and technology), and the business community's goal of reducing transaction costs.
+
+Within commercial law an analysis of law and economics may assist in developing a better understanding of the relationship between commercial law and the commercial sector it serves.~{ Realists would contend that law is contextual and best understood by exploring the interrelationships between law and the other social sciences, such as sociology, psychology, political science, and economics.}~ "...[T]he importance of the interrelations between law and economics can be seen in the twin facts that legal change is often a function of economic ideas and conditions, which necessitate and/or generate demands for legal change, and that economic change is often governed by legal change."~{ Part of a section cited in Nicholas Mercuro and Steven G. Medema, /{Economics and the Law: from Posner to Post-Modernism}/ (Princeton, 1997) p. 11, with reference to Karl N. Llewellyn, The Effect of Legal Institutions upon Economics, American Economic Review 15 (December 1925) pp. 655-683; Mark M. Litchman, Economics, the Basis of Law, American Law Review 61 (May-June 1927) pp. 357-387; and W. S. Holdsworth, A Neglected Aspect of the Relations between Economic and Legal History, Economic History Review 1 (January 1927-1928) pp. 114-123.}~ In doing so, however, it is important to be aware that there are several competing schools of law and economics, with different perspectives, levels of abstraction, and analytical consequences of and for the world that they model.~{ For a good introduction see Nicholas Mercuro and Steven G. Medema, /{Economics and the Law: from Posner to Post-Modernism}/ (Princeton, 1997). These include: Chicago law and economics (New law and economics); New Haven School of law and economics; Public Choice Theory; Institutional law and economics; Neoinstitutional law and economics; Critical Legal Studies.}~
+
+Where there is rapid interrelated structural change with resulting new features, understanding underlying currents and concepts at their intersections, rather than concentrating on the traditionally established tectonic plates of a discipline or on expositions of history,~{ Case overstated, but this is an essential point. It is not helpful to be overly tied to the past. It is necessary to be able to look ahead and explore new solutions, and be aware of the implications of "complexity" (as to the relevance of past circumstances to the present). }~ is the key to commencing meaningful discussions and developing solutions for the resulting issues.~{ The majority of which are beyond the scope of this paper. Examples include: encryption and privacy for commercial purposes; digital signatures; symbolic ownership; electronic intellectual property rights.}~ Interrelated developments are more meaningfully understood through interdisciplinary study, as this instance suggests, of the law, commerce/economics and technology nexus. In advocating this approach, we should also pay heed to the realisation in the sciences of the limits of reductionism in the study of complex systems, as such systems feature emergent properties that are not evident if they are broken down into their constituent parts. System complexity exceeds sub-system complexity; consequently, the relevant unit for understanding the system's function is the system, not its parts.~{ Complexity theory is a branch of mathematics and physics that examines non-linear systems in which simple sets of deterministic rules can lead to highly complicated results, which cannot be predicted accurately. A study of the subject is provided by Nicholas Rescher /{Complexity: A Philosophical Overview}/ (New Brunswick, 1998). See also Jack Cohen and Ian Stewart, /{The Collapse of Chaos: Discovering Simplicity in a Complex World}/ (1994). }~ Simplistic dogma should be abandoned for a contextual approach.
+
+1~ Common Property - advocating a common commercial highway
+
+Certain infrastructural underpinnings beneficial to the working of the market economy are not best provided by the business community, but by other actors including governments. In this paper mention is made for example of the /{United Nations Convention on the Recognition and Enforcement of Foreign Arbitral Awards}/ (New York, 10 June 1958), which the business community regularly relies upon as the back-stop for their international agreements. Common property can have an enabling value: the Net, basis for the "new" economy, would not be what it is today without much that has been shared on this basis, having permitted /{"Metcalfe's law"}/~{ Robert Metcalfe, founder of 3Com. }~ to take hold. /{Metcalfe's law}/ suggests that the value of a shared technology grows exponentially with its user base. In all likelihood it applies as much to transnational contract law as to technological networks and standards. The more people who use a network or standard, the more "valuable" it becomes, and the more users it will attract. Key infrastructure should be identified and common property solutions, where appropriate, nurtured, keeping transaction costs to a minimum.
+
+The following general perspective is submitted as worthy of consideration (and support) by the legal, business and academic communities, and governments. *(a)* Abstract goals valuable to a transnational legal infrastructure include certainty and predictability, flexibility, simplicity where possible, and neutrality, in the sense of being without perceived "unfairness" in the global context of their application. This covers the content of the "laws" themselves and the methods used for their interpretation. *(b)* Of law with regard to technology, "rules should be technology-neutral (i.e., the rules should neither require nor assume a particular technology) and forward looking (i.e., the rules should not hinder the use or development of technologies in the future)."~{ /{US Framework for Global Electronic Commerce}/ (1997) http://www.whitehouse.gov/WH/New/Commerce/ }~ *(c)* Desirable abstract goals in developing technological standards and critical technological infrastructure include choice, and that they should be shared and public or "open" as in "open source", and platform and/or program neutral, that is, interoperable. (On security, to forestall suggestions to the contrary, popular open source software tends to be as secure as, or more secure than, proprietary software.) *(d)* Encryption is an essential part of the mature "new" economy but remains the subject of some governments' restriction.~{ The EU is lifting such restriction, and the US seems likely to follow suit. }~ The availability of (and the possibility of developing common transnational standards for) strong encryption is essential for commercial security and trust with regard to all manner of Net communications and electronic commerce transactions, /{vis-à-vis}/ their confidentiality, integrity, authentication, and non-repudiation.
That is, encryption is the basis for essential commerce-related technologies, including amongst many others electronic signatures, electronic payment systems and the development of electronic symbols of ownership (such as electronic bills of lading). *(e)* As regards the dissemination of primary materials concerning "uniform standards" in both the legal and technology domains, "the Net" should be used to make them globally available, free of charge. Technology should be similarly used where possible to promote the goals outlined under point (a). Naturally, for a tempered supporter of the market economy,~{ Caveats extending beyond the purview of this paper. It is necessary to be aware that there are other overriding interests, global and domestic, that the market economy is ill suited to providing for, such as the environment, and possibly key public utilities that require long term planning and high investment. It is also necessary to continue to be vigilant against that which, even if arising as a natural consequence of the market economy, has the potential to disturb or destroy its function, such as monopolies.}~ these reservations do not extend to proprietary secondary materials and technologies. Similarly, actors of the market economy would take advantage of the common property base of the commercial highway.
+
+1~ Modelling the private international commercial law infrastructure
+
+Apart from the study of "laws" or the existing legal infrastructure, there are a multitude of players involved in their creation whose efforts may be regarded as being in the nature of systems modelling. Of interest to this paper is the subset of activity of a few organisations that provide the underpinnings for the foundation of a successful transnational contract/sales law. These are not amongst the more controversial legal infrastructure modelling activities, and represent a small but significant part in simplifying international commerce and trade.~{ Look for instance at national customs procedures, and consumer protection.}~
+
+Briefly viewing the wider picture, several institutions are involved as independent actors in systems modelling of the transnational legal infrastructure. Their roles and mandates and the issues they address are conceptually different. These include certain United Nations organs and affiliates such as the United Nations Commission on International Trade Law (UNCITRAL),~{ http://www.uncitral.org/ }~ the World Intellectual Property Organisation (WIPO)~{ http://www.wipo.org/ }~ and recently the World Trade Organisation (WTO),~{ http://www.wto.org/ }~ along with other institutions such as the International Institute for the Unification of Private Law (UNIDROIT),~{ http://www.unidroit.org/ }~ the International Chamber of Commerce (ICC),~{ http://www.iccwbo.org/ }~ and the Hague Conference on Private International Law.~{ http://www.hcch.net/ }~ They identify areas that would benefit from an international or transnational regime and use the various tools at their disposal (including treaties, model laws, conventions, rules and/or principles, and standard contracts) to develop legislative "solutions" that they hope will be subscribed to.
+
+A host of other institutions are involved in providing regional solutions.~{ such as ASEAN http://www.aseansec.org/ the European Union (EU) http://europa.eu.int/ MERCOSUR http://embassy.org/uruguay/econ/mercosur/ and North American Free Trade Agreement (NAFTA) http://www.nafta-sec-alena.org/english/nafta/ }~ Specialised areas are also addressed by appropriately specialised institutions.~{ e.g. large international banks; or in the legal community, the Business Section of the International Bar Association (IBA) with its membership of lawyers in over 180 countries. http://www.ibanet.org/ }~ A result of globalisation is increased competition (also) amongst States, which are active players in the process, identifying and addressing the needs of their business communities over a wide range of areas and managing the suitability to the global economy of their domestic legal, economic, technological and educational~{ For a somewhat frightening peek and illuminating discussion of the role of education in the global economy as implemented by a number of successful States see Joel Spring, /{Education and the Rise of the Global Economy}/ (Mahwah, NJ, 1998). }~ infrastructures. The role of States remains to identify what domestic structural support they must provide to be integrated and competitive in the global economy.
+
+In addition to "traditional" contributors, the technology/commerce/law confluence provides new challenges and opportunities, allowing the emergence of important new players within the commercial field, such as Bolero,~{ http://www.bolero.org/ also http://www.boleroassociation.org/ }~ which, with the backing of international banks and ship-owners, offers electronic replacements for traditional paper transactions, acting as transaction agent for the electronic substitute on behalf of the trading parties. The acceptance of the possibility of applying an institutionally offered lex has opened the door further for other actors, including /{ad hoc}/ groupings of the business community and/or universities, to find ways to be engaged and actively participate in providing services for themselves and/or others in this domain.
+
+1~ The foundation for transnational private contract law, arbitration
+
+The market economy drive perpetuating economic globalisation is also active in the development and choice of transnational legal solutions. The potential reward: international sets of contract rules and principles that can be counted on to be consistent, and to provide a uniform layer of insulation (with minimal reference back to State law) when applied across the landscape of a multitude of different municipal legal systems. The business community is free to utilise them if available, and if not, to develop them, or seek to have them developed.
+
+The kernel for the development of a transnational legal infrastructure governing the rights and obligations of private contracting individuals was put in place as far back as 1958 by the /{UN Convention on the Recognition and Enforcement of Foreign Arbitral Awards}/ (/{"NY Convention on ICA"}/),~{ at http://www.jus.uio.no/lm/un.arbitration.recognition.and.enforcement.convention.new.york.1958/ }~ now in force in over a hundred States. Together with freedom of contract, the /{NY Convention on ICA}/ made it possible for commercial parties to develop and be governed by their own /{lex}/ in their contractual affairs, should they wish to do so, and guaranteed that provided their agreement was based on international commercial arbitration (/{"ICA"}/), (and not against relevant mandatory law) it would be enforced in all contracting States. This has been given further support by various more recent arbitration rules and the /{UNCITRAL Model Law on International Commercial Arbitration 1985}/,~{ at http://www.jus.uio.no/lm/un.arbitration.model.law.1985/ }~ which now explicitly state that rule based solutions independent of national law can be applied in /{"ICA"}/.~{ Lando, /{Each Contracting Party Must Act In Accordance with Good Faith and Fair Dealing}/ in /{Festskrift til Jan Ramberg}/ (Stockholm, 1997) p. 575. See also UNIDROIT Principles, Preamble 4 a. Also Arthur Hartkamp, The Use of UNIDROIT Principles of International Commercial Contracts by National and Supranational Courts (1995) in UNIDROIT Principles: A New Lex Mercatoria?, pp. 253-260 on p. 255. But see Goode, /{A New International Lex Mercatoria?}/ in /{Juridisk Tidskrift}/ (1999-2000 nr 2) p. 256 and 259. }~
+
+/{"ICA"}/ is recognised as the most prevalent means of dispute resolution in international commerce. Unlike litigation, /{"ICA"}/ survives on its merits as a commercial service to provide for the needs of the business community.~{ /{"ICA"}/, being shaped by market forces and competition, adheres more closely to the rules of the market economy, responding to its needs and catering for them more adequately. }~ It has consequently been more dynamic than national judiciaries in adjusting to the changing requirements of businessmen. Its institutions are quicker to adapt and innovate, including the ability to cater for transnational contracts. /{"ICA"}/, in taking its mandate from and giving effect to the will of the parties, provides them with greater flexibility and frees them from many of the limitations of municipal law.~{ As examples of this, it seeks to give effect to the parties' agreement upon: the lex mercatoria as the law of the contract; the number of, and the persons to be, "adjudicators"; the language of proceedings; the procedural rules to be used; and the finality of the decision. }~
+
+In sum, a transnational/non-national regulatory order governing the contractual rights and obligations of private individuals is made possible by: *(a)* States' acceptance of freedom of contract (public policy excepted); *(b)* Sanctity of contract, embodied in the principle /{pacta sunt servanda}/; *(c)* Written contractual selection of dispute resolution by international commercial arbitration, whether /{ad hoc}/ or institutional, usually under internationally accepted arbitration rules; *(d)* Guaranteed enforcement, arbitration where necessary borrowing the State apparatus for law enforcement through the /{NY Convention on ICA}/, which has secured for /{"ICA"}/ a recognition and enforcement regime unparalleled by municipal courts in well over a hundred contracting States; *(e)* Transnational effect or non-nationality, achievable through /{"ICA"}/ accepting the parties' ability to select the basis upon which the dispute would be resolved outside municipal law, such as through the selection of general principles of law or /{lex mercatoria}/, or calling upon the arbitrators to act as /{amiable compositeur}/ or /{ex aequo et bono}/.
+
+This framework provided by /{"ICA"}/ opened the door for the modelling of effective transnational law default rules and principles for contracts independent of State participation (in their development, application, or choice of law foundation). Today we have increased certainty of content and better control over the desired degree of transnational effect or non-nationality, with the availability of comprehensive insulating rules and principles such as the /{PICC}/ or /{Principles of European Contract Law}/ (/{"European Principles"}/ or /{"PECL"}/) that may be chosen either together with, or to the exclusion of, a choice of municipal law as governing the contract. For electronic commerce a similar path is hypothetically possible.
+
+1~ "State contracted international law" and/or "institutionally offered lex"? /{CISG}/ and /{PICC}/ as examples
+
+An institutionally offered lex ("IoL", uniform rules and principles) appears to have a number of advantages over "State contracted international law" ("ScIL", model laws, treaties and conventions for enactment). The development and formulation of both "ScIL" and "IoL" law takes time, the /{CISG}/ representing a half century of effort~{ /{UNCITRAL Convention on Contracts for the International Sale of Goods 1980}/ see at http://www.jus.uio.no/lm/un.contracts.international.sale.of.goods.convention.1980/ <br>The /{CISG}/ may be regarded as the culmination of an effort in the field dating back to Ernst Rabel, (/{Das Recht des Warenkaufs}/ Bd. I&II (Berlin, 1936-1958). Two volume study on sales law.) followed by the Cornell Project, (Cornell Project on Formation of Contracts 1968 - Rudolf Schlesinger, Formation of Contracts. A study of the Common Core of Legal Systems, 2 vols. (New York, London 1968)) and connected most directly to the UNIDROIT-inspired /{Uniform Law for International Sales}/ (ULIS at http://www.jus.uio.no/lm/unidroit.ulis.convention.1964/ and ULF at http://www.jus.uio.no/lm/unidroit.ulf.convention.1964/ ), the main preparatory works behind the /{CISG}/ (/{Uniform Law on the Formation of Contracts for the International Sale of Goods}/ (ULF) and the /{Convention relating to a Uniform Law on the International Sale of Goods}/ (ULIS) The Hague, 1964.). }~ and /{PICC}/ twenty years.~{ /{UNIDROIT Principles of International Commercial Contracts}/ commonly referred to as the /{UNIDROIT Principles}/ and within this paper as /{PICC}/ see at http://www.jus.uio.no/lm/unidroit.contract.principles.1994/ and http://www.jus.uio.no/lm/unidroit.international.commercial.contracts.principles.1994.commented/ <br>The first edition of the /{PICC}/ was finalised in 1994, 23 years after their first conception, and 14 years after work started on them in earnest.
}~ The /{CISG}/ by UNCITRAL represents the greatest success for the unification of an area of substantive commercial contract law to date, being currently applied by 57 States,~{ As of February 2000. }~ estimated as representing close to seventy percent of world trade and including every major trading nation of the world apart from England and Japan. To labour the point, the USA, most of the EU (along with Canada, Australia and Russia) and China, ahead of its entry to the WTO, already share the same law in relation to the international sale of goods. "ScIL", however, has additional hurdles to overcome. *(a)* In order to enter into force and become applicable, it must go through the lengthy process of ratification and accession by States. *(b)* Implementation is frequently with various reservations. *(c)* Even where widely used, there are usually as many or more States that are exceptions. Success, which is by no means guaranteed, takes time, and for every uniform law that is a success there are several failures.
+
+Institutionally offered lex ("IoL"), comprehensive general contract principles or contract law restatements that create an entire "legal" environment for contracting, has the advantage of being instantly available, becoming effective by choice of the contracting parties at the stroke of a pen. "IoL" is also more easily developed subsequently, in light of experience and need. Amongst the reasons for their use is the reduction of transaction cost in their provision of a set of default rules, applicable transnationally, that satisfy risk management criteria, being (or becoming) known, tried and tested, and of predictable effect.~{ "[P]arties often want to close contracts quickly, rather than hold up the transaction to negotiate solutions for every problem that might arise." Honnold (1992) on p. 13. }~ The most resoundingly successful "IoL" example to date has been the ICC's /{Uniform Customs and Practice for Documentary Credits}/, which is subscribed to as the default rules for the letters of credit offered by the vast majority of banks in the vast majority of countries of the world. Furthermore, uniform principles allow unification on matters that at the present stage of national and regional pluralism could not be achieved at a treaty level. There are, however, things that only "ScIL" can "engineer" (for example, that which relates to priorities and third party obligations).
+
+*{/{PICC}/:}* The arrival of /{PICC}/ in 1994 was particularly timely. Coinciding as it did with the successful attempt at reducing trade barriers represented by the /{World Trade Agreement}/,~{ http://www.jus.uio.no/lm/wta.1994/ }~ and with the start of general Internet use,~{ See Amissah, /{On the Net and the Liberation of Information that wants to be Free}/ in ed. Jens Edvin A. Skoghoy /{Fra institutt til fakultet, Jubileumsskrift i anledning av at IRV ved Universitetet i Tromsø feirer 10 år og er blitt til Det juridiske fakultet}/ (Tromsø, 1996) pp. 59-76 or the same at http://www.jus.uio.no/lm/on.the.net.and.information.22.02.1997.amissah/ }~ which allowed for the exponential growth of electronic commerce and further underscored the transnational tendency of commerce. The arrival of /{PICC}/ was all the more opportune bearing in mind the years it takes to prepare such an instrument. Whilst there have been some objections, the /{PICC}/ (and /{PECL}/) as contract law restatements cater to the needs of the business community that seeks a non-national or transnational law as the basis of its contracts, and provide a focal point for future development in this direction. Where in the past it would have been forced to rely on the ethereal and nebulous /{lex mercatoria}/, the business community is now provided with the opportunity to make use of such a "law" that is readily accessible, has a clear and reasonably well defined content, will become familiar, and can be further developed as required. As such the /{PICC}/ allow for more universal and uniform solutions. Their future success will depend on such factors as: *(a)* Suitability of their contract terms to the needs of the business community. *(b)* Their becoming widely known and understood. *(c)* Their predictability, evidenced by a reasonable degree of consistency in the results of their application. *(d)* Recognition of their potential to reduce transaction costs.
*(e)* Recognition of their being neutral as between different nations' interests (East, West; North, South). In the international sale of goods the /{PICC}/ can be used in conjunction with more specific rules and regulations, including (on the parties' election~{ Also consider present and future possibilities for such use of /{PICC}/ under /{CISG}/ articles 8 and 9. }~) the /{CISG}/, to fill gaps in its provisions.~{ Drobnig, id. p. 228, comments that the /{CISG}/ precludes recourse to general principles of contract law in Article 7. This does not refer to the situation where parties determine that the /{PICC}/ should do so, see /{CISG}/ Article 6. Or that in future the /{PICC}/ will not be of importance under /{CISG}/ Articles 8 and 9. }~ Provisions of the /{CISG}/ would be given precedence over the /{PICC}/ under the accepted principle of /{specialia generalibus derogant}/,~{ "Special principles have precedence over general ones." See Huet, Synthesis (1995) p. 277. }~ the mandatory content of the /{PICC}/ excepted. The /{CISG}/ leaves many situations either not provided for at all, or provided for in less detail than in the /{PICC}/.
+
+Work on /{PICC}/ and /{PECL}/, under the chairmanship of Professors Bonell and Lando respectively, was wisely cross-pollinated (conceptually and through cross-membership of preparatory committees), as common foundations strengthen both sets of principles. A couple of points should be noted. Firstly, despite the maintained desirability of a transnational solution, this does not exclude the desirability of regional solutions, especially if there is choice, and the regional solutions are more comprehensive and easier to keep uniform in application. Secondly, the European Union has powers and influence (within the EU) unparalleled by UNIDROIT that can be utilised in future with regard to the /{PECL}/, if the desirability of a common European contract solution is recognised and agreed upon by EU member States. As a further observation, there is, hypothetically at least, nothing to prevent the future development of an alternative extensive (competing) transnational contract /{lex}/ solution, though the weighty effort already in place represented by /{PICC}/, and the high investment in time and independent skilled legal minds necessary to achieve this in a widely acceptable manner, make such a development unlikely. It may however be the case that for electronic commerce some other particularly suitable rules and principles will in time be developed in a similar vein, along the lines of an "IoL".
+
+1~ Contract /{Lex}/ design. Questions of commonweal
+
+The virtues of freedom of contract are acknowledged in this paper in that they allow the international business community to structure their business relationships to suit their requirements, and as such reflect the needs and working of the market economy. However, it is instructive also to explore the limits of the principles: freedom of contract, /{pacta sunt servanda}/ and /{caveat subscriptor}/. These principles are based on free market arguments that parties best understand their own interests, and that the contract they arrive at will be an optimum compromise between their competing interests, it not being for an outsider to regulate or evaluate what a party of its own free will and volition has gained from electing to contract on those terms. This approach to contract is adversarial, based on the conflicting wills of the parties achieving a meeting of minds. It imposes no duty of good faith and fair dealing or of loyalty (including the disclosure of material facts) upon the contracting parties to one another, who are left to protect their own interests. In international commerce, however, this demand of self-protection can be more costly, and may have a negative and restrictive effect. Also, although contract law is claimed to be neutral in making no judgement as to the contents of a contract, this claim can be misleading.
+
+2~ The neutrality of contract law and information cost
+
+The information problem is a general one that needs to be recognised in its various forms where it arises and addressed where possible.
+
+Adherents to the /{caveat subscriptor}/ model point to the fact that parties have conflicting interests, and should look out for their own interests. However, information presents particular problems, which are exacerbated in international commerce.~{ The more straightforward cases of various types of misrepresentation apart. }~ As Michael Trebilcock put it: "Even the most committed proponents of free markets and freedom of contract recognise that certain information preconditions must be met for a given exchange to possess Pareto superior qualities."~{ Trebilcock, (1993) p. 102, followed by a quotation of Milton Friedman, from /{Capitalism and Freedom}/ (1962) p. 13. }~ Compared with domestic transactions, the contracting parties are less likely to possess information about each other or about what material facts there may be within the other party's knowledge, and will find it more difficult and costly to acquire. With resource inequalities, some parties will be in a much better position to determine and access what they need to know, the more so as the more information one already has, the less it costs to identify and to obtain any additional information that is required.~{ Trebilcock, (1993) p. 102, note quoted passage of Kim Lane Scheppele, /{Legal Secrets: Equality and Efficiency in the Common Law}/ (1988) p. 25. }~ The converse lot of the financially weaker party makes their problem of high information costs (both actual and relative) near insurmountable. Ignorance may even become a rational choice, as the marginal cost of information remains higher than its marginal benefit. "This, in fact is the economic rationale for the failure to fully specify all contingencies in a contract."~{ See for example Nicholas Mercuro and Steven G. Medema, p. 58. }~ The argument is tied to transaction cost and further elucidates a general role played by underlying default rules and principles.
It also extends further to the value of immutable principles that may help mitigate the problem in some circumstances. More general arguments are presented below.
+
+2~ Justifying mandatory loyalty principles
+
+Given the ability to create alternative solutions and even an independent /{lex}/, the question arises: what limits, if any, should be imposed upon freedom of contract? What protective principles are required? Should protective principles be default rules that can be excluded? Should they be mandatory? Should mandatory law exist only at the level of municipal law?
+
+A kernel of mandatory protective principles with regard to loyalty may be justified as beneficial, and even necessary, for "IoL" to be acceptable in international commerce, in that such principles (on balance) reflect the collective needs of the international business community. The present author is of the opinion that the duties of good faith and fair dealing and loyalty (or an acceptable equivalent) should be a necessary part of any attempt at the self-legislation or institutional legislation of any contract regime that is based on "rules and principles" (rather than a national legal order). If absent, a requirement for them should be imposed by mandatory international law. Such protective provisions are to be found within the /{PICC}/ and /{PECL}/.~{ Examples include: the deliberately excluded validity (Article 4); the provision on interest (Article 78); impediment (Article 79); and what many believe to be the inadequate coverage of battle of forms (Article 19). }~ As regards /{PICC}/: *(a)* The loyalty (and other protective) principles help bring about confidence and foster relations between parties. They provide an assurance in the international arena where parties are less likely to know each other and may have more difficulty in finding out about each other. *(b)* They better reflect the focus of the international business community on a business relationship from which both sides seek to gain. *(c)* They result in wider acceptability of the principles within both governments and the business community in the pluralistic international community. These protective principles may be regarded as enabling the /{PICC}/ to better represent the needs of the commonweal. *(d)* Good faith and fair dealing~{ The commented /{PECL}/ explain "'Good faith' means honesty and fairness in mind, which are subjective concepts... 'fair dealing' means observance of fairness in fact which is an objective test".
}~ are fundamental underlying principles of international commercial relations. *(e)* Reliance only on the varied mandatory law protections of various States does not engender uniformity, which is also desirable with regard to that which can be counted upon as immutable. (Not that this is avoidable, given that mandatory State law remains overriding.) More generally, freedom of contract benefits from these protective principles, which need immutable protection from contractual freedom to effectively serve their function. In seeking a transnational or non-national regime to govern contractual relations, one might suggest this to be the minimum price of freedom of contract that should be insisted upon by mandatory international law, as the limitation which hinders the misuse by one party of unlimited contractual freedom. They appear to be an essential basis for the acceptability of the autonomous contract (non-national contract, based on agreed rules and principles/ "IoL"). As immutable principles they (hopefully, and this is to be encouraged) become the default standard for the conduct of international business and as such may be looked upon as "common property." Unless immutable, they suffer a fate somewhat analogous to that of "the tragedy of the commons."~{ Special problem regarding common/shared resources discussed by Garrett Hardin in Science (1968) 162 pp. 1243-1248. For short discussion and summary see Trebilcock (1993) pp. 13-15. }~ It should be recognised that argument over the loyalty principles should be one of degree: the concept must not be compromised, and needs to be protected (even if this comes at the price of a degree of uncertainty), especially against particularly strong parties, who are most likely to argue against its necessity.
+
+1~ Problems beyond uniform texts
+
+2~ In support of four objectives
+
+In the formulation of many international legal texts a pragmatic approach was taken. Formulating legislators from different States developed solutions based on suitable responses to factual example circumstances. This was done, successfully, with a view to avoiding arguments over alternative legal semantics and methodologies. However, having arrived at a common text, what then? Given that differences of interpretation can arise and become entrenched, by what means is it possible to foster a sustainable drive towards the uniform application of shared texts? Four principles appear desirable and should, insofar as possible, be pursued together: *(i)* the promotion of certainty and predictability; *(ii)* the promotion of uniformity of application; *(iii)* the protection of democratic ideals and the ensuring of jurisprudential deliberation; and *(iv)* the retention of efficiency.
+
+2~ Improving the predictability, certainty and uniform application of international and transnational law
+
+The key to the (efficient) achievement of greater certainty and predictability in an international and/or transnational commercial law regime is through the uniform application of shared texts that make up this regime.
+
+Obviously a distinction is to be made between transnational predictability in application, that is "uniform application", and predictability at a domestic level. Where the "uniform law" is applied by a municipal court of State "A" that looks first to its domestic writings, there may be a clear, predictable manner of application, even if not in the spirit of the "Convention". Another State "B" may apply the uniform law in a different way that is equally predictable, being perfectly consistent internally. This, however, defeats much of the purpose of the uniform law.
+
+A first step is for municipal courts to accept the /{UN Convention on the Law of Treaties 1969}/ (in force 1980) as a codification of existing public international law with regard to the interpretation of treaties.~{ This is the position in English law; see Lord Diplock in Fothergill v Monarch Airlines [1981] A.C. 251, 282, or see http://www.jus.uio.no/lm/england.fothergill.v.monarch.airlines.hl.1980/2_diplock.html See also Mann (London, 1983) at p. 379. The relevant articles on interpretation are Articles 31 and 32. }~ A potentially fundamental step towards the achievement of uniform application is through the conscientious following of the admonitions of the interpretation clauses of modern conventions, rules and principles~{ Examples: The /{CISG}/, Article 7; The /{PICC}/, Article 1.6; /{PECL}/ Article 1.106; /{UN Convention on the Carriage of Goods by Sea (The Hamburg Rules) 1978}/, Article 3; /{UN Convention on the Limitation Period in the International Sale of Goods 1974}/ and /{1978}/, Article 7; /{UN Model Law on Electronic Commerce 1996}/, Article 3; /{UNIDROIT Convention on International Factoring 1988}/, Article 4; /{UNIDROIT Convention on International Financial Leasing 1988}/, Article 6; also /{EC Convention on the Law Applicable to Contractual Obligations 1980}/, Article 18. }~ to take into account their international character and the need to promote uniformity in their application,~{ For an online collection of articles see the Pace /{CISG}/ Database http://www.cisg.law.pace.edu/cisg/text/e-text-07.html and amongst the many other articles do not miss Michael Van Alstine /{Dynamic Treaty Interpretation}/ 146 /{University of Pennsylvania Law Review}/ (1998) 687-793. }~ together with all this implies.~{ Such as the /{CISG}/ provision on interpretation - Article 7. 
}~ However, the problems of uniform application, being embedded in differences of legal methodology, go beyond the agreement of a common text, and superficial glances at the works of other legal municipalities. These include questions related to sources of authority and technique applied in developing valid legal argument. Problems with sources include differences in authority and weight given to: *(a)* legislative history; *(b)* rulings domestic and international; *(c)* official and other commentaries; *(d)* scholarly writings. There should be an ongoing discussion of legal methodology to determine the methods best suited to addressing the problem of achieving greater certainty, predictability and uniformity in the application of shared international legal texts. With regard to information sharing, again the technology associated with the Net offers potential solutions.
+
+2~ The Net and information sharing through transnational databases
+
+The Net has been a godsend, permitting the collection and dissemination of information on international law. With the best intentions to live up to the admonitions "to take into account their international character and the need to promote uniformity in their application" of "ScIL" and "IoL", a difficulty has been in knowing what has been written and decided elsewhere. In discussing solutions, Professor Honnold in /{"Uniform Words and Uniform Application"}/~{ Based on the /{CISG}/, and inputs from several professors from different legal jurisdictions, on the problems of achieving the uniform application of the text across different legal municipalities. J. Honnold, /{Uniform Words and Uniform Application: The 1980 Sales Convention and International Juridical Practice}/. /{Einheitliches Kaufrecht und nationales Obligationenrecht. Referate Diskussionen der Fachtagung}/. am 16/17-2-1987. Hrsg. von P. Schlechtriem. Baden-Baden, Nomos, 1987. p. 115-147, at p. 127-128. }~ suggests the following: "General Access to Case-Law and Bibliographic Material: The development of a homogenous body of law under the Convention depends on channels for the collection and sharing of judicial decisions and bibliographic material so that experience in each country can be evaluated and followed or rejected in other jurisdictions." Honnold then goes on to discuss "the need for an international clearing-house to collect and disseminate experience on the Convention", for which, he writes, there is general agreement. He also discusses information-gathering methods through the use of national reporters, and poses the question "Will these channels be adequate? ..."
+
+The Net, offering inexpensive ways to build databases and to provide global access to information, provides an opportunity to address these problems that was not previously available. The Net extends the reach of the admonitions of the interpretation clauses, providing the medium whereby, if a decision or scholarly writing exists on a particular article or provision of a Convention anywhere in the world, it will be readily available. Whether or not a national court or arbitration tribunal chooses to follow such examples, it should be aware of them. Whatever a national court decides will also become internationally known, and will add to the body of experience on the Convention.~{ Nor is it particularly difficult to set into motion the placement of such information on the Net. With each interested participant publishing for their own interest, the Net could provide the key resources to be utilised in the harmonisation and reaching of common understandings of solutions and uniform application of legal texts. Works from all countries would be available. }~
+
+Such a library would be of interest to the institution promulgating the text, governments, practitioners and researchers alike. It could place at your fingertips: *(a)* Convention texts. *(b)* Implementation details of contracting States. *(c)* The legislative history. *(d)* Decisions generated by the convention around the world (court and arbitral where possible). *(e)* The official and other commentaries. *(f)* Scholarly writings on the Convention. *(g)* Bibliographies of scholarly writings. *(h)* Monographs and textbooks. *(i)* Student study material collections. *(j)* Information on promotional activities, lectures - moots etc. *(k)* Discussion groups/ mailing groups and other more interactive features.
+
+With respect to the /{CISG}/ such databases are already being maintained.~{ Primary amongst them is the Pace University, Institute of International Commercial Law, /{CISG}/ Database http://www.cisg.law.pace.edu/ which provides secondary support for the /{CISG}/, including a free on-line database of the legislative history, academic writings, and case-law on the /{CISG}/, and additional material with regard to the /{PICC}/ and /{PECL}/ insofar as they may supplement the /{CISG}/. Furthermore, the Pace /{CISG}/ Project networks with the several other existing Net-based "autonomous" /{CISG}/ projects. UNCITRAL, under Secretary Gerold Herrmann, has its own database through which it distributes its case-law materials collected from national reporters (CLOUT). }~
+
+The database, by ensuring the availability of international materials for use in conjunction with legal practice, helps to support the four principles named earlier. That of efficiency is enhanced especially if there is a single source that can be searched for the information required.
+
+The major obstacle that remains to confidence in this as the great and free panacea it should be is the cost of translating texts.
+
+2~ Judicial minimalism promotes democratic jurisprudential deliberation
+
+How to protect liberal democratic ideals and ensure international jurisprudential deliberation? Looking at judicial method, where court decisions are looked to for guidance, liberal democratic ideals and international jurisprudential deliberation are fostered by a judicial minimalist approach.
+
+For those of us with a common law background, and others who pay special attention to cases, as you are invited to by interpretation clauses, there is scope for discussion as to the most appropriate approach to be taken with regard to judicial decisions. US judge Cass Sunstein's suggestion of judicial minimalism,~{ Cass R. Sunstein, /{One Case at a Time - Judicial Minimalism on the Supreme Court}/ (1999) }~ which despite being developed in a different context~{ His analysis is developed based largely on "hard" constitutional cases of the U.S. }~ is attractive in that it is suited to a liberal democracy in ensuring democratic jurisprudential deliberation. It maintains discussion and debate, and allows for adjustment as appropriate and the gradual development of a common understanding of issues. Much as one may admire farsighted and far-reaching decisions and expositions, there is less chance with the minimalist approach of the (dogmatic) imposition of particular values, whilst information sharing offers the possibility of the percolation of good ideas.~{ D. Stauffer, /{Introduction to Percolation Theory}/ (London, 1985). Percolation represents the sudden dramatic expansion of a common idea or ideas through the reaching of a critical level/mass in the rapid recognition of their power and the making of further interconnections. An epidemic-like infection of ideas. Not quite the way we are used to the progression of ideas within a conservative tradition. }~ Much as we admire the integrity of Dworkin's Hercules,~{ Ronald Dworkin, /{Law's Empire}/ (Harvard, 1986); /{Hard Cases}/ in /{Harvard Law Review}/ (1975). }~ that he can consistently deliver single solutions suitable across such disparate socio-economic cultures is questionable. In examining the situation his own "integrity" would likely give him pause and prevent him from dictating that he can.~{ Hercules was created for U.S. Federal Cases and the community represented by the U.S. 
}~ This position is maintained as a general principle across international commercial law, despite private (as opposed to public) international commercial law not being an area of particularly "hard" cases of principle, and despite private international commercial law being an area in which over a long history it has been demonstrated that lawyers are able to talk a common language to make themselves and their concepts (which are not dissimilar) understood by each other.~{ In 1966, a time when there were greater differences in the legal systems of States comprising the world economy, Clive Schmitthoff was able to comment that:<br>"22. The similarity of the law of international trade transcends the division of the world between countries of free enterprise and countries of centrally planned economy, and between the legal families of the civil law of Roman inspiration and the common law of English tradition. As a Polish scholar observed, "the law of external trade of the countries of planned economy does not differ in its fundamental principles from the law of external trade of other countries, such as e.g., Austria or Switzerland. Consequently, international trade law specialists of all countries have found without difficulty that they speak a 'common language'<br>23. 
The reason for this universal similarity of the law of international trade is that this branch of law is based on three fundamental propositions: first, that the parties are free, subject to limitations imposed by the national laws, to contract on whatever terms they are able to agree (principle of the autonomy of the parties' will); secondly, that once the parties have entered into a contract, that contract must be faithfully fulfilled (/{pacta sunt servanda}/) and only in very exceptional circumstances does the law excuse a party from performing his obligations, viz., if force majeure or frustration can be established; and, thirdly that arbitration is widely used in international trade for the settlement of disputes, and the awards of arbitration tribunals command far-reaching international recognition and are often capable of enforcement abroad."<br>/{Report of the Secretary-General of the United Nations, Progressive Development of the Law of International Trade}/ (1966). Report prepared for the UN by C. Schmitthoff. }~
+
+2~ Non-binding interpretative councils and their co-ordinating guides can provide a focal point for the convergence of ideas - certainty, predictability, and efficiency
+
+A respected central body can provide a guiding influence with respect to: *(a)* the uniform application of texts; *(b)* information management control. Given the growing mass of writing on common legal texts - academic and by way of decisions - we are faced with an information management problem.~{ Future if not current. }~
+
+Supra-national interpretative councils have been called for previously~{ /{UNCITRAL Secretariat}/ (1992) p. 253. Proposed by David (France) at the second UNCITRAL Congress and on a later occasion by Farnsworth (USA). To date the political will backed by the financing for such an organ has not been forthcoming. In 1992 the UNCITRAL Secretariat concluded that "probably the time has not yet come". Suggested also by Louis Sono in /{Uniform laws require uniform interpretation: proposals for an international tribunal to interpret uniform legal texts}/ (1992) 25th UNCITRAL Congress, pp. 50-54. Drobnig, /{Observations in Uniform Law in Practice}/ at p. 306. }~ and have for various reasons been regarded as impracticable to implement, including problems associated with getting States to formally agree upon such a body with binding authority.
+
+However, it is not necessary to go this route. In relation to "IoL" in such forms as the /{PICC}/ and /{PECL}/ it is possible for the promulgators themselves~{ UNIDROIT and the EU }~ to update and clarify the accompanying commentary of the rules and principles, and to extend their work, through having councils with the necessary delegated powers. In relation to the /{CISG}/ it is possible to do something similar of a non-binding nature, through the production of an updated commentary by an interpretive council (that could try to play the role of Hercules).~{ For references on interpretation of the /{CISG}/ by a supranational committee of experts or council of "wise men" see Bonell, /{Proposal for the Establishment of a Permanent Editorial Board for the Vienna Sales Convention}/ in /{International Uniform Law in Practice/ Le droit uniforme international dans la practique [Acts and Proceedings of the 3rd Congress on Private Law held by the International Institute for the Unification of Private Law}/ (Rome, 1987)], (New York, 1988) pp. 241-244 }~ With respect, and despite some expressed reservations, it is not true that it would have no more authority than a single author writing on the subject. A suitable non-binding interpretative council would provide a focal point for the convergence of ideas. Given the principle of ensuring democratic jurisprudential deliberation, that such a council would be advisory only (except perhaps on the contracting parties' election) would be one of its more attractive features, as it would ensure continued debate and development.
+
+2~ Capacity Building
+
+_1 "... one should create awareness about the fact that an international contract or transaction is not naturally rooted in one particular domestic law, and that its international specifics are best catered for in a uniform law."~{ UNCITRAL Secretariat (1992) p. 255. }~
+
+_{/{Capacity building}/}_ - raising awareness, providing education, creating a new generation of lawyers versed in a relatively new paradigm. Capacity building in international and transnational law is something that relevant institutions (including arbitration institutions), the business community, and far-sighted States should be interested in promoting. Finding means to transcend national boundaries is also to continue in the tradition of seeking means to break down barriers to legal communication and understanding. However, while the business community seeks and requires greater uniformity in its business relations, there has paradoxically, at a national level, been a trend towards a nationalisation of contract law, and a regionalisation of business practice.~{ Erich Schanze, /{New Directions in Business Research}/ in Børge Dahl & Ruth Nielsen (ed.), /{New Directions in Contract Research}/ (Copenhagen, 1996) p. 62. }~
+
+As an example, the Pace University Institute of International Commercial Law plays a prominent role with regard to capacity building in relation to the /{CISG}/ and /{PICC}/. Apart from the previously mentioned /{CISG Database}/, Pace University organises a large annual moot on the /{CISG}/,~{ See http://www.cisg.law.pace.edu/vis.html }~ this year involving students of 79 universities from 28 countries, and respected arbitrators from the world over. Within the moot, the finding of solutions based on the /{PICC}/, where the /{CISG}/ is silent, is encouraged. Pace University also organises an essay competition~{ See http://www.cisg.law.pace.edu/cisg/text/essay.html }~ on the /{CISG}/ and/or the /{PICC}/, which next year is to be expanded to include the /{PECL}/ as a further option.
+
+1~ Marketing of transnational solutions
+
+Certain aspects of the Net/web may already be passé, but did you recognise it for what it was, or might become, when it arrived?
+
+As uniform law and transnational solutions are in competition with municipal approaches, to be successful a certain amount of marketing is necessary and may be effective. The approach should involve ensuring that the concept of what they seek to achieve is firmly implanted in the business, legal and academic communities, and should engage the business community and arbitration institutions in capacity building and in developing a new generation of lawyers. Feedback from the business community and arbitrators will also prove invaluable. Whilst it is likely that the business community will immediately be able to recognise their potential advantages, it is less certain that they will find the support of the legal community. The normal reasons would be similar to those usually cited as the primary constraints on its development: "conservatism, routine, prejudice and inertia" (René David). These are problems associated with gaining the initial foothold of acceptability, also associated with the lower part of an exponential growth curve. In addition, the legal community may face tensions arising for various reasons, including the possibility of an increase in world-wide competition.
+
+There are old, well developed legal traditions with developed infrastructures and roots well established in several countries, that are dependable and known. The question arises: why experiment with alternative, not extensively tested regimes? The required sophistication is developed in the centres providing legal services, and it may be argued that there is not a pressing need for unification or for transnational solutions, as the traditional way of contracting provides satisfactorily for the requirements of global commerce. The services required will continue to be easily and readily available from existing centres of skill. English law, to take an example, is for various reasons (including perhaps language, familiarity of use, reputation and widespread Commonwealth~{ http://www.thecommonwealth.org/ }~ relations) the premier choice for the law governing international commercial transactions, and is likely to be for the foreseeable future. Utilising the Commonwealth as an example, what the "transnational" law (e.g. /{CISG}/) experience illustrates, however, is that for States there may be greater advantage to be gained from participation in a horizontally shared area of commercial law than from retaining a traditional vertically integrated commercial law system, based largely, for example, on the English legal system.
+
+Borrowing a term from the information technology sector, it is essential to guard against FUD (fear, uncertainty and doubt) with regard to the viability of new and/or competing transnational solutions, which may be spread by their detractors, and to address promptly, in the manner required by the free market, any real problems that are discerned.
+
+1~ Tools in future development
+
+An attempt should be made by the legal profession to be more contemporary and to keep up to date with developments in technology and the sciences, and to adopt effective tools where suitable to achieve their goals. Technology one way or another is likely to encroach further upon law and the way we design it.
+
+Science works across cultures and is aspired to by most nations as being responsible for the phenomenal success of technology (both are similarly associated with globalisation). Science is extending its scope to (more confidently) tackle complex systems. It would not hurt to be more familiar with relevant scientific concepts and terminology. Certainly lawyers across the globe, myself included, would also benefit much in their conceptual reasoning from an early dose of the philosophy of science,~{ An excellent approachable introduction is provided by A.F. Chalmers /{What is this thing called Science?}/ (1978, Third Edition 1999). }~ what better than Karl Popper on scientific discovery and the role of "falsification" and value of predictive probity.~{ Karl R. Popper /{The Logic of Scientific Discovery}/ (1959). }~ And certainly Thomas Kuhn on scientific advancement and "paradigm shifts"~{ Thomas S. Kuhn /{The Structure of Scientific Revolutions}/ (1962, 3rd Edition 1976). }~ has its place. Having mentioned Karl Popper, it would not be unwise to go further (outside the realms of philosophy of science) to study his defence of democracy in both volumes of /{Open Society and Its Enemies}/.~{ Karl R. Popper /{The Open Society and Its Enemies: Volume 1, Plato}/ (1945) and /{The Open Society and Its Enemies: Volume 2, Hegel & Marx}/. (1945) }~
+
+Less ambitiously, there are several tools not traditionally in the lawyer's set that may assist in transnational infrastructure modelling. These include, to suggest a few by way of example: flow charts, fuzzy thinking, "intelligent" electronic agents and Net collaborations.
+
+In the early 1990s I was introduced to a quantity surveyor and engineer who had reduced the /{FIDIC Red Book}/~{ FIDIC is the International Federation of Consulting Engineers http://www.fidic.com/ }~ to over a hundred pages of intricate flow charts (decision trees), printed horizontally on roughly A4-sized sheets. He was employed by a Norwegian construction firm, which knew from past experience that, using his charts, he could consistently arrive in a day at answers to their questions that law firms took weeks to produce. Flow charts can be used to show interrelationships and dependencies, in order to navigate the implications of a set of rules more quickly. They may also be used more pro-actively (and /{ex ante}/ rather than /{ex post}/) in formulating texts, to avoid unnecessary complexity and to arrive at more practical, efficient and elegant solutions.
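The chart-driven navigation of a set of rules described above can be sketched in a few lines of code: a rule reduced to a decision tree, walked by answering its questions in order. The questions, time limits and outcomes below are invented for illustration; they are not provisions of the FIDIC Red Book.

```python
# A minimal decision-tree sketch of rule navigation (illustrative only;
# the questions, thresholds and outcomes are hypothetical, not FIDIC text).

def make_node(question, yes, no):
    """A node asks a yes/no question; leaves are plain outcome strings."""
    return {"question": question, "yes": yes, "no": no}

tree = make_node(
    "Was notice of the claim given within 28 days?",
    make_node(
        "Did the engineer issue an instruction in writing?",
        "Claim may proceed to determination.",
        "Request written confirmation, then proceed.",
    ),
    "Claim is time-barred under this (hypothetical) clause.",
)

def navigate(node, answers):
    """Walk the tree using a dict mapping each question to True/False."""
    while isinstance(node, dict):
        node = node["yes"] if answers[node["question"]] else node["no"]
    return node

result = navigate(tree, {
    "Was notice of the claim given within 28 days?": True,
    "Did the engineer issue an instruction in writing?": False,
})
print(result)  # Request written confirmation, then proceed.
```

Chaining such nodes, a hundred pages of charts become a structure that yields an answer as fast as the questions can be answered, which is precisely the advantage the engineer's charts held over weeks of conventional legal research.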
+
+Explore such concepts as "fuzzy thinking"~{ Concept originally developed by Lotfi Zadeh /{Fuzzy Sets}/ Information Control 8 (1965) pp. 338-353. For introductions see Daniel McNeill and Paul Freiberger /{Fuzzy Logic: The Revolutionary Computer Technology that is Changing our World}/ (1993); Bart Kosko /{Fuzzy Thinking}/ (1993); Earl Cox /{The Fuzzy Systems Handbook}/ (New York, 2nd ed. 1999). Perhaps to the uninitiated an unfortunate choice of name, as fuzzy logic and fuzzy set theory are more precise than classical logic and set theory, which comprise a subset of that which is fuzzy (representing those instances where membership is 0% or 100%). The statement is not entirely without controversy, in suggesting the possibility that classical thinking may be subsumed within the realms of an unfamiliar conceptual paradigm that may take hold of future thinking. In the engineering field much pioneer work on fuzzy rule-based systems was done at Queen Mary College by Ebrahim Mamdani in the early and mid-1970s. Time will tell. }~ including fuzzy logic, fuzzy set theory, and fuzzy systems modelling, of which classical logic and set theory are subsets. Both by way of analogy and as a tool, fuzzy concepts are better at coping with complexity and map more closely to judicial thinking and argument in the application of principles and rules. Fuzzy theory provides a method for analysing and modelling principle- and rule-based systems, even where conflicting principles may apply, permitting /{inter alia}/ working with competing principles and the contextual assignment of precision to terms such as "reasonableness". Fuzzy concepts should be explored in expert systems, and in future law. Problems of scaling associated with multiple decision trees do not prevent useful applications and structured solutions. The analysis assists in discerning what it is that lawyers are involved with.
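The contextual assignment of precision to a term such as "reasonableness" can be illustrated with a fuzzy membership function: instead of a delay being simply reasonable or not, it is reasonable to a degree between 0 and 1. The breakpoints (14 and 45 days) are invented assumptions for the sketch, not figures drawn from any legal text.

```python
# Fuzzy membership sketch: to what degree is a delivery delay (in days)
# within a "reasonable time"? The breakpoints are illustrative assumptions.

def reasonable_time(days):
    """Trapezoidal membership: fully reasonable up to 14 days,
    declining linearly to not reasonable at all by 45 days."""
    if days <= 14:
        return 1.0
    if days >= 45:
        return 0.0
    return (45 - days) / (45 - 14)

for d in (7, 30, 60):
    print(d, round(reasonable_time(d), 2))  # 7 -> 1.0, 30 -> 0.48, 60 -> 0.0
```

Competing principles can then be combined by standard fuzzy operators (e.g. taking the minimum of two memberships for "and"), which is one way of modelling the weighing exercise a tribunal performs when principles pull in different directions.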
+
+"Intelligent" electronic agents can be expected to gather information on behalf of both the business community and lawyers. In future, electronic agents are likely to be employed to identify and bring to the attention of their principals "invitations to treat" or offers worthy of further investigation. In some cases they will be developed and relied upon as electronic legal agents, operating under a programmed mandate and vested with the authority to enter certain contracts on behalf of their principals. Such a mandate would include the choice of law upon which to contract, and the scenario could be assisted by transnational contract solutions (and catered for in the design of "future law").
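A programmed mandate of the kind described can be sketched as a simple rule set: the agent accepts offers within its authority, refers borderline offers to its principal, and rejects the rest. All field names, thresholds and the choice-of-law whitelist below are hypothetical illustrations.

```python
# Sketch of an electronic agent operating under a programmed mandate.
# Field names, thresholds and the choice-of-law whitelist are hypothetical.

MANDATE = {
    "max_unit_price": 12.50,              # no authority to contract above this
    "acceptable_law": {"CISG", "PICC"},   # choice-of-law whitelist
    "min_quantity": 100,                  # smaller lots referred to principal
}

def evaluate_offer(offer):
    """Return 'accept', 'refer' or 'reject' under the mandate."""
    if offer["governing_law"] not in MANDATE["acceptable_law"]:
        return "reject"                   # outside the agent's authority
    if offer["unit_price"] > MANDATE["max_unit_price"]:
        return "reject"
    if offer["quantity"] < MANDATE["min_quantity"]:
        return "refer"                    # bring to the principal's attention
    return "accept"

offer = {"unit_price": 11.00, "quantity": 250, "governing_law": "CISG"}
print(evaluate_offer(offer))  # accept
```

The point of interest for transnational solutions is the whitelist line: an agent's authority is far easier to program against a small set of agreed rule systems than against the mandatory-law particulars of every potential counterparty's State.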
+
+Another area in which technology is helping to solve legal problems relates to various types of global register and transaction centres. Property registers are an obvious example, including registers of patents and moveable property. Bolero provides an example of how electronic documents can be centrally brokered on behalf of trading parties.
+
+Primary law should be available on the Net free of charge, and this applies also to "IoL" and the static material required for their interpretation. This should be the policy adopted by all institutions involved in contributing to the transnational legal infrastructure. Where possible, larger databases should also be developed and shared. The Net has reduced the cost of dissemination of material to a level far lower than before. Universities now can and should play a more active role. Suitable funding arrangements should be explored that do not result in proprietary systems or the forwarding of specific lobby interests. To promote uniform standards, institutions should also strive to have their hard-copy materials available at a reasonable price. Many appear to be unacceptably expensive given the need for their promotion and for capacity building, amongst students, and across diverse States.
+
+Follow the open standards and community standards debate in relation to the development of technology standards and technology infrastructure tools - including operating systems,~{ See for example /{Open Sources : Voices from the Open Source Revolution - The Open Source Story}/ http://www.oreilly.com/catalog/opensources/book/toc.html }~ to discover what if anything it might suggest for the future development of law standards.
+
+1~ As an aside, a word of caution
+
+I end with an arguably gratuitous observation, by way of a reminder and general warning. Gratuitous in the context of this paper because the areas focused upon~{ Sale of goods (/{CISG}/), contract rules and principles (/{PICC}/), related Arbitration, and the promotion of certain egalitarian ideals. }~ were somewhat deliberately selected to fall outside the more contentious and "politically" problematic areas related to globalisation, economics, technology, law and politics.~{ It is not as evident in the area of private international commercial contract law the chosen focus for this paper, but appears repeatedly in relation to other areas and issues arising out of the economics, technology, law nexus. }~ Gratuitous also because there will be no attempt to concretise or exemplify the possibility suggested.
+
+Fortunately, we are not (necessarily) talking about a zero sum game; however, it is necessary to be able to distinguish and recognise that which may harm. International commerce/trade is competitive, and by its nature not benign, even if it results in an overall improvement in the economic lot of the peoples of our planet. "Neutral tests" such as Kaldor-Hicks efficiency do not require that your interests are benefited one iota, only that those whose interests are improved gain enough that they could in principle compensate you for any loss - no compensation need actually be made. If the measure adopted is overall benefit, it is even more possible that an overall gain may result where your interests are adversely affected. The more so if you have little, and those that gain, gain much. Furthermore, such "tests" are based on assumptions which at best are approximations of reality (e.g. that of zero transaction costs, where in fact transaction costs are not zero, and are frequently proportionately higher for the economically weak). At worst they may be manipulated /{ex ante}/ with knowledge of their implications (e.g. engineering to ensure actual or relative~{ Low fixed costs have a "regressive" effect }~ asymmetrical transaction costs). It is important to be careful in a wide range of circumstances related to various aspects of the modelling of the infrastructure for international commerce that have an impact on the allocation of rights and obligations, and especially the allocation of resources, including various types of intellectual property rights. Ask: what is the objective and justification for the protection? How well is the objective met? Are there other consequential effects? Are there other objectives that are worthy of protection? Could the stated objective(s) be achieved in a better way?
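The Kaldor-Hicks point can be made concrete with toy numbers (the figures are invented): a change passes the test whenever aggregate gains exceed aggregate losses, so it can pass while leaving one party strictly worse off.

```python
# Toy illustration of the Kaldor-Hicks test (all figures invented).
# A change passes if aggregate gains exceed aggregate losses, i.e. the
# winners *could* compensate the losers; no compensation need occur.

before = {"A": 100, "B": 100}
after  = {"A": 150, "B":  90}   # A gains 50, B loses 10

gains  = sum(max(after[p] - before[p], 0) for p in before)
losses = sum(max(before[p] - after[p], 0) for p in before)

passes = gains > losses
print(passes)                    # True: aggregate welfare rises by 40
print(after["B"] < before["B"])  # True: yet B is left worse off
```

The arithmetic shows why such a "neutral test" is not neutral in effect: the change is efficient in aggregate even though B, who started with no more than A, ends up with less than before.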
+
+Within a system are those who benefit from the way it has been, and who may oppose change as resulting in loss to them or uncertainty of their continued privilege. For a stable system that favours such a Select Set to arise initially does not require the conscious manipulation of conditions by the Select Set; rather, it requires that from the system (set) in place the Select Set emerges as beneficiary. Subsequently the Select Set, having become established as favoured and empowered by its status as beneficiary, will seek to do what it can to influence circumstances to ensure its continued beneficial status. That is, to keep the system operating to its advantage (or tune it to work even better towards this end), usually with little regard to the conditions resulting for other members of the system. Often this will be a question of degree, and the original purpose, or an alternative "neutral" argument, is likely to be used to justify the arrangement. The objective from the perspective of the Select Set is fixed; the means at its disposal may vary. Complexity is not required for such situations to arise, but once they have, subsequent plays by the Select Set tend towards complexity. Furthermore, moves in the interest of the Select Set are more easily obscured/disguised in a complex system. Limited access to information and knowledge is a devastating handicap: without information and knowledge, change cannot be contemplated, let alone negotiated. Frequently, having information and knowledge is not enough. The protection of self-interest is an endemic part of our system, with the system repeatedly being co-opted to the purposes of those that are able to manipulate it. Membership over time is not static: for example, yesterday's "copycat nations" are today's innovators, keen to protect their intellectual property. This also illustrates the point that what it may take to set success in motion may not be the same as that which is preferred to sustain it. Whether or not these observations appear self-evident and/or abstract and out of place with regard to this paper, they have far reaching implications repeatedly observable within the law, technology, and commerce (politics) nexus. Even if they do not arise much in the context of the selected material for this paper, their mention is justified by way of warning. Suitable examples would easily illustrate how politics arises inescapably as an emergent property from the nexus of commerce, technology, and law.~{ In such circumstances either economics or law on their own would be sufficient to result in politics arising as an emergent property. }~
+
+
+%% SiSU markup sample Notes:
+% SiSU http://www.jus.uio.no/sisu
+% SiSU markup for 0.16 and later:
+% 0.20.4 header 0~links
+% 0.22 may drop image dimensions (rmagick)
+% 0.23 utf-8 ß
+% 0.38 or later, may use alternative notation for headers, e.g. @title: (instead of 0~title)
+% 0.38 document structure alternative markup, experimental (rad) A,B,C,1,2,3 maps to 1,2,3,4,5,6
+% 0.42 * type endnotes, used e.g. in relation to author
+% Output: http://www.jus.uio.no/sisu/autonomy_markup1/sisu_manifest.html
+% SiSU 0.38 experimental (alternative structure) markup used for this document
+% note embedded endnotes, compare with sample autonomy_markup2.sst
diff --git a/data/sisu_markup_samples/non-free/autonomy_markup2.sst b/data/sisu_markup_samples/non-free/autonomy_markup2.sst
new file mode 100644
index 0000000..54f2fde
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/autonomy_markup2.sst
@@ -0,0 +1,355 @@
+% SiSU 0.38
+% alternative markup for document structure and headers
+
+@title: Revisiting the Autonomous Contract
+
+@subtitle: Transnational contracting, trends and supportive structures
+
+@creator: Ralph Amissah*
+
+@type: article
+
+@subject: international contracts, international commercial arbitration, private international law
+
+@date: 2000-08-27
+
+@level: num_top=1
+
+@links: {Syntax}http://www.jus.uio.no/sisu/sample/syntax/autonomy_markup2.sst.html
+{The Autonomous Contract}http://www.jus.uio.no/lm/the.autonomous.contract.07.10.1997.amissah/toc.html
+{Contract Principles}http://www.jus.uio.no/lm/private.international.commercial.law/contract.principles.html
+{UNIDROIT Principles}http://www.jus.uio.no/lm/unidroit.international.commercial.contracts.principles.1994.commented/toc.html
+{Sales}http://www.jus.uio.no/lm/private.international.commercial.law/sale.of.goods.html
+{CISG}http://www.jus.uio.no/lm/un.contracts.international.sale.of.goods.convention.1980/doc.html
+{Arbitration}http://www.jus.uio.no/lm/arbitration/toc.html
+{Electronic Commerce}http://www.jus.uio.no/lm/electronic.commerce/toc.html
+
+:A~ Revisiting the Autonomous Contract <sub>(Draft 0.90 - 2000.08.27 ;)</sub>
+
+:B~ Transnational contract "law", trends and supportive structures
+
+:C~ \copyright Ralph Amissah*
+
+1~ Reinforcing trends: borderless technologies, global economy, transnational legal solutions?
+
+Revisiting the Autonomous Contract~^
+
+^~ /{The Autonomous Contract: Reflecting the borderless electronic-commercial environment in contracting}/ was published in /{Elektronisk handel - rettslige aspekter, Nordisk årsbok i rettsinformatikk 1997}/ (Electronic Commerce - Legal Aspects. The Nordic Yearbook for Legal Informatics 1997), edited by Randi Punsvik, and is also available at http://www.jus.uio.no/the.autonomous.contract.07.10.1997.amissah/doc.html
+
+Globalisation is to be observed as a trend intrinsic to the world economy.~^ Rudimentary economics explains this runaway process as being driven by competition within the business community to achieve efficient production, and to reach and extend available markets.~^ Technological advancement, particularly in transport and communications, has historically played a fundamental role in the furtherance of international commerce, with the Net, technology's latest spatio-temporally transforming offering and linchpin of the "new economy", extending exponentially the global reach of the business community. The Net covers much of the essence of international commerce, providing an instantaneous, low-cost, convergent, global and borderless information centre, marketplace, and channel for communications, payments and the delivery of services and intellectual property. The sale of goods, however, involves the separate element of physical delivery. The Net has raised a plethora of questions and has frequently offered solutions. The increased transparency of borders arising from the Net's ubiquitous nature results in an increased demand for transparency of operation. As economic activities become increasingly global, there is a strong incentive, in order to reduce transaction costs, for the "law" that provides for them to do so in a similar dimension. The appeal of transnational legal solutions lies in the potential reduction in complexity, more widely dispersed expertise, and resulting increased transaction efficiency. The Net reflexively offers possibilities for the development of transnational legal solutions, having in a similar vein transformed the possibilities for the promulgation of texts, the sharing of ideas and collaborative ventures. There are, however, likely to be tensions between the legal community's protection of entrenched practices against that which is new (both in law and technology) and the business community's goal of reducing transaction costs.
+
+^~ As Maria Cattaui Livanos suggests in /{The global economy - an opportunity to be seized}/ in /{Business World}/ the Electronic magazine of the International Chamber of Commerce (Paris, July 1997) at http://www.iccwbo.org/html/globalec.htm <br> "Globalization is unstoppable. Even though it may be only in its early stages, it is already intrinsic to the world economy. We have to live with it, recognize its advantages and learn to manage it.<br>That imperative applies to governments, who would be unwise to attempt to stem the tide for reasons of political expediency. It also goes for companies of all sizes, who must now compete on global markets and learn to adjust their strategies accordingly, seizing the opportunities that globalization offers."
+
+^~ To remain successful, being in competition, the business community is compelled to take advantage of the opportunities provided by globalisation.
+
+Within commercial law an analysis of law and economics may assist in developing a better understanding of the relationship between commercial law and the commercial sector it serves.~^ "...[T]he importance of the interrelations between law and economics can be seen in the twin facts that legal change is often a function of economic ideas and conditions, which necessitate and/or generate demands for legal change, and that economic change is often governed by legal change."~^ In doing so, however, it is important to be aware that there are several competing schools of law and economics, with different perspectives, levels of abstraction, and analytical consequences of and for the world that they model.~^
+
+^~ Realists would contend that law is contextual and best understood by exploring the interrelationships between law and the other social sciences, such as sociology, psychology, political science, and economics.
+
+^~ Part of a section cited in Nicholas Mercuro and Steven G. Medema, /{Economics and the Law: from Posner to Post-Modernism}/ (Princeton, 1997) p. 11, with reference to Karl N. Llewellyn, The Effect of Legal Institutions upon Economics, American Economic Review 15 (December 1925) pp 655-683; Mark M. Litchman, Economics, the Basis of Law, American Law Review 61 (May-June 1927) pp 357-387; and W. S. Holdsworth, A Neglected Aspect of the Relations between Economic and Legal History, Economic History Review 1 (January 1927-1928) pp 114-123.
+
+^~ For a good introduction see Nicholas Mercuro and Steven G. Medema, /{Economics and the Law: from Posner to Post-Modernism}/ (Princeton, 1997). These include: Chicago law and economics (New law and economics); New Haven School of law and economics; Public Choice Theory; Institutional law and economics; Neoinstitutional law and economics; Critical Legal Studies.
+
+Where there is rapid interrelated structural change with resulting new features, understanding the underlying currents and concepts at their intersections (rather than concentrating on the traditionally established tectonic plates of a discipline, or on expositions of history~^) is the key to commencing meaningful discussions and developing solutions for the resulting issues.~^ Interrelated developments are more meaningfully understood through interdisciplinary study, as this instance suggests, of the law, commerce/economics, and technology nexus. In advocating this approach, we should also pay heed to the realisation in the sciences of the limits of reductionism in the study of complex systems, as such systems feature emergent properties that are not evident if broken down into their constituent parts. System complexity exceeds sub-system complexity; consequently, the relevant unit for understanding the system's function is the system, not its parts.~^ Simplistic dogma should be abandoned for a contextual approach.
+
+^~ Case overstated, but this is an essential point. It is not helpful to be overly tied to the past. It is necessary to be able to look ahead and explore new solutions, and to be aware of the implications of "complexity" (as to the relevance of past circumstances to the present).
+
+^~ The majority of which are beyond the scope of this paper. Examples include: encryption and privacy for commercial purposes; digital signatures; symbolic ownership; electronic intellectual property rights.
+
+^~ Complexity theory is a branch of mathematics and physics that examines non-linear systems in which simple sets of deterministic rules can lead to highly complicated results, which cannot be predicted accurately. A study of the subject is provided by Nicholas Rescher /{Complexity: A Philosophical Overview}/ (New Brunswick, 1998). See also Jack Cohen and Ian Stewart, /{The Collapse of Chaos: Discovering Simplicity in a Complex World}/ (1994).
+
+1~ Common Property - advocating a common commercial highway
+
+Certain infrastructural underpinnings beneficial to the working of the market economy are not best provided by the business community, but by other actors including governments. In this paper mention is made for example of the /{United Nations Convention on the Recognition and Enforcement of Foreign Arbitral Awards}/ (New York, 10 June 1958), which the business community regularly relies upon as the back-stop for their international agreements. Common property can have an enabling value: the Net, basis for the "new" economy, would not be what it is today without much that has been shared on this basis, having permitted /{"Metcalfe's law"}/~^ to take hold. /{Metcalfe's law}/ suggests that the value of a shared technology grows in proportion to the square of its user base. In all likelihood it applies as much to transnational contract law as to technological networks and standards. The more people who use a network or standard, the more "valuable" it becomes, and the more users it will attract. Key infrastructure should be identified and common property solutions, where appropriate, nurtured, keeping transaction costs to a minimum.
+
+^~ Robert Metcalfe, founder of 3Com.
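The quadratic growth behind Metcalfe's law can be sketched in a few lines: counting the distinct pairwise connections among n users (n·(n−1)/2) shows that doubling the user base roughly quadruples the number of possible links. The user counts below are hypothetical, purely for illustration.

```python
# Minimal sketch of Metcalfe's law: a network's "value" is taken as
# proportional to the number of distinct pairwise links among its users.

def pairwise_links(n):
    """Number of distinct pairwise connections among n users: n*(n-1)/2."""
    return n * (n - 1) // 2

for users in (10, 20, 40):
    print(users, pairwise_links(users))
# Each doubling of users roughly quadruples the link count (45, 190, 780).
```

The same shape of argument is what the text extends, by analogy, from technological networks and standards to transnational contract law: each additional adopter raises the standard's value to every existing adopter.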
+
+The following general perspective is submitted as worthy of consideration (and support) by the legal, business and academic communities, and governments. *{(a)}* Abstract goals valuable to a transnational legal infrastructure include certainty and predictability, flexibility, simplicity where possible, and neutrality, in the sense of being without perceived "unfairness" in the global context of their application. This covers the content of the "laws" themselves and the methods used for their interpretation. *{(b)}* Of law with regard to technology, "rules should be technology-neutral (i.e., the rules should neither require nor assume a particular technology) and forward looking (i.e., the rules should not hinder the use or development of technologies in the future)."~^ *{(c)}* Desirable abstract goals in developing technological standards and critical technological infrastructure include choice, and that they should be shared and public or "open" (as in "open source"), and platform and/or program neutral, that is, interoperable. (On security, to forestall suggestions to the contrary, popular open source software tends to be as secure as, or more secure than, proprietary software.) *{(d)}* Encryption is an essential part of the mature "new" economy but remains the subject of some governments' restriction.~^ The availability of (and the possibility to develop common transnational standards for) strong encryption is essential for commercial security and trust with regard to all manner of Net communications and electronic commerce transactions, /{vis-à-vis}/ their confidentiality, integrity, authentication, and non-repudiation. That is, encryption is the basis for essential commerce-related technologies, including, amongst many others, electronic signatures, electronic payment systems and the development of electronic symbols of ownership (such as electronic bills of lading). *{(e)}* As regards the dissemination of primary materials concerning "uniform standards" in both the legal and technology domains, "the Net" should be used to make them globally available, free. Technology should be similarly used where possible to promote the goals outlined under point (a). Naturally, as a tempered supporter of the market economy,~^ these reservations do not extend to proprietary secondary materials and technologies. Similarly, actors of the market economy would take advantage of the common property base of the commercial highway.
+
+^~ /{US Framework for Global Electronic Commerce}/ (1997) http://www.whitehouse.gov/WH/New/Commerce/
+
+^~ The EU is lifting such restriction, and the US seems likely to follow suit.
+
+^~ Caveats extending beyond the purview of this paper. It is necessary to be aware that there are other overriding interests, global and domestic, that the market economy is ill suited to providing for, such as the environment, and possibly key public utilities that require long term planning and high investment. It is also necessary to continue to be vigilant against that which even if arising as a natural consequence of the market economy, has the potential to disturb or destroy its function, such as monopolies.
+
+1~ Modelling the private international commercial law infrastructure
+
+Apart from the study of "laws" or the existing legal infrastructure, there are a multitude of players involved in their creation, whose efforts may be regarded as being in the nature of systems modelling. Of interest to this paper is the subset of activity of a few organisations that provide the underpinnings for the foundation of a successful transnational contract/sales law. These are not amongst the more controversial legal infrastructure modelling activities, and they play a small but significant part in simplifying international commerce and trade.~^
+
+^~ Look for instance at national customs procedures, and consumer protection.
+
+Briefly viewing the wider picture, several institutions are involved as independent actors in systems modelling of the transnational legal infrastructure. Their roles and mandates and the issues they address are conceptually different. These include certain United Nations organs and affiliates such as the United Nations Commission on International Trade Law (UNCITRAL),~^ the World Intellectual Property Organisation (WIPO)~^ and recently the World Trade Organisation (WTO),~^ along with other institutions such as the International Institute for the Unification of Private Law (UNIDROIT),~^ the International Chamber of Commerce (ICC),~^ and the Hague Conference on Private International Law.~^ They identify areas that would benefit from an international or transnational regime and use the various tools at their disposal (including treaties, model laws, conventions, rules and/or principles, and standard contracts) to develop legislative "solutions" that they hope will be subscribed to.
+
+^~ http://www.uncitral.org/
+
+^~ http://www.wipo.org/
+
+^~ http://www.wto.org/
+
+^~ http://www.unidroit.org/
+
+^~ http://www.iccwbo.org/
+
+^~ http://www.hcch.net/
+
+A host of other institutions are involved in providing regional solutions.~^ Specialised areas are also addressed by appropriately specialised institutions.~^ A result of globalisation is increased competition (also) amongst States, which are active players in the process, identifying and addressing the needs of their business communities over a wide range of areas and managing the suitability to the global economy of their domestic legal, economic, technological and educational~^ infrastructures. The role of States remains to identify what domestic structural support they must provide to be integrated and competitive in the global economy.
+
+^~ such as ASEAN http://www.aseansec.org/ the European Union (EU) http://europa.eu.int/ MERCOSUR http://embassy.org/uruguay/econ/mercosur/ and North American Free Trade Agreement (NAFTA) http://www.nafta-sec-alena.org/english/nafta/
+
+^~ e.g. large international banks; or in the legal community, the Business Section of the International Bar Association (IBA) with its membership of lawyers in over 180 countries. http://www.ibanet.org/
+
+^~ For a somewhat frightening peek and illuminating discussion of the role of education in the global economy as implemented by a number of successful States see Joel Spring, /{Education and the Rise of the Global Economy}/ (Mahwah, NJ, 1998).
+
+In addition to "traditional" contributors, the technology/commerce/law confluence provides new challenges and opportunities, allowing the emergence of important new players within the commercial field, such as Bolero,~^ which, with the backing of international banks and ship-owners, offers electronic replacements for traditional paper transactions, acting as transaction agent for the electronic substitutes on behalf of the trading parties. The acceptance of the possibility of applying an institutionally offered lex has opened the door further for other actors, including /{ad hoc}/ groupings of the business community and/or universities, to find ways to be engaged and actively participate in providing services for themselves and/or others in this domain.
+
+^~ http://www.bolero.org/ also http://www.boleroassociation.org/
+
+1~ The foundation for transnational private contract law, arbitration
+
+The market economy drive perpetuating economic globalisation is also active in the development and choice of transnational legal solutions. The potential reward: international sets of contract rules and principles that can be counted on to be consistent and to provide a uniform layer of insulation (with minimal reference back to State law) when applied across the landscape of a multitude of different municipal legal systems. The business community is free to utilise them if available, and if not, to develop them, or seek to have them developed.
+
+The kernel for the development of a transnational legal infrastructure governing the rights and obligations of private contracting individuals was put in place as far back as 1958 by the /{UN Convention on the Recognition and Enforcement of Foreign Arbitral Awards}/ (/{"NY Convention on ICA"}/),~^ now in force in over a hundred States. Together with freedom of contract, the /{NY Convention on ICA}/ made it possible for commercial parties to develop and be governed by their own /{lex}/ in their contractual affairs, should they wish to do so, and guaranteed that, provided their agreement was based on international commercial arbitration (/{"ICA"}/) and not contrary to relevant mandatory law, it would be enforced in all contracting States. This has been given further support by various more recent arbitration rules and by the /{UNCITRAL Model Law on International Commercial Arbitration 1985}/,~^ which now explicitly state that rule-based solutions independent of national law can be applied in /{"ICA"}/.~^
+
+^~ at http://www.jus.uio.no/lm/un.arbitration.recognition.and.enforcement.convention.new.york.1958/
+
+^~ at http://www.jus.uio.no/lm/un.arbitration.model.law.1985/
+
+^~ Lando, /{Each Contracting Party Must Act In Accordance with Good Faith and Fair Dealing}/ in /{Festskrift til Jan Ramberg}/ (Stockholm, 1997) p. 575. See also UNIDROIT Principles, Preamble 4 a. Also Arthur Hartkamp, The Use of UNIDROIT Principles of International Commercial Contracts by National and Supranational Courts (1995) in UNIDROIT Principles: A New Lex Mercatoria?, pp. 253-260 on p. 255. But see Goode, /{A New International Lex Mercatoria?}/ in /{Juridisk Tidskrift}/ (1999-2000 nr 2) p. 256 and 259.
+
+/{"ICA"}/ is recognised as the most prevalent means of dispute resolution in international commerce. Unlike litigation, /{"ICA"}/ survives on its merits as a commercial service provided for the needs of the business community.~^ It has consequently been more dynamic than national judiciaries in adjusting to the changing requirements of businessmen. Its institutions are quicker to adapt and innovate, including in their ability to cater for transnational contracts. /{"ICA"}/, in taking its mandate from and giving effect to the will of the parties, provides them with greater flexibility and frees them from many of the limitations of municipal law.~^
+
+^~ /{"ICA"}/ being shaped by market forces and competition adheres more closely to the rules of the market economy, responding to its needs and catering for them more adequately.
+
+^~ As examples of this, it seeks to give effect to the parties' agreement upon: the lex mercatoria as the law of the contract; the number of, and persons to be "adjudicators"; the language of proceedings; the procedural rules to be used, and; as to the finality of the decision.
+
+In sum, a transnational/non-national regulatory order governing the contractual rights and obligations of private individuals is made possible by: *{(a)}* States' acceptance of freedom of contract (public policy excepted); *{(b)}* Sanctity of contract, embodied in the principle /{pacta sunt servanda}/; *{(c)}* Written contractual selection of dispute resolution by international commercial arbitration, whether /{ad hoc}/ or institutional, usually under internationally accepted arbitration rules; *{(d)}* Guaranteed enforcement, arbitration where necessary borrowing the State apparatus for law enforcement through the /{NY Convention on ICA}/, which has secured for /{"ICA"}/ a recognition and enforcement regime unparalleled by municipal courts in well over a hundred contracting States; *{(e)}* Transnational effect or non-nationality, achievable through /{"ICA"}/'s acceptance of the parties' ability to select the basis upon which the dispute is to be resolved outside municipal law, such as through the selection of general principles of law or /{lex mercatoria}/, or by calling upon the arbitrators to act as /{amiable compositeur}/ or /{ex aequo et bono}/.
+
+This framework provided by /{"ICA"}/ opened the door for the modelling of effective transnational default rules and principles for contracts, independent of State participation (in their development, application, or choice of law foundation). Today we have increased certainty of content and better control over the desired degree of transnational effect or non-nationality, given the availability of comprehensive insulating rules and principles such as the /{PICC}/ or the /{Principles of European Contract Law}/ (/{"European Principles"}/ or /{"PECL"}/), which may be chosen either together with, or to the exclusion of, a choice of municipal law as governing the contract. For electronic commerce a similar path is hypothetically possible.
+
+1~ "State contracted international law" and/or "institutionally offered lex"? /{CISG}/ and /{PICC}/ as examples
+
+An institutionally offered lex ("IoL", uniform rules and principles) appears to have a number of advantages over "State contracted international law" ("ScIL", model laws, treaties and conventions for enactment). The development and formulation of both "ScIL" and "IoL" takes time, the /{CISG}/ representing a half century of effort~^ and the /{PICC}/ twenty years.~^ The /{CISG}/ by UNCITRAL represents the greatest success for the unification of an area of substantive commercial contract law to date, being currently applied by 57 States,~^ estimated as representing close to seventy percent of world trade and including every major trading nation of the world apart from England and Japan. To labour the point, the USA, most of the EU (along with Canada, Australia and Russia) and China, ahead of its entry to the WTO, already share the same law in relation to the international sale of goods. "ScIL", however, has additional hurdles to overcome. *{(a)}* In order to enter into force and become applicable, it must go through the lengthy process of ratification and accession by States. *{(b)}* Implementation is frequently subject to various reservations. *{(c)}* Even where it is widely used, there are usually as many or more States that are exceptions. Success, which is by no means guaranteed, takes time, and for every uniform law that is a success there are several failures.
+
+^~ /{UNCITRAL Convention on Contracts for the International Sale of Goods 1980}/ see at http://www.jus.uio.no/lm/un.contracts.international.sale.of.goods.convention.1980/ <br>The /{CISG}/ may be regarded as the culmination of an effort in the field dating back to Ernst Rabel, (/{Das Recht des Warenkaufs}/ Bd. I&II (Berlin, 1936-1958). Two volume study on sales law.) followed by the Cornell Project, (Cornell Project on Formation of Contracts 1968 - Rudolf Schlesinger, Formation of Contracts. A study of the Common Core of Legal Systems, 2 vols. (New York, London 1968)) and connected most directly to the UNIDROIT inspired /{Uniform Law for International Sales}/ (ULIS http://www.jus.uio.no/lm/unidroit.ulis.convention.1964/ at and ULF at http://www.jus.uio.no/lm/unidroit.ulf.convention.1964/ ), the main preparatory works behind the /{CISG}/ (/{Uniform Law on the Formation of Contracts for the International Sale of Goods}/ (ULF) and the /{Convention relating to a Uniform Law on the International Sale of Goods}/ (ULIS) The Hague, 1964.).
+
+^~ /{UNIDROIT Principles of International Commercial Contracts}/ commonly referred to as the /{UNIDROIT Principles}/ and within this paper as /{PICC}/ see at http://www.jus.uio.no/lm/unidroit.contract.principles.1994/ and http://www.jus.uio.no/lm/unidroit.international.commercial.contracts.principles.1994.commented/ <br>The first edition of the /{PICC}/ were finalised in 1994, 23 years after their first conception, and 14 years after work started on them in earnest.
+
+^~ As of February 2000.
+
+Institutionally offered lex ("IoL"), comprehensive general contract principles or contract law restatements that create an entire "legal" environment for contracting, has the advantage of being instantly available, becoming effective by choice of the contracting parties at the stroke of a pen. "IoL" is also more easily developed subsequently, in light of experience and need. Amongst the reasons for its use is the reduction of transaction cost through its provision of a set of default rules, applicable transnationally, that satisfy risk management criteria, being (or becoming) known, tried and tested, and of predictable effect.~^ The most resoundingly successful "IoL" example to date has been the ICC's /{Uniform Customs and Practice for Documentary Credits}/, which is subscribed to as the default rules for the letters of credit offered by the vast majority of banks in the vast majority of countries of the world. Furthermore, uniform principles allow unification on matters that at the present stage of national and regional pluralism could not be achieved at a treaty level. There are, however, things that only "ScIL" can "engineer" (for example, that which relates to priorities and third party obligations).
+
+^~ "[P]arties often want to close contracts quickly, rather than hold up the transaction to negotiate solutions for every problem that might arise." Honnold (1992) on p. 13.
+
+*{/{PICC}/:}* The arrival of /{PICC}/ in 1994 was particularly timely. Coinciding as it did with the successful attempt at reducing trade barriers represented by the /{World Trade Agreement,}/~^ and the start of general Internet use,~^ allowed for the exponential growth of electronic commerce, and further underscored the transnational tendency of commerce. The arrival of /{PICC}/ was all the more opportune bearing in mind the years it takes to prepare such an instrument. Whilst there have been some objections, the /{PICC}/ (and /{PECL}/) as contract law restatements cater to the needs of the business community that seeks a non-national or transnational law as the basis of its contracts, and provide a focal point for future development in this direction. Where in the past they would have been forced to rely on the ethereal and nebulous /{lex mercatoria}/, now the business community is provided with the opportunity to make use of such a "law" that is readily accessible, and has a clear and reasonably well defined content, that will become familiar and can be further developed as required. As such the /{PICC}/ allow for more universal and uniform solutions. Their future success will depend on such factors as: *{(a)}* Suitability of their contract terms to the needs of the business community. *{(b)}* Their becoming widely known and understood. *{(c)}* Their predictability evidenced by a reasonable degree of consistency in the results of their application. *{(d)}* Recognition of their potential to reduce transaction costs. *{(e)}* Recognition of their being neutral as between different nations' interests (East, West; North, South). 
In the international sale of goods the /{PICC}/ can be used in conjunction with more specific rules and regulations, including (on the parties' election~^) the /{CISG}/, to fill gaps in its provisions.~^ Provisions of the /{CISG}/ would be given precedence over the /{PICC}/ under the accepted principle of /{specialia generalibus derogant}/,~^ the mandatory content of the /{PICC}/ excepted. There are many situations that the /{CISG}/ does not provide for at all, or which it provides for in less detail than the /{PICC}/.
+
+^~ http://www.jus.uio.no/lm/wta.1994/
+
+^~ See Amissah, /{On the Net and the Liberation of Information that wants to be Free}/ in ed. Jens Edvin A. Skoghøy /{Fra institutt til fakultet, Jubileumsskrift i anledning av at IRV ved Universitetet i Tromsø feirer 10 år og er blitt til Det juridiske fakultet}/ (Tromsø, 1996) pp. 59-76 or the same at http://www.jus.uio.no/lm/on.the.net.and.information.22.02.1997.amissah/
+
+^~ Also consider present and future possibilities for such use of /{PICC}/ under /{CISG}/ articles 8 and 9.
+
+^~ Drobnig, id. p. 228, comments that the /{CISG}/ precludes recourse to general principles of contract law in Article 7. This does not refer to the situation where the parties determine that the /{PICC}/ should fill such gaps, see /{CISG}/ Article 6, nor does it mean that in future the /{PICC}/ will not be of importance under /{CISG}/ Articles 8 and 9.
+
+^~ "Special principles have precedence over general ones." See Huet, Synthesis (1995) p. 277.
+
+Work on the /{PICC}/ and /{PECL}/, under the chairmanship of Professors Bonell and Ole Lando respectively, was wisely cross-pollinated (conceptually and through cross-membership of preparatory committees), as common foundations strengthen both sets of principles. A couple of points should be noted. Firstly, despite the maintained desirability of a transnational solution, this does not exclude the desirability of regional solutions, especially where there is choice and the regional solutions are more comprehensive and easier to keep of uniform application. Secondly, the European Union has powers and influence (within the EU) unparalleled by UNIDROIT, which can be utilised in future with regard to the /{PECL}/ if the desirability of a common European contract solution is recognised and agreed upon by EU member States. As a further observation, there is, hypothetically at least, nothing to prevent the future development of an alternative extensive (competing) transnational contract /{lex}/ solution, though the weighty effort already in place as represented by the /{PICC}/, and the high investment in time and independent skilled legal minds necessary to achieve this in a widely acceptable manner, make such a development not very likely. It may however be the case that for electronic commerce some other particularly suitable rules and principles will in time be developed in a similar vein, along the lines of an "IoL".
+
+1~ Contract /{Lex}/ design. Questions of commonweal
+
+The virtues of freedom of contract are acknowledged in this paper in that they allow the international business community to structure their business relationships to suit their requirements, and as such reflect the needs and working of the market economy. However, it is instructive also to explore the limits of the principles of freedom of contract, /{pacta sunt servanda}/ and /{caveat subscriptor}/. These principles are based on free market arguments that parties best understand their interests, and that the contract they arrive at will be an optimum compromise between their competing interests. It is not for an outsider to regulate or evaluate what a party of their own free will and volition has gained from electing to contract on those terms. This approach to contract is adversarial, based on the conflicting wills of the parties achieving a meeting of minds. It imposes no duty of good faith and fair dealing or of loyalty (including the disclosure of material facts) upon the contracting parties to one another, who are left to protect their own interests. However, in international commerce this demand that parties protect their own interests can be more costly to satisfy, and may have a negative and restrictive effect. Also, although this approach is claimed to be neutral in making no judgement as to the contents of a contract, the claim can be misleading.
+
+2~ The neutrality of contract law and information cost
+
+The information problem is a general one that needs to be recognised in its various forms where it arises and addressed where possible.
+
+Adherents to the /{caveat subscriptor}/ model point to the fact that parties have conflicting interests, and should look out for their own interests. However, information presents particular problems, which are exacerbated in international commerce.~^ As Michael Trebilcock put it: "Even the most committed proponents of free markets and freedom of contract recognise that certain information preconditions must be met for a given exchange to possess Pareto superior qualities."~^ Compared with domestic transactions, the contracting parties are less likely to possess information about each other, or about what material facts there may be within the other party's knowledge, and will find such information more difficult and costly to acquire. With resource inequalities, some parties will be in a much better position to determine and access what they need to know, the more so as the more information one already has, the less it costs to identify and obtain any additional information that is required.~^ The converse lot of the financially weaker party makes their problem of high information costs (both actual and relative) near insurmountable. Ignorance may even become a rational choice, as the marginal cost of information remains higher than its marginal benefit. "This, in fact is the economic rationale for the failure to fully specify all contingencies in a contract."~^ The argument is tied to transaction cost, and further elucidates a general role played by underlying default rules and principles. It also extends to the value of immutable principles that may help mitigate the problem in some circumstances. More general arguments are presented below.
+
+^~ The more straightforward cases of various types of misrepresentation apart.
+
+^~ Trebilcock, (1993) p. 102, followed by a quotation of Milton Friedman, from /{Capitalism and Freedom}/ (1962) p. 13.
+
+^~ Trebilcock, (1993) p. 102, note quoted passage of Kim Lane Scheppele, /{Legal Secrets: Equality and Efficiency in the Common Law}/ (1988) p. 25.
+
+^~ See for example Nicholas Mercuro and Steven G. Medema, p. 58.
+
+2~ Justifying mandatory loyalty principles
+
+Given the ability to create alternative solutions and even an independent /{lex}/, the question arises: what limits, if any, should be imposed upon freedom of contract? What protective principles are required? Should protective principles be default rules that can be excluded? Should they be mandatory? Should mandatory law exist only at the level of municipal law?
+
+A kernel of mandatory protective principles with regard to loyalty may be justified as beneficial, and even necessary, for "IoL" to be acceptable in international commerce, in that they (on balance) reflect the collective needs of the international business community. The present author is of the opinion that the duties of good faith and fair dealing and loyalty (or an acceptable equivalent) should be a necessary part of any attempt at the self-legislation or institutional legislation of any contract regime that is based on "rules and principles" (rather than a national legal order). If absent, a requirement for them should be imposed by mandatory international law. Such protective provisions are to be found within the /{PICC}/ and /{PECL}/.~^ As regards the /{PICC}/: *{(a)}* The loyalty (and other protective) principles help bring about confidence and foster relations between parties. They provide an assurance in the international arena, where parties are less likely to know each other and may have more difficulty in finding out about each other. *{(b)}* They better reflect the focus of the international business community on a business relationship from which both sides seek to gain. *{(c)}* They result in wider acceptability of the principles within both governments and the business community in the pluralistic international community. These protective principles may be regarded as enabling the /{PICC}/ to better represent the needs of the commonweal. *{(d)}* Good faith and fair dealing~^ are fundamental underlying principles of international commercial relations. *{(e)}* Reliance only on the varied mandatory law protections of various States does not engender uniformity, which is also desirable with regard to that which can be counted upon as immutable. (Not that it is avoidable, given that mandatory State law remains overriding.)
More generally, freedom of contract benefits from these protective principles, which need immutable protection from contractual freedom to effectively serve their function. In seeking a transnational or non-national regime to govern contractual relations, one might suggest this to be the minimum price of freedom of contract that should be insisted upon by mandatory international law, as the limitation which hinders the misuse by one party of unlimited contractual freedom. They appear to be an essential basis for the acceptability of the autonomous contract (the non-national contract, based on agreed rules and principles/ "IoL"). As immutable principles they (hopefully, and this is to be encouraged) become the default standard for the conduct of international business, and as such may be looked upon as "common property." Unless immutable, they suffer a fate somewhat analogous to that of "the tragedy of the commons."~^ It should be recognised that argument over the loyalty principles should be one of degree, as the concept must not be compromised, and needs to be protected (even if this comes at the price of a degree of uncertainty), especially against particularly strong parties, who are most likely to argue against its necessity.
+
+^~ Examples include: the deliberately excluded validity (Article 4); the provision on interest (Article 78); impediment (Article 79), and; what many believe to be the inadequate coverage of battle of forms (Article 19).
+
+^~ The commented /{PECL}/ explain: "'Good faith' means honesty and fairness in mind, which are subjective concepts... 'fair dealing' means observance of fairness in fact, which is an objective test".
+
+^~ A special problem regarding common/shared resources, discussed by Garrett Hardin in Science (1968) 162 pp. 1243-1248. For a short discussion and summary see Trebilcock, (1993) pp. 13-15.
+
+1~ Problems beyond uniform texts
+
+2~ In support of four objectives
+
+In the formulation of many international legal texts a pragmatic approach was taken. Formulating legislators from different States developed solutions based on suitable responses to factual example circumstances. This was done, successfully, with a view to avoiding arguments over alternative legal semantics and methodologies. However, having arrived at a common text, what then? Given that differences of interpretation can arise and become entrenched, by what means is it possible to foster a sustainable drive towards the uniform application of shared texts? Four principles appear to be desirable and should, insofar as possible, be pursued together: *{(i)}* the promotion of certainty and predictability; *{(ii)}* the promotion of uniformity of application; *{(iii)}* the protection of democratic ideals and the ensuring of jurisprudential deliberation; and *{(iv)}* the retention of efficiency.
+
+2~ Improving the predictability, certainty and uniform application of international and transnational law
+
+The key to the (efficient) achievement of greater certainty and predictability in an international and/or transnational commercial law regime is through the uniform application of shared texts that make up this regime.
+
+Obviously a distinction is to be made between transnational predictability in application, that is "uniform application", and predictability at a domestic level. Where the "uniform law" is applied by a municipal court of State "A" that looks first to its domestic writings, there may be a clear, predictable manner of application, even if not in the spirit of the "Convention". Another State "B" may apply the uniform law in a different way that is equally predictable, being perfectly consistent internally. This, however, defeats much of the purpose of the uniform law.
+
+A first step is for municipal courts to accept the /{UN Convention on the Law of Treaties 1969}/ (in force 1980) as a codification of existing public international law with regard to the interpretation of treaties.~^ A potentially fundamental step towards the achievement of uniform application is the conscientious following of the admonitions of the interpretation clauses of modern conventions, rules and principles~^ to take into account their international character and the need to promote uniformity in their application,~^ together with all this implies.~^ However, the problems of uniform application, being embedded in differences of legal methodology, go beyond the agreement of a common text and superficial glances at the works of other legal municipalities. These include questions related to the sources of authority and the techniques applied in developing valid legal argument. Problems with sources include differences in the authority and weight given to: *{(a)}* legislative history; *{(b)}* rulings, domestic and international; *{(c)}* official and other commentaries; *{(d)}* scholarly writings. There should be an ongoing discussion of legal methodology to determine the methods best suited to addressing the problem of achieving greater certainty, predictability and uniformity in the application of shared international legal texts. With regard to information sharing, again the technology associated with the Net offers potential solutions.
+
+^~ This is the position in English law: see Lord Diplock in Fothergill v Monarch Airlines [1981] A.C. 251, 282, or see http://www.jus.uio.no/lm/england.fothergill.v.monarch.airlines.hl.1980/2_diplock.html also Mann (London, 1983) at p. 379. The relevant articles on interpretation are Articles 31 and 32.
+
+^~ Examples: The /{CISG}/, Article 7; The /{PICC}/, Article 1.6; /{PECL}/ Article 1.106; /{UN Convention on the Carriage of Goods by Sea (The Hamburg Rules) 1978}/, Article 3; /{UN Convention on the Limitation Period in the International Sale of Goods 1974}/ and /{1978}/, Article 7; /{UN Model Law on Electronic Commerce 1996}/, Article 3; /{UNIDROIT Convention on International Factoring 1988}/, Article 4; /{UNIDROIT Convention on International Financial Leasing 1988}/, Article 6; also /{EC Convention on the Law Applicable to Contractual Obligations 1980}/, Article 18.
+
+^~ Such as the /{CISG}/ provision on interpretation - Article 7.
+
+^~ For an online collection of articles see the Pace /{CISG}/ Database http://www.cisg.law.pace.edu/cisg/text/e-text-07.html and amongst the many other articles do not miss Michael Van Alstine /{Dynamic Treaty Interpretation}/ 146 /{University of Pennsylvania Law Review}/ (1998) 687-793.
+
+2~ The Net and information sharing through transnational databases
+
+The Net has been a godsend permitting the collection and dissemination of information on international law. Even with the best intentions to live up to the admonitions of "ScIL" and "IoL" "to take into account their international character and the need to promote uniformity in their application", a difficulty has been in knowing what has been written and decided elsewhere. In discussing solutions, Professor Honnold in /{Uniform Words and Uniform Application}/~^ suggests the following: "General Access to Case-Law and Bibliographic Material: The development of a homogenous body of law under the Convention depends on channels for the collection and sharing of judicial decisions and bibliographic material so that experience in each country can be evaluated and followed or rejected in other jurisdictions." Honnold then goes on to discuss "the need for an international clearing-house to collect and disseminate experience on the Convention", on the need for which, he writes, there is general agreement. He also discusses information-gathering methods through the use of national reporters. He poses the question "Will these channels be adequate? ..."
+
+^~ Based on the /{CISG}/, and inputs from several professors from different legal jurisdictions, on the problems of achieving the uniform application of the text across different legal municipalities. J. Honnold, /{Uniform Words and Uniform Application: The 1980 Sales Convention and International Juridical Practice}/, in /{Einheitliches Kaufrecht und nationales Obligationenrecht. Referate Diskussionen der Fachtagung}/ am 16/17-2-1987. Hrsg. von P. Schlechtriem. Baden-Baden, Nomos, 1987. pp. 115-147, at pp. 127-128.
+
+The Net, offering inexpensive ways to build databases and to provide global access to information, provides an opportunity to address these problems that was not previously available. The Net extends the reach of the admonitions of the interpretation clauses, providing the medium whereby, if a decision or scholarly writing exists on a particular article or provision of a Convention anywhere in the world, it will be readily available. Whether or not a national court or arbitration tribunal chooses to follow its example, it should be aware of it. Whatever a national court decides will also become internationally known, and will add to the body of experience on the Convention.~^
+
+^~ Nor is it particularly difficult to set into motion the placement of such information on the Net. With each interested participant publishing for their own interest, the Net could provide the key resources to be utilised in the harmonisation and reaching of common understandings of solutions and uniform application of legal texts. Works from all countries would be available.
+
+Such a library would be of interest to the institution promulgating the text, governments, practitioners and researchers alike. It could place at your fingertips: *{(a)}* Convention texts. *{(b)}* Implementation details of contracting States. *{(c)}* The legislative history. *{(d)}* Decisions generated by the convention around the world (court and arbitral where possible). *{(e)}* The official and other commentaries. *{(f)}* Scholarly writings on the Convention. *{(g)}* Bibliographies of scholarly writings. *{(h)}* Monographs and textbooks. *{(i)}* Student study material collections. *{(j)}* Information on promotional activities, lectures, moots, etc. *{(k)}* Discussion groups/ mailing groups and other more interactive features.
+
+With respect to the /{CISG}/ such databases are already being maintained.~^
+
+^~ Primary amongst them is the Pace University, Institute of International Commercial Law, /{CISG}/ Database http://www.cisg.law.pace.edu/ which provides secondary support for the /{CISG}/, including a free on-line database of the legislative history, academic writings, and case-law on the /{CISG}/, and additional material with regard to the /{PICC}/ and /{PECL}/ insofar as they may supplement the /{CISG}/. Furthermore, the Pace /{CISG}/ Project networks with the several other existing Net-based "autonomous" /{CISG}/ projects. UNCITRAL, under Secretary Gerold Herrmann, has its own database through which it distributes its case-law materials collected from national reporters (CLOUT).
+
+The database, by ensuring the availability of international materials used in conjunction with legal practice, helps to support the forenamed four principles. That of efficiency is especially enhanced if there is a single source that can be searched for the information required.
+
+The major obstacle that remains, to confidence in this as the great and free panacea that it should be, is the cost of translating texts.
+
+2~ Judicial minimalism promotes democratic jurisprudential deliberation
+
+How to protect liberal democratic ideals and ensure international jurisprudential deliberation? Looking at judicial method, where court decisions are looked to for guidance, liberal democratic ideals and international jurisprudential deliberation are fostered by a judicial minimalist approach.
+
+For those of us with a common law background, and others who pay special attention to cases as you are invited to by interpretation clauses, there is scope for discussion as to the most appropriate approach to be taken with regard to judicial decisions. US judge Cass Sunstein's suggestion of judicial minimalism,~^ despite its being developed in a different context,~^ is attractive in that it is suited to a liberal democracy in ensuring democratic jurisprudential deliberation. It maintains discussion and debate, and allows for adjustment as appropriate and the gradual development of a common understanding of issues. Much as one may admire farsighted and far-reaching decisions and expositions, there is less chance with the minimalist approach of the (dogmatic) imposition of particular values, whilst information sharing offers the possibility of the percolation of good ideas.~^ Much as we admire the integrity of Dworkin's Hercules,~^ it is questionable that he can consistently deliver single solutions suitable across such disparate socio-economic cultures. In examining the situation, his own "integrity" would likely give him pause and prevent him from dictating that he can.~^ This position is maintained as a general principle across international commercial law, despite private (as opposed to public) international commercial law not being an area of particularly "hard" cases of principle, and despite private international commercial law being an area in which, over a long history, it has been demonstrated that lawyers are able to talk a common language to make themselves and their concepts (which are not dissimilar) understood by each other.~^
+
+^~ Cass R. Sunstein, /{One Case at a Time - Judicial Minimalism on the Supreme Court}/ (1999)
+
+^~ His analysis is developed based largely on "hard" constitutional cases of the U.S.
+
+^~ D. Stauffer, /{Introduction to Percolation Theory}/ (London, 1985). Percolation represents the sudden dramatic expansion of a common idea or ideas through the reaching of a critical level/mass, in the rapid recognition of their power and the making of further interconnections. An epidemic-like infection of ideas. Not quite the way we are used to the progression of ideas within a conservative tradition.
+
+^~ Ronald Dworkin, /{Law's Empire}/ (Harvard, 1986); /{Hard Cases}/ in /{Harvard Law Review}/ (1988).
+
+^~ Hercules was created for U.S. Federal Cases and the community represented by the U.S.
+
+^~ In 1966, a time when there were greater differences in the legal systems of States comprising the world economy Clive Schmitthoff was able to comment that:<br>"22. The similarity of the law of international trade transcends the division of the world between countries of free enterprise and countries of centrally planned economy, and between the legal families of the civil law of Roman inspiration and the common law of English tradition. As a Polish scholar observed, "the law of external trade of the countries of planned economy does not differ in its fundamental principles from the law of external trade of other countries, such as e.g., Austria or Switzerland. Consequently, international trade law specialists of all countries have found without difficulty that they speak a 'common language'<br>23. The reason for this universal similarity of the law of international trade is that this branch of law is based on three fundamental propositions: first, that the parties are free, subject to limitations imposed by the national laws, to contract on whatever terms they are able to agree (principle of the autonomy of the parties' will); secondly, that once the parties have entered into a contract, that contract must be faithfully fulfilled (/{pacta sunt servanda}/) and only in very exceptional circumstances does the law excuse a party from performing his obligations, viz., if force majeure or frustration can be established; and, thirdly that arbitration is widely used in international trade for the settlement of disputes, and the awards of arbitration tribunals command far-reaching international recognition and are often capable of enforcement abroad."<br>/{Report of the Secretary-General of the United Nations, Progressive Development of the Law of International Trade}/ (1966). Report prepared for the UN by C. Schmitthoff.
+
+2~ Non-binding interpretative councils and their co-ordinating guides can provide a focal point for the convergence of ideas: certainty, predictability, and efficiency
+
+A respected central guiding body can provide a guiding influence with respect to: *{(a)}* the uniform application of texts; *{(b)}* information management control. Given the growing mass of writing on common legal texts, academic and by way of decisions, we are faced with an information management problem.~^
+
+^~ Future if not current.
+
+Supra-national interpretative councils have been called for previously~^ but have for various reasons been regarded as impracticable to implement, including the problems associated with getting States to formally agree upon such a body with binding authority.
+
+^~ /{UNCITRAL Secretariat}/ (1992) p. 253. Proposed by David (France) at the second UNCITRAL Congress and on a later occasion by Farnsworth (USA). To date the political will backed by the financing for such an organ has not been forthcoming. In 1992 the UNCITRAL Secretariat concluded that "probably the time has not yet come". Suggested also by Louis Sono in /{Uniform laws require uniform interpretation: proposals for an international tribunal to interpret uniform legal texts}/ (1992) 25th UNCITRAL Congress, pp. 50-54. Drobnig, /{Observations in Uniform Law in Practice}/ at p. 306.
+
+However, it is not necessary to go this route. In relation to "IoL" in such forms as the /{PICC}/ and /{PECL}/, it is possible for the promulgators themselves~^ to update and clarify the accompanying commentary of the rules and principles, and to extend their work, through having councils with the necessary delegated powers. In relation to the /{CISG}/ it is possible to do something similar of a non-binding nature, through the production of an updated commentary by an interpretative council (that could try to play the role of Hercules).~^ With respect, despite some expressed reservations, it is not true that such a commentary would have no more authority than a single author writing on the subject. A suitable non-binding interpretative council would provide a focal point for the convergence of ideas. Given the principle of ensuring democratic jurisprudential deliberation, that such a council would be advisory only (except perhaps on the contracting parties' election) would be one of its more attractive features, as it would ensure continued debate and development.
+
+^~ UNIDROIT and the EU
+
+^~ For references on interpretation of the /{CISG}/ by a supranational committee of experts or council of "wise men" see Bonell, /{Proposal for the Establishment of a Permanent Editorial Board for the Vienna Sales Convention}/ in /{International Uniform Law in Practice / Le droit uniforme international dans la pratique}/ [Acts and Proceedings of the 3rd Congress on Private Law held by the International Institute for the Unification of Private Law (Rome, 1987)] (New York, 1988) pp. 241-244.
+
+2~ Capacity Building
+
+_1 "... one should create awareness about the fact that an international contract or transaction is not naturally rooted in one particular domestic law, and that its international specifics are best catered for in a uniform law."~^
+
+^~ UNCITRAL Secretariat (1992) p. 255.
+
+_{/{Capacity building}/}_ - raising awareness, providing education, creating a new generation of lawyers versed in a relatively new paradigm. Capacity building in international and transnational law is something that relevant institutions (including arbitration institutions), the business community, and far-sighted States should be interested in promoting. Finding means to transcend national boundaries is also to continue in the tradition of seeking the means to break down barriers to legal communication and understanding. However, while the business community seeks and requires greater uniformity in its business relations, there has paradoxically, at a national level, been a trend towards a nationalisation of contract law, and a regionalisation of business practice.~^
+
+^~ Erich Schanze, /{New Directions in Business Research}/ in Børge Dahl & Ruth Nielsen (ed.), /{New Directions in Contract Research}/ (Copenhagen, 1996) p. 62.
+
+As an example, Pace University, Institute of International Commercial Law, plays a prominent role with regard to capacity building in relation to the /{CISG}/ and /{PICC}/. Apart from the previously mentioned /{CISG Database}/, Pace University organises a large annual moot on the /{CISG}/,~^ this year involving students of 79 universities from 28 countries, and respected arbitrators from the world over. Within the moot, the finding of solutions based on the /{PICC}/ where the /{CISG}/ is silent is encouraged. Pace University also organises an essay competition~^ on the /{CISG}/ and/or the /{PICC}/, which next year is to be expanded to include the /{PECL}/ as a further option.
+
+^~ See http://www.cisg.law.pace.edu/vis.html
+
+^~ See http://www.cisg.law.pace.edu/cisg/text/essay.html
+
+1~ Marketing of transnational solutions
+
+Certain aspects of the Net/web may already be passé, but did you recognise it for what it was, or might become, when it arrived?
+
+As uniform law and transnational solutions are in competition with municipal approaches, a certain amount of marketing is necessary for them to be successful, and may be effective. The approach should involve ensuring that the concept of what they seek to achieve is firmly implanted in the business, legal and academic communities, and engaging the business community and arbitration institutions in capacity building and developing a new generation of lawyers. Feedback from the business community and arbitrators will also prove invaluable. Whilst it is likely that the business community will immediately be able to recognise their potential advantages, it is less certain that they will find the support of the legal community. The normal reasons would be similar to those usually cited as being the primary constraints on its development: "conservatism, routine, prejudice and inertia" (René David). These are problems associated with gaining the initial foothold of acceptability, also associated with the lower part of an exponential growth curve. In addition, the legal community may face tensions arising for various reasons, including the possibility of an increase in world-wide competition.
+
+There are old, well developed legal traditions with developed infrastructures and roots well established in several countries, that are dependable and known. The question arises: why experiment with alternative, non-extensively tested regimes? The required sophistication is developed in the centres providing legal services, and it may be argued that there is not a pressing need for unification or for transnational solutions, as the traditional way of contracting provides satisfactorily for the requirements of global commerce. The services required will continue to be easily and readily available from existing centres of skill. English law, to take an example, is for various reasons (including perhaps language, familiarity of use, reputation and widespread Commonwealth~^ relations) the premier choice for the law governing international commercial transactions, and is likely to remain so for the foreseeable future. Utilising the Commonwealth as an example, what the "transnational" law (e.g. /{CISG}/) experience illustrates, however, is that for States there may be greater advantage to be gained from participation in a horizontally shared area of commercial law than from retaining a traditional vertically integrated commercial law system, based largely, for example, on the English legal system.
+
+^~ http://www.thecommonwealth.org/
+
+Borrowing a term from the information technology sector, it is essential to guard against FUD (fear, uncertainty and doubt) regarding the viability of new and/or competing transnational solutions, which may be spread by their detractors, and, promptly and in the manner required by the free market, to address any real problems that are discerned.
+
+1~ Tools in future development
+
+The legal profession should attempt to be more contemporary, to keep up to date with developments in technology and the sciences, and to adopt effective tools where suitable to achieve its goals. Technology, one way or another, is likely to encroach further upon law and the way we design it.
+
+Science works across cultures and is aspired to by most nations as being responsible for the phenomenal success of technology (both are similarly associated with globalisation). Science is extending its scope to tackle, more confidently, complex systems. It would not hurt to be more familiar with relevant scientific concepts and terminology. Certainly lawyers across the globe, myself included, would also benefit much in their conceptual reasoning from an early dose of the philosophy of science:~^ what better than Karl Popper on scientific discovery, the role of "falsification" and the value of predictive power?~^ And certainly Thomas Kuhn on scientific advancement and "paradigm shifts"~^ has its place. Having mentioned Karl Popper, it would not be unwise to go further (outside the realms of philosophy of science) to study his defence of democracy in both volumes of /{The Open Society and Its Enemies}/.~^
+
+^~ An excellent approachable introduction is provided by A.F. Chalmers /{What is this thing called Science?}/ (1978, Third Edition 1999).
+
+^~ Karl R. Popper /{The Logic of Scientific Discovery}/ (1959).
+
+^~ Thomas S. Kuhn /{The Structure of Scientific Revolutions}/ (1962, 3rd Edition 1976).
+
+^~ Karl R. Popper /{The Open Society and Its Enemies: Volume 1, Plato}/ (1945) and /{The Open Society and Its Enemies: Volume 2, Hegel & Marx}/ (1945).
+
+Less ambitiously, there are several tools not traditionally in the lawyer's set that may assist in transnational infrastructure modelling, and whose potential merits further exploration and development. To suggest a few by way of example: flow charts, fuzzy thinking, "intelligent" electronic agents and Net collaborations.
+
+In the early 1990s I was introduced to a quantity surveyor and engineer who had reduced the /{FIDIC Red Book}/~^ to over a hundred pages of intricate flow charts (decision trees), printed horizontally on roughly A4-sized sheets. He was employed by a Norwegian construction firm, which insisted that, based on past experience, he could, using his charts, consistently arrive in a day at answers to their questions that law firms took weeks to produce. Flow charts can be used to show interrelationships and dependencies, in order to navigate the implications of a set of rules more quickly. They may also be used more proactively (and /{ex ante}/ rather than /{ex post}/) in formulating texts, to avoid unnecessary complexity and to arrive at more practical, efficient and elegant solutions.
+
+^~ FIDIC is the International Federation of Consulting Engineers http://www.fidic.com/
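
The flow-chart approach can be hinted at with a minimal sketch: a fragment of contract rules encoded as a decision tree in Python, so the implications of a rule set can be traversed mechanically. The rules, field names and time limits below are invented purely for illustration and are taken from no actual instrument.

```python
# Hypothetical sketch: navigating a rule set as a decision tree.
# Each branch corresponds to one decision box on a flow chart.

def notice_of_defect_valid(case: dict) -> str:
    """Walk a (fictitious) chain of notice-of-defect rules."""
    # Branch 1: was any notice given at all?
    if not case["notice_given"]:
        return "claim barred: no notice of defect"
    # Branch 2: was notice given within the contractual period?
    if case["days_to_notice"] > case["notice_period_days"]:
        return "claim barred: notice out of time"
    # Branch 3: did the notice specify the nature of the defect?
    if not case["defect_specified"]:
        return "notice defective: nature of defect not specified"
    return "notice valid: claim may proceed"

example = {"notice_given": True, "days_to_notice": 10,
           "notice_period_days": 14, "defect_specified": True}
print(notice_of_defect_valid(example))
```

Once a rule set is captured in this form, answering a question is a matter of following the branches, which is why the engineer's charts could be worked through in a day.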
+
+Explore such concepts as "fuzzy thinking"~^ including fuzzy logic, fuzzy set theory, and fuzzy systems modelling, of which classical logic and set theory are subsets. Both by way of analogy and as a tool, fuzzy concepts are better at coping with complexity, and map more closely to judicial thinking and argument in the application of principles and rules. Fuzzy theory provides a method for analysing and modelling principle- and rule-based systems, even where conflicting principles may apply, permitting /{inter alia}/ working with competing principles and the contextual assignment of precision to terms such as "reasonableness". Fuzzy concepts should be explored in expert systems, and in future law. Problems of scaling associated with multiple decision trees do not prevent useful applications and structured solutions. The analysis assists in discerning what it is that lawyers are actually involved with.
+
+^~ Concept originally developed by Lotfi Zadeh, /{Fuzzy Sets}/, Information and Control 8 (1965) pp 338-353. For introductions see Daniel McNeill and Paul Freiberger /{Fuzzy Logic: The Revolutionary Computer Technology that is Changing our World}/ (1993); Bart Kosko /{Fuzzy Thinking}/ (1993); Earl Cox /{The Fuzzy Systems Handbook}/ (New York, 2nd ed. 1999). Perhaps to the uninitiated an unfortunate choice of name, as fuzzy logic and fuzzy set theory are more precise than classical logic and set theory, which comprise a subset of that which is fuzzy (representing those instances where membership is 0% or 100%). The statement is not entirely without controversy, in suggesting the possibility that classical thinking may be subsumed within the realms of an unfamiliar conceptual paradigm that is to take hold of future thinking. In the engineering field much pioneering work on fuzzy rule-based systems was done at Queen Mary College by Ebrahim Mamdani in the early and mid-1970s. Time will tell.
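
A minimal sketch in Python may make the point concrete: fuzzy membership takes any value between 0 and 1, classical (crisp) membership only the two extremes, and Zadeh's min/max operators reduce to classical AND/OR when memberships are restricted to {0, 1}. The "reasonable delay" membership function and its thresholds are invented purely for illustration.

```python
# Illustrative sketch of fuzzy vs. classical set membership,
# with Zadeh's operators (AND = min, OR = max).

def reasonable_delay(days: float) -> float:
    """Fuzzy membership: degree to which a delivery delay is 'reasonable'.
    Hypothetical thresholds: fully reasonable up to 7 days, fully
    unreasonable beyond 30, graded linearly in between."""
    if days <= 7:
        return 1.0
    if days >= 30:
        return 0.0
    return (30 - days) / 23  # linear ramp between 7 and 30 days

def classical_delay(days: float) -> float:
    """Classical (crisp) set: membership is only ever 0 or 1."""
    return 1.0 if days <= 14 else 0.0

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

print(reasonable_delay(14))   # partial membership, between 0 and 1
print(classical_delay(14))    # crisp membership: exactly 1.0
```

The crisp function is the special case of the fuzzy one in which the ramp is collapsed to a single cut-off; "reasonableness" is given contextual precision by the choice of membership function rather than by a binary rule.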
+
+"Intelligent" electronic agents can be expected to gather information on behalf of both the business community and lawyers. In future, electronic agents are likely to be employed to identify and bring to the attention of their principals "invitations to treat" or offers worthy of further investigation. In some cases they will be developed and relied upon as electronic legal agents, operating under a programmed mandate and vested with the authority to enter certain contracts on behalf of their principals. Such a mandate would include the choice of law upon which to contract, and the scenario could be assisted by transnational contract solutions (and catered for in the design of "future law").
+
+Another area in which technology is helping to solve legal problems relates to various types of global register and transaction centre. Property registers, including those for patents and moveable property, are an obvious example. Bolero provides an example of how electronic documents can be centrally brokered on behalf of trading parties.
+
+Primary law should be available free on the Net, and this applies also to "IoL" and the static material required for their interpretation. This should be the policy adopted by all institutions involved in contributing to the transnational legal infrastructure. Where possible, larger databases should also be developed and shared. The Net has reduced the cost of disseminating material to a small fraction of its former level. Universities now can and should play a more active role. Suitable funding arrangements should be explored that do not result in proprietary systems or the forwarding of specific lobby interests. To promote uniform standards, institutions should also strive to make their materials available in hard copy at a reasonable price. Many appear unacceptably expensive, given the need for their promotion and for capacity building amongst students and across diverse States.
+
+Follow the open standards and community standards debate in relation to the development of technology standards and technology infrastructure tools - including operating systems~^ - to discover what, if anything, it might suggest for the future development of law standards.
+
+^~ See for example /{Open Sources : Voices from the Open Source Revolution - The Open Source Story}/ http://www.oreilly.com/catalog/opensources/book/toc.html
+
+1~ As an aside, a word of caution
+
+I end with an arguably gratuitous observation, by way of a reminder and general warning. Gratuitous in the context of this paper because the areas focused upon~^ were somewhat deliberately selected to fall outside the more contentious and "politically" problematic areas related to globalisation, economics, technology, law and politics.~^ Gratuitous also because there will be no attempt to concretise or exemplify the possibility suggested.
+
+^~ Sale of goods (/{CISG}/), contract rules and principles (/{PICC}/), related Arbitration, and the promotion of certain egalitarian ideals.
+
+^~ It is not as evident in the area of private international commercial contract law, the chosen focus of this paper, but it appears repeatedly in relation to other areas and issues arising out of the economics, technology, law nexus.
+
+Fortunately, we are not (necessarily) talking about a zero-sum game; however, it is necessary to be able to distinguish and recognise that which may harm. International commerce/trade is competitive, and by its nature not benign, even if it results in an overall improvement in the economic lot of the peoples of our planet. "Neutral tests" such as Pareto efficiency do not require that your interests are benefited one iota, just that whilst those of others are improved, yours are not made worse. If the measure adopted is overall benefit, as under Kaldor-Hicks efficiency, an overall gain may result even where your interests are adversely affected; the more so if you have little, and those that gain, gain much. Furthermore such "tests" are based on assumptions which at best are approximations of reality (e.g. that of zero transaction costs, where in fact transaction costs are not zero, and are frequently proportionately higher for the economically weak). At worst they may be manipulated /{ex ante}/ with knowledge of their implications (e.g. engineering to ensure actual or relative~^ asymmetrical transaction costs). It is important to be careful in a wide range of circumstances related to various aspects of the modelling of the infrastructure for international commerce that have an impact on the allocation of rights and obligations, and especially the allocation of resources, including various types of intellectual property rights. Ask: what is the objective and justification for the protection? How well is the objective met? Are there other consequential effects? Are there other objectives that are worthy of protection? Could the stated objective(s) be achieved in a better way?
+
+^~ Low fixed costs have a "regressive" effect
+
+Within a system are those who benefit from the way it has been, and who may oppose change as resulting in loss to them, or in uncertainty as to their continued privilege. For a stable system that favours such a Select Set to arise initially does not require the conscious manipulation of conditions by the Select Set; rather, it requires only that from the system (set) in place the Select Set emerges as beneficiary. Subsequently the Select Set, having become established as favoured and empowered by its status as beneficiary, will seek to do what it can to influence circumstances to ensure its continued beneficial status. That is, to keep the system operating to its advantage (or tune it to work even better towards this end), usually with little regard to the conditions resulting for other members of the system. Often this will be a question of degree, and the original purpose, or an alternative "neutral" argument, is likely to be used to justify the arrangement. The objective from the perspective of the Select Set is fixed; the means at its disposal may vary. Complexity is not required for such situations to arise, but once they have, subsequent plays by the Select Set tend towards complexity. Furthermore, moves in the interest of the Select Set are more easily obscured or disguised in a complex system. Limited access to information and knowledge is a devastating handicap: without them, change cannot be contemplated, let alone negotiated; and frequently, having information and knowledge is not enough. The protection of self-interest is an endemic part of our system, with the system repeatedly being co-opted to the purposes of those that are able to manipulate it. Membership over time is not static: yesterday's "copycat nations", for example, are today's innovators, keen to protect their intellectual property. This also illustrates the point that what it may take to set success in motion may not be the same as that which is preferred to sustain it.
Whether or not these observations appear self-evident, or abstract and out of place with regard to this paper, they have far-reaching implications repeatedly observable within the law, technology, and commerce (politics) nexus. Even if they do not arise much in the context of the material selected for this paper, their mention is justified by way of warning. Suitable examples would easily illustrate how politics arises inescapably as an emergent property from the nexus of commerce, technology, and law.~^
+
+^~ In such circumstances either economics or law on their own would be sufficient to result in politics arising as an emergent property.
+
+1~note Note
+
+* Ralph Amissah is a Fellow of Pace University, Institute for International Commercial Law. http://www.cisg.law.pace.edu/ <br>RA lectured on the private law aspects of international trade whilst at the Law Faculty of the University of Tromsø, Norway. http://www.jus.uit.no/ <br>RA built the first web site related to international trade law, now known as lexmercatoria.org and described as "an (international | transnational) commercial law and e-commerce infrastructure monitor". http://lexmercatoria.org/ <br>RA is interested in the law, technology, commerce nexus. RA works with the law firm Amissahs.<br>/{[This is a draft document and subject to change.]}/ <br>All errors are very much my own.<br> ralph@amissah.com
+
+%% SiSU markup sample Notes:
+% SiSU http://www.jus.uio.no/sisu
+% SiSU markup for 0.16 and later:
+% 0.20.4 header 0~links
+% 0.22 may drop image dimensions (rmagick)
+% 0.23 utf-8 ß
+% 0.38 or later, may use alternative notation for headers, e.g. @title: (instead of 0~title)
+% 0.38 document structure alternative markup, experimental (rad) A,B,C,1,2,3 maps to 1,2,3,4,5,6
+% Output: http://www.jus.uio.no/sisu/autonomy_markup2/sisu_manifest.html
+% SiSU 0.38 experimental (alternative structure) markup used for this document
+% note endnotes follow paragraphs, compare with sample autonomy_markup1.sst
diff --git a/data/sisu_markup_samples/non-free/autonomy_markup3.sst b/data/sisu_markup_samples/non-free/autonomy_markup3.sst
new file mode 100644
index 0000000..8bf5e5b
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/autonomy_markup3.sst
@@ -0,0 +1,202 @@
+% SiSU 0.16
+
+0~title Revisiting the Autonomous Contract
+
+0~subtitle Transnational contracting, trends and supportive structures
+
+0~creator Ralph Amissah*
+
+0~type article
+
+0~subject international contracts, international commercial arbitration, private international law
+
+0~date 2000-08-27
+
+0~level num_top=4;
+
+0~italics /CISG|PICC|PECL|UNCITRAL|UNIDROIT|WTO|ICC|WIPO|ScIL|IoL|lex mercatoria|pacta sunt servanda|caveat subscriptor|ex aequo et bono|amiable compositeur|ad hoc/i
+
+0~links {Markup}http://www.jus.uio.no/sisu/sample/markup/autonomy_markup0.sst
+{Syntax}http://www.jus.uio.no/sisu/sample/syntax/autonomy_markup0.sst.html
+{The Autonomous Contract}http://www.jus.uio.no/lm/the.autonomous.contract.07.10.1997.amissah/toc.html
+{Contract Principles}http://www.jus.uio.no/lm/private.international.commercial.law/contract.principles.html
+{UNIDROIT Principles}http://www.jus.uio.no/lm/unidroit.international.commercial.contracts.principles.1994.commented/toc.html
+{Sales}http://www.jus.uio.no/lm/private.international.commercial.law/sale.of.goods.html
+{CISG}http://www.jus.uio.no/lm/un.contracts.international.sale.of.goods.convention.1980/doc.html
+{Arbitration}http://www.jus.uio.no/lm/arbitration/toc.html
+{Electronic Commerce}http://www.jus.uio.no/lm/electronic.commerce/toc.html
+
+0~rcs+ $Id: autonomy_markup3.sst,v 1.1 2006/04/15 17:17:39 ralph Exp ralph $
+
+1~ Revisiting the Autonomous Contract <sub>(Draft 0.90 - 2000.08.27 ;)</sub>
+
+2~ Transnational contract "law", trends and supportive structures
+
+3~ \copyright Ralph Amissah*
+
+4~ Reinforcing trends: borderless technologies, global economy, transnational legal solutions?
+
+Revisiting the Autonomous Contract~{ /{The Autonomous Contract: Reflecting the borderless electronic-commercial environment in contracting}/ was published in /{Elektronisk handel - rettslige aspekter, Nordisk årsbok i rettsinformatikk 1997}/ (Electronic Commerce - Legal Aspects. The Nordic yearbook for Legal Informatics 1997) Edited by Randi Punsvik, or at http://www.jus.uio.no/the.autonomous.contract.07.10.1997.amissah/doc.html }~
+
+Globalisation is to be observed as a trend intrinsic to the world economy.~{ As Maria Cattaui Livanos suggests in /{The global economy - an opportunity to be seized}/ in /{Business World}/ the Electronic magazine of the International Chamber of Commerce (Paris, July 1997) at http://www.iccwbo.org/html/globalec.htm <br> "Globalization is unstoppable. Even though it may be only in its early stages, it is already intrinsic to the world economy. We have to live with it, recognize its advantages and learn to manage it.<br>That imperative applies to governments, who would be unwise to attempt to stem the tide for reasons of political expediency. It also goes for companies of all sizes, who must now compete on global markets and learn to adjust their strategies accordingly, seizing the opportunities that globalization offers."}~ Rudimentary economics explains this runaway process as being driven by competition within the business community to achieve efficient production, and to reach and extend available markets.~{ Being in competition, the business community is compelled to take advantage of the opportunities provided by globalisation in order to remain successful.}~ Technological advancement, particularly in transport and communications, has historically played a fundamental role in the furtherance of international commerce; the Net, technology's latest spatio-temporally transforming offering and linchpin of the "new economy", extends exponentially the global reach of the business community. The Net covers much of the essence of international commerce, providing an instantaneous, low-cost, convergent, global and borderless information centre, marketplace and channel for communications, payments and the delivery of services and intellectual property. The sale of goods, however, involves the separate element of their physical delivery. The Net has raised a plethora of questions and has frequently offered solutions. 
The increased transparency of borders arising from the Net's ubiquitous nature results in an increased demand for transparency of operation. As economic activities become increasingly global, there is a strong incentive, in order to reduce transaction costs, for the "law" that provides for them to do so in a similar dimension. The appeal of transnational legal solutions lies in the potential reduction in complexity, more widely dispersed expertise, and resulting increased transaction efficiency. The Net reflexively offers possibilities for the development of transnational legal solutions, having in a similar vein transformed the possibilities for the promulgation of texts, the sharing of ideas and collaborative ventures. There are, however, likely to be tensions between the legal community's protection of entrenched practices against that which is new (in both law and technology), and the business community's goal of reducing transaction costs.
+
+Within commercial law an analysis of law and economics may assist in developing a better understanding of the relationship between commercial law and the commercial sector it serves.~{ Realists would contend that law is contextual and best understood by exploring the interrelationships between law and the other social sciences, such as sociology, psychology, political science, and economics.}~ "...[T]he importance of the interrelations between law and economics can be seen in the twin facts that legal change is often a function of economic ideas and conditions, which necessitate and/or generate demands for legal change, and that economic change is often governed by legal change."~{ Part of a section cited in Nicholas Mercuro and Steven G. Medema, /{Economics and the Law: from Posner to Post-Modernism}/ (Princeton, 1997) p. 11, with reference to Karl N. Llewellyn /{The Effect of Legal Institutions upon Economics}/, American Economic Review 15 (December 1925) pp 655-683, Mark M. Litchman /{Economics, the Basis of Law}/, American Law Review 61 (May-June 1927) pp 357-387, and W. S. Holdsworth /{A Neglected Aspect of the Relations between Economic and Legal History}/, Economic History Review 1 (January 1927-1928) pp 114-123.}~ In doing so, however, it is important to be aware that there are several competing schools of law and economics, with different perspectives, levels of abstraction, and analytical consequences of and for the world that they model.~{ For a good introduction see Nicholas Mercuro and Steven G. Medema, /{Economics and the Law: from Posner to Post-Modernism}/ (Princeton, 1997). These include: Chicago law and economics (New law and economics); New Haven School of law and economics; Public Choice Theory; Institutional law and economics; Neoinstitutional law and economics; Critical Legal Studies.}~
+
+Where there is rapid interrelated structural change with resulting new features, understanding the underlying currents and concepts at their intersections, rather than concentrating on the traditionally established tectonic plates of a discipline or on expositions of history,~{ Case overstated, but this is an essential point. It is not helpful to be overly tied to the past. It is necessary to be able to look ahead and explore new solutions, and be aware of the implications of "complexity" (as to the relevance of past circumstances to the present). }~ is the key to commencing meaningful discussions and developing solutions for the resulting issues.~{ The majority of which are beyond the scope of this paper. Examples include: encryption and privacy for commercial purposes; digital signatures; symbolic ownership; electronic intellectual property rights.}~ Interrelated developments are more meaningfully understood through interdisciplinary study, as this instance suggests, of the law, commerce/economics, and technology nexus. In advocating this approach, we should also pay heed to the realisation in the sciences of the limits of reductionism in the study of complex systems, as such systems feature emergent properties that are not evident if they are broken down into their constituent parts. System complexity exceeds sub-system complexity; consequently, the relevant unit for understanding a system's function is the system, not its parts.~{ Complexity theory is a branch of mathematics and physics that examines non-linear systems in which simple sets of deterministic rules can lead to highly complicated results, which cannot be predicted accurately. A study of the subject is provided by Nicholas Rescher /{Complexity: A Philosophical Overview}/ (New Brunswick, 1998). See also Jack Cohen and Ian Stewart, /{The Collapse of Chaos: Discovering Simplicity in a Complex World}/ (1994). }~ Simplistic dogma should be abandoned for a contextual approach.
+
+4~ Common Property - advocating a common commercial highway
+
+Certain infrastructural underpinnings beneficial to the working of the market economy are not best provided by the business community, but by other actors including governments. In this paper mention is made for example of the /{United Nations Convention on the Recognition and Enforcement of Foreign Arbitral Awards}/ (New York, 10 June 1958), which the business community regularly relies upon as the back-stop for their international agreements. Common property can have an enabling value: the Net, basis of the "new" economy, would not be what it is today without much that has been shared on this basis, permitting /{"Metcalfe's law"}/~{ Robert Metcalfe, founder of 3Com. }~ to take hold. /{Metcalfe's law}/ suggests that the value of a shared technology grows with the square of its user base. In all likelihood it applies as much to transnational contract law as to technological networks and standards. The more people who use a network or standard, the more "valuable" it becomes, and the more users it will attract. Key infrastructure should be identified, and common property solutions where appropriate nurtured, keeping transaction costs to a minimum.
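
The intuition behind Metcalfe's law can be shown with a minimal arithmetic sketch in Python, counting the distinct pairwise connections an n-user network or standard makes possible; the figures are pure arithmetic, not drawn from any study of actual networks.

```python
# Illustrative sketch: under Metcalfe's law the value of a network or
# shared standard scales with the number of possible pairwise links,
# n*(n-1)/2, i.e. roughly with the square of the user base.

def pairwise_links(n: int) -> int:
    """Number of distinct pairwise connections among n users."""
    return n * (n - 1) // 2

for users in (10, 100, 1000):
    # Each tenfold growth in users yields roughly a hundredfold
    # growth in potential connections.
    print(users, pairwise_links(users))
```

The same arithmetic is the basis of the claim in the text: each additional adopter of a shared legal standard adds more potential counterparty relationships than the last.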
+
+The following general perspective is submitted as worthy of consideration (and support) by the legal, business and academic communities, and governments. *(a)* Abstract goals valuable to a transnational legal infrastructure include certainty and predictability, flexibility, simplicity where possible, and neutrality, in the sense of being without perceived "unfairness" in the global context of their application. This covers the content of the "laws" themselves and the methods used for their interpretation. *(b)* Of law with regard to technology, "rules should be technology-neutral (i.e., the rules should neither require nor assume a particular technology) and forward looking (i.e., the rules should not hinder the use or development of technologies in the future)."~{ /{US Framework for Global Electronic Commerce}/ (1997) http://www.whitehouse.gov/WH/New/Commerce/ }~ *(c)* Desirable abstract goals in developing technological standards and critical technological infrastructure include choice, and that they should be shared and public or "open" as in "open source", and platform- and/or program-neutral, that is, interoperable. (On security, to forestall suggestions to the contrary, popular open source software tends to be as secure as, or more secure than, proprietary software.) *(d)* Encryption is an essential part of the mature "new" economy but remains the subject of some governments' restriction.~{ The EU is lifting such restriction, and the US seems likely to follow suit. }~ The availability of (and possibility to develop common transnational standards for) strong encryption is essential for commercial security and trust with regard to all manner of Net communications and electronic commerce transactions, /{vis-à-vis}/ their confidentiality, integrity, authentication, and non-repudiation. 
That is, encryption is the basis for essential commerce-related technologies, including amongst many others electronic signatures, electronic payment systems and the development of electronic symbols of ownership (such as electronic bills of lading). *(e)* As regards the dissemination of primary materials concerning "uniform standards" in both the legal and technology domains, "the Net" should be used to make them globally available, free. Technology should be similarly used where possible to promote the goals outlined under point (a). Naturally, as a tempered supporter of the market economy,~{ Caveats extending beyond the purview of this paper. It is necessary to be aware that there are other overriding interests, global and domestic, that the market economy is ill suited to providing for, such as the environment, and possibly key public utilities that require long term planning and high investment. It is also necessary to continue to be vigilant against that which, even if arising as a natural consequence of the market economy, has the potential to disturb or destroy its function, such as monopolies.}~ I do not extend these reservations to proprietary secondary materials and technologies. Similarly, actors of the market economy would take advantage of the common property base of the commercial highway.
+
+4~ Modelling the private international commercial law infrastructure
+
+Apart from the study of "laws" or the existing legal infrastructure, there are a multitude of players involved in their creation, whose efforts may be regarded as being in the nature of systems modelling. Of interest to this paper is the subset of activity of a few organisations that provide the underpinnings for the foundation of a successful transnational contract/sales law. These are not amongst the more controversial legal infrastructure modelling activities, and play a small but significant part in simplifying international commerce and trade.~{ Look for instance at national customs procedures, and consumer protection.}~
+
+Briefly viewing the wider picture, several institutions are involved as independent actors in systems modelling of the transnational legal infrastructure. Their roles and mandates and the issues they address are conceptually different. These include certain United Nations organs and affiliates such as the United Nations Commission on International Trade Law (UNCITRAL),~{ http://www.uncitral.org/ }~ the World Intellectual Property Organisation (WIPO)~{ http://www.wipo.org/ }~ and recently the World Trade Organisation (WTO),~{ http://www.wto.org/ }~ along with other institutions such as the International Institute for the Unification of Private Law (UNIDROIT),~{ http://www.unidroit.org/ }~ the International Chamber of Commerce (ICC),~{ http://www.iccwbo.org/ }~ and the Hague Conference on Private International Law.~{ http://www.hcch.net/ }~ They identify areas that would benefit from an international or transnational regime and use various tools at their disposal, (including: treaties; model laws; conventions; rules and/or principles; standard contracts), to develop legislative "solutions" that they hope will be subscribed to.
+
+A host of other institutions are involved in providing regional solutions.~{ such as ASEAN http://www.aseansec.org/ the European Union (EU) http://europa.eu.int/ MERCOSUR http://embassy.org/uruguay/econ/mercosur/ and North American Free Trade Agreement (NAFTA) http://www.nafta-sec-alena.org/english/nafta/ }~ Specialised areas are also addressed by appropriately specialised institutions.~{ e.g. large international banks; or in the legal community, the Business Section of the International Bar Association (IBA) with its membership of lawyers in over 180 countries. http://www.ibanet.org/ }~ A result of globalisation is increased competition (also) amongst States, which are active players in the process, identifying and addressing the needs of their business communities over a wide range of areas and managing the suitability to the global economy of their domestic legal, economic, technological and educational~{ For a somewhat frightening peek and illuminating discussion of the role of education in the global economy as implemented by a number of successful States see Joel Spring, /{Education and the Rise of the Global Economy}/ (Mahwah, NJ, 1998). }~ infrastructures. The role of States remains to identify what domestic structural support they must provide to be integrated and competitive in the global economy.
+
+In addition to "traditional" contributors, the technology/commerce/law confluence provides new challenges and opportunities, allowing the emergence of important new players within the commercial field, such as Bolero,~{ http://www.bolero.org/ also http://www.boleroassociation.org/ }~ which, with the backing of international banks and ship-owners, offers electronic replacements for traditional paper transactions, acting as transaction agent for the electronic substitute on behalf of the trading parties. The acceptance of the possibility of applying an institutionally offered lex has opened the door further for other actors, including ad hoc groupings of the business community and/or universities, to find ways to be engaged and actively participate in providing services for themselves and/or others in this domain.
+
+4~ The foundation for transnational private contract law, arbitration
+
+The market economy drive perpetuating economic globalisation is also active in the development and choice of transnational legal solutions. The potential reward: international sets of contract rules and principles that can be counted on to be consistent, and to provide a uniform layer of insulation (with minimal reference back to State law) when applied across the landscape of a multitude of different municipal legal systems. The business community is free to utilise them if available, and if not, to develop them, or seek to have them developed.
+
+The kernel for the development of a transnational legal infrastructure governing the rights and obligations of private contracting individuals was put in place as far back as 1958 by the /{UN Convention on the Recognition and Enforcement of Foreign Arbitral Awards}/ (/{"NY Convention on ICA"}/),~{ at http://www.jus.uio.no/lm/un.arbitration.recognition.and.enforcement.convention.new.york.1958/ }~ now in force in over a hundred States. Together with freedom of contract, the /{NY Convention on ICA}/ made it possible for commercial parties to develop and be governed by their own /{lex}/ in their contractual affairs, should they wish to do so, and guaranteed that provided their agreement was based on international commercial arbitration (/{"ICA"}/), (and not against relevant mandatory law) it would be enforced in all contracting States. This has been given further support by various more recent arbitration rules and the /{UNCITRAL Model Law on International Commercial Arbitration 1985}/,~{ at http://www.jus.uio.no/lm/un.arbitration.model.law.1985/ }~ which now explicitly state that rule based solutions independent of national law can be applied in /{"ICA"}/.~{ Lando, /{Each Contracting Party Must Act In Accordance with Good Faith and Fair Dealing}/ in /{Festskrift til Jan Ramberg}/ (Stockholm, 1997) p. 575. See also UNIDROIT Principles, Preamble 4 a. Also Arthur Hartkamp, The Use of UNIDROIT Principles of International Commercial Contracts by National and Supranational Courts (1995) in UNIDROIT Principles: A New Lex Mercatoria?, pp. 253-260 on p. 255. But see Goode, /{A New International Lex Mercatoria?}/ in /{Juridisk Tidskrift}/ (1999-2000 nr 2) p. 256 and 259. }~
+
+/{"ICA"}/ is recognised as the most prevalent means of dispute resolution in international commerce. Unlike litigation /{"ICA"}/ survives on its merits as a commercial service to provide for the needs of the business community.~{ /{"ICA"}/ being shaped by market forces and competition adheres more closely to the rules of the market economy, responding to its needs and catering for them more adequately. }~ It has consequently been more dynamic than national judiciaries, in adjusting to the changing requirements of businessmen. Its institutions are quicker to adapt and innovate, including the ability to cater for transnational contracts. /{"ICA"}/, in taking its mandate from and giving effect to the will of the parties, provides them with greater flexibility and frees them from many of the limitations of municipal law.~{ As examples of this, it seeks to give effect to the parties' agreement upon: the lex mercatoria as the law of the contract; the number of, and persons to be "adjudicators"; the language of proceedings; the procedural rules to be used, and; as to the finality of the decision. }~
+
+In sum, a transnational/non-national regulatory order governing the contractual rights and obligations of private individuals is made possible by: *(a)* States' acceptance of freedom of contract (public policy excepted); *(b)* Sanctity of contract embodied in the principle pacta sunt servanda; *(c)* Written contractual selection of dispute resolution by international commercial arbitration, whether ad hoc or institutional, usually under internationally accepted arbitration rules; *(d)* Guaranteed enforcement, arbitration where necessary borrowing the State apparatus for law enforcement through the /{NY Convention on ICA}/, which has secured for /{"ICA"}/ a recognition and enforcement regime unparalleled by municipal courts in well over a hundred contracting States; *(e)* Transnational effect or non-nationality being achievable through /{"ICA"}/ accepting the parties' ability to select the basis upon which the dispute would be resolved outside municipal law, such as through the selection of general principles of law or lex mercatoria, or calling upon the arbitrators to act as amiable compositeur or ex aequo et bono.
+
+This framework provided by /{"ICA"}/ opened the door for the modelling of effective transnational law default rules and principles for contracts independent of State participation (in their development, application, or choice of law foundation). Today we have greater certainty of content and better control over the desired degree of transnational effect or non-nationality, given the availability of comprehensive insulating rules and principles such as the PICC or /{Principles of European Contract Law}/ (/{"European Principles"}/ or /{"PECL"}/) that may be chosen either together with, or to the exclusion of, a choice of municipal law as governing the contract. For electronic commerce a similar path is hypothetically possible.
+
+4~ "State contracted international law" and/or "institutionally offered lex"? CISG and PICC as examples
+
+An institutionally offered lex ("IoL", uniform rules and principles) appears to have a number of advantages over "State contracted international law" ("ScIL", model laws, treaties and conventions for enactment). The development and formulation of both "ScIL" and "IoL" takes time, the CISG representing a half century of effort~{ /{UNCITRAL Convention on Contracts for the International Sale of Goods 1980}/ see at http://www.jus.uio.no/lm/un.contracts.international.sale.of.goods.convention.1980/ <br>The CISG may be regarded as the culmination of an effort in the field dating back to Ernst Rabel, (/{Das Recht des Warenkaufs}/ Bd. I&II (Berlin, 1936-1958). Two volume study on sales law.) followed by the Cornell Project, (Cornell Project on Formation of Contracts 1968 - Rudolf Schlesinger, Formation of Contracts. A study of the Common Core of Legal Systems, 2 vols. (New York, London 1968)) and connected most directly to the UNIDROIT inspired /{Uniform Law for International Sales}/ (ULIS at http://www.jus.uio.no/lm/unidroit.ulis.convention.1964/ and ULF at http://www.jus.uio.no/lm/unidroit.ulf.convention.1964/ ), the main preparatory works behind the CISG (/{Uniform Law on the Formation of Contracts for the International Sale of Goods}/ (ULF) and the /{Convention relating to a Uniform Law on the International Sale of Goods}/ (ULIS) The Hague, 1964.). }~ and the PICC twenty years.~{ /{UNIDROIT Principles of International Commercial Contracts}/ commonly referred to as the /{UNIDROIT Principles}/ and within this paper as PICC see at http://www.jus.uio.no/lm/unidroit.contract.principles.1994/ and http://www.jus.uio.no/lm/unidroit.international.commercial.contracts.principles.1994.commented/ <br>The first edition of the PICC was finalised in 1994, 23 years after their first conception, and 14 years after work started on them in earnest.
}~ The CISG by UNCITRAL represents the greatest success for the unification of an area of substantive commercial contract law to date, being currently applied by 57 States,~{ As of February 2000. }~ estimated as representing close to seventy percent of world trade and including every major trading nation of the world apart from England and Japan. To labour the point, the USA, most of the EU (along with Canada, Australia and Russia) and China, ahead of its entry to the WTO, already share the same law in relation to the international sale of goods. "ScIL", however, has additional hurdles to overcome. *(a)* In order to enter into force and become applicable, it must go through the lengthy process of ratification and accession by States. *(b)* Implementation is frequently subject to various reservations. *(c)* Even where a text is widely used, there are usually as many or more States that are exceptions. Success, which is by no means guaranteed, takes time, and for every uniform law that is a success there are several failures.
+
+Institutionally offered lex ("IoL"), comprehensive general contract principles or contract law restatements that create an entire "legal" environment for contracting, has the advantage of being instantly available, becoming effective by choice of the contracting parties at the stroke of a pen. "IoL" is also more easily developed subsequently, in light of experience and need. Amongst the reasons for their use is the reduction of transaction cost in their provision of a set of default rules, applicable transnationally, that satisfy risk management criteria, being (or becoming) known, tried and tested, and of predictable effect.~{ "[P]arties often want to close contracts quickly, rather than hold up the transaction to negotiate solutions for every problem that might arise." Honnold (1992) on p. 13. }~ The most resoundingly successful "IoL" example to date has been the ICC's /{Uniform Customs and Practices for Documentary Credits}/, which is subscribed to as the default rules for the letters of credit offered by the vast majority of banks in the vast majority of countries of the world. Furthermore, uniform principles allow unification on matters that at the present stage of national and regional pluralism could not be achieved at a treaty level. There are, however, things that only "ScIL" can "engineer" (for example, that which relates to priorities and third party obligations).
+
+*{PICC:}* The arrival of PICC in 1994 was particularly timely, coinciding as it did with the successful attempt at reducing trade barriers represented by the /{World Trade Agreement,}/~{ http://www.jus.uio.no/lm/wta.1994/ }~ and with the start of general Internet use,~{ See Amissah, /{On the Net and the Liberation of Information that wants to be Free}/ in ed. Jens Edvin A. Skoghoy /{Fra institutt til fakultet, Jubileumsskrift i anledning av at IRV ved Universitetet i Tromsø feirer 10 år og er blitt til Det juridiske fakultet}/ (Tromsø, 1996) pp. 59-76 or the same at http://www.jus.uio.no/lm/on.the.net.and.information.22.02.1997.amissah/ }~ which allowed for the exponential growth of electronic commerce and further underscored the transnational tendency of commerce. The arrival of PICC was all the more opportune bearing in mind the years it takes to prepare such an instrument. Whilst there have been some objections, the PICC (and PECL) as contract law restatements cater to the needs of the business community that seeks a non-national or transnational law as the basis of its contracts, and provide a focal point for future development in this direction. Where in the past the business community would have been forced to rely on the ethereal and nebulous lex mercatoria, it is now provided with the opportunity to make use of such a "law" that is readily accessible, has a clear and reasonably well defined content, will become familiar, and can be further developed as required. As such the PICC allow for more universal and uniform solutions. Their future success will depend on such factors as: *(a)* Suitability of their contract terms to the needs of the business community. *(b)* Their becoming widely known and understood. *(c)* Their predictability evidenced by a reasonable degree of consistency in the results of their application. *(d)* Recognition of their potential to reduce transaction costs.
*(e)* Recognition of their being neutral as between different nations' interests (East, West; North, South). In the international sale of goods the PICC can be used in conjunction with more specific rules and regulations, including (on the parties' election~{ Also consider present and future possibilities for such use of PICC under CISG articles 8 and 9. }~) the CISG, with the PICC filling gaps in the CISG's provisions.~{ Drobnig, id. p. 228, comment that the CISG precludes recourse to general principles of contract law in Article 7. This does not refer to the situation where parties determine that the PICC should do so, see CISG Article 6. Or that in future the PICC will not be of importance under CISG Articles 8 and 9. }~ Provisions of the CISG would be given precedence over the PICC under the accepted principle of /{specialia generalibus derogant}/,~{ "Special principles have precedence over general ones." See Huet, Synthesis (1995) p. 277. }~ the mandatory content of the PICC excepted. The CISG leaves many situations unprovided for, or provided for in less detail than in the PICC.
+
+Work on the PICC and PECL, under the chairmanship of Professors Bonell and Ole Lando respectively, was wisely cross-pollinated (conceptually and through cross-membership of preparatory committees), as common foundations strengthen both sets of principles. A couple of points should be noted. Firstly, despite the maintained desirability of a transnational solution, this does not exclude the desirability of regional solutions, especially if there is choice, and the regional solutions are more comprehensive and easier to keep uniform in application. Secondly, the European Union has powers and influence (within the EU) unparalleled by UNIDROIT that can be utilised in future with regard to the PECL if the desirability of a common European contract solution is recognised and agreed upon by EU member States. As a further observation, there is, hypothetically at least, nothing to prevent the future development of an alternative extensive (competing) transnational contract /{lex}/ solution, though the weighty effort already in place represented by the PICC, and the high investment in time and independent skilled legal minds necessary to achieve this in a widely acceptable manner, make such a development unlikely. It may, however, be the case that for electronic commerce some other particularly suitable rules and principles will in time be developed in a similar vein, along the lines of an "IoL".
+
+4~ Contract /{Lex}/ design. Questions of commonweal
+
+The virtues of freedom of contract are acknowledged in this paper in that they allow the international business community to structure business relationships to suit its requirements, and as such reflect the needs and working of the market economy. However, it is instructive also to explore the limits of the principles: freedom of contract, pacta sunt servanda and caveat subscriptor. These principles are based on free market arguments that parties best understand their own interests, and that the contract they arrive at will be an optimum compromise between their competing interests, it not being for an outsider to regulate or evaluate what a party of its own free will and volition has gained from electing to contract on those terms. This approach to contract is adversarial, based on the conflicting wills of the parties achieving a meeting of minds. It imposes no duty of good faith and fair dealing or of loyalty (including the disclosure of material facts) upon the contracting parties, who are to protect their own interests. However, in international commerce this demand can be more costly, and may have a negative and restrictive effect. Also, although this model is claimed to be neutral in making no judgement as to the contents of a contract, the claim can be misleading.
+
+5~ The neutrality of contract law and information cost
+
+The information problem is a general one that needs to be recognised in its various forms where it arises and addressed where possible.
+
+Adherents to the caveat subscriptor model point to the fact that parties have conflicting interests, and should look out for their own interests. However, information presents particular problems, which are exacerbated in international commerce.~{ The more straightforward cases of various types of misrepresentation apart. }~ As Michael Trebilcock put it: "Even the most committed proponents of free markets and freedom of contract recognise that certain information preconditions must be met for a given exchange to possess Pareto superior qualities."~{ Trebilcock, (1993) p. 102, followed by a quotation of Milton Friedman, from /{Capitalism and Freedom}/ (1962) p. 13. }~ Compared with domestic transactions, the contracting parties are less likely to possess information about each other or about what material facts there may be within the other party's knowledge, and will find such information more difficult and costly to acquire. With resource inequalities, some parties will be in a much better position to determine and access what they need to know, the more so as the more information one already has, the less it costs to identify and to obtain any additional information that is required.~{ Trebilcock, (1993) p. 102, note quoted passage of Kim Lane Scheppele, /{Legal Secrets: Equality and Efficiency in the Common Law}/ (1988) p. 25. }~ The converse lot of the financially weaker party makes its problem of high information costs (both actual and relative) near insurmountable. Ignorance may even become a rational choice, as the marginal cost of information remains higher than its marginal benefit. "This, in fact is the economic rationale for the failure to fully specify all contingencies in a contract."~{ See for example Nicholas Mercuro and Steven G. Medema, p. 58 }~ The argument is tied to transaction cost and further elucidates a general role played by underlying default rules and principles.
It also extends further to the value of immutable principles that may help mitigate the problem in some circumstances. More general arguments are presented below.
+
+5~ Justifying mandatory loyalty principles
+
+Given the ability to create alternative solutions and even an independent /{lex}/, a question arises: what limits, if any, should be imposed upon freedom of contract? What protective principles are required? Should protective principles be default rules that can be excluded? Should they be mandatory? Should mandatory law exist only at the level of municipal law?
+
+A kernel of mandatory protective principles with regard to loyalty may be justified as beneficial, and even necessary, for "IoL" to be acceptable in international commerce, in that they on balance reflect the collective needs of the international business community. The present author is of the opinion that the duties of good faith and fair dealing and loyalty (or an acceptable equivalent) should be a necessary part of any attempt at the self-legislation or institutional legislation of any contract regime that is based on "rules and principles" (rather than a national legal order). If absent, a requirement for them should be imposed by mandatory international law. Such protective provisions are to be found within the PICC and PECL.~{ Examples include: the deliberately excluded validity (Article 4); the provision on interest (Article 78); impediment (Article 79), and; what many believe to be the inadequate coverage of battle of forms (Article 19). }~ As regards the PICC: *(a)* The loyalty (and other protective) principles help bring about confidence and foster relations between parties. They provide an assurance in the international arena where parties are less likely to know each other and may have more difficulty in finding out about each other. *(b)* They better reflect the focus of the international business community on a business relationship from which both sides seek to gain. *(c)* They result in wider acceptability of the principles within both governments and the business community in the pluralistic international community. These protective principles may be regarded as enabling the PICC to better represent the needs of the commonweal. *(d)* Good faith and fair dealing~{ The commented PECL explain "'Good faith' means honesty and fairness in mind, which are subjective concepts... 'fair dealing' means observance of fairness in fact which is an objective test". }~ are fundamental underlying principles of international commercial relations.
*(e)* Reliance only on the varied mandatory law protections of various States does not engender uniformity, which is also desirable with regard to that which can be counted upon as immutable. (Not that it is avoidable, given that mandatory State law remains overriding.) More generally, freedom of contract benefits from these protective principles, which need immutable protection from contractual freedom to serve their function effectively. In seeking a transnational or non-national regime to govern contractual relations, one might suggest this to be the minimum price of freedom of contract that should be insisted upon by mandatory international law, as the limitation which hinders the misuse by one party of unlimited contractual freedom. They appear to be an essential basis for acceptability of the autonomous contract (non-national contract, based on agreed rules and principles/ "IoL"). As immutable principles they (hopefully, and this is to be encouraged) become the default standard for the conduct of international business and as such may be looked upon as "common property." Unless immutable, they suffer a fate somewhat analogous to that of "the tragedy of the commons."~{ Special problem regarding common/shared resources discussed by Garrett Hardin in Science (1968) 162 pp. 1243-1248. For short discussion and summary see Trebilcock, (1993) p. 13-15. }~ It should be recognised that argument over the loyalty principles should be one of degree, as the concept must not be compromised, and needs to be protected (even if this comes at the price of a degree of uncertainty), especially against particularly strong parties, who are most likely to argue against their necessity.
+
+4~ Problems beyond uniform texts
+
+5~ In support of four objectives
+
+In the formulation of many international legal texts a pragmatic approach was taken. Formulating legislators from different States developed solutions based on suitable responses to factual example circumstances. This was done, successfully, with a view to avoiding arguments over alternative legal semantics and methodologies. However, having arrived at a common text, what then? Given that differences of interpretation can arise and become entrenched, by what means is it possible to foster a sustainable drive towards the uniform application of shared texts? Four principles appear desirable and should, insofar as possible, be pursued together: *(i)* the promotion of certainty and predictability; *(ii)* the promotion of uniformity of application; *(iii)* the protection of democratic ideals and ensuring of jurisprudential deliberation, and; *(iv)* the retention of efficiency.
+
+5~ Improving the predictability, certainty and uniform application of international and transnational law
+
+The key to the (efficient) achievement of greater certainty and predictability in an international and/or transnational commercial law regime is through the uniform application of shared texts that make up this regime.
+
+Obviously a distinction is to be made between transnational predictability in application, that is "uniform application", and predictability at a domestic level. Where the "uniform law" is applied by a municipal court of State "A" that looks first to its domestic writings, there may be a clear, predictable manner of application, even if not in the spirit of the "Convention". Another State "B" may apply the uniform law in a different way that is equally predictable, being perfectly consistent internally. This, however, defeats much of the purpose of the uniform law.
+
+A first step is for municipal courts to accept the /{UN Convention on the Law of Treaties 1969}/ (in force 1980) as a codification of existing public international law with regard to the interpretation of treaties.~{ This is the position in English law see Lord Diplock in Fothergill v Monarch Airlines [1981], A.C. 251, 282 or see http://www.jus.uio.no/lm/england.fothergill.v.monarch.airlines.hl.1980/2_diplock.html also Mann (London, 1983) at p. 379. The relevant articles on interpretation are Article 31 and 32. }~ A potentially fundamental step towards the achievement of uniform application is through the conscientious following of the admonitions of the interpretation clauses of modern conventions, rules and principles~{ Examples: The CISG, Article 7; The PICC, Article 1.6; PECL Article 1.106; /{UN Convention on the Carriage of Goods by Sea (The Hamburg Rules) 1978}/, Article 3; /{UN Convention on the Limitation Period in the International Sale of Goods 1974}/ and /{1978}/, Article 7; /{UN Model Law on Electronic Commerce 1996}/, Article 3; /{UNIDROIT Convention on International Factoring 1988}/, Article 4; /{UNIDROIT Convention on International Financial Leasing 1988}/, Article 6; also /{EC Convention on the Law Applicable to Contractual Obligations 1980}/, Article 18. }~ to take into account their international character and the need to promote uniformity in their application,~{ For an online collection of articles see the Pace CISG Database http://www.cisg.law.pace.edu/cisg/text/e-text-07.html and amongst the many other articles do not miss Michael Van Alstine /{Dynamic Treaty Interpretation}/ 146 /{University of Pennsylvania Law Review}/ (1998) 687-793. }~ together with all this implies.~{ Such as the CISG provision on interpretation - Article 7. }~ However, the problems of uniform application, being embedded in differences of legal methodology, go beyond the agreement of a common text, and superficial glances at the works of other legal municipalities. 
These include questions related to sources of authority and technique applied in developing valid legal argument. Problems with sources include differences in authority and weight given to: *(a)* legislative history; *(b)* rulings domestic and international; *(c)* official and other commentaries; *(d)* scholarly writings. There should be an ongoing discussion of legal methodology to determine the methods best suited to addressing the problem of achieving greater certainty, predictability and uniformity in the application of shared international legal texts. With regard to information sharing, again the technology associated with the Net offers potential solutions.
+
+5~ The Net and information sharing through transnational databases
+
+The Net has been a godsend permitting the collection and dissemination of information on international law. With the best intentions to live up to the admonition "to take into account their international character and the need to promote uniformity in their application" of "ScIL" and "IoL", a difficulty has been in knowing what has been written and decided elsewhere. In discussing solutions, Professor Honnold in /{"Uniform Words and Uniform Application" }/~{ Based on the CISG, and inputs from several professors from different legal jurisdictions, on the problems of achieving the uniform application of the text across different legal municipalities. J. Honnold, /{Uniform words and uniform applications. Uniform Words and Uniform Application: The 1980 Sales Convention and International Juridical Practice}/. /{Einheitliches Kaufrecht und nationales Obligationenrecht. Referate Diskussionen der Fachtagung}/. am 16/17-2-1987. Hrsg. von P. Schlechtriem. Baden-Baden, Nomos, 1987. p. 115-147, at p. 127-128. }~ suggests the following: "General Access to Case-Law and Bibliographic Material: The development of a homogenous body of law under the Convention depends on channels for the collection and sharing of judicial decisions and bibliographic material so that experience in each country can be evaluated and followed or rejected in other jurisdictions." Honnold then goes on to discuss "the need for an international clearing-house to collect and disseminate experience on the Convention", a need on which, he writes, there is general agreement. He also discusses information-gathering methods through the use of national reporters. He poses the question "Will these channels be adequate? ..."
+
+The Net, offering inexpensive ways to build databases and to provide global access to information, provides an opportunity to address these problems that was not previously available. The Net extends the reach of the admonitions of the interpretation clauses, providing the medium whereby, if a decision or scholarly writing exists on a particular article or provision of a Convention anywhere in the world, it will be readily available. Whether or not a national court or arbitration tribunal chooses to follow such examples, it should be aware of them. Whatever a national court decides will also become internationally known, and will add to the body of experience on the Convention.~{ Nor is it particularly difficult to set into motion the placement of such information on the Net. With each interested participant publishing for their own interest, the Net could provide the key resources to be utilised in the harmonisation and reaching of common understandings of solutions and uniform application of legal texts. Works from all countries would be available. }~
+
+Such a library would be of interest to the institution promulgating the text, governments, practitioners and researchers alike. It could place at your fingertips: *(a)* Convention texts. *(b)* Implementation details of contracting States. *(c)* The legislative history. *(d)* Decisions generated by the convention around the world (court and arbitral where possible). *(e)* The official and other commentaries. *(f)* Scholarly writings on the Convention. *(g)* Bibliographies of scholarly writings. *(h)* Monographs and textbooks. *(i)* Student study material collections. *(j)* Information on promotional activities, lectures - moots etc. *(k)* Discussion groups/ mailing groups and other more interactive features.
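The catalogue of resources just listed can be pictured as a single searchable index keyed by convention, provision and type of material. The sketch below is purely illustrative: the class names, record fields and placeholder entries are assumptions of this note, not a description of any existing database (such as the Pace CISG Database).

```python
from dataclasses import dataclass

@dataclass
class Material:
    convention: str   # e.g. "CISG"
    article: int      # the provision the material concerns
    kind: str         # "decision", "commentary", "scholarly writing", ...
    country: str      # jurisdiction of origin; "" for international material
    reference: str    # citation or locator (placeholders here)

class Library:
    """A single clearing-house for materials on a uniform-law text."""

    def __init__(self) -> None:
        self._items: list[Material] = []

    def add(self, material: Material) -> None:
        self._items.append(material)

    def on_provision(self, convention: str, article: int) -> list[Material]:
        # Everything written or decided, anywhere in the world, on one article.
        return [m for m in self._items
                if m.convention == convention and m.article == article]

lib = Library()
lib.add(Material("CISG", 7, "decision", "Germany", "(citation)"))
lib.add(Material("CISG", 7, "commentary", "", "(citation)"))
lib.add(Material("CISG", 19, "decision", "France", "(citation)"))

# A court or researcher consulting the index on CISG Article 7 retrieves
# both the foreign decision and the commentary in one search.
hits = lib.on_provision("CISG", 7)
```

The design point is the single keyed index: one query surfaces all experience on a provision regardless of its country of origin, which is what makes the efficiency gain discussed below possible.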
+
+With respect to the CISG such databases are already being maintained.~{ Primary amongst them Pace University, Institute of International Commercial Law, CISG Database http://www.cisg.law.pace.edu/ which provides secondary support for the CISG, including providing a free on-line database of the legislative history, academic writings, and case-law on the CISG and additional material with regard to PICC and PECL insofar as they may supplement the CISG. Furthermore, the Pace CISG Project, networks with the several other existing Net based "autonomous" CISG projects. UNCITRAL under Secretary Gerold Herrmann, has its own database through which it distributes its case law materials collected from national reporters (CLOUT). }~
+
+The database, by ensuring the availability of international materials to be used in conjunction with legal practice, helps to support the four principles named above. Efficiency in particular is enhanced if there is a single source that can be searched for the information required.
+
+The major obstacle to confidence in this as the great and free panacea it should be remains the cost of translating texts.
+
+5~ Judicial minimalism promotes democratic jurisprudential deliberation
+
+How are liberal democratic ideals to be protected and international jurisprudential deliberation ensured? Looking at judicial method, where court decisions are looked to for guidance, both are fostered by a judicial minimalist approach.
+
+For those of us with a common law background, and others who pay special attention to cases as the interpretation clauses invite you to do, there is scope for discussion as to the most appropriate approach to be taken with regard to judicial decisions. US judge Cass Sunstein's suggestion of judicial minimalism~{ Cass R. Sunstein, /{One Case at a Time - Judicial Minimalism on the Supreme Court}/ (1999) }~ which despite its being developed in a different context~{ His analysis is developed based largely on "hard" constitutional cases of the U.S. }~ is attractive in that it is suited to a liberal democracy in ensuring democratic jurisprudential deliberation. It maintains discussion, debate, and allows for adjustment as appropriate and the gradual development of a common understanding of issues. Much as one may admire farsighted and far-reaching decisions and expositions, there is less chance with the minimalist approach of the (dogmatic) imposition of particular values, whilst information sharing offers the possibility of the percolation of good ideas.~{ D. Stauffer, /{Introduction to Percolation Theory}/ (London, 1985). Percolation represents the sudden dramatic expansion of a common idea or ideas through the reaching of a critical level/mass in the rapid recognition of their power and the making of further interconnections. An epidemic-like infection of ideas. Not quite the way we are used to the progression of ideas within a conservative tradition. }~ Much as we admire the integrity of Dworkin's Hercules,~{ Ronald Dworkin, /{Law's Empire}/ (Harvard, 1986); /{Hard Cases}/ in /{Harvard Law Review}/ (1988). }~ that he can consistently deliver single solutions suitable across such disparate socio-economic cultures is questionable. In examining the situation his own "integrity" would likely give him pause and prevent him from dictating that he can.~{ Hercules was created for U.S. Federal Cases and the community represented by the U.S. 
}~ This position is maintained as a general principle across international commercial law, despite private (as opposed to public) international commercial law not being an area of particularly "hard" cases of principle, and despite private international commercial law being an area in which, over a long history, lawyers have demonstrated that they are able to talk a common language to make themselves and their concepts (which are not dissimilar) understood by each other.~{ In 1966, a time when there were greater differences between the legal systems of the States comprising the world economy, Clive Schmitthoff was able to comment that:<br>"22. The similarity of the law of international trade transcends the division of the world between countries of free enterprise and countries of centrally planned economy, and between the legal families of the civil law of Roman inspiration and the common law of English tradition. As a Polish scholar observed, "the law of external trade of the countries of planned economy does not differ in its fundamental principles from the law of external trade of other countries, such as e.g., Austria or Switzerland. Consequently, international trade law specialists of all countries have found without difficulty that they speak a 'common language'<br>23. 
The reason for this universal similarity of the law of international trade is that this branch of law is based on three fundamental propositions: first, that the parties are free, subject to limitations imposed by the national laws, to contract on whatever terms they are able to agree (principle of the autonomy of the parties' will); secondly, that once the parties have entered into a contract, that contract must be faithfully fulfilled (pacta sunt servanda) and only in very exceptional circumstances does the law excuse a party from performing his obligations, viz., if force majeure or frustration can be established; and, thirdly that arbitration is widely used in international trade for the settlement of disputes, and the awards of arbitration tribunals command far-reaching international recognition and are often capable of enforcement abroad."<br>/{Report of the Secretary-General of the United Nations, Progressive Development of the Law of International Trade}/ (1966). Report prepared for the UN by C. Schmitthoff. }~
+
+5~ Non-binding interpretative councils and their co-ordinating guides can provide a focal point for the convergence of ideas - certainty, predictability, and efficiency
+
+A respected central guiding body can provide a guiding influence with respect to: *(a)* the uniform application of texts; *(b)* the management and control of information. Given the growing mass of writing on common legal texts - academic and by way of decisions - we are faced with an information management problem.~{ Future if not current. }~
+
+Supra-national interpretative councils have been called for previously~{ /{UNCITRAL Secretariat}/ (1992) p. 253. Proposed by David (France) at the second UNCITRAL Congress and on a later occasion by Farnsworth (USA). To date the political will backed by the financing for such an organ has not been forthcoming. In 1992 the UNCITRAL Secretariat concluded that "probably the time has not yet come". Suggested also by Louis Sono in /{Uniform laws require uniform interpretation: proposals for an international tribunal to interpret uniform legal texts}/ (1992) 25th UNCITRAL Congress, pp. 50-54. Drobnig, /{Observations in Uniform Law in Practice}/ at p. 306. }~ but have for various reasons been regarded as impracticable to implement, among them the problem of getting States to formally agree upon such a body with binding authority.
+
+However, it is not necessary to go this route. In relation to "IoL" in such forms as the PICC and PECL it is possible for the promulgators themselves~{ UNIDROIT and the EU }~ to update and clarify the accompanying commentary of the rules and principles, and to extend their work, through councils with the necessary delegated powers. In relation to the CISG it is possible to do something similar of a non-binding nature, through the production of an updated commentary by an interpretative council (that could try to play the role of Hercules).~{ For references on the interpretation of the CISG by a supranational committee of experts or council of "wise men" see Bonell, /{Proposal for the Establishment of a Permanent Editorial Board for the Vienna Sales Convention}/ in /{International Uniform Law in Practice / Le droit uniforme international dans la pratique}/ [Acts and Proceedings of the 3rd Congress on Private Law held by the International Institute for the Unification of Private Law (Rome, 1987)] (New York, 1988) pp. 241-244. }~ With respect, despite some expressed reservations, it is not true that such a council would have no more authority than a single author writing on the subject. A suitable non-binding interpretative council would provide a focal point for the convergence of ideas. Given the principle of ensuring democratic jurisprudential deliberation, that such a council would be advisory only (except perhaps on the contracting parties' election) would be one of its more attractive features, as it would ensure continued debate and development.
+
+5~ Capacity Building
+
+_1 "... one should create awareness about the fact that an international contract or transaction is not naturally rooted in one particular domestic law, and that its international specifics are best catered for in a uniform law."~{ UNCITRAL Secretariat (1992) p. 255. }~
+
+_{/{Capacity building}/}_ - raising awareness, providing education, creating a new generation of lawyers versed in a relatively new paradigm. Capacity building in international and transnational law is something that relevant institutions (including arbitration institutions), the business community, and far-sighted States should be interested in promoting. Finding means to transcend national boundaries is also to continue in the tradition of seeking the means to break down barriers to legal communication and understanding. However, while the business community seeks and requires greater uniformity in its business relations, there has paradoxically, at a national level, been a trend towards a nationalisation of contract law, and a regionalisation of business practice.~{ Erich Schanze, /{New Directions in Business Research}/ in Børge Dahl & Ruth Nielsen (ed.), /{New Directions in Contract Research}/ (Copenhagen, 1996) p. 62. }~
+
+As an example, the Pace University Institute of International Commercial Law plays a prominent role with regard to capacity building in relation to the CISG and PICC. Apart from the previously mentioned /{CISG Database}/, Pace University organises a large annual moot on the CISG,~{ See http://www.cisg.law.pace.edu/vis.html }~ this year involving students from 79 universities in 28 countries, and respected arbitrators from the world over. Within the moot, the finding of solutions based on the PICC where the CISG is silent is encouraged. Pace University also organises an essay competition~{ See http://www.cisg.law.pace.edu/cisg/text/essay.html }~ on the CISG and/or the PICC, which next year is to be expanded to include the PECL as a further option.
+
+4~ Marketing of transnational solutions
+
+Certain aspects of the Net/web may already be passé, but did you recognise it for what it was, or might become, when it arrived?
+
+As uniform law and transnational solutions are in competition with municipal approaches, to be successful a certain amount of marketing is necessary and may be effective. The approach should involve ensuring that the concept of what they seek to achieve is firmly implanted in the business, legal and academic communities, and engaging the business community and arbitration institutions in capacity building and in developing a new generation of lawyers. Feedback from the business community and arbitrators will also prove invaluable. Whilst it is likely that the business community will immediately be able to recognise their potential advantages, it is less certain that they will find the support of the legal community. The normal reasons would be similar to those usually cited as the primary constraints on its development: "conservatism, routine, prejudice and inertia" (René David). These are problems associated with gaining the initial foothold of acceptability, also associated with the lower part of an exponential growth curve. In addition the legal community may face tensions arising for various reasons, including the possibility of an increase in world-wide competition.
+
+There are old, well developed legal traditions with developed infrastructures and roots well established in several countries, that are dependable and known. The question arises: why experiment with alternative, less extensively tested regimes? The required sophistication is developed in the centres providing legal services, and it may be argued that there is no pressing need for unification or for transnational solutions, as the traditional way of contracting provides satisfactorily for the requirements of global commerce. The services required will continue to be easily and readily available from existing centres of skill. English law, to take an example, is for various reasons (including perhaps language, familiarity of use, reputation and widespread Commonwealth~{ http://www.thecommonwealth.org/ }~ relations) the premier choice for the law governing international commercial transactions, and is likely to be for the foreseeable future. Utilising the Commonwealth as an example, what the "transnational" law (e.g. CISG) experience illustrates, however, is that for States there may be greater advantage to be gained from participation in a horizontally shared area of commercial law than from retaining a traditional vertically integrated commercial law system, based largely, for example, on the English legal system.
+
+Borrowing a term from the information technology sector, it is essential to guard against FUD (fear, uncertainty and doubt) with regard to the viability of new and/or competing transnational solutions, that may be spread by their detractors, and promptly, in the manner required by the free market, address any real problems that are discerned.
+
+4~ Tools in future development
+
+An attempt should be made by the legal profession to be more contemporary, to keep up to date with developments in technology and the sciences, and to adopt effective tools where suitable to achieve its goals. Technology, one way or another, is likely to encroach further upon law and the way we design it.
+
+Science works across cultures and is aspired to by most nations, being credited with the phenomenal success of technology (both are similarly associated with globalisation). Science is extending its scope to (more confidently) tackle complex systems. It would not hurt to be more familiar with relevant scientific concepts and terminology. Certainly lawyers across the globe, myself included, would also benefit much in their conceptual reasoning from an early dose of the philosophy of science,~{ An excellent approachable introduction is provided by A.F. Chalmers /{What is this thing called Science?}/ (1978, Third Edition 1999). }~ what better than Karl Popper on scientific discovery, the role of "falsification" and the value of predictive probity.~{ Karl R. Popper /{The Logic of Scientific Discovery}/ (1959). }~ And certainly Thomas Kuhn on scientific advancement and "paradigm shifts"~{ Thomas S. Kuhn /{The Structure of Scientific Revolutions}/ (1962, 3rd Edition 1976). }~ has its place. Having mentioned Karl Popper, it would not be unwise to go further (outside the realms of philosophy of science) to study his defence of democracy in both volumes of /{The Open Society and Its Enemies}/.~{ Karl R. Popper /{The Open Society and Its Enemies: Volume 1, Plato}/ (1945) and /{The Open Society and Its Enemies: Volume 2, Hegel & Marx}/ (1945). }~
+
+Less ambitiously, there are several tools not traditionally in the lawyer's set that may assist in transnational infrastructure modelling. These include further exploration and development of the potential of tools such as, to suggest a few by way of example: flow charts, fuzzy thinking, "intelligent" electronic agents and Net collaborations.
+
+In the early 1990s I was introduced to a quantity surveyor and engineer who had reduced the /{FIDIC Red Book}/~{ FIDIC is the International Federation of Consulting Engineers http://www.fidic.com/ }~ to over a hundred pages of intricate flow charts (decision trees), printed horizontally on roughly A4 sized sheets. He was employed by a Norwegian construction firm, which insisted, based on past experience, that using his charts he could consistently arrive in a day at answers to their questions that law firms took weeks to produce. Flow charts can be used to show interrelationships and dependencies, in order to navigate the implications of a set of rules more quickly. They may also be used more pro-actively (and /{ex ante}/ rather than /{ex post}/) in formulating texts, to avoid unnecessary complexity and to arrive at more practical, efficient and elegant solutions.
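+The idea of encoding a rule set as a navigable decision tree can be sketched as follows. This is a hypothetical, much-simplified fragment (the questions and outcomes are invented for illustration and bear no relation to the actual FIDIC charts):

```python
# A rule set encoded as a decision tree, so the implications of a set of
# answers can be navigated mechanically rather than re-read from the text.
# The questions and outcomes below are invented for illustration only.

TREE = {
    "question": "Did the contractor give notice of the claim within 28 days?",
    "yes": {
        "question": "Was the delay caused by an employer risk event?",
        "yes": "Extension of time may be granted.",
        "no": "No extension; contractor bears the delay.",
    },
    "no": "Claim barred for want of timely notice.",
}

def navigate(tree, answers):
    # Follow yes/no answers down the tree until an outcome leaf is reached.
    node = tree
    for answer in answers:
        node = node[answer]
        if isinstance(node, str):
            return node
    return node

print(navigate(TREE, ["yes", "yes"]))
```

Each path through such a tree corresponds to one route through the flow chart; a hundred pages of charts is simply a much larger tree of the same shape.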
+
+Explore such concepts as "fuzzy thinking"~{ Concept originally developed by Lotfi Zadeh /{Fuzzy Sets}/ Information Control 8 (1965) pp. 338-353. For introductions see Daniel McNeill and Paul Freiberger /{Fuzzy Logic: The Revolutionary Computer Technology that is Changing our World}/ (1993); Bart Kosko /{Fuzzy Thinking}/ (1993); Earl Cox /{The Fuzzy Systems Handbook}/ (New York, 2nd ed. 1999). Perhaps to the uninitiated an unfortunate choice of name, as fuzzy logic and fuzzy set theory are more general than classical logic and set theory, which comprise a subset of that which is fuzzy (representing those instances where membership is 0% or 100%). The statement is not entirely without controversy, in suggesting the possibility that classical thinking may be subsumed within the realms of an unfamiliar conceptual paradigm that is to take hold of future thinking. In the engineering field much pioneer work on fuzzy rule based systems was done at Queen Mary College by Ebrahim Mamdani in the early and mid-1970s. Time will tell. }~ including fuzzy logic, fuzzy set theory, and fuzzy systems modelling, of which classical logic and set theory are subsets. Both by way of analogy and as a tool, fuzzy concepts are better at coping with complexity and map more closely to judicial thinking and argument in the application of principles and rules. Fuzzy theory provides a method for analysing and modelling principle and rule based systems, even where conflicting principles may apply, permitting, /{inter alia}/, working with competing principles and the contextual assignment of precision to terms such as "reasonableness". Fuzzy concepts should be explored in expert systems, and in future law. Problems of scaling associated with multiple decision trees do not prevent useful applications and structured solutions. The analysis assists in discerning what it is that lawyers are involved with.
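+The contrast between classical and fuzzy sets can be made concrete. The sketch below uses an invented membership function for "reasonable delivery delay" (the 7/14/21-day thresholds are hypothetical, chosen only to illustrate graded membership):

```python
# Classical vs. fuzzy set membership for a hypothetical legal standard:
# how "reasonable" is a delivery delay of d days? Thresholds are invented.

def classical_reasonable(d):
    # Classical (crisp) set: a delay is either reasonable (1.0) or not (0.0).
    return 1.0 if d <= 14 else 0.0

def fuzzy_reasonable(d):
    # Fuzzy set: membership degrades gradually between 7 and 21 days,
    # giving a degree of membership between 0.0 and 1.0.
    if d <= 7:
        return 1.0
    if d >= 21:
        return 0.0
    return (21 - d) / 14.0

for d in (5, 14, 15, 30):
    print(d, classical_reasonable(d), round(fuzzy_reasonable(d), 2))
```

The crisp function flips abruptly at 14 days, while the fuzzy one assigns a 15-day delay a partial degree of reasonableness; the crisp set is just the special case where membership is always 0.0 or 1.0.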
+
+"Intelligent" electronic agents can be expected to gather information on behalf of both the business community and lawyers. In future, electronic agents are likely to be employed to identify and bring to the attention of their principals "invitations to treat" or offers worthy of further investigation. In some cases they will be developed and relied upon as electronic legal agents, operating under a programmed mandate and vested with the authority to enter certain contracts on behalf of their principals. Such a mandate would include the choice of law upon which to contract, and the scenario could be assisted by transnational contract solutions (and catered for in the design of "future law").
+
+Another area in which technology is helping to solve legal problems relates to various types of global register and transaction centres. Property registers are an obvious example, including registers of patents and moveable property. Bolero provides an example of how electronic documents can be centrally brokered on behalf of trading parties.
+
+Primary law should be available on the Net free of charge, and this applies also to "IoL" and the static material required for their interpretation. This should be the policy adopted by all institutions involved in contributing to the transnational legal infrastructure. Where possible, larger databases should also be developed and shared. The Net has reduced the cost of disseminating material to a small fraction of what it was before. Universities now can and should play a more active role. Suitable funding arrangements should be explored that do not result in proprietary systems or the forwarding of specific lobby interests. In hard copy, to promote uniform standards, institutions should also strive to make their materials available at a reasonable price. Many appear to be unacceptably expensive given the need for their promotion and for capacity building, amongst students and across diverse States.
+
+Follow the open standards and community standards debate in relation to the development of technology standards and technology infrastructure tools - including operating systems,~{ See for example /{Open Sources : Voices from the Open Source Revolution - The Open Source Story}/ http://www.oreilly.com/catalog/opensources/book/toc.html }~ to discover what if anything it might suggest for the future development of law standards.
+
+4~ As an aside, a word of caution
+
+I end with an arguably gratuitous observation, by way of a reminder and general warning. Gratuitous in the context of this paper because the areas focused upon~{ Sale of goods (CISG), contract rules and principles (PICC), related Arbitration, and the promotion of certain egalitarian ideals. }~ were somewhat deliberately selected to fall outside the more contentious and "politically" problematic areas related to globalisation, economics, technology, law and politics.~{ It is not as evident in the area of private international commercial contract law the chosen focus for this paper, but appears repeatedly in relation to other areas and issues arising out of the economics, technology, law nexus. }~ Gratuitous also because there will be no attempt to concretise or exemplify the possibility suggested.
+
+Fortunately, we are not (necessarily) talking about a zero sum game; however, it is necessary to be able to distinguish and recognise that which may harm. International commerce/trade is competitive, and by its nature not benign, even if it results in an overall improvement in the economic lot of the peoples of our planet. "Neutral tests" such as Kaldor-Hicks efficiency do not require that your interests are benefited one iota, just that, whilst those of others are improved, yours are not made worse. If the measure adopted is overall benefit, it is even more likely that an overall gain may result where your interests are adversely affected. The more so if you have little, and those that gain, gain much. Furthermore such "tests" are based on assumptions which at best are approximations of reality (e.g. that of zero transaction costs, where in fact transaction costs are not zero, and are frequently proportionately higher for the economically weak). At worst they may be manipulated /{ex ante}/ with knowledge of their implications (e.g. engineering to ensure actual or relative~{ Low fixed costs have a "regressive" effect }~ asymmetry in transaction costs). It is important to be careful in a wide range of circumstances related to various aspects of the modelling of the infrastructure for international commerce that have an impact on the allocation of rights and obligations, and especially the allocation of resources, including various types of intellectual property rights. Ask: what is the objective and justification for the protection? How well is the objective met? Are there other consequential effects? Are there other objectives that are worthy of protection? Could the stated objective(s) be achieved in a better way?
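+The Kaldor-Hicks point can be made concrete with invented numbers. The sketch below (the welfare figures are hypothetical) shows that a change can pass the Kaldor-Hicks test while leaving one party strictly worse off, which a Pareto test would reject:

```python
# Invented welfare deltas for three parties under some policy change.
# Kaldor-Hicks asks only whether aggregate gains exceed aggregate losses,
# i.e. whether the winners *could* compensate the losers - not whether they do.

def kaldor_hicks_improvement(changes):
    # Efficient in the Kaldor-Hicks sense if net welfare rises.
    return sum(changes) > 0

def pareto_improvement(changes):
    # Pareto: no one is made worse off, and at least one party gains.
    return all(c >= 0 for c in changes) and any(c > 0 for c in changes)

changes = [+100, +40, -30]  # two parties gain, one loses

print(kaldor_hicks_improvement(changes))  # net +110: passes Kaldor-Hicks
print(pareto_improvement(changes))        # fails Pareto: one party is worse off
```

This is the asymmetry the paragraph above warns about: under an aggregate-benefit measure, the third party's loss of 30 is simply outweighed, however much or little that party had to begin with.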
+
+Within a system there are those who benefit from the way it has been, and they may oppose change as resulting in loss to them or in uncertainty as to their continued privilege. For a stable system that favours such a Select Set to arise initially does not require the conscious manipulation of conditions by the Select Set; rather it requires only that from the system (set) in place the Select Set emerges as beneficiary. Subsequently the Select Set, having become established as favoured and empowered by its status as beneficiary, will seek to do what it can to influence circumstances to ensure its continued beneficial status. That is, to keep the system operating to its advantage (or tune it to work even better towards this end), usually with little regard to the conditions resulting for other members of the system. Often this will be a question of degree, and the original purpose, or an alternative "neutral" argument, is likely to be used to justify the arrangement. The objective from the perspective of the Select Set is fixed; the means at its disposal may vary. Complexity is not required for such situations to arise, but once they have arisen, subsequent plays by the Select Set tend towards complexity. Furthermore, moves in the interest of the Select Set are more easily obscured/disguised in a complex system. Limited access to information and knowledge is a devastating handicap; without information and knowledge, change cannot be contemplated, let alone negotiated. Frequently, having information and knowledge is not enough. The protection of self-interest is an endemic part of our system, with the system repeatedly being co-opted to the purposes of those that are able to manipulate it. Membership over time is not static: for example, yesterday's "copycat nations" are today's innovators, keen to protect their intellectual property. Which also illustrates the point that what it may take to set success in motion may not be the same as that which is preferred to sustain it. 
Whether these observations appear to be self-evident and/or abstract and out of place with regard to this paper, they have far reaching implications repeatedly observable within the law, technology, and commerce (politics) nexus. Even if not arising much in the context of the selected material for this paper, their mention is justified by way of warning. Suitable examples would easily illustrate how politics arises inescapably as an emergent property from the nexus of commerce, technology, and law.~{ In such circumstances either economics or law on their own would be sufficient to result in politics arising as an emergent property. }~
+
+4~endnotes Endnote
+
+* Ralph Amissah is a Fellow of Pace University, Institute for International Commercial Law. http://www.cisg.law.pace.edu/ <br>RA lectured on the private law aspects of international trade whilst at the Law Faculty of the University of Tromsø, Norway. http://www.jus.uit.no/ <br> RA built the first web site related to international trade law, now known as lexmercatoria.org and described as "an (international | transnational) commercial law and e-commerce infrastructure monitor". http://lexmercatoria.org/ <br> RA is interested in the law, technology, commerce nexus. RA works with the law firm Amissahs.<br>/{[This is a draft document and subject to change.]}/ <br>All errors are very much my own.<br>ralph@amissah.com
+
+%% SiSU markup sample Notes:
+% SiSU http://www.jus.uio.no/sisu
+% SiSU markup for 0.16 and later:
+% 0.20.4 header 0~links
+% 0.22 may drop image dimensions (rmagick)
+% 0.23 utf-8 ß
+% 0.38 or later, may use alternative notation for headers, e.g. @title: (instead of 0~title)
+% 0.38 document structure alternative markup, experimental (rad) A,B,C,1,2,3 maps to 1,2,3,4,5,6
+% Output: http://www.jus.uio.no/sisu/autonomy_markup4/sisu_manifest.html
+% 0.36 markup
diff --git a/data/sisu_markup_samples/non-free/free_culture.lawrence_lessig.sst b/data/sisu_markup_samples/non-free/free_culture.lawrence_lessig.sst
new file mode 100644
index 0000000..3ee0db7
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/free_culture.lawrence_lessig.sst
@@ -0,0 +1,4834 @@
+% SiSU 0.38
+
+@title: Free Culture
+
+@subtitle: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity
+
+@creator: Lawrence Lessig
+
+@type: Book
+
+@rights: Copyright Lawrence Lessig, 2004. Free Culture is Licensed under a Creative Commons License. This License permits non-commercial use of this work, so long as attribution is given. For more information about the license, visit http://creativecommons.org/licenses/by-nc/1.0/
+
+@date: 2004-03-25
+
+@date.created: 2004-03-25
+
+% @date.created: 2004-04-08
+
+@date.issued: 2004-03-25
+
+@date.available: 2004-03-25
+
+@date.modified: 2004-03-25
+
+@date.valid: 2004-03-25
+
+% @catalogue: isbn=1594200068
+
+@language: US
+
+@vocabulary: none
+
+@images: center
+
+@skin: skin_lessig
+
+@links: {Free Culture}http://www.free-culture.cc
+{Remixes}http://www.free-culture.cc/remixes/
+{Free Culture, Lawrence Lessig @ SiSU}http://www.jus.uio.no/sisu/free_culture.lawrence_lessig
+{@ Wikipedia}http://en.wikipedia.org/wiki/Free_Culture_%28book%29
+{@ Amazon.com}http://www.amazon.com/gp/product/1594200068
+{@ Barnes & Noble}http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?isbn=1594200068
+{The Wealth of Networks, Yochai Benkler @ SiSU}http://www.jus.uio.no/sisu/the_wealth_of_networks.yochai_benkler
+{Free as in Freedom (on Richard M. Stallman), Sam Williams @ SiSU}http://www.jus.uio.no/sisu/free_as_in_freedom.richard_stallman_crusade_for_free_software.sam_williams
+{Free For All, Peter Wayner @ SiSU}http://www.jus.uio.no/sisu/free_for_all.peter_wayner
+{The Cathedral and the Bazaar, Eric S. Raymond @ SiSU }http://www.jus.uio.no/sisu/the_cathedral_and_the_bazaar.eric_s_raymond
+
+@level: new=:C; break=1
+
+:A~ Free Culture
+
+:B~ by Lawrence Lessig
+
+1~attribution Attribution~#
+
+To Eric Eldred - whose work first drew me to this cause, and for whom it continues still.~#
+
+:C~ PREFACE
+
+1~preface [Preface]-#
+
+*{At the end}* of his review of my first book, /{Code: And Other Laws of Cyberspace}/, David Pogue, a brilliant writer and author of countless technical and computer-related texts, wrote this:
+
+_1 Unlike actual law, Internet software has no capacity to punish. It doesn't affect people who aren't online (and only a tiny minority of the world population is). And if you don't like the Internet's system, you can always flip off the modem.~{ David Pogue, "Don't Just Chat, Do Something," /{New York Times,}/ 30 January 2000. }~
+
+Pogue was skeptical of the core argument of the book - that software, or "code," functioned as a kind of law - and his review suggested the happy thought that if life in cyberspace got bad, we could always "drizzle, drazzle, druzzle, drome"-like simply flip a switch and be back home. Turn off the modem, unplug the computer, and any troubles that exist in /{that}/ space wouldn't "affect" us anymore.
+
+Pogue might have been right in 1999 - I'm skeptical, but maybe. But even if he was right then, the point is not right now: /{Free Culture}/ is about the troubles the Internet causes even after the modem is turned off. It is an argument about how the battles that now rage regarding life on-line have fundamentally affected "people who aren't online." There is no switch that will insulate us from the Internet's effect.
+
+But unlike /{Code}/, the argument here is not much about the Internet itself. It is instead about the consequence of the Internet to a part of our tradition that is much more fundamental, and, as hard as this is for a geek-wanna-be to admit, much more important.
+
+That tradition is the way our culture gets made. As I explain in the pages that follow, we come from a tradition of "free culture" - not "free" as in "free beer" (to borrow a phrase from the founder of the free-software movement~{ Richard M. Stallman, /{Free Software, Free Society}/ 57 (Joshua Gay, ed. 2002). }~), but "free" as in "free speech," "free markets," "free trade," "free enterprise," "free will," and "free elections." A free culture supports and protects creators and innovators. It does this directly by granting intellectual property rights. But it does so indirectly by limiting the reach of those rights, to guarantee that follow-on creators and innovators remain /{as free as possible}/ from the control of the past. A free culture is not a culture without property, just as a free market is not a market in which everything is free. The opposite of a free culture is a "permission culture" - a culture in which creators get to create only with the permission of the powerful, or of creators from the past.
+
+If we understood this change, I believe we would resist it. Not "we" on the Left or "you" on the Right, but we who have no stake in the particular industries of culture that defined the twentieth century. Whether you are on the Left or the Right, if you are in this sense disinterested, then the story I tell here will trouble you. For the changes I describe affect values that both sides of our political culture deem fundamental.
+
+We saw a glimpse of this bipartisan outrage in the early summer of 2003. As the FCC considered changes in media ownership rules that would relax limits on media concentration, an extraordinary coalition generated more than 700,000 letters to the FCC opposing the change. As William Safire described marching "uncomfortably alongside CodePink Women for Peace and the National Rifle Association, between liberal Olympia Snowe and conservative Ted Stevens," he formulated perhaps most simply just what was at stake: the concentration of power. And as he asked,
+
+_1 Does that sound unconservative? Not to me. The concentration of power - political, corporate, media, cultural - should be anathema to conservatives. The diffusion of power through local control, thereby encouraging individual participation, is the essence of federalism and the greatest expression of democracy.~{ William Safire, "The Great Media Gulp," /{New York Times,}/ 22 May 2003. }~
+
+This idea is an element of the argument of /{Free Culture}/, though my focus is not just on the concentration of power produced by concentrations in ownership, but more importantly, if because less visibly, on the concentration of power produced by a radical change in the effective scope of the law. The law is changing; that change is altering the way our culture gets made; that change should worry you - whether or not you care about the Internet, and whether you're on Safire's left or on his right.
+
+*{The inspiration}* for the title and for much of the argument of this book comes from the work of Richard Stallman and the Free Software Foundation. Indeed, as I reread Stallman's own work, especially the essays in /{Free Software, Free Society}/, I realize that all of the theoretical insights I develop here are insights Stallman described decades ago. One could thus well argue that this work is "merely" derivative.
+
+I accept that criticism, if indeed it is a criticism. The work of a lawyer is always derivative, and I mean to do nothing more in this book than to remind a culture about a tradition that has always been its own. Like Stallman, I defend that tradition on the basis of values. Like Stallman, I believe those are the values of freedom. And like Stallman, I believe those are values of our past that will need to be defended in our future. A free culture has been our past, but it will only be our future if we change the path we are on right now.
+
+Like Stallman's arguments for free software, an argument for free culture stumbles on a confusion that is hard to avoid, and even harder to understand. A free culture is not a culture without property; it is not a culture in which artists don't get paid. A culture without property, or in which creators can't get paid, is anarchy, not freedom. Anarchy is not what I advance here.
+
+Instead, the free culture that I defend in this book is a balance between anarchy and control. A free culture, like a free market, is filled with property. It is filled with rules of property and contract that get enforced by the state. But just as a free market is perverted if its property becomes feudal, so too can a free culture be queered by extremism in the property rights that define it. That is what I fear about our culture today. It is against that extremism that this book is written.
+
+:C~ INTRODUCTION
+
+1~intro [Intro]-#
+
+*{On December 17, 1903,}* on a windy North Carolina beach for just shy of one hundred seconds, the Wright brothers demonstrated that a heavier-than-air, self-propelled vehicle could fly. The moment was electric and its importance widely understood. Almost immediately, there was an explosion of interest in this newfound technology of manned flight, and a gaggle of innovators began to build upon it.
+
+At the time the Wright brothers invented the airplane, American law held that a property owner presumptively owned not just the surface of his land, but all the land below, down to the center of the earth, and all the space above, to "an indefinite extent, upwards."~{ St. George Tucker, /{Blackstone's Commentaries}/ 3 (South Hackensack, N.J.: Rothman Reprints, 1969), 18. }~ For many years, scholars had puzzled about how best to interpret the idea that rights in land ran to the heavens. Did that mean that you owned the stars? Could you prosecute geese for their willful and regular trespass?
+
+Then came airplanes, and for the first time, this principle of American law - deep within the foundations of our tradition, and acknowledged by the most important legal thinkers of our past - mattered. If my land reaches to the heavens, what happens when United flies over my field? Do I have the right to banish it from my property? Am I allowed to enter into an exclusive license with Delta Airlines? Could we set up an auction to decide how much these rights are worth?
+
+In 1945, these questions became a federal case. When North Carolina farmers Thomas Lee and Tinie Causby started losing chickens because of low-flying military aircraft (the terrified chickens apparently flew into the barn walls and died), the Causbys filed a lawsuit saying that the government was trespassing on their land. The airplanes, of course, never touched the surface of the Causbys' land. But if, as Blackstone, Kent, and Coke had said, their land reached to "an indefinite extent, upwards," then the government was trespassing on their property, and the Causbys wanted it to stop.
+
+The Supreme Court agreed to hear the Causbys' case. Congress had declared the airways public, but if one's property really extended to the heavens, then Congress's declaration could well have been an unconstitutional "taking" of property without compensation. The Court acknowledged that "it is ancient doctrine that common law ownership of the land extended to the periphery of the universe." But Justice Douglas had no patience for ancient doctrine. In a single paragraph, hundreds of years of property law were erased. As he wrote for the Court,
+
+_1 [The] doctrine has no place in the modern world. The air is a public highway, as Congress has declared. Were that not true, every transcontinental flight would subject the operator to countless trespass suits. Common sense revolts at the idea. To recognize such private claims to the airspace would clog these highways, seriously interfere with their control and development in the public interest, and transfer into private ownership that to which only the public has a just claim."~{ United States v. Causby, U.S. 328 (1946): 256, 261. The Court did find that there could be a "taking" if the government's use of its land effectively destroyed the value of the Causbys' land. This example was suggested to me by Keith Aoki's wonderful piece, "(Intellectual) Property and Sovereignty: Notes Toward a Cultural Geography of Authorship," /{Stanford Law Review}/ 48 (1996): 1293, 1333. See also Paul Goldstein, /{Real Property}/ (Mineola, N.Y.: Foundation Press, 1984), 1112-13. }~
+
+"Common sense revolts at the idea."
+
+This is how the law usually works. Not often this abruptly or impatiently, but eventually, this is how it works. It was Douglas's style not to dither. Other justices would have blathered on for pages to reach the conclusion that Douglas holds in a single line: "Common sense revolts at the idea." But whether it takes pages or a few words, it is the special genius of a common law system, as ours is, that the law adjusts to the technologies of the time. And as it adjusts, it changes. Ideas that were as solid as rock in one age crumble in another.
+
+Or at least, this is how things happen when there's no one powerful on the other side of the change. The Causbys were just farmers. And though there were no doubt many like them who were upset by the growing traffic in the air (though one hopes not many chickens flew themselves into walls), the Causbys of the world would find it very hard to unite and stop the idea, and the technology, that the Wright brothers had birthed. The Wright brothers spat airplanes into the technological meme pool; the idea then spread like a virus in a chicken coop; farmers like the Causbys found themselves surrounded by "what seemed reasonable" given the technology that the Wrights had produced. They could stand on their farms, dead chickens in hand, and shake their fists at these newfangled technologies all they wanted. They could call their representatives or even file a lawsuit. But in the end, the force of what seems "obvious" to everyone else - the power of "common sense" - would prevail. Their "private interest" would not be allowed to defeat an obvious public gain.
+
+*{Edwin Howard Armstrong}* is one of America's forgotten inventor geniuses. He came to the great American inventor scene just after the titans Thomas Edison and Alexander Graham Bell. But his work in the area of radio technology was perhaps the most important of any single inventor in the first fifty years of radio. He was better educated than Michael Faraday, who as a bookbinder's apprentice had discovered electric induction in 1831. But he had the same intuition about how the world of radio worked, and on at least three occasions, Armstrong invented profoundly important technologies that advanced our understanding of radio.
+
+On the day after Christmas, 1933, four patents were issued to Armstrong for his most significant invention - FM radio. Until then, consumer radio had been amplitude-modulated (AM) radio. The theorists of the day had said that frequency-modulated (FM) radio could never work. They were right about FM radio in a narrow band of spectrum. But Armstrong discovered that frequency-modulated radio in a wide band of spectrum would deliver an astonishing fidelity of sound, with much less transmitter power and static.
+
+On November 5, 1935, he demonstrated the technology at a meeting of the Institute of Radio Engineers at the Empire State Building in New York City. He tuned his radio dial across a range of AM stations, until the radio locked on a broadcast that he had arranged from seventeen miles away. The radio fell totally silent, as if dead, and then with a clarity no one else in that room had ever heard from an electrical device, it produced the sound of an announcer's voice: "This is amateur station W2AG at Yonkers, New York, operating on frequency modulation at two and a half meters."
+
+The audience was hearing something no one had thought possible:
+
+_1 A glass of water was poured before the microphone in Yonkers; it sounded like a glass of water being poured. ... A paper was crumpled and torn; it sounded like paper and not like a crackling forest fire. ... Sousa marches were played from records and a piano solo and guitar number were performed. ... The music was projected with a live-ness rarely if ever heard before from a radio 'music box.' "~{ Lawrence Lessing, /{Man of High Fidelity: Edwin Howard Armstrong}/ (Philadelphia: J. B. Lipincott Company, 1956), 209. }~
+
+As our own common sense tells us, Armstrong had discovered a vastly superior radio technology. But at the time of his invention, Armstrong was working for RCA. RCA was the dominant player in the then dominant AM radio market. By 1935, there were a thousand radio stations across the United States, but the stations in large cities were all owned by a handful of networks.
+
+RCA's president, David Sarnoff, a friend of Armstrong's, was eager that Armstrong discover a way to remove static from AM radio. So Sarnoff was quite excited when Armstrong told him he had a device that removed static from "radio." But when Armstrong demonstrated his invention, Sarnoff was not pleased.
+
+_1 I thought Armstrong would invent some kind of a filter to remove static from our AM radio. I didn't think he'd start a revolution - start up a whole damn new industry to compete with RCA."~{ See "Saints: The Heroes and Geniuses of the Electronic Era," First Electronic Church of America, at www.webstationone.com/fecha, available at link #1. }~
+
+Armstrong's invention threatened RCA's AM empire, so the company launched a campaign to smother FM radio. While FM may have been a superior technology, Sarnoff was a superior tactician. As one author described,
+
+_1 The forces for FM, largely engineering, could not overcome the weight of strategy devised by the sales, patent, and legal offices to subdue this threat to corporate position. For FM, if allowed to develop unrestrained, posed ... a complete reordering of radio power ... and the eventual overthrow of the carefully restricted AM system on which RCA had grown to power."~{ Lessing, 226. }~
+
+RCA at first kept the technology in house, insisting that further tests were needed. When, after two years of testing, Armstrong grew impatient, RCA began to use its power with the government to stall FM radio's deployment generally. In 1936, RCA hired the former head of the FCC and assigned him the task of assuring that the FCC assign spectrum in a way that would castrate FM - principally by moving FM radio to a different band of spectrum. At first, these efforts failed. But when Armstrong and the nation were distracted by World War II, RCA's work began to be more successful. Soon after the war ended, the FCC announced a set of policies that would have one clear effect: FM radio would be crippled. As Lawrence Lessing described it,
+
+_1 _{The}_ series of body blows that FM radio received right after the war, in a series of rulings manipulated through the FCC by the big radio interests, were almost incredible in their force and deviousness."~{ Lessing, 256. }~
+
+To make room in the spectrum for RCA's latest gamble, television, FM radio users were to be moved to a totally new spectrum band. The power of FM radio stations was also cut, meaning FM could no longer be used to beam programs from one part of the country to another. (This change was strongly supported by AT&T, because the loss of FM relaying stations would mean radio stations would have to buy wired links from AT&T.) The spread of FM radio was thus choked, at least temporarily.
+
+Armstrong resisted RCA's efforts. In response, RCA resisted Armstrong's patents. After incorporating FM technology into the emerging standard for television, RCA declared the patents invalid - baselessly, and almost fifteen years after they were issued. It thus refused to pay him royalties. For six years, Armstrong fought an expensive war of litigation to defend the patents. Finally, just as the patents expired, RCA offered a settlement so low that it would not even cover Armstrong's lawyers' fees. Defeated, broken, and now broke, in 1954 Armstrong wrote a short note to his wife and then stepped out of a thirteenth-story window to his death.
+
+This is how the law sometimes works. Not often this tragically, and rarely with heroic drama, but sometimes, this is how it works. From the beginning, government and government agencies have been subject to capture. They are more likely captured when a powerful interest is threatened by either a legal or technical change. That powerful interest too often exerts its influence within the government to get the government to protect it. The rhetoric of this protection is of course always public spirited; the reality is something different. Ideas that were as solid as rock in one age, but that, left to themselves, would crumble in another, are sustained through this subtle corruption of our political process. RCA had what the Causbys did not: the power to stifle the effect of technological change.
+
+*{There's no}* single inventor of the Internet. Nor is there any good date upon which to mark its birth. Yet in a very short time, the Internet has become part of ordinary American life. According to the Pew Internet and American Life Project, 58 percent of Americans had access to the Internet in 2002, up from 49 percent two years before.~{ Amanda Lenhart, "The Ever-Shifting Internet Population: A New Look at Internet Access and the Digital Divide," Pew Internet and American Life Project, 15 April 2003: 6, available at link #2. }~ That number could well exceed two thirds of the nation by the end of 2004.
+
+As the Internet has been integrated into ordinary life, it has changed things. Some of these changes are technical - the Internet has made communication faster, it has lowered the cost of gathering data, and so on. These technical changes are not the focus of this book. They are important. They are not well understood. But they are the sort of thing that would simply go away if we all just switched the Internet off. They don't affect people who don't use the Internet, or at least they don't affect them directly. They are the proper subject of a book about the Internet. But this is not a book about the Internet.
+
+Instead, this book is about an effect of the Internet beyond the Internet itself: an effect upon how culture is made. My claim is that the Internet has induced an important and unrecognized change in that process. That change will radically transform a tradition that is as old as the Republic itself. Most, if they recognized this change, would reject it. Yet most don't even see the change that the Internet has introduced.
+
+We can glimpse a sense of this change by distinguishing between commercial and noncommercial culture, and by mapping the law's regulation of each. By "commercial culture" I mean that part of our culture that is produced and sold or produced to be sold. By "noncommercial culture" I mean all the rest. When old men sat around parks or on street corners telling stories that kids and others consumed, that was noncommercial culture. When Noah Webster published his "Reader," or Joel Barlow his poetry, that was commercial culture.
+
+At the beginning of our history, and for just about the whole of our tradition, noncommercial culture was essentially unregulated. Of course, if your stories were lewd, or if your song disturbed the peace, then the law might intervene. But the law was never directly concerned with the creation or spread of this form of culture, and it left this culture "free." The ordinary ways in which ordinary individuals shared and transformed their culture - telling stories, reenacting scenes from plays or TV, participating in fan clubs, sharing music, making tapes - were left alone by the law.
+
+The focus of the law was on commercial creativity. At first slightly, then quite extensively, the law protected the incentives of creators by granting them exclusive rights to their creative work, so that they could sell those exclusive rights in a commercial marketplace.~{ This is not the only purpose of copyright, though it is the overwhelmingly primary purpose of the copyright established in the federal constitution. State copyright law historically protected not just the commercial interest in publication, but also a privacy interest. By granting authors the exclusive right to first publication, state copyright law gave authors the power to control the spread of facts about them. See Samuel D. Warren and Louis D. Brandeis, "The Right to Privacy," /{Harvard Law Review}/ 4 (1890): 193, 198-200. }~ This is also, of course, an important part of creativity and culture, and it has become an increasingly important part in America. But in no sense was it dominant within our tradition. It was instead just one part, a controlled part, balanced with the free.
+
+This rough divide between the free and the controlled has now been erased.~{ See Jessica Litman, /{Digital Copyright}/ (New York: Prometheus Books, 2001), ch. 13. }~ The Internet has set the stage for this erasure and, pushed by big media, the law has now affected it. For the first time in our tradition, the ordinary ways in which individuals create and share culture fall within the reach of the regulation of the law, which has expanded to draw within its control a vast amount of culture and creativity that it never reached before. The technology that preserved the balance of our history - between uses of our culture that were free and uses of our culture that were only upon permission - has been undone. The consequence is that we are less and less a free culture, more and more a permission culture.
+
+This change gets justified as necessary to protect commercial creativity. And indeed, protectionism is precisely its motivation. But the protectionism that justifies the changes that I will describe below is not the limited and balanced sort that has defined the law in the past. This is not a protectionism to protect artists. It is instead a protectionism to protect certain forms of business. Corporations threatened by the potential of the Internet to change the way both commercial and noncommercial culture are made and shared have united to induce lawmakers to use the law to protect them. It is the story of RCA and Armstrong; it is the dream of the Causbys.
+
+For the Internet has unleashed an extraordinary possibility for many to participate in the process of building and cultivating a culture that reaches far beyond local boundaries. That power has changed the marketplace for making and cultivating culture generally, and that change in turn threatens established content industries. The Internet is thus to the industries that built and distributed content in the twentieth century what FM radio was to AM radio, or what the truck was to the railroad industry of the nineteenth century: the beginning of the end, or at least a substantial transformation. Digital technologies, tied to the Internet, could produce a vastly more competitive and vibrant market for building and cultivating culture; that market could include a much wider and more diverse range of creators; those creators could produce and distribute a much more vibrant range of creativity; and depending upon a few important factors, those creators could earn more on average from this system than creators do today - all so long as the RCAs of our day don't use the law to protect themselves against this competition.
+
+Yet, as I argue in the pages that follow, that is precisely what is happening in our culture today. These modern-day equivalents of the early twentieth-century radio or nineteenth-century railroads are using their power to get the law to protect them against this new, more efficient, more vibrant technology for building culture. They are succeeding in their plan to remake the Internet before the Internet remakes them.
+
+It doesn't seem this way to many. The battles over copyright and the Internet seem remote to most. To the few who follow them, they seem mainly about a much simpler brace of questions - whether "piracy" will be permitted, and whether "property" will be protected. The "war" that has been waged against the technologies of the Internet - what Motion Picture Association of America (MPAA) president Jack Valenti calls his "own terrorist war"~{ Amy Harmon, "Black Hawk Download: Moving Beyond Music, Pirates Use New Tools to Turn the Net into an Illicit Video Club," /{New York Times,}/ 17 January 2002. }~ - has been framed as a battle about the rule of law and respect for property. To know which side to take in this war, most think that we need only decide whether we're for property or against it.
+
+If those really were the choices, then I would be with Jack Valenti and the content industry. I, too, am a believer in property, and especially in the importance of what Mr. Valenti nicely calls "creative property." I believe that "piracy" is wrong, and that the law, properly tuned, should punish "piracy," whether on or off the Internet.
+
+But those simple beliefs mask a much more fundamental question and a much more dramatic change. My fear is that unless we come to see this change, the war to rid the world of Internet "pirates" will also rid our culture of values that have been integral to our tradition from the start.
+
+These values built a tradition that, for at least the first 180 years of our Republic, guaranteed creators the right to build freely upon their past, and protected creators and innovators from either state or private control. The First Amendment protected creators against state control. And as Professor Neil Netanel powerfully argues,~{ Neil W. Netanel, "Copyright and a Democratic Civil Society," /{Yale Law Journal}/ 106 (1996): 283. }~ copyright law, properly balanced, protected creators against private control. Our tradition was thus neither Soviet nor the tradition of patrons. It instead carved out a wide berth within which creators could cultivate and extend our culture.
+
+Yet the law's response to the Internet, when tied to changes in the technology of the Internet itself, has massively increased the effective regulation of creativity in America. To build upon or critique the culture around us one must ask, Oliver Twist-like, for permission first. Permission is, of course, often granted - but it is not often granted to the critical or the independent. We have built a kind of cultural nobility; those within the noble class live easily; those outside it don't. But it is nobility of any form that is alien to our tradition.
+
+The story that follows is about this war. It is not about the "centrality of technology" to ordinary life. I don't believe in gods, digital or otherwise. Nor is it an effort to demonize any individual or group, for neither do I believe in a devil, corporate or otherwise. It is not a morality tale. Nor is it a call to jihad against an industry.
+
+It is instead an effort to understand a hopelessly destructive war inspired by the technologies of the Internet but reaching far beyond its code. And by understanding this battle, it is an effort to map peace. There is no good reason for the current struggle around Internet technologies to continue. There will be great harm to our tradition and culture if it is allowed to continue unchecked. We must come to understand the source of this war. We must resolve it soon.
+
+*{Like the Causbys'}* battle, this war is, in part, about "property." The property of this war is not as tangible as the Causbys', and no innocent chicken has yet to lose its life. Yet the ideas surrounding this "property" are as obvious to most as the Causbys' claim about the sacredness of their farm was to them. We are the Causbys. Most of us take for granted the extraordinarily powerful claims that the owners of "intellectual property" now assert. Most of us, like the Causbys, treat these claims as obvious. And hence we, like the Causbys, object when a new technology interferes with this property. It is as plain to us as it was to them that the new technologies of the Internet are "trespassing" upon legitimate claims of "property." It is as plain to us as it was to them that the law should intervene to stop this trespass.
+
+And thus, when geeks and technologists defend their Armstrong or Wright brothers technology, most of us are simply unsympathetic. Common sense does not revolt. Unlike in the case of the unlucky Causbys, common sense is on the side of the property owners in this war. Unlike the lucky Wright brothers, the Internet has not inspired a revolution on its side.
+
+My hope is to push this common sense along. I have become increasingly amazed by the power of this idea of intellectual property and, more importantly, its power to disable critical thought by policy makers and citizens. There has never been a time in our history when more of our "culture" was as "owned" as it is now. And yet there has never been a time when the concentration of power to control the /{uses}/ of culture has been as unquestioningly accepted as it is now.
+
+The puzzle is, Why?
+
+Is it because we have come to understand a truth about the value and importance of absolute property over ideas and culture? Is it because we have discovered that our tradition of rejecting such an absolute claim was wrong?
+
+Or is it because the idea of absolute property over ideas and culture benefits the RCAs of our time and fits our own unreflective intuitions?
+
+Is the radical shift away from our tradition of free culture an instance of America correcting a mistake from its past, as we did after a bloody war with slavery, and as we are slowly doing with inequality? Or is the radical shift away from our tradition of free culture yet another example of a political system captured by a few powerful special interests?
+
+Does common sense lead to the extremes on this question because common sense actually believes in these extremes? Or does common sense stand silent in the face of these extremes because, as with Armstrong versus RCA, the more powerful side has ensured that it has the more powerful view?
+
+I don't mean to be mysterious. My own views are resolved. I believe it was right for common sense to revolt against the extremism of the Causbys. I believe it would be right for common sense to revolt against the extreme claims made today on behalf of "intellectual property." What the law demands today is increasingly as silly as a sheriff arresting an airplane for trespass. But the consequences of this silliness will be much more profound.
+
+*{The struggle}* that rages just now centers on two ideas: "piracy" and "property." My aim in this book's next two parts is to explore these two ideas.
+
+My method is not the usual method of an academic. I don't want to plunge you into a complex argument, buttressed with references to obscure French theorists - however natural that is for the weird sort we academics have become. Instead I begin in each part with a collection of stories that set a context within which these apparently simple ideas can be more fully understood.
+
+The two sections set up the core claim of this book: that while the Internet has indeed produced something fantastic and new, our government, pushed by big media to respond to this "something new," is destroying something very old. Rather than understanding the changes the Internet might permit, and rather than taking time to let "common sense" resolve how best to respond, we are allowing those most threatened by the changes to use their power to change the law - and more importantly, to use their power to change something fundamental about who we have always been.
+
+We allow this, I believe, not because it is right, and not because most of us really believe in these changes. We allow it because the interests most threatened are among the most powerful players in our depressingly compromised process of making law. This book is the story of one more consequence of this form of corruption - a consequence to which most of us remain oblivious.
+
+:C~ "PIRACY"
+
+1~intro_piracy [Intro]-#
+
+*{Since the inception}* of the law regulating creative property, there has been a war against "piracy." The precise contours of this concept, "piracy," are hard to sketch, but the animating injustice is easy to capture. As Lord Mansfield wrote in a case that extended the reach of English copyright law to include sheet music,
+
+_1 A person may use the copy by playing it, but he has no right to rob the author of the profit, by multiplying copies and disposing of them for his own use."~{ /{Bach}/ v. /{Longman,}/ 98 Eng. Rep. 1274 (1777) (Mansfield). }~
+
+Today we are in the middle of another "war" against "piracy." The Internet has provoked this war. The Internet makes possible the efficient spread of content. Peer-to-peer (p2p) file sharing is among the most efficient of the efficient technologies the Internet enables. Using distributed intelligence, p2p systems facilitate the easy spread of content in a way unimagined a generation ago.
+
+_{This}_ efficiency does not respect the traditional lines of copyright. The network doesn't discriminate between the sharing of copyrighted and uncopyrighted content. Thus has there been a vast amount of sharing of copyrighted content. That sharing in turn has excited the war, as copyright owners fear the sharing will "rob the author of the profit."
+
+The warriors have turned to the courts, to the legislatures, and increasingly to technology to defend their "property" against this "piracy." A generation of Americans, the warriors warn, is being raised to believe that "property" should be "free." Forget tattoos, never mind body piercing - our kids are becoming thieves!
+
+There's no doubt that "piracy" is wrong, and that pirates should be punished. But before we summon the executioners, we should put this notion of "piracy" in some context. For as the concept is increasingly used, at its core is an extraordinary idea that is almost certainly wrong.
+
+The idea goes something like this:
+
+_1 Creative work has value; whenever I use, or take, or build upon the creative work of others, I am taking from them something of value. Whenever I take something of value from someone else, I should have their permission. The taking of something of value from someone else without permission is wrong. It is a form of piracy."
+
+This view runs deep within the current debates. It is what NYU law professor Rochelle Dreyfuss criticizes as the "if value, then right" theory of creative property~{ See Rochelle Dreyfuss, "Expressive Genericity: Trademarks as Language in the Pepsi Generation," /{Notre Dame Law Review}/ 65 (1990): 397. }~ - if there is value, then someone must have a right to that value. It is the perspective that led a composers' rights organization, ASCAP, to sue the Girl Scouts for failing to pay for the songs that girls sang around Girl Scout campfires.~{ Lisa Bannon, "The Birds May Sing, but Campers Can't Unless They Pay Up," /{Wall Street Journal,}/ 21 August 1996, available at link #3; Jonathan Zittrain, "Calling Off the Copyright War: In Battle of Property vs. Free Speech, No One Wins," /{Boston Globe,}/ 24 November 2002. }~ There was "value" (the songs) so there must have been a "right" - even against the Girl Scouts.
+
+This idea is certainly a possible understanding of how creative property should work. It might well be a possible design for a system of law protecting creative property. But the "if value, then right" theory of creative property has never been America's theory of creative property. It has never taken hold within our law.
+
+Instead, in our tradition, intellectual property is an instrument. It sets the groundwork for a richly creative society but remains subservient to the value of creativity. The current debate has this turned around. We have become so concerned with protecting the instrument that we are losing sight of the value.
+
+The source of this confusion is a distinction that the law no longer takes care to draw - the distinction between republishing someone's work on the one hand and building upon or transforming that work on the other. Copyright law at its birth had only publishing as its concern; copyright law today regulates both.
+
+Before the technologies of the Internet, this conflation didn't matter all that much. The technologies of publishing were expensive; that meant the vast majority of publishing was commercial. Commercial entities could bear the burden of the law - even the burden of the Byzantine complexity that copyright law has become. It was just one more expense of doing business.
+
+But with the birth of the Internet, this natural limit to the reach of the law has disappeared. The law controls not just the creativity of commercial creators but effectively that of anyone. Although that expansion would not matter much if copyright law regulated only "copying," when the law regulates as broadly and obscurely as it does, the extension matters a lot. The burden of this law now vastly outweighs any original benefit - certainly as it affects noncommercial creativity, and increasingly as it affects commercial creativity as well. Thus, as we'll see more clearly in the chapters below, the law's role is less and less to support creativity, and more and more to protect certain industries against competition. Just at the time digital technology could unleash an extraordinary range of commercial and noncommercial creativity, the law burdens this creativity with insanely complex and vague rules and with the threat of obscenely severe penalties. We may be seeing, as Richard Florida writes, the "Rise of the Creative Class."~{ In /{The Rise of the Creative Class}/ (New York: Basic Books, 2002), Richard Florida documents a shift in the nature of labor toward a labor of creativity. His work, however, doesn't directly address the legal conditions under which that creativity is enabled or stifled. I certainly agree with him about the importance and significance of this change, but I also believe the conditions under which it will be enabled are much more tenuous. }~ Unfortunately, we are also seeing an extraordinary rise of regulation of this creative class.
+
+These burdens make no sense in our tradition. We should begin by understanding that tradition a bit more and by placing in their proper context the current battles about behavior labeled "piracy."
+
+1~ Chapter One: Creators
+
+In 1928, a cartoon character was born. An early Mickey Mouse made his debut in May of that year, in a silent flop called /{Plane Crazy}/. In November, in New York City's Colony Theater, in the first widely distributed cartoon synchronized with sound, /{Steamboat Willie}/ brought to life the character that would become Mickey Mouse.
+
+Synchronized sound had been introduced to film a year earlier in the movie /{The Jazz Singer}/. That success led Walt Disney to copy the technique and mix sound with cartoons. No one knew whether it would work or, if it did work, whether it would win an audience. But when Disney ran a test in the summer of 1928, the results were unambiguous. As Disney describes that first experiment,
+
+_1 A couple of my boys could read music, and one of them could play a mouth organ. We put them in a room where they could not see the screen and arranged to pipe their sound into the room where our wives and friends were going to see the picture.
+
+_1 The boys worked from a music and sound-effects score. After several false starts, sound and action got off with the gun. The mouth organist played the tune, the rest of us in the sound department bammed tin pans and blew slide whistles on the beat. The synchronization was pretty close.
+
+_1 The effect on our little audience was nothing less than electric. They responded almost instinctively to this union of sound and motion. I thought they were kidding me. So they put me in the audience and ran the action again. It was terrible, but it was wonderful! And it was something new!"~{ Leonard Maltin, /{Of Mice and Magic: A History of American Animated Cartoons}/ (New York: Penguin Books, 1987), 34-35. }~
+
+Disney's then partner, and one of animation's most extraordinary talents, Ub Iwerks, put it more strongly: "I have never been so thrilled in my life. Nothing since has ever equaled it."
+
+Disney had created something very new, based upon something relatively new. Synchronized sound brought life to a form of creativity that had rarely - except in Disney's hands - been anything more than filler for other films. Throughout animation's early history, it was Disney's invention that set the standard that others struggled to match. And quite often, Disney's great genius, his spark of creativity, was built upon the work of others.
+
+This much is familiar. What you might not know is that 1928 also marks another important transition. In that year, a comic (as opposed to cartoon) genius created his last independently produced silent film. That genius was Buster Keaton. The film was /{Steamboat Bill, Jr.}/
+
+Keaton was born into a vaudeville family in 1895. In the era of silent film, he had mastered using broad physical comedy as a way to spark uncontrollable laughter from his audience. /{Steamboat Bill, Jr.}/ was a classic of this form, famous among film buffs for its incredible stunts. The film was classic Keaton - wildly popular and among the best of its genre.
+
+/{Steamboat Bill, Jr.}/ appeared before Disney's cartoon /{Steamboat Willie}/. The coincidence of titles is not coincidental. /{Steamboat Willie}/ is a direct cartoon parody of /{Steamboat Bill}/,~{ I am grateful to David Gerstein and his careful history, described at link #4. According to Dave Smith of the Disney Archives, Disney paid royalties to use the music for five songs in /{Steamboat Willie:}/ "Steamboat Bill," "The Simpleton" (Delille), "Mischief Makers" (Carbonara), "Joyful Hurry No. 1" (Baron), and "Gawky Rube" (Lakay). A sixth song, "The Turkey in the Straw," was already in the public domain. Letter from David Smith to Harry Surden, 10 July 2003, on file with author. }~ and both are built upon a common song as a source. It is not just from the invention of synchronized sound in /{The Jazz Singer}/ that we get /{Steamboat Willie}/. It is also from Buster Keaton's invention of /{Steamboat Bill, Jr.}/, itself inspired by the song "Steamboat Bill," that we get /{Steamboat Willie}/, and then from /{Steamboat Willie}/, Mickey Mouse.
+
+This "borrowing" was nothing unique, either for Disney or for the industry. Disney was always parroting the feature-length mainstream films of his day.~{ He was also a fan of the public domain. See Chris Sprigman, "The Mouse that Ate the Public Domain," Findlaw, 5 March 2002, at link #5. }~ So did many others. Early cartoons are filled with knockoffs - slight variations on winning themes; retellings of ancient stories. The key to success was the brilliance of the differences. With Disney, it was sound that gave his animation its spark. Later, it was the quality of his work relative to the production-line cartoons with which he competed. Yet these additions were built upon a base that was borrowed. Disney added to the work of others before him, creating something new out of something just barely old.
+
+Sometimes this borrowing was slight. Sometimes it was significant. Think about the fairy tales of the Brothers Grimm. If you're as oblivious as I was, you're likely to think that these tales are happy, sweet stories, appropriate for any child at bedtime. In fact, the Grimm fairy tales are, well, for us, grim. It is a rare and perhaps overly ambitious parent who would dare to read these bloody, moralistic stories to his or her child, at bedtime or anytime.
+
+Disney took these stories and retold them in a way that carried them into a new age. He animated the stories, with both characters and light. Without removing the elements of fear and danger altogether, he made funny what was dark and injected a genuine emotion of compassion where before there was fear. And not just with the work of the Brothers Grimm. Indeed, the catalog of Disney work drawing upon the work of others is astonishing when set together: /{Snow White}/ (1937), /{Fantasia}/ (1940), /{Pinocchio}/ (1940), /{Dumbo}/ (1941), /{Bambi}/ (1942), /{Song of the South}/ (1946), /{Cinderella}/ (1950), /{Alice in Wonderland}/ (1951), /{Robin Hood}/ (1952), /{Peter Pan}/ (1953), /{Lady and the Tramp}/ (1955), /{Mulan}/ (1998), /{Sleeping Beauty}/ (1959), /{101 Dalmatians}/ (1961), /{The Sword in the Stone}/ (1963), and /{The Jungle Book}/ (1967) - not to mention a recent example that we should perhaps quickly forget, /{Treasure Planet}/ (2003). In all of these cases, Disney (or Disney, Inc.) ripped creativity from the culture around him, mixed that creativity with his own extraordinary talent, and then burned that mix into the soul of his culture. Rip, mix, and burn.
+
+This is a kind of creativity. It is a creativity that we should remember and celebrate. There are some who would say that there is no creativity except this kind. We don't need to go that far to recognize its importance. We could call this "Disney creativity," though that would be a bit misleading. It is, more precisely, "Walt Disney creativity" - a form of expression and genius that builds upon the culture around us and makes it something different.
+
+In 1928, the culture that Disney was free to draw upon was relatively fresh. The public domain in 1928 was not very old and was therefore quite vibrant. The average term of copyright was just around thirty years - for that minority of creative work that was in fact copyrighted.~{ Until 1976, copyright law granted an author the possibility of two terms: an initial term and a renewal term. I have calculated the "average" term by determining the weighted average of total registrations for any particular year, and the proportion renewing. Thus, if 100 copyrights are registered in year 1, and only 15 are renewed, and the renewal term is 28 years, then the average term is 32.2 years. For the renewal data and other relevant data, see the Web site associated with this book, available at link #6. }~ That means that for thirty years, on average, the authors or copyright holders of a creative work had an "exclusive right" to control certain uses of the work. To use this copyrighted work in limited ways required the permission of the copyright owner.
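The footnote's weighted-average arithmetic can be sketched as a short calculation (a minimal illustration only: the 28-year initial and renewal terms reflect pre-1976 law, and the 100-registration / 15-renewal figures are the footnote's hypothetical example, not real registration data):

```python
# Weighted-average copyright term under the pre-1976 two-term system:
# every registered work gets the initial term; only the fraction that
# renews gets the renewal term on top.

def average_term(registered, renewed, initial_term=28, renewal_term=28):
    """Return the average effective term in years."""
    renewal_rate = renewed / registered
    return initial_term + renewal_rate * renewal_term

# The footnote's example: 100 registrations, 15 renewals.
print(average_term(100, 15))  # about 32.2 years, matching the footnote
```

With only 15 percent of owners bothering to renew, the average term stays close to the 28-year initial term, which is why the book can describe the average as "just around thirty years."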
+
+At the end of a copyright term, a work passes into the public domain. No permission is then needed to draw upon or use that work. No permission and, hence, no lawyers. The public domain is a "lawyer-free zone." Thus, most of the content from the nineteenth century was free for Disney to use and build upon in 1928. It was free for anyone - whether connected or not, whether rich or not, whether approved or not - to use and build upon.
+
+This is the way things always were - until quite recently. For most of our history, the public domain was just over the horizon. From 1790 until 1978, the average copyright term was never more than thirty-two years, meaning that most culture just a generation and a half old was free for anyone to build upon without the permission of anyone else. Today's equivalent would be for creative work from the 1960s and 1970s to now be free for the next Walt Disney to build upon without permission. Yet today, the public domain is presumptive only for content from before the Great Depression.
+
+Of course, Walt Disney had no monopoly on "Walt Disney creativity." Nor does America. The norm of free culture has, until recently, and except within totalitarian nations, been broadly exploited and quite universal.
+
+Consider, for example, a form of creativity that seems strange to many Americans but that is inescapable within Japanese culture: /{manga}/, or comics. The Japanese are fanatics about comics. Some 40 percent of publications are comics, and 30 percent of publication revenue derives from comics. They are everywhere in Japanese society, at every magazine stand, carried by a large proportion of commuters on Japan's extraordinary system of public transportation.
+
+Americans tend to look down upon this form of culture. That's an unattractive characteristic of ours. We're likely to misunderstand much about manga, because few of us have ever read anything close to the stories that these "graphic novels" tell. For the Japanese, manga cover every aspect of social life. For us, comics are "men in tights." And anyway, it's not as if the New York subways are filled with readers of Joyce or even Hemingway. People of different cultures distract themselves in different ways, the Japanese in this interestingly different way.
+
+But my purpose here is not to understand manga. It is to describe a variant on manga that from a lawyer's perspective is quite odd, but from a Disney perspective is quite familiar.
+
+This is the phenomenon of /{doujinshi}/. Doujinshi are also comics, but they are a kind of copycat comic. A rich ethic governs the creation of doujinshi. It is not doujinshi if it is /{just}/ a copy; the artist must make a contribution to the art he copies, by transforming it either subtly or significantly. A doujinshi comic can thus take a mainstream comic and develop it differently - with a different story line. Or the comic can keep the character in character but change its look slightly. There is no formula for what makes the doujinshi sufficiently "different." But they must be different if they are to be considered true doujinshi. Indeed, there are committees that review doujinshi for inclusion within shows and reject any copycat comic that is merely a copy.
+
+These copycat comics are not a tiny part of the manga market. They are huge. More than 33,000 "circles" of creators from across Japan produce these bits of Walt Disney creativity. More than 450,000 Japanese come together twice a year, in the largest public gathering in the country, to exchange and sell them. This market exists in parallel to the mainstream commercial manga market. In some ways, it obviously competes with that market, but there is no sustained effort by those who control the commercial manga market to shut the doujinshi market down. It flourishes, despite the competition and despite the law.
+
+The most puzzling feature of the doujinshi market, for those trained in the law, at least, is that it is allowed to exist at all. Under Japanese copyright law, which in this respect (on paper) mirrors American copyright law, the doujinshi market is an illegal one. Doujinshi are plainly "derivative works." There is no general practice by doujinshi artists of securing the permission of the manga creators. Instead, the practice is simply to take and modify the creations of others, as Walt Disney did with /{Steamboat Bill, Jr.}/ Under both Japanese and American law, that "taking" without the permission of the original copyright owner is illegal. It is an infringement of the original copyright to make a copy or a derivative work without the original copyright owner's permission.
+
+Yet this illegal market exists and indeed flourishes in Japan, and in the view of many, it is precisely because it exists that Japanese manga flourish. As American graphic novelist Judd Winick said to me, "The early days of comics in America are very much like what's going on in Japan now. ... American comics were born out of copying each other. ... That's how [the artists] learn to draw - by going into comic books and not tracing them, but looking at them and copying them" and building from them.~{ For an excellent history, see Scott McCloud, /{Reinventing Comics}/ (New York: Perennial, 2000). }~
+
+American comics now are quite different, Winick explains, in part because of the legal difficulty of adapting comics the way doujinshi are allowed. Speaking of Superman, Winick told me, "there are these rules and you have to stick to them." There are things Superman "cannot" do. "As a creator, it's frustrating having to stick to some parameters which are fifty years old."
+
+The norm in Japan mitigates this legal difficulty. Some say it is precisely the benefit accruing to the Japanese manga market that explains the mitigation. Temple University law professor Salil Mehra, for example, hypothesizes that the manga market accepts these technical violations because they spur the manga market to be more wealthy and productive. Everyone would be worse off if doujinshi were banned, so the law does not ban doujinshi.~{ See Salil K. Mehra, "Copyright and Comics in Japan: Does Law Explain Why All the Comics My Kid Watches Are Japanese Imports?" /{Rutgers Law Review}/ 55 (2002): 155, 182. "[T]here might be a collective economic rationality that would lead manga and anime artists to forgo bringing legal actions for infringement. One hypothesis is that all manga artists may be better off collectively if they set aside their individual self-interest and decide not to press their legal rights. This is essentially a prisoner's dilemma solved." }~
+
+The problem with this story, however, as Mehra plainly acknowledges, is that the mechanism producing this laissez faire response is not clear. It may well be that the market as a whole is better off if doujinshi are permitted rather than banned, but that doesn't explain why individual copyright owners don't sue nonetheless. If the law has no general exception for doujinshi, and indeed in some cases individual manga artists have sued doujinshi artists, why is there not a more general pattern of blocking this "free taking" by the doujinshi culture?
+
+I spent four wonderful months in Japan, and I asked this question as often as I could. Perhaps the best account in the end was offered by a friend from a major Japanese law firm. "We don't have enough lawyers," he told me one afternoon. There "just aren't enough resources to prosecute cases like this."
+
+This is a theme to which we will return: that regulation by law is a function of both the words on the books and the costs of making those words have effect. For now, focus on the obvious question that is begged: Would Japan be better off with more lawyers? Would manga be richer if doujinshi artists were regularly prosecuted? Would the Japanese gain something important if they could end this practice of uncompensated sharing? Does piracy here hurt the victims of the piracy, or does it help them? Would lawyers fighting this piracy help their clients or hurt them?
+
+!_ Let's pause for a moment.
+
+If you're like I was a decade ago, or like most people are when they first start thinking about these issues, then just about now you should be puzzled about something you hadn't thought through before.
+
+We live in a world that celebrates "property." I am one of those celebrants. I believe in the value of property in general, and I also believe in the value of that weird form of property that lawyers call "intellectual property."~{ The term /{intellectual property}/ is of relatively recent origin. See Siva Vaidhyanathan, /{Copyrights and Copywrongs,}/ 11 (New York: New York University Press, 2001). See also Lawrence Lessig, /{The Future of Ideas}/ (New York: Random House, 2001), 293 n. 26. The term accurately describes a set of "property" rights - copyright, patents, trademark, and trade-secret - but the nature of those rights is very different. }~ A large, diverse society cannot survive without property; a large, diverse, and modern society cannot flourish without intellectual property.
+
+But it takes just a second's reflection to realize that there is plenty of value out there that "property" doesn't capture. I don't mean "money can't buy you love," but rather, value that is plainly part of a process of production, including commercial as well as noncommercial production. If Disney animators had stolen a set of pencils to draw Steamboat Willie, we'd have no hesitation in condemning that taking as wrong - even though trivial, even if unnoticed. Yet there was nothing wrong, at least under the law of the day, with Disney's taking from Buster Keaton or from the Brothers Grimm. There was nothing wrong with the taking from Keaton because Disney's use would have been considered "fair." There was nothing wrong with the taking from the Grimms because the Grimms' work was in the public domain.
+
+Thus, even though the things that Disney took - or more generally, the things taken by anyone exercising Walt Disney creativity - are valuable, our tradition does not treat those takings as wrong. Some things remain free for the taking within a free culture, and that freedom is good.
+
+The same with the doujinshi culture. If a doujinshi artist broke into a publisher's office and ran off with a thousand copies of his latest work - or even one copy - without paying, we'd have no hesitation in saying the artist was wrong. In addition to having trespassed, he would have stolen something of value. The law bans that stealing in whatever form, whether large or small.
+
+Yet there is an obvious reluctance, even among Japanese lawyers, to say that the copycat comic artists are "stealing." This form of Walt Disney creativity is seen as fair and right, even if lawyers in particular find it hard to say why.
+
+It's the same with a thousand examples that appear everywhere once you begin to look. Scientists build upon the work of other scientists without asking or paying for the privilege. ("Excuse me, Professor Einstein, but may I have permission to use your theory of relativity to show that you were wrong about quantum physics?") Acting companies perform adaptations of the works of Shakespeare without securing permission from anyone. (Does /{anyone}/ believe Shakespeare would be better spread within our culture if there were a central Shakespeare rights clearinghouse that all productions of Shakespeare must appeal to first?) And Hollywood goes through cycles with a certain kind of movie: five asteroid films in the late 1990s; two volcano disaster films in 1997.
+
+Creators here and everywhere are always and at all times building upon the creativity that went before and that surrounds them now. That building is always and everywhere at least partially done without permission and without compensating the original creator. No society, free or controlled, has ever demanded that every use be paid for or that permission for Walt Disney creativity must always be sought. Instead, every society has left a certain bit of its culture free for the taking - free societies more fully than unfree, perhaps, but all societies to some degree.
+
+The hard question is therefore not /{whether}/ a culture is free. All cultures are free to some degree. The hard question instead is "/{How}/ free is this culture?" How much, and how broadly, is the culture free for others to take and build upon? Is that freedom limited to party members? To members of the royal family? To the top ten corporations on the New York Stock Exchange? Or is that freedom spread broadly? To artists generally, whether affiliated with the Met or not? To musicians generally, whether white or not? To filmmakers generally, whether affiliated with a studio or not?
+
+Free cultures are cultures that leave a great deal open for others to build upon; unfree, or permission, cultures leave much less. Ours was a free culture. It is becoming much less so.
+
+1~ Chapter Two: "Mere Copyists"
+
+*{In 1839,}* Louis Daguerre invented the first practical technology for producing what we would call "photographs." Appropriately enough, they were called "daguerreotypes." The process was complicated and expensive, and the field was thus limited to professionals and a few zealous and wealthy amateurs. (There was even an American Daguerre Association that helped regulate the industry, as do all such associations, by keeping competition down so as to keep prices up.)
+
+Yet despite high prices, the demand for daguerreotypes was strong. This pushed inventors to find simpler and cheaper ways to make "automatic pictures." William Talbot soon discovered a process for making "negatives." But because the negatives were glass, and had to be kept wet, the process still remained expensive and cumbersome. In the 1870s, dry plates were developed, making it easier to separate the taking of a picture from its developing. These were still plates of glass, and thus it was still not a process within reach of most amateurs.
+
+The technological change that made mass photography possible didn't happen until 1888, and was the creation of a single man. George Eastman, himself an amateur photographer, was frustrated by the technology of photographs made with plates. In a flash of insight (so to speak), Eastman saw that if the film could be made to be flexible, it could be held on a single spindle. That roll could then be sent to a developer, driving the costs of photography down substantially. By lowering the costs, Eastman expected he could dramatically broaden the population of photographers.
+
+Eastman developed flexible, emulsion-coated paper film and placed rolls of it in small, simple cameras: the Kodak. The device was marketed on the basis of its simplicity. "You press the button and we do the rest."~{ Reese V. Jenkins, /{Images and Enterprise}/ (Baltimore: Johns Hopkins University Press, 1975), 112. }~ As he described in /{The Kodak Primer}/:
+
+_1 The principle of the Kodak system is the separation of the work that any person whomsoever can do in making a photograph, from the work that only an expert can do. ... We furnish anybody, man, woman or child, who has sufficient intelligence to point a box straight and press a button, with an instrument which altogether removes from the practice of photography the necessity for exceptional facilities or, in fact, any special knowledge of the art. It can be employed without preliminary study, without a darkroom and without chemicals."~{ Brian Coe, /{The Birth of Photography}/ (New York: Taplinger Publishing, 1977), 53. }~
+
+For $25, anyone could make pictures. The camera came preloaded with film, and when it had been used, the camera was returned to an Eastman factory, where the film was developed. Over time, of course, the cost of the camera and the ease with which it could be used both improved. Roll film thus became the basis for the explosive growth of popular photography. Eastman's camera first went on sale in 1888; one year later, Kodak was printing more than six thousand negatives a day. From 1888 through 1909, while industrial production was rising by 4.7 percent, photographic equipment and material sales increased by 11 percent.~{ Jenkins, 177. }~ Eastman Kodak's sales during the same period experienced an average annual increase of over 17 percent.~{ Based on a chart in Jenkins, p. 178. }~
+
+The real significance of Eastman's invention, however, was not economic. It was social. Professional photography gave individuals a glimpse of places they would never otherwise see. Amateur photography gave them the ability to record their own lives in a way they had never been able to do before. As author Brian Coe notes, "For the first time the snapshot album provided the man on the street with a permanent record of his family and its activities. ... For the first time in history there exists an authentic visual record of the appearance and activities of the common man made without [literary] interpretation or bias."~{ Coe, 58. }~
+
+In this way, the Kodak camera and film were technologies of expression. The pencil or paintbrush was also a technology of expression, of course. But it took years of training before they could be deployed by amateurs in any useful or effective way. With the Kodak, expression was possible much sooner and more simply. The barrier to expression was lowered. Snobs would sneer at its "quality"; professionals would discount it as irrelevant. But watch a child study how best to frame a picture and you get a sense of the experience of creativity that the Kodak enabled. Democratic tools gave ordinary people a way to express themselves more easily than any tools could have before.
+
+What was required for this technology to flourish? Obviously, Eastman's genius was an important part. But also important was the legal environment within which Eastman's invention grew. For early in the history of photography, there was a series of judicial decisions that could well have changed the course of photography substantially. Courts were asked whether the photographer, amateur or professional, required permission before he could capture and print whatever image he wanted. Their answer was no.~{ For illustrative cases, see, for example, /{Pavesich}/ v. /{N.E. Life Ins. Co.,}/ 50 S.E. 68 (Ga. 1905); /{Foster-Milburn Co.}/ v. /{Chinn,}/ 120 S.W. 364, 366 (Ky. 1909); /{Corliss}/ v. /{Walker,}/ 64 F. 280 (Mass. Dist. Ct. 1894). }~
+
+The arguments in favor of requiring permission will sound surprisingly familiar. The photographer was "taking" something from the person or building whose photograph he shot - pirating something of value. Some even thought he was taking the target's soul. Just as Disney was not free to take the pencils that his animators used to draw Mickey, so, too, should these photographers not be free to take images that they thought valuable.
+
+On the other side was an argument that should be familiar, as well. Sure, there may be something of value being used. But citizens should have the right to capture at least those images that stand in public view. (Louis Brandeis, who would become a Supreme Court Justice, thought the rule should be different for images from private spaces.~{ Samuel D. Warren and Louis D. Brandeis, "The Right to Privacy," /{Harvard Law Review}/ 4 (1890): 193. }~) It may be that this means that the photographer gets something for nothing. Just as Disney could take inspiration from /{Steamboat Bill, Jr.}/ or the Brothers Grimm, the photographer should be free to capture an image without compensating the source.
+
+Fortunately for Mr. Eastman, and for photography in general, these early decisions went in favor of the pirates. In general, no permission would be required before an image could be captured and shared with others. Instead, permission was presumed. Freedom was the default. (The law would eventually craft an exception for famous people: commercial photographers who snap pictures of famous people for commercial purposes have more restrictions than the rest of us. But in the ordinary case, the image can be captured without clearing the rights to do the capturing.~{ See Melville B. Nimmer, "The Right of Publicity," /{Law and Contemporary Problems}/ 19 (1954): 203; William L. Prosser, "Privacy," /{California Law Review}/ 48 (1960): 398-407; /{White}/ v. /{Samsung Electronics America, Inc.,}/ 971 F. 2d 1395 (9th Cir. 1992), cert. denied, 508 U.S. 951 (1993). }~)
+
+We can only speculate about how photography would have developed had the law gone the other way. If the presumption had been against the photographer, then the photographer would have had to demonstrate permission. Perhaps Eastman Kodak would have had to demonstrate permission, too, before it developed the film upon which images were captured. After all, if permission were not granted, then Eastman Kodak would be benefiting from the "theft" committed by the photographer. Just as Napster benefited from the copyright infringements committed by Napster users, Kodak would be benefiting from the "image-right" infringement of its photographers. We could imagine the law then requiring that some form of permission be demonstrated before a company developed pictures. We could imagine a system developing to demonstrate that permission.
+
+But though we could imagine this system of permission, it would be very hard to see how photography could have flourished as it did if the requirement for permission had been built into the rules that govern it. Photography would have existed. It would have grown in importance over time. Professionals would have continued to use the technology as they did - since professionals could have more easily borne the burdens of the permission system. But the spread of photography to ordinary people would not have occurred. Nothing like that growth would have been realized. And certainly, nothing like that growth in a democratic technology of expression would have been realized.
+
+If you drive through San Francisco's Presidio, you might see two gaudy yellow school buses painted over with colorful and striking images, and the logo "Just Think!" in place of the name of a school. But there's little that's "just" cerebral in the projects that these buses enable. These buses are filled with technologies that teach kids to tinker with film. Not the film of Eastman. Not even the film of your VCR. Rather the "film" of digital cameras. Just Think! is a project that enables kids to make films, as a way to understand and critique the filmed culture that they find all around them. Each year, these buses travel to more than thirty schools and enable three hundred to five hundred children to learn something about media by doing something with media. By doing, they think. By tinkering, they learn.
+
+These buses are not cheap, but the technology they carry is increasingly so. The cost of a high-quality digital video system has fallen dramatically. As one analyst puts it, "Five years ago, a good real-time digital video editing system cost $25,000. Today you can get professional quality for $595."~{ H. Edward Goldberg, "Essential Presentation Tools: Hardware and Software You Need to Create Digital Multimedia Presentations," cadalyst, 1 February 2002, available at link #7. }~ These buses are filled with technology that would have cost hundreds of thousands just ten years ago. And it is now feasible to imagine not just buses like this, but classrooms across the country where kids are learning more and more of something teachers call "media literacy."
+
+"Media literacy," as Dave Yanofsky, the executive director of Just Think!, puts it, "is the ability ... to understand, analyze, and deconstruct media images. Its aim is to make [kids] literate about the way media works, the way it's constructed, the way it's delivered, and the way people access it."
+
+This may seem like an odd way to think about "literacy." For most people, literacy is about reading and writing. Faulkner and Hemingway and noticing split infinitives are the things that "literate" people know about.
+
+Maybe. But in a world where children see on average 390 hours of television commercials per year, or between 20,000 and 45,000 commercials generally,~{ Judith Van Evra, /{Television and Child Development}/ (Hillsdale, N.J.: Lawrence Erlbaum Associates, 1990); "Findings on Family and TV Study," /{Denver Post,}/ 25 May 1997, B6. }~ it is increasingly important to understand the "grammar" of media. For just as there is a grammar for the written word, so, too, is there one for media. And just as kids learn how to write by writing lots of terrible prose, kids learn how to write media by constructing lots of (at least at first) terrible media.
+
+A growing field of academics and activists sees this form of literacy as crucial to the next generation of culture. For though anyone who has written understands how difficult writing is - how difficult it is to sequence the story, to keep a reader's attention, to craft language to be understandable - few of us have any real sense of how difficult media is. Or more fundamentally, few of us have a sense of how media works, how it holds an audience or leads it through a story, how it triggers emotion or builds suspense.
+
+It took filmmaking a generation before it could do these things well. But even then, the knowledge was in the filming, not in writing about the film. The skill came from experiencing the making of a film, not from reading a book about it. One learns to write by writing and then reflecting upon what one has written. One learns to write with images by making them and then reflecting upon what one has created.
+
+This grammar has changed as media has changed. When it was just film, as Elizabeth Daley, executive director of the University of Southern California's Annenberg Center for Communication and dean of the USC School of Cinema-Television, explained to me, the grammar was about "the placement of objects, color, ... rhythm, pacing, and texture."~{ Interview with Elizabeth Daley and Stephanie Barish, 13 December 2002. }~ But as computers open up an interactive space where a story is "played" as well as experienced, that grammar changes. The simple control of narrative is lost, and so other techniques are necessary. Author Michael Crichton had mastered the narrative of science fiction. But when he tried to design a computer game based on one of his works, it was a new craft he had to learn. How to lead people through a game without their feeling they have been led was not obvious, even to a wildly successful author.~{ See Scott Steinberg, "Crichton Gets Medieval on PCs," E!online, 4 November 2000, available at link #8; "Timeline," 22 November 2000, available at link #9. }~
+
+This skill is precisely the craft a filmmaker learns. As Daley describes, "people are very surprised about how they are led through a film. [I]t is perfectly constructed to keep you from seeing it, so you have no idea. If a filmmaker succeeds you do not know how you were led." If you know you were led through a film, the film has failed.
+
+Yet the push for an expanded literacy - one that goes beyond text to include audio and visual elements - is not about making better film directors. The aim is not to improve the profession of filmmaking at all. Instead, as Daley explained,
+
+_1 From my perspective, probably the most important digital divide is not access to a box. It's the ability to be empowered with the language that that box works in. Otherwise only a very few people can write with this language, and all the rest of us are reduced to being read-only.
+
+"Read-only." Passive recipients of culture produced elsewhere. Couch potatoes. Consumers. This is the world of media from the twentieth century.
+
+The twenty-first century could be different. This is the crucial point: It could be both read and write. Or at least reading and better understanding the craft of writing. Or best, reading and understanding the tools that enable the writing to lead or mislead. The aim of any literacy, and this literacy in particular, is to "empower people to choose the appropriate language for what they need to create or express."~{ Interview with Daley and Barish. }~ It is to enable students "to communicate in the language of the twenty-first century."~{ Ibid. }~
+
+As with any language, this language comes more easily to some than to others. It doesn't necessarily come more easily to those who excel in written language. Daley and Stephanie Barish, director of the Institute for Multimedia Literacy at the Annenberg Center, describe one particularly poignant example of a project they ran in a high school. The high school was a very poor inner-city Los Angeles school. In all the traditional measures of success, this school was a failure. But Daley and Barish ran a program that gave kids an opportunity to use film to express meaning about something the students know something about - gun violence.
+
+The class was held on Friday afternoons, and it created a relatively new problem for the school. While the challenge in most classes was getting the kids to come, the challenge in this class was keeping them away. The "kids were showing up at 6 A.M. and leaving at 5 at night," said Barish. They were working harder than in any other class to do what education should be about - learning how to express themselves.
+
+Using whatever "free web stuff they could find," and relatively simple tools to enable the kids to mix "image, sound, and text," Barish said this class produced a series of projects that showed something about gun violence that few would otherwise understand. This was an issue close to the lives of these students. The project "gave them a tool and empowered them to be able to both understand it and talk about it," Barish explained. That tool succeeded in creating expression - far more successfully and powerfully than could have been created using only text. "If you had said to these students, 'you have to do it in text,' they would've just thrown their hands up and gone and done something else," Barish described, in part, no doubt, because expressing themselves in text is not something these students can do well. Yet neither is text a form in which /{these}/ ideas can be expressed well. The power of this message depended upon its connection to this form of expression.
+
+"But isn't education about teaching kids to write?" I asked. In part, of course, it is. But why are we teaching kids to write? Education, Daley explained, is about giving students a way of "constructing meaning." To say that that means just writing is like saying teaching writing is only about teaching kids how to spell. Text is one part - and increasingly, not the most powerful part - of constructing meaning. As Daley explained in the most moving part of our interview,
+
+_1 What you want is to give these students ways of constructing meaning. If all you give them is text, they're not going to do it. Because they can't. You know, you've got Johnny who can look at a video, he can play a video game, he can do graffiti all over your walls, he can take your car apart, and he can do all sorts of other things. He just can't read your text. So Johnny comes to school and you say, "Johnny, you're illiterate. Nothing you can do matters." Well, Johnny then has two choices: He can dismiss you or he [can] dismiss himself. If his ego is healthy at all, he's going to dismiss you. [But i]nstead, if you say, "Well, with all these things that you can do, let's talk about this issue. Play for me music that you think reflects that, or show me images that you think reflect that, or draw for me something that reflects that." Not by giving a kid a video camera and ... saying, "Let's go have fun with the video camera and make a little movie." But instead, really help you take these elements that you understand, that are your language, and construct meaning about the topic. ...
+
+_1 That empowers enormously. And then what happens, of course, is eventually, as it has happened in all these classes, they bump up against the fact, "I need to explain this and I really need to write something." And as one of the teachers told Stephanie, they would rewrite a paragraph 5, 6, 7, 8 times, till they got it right.
+
+_1 Because they needed to. There was a reason for doing it. They needed to say something, as opposed to just jumping through your hoops. They actually needed to use a language that they didn't speak very well. But they had come to understand that they had a lot of power with this language.
+
+When two planes crashed into the World Trade Center, another into the Pentagon, and a fourth into a Pennsylvania field, all media around the world shifted to this news. Every moment of just about every day for that week, and for weeks after, television in particular, and media generally, retold the story of the events we had just witnessed. The telling was a retelling, because we had seen the events that were described. The genius of this awful act of terrorism was that the delayed second attack was perfectly timed to assure that the whole world would be watching.
+
+These retellings had an increasingly familiar feel. There was music scored for the intermissions, and fancy graphics that flashed across the screen. There was a formula to interviews. There was "balance," and seriousness. This was news choreographed in the way we have increasingly come to expect it, "news as entertainment," even if the entertainment is tragedy.
+
+But in addition to this produced news about the "tragedy of September 11," those of us tied to the Internet came to see a very different production as well. The Internet was filled with accounts of the same events. Yet these Internet accounts had a very different flavor. Some people constructed photo pages that captured images from around the world and presented them as slide shows with text. Some offered open letters. There were sound recordings. There was anger and frustration. There were attempts to provide context. There was, in short, an extraordinary worldwide barn raising, in the sense Mike Godwin uses the term in his book /{Cyber Rights}/, around a news event that had captured the attention of the world. There was ABC and CBS, but there was also the Internet.
+
+I don't mean simply to praise the Internet - though I do think the people who supported this form of speech should be praised. I mean instead to point to a significance in this form of speech. For like a Kodak, the Internet enables people to capture images. And like in a movie by a student on the "Just Think!" bus, the visual images could be mixed with sound or text.
+
+But unlike any technology for simply capturing images, the Internet allows these creations to be shared with an extraordinary number of people, practically instantaneously. This is something new in our tradition - not just that culture can be captured mechanically, and obviously not just that events are commented upon critically, but that this mix of captured images, sound, and commentary can be widely spread practically instantaneously.
+
+September 11 was not an aberration. It was a beginning. Around the same time, a form of communication that has grown dramatically was just beginning to come into public consciousness: the Web-log, or blog. The blog is a kind of public diary, and within some cultures, such as in Japan, it functions very much like a diary. In those cultures, it records private facts in a public way - it's a kind of electronic /{Jerry Springer}/, available anywhere in the world.
+
+But in the United States, blogs have taken on a very different character. There are some who use the space simply to talk about their private life. But there are many who use the space to engage in public discourse. Discussing matters of public import, criticizing others who are mistaken in their views, criticizing politicians about the decisions they make, offering solutions to problems we all see: blogs create the sense of a virtual public meeting, but one in which we don't all hope to be there at the same time and in which conversations are not necessarily linked. The best of the blog entries are relatively short; they point directly to words used by others, criticizing with or adding to them. They are arguably the most important form of unchoreographed public discourse that we have.
+
+That's a strong statement. Yet it says as much about our democracy as it does about blogs. This is the part of America that is most difficult for those of us who love America to accept: Our democracy has atrophied. Of course we have elections, and most of the time the courts allow those elections to count. A relatively small number of people vote in those elections. The cycle of these elections has become totally professionalized and routinized. Most of us think this is democracy.
+
+But democracy has never just been about elections. Democracy means rule by the people, but rule means something more than mere elections. In our tradition, it also means control through reasoned discourse. This was the idea that captured the imagination of Alexis de Tocqueville, the nineteenth-century French lawyer who wrote the most important account of early "Democracy in America." It wasn't popular elections that fascinated him - it was the jury, an institution that gave ordinary people the right to choose life or death for other citizens. And most fascinating for him was that the jury didn't just vote about the outcome they would impose. They deliberated. Members argued about the "right" result; they tried to persuade each other of the "right" result, and in criminal cases at least, they had to agree upon a unanimous result for the process to come to an end.~{ See, for example, Alexis de Tocqueville, /{Democracy in America,}/ bk. 1, trans. Henry Reeve (New York: Bantam Books, 2000), ch. 16. }~
+
+Yet even this institution flags in American life today. And in its place, there is no systematic effort to enable citizen deliberation. Some are pushing to create just such an institution.~{ Bruce Ackerman and James Fishkin, "Deliberation Day," /{Journal of Political Philosophy}/ 10 (2) (2002): 129. }~ And in some towns in New England, something close to deliberation remains. But for most of us for most of the time, there is no time or place for "democratic deliberation" to occur.
+
+More bizarrely, there is generally not even permission for it to occur. We, the most powerful democracy in the world, have developed a strong norm against talking about politics. It's fine to talk about politics with people you agree with. But it is rude to argue about politics with people you disagree with. Political discourse becomes isolated, and isolated discourse becomes more extreme.~{ Cass Sunstein, /{Republic.com}/ (Princeton: Princeton University Press, 2001), 65-80, 175, 182, 183, 192. }~ We say what our friends want to hear, and hear very little beyond what our friends say.
+
+Enter the blog. The blog's very architecture solves one part of this problem. People post when they want to post, and people read when they want to read. The most difficult time is synchronous time. Technologies that enable asynchronous communication, such as e-mail, increase the opportunity for communication. Blogs allow for public discourse without the public ever needing to gather in a single public place.
+
+But beyond architecture, blogs also have solved the problem of norms. There's no norm (yet) in blog space not to talk about politics. Indeed, the space is filled with political speech, on both the right and the left. Some of the most popular sites are conservative or libertarian, but there are many of all political stripes. And even blogs that are not political cover political issues when the occasion merits.
+
+The significance of these blogs is tiny now, though not so tiny. The name Howard Dean may well have faded from the 2004 presidential race but for blogs. Yet even if the number of readers is small, the reading is having an effect.
+
+One direct effect is on stories that had a different life cycle in the mainstream media. The Trent Lott affair is an example. When Lott "misspoke" at a party for Senator Strom Thurmond, essentially praising Thurmond's segregationist policies, he calculated correctly that this story would disappear from the mainstream press within forty-eight hours. It did. But he didn't calculate its life cycle in blog space. The bloggers kept researching the story. Over time, more and more instances of the same "misspeaking" emerged. Finally, the story broke back into the mainstream press. In the end, Lott was forced to resign as Senate majority leader.~{ Noah Shachtman, "With Incessant Postings, a Pundit Stirs the Pot," /{New York Times,}/ 16 January 2003, G5. }~
+
+This different cycle is possible because the same commercial pressures don't exist with blogs as with other ventures. Television and newspapers are commercial entities. They must work to keep attention. If they lose readers, they lose revenue. Like sharks, they must move on.
+
+But bloggers don't have a similar constraint. They can obsess, they can focus, they can get serious. If a particular blogger writes a particularly interesting story, more and more people link to that story. And as the number of links to a particular story increases, it rises in the ranks of stories. People read what is popular; what is popular has been selected by a very democratic process of peer-generated rankings.
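+
+The peer-generated ranking described above amounts to a very simple algorithm: count the links pointing at each story and sort. A minimal sketch in Python (the story names and link graph are hypothetical, purely for illustration):

```python
from collections import Counter

def rank_stories(links):
    """Rank stories by how many posts link to them, most-linked first.

    `links` is a list of (from_post, to_story) pairs; the "ranking"
    is nothing more than a popularity count over the second element.
    """
    counts = Counter(to_story for _, to_story in links)
    return [story for story, _ in counts.most_common()]

# Hypothetical link graph: three blog posts linking to two stories.
links = [
    ("blog_a", "lott-story"),
    ("blog_b", "lott-story"),
    ("blog_c", "dean-story"),
]
print(rank_stories(links))  # ['lott-story', 'dean-story']
```

+A real aggregator would presumably layer recency and decay onto counts like these, but the democratic core is the same: readers, not editors, cast the votes.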
+
+There's a second way, as well, in which blogs have a different cycle from the mainstream press. As Dave Winer, one of the fathers of this movement and a software author for many decades, told me, another difference is the absence of a financial "conflict of interest." "I think you have to take the conflict of interest" out of journalism, Winer told me. "An amateur journalist simply doesn't have a conflict of interest, or the conflict of interest is so easily disclosed that you know you can sort of get it out of the way."
+
+These conflicts become more important as media becomes more concentrated (more on this below). A concentrated media can hide more from the public than an unconcentrated media can - as CNN admitted it did after the Iraq war because it was afraid of the consequences to its own employees.~{ Telephone interview with David Winer, 16 April 2003. }~ It also needs to sustain a more coherent account. (In the middle of the Iraq war, I read a post on the Internet from someone who was at that time listening to a satellite uplink with a reporter in Iraq. The New York headquarters was telling the reporter over and over that her account of the war was too bleak: She needed to offer a more optimistic story. When she told New York that wasn't warranted, they told her that /{they}/ were writing "the story.")
+
+Blog space gives amateurs a way to enter the debate - "amateur" not in the sense of inexperienced, but in the sense of an Olympic athlete, meaning not paid by anyone to give their reports. It allows for a much broader range of input into a story, as reporting on the Columbia disaster revealed, when hundreds from across the southwest United States turned to the Internet to retell what they had seen.~{ John Schwartz, "Loss of the Shuttle: The Internet; A Wealth of Information Online," /{New York Times,}/ 2 February 2003, A28; Staci D. Kramer, "Shuttle Disaster Coverage Mixed, but Strong Overall," Online Journalism Review, 2 February 2003, available at link #10. }~ And it drives readers to read across the range of accounts and "triangulate," as Winer puts it, the truth. Blogs, Winer says, are "communicating directly with our constituency, and the middle man is out of it" - with all the benefits, and costs, that might entail.
+
+Winer is optimistic about the future of journalism infected with blogs. "It's going to become an essential skill," Winer predicts, for public figures and increasingly for private figures as well. It's not clear that "journalism" is happy about this - some journalists have been told to curtail their blogging.~{ See Michael Falcone, "Does an Editor's Pencil Ruin a Web Log?" /{New York Times,}/ 29 September 2003, C4. ("Not all news organizations have been as accepting of employees who blog. Kevin Sites, a CNN correspondent in Iraq who started a blog about his reporting of the war on March 9, stopped posting 12 days later at his bosses' request. Last year Steve Olafson, a /{Houston Chronicle}/ reporter, was fired for keeping a personal Web log, published under a pseudonym, that dealt with some of the issues and people he was covering.") }~ But it is clear that we are still in transition. "A lot of what we are doing now is warm-up exercises," Winer told me. There is a lot that must mature before this space has its mature effect. And as the inclusion of content in this space is the least infringing use of the Internet (meaning infringing on copyright), Winer said, "we will be the last thing that gets shut down."
+
+This speech affects democracy. Winer thinks that happens because "you don't have to work for somebody who controls, [for] a gate-keeper." That is true. But it affects democracy in another way as well. As more and more citizens express what they think, and defend it in writing, that will change the way people understand public issues. It is easy to be wrong and misguided in your head. It is harder when the product of your mind can be criticized by others. Of course, it is a rare human who admits that he has been persuaded that he is wrong. But it is even rarer for a human to ignore when he has been proven wrong. The writing of ideas, arguments, and criticism improves democracy. Today there are probably a couple of million blogs where such writing happens. When there are ten million, there will be something extraordinary to report.
+
+John Seely Brown is the chief scientist of the Xerox Corporation. His work, as his Web site describes it, is "human learning and ... the creation of knowledge ecologies for creating ... innovation."
+
+Brown thus looks at these technologies of digital creativity a bit differently from the perspectives I've sketched so far. I'm sure he would be excited about any technology that might improve democracy. But his real excitement comes from how these technologies affect learning.
+
+As Brown believes, we learn by tinkering. When "a lot of us grew up," he explains, that tinkering was done "on motorcycle engines, lawn-mower engines, automobiles, radios, and so on." But digital technologies enable a different kind of tinkering - with abstract ideas though in concrete form. The kids at Just Think! not only think about how a commercial portrays a politician; using digital technology, they can take the commercial apart and manipulate it, tinker with it to see how it does what it does. Digital technologies launch a kind of bricolage, or "free collage," as Brown calls it. Many get to add to or transform the tinkering of many others.
+
+The best large-scale example of this kind of tinkering so far is free software or open-source software (FS/OSS). FS/OSS is software whose source code is shared. Anyone can download the technology that makes an FS/OSS program run. And anyone eager to learn how a particular bit of FS/OSS technology works can tinker with the code.
+
+This opportunity creates a "completely new kind of learning platform," as Brown describes. "As soon as you start doing that, you ... unleash a free collage on the community, so that other people can start looking at your code, tinkering with it, trying it out, seeing if they can improve it." Each effort is a kind of apprenticeship. "Open source becomes a major apprenticeship platform."
+
+In this process, "the concrete things you tinker with are abstract. They are code." Kids are "shifting to the ability to tinker in the abstract, and this tinkering is no longer an isolated activity that you're doing in your garage. You are tinkering with a community platform. ... You are tinkering with other people's stuff. The more you tinker the more you improve." The more you improve, the more you learn.
+
+This same thing happens with content, too. And it happens in the same collaborative way when that content is part of the Web. As Brown puts it, "the Web [is] the first medium that truly honors multiple forms of intelligence." Earlier technologies, such as the typewriter or word processors, helped amplify text. But the Web amplifies much more than text. "The Web ... says if you are musical, if you are artistic, if you are visual, if you are interested in film ... [then] there is a lot you can start to do on this medium. [It] can now amplify and honor these multiple forms of intelligence."
+
+Brown is talking about what Elizabeth Daley, Stephanie Barish, and Just Think! teach: that this tinkering with culture teaches as well as creates. It develops talents differently, and it builds a different kind of recognition.
+
+Yet the freedom to tinker with these objects is not guaranteed. Indeed, as we'll see through the course of this book, that freedom is increasingly highly contested. While there's no doubt that your father had the right to tinker with the car engine, there's great doubt that your child will have the right to tinker with the images she finds all around. The law and, increasingly, technology interfere with a freedom that technology, and curiosity, would otherwise ensure.
+
+These restrictions have become the focus of researchers and scholars. Professor Ed Felten of Princeton (whom we'll see more of in chapter 10) has developed a powerful argument in favor of the "right to tinker" as it applies to computer science and to knowledge in general.~{ See, for example, Edward Felten and Andrew Appel, "Technological Access Control Interferes with Noninfringing Scholarship," /{Communications of the Association for Computing Machinery}/ 43 (2000): 9. }~ But Brown's concern is earlier, or younger, or more fundamental. It is about the learning that kids can do, or can't do, because of the law.
+
+"This is where education in the twenty-first century is going," Brown explains. We need to "understand how kids who grow up digital think and want to learn."
+
+"Yet," as Brown continued, and as the balance of this book will evince, "we are building a legal system that completely suppresses the natural tendencies of today's digital kids. ... We're building an architecture that unleashes 60 percent of the brain [and] a legal system that closes down that part of the brain."
+
+We're building a technology that takes the magic of Kodak, mixes moving images and sound, and adds a space for commentary and an opportunity to spread that creativity everywhere. But we're building the law to close down that technology.
+
+"No way to run a culture," as Brewster Kahle, whom we'll meet in chapter 9, quipped to me in a rare moment of despondence.
+
+1~ Chapter Three: Catalogs
+
+*{In the fall of 2002,}* Jesse Jordan of Oceanside, New York, enrolled as a freshman at Rensselaer Polytechnic Institute, in Troy, New York. His major at RPI was information technology. Though he is not a programmer, in October Jesse decided to begin to tinker with search engine technology that was available on the RPI network.
+
+RPI is one of America's foremost technological research institutions. It offers degrees in fields ranging from architecture and engineering to information sciences. More than 65 percent of its five thousand undergraduates finished in the top 10 percent of their high school class. The school is thus a perfect mix of talent and experience to imagine, and then build, a generation for the network age.
+
+RPI's computer network links students, faculty, and administration to one another. It also links RPI to the Internet. Not everything available on the RPI network is available on the Internet. But the network is designed to enable students to get access to the Internet, as well as more intimate access to other members of the RPI community.
+
+Search engines are a measure of a network's intimacy. Google brought the Internet much closer to all of us by fantastically improving the quality of search on the network. Specialty search engines can do this even better. The idea of "intranet" search engines, search engines that search within the network of a particular institution, is to provide users of that institution with better access to material from that institution. Businesses do this all the time, enabling employees to have access to material that people outside the business can't get. Universities do it as well.
+
+These engines are enabled by the network technology itself. Microsoft, for example, has a network file system that makes it very easy for search engines tuned to that network to query the system for information about the publicly (within that network) available content. Jesse's search engine was built to take advantage of this technology. It used Microsoft's network file system to build an index of all the files available within the RPI network.
+
+Jesse's wasn't the first search engine built for the RPI network. Indeed, his engine was a simple modification of engines that others had built. His single most important improvement over those engines was to fix a bug within the Microsoft file-sharing system that could cause a user's computer to crash. With the engines that existed before, if you tried to access a file through a Windows browser that was on a computer that was off-line, your computer could crash. Jesse modified the system a bit to fix that problem, by adding a button that a user could click to see if the machine holding the file was still on-line.
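+
+The fix described above can be thought of as a reachability probe: before the browser tries to open a file on another machine, ask whether that machine still answers on the file-sharing port. A hedged sketch of such a probe (this is a generic TCP check, not Jesse's code; Windows file sharing conventionally listens on port 445):

```python
import socket

def is_host_online(host, port=445, timeout=1.0):
    """Return True if `host` accepts a TCP connection on `port`.

    Probing first lets a search front end mark results on off-line
    machines as unavailable, instead of letting the file request
    hang or crash the user's browser.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```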
+
+Jesse's engine went on-line in late October. Over the following six months, he continued to tweak it to improve its functionality. By March, the system was functioning quite well. Jesse had more than one million files in his directory, including every type of content that might be on users' computers.
+
+Thus the index his search engine produced included pictures, which students could use to put on their own Web sites; copies of notes or research; copies of information pamphlets; movie clips that students might have created; university brochures - basically anything that users of the RPI network made available in a public folder of their computer.
+
+But the index also included music files. In fact, one quarter of the files that Jesse's search engine listed were music files. But that means, of course, that three quarters were not, and - so that this point is absolutely clear - Jesse did nothing to induce people to put music files in their public folders. He did nothing to target the search engine to these files. He was a kid tinkering with a Google-like technology at a university where he was studying information science, and hence, tinkering was the aim. Unlike Google, or Microsoft, for that matter, he made no money from this tinkering; he was not connected to any business that would make any money from this experiment. He was a kid tinkering with technology in an environment where tinkering with technology was precisely what he was supposed to do.
+
+On April 3, 2003, Jesse was contacted by the dean of students at RPI. The dean informed Jesse that the Recording Industry Association of America, the RIAA, would be filing a lawsuit against him and three other students whom he didn't even know, two of them at other universities. A few hours later, Jesse was served with papers from the suit. As he read these papers and watched the news reports about them, he was increasingly astonished.
+
+"It was absurd," he told me. "I don't think I did anything wrong. ... I don't think there's anything wrong with the search engine that I ran or ... what I had done to it. I mean, I hadn't modified it in any way that promoted or enhanced the work of pirates. I just modified the search engine in a way that would make it easier to use" - again, a /{search engine}/, which Jesse had not himself built, using the Windows file-sharing system, which Jesse had not himself built, to enable members of the RPI community to get access to content, which Jesse had not himself created or posted, and the vast majority of which had nothing to do with music.
+
+But the RIAA branded Jesse a pirate. They claimed he operated a network and had therefore "willfully" violated copyright laws. They demanded that he pay them the damages for his wrong. For cases of "willful infringement," the Copyright Act specifies something lawyers call "statutory damages." These damages permit a copyright owner to claim $150,000 per infringement. As the RIAA alleged more than one hundred specific copyright infringements, they therefore demanded that Jesse pay them at least $15,000,000.
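The arithmetic behind that demand is simple to check. A minimal sketch of the statutory-damages calculation, using only the figures stated above ($150,000 per willful infringement, and one hundred as the lower bound of the "more than one hundred" infringements alleged):

```python
# Statutory damages for willful infringement, as described above:
# up to $150,000 per infringement, with more than one hundred
# specific infringements alleged against Jesse.

DAMAGES_PER_INFRINGEMENT = 150_000  # dollars (statutory maximum, willful)
alleged_infringements = 100         # lower bound of "more than one hundred"

minimum_demand = DAMAGES_PER_INFRINGEMENT * alleged_infringements
print(f"${minimum_demand:,}")  # prints $15,000,000
```

Hence the "at least $15,000,000" figure: any infringements beyond the hundredth only raise the total.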
+
+Similar lawsuits were brought against three other students: one other student at RPI, one at Michigan Technological University, and one at Princeton. Their situations were similar to Jesse's. Though each case was different in detail, the bottom line in each was exactly the same: huge demands for "damages" that the RIAA claimed it was entitled to. If you added up the claims, these four lawsuits were asking courts in the United States to award the plaintiffs close to $100 /{billion}/ - six times the /{total}/ profit of the film industry in 2001.~{ Tim Goral, "Recording Industry Goes After Campus P-2-P Networks: Suit Alleges $97.8 Billion in Damages," /{Professional Media Group LLC}/ 6 (2003): 5, available at 2003 WL 55179443. }~
+
+Jesse called his parents. They were supportive but a bit frightened. An uncle was a lawyer. He began negotiations with the RIAA. They demanded to know how much money Jesse had. Jesse had saved $12,000 from summer jobs and other employment. They demanded $12,000 to dismiss the case.
+
+The RIAA wanted Jesse to admit to doing something wrong. He refused. They wanted him to agree to an injunction that would essentially make it impossible for him to work in many fields of technology for the rest of his life. He refused. They made him understand that this process of being sued was not going to be pleasant. (As Jesse's father recounted to me, the chief lawyer on the case, Matt Oppenheim, told Jesse, "You don't want to pay another visit to a dentist like me.") And throughout, the RIAA insisted it would not settle the case until it took every penny Jesse had saved.
+
+Jesse's family was outraged at these claims. They wanted to fight. But Jesse's uncle worked to educate the family about the nature of the American legal system. Jesse could fight the RIAA. He might even win. But the cost of fighting a lawsuit like this, Jesse was told, would be at least $250,000. If he won, he would not recover that money. If he won, he would have a piece of paper saying he had won, and a piece of paper saying he and his family were bankrupt.
+
+So Jesse faced a mafia-like choice: $250,000 and a chance at winning, or $12,000 and a settlement.
+
+The recording industry insists this is a matter of law and morality. Let's put the law aside for a moment and think about the morality. Where is the morality in a lawsuit like this? What is the virtue in scapegoatism? The RIAA is an extraordinarily powerful lobby. The president of the RIAA is reported to make more than $1 million a year. Artists, on the other hand, are not well paid. The average recording artist makes $45,900.~{ Occupational Employment Survey, U.S. Dept. of Labor (2001) (27-2042 - Musicians and Singers). See also National Endowment for the Arts, /{More Than One in a Blue Moon}/ (2000). }~ There are plenty of ways for the RIAA to affect and direct policy. So where is the morality in taking money from a student for running a search engine?~{ Douglas Lichtman makes a related point in "KaZaA and Punishment," /{Wall Street Journal,}/ 10 September 2003, A24. }~
+
+On June 23, Jesse wired his savings to the lawyer working for the RIAA. The case against him was then dismissed. And with this, this kid who had tinkered a computer into a $15 million lawsuit became an activist:
+
+_1 I was definitely not an activist [before]. I never really meant to be an activist. ... [But] I've been pushed into this. In no way did I ever foresee anything like this, but I think it's just completely absurd what the RIAA has done.
+
+Jesse's parents betray a certain pride in their reluctant activist. As his father told me, Jesse "considers himself very conservative, and so do I. ... He's not a tree hugger. ... I think it's bizarre that they would pick on him. But he wants to let people know that they're sending the wrong message. And he wants to correct the record."
+
+1~ Chapter Four: "Pirates"
+
+*{If "piracy" means}* using the creative property of others without their permission - if "if value, then right" is true - then the history of the content industry is a history of piracy. Every important sector of "big media" today - film, records, radio, and cable TV - was born of a kind of piracy so defined. The consistent story is how last generation's pirates join this generation's country club - until now.
+
+2~ Film
+
+The film industry of Hollywood was built by fleeing pirates.~{ I am grateful to Peter DiMauro for pointing me to this extraordinary history. See also Siva Vaidhyanathan, /{Copyrights and Copywrongs,}/ 87-93, which details Edison's "adventures" with copyright and patent. }~ Creators and directors migrated from the East Coast to California in the early twentieth century in part to escape controls that patents granted the inventor of filmmaking, Thomas Edison. These controls were exercised through a monopoly "trust," the Motion Pictures Patents Company, and were based on Thomas Edison's creative property - patents. Edison formed the MPPC to exercise the rights this creative property gave him, and the MPPC was serious about the control it demanded. As one commentator tells one part of the story,
+
+_1 A January 1909 deadline was set for all companies to comply with the license. By February, unlicensed outlaws, who referred to themselves as independents, protested the trust and carried on business without submitting to the Edison monopoly. In the summer of 1909 the independent movement was in full swing, with producers and theater owners using illegal equipment and imported film stock to create their own underground market.
+
+_1 With the country experiencing a tremendous expansion in the number of nickelodeons, the Patents Company reacted to the independent movement by forming a strong-arm subsidiary known as the General Film Company to block the entry of non-licensed independents. With coercive tactics that have become legendary, General Film confiscated unlicensed equipment, discontinued product supply to theaters which showed unlicensed films, and effectively monopolized distribution with the acquisition of all U.S. film exchanges, except for the one owned by the independent William Fox who defied the Trust even after his license was revoked.~{ J. A. Aberdeen, /{Hollywood Renegades: The Society of Independent Motion Picture Producers}/ (Cobblestone Entertainment, 2000) and expanded texts posted at "The Edison Movie Monopoly: The Motion Picture Patents Company vs. the Independent Outlaws," available at link #11. For a discussion of the economic motive behind both these limits and the limits imposed by Victor on phonographs, see Randal C. Picker, "From Edison to the Broadcast Flag: Mechanisms of Consent and Refusal and the Propertization of Copyright" (September 2002), University of Chicago Law School, James M. Olin Program in Law and Economics, Working Paper No. 159. }~
+
+The Napsters of those days, the "independents," were companies like Fox. And no less than today, these independents were vigorously resisted. "Shooting was disrupted by machinery stolen, and 'accidents' resulting in loss of negatives, equipment, buildings and sometimes life and limb frequently occurred."~{ Marc Wanamaker, "The First Studios," /{The Silents Majority,}/ archived at link #12. }~ That led the independents to flee the East Coast. California was remote enough from Edison's reach that filmmakers there could pirate his inventions without fear of the law. And the leaders of Hollywood filmmaking, Fox most prominently, did just that.
+
+Of course, California grew quickly, and the effective enforcement of federal law eventually spread west. But because patents grant the patent holder a truly "limited" monopoly (just seventeen years at that time), by the time enough federal marshals appeared, the patents had expired. A new industry had been born, in part from the piracy of Edison's creative property.
+
+2~ Recorded Music
+
+The record industry was born of another kind of piracy, though to see how requires a bit of detail about the way the law regulates music.
+
+At the time that Edison and Henri Fourneaux invented machines for reproducing music (Edison the phonograph, Fourneaux the player piano), the law gave composers the exclusive right to control copies of their music and the exclusive right to control public performances of their music. In other words, in 1900, if I wanted a copy of Phil Russel's 1899 hit "Happy Mose," the law said I would have to pay for the right to get a copy of the musical score, and I would also have to pay for the right to perform it publicly.
+
+But what if I wanted to record "Happy Mose," using Edison's phonograph or Fourneaux's player piano? Here the law stumbled. It was clear enough that I would have to buy any copy of the musical score that I performed in making this recording. And it was clear enough that I would have to pay for any public performance of the work I was recording. But it wasn't totally clear that I would have to pay for a "public performance" if I recorded the song in my own house (even today, you don't owe the Beatles anything if you sing their songs in the shower), or if I recorded the song from memory (copies in your brain are not - yet - regulated by copyright law). So if I simply sang the song into a recording device in the privacy of my own home, it wasn't clear that I owed the composer anything. And more importantly, it wasn't clear whether I owed the composer anything if I then made copies of those recordings. Because of this gap in the law, then, I could effectively pirate someone else's song without paying its composer anything.
+
+The composers (and publishers) were none too happy about this capacity to pirate. As South Dakota senator Alfred Kittredge put it,
+
+_1 Imagine the injustice of the thing. A composer writes a song or an opera. A publisher buys at great expense the rights to the same and copyrights it. Along come the phonographic companies and companies who cut music rolls and deliberately steal the work of the brain of the composer and publisher without any regard for [their] rights.~{ To Amend and Consolidate the Acts Respecting Copyright: Hearings on S. 6330 and H.R. 19853 Before the (Joint) Committees on Patents, 59th Cong. 59, 1st sess. (1906) (statement of Senator Alfred B. Kittredge, of South Dakota, chairman), reprinted in /{Legislative History of the 1909 Copyright Act,}/ E. Fulton Brylawski and Abe Goldman, eds. (South Hackensack, N.J.: Rothman Reprints, 1976). }~
+
+The innovators who developed the technology to record other people's works were "sponging upon the toil, the work, the talent, and genius of American composers,"~{ To Amend and Consolidate the Acts Respecting Copyright, 223 (statement of Nathan Burkan, attorney for the Music Publishers Association). }~ and the "music publishing industry" was thereby "at the complete mercy of this one pirate."~{ To Amend and Consolidate the Acts Respecting Copyright, 226 (statement of Nathan Burkan, attorney for the Music Publishers Association). }~ As John Philip Sousa put it, in as direct a way as possible, "When they make money out of my pieces, I want a share of it."~{ To Amend and Consolidate the Acts Respecting Copyright, 23 (statement of John Philip Sousa, composer). }~
+
+These arguments have familiar echoes in the wars of our day. So, too, do the arguments on the other side. The innovators who developed the player piano argued that "it is perfectly demonstrable that the introduction of automatic music players has not deprived any composer of anything he had before their introduction." Rather, the machines increased the sales of sheet music.~{ To Amend and Consolidate the Acts Respecting Copyright, 283-84 (statement of Albert Walker, representative of the Auto-Music Perforating Company of New York). }~ In any case, the innovators argued, the job of Congress was "to consider first the interest of [the public], whom they represent, and whose servants they are." "All talk about 'theft,'" the general counsel of the American Graphophone Company wrote, "is the merest claptrap, for there exists no property in ideas musical, literary or artistic, except as defined by statute."~{ To Amend and Consolidate the Acts Respecting Copyright, 376 (prepared memorandum of Philip Mauro, general patent counsel of the American Graphophone Company Association). }~
+
+The law soon resolved this battle in favor of the composer /{and}/ the recording artist. Congress amended the law to make sure that composers would be paid for the "mechanical reproductions" of their music. But rather than simply granting the composer complete control over the right to make mechanical reproductions, Congress gave recording artists a right to record the music, at a price set by Congress, once the composer allowed it to be recorded once. This is the part of copyright law that makes cover songs possible. Once a composer authorizes a recording of his song, others are free to record the same song, so long as they pay the original composer a fee set by the law.
+
+American law ordinarily calls this a "compulsory license," but I will refer to it as a "statutory license." A statutory license is a license whose key terms are set by law. After Congress's amendment of the Copyright Act in 1909, record companies were free to distribute copies of recordings so long as they paid the composer (or copyright holder) the fee set by the statute.
+
+This is an exception within the law of copyright. When John Grisham writes a novel, a publisher is free to publish that novel only if Grisham gives the publisher permission. Grisham, in turn, is free to charge whatever he wants for that permission. The price to publish Grisham is thus set by Grisham, and copyright law ordinarily says you have no right to use Grisham's work except with Grisham's permission.
+
+But the law governing recordings gives recording artists less. And thus, in effect, the law /{subsidizes}/ the recording industry through a kind of piracy - by giving recording artists a weaker right than it otherwise gives creative authors. The Beatles have less control over their creative work than Grisham does. And the beneficiaries of this weaker control are the recording industry and the public. The recording industry gets something of value for less than it otherwise would pay; the public gets access to a much wider range of musical creativity. Indeed, Congress was quite explicit about its reasons for granting this right. Its fear was the monopoly power of rights holders, and that that power would stifle follow-on creativity.~{ Copyright Law Revision: Hearings on S. 2499, S. 2900, H.R. 243, and H.R. 11794 Before the ( Joint) Committee on Patents, 60th Cong., 1st sess., 217 (1908) (statement of Senator Reed Smoot, chairman), reprinted in /{Legislative History of the 1909 Copyright Act,}/ E. Fulton Brylawski and Abe Goldman, eds. (South Hackensack, N.J.: Rothman Reprints, 1976). }~
+
+While the recording industry has been quite coy about this recently, historically it has been quite a supporter of the statutory license for records. As a 1967 report from the House Committee on the Judiciary relates,
+
+_1 the record producers argued vigorously that the compulsory license system must be retained. They asserted that the record industry is a half-billion-dollar business of great economic importance in the United States and throughout the world; records today are the principal means of disseminating music, and this creates special problems, since performers need unhampered access to musical material on nondiscriminatory terms. Historically, the record producers pointed out, there were no recording rights before 1909 and the 1909 statute adopted the compulsory license as a deliberate anti-monopoly condition on the grant of these rights. They argue that the result has been an outpouring of recorded music, with the public being given lower prices, improved quality, and a greater choice.~{ Copyright Law Revision: Report to Accompany H.R. 2512, House Committee on the Judiciary, 90th Cong., 1st sess., House Document no. 83, 66 (8 March 1967). I am grateful to Glenn Brown for drawing my attention to this report. }~
+
+By limiting the rights musicians have, by partially pirating their creative work, the record producers and the public benefit.
+
+2~ Radio
+
+Radio was also born of piracy.
+
+When a radio station plays a record on the air, that constitutes a "public performance" of the composer's work.~{ See 17 /{United States Code,}/ sections 106 and 110. At the beginning, record companies printed "Not Licensed for Radio Broadcast" and other messages purporting to restrict the ability to play a record on a radio station. Judge Learned Hand rejected the argument that a warning attached to a record might restrict the rights of the radio station. See /{RCA Manufacturing Co.}/ v. /{Whiteman,}/ 114 F. 2d 86 (2nd Cir. 1940). See also Randal C. Picker, "From Edison to the Broadcast Flag: Mechanisms of Consent and Refusal and the Propertization of Copyright," /{University of Chicago Law Review}/ 70 (2003): 281. }~ As I described above, the law gives the composer (or copyright holder) an exclusive right to public performances of his work. The radio station thus owes the composer money for that performance.
+
+But when the radio station plays a record, it is not only performing a copy of the /{composer's}/ work. The radio station is also performing a copy of the /{recording artist's}/ work. It's one thing to have "Happy Birthday" sung on the radio by the local children's choir; it's quite another to have it sung by the Rolling Stones or Lyle Lovett. The recording artist is adding to the value of the composition performed on the radio station. And if the law were perfectly consistent, the radio station would have to pay the recording artist for his work, just as it pays the composer of the music for his work.
+
+But it doesn't. Under the law governing radio performances, the radio station does not have to pay the recording artist. The radio station need only pay the composer. The radio station thus gets a bit of something for nothing. It gets to perform the recording artist's work for free, even if it must pay the composer something for the privilege of playing the song.
+
+This difference can be huge. Imagine you compose a piece of music. Imagine it is your first. You own the exclusive right to authorize public performances of that music. So if Madonna wants to sing your song in public, she has to get your permission.
+
+Imagine she does sing your song, and imagine she likes it a lot. She then decides to make a recording of your song, and it becomes a top hit. Under our law, every time a radio station plays your song, you get some money. But Madonna gets nothing, save the indirect effect on the sale of her CDs. The public performance of her recording is not a "protected" right. The radio station thus gets to /{pirate}/ the value of Madonna's work without paying her anything.
+
+No doubt, one might argue that, on balance, the recording artists benefit. On average, the promotion they get is worth more than the performance rights they give up. Maybe. But even if so, the law ordinarily gives the creator the right to make this choice. By making the choice for him or her, the law gives the radio station the right to take something for nothing.
+
+2~ Cable TV
+
+Cable TV was also born of a kind of piracy.
+
+When cable entrepreneurs first started wiring communities with cable television in 1948, most refused to pay broadcasters for the content that they echoed to their customers. Even when the cable companies started selling access to television broadcasts, they refused to pay for what they sold. Cable companies were thus Napsterizing broadcasters' content, but more egregiously than anything Napster ever did - Napster never charged for the content it enabled others to give away.
+
+Broadcasters and copyright owners were quick to attack this theft. Rosel Hyde, chairman of the FCC, viewed the practice as a kind of "unfair and potentially destructive competition."~{ Copyright Law Revision - CATV: Hearing on S. 1006 Before the Subcommittee on Patents, Trademarks, and Copyrights of the Senate Committee on the Judiciary, 89th Cong., 2nd sess., 78 (1966) (statement of Rosel H. Hyde, chairman of the Federal Communications Commission). }~ There may have been a "public interest" in spreading the reach of cable TV, but as Douglas Anello, general counsel to the National Association of Broadcasters, asked Senator Quentin Burdick during testimony, "Does public interest dictate that you use somebody else's property?"~{ Copyright Law Revision - CATV, 116 (statement of Douglas A. Anello, general counsel of the National Association of Broadcasters). }~ As another broadcaster put it,
+
+_1 The extraordinary thing about the CATV business is that it is the only business I know of where the product that is being sold is not paid for.~{ Copyright Law Revision - CATV, 126 (statement of Ernest W. Jennes, general counsel of the Association of Maximum Service Telecasters, Inc.). }~
+
+Again, the demand of the copyright holders seemed reasonable enough:
+
+_1 All we are asking for is a very simple thing, that people who now take our property for nothing pay for it. We are trying to stop piracy and I don't think there is any lesser word to describe it. I think there are harsher words which would fit it.~{ Copyright Law Revision - CATV, 169 (joint statement of Arthur B. Krim, president of United Artists Corp., and John Sinn, president of United Artists Television, Inc.). }~
+
+These were "free-ride[rs]," Screen Actors Guild president Charlton Heston said, who were "depriving actors of compensation."~{ Copyright Law Revision - CATV, 209 (statement of Charlton Heston, president of the Screen Actors Guild). }~
+
+But again, there was another side to the debate. As Assistant Attorney General Edwin Zimmerman put it,
+
+_1 Our point here is that unlike the problem of whether you have any copyright protection at all, the problem here is whether copyright holders who are already compensated, who already have a monopoly, should be permitted to extend that monopoly. ... The question here is how much compensation they should have and how far back they should carry their right to compensation.~{ Copyright Law Revision - CATV, 216 (statement of Edwin M. Zimmerman, acting assistant attorney general). }~
+
+Copyright owners took the cable companies to court. Twice the Supreme Court held that the cable companies owed the copyright owners nothing.
+
+It took Congress almost thirty years before it resolved the question of whether cable companies had to pay for the content they "pirated." In the end, Congress resolved this question in the same way that it resolved the question about record players and player pianos. Yes, cable companies would have to pay for the content that they broadcast; but the price they would have to pay was not set by the copyright owner. The price was set by law, so that the broadcasters couldn't exercise veto power over the emerging technologies of cable. Cable companies thus built their empire in part upon a "piracy" of the value created by broadcasters' content.
+
+These separate stories sing a common theme. If "piracy" means using value from someone else's creative property without permission from that creator - as it is increasingly described today~{ See, for example, National Music Publisher's Association, /{The Engine of Free Expression: Copyright on the Internet - The Myth of Free Information,}/ available at link #13. "The threat of piracy - the use of someone else's creative work without permission or compensation - has grown with the Internet." }~ - then /{every}/ industry affected by copyright today is the product and beneficiary of a certain kind of piracy. Film, records, radio, cable TV. ... The list is long and could well be expanded. Every generation welcomes the pirates from the last. Every generation - until now.
+
+1~ Chapter Five: "Piracy"
+
+There is piracy of copyrighted material. Lots of it. This piracy comes in many forms. The most significant is commercial piracy, the unauthorized taking of other people's content within a commercial context. Despite the many justifications that are offered in its defense, this taking is wrong. No one should condone it, and the law should stop it.
+
+But as well as copy-shop piracy, there is another kind of "taking" that is more directly related to the Internet. That taking, too, seems wrong to many, and it is wrong much of the time. Before we paint this taking "piracy," however, we should understand its nature a bit more. For the harm of this taking is significantly more ambiguous than outright copying, and the law should account for that ambiguity, as it has so often done in the past.
+
+2~ Piracy I
+
+All across the world, but especially in Asia and Eastern Europe, there are businesses that do nothing but take other people's copyrighted content, copy it, and sell it - all without the permission of a copyright owner. The recording industry estimates that it loses about $4.6 billion every year to physical piracy~{ See IFPI (International Federation of the Phonographic Industry), /{The Recording Industry Commercial Piracy Report 2003,}/ July 2003, available at link #14. See also Ben Hunt, "Companies Warned on Music Piracy Risk," /{Financial Times,}/ 14 February 2003, 11. }~ (that works out to one in three CDs sold worldwide). The MPAA estimates that it loses $3 billion annually worldwide to piracy.
+
+This is piracy plain and simple. Nothing in the argument of this book, nor in the argument that most people make when talking about the subject of this book, should draw into doubt this simple point: This piracy is wrong.
+
+Which is not to say that excuses and justifications couldn't be made for it. We could, for example, remind ourselves that for the first one hundred years of the American Republic, America did not honor foreign copyrights. We were born, in this sense, a pirate nation. It might therefore seem hypocritical for us to insist so strongly that other developing nations treat as wrong what we, for the first hundred years of our existence, treated as right.
+
+That excuse isn't terribly strong. Technically, our law did not ban the taking of foreign works. It explicitly limited itself to American works. Thus the American publishers who published foreign works without the permission of foreign authors were not violating any rule. The copy shops in Asia, by contrast, are violating Asian law. Asian law does protect foreign copyrights, and the actions of the copy shops violate that law. So the wrong of piracy that they engage in is not just a moral wrong, but a legal wrong, and not just an internationally legal wrong, but a locally legal wrong as well.
+
+True, these local rules have, in effect, been imposed upon these countries. No country can be part of the world economy and choose not to protect copyright internationally. We may have been born a pirate nation, but we will not allow any other nation to have a similar childhood.
+
+If a country is to be treated as a sovereign, however, then its laws are its laws regardless of their source. The international law under which these nations live gives them some opportunities to escape the burden of intellectual property law.~{ See Peter Drahos with John Braithwaite, /{Information Feudalism: Who Owns the Knowledge Economy?}/ (New York: The New Press, 2003), 10-13, 209. The Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement obligates member nations to create administrative and enforcement mechanisms for intellectual property rights, a costly proposition for developing countries. Additionally, patent rights may lead to higher prices for staple industries such as agriculture. Critics of TRIPS question the disparity between burdens imposed upon developing countries and benefits conferred to industrialized nations. TRIPS does permit governments to use patents for public, noncommercial uses without first obtaining the patent holder's permission. Developing nations may be able to use this to gain the benefits of foreign patents at lower prices. This is a promising strategy for developing nations within the TRIPS framework. }~ In my view, more developing nations should take advantage of that opportunity, but when they don't, then their laws should be respected. And under the laws of these nations, this piracy is wrong.
+
+Alternatively, we could try to excuse this piracy by noting that in any case, it does no harm to the industry. The Chinese who get access to American CDs at 50 cents a copy are not people who would have bought those American CDs at $15 a copy. So no one really has any less money than they otherwise would have had.~{ For an analysis of the economic impact of copying technology, see Stan Liebowitz, /{Rethinking the Network Economy}/ (New York: Amacom, 2002), 144-90. "In some instances ... the impact of piracy on the copyright holder's ability to appropriate the value of the work will be negligible. One obvious instance is the case where the individual engaging in pirating would not have purchased an original even if pirating were not an option." Ibid., 149. }~
+
+This is often true (though I have friends who have purchased many thousands of pirated DVDs who certainly have enough money to pay for the content they have taken), and it does mitigate to some degree the harm caused by such taking. Extremists in this debate love to say, "You wouldn't go into Barnes & Noble and take a book off of the shelf without paying; why should it be any different with on-line music?" The difference is, of course, that when you take a book from Barnes & Noble, it has one less book to sell. By contrast, when you take an MP3 from a computer network, there is not one less CD that can be sold. The physics of piracy of the intangible are different from the physics of piracy of the tangible.
+
+This argument is still very weak, however. Although copyright is a property right of a very special sort, it /{is}/ a property right. Like all property rights, the copyright gives the owner the right to decide the terms under which content is shared. If the copyright owner doesn't want to sell, she doesn't have to. There are exceptions: important statutory licenses that apply to copyrighted content regardless of the wish of the copyright owner. Those licenses give people the right to "take" copyrighted content whether or not the copyright owner wants to sell. But where the law does not give people the right to take content, it is wrong to take that content even if the wrong does no harm. If we have a property system, and that system is properly balanced to the technology of a time, then it is wrong to take property without the permission of a property owner. That is exactly what "property" means.
+
+Finally, we could try to excuse this piracy with the argument that the piracy actually helps the copyright owner. When the Chinese "steal" Windows, that makes the Chinese dependent on Microsoft. Microsoft loses the value of the software that was taken. But it gains users who are used to life in the Microsoft world. Over time, as the nation grows more wealthy, more and more people will buy software rather than steal it. And hence over time, because that buying will benefit Microsoft, Microsoft benefits from the piracy. If instead of pirating Microsoft Windows, the Chinese used the free GNU/Linux operating system, then these Chinese users would not eventually be buying Microsoft. Without piracy, then, Microsoft would lose.
+
+This argument, too, is somewhat true. The addiction strategy is a good one. Many businesses practice it. Some thrive because of it. Law students, for example, are given free access to the two largest legal databases. The companies marketing both hope the students will become so used to their service that they will want to use it and not the other when they become lawyers (and must pay high subscription fees).
+
+Still, the argument is not terribly persuasive. We don't give the alcoholic a defense when he steals his first beer, merely because that will make it more likely that he will buy the next three. Instead, we ordinarily allow businesses to decide for themselves when it is best to give their product away. If Microsoft fears the competition of GNU/Linux, then Microsoft can give its product away, as it did, for example, with Internet Explorer to fight Netscape. A property right means giving the property owner the right to say who gets access to what - at least ordinarily. And if the law properly balances the rights of the copyright owner with the rights of access, then violating the law is still wrong.
+
+Thus, while I understand the pull of these justifications for piracy, and I certainly see the motivation, in my view, in the end, these efforts at justifying commercial piracy simply don't cut it. This kind of piracy is rampant and just plain wrong. It doesn't transform the content it steals; it doesn't transform the market it competes in. It merely gives someone access to something that the law says he should not have. Nothing has changed to draw that law into doubt. This form of piracy is flat out wrong.
+
+But as the examples from the four chapters that introduced this part suggest, even if some piracy is plainly wrong, not all "piracy" is. Or at least, not all "piracy" is wrong if that term is understood in the way it is increasingly used today. Many kinds of "piracy" are useful and productive, producing either new content or new ways of doing business. Neither our tradition nor any tradition has ever banned all "piracy" in that sense of the term.
+
+This doesn't mean that there are no questions raised by the latest piracy concern, peer-to-peer file sharing. But it does mean that we need to understand the harm in peer-to-peer sharing a bit more before we condemn it to the gallows with the charge of piracy.
+
+For (1) like the original Hollywood, p2p sharing escapes an overly controlling industry; and (2) like the original recording industry, it simply exploits a new way to distribute content; but (3) unlike cable TV, no one is selling the content that is shared on p2p services.
+
+These differences distinguish p2p sharing from true piracy. They should push us to find a way to protect artists while enabling this sharing to survive.
+
+2~ Piracy II
+
+The key to the "piracy" that the law aims to quash is a use that "rob[s] the author of [his] profit."~{ /{Bach v. Longman,}/ 98 Eng. Rep. 1274 (1777). }~ This means we must determine whether and how much p2p sharing harms before we know how strongly the law should seek to either prevent it or find an alternative to assure the author of his profit.
+
+Peer-to-peer sharing was made famous by Napster. But the inventors of the Napster technology had not made any major technological innovations. Like every great advance in innovation on the Internet (and, arguably, off the Internet as well~{ See Clayton M. Christensen, /{The Innovator's Dilemma: The Revolutionary National Bestseller That Changed the Way We Do Business}/ (New York: HarperBusiness, 2000). Professor Christensen examines why companies that give rise to and dominate a product area are frequently unable to come up with the most creative, paradigm-shifting uses for their own products. This job usually falls to outside innovators, who reassemble existing technology in inventive ways. For a discussion of Christensen's ideas, see Lawrence Lessig, /{Future,}/ 89-92, 139. }~), Shawn Fanning and crew had simply put together components that had been developed independently.
+
+The result was spontaneous combustion. Launched in July 1999, Napster amassed over 10 million users within nine months. After eighteen months, there were close to 80 million registered users of the system.~{ See Carolyn Lochhead, "Silicon Valley Dream, Hollywood Nightmare," /{San Francisco Chronicle,}/ 24 September 2002, A1; "Rock 'n' Roll Suicide," /{New Scientist,}/ 6 July 2002, 42; Benny Evangelista, "Napster Names CEO, Secures New Financing," /{San Francisco Chronicle,}/ 23 May 2003, C1; "Napster's Wake-Up Call," /{Economist,}/ 24 June 2000, 23; John Naughton, "Hollywood at War with the Internet" (London) /{Times,}/ 26 July 2002, 18. }~ Courts quickly shut Napster down, but other services emerged to take its place. (Kazaa is currently the most popular p2p service. It boasts over 100 million members.) These services' systems are different architecturally, though not very different in function: Each enables users to make content available to any number of other users. With a p2p system, you can share your favorite songs with your best friend - or your 20,000 best friends.
+
+According to a number of estimates, a huge proportion of Americans have tasted file-sharing technology. A study by Ipsos-Insight in September 2002 estimated that 60 million Americans had downloaded music - 28 percent of Americans older than 12.~{ See Ipsos-Insight, /{TEMPO: Keeping Pace with Online Music Distribution}/ (September 2002), reporting that 28 percent of Americans aged twelve and older have downloaded music off of the Internet and 30 percent have listened to digital music files stored on their computers. }~ A survey by the NPD group quoted in /{The New York Times}/ estimated that 43 million citizens used file-sharing networks to exchange content in May 2003.~{ Amy Harmon, "Industry Offers a Carrot in Online Music Fight," /{New York Times,}/ 6 June 2003, A1. }~ The vast majority of these are not kids. Whatever the actual figure, a massive quantity of content is being "taken" on these networks. The ease and inexpensiveness of file-sharing networks have inspired millions to enjoy music in a way that they hadn't before.
+
+Some of this enjoying involves copyright infringement. Some of it does not. And even among the part that is technically copyright infringement, calculating the actual harm to copyright owners is more complicated than one might think. So consider - a bit more carefully than the polarized voices around this debate usually do - the kinds of sharing that file sharing enables, and the kinds of harm it entails.
+
+File sharers share different kinds of content. We can divide these different kinds into four types.
+
+_1 A. There are some who use sharing networks as substitutes for purchasing content. Thus, when a new Madonna CD is released, rather than buying the CD, these users simply take it. We might quibble about whether everyone who takes it would actually have bought it if sharing didn't make it available for free. Most probably wouldn't have, but clearly there are some who would. The latter are the target of category A: users who download instead of purchasing.
+
+_1 B. There are some who use sharing networks to sample music before purchasing it. Thus, a friend sends another friend an MP3 of an artist he's not heard of. The other friend then buys CDs by that artist. This is a kind of targeted advertising, quite likely to succeed. If the friend recommending the album gains nothing from a bad recommendation, then one could expect that the recommendations will actually be quite good. The net effect of this sharing could increase the quantity of music purchased.
+
+_1 C. There are many who use sharing networks to get access to copyrighted content that is no longer sold or that they would not have purchased because the transaction costs off the Net are too high. This use of sharing networks is among the most rewarding for many. Songs that were part of your childhood but have long vanished from the marketplace magically appear again on the network. (One friend told me that when she discovered Napster, she spent a solid weekend "recalling" old songs. She was astonished at the range and mix of content that was available.) For content not sold, this is still technically a violation of copyright, though because the copyright owner is not selling the content anymore, the economic harm is zero - the same harm that occurs when I sell my collection of 1960s 45-rpm records to a local collector.
+
+_1 D. Finally, there are many who use sharing networks to get access to content that is not copyrighted or that the copyright owner wants to give away.
+
+How do these different types of sharing balance out?
+
+Let's start with some simple but important points. From the perspective of the law, only type D sharing is clearly legal. From the perspective of economics, only type A sharing is clearly harmful.~{ See Liebowitz, /{Rethinking the Network Economy,}/ 148-49. }~ Type B sharing is illegal but plainly beneficial. Type C sharing is illegal, yet good for society (since more exposure to music is good) and harmless to the artist (since the work is not otherwise available). So how sharing matters on balance is a hard question to answer - and certainly much more difficult than the current rhetoric around the issue suggests.
+
+Whether on balance sharing is harmful depends importantly on how harmful type A sharing is. Just as Edison complained about Hollywood, composers complained about piano rolls, recording artists complained about radio, and broadcasters complained about cable TV, the music industry complains that type A sharing is a kind of "theft" that is "devastating" the industry.
+
+While the numbers do suggest that sharing is harmful, how harmful is harder to reckon. It has long been the recording industry's practice to blame technology for any drop in sales. The history of cassette recording is a good example. As a study by Cap Gemini Ernst & Young put it, "Rather than exploiting this new, popular technology, the labels fought it."~{ See Cap Gemini Ernst & Young, /{Technology Evolution and the Music Industry's Business Model Crisis}/ (2003), 3. This report describes the music industry's effort to stigmatize the budding practice of cassette taping in the 1970s, including an advertising campaign featuring a cassette-shaped skull and the caption "Home taping is killing music."<br>At the time digital audio tape became a threat, the Office of Technology Assessment conducted a survey of consumer behavior. In 1988, 40 percent of consumers older than ten had taped music to a cassette format. U.S. Congress, Office of Technology Assessment, /{Copyright and Home Copying: Technology Challenges the Law,}/ OTA-CIT-422 (Washington, D.C.: U.S. Government Printing Office, October 1989), 145-56. }~ The labels claimed that every album taped was an album unsold, and when record sales fell by 11.4 percent in 1981, the industry claimed that its point was proved. Technology was the problem, and banning or regulating technology was the answer.
+
+Yet soon thereafter, and before Congress was given an opportunity to enact regulation, MTV was launched, and the industry had a record turnaround. "In the end," Cap Gemini concludes, "the 'crisis' ... was not the fault of the tapers [who did not stop after MTV came into being] but had to a large extent resulted from stagnation in musical innovation at the major labels."~{ U.S. Congress, /{Copyright and Home Copying,}/ 4. }~
+
+But just because the industry was wrong before does not mean it is wrong today. To evaluate the real threat that p2p sharing presents to the industry in particular, and society in general - or at least the society that inherits the tradition that gave us the film industry, the record industry, the radio industry, cable TV, and the VCR - the question is not simply whether type A sharing is harmful. The question is also /{how}/ harmful type A sharing is, and how beneficial the other types of sharing are.
+
+We start to answer this question by focusing on the net harm, from the standpoint of the industry as a whole, that sharing networks cause. The "net harm" to the industry as a whole is the amount by which type A sharing exceeds type B. If the record companies sold more records through sampling than they lost through substitution, then sharing networks would actually benefit music companies on balance. They would therefore have little /{static}/ reason to resist them.
+
+Could that be true? Could the industry as a whole be gaining because of file sharing? Odd as that might sound, the data about CD sales actually suggest it might be close.
+
+In 2002, the RIAA reported that CD sales had fallen by 8.9 percent, from 882 million to 803 million units; revenues fell 6.7 percent.~{ See Recording Industry Association of America, /{2002 Yearend Statistics,}/ available at link #15. A later report indicates even greater losses. See Recording Industry Association of America, /{Some Facts About Music Piracy,}/ 25 June 2003, available at link #16: "In the past four years, unit shipments of recorded music have fallen by 26 percent from 1.16 billion units in 1999 to 860 million units in 2002 in the United States (based on units shipped). In terms of sales, revenues are down 14 percent, from $14.6 billion in 1999 to $12.6 billion last year (based on U.S. dollar value of shipments). The music industry worldwide has gone from a $39 billion industry in 2000 down to a $32 billion industry in 2002 (based on U.S. dollar value of shipments)." }~ This confirms a trend over the past few years. The RIAA blames Internet piracy for the trend, though there are many other causes that could account for this drop. SoundScan, for example, reports a more than 20 percent drop in the number of CDs released since 1999. That no doubt accounts for some of the decrease in sales. Rising prices could account for at least some of the loss. "From 1999 to 2001, the average price of a CD rose 7.2 percent, from $13.04 to $14.19."~{ Jane Black, "Big Music's Broken Record," BusinessWeek online, 13 February 2003, available at link #17. }~ Competition from other forms of media could also account for some of the decline. As Jane Black of /{BusinessWeek}/ notes, "The soundtrack to the film /{High Fidelity}/ has a list price of $18.98. You could get the whole movie [on DVD] for $19.99."~{ Ibid. }~
+
+But let's assume the RIAA is right, and all of the decline in CD sales is because of Internet sharing. Here's the rub: In the same period that the RIAA estimates that 803 million CDs were sold, the RIAA estimates that 2.1 billion CDs were downloaded for free. Thus, although 2.6 times the total number of CDs sold were downloaded for free, sales revenue fell by just 6.7 percent.
+
+There are too many different things happening at the same time to explain these numbers definitively, but one conclusion is unavoidable: The recording industry constantly asks, "What's the difference between downloading a song and stealing a CD?" - but their own numbers reveal the difference. If I steal a CD, then there is one less CD to sell. Every taking is a lost sale. But on the basis of the numbers the RIAA provides, it is absolutely clear that the same is not true of downloads. If every download were a lost sale - if every use of Kazaa "rob[bed] the author of [his] profit" - then the industry would have suffered a 100 percent drop in sales last year, not a 7 percent drop. If 2.6 times the number of CDs sold were downloaded for free, and yet sales revenue dropped by just 6.7 percent, then there is a huge difference between "downloading a song and stealing a CD."
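+The arithmetic behind this comparison can be made explicit. A minimal sketch in Python, using only the figures quoted above (the per-download displacement rate is derived here for illustration; it is not a number the RIAA reports):

```python
# Illustrative check of the RIAA figures quoted in the text.
cds_sold_2002 = 803_000_000        # units sold in 2002
cds_sold_2001 = 882_000_000        # units sold the year before
downloads = 2_100_000_000          # free downloads in 2002, RIAA estimate

# Downloads per CD sold: roughly 2.6.
ratio = downloads / cds_sold_2002

# Even attributing the entire unit decline to sharing, the implied
# fraction of downloads that each displaced a sale is small.
lost_sales = cds_sold_2001 - cds_sold_2002   # 79 million units
displacement_rate = lost_sales / downloads   # under 4 percent

print(f"downloads per CD sold: {ratio:.1f}")
print(f"at most {displacement_rate:.1%} of downloads displaced a sale")
```

+On these numbers, even if the whole unit decline were blamed on sharing, fewer than one download in twenty-five could correspond to a lost sale - which is the gap between "downloading a song" and "stealing a CD" that the text describes.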
+
+These are the harms - alleged and perhaps exaggerated but, let's assume, real. What of the benefits? File sharing may impose costs on the recording industry. What value does it produce in addition to these costs?
+
+One benefit is type C sharing - making available content that is technically still under copyright but is no longer commercially available. This is not a small category of content. There are millions of tracks that are no longer commercially available.~{ By one estimate, 75 percent of the music released by the major labels is no longer in print. See Online Entertainment and Copyright Law - Coming Soon to a Digital Device Near You: Hearing Before the Senate Committee on the Judiciary, 107th Cong., 1st sess. (3 April 2001) (prepared statement of the Future of Music Coalition), available at link #18. }~ And while it's conceivable that some of this content is not available because the artist producing the content doesn't want it to be made available, the vast majority of it is unavailable solely because the publisher or the distributor has decided it no longer makes economic sense /{to the company}/ to make it available.
+
+In real space - long before the Internet - the market had a simple response to this problem: used book and record stores. There are thousands of used book and used record stores in America today.~{ While there are not good estimates of the number of used record stores in existence, in 2002, there were 7,198 used book dealers in the United States, an increase of 20 percent since 1993. See Book Hunter Press, /{The Quiet Revolution: The Expansion of the Used Book Market}/ (2002), available at link #19. Used records accounted for $260 million in sales in 2002. See National Association of Recording Merchandisers, "2002 Annual Survey Results," available at link #20. }~ These stores buy content from owners, then sell the content they buy. And under American copyright law, when they buy and sell this content, /{even if the content is still under copyright}/, the copyright owner doesn't get a dime. Used book and record stores are commercial entities; their owners make money from the content they sell; but as with cable companies before statutory licensing, they don't have to pay the copyright owner for the content they sell.
+
+Type C sharing, then, is very much like used book stores or used record stores. It is different, of course, because the person making the content available isn't making money from making the content available. It is also different, of course, because in real space, when I sell a record, I don't have it anymore, while in cyberspace, when someone shares my 1949 recording of Bernstein's "Two Love Songs," I still have it. That difference would matter economically if the owner of the 1949 copyright were selling the record in competition with my sharing. But we're talking about the class of content that is not currently commercially available. The Internet is making it available, through cooperative sharing, without competing with the market.
+
+It may well be, all things considered, that it would be better if the copyright owner got something from this trade. But just because it may well be better, it doesn't follow that it would be good to ban used book stores. Or put differently, if you think that type C sharing should be stopped, do you think that libraries and used book stores should be shut as well?
+
+Finally, and perhaps most importantly, file-sharing networks enable type D sharing to occur - the sharing of content that copyright owners want to have shared or for which there is no continuing copyright. This sharing clearly benefits authors and society. Science fiction author Cory Doctorow, for example, released his first novel, /{Down and Out in the Magic Kingdom}/, both free on-line and in bookstores on the same day. His (and his publisher's) thinking was that the on-line distribution would be a great advertisement for the "real" book. People would read part on-line, and then decide whether they liked the book or not. If they liked it, they would be more likely to buy it. Doctorow's content is type D content. If sharing networks enable his work to be spread, then both he and society are better off. (Actually, much better off: It is a great book!)
+
+Likewise for work in the public domain: This sharing benefits society with no legal harm to authors at all. If efforts to solve the problem of type A sharing destroy the opportunity for type D sharing, then we lose something important in order to protect type A content.
+
+The point throughout is this: While the recording industry understandably says, "This is how much we've lost," we must also ask, "How much has society gained from p2p sharing? What are the efficiencies? What is the content that otherwise would be unavailable?"
+
+For unlike the piracy I described in the first section of this chapter, much of the "piracy" that file sharing enables is plainly legal and good. And like the piracy I described in chapter 4, much of this piracy is motivated by a new way of spreading content caused by changes in the technology of distribution. Thus, consistent with the tradition that gave us Hollywood, radio, the recording industry, and cable TV, the question we should be asking about file sharing is how best to preserve its benefits while minimizing (to the extent possible) the wrongful harm it causes artists. The question is one of balance. The law should seek that balance, and that balance will be found only with time.
+
+"But isn't the war just a war against illegal sharing? Isn't the target just what you call type A sharing?"
+
+You would think. And we should hope. But so far, it is not. The effect of the war purportedly on type A sharing alone has been felt far beyond that one class of sharing. That much is obvious from the Napster case itself. When Napster told the district court that it had developed a technology to block the transfer of 99.4 percent of identified infringing material, the district court told counsel for Napster 99.4 percent was not good enough. Napster had to push the infringements "down to zero."~{ See Transcript of Proceedings, In Re: Napster Copyright Litigation at 34-35 (N.D. Cal., 11 July 2001), nos. MDL-00-1369 MHP, C 99-5183 MHP, available at link #21. For an account of the litigation and its toll on Napster, see Joseph Menn, /{All the Rave: The Rise and Fall of Shawn Fanning's Napster}/ (New York: Crown Business, 2003), 269-82. }~
+
+If 99.4 percent is not good enough, then this is a war on file-sharing technologies, not a war on copyright infringement. There is no way to assure that a p2p system is used 100 percent of the time in compliance with the law, any more than there is a way to assure that 100 percent of VCRs or 100 percent of Xerox machines or 100 percent of handguns are used in compliance with the law. Zero tolerance means zero p2p. The court's ruling means that we as a society must lose the benefits of p2p, even for the totally legal and beneficial uses they serve, simply to assure that there are zero copyright infringements caused by p2p.
+
+Zero tolerance has not been our history. It has not produced the content industry that we know today. The history of American law has been a process of balance. As new technologies changed the way content was distributed, the law adjusted, after some time, to the new technology. In this adjustment, the law sought to ensure the legitimate rights of creators while protecting innovation. Sometimes this has meant more rights for creators. Sometimes less.
+
+So, as we've seen, when "mechanical reproduction" threatened the interests of composers, Congress balanced the rights of composers against the interests of the recording industry. It granted rights to composers, but also to the recording artists: Composers were to be paid, but at a price set by Congress. But when radio started broadcasting the recordings made by these recording artists, and they complained to Congress that their "creative property" was not being respected (since the radio station did not have to pay them for the creativity it broadcast), Congress rejected their claim. An indirect benefit was enough.
+
+Cable TV followed the pattern of record albums. When the courts rejected the claim that cable broadcasters had to pay for the content they rebroadcast, Congress responded by giving broadcasters a right to compensation, but at a level set by the law. It likewise gave cable companies the right to the content, so long as they paid the statutory price.
+
+This compromise, like the compromise affecting records and player pianos, served two important goals - indeed, the two central goals of any copyright legislation. First, the law assured that new innovators would have the freedom to develop new ways to deliver content. Second, the law assured that copyright holders would be paid for the content that was distributed. One fear was that if Congress simply required cable TV to pay copyright holders whatever they demanded for their content, then copyright holders associated with broadcasters would use their power to stifle this new technology, cable. But if Congress had permitted cable to use broadcasters' content for free, then it would have unfairly subsidized cable. Thus Congress chose a path that would assure /{compensation}/ without giving the past (broadcasters) control over the future (cable).
+
+In the same year that Congress struck this balance, two major producers and distributors of film content filed a lawsuit against another technology, the video tape recorder (VTR, or as we refer to them today, VCRs) that Sony had produced, the Betamax. Disney's and Universal's claim against Sony was relatively simple: Sony produced a device, Disney and Universal claimed, that enabled consumers to engage in copyright infringement. Because the device that Sony built had a "record" button, the device could be used to record copyrighted movies and shows. Sony was therefore benefiting from the copyright infringement of its customers. It should therefore, Disney and Universal claimed, be partially liable for that infringement.
+
+There was something to Disney's and Universal's claim. Sony did decide to design its machine to make it very simple to record television shows. It could have built the machine to block or inhibit any direct copying from a television broadcast. Or possibly, it could have built the machine to copy only if there were a special "copy me" signal on the line. It was clear that there were many television shows that did not grant anyone permission to copy. Indeed, if anyone had asked, no doubt the majority of shows would not have authorized copying. And in the face of this obvious preference, Sony could have designed its system to minimize the opportunity for copyright infringement. It did not, and for that, Disney and Universal wanted to hold it responsible for the architecture it chose.
+
+MPAA president Jack Valenti became the studios' most vocal champion. Valenti called VCRs "tapeworms." He warned, "When there are 20, 30, 40 million of these VCRs in the land, we will be invaded by millions of 'tapeworms,' eating away at the very heart and essence of the most precious asset the copyright owner has, his copyright."~{ Copyright Infringements (Audio and Video Recorders): Hearing on S. 1758 Before the Senate Committee on the Judiciary, 97th Cong., 1st and 2nd sess., 459 (1982) (testimony of Jack Valenti, president, Motion Picture Association of America, Inc.). }~ "One does not have to be trained in sophisticated marketing and creative judgment," he told Congress, "to understand the devastation on the after-theater marketplace caused by the hundreds of millions of tapings that will adversely impact on the future of the creative community in this country. It is simply a question of basic economics and plain common sense."~{ Copyright Infringements (Audio and Video Recorders), 475. }~ Indeed, as surveys would later show, 45 percent of VCR owners had movie libraries of ten videos or more~{ /{Universal City Studios, Inc.}/ v. /{Sony Corp. of America,}/ 480 F. Supp. 429, 438 (C.D. Cal., 1979). }~ - a use the Court would later hold was not "fair." By "allowing VCR owners to copy freely by the means of an exemption from copyright infringement without creating a mechanism to compensate copyright owners," Valenti testified, Congress would "take from the owners the very essence of their property: the exclusive right to control who may use their work, that is, who may copy it and thereby profit from its reproduction."~{ Copyright Infringements (Audio and Video Recorders), 485 (testimony of Jack Valenti). }~
+
+It took eight years for this case to be resolved by the Supreme Court. In the interim, the Ninth Circuit Court of Appeals, which includes Hollywood in its jurisdiction - leading Judge Alex Kozinski, who sits on that court, refers to it as the "Hollywood Circuit" - held that Sony would be liable for the copyright infringement made possible by its machines. Under the Ninth Circuit's rule, this totally familiar technology - which Jack Valenti had called "the Boston Strangler of the American film industry" (worse yet, it was a /{Japanese}/ Boston Strangler of the American film industry) - was an illegal technology.~{ /{Universal City Studios, Inc.}/ v. /{Sony Corp. of America,}/ 659 F. 2d 963 (9th Cir. 1981). }~
+
+But the Supreme Court reversed the decision of the Ninth Circuit. And in its reversal, the Court clearly articulated its understanding of when and whether courts should intervene in such disputes. As the Court wrote,
+
+_1 Sound policy, as well as history, supports our consistent deference to Congress when major technological innovations alter the market for copyrighted materials. Congress has the constitutional authority and the institutional ability to accommodate fully the varied permutations of competing interests that are inevitably implicated by such new technology.~{ /{Sony Corp. of America}/ v. /{Universal City Studios, Inc.,}/ 464 U.S. 417, 431 (1984). }~
+
+Congress was asked to respond to the Supreme Court's decision. But as with the plea of recording artists about radio broadcasts, Congress ignored the request. Congress was convinced that American film got enough, this "taking" notwithstanding.
+
+If we put these cases together, a pattern is clear:
+
+table{~h c4; 10; 30; 30; 30;
+
+CASE
+WHOSE VALUE WAS "PIRATED"
+RESPONSE OF THE COURTS
+RESPONSE OF CONGRESS
+
+Recordings
+Composers
+No Protection
+Statutory License
+
+Radio
+Recording Artists
+N/A
+Nothing
+
+Cable TV
+Broadcasters
+No Protection
+Statutory License
+
+VCR
+Film Creators
+No Protection
+Nothing
+
+}table
+
+In each case throughout our history, a new technology changed the way content was distributed.~{ These are the most important instances in our history, but there are other cases as well. The technology of digital audio tape (DAT), for example, was regulated by Congress to minimize the risk of piracy. The remedy Congress imposed did burden DAT producers, by taxing tape sales and controlling the technology of DAT. See Audio Home Recording Act of 1992 (Title 17 of the /{United States Code}/), Pub. L. No. 102-563, 106 Stat. 4237, codified at 17 U.S.C. §1001. Again, however, this regulation did not eliminate the opportunity for free riding in the sense I've described. See Lessig, /{Future,}/ 71. See also Picker, "From Edison to the Broadcast Flag," /{University of Chicago Law Review}/ 70 (2003): 293-96. }~ In each case, throughout our history, that change meant that someone got a "free ride" on someone else's work.
+
+In /{none}/ of these cases did either the courts or Congress eliminate all free riding. In /{none}/ of these cases did the courts or Congress insist that the law should assure that the copyright holder get all the value that his copyright created. In every case, the copyright owners complained of "piracy." In every case, Congress acted to recognize some of the legitimacy in the behavior of the "pirates." In each case, Congress allowed some new technology to benefit from content made before. It balanced the interests at stake.
+
+When you think across these examples, and the other examples that make up the first four chapters of this section, this balance makes sense. Was Walt Disney a pirate? Would doujinshi be better if creators had to ask permission? Should tools that enable others to capture and spread images as a way to cultivate or criticize our culture be better regulated? Is it really right that building a search engine should expose you to $15 million in damages? Would it have been better if Edison had controlled film? Should every cover band have to hire a lawyer to get permission to record a song?
+
+We could answer yes to each of these questions, but our tradition has answered no. In our tradition, as the Supreme Court has stated, copyright "has never accorded the copyright owner complete control over all possible uses of his work."~{ /{Sony Corp. of America}/ v. /{Universal City Studios, Inc.,}/ 464 U.S. 417, 432 (1984). }~ Instead, the particular uses that the law regulates have been defined by balancing the good that comes from granting an exclusive right against the burdens such an exclusive right creates. And this balancing has historically been done /{after}/ a technology has matured, or settled into the mix of technologies that facilitate the distribution of content.
+
+We should be doing the same thing today. The technology of the Internet is changing quickly. The way people connect to the Internet (wires vs. wireless) is changing very quickly. No doubt the network should not become a tool for "stealing" from artists. But neither should the law become a tool to entrench one particular way in which artists (or more accurately, distributors) get paid. As I describe in some detail in the last chapter of this book, we should be securing income to artists while we allow the market to secure the most efficient way to promote and distribute content. This will require changes in the law, at least in the interim. These changes should be designed to balance the protection of the law against the strong public interest that innovation continue.
+
+This is especially true when a new technology enables a vastly superior mode of distribution. And this p2p has done. P2p technologies can be ideally efficient in moving content across a widely diverse network. Left to develop, they could make the network vastly more efficient. Yet these "potential public benefits," as John Schwartz writes in /{The New York Times}/, "could be delayed in the P2P fight."~{ John Schwartz, "New Economy: The Attack on Peer-to-Peer Software Echoes Past Efforts," /{New York Times,}/ 22 September 2003, C3. }~
+
+Yet when anyone begins to talk about "balance," the copyright warriors raise a different argument. "All this hand waving about balance and incentives," they say, "misses a fundamental point. Our content," the warriors insist, "is our /{property}/. Why should we wait for Congress to 'rebalance' our property rights? Do you have to wait before calling the police when your car has been stolen? And why should Congress deliberate at all about the merits of this theft? Do we ask whether the car thief had a good use for the car before we arrest him?"
+
+"It is /{our property}/," the warriors insist. "And it should be protected just as any other property is protected."
+
+:C~ "PROPERTY"
+
+1~intro_property [Intro]-#
+
+The copyright warriors are right: A copyright is a kind of property. It can be owned and sold, and the law protects against its theft. Ordinarily, the copyright owner gets to hold out for any price he wants. Markets reckon the supply and demand that partially determine the price he can get.
+
+But in ordinary language, to call a copyright a "property" right is a bit misleading, for the property of copyright is an odd kind of property. Indeed, the very idea of property in any idea or any expression is very odd. I understand what I am taking when I take the picnic table you put in your backyard. I am taking a thing, the picnic table, and after I take it, you don't have it. But what am I taking when I take the good /{idea}/ you had to put a picnic table in the backyard - by, for example, going to Sears, buying a table, and putting it in my backyard? What is the thing I am taking then?
+
+The point is not just about the thingness of picnic tables versus ideas, though that's an important difference. The point instead is that in the ordinary case - indeed, in practically every case except for a narrow range of exceptions - ideas released to the world are free. I don't take anything from you when I copy the way you dress - though I might seem weird if I did it every day, and especially weird if you are a woman. Instead, as Thomas Jefferson said (and as is especially true when I copy the way someone else dresses), "He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me."~{ Letter from Thomas Jefferson to Isaac McPherson (13 August 1813) in /{The Writings of Thomas Jefferson,}/ vol. 6 (Andrew A. Lipscomb and Albert Ellery Bergh, eds., 1903), 330, 333-34. }~
+
+The exceptions to free use are ideas and expressions within the reach of the law of patent and copyright, and a few other domains that I won't discuss here. Here the law says you can't take my idea or expression without my permission: The law turns the intangible into property.
+
+But how, and to what extent, and in what form - the details, in other words - matter. To get a good sense of how this practice of turning the intangible into property emerged, we need to place this "property" in its proper context.~{ As the legal realists taught American law, all property rights are intangible. A property right is simply a right that an individual has against the world to do or not do certain things that may or may not attach to a physical object. The right itself is intangible, even if the object to which it is (metaphorically) attached is tangible. See Adam Mossoff, "What Is Property? Putting the Pieces Back Together," /{Arizona Law Review}/ 45 (2003): 373, 429 n. 241. }~
+
+My strategy in doing this will be the same as my strategy in the preceding part. I offer four stories to help put the idea of "copyright material is property" in context. Where did the idea come from? What are its limits? How does it function in practice? After these stories, the significance of this true statement - "copyright material is property" - will be a bit more clear, and its implications will be revealed as quite different from the implications that the copyright warriors would have us draw.
+
+1~ Chapter Six: Founders
+
+*{William Shakespeare}* wrote /{Romeo and Juliet}/ in 1595. The play was first published in 1597. It was the eleventh major play that Shakespeare had written. He would continue to write plays through 1613, and the plays that he wrote have continued to define Anglo-American culture ever since. So deeply have the works of a sixteenth-century writer seeped into our culture that we often don't even recognize their source. I once overheard someone commenting on Kenneth Branagh's adaptation of /{Henry V}/: "I liked it, but Shakespeare is so full of clichés."
+
+In 1774, almost 180 years after /{Romeo and Juliet}/ was written, the "copy-right" for the work was still thought by many to be the exclusive right of a single London publisher, Jacob Tonson.~{ Jacob Tonson is typically remembered for his associations with prominent eighteenth-century literary figures, especially John Dryden, and for his handsome "definitive editions" of classic works. In addition to /{Romeo and Juliet,}/ he published an astonishing array of works that still remain at the heart of the English canon, including collected works of Shakespeare, Ben Jonson, John Milton, and John Dryden. See Keith Walker, "Jacob Tonson, Bookseller," /{American Scholar}/ 61:3 (1992): 424-31. }~ Tonson was the most prominent of a small group of publishers called the Conger~{ Lyman Ray Patterson, /{Copyright in Historical Perspective}/ (Nashville: Vanderbilt University Press, 1968), 151-52. }~ who controlled bookselling in England during the eighteenth century. The Conger claimed a perpetual right to control the "copy" of books that they had acquired from authors. That perpetual right meant that no one else could publish copies of a book to which they held the copyright. Prices of the classics were thus kept high; competition to produce better or cheaper editions was eliminated.
+
+Now, there's something puzzling about the year 1774 to anyone who knows a little about copyright law. The better-known year in the history of copyright is 1710, the year that the British Parliament adopted the first "copyright" act. Known as the Statute of Anne, the act stated that all published works would get a copyright term of fourteen years, renewable once if the author was alive, and that all works already published by 1710 would get a single term of twenty-one additional years.~{ As Siva Vaidhyanathan nicely argues, it is erroneous to call this a "copyright law." See Vaidhyanathan, /{Copyrights and Copywrongs,}/ 40. }~ Under this law, /{Romeo and Juliet}/ should have been free in 1731. So why was there any issue about it still being under Tonson's control in 1774?
+
+The reason is that the English hadn't yet agreed on what a "copy-right" was - indeed, no one had. At the time the English passed the Statute of Anne, there was no other legislation governing copyrights. The last law regulating publishers, the Licensing Act of 1662, had expired in 1695. That law gave publishers a monopoly over publishing, as a way to make it easier for the Crown to control what was published. But after it expired, there was no positive law that said that the publishers, or "Stationers," had an exclusive right to print books.
+
+There was no /{positive}/ law, but that didn't mean that there was no law. The Anglo-American legal tradition looks to both the words of legislatures and the words of judges to know the rules that are to govern how people are to behave. We call the words from legislatures "positive law." We call the words from judges "common law." The common law sets the background against which legislatures legislate; the legislature, ordinarily, can trump that background only if it passes a law to displace it. And so the real question after the licensing statutes had expired was whether the common law protected a copyright, independent of any positive law.
+
+This question was important to the publishers, or "booksellers," as they were called, because there was growing competition from foreign publishers. The Scottish, in particular, were increasingly publishing and exporting books to England. That competition reduced the profits of the Conger, which reacted by demanding that Parliament pass a law to again give them exclusive control over publishing. That demand ultimately resulted in the Statute of Anne.
+
+The Statute of Anne granted the author or "proprietor" of a book an exclusive right to print that book. In an important limitation, however, and to the horror of the booksellers, the law gave the bookseller that right for a limited term. At the end of that term, the copyright "expired," and the work would then be free and could be published by anyone. Or so the legislature is thought to have believed.
+
+Now, the thing to puzzle about for a moment is this: Why would Parliament limit the exclusive right? Not why would they limit it to the particular limit they set, but why would they limit the right /{at all?}/
+
+For the booksellers, and the authors whom they represented, had a very strong claim. Take /{Romeo and Juliet}/ as an example: That play was written by Shakespeare. It was his genius that brought it into the world. He didn't take anybody's property when he created this play (that's a controversial claim, but never mind), and by his creating this play, he didn't make it any harder for others to craft a play. So why is it that the law would ever allow someone else to come along and take Shakespeare's play without his, or his estate's, permission? What reason is there to allow someone else to "steal" Shakespeare's work?
+
+The answer comes in two parts. We first need to see something special about the notion of "copyright" that existed at the time of the Statute of Anne. Second, we have to see something important about "booksellers."
+
+First, about copyright. In the last three hundred years, we have come to apply the concept of "copyright" ever more broadly. But in 1710, it wasn't so much a concept as it was a very particular right. The copyright was born as a very specific set of restrictions: It forbade others from reprinting a book. In 1710, the "copy-right" was a right to use a particular machine to replicate a particular work. It did not go beyond that very narrow right. It did not control any more generally how a work could be /{used}/. Today the right includes a large collection of restrictions on the freedom of others: It grants the author the exclusive right to copy, the exclusive right to distribute, the exclusive right to perform, and so on.
+
+So, for example, even if the copyright to Shakespeare's works were perpetual, all that would have meant under the original meaning of the term was that no one could reprint Shakespeare's work without the permission of the Shakespeare estate. It would not have controlled anything, for example, about how the work could be performed, whether the work could be translated, or whether Kenneth Branagh would be allowed to make his films. The "copy-right" was only an exclusive right to print - no less, of course, but also no more.
+
+Even that limited right was viewed with skepticism by the British. They had had a long and ugly experience with "exclusive rights," especially "exclusive rights" granted by the Crown. The English had fought a civil war in part about the Crown's practice of handing out monopolies - especially monopolies for works that already existed. King Henry VIII granted a patent to print the Bible; Queen Elizabeth granted Darcy a monopoly to print playing cards. The English Parliament began to fight back against this power of the Crown. In 1624, it passed the Statute of Monopolies, limiting monopolies to patents for new inventions. And by 1710, Parliament was eager to deal with the growing monopoly in publishing.
+
+Thus the "copy-right," when viewed as a monopoly right, was naturally viewed as a right that should be limited. (However convincing the claim that "it's my property, and I should have it forever," try sounding convincing when uttering, "It's my monopoly, and I should have it forever.") The state would protect the exclusive right, but only so long as it benefited society. The British saw the harms from special-interest favors; they passed a law to stop them.
+
+Second, about booksellers. It wasn't just that the copyright was a monopoly. It was also that it was a monopoly held by the booksellers. Booksellers sound quaint and harmless to us. They were not viewed as harmless in seventeenth-century England. Members of the Conger were increasingly seen as monopolists of the worst kind - tools of the Crown's repression, selling the liberty of England to guarantee themselves a monopoly profit. The attacks against these monopolists were harsh: Milton described them as "old patentees and monopolizers in the trade of book-selling"; they were "men who do not therefore labour in an honest profession to which learning is indetted."~{ Philip Wittenberg, /{The Protection and Marketing of Literary Property}/ (New York: J. Messner, Inc., 1937), 31. }~
+
+Many believed the power the booksellers exercised over the spread of knowledge was harming that spread, just at the time the Enlightenment was teaching the importance of education and knowledge spread generally. The idea that knowledge should be free was a hallmark of the time, and these powerful commercial interests were interfering with that idea.
+
+To balance this power, Parliament decided to increase competition among booksellers, and the simplest way to do that was to spread the wealth of valuable books. Parliament therefore limited the term of copyrights, and thereby guaranteed that valuable books would become open to any publisher to publish after a limited time. Thus the setting of the term for existing works to just twenty-one years was a compromise to fight the power of the booksellers. The limitation on terms was an indirect way to assure competition among publishers, and thus the construction and spread of culture.
+
+When 1731 (1710 + 21) came along, however, the booksellers were getting anxious. They saw the consequences of more competition, and like every competitor, they didn't like them. At first booksellers simply ignored the Statute of Anne, continuing to insist on the perpetual right to control publication. But in 1735 and 1737, they tried to persuade Parliament to extend their terms. Twenty-one years was not enough, they said; they needed more time.
+
+Parliament rejected their requests. As one pamphleteer put it, in words that echo today,
+
+_1 I see no Reason for granting a further Term now, which will not hold as well for granting it again and again, as often as the Old ones Expire; so that should this Bill pass, it will in Effect be establishing a perpetual Monopoly, a Thing deservedly odious in the Eye of the Law; it will be a great Cramp to Trade, a Discouragement to Learning, no Benefit to the Authors, but a general Tax on the Publick; and all this only to increase the private Gain of the Booksellers.~{ A Letter to a Member of Parliament concerning the Bill now depending in the House of Commons, for making more effectual an Act in the Eighth Year of the Reign of Queen Anne, entitled, An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or Purchasers of such Copies, during the Times therein mentioned (London, 1735), in Brief Amici Curiae of Tyler T. Ochoa et al., 8, /{Eldred}/ v. /{Ashcroft,}/ 537 U.S. 186 (2003) (No. 01-618). }~
+
+Having failed in Parliament, the publishers turned to the courts in a series of cases. Their argument was simple and direct: The Statute of Anne gave authors certain protections through positive law, but those protections were not intended as replacements for the common law. Instead, they were intended simply to supplement the common law. Under common law, it was already wrong to take another person's creative "property" and use it without his permission. The Statute of Anne, the booksellers argued, didn't change that. Therefore, just because the protections of the Statute of Anne expired, that didn't mean the protections of the common law expired: Under the common law they had the right to ban the publication of a book, even if its Statute of Anne copyright had expired. This, they argued, was the only way to protect authors.
+
+This was a clever argument, and one that had the support of some of the leading jurists of the day. It also displayed extraordinary chutzpah. Until then, as law professor Raymond Patterson has put it, "The publishers ... had as much concern for authors as a cattle rancher has for cattle."~{ Lyman Ray Patterson, "Free Speech, Copyright, and Fair Use," /{Vanderbilt Law Review}/ 40 (1987): 28. For a wonderfully compelling account, see Vaidhyanathan, 37-48. }~ The bookseller didn't care squat for the rights of the author. His concern was the monopoly profit that the author's work gave.
+
+The booksellers' argument was not accepted without a fight. The hero of this fight was a Scottish bookseller named Alexander Donaldson.~{ For a compelling account, see David Saunders, /{Authorship and Copyright}/ (London: Routledge, 1992), 62-69. }~
+
+Donaldson was an outsider to the London Conger. He began his career in Edinburgh in 1750. The focus of his business was inexpensive reprints "of standard works whose copyright term had expired," at least under the Statute of Anne.~{ Mark Rose, /{Authors and Owners}/ (Cambridge: Harvard University Press, 1993), 92. }~ Donaldson's publishing house prospered and became "something of a center for literary Scotsmen." "[A]mong them," Professor Mark Rose writes, was "the young James Boswell who, together with his friend Andrew Erskine, published an anthology of contemporary Scottish poems with Donaldson."~{ Ibid., 93. }~
+
+When the London booksellers tried to shut down Donaldson's shop in Scotland, he responded by moving his shop to London, where he sold inexpensive editions "of the most popular English books, in defiance of the supposed common law right of Literary Property."~{ Lyman Ray Patterson, /{Copyright in Historical Perspective,}/ 167 (quoting Boswell). }~ His books undercut the Conger prices by 30 to 50 percent, and he rested his right to compete upon the ground that, under the Statute of Anne, the works he was selling had passed out of protection.
+
+The London booksellers quickly brought suit to block "piracy" like Donaldson's. A number of actions were successful against the "pirates," the most important early victory being /{Millar}/ v. /{Taylor}/.
+
+Millar was a bookseller who in 1729 had purchased the rights to James Thomson's poem "The Seasons." Millar complied with the requirements of the Statute of Anne, and therefore received the full protection of the statute. After the term of copyright ended, Robert Taylor began printing a competing volume. Millar sued, claiming a perpetual common law right, the Statute of Anne notwithstanding.~{ Howard B. Abrams, "The Historic Foundation of American Copyright Law: Exploding the Myth of Common Law Copyright," /{Wayne Law Review}/ 29 (1983): 1152. }~
+
+Astonishingly to modern lawyers, one of the greatest judges in English history, Lord Mansfield, agreed with the booksellers. Whatever protection the Statute of Anne gave booksellers, it did not, he held, extinguish any common law right. The question was whether the common law would protect the author against subsequent "pirates." Mansfield's answer was yes: The common law would bar Taylor from reprinting Thomson's poem without Millar's permission. That common law rule thus effectively gave the booksellers a perpetual right to control the publication of any book assigned to them.
+
+Considered as a matter of abstract justice - reasoning as if justice were just a matter of logical deduction from first principles - Mansfield's conclusion might make some sense. But what it ignored was the larger issue that Parliament had struggled with in 1710: How best to limit the monopoly power of publishers? Parliament's strategy was to offer a term for existing works that was long enough to buy peace in 1710, but short enough to assure that culture would pass into competition within a reasonable period of time. Within twenty-one years, Parliament believed, Britain would mature from the controlled culture that the Crown coveted to the free culture that we inherited.
+
+The fight to defend the limits of the Statute of Anne was not to end there, however, and it is here that Donaldson enters the mix.
+
+Millar died soon after his victory, so his case was not appealed. His estate sold Thomson's poems to a syndicate of printers that included Thomas Beckett.~{ Ibid., 1156. }~ Donaldson then released an unauthorized edition of Thomson's works. Beckett, on the strength of the decision in /{Millar}/, got an injunction against Donaldson. Donaldson appealed the case to the House of Lords, which functioned much like our own Supreme Court. In February of 1774, that body had the chance to interpret the meaning of Parliament's limits from sixty years before.
+
+As few legal cases ever do, /{Donaldson}/ v. /{Beckett}/ drew an enormous amount of attention throughout Britain. Donaldson's lawyers argued that whatever rights may have existed under the common law, the Statute of Anne terminated those rights. After passage of the Statute of Anne, the only legal protection for an exclusive right to control publication came from that statute. Thus, they argued, after the term specified in the Statute of Anne expired, works that had been protected by the statute were no longer protected.
+
+The House of Lords was an odd institution. Legal questions were presented to the House and voted upon first by the "law lords," members of special legal distinction who functioned much like the Justices in our Supreme Court. Then, after the law lords voted, the House of Lords generally voted.
+
+The reports about the law lords' votes are mixed. On some counts, it looks as if perpetual copyright prevailed. But there is no ambiguity about how the House of Lords voted as a whole. By a two-to-one majority (22 to 11), they voted to reject the idea of perpetual copyrights. Whatever one's understanding of the common law, now a copyright was fixed for a limited time, after which the work protected by copyright passed into the public domain.
+
+"The public domain." Before the case of /{Donaldson}/ v. /{Beckett}/, there was no clear idea of a public domain in England. Before 1774, there was a strong argument that common law copyrights were perpetual. After 1774, the public domain was born. For the first time in Anglo-American history, the legal control over creative works expired, and the greatest works in English history - including those of Shakespeare, Bacon, Milton, Johnson, and Bunyan - were free of legal restraint.
+
+It is hard for us to imagine, but this decision by the House of Lords fueled an extraordinarily popular and political reaction. In Scotland, where most of the "pirate publishers" did their work, people celebrated the decision in the streets. As the /{Edinburgh Advertiser}/ reported, "No private cause has so much engrossed the attention of the public, and none has been tried before the House of Lords in the decision of which so many individuals were interested." "Great rejoicing in Edinburgh upon victory over literary property: bonfires and illuminations."~{ Rose, 97. }~
+
+In London, however, at least among publishers, the reaction was equally strong in the opposite direction. The /{Morning Chronicle}/ reported:
+
+_1 By the above decision ... near 200,000 pounds worth of what was honestly purchased at public sale, and which was yesterday thought property is now reduced to nothing. The Booksellers of London and Westminster, many of whom sold estates and houses to purchase Copy-right, are in a manner ruined, and those who after many years industry thought they had acquired a competency to provide for their families now find themselves without a shilling to devise to their successors.~{ Ibid. }~
+
+"Ruined" is a bit of an exaggeration. But it is not an exaggeration to say that the change was profound. The decision of the House of Lords meant that the booksellers could no longer control how culture in England would grow and develop. Culture in England was thereafter /{free}/. Not in the sense that copyrights would not be respected, for of course, for a limited time after a work was published, the bookseller had an exclusive right to control the publication of that book. And not in the sense that books could be stolen, for even after a copyright expired, you still had to buy the book from someone. But /{free}/ in the sense that the culture and its growth would no longer be controlled by a small group of publishers. As every free market does, this free market of free culture would grow as the consumers and producers chose. English culture would develop as the many English readers chose to let it develop - chose in the books they bought and wrote; chose in the memes they repeated and endorsed. Chose in a /{competitive context}/, not a context in which the choices about what culture is available to people and how they get access to it are made by the few despite the wishes of the many.
+
+At least, this was the rule in a world where the Parliament is anti-monopoly, resistant to the protectionist pleas of publishers. In a world where the Parliament is more pliant, free culture would be less protected.
+
+1~ Chapter Seven: Recorders
+
+*{Jon Else}* is a filmmaker. He is best known for his documentaries and has been very successful in spreading his art. He is also a teacher, and as a teacher myself, I envy the loyalty and admiration that his students feel for him. (I met, by accident, two of his students at a dinner party. He was their god.)
+
+Else worked on a documentary that I was involved in. At a break, he told me a story about the freedom to create with film in America today.
+
+In 1990, Else was working on a documentary about Wagner's Ring Cycle. The focus was stagehands at the San Francisco Opera. Stagehands are a particularly funny and colorful element of an opera. During a show, they hang out below the stage in the grips' lounge and in the lighting loft. They make a perfect contrast to the art on the stage.
+
+During one of the performances, Else was shooting some stagehands playing checkers. In one corner of the room was a television set. Playing on the television set, while the stagehands played checkers and the opera company played Wagner, was /{The Simpsons}/. As Else judged it, this touch of cartoon helped capture the flavor of what was special about the scene.
+
+Years later, when he finally got funding to complete the film, Else attempted to clear the rights for those few seconds of /{The Simpsons}/. For of course, those few seconds are copyrighted; and of course, to use copyrighted material you need the permission of the copyright owner, unless "fair use" or some other privilege applies.
+
+Else called /{Simpsons}/ creator Matt Groening's office to get permission. Groening approved the shot. The shot was a four-and-a-half-second image on a tiny television set in the corner of the room. How could it hurt? Groening was happy to have it in the film, but he told Else to contact Gracie Films, the company that produces the program.
+
+Gracie Films was okay with it, too, but they, like Groening, wanted to be careful. So they told Else to contact Fox, Gracie's parent company. Else called Fox and told them about the clip in the corner of the one room shot of the film. Matt Groening had already given permission, Else said. He was just confirming the permission with Fox.
+
+Then, as Else told me, "two things happened. First we discovered ... that Matt Groening doesn't own his own creation - or at least that someone [at Fox] believes he doesn't own his own creation." And second, Fox "wanted ten thousand dollars as a licensing fee for us to use this four-point-five seconds of ... entirely unsolicited /{Simpsons}/ which was in the corner of the shot."
+
+Else was certain there was a mistake. He worked his way up to someone he thought was a vice president for licensing, Rebecca Herrera. He explained to her, "There must be some mistake here. ... We're asking for your educational rate on this." That was the educational rate, Herrera told Else. A day or so later, Else called again to confirm what he had been told.
+
+"I wanted to make sure I had my facts straight," he told me. "Yes, you have your facts straight," she said. It would cost $10,000 to use the clip of /{The Simpsons}/ in the corner of a shot in a documentary film about Wagner's Ring Cycle. And then, astonishingly, Herrera told Else, "And if you quote me, I'll turn you over to our attorneys." As an assistant to Herrera told Else later on, "They don't give a shit. They just want the money."
+
+Else didn't have the money to buy the right to replay what was playing on the television backstage at the San Francisco Opera. To reproduce this reality was beyond the documentary filmmaker's budget. At the very last minute before the film was to be released, Else digitally replaced the shot with a clip from another film that he had worked on, /{The Day After Trinity}/, from ten years before.
+
+There's no doubt that someone, whether Matt Groening or Fox, owns the copyright to /{The Simpsons}/. That copyright is their property. To use that copyrighted material thus sometimes requires the permission of the copyright owner. If the use that Else wanted to make of the /{Simpsons}/ copyright were one of the uses restricted by the law, then he would need to get the permission of the copyright owner before he could use the work in that way. And in a free market, it is the owner of the copyright who gets to set the price for any use that the law says the owner gets to control.
+
+For example, "public performance" is a use of /{The Simpsons}/ that the copyright owner gets to control. If you take a selection of favorite episodes, rent a movie theater, and charge for tickets to come see "My Favorite /{Simpsons}/," then you need to get permission from the copyright owner. And the copyright owner (rightly, in my view) can charge whatever she wants - $10 or $1,000,000. That's her right, as set by the law.
+
+But when lawyers hear this story about Jon Else and Fox, their first thought is "fair use."~{ For an excellent argument that such use is "fair use," but that lawyers don't permit recognition that it is "fair use," see Richard A. Posner with William F. Patry, "Fair Use and Statutory Reform in the Wake of /{Eldred}/" (draft on file with author), University of Chicago Law School, 5 August 2003. }~ Else's use of just 4.5 seconds of an indirect shot of a /{Simpsons}/ episode is clearly a fair use of /{The Simpsons}/ - and fair use does not require the permission of anyone.
+
+So I asked Else why he didn't just rely upon "fair use." Here's his reply:
+
+_1 The /{Simpsons}/ fiasco was for me a great lesson in the gulf between what lawyers find irrelevant in some abstract sense, and what is crushingly relevant in practice to those of us actually trying to make and broadcast documentaries. I never had any doubt that it was "clearly fair use" in an absolute legal sense. But I couldn't rely on the concept in any concrete way. Here's why:
+
+_1 1. Before our films can be broadcast, the network requires that we buy Errors and Omissions insurance. The carriers require a detailed "visual cue sheet" listing the source and licensing status of each shot in the film. They take a dim view of "fair use," and a claim of "fair use" can grind the application process to a halt.
+
+_1 2. I probably never should have asked Matt Groening in the first place. But I knew (at least from folklore) that Fox had a history of tracking down and stopping unlicensed /{Simpsons}/ usage, just as George Lucas had a very high profile litigating /{Star Wars}/ usage. So I decided to play by the book, thinking that we would be granted free or cheap license to four seconds of /{Simpsons}/. As a documentary producer working to exhaustion on a shoestring, the last thing I wanted was to risk legal trouble, even nuisance legal trouble, and even to defend a principle.
+
+_1 3. I did, in fact, speak with one of your colleagues at Stanford Law School ... who confirmed that it was fair use. He also confirmed that Fox would "depose and litigate you to within an inch of your life," regardless of the merits of my claim. He made clear that it would boil down to who had the bigger legal department and the deeper pockets, me or them.
+
+_1 4. The question of fair use usually comes up at the end of the project, when we are up against a release deadline and out of money.
+
+In theory, fair use means you need no permission. The theory therefore supports free culture and insulates against a permission culture. But in practice, fair use functions very differently. The fuzzy lines of the law, tied to the extraordinary liability if lines are crossed, means that the effective fair use for many types of creators is slight. The law has the right aim; practice has defeated the aim.
+
+This practice shows just how far the law has come from its eighteenth-century roots. The law was born as a shield to protect publishers' profits against the unfair competition of a pirate. It has matured into a sword that interferes with any use, transformative or not.
+
+1~ Chapter Eight: Transformers
+
+*{In 1993,}* Alex Alben was a lawyer working at Starwave, Inc. Starwave was an innovative company founded by Microsoft cofounder Paul Allen to develop digital entertainment. Long before the Internet became popular, Starwave began investing in new technology for delivering entertainment in anticipation of the power of networks.
+
+Alben had a special interest in new technology. He was intrigued by the emerging market for CD-ROM technology - not to distribute film, but to do things with film that otherwise would be very difficult. In 1993, he launched an initiative to develop a product to build retrospectives on the work of particular actors. The first actor chosen was Clint Eastwood. The idea was to showcase all of the work of Eastwood, with clips from his films and interviews with figures important to his career.
+
+At that time, Eastwood had made more than fifty films, as an actor and as a director. Alben began with a series of interviews with Eastwood, asking him about his career. Because Starwave produced those interviews, it was free to include them on the CD.
+
+That alone would not have made a very interesting product, so Starwave wanted to add content from the movies in Eastwood's career: posters, scripts, and other material relating to the films Eastwood made. Most of his career was spent at Warner Brothers, and so it was relatively easy to get permission for that content.
+
+Then Alben and his team decided to include actual film clips. "Our goal was that we were going to have a clip from every one of Eastwood's films," Alben told me. It was here that the problem arose. "No one had ever really done this before," Alben explained. "No one had ever tried to do this in the context of an artistic look at an actor's career."
+
+Alben brought the idea to Michael Slade, the CEO of Starwave. Slade asked, "Well, what will it take?"
+
+Alben replied, "Well, we're going to have to clear rights from everyone who appears in these films, and the music and everything else that we want to use in these film clips." Slade said, "Great! Go for it."~{ Technically, the rights that Alben had to clear were mainly those of publicity - rights an artist has to control the commercial exploitation of his image. But these rights, too, burden "Rip, Mix, Burn" creativity, as this chapter evinces. }~
+
+The problem was that neither Alben nor Slade had any idea what clearing those rights would mean. Every actor in each of the films could have a claim to royalties for the reuse of that film. But CD-ROMs had not been specified in the contracts for the actors, so there was no clear way to know just what Starwave was to do.
+
+I asked Alben how he dealt with the problem. With an obvious pride in his resourcefulness that obscured the obvious bizarreness of his tale, Alben recounted just what they did:
+
+_1 So we very mechanically went about looking up the film clips. We made some artistic decisions about what film clips to include - of course we were going to use the "Make my day" clip from /{Dirty Harry}/. But you then need to get the guy on the ground who's wiggling under the gun and you need to get his permission. And then you have to decide what you are going to pay him.
+
+_1 We decided that it would be fair if we offered them the day-player rate for the right to reuse that performance. We're talking about a clip of less than a minute, but to reuse that performance in the CD-ROM the rate at the time was about $600.
+
+_1 So we had to identify the people - some of them were hard to identify because in Eastwood movies you can't tell who's the guy crashing through the glass - is it the actor or is it the stuntman? And then we just, we put together a team, my assistant and some others, and we just started calling people.
+
+Some actors were glad to help - Donald Sutherland, for example, followed up himself to be sure that the rights had been cleared. Others were dumbfounded at their good fortune. Alben would ask, "Hey, can I pay you $600 or maybe if you were in two films, you know, $1,200?" And they would say, "Are you for real? Hey, I'd love to get $1,200." And some of course were a bit difficult (estranged ex-wives, in particular). But eventually, Alben and his team had cleared the rights to this retrospective CD-ROM on Clint Eastwood's career.
+
+It was one /{year}/ later - "and even then we weren't sure whether we were totally in the clear."
+
+Alben is proud of his work. The project was the first of its kind and the only time he knew of that a team had undertaken such a massive project for the purpose of releasing a retrospective.
+
+_1 Everyone thought it would be too hard. Everyone just threw up their hands and said, "Oh, my gosh, a film, it's so many copyrights, there's the music, there's the screenplay, there's the director, there's the actors." But we just broke it down. We just put it into its constituent parts and said, "Okay, there's this many actors, this many directors, ... this many musicians," and we just went at it very systematically and cleared the rights.
+
+And no doubt, the product itself was exceptionally good. Eastwood loved it, and it sold very well.
+
+But I pressed Alben about how weird it seems that it would have to take a year's work simply to clear rights. No doubt Alben had done this efficiently, but as Peter Drucker has famously quipped, "There is nothing so useless as doing efficiently that which should not be done at all."~{ U.S. Department of Commerce Office of Acquisition Management, /{Seven Steps to Performance-Based Services Acquisition,}/ available at link #22. }~ Did it make sense, I asked Alben, that this is the way a new work has to be made?
+
+For, as he acknowledged, "very few ... have the time and resources, and the will to do this," and thus, very few such works would ever be made. Does it make sense, I asked him, from the standpoint of what anybody really thought they were ever giving rights for originally, that you would have to go clear rights for these kinds of clips?
+
+_1 I don't think so. When an actor renders a performance in a movie, he or she gets paid very well. ... And then when 30 seconds of that performance is used in a new product that is a retrospective of somebody's career, I don't think that that person ... should be compensated for that.
+
+Or at least, is this /{how}/ the artist should be compensated? Would it make sense, I asked, for there to be some kind of statutory license that someone could pay and be free to make derivative use of clips like this? Did it really make sense that a follow-on creator would have to track down every artist, actor, director, musician, and get explicit permission from each? Wouldn't a lot more be created if the legal part of the creative process could be made to be more clean?
+
+_1 Absolutely. I think that if there were some fair-licensing mechanism - where you weren't subject to hold-ups and you weren't subject to estranged former spouses - you'd see a lot more of this work, because it wouldn't be so daunting to try to put together a retrospective of someone's career and meaningfully illustrate it with lots of media from that person's career. You'd build in a cost as the producer of one of these things. You'd build in a cost of paying X dollars to the talent that performed. But it would be a known cost. That's the thing that trips everybody up and makes this kind of product hard to get off the ground. If you knew I have a hundred minutes of film in this product and it's going to cost me X, then you build your budget around it, and you can get investments and everything else that you need to produce it. But if you say, "Oh, I want a hundred minutes of something and I have no idea what it's going to cost me, and a certain number of people are going to hold me up for money," then it becomes difficult to put one of these things together.
+
+Alben worked for a big company. His company was backed by some of the richest investors in the world. He therefore had authority and access that the average Web designer would not have. So if it took him a year, how long would it take someone else? And how much creativity is never made just because the costs of clearing the rights are so high?
+
+These costs are the burdens of a kind of regulation. Put on a Republican hat for a moment, and get angry for a bit. The government defines the scope of these rights, and the scope defined determines how much it's going to cost to negotiate them. (Remember the idea that land runs to the heavens, and imagine the pilot purchasing fly-through rights as he negotiates to fly from Los Angeles to San Francisco.) These rights might well have once made sense; but as circumstances change, they make no sense at all. Or at least, a well-trained, regulation-minimizing Republican should look at the rights and ask, "Does this still make sense?"
+
+I've seen the flash of recognition when people get this point, but only a few times. The first was at a conference of federal judges in California. The judges were gathered to discuss the emerging topic of cyber-law. I was asked to be on the panel. Harvey Saferstein, a well-respected lawyer from an L.A. firm, introduced the panel with a video that he and a friend, Robert Fairbank, had produced.
+
+The video was a brilliant collage of film from every period in the twentieth century, all framed around the idea of a /{60 Minutes}/ episode. The execution was perfect, down to the sixty-minute stopwatch. The judges loved every minute of it.
+
+When the lights came up, I looked over to my copanelist, David Nimmer, perhaps the leading copyright scholar and practitioner in the nation. He had an astonished look on his face, as he peered across the room of over 250 well-entertained judges. Taking an ominous tone, he began his talk with a question: "Do you know how many federal laws were just violated in this room?"
+
+For of course, the two brilliantly talented creators who made this film hadn't done what Alben did. They hadn't spent a year clearing the rights to these clips; technically, what they had done violated the law. Of course, it wasn't as if they or anyone were going to be prosecuted for this violation (the presence of 250 judges and a gaggle of federal marshals notwithstanding). But Nimmer was making an important point: A year before anyone would have heard of the word Napster, and two years before another member of our panel, David Boies, would defend Napster before the Ninth Circuit Court of Appeals, Nimmer was trying to get the judges to see that the law would not be friendly to the capacities that this technology would enable. Technology means you can now do amazing things easily; but you couldn't easily do them legally.
+
+We live in a "cut and paste" culture enabled by technology. Anyone building a presentation knows the extraordinary freedom that the cut and paste architecture of the Internet created - in a second you can find just about any image you want; in another second, you can have it planted in your presentation.
+
+But presentations are just a tiny beginning. Using the Internet and its archives, musicians are able to string together mixes of sound never before imagined; filmmakers are able to build movies out of clips on computers around the world. An extraordinary site in Sweden takes images of politicians and blends them with music to create biting political commentary. A site called Camp Chaos has produced some of the most biting criticism of the record industry that there is through the mixing of Flash! and music.
+
+All of these creations are technically illegal. Even if the creators wanted to be "legal," the cost of complying with the law is impossibly high. Therefore, for the law-abiding sorts, a wealth of creativity is never made. And for that part that is made, if it doesn't follow the clearance rules, it doesn't get released.
+
+To some, these stories suggest a solution: Let's alter the mix of rights so that people are free to build upon our culture. Free to add or mix as they see fit. We could even make this change without necessarily requiring that the "free" use be free as in "free beer." Instead, the system could simply make it easy for follow-on creators to compensate artists without requiring an army of lawyers to come along: a rule, for example, that says "the royalty owed the copyright owner of an unregistered work for the derivative reuse of his work will be a flat 1 percent of net revenues, to be held in escrow for the copyright owner." Under this rule, the copyright owner could benefit from some royalty, but he would not have the benefit of a full property right (meaning the right to name his own price) unless he registers the work.
+
+Who could possibly object to this? And what reason would there be for objecting? We're talking about work that is not now being made; which if made, under this plan, would produce new income for artists. What reason would anyone have to oppose it?
+
+In February 2003, DreamWorks studios announced an agreement with Mike Myers, the comic genius of /{Saturday Night Live}/ and Austin Powers. According to the announcement, Myers and DreamWorks would work together to form a "unique filmmaking pact." Under the agreement, DreamWorks "will acquire the rights to existing motion picture hits and classics, write new storylines and - with the use of state-of-the-art digital technology - insert Myers and other actors into the film, thereby creating an entirely new piece of entertainment."
+
+The announcement called this "film sampling." As Myers explained, "Film Sampling is an exciting way to put an original spin on existing films and allow audiences to see old movies in a new light. Rap artists have been doing this for years with music and now we are able to take that same concept and apply it to film." Steven Spielberg is quoted as saying, "If anyone can create a way to bring old films to new audiences, it is Mike."
+
+Spielberg is right. Film sampling by Myers will be brilliant. But if you don't think about it, you might miss the truly astonishing point about this announcement. As the vast majority of our film heritage remains under copyright, the real meaning of the DreamWorks announcement is just this: It is Mike Myers and only Mike Myers who is free to sample. Any general freedom to build upon the film archive of our culture, a freedom in other contexts presumed for us all, is now a privilege reserved for the funny and famous - and presumably rich.
+
+This privilege becomes reserved for two sorts of reasons. The first continues the story of the last chapter: the vagueness of "fair use." Much of "sampling" should be considered "fair use." But few would rely upon so weak a doctrine to create. That leads to the second reason that the privilege is reserved for the few: The costs of negotiating the legal rights for the creative reuse of content are astronomically high. These costs mirror the costs with fair use: You either pay a lawyer to defend your fair use rights or pay a lawyer to track down permissions so you don't have to rely upon fair use rights. Either way, the creative process is a process of paying lawyers - again a privilege, or perhaps a curse, reserved for the few.
+
+1~ Chapter Nine: Collectors
+
+*{In April 1996,}* millions of "bots" - computer codes designed to "spider," or automatically search the Internet and copy content - began running across the Net. Page by page, these bots copied Internet-based information onto a small set of computers located in a basement in San Francisco's Presidio. Once the bots finished the whole of the Internet, they started again. Over and over again, once every two months, these bits of code took copies of the Internet and stored them.
+
+By October 2001, the bots had collected more than five years of copies. And at a small announcement in Berkeley, California, the archive that these copies created, the Internet Archive, was opened to the world. Using a technology called "the Way Back Machine," you could enter a Web page, and see all of its copies going back to 1996, as well as when those pages changed.
+
+This is the thing about the Internet that Orwell would have appreciated. In the dystopia described in /{1984}/, old newspapers were constantly updated to assure that the current view of the world, approved of by the government, was not contradicted by previous news reports. Thousands of workers constantly reedited the past, meaning there was no way ever to know whether the story you were reading today was the story that was printed on the date published on the paper.
+
+It's the same with the Internet. If you go to a Web page today, there's no way for you to know whether the content you are reading is the same as the content you read before. The page may seem the same, but the content could easily be different. The Internet is Orwell's library - constantly updated, without any reliable memory.
+
+Until the Way Back Machine, at least. With the Way Back Machine, and the Internet Archive underlying it, you can see what the Internet was. You have the power to see what you remember. More importantly, perhaps, you also have the power to find what you don't remember and what others might prefer you forget.~{ The temptations remain, however. Brewster Kahle reports that the White House changes its own press releases without notice. A May 13, 2003, press release stated, "Combat Operations in Iraq Have Ended." That was later changed, without notice, to "Major Combat Operations in Iraq Have Ended." E-mail from Brewster Kahle, 1 December 2003. }~
+
+We take it for granted that we can go back to see what we remember reading. Think about newspapers. If you wanted to study the reaction of your hometown newspaper to the race riots in Watts in 1965, or to Bull Connor's water cannon in 1963, you could go to your public library and look at the newspapers. Those papers probably exist on microfiche. If you're lucky, they exist in paper, too. Either way, you are free, using a library, to go back and remember - not just what it is convenient to remember, but remember something close to the truth.
+
+It is said that those who fail to remember history are doomed to repeat it. That's not quite correct. We /{all}/ forget history. The key is whether we have a way to go back to rediscover what we forget. More directly, the key is whether an objective past can keep us honest. Libraries help do that, by collecting content and keeping it, for schoolchildren, for researchers, for grandma. A free society presumes this knowledge.
+
+The Internet was an exception to this presumption. Until the Internet Archive, there was no way to go back. The Internet was the quintessentially transitory medium. And yet, as it becomes more important in forming and reforming society, it becomes more and more important to maintain in some historical form. It's just bizarre to think that we have scads of archives of newspapers from tiny towns around the world, yet there is but one copy of the Internet - the one kept by the Internet Archive.
+
+Brewster Kahle is the founder of the Internet Archive. He was a very successful Internet entrepreneur after he was a successful computer researcher. In the 1990s, Kahle decided he had had enough business success. It was time to become a different kind of success. So he launched a series of projects designed to archive human knowledge. The Internet Archive was just the first of the projects of this Andrew Carnegie of the Internet. By December of 2002, the archive had over 10 billion pages, and it was growing at about a billion pages a month.
+
+The Way Back Machine is the largest archive of human knowledge in human history. At the end of 2002, it held "two hundred and thirty terabytes of material" - and was "ten times larger than the Library of Congress." And this was just the first of the archives that Kahle set out to build. In addition to the Internet Archive, Kahle has been constructing the Television Archive. Television, it turns out, is even more ephemeral than the Internet. While much of twentieth-century culture was constructed through television, only a tiny proportion of that culture is available for anyone to see today. Three hours of news are recorded each evening by Vanderbilt University - thanks to a specific exemption in the copyright law. That content is indexed, and is available to scholars for a very low fee. "But other than that, [television] is almost unavailable," Kahle told me. "If you were Barbara Walters you could get access to [the archives], but if you are just a graduate student?" As Kahle put it,
+
+_1 Do you remember when Dan Quayle was interacting with Murphy Brown? Remember that back and forth surreal experience of a politician interacting with a fictional television character? If you were a graduate student wanting to study that, and you wanted to get those original back and forth exchanges between the two, the /{60 Minutes}/ episode that came out after it ... it would be almost impossible. ... Those materials are almost unfindable. ...
+
+Why is that? Why is it that the part of our culture that is recorded in newspapers remains perpetually accessible, while the part that is recorded on videotape is not? How is it that we've created a world where researchers trying to understand the effect of media on nineteenth-century America will have an easier time than researchers trying to understand the effect of media on twentieth-century America?
+
+In part, this is because of the law. Early in American copyright law, copyright owners were required to deposit copies of their work in libraries. These copies were intended both to facilitate the spread of knowledge and to assure that a copy of the work would be around once the copyright expired, so that others might access and copy the work.
+
+These rules applied to film as well. But in 1915, the Library of Congress made an exception for film. Film could be copyrighted so long as such deposits were made. But the filmmaker was then allowed to borrow back the deposits - for an unlimited time at no cost. In 1915 alone, there were more than 5,475 films deposited and "borrowed back." Thus, when the copyrights to films expire, there is no copy held by any library. The copy exists - if it exists at all - in the library archive of the film company.~{ Doug Herrick, "Toward a National Film Collection: Motion Pictures at the Library of Congress," /{Film Library Quarterly}/ 13 nos. 2-3 (1980): 5; Anthony Slide, /{Nitrate Won't Wait: A History of Film Preservation in the United States}/ (Jefferson, N.C.: McFarland & Co., 1992), 36. }~
+
+The same is generally true about television. Television broadcasts were originally not copyrighted - there was no way to capture the broadcasts, so there was no fear of "theft." But as technology enabled capturing, broadcasters relied increasingly upon the law. The law required they make a copy of each broadcast for the work to be "copyrighted." But those copies were simply kept by the broadcasters. No library had any right to them; the government didn't demand them. The content of this part of American culture is practically invisible to anyone who would look.
+
+Kahle was eager to correct this. Before September 11, 2001, he and his allies had started capturing television. They selected twenty stations from around the world and hit the Record button. After September 11, Kahle, working with dozens of others, selected twenty stations from around the world and, beginning October 11, 2001, made their coverage during the week of September 11 available free on-line. Anyone could see how news reports from around the world covered the events of that day.
+
+Kahle had the same idea with film. Working with Rick Prelinger, whose archive of film includes close to 45,000 "ephemeral films" (meaning films other than Hollywood movies, films that were never copyrighted), Kahle established the Movie Archive. Prelinger let Kahle digitize 1,300 films in this archive and post those films on the Internet to be downloaded for free. Prelinger's is a for-profit company. It sells copies of these films as stock footage. What he has discovered is that after he made a significant chunk available for free, his stock footage sales went up dramatically. People could easily find the material they wanted to use. Some downloaded that material and made films on their own. Others purchased copies to enable other films to be made. Either way, the archive enabled access to this important part of our culture. Want to see a copy of the "Duck and Cover" film that instructed children how to save themselves in the middle of nuclear attack? Go to archive.org, and you can download the film in a few minutes - for free.
+
+Here again, Kahle is providing access to a part of our culture that we otherwise could not get easily, if at all. It is yet another part of what defines the twentieth century that we have lost to history. The law doesn't require these copies to be kept by anyone, or to be deposited in an archive by anyone. Therefore, there is no simple way to find them.
+
+The key here is access, not price. Kahle wants to enable free access to this content, but he also wants to enable others to sell access to it. His aim is to ensure competition in access to this important part of our culture. Not during the commercial life of a bit of creative property, but during a second life that all creative property has - a noncommercial life.
+
+For here is an idea that we should more clearly recognize. Every bit of creative property goes through different "lives." In its first life, if the creator is lucky, the content is sold. In such cases the commercial market is successful for the creator. The vast majority of creative property doesn't enjoy such success, but some clearly does. For that content, commercial life is extremely important. Without this commercial market, there would be, many argue, much less creativity.
+
+After the commercial life of creative property has ended, our tradition has always supported a second life as well. A newspaper delivers the news every day to the doorsteps of America. The very next day, it is used to wrap fish or to fill boxes with fragile gifts or to build an archive of knowledge about our history. In this second life, the content can continue to inform even if that information is no longer sold.
+
+The same has always been true about books. A book goes out of print very quickly (the average today is after about a year~{ Dave Barns, "Fledgling Career in Antique Books: Woodstock Landlord, Bar Owner Starts a New Chapter by Adopting Business," /{Chicago Tribune,}/ 5 September 1997, at Metro Lake 1L. Of books published between 1927 and 1946, only 2.2 percent were in print in 2002. R. Anthony Reese, "The First Sale Doctrine in the Era of Digital Networks," /{Boston College Law Review}/ 44 (2003): 593 n. 51. }~). After it is out of print, it can be sold in used book stores without the copyright owner getting anything and stored in libraries, where many get to read the book, also for free. Used book stores and libraries are thus the second life of a book. That second life is extremely important to the spread and stability of culture.
+
+Yet increasingly, any assumption about a stable second life for creative property does not hold true with the most important components of popular culture in the twentieth and twenty-first centuries. For these - television, movies, music, radio, the Internet - there is no guarantee of a second life. For these sorts of culture, it is as if we've replaced libraries with Barnes & Noble superstores. With this culture, what's accessible is nothing but what a certain limited market demands. Beyond that, culture disappears.
+
+For most of the twentieth century, it was economics that made this so. It would have been insanely expensive to collect and make accessible all television and film and music: The cost of analog copies is extraordinarily high. So even though the law in principle would have restricted the ability of a Brewster Kahle to copy culture generally, the real restriction was economics. The market made it impossibly difficult to do anything about this ephemeral culture; the law had little practical effect.
+
+Perhaps the single most important feature of the digital revolution is that for the first time since the Library of Alexandria, it is feasible to imagine constructing archives that hold all culture produced or distributed publicly. Technology makes it possible to imagine an archive of all books published, and increasingly makes it possible to imagine an archive of all moving images and sound.
+
+The scale of this potential archive is something we've never imagined before. The Brewster Kahles of our history have dreamed about it; but we are for the first time at a point where that dream is possible. As Kahle describes,
+
+_1 It looks like there's about two to three million recordings of music. Ever. There are about a hundred thousand theatrical releases of movies, ... and about one to two million movies [distributed] during the twentieth century. There are about twenty-six million different titles of books. All of these would fit on computers that would fit in this room and be able to be afforded by a small company. So we're at a turning point in our history. Universal access is the goal. And the opportunity of leading a different life, based on this, is ... thrilling. It could be one of the things humankind would be most proud of. Up there with the Library of Alexandria, putting a man on the moon, and the invention of the printing press.
+
+Kahle is not the only librarian. The Internet Archive is not the only archive. But Kahle and the Internet Archive suggest what the future of libraries or archives could be. /{When}/ the commercial life of creative property ends, I don't know. But it does. And whenever it does, Kahle and his archive hint at a world where this knowledge, and culture, remains perpetually available. Some will draw upon it to understand it; some to criticize it. Some will use it, as Walt Disney did, to re-create the past for the future. These technologies promise something that had become unimaginable for much of our past - a future /{for}/ our past. The technology of digital arts could make the dream of the Library of Alexandria real again.
+
+Technologists have thus removed the economic costs of building such an archive. But lawyers' costs remain. For as much as we might like to call these "archives," as warm as the idea of a "library" might seem, the "content" that is collected in these digital spaces is also some-one's "property." And the law of property restricts the freedoms that Kahle and others would exercise.
+
+1~ Chapter Ten: "Property"
+
+*{Jack Valenti}* has been the president of the Motion Picture Association of America since 1966. He first came to Washington, D.C., with Lyndon Johnson's administration - literally. The famous picture of Johnson's swearing-in on Air Force One after the assassination of President Kennedy has Valenti in the background. In his almost forty years of running the MPAA, Valenti has established himself as perhaps the most prominent and effective lobbyist in Washington.
+
+The MPAA is the American branch of the international Motion Picture Association. It was formed in 1922 as a trade association whose goal was to defend American movies against increasing domestic criticism. The organization now represents not only filmmakers but producers and distributors of entertainment for television, video, and cable. Its board is made up of the chairmen and presidents of the seven major producers and distributors of motion picture and television programs in the United States: Walt Disney, Sony Pictures Entertainment, MGM, Paramount Pictures, Twentieth Century Fox, Universal Studios, and Warner Brothers.
+
+Valenti is only the third president of the MPAA. No president before him has had as much influence over that organization, or over Washington. As a Texan, Valenti has mastered the single most important political skill of a Southerner - the ability to appear simple and slow while hiding a lightning-fast intellect. To this day, Valenti plays the simple, humble man. But this Harvard MBA, and author of four books, who finished high school at the age of fifteen and flew more than fifty combat missions in World War II, is no Mr. Smith. When Valenti went to Washington, he mastered the city in a quintessentially Washingtonian way.
+
+In defending artistic liberty and the freedom of speech that our culture depends upon, the MPAA has done important good. In crafting the MPAA rating system, it has probably avoided a great deal of speech-regulating harm. But there is an aspect to the organization's mission that is both the most radical and the most important. This is the organization's effort, epitomized in Valenti's every act, to redefine the meaning of "creative property."
+
+In 1982, Valenti's testimony to Congress captured the strategy perfectly:
+
+_1 No matter the lengthy arguments made, no matter the charges and the counter-charges, no matter the tumult and the shouting, reasonable men and women will keep returning to the fundamental issue, the central theme which animates this entire debate: /{Creative property owners must be accorded the same rights and protection resident in all other property owners in the nation}/. That is the issue. That is the question. And that is the rostrum on which this entire hearing and the debates to follow must rest.~{ Home Recording of Copyrighted Works: Hearings on H.R. 4783, H.R. 4794, H.R. 4808, H.R. 5250, H.R. 5488, and H.R. 5705 Before the Subcommittee on Courts, Civil Liberties, and the Administration of Justice of the Committee on the Judiciary of the House of Representatives, 97th Cong., 2nd sess. (1982): 65 (testimony of Jack Valenti). }~
+
+The strategy of this rhetoric, like the strategy of most of Valenti's rhetoric, is brilliant and simple and brilliant because simple. The "central theme" to which "reasonable men and women" will return is this: "Creative property owners must be accorded the same rights and protections resident in all other property owners in the nation." There are no second-class citizens, Valenti might have continued. There should be no second-class property owners.
+
+This claim has an obvious and powerful intuitive pull. It is stated with such clarity as to make the idea as obvious as the notion that we use elections to pick presidents. But in fact, there is no more extreme a claim made by /{anyone}/ who is serious in this debate than this claim of Valenti's. Jack Valenti, however sweet and however brilliant, is perhaps the nation's foremost extremist when it comes to the nature and scope of "creative property." His views have /{no}/ reasonable connection to our actual legal tradition, even if the subtle pull of his Texan charm has slowly redefined that tradition, at least in Washington.
+
+While "creative property" is certainly "property" in a nerdy and precise sense that lawyers are trained to understand,~{ Lawyers speak of "property" not as an absolute thing, but as a bundle of rights that are sometimes associated with a particular object. Thus, my "property right" to my car gives me the right to exclusive use, but not the right to drive at 150 miles an hour. For the best effort to connect the ordinary meaning of "property" to "lawyer talk," see Bruce Ackerman, /{Private Property and the Constitution}/ (New Haven: Yale University Press, 1977), 26-27. }~ it has never been the case, nor should it be, that "creative property owners" have been "accorded the same rights and protection resident in all other property owners." Indeed, if creative property owners were given the same rights as all other property owners, that would effect a radical, and radically undesirable, change in our tradition.
+
+Valenti knows this. But he speaks for an industry that cares squat for our tradition and the values it represents. He speaks for an industry that is instead fighting to restore the tradition that the British overturned in 1710. In the world that Valenti's changes would create, a powerful few would exercise powerful control over how our creative culture would develop.
+
+I have two purposes in this chapter. The first is to convince you that, historically, Valenti's claim is absolutely wrong. The second is to convince you that it would be terribly wrong for us to reject our history. We have always treated rights in creative property differently from the rights resident in all other property owners. They have never been the same. And they should never be the same, because, however counterintuitive this may seem, to make them the same would be to fundamentally weaken the opportunity for new creators to create. Creativity depends upon the owners of creativity having less than perfect control.
+
+Organizations such as the MPAA, whose board includes the most powerful of the old guard, have little interest, their rhetoric notwithstanding, in assuring that the new can displace them. No organization does. No person does. (Ask me about tenure, for example.) But what's good for the MPAA is not necessarily good for America. A society that defends the ideals of free culture must preserve precisely the opportunity for new creativity to threaten the old.
+
+To get just a hint that there is something fundamentally wrong in Valenti's argument, we need look no further than the United States Constitution itself.
+
+The framers of our Constitution loved "property." Indeed, so strongly did they love property that they built into the Constitution an important requirement. If the government takes your property - if it condemns your house, or acquires a slice of land from your farm - it is required, under the Fifth Amendment's "Takings Clause," to pay you "just compensation" for that taking. The Constitution thus guarantees that property is, in a certain sense, sacred. It cannot /{ever}/ be taken from the property owner unless the government pays for the privilege.
+
+Yet the very same Constitution speaks very differently about what Valenti calls "creative property." In the clause granting Congress the power to create "creative property," the Constitution /{requires}/ that after a "limited time," Congress take back the rights that it has granted and set the "creative property" free to the public domain. Yet when Congress does this, when the expiration of a copyright term "takes" your copyright and turns it over to the public domain, Congress does not have any obligation to pay "just compensation" for this "taking." Instead, the same Constitution that requires compensation for your land requires that you lose your "creative property" right without any compensation at all.
+
+The Constitution thus on its face states that these two forms of property are not to be accorded the same rights. They are plainly to be treated differently. Valenti is therefore not just asking for a change in our tradition when he argues that creative-property owners should be accorded the same rights as every other property-right owner. He is effectively arguing for a change in our Constitution itself.
+
+Arguing for a change in our Constitution is not necessarily wrong. There was much in our original Constitution that was plainly wrong. The Constitution of 1789 entrenched slavery; it left senators to be appointed rather than elected; it made it possible for the electoral college to produce a tie between the president and his own vice president (as it did in 1800). The framers were no doubt extraordinary, but I would be the first to admit that they made big mistakes. We have since rejected some of those mistakes; no doubt there could be others that we should reject as well. So my argument is not simply that because Jefferson did it, we should, too.
+
+Instead, my argument is that because Jefferson did it, we should at least try to understand /{why}/. Why did the framers, fanatical property types that they were, reject the claim that creative property be given the same rights as all other property? Why did they require that for creative property there must be a public domain?
+
+To answer this question, we need to get some perspective on the history of these "creative property" rights, and the control that they enabled. Once we see clearly how differently these rights have been defined, we will be in a better position to ask the question that should be at the core of this war: Not /{whether}/ creative property should be protected, but how. Not /{whether}/ we will enforce the rights the law gives to creative-property owners, but what the particular mix of rights ought to be. Not /{whether}/ artists should be paid, but whether institutions designed to assure that artists get paid need also control how culture develops.
+
+To answer these questions, we need a more general way to talk about how property is protected. More precisely, we need a more general way than the narrow language of the law allows. In /{Code and Other Laws of Cyberspace}/, I used a simple model to capture this more general perspective. For any particular right or regulation, this model asks how four different modalities of regulation interact to support or weaken the right or regulation. I represented it with this diagram:
+
+{freeculture01.png 350x350 }http://www.free-culture.cc/
+
+At the center of this picture is a regulated dot: the individual or group that is the target of regulation, or the holder of a right. (In each case throughout, we can describe this either as regulation or as a right. For simplicity's sake, I will speak only of regulations.) The ovals represent four ways in which the individual or group might be regulated - either constrained or, alternatively, enabled. Law is the most obvious constraint (to lawyers, at least). It constrains by threatening punishments after the fact if the rules set in advance are violated. So if, for example, you willfully infringe Madonna's copyright by copying a song from her latest CD and posting it on the Web, you can be punished with a $150,000 fine. The fine is an ex post punishment for violating an ex ante rule. It is imposed by the state.
+
+Norms are a different kind of constraint. They, too, punish an individual for violating a rule. But the punishment of a norm is imposed by a community, not (or not only) by the state. There may be no law against spitting, but that doesn't mean you won't be punished if you spit on the ground while standing in line at a movie. The punishment might not be harsh, though depending upon the community, it could easily be more harsh than many of the punishments imposed by the state. The mark of the difference is not the severity of the rule, but the source of the enforcement.
+
+The market is a third type of constraint. Its constraint is effected through conditions: You can do X if you pay Y; you'll be paid M if you do N. These constraints are obviously not independent of law or norms - it is property law that defines what must be bought if it is to be taken legally; it is norms that say what is appropriately sold. But given a set of norms, and a background of property and contract law, the market imposes a simultaneous constraint upon how an individual or group might behave.
+
+Finally, and for the moment, perhaps, most mysteriously, "architecture" - the physical world as one finds it - is a constraint on behavior. A fallen bridge might constrain your ability to get across a river. Railroad tracks might constrain the ability of a community to integrate its social life. As with the market, architecture does not effect its constraint through ex post punishments. Instead, also as with the market, architecture effects its constraint through simultaneous conditions. These conditions are imposed not by courts enforcing contracts, or by police punishing theft, but by nature, by "architecture." If a 500-pound boulder blocks your way, it is the law of gravity that enforces this constraint. If a $500 airplane ticket stands between you and a flight to New York, it is the market that enforces this constraint.
+
+So the first point about these four modalities of regulation is obvious: They interact. Restrictions imposed by one might be reinforced by another. Or restrictions imposed by one might be undermined by another.
+
+The second point follows directly: If we want to understand the effective freedom that anyone has at a given moment to do any particular thing, we have to consider how these four modalities interact. Whether or not there are other constraints (there may well be; my claim is not about comprehensiveness), these four are among the most significant, and any regulator (whether controlling or freeing) must consider how these four in particular interact.
+
+So, for example, consider the "freedom" to drive a car at a high speed. That freedom is in part restricted by laws: speed limits that say how fast you can drive in particular places at particular times. It is in part restricted by architecture: speed bumps, for example, slow most rational drivers; governors in buses, as another example, set the maximum rate at which the driver can drive. The freedom is in part restricted by the market: Fuel efficiency drops as speed increases, thus the price of gasoline indirectly constrains speed. And finally, the norms of a community may or may not constrain the freedom to speed. Drive at 50 mph by a school in your own neighborhood and you're likely to be punished by the neighbors. The same norm wouldn't be as effective in a different town, or at night.
+
+The final point about this simple model should also be fairly clear: While these four modalities are analytically independent, law has a special role in affecting the other three.~{ By describing the way law affects the other three modalities, I don't mean to suggest that the other three don't affect law. Obviously, they do. Law's only distinction is that it alone speaks as if it has a right self-consciously to change the other three. The right of the other three is more timidly expressed. See Lawrence Lessig, /{Code: And Other Laws of Cyberspace}/ (New York: Basic Books, 1999): 90-95; Lawrence Lessig, "The New Chicago School," /{Journal of Legal Studies,}/ June 1998. }~ The law, in other words, sometimes operates to increase or decrease the constraint of a particular modality. Thus, the law might be used to increase taxes on gasoline, so as to increase the incentives to drive more slowly. The law might be used to mandate more speed bumps, so as to increase the difficulty of driving rapidly. The law might be used to fund ads that stigmatize reckless driving. Or the law might be used to require that other laws be more strict - a federal requirement that states decrease the speed limit, for example - so as to decrease the attractiveness of fast driving.
+
+{freeculture02.png 540x350 }http://www.free-culture.cc/
+
+These constraints can thus change, and they can be changed. To understand the effective protection of liberty or protection of property at any particular moment, we must track these changes over time. A restriction imposed by one modality might be erased by another. A freedom enabled by one modality might be displaced by another.~{ Some people object to this way of talking about "liberty." They object because their focus when considering the constraints that exist at any particular moment are constraints imposed exclusively by the government. For instance, if a storm destroys a bridge, these people think it is meaningless to say that one's liberty has been restrained. A bridge has washed out, and it's harder to get from one place to another. To talk about this as a loss of freedom, they say, is to confuse the stuff of politics with the vagaries of ordinary life. I don't mean to deny the value in this narrower view, which depends upon the context of the inquiry. I do, however, mean to argue against any insistence that this narrower view is the only proper view of liberty. As I argued in /{Code,}/ we come from a long tradition of political thought with a broader focus than the narrow question of what the government did when. John Stuart Mill defended freedom of speech, for example, from the tyranny of narrow minds, not from the fear of government prosecution; John Stuart Mill, /{On Liberty}/ (Indiana: Hackett Publishing Co., 1978), 19. John R. Commons famously defended the economic freedom of labor from constraints imposed by the market; John R. Commons, "The Right to Work," in Malcolm Rutherford and Warren J. Samuels, eds., /{John R. Commons: Selected Essays}/ (London: Routledge, 1997), 62. The Americans with Disabilities Act increases the liberty of people with physical disabilities by changing the architecture of certain public places, thereby making access to those places easier; 42 /{United States Code}/, section 12101 (2000). 
+Each of these interventions to change existing conditions changes the liberty of a particular group. The effect of those interventions should be accounted for in order to understand the effective liberty that each of these groups might face. }~
+
+2~ Why Hollywood Is Right
+
+The most obvious point that this model reveals is just why, or just how, Hollywood is right. The copyright warriors have rallied Congress and the courts to defend copyright. This model helps us see why that rallying makes sense.
+
+Let's say this is the picture of copyright's regulation before the Internet:
+
+{freeculture01.png 350x350 }http://www.free-culture.cc/
+
+There is balance between law, norms, market, and architecture. The law limits the ability to copy and share content, by imposing penalties on those who copy and share content. Those penalties are reinforced by technologies that make it hard to copy and share content (architecture) and expensive to copy and share content (market). Finally, those penalties are mitigated by norms we all recognize - kids, for example, taping other kids' records. These uses of copyrighted material may well be infringement, but the norms of our society (before the Internet, at least) had no problem with this form of infringement.
+
+Enter the Internet, or, more precisely, technologies such as MP3s and p2p sharing. Now the constraint of architecture changes dramatically, as does the constraint of the market. And as both the market and architecture relax the regulation of copyright, norms pile on. The happy balance (for the warriors, at least) of life before the Internet becomes an effective state of anarchy after the Internet.
+
+Thus the sense of, and justification for, the warriors' response. Technology has changed, the warriors say, and the effect of this change, when ramified through the market and norms, is that a balance of protection for the copyright owners' rights has been lost. This is Iraq after the fall of Saddam, but this time no government is justifying the looting that results.
+
+{freeculture03.png 350x350 }http://www.free-culture.cc/
+
+Neither this analysis nor the conclusions that follow are new to the warriors. Indeed, in a "White Paper" prepared by the Commerce Department (one heavily influenced by the copyright warriors) in 1995, this mix of regulatory modalities had already been identified and the strategy to respond already mapped. In response to the changes the Internet had effected, the White Paper argued (1) Congress should strengthen intellectual property law, (2) businesses should adopt innovative marketing techniques, (3) technologists should push to develop code to protect copyrighted material, and (4) educators should educate kids to better protect copyright.
+
+This mixed strategy is just what copyright needed - if it was to preserve the particular balance that existed before the change induced by the Internet. And it's just what we should expect the content industry to push for. It is as American as apple pie to consider the happy life you have as an entitlement, and to look to the law to protect it if something comes along to change that happy life. Homeowners living in a flood plain have no hesitation appealing to the government to rebuild (and rebuild again) when a flood (architecture) wipes away their property (law). Farmers have no hesitation appealing to the government to bail them out when a virus (architecture) devastates their crop. Unions have no hesitation appealing to the government to bail them out when imports (market) wipe out the U.S. steel industry.
+
+Thus, there's nothing wrong or surprising in the content industry's campaign to protect itself from the harmful consequences of a technological innovation. And I would be the last person to argue that the changing technology of the Internet has not had a profound effect on the content industry's way of doing business, or as John Seely Brown describes it, its "architecture of revenue."
+
+But just because a particular interest asks for government support, it doesn't follow that support should be granted. And just because technology has weakened a particular way of doing business, it doesn't follow that the government should intervene to support that old way of doing business. Kodak, for example, has lost perhaps as much as 20 percent of its traditional film market to the emerging technologies of digital cameras.~{ See Geoffrey Smith, "Film vs. Digital: Can Kodak Build a Bridge?" BusinessWeek online, 2 August 1999, available at link #23. For a more recent analysis of Kodak's place in the market, see Chana R. Schoenberger, "Can Kodak Make Up for Lost Moments?" Forbes.com, 6 October 2003, available at link #24. }~ Does anyone believe the government should ban digital cameras just to support Kodak? Highways have weakened the freight business for railroads. Does anyone think we should ban trucks from roads /{for the purpose of}/ protecting the railroads? Closer to the subject of this book, remote channel changers have weakened the "stickiness" of television advertising (if a boring commercial comes on the TV, the remote makes it easy to surf), and it may well be that this change has weakened the television advertising market. But does anyone believe we should regulate remotes to reinforce commercial television? (Maybe by limiting them to function only once a second, or to switch to only ten channels within an hour?)
+
+The obvious answer to these obviously rhetorical questions is no. In a free society, with a free market, supported by free enterprise and free trade, the government's role is not to support one way of doing business against others. Its role is not to pick winners and protect them against loss. If the government did this generally, then we would never have any progress. As Microsoft chairman Bill Gates wrote in 1991, in a memo criticizing software patents, "established companies have an interest in excluding future competitors."~{ Fred Warshofsky, /{The Patent Wars}/ (New York: Wiley, 1994), 170-71. }~ And relative to a startup, established companies also have the means. (Think RCA and FM radio.) A world in which competitors with new ideas must fight not only the market but also the government is a world in which competitors with new ideas will not succeed. It is a world of stasis and increasingly concentrated stagnation. It is the Soviet Union under Brezhnev.
+
+Thus, while it is understandable for industries threatened with new technologies that change the way they do business to look to the government for protection, it is the special duty of policy makers to guarantee that that protection not become a deterrent to progress. It is the duty of policy makers, in other words, to assure that the changes they create, in response to the request of those hurt by changing technology, are changes that preserve the incentives and opportunities for innovation and change.
+
+In the context of laws regulating speech - which include, obviously, copyright law - that duty is even stronger. When the industry complaining about changing technologies is asking Congress to respond in a way that burdens speech and creativity, policy makers should be especially wary of the request. It is always a bad deal for the government to get into the business of regulating speech markets. The risks and dangers of that game are precisely why our framers created the First Amendment to our Constitution: "Congress shall make no law ... abridging the freedom of speech." So when Congress is being asked to pass laws that would "abridge" the freedom of speech, it should ask - carefully - whether such regulation is justified.
+
+My argument just now, however, has nothing to do with whether the changes that are being pushed by the copyright warriors are "justified." My argument is about their effect. For before we get to the question of justification, a hard question that depends a great deal upon your values, we should first ask whether we understand the effect of the changes the content industry wants.
+
+Here's the metaphor that will capture the argument to follow.
+
+In 1873, the chemical DDT was first synthesized. In 1948, Swiss chemist Paul Hermann Müller won the Nobel Prize for his work demonstrating the insecticidal properties of DDT. By the 1950s, the insecticide was widely used around the world to kill disease-carrying pests. It was also used to increase farm production.
+
+No one doubts that killing disease-carrying pests or increasing crop production is a good thing. No one doubts that the work of Müller was important and valuable and probably saved lives, possibly millions.
+
+But in 1962, Rachel Carson published /{Silent Spring}/, which argued that DDT, whatever its primary benefits, was also having unintended environmental consequences. Birds were losing the ability to reproduce. Whole chains of the ecology were being destroyed.
+
+No one set out to destroy the environment. Paul Müller certainly did not aim to harm any birds. But the effort to solve one set of problems produced another set which, in the view of some, was far worse than the problems that were originally attacked. Or more accurately, the problems DDT caused were worse than the problems it solved, at least when considering the other, more environmentally friendly ways to solve the problems that DDT was meant to solve.
+
+It is to this image precisely that Duke University law professor James Boyle appeals when he argues that we need an "environmentalism" for culture.~{ See, for example, James Boyle, "A Politics of Intellectual Property: Environmentalism for the Net?" /{Duke Law Journal}/ 47 (1997): 87. }~ His point, and the point I want to develop in the balance of this chapter, is not that the aims of copyright are flawed. Or that authors should not be paid for their work. Or that music should be given away "for free." The point is that some of the ways in which we might protect authors will have unintended consequences for the cultural environment, much like DDT had for the natural environment. And just as criticism of DDT is not an endorsement of malaria or an attack on farmers, so, too, is criticism of one particular set of regulations protecting copyright not an endorsement of anarchy or an attack on authors. It is an environment of creativity that we seek, and we should be aware of our actions' effects on the environment.
+
+My argument, in the balance of this chapter, tries to map exactly this effect. No doubt the technology of the Internet has had a dramatic effect on the ability of copyright owners to protect their content. But there should also be little doubt that when you add together the changes in copyright law over time, plus the change in technology that the Internet is undergoing just now, the net effect of these changes will not be only that copyrighted work is effectively protected. Also, and generally missed, the net effect of this massive increase in protection will be devastating to the environment for creativity.
+
+In a line: To kill a gnat, we are spraying DDT with consequences for free culture far more devastating than the loss of this gnat.
+
+2~ Beginnings
+
+America copied English copyright law. Actually, we copied and improved English copyright law. Our Constitution makes the purpose of "creative property" rights clear; its express limitations reinforce the English aim to avoid overly powerful publishers.
+
+The power to establish "creative property" rights is granted to Congress in a way that, for our Constitution, at least, is very odd. Article I, section 8, clause 8 of our Constitution states that:
+
_1 "Congress has the power to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
+
+We can call this the "Progress Clause," for notice what this clause does not say. It does not say Congress has the power to grant "creative property rights." It says that Congress has the power /{to promote progress}/. The grant of power is its purpose, and its purpose is a public one, not the purpose of enriching publishers, nor even primarily the purpose of rewarding authors.
+
+The Progress Clause expressly limits the term of copyrights. As we saw in chapter 6, the English limited the term of copyright so as to assure that a few would not exercise disproportionate control over culture by exercising disproportionate control over publishing. We can assume the framers followed the English for a similar purpose. Indeed, unlike the English, the framers reinforced that objective, by requiring that copyrights extend "to Authors" only.
+
+The design of the Progress Clause reflects something about the Constitution's design in general. To avoid a problem, the framers built structure. To prevent the concentrated power of publishers, they built a structure that kept copyrights away from publishers and kept them short. To prevent the concentrated power of a church, they banned the federal government from establishing a church. To prevent concentrating power in the federal government, they built structures to reinforce the power of the states - including the Senate, whose members were at the time selected by the states, and an electoral college, also selected by the states, to select the president. In each case, a /{structure}/ built checks and balances into the constitutional frame, structured to prevent otherwise inevitable concentrations of power.
+
+I doubt the framers would recognize the regulation we call "copyright" today. The scope of that regulation is far beyond anything they ever considered. To begin to understand what they did, we need to put our "copyright" in context: We need to see how it has changed in the 210 years since they first struck its design.
+
+Some of these changes come from the law: some in light of changes in technology, and some in light of changes in technology given a particular concentration of market power. In terms of our model, we started here:
+
+{freeculture01.png 350x350 }http://www.free-culture.cc/
+
+%% image 05 same as 01, renumber?
+
+We will end here:
+
+{freeculture04.png 310x350 }http://www.free-culture.cc/
+
+Let me explain how.
+
+2~ Law: Duration
+
+When the first Congress enacted laws to protect creative property, it faced the same uncertainty about the status of creative property that the English had confronted in 1774. Many states had passed laws protecting creative property, and some believed that these laws simply supplemented common law rights that already protected creative authorship.~{ William W. Crosskey, /{Politics and the Constitution in the History of the United States}/ (London: Cambridge University Press, 1953), vol. 1, 485-86: "extinguish[ing], by plain implication of 'the supreme Law of the Land,' /{the perpetual rights which authors had, or were supposed by some to have, under the Common Law}/" (emphasis added). }~ This meant that there was no guaranteed public domain in the United States in 1790. If copyrights were protected by the common law, then there was no simple way to know whether a work published in the United States was controlled or free. Just as in England, this lingering uncertainty would make it hard for publishers to rely upon a public domain to reprint and distribute works.
+
+That uncertainty ended after Congress passed legislation granting copyrights. Because federal law overrides any contrary state law, federal protections for copyrighted works displaced any state law protections. Just as in England the Statute of Anne eventually meant that the copyrights for all English works expired, a federal statute meant that any state copyrights expired as well.
+
+In 1790, Congress enacted the first copyright law. It created a federal copyright and secured that copyright for fourteen years. If the author was alive at the end of that fourteen years, then he could opt to renew the copyright for another fourteen years. If he did not renew the copyright, his work passed into the public domain.
+
+While there were many works created in the United States in the first ten years of the Republic, only 5 percent of the works were actually registered under the federal copyright regime. Of all the work created in the United States both before 1790 and from 1790 through 1800, 95 percent immediately passed into the public domain; the balance would pass into the public domain within twenty-eight years at most, and more likely within fourteen years.~{ Although 13,000 titles were published in the United States from 1790 to 1799, only 556 copyright registrations were filed; John Tebbel, /{A History of Book Publishing in the United States,}/ vol. 1, /{The Creation of an Industry, 1630-1865}/ (New York: Bowker, 1972), 141. Of the 21,000 imprints recorded before 1790, only twelve were copyrighted under the 1790 act; William J. Maher, /{Copyright Term, Retrospective Extension and the Copyright Law of 1790 in Historical Context,}/ 7-10 (2002), available at link #25. Thus, the overwhelming majority of works fell immediately into the public domain. Even those works that were copyrighted fell into the public domain quickly, because the term of copyright was short. The initial term of copyright was fourteen years, with the option of renewal for an additional fourteen years. Copyright Act of May 31, 1790, §1, 1 Stat. 124. }~
+
+This system of renewal was a crucial part of the American system of copyright. It assured that the maximum terms of copyright would be granted only for works where they were wanted. After the initial term of fourteen years, if it wasn't worth it to an author to renew his copyright, then it wasn't worth it to society to insist on the copyright, either.
+
+Fourteen years may not seem long to us, but for the vast majority of copyright owners at that time, it was long enough: Only a small minority of them renewed their copyright after fourteen years; the balance allowed their work to pass into the public domain.~{ Few copyright holders ever chose to renew their copyrights. For instance, of the 25,006 copyrights registered in 1883, only 894 were renewed in 1910. For a year-by-year analysis of copyright renewal rates, see Barbara A. Ringer, "Study No. 31: Renewal of Copyright," /{Studies on Copyright,}/ vol. 1 (New York: Practicing Law Institute, 1963), 618. For a more recent and comprehensive analysis, see William M. Landes and Richard A. Posner, "Indefinitely Renewable Copyright," /{University of Chicago Law Review}/ 70 (2003): 471, 498-501, and accompanying figures. }~
+
+Even today, this structure would make sense. Most creative work has an actual commercial life of just a couple of years. Most books fall out of print after one year.~{ See Ringer, ch. 9, n. 2. }~ When that happens, the used books are traded free of copyright regulation. Thus the books are no longer /{effectively}/ controlled by copyright. The only practical commercial use of the books at that time is to sell the books as used books; that use - because it does not involve publication - is effectively free.
+
+In the first hundred years of the Republic, the term of copyright was changed once. In 1831, the term was increased from a maximum of 28 years to a maximum of 42 by increasing the initial term of copyright from 14 years to 28 years. In the next fifty years of the Republic, the term increased once again. In 1909, Congress extended the renewal term of 14 years to 28 years, setting a maximum term of 56 years.
+
+Then, beginning in 1962, Congress started a practice that has defined copyright law since. Eleven times in the last forty years, Congress has extended the terms of existing copyrights; twice in those forty years, Congress extended the term of future copyrights. Initially, the extensions of existing copyrights were short, a mere one to two years. In 1976, Congress extended all existing copyrights by nineteen years. And in 1998, in the Sonny Bono Copyright Term Extension Act, Congress extended the term of existing and future copyrights by twenty years.
+
+The effect of these extensions is simply to toll, or delay, the passing of works into the public domain. This latest extension means that the public domain will have been tolled for thirty-nine out of fifty-five years, or 70 percent of the time since 1962. Thus, in the twenty years after the Sonny Bono Act, while one million patents will pass into the public domain, zero copyrights will pass into the public domain by virtue of the expiration of a copyright term.
+
+The effect of these extensions has been exacerbated by another, little-noticed change in the copyright law. Remember I said that the framers established a two-part copyright regime, requiring a copyright owner to renew his copyright after an initial term. The requirement of renewal meant that works that no longer needed copyright protection would pass more quickly into the public domain. The works remaining under protection would be those that had some continuing commercial value.
+
+The United States abandoned this sensible system in 1976. For all works created after 1978, there was only one copyright term - the maximum term. For "natural" authors, that term was life plus fifty years. For corporations, the term was seventy-five years. Then, in 1992, Congress abandoned the renewal requirement for all works created before 1978. All works still under copyright would be accorded the maximum term then available. After the Sonny Bono Act, that term was ninety-five years.
+
+This change meant that American law no longer had an automatic way to assure that works that were no longer exploited passed into the public domain. And indeed, after these changes, it is unclear whether it is even possible to put works into the public domain. The public domain is orphaned by these changes in copyright law. Despite the requirement that terms be "limited," we have no evidence that anything will limit them.
+
+The effect of these changes on the average duration of copyright is dramatic. In 1973, more than 85 percent of copyright owners failed to renew their copyright. That meant that the average term of copyright in 1973 was just 32.2 years. Because of the elimination of the renewal requirement, the average term of copyright is now the maximum term. In thirty years, then, the average term has tripled, from 32.2 years to 95 years.~{ These statistics are understated. Between the years 1910 and 1962 (the first year the renewal term was extended), the average term was never more than thirty-two years, and averaged thirty years. See Landes and Posner, "Indefinitely Renewable Copyright," loc. cit. }~
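The 32.2-year figure is consistent with simple expected-value arithmetic. A minimal sketch of that calculation (the 28- and 56-year terms and the roughly 15 percent renewal rate come from the surrounding text; treating the average as a two-point weighted mean is an assumption about how the figure was derived, not the book's stated method):

```python
# Rough check of the 32.2-year average term under the pre-1976 regime:
# an initial 28-year term, renewable once for a 56-year maximum, with
# more than 85 percent of owners failing to renew.
initial_term = 28
maximum_term = 56
renewal_rate = 0.15  # assumed from "more than 85 percent failed to renew"

# Weighted mean: non-renewers get 28 years, renewers get 56 years.
average_term = (1 - renewal_rate) * initial_term + renewal_rate * maximum_term
print(round(average_term, 1))  # 32.2
```

On these assumptions the weighted mean reproduces the 32.2-year average exactly, which suggests the figure reflects a 15 percent renewal rate.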
+
+2~ Law: Scope
+
+The "scope" of a copyright is the range of rights granted by the law. The scope of American copyright has changed dramatically. Those changes are not necessarily bad. But we should understand the extent of the changes if we're to keep this debate in context.
+
+In 1790, that scope was very narrow. Copyright covered only "maps, charts, and books." That means it didn't cover, for example, music or architecture. More significantly, the right granted by a copyright gave the author the exclusive right to "publish" copyrighted works. That means someone else violated the copyright only if he republished the work without the copyright owner's permission. Finally, the right granted by a copyright was an exclusive right to that particular book. The right did not extend to what lawyers call "derivative works." It would not, therefore, interfere with the right of someone other than the author to translate a copyrighted book, or to adapt the story to a different form (such as a drama based on a published book).
+
+This, too, has changed dramatically. While the contours of copyright today are extremely hard to describe simply, in general terms, the right covers practically any creative work that is reduced to a tangible form. It covers music as well as architecture, drama as well as computer programs. It gives the copyright owner of that creative work not only the exclusive right to "publish" the work, but also the exclusive right of control over any "copies" of that work. And most significant for our purposes here, the right gives the copyright owner control over not only his or her particular work, but also any "derivative work" that might grow out of the original work. In this way, the right covers more creative work, protects the creative work more broadly, and protects works that are based in a significant way on the initial creative work.
+
+At the same time that the scope of copyright has expanded, procedural limitations on the right have been relaxed. I've already described the complete removal of the renewal requirement in 1992. In addition to the renewal requirement, for most of the history of American copyright law, there was a requirement that a work be registered before it could receive the protection of a copyright. There was also a requirement that any copyrighted work be marked either with that famous © or the word /{copyright}/. And for most of the history of American copyright law, there was a requirement that works be deposited with the government before a copyright could be secured.
+
+The reason for the registration requirement was the sensible understanding that for most works, no copyright was required. Again, in the first ten years of the Republic, 95 percent of works eligible for copyright were never copyrighted. Thus, the rule reflected the norm: Most works apparently didn't need copyright, so registration narrowed the regulation of the law to the few that did. The same reasoning justified the requirement that a work be marked as copyrighted - that way it was easy to know whether a copyright was being claimed. The requirement that works be deposited was to assure that after the copyright expired, there would be a copy of the work somewhere so that it could be copied by others without locating the original author.
+
+All of these "formalities" were abolished in the American system when we decided to follow European copyright law. There is no requirement that you register a work to get a copyright; the copyright now is automatic; the copyright exists whether or not you mark your work with a ©; and the copyright exists whether or not you actually make a copy available for others to copy.
+
+Consider a practical example to understand the scope of these differences.
+
+If, in 1790, you wrote a book and you were one of the 5 percent who actually copyrighted that book, then the copyright law protected you against another publisher's taking your book and republishing it without your permission. The aim of the act was to regulate publishers so as to prevent that kind of unfair competition. In 1790, there were 174 publishers in the United States.~{ See Thomas Bender and David Sampliner, "Poets, Pirates, and the Creation of American Literature," 29 /{New York University Journal of International Law and Politics}/ 255 (1997), and James Gilreath, ed., /{Federal Copyright Records, 1790-1800}/ (U.S. G.P.O., 1987). }~ The Copyright Act was thus a tiny regulation of a tiny proportion of a tiny part of the creative market in the United States - publishers.
+
+The act left other creators totally unregulated. If I copied your poem by hand, over and over again, as a way to learn it by heart, my act was totally unregulated by the 1790 act. If I took your novel and made a play based upon it, or if I translated it or abridged it, none of those activities were regulated by the original copyright act. These creative activities remained free, while the activities of publishers were restrained.
+
+Today the story is very different: If you write a book, your book is automatically protected. Indeed, not just your book. Every e-mail, every note to your spouse, every doodle, /{every}/ creative act that's reduced to a tangible form - all of this is automatically copyrighted. There is no need to register or mark your work. The protection follows the creation, not the steps you take to protect it.
+
+That protection gives you the right (subject to a narrow range of fair use exceptions) to control how others copy the work, whether they copy it to republish it or to share an excerpt.
+
+That much is the obvious part. Any system of copyright would control competing publishing. But there's a second part to the copyright of today that is not at all obvious. This is the protection of "derivative rights." If you write a book, no one can make a movie out of your book without permission. No one can translate it without permission. CliffsNotes can't make an abridgment unless permission is granted. All of these derivative uses of your original work are controlled by the copyright holder. The copyright, in other words, is now not just an exclusive right to your writings, but an exclusive right to your writings and a large proportion of the writings inspired by them.
+
+It is this derivative right that would seem most bizarre to our framers, though it has become second nature to us. Initially, this expansion was created to deal with obvious evasions of a narrower copyright. If I write a book, can you change one word and then claim a copyright in a new and different book? Obviously that would make a joke of the copyright, so the law was properly expanded to include those slight modifications as well as the verbatim original work.
+
+In preventing that joke, the law created an astonishing power within a free culture - at least, it's astonishing when you understand that the law applies not just to the commercial publisher but to anyone with a computer. I understand the wrong in duplicating and selling someone else's work. But whatever /{that}/ wrong is, transforming someone else's work is a different wrong. Some view transformation as no wrong at all - they believe that our law, as the framers penned it, should not protect derivative rights at all.~{ Jonathan Zittrain, "The Copyright Cage," /{Legal Affairs,}/ July/August 2003, available at link #26. }~ Whether or not you go that far, it seems plain that whatever wrong is involved is fundamentally different from the wrong of direct piracy.
+
+Yet copyright law treats these two different wrongs in the same way. I can go to court and get an injunction against your pirating my book. I can go to court and get an injunction against your transformative use of my book.~{ Professor Rubenfeld has presented a powerful constitutional argument about the difference that copyright law should draw (from the perspective of the First Amendment) between mere "copies" and derivative works. See Jed Rubenfeld, "The Freedom of Imagination: Copyright's Constitutionality," /{Yale Law Journal}/ 112 (2002): 1-60 (see especially pp. 53-59). }~ These two different uses of my creative work are treated the same.
+
+This again may seem right to you. If I wrote a book, then why should you be able to write a movie that takes my story and makes money from it without paying me or crediting me? Or if Disney creates a creature called "Mickey Mouse," why should you be able to make Mickey Mouse toys and be the one to trade on the value that Disney originally created?
+
+These are good arguments, and, in general, my point is not that the derivative right is unjustified. My aim just now is much narrower: simply to make clear that this expansion is a significant change from the rights originally granted.
+
+2~ Law and Architecture: Reach
+
+Whereas originally the law regulated only publishers, the change in copyright's scope means that the law today regulates publishers, users, and authors. It regulates them because all three are capable of making copies, and the core of the regulation of copyright law is copies.~{ This is a simplification of the law, but not much of one. The law certainly regulates more than "copies" - a public performance of a copyrighted song, for example, is regulated even though performance per se doesn't make a copy; 17 /{United States Code,}/ section 106(4). And it certainly sometimes doesn't regulate a "copy"; 17 /{United States Code,}/ section 112(a). But the presumption under the existing law (which regulates "copies;" 17 /{United States Code,}/ section 102) is that if there is a copy, there is a right. }~
+
+"Copies." That certainly sounds like the obvious thing for /{copy}/right law to regulate. But as with Jack Valenti's argument at the start of this chapter, that "creative property" deserves the "same rights" as all other property, it is the /{obvious}/ that we need to be most careful about. For while it may be obvious that in the world before the Internet, copies were the obvious trigger for copyright law, upon reflection, it should be obvious that in the world with the Internet, copies should /{not}/ be the trigger for copyright law. More precisely, they should not /{always}/ be the trigger for copyright law.
+
+This is perhaps the central claim of this book, so let me take this very slowly so that the point is not easily missed. My claim is that the Internet should at least force us to rethink the conditions under which the law of copyright automatically applies,~{ Thus, my argument is not that in each place that copyright law extends, we should repeal it. It is instead that we should have a good argument for its extending where it does, and should not determine its reach on the basis of arbitrary and automatic changes caused by technology. }~ because it is clear that the current reach of copyright was never contemplated, much less chosen, by the legislators who enacted copyright law.
+
+We can see this point abstractly by beginning with this largely empty circle.
+
+{freeculture05.png 350x350 "uses" }http://www.free-culture.cc/
+
+Think about a book in real space, and imagine this circle to represent all its potential /{uses}/. Most of these uses are unregulated by copyright law, because the uses don't create a copy. If you read a book, that act is not regulated by copyright law. If you give someone the book, that act is not regulated by copyright law. If you resell a book, that act is not regulated (copyright law expressly states that after the first sale of a book, the copyright owner can impose no further conditions on the disposition of the book). If you sleep on the book or use it to hold up a lamp or let your puppy chew it up, those acts are not regulated by copyright law, because those acts do not make a copy.
+
+{freeculture06.png 350x350 "unregulated" }http://www.free-culture.cc/
+
+Obviously, however, some uses of a copyrighted book are regulated by copyright law. Republishing the book, for example, makes a copy. It is therefore regulated by copyright law. Indeed, this particular use stands at the core of this circle of possible uses of a copyrighted work. It is the paradigmatic use properly regulated by copyright regulation (see first diagram on next page).
+
+Finally, there is a tiny sliver of otherwise regulated copying uses that remain unregulated because the law considers these "fair uses."
+
+{freeculture07.png 350x350 }http://www.free-culture.cc/
+
+These are uses that themselves involve copying, but which the law treats as unregulated because public policy demands that they remain unregulated. You are free to quote from this book, even in a review that is quite negative, without my permission, even though that quoting makes a copy. That copy would ordinarily give the copyright owner the exclusive right to say whether the copy is allowed or not, but the law denies the owner any exclusive right over such "fair uses" for public policy (and possibly First Amendment) reasons.
+
+{freeculture08.png 450x350 }http://www.free-culture.cc/
+
+{freeculture09.png 350x350 }http://www.free-culture.cc/
+
+In real space, then, the possible uses of a book are divided into three sorts: (1) unregulated uses, (2) regulated uses, and (3) regulated uses that are nonetheless deemed "fair" regardless of the copyright owner's views.
+
+Enter the Internet - a distributed, digital network where every use of a copyrighted work produces a copy.~{ I don't mean "nature" in the sense that it couldn't be different, but rather that its present instantiation entails a copy. Optical networks need not make copies of content they transmit, and a digital network could be designed to delete anything it copies so that the same number of copies remain. }~ And because of this single, arbitrary feature of the design of a digital network, the scope of category 1 changes dramatically. Uses that before were presumptively unregulated are now presumptively regulated. No longer is there a set of presumptively unregulated uses that define a freedom associated with a copyrighted work. Instead, each use is now subject to the copyright, because each use also makes a copy - category 1 gets sucked into category 2. And those who would defend the unregulated uses of copyrighted work must look exclusively to category 3, fair uses, to bear the burden of this shift.
+
+So let's be very specific to make this general point clear. Before the Internet, if you purchased a book and read it ten times, there would be no plausible /{copyright}/-related argument that the copyright owner could make to control that use of her book. Copyright law would have nothing to say about whether you read the book once, ten times, or every night before you went to bed. None of those instances of use - reading - could be regulated by copyright law because none of those uses produced a copy.
+
+But the same book as an e-book is effectively governed by a different set of rules. Now if the copyright owner says you may read the book only once or only once a month, then /{copyright law}/ would aid the copyright owner in exercising this degree of control, because of the accidental feature of copyright law that triggers its application upon there being a copy. Now if you read the book ten times and the license says you may read it only five times, then whenever you read the book (or any portion of it) beyond the fifth time, you are making a copy of the book contrary to the copyright owner's wish.
+
+There are some people who think this makes perfect sense. My aim just now is not to argue about whether it makes sense or not. My aim is only to make clear the change. Once you see this point, a few other points also become clear:
+
+First, making category 1 disappear is not anything any policy maker ever intended. Congress did not think through the collapse of the presumptively unregulated uses of copyrighted works. There is no evidence at all that policy makers had this idea in mind when they allowed our policy here to shift. Unregulated uses were an important part of free culture before the Internet.
+
+Second, this shift is especially troubling in the context of transformative uses of creative content. Again, we can all understand the wrong in commercial piracy. But the law now purports to regulate /{any}/ transformation you make of creative work using a machine. "Copy and paste" and "cut and paste" become crimes. Tinkering with a story and releasing it to others exposes the tinkerer to at least a requirement of justification. However troubling the expansion with respect to copying a particular work, it is extraordinarily troubling with respect to transformative uses of creative work.
+
+Third, this shift from category 1 to category 2 puts an extraordinary burden on category 3 ("fair use") that fair use never before had to bear. If a copyright owner now tried to control how many times I could read a book on-line, the natural response would be to argue that this is a violation of my fair use rights. But there has never been any litigation about whether I have a fair use right to read, because before the Internet, reading did not trigger the application of copyright law and hence the need for a fair use defense. The right to read was effectively protected before because reading was not regulated.
+
+This point about fair use is totally ignored, even by advocates for free culture. We have been cornered into arguing that our rights depend upon fair use - never even addressing the earlier question about the expansion in effective regulation. A thin protection grounded in fair use makes sense when the vast majority of uses are /{unregulated}/. But when everything becomes presumptively regulated, then the protections of fair use are not enough.
+
+The case of Video Pipeline is a good example. Video Pipeline was in the business of making "trailer" advertisements for movies available to video stores. The video stores displayed the trailers as a way to sell videos. Video Pipeline got the trailers from the film distributors, put the trailers on tape, and sold the tapes to the retail stores.
+
+The company did this for about fifteen years. Then, in 1997, it began to think about the Internet as another way to distribute these previews. The idea was to expand their "selling by sampling" technique by giving on-line stores the same ability to enable "browsing." Just as in a bookstore you can read a few pages of a book before you buy the book, so, too, you would be able to sample a bit from the movie on-line before you bought it.
+
+In 1998, Video Pipeline informed Disney and other film distributors that it intended to distribute the trailers through the Internet (rather than sending the tapes) to distributors of their videos. Two years later, Disney told Video Pipeline to stop. The owner of Video Pipeline asked Disney to talk about the matter - he had built a business on distributing this content as a way to help sell Disney films; he had customers who depended upon his delivering this content. Disney would agree to talk only if Video Pipeline stopped the distribution immediately. Video Pipeline thought it was within their "fair use" rights to distribute the clips as they had. So they filed a lawsuit to ask the court to declare that these rights were in fact their rights.
+
+Disney countersued - for $100 million in damages. Those damages were predicated upon a claim that Video Pipeline had "willfully infringed" on Disney's copyright. When a court makes a finding of willful infringement, it can award damages not on the basis of the actual harm to the copyright owner, but on the basis of an amount set in the statute. Because Video Pipeline had distributed seven hundred clips of Disney movies to enable video stores to sell copies of those movies, Disney was now suing Video Pipeline for $100 million.
+
+Disney has the right to control its property, of course. But the video stores that were selling Disney's films also had some sort of right to be able to sell the films that they had bought from Disney. Disney's claim in court was that the stores were allowed to sell the films and they were permitted to list the titles of the films they were selling, but they were not allowed to show clips of the films as a way of selling them without Disney's permission.
+
+Now, you might think this is a close case, and I think the courts would consider it a close case. My point here is to map the change that gives Disney this power. Before the Internet, Disney couldn't really control how people got access to their content. Once a video was in the marketplace, the "first-sale doctrine" would free the seller to use the video as he wished, including showing portions of it in order to engender sales of the entire movie video. But with the Internet, it becomes possible for Disney to centralize control over access to this content. Because each use of the Internet produces a copy, use on the Internet becomes subject to the copyright owner's control. The technology expands the scope of effective control, because the technology builds a copy into every transaction.
+
+No doubt, a potential is not yet an abuse, and so the potential for control is not yet the abuse of control. Barnes & Noble has the right to say you can't touch a book in their store; property law gives them that right. But the market effectively protects against that abuse. If Barnes & Noble banned browsing, then consumers would choose other bookstores. Competition protects against the extremes. And it may well be (my argument so far does not even question this) that competition would prevent any similar danger when it comes to copyright. Sure, publishers exercising the rights that authors have assigned to them might try to regulate how many times you read a book, or try to stop you from sharing the book with anyone. But in a competitive market such as the book market, the dangers of this happening are quite slight.
+
+Again, my aim so far is simply to map the changes that this changed architecture enables. Enabling technology to enforce the control of copyright means that the control of copyright is no longer defined by balanced policy. The control of copyright is simply what private owners choose. In some contexts, at least, that fact is harmless. But in some contexts it is a recipe for disaster.
+
+2~ Architecture and Law: Force
+
+The disappearance of unregulated uses would be change enough, but a second important change brought about by the Internet magnifies its significance. This second change does not affect the reach of copyright regulation; it affects how such regulation is enforced.
+
+In the world before digital technology, it was generally the law that controlled whether and how someone was regulated by copyright law. The law, meaning a court, meaning a judge: In the end, it was a human, trained in the tradition of the law and cognizant of the balances that tradition embraced, who said whether and how the law would restrict your freedom.
+
+There's a famous story about a battle between the Marx Brothers and Warner Brothers. The Marxes intended to make a parody of /{Casablanca}/. Warner Brothers objected. They wrote a nasty letter to the Marxes, warning them that there would be serious legal consequences if they went forward with their plan.~{ See David Lange, "Recognizing the Public Domain," /{Law and Contemporary Problems}/ 44 (1981): 172-73. }~
+
+This led the Marx Brothers to respond in kind. They warned Warner Brothers that the Marx Brothers "were brothers long before you were."~{ Ibid. See also Vaidhyanathan, /{Copyrights and Copywrongs,}/ 1-3. }~ The Marx Brothers therefore owned the word /{brothers}/, and if Warner Brothers insisted on trying to control /{Casablanca}/, then the Marx Brothers would insist on control over /{brothers}/.
+
+An absurd and hollow threat, of course, because Warner Brothers, like the Marx Brothers, knew that no court would ever enforce such a silly claim. This extremism was irrelevant to the real freedoms anyone (including Warner Brothers) enjoyed.
+
+On the Internet, however, there is no check on silly rules, because on the Internet, increasingly, rules are enforced not by a human but by a machine: Increasingly, the rules of copyright law, as interpreted by the copyright owner, get built into the technology that delivers copyrighted content. It is code, rather than law, that rules. And the problem with code regulations is that, unlike law, code has no shame. Code would not get the humor of the Marx Brothers. The consequence of that is not at all funny.
+
+Consider the life of my Adobe eBook Reader.
+
+An e-book is a book delivered in electronic form. An Adobe eBook is not a book that Adobe has published; Adobe simply produces the software that publishers use to deliver e-books. It provides the technology, and the publisher delivers the content by using the technology.
+
+On the next page is a picture of an old version of my Adobe eBook Reader.
+
+As you can see, I have a small collection of e-books within this e-book library. Some of these books reproduce content that is in the public domain: /{Middlemarch}/, for example, is in the public domain. Some of them reproduce content that is not in the public domain: My own book /{The Future of Ideas}/ is not yet within the public domain.
+
+Consider /{Middlemarch}/ first. If you click on my e-book copy of /{Middlemarch}/, you'll see a fancy cover, and then a button at the bottom called Permissions.
+
+{freeculture10.png 340x450 }http://www.free-culture.cc/
+
+If you click on the Permissions button, you'll see a list of the permissions that the publisher purports to grant with this book.
+
+{freeculture11.png 560x250 }http://www.free-culture.cc/
+
+According to my eBook Reader, I have the permission to copy to the clipboard of the computer ten text selections every ten days. (So far, I've copied no text to the clipboard.) I also have the permission to print ten pages from the book every ten days. Lastly, I have the permission to use the Read Aloud button to hear /{Middlemarch}/ read aloud through the computer.
+
+{freeculture12.png 310x410 }http://www.free-culture.cc/
+
+Here's the e-book for another work in the public domain (including the translation): Aristotle's /{Politics}/.
+
+According to its permissions, no printing or copying is permitted at all. But fortunately, you can use the Read Aloud button to hear the book.
+
+{freeculture13.png 560x220 }http://www.free-culture.cc/
+
+Finally (and most embarrassingly), here are the permissions for the original e-book version of my last book, /{The Future of Ideas}/:
+
+{freeculture14.png 560x224 }http://www.free-culture.cc/
+
+No copying, no printing, and don't you dare try to listen to this book!
+
+Now, the Adobe eBook Reader calls these controls "permissions" - as if the publisher has the power to control how you use these works. For works under copyright, the copyright owner certainly does have the power - up to the limits of the copyright law. But for work not under copyright, there is no such copyright power.~{ In principle, a contract might impose a requirement on me. I might, for example, buy a book from you that includes a contract that says I will read it only three times, or that I promise to read it three times. But that obligation (and the limits for creating that obligation) would come from the contract, not from copyright law, and the obligations of contract would not necessarily pass to anyone who subsequently acquired the book. }~ When my e-book of /{Middlemarch}/ says I have the permission to copy only ten text selections into the memory every ten days, what that really means is that the eBook Reader has enabled the publisher to control how I use the book on my computer, far beyond the control that the law would enable.
+
+The control comes instead from the code - from the technology within which the e-book "lives." Though the e-book says that these are permissions, they are not the sort of "permissions" that most of us deal with. When a teenager gets "permission" to stay out till midnight, she knows (unless she's Cinderella) that she can stay out till 2 A.M., but will suffer a punishment if she's caught. But when the Adobe eBook Reader says I have the permission to make ten copies of the text into the computer's memory, that means that after I've made ten copies, the computer will not make any more. The same with the printing restrictions: After ten pages, the eBook Reader will not print any more pages. It's the same with the silly restriction that says that you can't use the Read Aloud button to read my book aloud - it's not that the company will sue you if you do; instead, if you push the Read Aloud button with my book, the machine simply won't read aloud.
+
+These are /{controls}/, not permissions. Imagine a world where the Marx Brothers sold word processing software that, when you tried to type "Warner Brothers," erased "Brothers" from the sentence.
+
+This is the future of copyright law: not so much copyright /{law}/ as copyright /{code}/. The controls over access to content will not be controls that are ratified by courts; the controls over access to content will be controls that are coded by programmers. And whereas the controls that are built into the law are always to be checked by a judge, the controls that are built into the technology have no similar built-in check.
+
+How significant is this? Isn't it always possible to get around the controls built into the technology? Software used to be sold with technologies that limited the ability of users to copy the software, but those were trivial protections to defeat. Why won't it be trivial to defeat these protections as well?
+
+We've only scratched the surface of this story. Return to the Adobe eBook Reader.
+
+Early in the life of the Adobe eBook Reader, Adobe suffered a public relations nightmare. Among the books that you could download for free on the Adobe site was a copy of /{Alice's Adventures in Wonderland}/. This wonderful book is in the public domain. Yet when you clicked on Permissions for that book, you got the following report:
+
+{freeculture15.png 560x310 }http://www.free-culture.cc/
+
+Here was a public domain children's book that you were not allowed to copy, not allowed to lend, not allowed to give, and, as the "permissions" indicated, not allowed to "read aloud"!
+
+The public relations nightmare attached to that final permission. For the text did not say that you were not permitted to use the Read Aloud button; it said you did not have the permission to read the book aloud. That led some people to think that Adobe was restricting the right of parents, for example, to read the book to their children, which seemed, to say the least, absurd.
+
+Adobe responded quickly that it was absurd to think that it was trying to restrict the right to read a book aloud. Obviously it was only restricting the ability to use the Read Aloud button to have the book read aloud. But the question Adobe never did answer is this: Would Adobe thus agree that a consumer was free to use software to hack around the restrictions built into the eBook Reader? If some company (call it Elcomsoft) developed a program to disable the technological protection built into an Adobe eBook so that a blind person, say, could use a computer to read the book aloud, would Adobe agree that such a use of an eBook Reader was fair? Adobe didn't answer because the answer, however absurd it might seem, is no.
+
+The point is not to blame Adobe. Indeed, Adobe is among the most innovative companies developing strategies to balance open access to content with incentives for companies to innovate. But Adobe's technology enables control, and Adobe has an incentive to defend this control. That incentive is understandable, yet what it creates is often crazy.
+
+To see the point in a particularly absurd context, consider a favorite story of mine that makes the same point.
+
+Consider the robotic dog made by Sony named "Aibo." The Aibo learns tricks, cuddles, and follows you around. It eats only electricity and that doesn't leave that much of a mess (at least in your house).
+
+The Aibo is expensive and popular. Fans from around the world have set up clubs to trade stories. One fan in particular set up a Web site to enable information about the Aibo dog to be shared. This fan set up aibopet.com (and aibohack.com, but that resolves to the same site), and on that site he provided information about how to teach an Aibo to do tricks in addition to the ones Sony had taught it.
+
+"Teach" here has a special meaning. Aibos are just cute computers. You teach a computer how to do something by programming it differently. So to say that aibopet.com was giving information about how to teach the dog to do new tricks is just to say that aibopet.com was giving information to users of the Aibo pet about how to hack their computer "dog" to make it do new tricks (thus, aibohack.com).
+
+If you're not a programmer or don't know many programmers, the word /{hack}/ has a particularly unfriendly connotation. Nonprogrammers hack bushes or weeds. Nonprogrammers in horror movies do even worse. But to programmers, or coders, as I call them, /{hack}/ is a much more positive term. /{Hack}/ just means code that enables the program to do something it wasn't originally intended or enabled to do. If you buy a new printer for an old computer, you might find the old computer doesn't run, or "drive," the printer. If you discovered that, you'd later be happy to discover a hack on the Net by someone who has written a driver to enable the computer to drive the printer you just bought.
+
+Some hacks are easy. Some are unbelievably hard. Hackers as a community like to challenge themselves and others with increasingly difficult tasks. There's a certain respect that goes with the talent to hack well. There's a well-deserved respect that goes with the talent to hack ethically.
+
+The Aibo fan was displaying a bit of both when he hacked the program and offered to the world a bit of code that would enable the Aibo to dance jazz. The dog wasn't programmed to dance jazz. It was a clever bit of tinkering that turned the dog into a more talented creature than Sony had built.
+
+I've told this story in many contexts, both inside and outside the United States. Once I was asked by a puzzled member of the audience, is it permissible for a dog to dance jazz in the United States? We forget that stories about the backcountry still flow across much of the world. So let's just be clear before we continue: It's not a crime anywhere (anymore) to dance jazz. Nor is it a crime to teach your dog to dance jazz. Nor should it be a crime (though we don't have a lot to go on here) to teach your robot dog to dance jazz. Dancing jazz is a completely legal activity. One imagines that the owner of aibopet.com thought, /{What possible problem could there be with teaching a robot dog to dance?}/
+
+Let's put the dog to sleep for a minute, and turn to a pony show - not literally a pony show, but rather a paper that a Princeton academic named Ed Felten prepared for a conference. This Princeton academic is well known and respected. He was hired by the government in the Microsoft case to test Microsoft's claims about what could and could not be done with its own code. In that trial, he demonstrated both his brilliance and his coolness. Under heavy badgering by Microsoft lawyers, Ed Felten stood his ground. He was not about to be bullied into being silent about something he knew very well.
+
+But Felten's bravery was really tested in April 2001.~{ See Pamela Samuelson, "Anticircumvention Rules: Threat to Science," /{Science}/ 293 (2001): 2028; Brendan I. Koerner, "Play Dead: Sony Muzzles the Techies Who Teach a Robot Dog New Tricks," /{American Prospect,}/ 1 January 2002; "Court Dismisses Computer Scientists' Challenge to DMCA," /{Intellectual Property Litigation Reporter,}/ 11 December 2001; Bill Holland, "Copyright Act Raising Free-Speech Concerns," /{Billboard,}/ 26 May 2001; Janelle Brown, "Is the RIAA Running Scared?" Salon.com, 26 April 2001; Electronic Frontier Foundation, "Frequently Asked Questions about /{Felten and USENIX}/ v. /{RIAA}/ Legal Case," available at link #27. }~ He and a group of colleagues were working on a paper to be submitted at a conference. The paper was intended to describe the weakness in an encryption system being developed by the Secure Digital Music Initiative as a technique to control the distribution of music.
+
+The SDMI coalition had as its goal a technology to enable content owners to exercise much better control over their content than the Internet, as it originally stood, granted them. Using encryption, SDMI hoped to develop a standard that would allow the content owner to say "this music cannot be copied," and have a computer respect that command. The technology was to be part of a "trusted system" of control that would get content owners to trust the system of the Internet much more.
+
+When SDMI thought it was close to a standard, it set up a competition. In exchange for providing contestants with the code to an SDMI-encrypted bit of content, contestants were to try to crack it and, if they did, report the problems to the consortium.
+
+Felten and his team figured out the encryption system quickly. He and the team saw the weakness of this system as a type: Many encryption systems would suffer the same weakness, and Felten and his team thought it worthwhile to point this out to those who study encryption.
+
+Let's review just what Felten was doing. Again, this is the United States. We have a principle of free speech. We have this principle not just because it is the law, but also because it is a really great idea. A strongly protected tradition of free speech is likely to encourage a wide range of criticism. That criticism is likely, in turn, to improve the systems or people or ideas criticized.
+
+What Felten and his colleagues were doing was publishing a paper describing the weakness in a technology. They were not spreading free music, or building and deploying this technology. The paper was an academic essay, unintelligible to most people. But it clearly showed the weakness in the SDMI system, and why SDMI would not, as presently constituted, succeed.
+
+What links these two, aibopet.com and Felten, is the letters they then received. Aibopet.com received a letter from Sony about the aibopet.com hack. Though a jazz-dancing dog is perfectly legal, Sony wrote:
+
+_1 Your site contains information providing the means to circumvent AIBO-ware's copy protection protocol constituting a violation of the anti-circumvention provisions of the Digital Millennium Copyright Act.
+
+And though an academic paper describing the weakness in a system of encryption should also be perfectly legal, Felten received a letter from an RIAA lawyer that read:
+
+_1 Any disclosure of information gained from participating in the Public Challenge would be outside the scope of activities permitted by the Agreement and could subject you and your research team to actions under the Digital Millennium Copyright Act ("DMCA").
+
+In both cases, this weirdly Orwellian law was invoked to control the spread of information. The Digital Millennium Copyright Act made spreading such information an offense.
+
+The DMCA was enacted as a response to copyright owners' first fear about cyberspace. The fear was that copyright control was effectively dead; the response was to find technologies that might compensate. These new technologies would be copyright protection technologies - technologies to control the replication and distribution of copyrighted material. They were designed as /{code}/ to modify the original /{code}/ of the Internet, to reestablish some protection for copyright owners.
+
+The DMCA was a bit of law intended to back up the protection of this code designed to protect copyrighted material. It was, we could say, /{legal code}/ intended to buttress /{software code}/ which itself was intended to support the /{legal code of copyright}/.
+
+But the DMCA was not designed merely to protect copyrighted works to the extent copyright law protected them. Its protection, that is, did not end at the line that copyright law drew. The DMCA regulated devices that were designed to circumvent copyright protection measures. It was designed to ban those devices, whether or not the use of the copyrighted material made possible by that circumvention would have been a copyright violation.
+
+Aibopet.com and Felten make the point. The Aibo hack circumvented a copyright protection system for the purpose of enabling the dog to dance jazz. That enablement no doubt involved the use of copyrighted material. But as aibopet.com's site was noncommercial, and the use did not enable subsequent copyright infringements, there's no doubt that aibopet.com's hack was fair use of Sony's copyrighted material. Yet fair use is not a defense to the DMCA. The question is not whether the use of the copyrighted material was a copyright violation. The question is whether a copyright protection system was circumvented.
+
+The threat against Felten was more attenuated, but it followed the same line of reasoning. By publishing a paper describing how a copyright protection system could be circumvented, the RIAA lawyer suggested, Felten himself was distributing a circumvention technology. Thus, even though he was not himself infringing anyone's copyright, his academic paper was enabling others to infringe others' copyright.
+
+The bizarreness of these arguments is captured in a cartoon drawn in 1981 by Paul Conrad. At that time, a court in California had held that the VCR could be banned because it was a copyright-infringing technology: It enabled consumers to copy films without the permission of the copyright owner. No doubt there were uses of the technology that were legal: Fred Rogers, aka "Mr. Rogers," for example, had testified in that case that he wanted people to feel free to tape /{Mr. Rogers' Neighborhood}/.
+
+_1 Some public stations, as well as commercial stations, program the "Neighborhood" at hours when some children cannot use it. I think that it's a real service to families to be able to record such programs and show them at appropriate times. I have always felt that with the advent of all of this new technology that allows people to tape the "Neighborhood" off-the-air, and I'm speaking for the "Neighborhood" because that's what I produce, that they then become much more active in the programming of their family's television life. Very frankly, I am opposed to people being programmed by others. My whole approach in broadcasting has always been "You are an important person just the way you are. You can make healthy decisions." Maybe I'm going on too long, but I just feel that anything that allows a person to be more active in the control of his or her life, in a healthy way, is important.~{ /{Sony Corporation of America}/ v. /{Universal City Studios, Inc.,}/ 464 U.S. 417, 455 fn. 27 (1984). Rogers never changed his view about the VCR. See James Lardner, /{Fast Forward: Hollywood, the Japanese, and the Onslaught of the VCR}/ (New York: W. W. Norton, 1987), 270-71. }~
+
+Even though there were uses that were legal, because there were some uses that were illegal, the court held the companies producing the VCR responsible.
+
+This led Conrad to draw the cartoon below, which we can adapt to the DMCA.
+
+No argument I have can top this picture, but let me try to get close.
+
+The anticircumvention provisions of the DMCA target copyright circumvention technologies. Circumvention technologies can be used for different ends. They can be used, for example, to enable massive pirating of copyrighted material - a bad end. Or they can be used to enable the use of particular copyrighted materials in ways that would be considered fair use - a good end.
+
+A handgun can be used to shoot a police officer or a child. Most would agree such a use is bad. Or a handgun can be used for target practice or to protect against an intruder. At least some would say that such a use would be good. It, too, is a technology that has both good and bad uses.
+
+{freeculture16.png 425x500 }http://www.free-culture.cc/
+
+The obvious point of Conrad's cartoon is the weirdness of a world where guns are legal, despite the harm they can do, while VCRs (and circumvention technologies) are illegal. Flash: /{No one ever died from copyright circumvention}/. Yet the law bans circumvention technologies absolutely, despite the potential that they might do some good, but permits guns, despite the obvious and tragic harm they do.
+
+The Aibo and RIAA examples demonstrate how copyright owners are changing the balance that copyright law grants. Using code, copyright owners restrict fair use; using the DMCA, they punish those who would attempt to evade the restrictions on fair use that they impose through code. Technology becomes a means by which fair use can be erased; the law of the DMCA backs up that erasing.
+
+This is how /{code}/ becomes /{law}/. The controls built into the technology of copy and access protection become rules the violation of which is also a violation of the law. In this way, the code extends the law - increasing its regulation, even if the subject it regulates (activities that would otherwise plainly constitute fair use) is beyond the reach of the law. Code becomes law; code extends the law; code thus extends the control that copyright owners effect - at least for those copyright holders with the lawyers who can write the nasty letters that Felten and aibopet.com received.
+
+There is one final aspect of the interaction between architecture and law that contributes to the force of copyright's regulation. This is the ease with which infringements of the law can be detected. For contrary to the rhetoric common at the birth of cyberspace that on the Internet, no one knows you're a dog, increasingly, given changing technologies deployed on the Internet, it is easy to find the dog who committed a legal wrong. The technologies of the Internet are open to snoops as well as sharers, and the snoops are increasingly good at tracking down the identity of those who violate the rules.
+
+For example, imagine you were part of a /{Star Trek}/ fan club. You gathered every month to share trivia, and maybe to enact a kind of fan fiction about the show. One person would play Spock, another, Captain Kirk. The characters would begin with a plot from a real story, then simply continue it.~{ For an early and prescient analysis, see Rebecca Tushnet, "Legal Fictions, Copyright, Fan Fiction, and a New Common Law," /{Loyola of Los Angeles Entertainment Law Journal}/ 17 (1997): 651. }~
+
+Before the Internet, this was, in effect, a totally unregulated activity. No matter what happened inside your club room, you would never be interfered with by the copyright police. You were free in that space to do as you wished with this part of our culture. You were allowed to build on it as you wished without fear of legal control.
+
+But if you moved your club onto the Internet, and made it generally available for others to join, the story would be very different. Bots scouring the Net for trademark and copyright infringement would quickly find your site. Your posting of fan fiction, depending upon the ownership of the series that you're depicting, could well inspire a lawyer's threat. And ignoring the lawyer's threat would be extremely costly indeed. The law of copyright is extremely efficient. The penalties are severe, and the process is quick.
+
+This change in the effective force of the law is caused by a change in the ease with which the law can be enforced. That change too shifts the law's balance radically. It is as if your car transmitted the speed at which you traveled at every moment that you drove; that would be just one step before the state started issuing tickets based upon the data you transmitted. That is, in effect, what is happening here.
+
+2~ Market: Concentration
+
+So copyright's duration has increased dramatically - tripled in the past thirty years. And copyright's scope has increased as well - from regulating only publishers to now regulating just about everyone. And copyright's reach has changed, as every action becomes a copy and hence presumptively regulated. And as technologists find better ways to control the use of content, and as copyright is increasingly enforced through technology, copyright's force changes, too. Misuse is easier to find and easier to control. This regulation of the creative process, which began as a tiny regulation governing a tiny part of the market for creative work, has become the single most important regulator of creativity there is. It is a massive expansion in the scope of the government's control over innovation and creativity; it would be totally unrecognizable to those who gave birth to copyright's control.
+
+Still, in my view, all of these changes would not matter much if it weren't for one more change that we must also consider. This is a change that is in some sense the most familiar, though its significance and scope are not well understood. It is the one that creates precisely the reason to be concerned about all the other changes I have described.
+
+This is the change in the concentration and integration of the media. In the past twenty years, the nature of media ownership has undergone a radical alteration, caused by changes in legal rules governing the media. Before this change happened, the different forms of media were owned by separate media companies. Now, the media is increasingly owned by only a few companies. Indeed, after the changes that the FCC announced in June 2003, most expect that within a few years, we will live in a world where just three companies control more than 85 percent of the media.
+
+These changes are of two sorts: the scope of concentration, and its nature.
+
+Changes in scope are the easier ones to describe. As Senator John McCain summarized the data produced in the FCC's review of media ownership, "five companies control 85 percent of our media sources."~{ FCC Oversight: Hearing Before the Senate Commerce, Science and Transportation Committee, 108th Cong., 1st sess. (22 May 2003) (statement of Senator John McCain). }~ The five recording labels of Universal Music Group, BMG, Sony Music Entertainment, Warner Music Group, and EMI control 84.8 percent of the U.S. music market.~{ Lynette Holloway, "Despite a Marketing Blitz, CD Sales Continue to Slide," /{New York Times,}/ 23 December 2002. }~ The "five largest cable companies pipe programming to 74 percent of the cable subscribers nationwide."~{ Molly Ivins, "Media Consolidation Must Be Stopped," /{Charleston Gazette,}/ 31 May 2003. }~
+
+The story with radio is even more dramatic. Before deregulation, the nation's largest radio broadcasting conglomerate owned fewer than seventy-five stations. Today /{one}/ company owns more than 1,200 stations. During that period of consolidation, the total number of radio owners dropped by 34 percent. Today, in most markets, the two largest broadcasters control 74 percent of that market's revenues. Overall, just four companies control 90 percent of the nation's radio advertising revenues.
+
+Newspaper ownership is becoming more concentrated as well. Today, there are six hundred fewer daily newspapers in the United States than there were eighty years ago, and ten companies control half of the nation's circulation. There are twenty major newspaper publishers in the United States. The top ten film studios receive 99 percent of all film revenue. The ten largest cable companies account for 85 percent of all cable revenue. This is a market far from the free press the framers sought to protect. Indeed, it is a market that is quite well protected - by the market.
+
+Concentration in size alone is one thing. The more invidious change is in the nature of that concentration. As author James Fallows put it in a recent article about Rupert Murdoch,
+
+_1 Murdoch's companies now constitute a production system unmatched in its integration. They supply content - Fox movies ... Fox TV shows ... Fox-controlled sports broadcasts, plus newspapers and books. They sell the content to the public and to advertisers - in newspapers, on the broadcast network, on the cable channels. And they operate the physical distribution system through which the content reaches the customers. Murdoch's satellite systems now distribute News Corp. content in Europe and Asia; if Murdoch becomes DirecTV's largest single owner, that system will serve the same function in the United States."~{ James Fallows, "The Age of Murdoch," /{Atlantic Monthly}/ (September 2003): 89. }~
+
+The pattern with Murdoch is the pattern of modern media. Not just large companies owning many radio stations, but a few companies owning as many outlets of media as possible. A picture describes this pattern better than a thousand words could do:
+
+{freeculture17.png 560x350 }http://www.free-culture.cc/
+
+Does this concentration matter? Will it affect what is made, or what is distributed? Or is it merely a more efficient way to produce and distribute content?
+
+My view was that concentration wouldn't matter. I thought it was nothing more than a more efficient financial structure. But now, after reading and listening to a barrage of creators try to convince me to the contrary, I am beginning to change my mind.
+
+Here's a representative story that begins to suggest how this integration may matter.
+
+In 1969, Norman Lear created a pilot for /{All in the Family}/. He took the pilot to ABC. The network didn't like it. It was too edgy, they told Lear. Make it again. Lear made a second pilot, more edgy than the first. ABC was exasperated. You're missing the point, they told Lear. We wanted less edgy, not more.
+
+Rather than comply, Lear simply took the show elsewhere. CBS was happy to have the series; ABC could not stop Lear from walking. The copyrights that Lear held assured an independence from network control.~{ Leonard Hill, "The Axis of Access," remarks before Weidenbaum Center Forum, "Entertainment Economics: The Movie Industry," St. Louis, Missouri, 3 April 2003 (transcript of prepared remarks available at link #28; for the Lear story, not included in the prepared remarks, see link #29). }~
+
+The network did not control those copyrights because the law forbade the networks from controlling the content they syndicated. The law required a separation between the networks and the content producers; that separation would guarantee Lear freedom. And as late as 1992, because of these rules, the vast majority of prime time television - 75 percent of it - was "independent" of the networks.
+
+In 1994, the FCC abandoned the rules that required this independence. After that change, the networks quickly changed the balance. In 1985, there were twenty-five independent television production studios; in 2002, only five independent television studios remained. "In 1992, only 15 percent of new series were produced for a network by a company it controlled. Last year, the percentage of shows produced by controlled companies more than quintupled to 77 percent." "In 1992, 16 new series were produced independently of conglomerate control, last year there was one."~{ NewsCorp./DirecTV Merger and Media Consolidation: Hearings on Media Ownership Before the Senate Commerce Committee, 108th Cong., 1st sess. (2003) (testimony of Gene Kimmelman on behalf of Consumers Union and the Consumer Federation of America), available at link #30. Kimmelman quotes Victoria Riskin, president of Writers Guild of America, West, in her Remarks at FCC En Banc Hearing, Richmond, Virginia, 27 February 2003. }~ In 2002, 75 percent of prime time television was owned by the networks that ran it. "In the ten-year period between 1992 and 2002, the number of prime time television hours per week produced by network studios increased over 200%, whereas the number of prime time television hours per week produced by independent studios decreased 63%."~{ Ibid. }~
+
+Today, another Norman Lear with another /{All in the Family}/ would find that he had the choice either to make the show less edgy or to be fired: The content of any show developed for a network is increasingly owned by the network.
+
+While the number of channels has increased dramatically, the ownership of those channels has narrowed to an ever smaller and smaller few. As Barry Diller said to Bill Moyers,
+
+_1 Well, if you have companies that produce, that finance, that air on their channel and then distribute worldwide everything that goes through their controlled distribution system, then what you get is fewer and fewer actual voices participating in the process. [We u]sed to have dozens and dozens of thriving independent production companies producing television programs. Now you have less than a handful."~{ "Barry Diller Takes on Media Deregulation," /{Now with Bill Moyers,}/ Bill Moyers, 25 April 2003, edited transcript available at link #31. }~
+
+This narrowing has an effect on what is produced. The product of such large and concentrated networks is increasingly homogenous. Increasingly safe. Increasingly sterile. The product of news shows from networks like this is increasingly tailored to the message the network wants to convey. This is not the communist party, though from the inside, it must feel a bit like the communist party. No one can question without risk of consequence - not necessarily banishment to Siberia, but punishment nonetheless. Independent, critical, different views are quashed. This is not the environment for a democracy.
+
+Economics itself offers a parallel that explains why this integration affects creativity. Clay Christensen has written about the "Innovator's Dilemma": the fact that large traditional firms find it rational to ignore new, breakthrough technologies that compete with their core business. The same analysis could help explain why large, traditional media companies would find it rational to ignore new cultural trends.~{ Clayton M. Christensen, /{The Innovator's Dilemma: The Revolutionary National Bestseller that Changed the Way We Do Business}/ (Cambridge: Harvard Business School Press, 1997). Christensen acknowledges that the idea was first suggested by Dean Kim Clark. See Kim B. Clark, "The Interaction of Design Hierarchies and Market Concepts in Technological Evolution," /{Research Policy}/ 14 (1985): 235-51. For a more recent study, see Richard Foster and Sarah Kaplan, /{Creative Destruction: Why Companies That Are Built to Last Underperform the Market - and How to Successfully Transform Them}/ (New York: Currency/Doubleday, 2001). }~ Lumbering giants not only don't, but should not, sprint. Yet if the field is only open to the giants, there will be far too little sprinting.
+
+I don't think we know enough about the economics of the media market to say with certainty what concentration and integration will do. The efficiencies are important, and the effect on culture is hard to measure.
+
+But there is a quintessentially obvious example that does strongly suggest the concern.
+
+In addition to the copyright wars, we're in the middle of the drug wars. Government policy is strongly directed against the drug cartels; criminal and civil courts are filled with the consequences of this battle.
+
+Let me hereby disqualify myself from any possible appointment to any position in government by saying I believe this war is a profound mistake. I am not pro drugs. Indeed, I come from a family once wrecked by drugs - though the drugs that wrecked my family were all quite legal. I believe this war is a profound mistake because the collateral damage from it is so great as to make waging the war insane. When you add together the burdens on the criminal justice system, the desperation of generations of kids whose only real economic opportunities are as drug warriors, the queering of constitutional protections because of the constant surveillance this war requires, and, most profoundly, the total destruction of the legal systems of many South American nations because of the power of the local drug cartels, I find it impossible to believe that the marginal benefit in reduced drug consumption by Americans could possibly outweigh these costs.
+
+You may not be convinced. That's fine. We live in a democracy, and it is through votes that we are to choose policy. But to do that, we depend fundamentally upon the press to help inform Americans about these issues.
+
+Beginning in 1998, the Office of National Drug Control Policy launched a media campaign as part of the "war on drugs." The campaign produced scores of short film clips about issues related to illegal drugs. In one series (the Nick and Norm series) two men are in a bar, discussing the idea of legalizing drugs as a way to avoid some of the collateral damage from the war. One advances an argument in favor of drug legalization. The other responds in a powerful and effective way against the argument of the first. In the end, the first guy changes his mind (hey, it's television). The plug at the end is a damning attack on the pro-legalization campaign.
+
+Fair enough. It's a good ad. Not terribly misleading. It delivers its message well. It's a fair and reasonable message.
+
+But let's say you think it is a wrong message, and you'd like to run a countercommercial. Say you want to run a series of ads that try to demonstrate the extraordinary collateral harm that comes from the drug war. Can you do it?
+
+Well, obviously, these ads cost lots of money. Assume you raise the money. Assume a group of concerned citizens donates all the money in the world to help you get your message out. Can you be sure your message will be heard then?
+
+No. You cannot. Television stations have a general policy of avoiding "controversial" ads. Ads sponsored by the government are deemed uncontroversial; ads disagreeing with the government are controversial. This selectivity might be thought inconsistent with the First Amendment, but the Supreme Court has held that stations have the right to choose what they run. Thus, the major channels of commercial media will refuse one side of a crucial debate the opportunity to present its case. And the courts will defend the rights of the stations to be this biased.~{ The Marijuana Policy Project, in February 2003, sought to place ads that directly responded to the Nick and Norm series on stations within the Washington, D.C., area. Comcast rejected the ads as "against [their] policy." The local NBC affiliate, WRC, rejected the ads without reviewing them. The local ABC affiliate, WJOA, originally agreed to run the ads and accepted payment to do so, but later decided not to run the ads and returned the collected fees. Interview with Neal Levine, 15 October 2003. These restrictions are, of course, not limited to drug policy. See, for example, Nat Ives, "On the Issue of an Iraq War, Advocacy Ads Meet with Rejection from TV Networks," /{New York Times,}/ 13 March 2003, C4. Outside of election-related air time there is very little that the FCC or the courts are willing to do to even the playing field. For a general overview, see Rhonda Brown, "Ad Hoc Access: The Regulation of Editorial Advertising on Television and Radio," /{Yale Law and Policy Review}/ 6 (1988): 449-79, and for a more recent summary of the stance of the FCC and the courts, see /{Radio-Television News Directors Association}/ v. /{FCC,}/ 184 F. 3d 872 (D.C. Cir. 1999). Municipal authorities exercise the same authority as the networks. In a recent example from San Francisco, the San Francisco transit authority rejected an ad that criticized its Muni diesel buses. Phillip Matier and Andrew Ross, "Antidiesel Group Fuming After Muni Rejects Ad," SFGate.com, 16 June 2003, available at link #32. The ground was that the criticism was "too controversial." }~
+
+I'd be happy to defend the networks' rights, as well - if we lived in a media market that was truly diverse. But concentration in the media throws that condition into doubt. If a handful of companies control access to the media, and that handful of companies gets to decide which political positions it will allow to be promoted on its channels, then in an obvious and important way, concentration matters. You might like the positions the handful of companies selects. But you should not like a world in which a mere few get to decide which issues the rest of us get to know about.
+
+2~ Together
+
+There is something innocent and obvious about the claim of the copyright warriors that the government should "protect my property." In the abstract, it is obviously true and, ordinarily, totally harmless. No sane sort who is not an anarchist could disagree.
+
+But when we see how dramatically this "property" has changed - when we recognize how it might now interact with both technology and markets to mean that the effective constraint on the liberty to cultivate our culture is dramatically different - the claim begins to seem less innocent and obvious. Given (1) the power of technology to supplement the law's control, and (2) the power of concentrated markets to weaken the opportunity for dissent, if strictly enforcing the massively expanded "property" rights granted by copyright fundamentally changes the freedom within this culture to cultivate and build upon our past, then we have to ask whether this property should be redefined.
+
+Not starkly. Or absolutely. My point is not that we should abolish copyright or go back to the eighteenth century. That would be a total mistake, disastrous for the most important creative enterprises within our culture today.
+
+But there is a space between zero and one, Internet culture notwithstanding. And these massive shifts in the effective power of copyright regulation, tied to increased concentration of the content industry and resting in the hands of technology that will increasingly enable control over the use of culture, should drive us to consider whether another adjustment is called for. Not an adjustment that increases copyright's power. Not an adjustment that increases its term. Rather, an adjustment to restore the balance that has traditionally defined copyright's regulation - a weakening of that regulation, to strengthen creativity.
+
+Copyright law has not been a rock of Gibraltar. It's not a set of constant commitments that, for some mysterious reason, teenagers and geeks now flout. Instead, copyright power has grown dramatically in a short period of time, as the technologies of distribution and creation have changed and as lobbyists have pushed for more control by copyright holders. Changes in the past in response to changes in technology suggest that we may well need similar changes in the future. And these changes have to be /{reductions}/ in the scope of copyright, in response to the extraordinary increase in control that technology and the market enable.
+
+For the single point that is lost in this war on pirates is a point that we see only after surveying the range of these changes. When you add together the effect of changing law, concentrated markets, and changing technology, together they produce an astonishing conclusion: /{Never in our history have fewer had a legal right to control more of the development of our culture than now}/.
+
+Not when copyrights were perpetual, for when copyrights were perpetual, they affected only that precise creative work. Not when only publishers had the tools to publish, for the market then was much more diverse. Not when there were only three television networks, for even then, newspapers, film studios, radio stations, and publishers were independent of the networks. /{Never}/ has copyright protected such a wide range of rights, against as broad a range of actors, for a term that was remotely as long. This form of regulation - a tiny regulation of a tiny part of the creative energy of a nation at the founding - is now a massive regulation of the overall creative process. Law plus technology plus the market now interact to turn this historically benign regulation into the most significant regulation of culture that our free society has known.~{ Siva Vaidhyanathan captures a similar point in his "four surrenders" of copyright law in the digital age. See Vaidhyanathan, 159-60. }~
+
+This has been a long chapter. Its point can now be briefly stated.
+
+At the start of this book, I distinguished between commercial and noncommercial culture. In the course of this chapter, I have distinguished between copying a work and transforming it. We can now combine these two distinctions and draw a clear map of the changes that copyright law has undergone.
+
+In 1790, the law looked like this:
+
+table{~h c3; 33; 33; 33;
+
+&nbsp;
+Publish
+Transform
+
+Commercial
+©
+Free
+
+Noncommercial
+Free
+Free
+
+}table
+
+The act of publishing a map, chart, and book was regulated by copyright law. Nothing else was. Transformations were free. And as copyright attached only with registration, and only those who intended to benefit commercially would register, copying through publishing of noncommercial work was also free.
+
+By the end of the nineteenth century, the law had changed to this:
+
+table{~h c3; 33; 33; 33;
+
+&nbsp;
+Publish
+Transform
+
+Commercial
+©
+©
+
+Noncommercial
+Free
+Free
+
+}table
+
+Derivative works were now regulated by copyright law - if published, which again, given the economics of publishing at the time, means if offered commercially. But noncommercial publishing and transformation were still essentially free.
+
+In 1909 the law changed to regulate copies, not publishing, and after this change, the scope of the law was tied to technology. As the technology of copying became more prevalent, the reach of the law expanded. Thus by 1975, as photocopying machines became more common, we could say the law began to look like this:
+
+table{~h c3; 33; 33; 33;
+
+&nbsp;
+Publish
+Transform
+
+Commercial
+©
+©
+
+Noncommercial
+©/Free
+Free
+
+}table
+
+The law was interpreted to reach noncommercial copying through, say, copy machines, but still much of copying outside of the commercial market remained free. But the consequence of the emergence of digital technologies, especially in the context of a digital network, means that the law now looks like this:
+
+table{~h c3; 33; 33; 33;
+
+&nbsp;
+Publish
+Transform
+
+Commercial
+©
+©
+
+Noncommercial
+©
+©
+
+}table
+
+Every realm is governed by copyright law, whereas before most creativity was not. The law now regulates the full range of creativity - commercial or not, transformative or not - with the same rules designed to regulate commercial publishers.
+
+Obviously, copyright law is not the enemy. The enemy is regulation that does no good. So the question that we should be asking just now is whether extending the regulations of copyright law into each of these domains actually does any good.
+
+I have no doubt that it does good in regulating commercial copying. But I also have no doubt that it does more harm than good when regulating (as it regulates just now) noncommercial copying and, especially, noncommercial transformation. And increasingly, for the reasons sketched especially in chapters 7 and 8, one might well wonder whether it does more harm than good for commercial transformation. More commercial transformative work would be created if derivative rights were more sharply restricted.
+
+The issue is therefore not simply whether copyright is property. Of course copyright is a kind of "property," and of course, as with any property, the state ought to protect it. But first impressions notwithstanding, historically, this property right (as with all property rights~{ It was the single most important contribution of the legal realist movement to demonstrate that all property rights are always crafted to balance public and private interests. See Thomas C. Grey, "The Disintegration of Property," in /{Nomos XXII: Property,}/ J. Roland Pennock and John W. Chapman, eds. (New York: New York University Press, 1980). }~) has been crafted to balance the important need to give authors and artists incentives with the equally important need to assure access to creative work. This balance has always been struck in light of new technologies. And for almost half of our tradition, the "copyright" did not control /{at all}/ the freedom of others to build upon or transform a creative work. American culture was born free, and for almost 180 years our country consistently protected a vibrant and rich free culture.
+
+We achieved that free culture because our law respected important limits on the scope of the interests protected by "property." The very birth of "copyright" as a statutory right recognized those limits, by granting copyright owners protection for a limited time only (the story of chapter 6). The tradition of "fair use" is animated by a similar concern that is increasingly under strain as the costs of exercising any fair use right become unavoidably high (the story of chapter 7). Adding statutory rights where markets might stifle innovation is another familiar limit on the property right that copyright is (chapter 8). And granting archives and libraries a broad freedom to collect, claims of property notwithstanding, is a crucial part of guaranteeing the soul of a culture (chapter 9). Free cultures, like free markets, are built with property. But the nature of the property that builds a free culture is very different from the extremist vision that dominates the debate today.
+
+Free culture is increasingly the casualty in this war on piracy. In response to a real, if not yet quantified, threat that the technologies of the Internet present to twentieth-century business models for producing and distributing culture, the law and technology are being transformed in a way that will undermine our tradition of free culture. The property right that is copyright is no longer the balanced right that it was, or was intended to be. The property right that is copyright has become unbalanced, tilted toward an extreme. The opportunity to create and transform becomes weakened in a world in which creation requires permission and creativity must check with a lawyer.
+
+:C~ PUZZLES
+
+1~ Chapter Eleven: Chimera
+
+*{In a well-known}* short story by H. G. Wells, a mountain climber named Nunez trips (literally, down an ice slope) into an unknown and isolated valley in the Peruvian Andes.~{ H. G. Wells, "The Country of the Blind" (1904, 1911). See H. G. Wells, /{The Country of the Blind and Other Stories,}/ Michael Sherborne, ed. (New York: Oxford University Press, 1996). }~ The valley is extraordinarily beautiful, with "sweet water, pasture, an even climate, slopes of rich brown soil with tangles of a shrub that bore an excellent fruit." But the villagers are all blind. Nunez takes this as an opportunity. "In the Country of the Blind," he tells himself, "the One-Eyed Man is King." So he resolves to live with the villagers to explore life as a king.
+
+Things don't go quite as he planned. He tries to explain the idea of sight to the villagers. They don't understand. He tells them they are "blind." They don't have the word /{blind}/. They think he's just thick. Indeed, as they increasingly notice the things he can't do (hear the sound of grass being stepped on, for example), they increasingly try to control him. He, in turn, becomes increasingly frustrated. "'You don't understand,' he cried, in a voice that was meant to be great and resolute, and which broke. 'You are blind and I can see. Leave me alone!'"
+
+The villagers don't leave him alone. Nor do they see (so to speak) the virtue of his special power. Not even the ultimate target of his affection, a young woman who to him seems "the most beautiful thing in the whole of creation," understands the beauty of sight. Nunez's description of what he sees "seemed to her the most poetical of fancies, and she listened to his description of the stars and the mountains and her own sweet white-lit beauty as though it was a guilty indulgence." "She did not believe," Wells tells us, and "she could only half understand, but she was mysteriously delighted."
+
+When Nunez announces his desire to marry his "mysteriously delighted" love, the father and the village object. "You see, my dear," her father instructs, "he's an idiot. He has delusions. He can't do anything right." They take Nunez to the village doctor.
+
+After a careful examination, the doctor gives his opinion. "His brain is affected," he reports.
+
+"What affects it?" the father asks.
+
+"Those queer things that are called the eyes ... are diseased ... in such a way as to affect his brain."
+
+The doctor continues: "I think I may say with reasonable certainty that in order to cure him completely, all that we need to do is a simple and easy surgical operation - namely, to remove these irritant bodies [the eyes]."
+
+"Thank Heaven for science!" says the father to the doctor. They inform Nunez of this condition necessary for him to be allowed his bride. (You'll have to read the original to learn what happens in the end. I believe in free culture, but never in giving away the end of a story.)
+
+It sometimes happens that the eggs of twins fuse in the mother's womb. That fusion produces a "chimera." A chimera is a single creature with two sets of DNA. The DNA in the blood, for example, might be different from the DNA of the skin. This possibility is an underused plot for murder mysteries. "But the DNA shows with 100 percent certainty that she was not the person whose blood was at the scene. ..."
+
+Before I had read about chimeras, I would have said they were impossible. A single person can't have two sets of DNA. The very idea of DNA is that it is the code of an individual. Yet in fact, not only can two individuals have the same set of DNA (identical twins), but one person can have two different sets of DNA (a chimera). Our understanding of a "person" should reflect this reality.
+
+The more I work to understand the current struggle over copyright and culture, which I've sometimes called unfairly, and sometimes not unfairly enough, "the copyright wars," the more I think we're dealing with a chimera. For example, in the battle over the question "What is p2p file sharing?" both sides have it right, and both sides have it wrong. One side says, "File sharing is just like two kids taping each others' records - the sort of thing we've been doing for the last thirty years without any question at all." That's true, at least in part. When I tell my best friend to try out a new CD that I've bought, but rather than just send the CD, I point him to my p2p server, that is, in all relevant respects, just like what every executive in every recording company no doubt did as a kid: sharing music.
+
+But the description is also false in part. For when my p2p server is on a p2p network through which anyone can get access to my music, then sure, my friends can get access, but it stretches the meaning of "friends" beyond recognition to say "my ten thousand best friends" can get access. Whether or not sharing my music with my best friend is what "we have always been allowed to do," we have not always been allowed to share music with "our ten thousand best friends."
+
+Likewise, when the other side says, "File sharing is just like walking into a Tower Records and taking a CD off the shelf and walking out with it," that's true, at least in part. If, after Lyle Lovett (finally) releases a new album, rather than buying it, I go to Kazaa and find a free copy to take, that is very much like stealing a copy from Tower.
+
+But it is not quite stealing from Tower. After all, when I take a CD from Tower Records, Tower has one less CD to sell. And when I take a CD from Tower Records, I get a bit of plastic and a cover, and something to show on my shelves. (And, while we're at it, we could also note that when I take a CD from Tower Records, the maximum fine that might be imposed on me, under California law, at least, is $1,000. According to the RIAA, by contrast, if I download a ten-song CD, I'm liable for $1,500,000 in damages.)
+
+The point is not that it is as neither side describes. The point is that it is both - both as the RIAA describes it and as Kazaa describes it. It is a chimera. And rather than simply denying what the other side asserts, we need to begin to think about how we should respond to this chimera. What rules should govern it?
+
+We could respond by simply pretending that it is not a chimera. We could, with the RIAA, decide that every act of file sharing should be a felony. We could prosecute families for millions of dollars in damages just because file sharing occurred on a family computer. And we can get universities to monitor all computer traffic to make sure that no computer is used to commit this crime. These responses might be extreme, but each of them has either been proposed or actually implemented.~{ For an excellent summary, see the report prepared by GartnerG2 and the Berkman Center for Internet and Society at Harvard Law School, "Copyright and Digital Media in a Post-Napster World," 27 June 2003, available at link #33. Reps. John Conyers Jr. (D-Mich.) and Howard L. Berman (D-Calif.) have introduced a bill that would treat unauthorized on-line copying as a felony offense with punishments ranging as high as five years imprisonment; see Jon Healey, "House Bill Aims to Up Stakes on Piracy," /{Los Angeles Times,}/ 17 July 2003, available at link #34. Civil penalties are currently set at $150,000 per copied song. For a recent (and unsuccessful) legal challenge to the RIAA's demand that an ISP reveal the identity of a user accused of sharing more than 600 songs through a family computer, see /{RIAA}/ v. /{Verizon Internet Services (In re. Verizon Internet Services),}/ 240 F. Supp. 2d 24 (D.D.C. 2003). Such a user could face liability ranging as high as $90 million. Such astronomical figures furnish the RIAA with a powerful arsenal in its prosecution of file sharers. Settlements ranging from $12,000 to $17,500 for four students accused of heavy file sharing on university networks must have seemed a mere pittance next to the $98 billion the RIAA could seek should the matter proceed to court. See Elizabeth Young, "Downloading Could Lead to Fines," redandblack.com, 26 August 2003, available at link #35. For an example of the RIAA's targeting of student file sharing, and of the subpoenas issued to universities to reveal student file-sharer identities, see James Collins, "RIAA Steps Up Bid to Force BC, MIT to Name Students," /{Boston Globe,}/ 8 August 2003, D3, available at link #36. }~
+
+Alternatively, we could respond to file sharing the way many kids act as though we've responded. We could totally legalize it. Let there be no copyright liability, either civil or criminal, for making copyrighted content available on the Net. Make file sharing like gossip: regulated, if at all, by social norms but not by law.
+
+Either response is possible. I think either would be a mistake. Rather than embrace one of these two extremes, we should embrace something that recognizes the truth in both. And while I end this book with a sketch of a system that does just that, my aim in the next chapter is to show just how awful it would be for us to adopt the zero-tolerance extreme. I believe /{either}/ extreme would be worse than a reasonable alternative. But I believe the zero-tolerance solution would be the worse of the two extremes.
+
+Yet zero tolerance is increasingly our government's policy. In the middle of the chaos that the Internet has created, an extraordinary land grab is occurring. The law and technology are being shifted to give content holders a kind of control over our culture that they have never had before. And in this extremism, many an opportunity for new innovation and new creativity will be lost.
+
+I'm not talking about the opportunities for kids to "steal" music. My focus instead is the commercial and cultural innovation that this war will also kill. We have never seen the power to innovate spread so broadly among our citizens, and we have just begun to see the innovation that this power will unleash. Yet the Internet has already seen the passing of one cycle of innovation around technologies to distribute content. The law is responsible for this passing. As the vice president for global public policy at one of these new innovators, eMusic.com, put it when criticizing the DMCA's added protection for copyrighted material,
+
+_1 eMusic opposes music piracy. We are a distributor of copyrighted material, and we want to protect those rights.
+
+_1 But building a technology fortress that locks in the clout of the major labels is by no means the only way to protect copyright interests, nor is it necessarily the best. It is simply too early to answer that question. Market forces operating naturally may very well produce a totally different industry model.
+
+_1 This is a critical point. The choices that industry sectors make with respect to these systems will in many ways directly shape the market for digital media and the manner in which digital media are distributed. This in turn will directly influence the options that are available to consumers, both in terms of the ease with which they will be able to access digital media and the equipment that they will require to do so. Poor choices made this early in the game will retard the growth of this market, hurting everyone's interests.~{ WIPO and the DMCA One Year Later: Assessing Consumer Access to Digital Entertainment on the Internet and Other Media: Hearing Before the Subcommittee on Telecommunications, Trade, and Consumer Protection, House Committee on Commerce, 106th Cong. 29 (1999) (statement of Peter Harter, vice president, Global Public Policy and Standards, EMusic.com), available in LEXIS, Federal Document Clearing House Congressional Testimony File. }~
+
+In April 2001, eMusic.com was purchased by Vivendi Universal, one of "the major labels." Its position on these matters has now changed.
+
+Reversing our tradition of tolerance now will not merely quash piracy. It will sacrifice values that are important to this culture, and will kill opportunities that could be extraordinarily valuable.
+
+1~ Chapter Twelve: Harms
+
+*{To fight}* "piracy," to protect "property," the content industry has launched a war. Lobbying and lots of campaign contributions have now brought the government into this war. As with any war, this one will have both direct and collateral damage. As with any war of prohibition, these damages will be suffered most by our own people.
+
+My aim so far has been to describe the consequences of this war, in particular, the consequences for "free culture." But my aim now is to extend this description of consequences into an argument. Is this war justified?
+
+In my view, it is not. There is no good reason why this time, for the first time, the law should defend the old against the new, just when the power of the property called "intellectual property" is at its greatest in our history.
+
+Yet "common sense" does not see it this way. Common sense is still on the side of the Causbys and the content industry. The extreme claims of control in the name of property still resonate; the uncritical rejection of "piracy" still has play.
+
+There will be many consequences of continuing this war. I want to describe just three. All three might be said to be unintended. I am quite confident the third is unintended. I'm less sure about the first two. The first two protect modern RCAs, but there is no Howard Armstrong in the wings to fight today's monopolists of culture.
+
+2~ Constraining Creators
+
+In the next ten years we will see an explosion of digital technologies. These technologies will enable almost anyone to capture and share content. Capturing and sharing content, of course, is what humans have done since the dawn of man. It is how we learn and communicate. But capturing and sharing through digital technology is different. The fidelity and power are different. You could send an e-mail telling someone about a joke you saw on Comedy Central, or you could send the clip. You could write an essay about the inconsistencies in the arguments of the politician you most love to hate, or you could make a short film that puts statement against statement. You could write a poem to express your love, or you could weave together a string - a mash-up - of songs from your favorite artists in a collage and make it available on the Net.
+
+This digital "capturing and sharing" is in part an extension of the capturing and sharing that has always been integral to our culture, and in part it is something new. It is continuous with the Kodak, but it explodes the boundaries of Kodak-like technologies. The technology of digital "capturing and sharing" promises a world of extraordinarily diverse creativity that can be easily and broadly shared. And as that creativity is applied to democracy, it will enable a broad range of citizens to use technology to express and criticize and contribute to the culture all around.
+
+Technology has thus given us an opportunity to do something with culture that has only ever been possible for individuals in small groups, isolated from others. Think about an old man telling a story to a collection of neighbors in a small town. Now imagine that same storytelling extended across the globe.
+
+Yet all this is possible only if the activity is presumptively legal. In the current regime of legal regulation, it is not. Forget file sharing for a moment. Think about your favorite amazing sites on the Net. Web sites that offer plot summaries from forgotten television shows; sites that catalog cartoons from the 1960s; sites that mix images and sound to criticize politicians or businesses; sites that gather newspaper articles on remote topics of science or culture. There is a vast amount of creative work spread across the Internet. But as the law is currently crafted, this work is presumptively illegal.
+
+That presumption will increasingly chill creativity, as the examples of extreme penalties for vague infringements continue to proliferate. It is impossible to get a clear sense of what's allowed and what's not, and at the same time, the penalties for crossing the line are astonishingly harsh. The four students who were threatened by the RIAA (Jesse Jordan of chapter 3 was just one) were threatened with a $98 billion lawsuit for building search engines that permitted songs to be copied. Yet WorldCom - which defrauded investors of $11 billion, resulting in a loss to investors in market capitalization of over $200 billion - received a fine of a mere $750 million.~{ See Lynne W. Jeter, /{Disconnected: Deceit and Betrayal at WorldCom}/ (Hoboken, N.J.: John Wiley & Sons, 2003), 176, 204; for details of the settlement, see MCI press release, "MCI Wins U.S. District Court Approval for SEC Settlement" (7 July 2003), available at link #37. }~ And under legislation being pushed in Congress right now, a doctor who negligently removes the wrong leg in an operation would be liable for no more than $250,000 in damages for pain and suffering.~{ The bill, modeled after California's tort reform model, was passed in the House of Representatives but defeated in a Senate vote in July 2003. For an overview, see Tanya Albert, "Measure Stalls in Senate: 'We'll Be Back,' Say Tort Reformers," amednews.com, 28 July 2003, available at link #38, and "Senate Turns Back Malpractice Caps," CBSNews.com, 9 July 2003, available at link #39. President Bush has continued to urge tort reform in recent months. }~ Can common sense recognize the absurdity in a world where the maximum fine for downloading two songs off the Internet is more than the fine for a doctor's negligently butchering a patient?
+
+The consequence of this legal uncertainty, tied to these extremely high penalties, is that an extraordinary amount of creativity will either never be exercised, or never be exercised in the open. We drive this creative process underground by branding the modern-day Walt Disneys "pirates." We make it impossible for businesses to rely upon a public domain, because the boundaries of the public domain are designed to be unclear. It never pays to do anything except pay for the right to create, and hence only those who can pay are allowed to create. As was the case in the Soviet Union, though for very different reasons, we will begin to see a world of underground art - not because the message is necessarily political, or because the subject is controversial, but because the very act of creating the art is legally fraught. Already, exhibits of "illegal art" tour the United States.~{ See Danit Lidor, "Artists Just Wanna Be Free," /{Wired,}/ 7 July 2003, available at link #40. For an overview of the exhibition, see link #41. }~ In what does their "illegality" consist? In the act of mixing the culture around us with an expression that is critical or reflective.
+
+Part of the reason for this fear of illegality has to do with the changing law. I described that change in detail in chapter 10. But an even bigger part has to do with the increasing ease with which infractions can be tracked. As users of file-sharing systems discovered in 2002, it is a trivial matter for copyright owners to get courts to order Internet service providers to reveal who has what content. It is as if your cassette tape player transmitted a list of the songs that you played in the privacy of your own home that anyone could tune into for whatever reason they chose.
+
+Never in our history has a painter had to worry about whether his painting infringed on someone else's work; but the modern-day painter, using the tools of Photoshop, sharing content on the Web, must worry all the time. Images are all around, but the only safe images to use in the act of creation are those purchased from Corbis or another image farm. And in purchasing, censoring happens. There is a free market in pencils; we needn't worry about its effect on creativity. But there is a highly regulated, monopolized market in cultural icons; the right to cultivate and transform them is not similarly free.
+
+Lawyers rarely see this because lawyers are rarely empirical. As I described in chapter 7, in response to the story about documentary filmmaker Jon Else, I have been lectured again and again by lawyers who insist Else's use was fair use, and hence I am wrong to say that the law regulates such a use.
+
+But fair use in America simply means the right to hire a lawyer to defend your right to create. And as lawyers love to forget, our system for defending rights such as fair use is astonishingly bad - in practically every context, but especially here. It costs too much, it delivers too slowly, and what it delivers often has little connection to the justice underlying the claim. The legal system may be tolerable for the very rich. For everyone else, it is an embarrassment to a tradition that prides itself on the rule of law.
+
+Judges and lawyers can tell themselves that fair use provides adequate "breathing room" between regulation by the law and the access the law should allow. But it is a measure of how out of touch our legal system has become that anyone actually believes this. The rules that publishers impose upon writers, the rules that film distributors impose upon filmmakers, the rules that newspapers impose upon journalists - these are the real laws governing creativity. And these rules have little relationship to the "law" with which judges comfort themselves.
+
+For in a world that threatens $150,000 for a single willful infringement of a copyright, and which demands tens of thousands of dollars to even defend against a copyright infringement claim, and which would never return to the wrongfully accused defendant anything of the costs she suffered to defend her right to speak - in that world, the astonishingly broad regulations that pass under the name "copyright" silence speech and creativity. And in that world, it takes a studied blindness for people to continue to believe they live in a culture that is free.
+
+As Jed Horovitz, the businessman behind Video Pipeline, said to me,
+
+_1 We're losing [creative] opportunities right and left. Creative people are being forced not to express themselves. Thoughts are not being expressed. And while a lot of stuff may [still] be created, it still won't get distributed. Even if the stuff gets made ... you're not going to get it distributed in the mainstream media unless you've got a little note from a lawyer saying, "This has been cleared." You're not even going to get it on PBS without that kind of permission. That's the point at which they control it.
+
+2~ Constraining Innovators
+
+The story of the last section was a crunchy-lefty story - creativity quashed, artists who can't speak, yada yada yada. Maybe that doesn't get you going. Maybe you think there's enough weird art out there, and enough expression that is critical of what seems to be just about everything. And if you think that, you might think there's little in this story to worry you.
+
+But there's an aspect of this story that is not lefty in any sense. Indeed, it is an aspect that could be written by the most extreme pro-market ideologue. And if you're one of these sorts (and a special one at that, 188 pages into a book like this), then you can see this other aspect by substituting "free market" every place I've spoken of "free culture." The point is the same, even if the interests affecting culture are more fundamental.
+
+The charge I've been making about the regulation of culture is the same charge free marketers make about regulating markets. Everyone, of course, concedes that some regulation of markets is necessary - at a minimum, we need rules of property and contract, and courts to enforce both. Likewise, in this culture debate, everyone concedes that at least some framework of copyright is also required. But both perspectives vehemently insist that just because some regulation is good, it doesn't follow that more regulation is better. And both perspectives are constantly attuned to the ways in which regulation simply enables the powerful industries of today to protect themselves against the competitors of tomorrow.
+
+This is the single most dramatic effect of the shift in regulatory strategy that I described in chapter 10. The consequence of this massive threat of liability tied to the murky boundaries of copyright law is that innovators who want to innovate in this space can safely innovate only if they have the sign-off from last generation's dominant industries. That lesson has been taught through a series of cases that were designed and executed to teach venture capitalists a lesson. That lesson - what former Napster CEO Hank Barry calls a "nuclear pall" that has fallen over the Valley - has been learned.
+
+Consider one example to make the point, a story whose beginning I told in /{The Future of Ideas}/ and which has progressed in a way that even I (pessimist extraordinaire) would never have predicted.
+
+In 1997, Michael Roberts launched a company called MP3.com. MP3.com was keen to remake the music business. Their goal was not just to facilitate new ways to get access to content. Their goal was also to facilitate new ways to create content. Unlike the major labels, MP3.com offered creators a venue to distribute their creativity, without demanding an exclusive engagement from the creators.
+
+To make this system work, however, MP3.com needed a reliable way to recommend music to its users. The idea behind this alternative was to leverage the revealed preferences of music listeners to recommend new artists. If you like Lyle Lovett, you're likely to enjoy Bonnie Raitt. And so on.
+
+This idea required a simple way to gather data about user preferences. MP3.com came up with an extraordinarily clever way to gather this preference data. In January 2000, the company launched a service called my.mp3.com. Using software provided by MP3.com, a user would sign into an account and then insert into her computer a CD. The software would identify the CD, and then give the user access to that content. So, for example, if you inserted a CD by Jill Sobule, then wherever you were - at work or at home - you could get access to that music once you signed into your account. The system was therefore a kind of music-lockbox.
+
+No doubt some could use this system to illegally copy content. But that opportunity existed with or without MP3.com. The aim of the my.mp3.com service was to give users access to their own content, and as a by-product, by seeing the content they already owned, to discover the kind of content the users liked.
+
+To make this system function, however, MP3.com needed to copy 50,000 CDs to a server. (In principle, it could have been the user who uploaded the music, but that would have taken a great deal of time, and would have produced a product of questionable quality.) It therefore purchased 50,000 CDs from a store, and started the process of making copies of those CDs. Again, it would not serve the content from those copies to anyone except those who authenticated that they had a copy of the CD they wanted to access. So while this was 50,000 copies, it was 50,000 copies directed at giving customers something they had already bought.
+
+Nine days after MP3.com launched its service, the five major labels, headed by the RIAA, brought a lawsuit against MP3.com. MP3.com settled with four of the five. Nine months later, a federal judge found MP3.com to have been guilty of willful infringement with respect to the fifth. Applying the law as it is, the judge imposed a fine against MP3.com of $118 million. MP3.com then settled with the remaining plaintiff, Vivendi Universal, paying over $54 million. Vivendi purchased MP3.com just about a year later.
+
+That part of the story I have told before. Now consider its conclusion.
+
+After Vivendi purchased MP3.com, Vivendi turned around and filed a malpractice lawsuit against the lawyers who had advised it that they had a good faith claim that the service they wanted to offer would be considered legal under copyright law. This lawsuit alleged that it should have been obvious that the courts would find this behavior illegal; therefore, this lawsuit sought to punish any lawyer who had dared to suggest that the law was less restrictive than the labels demanded.
+
+The clear purpose of this lawsuit (which was settled for an unspecified amount shortly after the story was no longer covered in the press) was to send an unequivocal message to lawyers advising clients in this space: It is not just your clients who might suffer if the content industry directs its guns against them. It is also you. So those of you who believe the law should be less restrictive should realize that such a view of the law will cost you and your firm dearly.
+
+This strategy is not just limited to the lawyers. In April 2003, Universal and EMI brought a lawsuit against Hummer Winblad, the venture capital firm (VC) that had funded Napster at a certain stage of its development, its cofounder (John Hummer), and general partner (Hank Barry).~{ See Joseph Menn, "Universal, EMI Sue Napster Investor," /{Los Angeles Times,}/ 23 April 2003. For a parallel argument about the effects on innovation in the distribution of music, see Janelle Brown, "The Music Revolution Will Not Be Digitized," Salon.com, 1 June 2001, available at link #42. See also Jon Healey, "Online Music Services Besieged," /{Los Angeles Times,}/ 28 May 2001. }~ The claim here, as well, was that the VC should have recognized the right of the content industry to control how the industry should develop. They should be held personally liable for funding a company whose business turned out to be beyond the law. Here again, the aim of the lawsuit is transparent: Any VC now recognizes that if you fund a company whose business is not approved of by the dinosaurs, you are at risk not just in the marketplace, but in the courtroom as well. Your investment buys you not only a company, it also buys you a lawsuit. So extreme has the environment become that even car manufacturers are afraid of technologies that touch content. In an article in /{Business 2.0}/, Rafe Needleman describes a discussion with BMW:
+
+_1 I asked why, with all the storage capacity and computer power in the car, there was no way to play MP3 files. I was told that BMW engineers in Germany had rigged a new vehicle to play MP3s via the car's built-in sound system, but that the company's marketing and legal departments weren't comfortable with pushing this forward for release stateside. Even today, no new cars are sold in the United States with bona fide MP3 players. ...~{ Rafe Needleman, "Driving in Cars with MP3s," /{Business 2.0,}/ 16 June 2003, available at link #43. I am grateful to Dr. Mohammad Al-Ubaydli for this example. }~
+
+This is the world of the mafia - filled with "your money or your life" offers, governed in the end not by courts but by the threats that the law empowers copyright holders to exercise. It is a system that will obviously and necessarily stifle new innovation. It is hard enough to start a company. It is impossibly hard if that company is constantly threatened by litigation.
+
+The point is not that businesses should have a right to start illegal enterprises. The point is the definition of "illegal." The law is a mess of uncertainty. We have no good way to know how it should apply to new technologies. Yet by reversing our tradition of judicial deference, and by embracing the astonishingly high penalties that copyright law imposes, that uncertainty now yields a reality which is far more conservative than is right. If the law imposed the death penalty for parking tickets, we'd not only have fewer parking tickets, we'd also have much less driving. The same principle applies to innovation. If innovation is constantly checked by this uncertain and unlimited liability, we will have much less vibrant innovation and much less creativity.
+
+The point is directly parallel to the crunchy-lefty point about fair use. Whatever the "real" law is, realism about the effect of law in both contexts is the same. This wildly punitive system of regulation will systematically stifle creativity and innovation. It will protect some industries and some creators, but it will harm industry and creativity generally. Free market and free culture depend upon vibrant competition. Yet the effect of the law today is to stifle just this kind of competition. The effect is to produce an overregulated culture, just as the effect of too much control in the market is to produce an overregulated market.
+
+The building of a permission culture, rather than a free culture, is the first important way in which the changes I have described will burden innovation. A permission culture means a lawyer's culture - a culture in which the ability to create requires a call to your lawyer. Again, I am not antilawyer, at least when they're kept in their proper place. I am certainly not antilaw. But our profession has lost the sense of its limits. And leaders in our profession have lost an appreciation of the high costs that our profession imposes upon others. The inefficiency of the law is an embarrassment to our tradition. And while I believe our profession should therefore do everything it can to make the law more efficient, it should at least do everything it can to limit the reach of the law where the law is not doing any good. The transaction costs buried within a permission culture are enough to bury a wide range of creativity. Someone needs to do a lot of justifying to justify that result.
+
+The uncertainty of the law is one burden on innovation. There is a second burden that operates more directly. This is the effort by many in the content industry to use the law to directly regulate the technology of the Internet so that it better protects their content.
+
+The motivation for this response is obvious. The Internet enables the efficient spread of content. That efficiency is a feature of the Internet's design. But from the perspective of the content industry, this feature is a "bug." The efficient spread of content means that content distributors have a harder time controlling the distribution of content. One obvious response to this efficiency is thus to make the Internet less efficient. If the Internet enables "piracy," then, this response says, we should break the kneecaps of the Internet.
+
+The examples of this form of legislation are many. At the urging of the content industry, some in Congress have threatened legislation that would require computers to determine whether the content they access is protected or not, and to disable the spread of protected content.~{ "Copyright and Digital Media in a Post-Napster World," GartnerG2 and the Berkman Center for Internet and Society at Harvard Law School (2003), 33-35, available at link #44. }~ Congress has already launched proceedings to explore a mandatory "broadcast flag" that would be required on any device capable of transmitting digital video (i.e., a computer), and that would disable the copying of any content that is marked with a broadcast flag. Other members of Congress have proposed immunizing content providers from liability for technology they might deploy that would hunt down copyright violators and disable their machines.~{ GartnerG2, 26-27. }~
+
+In one sense, these solutions seem sensible. If the problem is the code, why not regulate the code to remove the problem? But any regulation of technical infrastructure will always be tuned to the particular technology of the day. It will impose significant burdens and costs on the technology, but will likely be eclipsed by advances around exactly those requirements.
+
+In March 2002, a broad coalition of technology companies, led by Intel, tried to get Congress to see the harm that such legislation would impose.~{ See David McGuire, "Tech Execs Square Off Over Piracy," Newsbytes, 28 February 2002 (Entertainment). }~ Their argument was obviously not that copyright should not be protected. Instead, they argued, any protection should not do more harm than good.
+
+There is one more obvious way in which this war has harmed innovation - again, a story that will be quite familiar to the free market crowd.
+
+Copyright may be property, but like all property, it is also a form of regulation. It is a regulation that benefits some and harms others. When done right, it benefits creators and harms leeches. When done wrong, it is regulation the powerful use to defeat competitors.
+
+As I described in chapter 10, despite this feature of copyright as regulation, and subject to important qualifications outlined by Jessica Litman in her book /{Digital Copyright}/,~{ Jessica Litman, /{Digital Copyright}/ (Amherst, N.Y.: Prometheus Books, 2001). }~ overall this history of copyright is not bad. As chapter 10 details, when new technologies have come along, Congress has struck a balance to assure that the new is protected from the old. Compulsory, or statutory, licenses have been one part of that strategy. Free use (as in the case of the VCR) has been another.
+
+But that pattern of deference to new technologies has now changed with the rise of the Internet. Rather than striking a balance between the claims of a new technology and the legitimate rights of content creators, both the courts and Congress have imposed legal restrictions that will have the effect of smothering the new to benefit the old.
+
+The response by the courts has been fairly universal.~{ The only circuit court exception is found in /{Recording Industry Association of America (RIAA)}/ v. /{Diamond Multimedia Systems,}/ 180 F. 3d 1072 (9th Cir. 1999). There the court of appeals for the Ninth Circuit reasoned that makers of a portable MP3 player were not liable for contributory copyright infringement for a device that is unable to record or redistribute music (a device whose only copying function is to render portable a music file already stored on a user's hard drive). At the district court level, the only exception is found in /{Metro-Goldwyn-Mayer Studios, Inc.}/ v. /{Grokster, Ltd.,}/ 259 F. Supp. 2d 1029 (C.D. Cal., 2003), where the court found the link between the distributor and any given user's conduct too attenuated to make the distributor liable for contributory or vicarious infringement liability. }~ It has been mirrored in the responses threatened and actually implemented by Congress. I won't catalog all of those responses here.~{ For example, in July 2002, Representative Howard Berman introduced the Peer-to-Peer Piracy Prevention Act (H.R. 5211), which would immunize copyright holders from liability for damage done to computers when the copyright holders use technology to stop copyright infringement. In August 2002, Representative Billy Tauzin introduced a bill to mandate that technologies capable of rebroadcasting digital copies of films broadcast on TV (i.e., computers) respect a "broadcast flag" that would disable copying of that content. And in March of the same year, Senator Fritz Hollings introduced the Consumer Broadband and Digital Television Promotion Act, which mandated copyright protection technology in all digital media devices. See GartnerG2, "Copyright and Digital Media in a Post-Napster World," 27 June 2003, 33-34, available at link #44. }~ But there is one example that captures the flavor of them all. This is the story of the demise of Internet radio.
+
+As I described in chapter 4, when a radio station plays a song, the recording artist doesn't get paid for that "radio performance" unless he or she is also the composer. So, for example, if Marilyn Monroe had recorded a version of "Happy Birthday" - to memorialize her famous performance before President Kennedy at Madison Square Garden - then whenever that recording was played on the radio, the current copyright owners of "Happy Birthday" would get some money, whereas Marilyn Monroe would not.
+
+The reasoning behind this balance struck by Congress makes some sense. The justification was that radio was a kind of advertising. The recording artist thus benefited because by playing her music, the radio station was making it more likely that her records would be purchased. Thus, the recording artist got something, even if only indirectly. Probably this reasoning had less to do with the result than with the power of radio stations: Their lobbyists were quite good at stopping any efforts to get Congress to require compensation to the recording artists.
+
+Enter Internet radio. Like regular radio, Internet radio is a technology to stream content from a broadcaster to a listener. The broadcast travels across the Internet, not across the ether of radio spectrum. Thus, I can "tune in" to an Internet radio station in Berlin while sitting in San Francisco, even though there's no way for me to tune in to a regular radio station much beyond the San Francisco metropolitan area.
+
+This feature of the architecture of Internet radio means that there are potentially an unlimited number of radio stations that a user could tune in to using her computer, whereas under the existing architecture for broadcast radio, there is an obvious limit to the number of broadcasters and clear broadcast frequencies. Internet radio could therefore be more competitive than regular radio; it could provide a wider range of selections. And because the potential audience for Internet radio is the whole world, niche stations could easily develop and market their content to a relatively large number of users worldwide. According to some estimates, more than eighty million users worldwide have tuned in to this new form of radio.
+
+Internet radio is thus to radio what FM was to AM. It is an improvement potentially vastly more significant than the FM improvement over AM, since not only is the technology better, so, too, is the competition. Indeed, there is a direct parallel between the fight to establish FM radio and the fight to protect Internet radio. As one author describes Howard Armstrong's struggle to enable FM radio,
+
+_1 An almost unlimited number of FM stations was possible in the shortwaves, thus ending the unnatural restrictions imposed on radio in the crowded longwaves. If FM were freely developed, the number of stations would be limited only by economics and competition rather than by technical restrictions. ... Armstrong likened the situation that had grown up in radio to that following the invention of the printing press, when governments and ruling interests attempted to control this new instrument of mass communications by imposing restrictive licenses on it. This tyranny was broken only when it became possible for men freely to acquire printing presses and freely to run them. FM in this sense was as great an invention as the printing presses, for it gave radio the opportunity to strike off its shackles.~{ Lessing, 239. }~
+
+This potential for FM radio was never realized - not because Armstrong was wrong about the technology, but because he underestimated the power of "vested interests, habits, customs and legislation"~{ Ibid., 229. }~ to retard the growth of this competing technology.
+
+Now the very same claim could be made about Internet radio. For again, there is no technical limitation that could restrict the number of Internet radio stations. The only restrictions on Internet radio are those imposed by the law. Copyright law is one such law. So the first question we should ask is, what copyright rules would govern Internet radio?
+
+But here the power of the lobbyists is reversed. Internet radio is a new industry. The recording artists, on the other hand, have a very powerful lobby, the RIAA. Thus when Congress considered the phenomenon of Internet radio in 1995, the lobbyists had primed Congress to adopt a different rule for Internet radio than the rule that applies to terrestrial radio. While terrestrial radio does not have to pay our hypothetical Marilyn Monroe when it plays her hypothetical recording of "Happy Birthday" on the air, /{Internet radio does}/. Not only is the law not neutral toward Internet radio - the law actually burdens Internet radio more than it burdens terrestrial radio.
+
+This financial burden is not slight. As Harvard law professor William Fisher estimates, if an Internet radio station distributed ad-free popular music to (on average) ten thousand listeners, twenty-four hours a day, the total artist fees that radio station would owe would be over $1 million a year.~{ This example was derived from fees set by the original Copyright Arbitration Royalty Panel (CARP) proceedings, and is drawn from an example offered by Professor William Fisher. Conference Proceedings, iLaw (Stanford), 3 July 2003, on file with author. Professors Fisher and Zittrain submitted testimony in the CARP proceeding that was ultimately rejected. See Jonathan Zittrain, Digital Performance Right in Sound Recordings and Ephemeral Recordings, Docket No. 2000-9, CARP DTRA 1 and 2, available at link #45. For an excellent analysis making a similar point, see Randal C. Picker, "Copyright as Entry Policy: The Case of Digital Distribution," /{Antitrust Bulletin}/ (Summer/Fall 2002): 461: "This was not confusion, these are just old-fashioned entry barriers. Analog radio stations are protected from digital entrants, reducing entry in radio and diversity. Yes, this is done in the name of getting royalties to copyright holders, but, absent the play of powerful interests, that could have been done in a media-neutral way." }~ A regular radio station broadcasting the same content would pay no equivalent fee.
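
For a sense of how an estimate on this order arises, the arithmetic can be sketched under explicit assumptions. The per-performance rate below approximates the original CARP proposal of $0.0007 per song, per listener; the songs-per-hour figure is an illustrative assumption, not a number drawn from the proceeding itself:

```python
# Back-of-the-envelope webcaster royalty estimate.
# ASSUMPTIONS (not figures from the CARP decision itself):
#   - $0.0007 per song, per listener (approximate original CARP rate)
#   - 15 songs per hour of ad-free popular music

LISTENERS = 10_000              # average simultaneous listeners
HOURS_PER_DAY = 24
SONGS_PER_HOUR = 15             # assumed average for a pop-music stream
RATE_PER_PERFORMANCE = 0.0007   # assumed per-song, per-listener fee (dollars)

performances_per_year = LISTENERS * HOURS_PER_DAY * 365 * SONGS_PER_HOUR
annual_fee = performances_per_year * RATE_PER_PERFORMANCE

print(f"performances per year: {performances_per_year:,}")
print(f"estimated annual fee: ${annual_fee:,.0f}")  # roughly $920,000
```

Under these assumptions the station streams about 1.3 billion individual "performances" a year, which is what pushes the fee toward the $1 million figure.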
+
+The burden is not financial only. Under the original rules that were proposed, an Internet radio station (but not a terrestrial radio station) would have to collect the following data from /{every listening transaction}/:
+
+_1 1. name of the service;
+
+_1 2. channel of the program (AM/FM stations use station ID);
+
+_1 3. type of program (archived/looped/live);
+
+_1 4. date of transmission;
+
+_1 5. time of transmission;
+
+_1 6. time zone of origination of transmission;
+
+_1 7. numeric designation of the place of the sound recording within the program;
+
+_1 8. duration of transmission (to nearest second);
+
+_1 9. sound recording title;
+
+_1 10. ISRC code of the recording;
+
+_1 11. release year of the album per copyright notice and in the case of compilation albums, the release year of the album and copyright date of the track;
+
+_1 12. featured recording artist;
+
+_1 13. retail album title;
+
+_1 14. recording label;
+
+_1 15. UPC code of the retail album;
+
+_1 16. catalog number;
+
+_1 17. copyright owner information;
+
+_1 18. musical genre of the channel or program (station format);
+
+_1 19. name of the service or entity;
+
+_1 20. channel or program;
+
+_1 21. date and time that the user logged in (in the user's time zone);
+
+_1 22. date and time that the user logged out (in the user's time zone);
+
+_1 23. time zone where the signal was received (user);
+
+_1 24. Unique User identifier;
+
+_1 25. the country in which the user received the transmissions.
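
To make the scale of this per-transaction reporting burden concrete, here is a minimal sketch of one such record as a data structure. The field names and values are illustrative inventions (and only a subset of the twenty-five required fields is shown), not the regulation's own text:

```python
# Hedged sketch: one "listening transaction" record per song, per listener.
# Field names and sample values are hypothetical, for illustration only.
from dataclasses import dataclass, asdict

@dataclass
class ListeningTransaction:
    service_name: str
    program_channel: str
    program_type: str          # "archived", "looped", or "live"
    transmission_date: str
    transmission_time: str
    recording_title: str
    isrc_code: str
    featured_artist: str
    album_title: str
    label: str
    unique_user_id: str
    user_country: str
    # ...plus roughly a dozen more fields under the original proposal

record = ListeningTransaction(
    service_name="ExampleCaster",        # hypothetical webcaster
    program_channel="jazz-1",
    program_type="live",
    transmission_date="2003-07-03",
    transmission_time="14:05:00",
    recording_title="Happy Birthday",
    isrc_code="US-XXX-00-00001",         # placeholder ISRC
    featured_artist="Marilyn Monroe",
    album_title="(hypothetical)",
    label="(hypothetical)",
    unique_user_id="user-42",
    user_country="US",
)
print(len(asdict(record)), "fields captured for one song, one listener")
```

A station with ten thousand listeners would generate one such record for every song heard by every listener, every hour of the day.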
+
+The Librarian of Congress eventually suspended these reporting requirements, pending further study. And he also changed the original rates set by the arbitration panel charged with setting rates. But the basic difference between Internet radio and terrestrial radio remains: Internet radio has to pay a /{type of copyright fee}/ that terrestrial radio does not.
+
+Why? What justifies this difference? Was there any study of the economic consequences from Internet radio that would justify these differences? Was the motive to protect artists against piracy?
+
+In a rare bit of candor, one RIAA expert admitted what seemed obvious to everyone at the time. As Alex Alben, vice president for Public Policy at Real Networks, told me,
+
+_1 The RIAA, which was representing the record labels, presented some testimony about what they thought a willing buyer would pay to a willing seller, and it was much higher. It was ten times higher than what radio stations pay to perform the same songs for the same period of time. And so the attorneys representing the webcasters asked the RIAA, ... "How do you come up with a rate that's so much higher? Why is it worth more than radio? Because here we have hundreds of thousands of webcasters who want to pay, and that should establish the market rate, and if you set the rate so high, you're going to drive the small webcasters out of business. ..."
+
+_1 And the RIAA experts said, "Well, we don't really model this as an industry with thousands of webcasters, /{we think it should be an industry with, you know, five or seven big players who can pay a high rate and it's a stable, predictable market.}/" (Emphasis added.)
+
+Translation: The aim is to use the law to eliminate competition, so that this platform of potentially immense competition, which would cause the diversity and range of content available to explode, would not cause pain to the dinosaurs of old. There is no one, on either the right or the left, who should endorse this use of the law. And yet there is practically no one, on either the right or the left, who is doing anything effective to prevent it.
+
+2~ Corrupting Citizens
+
+Overregulation stifles creativity. It smothers innovation. It gives dinosaurs a veto over the future. It wastes the extraordinary opportunity for a democratic creativity that digital technology enables.
+
+In addition to these important harms, there is one more that was important to our forebears, but seems forgotten today. Overregulation corrupts citizens and weakens the rule of law.
+
+The war that is being waged today is a war of prohibition. As with every war of prohibition, it is targeted against the behavior of a very large number of citizens. According to /{The New York Times}/, 43 million Americans downloaded music in May 2002.~{ Mike Graziano and Lee Rainie, "The Music Downloading Deluge," Pew Internet and American Life Project (24 April 2001), available at link #46. The Pew Internet and American Life Project reported that 37 million Americans had downloaded music files from the Internet by early 2001. }~ According to the RIAA, the behavior of those 43 million Americans is a felony. We thus have a set of rules that transform 20 percent of America into criminals. As the RIAA launches lawsuits against not only the Napsters and Kazaas of the world, but against students building search engines, and increasingly against ordinary users downloading content, the technologies for sharing will advance to further protect and hide illegal use. It is an arms race or a civil war, with the extremes of one side inviting a more extreme response by the other.
+
+The content industry's tactics exploit the failings of the American legal system. When the RIAA brought suit against Jesse Jordan, it knew that in Jordan it had found a scapegoat, not a defendant. The threat of having to pay either all the money in the world in damages ($15,000,000) or almost all the money in the world to defend against paying all the money in the world in damages ($250,000 in legal fees) led Jordan to choose to pay all the money he had in the world ($12,000) to make the suit go away. The same strategy animates the RIAA's suits against individual users. In September 2003, the RIAA sued 261 individuals - including a twelve-year-old girl living in public housing and a seventy-year-old man who had no idea what file sharing was.~{ Alex Pham, "The Labels Strike Back: N.Y. Girl Settles RIAA Case," /{Los Angeles Times,}/ 10 September 2003, Business. }~ As these scapegoats discovered, it will always cost more to defend against these suits than it would cost to simply settle. (The twelve-year-old, for example, like Jesse Jordan, paid her life savings of $2,000 to settle the case.) Our law is an awful system for defending rights. It is an embarrassment to our tradition. And the consequence of our law as it is, is that those with the power can use the law to quash any rights they oppose.
+
+Wars of prohibition are nothing new in America. This one is just something more extreme than anything we've seen before. We experimented with alcohol prohibition, at a time when per capita consumption of alcohol was 1.5 gallons per year. The war against drinking initially reduced that consumption to just 30 percent of its preprohibition levels, but by the end of prohibition, consumption was up to 70 percent of the preprohibition level. Americans were drinking just about as much, but now, a vast number were criminals.~{ Jeffrey A. Miron and Jeffrey Zwiebel, "Alcohol Consumption During Prohibition," /{American Economic Review}/ 81, no. 2 (1991): 242. }~ We have launched a war on drugs aimed at reducing the consumption of regulated narcotics that 7 percent of Americans (or 16 million) now use.~{ National Drug Control Policy: Hearing Before the House Government Reform Committee, 108th Cong., 1st sess. (5 March 2003) (statement of John P. Walters, director of National Drug Control Policy). }~ That is a drop from the high (so to speak) in 1979 of 14 percent of the population. We regulate automobiles to the point where the vast majority of Americans violate the law every day. We run such a complex tax system that a majority of cash businesses regularly cheat.~{ See James Andreoni, Brian Erard, and Jonathon Feinstein, "Tax Compliance," /{Journal of Economic Literature}/ 36 (1998): 818 (survey of compliance literature). }~ We pride ourselves on our "free society," but an endless array of ordinary behavior is regulated within our society. And as a result, a huge proportion of Americans regularly violate at least some law.
+
+This state of affairs is not without consequence. It is a particularly salient issue for teachers like me, whose job it is to teach law students about the importance of "ethics." As my colleague Charlie Nesson told a class at Stanford, each year law schools admit thousands of students who have illegally downloaded music, illegally consumed alcohol and sometimes drugs, illegally worked without paying taxes, illegally driven cars. These are kids for whom behaving illegally is increasingly the norm. And then we, as law professors, are supposed to teach them how to behave ethically - how to say no to bribes, or keep client funds separate, or honor a demand to disclose a document that will mean that your case is over. Generations of Americans - more significantly in some parts of America than in others, but still, everywhere in America today - can't live their lives both normally and legally, since "normally" entails a certain degree of illegality.
+
+The response to this general illegality is either to enforce the law more severely or to change the law. We, as a society, have to learn how to make that choice more rationally. Whether a law makes sense depends, in part, at least, upon whether the costs of the law, both intended and collateral, outweigh the benefits. If the costs, intended and collateral, do outweigh the benefits, then the law ought to be changed. Alternatively, if the costs of the existing system are much greater than the costs of an alternative, then we have a good reason to consider the alternative.
+
+My point is not the idiotic one: Just because people violate a law, we should therefore repeal it. Obviously, we could reduce murder statistics dramatically by legalizing murder on Wednesdays and Fridays. But that wouldn't make any sense, since murder is wrong every day of the week. A society is right to ban murder always and everywhere.
+
+My point is instead one that democracies understood for generations, but that we recently have learned to forget. The rule of law depends upon people obeying the law. The more often, and more repeatedly, we as citizens experience violating the law, the less we respect the law. Obviously, in most cases, the important issue is the law, not respect for the law. I don't care whether the rapist respects the law or not; I want to catch and incarcerate the rapist. But I do care whether my students respect the law. And I do care if the rules of law sow increasing disrespect because of the extreme of regulation they impose. Twenty million Americans have come of age since the Internet introduced this different idea of "sharing." We need to be able to call these twenty million Americans "citizens," not "felons."
+
+When at least forty-three million citizens download content from the Internet, and when they use tools to combine that content in ways unauthorized by copyright holders, the first question we should be asking is not how best to involve the FBI. The first question should be whether this particular prohibition is really necessary in order to achieve the proper ends that copyright law serves. Is there another way to assure that artists get paid without transforming forty-three million Americans into felons? If there is, does it make sense to transform America into a nation of felons anyway?
+
+This abstract point can be made more clear with a particular example.
+
+We all own CDs. Many of us still own phonograph records. These pieces of plastic encode music that in a certain sense we have bought. The law protects our right to buy and sell that plastic: It is not a copyright infringement for me to sell all my classical records at a used record store and buy jazz records to replace them. That "use" of the recordings is free.
+
+But as the MP3 craze has demonstrated, there is another use of phonograph records that is effectively free. Because these recordings were made without copy-protection technologies, I am "free" to copy, or "rip," music from my records onto a computer hard disk. Indeed, Apple Corporation went so far as to suggest that "freedom" was a right: In a series of commercials, Apple endorsed the "Rip, Mix, Burn" capacities of digital technologies.
+
+This "use" of my records is certainly valuable. I have begun a large process at home of ripping all of my and my wife's CDs, and storing them in one archive. Then, using Apple's iTunes, or a wonderful program called Andromeda, we can build different play lists of our music: Bach, Baroque, Love Songs, Love Songs of Significant Others - the potential is endless. And by reducing the costs of mixing play lists, these technologies help build a creativity with play lists that is itself independently valuable. Compilations of songs are creative and meaningful in their own right.
+
+This use is enabled by unprotected media - either CDs or records. But unprotected media also enable file sharing. File sharing threatens (or so the content industry believes) the ability of creators to earn a fair return from their creativity. And thus, many are beginning to experiment with technologies to eliminate unprotected media. These technologies, for example, would enable CDs that could not be ripped. Or they might enable spy programs to identify ripped content on people's machines.
+
+If these technologies took off, then the building of large archives of your own music would become quite difficult. You might hang in hacker circles, and get technology to disable the technologies that protect the content. Trading in those technologies is illegal, but maybe that doesn't bother you much. In any case, for the vast majority of people, these protection technologies would effectively destroy the archiving use of CDs. The technology, in other words, would force us all back to the world where we either listened to music by manipulating pieces of plastic or were part of a massively complex "digital rights management" system.
+
+If the only way to assure that artists get paid were the elimination of the ability to freely move content, then these technologies to interfere with the freedom to move content would be justifiable. But what if there were another way to assure that artists are paid, without locking down any content? What if, in other words, a different system could assure compensation to artists while also preserving the freedom to move content easily?
+
+My point just now is not to prove that there is such a system. I offer a version of such a system in the last chapter of this book. For now, the only point is the relatively uncontroversial one: If a different system achieved the same legitimate objectives that the existing copyright system achieved, but left consumers and creators much more free, then we'd have a very good reason to pursue this alternative - namely, freedom. The choice, in other words, would not be between property and piracy; the choice would be between different property systems and the freedoms each allowed.
+
+I believe there is a way to assure that artists are paid without turning forty- three million Americans into felons. But the salient feature of this alternative is that it would lead to a very different market for producing and distributing creativity. The dominant few, who today control the vast majority of the distribution of content in the world, would no longer exercise this extreme of control. Rather, they would go the way of the horse-drawn buggy.
+
+Except that this generation's buggy manufacturers have already saddled Congress, and are riding the law to protect themselves against this new form of competition. For them the choice is between forty-three million Americans as criminals and their own survival.
+
+It is understandable why they choose as they do. It is not understandable why we as a democracy continue to choose as we do. Jack Valenti is charming; but not so charming as to justify giving up a tradition as deep and important as our tradition of free culture.
+
+There's one more aspect to this corruption that is particularly important to civil liberties, and follows directly from any war of prohibition. As Electronic Frontier Foundation attorney Fred von Lohmann describes, this is the "collateral damage" that "arises whenever you turn a very large percentage of the population into criminals." This is the collateral damage to civil liberties generally.
+
+"If you can treat someone as a putative lawbreaker," von Lohmann explains,
+
+_1 then all of a sudden a lot of basic civil liberty protections evaporate to one degree or another. ... If you're a copyright infringer, how can you hope to have any privacy rights? If you're a copyright infringer, how can you hope to be secure against seizures of your computer? How can you hope to continue to receive Internet access? ... Our sensibilities change as soon as we think, "Oh, well, but that person's a criminal, a lawbreaker." Well, what this campaign against file sharing has done is turn a remarkable percentage of the American Internet-using population into "law-breakers."
+
+And the consequence of this transformation of the American public into criminals is that it becomes trivial, as a matter of due process, to effectively erase much of the privacy most would presume.
+
+Users of the Internet began to see this generally in 2003 as the RIAA launched its campaign to force Internet service providers to turn over the names of customers who the RIAA believed were violating copyright law. Verizon fought that demand and lost. With a simple request to a judge, and without any notice to the customer at all, the identity of an Internet user is revealed.
+
+The RIAA then expanded this campaign, by announcing a general strategy to sue individual users of the Internet who are alleged to have downloaded copyrighted music from file-sharing systems. But as we've seen, the potential damages from these suits are astronomical: If a family's computer is used to download a single CD's worth of music, the family could be liable for $2 million in damages. That didn't stop the RIAA from suing a number of these families, just as they had sued Jesse Jordan.~{ See Frank Ahrens, "RIAA's Lawsuits Meet Surprised Targets; Single Mother in Calif., 12-Year-Old Girl in N.Y. Among Defendants," /{Washington Post,}/ 10 September 2003, E1; Chris Cobbs, "Worried Parents Pull Plug on File 'Stealing'; With the Music Industry Cracking Down on File Swapping, Parents are Yanking Software from Home PCs to Avoid Being Sued," /{Orlando Sentinel Tribune,}/ 30 August 2003, C1; Jefferson Graham, "Recording Industry Sues Parents," /{USA Today,}/ 15 September 2003, 4D; John Schwartz, "She Says She's No Music Pirate. No Snoop Fan, Either," /{New York Times,}/ 25 September 2003, C1; Margo Varadi, "Is Brianna a Criminal?" /{Toronto Star,}/ 18 September 2003, P7. }~
+
+Even this understates the espionage that is being waged by the RIAA. A report from CNN late last summer described a strategy the RIAA had adopted to track Napster users.~{ See "Revealed: How RIAA Tracks Downloaders: Music Industry Discloses Some Methods Used," CNN.com, available at link #47. }~ Using a sophisticated hashing algorithm, the RIAA took what is in effect a fingerprint of every song in the Napster catalog. Any copy of one of those MP3s will have the same "fingerprint."
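
The exact-copy matching the report describes can be illustrated with a cryptographic hash. This is a minimal sketch of the idea only: real audio-fingerprinting systems are far more elaborate (surviving re-encoding and compression), and the RIAA's actual algorithm is not public:

```python
# Minimal sketch of exact-copy "fingerprinting" via a cryptographic hash.
# Any bit-identical copy of a file yields the same digest; changing even
# one byte yields a different one. (Real audio fingerprinting is more
# sophisticated; this only illustrates the matching idea in the text.)
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

original = b"...bytes of an MP3 file..."   # stand-in for real audio data
copy = bytes(original)                     # a bit-identical copy
altered = original + b"\x00"               # one extra byte breaks the match

assert fingerprint(original) == fingerprint(copy)
assert fingerprint(original) != fingerprint(altered)
print("identical copies share a fingerprint; altered files do not")
```

With a catalog of such digests, any exact copy of a cataloged MP3 can be identified wherever it travels, which is the sense in which every copy carries the same "fingerprint."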
+
+So imagine the following not-implausible scenario: Imagine a friend gives a CD to your daughter - a collection of songs just like the cassettes you used to make as a kid. You don't know, and neither does your daughter, where these songs came from. But she copies these songs onto her computer. She then takes her computer to college and connects it to a college network, and if the college network is "cooperating" with the RIAA's espionage, and she hasn't properly protected her content from the network (do you know how to do that yourself?), then the RIAA will be able to identify your daughter as a "criminal." And under the rules that universities are beginning to deploy,~{ See Jeff Adler, "Cambridge: On Campus, Pirates Are Not Penitent," /{Boston Globe,}/ 18 May 2003, City Weekly, 1; Frank Ahrens, "Four Students Sued over Music Sites; Industry Group Targets File Sharing at Colleges," /{Washington Post,}/ 4 April 2003, E1; Elizabeth Armstrong, "Students 'Rip, Mix, Burn' at Their Own Risk," /{Christian Science Monitor,}/ 2 September 2003, 20; Robert Becker and Angela Rozas, "Music Pirate Hunt Turns to Loyola; Two Students Names Are Handed Over; Lawsuit Possible," /{Chicago Tribune,}/ 16 July 2003, 1C; Beth Cox, "RIAA Trains Antipiracy Guns on Universities," /{Internet News,}/ 30 January 2003, available at link #48; Benny Evangelista, "Download Warning 101: Freshman Orientation This Fall to Include Record Industry Warnings Against File Sharing," /{San Francisco Chronicle,}/ 11 August 2003, E11; "Raid, Letters Are Weapons at Universities," /{USA Today,}/ 26 September 2000, 3D. }~ your daughter can lose the right to use the university's computer network. She can, in some cases, be expelled.
+
+Now, of course, she'll have the right to defend herself. You can hire a lawyer for her (at $300 per hour, if you're lucky), and she can plead that she didn't know anything about the source of the songs or that they came from Napster. And it may well be that the university believes her. But the university might not believe her. It might treat this "contraband" as presumptive of guilt. And as any number of college students have already learned, our presumptions about innocence disappear in the middle of wars of prohibition. This war is no different.
+
+Says von Lohmann,
+
+_1 So when we're talking about numbers like forty to sixty million Americans that are essentially copyright infringers, you create a situation where the civil liberties of those people are very much in peril in a general matter. [I don't] think [there is any] analog where you could randomly choose any person off the street and be confident that they were committing an unlawful act that could put them on the hook for potential felony liability or hundreds of millions of dollars of civil liability. Certainly we all speed, but speeding isn't the kind of an act for which we routinely forfeit civil liberties. Some people use drugs, and I think that's the closest analog, [but] many have noted that the war against drugs has eroded all of our civil liberties because it's treated so many Americans as criminals. Well, I think it's fair to say that file sharing is an order of magnitude larger number of Americans than drug use. ... If forty to sixty million Americans have become lawbreakers, then we're really on a slippery slope to lose a lot of civil liberties for all forty to sixty million of them.
+
+When forty to sixty million Americans are considered "criminals" under the law, and when the law could achieve the same objective - securing rights to authors - without these millions being considered "criminals," who is the villain? Americans or the law? Which is American, a constant war on our own people or a concerted effort through our democracy to change our law?
+
+:C~ BALANCES
+
+1~intro_balances [Intro]-#
+
+*{So here's}* the picture: You're standing at the side of the road. Your car is on fire. You are angry and upset because in part you helped start the fire. Now you don't know how to put it out. Next to you is a bucket, filled with gasoline. Obviously, gasoline won't put the fire out.
+
+As you ponder the mess, someone else comes along. In a panic, she grabs the bucket. Before you have a chance to tell her to stop - or before she understands just why she should stop - the bucket is in the air. The gasoline is about to hit the blazing car. And the fire that gasoline will ignite is about to ignite everything around.
+
+*{A war}* about copyright rages all around - and we're all focusing on the wrong thing. No doubt, current technologies threaten existing businesses. No doubt they may threaten artists. But technologies change. The industry and technologists have plenty of ways to use technology to protect themselves against the current threats of the Internet. This is a fire that if let alone would burn itself out.
+
+Yet policy makers are not willing to leave this fire to itself. Primed with plenty of lobbyists' money, they are keen to intervene to eliminate the problem they perceive. But the problem they perceive is not the real threat this culture faces. For while we watch this small fire in the corner, there is a massive change in the way culture is made that is happening all around.
+
+Somehow we have to find a way to turn attention to this more important and fundamental issue. Somehow we have to find a way to avoid pouring gasoline onto this fire.
+
+We have not found that way yet. Instead, we seem trapped in a simpler, binary view. However much many people push to frame this debate more broadly, it is the simple, binary view that remains. We rubberneck to look at the fire when we should be keeping our eyes on the road.
+
+This challenge has been my life these last few years. It has also been my failure. In the two chapters that follow, I describe one small brace of efforts, so far failed, to find a way to refocus this debate. We must understand these failures if we're to understand what success will require.
+
+1~ Chapter Thirteen: Eldred
+
+In 1995, a father was frustrated that his daughters didn't seem to like Hawthorne. No doubt there was more than one such father, but at least one did something about it. Eric Eldred, a retired computer programmer living in New Hampshire, decided to put Hawthorne on the Web. An electronic version, Eldred thought, with links to pictures and explanatory text, would make this nineteenth-century author's work come alive.
+
+It didn't work - at least for his daughters. They didn't find Hawthorne any more interesting than before. But Eldred's experiment gave birth to a hobby, and his hobby begat a cause: Eldred would build a library of public domain works by scanning these works and making them available for free.
+
+Eldred's library was not simply a copy of certain public domain works, though even a copy would have been of great value to people across the world who can't get access to printed versions of these works. Instead, Eldred was producing derivative works from these public domain works. Just as Disney turned Grimm into stories more accessible to the twentieth century, Eldred transformed Hawthorne, and many others, into a form more accessible - technically accessible - today.
+
+Eldred's freedom to do this with Hawthorne's work grew from the same source as Disney's. Hawthorne's /{Scarlet Letter}/ had passed into the public domain in 1907. It was free for anyone to take without the permission of the Hawthorne estate or anyone else. Some, such as Dover Press and Penguin Classics, take works from the public domain and produce printed editions, which they sell in bookstores across the country. Others, such as Disney, take these stories and turn them into animated cartoons, sometimes successfully (/{Cinderella}/), sometimes not (/{The Hunchback of Notre Dame}/, /{Treasure Planet}/). These are all commercial publications of public domain works.
+
+The Internet created the possibility of noncommercial publications of public domain works. Eldred's is just one example. There are literally thousands of others. Hundreds of thousands from across the world have discovered this platform of expression and now use it to share works that are, by law, free for the taking. This has produced what we might call the "noncommercial publishing industry," which before the Internet was limited to people with large egos or with political or social causes. But with the Internet, it includes a wide range of individuals and groups dedicated to spreading culture generally.~{ There's a parallel here with pornography that is a bit hard to describe, but it's a strong one. One phenomenon that the Internet created was a world of noncommercial pornographers - people who were distributing porn but were not making money directly or indirectly from that distribution. Such a class didn't exist before the Internet came into being because the costs of distributing porn were so high. Yet this new class of distributors got special attention in the Supreme Court, when the Court struck down the Communications Decency Act of 1996. It was partly because of the burden on noncommercial speakers that the statute was found to exceed Congress's power. The same point could have been made about noncommercial publishers after the advent of the Internet. The Eric Eldreds of the world before the Internet were extremely few. Yet one would think it at least as important to protect the Eldreds of the world as to protect noncommercial pornographers. }~
+
+As I said, Eldred lives in New Hampshire. In 1998, Robert Frost's collection of poems /{New Hampshire}/ was slated to pass into the public domain. Eldred wanted to post that collection in his free public library. But Congress got in the way. As I described in chapter 10, in 1998, for the eleventh time in forty years, Congress extended the terms of existing copyrights - this time by twenty years. Eldred would not be free to add any works more recent than 1923 to his collection until 2019. Indeed, no copyrighted work would pass into the public domain until that year (and not even then, if Congress extends the term again). By contrast, in the same period, more than 1 million patents will pass into the public domain.
+
+This was the Sonny Bono Copyright Term Extension Act (CTEA), enacted in memory of the congressman and former musician Sonny Bono, who, his widow, Mary Bono, says, believed that "copyrights should be forever."~{ The full text is: "Sonny [Bono] wanted the term of copyright protection to last forever. I am informed by staff that such a change would violate the Constitution. I invite all of you to work with me to strengthen our copyright laws in all of the ways available to us. As you know, there is also Jack Valenti's proposal for a term to last forever less one day. Perhaps the Committee may look at that next Congress," 144 Cong. Rec. H9946, 9951-2 (October 7, 1998). }~
+
+Eldred decided to fight this law. He first resolved to fight it through civil disobedience. In a series of interviews, Eldred announced that he would publish as planned, CTEA notwithstanding. But because of a second law passed in 1998, the NET (No Electronic Theft) Act, his act of publishing would make Eldred a felon - whether or not anyone complained. This was a dangerous strategy for a disabled programmer to undertake.
+
+It was here that I became involved in Eldred's battle. I was a constitutional scholar whose first passion was constitutional interpretation. And though constitutional law courses never focus upon the Progress Clause of the Constitution, it had always struck me as importantly different. As you know, the Constitution says,
+
+_1 Congress has the power to promote the Progress of Science ... by securing for limited Times to Authors ... exclusive Right to their ... Writings. ...
+
+As I've described, this clause is unique within the power-granting clause of Article I, section 8 of our Constitution. Every other clause granting power to Congress simply says Congress has the power to do something - for example, to regulate "commerce among the several states" or "declare War." But here, the "something" is something quite specific - to "promote ... Progress" - through means that are also specific - by "securing" "exclusive Rights" (i.e., copyrights) "for limited Times."
+
+In the past forty years, Congress has gotten into the practice of extending existing terms of copyright protection. What puzzled me about this was, if Congress has the power to extend existing terms, then the Constitution's requirement that terms be "limited" will have no practical effect. If every time a copyright is about to expire, Congress has the power to extend its term, then Congress can achieve what the Constitution plainly forbids - perpetual terms "on the installment plan," as Professor Peter Jaszi so nicely put it.
+
+As an academic, my first response was to hit the books. I remember sitting late at the office, scouring on-line databases for any serious consideration of the question. No one had ever challenged Congress's practice of extending existing terms. That failure may in part be why Congress seemed so untroubled in its habit. That, and the fact that the practice had become so lucrative for Congress. Congress knows that copyright owners will be willing to pay a great deal of money to see their copyright terms extended. And so Congress is quite happy to keep this gravy train going.
+
+For this is the core of the corruption in our present system of government. "Corruption" not in the sense that representatives are bribed. Rather, "corruption" in the sense that the system induces the beneficiaries of Congress's acts to raise and give money to Congress to induce it to act. There's only so much time; there's only so much Congress can do. Why not limit its actions to those things it must do - and those things that pay? Extending copyright terms pays.
+
+If that's not obvious to you, consider the following: Say you're one of the very few lucky copyright owners whose copyright continues to make money one hundred years after it was created. The Estate of Robert Frost is a good example. Frost died in 1963. His poetry continues to be extraordinarily valuable. Thus the Robert Frost estate benefits greatly from any extension of copyright, since no publisher would pay the estate any money if the poems Frost wrote could be published by anyone for free.
+
+So imagine the Robert Frost estate is earning $100,000 a year from three of Frost's poems. And imagine the copyright for those poems is about to expire. You sit on the board of the Robert Frost estate. Your financial adviser comes to your board meeting with a very grim report:
+
+"Next year," the adviser announces, "our copyrights in works A, B, and C will expire. That means that after next year, we will no longer be receiving the annual royalty check of $100,000 from the publishers of those works.
+
+"There's a proposal in Congress, however," she continues, "that could change this. A few congressmen are floating a bill to extend the terms of copyright by twenty years. That bill would be extraordinarily valuable to us. So we should hope this bill passes."
+
+"Hope?" a fellow board member says. "Can't we be doing something about it?"
+
+"Well, obviously, yes," the adviser responds. "We could contribute to the campaigns of a number of representatives to try to assure that they support the bill."
+
+You hate politics. You hate contributing to campaigns. So you want to know whether this disgusting practice is worth it. "How much would we get if this extension were passed?" you ask the adviser. "How much is it worth?"
+
+"Well," the adviser says, "if you're confident that you will continue to get at least $100,000 a year from these copyrights, and you use the 'discount rate' that we use to evaluate estate investments (6 percent), then this law would be worth $1,146,000 to the estate."
+
+You're a bit shocked by the number, but you quickly come to the correct conclusion:
+
+"So you're saying it would be worth it for us to pay more than $1,000,000 in campaign contributions if we were confident those contributions would assure that the bill was passed?"
+
+"Absolutely," the adviser responds. "It is worth it to you to contribute up to the 'present value' of the income you expect from these copyrights. Which for us means over $1,000,000."
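+The adviser's figure can be checked with the standard present-value formula for a level annuity: $100,000 a year for the twenty years of the extension, discounted at 6 percent. A minimal sketch (the function name is illustrative, not from the text):

```python
# Present value of the hypothetical Frost estate's 20-year copyright
# extension: $100,000 a year, discounted at the estate's 6 percent rate.

def annuity_pv(payment: float, rate: float, years: int) -> float:
    """Present value of a level payment received annually for `years` years."""
    return payment * (1 - (1 + rate) ** -years) / rate

value = annuity_pv(100_000, 0.06, 20)
print(round(value))  # -> 1146992, matching the adviser's "$1,146,000"
```

+Note that without the extension the stream stops, so the extension's entire value is this sum; that is why contributions up to roughly $1,000,000 remain rational for the estate.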
+
+You quickly get the point - you as the member of the board and, I trust, you the reader. Each time copyrights are about to expire, every beneficiary in the position of the Robert Frost estate faces the same choice: If they can contribute to get a law passed to extend copyrights, they will benefit greatly from that extension. And so each time copyrights are about to expire, there is a massive amount of lobbying to get the copyright term extended.
+
+Thus a congressional perpetual motion machine: So long as legislation can be bought (albeit indirectly), there will be all the incentive in the world to buy further extensions of copyright.
+
+In the lobbying that led to the passage of the Sonny Bono Copyright Term Extension Act, this "theory" about incentives was proved real. Ten of the thirteen original sponsors of the act in the House received the maximum contribution from Disney's political action committee; in the Senate, eight of the twelve sponsors received contributions.~{ Associated Press, "Disney Lobbying for Copyright Extension No Mickey Mouse Effort; Congress OKs Bill Granting Creators 20 More Years," /{Chicago Tribune,}/ 17 October 1998, 22. }~ The RIAA and the MPAA are estimated to have spent over $1.5 million lobbying in the 1998 election cycle. They paid out more than $200,000 in campaign contributions.~{ See Nick Brown, "Fair Use No More?: Copyright in the Information Age," available at link #49. }~ Disney is estimated to have contributed more than $800,000 to reelection campaigns in the 1998 cycle.~{ Alan K. Ota, "Disney in Washington: The Mouse That Roars," /{Congressional Quarterly This Week,}/ 8 August 1990, available at link #50. }~
+
+Constitutional law is not oblivious to the obvious. Or at least, it need not be. So when I was considering Eldred's complaint, this reality about the never-ending incentives to increase the copyright term was central to my thinking. In my view, a pragmatic court committed to interpreting and applying the Constitution of our framers would see that if Congress has the power to extend existing terms, then there would be no effective constitutional requirement that terms be "limited." If they could extend it once, they would extend it again and again and again.
+
+It was also my judgment that /{this}/ Supreme Court would not allow Congress to extend existing terms. As anyone close to the Supreme Court's work knows, this Court has increasingly restricted the power of Congress when it has viewed Congress's actions as exceeding the power granted to it by the Constitution. Among constitutional scholars, the most famous example of this trend was the Supreme Court's decision in 1995 to strike down a law that banned the possession of guns near schools.
+
+Since 1937, the Supreme Court had interpreted Congress's granted powers very broadly; so, while the Constitution grants Congress the power to regulate only "commerce among the several states" (aka "interstate commerce"), the Supreme Court had interpreted that power to include the power to regulate any activity that merely affected interstate commerce.
+
+As the economy grew, this standard increasingly meant that there was no limit to Congress's power to regulate, since just about every activity, when considered on a national scale, affects interstate commerce. A Constitution designed to limit Congress's power was instead interpreted to impose no limit.
+
+The Supreme Court, under Chief Justice Rehnquist's command, changed that in /{United States v. Lopez}/. The government had argued that possessing guns near schools affected interstate commerce. Guns near schools increase crime, crime lowers property values, and so on. In the oral argument, the Chief Justice asked the government whether there was any activity that would not affect interstate commerce under the reasoning the government advanced. The government said there was not; if Congress says an activity affects interstate commerce, then that activity affects interstate commerce. The Supreme Court, the government said, was not in the position to second-guess Congress.
+
+"We pause to consider the implications of the government's arguments," the Chief Justice wrote.~{ /{United States}/ v. /{Lopez,}/ 514 U.S. 549, 564 (1995). }~ If anything Congress says is interstate commerce must therefore be considered interstate commerce, then there would be no limit to Congress's power. The decision in /{Lopez}/ was reaffirmed five years later in /{United States}/ v. /{Morrison}/.~{ /{United States}/ v. /{Morrison,}/ 529 U.S. 598 (2000). }~
+
+If a principle were at work here, then it should apply to the Progress Clause as much as the Commerce Clause.~{ If it is a principle about enumerated powers, then the principle carries from one enumerated power to another. The animating point in the context of the Commerce Clause was that the interpretation offered by the government would allow the government unending power to regulate commerce - the limitation to interstate commerce notwithstanding. The same point is true in the context of the Copyright Clause. Here, too, the government's interpretation would allow the government unending power to regulate copyrights - the limitation to "limited times" notwithstanding. }~ And if it is applied to the Progress Clause, the principle should yield the conclusion that Congress can't extend an existing term. If Congress could extend an existing term, then there would be no "stopping point" to Congress's power over terms, though the Constitution expressly states that there is such a limit. Thus, the same principle applied to the power to grant copyrights should entail that Congress is not allowed to extend the term of existing copyrights.
+
+/{If}/, that is, the principle announced in /{Lopez}/ stood for a principle. Many believed the decision in /{Lopez}/ stood for politics - a conservative Supreme Court, which believed in states' rights, using its power over Congress to advance its own personal political preferences. But I rejected that view of the Supreme Court's decision. Indeed, shortly after the decision, I wrote an article demonstrating the "fidelity" in such an interpretation of the Constitution. The idea that the Supreme Court decides cases based upon its politics struck me as extraordinarily boring. I was not going to devote my life to teaching constitutional law if these nine Justices were going to be petty politicians.
+
+Now let's pause for a moment to make sure we understand what the argument in /{Eldred}/ was not about. By insisting on the Constitution's limits to copyright, obviously Eldred was not endorsing piracy. Indeed, in an obvious sense, he was fighting a kind of piracy - piracy of the public domain. When Robert Frost wrote his work and when Walt Disney created Mickey Mouse, the maximum copyright term was just fifty-six years. Because of interim changes, Frost and Disney had already enjoyed a seventy-five-year monopoly for their work. They had gotten the benefit of the bargain that the Constitution envisions: In exchange for a monopoly protected for fifty-six years, they created new work. But now these entities were using their power - expressed through the power of lobbyists' money - to get another twenty-year dollop of monopoly. That twenty-year dollop would be taken from the public domain. Eric Eldred was fighting a piracy that affects us all.
+
+Some people view the public domain with contempt. In their brief before the Supreme Court, the Nashville Songwriters Association wrote that the public domain is nothing more than "legal piracy."~{ Brief of the Nashville Songwriters Association, /{Eldred}/ v. /{Ashcroft,}/ 537 U.S. 186 (2003) (No. 01-618), n.10, available at link #51. }~ But it is not piracy when the law allows it; and in our constitutional system, our law requires it. Some may not like the Constitution's requirements, but that doesn't make the Constitution a pirate's charter.
+
+As we've seen, our constitutional system requires limits on copyright as a way to assure that copyright holders do not too heavily influence the development and distribution of our culture. Yet, as Eric Eldred discovered, we have set up a system that assures that copyright terms will be repeatedly extended, and extended, and extended. We have created the perfect storm for the public domain. Copyrights have not expired, and will not expire, so long as Congress is free to be bought to extend them again.
+
+It is valuable copyrights that are responsible for terms being extended. Mickey Mouse and "Rhapsody in Blue." These works are too valuable for copyright owners to ignore. But the real harm to our society from copyright extensions is not that Mickey Mouse remains Disney's. Forget Mickey Mouse. Forget Robert Frost. Forget all the works from the 1920s and 1930s that have continuing commercial value. The real harm of term extension comes not from these famous works. The real harm is to the works that are not famous, not commercially exploited, and no longer available as a result.
+
+If you look at the work created in the first twenty years (1923 to 1942) affected by the Sonny Bono Copyright Term Extension Act, 2 percent of that work has any continuing commercial value. It was the copyright holders for that 2 percent who pushed the CTEA through. But the law and its effect were not limited to that 2 percent. The law extended the terms of copyright generally.~{ The figure of 2 percent is an extrapolation from the study by the Congressional Research Service, in light of the estimated renewal ranges. See Brief of Petitioners, /{Eldred}/ v. /{Ashcroft,}/ 7, available at link #52. }~
+
+Think practically about the consequence of this extension - practically, as a businessperson, and not as a lawyer eager for more legal work. In 1930, 10,047 books were published. In 2000, 174 of those books were still in print. Let's say you were Brewster Kahle, and you wanted to make available to the world in your iArchive project the remaining 9,873. What would you have to do?
+
+Well, first, you'd have to determine which of the 9,873 books were still under copyright. That requires going to a library (these data are not on-line) and paging through tomes of books, cross-checking the titles and authors of the 9,873 books with the copyright registration and renewal records for works published in 1930. That will produce a list of books still under copyright.
+
+Then for the books still under copyright, you would need to locate the current copyright owners. How would you do that?
+
+Most people think that there must be a list of these copyright owners somewhere. Practical people think this way. How could there be thousands and thousands of government monopolies without there being at least a list?
+
+But there is no list. There may be a name from 1930, and then in 1959, of the person who registered the copyright. But just think practically about how impossibly difficult it would be to track down thousands of such records - especially since the person who registered is not necessarily the current owner. And we're just talking about 1930!
+
+"But there isn't a list of who owns property generally," the apologists for the system respond. "Why should there be a list of copyright owners?"
+
+Well, actually, if you think about it, there /{are}/ plenty of lists of who owns what property. Think about deeds on houses, or titles to cars. And where there isn't a list, the code of real space is pretty good at suggesting who the owner of a bit of property is. (A swing set in your backyard is probably yours.) So formally or informally, we have a pretty good way to know who owns what tangible property.
+
+So: You walk down a street and see a house. You can know who owns the house by looking it up in the courthouse registry. If you see a car, there is ordinarily a license plate that will link the owner to the car. If you see a bunch of children's toys sitting on the front lawn of a house, it's fairly easy to determine who owns the toys. And if you happen to see a baseball lying in a gutter on the side of the road, look around for a second for some kids playing ball. If you don't see any kids, then okay: Here's a bit of property whose owner we can't easily determine. It is the exception that proves the rule: that we ordinarily know quite well who owns what property.
+
+Compare this story to intangible property. You go into a library. The library owns the books. But who owns the copyrights? As I've already described, there's no list of copyright owners. There are authors' names, of course, but their copyrights could have been assigned, or passed down in an estate like Grandma's old jewelry. To know who owns what, you would have to hire a private detective. The bottom line: The owner cannot easily be located. And in a regime like ours, in which it is a felony to use such property without the property owner's permission, the property isn't going to be used.
+
+The consequence with respect to old books is that they won't be digitized, and hence will simply rot away on shelves. But the consequence for other creative works is much more dire.
+
+Consider the story of Michael Agee, chairman of Hal Roach Studios, which owns the copyrights for the Laurel and Hardy films. Agee is a direct beneficiary of the Bono Act. The Laurel and Hardy films were made between 1921 and 1951. Only one of these films, /{The Lucky Dog}/, is currently out of copyright. But for the CTEA, films made after 1923 would have begun entering the public domain. Because Agee controls the exclusive rights for these popular films, he makes a great deal of money. According to one estimate, "Roach has sold about 60,000 videocassettes and 50,000 DVDs of the duo's silent films."~{ See David G. Savage, "High Court Scene of Showdown on Copyright Law," /{Los Angeles Times,}/ 6 October 2002; David Streitfeld, "Classic Movies, Songs, Books at Stake; Supreme Court Hears Arguments Today on Striking Down Copyright Extension," /{Orlando Sentinel Tribune,}/ 9 October 2002. }~
+
+Yet Agee opposed the CTEA. His reasons demonstrate a rare virtue in this culture: selflessness. He argued in a brief before the Supreme Court that the Sonny Bono Copyright Term Extension Act will, if left standing, destroy a whole generation of American film.
+
+His argument is straightforward. A tiny fraction of this work has any continuing commercial value. The rest - to the extent it survives at all - sits in vaults gathering dust. It may be that some of this work not now commercially valuable will be deemed to be valuable by the owners of the vaults. For this to occur, however, the commercial benefit from the work must exceed the costs of making the work available for distribution.
+
+We can't know the benefits, but we do know a lot about the costs. For most of the history of film, the costs of restoring film were very high; digital technology has lowered these costs substantially. While it cost more than $10,000 to restore a ninety-minute black-and-white film in 1993, it can now cost as little as $100 to digitize one hour of 8 mm film.~{ Brief of Hal Roach Studios and Michael Agee as Amicus Curiae Supporting the Petitioners, /{Eldred}/ v. /{Ashcroft,}/ 537 U.S. 186 (2003) (No. 01- 618), 12. See also Brief of Amicus Curiae filed on behalf of Petitioners by the Internet Archive, /{Eldred}/ v. /{Ashcroft,}/ available at link #53. }~
+
+Restoration technology is not the only cost, nor the most important. Lawyers, too, are a cost, and increasingly, a very important one. In addition to preserving the film, a distributor needs to secure the rights. And to secure the rights for a film that is under copyright, you need to locate the copyright owner.
+
+Or more accurately, /{owners}/. As we've seen, there isn't only a single copyright associated with a film; there are many. There isn't a single person whom you can contact about those copyrights; there are as many as can hold the rights, which turns out to be an extremely large number. Thus the costs of clearing the rights to these films is exceptionally high.
+
+"But can't you just restore the film, distribute it, and then pay the copyright owner when she shows up?" Sure, if you want to commit a felony. And even if you're not worried about committing a felony, when she does show up, she'll have the right to sue you for all the profits you have made. So, if you're successful, you can be fairly confident you'll be getting a call from someone's lawyer. And if you're not successful, you won't make enough to cover the costs of your own lawyer. Either way, you have to talk to a lawyer. And as is too often the case, saying you have to talk to a lawyer is the same as saying you won't make any money.
+
+For some films, the benefit of releasing the film may well exceed these costs. But for the vast majority of them, there is no way the benefit would outweigh the legal costs. Thus, for the vast majority of old films, Agee argued, the film will not be restored and distributed until the copyright expires.
+
+But by the time the copyright for these films expires, the film will have expired. These films were produced on nitrate-based stock, and nitrate stock dissolves over time. They will be gone, and the metal canisters in which they are now stored will be filled with nothing more than dust.
+
+Of all the creative work produced by humans anywhere, a tiny fraction has continuing commercial value. For that tiny fraction, the copyright is a crucially important legal device. For that tiny fraction, the copyright creates incentives to produce and distribute the creative work. For that tiny fraction, the copyright acts as an "engine of free expression."
+
+But even for that tiny fraction, the actual time during which the creative work has a commercial life is extremely short. As I've indicated, most books go out of print within one year. The same is true of music and film. Commercial culture is sharklike. It must keep moving. And when a creative work falls out of favor with the commercial distributors, the commercial life ends.
+
+Yet that doesn't mean the life of the creative work ends. We don't keep libraries of books in order to compete with Barnes & Noble, and we don't have archives of films because we expect people to choose between spending Friday night watching new movies and spending Friday night watching a 1930 news documentary. The noncommercial life of culture is important and valuable - for entertainment but also, and more importantly, for knowledge. To understand who we are, and where we came from, and how we have made the mistakes that we have, we need to have access to this history.
+
+Copyrights in this context do not drive an engine of free expression. In this context, there is no need for an exclusive right. Copyrights in this context do no good.
+
+Yet, for most of our history, they also did little harm. For most of our history, when a work ended its commercial life, there was no /{copyright-related use}/ that would be inhibited by an exclusive right. When a book went out of print, you could not buy it from a publisher. But you could still buy it from a used book store, and when a used book store sells it, in America, at least, there is no need to pay the copyright owner anything. Thus, the ordinary use of a book after its commercial life ended was a use that was independent of copyright law.
+
+The same was effectively true of film. Because the costs of restoring a film - the real economic costs, not the lawyer costs - were so high, it was never at all feasible to preserve or restore film. Like the remains of a great dinner, when it's over, it's over. Once a film passed out of its commercial life, it may have been archived for a bit, but that was the end of its life so long as the market didn't have more to offer.
+
+In other words, though copyright has been relatively short for most of our history, long copyrights wouldn't have mattered for the works that lost their commercial value. Long copyrights for these works would not have interfered with anything.
+
+But this situation has now changed.
+
+One crucially important consequence of the emergence of digital technologies is to enable the archive that Brewster Kahle dreams of. Digital technologies now make it possible to preserve and give access to all sorts of knowledge. Once a book goes out of print, we can now imagine digitizing it and making it available to everyone, forever. Once a film goes out of distribution, we could digitize it and make it available to everyone, forever. Digital technologies give new life to copyrighted material after it passes out of its commercial life. It is now possible to preserve and assure universal access to this knowledge and culture, whereas before it was not.
+
+And now copyright law does get in the way. Every step of producing this digital archive of our culture infringes on the exclusive right of copyright. To digitize a book is to copy it. To do that requires permission of the copyright owner. The same with music, film, or any other aspect of our culture protected by copyright. The effort to make these things available to history, or to researchers, or to those who just want to explore, is now inhibited by a set of rules that were written for a radically different context.
+
+Here is the core of the harm that comes from extending terms: Now that technology enables us to rebuild the library of Alexandria, the law gets in the way. And it doesn't get in the way for any useful /{copyright}/ purpose, for the purpose of copyright is to enable the commercial market that spreads culture. No, we are talking about culture after it has lived its commercial life. In this context, copyright is serving no purpose /{at all}/ related to the spread of knowledge. In this context, copyright is not an engine of free expression. Copyright is a brake.
+
+You may well ask, "But if digital technologies lower the costs for Brewster Kahle, then they will lower the costs for Random House, too. So won't Random House do as well as Brewster Kahle in spreading culture widely?"
+
+Maybe. Someday. But there is absolutely no evidence to suggest that publishers would be as complete as libraries. If Barnes & Noble offered to lend books from its stores for a low price, would that eliminate the need for libraries? Only if you think that the only role of a library is to serve what "the market" would demand. But if you think the role of a library is bigger than this - if you think its role is to archive culture, whether there's a demand for any particular bit of that culture or not - then we can't count on the commercial market to do our library work for us.
+
+I would be the first to agree that it should do as much as it can: We should rely upon the market as much as possible to spread and enable culture. My message is absolutely not antimarket. But where we see the market is not doing the job, then we should allow nonmarket forces the freedom to fill the gaps. As one researcher calculated for American culture, 94 percent of the films, books, and music produced between 1923 and 1946 is not commercially available. However much you love the commercial market, if access is a value, then 6 percent is a failure to provide that value.~{ Jason Schultz, "The Myth of the 1976 Copyright 'Chaos' Theory," 20 December 2002, available at link #54. }~
+
+In January 1999, we filed a lawsuit on Eric Eldred's behalf in federal district court in Washington, D.C., asking the court to declare the Sonny Bono Copyright Term Extension Act unconstitutional. The two central claims that we made were (1) that extending existing terms violated the Constitution's "limited Times" requirement, and (2) that extending terms by another twenty years violated the First Amendment.
+
+The district court dismissed our claims without even hearing an argument. A panel of the Court of Appeals for the D.C. Circuit also dismissed our claims, though after hearing an extensive argument. But that decision at least had a dissent, by one of the most conservative judges on that court. That dissent gave our claims life.
+
+Judge David Sentelle said the CTEA violated the requirement that copyrights be for "limited Times" only. His argument was as elegant as it was simple: If Congress can extend existing terms, then there is no "stopping point" to Congress's power under the Copyright Clause. The power to extend existing terms means Congress is not required to grant terms that are "limited." Thus, Judge Sentelle argued, the court had to interpret the term "limited Times" to give it meaning. And the best interpretation, Judge Sentelle argued, would be to deny Congress the power to extend existing terms.
+
+We asked the Court of Appeals for the D.C. Circuit as a whole to hear the case. Cases are ordinarily heard in panels of three, except for important cases or cases that raise issues specific to the circuit as a whole, where the court will sit "en banc" to hear the case.
+
+The Court of Appeals rejected our request to hear the case en banc. This time, Judge Sentelle was joined by the most liberal member of the D.C. Circuit, Judge David Tatel. Both the most conservative and the most liberal judges in the D.C. Circuit believed Congress had overstepped its bounds.
+
+It was here that most expected /{Eldred}/ v. /{Ashcroft}/ would die, for the Supreme Court rarely reviews any decision by a court of appeals. (It hears about one hundred cases a year, out of more than five thousand appeals.) And it practically never reviews a decision that upholds a statute when no other court has yet reviewed the statute.
+
+But in February 2002, the Supreme Court surprised the world by granting our petition to review the D.C. Circuit opinion. Argument was set for October of 2002. The summer would be spent writing briefs and preparing for argument.
+
+It is over a year later as I write these words. It is still astonishingly hard. If you know anything at all about this story, you know that we lost the appeal. And if you know something more than just the minimum, you probably think there was no way this case could have been won. After our defeat, I received literally thousands of missives by well-wishers and supporters, thanking me for my work on behalf of this noble but doomed cause. And none from this pile was more significant to me than the e-mail from my client, Eric Eldred.
+
+But my client and these friends were wrong. This case could have been won. It should have been won. And no matter how hard I try to retell this story to myself, I can never escape believing that my own mistake lost it.
+
+The mistake was made early, though it became obvious only at the very end. Our case had been supported from the very beginning by an extraordinary lawyer, Geoffrey Stewart, and by the law firm he had moved to, Jones, Day, Reavis and Pogue. Jones Day took a great deal of heat from its copyright-protectionist clients for supporting us. They ignored this pressure (something that few law firms today would ever do), and throughout the case, they gave it everything they could.
+
+There were three key lawyers on the case from Jones Day. Geoff Stewart was the first, but then Dan Bromberg and Don Ayer became quite involved. Bromberg and Ayer in particular had a common view about how this case would be won: We would only win, they repeatedly told me, if we could make the issue seem "important" to the Supreme Court. It had to seem as if dramatic harm were being done to free speech and free culture; otherwise, they would never vote against "the most powerful media companies in the world."
+
+I hate this view of the law. Of course I thought the Sonny Bono Act was a dramatic harm to free speech and free culture. Of course I still think it is. But the idea that the Supreme Court decides the law based on how important they believe the issues are is just wrong. It might be "right" as in "true," I thought, but it is "wrong" as in "it just shouldn't be that way." As I believed that any faithful interpretation of what the framers of our Constitution did would yield the conclusion that the CTEA was unconstitutional, and as I believed that any faithful interpretation of what the First Amendment means would yield the conclusion that the power to extend existing copyright terms is unconstitutional, I was not persuaded that we had to sell our case like soap. Just as a law that bans the swastika is unconstitutional not because the Court likes Nazis but because such a law would violate the Constitution, so too, in my view, would the Court decide whether Congress's law was constitutional based on the Constitution, not based on whether they liked the values that the framers put in the Constitution.
+
+In any case, I thought, the Court must already see the danger and the harm caused by this sort of law. Why else would they grant review? There was no reason to hear the case in the Supreme Court if they weren't convinced that this regulation was harmful. So in my view, we didn't need to persuade them that this law was bad, we needed to show why it was unconstitutional.
+
+There was one way, however, in which I felt politics would matter and in which I thought a response was appropriate. I was convinced that the Court would not hear our arguments if it thought these were just the arguments of a group of lefty loons. This Supreme Court was not about to launch into a new field of judicial review if it seemed that this field of review was simply the preference of a small political minority. Although my focus in the case was not to demonstrate how bad the Sonny Bono Act was but to demonstrate that it was unconstitutional, my hope was to make this argument against a background of briefs that covered the full range of political views. To show that this claim against the CTEA was grounded in /{law}/ and not politics, then, we tried to gather the widest range of credible critics - credible not because they were rich and famous, but because they, in the aggregate, demonstrated that this law was unconstitutional regardless of one's politics.
+
+The first step happened all by itself. Phyllis Schlafly's organization, Eagle Forum, had been an opponent of the CTEA from the very beginning. Mrs. Schlafly viewed the CTEA as a sellout by Congress. In November 1998, she wrote a stinging editorial attacking the Republican Congress for allowing the law to pass. As she wrote, "Do you sometimes wonder why bills that create a financial windfall to narrow special interests slide easily through the intricate legislative process, while bills that benefit the general public seem to get bogged down?" The answer, as the editorial documented, was the power of money. Schlafly enumerated Disney's contributions to the key players on the committees. It was money, not justice, that gave Mickey Mouse twenty more years in Disney's control, Schlafly argued.
+
+In the Court of Appeals, Eagle Forum was eager to file a brief supporting our position. Their brief made the argument that became the core claim in the Supreme Court: If Congress can extend the term of existing copyrights, there is no limit to Congress's power to set terms. That strong conservative argument persuaded a strong conservative judge, Judge Sentelle.
+
+In the Supreme Court, the briefs on our side were about as diverse as it gets. They included an extraordinary historical brief by the Free Software Foundation (home of the GNU project that made GNU/Linux possible). They included a powerful brief about the costs of uncertainty by Intel. There were two law professors' briefs, one by copyright scholars and one by First Amendment scholars. There was an exhaustive and uncontroverted brief by the world's experts in the history of the Progress Clause. And of course, there was a new brief by Eagle Forum, repeating and strengthening its arguments.
+
+Those briefs framed a legal argument. Then to support the legal argument, there were a number of powerful briefs by libraries and archives, including the Internet Archive, the American Association of Law Libraries, and the National Writers Union.
+
+But two briefs captured the policy argument best. One made the argument I've already described: A brief by Hal Roach Studios argued that unless the law was struck, a whole generation of American film would disappear. The other made the economic argument absolutely clear.
+
+This economists' brief was signed by seventeen economists, including five Nobel Prize winners, including Ronald Coase, James Buchanan, Milton Friedman, Kenneth Arrow, and George Akerlof. The economists, as the list of Nobel winners demonstrates, spanned the political spectrum. Their conclusions were powerful: There was no plausible claim that extending the terms of existing copyrights would do anything to increase incentives to create. Such extensions were nothing more than "rent-seeking" - the fancy term economists use to describe special-interest legislation gone wild.
+
+The same effort at balance was reflected in the legal team we gathered to write our briefs in the case. The Jones Day lawyers had been with us from the start. But when the case got to the Supreme Court, we added three lawyers to help us frame this argument to this Court: Alan Morrison, a lawyer from Public Citizen, a Washington group that had made constitutional history with a series of seminal victories in the Supreme Court defending individual rights; my colleague and dean, Kathleen Sullivan, who had argued many cases in the Court, and who had advised us early on about a First Amendment strategy; and finally, former solicitor general Charles Fried.
+
+Fried was a special victory for our side. Every other former solicitor general was hired by the other side to defend Congress's power to give media companies the special favor of extended copyright terms. Fried was the only one who turned down that lucrative assignment to stand up for something he believed in. He had been Ronald Reagan's chief lawyer in the Supreme Court. He had helped craft the line of cases that limited Congress's power in the context of the Commerce Clause. And while he had argued many positions in the Supreme Court that I personally disagreed with, his joining the cause was a vote of confidence in our argument.
+
+The government, in defending the statute, had its collection of friends, as well. Significantly, however, none of these "friends" included historians or economists. The briefs on the other side of the case were written exclusively by major media companies, congressmen, and copyright holders.
+
+The media companies were not surprising. They had the most to gain from the law. The congressmen were not surprising either - they were defending their power and, indirectly, the gravy train of contributions such power induced. And of course it was not surprising that the copyright holders would defend the idea that they should continue to have the right to control who did what with content they wanted to control.
+
+Dr. Seuss's representatives, for example, argued that it was better for the Dr. Seuss estate to control what happened to Dr. Seuss's work - better than allowing it to fall into the public domain - because if this creativity were in the public domain, then people could use it to "glorify drugs or to create pornography."~{ Brief of Amici Dr. Seuss Enterprise et al., /{Eldred}/ v. /{Ashcroft,}/ 537 U.S. 186 (2003) (No. 01-618), 19. }~ That was also the motive of the Gershwin estate, which defended its "protection" of the work of George Gershwin. They refuse, for example, to license /{Porgy and Bess}/ to anyone who refuses to use African Americans in the cast.~{ Dinitia Smith, "Immortal Words, Immortal Royalties? Even Mickey Mouse Joins the Fray," /{New York Times,}/ 28 March 1998, B7. }~ That's their view of how this part of American culture should be controlled, and they wanted this law to help them effect that control.
+
+This argument made clear a theme that is rarely noticed in this debate. When Congress decides to extend the term of existing copyrights, Congress is making a choice about which speakers it will favor. Famous and beloved copyright owners, such as the Gershwin estate and Dr. Seuss, come to Congress and say, "Give us twenty years to control the speech about these icons of American culture. We'll do better with them than anyone else." Congress of course likes to reward the popular and famous by giving them what they want. But when Congress gives people an exclusive right to speak in a certain way, that's just what the First Amendment is traditionally meant to block.
+
+We argued as much in a final brief. Not only would upholding the CTEA mean that there was no limit to the power of Congress to extend copyrights - extensions that would further concentrate the market; it would also mean that there was no limit to Congress's power to play favorites, through copyright, with who has the right to speak.
+
+Between February and October, there was little I did beyond preparing for this case. Early on, as I said, I set the strategy.
+
+The Supreme Court was divided into two important camps. One camp we called "the Conservatives." The other we called "the Rest." The Conservatives included Chief Justice Rehnquist, Justice O'Connor, Justice Scalia, Justice Kennedy, and Justice Thomas. These five had been the most consistent in limiting Congress's power. They were the five who had supported the /{Lopez/Morrison}/ line of cases that said that an enumerated power had to be interpreted to assure that Congress's powers had limits.
+
+The Rest were the four Justices who had strongly opposed limits on Congress's power. These four - Justice Stevens, Justice Souter, Justice Ginsburg, and Justice Breyer - had repeatedly argued that the Constitution gives Congress broad discretion to decide how best to implement its powers. In case after case, these justices had argued that the Court's role should be one of deference. Though the votes of these four justices were the votes that I personally had most consistently agreed with, they were also the votes that we were least likely to get.
+
+In particular, the least likely was Justice Ginsburg's. In addition to her general view about deference to Congress (except where issues of gender are involved), she had been particularly deferential in the context of intellectual property protections. She and her daughter (an excellent and well-known intellectual property scholar) were cut from the same intellectual property cloth. We expected she would agree with the writings of her daughter: that Congress had the power in this context to do as it wished, even if what Congress wished made little sense.
+
+Close behind Justice Ginsburg were two justices whom we also viewed as unlikely allies, though possible surprises. Justice Souter strongly favored deference to Congress, as did Justice Breyer. But both were also very sensitive to free speech concerns. And as we strongly believed, there was a very important free speech argument against these retrospective extensions.
+
+The only vote we could be confident about was that of Justice Stevens. History will record Justice Stevens as one of the greatest judges on this Court. His votes are consistently eclectic, which just means that no simple ideology explains where he will stand. But he had consistently argued for limits in the context of intellectual property generally. We were fairly confident he would recognize limits here.
+
+This analysis of "the Rest" showed most clearly where our focus had to be: on the Conservatives. To win this case, we had to crack open these five and get at least a majority to go our way. Thus, the single overriding argument that animated our claim rested on the Conservatives' most important jurisprudential innovation - the argument that Judge Sentelle had relied upon in the Court of Appeals, that Congress's power must be interpreted so that its enumerated powers have limits.
+
+This then was the core of our strategy - a strategy for which I am responsible. We would get the Court to see that just as with the /{Lopez}/ case, under the government's argument here, Congress would always have unlimited power to extend existing terms. If anything was plain about Congress's power under the Progress Clause, it was that this power was supposed to be "limited." Our aim would be to get the Court to reconcile /{Eldred}/ with /{Lopez:}/ If Congress's power to regulate commerce was limited, then so, too, must Congress's power to regulate copyright be limited.
+
+The argument on the government's side came down to this: Congress has done it before. It should be allowed to do it again. The government claimed that from the very beginning, Congress has been extending the term of existing copyrights. So, the government argued, the Court should not now say that practice is unconstitutional.
+
+There was some truth to the government's claim, but not much. We certainly agreed that Congress had extended existing terms in 1831 and in 1909. And of course, in 1962, Congress began extending existing terms regularly - eleven times in forty years.
+
+But this "consistency" should be kept in perspective. Congress extended existing terms once in the first hundred years of the Republic. It then extended existing terms once again in the next fifty. Those rare extensions are in contrast to the now regular practice of extending existing terms. Whatever restraint Congress had had in the past, that restraint was now gone. Congress was now in a cycle of extensions; there was no reason to expect that cycle would end. This Court had not hesitated to intervene where Congress was in a similar cycle of extension. There was no reason it couldn't intervene here.
+
+Oral argument was scheduled for the first week in October. I arrived in D.C. two weeks before the argument. During those two weeks, I was repeatedly "mooted" by lawyers who had volunteered to help in the case. Such "moots" are basically practice rounds, where wannabe justices fire questions at wannabe winners.
+
+I was convinced that to win, I had to keep the Court focused on a single point: that if this extension is permitted, then there is no limit to the power to set terms. Going with the government would mean that terms would be effectively unlimited; going with us would give Congress a clear line to follow: Don't extend existing terms. The moots were an effective practice; I found ways to take every question back to this central idea.
+
+One moot was before the lawyers at Jones Day. Don Ayer was the skeptic. He had served in the Reagan Justice Department with Solicitor General Charles Fried. He had argued many cases before the Supreme Court. And in his review of the moot, he let his concern speak:
+
+"I'm just afraid that unless they really see the harm, they won't be willing to upset this practice that the government says has been a consistent practice for two hundred years. You have to make them see the harm - passionately get them to see the harm. For if they don't see that, then we haven't any chance of winning."
+
+He may have argued many cases before this Court, I thought, but he didn't understand its soul. As a clerk, I had seen the Justices do the right thing - not because of politics but because it was right. As a law professor, I had spent my life teaching my students that this Court does the right thing - not because of politics but because it is right. As I listened to Ayer's plea for passion in pressing politics, I understood his point, and I rejected it. Our argument was right. That was enough. Let the politicians learn to see that it was also good.
+
+The night before the argument, a line of people began to form in front of the Supreme Court. The case had become a focus of the press and of the movement to free culture. Hundreds stood in line for the chance to see the proceedings. Scores spent the night on the Supreme Court steps so that they would be assured a seat.
+
+Not everyone has to wait in line. People who know the Justices can ask for seats they control. (I asked Justice Scalia's chambers for seats for my parents, for example.) Members of the Supreme Court bar can get a seat in a special section reserved for them. And senators and congressmen have a special place where they get to sit, too. And finally, of course, the press has a gallery, as do clerks working for the Justices on the Court. As we entered that morning, there was no place that was not taken. This was an argument about intellectual property law, yet the halls were filled. As I walked in to take my seat at the front of the Court, I saw my parents sitting on the left. As I sat down at the table, I saw Jack Valenti sitting in the special section ordinarily reserved for family of the Justices.
+
+When the Chief Justice called me to begin my argument, I began where I intended to stay: on the question of the limits on Congress's power. This was a case about enumerated powers, I said, and whether those enumerated powers had any limit.
+
+Justice O'Connor stopped me within one minute of my opening. The history was bothering her.
+
+_1 JUSTICE O'CONNOR: Congress has extended the term so often through the years, and if you are right, don't we run the risk of upsetting previous extensions of time? I mean, this seems to be a practice that began with the very first act.
+
+She was quite willing to concede "that this flies directly in the face of what the framers had in mind." But my response again and again was to emphasize limits on Congress's power.
+
+_1 MR. LESSIG: Well, if it flies in the face of what the framers had in mind, then the question is, is there a way of interpreting their words that gives effect to what they had in mind, and the answer is yes.
+
+There were two points in this argument when I should have seen where the Court was going. The first was a question by Justice Kennedy, who observed,
+
+_1 JUSTICE KENNEDY: Well, I suppose implicit in the argument that the '76 act, too, should have been declared void, and that we might leave it alone because of the disruption, is that for all these years the act has impeded progress in science and the useful arts. I just don't see any empirical evidence for that.
+
+Here follows my clear mistake. Like a professor correcting a student, I answered,
+
+_1 MR. LESSIG: Justice, we are not making an empirical claim at all. Nothing in our Copyright Clause claim hangs upon the empirical assertion about impeding progress. Our only argument is this is a structural limit necessary to assure that what would be an effectively perpetual term not be permitted under the copyright laws.
+
+That was a correct answer, but it wasn't the right answer. The right answer was instead that there was an obvious and profound harm. Any number of briefs had been written about it. He wanted to hear it. And here was the place Don Ayer's advice should have mattered. This was a softball; my answer was a swing and a miss.
+
+The second came from the Chief, for whom the whole case had been crafted. For the Chief Justice had crafted the /{Lopez}/ ruling, and we hoped that he would see this case as its second cousin.
+
+It was clear a second into his question that he wasn't at all sympathetic. To him, we were a bunch of anarchists. As he asked:
+
+_1 CHIEF JUSTICE: Well, but you want more than that. You want the right to copy verbatim other people's books, don't you?
+
+_1 MR. LESSIG: We want the right to copy verbatim works that should be in the public domain and would be in the public domain but for a statute that cannot be justified under ordinary First Amendment analysis or under a proper reading of the limits built into the Copyright Clause.
+
+Things went better for us when the government gave its argument; for now the Court picked up on the core of our claim. As Justice Scalia asked Solicitor General Olson,
+
+_1 JUSTICE SCALIA: You say that the functional equivalent of an unlimited time would be a violation [of the Constitution], but that's precisely the argument that's being made by petitioners here, that a limited time which is extendable is the functional equivalent of an unlimited time.
+
+When Olson was finished, it was my turn to give a closing rebuttal. Olson's flailing had revived my anger. But my anger still was directed to the academic, not the practical. The government was arguing as if this were the first case ever to consider limits on Congress's Copyright and Patent Clause power. Ever the professor and not the advocate, I closed by pointing out the long history of the Court imposing limits on Congress's power in the name of the Copyright and Patent Clause - indeed, the very first case striking a law of Congress as exceeding a specific enumerated power was based upon the Copyright and Patent Clause. All true. But it wasn't going to move the Court to my side.
+
+As I left the court that day, I knew there were a hundred points I wished I could remake. There were a hundred questions I wished I had answered differently. But one way of thinking about this case left me optimistic.
+
+The government had been asked over and over again, what is the limit? Over and over again, it had answered there is no limit. This was precisely the answer I wanted the Court to hear. For I could not imagine how the Court could understand that the government believed Congress's power was unlimited under the terms of the Copyright Clause, and sustain the government's argument. The solicitor general had made my argument for me. No matter how often I tried, I could not understand how the Court could find that Congress's power under the Commerce Clause was limited, but under the Copyright Clause, unlimited. In those rare moments when I let myself believe that we may have prevailed, it was because I felt this Court - in particular, the Conservatives - would feel itself constrained by the rule of law that it had established elsewhere.
+
+The morning of January 15, 2003, I was five minutes late to the office and missed the 7:00 A.M. call from the Supreme Court clerk. Listening to the message, I could tell in an instant that she had bad news to report. The Supreme Court had affirmed the decision of the Court of Appeals. Seven justices had voted in the majority. There were two dissents.
+
+A few seconds later, the opinions arrived by e-mail. I took the phone off the hook, posted an announcement to our blog, and sat down to see where I had been wrong in my reasoning.
+
+My /{reasoning}/. Here was a case that pitted all the money in the world against /{reasoning}/. And here was the last naïve law professor, scouring the pages, looking for reasoning.
+
+I first scoured the opinion, looking for how the Court would distinguish the principle in this case from the principle in /{Lopez}/. The argument was nowhere to be found. The case was not even cited. The argument that was the core argument of our case did not even appear in the Court's opinion.
+
+Justice Ginsburg simply ignored the enumerated powers argument. Consistent with her view that Congress's power was not limited generally, she had found Congress's power not limited here.
+
+Her opinion was perfectly reasonable - for her, and for Justice Souter. Neither believes in /{Lopez}/. It would be too much to expect them to write an opinion that recognized, much less explained, the doctrine they had worked so hard to defeat.
+
+But as I realized what had happened, I couldn't quite believe what I was reading. I had said there was no way this Court could reconcile limited powers with the Commerce Clause and unlimited powers with the Progress Clause. It had never even occurred to me that they could reconcile the two simply /{by not addressing the argument}/. There was no inconsistency because they would not talk about the two together. There was therefore no principle that followed from the /{Lopez}/ case: In that context, Congress's power would be limited, but in this context it would not.
+
+Yet by what right did they get to choose which of the framers' values they would respect? By what right did they - the silent five - get to select the part of the Constitution they would enforce based on the values they thought important? We were right back to the argument that I said I hated at the start: I had failed to convince them that the issue here was important, and I had failed to recognize that however much I might hate a system in which the Court gets to pick the constitutional values that it will respect, that is the system we have.
+
+Justices Breyer and Stevens wrote very strong dissents. Stevens's opinion was crafted internal to the law: He argued that the tradition of intellectual property law should not support this unjustified extension of terms. He based his argument on a parallel analysis that had governed in the context of patents (so had we). But the rest of the Court discounted the parallel - without explaining how the very same words in the Progress Clause could come to mean totally different things depending upon whether the words were about patents or copyrights. The Court let Justice Stevens's charge go unanswered.
+
+Justice Breyer's opinion, perhaps the best opinion he has ever written, was external to the Constitution. He argued that the term of copyrights has become so long as to be effectively unlimited. We had said that under the current term, a copyright gave an author 99.8 percent of the value of a perpetual term. Breyer said we were wrong, that the actual number was 99.9997 percent of a perpetual term. Either way, the point was clear: If the Constitution said a term had to be "limited," and the existing term was so long as to be effectively unlimited, then it was unconstitutional.
+
+These two justices understood all the arguments we had made. But because neither believed in the /{Lopez}/ case, neither was willing to push it as a reason to reject this extension. The case was decided without anyone having addressed the argument that we had carried from Judge Sentelle. It was /{Hamlet}/ without the Prince.
+
+Defeat brings depression. They say it is a sign of health when depression gives way to anger. My anger came quickly, but it didn't cure the depression. This anger was of two sorts.
+
+It was first anger with the five "Conservatives." It would have been one thing for them to have explained why the principle of /{Lopez}/ didn't apply in this case. That wouldn't have been a very convincing argument, I don't believe, having read it made by others, and having tried to make it myself. But it at least would have been an act of integrity. These justices in particular have repeatedly said that the proper mode of interpreting the Constitution is "originalism" - to first understand the framers' text, interpreted in their context, in light of the structure of the Constitution. That method had produced /{Lopez}/ and many other "originalist" rulings. Where was their "originalism" now?
+
+Here, they had joined an opinion that never once tried to explain what the framers had meant by crafting the Progress Clause as they did; they joined an opinion that never once tried to explain how the structure of that clause would affect the interpretation of Congress's power. And they joined an opinion that didn't even try to explain why this grant of power could be unlimited, whereas the Commerce Clause would be limited. In short, they had joined an opinion that did not apply to, and was inconsistent with, their own method for interpreting the Constitution. This opinion may well have yielded a result that they liked. It did not produce a reason that was consistent with their own principles.
+
+My anger with the Conservatives quickly yielded to anger with myself. For I had let a view of the law that I liked interfere with a view of the law as it is.
+
+Most lawyers, and most law professors, have little patience for idealism about courts in general and this Supreme Court in particular. Most have a much more pragmatic view. When Don Ayer said that this case would be won based on whether I could convince the Justices that the framers' values were important, I fought the idea, because I didn't want to believe that that is how this Court decides. I insisted on arguing this case as if it were a simple application of a set of principles. I had an argument that followed in logic. I didn't need to waste my time showing it should also follow in popularity.
+
+As I read back over the transcript from that argument in October, I can see a hundred places where the answers could have taken the conversation in different directions, where the truth about the harm that this unchecked power will cause could have been made clear to this Court. Justice Kennedy in good faith wanted to be shown. I, idiotically, corrected his question. Justice Souter in good faith wanted to be shown the First Amendment harms. I, like a math teacher, reframed the question to make the logical point. I had shown them how they could strike this law of Congress if they wanted to. There were a hundred places where I could have helped them want to, yet my stubbornness, my refusal to give in, stopped me. I have stood before hundreds of audiences trying to persuade; I have used passion in that effort to persuade; but I refused to stand before this audience and try to persuade with the passion I had used elsewhere. It was not the basis on which a court should decide the issue.
+
+Would it have been different if I had argued it differently? Would it have been different if Don Ayer had argued it? Or Charles Fried? Or Kathleen Sullivan?
+
+My friends huddled around me to insist it would not. The Court was not ready, my friends insisted. This was a loss that was destined. It would take a great deal more to show our society why our framers were right. And when we do that, we will be able to show that Court.
+
+Maybe, but I doubt it. These Justices have no financial interest in doing anything except the right thing. They are not lobbied. They have little reason to resist doing right. I can't help but think that if I had stepped down from this pretty picture of dispassionate justice, I could have persuaded.
+
+And even if I couldn't, then that doesn't excuse what happened in January. For at the start of this case, one of America's leading intellectual property professors stated publicly that my bringing this case was a mistake. "The Court is not ready," Peter Jaszi said; this issue should not be raised until it is.
+
+After the argument and after the decision, Peter said to me, and publicly, that he was wrong. But if indeed that Court could not have been persuaded, then that is all the evidence that's needed to know that here again Peter was right. Either I was not ready to argue this case in a way that would do some good or they were not ready to hear this case in a way that would do some good. Either way, the decision to bring this case - a decision I had made four years before - was wrong.
+
+While the reaction to the Sonny Bono Act itself was almost unanimously negative, the reaction to the Court's decision was mixed. No one, at least in the press, tried to say that extending the term of copyright was a good idea. We had won that battle over ideas. Where the decision was praised, it was praised by papers that had been skeptical of the Court's activism in other cases. Deference was a good thing, even if it left standing a silly law. But where the decision was attacked, it was attacked because it left standing a silly and harmful law. /{The New York Times}/ wrote in its editorial,
+
+_1 In effect, the Supreme Court's decision makes it likely that we are seeing the beginning of the end of public domain and the birth of copyright perpetuity. The public domain has been a grand experiment, one that should not be allowed to die. The ability to draw freely on the entire creative output of humanity is one of the reasons we live in a time of such fruitful creative ferment.
+
+The best responses were in the cartoons. There was a gaggle of hilarious images of Mickey in jail and the like. The best, from my view of the case, was Ruben Bolling's, reproduced on the next page. The "powerful and wealthy" line is a bit unfair. But the punch in the face felt exactly like that.
+
+The image that will always stick in my head is that evoked by the quote from /{The New York Times}/. That "grand experiment" we call the "public domain" is over? When I can make light of it, I think, "Honey, I shrunk the Constitution." But I can rarely make light of it. We had in our Constitution a commitment to free culture. In the case that I fathered, the Supreme Court effectively renounced that commitment. A better lawyer would have made them see differently.
+
+{freeculture18.png 550x720 }http://www.free-culture.cc/
+
+1~ Chapter Fourteen: Eldred II
+
+*{The day}* /{Eldred}/ was decided, fate would have it that I was to travel to Washington, D.C. (The day the rehearing petition in /{Eldred}/ was denied - meaning the case was really finally over - fate would have it that I was giving a speech to technologists at Disney World.) This was a particularly long flight to my least favorite city. The drive into the city from Dulles was delayed because of traffic, so I opened up my computer and wrote an op-ed piece.
+
+It was an act of contrition. During the whole of the flight from San Francisco to Washington, I had heard over and over again in my head the same advice from Don Ayer: You need to make them see why it is important. And alternating with that command was the question of Justice Kennedy: "For all these years the act has impeded progress in science and the useful arts. I just don't see any empirical evidence for that." And so, having failed in the argument of constitutional principle, finally, I turned to an argument of politics.
+
+/{The New York Times}/ published the piece. In it, I proposed a simple fix: Fifty years after a work has been published, the copyright owner would be required to register the work and pay a small fee. If he paid the fee, he got the benefit of the full term of copyright. If he did not, the work passed into the public domain.
+
+We called this the Eldred Act, but that was just to give it a name. Eric Eldred was kind enough to let his name be used once again, but as he said early on, it won't get passed unless it has another name.
+
+Or another two names. For depending upon your perspective, this is either the "Public Domain Enhancement Act" or the "Copyright Term Deregulation Act." Either way, the essence of the idea is clear and obvious: Remove copyright where it is doing nothing except blocking access and the spread of knowledge. Leave it for as long as Congress allows for those works where its worth is at least $1. But for everything else, let the content go.
+
+The reaction to this idea was amazingly strong. Steve Forbes endorsed it in an editorial. I received an avalanche of e-mail and letters expressing support. When you focus the issue on lost creativity, people can see the copyright system makes no sense. As a good Republican might say, here government regulation is simply getting in the way of innovation and creativity. And as a good Democrat might say, here the government is blocking access and the spread of knowledge for no good reason. Indeed, there is no real difference between Democrats and Republicans on this issue. Anyone can recognize the stupid harm of the present system.
+
+Indeed, many recognized the obvious benefit of the registration requirement. For one of the hardest things about the current system for people who want to license content is that there is no obvious place to look for the current copyright owners. Since registration is not required, since marking content is not required, since no formality at all is required, it is often impossibly hard to locate copyright owners to ask permission to use or license their work. This system would lower these costs, by establishing at least one registry where copyright owners could be identified.
+
+As I described in chapter 10, formalities in copyright law were removed in 1976, when Congress followed the Europeans by abandoning any formal requirement before a copyright is granted.~{ Until the 1908 Berlin Act of the Berne Convention, national copyright legislation sometimes made protection depend upon compliance with formalities such as registration, deposit, and affixation of notice of the author's claim of copyright. However, starting with the 1908 act, every text of the Convention has provided that "the enjoyment and the exercise" of rights guaranteed by the Convention "shall not be subject to any formality." The prohibition against formalities is presently embodied in Article 5(2) of the Paris Text of the Berne Convention. Many countries continue to impose some form of deposit or registration requirement, albeit not as a condition of copyright. French law, for example, requires the deposit of copies of works in national repositories, principally the National Museum. Copies of books published in the United Kingdom must be deposited in the British Library. The German Copyright Act provides for a Registrar of Authors where the author's true name can be filed in the case of anonymous or pseudonymous works. Paul Goldstein, /{International Intellectual Property Law, Cases and Materials}/ (New York: Foundation Press, 2001), 153-54. }~ The Europeans are said to view copyright as a "natural right." Natural rights don't need forms to exist. Traditions, like the Anglo-American tradition that required copyright owners to follow form if their rights were to be protected, did not, the Europeans thought, properly respect the dignity of the author. My right as a creator turns on my creativity, not upon the special favor of the government.
+
+That's great rhetoric. It sounds wonderfully romantic. But it is absurd copyright policy. It is absurd especially for authors, because a world without formalities harms the creator. The ability to spread "Walt Disney creativity" is destroyed when there is no simple way to know what's protected and what's not.
+
+The fight against formalities achieved its first real victory in Berlin in 1908. International copyright lawyers amended the Berne Convention in 1908, to require copyright terms of life plus fifty years, as well as the abolition of copyright formalities. The formalities were hated because the stories of inadvertent loss were increasingly common. It was as if a Charles Dickens character ran all copyright offices, and the failure to dot an /{i}/ or cross a /{t}/ resulted in the loss of widows' only income.
+
+These complaints were real and sensible. And the strictness of the formalities, especially in the United States, was absurd. The law should always have ways of forgiving innocent mistakes. There is no reason copyright law couldn't, as well. Rather than abandoning formalities totally, the response in Berlin should have been to embrace a more equitable system of registration.
+
+Even that would have been resisted, however, because registration in the nineteenth and twentieth centuries was still expensive. It was also a hassle. The abolishment of formalities promised not only to save the starving widows, but also to lighten an unnecessary regulatory burden imposed upon creators.
+
+In addition to the practical complaint of authors in 1908, there was a moral claim as well. There was no reason that creative property should be a second-class form of property. If a carpenter builds a table, his rights over the table don't depend upon filing a form with the government. He has a property right over the table "naturally," and he can assert that right against anyone who would steal the table, whether or not he has informed the government of his ownership of the table.
+
+This argument is correct, but its implications are misleading. For the argument in favor of formalities does not depend upon creative property being second-class property. The argument in favor of formalities turns upon the special problems that creative property presents. The law of formalities responds to the special physics of creative property, to assure that it can be efficiently and fairly spread.
+
+No one thinks, for example, that land is second-class property just because you have to register a deed with a court if your sale of land is to be effective. And few would think a car is second-class property just because you must register the car with the state and tag it with a license. In both of those cases, everyone sees that there is an important reason to secure registration - both because it makes the markets more efficient and because it better secures the rights of the owner. Without a registration system for land, landowners would perpetually have to guard their property. With registration, they can simply point the police to a deed. Without a registration system for cars, auto theft would be much easier. With a registration system, the thief has a high burden to sell a stolen car. A slight burden is placed on the property owner, but those burdens produce a much better system of protection for property generally.
+
+It is similarly special physics that makes formalities important in copyright law. Unlike a carpenter's table, there's nothing in nature that makes it relatively obvious who might own a particular bit of creative property. A recording of Lyle Lovett's latest album can exist in a billion places without anything necessarily linking it back to a particular owner. And like a car, there's no way to buy and sell creative property with confidence unless there is some simple way to authenticate who is the author and what rights he has. Simple transactions are destroyed in a world without formalities. Complex, expensive, /{lawyer}/ transactions take their place.
+
+This was the understanding of the problem with the Sonny Bono Act that we tried to demonstrate to the Court. This was the part it didn't "get." Because we live in a system without formalities, there is no way easily to build upon or use culture from our past. If copyright terms were, as Justice Story said they would be, "short," then this wouldn't matter much. For fourteen years, under the framers' system, a work would be presumptively controlled. After fourteen years, it would be presumptively uncontrolled.
+
+But now that copyrights can be just about a century long, the inability to know what is protected and what is not protected becomes a huge and obvious burden on the creative process. If the only way a library can offer an Internet exhibit about the New Deal is to hire a lawyer to clear the rights to every image and sound, then the copyright system is burdening creativity in a way that has never been seen before /{because there are no formalities}/.
+
+The Eldred Act was designed to respond to exactly this problem. If it is worth $1 to you, then register your work and you can get the longer term. Others will know how to contact you and, therefore, how to get your permission if they want to use your work. And you will get the benefit of an extended copyright term.
+
+If it isn't worth it to you to register to get the benefit of an extended term, then it shouldn't be worth it for the government to defend your monopoly over that work either. The work should pass into the public domain where anyone can copy it, or build archives with it, or create a movie based on it. It should become free if it is not worth $1 to you.
+
+Some worry about the burden on authors. Won't the burden of registering the work mean that the $1 is really misleading? Isn't the hassle worth more than $1? Isn't that the real problem with registration?
+
+It is. The hassle is terrible. The system that exists now is awful. I completely agree that the Copyright Office has done a terrible job (no doubt because they are terribly funded) in enabling simple and cheap registrations. Any real solution to the problem of formalities must address the real problem of /{governments}/ standing at the core of any system of formalities. In this book, I offer such a solution. That solution essentially remakes the Copyright Office. For now, assume it was Amazon that ran the registration system. Assume it was one-click registration. The Eldred Act would propose a simple, one-click registration fifty years after a work was published. Based upon historical data, that system would move up to 98 percent of commercial work, commercial work that no longer had a commercial life, into the public domain within fifty years. What do you think?
+
+When Steve Forbes endorsed the idea, some in Washington began to pay attention. Many people contacted me pointing to representatives who might be willing to introduce the Eldred Act. And I had a few who directly suggested that they might be willing to take the first step.
+
+One representative, Zoe Lofgren of California, went so far as to get the bill drafted. The draft solved any problem with international law. It imposed the simplest requirement upon copyright owners possible. In May 2003, it looked as if the bill would be introduced. On May 16, I posted on the Eldred Act blog, "we are close." There was a general reaction in the blog community that something good might happen here.
+
+But at this stage, the lobbyists began to intervene. Jack Valenti and the MPAA general counsel came to the congresswoman's office to give the view of the MPAA. Aided by his lawyer, as Valenti told me, Valenti informed the congresswoman that the MPAA would oppose the Eldred Act. The reasons are embarrassingly thin. More importantly, their thinness shows something clear about what this debate is really about.
+
+The MPAA argued first that Congress had "firmly rejected the central concept in the proposed bill" - that copyrights be renewed. That was true, but irrelevant, as Congress's "firm rejection" had occurred long before the Internet made subsequent uses much more likely. Second, they argued that the proposal would harm poor copyright owners - apparently those who could not afford the $1 fee. Third, they argued that Congress had determined that extending a copyright term would encourage restoration work. Maybe in the case of the small percentage of work covered by copyright law that is still commercially valuable, but again this was irrelevant, as the proposal would not cut off the extended term unless the $1 fee was not paid. Fourth, the MPAA argued that the bill would impose "enormous" costs, since a registration system is not free. True enough, but those costs are certainly less than the costs of clearing the rights for a copyright whose owner is not known. Fifth, they worried about the risks if the copyright to a story underlying a film were to pass into the public domain. But what risk is that? If it is in the public domain, then the film is a valid derivative use.
+
+Finally, the MPAA argued that existing law enabled copyright owners to do this if they wanted. But the whole point is that there are thousands of copyright owners who don't even know they have a copyright to give. Whether they are free to give away their copyright or not - a controversial claim in any case - unless they know about a copyright, they're not likely to.
+
+At the beginning of this book, I told two stories about the law reacting to changes in technology. In the one, common sense prevailed. In the other, common sense was delayed. The difference between the two stories was the power of the opposition - the power of the side that fought to defend the status quo. In both cases, a new technology threatened old interests. But in only one case did those interests have the power to protect themselves against this new competitive threat.
+
+I used these two cases as a way to frame the war that this book has been about. For here, too, a new technology is forcing the law to react. And here, too, we should ask, is the law following or resisting common sense? If common sense supports the law, what explains this common sense?
+
+When the issue is piracy, it is right for the law to back the copyright owners. The commercial piracy that I described is wrong and harmful, and the law should work to eliminate it. When the issue is p2p sharing, it is easy to understand why the law backs the owners still: Much of this sharing is wrong, even if much is harmless. When the issue is copyright terms for the Mickey Mouses of the world, it is possible still to understand why the law favors Hollywood: Most people don't recognize the reasons for limiting copyright terms; it is thus still possible to see good faith within the resistance.
+
+But when the copyright owners oppose a proposal such as the Eldred Act, then, finally, there is an example that lays bare the naked self-interest driving this war. This act would free an extraordinary range of content that is otherwise unused. It wouldn't interfere with any copyright owner's desire to exercise continued control over his content. It would simply liberate what Kevin Kelly calls the "Dark Content" that fills archives around the world. So when the warriors oppose a change like this, we should ask one simple question:
+
+What does this industry really want?
+
+With very little effort, the warriors could protect their content. So the effort to block something like the Eldred Act is not really about protecting /{their}/ content. The effort to block the Eldred Act is an effort to assure that nothing more passes into the public domain. It is another step to assure that the public domain will never compete, that there will be no use of content that is not commercially controlled, and that there will be no commercial use of content that doesn't require /{their}/ permission first.
+
+The opposition to the Eldred Act reveals how extreme the other side is. The most powerful and sexy and well loved of lobbies really has as its aim not the protection of "property" but the rejection of a tradition. Their aim is not simply to protect what is theirs. /{Their aim is to assure that all there is is what is theirs}/.
+
+It is not hard to understand why the warriors take this view. It is not hard to see why it would benefit them if the competition of the public domain tied to the Internet could somehow be quashed. Just as RCA feared the competition of FM, they fear the competition of a public domain connected to a public that now has the means to create with it and to share its own creation.
+
+What is hard to understand is why the public takes this view. It is as if the law made airplanes trespassers. The MPAA stands with the Causbys and demands that their remote and useless property rights be respected, so that these remote and forgotten copyright holders might block the progress of others.
+
+All this seems to follow easily from this untroubled acceptance of the "property" in intellectual property. Common sense supports it, and so long as it does, the assaults will rain down upon the technologies of the Internet. The consequence will be an increasing "permission society." The past can be cultivated only if you can identify the owner and gain permission to build upon his work. The future will be controlled by this dead (and often unfindable) hand of the past.
+
+:C~ CONCLUSION
+
+1~conclusion [Conclusion]-#
+
+*{There are more}* than 35 million people with the AIDS virus worldwide. Twenty-five million of them live in sub-Saharan Africa. Seventeen million have already died. Seventeen million Africans is, as a proportion of population, the equivalent of seven million Americans. More importantly, it is seventeen million Africans.
+
+There is no cure for AIDS, but there are drugs to slow its progression. These antiretroviral therapies are still experimental, but they have already had a dramatic effect. In the United States, AIDS patients who regularly take a cocktail of these drugs increase their life expectancy by ten to twenty years. For some, the drugs make the disease almost invisible.
+
+These drugs are expensive. When they were first introduced in the United States, they cost between $10,000 and $15,000 per person per year. Today, some cost $25,000 per year. At these prices, of course, no African nation can afford the drugs for the vast majority of its population: $15,000 is thirty times the per capita gross national product of Zimbabwe. At these prices, the drugs are totally unavailable.~{ Commission on Intellectual Property Rights, "Final Report: Integrating Intellectual Property Rights and Development Policy" (London, 2002), available at link #55. According to a World Health Organization press release issued 9 July 2002, only 230,000 of the 6 million who need drugs in the developing world receive them - and half of them are in Brazil. }~
+
+These prices are not high because the ingredients of the drugs are expensive. These prices are high because the drugs are protected by patents. The drug companies that produced these life-saving mixes enjoy at least a twenty-year monopoly for their inventions. They use that monopoly power to extract the most they can from the market. That power is in turn used to keep the prices high.
+
+There are many who are skeptical of patents, especially drug patents. I am not. Indeed, of all the areas of research that might be supported by patents, drug research is, in my view, the clearest case where patents are needed. The patent gives the drug company some assurance that if it is successful in inventing a new drug to treat a disease, it will be able to earn back its investment and more. This is socially an extremely valuable incentive. I am the last person who would argue that the law should abolish it, at least without other changes.
+
+But it is one thing to support patents, even drug patents. It is another thing to determine how best to deal with a crisis. And as African leaders began to recognize the devastation that AIDS was bringing, they started looking for ways to import HIV treatments at costs significantly below the market price.
+
+In 1997, South Africa tried one tack. It passed a law to allow the importation of patented medicines that had been produced or sold in another nation's market with the consent of the patent owner. For example, if the drug was sold in India, it could be imported into Africa from India. This is called "parallel importation," and it is generally permitted under international trade law and is specifically permitted within the European Union.~{ See Peter Drahos with John Braithwaite, /{Information Feudalism: Who Owns the Knowledge Economy?}/ (New York: The New Press, 2003), 37. }~
+
+However, the United States government opposed the bill. Indeed, more than opposed. As the International Intellectual Property Institute characterized it, "The U.S. government pressured South Africa ... not to permit compulsory licensing or parallel imports."~{ International Intellectual Property Institute (IIPI), /{Patent Protection and Access to HIV/AIDS Pharmaceuticals in Sub-Saharan Africa, a Report Prepared for the World Intellectual Property Organization}/ (Washington, D.C., 2000), 14, available at link #56. For a firsthand account of the struggle over South Africa, see Hearing Before the Subcommittee on Criminal Justice, Drug Policy, and Human Resources, House Committee on Government Reform, H. Rep., 1st sess., Ser. No. 106-126 (22 July 1999), 150-57 (statement of James Love). }~ Through the Office of the United States Trade Representative, the government asked South Africa to change the law - and to add pressure to that request, in 1998, the USTR listed South Africa for possible trade sanctions. That same year, more than forty pharmaceutical companies began proceedings in the South African courts to challenge the government's actions. The United States was then joined by other governments from the EU. Their claim, and the claim of the pharmaceutical companies, was that South Africa was violating its obligations under international law by discriminating against a particular kind of patent - pharmaceutical patents. The demand of these governments, with the United States in the lead, was that South Africa respect these patents as it respects any other patent, regardless of any effect on the treatment of AIDS within South Africa.~{ International Intellectual Property Institute (IIPI), /{Patent Protection and Access to HIV/AIDS Pharmaceuticals in Sub-Saharan Africa, a Report Prepared for the World Intellectual Property Organization}/ (Washington, D.C., 2000), 15. }~
+
+We should place the intervention by the United States in context. No doubt patents are not the most important reason that Africans don't have access to drugs. Poverty and the total absence of an effective health care infrastructure matter more. But whether patents are the most important reason or not, the price of drugs has an effect on their demand, and patents affect price. And so, whether massive or marginal, there was an effect from our government's intervention to stop the flow of medications into Africa.
+
+By stopping the flow of HIV treatment into Africa, the United States government was not saving drugs for United States citizens. This is not like wheat (if they eat it, we can't); instead, the flow that the United States intervened to stop was, in effect, a flow of knowledge: information about how to take chemicals that exist within Africa, and turn those chemicals into drugs that would save 15 to 30 million lives.
+
+Nor was the intervention by the United States going to protect the profits of United States drug companies - at least, not substantially. It was not as if these countries were in the position to buy the drugs for the prices the drug companies were charging. Again, the Africans are wildly too poor to afford these drugs at the offered prices. Stopping the parallel import of these drugs would not substantially increase the sales by U.S. companies.
+
+Instead, the argument in favor of restricting this flow of information, which was needed to save the lives of millions, was an argument about the sanctity of property.~{ See Sabin Russell, "New Crusade to Lower AIDS Drug Costs: Africa's Needs at Odds with Firms' Profit Motive," /{San Francisco Chronicle,}/ 24 May 1999, A1, available at link #57 ("compulsory licenses and gray markets pose a threat to the entire system of intellectual property protection"); Robert Weissman, "AIDS and Developing Countries: Democratizing Access to Essential Medicines," /{Foreign Policy in Focus}/ 4:23 (August 1999), available at link #58 (describing U.S. policy); John A. Harrelson, "TRIPS, Pharmaceutical Patents, and the HIV/AIDS Crisis: Finding the Proper Balance Between Intellectual Property Rights and Compassion, a Synopsis," /{Widener Law Symposium Journal}/ (Spring 2001): 175. }~ It was because "intellectual property" would be violated that these drugs should not flow into Africa. It was a principle about the importance of "intellectual property" that led these government actors to intervene against the South African response to AIDS.
+
+Now just step back for a moment. There will be a time thirty years from now when our children look back at us and ask, how could we have let this happen? How could we allow a policy to be pursued whose direct cost would be to speed the death of 15 to 30 million Africans, and whose only real benefit would be to uphold the "sanctity" of an idea? What possible justification could there ever be for a policy that results in so many deaths? What exactly is the insanity that would allow so many to die for such an abstraction?
+
+Some blame the drug companies. I don't. They are corporations. Their managers are ordered by law to make money for the corporation. They push a certain patent policy not because of ideals, but because it is the policy that makes them the most money. And it only makes them the most money because of a certain corruption within our political system - a corruption the drug companies are certainly not responsible for.
+
+The corruption is our own politicians' failure of integrity. For the drug companies would love - they say, and I believe them - to sell their drugs as cheaply as they can to countries in Africa and elsewhere. There are issues they'd have to resolve to make sure the drugs didn't get back into the United States, but those are mere problems of technology. They could be overcome.
+
+A different problem, however, could not be overcome. This is the fear of the grandstanding politician who would call the presidents of the drug companies before a Senate or House hearing, and ask, "How is it you can sell this HIV drug in Africa for only $1 a pill, but the same drug would cost an American $1,500?" Because there is no "sound bite" answer to that question, its effect would be to induce regulation of prices in America. The drug companies thus avoid this spiral by avoiding the first step. They reinforce the idea that property should be sacred. They adopt a rational strategy in an irrational context, with the unintended consequence that perhaps millions die. And that rational strategy thus becomes framed in terms of this ideal - the sanctity of an idea called "intellectual property."
+
+So when the common sense of your child confronts you, what will you say? When the common sense of a generation finally revolts against what we have done, how will we justify what we have done? What is the argument?
+
+A sensible patent policy could endorse and strongly support the patent system without having to reach everyone everywhere in exactly the same way. Just as a sensible copyright policy could endorse and strongly support a copyright system without having to regulate the spread of culture perfectly and forever, a sensible patent policy could endorse and strongly support a patent system without having to block the spread of drugs to a country not rich enough to afford market prices in any case. A sensible policy, in other words, could be a balanced policy. For most of our history, both copyright and patent policies were balanced in just this sense.
+
+But we as a culture have lost this sense of balance. We have lost the critical eye that helps us see the difference between truth and extremism. A certain property fundamentalism, having no connection to our tradition, now reigns in this culture - bizarrely, and with consequences more grave to the spread of ideas and culture than almost any other single policy decision that we as a democracy will make.
+
+A simple idea blinds us, and under the cover of darkness, much happens that most of us would reject if any of us looked. So uncritically do we accept the idea of property in ideas that we don't even notice how monstrous it is to deny ideas to a people who are dying without them. So uncritically do we accept the idea of property in culture that we don't even question when the control of that property removes our ability, as a people, to develop our culture democratically. Blindness becomes our common sense. And the challenge for anyone who would reclaim the right to cultivate our culture is to find a way to make this common sense open its eyes.
+
+So far, common sense sleeps. There is no revolt. Common sense does not yet see what there could be to revolt about. The extremism that now dominates this debate fits with ideas that seem natural, and that fit is reinforced by the RCAs of our day. They wage a frantic war to fight "piracy," and devastate a culture for creativity. They defend the idea of "creative property," while transforming real creators into modern-day sharecroppers. They are insulted by the idea that rights should be balanced, even though each of the major players in this content war was itself a beneficiary of a more balanced ideal. The hypocrisy reeks. Yet in a city like Washington, hypocrisy is not even noticed. Powerful lobbies, complex issues, and MTV attention spans produce the "perfect storm" for free culture.
+
+In August 2003, a fight broke out in the United States about a decision by the World Intellectual Property Organization to cancel a meeting.~{ Jonathan Krim, "The Quiet War over Open-Source," /{Washington Post,}/ 21 August 2003, E1, available at link #59; William New, "Global Group's Shift on 'Open Source' Meeting Spurs Stir," /{National Journal's Technology Daily,}/ 19 August 2003, available at link #60; William New, "U.S. Official Opposes 'Open Source' Talks at WIPO," /{National Journal's Technology Daily,}/ 19 August 2003, available at link #61. }~ At the request of a wide range of interests, WIPO had decided to hold a meeting to discuss "open and collaborative projects to create public goods." These are projects that have been successful in producing public goods without relying exclusively upon a proprietary use of intellectual property. Examples include the Internet and the World Wide Web, both of which were developed on the basis of protocols in the public domain. It included an emerging trend to support open academic journals, including the Public Library of Science project that I describe in the Afterword. It included a project to develop single nucleotide polymorphisms (SNPs), which are thought to have great significance in biomedical research. (That nonprofit project comprised a consortium of the Wellcome Trust and pharmaceutical and technological companies, including Amersham Biosciences, AstraZeneca, Aventis, Bayer, Bristol-Myers Squibb, Hoffmann-La Roche, GlaxoSmithKline, IBM, Motorola, Novartis, Pfizer, and Searle.) It included the Global Positioning System, which Ronald Reagan set free in the early 1980s. And it included "open source and free software."
+
+The aim of the meeting was to consider this wide range of projects from one common perspective: that none of these projects relied upon intellectual property extremism. Instead, in all of them, intellectual property was balanced by agreements to keep access open or to impose limitations on the way in which proprietary claims might be used.
+
+From the perspective of this book, then, the conference was ideal.~{ I should disclose that I was one of the people who asked WIPO for the meeting. }~ The projects within its scope included both commercial and noncommercial work. They primarily involved science, but from many perspectives. And WIPO was an ideal venue for this discussion, since WIPO is the preeminent international body dealing with intellectual property issues.
+
+Indeed, I was once publicly scolded for not recognizing this fact about WIPO. In February 2003, I delivered a keynote address to a preparatory conference for the World Summit on the Information Society (WSIS). At a press conference before the address, I was asked what I would say. I responded that I would be talking a little about the importance of balance in intellectual property for the development of an information society. The moderator for the event then promptly interrupted to inform me and the assembled reporters that no question about intellectual property would be discussed by WSIS, since those questions were the exclusive domain of WIPO. In the talk that I had prepared, I had actually made the issue of intellectual property relatively minor. But after this astonishing statement, I made intellectual property the sole focus of my talk. There was no way to talk about an "Information Society" unless one also talked about the range of information and culture that would be free. My talk did not make my immoderate moderator very happy. And she was no doubt correct that the scope of intellectual property protections was ordinarily the stuff of WIPO. But in my view, there couldn't be too much of a conversation about how much intellectual property is needed, since in my view, the very idea of balance in intellectual property had been lost.
+
+So whether or not WSIS can discuss balance in intellectual property, I had thought it was taken for granted that WIPO could and should. And thus the meeting about "open and collaborative projects to create public goods" seemed perfectly appropriate within the WIPO agenda.
+
+But there is one project within that list that is highly controversial, at least among lobbyists. That project is "open source and free software." Microsoft in particular is wary of discussion of the subject. From its perspective, a conference to discuss open source and free software would be like a conference to discuss Apple's operating system. Both open source and free software compete with Microsoft's software. And internationally, many governments have begun to explore requirements that they use open source or free software, rather than "proprietary software," for their own internal uses.
+
+I don't mean to enter that debate here. It is important only to make clear that the distinction is not between commercial and noncommercial software. There are many important companies that depend fundamentally upon open source and free software, IBM being the most prominent. IBM is increasingly shifting its focus to the GNU/Linux operating system, the most famous bit of "free software" - and IBM is emphatically a commercial entity. Thus, to support "open source and free software" is not to oppose commercial entities. It is, instead, to support a mode of software development that is different from Microsoft's.~{ Microsoft's position about free and open source software is more sophisticated. As it has repeatedly asserted, it has no problem with "open source" software or software in the public domain. Microsoft's principal opposition is to "free software" licensed under a "copyleft" license, meaning a license that requires the licensee to adopt the same terms on any derivative work. See Bradford L. Smith, "The Future of Software: Enabling the Marketplace to Decide," /{Government Policy Toward Open Source Software}/ (Washington, D.C.: AEI-Brookings Joint Center for Regulatory Studies, American Enterprise Institute for Public Policy Research, 2002), 69, available at link #62. See also Craig Mundie, Microsoft senior vice president, /{The Commercial Software Model,}/ discussion at New York University Stern School of Business (3 May 2001), available at link #63. }~
+
+More important for our purposes, to support "open source and free software" is not to oppose copyright. "Open source and free software" is not software in the public domain. Instead, like Microsoft's software, the copyright owners of free and open source software insist quite strongly that the terms of their software license be respected by adopters of free and open source software. The terms of that license are no doubt different from the terms of a proprietary software license. Free software licensed under the General Public License (GPL), for example, requires that the source code for the software be made available by anyone who modifies and redistributes the software. But that requirement is effective only if copyright governs software. If copyright did not govern software, then free software could not impose the same kind of requirements on its adopters. It thus depends upon copyright law just as Microsoft does.
+
+It is therefore understandable that as a proprietary software developer, Microsoft would oppose this WIPO meeting, and understandable that it would use its lobbyists to get the United States government to oppose it, as well. And indeed, that is just what was reported to have happened. According to Jonathan Krim of the /{Washington Post}/, Microsoft's lobbyists succeeded in getting the United States government to veto the meeting.~{ Krim, "The Quiet War over Open-Source," available at link #64. }~ And without U.S. backing, the meeting was canceled.
+
+I don't blame Microsoft for doing what it can to advance its own interests, consistent with the law. And lobbying governments is plainly consistent with the law. There was nothing surprising about its lobbying here, and nothing terribly surprising about the most powerful software producer in the United States having succeeded in its lobbying efforts.
+
+What was surprising was the United States government's reason for opposing the meeting. Again, as reported by Krim, Lois Boland, acting director of international relations for the U.S. Patent and Trademark Office, explained that "open-source software runs counter to the mission of WIPO, which is to promote intellectual-property rights." She is quoted as saying, "To hold a meeting which has as its purpose to disclaim or waive such rights seems to us to be contrary to the goals of WIPO."
+
+These statements are astonishing on a number of levels.
+
+First, they are just flat wrong. As I described, most open source and free software relies fundamentally upon the intellectual property right called "copyright." Without it, restrictions imposed by those licenses wouldn't work. Thus, to say it "runs counter" to the mission of promoting intellectual property rights reveals an extraordinary gap in understanding - the sort of mistake that is excusable in a first-year law student, but an embarrassment from a high government official dealing with intellectual property issues.
+
+Second, who ever said that WIPO's exclusive aim was to "promote" intellectual property maximally? As I had been scolded at the preparatory conference of WSIS, WIPO is to consider not only how best to protect intellectual property, but also what the best balance of intellectual property is. As every economist and lawyer knows, the hard question in intellectual property law is to find that balance. But that there should be limits is, I had thought, uncontested. One wants to ask Ms. Boland, are generic drugs (drugs based on drugs whose patent has expired) contrary to the WIPO mission? Does the public domain weaken intellectual property? Would it have been better if the protocols of the Internet had been patented?
+
+Third, even if one believed that the purpose of WIPO was to maximize intellectual property rights, in our tradition, intellectual property rights are held by individuals and corporations. They get to decide what to do with those rights because, again, they are /{their}/ rights. If they want to "waive" or "disclaim" their rights, that is, within our tradition, totally appropriate. When Bill Gates gives away more than $20 billion to do good in the world, that is not inconsistent with the objectives of the property system. That is, on the contrary, just what a property system is supposed to be about: giving individuals the right to decide what to do with /{their}/ property.
+
+When Ms. Boland says that there is something wrong with a meeting "which has as its purpose to disclaim or waive such rights," she's saying that WIPO has an interest in interfering with the choices of the individuals who own intellectual property rights. That somehow, WIPO's objective should be to stop an individual from "waiving" or "disclaiming" an intellectual property right. That the interest of WIPO is not just that intellectual property rights be maximized, but that they also should be exercised in the most extreme and restrictive way possible.
+
+There is a history of just such a property system that is well known in the Anglo-American tradition. It is called "feudalism." Under feudalism, not only was property held by a relatively small number of individuals and entities. And not only were the rights that ran with that property powerful and extensive. But the feudal system had a strong interest in assuring that property holders within that system not weaken feudalism by liberating people or property within their control to the free market. Feudalism depended upon maximum control and concentration. It fought any freedom that might interfere with that control.
+
+As Peter Drahos and John Braithwaite relate, this is precisely the choice we are now making about intellectual property.~{ See Drahos with Braithwaite, /{Information Feudalism,}/ 210-20. }~ We will have an information society. That much is certain. Our only choice now is whether that information society will be /{free}/ or /{feudal}/. The trend is toward the feudal.
+
+When this battle broke, I blogged it. A spirited debate within the comment section ensued. Ms. Boland had a number of supporters who tried to show why her comments made sense. But there was one comment that was particularly depressing for me. An anonymous poster wrote,
+
+_1 George, you misunderstand Lessig: He's only talking about the world as it should be ("the goal of WIPO, and the goal of any government, should be to promote the right balance of intellectual-property rights, not simply to promote intellectual property rights"), not as it is. If we were talking about the world as it is, then of course Boland didn't say anything wrong. But in the world as Lessig would have it, then of course she did. Always pay attention to the distinction between Lessig's world and ours.
+
+I missed the irony the first time I read it. I read it quickly and thought the poster was supporting the idea that seeking balance was what our government should be doing. (Of course, my criticism of Ms. Boland was not about whether she was seeking balance or not; my criticism was that her comments betrayed a first-year law student's mistake. I have no illusion about the extremism of our government, whether Republican or Democrat. My only illusion apparently is about whether our government should speak the truth or not.)
+
+Obviously, however, the poster was not supporting that idea. Instead, the poster was ridiculing the very idea that in the real world, the "goal" of a government should be "to promote the right balance" of intellectual property. That was obviously silly to him. And it obviously betrayed, he believed, my own silly utopianism. "Typical for an academic," the poster might well have continued.
+
+I understand criticism of academic utopianism. I think utopianism is silly, too, and I'd be the first to poke fun at the absurdly unrealistic ideals of academics throughout history (and not just in our own country's history).
+
+But when it has become silly to suppose that the role of our government should be to "seek balance," then count me with the silly, for that means that this has become quite serious indeed. If it should be obvious to everyone that the government does not seek balance, that the government is simply the tool of the most powerful lobbyists, that the idea of holding the government to a different standard is absurd, that the idea of demanding of the government that it speak truth and not lies is just naïve, then who have we, the most powerful democracy in the world, become?
+
+It might be crazy to expect a high government official to speak the truth. It might be crazy to believe that government policy will be something more than the handmaiden of the most powerful interests. It might be crazy to argue that we should preserve a tradition that has been part of our tradition for most of our history - free culture.
+
+If this is crazy, then let there be more crazies. Soon.
+
+There are moments of hope in this struggle. And moments that surprise. When the FCC was considering relaxing ownership rules, which would thereby further increase the concentration in media ownership, an extraordinary bipartisan coalition formed to fight this change. For perhaps the first time in history, interests as diverse as the NRA, the ACLU, Moveon.org, William Safire, Ted Turner, and CodePink Women for Peace organized to oppose this change in FCC policy. An astonishing 700,000 letters were sent to the FCC, demanding more hearings and a different result.
+
+This activism did not stop the FCC, but soon after, a broad coalition in the Senate voted to reverse the FCC decision. The hostile hearings leading up to that vote revealed just how powerful this movement had become. There was no substantial support for the FCC's decision, and there was broad and sustained support for fighting further concentration in the media.
+
+But even this movement misses an important piece of the puzzle. Largeness as such is not bad. Freedom is not threatened just because some become very rich, or because there are only a handful of big players. The poor quality of Big Macs or Quarter Pounders does not mean that you can't get a good hamburger from somewhere else.
+
+The danger in media concentration comes not from the concentration, but instead from the feudalism that this concentration, tied to the change in copyright, produces. It is not just that there are a few powerful companies that control an ever expanding slice of the media. It is that this concentration can call upon an equally bloated range of rights - property rights of a historically extreme form - that makes their bigness bad.
+
+It is therefore significant that so many would rally to demand competition and increased diversity. Still, if the rally is understood as being about bigness alone, it is not terribly surprising. We Americans have a long history of fighting "big," wisely or not. That we could be motivated to fight "big" again is not something new.
+
+It would be something new, and something very important, if an equal number could be rallied to fight the increasing extremism built within the idea of "intellectual property." Not because balance is alien to our tradition; indeed, as I've argued, balance is our tradition. But because the muscle to think critically about the scope of anything called "property" is not well exercised within this tradition anymore.
+
+If we were Achilles, this would be our heel. This would be the place of our tragedy.
+
+As I write these final words, the news is filled with stories about the RIAA lawsuits against almost three hundred individuals.~{ John Borland, "RIAA Sues 261 File Swappers," CNET News.com, 8 September 2003, available at link #65; Paul R. La Monica, "Music Industry Sues Swappers," CNN/Money, 8 September 2003, available at link #66; Soni Sangha and Phyllis Furman with Robert Gearty, "Sued for a Song, N.Y.C. 12-Yr-Old Among 261 Cited as Sharers," /{New York Daily News,}/ 9 September 2003, 3; Frank Ahrens, "RIAA's Lawsuits Meet Surprised Targets; Single Mother in Calif., 12-Year-Old Girl in N.Y. Among Defendants," /{Washington Post,}/ 10 September 2003, E1; Katie Dean, "Schoolgirl Settles with RIAA," /{Wired News,}/ 10 September 2003, available at link #67. }~ Eminem has just been sued for "sampling" someone else's music.~{ Jon Wiederhorn, "Eminem Gets Sued ... by a Little Old Lady," mtv.com, 17 September 2003, available at link #68. }~ The story about Bob Dylan "stealing" from a Japanese author has just finished making the rounds.~{ Kenji Hall, Associated Press, "Japanese Book May Be Inspiration for Dylan Songs," Kansascity.com, 9 July 2003, available at link #69. }~ An insider from Hollywood - who insists he must remain anonymous - reports "an amazing conversation with these studio guys. They've got extraordinary [old] content that they'd love to use but can't because they can't begin to clear the rights. They've got scores of kids who could do amazing things with the content, but it would take scores of lawyers to clear it first." Congressmen are talking about deputizing computer viruses to bring down computers thought to violate the law. Universities are threatening expulsion for kids who use a computer to share content.
+
+Yet on the other side of the Atlantic, the BBC has just announced that it will build a "Creative Archive," from which British citizens can download BBC content, and rip, mix, and burn it.~{ "BBC Plans to Open Up Its Archive to the Public," BBC press release, 24 August 2003, available at link #70. }~ And in Brazil, the culture minister, Gilberto Gil, himself a folk hero of Brazilian music, has joined with Creative Commons to release content and free licenses in that Latin American country.~{ "Creative Commons and Brazil," Creative Commons Weblog, 6 August 2003, available at link #71. }~
+
+I've told a dark story. The truth is more mixed. A technology has given us a new freedom. Slowly, some begin to understand that this freedom need not mean anarchy. We can carry a free culture into the twenty-first century, without artists losing and without the potential of digital technology being destroyed. It will take some thought, and more importantly, it will take some will to transform the RCAs of our day into the Causbys.
+
+Common sense must revolt. It must act to free culture. Soon, if this potential is ever to be realized.
+
+:C~ AFTERWORD
+
+1~intro_afterword [Intro]-#
+
+*{At least some}* who have read this far will agree with me that something must be done to change where we are heading. The balance of this book maps what might be done.
+
+I divide this map into two parts: that which anyone can do now, and that which requires the help of lawmakers. If there is one lesson that we can draw from the history of remaking common sense, it is that it requires remaking how many people think about the very same issue.
+
+That means this movement must begin in the streets. It must recruit a significant number of parents, teachers, librarians, creators, authors, musicians, filmmakers, scientists - all to tell this story in their own words, and to tell their neighbors why this battle is so important.
+
+Once this movement has its effect in the streets, it has some hope of having an effect in Washington. We are still a democracy. What people think matters. Not as much as it should, at least when an RCA stands opposed, but still, it matters. And thus, in the second part below, I sketch changes that Congress could make to better secure a free culture.
+
+1~us US, NOW
+
+*{Common sense}* is with the copyright warriors because the debate so far has been framed at the extremes - as a grand either/or: either property or anarchy, either total control or artists won't be paid. If that really is the choice, then the warriors should win.
+
+The mistake here is the error of the excluded middle. There are extremes in this debate, but the extremes are not all that there is. There are those who believe in maximal copyright - "All Rights Reserved" - and those who reject copyright - "No Rights Reserved." The "All Rights Reserved" sorts believe that you should ask permission before you "use" a copyrighted work in any way. The "No Rights Reserved" sorts believe you should be able to do with content as you wish, regardless of whether you have permission or not.
+
+When the Internet was first born, its initial architecture effectively tilted in the "no rights reserved" direction. Content could be copied perfectly and cheaply; rights could not easily be controlled. Thus, regardless of anyone's desire, the effective regime of copyright under the original design of the Internet was "no rights reserved." Content was "taken" regardless of the rights. Any rights were effectively unprotected.
+
+This initial character produced a reaction (opposite, but not quite equal) by copyright owners. That reaction has been the topic of this book. Through legislation, litigation, and changes to the network's design, copyright holders have been able to change the essential character of the environment of the original Internet. If the original architecture made the effective default "no rights reserved," the future architecture will make the effective default "all rights reserved." The architecture and law that surround the Internet's design will increasingly produce an environment where all use of content requires permission. The "cut and paste" world that defines the Internet today will become a "get permission to cut and paste" world that is a creator's nightmare.
+
+What's needed is a way to say something in the middle - neither "all rights reserved" nor "no rights reserved" but "some rights reserved" - and thus a way to respect copyrights but enable creators to free content as they see fit. In other words, we need a way to restore a set of freedoms that we could just take for granted before.
+
+2~ Rebuilding Freedoms Previously Presumed: Examples
+
+If you step back from the battle I've been describing here, you will recognize this problem from other contexts. Think about privacy. Before the Internet, most of us didn't have to worry much about data about our lives that we broadcast to the world. If you walked into a bookstore and browsed through some of the works of Karl Marx, you didn't need to worry about explaining your browsing habits to your neighbors or boss. The "privacy" of your browsing habits was assured.
+
+What made it assured?
+
+Well, if we think in terms of the modalities I described in chapter 10, your privacy was assured because of an inefficient architecture for gathering data and hence a market constraint (cost) on anyone who wanted to gather that data. If you were a suspected spy for North Korea, working for the CIA, no doubt your privacy would not be assured. But that's because the CIA would (we hope) find it valuable enough to spend the thousands required to track you. But for most of us (again, we can hope), spying doesn't pay. The highly inefficient architecture of real space means we all enjoy a fairly robust amount of privacy. That privacy is guaranteed to us by friction. Not by law (there is no law protecting "privacy" in public places), and in many places, not by norms (snooping and gossip are just fun), but instead, by the costs that friction imposes on anyone who would want to spy.
+
+Enter the Internet, where the cost of tracking browsing in particular has become quite tiny. If you're a customer at Amazon, then as you browse the pages, Amazon collects the data about what you've looked at. You know this because at the side of the page, there's a list of "recently viewed" pages. Now, because of the architecture of the Net and the function of cookies on the Net, it is easier to collect the data than not. The friction has disappeared, and hence any "privacy" protected by the friction disappears, too.
+
+Amazon, of course, is not the problem. But we might begin to worry about libraries. If you're one of those crazy lefties who thinks that people should have the "right" to browse in a library without the government knowing which books you look at (I'm one of those lefties, too), then this change in the technology of monitoring might concern you. If it becomes simple to gather and sort who does what in electronic spaces, then the friction-induced privacy of yesterday disappears.
+
+It is this reality that explains the push of many to define "privacy" on the Internet. It is the recognition that technology can remove what friction before gave us that leads many to push for laws to do what friction did.~{ See, for example, Marc Rotenberg, "Fair Information Practices and the Architecture of Privacy (What Larry Doesn't Get)," /{Stanford Technology Law Review}/ 1 (2001): par. 6-18, available at link #72 (describing examples in which technology defines privacy policy). See also Jeffrey Rosen, /{The Naked Crowd: Reclaiming Security and Freedom in an Anxious Age}/ (New York: Random House, 2004) (mapping tradeoffs between technology and privacy). }~ And whether you're in favor of those laws or not, it is the pattern that is important here. We must take affirmative steps to secure a kind of freedom that was passively provided before. A change in technology now forces those who believe in privacy to affirmatively act where, before, privacy was given by default.
+
+A similar story could be told about the birth of the free software movement. When computers with software were first made available commercially, the software - both the source code and the binaries - was free. You couldn't run a program written for a Data General machine on an IBM machine, so Data General and IBM didn't care much about controlling their software.
+
+That was the world Richard Stallman was born into, and while he was a researcher at MIT, he grew to love the community that developed when one was free to explore and tinker with the software that ran on machines. Being a smart sort himself, and a talented programmer, Stallman grew to depend upon the freedom to add to or modify other people's work.
+
+In an academic setting, at least, that's not a terribly radical idea. In a math department, anyone would be free to tinker with a proof that someone offered. If you thought you had a better way to prove a theorem, you could take what someone else did and change it. In a classics department, if you believed a colleague's translation of a recently discovered text was flawed, you were free to improve it. Thus, to Stallman, it seemed obvious that you should be free to tinker with and improve the code that ran a machine. This, too, was knowledge. Why shouldn't it be open for criticism like anything else?
+
+No one answered that question. Instead, the architecture of revenue for computing changed. As it became possible to import programs from one system to another, it became economically attractive (at least in the view of some) to hide the code of your program. So, too, as companies started selling peripherals for mainframe systems. If I could just take your printer driver and copy it, then that would make it easier for me to sell a printer to the market than it was for you.
+
+Thus, the practice of proprietary code began to spread, and by the early 1980s, Stallman found himself surrounded by proprietary code. The world of free software had been erased by a change in the economics of computing. And as he believed, if he did nothing about it, then the freedom to change and share software would be fundamentally weakened.
+
+Therefore, in 1984, Stallman began a project to build a free operating system, so that at least a strain of free software would survive. That was the birth of the GNU project, into which Linus Torvalds's "Linux" kernel was added to produce the GNU/Linux operating system.
+
+Stallman's technique was to use copyright law to build a world of software that must be kept free. Software licensed under the Free Software Foundation's GPL cannot be modified and distributed unless the source code for that software is made available as well. Thus, anyone building upon GPL'd software would have to make their buildings free as well. This would assure, Stallman believed, that an ecology of code would develop that remained free for others to build upon. His fundamental goal was freedom; innovative creative code was a byproduct.
+
+Stallman was thus doing for software what privacy advocates now do for privacy. He was seeking a way to rebuild a kind of freedom that was taken for granted before. Through the use of licenses that bind copyrighted code, Stallman was affirmatively reclaiming a space where free software would survive. He was actively protecting what before had been passively guaranteed.
+
+Finally, consider a very recent example that more directly resonates with the story of this book. This is the shift in the way academic and scientific journals are produced.
+
+As digital technologies develop, it is becoming obvious to many that printing thousands of copies of journals every month and sending them to libraries is perhaps not the most efficient way to distribute knowledge. Instead, journals are increasingly becoming electronic, and libraries and their users are given access to these electronic journals through password-protected sites. Something similar to this has been happening in law for almost thirty years: Lexis and Westlaw have had electronic versions of case reports available to subscribers to their service. Although a Supreme Court opinion is not copyrighted, and anyone is free to go to a library and read it, Lexis and Westlaw are also free to charge users for the privilege of gaining access to that Supreme Court opinion through their respective services.
+
+There's nothing wrong in general with this, and indeed, the ability to charge for access to even public domain materials is a good incentive for people to develop new and innovative ways to spread knowledge. The law has agreed, which is why Lexis and Westlaw have been allowed to flourish. And if there's nothing wrong with selling the public domain, then there could be nothing wrong, in principle, with selling access to material that is not in the public domain.
+
+But what if the only way to get access to social and scientific data was through proprietary services? What if no one had the ability to browse this data except by paying for a subscription?
+
+As many are beginning to notice, this is increasingly the reality with scientific journals. When these journals were distributed in paper form, libraries could make the journals available to anyone who had access to the library. Thus, patients with cancer could become cancer experts because the library gave them access. Or patients trying to understand the risks of a certain treatment could research those risks by reading all available articles about that treatment. This freedom was therefore a function of the institution of libraries (norms) and the technology of paper journals (architecture) - namely, that it was very hard to control access to a paper journal.
+
+As journals become electronic, however, the publishers are demanding that libraries not give the general public access to the journals. This means that the freedoms provided by print journals in public libraries begin to disappear. Thus, as with privacy and with software, a changing technology and market shrink a freedom taken for granted before.
+
+This shrinking freedom has led many to take affirmative steps to restore the freedom that has been lost. The Public Library of Science (PLoS), for example, is a nonprofit corporation dedicated to making scientific research available to anyone with a Web connection. Authors of scientific work submit that work to the Public Library of Science. That work is then subject to peer review. If accepted, the work is then deposited in a public, electronic archive and made permanently available for free. PLoS also sells a print version of its work, but the copyright for the print journal does not inhibit the right of anyone to redistribute the work for free.
+
+This is one of many such efforts to restore a freedom taken for granted before, but now threatened by changing technology and markets. There's no doubt that this alternative competes with the traditional publishers and their efforts to make money from the exclusive distribution of content. But competition in our tradition is presumptively a good - especially when it helps spread knowledge and science.
+
+2~ Rebuilding Free Culture: One Idea
+
+The same strategy could be applied to culture, as a response to the increasing control effected through law and technology.
+
+Enter the Creative Commons. The Creative Commons is a nonprofit corporation established in Massachusetts, but with its home at Stanford University. Its aim is to build a layer of /{reasonable}/ copyright on top of the extremes that now reign. It does this by making it easy for people to build upon other people's work, by making it simple for creators to express the freedom for others to take and build upon their work. Simple tags, tied to human-readable descriptions, tied to bullet-proof licenses, make this possible.
+
+/{Simple}/ - which means without a middleman, or without a lawyer. By developing a free set of licenses that people can attach to their content, Creative Commons aims to mark a range of content that can easily, and reliably, be built upon. These tags are then linked to machine-readable versions of the license that enable computers automatically to identify content that can easily be shared. These three expressions together - a legal license, a human-readable description, and machine-readable tags - constitute a Creative Commons license. A Creative Commons license constitutes a grant of freedom to anyone who accesses the license, and more importantly, an expression of the ideal that the person associated with the license believes in something different from the "All" or "No" extremes. Content is marked with the CC mark, which does not mean that copyright is waived, but that certain freedoms are given.
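
A minimal sketch of what "machine-readable" can mean in practice: a program can scan a page's HTML for a license link and identify shareable content without human help. The `rel="license"` link convention used below is a real HTML idiom for embedding license tags, but the sample page and its license URL are invented for illustration.

```python
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collects the href of every <a rel="license"> link on a page."""

    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # attrs arrives as a list of (name, value) pairs
        if tag == "a" and a.get("rel") == "license" and "href" in a:
            self.licenses.append(a["href"])

# Hypothetical page marked with a Creative Commons license link.
page = ('<p>Photo: <a rel="license" '
        'href="http://creativecommons.org/licenses/by-nc/1.0/">'
        'Some Rights Reserved</a></p>')

finder = LicenseFinder()
finder.feed(page)
# finder.licenses now holds the machine-readable license URL(s).
```

A search engine or archiving tool running a scan like this over many pages could automatically build an index of content whose owners have granted freedoms in advance.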
+
+These freedoms are beyond the freedoms promised by fair use. Their precise contours depend upon the choices the creator makes. The creator can choose a license that permits any use, so long as attribution is given. She can choose a license that permits only noncommercial use. She can choose a license that permits any use so long as the same freedoms are given to other uses ("share and share alike"). Or any use so long as no derivative use is made. Or any use at all within developing nations. Or any sampling use, so long as full copies are not made. Or lastly, any educational use.
+
+These choices thus establish a range of freedoms beyond the default of copyright law. They also enable freedoms that go beyond traditional fair use. And most importantly, they express these freedoms in a way that subsequent users can use and rely upon without the need to hire a lawyer. Creative Commons thus aims to build a layer of content, governed by a layer of reasonable copyright law, that others can build upon. Voluntary choice of individuals and creators will make this content available. And that content will in turn enable us to rebuild a public domain.
+
+This is just one project among many within the Creative Commons. And of course, Creative Commons is not the only organization pursuing such freedoms. But the point that distinguishes the Creative Commons from many is that we are not interested only in talking about a public domain or in getting legislators to help build a public domain. Our aim is to build a movement of consumers and producers of content ("content conducers," as attorney Mia Garlick calls them) who help build the public domain and, by their work, demonstrate the importance of the public domain to other creativity.
+
+The aim is not to fight the "All Rights Reserved" sorts. The aim is to complement them. The problems that the law creates for us as a culture are produced by insane and unintended consequences of laws written centuries ago, applied to a technology that only Jefferson could have imagined. The rules may well have made sense against a background of technologies from centuries ago, but they do not make sense against the background of digital technologies. New rules - with different freedoms, expressed in ways so that humans without lawyers can use them - are needed. Creative Commons gives people a way effectively to begin to build those rules.
+
+Why would creators participate in giving up total control? Some participate to better spread their content. Cory Doctorow, for example, is a science fiction author. His first novel, /{Down and Out in the Magic Kingdom}/, was released on-line and for free, under a Creative Commons license, on the same day that it went on sale in bookstores.
+
+Why would a publisher ever agree to this? I suspect his publisher reasoned like this: There are two groups of people out there: (1) those who will buy Cory's book whether or not it's on the Internet, and (2) those who may never hear of Cory's book, if it isn't made available for free on the Internet. Some part of (1) will download Cory's book instead of buying it. Call them bad-(1)s. Some part of (2) will download Cory's book, like it, and then decide to buy it. Call them (2)-goods. If there are more (2)-goods than bad-(1)s, the strategy of releasing Cory's book free on-line will probably /{increase}/ sales of Cory's book.
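
The publisher's reasoning above is simple arithmetic, and can be sketched as a back-of-the-envelope calculation. All numbers here are hypothetical; the point is only the direction of the trade-off, not any actual sales data.

```python
# Hypothetical illustration of the publisher's trade-off described above.
# bad_1: would-be buyers who download the free copy instead of buying (lost sales).
# good_2: readers who would never have heard of the book, download it,
#         like it, and then buy it (gained sales).

def net_sales_change(bad_1: int, good_2: int) -> int:
    """Net change in sales from releasing the book free on-line."""
    return good_2 - bad_1

# If there are more (2)-goods than bad-(1)s, free release increases sales.
assert net_sales_change(bad_1=1_000, good_2=2_500) > 0

# If the losses dominate instead, free release decreases sales.
assert net_sales_change(bad_1=2_500, good_2=1_000) < 0
```

The whole bet, in other words, turns on which of the two groups is larger - which is an empirical question, and one the publisher's actual experience (described next) answered in favor of free release.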
+
+Indeed, the experience of his publisher clearly supports that conclusion. The book's first printing was exhausted months before the publisher had expected. This first novel of a science fiction author was a total success.
+
+The idea that free content might increase the value of nonfree content was confirmed by the experience of another author. Peter Wayner, who wrote a book about the free software movement titled /{Free for All}/, made an electronic version of his book free on-line under a Creative Commons license after the book went out of print. He then monitored used book store prices for the book. As predicted, as the number of downloads increased, the used book price for his book increased, as well.
+
+These are examples of using the Commons to better spread proprietary content. I believe that is a wonderful and common use of the Commons. There are others who use Creative Commons licenses for other reasons. Many who use the "sampling license" do so because anything else would be hypocritical. The sampling license says that others are free, for commercial or noncommercial purposes, to sample content from the licensed work; they are just not free to make full copies of the licensed work available to others. This is consistent with their own art - they, too, sample from others. Because the /{legal}/ costs of sampling are so high (Walter Leaphart, manager of the rap group Public Enemy, which was born sampling the music of others, has stated that he does not "allow" Public Enemy to sample anymore, because the legal costs are so high~{ /{Willful Infringement: A Report from the Front Lines of the Real Culture Wars}/ (2003), produced by Jed Horovitz, directed by Greg Hittelman, a Fiat Lucre production, available at link #72. }~), these artists release into the creative environment content that others can build upon, so that their form of creativity might grow.
+
+Finally, there are many who mark their content with a Creative Commons license just because they want to express to others the importance of balance in this debate. If you just go along with the system as it is, you are effectively saying you believe in the "All Rights Reserved" model. Good for you, but many do not. Many believe that however appropriate that rule is for Hollywood and freaks, it is not an appropriate description of how most creators view the rights associated with their content. The Creative Commons license expresses this notion of "Some Rights Reserved," and gives many the chance to say it to others.
+
+In the first six months of the Creative Commons experiment, over 1 million objects were licensed with these free-culture licenses. The next step is partnerships with middleware content providers to help them build into their technologies simple ways for users to mark their content with Creative Commons freedoms. Then the next step is to watch and celebrate creators who build content based upon content set free.
+
+These are first steps to rebuilding a public domain. They are not mere arguments; they are action. Building a public domain is the first step to showing people how important that domain is to creativity and innovation. Creative Commons relies upon voluntary steps to achieve this rebuilding. They will lead to a world in which more than voluntary steps are possible.
+
+Creative Commons is just one example of voluntary efforts by individuals and creators to change the mix of rights that now govern the creative field. The project does not compete with copyright; it complements it. Its aim is not to defeat the rights of authors, but to make it easier for authors and creators to exercise their rights more flexibly and cheaply. That difference, we believe, will enable creativity to spread more easily.
+
+1~them THEM, SOON
+
+*{We will}* not reclaim a free culture by individual action alone. It will also take important reforms of laws. We have a long way to go before the politicians will listen to these ideas and implement these reforms. But that also means that we have time to build awareness around the changes that we need.
+
+In this chapter, I outline five kinds of changes: four that are general, and one that's specific to the most heated battle of the day, music. Each is a step, not an end. But any of these steps would carry us a long way to our end.
+
+2~1 1. More Formalities
+
+If you buy a house, you have to record the sale in a deed. If you buy land upon which to build a house, you have to record the purchase in a deed. If you buy a car, you get a bill of sale and register the car. If you buy an airplane ticket, it has your name on it.
+
+These are all formalities associated with property. They are requirements that we all must bear if we want our property to be protected.
+
+In contrast, under current copyright law, you automatically get a copyright, regardless of whether you comply with any formality. You don't have to register. You don't even have to mark your content. The default is control, and "formalities" are banished.
+
+Why?
+
+As I suggested in chapter 10, the motivation to abolish formalities was a good one. In the world before digital technologies, formalities imposed a burden on copyright holders without much benefit. Thus, it was progress when the law relaxed the formal requirements that a copyright owner must bear to protect and secure his work. Those formalities were getting in the way.
+
+But the Internet changes all this. Formalities today need not be a burden. Rather, the world without formalities is the world that burdens creativity. Today, there is no simple way to know who owns what, or with whom one must deal in order to use or build upon the creative work of others. There are no records, there is no system to trace - there is no simple way to know how to get permission. Yet given the massive increase in the scope of copyright's rule, getting permission is a necessary step for any work that builds upon our past. And thus, the /{lack}/ of formalities forces many into silence where they otherwise could speak.
+
+The law should therefore change this requirement~{ The proposal I am advancing here would apply to American works only. Obviously, I believe it would be beneficial for the same idea to be adopted by other countries as well. }~ - but it should not change it by going back to the old, broken system. We should require formalities, but we should establish a system that will create the incentives to minimize the burden of these formalities.
+
+The important formalities are three: marking copyrighted work, registering copyrights, and renewing the claim to copyright. Traditionally, the first of these three was something the copyright owner did; the second two were something the government did. But a revised system of formalities would banish the government from the process, except for the sole purpose of approving standards developed by others.
+
+3~ Registration and Renewal
+
+Under the old system, a copyright owner had to file a registration with the Copyright Office to register or renew a copyright. When filing that registration, the copyright owner paid a fee. As with most government agencies, the Copyright Office had little incentive to minimize the burden of registration; it also had little incentive to minimize the fee. And as the Copyright Office is not a main target of government policy-making, the office has historically been terribly underfunded. Thus, when people who know something about the process hear this idea about formalities, their first reaction is panic - nothing could be worse than forcing people to deal with the mess that is the Copyright Office.
+
+Yet it is always astonishing to me that we, who come from a tradition of extraordinary innovation in governmental design, can no longer think innovatively about how governmental functions can be designed. Just because there is a public purpose to a government role, it doesn't follow that the government must actually administer the role. Instead, we should be creating incentives for private parties to serve the public, subject to standards that the government sets.
+
+In the context of registration, one obvious model is the Internet. There are at least 32 million Web sites registered around the world. Domain name owners for these Web sites have to pay a fee to keep their registration alive. In the main top-level domains (.com, .org, .net), there is a central registry. The actual registrations are, however, performed by many competing registrars. That competition drives the cost of registering down, and more importantly, it drives the ease with which registration occurs up.
+
+We should adopt a similar model for the registration and renewal of copyrights. The Copyright Office may well serve as the central registry, but it should not be in the registrar business. Instead, it should establish a database, and a set of standards for registrars. It should approve registrars that meet its standards. Those registrars would then compete with one another to deliver the cheapest and simplest systems for registering and renewing copyrights. That competition would substantially lower the burden of this formality - while producing a database of registrations that would facilitate the licensing of content.
+
+3~ Marking
+
+It used to be that the failure to include a copyright notice on a creative work meant that the copyright was forfeited. That was a harsh punishment for failing to comply with a regulatory rule - akin to imposing the death penalty for a parking ticket in the world of creative rights. Here again, there is no reason that a marking requirement needs to be enforced in this way. And more importantly, there is no reason a marking requirement needs to be enforced uniformly across all media.
+
+The aim of marking is to signal to the public that this work is copyrighted and that the author wants to enforce his rights. The mark also makes it easy to locate a copyright owner to secure permission to use the work.
+
+One of the problems the copyright system confronted early on was that different copyrighted works had to be differently marked. It wasn't clear how or where a statue was to be marked, or a record, or a film. A new marking requirement could solve these problems by recognizing the differences in media, and by allowing the system of marking to evolve as technologies enable it to. The system could enable a special signal from the failure to mark - not the loss of the copyright, but the loss of the right to punish someone for failing to get permission first.
+
+Let's start with the last point. If a copyright owner allows his work to be published without a copyright notice, the consequence of that failure need not be that the copyright is lost. The consequence could instead be that anyone has the right to use this work, until the copyright owner complains and demonstrates that it is his work and he doesn't give permission.~{ There would be a complication with derivative works that I have not solved here. In my view, the law of derivatives creates a more complicated system than is justified by the marginal incentive it creates. }~ The meaning of an unmarked work would therefore be "use unless someone complains." If someone does complain, then the obligation would be to stop using the work in any new work from then on, though no penalty would attach for existing uses. This would create a strong incentive for copyright owners to mark their work.
+
+That in turn raises the question about how work should best be marked. Here again, the system needs to adjust as the technologies evolve. The best way to ensure that the system evolves is to limit the Copyright Office's role to that of approving standards for marking content that have been crafted elsewhere.
+
+For example, if a recording industry association devises a method for marking CDs, it would propose that to the Copyright Office. The Copyright Office would hold a hearing, at which other proposals could be made. The Copyright Office would then select the proposal that it judged preferable, and it would base that choice /{solely}/ upon the consideration of which method could best be integrated into the registration and renewal system. We would not count on the government to innovate; but we would count on the government to keep the product of innovation in line with its other important functions.
+
+Finally, marking content clearly would simplify registration requirements. If photographs were marked by author and year, there would be little reason not to allow a photographer to reregister, for example, all photographs taken in a particular year in one quick step. The aim of the formality is not to burden the creator; the system itself should be kept as simple as possible.
+
+The objective of formalities is to make things clear. The existing system does nothing to make things clear. Indeed, it seems designed to make things unclear.
+
+If formalities such as registration were reinstated, one of the most difficult aspects of relying upon the public domain would be removed. It would be simple to identify what content is presumptively free; it would be simple to identify who controls the rights for a particular kind of content; it would be simple to assert those rights, and to renew that assertion at the appropriate time.
+
+2~2 2. Shorter Terms
+
+The term of copyright has gone from fourteen years to ninety-five years for corporate authors, and life of the author plus seventy years for natural authors.
+
+In /{The Future of Ideas}/, I proposed a seventy-five-year term, granted in five-year increments with a requirement of renewal every five years. That seemed radical enough at the time. But after we lost /{Eldred}/ v. /{Ashcroft}/, the proposals became even more radical. /{The Economist}/ endorsed a proposal for a fourteen-year copyright term.~{ "A Radical Rethink," /{Economist,}/ 366:8308 (25 January 2003): 15, available at link #74. }~ Others have proposed tying the term to the term for patents.
+
+I agree with those who believe that we need a radical change in copyright's term. But whether fourteen years or seventy-five, there are four principles that are important to keep in mind about copyright terms.
+
+_1 (1) /{Keep it short:}/ The term should be as long as necessary to give incentives to create, but no longer. If it were tied to very strong protections for authors (so authors were able to reclaim rights from publishers), rights to the same work (not derivative works) might be extended further. The key is not to tie the work up with legal regulations when it no longer benefits an author.
+
+_1 (2) /{Keep it simple:}/ The line between the public domain and protected content must be kept clear. Lawyers like the fuzziness of "fair use," and the distinction between "ideas" and "expression." That kind of law gives them lots of work. But our framers had a simpler idea in mind: protected versus unprotected. The value of short terms is that there is little need to build exceptions into copyright when the term itself is kept short. A clear and active "lawyer-free zone" makes the complexities of "fair use" and "idea/expression" less necessary to navigate.
+
+_1 (3) /{Keep it alive:}/ Copyright should have to be renewed. Especially if the maximum term is long, the copyright owner should be required to signal periodically that he wants the protection continued. This need not be an onerous burden, but there is no reason this monopoly protection has to be granted for free. On average, it takes ninety minutes for a veteran to apply for a pension. ~{ Department of Veterans Affairs, Veteran's Application for Compensation and/or Pension, VA Form 21-526 (OMB Approved No. 2900-0001), available at link #75. }~ If we make veterans suffer that burden, I don't see why we couldn't require authors to spend ten minutes every fifty years to file a single form.
+
+_1 (4) /{Keep it prospective:}/ Whatever the term of copyright should be, the clearest lesson that economists teach is that a term once given should not be extended. It might have been a mistake in 1923 for the law to offer authors only a fifty-six-year term. I don't think so, but it's possible. If it was a mistake, then the consequence was that we got fewer authors to create in 1923 than we otherwise would have. But we can't correct that mistake today by increasing the term. No matter what we do today, we will not increase the number of authors who wrote in 1923. Of course, we can increase the reward that those who write now get (or alternatively, increase the copyright burden that smothers many works that are today invisible). But increasing their reward will not increase their creativity in 1923. What's not done is not done, and there's nothing we can do about that now.
+
+These changes together should produce an /{average}/ copyright term that is much shorter than the current term. Until 1976, the average term was just 32.2 years. We should be aiming for the same.
+
+No doubt the extremists will call these ideas "radical." (After all, I call them "extremists.") But again, the term I recommended was longer than the term under Richard Nixon. How "radical" can it be to ask for a more generous copyright law than Richard Nixon presided over?
+
+2~3 3. Free Use Vs. Fair Use
+
+As I observed at the beginning of this book, property law originally granted property owners the right to control their property from the ground to the heavens. The airplane came along. The scope of property rights quickly changed. There was no fuss, no constitutional challenge. It made no sense anymore to grant that much control, given the emergence of that new technology.
+
+Our Constitution gives Congress the power to give authors "exclusive right" to "their writings." Congress has given authors an exclusive right to "their writings" plus any derivative writings (made by others) that are sufficiently close to the author's original work. Thus, if I write a book, and you base a movie on that book, I have the power to deny you the right to release that movie, even though that movie is not "my writing."
+
+Congress granted the beginnings of this right in 1870, when it expanded the exclusive right of copyright to include a right to control translations and dramatizations of a work.~{ Benjamin Kaplan, /{An Unhurried View of Copyright}/ (New York: Columbia University Press, 1967), 32. }~ The courts have expanded it slowly through judicial interpretation ever since. This expansion has been commented upon by one of the law's greatest judges, Judge Benjamin Kaplan.
+
+_1 So inured have we become to the extension of the monopoly to a large range of so-called derivative works, that we no longer sense the oddity of accepting such an enlargement of copyright while yet intoning the abracadabra of idea and expression.~{ Ibid., 56. }~
+
+I think it's time to recognize that there are airplanes in this field, and that the expansiveness of these rights of derivative use no longer makes sense. More precisely, they don't make sense for the period of time that a copyright runs. And they don't make sense as an amorphous grant. Consider each limitation in turn.
+
+/{Term:}/ If Congress wants to grant a derivative right, then that right should be for a much shorter term. It makes sense to protect John Grisham's right to sell the movie rights to his latest novel (or at least I'm willing to assume it does); but it does not make sense for that right to run for the same term as the underlying copyright. The derivative right could be important in inducing creativity; it is not important long after the creative work is done.
+
+/{Scope:}/ Likewise should the scope of derivative rights be narrowed. Again, there are some cases in which derivative rights are important. Those should be specified. But the law should draw clear lines around regulated and unregulated uses of copyrighted material. When all "reuse" of creative material was within the control of businesses, perhaps it made sense to require lawyers to negotiate the lines. It no longer makes sense for lawyers to negotiate the lines. Think about all the creative possibilities that digital technologies enable; now imagine pouring molasses into the machines. That's what this general requirement of permission does to the creative process. Smothers it.
+
+This was the point that Alben made when describing the making of the Clint Eastwood CD. While it makes sense to require negotiation for foreseeable derivative rights - turning a book into a movie, or a poem into a musical score - it doesn't make sense to require negotiation for the unforeseeable. Here, a statutory right would make much more sense.
+
+In each of these cases, the law should mark the uses that are protected, and the presumption should be that other uses are not protected. This is the reverse of the recommendation of my colleague Paul Goldstein.~{ Paul Goldstein, /{Copyright's Highway: From Gutenberg to the Celestial Jukebox}/ (Stanford: Stanford University Press, 2003), 187-216. }~ His view is that the law should be written so that expanded protections follow expanded uses.
+
+Goldstein's analysis would make perfect sense if the cost of the legal system were small. But as we are currently seeing in the context of the Internet, the uncertainty about the scope of protection, and the incentives to protect existing architectures of revenue, combined with a strong copyright, weaken the process of innovation.
+
+The law could remedy this problem either by removing protection beyond the part explicitly drawn or by granting reuse rights upon certain statutory conditions. Either way, the effect would be to free a great deal of culture to others to cultivate. And under a statutory rights regime, that reuse would earn artists more income.
+
+2~4 4. Liberate the Music - Again
+
+The battle that got this whole war going was about music, so it wouldn't be fair to end this book without addressing the issue that is, to most people, most pressing - music. There is no other policy issue that better teaches the lessons of this book than the battles around the sharing of music.
+
+The appeal of file-sharing music was the crack cocaine of the Internet's growth. It drove demand for access to the Internet more powerfully than any other single application. It was the Internet's killer app - possibly in two senses of that word. It no doubt was the application that drove demand for bandwidth. It may well be the application that drives demand for regulations that in the end kill innovation on the network.
+
+The aim of copyright, with respect to content in general and music in particular, is to create the incentives for music to be composed, performed, and, most importantly, spread. The law does this by giving an exclusive right to a composer to control public performances of his work, and to a performing artist to control copies of her performance.
+
+File-sharing networks complicate this model by enabling the spread of content for which the performer has not been paid. But of course, that's not all the file-sharing networks do. As I described in chapter 5, they enable four different kinds of sharing:
+
+_1 A. There are some who are using sharing networks as substitutes for purchasing CDs.
+
+_1 B. There are also some who are using sharing networks to sample, on the way to purchasing CDs.
+
+_1 C. There are many who are using file-sharing networks to get access to content that is no longer sold but is still under copyright or that would have been too cumbersome to buy off the Net.
+
+_1 D. There are many who are using file-sharing networks to get access to content that is not copyrighted or to get access that the copyright owner plainly endorses.
+
+Any reform of the law needs to keep these different uses in focus. It must avoid burdening type D even if it aims to eliminate type A. The eagerness with which the law aims to eliminate type A, moreover, should depend upon the magnitude of type B. As with VCRs, if the net effect of sharing is actually not very harmful, the need for regulation is significantly weakened.
+
+As I said in chapter 5, the actual harm caused by sharing is controversial. For the purposes of this chapter, however, I assume the harm is real. I assume, in other words, that type A sharing is significantly greater than type B, and is the dominant use of sharing networks.
+
+Nonetheless, there is a crucial fact about the current technological context that we must keep in mind if we are to understand how the law should respond.
+
+Today, file sharing is addictive. In ten years, it won't be. It is addictive today because it is the easiest way to gain access to a broad range of content. It won't be the easiest way to get access to a broad range of content in ten years. Today, access to the Internet is cumbersome and slow - we in the United States are lucky to have broadband service at 1.5 MBs, and very rarely do we get service at that speed both up and down. Although wireless access is growing, most of us still get access across wires. Most only gain access through a machine with a keyboard. The idea of the always on, always connected Internet is mainly just an idea.
+
+But it will become a reality, and that means the way we get access to the Internet today is a technology in transition. Policy makers should not make policy on the basis of technology in transition. They should make policy on the basis of where the technology is going. The question should not be, how should the law regulate sharing in this world? The question should be, what law will we require when the network becomes the network it is clearly becoming? That network is one in which every machine with electricity is essentially on the Net; where everywhere you are - except maybe the desert or the Rockies - you can instantaneously be connected to the Internet. Imagine the Internet as ubiquitous as the best cell-phone service, where with the flip of a device, you are connected.
+
+In that world, it will be extremely easy to connect to services that give you access to content on the fly - such as Internet radio, content that is streamed to the user when the user demands. Here, then, is the critical point: When it is /{extremely}/ easy to connect to services that give access to content, it will be /{easier}/ to connect to services that give you access to content than it will be to download and store content /on the many devices you will have for playing content/. It will be easier, in other words, to subscribe than it will be to be a database manager, as everyone in the download-sharing world of Napster-like technologies essentially is. Content services will compete with content sharing, even if the services charge money for the content they give access to. Already cell-phone services in Japan offer music (for a fee) streamed over cell phones (enhanced with plugs for headphones). The Japanese are paying for this content even though "free" content is available in the form of MP3s across the Web.~{ See, for example, "Music Media Watch," The J@pan Inc. Newsletter, 3 April 2002, available at link #76. }~
+
+This point about the future is meant to suggest a perspective on the present: It is emphatically temporary. The "problem" with file sharing - to the extent there is a real problem - is a problem that will increasingly disappear as it becomes easier to connect to the Internet. And thus it is an extraordinary mistake for policy makers today to be "solving" this problem in light of a technology that will be gone tomorrow. The question should not be how to regulate the Internet to eliminate file sharing (the Net will evolve that problem away). The question instead should be how to assure that artists get paid, during this transition between twentieth-century models for doing business and twenty-first-century technologies.
+
+The answer begins with recognizing that there are different "problems" here to solve. Let's start with type D content - uncopyrighted content or copyrighted content that the artist wants shared. The "problem" with this content is to make sure that the technology that would enable this kind of sharing is not rendered illegal. You can think of it this way: Pay phones are used to deliver ransom demands, no doubt. But there are many who need to use pay phones who have nothing to do with ransoms. It would be wrong to ban pay phones in order to eliminate kidnapping.
+
+Type C content raises a different "problem." This is content that was, at one time, published and is no longer available. It may be unavailable because the artist is no longer valuable enough for the record label he signed with to carry his work. Or it may be unavailable because the work is forgotten. Either way, the aim of the law should be to facilitate the access to this content, ideally in a way that returns something to the artist.
+
+Again, the model here is the used book store. Once a book goes out of print, it may still be available in libraries and used book stores. But libraries and used book stores don't pay the copyright owner when someone reads or buys an out-of-print book. That makes total sense, of course, since any other system would be so burdensome as to eliminate the possibility of used book stores' existing. But from the author's perspective, this "sharing" of his content without his being compensated is less than ideal.
+
+The model of used book stores suggests that the law could simply deem out-of-print music fair game. If the publisher does not make copies of the music available for sale, then commercial and noncommercial providers would be free, under this rule, to "share" that content, even though the sharing involved making a copy. The copy here would be incidental to the trade; in a context where commercial publishing has ended, trading music should be as free as trading books.
+
+Alternatively, the law could create a statutory license that would ensure that artists get something from the trade of their work. For example, if the law set a low statutory rate for the commercial sharing of content that was not offered for sale by a commercial publisher, and if that rate were automatically transferred to a trust for the benefit of the artist, then businesses could develop around the idea of trading this content, and artists would benefit from this trade.
+
+This system would also create an incentive for publishers to keep works available commercially. Works that are available commercially would not be subject to this license. Thus, publishers could protect the right to charge whatever they want for content if they kept the work commercially available. But if they don't keep it available, and instead, the computer hard disks of fans around the world keep it alive, then any royalty owed for such copying should be much less than the amount owed a commercial publisher.
+
+The hard case is content of types A and B, and again, this case is hard only because the extent of the problem will change over time, as the technologies for gaining access to content change. The law's solution should be as flexible as the problem is, understanding that we are in the middle of a radical transformation in the technology for delivering and accessing content.
+
+So here's a solution that will at first seem very strange to both sides in this war, but which upon reflection, I suggest, should make some sense.
+
+Stripped of the rhetoric about the sanctity of property, the basic claim of the content industry is this: A new technology (the Internet) has harmed a set of rights that secure copyright. If those rights are to be protected, then the content industry should be compensated for that harm. Just as the technology of tobacco harmed the health of millions of Americans, or the technology of asbestos caused grave illness to thousands of miners, so, too, has the technology of digital networks harmed the interests of the content industry.
+
+I love the Internet, and so I don't like likening it to tobacco or asbestos. But the analogy is a fair one from the perspective of the law. And it suggests a fair response: Rather than seeking to destroy the Internet, or the p2p technologies that are currently harming content providers on the Internet, we should find a relatively simple way to compensate those who are harmed.
+
+The idea would be a modification of a proposal that has been floated by Harvard law professor William Fisher.~{ William Fisher, /{Digital Music: Problems and Possibilities}/ (last revised: 10 October 2000), available at link #77; William Fisher, /{Promises to Keep: Technology, Law, and the Future of Entertainment}/ (forthcoming) (Stanford: Stanford University Press, 2004), ch. 6, available at link #78. Professor Netanel has proposed a related idea that would exempt noncommercial sharing from the reach of copyright and would establish compensation to artists to balance any loss. See Neil Weinstock Netanel, "Impose a Noncommercial Use Levy to Allow Free P2P File Sharing," available at link #79. For other proposals, see Lawrence Lessig, "Who's Holding Back Broadband?" /{Washington Post,}/ 8 January 2002, A17; Philip S. Corwin on behalf of Sharman Networks, A Letter to Senator Joseph R. Biden, Jr., Chairman of the Senate Foreign Relations Committee, 26 February 2002, available at link #80; Serguei Osokine, /{A Quick Case for Intellectual Property Use Fee (IPUF),}/ 3 March 2002, available at link #81; Jefferson Graham, "Kazaa, Verizon Propose to Pay Artists Directly," /{USA Today,}/ 13 May 2002, available at link #82; Steven M. Cherry, "Getting Copyright Right," IEEE Spectrum Online, 1 July 2002, available at link #83; Declan McCullagh, "Verizon's Copyright Campaign," CNET News.com, 27 August 2002, available at link #84. Fisher's proposal is very similar to Richard Stallman's proposal for DAT. Unlike Fisher's, Stallman's proposal would not pay artists directly proportionally, though more popular artists would get more than the less popular. As is typical with Stallman, his proposal predates the current debate by about a decade. See link #85. }~ Fisher suggests a very clever way around the current impasse of the Internet. Under his plan, all content capable of digital transmission would (1) be marked with a digital watermark (don't worry about how easy it is to evade these marks; as you'll see, there's no incentive to evade them). Once the content is marked, then entrepreneurs would develop (2) systems to monitor how many items of each content were distributed. On the basis of those numbers, then (3) artists would be compensated. The compensation would be paid for by (4) an appropriate tax.
+
+Fisher's proposal is careful and comprehensive. It raises a million questions, most of which he answers well in his upcoming book, /{Promises to Keep}/. The modification that I would make is relatively simple: Fisher imagines his proposal replacing the existing copyright system. I imagine it complementing the existing system. The aim of the proposal would be to facilitate compensation to the extent that harm could be shown. This compensation would be temporary, aimed at facilitating a transition between regimes. And it would require renewal after a period of years. If it continues to make sense to facilitate free exchange of content, supported through a taxation system, then it can be continued. If this form of protection is no longer necessary, then the system could lapse into the old system of controlling access.
+
+Fisher would balk at the idea of allowing the system to lapse. His aim is not just to ensure that artists are paid, but also to ensure that the system supports the widest range of "semiotic democracy" possible. But the aims of semiotic democracy would be satisfied if the other changes I described were accomplished - in particular, the limits on derivative uses. A system that simply charges for access would not greatly burden semiotic democracy if there were few limitations on what one was allowed to do with the content itself.
+
+No doubt it would be difficult to calculate the proper measure of "harm" to an industry. But the difficulty of making that calculation would be outweighed by the benefit of facilitating innovation. This background system to compensate would also not need to interfere with innovative proposals such as Apple's MusicStore. As experts predicted when Apple launched the MusicStore, it could beat "free" by being easier than free is. This has proven correct: Apple has sold millions of songs at even the very high price of 99 cents a song. (At 99 cents, the cost is the equivalent of a per-song CD price, though the labels have none of the costs of a CD to pay.) Apple's move was countered by Real Networks, offering music at just 79 cents a song. And no doubt there will be a great deal of competition to offer and sell music on-line.
+
+This competition has already occurred against the background of "free" music from p2p systems. As the sellers of cable television have known for thirty years, and the sellers of bottled water for much more than that, there is nothing impossible at all about "competing with free." Indeed, if anything, the competition spurs the competitors to offer new and better products. This is precisely what the competitive market was to be about. Thus in Singapore, though piracy is rampant, movie theaters are often luxurious - with "first class" seats, and meals served while you watch a movie - as they struggle and succeed in finding ways to compete with "free."
+
+This regime of competition, with a backstop to assure that artists don't lose, would facilitate a great deal of innovation in the delivery of content. That competition would continue to shrink type A sharing. It would inspire an extraordinary range of new innovators - ones who would have a right to the content, and would no longer fear the uncertain and barbarically severe punishments of the law.
+
+In summary, then, my proposal is this:
+
+The Internet is in transition. We should not be regulating a technology in transition. We should instead be regulating to minimize the harm to interests affected by this technological change, while enabling, and encouraging, the most efficient technology we can create.
+
+We can minimize that harm while maximizing the benefit to innovation by
+
+_1 1. guaranteeing the right to engage in type D sharing;
+
+_1 2. permitting noncommercial type C sharing without liability, and commercial type C sharing at a low and fixed rate set by statute;
+
+_1 3. while in this transition, taxing and compensating for type A sharing, to the extent actual harm is demonstrated.
+
+But what if "piracy" doesn't disappear? What if there is a competitive market providing content at a low cost, but a significant number of consumers continue to "take" content for nothing? Should the law do something then?
+
+Yes, it should. But, again, what it should do depends upon how the facts develop. These changes may not eliminate type A sharing. But the real issue is not whether it eliminates sharing in the abstract. The real issue is its effect on the market. Is it better (a) to have a technology that is 95 percent secure and produces a market of size /{x}/, or (b) to have a technology that is 50 percent secure but produces a market of five times /{x}/? Less secure might produce more unauthorized sharing, but it is likely to also produce a much bigger market in authorized sharing. The most important thing is to assure artists' compensation without breaking the Internet. Once that's assured, then it may well be appropriate to find ways to track down the petty pirates.
+
+But we're a long way away from whittling the problem down to this subset of type A sharers. And our focus until we're there should not be on finding ways to break the Internet. Our focus until we're there should be on how to make sure the artists are paid, while protecting the space for innovation and creativity that the Internet is.
+
+2~5 5. Fire Lots of Lawyers
+
+I'm a lawyer. I make lawyers for a living. I believe in the law. I believe in the law of copyright. Indeed, I have devoted my life to working in law, not because there are big bucks at the end but because there are ideals at the end that I would love to live.
+
+Yet much of this book has been a criticism of lawyers, or the role lawyers have played in this debate. The law speaks to ideals, but it is my view that our profession has become too attuned to the client. And in a world where the rich clients have one strong view, the unwillingness of the profession to question or counter that one strong view queers the law.
+
+The evidence of this bending is compelling. I'm attacked as a "radical" by many within the profession, yet the positions that I am advocating are precisely the positions of some of the most moderate and significant figures in the history of this branch of the law. Many, for example, thought crazy the challenge that we brought to the Copyright Term Extension Act. Yet just thirty years ago, the dominant scholar and practitioner in the field of copyright, Melville Nimmer, thought it obvious.~{ Lawrence Lessig, "Copyright's First Amendment" (Melville B. Nimmer Memorial Lecture), /{UCLA Law Review}/ 48 (2001): 1057, 1069-70. }~
+
+However, my criticism of the role that lawyers have played in this debate is not just about a professional bias. It is more importantly about our failure to actually reckon the costs of the law.
+
+Economists are supposed to be good at reckoning costs and benefits. But more often than not, economists, with no clue about how the legal system actually functions, simply assume that the transaction costs of the legal system are slight.~{ A good example is the work of Professor Stan Liebowitz. Liebowitz is to be commended for his careful review of data about infringement, leading him to question his own publicly stated position - twice. He initially predicted that downloading would substantially harm the industry. He then revised his view in light of the data, and he has since revised his view again. Compare Stan J. Liebowitz, /{Rethinking the Network Economy: The True Forces That Drive the Digital Marketplace}/ (New York: Amacom, 2002), 173 (reviewing his original view but expressing skepticism) with Stan J. Liebowitz, "Will MP3s Annihilate the Record Industry?" working paper, June 2003, available at link #86. Liebowitz's careful analysis is extremely valuable in estimating the effect of file-sharing technology. In my view, however, he underestimates the costs of the legal system. See, for example, /{Rethinking,}/ 174-76. }~ They see a system that has been around for hundreds of years, and they assume it works the way their elementary school civics class taught them it works.
+
+But the legal system doesn't work. Or more accurately, it doesn't work for anyone except those with the most resources. Not because the system is corrupt. I don't think our legal system (at the federal level, at least) is at all corrupt. I mean simply because the costs of our legal system are so astonishingly high that justice can practically never be done.
+
+These costs distort free culture in many ways. A lawyer's time is billed at the largest firms at more than $400 per hour. How much time should such a lawyer spend reading cases carefully, or researching obscure strands of authority? The answer is the increasing reality: very little. The law depended upon the careful articulation and development of doctrine, but the careful articulation and development of legal doctrine depends upon careful work. Yet that careful work costs too much, except in the most high-profile and costly cases.
+
+The costliness and clumsiness and randomness of this system mock our tradition. And lawyers, as well as academics, should consider it their duty to change the way the law works - or better, to change the law so that it works. It is wrong that the system works well only for the top 1 percent of the clients. It could be made radically more efficient, and inexpensive, and hence radically more just.
+
+But until that reform is complete, we as a society should keep the law away from areas that we know it will only harm. And that is precisely what the law will too often do if too much of our culture is left to its review.
+
+Think about the amazing things your kid could do or make with digital technology - the film, the music, the Web page, the blog. Or think about the amazing things your community could facilitate with digital technology - a wiki, a barn raising, activism to change something. Think about all those creative things, and then imagine cold molasses poured onto the machines. This is what any regime that requires permission produces. Again, this is the reality of Brezhnev's Russia.
+
+The law should regulate in certain areas of culture - but it should regulate culture only where that regulation does good. Yet lawyers rarely test their power, or the power they promote, against this simple pragmatic question: "Will it do good?" When challenged about the expanding reach of the law, the lawyer answers, "Why not?"
+
+We should ask, "Why?" Show me why your regulation of culture is needed. Show me how it does good. And until you can show me both, keep your lawyers away.
+
+:C~ NOTES
+
+1~webnotes Notes~#
+
+Throughout this text, there are references to links on the World Wide Web. As anyone who has tried to use the Web knows, these links can be highly unstable. I have tried to remedy the instability by redirecting readers to the original source through the Web site associated with this book. For each link below, you can go to http://free-culture.cc/notes and locate the original source by clicking on the number after the # sign. If the original link remains alive, you will be redirected to that link. If the original link has disappeared, you will be redirected to an appropriate reference for the material.
+
+:C~ ACKNOWLEDGMENTS
+
+1~acknowledgements [Acknowledgments]-#
+
+This book is the product of a long and as yet unsuccessful struggle that began when I read of Eric Eldred's war to keep books free. Eldred's work helped launch a movement, the free culture movement, and it is to him that this book is dedicated. I received guidance in various places from friends and academics, including Glenn Brown, Peter DiCola, Jennifer Mnookin, Richard Posner, Mark Rose, and Kathleen Sullivan. And I received correction and guidance from many amazing students at Stanford Law School and Stanford University. They included Andrew B. Coan, John Eden, James P. Fellers, Christopher Guzelian, Erica Goldberg, Robert Hallman, Andrew Harris, Matthew Kahn, Brian Link, Ohad Mayblum, Alina Ng, and Erica Platt. I am particularly grateful to Catherine Crump and Harry Surden, who helped direct their research, and to Laura Lynch, who brilliantly managed the army that they assembled, and provided her own critical eye on much of this. Yuko Noguchi helped me to understand the laws of Japan as well as its culture. I am thankful to her, and to the many in Japan who helped me prepare this book: Joi Ito, Takayuki Matsutani, Naoto Misaki, Michihiro Sasaki, Hiromichi Tanaka, Hiroo Yamagata, and Yoshihiro Yonezawa. I am thankful as well to Professor Nobuhiro Nakayama, and the Tokyo University Business Law Center, for giving me the chance to spend time in Japan, and to Tadashi Shiraishi and Kiyokazu Yamagami for their generous help while I was there. These are the traditional sorts of help that academics regularly draw upon. But in addition to them, the Internet has made it possible to receive advice and correction from many whom I have never even met. Among those who have responded with extremely helpful advice to requests on my blog about the book are Dr. Mohammad Al-Ubaydli, David Gerstein, and Peter DiMauro, as well as a long list of those who had specific ideas about ways to develop my argument. They included Richard Bondi, Steven Cherry, David Coe, Nik Cubrilovic, Bob Devine, Charles Eicher, Thomas Guida, Elihu M. Gerson, Jeremy Hunsinger, Vaughn Iverson, John Karabaic, Jeff Keltner, James Lindenschmidt, K. L. Mann, Mark Manning, Nora McCauley, Jeffrey McHugh, Evan McMullen, Fred Norton, John Pormann, Pedro A. D. Rezende, Shabbir Safdar, Saul Schleimer, Clay Shirky, Adam Shostack, Kragen Sitaker, Chris Smith, Bruce Steinberg, Andrzej Jan Taramina, Sean Walsh, Matt Wasserman, Miljenko Williams, "Wink," Roger Wood, "Ximmbo da Jazz," and Richard Yanco. (I apologize if I have missed anyone; with computers come glitches, and a crash of my e-mail system meant I lost a bunch of great replies.) Richard Stallman and Michael Carroll each read the whole book in draft, and each provided extremely helpful correction and advice. Michael helped me to see more clearly the significance of the regulation of derivative works. And Richard corrected an embarrassingly large number of errors. While my work is in part inspired by Stallman's, he does not agree with me in important places throughout this book. Finally, and forever, I am thankful to Bettina, who has always insisted that there would be unending happiness away from these battles, and who has always been right. This slow learner is, as ever, grateful for her perpetual patience and love.
+
+1~bookindex (Original) Index
+
+INDEX
+
+ABC, 164, 321n
+
+academic journals, 262, 280-82
+
+Adobe eBook Reader, 148-53
+
+advertising, 36, 45-46, 127, 145-46, 167-68, 321n
+
+Africa, medications for HIV patients in, 257-61
+
+Agee, Michael, 223-24, 225
+
+agricultural patents, 313n
+
+Aibo robotic dog, 153-55, 156, 157, 160
+
+AIDS medications, 257-60
+
+air traffic, land ownership vs., 1-3
+
+Akerlof, George, 232
+
+Alben, Alex, 100-104, 105, 198-99, 295, 317n
+
+alcohol prohibition, 200
+
+Alice's Adventures in Wonderland (Carroll), 152-53
+
+Allen, Paul, 100
+
+All in the Family, 164, 165
+
+Amazon, 278
+
+American Association of Law Libraries, 232
+
+American Graphophone Company, 56
+
+Americans with Disabilities Act (1990), 318n
+
+Andromeda, 203
+
+Anello, Douglas, 60
+
+animated cartoons, 21-24
+
+antiretroviral drugs, 257-61
+
+Apple Corporation, 203, 264, 302
+
+architecture, constraint effected through, 122, 123, 124, 318n
+
+archive.org, 112
+
+see also Internet Archive
+
+archives, digital, 108-15, 173, 222, 226-27
+
+Aristotle, 150
+
+Armstrong, Edwin Howard, 3-6, 184, 196
+
+Arrow, Kenneth, 232
+
+art, underground, 186
+
+artists:
+
+publicity rights on images of, 317n
+
+recording industry payments to, 52, 58-59, 74, 195, 196-97, 199, 301, 329n-30n
+
+retrospective compilations on, 100-104
+
+ASCAP, 18
+
+Asia, commercial piracy in, 63, 64, 65, 302
+
+AT&T, 6
+
+Ayer, Don, 230, 237, 239, 244, 248
+
+Bacon, Francis, 93
+
+Barish, Stephanie, 38, 39, 46
+
+Barlow, Joel, 8
+
+Barnes & Noble, 147
+
+Barry, Hank, 189, 191
+
+BBC, 270
+
+Beatles, 57
+
+Beckett, Thomas, 92
+
+Bell, Alexander Graham, 3
+
+Berlin Act (1908), 327n
+
+Berman, Howard L., 322n, 324n
+
+Berne Convention (1908), 250, 327n
+
+Bernstein, Leonard, 72
+
+Betamax, 75-76
+
+biomedical research, 262-63
+
+Black, Jane, 70
+
+blogs (Web-logs), 41, 42-45, 310n-11n
+
+BMG, 162
+
+BMW, 191
+
+Boies, David, 105
+
+Boland, Lois, 265, 266-68
+
+Bolling, Ruben, 246, 247
+
+Bono, Mary, 215, 326n
+
+Bono, Sonny, 215, 325n
+
+books:
+
+_1 English copyright law developed for, 85-94
+
+_1 free on-line releases of, 72-73, 284-85
+
+_1 on Internet, 143, 144, 148-53
+
+_1 out of print, 72, 113, 134, 299, 317n
+
+_1 resales of, 72, 134, 299, 314n
+
+_1 three types of uses of, 141-43
+
+_1 total number of, 114
+
+booksellers, English, 88-94, 316n
+
+Boswell, James, 91
+
+bots, 108, 161
+
+Boyle, James, 129
+
+Braithwaite, John, 267
+
+Branagh, Kenneth, 85, 88
+
+Brandeis, Louis, 34
+
+Brazil, free culture in, 270
+
+Breyer, Stephen, 234, 235, 242, 243
+
+Brezhnev, Leonid, 128
+
+British Parliament, 86, 87, 89-90, 91-92, 94
+
+broadcast flag, 193, 324n
+
+Bromberg, Dan, 230
+
+Brown, John Seely, 45, 46, 47, 127
+
+browsing, 145, 147, 277-78
+
+Buchanan, James, 232
+
+Bunyan, John, 93
+
+Burdick, Quentin, 60
+
+Bush, George W., 323n
+
+cable television, 59-61, 74-75, 162, 163, 302
+
+camera technology, 32-33, 34, 35, 127
+
+Camp Chaos, 106
+
+CARP (Copyright Arbitration Royalty Panel), 324n
+
+cars, MP3 sound systems in, 191
+
+Carson, Rachel, 129
+
+cartoon films, 21-25
+
+Casablanca, 148
+
+cassette recording, 69-70, 314n
+
+VCRs, 75-76, 77, 158-60, 194, 297, 320n
+
+Causby, Thomas Lee, 2, 3, 7, 11, 12, 256, 307n
+
+Causby, Tinie, 2, 3, 7, 11, 12, 256, 307n
+
+CBS, 164
+
+CD-ROMs, film clips used in, 100-104
+
+CDs:
+
+_1 copyright marking of, 291
+
+_1 foreign piracy of, 63, 64
+
+_1 mix technology and, 203-4
+
+_1 preference data on, 189-90
+
+_1 prices of, 70, 302
+
+_1 sales levels of, 70-71, 314n
+
+cell phones, music streamed over, 298
+
+chimeras, 178-79
+
+Christensen, Clayton M., 166, 313n, 321n
+
+circumvention technologies, 156, 157-60
+
+civil liberties, 205-7
+
+Clark, Kim B., 321n
+
+CNN, 44
+
+Coase, Ronald, 232
+
+Code (Lessig), xiii, xiv, 121, 318n
+
+CodePink Women for Peace, xiv, 269
+
+Coe, Brian, 33
+
+Comcast, 321n
+
+comics, Japanese, 25-26, 27-28, 29, 309n
+
+commerce, interstate, 219, 236, 326n
+
+Commerce, U.S. Department of, 126
+
+commercials, 36, 45-46, 127, 167-68, 321n
+
+common law, 86, 90, 91, 92
+
+Commons, John R., 318n
+
+Communications Decency Act (1996), 325n
+
+composers, copyright protections of, 55-59, 74
+
+compulsory license, 57-58
+
+computer games, 37
+
+Conger, 85, 87, 88-89, 90, 91
+
+Congress, U.S.:
+
+_1 on cable television, 61, 74-75
+
+_1 challenge of CTEA legislation of, 228-48
+
+_1 constitutional powers of, 215-16, 219-20, 233, 234-35, 238-39, 240
+
+_1 in constitutional Progress Clause, 130-31, 236
+
+_1 on copyright laws, 56-57, 61, 74-75, 76, 77-78, 133, 134-35, 193, 194, 196, 197, 294, 324n
+
+_1 copyright terms extended by, 134-35, 214-18, 219-21, 228, 236
+
+_1 on derivative rights, 294
+
+_1 on digital audio tape, 315n
+
+_1 lobbying of, 217-18
+
+_1 on radio, 196, 197
+
+_1 on recording industry, 56-57, 74, 196
+
+_1 Supreme Court restraint on, 218-19, 220, 234
+
+_1 on VCR technology, 76, 77
+
+Conrad, Paul, 158, 159, 160
+
+Constitution, U.S.:
+
+_1 Commerce Clause of, 219, 233, 244, 326n
+
+_1 copyright purpose established in, 130-31, 220, 221, 308n, 326n
+
+_1 on creative property, 119-20, 130
+
+_1 Fifth Amendment to, 119
+
+_1 First Amendment to, 10, 128, 142, 168, 228, 230, 234, 244, 319n
+
+_1 originalist interpretation of, 243
+
+_1 Progress Clause of, 130-31, 215, 218, 232, 236, 243-44
+
+_1 structural checks and balances of, 131
+
+_1 Takings Clause of, 119
+
+Consumer Broadband and Digital Television Promotion Act, 324n
+
+contracts, 320n
+
+Conyers, John, Jr., 322n
+
+cookies, Internet, 278
+
+"copyleft" licenses, 328n
+
+copyright:
+
+_1 constitutional purpose of, 130-31, 220, 221, 308n, 326n
+
+_1 Creative Commons licenses for material in, 282-86
+
+_1 duration of, 24-25, 86, 89-94, 130, 131, 133-35, 172, 214-18, 220, 221-22, 292-93, 294-95, 309n, 319n
+
+_1 four regulatory modalities on, 124-26, 132
+
+_1 infringement lawsuits on, see copyright infringement lawsuits
+
+_1 marking of, 137, 288, 290-91
+
+_1 as narrow monopoly right, 87-94
+
+_1 of natural authors vs. corporations, 135
+
+_1 no registration of works, 222-23, 249
+
+_1 in perpetuity, 89-90, 91, 92-93, 170, 215, 243, 246, 318n, 325n-26n
+
+_1 as property, 83-84, 172
+
+_1 renewability of, 86, 133-34, 135, 289-90, 293, 309n, 319n
+
+_1 scope of, 136-39, 140, 169-72, 295, 320n
+
+_1 usage restrictions attached to, 87-88, 143-44, 146, 320n
+
+_1 voluntary reform efforts on, 275, 277-86
+
+_1 see also copyright law
+
+Copyright Act (1790), 133, 137-38, 319n
+
+Copyright Arbitration Royalty Panel (CARP), 324n
+
+copyright infringement lawsuits:
+
+_1 distribution technology targeted in, 75-77, 190, 191, 323n
+
+_1 exaggerated claims of, 51, 180, 185, 187, 190, 206, 322n
+
+_1 individual defendants intimidated by, 51-52, 185, 187, 200, 270
+
+_1 in recording industry, 50-52, 180, 185, 190, 200, 270, 322n, 323n
+
+_1 statutory damages of, 51
+
+_1 against student file sharing, 50-52, 180, 322n
+
+_1 willful infringement findings in, 146
+
+_1 zero tolerance in, 73-74, 180-81
+
+copyright law:
+
+_1 authors vs. composers in, 56-57
+
+_1 on cable television rebroadcasting, 59-61, 74-75
+
+_1 circumvention technology banned by, 156, 157-60
+
+_1 commercial creativity as primary purpose of, 8, 204, 308n
+
+_1 copies as core issue of, 139-40, 141-44, 146, 171, 319n, 320n
+
+_1 creativity impeded by, 19, 184-88, 308n
+
+_1 development of, 85-94, 316n
+
+_1 English, 17, 85-94, 316n
+
+_1 European, 137, 250, 327n
+
+_1 as ex post regulation modality, 121-22
+
+_1 fair use and, 95-99, 107, 141-42, 143, 145, 146, 157, 160, 172, 186-87, 283, 292, 316n
+
+_1 felony punishment for infringement of, 180, 215, 223, 322n
+
+_1 formalities reinstated in, 287-91, 329n
+
+_1 government reforms proposed on, 287-306
+
+_1 history of American, 132-38, 170-71
+
+_1 illegal behavior as broad response to, 199-207
+
+_1 innovation hampered by, 188-99
+
+_1 innovative freedom balanced with fair compensation in, 75, 77-79, 120, 129-30, 172-73
+
+_1 international compliance with, 63-64, 313n
+
+_1 Japanese, 26, 27-28
+
+_1 lawyers as detriment to, 292, 304-6
+
+_1 malpractice lawsuits against lawyers advising on, 190-91
+
+_1 on music recordings, 55-58, 74, 181, 195, 291
+
+_1 privacy interests in, 308n
+
+_1 as protection of creators, 10, 131, 204
+
+_1 registration requirement of, 137, 170-71, 248-54, 288, 289-90, 291, 327n
+
+_1 on republishing vs. transformation of original work, 19, 136, 138-39, 144-45, 170-72, 294-96, 319n
+
+_1 royalty proposal on derivative reuse in, 106
+
+_1 statutory licenses in, 56-58, 64, 74, 194, 295-96, 300
+
+_1 Supreme Court case on term extension of, 218, 228-48
+
+_1 technology as automatic enforcer of, 147, 148-61, 181, 186, 203, 320n, 324n
+
+_1 term extensions in, 134-35, 214-18, 219-21, 228-48
+
+_1 two central goals of, 75
+
+Copyright Office, 252-53, 289, 291
+
+corporations:
+
+_1 copyright terms for, 135
+
+_1 in pharmaceutical industry, 260
+
+"Country of the Blind, The" (Wells), 177-78
+
+Court of Appeals:
+
+_1 D.C. Circuit, 228-29, 231, 235
+
+_1 Ninth Circuit, 76, 105, 323n
+
+cover songs, 57
+
+Creative Commons, 270, 282-86
+
+creative property:
+
+_1 of authors vs. composers, 56-57
+
+_1 common law protections of, 133
+
+_1 constitutional tradition on, 118-20, 130-31
+
+_1 "if value, then right" theory of, 18-19, 53
+
+_1 noncommercial second life of, 112-13, 114-15
+
+_1 other property rights vs., 117-24, 140
+
+_1 see also intellectual property rights
+
+creativity:
+
+_1 labor shift to, 308n
+
+_1 legal restrictions on, 19, 184-88, 308n
+
+_1 by transforming previous works, 22-24, 25-29
+
+_1 see also innovation
+
+Crichton, Michael, 37
+
+criminal justice system, 167
+
+Crosskey, William W., 318n
+
+CTEA, see Sonny Bono Copyright Term Extension Act
+
+culture:
+
+_1 archives of, 108-15, 173, 226-27
+
+_1 commercial vs. noncommercial, 7-8, 170-72, 225
+
+_1 see also free culture
+
+Cyber Rights (Godwin), 40
+
+Daguerre, Louis, 31
+
+Daley, Elizabeth, 36-37, 38, 39-40, 46
+
+DAT (digital audio tape), 315n, 330n
+
+Data General, 279
+
+Day After Trinity, The, 97
+
+D.C. Court of Appeals, 228-29, 231, 235
+
+DDT, 129-30
+
+Dean, Howard, 43
+
+democracy:
+
+_1 digital sharing within, 184
+
+_1 media concentration and, 166
+
+_1 public discourse in, 42, 45
+
+_1 semiotic, 301-2
+
+_1 in technologies of expression, 33, 35, 41-42, 43, 44-45
+
+Democratic Party, 249
+
+derivative works, 329n
+
+_1 fair use vs., 145
+
+_1 First Amendment and, 319n
+
+_1 historical shift in copyright coverage of, 136, 170-72
+
+_1 piracy vs., 22-24, 25-29, 138-39, 141
+
+_1 reform of copyright term and scope on, 294-96
+
+_1 royalty system proposed for, 106
+
+_1 technological developments and, 144, 171
+
+developing countries, foreign patent costs in, 63, 257-61, 313n
+
+Diamond Multimedia Systems, 323n
+
+digital audio tape (DAT), 315n, 330n
+
+digital cameras, 35, 127
+
+Digital Copyright (Litman), 194
+
+Digital Millennium Copyright Act (DMCA), 156, 157, 159, 160, 181
+
+Diller, Barry, 165-66
+
+DirecTV, 163
+
+Dirty Harry, 101
+
+Disney, Inc., 23-24, 116, 145-46, 218, 231
+
+_1 Sony Betamax technology opposed by, 75-76
+
+Disney, Walt, 21-24, 25, 26, 28-29, 33-34, 78, 115, 139, 213, 220, 309n
+
+DMCA (Digital Millennium Copyright Act), 156, 157, 159, 160, 181
+
+Doctorow, Cory, 72-73, 284
+
+doctors, malpractice claims against, 185, 323n
+
+documentary film, 95-99
+
+domain names, 289
+
+Donaldson, Alexander, 90-91, 92
+
+Donaldson v. Beckett, 92-94
+
+Douglas, William O., 2-3
+
+doujinshi comics, 25-26, 27-28, 29
+
+Down and Out in the Magic Kingdom (Doctorow), 72-73, 284
+
+Drahos, Peter, 267
+
+DreamWorks, 106-7
+
+Dreyfuss, Rochelle, 18
+
+driving speed, constraints on, 123-24, 207
+
+Drucker, Peter, 103
+
+drugs:
+
+_1 illegal, 166-67, 201, 207, 321n
+
+_1 pharmaceutical, 257-61, 266, 327n, 328n
+
+Dryden, John, 316n
+
+"Duck and Cover" film, 112
+
+DVDs:
+
+_1 piracy of, 64
+
+_1 price of, 70
+
+Dylan, Bob, 270
+
+Eagle Forum, 231, 232
+
+Eastman, George, 31-34
+
+Eastwood, Clint, 100-103, 295
+
+e-books, 144, 148-53
+
+Edison, Thomas, 3, 53-54, 55, 69, 78
+
+education:
+
+_1 in media literacy, 35-40
+
+_1 tinkering as means of, 45-47, 50
+
+Eldred, Eric, 213-15, 218, 220, 221, 229, 249, 325n
+
+Eldred Act, 249-54, 255
+
+Eldred v. Ashcroft, 220, 228-48, 292
+
+elections, 41-42, 43
+
+electoral college, 120, 131
+
+Electronic Frontier Foundation, 205
+
+Else, Jon, 95-99, 186
+
+e-mail, 42
+
+EMI, 162, 191
+
+Eminem, 270
+
+eMusic.com, 181-82
+
+encryption systems, 155-56
+
+England, copyright laws developed in, 85-94
+
+Enlightenment, 89
+
+environmentalism, 129-30
+
+ephemeral films, 112
+
+Errors and Omissions insurance, 98
+
+Erskine, Andrew, 91
+
+ethics, 201
+
+expression, technologies of:
+
+_1 democratic, 33, 35, 41-42, 43, 44-45
+
+_1 media literacy and, 35-40
+
+Fairbank, Robert, 105
+
+fair use, 141-43
+
+_1 circumvention technology ban and, 157-58
+
+_1 Creative Commons license vs., 283
+
+_1 in documentary film, 95-99, 316n
+
+_1 fuzziness of, 292
+
+_1 Internet burdens on, 143, 145
+
+_1 legal intimidation tactics against, 98-99, 146, 172, 186-87
+
+_1 in sampling works, 107
+
+_1 technological restriction of, 160
+
+Fallows, James, 163-64
+
+Fanning, Shawn, 67
+
+Faraday, Michael, 3
+
+farming, 127, 129
+
+FCC:
+
+_1 on FM radio, 5-6
+
+_1 on media bias, 321n
+
+_1 media ownership regulated by, xiv-xv, 162, 269
+
+_1 on television production studios, 165
+
+Felten, Ed, 47, 155-57, 158, 160
+
+feudal system, 267
+
+Fifth Amendment, 119
+
+film industry:
+
+_1 consolidation of, 163
+
+_1 luxury theaters vs. video piracy in, 302
+
+_1 patent piracy at inception of, 53-55
+
+_1 rating system of, 117
+
+_1 trade association of, 116-17, 119, 218, 253-54, 256
+
+_1 trailer advertisements of, 145-46
+
+_1 VCR taping facility opposed by, 75-76
+
+films:
+
+_1 animated, 21-24
+
+_1 archive of, 111, 112
+
+_1 clips and collages of, 100-107
+
+_1 digital copies of, 324n
+
+_1 fair use of copyrighted material in, 95-99
+
+_1 multiple copyrights associated with, 95, 101-3, 224
+
+_1 in public domain, 223-25, 254
+
+_1 restoration of, 224, 226
+
+_1 total number of, 114
+
+film sampling, 107
+
+First Amendment, 10, 128, 142, 168, 319n
+
+_1 copyright extension as violation of, 228, 230, 234, 244
+
+first-sale doctrine, 146
+
+Fisher, William, 197, 301, 324n, 330n
+
+Florida, Richard, 20, 308n
+
+FM radio, 4-6, 128, 196, 256
+
+Forbes, Steve, 249, 253
+
+formalities, 137, 287-91
+
+Fourneaux, Henri, 55
+
+Fox, William, 54
+
+Fox (film company), 96, 97, 98, 163
+
+free culture:
+
+_1 Creative Commons licenses for recreation of, 282-86
+
+_1 defined, xvi
+
+_1 derivative works based on, 29-30
+
+_1 English legal establishment of, 94
+
+_1 four modalities of constraint on, 121-26, 317n, 318n
+
+_1 permission culture vs., xiv, 8, 173
+
+_1 restoration efforts on previous aspects of, 277-82
+
+/{Free for All}/ (Wayner), 285
+
+free market, technological changes in, 127-28
+
+Free Software Foundation, xv, 231-32, 280
+
+free software/open-source software (FS/OSS), 45, 65, 264-66, 279-80, 328n
+
+French copyright law, 327n
+
+Fried, Charles, 233, 237
+
+Friedman, Milton, 232
+
+Frost, Robert, 214, 216-17, 220
+
+Future of Ideas, The (Lessig), 148, 150, 189, 292
+
+Garlick, Mia, 284
+
+Gates, Bill, 128, 266
+
+General Film Company, 54
+
+General Public License (GPL), 265, 280
+
+generic drugs, 266
+
+German copyright law, 327n
+
+Gershwin, George, 233, 234
+
+Gil, Gilberto, 270
+
+Ginsburg, Ruth Bader, 234, 235, 242
+
+Girl Scouts, 18
+
+Global Positioning System, 263
+
+GNU/Linux operating system, 65, 232, 264, 280
+
+Godwin, Mike, 40
+
+Goldstein, Paul, 295
+
+Google, 48-49, 50
+
+GPL (General Public License), 265, 280
+
+Gracie Films, 96
+
+Grimm fairy tales, 23, 28, 213-14
+
+Grisham, John, 57, 294-95
+
+Groening, Matt, 96, 97, 98
+
+Grokster, Ltd., 323n
+
+guns, 159-60, 219
+
+hacks, 154
+
+Hal Roach Studios, 223, 232
+
+Hand, Learned, 312n
+
+handguns, 159-60
+
+Hawthorne, Nathaniel, 213, 214
+
+Henry V, 85
+
+Henry VIII, King of England, 88
+
+Herrera, Rebecca, 96, 97
+
+Heston, Charlton, 60
+
+history, records of, 109
+
+HIV/AIDS therapies, 257-61
+
+Hollings, Fritz, 324n
+
+Hollywood film industry, 53-55
+
+_1 see also film industry
+
+Horovitz, Jed, 187-88
+
+House of Lords, 92-93, 94
+
+Hummer, John, 191
+
+Hummer Winblad, 191
+
+Hyde, Rosel, 60
+
+IBM, 264, 279
+
+"if value, then right" theory, 18-19, 53
+
+images, ownership of, 34, 186
+
+innovation, 67, 313n
+
+_1 copyright profit balanced with, 75, 77-79
+
+_1 industry establishment opposed to, 75-76, 188-99
+
+_1 media conglomeration as disincentive for, 164-66
+
+_1 see also creativity
+
+/{Innovator's Dilemma, The}/ (Christensen), 166, 321n
+
+insecticide, environmental consequences of, 129-30
+
+Intel, 194, 232
+
+intellectual property rights, 11-12
+
+_1 components of, 309n
+
+_1 of drug patents, 260-61, 328n
+
+_1 international organization on issues of, 262-64, 265-67, 328n
+
+_1 U.S. Patent Office on private control of, 266-69
+
+international law, 63-64, 258-59, 313n
+
+Internet:
+
+_1 blogs on, 41, 42-45, 310n-11n
+
+_1 books on, 72-73, 143-44, 148-53, 284-85
+
+_1 copyright applicability altered by technology of, 141-44
+
+_1 copyright enforced through, 149-57, 161
+
+_1 copyright regulatory balance lost with, 125-26
+
+_1 creative Web sites on, 185
+
+_1 cultural process transformed by, 7-8
+
+_1 development of, 7, 262, 276-77
+
+_1 domain name registration on, 289
+
+_1 efficient content distribution on, 17-18, 193-94
+
+_1 encryption systems designed for, 155-56
+
+_1 initial free character of, 276-77
+
+_1 music files downloaded from, 67, 180-82, 199, 313n, 323n, 324n
+
+_1 news events on, 40-41, 43
+
+_1 peer-generated rankings on, 43
+
+_1 peer-to-peer file sharing on, see peer-to-peer (p2p) file sharing
+
+_1 pornography on, 325n
+
+_1 privacy protection on, 278-79
+
+_1 public discourse conducted on, 41-45
+
+_1 radio on, 194-99, 324n
+
+_1 search engines used on, 48-50
+
+_1 speed of access to, 297-98
+
+_1 user identities released by service providers of, 186, 205-6, 322n
+
+Internet Archive, 108-10, 112, 114, 222, 232
+
+Internet Explorer, 65
+
+interstate commerce, 219, 236, 326n
+
+Iraq war, 44, 310n, 317n
+
+ISPs (Internet service providers), user identities revealed by, 186, 205-6, 322n
+
+Iwerks, Ub, 22
+
+Japanese comics, 25-26, 27-28, 29, 309n
+
+Jaszi, Peter, 216, 245
+
+Jefferson, Thomas, 84, 120, 284
+
+Johnson, Lyndon, 116
+
+Johnson, Samuel, 93
+
+Jones, Day, Reavis and Pogue (Jones Day), 229-30, 232, 237
+
+Jonson, Ben, 316n
+
+Jordan, Jesse, 48, 49-52, 185, 200, 206
+
+journalism, 44
+
+jury system, 42
+
+Just Think!, 35-36, 41, 45-46
+
+Kahle, Brewster, 47, 110-15, 222, 226-27, 317n
+
+Kaplan, Benjamin, 294
+
+Kazaa, 67, 71, 179, 180
+
+Keaton, Buster, 22, 23, 28
+
+Kelly, Kevin, 255
+
+Kennedy, Anthony, 234, 239, 244, 248
+
+Kennedy, John F., 116, 195
+
+Kittredge, Alfred, 56
+
+knowledge, freedom of, 89
+
+Kodak cameras, 32-33, 34, 127, 184
+
+Kodak Primer, The (Eastman), 32
+
+Kozinski, Alex, 76
+
+Krim, Jonathan, 265
+
+labor, 308n, 318n
+
+land ownership, air traffic and, 1-3, 294
+
+Laurel and Hardy films, 223
+
+law:
+
+_1 citizen respect for, 199-207
+
+_1 common vs. positive, 86, 90
+
+_1 as constraint modality, 121-22, 123-24, 125, 317n
+
+_1 on copyrights, see copyright law
+
+_1 databases of case reports in, 65, 280-81
+
+_1 federal vs. state, 133
+
+law schools, 201
+
+lawyers:
+
+_1 copyright cultural balance impeded by, 292, 304-6
+
+_1 malpractice suits against, 190-91
+
+Leaphart, Walter, 285
+
+Lear, Norman, 164, 165
+
+legal realist movement, 322n
+
+legal system, attorney costs in, 51-52, 185, 186-87, 304-6
+
+Lessig, Lawrence, xiii, xiv, 121, 148, 150, 189, 292, 318n
+
+_1 Eldred case involvement of, 215, 216, 218, 228-48
+
+_1 in international debate on intellectual property, 263-64, 267-68, 328n
+
+Lessing, Lawrence, 5-6
+
+Lexis and Westlaw, 280-81
+
+libraries:
+
+_1 archival function of, 109, 111, 113, 114, 173, 227
+
+_1 journals in, 280, 281
+
+_1 privacy rights in use of, 278
+
+_1 of public-domain literature, 213-14
+
+Library of Congress, 110, 111, 198
+
+Licensing Act (1662), 86
+
+Liebowitz, Stan, 313n, 330n
+
+Linux operating system, 65, 232, 264, 280
+
+Litman, Jessica, 194
+
+Lofgren, Zoe, 253
+
+Lott, Trent, 43
+
+Lovett, Lyle, 179, 189
+
+Lucas, George, 98
+
+Lucky Dog, The, 223
+
+McCain, John, 162
+
+Madonna, 59, 121
+
+manga, 25-26, 27-28, 29, 309n
+
+Mansfield, William Murray, Lord, 17, 91
+
+Marijuana Policy Project, 321n
+
+market competition, 128, 147
+
+market constraints, 122, 123, 125, 188, 192, 318n
+
+Marx Brothers, 147-48, 152
+
+media:
+
+_1 blog pressure on, 43
+
+_1 commercial imperatives of, 43, 44
+
+_1 ownership concentration in, xiv-xv, 4-6, 44, 162-68, 269-70
+
+media literacy, 35-40
+
+Mehra, Salil, 27, 309n
+
+Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., 323n
+
+MGM, 116
+
+Michigan Technical University, 51
+
+Mickey Mouse, 21-22, 139, 220, 221, 231
+
+Microsoft, 100
+
+_1 competitive strategies of, 65
+
+_1 on free software, 264, 265, 328n
+
+_1 government case against, 155
+
+_1 international software piracy of, 65
+
+_1 network file system of, 49
+
+_1 Windows operating system of, 65
+
+_1 WIPO meeting opposed by, 265
+
+Middlemarch (Eliot), 148-50, 151
+
+Mill, John Stuart, 318n
+
+Millar v. Taylor, 91, 92
+
+Milton, John, 89, 93, 316n
+
+monopoly, copyright as, 88-94
+
+Monroe, Marilyn, 195
+
+Morrison, Alan, 232
+
+Motion Picture Association of America (MPAA), 116-17, 119, 218, 253-54, 256
+
+Motion Pictures Patents Company (MPPC), 53-54, 63
+
+Movie Archive, 112
+
+Moyers, Bill, 165
+
+MP3.com, 189-90
+
+MP3 players, 191
+
+MP3s, 125
+
+_1 see also peer-to-peer (p2p) file sharing
+
+Mr. Rogers' Neighborhood, 158
+
+MTV, 69-70
+
+Müller, Paul Hermann, 129
+
+Murdoch, Rupert, 163
+
+music publishing, 17, 55-56
+
+music recordings:
+
+_1 total number of, 114
+
+_1 see peer-to-peer (p2p) file sharing; recording industry
+
+MusicStore, 302
+
+Myers, Mike, 106-7
+
+my.mp3.com, 189-90
+
+Napster, 34, 60, 105
+
+_1 infringing material blocked by, 73-74
+
+_1 number of registrations on, 67
+
+_1 range of content on, 68
+
+_1 recording industry tracking of users of, 206
+
+_1 replacement of, 67
+
+_1 venture capital for, 191
+
+Nashville Songwriters Association, 221
+
+National Writers Union, 232
+
+NBC, 321n
+
+Needleman, Rafe, 191
+
+Nesson, Charlie, 201
+
+NET (No Electronic Theft) Act (1998), 215
+
+Netanel, Neil Weinstock, 10, 329n
+
+Netscape, 65
+
+New Hampshire (Frost), 214
+
+News Corp., 163
+
+news coverage, 40-41, 43, 44, 110-12
+
+newspapers:
+
+_1 archives of, 109, 110
+
+_1 ownership consolidation of, 163
+
+Nick and Norm anti-drug campaign, 167, 321n
+
+Nimmer, David, 105
+
+Nimmer, Melville, 304
+
+1984 (Orwell), 108-9
+
+Ninth Circuit Court of Appeals, 76, 105, 323n
+
+Nixon, Richard, 293
+
+No Electronic Theft (NET) Act (1998), 215
+
+norms, regulatory influence of, 122, 123, 125
+
+O'Connor, Sandra Day, 234, 238
+
+Olafson, Steve, 310n-11n
+
+Olson, Theodore B., 240
+
+open-source software, see free software/open-source software
+
+Oppenheimer, Matt, 51
+
+originalism, 243
+
+Orwell, George, 108-9
+
+parallel importation, 258
+
+Paramount Pictures, 116
+
+Patent and Trademark Office, U.S., 265-69
+
+patents:
+
+_1 duration of, 54-55, 242, 292
+
+_1 on film technology, 53-55
+
+_1 on pharmaceuticals, 258-61, 266, 328n
+
+_1 in public domain, 135, 214
+
+Patterson, Raymond, 90
+
+peer-to-peer (p2p) file sharing:
+
+_1 benefits of, 71-73, 79
+
+_1 of books, 72-73
+
+_1 efficiency of, 17-18
+
+_1 felony punishments for, 180, 215, 322n
+
+_1 four types of, 68-69, 296-97
+
+_1 infringement protections in, 73-74, 181-82
+
+_1 participation levels of, 67, 313n
+
+_1 piracy vs., 66-79
+
+_1 reform proposals of copyright restraints on, 296-304
+
+_1 regulatory balance lost in, 125, 206-7
+
+_1 shoplifting vs., 179-80
+
+_1 total legalization of, 180
+
+_1 zero-tolerance of, 180-82
+
+Peer-to-Peer Piracy Prevention Act, 324n
+
+permission culture:
+
+_1 free culture vs., xiv, 8, 173
+
+_1 transaction burdens of, 192-93
+
+permissions:
+
+_1 coded controls vs., 149-53
+
+_1 photography exempted from, 33-35
+
+_1 for use of film clips, 100-107
+
+_1 see also copyright
+
+pharmaceutical patents, 258-61, 328n
+
+phonograph, 55
+
+photocopying machines, 171
+
+photography, 31-35
+
+Picker, Randal C., 324n
+
+piracy:
+
+_1 in Asia, 63, 64, 65, 302
+
+_1 commercial, 62-66, 313n
+
+_1 derivative work vs., 22-24, 25-29, 138-39, 141
+
+_1 in development of content industry, 53-61, 312n
+
+_1 of intangible property, 64, 71, 179-80
+
+_1 international, 63-64
+
+_1 profit reduction as criterion of, 66-71, 73
+
+_1 p2p file sharing vs., 66-79
+
+_1 uncritical rejection of, 183-84
+
+player pianos, 55, 56, 75
+
+PLoS (Public Library of Science), 262, 281-82
+
+Pogue, David, xiii
+
+political discourse, 41, 42-45
+
+Politics (Aristotle), 150
+
+Porgy and Bess, 233
+
+pornography, 233, 325n
+
+positive law, 86, 90
+
+power, concentration of, xv, 12
+
+Prelinger, Rick, 112
+
+Princeton University, 51
+
+privacy rights, 205, 277-79
+
+Progress Clause, 130-31, 215, 218, 232, 236, 243-44
+
+prohibition, citizen rebellion against, 199-207
+
+Promises to Keep (Fisher), 301
+
+property rights:
+
+_1 air traffic vs., 1-3, 294
+
+_1 as balance of public good vs. private interests, 172-73, 322n
+
+_1 copyright vs., 83-84, 172-73
+
+_1 feudal system of, 267
+
+_1 formalities associated with, 287-88
+
+_1 intangibility of, 84, 315n
+
+_1 Takings Clause on, 119
+
+_1 see also copyright; creative property; intellectual property rights
+
+proprietary code, 279-80
+
+protectionism, of artists vs. business interests, 9
+
+p2p file sharing, see peer-to-peer (p2p) file sharing
+
+Public Citizen, 232
+
+public domain:
+
+_1 access fees for material in, 281
+
+_1 balance of U.S. content in, 133, 170-72, 318n-19n
+
+_1 content industry opposition to, 253-56
+
+_1 defined, 24
+
+_1 e-book restrictions on, 148-50, 152-53
+
+_1 English legal establishment of, 93
+
+_1 films in, 223-25, 254
+
+_1 future patents vs. future copyrights in, 134-35, 214
+
+_1 legal murkiness on, 185-86
+
+_1 library of works derived from, 213-14
+
+_1 license system for rebuilding of, 281-86
+
+_1 protection of, 220-21
+
+_1 p2p sharing of work in, 73
+
+_1 public projects in, 262-63
+
+_1 traditional term for conversion to, 24-25
+
+Public Enemy, 285
+
+Public Library of Science (PLoS), 262, 281-82
+
+Quayle, Dan, 110
+
+radio:
+
+_1 FM spectrum of, 3-6, 128, 196, 256
+
+_1 on Internet, 194-99
+
+_1 music recordings played on, 58-59, 74, 195, 312n
+
+_1 ownership consolidation in, 162-63
+
+railroad industry, 127
+
+rap music, 107
+
+RCA, 4-5, 6, 7, 128, 184, 256, 275
+
+Reagan, Ronald, 233, 237, 263
+
+Real Networks, 302
+
+recording industry:
+
+_1 artist remuneration in, 52, 58-59, 74, 195, 196-97, 199, 301, 329n-30n
+
+_1 CD sales levels in, 70-71, 314n
+
+_1 composers' rights vs. producers' rights in, 56-58, 74
+
+_1 copyright infringement lawsuits of, 50-52, 180, 185, 190, 200, 270, 322n, 323n
+
+_1 copyright protections in, 55-58, 74, 181, 195, 291
+
+_1 international piracy in, 63
+
+_1 Internet radio hampered by, 196-99, 324n
+
+_1 new recording technology opposed by, 69-70, 314n
+
+_1 out-of-print music of, 68, 71-72, 314n
+
+_1 ownership concentration in, 162
+
+_1 piracy in, 55-58
+
+_1 radio broadcast and, 58-59, 74, 196, 312n
+
+_1 statutory license system in, 56-58
+
+Recording Industry Association of America (RIAA):
+
+_1 on CD sales decline, 70, 71
+
+_1 on circumvention technology, 158, 160
+
+_1 copyright infringement lawsuits filed by, 50-52, 180, 185, 190, 200, 270, 322n
+
+_1 on encryption system critique, 156-57
+
+_1 on Internet radio fees, 197, 198-99
+
+_1 intimidation tactics of, 51-52, 200, 206
+
+_1 ISP user identities sought by, 205-6, 322n
+
+_1 lobbying power of, 52, 197, 218
+
+Recording Industry Association of America (RIAA) v. Diamond Multimedia Systems, 323n
+
+Recording Industry Association of America v. Verizon Internet Services, 322n
+
+regulation:
+
+_1 as establishment protectionism, 126-28, 188-99
+
+_1 four modalities of, 121-26, 317n, 318n
+
+_1 outsize penalties of, 190, 192
+
+_1 rule of law degraded by excess of, 199-207
+
+Rehnquist, William H., 219, 234, 239-40
+
+remote channel changers, 127
+
+Rensselaer Polytechnic Institute (RPI), 48
+
+_1 computer network search engine of, 49-51
+
+Republican Party, 104, 249
+
+"Rhapsody in Blue" (Gershwin), 221
+
+RIAA, see Recording Industry Association of America
+
+"Rip, Mix, Burn" technologies, 203
+
+Rise of the Creative Class, The (Florida), 20, 308n
+
+Roberts, Michael, 189
+
+robotic dog, 153-55, 156, 157, 160
+
+Rogers, Fred, 158, 320n
+
+Romeo and Juliet (Shakespeare), 85-86, 87, 316n
+
+Rose, Mark, 91
+
+RPI, see Rensselaer Polytechnic Institute
+
+Rubenfeld, Jed, 319n
+
+Russel, Phil, 55
+
+Saferstein, Harvey, 104-5
+
+Safire, William, xiv-xv, 269
+
+San Francisco Muni, 321n
+
+San Francisco Opera, 95, 97
+
+Sarnoff, David, 5
+
+Saturday Night Live, 106
+
+Scalia, Antonin, 234, 238, 240, 247
+
+Scarlet Letter, The (Hawthorne), 214
+
+Schlafly, Phyllis, 231
+
+schools, gun possession near, 219
+
+Schwartz, John, 79
+
+scientific journals, 280, 281-82
+
+Scottish publishers, 86, 90-91, 93
+
+Screen Actors Guild, 60
+
+search engines, 48-50
+
+"Seasons, The" (Thomson), 91
+
+Secure Digital Music Initiative (SDMI), 155-56
+
+semiotic democracy, 301-2
+
+Senate, U.S., 120, 131
+
+_1 FCC media ownership rules reversed by, 269
+
+_1 see also Congress, U.S.
+
+Sentelle, David, 228-29, 231, 235, 243
+
+September 11, 2001, terrorist attacks of, 40, 41, 111-12
+
+Seuss, Dr., 233, 234
+
+Shakespeare, William, 29, 85, 87, 88, 93, 316n
+
+sheet music, 17, 56
+
+Silent Spring (Carson), 129
+
+Simpsons, The, 95-98
+
+single nucleotide polymorphisms (SNPs), 262-63
+
+Sites, Kevin, 310n-11n
+
+60 Minutes, 105, 111
+
+Slade, Michael, 101
+
+slavery, 120
+
+Smith, David, 309n
+
+Snowe, Olympia, xv
+
+software, open-source, see free software/open-source software
+
+Sonny Bono Copyright Term Extension Act (CTEA) (1998), 134, 135, 215, 218, 221, 223
+
+_1 Supreme Court challenge of, 228, 230, 231, 234-48, 252, 304
+
+Sony:
+
+_1 Aibo robotic dog produced by, 153-55, 156, 157
+
+_1 Betamax technology developed by, 75-76
+
+Sony Music Entertainment, 162
+
+Sony Pictures Entertainment, 116
+
+Sousa, John Philip, 56
+
+Souter, David, 234, 235, 242, 244
+
+South Africa, Republic of, pharmaceutical imports by, 258-59
+
+speech, freedom of, 318n
+
+_1 constitutional guarantee of, 128
+
+_1 film-rating system vs., 117
+
+_1 useful criticism fostered by, 156
+
+speeding, constraints on, 123-24, 207
+
+spider, 108
+
+Spielberg, Steven, 107
+
+Stallman, Richard, xv-xvi, 279-80, 330n
+
+Stanford University, 282
+
+Star Wars, 98
+
+Starwave, 100-101
+
+Statute of Anne (1710), 86, 87, 89, 90, 91, 92, 133
+
+Statute of Monopolies (1656), 88
+
+statutory damages, 51
+
+statutory licenses, 57-58, 64, 74, 194, 295-96, 300
+
+Steamboat Bill, Jr., 22-23, 26, 34
+
+Steamboat Willie, 21-23, 309n
+
+steel industry, 127
+
+Stevens, John Paul, 234, 235, 242
+
+Stevens, Ted, xv
+
+Stewart, Gordon, 229, 230
+
+Story, Joseph, 252
+
+Sullivan, Kathleen, 232-33
+
+Superman comics, 27
+
+Supreme Court, U.S.:
+
+_1 access to opinions of, 281
+
+_1 on airspace vs. land rights, 2-3, 307n
+
+_1 annual docket of, 229
+
+_1 on balance of interests in copyright law, 77, 78
+
+_1 on cable television, 61
+
+_1 congressional actions restrained by, 218-19, 220, 234
+
+_1 on copyright term extensions, 218, 228-48
+
+_1 factions of, 234-35
+
+_1 House of Lords vs., 92
+
+_1 on Internet pornography restrictions, 325n
+
+_1 on television advertising bans, 168
+
+_1 on VCR technology, 76-77
+
+Sutherland, Donald, 102
+
+Takings Clause, 119
+
+Talbot, William, 31
+
+Tatel, David, 229
+
+Tauzin, Billy, 324n
+
+tax system, 201
+
+Taylor, Robert, 91
+
+technology:
+
+_1 archival opportunity afforded through, 113-14, 115
+
+_1 of circumvention, 156, 157-60
+
+_1 of copying, 171
+
+_1 copyright enforcement controlled by, 147, 148-61, 181, 186, 203-4, 320n, 324n
+
+_1 copyright intent altered by, 141-44
+
+_1 cut-and-paste culture enabled by, 105-6, 203
+
+_1 of digital capturing and sharing, 184-85
+
+_1 established industries threatened by changes in, 69-70, 126-28
+
+_1 innovative improvements in, 67, 313n
+
+_1 legal murkiness on, 192
+
+television, 6
+
+_1 advertising on, 36, 127, 167-68, 321n
+
+_1 cable vs. broadcast, 59-61, 74-75, 302
+
+_1 controversy avoided by, 168, 321n
+
+_1 independent production for, 164-66
+
+_1 industry trade association of, 116
+
+_1 ownership consolidation in, 162, 163
+
+_1 VCR taping of, 75-76, 158-60
+
+Television Archive, 110, 111-12
+
+Thomas, Clarence, 234
+
+Thomson, James, 91, 92
+
+Thurmond, Strom, 43
+
+Tocqueville, Alexis de, 42
+
+Tonson, Jacob, 85, 86, 316n
+
+tort reform, 323n
+
+Torvalds, Linus, 280
+
+Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement, 313n
+
+Turner, Ted, 269
+
+Twentieth Century Fox, 116
+
+twins, as chimera, 178-79
+
+United Kingdom:
+
+_1 copyright requirements in, 327n
+
+_1 history of copyright law in, 85-94
+
+_1 public creative archive in, 270
+
+United States Trade Representative (USTR), 258-59
+
+United States v. Lopez, 219, 220, 234, 235-36, 239, 241, 242, 243
+
+United States v. Morrison, 219, 234
+
+Universal Music Group, 162, 191
+
+Universal Pictures, 75-76, 116
+
+university computer networks, p2p sharing on, 48-51, 180, 206-7, 270, 322n
+
+used record sales, 72, 314n
+
+Vaidhyanathan, Siva, 316n, 322n
+
+Valenti, Jack, 205, 238
+
+_1 background of, 116, 117
+
+_1 on creative property rights, 10, 117-20, 140
+
+_1 Eldred Act opposed by, 253
+
+_1 perpetual copyright term proposed by, 326n
+
+_1 on VCR technology, 76
+
+Vanderbilt University, 110
+
+VCRs, 75-76, 77, 158-60, 194, 297, 320n
+
+venture capitalists, 189, 191
+
+Verizon Internet Services, 205, 322n
+
+veterans' pensions, 293
+
+Video Pipeline, 145-46, 187
+
+Vivendi Universal, 182, 190
+
+von Lohmann, Fred, 205, 207
+
+Wagner, Richard, 95, 97
+
+Warner Brothers, 101, 116, 147-48, 152
+
+Warner Music Group, 162
+
+Way Back Machine, 108, 109, 110
+
+Wayner, Peter, 284
+
+Web-logs (blogs), 41, 42-45, 310n-11n
+
+Web sites, domain name registration of, 289
+
+Webster, Noah, 8
+
+Wellcome Trust, 262
+
+Wells, H. G., 177-78
+
+White House press releases, 317n
+
+willful infringement, 146
+
+Windows, 65
+
+Winer, Dave, 44-45
+
+Winick, Judd, 26-27
+
+WJOA, 321n
+
+WorldCom, 185
+
+World Intellectual Property Organization (WIPO), 262-64, 265-67, 328n
+
+World Summit on the Information Society (WSIS), 263-64, 266
+
+World Trade Center, 40
+
+World Wide Web, 262
+
+WRC, 321n
+
+Wright brothers, 1, 3, 11-12
+
+Yanofsky, Dave, 36
+
+Zimmerman, Edwin, 60-61
+
+Zittrain, Jonathan, 324n
+
+1~about ABOUT THE AUTHOR
+
+{lessig.jpg 151x227 "Lawrence Lessig" }http://www.lessig.org/
+
+LAWRENCE LESSIG ( http://www.lessig.org ), professor of law and a John A. Wilson Distinguished Faculty Scholar at Stanford Law School, is founder of the Stanford Center for Internet and Society and is chairman of the Creative Commons ( http://creativecommons.org ). The author of The Future of Ideas (Random House, 2001) and Code: And Other Laws of Cyberspace (Basic Books, 1999), Lessig is a member of the boards of the Public Library of Science, the Electronic Frontier Foundation, and Public Knowledge. He was the winner of the Free Software Foundation's Award for the Advancement of Free Software, twice listed in BusinessWeek's "e.biz 25," and named one of Scientific American's "50 visionaries." A graduate of the University of Pennsylvania, Cambridge University, and Yale Law School, Lessig clerked for Judge Richard Posner of the U.S. Seventh Circuit Court of Appeals.
+
+1~misc Other Works and REVIEWS of FreeCulture
+
+http://www.lessig.org/blog/archives/001840.shtml
+
+http://www.free-culture.cc/reviews/
+
+1~jacket JACKET
+
+"FREE CULTURE is an entertaining and important look at the past and future of the cold war between the media industry and new technologies."
+
+-- Marc Andreessen, cofounder of Netscape
+
+"The twenty-first century could be the century of unprecedented creativity, but only if we embrace the brilliantly articulated messages in Lawrence Lessig's FREE CULTURE. This book is beautifully written, crisply argued, and deeply provocative. Please read it!"
+
+-- John Seely Brown, coauthor of THE SOCIAL LIFE OF INFORMATION and former Chief Scientist, Xerox PARC
+
+"America needs a national conversation about the way in which so-called 'intellectual property rights' have come to dominate the rights of scholars, researchers, and everyday citizens. A copyright cartel, bidding for absolute control over digital worlds, music, and movies, now has a veto over technological innovation and has halted most contributions to the public domain from which so many have benefited. The patent system has spun out of control, giving enormous power to entrenched interests, and even trademarks are being misused. Lawrence Lessig's latest book is essential reading for anyone who wants to join this conversation. He explains how technology and the law are robbing us of the public domain; but for all his educated pessimism, Professor Lessig offers some solutions, too, because he recognizes that technology can be the catalyst for freedom. If you care about the future of innovation, read this book."
+
+-- Dan Gillmor, author of MAKING THE NEWS, an upcoming book on the collision of media and technology
+
+"FREE CULTURE goes beyond illuminating the catastrophe to our culture of increasing regulation to show examples of how we can make a different future. These new-style heroes and examples are rooted in the traditions of the founding fathers in ways that seem obvious after reading this book. Recommended reading to those trying to unravel the shrill hype around 'intellectual property.'"
+
+-- Brewster Kahle, founder of the Internet Archive
+
+%% SiSU markup sample Notes:
+% SiSU http://www.jus.uio.no/sisu
+% SiSU markup for 0.16 and later:
+% 0.20.4 header 0~links
+% 0.22 may drop image dimensions (rmagick)
+% 0.23 utf-8 ß
+% 0.38 or later, may use alternative notation for headers, e.g. @title: (instead of 0~title)
+% 0.38 document structure alternative markup, experimental (rad) A,B,C,1,2,3 maps to 1,2,3,4,5,6
+% Output: http://www.jus.uio.no/sisu/free_culture.lawrence_lessig/sisu_manifest.html
+% SiSU 0.38 experimental (alternative structure) markup used for this document
diff --git a/data/sisu_markup_samples/non-free/free_for_all.peter_wayner.sst b/data/sisu_markup_samples/non-free/free_for_all.peter_wayner.sst
new file mode 100644
index 0000000..4c8a537
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/free_for_all.peter_wayner.sst
@@ -0,0 +1,3269 @@
+% SiSU 0.38
+
+@title: Free For All
+
+@subtitle: How Linux and the Free Software Movement Undercut the High Tech Titans
+
+@creator: Peter Wayner
+
+@type: Book
+
+@rights: Copyright Peter Wayner, 2000. Free For All is Licensed under a Creative Commons License. This License permits non-commercial use of this work, so long as attribution is given. For more information about the license, visit http://creativecommons.org/licenses/by-nc/1.0/
+
+@date: 2002-12-22
+
+@date.created: 2002-12-22
+
+@date.issued: 2002-12-22
+
+@date.available: 2002-12-22
+
+@date.modified: 2002-12-22
+
+@date.valid: 2002-12-22
+
+@language: US
+
+@level: new=:A,:B,:C,1; num_top=1
+
+@vocabulary: none
+
+@image: center
+
+@skin: skin_wayner
+
+% @catalogue isbn=0066620503
+
+@links: {The Original Authoritative and Updated Version of the Text available in pdf}http://www.wayner.org/books/ffa
+{Free For All @ SiSU}http://www.jus.uio.no/sisu/free_for_all.peter_wayner
+{Syntax}http://www.jus.uio.no/sisu/sample/syntax/free_for_all.peter_wayner.sst.html
+{@ Amazon.com}http://www.amazon.com/gp/product/0066620503
+{@ Barnes & Noble}http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?isbn=0066620503
+{Free as in Freedom (on Richard M. Stallman), Sam Williams @ SiSU}http://www.jus.uio.no/sisu/free_as_in_freedom.richard_stallman_crusade_for_free_software.sam_williams
+{Free Culture, Lawrence Lessig @ SiSU}http://www.jus.uio.no/sisu/free_culture.lawrence_lessig
+{The Wealth of Networks, Yochai Benkler @ SiSU}http://www.jus.uio.no/sisu/the_wealth_of_networks.yochai_benkler
+{The Cathedral and the Bazaar, Eric S. Raymond @ SiSU }http://www.jus.uio.no/sisu/the_cathedral_and_the_bazaar.eric_s_raymond
+
+:A~ Free For All
+
+:B~ How Linux and the Free Software Movement Undercut the High Tech Titans
+
+:C~ by Peter Wayner
+
+1~acknowledgements Acknowledgments
+
+This is just a book about the free software movement. It wouldn't be possible without the hard work and the dedication of the thousands if not millions of people who like to spend their free time hacking code. I salute you. Thank you.
+
+Many people spoke to me during the process of assembling this book, and it would be impossible to cite them all. The list should begin with the millions of people who write and contribute to the various free software lists. The letters, notes, and postings to these lists are a wonderful history of the evolution of free software and an invaluable resource.
+
+The list should also include the dozens of journalists at places like Slashdot.org, LinuxWorld, Linux magazine, Linux Weekly News, Kernel Traffic, Salon, and the New York Times. I should specifically mention the work of Joe Barr, Jeff Bates, Janelle Brown, Zack Brown, Jonathan Corbet, Elizabeth Coolbaugh, Amy Harmon, Andrew Leonard, Rob Malda, John Markoff, Mark Nielsen, Nicholas Petreley, Harald Radke, and Dave Whitinger. They wrote wonderful pieces that will make a great first draft of the history of the open source movement. Only a few of the pieces are cited directly in the footnotes, largely for practical reasons. The entire body of websites like Slashdot, Linux Journal, Linux World, Kernel Notes, or Linux Weekly News should be required reading for anyone interested in the free software movement.
+
+There were hundreds of folks at Linux trade shows who took the time to show me their products, T-shirts, or, in one case, a cooler filled with beer. Almost everyone I met at the conferences was happy to speak about their experiences with open source software. They were all a great source of information, and I don't even know most of their names.
+
+Some people went beyond the call of duty. John Gilmore, Ethan Rasiel, and Caroline McKeldin each read drafts when the book was quite unfinished. Their comments were crucial.
+
+Many friends, acquaintances, and subjects of the book were kind enough to read versions that were a bit more polished, but far from complete: L. David Baron, Jeff Bates, Brian Behlendorf, Alan Cox, Robert Dreyer, Theo de Raadt, Telsa Gwynne, Jordan Hubbard, James Lewis Moss, Kirk McKusick, Sam Ockman, Tim O'Reilly, Sameer Parekh, Bruce Perens, Eric Raymond, and Richard Stallman.
+
+There are some people who deserve a different kind of thanks. Daniel Greenberg and James Levine did a great job shaping the conception of the book. When I began, it was just a few ideas on paper. My editors, David Conti, Laureen Rowland, Devi Pillai, and Adrian Zackheim, were largely responsible for this transition. Kimberly Monroe suffered through my mistakes as she took the book through its production stages. They took a bunch of rambling comments about a social phenomenon and helped turn it into a book.
+
+Finally, I want to thank everyone in my family for everything they've given through all of my life. And, of course, Caroline, who edited large portions with a slavish devotion to grammar and style.
+
+Visit http://www.wayner.org/books/ffa/ for updates, corrections, and additional comments.
+
+1~edition Version Information
+
+FREE FOR ALL. Copyright 2000 by Peter Wayner.
+
+Some Rights Reserved:
+
+This is [a complete version of] the free electronic version of the book originally published by HarperCollins. The book is still protected by copyright and bound by a license granting you the limited rights to make complete copies for non-commercial purposes. You're welcome to read it in electronic form subject to these conditions:
+
+1) You may not make derivative works. You must reproduce the work in its entirety.
+
+2) You may not sell versions.
+
+3) You refer everyone receiving a copy to the website where they may get the latest corrected version. http://www.wayner.org/books/ffa/
+
+A full license developed by the Creative Commons (www.creativecommons.org) will be forthcoming. Please write p3@wayner.org if you have any questions or suggestions.
+
+See http://www.wayner.org/books/ffa/ for the first PDF edition. Page layout for this and the original paper edition designed by William Ruoto. Not printed on acid-free paper. Library of Congress Cataloging-in-Publication Data: Wayner, Peter, 1964- . Free for all : how Linux and the free software movement undercut the high-tech titans / Peter Wayner. p. cm. ISBN 0-06-662050-3. 1. Linux. 2. Operating systems (Computers) 3. Free computer software. I. Title. QA76.76.O63 W394 2000 005.4'469 dc21 00-023919
+
+{ffa.png 93x140 "Free For All by Peter Wayner" }http://www.amazon.com/exec/obidos/tg/detail/-/0066620503/
+
+*{Free For All}* may be purchased at Amazon.com
+
+1~ Battle
+
+The world where cash was king, greed was good, and money was power fell off its axis and stopped rotating, if only for a second, in January 1999. Microsoft, the great software giant and unstoppable engine of cash, was defending itself in a courtroom in Washington, D.C. The Department of Justice claimed that Microsoft was a monopoly and was using this power to cut off competitors. Microsoft denied it all and claimed that the world was hurling threat after competitive threat its way. They weren't a monopoly, they were just a very competitive company that managed to withstand the slings and arrows of other equally ruthless competitors out to steal its market share.
+
+The trial quickly turned into everyone's worst nightmare as the lawyers, the economists, and the programmers filled the courtroom with a thick mixture of technobabble and legal speak. On the stands, the computer nerds spewed out three-letter acronyms (TLAs) as they talked about creating operating systems. Afterward, the legal nerds started slicing them up into one-letter acronyms and testing to see just which of the three letters was really the one that committed the crime. Then the economists came forward and offered their theories on just when a monopoly is a monopoly. Were three letters working in collusion enough? What about two? Everyone in the courtroom began to dread spending the day cooped up in a small room as Microsoft tried to deny what was obvious to practically everyone.
+
+In the fall and early winter of 1998 and 1999, the Department of Justice had presented its witnesses, who explained how Microsoft had slanted contracts, tweaked software, and twisted arms to ensure that it and it alone got the lion's share of the computer business. Many watching the trial soon developed the opinion that Microsoft had adopted a mixture of tactics from the schoolyard bully, the local mob boss, and the mother from hell. The Department of Justice trotted out a number of witnesses who produced ample evidence that suggested the computer customers of the world will buy Microsoft products unless Microsoft decides otherwise. Competitors must be punished.
+
+By January, the journalists covering the trial were quietly complaining about this endless waste of time. The Department of Justice's case was so compelling that they saw the whole trial as just a delay in what would eventually come to be a ruling that would somehow split or shackle Microsoft.
+
+But Microsoft wasn't going to be bullied or pushed into splitting up. The trial allowed them to present their side of the story, and they had one ready. Sure, everyone seemed to use Microsoft products, but that was because they were great. It wasn't because there weren't any competitors, but because the competitors just weren't good enough.
+
+In the middle of January, Richard Schmalensee, the dean of the Sloan School of Management at the Massachusetts Institute of Technology, took the stand to defend Microsoft. Schmalensee had worked for the Federal Trade Commission and the Department of Justice as an economist who examined the marketplace and the effects of anti-competitive behavior. He studied how monopolies behave, and to him Microsoft had no monopoly power. Now, he was being paid handsomely by Microsoft as an expert witness to repeat this view in court.
+
+Schmalensee's argument was simple: competitors are popping up all over the place. Microsoft, he said in his direct testimony, "is in a constant struggle for competitive survival. That struggle--the race to win and the victor's perpetual fear of being displaced--is the source of competitive vitality in the microcomputer software industry."
+
+Schmalensee even had a few competitors ready. "The iMac clearly competes directly and fiercely with Intel-compatible computers running Windows," he said without mentioning that Microsoft had bailed out Apple several months before with hundreds of millions of dollars in an investment. When Steve Jobs, the iCEO of Apple, announced the deal to a crowd of Mac lovers, the crowd booed. Jobs quieted them and tried to argue that the days of stiff competition with Microsoft were over. The scene did such a good job of capturing the total domination of Microsoft that the television movie The Pirates of Silicon Valley used it to illustrate how Bill Gates had won all of the marbles.
+
+After the announcement of the investment, Apple began shipping Microsoft's Internet Explorer web browser as the preferred browser on its machines. Microsoft's competitor Netscape became just a bit harder to find on the iMac. After that deal, Steve Jobs even began making statements that the old sworn enemies, Apple and Microsoft, were now more partners than competitors. Schmalensee didn't focus on this facet of Apple's new attitude toward competition.
+
+Next, Schmalensee trotted out BeOS, an operating system made by Be, a small company with about 100 employees run by ex-Apple executive Jean-Louis Gassée. This company had attracted millions of dollars in funding, he said, and some people really liked it. That made it a competitor.
+
+Schmalensee didn't mention that Be had trouble giving away the BeOS operating system. Gassée approached a number of PC manufacturers to see if they would include BeOS on their machines and give users the chance to switch between two operating systems. Gassée found, to no one's real surprise, that Microsoft's contracts with manufacturers made it difficult, if not practically impossible, to get BeOS in customers' hands. Microsoft controlled much of what the user got to see and insisted on almost total control over the viewer's experience. Schmalensee didn't mention these details in his testimony. BeOS may have been as locked up as a prisoner in a windowless cell in a stone-walled asylum on an island in the middle of the ocean, but BeOS was still a competitor for the love of the fair maiden.
+
+The last competitor, though, was the most surprising to everyone. Schmalensee saw Linux, a program given away for free, as a big potential competitor. When he said Linux, he really meant an entire collection of programs known as "open source" software. These were written by a loose-knit group of programmers who shared all of the source code to the software over the Internet.
+
+Open source software floated around the Internet controlled by a variety of licenses with names like the GNU General Public License (GPL). To say that the software was "controlled" by the license is a bit of a stretch. If anything, the licenses were deliberately worded to prohibit control. The GNU GPL, for instance, let users modify the program and give away their own versions. The license did more to enforce sharing of all the source code than it did to control or constrain. It was more an anti-license than anything else, and its author, Richard Stallman, often called it a "copyleft."
+
+Schmalensee didn't mention that most people thought of Linux as a strange tool created and used by hackers in dark rooms lit by computer monitors. He didn't mention that many people had trouble getting Linux to work with their computers. He forgot to mention that Linux manuals came with subheads like "Disk Druid-like 'fstab editor' available." He didn't delve into the fact that for many of the developers, Linux was just a hobby they dabbled with when there was nothing interesting on television. And he certainly didn't mention that most people thought the whole Linux project was the work of a mad genius and his weirdo disciples who still hadn't caught on to the fact that the Soviet Union had already failed big-time. The Linux folks actually thought sharing would make the world a better place. Fat-cat programmers who spent their stock-option riches on Porsches and balsamic vinegar laughed at moments like this.
+
+Schmalensee didn't mention these facts. He just offered Linux as an alternative to Windows and said that computer manufacturers might switch to it at any time. Poof. Therefore, Microsoft had competitors. At the trial, the discourse quickly broke down into an argument over what is really a worthy competitor and what isn't. Were there enough applications available for Linux or the Mac? What qualifies as "enough"? Were these really worthy?
+
+Under cross-examination, Schmalensee explained that he wasn't holding up the Mac, BeOS, or Linux as competitors who were going to take over 50 percent of the marketplace. He merely argued that their existence proved that the barriers produced by the so-called Microsoft monopoly weren't that strong. If rational people were investing in creating companies like BeOS, then Microsoft's power wasn't absolute.
+
+Afterward, most people quickly made up their minds. Everyone had heard about the Macintosh, and conventional wisdom at the time held that it would soon fail. But most people didn't know anything about BeOS or Linux. How could a company be a competitor if no one had heard of it? Apple and Microsoft had TV commercials. BeOS, at least, had a charismatic chairman. There was no Linux pitchman, no Linux jingle, and no Linux 30-second spot in major media. At the time, only the best-funded projects in the Linux community had enough money to buy spots on late-night community-access cable television. How could someone without money compete with a company that hired the Rolling Stones to pump excitement into a product launch?
+
+When people heard that Microsoft was offering a free product as a worthy competitor, they began to laugh even louder at the company's chutzpah. Wasn't money the whole reason the country was having a trial? Weren't computer programmers in such demand that many companies couldn't hire as many as they needed, no matter how high the salary? How could Microsoft believe that anyone would buy the supposition that a bunch of pseudo-communist nerds living in their weird techno-utopia where all the software was free would ever come up with software that could compete with the richest company on earth? At first glance, it looked as if Microsoft's case was sinking so low that it had to resort to laughable strategies. It was as if General Motors were to tell the world, "We shouldn't have to worry about fixing cars that pollute because a collective of hippies in Ithaca, New York, is refurbishing old bicycles and giving them away for free." It was as if Exxon waved away the problems of sinking oil tankers by explaining that folksingers had written a really neat ballad for teaching birds and otters to lick themselves clean after an oil spill. If no one charged money for Linux, then it was probably because it wasn't worth buying.
+
+But as everyone began looking a bit deeper, they began to see that Linux was being taken seriously in some parts of the world. Many web servers, it turned out, were already running on Linux or another free cousin known as FreeBSD. A free webserving tool known as Apache had controlled more than 50 percent of the web servers for some time, and it was gradually beating out Microsoft products that cost thousands of dollars. Many of the web servers ran Apache on top of a Linux or a FreeBSD machine and got the job done. The software worked well, and the nonexistent price made it easy to choose.
+
+Linux was also winning over some of the world's most serious physicists, weapons designers, biologists, and hard-core scientists. Some of the nation's top labs had wired together clusters of cheap PCs and turned them into supercomputers that were highly competitive with the best machines on the market. One upstart company started offering "supercomputers" for $3,000. These machines used Linux to keep the data flowing while the racks of computers plugged and chugged their way for hours on complicated simulations.
+
+There were other indications. Linux users bragged that their system rarely crashed. Some claimed to have machines that had been running for a year or more without a problem. Microsoft (and Apple) users, on the other hand, had grown used to frequent crashes. The "Blue Screen of Death" that appears on Windows users' monitors when something goes irretrievably wrong is the butt of many jokes.
+
+Linux users also bragged about the quality of their desktop interface. Most of the uninitiated thought of Linux as a hacker's system built for nerds. Yet recently two very good operating shells called GNOME and KDE had taken hold. Both offered the user an environment that looked just like Windows but was better. Linux hackers started bragging that they were able to equip their girlfriends, mothers, and friends with Linux boxes without grief. Some people with little computer experience were adopting Linux with little trouble.
+
+Building websites and supercomputers is not an easy task, and it is often done in back rooms out of the sight of most people. When people began realizing that the free software hippies had slowly managed to take over a large chunk of the web server and supercomputing world, they realized that perhaps Microsoft's claim was viable. Web servers and supercomputers are machines built and run by serious folks with bosses who want something in return for handing out paychecks. They aren't just toys sitting around the garage.
+
+If these free software guys had conquered such serious arenas, maybe they could handle the office and the desktop. If the free software world had created something usable by the programmers' mothers, then maybe they were viable competitors. Maybe Microsoft was right.
+
+2~ Sleeping In
+
+While Microsoft focused its eyes and ears upon Washington, one of its biggest competitors was sleeping late. When Richard Schmalensee was prepping to take the stand in Washington, D.C., to defend Microsoft's outrageous fortune against the slings and arrows of a government inquisition, Alan Cox was still sleeping in. He didn't get up until 2:00 p.m. at his home in Swansea on the south coast of Wales. This isn't too odd for him. His wife, Telsa, grouses frequently that it's impossible to get him moving each morning without a dose of Jolt Cola, the kind that's overloaded with caffeine.
+
+The night before, Cox and his wife went to see The Mask of Zorro, the latest movie that describes how Don Diego de la Vega assumed the secret identity of Zorro to free the Mexican people from the tyranny of Don Rafael Montero. In this version, Don Diego, played by Anthony Hopkins, chooses an orphan, Alejandro Murrieta, played by Antonio Banderas, and teaches him to be the next Zorro so the fight can continue. Its theme resonates with writers of open source software: a small band of talented, passionate warriors warding off the evil oppressor.
+
+Cox keeps an open diary and posts the entries on the web. "It's a nice looking film, with some great stunts and character play," he wrote, but
+
+_1 You could, however, have fitted the plot, including all the twists, on the back of a matchbox. That made it feel a bit ponderous so it only got a 6 out of 10 even though I'm feeling extremely smug because I spotted one of the errors in the film while watching it not by consulting imdb later.
+
+By the imdb, he meant the Internet Movie Database, which is one of the most complete listings of film credits, summaries, and glitches available on the Net. Users on the Internet write in with their own reviews and plot synopses, which the database dutifully catalogs and makes available to everyone. It's a reference book with thousands of authors.
+
+In this case, the big glitch in the film is the fact that one of the train gauges uses the metric system. Mexico converted to this system in 1860, but the film is set in 1841. Whoops. Busted.
+
+Telsa wrote in her diary, which she also posts to the Net under the title "The More Accurate Diary. Really."
+
+_1 Dragged him to cinema to see Zorro. I should have remembered he'd done some fencing and found something different. He also claimed he'd spotted a really obscure error. I checked afterward on IMDB, and was amazed. How did he see this?
+
+Cox is a big bear of a man who wears a long, brown wizard's beard. He has an agile, analytic mind that constantly picks apart a system and probes it for weaknesses. If he's playing a game, he plays until he finds a trick or a loophole that will give him the winning edge. If he's working around the house, he often ends up meddling with things until he fixes and improves them. Of course, he also often breaks them. His wife loves to complain about the bangs and crashes that come from his home office, where he often works until 6:30 in the morning.
+
+To his wife, this crashing, banging, and late-night hacking is the source of the halfhearted grousing inherent in every marriage. She obviously loves both his idiosyncrasies and the opportunity to discuss just how strange they can be. In January, Telsa was trying to find a way to automate her coffeepot by hooking it up to her computer.
+
+She wrote in her diary,
+
+_1 Alan is reluctant to get involved with any attempt to make a coffee-maker switch on via the computer now because he seems to think I will eventually switch it on with no water in and start a fire. I'm not the one who welded tinned spaghetti to the non-stick saucepan. Or set the wok on fire. More than once. Once with fifteen guests in the house. But there we are.
+
+To the rest of the world, this urge to putter and fiddle with machines is more than a source of marital comedy. Cox is one of the great threats to the continued dominance of Microsoft, despite the fact that he found a way to weld spaghetti to a nonstick pan. He is one of the core developers who help maintain the Linux kernel. In other words, he's one of the group of programmers who helps guide the development of the Linux operating system, the one Richard Schmalensee feels is such a threat to Microsoft. Cox is one of the few people whom Linus Torvalds, the creator of Linux, trusts to make important decisions about future directions. Cox is an expert on the networking guts of the system and is responsible for making sure that most of the new ideas that people suggest for Linux are considered carefully and integrated correctly. Torvalds defers to Cox on many matters about how Linux-based computers talk with other computers over a network. Cox works long and hard to find efficient ways for Linux to juggle multiple connections without slowing down or deadlocking.
+
+The group that works with Cox and Torvalds operates with no official structure. Millions of people use Linux to keep their computers running, and all of them have copies of the source code. In the 1980s, most companies began keeping the source code to their software as private as possible because they worried that a competitor might come along and steal the ideas the source spelled out. The source code, which is written in languages like C, Java, FORTRAN, BASIC, or Pascal, is meant to be read by programmers. Most companies didn't want other programmers understanding too much about the guts of their software. Information is power, and the companies instinctively played their cards close to their chests.
+
+When Linus Torvalds first started writing Linux in 1991, however, he decided to give away the operating system for free. He included all the source code because he wanted others to read it, comment upon it, and perhaps improve it. His decision was as much a radical break from standard programming procedure as a practical decision. He was a poor student at the time, and this operating system was merely a hobby. If he had tried to sell it, he wouldn't have gotten anything for it. He certainly had no money to build a company that could polish the software and market it. So he just sent out copies over the Internet.
+
+Sharing software had already been endorsed by Richard Stallman, a legendary programmer from MIT who believed that keeping source code private was a sin and a crime against humanity. A programmer who shares the source code lets others learn, and those others can contribute their ideas back into the mix. Closed source code leaves users frustrated because they can't learn about the software or fix any bugs. Stallman broke away from MIT in 1984 when he founded the Free Software Foundation. This became the organization that sponsored Stallman's grand project to free source code, a project he called GNU. In the 1980s, Stallman created very advanced tools like the GNU Emacs text editor, which people could use to write programs and articles. Others donated their work and the GNU project soon included a wide range of tools, utilities, and games. All of them were distributed for free.
+
+Torvalds looked at Stallman and decided to follow his lead with open source code. Torvalds's free software began to attract people who liked to play around with technology. Some just glanced at it. Others messed around for a few hours. Free is a powerful incentive. It doesn't let money, credit cards, purchase orders, and the boss's approval get in the way of curiosity. A few, like Alan Cox, had such a good time taking apart an operating system that they stayed on and began contributing back to the project.
+
+In time, more and more people like Alan Cox discovered Torvalds's little project on the Net. Some slept late. Others kept normal hours and worked in offices. Some just found bugs. Others fixed the bugs. Still others added new features that they wanted. Slowly, the operating system grew from a toy that satisfied the curiosity of computer scientists into a usable tool that powers supercomputers, web servers, and millions of other machines around the world.
+
+Today, about a thousand people regularly work with people like Alan Cox on the development of the Linux kernel, the official name for the part of the operating system that Torvalds started writing back in 1991. That may not be an accurate estimate because many people check in for a few weeks when a project requires their participation. Some follow everything, but most people are just interested in little corners. Many other programmers have contributed various pieces of software such as word processors or spreadsheets. All of these are bundled together into packages that are often called plain Linux or GNU/Linux and shipped by companies like Red Hat or more ad hoc groups like Debian.~{ /{Linux Weekly News}/ keeps a complete list of distributors. These range from the small, one- or two-man operations to the biggest, most corporate ones like Red Hat: Alzza Linux, Apokalypse, Armed Linux, Bad Penguin Linux, Bastille Linux, Best Linux (Finnish/Swedish), Bifrost, Black Cat Linux (Ukrainian/Russian), Caldera OpenLinux, CCLinux, Chinese Linux Extension, Complete Linux, Conectiva Linux (Brazilian), Debian GNU/Linux, Definite Linux, DemoLinux, DLD, DLite, DLX, DragonLinux, easyLinux, Enoch, Eridani Star System, Eonova Linux, e-smith server and gateway, Eurielec Linux (Spanish), eXecutive Linux, floppyfw, Floppix, Green Frog Linux, hal91, Hard Hat Linux, Immunix, Independence, Jurix, Kha0s Linux, KRUD, KSI-Linux, Laetos, LEM, Linux Cyrillic Edition, LinuxGT, Linux-Kheops (French), Linux MLD (Japanese), LinuxOne OS, LinuxPPC, LinuxPPP (Mexican), Linux Pro Plus, Linux Router Project, LOAF, LSD, Mandrake, Mastodon, MicroLinux, MkLinux, muLinux, nanoLinux II, NoMad Linux, OpenClassroom, Peanut Linux, Plamo Linux, PLD, Project Ballantain, PROSA, QuadLinux, Red Hat, Rock Linux, RunOnCD, ShareTheNet, Skygate, Slackware, Small Linux, Stampede, Stataboware, Storm Linux, SuSE, Tomsrtbt, Trinux, TurboLinux, uClinux, Vine Linux, WinLinux 2000, Xdenu, XTeamLinux, and Yellow Dog Linux. 
}~ While Torvalds only wrote the core kernel, people use his name, Linux, to stand for a whole body of software written by thousands of others. It's not exactly fair, but most let it slide. If there hadn't been the Linux kernel, the users wouldn't have the ability to run software on a completely free system. The free software would need to interact with something from Microsoft, Apple, or IBM. Of course, if it weren't for all of the other free software from Berkeley, the GNU project, and thousands of other garages around the world, there would be little for the Linux kernel to do.
+
+Officially, Linus Torvalds is the final arbiter for the kernel and the one who makes the final decisions about new features. In practice, the group runs like a loosely knit "ad-hocracy." Some people might care about a particular feature like the ability to interface with Macintoshes, and they write special code that makes this task easier. Others who run really big databases may want larger file systems that can store more information without limits.
+
+All of these people work at their own pace. Some work in their homes, like Alan Cox. Some work in university labs. Others work for businesses that use Linux and encourage their programmers to plug away so it serves their needs.
+
+The team is united by mailing lists. The Linux Kernel mailing list hooks up Cox in Britain, Torvalds in Silicon Valley, and the others around the globe. They post notes to the list and discuss ideas. Sometimes verbal fights break out, and sometimes everyone agrees. Sometimes people light a candle by actually writing new code to make the kernel better, and other times they just curse the darkness.
+
+Cox is now one of several people responsible for coordinating the addition of new code. He tests it for compatibility and guides Linux authors to make sure they're working together optimally. In essence, he tests every piece of incoming software to make sure all of the gauges work with the right system of measurement so there will be no glitches. He tries to remove the incompatibilities that marred Zorro.
+
+Often, others will duplicate Cox's work. Some new features are very popular and have many cooks minding the stew. The technology for speeding up computers with multiple CPUs lets each computer harness the extra power, so many list members test it frequently. They want the fastest machines they can get, and smoothing the flow of data between the CPUs is the best way to let the machines cooperate.
+
+Other features are not so popular, and they're tackled by the people who need the features. Some people want to hook their Linux boxes up to Macintoshes. Doing that smoothly can require some work in the kernel. Others may want to add special code to enable a special device like a high-speed camera or a strange type of disk drive. These groups often work on their own but coordinate their solutions with the main crowd. Ideally, they'll be able to come up with some patches that solve their problem without breaking some other part of the system.
+
+It's a very social and political process that unrolls in slow motion through e-mail messages. One person makes a suggestion. Others may agree. Someone may squabble with the idea because it seems inelegant, sloppy, or, worst of all, dangerous. After some time, a rough consensus evolves. Easy problems can be solved in days or even minutes, but complicated decisions can wait as the debate rages for years.
+
+Each day, Cox and his virtual colleagues pore through the lists trying to figure out how to make Linux better, faster, and more usable. Sometimes they skip out to watch a movie. Sometimes they go for hikes. But one thing they don't do is spend months huddled in conference rooms trying to come up with legal arguments. Until recently, the Linux folks didn't have money for lawyers, and that means they didn't get sidetracked by figuring out how to get big and powerful people like Richard Schmalensee to tell a court that there's no monopoly in the computer operating system business.
+
+2~ Suits Against Hackers
+
+Schmalensee and Cox couldn't be more different from each other. One is a career technocrat who moves easily between the government and MIT. The other is what used to be known as an absentminded professor--the kind who works when he's really interested in a problem. It just so happens that Cox is pretty intrigued with building a better operating system than the various editions of Windows that form the basis of Microsoft's domination of the computer industry.
+
+The battle between Linux and Microsoft is lining up to be the classic fight between the people like Schmalensee and the people like Cox. On one side are the armies of lawyers, lobbyists, salesmen, and expensive executives who are armed with patents, lawsuits, and legislation. They are skilled at moving the levers of power until the gears line up just right and billions of dollars pour into their pockets. They know how to schmooze, toady, beg, or even threaten until they wear the mantle of authority and command the piety and devotion of the world. People buy Microsoft because it's "the standard." No one decreed this, but somehow it has come to be.
+
+On the other side are a bunch of guys who just like playing with computers and will do anything to take them apart. They're not like the guy in the song by John Mellencamp who sings "I fight authority and authority always wins." Some might have an attitude, but most just want to look at the insides of their computers and rearrange them to hook up to coffee machines or networks. They want to fidget with the guts of their machines. If they weld some spaghetti to the insides, so be it.
+
+Normally, these battles between the suits and the geeks don't threaten the established order. There are university students around the world building solar-powered cars, but they don't actually pose a threat to the oil or auto industries. "21," a restaurant in New York, makes a great hamburger, but they're not going to put McDonald's out of business. The experimentalists and the perfectionists don't usually knock heads with the corporations who depend upon world domination for their profits. Except when it comes to software.
+
+Software is different from cars or hamburgers. Once someone writes the source code, copying the source costs next to nothing. That makes it much easier for tinkerers like Cox to have a global effect. If Cox, Stallman, Torvalds, and their chums just happen to luck upon something that's better than Microsoft, then the rest of the world can share their invention for next to nothing. That's what makes Cox, Torvalds, and their buddies a credible threat no matter how often they sleep late.
+
+It's easy to get high off of the idea alone. A few guys sleeping late and working in bedrooms aren't supposed to catch up to a cash engine like Microsoft. They aren't supposed to create a web-serving engine that controls more than half of the web. They aren't supposed to create a graphical user interface for drawing windows and icons on the screen that's much better than Windows. They aren't supposed to create supercomputers with sticker prices of $3,000. Money isn't supposed to lose.
+
+Of course, the folks who are working on free software projects have advantages that money can't buy. These programmers don't need lawyers to create licenses, negotiate contracts, or argue over terms. Their software is free, and lawyers lose interest pretty quickly when there's no money around. The free software guys don't need to scrutinize advertising copy. Anyone can download the software and just try it. The programmers also don't need to sit in the corner when their computer crashes and complain about the idiot who wrote the software. Anyone can read the source code and fix the glitches.
+
+The folks in the free source software world are, in other words, grooving on freedom. They're high on the original American dream of life, liberty, and the pursuit of happiness. The founders of the United States of America didn't set out to create a wealthy country where citizens spent their days worrying whether they would be able to afford new sport utility vehicles when the stock options were vested. The founders just wanted to secure the blessings of liberty for posterity. Somehow, the wealth followed.
+
+This beautiful story is easy to embrace: a group of people started out swapping cool software on the Net and ended up discovering that their free sharing created better software than what a corporation could produce with a mountain of cash.
+
+The programmers found that unrestricted cooperation made it easy for everyone to contribute. No price tags kept others away. No stereotypes or biases excluded anyone. The software and the source code were on the Net for anyone to read.
+
+Wide-open cooperation also turned out to be wide-open competition because the best software won the greatest attention. The corporate weasels with the ear of the president could not stop a free source software project from shipping. No reorganization or downsizing could stop people from working on free software if they wanted to hack. The freedom to create was more powerful than money.
+
+That's an idyllic picture, and the early success of Linux, FreeBSD, and other free packages makes it tempting to think that the success will build. Today, open source software powers more than 50 percent of the web servers on the Internet, and that is no small accomplishment. Getting thousands, if not millions, of programmers to work together is quite amazing given how quirky programmers can be. The ease of copying makes it possible to think that Alan Cox could get up late and still move the world.
+
+But the 1960s were also an allegedly idyllic time when peace, love, and sharing were going to create a beautiful planet where everyone gave to everyone else in an eternal golden braid of mutual respect and caring. Everyone assumed that the same spirit that so quickly and easily permeated the college campuses and lovefests in the parks was bound to sweep the world. The communes were really happening, man. But somehow, the groovy beat never caught on beyond those small nests of easy caring and giving. Somehow, the folks started dropping back in, getting real jobs, taking on real mortgages, and buying back into the world where money was king.
+
+Over the years, the same sad ending has befallen many communes, utopian visions, and hypnotic vibes. Freedom is great. It allows brilliant inventors to work independently of the wheels of power. But capital is another powerful beast that drives innovation. The great communes often failed because they never converted their hard work into money, making it difficult for them to save and invest. Giving things away may be, like, really groovy, but it doesn't build a nest egg.
+
+Right now, the free software movement stands at a crucial moment in its history. In the past, a culture of giving and wide-open sharing let thousands of programmers build a great operating system that was, in many ways, better than anything coming from the best companies. Many folks began working on Linux, FreeBSD, and thousands of other projects as hobbies, but now they're waking up to find IBM, Hewlett-Packard, Apple, and all the other big boys pounding on their door. If the kids could create something as nice as Linux, everyone began to wonder whether these kids really had enough good stuff to go the distance and last nine innings against the greatest power hitters around.
+
+Perhaps the free software movement will just grow faster and better as more people hop on board. More users mean more eyes looking for bugs. More users mean more programmers writing new source code for new features. More is better.
+
+On the other hand, sharing may be neat, but can it beat the power of capital? Microsoft's employees may be just serfs motivated by the dream that someday their meager stock options will be worth enough to retire upon, but they have a huge pile of cash driving them forward. This capital can be shifted very quickly. If Bill Gates wants 1,000 programmers to create something, he can wave his hand. If he wants to buy 1,000 computers, it takes him a second. That's the power of capital.
+
+Linus Torvalds may be on the cover of magazines, but he can't do anything with the wave of a hand. He must charm and cajole the thousands of folks on the Linux mailing list to make a change. Many of the free software projects may generate great code, but they have to beg for computers. The programmers might even surprise him and come up with an even better solution. They've done it in the past. But no money means that no one has to do what anyone says.
+
+In the past, the free software movement was like the movies in which Mickey Rooney and Judy Garland put on a great show in the barn. That part won't change. Cool kids with a dream will still be spinning up great programs that will be wonderful gifts for the world.
+
+But shows that are charming and fresh in a barn can become thin and weak on a big stage on Broadway. The glitches and raw functionality of Linux and free software don't seem too bad if you know that they're built by kids in their spare time. Building real tools for real companies, moms, police stations, and serious users everywhere is another matter. Everyone may be hoping that sharing, caring, and curiosity are enough, but no one knows for certain. Maybe capital will end up winning. Maybe it won't. It's freedom versus assurance; it's wide-open sharing versus stock options; it's cooperation versus intimidation; it's the geeks versus the suits, all in one knockdown, hack-till-you-drop, winner-take-everything fight.
+
+1~ Lists
+
+While Alan Cox was sleeping late and Microsoft was putting Richard Schmalensee on the stand, the rest of the open source software world was tackling their own problems. Some were just getting up, others were in the middle of their day, and still others were just going to sleep. This is not just because the open source hackers like to work at odd times around the clock. Some do. But they also live around the globe in all of the different time zones. The sun never sets on the open source empire.
+
+On January 14, 1999, for instance, Peter Jeremy, an Australian, announced that he had just discovered a potential Y2K problem in the control software in the central database that helped maintain the FreeBSD source code. He announced this by posting a note to a mailing list that forwarded the message to many other FreeBSD users. The problem was that the software simply appended the two characters "19" to the front of the year. When the new millennium came about a year later, the software would start writing the new date as "19100." Oops. The problem was largely cosmetic because it only occurred in some of the support software used by the system.
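The arithmetic behind the bug is easy to reproduce. Here is a minimal sketch in Python (the function names are hypothetical; the original code lived in FreeBSD's support scripts): date libraries of that era counted years as an offset from 1900, so gluing the literal characters "19" onto the counter works only until the counter reaches 100.

```python
def buggy_format_year(years_since_1900):
    # The flawed approach described above: prepend the literal
    # characters "19" to the years-since-1900 counter.
    return "19" + str(years_since_1900)

def fixed_format_year(years_since_1900):
    # The fix: treat the counter as a numeric offset from 1900.
    return str(1900 + years_since_1900)

print(buggy_format_year(99))   # 1999 -- looks fine through 1999
print(buggy_format_year(100))  # 19100 -- the "Oops" in the year 2000
print(fixed_format_year(100))  # 2000
```

The same off-by-a-century mistake showed up in countless programs around the turn of the millennium, which is why the fix was cosmetic but worth shipping.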
+
+FreeBSD is a close cousin to the Linux kernel and one that predates it in some ways. It descends from a long tradition of research and development of operating systems at the University of California at Berkeley. The name BSD stands for "Berkeley Software Distribution," the name given to one of the first releases of operating system source code that Berkeley made for the world. That small package grew, morphed, and absorbed many other contributions over the years.
+
+Referring to Linux and FreeBSD as cousins is an apt term because they share much of the same source code in the same way that cousins share some of the same genes. Both borrow source code and ideas from each other. If you buy a disk with FreeBSD, which you can do from companies like Walnut Creek, you may get many of the same software packages that you get from a disk from Red Hat Linux. Both include, for instance, some of the GNU compilers that turn source code into something that can be understood by computers.
+
+FreeBSD, in fact, has some of its own fans and devotees. The FreeBSD site lists thousands of companies large and small that use the software. Yahoo, the big Internet directory, game center, and news operation, uses FreeBSD in some of its servers. So does Blue Mountain Arts, the electronic greeting card company that is consistently one of the most popular sites on the web. There are undoubtedly thousands more who aren't listed on the FreeBSD site. The software produced by the FreeBSD project is, after all, free, so people can give it away, share it with their friends, or even pretend they are "stealing" it by making a copy of a disk at work. No one really knows how many copies of FreeBSD are out there because there's no reason to count. Microsoft may need to count heads so they can bill everyone for using Windows, but FreeBSD doesn't have that problem.
+
+That morning, Peter Jeremy's message went out to everyone who subscribed to the FreeBSD mailing list. Some users who cared about the Y2K bug could take Jeremy's patch and use it to fix their software directly. They didn't need to wait for some central bureaucracy to pass judgment on the information. They didn't need to wait for the Y2K guy at FreeBSD to get around to vetting the change. Everyone could just insert the fix because they had all of the source code available to them.
+
+Of course, most people never use all their freedoms. In this case, most people didn't have to bother dealing with Jeremy's patch because they waited for the official version. The FreeBSD infrastructure absorbed the changes into its source code vaults, and the changes appeared in the next fully updated version. This new complete version is where most people first started using the fix. Jeremy is a programmer who created a solution that was easy for other programmers to use. Most people, however, aren't programmers, and they want their software to be easy to use. Most programmers aren't even interested in poking around inside their machines. Everyone wants the solution to either fix itself or come as close to that as possible.
+
+Jeremy's message was just one of the hundreds percolating through the FreeBSD community that day. Some fell on deaf ears, some drew snotty comments, and a few gathered some real attention. The mailing lists were fairly complex ecologies where ideas blossomed and grew before they faded away and died.
+
+Of course, it's not fair to categorize the FreeBSD world as a totally decentralized anarchy. There is one central team led by one man, Jordan Hubbard, who organizes the leadership of a core group of devoted programmers. The group runs the website, maintains an up-to-date version of FreeBSD, and sponsors dozens of lists devoted to different corners or features. One list focuses on hooking up the fast high-performance SCSI hard disks that are popular with people who demand high-performance systems. Another concentrates on building in enough security to keep out attackers who might try to sneak in through the Internet.
+
+That January 14, a man in Great Britain, Roger Hardiman, was helping a man in Switzerland, Reto Trachsel, hook up a Hauppauge video card to his system. They were communicating on the Multimedia mailing list devoted to finding ways to add audio and video functions to FreeBSD systems. Trachsel posted a note to the list asking for information on how to find the driver software that would make sure that the data coming out of the Hauppauge television receiver would be generally available to the rest of the computer. Hardiman pointed out a solution, but cautioned, "If your Hauppauge card has the MSP34xx Stereo Decoder audio chip, you may get no sound when watching TV. I should get this fixed in the next week or two."
+
+Solutions like these float around the FreeBSD community. Most people don't really care if they can watch television with their computer, but a few do. The easy access to source code and drivers means that the few can go off and do their own thing without asking some major company for permission. The big companies like Microsoft and Apple, for instance, have internal projects that are producing impressive software for creating and displaying multimedia extravaganzas on computers. But they have a strict view of the world: the company is the producer of high-quality tools that make their way to the consumer who uses them and pays for them in one way or another.
+
+The list ecology is more organic and anti-hierarchical. Everyone has access to the source code. Everyone can make changes. Everyone can do what they want. There is no need for the FreeBSD management to meet and decide "Multimedia is good." There is no need for a project team to prioritize and list action items and best-of-breed deliverables. Someone in Switzerland decides he wants to hook up a television receiver to his computer and, what do you know, someone in Great Britain has already solved the problem. Well, he's solved it if you don't have an MSP34xx stereo decoder chip in your card. But that should be fixed sooner or later, too.
+
+2~ Free Doesn't Mean Freeloading
+
+There are thousands of other mailing lists linking thousands of other projects. It's hard to actually put a number to them because the projects grow, merge, and fade as people's interests wax and wane. The best flourish, and the others just drift away.
+
+Life on the mailing lists is often a bit more brutal and short than life on earth. The work on the project needs to split up. The volunteers need to organize themselves so that great software can be written.
+
+On that January 14, a new member of the WINE list was learning just how volunteering works. The guy posted a note to the list that described his Diamond RIO portable music device that lets you listen to MP3 files whenever you want. "I think the WINE development team should drop everything and work on getting this program to work as it doesn't seem like Diamond wants to release a Linux utility for the Rio," he wrote.
+
+WINE stands for "WINE Is Not an Emulator," which is a joke that only programmers and free software lovers can get. It's first a play on the recursive acronym for the GNU project ("GNU's Not Unix"). It's also a bit of a political statement for programmers. An emulator is a piece of software that makes one computer act like another. A company named Connectix, for instance, sells an emulator that lets a Macintosh behave like a Windows PC so anyone can use their Windows software on the Mac. Emulators, however, are pretty slow because they're constantly translating information on the fly. Anyone who has tried to hold a conversation with someone who speaks a different language knows how frustrating it can be to require a translator.
+
+The WINE project is an ambitious attempt to knock out one of the most important structural elements of the Microsoft monopoly. Software written for Windows only functions when people buy a version of Windows from Microsoft. When you purchase a Connectix emulator for the Mac, you get a version of Windows bundled with it.
+
+The WINE project is a group of people who are trying to clone Windows. Well, not clone all of it. They just want to clone what is known as the Win32 API, a panoply of features that make it easier to write software for a Microsoft machine. A programmer who wants to create a new button for a Windows computer doesn't need to write all of the instructions for drawing a frame with three-dimensional shading. A Microsoft employee has already bundled those instructions into the Win32 API. There are millions of functions in these kits that help programmers. Some play audio files, others draw complex images or movies. These features make it easy for programmers to write software for Windows because some of the most repetitive work is already finished.
+
+The WINE clone of the Win32 API is a fascinating example of how open source starts slowly and picks up steam. Bob Amstadt started the project in 1993, but soon turned it over to Alexandre Julliard, who has been the main force behind it. The project, although still far from finished, has produced some dramatic accomplishments, making it possible to run major programs like Microsoft Word or Microsoft Excel on a Linux box without using Windows. In essence, the WINE software is doing a good enough job acting like Windows that it's fooling Excel and Word. If you can trick the cousins, that's not too bad.
+
+The WINE home page (www.winehq.com) estimates that more than 90,000 people use WINE regularly to run programs for Microsoft Windows without buying Windows. About 140 or more people regularly contribute to the project by writing code or fixing bugs. Many are hobbyists who want the thrill of getting their software to run without Windows, but some are corporate programmers. The corporate programmers want to sell their software to the broadest possible marketplace, but they don't want to take the time to rewrite everything. If they can get their software working well with WINE, then people who use Linux or BSD can use the software that was written for Microsoft Windows.
+
+The new user who wanted to get his RIO player working with his Linux computer soon got a rude awakening. Andreas Mohr, a German programmer, wrote back,
+
+_1 Instead of suggesting the WINE team to "drop everything" in order to get a relatively minor thing like PMP300 to work, would you please install WINE, test it, read documentation/bug reports and post a useful bug report here? There are zillions of very useful and impressing Windoze apps out there . . . (After all that's only my personal opinion, maybe that was a bit too harsh ;-)
+
+Most new free software users soon discover that freedom isn't always easy. If you want to get free software, you're going to have to put in some work. Sometimes you get lucky. The man in Switzerland who posted his note on the same day found out that someone in Britain was solving his problems for him. There was no one, however, working on the RIO software and making sure it worked with WINE.
+
+Mohr's suggestion was to file a bug report that ranks the usability of the software so the programmers working on WINE can tweak it. This is just the first step in the free software experience. Someone has to notice the problem and fix it. In this case, someone needs to hook up their Diamond RIO MP3 player to a Linux box and try to move MP3 files with the software written for Windows. Ideally, the software will work perfectly, and now all Linux users will be able to use RIO players. In reality, there might be problems or glitches. Some of the graphics on the screen might be wrong. The software might not download anything at all. The first step is for someone to test the product and write up a detailed report about what works and what doesn't.
+
+At the time of this writing, no one has stepped up to the plate. There are no reports about the Diamond player in the WINE database. Maybe the new user didn't have time. Maybe he wasn't technically sophisticated enough to get WINE running in the first place. It's still not a simple system to use. In any case, his bright idea fell by the wayside.
+
+The mailing lists buzz with idle chatter about neat, way-out ideas that never come to fruition. Some people see this as a limitation of the free software world. A corporation, however, is able to dispatch a team of programmers to create solutions. These companies have money to spend on polishing a product and making sure it works. Connectix, for instance, makes an emulator that lets Mac users play games written for the Sony PlayStation. The company employs a substantial number of people who simply play all the Sony games from beginning to end until all of the bugs are gone. It's a rough job, but someone has to do it.
+
+WINE can't pay anyone, and that means that great ideas sometimes get ignored. The free software community, however, doesn't necessarily see this as a limitation. If the RIO player were truly important, someone else would come along and pick up the project. Someone else would do the work and file a bug report so everyone could use the software. If there's no one else, then maybe the RIO software isn't that important to the Linux community. Work gets done when someone really cares enough to do it.
+
+These mailing lists are the fibers that link the open source community into the network of minds. Before e-mail, they were just a bunch of rebels haunting the moors and rattling around their basements inventing monstrous machines. Now they're smoothly tuned mechanisms coordinated by messages, notes, and missives. They're not madmen who roar at dinner parties about the bad technology from Borg-like corporations. They've got friends now. One person may be a flake, but a group might be on to something.
+
+1~ Image
+
+Consider this picture: Microsoft is a megalith built by one man with a towering ego. It may not be fair to lump all of the serfs in the corporate cubicle farms in Redmond into one big army of automatons, but it sure conjures a striking image that isn't altogether inaccurate. Microsoft employees are fiercely loyal and often more dedicated to the cause than the average worker bee. Bill Gates built the company from scratch with the help of several college friends, and this group maintains tight control over all parts of the empire. The flavor of the organization is set by one man with the mind and the ego to micromanage it all.
+
+Now consider the image of the members of the free software revolution. Practically every newspaper article and colorful feature describing the group talks about a ragtag army of scruffy, bearded programmers who are just a bit too pale from spending their days in front of a computer screen. The writers love to conjure up a picture of a group that looks like it came stumbling out of some dystopian fantasy movie like Mad Max or A Boy and His Dog. They're the outsiders. They're a tightly knit band of rebel outcasts who are planning to free the people from their Microsoft slavery and return to the people the power usurped by Mr. Gates. What do they want? Freedom! When do they want it? Now!
+
+There's only one problem with this tidy, Hollywood-ready image: it's far from true. While Microsoft is one big corporation with reins of control that keep everyone in line, there is no strong or even weak organization that binds the world of open source software. The movement, if it could be called that, is comprised of individuals, each one free to do whatever he wants with the software. That's the point: no more shackles. No more corporate hegemony. Just pure source code that runs fast, clean, and light, straight through the night.
+
+This doesn't mean that the image is all wrong. Some of the luminaries like Richard Stallman and Alan Cox have been known to sport long, Rip van Winkle-grade beards. Some folks are strikingly pale. A few could bathe a bit more frequently. Caffeine is a bit too popular with them. Some people look as if they were targets for derision by the idiots on the high school football team.
+
+But there are many counterexamples. Linus Torvalds drives a Pontiac and lives in a respectable home with a wife and two children. He works during the day at a big company and spends his evenings shopping and doing errands. His life would be perfectly categorized as late 1950s sitcom if his wife, Tove, weren't a former Finnish karate champion and trucks weren't driving up to his house to deliver top-of-the-line computers like a 200-pound monstrosity with four Xeon processors. He told VAR Business, "A large truck brought it to our house and the driver was really confused. He said, 'You don't have a loading dock?'" On second thought, those are the kind of shenanigans that drive most sitcoms.
+
+There's no easy way to classify the many free source code contributors. Many have children, but many don't. Some don't mention them, some slip in references to them, and others parade them around with pride. Some are married, some are not. Some are openly gay. Some exist in sort of a presexual utopia of early teenage boyhood. Some of them are still in their early teens. Some aren't.
+
+Some contributors are fairly described as "ragtag," but many aren't. Many are corporate droids who work in cubicle farms during the day and create free software projects at night. Some work at banks. Some work on databases for human resource departments. Some build websites. Everyone has a day job, and many keep themselves clean and ready to be promoted to the next level. Bruce Perens, one of the leaders of the Debian group, used to work at the Silicon Valley glitz factory Pixar and helped write some of the software that created the hit Toy Story.
+
+Still, he told me, "At the time Toy Story was coming out, there was a space shuttle flying with the Debian GNU/Linux distribution on it controlling a biological experiment. People would say 'Are you proud of working at Pixar?' and then I would say my hobby software was running on the space shuttle now. That was a turnaround point when I realized that Linux might become my career."
+
+In fact, it's not exactly fair to categorize many of the free software programmers as a loosely knit band of rebel programmers out to destroy Microsoft. It's a great image that feeds the media's need to highlight conflict, but it's not exactly true. The free software movement began long before Microsoft was a household word. Richard Stallman wrote his manifesto setting out some of the precepts in 1984. He was careful to push the notion that programmers always used to share the source code to software until the 1980s, when corporations began to develop the shrink-wrapped software business. In the olden days of the 1950s, 1960s, and 1970s, programmers always shared. While Stallman has been known to flip his middle finger out at the name Bill Gates for the reporting pleasure of a writer from Salon magazine, he's not after Microsoft per se. He just wants to return computing to the good old days when the source was free and sharing was possible.
+
+The same holds for most of the other programmers. Some contribute source code because it helps them with their day job. Some stay up all night writing code because they're obsessed. Some consider it an act of charity, a kind of noblesse oblige. Some want to fix bugs that bother them. Some want fame, glory, and the respect of all other computer programmers. There are thousands of reasons why new open source software gets written, and very few of them have anything to do with Microsoft.
+
+In fact, it's a bad idea to see the free software revolution as having much to do with Microsoft. Even if Linux, FreeBSD, and other free software packages win, Microsoft will probably continue to fly along quite happily in much the same way that IBM continues to thrive even after losing the belt of the Heavyweight Computing Champion of the World to Microsoft. Anyone who spends his or her time focused on the image of a ragtag band of ruffians and orphans battling the Microsoft leviathan is bound to miss the real story.
+
+The fight is really just a by-product of the coming of age of the information business. The computer trade is rapidly maturing and turning into a service industry. In the past, the manufacture of computers and software took place on assembly lines and in cubicle farms. People bought shrink-wrapped items from racks. These were items that were manufactured. Now both computers and software are turning into dirt-cheap commodities whose only source of profit is customization and hand-holding. The real money now is in service.
+
+Along the way, the free software visionaries stumbled onto a curious fact. They could give away software, and people would give back improvements to it. Software cost practically nothing to duplicate, so it wasn't that hard to just give it away after it was written. At first, this was sort of a pseudo-communist thing to do, but today it seems like a brilliant business decision. If the software is turning into a commodity with a price falling toward zero, why not go all the way and gain whatever you can by freely sharing the code? The profits could come by selling services like programming and education. The revolution isn't about defeating Microsoft; it's just a change in the whole way the world buys and uses computers.
+
+The revolution is also the latest episode in the battle between the programmers and the suits. In a sense, it's a battle for the hearts and minds of the people who are smart enough to create software for the world. The programmers want to write challenging tools that impress their friends. The suits want to rein in programmers and channel their energy toward putting more money in the pockets of the corporation. The suits hope to keep programmers devoted by giving them fat paychecks, but it's not clear that programmers really want the cash. The freedom to do whatever you want with source code is intrinsically rewarding. The suits want to keep software under lock and key so they can sell it and maximize revenues. The free software revolution is really about a bunch of programmers saying, "Screw the cash. I really want the source code."
+
+The revolution is also about defining wealth in cyberspace. Microsoft promises to build neat tools that will help us get wherever we want to go today--if we keep writing larger and larger checks. The open source movement promises software with practically no limitations. Which is a better deal? The Microsoft millionaires probably believe in proprietary software and suggest that the company wouldn't have succeeded as it did if it didn't provide something society wanted. They created good things, and the people rewarded them.
+
+But the open source movement has also created great software that many think is better than anything Microsoft has built. Is society better off with a computer infrastructure controlled by a big corporate machine driven by cash? Or does sharing the source code create better software? Are we at a point where money is not the best vehicle for lubricating the engines of societal advancement? Many in the free software world are pondering these questions.
+
+Anyone who tunes in to the battle between Microsoft and the world expecting to see a good old-fashioned fight for marketplace domination is going to miss the real excitement. Sure, Linux, FreeBSD, OpenBSD, NetBSD, Mach, and the thousands of other free software projects are going to come out swinging. Microsoft is going to counterpunch with thousands of patents defended by armies of lawyers. Some of the programmers might even be a bit weird, and a few will be entitled to wear the adjective "ragtag." But the real revolution has nothing to do with whether Bill Gates keeps his title as King of the Hill. It has nothing to do with whether the programmers stay up late and work in the nude. It has nothing to do with poor grooming, extravagant beards, Coke-bottle glasses, black trench coats, or any of the other stereotypes that fuel the media's image.
+
+It's about the gradual commodification of software and hardware. It's about the need for freedom and the quest to create cool software. It's about a world just discovering how much can be accomplished when information can be duplicated for next to nothing.
+
+The real struggle is finding out how long society can keep hanging ten toes off the edge of the board as we get carried by the wave of freedom. Is there enough energy in the wave and enough grace in society to ride it all the way to the shore? Or will something wicked, something evil, or something sloppy come along and mess it up?
+
+1~ College
+
+2~ Speaking in Tongues
+
+I was part of the free software movement for many years, but I didn't know it. When I was a graduate student, I released the source code to a project. In 1991, that was the sort of thing to do in universities. Publishing the source code to a project was part of publishing a paper about it. And the academy put publishing pretty high on its list.
+
+My first big release came in May 1991 when I circulated a program that let people hide secret messages as innocuous text. My program turned any message into some cute play-by-play from a baseball game, like "No contact in Mudsville! It's a fastball with wings. No wood on that one. He's uncorking what looks like a spitball. Whooooosh! Strike! He's out of there." The secret message was encoded in the choices of phrases. "He's out of there" meant something different from "He pops it up to Orville Baskethands." The program enabled information to mutate into other forms, just like the shapeshifting monsters from The X-Files. I sent out an announcement to the influential newsgroup comp.risks, and soon hundreds of people were asking for free copies of the software.
+
+I created this program because Senator Joe Biden introduced a bill into the Senate that would require the manufacturers of all computer networks to provide a way for the police to get copies of any message. The Federal Bureau of Investigation, among others, was afraid that they would have trouble obtaining evidence if people were able to encode data. My software illustrated how hard it would be to stop the flow of information.
+
+The best, and perhaps most surprising, part of the whole bloom of email came when a fellow I had never met, D. Jason Penney, converted the program from the fading Pascal into the more popular C. He did this on his own and sent the new, converted software back to me. When I asked him whether I could distribute his version, he said that it was my program. He was just helping out.
+
+I never thought much more about that project until I started to write this book. While two or three people a month would write asking for copies of the software, it never turned into more than a bit of research into the foundations of secret codes and a bit of a mathematical parlor trick. It was more an academic exercise than a prototype of something that could rival Microsoft and make me rich.
+
+In the past, I thought the project never developed into more than a cute toy because there was no market for it. The product wasn't readily useful for businesses, and no one starts a company without the hope that millions of folks desperately need a product. Projects needed programmers and programmers cost money. I just assumed that other free software projects would fall into the same chasm of lack of funding.
+
+Now, after investigating the free software world, I am convinced that my project was a small success. Penney's contribution was not just a strange aberration but a relatively common event on the Internet. People are quite willing to take a piece of software that interests them, modify it to suit their needs, and then contribute it back to the world. Sure, most people only have a few hours a week to work on such projects, but they add up. Penney's work made my software easier to use for many C programmers, thus spreading it further.
+
+In fact, I may have been subconsciously belittling the project. It took only three or four days of my time and a bit more of Penney's, but it was a complete version of a powerful encryption system that worked well. Yes, there was no money flowing, but that may have made it more of a success. Penney probably wouldn't have given me his C version if he knew I was going to sell it. He probably would have demanded a share. Lawyers would have gotten involved. The whole project would have been gummed up with contracts, release dates, distribution licenses, and other hassles that just weren't worth it for a neat way to hide messages. Sure, money is good, but money also brings hassles.
+
+2~ Cash Versus Sharing
+
+In the 1980s and 1990s, programmers in universities still shared heavily with the world. The notion of sharing source code with the world owes a great deal to the academic tradition of publishing results so others can read them, think about them, critique them, and ultimately extend them. Many of the government granting agencies like the National Science Foundation and the Defense Advanced Research Projects Agency fostered this sharing by explicitly requiring that people with grants release the source code to the world with no restrictions. Much of the Internet was created by people who gave out these kinds of contracts and insisted upon shared standards that weren't proprietary. This tradition fell on harder times as universities became more obsessed with the profits associated with patents and contract research, but the idea is so powerful that it's hard to displace.
+
+The free software movement in particular owes a great deal to the Massachusetts Institute of Technology. Richard Stallman, the man who is credited with starting the movement, began working in MIT's computer labs in the 1970s. He gets credit for sparking the revolution because he wrote the GNU Manifesto in 1984. The document spelled out why it's essential to share the source code to a program with others. Stallman took the matter to heart because he also practiced what he wrote about and contributed several great programs, including a text editor with thousands of features.
+
+Of course, Stallman doesn't take credit for coming up with the idea of sharing source code. He remembers his early years at MIT quite fondly and speaks of how people would share their source code and software without restrictions. The computers were new, complicated, and temperamental. Cooperation was the only way that anyone could accomplish anything. That's why IBM shared the source code to the operating systems on their mainframes through the early part of the 1960s.
+
+This tradition started to fade by the early 1980s as the microcomputer revolution began. Companies realized that most people just wanted software that worked. They didn't need the source code and all the instructions that only programmers could read. So companies quickly learned that they could keep the source code to themselves and keep their customers relatively happy while locking out competitors. They were kings who built a wall to keep out the intruders.
+
+The GNU Manifesto emerged as the most radical reaction to the trend toward locking up the source code. While many people looked at the GNU Manifesto with confusion, others became partial converts. They began donating code that they had written. Some tossed random utility programs into the soup, some offered games, and some sent in sophisticated packages that ran printers, networks, or even networks of printers. A few even became complete disciples and started writing code full-time for the GNU project. This growth was largely ignored by the world, which became entranced with the growth of Microsoft. More and more programmers, however, were spending more time mingling with the GNU project, and it was taking hold.
+
+In the early 1980s, an operating system known as UNIX had grown to be very popular in universities and laboratories. AT&T designed and built it at Bell Labs throughout the 1970s. In the beginning, the company shared the source code with researchers and computer scientists in universities, in part because the company was a monopoly that was only allowed to sell telephone service. UNIX was just an experiment that the company started to help run the next generation of telephone switches, which were already turning into specialized computers.
+
+In the beginning, the project was just an academic exercise, but all of the research and sharing helped create a nice operating system with a wide audience. UNIX turned out to be pretty good. When the phone company started splitting up in 1984, the folks at AT&T wondered how they could turn a profit from what was a substantial investment in time and money. They started by asking people who used UNIX at the universities to sign non-disclosure agreements.
+
+Stallman looked at this as mind control and the death of a great tradition. Many others at the universities were more pragmatic. AT&T had given plenty of money and resources to the university. Wasn't it fair for the university to give something back?
+
+Stallman looked at this a bit differently. Yes, AT&T was being nice when they gave grants to the university, but weren't masters always kind when they gave bowls of gruel to their slaves? The binary version AT&T started distributing to the world was just gruel for Stallman. The high priests and lucky few got to read the source code. They got to eat the steak and lobster spread. Stallman saw this central, controlling, corporate force as the enemy, and he began naming his work GNU, which was a recursive acronym that stood for "GNU's Not UNIX." The GNU project aimed to produce a complete working operating system that was going to do everything that UNIX did for none of the moral, emotional, or ethical cost. Users would be able to read the source code to Stallman's OS and modify it without signing a tough non-disclosure agreement drafted by teams of lawyers. They would be able to play with their software in complete freedom. Stallman notes that he never aimed to produce an operating system that didn't cost anything. The world may be entranced with the notion of a price tag of zero, but for Stallman, that was just a side effect of the unrestricted sharing.
+
+Creating a stand-alone system that would do everything with free software was his dream, but it was a long way from fruition, and Stallman was smart enough to start off with a manageable project. He began by producing a text editor known as GNU Emacs. The program was a big hit because it was highly customizable. Some people just used the program to edit papers, but others programmed it to accomplish fancier tasks such as reading their e-mail and generating automatic responses. One programmer was told by management that he had to include plenty of comments in his source code, so he programmed GNU Emacs to insert them automatically. One professor created a version of GNU Emacs that would automatically insert random praise into requests to his secretary.~{ "Where are those reports I asked you to copy? You're doing a great job. Thanks for all the help," on one day. "Are you ever going to copy those reports? You're doing a great job. Thanks for all the help," on the next. }~ Practically everything in Emacs could be changed or customized. If you didn't like hitting the delete key to fix a mistyped character, then you could arrange for the 6 key to do the same thing. This might make it hard to type numbers, but the user was free to mess up his life as much as he wanted.
+
+It took Microsoft years to catch up with Stallman's solution, and even then they implemented it in a dangerous way. They let people create little custom programs for modifying documents, but they forgot to prevent malicious code from crying havoc. Today, Microsoft Word allows little programs named macro viruses to roam around the planet. Open up a Word document, and a virus might be lurking.
+
+In the 1980s, the free software world devoted itself to projects like this. GNU Emacs became a big hit in the academic world where system administrators could install it for free and not worry about counting students or negotiating licenses. Also, smart minds were better able to appreciate the cool flexibility Stallman had engineered into the system. Clever folks wasted time by adding filters to the text editor that would scan their text and translate it into, like, Valley Girl talk or more urban jive.
+
+The GNU project grew by accepting contributions from many folks across the country. Some were fairly sophisticated, eye-catching programs like GNU Chess, a program that was quite competitive and as good as all but the best packages. Most were simple tools for handling many of the day-to-day chores for running a computer system. System administrators, students, and programmers from around the country would often take on small jobs because they felt compelled to fix something. When they were done, a few would kick the source code over to the GNU project.
+
+Stallman's biggest programming project for GNU during the 1980s was writing the GNU C compiler (GCC). This program was an important tool that converted the C source code written by humans into the machine code understood by computers. The GCC package was an important cornerstone for the GNU project in several ways. First, it was one of the best compilers around. Second, it could easily move from machine to machine. Stallman personally ported it to several different big platforms like Intel's x86 line of processors. Third, the package was free, which in the case of GNU software meant that anyone was free to use and modify the software.
+
+The GCC provided an important harmonizing effect to the GNU project. Someone could write his program on a machine built by Digital, compile it with GCC, and be fairly certain that it would run on all other machines with GCC. That allowed the GNU software to migrate freely throughout the world, from machine to machine, from Sun to Apollo to DEC to Intel.
+
+The GCC's license also attracted many developers and curious engineers. Anyone could use the source code for their projects, and many did. Over time, the compiler moved from machine to machine as users converted it. Sometimes a chip company engineer would rework the compiler to make it work on a new chip. Sometimes a user would do it for a project. Sometimes a student would do it when insomnia struck. Somehow, it moved from machine to machine, and it carried all of the other GNU software with it.
+
+The next great leap forward came in the early 1990s as people began to realize that a completely free operating system was a serious possibility. Stallman had always dreamed of replacing UNIX with something that was just as good and accompanied by the source code, but it was a large task. It was the reason he started the GNU project. Slowly but surely, the GNU project was assembling the parts to make it work. There were hundreds of small utilities and bigger tools donated to the GNU project, and those little bits were starting to add up.
+
+The free software movement also owes a great deal to Berkeley, or more precisely to a small group in the Department of Computer Science at the University of California at Berkeley. The group of hardcore hackers, which included professors, research associates, graduate students, and a few undergraduates, had developed a version of UNIX known as BSD (Berkeley Software Distribution). AT&T shared their version of UNIX with Berkeley, and the programmers at Berkeley fixed, extended, and enhanced the software. These extensions formed the core of BSD. Their work was part experimental and part practical, but the results were widely embraced. Sun Microsystems, one of Silicon Valley's UNIX workstation companies, used a version on its machines through the early 1990s when they created a new version known as Solaris by folding in some of AT&T's System V. Many feel that BSD and its approach remain the foundation of the OS.
+
+The big problem was that the team built their version on top of source code from AT&T. The folks at Berkeley and their hundreds, if not thousands, of friends, colleagues, and students who contributed to the project gave their source code away, but AT&T did not. This gave AT&T control over anyone who wanted to use BSD, and the company was far from ready to join the free software movement. Millions of dollars were spent on the research developing UNIX. The company wanted to make some money back.
+
+The team at Berkeley fought back, and Keith Bostic, one of the core team, began organizing people together to write the source code that could replace these bits. By the beginning of the 1990s, he had cajoled enough of his friends to accomplish it. In June 1991, the group produced "Networking Release 2," a version that included almost all of a complete working version of UNIX. All you needed to do was add six files to have a complete operating system.
+
+AT&T was not happy. It had created a separate division known as the UNIX Systems Laboratory and wanted to make a profit. Free source code from Berkeley was tough competition. So the UNIX Systems Laboratory sued.
+
+This lawsuit marked the end of universities' preeminent role in the development of free software. Suddenly, the lawsuit focused everyone's attention and made them realize that taking money from corporations came into conflict with sharing software source code. Richard Stallman left MIT in 1984 when he understood that a university's need for money would eventually trump his belief in total sharing of source code. Stallman was just a staff member who kept the computers running. He wasn't a tenured professor who could officially do anything. So he started the Free Software Foundation and never looked back. MIT helped him at the beginning by loaning him space, but it was clear that the relationship was near the end. Universities needed money to function. Professors at many institutions had quotas specifying how much grant money they needed to raise. Stallman wasn't bringing in cash by giving away his software.
+
+Meanwhile, on the other coast, the lawsuit tied up Berkeley and the BSD project for several years, and the project lost valuable energy and time by devoting them to the legal fight. In the meantime, several other completely free software projects started springing up around the globe. These began in basements and depended on machines that the programmers owned. One of these projects was started by Linus Torvalds and would eventually grow to become Linux, the unstoppable engine of hype and glory. He didn't have the money of the Berkeley computer science department, and he didn't have the latest machines that corporations gave them. But he had freedom and the pile of source code that came from unaffiliated, free projects like GNU that refused to compromise and cut intellectual corners. Although Torvalds might not have realized it at the time, freedom turned out to be the most valuable of all.
+
+1~ Quicksand
+
+The story of the end of the university's preeminence in the free software world is a tale of greed and corporate power. While many saw an unhappy ending coming for many years, few could do much to stop the inevitable collision between the University of California at Berkeley and its former patron, AT&T.
+
+The lawsuit between AT&T and the University of California at Berkeley had its roots in what marriage counselors love to call a "poorly conceived relationship." By the end of the 1980s, the computer science department at Berkeley had a problem. They had been collaborating with AT&T on the UNIX system from the beginning. They had written some nice code, including some of the crucial software that formed the foundation of the Internet. Students, professors, scientists, and even Wall Street traders loved the power and flexibility of UNIX. Everyone wanted UNIX.
+
+The problem was that not everyone could get UNIX. AT&T, which had sponsored much of the research at Berkeley, kept an iron hand on its invention. If you wanted to run UNIX, then you needed to license some essential software from AT&T that sat at the core of the system. They were the supreme ruler of the UNIX domain, and they expected a healthy tithe for the pleasure of living within it.
+
+One of the people who wanted UNIX was the Finnish student Linus Torvalds, who couldn't afford this tithe. He was far from the first one, and the conflict began long before he started to write Linux in 1991.
+
+Toward the end of the 1980s, most people in the computer world were well aware of Stallman's crusade against the corporate dominance of AT&T and UNIX. Most programmers knew that GNU stood for "GNU's Not UNIX." Stallman was not the only person annoyed by AT&T's attitude toward secrecy and non-disclosure agreements. In fact, his attitude was contagious. Some of the folks at Berkeley looked at the growth of tools emerging from the GNU project and felt a bit used. They had written many pieces of code that found their way into AT&T's version of UNIX. They had contributed many great ideas. Yet AT&T was behaving as if AT&T alone owned it. They gave and gave, while AT&T took.
+
+Stallman got to distribute his source code. Stallman got to share with others. Stallman got to build his reputation. Programmers raved about Stallman's Emacs. People played GNU Chess at their offices. Others were donating their tools to the GNU project. Everyone was getting some attention by sharing except the folks at Berkeley who collaborated with AT&T. This started to rub people the wrong way.
+
+Something had to be done, and the folks at Berkeley started feeling the pressure. Some at Berkeley wondered why the professors had entered into such a Faustian bargain with a big corporation. Was the payoff great enough to surrender their academic souls? Just where did AT&T get off telling us what we could publish?
+
+Others outside of Berkeley looked in and saw a treasure trove of software that was written by academics. Many of them were friends. Some of them had studied at Berkeley. Some had even written some of the UNIX code before they graduated. Some were companies competing with AT&T. All of them figured that they could solve their UNIX problems if they could just get their hands on the source code. There had to be some way to get it released.
+
+Slowly, the two groups began making contact and actively speculating on how to free Berkeley's version of UNIX from AT&T's grip.
+
+2~ Breaking the Bond
+
+The first move to separate Berkeley's version of UNIX from AT&T's control wasn't really a revolution. No one was starting a civil war by firing shots at Fort Sumter or starting a revolution by dropping tea in the harbor. In fact, it started long before the lawsuit and Linux. In 1989, some people wanted to start hooking their PCs and other devices up to the Internet, and they didn't want to use UNIX.
+
+Berkeley had written some of the software known as TCP/IP that defined how computers on the Internet would communicate and share packets. They wrote the software for UNIX because that was one of the favorite OSs around the labs. Other companies got a copy of the code by buying a source license for UNIX from AT&T. The TCP/IP code was just part of the mix. Some say that the cost of the license reached $250,000 or more and required that the customer pay a per-unit fee for every product that was shipped. Those prices didn't deter the big companies like IBM or DEC. They thought of UNIX as an OS for the hefty workstations and minicomputers sold to businesses and scientists. Those guys had the budget to pay for big hardware, so it was possible to slip the cost of the UNIX OS in with the package.
+
+But the PC world was different. It was filled with guys in garages who wanted to build simple boards that would let a PC communicate on the Internet. These guys were efficient and knew how to scrounge up cheap parts from all over the world. Some of them had gone to Berkeley and learned to program on the VAXes and Sun workstations running Berkeley's version of UNIX. A few of them had even helped write or debug the code. They didn't see why they had to buy such a big license for something that non-AT&T folks had written with the generous help of large government grants. Some even worked for corporations that gave money to support Berkeley's projects. Why couldn't they get at the code they helped pay to develop?
+
+Kirk McKusick, one of the members of the Computer Systems Research Group at the time, remembers, "People came to us and said, 'Look, you wrote TCP/IP. Surely you shouldn't require an AT&T license for that?' These seemed like reasonable requests. We decided to start with something that was clearly not part of the UNIX we got from AT&T. It seemed very clear that we could pull out the TCP/IP stack and distribute that without running afoul of AT&T's license."
+
+So the Berkeley Computer Systems Research Group (CSRG) created what they called Networking Release 1 and put it on the market for $1,000 in June 1989. That wasn't really the price because the release came with one of the first versions of what would come to be known as the BSD-style license. Once you paid the $1,000, you could do whatever you wanted with the code, including just putting it up on the Net and giving it away.
+
+"We thought that two or three groups would pay the money and then put the code on the Internet, but in fact, hundreds of sites actually paid the one thousand dollars for it," says McKusick and adds, "mostly so they could get a piece of paper from the university saying, 'You can do what you want with this.'"
+
+This move worked out well for Berkeley and also for UNIX. The Berkeley TCP/IP stack became the best-known version of the code, and it acted like a reference version for the rest of the Net. If it had a glitch, everyone else had to work around the glitch because it was so prevalent. Even today, companies like Sun like to brag that their TCP/IP forms the backbone of the Net, and this is one of the reasons to buy a Sun instead of an NT workstation. Of course, the code in Sun's OS has a rich, Berkeley-based heritage, and it may still contain some of the original BSD code for controlling the net.
+
+2~ In for a Penny, in for a Pound
+
+In time, more and more companies started forming in the Bay Area and more and more realized that Berkeley's version of UNIX was the reference for the Internet. They started asking for this bit or that bit.
+
+Keith Bostic heard these requests and decided that the Berkeley CSRG needed to free up as much of the source code as possible. Everyone agreed it was a utopian idea, but only Bostic thought it was possible to accomplish. McKusick writes, in his history of BSD, "Mike Karels [a fellow software developer] and I pointed out that releasing large parts of the system was a huge task, but we agreed that if he could sort out how to deal with re-implementing the hundreds of utilities and the massive C library, then we would tackle the kernel. Privately, Karels and I thought that would be the end of the discussion."
+
+Dave Hitz, a good friend of Bostic's, remembers the time. "Bostic was more of a commanding type. He just rounded up all of his friends to finish up the code. You would go over to his house for dinner and he would say, 'I've got a list. What do you want to do?' I think I did the cp command and maybe the look command." Hitz, of course, is happy that he took part in the project. He recently founded Network Appliance, a company that packages a stripped-down version of BSD into a file server that is supposed to be a fairly bulletproof appliance for customers. Network Appliance didn't need to do much software engineering when they began. They just grabbed the free version of BSD and hooked it up.
+
+Bostic pursued people far and wide to accomplish the task. He gave them the published description of the utility or the part of the library from the documentation and then asked them to reimplement it without looking at the source code. This cloning operation is known as a cleanroom operation because it is entirely legal if it takes place inside a metaphorical room where the engineers inside don't have any information about how the AT&T engineers built UNIX.
+
+This was not an easy job, but Bostic was quite devoted and pursued people everywhere. He roped everyone who could code into the project and often spent time fixing things afterward. The task took 18 months and included more than 400 people who received just notoriety and some thanks afterward. The 400-plus names are printed in the book he wrote with McKusick and Karels in 1996.
+
+When Bostic came close to finishing, he stopped by McKusick's office and asked how the kernel was coming along. This called McKusick and Karels's bluff and forced them to do some hard engineering work. In some respects, Bostic had the easier job. Writing small utility programs that his team used was hard work, but it was essentially preorganized and segmented. Many folks over the years created manual files that documented exactly what the programs were supposed to do. Each program could be assigned separately and people didn't need to coordinate their work too much. These were just dishes for a potluck supper.
+
+Cleaning up the kernel, however, was a different matter. It was much larger than many of the smaller utilities and was filled with more complicated code that formed a tightly coordinated mechanism. Sloppy work in one of the utility files would probably affect only that one utility, but a glitch in the kernel would routinely bring down the entire system. If Bostic was coordinating a potluck supper, McKusick and Karels had to find a way to create an entire restaurant that served thousands of meals a day to thousands of customers. Every detail needed to work together smoothly.
+
+To make matters more complicated, Berkeley's contributions to the kernel were mixed in with AT&T's contributions. Both had added on parts, glued in new features, and created new powers over the years. They were de facto partners on the project. Back in the good old days, they had both shared their source code without any long-term considerations or cares. But now that AT&T claimed ownership of it all, they had to find a way to unwind all of the changes and figure out who wrote what.
+
+McKusick says, "We built the database up line by line. We took every line of code and inserted it into the database. You end up finding pretty quickly where the code migrated to and then you decide whether it is sufficiently large enough to see if it needed recoding."
+
+This database made life much easier for them and they were able to plow through the code, quickly recoding islets of AT&T code here and there. They could easily pull up a file filled with source code and let the database mark up the parts that might be owned by AT&T. Some parts went quickly, but other parts dragged on. By late spring of 1991, they had finished all but six files that were just too much work.
+
+It would be nice to report that they bravely struggled onward, forgoing all distractions like movies, coffeehouses, and friends, but that's not true. They punted and tossed everything out the door and called it "Network Release 2." The name implied that this new version was just a new revision of their earlier product, Network Release 1, and this made life easier with the lawyers. They just grabbed the old, simple license and reused it. It also disguised the fact that this new pile of code was only about six files short of a full-grown OS.
+
+The good news about open source is that projects often succeed even when they initially fail. A commercial product couldn't ship without the complete functionality of the six files. Few would buy it. Plus, no one could come along, get a bee in his bonnet, and patch up the holes. Proprietary source code isn't available and no one wants to help someone else in business without compensation.
+
+The new, almost complete UNIX, however, was something different. It was a university project and so university rules of camaraderie and sharing seemed to apply. Another programmer, Bill Jolitz, picked up Network Release 2 and soon added the code necessary to fill the gap. He became fascinated with getting UNIX up and running on a 386 processor, a task that was sort of like trying to fit the latest traction control hardware and anti-lock brakes on a go-cart. At the time, serious computer scientists worked on serious machines from serious workstation and minicomputer companies. The PC industry was building toys. Of course, there was something macho to the entire project. Back then I remember joking to a friend that we should try to get UNIX running on the new air-conditioning system, just to prove it could be done.
+
+Jolitz's project, of course, found many people on the Net who didn't think it was just a toy. Once he put the source code on the Net, a bloom of enthusiasm spread through the universities and waystations of the world. People wanted to experiment with a high-grade OS and most could only afford relatively cheap hardware like the 386. Sure, places like Berkeley could get the government grant money and the big corporate donations, but 2,000-plus other schools were stuck waiting. Jolitz's version of 386BSD struck a chord.
+
+While news traveled quickly to some corners, it didn't reach Finland. Network Release 2 came in June 1991, right around the same time that Linus Torvalds was poking around looking for a high-grade OS to use in experiments. Jolitz's 386BSD came out about six months later as Torvalds began to dig into creating the OS he would later call Linux. Soon afterward, Jolitz lost interest in the project and let it lie, but others came along. In fact, two groups called NetBSD and FreeBSD sprang up to carry the torch.
+
+Although it may seem strange that three groups building a free operating system could emerge without knowing about each other, it is important to realize that the Internet was a very different world in 1991 and 1992. The World Wide Web was only a gleam in some people's eyes. Only the best universities had general access to the web for its students, and most people didn't understand what an e-mail address was. Only a few computer-related businesses like IBM and Xerox put their researchers on the Net. The community was small and insular.
+
+The main conduits for information were the USENET newsgroups, which were read only by people who could get access through their universities. This technology was an efficient way of sharing information, although quite flawed. Here's how it worked: every so often, each computer would call up its neighbors and swap the latest articles. Information traveled like gossip, which is to say that it traveled quickly but with very uneven distribution. Computers were always breaking down or being upgraded. No one could count on every message getting to every corner of the globe.
+
+The NetBSD and the FreeBSD forks of the BSD kernel continue to exist separately today. The folks who work on NetBSD concentrate on making their code run on all possible machines, and they currently list 21 different platforms that range from the omnipresent Intel 486 to the gone but not forgotten Commodore Amiga.
+
+The FreeBSD team, on the other hand, concentrates on making their product work well on the Intel 386. They added many layers of installation tools to make it easier for the average Joe to use, and now it's the most popular version of BSD code around.
+
+Those two versions used the latest code from Berkeley. Torvalds, on the other hand, didn't know about the 386BSD, FreeBSD, or NetBSD. If he had found out, he says, he probably would have just downloaded the versions and joined one of those teams. Why run off and reinvent the wheel?
+
+2~ AT&T Notices the Damage
+
+Soon after Network Release 2 hit the world, the real problems began for BSD. While AT&T didn't really notice 386BSD, NetBSD, or FreeBSD, they did notice a company called Berkeley Software Design Incorporated. This corporation created their own OS by taking Network Release 2 and adding their own versions of the missing six files, but they didn't release this for free on the Net. They started putting advertisements in the trade press offering the source code for $995, a price they claimed was a huge discount over AT&T's charge.
+
+The modern, post-Internet reader should find this hilarious. Two to three groups and countless splinter factions were distributing the BSD software over the Internet for free and this didn't seem to catch AT&T's attention, but the emergence of BSDI selling the same product for almost $1,000 rang alarm bells. That was the time, though, before the Internet infrastructure became ubiquitous. In the early 1990s, people only half-joked that FedEx was the most efficient Internet Service Provider around. It was much faster to copy hundreds of megabytes of data onto a magnetic tape and drop it in FedEx than to actually try to copy it over the Internet. Back then only real nerds were on the Internet. Managers and lawyers wore suits and got their news from the trade press and advertisements.
+
+BSDI's cost-cutting was a major headache for AT&T. This small company was selling a product that AT&T felt it had shepherded, organized, and coordinated over time.
+
+AT&T started off by claiming UNIX as a trademark and accusing BSDI of infringing it. BSDI countered by changing the ads to emphasize that BSDI was a separate company, unrelated to AT&T or to the subsidiary AT&T had created to market UNIX, known as UNIX System Laboratories, or USL.
+
+That didn't work. USL saw its cash cow melting away and assumed folks would jump at the chance to buy a complete OS with all the source code for $995. The price seems outrageously high today, but that's only after the stiff price competition of the 1990s. It was still a good deal at the time. So USL sued BSDI for actually stealing proprietary source code from AT&T.
+
+This argument didn't work, either. BSDI turned around and waved the Network Release 2 license they got from Berkeley. They bought all but six of the files from Berkeley, and Berkeley claimed that all of the source code was theirs to sell. BSDI wrote the missing six files themselves and they were quite sure that they got no help from AT&T or USL. Therefore, BSDI didn't steal anything. If AT&T thought it was stolen, they should take it up with Berkeley. The judge bought BSDI's argument and narrowed the case to focus on the six files.
+
+This was a crucial moment in the development of the free software movement and its various kernels. AT&T found itself cornered. Backing down meant giving up its claim to UNIX and the wonderful stream of license fees that kept pouring in. Pressing ahead meant suing the University of California, its old friend, partner, and author of lots of UNIX code. Eventually, the forces of greed and omnipotent corporate power won out and AT&T's USL filed a lawsuit naming both BSDI and the University of California.
+
+Taking sides in this case was pretty easy for most folks in the academic and free software world. The CSRG at Berkeley did research. They published things. University research was supposed to be open and freely distributed. AT&T was trying to steal the work of hundreds if not thousands of students, researchers, professors, and others. That wasn't fair.
+
+In reality, AT&T did pay something for what they got. They sent their employees to Berkeley to get master's degrees, they shared the original Versions 5, 6, and 7 and 32/V source code, and they even sent some hardware to the computer science department. The original creators of UNIX lived and worked at Bell Labs drawing AT&T paychecks. Berkeley students got summer jobs at AT&T. There wasn't an official quid-pro-quo. It wasn't very well spelled out, but AT&T was paying something.
+
+Some folks on AT&T's side might even want to paint the CSRG at Berkeley as filled with academic freeloaders who worked hard to weasel money out of the big corporations without considering the implications. The folks at Berkeley should have known that AT&T was going to want something for its contributions. There's no such thing as a free lunch.
+
+There's something to this argument because running a high-rent research project at a top-notch school requires a fair amount of guile and marketing sophistication. By the 1990s, the top universities had become very good at making vague, unofficial promises with their pleas for corporate gifts. This sort of coquetry and teasing was bound to land someone in a fight. McKusick, for instance, says that the CSRG designed the BSD license to be very liberal to please the corporate donors. "Hewlett-Packard put in hundreds of thousands of dollars and they were doing so under the understanding that they were going to use the code," he said. If the BSD group hadn't kept releasing code like Network Release 2 in a clear, easy-to-reuse legal form, he says, some of the funding for the group would have dried up.
+
+But there's also a bit of irony here. McKusick points out that AT&T was far from the most generous company to support the CSRG. "In fact, we even had to pay for our license to UNIX," he says before adding, "although it was only ninety-nine dollars at the time."
+
+AT&T's support of the department was hardly bountiful. The big checks weren't grants outright. They paid for the out-of-state tuition for AT&T employees who came to Berkeley to receive their master's degrees. While AT&T could have sent their employees elsewhere, there's no doubt that there are more generous ways to send money to researchers.
+
+McKusick also notes that AT&T didn't even send along much hardware. The only hardware he remembers receiving from them were some 5620 terminals and a Datakit circuit-based switch that he says "was a huge headache that really did us very little good." Berkeley was on the forefront of developing the packet-based standards that would dominate the Internet. If anything, the older circuit-based switch convinced the Berkeley team that basing the Internet on the old phone system would be a major mistake.
+
+To make matters worse, AT&T often wanted the BSD team to include features that would force all the BSD users to buy a newer, more expensive license from AT&T. In addition, license verification was never a quick or easy task. McKusick says, "We had a person whose full-time job was to keep the AT&T licensing person happy."
+
+In the end, he concludes, "They paid us next to nothing and got a huge windfall."
+
+Choosing sides in this battle probably isn't worth the trouble at this point because Berkeley eventually won. The hard work of Bostic's hundreds of volunteers and the careful combing of the kernel by the CSRG paid off. AT&T's case slowly withered away as the University of California was able to show how much of the distribution came from innocent, non-AT&T sources.
+
+Berkeley even landed a few good blows of its own. They found that AT&T had stripped copyrights from Berkeley code imported into System V and had failed to give Berkeley due credit. The BSD license is probably one of the least restrictive in the world. Companies like Apple use BSD source code all the time, and the license asks little beyond keeping the copyright notice intact and including some credit for the University of California. AT&T ignored even that and failed to cite Berkeley's contributions in its releases. Oops. So the CSRG countersued, claiming that AT&T had violated what may be the least demanding license around.
+
+The battle raged in the courts for more than a year. It moved from federal to California state court. Judges held hearings, lawyers took depositions, clerks read briefs, judges heard arguments presented by briefs written by lawyers who had just held depositions. The burn rate of legal fees was probably larger than most Internet start-ups.
+
+Any grown-up should take one look at this battle and understand just how the free software movement got so far. While the Berkeley folks were meeting with lawyers and worrying about whether the judges were going to choose the right side, Linus Torvalds was creating his own kernel. He started Linux on his own, and that made him a free man.
+
+In the end, the University of California settled the lawsuit after the USL was sold to Novell, a company run by Ray Noorda. McKusick believes that Noorda's embrace of free competition made a big difference, and by January 1994 the legal fight was over. Berkeley celebrated by releasing a completely free and unencumbered 4.4BSD-Lite in June 1994.
+
+The terms of the settlement were pretty minor. Net Release 2 came with about 18,000 files. 4.4BSD-Lite contained all but three of them. Seventy of them included a new, expanded copyright that gave some credit to AT&T and USL, but didn't constrain anyone's right to freely distribute them. McKusick, Bostic, and the hundreds of volunteers did a great job making sure that Net Release 2 was clean. In fact, two people familiar with the settlement talks say that Berkeley just deleted a few files to allow USL's lawyers to save face. We'll never know for sure because the details of the settlement are sealed. McKusick and the others can't talk about the details. That's another great example of how the legal system fails the American people and inadvertently gives the free software world another leg up. There's no information in the record to help historians or give future generations some hints on how to solve similar disputes.
+
+1~ Outsider
+
+The battle between the University of California at Berkeley's computer science department and AT&T did not reach the court system until 1992, but the friction between the department's devotion to sharing and the corporation's insistence on control started long before.
+
+While the BSD team struggled with lawyers, a free man in Finland began to write his own operating system without any of the legal or institutional encumbrance. At the beginning he said it was a project that probably wouldn't amount to much, but only a few years later people began to joke about "Total World Domination." A few years after that, they started using the phrase seriously.
+
+In April 1991, Linus Torvalds had a problem. He was a relatively poor university student in Finland who wanted to hack in the guts of a computer operating system. Microsoft's machines at the time were the cheapest around, but they weren't very interesting. The basic Disk Operating System (DOS) essentially let one program control the computer. Windows 3.1 was not much more than a graphical front end to DOS featuring pretty pictures--icons--to represent the files. Torvalds wanted to experiment with a real OS, and that meant UNIX or something that was UNIX-like. These real OSs juggled hundreds of programs at one time and often kept dozens of users happy. Playing with DOS was like practicing basketball shots by yourself. Playing with UNIX was like playing with a team that had 5, 10, maybe as many as 100 people moving around the court in complicated, clockwork patterns.
+
+But UNIX machines cost a relative fortune. The high-end customers requested the OS, so generally only high-end machines came with it. A poor university student in Finland didn't have the money for a top-notch Sun SPARCstation. He could only afford a basic PC, which came with the 386 processor. This was a top-of-the-line PC at the time, but it still wasn't particularly exciting. A few companies made a version of UNIX for this low-end machine, but they charged for it.
+
+In June 1991, soon after Torvalds~{ Everyone in the community, including many who don't know him, refers to him by his first name. The rules of style prevent me from using that in something as proper as a book. }~ started his little science project, the Computer Systems Research Group at Berkeley released what they thought was their completely unencumbered version of BSD UNIX known as Network Release 2. Several projects emerged to port this to the 386, and the project evolved to become the FreeBSD and NetBSD versions of today. Torvalds has often said that he might never have started Linux if he had known that he could just download a more complete OS from Berkeley.
+
+But Torvalds didn't know about BSD at the time, and he's lucky he didn't. Berkeley was soon snowed under by the lawsuit with AT&T claiming that the university was somehow shipping AT&T's intellectual property. Development of the BSD system came to a screeching halt as programmers realized that AT&T could shut them down at any time if Berkeley was found guilty of giving away source code that AT&T owned.
+
+If he couldn't afford to buy a UNIX machine, he would write his own version. He would make it POSIX-compatible, following the standard for UNIX designers, so others would be able to use it. Torvalds initially considered building on Minix, another UNIX-like OS that a professor, Andrew Tanenbaum, had written so students could experiment with the guts of an OS. Tanenbaum included the source code with the package, but he charged for it. It was like a textbook for students around the world.
+
+Torvalds looked at the price of Minix ($150) and thought it was too much. Richard Stallman's GNU General Public License had taken root in Torvalds's brain, and he saw the limitations in charging for software. GNU had also produced a wide variety of tools and utility programs that he could use on his machine. Minix was controlled by Tanenbaum, albeit with a much looser hand than many of the other companies at the time.
+
+People could add their own features to Minix and some did. They did get a copy of the source code for $150. But few changes made their way back into Minix. Tanenbaum wanted to keep it simple and grew frustrated with the many people who, as he wrote back then, "want to turn Minix into BSD UNIX."
+
+So Torvalds started writing his own tiny operating system for this 386. It wasn't going to be anything special. It wasn't going to topple AT&T or the burgeoning Microsoft. It was just going to be a fun experiment in writing a computer operating system that was all his. He wrote in January 1992, "Many things should have been done more portably if it would have been a real project. I'm not making overly many excuses about it though: it was a design decision, and last April when I started the thing, I didn't think anybody would actually want to use it."
+
+Still, Torvalds had high ambitions. He was writing a toy, but he wanted it to have many, if not all, of the features found in full-strength UNIX versions on the market. On July 3, he started wondering how to accomplish this and placed a posting on the USENET newsgroup comp.os.minix, writing:
+
+_1 Hello netlanders, Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.
+
+Torvalds's question was pretty simple. When he wrote the message in 1991, UNIX was one of the major operating systems in the world. The project that started at AT&T and Berkeley was shipping on computers from IBM, Sun, Apple, and most manufacturers of higher-powered machines known as workstations. Wall Street banks and scientists loved the more powerful machines, and they loved the simplicity and hackability of UNIX machines. In an attempt to unify the marketplace, computer manufacturers created a way to standardize UNIX and called it POSIX. POSIX ensured that each UNIX machine would behave in a standardized way.
+
+Torvalds worked quickly. By September he was posting notes to the group with the subject line "What would you like to see most in Minix?" He was adding features to his clone, and he wanted to take a poll about where he should add next.
+
+Torvalds already had some good news to report. "I've currently ported bash(1.08) and GCC(1.40), and things seem to work. This implies that I'll get something practical within a few months," he said.
+
+At first glance, he was making astounding progress. He created a working system with a compiler in less than half a year. But he also had the advantage of borrowing from the GNU project. Stallman's GNU project group had already written a compiler (GCC) and a nice text user interface (bash). Torvalds just grabbed these because he could. He was standing on the shoulders of the giants who had come before him.
+
+The core of an OS is often called the "kernel," which is one of the strange words floating around the world of computers. When people are being proper, they note that Linus Torvalds was creating the Linux kernel in 1991. Most of the other software, like the desktop, the utilities, the editors, the web browsers, the games, the compilers, and practically everything else, was written by other folks. If you measure this in disk space, more than 95 percent of the code in an average distribution lies outside the kernel. If you measure it by user interaction, most people using Linux or BSD don't even know that there's a kernel in there. The buttons they click, the websites they visit, and the printing they do are all controlled by other programs that do the work.
+
+Of course, measuring the importance of the kernel this way is stupid. The kernel is sort of the combination of the mail room, boiler room, kitchen, and laundry room for a computer. It's responsible for keeping the data flowing between the hard drives, the memory, the printers, the video screen, and any other part that happens to be attached to the computer.
+
+In many respects, a well-written kernel is like a fine hotel. The guests check in, they're given a room, and then they can order whatever they need from room service and a well-oiled concierge staff. Is this new job going to take an extra 10 megabytes of disk space? No problem, sir. Right away, sir. We'll be right up with it. Ideally, the software won't even know that other software is running in a separate room. If that other program is a loud rock-and-roll MP3 playing tool, the other software won't realize that when it crashes and burns up its own room. The hotel just cruises right along, taking care of business.
+
+In 1991, Torvalds had a short list of features he wanted to add to the kernel. The Internet was still a small network linking universities and some advanced labs, and so networking was a small concern. He was only aiming at the 386, so he could rely on some of the special features that weren't available on other chips. High-end graphics hardware cards were still pretty expensive, so he concentrated on a text-only interface. He would later fix all of these problems with the help of the people on the Linux kernel mailing list, but for now he could avoid them.
+
+Still, hacking the kernel means anticipating what other programmers might do to ruin things. You don't know if someone's going to try to snag all 128 megabytes of RAM available. You don't know if someone's going to hook up a strange old daisy-wheel printer and try to dump a PostScript file down its throat. You don't know if someone's going to create an endless loop that's going to write random numbers all over the memory. Stupid programmers and dumb users do these things every day, and you've got to be ready for it. The kernel of the OS has to keep things flowing smoothly between all the different parts of the system. If one goes bad because of a sloppy bit of code, the kernel needs to cut it off like a limb that's getting gangrene. If one job starts heating up, the kernel needs to try to give it all the resources it can so the user will be happy. The kernel hacker needs to keep all of these things straight.
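+
+That defensive posture shows up concretely in the per-process resource limits the kernel enforces. As a minimal sketch, assuming a Linux machine and using Python's standard resource module (the names here are my own, not anything from the text), a program can even ask the kernel to cap its own address space, so a runaway allocation fails cleanly inside one room instead of dragging down the whole hotel:
+
+```python
+import resource
+
+# Ask the kernel to cap this process's address space at 512 MB.
+# RLIMIT_AS is enforced on Linux; some other platforms ignore it.
+_, hard = resource.getrlimit(resource.RLIMIT_AS)
+resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, hard))
+
+try:
+    hog = bytearray(2 * 1024 * 1024 * 1024)  # try to grab 2 GB at once
+    outcome = "allocation succeeded"
+except MemoryError:
+    # The kernel refused the mapping. Only this program notices;
+    # every other process on the machine keeps running untouched.
+    outcome = "allocation denied"
+
+print(outcome)
+```
+
+The point of the sketch is the isolation: the failed request raises an error inside the offending program, and the kernel never lets the damage spread.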
+
+Creating an operating system like this is no easy job. Many of the commercial systems crash frequently for no perceptible reason, and most of the public just takes it.~{ "Microsoft now acknowledges the existence of a bug in the tens of millions of copies of Windows 95 and Windows 98 that will cause your computer to 'stop responding (hang)'--you know, what you call crash--after exactly 49 days, 17 hours, 2 minutes, and 47.296 seconds of continuous operation . . . . Why 49.7 days? Because computers aren't counting the days. They're counting the milliseconds. One counter begins when Windows starts up; when it gets to 2<sup>32</sup> milliseconds--which happens to be 49.7 days--well, that's the biggest number this counter can handle. And instead of gracefully rolling over and starting again at zero, it manages to bring the entire operating system to a halt."--James Gleick in the New York Times. }~ Many people somehow assume that it must be their fault that the program failed. In reality, it's probably the kernel's. Or more precisely, it's the kernel designer's fault for not anticipating what could go wrong.
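+
+The oddly precise crash time in that footnote is simple counter arithmetic: a 32-bit millisecond counter wraps after 2<sup>32</sup> milliseconds. A quick check (Python here is my own choice, used purely for the arithmetic) recovers Gleick's figure:
+
+```python
+# The Windows uptime counter held milliseconds in 32 bits, so it
+# wraps after 2**32 milliseconds. Convert that into calendar units.
+ms = 2**32
+seconds = ms / 1000
+days = int(seconds // 86400)
+rem = seconds - days * 86400
+hours = int(rem // 3600)
+rem -= hours * 3600
+minutes = int(rem // 60)
+secs = rem - minutes * 60
+print(f"{days} days, {hours} hours, {minutes} minutes, {secs:.3f} seconds")
+# → 49 days, 17 hours, 2 minutes, 47.296 seconds
+```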
+
+By the mid-1970s, companies and computer scientists were already experimenting with many different ways to create workable operating systems. While the computers of the day weren't very powerful by modern standards, the programmers created operating systems that let tens if not hundreds of people use a machine simultaneously. The OS would keep the different tasks straight and make sure that no user could interfere with another.
+
+As people designed more and more operating systems, they quickly realized that there was one tough question: how big should it be? Some people argued that the OS should be as big as possible and come complete with all the features that someone might want to use. Others countered with stripped-down designs that came with a small core of the OS surrounded by thousands of little programs that did the same thing.
+
+To some extent, the debate is more about semantics than reality. A user wants the computer to be able to list the different files stored in one directory. It doesn't matter if the question is answered by a big operating system that handles everything or a little operating system that uses a program to find the answer. The job still needs to be done, and many of the instructions are the same. It's just a question of whether the instructions are labeled the "operating system" or an ancillary program.
+
+But the debate is also one about design. Programmers, teachers, and the Lego company all love to believe that any problem can be solved by breaking it down into small parts that can be assembled to create the whole. Every programmer wants to turn the design of an operating system into thousands of little problems that can be solved individually. This dream usually lasts until someone begins to assemble the parts and discovers that they don't work together as perfectly as they should.
+
+When Torvalds started crafting the Linux kernel, he decided he was going to create a bigger, more integrated version that he called a "monolithic kernel." This was something of a bold move because the academic community was entranced with what they called "microkernels." The difference is partly semantic and partly real, but it can be summarized by analogy with businesses. Some companies try to build large, smoothly integrated operations where one company controls all the steps of production. Others try to create smaller operations that subcontract much of the production work to other companies. One is big, monolithic, and all-encompassing, while the other is smaller, fragmented, and heterogeneous. It's not uncommon to find two companies in the same industry taking different approaches and thinking they're doing the right thing.
+
+The design of an operating system often boils down to the same decision. Do we want to build a monolithic core that handles all the juggling internally, or do we want a smaller, more fragmented model that should be more flexible as long as the parts interact correctly?
+
+In time, the OS world started referring to this core as the kernel of the operating system. People who wanted to create big OSs with many features wrote monolithic kernels. Their ideological enemies who wanted to break the OS into hundreds of small programs running on a small core wrote microkernels. Some of the most extreme folks labeled their work a nanokernel because they thought it did even less and thus was even more pure than those bloated microkernels.
+
+The word "kernel" is a bit confusing for most people because they often use it to mean a fragment of an object or a small fraction. An extreme argument may have a kernel of truth to it. A disaster movie always gives the characters and the audience a kernel of hope to which to cling.
+
+Mathematicians use the word a bit differently and emphasize the word's ability to let a small part define a larger concept. Technically, a kernel of a function f is the set of values, x<sub>1</sub>, x<sub>2</sub>, . . . x<sub>n</sub> such that f(x<sub>i</sub>)=1, or whatever the identity element happens to be. The kernel of a function does a good job of defining how the function behaves with all the other elements. The algebraists study a kernel of a function because it reveals the overall behavior.~{ The kernel of f(x)=x<sup>2</sup> is {-1, 1} and it illustrates how the function has two branches. }~
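+In modern notation, with identity element e, the definition the author is paraphrasing can be written as follows (a standard algebra formulation, not the book's own):

```latex
\ker f = \{\, x \mid f(x) = e \,\}
\qquad\text{e.g. } f(x) = x^{2} \text{ on the nonzero reals under multiplication, } e = 1,
\quad \ker f = \{-1,\, 1\}
```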
+
+The OS designers use the word in the same way. If they define the kernel correctly, then the behavior of the rest of the OS will follow. The small part of the code defines the behavior of the entire computer. If the kernel does one thing well, the entire computer will do it well. If it does one thing badly, then everything will suffer.
+
+Many computer users often notice this effect without realizing why it exists. Most Macintosh computers, for instance, can be sluggish at times because the OS does not do a good job juggling the workload between processes. The kernel of the OS has not been completely overhauled since the early days when the machines ran one program at a time. This sluggishness will persist for a bit longer until Apple releases a new version known as MacOS X. This will be based on the Mach kernel, a version developed at Carnegie-Mellon University and released as open source software. Steve Jobs adopted it when he went to NeXT, a company that was eventually folded back into Apple. This kernel does a much better job of juggling different tasks because it uses preemptive multitasking instead of cooperative multitasking. The original version of the MacOS let each program decide when and if it was going to give up control of the computer to let other programs run. This low-rent version of juggling was called cooperative multitasking, but it failed when some program failed to cooperate. Most software developers obeyed the rules, but mistakes would still occur. Bad programs would lock up the machine. Preemptive multitasking takes this power away from the individual programs. It swaps control from program to program without asking permission. One pig of a program can't slow down the entire machine. When the new MacOS X kernel starts offering preemptive multitasking, the users should notice less sluggish behavior and more consistent performance.
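+The contrast between the two scheduling styles can be sketched in a few lines. This toy (nothing to do with Apple's or Mach's actual code) models cooperative multitasking with Python generators: each program must voluntarily yield control, so the scheduler is helpless against one that never does.

```python
# A toy sketch of cooperative multitasking. Each "program" is a Python
# generator; `yield` is its voluntary handoff of control back to the
# scheduler. A program that looped forever without yielding would stall
# next() indefinitely -- the "pig of a program" that locks up the machine.

def program(name, steps):
    """A well-behaved program: does a bit of work, then yields control."""
    for i in range(steps):
        yield f"{name}:{i}"

def cooperative_scheduler(tasks, max_turns=100):
    """Round-robin scheduler that must trust every task to yield."""
    log = []
    queue = list(tasks)
    for _ in range(max_turns):
        if not queue:
            break
        task = queue.pop(0)
        try:
            log.append(next(task))  # runs until the task *chooses* to stop
            queue.append(task)      # requeue it for another turn
        except StopIteration:
            pass                    # task finished; drop it from the queue
    return log

log = cooperative_scheduler([program("A", 2), program("B", 2)])
print(log)  # the two tasks interleave fairly: ['A:0', 'B:0', 'A:1', 'B:1']
```

+A preemptive kernel, by contrast, uses a hardware timer interrupt to seize control back on its own schedule, so no individual program needs to be trusted.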
+
+Torvalds plunged in and created a monolithic kernel. This made it easier to tweak all the strange interactions between the programs. Sure, a microkernel built around a clean, message-passing architecture was an elegant way to construct the guts of an OS, but it had its problems. There was no easy way to deal with special exceptions. Let's say you want a web server to run very quickly on your machine. That means you need to treat messages coming into the computer from the Internet with exceptional speed. You need to ship them with the equivalent of special delivery or FedEx. You need to create a special exception for them. Tacking these exceptions onto a clean microkernel starts to make it look bad. The design starts to get cluttered and less elegant. After a few special exceptions are added, the microkernel can start to get confused.
+
+Torvalds's monolithic kernel did not have the elegance or the simplicity of a microkernel OS like Minix or Mach, but it was easier to hack. New tweaks to speed up certain features were relatively easy to add. There was no need to come up with an entirely new architecture for the message-passing system. The downside was that the guts could grow remarkably byzantine, like the bureaucracy of a big company.
+
+In the past, this complexity hurt the success of proprietary operating systems. The complexity produced bugs because no one could understand it. Torvalds's system, however, came with all the source code, making it much easier for application programmers to find out what was causing their glitch. To carry the corporate bureaucracy metaphor a bit further, the source code acted like the omniscient secretary who is able to explain everything to a harried executive. This perfect knowledge reduced the cost of complexity.
+
+By the beginning of 1992, Linux was no longer a Finnish student's part-time hobby. Several influential programmers became interested in the code. It was free and relatively usable. It ran much of the GNU code, and that made it a neat, inexpensive way to experiment with some excellent tools. More and more people downloaded the system, and a significant fraction started reporting bugs and suggestions to Torvalds. He rolled them back in and the project snowballed.
+
+2~ A Hobby Begets a Project that Begets a Movement
+
+On the face of it, Torvalds's decision to create an OS wasn't extraordinary. Millions of college-age students decide that they can do anything if they just put in a bit more elbow grease. The college theater departments, newspapers, and humor magazines all started with this impulse, and the notion isn't limited to college students. Millions of adults run Little League teams, build model railroads, lobby the local government to create parks, and take on thousands of projects big and small in their spare time.
+
+Every great idea has a leader who can produce a system to sustain it. Every small-town lot had kids playing baseball, but a few guys organized a Little League program that standardized the rules and the competition. Every small town had people campaigning for parks, but one small group created the Sierra Club, which fights for parks throughout the world.
+
+This talent for organizing the work of others is a rare commodity, and Torvalds had a knack for it. He was gracious about sharing his system with the world and he never lorded it over anyone. His messages were filled with jokes and self-deprecating humor, most of which were carefully marked with smiley faces (:-)) to make sure that the message was clear. If he wrote something pointed, he would apologize for being a "hothead." He was always gracious in giving credit to others and noted that much of Linux was just a clone of UNIX. All of this made him easy to read and thus influential.
+
+His greatest trick, though, was his decision to avoid the mantle of power. He wrote in 1992, "Here's my standing on 'keeping control,' in 2 words (three?): I won't. The only control I've effectively been keeping on Linux is that I know it better than anybody else."
+
+He pointed out that his control was only an illusion that was caused by the fact that he did a good job maintaining the system. "I've made my changes available to ftp-sites etc. Those have become effectively official releases, and I don't expect this to change for some time: not because I feel I have some moral right to it, but because I haven't heard too many complaints."
+
+As he added new features to his OS, he shipped new copies frequently. The Internet made this easy to do. He would just pop a new version up on a server and post a notice for all to read: come download the latest version.
+
+He made it clear that people could vote to depose him at any time. "If people feel I do a bad job, they can do it themselves." They could just take all of his Linux code and start their own version using Torvalds's work as a foundation.
+
+Anyone could break off from Torvalds's project because Torvalds decided to ship the source code to his project under Richard Stallman's GNU General Public License, or GPL. In the beginning, he issued it with a more restrictive license that prohibited any "commercial" use, but eventually moved to the GNU license. This was a crucial decision because it cemented a promise with anyone who spent a few minutes playing with his toy operating system for the 386. It stated that all of the source code that Torvalds or anyone else wrote would be freely accessible and shared with everyone. This decision was a double-edged sword for the community. Everyone could take the software for free, but if they started circulating some new software built with the code, they would have to donate their changes back to the project. It was like flypaper. Anyone who started working with the project grew attached to it. They couldn't run off into their own corner. Some programmers joke that this flypaper license is like sex. If you make one mistake by hooking up with a project protected by GPL, you pay for it forever. If you ever ship a version of the project, you must include all of the source code. It can be distributed freely forever.
+
+While some people complained about the sticky nature of the GPL, enough saw it as a virtue. They liked Torvalds's source code, and they liked the fact that the GPL made them full partners in the project. Anyone could donate their time and be sure it wasn't going to disappear. The source code became a body of work held in common trust for everyone. No one could rope it off, fence it in, or take control.
+
+In time, Torvalds's pet science project and hacking hobby grew as more people got interested in playing with the guts of machines. The price was right, and idle curiosity could be powerful. Some wondered what a guy in Finland could do with a 386 machine. Others wondered if it was really as usable as the big machines from commercial companies. Others wondered if it was powerful enough to solve some problems in the lab. Still others just wanted to tinker. All of these folks gave it a try, and some even began to contribute to the project.
+
+Torvalds's burgeoning kernel dovetailed nicely with the tools that the GNU project created. All of the work by Stallman and his disciples could be easily ported to work with the operating system core that Torvalds was now calling Linux. This was the power of freely distributable source code. Anyone could make a connection, and someone invariably did. Soon, much of the GNU code began running on Linux. These tools made it easier to create more new programs, and the snowball began to roll.
+
+Many people feel that Linus Torvalds's true act of genius was in coming up with a flexible and responsive system for letting his toy OS grow and change. He released new versions often, and he encouraged everyone to test them with him. In the past, many open source developers using the GNU GPL had only shipped new versions at major landmarks in development, acting a bit like the commercial developers. After they released version 1.0, they would hole up in their basements until they had added enough new features to justify version 2.0.
+
+Torvalds avoided this perfectionism and shared frequently. If he fixed a bug on Monday, then he would roll out a new version that afternoon. It wasn't strange for two or three new versions to hit the Internet each week. This was a bit more work for Torvalds, but it also made it much easier for others to become involved. They could watch what he was doing and make their own suggestions.
+
+This freedom also attracted others to the party. They knew that Linux would always be theirs, too. They could write neat features and plug them into the Linux kernel without worrying that Torvalds would yank the rug out from under them. The GPL was a contract that lasted long into the future. It was a promise that bound them together.
+
+The Linux kernel also succeeded because it was written from the ground up for the PC platform. When the Berkeley UNIX hackers were porting BSD to the PC platform, they weren't able to make it fit perfectly. They were taking a piece of software crafted for older computers like the VAX, and shaving off corners and rewriting sections until it ran on the PC.
+
+Alan Cox pointed out to me, "The early BSD stuff was by UNIX people for UNIX people. You needed a calculator and familiarity with BSD UNIX on big machines (or a lot of reading) to install it. You also couldn't share a disk between DOS/Windows and 386BSD or the early branches off it.
+
+"Nowadays FreeBSD understands DOS partitions and can share a disk, but at the time BSD was scary to install," he continued.
+
+The BSD also took certain pieces of hardware for granted. Early versions of BSD required a 387, a numerical coprocessor that would speed up the execution of floating point numbers. Cox remembers that the price (about $100) was just too much for his budget. At that time, the free software world was a very lean organization.
+
+Torvalds's operating system plugged a crucial hole in the world of free source software and made it possible for someone to run a computer without paying anyone for a license. Richard Stallman had dreamed of this day, and Torvalds came up with the last major piece of the puzzle.
+
+2~ A Different Kind of Trial
+
+During the early months of Torvalds's work, the BSD group was stuck in a legal swamp. While the BSD team was involved with secret settlement talks and secret depositions, Linus Torvalds was happily writing code and sharing it with the world on the Net. His life wasn't all peaches and cream, but all of his hassles were open. Professor Andy Tanenbaum, a fairly well-respected and famous computer scientist, got in a long, extended debate with Torvalds over the structure of Linux. He looked down at Linux and claimed that Linux would have been worth two F's in his class because of its design. This led to a big flame war that was every bit as nasty as the fight between Berkeley and AT&T's USL. In fact, to the average observer it was even nastier. Torvalds returned Tanenbaum's fire with strong words like "fiasco," "brain-damages," and "suck." He brushed off the bad grades by pointing out that Albert Einstein supposedly got bad grades in math and physics. The high-priced lawyers working for AT&T and Berkeley probably used very expensive and polite words to try to hide the shivs they were trying to stick in each other's back. Torvalds and Tanenbaum pulled out each other's virtual hair like a squawkfest on the Jerry Springer show.
+
+But Torvalds's flame war with Tanenbaum occurred in the open in an Internet newsgroup. Other folks could read it, think about it, add their two cents' worth, and even take sides. It was a wide-open debate that uncovered many flaws in the original versions of Linux and Tanenbaum's Minix. They forced Torvalds to think deeply about what he wanted to do with Linux and consider its flaws. He had to listen to the arguments of a critic and a number of his peers on the Net and then come up with arguments as to why his Linux kernel didn't suck too badly.
+
+This open fight had a very different effect from the one going on in the legal system. Developers and UNIX hackers avoided the various free versions of BSD because of the legal cloud. If a judge decided that AT&T and USL were right, everyone would have to abandon their work on the platform. While the CSRG worked hard to get free, judges don't always make the choices we want.
+
+The fight between Torvalds and Tanenbaum, however, drew people into the project. Other programmers like David Miller, Ted Ts'o, and Peter da Silva chimed in with their opinions. At the time, they were just interested bystanders. In time, they became part of the Linux brain trust. Soon they were contributing source code that ran on Linux. The argument's excitement forced them to look at Torvalds's toy OS and try to decide whether his defense made any sense. Today, David Miller is one of the biggest contributors to the Linux kernel. Many of the original debaters became major contributors to the foundations of Linux.
+
+This fight drew folks in and kept them involved. It showed that Torvalds was serious about the project and willing to think about its limitations. More important, it exposed these limitations and inspired other folks on the Net to step forward and try to fix them. Everyone could read the arguments and jump in. Even now, you can dig up the archives of this battle and read in excruciating detail what people were thinking and doing. The AT&T/USL-versus-Berkeley fight is still sealed.
+
+To this day, all of the devotees of the various BSDs grit their teeth when they hear about Linux. They think that FreeBSD, NetBSD, and OpenBSD are better, and they have good reasons for these beliefs. They know they were out the door first with a complete running system. But Linux is on the cover of the magazines. All of the great technically unwashed are now starting to use "Linux" as a synonym for free software. If AT&T never sued, the BSD teams would be the ones reaping the glory. They would be the ones to whom Microsoft turned when it needed a plausible competitor. They would be more famous.
+
+But that's crying over spilled milk. The Berkeley CSRG lived a life of relative luxury in their world made fat with big corporate and government donations. They took the cash, and it was only a matter of time before someone called them on it. Yes, they won in the end, but it came too late. Torvalds was already out of the gate and attracting more disciples.
+
+McKusick says, "If you plot the installation base of Linux and BSD over the last five years, you'll see that they're both in exponential growth. But BSD's about eighteen to twenty months behind. That's about how long it took between Net Release 2 and the unencumbered 4.4BSD-Lite. That's about how long it took for the court system to do its job."
+
+1~ Growth
+
+Through the 1990s, the little toy operating system grew slowly and quietly as more and more programmers were drawn into the vortex. At the beginning, the OS wasn't rich with features. You could run several different programs at once, but you couldn't do much with the programs. The system's interface was just text. Still, this was often good enough for a few folks in labs around the world. Some just enjoyed playing with computers. Getting Linux running on their PC was a challenge, not unlike bolting an aftermarket supercharger onto a Honda Civic. But others took the project more seriously because they had serious jobs that couldn't be solved with a proprietary operating system that came from Microsoft or others.
+
+In time, more people started using the system and started contributing their additions to the pot. Someone figured out how to make MIT's free X Window System run on Linux so everyone could have a graphical interface. Someone else discovered how to roll in technology for interfacing with the Internet. That made a big difference because everyone could hack, tweak, and fiddle with the code and then just upload the new versions to the Net.
+
+It goes without saying that all the cool software coming out of Stallman's Free Software Foundation found its way to Linux. Some were simple toys like GNU Chess, but others were serious tools that were essential to the growth of the project. By 1991, the FSF was offering what might be argued were the best text editor and compiler in the world. Others might have been close, but Stallman's were free. These were crucial tools that made it possible for Linux to grow quickly from a tiny experimental kernel into a full-featured OS for doing everything a programmer might want to do.
+
+James Lewis-Moss, one of the many programmers who devote some time to Linux, says that GCC made it possible for programmers to create, revise, and extend the kernel. "GCC is integral to the success of Linux," he says, and points out that this may be one of the most important reasons why "it's polite to refer to it as GNU/Linux."
+
+Lewis-Moss points out one of the smoldering controversies in the world of free software: all of the tools and games that came from the GNU project started becoming part of what people simply thought of as plain "Linux."
+The name for the small kernel of the operating system soon grew to apply to almost all the free software that ran with it. This angered Stallman, who first argued that a better name would be "Lignux." When that failed to take hold, he moved to "GNU/Linux." Some ignored his pleas and simply used "Linux," which is still a bit unfair. Some feel that "GNU/Linux" is too much of a mouthful and, for better or worse, just plain Linux is an appropriate shortcut. Some, like Lewis-Moss, hold firm to GNU/Linux.
+
+Soon some people were bundling together CD-ROMs with all this software in one batch. The group would try to work out as many glitches as possible so that the purchaser's life would be easier. All boasted strange names like Yggdrasil, Slackware, SuSE, Debian, or Red Hat. Many were just garage projects that never made much money, but that was okay. Making money wasn't really the point. People just wanted to play with the source. Plus, few thought that much money could be made. The GPL, for instance, made it difficult to differentiate the product because it required everyone to share their source code with the world. If Slackware came up with a neat fix that made their version of Linux better, then Debian and SuSE could grab it. The GPL prevented anyone from constraining the growth of Linux.
+
+But only greedy businessmen see sharing and competition as negatives. In practice, the free flow of information enhanced the market for Linux by ensuring that it was stable and freely available. If one key CD-ROM developer gets a new girlfriend and stops spending enough time programming, another distribution will pick up the slack. If a hurricane flattened Raleigh, North Carolina, the home of Red Hat, then another supplier would still be around. A proprietary OS like Windows is like a set of manacles. An earthquake in Redmond, Washington, could cause a serious disruption for everyone.
+
+The competition and the GPL meant that the users would never feel bound to one OS. If problems arose, anyone could always just start a splinter group and take Linux in that direction. And they did. All the major systems began as splinter groups, and some picked up enough steam and energy to dominate. In time, the best splinter groups spun off their own splinter groups and the process grew terribly complicated.
+
+2~ The Establishment Begins to Notice
+
+By the mid-1990s, the operating system had already developed quite a following. In 1994, Jon Hall was a programmer for Digital, a company that was later bought by Compaq. Hall also wears a full beard and uses the name "maddog" as a nickname. At that time, Digital made workstations that ran a version of UNIX. In the early 1990s, Digital made a big leap forward by creating a 64-bit processor version of its workstation CPU chip, the Alpha, and the company wanted to make sure that the chip found widespread acceptance.
+
+Hall remembers well the moment he discovered Linux. He told Linux Today,
+
+_1 I didn't even know I was involved with Linux at first. I got a copy of Dr. Dobb's Journal, and in there was an advertisement for "get a UNIX operating system, all the source code, and run it on your PC." And I think it was $99. And I go, "Oh, wow, that's pretty cool. For $99, I can do that." So I sent away for it, got the CD. The only trouble was that I didn't have a PC to run it on. So I put it on my Ultrix system, took a look at the man pages, directory structure and stuff, and said, "Hey, that looks pretty cool." Then I put it away in the filing cabinet. That was probably around January of 1994.
+
+In May 1994, Hall met Torvalds at a DECUS (Digital Equipment Corporation User Society) meeting and became a big fan. Hall is a programmer's programmer who has written code for many different machines over the years, like the IBM 1130 and the DEC PDP-8. He started out as an electrical engineer in college, but took up writing software "after seeing a friend of mine fried by 13,600 volts and 400 amps, which was not a pretty sight." Hall started playing with UNIX when he worked at Bell Labs and fell in love with the OS.
+
+At the meeting, Torvalds helped Hall and his boss set up a PC with Linux. This was the first time that Hall actually saw Linux run, and he was pleasantly surprised. He said, "By that time I had been using UNIX for probably about fifteen years. I had used System V, I had used Berkeley, and all sorts of stuff, and this really felt like UNIX. You know . . . I mean, it's kind of like playing the piano. You can play the piano, even if it's a crappy piano. But when it's a really good piano, your fingers just fly over the keys. That's the way this felt. It felt good, and I was really impressed."
+
+This experience turned Hall into a true convert and he went back to Digital convinced that the Linux project was more than just some kids playing with a toy OS. These so-called amateurs with no centralized system or corporate backing had produced a very, very impressive system that was almost as good as the big commercial systems. Hall was an instant devotee. Many involved in the project recall their day of conversion with the same strength. A bolt of lightning peeled the haze away from their eyes, and they saw.
+
+Hall set out trying to get Torvalds to rewrite Linux so it would work well on the Alpha. This was not a simple task, but it was one that helped the operating system grow a bit more. The original version included some software that assumed the computer was designed like the Intel 386. This was fine when Linux only ran on Intel machines, but removing these assumptions made it possible for the software to run well on all types of machines.
+
+Hall went sailing with Torvalds to talk about the guts of the Linux OS. Hall told me, "I took him out on the Mississippi River, went up and down the Mississippi in the river boat, drinking Hurricanes, and I said to him, 'Linus, did you ever think about porting Linux to a 64-bit processor, like the Alpha?' He said, 'Well, I thought about doing that, but the Helsinki office has been having problems getting me a system, so I guess I'll have to do the PowerPC instead.'
+
+"I knew that was the wrong answer, so I came back to Digital (at the time), and got a friend of mine, named Bill Jackson, to send out a system to Linus, and he received it about a couple weeks after that. Then I found some people inside Digital who were also thinking about porting Linux to an Alpha. I got the two groups together, and after that, we started on the Alpha Linux project."
+
+This was one of the first times that a major corporation started taking note of what was happening in the garages and basements of hardcore computer programmers. It was also one of the first times that a corporation looked at an open source operating system and did not react with fear or shock. Sun was always a big contributor of open source software, but they kept their OS proprietary. Hall worked tirelessly at Digital to ensure that the corporation understood the implications of the GPL and saw that it was a good way to get more people interested in the Alpha chip. He says he taught upper management at Digital how to "say the L-word."
+
+Hall also helped start a group called Linux International, which works to make the corporate world safe for Linux. "We help vendors understand the Linux marketplace," Hall told me. "There's a lot of confusion about what the GPL means. Less now, but still there's a lot of confusion. We helped them find the markets."
+
+Today, Linux International helps control the trademark on the name Linux and ensures that it is used in an open way. "When someone wanted to call themselves something like 'Linux University,' we said that's bad because there's going to be more than one. 'Linux University of North Carolina' is okay. It opens up the space."
+
+In the beginning, Torvalds depended heavily on the kindness of strangers like Hall. He didn't have much money, and the Linux project wasn't generating a huge salary for him. Of course, poverty also made it easier for people like Hall to justify giving him a machine. Torvalds wasn't rich monetarily, but he became rich in machines.
+
+By 1994, when Hall met Torvalds, Linux was already far from just a one-man science project. The floppy disks and CD-ROMs holding a version of the OS were already on the market, and this distribution mechanism was one of the crucial unifying forces. Someone could just plunk down a few dollars and get a version that was more or less ready to run. Many simply downloaded their versions for free from the Internet.
+
+2~ Making it Easy to Use
+
+In 1994, getting Linux to run was never really as simple as putting the CD-ROM in the drive and pressing a button. Many of the programs didn't work with certain video cards. Some modems didn't talk to Linux. Not all of the printers communicated correctly. Yet most of the software worked together on many standard machines. It often took a bit of tweaking, but most people could get the OS up and running on their computers.
+
+This was a major advance for the Linux OS because most people could quickly install a new version without spending too much time downloading the new code or debugging it. Even programmers who understood exactly what was happening felt that installing a new version was a long, often painful slog through technical details. These CD-ROMs not only helped programmers, they also encouraged casual users to experiment with the system.
+
+The CD-ROM marketplace also created a new kind of volunteer for the project. Someone had to download the latest code from the author. Someone had to watch the kernel mailing list to see when Torvalds, Cox, and the rest had minted a new version that was stable enough to release. Someone needed to check all the other packages like GNU Emacs or GNU CC to make sure they still worked correctly. This didn't require the obsessive programming talent that created the kernel, but it did take some dedication and devotion.
+
+Today, there are many different kinds of volunteers putting together these packages. The Debian group, for instance, is one of the best known and most devoted to true open source principles. It was started by Ian Murdock, who named it after himself and his girlfriend, Debra. The Debian group, which now includes hundreds of official members, checks to make sure that the software is both technically sound and politically correct. That is, they check the licenses to make sure that the software can be freely distributed by all users. Their guidelines later morphed into the official definition of open source software.
+
+Other CD-ROM groups became more commercial. Debian sold its disks to pay for Internet connection fees and other expenses, but they were largely a garage operation. So were groups with names like Slackware, FreeBSD, and OpenBSD. Other groups like Red Hat actually set out to create a burgeoning business, and to a large extent, they succeeded. They took the money and used it to pay programmers who wrote more software to make Linux easier to use.
+
+In the beginning, there wasn't much difference between the commercially minded groups like Red Hat and the more idealistic collectives like Debian. The marketplace was small, fragmented, and tribal. But by 1998, Red Hat had attracted major funding from companies like Intel, and it plowed more and more money into making the package as presentable and easy to use as possible. This investment paid off because more users turned instinctively to Red Hat, whose CD-ROM sales then exploded.
+
+Most of this development lived in its own Shangri-La. Red Hat, for instance, charged money for its disks, but released all of its software under the GPL. Others could copy their disks for free, and many did. Red Hat may be a company, but the management realized that they depended on thousands if not millions of unpaid volunteers to create their product.
+
+Slowly but surely, more and more people became aware of Linux, the GNU project, and its cousins like FreeBSD. No one was making much money off the stuff, but the word of mouth was spreading very quickly. The disks were priced reasonably, and people were curious. The GPL encouraged people to share. People began borrowing disks from their friends. Some companies even manufactured cheap rip-off copies of the CD-ROMs, an act the GPL explicitly permitted.
+
+At the top of the pyramid was Linus Torvalds. Many Linux developers treated him like the king of all he surveyed, but he was like the monarchs who were denuded by a popular constitutional democracy. He had always focused on building a fast, stable kernel, and that was what he continued to do. The rest of the excitement, the packaging, the features, and the toys, were the dominion of the volunteers and contributors.
+
+Torvalds never said much about the world outside his kernel, and it developed without him.
+
+Torvalds moved to Silicon Valley and took a job with the very secret company Transmeta in order to help design the next generation of computer chips. He worked out a special deal with the company that allowed him to work on Linux in his spare time. He felt that working for one of the companies like Red Hat would give that one version of Linux a special imprimatur, and he wanted to avoid that. Plus, Transmeta was doing cool things.
+
+In January 1999, the world caught up with the pioneers. Schmalensee mentioned Linux on the witness stand during the trial and served official notice to the world that Microsoft was worried about the growth of Linux. The system had been on the company's radar screen for some time. In October 1998, an internal memo from Microsoft describing the threat made its way to the press. Some thought it was just Microsoft's way of currying favor during the antitrust investigation. Others thought it was a serious treatment of a topic that was difficult for the company to understand.
+
+The media followed Schmalensee's lead. Everyone wanted to know about Linux, GNU, open source software, and the magical effects of widespread, unconditional sharing. The questions came in tidal waves, and Torvalds tried to answer them again and again. Was he sorry he gave it all away? No. If he had charged anything, no one would have bought his toy and no one would have contributed anything. Was he a communist? No, he was rather apolitical. Don't programmers have to eat? Yes, but they will make their money selling a service instead of getting rich off bad proprietary code. Was Linux going to overtake Microsoft? Yes, if he had his way. "World Domination Soon" became the motto.
+
+But there were also difficult questions. How would the Linux world resist the embrace of big companies like IBM, Apple, Hewlett-Packard, and maybe even Microsoft? These were massive companies with paid programmers and schedules to meet. All the open source software was just as free to them as to anyone else. Would these companies use their strength to monopolize Linux?
+
+Some were worried that the money would tear apart the open source community. It's easy to get everyone to donate their time to a project when no one is getting paid. Money changes the equation. Would a gulf develop between the rich companies like Red Hat and the poor programmers who just gave away their hard work?
+
+Many wanted to know when Linux would become easier to use for nonprogrammers. Programmers built the OS to be easy to take apart and put back together again. That's a great feature if you like hacking the inside of a kernel, but that doesn't excite the average computer user. How was the open source community going to get the programmers to donate their time to fix the mundane, everyday glitches that confused and infuriated the nonprogrammers? Was the Linux community going to be able to produce something that a nonprogrammer could even understand?
+
+Others wondered if the Linux world could ever agree enough to create a software package with some coherence. Today, Microsoft users and programmers pull their hair out trying to keep Windows 95, Windows 98, and Windows NT straight. Little idiosyncrasies cause games to crash and programs to fail. Microsoft has hundreds of quality assurance engineers and thousands of support personnel. Still, the little details drive everyone crazy.
+
+New versions of Linux appear as often as daily. People often create their own versions to solve particular problems. Many of these changes won't affect anyone, but they can add up. Is there enough consistency to make the tools easy enough to use?
+
+Many wondered if Linux was right for world domination. Programmers might love playing with source code, but the rest of the world just wants something that delivers the e-mail on time. More important, the latter are willing to pay for this efficiency.
+
+Such questions have been bothering the open source community for years and still have no easy answers today. Programmers need food, and food requires money. Making easy-to-use software requires discipline, and discipline doesn't always agree with total freedom.
+
+When the first wave of hype about free software swept across the zeitgeist, no one wanted to concentrate on these difficult questions. The high quality of free operating systems and their use at high-profile sites like Yahoo! was good news for the world. The success of unconditional cooperation was intoxicating. If free software could do so much with so little, it could overcome the difficult questions. Besides, it didn't have to be perfect. It just needed to be better than Microsoft.
+
+1~ Freedom
+
+The notion embodied by the word "free" is one of the great marketing devices of all time. Cereal manufacturers know that kids will slog through bowls of sugar to get a free prize. Stores know that people will gladly give them their names and addresses if they stand a chance of winning something for free. Car ads love to emphasize the freedom a new car will give to someone.
+
+Of course, Microsoft knows this fact as well. One of their big advertising campaigns stresses the freedom to create new documents, write long novels, fiddle with photographs, and just do whatever you want with a computer. "Where do you want to go today?" the Microsoft ads ask.
+
+Microsoft also recognizes the pure power of giving away something for free. When Bill Gates saw Netscape's browser emerging as a great competitive threat, he first bought a competing version and then wrote his own version of a web browser. Microsoft gave their versions away for free. This bold move shut down the revenue stream of Netscape, which had to cut its price to zero in order to compete. Of course, Netscape didn't have revenues from an operating system to pay the rent. Netscape cried foul and eventually the Department of Justice brought a lawsuit to decide whether the free software from Microsoft was just a plot to keep more people paying big bucks for their not-so-free Windows OS. The fact that Microsoft is now threatened by a group of people who are giving away a free OS has plenty of irony.
+
+The word "free" has a much more complicated and nuanced meaning within the free software movement. In fact, many people who give away their software don't even like the word "free" and prefer to use "open" to describe the process of sharing. In the case of free software, it's not just an ad campaign to make people feel good about buying a product. It's also not a slick marketing sleight of hand to focus people's attention on a free gift while the magician charges full price for a product. The word "free" is more about a way of life. The folks who write the code throw around the word in much the same way the Founding Fathers of the United States used it. To many of them, the free software revolution was also conceived in liberty and dedicated to certain principles like the fact that all men and women have certain inalienable rights to change, modify, and do whatever they please with their software in the pursuit of happiness.
+
+Tossing about the word "free" is easy to do. Defining what it means takes much longer. The Declaration of Independence was written in 1776, but the former colonies fought and struggled to create a free government until the current United States Constitution was drafted in 1787. The Bill of Rights came soon afterward, and the Supreme Court still struggles to define the boundaries of freedom described by the document. Much of the political history of the United States might be said to be an extended argument about the meaning of the words "free country."
+
+The free software movement is no different. It's easy for one person to simply give their software away for free. It's much harder to attract and organize an army to take on Microsoft and dominate the world. That requires a proper definition of the word "free" so that everyone understands the rights and limitations behind the word. Everyone needs to be on the same page if the battle is to be won. Everyone needs to understand what is meant by "free software."
+
+The history of the free software world is also filled with long arguments defining the freedom that comes bundled with the source code. Many wonder if it is more about giving the user something for nothing, or if it is about empowering him. Does this freedom come with any responsibilities? What should they be? How is the freedom enforced? Is freeloading a proper part of the freedom?
+
+In the early years of computers, there were no real arguments. Software was free because people just shared it with each other. Magazines like Creative Computing and BYTE published the source code to programs because that was an easy way to share information.
+
+People would even type in the data themselves. Computers cost money, and getting them to run was part of the challenge. Sharing software was just part of being neighborly. If someone needed to borrow your plow, you lent it to them when you weren't using it.
+
+This changed as corporations recognized that they could copyright software and start charging money for it. Most people loved this arrangement because the competition brought new packages and tools to market and people were more than willing to pay for them. How else are the programmers and the manual writers going to eat?
+
+A few people thought this was a disaster. Richard Stallman watched the world change from his office in the artificial intelligence labs of MIT. Stallman is the ultimate hacker, if you use the word in the classical sense. In the beginning, the word only described someone who knows how to program well and loves to poke around in the insides of computers. It only took on its more malicious tone later as the media managed to group all of those with the ability to wrangle computers into the same dangerous camp. Hackers often use the term "cracker" to refer to these people.
+
+Stallman is a model of the hacker. He is strident, super intelligent, highly logical, and completely honest. Most corporations keep their hackers shut off in a back room because these traits seem to scare away customers and investors who just want sweet little lies in their ears. Stallman was never that kind of guy. He looked at the burgeoning corporate control of software and didn't like it one bit. His freedom was slowly being whittled away, and he wasn't the type to simply sit by and not say anything.
+
+When Stallman left the AI lab in 1984, he didn't want to be controlled by its policies. Universities started adopting many of the same practices as the corporations in the 1980s, and Stallman couldn't be a special exception. If MIT was going to be paying him a salary, MIT would own his code and any patents that came from it. Even MIT, which is a much cooler place than most, couldn't accommodate him on staff. He didn't move far, however, because after he set up the Free Software Foundation, he kept an office at MIT, first unofficially and then officially. Once he wasn't "on the staff," the rules became different.
+
+Stallman turned to consulting for money, but it was consulting with a twist. He would only work for companies that wouldn't put any restrictions on the software he created. This wasn't an easy sell. He was insisting that any work he did for Corporation X could also be shared with Corporations Y and Z, even if they were direct competitors.
+
+This wasn't how things were done in the 1980s. That was the decade when companies figured out how to lock up the source code to a program by only distributing a machine-readable version. They hoped this would control their product and let them restrain people who might try to steal their ideas and their intellectual property. Stallman thought it was shutting down his ability to poke around inside the computer and fix it. This secrecy blocked him from sharing his thoughts and ideas with other programmers.
+
+Most programmers looked at the scheme of charging for locked-up binary versions of a program as a necessary evil. Sure, they couldn't play around with the guts of Microsoft Windows, but it also meant that no one could play around with the guts of the programs they wrote. The scheme locked doors and compartmentalized the world, but it also gave the creator of programs more power. Most programmers thought having power over their own creation was pretty neat, even if others had more power. Being disarmed is okay if everyone else is disarmed and locked in a cage.
+
+Stallman thought this was a disaster for the world and set out to convince the world that he was right. In 1984, he wrote the GNU Manifesto, which started his GNU project and laid out the conditions for his revolution. This document stood out a bit in the middle of the era of Ronald Reagan because it laid out Stallman's plan for creating a virtual commune where people would be free to use the software. It is one of the first cases when someone tried to set down a definition of the word "free" for software users. Sure, software and ideas were quite free long ago, but no one noticed until the freedom was gone.
+
+He wrote,
+
+_1 I consider that the golden rule requires that if I like a program I must share it with other people who like it. Software sellers want to divide the users and conquer them, making each user agree not to share with others. I refuse to break solidarity with other users in this way. . . . So that I can continue to use computers without dishonor, I have decided to put together a sufficient body of free software so that I will be able to get along without any software that is not free.
+
+The document is a wonderful glimpse at the nascent free software world because it is as much a recruiting document as a tirade directed at corporate business practices. When the American colonies split off from England, Thomas Paine spelled out the problems with the English in the first paragraph of his pamphlet "Common Sense." In his manifesto, Stallman didn't get started using words like "dishonor" until the sixth paragraph. The first several paragraphs spelled out the cool tools he had developed already: "an Emacs text editor with Lisp for writing editor commands, a source level debugger, a yacc-compatible parser generator, a linker, and around 35 utilities." Then he pointed to the work he wanted to complete soon: "A new portable optimizing C compiler has compiled itself and may be released this year. An initial kernel exists but many more features are needed to emulate Unix." He was saying, in effect, that he already had a few juicy peaches growing on the trees of his commune.
+
+If this wasn't enough, he intended to do things a bit better than UNIX. His operating system was going to offer the latest, greatest ideas of computer science, circa 1984. "In particular, we plan to have longer file names, file version numbers, a crashproof file system, file name completion perhaps, terminal-independent display support, and perhaps eventually a Lisp-based window system through which several Lisp programs and ordinary Unix programs can share a screen." The only thing that was missing from every computer nerd's wish list was a secret submarine docking site in the basement grotto.
+
+The fifth paragraph even explained to everyone that the name of the project would be the acronym GNU, which stood for "GNU's Not UNIX," and it should be pronounced with a hard G to make sure that no one would get it confused with the word "new." Stallman has always cared about words, the way they're used and the way they're pronounced.
+
+In 1984, UNIX became the focus of Stallman's animus because its original developer, AT&T, was pushing to try to make some money back after paying so many people at Bell Labs to create it. Most people were somewhat conflicted by the fight. They understood that AT&T had paid good money and supported many researchers with the company's beneficence. The company gave money, time, and spare computers. Sure, it was a pain to pay AT&T for something and get only a long license drafted by teams of lawyers. Yes, it would be nice if we could poke around under the hood of UNIX without signing a non-disclosure agreement. It would be nice if we could be free to do whatever we want, but certainly someone who pays for something deserves the right to decide how it is used. We've all got to eat.
+
+Stallman wasn't confused at all. Licenses like AT&T's would constrict his freedom to share with others. To make matters worse, the software companies wanted him to pay for the privilege of getting software without the source code.
+
+Stallman explains that his feelings weren't focused on AT&T per se. Software companies were springing up all over the place, and most of them were locking up their source code with proprietary licenses. It was the 1980s thing to do, like listening to music by Duran Duran and Boy George.
+
+"When I decided to write a free operating system, I did not have AT&T in mind at all, because I had never had any dealings with them. I had never used a UNIX system. They were just one of many companies doing the same discreditable thing," he told me recently. "I chose a Unix-like design just because I thought it was a good design for the job, not because I had any particular feelings about AT&T."
+
+When he wrote the GNU Manifesto, he made it clear to the world that his project was more about choosing the right moral path than saving money. He wrote then that the GNU project means "much more than just saving everyone the price of a UNIX license. It means that much wasteful duplication of system programming effort will be avoided. This effort can go instead into advancing the state of the art."
+
+This was a crucial point that kept Stallman from being dismissed as a quasi-communist crank who just wanted everyone to live happily on some nerd commune. The source code is a valuable tool for everyone because it is readable by humans, or at least humans who happen to be good at programming. Companies learned to keep source code proprietary, and it became almost a reflex. If people wanted to use it, they should pay to help defray the cost of creating it. This made sense to programmers who wanted to make a living or even get rich writing their own code. But it was awfully frustrating at times. Many programmers have pulled their hair out in grief when their work was stopped by some bug or undocumented feature buried deep in the proprietary, super-secret software made by Microsoft, IBM, Apple, or whomever. If they had the source code, they would be able to poke around and figure out what was really happening. Instead, they had to treat the software like a black box and keep probing it with test programs that might reveal the secrets hidden inside. Every programmer has had an experience like this, and every programmer knew that they could solve the problem much faster if they could only read the source code. They didn't want to steal anything, they just wanted to know what was going on so they could make their own code work.
+
+Stallman's GNU project would be different, and he explained, "Complete system sources will be available to everyone. As a result, a user who needs changes in the system will always be free to make them himself, or hire any available programmer or company to make them for him. Users will no longer be at the mercy of one programmer or company which owns the sources and is in sole position to make changes."
+
+He was quick to mention that people would be "free to hire any available programmer" to ensure that people understood he wasn't against taking money for writing software. That was okay and something he did frequently himself. He was against people controlling the source with arbitrarily complex legal barriers that made it impossible for him or anyone else to get something done.
+
+When people first heard of his ideas, they became fixated on the word "free." These were the Reagan years. Saying that people should just give away their hard work sounded mighty communist to everyone, and this was long before the Berlin Wall fell. Stallman reexamined the word "free" and all of its different meanings. He carefully considered the connotations, examined the alternatives, and decided that "free" was still the best word. He began to try to explain the shades of meaning he was after. His revolution was about "free speech," not "free beer." This wasn't going to be a revolution in the sense that frequent flyer miles revolutionized air travel nor in the way that aluminum cans revolutionized beer drinking. No, this was going to be a revolution as Rousseau, Locke, and Paine used the word.
+
+He later codified this into four main principles:
+
+_1 The freedom to run the program, for any purpose (freedom 0).~{ He numbered them starting at zero because that was what computer scientists did. Someone figured out that it was simpler to start numbering databases at zero because you didn't have to subtract 1 as often. }~
+
+_1 The freedom to study how the program works, and adapt it to your needs (freedom 1).
+
+_1 The freedom to redistribute copies so you can help your neighbor (freedom 2).
+
+_1 The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3).
+
+2~ Free Beer
+
+While Stallman pushed people away from the notion of "free beer," there's little question that this element turned out to be a very important part of the strategy and a foundation of its success. Stallman insisted that anyone could do what they wanted with the software, which meant the source code had to be freely distributed. That is, no one could put any restrictions on how you used the software. While this didn't make it free beer, it did mean that you could turn around and give a copy to your friends or your clients. It was pretty close.
+
+The "free beer" nature of Stallman's software also attracted users. If some programmers wanted to check out a new tool, they could download it and try it out without paying for it. They didn't need to ask their boss for a budget, and they didn't need to figure out a way to deal with an invoice. Just one click and the software was there. Commercial software companies continue to imitate this feature by distributing trial versions that come with either a few crippled features or a time lock that shuts them down after a few days.
+
+Of course, the "free beer" nature of the GNU project soon led to money problems. The GNU project took up his time and generated no real revenues at first. Stallman had always lived frugally. He says that he never made more than $20,000 a year at MIT, and still managed to save on that salary. But he was finding it harder and harder to get his assigned jobs done at MIT and write the cool GNU code. While Stallman always supported a programmer's right to make money for writing code, the GNU project wasn't generating any money.
+
+Most folks saw this conflict coming from the beginning. Sure, Stallman would be able to rant and rave about corporate software development for a bit, but eventually he and his disciples would need to eat.
+
+When the MIT support ended, Stallman soon stumbled upon a surprising fact: he could charge for the software he was giving away and make some money. People loved his software, but it was often hard to keep track of it. Getting the package delivered on computer tape or a CD-ROM gave people a hard copy that they could store for future reference or backup. Online manuals were also nice, but the printed book is still a very popular and easy-to-use way of storing information. Stallman's Free Software Foundation began selling printed manuals, tapes, and then CD-ROMs filled with software to make money. Surprisingly, people started paying money for these versions despite the fact that they could download the same versions for free.
+
+Some folks enjoyed pointing out the hypocrisy in Stallman's move. Stallman had run his mouth for so long that many programming "sellouts" who worked for corporations savored the irony. At last that weenie had gotten the picture. He was forced to make money to support himself, and he was selling out, too. These cynics didn't get what Stallman was trying to do.
+
+Most of us would have given up at this time. The free software thing seemed like a good idea, but now that the money was running out it was time to get a real job. In writing this book and interviewing some of the famous and not-so-famous free software developers, I found that some were involved in for-profit, not-so-free software development now. Stallman, though, wasn't going to give up his ideals, and his mind started shifting to accommodate this new measure of reality. He decided that it wouldn't be wrong to sell copies of software or even software services as long as you didn't withhold the source code and stomp on anyone's freedom to use the source code as they wished.
+
+Stallman has always been great at splitting hairs and creating Jesuitical distinctions, and this insight was one of his best. At first glance, it looked slightly nutty. If people were free to do anything they wanted with software, they could just give a copy to their friend and their friend would never send money back to Stallman's Free Software Foundation. In fact, someone could buy a copy from Stallman and then start reselling copies to others to undercut Stallman. The Free Software Foundation and the GNU GPL gave them the freedom to do so. It was as if a movie theater sold tickets to a movie, but also posted a big sign near the exit door that said "Hey, it's absolutely okay for you to prop this open so your friends can sneak in without paying."
+
+While this total freedom befuddled most people, it didn't fail. Many paid for tapes or CD-ROM versions because they wanted the convenience. Stallman's versions came with the latest bug fixes and new features. They were the quasi-official versions. Others felt that paying helped support the work so they didn't feel bad about doing it. They liked the FSF and wanted it to produce more code. Others just liked printed books better than electronic documentation. Buying them from Stallman was cheaper than printing them out. Still others paid for the CD-ROMs because they just wanted to support the Free Software Foundation.
+
+Stallman also found other support. The MacArthur Foundation gave him one of their genius grants that paid him a nice salary for five years to do whatever he wanted. Companies like Intel hired him as a consultant and asked him to make sure that some of his software ran on Intel chips. People were quite willing to pay for convenience because even free software didn't do everything that it should.
+
+Stallman also recognized that this freedom introduced a measure of competition. If he could charge for copies, then so could others. The source code would be a vast commonweal, but the means of delivering it would be filled with people struggling to do the best job of distributing the software. It was a pretty hard-core Reaganaut notion for a reputed communist. At the beginning, few bothered to compete with him, but in time all of the GNU code began to be included with computer operating systems. By the time Linus Torvalds wrote his OS, the GNU code was ready to be included.
+
+2~ Copyleft
+
+If Stallman's first great insight was that the world did not need to put up with proprietary source code, then his second was that he could strictly control the use of GNU software with an innovative legal document entitled the GNU General Public License, or GPL. To emphasize how it inverted traditional copyright, he called the agreement a "copyleft" and set about creating a legal document defining what it meant for software to be "free." Well, defining what he thought it should mean.
+
+The GPL was a carefully crafted legal document that didn't put the software into the "public domain," a designation that would have allowed people to truly do anything they wanted with the software. The license, in fact, copyrighted the software and then extended users very liberal rights for making innumerable copies as long as the users didn't hurt other people's rights to use the software.
+
+The definition of stepping on other people's rights is one that keeps political science departments at universities in business. There are many constituencies that all frame their arguments in terms of protecting someone's rights. Stallman saw protecting the rights of other users in very strong terms and strengthened his grip a bit by inserting a controversial clause. He insisted that a person who distributes an improved version of the program must also share the source code. That meant that some greedy company couldn't download his GNU Emacs editor, slap on a few new features, and then sell the whole package without including all of the source code they created. If people were going to benefit from the GNU sharing, they were going to have to share back. It was freedom with a price.
+
+This strong compact was ready-built for some ironic moments. When Apple began trying to expand the scope of intellectual property laws by suing companies like Microsoft for stealing their "look and feel," Stallman became incensed and decided that he wouldn't develop software for Apple machines as a form of protest and spite. If Apple was going to pollute the legal landscape with terrible impediments to sharing ideas, then Stallman wasn't going to help them sell machines by writing software for the machines. But the GNU copyleft license specifically allowed anyone to freely distribute the source code and use it as they wanted. That meant that others could use the GNU code and convert it to run on the Apple if they wanted to do so. Many did port much of the GNU software to the Mac and distributed the source code with it in order to comply with the license. Stallman couldn't do anything about it. Sure, he was the great leader of the FSF and the author of some of its code, but he had given away his power with the license. The only thing he could do was refuse to help the folks moving the software to the Mac. When it came to principles, he placed freedom to use the source code at the top of the hierarchy.
+
+2~ The GNU Virus
+
+Some programmers soon started referring to the sticky nature of the license as the "GNU virus" because it infected software projects with its freedom bug. If a developer wanted to save time and grab some of the neat GNU software, he was stuck making the rest of his work just as free. These golden handcuffs often scared away programmers who wanted to make money by charging for their work.
+
+Stallman hates that characterization. "To call anything 'like a virus' is a very vicious thing. People who say things like that are trying to find ways to make the GPL look bad," he says.
+
+Stallman did try to work around this problem by creating what he at first called the "Library General Public License" and now refers to as the "Lesser General Public License," a document that allowed software developers to share small chunks of code with each other under less restrictive circumstances. A programmer can use the LGPL to bind chunks of code known as libraries. Others can share the libraries and use them with their source code as long as they don't fully integrate them. Any changes they make to the library itself must be made public, but there is no requirement to release the source code for the main program that uses the library.
+
+This license is essentially a concession to some rough edges at the corners where the world of programming joins the world of law. While Stallman was dead set on creating a perfect collection of free programs that would meet everyone's needs, he was far from finished. If people were going to use his software, they were going to have to use it on machines made by Sun, AT&T, IBM, or someone else who sold a proprietary operating system along with it. He understood that he needed to compromise, at least for system libraries.
+
+The problem is drawing boundaries around what is one pile of software owned by one person and what is another pile owned by someone else. The GPL guaranteed that GNU software would "infect" other packages and force people who used his code to join the party and release theirs as well. So he had to come up with a definition that spelled out what it meant for people to use his code and "incorporate" it with others.
+
+This is often easier said than done. The marketplace has developed ways to sell software as big chunks to people, but these are fictions that camouflage software integration. In modern practice, programmers don't just create one easily distinguished chunk of software known as Microsoft Word or Adobe Photoshop. They build up a variety of smaller chunks known as libraries and link these together. Microsoft Windows, in fact, includes a large collection of libraries for creating the menus, forms, click boxes, and what-not that make up the graphical user interfaces. Programmers don't need to write their own instructions for drawing these on the screen and interacting with them. This saves the programmers plenty of time and effort, and it is a large part of what Microsoft is selling when it sells someone a box with Windows on it.
+
+Stallman recognized that programmers sometimes wrote libraries that they wanted others to use. After all, that was the point of GNU: creating tools that others would be free to use. So Stallman relented and created the Lesser General Public License, which would allow people to create libraries that might be incorporated into other programs that weren't fully GNU. The library itself still came with source code, and the user would need to distribute all changes made to the library, but there was no limitation on the larger package.
+
+This new license was also something of a concession to reality. In the most abstract sense, programs are just black boxes that take some input and produce some output. There's no limit to the hierarchies that can be created by plugging these boxes together so that the output for one is the input for another. Eventually, the forest of connections grows so thick that it is difficult to draw a line and label one collection of boxes "ProprietarySoft's SUX-2000" and another collection "GNUSoft's Wombat 3.14.15." The connections are so numerous in well-written, effective software that line-drawing is difficult.
+
+The problem is similar to the one encountered by biologists as they try to define ecosystems and species. Some say there are two different groups of tuna that swim in the Atlantic. Others say there is only one. The distinction would be left to academics if it didn't affect the international laws on fishing. Some groups pushing the vision of one school are worried that others on the other side of the ocean are catching their fish. Others push the two-school theory to minimize the meddling of the other side's bureaucracy. No one knows, though, how to draw a good line.
+
+Stallman's LGPL was a concession to the fact that sometimes programs can be used like libraries and sometimes libraries can be used like programs. In the end, the programmer can draw a strong line around one set of boxes and say that the GPL covers these functions without leaking out to infect the software that links up with the black boxes.
+
+2~ Is the Free Software Foundation Anti-Freedom?
+
+Still, these concessions aren't enough for some people. Many continue to rail against Stallman's definition of freedom and characterize the GPL as a fascist document that steals the rights of any programmer who comes along afterward. To them, being free means having the right to do anything you want with the code, including keeping all your modifications private.
+
+To be fair, the GPL never forces you to give away your changes to the source code. It just forces you to release your modifications if you redistribute it. If you just run your own version in your home, then you don't need to share anything. When you start sharing binary versions of the software, however, you need to ship the source code, too.
+
+Some argue that corporations could exploit a loophole in this provision because a corporation acts like a single person. A company could revise the software and "ship it" by simply hiring anyone who wanted to buy it. The new employees or members of the corporation would get access to the software, but because nothing was ever publicly distributed, the source code would never have to be released. No one seriously believes that anyone would try such an extreme interpretation, but it does raise the question of whether an airtight license can ever be created.
+
+These fine distinctions didn't satisfy many programmers who weren't so taken with Stallman's doctrinaire version of freedom. They wanted to create free software and have the freedom to make some money off of it. This tradition dates back many years before Stallman and is a firm part of academic life. Many professors and students developed software and published a free version before starting up a company that would commercialize the work. They used their professor's salary or student stipend to support the work, and the free software they contributed to the world was meant as an exchange. In many cases, the U.S. government paid for the creation of the software through a grant, and the free release was a gift to the taxpayers who ultimately funded it. In other cases, corporations paid for parts of the research and the free release was seen as a way to give something back to the sponsoring corporation without turning the university into a home for the corporation's low-paid slave programmers who were students in name only.
+
+In many cases, the free distribution was an honest gift made by researchers who wanted to give their work the greatest possible distribution. They would be repaid in fame and academic prestige, which can be more lucrative than everything but a good start-up's IPO. Sharing knowledge and creating more of it was what universities were all about. Stallman tapped into that tradition.
+
+But many others were fairly cynical. They would work long enough to generate a version that worked well enough to convince people of its value. Then, when the funding showed up, they would release this buggy version into the "public domain," move across the street into their own new start-up, and resume development. The public domain version satisfied the university's rules and placated any granting agencies, but it was often close to unusable. The bugs were too numerous and too hidden in the cruft to make it worth someone's time. Of course, the original authors knew where the problems lurked, and they would fix them before releasing the commercial version.
+
+The leader of this academic branch of the free software world was the Computer Systems Research Group at the University of California at Berkeley. The Berkeley Software Distribution (BSD) versions of UNIX started emerging from Berkeley in the late 1970s. Their work came with a free license that gave everyone the right to do what they wanted with the software, including starting up a company, adding some neat features, and reselling the whole package. The only catch was that users had to keep the copyright message intact and give the university some credit in the manual and in advertisements. This requirement was loosened in 1999 when the list of people who needed credit on software projects grew too long. Many groups were taking the BSD license and simply replacing the words "University of California" with their own names, so the list of people who needed to be publicly acknowledged grew with each new project. As the distributions grew larger to include all of these new projects, the process of listing all the names and projects became onerous. The University of California struck the clause requiring advertising credit in the hope of setting an example that others would follow.
+
+Today, many free software projects begin with a debate of "GNU versus BSD" as the initial founders argue whether it is a good idea to restrict what users can do with the code. The GNU side always believes that programmers should be forced to donate the code they develop back to the world, while the BSD side pushes for practically unlimited freedom.
+
+Rick Rashid is one of the major forces behind the development of Microsoft's Windows NT and also a major contributor to our knowledge of how to build a computer operating system. Before he went to Microsoft, he was a professor at Carnegie-Mellon. While he was there, he spearheaded the team responsible for developing Mach, an operating system that offered relatively easy-to-use multitasking built upon a very tiny kernel. Mach let programmers break their software into multiple "threads" that could run independently of each other while sharing the same access to data.
+
+When asked recently about Mach and the Mach license, he explained that he deliberately wrote the license to be as free as possible.
+
+The GNU GPL, he felt, wasn't appropriate for technology that was developed largely with government grants. The work should be as free as possible and shouldn't force "other people to do things (e.g., give away their personal work) in order to get access to what you had done."
+
+He said, in an e-mail interview, "It was my intent to encourage use of the system both for academic and commercial use and it was used heavily in both environments. Accent, the predecessor to Mach, had already been commercialized and used by a variety of companies. Mach continues to be heavily used today--both as the basis for Apple's new MacOS and as the basis for variants of Unix in the marketplace (e.g., Compaq's 64-bit Unix for the Alpha)."
+
+2~ The Evolution of BSD
+
+The BSD license evolved along a strange legal path that was more like the meandering of a drunken cow than the laser-like devotion of Stallman.
+
+Many professors and students cut their teeth experimenting with UNIX on DEC Vaxes that communicated with old teletypes and dumb terminals. AT&T gave Berkeley the source code to UNIX, and this allowed the students and professors to add their instructions and features to the software. Much of their insight into operating system design and many of their bug fixes made their way back to AT&T, where they were incorporated in the next versions of UNIX. No one really thought twice about the source code being available because the shrink-wrapped software market was still in its infancy. The personal computer market wasn't even born until the latter half of the 1970s, and it took some time for people to believe that source code was something for a company to withhold and protect. In fact, many of the programs still weren't being written in higher-level languages. The programmers would write instructions directly for the computer, and while these often would include some instructions for humans, there was little difference between what the humans wrote and the machine read.
+
+After Bill Joy and others at Berkeley started coming up with several good pieces of software, other universities started asking for copies. At the time, Joy remembers, it was considered a bit shabby for computer science researchers to actually write software and share it with others. The academic departments were filled with many professors who received their formal training in mathematics, and they held the attitude that rigorous formal proofs and analysis were the ideal form of research. Joy and several other students began rebelling by arguing that creating working operating systems was essential experimental research; physics departments, after all, supported both experimentalists and theorists.
+
+So Joy began to "publish" his code by sending out copies to other researchers who wanted it. Although many professors and students at Berkeley added bits and pieces to the software running on the DEC Vaxes, Joy was the one who bundled it all together and gave it the name. Kirk McKusick says in his history of Berkeley UNIX, ". . . interest in the error recovery work in the Pascal compiler brought in requests for copies of the system. Early in 1977, Joy put together the 'Berkeley Software Distribution.' This first distribution included the Pascal system, and, in an obscure subdirectory of the Pascal source, the editor vi. Over the next year, Joy, acting in the capacity of the distribution secretary, sent out about 30 free copies of the system."
+
+Today, Joy tells the story with a bit of bemused distraction. He explains that he just copied over a license from the University of Toronto, "whited out" "University of Toronto," and replaced it with "University of California." He simply wanted to get the source code out the door. In the beginning, the Berkeley Software Distribution included a few utilities, but by 1979 the code became tightly integrated with AT&T's basic UNIX code. Berkeley gave away the collection of software in BSD, but only AT&T license holders could use it. Many universities were attracted to the package, in part because the Pascal system was easy for its students to use. The personal computer world, however, was focusing on a simpler language known as Basic. Bill Gates would make Microsoft Basic one of his first products.
+
+Joy says that he wrote a letter to AT&T inquiring about the legal status of the source code from AT&T that was rolled together with the BSD code. After a year, he says, "They wrote back saying, 'We take no position' on the matter." Kirk McKusick, who later ran the BSD project through the years of the AT&T lawsuit, explained dryly, "Later they wrote a different letter."
+
+Joy was just one of a large number of people who worked heavily on the BSD project from 1977 through the early 1980s. The work was low-level and grungy by today's standards. The students and professors scrambled just to move UNIX to the new machines they bought. Often, large parts of the guts of the operating system needed to be modified or upgraded to deal with a new type of disk drive or file system. As they did this more and more often, they began to develop more and more higher-level abstractions to ease the task. One of the earliest examples was Joy's screen editor known as vi, a simple package that could be used to edit text files and reprogram the system. The "battle" between Joy's vi and Stallman's Emacs is another example of the schism between MIT and Berkeley. This was just one of the new tools included in version 2 of BSD, a collection that was shipped to 75 different people and institutions.
+
+By the end of the 1970s, Bell Labs and Berkeley began to split as AT&T started to commercialize UNIX and Berkeley stuck to its job of education. Berkeley professor Bob Fabry was able to interest the Pentagon's Defense Advanced Research Projects Agency (DARPA) in supporting more development at Berkeley. Fabry sold the agency on a software package that would be usable on many of the new machines being installed in research labs throughout the country. It would be more easily portable so that research would not need to stop every time a new computer arrived. The work on this project became versions 3 and 4 of BSD.
+
+During this time, the relationship between AT&T and the universities was cordial. AT&T owned the commercial market for UNIX and Berkeley supplied many of the versions used in universities. While the universities got BSD for free, they still needed to negotiate a license with AT&T, and companies paid a fortune. This wasn't too much of a problem because universities are often terribly myopic. If they share their work with other universities and professors, they usually consider their sharing done. There may be folks out there without university appointments, but those folks are usually viewed as cranks who can be safely ignored. Occasionally, those cranks write their own OS that grows up to be Linux. The BSD version of freedom was still a far cry from Stallman's, but then Stallman hadn't articulated it yet. His manifesto was still a few years off.
+
+The intellectual tension between Stallman and Berkeley grew during the 1980s. While Stallman began what many thought was a quixotic journey to build a completely free OS, Berkeley students and professors continued to layer their improvements to UNIX on top of AT&T's code. The AT&T code was good, it was available, and many of the folks at Berkeley had either directly or indirectly helped influence it. They were generally happy keeping AT&T code at the core despite the fact that all of the BSD users needed to negotiate with AT&T. This process grew more and more expensive as AT&T tried to make more and more money off of UNIX.
+
+Of course, Stallman didn't like the freedom of the BSD-style license. To him, it meant that companies could run off with the hard work and shared source code of another, make a pile of money, and give nothing back. The companies and individuals who were getting the BSD network release were getting the cumulative hard work of many students and professors at Berkeley (and other places) who donated their time and effort to building a decent OS. The least these companies owed the students were the bug fixes, the extensions, and the enhancements they created when they were playing with the source code and gluing it into their products.
+
+Stallman had a point. Many of these companies "shared" by selling the software back to these students and the taxpayers who had paid for their work. While it is impossible to go back and audit the motives of everyone who used the code, there have been many who've used BSD-style code for their personal gain.
+
+Bill Joy, for instance, went to work at Sun Microsystems in 1982 and brought with him all the knowledge he had gained in developing BSD. Sun was always a very BSD-centered shop, and many of the people who bought Sun workstations ran BSD. At that time, AT&T still controlled much of the kernel and many of the small extra programs that made UNIX a usable system.
+
+But there are counter arguments as well. Joy certainly contributed a lot to the different versions of BSD. If anyone deserves to go off and get rich at a company like Sun, it's he.
+
+Also, the BSD source code was freely available to all comers, and all companies started with the same advantages. The software business is often considered to be one of the most free marketplaces around because of the low barriers to entry. This means that companies should only be able to charge for the value they add to the BSD code. Sure, all of the Internet was influenced by the TCP/IP code, but now Microsoft, Apple, IBM, Be, and everyone else compete on the quality of their interface.
+
+2~ The Price of Total Freedom
+
+The debate between BSD-style freedom and GNU-style freedom is one of the greatest in the free programming world and is bound to continue for a long time as programmers join sides and experiment.
+
+John Gilmore is one programmer who has worked with software developed under both types of licenses. He was employee number five at Sun Microsystems, a cofounder of the software development tool company Cygnus Solutions, and one of the board members of the Electronic Frontier Foundation. His early work at Sun gave him the wealth to pursue many independent projects, and he has spent the last 10 years devoting himself to making it easy for people around the world to use encryption software. He feels that privacy is a fundamental right and an important crime deterrent, and he has funded a number of different projects to advance this right.
+
+Gilmore also runs the cypherpunks mailing list on a computer in his house named Toad Hall near Haight Street in San Francisco. The mailing list is devoted to exploring how to create strong encryption tools that will protect people's privacy and is well known for the strong libertarian tone of the deliberations. Practically the whole list believes (and frequently reiterates) that people need the right to protect their privacy against both the government and private eavesdropping. Wired magazine featured Gilmore on the cover, along with fellow travelers Eric Hughes and Tim May.
+
+One of his recent tasks was creating a package of free encryption utilities that worked at the lowest level of the network operating system. These tools, known as FreeS/WAN, would allow two computers that meet on the Internet to automatically begin encoding the data they swap with some of the best and most secure codes available. He imagines that banks, scientific laboratories, and home workers everywhere will want to use the toolkit. In fact, AT&T is currently examining how to incorporate the toolkit into products it is building to sell more high-speed services to workers staying at home to avoid the commute.
+
+Gilmore decided to use the GNU license to protect the FreeS/WAN software, in part because he has had bad experiences in the past with totally free software. He once wrote a little program called PDTar that was an improvement over the standard version of Tar used on the Internet to bundle together a group of files into one big, easy-to-manage bag of bits often known affectionately as "tarballs." He decided he wasn't going to mess around with Stallman's GNU license or impose any restrictions on the source code at all. He was just going to release it into the public domain and give everyone total freedom.
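The "tarball" workflow that PDTar improved on is easy to see with a modern tar. A minimal sketch follows; PDTar itself is long obsolete, so the standard flags shared by GNU and BSD tar are used here, and the file names are invented for illustration:

```shell
# Create a small directory tree to bundle (names are made up for the demo)
mkdir -p pdtar-demo/src
echo 'hello from main.c' > pdtar-demo/src/main.c
echo 'hello from util.c' > pdtar-demo/src/util.c

# Bundle the whole directory into one easy-to-manage file (a "tarball")
tar -cf pdtar-demo.tar pdtar-demo

# List the archive's contents without unpacking it
tar -tf pdtar-demo.tar

# Unpack into a separate directory, as a recipient would
mkdir -p unpacked
tar -xf pdtar-demo.tar -C unpacked
cat unpacked/pdtar-demo/src/main.c
```

Adding `-z` (as in `tar -czf`) compresses the bundle with gzip, which is how most tarballs circulate on the Internet today.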
+
+This good deed did not go unpunished, although the punishment was relatively minor. He recalls, "I never made PDTar work for DOS, but six or eight people did. For years after the release, I would get mail saying, 'I've got this binary for the DOS release and it doesn't work.' They often didn't even have the sources that went with the version so I couldn't help them if I tried." Total freedom, it turned out, brought a certain amount of anarchy that made it difficult for him to manage the project. While the total freedom may have encouraged others to build their own versions of PDTar, it didn't force them to release the source code that went with their versions so others could learn from or fix their mistakes.
+
+Hugh Daniel, one of the testers for the FreeS/WAN project, says that he thinks the GNU General Public License will help keep some coherency to the project. "There's also a magic thing with GPL code that open source doesn't have," Daniel said. "For some reason, projects don't bifurcate in GPL space. People don't grab a copy of the code and call it their own. For some reason there's a sense of community in GPL code. There seems to be one version. There's one GPL kernel and there's umpty-ump BSD branches."
+
+Daniel is basically correct. The BSD code has evolved, or forked, into many different versions with names like FreeBSD, OpenBSD, and NetBSD, while the Linux kernel released under Stallman's GPL remains one fairly coherent package. Still, there is plenty of cross-pollination between the different versions of BSD UNIX. Both NetBSD 1.0 and FreeBSD 2.0, for instance, borrowed code from 4.4 BSD-Lite. Also, many versions of Linux come with tools and utilities that came from the BSD project.
+
+But Daniel's point is also clouded with semantics. There are dozens if not hundreds of different Linux distributions available from different vendors. Many differ in subtle points, but some are markedly different. While these differences are often as great as the ones between the various flavors of BSD, the groups do not consider them psychologically separate. They haven't forked politically even though they've split off their code.
+
+While different versions may be good for some projects, they can be a problem for packages like FreeS/WAN that depend upon interoperability. If competing versions of FreeS/WAN emerge, then all begin to suffer because the product was designed to let people communicate with each other. If the software can't negotiate secure codes because of differences, then it begins to fail.
+
+But it's not clear that the extra freedom is responsible for the fragmentation. In reality, the different BSD groups emerged because they had different needs. The NetBSD group, for instance, wanted to emphasize multiplatform support and interoperability. Their website brags that the NetBSD release works well on 21 different hardware platforms and also points out that some of these hardware platforms themselves are quite diverse. There are 93 different versions of the Macintosh running on Motorola's 68k chips, including the very first Mac. Eighty-nine of them run some part of NetBSD and 37 of them run all of it. That's why they say their motto is "Of course it runs NetBSD."
+
+The OpenBSD group, on the other hand, is emphasizing security without compromising portability and interoperability. They want to fix all security bugs immediately and be the most secure OS on the marketplace.
+
+There were also deep personal differences at work: Theo de Raadt founded OpenBSD after the NetBSD group kicked him out of its core group.
+
+For all of these reasons, it may be hard to argue that the freedoms provided by the BSD-style license were largely responsible for the splintering. The GNU software users are just as free to make new versions as long as they kick back the source code into free circulation. In fact, it may be possible to argue that the Macintosh versions of some of the GNU code comprise a splinter group because they emerged despite the ill will Stallman felt toward the Mac.
+
+2~ The Synthesis of "Open Source"
+
+The tension between the BSD licenses and the GNU has always festered like the abortion debate. Everyone picked sides and rarely moved from them.
+
+In 1998, a group of people in the free software community tried to unify the two camps by creating a new term, "open source." To make sure everyone knew they were serious, they started an unincorporated organization, registered a trademark, and set up a website (www.opensource.org). Anyone who wanted to label their project "open source" would have to answer to them because they would control the trademark on the name.
+
+Sam Ockman, a Linux enthusiast and the founder of Penguin Computing, remembers the day of the meeting just before Netscape announced it was freeing its source code. "Eric Raymond came into town because of the Netscape thing. Netscape was going to free their software, so we drove down to Transmeta and had a meeting so we could advise Netscape," he said.
+
+He explained that the group considered a number of different options for the organization's structure. Some wanted to choose a leader right away. Others wanted to emulate an open source project and let a leader emerge through the display of talent and, well, leadership. Others wanted elections.
+
+The definition of what was open source grew out of the Debian project, one of the different groups that banded together to press CD-ROMs of stable Linux releases. Groups like these often get into debates about what software to include on the disks. Some wanted to be very pure and only include GPL'ed software. In a small way, that would force others to contribute back to the project because they wouldn't get their software distributed by the group unless it was GPL'ed. Others wanted less stringent requirements that might include quasi-commercial projects that still came with their source code. There were some cool projects out there that weren't protected by GPL, and it could be awfully hard to pass up the chance to integrate them into a package.
+
+Over time, one of the leaders of the Debian group, Bruce Perens, came to create a definition of what was acceptable and what wasn't. This definition would be large enough to include the GNU General Public License, the BSD-style licenses, and a few others like MIT's X Consortium license and the Artistic license. The X-windows license covers a graphical windowing interface that began at MIT and was also freely distributed with BSD-like freedom. The Artistic license applies to the Perl programming language, a tool that is frequently used to transform files. The Debian meta-definition would embrace all of these.
+
+The official definition of what was acceptable to Debian leaned toward more freedom and fewer restrictions on the use of software. Of course, that's the only way that anyone could come up with a definition that included both GNU and the much less restrictive BSD. But this was also the intent of the open source group. Perens and Eric Raymond felt that Stallman still sounded too quasi-communist for "conservative businessmen," and they wanted the open source definition to avoid insisting upon the sort of forced sharing that Stallman's GNU virus provided.
+
+Still, the definition borrowed heavily from Stallman's concept of GNU, and Perens credits him by saying that many of the Debian guidelines are derived from the GPL. An official open source license for a product must provide the programmer with source code that is human-readable. It can't restrict what modifications are made to the software or how it is sold or given away.
+
+The definition glossed over the difference between BSD and GPL by stating, "The license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software."
+
+The definition proved to be the model for more commercial offerings like the Netscape Public License. In 1998, Netscape started distributing the source code to its popular browser in hopes of collecting help from the Internet and stopping Microsoft's gradual erosion of its turf. The license gave users wide opportunities to make changes and tinker with the software, but it also allowed Netscape to use the changes internally and refuse to share what it did with them. This special privilege offended some users who didn't like the imbalance, but it didn't bother many others who thought it was a reasonable compromise for a chance to tinker with commercial code. Netscape, of course, returned some of the favor by allowing people to keep their modifications private in much the same way that the BSD-style license did.
+
+In June 1999, the Open Source Initiative revealed a startling fact. They were close to failing in their attempts to register the term "open source" as a trademark. The phrase was too common to be registered. Instead, they backed away and offered to check out licenses and classify them officially as "OSI Certified" if they met the terms of the OSI's definition of freedom.
+
+Some reacted negatively. Richard Stallman decided that he didn't like the word "open" as much as "free." Open doesn't capture the essence of freedom. Ockman says, "I don't think it's very fair. For ages, he's always said that the term 'free software' is problematic because people think of 'free beer' when they should be thinking of 'free speech.' We were attempting to solve that term. If the masses are confused, then corporate America is confused even more."
+
+The debate has even produced more terms. Some people now use the phrase "free source" to apply to the general conglomeration of the GPL and the open source world. Using "free software" implies that someone is aligned with Stallman's Free Software Foundation. Using "open source" implies you're aligned with the more business-friendly Open Source Initiative. So "free source" works as a compromise, implying neither allegiance. Others tweak the meaning of free and refer to GPL-protected software as "GNUFree."
+
+Naturally, all of this debate about freedom can reach comic proportions. Programmers are almost better than lawyers at finding loopholes, if only because they have to live with a program that crashes.~{ Lawyers just watch their clients go to jail. }~ Stallman, for instance, applies the GPL to everything coming out of the GNU project except the license itself. That can't be changed, although it can be freely reproduced. Some argue that if it were changeable, people would be able to insert and delete terms at will. Then they could apply the changed GPL to the new version of the software and do what they want. Stallman's original intent would not be changed. The GPL would still apply to all of the GNU software and its descendants, but it wouldn't be the same GPL.
+
+1~ Source
+
+Computer programmers love Star Wars. So it should be no surprise that practically every single member of the free source community has, at one time or another, rolled out the phrase, "Use the Source, Luke." It does a perfect job of capturing the mythical faith that the free source world places in the ability to access the source code to a program. As everyone points out, in the original version of Star Wars, the rebel troops used the plans, the Source, to the Death Star carried in R2D2 to look for weaknesses.
+
+The free source realm has been pushing the parallels for some time now. When AT&T unveiled their round logo with an offset dimple, most free source people began to snicker. The company that began the free software revolution by pushing its intellectual property rights and annoying Richard Stallman had chosen a logo that looked just like the Death Star. Everyone said, "Imperialist minds think alike." Some even wondered and hoped that George Lucas would sue AT&T for some sort of look-and-feel, trademark infringement. Those who use the legal intimidation light saber should die by the legal intimidation light saber.
+
+Of course, the free source folks knew that only their loose coalition of rebels spread out around the galaxy would be a strong match for the Empire. The Source was information, and information was power. The Source was also about freedom, one of the best and most consistent reservoirs of revolutionary inspiration around. The rebels might not have teams of lawyers in imperial star cruisers, but they hoped to use the Source to knit together a strong, effective, and more powerful resistance.
+
+The myth of open access to free source code is a powerful one that has made true believers out of many in the community. The source code is a list of instructions for the computer written out in a programming language that is understandable by humans. Once the compilers converted the source code into the string of bits known as the binary or object code, only computers (and some very talented humans) could understand the instructions. I've known several people who could read 8080 binary code by eye, but they're a bit different from the general population.
+
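The distance between the two forms is easy to show in miniature. Here is a sketch in Python, whose built-in compiler and disassembler stand in for a C toolchain (the one-line program is invented for illustration):

```python
import dis

# Human-readable source: the intent is plain to anyone who knows the language.
source = "total = price * quantity"

# The compiler turns the source into bytecode for the machine (here, the CPython VM).
code = compile(source, "<example>", "exec")

# The raw bytes are opaque to most humans...
print(bytes(code.co_code).hex())

# ...and while a disassembler can recover a rough instruction listing,
# the comments and layout of the original are gone for good.
dis.dis(code)
```

Running it prints a string of hex digits followed by the disassembler's reconstruction; the variable names survive in the bytecode's name tables, but nothing else of the author's presentation does.
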
+When companies tried to keep their hard work and research secret by locking up the source code, they built a barrier between the users and their developers. The programmers would work behind secret walls to write the source code. After compilers turned the Source into something that computers could read, the Source would be locked up again. The purchasers would only get the binary code because that's all the companies thought the consumers needed. The source code needed to be kept secret because someone might steal the ideas inside and create their own version.
+
+Stallman saw this secrecy as a great crime. Computer users should be able to share the source code so they can share ways to make it better. This trade should lead to more information-trading in a great feedback loop. Some folks even used the word "bloom" to describe the explosion of interest and cross-feedback. They're using the word the way biologists use it to describe the way algae can just burst into existence, overwhelming a region of the ocean. Clever insights, brilliant bug fixes, and wonderful new features just appear out of nowhere as human curiosity is amplified by human generosity in a grand explosion of intellectual synergy. The only thing missing from the picture is a bunch of furry Ewoks dancing around a campfire.~{ Linux does have many marketing opportunities. Torvalds chose a penguin named Tux as the mascot, and several companies actually manufacture and sell stuffed penguins to the Linux realm. The BSD world has embraced a cute demon, a visual pun on the fact that BSD UNIX uses the word "daemon" to refer to some of the faceless background programs in the OS. }~
+
+2~ The Bishop of the Free Marketplace
+
+Eric Raymond, a man who is sort of the armchair philosopher of the open source world, did a great job of summarizing the phenomenon and creating this myth in his essay "The Cathedral and the Bazaar."
+
+Raymond is an earnest programmer who spent some time working on projects like Stallman's GNU Emacs. He saw the advantages of open source development early, perhaps because he's a hard-core libertarian. Government solutions are cumbersome. Empowering individuals by not restraining them is great. Raymond comes off as a bit more extreme than other libertarians, in part because he doesn't hesitate to defend the second amendment of the U.S. Constitution as much as the first. Raymond is not ashamed to support widespread gun ownership as a way to further empower the individual. He dislikes the National Rifle Association because they're too willing to compromise away rights that he feels are absolute.
+
+Some people like to call him the Margaret Mead of the free source world because he spent some time studying and characterizing the culture in much the same way that Mead did when she wrote Coming of Age in Samoa. This can be a subtle jab because Margaret Mead is not really the same intellectual angel she was long ago. Derek Freeman and other anthropologists raise serious questions about Mead's ability to see without bias. Mead was a big fan of free love, and many contend it was no accident that she found wonderful tales of unchecked sexuality in Samoa. Freeman revisited Samoa and found it was not the guilt-free land of libertine pleasures that Mead described in her book. He documented many examples of sexual restraint and shame that Mead apparently missed in her search for a paradise.
+
+Raymond looked at open source development and found what he wanted to find: the wonderful efficiency of unregulated markets. Sure, some folks loved to label Richard Stallman a communist, a description that has always annoyed Stallman. Raymond looked a bit deeper and saw that the basis of the free software movement's success was the freedom that gave each user the complete power to change and improve their software. Just as Sigmund Freud found sex at the root of everything and Carl Jung uncovered a battle of animus and anima, the libertarian found freedom.
+
+Raymond's essay was one of the first to try to explain why free source efforts can succeed and even prosper without the financial incentives of a standard money-based software company. One of the biggest reasons he cited was that a programmer could "scratch an itch" that bothered him. That is, a programmer might grow annoyed by a piece of software that limited his choices or had an annoying glitch. Instead of cursing the darkness in the brain cavity of the corporate programmer who created the problem, the free source hacker was able to use the Source to try to find the bug.
+
+Itch-scratching can be instrumental in solving many problems. Some bugs in software are quite hard to identify and duplicate. They only occur in strange situations, like when the printer is out of paper and the modem is overloaded by a long file that is coming over the Internet. Then, and only then, the two buffers may fill to the brim, bump into each other, and crash the computer. The rest of the time, the program floats along happily, encountering no problems.
+
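The printer-and-modem scenario can be sketched in a few lines. This is a hypothetical spooler, with invented names and an invented overflow scheme, built only to show how a crash can hide in the coincidence of two otherwise harmless conditions:

```python
# A toy spooler with a latent bug of the "two rare conditions at once" kind.
# All names and the overflow scheme are invented for illustration.
CAPACITY = 8

def route(packet, printer_buf, modem_buf):
    """Queue a packet, spilling printer overflow into the modem queue."""
    if packet["dest"] == "printer":
        printer_buf.append(packet)
    else:
        modem_buf.append(packet)
    if len(printer_buf) > CAPACITY:
        # The author assumed the modem queue always has room to spare...
        if len(modem_buf) > CAPACITY:
            raise RuntimeError("buffer collision")  # ...so this path was never exercised
        modem_buf.append(printer_buf.pop())

# Ordinary traffic, one device busy at a time: the failing branch never runs.
printer, modem = [], []
for _ in range(20):
    route({"dest": "printer"}, printer, modem)
    printer.pop()  # the printer drains between packets, as it usually does
```

Only when both queues are brimming at the same moment does the unhandled branch fire, which is exactly why an in-house test team with no printer attached would never see it.
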
+These types of bugs are notoriously hard for corporate testing environments to discover and characterize. The companies try to be diligent by hiring several young programmers and placing them in a room with a computer. The team beats on the software all day long and develops a healthy animosity toward the programming team that has to fix the problems they discover. They can nab many simple bugs, but what happens if they don't have a printer hooked up to their machine? What happens if they aren't constantly printing out things the way some office users are? The weird bug goes unnoticed and probably unfixed.
+
+The corporate development model tries to solve this limitation by shipping hundreds, thousands, and often hundreds of thousands of copies to ambitious users called "beta testers." Others call them "suckers" or "free volunteers" because once they finish helping develop the software, they get to pay for it. Microsoft even charges some users for the pleasure of being beta testers. Many of the users are pragmatic. They often have no choice but to participate in the scheme because they base their businesses on the software shipped by these companies. If it didn't work, they would be out of a job.
+
+While this broad distribution of beta copies is much more likely to find someone who is printing and overloading a modem at the same time, it doesn't give the user the tools to help find the problem. Their only choice is to write an e-mail message to the company saying "I was printing yesterday and your software crashed." That isn't very helpful for the engineer, and it's no surprise that many of these reports are either ignored or unsolved.
+
+Raymond pointed out that the free source world can do a great job with these nasty bugs. He summed this up with the phrase, "Given enough eyeballs, all bugs are shallow," which he dubbed "Linus's Law." That is, eventually some programmer would start printing and using the Internet at the same time. After the system crashed a few times, somebody would come along with the time, the energy, and the commitment to dig into the free source, poke around, and spot the problem. Raymond named the law after Linus Torvalds. Raymond is a great admirer of Torvalds and thinks that Torvalds's true genius was organizing an army to work on Linux. The coding itself was a distant second.
+
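The arithmetic behind the law is simple: even if only a tiny fraction of users will both trigger a rare bug and care enough to chase it through the Source, the odds that at least one such person exists grow quickly with the size of the crowd. A back-of-the-envelope sketch (the probability figure is invented for illustration):

```python
def chance_someone_spots_it(p_single, n_users):
    # Probability that at least one of n independent users both hits
    # the rare bug and diagnoses it: the complement of "nobody does."
    return 1 - (1 - p_single) ** n_users

# Suppose 1 user in 10,000 both triggers the crash and can read the Source.
p = 1e-4
for n in (100, 10_000, 1_000_000):
    print(n, round(chance_someone_spots_it(p, n), 3))
```

With a hundred users the bug almost certainly survives; with a million, it is almost certainly shallow. The crowd, not any individual's skill, does the work.
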
+Of course, waiting for a user to find the bugs depended on there being someone with enough time and commitment. Most users aren't talented programmers, and most have day jobs. Raymond and the rest of the free source community acknowledge this limitation, but point out that the right person often comes along if the bug occurs often enough to be a real problem. If the bug is serious enough, a non-programmer may even hire a programmer to poke into the source code.
+
+Waiting for the bug and the programmer to find each other is like waiting for Arthur to find the sword in the stone. But Raymond and the rest of the free source community have even turned this limitation on its head and touted it as an advantage. Relying on users to scratch itches means that problems only get addressed if they have real constituencies with a big enough population to generate the one true believer with enough time on his hands. It's sort of a free market in people's time for fixing bugs. If the demand is there, the solution will be created. It's Say's Law recast for software development: "the supply of bugs creates the talent for fixes."
+
+Corporate development, on the other hand, has long been obsessed with adding more and more features to programs to give people enough reason to buy the upgrade. Managers have long known that it's better to put more time into adding more doohickeys and widgets to a program than into fixing its bugs. That's why Microsoft Word can do so many different things with the headers and footers of documents but can't stop a Word Macro virus from reproducing. The folks at Microsoft know that when the corporate managers sit down to decide whether to spend the thousands of dollars to upgrade their machines, they'll need a set of new compelling features. People don't like to pay for bug fixes.
+
+Of course, corporations also have some advantages. Money makes sure that someone is actively trying to solve the bugs in the program. The same free market vision guarantees that companies that consistently disappoint their customers will go out of business. The corporate developer has the advantage of studying the same source code day in and day out. Eventually he'll learn enough about the guts of the Source to be much more effective than the guy with the jammed printer and modem. He should be able to nab the bug 10 times more quickly than the free source hobbyist just because he's an expert in the system.
+
+Raymond acknowledges this problem but proposes that the free source model can still be more effective despite the inexperience of the people who are forced to scratch an itch. Again he taps the world of libertarian philosophy and suggests that the free software world is like a bazaar filled with many different merchants offering their wares. Corporate development, on the other hand, is structured like the religious syndicates that built the medieval cathedrals. The bazaars offered plenty of competition but no order. The cathedrals were run by central teams of priests who tapped the wealth of the town to build the vision of one architect.
+
+The differences between the two were pretty simple. The cathedral team could produce a great work of art if the architect was talented, the funding team was successful, and the management was able to keep everyone focused on doing their jobs. If not, it never got that far. The bazaar, on the other hand, consisted of many small merchants trying to outstrip each other. The best cooks ended up with the most customers. The others soon went out of business.
+
+The comparison to software was simple. Corporations gathered the tithes, employed a central architect with a grand vision, managed the team of programmers, and shipped a product every once and a bit. The Linux world, however, let everyone touch the Source. People would try to fix things or add new features. The best solutions would be adopted by others and the mediocre would fall by the wayside. Many different Linux versions would proliferate, but over time the marketplace of software would coalesce around the best standard version.
+
+"In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect," Raymond said.
+
+"In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena--or, at least, that they turn shallow pretty quick when exposed to a thousand eager code-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door."
+
+2~ They Put a Giant Arrow on the Problem
+
+This bazaar can be a powerful influence on solving problems. Sure, it isn't guided by a talented architect and teams of priests, but it is a great free-for-all. It is quite unlikely, for instance, that the guy with the overloaded printer and modem line will also be a talented programmer with a grand vision to solve the problem. Someone named Arthur only stumbles across the right stone with the right sword every once and a bit. But if the frustrated user can do a good job characterizing it and reporting it, then someone else can solve it.
+
+Dave Hitz was one of the programmers who helped Keith Bostic rewrite UNIX so it could be free of AT&T's copyright. Today, he runs Network Appliance, a company that builds stripped-down file servers that run BSD at their core. He's been writing file systems ever since college, and the free software came in quite handy when he was starting his company. When they started building the big machines, the engineers just reached into the pool of free source code for operating systems and pulled out much of the code that would power his servers. They modified the code heavily, but the body of free software that he helped create was a great starting point.
+
+In his experience, many people would find a bug and patch it with a solution that was good enough for them. Some were just kids in college. Others were programmers who didn't have the time or the energy to read the Source and understand the best way to fix the problem. Some fixed the problem for themselves, but inadvertently created another problem elsewhere. Sorting through all of these problems was hard to do.
+
+But Hitz says, "Even if they fixed it entirely the wrong way, if they found the place where the problem went away, then they put a giant arrow on the problem." Eventually, enough arrows would provide someone with enough information to solve the problem correctly. Many of the new versions written by people may be lost to time, but that doesn't mean that they didn't have an important effect on the evolution of the Source.
+
+"I think it's rarely the case that you get people who make a broad base of source code their life," he said. "There are just a whole bunch of people who are dilettantes. The message is, 'Don't underestimate the dilettantes.'"
+
+2~ How Free Software Can Be a Bazaar or a Cathedral
+
+When Raymond wrote the essay, he was just trying to suss out the differences between several of the camps in the free source world. He noticed that people running free source projects had different ways of sharing. He wanted to explain which free source development method worked better than others. It was only later that the essay began to take on a more serious target when everyone began to realize that Microsoft was perhaps the biggest cathedral-like development team around.
+
+Raymond said, "I think that like everyone else in the culture I wandered back and forth between the two modes as it seemed appropriate because I didn't have a theory or any consciousness."
+
+He saw Richard Stallman and the early years of the GNU projects as an example of cathedral-style development. These teams would often labor for months if not years before sharing their tools with the world. Raymond himself said he behaved the same way with some of the early tools that he wrote and contributed to the GNU project.
+
+Linus Torvalds changed his mind by increasing the speed of sharing, which Raymond characterized as the rule of "release early and often, delegate everything you can, be open to the point of promiscuity."
+
+Torvalds ran Linux as openly as possible, and this eventually attracted some good contributors. In the past, the FSF was much more careful about what it embraced and brought into the GNU project. Torvalds took many things into his distributions and they mutated as often as daily. Occasionally, new versions came out twice a day.
+
+Of course, Stallman and Raymond have had tussles in the past. Raymond is careful to praise the man and say he values his friendship, but also tempers it by saying that Stallman is difficult to work with.
+
+In Raymond's case, he says that he once wanted to rewrite much of the Lisp code that was built into GNU Emacs. Stallman's Emacs allowed any user to hook up their own software into Emacs by writing it in a special version of Lisp. Some had written mail readers. Others had added automatic comment-generating code. All of this was written in Lisp.
+
+Raymond says that in 1992, "The Lisp libraries were in bad shape in a number of ways. They were poorly documented. There was a lot of work that had gone on outside the FSF and I wanted to tackle that project."
+
+According to Raymond, Stallman didn't want him to do the work and refused to build it into the distribution. Stallman could do this because he controlled the Free Software Foundation and the distribution of the software. Raymond could have created his own version, but refused because it was too complicated and ultimately bad for everyone if two versions emerged.
+
+For his part, Stallman explains that he was glad to accept parts of Raymond's work, but he didn't want to be forced into accepting them all. Stallman says, "Actually, I accepted a substantial amount of work that Eric had done. He had a number of ideas I liked, but he also had some ideas I thought were mistaken. I was happy to accept his help, as long as I could judge his ideas one by one, accepting some and declining some.
+
+"But subsequently he asked me to make a blanket arrangement in which he would take over the development of a large part of Emacs, operating independently. I felt I should continue to judge his ideas individually, so I said no."
+
+Raymond mixed this experience with his time watching Torvalds's team push the Linux kernel and used them as the basis for his essay on distributing the Source. "Mostly I was trying to pull some factors that I had observed as unconscious folklore so people could take them out and reason about them," he said.
+
+Raymond says, "Somebody pointed out that there's a parallel of politics. Rigid political and social institutions tend to change violently if they change at all, while ones with more play in them tend to change peacefully."
+
+There is a good empirical reason for the faith in the strength of free source. After all, a group of folks who rarely saw each other had assembled a great pile of source code that was kicking Microsoft's butt in some corners of the computer world. Linux servers were common on the Internet and growing more common every day. The desktop was waiting to be conquered. They had done this without stock options, without corporate jets, without secret contracts, and without potentially illegal alliances with computer manufacturers. The success of the software from the GNU and Linux world was really quite impressive.
+
+Of course, myths can be taken too far. Programming computers is hard work and often frustrating. Sharing the source code doesn't make bugs or problems go away--it just makes it a bit easier for someone else to dig into a program to see what's going wrong. The source code may just be a list of instructions written in a programming language that is designed to be readable by humans, but that doesn't mean that it is easy to understand. In fact, most humans won't figure out most source code because programming languages are designed to be understood by other programmers, not the general population.
+
+To make matters worse, programmers themselves have a hard time understanding source code. Computer programs are often quite complicated and it can take days, weeks, and even months to understand what a strange piece of source code is telling a computer to do. Learning what is happening in a program can be a complicated job for even the best programmers, and it is not something that is taken lightly.
+
+While many programmers and members of the open source world are quick to praise the movement, they will also be able to cite problems with the myth of the Source. It isn't that the Source doesn't work, they'll say, it's just that it rarely works anywhere near as well as the hype implies. The blooms are rarely as vigorous and the free markets in improvements are rarely as liquid.
+
+Larry McVoy, an avid programmer, proto-academic, and developer of the BitKeeper toolkit, likes to find fault with the model. It isn't that he doesn't like sharing source code, it's just that he isn't wealthy enough to take on free software projects. "We need to find a way for people to develop free software and pay their mortgages and raise a family," he says.
+
+"If you look closely," he says, "there really isn't a bazaar. At the top it's always a one-person cathedral. It's either Linus, Stallman, or someone else." That is, the myth of a bazaar as a wide-open, free-for-all of competition isn't exactly true. Sure, everyone can download the source code, diddle with it, and make suggestions, but at the end of the day it matters what Torvalds, Stallman, or someone else says. There is always a great architect of Chartres lording it over his domain.
+
+Part of this problem is the success of Raymond's metaphor. He said he just wanted to give the community some tools to understand the success of Linux and reason about it. But his two visions of a cathedral and a bazaar had such a clarity that people concentrated more on dividing the world into cathedrals and bazaars. In reality, there's a great deal of blending in between. The most efficient bazaars today are the suburban malls that have one management company building the site, leasing the stores, and creating a unified experience. Downtown shopping areas often failed because there was always one shop owner who could ruin an entire block by putting in a store that sold pornography. On the other side, religion has always been something of a bazaar. Martin Luther effectively split apart Christianity by introducing competition. Even within denominations, different parishes fight for the hearts and souls of people.
+
+The same blurring holds true for the world of open source software. The Linux kernel, for instance, contains many thousands of lines of source code. Some put the number at 500,000. A few talented folks like Alan Cox or Linus Torvalds know all of it, but most are only familiar with the corners of it that they need to know. These folks, who may number in the thousands, are far outnumbered by the millions who use the Linux OS daily.
+
+It's interesting to wonder if the ratio of technically anointed to blithe users in the free source world is comparable to the ratio in Microsoft's dominion. After all, Microsoft will share its source code with close partners after they sign some non-disclosure forms.~{ At this writing, Microsoft has not released its source code, but the company is widely known to be examining the option as part of its settlement with the Department of Justice. }~ While Microsoft is careful about what it tells its partners, it will reveal information only when there's something to gain. Other companies have already jumped right in and started offering source code to all users who want to look at it.
+
+Answering this question is impossible for two different reasons. First, no one knows what Microsoft reveals to its partners because it keeps all of this information secret, by reflex. Contracts are usually negotiated under non-disclosure, and the company has not been shy about exploiting the power that comes from the lack of information.
+
+Second, no one really knows who reads the Linux source code, for the opposite reason. The GNU/Linux source is widely available and frequently downloaded, but that doesn't mean it's read or studied. The Red Hat distribution comes with one CD full of pre-compiled binaries and a second full of source code. Who knows whether anyone ever pops that second CD-ROM into a computer? Everyone is free to do so in the privacy of their own cubicle, so no records are kept.
+
+If I were to bet, I would guess that the ratios of cognoscenti to uninformed users in the Linux and Microsoft worlds are pretty close. Reading the Source just takes too much time and too much effort for many in the Linux world to take advantage of the huge river of information available to them.
+
+If this is true or at least close to true, then why has the free source world been able to move so much more quickly than the Microsoft world? The answer isn't that everyone in the free source world is using the Source, it's that everyone is free to use it. When one person needs to ask a question or scratch an itch, the Source is available with no questions asked and no lawyers consulted. Even at 3:00 A.M., a person can read the Source. At Microsoft and other corporations, they often need to wait for the person running that division or section to give them permission to access the source code.
+
+There are other advantages. The free source world spends a large amount of time keeping the source code clean and accessible. A programmer who tries to get away with sloppy workmanship and bad documentation will pay for it later as others come along and ask thousands of questions.
+
+Corporate developers, on the other hand, have layers of secrecy and bureaucracy to isolate them from questions and comments. It is often hard to find the right programmer in the rabbit warren of cubicles who has the source code in the first place. One Microsoft programmer was quoted as saying, "A developer at Microsoft working on the OS can't scratch an itch he's got with Excel, neither can the Excel developer scratch his itch with the OS--it would take him months to figure out how to build and debug and install, and he probably couldn't get proper source access anyway."
+
+This problem is endemic to corporations. The customers are buying the binary version, not the source code, so there is no reason to dress up the backstage wings of the theater. After some time, though, people change cubicles, move to other corporations, and information disappears. While companies try to keep source code databases to synchronize development, the efforts often fall apart. After Apple canceled development of their Newton handheld, many Newton users were livid. They had based big projects on the platform and they didn't want to restart their work. Many asked whether Apple could simply give away the OS's source code instead of leaving it to rot on some hard disk. Apple dodged these requests, and this made some people even more cynical. One outside developer speculated, "It probably would not be possible to re-create the OS. The developers are all gone. All of them went to Palm, and they probably couldn't just put it back together again if they wanted to."
+
+Of course, corporations try to fight this rot by getting their programmers to do a good job at the beginning and write plenty of documentation. In practice, this slips a bit because it is not rewarded by the culture of secrecy. I know one programmer who worked for a project at MIT. The boss thought he was being clever by requiring comments on each procedure and actually enforcing it with an automated text-scanning robot that would look over the source code and count the comments. My friend turned around and hooked up a version of Eliza, the popular artificial intelligence chatterbot, and funneled its responses into the comment field. Then everyone was happy. The chatterbot filled the comment field, the automated comment police found something vaguely intelligent, and the programmer got to spend his free time doing other things. The boss never discovered the problem.
+
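A checker like that one is easy to fool precisely because it measures the quantity of comments rather than their sense. A toy reconstruction (the function, the sample code, and the density rule are all invented):

```python
def comment_density(source_lines):
    # The "comment police": counts lines bearing a comment marker, nothing more.
    commented = sum(1 for line in source_lines if "#" in line)
    return commented / max(len(source_lines), 1)

honest = [
    "def area(r):  # circle area from radius",
    "    return 3.14159 * r * r",
]
gamed = [
    "def area(r):  # How do you feel about area?",        # chatterbot filler
    "    return 3.14159 * r * r  # Tell me more about that.",
]

# Both pass a 50%-density rule; the robot cannot tell insight from Eliza's patter.
print(comment_density(honest), comment_density(gamed))  # → 0.5 1.0
```

Any metric that can be satisfied mechanically will be, which is the book's point: only human readers asking questions make documentation honest.
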
+Programmers are the same the world over, and joining the free source world doesn't make them better people or destroy their impudence. But it does penalize them if others come along and try to use their code. If it's inscrutable, sloppy, or hard to understand, then others will either ignore it or pummel them with questions. That is a strong incentive to do it right.
+
+2~ Open Source and Lightbulbs
+
+The limitations to the power of open source might be summarized in the answer to the question "How many open source developers does it take to change a lightbulb?" The answer is: 17. Seventeen to argue about the license; 17 to argue about the brain-deadedness of the lightbulb architecture; 17 to argue about a new model that encompasses all models of illumination and makes it simple to replace candles, campfires, pilot lights, and skylights with the same easy-to-extend mechanism; 17 to speculate about the secretive industrial conspiracy that ensures that lightbulbs will burn out frequently; 1 to finally change the bulb; and 16 who decide that this solution is good enough for the time being.
+
+The open source development model is a great way for very creative people to produce fascinating software that breaks paradigms and establishes new standards for excellence. It may not be the best way, however, to finish boring jobs like fine-tuning a graphical interface, or making sure that the scheduling software used by executives is as bulletproof as possible.
+
+While the open development model has successfully tackled the problem of creating some great tools, of building a strong OS, and of building very flexible appliance applications like web browsers, it is a long way from winning the battle for the desktop. Some free source people say the desktop applications for average users are just around the corner and the next stop on the Free Software Express. Others aren't so sure.
+
+David Henkel-Wallace is one of the founders of the free software company Cygnus. This company built its success around supporting the development tools created by Stallman's Free Software Foundation. They would sign contracts with companies to answer any questions they had about using the free software tools. At first companies would balk at paying for support until they realized that it was cheaper than hiring in-house technical staff to do the work. John Gilmore, one of the cofounders, liked to say, "We make free software affordable."
+
+The company grew by helping chip manufacturers tune the FSF compiler, GCC, for their chip. This was often a difficult and arduous task, but it was very valuable to the chip manufacturer because potential customers knew they could get a good compiler to produce software for the chip. While Intel continued to dominate the desktop, the market for embedded chips to go into products like stoves, microwave ovens, VCRs, or other smart boxes boomed as manufacturers rolled out new chips to make it cheaper and easier to add smart features to formerly dumb boxes. The engineers at the companies were often thrilled to discover that they could continue to use GCC to write software for a new chip, and this made it easier to sell the chip.
+
+Cygnus always distributed the Source of their modifications to GCC, as the GNU General Public License demanded. This wasn't a big deal because the chip manufacturers wanted the software to be free and easy for everyone to use. This made Cygnus one of the clearinghouses for much of the information on how GCC worked and how to make it faster.
+
+Henkel-Wallace is quick to praise the power of publicly available source code for Cygnus's customers. They were all programmers, after all. If they saw something they didn't like in GCC, they knew how to poke around in its insides and fix it. That was their job.
+
+"[GCC] is a compiler tool and it was used by developers so they were smart enough. When something bothered someone, we fixed it. There was a very tight coupling," he said.
+
+He openly wonders, though, whether the average word processor or basic tool user will be able to do anything. He says, "The downside is that it's hard to transfer that knowledge with a user who isn't a developer. Let's say Quicken has a special feature for lawyers. You need to have a more formal model because the lawyers aren't developers. (We're fortunate in that regard.)"
+
+That is, lawyers aren't schooled enough in the guts of computer development to complain in the right way. A programmer could say,
+"GCC is optimizing away too much dead code that isn't really dead."
+Other folks in the GCC community would know what is going on and be able to fix it. A lawyer might just say, "Quicken screwed up my billing and had me billing twenty-six hours in a day." This wouldn't pinpoint the problem enough for people to solve it. The lawyer doesn't understand the inside of the software like the programmer.
+
+In situations like this, Henkel-Wallace believes that a corporate-style team may be the only one that can study the problems thoroughly enough to find solutions. Intuit, the manufacturer of Quicken, is well known for videotaping ordinary users trying its product for the first time. This allows the company to pinpoint rough spots in the program and identify places where it could be improved. This relentless smoothing and polishing has made the product one of the best-known and most widely used tools on desktops. It isn't clear that non-programmers could have accomplished the same quality by working together with the Source at their disposal.
+
+2~ The Source and the Language that We Speak
+
+There are deeper, more philosophical currents to the open source world. The personal computer industry is only a few decades old. While it has advanced rapidly and solved many problems, there is still very little understanding of the field and what it takes to make a computer easy to use. This has been the great struggle, and the free source world may be an essential part of this trip.
+
+Tim O'Reilly, the publisher of many books and a vocal proponent of the open source world, says, "We've gone through this period of thinking of programs as artifacts. A binary object is a thing. Open source is part of thinking of computers as a process." In other words, we've done a good job of creating computers you can buy off the shelf and software that can be bought in shrink-wrapped boxes, but we haven't done a good job of making it possible for people to talk to the machines.
+
+To a large extent, the process has been a search for a good language to use to communicate with the computer. Most of the recent development followed the work at Xerox PARC that created some of the first graphical user interfaces. Apple followed their lead and Microsoft followed Apple. Each bought into the notion that representing files with neat little pictures on a screen would give people an easier metaphor for interacting with their computers. Dragging a file to the trash was somehow easier than typing a cryptic command like "rm."
+
+In the 1980s, that sort of graphical thinking was considered brilliant. Pictures were prettier than words, so it was easy to look at the clean, pretty Macintosh screen and think it was easier to use just because it was easier to look at.
+
+But the pretty features merely hid a massive amount of complexity, and it was still hard to work with the machines. Don Norman, a human/computer interface engineer at Apple, once wrote a fascinating discussion of the company's design of their computer's on-off switch. He pointed out that the switch couldn't be a simple power switch that could cut the power on and off because the computer needed to orchestrate the start-up and shutdown procedure. It needed to close up files, store data safely, and make sure everything was ready to start up again.
+
+The design of the power switch was made even more complicated by the fact that it was supposed to work even when the computer crashed. That is, if bad programming jumbles the memory and screws up the central processor, the power switch is still supposed to shut down the machine. Of course, the computer couldn't even add two numbers together after it crashed, so it couldn't even begin to move through all the clerical work necessary to shut down the machine. The Macintosh on which I wrote this book can crash so badly that the power switch doesn't work, and I can only reset it by sticking a paper clip into a hidden hole.
+
+Norman's work shows how hard it can be to come up with a simple language that allows humans and computers to communicate about a task that used to be solved with a two-position light switch. This problem can be seen throughout the industry. One computer tutor told me,
+"I am so tired of telling people to shut down their computers by pushing the 'Start' button." Microsoft Windows places all of the features on a menu tree that grows out of one button labeled "Start." This may have been a great way to capture the potential to do new things that they felt they were selling, but it continues to be confusing to all new users of the machines. Why should they push start to stop it?
+
+The quest for this Source-level control can take many strange turns. By the middle of the 1980s, programmers at Apple realized that they had gone a bit too far when they simplified the Mac's interface. The visual language of pointing and clicking at icons may have been great for new users, but it was beginning to thwart sophisticated users who wanted to automate what they did. Many graphics designers would find themselves repeatedly doing the same steps to image files, and it was boring. They wondered, why couldn't the computer just repeat all their instructions and save them all that pointing and clicking?
+
+In a sense, the sophisticated Mac users were looking for the Source. They wanted to be able to write and modify simple programs that controlled their software. The problem was that the graphical display on the Mac wasn't really suited to the task. How do you describe moving the mouse and clicking on a button? How do you come up with a language that means "cut out this sample and paste it over here"? The actions were so visual that there weren't any words or language to describe them.
+
+This problem confounded Apple for the next 10 years, and the company is slowly finishing its solution, known as AppleScript. The task has not been simple, but it has been rewarding for many who use their Macintoshes as important links in data production chains. Apple included instructions for moving icons to locations, uploading files, changing the color of icons, and starting up other programs.
+
+The nicest extension was a trick that made the AppleScript "recordable." That is, you could turn on a recorder before stepping through the different jobs. The Mac would keep track of your actions and generate a program that would allow you to repeat what you were doing. Still, the results were far from simple to understand or use. Here's a simple snippet of AppleScript code that will select all files in one directory with the word "Speckle" in their title and open them up with another application:
+
+code{
+
+tell application "Finder"
+  set theFiles to every file in folder (rootPlus) whose name contains "Speckle"
+  with timeout of 600 seconds
+    repeat with aFile in theFiles
+      open (aFile) using (file "Macintosh HD: Make GIF (16 colors, Web)")
+    end repeat
+  end timeout
+end tell
+
+}code
+
+This Source can then be run again and again to finish a task. Making this tool available to users has been a challenge for Apple because it forces them to make programming easier. Many people learn AppleScript by turning on the recording feature and watching what happens when they do what they would normally do. Then they learn how to insert a few more commands to accomplish the task successfully. In the end, they become programmers manipulating the Source without realizing it.
+
+O'Reilly and others believe that the open source effort is just an extension of this need. As computers become more and more complex, the developers need to make the internal workings more and more open to users. This is the only way users can solve their problems and use the computers effectively.
+
+"The cutting edge of the computer industry is in infoware. There's not all that much juice in the kind of apps we wrote in the eighties and nineties. As we get speech recognition, we'll go even more in the direction of open source," he says.
+
+"There are more and more recipes that are written down. These are going to migrate into lower and lower layers of software and the computer is going to get a bigger and bigger vocabulary."
+
+That is, more and more of the Source is going to need to become transparent to the users. It's not just a political battle of Microsoft versus the world. It's not just a programmer's struggle to poke a nose into every corner of a device. It's about usability. More and more people need to write programs to teach computers to do what they need to do. Access to the Source is the only way to accomplish it.
+
+In other words, computers are becoming a bigger and bigger part of our lives. Their language is becoming more readily understandable by humans, and humans are doing a better job of speaking the language of computers. We're converging. The more we do so, the more important the Source will be. There's nothing that Microsoft or corporate America can do about this. They're going to have to go along. They're going to have to give us access to the Source.
+
+1~ People
+
+When I was in college, a friend of mine in a singing group would often tweak his audience by making them recite Steve Martin's "Individualist's Creed" in unison. Everyone would proclaim that they were different, unique, and wonderfully eccentric individuals together with everyone else in the audience. The gag played well because all the individualists were also deeply committed to living a life filled with irony.
+
+The free source world is sort of a Club Med for these kinds of individualists. Richard Stallman managed to organize a group of highly employable people and get them to donate their $50+-per-hour time to a movement by promising complete freedom. Everyone who showed up valued freedom much more than the money they could be making working for big companies. It's not a bit surprising that all of the free thinkers are also coming up with the same answers to life. Great minds think alike, right?
+
+This large collection of dedicated individualists is predisposed to moments of easy irony. Black is by far their favorite color. Long hair and beards are common. T-shirts and shorts are the rule when it gets warm, and T-shirts and jeans dominate when the weather turns cold. No one wears suits or anything so traditional. That would be silly because they're not as comfortable as T-shirts and jeans. Fitting in with the free thinkers isn't hard.
+
+The group is not particularly Republican or Democrat, but libertarian politics are easily understood and widely supported. Gun control is usually considered to be wrong, if only because the federal government will move on to controlling something else when they're finished with guns. ~{ In fact, the federal government already considers encryption software to be a munition and often tries to regulate it as such. }~ Taxes are bad, and some in the group like to dream of the day when they'll be driven away by the free-flowing, frictionless economy of the Internet. Folks like to say things like "Governments are just speed bumps on the information superhighway."
+
+The First Amendment is very popular and many are sure that practically everything they do with a computer is a form of speech or expression. The government shouldn't have the right to control a website's content because they'll surely come to abuse that power in the future. Some even rage against private plans to rate websites for their content because they're certain that these tools will eventually be controlled by those in power. To the most extreme, merely creating a list of sites with information unsuitable for kids is setting up the infrastructure for the future Nazis to start burning websites.
+
+Virtually everyone believes that strong codes and cryptography are essential for protecting a person's privacy online. The U.S. government's attempt to control the technology by regulating its export is widely seen as a silly example of how governments are trying to grab power at the expense of their citizens. The criminals already have the secret codes;
+why shouldn't the honest people be able to protect their data?
+
+Pornography or references to sex in the discussions are rare, if only because the world of the libido is off the main topic. It's not that sex isn't on the minds of the free software community, it's just that the images are so freely available that they're uninteresting. Anyone can go to www.playboy.com, but not everyone can write a recursively descending code optimizer. People also rarely swear. While four-letter words are common on Wall Street and other highly charged environments, they're rare in the technology world.
+
+Much of the community is made up of boys and men, or perhaps more correctly "guys." While there are some older programmers who continue to dig the excitement and tussle of the free source world, many are high school and college guys with plenty of extra time on their hands. Many of them are too smart for school, and writing neat software is a challenge for them. Older people usually get bogged down with a job and mortgage payments. It's hard for them to take advantage of the freedom that comes with the source code. Still, the older ones who survive are often the best. They have both deep knowledge and experience.
+
+The average population, however, is aging quickly. As the software becomes better, it is easier for working stiffs to bring it into the corporate environments. Many folks brag about sneaking Linux into their office and replacing Microsoft on some hidden server. As more and more users find a way to make money with the free software, more and more older people (i.e.,
+over 25) are able to devote some time to the revolution.
+
+I suppose I would like to report that there's a healthy contingent of women taking part in the free source world, but I can't. It would be nice to isolate the free software community from the criticism that usually finds any group of men. By some definition or legal reasoning, these guys must be practicing some de facto discrimination. Somebody will probably try to sue someone someday. Still, the women are scarce and it's impossible to use many of the standard explanations. The software is, after all, free. It runs well on machines that are several generations old and available from corporate scrap heaps for several hundred dollars. Torvalds started writing Linux because he couldn't afford a real version of UNIX. Lack of money or the parsimony of evil, gender-nasty parents who refuse to buy their daughters a computer can hardly be blamed.
+
+In fact, many of the people online don't even know the gender of the person on the other end. Oblique nicknames like "303," "nomad,"
+"CmdrTaco," or "Hemos" are common. No one knows if you're a boy or a girl online. It's almost like the ideal of a gender-free existence proposed by the unisex dreamers who wrote such stuff as "Free to Be You and Me," trying to convince children that they were free to pursue any dream they wanted. Despite the prevalence of these gender-free visions, the folks who ended up dreaming of a world where all the software was free turned out to be almost entirely men.
+
+Most of the men would like to have a few more women show up. They need dates as much as any guy. If anything, the crown of Evil Discriminator might be placed on the heads of the girls who scorn the guys who are geeks, dweebs, and nerds. A girl couldn't find a better ratio of men if she tried.
+
+This may change in the future if organizations like LinuxChix (www.linuxchix.org) have their way. They run a site devoted to celebrating women who enjoy the open source world, and they've been trying to start up chapters around the world. The site gives members a chance to post their names and biographical details. Of course, several of the members are men and one is a man turning into a woman. The member writes, "I'm transsexual (male-to-female, pre-op), and at the moment still legally married to my wife, which means that if we stay together we'll eventually have a legal same-sex marriage."
+
+Still, there's not much point in digging into this too deeply because the free source world rarely debates this topic. Everyone is free to use the software and contribute what they want. If the women want to come, they can. If they don't, they don't have to do so to fulfill some mandate from society. No one is sitting around debating whether having it all as a woman includes having all of the source code. It's all about freedom to use software, not dating, mating, or debating sexual roles in society.
+
+Racial politics, however, are more complicated. Much of the Linux community is spread out throughout the globe. While many members come from the United States, major contributors can be found in most countries. Linus Torvalds, of course, came from Finland, one of the more technically advanced countries in the world. Miguel de Icaza, the lead developer of the GNOME desktop, comes from Mexico, a country perceived as technically underdeveloped by many in the United States.
+
+Jon Hall, often called maddog, is one of the first members of corporate America to recognize that neat things were going on throughout the world of open source software. He met Torvalds at a conference and shipped him a Digital computer built around the Alpha chip when he found out that Torvalds wanted to experiment with porting his software to a 64-bit architecture. Hall loves to speculate about the spread of free software throughout the globe and says, "Who knows where the next great mind will come from? It could be Spain, Brazil, India, Singapore, or dare I say Finland?"
+
+In general, the free source revolution is worldwide and rarely encumbered by racial and national barricades. Europe is just as filled with Linux developers as America, and the Third World is rapidly skipping over costly Microsoft and into inexpensive Linux. Interest in Linux is booming in China and India. English is, of course, the default language, but other languages continue to live thanks to automatic translation mechanisms like Babelfish.
+
+This border-free existence can only help the spread of free source software. Many countries, claiming national pride, would rather use software developed by local people. Many countries explicitly distrust software coming from the United States because it is well known that the U.S. government tries to restrict security software like encryption at the request of its intelligence-gathering agencies. In November 1999, the German government's Federal Ministry of Finance and Technology announced a grant for the GNU Privacy Guard project. Why would a country want to send all of its money to Redmond, Washington, when it could bolster a local group of hackers by embracing a free OS? For everyone but the United States, installing a free OS may even be a patriotic gesture.
+
+2~ Icons
+
+The archetypes are often defined by prominent people, and no one is more central to the free source world than Richard Stallman. Some follow the man like disciples; others say that his strong views color the movement and scare away normal people. Everyone goes out of their way to praise the man and tell you how much they respect what he's done. Almost everyone will turn around and follow the compliment with a veiled complaint like, "He can be difficult to work with."
+Stallman is known for being a very unreasonable man in the sense that George Bernard Shaw used the word when he said, "The reasonable man adapts to nature. The unreasonable man seeks to adapt nature to himself. Therefore it is only through the actions of unreasonable men that civilization advances." The reasonable man would still be waiting on hold as the tech support folks in MegaSoft played with their Nerf footballs and joked about the weenies who needed help using their proprietary software.
+
+I often think that only someone as obsessed and brilliant as Stallman could have dreamed up the GNU General Public License. Only he could have realized that it was possible to insist that everyone give away the source code and still allow them to charge for it if they want. Most of us would have locked our brains if we found ourselves with a dream of a world of unencumbered source code but hobbled by the reality that we needed money to live. Stallman found himself in that place in the early days of the Free Software Foundation and then found a way to squeeze his way out of the dilemma by charging for CD-ROMs and printed manuals. The fact that others could still freely copy the information they got meant that he wasn't compromising his core dream.
+
+If Stallman is a product of MIT, then one opposite of him is the group of hackers that emerged from Berkeley and produced the other free software known as FreeBSD, NetBSD, and OpenBSD. Berkeley's computer science department always had a tight bond with AT&T and Sun and shared much of the early UNIX code with both.
+
+While there were many individuals at Berkeley who are well known among developers and hackers, no one stands out like Richard Stallman. This is because Stallman is such a strong iconoclast, not because Berkeley is the home of ne'er-do-wells who don't measure up. In fact, the pragmatism of some of the leaders to emerge from the university is almost as great as Stallman's idealism, and this pragmatism is one of the virtues celebrated by Berkeley's circle of coders. For instance, Bill Joy helped develop much of the early versions of the BSD before he went off to take a strong leadership role at Sun Microsystems.
+
+Sun has a contentious relationship with the free software world. It's far from a free software company like Red Hat, but it has contributed a fair number of lines of software to the open source community. Still, Sun guards its intellectual property rights to some packages fiercely and refuses to distribute the source with an official open source license. Instead, it calls its approach the "community source license" and insists that it's good enough for almost everyone. Users can read the source code, but they can't run off with it and start their own distribution.
+
+Many others from Berkeley followed Joy's path to Sun. John Ousterhout left his position as a professor at Berkeley in 1994 to move to Sun. Ousterhout was known for developing a fairly simple but powerful scripting tool known as TCL/Tk. One part of it, the Tool Control Language (TCL), was a straightforward English-like language that made it pretty easy for people to knit together different modules of code. The user didn't have to be a great programmer to work with the code because the language was designed to be straightforward. There were no complicated data structures or pointers. Everything was a string of ASCII text.
+
+The second part, the Tool kit (Tk), contained a variety of visual widgets that could be used to get input for and output from a program. The simplest ones were buttons, sliders, or menus, but many people wrote complicated ones that served their particular needs.
+
+The TCL/Tk project at Berkeley attracted a great deal of attention from the Net. Ousterhout, like most academics, freely distributed his code and did a good job helping others use the software. He and his students rewrote and extended the code a number of times, and this constant support helped create even more fans. The software scratched an itch for many academics who were smart enough to program the machines in their lab, but burdened by more important jobs like actually doing the research they set out to do. TCL/Tk picked up a wide following because it was easy for people to learn a small amount quickly. Languages like C required a semester or more to master. TCL could be picked up in an afternoon.
+
+Many see the pragmatism of the BSD-style license as a way for the Berkeley hackers to ease their trip into corporate software production. The folks would develop the way-out, unproven ideas using public money before releasing it with the BSD license. Then companies like Sun would start to resell it.
+
+The supporters of the BSD licenses, of course, don't see corporate development as a bad thing. They just see it as a way for people to pay for the extra bells and whistles that a dedicated, market-driven team can add to software.
+
+Ousterhout's decision to move to Sun worried many people because they thought it might lead to a commercialization of the language. Ousterhout answered these worries with an e-mail message saying that TCL/Tk would remain free, but that Sun would try to make some money on the project by selling development tools.
+
+"Future enhancements made toTcl andTk by my group at Sun, including the ports to Macs and PCs, will be made freely available to anyone to use for any purpose. My view, and that of the people I report to at Sun, is that it wouldn't work for Sun to try to takeTcl andTk proprietary anyway:
+someone (probably me, in a new job) would just pick up the last free release and start an independent development path. This would be a terrible thing for everyone since it would result in incompatible versions.
+
+"Of course, Sun does need to make money from the work of my team or else they won't be able to continue to support us. Our current plan is to charge for development tools and interesting extensions and applications. Balancing the public and the profitable will be an ongoing challenge for us, but it is very important both to me and to Sun to keep the support of the existing Tcl community," he wrote.
+
+In some respects, Ousterhout's pragmatism was entirely different from Stallman's. He openly acknowledged the need to make money and also admitted that Sun was leaving TCL/Tk free because it might be practically impossible to make it proprietary. The depth of interest in the community made it likely that a free version would continue to evolve. Stallman would never cut such a deal with a company shipping proprietary software.
+
+In other respects, many of the differences are only at the level of rhetoric. Ousterhout worked on producing a compromise that would leave TCL/Tk free while the sales of development tools paid the bills. Stallman did the same thing when he figured out a way to charge people for CD-ROMs and manuals. Ousterhout's work at Sun was spun off into a company called Scriptics that is surprisingly like many of the other free software vendors. The core of the product, TCL/Tk 8.1 at this time, is governed by a BSD-style license. The source code can be downloaded from the site. The company itself, on the other hand, sells a more enhanced product known as TCLPro.
+
+In many ways, the real opposite to Richard Stallman is not Bill Joy or John Ousterhout, it's Linus Benedict Torvalds. While Stallman, Joy, and Ousterhout are products of the U.S. academic system, Torvalds is very much an outsider who found himself trying to program in Europe without access to a decent OS. While the folks at Berkeley, MIT, and many U.S. universities were able to get access to UNIX thanks to carefully constructed licenses produced by the OS's then-owner, AT&T, students in Finland like Torvalds were frozen out.
+
+"I didn't have many alternatives. I had the commercial alternative [UNIX], which was way too expensive. It was really out of reach for a normal human being, and not only out of reach in a monetary sense, but because years ago commercial UNIX vendors weren't interested in selling to individuals. They were interested in selling to large corporations and banks. So for a normal person, there was no choice," he told VAR Business.
+
+When Linux began to take off, Torvalds moved to Silicon Valley and took a job with the supersecret research firm Transmeta. At Comdex in November 1999, Torvalds announced that Transmeta was working on a low-power computing chip with the nickname "Crusoe."
+
+There are, of course, some conspiracy theories. Transmeta is funded by a number of big investors including Microsoft cofounder Paul Allen. The fact that they chose to employ Torvalds may be part of a plan, some think, to distract him from Linux development. After all, version 2.2 of the kernel took longer than many expected, although it may have been because its goals were too ambitious. When Microsoft needed a coherent threat to offer up to the Department of Justice, Transmeta courteously made Torvalds available to the world. Few seriously believe this theory, but it is constantly whispered as a nervous joke.
+
+2~ Flames
+
+The fights and flamefests of the Internet are legendary, and the open source world is one of the most contentious corners of the Net. People frequently use strong words like "brain dead," "loser," "lame," "gross," and "stoooopid" to describe one another's ideas. If words are the only way to communicate, then the battle for mindshare means that those who wield the best words win.
+
+In fact, most of the best hackers and members of the free source world are also great writers. Spending days, weeks, months, and years of your life communicating by e-mail and newsgroups teaches people how to write well and get to the point quickly. The Internet is very textual, and the hard-core computer programmers have plenty of experience spitting out text. As every programmer knows, you're supposed to send e-mail to the person next to you if you want to schedule lunch. That person might be in the middle of something.
+
+Of course, there's a danger to making a sweeping generalization implying that the free source world is filled with great writers. The fact is that we might not have heard from the not-so-great writers who sit lurking on the Net. While some of the students who led the revolutions of 1968 were quite articulate, many of the tie-dyed masses were also in the picture. You couldn't miss them. On the Internet, the silent person is invisible.
+
+Some argue that the free software world has burgeoned because the silent folks embraced the freely available source code. Anyone could download the source code and play with it without asking permission or spending money. That meant that 13-year-old kids could start using the software without asking their parents for money. SCO Unix and Windows NT cost big bucks.
+
+This freedom also extended to programmers at work. In many companies, the computer managers are doctrinaire and officious. They often quickly develop knee-jerk reactions to technologies and use these stereotypes to make technical decisions. Free software like Linux was frequently rejected out of hand by the gatekeepers, who thought something must be wrong with the software if no one was charging for it. These attitudes couldn't stop the engineers who wanted to experiment with the free software, however, because it had no purchase order that needed approval.
+
+The invisible-man quality is an important part of the free software world. While I've described the bodies and faces of some of the better-known free source poster boys, it is impossible to say much about many of the others. The community is spread out over the Internet throughout the world. Many people who work closely on projects never meet each other. The physical world with all of its ways of encoding a position in a hierarchy is gone. No one can tell how rich you are by your shoes. The color of your skin doesn't register. It's all about technology and technological ideas.
+
+In fact, there is a certain degree of Emily Dickinson in the world. Just as that soul selected her own society and shut the door on the rest of the world, the free software world frequently splits and resplits into smaller groups. While there is some cross-pollination, many are happy to live in their own corners. OpenBSD, FreeBSD, and NetBSD are more separate countries than partners in crime. They evolve on their own, occasionally stealing ideas and source code to bridge the gap.
+
+Many writers have described some of their problems with making hay of the Silicon Valley world. Screenwriters and television producers often start up projects to tap into the rich texture of nerdlands only to discover that there's nothing that compelling to film. It's just miles and miles of steel-frame buildings holding acres and acres of cubicles. Sure, there are some Ping-Pong tables and pinball machines, but the work is all in the mind. Eyes want physical action, and all of the excitement in a free source world is in the ideas.
+
+But people are people. While there's no easy way to use the old standbys of race or clothes to discriminate, the technical world still develops ways to classify its members and place them in camps. The free software world has its own ways to distinguish between these camps.
+
+The biggest distinction may be between folks who favor the GPL and those who use the BSD-style license to protect their software. This is probably the biggest decision a free software creator must make because it controls whether others will be able to build commercial versions of the software without contributing the new code back to the project.
+
+People who embrace the GPL are more likely to embrace Richard Stallman, or at least less likely to curse him in public. They tend to be iconoclastic and individualistic. GPL projects tend to be more cultish and driven by a weird mixture of personality and ain't-it-cool hysteria.
+
+The people on the side of the BSD-style license, on the other hand, seem pragmatic, organized, and focused. There are three major free versions of BSD UNIX alone, and they're notable because they each have centrally administered collections of files. The GPL-protected Linux can be purchased from at least six major groups that bundle it together, and each of them includes packages and pieces of software they find all over the Net.
+
+The BSD-license folks are also less cultish. The big poster boys, Torvalds and Stallman, are both GPL men. The free versions of BSD, which helped give Linux much of its foundation, are largely ignored by the press for all the wrong reasons. The BSD teams appear to be fragmented because they are all separate political organizations who have no formal ties. There are many contributors, which means that BSD has no major charismatic leader with a story as compelling as that of Linus Torvalds.
+
+Many contributors could wear this mantle and many have created just as much code. But life, or at least the media's description of it, is far from fair.
+
+The flagship of the BSD world may be the Apache web server group, which contributed greatly to the success of the platform. This core team has no person who stands out as a leader. Most of the people on the team are fully employed in the web business, and several members of the team said that the Apache team was just a good way for the people to advance their day jobs. It wasn't a crusade for them to free source code from jail.
+
+The Apache web server is protected by a BSD-style license that permits commercial reuse of the software without sharing the source code. It is a separate program, however, and many Linux users run the software on Linux boxes. Of course, this devotion to business and relatively quiet disposition isn't always true. Theo de Raadt, the leader of the OpenBSD faction, is fond of making bold proclamations. In his interview with me, he dismissed the Free Software Foundation as terribly misnamed because you weren't truly free to do whatever you wanted with the software.
+
+In fact, it's easy to take these stereotypes too far. Yes, GPL folks can be aggressive, outspoken, quick-thinking, driven, and tempestuous. Sure, BSD folks are organized, thorough, mainstream, dedicated, and precise. But there are always exceptions to these rules, and the people in each camp will be quick to spot them.
+
+Someone might point out that Alan Cox, one of the steadfast keepers of the GPL-protected Linux kernels, is not particularly flashy nor given to writing long manifestos on the Net. Others might say that Brian Behlendorf has been a great defender of the Apache project. He certainly hasn't avoided defending the BSD license, although not in the way that Stallman might have liked. He was, after all, one of the members of the Apache team who helped convince IBM that they could use the Apache web server without danger.
+
+After BSD versus GPL, the next greatest fault line is the choice of editor. Some use the relatively simple vi, which came out of Berkeley and the early versions of BSD. Others cleave to Stallman's Emacs, which is far more baroque and extreme. The vi camp loves the simplicity. The Emacs fans brag about how they've programmed their version of Emacs to break into the White House, snag secret pictures of people in compromising positions, route them through an anonymous remailer, and negotiate for a big tax refund all with one complicated control-meta-trans keystroke.
+
+While this war is well known, it has little practical significance. People can choose for themselves, and their choices have no effect on others. GPL or BSD can affect millions; vi versus Emacs makes no big difference. It's just one of the endless gag controversies in the universe. If Entertainment Tonight were covering the free software world, they would spend hours cataloging which stars used vi and which used Emacs. Did Shirley MacLaine use vi or Emacs or even WordStar in a previous life?
+
+Some of the other fault lines aren't so crisp, but end up being very important. The amount of order or lack of order is an important point of distinction for many free source people, and there is a wide spectrum of choices available. While the fact that all of the source code is freely redistributable makes the realm crazy, many groups try to control it with varying amounts of order. Some groups are fanatically organized. Others are more anarchic. Each has a particular temperament.
+
+The three BSD projects are well known for keeping control of all the source code for all the software in the distribution. They're very centrally managed and brag about keeping all the source code together in one build tree. The Linux distributions, on the other hand, include software from many different sources. Some include the KDE desktop. Others choose GNOME. Many include both.
+
+Some of the groups have carefully delineated jobs. The Debian group elects a president and puts individuals in charge of particular sections of the distribution. Or perhaps more correctly, the individuals nominate themselves for jobs they can accomplish. The group is as close to a government as exists in the open software world. Many of the Open Source Initiative guidelines on what fits the definition of "open source" evolved from the earlier rules drafted by the Debian group to help define what could and couldn't be included in an official Debian distribution. The OpenBSD group, on the other hand, opens up much of the source tree to everyone on the team. Anyone can make changes. Core areas, on the other hand, are still controlled by leaders.
+
+Some groups have become very effective marketing forces. Red Hat is a well-run company that has marketing teams selling people on upgrading their software as well as engineering teams with a job of writing improved code to include in future versions. Red Hat packages their distribution in boxes that are sold through normal sales channels like bookstores and catalogs. They have a big presence at trade shows like LinuxExpo, in part because they help organize them.
+
+Other groups like Slackware only recently opened up a website. OpenBSD sells copies to help pay for its Internet bills, not to expand its marketing force. Some distributions are only available online.
+
+In many cases, there is no clear spectrum defined between order and anarchy. The groups just have their own brands of order. OpenBSD brags about stopping security leaks and going two years without a root-level intrusion, but some of its artwork is a bit scruffy. Red Hat, on the other hand, has been carefully working to make Linux easy for everyone to use, but they're not as focused on security details.
+
+Of course, this amount of order is always a bit of a relative term. None of these groups have strong lines of control. All of them depend upon the contributions of people. Problems only get solved if someone cares enough to do it.
+
+This disorder is changing a bit now that serious companies like Red Hat and VA Linux are entering the arena. These companies pay full-time programmers to ensure that their products are bug-free and easy to use. If their management does a good job, the open source software world may grow more ordered and actually anticipate more problems instead of waiting for the right person to come along with the time and the inclination to solve them.
+
+These are just a few of the major fault lines. Practically every project comes with major technical distinctions that split the community. Is Java a good language or another attempt at corporate control? How should the basic Apache web server handle credit cards? What is the best way to handle 64-bit processors? There are thousands of differences, hundreds of fault lines, scores of architectural arguments, and dozens of licenses. But at least all of the individuals agree upon one thing: reading the source code is essential.
+
+1~ Politics
+
+One of the great questions about the free source movement is its politics. The world loves to divide every issue into two sides and then start picking teams. You're either part of the problem or part of the solution. You're either for us or against us. You're either on the red team or the blue team.
+
+The notion of giving software and source code away isn't really a radical concept. People give stuff away all the time. But when the process actually starts to work and folks start joining up, the stakes change. Suddenly it's not about random acts of kindness and isolated instances of charity--it's now a movement with emotional inertia and political heft. When things start working, people want to know what this group is going to do and how its actions are going to affect them. They want to know who gets the credit and who gets the blame.
+
+The questions about the politics of the free source world usually boil down to a simple dilemma: some think it's a communist utopia and others think it's a free market nirvana. Normally, the two ideas sit on the opposite ends of the spectrum looking at each other with contempt and disdain. In the strange world of software, ideas aren't so easy to place. Anyone can duplicate software as many times as they want and it's still useful. The communist notion of sharing equally is much easier to achieve in this realm than in the world of, say, grain, which requires hard work in the sun to make it grow. On the other hand, the ease of exchange also means that people are able to swap and trade versions of software with little overhead or restriction. The well-greased marketplace in the free marketer's dreams is also easy to create. The act of giving a disk to a friend could either be a bona fide example of universal brotherhood or the vigorously competitive act of trying to win the hearts and minds of a software consumer. Take your pick.
+
+The nature of software also mitigates many of the problems that naturally occur in each of these worlds. There is no scarcity, so there is no reason why sharing has to be so complicated or orchestrated from the central planning committees of the Soviets. People just give. On the other hand, the lack of scarcity also limits the differences between the rich and the poor. There's no reason why everyone can't have the same software as the rich because it's so easy to duplicate. Folks who are into economic competition for the ego gratification of having a bigger sport utility vehicle than everyone else on the street are going to be disappointed.
+
+To some extent, the politics of the free source movement are such a conundrum that people simply project their wishes onto it. John Gilmore told me over dinner, "Well, it depends. Eric Raymond is sort of a libertarian but Richard Stallman is sort of a communist. I guess it's both." The freedom makes it possible for people to mold the movement to be what they want.
+
+Raymond has no problem seeing his libertarian dreams acted out in the free software community. He looked at the various groups creating their own versions of free source code and saw a big bazaar where merchants competed to provide the best solutions to computer users everywhere. People wrote neat stuff and worked hard to make sure that others were happy. It was competition at its finest, and there was no money or costs of exchange to get in the way.
+
+Most people quickly become keenly aware of this competition. Each of the different teams creating distributions flags theirs as the best, the most up-to-date, the easiest to install, and the most plush. The licenses mean that each group is free to grab stuff from the other, and this ensures that no one builds an unstoppable lead like Microsoft did in the proprietary OS world. Sure, Red Hat has a large chunk of the mindshare and people think their brand name is synonymous with Linux, but anyone can grab their latest distribution and start making improvements on it. It takes little time at all.
+
+Stallman's supposed communist impulse is a bit harder to characterize. He has made his peace with money and he's quick to insist that he's not a communist or an enemy of the capitalist state. He's perfectly happy when people charge for their work as programmers and he often does the same. But it's easy to see why people start to think he's something of a communist. One of his essays, which he insists is not strictly communist, is entitled "Why Software Should Not Have Owners."
+
+Some of his basic instincts sure look Marxist. The source code to a program often acts like the means of production, and this is why the capitalists running the businesses try to control it. Stallman wanted to place these means of production in the hands of everyone so people could be free to do what they wanted. While Stallman didn't rail against the effects of money, he rejected the principle that intellectual capital, the source code, should be controlled.
+
+Stallman stops well short of giving everything away to everyone. Copyrighting books is okay, he says, because it "restricts only the mass producers of copies. It did not take freedom away from readers of books. An ordinary reader, who did not own a printing press, could copy books only with pen and ink, and few readers were sued for that." In other words, the copyright rules in the age of printing only restricted the guy across town with a printing press who was trying to steal someone else's business. The emergence of the computer, however, changes everything. When people can copy freely, the shackles bind everyone.
+
+Communism, of course, is the big loser of the 20th century, and so it's not surprising that Stallman tries to put some distance between the Soviet and the GNU empires. He notes puckishly that the draconian effects of the copyright laws in America are sort of similar to life in the Soviet Union, "where every copying machine had a guard to prevent forbidden copying, and where individuals had to copy information secretly and pass it from hand to hand as samizdat." He notes, however, that "There is of course a difference: the motive for information control in the Soviet Union was political; in the U.S. the motive is profit. But it is the actions that affect us, not the motive. Any attempt to block the sharing of information, no matter why, leads to the same methods and the same harshness."
+
+Stallman has a point. The copyright rules restrict the ability of people to add, improve upon, or engage other people's work. The fair use rules that let a text author quote sections for comment don't really work in the software world, where it's pretty hard to copy anything but 100 percent of some source code. For programmers, the rules on source code can be pretty Soviet-like in practice.
+
+He's also correct that some companies would think nothing of locking up the world. A consortium of megalithic content companies like Disney and the other studios got the U.S. Congress to pass a law restricting tools for making copies. Ostensibly it only applied to computer programs and other software used to pirate movies or other software, but the effect could be chilling on the marketplace. The home video enthusiast who loves to edit the tapes of his child's birthday party needs many of the same functions as the content pirate. Cutting and pasting is cutting and pasting. The rules are already getting a bit more Soviet-like in America.
+
+But Stallman is right to distance himself from Soviet-style communism because there are few similarities. There's little central control in Stallman's empire. All Stallman can do to enforce the GNU General Public License is sue someone in court. He, like the Pope, has no great armies ready to keep people in line. None of the Linux companies have much power to force people to do anything. The GNU General Public License is like a vast disarmament treaty. Everyone is free to do what they want with the software, and there are no legal cudgels to stop them. The only way to violate the license is to publish the software and not release the source code.
+
+Many people who approach the free software world for the first time see only communism. Bob Metcalfe, an entrepreneur, has proved himself several times over by starting companies like 3Com and inventing the Ethernet. Yet he looked at the free software world and condemned it with a derisive essay entitled "Linux's 60's technology, open-sores ideology won't beat W2K, but what will?"
+
+Using the term "open sores" may be clever, but it belies a lack of understanding of some of the basic tenets. The bugs and problems in the software are open for everyone to see. Ideally, someone will fix them. Does he prefer the closed world of proprietary software where the bugs just magically appear? Does he prefer a hidden cancer to melanoma?
+
+The essay makes more confounding points equating Richard Stallman to Karl Marx for his writing and Linus Torvalds to Vladimir Lenin because of his aim to dominate the software world with his OS. For grins, he compares Eric Raymond to "Trotsky waiting for The People's ice pick" for no clear reason. Before this gets out of hand, he backpedals a bit and claims, "OK, communism is too harsh on Linux. Lenin too harsh on Torvalds [sic]." Then he sets off comparing the world of open source to the tree-hugging, back-to-the-earth movement.
+
+Of course, it's easy to see how the open source world is much different from the Soviet-style world of communism. That experiment failed because it placed the good of the many above the freedom of the individual. It was a dictatorship that did not shirk from state-sponsored terrorism or pervasive spying. It was no surprise, for instance, to discover that East German athletes were doped with performance-enhancing drugs without their knowledge. It was for the glory of Lenin or Marx or Stalin, or whoever held the reins. Does the country need someone to live in Siberia to mine for minerals? Does the country need land for vast collective farms? The state makes the call and people go.
+
+The Soviet Union didn't really fail because it clung too deeply to the notion that no one should own property. It failed when it tried to enforce this by denying people the fruits of their labor. If someone wanted to build something neat, useful, or inventive, they had better do it for the glory of the Soviet state. That turned the place into a big cesspool of inactivity because everyone's hard work was immediately stolen away from them.
+
+The free software world is quite different from that world. The GPL and the BSD licenses don't strip away someone's freedom and subjugate them to the state; they give them the source code and a compiler to use with it. Yes, the GPL does restrict the freedom of people to take the free source code and sell their own proprietary additions, but this isn't the same as moving them to Siberia.
+
+The Free Software State doesn't steal the fruits of someone's labor away from them. Once you develop the code, you can still use it. The GPL doesn't mean that only Torvalds can sit around his dacha and compile the code. You get to use it, too. In fact, one of the reasons that people cite for contributing to GPL projects is the legal assurance that the enhancements will never be taken away from them. The source will always remain open and accessible.
+
+Metcalfe's point is that communism didn't work, so the free software world will fail, too. He makes his point a bit clearer when he starts comparing the free software folks to tree-hugging environmentalists.
+
+"How about Linux as organic software grown in utopia by spiritualists?" he wonders. "If North America actually went back to the earth, close to 250 million people would die of starvation before you could say agribusiness. When they bring organic fruit to market, you pay extra for small apples with open sores--the Open Sores Movement."
+
+The problem with this analogy is that no one is starving with open source software. Data is not a physical good. Pesticides and fertilizers can boost crop yields, but that doesn't matter with software. If anything, free software ends up in even more people's hands than proprietary software. Everyone in the free software world has a copy of the image editing tool, GIMP, but only the richest Americans have a copy of the very expensive Adobe Photoshop.
+
+Of course, he has half a point about the polish of open source code. The programmers often spend more time adding neat features they like instead of making the code as accessible as possible. The tools are often designed for programmers by programmers. There isn't much of a quality assurance and human factors team trying to get them to engineer it so the other 95 percent of humanity can use it.
+
+But this problem is going away. Companies like Red Hat and Caldera have a profit motive in making the software accessible to all. The tools look nicer, and they are often just as presentable as the tools from the proprietary firms. The programmers are also getting more sensitive to these problems. In the past, the free software world was sort of an alternative Eden where programmers went to escape from the rest of programmatically challenged society. Now the world is open to free software and the programmers are more open to taking everyone's needs into account.
+
+The problem with all of Metcalfe's analogies is that he assumes the same rules that control the world of physical goods also govern the world of ideas. The software industry likes to pretend that this isn't true by packaging the software in big, empty boxes that look good on shelves. Swapping ideas is easy and costs little. Of course, the Soviet Union worried about the swapping of ideas and tried to control the press and all forms of expression. The free software movement is the exact opposite of this.
+
+In fact, it is much easier to see the free software world as the libertarian ideal of strong competition and personal freedom if you remember that it exists in the realm of ideas. The landscape is similar to universities, which usually boast that they're just big melting pots where the marketplace of ideas stays open all night. The best ideas gradually push out the worst ones and society gradually moves toward a total understanding of the world.
+
+Perhaps it's just not fair to characterize the politics of the open source or free software world at all. Terms like communism, libertarianism, liberalism, and Marxism all come from an age when large portions of society did not have easy access to ample supplies of food and housing.
+
+Data and information are not limited goods that can only be consumed by a limited group. One person or one million people can read a computer file and the marginal costs aren't very different. Sharing is cheap, so it makes sense to use it to all of its advantages. We're just learning how to use the low cost of widespread cooperation.
+
+Perhaps it's better to concentrate on the real political battles that rage inside the open source code community. It may be better to see the battle as one of GPL versus BSD instead of communist versus libertarian. The license debate is tuned to the Internet world. It sets out the debate in terms the computer user can understand.
+
+1~ Charity
+
+The open source movement is filled with people who analyze software, look for bugs, and search for fixes. These quiet workhorses are the foundation of the movement's success. One member of this army is David Baron, an undergraduate student who started out at Harvard in the fall of 1998 and found, like most students, that he had a bit of spare time. Some students turn to theater, some to the newspaper, some to carousing, some to athletic teams, some to drinking, and most choose one or more of the above. A few students search out some charitable work for their spare time and volunteer at a homeless shelter or hospital. Law students love to work at the free legal clinic for the poor. Baron, however, is a bit of a nerd in all of the good senses of the word. He's been working on cleaning up Netscape's open source browser project known as Mozilla, and he thinks it's a great act of charity.
+
+Baron spends his spare time poking around the Mozilla layout engine responsible for arranging the graphics, text, form slots, buttons, and whatnot in a consistent way. Graphic designers want all web browsers on the Net to behave in a consistent way and they've been agitating to try and get the browser companies (Netscape, Microsoft, iCab, WebTV, and Opera) to adhere to a set of standards developed by the W3C, the World Wide Web Consortium based at MIT. These standards spell out exactly how the browsers are supposed to handle complicated layout instructions like cascading style sheets.
+
+Baron looked at these standards and thought they were a good idea. If all web browsers handled content in the same way, then little buttons saying "Best Viewed with Microsoft IE" or "Best Viewed by Netscape" would disappear. The browser companies would be able to compete on features, not on their ability to display weirder web pages. It would cut the web designers out of the battle between Microsoft and Netscape.
+
+The standards also help users, especially users with different needs. He told me, "Standards (particularly CSS) encourage accessibility for users with all sorts of disabilities because they allow authors to use HTML as it was originally intended--as a structural markup language that can be interpreted by browsers that display things in nonvisual media or in very large fonts for users with poor vision. Changing the HTML on the web back to structural markup will also allow these browsers to produce sensible output."
+
+Handling standards like this is always a bit of a political problem for companies. Every developer tries to stick their fingers in the wind and see which standards will be important and which ones will fall by the wayside. Microsoft, Netscape, iCab, WebTV, and Opera have all been wondering about the cascading style sheets because they're sort of a pain in the neck. Ideally, the graphics designers will be able to come up with graphics rules for a set of web pages and they'll be applied using the rules set out by the reader.
+
+CSS is not about "total control by the author of the page," says Baron. "The basic idea of the cascade is that user preferences (through the browser's UI or possibly through a user CSS style sheet) and author suggestions (contained in CSS style sheets) combine to produce the formatting of the page."
+
+A modern catalog conglomerate, for instance, may have two branches. One would be aimed at middle-aged men who dote on their cars by giving them endless wax jobs and cleaning them forever. Another might be aimed at young mothers who dote on their children, in part by keeping the home as clean as could be. Normally, the catalog company would use different designers to create very different-looking catalogs. One would come with retro, hard-edged graphics covered with racing stripes, and the other with floral prints. What happens when these catalogs head to the web? Normally two designers would give two different websites two different looks.
+
+What if there is one cleaning product, say a car wheel cleaner, that appears in both catalogs? In the old days before cascading style sheets, both designers would have to do up each page separately. A well-designed system of cascading style sheets would let one web page for the product display correctly on both sites. It would pick up either the floral prints or the racing stripes automatically when either site called it up.
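+The catalog scenario can be sketched in a few lines of style-sheet code. This is a minimal illustration, not anything from an actual catalog site: the file names, class names, and rules here are all hypothetical. The product page carries only structural markup, something like `<div class="product"><h2>Wheel Cleaner</h2><p class="price">$7.95</p></div>`, and each site links that same page against its own style sheet, so the cascade supplies the look.
+
+```css
+/* racing.css -- hypothetical sheet for the car-enthusiast catalog:
+   hard edges and racing-stripe colors for the shared product markup */
+.product { font-family: sans-serif; border-left: 8px solid crimson; }
+.product h2 { text-transform: uppercase; letter-spacing: 2px; }
+
+/* floral.css -- hypothetical sheet for the home catalog:
+   the identical markup picks up a soft, floral look instead */
+.product { font-family: serif; background: #fff6f0; border: 1px dotted #c48; }
+.product h2 { font-style: italic; }
+```
+
+One page, two sheets, two catalogs: the designer never touches the product page itself, which is exactly the saving the old copy-each-page-twice workflow couldn't offer.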
+
+These standards are notoriously difficult to enforce. Armies around the world dream of turning out perfect privates that can be inserted into any conflict in any platoon without any retraining. Newspapers dream of having interchangeable reporters who can cover the White House or a cricket match in India. It's no wonder that the web industry wants the same thing.
+
+Baron told me, "I got interested in Mozilla because I'm interested in web standards." He noticed that a group known as the Web Standards Project was running a political campaign to pressure the browser companies to lay out pages the same way (www.webstandards.org).
+
+"A group of developers got together and said, 'The browsers aren't supporting the standards' and this makes it impossible to create pages," Baron explained. "If every browser supports the standards in a different way, then you have to design a different version of the site for each browser. Or, more realistically, web designers resort to hacks that make the page legible in all the 'major' browsers but not accessible to people with disabilities or people with older computers."
+
+Of course, it's one thing for a web designer or a web master to take up this call. Baron, however, was just a college freshman who framed this as volunteer work. When he happened upon the Web Standards Project, he heard their message and saw an itch that he wanted to scratch.
+
+"I want to see the standards supported correctly. Someone's got to do it," he told me. "I might as well be doing this instead of playing around and looking at websites all day. A lot of people do volunteer work, but not a lot of people get to do volunteer work at this level. It uses what I know pretty well. A lot of students who are very smart end up doing volunteer work which doesn't use their skills. When you can do volunteer work that uses what you know, it's even better."
+
+So Baron would download the latest versions of the Mozilla layout engine known as Gecko and play with web pages. He would create weird web pages with strange style sheets, load them up, and watch where they broke. When things went wrong, he would write up detailed bug reports and mail them off to the folks doing the coding. He was part of a quality control team that included some Netscape employees and a wide variety of other users on the Net.
+
+This community involvement was what Netscape wanted when it created Mozilla. They hoped that more people would take it upon themselves to test out the code and at least make complaints when things were going wrong. One hacker named James Clark, who isn't related to the founder of Netscape with the same name, actually kicked in a complete XML parser, a tool for taking apart documents written in XML, the extensible cousin of HTML that is capturing the attention of software and web designers.
+
+Baron is one of the few folks I met while writing this book who frames his work on an open source project as charity. Most devotees get into the projects because they offer them the freedom to mess with the source code. Most also cite the practical strengths of open source, like the relatively quick bug fixes and the stability of well-run projects. Most people like to distance themselves from the more political firebrands of the free software movement like Richard Stallman by pointing out that they're not really in it to bring about the second coming of the Communist Revolution. Few suggest that their work is sort of a gift of their time that might make the world a better place. Few compare their work to the folks cleaning up homeless shelters or hospitals. Most don't disagree when it is pointed out to them, but most free software hackers don't roll out the charitable rhetoric to explain what they're up to.
+
+This may just be a class difference. Baron is a sophomore, as this is written, at Harvard and Harvard is, by definition, a finishing school for the upper crust. Even the vast sea of kids from middle-class families and public schools end up talking and acting as if they came out of Choate or Exeter by the end of their time at Harvard. They pick up the Kennedyesque noblesse oblige that somehow commands the rich and fortunate to be out helping the poor with very public acts of assistance. It just sort of seeps into all of those Harvard kids.
+
+Most of the free software members, on the other hand, are kind of outcasts. The hackers come from all parts of the globe and from all corners of the social hierarchy, but few of them are from the beautiful people who glide through life on golden rails. The programmers usually have their heads in strange, abstruse mathematical clouds instead of the overstuffed clouds of Olympus. They're concerned with building neat software and spinning up wonderful abstract structures that interlock in endlessly repeating, elegant patterns. If they were interested in power or social prestige, they wouldn't be spending their nights in front of a terminal waiting for some code to compile.
+
+But if the free software movement doesn't play the charitable card very often, it doesn't mean that the work is so different from that of the homeless shelters. In fact, so little money changes hands that there is rarely anything for people to deduct on their taxes. Donations of time don't count. Maybe a few companies could write something off their books, but that's about it.
+
+In fact, Baron is right that work like his can make a difference for people. Software is a growing part of the cost of a computer today. In low-end PCs, the Microsoft OS may cost more than the processor or the memory. A free OS with a free web browser that works correctly can help thousands of schools, homeless shelters, hospitals, and recreation centers get on the web at a lower cost.
+
+The free software charity is often a bit cleaner. Bill Gates and many of the other Microsoft millionaires aren't shy about giving away real money to schools and other needy organizations. Melinda Gates, Bill's wife, runs a charitable foundation that is very generous. In 1999, for instance, the foundation made a very real gift of tuition money for minority students. The foundation has also given millions of dollars to help fund medical research throughout the globe.
+
+Still, at other times, there has been a sly edge to the Gates benevolence. In some cases, the company gives away millions of dollars in Microsoft software. This helps get kids used to Microsoft products and acts like subtle advertising. Of course, there's nothing new about this kind of charity. Most corporations insist that they receive some publicity for their giving. It's how they justify the benevolence to their shareholders.
+
+The value of giving away copies of software is difficult to measure. One million copies of Windows 95 might retail for about $100 million, but the cost to Microsoft is significantly lower. CD-ROMs cost less than one dollar to duplicate, and many schools probably received one CD-ROM for all of their machines. Supporting the users is a real cost, but it can be controlled and limited by restricting the number of employees dedicated to particular phone lines. Determining the value of all of this benevolence must be a tough job for the tax accountants. How Microsoft chose to account for its donations is a private matter between Gates, the Internal Revenue Service, and his God.
+
+Consider the example of an imaginary proprietary software company called SoftSoft that gives away one million copies of its $50 WidgetWare product to schools and charities across the United States. This is, in many ways, generous because SoftSoft only sells 500,000 copies a year, giving them gross revenues of $25 million.
+
+If SoftSoft values the gift at the full market value, they have a deduction of $50 million, which clearly puts them well in the red and beyond the reach of taxes for the year. They can probably carry the loss forward and wipe out next year's earnings, too.
+
+The accountants may not choose to be so adventurous. The IRS might insist that they deduct the cost of the goods given, not their potentially inflated market price. Imagine that the company's cost for developing WidgetWare came to $21 million. If there were no gift, they would have a nice profit of $4 million. SoftSoft could split the development costs of $21 million between all of the 1.5 million units that are shipped. Instead of deducting the market value of the software, it would only deduct the costs allocated to it. Still, that means they get a $14 million deduction, which is still far from shabby.
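The deduction arithmetic above can be sketched in a few lines; every figure is the text's hypothetical SoftSoft number, not real accounting:

```python
# Hypothetical SoftSoft figures from the text; a sketch of the
# deduction arithmetic, not tax advice.
units_sold = 500_000
units_donated = 1_000_000
price = 50                  # retail price per copy, in dollars
dev_cost = 21_000_000       # total development cost

revenue = units_sold * price                        # $25 million gross
deduction_at_market = units_donated * price         # $50 million write-off
cost_per_unit = dev_cost / (units_sold + units_donated)   # $14 per copy
deduction_at_cost = units_donated * cost_per_unit   # $14 million write-off

print(revenue, deduction_at_market, round(deduction_at_cost))
```

Valuing the gift at retail wipes out the year's income several times over; valuing it at allocated cost still yields a deduction more than three times the profit the company would otherwise report.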
+
+More conservative companies may come up with smaller deductions based upon the cost of duplicating the additional copies and the cost of supporting the schools and charities. Strict accounting measures would be the most honest, but it's hard to know what companies do and what they should do.
+
+Free software, of course, avoids all that paperwork and accounting. The software costs nothing, so giving it away generates no deduction. There's no need for complicated cost accounting or great press releases. It just sits on the web server and people download it.
+
+Of course, it's possible to start counting up downloads and doing some multiplication to come up with outrageous numbers. Windows NT can sell for between $200 and $1,000. There are about 3.7 million web servers running Apache, according to the latest Netcraft poll. If 1 percent qualify as charitable sites, then 37,000 sites are served by Apache. Of course, not all sites sit on separate machines. To correct for this, assume that each machine hosts 10 sites, leaving only 3,700 machines running Apache for charities. At the top of the NT price range, that's still about $3.7 million in donations.
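The back-of-the-envelope estimate is easy to reproduce; every figure here is the author's rough assumption, not measured data:

```python
# Rough "donated value" estimate for Apache; every figure is a guess.
apache_sites = 3_700_000     # sites in the Netcraft survey
charitable_share = 0.01      # guess: 1 percent serve charities
sites_per_machine = 10       # guess: ten sites share one machine
nt_price = 1_000             # top of the Windows NT price range

charitable_sites = apache_sites * charitable_share   # 37,000 sites
machines = charitable_sites / sites_per_machine      # 3,700 machines
donated_value = machines * nt_price                  # $3.7 million
print(int(machines), int(donated_value))
```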
+
+But numbers like this can't really capture the depth of the gift. Linus Torvalds always likes to say that he started writing Linux because he couldn't afford a decent OS for his machine so he could do some experiments. Who knows how many kids, grown-ups, and even retired people are hacking Linux now and doing some sophisticated computer science experiments because they can? How do we count this beneficence?
+
+Free software essentially removes the red tape and the institutional character of charity. There are no boards. There is no counting of gifts. There's no fawning or flattering. There are no new J. Henry P. Plutocrat Wings for the Franklin P. Moneysucker Museum of Philanthropy. It's just a pure gift with no overhead.
+
+There is also a smooth efficiency to the world of free software charity. My economics professor used to joke that gifts were just very inefficient. Grandmas always bought unhip sweaters for their grandkids. Left on their own, children would give candy and stuffed animals to their parents on their birthdays and Christmas. All of these bad choices must be returned or thrown away, ruining the efficiency of the economy. The professor concluded by saying, "So, guys, when you go out on the date, don't bother with the flowers. Forget about the jewelry. Just give her cash."
+
+Free source software, of course, doesn't fit into many of the standard models of economic theory. Giving the stuff away doesn't cost much money, and accepting it often requires a bit of work. The old rules of gift giving and charity don't really apply.
+
+Imagine that some grandmother wrote some complicated software for computing the patterns for knitting sweaters. Some probably have. If they give the source code away, it ends up in the vast pool of free source code and other knitters may find it. It might not help any grandchildren, at least not for 20 or 30 years, but it will be moving to the place where it can do the most good with as little friction as possible. The software hacked by the kids, on the other hand, would flow from child to child without reaching the parents. The software tools for generating dumb jokes and sorting bubble gum cards would make a generation of kids happy, and they would be able to exchange it without their parents or grandparents getting in the way.
+
+The inefficiencies of gift-giving can often affect charities, which have less freedom to be picky than grandchildren. Charities can't look a gift horse in the mouth. If a company wants to give a women's shelter 1,000 new men's raincoats, the shelter will probably take them. Refusing them can offend potential contributors who might give them something of value in the next quarter.
+
+Free source code has none of these inefficiencies. Websites like Slashdot, Freshmeat, Linux Weekly News, LinuxWorld, KernelTraffic, and hundreds of other Linux or project-specific portals do a great job of moving the software to the people who can get value from it. People write the code, and then other folks discover the value in it. Bad or unneeded code isn't foisted on anyone.
+
+Free software also avoids being painted as a cynical tax scheme. It is not uncommon for drug manufacturers to donate some surplus pills to disaster relief operations. In some cases, the manufacturers clear their shelves of pills that are about to expire and thus about to be destroyed. They take a liability and turn it into a tax-deductible asset. This may be a good idea when the drugs are needed, but they are often superfluous. In many cases, the drugs just end up in a landfill. The relief organizations accept millions of dollars in drugs to get a few thousand dollars'
+worth of ones they really need.
+
+2~ Charitable Open Source Organizations
+
+Of course, there are some open source charities. Richard Stallman's Free Software Foundation is a tax-exempt 501(c)(3) charity that raises money and solicits tax-deductible donations. This money is used to pay for computers, overhead, and the salaries of young programmers who have great ideas for free software. The Debian Project also has a charitable arm known as Software in the Public Interest that raises money and computer equipment to support the creation of more free software.
+
+These organizations are certainly part of the world of tax deductions, fund-raisers, and the charity-industrial complex. The Free Software Foundation, for instance, notes that you can arrange for all or part of your gift to the United Way to go to the Foundation.
+
+But there are differences, too. Stallman, for instance, is proud of the fact that he accepts no salary or travel reimbursement from the Free Software Foundation. He works 2 months a year to support himself and then donates the other 10 months a year to raising money to support other programmers to work on Foundation projects.
+
+Their budgets are pretty manageable as well. Perens notes that Debian's budget is about $10,000 a year, and this is spent largely on distributing the software. Servers that support plenty of traffic cost a fair amount of money, but the group does get donations of hardware and bandwidth. The group also presses a large number of CD-ROMs with the software.
+
+The groups also make a point of insisting that good code is more valuable than money. The Free Software Foundation, for instance, lists projects that need work next to its call for money. Volunteers are needed to write documentation, test software, organize the office, and also write more code.
+
+Jordan Hubbard, the director of the FreeBSD project, says that money is not always the best gift. "I'll take people over six-digit sums of donations almost any day," he says, and explains that FreeBSD is encouraging companies to donate some of the spare time of its employees. He suggests that companies assign a worker to the FreeBSD project for a month or two if there is time to spare.
+
+"Employees also give us a window into what that company's needs are. All of those co-opted employees bring back the needs of their jobsite. Those are really valuable working relationships," he continues.
+
+Hubbard has also found that money is often not the best motivator. Hardware, it turns out, often works well at extracting work out of programmers. He likes to ship a programmer one of the newest peripherals like a DVD drive or a joystick and ask him to write a driver for the technology in exchange. "It's so much more cost-effective to buy someone a $500 piece of hardware, which in turn motivates him to donate thousands of dollars worth of work, something we probably couldn't pay for anyway," he says.
+
+Money is still important, however, to take care of all the jobs that can't be accomplished by piquing someone's curiosity. "The area we need the most contributions for are infrastructure. Secretarial things are no fun to do and you don't want to make volunteers do it," he says.
+
+All of these charitable organizations are bound to grow in the next several years as the free software movement becomes more sophisticated. In some cases it will be because the hackers who loved playing with computers will discover that the tax system is just another pile of code filled with bugs looking to be hacked. In most cases, though, I think it will be because large companies with their sophisticated tax attorneys will become interested. I would not be surprised if a future version of this book includes a very cynical treatment of the tax habits of some open source organizations. Once an idea reaches a critical mass, it is impossible to shield it from the forces of minor and major corruption.
+
+2~ Gifts as a Cultural Imperative
+
+Marcel Mauss was an anthropologist who studied the tribes of the northwestern corner of North America. His book /{The Gift: The Form and Reason for Exchange in Archaic Societies}/ explained how tribes like the Chinook, the Tlingit, and the Kwakiutl would spend the months of the fall giving and going to huge feasts. Each year, the members of the tribe would take the bounty of the harvest and throw a feast for their friends. The folks who attended might have a good time, but they were then obligated to give a feast of equal or greater value the next year.
+
+Many anthropologists of the free software world like to draw parallels between these feasts, known as potlatches in one tribe, and the free-for-all world of free source software. The hackers are giving away source code in much the same way that the tribe members gave away salmon or deer meat.
+
+The comparison does offer some insight into life in the free software community. Some conventions like LinuxExpo and the hundreds of install-fests are sort of like parties. One company at a LinuxExpo was serving beer in its booth to attract attention. Of course, Netscape celebrated its decision to launch the Mozilla project with a big party. They then threw another one at the project's first birthday.
+
+But the giving goes beyond the parties and the conferences. Giving great software packages creates social standing in much the same way that giving a lavish feast will establish you as a major member of the tribe. There is a sort of pecking order, and the coders of great systems like Perl or Linux are near the top. The folks at the top of the pyramid often have better luck calling on other programmers for help, making it possible for them to get their jobs done a little better. Many managers justify letting their employees contribute to the free software community because they build up a social network that they can tap to finish their official jobs.
+
+But there's a difference between tribal potlatch and free software. The potlatch feasts built very strong individual bonds between people in the same tribe who knew each other and worked together. The gifts flowed between people who were part of each other's small community.
+
+The free source world, on the other hand, is a big free-for-all in both senses of the phrase. The code circulates for everyone to grab, and only those who need it dig in. There's no great connection between programmer and user. People grab software and take it without really knowing to whom they owe any debt. I only know a few of the big names who wrote the code running the Linux box on my desk, and I know that there are thousands of people who also contributed. It would be impossible for me to pay back any of these people because it's hard to keep them straight.
+
+This vast mass of contributors often negates the value and prestige that comes from writing neat code. Since no one can keep track of it all, people tend to treat all requests from unknown people equally. The free source world tends to have many equals, just because there's no hierarchy to make it easy for us to suss out each other's place. Corporations have titles like executive vice president and super executive vice president. The military labels people as private, sergeant, or major. There are no guideposts in the free software world.
+
+Still, good contributions pay off in good reputations. A bug fix here and a bug fix there might not build a name, but after a year or two they pay off. A good reputation opens doors, wins jobs, creates friendships, and makes it possible to interest people in new projects.
+
+The free source world is also a strange mirror image of the hierarchies that emerge after a season of tribal potlatch ceremonies. In the tribes, those who receive great gifts are required to return the favor with even greater ones. So the skillful hunters and gatherers give good gifts and receive something better in return. The rich get richer by giving away their bounty. The less skillful end up at the bottom of the list. The free source world, on the other hand, spreads its riches out to everyone. There are many modest programmers who enjoy the source code of the great programmers, and there may be billions of non-programmers who also tag along. Many major websites run on free OSs alone. Who knows which cheap Internet tools will come along in the future? The poor get lifted along at no great cost to the economy. The charity is broadcast to everyone, not narrowcast to a few.
+
+The efficiency goes deeper. There's a whole class of products for the home that are much fancier and more sophisticated than what people need. One company near me sells perfectly usable nonstick pans for $2.95. A fancy department store sells hefty, industrial-grade pans that do the same thing for more than $100. Why? They make great gifts for people getting married. This wedding-industrial complex adds needless accoutrements, doodads, and schmaltz just to give products enough cachet to make them great gifts.
+
+The free source world, on the other hand, has no real incentive to generate phony, chrome-plated glitz to make its gifts acceptable or worthy enough of giving. People give away what they write for themselves, and they tend to write what they need. The result is a very efficient, usable collection of software that helps real people solve real problems. The inefficiency of the wedding-industrial complex, the Father's Day-industrial complex, the Christmas-industrial complex, and their need to create acceptable gifts are gone.
+
+Of course, there's also a certain element of selfishness to the charity. The social prestige that comes from writing good free software is worth a fair amount in the job market. People like to list accomplishments like "wrote driver" or "contributed code to Linux Kernel 2.2" on their résumé. Giving to the right project is a badge of honor because serious folks doing serious work embraced the gift. That's often more valuable and more telling than a plaque or an award from a traditional boss.
+
+Rob Newberry is a programmer at Group Logic, a small software house in northern Virginia where I once did some consulting. His official title is "Director of Fajita Technology," and he is sometimes known as "The Dude," a reference to a character in the movie /{The Big Lebowski}/. Technically, his job is building and supporting their products, which are used to automate the prepress industry. One of their products, known as Mass Transit, moves files over the Internet and runs a number of automated programs on them before passing them along. Printers use it to take in new jobs, massage the data to their needs by performing tasks like color separation, and then send the jobs to the presses. This work requires a deep understanding of network protocols like FTP and NFS.
+
+Newberry is also a Linux fan. He reads the Kernel list but rarely contributes much to it. He runs various versions of Linux around the house, and none of them were working as well as he wanted with his Macintosh. So he poked around in the software, fixed it, and sent his code off to Alan Cox, who watches over the part of the kernel where his fixes belonged.
+
+"I contributed some changes to the AppleTalk stack in the Linux kernel that make it easier for a Linux machine to offer dial-in services for Macintosh users," he said in an article published in Salon. "As it stands, Mac users have always been able to dial into a Linux box and use IP protocols, but if they wanted to use AppleTalk over PPP, the support wasn't really there."
+
+Newberry, of course, is doing all of this on his own time because he enjoys it. But his boss, Derick Naef, still thinks it's pretty cool that he's spending some of his programming energy on a project that won't add anything immediately to the bottom line.
+
+"He's plugged into that community and mailing lists a lot more," explains Naef. "There are other people here who are, too, but there are all these tools out there in the open source world. There's code out there that can be incorporated into our computer projects. It can cut your development costs if you can find stuff you can use."
+
+Of course, all of this justification and rationalization aren't the main reason why Newberry spends so much of his time hacking on Linux. Sure, it may help his company's bottom line. Sure, it might beef up his résumé by letting him brag that he got some code in the Linux kernel. But he also sees this as a bit of charity.
+
+"I get a certain amount of satisfaction from the work . . . but I get a certain amount of satisfaction out of helping people. Improving Linux and especially its integration with Macs has been a pet project of mine for some time," he says. Still, he sums up his real motivation by saying, "I write software because I just love doing it." Perhaps we're just lucky that so many people love writing open source software and giving it away.
+
+1~ Love
+
+It's not hard to find bad stories about people who write good code. One person at a Linux conference told me, "The strange thing about Linus Torvalds is that he hasn't really offended everyone yet. All of the other leaders have managed to piss off someone at one time or another. It's hard to find someone who isn't hated by someone else." While he meant it as a compliment for Torvalds, he sounded as if he wouldn't be surprised if Torvalds did a snotty, selfish, petulant thing. It would just be par for the course.
+
+There are thousands of examples of why people in the open source community hate each other and there are millions of examples of why they annoy each other. The group is filled with many strong-minded, independent individuals who aren't afraid to express their opinions. Flame wars spring up again and again as people try to decide technical questions like whether it makes more sense to use long integers or floating point numbers to hold a person's wealth in dollars.
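The integer-versus-float question isn't academic; a brief sketch of why binary floating point worries people who count money:

```python
# Binary floating point cannot represent most decimal fractions exactly,
# which is why many programmers store money as integer cents instead.
total_float = 0.10 + 0.20
print(total_float == 0.30)    # prints False: the sum is 0.30000000000000004

total_cents = 10 + 20         # the same sum, kept in integer cents
print(total_cents == 30)      # prints True
```

A tenth-of-a-cent discrepancy is trivia to most people and a catastrophe to an accountant, which is exactly the kind of detail these flame wars are fought over.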
+
+Of course, hate is really too strong a word. If you manage to pin down some of the people and ask them, point blank, whether they really hate someone, they'll say, "No." They really just don't like a few of that person's technical decisions. These points of friction fester and turn into what might more commonly be called hate.
+
+These technical debates are terrible tar pits for the community, and they eat up its energy. The debates turn frustrating because they have the strange distinction of being both technically important and utterly trivial. Everyone would like to just sail through life and not worry about tiny details like the type of integer used in a calculation. There are millions of these decisions that take up time that might be better spent imagining grand dreams of a seamless information noosphere that provides the wisdom of the ages in a simple graphical interface. But every programmer learns that it's the details that count. NASA lost a spacecraft, the Mars Climate Orbiter, because one team used English units where the software expected metric. So the work needs to get done.
+
+Occasionally, the fights get interesting. Eric Raymond and Bruce Perens are both great contributors to the open source movement. In fact, both worked together to try to define the meaning of the term. Perens worked with the community that creates the Debian distribution of Linux to come up with a definition of what was acceptable for the community. This definition morphed into a more official version used by the Open Source Initiative. When they had a definition they liked, they published it and tried to trademark the term "open source" in order to make sure it was applied with some consistency. It should be no surprise that all of that hard work together eventually drove them apart.
+
+In early April 1999, soon after Apple Computer joined the free source world by releasing some of the source code to their operating system, Raymond and Perens found themselves at each other's throats. Raymond had worked closely with Apple on developing their license and blessed it soon after it emerged. Apple was so pleased that it put Raymond's endorsement on their web page. The decision was a big coup for the open source movement and strong proof that corporations were embracing the movement. Big executives from big companies like Apple were knocking on the open source movement's door. Raymond thought the victory would bring more attention to the cause.
+
+Others thought Raymond had given away the farm. Perens and many others looked at the license and spotted a small clause that seemed dangerous. The license for their open source code could be withdrawn at a moment's notice. Someone pointed out that it would be a real bummer to do lots of work on Apple's system and then find out that some neb-nosed lawyer at Apple could just pull the license. No one wanted to take that chance. Flame wars erupted and Perens started publicly disagreeing with Raymond. To Perens, the Apple license just wasn't open enough to be called "open source."
+
+Raymond didn't take this too well. He had worked hard to build a strong coalition. He had worked hard to convince corporations that open source was much more than a way for teenagers to experiment with communism while they were living on their parents' dime. He wanted the open source world to be a smoothly running, suave machine that gracefully welcomed Apple into its fold. Now his buddy Bruce Perens was effectively aping Lloyd Bentsen's famous putdown of Dan Quayle: "I've known open source; I've worked with open source; and Eric, this license isn't open source." His whole announcement was supposed to unroll with the clockwork precision of great corporate PR, and now someone had lobbed a grenade.
+
+Raymond fired back a terse e-mail that said, "If you ever again behave like that kind of disruptive asshole in public, insult me, and jeopardize the interests of our entire tribe, I'll take it just as personally and I will find a way to make you regret it. Watch your step."
+
+This note rattled Perens, so he started sending copies around the Net. Then he got serious and called the police. Officially, he was publicizing the disagreement to preserve his health: Raymond is quite vocal about his support for the Second Amendment, so the phrase "Watch your step" could be taken as a veiled threat of violence.
+
+Perens defended his decision to call the police and told me afterward, "When I don't like something, I write about it. Well, gee, maybe Eric was threatening to just write about me." In the signature at the bottom of the message was a Thomas Jefferson quote, which claimed the pistol was the best form of exercise. The next day, Perens decided that he was overreacting a bit and posted a new note: "Eric says he only meant to threaten me with 'defamation of character,' not with any kind of violence. Thus, I think I'll just let this issue drop now."
+
+When I asked him about the matter several months later after tempers had cooled, Raymond said that the disagreement began several months before the Apple event when Perens and Raymond clashed over whether the book publisher O'Reilly should be allowed to use the term "open source" in the name of their conference. "He was *flaming*, and not the initiative itself but a critical supporter," says Raymond.
+
+"Sometime back I had to accept Bruce's resignation from the OSI because he was flaming public allies on a mailing list. If you're going to go public, you can't run your mouth like a rabid attack dog. When the APSL [Apple Public Source License] came along, he convinced people that everybody should go mug Eric and the OSI," Raymond said. The episode caused still more grief.
+
+Perens, for his part, said, "I was disappointed in Eric because certainly open source is about freedom of speech. He should be able to tolerate a dissenting voice. The entire argument was about my not deferring to his leadership. He felt that my dissent was damaging. The actual result was that Apple took my criticism seriously and took all of the suggestions."
+
+Raymond is still critical. He says, "Apple was more diplomatic to Bruce in public than they should have been. The truth is that his meddling got the people inside Apple who were pushing open source into considerable political trouble, and they considered him a disruptive asshole. Their bosses wanted to know, quite reasonably, why Apple should bother trying to do an open source license if all it meant was that they'd be attacked by every flake case with an agenda. By undermining OSI's status as trusted representatives of the whole community, Bruce nearly scuttled the whole process."
+
+For now, the two work apart. Perens says he'll make up with Raymond, but doesn't see it happening too soon. Raymond is happy to focus on the future of open source and write more analysis of the movement. They've been separated, and the tempers are cool.
+
+Giving away software seems like an entirely altruistic act. Writing code is hard work, and simply casting it onto the Net with no restrictions is an outright gift, especially if the code took months or years to write. This image of selflessness is so strong that many people assume that the free software world is inhabited by saints who are constantly doing nice things for each other. It seems like a big love-in.
+
+But love is more than a many splendored thing. It's a strange commodity that binds us together emotionally in ways that run deeper than placid pools reflecting starry eyes. After the flush of infatuation, strong love lasts if and only if it answers everyone's needs. The hippie culture of free love lasted only a few years, but the institution of marriage lives on despite battle scars and wounds that are almost mortal. Half may fail, but half succeed.
+
+The free software community also flourishes by creating a strong, transcendent version of love and binding it with a legal document that sets out the rules of the compact. Stallman wrote his first copyleft virus more than 15 years before this book began, and the movement is just beginning to gain real strength. The free software world isn't just a groovy love nest, it's a good example of how strong fences, freedom, and mutual respect can build strong relationships.
+
+The important thing to realize is that free software people aren't any closer to being saints than the folks in the proprietary software companies. They're just as given to emotion, greed, and the lust for power. It's just that the free software rules tend to restrain their worst instincts and keep those instincts from turning into action.
+
+The rules are often quite necessary. E-mail and the news services give people the ability to vent their anger quickly. Many of the programmers are very proficient writers, so they can tear each other apart with verbal scalpels. The free source world is cut up into hundreds if not thousands of political camps, and many dislike each other immensely. One group begged me not to ask them questions about another group because just hearing someone's name brought up terrible memories of pain and discord.
+
+Despite these quick-raging arguments, despite the powerful disagreements, despite the personal animosities, the principles of the public licenses keep everything running smoothly. The people are just as human as the rats running around in the maze of the proprietary software business, but the license keeps them in line.
+
+The various public licenses counter human behavior in two key ways. First, they encourage debate by making everyone a principal in the project. Everyone has a right to read, change, and of course make comments about the software. Making everything available opens the doors for discussion, and discussion usually leads to arguments.
+
+But when the arguments come to blows, as they often do, the second effect of free source licenses kicks in and moderates the fallout by treating everyone equally. If Bob and John don't like each other, then there's still nothing they can do to stop each other from working on the project. The code is freely available to all and shutting off the distribution to your enemy just isn't allowed. You can't shut out anyone, even someone you hate.
+
+Anyone familiar with corporate politics should immediately see the difference. Keeping rivals in the dark is just standard practice in a corporation. Information is a powerful commodity, and folks competing for the same budget will use it to the best of their ability. Bosses often move to keep their workers locked away from other groups to keep some control over the flow of information.
+
+Retribution is also common in the corporate world. Many managers quickly develop enemies in the ranks, and the groups constantly spend time sabotaging projects. Requests will be answered quickly or slowly depending on who makes them. Work will be done or put off depending on which division is asking for it to be done. Managers will often complain that their job is keeping their underlings from killing each other and then turn around and start battling the other managers at their level.
+
+The people in the free source world aren't any nicer than the people in the corporate cubicle farms, but their powers of secrecy and retribution are severely limited. The GNU General Public License requires that anyone who makes changes to a program and then releases the program must also release the source code to the world. No shutting off your enemies allowed.
+
+This effect could be called a number of different things. It isn't much different from the mutual disarmament treaties signed by nations. Athletic teams strive for this sort of pure focus when they hire referees to make the tough calls and keep everyone playing by the same rules. The government sometimes tries to enforce some discipline in the free market through regulation.
+
+Now, compare this disarmament with a story about the poor folks who stayed behind at the Hotmail website after Microsoft bought them. It's really just one of a million stories about corporate politics. The workers at Hotmail went from being supreme lords of their Hotmail domain to soldiers in the Microsoft army. Their decisions needed to further Microsoft's relentless growth in wealth, not the good of the Hotmail site. This probably didn't really bother the Hotmail people as much as the fact that the people at Microsoft couldn't decide what they wanted from Hotmail.
+
+Robert X. Cringely described the situation in an article in PBS Online, and he quoted one Hotmail worker as saying, "They send a new top-level group down to see us every week, yet it really means nothing. The plan is constantly changing. Today Hotmail is primarily a way of shoveling new users into the MSN portal. We had for a short time a feature called Centerpoint for communicating directly with our users, but that was killed as a possible competitor with the MSN portal. No new features could be added because the Outlook Express team saw us as competition and sabotaged everything."
+
+Cringely explained the corporate friction and gridlock this way:
+
+_1 "What Hotmail learned is that at Microsoft almost anyone can say 'no,' but hardly anyone can say 'yes.' The way it specifically works at Microsoft is that everyone says 'no' to anyone below them on the organizational structure or on the same level, and 'yes' to anyone above. Since the vertical lines of authority are narrow this means people tend to agree only with their bosses and their boss's boss and try to kick and gouge everyone else."
+
+The free software world, of course, removes these barriers. If the Hotmail folks had joined the Linux team instead of Microsoft, they would be free to do whatever they wanted with their website even if it annoyed Linus Torvalds, Richard Stallman, and the pope. They wouldn't be rich, but there's always a price.
+
+Using the word "love" is a bit dangerous because the word manages to include the head-over-heels infatuation of teenagers and the affection people feel for a new car or a restaurant's food. The love that's embodied by the GPL, on the other hand, isn't anywhere near as much fun and it isn't particularly noteworthy. It just encompasses the mutual responsibility and respect that mature folks occasionally feel for each other. It's St. Paul's version of unconditional, everlasting love, not the pangs of desire that kept St. Augustine up late in his youth.
+
+Anyone who has spent time in the trenches in a corporate cubicle farm knows how wasteful the battles between groups and divisions can be. While the competition can sometimes produce healthy rivalries, it often just promotes discord. Any veteran of these wars should see the immediate value of disarmament treaties like the GPL. They permit healthy rivalries to continue while preventing secrecy and selfishness from erupting. The free source movement may not have money to move mountains, but it does have this love.
+
+This love also has a more traditional effect on the hackers who create the free source code. They do it because they love what they're doing. Many of the people in the free source movement are motivated by writing great software, and they judge their success by the recognition they get from equally talented peers. A "nice job" from the right person--like Richard Stallman, Alan Cox, or Linus Torvalds--can be worth more than $100,000 for some folks. It's a strange way to keep score, but for most of the programmers in the free source world, the challenge matters more than the money. Any schmoe in Silicon Valley can make a couple of million dollars, but only a few select folks can rewrite the network interface code of the Linux kernel to improve the throughput of the Apache server by 20 percent.
+
+Keeping score by counting the number of people who dig your work is a strange system, but one that offers the same incentives as business. A good store doesn't insult people who could be repeat customers. A good free software project doesn't insult people who have a choice of which package to use. A good businessman makes it easy for people to get to the store, park, and make a purchase. A good free software project makes it simple for people to download the code, compile it, modify it, understand it, and use it.
+
+There's even some research to support the notion that rewards can diminish the creativity of people. Stallman likes to circulate a 1987 article from the Boston Globe that describes a number of different scientific experiments that show how people who get paid are less creative than those who produce things from their love of the art. The studies evaluated the success of poets, artists, and teachers who did their job for the fun of it and compared it with those who were rewarded for their efforts. In many cases, these were short, bounded exercises that could be evaluated fairly easily.
+
+One scientist, Theresa Amabile, told the Globe that her work "definitely refutes the notion that creativity can be operantly conditioned."
+That is, you can't turn it on by just pouring some money on it. Many free software folks point out that this is why the free source movement is just as likely to succeed as a massively funded corporate juggernaut.
+
+Many people don't need scientists to tell them that you can't throw money at many problems and expect them to go away. This is a hard lesson that managers and businesses learn quickly. But it doesn't follow that the unpaid free source movement will beat the thousands of shackled programmers in their corporate rabbit hutches. These studies just measured "creativity" and found that the unpaid folks were more "creative." That's not necessarily a compliment. In fact, the word is often used as a euphemism for "strange," "weird," or just plain "bad." It's more often a measure of just how different something is instead of how good it is. Would you rather eat at the house of a creative chef or a good chef?
+
+This love of creativity can be a problem for the free source world. Most people don't want to use a creative spreadsheet to do their accounting--it could get them in trouble with the SEC or the IRS. They want a solid team player for many of their jobs, not a way cool creative one.
+
+The free source world is often seen as too artistic and temperamental to undertake the long, arduous task of creating good, solid software that handles the jobs of banks, pharmacies, airlines, and everyone else. Many of these tasks are both mind-numbingly boring and difficult to do. While they just involve adding a few numbers and matching up some data, the tasks have to be done right or airplanes will crash. The free source world can't rely on love or creativity to motivate people to take on these tasks. The only solution might be money.
+
+Of course, it's important to recognize that even seemingly boring jobs can have very creative solutions. Stallman's GNU Emacs is a fascinating and over-the-top, creative solution to the simple job of manipulating text. Word processors and text editors might not be that exciting anymore, but finding creative ways to accomplish the task is still possible.
+
+1~ Corporations
+
+Many movies about teenagers follow a time-proven formula: once the magic summer is over, the gang is going to split up and it will never be the same again. Bob's going to college; Rick is getting married; and Harry is going to be stuck in the old town forever. Right now, the free software world is playing out the same emotions and dramas as the greater world discovers open source software. In the fall, the corporations are coming, and the old, cool world of late-night hackfests fueled by pizza and Jolt is in danger. Some people in the realm of free source software are going to grow up, get educated, and join the establishment;
+some will get married; and some will get left behind wondering why the old game isn't as cool anymore.
+
+The free source world is suffering from an acute case of success. Many of the great projects like Apache and Sendmail are growing up and being taken over by corporations with balance sheets. Well, not exactly taken over, but the corporations will exist and they'll try to shepherd development. Other corporations like Apple, Sun, and Netscape are experimenting with open source licenses and trying to make money while sharing code. Some quaint open source companies like Red Hat are growing wealthy by floating IPOs to raise some money and maybe buy a few Porsches for their stakeholders. There's a lot of coming of age going on.
+
+On the face of it, none of this rampant corporatization should scare the guys who built the free software world in their spare cycles. The corporations are coming to free source because it's a success. They want to grab some of the open software mojo and use it to drive their own companies. The suits on the plane are all tuning into Slashdot, buying T-shirts, and reading Eric Raymond's essay "The Cathedral and the Bazaar" in the hopes of glomming on to a great idea. The suits have given up their usual quid pro quo: be a good nerd, keep the code running, and we'll let you wear a T-shirt in your basement office. Now they want to try to move in and live the life, too. If Eric Raymond were selling Kool-Aid, they would be fighting to drink it.
+
+The talk is serious, and it's affecting many of the old-line companies as well. Netscape started the game by releasing the source code to a development version of their browser in March of 1998. Apple and Sun followed and began giving away the source code to part of their OS. Of course, Apple got part of the core of their OS from the open source world, but that's sort of beside the point. They're still sharing some of their new, Apple-only code. Some, not all. But that's a lot more than they shared before. Sun is even sharing the source code to their Java system. If you sign the right papers or click the right buttons, you can download the code right now. Its license is more restrictive, but they're joining the club, getting religion, and hopping on the bandwagon.
+
+Most of the true devotees are nervous about all of this attention. The free software world was easy to understand when it was just late-night hackfests and endless railing against AT&T and UNIX. It was simple when it was just messing around with grungy code that did way cool things. It was a great, he-man, Windoze-hating clubhouse back then.
+
+Well, the truth is that some of the free software world is going to go off to college, graduate with a business degree, and turn respectable. Eric Allman, for instance, is trying to build a commercial version of his popular free package Sendmail. The free version will still be free, but you can get a nicer interface and some cooler features for managing accounts if you buy in. If things work out, some of the folks with the free version will want all of the extra features he's tacking on and they'll pay him. No one knows what this will do to the long-term development of Sendmail, of course. Will he only make new improvements in the proprietary code? Will other folks stop contributing to the project because they see a company involved? There's some evidence that Allman's not the same guy who hung around the pizza joint. When I contacted him for an interview, he passed me along to his public relations expert, who wrote back wanting to "make sure this is a profitable way to spend Eric's time." For all we know, Eric may have even been wearing a suit when he hired a corporate PR team.
+
+Some of the other free software folks are going to get married. The Apache group has leveraged its success with small server organizations into connections with the best companies selling high-powered products. IBM is now a firm supporter of Apache, and they run it on many of their systems. Brian Behlendorf still schedules his own appointments, jokes often, and speaks freely about his vision for Apache, but he's as serious as any married man with several kids to support. It's not just about serving up a few web pages filled with song lyrics or Star Wars trivia. People are using Apache for business--serious business. There can still be fun, but Apache needs to be even more certain that they're not screwing up.
+
+And of course there are thousands of free software projects that are going to get left behind hanging out at the same old pizza joint. There were always going to be thousands left behind. People get excited about new projects, better protocols, and neater code all the time. The old code just sort of withers away. Occasionally someone rediscovers it, but it is usually just forgotten and superseded. But this natural evolution wasn't painful until the successful projects started ending up on the covers of magazines and generating million-dollar deals with venture capitalists. People will always be wondering why their project isn't as big as Linux.
+
+There will also be thousands of almost great projects that just sail on being almost great. All of the distributions come with lots of programs that do some neat things. But there's no way that the spotlight can be bright enough to cover them all. There will be only one Torvalds and everyone is just going to be happy that he's so gracious when he reminds the adoring press that most of the work was done by thousands of other nameless folks.
+
+Most of the teen movies don't bother trying to figure out what happens after that last fateful summer. It's just better to end the movie with a dramatic race or stage show that crystallizes all the unity and passion that built up among the group during their formative years. They sing, they dance, they win the big game, they go to the prom, and the camera freezes the moment at the end of the film. The free software movement, on the other hand, is just too important and powerful to stop this book on a climactic note. It would be fun to just pause the book at the moment in time when Linus Torvalds and Bob Young were all over the magazines. Their big show was a success, but the real question is what will happen when some folks go to school, some folks get married, and some folks are left behind.
+
+To some extent, the influx of money and corporations is old news. Very old news. Richard Stallman faced the same problem in the 1980s when he realized that he needed to find a way to live without a university paycheck. He came up with the clever notion that the software and the source must always be free, but that anyone could charge whatever the market would bear for the copies. The Free Software Foundation itself continues to fund much of its development by creating and selling both CD-ROMs and printed manuals.
+
+This decision to welcome money into the fold didn't wreck free software. If anything, it made it possible for companies like Red Hat to emerge and sell easier-to-use versions of the free software. The companies competed to put out the best distributions and didn't use copyright and other intellectual property laws to constrain each other. This helped attract more good programmers to the realm because most folks would rather spend their time writing code than juggling drivers on their machine. Good distributions like Red Hat, Slackware, Debian, FreeBSD, and SuSE made it possible for everyone to get their machines up and running faster.
+
+There's no reason why the latest push into the mainstream is going to be any different. Sure, Red Hat is charging more and creating better packages, but most of the distribution is still governed by the GPL. Whenever people complain that Red Hat costs too much, Bob Young just points people to the companies that rip off his CDs and charge only $2 or $3 per copy. The GPL keeps many people from straying too far from the ideal.
+
+The source is also still available. Sure, the corporate suits can come in, cut deals, issue press releases, raise venture capital, and do some IPOs, but that doesn't change the fact that the source code is now widely distributed. Wasn't that the goal of Stallman's revolution? Didn't he want to be able to get at the guts of software and fix it? The source is now more omnipresent than ever. The corporations are practically begging folks to download it and send in bug fixes.
+
+Of course, access to the source was only half of Stallman's battle. A cynic might growl that the corporations seem to be begging folks to do their research, testing, and development work for them. They're looking for free beers. Stallman wanted freedom to do whatever he wanted with the source and many of the companies aren't ready to throw away all of their control.
+
+Apple sells its brand, and it was careful not to open up the source code to its classic desktop interface. They kept that locked away. Most of the source code that Apple released is from its next version of the operating system, Mac OS X, which came from the folks at NeXT when Apple acquired that company. Where did that code come from?
Large portions came from free systems like NetBSD and the Mach kernel. It's easy to be generous when you only wrote a fraction of the code.
+
+Ernest Prabhakar, the project manager for Apple's first open source effort known as Darwin, describes the tack he took to get Apple's management to embrace this small core version of the BSD operating system tuned to the Macintosh hardware platform.
+
+"The first catalysts were the universities. There were a lot of universities like MIT and University of Michigan that had some specialized network infrastructure needs," he said.
+
+"We realized that the pieces they're most interested in are the most commoditized. There wasn't really any proprietary technology added that we had to worry about them copying. There are people who know them better than we do, like the BSD community. We started making the case: if we really want to partner with the universities, we should just open the source code and release it as a complete BSD-style operating system.
+
+"We wanted people to use this in classes, really embed it in the whole educational process without constraining teaching to fit some corporate model," he finishes.
+
+Of course, Prabhakar suggests that there is some self-interest as well. Apple wants to be a full partner with the BSD community. It wants the code it shares to mingle and cross-pollinate with the code from the BSD trees. In the long run, Apple's Darwin and the BSDs will grow closer together. In an ideal world, both groups will flourish as they avoid duplicating each other's efforts.
+
+Prabhakar says, "This reduces our reintegration costs. The ability to take the standard version of FreeBSD and dump it into our OS was a big win. Prior to doing the open source, we had done a small scale of givebacks."
+
+This view is echoed by other companies. IBM is a great hardware company and an even greater service company that's never had much luck selling software, at least in the same way that Microsoft sells software. Their OS/2 never got far off the ground. They've sold plenty of software to companies by bundling it with handholding and long-term service, but they've never had great success in the shrink-wrapped software business. Open source gives them the opportunity to cut software development costs and concentrate on providing service and hardware. They get free development help from everyone and the customers get more flexibility.
+
+Sun's Community Source License is also not without some self-interest. The company would like to make sure that Java continues to be "Write Once, Run Anywhere," and that means carefully controlling the APIs and the code to make sure no idiosyncrasies or other glitches emerge. People and companies that want to be part of the community must abide by Sun's fairly generous, but not complete, gift to the world.
+
+The company's web page points out the restriction Sun places on its source code fairly clearly. "Modified source code cannot be distributed without the express written permission of Sun" and "Binary programs built using modified Java 2 SDK source code may not be distributed, internally or externally, without meeting the compatibility and royalty requirements described in the License Agreement."
+
+While some see this clause as a pair of manacles, Bill Joy explains that the Community Source License is closer to our definition of a real community. "It's a community in a stronger sense," he told an audience at Stanford. "If you make improvements, you can own them." After you negotiate a license with Sun, you can sell them. Joy also points out that Sun's license does require some of the GNU-like sharing by requiring everyone to report bugs.
+
+Some customers may like a dictator demanding complete obeisance to Sun's definition of Java, but some users are chafing a bit. The freedom to look at the code isn't enough. They want the freedom to add their own features that are best tuned to their own needs, a process that may start to Balkanize the realm by creating more and more slightly different versions of Java. Sun clearly worries that the benefits of all this tuning aren't worth living through the cacophony of having thousands of slightly different versions. Releasing the source code allows all of the users to see more information about the structure of Sun's Java and helps them work off the same page. This is still a great use of the source code, but it isn't as free as the use imagined by Stallman.
+
+Alan Baratz, the former president of Sun's Java division, says that their Community Source License has been a large success. Sure, some folks would like the ability to take the code and fork off their own versions as they might be able to do with software protected by a BSD- or GNU-style license, but Java developers really want the assurance that it's all compatible. As many said, "Microsoft wanted to fork Java so it could destroy it."
+
+Baratz said, "We now have forty thousand community source licensees. The developers and the systems builders and the users all want the branded Java technology. They want to know that all of the apps are going to be there. That's the number-one reason that developers are writing to the platform." Their more restrictive license may not make Stallman and other free software devotees happy, but at least Java will run everywhere.
+
+Maybe in this case, the quality and strength of the unity Sun brings to the marketplace is more important than the complete freedom to do whatever you want. There are already several Java clones available, like Kaffe. They were created without the help of Sun, so their creators aren't bound by Sun's licenses. But they also go out of their way to avoid splitting with Sun. Tim Wilkinson, the CEO of Transvirtual, the creators of Kaffe, says that he plans to continue to make Kaffe 100 percent Java compatible without paying royalties or abiding by the Community Source License. If his project or other similar ones continue to thrive and grow, then people will know that the freedom of open source can be as important as blind allegiance to Sun.
+
+These corporate efforts are largely welcomed by the open source world, but the welcome does not come with open arms or a great deal of warmth.
+
+Source code with some restrictions is generally better than no source at all, but there is still a great deal of suspicion. Theo de Raadt, the leader of the OpenBSD project, says, "Is that free? We will not look at Apple source code because we'll have contaminated ourselves." De Raadt is probably overreacting, but he may have reason to worry. AT&T's USL tied up the BSD project for more than a year with a lawsuit that it eventually lost. Who knows what Apple could do to the folks at OpenBSD if there were some debate over whether some code should be constrained by the Apple license? It's just easier for everyone at OpenBSD to avoid looking at the Apple code so they can be sure that the Apple license won't give some lawyers a toehold on OpenBSD's code base.
+
+Richard Stallman says, "Sun wants to be thought of as having joined our club, without paying the dues or complying with the public service requirements. They want the users to settle for the fragments of freedom Sun will let them have."
+
+He continues, "Sun has intentionally rejected the free software community by using a license that is much too restrictive. You are not allowed to redistribute modified versions of Sun's Java software. It is not free software."
+
+2~ Fat Cats and Alley Cats
+
+The corporations could also sow discord and grief by creating two different classes: the haves and the have-nots. The people who work at the company and draw a salary would get paid for working on the software while others would get a cheery grin and some thanks. Everyone's code would still be free, but some of the contributors might get much more than others. In the past, everyone was just hanging out on the Net and adding their contributions because it was fun.
+
+This split is already growing. Red Hat software employs some of the major Linux contributors like Alan Cox. They get a salary while the rest of the contributors get nothing. Sun, Apple, and IBM employees get salaries, but folks who work on Apache or the open versions of BSD get nothing but the opportunity to hack cool code.
+
+One employee from Microsoft, who spoke on background, predicted complete and utter disaster. "Those folks are going to see the guys from Red Hat driving around in the Porsches and they're just going to quit writing code. Why help someone else get rich?" he said. I pointed out that jealousy wasn't just a problem for free software projects. Didn't many contract employees from Microsoft gather together and sue to receive stock options? Weren't they locked out, too?
+
+Still, he raises an interesting point. Getting people to join together for the sake of a group is easy to do when no one is getting rich. What will happen when more money starts pouring into some folks' pockets? Will people defect? Will they stop contributing?
+
+Naysayers are quick to point to experiments like Netscape's Mozilla project, which distributed the source code to the next generation of its browser. The project received plenty of hype because it was the first big open source project created by a major company. They set up their own website and built serious tools for keeping track of bugs. Still, the project has not generated any great browser that would allow it to be deemed a success. At this writing, about 15 months after the release, they're still circulating better and better beta versions, but none are as complete or feature-rich as the regular version of Netscape, which remains proprietary.~{ At this writing, version M13 of Mozilla looks very impressive. It's getting quite close to the proprietary version of Netscape. }~
+
+The naysayers like to point out that Netscape never really got much outside help on the Mozilla project. Many of the project's core group were Netscape employees and most of the work was done by Netscape employees. There were some shining examples like Jim Clark (no relation to the Netscape founder of the same name), who contributed an entire XML parser to the project. David Baron began hacking and testing the Mozilla code when he was a freshman at Harvard. But beyond that, there was no great groundswell of enthusiasm. The masses didn't rise up and write hundreds of thousands of lines of code and save Netscape.
+
+But it's just as easy to cast the project as a success. Mozilla was the first big corporate-sponsored project. Nothing came before it, so it isn't possible to compare it with anything. It is both the best and the worst example. The civilian devotees could just as well be said to have broken the world record for source code contributed to a semi-commercial project. Yes, most of the work was officially done by Netscape employees, but how do you measure work? Many programmers think a good bug report is more valuable than a thousand lines of code. Sure, some folks like Baron spend most of their time testing the source code and looking for incompatibilities, but that's still very valuable. He might not have added new code himself, but his insight may be worth much more to the folks who eventually rely on the product to be bug-free.
+
+It's also important to measure the scope of the project. Mozilla set out to rewrite most of the Netscape code. In the early days, Netscape grew by leaps and bounds as the company struggled to add more and more features to keep ahead of Microsoft. The company often didn't have the time to rebuild and reengineer the product, and many of the new features were not added in the best possible way. The Mozilla team started off by trying to rebuild the code and put it on a stable foundation for the future. This hard-core, structural work often isn't as dramatic. Casual observers just note that the Mozilla browser doesn't have as many features as plain old Netscape. They don't realize that it's completely redesigned inside.
+
+Jeff Bates, an editor at Slashdot, says that Mozilla may have suffered because Netscape was so successful. The Netscape browser was already available for free for Linux. "There wasn't a big itch to scratch," he says. "We already had Netscape, which was fine for most people. This project interested a smaller group than if we'd not had Netscape, hence why it didn't get as much attention."
+
+The experiences at other companies like Apple and Sun have been more muted. These two companies also released the source code to their major products, but they did not frame the releases as big barn-raising projects where all of the users would rise up and do the development work for the company. Some people portrayed the Mozilla project as a bit of a failure because Netscape employees continued to do the bulk of code writing. Apple and Sun have done a better job emphasizing the value of having the source available while avoiding the impossible dream of getting the folks who buy the computers to write the OS, too.
+
+Not all interactions between open source projects and corporations involve corporations releasing their source code under a new open source license. Much more code flows from the open source community into corporations. Free things are just as tempting to companies as to people.
+
+In most cases, the flow is not particularly novel. The companies just choose FreeBSD or some version of Linux for their machines like any normal human being. Many web companies use a free OS like Linux or FreeBSD because they're both cheap and reliable. This is going to grow much more common as companies realize they can save a substantial amount of money over buying seat licenses from companies like Microsoft.
+
+In some cases, the interactions between the open source realm and the corporate cubicle farm become fairly novel. When the Apache web server grew popular, the developers at IBM recognized that they had an interesting opportunity at hand. If IBM could get the Apache server to work on its platforms, it might sell more machines. Apache was growing more common, and common software often sold machines. When people came looking for a new web server, the IBM salesmen thought it might be nice to offer something that was well known.
+
+Apache's license is pretty loose. IBM could have taken the Apache code, added some modifications, and simply released it under their own name. The license only required that IBM give some credit by saying the version was derived from Apache itself. This isn't hard to do when you're getting something for free.
+
+Other companies have done the same thing. Brian Behlendorf, one of the Apache core group, says, "There's a company that's taken the Apache code and ported it to Mac. They didn't contribute anything back to the Apache group, but it didn't really hurt us to do that." He suggested that the karma came back to haunt them because Apple began releasing their own version of Apache with the new OS, effectively limiting the company's market.
+
+IBM is, of course, an old master at creating smooth relationships with customers and suppliers. They chose to build a deeper relationship with Apache by hiring one of the core developers, Ken Coar, and paying him to keep everyone happy.
+
+"My job is multifaceted," says Coar. "I don't work on the IBM added-value stuff. I work on the base Apache code on whatever platforms are available to me. I serve as a liaison between IBM and the Apache group, basically advising IBM on whether the things that they want to do are appropriate. It's an interesting yet unique role. All of my code makes it back into the base Apache code."
+
+Coar ended up with the job because he helped IBM and Apache negotiate the original relationship. He said there was a considerable amount of uncertainty on both sides. IBM wondered how they could get something without paying for it, and Apache wondered whether IBM would come in and simply absorb Apache.
+
+"There were questions about it from the Apache side that any sort of IBM partnership would make it seem as if IBM had acquired Apache. It was something that Apache didn't want to see happen or seem to see happen," Coar said.
+
+Today, Coar says IBM tries to participate in the Apache project as a peer. Some of the code IBM develops will flow into the group and other bits may remain proprietary. When the Apache group incorporated, Coar and another IBM employee, Ken Stoddard, were members. This sort of long-term involvement can help ensure that the Apache group doesn't start developing the server in ways that will hurt its performance on IBM's machine. If you pay several guys who contribute frequently to the project, you can be certain that your needs will be heard by the group. It doesn't guarantee anything, but it can buy a substantial amount of goodwill.
+
+Of course, it's important to realize that the Apache group was always fairly business-oriented. Many of the original developers ran web servers and wanted access to the source code. They made money by selling the service of maintaining a website to the customers, not a shrink-wrapped copy of Apache itself. The deal with IBM didn't mean that Apache changed many of its ways; it just started working with some bigger fish.
+
+At first glance, each of these examples doesn't really suggest that the coming of the corporations is going to change much in the free source world. Many of the changes were made long ago when people realized that some money flowing around made the free software world a much better place. The strongest principles still survive: (1) hackers thrive when the source code is available, and (2) people can create their own versions at will.
+
+The arrival of companies like IBM doesn't change this. The core Apache code is still available and still running smoothly. The modules still plug in and work well. There's no code that requires IBM hardware to run and the committee seems determined to make sure that any IBM takeover doesn't occur. In fact, it still seems to be in everyone's best interest to keep the old development model. The marketplace loves standards, and IBM could sell many machines just offering a standard version of Apache. When the customers walk in looking for a web server, IBM's sales force can just say "This little baby handles X billion hits a day and it runs the industry-leading Apache server." IBM's arrival isn't much different from the arrival of a straightlaced, no-nonsense guy who strolls in from the Net and wants to contribute to Apache so he can get ahead in his job as a webmaster. In this case, it's just a corporation, not a person.
+
+Many suggest that IBM will gradually try to absorb more and more control over Apache because that's what corporations do. They generate inscrutable contracts and unleash armies of lawyers. This view is shortsighted because it ignores how much IBM gains by maintaining an arm's-length relationship. If Apache is a general program used on machines throughout the industry, then IBM doesn't need to educate customers on how to use it. Many of them learned in college or in their spare time on their home machines. Many of them read books published by third parties, and some took courses offered by others. IBM is effectively offloading much of its education and support costs onto a marketplace of third-party providers.
+
+Would IBM be happier if Apache was both the leading product in the market and completely owned by IBM? Sure, but that's not how it turned out. IBM designed the PC, but they couldn't push OS/2 on everyone. They can make great computers, however, and that's not a bad business to be in. At least Apache isn't controlled by anyone else, and that makes the compromise pretty easy on the ego.
+
+Some worry that there's a greater question left unanswered by the arrival of corporations. In the past, there was a general link between the creator of a product and the consumer. If the creator didn't do a good job, then the consumer could punish the creator by not buying another version. This marketplace would ensure that only the best survived.
+
+Patrick Reilly writes, "In a free market, identifiable manufacturers own the product. They are responsible for product performance, and they can be held liable for inexcusable flaws."
+
+What happens if a bug emerges in some version of the Linux kernel and it makes it into several distributions? It's not really the fault of the distribution creators, because they were just shipping the latest version of the kernel. And it's not really the kernel creators' fault, because they weren't marketing the kernel as ready for everyone to run. They were just floating some cool software on the Net for free. Who's responsible for the bug? Who gets sued?
+
+Reilly takes the scenario even further. Imagine that one clever distribution company finds a fix for the bug and puts it into their distribution. They get no long-term reward because any of the other distribution companies can come along and grab the bug fix.
+
+He writes, "Consumers concerned about software compatibility would probably purchase the standard versions. But companies would lose profit as other consumers would freely download improved versions of the software from the Internet. Eventually the companies would suffer from widespread confusion over the wide variety of software versions of each product, including standard versions pirated by profiteers."
+
+There's no doubt that Reilly points toward a true breakdown in the feedback loop that is supposed to keep free markets honest and efficient. Brand names are important, and the free source world is a pretty confusing stew of brand names.
+
+But he also overestimates the quality of the software emerging from proprietary companies that can supposedly be punished by the marketplace. Many users complain frequently about bugs that never get fixed in proprietary code, in part because the proprietary companies are frantically trying to glom on more features so they can convince more people to buy another version of the software. Bugs don't always get fixed in the proprietary model, either.
+
+Richard Stallman understands Reilly's point, but he suggests that the facts don't bear him out. If this feedback loop is so important, why do so many people brag about free software's reliability?
+
+Stallman says, "He has pointed out a theoretical problem, but if you look at the empirical facts, we do not have a real problem. So it is only a problem for the theory, not a problem for the users. Economists may have a challenge explaining why we DO produce such reliable software, but users have no reason to worry."
+
+2~ The Return of the Hardware Kings
+
+The biggest effect of the free software revolution may be to shift the power between the hardware and software companies. The biggest corporate proponents of open source are IBM, Apple, Netscape/AOL, Sun, and Hewlett-Packard. All except Netscape are major hardware companies that watched Microsoft turn the PC world into a software monopoly that ruled a commodity hardware business.
+
+Free source code changes the equation and shifts power away from software companies like Microsoft. IBM and Hewlett-Packard are no longer as beholden to Microsoft if they can ship machines running a free OS. Apple is borrowing open source software and using it for the core of their new OS. These companies know that the customers come to them looking for a computer that works nicely when it comes from the factory. Who cares whether the software is free or not? If it does what the customer wants, then they can make their money on hardware.
+
+The free software movement pushes software into the public realm, and this makes it easier for the hardware companies to operate. Car companies don't sit around and argue about who owns the locations of the pedals or the position of the dials on the dashboard. Those notions and design solutions are freely available to all car companies equally. The lawyers don't need to get involved in that level of car creation.
+
+Of course, the free software movement could lead to more consolidation in the hardware business. The car business coalesced over the years because the large companies were able to use their economies of scale to push out the small companies. No one had dominion over the idea of putting four wheels on a car or building an engine with pistons, so the most efficient companies grew big.
+
+This is also a threat for the computer business. Microsoft licensed their OS to all companies, big or small, that were willing to prostrate themselves before the master. It was in Microsoft's best interests to foster free competition between the computer companies. Free software takes this one step further. If no company has control over the dominant OS, then competition will shift to the most efficient producers. The same forces that brought GM to the center of the car industry could help aggregate the hardware business.
+
+This vision would be more worrisome if it hadn't happened already. Intel dominates the market for CPU chips and takes home the lion's share of the price of a PC. The marketplace already chose a winner of that battle. Now, free software could unshackle Intel from its need to maintain a partnership with Microsoft, making Intel stronger.
+
+Of course, the free OSs could also weaken Intel by opening it up to competition. Windows 3.1, 95, and 98 always ran only on Intel platforms. This made it easier for Intel to dominate the PC world because the OS that was most in demand would only run on Intel or Intel-compatible chips. Microsoft made some attempt to break out of this tight partnership by creating versions of Windows NT that ran on the Alpha chip, but these were never an important part of the market.
+
+The free OS also puts Intel's lion's share up for grabs. Linux runs well on Intel chips, but it also runs on chips made by IBM, Motorola, Compaq, and many others. The NetBSD team loves to brag that its software runs on almost all platforms available and is dedicated to porting it to as many as possible. Someone using Linux or NetBSD doesn't care who made the chip inside because the OS behaves similarly on all of them.
+
+Free source code also threatens one of the traditional ways computer manufacturers differentiated their products. The Apple Macintosh lost market share and potential customers because it was said that there wasn't much software available for it. The software written for the PC would run on the Mac only by using a slow program that converted it. Now, if everyone has access to the source code, they can convert the software to run on their machine. In many cases, it's as simple as just recompiling it, a step that takes less than a minute. Someone using an Amiga version of NetBSD could take software from the Intel chip version and recompile it.
+
+This threat shows that the emergence of the free OSs ensures that hardware companies will also face increased competitive pressure. Sure, they may be able to get Microsoft off their back, but Linux may make things a bit worse.
+
+In the end, the coming of age of free software may be just as big a threat to the old way of life for corporations as it is to the free software community. Sure, the hackers will lose the easy camaraderie of swapping code with others, but the corporations will need to learn to live without complete control. Software companies will be under increasing pressure from free versions, and hardware companies will be shocked to discover that their product will become more of a commodity than it was before. Everyone is going to have to find a way to compete and pay the rent when much of the intellectual property is free.
+
+These are big changes that affect big players. But what will the changes mean to the programmers who stay up late spinning mountains of code? Will they be disenfranchised? Will they quit in despair? Will they move on to open source experiments on the human genome?
+
+"The money flowing in won't turn people off or break up the community, and here's why," says Eric Raymond. "The demand for programmers has been so high for the last decade that anyone who really cared about money is already gone. We've been selected for artistic passion."
+
+1~ Money
+
+Everyone who's made it past high school knows that money changes everything. Jobs disappear, love crumbles, and wars begin when money gets tight. Of course, a good number of free source believers aren't out of high school, but they'll figure this out soon enough. Money is just the way that we pay for things we need like food, clothing, housing, and of course newer, bigger, and faster computers.
+
+The concept of money has always been the Achilles heel of the free software world. Everyone quickly realizes the advantages of sharing the source code with everyone else. As they say in the software business, "It's a no-brainer." But figuring out a way to keep the fridge stocked with Jolt Cola confounds some of the best advocates for free software.
+
+Stallman carefully tried to spell out his solution in the GNU Manifesto. He wrote, "There's nothing wrong with wanting pay for work, or seeking to maximize one's income, as long as one does not use means that are destructive. But the means customary in the field of software today are based on destruction.
+
+"Extracting money from users of a program by restricting their use of it is destructive because the restrictions reduce the amount and the way that the program can be used. This reduces the amount of wealth that humanity derives from the program. When there is a deliberate choice to restrict, the harmful consequences are deliberate destruction."
+
+At first glance, Richard Stallman doesn't have to worry too much about making ends meet. MIT gave him an office. He got a genius grant from the MacArthur Foundation. Companies pay him to help port his free software to their platforms. His golden reputation combined with a frugal lifestyle means that he can support himself with two months of paid work a year. The rest of the time he donates to the Free Software Foundation. It's not in the same league as running Microsoft, but he gets by.
+
+Still, Stallman's existence is far from certain. He had to work hard to develop the funding lines he has. In order to avoid any conflicts of interest, the Free Software Foundation doesn't pay Stallman a salary or cover his travel expenses. He says that getting paid by corporations to port software helped make ends meet, but it didn't help create new software. Stallman works hard to raise new funds for the FSF, and the money goes right out the door to pay programmers on new projects. This daily struggle for some form of income is one of the greatest challenges in the free source world today.
+
+Many other free software folks are following Stallman's tack by selling the services, not the software. Many of the members of the Apache Webserver Core, for instance, make their money by running websites. They get paid because their customers are able to type in www.website.com and see something pop up. The customer doesn't care whether it is free software or something from Microsoft that is juggling the requests. They just want the graphics and text to keep moving.
+
+Some consultants are following in the same footsteps. Several now offer discounts of something like 25 percent if the customer agrees to release the source code from the project as free software. If there's no great proprietary information in the project, then customers often take the deal. At first glance, the consultant looks like he's cutting his rates by 25 percent, but at second glance, he might be just making things a bit more efficient for all of his customers. He can reuse the software his clients release, and no one knows it better than he does. In time, all of his clients share code and enjoy lower development costs.
+
+The model of selling services instead of source code works well for many people, but it is still far from perfect. Software that is sold as part of a shrink-wrapped license is easy for people to understand and budget. If you pay the price, you get the software. Services are often billed by the hour and they're often very open-ended. Managing these relationships can be just as difficult as raising some capital to write the software and then marketing it as shrink-wrapped code.
+
+2~ Cygnus--One Company that Grew Rich on Free Software
+
+There have been a number of different success stories of companies built around selling free software. One of the better-known examples is Cygnus, a company that specializes in maintaining and porting the GNU C Compiler. The company originally began by selling support contracts for the free software before realizing that there was a great demand for compiler development.
+
+The philosophy in the beginning was simple. John Gilmore, one of the founders, said, "We make free software affordable." They felt that free software offered many great tools that people needed and wanted, but realized that the software did not come with guaranteed support. Cygnus would sell people contracts that would pay for an engineer who would learn the source code inside and out while waiting to answer questions. The engineer could also rewrite code and help out.
+
+David Henkel-Wallace, one of the other founders, says, "We started in 1989 technically, 1990 really. Our first offices were in my house on University Avenue [in Palo Alto]. We didn't have a garage, we had a carport. It was an apartment complex. We got another apartment and etherneted them together. By the time we left, we had six apartments."
+
+While the Bay Area was very technically sophisticated, the Internet was mainly used at that time by universities and research labs. Commercial hookups were rare and only found in special corners like the corporate research playpen, Xerox PARC. In order to get Net service, Cygnus came up with a novel plan to wire the apartment complex and sell off some of the extra bandwidth to their neighbors. Henkel-Wallace says, "We started our own ISP [Internet Service Provider] as a cooperative because there weren't those things in those days. Then people moved into those apartments because they were on the Internet."
+
+At the beginning, the company hoped that the free software would allow them to offer something the major manufacturers didn't: cross-platform consistency. The GNU software would perform the same on a DEC Alpha, a Sun SPARC, and even a Microsoft box. The manufacturers, on the other hand, were locked up in their proprietary worlds where there was little cross-pollination. Each company developed its own editors, compilers, and source code tools, and each took slightly different approaches.
+
+One of the other founders, Michael Tiemann, writes of the time:
+"When it came to tools for programmers in 1989, proprietary software was in a dismal state. First, the tools were primitive in the features they offered. Second, the features, when available, often had built-in limitations that tended to break when projects started to get complicated. Third, support from proprietary vendors was terrible. . . . Finally, every vendor implemented their own proprietary extensions, so that when you did use the meager features of one platform, you became, imperceptibly at first, then more obviously later, inextricably tied to that platform."
+
+The solution was to clean up the GNU tools, add some features, and sell the package to people who had shops filled with different machines. Henkel-Wallace said, "We were going to have two products: compiler tools and shell tools. Open systems people will buy a bunch of SGIs, a bunch of HPs, a bunch of Unix machines. Well, we thought people who have the same environment would want to have the same tools."
+
+This vision didn't work out. They sold no contracts that offered that kind of support. They did find, however, that people wanted them to move the compiler to other platforms. "The compilers people got from the vendors weren't as good and the compiler side of the business was making money from day one," says Henkel-Wallace.
+
+The company began to specialize in porting GCC, the GNU compiler written first by Richard Stallman, to new chips that came along. While much of the visible world of computers was frantically standardizing on Intel chips running Microsoft operating systems, an invisible world was fragmenting as competition for the embedded systems blossomed. Everyone was making different chips to run the guts of microwave ovens, cell phones, laser printers, network routers, and other devices. These manufacturers didn't care whether a chip ran the latest MS software, they just wanted it to run. The appliance makers would set up the chip makers to compete against each other to provide the best solution with the cheapest price, and the chip manufacturers responded by churning out a stream of new, smaller, faster, and cheaper chips.
+
+Cygnus began porting the GCC to each of these new chips, usually after being paid by the manufacturer. In the past, the chip companies would write or license their own proprietary compilers in the hope of generating something unique that would attract sales. Cygnus undercut this idea by offering something standard and significantly cheaper. The chip companies would save themselves the trouble of coming up with their own compiler tools and also get something that was fairly familiar to their customers. Folks who used GCC on Motorola's chip last year were open to trying out National Semiconductor's new chip if it also ran GCC. Supporting free software may not have found many takers, but Cygnus found more than enough people who wanted standard systems for their embedded processors.
+
+Selling processor manufacturers on the conversion contracts was also a bit easier. Businesses wondered what they were doing paying good money for free software. It just didn't compute. The chip manufacturers stopped worrying about this when they realized that the free compilers were just incentives to get people to use their chips. The companies spent millions buying pens, T-shirts, and other doodads that they gave away to market the chips. What was different about buying software? If it made the customers happy, great. The chip companies didn't worry as much about losing a competitive advantage by giving away their work. It was just lagniappe.
+
+Cygnus, of course, had to worry about competition. There was usually some guy who worked at the chip company or knew someone who worked at the chip company who would say, "Hey, I know compilers as well as those guys at Cygnus. I can download GCC too and underbid them."
+
+Henkel-Wallace says, "Cygnus was rarely the lowest bidder. People who cared about price more than anyone else were often the hardest customers anyway. We did deals on a fair price and I think people were happy with the result. We rarely competed on price. What really matters to you? Getting a working tool set or a cheap price?"
+
+2~ How the GPL Built Cygnus's Monopoly
+
+The GNU General Public License was also a bit of a secret weapon for Cygnus. When their competitors won a contract, they had to release the source code for their version when they were done with it. All of the new features and insights developed by competitors would flow directly back to Cygnus.
+
+Michael Tiemann sounds surprisingly like Bill Gates when he speaks about this power: "Fortunately, the open source model comes to the rescue again. Unless and until a competitor can match the one hundred-plus engineers we have on staff today, most of whom are primary authors or maintainers of the software we support, they cannot displace us from our position as the 'true GNU' source. The best they can hope to do is add incremental features that their customers might pay them to add. But because the software is open source, whatever value they add comes back to Cygnus. . . ."
+
+Seeing these effects is something that only a truly devoted fan of free software can do. Most people rarely get beyond identifying the problems with giving up the source code to a project. They don't realize that the GPL affects all users and also hobbles the potential competitors. It's like a mutual disarmament or mutual armament treaty that fixes the rules for all comers, and disarmament treaties are often favored by the most powerful.
+
+The money Cygnus makes by selling this support has been quite outstanding. The company continues to grow every year, and it has been listed as one of the largest and fastest-growing private software companies. The operation was also a bootstrap business where the company used the funds from existing contracts to fund the research and development of new tools. They didn't take funding from outside venture capital firms until 1995. This let the founders and the workers keep a large portion of the company, one of the dreams of every Silicon Valley start-up. In 1999, Red Hat merged with Cygnus to "create an open source powerhouse."
+
+The success of Cygnus doesn't mean that others have found ways of duplicating the model. While Cygnus has found some success and venture capital, Gilmore says, "The free software business gives many MBAs the willies." Many programmers have found that free software is just a free gift for others. They haven't found an easy way to charge for their work.
+
+2~ Snitchware
+
+Larry McVoy is one programmer who looks at the free source world and cringes. He's an old hand from the UNIX world who is now trying to build a new system for storing the source code. To him, giving away source code is a one-way train to no money. Sure, companies like Cygnus and Red Hat can make money by adding some extra service, but the competition means that the price of this value will steadily go to zero. There are no regulatory or large capital costs to restrain entry, so he feels that the free software world will eventually push out all but the independently wealthy and the precollege teens who can live at home.
+
+"We need to find a sustainable method. People need to write code and raise families, pay mortgages, and all of that stuff," he says.
+
+McVoy's solution is a strange license that some call "snitchware." He's developing a product known as BitKeeper and he's giving it away, with several very different hooks attached. He approached this philosophically. He says, "In order to make money, I need to find something that the free software guys don't value that the businesspeople do value. Then I take it away from the free software guys. The thing I found is your privacy."
+
+BitKeeper is an interesting type of product that became essential as software projects grew larger and more unwieldy. In the beginning, programmers wrote a program that was just one coherent file with a beginning, a middle, some digressions, and then an end. These were very self-contained and easily managed by one person.
+
+When more than one programmer started working on a project together, however, everyone needed to coordinate their work. One person couldn't start tearing apart the menus because another might be trying to hook up the menus to a new file system. If both started working on the same part, the changes would be difficult if not impossible to sort out when both were done. Once a team of programmers digs out from a major mess like that, they look for some software like BitKeeper to keep the source code organized.
+
+BitKeeper is sophisticated and well integrated with the Internet. Teams of programmers can be spread throughout the world. At particular times, programmers can call each other up and synchronize their projects. Both tightly controlled, large corporate teams and loose, uncoordinated open source development teams can use the tool.
+
+The synchronization creates change logs that summarize the differences between two versions of the project. These change logs are optimized to move the least amount of information. If two programmers don't do too much work, then synchronizing them doesn't take too long. The change logs build up a complete history of the project and make it possible to roll back the project to earlier points if it turns out that development took the wrong path.
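+The change-log mechanics described above can be sketched with Python's standard difflib module. This is only an illustration of the idea of shipping just the differences between two versions, not BitKeeper's actual format, and the file contents here are made up:

```python
import difflib

# Two snapshots of the same source file, one small edit apart.
old = ["def menu():", "    pass", "def save():", "    pass"]
new = ["def menu():", "    pass", "def save():", "    pass",
       "def postscript_engine():", "    pass"]

# A unified diff records only what changed, so synchronizing two
# programmers moves a small change log rather than whole files.
change_log = list(difflib.unified_diff(old, new, lineterm=""))

# The lines one programmer would actually receive from the other.
added = [line for line in change_log
         if line.startswith("+") and not line.startswith("+++")]
print(added)
```

+Keeping every change log around is what makes rolling back possible: replaying the logs up to an earlier point reconstructs the project as it stood then.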
+
+McVoy's snitchware solution is to post the change logs of the people who don't buy a professional license. These logs include detailed information on how two programs are synchronized, and he figures that this information should be valuable enough for a commercial company to keep secret. They might say, "Moved auction control structure to Bob's version from Caroline's version. Moved new PostScript graphics engine to Caroline's version from Bob's."
+
+McVoy says, "If you're Sun or Boeing, you don't want the Internet to be posting a message like 'I just added the bomb bay.' But for the free software guys, not only is that acceptable, but it's desirable. If you're doing open source, what do you have to hide?"
+
+BitKeeper is free for anyone to use, revise, and extend as long as they don't mess with the part that tattles. If you don't care about the world reading your change logs, then it's not much different from the traditional open source license. The user has the same rights to extend, revise, and modify BitKeeper as they do GNU Emacs, with one small exception: you can't disable the snitch feature.
+
+McVoy thinks this is an understandable trade-off. "From the business guys you can extract money. You can hope that they'll pay you. This is an important point I learned consulting at Schwab and Morgan Stanley. They insist that they pay for the software they get. They don't want to pay nothing. I used to think that they were idiots. Now I think they're very smart," he says.
+
+The matter is simple economics, he explains. "They believe that if enough money is going to their supplier, it won't be a total disaster. I call this an insurance model of software."
+
+Companies that pay for the privacy with BitKeeper will also be funding further development. The work won't be done in someone's spare time between exams and the homecoming game. It won't be done between keeping the network running and helping the new secretary learn Microsoft Word. It will be developed by folks who get paid to do the work.
+
+"There's enough money going back to the corporation so it can be supported," McVoy says. "This is the crux of the problem with the open source model. It's possible to abuse the proprietary model, too. They get you in there, they lock you in, and then they rape you. This business of hoping that it will be okay is unacceptable. You need to have a lock. The MIS directors insist you have a lock."
+
+He has a point. Linux is a lot of fun to play with and it is now a very stable OS, but it took a fair number of years to get to this point. Many folks in the free source world like to say things like, "It used to be that the most fun in Linux was just getting it to work." Companies like Morgan Stanley, Schwab, American Airlines, and most others live and die on the quality of their computer systems. They're quite willing to pay money if it helps ensure that things don't go wrong.
+
+McVoy's solution hasn't rubbed everyone the right way. The Open Source Initiative doesn't include his snitchware license in a list of acceptable solutions. "The consensus of the license police is that my license is NOT open source," he says. "The consensus of my lawyer is that it is. But I don't call it open source anymore."
+
+He's going his own way. "I made my own determination of what people value in the OS community: they have to be able to get the source, modify the source, and redistribute the source for no fee. All of the other crap is yeah, yeah whatever," he says.
+
+"The problem with the GPL is the GPL has an ax to grind, and in order to grind that ax it takes away all of the rights of the person who wrote the code. It serves the need of everyone in the community except the person who wrote it."
+
+McVoy has also considered a number of other alternatives. Instead of taking away something that the free software folks don't value, he considered putting in something that the businesses would pay to get rid of. The product could show ads it downloaded from a central location. This solution is already well known on the Internet, where companies give away e-mail, searching solutions, directories, and tons of information in order to sell ads. This solution, however, tends to wreck the usability of the software. Eudora, the popular e-mail program, is distributed with this option.
+
+McVoy also considered finding a way to charge for changes and support to BitKeeper. "The Cygnus model isn't working well because it turns them into a contracting shop. That means you actually have to do something for every hour of work."
+
+To him, writing software and charging for each version can generate money without work--that is, without doing further work. The support house has to have someone answering the phone every moment. A company that is selling shrink-wrapped software can collect money as people buy new copies. McVoy doesn't want this cash to spend tipping bartenders on cruise ships, although he doesn't rule it out. He wants the capital to reinvest in other neat ideas. He wants to have some cash coming in so he can start up development teams looking at new and bigger projects.
+
+The Cygnus model is too constraining for him. He argues that a company relying on support contracts must look for a customer to fund each project. Cygnus, for instance, had to convince Intel that they could do a good job porting the GCC to the i960. They found few people interested in general support of GNU, so they ended up concentrating on GCC.
+
+McVoy argues that it's the engineers who come up with the dreams first. The customers are often more conservative and less able to see how some new tool or piece of software could be really useful. Someone needs to hole up in a garage for a bit to create a convincing demonstration of the idea. Funding a dream takes capital.
+
+To him, the absence of money in the free software world can be a real limitation because money is a way to store value. It's not just about affording a new Range Rover and balsamic vinegars that cost more than cocaine by weight. Money can be a nice way to store up effort and transport it across time. Someone can work like a dog for six months, turn out a great product, and sell it for a pile of cash. Ten years later, the cash can be spent on something else. The work is effectively stored for the future.
+
+Of course, this vision isn't exactly true. Cygnus has managed to charge enough for their contracts to fund the development of extra tools. Adding new features and rolling them out into the general distribution of some GNU tool is part of the job that the Cygnus team took on for themselves. These new features also mean that the users need more support. On one level, it's not much different from a traditional software development cycle. Cygnus is doing its work by subscription while a traditional house is creating its new features on spec.
+
+In fact, Cygnus did so well over such a long period of time that it found it could raise capital. "Once Cygnus had a track record of making money and delivering on time, investors wanted a piece of it," says Gilmore.
+
+Red Hat has managed to sell enough CD-ROM disks to fund the development of new projects. They've created a good selection of installation tools that make it relatively easy for people to use Linux. They also help pay salaries for people like Alan Cox who contribute a great deal to the evolution of the kernel. They do all of this while others are free to copy their distribution disks verbatim.
+
+McVoy doesn't argue with these facts, but feels that they're just a temporary occurrence. The huge growth of interest in Linux means that many new folks are exploring the operating system. There's a great demand for the hand-holding and packaging that Red Hat offers. In time, though, everyone will figure out how to use the product and the revenue stream should disappear as competition drives out the ability to charge $50 for each disk.
+
+Of course, the folks at Cygnus or Red Hat might not disagree with McVoy either. They know it's a competitive world and they figure that their only choice is to remain competitive by finding something that people will want to pay for. They've done it in the past and they should probably be able to do it in the future. There are always new features.
+
+2~ Bounties for Quicker Typer-Uppers
+
+Some developers are starting to explore a third way of blending capital with open source development by trying to let companies and people put bounties out on source code. The concept is pretty simple and tuned to the open software world. Let's say you have an annoying habit of placing French bon mots in the middle of sentences. Although this looks stupide to your friends, you think it's quite chic. The problem is that your old word processor's spell checker isn't quite à la mode and it only operates avec une seule langue. The problem is that you've spent too much time studying français and drinking du café and not enough time studying Java, the programming language. You're très désolé by your word processor's inability to grok just how BCBG you can be and spell-check in deux langues.
+
+The bounty system could be your savior. You would post a message saying, "Attention! I will reward with a check for $100 anyone who creates a two-language spell-checker." If you're lucky, someone who knows something about the spell-checker's source code will add the feature in a few minutes. One hundred dollars for a few minutes' work isn't too shabby.
+
+It is entirely possible that another person out there is having the same problem getting their word processor to verstehen their needs. They might chip in $50 to the pool. If the problem is truly grande, then the pot could grow quite large.
+
+This solution is blessed with the wide-open, free-market sensibility that many people in the open software community like. The bounties are posted in the open and anyone is free to try to claim the bounties by going to work. Ideally, the most knowledgeable will be the first to complete the job and nab the payoff.
+
+Several developers are trying to create a firm infrastructure for the plan. Brian Behlendorf, one of the founding members of the Apache web server development team, is working with Tim O'Reilly's company to build a website known as SourceXchange. Another group known as CoSource is led by Bernie Thompson and his wife, Laurie. Both will work to create more software that is released with free source.
+
+Of course, these projects are more than websites. They're really a process, and how the process will work is still unclear right now. While it is easy to circulate a notice that some guy will pay some money for some software, it is another thing to actually make it work. Writing software is a frustrating process and there are many chances for disagreement. The biggest question on every developer's mind is "How can I be sure I'll be paid?" and the biggest question on every sugar daddy's mind is "How can I be sure that the software works?"
+
+These questions are part of any software development experience. There is often a large gap between the expectations of the person commissioning the software and the person writing the code. In this shadow are confusion, betrayal, and turmoil.
+
+The normal solution is to break the project up into milestones and require payment after each milestone passes. If the coder is doing something unsatisfactory, the message is transmitted when payment doesn't arrive. Both SourceXchange and CoSource plan on carrying over the same structure to the world of bounty-hunting programmers. Each project might be broken into a number of different steps and a price for each step might be posted in advance.
+
+Both systems try to alleviate the danger of nonpayment by requiring that someone step in and referee the end of the project. A peer reviewer must be able to look over the specs of the project and the final code and then determine whether money should be paid. Ideally, this person should be someone both sides respect.
+
+A neutral party with the ability to make respectable decisions is something many programmers and consultants would welcome. In many normal situations, the contractors can only turn to the courts to solve disagreements, and the legal system is not really schooled in making these kinds of decisions. The company with the money is often able to dangle payment in front of the programmers and use this as a lever to extract more work. Many programmers have at least one horror story to tell about overly ambitious expectations.
+
+Of course, the existence of a wise neutral party who can see deeply into the problems and provide a fair solution is close to a myth. Judging takes time. SourceXchange promises that these peer reviewers will be paid, and this money will probably have to come from the people offering the bounty. They're the only ones putting money into the system in the long run. Plus, the system must make the people offering bounties happy in the long run or it will fail.
+
+The CoSource project suggests that the developers must come up with their own authority who will judge the end of the job and present this person with their bid. The sponsors then decide whether to trust the peer reviewer when they okay the job. The authorities will be judged like the developers, and summaries of their reputation will be posted on the site. While it isn't clear how the reviewers will be paid, it is not too much to expect that there will be some people out there who will do it just for the pleasure of having their finger in the stew. They might, for instance, want to offer the bounty themselves but be unable to put up much money. Acting as a reviewer would give them the chance to make sure the software did what they wanted without putting up much cash.
+
+One of the most difficult questions is how to run the marketplace. A wide-open solution would let the sponsors pay when the job was done satisfactorily. The first person to the door with running code that met the specs would be the one to be paid. Any other team that showed up later would get nothing.
+
+This approach would offer the greatest guarantees of creating well-running code as quickly as possible. The programmers would have a strong incentive to meet the specs quickly in order to win the cash. The downside is that the price would be driven up because the programmers would be taking on more risk. They would need to capitalize their own development and take the chance that someone might beat them to the door. Anxious sponsors who need some code quickly should be willing to pay the price.
+
+Another solution is to award contracts before any work is done. Developers would essentially bid on the project and the sponsor would choose one to start work. The process would be fairly formal and favor the seasoned, connected programmers. A couple of kids working in their spare time might be able to win an open bounty, but they would be at a great disadvantage in this system. Both CoSource and SourceXchange say that they'll favor this sort of preliminary negotiation.
+
+If the contracts are awarded before work begins, the bounty system looks less like a wild free-for-all and more like just a neutral marketplace for contract programmers to make their deals. Companies like Cygnus already bid to be paid for jobs that produce open source. These marketplaces for bounties will need to provide some structure and efficiencies to make it worth people's time to use them.
+
+One possible benefit of the bounty system is to aggregate the desires of many small groups. While some bounties will only serve the person who asks for them, many have the potential to help people who are willing to pay. An efficient system should be able to join these people together into one group and put their money into one pot.
+
+CoSource says that it will try to put together the bounties of many small groups and allow people to pay them with credit cards. It uses the example of a group of Linux developers who would gather together to fund the creation of an open source version of their favorite game. They would each chip in $10, $20, or $50 and when the pot got big enough, someone would step forward. Creating a cohesive political group that could effectively offer a large bounty is a great job for these sites.
+
+Of course, there are deeper questions about the flow of capital and the nature of risks in these bounty-based approaches. In traditional software development, one group pays for the creation of the software in the hope that they'll be able to sell it for more than it cost to create. Here, the programmer would be guaranteed a fixed payment if they accomplished the job. The developer's risk is not completely eliminated because the job might take longer than they expected, but there is little of the traditional risk of a start-up firm. It may not be a good idea to separate the risk-taking from the people doing the work. That is often the best way to keep people focused and devoted.
+
+Each of these three systems shows how hard the free software industry is working at finding a way for people to pay their bills and share information successfully. Companies like Cygnus or BitKeeper are real efforts built by serious people who can't live off the largesse of a university or a steady stream of government grants. Their success shows that it is quite possible to make money and give the source code away for free, but it isn't easy.
+
+Still, there is no way to know how well these companies will survive the brutal competition that comes from the free flow of the source code. There are no barriers to entry, so each corporation must be constantly on its toes. The business becomes one of service, not manufacturing, and that changes everything. There are no grand slam home runs in that world. There are no billion-dollar explosions. Service businesses grow by careful attention to detail and plenty of focused effort.
+
+1~ Fork
+
+A T-shirt once offered this wisdom to the world: "If you love someone, set them free. If they come back to you, it was meant to be. If they don't come back, hunt them down and kill them." The world of free software revolves around letting your source code go off into the world. If things go well, others will love the source code, shower it with bug fixes, and send all of this hard work flowing back to you. It will be a shining example of harmony and another reason why the free software world is great. But if things don't work out, someone might fork you and there's nothing you can do about it.
+
+"Fork" is a UNIX command that allows you to split a job in half. UNIX is an operating system that allows several people to use the same computer to do different tasks, and the operating system pretends to run them simultaneously by quickly jumping from task to task. A typical UNIX computer has at least 100 different tasks running. Some watch the network for incoming data, some run programs for the user, some watch over the file system, and others do many menial tasks.
+
+If you "fork a job," you arrange to split it into two parts that the computer treats as two separate jobs. This can be quite useful if both jobs are often interrupted, because one can continue while the other one stalls. This solution is great if two tasks, A and B, need to be accomplished independently of each other. If you run them as a single job and do A first, then B won't start until A finishes. This can be quite inefficient if A stalls. A better solution is to fork the job and treat A and B as two separate tasks.
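+The split can be seen with the fork() system call itself. Here is a minimal sketch using Python's os.fork, a thin wrapper over the UNIX call, so it runs only on UNIX-like systems; the exit code 7 is just an arbitrary marker chosen for the example:

```python
import os

def fork_job():
    """Split the current job in two: the parent carries on with
    task A while the child handles task B independently."""
    pid = os.fork()
    if pid == 0:
        # Child process: task B can proceed even if task A stalls.
        os._exit(7)  # arbitrary marker so the parent can check on us
    # Parent process: task A continues, then reaps the child.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(fork_job())
```

+The operating system now schedules the two halves as separate tasks, jumping between them just as it does for the hundred-odd other jobs on the machine.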
+
+Most programmers don't spend much time talking about these kinds of forks. They're mainly concerned about forks in the political process.
+
+Programmers use "fork" to describe a similar process in the organization of a project, but the meaning is quite different. Forks of a team mean that the group splits and goes in different directions. One part might concentrate on adding support for buzzword Alpha while the other might aim for full buzzword Beta compatibility.
+
+In some cases, there are deep divisions behind the decision to fork. One group thinks buzzword Alpha is a sloppy, brain-dead kludge job that's going to blow up in a few years. The other group hates buzzword Beta with a passion. Disputes like this happen all the time. They often get resolved peacefully when someone comes up with buzzword Gamma, which eclipses them both. When no Gamma arrives, people start talking about going their separate ways and forking the source. If the dust settles, two different versions start appearing on the Net competing with each other for the hearts and CPUs of the folks out there. Sometimes the differences between the versions are great and sometimes they're small. But there's now a fork in the evolution of the source code, and people have to start making choices.
+
+The free software community has a strange attitude toward forks. On one hand, forking is the whole reason Stallman wrote the free software manifesto. He wanted the right and the ability to mess around with the software on his computer. He wanted to be free to change it, modify it, and tear it to shreds if he felt like doing it one afternoon. No one should be able to stop him from doing that. He wanted to be totally free.
+
+On the other hand, forking can hurt the community by duplicating efforts, splitting alliances, and sowing confusion in the minds of users. If Bob starts writing and publishing his own version of Linux out of his house, then he's taking some energy away from the main version. People start wondering if the version they're running is the Missouri Synod version of Emacs or the Christian Baptist version. Where do they send bug fixes? Who's in charge? Distribution groups like Debian or Red Hat have to spend a few moments trying to decide whether they want to include one version or the other. If they include both, they have to choose one as the default. Sometimes they just throw up their hands and forget about both. It's a civil war, and those are always worse than a plain old war.
+
+Some forks evolve out of personalities that just rub each other the wrong way. I've heard time and time again, "Oh, we had to kick him out of the group because he was offending people." Many members of the community consider this kind of forking bad. They use the same tone of voice to describe a fork of the source code as they use to describe the breakup of two lovers. It is sad, unfortunate, unpleasant, and something we'll never really understand because we weren't there. Sometimes people take sides because they have a strong opinion about who is right. They'll usually go off and start contributing to that code fork. In other cases, people don't know which to pick and they just close their eyes and join the one with the cutest logo.
+
+2~ Forks and the Threat of Disunity
+
+Eric Raymond once got in a big fight with Richard Stallman about the structure of Emacs Lisp. Raymond said, "The Lisp libraries were in bad shape in a number of ways. They were poorly documented. There was a lot of work that had gone on outside the FSF that should be integrated and I wanted to merge in the best work from outside."
+
+The problem is that Stallman didn't want any part of Raymond's work. "He just said, 'I won't take those changes into the distribution.'
+That's his privilege to do," Raymond said.
+
+That put Raymond in an awkward position. He could continue to do the work, create his own distribution of Emacs, and publicly break with Stallman. If he were right and the Lisp code really needed work, then he would probably find more than a few folks who would cheer his work. They might start following him by downloading his distribution and sending their bug fixes his way. Of course, if he were wrong, he would set up his own web server, do all the work, put his Lisp fixes out there, and find that no one would show up. He would be ignored because people found it easier to just download Stallman's version of Emacs, which everyone thought was sort of the official version, if one could be said to exist. They didn't use the Lisp feature too much so it wasn't worth thinking about how some guy in Pennsylvania had fixed it. They were getting the real thing from the big man himself.
+
+Of course, something in between would probably happen. Some folks who cared about Lisp would make a point of downloading Raymond's version. The rest of the world would just go on using the regular version. In time, Stallman might soften and embrace the changes, but he might not. Perhaps someone would come along and create a third distribution that melded Raymond's changes with Stallman's into a harmonious version. That would be a great thing, except that it would force everyone to choose from among three different versions.
+
+In the end, Raymond decided to forget about his improvements.
+"Emacs is too large and too complicated and forking is bad. There was in fact one group that got so fed up with working with him that they did fork Emacs. That's why XEmacs exists. But major forks like that are rare events and I didn't want to be part of perpetrating another one,"
+he said. Someone else was going to have to start the civil war by firing those shots at Fort Sumter.
+
+2~ BSD's Garden of Forking Paths
+
+Some forks aren't so bad. There often comes a time when people have legitimate reasons to go down different paths. What's legitimate and what's not is often decided after a big argument, but the standard reasons are the same ones that drive programming projects. A good fork should make a computer run software a gazillion times faster. Or it might make the code much easier to port to a new platform. Or it might make the code more secure. There are a thousand different reasons, and it's impossible to really measure which is the right one. The only true measure is the number of people who follow each branch of the fork. If a project has a number of good disciples and the bug fixes are coming quickly, then people tend to assume it is legitimate.
+
+The various versions of the BSD software distribution are some of the more famous splits around. All are descended, in one way or another, from the original versions of UNIX that came out of Berkeley. Most of the current ones evolved from the 4.3BSD version and Networking Release 2, and some integrated code from the 4.4BSD release after it became free. All benefited from the work of the hundreds of folks who spent their free time cloning the features controlled by AT&T. All of them are controlled by the same loose BSD license that gives people the right to do pretty much anything they want to the code. All of them share the same cute daemon as a mascot.
+
+That's where the similarities end. The FreeBSD project is arguably the most successful version. It gets a fairly wide distribution because its developers have a good deal with Walnut Creek CD-ROM Distributors, a company that packages up large bundles of freeware and shareware on the Net and then sells them on CD-ROM. The system is well known and widely used because the FreeBSD team concentrates on making the software easy to use and install on Intel computers. Lately, they've created an Alpha version, but most of the users run the software on x86 chips. Yahoo! uses FreeBSD.
+
+FreeBSD, of course, began as a fork of an earlier project known as 386BSD, started by Bill Jolitz. This version of BSD was more of an academic example or a proof-of-concept than a big open source project designed to take over the world.
+
+Jordan Hubbard, someone who would come along later to create a fork of 386BSD, said of Jolitz's decision to create a 386-based fork of BSD, "Bill's real contribution was working with the 386 port. He was kind of an outsider. No one else saw the 386 as interesting. Berkeley had a myopic attitude toward PCs. They were just toys. No one would support Intel. That was the climate at the time. No one really took PCs seriously. Bill's contribution was to realize that PCs were going places."
+
+From the beginning, Hubbard and several others saw the genius in creating a 386 version of BSD that ran on the cheapest hardware available. They started adding features and gluing in bug fixes, which they distributed as a file that modified the main 386BSD distribution from Jolitz. This was practical at the beginning when the changes were few, but it continued out of respect for the original creator, even after the patches grew complicated.
+
+Finally, a tussle flared up in 1993. Hubbard writes in his history of the project,
+
+_1 386BSD was Bill Jolitz's operating system, which had been up to that point suffering rather severely from almost a year's worth of neglect. As the patchkit swelled ever more uncomfortably with each passing day, we were in unanimous agreement that something had to be done and decided to try and assist Bill by providing this interim "cleanup" snapshot. Those plans came to a rude halt when Bill Jolitz suddenly decided to withdraw his sanction from the project and without any clear indication of what would be done instead.
+
+The FreeBSD team pressed on despite the denial. They decided to fork. Today, 386BSD is largely part of the history of computing while FreeBSD is a living, current OS, at least at the time this book was written. The FreeBSD team has done a good job distributing bug-free versions, and they've been paid off in loyalty, disciples, and money and computers from Walnut Creek. Forking can often be good for society because it prevents one person or clique from thwarting another group. The free software world is filled with many of the same stories of politics that float across the watercoolers of corporations, but the stories don't have to end the same way. If one boss or group tries to shut down a free software project, it really can't. The source code is freely available, and people are free to carry on. The FreeBSD project is one example.
+
+Of course, good software can have anti-forking effects. Linus Torvalds said in one interview, "Actually, I have never even checked 386BSD out; when I started on Linux it wasn't available (although Bill Jolitz's series on it in Dr. Dobb's Journal had started and were interesting), and when 386BSD finally came out, Linux was already in a state where it was so usable that I never really thought about switching. If 386BSD had been available when I started on Linux, Linux would probably never have happened." So if 386BSD had been easier to find on the Net and better supported, Linux might never have begun.
+
+Once someone starts forking BSD, one fork is rarely enough. Another group known as NetBSD also grew fed up with the progress of 386BSD in 1993. This group, however, wanted to build a platform that ran well on many different machines, not just the Intel 386. The FreeBSD folks concentrated on doing a good job on Intel boxes, while the NetBSD team spread its efforts across many architectures. Their slogan became "Of course it runs NetBSD."
+
+NetBSD runs on practically every machine you can imagine, including older machines like the Amiga and the Atari. It has also been embraced by companies like NeXT, which bundled parts of it into the version of the OS for the Macintosh known as Rhapsody. Of course, the most common chips like the Intel line and the Alpha are also well supported.
+
+The NetBSD community emerged at the same time as the FreeBSD world. Neither team realized the other was working on the same project at the same time. But once they started releasing their own versions, they stayed apart.
+
+"The NetBSD group has always been the purest. They saw it as an OS research vehicle. That was what CSRG was doing. Their only mandate was to do interesting research," said Hubbard. "It's a very different set of goals than we concentrated on for the 386. The important thing for us was to polish it up. We put all of our efforts into polishing, not porting. This was part of our bringing BSD to the masses kind of thing. We're going for numbers. We're going for mass penetration."
+
+This orientation meant that NetBSD never really achieved the same market domination as FreeBSD. The group only recently began shipping versions of NetBSD on CD-ROM. FreeBSD, on the other hand, has always excelled at attracting new and curious users thanks to its relationship with Walnut Creek. Many experimenters and open-minded users picked up one of the disks, and a few became excited enough to actually make some contributions. The Walnut Creek partnership also helped the FreeBSD team understand what it needed to do to make its distribution easier to install and simpler to use. That was Walnut Creek's business, after all.
+
+2~ Flames, Fights, and the Birth of OpenBSD
+
+The forking did not stop with NetBSD. Soon one member of the NetBSD world, Theo de Raadt, began to rub some people the wrong way. One member of the OpenBSD team told me, "The reason for the split from NetBSD was that Theo got kicked out. I don't understand it completely. More or less they say he was treating users on the mailing list badly. He does tend to be short and terse, but there's nothing wrong with that. He was one of the founding members of NetBSD and they asked him to resign."
+
+Now, four years after the split began in 1995, de Raadt is still a bit hurt by their decision. He says about his decision to fork BSD again, "I had no choice. I really like what I do. I really like working with a community. At the time it all happened, I was the second most active developer in their source tree. They took the second most active developer and kicked him off."
+
+Well, they didn't kick him out completely, but they did take away his ability to "commit" changes to the source tree and make them permanent. After the split, de Raadt had to e-mail his contributions to a member of the team so they could check them in. This didn't sit well with de Raadt, who saw it as both a demotion and a real impediment to doing work.
+
+The root of the split is easy to see. De Raadt is energetic. He thinks and speaks quickly about everything. He has a clear view about most free software and isn't afraid to share it. While some BSD members are charitable and conciliatory to Richard Stallman, de Raadt doesn't bother to hide his contempt for Stallman's organization. "The Free Software Foundation is one of the most misnamed organizations," he says, explaining that only BSD-style licenses give users the true freedom to do whatever they want with the software. The GNU General Public License is a pair of handcuffs to him.
+
+De Raadt lives in Calgary and dresses up his personal web page with a picture of himself on top of a mountain wearing a bandanna. If you want to send him a pizza for any reason, he's posted the phone number of his favorite local shop (403/531-3131). Unfortunately, he reports that they don't take foreign credit card numbers anymore.
+
+He even manages to come up with strong opinions about simple things that he ostensibly loves. Mountain biking is a big obsession, but, he says, "I like mud and despise 'wooded back-alleys' (what most people call logging roads)." That's not the best way to make friends with less extreme folks who enjoy a Sunday ride down logging roads.
+
+If you like cats, don't read what he had to say about his pets: "I own cats. Their names are Galileo and Kepler--they're still kittens. Kepler--the little bitch--can apparently teleport through walls. Galileo is a rather cool monster. When they become full-grown cats I will make stew & soup out of them. (Kepler is only good for soup)."
+
+Throwaway comments like this have strange effects on the Net, where text is the only way people can communicate. There are no facial gestures or tonal clues to tell people someone is joking around, and some people don't have well-developed scanners for irony or sarcasm. Some love the sniping and baiting, while others just get annoyed. They can't let snide comments slide off their backs. Eventually, the good gentlefolk who feel that personal kindness and politeness should still count for something in this world grow fed up and start trying to do something.
+
+It's easy to see how this affected the NetBSD folks, who conduct their business in a much more proper way. Charles Hannum, for instance, refused to talk to me about the schism unless I promised that he would be able to review the parts of the book that mentioned NetBSD. He also suggested that forks weren't particularly interesting and shouldn't be part of the book. Others begged off the questions with more polite letters saying that the split happened a long time ago and wasn't worth talking about anymore. Some pointed out that most of the members of the current NetBSD team weren't even around when the split happened.
+
+While their silence may be quite prudent and a better way to spend a life, it certainly didn't help me get both sides of the story. I pointed out that they wouldn't accept code into the NetBSD tree if the author demanded the right to review the final distribution. I said they could issue a statement or conduct the interview by e-mail. One argued that there was no great problem if a few paragraphs had to be deleted from the book in the end. I pointed out that I couldn't give the hundreds of people I spoke with veto power over the manuscript. It would be impossible to complete. The book wasn't being written by a committee. No one at NetBSD budged.
+
+De Raadt, on the other hand, spoke quite freely with no preconditions or limitations. He still keeps a log file with a good number of email letters exchanged during the separation and makes it easy to read them on his personal website. That's about as open as you can get. The NetBSD folks who refused to talk to me, on the other hand, seemed intent on keeping control of the story. Their silence came from a different world than the website offering the phone number of the local pizza place as a hint. They were Dragnet; de Raadt was Politically Incorrect.
+
+When the NetBSD folks decided to do something, they took away de Raadt's access to the source tree. He couldn't just poke around the code making changes as he went along. Well, he could poke around and make changes, but not to the official tree with the latest version. The project was open source, after all. He could download the latest release and start fiddling, but he couldn't make quasi-official decisions about what source was part of the latest official unreleased version.
+
+De Raadt thought this was a real barrier to work. The latest version of the code was now kept out of his reach. He was stuck with the last release, which might be several months old. That put him at an extreme disadvantage because he might start working on a problem only to discover that someone had already fixed or changed it.
+
+Chris Demetriou found himself with the task of kicking de Raadt off of the team. His letter, which can still be found on the OpenBSD site, said that de Raadt's rough behavior and abusive messages had driven away people who might have contributed to the project. Demetriou also refused to talk about NetBSD unless he could review the sections of the book that contained his comments. He also threatened to take all possible action against anyone who even quoted his letters in a commercial book without his permission.
+
+De Raadt collected this note from Demetriou and the firestorm that followed in a 300k file that he keeps on his website. The NetBSD core tried to be polite and firm, but the matter soon degenerated into a seven-month-long flame war. After some time, people started having meta-arguments, debating whether the real argument was more or less like the bickering of a husband and wife who happen to work at the same company. Husbands and wives should keep their personal fights out of the workplace, they argued. And so they bickered over whether de Raadt's nastygrams were part of his "job" or just part of his social time.
+
+Through it all, de Raadt tried to get back his access to the source tree of NetBSD and the group tried to propose all sorts of mechanisms for making sure he was making a "positive" contribution and getting along with everyone. At one point, they offered him a letter to sign. These negotiations went nowhere, as de Raadt objected to being forced to make promises that other contributors didn't have to make.
+
+De Raadt wrote free software because he wanted to be free to make changes or write code the way he wanted to do it. If he had wanted to wear the happy-face of a positive contributor, he could have gotten a job at a corporation. Giving up the right to get in flame wars and speak at will may not be that much of a trade-off for normal people with full-time jobs. Normal folks swallow their pride daily. Normal people don't joke about turning their cats into soup. But de Raadt figured it was like losing a bit of his humanity and signing up willingly for a set of manacles. It just wasn't livable.
+
+The argument lasted months. De Raadt felt that he tried and tried to rejoin the project without giving away his honor. The core NetBSD team argued that they just wanted to make sure he would be positive. They wanted to make sure he wouldn't drive away perfectly good contributors with brash antics. No one ever gained any ground in the negotiations and in the end, de Raadt was gone.
+
+The good news is that the fork didn't end badly. De Raadt decided he wasn't going to take the demotion. He just couldn't do good work if he had to run all of his changes by one of the team that kicked him off the project. It took too long to ask "Mother, may I?" to fix every little bug. If he was going to have to run his own tree, he might as well go whole hog and start his own version of BSD. He called it OpenBSD. It was going to be completely open. There were going to be relatively few controls on the members. If the NetBSD core ran its world like the Puritan villagers in a Nathaniel Hawthorne story, then de Raadt was going to run his like Club Med.
+
+OpenBSD struggled for several months as de Raadt tried to attract more designers and coders to his project. It was a battle for popularity in many ways, not unlike high school. When the cliques split, everyone had to pick and choose. De Raadt had to get some folks in his camp if he was going to make some lemonade.
+
+The inspiration came to de Raadt one day when he discovered that the flame war archive on his web page was missing a few letters. He says that someone broke into his machine and made a few subtle deletions. Someone who had an intimate knowledge of the NetBSD system. Someone who cared about the image portrayed by the raw emotions in the supposedly private letters.
+
+He clarifies his comments to make it clear that he's not sure it was someone from the NetBSD core. "I never pursued it. If it happens, it's your own fault. It's not their fault," he said. Of course, the folks from NetBSD refused to discuss this matter or answer questions unless they could review the chapter.
+
+This break-in gave him a focus. De Raadt looked at NetBSD and decided that it was too insecure. He gathered a group of like-minded people and began to comb the code for potential insecurities.
+
+"About the same time, I got involved with a company that wrote a network security scanner. Three of the people over there started playing with the source tree and searching for security holes. We started finding problems all over the place, so we started a comprehensive security audit. We started from the beginning. Our task load increased massively. At one time, I had five pieces of paper on my desk full of things to look for," he said.
+
+Security holes in operating systems are strange beasts that usually appear by mistake when the programmer makes an unfounded assumption. One of the best-known holes is the buffer overflow, which became famous in 1988 after Robert Morris, then a graduate student at Cornell, unleashed a program that used the loophole to bring several important parts of the Internet to a crawl.
+
+In this case, the programmer creates a buffer to hold all of the information that someone on the net might send. Web browsers, for instance, send requests like "GET http://www.nytimes.com" to ask for the home page of the New York Times website. The programmer must set aside some chunk of memory to hold this request, usually a block that is about 512 bytes long. The programmer chooses an amount that should be more than enough for all requests, including the strangest and most complicated.
+
+Before the attack became well known, programmers would often ignore the length of the request and assume that 512 bytes was more than enough for anything. Who would ever type a URL that long? Who had an e-mail address that long? Attackers soon figured out that they could send more than 512 bytes and start writing over the rest of the computer's memory. The program would dutifully take in 100,000 bytes and keep writing them to memory. An attacker could use the overflow to load his own software onto the machine and start it running. And attackers did this.
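+The pattern the attackers exploited can be sketched in a few lines of C. The handler name and the 512-byte figure follow the description above, but the code is purely illustrative, not taken from any real server.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical request handler, illustrating the unchecked pattern
 * described above. */
size_t handle_request(const char *input) {
    char buffer[512];      /* "more than enough for any request" */
    strcpy(buffer, input); /* no length check: input longer than 511
                            * bytes writes past the end of buffer,
                            * clobbering adjacent memory */
    printf("request: %s\n", buffer);
    return strlen(buffer);
}
```

+With a short, well-behaved request the code works fine, which is exactly why the bug went unnoticed for so long; only an oversized request exposes the hole.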
+
+De Raadt and many others started combing the code for loopholes. They made sure every program that used a buffer included a bit of code that would check to ensure that no hacker was trying to sneak in more than the buffer could hold. They checked thousands of other possibilities. Every line was checked and changes were made even if there was no practical way for someone to get at the potential hole. Many buffers, for instance, only accept information from the person sitting at the terminal. The OpenBSD folks changed them, too.
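+The standard fix the auditors applied can be sketched the same way: measure the request against the buffer's capacity and refuse anything that doesn't fit. Again, the names are illustrative, not OpenBSD's actual code.

```c
#include <string.h>

/* Hypothetical audited handler: same 512-byte buffer, but overlong
 * input is now rejected instead of overrunning memory. */
int handle_request_checked(const char *input) {
    char buffer[512];
    if (strlen(input) >= sizeof buffer)
        return -1;             /* too long: refuse the request */
    strcpy(buffer, input);     /* now provably within bounds */
    return 0;
}
```

+The check costs one comparison per request, which is why the audit team could apply it everywhere, even to buffers that no remote attacker could plausibly reach.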
+
+This audit began soon after the fork in 1995 and continues to this day. Most of the major work is done and the group likes to brag that they haven't had a hole that could be exploited remotely to gain root access in over two years. The latest logo boasts the tag line "Sending kiddies to /dev/null since 1995." That is, any attacker is going to go nowhere with OpenBSD because all of the extra information from the attacks would be routed to /dev/null, a UNIX conceit for being erased, ignored, and forgotten.
+
+The OpenBSD fork is a good example of how bad political battles can end up solving some important technical problems. Everyone fretted and worried when de Raadt announced that he was forking the BSD world one more time. This would further dilute the resources and sow confusion among users. The concentration on security, however, gave OpenBSD a brand identity, and the other BSD distributions keep at least one eye on the bug fixes distributed by the OpenBSD team. These often lead to surreptitious fixes in their own distribution.
+
+The focus also helped him attract new coders who were interested in security. "Some of them used to be crackers and they were really cool people. When they become eighteen, it becomes a federal offense, you know," de Raadt says.
+
+This fork may have made the BSD community stronger because it effectively elevated the focus on security and cryptography to the highest level. In the corporate world, it's like taking the leader of the development team responsible for security and promoting him from senior manager to senior executive vice president of a separate division. The autonomy also gave the OpenBSD team the ability to make bold technical decisions for their own reasons. If they saw a potential security problem that might hurt usability or portability, the OpenBSD team could make the change without worrying that other team members would complain. OpenBSD was about security. If you wanted to work on portability, go to NetBSD. If you cared about ease-of-use on Intel boxes, go to FreeBSD. Creating a separate OpenBSD world made it possible to give security a strong focus.
+
+2~ Temporary Forks
+
+It's a mistake to see these forks as absolute splits that never intermingle again. While NetBSD and OpenBSD continue to glower at each other across the Internet ether, the groups share code frequently because the licenses prevent one group from freezing out another.
+
+Jason Wright, one of the OpenBSD developers, says, "We do watch each other's source trees. One of the things I do for fun is take drivers out of FreeBSD and port them to OpenBSD. Then we have support for a new piece of hardware."
+
+He says he often looks for drivers written by Bill Paul, because "I've gotten used to his style. So I know what to change when I receive his code. I can do it in about five to six hours. That is, at least a rough port to test if it works."
+
+Still, the work is not always simple. He says some device drivers are much harder to handle because both groups have taken different approaches to the problem. "SCSI drivers are harder," he says. "There's been some divergence in the layering for SCSI. They're using something called CAM. We've got an older implementation that we've stuck to."
+
+That is, the FreeBSD team has reworked the structure of the way that SCSI information is shipped to the parts of the system asking for it. The OpenBSD team hasn't adopted those changes, perhaps for security reasons, perhaps out of inertia, or perhaps because no one has gotten around to thinking about it. The intermingling isn't perfect.
+
+Both NetBSD and FreeBSD work on security, too. They watch the change logs of OpenBSD and note when security holes are fixed. They discover their own holes as well, and OpenBSD may use those as inspiration to plug its own code. The discoveries and plugs go both ways as the groups compete to make a perfect OS.
+
+Kirk McKusick says, "The NetBSD and the OpenBSD have extremely strong personalities. Each one is absolutely terrified the other will gain an inch."
+
+While the three forks of BSD may cooperate more than they compete, the Linux world still likes to look at the BSD world with a bit of contempt. All of the forks look somewhat messy, even if having the freedom to fork is what Stallman and GNU are ostensibly fighting to achieve. The Linux enthusiasts seem to think, "We've got our ducks in a single row. What's your problem?" It's sort of like the Army mentality. If it's green, uniform, and the same everywhere, then it must be good.
+
+The BSD world lacks the monomaniacal cohesion of Linux, and this seems to hurt its image. The BSD community has always felt that Linux is stealing the limelight that should be shared at least equally among the groups. Linux is really built around a cult of Linus Torvalds, and that makes great press. It's very easy for the press to take photos of one man and put him on the cover of a magazine. It's simple, clean, neat, and perfectly amenable to a 30-second sound bite. Explaining that there's FreeBSD, NetBSD, OpenBSD, and who knows what smaller versions waiting in the wings just isn't as manageable.
+
+Eric Raymond, a true disciple of Linus Torvalds and Linux, sees it in technical terms. The BSD community is proud of the fact that each distribution is built out of one big source tree. They get all the source code for all the parts of the kernel, the utilities, the editors, and whatnot together in one place. Then they push the compile button and let people work. This is a crisp, effective, well-managed approach to the project.
+
+The Linux groups, however, are not that coordinated at all. Torvalds only really worries about the kernel, which is his baby. Someone else worries about GCC. Everyone comes up with their own source trees for the parts. The distribution companies like Red Hat worry about gluing the mess together. It's not unusual to find version 2.0 of the kernel in one distribution while another is sporting version 2.2.
+
+"In BSD, you can do a unified make. They're fairly proud of that," says Raymond. "But this creates rigidities that give people incentives to fork. The BSD things that are built that way develop new spin-off groups each week, while Linux, which is more loosely coupled, doesn't fork."
+
+He elaborates, "Somebody pointed out that there's a parallel of politics. Rigid political and social institutions tend to change violently if they change at all, while ones with more play in them tend to change peacefully."
+
+But this distinction may be semantic. Forking does occur in the Linux realm, but it happens as small diversions that get explained away with other words. Red Hat may choose to use GNOME, while another distribution like SuSE might choose KDE. The users will see a big difference because both tools create virtual desktop environments. You can't miss them. But people won't label this a fork. Both distributions are using the same Linux kernel, and no one has gone off and said, "To hell with Linus, I'm going to build my own version of Linux." Everyone's technically still calling their system Linux, even if they're building something that looks fairly different on the surface.
+
+Jason Wright sees this organization as a good thing. "The one thing that all of the BSDs have over Linux is a unified source tree. We don't have Joe Blow's tree or Bob's tree," he says. In other words, when the BSD camps fork, they do it officially, with great ceremony, and make sure the world knows of their separate creations. They make a clear break, and this makes it easier for developers.
+
+Wright says that this single source tree made it much easier for them to turn OpenBSD into a very secure OS. "We've got the security over Linux. They've recently been doing a security audit for Linux, but they're going to have a lot more trouble. There's not one place to go for the source code."
+
+To extend this to political terms, the Linux world is like the 1980s when Ronald Reagan ran the Republican party with the maxim that no one should ever criticize another Republican. Sure, people argued internally about taxes, abortion, crime, and the usual controversies, but they displayed a rare public cohesion. No one criticizes Torvalds, and everyone is careful to pay lip service to the importance of Linux cohesion even as they're essentially forking by choosing different packages.
+
+The BSD world, on the other hand, is like the biblical realm in Monty Python's film The Life of Brian. In it, one character enumerates the various splinter groups opposing the occupation by the Romans. There is the People's Front of Judea, the Judean People's Front, the Front of Judean People, and several others. All are after the same thing and all are manifestly separate. The BSD world may share a fair amount of code; it may share the same goals; but it presents its work as coming from three different camps.
+
+John Gilmore, one of the founders of the free software company Cygnus and a firm believer in the advantages of the GNU General Public License, says, "In Linux, each package has a maintainer, and patches from all distributions go back through that maintainer. There is a sense of cohesion. People at each distribution work to reduce their differences from the version released by the maintainer. In the BSD world, each tree thinks they own each program--they don't send changes back to a central place because that violates the ego model."
+
+Jordan Hubbard, the leader of FreeBSD, is critical of Raymond's characterization of the BSD world. "I've always had a special place in my heart for that paper because he painted positions that didn't exist," Hubbard said of Raymond's piece "The Cathedral and the Bazaar." "You could point to just the Linux community and decide which part was cathedral-oriented and which part was bazaar-oriented.
+
+"Every single OS has cathedral parts and bazaar parts. There are some aspects of development that you leave deliberately unfocused and you let people contribute at their own pace. It's sort of a bubble-up model and that's the bazaar part. Then you have the organizational part of every project. That's the cathedral part. They're the gatekeepers and the standards setters. They're necessary, too," he said.
+
+When it comes right down to it, there's even plenty of forking going on over the definition of a fork. When some of the Linux team point at the BSD world and start making fun of the forks, the BSD team gets defensive. The BSD guys always get defensive because their founder isn't on the cover of all the magazines. The Linux team hints that maybe, if they weren't forking, they would have someone with a name in lights, too.
+
+Hubbard is right. Linux forks just as much; its users just call it a distribution or an experimental kernel or a patch kit. No one has the chutzpah to spin off their own rival political organization. No one has the political clout.
+
+2~ A Fork, a Split, and a Reunion
+
+Now, after all of the nasty stories of backstabbing and bickering, it is important to realize that there are actually some happy stories of forks that merge back together. One of the best stories comes from the halls of an Internet security company, C2Net, that dealt with a fork in a very peaceful way.
+
+C2Net is a Berkeley-based company run by some hard-core advocates of online privacy and anonymity. The company began by offering a remailing service that allowed people to send anonymous e-mails to one another. Their site would strip off the return address and pass the message along to the recipient with no trace of who sent it. They aimed to fulfill the needs of whistleblowers, leakers, and others in positions of weakness who wanted to use anonymity to avoid reprisals.
+
+The company soon took on a bigger goal when it decided to modify the popular Apache web server by adding strong encryption to make it possible for people to process credit cards over the web. The technology, known as SSL for "secure sockets layer," automatically arranged for all of the traffic between a remote web server and the user to be scrambled so that no one could eavesdrop. SSL is a very popular technology on the web today because many companies use it to scramble credit card numbers to defeat eavesdroppers.
+
+C2Net drew a fair deal of attention when one of its founders, Sameer Parekh, appeared on the cover of Forbes magazine with a headline teasing that he wanted to "overthrow the government." In reality, C2Net wanted to move development operations overseas, where there were no regulations on the creation of cryptographically secure software. C2Net went where the talent was available and priced right.
+
+In this case, C2Net chose a free version of SSL written by Eric Young known as SSLeay. Young's work is another of the open source success stories. He wrote the original version as a hobby and released it with a BSD-like license. Everyone liked his code, downloaded it, experimented with it, and used it to explore the boundaries of the protocol. Young was just swapping code with the Net and having a good time.
+
+Parekh and C2Net saw an opportunity. They would merge two free products, the Apache web server and Young's SSLeay, and make a secure version so people could easily set up secure commerce sites for the Internet. They called this product Stronghold and put it on the market commercially.
+
+C2Net's decision to charge for the software rubbed some folks the wrong way. They were taking two free software packages and making something commercial out of them. This wasn't just a fork; it seemed like robbery to some. Of course, these complaints weren't really fair. Both collections of code emerged with a BSD-style license that gave everyone the right to create and sell commercial additions to the product. There wasn't any GPL-like requirement that users give back to the community. If no one wanted a commercial version, the authors shouldn't have released the code with such an open license in the first place.
+
+Parekh understands these objections and says that he has weathered plenty of criticism on the internal mailing lists. Still, he feels that the Stronghold product contributed a great deal to the strength of Apache by legitimizing it.
+
+"I don't feel guilty about it. I don't think we've contributed a whole lot of source code, which is one of the key metrics that the people in the Apache group are using. In my perspective, the greatest contribution we've made is market acceptance," he said.
+
+Parekh doesn't mean that he had to build market acceptance among web developers. The Apache group was doing a good job of accomplishing that through their guerrilla tactics, excellent product, and free price tag. But no one was sending a message to the higher levels of the computer industry, where long-term plans were being made and corporate deals were being cut. Parekh feels that he built first-class respectability for the Apache name by creating and supporting a first-class product that big corporations could use successfully. He made sure that everyone knew that Apache was at the core of Stronghold, and people took notice.
+
+Parekh's first job was getting a patent license from RSA Data Security. Secure software like SSL relies on the RSA algorithm, an idea that was patented by three MIT professors in the 1970s. This patent is controlled by RSA Data Security. While the company publicized some of its licensing terms and went out of its way to market the technology, negotiating a license was not a trivial detail that could be handled by some free software team. Who's going to pay the license? Who's going to compute what some percentage of free is? Who's going to come up with the money? These questions are much easier to answer if you're a corporation charging customers to buy a product. C2Net was doing that. People who bought Stronghold got a license from RSA that ensured they could use the method without being sued.
+
+The patent was only the first hurdle. SSL is a technology that tries to bring some security to web connections by encrypting the traffic between the browser and the server. Netscape added one feature that allows a connection to be established only if the server has a digital certificate that identifies it. These certificates are issued to a company only after it pays a fee to a registered certificate agent like Verisign.
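+
+That certificate check lives on in every modern TLS stack. As an illustration (this is a present-day Python sketch, not anything C2Net shipped), the standard ssl module's default client settings still demand a verifiable certificate before talking to a server:

```python
import ssl

# A default client context, as modern Python builds it, refuses any server
# that cannot present a certificate chaining to a trusted authority; this is
# the same gatekeeping role Verisign played for the early SSL servers.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: a certificate is demanded
print(context.check_hostname)                    # True: the name on the cert must match
```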
+
+In the beginning, certificate agents like Verisign would issue the certificates only for servers created by big companies like Netscape or Microsoft. Apache was just an amorphous group on the Net. Verisign and the other authorities weren't paying attention to it.
+
+Parekh went to them and convinced them to start issuing the certificates so he could start selling Stronghold.
+
+"We became number three, right behind Microsoft and Netscape. Then they saw how much money they were making from us, so they started signing certificates for everyone," he said. Other Apache projects that used SSL found life much easier once Parekh showed Verisign that there was plenty of money to be made from folks using free software.
+
+Parekh admits that C2Net has not made many contributions to the code base of Apache, but he doesn't feel that this is the best measure. The political and marketing work of establishing Apache as a worthwhile tool is something that he feels may have been more crucial to its long-term health. When he started putting money in the hands of Verisign, he got those folks to realize that Apache had a real market share. That cash talked.
+
+The Stronghold fork, however, did not make everyone happy. SSL is an important tool and someone was going to start creating another free version. C2Net hired Eric Young and his collaborator Tim Hudson and paid them to do some work for Stronghold. The core version of Young's original SSLeay stayed open, and both continued to add bug fixes and other enhancements over time. Parekh felt comfortable with this relationship. Although C2Net was paying the salaries of Young and Hudson, they were also spending some of their spare time keeping their SSLeay toolkit up to date.
+
+Still, the notion of a free version of SSL was a tempting project for someone to undertake. Many people wanted it. Secure digital commerce demanded it. There were plenty of economic incentives pushing for it to happen. Eventually, a German named Ralf S. Engelschall stepped up and wrote a new version he called mod_SSL. Engelschall is a well-regarded contributor to the Apache effort, and he has written or contributed to a number of different modules that could be added to Apache. He calls one the "all-dancing-all-singing mod_rewrite module" for handling URLs easily.
+
+Suddenly, Engelschall's new version meant that there were dueling forks. One version came out of Australia, where the creators worked for a company selling a proprietary version of the code. C2Net distributed the Australian version and concentrated on making their product easy to install. The other came out of Europe, distributed for free by someone committed to an open source license. The interface may have been a bit rougher, but it didn't cost any money and it came with the source code. The potential for battle between SSLeay and mod_SSL could have been great.
+
+The two sides reviewed their options. Parekh must have felt a bit frustrated and at a disadvantage. He had a company that was making a good product with repeat buyers. Then an open source solution came along. C2Net's Stronghold cost money and didn't come with source code, while Engelschall's mod_SSL cost nothing and came with code. Those were major negatives that he could combat only by increasing service. When Engelschall was asked whether his free version was pushing C2Net, he sent back the e-mail with the typed message, "[grin]."
+
+In essence, C2Net faced the same situation as many major companies like Microsoft and Apple do today. The customers now had a viable open source solution to their problems. No one had to pay C2Net for the software. The users in the United States needed a patent license, but that would expire in late 2000. Luckily, Parekh is a true devotee of the open source world, even though he has been running a proprietary source company for the last several years. He looked at the problem and decided that the only way to stay alive was to join forces and mend the fork.
+
+To make matters worse, Hudson and Young left C2Net to work for RSA Data Security. Parekh lost two important members of his team, and he faced intense competition. Luckily, his devotion to open source came to the rescue. Hudson and Young couldn't take back any of the work they did on SSLeay. It was open source and available to everyone.
+
+Parekh, Engelschall, several C2Net employees, and several others sat down (via e-mail) and created a new project they called OpenSSL. This group would carry the torch of SSLeay and keep it up-to-date. Young and Hudson stopped contributing and devoted their time to creating a commercial version for RSA Data Security.
+
+Parekh says of the time, "Even though it was a serious setback for C2Net to have RSA pirate our people, it was good for the public. Development really accelerated when we started OpenSSL. More people became involved and control became less centralized. It became more like the Apache group. It's a lot bigger than it was before and it's much easier for anyone to contribute."
+
+Parekh also worked on mending fences with Engelschall. C2Net began to adopt some of the mod_SSL code and blend it into their latest version of Stronghold. To make this blending easier, C2Net began sending some of their formerly proprietary code back to Engelschall so he could mix it with mod_SSL by releasing it as open source. In essence, C2Net was averting a disastrous competition by making nice and sharing with this competitor. It was a surprising move that might not have happened in a regular business.
+
+Parekh's decision seems open and beneficent, but it has a certain amount of self-interest behind it. He explains, "We just decided to contribute all of the features we had into mod_SSL so we could start using mod_SSL internally, because it makes our maintenance of that easier. We don't have to maintain our own proprietary version of mod_SSL. Granted, we've made the public version better, but those features weren't significant."
+
+This mixing wasn't particularly complicated--most of it focused on the structure of the parts of the source code that handle the interface. Programmers call these the "hooks" or the "API." If Stronghold and mod_SSL use the same hook structure, then connecting them is a piece of cake. If Engelschall had changed the hook structure of mod_SSL, then C2Net would have had to do more work.
+
+The decision to contribute the code stopped Engelschall from doing the work himself in a way that might have caused more grief for C2Net.
+"He was actually planning on implementing them himself, so we were better off contributing ours to avoid compatibility issues," says Parekh. That is to say, Parekh was worried that Engelschall was going to go off and implement all the features C2Net used, and there was a very real danger that Engelschall would implement them in a way that was unusable to Parekh. Then there would be a more serious fork that would further split the two groups. C2Net wouldn't be able to borrow code from the free version of OpenSSL very easily. So it decided to contribute its own code, guaranteeing that OpenSSL fit neatly into Stronghold. In essence, C2Net chose to give a little so it could continue to get all of the future improvements.
+
+It's not much different from the car industry. There's nothing inherently better or worse about cars that have their steering wheel on the right-hand side. They're much easier to use in England. But if some free car engineering development team emerged in England, it might make sense for a U.S. company to donate work early to ensure that the final product could have the steering wheel on either side of the car without extensive redesign. If Ford just sat by and hoped to grab the final free product, it might find that the British engineers happily designed for the only roads they knew.
+
+Engelschall is happy about this change. He wrote in an e-mail message, "They do the only reasonable approach: They base their server on mod_SSL because they know they cannot survive against the Open Source solution with their old proprietary code. And by contributing stuff to mod_SSL they implicitly make their own product better. This way both sides benefit."
+
+Parekh and C2Net now have a challenge. They must continue to make the Stronghold package better than the free version to justify the cost people are paying.
+
+Not all forks end with such a happy-faced story of mutual cooperation. Nor do all stories in the free software world end with the moneymaking corporation turning around and giving back their proprietary code to the general effort. But the C2Net/OpenSSL case illustrates how the nature of software development encourages companies and people to give and cooperate to satisfy their own selfish needs. Software can do a variety of wonderful things, but the structure often governs how easy it is for some of us to use. It makes sense to spend some extra time and make donations to a free software project if you want to make sure that the final product fits your specs.
+
+The good news is that most people don't have much incentive to break off and fork their own project. If you stay on the same team, then you can easily use all the results produced by the other members. Cooperating is so much easier than fighting that people have a big incentive to stay together. If it weren't so selfish, it would be heartwarming.
+
+1~ Core
+
+Projects in corporations have managers who report to other managers who report to the CEO who reports to the board. It's all very simple in theory, although it never really works that way in practice. The lines of control get crossed as people form alliances and struggle to keep their bosses happy.
+
+Projects in the world of open source software, on the other hand, give everyone a copy of the source code and let them be the master of the code running on their machine. Everyone gets to be the Board of Directors, the CEO, and the cubicle serfs rolled into one. If a free software user doesn't like something, then he has the power to change it. You don't like that icon? Boom, it's gone. You don't want KDE on your desktop? Whoosh, it's out of there. No vice president in charge of MSN marketing in Redmond is going to force you to have an icon for easy connection to the Microsoft Network on your desktop. No graphic designer at Apple is going to force you to look at that two-faced Picasso-esque MacOS logo every morning of your life just because their marketing studies show that they need to build a strong brand identity. You're the captain of your free software ship and you decide the menu, the course, the arrangement of the deck chairs, the placement of lookouts from which to watch for icebergs, the type of soap, and the number of toothpicks per passenger to order. In theory, you're the Lord High Master and Most Exalted Ruler of all Software Big and Small, Wild and Wonderful, and Interpreted and Compiled on your machine.
+
+In practice, no one has the time to use all of that power. It's downright boring to worry about soap and toothpicks. It's exhausting to rebuild window systems when they fail to meet your caviar-grade tastes in software.
+
+No one has the disk space to maintain an Imelda Marcos-like collection of screen savers, window managers, layout engines, and games. So you start hanging around with some friends who want similar things and the next thing you know, you've got a group. A group needs leadership, so the alpha dog emerges. Pretty soon, it all begins to look like a corporate development team. Well, kind of.
+
+Many neophytes in the free software world are often surprised to discover that most of the best free source code out there comes from teams that look surprisingly like corporate development groups. While the licenses and the rhetoric promise the freedom to go your own way, groups coalesce for many of the same reasons that wagon trains and convoys emerge. There's power in numbers. Sometimes these groups even get so serious that they incorporate. The Apache group recently formed the Apache Foundation, which has the job of guiding and supporting the development of the Apache web server. It's all very official looking. For all we know, they're putting cubicles in the foundation offices right now.
+
+This instinct to work together is just as powerful a force in the free software world as the instinct to grab as much freedom as possible and use it every day. If anything, it's just an essential feature of human life. The founders of the United States of America created an entire constitution without mentioning political parties, but once they pushed the start button, the parties appeared out of nowhere.
+
+These parties also emerged in the world of free source software. When projects grew larger than one person could safely handle, they usually evolved into development teams. The path for each group is somewhat different, and each one develops its own particular style. The strength of this organization is often the most important determinant of the strength of the software, because if the people can work together well, then the problems in the software will get fixed.
+
+The most prevalent form of government in these communities is the benign dictatorship. Richard Stallman wrote some of the most important code in the GNU pantheon, and he continues to write new code and help maintain the old software. The world of the Linux kernel is dominated by Linus Torvalds. The original founders always seem to hold a strong sway over the group. Most of the code in the Linux kernel is written by others and checked out by a tight circle of friends, but Torvalds still has the final word on many changes.
+
+The two of them are, of course, benign dictators, and they don't really have any other choice. Both have seemingly absolute power, but this power is based on a mixture of personal affection and technical respect. There are no legal bounds that keep all of the developers in line. There are no rules about intellectual property or non-disclosure. Anyone can grab all of the Linux kernel or GNU source code, run off, and start making whatever changes they want. They could rename it FU, Bobux, Fredux, or Meganux and no one could stop them. The old threats of lawyers, guns, and money aren't anywhere to be seen.
+
+2~ Debian's Core Team
+
+The Debian group has a wonderful pedigree and many praise it as the purest version of Linux around, but it began as a bunch of outlaws who cried mutiny and tossed Richard Stallman overboard. Well, it wasn't really so dramatic. In fact, "mutiny" isn't really the right word when everyone is free to use the source code however they want.
+
+Bruce Perens remembers that the split occurred less than a year after the project began and says, "Debian had already started. The FSF had been funding Ian Murdock for a few months. Richard at that time wanted us to make all of the executables unstripped."
+
+When programmers compile software and convert it from human-readable source code into machine-readable binary code, they often leave in some human-readable information to help debug the program. Another way to say this is that the programmers don't strip the debugging tags out of the code. These tags are just the names of the variables used in the software, and a programmer can use them to analyze what each variable held when the software started going berserk.
+
+Perens continued, "His idea was if there was a problem, someone can send a stacktrace back without having to recompile a program and then making it break again. The problem with this was distributing executables unstripped makes them four times as large. It was a lot of extra expense and trouble. And our software didn't dump core anyway. That was really the bottom line. That sort of bug did not come up so often that it was necessary for us to distribute things that way anyways."
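+
+Stallman's stack-trace argument is easy to demonstrate in a language that keeps its symbols by default. The sketch below (with hypothetical function names, not Debian code) shows the kind of report a user could mail back to a maintainer without recompiling anything; a stripped binary would yield only raw addresses:

```python
import traceback

def parse_config(path):
    # A deliberately failing routine, standing in for a crashing program.
    raise ValueError("bad configuration: " + path)

try:
    parse_config("/etc/example.conf")
except ValueError:
    report = traceback.format_exc()

# Because the function name survives in the trace, the report alone tells
# the maintainer exactly where the program went berserk.
print("parse_config" in report)  # True
```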
+
+Still, Stallman insisted it was a good idea. Debian resisted and said it took up too much space and raised duplication costs. Eventually, the debate ended as the Debian group went their own way. Although Stallman paid Murdock and wrote much of the GNU code on the disk, the GPL prevented him from doing much. The project continued. The source code lived on. And the Debian disks kept shipping. Stallman was no longer the titular leader of Debian.
+
+The rift between the two camps has largely healed. Perens now praises Stallman and says that the two of them are still very close philosophically on the most important issues in the free software world. Stallman, for his part, uses Debian on his machines because he feels the closest kinship with it.
+
+Perens says, "Richard's actually grown up a lot in the last few years. He's learned a lot more about what to do to a volunteer because obviously we're free to walk away at any time."
+
+Stallman himself remembers the argument rather eloquently. "The fact is, I wanted to influence them, but I did not want to force them. Forcing them would go against my moral beliefs. I believe that people are entitled to freedom in these matters, which means that I cannot tell them what to do," he told me. "I wrote the GPL to give everyone freedom from domination by authors of software, and that includes me on both sides."
+
+There's much debate over the best way to be a benign dictator. Eric Raymond and many others feel that Torvalds's greatest claim to success was creating a good development model. Torvalds released new versions of his kernel often and he tried to share the news about the development as openly as possible. Most of this news travels through a mailing list that is open to all and archived on a website. The mailing list is sort of like a perpetual congress where people debate the technical issues behind the latest changes to the kernel. It's often much better than the real United States Congress because the debate floor is open to all and there are no glaring special interests that try to steer the debate in their direction. Eventually, Torvalds makes a decision, and it becomes final. Usually he doesn't need to do anything. The answer is pretty obvious to everyone who's followed the discussion.
+
+This army is a diverse bunch. At a recent Linux conference, Jeff Bates, one of the editors of the influential website Slashdot (www.slashdot.org),
+pointed me toward the Debian booth, which was next to theirs. "If you look in the booth, you can see that map. They put a pushpin in the board for every developer and project leader they have around the world. China, Netherlands, Somalia, there are people coming from all over."
+
+James Lewis-Moss is one of the members, who just happened to be in the Debian booth next door. He lives in Asheville, North Carolina, which is four hours west of the Convention Center in downtown Raleigh. The Debian group normally relies upon local volunteers to staff the booth, answer questions, distribute CD-ROMs, and keep people interested in the project.
+
+Lewis-Moss is officially in charge of maintaining several packages, including XEmacs, a program that is used to edit text files, read email and news, and do a number of other tasks. A package is the official name for a bundle of smaller programs, files, data, and documentation. These parts are normally installed together because the software won't work without all of its component parts.
+
+The packager's job is to download the latest software from the programmer and make sure that it runs well with the latest version of the other software to go in the Debian distribution. This crucial task is why groups like Debian are so necessary. If Lewis-Moss does his job well, someone who installs Debian on his computer will not have any trouble using XEmacs.
+
+Lewis-Moss's job isn't exactly programming, but it's close. He has to download the source code, compile the program, run it, and make sure that the latest version of the source works correctly with the latest version of the Linux kernel and the other parts of the OS that keep a system running. The packager must also ensure that the program works well with the Debian-specific tools that make installation easier. If there are obvious bugs, he'll fix them himself. Otherwise, he'll work with the author on tracking down and fixing the problems.
+
+He's quite modest about this effort and says, "Most Debian developers don't write a whole lot of code for Debian. We just test things to make sure it works well together. It would be offensive to some of the actual programmers to hear that some of the Debian folks are writing the programs when they're actually not."
+
+He added that many of the packagers are also programmers in other projects. In his case, he writes Java programs during the day for a company that makes point-of-sale terminals for stores.
+
+Lewis-Moss ended up with this job in the time-honored tradition of committees and volunteer organizations everywhere. "I reported a bug in X Emacs to Debian. The guy who had the package at that time said,
+'I don't want this anymore. Do you want it?' I guess it was random. It was sort of an accident. I didn't intend to become involved in it, but it was something I was interested in. I figured 'Hell, might as well.'"
+
+The Linux development effort moves slowly forward with thousands of stories like Lewis-Moss's. Folks come along, check out the code, and toss in a few contributions that make it a bit better for themselves. The mailing list debates some of the changes if they're controversial or if they'll affect many people. It's a very efficient system in many ways, if you can stand the heat of the debates.
+
+Most Americans are pretty divorced from the heated arguments that boil through the corridors of Washington. The view of the House and Senate floor is largely just for show because most members don't attend the debates. The real decisions are made in back rooms.
+
+The mailing lists that form the core of the different free software projects take all of this debate and pipe it right through to the members. While some discussions occur in private letters and even in the occasional phone call, much of the problem and controversy is dissected for everyone to read. This is crucial because most of the decisions are made largely by consensus.
+
+"Most of the decisions are technical and most of them will have the right answer or the best possible one at the moment," says Lewis-Moss.
+"Often things back down to who is willing to do the work. If you're willing to do the work and the person on the other side isn't willing, then yours is the right one by definition."
+
+While the mailing list looks like an idealized notion of a congress for the Linux kernel development, it is not as perfect as it may seem. Not all comments are taken equally because friendships and political alliances have evolved through time. The Debian group elected a president to make crucial decisions that can't be settled by deep argument and consensus. Beyond that, the president doesn't have many powers.
+
+While the Linux and GNU worlds are dominated by their one great Sun King, many other open source projects have adopted a more modern governance structure, one more like Debian's. The groups are still fairly ad hoc and unofficial, but they are more democratic. There's less idolatry and less dependence on one person.
+
+The Debian group is a good example of a very loose-knit structure with less reliance on the central leader. In the beginning, Ian Murdock started the distribution and did much of the coordination. In time, the mailing list grew and attracted other developers like Bruce Perens. As Murdock grew busier, he started handing off work to others. Eventually, he handed off central control to Perens, who slowly delegated more of the control until there was no key maintainer left. If someone dies in a bus crash, the group will live on.
+
+Now a large group of people act as maintainers for the different packages. Anyone who wants to work on the project can take responsibility for a particular package. This might be a small tool like a game or a bigger tool like the C compiler. In most cases, the maintainer isn't the author of the software or even a hard-core programmer. The maintainer's job is to make sure that the particular package continues to work with all the rest. In many cases, this is a pretty easy job. Most changes in the system don't affect simple programs. But in some cases it's a real challenge and the maintainer must act as a liaison between Debian and the original programmer. Sometimes the maintainers fix the bugs themselves. Sometimes they just report them. But in either case, the maintainer must make sure that the code works.
+
+Every so often, Debian takes the latest stable kernel from Torvalds's team and mixes it together with all of the other packages. The maintainers check out their packages and when everything works well, Debian presses another CD-ROM and places the pile of code on the net. This "freeze" gives the Debian group a stable platform that people can always turn to.
+
+"Making a whole OS with just a crew of volunteers and no money is a pretty big achievement. You can never discount that. It's easy for Red Hat to do it. They're all getting paid. The fact is that Debian makes a good system and still continues to do so. I don't think that there've been that many unpaid, collaborative projects that complex before," says Perens.
+
+When Perens took over at Debian he brought about two major changes. The first was to create a nonprofit corporation called Software in the Public Interest and arrange for the IRS to recognize it as a bona fide charitable organization. People and companies who donate money and equipment can deduct the donations from their taxes.
+
+Perens says that the group's budget is about $10,000 a year. "We pay for hardware sometimes. Although a lot of our hardware is donated. We fly people to conferences so they can promote Debian. We have a trade show booth. In general we get the trade show space from the show for free or severely discounted. We also have the conventional PO boxes, accounting, phone calls. The project doesn't have a ton of money, but it doesn't spend a lot, either."
+
+The Debian group also wrote the first guidelines for acceptable open source software during Perens's time in charge. These eventually mutated to become the definition endorsed by the Open Source Initiative. This isn't too surprising, since Perens was one of the founders of the Open Source Initiative.
+
+Debian's success has inspired many others. Red Hat, for instance, borrowed a significant amount of work done by Debian when they put together their distribution, and Debian borrows some of Red Hat's tools. When Red Hat went public, it arranged for Debian members to get a chance to buy some of the company's stock reserved for friends and family members. They recognized that Debian's team of package maintainers helped get their job done.
+
+Debian's constitution and strong political structure have also inspired Sun, which is trying to unite its Java and Jini customers into a community. The company is framing its efforts to support customers as the creation of a community that's protected by a constitution. The old paradigm of customer support is being replaced by a more active world of customer participation and representation.
+
+Of course, Sun is keeping a tight grip on all of these changes. They protect their source code with a Community Source License that places crucial restrictions on the ability of these community members to stray. There's no real freedom to fork. Sun's not willing to embrace Debian's lead on that point, in part because they say they're afraid that Microsoft will use that freedom to scuttle Java.
+
+2~ Apache's Corporate Core
+
+The Apache group is one of the more businesslike development teams in the free source world. It emerged in the mid-1990s when the World Wide Web was just blossoming. In the early years, many sites relied on web servers like the free version that came from the NCSA, the supercomputer center at the University of Illinois that helped spark the web revolution by writing a server and a browser. This code was great, but it rarely served all of the purposes of the new webmasters who were starting new sites and building new tools as quickly as they could.
+
+Brian Behlendorf, one of the founders of the Apache group, remembers the time. "It wasn't just a hobbyist kind of thing. We had need for commercial-quality software and this was before Netscape released its software. We had developed our own set of patches that we traded like baseball cards. Finally we said, 'We had so many patches that overlap. Why don't we create our own version and continue on our own.'"
+
+These developers then coalesced into a core group and set up a structure for the code. They chose the basic, BSD-style license for their software, which allowed anyone to use the code for whatever purpose without distributing the source code for any changes. Many of the group lived in Berkeley then and still live in the area today. Of course, the BSD-style license also made sense for many of the developers who were involved in businesses and often didn't want to jump into the open source world with what they saw as Stallman's absolutist fervor. Businesses could adopt the Apache code without fear that some license would force them to reveal their source code later. The only catch was that they couldn't call the product Apache unless it was an unmodified copy of something approved by the Apache group.
+
+Several members of the group went off and formed their own companies and used the code as the basis for their products. Sameer Parekh based the Stronghold server product on Apache after his company added the encryption tools used to protect credit card information. Others just used versions of Apache to serve up websites and billed others for the cost of development.
+
+In 1999, the group decided to formalize its membership and create a not-for-profit corporation that was devoted to advancing the Apache server source code and the open source world in general. New members can apply to join the corporation, and they must be approved by a majority of the current members. This membership gets together and votes on a board of directors who make the substantive decisions about the group.
+
+This world isn't much different from the world before the corporation. A mailing list still carries debate and acts as the social glue for the group. But now the decision-making process is formalized. Before, the members of the core group would assign responsibility to different people, but decisions could only be made by rough consensus. This mechanism could be bruising and fractious when consensus was not easy. It forced the group to work hard to develop potential compromises, but pushed them to shy away from tougher decisions. Now the board can vote and a pure majority can win.
+
+This seriousness and corporatization are probably the only possible steps that the Apache group could take. They've always been devoted to advancing the members' interests. Many of the other open source projects like Linux were hobbies that became serious. The Apache project was always filled with people who were in the business of building the web. While some might miss the small-town kind of feel of the early years, the corporate structure is bringing more certainty and predictability to the realm. The people don't have to wear suits now that it's a corporation. It just ensures that tough decisions will be made at a predictable pace.
+
+Still, the formalism adds plenty of rigidity to the structure. An excited newcomer can join the mailing lists, write plenty of code, and move mountains for the Apache group, but he won't be a full member until he is voted in. In the past, an energetic outsider could easily convert hard work into political clout in the organization. Now, a majority of the current members could keep interlopers out of the inner circle. This bureaucracy doesn't have to be a problem, but it has the potential to fragment the community by creating an institution where some people are more equal than others. Keeping the organization open in practice will be a real challenge for the new corporation.
+
+1~ T-Shirts
+
+If there's a pantheon for marketing geniuses, then it must include the guy who realized people would pay $1 for several cents' worth of sugar water if it came in a shapely bottle blessed by the brand name Coca-Cola. It might also include the guy who first figured out that adding new blue crystals to detergent would increase sales. It is a rare breed that understands how to get people to spend money they don't need to spend.
+
+The next induction ceremony for this pantheon should include Robert Young, the CEO of Red Hat Software, who helped the Linux and the open source world immeasurably by finding a way to charge people for something they could get for free. This discovery made the man rich, which isn't exactly what the free software world is supposed to do. But his company also contributed a sense of stability and certainty to the Linux marketplace, and that was sorely needed. Many hard-core programmers, who know enough to get all of the software for free, are willing to pay $70 to Red Hat just because it is easier. While some may be forever jealous of the millions of dollars in Young's pocket, everyone should realize that bringing Linux to a larger world of computer illiterates requires good packaging and hand-holding. Free software wouldn't be anywhere if someone couldn't find a good way to charge for it.
+
+The best way to understand why Young ranks with the folks who discovered how to sell sugar water is to go to a conference like LinuxExpo. In the center of the floor is the booth manned by Red Hat Software, the company Young started in Raleigh, North Carolina, after he got through working in the computer-leasing business. Young is in his fifties now and manages to survive despite the fact that most of his company's devotees are much closer to 13. Red Hat bundles together some of the free software made by the community and distributed over the Net and puts it on one relatively easy-to-use CD-ROM. Anyone who wants to install Linux or some of its packages can simply buy a disk from Red Hat and push a bunch of keys. All of the information is on one CD-ROM, and it's relatively tested and pretty much ready to go. If things go wrong, Red Hat promises to answer questions by e-mail or telephone to help people get the product working.
+
+Red Hat sells their disk at prices that range from $29.95 to $149.95. That buys the user a pretty box, three CD-ROMs including some demonstration versions of other software, all of the source code, a manual, and some telephone or e-mail support. That is pretty much the same stuff that comes in software boxes from a normal company. The manual isn't as nice as the manuals produced by Apple or Microsoft, but it's not too bad.
+
+The big difference is that no one needs to buy the CD-ROM from Red Hat. Anyone can download all of the software from the Net. A friend of mine, Hal Skinner, did it yesterday and told me, "I just plugged it in and the software downloaded everything from the Net. I got everything in the Red Hat 6.0 distribution, and it didn't cost me anything."
+
+Of course, Red Hat isn't hurt too much by folks who grab copies without paying for them. In fact, the company maintains a website that makes it relatively easy for people to do just that. Red Hat didn't write most of the code. They just grabbed it from various authors throughout the Net who published it under the GNU General Public License. They grabbed it without paying for it, so they're not really put out if someone grabs from them.
+
+The ability to snag GPL'ed software from around the Net keeps their development costs much lower than those of Sun, Apple, or Microsoft. They never paid most of the authors of the code they ship. They just package it up. Anyone else can just go find it on the Net and grab it themselves. This pretty much guarantees that Red Hat will be in a commodity business.
+
+To make matters worse for Red Hat, the potential competitors don't have to go out onto the Net and reassemble the collection of software for themselves. The GPL specifically forbids people from placing limitations on redistributing the source code. That means that a potential competitor doesn't have to do much more than buy a copy of Red Hat's disk and send it off to the CD-ROM pressing plant. People do this all the time. One company at the exposition was selling copies of all the major Linux distributions like Red Hat, Slackware, and OpenBSD for about $3 per disk. If you bought in bulk, you could get 11 disks for $25. Not a bad deal if you're a consumer.
+
+So, on one side of the floor, Young had a flashy booth filled with workers talking about what you could get if you paid $50 or more for Red Hat's version 6.0 with new enhancements like GNOME. Just a few hundred feet away, a company was selling ripoff copies of the same CDs for $3. Any company that is able to stay in business in a climate like that must be doing something right.
+
+It's not much different from the supermarket. Someone can pay $1 or more for two liters of Coca-Cola, or they can walk over a few aisles and buy Kool-Aid and raw sugar. It may be much cheaper to buy the raw ingredients, but many people don't.
+
+Young is also smart enough to use the competition from other cheap disk vendors to his advantage. He can't do anything about the GPL restrictions that force him to share with knockoff competitors, so he makes the best of them. "When people complain about how much we're charging for free software, I tell them to just go to CheapBytes.com," he said at the Expo. This is just one of the companies that regularly duplicates the CDs of Red Hat and resells them. Red Hat often gets some heat from people saying that the company is merely profiting off the hard work of others who've shared their software with the GPL. What gives them the right to charge so much for other people's software? But Young points out that people can get the software for $3. There must be a rational reason why they're buying Red Hat.
+
+Of course, the two packages aren't exactly equal. Both the original and the knockoff CD-ROM may have exactly the same contents, but the extras are different. The Red Hat package comes with "support," a rather amorphous concept in the software business. In theory, Red Hat has a team of people sitting around their offices diligently waiting to answer the questions of customers who can't get Red Hat software to do the right thing.
+
+In practice, the questions are often so hard or nebulous that even the support team can't answer them. When I first tried to get Red Hat to run on an old PC, the support team could only tell me that they never promised that their package would run on my funky, slightly obscure Cyrix MediaGX chip. That wasn't much help. Others probably have had better luck because they were using a more standard computer. Of course, I had no trouble installing Red Hat on my latest machine, and I didn't even need to contact tech support.
+
+The Red Hat packages also come with a book that tries to answer some of the questions in advance. This manual describes the basic installation procedure, but it doesn't go into any detail about the software included in the distribution. If you want to know how to run the database package, you need to dig into the online support provided by the database's developers.
+
+Many people enjoy buying these extra packages like the manual and the support, even if they never use them. Red Hat has blossomed by providing some hand-holding. Sure, some programmers could download the software from the Internet on their own, but most people don't want to spend the time needed to develop the expertise.
+
+When I say "Red Hat software," I really mean free source software that Red Hat picked up from the Net and knit into a coherent set of packages that should be, in theory, pretty bug free, tested, and ready for use. Red Hat is selling some hand-holding and filtering for the average user who doesn't want to spend time poking around the Net, checking out the different versions of the software, and ensuring that they work well together. Red Hat programmers have spent some time examining the software on the CD-ROM. They've tested it and occasionally improved it by adding new code to make it run better.
+
+Red Hat also added a custom installation utility to make life easier for people who want to add Red Hat to their computer.~{ Er, I mean to say "add Linux" or "add GNU/Linux." "Red Hat" is now one of the synonyms for free software. }~ They could have made this package installation tool proprietary. After all, Red Hat programmers wrote the tool on company time. But Young released it with the GNU General Public License, recognizing that the political value of giving something back was worth much more than the price they could charge for the tool.
+
+This is part of a deliberate political strategy to build goodwill among the programmers who distribute their software. Many Linux users compare the different companies putting together free source software CD-ROMs and test their commitment to the free software ideals. Debian, for instance, is very popular because it is a largely volunteer project that is careful to only include certified free source software on their CD-ROMs. Debian, however, isn't run like a business and it doesn't have the same attitude. This volunteer effort and enlightened pursuit of the essence of free software make the Debian distribution popular among the purists.
+
+Distributors like Caldera, on the other hand, include nonfree software with their disk. You pay $29.95 to $149.95 for a CD-ROM and get some nonfree software like a word processor tossed in as a bonus. This is a great deal if you're only going to install the software once, but the copyright on the nonfree software prevents you from distributing the CD-ROM to friends. Caldera is hoping that the extras it throws in will steer people toward its disk and get them to choose Caldera's version of Linux. Many of the purists, like Richard Stallman, hate this practice and think it is just a not very subtle way to privatize the free software. If the average user isn't free to redistribute all the code, then there's something evil afoot. Of course, Stallman or any of the other software authors can't do anything about this because they made their software freely distributable.
+
+Young is trying to walk the line between these two approaches. Red Hat is very much in the business of selling CD-ROMs. The company has a payroll with more than a handful of programmers who are drawing nonvolunteer salaries to keep the distributions fresh and the code clean. But he's avoided the temptation of adding not-so-free code to his disks. This gives him more credibility with the programmers who create the software and give it away. In theory, Young doesn't need to ingratiate himself with the various authors of GPL-protected software packages. They've already given the code away. Their power is gone. In practice, he gains plenty of political goodwill by playing the game by their rules.
+
+Several companies are already making PCs with Linux software installed at the factory. While they could simply download the software from the Net themselves and create their own package, several have chosen to bundle Red Hat's version with their machines. Sam Ockman, the president of Penguin Computing, runs one of those companies.
+
+Ockman is a recent Stanford graduate in his early twenties and a strong devotee of the Linux and GPL world. He says he started his company to prove that Linux could deliver solid, dependable servers that could compete with the best that Sun and Microsoft have to offer.
+
+Ockman has mixed feelings about life at Stanford. While he fondly remembers the "golf course-like campus," he says the classes were too easy. He graduated with two majors despite spending plenty of time playing around with the Linux kernel. He says that the computer science department's hobbled curriculum drove him to Linux. "Their whole CS community is using a stupid compiler for C on the Macintosh," he says. "Why don't they start you off on Linux? By the time you get to [course] 248, you could hack on the Linux kernel or your own replacement kernel. It's just a tragedy that you're sitting there writing virtual kernels on a Sun system that you're not allowed to reboot."
+
+In essence, the computer science department was keeping their kids penned up in the shallow end of the pool instead of taking them out into the ocean. Ockman found the ocean on his own time and started writing GPL-protected code and contributing to the political emergence of free software.
+
+When Ockman had to choose a version of Linux for his Penguin computers, he chose Red Hat. Bob Young's company made the sale because it was playing by the rules of the game and giving software back with a GPL. Ockman says, "We actually buy the box set for every single one. Partially because the customers like to get the books, but also to support Red Hat. That's also why we picked Red Hat. They're the most free of all of the distributions."
+
+Debian, Ockman concedes, is also very free and politically interesting, but says that his company is too small to support multiple distributions. "We only do Red Hat. That was a very strategic decision on our part. All of the distributions are pretty much the same, but there are slight differences in this and that. We could have a twelve-person Debian group, but it would just be a nightmare for us to support all of these different versions of Linux."
+
+Of course, Penguin Computing could have just bought one Red Hat CD-ROM and installed their software on all of the machines going out the door. That would have let them cut their costs by about $50. The GPL lets anyone install the software as often as they wish. But this wouldn't be pure savings because Ockman is also offloading some of his own work when he bundles a Red Hat package with his computers. He adds, "Technically the box set I include allows customers to call Red Hat, but no one ever does, nor do we expect them or want them to call anyone but us." In essence, his company is adding some extra support with the Red Hat box.
+
+The support is an important add-on that Young is selling, but he realized long ago that much more was on sale. Red Hat was selling an image, the sense of belonging, and the indeterminate essence of cool. Soda manufacturers realized that anyone could put sugar and water in a bottle, but only the best could rise above the humdrum nature of life by employing the best artists in the land to give their sugar water the right hip feeling. So Young invested in image. His T-shirts and packages have always been some of the most graphically sophisticated on the market. While some folks would get girlfriends or neighbors to draw the images that covered their books and CDs, Red Hat used a talented team to develop their packaging.
+
+Young jokes about this. He said he was at a trade show talking to a small software company that was trying to give him one of their free promotional T-shirts. He said, "Why don't you try giving away the source code and selling the T-shirts?"
+
+At the LinuxExpo, Red Hat was selling T-shirts, too. One slick number retailing for $19 just said "The Revolution of Choice" in Red Hat's signature old typewriter font. Others for sale at the company's site routinely run for $15 or more. They sucked me in. When I ordered my first Red Hat disk from them, I bought an extra T-shirt to go with the mix.
+
+The technology folks at Red Hat may be working with some cutting-edge software that makes the software easy to install, but the marketing group was stealing its plays from Nike, Pepsi, and Disney. They weren't selling running shoes, sugar water, or a ride on a roller coaster--they were selling an experience. Red Hat wasn't repackaging some hacker's science project from the Net, it was offering folks a ticket to a revolution. If the old 1960s radicals had realized this, they might have been able to fund their movement without borrowing money from their square parents. Selling enough groovy, tie-dyed T-shirts would have been enough.~{ Apple is an old hand at the T-shirt game, and internal projects create T-shirts to celebrate milestones in development. These images were collected in a book, which may be as good a technical history of Apple as might exist. Many projects, including ones that failed, are part of the record. }~
+
+Many of the other groups are part of the game. The OpenBSD project sold out of their very fashionable T-shirts with wireframe versions of its little daemon logo soon after the beginning of the LinuxExpo. They continue to sell more T-shirts from their website. Users can also buy CD-ROMs from OpenBSD.
+
+Several attendees wear yellow copyleft shirts that hold an upside-down copyright logo <:=Copyleft> arranged so the open side points to the left.
+
+The most expensive T-shirt at the show came with a logo that imitated one of the early marketing images of the first Star Wars movie. The shirt showed Torvalds and Stallman instead of Han Solo and Luke Skywalker under a banner headline of "OS Wars." The shirt cost only $100, but "came with free admission to the upcoming Linux convention in Atlanta."
+
+The corporate suits, of course, have adjusted as best they can. The IBM folks at the show wore identical khaki outfits with nicely cut and relatively expensive polo shirts with IBM logos. A regular suit would probably stick out less than the crisp, clean attempt to split the difference between casual cool and button-down business droid.
+
+Of course, the T-shirts weren't just about pretty packaging and slick images. The shirts also conveyed some information about someone's political affiliations in the community and showed something about the person's technical tastes. Sure, someone could wear an OpenBSD shirt because they liked the cute little daemon logo, but also because they wanted to show that they cared about security. The OpenBSD project began because some users wanted to build a version of UNIX that was much more secure. The group prides itself on fixing bugs early and well. Wearing an OpenBSD shirt proclaims a certain alliance with this team's commitment to security. After all, some of the profits from the shirts went to pay for the development of the software. Wearing the right T-shirt meant choosing an alliance. It meant joining a tribe.
+
+Young is keenly aware that much of his target market is 13-year-old boys who are flexing their minds and independence for the first time. The same images of rebellion that brought James Dean his stardom are painted on the T-shirts. Some wear shirts proclaiming TOTAL WORLD DOMINATION SOON. Raging against Microsoft is a cliché that is avoided as much as it is still used. The shirts are a mixture of parody, bluster, wit, and confidence. Of course, they're usually black. Everyone wears black.
+
+Ockman looks at this market competition for T-shirts and sees a genius. He says, "I think Bob Young's absolutely brilliant. Suddenly he realized that there's no future in releasing mainframes. He made a jump after finding college kids in Carolina [using Linux]. For him to make that jump is just amazing. He's a marketing guy. He sat down and figured it out.
+
+"Every time I hear him talk," Ockman says about Young, "he tells a different story about ketchup. If you take people who've never had ketchup before in their life and you blindly feed them ketchup, they have no taste for ketchup. They don't like it." If you feed them ketchup over time, they begin to demand it on their hamburgers.
+
+"No one who's never had Coca-Cola before would like it," Ockman continues. "These things are purely a branding issue. It has to be branded for cool in order for people to sit down and learn everything they have to know."
+
+In essence, Young looked around and saw that a bunch of scruffy kids were creating an OS that was just as good, if not better, than the major OSs costing major sums of money. This OS was, best of all, free for all comers. The OS had a problem, though. The scruffy kids never marketed their software. The deeply intelligent, free-thinking hackers picked up on how cool it was, but the rest of society couldn't make the jump. The scruffy kids didn't bother to try to market it to the rest of society. They were artists.
+
+Most people who looked at such a situation would have concluded that this strange clan of techno-outsiders was doomed to inhabit the periphery of society forever. There was no marketing of the product because there was no money in the budget and there would never be money in the budget because the software was free. Young recognized that you could still market the software without owning it. You could still slap on a veneer of cool without writing the code yourself. Sugar water costs practically nothing, too.
+
+Young's plan to brand the OS with a veneer of cool produced more success than anyone could imagine. Red Hat is by far the market leader in providing Linux to the masses, despite the fact that many can and do "steal" a low-cost version. Of course, "steal" isn't the right word, because Red Hat did the same thing. "Borrow" isn't right, "grab" is a bit casual, and "join in everlasting communion with the great free software continuum" is just too enthusiastic to be cool.
+
+In August 1999, Red Hat completed an initial public offering of the shares of its stock, the common benchmark for success in the cash-driven world of Silicon Valley. Many of the principals at Red Hat got rich when the stock opened at $14 a share on August 11 and closed the day at $52. Bob Young, the CEO of Red Hat, started the day with a bit more than 9 million shares or 15 percent of the company. Technically, not all of this was his because he had distributed some (3,222,746 shares, to be exact) to his wife, Nancy, and put some more (1,418,160) in various trusts for his children. Still, this cut adds up to about $468 million. Marc Ewing, executive vice president and chief technology officer, also ended up with a similar amount of money divided between trusts and his own pocket. Matthew Szulik, the president, who joined in November 1998, got a bit less (2,736,248 shares) in his pot, but he was a relative newcomer. The big investors, Greylock IX Limited Partnership, Benchmark Capital Partners II, and Intel, split up the big part of the rest of the shares.
+
+Now, what happened to the boys who wrote the code? Did Richard Stallman get any of it? Did Linus Torvalds? Some of the major developers like Alan Cox and David Miller already work for Red Hat, so they probably drew shares out of the employee pool. Thousands of other contributors, however, aren't on anyone's radar screen. They've written many lines of code for naught.
+
+Red Hat tried to alleviate some of the trouble by allocating 800,000 shares to "directors, officers and employees of Red Hat and to open source software developers and other persons that Red Hat believes have contributed to the success of the open source software community and the growth of Red Hat." This group, occasionally known as the "friends and family," was a way to reward buddies. Red Hat drew up a list of major contributors to the open source distribution and sent out invitations.
+
+"Dear open source community member," began the e-mail letter that Red Hat sent to about 1,000 people.
+
+_1 In appreciation of your contribution to the open source community, Red Hat is pleased to offer you this personal, non-transferable, opportunity. . . . Red Hat couldn't have grown this far without the ongoing help and support of the open source community, therefore, we have reserved a portion of the stock in our offering for distribution online to certain members of the open source community. We invite you to participate.
+
+Many programmers and developers were touched by the thoughtfulness. The list probably wasn't long enough or inclusive enough to pull everyone into the circle, but it did do a good job of spreading the wealth around. The plan began to backfire, however, when E*Trade began to parcel out the shares. Everyone who made it onto the list filled out a form listing their net worth, and E*Trade attempted to decide who was a sophisticated investor and who wasn't. Some folks who had little money (perhaps because they spent too much time writing free software) were locked out.
+
+One contributor, C. Scott Ananian, wrote about his rejection in Salon magazine, "I filled out the eligibility questionnaire myself. I knew they were trying to weed out inexperienced investors, so on every question that related to experience, I asserted the maximum possible. I knew what I was doing. And it was my money, anyway--I had a God-given right to risk it on as foolhardy a venture as I liked."
+
+The article drew plenty of flak and murmurs of a class action lawsuit from the disenfranchised. A discussion broke out on Slashdot, the hardcore site for nerds. Some defended E*Trade and pointed out that a Red Hat IPO was not a lock or a guarantee of wealth. Too many grandmothers had been burned by slick-talking stock salesmen in the past. E*Trade had to block out the little guys for their own protection. Stock can go down as well as up.
+
+Steve Gilliard, a "media operative" at the website NetSlaves, wrote,
+"If the Red Hat friends and family group were judged by normal standards, there is no brokerage in the U.S. which would let many of them buy into an IPO. In many cases, they would be denied a brokerage account. Poor people are usually encouraged to make other investments, like paying off Visa and Master Card."
+
+Others saw it as a trick to weed out the pool and make sure that E*Trade could allocate the shares to its buddies. The more the small guys were excluded, the more the big guys would get for their funds. In the end, the complaints reached some ears. More people were able to sneak in, but the circle was never big enough for all.
+
+2~ World Domination Pretty Soon?
+
+Red Hat's big pool of money created more than jealousy in the hearts and minds of the open source world. Jealousy was an emotional response. Fear of a new Microsoft was the rational response that came from the mind. Red Hat's pool of cash was unprecedented in the open source community. People saw what the pile of money and the stock options did to Bill Gates. Everyone began to wonder if the same would happen to Red Hat.
+
+On the face of it, most open source developers have little to worry about. All the code on the Red Hat disk is covered with the General Public License and isn't going to become proprietary. Robert Young has been very open about his promise to make sure that everything Red Hat ships falls under the GPL. That includes the distribution tools it writes in-house.
+
+The GPL is a powerful force that prevents Red Hat from making many unilateral decisions. There are plenty of distributions that would like to take over the mantle of the most popular version of Linux. It's not hard. The source code is all there.
+
+But more savvy insiders whisper about a velvet-gloved version of Microsoft's "embrace and extend." The company first gains control by stroking the egos and padding the wallets of the most important developers.
+
+In time, other Red Hat employees will gradually become the most important developers. They're paid to work on open source projects all day. They'll gradually supplant the people who have day jobs. They'll pick up mindshare. Such a silent coup could guarantee that Red Hat will continue to receive large influxes of cash from people who buy the CD-ROMs.
+
+There are parts of this conspiracy theory that are already true. Red Hat does dominate the United States market for Linux and it controls a great deal of the mindshare. Their careful growth supported by an influx of cash ensured a strong position in the marketplace.
+
+In November 1999, Red Hat purchased Cygnus Solutions, the other major commercial developer of GPL-protected software, which specialized in maintaining and extending the compiler, GCC. Red Hat had 235 employees at the time and Cygnus Solutions had 181. That's a huge fraction of the open source developers under one roof. The Cygnus press release came with the headline, RED HAT TO ACQUIRE CYGNUS AND CREATE OPEN SOURCE POWERHOUSE.
+
+To make matters worse, one of the founders of Cygnus, Michael Tiemann, likes to brag that the open source software prevents competitors from rising up to threaten Cygnus. The GPL guarantees that the competitors will also have to publish their source, giving Cygnus a chance to stay ahead. In this model, any company with the money and stamina to achieve market dominance isn't going to be knocked down by some kids in a garage.
+
+Those are scary confluences. Let's imagine that the conspiracy theory is completely borne out. Let's imagine that all of the other distributions wither away as corporate and consumer clients rush head over heels to put Red Hat on their machines. Red Hat becomes the default in much the same way that Microsoft is the default today. Will Red Hat have the power that Microsoft has today?
+
+Will they be able to force everyone to have a Red Hat Network logon button on their desktop? Perhaps. Many people are going to trust Red Hat to create a good default installation. Getting software to be loaded by default will give them some power.
+
+Can they squeeze their partners by charging different rates for Linux? Microsoft is known to offer lower Windows prices to their friends. This is unlikely. Anyone can just buy a single Red Hat CD-ROM from a duplicator like CheapBytes. This power play won't work.
+
+Can they duplicate the code of a rival and give it away in much the same way that Microsoft created Internet Explorer and "integrated" it into their operating system? You bet they can. They're going to take the best ideas they can get. If they're open source, they'll get sucked into the Red Hat orbit. If they're not, then they'll get someone to clone them.
+
+Can they force people to pay a "Red Hat tax" just to upgrade to the latest software? Not likely. Red Hat is going to be a service company, and they're going to compete on having the best service for their customers. Their real competitor will be companies that sell support contracts like LinuxCare. Service industries are hard work. Every customer needs perfect care or they'll go somewhere else next time. Red Hat's honeymoon with the IPO cash will only last so long. Eventually, they're going to have to earn the money to get a return on the investment. They're going to be answering a lot of phone calls and e-mails.
+
+1~ New
+
+Most of this book frames the entire free source movement as something new and novel. The notion of giving away free source code is something that seems strange and counterintuitive. But despite all of the gloss and excitement about serious folks doing serious work and then just giving it away like great philanthropists, it's pretty easy to argue that this has all been done before. The software world is just rediscovering secrets that the rest of the world learned long ago.
+
+Giving things away isn't a radical idea. People have been generous since, well, the snake gave Eve that apple. Businesses love to give things away in the hope of snagging customers. Paper towel manufacturers give away towel hardware that only accepts paper in a proprietary size. Food companies give coolers and freezers to stores if the stores agree not to stock rival brands in them.
+
+In fact, most industries do more than just give away free gifts to lure customers. Most share ideas, strategies, and plans between competitors because cooperation lets them all blossom. Stereo companies make components that interoperate because they adhere to the same standard. Lawyers, engineers, and doctors are just some of the people who constantly trade ideas and solutions with each other despite the fact that they work as competitors. A broad, central, unowned pool of knowledge benefits everyone in much the same way that it helps the free software community.
+
+The real question is not "Who do these pseudo-commie pinkos think they are?" It's "What took the software industry so long to figure this out?" How did the programmers who are supposedly a bunch of whip-smart, hard-core libertarians let a bunch of lawyers lead them down a path that put them in a cubicle farm and prevented them from talking to each other?
+
+Recipes are one of the closest things to software in the material world, and many restaurants now share them widely. While chefs once treated them like industrial secrets, they now frequently give copies to magazines and newspapers as a form of publicity. The free advertisement is worth more than the possibility that someone will start cloning the recipe. The restaurants recognized that they were selling more than unique food. Ambiance, service, and quality control are often more in demand than a particular recipe.
+
+When the free software industry succeeds by sharing the source code now, it's capitalizing on the fact that most people don't want to use the source code to set up a take-no-prisoners rivalry. Most people just want to get their work done. The cost of sharing source code is so low that it doesn't take much gain to make it worth the trouble. One bug fix or tiny feature could pay for it.
+
+2~ Shareware Is Not Open Source and Open Source Isn't Free
+
+The software industry has been flirting with how to make money off of the low cost of distributing its product. The concept of shareware began long before the ideological free software movement as companies and individual developers began sharing the software as a cheap form of advertisement. Developers without the capital to start a major marketing campaign have passed around free versions of their software. People could try it and if it met their needs, they could pay for it. Those who didn't like it were honor-bound to erase their version.
+
+Shareware continues to be popular to this day. A few products have made a large amount of money with this approach, but most have made very little. Some people, including many of the major companies, distribute their own crippled version of their product so people can try it. Crucial functions like the ability to print or save a document to the disk are usually left out as a strong encouragement to buy the real version.
+
+Of course, free source products aren't the same thing as shareware because most shareware products don't come with the source code. Programmers don't have the ability or the right to modify them to do what they want. This has always been one of the biggest selling points to the high-end marketplace that knows how to program.
+
+In fact, free source software is not dirt cheap either. Anyone who's been around the open software community for a time realizes that you end up having to pay something for the lunch. Keeping some costs hidden from the consumer isn't new, and it still hasn't gone away in the free software world. The costs may not be much and they may be a much better deal than the proprietary marketplace, but the software still costs something.
+
+The simplest cost is time. Free software is often not as polished as many commercial products. If you want to use many of the tools, you must study manuals and learn to think like a programmer. Some manuals are quite nice, but many are cursory. This may change as the free software movement aims to dominate the desktop, but the manuals and help aren't as polished as the solutions coming out of Microsoft. Of course, one free software devotee told me by way of apology, "Have you actually tried using Microsoft's manuals or help? They suck, too."
+
+Even when it is polished, free source software requires time to use. The more options that are available, the more time it takes to configure the software. Free source gives tons of options.
+
+The lack of polish isn't usually a problem for programmers, and it's often not an extra cost either. Programmers often need to learn a system before they find a way to revise and extend it to do what their boss wants it to do. Learning the guts of a free software package isn't much of an extra cost because they would be just trying to learn the guts of a Microsoft product instead. Plus, the source code makes the process easier.
+
+Still, most users including the best programmers end up paying a company like Red Hat, Caldera, or a group like OpenBSD to do some of the basic research in building a Linux system. All of the distribution companies charge for a copy of their software and throw in some support. While the software is technically free, you pay for help to get it to work.
+
+If the free source code is protected by the GNU General Public License, then you end up paying again when you're forced to include your changes with the software you ship. Bundling things up, setting up a server, writing documentation, and answering users' questions take time. Sure, it may be fair, good, and nice to give your additions back to the community, but it can be more of a problem for some companies. Let's say you have to modify a database to handle some proprietary process, like a weird way to make a chemical or manufacture a strange widget. Contributing your source code back to the community may reveal something to a competitor. Most companies won't have this problem, but being forced to redistribute code always has costs.
+
+Of course, the cost of this is debatable. Tivo, for instance, is a company that makes a set-top box for recording television content on an internal hard disk. The average user just sees a fancy, easy-to-use front end, but underneath, the entire system runs on the Linux operating system. Tivo released a copy of the stripped-down version of Linux it ships on its machines on its website, fulfilling its obligation to the GNU GPL. The only problem I've discovered is that the web page (www.tivo.com/linux/) is not particularly easy to find from the home page. If I hadn't known it was there, I wouldn't have found it.
+
+Of course, companies that adopt free source software also end up paying in one way or another because they need to hire programmers to keep the software running. This isn't necessarily an extra cost because they would have hired Microsoft experts anyway. Some argue that the free source software is easier to maintain and thus cheaper to use, but these are difficult arguments to settle.
+
+In each of these ways, the free software community is giving away something to spark interest and then finding a way to make up the cost later. Some in the free software community sell support and others get jobs. Others give back their extensions and bug fixes. A running business is a working ecology where enough gets reinvested to pay for the next generation of development. The free source world isn't a virtual single corporation like the phone company or the cable business, but it can be thought of in that way. Therefore, the free software isn't much different from the free toasters at the banks, the free lollipops at the barber's, or the free drugs from the neighborhood pusher.
+
+If you want to think bigger, it may be better to see the free software world as closer to the great socialized resources like the ocean, the freeway system, or the general utility infrastructure. These treat everyone equally and provide a common basis for travel and commerce.
+
+Of course, that's the most cynical way that free software is no different from many of the other industries. There are other ways that the free source vision is just a return to the way that things used to be before the software industry mucked them up. The problem is that a mixture of licensing, copyright, and patent laws has given the software industry more ways to control its product than virtually any other industry. The free source movement is more a reaction against these controls than a brave new experiment.
+
+2~ Would You License a Car from These Guys?
+
+Comparing the software industry to the car industry is always a popular game. Normally, the car industry looks a bit poky and slow off the mark because they haven't been turning out new products that are twice as fast and twice as efficient as last year's products. But many parts of the car industry are bright, shining examples of freedom compared to their software equivalents.
+
+Consider the Saturday afternoon mechanic who likes to change the oil, put in a new carburetor, swap the spark plugs, and keep the car in running order. The car guy can do all of these things without asking the manufacturer for permission. There's nothing illegal about taking apart an engine or even putting an entirely new, souped-up engine in your car. The environmental protection laws may prohibit adding engines that spew pollutants, but the manufacturer is out of the loop. After all, it's your car. You paid for it.
+
+Software is something completely different. You don't own most of the software you paid for on your computer. You just own a "license" to use it. The difference is that the license can be revoked at any time if you don't follow the rules, and some of the rules can be uncomfortable or onerous. There's nothing wrong with this mechanism. In the right hands, it can be very pleasant. The Berkeley Software Distribution license, for instance, has no real requirements except that you credit the university for its contributions, and the university recently dropped even that requirement. The GNU General Public License is much stricter, but only if you want to change, modify, and distribute the code. In that case, you're only prevented from keeping these changes a secret. That's not a big problem for most of us.
+
+Other licenses are even stricter. One Microsoft license prevents the programmer from trying to figure out how the software works inside by saying "LICENSEE may not reverse engineer, decompile or disassemble Microsoft Agent." Clauses like these are popular and found in many software licenses. The company lawyers argue that they prevent people from stealing the secrets that are bound up in the software.
+
+These licenses have been interpreted in different ways. The video game maker Accolade, for instance, won its case against the manufacturer Sega by arguing that reverse engineering was the only way to create a clone. If companies couldn't clone, there would be no free market. On the other hand, Connectix lost some of the early court battles when Sony sued them for creating a software clone of the PlayStation. The judge decided that Connectix had violated Sony's copyright when they made a copy to study for reverse engineering. In February 2000, an appeals court struck down this ruling, freeing Connectix to sell the emulator again. By the time you read this, the legal landscape will probably have changed again.
+
+In practice, license clauses like this only hurt the honest programmers who are trying to deal with a nasty bug. Most people don't want to steal secrets, they just want to be able to make their software work correctly. Decompiling or disassembling the code is a good way to figure out exactly what is going on inside the software. It can save hours and plenty of grief.
+
+The license even borders on the absurd because the phrase "reverse engineer" is so ambiguous. It may be possible to argue that just learning to use a piece of software is reverse engineering it. Learning how a feature works means learning to predict what it will do. In many cases, the bugs and the glitches in software mean that the features are often a bit unpredictable and only a bit of black-box reverse engineering can teach us how they work. That's not much different from learning the steps that happen inside. Fiddling with shrink-wrapped software is like fiddling with a black box.
+
+Imagine that General Motors or Ford sold their cars with such a do-not-reverse-engineer license. They would either weld the hood shut or add on a special lock and only give the keys to registered dealers who would sign lots of forms that guaranteed that they would keep the workings of the cars secret. No one could change the spark plugs, chop the hood, add a nitro tank, or do anything with the car except drive it around in a completely boring way. Some lawyers at the car companies might love to start shipping cars with such a license. Think how much more they could charge for service! The smart executives might realize that they were hurting their biggest fans, the people who liked to tune, tweak, fiddle, and futz with their machines. They would be stripping away one of the great pleasures of these machines and slowly but surely turning the cars into commodity items that put the owners in legal straitjackets.
+
+Some software companies take the licensing requirements to even greater extremes. One of the most famous examples is the Microsoft Agent software, which allows a programmer to create little animated characters that might give instructions. Some versions of Microsoft Office, for instance, come with a talking paper clip that points out new and improved features. Microsoft released this technology to the general programmer community hoping that people would add the tools to their software and create their own talking characters.
+
+The software is free and Microsoft posts a number of nice tools for using the code on their website. They couldn't leave well enough alone, though, because anyone who wants to use the tool with their code needs to print out and file a separate license with the Microsoft legal staff. Many of the clauses are pretty simple and do useful things like force anyone using the software to try to keep their versions up to date. But the most insidious one ensures that no one will
+
+_1 "...use the Character Animation Data and Image Files to disparage Microsoft, its products or services or for promotional goods or for products which, in Microsoft's sole judgment, may diminish or otherwise damage Microsoft's goodwill in the SOFTWARE PRODUCT including but not limited to uses which could be deemed under applicable law to be obscene or pornographic, uses which are excessively violent, unlawful, or which purpose is to encourage unlawful activities."
+
+In other words, if you want to make the cute animated cartoon say something unkind about Microsoft, Microsoft can simply shut you down. And don't even think about creating a little animated marijuana cigarette for your Grateful Dead softwarepalooza. It's practically illegal just to think bad thoughts in the vicinity of a computer running Microsoft Agent.
+
+Most software licenses are not as bad or as restrictive as the Microsoft Agent license, but many cause their own share of grief. Companies continue to try to come up with more restrictive solutions for combating piracy, and in the end they bother the legitimate users. People are often buying new computers or upgrading a hard disk, and both of these acts require making a copy of old software. Companies that make it too difficult to do these things end up rubbing salt in the wounds of legitimate users who lose a hard disk.
+
+In this context, the free source world isn't a new flowering of mutual respect and sharing, it's just a return to the good old days when you could take apart what was yours. If you bought the software, you can fiddle with it. This isn't the Age of Aquarius, it is the second coming of Mayberry R.F.D., Home Improvement, and the Dukes of Hazzard.
+
+2~ Other Professions Were Open from the Start
+
+This comparison doesn't have to be limited to the car guys in the garage. Many other professions freely share ideas and operate without the very restrictive covenants of the software industry. The legal business is a great example of a world where people are free to beg, borrow, and steal ideas from others. If someone finds a neat loophole, they can't patent it or prevent others from exploiting it. Once other lawyers hear about it, they'll be filing their own lawsuits for their own clients. ~{ The legal system is not perfect. Too many cases are now filed under seal, and the courts are too willing to act as private dispute agencies for big corporations. When the law is locked up in this way, it is not a great example for the free software world. }~
+
+Consider the world of tobacco liability. Once one state advanced the legal opinion that the tobacco companies were liable for the cost of treating any disease that might emerge from smoking cigarettes, the other states and plenty of lawyers were able to jump on board. Once they settled, the lawyers turned their sights on the gun companies. By the time you read this, they'll probably have moved on to the fat delivery vehicle manufacturers in the fast-food industry and the stress induction groups, aka your employer. The exercise reduction industry, made up of a megalomaniacal consortium of moviemakers, television producers, and, yes, book writers, must be on someone's list.~{ The author recommends that you read this on the Stairmaster or a stationary bike, but only after checking with a registered doctor and consulting with a licensed exercise specialist who is thoroughly familiar with your medical history. These medical specialists will be able to tune your workout to provide the optimal fitness benefits so you can live long enough to get Alzheimer's disease. }~
+
+Free source folks are just as free to share ideas. Many of the rival Linux and BSD distributions often borrow code from each other. While they compete for the hearts and minds of buyers, they're forced by the free source rules to share the code. If someone writes one device driver for one platform, it is quickly modified for another.
+
+The proprietary software world moves slowly in comparison. They keep their ideas secret and people spend thousands of lawyer years on projects just keeping the various licenses straight. Code is shared, but only after lawyers vet the contracts.
+
+The legal industry is also a good example of how the free sharing of ideas, techniques, and strategies does not hurt the income of the practitioners. In fact, lawyers have managed to carve themselves a very nice slice of the nation's income. Most are not as rich as the lucky few who beat the tobacco companies, but they do all right.
+
+2~ Copyright, Tool of Dictators
+
+It would be unfair to the software industry to portray the rest of society as much more sharing and giving. Most of the other industries are frantically using the legal system and any other means necessary to stay ahead of their competitors. It's just part of doing business.
+
+One of the best examples is content production, which is led by mega-companies like Disney. In recent years, Hollywood has worked hard to get copyright laws changed so that the copyright lasts 95 years instead of 75 years. In 1998, Congress passed the Sonny Bono Copyright Term Extension Act of 1998 (CTEA) that kept works published after 1923 from passing into the public domain until 2019. The industry feels that this gives them the protection to keep creating new items. Creations like Mickey Mouse and Snow White will continue to live in the very safe place controlled by Disney and not fall into the evil hands of the public domain.
+
+Several Harvard professors, Larry Lessig, Charles Nesson, and Jonathan Zittrain of the Berkman Center for Internet & Society at Harvard Law School, and Geoffrey Stewart of the Boston law firm Hale and Dorr filed a lawsuit contesting the act by pointing out that the Constitution provides for a "limited" term. Artists, authors, and creators were given copyright protection, but it was only for a limited amount of time. Afterward, the society could borrow and use the work freely.
+
+There's little doubt that the major Hollywood producers recognize the value of a well-stocked collection of public domain literature. Movies based on works by William Shakespeare, Henry James, and Jane Austen continue to roll out of the studios to the welcoming patrons who buy tickets despite knowing how the story ends. Disney itself built its movie franchise on shared fables like Sleeping Beauty or Snow White. Very few of Disney's animated films (The Lion King was one of the first ones) were created in-house from a clean piece of paper. Most were market-tested for acceptance by their years in the public domain. Of course, Disney only pays attention to this fact when they're borrowing an idea to create their own version, not when they're defending the copyright of their own creations. They want to take, not give.
+
+The movie industry, like the proprietary software business, seems to forget just how valuable a shared repository of ideas and solutions can be. In this context, the free source movement isn't an explosion of creative brilliance or a renaissance of cooperation, it's just a return to the good old days when Congress wouldn't slavishly answer the whims of the content industry. If a theater owner wanted to put on a Shakespeare play, the text was in the public domain. If someone wanted to rewrite Jane Austen and create the movie Clueless, they were free to do so. In the good old days, copyright faded after a limited amount of time and the public got something back for granting a monopoly to the artist. In the good old days, the artist got something back, too, when the monopoly of other artists faded away.
+
+It's not like this brave new world of total copyright protection has generated superior content. The so-called original movies aren't that different. All of the action movies begin with some death or explosion in the first two minutes. They all run through a few car chases that lead to the dramatic final confrontation. The television world is filled with 30-minute sitcoms about a bunch of young kids trying to make it on their own. It's sort of surprising that Hollywood continues to suggest that the copyright laws actually promote creativity.
+
+It's not hard to believe that we might be better off if some of the characters were protected by an open source license. Superman and Batman have both gone through several decades of character morphing as the artists and writers assigned to the strips change. Of course, that change occurred under the strict control of the corporation with the copyright.
+
+The thousands of fan novels and short stories are better examples. Many fans of movies like Star Trek or Star Wars often write their own stories using the protected characters without permission. Most of the time the studios and megalithic corporations holding the copyright look the other way. The work doesn't make much money and is usually born out of love for the characters. The lawyers who have the job of defending the copyrights are often cool enough to let it slide.
+
+Each of these novels provides some insight into the characters and also the novelist. While not every novelist is as talented as the original authors, it can still be fun to watch the hands of another mold the characters and shape their destinies. The world of the theater has always accepted the notion that directors and actors will fiddle with plays and leave their own marks on them. Perhaps it wouldn't be so bad if writers could have the same latitude after the original author enjoyed a short period of exclusivity.
+
+There are many ways in which the free software world is strange and new to society, but sharing ideas without limitations is not one of them. Almost all businesses let people tinker and change the products they buy. The software industry likes to portray itself as a bunch of libertarians who worship the free market and all of its competition. In reality, the leading firms are riding a wave of power-grabbing that has lasted several decades. The firms and their lawyers have consistently interpreted their rules to allow them to shackle their customers with stronger and stronger bonds designed to keep them loyal and ever-spending.
+
+This is all part of a long progression that affects all industries. Linus Torvalds explained his view of the evolution when he told the San Jose Mercury-News, "Regardless of open source, programs will become really cheap. Any industry goes through three phases. First, there's the development of features people need. Then there's the frills-and-upgrade phase, when people buy it because it looks cool. Then there's the everybody-takes-it-for-granted phase. This is when it becomes a commodity. Well, we're still in the look-cool-and-upgrade stage. In 10 or 15 years you'll be happy with software that's 5 years old. Open source is one sign that we're moving in that direction."
+
+In this light, the free software revolution isn't really a revolution at all. It's just the marketplace responding to the overly greedy approaches of some software companies. It's just a return to the good old days when buying something meant that you owned it, not that you just signed on as a sort of enlightened slave of the system.
+
+1~ Nations
+
+Microsoft is an American company. Bill Gates lives in Washington State and so do most of the programmers under his dominion. The software they write gets used around the globe in countries big and small, and the money people pay for the software comes flooding back to the Seattle area, where it buys huge houses, designer foods, and lots of serious and very competitive consumption. Through the years, this sort of economic imperialism has built great cities like Rome, London, Tokyo, and Barcelona. History is just a long series of epochs when some company comes up with a clever mechanism for moving the wealth of the world home to its cities. Britain relied on opium for a while. Rome, it might be said, sold a legal system. Spain trafficked in pure gold and silver. Microsoft is selling structured information in one of the most efficient schemes yet.
+
+Of course, these periods of wealth-building invariably come to an abrupt end when some army, which is invariably described as "ragtag," shows up to pillage and plunder. The Mongolian hordes, the Visigoths, and the Vikings are just a few of the lightweight, lean groups that appeared over the horizon and beat the standing army of the fat and complacent society. This was the cycle of boom and doom that built and trashed empire after dynasty after great society.
+
+Perhaps it's just a coincidence that Linus Torvalds has Viking blood in him. Although he grew up in Finland, he comes from the minority of the population for whom Swedish is the native tongue. The famous neutrality during World War II, the lumbering welfare states, the Nobel Peace Prize, and the bays filled with hiding Russian submarines give the impression that the Viking way is just a thing of the past, but maybe some of the old hack and sack is still left in the bloodlines.
+
+The Linux movement isn't really about nations and it's not really about war in the old-fashioned sense. It's about nerds building software and letting other nerds see how cool their code is. It's about empowering the world of programmers and cutting out the corporate suits. It's about spending all night coding on wonderful, magnificent software with massive colonnades, endless plazas, big brass bells, and huge steam whistles without asking a boss "Mother, may I?" It's very individualistic and peaceful.
+
+That stirring romantic vision may be moving the boys in the trenches, but the side effects are beginning to be felt in the world of global politics. Every time Linux, FreeBSD, or OpenBSD is installed, several dollars don't go flowing to Seattle. There's a little bit less available for the Microsoft crowd to spend on mega-mansions, SUVs, and local taxes. The local library, the local police force, and the local schools are going to have a bit less local wealth to tax. In essence, the Linux boys are sacking Seattle without getting out of their chairs or breaking a sweat. You won't see this battle retold on those cable channels that traffic in war documentaries, but it's unfolding as we speak.
+
+The repercussions go deeper. Microsoft is not just a Seattle firm. Microsoft is an American company and whatever is good for Microsoft is usually good, at least in some form, for the United States. There may be some fraternal squabbling between Microsoft and Silicon Valley, but the United States is doing quite well. The info boom is putting millions to work and raising trillions in taxes.
+
+The free software revolution undermines this great scheme in two very insidious ways. The first is subtle. No one officially has much control over a free software product, and that means that no country can claim it as its own. If Bill Gates says that the Japanese version of Windows will require a three-button mouse, then Japan will have to adjust. But Torvalds, Stallman, and the rest can't do a darn thing about anyone. People can just reprogram their mouse. If being boss means making people jump, then no one in the free software world is boss of anything. Free source code isn't on anyone's side. It's more neutral than Switzerland was in World War II. The United States can only take solace in the fact that many of the great free source minds choose to live in its boundaries.
+
+The second effect is more incendiary. Free software doesn't pay taxes. In the last several centuries, governments around the world have spent their days working out schemes to tax every transaction they can find. First, there were just tariffs on goods crossing borders, then the bold went after the income, and now the sales tax and the VAT are the crowning achievement. Along the way, the computer with its selfless ability to count made this possible. But how do you tax something that's free? How do you take a slice out of something that costs nothing?
+
+These are two insidious effects. The main job of governments is to tax people. Occasionally, one government will lust after the tax revenue of another and a war will break out that will force people to choose sides. The GPL and the BSD licenses destroy this tax mechanism, and no one knows what this will bring.
+
+One of the best places to see this destabilization is in the efforts of the United States government to regulate the flow of encryption software around the globe. Open source versions of encryption technology are oozing through the cracks of a carefully developed mechanism for restricting the flow of the software. The U.S. government has tried to keep a lid on the technology behind codes and ciphers since World War II. Some argue that the United States won World War II and many of the following wars by a judicious use of eavesdropping. Codebreakers in England and Poland cracked the German Enigma cipher, giving the Allies a valuable clue about German plans. The Allies also poked holes in the Japanese code system and used this to win countless battles. No one has written a comprehensive history of how code-breaking shifted the course of the conflicts in Vietnam, Korea, or the Middle East, but the stories are bound to be compelling.
+
+In recent years, the job of eavesdropping on conversations around the world has fallen on the National Security Agency, which is loath to lose the high ground that gave the United States so many victories in the past. Cheap consumer cryptographic software threatened the agency's ability to vacuum up bits of intelligence throughout the world, and something needed to be done. If good scrambling software was built into every copy of Eudora and Microsoft Word, then many documents would be virtually unreadable. The United States fought the threat by regulating the export of all encryption source code. The laws allowed the country to regulate the export of munitions, and scrambling software was put in that category.
+
+These regulations have caused an endless amount of grief in Silicon Valley. The software companies don't want someone telling them what to write. Clearing some piece of software with a bureaucrat in Washington, D.C., is a real pain in the neck. It's hard enough to clear it with your boss. Most of the time, the bureaucrat won't approve decent encryption software, and that means the U.S. company has a tough choice: it can either not export its product, or build a substandard one.
+
+There are branches of the U.S. government that would like to go further. The Federal Bureau of Investigation continues to worry that criminals will use the scrambling software to thwart investigations. The fact that encryption software can also be used by average folks to protect their money and privacy has presented a difficult challenge to policy analysts from the FBI. From time to time, the FBI raises the specter of just banning encryption software outright.
+
+The software industry has lobbied long and hard to lift these regulations, but it has had limited success. Companies have pointed out that much foreign software is as good as, if not better than, American encryption software. They've screamed that they were losing sales to foreign competitors from places like Germany, Australia, and Canada, competitors who could import their software into the U.S. and compete against American companies. None of these arguments went very far because the interests of the U.S. intelligence community always won when the president had to make a decision.
+
+The free source code world tripped into this debate when a peace activist named Phil Zimmerman sat down one day and wrote a program he called Pretty Good Privacy, or simply PGP. Zimmerman's package was solid, pretty easy to use, and free. To make matters worse for the government, Zimmerman gave away all of the source code and didn't even use a BSD or GPL license. It was just out there for all the world to see.
+
+The free source code had several effects. First, it made it easy for everyone to learn how to build encryption systems and add the features to their own software. Somewhere there are probably several programmers being paid by drug dealers to use PGP's source code to scramble their data. At least one person trading child pornography was caught using PGP.
+
+Of course, many legitimate folks embraced it. Network Solutions, a branch of SAIC, the techno powerhouse, uses digital signatures generated by PGP to protect the integrity of the Internet's root server. Many companies use PGP to protect their e-mail and proprietary documents. Banks continue to explore using tools like PGP to run transaction networks. Parents use PGP to protect their kids' e-mail from stalkers.
+
+The free source code also opened the door to scrutiny. Users, programmers, and other cryptographers took apart the PGP code and looked for bugs and mistakes. After several years of poking, everyone pretty much decided that the software was secure and safe.
+
+This type of assurance is important in cryptography. Paul Kocher, an expert in cryptography who runs Cryptography Research in San Francisco, explains that free source software is an essential part of developing cryptography. "You need source code to test software, and careful testing is the only way to eliminate security problems in crypto-systems," he says. "We need everyone to review the design and code to look for weaknesses."
+
+Today, security products that come with open source code are the most trusted in the industry. Private companies like RSA Data Security or Entrust can brag about the quality of their in-house scientists or the number of outside contractors who've audited the code, but nothing compares to letting everyone look over the code.
+
+When Zimmerman launched PGP, however, he knew it was an explicitly political act designed to create the kind of veil of privacy that worried the eavesdroppers. He framed his decision in crisp terms that implicitly gave each person the right to control their thoughts and words. "It's personal. It's private. And it's no one's business but yours," he wrote in the introduction to the manual accompanying the software. "You may be planning a political campaign, discussing your taxes, or having an illicit affair. Or you may be doing something that you feel shouldn't be illegal, but is. Whatever it is, you don't want your private electronic mail (e-mail) or confidential documents read by anyone else. There's nothing wrong with asserting your privacy. Privacy is as apple-pie as the Constitution."
+
+Initially, Zimmerman distributed PGP under the GPL, but backed away from that when he discovered that the GPL didn't give him much control over improvements. In fact, improvements proliferated, and it became hard to keep track of who had created them. Today, the source code comes with a license that is very similar to the BSD license and lets people circulate the source code as much as they want.
+
+"I place no restraints on your modifying the source code for your own use," he writes in the accompanying documentation, and then catches himself. "However, do not distribute a modified version of PGP under the name 'PGP' without first getting permission from me. Please respect this restriction. PGP's reputation for cryptographic integrity depends on maintaining strict quality control on PGP's cryptographic algorithms and protocols."
+
+Zimmerman's laissez-faire attitude, however, doesn't mean that the software is available with no restrictions. A holding company named Public Key Partners controlled several fundamental patents, including the one covering the RSA algorithm created by Ron Rivest, Adi Shamir, and Len Adleman. Zimmerman's PGP used this algorithm, and technically anyone using the software was infringing the patent.
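+For readers who haven't seen it, the algorithm at the center of the patent fight is short enough to sketch. Here is textbook RSA with tiny illustrative primes; real keys use numbers hundreds of digits long, and this toy version is wholly insecure:

```python
# Textbook RSA with tiny primes -- for illustration only, never secure.
p, q = 61, 53                # two small primes (real keys use enormous ones)
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120, Euler's totient of n
e = 17                       # public exponent, chosen coprime to phi
d = pow(e, -1, phi)          # 2753, private exponent (modular inverse; Python 3.8+)

message = 65                 # a message encoded as a number smaller than n
cipher = pow(message, e, n)  # encrypt with the public key (n, e)
plain = pow(cipher, d, n)    # decrypt with the private key (n, d)
assert plain == message      # the round trip recovers the original
```

+Anyone who can factor n back into p and q can derive the private exponent, which is why the security of the scheme rests entirely on choosing primes far too large to factor.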
+
+While "infringing on a patent" has a certain legal gravitas, its real effects are hard to quantify. The law grants the patent holders the right to stop anyone from doing what is spelled out in the patent, but it only allows them to use a lawsuit to collect damages. In fact, patent holders can collect triple damages if they can prove that the infringers knew about the patent. These lawsuits can be quite a hassle for a big company like Microsoft, because Microsoft is selling a product and making a profit. Finding a number to multiply by three is easy to do. But the effects of the lawsuits on relatively poor, bearded peace activists who aren't making money is harder to judge. What's three times zero? The lawsuits make even less sense against some guy who's using PGP in his basement.
+
+Still, the threat of a lawsuit was enough of a cudgel to worry Zimmerman. The costs, however, put a limit on what PKP could demand. In the end, the two parties agreed that PGP could be distributed for non-commercial use if it relied upon a toolkit known as RSAREF made by PKP's sister company, RSA Data Security. Apparently, this would encourage people to use RSAREF in their commercial products and act as free advertising for the toolkit.
+
+The patent lawsuit, however, was really a minor threat for Zimmerman. In 1994, the U.S. government started investigating whether Zimmerman had somehow exported encryption software by making it available on the Internet for download. While Zimmerman explicitly disavowed violating the laws and took pains to keep the software inside the country, a copy leaked out. Some suggest it was through a posting on the Net that inadvertently got routed throughout the world. Was Zimmerman responsible? A branch of the U.S. Customs Service launched a criminal investigation in the Northern District of California to find out.
+
+Of course, determining how the source code got out of the country was a nearly impossible exercise. Unless Zimmerman confessed or somehow kept some incriminating evidence around, the prosecutors faced a tough job painting him as a lawbreaker. The software was available for free to anyone inside the country, and that meant that everyone had at least an opportunity to break the law. There were no purchase records or registration records. No one knew who had PGP on their disk. Maybe someone carried it across the border after forgetting that the source code was on a hard disk. Maybe a foreigner deliberately came into the U.S. and carried it out. Who knows? Zimmerman says it blew across the border "like dandelion seeds blowing in the wind."
+
+To make matters worse for the forces in the U.S. government that wanted to curtail PGP, the patent held by RSA wasn't filed abroad due to different regulations. Foreigners could use the software without care, and many did. This was the sort of nightmare that worried the parts of the U.S. intelligence-gathering branch that relied upon wholesale eavesdropping.
+
+Eventually, the criminal investigation amounted to nothing. No indictments were announced. No trials began. Soon after the investigation ended, Zimmerman helped form a company to create commercial versions of PGP. While the free versions continue to be available today and are in widespread use among individuals, companies often turn to commercial versions of PGP that come with a license from PKP. When the RSA patent expires in September 2000, people will be free to use the algorithm without a license.~{ The GNU project has already worked around many of these impediments. Its Privacy Guard package (GnuPG) is released under the GNU license. }~
+
+Zimmerman's experiences show how free source code turned into a real thorn in the side of the U.S. government. Businesses can be bought or at least leaned on. Merchandise needs to flow through stores and stores have to obey the law. Red tape can ruin everything. But free software that floats like dandelion seeds can't be controlled. People can give it to each other and it flows like speech. Suddenly it's not a product that's being regulated, but the free exchange of ideas between people, ideas that just happen to be crystallized as a computer program.
+
+Of course, a bureaucracy has never met something it couldn't regulate, or at least something it couldn't try to regulate. Zimmerman's experience may have proved to some that governments are just speed bumps on the infobahn of the future, but others saw it as a challenge. Through the end of 1999, the U.S. government kept trying to tighten the restrictions on open source versions of encryption technology floating around the world. The problem was that many countries around the globe explicitly exempt open source software from such restrictions, and the United States lobbied to close these loopholes.
+
+The best place to begin this story may be in the trenches where system administrators for the U.S. government try to keep out hackers. Theo de Raadt, the leader of the OpenBSD team, likes to brag that the U.S. government uses OpenBSD on its secure internal network. The system designers probably made that choice because OpenBSD has been thoroughly audited for security holes and bugs by both the OpenBSD team and the world at large. They want the best code, and it's even free.
+
+"They're running Network Flight Recorder," de Raadt says. "It's a super sniffing package and an intrusion detection system. They can tell you if bad traffic happens on your private little network that the firewall should have stopped. They have OpenBSD running NFR on every network. They run an IPsec VPN back to a main network information center where they look and do traffic analysis."
+
+That is, the departments watch for bad hackers by placing OpenBSD boxes at judicious points to scan the traffic and look for incriminating information. These boxes, of course, must remain secure. If they're compromised, they're worthless. Turning to something like OpenBSD, which has at least been audited, makes sense.
+
+"They catch a lot of system administrators making mistakes. It's very much a proactive result. They can see that a sys admin has misconfigured a firewall," he says.
+
+Normally, this would just be a simple happy story about the government getting a great value from an open source operating system. They paid nothing for it and got the results of a widespread, open review looking for security holes.
+
+De Raadt lives in Canada, not the United States, and he develops OpenBSD there because the laws on the export of encryption software are much more lenient. For a time, Canada did not try to control any mass market software. Recently, it added the requirement that shrink-wrapped software receive a license, but the country seems willing to grant licenses quite liberally. Software that falls into the public domain is not restricted at all. While OpenBSD is not in the public domain, it does fit that definition as set out by the rules. The software is distributed with no restrictions or charge. By the end of 1999, senior officials realized that the policy of stopping crypto exports was generating too many ironic moments.
+
+This is just another example of how free source software throws the traditional instincts of the regulatory system for a loop. Companies sell products, and products are regulated. Public domain information, on the other hand, is speech, and speech is protected, at least by the U.S. Constitution. Relying on Canada for the network security of the Internet was too much.
+
+In January 2000, the U.S. government capitulated. After relentless pressure from the computer industry, the government recognized that high-quality encryption software like OpenBSD was common throughout the world. It also recognized that the quality was so good that many within the United States imported it. The government loosened restrictions and practically eliminated them for open source software. While many people are still not happy with the new regulations, open source encryption software can now flow out of the United States. The distributors need only notify the U.S. government about where the software is available. The commercial, proprietary encryption software was not as lucky. The regulations are now much easier on the corporations, but they still require substantial review before an export license is granted.
+
+The difference in treatment probably did not result from any secret love for Linux or OpenBSD lurking in the hearts of the regulators in the Bureau of Export Administration at the Department of Commerce. The regulators are probably more afraid of losing a lawsuit brought by Daniel Bernstein. In the latest decision, released in May 1999, two out of three judges on an appeals panel concluded that the U.S. government's encryption regulations violated Bernstein's rights of free speech. The government argued that source code is a device, not speech. The case is currently being appealed. The new regulations seem targeted to specifically address the problems the court found with the earlier ones.
+
+Encryption software is just the beginning of the travails as the government tries to decide what to do about the free exchange of source code on the Net. Taxes may be next. While people joke that they would be glad to pay 10 percent sales tax on the zero dollars they've spent on GNU software, they're missing some of the deeper philosophical issues behind taxation. Many states don't officially tax the sale of an object; they demand the money for the use of it. That means if you buy a stereo in Europe, you're still supposed to pay a "use tax" when you turn it on back home. The states try to use this as a cudgel to demand sales tax revenue from out-of-state catalog and mail-order shops, but they haven't gotten very far. That hasn't stopped them from trying.
+
+What tax could be due on a piece of free software? Well, the state could simply look at the software, assign a value to it, and send the user a bill. Many states do just that with automobiles. You might have a rusted clunker, but they use the Blue Book value of a car to determine the tax for the year and each year they send a new bill. This concept proved to be so annoying to citizens of Virginia that Jim Gilmore won the election for governor with a mandate to repeal it. But just because he removed it doesn't mean that others will leave the issue alone.
+
+If governments ever decide to try to tax free software, the community might be able to fight off the request by arguing that the tax is "paid"
+when the government also uses the free software. If 7 out of 100 Apache servers are located in government offices, then the government must be getting 7 percent returned as tax.
+
+One of the most difficult problems for people is differentiating between wealth and money. The free software movement creates wealth without moving money. The easy flow of digital information makes this possible. Some folks can turn this into money by selling support or assisting others, but most of the time the wealth sits happily in the public domain.
+
+Today, the Internet boom creates a great pool of knowledge and intellectual wealth for the entire society. Some people have managed to convert this into money by creating websites or tools and marketing them successfully, but the vast pool of intellectual wealth remains open and accessible to all. Who does this belong to? Who can tax this? Who controls it? The most forward-thinking countries will resist the urge to tax it, but how many will really be able to keep on resisting?
+
+1~ Wealth
+
+The writer P. J. O'Rourke once pointed out that wealth is a particularly confusing concept. It has nothing to do with being born in the right place. Africa is filled with diamonds, gold, platinum, oil, and thousands of other valuable resources, while Japan has hardly anything underground except subway tunnels and anthrax from strange cults. Yet Japan is still far wealthier, even after the long swoon of its post-bubble economy.
+
+O'Rourke also pointed out that wealth has nothing to do with raw brains. The Russians play chess as a national sport while Brentwood is filled with dim bulbs like the folks we saw during the O. J. Simpson murder trial. Yet poverty is endemic in Russia, while Brentwood flourishes. Sure, people wait in line for food in Brentwood like they did in Soviet Russia, but this is only to get a table at the hottest new restaurant.
+
+Wealth is a strange commodity, and understanding it keeps economists busy. Governments need to justify their existence in some way, and lately people in the United States use their perception of the "economy" as a measure of how well the government is doing. But many of their attempts to use numbers to measure wealth and prosperity are doomed to failure. One year, the economists seem to be frantically battling deflation, then they turn around and rattle on and on about inflation. They gave up trying to measure the money supply to follow inflation and seem, at times, to be flying the economy by the seat of their pants. Of course, they're not really in charge. One minute you can't have growth without inflation. The next minute you can. It's all a bit like ancient days of tribal living when the high priest was responsible for dreaming up reasons why the volcano did or did not erupt. Some days the money supply smiles upon us, and on other days, she is very, very angry.
+
+Wealth in the free software world is an even slipperier concept. There's not even any currency to use to keep score. Let's say we wanted to know or at least guesstimate whether the free source world was wealthy. That's not too hard. Most of the guys hacking the code just want to drink caffeinated beverages, play cool games, and write more code. The endless stream of faster and faster computer boxes makes this as close to a perfect world as there could be. To make matters better, new T-shirts with clever slogans keep appearing. It's a nerd utopia. It's Shangri-La for folks who dig computers.
+
+Of course, deciding whether or not someone is wealthy is not really an interesting question of economics. It's more about self-esteem and happiness. Someone who has simple needs can feel pretty wealthy in a shack. Spoiled kids will never be happy no matter how big their palace. There are plenty of content people in the free software world, but there are also a few who won't be happy until they have source code to a huge, wonderful, bug-free OS with the most features on the planet. They want total world domination.
+
+A more intriguing question is whether the free source world is wealthier than the proprietary source world. This starts to get tricky because it puts Apples up against oranges and tries to make complicated comparisons. Bill Gates is incredibly wealthy in many senses of the word. He's got billions of dollars, a huge house, dozens of cars, servants, toys, and who knows what else. Even his employees have their own private jets. All of the trappings of wealth are there. Linus Torvalds, on the other hand, says he's pretty happy with about $100,000 a year, although several IPOs will probably leave him well off. Microsoft has thousands of programmers who are paid well to write millions of lines of code a year. Most open source programmers aren't paid much to create what they do. If money were a good measure, then the proprietary source world would win hands-down.
+
+But money is the answer only if you want piles of paper with pictures of famous Americans on them. Several countries in Latin America generate huge piles of money from drugs, oil, and other natural resources, but the countries remain quite poor. The leaders who end up with most of the money might like the huge disparity, but it has very distinct limitations. When it comes time for college or medical care, the very rich start flying up to the United States. Johns Hopkins, a hospital in Baltimore near where I live, provides wonderful medical service to the poor who live in the surrounding neighborhood. It also has a special wing with plush suites for rich people who fly in for medical treatment. Many are potentates and high government officials from poor countries around the world.
+
+People in the United States can enjoy the synergies of living near other well-educated, creative, empowered, and engaged citizens. People in poor societies can't assume that someone else will design great roads, build airlines, create cool coffee shops, invent new drugs, or do anything except get by on the few scraps that slip through the cracks to the great unwashed poor. The ultrarich in Latin America may think they're getting a great deal by grabbing all the pie, until they get sick. Then they turn around and fly to hospitals like Johns Hopkins, a place where the poor of Baltimore also enjoy quite similar treatment. Wealth is something very different from cash.
+
+Most folks in the free source world may not have big bank accounts. Those are just numbers in a computer anyway, and everyone who can program knows how easy it is to fill a computer with numbers. But the free source world has good software and the source code that goes along with it. How many times a day must Bill Gates look at the blue screen of death that splashes across a Windows computer monitor when the Windows software crashes? How many times does Torvalds watch Linux crash? Who's better off? Who's wealthier?
+
+The question might be asked, "Is your software better than it was four years ago?" That is, does your software do a better job of fetching the mail, moving the data, processing the words, or spreading the sheets? Is it more intuitive, more powerful, more stable, more feature-rich, more interesting, more expressive, or just better?
+
+The answers to these questions can't be measured like money. There's no numerical quotient that can settle any of these questions. There will always be some folks who are happy with their early-edition DOS word processor and don't see the need to reinvent the wheel. There are others who are still unhappy because their desktop machine can't read their mind.
+
+For the devoted disciples of the open software mantra, the software in the free source world is infinitely better. Richard Stallman feels that the GNU code is better than the Microsoft code just because he has the source code and the freedom to do what he wants with it. The freedom is more important to him than whatever super-duper feature comes out of the Microsoft teams. After all, he can add any feature he wants if he has access to the basic source code. Living without the source code means waiting like a good peon for the nice masters from the big corporation to bless us with a bug fix.
+
+There's no question that people like Stallman love life with source code. A deeper question is whether the free source realm offers a wealthier lifestyle for the average computer user. Most people aren't programmers, and most programmers aren't even the hard-core hackers who love to fiddle with the UNIX kernel. I've rarely used the source code to Linux, Emacs, or any of the neat tools on the Net, and many times I've simply recompiled the source code without looking at it. Is this community still a better deal?
+
+There are many ways of looking at the question. The simplest is to compare features. It's hard to deny that the free software world has made great strides in producing something that is easy to use and quite adaptable. The most current distributions at the time I'm writing this come with a variety of packages that provide all of the functionality of Microsoft Windows and more. The editors are good, the browser is excellent, and the availability of software is wonderful. The basic Red Hat or Caldera distribution provides a very rich user interface that is better in many ways than Windows or the Mac. Some of the slightly specialized products like video software editors and music programs aren't as rich-looking, but this is bound to change with time. It is really a very usable world.
+
+Some grouse that comparing features like this isn't fair to the Mac or Windows world. The GNOME toolkit, they point out, didn't come out of years of research and development. The start button and the toolbar look the same because the GNOME developers were merely copying. The GNU/Linux world didn't create its own OS; it merely cloned all of the hard commercial research that produced UNIX. It's always easier to catch up, but pulling ahead is hard. The folks who want to stay on the cutting edge need to be in the commercial world. It's easy to come up with a list of commercial products and tools that haven't been cloned by an open source dude at the time of this writing: streaming video, vector animation, the full Java API, speech recognition, three-dimensional CAD programs, speech synthesis, and so forth. The list goes on and on. The hottest innovations will always come from well-capitalized start-ups driven by the carrot of wealth.
+
+Others point out that the free software world has generated more than its share of innovation. Most of the Internet was built upon non-proprietary standards developed by companies with Department of Defense contracts. Stallman's Emacs continues to be one of the great programs in the world. Many of the projects like Apache are the first place where new ideas are demonstrated. People who want to mock up a project find it easier to extend free source software. These ideas are often reborn as commercial products. While free source users may not have access to the latest commercial innovations, they have plenty of their own emerging from the open software world. GNOME isn't just a Windows clone--it comes with thousands of neat extensions and improvements that can't be found in Redmond.
+
+Stallman himself says the GNU project improved many pieces of software when they rewrote them. He says, "We built on their work, to the extent that we could legally do so (since we could not use any of their code), but that is the way progress is made. Almost every GNU program that replaces a piece of Unix includes improvements."
+
+Another way to approach the question is to look at people's behavior. Some argue that companies like Red Hat or organizations like Debian prove that people need and want some of the commercial world's handholding. They can't afford to simply download the code and fiddle with it. Most people aren't high school students doing time for being young. They've got jobs, families, and hobbies. They pay because paying brings continuity, form, structure, and order to the free source world. Ultimately, these Red Hat users aren't Stallman disciples, they're commercial sheep who are just as dependent on Red Hat as the Windows customers are on Microsoft.
+
+The counter-argument is that this insight overlooks a crucial philosophical difference. The Red Hat customers may be slaves like the Microsoft customers, but they still have important freedoms. Sure, many Americans are wage slaves to an employer who pays them as little as possible, but they do have the freedom to go be wage slaves of another employer if they choose. Old-fashioned slaves faced the whip and death if they tried to take that route.
+
+Most Linux users don't need to rewrite the source, but they can still benefit from the freedom. If everyone has that freedom, then someone with the ability to fix a problem will come along, and if the problem is big enough, someone probably will fix it. In other words, only one person has to fly the X-wing fighter down the trench and blow up the Death Star.
+
+Some point out that the free source world is fine--if you've got the time and the attention to play with it. The source code only helps those who want to spend the time to engage it. You've got to read it, study it, and practice it to get any value from it at all. Most of us, however, just want the software to work. It's like the distinction between people who relax by watching a baseball game on television and those who join a league to play. The spectators are largely passive, waiting for the action to be served up to them. The league players, on the other hand, don't get anything unless they practice, stretch, push, and hustle. They need to be fully engaged with the game. All of us like an occasional competition, but we often need a soft couch, a six-pack, and the remote control. Free software is a nice opportunity to step up to the plate, but it's not true refreshment for the masses.
+
+Which is a better world? A polished Disneyland where every action is scripted, or a pile of Lego blocks waiting for us to give them form?
+Do we want to be entertained or do we want to interact? Many free software folks would point out that free software doesn't preclude you from settling into the bosom of some corporation for a long winter's nap. Companies like Caldera and Linuxcare are quite willing to hold your hand and give you the source code. Many other corporations are coming around to the same notion. Netscape led the way, and many companies like Apple and Sun will follow along. Microsoft may even do the same thing by the time you read this.
+
+Money isn't the same as wealth, and the nature of software emphasizes some of the ways in which this is true. Once someone puts the hours into creating software, it costs almost nothing to distribute it to the world. The only real cost is time because raw computer power and caffeinated beverages are very inexpensive.
+
+2~ Wealth and Poverty
+
+George Gilder laid out the gap between wealth and money in his influential book Wealth and Poverty. The book emerged in 1981 just before Ronald Reagan took office, and it became one of the philosophical touchstones for the early years of the administration. At the time, Gilder's words were aimed at a world where socialist economies had largely failed but capitalists had never declared victory. The Soviet Union was sliding deeper into poverty. Sweden was heading for some of the highest interest rates imaginable. Yet the newspapers and colleges of the United States refused to acknowledge the failure. Gilder wanted to dispel the notion that capitalism and socialism were locked into some eternal yin/yang battle. In his mind, efficient markets and decentralized capital allocation were a smashing success compared to the plodding bureaucracy that was strangling the Soviet Union.
+
+Although Gilder spoke generally about the nature of wealth, his insights are particularly good at explaining just why things went so right for the open software world. "Capitalism begins with giving," he says, and explains that societies flourish when people are free to put their money where they hope it will do the best. The investments are scattered like seeds and only some find a good place to grow. Those capitalists who are a mixture of smart and lucky gain the most and then plow their gains back into the society, repeating the process. No one knows what will succeed, so encouraging the bold risk-takers makes sense.
+
+Gilder's chapter on gift-giving is especially good at explaining the success of the free software world. Capitalism, he explains, is not about greed. It's about giving to people with the implicit knowledge that they'll return the favor severalfold. He draws heavily on anthropology and the writings of academics like Claude Lévi-Strauss to explain how the best societies create capital through gifts that come with the implicit debt that people give something back. The competition between people to give better and better gifts drives society to develop new things that improve everyone's life.
+
+Gilder and others have seen the roots of capital formation and wealth creation in this gift-giving. "The unending offerings of entrepreneurs, investing capital, creating products, building businesses, inventing jobs, accumulating inventories--all long before any return is received, all without assurance that the enterprise will not fail--constitute a pattern of giving that dwarfs in extent and in essential generosity any primitive rite of exchange. Giving is the vital impulse and moral center of capitalism," he writes.
+
+The socialists who've railed against the injustices and brutalities of market capitalism at work would disagree with the strength of his statement, but there are plenty of good examples. The American Civil War was the battle between the northern states where workers were occasionally chained to looms during their shifts and the southern states where the workers were always slaves. In the end, the least cruel society won, in part because of the strength of its industry and its ability to innovate. Companies that discovered this fact flourished and those that didn't eventually failed. By the end of the 20th century, the demand for labor in the United States was so high that companies were actively competing in offering plush treatment for their workers.
+
+The free software world, of course, is a perfect example of the altruistic nature of the potlatch. Software is given away with no guarantee of any return. People are free to use the software and change it in any way. The GNU General Public License is not much different from the social glue that forces tribe members to throw a larger party the next year and give back even more. If someone creates something new or interesting using GPL code as a foundation, they are required to give the code back to the tribe.
+
+Of course, it's hard to get much guidance from Gilder on whether the GPL is better than the BSD license. He constantly frames investment as a "gift" to deemphasize the greed of capitalism. Anyone who has been through a mortgage foreclosure or a debt refinancing knows that banks don't act as if they've given away a gift; there are legal mechanisms for strong-arming the folks who don't give back enough. Gilder was trying to get readers to set those tactics aside and realize that after all of the arms are broken, the bank is still left with whatever the loan produced. There were never any guarantees that all of the money would come back.
+
+Gilder smooths over this with a sharply drawn analogy. Everyone, he says, has experienced the uncomfortable feeling that comes from getting a gift that is the wrong size, the wrong style, or just wrong altogether. "Indeed, it is the very genius of capitalism that it recognizes the difficulty of successful giving, understands the hard work and sacrifice entailed in the mandate to help one's fellow men, and offers a practical way of living a life of effective charity," he writes. It's not enough to give a man a fish, because teaching him to fish is a much better gift. A fish farm that hires a man and gives him stock options may be offering the highest form of giving around.
+
+Gilder does note that the cycle of gifts alone is not enough to build a strong economy. He suggests that the bigger and bigger piles of coconuts and whale blubber were all that emerged from the endless rounds of potlatching. They were great for feasting, but the piles would rot and go stale before they were consumed. The successful society reinterpreted the cycle of gifts as investment and dividends, and the introduction of money made it possible for people to easily move the returns from one investment to the start of another. This liquidity lets the cycles be more and more efficient and gives people a place to store their wealth.
+
+Of course, Gilder admits that money is only a temporary storage device. It's just a tool for translating the wealth of one sector of the economy into the wealth of another. It's just a wheelbarrow or an ox cart. If society doesn't value the contributions of the capitalists, the transfer will fail. If the roads are too rocky or blocked by too many toll collectors, the carts won't make the trip.
+
+At first glance, none of this matters to the free software world. The authors give away their products, and as long as someone pays a minimal amount for storage the software will not decay. The web is filled with source code repositories and strongholds that let people store away their software and let others download it at will. These cost a minimal amount to keep up and the cost is dropping every day. There's no reason to believe that the original work of Stallman will be lost to the disease, pestilence, wear, and decay that have cursed physical objects like houses, clothes, and food.
+
+But despite the beautiful permanence of software, everyone knows that it goes bad. Programmers don't use the term "bit rot" for fun. As operating systems mature and other programs change, the old interfaces start to slowly break down. One program may depend upon the operating system to print out a file in response to a command. Then a new version of the printing code is revved up to add fancier fonts and more colors. Suddenly the interface doesn't work exactly right. Over time, these thousands of little changes can ruin the heart of a good program in much the same way worms can eat the hull of a wooden ship.
+
+The good news is that free source software is well positioned to fix these problems. Distributing the source code with the software lets others do their best to keep the software running in a changing environment. John Gilmore, for instance, says that he now embraces the GPL because earlier experiments with totally free software created versions without accompanying source code.
+
+The bad news is that Gilder has a point about capital formation. Richard Stallman did a great job writing Emacs and GCC, but the accolades weren't as easy to spend as cash. Stallman was like the guy with a pile of whale meat in his front yard. He could feast for a bit, but you can only eat so much whale meat. Stallman could edit all day and night with Emacs. He could revel in the neat features and cool Emacs LISP hacks that friends and disciples would contribute back to the project. But he couldn't translate that pile of whale meat into a free OS that would let him throw away UNIX and Windows.
+
+While Stallman didn't have monetary capital, he did have plenty of intellectual capital. By 1991, his GNU project had built many well respected tools that were among the best in their class. Torvalds had a great example of what the GPL could do before he chose to protect his Linux kernel with the license. He also had a great set of tools that the GNU project created.
+
+The GNU project and the Free Software Foundation were able to raise money just on the strength of their software. Emacs and GCC opened doors. People gave money that flowed through to the programmers. While there was no cash flow from software sales, the project found that it could still function quite well.
+
+Stallman's reputation also can be worth more than money when it opens the right doors. He continues to be blessed by the implicit support of MIT, and many young programmers are proud to contribute their work to his projects. It's a badge of honor to be associated with either Linux or the Free Software Foundation. Programmers often list these details on their résumés, and the facts have weight.
+
+The reputation also helps him start new projects. I could write the skeleton of a new double-rotating, buzzword-enhanced editor, label it "PeteMACS," and post it to the Net hoping everyone would love it, fix it, and extend it. It could happen. But I'm sure that Stallman would find it much easier to grab the hearts, minds, and spare cycles of programmers because he's got a great reputation. That may not be as liquid as money, but it can be better.
+
+The way to transfer wealth from project to project is something that the free software world doesn't understand well, but it has a good start. Microsoft struck it rich with DOS and used that money to build Windows. Now it has been frantically trying to use this cash cow to create other new businesses. It pushes MSN, the Microsoft Network, and hopes it will stomp AOL. It has built content-delivery vehicles like Slate and MSNBC. It has created data-manipulation businesses like Expedia. Bill Gates can simply dream a dream and put 10,000 programmers to work creating it. He has serious intellectual liquidity.
+
+In this sense, the battle between free and proprietary software development is one between pure giving and strong liquidity. The GPL world gives with no expectation of return and finds that it often gets a return of a thousand times back from a grateful world of programmers. The proprietary world, on the other hand, can take its profits and redirect them quickly to take on another project. It's a battle of the speed of easy, unfettered, open source cooperation versus the lightning speed of money flowing to make things work.
+
+Of course, companies like Red Hat lie in a middle ground. The company charges money for support and plows this money back into improving the product. It pays several engineers to devote their time to improving the entire Linux product. It markets its work well and is able to charge a premium for what people are able to get for free.
+
+No one knows if the way chosen by companies like Red Hat and Caldera and groups like the Free Software Foundation is going to be successful in the long run. Competition can be a very effective way of driving down the price of a product. Some worry that Red Hat will eventually be driven out of business by cheap $2 CDs that rip off the latest distribution. For now, though, the success of these companies shows that people are willing to pay for hand-holding that works well.
+
+A deeper question is whether the open or proprietary model does a better job of creating a world where we want to live. Satisfying our wants is the ultimate measure of a wealthy society. Computers, cyberspace, and the Internet are rapidly taking up a larger and larger part of people's time. Television viewership is dropping, often dramatically, as people turn to life online. The time spent in cyberspace is going to be important. _1 Stallman wrote in BYTE magazine in 1986, "I'm trying to change the way people approach knowledge and information in general. I think that to try to own knowledge, to try to control whether people are allowed to use it, or to try to stop other people from sharing it, is sabotage. It is an activity that benefits the person that does it at the cost of impoverishing all of society. One person gains one dollar by destroying two dollars' worth of wealth."
+
+No one knows what life online will look like in 5 or 10 years. It will certainly include web pages and e-mail, but no one knows who will pay how much. The cost structures and the willingness to pay haven't been sorted out. Some companies are giving away some products so they can make money with others. Many are frantically giving away everything in the hope of attracting enough eyeballs to eventually make some money.
+
+The proprietary model rewards risk-takers and gives the smartest, fastest programmers a pile of capital they can use to play the game again. It rewards the ones who satisfy our needs and gives them cash they can use to build newer and bigger models. The distribution of power is pretty meritocratic, although it can break down when monopolies are involved.
+
+But the open source solution certainly provides good software to everyone who wants to bother to try to use it. The free price goes a long way to spreading its bounty to a wide variety of people. No one is excluded and no one is locked out of contributing to the commonweal because they don't have the right pedigree, education, racial heritage, or hair color. Openness is a powerful tool.
+
+Richard Stallman told me, "Why do you keep talking about 'capital'? None of this has anything to do with capital. Linus didn't need capital to develop a kernel, he just wrote it. We used money to hire hackers to work on the kernel, but describing that as capital is misleading.
+
+"The reason why free software is such a good idea is that developing software does not really need a lot of money. If we cannot 'raise capital' the way the proprietary software companies do, that is not really a problem.
+
+"We do develop a lot of free software. If a theory says we can't, you have to look for the flaws in the theory."
+
+One of the best ways to illustrate this conundrum is to look at the experiences of the workers at Hotmail after they were acquired by Microsoft. Sure, many of them were overjoyed to receive so much for their share in an organization. Many might even do the same thing again if they had the choice. Many, though, are frustrated by their new position as corporate citizens whose main job is augmenting Microsoft's bottom line.
+
+One Hotmail founder told the PBS Online columnist Robert Cringely, "All we got was money. There was no recognition, no fun. Microsoft got more from the deal than we did. They knew nothing about the Internet. MSN was a failure. We had 10 million users, yet we got no respect at all from Redmond. Bill Gates specifically said, 'Don't screw up Hotmail,' yet that's what they did."
+
+1~ Future
+
+David Henkel-Wallace sat quietly in a chair in a Palo Alto coffee shop explaining what he did when he worked at the free software firm Cygnus. He brought his new daughter along in a baby carriage and kept her parked alongside. Cygnus, of course, is one of the bigger successes in the free software world. He helped the company make real money building and sustaining GCC, the free compiler that Richard Stallman wrote and gave away. Cygnus managed to prosper even though it gave away all of its own work.
+
+In the middle of talking about Cygnus and open source, he points to his child and says, "What I'm really worried about is she'll grow up in a world where software continues to be as buggy as it is today." Other parents might be worried about the economy, gangs, guns in schools, or the amount of sex in films, but Henkel-Wallace wants to make sure that random software crashes start to disappear.
+
+He's done his part. The open source movement thrives on the GCC compiler, and Cygnus managed to find a way to make money on the process of keeping the compiler up to date. The free operating systems like Linux or FreeBSD are great alternatives for people today. They're small, fast, and very stable, unlike the best offerings of Microsoft or Apple. If the open software movement continues to succeed and grow, his daughter could grow up in a world where the blue screen of death that terrorizes Microsoft users is as foreign to her as a manual typewriter.
+
+No one knows if the open software world will continue to grow. Some people are very positive and point out that all the features that made it possible for the free OSs to bloom are not going away. If anything, the forces of open exchange and freedom will only accelerate as more people are drawn into the mix. More people mean more bug fixes, which means better software.
+
+Others are not so certain, and this group includes many of the people who are deeply caught up in the world of open source. Henkel-Wallace, for instance, isn't so sure that the source code makes much difference when 99 percent of the people don't program. Sure, Cygnus had great success sharing source code with the programmers who used GCC, but all of those guys knew how to read the code. What difference will the source code make to the average user who just wants to read his e-mail? Someone who can't read the source code isn't going to contribute much back to the project and certainly isn't going to put much value in getting it. A proprietary company like Microsoft may be able to maintain a broad base of loyalty just by offering better hand-holding for the folks who can't program.
+
+Free software stands at an interesting crossroads as this book is being written. It won over a few hackers in garages in the early 1990s. By the mid-1990s, webmasters embraced it as a perfectly good option. Now everyone wonders whether it will conquer the desktop in the next century.
+
+It's always tempting for an author to take the classic TV news gambit and end the story with the earnest punt phrase, "Whether this will happen remains to be seen." That may be the fairest way to approach reporting the news, but it's not as much fun. I'm going to boldly predict that open source software will win the long-term war against proprietary companies, but it will be a bloody war and it will be more costly than people expect. Over the next several years, lawyers will spend hours arguing cases; people will spend time in jail; and fortunes will be lost to the struggle.
+
+While it seems difficult to believe, some people have already spent time in jail for their part in the free software revolution. Kevin Mitnick was arrested in 1995, accused of stealing millions if not billions of dollars' worth of source code by breaking into companies' computers and copying it. There was no trial, nor even a bail hearing. Finally, after almost five years in prison, Mitnick pled guilty to some charges and received a sentence that was only a few months longer than the time he had already served while waiting for a trial.
+
+In the statement he made following his release, he said, ". . . my crimes were simple crimes of trespass. I've acknowledged since my arrest in February 1995 that the actions I took were illegal, and that I committed invasions of privacy--I even offered to plead guilty to my crimes soon after my arrest."
+
+He continued, "The fact of the matter is that I never deprived the companies involved in this case of anything. I never committed fraud against these companies. And there is not a single piece of evidence suggesting that I did so."
+
+This trespass, of course, would be breaking the rules. The irony is that in 1999, Sun announced that it was sharing its source code with the world. They begged everyone to look at it and probe it for weaknesses. The tide of opinion changed and Sun changed with it.
+
+Of course, breaking into a company's computer system will always be bad, but it's hard to view Mitnick's alleged crimes as a terrible thing. Now that source code is largely free and everyone digs public sharing, he begins to look more like a moonshine manufacturer during Prohibition. The free source revolution has given him a rakish charm. Who knows if he deserves it, but the zeitgeist has changed.
+
+There are more arrests on the way. In January 2000, a young Norwegian man was detained by the Norwegian police who wanted to understand his part in the development of software to unscramble the video data placed on DVD disks. Motion picture producers who released their movies in this format were worried that a tool known as DeCSS, which was floating around the Internet, would make it easier for pirates to make unlicensed copies of their movies.
+
+The man, Jon Johansen, did not write the tool, but merely helped polish and circulate it on the Net. News reports suggest an anonymous German programmer did the actual heavy lifting.
+
+Still, Johansen made a great target for the police, who never officially arrested him, although they did take him in for questioning.
+
+At this writing, it's not clear if Johansen officially broke any laws. Some argue that he violated the basic strictures against breaking and entering. Others argue that he circulated trade secrets that were not legitimately obtained.
+
+Still others see the motion picture industry's response as an effort to control the distribution of movies and the machines that display them. A pirate doesn't need to use the DeCSS tool to unlock the data on a DVD disk. They just make a verbatim copy of the disk without bothering with the encryption. That leads others to suspect that the true motive is to sharply limit the companies that produce machines that can display DVD movies.
+
+One group that is locked out of the fray is the Linux community. While software for playing DVD movies exists for Macintoshes and PCs, there's none for Linux. DeCSS should not be seen as a hacker's tool, but merely a device that allows Linux users to watch the legitimate copies of the DVDs that they bought. Locking out Linux is like locking in Apple and Microsoft.
+
+The battle between the motion picture community and the Linux world is just heating up as I write this. There will be more lawsuits and perhaps more jail time ahead for the developers who produced DeCSS and the people who shared it through their websites.
+
+Most of the battles are not so dramatic. They're largely technical, and the free source world should win these easily. Open source solutions haven't had the same sophisticated graphical interfaces as Apple or Windows products, and most of the programmers who enjoy Linux or the various versions of BSD don't need a graphical interface and may not care about it. The good news is that projects like KDE and GNOME are already capable tools. The open source world must keep tackling this area and fight to produce something the average user can use.
+
+Open source software usually wins the technical battles. The free versions of UNIX are already much more stable than the products coming from Microsoft and Apple, and it seems unlikely that this will change. The latest version of Apple's OS has free versions of BSD at its core. That battle is won. Microsoft's NT can beat the free OSs in some extreme cases, but those cases are becoming rarer by the day. Sun's Solaris is still superior in some ways, but the company is sharing its source code with users in a way that emulates the open source world. More attention means more programmers and more bug fixes.
+
+Microsoft's greatest asset is the installed base of Windows, and it will try to use this to the best of its ability to defeat Linux. At this writing, Microsoft is rolling out a new version of the Domain Name Server (DNS), which acts like a telephone book for the Internet. In the past, many of the DNS machines were UNIX boxes because UNIX helped define the Internet. Windows 2000 includes new extensions to DNS that practically force offices to switch over to Windows machines to run DNS. Windows 2000 just won't work as well with an old Linux or UNIX box running DNS.
+
+This is a typical strategy for Microsoft and one that is difficult, but not impossible, for open source projects to thwart. If the cost of these new servers is great enough, some group of managers is going to create its own open source clone of the modified DNS server. This has happened time and time again, but not always with great success. Linux boxes come with Samba, a program that lets Linux machines act as file servers. It works well and is widely used. Another project, WINE, started with the grand design of cloning all of the much more complicated Windows API used by programmers. It is a wonderful project, but it is far from finished. The size and complexity make a big difference.
+
+Despite these tactics, Microsoft (and other proprietary companies) will probably lose the quest to dominate the standards on the Internet. They can only devote a few programmers to each monopolistic grab. The free software world has many programmers willing to undertake projects. The numbers are now great enough that the cloners should be able to handle anything Microsoft sends their way.
+
+The real battles will be political and legal. While the computer world seems to move at a high speed with lots of constant turnover, there's plenty of inertia built into the marketplace. Many people were rather surprised to find that there was plenty of COBOL, FORTRAN, and other old software happily running along without any idea of how to store a date with more than two digits. While Y2K incidents fell far short of the media's hype, the number of systems that required reprogramming was still much larger than conventional wisdom predicted. IBM continues to sell mainframes to customers who started buying mainframes in the 1960s. Once people choose one brand or product or computer architecture, they often stay with it forever.
+
+This is bad news for the people who expect the free OSs to take over the desktop in the next 5 or 10 years. Corporate managers who keep the machines on people's desktops hate change. Change means reeducation. Change means installing new software throughout the plant. Change means teaching folks a new set of commands for running their word processors. Change means work. People who manage the computer networks in offices get graded on the number of glitches that stop workflow. Why abandon Microsoft now?
+
+If Microsoft has such an emotional stranglehold on the desktop and the computer industry takes forever to change, will free software ever grow beyond the 10 million or so desktops owned by programmers and their friends?
+
+Its strongest lever will be price. Freedom is great, but corporations respond better to a cost that is close to, if not exactly, zero. Big companies like Microsoft are enormous cash engines. They need a huge influx of cash to pay the workers, and they can't let their stock price slip. Microsoft's revenues have increased with a precision that is rare in corporate America. Some stock analysts joke that the stock price suggests that Microsoft's revenues will grow faster than 10 percent forever. In the past, the company accomplished this by absorbing more and more of the market while finding a way to charge more and more for the software it supplies. Businesses that lived quite well with Windows 95 are now running Windows NT. Businesses that ran NT are now using special service packs that handle network management and data functions. The budget for computers just keeps going up, even as hardware costs go down.
+
+Something has to give. It's hard to know how much of a lever price will be. If Microsoft's revenue stops growing, the company's stock price could take a sharp dive. The expectation of continued smooth growth is built into the price, and any hiccup could bring it tumbling down.
+
+The biggest question is how much people are willing to pay to continue to use Microsoft products. Retooling an office is an expensive proposition. The cost of buying new computers and software is often smaller than the cost of reeducation. While the free software world is much cheaper, shifting is not an easy proposition. Only time will tell how much people are willing to pay for their reluctance to change.
+
+The first cracks are already obvious. Microsoft lost the server market to Apache and Linux on the basis of price and performance. Web server managers are educated computer users who can make their own decisions without having to worry about the need to train others. Hidden computers like this are easy targets, and the free software world will gobble many of them up. More users mean more bug fixes and faster propagation of better code.
+
+The second crack in Microsoft's armor will be appliance computers. Most people want to browse the web and exchange some e-mail. The basic distribution from Red Hat or FreeBSD is good enough. Many people are experimenting with creating computers that are defined by the job they do, not the operating system or the computer chip. Free source packages should have no trouble winning many battles in this arena. The price is right and the manufacturers have to hire the programmers anyway.
+
+The third breach will be young kids. They have no previous allegiances and are eager to learn new computer technology. Microsoft may ask "Where do you want to go today?" but it doesn't want to talk with someone whose answer is "The guts of your OS." The best and brightest 13-year-olds are already the biggest fans of free software. They love the power and the complete access.
+
+The fourth crack will be the large installations in businesses that are interested in competitive bidding. Microsoft charges a bundle for each seat in a company, and anyone bidding for these contracts will be able to charge much less if they ship a free OS. It's not uncommon for a company to pay more than a million dollars to Microsoft for license fees. There's plenty of room for price competition when the bill gets that high. Companies that don't want to change will be hard to move from Windows, but ones that are price-sensitive will be moved.
+
+Of course, free software really isn't free. A variety of companies offering Linux support need to charge something to pay their bills. Distributions like Red Hat or FreeBSD may not cost much, but they often need some customization and hand-holding. Is a business just trading one bill for another? Won't Linux support end up costing the same thing as Microsoft's product?
+
+Many don't think so. Microsoft currently wastes billions of dollars a year expanding its business in unproductive ways that don't yield new profits. It spent millions writing a free web browser to compete with Netscape's, then gave it away. It probably gave up millions of dollars and untold bargaining chips when it twisted the arms of competitors into shunning Netscape. The company's successful products pay for these excursions. At the very least, a free OS operation would avoid these costs.
+
+Free OS systems are inherently cheaper to run. If you have the source, you might be able to debug the problem yourself. You probably can't, but it doesn't hurt to try. Companies running Microsoft products can't even try. The free flow of information will help keep costs down.
+
+Of course, there are also hard numbers. An article in Wired by Andrew Leonard cites numbers originally developed by the Gartner Group. A 25-person office would cost $21,453 to outfit with Microsoft products and $5,544.70 to outfit with Linux. This estimate is a bit conservative. Most of the Linux cost is debatable because it includes almost $3,000 for 10 service calls to a Linux consultant and about $2,500 for Applixware, an office suite that does much of the same job as Microsoft Office. A truly cheap and technically hip office could make do with the editor built into Netscape and one of the free spreadsheets available for Linux. It's not hard to imagine someone doing the same job for about $3, which is the cost of a cheap knockoff of Red Hat's latest distribution.
+
+Of course, it's important to realize that free software still costs money to support. But so does Microsoft's. The proprietary software companies also charge to answer questions and provide reliable information. It's not clear that Linux support is any more expensive to offer.
+
+Also, many offices large and small keep computer technicians on hand. There's no reason to believe that Linux technicians will be any more or less expensive than Microsoft technicians. Both answer questions. Both keep the systems running. At least the Linux tech can look at the source code.
+
+The average home user and small business user will be the last to go.
+
+These users will be the most loyal to Microsoft because they will find it harder than anyone else to move. They can't afford to hire their own Linux gurus to redo the office, and they don't have the time to teach themselves.
+
+These are the main weaknesses for Microsoft, and the company is already taking them seriously. I think many underestimate how bloody the battle is about to become. If free source software is able to halt or even reverse Microsoft's revenue growth, there are going to be some very rich people with deep pockets who feel threatened. Microsoft is probably going to turn to the same legal system that gave it such grief and find some wedge to drive into the Linux community. Its biggest weapons will be patents and copyrights, used to stop the cloners.
+
+Any legal battle will be an interesting fight. On the one hand, the free software community is diverse and spread out among many different entities. There's no central office and no one source that could be brought down. This means Microsoft would fight a war on many fronts, and this is something that's emotionally and intellectually taxing for anyone, no matter how rich or powerful.
+
+On the other hand, the free software community has no central reservoir of money or strength. Each small group could be crippled, one by one, by a nasty lawsuit. Groups like OpenBSD are always looking for donations. The Free Software Foundation commands deep affection, but its budget is a tiny fraction of Sun's or Microsoft's. Legal bills are real, and lawyers have a way of making them blossom. There may be hundreds of different targets for Microsoft, but many of them won't take much firepower to knock out.
+
+The free software community is not without some deep pockets itself. Many of the traditional hardware companies like IBM, Compaq, Gateway, Sun, Hewlett-Packard, and Apple can make money by selling either hardware or software. They've been hurt in recent years by Microsoft's relentless domination of the desktop. Microsoft negotiated hard contracts with each of these companies and controlled what the user saw. The PC manufacturers were left with little ability to customize their products. Microsoft turned them into commodity manufacturers and stripped away their control. Each of these companies should see great potential in adopting a free OS. There is no extra cost, no strange meetings, no veiled threats, no arm-twisting.
+
+Suddenly, brands like Hewlett-Packard or IBM can mean something when they're slapped on a PC. Any goofball in a garage can put a circuit board in a box and slap on Microsoft Windows. A big company like HP or IBM could do extra work to make sure the Linux distribution on the box worked well with the components and provided a glitch-free existence for the user.
+
+The hardware companies will be powerful allies for the free software realm because they stand to benefit the most economically from the free software licenses. When all of the software is free, no one controls it, and this strips away many of Microsoft's traditional ways of applying leverage. Microsoft, for instance, knocked the legs out from underneath Netscape by giving away Internet Explorer for free. Now the free software world is using the same strategy against Microsoft. It's hard for Microsoft to undercut free for most users.
+
+The university system is a less stable ally. While the notion of free exchange of information is still floating around many of the nation's campuses, the places are frighteningly corporate and profit-minded. Microsoft has plenty of cash at its disposal and it hasn't been shy about spreading it around places like MIT, Harvard, and Stanford. The computer science departments on those campuses are the recipients of brand-new buildings compliments of Bill Gates. These gifts are hard to ignore.
+
+Microsoft will probably avoid a direct confrontation with the academic tradition of the institutions and choose to cut their prices as low as necessary to dominate the desktops. Universities will probably be given "free," tax-deductible donations of software whenever they stray far from the Microsoft-endorsed solution. Lab managers and people who make decisions about the computing infrastructure of the university will probably get neat "consulting" contracts from Microsoft or its buddies. This will probably not mean total domination, but it will buy a surprisingly large amount of obedience.
+
+Despite these gifts, free software will continue to grow on the campuses. Students often have little cash and Microsoft doesn't get any great tax deduction by giving gifts to individual students (that's income). The smartest kids in the dorms will continue to run Linux. Many labs do cutting-edge work that requires customized software. These groups will naturally be attracted to free source code because it makes their life easier. It will be difficult for Microsoft to counteract the very real attraction of free software.
+
+Of course, Microsoft is not without its own arms. Microsoft still has patent law on its side, and this may prove to be a very serious weapon. The law grants the patent holder the exclusive right to determine who uses an idea or invention over the term of the patent, which is now 20 years from the first filing date. That means the patent holder can sue anyone who makes a product that uses the invention. It even means the patent holder can sue someone who simply cobbles up the invention in his basement and uses the idea without paying anything to anyone. So even someone who distributes the software for free, or merely uses it, can be liable for damages.
+
+In the past, many distrusted the idea of software patents because the patent system wasn't supposed to allow you to lay claim to the laws of nature. This interpretation fell by the wayside as patent lawyers argued successfully that software combined with a computer was a separate machine and machines were eligible for protection.
+
+Today, it is quite easy to get patent protection for new ideas on how to structure a computer network, an operating system, or a software tool. The only requirement is that they're new and nonobvious. Microsoft has plenty of these.
+
+If things go perfectly for Microsoft, the company will be able to pull out one or two patents from its huge portfolio and use these to sue Red Hat, Walnut Creek, and a few of the other major distributors. Ideally, this patent would cover some crucial part of the Linux or BSD operating system. After the first few legal bills started arriving on the desk of the Red Hat or Walnut Creek CEO, the companies would have to settle by quitting the business. Eventually, all of the distributors of Linux would crumble and return to the small camps in the hills to lick their wounds. At least, that's probably the dream of some of Microsoft's greatest legal soldiers.
+
+This maneuver is far from a lock for Microsoft because the free software world has a number of good defenses. The first is that the Linux and BSD worlds do a good job of publicizing their advances. Any patent holder must file the patent before someone else publishes the idea. The Linux discussion groups and source distributions are a pretty good public forum. The ideas and patches often circulate publicly long before they make their way into a stable version of the kernel. That means that patent holders will need to be much farther ahead than the free software world.
+
+Linux and the free software world are often the cradle of new ideas. University students use open source software all the time. It's much easier to do way cool things if you've got access to the source. Sure, Microsoft has some smart researchers with great funding, but can they compete with all the students?
+
+Microsoft's ability to dominate the patent world may be hurt by the nature of the game. Filing the application first or publishing an idea first is all that matters in the patent world. Producing a real product is hard work that is helped by the cash supply of Microsoft. Coming up with ideas and circulating them is much easier than building real tools that people can use.
+
+The second defense is adaptability. The free software distributions can simply strip out the offending code. The Linux and BSD disks are very modular because they come from a variety of different sources. The different layers and tools come from different authors, so they are not highly integrated. This makes it possible to remove one part without ruining the entire system.
+
+Stallman's GNU project has been dealing with patents for a long time and has some experience programming around them. The GNU Zip (gzip) program, for instance, was written to avoid the patents on the Lempel-Ziv-Welch (LZW) compression algorithm claimed by Unisys and IBM. The software is well-written and it works as well as, if not better than, the algorithm it replaces. Now it's pretty standard on the web and very popular because it is open source and patent-free. It's the politically correct compression algorithm to use because it's open to everyone.
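+The replacement algorithm gzip adopted, DEFLATE (LZ77 plus Huffman coding), remains unencumbered and now ships in the standard library of most languages. A minimal round-trip sketch, using Python's stdlib gzip module rather than the original GNU command-line tool:

```python
import gzip

# DEFLATE (LZ77 + Huffman coding) was chosen for gzip precisely
# because it sidestepped the LZW patents.
text = b"free software " * 100

compressed = gzip.compress(text)
restored = gzip.decompress(compressed)

assert restored == text  # lossless round trip
print(len(text), "->", len(compressed), "bytes")
```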
+
+It will be pretty difficult for a company like Microsoft to find a patent that will allow it to deal a fatal blow to either the Linux or BSD distributions. The groups will just clip out the offending code and then work around it.
+
+Microsoft's greatest hope is to lock up the next generation of computing with patents. New technologies like streaming multimedia or Internet audio are still up for grabs. While people have been studying these topics in universities for some time, the Linux community is further behind. Microsoft will try to dominate these areas with crucial patents that affect how operating systems deal with this kind of data. Their success at this is hard to predict. In any event, while they may be able to cripple the adoption of some new technologies like streaming multimedia, they won't be able to smash the entire world.
+
+The third and greatest defense for the free source ideology is a loophole in the patent law that may also help many people in the free software world. It is not illegal to use a patented idea if you're in the process of doing some research on how to improve the state of the art in that area. The loophole is very narrow, but many users of free software might fall within it. All of the distributions come with source code, and many of the current users are programmers experimenting with the code. Most of these programmers give their work back to the project and this makes most of their work pretty noncommercial. The loophole probably wouldn't protect the corporations that are using free software simply because it is cheap, but it would still be large enough to allow innovation to continue. A non-commercial community built up around research could still thrive even if Microsoft manages to come up with some patents that are very powerful.
+
+The world of patents can still constrain the world of free software. Many companies work hard on developing new technology and then rely upon patents to guarantee them a return on investment. These companies have trouble working well with the free software movement because there's no revenue stream to use. A company like Adobe can integrate some neat new streaming technology or compression algorithm and add the cost of a patent license to the price of the product. A free software tool can't.
+
+This does not preclude the free software world from using some ideas or software. There's no reason why Linux can't run proprietary application software that costs money. Perhaps people will sell licenses for some distributions and patches. Still, the users must shift mental gears when they encounter these packages.
+
+There are no easy solutions to patent problems. The best news is that proprietary, patented technology rarely comes to dominate the marketplace. There are usually ways to engineer around a patented solution, and engineers are great at finding them. Sure, there will be the occasional brilliant lightbulb, transistor, radio, or other invention that is protected by a broad patent, but these will be relatively rare.
+
+There are a few things that the open source community can do to protect itself against patents. Right now, many of the efforts at developing open source solutions come after a technology emerges. For instance, developing drivers for DVD drives is one of the current challenges at the time that I'm writing this chapter, even though the technology has been shipping with many midpriced computers for about a year.
+
+There is no reason why some ivory-tower, blue-sky research can't take place in a patent-free world of open source. Many companies already allow their researchers to attend conferences and present papers on their open work and classify this as "precompetitive" research. Standards like JPEG or MPEG emerge from committees that pledge not to patent their work. There is no reason why these loose research groups can't be organized around a quasi-BSD or GNU license that forces development to be kept in the open.
+
+These research groups will probably be poorly funded but much more agile than the corporate teams or even the academic teams. They might be built around a public newsgroup or mailing list devoted to publicly disclosing ideas. Once ideas are officially disclosed, no patents can be issued on them. Many companies like IBM and Xerox already publish technical disclosure journals for just this defensive purpose.
+
+Still, the debate about patents will be one that will confound the entire software industry for some time. Many for-profit, proprietary firms are thrown for a loop by some of the patents granted to their competitors. The open source world will have plenty of allies who want to remake the system.
+
+Patents are probably the most potent legal tool that proprietary software companies can use to threaten the open source world. There is no doubt that these companies will use them to fend off low-rent competition.
+
+One of the biggest challenges for the free software community will be developing the leadership to undertake these battles. It is one thing to mess around in a garage with your buddies and hang out in some virtual he-man/Microsoft-haters clubhouse cooking up neat code. It's a very different challenge to actually achieve the world domination that the Linux world muses about. When I started writing the book, I thought that an anthem for the free software movement might be Spinal Tap's "Flower People." Now I think it's going to be Buffalo Springfield's "For What It's Worth," which warns, "There's something happening here / What it is ain't exactly clear."
+
+Tim O'Reilly emphasizes this point. When asked about some of the legal battles, he said, "There's definitely going to be a war over this stuff. When I look back at previous revolutions, I realize how violent they became. They threatened to burn Galileo at the stake. They said 'Take it back,' and he backed down. But it didn't make any difference in the end. But just because there's a backlash doesn't mean that open source won't win in the long run."
+
+Companies like Microsoft don't let markets and turf just slip away. They have a large budget for marketing their software. They know how to generate positive press and plenty of fear in the hearts of managers who must make decisions. They understand the value of intellectual property, and they aren't afraid of dispatching teams of lawyers to ensure that their markets remain defended.
+
+The open source community, however, is not without a wide variety of strengths, although it may not be aware of them. In fact, this diffuse power and lack of self-awareness and organization is one of its greatest strengths. There is no powerful leadership telling the open source community "Thou shalt adopt these libraries and write to this API." The people in the trenches are testing code, proposing solutions, and getting their hands dirty while making decisions. The realm is not a juggernaut, a bandwagon, a dreadnought, or an unstoppable freight train roaring down the track. It's creeping kudzu, an algae bloom, a teenage fad, and a rising tide mixed together.
+
+The strength of the free price shouldn't be underestimated. While the cost isn't really nothing after you add up the price of paying Red Hat, Slackware, SuSE, Debian, or someone else to provide support, it's still much cheaper than the proprietary solutions on the market. Price isn't the only thing on people's minds, but it will always be an important one.
+
+In the end, though, I think the free software world will flourish because of the ideals it embraces. The principles of open debate, broad circulation, easy access, and complete disclosure are like catnip to kids who crackle with intelligence. Why would anyone want to work in a corporate cubicle with a Dilbert boss when you can spend all night hacking on the coolest tools? Why would you want to join some endless corporate hierarchy when you can dive in and be judged on the value of your code? For these reasons, the free software world can always count on recruiting the best and the brightest.
+
+This process will continue because the Dilbert-grade bosses aren't so dumb. I know more than a few engineers and early employees at startup firms who received very small stock allowances at IPO time. One had written three of the six systems that were crucial to the company's success on the web. Yet he got less than 1 percent of the shares allocated to the new CEO who had just joined the company. The greed of the non-programming money changers who plumb the venture capital waters will continue to poison the experience of the programmers and drive many to the world of free software. If they're not going to get anything, they might as well keep access to the code they write.
+
+The open source ideals are also strangely empowering because they force everyone to give up their will to power and control. Even if Richard Stallman, Linus Torvalds, Eric Raymond, and everyone else in the free software world decides that you're a scumbag who should be exiled to Siberia, they can't take away the code from you. That freedom is a very powerful drug.
+
+The free software movement is rediscovering the same notions that drove the American colonists to rebel against the forces of English oppression. The same words that flowed through the pens of Thomas Paine, Thomas Jefferson, and Benjamin Franklin are just as important today. The free software movement certifies that we are all created equal, with the same rights to life, liberty, and the pursuit of bug-free code. This great nation took many years to evolve and took many bad detours along the way, but in the end, the United States tends to do the right thing.
+
+The free software movement has many flaws, blemishes, and weaknesses, but I believe that it will also flourish over the years. It will take wrong turns and encounter great obstacles, but in the end the devotion to liberty, fraternity, and equality will lead it to make the right decisions and will outstrip all of its proprietary competitors.
+
+In the end, the lure of the complete freedom to change, revise, extend, and improve the source code of a project is a powerful drug that creative people can't resist. Shrink-wrapped software's ease-of-use and prepackaged convenience are quite valuable for many people, but its world is static and slow.
+
+In the end, the power to write code and change it without hiring a team of lawyers to parse agreements between companies ensures that the free software world will gradually win. Corporate organization provides money and stability, but in technology the race is usually won by the swiftest.
+
+In the end, free software creates wealth, not cash, and wealth is much better than cash. You can't eat currency and you can't build a car with gold. Free software does things and accomplishes tasks without crashing into the blue screen of death. It empowers people. People who create it and share it are building real infrastructure that everyone can use. The corporations can try to control it with intellectual property laws. They can buy people, hornswoggle judges, and co-opt politicians, but they can't offer more than money.
+
+In the end, information wants to be free. Corporations want to believe that software is a manufactured good like a car or a toaster. They want to pretend it is something that can be consumed only once. In reality, it is much closer to a joke, an idea, or gossip. Who's managed to control those?
+
+For all of these reasons, this grand free-for-all, this great swapfest of software, this wonderful nonstop slumber party of cooperative knowledge creation, this incredible science project on steroids will grow in strange leaps and unexpected bounds until it swallows the world. There will be battles, there will be armies, there will be spies, there will be snakes, there will be court cases, there will be laws, there will be martyrs, there will be heroes, and there will be traitors. But in the end, information just wants to be free. That's what we love about it.
+
+1~glossary Glossary
+
+*{Apache License}* A close cousin of the BSD License. The software comes with few restrictions, and none prevent you from taking a copy of Apache, modifying it, and selling binary versions. The only restriction is that you can't call it Apache. For instance, C2Net markets a derivative of Apache known as Stronghold.
+
+*{AppleScript}* A text language that can be used to control the visual interface of the Macintosh. It essentially says things like "Open that folder and double click on Adobe Photoshop to start it up. Then open the file named 'Pete's Dog's Picture.'"
+
+*{architecture}* Computer scientists use the word "architecture" to describe the high-level, strategic planning of a system. A computer architect may decide, for instance, that a new system should come with three multiplier circuits but not four after analyzing the sequence of arithmetic operations that a computer will likely be called upon to execute. If there are often three multiplications that could be done concurrently, then installing three multiplier circuits would increase efficiency. Adding a fourth, however, would be a waste of effort if there were few occasions to use it. In most cases, the term "computer architect" applies only to hardware engineers. All sufficiently complicated software projects, however, have an architect who makes the initial design decisions.
+
+*{Artistic License}* A license created to protect the original Perl language. Some users dislike the license because it is too complex and filled with loopholes. Bruce Perens writes, "The Artistic License requires you to make modifications free, but then gives you a loophole (in Section 7) that allows you to take modifications private or even place parts of the Artistic-licensed program in the public domain!"
+
+*{BeOS}* An operating system created by Be, a company run by ex-Apple executive Jean-Louis Gassée.
+
+*{BSD}* An abbreviation for Berkeley Software Distribution, a package first released by Bill Joy in the 1970s. The term has come to mean both a class of UNIX that was part of the distribution and also the license that protects this software. There are several free versions of BSD UNIX that are well-accepted and well-supported by the free source software community. OpenBSD, NetBSD, and FreeBSD are three of them. Many commercial versions of UNIX, like Sun's Solaris and NeXT's OS, can trace their roots to this distribution. The BSD was originally protected by a license that allowed anyone to freely copy and modify the source code as long as they gave some credit to the University of California at Berkeley. Unlike the GNU GPL, the license does not require the user to release the source code to any modifications.
+
+*{BSD License}* The original license for BSD software. It placed few restrictions on what you did with the code. The important terms forced you to keep the copyright notice intact and to credit the University of California at Berkeley when advertising a product. The requirement to include credit has since been removed because people realized that they often needed to publish hundreds of acknowledgments for a single CD-ROM. Berkeley removed the term in the hope that it would set a good example for the rest of the community.
+
+*{copyleft}* Another term that is sometimes used as a synonym for the GNU General Public License.
+
+*{Debian Free Software Guidelines}* See Open Source. (www.debian.org)
+
+*{driver}* Most computers are designed to work with optional devices like modems, disk drives, printers, cameras, and keyboards. A driver is a piece of software that translates the signals sent by the device into a set of signals that can be understood by the operating system. Most operating systems are designed to be modular, so these drivers can be added as an afterthought whenever a user connects a new device. They are usually designed to have a standard structure so other software will work with them. The driver for each mouse, for instance, translates the signals from the mouse into a standard description that includes the position of the mouse and its direction. Drivers are an important point of debate in the free software community because volunteers must often create the drivers. Most manufacturers write the drivers for Windows computers because these customers make up the bulk of their sales. The manufacturers often avoid creating drivers for Linux or BSD systems because they perceive the market to be small. Some manufacturers also cite the GNU GPL as an impediment because they feel that releasing the source code to their drivers publishes important competitive information.
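+The "standard structure" the entry describes can be sketched abstractly. The class and field names below are hypothetical, not taken from any real operating system; the point is only that the OS talks to one fixed interface while each driver hides its device's raw signals:

```python
# Hypothetical sketch: the OS calls decode() on whichever driver is
# installed; each driver translates its device's raw packet into the
# one standard description the OS understands.
class MouseDriver:
    def decode(self, raw):
        dx, dy, buttons = raw  # device-specific wire format
        return {"dx": dx, "dy": dy, "left": bool(buttons & 1)}

class TrackballDriver:
    def decode(self, raw):
        # A different device and raw format, but the same output shape.
        angle_x, angle_y, pressed = raw
        return {"dx": angle_x // 2, "dy": angle_y // 2, "left": pressed}

# The OS-side code never changes when the hardware does.
for driver, packet in [(MouseDriver(), (5, -3, 1)),
                       (TrackballDriver(), (10, -6, True))]:
    print(driver.decode(packet))
```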
+
+*{FreeBSD}* The most popular version of BSD. The development team, led by Jordan Hubbard, works hard to provide an easy-to-use tool for computers running the Intel x86 architecture. In recent years, they've tried to branch out into other lines. (www.freebsd.org)
+
+*{Free Software Foundation}* An organization set up by Richard Stallman to raise money for the creation of new free software. Stallman donates his time to the organization and takes no salary. The money is spent on hiring programmers to create new free software.
+
+*{GIMP}* The GNU Image Manipulation Program, which can manipulate image files in much the same way as Adobe Photoshop. (www.gimp.org)
+
+*{GNOME}* The GNU Network Object Model Environment, which might be summarized as "all of the functionality of Microsoft Windows for Linux." It's actually more: there are many enhancements that make the tool easier to use and more flexible than the prototype from Redmond. See also KDE, another package that accomplishes much the same thing. (www.gnome.org)
+
+*{GNU}* A recursive acronym that stands for "GNU's Not UNIX." The project was started by Richard Stallman in the 1980s to fight against the tide of proprietary software. The project began with several very nice programs like GNU Emacs and GCC, the C compiler, protected by Stallman's GNU General Public License. It has since grown to issue software packages that handle many different tasks, from games (GNU Chess) to privacy (GNU Privacy Guard). Its main goal is to produce a free operating system that gives users the ability to do everything they want with software that comes with the source code. See also GPL and Free Software Foundation. (www.gnu.org)
+
+*{GNU/Linux}* The name some people use for Linux as a way of giving credit to the GNU project for its leadership and contribution of code.
+
+*{GPL}* An abbreviation that stands for "General Public License."
+This license was first written by Richard Stallman to control the usage of software created by the GNU project. A user is free to read and modify the source code of a GPL-protected package, but the user must agree to distribute any changes or improvements if they distribute the software at all. Stallman views the license as a way to force people to share their own improvements and contribute back to the project if they benefit from the project's hard work. See also BSD.
+
+*{higher-level languages}* Modern computer programmers almost always write their software in languages like C, Java, Pascal, or Lisp, which are known as higher-level languages. The word "higher" measures the amount of abstraction available to a programmer. A high-level language might let a programmer say, "Subtract variable losses from variable revenues to compute variable profits," and the language would figure out just where in memory to find each value. A low-level programming language would require the software author to point directly to the locations in memory where the data could be found.
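+The contrast can be made concrete. In the high-level sketch below (Python, chosen only for illustration), the language tracks where each value lives; the comments show roughly what a low-level program would have to spell out by hand:

```python
# High-level: the language decides where revenues, losses, and
# profits live in memory; the programmer never sees an address.
revenues = 1_000_000
losses = 400_000
profits = revenues - losses

# Low-level equivalent (conceptual assembly): the programmer names
# the memory locations explicitly.
#   LOAD  R1, 0x1000   ; fetch revenues
#   SUB   R1, 0x1004   ; subtract losses
#   STORE R1, 0x1008   ; write profits back
print(profits)
```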
+
+*{KDE}* The K desktop environment is another toolkit that offers much of the same functionality as Windows. It is controversial because it originally used some proprietary software and some users needed a license. See also GNOME, a similar package that is distributed under the GNU GPL. (www.kde.org)
+
+*{kernel}* The core of an OS responsible for juggling the different tasks and balancing all of the demands. Imagine a short-order cook who scrambles eggs, toasts bread, chops food, and somehow manages to get an order out in a few minutes. A kernel in an OS juggles the requests to send information to a printer, display a picture on the screen, get data from a website, and a thousand other tasks.
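+The juggling can be sketched in miniature. The round-robin scheduler below is an illustrative toy, not how any real kernel is written, but it shows the short-order-cook idea: every task gets one step at a time, the way a kernel time-slices between a printer job, a screen redraw, and a network request:

```python
from collections import deque

def round_robin(tasks):
    # tasks: generators, each yielding one unit of work per step.
    queue = deque(tasks)
    order = []
    while queue:
        task = queue.popleft()
        try:
            order.append(next(task))   # run the task for one step
            queue.append(task)         # then send it to the back
        except StopIteration:
            pass                       # task finished; drop it
    return order

def job(name, steps):
    for i in range(steps):
        yield f"{name}{i}"

print(round_robin([job("print", 2), job("screen", 3)]))
```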
+
+*{Linux}* The name given to the core of the operating system started by Linus Torvalds in 1991. The word is now generally used to refer to an entire bundle of free software packages that work together. Red Hat Linux, for instance, is a large bundle of software including packages written by many other unrelated projects.
+
+*{Mozilla Public License}* A cousin of the Netscape Public License that was created to protect the public contributions to the source tree of the Mozilla project. Netscape cannot relicense modifications to code protected by the MPL, but it can relicense code covered by the NPL. See also Netscape Public License.
+
+*{NetBSD}* One of the original free distributions of BSD. The team focuses on making sure that the software works well on a wide variety of hardware platforms, including relatively rare ones like the Amiga. (www.netbsd.org)
+
+*{Netscape Public License}* A license created by Netscape when the company decided to release their browser as open source. The license is similar to the BSD License, but it provides special features to Netscape. They're allowed to take snapshots of the open source code and turn them back into a private, proprietary project again. Bruce Perens, one of the unpaid consultants who helped Netscape draft the license, says that the provision was included because Netscape had special contracts with companies to provide a proprietary tool. See also Mozilla Public License.
+
+*{OpenBSD}* One of the three major versions of BSD available. The development team, led by Theo de Raadt, aims to provide the best possible security by examining the source code in detail and looking for potential holes. (www.openbsd.org)
+
+*{open source}* A broad term used by the Open Source Initiative (www.opensource.org) to embrace software developed and released under the GNU General Public License, the BSD License, the Artistic License, the X Consortium license, and the Netscape License. It includes software licenses that put few restrictions on the redistribution of source code. The Open Source Initiative's definition was adapted from the Debian Free Software Guidelines. The OSI's definition includes 10 criteria, which range from insisting that the software and the source code must be freely redistributable to insisting that the license not discriminate.
+
+*{Open Source Initiative}* A group created by Eric Raymond, Sam Ockman, Bruce Perens, Larry Augustin, and more than a few others. The group checks licenses to see if they match their definition of open source. If the license fits, then it can wear the term "certified by the OSI."
+
+*{Symmetric Multi-Processing}* Much of the recent work in operating system design is focused on finding efficient ways to run multiple programs simultaneously on multiple CPU chips. This job is relatively straightforward if the different pieces of software run independently of each other. The complexity grows substantially if the CPUs must exchange information to coordinate their progress. The kernel must orchestrate the shuffle of information so that each CPU has enough information to continue its work with a minimum amount of waiting time. Finding a good way to accomplish this symmetric multi-processing (SMP) is important because many of the new machines appearing after 2000 may come with multiple processors.
+
+*{UNIX}* An operating system created at AT&T Bell Labs by Ken Thompson and Dennis Ritchie. The system was originally designed to support multiple users on a variety of different hardware platforms. Most programs written for the system accept ASCII text and spit out ASCII text, which makes it easy to chain them together. The original name was "unics," which was a pun on the then-popular system known as Multics.
+
+1~bibliography Bibliography
+
+*{Abelson, Reed.}* "Among U.S. Donations, Tons of Worthless Drugs." New York Times, June 29, 1999.
+
+*{Ananian, C. Scott.}* "A Linux Lament: As Red Hat Prepares to Go Public, One Linux Hacker's Dreams of IPO Glory Are Crushed by the Man." Salon magazine, July 30, 1999. <br />
+http://www.salon.com/tech/feature/1999/07/30/redhat_shares/index.html <br />
+"Questions Not to Ask on Linux-Kernel." May 1998. <br />
+http://lwn.net/980521/a/nonfaq.html
+
+*{Aragon, Lawrence, and Matthew A. De Bellis.}* "Our Lunch With Linus: (Almost) Everything You Need to Know About the World's Hottest Programmer." VAR Business, April 12, 1999.
+
+*{Betz, David, and Jon Edwards.}* "GNU's NOT UNIX." BYTE, July 1986.
+
+*{Brinkley, Joel.}* "Microsoft Witness Attacked for Contradictory Opinions." New York Times, January 15, 1999. <br />
+http://www.nytimes.com/library/1999/01/biztech/articles/15soft.html
+
+*{Bronson, Po.}* "Manager's Journal: Silicon Valley Searches for an Image." Wall Street Journal, June 8, 1998. <br />The Nudist on the Late Shift: And Other True Tales of Silicon Valley. New York: Random House, 1999.
+
+*{Brown, Zack.}* "The 'Linux' vs. 'GNU/Linux' Debate." Kernel Traffic, April 13, 1999. <br />
+http://www.kt.opensrc.org/kt19990408_13.html#editorial
+
+*{Caravita, Giuseppe.}* "Telecommunications, Technology, and Science." Il Sole 24 Ore, March 5, 1999. <br />
+http://www.ilsole24ore.it/24oreinformatica/speciale_3d.19990305/INFORMATICA/Informatica/A.html
+
+*{Chalmers, Rachel.}* "Challenges Ahead for the Linux Standards Base." LinuxWorld, April 1999. <br />
+http://www.linuxworld.com/linuxworld/lw-1999-04/lw-04-lsb.html
+
+*{Coates, James.}* "A Rebellious Reaction to the Linux Revolution." Chicago Tribune, April 25, 1999. <br />
+http://www.chicagotribune.com/business/printedition/article/0,1051,SA-Vo9904250051,00.html
+
+*{Cox, Alan.}* "Editorial." Freshmeat, July 18, 1999. <br />
+http://www.freshmeat.net/news/1998/07/18/900797536.html
+
+*{Cringely, Robert X.}* "Be Careful What You Wish For: Why Being Acquired by Microsoft Makes Hardly Anyone Happy in the Long Run." PBS Online, August 27, 1999. <br />
+http://www.pbs.org/cringely/pulpit/pulpit19990826.html
+
+*{D'Amico, Mary Lisbeth.}* "German Division of Microsoft Protests 'Where Do You Want to Go Tomorrow' Slogan: Linux Site Holds Contest for New Slogan While Case Is Pending." LinuxWorld, April 13, 1999. <br />
+http://www.linuxworld.com/linuxworld/lw-1999-04/lw-04-german.html
+
+*{Diamond, David.}* "Linus the Liberator." San Jose Mercury News. <br />
+http://www.mercurycenter.com/svtech/news/special/linus/story.html
+
+*{DiBona, Chris, Sam Ockman, and Mark Stone.}* Open Sources: Voices from the Open Source Revolution. San Francisco: O'Reilly, 1999.
+
+*{Freeman, Derek.}* Margaret Mead and Samoa: The Making and Unmaking of an Anthropological Myth. Cambridge, MA: Harvard University Press, 1988.
+
+*{Gilder, George.}* Wealth and Poverty. San Francisco, CA: Institute for Contemporary Studies, 1981.
+
+*{Gleick, James.}* "Control Freaks." New York Times, July 19, 1998. <br />"Broken Windows Theory." New York Times, March 21, 1999.
+
+*{"Interview with Linus Torvalds."}* FatBrain.com, May 1999. <br />
+http://www.kt.opensrc.org/interviews/ti19990528_fb.html
+
+*{Jelinek, Jakub.}* "Re: Mach64 Problems in UltraPenguin 1.1.9." Linux Weekly News, April 27, 1999. <br />
+http://www.lwn.net/1999/0429/a/up-dead.html
+
+*{Johnson, Richard B., and Chris Wedgwood.}* "Segfault in syslogd [problem shown]." April 1999. <br />
+http://www.kt.opensrc.org/kt19990415_14.html#8
+
+*{Joy, Bill.}* "Talk to Stanford EE 380 Students." November 1999.
+
+*{Kahn, David.}* The Codebreakers. New York: Macmillan, 1967.
+
+*{Kahney, Leander.}* "Open-Source Gurus Trade Jabs." Wired News, April 10, 1999. <br />
+http://www.wired.com/news/news/technology/story/19049.html
+<br />"Apple Lifts License Restrictions." Wired News, April 21, 1999. <br />
+http://www.wired.com/news/news/technology/story/19233.html
+
+*{Kidd, Eric.}* "Why You Might Want to Use the Library GPL for Your Next Library." Linux Gazette, March 1999. <br />
+http://www.linuxgazette.com/issue38/kidd.html
+
+*{Kohn, Alfie.}* "Studies Find Reward Often No Motivator; Creativity and Intrinsic Interest Diminish If Task Is Done for Gain." Boston Globe, January 19, 1987.
+
+*{Leonard, Andrew.}* "Open Season: Why an Industry of Cutthroat Competition Is Suddenly Deciding Good Karma Is Great Business." Wired News, May 1999.
+
+*{Linksvayer, Mike.}* "Choice of the GNU Generation." Meta Magazine. <br />
+http://gondwanaland.com/meta/history/interview.html
+
+*{"Linux Beat Windows NT Handily in an Oracle Performance Benchmark."}* Linux Weekly News, April 29, 1999. <br />
+http://rpmfind.net/veillard/oracle/
+
+*{Liston, Robert.}* The Pueblo Surrender: A Covert Action by the National Security Agency. New York: Evans, 1988.
+
+*{Little, Darnell.}* "Comdex Q&A: Linus Torvalds on the Battle Against Microsoft." Chicago Tribune, April 19, 1999. <br />
+http://chicagotribune.com/business/businessnews/ws/item/0,1267,2674627007-27361,00.html
+
+*{Lohr, Steve.}* "Tiny Software Maker Takes Aim at Microsoft in Court." New York Times, May 31, 1999.
+
+*{Mauss, Marcel.}* The Gift: The Form and Reason for Exchange in Archaic Societies, trans. W. D. Halls. New York: W. W. Norton & Company (U.S. reissue), 1950.
+
+*{McKusick, Marshall Kirk.}* "Twenty Years of Berkeley Unix." In Open Sources: Voices from the Open Source Revolution. San Francisco: O'Reilly, 1999.
+
+*{McKusick, Marshall Kirk, Keith Bostic, and Michael J. Karels, eds.}* The Design and Implementation of the 4.4BSD Operating System <br />(Unix and Open Systems Series). Reading, MA: Addison-Wesley, 1996.
+
+*{McMillan, Robert, and Nora Mikes.}* "After the 'Sweet Sixteen': Linus Torvalds's Take on the State of Linux." LinuxWorld, March 1999. <br />
+http://www.linuxworld.com/linuxworld/lw-1999-03/lw03-torvalds.html
+
+*{Metcalfe, Bob.}* "Linux's '60s Technology: Open-Sores Ideology Won't Beat W2K, but What Will?" June 19, 1999. <br />
+http://www.infoworld.com/articles/op/xml/990621opmetcalfe.xml
+
+*{Nolan, Chris.}* "Microsoft Antitrust: the Gassée Factor: U.S. Reportedly Looks into Obstacles for Be Operating System." San Jose Mercury News, February 11, 1999. <br />
+http://www.sjmercury.com/svtech/columns/talkischeap/docs/cn021199.html
+
+*{Oakes, Chris.}* "Netscape Browser Guru: We Failed." Wired News, April 2, 1999. <br />
+http://www.wired.com/news/news/technology/story/18926.html
+
+*{Ousterhout, John.}* "Free Software Needs Profit." Dr. Dobb's Journal website, 1999. <br />
+http://www.ddj.com/oped/1999/oust.htm
+
+*{Perens, Bruce, Wichert Akkerman, and Ian Jackson.}* "The Apple Public Source License--Our Concerns." March 1999. <br />
+http://perens.com/APSL.html/ <br />
+"The Open Source Definition." In Open Sources: Voices from the Open Source Revolution, ed. Chris DiBona, Sam Ockman, and Mark Stone, 171-85. San Francisco: O'Reilly, 1999.
+
+*{Picarille, Lisa, and Malcolm Maclachlan.}* "Apple Defends Open Source Initiative." March 24, 1999. <br />
+http://www.techweb.com/wire/story/TWB19990324S0027
+
+*{Raymond, Eric.}* The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. San Francisco: O'Reilly, 1999.
+
+*{Reilly, Patrick.}* "Nader's Microsoft Agenda: Progressive Nonprofit Plan for 'Free' Software." Capital Research Center, April 1, 1999. <br />
+http://www.capitalresearch.org/trends/ot-0499a.html
+
+*{Rubini, Alessandro.}* "Tour of the Linux Kernel Source." Linux Documentation Project.
+
+*{Rusling, David A.}* "The Linux Kernel." <br />
+http://metalab.unc.edu/mdw/LDP/tlk/tlk-title.html
+
+*{Schmalensee, Richard.}* "Direct Testimony in the Microsoft Anti-Trust Case of 1999." <br />
+http://www.courttv.com/trials/microsoft/legaldocs/ms_wit.html
+
+*{Schulman, Andrew.}* Unauthorized Windows 95. Foster City, CA: IDG Books, 1995.
+
+*{Searles, Doc.}* "It's an Industry." Linux Journal, May 21, 1999. <br />
+http://www.linuxresources.com/articles/conversations/001.html
+
+*{Slind-Flor, Victoria.}* "Linux May Alter IP Legal Landscape: Some Predict More Contract Work if Alternative to Windows Catches On." National Law Journal, March 12, 1999. <br />
+http://www.lawnewsnetwork.com/stories/mar/e030899q.html
+
+*{Stallman, Richard.}* "The GNU Manifesto." 1984. <br />
+http://www.gnu.org/gnu/manifesto.html <br />
+"Why Software Should Not Have Owners." 1994. <br />
+http://www.gnu.org/philosophy/why-free.html
+
+*{Thompson, Ken, and Dennis Ritchie.}* "The UNIX Time-Sharing System." Communications of the ACM, 1974.
+
+*{Thygeson, Gordon.}* Apple T-Shirts: A Yearbook of History at Apple Computer. Cupertino, CA: Pomo Publishing, 1998.
+
+*{Torvalds, Linus.}* "Linus Torvalds: Leader of the Revolution." Transcript of Chat with Linus Torvalds, creator of the Linux OS. ABCNews.com. <br />"Linux's History." July 31, 1992. <br />
+http://www.li.org/li/linuxhistory.shtml
+
+*{Valloppillil, Vinod.}* "Open Source Software: A (New?) Development Methodology." Microsoft, Redmond, WA, August 1998.
+
+*{Wayner, Peter.}* "If SB266 Wants Plaintext, Give Them Plaintext . . . ," Risks Digest, May 23, 1991. <br />
+http://catless.ncl.ac.uk/Risks/11.71.html#subj2 <br />"Should Hackers Spend Years in Prison?" Salon, June 9, 1999. <br />
+http://www.salon.com/tech/feature/1999/06/09/hacker_penalties/index.html
+<br />"Netscape to Release New Browser Engine to Developers." New York Times, December 7, 1999. <br />"Glory Among the Geeks." Salon, January 1999. <br />
+http://www.salon.com/21st/feature/1999/01/28feature.html
+
+*{Whitenger, Dave.}* "Words of a Maddog." Linux Today, April 19, 1999. <br />
+http://linuxtoday.com/stories/5118.html
+
+*{"Web and File Server Comparison: Microsoft Windows NT Server 4.0 and Red Hat Linux 5.2 Upgraded to the Linux 2.2.2 Kernel."}* Mindcraft, April 13, 1999. <br />
+http://www.mindcraft.com/whitepapers/nts4rhlinux.html
+
+*{Williams, Sam.}* "Linus Has Left the Building." Upside, May 5, 1999. <br />
+http://www.upside.com/Open_Season/
+
+*{Williams, Riley.}* "Linux Kernel Version History." <br />
+http://ps.cus.umist.ac.uk/~rhw/kernel.versions.html
+
+*{Zawinski, Jamie.}* "Resignation and Postmortem." <br />
+http://www.jwz.org/gruntle/nomo.html
+
+1~other.works Other works by Peter Wayner
+
+2~ Disappearing Cryptography, Information Hiding: Steganography & Watermarking -#
+
+Disappearing Cryptography, Information Hiding: Steganography & Watermarking, 2nd ed., by Peter Wayner. ISBN 1-55860-769-2, $44.95.
+
+To order, visit: http://www.wayner.org/books/discrypt2/
+
+Disappearing Cryptography, Second Edition describes how to take words, sounds, or images and hide them in digital data so they look like other words, sounds, or images. When used properly, this powerful technique makes it almost impossible to trace the author and the recipient of a message. Conversations can be submerged in the flow of information through the Internet so that no one can know if a conversation exists at all.
+
+This full revision of the best-selling first edition describes a number of different techniques to hide information. These include encryption, making data incomprehensible; steganography, embedding information into video, audio, or graphics files; watermarking, hiding data in the noise of image or sound files; mimicry, "dressing up" data and making it appear to be other data, and more.
+
+The second edition also includes an expanded discussion on hiding information with spread-spectrum algorithms, shuffling tricks, and synthetic worlds. Each chapter is divided into sections, first providing an introduction and high-level summary for those who want to understand the concepts without wading through technical explanations, and then presenting greater detail for those who want to write their own programs. To encourage exploration, the author's Web site
+www.wayner.org/books/discrypt2/ contains implementations for hiding information in lists, sentences, and images.
+
+"Disappearing Cryptography is a witty and entertaining look at the world of information hiding. Peter Wayner provides an intuitive perspective of the many techniques, applications, and research directions in the area of steganography. The sheer breadth of topics is outstanding and makes this book truly unique. A must read for those who would like to begin learning about information hiding." --Deepa Kundur, University of Toronto
+
+"An excellent introduction for private individuals, businesses, and governments who need to understand the complex technologies and their effects on protecting privacy, intellectual property and other interests." - David Banisar, Research Fellow, Harvard Information Infrastructure Project, & Deputy Director, Privacy International.
+
+2~ Translucent Databases -#
+
+Translucent Databases, a new book by Peter Wayner, comes with more than two dozen examples in Java and SQL code. The book comes with a royalty-free license to use the code for your own projects in any way you wish.
+
+_* Do you have personal information in your database?
+
+_* Do you keep files on your customers, your employees, or anyone else?
+
+_* Do you need to worry about European laws restricting the information you keep?
+
+_* Do you keep copies of credit card numbers, social security numbers, or other information that might be useful to identity thieves or insurance fraudsters?
+
+_* Do you deal with medical records or personal secrets?
+
+Most database administrators spend some of each day worrying about the information they keep. Some spend all of their time. Caring for information can be a dangerous responsibility.
+
+This new book, Translucent Databases, describes a different attitude toward protecting the information. Most databases provide elaborate control mechanisms for letting the right people in to see the right records. These tools are well designed and thoroughly tested, but they can only provide so much support. If someone breaks into the operating system itself, all of the data on the hard disk is unveiled. If a clerk, a supervisor, or a system administrator decides to turn traitor, there's nothing anyone can do.
+
+Translucent databases provide better, deeper protection by scrambling the data with encryption algorithms. The solutions use the minimal amount of encryption to ensure that the database is still functional. In the best applications, the personal and sensitive information is protected but the database still delivers the information.
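The core trick can be sketched in a few lines. This is a minimal illustration of the idea, not code from the book: a salted one-way hash stands in for the book's wider range of encryption techniques, and the table and column names are hypothetical.

```python
import hashlib
import sqlite3

def opaque(value: str, salt: str = "demo-salt") -> str:
    # One-way digest: equal inputs match, but the stored value
    # cannot be read back directly from the table.
    return hashlib.sha256((salt + value).encode()).hexdigest()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name_digest TEXT, visit_count INTEGER)")
conn.execute("INSERT INTO patients VALUES (?, ?)", (opaque("Alice Smith"), 3))

# A clerk who already knows the patient's name can still look up the record...
row = conn.execute(
    "SELECT visit_count FROM patients WHERE name_digest = ?",
    (opaque("Alice Smith"),),
).fetchone()

# ...but a stolen copy of the table exposes only digests, not names.
```

The database stays functional for its legitimate queries while the sensitive column is scrambled, which is exactly the trade-off the book explores.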
+
+Order today at
+http://www.wayner.org/books/td/
diff --git a/data/sisu_markup_samples/non-free/the_cathedral_and_the_bazaar.eric_s_raymond.sst b/data/sisu_markup_samples/non-free/the_cathedral_and_the_bazaar.eric_s_raymond.sst
new file mode 100644
index 0000000..d6829df
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/the_cathedral_and_the_bazaar.eric_s_raymond.sst
@@ -0,0 +1,592 @@
+% SiSU 0.38
+
+@title: The Cathedral and the Bazaar
+
+@creator: Eric S. Raymond
+
+@type: Book
+
+@rights: Copyright © 2000 Eric S. Raymond. Permission is granted to copy, distribute and/or modify this document under the terms of the Open Publication License, version 2.0.
+
+@date.created: 1997-05-21
+
+@date.issued: 1997-05-21
+
+@date.available: 1997-05-21
+
+@date.modified: 2002-08-02
+
+@date: 2002-08-02
+
+@links: {The Cathedral and the Bazaar @ SiSU }http://www.jus.uio.no/sisu/the_cathedral_and_the_bazaar.eric_s_raymond
+{The Cathedral and the Bazaar, Source }http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/
+{@ Wikipedia}http://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar
+{The Wealth of Networks, Yochai Benkler @ SiSU}http://www.jus.uio.no/sisu/the_wealth_of_networks.yochai_benkler
+{Free Culture, Lawrence Lessig @ SiSU}http://www.jus.uio.no/sisu/free_culture.lawrence_lessig
+{Free as in Freedom (on Richard Stallman), Sam Williams @ SiSU}http://www.jus.uio.no/sisu/free_as_in_freedom.richard_stallman_crusade_for_free_software.sam_williams
+{Free For All, Peter Wayner @ SiSU}http://www.jus.uio.no/sisu/free_for_all.peter_wayner
+{CatB @ Amazon.com}http://www.amazon.com/Wealth-Networks-Production-Transforms-Markets/dp/0596001088/
+{CatB @ Barnes & Noble}http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?isbn=0596001088
+
+@level: new=:C; break=1
+
+@skin: skin_sisu
+
+@abstract: I anatomize a successful open-source project, fetchmail, that was run as a deliberate test of the surprising theories about software engineering suggested by the history of Linux. I discuss these theories in terms of two fundamentally different development styles, the "cathedral" model of most of the commercial world versus the "bazaar" model of the Linux world. I show that these models derive from opposing assumptions about the nature of the software-debugging task. I then make a sustained argument from the Linux experience for the proposition that "Given enough eyeballs, all bugs are shallow", suggest productive analogies with other self-correcting systems of selfish agents, and conclude with some exploration of the implications of this insight for the future of software.
+
+:A~ The Cathedral and the Bazaar
+
+:B~ Eric Steven Raymond
+
+1~ The Cathedral and the Bazaar
+
+Linux is subversive. Who would have thought even five years ago (1991) that a world-class operating system could coalesce as if by magic out of part-time hacking by several thousand developers scattered all over the planet, connected only by the tenuous strands of the Internet?
+
+Certainly not I. By the time Linux swam onto my radar screen in early 1993, I had already been involved in Unix and open-source development for ten years. I was one of the first GNU contributors in the mid-1980s. I had released a good deal of open-source software onto the net, developing or co-developing several programs (nethack, Emacs's VC and GUD modes, xlife, and others) that are still in wide use today. I thought I knew how it was done.
+
+Linux overturned much of what I thought I knew. I had been preaching the Unix gospel of small tools, rapid prototyping and evolutionary programming for years. But I also believed there was a certain critical complexity above which a more centralized, a priori approach was required. I believed that the most important software (operating systems and really large tools like the Emacs programming editor) needed to be built like cathedrals, carefully crafted by individual wizards or small bands of mages working in splendid isolation, with no beta to be released before its time.
+
+Linus Torvalds's style of development—release early and often, delegate everything you can, be open to the point of promiscuity—came as a surprise. No quiet, reverent cathedral-building here—rather, the Linux community seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who'd take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.
+
+The fact that this bazaar style seemed to work, and work well, came as a distinct shock. As I learned my way around, I worked hard not just at individual projects, but also at trying to understand why the Linux world not only didn't fly apart in confusion but seemed to go from strength to strength at a speed barely imaginable to cathedral-builders.
+
+By mid-1996 I thought I was beginning to understand. Chance handed me a perfect way to test my theory, in the form of an open-source project that I could consciously try to run in the bazaar style. So I did—and it was a significant success.
+
+This is the story of that project. I'll use it to propose some aphorisms about effective open-source development. Not all of these are things I first learned in the Linux world, but we'll see how the Linux world gives them particular point. If I'm correct, they'll help you understand exactly what it is that makes the Linux community such a fountain of good software—and, perhaps, they will help you become more productive yourself.
+
+1~ The Mail Must Get Through
+
+Since 1993 I'd been running the technical side of a small free-access Internet service provider called Chester County InterLink (CCIL) in West Chester, Pennsylvania. I co-founded CCIL and wrote our unique multiuser bulletin-board software—you can check it out by telnetting to locke.ccil.org. Today it supports almost three thousand users on thirty lines. The job allowed me 24-hour-a-day access to the net through CCIL's 56K line—in fact, the job practically demanded it!
+
+I had gotten quite used to instant Internet email. I found having to periodically telnet over to locke to check my mail annoying. What I wanted was for my mail to be delivered on snark (my home system) so that I would be notified when it arrived and could handle it using all my local tools.
+
+The Internet's native mail forwarding protocol, SMTP (Simple Mail Transfer Protocol), wouldn't suit, because it works best when machines are connected full-time, while my personal machine isn't always on the Internet, and doesn't have a static IP address. What I needed was a program that would reach out over my intermittent dialup connection and pull across my mail to be delivered locally. I knew such things existed, and that most of them used a simple application protocol called POP (Post Office Protocol). POP is now widely supported by most common mail clients, but at the time, it wasn't built in to the mail reader I was using.
+
+I needed a POP3 client. So I went out on the Internet and found one. Actually, I found three or four. I used one of them for a while, but it was missing what seemed an obvious feature, the ability to hack the addresses on fetched mail so replies would work properly.
+
+The problem was this: suppose someone named `joe' on locke sent me mail. If I fetched the mail to snark and then tried to reply to it, my mailer would cheerfully try to ship it to a nonexistent `joe' on snark. Hand-editing reply addresses to tack on <@ccil.org> quickly got to be a serious pain.
+
+This was clearly something the computer ought to be doing for me. But none of the existing POP clients knew how! And this brings us to the first lesson:
+
+_1 1. Every good work of software starts by scratching a developer's personal itch.
+
+Perhaps this should have been obvious (it's long been proverbial that "Necessity is the mother of invention") but too often software developers spend their days grinding away for pay at programs they neither need nor love. But not in the Linux world—which may explain why the average quality of software originated in the Linux community is so high.
+
+So, did I immediately launch into a furious whirl of coding up a brand-new POP3 client to compete with the existing ones? Not on your life! I looked carefully at the POP utilities I had in hand, asking myself "Which one is closest to what I want?" Because:
+
+_1 2. Good programmers know what to write. Great ones know what to rewrite (and reuse).
+
+While I don't claim to be a great programmer, I try to imitate one. An important trait of the great ones is constructive laziness. They know that you get an A not for effort but for results, and that it's almost always easier to start from a good partial solution than from nothing at all.
+
+Linus Torvalds, for example, didn't actually try to write Linux from scratch. Instead, he started by reusing code and ideas from Minix, a tiny Unix-like operating system for PC clones. Eventually all the Minix code went away or was completely rewritten—but while it was there, it provided scaffolding for the infant that would eventually become Linux.
+
+In the same spirit, I went looking for an existing POP utility that was reasonably well coded, to use as a development base.
+
+The source-sharing tradition of the Unix world has always been friendly to code reuse (this is why the GNU project chose Unix as a base OS, in spite of serious reservations about the OS itself). The Linux world has taken this tradition nearly to its technological limit; it has terabytes of open sources generally available. So spending time looking for someone else's almost-good-enough is more likely to give you good results in the Linux world than anywhere else.
+
+And it did for me. With those I'd found earlier, my second search made up a total of nine candidates—fetchpop, PopTart, get-mail, gwpop, pimp, pop-perl, popc, popmail and upop. The one I first settled on was `fetchpop' by Seung-Hong Oh. I put my header-rewrite feature in it, and made various other improvements which the author accepted into his 1.9 release.
+
+A few weeks later, though, I stumbled across the code for popclient by Carl Harris, and found I had a problem. Though fetchpop had some good original ideas in it (such as its background-daemon mode), it could only handle POP3 and was rather amateurishly coded (Seung-Hong was at that time a bright but inexperienced programmer, and both traits showed). Carl's code was better, quite professional and solid, but his program lacked several important and rather tricky-to-implement fetchpop features (including those I'd coded myself).
+
+Stay or switch? If I switched, I'd be throwing away the coding I'd already done in exchange for a better development base.
+
+A practical motive to switch was the presence of multiple-protocol support. POP3 is the most commonly used of the post-office server protocols, but not the only one. Fetchpop and the other competition didn't do POP2, RPOP, or APOP, and I was already having vague thoughts of perhaps adding IMAP (Internet Message Access Protocol, the most recently designed and most powerful post-office protocol) just for fun.
+
+But I had a more theoretical reason to think switching might be as good an idea as well, something I learned long before Linux.
+
+_1 3. "Plan to throw one away; you will, anyhow." (Fred Brooks, The Mythical Man-Month, Chapter 11)
+
+Or, to put it another way, you often don't really understand the problem until after the first time you implement a solution. The second time, maybe you know enough to do it right. So if you want to get it right, be ready to start over at least once [JB].
+
+Well (I told myself) the changes to fetchpop had been my first try. So I switched.
+
+After I sent my first set of popclient patches to Carl Harris on 25 June 1996, I found out that he had basically lost interest in popclient some time before. The code was a bit dusty, with minor bugs hanging out. I had many changes to make, and we quickly agreed that the logical thing for me to do was take over the program.
+
+Without my actually noticing, the project had escalated. No longer was I just contemplating minor patches to an existing POP client. I took on maintaining an entire one, and there were ideas bubbling in my head that I knew would probably lead to major changes.
+
+In a software culture that encourages code-sharing, this is a natural way for a project to evolve. I was acting out this principle:
+
+_1 4. If you have the right attitude, interesting problems will find you.
+
+But Carl Harris's attitude was even more important. He understood that
+
+_1 5. When you lose interest in a program, your last duty to it is to hand it off to a competent successor.
+
+Without ever having to discuss it, Carl and I knew we had a common goal of having the best solution out there. The only question for either of us was whether I could establish that I was a safe pair of hands. Once I did that, he acted with grace and dispatch. I hope I will do as well when it comes my turn.
+
+1~ The Importance of Having Users
+
+And so I inherited popclient. Just as importantly, I inherited popclient's user base. Users are wonderful things to have, and not just because they demonstrate that you're serving a need, that you've done something right. Properly cultivated, they can become co-developers.
+
+Another strength of the Unix tradition, one that Linux pushes to a happy extreme, is that a lot of users are hackers too. Because source code is available, they can be effective hackers. This can be tremendously useful for shortening debugging time. Given a bit of encouragement, your users will diagnose problems, suggest fixes, and help improve the code far more quickly than you could unaided.
+
+_1 6. Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.
+
+The power of this effect is easy to underestimate. In fact, pretty well all of us in the open-source world drastically underestimated how well it would scale up with number of users and against system complexity, until Linus Torvalds showed us differently.
+
+In fact, I think Linus's cleverest and most consequential hack was not the construction of the Linux kernel itself, but rather his invention of the Linux development model. When I expressed this opinion in his presence once, he smiled and quietly repeated something he has often said: "I'm basically a very lazy person who likes to get credit for things other people actually do." Lazy like a fox. Or, as Robert Heinlein famously wrote of one of his characters, too lazy to fail.
+
+In retrospect, one precedent for the methods and success of Linux can be seen in the development of the GNU Emacs Lisp library and Lisp code archives. In contrast to the cathedral-building style of the Emacs C core and most other GNU tools, the evolution of the Lisp code pool was fluid and very user-driven. Ideas and prototype modes were often rewritten three or four times before reaching a stable final form. And loosely-coupled collaborations enabled by the Internet, a la Linux, were frequent.
+
+Indeed, my own most successful single hack previous to fetchmail was probably Emacs VC (version control) mode, a Linux-like collaboration by email with three other people, only one of whom (Richard Stallman, the author of Emacs and founder of the Free Software Foundation) I have met to this day. It was a front-end for SCCS, RCS and later CVS from within Emacs that offered "one-touch" version control operations. It evolved from a tiny, crude sccs.el mode somebody else had written. And the development of VC succeeded because, unlike Emacs itself, Emacs Lisp code could go through release/test/improve generations very quickly.
+
+The Emacs story is not unique. There have been other software products with a two-level architecture and a two-tier user community that combined a cathedral-mode core and a bazaar-mode toolbox. One such is MATLAB, a commercial data-analysis and visualization tool. Users of MATLAB and other products with a similar structure invariably report that the action, the ferment, the innovation mostly takes place in the open part of the tool where a large and varied community can tinker with it.
+
+1~ Release Early, Release Often
+
+Early and frequent releases are a critical part of the Linux development model. Most developers (including me) used to believe this was bad policy for larger than trivial projects, because early versions are almost by definition buggy versions and you don't want to wear out the patience of your users.
+
+This belief reinforced the general commitment to a cathedral-building style of development. If the overriding objective was for users to see as few bugs as possible, why then you'd only release a version every six months (or less often), and work like a dog on debugging between releases. The Emacs C core was developed this way. The Lisp library, in effect, was not—because there were active Lisp archives outside the FSF's control, where you could go to find new and development code versions independently of Emacs's release cycle [QR].
+
+The most important of these, the Ohio State Emacs Lisp archive, anticipated the spirit and many of the features of today's big Linux archives. But few of us really thought very hard about what we were doing, or about what the very existence of that archive suggested about problems in the FSF's cathedral-building development model. I made one serious attempt around 1992 to get a lot of the Ohio code formally merged into the official Emacs Lisp library. I ran into political trouble and was largely unsuccessful.
+
+But by a year later, as Linux became widely visible, it was clear that something different and much healthier was going on there. Linus's open development policy was the very opposite of cathedral-building. Linux's Internet archives were burgeoning, multiple distributions were being floated. And all of this was driven by an unheard-of frequency of core system releases.
+
+Linus was treating his users as co-developers in the most effective possible way:
+
+_1 7. Release early. Release often. And listen to your customers.
+
+Linus's innovation wasn't so much in doing quick-turnaround releases incorporating lots of user feedback (something like this had been Unix-world tradition for a long time), but in scaling it up to a level of intensity that matched the complexity of what he was developing. In those early times (around 1991) it wasn't unknown for him to release a new kernel more than once a day! Because he cultivated his base of co-developers and leveraged the Internet for collaboration harder than anyone else, this worked.
+
+But how did it work? And was it something I could duplicate, or did it rely on some unique genius of Linus Torvalds?
+
+I didn't think so. Granted, Linus is a damn fine hacker. How many of us could engineer an entire production-quality operating system kernel from scratch? But Linux didn't represent any awesome conceptual leap forward. Linus is not (or at least, not yet) an innovative genius of design in the way that, say, Richard Stallman or James Gosling (of NeWS and Java) are. Rather, Linus seems to me to be a genius of engineering and implementation, with a sixth sense for avoiding bugs and development dead-ends and a true knack for finding the minimum-effort path from point A to point B. Indeed, the whole design of Linux breathes this quality and mirrors Linus's essentially conservative and simplifying design approach.
+
+So, if rapid releases and leveraging the Internet medium to the hilt were not accidents but integral parts of Linus's engineering-genius insight into the minimum-effort path, what was he maximizing? What was he cranking out of the machinery?
+
+Put that way, the question answers itself. Linus was keeping his hacker/users constantly stimulated and rewarded—stimulated by the prospect of having an ego-satisfying piece of the action, rewarded by the sight of constant (even daily) improvement in their work.
+
+Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:
+
+_1 8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
+
+Or, less formally, "Given enough eyeballs, all bugs are shallow." I dub this: "Linus's Law".
+
+My original formulation was that every problem "will be transparent to somebody". Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. "Somebody finds the problem," he says, "and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge." That correction is important; we'll see how in the next section, when we examine the practice of debugging in more detail. But the key point is that both parts of the process (finding and fixing) tend to happen rapidly.
+
+In Linus's Law, I think, lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect.
+
+In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena—or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.
+
+And that's it. That's enough. If "Linus's Law" is false, then any system as complex as the Linux kernel, being hacked over by as many hands as that kernel was, should at some point have collapsed under the weight of unforeseen bad interactions and undiscovered "deep" bugs. If it's true, on the other hand, it is sufficient to explain Linux's relative lack of bugginess and its continuous uptimes spanning months or even years.
+
+Maybe it shouldn't have been such a surprise, at that. Sociologists years ago discovered that the averaged opinion of a mass of equally expert (or equally ignorant) observers is quite a bit more reliable a predictor than the opinion of a single randomly-chosen one of the observers. They called this the Delphi effect. It appears that what Linus has shown is that this applies even to debugging an operating system—that the Delphi effect can tame development complexity even at the complexity level of an OS kernel. [CV]
+
+One special feature of the Linux situation that clearly helps along the Delphi effect is the fact that the contributors for any given project are self-selected. An early respondent pointed out that contributions are received not from a random sample, but from people who are interested enough to use the software, learn about how it works, attempt to find solutions to problems they encounter, and actually produce an apparently reasonable fix. Anyone who passes all these filters is highly likely to have something useful to contribute.
+
+Linus's Law can be rephrased as "Debugging is parallelizable". Although debugging requires debuggers to communicate with some coordinating developer, it doesn't require significant coordination between debuggers. Thus it doesn't fall prey to the same quadratic complexity and management costs that make adding developers problematic.
+
+In practice, the theoretical loss of efficiency due to duplication of work by debuggers almost never seems to be an issue in the Linux world. One effect of a "release early and often" policy is to minimize such duplication by propagating fed-back fixes quickly [JH].
+
+Brooks (the author of The Mythical Man-Month) even made an off-hand observation related to this: "The total cost of maintaining a widely used program is typically 40 percent or more of the cost of developing it. Surprisingly this cost is strongly affected by the number of users. More users find more bugs." [emphasis added].
+
+More users find more bugs because adding more users adds more different ways of stressing the program. This effect is amplified when the users are co-developers. Each one approaches the task of bug characterization with a slightly different perceptual set and analytical toolkit, a different angle on the problem. The "Delphi effect" seems to work precisely because of this variation. In the specific context of debugging, the variation also tends to reduce duplication of effort.
+
+So adding more beta-testers may not reduce the complexity of the current "deepest" bug from the developer's point of view, but it increases the probability that someone's toolkit will be matched to the problem in such a way that the bug is shallow to that person.
+
+Linus coppers his bets, too. In case there are serious bugs, Linux kernel versions are numbered in such a way that potential users can make a choice either to run the last version designated "stable" or to ride the cutting edge and risk bugs in order to get new features. This tactic is not yet systematically imitated by most Linux hackers, but perhaps it should be; the fact that either choice is available makes both more attractive. [HBS]
+
+1~ How Many Eyeballs Tame Complexity
+
+It's one thing to observe in the large that the bazaar style greatly accelerates debugging and code evolution. It's another to understand exactly how and why it does so at the micro-level of day-to-day developer and tester behavior. In this section (written three years after the original paper, using insights by developers who read it and re-examined their own behavior) we'll take a hard look at the actual mechanisms. Non-technically inclined readers can safely skip to the next section.
+
+One key to understanding is to realize exactly why it is that the kind of bug report non–source-aware users normally turn in tends not to be very useful. Non–source-aware users tend to report only surface symptoms; they take their environment for granted, so they (a) omit critical background data, and (b) seldom include a reliable recipe for reproducing the bug.
+
+The underlying problem here is a mismatch between the tester's and the developer's mental models of the program; the tester, on the outside looking in, and the developer on the inside looking out. In closed-source development they're both stuck in these roles, and tend to talk past each other and find each other deeply frustrating.
+
+Open-source development breaks this bind, making it far easier for tester and developer to develop a shared representation grounded in the actual source code and to communicate effectively about it. Practically, there is a huge difference in leverage for the developer between the kind of bug report that just reports externally-visible symptoms and the kind that hooks directly to the developer's source-code–based mental representation of the program.
+
+Most bugs, most of the time, are easily nailed given even an incomplete but suggestive characterization of their error conditions at source-code level. When someone among your beta-testers can point out, "there's a boundary problem in line nnn", or even just "under conditions X, Y, and Z, this variable rolls over", a quick look at the offending code often suffices to pin down the exact mode of failure and generate a fix.
+
+Thus, source-code awareness by both parties greatly enhances both good communication and the synergy between what a beta-tester reports and what the core developer(s) know. In turn, this means that the core developers' time tends to be well conserved, even with many collaborators.
+
+Another characteristic of the open-source method that conserves developer time is the communication structure of typical open-source projects. Above I used the term "core developer"; this reflects a distinction between the project core (typically quite small; a single core developer is common, and one to three is typical) and the project halo of beta-testers and available contributors (which often numbers in the hundreds).
+
+The fundamental problem that traditional software-development organization addresses is Brooks's Law: "Adding more programmers to a late project makes it later." More generally, Brooks's Law predicts that the complexity and communication costs of a project rise with the square of the number of developers, while work done only rises linearly.
+
+Brooks's Law is founded on experience that bugs tend strongly to cluster at the interfaces between code written by different people, and that communications/coordination overhead on a project tends to rise with the number of interfaces between human beings. Thus, problems scale with the number of communications paths between developers, which scales as the square of the number of developers (more precisely, according to the formula N*(N - 1)/2 where N is the number of developers).
+
+The Brooks's Law analysis (and the resulting fear of large numbers in development groups) rests on a hidden assumption: that the communications structure of the project is necessarily a complete graph, that everybody talks to everybody else. But on open-source projects, the halo developers work on what are in effect separable parallel subtasks and interact with each other very little; code changes and bug reports stream through the core group, and only within that small core group do we pay the full Brooksian overhead. [SU]
+
+There are still more reasons that source-code–level bug reporting tends to be very efficient. They center around the fact that a single error can often have multiple possible symptoms, manifesting differently depending on details of the user's usage pattern and environment. Such errors tend to be exactly the sort of complex and subtle bugs (such as dynamic-memory-management errors or nondeterministic interrupt-window artifacts) that are hardest to reproduce at will or to pin down by static analysis, and which do the most to create long-term problems in software.
+
+A tester who sends in a tentative source-code–level characterization of such a multi-symptom bug (e.g. "It looks to me like there's a window in the signal handling near line 1250" or "Where are you zeroing that buffer?") may give a developer, otherwise too close to the code to see it, the critical clue to a half-dozen disparate symptoms. In cases like this, it may be hard or even impossible to know which externally-visible misbehaviour was caused by precisely which bug—but with frequent releases, it's unnecessary to know. Other collaborators will be likely to find out quickly whether their bug has been fixed or not. In many cases, source-level bug reports will cause misbehaviours to drop out without ever having been attributed to any specific fix.
+
+Complex multi-symptom errors also tend to have multiple trace paths from surface symptoms back to the actual bug. Which of the trace paths a given developer or tester can chase may depend on subtleties of that person's environment, and may well change in a not obviously deterministic way over time. In effect, each developer and tester samples a semi-random set of the program's state space when looking for the etiology of a symptom. The more subtle and complex the bug, the less likely that skill will be able to guarantee the relevance of that sample.
+
+For simple and easily reproducible bugs, then, the accent will be on the "semi" rather than the "random"; debugging skill and intimacy with the code and its architecture will matter a lot. But for complex bugs, the accent will be on the "random". Under these circumstances many people running traces will be much more effective than a few people running traces sequentially—even if the few have a much higher average skill level.
+
+This effect will be greatly amplified if the difficulty of following trace paths from different surface symptoms back to a bug varies significantly in a way that can't be predicted by looking at the symptoms. A single developer sampling those paths sequentially will be as likely to pick a difficult trace path on the first try as an easy one. On the other hand, suppose many people are trying trace paths in parallel while doing rapid releases. Then it is likely one of them will find the easiest path immediately, and nail the bug in a much shorter time. The project maintainer will see that, ship a new release, and the other people running traces on the same bug will be able to stop before having spent too much time on their more difficult traces [RJ].
+
+1~ When Is a Rose Not a Rose?
+
+Having studied Linus's behavior and formed a theory about why it was successful, I made a conscious decision to test this theory on my new (admittedly much less complex and ambitious) project.
+
+But the first thing I did was reorganize and simplify popclient a lot. Carl Harris's implementation was very sound, but exhibited a kind of unnecessary complexity common to many C programmers. He treated the code as central and the data structures as support for the code. As a result, the code was beautiful but the data structure design ad-hoc and rather ugly (at least by the high standards of this veteran LISP hacker).
+
+I had another purpose for rewriting besides improving the code and the data structure design, however. That was to evolve it into something I understood completely. It's no fun to be responsible for fixing bugs in a program you don't understand.
+
+For the first month or so, then, I was simply following out the implications of Carl's basic design. The first serious change I made was to add IMAP support. I did this by reorganizing the protocol machines into a generic driver and three method tables (for POP2, POP3, and IMAP). This and the previous changes illustrate a general principle that's good for programmers to keep in mind, especially in languages like C that don't naturally do dynamic typing:
+
+_1 9. Smart data structures and dumb code works a lot better than the other way around.
+
+Brooks, Chapter 9: "Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious." Allowing for thirty years of terminological/cultural shift, it's the same point.
+
+At this point (early September 1996, about six weeks from zero) I started thinking that a name change might be in order—after all, it wasn't just a POP client any more. But I hesitated, because there was as yet nothing genuinely new in the design. My version of popclient had yet to develop an identity of its own.
+
+That changed, radically, when popclient learned how to forward fetched mail to the SMTP port. I'll get to that in a moment. But first: I said earlier that I'd decided to use this project to test my theory about what Linus Torvalds had done right. How (you may well ask) did I do that? In these ways:
+
+_* I released early and often (almost never less often than every ten days; during periods of intense development, once a day).
+
+_* I grew my beta list by adding to it everyone who contacted me about fetchmail.
+
+_* I sent chatty announcements to the beta list whenever I released, encouraging people to participate.
+
+_* And I listened to my beta-testers, polling them about design decisions and stroking them whenever they sent in patches and feedback.
+
+The payoff from these simple measures was immediate. From the beginning of the project, I got bug reports of a quality most developers would kill for, often with good fixes attached. I got thoughtful criticism, I got fan mail, I got intelligent feature suggestions. Which leads to:
+
+_1 10. If you treat your beta-testers as if they're your most valuable resource, they will respond by becoming your most valuable resource.
+
+One interesting measure of fetchmail's success is the sheer size of the project beta list, fetchmail-friends. At the time of latest revision of this paper (November 2000) it has 287 members and is adding two or three a week.
+
+Actually, when I revised in late May 1997 I found the list was beginning to lose members from its high of close to 300 for an interesting reason. Several people have asked me to unsubscribe them because fetchmail is working so well for them that they no longer need to see the list traffic! Perhaps this is part of the normal life-cycle of a mature bazaar-style project.
+
+1~ Popclient becomes Fetchmail
+
+The real turning point in the project was when Harry Hochheiser sent me his scratch code for forwarding mail to the client machine's SMTP port. I realized almost immediately that a reliable implementation of this feature would make all the other mail delivery modes next to obsolete.
+
+For many weeks I had been tweaking fetchmail rather incrementally while feeling like the interface design was serviceable but grubby—inelegant and with too many exiguous options hanging out all over. The options to dump fetched mail to a mailbox file or standard output particularly bothered me, but I couldn't figure out why.
+
+(If you don't care about the technicalia of Internet mail, the next two paragraphs can be safely skipped.)
+
+What I saw when I thought about SMTP forwarding was that popclient had been trying to do too many things. It had been designed to be both a mail transport agent (MTA) and a mail delivery agent (MDA). With SMTP forwarding, it could get out of the MDA business and be a pure MTA, handing off mail to other programs for local delivery just as sendmail does.
+
+Why mess with all the complexity of configuring a mail delivery agent or setting up lock-and-append on a mailbox when port 25 is almost guaranteed to be there on any platform with TCP/IP support in the first place? Especially when this means retrieved mail is guaranteed to look like normal sender-initiated SMTP mail, which is really what we want anyway.
+
+(Back to a higher level....)
+
+Even if you didn't follow the preceding technical jargon, there are several important lessons here. First, this SMTP-forwarding concept was the biggest single payoff I got from consciously trying to emulate Linus's methods. A user gave me this terrific idea—all I had to do was understand the implications.
+
+_1 11. The next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better.
+
+Interestingly enough, you will quickly find that if you are completely and self-deprecatingly truthful about how much you owe other people, the world at large will treat you as though you did every bit of the invention yourself and are just being becomingly modest about your innate genius. We can all see how well this worked for Linus!
+
+(When I gave my talk at the first Perl Conference in August 1997, hacker extraordinaire Larry Wall was in the front row. As I got to the last line above he called out, religious-revival style, "Tell it, tell it, brother!". The whole audience laughed, because they knew this had worked for the inventor of Perl, too.)
+
+After a very few weeks of running the project in the same spirit, I began to get similar praise not just from my users but from other people to whom the word leaked out. I stashed away some of that email; I'll look at it again sometime if I ever start wondering whether my life has been worthwhile :-).
+
+But there are two more fundamental, non-political lessons here that are general to all kinds of design.
+
+_1 12. Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong.
+
+I had been trying to solve the wrong problem by continuing to develop popclient as a combined MTA/MDA with all kinds of funky local delivery modes. Fetchmail's design needed to be rethought from the ground up as a pure MTA, a part of the normal SMTP-speaking Internet mail path.
+
+When you hit a wall in development—when you find yourself hard put to think past the next patch—it's often time to ask not whether you've got the right answer, but whether you're asking the right question. Perhaps the problem needs to be reframed.
+
+Well, I had reframed my problem. Clearly, the right thing to do was (1) hack SMTP forwarding support into the generic driver, (2) make it the default mode, and (3) eventually throw out all the other delivery modes, especially the deliver-to-file and deliver-to-standard-output options.
+
+I hesitated over step 3 for some time, fearing to upset long-time popclient users dependent on the alternate delivery mechanisms. In theory, they could immediately switch to .forward files or their non-sendmail equivalents to get the same effects. In practice the transition might have been messy.
+
+But when I did it, the benefits proved huge. The cruftiest parts of the driver code vanished. Configuration got radically simpler—no more grovelling around for the system MDA and user's mailbox, no more worries about whether the underlying OS supports file locking.
+
+Also, the only way to lose mail vanished. If you specified delivery to a file and the disk got full, your mail got lost. This can't happen with SMTP forwarding because your SMTP listener won't return OK unless the message can be delivered or at least spooled for later delivery.
+
+Also, performance improved (though not so you'd notice it in a single run). Another not insignificant benefit of this change was that the manual page got a lot simpler.
+
+Later, I had to bring delivery via a user-specified local MDA back in order to allow handling of some obscure situations involving dynamic SLIP. But I found a much simpler way to do it.
+
+The moral? Don't hesitate to throw away superannuated features when you can do it without loss of effectiveness. Antoine de Saint-Exupéry (who was an aviator and aircraft designer when he wasn't authoring classic children's books) said:
+
+_1 13. "Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away."
+
+When your code is getting both better and simpler, that is when you know it's right. And in the process, the fetchmail design acquired an identity of its own, different from the ancestral popclient.
+
+It was time for the name change. The new design looked much more like a dual of sendmail than the old popclient had; both are MTAs, but where sendmail pushes then delivers, the new popclient pulls then delivers. So, two months off the blocks, I renamed it fetchmail.
+
+There is a more general lesson in this story about how SMTP delivery came to fetchmail. It is not only debugging that is parallelizable; development and (to a perhaps surprising extent) exploration of design space is, too. When your development mode is rapidly iterative, development and enhancement may become special cases of debugging—fixing `bugs of omission' in the original capabilities or concept of the software.
+
+Even at a higher level of design, it can be very valuable to have lots of co-developers random-walking through the design space near your product. Consider the way a puddle of water finds a drain, or better yet how ants find food: exploration essentially by diffusion, followed by exploitation mediated by a scalable communication mechanism. This works very well; as with Harry Hochheiser and me, one of your outriders may well find a huge win nearby that you were just a little too close-focused to see.
+
+1~ Fetchmail Grows Up
+
+There I was with a neat and innovative design, code that I knew worked well because I used it every day, and a burgeoning beta list. It gradually dawned on me that I was no longer engaged in a trivial personal hack that might happen to be useful to a few other people. I had my hands on a program that every hacker with a Unix box and a SLIP/PPP mail connection really needs.
+
+With the SMTP forwarding feature, it pulled far enough in front of the competition to potentially become a "category killer", one of those classic programs that fills its niche so competently that the alternatives are not just discarded but almost forgotten.
+
+I think you can't really aim or plan for a result like this. You have to get pulled into it by design ideas so powerful that afterward the results just seem inevitable, natural, even foreordained. The only way to try for ideas like that is by having lots of ideas—or by having the engineering judgment to take other peoples' good ideas beyond where the originators thought they could go.
+
+Andy Tanenbaum had the original idea to build a simple native Unix for IBM PCs, for use as a teaching tool (he called it Minix). Linus Torvalds pushed the Minix concept further than Andrew probably thought it could go—and it grew into something wonderful. In the same way (though on a smaller scale), I took some ideas by Carl Harris and Harry Hochheiser and pushed them hard. Neither of us was `original' in the romantic way people think is genius. But then, most science and engineering and software development isn't done by original genius, hacker mythology to the contrary.
+
+The results were pretty heady stuff all the same—in fact, just the kind of success every hacker lives for! And they meant I would have to set my standards even higher. To make fetchmail as good as I now saw it could be, I'd have to write not just for my own needs, but also include and support features necessary to others but outside my orbit. And do that while keeping the program simple and robust.
+
+The first and overwhelmingly most important feature I wrote after realizing this was multidrop support—the ability to fetch mail from mailboxes that had accumulated all mail for a group of users, and then route each piece of mail to its individual recipients.
+
+I decided to add the multidrop support partly because some users were clamoring for it, but mostly because I thought it would shake bugs out of the single-drop code by forcing me to deal with addressing in full generality. And so it proved. Getting RFC 822 address parsing right took me a remarkably long time, not because any individual piece of it is hard but because it involved a pile of interdependent and fussy details.
+
+But multidrop addressing turned out to be an excellent design decision as well. Here's how I knew:
+
+_1 14. Any tool should be useful in the expected way, but a truly great tool lends itself to uses you never expected.
+
+The unexpected use for multidrop fetchmail is to run mailing lists with the list kept, and alias expansion done, on the client side of the Internet connection. This means someone running a personal machine through an ISP account can manage a mailing list without continuing access to the ISP's alias files.
+
+Another important change demanded by my beta-testers was support for 8-bit MIME (Multipurpose Internet Mail Extensions) operation. This was pretty easy to do, because I had been careful to keep the code 8-bit clean (that is, to not press the 8th bit, unused in the ASCII character set, into service to carry information within the program). Not because I anticipated the demand for this feature, but rather in obedience to another rule:
+
+_1 15. When writing gateway software of any kind, take pains to disturb the data stream as little as possible—and never throw away information unless the recipient forces you to!
+
+Had I not obeyed this rule, 8-bit MIME support would have been difficult and buggy. As it was, all I had to do was read the MIME standard (RFC 1652) and add a trivial bit of header-generation logic.
+
+Some European users bugged me into adding an option to limit the number of messages retrieved per session (so they can control costs from their expensive phone networks). I resisted this for a long time, and I'm still not entirely happy about it. But if you're writing for the world, you have to listen to your customers—this doesn't change just because they're not paying you in money.
+
+1~ A Few More Lessons from Fetchmail
+
+Before we go back to general software-engineering issues, there are a couple more specific lessons from the fetchmail experience to ponder. Nontechnical readers can safely skip this section.
+
+The rc (control) file syntax includes optional `noise' keywords that are entirely ignored by the parser. The English-like syntax they allow is considerably more readable than the traditional terse keyword-value pairs you get when you strip them all out.
+
+These started out as a late-night experiment when I noticed how much the rc file declarations were beginning to resemble an imperative minilanguage. (This is also why I changed the original popclient "server" keyword to "poll").
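+
+The mechanism can be sketched in a few lines (hypothetical Python with an invented noise-word list, not fetchmail's actual parser): the parser discards a fixed set of noise keywords, so an English-like declaration and its terse equivalent reduce to the same token stream.
+
```python
# Hypothetical noise words for illustration; the real list differs.
NOISE = {"with", "and", "has", "wants", "options"}

def tokenize(rc_line: str) -> list:
    # Drop noise keywords; what survives is the terse keyword-value form.
    return [tok for tok in rc_line.split() if tok.lower() not in NOISE]

# The readable form and the stripped-down form parse identically:
verbose = tokenize("poll pop.example.com with protocol pop3 and options keep")
terse = tokenize("poll pop.example.com protocol pop3 keep")
assert verbose == terse == ["poll", "pop.example.com", "protocol", "pop3", "keep"]
```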
+
+It seemed to me that trying to make that imperative minilanguage more like English might make it easier to use. Now, although I'm a convinced partisan of the "make it a language" school of design as exemplified by Emacs and HTML and many database engines, I am not normally a big fan of "English-like" syntaxes.
+
+Traditionally programmers have tended to favor control syntaxes that are very precise and compact and have no redundancy at all. This is a cultural legacy from when computing resources were expensive, so parsing stages had to be as cheap and simple as possible. English, with about 50% redundancy, looked like a very inappropriate model then.
+
+This is not my reason for normally avoiding English-like syntaxes; I mention it here only to demolish it. With cheap cycles and core, terseness should not be an end in itself. Nowadays it's more important for a language to be convenient for humans than to be cheap for the computer.
+
+There remain, however, good reasons to be wary. One is the complexity cost of the parsing stage—you don't want to raise that to the point where it's a significant source of bugs and user confusion in itself. Another is that trying to make a language syntax English-like often demands that the "English" it speaks be bent seriously out of shape, so much so that the superficial resemblance to natural language is as confusing as a traditional syntax would have been. (You see this bad effect in a lot of so-called "fourth generation" and commercial database-query languages.)
+
+The fetchmail control syntax seems to avoid these problems because the language domain is extremely restricted. It's nowhere near a general-purpose language; the things it says simply are not very complicated, so there's little potential for confusion in moving mentally between a tiny subset of English and the actual control language. I think there may be a broader lesson here:
+
+_1 16. When your language is nowhere near Turing-complete, syntactic sugar can be your friend.
+
+Another lesson is about security by obscurity. Some fetchmail users asked me to change the software to store passwords encrypted in the rc file, so snoopers wouldn't be able to casually see them.
+
+I didn't do it, because this doesn't actually add protection. Anyone who's acquired permissions to read your rc file will be able to run fetchmail as you anyway—and if it's your password they're after, they'd be able to rip the necessary decoder out of the fetchmail code itself to get it.
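+
+The futility is easy to demonstrate (a hypothetical sketch, not anything from the fetchmail sources): any reversible encoding must ship its decoder, and its key, inside the program itself, so a snooper with read access to the rc file loses nothing.
+
```python
KEY = 0x2A  # the "secret" necessarily ships with the program

def obfuscate(pw: str) -> bytes:
    # The kind of rc-file password "encryption" users asked for.
    return bytes(c ^ KEY for c in pw.encode())

def deobfuscate(blob: bytes) -> str:
    # Anyone who can read the rc file can also read (or simply reuse)
    # this decoder, so the obfuscation protects nothing.
    return bytes(b ^ KEY for b in blob).decode()

assert deobfuscate(obfuscate("hunter2")) == "hunter2"
```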
+
+All .fetchmailrc password encryption would have done is give a false sense of security to people who don't think very hard. The general rule here is:
+
+_1 17. A security system is only as secure as its secret. Beware of pseudo-secrets.
+
+1~ Necessary Preconditions for the Bazaar Style
+
+Early reviewers and test audiences for this essay consistently raised questions about the preconditions for successful bazaar-style development, including both the qualifications of the project leader and the state of code at the time one goes public and starts to try to build a co-developer community.
+
+It's fairly clear that one cannot code from the ground up in bazaar style [IN]. One can test, debug and improve in bazaar style, but it would be very hard to originate a project in bazaar mode. Linus didn't try it. I didn't either. Your nascent developer community needs to have something runnable and testable to play with.
+
+When you start community-building, what you need to be able to present is a plausible promise. Your program doesn't have to work particularly well. It can be crude, buggy, incomplete, and poorly documented. What it must not fail to do is (a) run, and (b) convince potential co-developers that it can be evolved into something really neat in the foreseeable future.
+
+Linux and fetchmail both went public with strong, attractive basic designs. Many people thinking about the bazaar model as I have presented it have correctly considered this critical, then jumped from that to the conclusion that a high degree of design intuition and cleverness in the project leader is indispensable.
+
+But Linus got his design from Unix. I got mine initially from the ancestral popclient (though it would later change a great deal, much more proportionately speaking than has Linux). So does the leader/coordinator for a bazaar-style effort really have to have exceptional design talent, or can he get by through leveraging the design talent of others?
+
+I think it is not critical that the coordinator be able to originate designs of exceptional brilliance, but it is absolutely critical that the coordinator be able to recognize good design ideas from others.
+
+Both the Linux and fetchmail projects show evidence of this. Linus, while not (as previously discussed) a spectacularly original designer, has displayed a powerful knack for recognizing good design and integrating it into the Linux kernel. And I have already described how the single most powerful design idea in fetchmail (SMTP forwarding) came from somebody else.
+
+Early audiences of this essay complimented me by suggesting that I am prone to undervalue design originality in bazaar projects because I have a lot of it myself, and therefore take it for granted. There may be some truth to this; design (as opposed to coding or debugging) is certainly my strongest skill.
+
+But the problem with being clever and original in software design is that it gets to be a habit—you start reflexively making things cute and complicated when you should be keeping them robust and simple. I have had projects crash on me because I made this mistake, but I managed to avoid this with fetchmail.
+
+So I believe the fetchmail project succeeded partly because I restrained my tendency to be clever; this argues (at least) against design originality being essential for successful bazaar projects. And consider Linux. Suppose Linus Torvalds had been trying to pull off fundamental innovations in operating system design during the development; does it seem at all likely that the resulting kernel would be as stable and successful as what we have?
+
+A certain base level of design and coding skill is required, of course, but I expect almost anybody seriously thinking of launching a bazaar effort will already be above that minimum. The open-source community's internal market in reputation exerts subtle pressure on people not to launch development efforts they're not competent to follow through on. So far this seems to have worked pretty well.
+
+There is another kind of skill not normally associated with software development which I think is as important as design cleverness to bazaar projects—and it may be more important. A bazaar project coordinator or leader must have good people and communications skills.
+
+This should be obvious. In order to build a development community, you need to attract people, interest them in what you're doing, and keep them happy about the amount of work they're doing. Technical sizzle will go a long way towards accomplishing this, but it's far from the whole story. The personality you project matters, too.
+
+It is not a coincidence that Linus is a nice guy who makes people like him and want to help him. It's not a coincidence that I'm an energetic extrovert who enjoys working a crowd and has some of the delivery and instincts of a stand-up comic. To make the bazaar model work, it helps enormously if you have at least a little skill at charming people.
+
+1~ The Social Context of Open-Source Software
+
+It is truly written: the best hacks start out as personal solutions to the author's everyday problems, and spread because the problem turns out to be typical for a large class of users. This takes us back to the matter of rule 1, restated in a perhaps more useful way:
+
+_1 18. To solve an interesting problem, start by finding a problem that is interesting to you.
+
+So it was with Carl Harris and the ancestral popclient, and so with me and fetchmail. But this has been understood for a long time. The interesting point, the point that the histories of Linux and fetchmail seem to demand we focus on, is the next stage—the evolution of software in the presence of a large and active community of users and co-developers.
+
+In The Mythical Man-Month, Fred Brooks observed that programmer time is not fungible; adding developers to a late software project makes it later. As we've seen previously, he argued that the complexity and communication costs of a project rise with the square of the number of developers, while work done only rises linearly. Brooks's Law has been widely regarded as a truism. But we've examined in this essay a number of ways in which the process of open-source development falsifies the assumptions behind it—and, empirically, if Brooks's Law were the whole picture Linux would be impossible.
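+
+The arithmetic behind Brooks's argument is simple to state (a back-of-the-envelope sketch, not anything drawn from Brooks's text): among n developers there are n(n-1)/2 potential communication channels, so coordination cost grows quadratically while hands grow only linearly.
+
```python
def comm_channels(n: int) -> int:
    # Pairwise communication paths among n developers: n*(n-1)/2.
    return n * (n - 1) // 2

assert comm_channels(5) == 10
assert comm_channels(50) == 1225
# Doubling headcount from 25 to 50 roughly quadruples coordination
# cost (1225/300 ~ 4.08) while only doubling the work done:
assert comm_channels(50) / comm_channels(25) > 4
```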
+
+Gerald Weinberg's classic The Psychology of Computer Programming supplied what, in hindsight, we can see as a vital correction to Brooks. In his discussion of "egoless programming", Weinberg observed that in shops where developers are not territorial about their code, and encourage other people to look for bugs and potential improvements in it, improvement happens dramatically faster than elsewhere. (Recently, Kent Beck's `extreme programming' technique of deploying coders in pairs looking over one another's shoulders might be seen as an attempt to force this effect.)
+
+Weinberg's choice of terminology has perhaps prevented his analysis from gaining the acceptance it deserved—one has to smile at the thought of describing Internet hackers as "egoless". But I think his argument looks more compelling today than ever.
+
+The bazaar method, by harnessing the full power of the "egoless programming" effect, strongly mitigates the effect of Brooks's Law. The principle behind Brooks's Law is not repealed, but given a large developer population and cheap communications its effects can be swamped by competing nonlinearities that are not otherwise visible. This resembles the relationship between Newtonian and Einsteinian physics—the older system is still valid at low energies, but if you push mass and velocity high enough you get surprises like nuclear explosions or Linux.
+
+The history of Unix should have prepared us for what we're learning from Linux (and what I've verified experimentally on a smaller scale by deliberately copying Linus's methods [EGCS]). That is, while coding remains an essentially solitary activity, the really great hacks come from harnessing the attention and brainpower of entire communities. The developer who uses only his or her own brain in a closed project is going to fall behind the developer who knows how to create an open, evolutionary context in which feedback exploring the design space, code contributions, bug-spotting, and other improvements come from hundreds (perhaps thousands) of people.
+
+But the traditional Unix world was prevented from pushing this approach to the ultimate by several factors. One was the legal constraints of various licenses, trade secrets, and commercial interests. Another (in hindsight) was that the Internet wasn't yet good enough.
+
+Before cheap Internet, there were some geographically compact communities where the culture encouraged Weinberg's "egoless" programming, and a developer could easily attract a lot of skilled kibitzers and co-developers. Bell Labs, the MIT AI and LCS labs, UC Berkeley—these became the home of innovations that are legendary and still potent.
+
+Linux was the first project for which a conscious and successful effort to use the entire world as its talent pool was made. I don't think it's a coincidence that the gestation period of Linux coincided with the birth of the World Wide Web, and that Linux left its infancy during the same period in 1993–1994 that saw the takeoff of the ISP industry and the explosion of mainstream interest in the Internet. Linus was the first person who learned how to play by the new rules that pervasive Internet access made possible.
+
+While cheap Internet was a necessary condition for the Linux model to evolve, I think it was not by itself a sufficient condition. Another vital factor was the development of a leadership style and set of cooperative customs that could allow developers to attract co-developers and get maximum leverage out of the medium.
+
+But what is this leadership style and what are these customs? They cannot be based on power relationships—and even if they could be, leadership by coercion would not produce the results we see. Weinberg quotes the autobiography of the 19th-century Russian anarchist Pyotr Alexeyvich Kropotkin, Memoirs of a Revolutionist, to good effect on this subject:
+
+_1 Having been brought up in a serf-owner's family, I entered active life, like all young men of my time, with a great deal of confidence in the necessity of commanding, ordering, scolding, punishing and the like. But when, at an early stage, I had to manage serious enterprises and to deal with [free] men, and when each mistake would lead at once to heavy consequences, I began to appreciate the difference between acting on the principle of command and discipline and acting on the principle of common understanding. The former works admirably in a military parade, but it is worth nothing where real life is concerned, and the aim can be achieved only through the severe effort of many converging wills.
+
+The "severe effort of many converging wills" is precisely what a project like Linux requires—and the "principle of command" is effectively impossible to apply among volunteers in the anarchist's paradise we call the Internet. To operate and compete effectively, hackers who want to lead collaborative projects have to learn how to recruit and energize effective communities of interest in the mode vaguely suggested by Kropotkin's "principle of understanding". They must learn to use Linus's Law.[SP]
+
+Earlier I referred to the "Delphi effect" as a possible explanation for Linus's Law. But more powerful analogies to adaptive systems in biology and economics also irresistibly suggest themselves. The Linux world behaves in many respects like a free market or an ecology, a collection of selfish agents attempting to maximize utility which in the process produces a self-correcting spontaneous order more elaborate and efficient than any amount of central planning could have achieved. Here, then, is the place to seek the "principle of understanding".
+
+The "utility function" Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers. (One may call their motivation "altruistic", but this ignores the fact that altruism is itself a form of ego satisfaction for the altruist). Voluntary cultures that work this way are not actually uncommon; one other in which I have long participated is science fiction fandom, which unlike hackerdom has long explicitly recognized "egoboo" (ego-boosting, or the enhancement of one's reputation among other fans) as the basic drive behind volunteer activity.
+
+Linus, by successfully positioning himself as the gatekeeper of a project in which the development is mostly done by others, and nurturing interest in the project until it became self-sustaining, has shown an acute grasp of Kropotkin's "principle of shared understanding". This quasi-economic view of the Linux world enables us to see how that understanding is applied.
+
+We may view Linus's method as a way to create an efficient market in "egoboo"—to connect the selfishness of individual hackers as firmly as possible to difficult ends that can only be achieved by sustained cooperation. With the fetchmail project I have shown (albeit on a smaller scale) that his methods can be duplicated with good results. Perhaps I have even done it a bit more consciously and systematically than he.
+
+Many people (especially those who politically distrust free markets) would expect a culture of self-directed egoists to be fragmented, territorial, wasteful, secretive, and hostile. But this expectation is clearly falsified by (to give just one example) the stunning variety, quality, and depth of Linux documentation. It is a hallowed given that programmers hate documenting; how is it, then, that Linux hackers generate so much documentation? Evidently Linux's free market in egoboo works better to produce virtuous, other-directed behavior than the massively-funded documentation shops of commercial software producers.
+
+Both the fetchmail and Linux kernel projects show that by properly rewarding the egos of many other hackers, a strong developer/coordinator can use the Internet to capture the benefits of having lots of co-developers without having a project collapse into a chaotic mess. So to Brooks's Law I counter-propose the following:
+
+_1 19: Provided the development coordinator has a communications medium at least as good as the Internet, and knows how to lead without coercion, many heads are inevitably better than one.
+
+I think the future of open-source software will increasingly belong to people who know how to play Linus's game, people who leave behind the cathedral and embrace the bazaar. This is not to say that individual vision and brilliance will no longer matter; rather, I think that the cutting edge of open-source software will belong to people who start from individual vision and brilliance, then amplify it through the effective construction of voluntary communities of interest.
+
+Perhaps this is not only the future of open-source software. No closed-source developer can match the pool of talent the Linux community can bring to bear on a problem. Very few could afford even to hire the more than 200 (1999: 600, 2000: 800) people who have contributed to fetchmail!
+
+Perhaps in the end the open-source culture will triumph not because cooperation is morally right or software "hoarding" is morally wrong (assuming you believe the latter, which neither Linus nor I do), but simply because the closed-source world cannot win an evolutionary arms race with open-source communities that can put orders of magnitude more skilled time into a problem.
+
+1~ On Management and the Maginot Line
+
+The original Cathedral and Bazaar paper of 1997 ended with the vision above—that of happy networked hordes of programmer/anarchists outcompeting and overwhelming the hierarchical world of conventional closed software.
+
+A good many skeptics weren't convinced, however; and the questions they raise deserve a fair engagement. Most of the objections to the bazaar argument come down to the claim that its proponents have underestimated the productivity-multiplying effect of conventional management.
+
+Traditionally-minded software-development managers often object that the casualness with which project groups form and change and dissolve in the open-source world negates a significant part of the apparent advantage of numbers that the open-source community has over any single closed-source developer. They would observe that in software development it is really sustained effort over time and the degree to which customers can expect continuing investment in the product that matters, not just how many people have thrown a bone in the pot and left it to simmer.
+
+There is something to this argument, to be sure; in fact, I have developed the idea that expected future service value is the key to the economics of software production in the essay The Magic Cauldron.
+
+But this argument also has a major hidden problem: the implicit assumption that open-source development cannot deliver such sustained effort. In fact, there have been open-source projects that maintained a coherent direction and an effective maintainer community over quite long periods of time without the kinds of incentive structures or institutional controls that conventional management finds essential. The development of the GNU Emacs editor is an extreme and instructive example; it has absorbed the efforts of hundreds of contributors over 15 years into a unified architectural vision, despite high turnover and the fact that only one person (its author) has been continuously active during all that time. No closed-source editor has ever matched this longevity record.
+
+This suggests a reason for questioning the advantages of conventionally-managed software development that is independent of the rest of the arguments over cathedral vs. bazaar mode. If it's possible for GNU Emacs to express a consistent architectural vision over 15 years, or for an operating system like Linux to do the same over 8 years of rapidly changing hardware and platform technology; and if (as is indeed the case) there have been many well-architected open-source projects of more than 5 years' duration—then we are entitled to wonder what, if anything, the tremendous overhead of conventionally-managed development is actually buying us.
+
+Whatever it is, it certainly doesn't include reliable execution by deadline, or on budget, or to all features of the specification; it's a rare `managed' project that meets even one of these goals, let alone all three. Nor does it appear to be the ability to adapt to changes in technology and economic context during the project lifetime; the open-source community has proven far more effective on that score (as one can readily verify, for example, by comparing the 30-year history of the Internet with the short half-lives of proprietary networking technologies—or the cost of the 16-bit to 32-bit transition in Microsoft Windows with the nearly effortless upward migration of Linux during the same period, not only along the Intel line of development but to more than a dozen other hardware platforms, including the 64-bit Alpha as well).
+
+One thing many people think the traditional mode buys you is somebody to hold legally liable and potentially recover compensation from if the project goes wrong. But this is an illusion; most software licenses are written to disclaim even warranty of merchantability, let alone performance—and cases of successful recovery for software nonperformance are vanishingly rare. Even if they were common, feeling comforted by having somebody to sue would be missing the point. You didn't want to be in a lawsuit; you wanted working software.
+
+So what is all that management overhead buying?
+
+In order to understand that, we need to understand what software development managers believe they do. A woman I know who seems to be very good at this job says software project management has five functions:
+
+_* To define goals and keep everybody pointed in the same direction
+
+_* To monitor and make sure crucial details don't get skipped
+
+_* To motivate people to do boring but necessary drudgework
+
+_* To organize the deployment of people for best productivity
+
+_* To marshal resources needed to sustain the project
+
+Apparently worthy goals, all of these; but under the open-source model, and in its surrounding social context, they can begin to seem strangely irrelevant. We'll take them in reverse order.
+
+My friend reports that a lot of resource marshalling is basically defensive; once you have your people and machines and office space, you have to defend them from peer managers competing for the same resources, and from higher-ups trying to allocate the most efficient use of a limited pool.
+
+But open-source developers are volunteers, self-selected for both interest and ability to contribute to the projects they work on (and this remains generally true even when they are being paid a salary to hack open source.) The volunteer ethos tends to take care of the `attack' side of resource-marshalling automatically; people bring their own resources to the table. And there is little or no need for a manager to `play defense' in the conventional sense.
+
+Anyway, in a world of cheap PCs and fast Internet links, we find pretty consistently that the only really limiting resource is skilled attention. Open-source projects, when they founder, essentially never do so for want of machines or links or office space; they die only when the developers themselves lose interest.
+
+That being the case, it's doubly important that open-source hackers organize themselves for maximum productivity by self-selection—and the social milieu selects ruthlessly for competence. My friend, familiar with both the open-source world and large closed projects, believes that open source has been successful partly because its culture only accepts the most talented 5% or so of the programming population. She spends most of her time organizing the deployment of the other 95%, and has thus observed first-hand the well-known variance of a factor of one hundred in productivity between the most able programmers and the merely competent.
+
+The size of that variance has always raised an awkward question: would individual projects, and the field as a whole, be better off without more than 50% of the least able in it? Thoughtful managers have understood for a long time that if conventional software management's only function were to convert the least able from a net loss to a marginal win, the game might not be worth the candle.
+
+The success of the open-source community sharpens this question considerably, by providing hard evidence that it is often cheaper and more effective to recruit self-selected volunteers from the Internet than it is to manage buildings full of people who would rather be doing something else.
+
+Which brings us neatly to the question of motivation. An equivalent and often-heard way to state my friend's point is that traditional development management is a necessary compensation for poorly motivated programmers who would not otherwise turn out good work.
+
+This answer usually travels with a claim that the open-source community can be relied on only to do work that is `sexy' or technically sweet; anything else will be left undone (or done only poorly) unless it's churned out by money-motivated cubicle peons with managers cracking whips over them. I address the psychological and social reasons for being skeptical of this claim in Homesteading the Noosphere. For present purposes, however, I think it's more interesting to point out the implications of accepting it as true.
+
+If the conventional, closed-source, heavily-managed style of software development is really defended only by a sort of Maginot Line of problems conducive to boredom, then it's going to remain viable in each individual application area for only so long as nobody finds those problems really interesting and nobody else finds any way to route around them. Because the moment there is open-source competition for a `boring' piece of software, customers are going to know that it was finally tackled by someone who chose that problem to solve because of a fascination with the problem itself—which, in software as in other kinds of creative work, is a far more effective motivator than money alone.
+
+Having a conventional management structure solely in order to motivate, then, is probably good tactics but bad strategy; a short-term win, but in the longer term a surer loss.
+
+So far, conventional development management looks like a bad bet now against open source on two points (resource marshalling, organization), and like it's living on borrowed time with respect to a third (motivation). And the poor beleaguered conventional manager is not going to get any succour from the monitoring issue; the strongest argument the open-source community has is that decentralized peer review trumps all the conventional methods for trying to ensure that details don't get slipped.
+
+Can we save defining goals as a justification for the overhead of conventional software project management? Perhaps; but to do so, we'll need good reason to believe that management committees and corporate roadmaps are more successful at defining worthy and widely shared goals than the project leaders and tribal elders who fill the analogous role in the open-source world.
+
+That is on the face of it a pretty hard case to make. And it's not so much the open-source side of the balance (the longevity of Emacs, or Linus Torvalds's ability to mobilize hordes of developers with talk of "world domination") that makes it tough. Rather, it's the demonstrated awfulness of conventional mechanisms for defining the goals of software projects.
+
+One of the best-known folk theorems of software engineering is that 60% to 75% of conventional software projects either are never completed or are rejected by their intended users. If that range is anywhere near true (and I've never met a manager of any experience who disputes it) then more projects than not are being aimed at goals that are either (a) not realistically attainable, or (b) just plain wrong.
+
+This, more than any other problem, is the reason that in today's software engineering world the very phrase "management committee" is likely to send chills down the hearer's spine—even (or perhaps especially) if the hearer is a manager. The days when only programmers griped about this pattern are long past; Dilbert cartoons hang over executives' desks now.
+
+Our reply, then, to the traditional software development manager, is simple—if the open-source community has really underestimated the value of conventional management, why do so many of you display contempt for your own process?
+
+Once again the example of the open-source community sharpens this question considerably—because we have fun doing what we do. Our creative play has been racking up technical, market-share, and mind-share successes at an astounding rate. We're proving not only that we can do better software, but that joy is an asset.
+
+Two and a half years after the first version of this essay, the most radical thought I can offer to close with is no longer a vision of an open-source–dominated software world; that, after all, looks plausible to a lot of sober people in suits these days.
+
+Rather, I want to suggest what may be a wider lesson about software (and probably about every kind of creative or professional work). Human beings generally take pleasure in a task when it falls in a sort of optimal-challenge zone; not so easy as to be boring, not too hard to achieve. A happy programmer is one who is neither underutilized nor weighed down with ill-formulated goals and stressful process friction. Enjoyment predicts efficiency.
+
+Relating to your own work process with fear and loathing (even in the displaced, ironic way suggested by hanging up Dilbert cartoons) should therefore be regarded in itself as a sign that the process has failed. Joy, humor, and playfulness are indeed assets; it was not mainly for the alliteration that I wrote of "happy hordes" above, and it is no mere joke that the Linux mascot is a cuddly, neotenous penguin.
+
+It may well turn out that one of the most important effects of open source's success will be to teach us that play is the most economically efficient mode of creative work.
+
+1~ Epilog: Netscape Embraces the Bazaar
+
+It's a strange feeling to realize you're helping make history....
+
+On January 22 1998, approximately seven months after I first published The Cathedral and the Bazaar, Netscape Communications, Inc. announced plans to give away the source for Netscape Communicator. I had had no clue this was going to happen before the day of the announcement.
+
+Eric Hahn, executive vice president and chief technology officer at Netscape, emailed me shortly afterwards as follows: "On behalf of everyone at Netscape, I want to thank you for helping us get to this point in the first place. Your thinking and writings were fundamental inspirations to our decision."
+
+The following week I flew out to Silicon Valley at Netscape's invitation for a day-long strategy conference (on 4 Feb 1998) with some of their top executives and technical people. We designed Netscape's source-release strategy and license together.
+
+A few days later I wrote the following:
+
+_1 Netscape is about to provide us with a large-scale, real-world test of the bazaar model in the commercial world. The open-source culture now faces a danger; if Netscape's execution doesn't work, the open-source concept may be so discredited that the commercial world won't touch it again for another decade.
+
+_1 On the other hand, this is also a spectacular opportunity. Initial reaction to the move on Wall Street and elsewhere has been cautiously positive. We're being given a chance to prove ourselves, too. If Netscape regains substantial market share through this move, it just may set off a long-overdue revolution in the software industry.
+
+_1 The next year should be a very instructive and interesting time.
+
+And indeed it was. As I write in mid-2000, the development of what was later named Mozilla has been only a qualified success. It achieved Netscape's original goal, which was to deny Microsoft a monopoly lock on the browser market. It has also achieved some dramatic successes (notably the release of the next-generation Gecko rendering engine).
+
+However, it has not yet garnered the massive development effort from outside Netscape that the Mozilla founders had originally hoped for. The problem here seems to be that for a long time the Mozilla distribution actually broke one of the basic rules of the bazaar model; it didn't ship with something potential contributors could easily run and see working. (Until more than a year after release, building Mozilla from source required a license for the proprietary Motif library.)
+
+Most negatively (from the point of view of the outside world) the Mozilla group didn't ship a production-quality browser for two and a half years after the project launch—and in 1999 one of the project's principals caused a bit of a sensation by resigning, complaining of poor management and missed opportunities. "Open source," he correctly observed, "is not magic pixie dust."
+
+And indeed it is not. The long-term prognosis for Mozilla looks dramatically better now (in November 2000) than it did at the time of Jamie Zawinski's resignation letter—in the last few weeks the nightly releases have finally passed the critical threshold to production usability. But Jamie was right to point out that going open will not necessarily save an existing project that suffers from ill-defined goals or spaghetti code or any of software engineering's other chronic ills. Mozilla has managed to provide an example simultaneously of how open source can succeed and how it could fail.
+
+In the meantime, however, the open-source idea has scored successes and found backers elsewhere. Since the Netscape release we've seen a tremendous explosion of interest in the open-source development model, a trend both driven by and driving the continuing success of the Linux operating system. The trend Mozilla touched off is continuing at an accelerating rate.
+
+% Thyrsus Enterprises <esr@thyrsus.com>
+
+% http://www.catb.org/~esr/writings/cathedral-bazaar/
+
+% This is version 3.0
+
+% $Date: 2007/01/26 20:28:08 $
+% Revision History
+% Revision 1.57 11 September 2000 esr
+% New major section ``How Many Eyeballs Tame Complexity".
+% Revision 1.52 28 August 2000 esr
+% MATLAB is a reinforcing parallel to Emacs. Corbató & Vyssotsky got it in 1965.
+% Revision 1.51 24 August 2000 esr
+% First DocBook version. Minor updates to Fall 2000 on the time-sensitive material.
+% Revision 1.49 5 May 2000 esr
+% Added the HBS note on deadlines and scheduling.
+% Revision 1.46 31 August 1999 esr
+% This is the version that O'Reilly printed in the first edition of the book.
+% Revision 1.45 8 August 1999 esr
+% Added the endnotes on the Snafu Principle, (pre)historical examples of bazaar development, and originality in the bazaar.
+% Revision 1.44 29 July 1999 esr
+% Added the ``On Management and the Maginot Line" section, some insights about the usefulness of bazaars for exploring design space, and substantially improved the Epilog.
+% Revision 1.40 20 Nov 1998 esr
+% Added a correction of Brooks based on the Halloween Documents.
+% Revision 1.39 28 July 1998 esr
+% I removed Paul Eggert's 'graph on GPL vs. bazaar in response to cogent arguments from RMS on
+% Revision 1.31 February 10 1998 esr
+% Added ``Epilog: Netscape Embraces the Bazaar!"
+% Revision 1.29 February 9 1998 esr
+% Changed ``free software" to ``open source".
+% Revision 1.27 18 November 1997 esr
+% Added the Perl Conference anecdote.
+% Revision 1.20 7 July 1997 esr
+% Added the bibliography.
+% Revision 1.16 21 May 1997 esr
+% First official presentation at the Linux Kongress.
+
diff --git a/data/sisu_markup_samples/non-free/the_wealth_of_networks.book_index.yochai_benkler.sst b/data/sisu_markup_samples/non-free/the_wealth_of_networks.book_index.yochai_benkler.sst
new file mode 100644
index 0000000..00bd947
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/the_wealth_of_networks.book_index.yochai_benkler.sst
@@ -0,0 +1,1847 @@
+% SiSU 0.38
+
+@title: Book Index for - The Wealth of Networks
+
+@subtitle: How Social Production Transforms Markets and Freedom
+
+@creator: Yochai Benkler
+
+@type: Book
+
+@rights: Copyright 2006 by Yochai Benkler. All rights reserved. Subject to the exception immediately following, this book may not be reproduced, in whole or in part, including illustrations, in any form (beyond that copying permitted by Sections 107 and 108 of the U.S. Copyright Law and except by reviewers for the public press), without written permission from the publishers. http://creativecommons.org/licenses/by-nc-sa/2.5/ The author has made an online version of the book available under a Creative Commons Noncommercial Sharealike license; it can be accessed through the author's website at http://www.benkler.org.
+
+@date: 2006-01-27
+
+@date.created: 2006-01-27
+
+@date.issued: 2006-01-27
+
+@date.available: 2006-11-26
+
+@date.modified: 2006-11-26
+
+@date.valid: 2006-01-27
+
+% @catalogue: isbn=0300110561
+
+@language: US
+
+@vocabulary: none
+
+@images: center
+
+@skin: skin_won_benkler
+
+@links: {The Wealth of Networks, dedicated wiki}http://www.benkler.org/wealth_of_networks/index.php/Main_Page
+{The Wealth of Networks, Yochai Benkler @ SiSU}http://www.jus.uio.no/sisu/the_wealth_of_networks.yochai_benkler
+
+@level: new=:C; break=1
+
+
+:A~ The Wealth of Networks - How Social Production Transforms Markets and Freedom
+
+:B~ Yochai Benkler
+
+:C~ Book Index
+
+
+1~ Index~{ http://www.jus.uio.no/sisu/the_wealth_of_networks.yochai_benkler }~
+
+http://www.jus.uio.no/sisu/the_wealth_of_networks.yochai_benkler ~#
+
+Abilene, Texas, 407
+
+access: broadband services, concentration of, 240; cable providers, regulation of, 399-401; human development and justice, 13-15; influence exaction, 156, 158-159; large-audience programming, 197, 204-210, 259-260; limited by mass media, 197-199; to medicine, 344-353; to raw data, 313-314; systematically blocked by policy routers, 147-149, 156, 197-198, 397
+
+access regulation. See policy
+
+accreditation, 68, 75-80, 169-174, 183-184; Amazon, 75; capacity for, by mass media, 199; concentration of mass-media power, 157, 220-225, 235, 237-241; as distributed system, 171-172; Google, 76; Open Directory Project (ODP), 76; power of mass media owners, 197, 199-204, 220-225; as public good, 12; Slashdot, 76-80, 104
+
+Ackerman, Bruce, 184, 281, 305-307
+
+action, individual. See individual capabilities
+
+active vs. passive consumers, 126-127, 135
+
+ad hoc mesh networks, 89
+
+Adamic, Lada, 244, 246-248, 257
+
+Adams, Scott, 138
+
+advertiser-supported media, 194-195, 199-204; lowest-common-denominator programming, 197, 204-210, 259-260; reflection of consumer preference, 203
+
+aggregate effect of individual action, 4-5. See also clusters in network topology; peer production
+
+agonistic giving, 83
+
+agricultural innovation, commons-based, 329-344
+
+% ,{[pg 492]},
+
+Albert, Reka, 243-244, 251
+
+alertness, undermined by commercialism, 197, 204-210
+
+alienation, 359-361
+
+allocating excess capacity, 81-89, 114-115, 157, 351-352
+
+almanac-type information, emergence of, 70. See also Wikipedia project
+
+Alstott, Anne, 305
+
+altruism, 82-83
+
+Amazon, 75
+
+anticircumvention provisions, DMCA, 414-417
+
+antidevice provisions, DMCA, 415
+
+Antidilution Act of 1995, 290, 447
+
+appropriation strategies, 49
+
+arbitrage, domain names, 433
+
+archiving of scientific publications, 325-326
+
+Arrow, Kenneth, 36, 93
+
+ArXiv.org, 325-326
+
+asymmetric commons, 61-62
+
+AT&T, 191, 194
+
+Atrios (blogger Duncan Black), 263
+
+attention fragmentation, 15, 234-235, 238, 256, 465-466. See also social relations and norms
+
+authoring of scientific publications, 323-325
+
+authoritarian control, 236; working around, 266-271
+
+authorship, collaborative. See peer production
+
+autonomy, 8-9, 133-175, 464-465; culture and, 280-281; formal conception of, 140-141; independence of Web sites, 103; individual capabilities in, 20-22; information environment, structure of, 146-161; mass media and, 164-166
+
+B92 radio, 266
+
+Babel objection, 10, 12, 169-174, 233-235, 237-241, 465-466
+
+backbone Web sites, 249-250, 258-260
+
+background knowledge. See culture
+
+bad luck, justice and, 303-304
+
+Bagdikian, Ben, 205
+
+Baker, Edwin, 165, 203
+
+Balkin, Jack, 15, 256, 276, 284, 294, 295
+
+Barabasi, Albert-Laszlo, 243-246, 251
+
+Barbie (doll), culture of, 277, 285-289
+
+Barlow, John Perry, 45
+
+barriers to access. See access
+
+BBC (British Broadcasting Corporation), 189
+
+Beebe, Jack, 207
+
+behavior: enforced with social software, 372-375; motivation to produce, 6, 92-99, 115; number and variety of options, 150-152, 170. See also autonomy
+
+Benabou, Roland, 94
+
+benefit maximization, 42
+
+Beniger, James, 187
+
+Benjamin, Walter, 295, 296
+
+Bennett, James Gordon, 188
+
+Berlusconi effect, 201, 204, 220-225
+
+bilateral trade negotiations. See trade policy
+
+BioForge platform, 343
+
+bioinformatics, 351
+
+BioMed Central, 324
+
+biomedical research, commons-based, 344-353
+
+BIOS initiative, 342-344
+
+biotechnology, 332-338
+
+blocked access: authoritarian control, 236, 266-271; autonomy and, 147-152, 170-171; influence exaction, 156, 158-159; large-audience programming, 197, 204-210, 259-260; mass media and, 197-199; policy routers, 147-149, 156, 197-198, 397
+
+blogs, 216-217; Sinclair Broadcasting case study, 220-225; small-worlds effect, 252-253; as social software, 372-375; watchdog functionality, 262-264
+
+% ,{[pg 493]},
+
+blood donation, 93
+
+bots. See trespass to chattels
+
+bow tie structure of Web, 249-250
+
+Bower, Chris, 221
+
+boycott of Sinclair Broadcasting, 220-225
+
+BoycottSBG.com site, 222-223, 225
+
+Boyd, Dana, 368
+
+Boyle, James, 25, 415, 446-447, 449, 487-488
+
+branding: domain names and, 431-433; trademark dilution, 290, 446-448
+
+bridging social relationships, 368
+
+Bristol, Virginia, 406
+
+broadband networks, 24-25; cable as commons, 399-401; concentration in access services, 240; market structure of, 152-153; municipal initiatives, 405-408; open wireless networks, 402-405; regulation of, 399-402. See also wired communications
+
+broadcast flag regulation, 410
+
+broadcasting, radio. See radio
+
+broadcasting, toll, 194-195
+
+Broder, Andrei, 249
+
+browsers, 434-436
+
+Bt cotton, 337-338
+
+building on existing information, 37-39, 52
+
+Bullock, William, 188
+
+business decisions vs. editorial decisions, 204
+
+business strategies for information production, 41-48
+
+cable broadband transport, as commons, 399-401. See also broadband networks
+
+cacophony. See Babel objection; relevance filtering
+
+CAMBIA research institute, 342-344
+
+capabilities of individuals, 20-22; coordinated effects of individual actions, 4-5; cultural shift, 284; economic condition and, 304; human capacity as resource, 52-55; as modality of production, 119-120; as physical capital, 99; technology and human affairs, 16-18. See also autonomy; nonmarket information producers
+
+capacity: diversity of content in large-audience media, 197, 204-210, 259-260; human communication, 52-55, 99-106, 110; mass media limits on, 199; networked public sphere generation, 225-232; networked public sphere reaction, 220-225; opportunities created by social production, 123-126; policy routers, 147-149, 156, 197-198, 397; processing (computational), 81-82, 86; radio, sharing, 402-403; securing, 458; sharing, 81-89, 114-115, 157, 351-352; storage, 86; transaction costs, 112-115
+
+capital for production, 6-7, 32; control of, 99; cost minimization and benefit maximization, 42; fixed and initial costs, 110; production costs as limiting, 164-165; transaction costs, 59-60. See also commons; social capital
+
+Carey, James, 131
+
+carriage requirements of cable providers, 401
+
+Castells, Manuel, 16, 18, 362
+
+CBDPTA (Consumer Broadband and Digital Television Promotion Act), 409
+
+Cejas, Rory, 134, 141-142
+
+censorship, 268-270
+
+centralization of communications, 62, 235, 237-241, 258-260; authoritarian filtering, 268; decentralization, 10-12, 62
+
+CGIAR's GCP program, 341
+
+Chakrabarti, Soumen, 251
+
+Chandler, Alfred, 187
+
+channels, transmission. See transport channel policy
+
+chaotic, Internet as, 237-241
+
+% ,{[pg 494]},
+
+Chaplin, Charlie, 138
+
+chat rooms, 269
+
+Chinese agricultural research, 337-338
+
+Chung, Minn, 267
+
+Cisco policy routers, 147-149, 156, 197-198, 397; influence exaction, 156, 158-159
+
+Clark, Dave, 412
+
+Clarke, Ian, 269
+
+click-wrap licenses, 444-446
+
+clickworkers project (NASA), 69-70
+
+clinical trials, peer-produced, 353
+
+clusters in network topology, 12-13, 248-250, 253-256; bow tie structure of Web, 249-250; synthesis of public opinion, 184, 199. See also topology, network
+
+Coase, Ronald, 59, 87
+
+Cohen, Julie, 416
+
+Coleman, James, 95, 361
+
+collaboration, open-source, 66-67
+
+collaboration, traditional. See traditional model of communication
+
+collaborative authorship, 218; among universities, 338-341, 347-350; social software, 372-375. See also peer production
+
+collective social action, 22
+
+commercial culture, production of, 295-296
+
+commercial mass media: basic critiques of, 196-211; corrective effects of network environment, 220-225; as platform for public sphere, 178-180, 185-186, 198-199; structure of, 178-180. See also traditional model of communication
+
+commercial mass media, political freedom and, 176-211; criticisms, 196-211; design characteristics of liberal public sphere, 180-185
+
+commercial model of communication, 4, 9, 22-28, 59-60, 383-459, 470-471; autonomy and, 164-166; barriers to justice, 302; emerging role of mass media, 178-180, 185-186, 198-199; enclosure movement, 380-382; mapping, framework for, 389-396; medical innovation and, 345-346; path dependency, 386-389; relationship with social producers, 122-127; security-related policy, 73-74, 396, 457-459; shift away from, 10-13; stakes of information policy, 460-473; structure of mass media, 178-180; transaction costs, 59-60, 106-116. See also market-based information producers
+
+commercial press, 186-188, 202
+
+commercialism, undermining political concern, 197, 204-210
+
+common-carriage regulatory system, 160
+
+commons, 24, 60-62, 129-132, 316-317; autonomy and, 144-146; cable providers as, 399-401; crispness of social exchange, 109; human welfare and development, 308-311; municipal broadband initiatives, 405-408; types of, 61-62; wireless communications as, 89, 152-154
+
+commons, production through. See peer production
+
+commons-based research, 317-328, 354-355; food and agricultural innovation, 328-344; medical and pharmaceutical innovation, 344-353
+
+communication: authoritarian control, working around, 266-271; capacity of, 52-55; feasibility conditions for social production, 99-106; pricing, 110; thickening of preexisting relations, 357; through performance, 205; transaction costs, 112-115; university alliances, 338-341, 347-350. See also wired communications; wireless communications
+
+% ,{[pg 495]},
+
+communication diversity. See diversity
+
+communication tools, 215-219
+
+communities: critical culture and self-reflection, 15-16, 70-74, 76, 112, 293-294; fragmentation of, 15, 234-235, 238, 256, 465-466; human and Internet, together, 375-377; immersive entertainment, 74, 135-136; municipal broadband initiatives, 405-408; open wireless networks, 402-405; as persons, 19-20; technology-defined social structure, 29-34; virtual, 348-361
+
+community clusters. See clusters in network topology
+
+community regulation by social norms. See social relations and norms
+
+competition: communications infrastructure, 157-159; market and nonmarket producers, 122-123
+
+computational capacity, 81-82, 86; transaction costs, 112-115
+
+computer gaming environment, 74, 135-136
+
+computers, 105; infrastructure ownership, 155; policy on physical devices, 408-412; as shareable, lumpy goods, 113-115
+
+concentration in broadband access services, 240
+
+concentration of mass-media power, 157, 197, 199-204, 235, 237-241; corrective effects of network environment, 220-225
+
+concentration of Web attention, 241-261
+
+connectivity, 86
+
+constraints of information production, monetary, 6-7, 32; control of, 99; cost minimization and benefit maximization, 42; fixed and initial costs, 110; production costs as limiting, 164-165; transaction costs, 59-60. See also commons; social capital
+
+constraints of information production, physical, 3-4, 24-25. See also capital for production
+
+constraints on behavior. See autonomy; freedom
+
+consumer demand for information, 203
+
+consumer surplus. See capacity, sharing
+
+consumerism, active vs. passive, 126-127, 135
+
+contact, online vs. physical, 360-361
+
+content layer of institutional ecology, 384, 392, 439-457, 469-470; copyright issues, 439-444; recent changes, 395
+
+context, cultural. See culture
+
+contractual enclosure, 444-446
+
+control of public sphere. See mass media
+
+controlling culture, 297-300
+
+controversy, avoidance of, 205
+
+cooperation gain, 88
+
+cooperative production. See peer production
+
+coordinated effects of individual actions, 4-5. See also clusters in network topology; peer production
+
+copyleft, 65, 342
+
+copyright issues, 277-278, 439-444. See also proprietary rights
+
+core Web sites, 249-250
+
+cost: crispness of, 109-113; minimizing, 42; of production, as limiting, 164-165; proprietary models, 461-462; technologies, 462. See also capital for production
+
+creative capacity, 52-55; feasibility conditions for social production, 99-106; pricing, 110
+
+Creative Commons initiative, 455
+
+creativity, value of, 109-113
+
+credibility, earning. See accreditation
+
+criminalization of copyright infringement, 441-442
+
+crispness of currency exchange, 109-113
+
+% ,{[pg 496]},
+
+critical culture and self-reflection, 15-16, 293-294; Open Directory Project, 76; self-identification as transaction cost, 112; Wikipedia project, 70-74
+
+cultural production. See culture; information production
+
+culture, 273-300, 466-467; criticality of (self-reflection), 15-16, 70-74, 76, 112, 293-294; freedom of, 279-285, 297; influence exaction, 156, 158-159; as motivational context, 97; participatory, policies for, 297-300; security of context, 143-146; shaping perceptions of others, 147-152, 170, 220-225, 297-300; social exchange, crispness of, 109-113; of television, 135; transparency of, 285-294
+
+daily newspapers, 40
+
+dailyKos.com site, 221
+
+data storage capacity, 86; transaction costs, 112-115
+
+Database Directive, 449-450
+
+database protection, 449-451; trespass to chattels, 451-453
+
+Davis, Nick, 221-223, 245-246, 260
+
+Dawkins, Richard, 284
+
+de minimis digital sampling, 443-444
+
+de Solla Price, Derek, 243
+
+Dean, Howard, 258
+
+decency. See social relations and norms
+
+decentralization of communications, 10-12, 62
+
+Deci, Edward, 94
+
+DeCSS program, 417
+
+defining price, 109-113
+
+demand for information, consumer, 203
+
+demand-side effects of information production, 43, 45
+
+democratic societies, 7-16, 177; autonomy, 8-9; critical culture and social relations, 15-16; independence of Web sites, 103; individual capabilities in, 20-22; justice and human development, 13-15; public sphere, shift from mass media, 10-13; shift from mass-media communications model, 10-13; social-democratic theories of justice, 308-311
+
+democratizing effect of Internet, 213-214; critiques of claims of, 233-237
+
+depression, 359-361
+
+deregulation. See policy
+
+determinism, technological, 16-18
+
+development, commons-based, 317-328, 354-355; food and agricultural innovation, 328-344; medical and pharmaceutical innovation, 344-353
+
+devices (physical), policy regarding, 408-412. See also computers
+
+Diebold Election Systems, 225-232, 262, 389-390
+
+digital copyright. See proprietary rights
+
+digital divide, 236-237
+
+Digital Millennium Copyright Act (DMCA), 380, 413-418
+
+digital sampling, 443-444
+
+dignity, 19
+
+Dill, Stephen, 249-250
+
+dilution of trademarks, 290, 446-448
+
+discussion lists (electronic), 215
+
+displacement of real-world interaction, 357, 362-366
+
+distributed computing projects, 81-83
+
+distributed filtering and accreditation, 171-172
+
+distributed production. See peer production
+
+Distributed Proofreading site, 81
+
+distribution lists (electronic), 215
+
+distribution of information, 68-69, 80-81; power law distribution of site connections, 241-261; university-based innovation, 348-350
+
+diversity, 164-169; appropriation strategies, 49; of behavioral options, 150-152, 170; changes in taste, 126; fragmentation of communication, 15, 234-235, 238, 256, 465-466; granularity of participation, 100-102, 113-114; human communication, 55-56; human motivation, 6; large-audience programming, 197, 204-210, 259-260; mass-mediated environments, 165-166; motivation to produce, 6, 92-99, 115. See also autonomy
+
+% ,{[pg 497]},
+
+
+DMCA (Digital Millennium Copyright Act), 380, 413-418
+
+Doctors Without Borders, 347
+
+domain name system, 429-434
+
+Drezner, Daniel, 251, 255
+
+drugs, commons-based research on, 344-353
+
+DSL. See broadband networks
+
+dumb luck, justice and, 303-304
+
+Dworkin, Gerard, 140
+
+Dworkin, Ronald, 304, 307
+
+dynamic inefficiency. See efficiency of information regulation
+
+Dyson, Esther, 45
+
+e-mail, 215; thickening of preexisting relations, 363-366
+
+/{eBay v. Bidder's Edge}/, 451-453
+
+economic analysis, role of, 18
+
+economic data, access to, 313-314
+
+economic opportunity, 130-131
+
+economics in liberal political theory, 19-20; cultural freedom, 279-285, 297
+
+economics of information production and innovation, 35-58; current production strategies, 41-48; exclusive rights, 49-50, 56-58; production over computer networks, 50-56
+
+economics of nonmarket production, 91-127; emergence in digital networks, 116-122; feasibility conditions, 99-106; transaction costs, 59-60, 106-116. See also motivation to produce
+
+Edelman, Ben, 268
+
+editorial filtering. See relevance filtering
+
+editorial vs. business decisions, 204
+
+educational instruction, 314-315, 327
+
+efficiency of information regulation, 36-41, 49-50, 106-116, 461-462; capacity reallocation, 114-116; property protections, 319; wireless communications policy, 154
+
+Eisenstein, Elizabeth, 17
+
+/{Eldred v. Ashcroft}/, 442
+
+electronic voting machines (case study), 225-232, 262, 389-390
+
+emergent order in networks. See clusters in network topology
+
+enclosure movement, 380-382
+
+encryption, 457
+
+encryption circumvention, 414-417
+
+encyclopedic information, emergence of, 70. See also Wikipedia project
+
+enhanced autonomy. See autonomy
+
+entertainment industry: hardware regulation and, 409-412; immersive, 74, 135-136; peer-to-peer networks and, 425-428. See also music industry
+
+entitlement theory, 304
+
+environmental criticism of GM foods, 334
+
+equality. See justice and human development
+
+esteem. See intrinsic motivations
+
+ethic (journalistic) vs. business necessity, 197, 204-210
+
+excess capacity, sharing, 81-89, 114-115, 157, 351-352
+
+exclusivity. See also proprietary rights
+
+exercise of programming power, 197, 199-204; corrective effects of network environment, 220-225
+
+existing information, building on, 37-39, 52
+
+extrinsic motivations, 94-95
+
+factual reporting, access to, 314
+
+fair use in copyright, 440-441
+
+family relations, strengthening of, 357, 362-366
+
+% ,{[pg 498]},
+
+Fanning, Shawn, 84, 419
+
+Farrell, Henry, 251, 255
+
+FastTrack architecture, 420
+
+FCC. See policy
+
+feasibility conditions for social production, 99-106
+
+feedback and intake limits of mass media, 199
+
+Feinberg, Joel, 140
+
+/{Feist Publications, Inc. v. Rural Tel. Serv. Co.}/, 449
+
+Felten, Edward, 416
+
+FHSST (Free High School Science Texts), 101, 326
+
+Fightaids@home project, 82
+
+file-sharing networks, 83-86, 418-428; security considerations, 457
+
+filtering, 68, 75-80, 169-174, 183, 258-260; Amazon, 75; by authoritarian countries, 236; capacity for, by mass media, 199; concentration of mass-media power, 157, 197, 199-204, 235, 237-241; corrective effects of network environment, 220-225; as distributed system, 171-172; Google, 76; Open Directory Project (ODP), 76; as public good, 12; Slashdot, 76-80, 104; watchdog functionality, 236, 261-266
+
+filtering by information provider. See blocked access
+
+financial reward, as demotivator, 94-96
+
+fine-grained goods, 113
+
+firms. See market-based information producers; traditional model of communication
+
+first-best preferences, mass media and: concentration of mass-media power, 157, 220-225, 235, 237-241; large-audience programming, 197, 204-210, 259-260; power of mass media owners, 197, 199-204, 220-225
+
+Fisher, William (Terry), 15, 123, 276, 293, 409
+
+Fiske, John, 135, 275, 293
+
+fixed costs, 110
+
+Folding@home project, 82-83
+
+folk culture. See culture
+
+food, commons-based research on, 328-329
+
+food security, commons-based research on, 329-344
+
+formal autonomy theory, 140-141
+
+formal instruction, 314-315
+
+fragmentation of communication, 15, 234-235, 238, 256, 465-466. See also social relations and norms
+
+Franklin, Benjamin, 187
+
+Franks, Charles, 81, 137
+
+Free High School Science Texts (FHSST), 101, 326
+
+free software, 5, 46, 63-67; commons-based welfare development, 320-323; as competition to market-based business, 123; human development and justice, 14; policy on, 436-437; project modularity and granularity, 102; security considerations, 457-458
+
+free trade agreements. See trade policy
+
+freedom, 19, 129; behavioral options, 150-152, 170; of commons, 62; cultural, 279-285, 297; property and commons, 143-146
+
+freedom as individuals. See autonomy
+
+freedom policy. See policy
+
+Freenet, 269-270
+
+Frey, Bruno, 93-94
+
+Friedman, Milton, 38
+
+friendship as motivation. See intrinsic motivations
+
+friendships, virtual, 359-361
+
+Friendster, 368
+
+Froomkin, Michael, 412, 432
+
+FTAs. See trade policy
+
+future: participatory culture, 297-300; public sphere, 271-272
+
+% ,{[pg 499]},
+
+games, immersive, 74, 135-136
+
+GCP (Generation Challenge Program), 341
+
+GE (General Electric), 191, 195
+
+General Public License (GPL), 63-65, 104. See also free software
+
+Generation Challenge Program (GCP), 341
+
+genetically modified (GM) foods, 332-338
+
+Genome@home project, 82
+
+geographic community, strength of. See thickening of preexisting relations
+
+Ghosh, Rishab, 106
+
+gifts, 116-117
+
+Gilmore, Dan, 219, 262
+
+Glance, Natalie, 248, 257
+
+global development, 308-311, 355; food and agricultural innovation, 328-344; international harmonization, 453-455; medical and pharmaceutical innovation, 344-353
+
+global injustice. See justice and human development
+
+GM (genetically modified) foods, 332-338
+
+GNU/Linux operating system, 64-65
+
+Gnutella, 420
+
+Godelier, Maurice, 109, 116
+
+golden rice, 339
+
+goods, information-embedded, 311-312
+
+Google, 76
+
+Gould, Stephen Jay, 27
+
+government: authoritarian control, 236, 266-271; independence from control of, 184, 197-198; role of, 20-22; working around authorities, 266-271. See also policy
+
+GPL (General Public License), 63-65, 104. See also free software
+
+Gramsci, Antonio, 280
+
+Granovetter, Mark, 95, 360, 361
+
+granularity, 100-102; of lumpy goods, 113-114
+
+Green Revolution, 331-332
+
+Grokster, 421
+
+growth rates of Web sites, 244, 246-247
+
+gTLD-MoU document, 431
+
+Habermas, Jurgen, 181, 184, 205, 281, 412
+
+The Halloween Memo, 123
+
+Hampton, Keith, 363
+
+handhelds. See computers; mobile phones
+
+HapMap Project, 351
+
+hardware, 105; infrastructure ownership, 155; policy on physical devices, 408-412; as shareable, lumpy goods, 113-115
+
+hardware regulations, 408-412
+
+harmonization, international, 453-455
+
+Harris, Bev, 227, 228, 231
+
+Hart, Michael, 80-81, 137
+
+Hayek, Friedrich, 20, 143
+
+HDI (Human Development Index), 309-310
+
+health effects of GM foods, 334
+
+Hearst, William Randolph, 203
+
+Heller, Michael, 312
+
+HHI (Herfindahl-Hirschman Index), 202
+
+hierarchical organizations. See traditional model of communication
+
+high-production value content, 167-169, 294-297. See also accreditation
+
+HIV/AIDS, 319, 328-329, 344-345; Genome@home project, 82
+
+Holiday, Billie, 273
+
+Hollings, Fritz, 409-410
+
+Hollywood. See entertainment industry
+
+Hoover, Herbert, 192-194
+
+Hopkins Report, 229
+
+Horner, Mark, 101
+
+Huberman, Bernardo, 243-244, 246-247
+
+human affairs, technology and, 16-18
+
+% ,{[pg 500]},
+
+human communicative capacity, 52-55; feasibility conditions for social production, 99-106; pricing, 110
+
+human community, coexisting with Internet, 375-377
+
+human contact, online vs. physical, 360-361
+
+human development and justice, 13-15, 301-355, 467-468; commons-based research, 317-328; commons-based strategies, 308-311; liberal theories of, 303-308. See also welfare
+
+Human Development Index (HDI), 309-310
+
+Human Development Report, 309
+
+human freedom. See freedom
+
+human motivation, 6, 92-99; crowding out theory, 115; cultural context of, 97; granularity of participation and, 100-102, 113-114
+
+human welfare, 130-131; commons-based research, 317-328; commons-based strategies, 308-311; digital divide, 236-237; freedom from constraint, 157-158; information-based advantages, 311-315; liberal theories of justice, 303-308. See also justice and human development
+
+Hundt, Reed, 222
+
+hyperlinking on the Web, 218; power law distribution of site connections, 241-261; as trespass, 451-453
+
+IAHC (International Ad Hoc Committee), 430-431
+
+IANA (Internet Assigned Numbers Authority), 430
+
+IBM's business strategy, 46-47, 123-124
+
+ICANN (Internet Corporation for Assigned Names and Numbers), 431-432
+
+iconic representations of opinion, 205, 209-210
+
+ideal market, 62-63
+
+immersive entertainment, 74, 135-136
+
+implicit knowledge, transfer of, 314-315
+
+incentives of exclusive rights. See proprietary rights
+
+incentives to produce, 6, 92-99; crowding out theory, 115; cultural context of, 97; granularity of participation and, 100-102, 113-114
+
+independence from government control, 184, 197-198
+
+independence of Web sites, 103
+
+individual autonomy, 8-9, 133-175, 464-465; culture and, 280-281; formal conception of, 140-141; independence of Web sites, 103; individual capabilities in, 20-22; information environment, structure of, 146-161; mass media and, 164-166
+
+individual capabilities and action, 20-22; coordinated effects of individual actions, 4-5; cultural shift, 284; economic condition and, 304; human capacity as resource, 52-55; as modality of production, 119-120; as physical capital, 99; technology and human affairs, 16-18. See also autonomy; nonmarket information producers
+
+individualist methodologies, 18
+
+industrial age: destabilization of, 32; reduction of individual autonomy, 137-138
+
+industrial model of communication, 4, 9, 22-28, 59-60, 383-459, 470-471; autonomy and, 164-166; barriers to justice, 302; emerging role of mass media, 178-180, 185-186, 198-199; enclosure movement, 380-382; information industries, 315-317; mapping, framework for, 389-396; medical innovation and, 345-346; path dependency, 386-389; relationship with social producers, 122-127; security-related policy, 73-74, 396, 457-459; shift away from, 10-13; stakes of information policy, 460-473; structure of mass media, 178-180; transaction costs, 59-60, 106-116. See also market-based information producers
+
+% ,{[pg 501]},
+
+inefficiency of information regulation, 36-41, 49-50, 106-116, 461-462; capacity reallocation, 114-116; property protections, 319; wireless communications policy, 154
+
+inertness, political, 197, 204-210
+
+influence exaction, 156, 158-159
+
+information, defined, 31, 313-314
+
+information, perfect, 203
+
+information appropriation strategies, 49
+
+information as nonrival, 36-39
+
+information economy, 2-34; democracy and liberalism, 7-16; effects on public sphere, 219-233; emergence of, 2-7; institutional ecology, 22-28; justice, liberal theories of, 303-308; methodological choices, 16-22
+
+information-embedded goods, 311-312
+
+information-embedded tools, 312
+
+information flow, 12; controlling with policy routers, 147-149, 156, 197-198, 397; large-audience programming, 197, 204-210, 259-260; limited by mass media, 197-199
+
+information industries, 315-317
+
+information laws. See policy
+
+information licensing and ownership. See also proprietary rights
+
+information overload and Babel objection, 10, 12, 169-174, 233-235, 237-241, 465-466
+
+information production, 464; feasibility conditions for social production, 99-106; networked public sphere capacity for, 225-232; nonrivalry, 36-39, 85-86; physical constraints on, 3-4; strategies of, 41-48. See also distribution of information; peer production
+
+information production, market-based: cultural change, transparency of, 290-293; mass popular culture, 295-296; relationship with social producers, 122-127; transaction costs, 59-60, 106-116; universities as, 347-348; without property protections, 39-41, 45-48
+
+information production, models of. See traditional model of communication
+
+information production, nonmarket-based. See entries at nonmarket production
+
+information production capital, 6-7, 32; control of, 99; cost minimization and benefit maximization, 42; fixed and initial costs, 110; production costs as limiting, 164-165; transaction costs, 59-60. See also commons; social capital
+
+information production economics, 35-58; current production strategies, 41-48; exclusive rights, 49-50, 56-58; production over computer networks, 50-56
+
+information production efficiency. See efficiency of information regulation
+
+information production inputs, 68-75; existing information, 37-39, 52; immersive entertainment, 74-75; individual action as modality, 119-120; large-audience programming, 197, 204-210, 259-260; limited by mass media, 197-199; NASA Clickworkers project, 69-70; pricing, 109-113; propaganda, 149-150, 220-225, 297-300; systematically blocked by policy routers, 147-149, 156, 197-198, 397; universal intake, 182, 197-199; Wikipedia project, 70-74. See also collaborative authorship
+
+information sharing. See sharing
+
+information storage capacity, 86; transaction costs, 112-115
+
+infrastructure ownership, 155
+
+initial costs, 110
+
+% ,{[pg 502]},
+
+injustice. See justice and human development
+
+Innis, Harold, 17
+
+innovation: agricultural, commons-based, 329-344; human development, 14; software patents and, 437-439; wireless communications policy, 154
+
+innovation economics, 35-58; current production strategies, 41-48; exclusive rights, 49-50, 56-58; production over computer networks, 50-56
+
+innovation efficiency. See efficiency of information regulation
+
+inputs to production, 68-75; existing information, 37-39, 52; immersive entertainment, 74-75; individual action as modality, 119-120; large-audience programming, 197, 204-210, 259-260; limited by mass media, 197-199; NASA Clickworkers project, 69-70; pricing, 109-113; propaganda, 149-150, 220-225, 297-300; systematically blocked by policy routers, 147-149, 156, 197-198, 397; universal intake, 182, 197-199; Wikipedia project, 70-74. See also collaborative authorship
+
+instant messaging, 365
+
+Institute for One World Health, 350
+
+institutional ecology of digital environment, 4, 9, 22-28, 59-60, 383-459, 470-471; autonomy and, 164-166; barriers to justice, 302; emerging role of mass media, 178-180, 185-186, 198-199; enclosure movement, 380-382; mapping, framework for, 389-396; medical innovation and, 345-346; path dependency, 386-389; relationship with social producers, 122-127; security-related policy, 73-74, 396, 457-459; shift away from, 10-13; stakes of information policy, 460-473; structure of mass media, 178-180; transaction costs, 59-60, 106-116. See also market-based information producers
+
+intellectual property. See proprietary rights
+
+interaction, social. See social relations and norms
+
+interest communities. See clusters in network topology
+
+interlinking. See topology, network
+
+International HapMap Project, 351
+
+international harmonization, 453-455
+
+Internet: authoritarian control over, 266-271; centralization of, 235, 237-241; coexisting with human community, 375-377; democratizing effect of, 213-214, 233-237; globality of, effects on policy, 396; linking as trespass, 451-453; plasticity of culture, 294-297, 299; as platform for human connection, 369-372; power law distribution of site connections, 241-261; strongly connected Web sites, 249-250; technologies of, 215-219; transparency of culture, 285-294; Web addresses, 429-434; Web browsers, 434-436
+
+Internet Explorer browser, 434-436
+
+Internet usage patterns. See social relations and norms
+
+intrinsic motivations, 94-99. See also motivation to produce
+
+Introna, Lucas, 261
+
+isolation, 359-361
+
+Jackson, Jesse, 264
+
+The Jedi Saga, 134
+
+Jefferson, Richard, 342
+
+Joe Einstein model, 43, 47-48, 315
+
+Johansen, Jon, 417
+
+journalism, undermined by commercialism, 197, 204-210
+
+judgment of relevance. See relevance filtering
+
+justice and human development, 13-15, 301-355, 467-468; commons-based research, 317-328; commons-based strategies, 308-311; liberal theories of, 303-308
+
+% ,{[pg 503]},
+
+Kant, Immanuel, 143
+
+karma (Slashdot), 78
+
+KaZaa, 421
+
+KDKA Pittsburgh, 190, 191
+
+Keillor, Garrison, 243
+
+Kick, Russ, 103, 259-260
+
+Know-How model, 45-46
+
+knowledge, defined, 314-315
+
+Koren, Niva Elkin, 15
+
+Kottke, Jason, 252
+
+Kraut, Robert, 360, 363
+
+Kumar, Ravi, 253
+
+Kymlicka, Will, 281
+
+laboratories, peer-produced, 352-353
+
+Lakhani, Karim, 106
+
+Lange, David, 25
+
+large-audience programming, 197, 204-210; susceptibility of networked public sphere, 259-260
+
+large-circulation presses, 187-188
+
+large-grained goods, 113-114
+
+large-scale peer cooperation. See peer production
+
+last mile (wireless), 402-405
+
+laws. See policy
+
+layers of institutional ecology, 384, 389-396, 469-470; content layer, 384, 392, 395, 439-457, 469-470; physical layer, 392, 469-470. See also logical layer of institutional ecology
+
+learning networks, 43, 46, 112
+
+Lemley, Mark, 399, 445
+
+Lerner, Josh, 39, 106
+
+Lessig, Lawrence (Larry), 15, 25, 239, 276, 278, 385, 399
+
+liberal political theory, 19-20; cultural freedom, 278-285, 297
+
+liberal societies, 7-16; autonomy, 8-9; critical culture and social relations, 15-16; design of public sphere, 180-185; justice and human development, 13-15; public sphere, shift from mass media, 10-13; theories of justice, 303-308
+
+licensing: agricultural biotechnologies, 338-344; GPL (General Public License), 63-65, 104; radio, 191-194; shrink-wrap (contractual enclosure), 444-446. See also proprietary rights
+
+limited-access common resources, 61
+
+limited intake of mass media, 197-199
+
+limited sharing networks, 43, 48
+
+Lin, Nan, 95
+
+Linden Labs. See Second Life game environment
+
+linking on the Web, 218; power law distribution of site connections, 241-261; as trespass, 451-453
+
+Linux operating system, 65-66
+
+Litman, Jessica, 25, 33, 278, 439
+
+local clusters in network topology, 12-13. See also clusters in network topology
+
+logical layer of institutional ecology, 384, 392, 412-439, 469; database protection, 449-451; DMCA (Digital Millennium Copyright Act), 380, 413-418; domain name system, 429-434; free software policies, 436-437; international harmonization, 453-455; peer-to-peer networks, 83-86, 418-428, 457; recent changes, 395; trademark dilution, 290, 446-448; Web browsers, 434-436
+
+loneliness, 359-361
+
+loose affiliations, 9, 357, 362, 366-369
+
+Los Alamos model, 43, 48
+
+Lott, Trent, 258, 263-264
+
+lowest-common-denominator programming, 197, 204-210, 259-260
+
+Lucas, George, 134
+
+luck, justice and, 303-304
+
+lumpy goods, 113-115
+
+Luther, Martin, 27
+
+% ,{[pg 504]},
+
+machinery. See computers
+
+mailing lists (electronic), 215
+
+management, changing relationships of, 124-126
+
+Mangabeira Unger, Roberto, 138
+
+manipulating perceptions of others, 147-152, 170; influence exaction, 156, 158-159; with propaganda, 149-150, 220-225, 297-300
+
+mapping utterances. See relevance filtering
+
+Marconi, 191
+
+market-based information producers: cultural change, transparency of, 290-293; mass popular culture, 295-296; relationship with social producers, 122-127; transaction costs, 59-60, 106-116; universities as, 347-348; without property protections, 39-41, 45-48
+
+market reports, access to, 314
+
+market transactions, 107-109
+
+Marshall, Josh, 221, 222, 246, 263
+
+Marx, Karl, 143, 279
+
+mass media: basic critiques of, 196-211; corrective effects of network environment, 220-225; as platform for public sphere, 178-180, 185-186, 198-199; structure of, 178-180. See also traditional model of communication
+
+mass media, political freedom and, 176-211; commercial platform for public sphere, 178-180, 185-186, 198-199; criticisms, 196-211; design characteristics of liberal public sphere, 180-185
+
+massive multiplayer games, 74, 135-136
+
+maximizing viewers as business necessity. See large-audience programming
+
+McChesney, Robert, 196
+
+McHenry, Robert, 71
+
+McLuhan, Marshall, 16, 17
+
+McVeigh, Timothy (sailor), 367
+
+Médecins Sans Frontières, 347
+
+media concentration, 157, 235, 237-241; corrective effects of network environment, 220-225. See also power of mass media owners
+
+medicines, commons-based research on, 344-353
+
+medium-grained goods, 113
+
+medium of exchange, 109-113
+
+Meetup.com site, 368
+
+The Memory Hole, 103
+
+metamoderation (Slashdot), 79
+
+methodological individualism, 18
+
+Mickey model, 42-44
+
+Microsoft Corporation: browser wars, 434-436; sidewalk.com, 452
+
+Milgram, Stanley, 252
+
+misfortune, justice and, 303-304
+
+MIT's Open Courseware Initiative, 314-315, 327
+
+MMOGs (massive multiplayer online games), 74, 135-136
+
+mobile phones, 219, 367; open wireless networks, 402-405
+
+moderation of content. See accreditation
+
+modularity, 100-103
+
+Moglen, Eben, 5, 55, 426
+
+monetary constraints on information production, 6-7, 32; control of, 99; cost minimization and benefit maximization, 42; fixed and initial costs, 110; production costs as limiting, 164-165; transaction costs, 59-60. See also commons; social capital
+
+money: centralization of communications, 258-260; cost minimization and benefit maximization, 42; cost of production as limiting, 164-165; crispness of currency exchange, 109-113; as demotivator, 94-96; as dominant factor, 234. See also capital for production
+
+monitoring, authoritarian, 236
+
+monopoly: authoritarian control, 266-271; breadth of programming under, 207; medical research and innovation, 345-346; radio broadcasting, 189, 195; wired environment as, 152-153
+
+% ,{[pg 505]},
+
+Moore, Michael, 200
+
+motivation to produce, 6, 92-99; crowding out theory, 115; cultural context of, 97; granularity of participation and, 100-102, 113-114
+
+Moulitsas, Markos, 221
+
+movie industry. See entertainment industry
+
+MP3.com, 419, 422-423
+
+MSF (Médecins Sans Frontières), 347
+
+Mumford, Lewis, 16
+
+municipal broadband initiatives, 405-408
+
+Murdoch, Rupert, 203
+
+music industry, 50-51, 425-427; digital sampling, 443-444; DMCA violations, 416; peer-to-peer networks and, 84
+
+MyDD.com site, 221
+
+Napster, 419. See also peer-to-peer networks
+
+NASA Clickworkers, 69-70
+
+NBC (National Broadcasting Company), 195
+
+Negroponte, Nicholas, 238
+
+neighborhood relations, strengthening of, 357, 362-366
+
+Nelson, W. R., 205
+
+Netanel, Neil, 236, 261-262
+
+Netscape and browser wars, 435
+
+network topology, 172-173; autonomy and, 146-161; emergent ordered structure, 253-256; linking as trespass, 451-453; moderately linked sites, 251-252; peer-to-peer networks, 83-86, 418-428, 457; power law distribution of site connections, 241-261; quoting on Web, 218; repeater networks, 88-89; strongly connected Web sites, 249-250. See also clusters in network topology
+
+networked environment policy. See policy
+
+networked information economy, 2-34; democracy and liberalism, 7-16; effects on public sphere, 219-233; emergence of, 2-7; institutional ecology, 22-28; justice, liberal theories of, 303-308; methodological choices, 16-22
+
+networked public sphere, 10-12, 212-271, 465; authoritarian control, working around, 266-271; basic communication tools, 215-219; critiques that Internet democratizes, 233-237; defined, 177-178; Diebold Election Systems case study, 225-232, 262, 389-390; future of, 271-272; Internet as concentrated vs. chaotic, 237-241; liberal, design characteristics of, 180-185; loose affiliations, 9, 357, 362, 366-369; mass-media platform for, 178-180, 185-186, 198-199; topology and connectivity of, 241-261; transparency of Internet culture, 285-294; watchdog functionality, 236, 261-266. See also social relations and norms
+
+networked society, 376
+
+news (as data), 314
+
+newspapers, 40, 186-188; market concentration, 202
+
+Newton, Isaac, 37
+
+niche markets, 56
+
+NIH (National Institutes of Health), 324
+
+Nissenbaum, Helen, 261
+
+No Electronic Theft (NET) Act, 441-442
+
+Noam, Eli, 201-202, 238-239
+
+nonexclusion-market production strategies, 39-41, 45-48
+
+nonmarket information producers, 4-5, 39-40; conditions for production, 99-106; cultural change, transparency of, 290-293; emergence of social production, 116-122; relationship with market-based businesses, 122-127; role of, 18-19; strategies for information production, 43, 47-48; universities as, 347-348
+
+% ,{[pg 506]},
+
+nonmarket production, economics of, 91-127; emergence in digital networks, 116-122; feasibility conditions, 99-106; transaction costs, 59-60, 106-116. See also motivation to produce
+
+nonmarket strategies, effectiveness of, 54-56
+
+nonmonetary motivations. See motivation to produce
+
+nonprofit medical research, 350
+
+nonrival goods, 36-39; peer-to-peer networks sharing, 85-86
+
+norms (social), 72-74, 356-377; enforced norms with software, 372-375; fragmentation of communication, 15, 234-235, 238, 256, 465-466; Internet and human coexistence, 375-377; Internet as platform for, 369-372; loose affiliations, 9, 357, 362, 366-369; motivation within, 92-94; property, commons, and autonomy, 143-146; Slashdot mechanisms for, 78; software for, emergence of, 372-375; technology-defined structure, 29-34; thickening of preexisting relations, 357; transaction costs, 59-60, 106-116; working with social expectations, 366-369
+
+Nozick, Robert, 304
+
+NSI (Network Solutions, Inc.), 430
+
+number of behavioral options, 150-152, 170
+
+OAIster protocol, 326
+
+obscurity of some Web sites, 246, 251-252
+
+ODP (Open Directory Project), 76
+
+older Web sites, obscurity of, 246
+
+"on the shoulders of giants", 37-39
+
+One World Health, 350
+
+Open Archives Initiative, 326
+
+open commons, 61
+
+Open Courseware Initiative (MIT), 314-315, 327
+
+Open Directory Project (ODP), 76
+
+open-source software, 5, 46, 63-67; commons-based welfare development, 320-323; as competition to market-based business, 123; human development and justice, 14; policy on, 436-437; project modularity and granularity, 102; security considerations, 457-458
+
+open wireless networks, 402-405; municipal broadband initiatives, 405-408; security, 457
+
+opinion, public: iconic representations of, 205, 209-210; synthesis of, 184, 199. See also accreditation; relevance filtering
+
+opportunities created by social production, 123-126
+
+options, behavioral, 150-152, 170
+
+order, emergent. See clusters in network topology
+
+organization structure, 100-106; granularity, 100-102, 113-114; justice and, 303-304; modularity, 100-103
+
+organizational clustering, 248-249
+
+organizations as persons, 19-20
+
+organized production, traditional. See traditional model of communication
+
+OSTG (Open Source Technology Group), 77
+
+Ostrom, Elinor, 144
+
+owners of mass media, power of, 197, 199-204; corrective effects of network environment, 220-225
+
+ownership of information. See also proprietary rights
+
+p2p networks, 83-86, 418-428; security considerations, 457
+
+% ,{[pg 507]},
+
+packet filtering. See blocked access
+
+Pantic, Drazen, 219
+
+Pareto, Vilfredo, 243
+
+participatory culture, 297-300. See also culture
+
+passive vs. active consumers, 126-127, 135
+
+patents. See proprietary rights
+
+path dependency, 388-389
+
+patterns of Internet use. See social relations and norms
+
+peer production, 5, 33, 59-90, 462-464; drug research and development, 351; electronic voting machines (case study), 225-232; feasibility conditions for social production, 99-106; loose affiliations, 9, 357, 362, 366-369; maintenance of cooperation, 104; as platform for human connection, 374-375; relationship with market-based businesses, 122-127; sustainability of, 106-116; watchdog functionality, 236, 261-266. See also sharing
+
+peer production, order emerging from. See accreditation; relevance filtering
+
+peer review of scientific publications, 323-325
+
+peer-to-peer networks, 83-86, 418-428; security considerations, 457
+
+Pennock, David, 251
+
+perceptions of others, shaping, 147-152, 170; influence exaction, 156, 158-159; with propaganda, 149-150, 220-225, 297-300
+
+perfect information, 203
+
+performance as means of communication, 205
+
+permission to communicate, 155
+
+permissions. See proprietary rights
+
+personal computers, 105; infrastructure ownership, 155; policy on physical devices, 408-412; as shareable, lumpy goods, 113-115
+
+Pew studies, 364-365, 423
+
+pharmaceuticals, commons-based research on, 344-353
+
+Philadelphia, wireless initiatives in, 406-408
+
+physical capital for production, 6-7, 32, 384, 396-412; control of, 99; cost minimization and benefit maximization, 42; fixed and initial costs, 110; production costs as limiting, 164-165; transaction costs, 59-60. See also commons; social capital
+
+physical constraints on information production, 3-4, 24-25. See also capital for production
+
+physical contact, diminishment of, 360-361
+
+physical layer of institutional ecology, 392, 469-470; recent changes, 395
+
+physical machinery and computers, 105; infrastructure ownership, 155; policy on physical devices, 408-412; as shareable, lumpy goods, 113-115
+
+Piore, Michael, 138
+
+PIPRA (Public Intellectual Property Resource for Agriculture), 338-341
+
+planned modularization, 101-102
+
+plasticity of Internet culture, 294-297, 299
+
+PLoS (Public Library of Science), 324
+
+polarization, 235, 256-258
+
+policy, 26, 383-459; authoritarian control, 266-271; commons-based research, 317-328; Diebold Election Systems case study, 225-232, 262, 389-390; enclosure movement, 380-382; global Internet and, 396; independence from government control, 184, 197-198; international harmonization, 453-455; liberal theories of justice, 305-307; mapping institutional ecology, 389-396; participatory culture, 297-300; path dependency, 386-389; pharmaceutical innovation, 345-346; property-based, 159-160; proprietary rights vs. justice, 302-303; security-related, 73-74, 396, 457-459; stakes of, 460-473; wireless spectrum rights, 87. See also privatization; proprietary rights
+
+% ,{[pg 508]},
+
+policy, global. See global development
+
+policy, social. See social relations and norms
+
+policy efficiency. See efficiency of information regulation
+
+policy layers, 384, 389-396, 469-470; content layer, 384, 392, 395, 439-457, 469-470; physical layer, 392, 469-470. See also logical layer of institutional ecology
+
+policy routers, 147-149, 156, 197-198, 397; influence exaction, 156, 158-159
+
+political concern, undermined by commercialism, 197, 204-210
+
+political freedom, mass media and, 176-211; commercial platform for public sphere, 178-180, 185-186, 198-199; criticisms, 196-211; design characteristics of liberal public sphere, 180-185
+
+political freedom, public sphere and, 212-271; authoritarian control, working around, 266-271; basic communication tools, 215-219; critiques that Internet democratizes, 233-237; future of, 271-272; Internet as concentrated vs. chaotic, 237-241; topology and connectivity of, 241-261; watchdog functionality, 236, 261-266. See also networked information economy
+
+politics. See policy
+
+Pool, Ithiel de Sola, 388
+
+popular culture, commercial production of, 295-296
+
+Post, Robert, 140
+
+Postel, Jon, 430
+
+Postman, Neil, 186
+
+poverty. See justice and human development; welfare
+
+Powell, Walter, 112
+
+power law distribution of Web connections, 241-261; strongly connected Web sites, 249-250; uniform component of moderate connectivity, 251-252
+
+power of mass media owners, 197, 199-204; corrective effects of network environment, 220-225
+
+preexisting relations, thickening of, 357
+
+press, commercial, 186-188, 202
+
+price compensation, as demotivator, 94-96
+
+pricing, 109-113
+
+Pringle, Peter, 335
+
+print media, commercial, 186-188
+
+private communications, 177
+
+privatization: agricultural biotechnologies, 335-336; of communications and information systems, 152-154, 159-160
+
+/{ProCD v. Zeidenberg}/, 445
+
+processing capacity, 81-82, 86
+
+processors. See computers
+
+producer surplus, 157
+
+production capital, 6-7, 32; control of, 99; cost minimization and benefit maximization, 42; fixed and initial costs, 110; production costs as limiting, 164-165; transaction costs, 59-60. See also commons; social capital
+
+production inputs, 68-75; existing information, 37-39, 52; immersive entertainment, 74-75; individual action as modality, 119-120; large-audience programming, 197, 204-210, 259-260; limited by mass media, 197-199; NASA Clickworkers project, 69-70; pricing, 109-113; propaganda, 149-150, 220-225, 297-300; systematically blocked by policy routers, 147-149, 156, 197-198, 397; universal intake, 182, 197-199; Wikipedia project, 70-74. See also collaborative authorship
+
+% ,{[pg 509]},
+
+production of information, 464; feasibility conditions for social production, 99-106; networked public sphere capacity for, 225-232; nonrivalry, 36-39, 85-86; physical constraints on, 3-4; strategies of, 41-48. See also distribution of information; peer production
+
+production of information, efficiency of. See efficiency of information regulation
+
+production of information, industrial model of. See traditional model of communication
+
+production of information, nonmarket. See nonmarket information producers
+
+professionalism, mass media, 198
+
+Project Gutenberg, 80-81, 136
+
+propaganda, 149-150; manipulating culture, 297-300; Stolen Honor documentary, 220-225
+
+property ownership, 23-27, 129-132; autonomy and, 143-146; control over, as asymmetric, 60-61; effects of exclusive rights, 49-50; trade policy, 319. See also commons; proprietary rights
+
+property ownership, efficiency of. See efficiency of information regulation
+
+proprietary rights, 22-28, 56-58; agricultural biotechnologies, 335-336, 338-344; commons-based research, 317-328; contractual enclosure, 444-446; copyright issues, 439-444; cultural environment and, 277-278; database protection, 449-451; Digital Millennium Copyright Act (DMCA), 380, 413-418; domain names, 431-433; dominance of, overstated, 460-461; effects of, 49-50; enclosure movement, 380-382; global welfare and research, 317-320, 354-355; information-embedded goods and tools, 311-312; infrastructure ownership, 155; international harmonization, 453-455; justice vs., 302-303; medical and pharmaceutical innovation, 345-346; models of, 42-45; openness of personal computers, 409; peer-to-peer networks and, 84-85; radio patents, 191, 194; scientific publication, 323-325; software patenting, 437-439; strategies for information production, 41-48; trademark dilution, 290, 446-448; trespass to chattels, 451-453; university alliances, 338-341; wireless networks, 87, 153-154. See also access
+
+proprietary rights, inefficiency of, 36-41, 49-50, 106-116, 461-462; capacity reallocation, 114-116; property protections, 319; wireless communications policy, 154
+
+psychological motivation. See motivation to produce
+
+public-domain data, 313-314
+
+public goods vs. nonrival goods, 36-39
+
+Public Library of Science (PLoS), 324
+
+public opinion: iconic representations of, 205, 209-210; synthesis of, 184, 199. See also accreditation; relevance filtering
+
+public sphere, 10-12, 212-271, 465; authoritarian control, working around, 266-271; basic communication tools, 215-219; critiques that Internet democratizes, 233-237; defined, 177-178; Diebold Election Systems case study, 225-232, 262, 389-390; future of, 271-272; Internet as concentrated vs. chaotic, 237-241; liberal, design characteristics of, 180-185; loose affiliations, 9, 357, 362, 366-369; mass-media platform for, 178-180, 185-186, 198-199; topology and connectivity of, 241-261; transparency of Internet culture, 285-294; watchdog functionality, 236, 261-266
+
+public sphere economy. See networked information economy
+
+% ,{[pg 510]},
+
+public sphere relationships. See social relations and norms
+
+publication, scientific, 313, 323-328
+
+Putnam, Robert, 362
+
+quality of information. See accreditation; high-production value content; relevance filtering
+
+quoting on Web, 218
+
+radio, 186-196, 387-388, 402-403; market concentration, 202; patents, 191, 194; as platform for human connection, 369; as public sphere platform, 190. See also wireless communications
+
+Radio Act of 1927, 196
+
+Radio B92, 266
+
+radio telephony, 194
+
+raw data, 313-314; database protection, 449-451
+
+raw materials of information. See inputs to production
+
+Rawls, John, 184, 279, 303-304, 306
+
+Raymond, Eric, 66, 137, 259
+
+Raz, Joseph, 140
+
+RCA (Radio Corporation of America), 191, 195
+
+RCA strategy, 43, 44
+
+reallocating excess capacity, 81-89, 114-115, 157, 351-352
+
+recognition. See intrinsic motivations
+
+redistribution theory, 304
+
+referencing on the Web, 218; linking as trespass, 451-453; power law distribution of Web site connections, 241-261
+
+regional clusters in network topology, 12-13. See also clusters in network topology
+
+regions of interest. See clusters in network topology
+
+regulated commons, 61
+
+regulating information, efficiency of, 36-41, 49-50, 106-116, 461-462; capacity reallocation, 114-116; property protections, 319; wireless communications policy, 154
+
+regulation. See policy
+
+regulation by social norms, 72-74, 356-377; enforced norms with software, 372-375; fragmentation of communication, 15, 234-235, 238, 256, 465-466; Internet and human coexistence, 375-377; Internet as platform for, 369-372; loose affiliations, 9, 357, 362, 366-369; motivation within, 92-94; property, commons, and autonomy, 143-146; Slashdot mechanisms for, 78; software for, emergence of, 372-375; technology-defined structure, 29-34; thickening of preexisting relations, 357; transaction costs, 59-60, 106-116; working with social expectations, 366-369
+
+Reichman, Jerome, 449
+
+relationships, social. See social relations and norms
+
+relevance filtering, 68, 75-80, 169-174, 183, 258-260; Amazon, 75; by authoritarian countries, 236; capacity for, by mass media, 199; concentration of mass-media power, 157, 220-225, 235, 237-241; as distributed system, 171-172; Google, 76; Open Directory Project (ODP), 76; power of mass media owners, 197, 199-204, 220-225; as public good, 12; Slashdot, 76-80, 104; watchdog functionality, 236, 261-266
+
+relevance filtering by information providers. See blocked access
+
+repeater networks, 88-89
+
+research, commons-based, 317-328, 354-355; food and agricultural innovation, 328-344; medical and pharmaceutical innovation, 344-353
+
+resource sharing. See capacity, sharing
+
+% ,{[pg 511]},
+
+resources, common. See commons
+
+responsive communications, 199
+
+reuse of information, 37-39, 52
+
+reward. See motivation to produce
+
+Reynolds, Glenn, 264
+
+Rheingold, Howard, 219, 265, 358-359
+
+RIAA (Recording Industry Association of America), 416
+
+right to read, 439-440
+
+rights. See proprietary rights
+
+Romantic Maximizer model, 42-43
+
+Rose, Carol, 61
+
+routers, controlling information flow with, 147-149, 156, 197-198, 397; influence exaction, 156, 158-159
+
+Rubin, Aviel, 228, 229
+
+Sabel, Charles, 62, 111, 138
+
+Saltzer, Jerome, 399
+
+sampling, digital (music), 443-444
+
+Samuelson, Pamela, 25, 414, 488
+
+Sarnoff, David, 195
+
+SBG (Sinclair Broadcast Group), 199-200, 220-225
+
+Scholarly Lawyers model, 43, 45
+
+scientific data, access to, 313-314
+
+scientific publication, 313; commons-based welfare development, 323-328
+
+scope of loose relationships, 9, 357
+
+Scott, William, 353
+
+Second Life game environment, 74-75, 136
+
+security of context, 143-146
+
+security-related policy, 396, 457-459; vandalism on Wikipedia, 73-74
+
+Security Systems Standards and Certification Act, 409
+
+self-archiving of scientific publications, 325-326
+
+self-determinism, extrinsic motivation and, 94
+
+self-direction. See autonomy
+
+self-esteem, extrinsic motivation and, 94
+
+self-organization. See clusters in network topology
+
+self-reflection, 15-16, 293-294; Open Directory Project, 76; self-identification as transaction cost, 112; Wikipedia project, 70-74
+
+services, software, 322-323
+
+SETI@home project, 81-83
+
+shaping perceptions of others, 147-152, 170; influence exaction, 156, 158-159; with propaganda, 149-150, 220-225, 297-300
+
+Shapiro, Carl, 312
+
+shareable goods, 113-115
+
+sharing, 59-90, 81-89; emergence of social production, 116-122; excess capacity, 81-89, 114-115, 157, 351-352; limited sharing networks, 43, 48; open wireless networks, 402-405; radio capacity, 402-403; technology-dependence of, 120; university patents, 347-350
+
+sharing peer-to-peer. See peer-to-peer networks
+
+Shirky, Clay, 173, 252, 368, 373
+
+"shoulders of giants", 37-39
+
+shrink-wrap licenses, 444-446
+
+sidewalk.com, 452
+
+Simon, Herbert, 243
+
+Sinclair Broadcast Group (SBG), 199-200, 220-225
+
+Skype utility, 86, 421
+
+Slashdot, 76-80, 104
+
+small-worlds effect, 252-253
+
+SMS (short message service). See text messaging
+
+social action, 22
+
+social capital, 95-96, 361-369; networked society, 366-369; thickening of preexisting relations, 363-366
+
+social clustering, 248-249
+
+% ,{[pg 512]},
+
+social-democratic theories of justice, 308-311
+
+social motivation. See intrinsic motivations
+
+social production, relationship with market-based businesses, 122-127
+
+social relations and norms, 72-74, 356-377; enforced norms with software, 372-375; fragmentation of communication, 15, 234-235, 238, 256, 465-466; Internet and human coexistence, 375-377; Internet as platform for, 369-372; loose affiliations, 9, 357, 362, 366-369; motivation within, 92-94; property, commons, and autonomy, 143-146; Slashdot mechanisms for, 78; software for, emergence of, 372-375; technology-defined structure, 29-34; thickening of preexisting relations, 357; transaction costs, 59-60, 106-116; working with social expectations, 366-369
+
+social software, 372-375
+
+social structure, defined by technology, 29-34
+
+societal culture. See culture
+
+software: commons-based welfare development, 320-323; patents for, 437-439; social, 372-375
+
+software, open-source, 5, 46, 63-67; commons-based welfare development, 320-323; as competition to market-based business, 123; human development and justice, 14; policy on, 436-437; project modularity and granularity, 102; security considerations, 457-458
+
+Solum, Lawrence, 267
+
+Sonny Bono Copyright Term Extension Act of 1998, 442-443, 454
+
+specificity of price, 109-113
+
+spectrum property rights, 87. See also proprietary rights
+
+spiders. See trespass to chattels
+
+Spielberg, Steven, 416
+
+stakes of information policy, 460-473
+
+Stallman, Richard, 5, 64-66
+
+standardizing creativity, 109-113
+
+Starr, Paul, 17, 388
+
+state, role of, 20-22
+
+static inefficiency. See efficiency of information regulation
+
+static Web pages, 216
+
+Steiner, Peter, 205
+
+Stolen Honor documentary, 220-225
+
+storage capacity, 86; transaction costs, 112-115
+
+strategies for information production, 41-48; transaction costs, 59-60, 106-116
+
+Strogatz, Steven, 252
+
+strongly connected Web sites, 249-250
+
+structure of mass media, 178-180
+
+structure of network, 172-173; autonomy and, 146-161; emergent ordered structure, 253-256; linking as trespass, 451-453; moderately linked sites, 251-252; peer-to-peer networks, 83-86, 418-428, 457; power law distribution of Web site connections, 241-261; quoting on Web, 218; repeater networks, 88-89; strongly connected Web sites, 249-250. See also clusters in network topology
+
+structure of networks. See network topology
+
+structure of organizations, 100-106; granularity, 100-102, 113-114; justice and, 303-304; modularity, 100-103
+
+structured production, 100-106; granularity, 100-102, 113-114; maintenance of cooperation, 104; modularity, 100-103
+
+Sunstein, Cass, 234
+
+supercomputers, 81-82
+
+supplantation of real-world interaction, 357, 362-366
+
+% ,{[pg 513]},
+
+supply-side effects of information production, 45-46
+
+sustainability of peer production, 106-116
+
+symmetric commons, 61-62
+
+Syngenta, 337
+
+synthesis of public opinion, 184, 199. See also accreditation
+
+TalkingPoints site, 221
+
+taste, changes in, 126
+
+Taylor, Fredrick, 138
+
+teaching materials, 326
+
+technology, 215-219; agricultural, 335-344; costs of, 462; dependence on, for sharing, 120; effectiveness of nonmarket strategies, 54-55; enabling social sharing as production modality, 120-122; role of, 16-18; social software, 372-375; social structure defined by, 29-34
+
+telephone, as platform for human connection, 371
+
+television, 186; culture of, 135; Internet use vs., 360, 364; large-audience programming, 197, 204-210, 259-260; market concentration, 202
+
+tendrils (Web topology), 249-250
+
+term of copyright, 442-443, 454
+
+text distribution as platform for human connection, 369
+
+text messaging, 219, 365, 367
+
+textbooks, 326
+
+thickening of preexisting relations, 357, 362-366
+
+thinness of online relations, 360
+
+Thurmond, Strom, 263
+
+Ticketmaster, 452
+
+Tirole, Jean, 94, 106
+
+Titmuss, Richard, 93
+
+de Tocqueville, Alexis, 187
+
+toll broadcasting, 194-195
+
+too much information. See Babel objection; relevance filtering
+
+tools, information-embedded, 312
+
+Toomey, Jenny, 123
+
+topical clustering, 248-249
+
+topology, network, 172-173; autonomy and, 146-161; emergent ordered structure, 253-256; linking as trespass, 451-453; moderately linked sites, 251-252; peer-to-peer networks, 83-86, 418-428, 457; power law distribution of Web site connections, 241-261; quoting on Web, 218; repeater networks, 88-89; strongly connected Web sites, 249-250. See also clusters in network topology
+
+Torvalds, Linus, 65-66, 104-105, 136-137
+
+trade policy, 317-320, 354-355, 454
+
+trademark dilution, 290, 446-448. See also proprietary rights
+
+traditional model of communication, 4, 9, 22-28, 59-60, 383-459, 470-471; autonomy and, 164-166; barriers to justice, 302; emerging role of mass media, 178-180, 185-186, 198-199; enclosure movement, 380-382; mapping, framework for, 389-396; medical innovation and, 345-346; path dependency, 386-389; relationship with social producers, 122-127; security-related policy, 73-74, 396, 457-459; shift away from, 10-13; stakes of information policy, 460-473; structure of mass media, 178-180; transaction costs, 59-60, 106-116. See also market-based information producers
+
+transaction costs, 59-60, 106-116
+
+transfer of knowledge, 314-315
+
+transparency of free software, 322
+
+transparency of Internet culture, 285-294
+
+transport channel policy, 397-408; broadband regulation, 399-402; municipal broadband initiatives, 405-408; open wireless networks, 402-405
+
+trespass to chattels, 451-453
+
+troll filters (Slashdot), 78
+
+trusted systems, computers as, 409-410
+
+tubes (Web topology), 249-250
+
+UCC (Uniform Commercial Code), 445
+
+UCITA (Uniform Computer Information Transactions Act), 444-446
+
+Uhlir, Paul, 449
+
+universal intake, 182, 197-199
+
+university alliances, 338-341, 347-350
+
+university-owned radio, 192
+
+unregulated commons, 61
+
+use permissions. See proprietary rights
+
+users as consumers, 126-127
+
+uttering content. See inputs to production
+
+vacuity of online relations, 360
+
+Vaidhyanathan, Siva, 278, 488
+
+value-added distribution. See distribution of information; relevance filtering
+
+value of online contact, 360
+
+vandalism on Wikipedia, 73-74
+
+variety of behavioral options, 150-152, 170
+
+Varmus, Harold, 313
+
+virtual communities, 348-361. See also social relations and norms
+
+visibility of mass media, 198
+
+volunteer activity. See nonmarket information producers; peer production
+
+volunteer computation resources. See capacity, sharing
+
+von Hippel, Eric, 5, 47, 106, 127
+
+voting, electronic, 225-232, 262, 389-390
+
+vouching for others, network of, 368
+
+Walzer, Michael, 281
+
+% ,{[pg 514]},
+
+watchdog functionality, 236, 261-266
+
+Watts, Duncan, 252
+
+weak ties of online relations, 360, 363
+
+Web, 216, 218; backbone sites, 249-250, 258-260; browser wars, 434-436; domain name addresses, 429-434; linking as trespass, 451-453; power law distribution of Web site connections, 241-261; quoting from other sites, 218. See also Internet
+
+Web topology. See network topology
+
+Weber, Steve, 104-105
+
+welfare, 130-131; commons-based research, 317-328; commons-based strategies, 308-311; digital divide, 236-237; freedom from constraint, 157-158; information-based advantages, 311-315; liberal theories of justice, 303-308. See also justice and human development
+
+well-being, 19
+
+WELL (Whole Earth `Lectronic Link), 358
+
+Wellman, Barry, 16, 17, 362, 363, 366
+
+Westinghouse, 191, 195
+
+wet-lab science, peer production of, 352-353
+
+WiFi. See wireless communications
+
+Wikibooks project, 101
+
+Wikipedia project, 70-74, 104; Barbie doll content, 287-289, 292
+
+Wikis as social software, 372-375
+
+Williamson, Oliver, 59
+
+Winner, Langdon, 17
+
+wired communications: market structure of, 152-153; policy on, 399-402. See also broadband networks
+
+wireless communications, 87-89; municipal broadband initiatives, 405-408; open networks, 402-405; privatization vs. commons, 152-154. See also radio
+
+World Wide Web, 216, 218; backbone sites, 249-250, 258-260; browser wars, 434-436; domain name addresses, 429-434; linking as trespass, 451-453; power law distribution of Web site connections, 241-261; quoting from other sites, 218. See also Internet
+
+% ,{[pg 515]},
+
+writable Web, 216-217
+
+written communication as platform for human connection, 369
+
+Zipf, George, 243
+
+Zittrain, Jonathan, 268
+
diff --git a/data/sisu_markup_samples/non-free/the_wealth_of_networks.yochai_benkler.sst b/data/sisu_markup_samples/non-free/the_wealth_of_networks.yochai_benkler.sst
new file mode 100644
index 0000000..6279004
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/the_wealth_of_networks.yochai_benkler.sst
@@ -0,0 +1,2165 @@
+% SiSU 0.48.8
+
+@title: The Wealth of Networks
+
+@subtitle: How Social Production Transforms Markets and Freedom
+
+@creator: Yochai Benkler
+
+@type: Book
+
+@rights: Copyright 2006 by Yochai Benkler. All rights reserved. Subject to the exception immediately following, this book may not be reproduced, in whole or in part, including illustrations, in any form (beyond that copying permitted by Sections 107 and 108 of the U.S. Copyright Law and except by reviewers for the public press), without written permission from the publishers. http://creativecommons.org/licenses/by-nc-sa/2.5/ The author has made an online version of the book available under a Creative Commons Noncommercial Sharealike license; it can be accessed through the author's website at http://www.benkler.org.
+
+% STRANGE FRUIT By Lewis Allan 1939 (Renewed) by Music Sales Corporation (ASCAP) International copyright secured. All rights reserved. All rights outside the United States controlled by Edward B. Marks Music Company. Reprinted by permission.
+
+@date: 2006-04-03
+
+% @date.created: 2006-01-27
+
+@date.created: 2006-04-03
+
+@date.issued: 2006-04-03
+
+@date.available: 2006-04-03
+
+@date.modified: 2006-04-03
+
+@date.valid: 2006-04-03
+
+% @catalogue: isbn=0300110561
+
+@language: US
+
+@vocabulary: none
+
+@images: center
+
+@skin: skin_won_benkler
+
+@links: {The Wealth of Networks, dedicated wiki}http://www.benkler.org/wealth_of_networks/index.php/Main_Page
+{The Wealth of Networks, Yochai Benkler @ SiSU}http://www.jus.uio.no/sisu/the_wealth_of_networks.yochai_benkler
+{tWoN book index @ SiSU}http://www.jus.uio.no/sisu/the_wealth_of_networks.book_index.yochai_benkler/doc.html
+{@ Wikipedia}http://en.wikipedia.org/wiki/The_Wealth_of_Networks
+{Free Culture, Lawrence Lessig @ SiSU}http://www.jus.uio.no/sisu/free_culture.lawrence_lessig
+{Free as in Freedom (on Richard M. Stallman), Sam Williams @ SiSU}http://www.jus.uio.no/sisu/free_as_in_freedom.richard_stallman_crusade_for_free_software.sam_williams
+{Free For All, Peter Wayner @ SiSU}http://www.jus.uio.no/sisu/free_for_all.peter_wayner
+{The Cathedral and the Bazaar, Eric S. Raymond @ SiSU}http://www.jus.uio.no/sisu/the_cathedral_and_the_bazaar.eric_s_raymond
+{WoN @ Amazon.com}http://www.amazon.com/Wealth-Networks-Production-Transforms-Markets/dp/0300110561/
+{WoN @ Barnes & Noble}http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?isbn=0300110561
+
+@level: new=:C; break=1
+
+
+:A~ The Wealth of Networks - How Social Production Transforms Markets and Freedom
+
+:B~ Yochai Benkler
+
+1~attribution Attribution~#
+
+!_ For Deb, Noam, and Ari~#
+
+"Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing." "Such are the differences among human beings in their sources of pleasure, their susceptibilities of pain, and the operation on them of different physical and moral agencies, that unless there is a corresponding diversity in their modes of life, they neither obtain their fair share of happiness, nor grow up to the mental, moral, and aesthetic stature of which their nature is capable."~#
+
+John Stuart Mill, On Liberty (1859)~#
+
+1~acknowledgments Acknowledgments
+
+Reading this manuscript was an act of heroic generosity. I owe my gratitude to those who did and who therefore helped me to avoid at least some of the errors that I would have made without their assistance. Bruce Ackerman spent countless hours listening, and reading and challenging both this book and its precursor bits and pieces since 2001. I owe much of its present conception and form to his friendship. Jack Balkin not only read the manuscript, but in an act of great generosity taught it to his seminar, imposed it on the fellows of Yale's Information Society Project, and then spent hours with me working through the limitations and pitfalls they found. Marvin Ammori, Ady Barkan, Elazar Barkan, Becky Bolin, Eszter Hargittai, Niva Elkin Koren, Amy Kapczynski, Eddan Katz, Zac Katz, Nimrod Koslovski, Orly Lobel, Katherine McDaniel, and Siva Vaidhyanathan all read the manuscript and provided valuable thoughts and insights. Michael O'Malley from Yale University Press deserves special thanks for helping me decide to write the book that I really wanted to write, not something else, and then stay the course. ,{[pg 10]},
+
+This book has been more than a decade in the making. Its roots go back to 1993-1994: long nights of conversations, as only graduate students can have, with Niva Elkin Koren about democracy in cyberspace; a series of formative conversations with Mitch Kapor; a couple of madly imaginative sessions with Charlie Nesson; and a moment of true understanding with Eben Moglen. Equally central from around that time, but at an angle, were a paper under Terry Fisher's guidance on nineteenth-century homesteading and the radical republicans, and a series of classes and papers with Frank Michelman, Duncan Kennedy, Mort Horwitz, Roberto Unger, and the late David Charny, which led me to think quite fundamentally about the role of property and economic organization in the construction of human freedom. It was Frank Michelman who taught me that the hard trick was to do so as a liberal.
+
+Since then, I have been fortunate in many and diverse intellectual friendships and encounters, from people in different fields and foci, who shed light on various aspects of this project. I met Larry Lessig for (almost) the first time in 1998. By the end of a two-hour conversation, we had formed a friendship and intellectual conversation that has been central to my work ever since. He has, over the past few years, played a pivotal role in changing the public understanding of control, freedom, and creativity in the digital environment. Over the course of these years, I spent many hours learning from Jamie Boyle, Terry Fisher, and Eben Moglen. In different ways and styles, each of them has had significant influence on my work. There was a moment, sometime between the conference Boyle organized at Yale in 1999 and the one he organized at Duke in 2001, when a range of people who had been doing similar things, pushing against the wind with varying degrees of interconnection, seemed to cohere into a single intellectual movement, centered on the importance of the commons to information production and creativity generally, and to the digitally networked environment in particular. In various contexts, both before this period and since, I have learned much from Julie Cohen, Becky Eisenberg, Bernt Hugenholtz, David Johnson, David Lange, Jessica Litman, Neil Netanel, Helen Nissenbaum, Peggy Radin, Arti Rai, David Post, Jerry Reichman, Pam Samuelson, Jon Zittrain, and Diane Zimmerman. One of the great pleasures of this field is the time I have been able to spend with technologists, economists, sociologists, and others who don't quite fit into any of these categories. Many have been very patient with me and taught me much. In particular, I owe thanks to Sam Bowles, Dave Clark, Dewayne Hendricks, Richard Jefferson, Natalie Jeremijenko, Tara Lemmey, Josh Lerner, Andy Lippman, David Reed, Chuck Sabel, Jerry Saltzer, Tim Shepard, Clay Shirky, and Eric von Hippel. In constitutional law and political theory, I benefited early and consistently from the insights of Ed Baker, with whom I spent many hours puzzling through practically every problem of political theory that I tackle in this book; Chris Eisgruber, Dick Fallon, Larry Kramer, Burt Neuborne, Larry Sager, and Kathleen Sullivan all helped in constructing various components of the argument.
+
+Much of the early work in this project was done at New York University, whose law school offered me an intellectually engaging and institutionally safe environment to explore some quite unorthodox views. A friend, visiting when I gave a brown-bag workshop there in 1998, pointed out that at very few law schools could I have presented "The Commons as a Neglected Factor of Information Policy" as an untenured member of the faculty, to a room full of law and economics scholars, without jeopardizing my career. Mark Geistfeld, in particular, helped me work through the economics of sharing--as we shared many a pleasant afternoon on the beach, watching our boys playing in the waves. I benefited from the generosity of Al Engelberg, who funded the Engelberg Center on Innovation Law and Policy and through it students and fellows, from whose work I learned so much; and Arthur Penn, who funded the Information Law Institute and through it that amazing intellectual moment, the 2000 conference on "A Free Information Ecology in the Digital Environment," and the series of workshops that became the Open Spectrum Project. During that period, I was fortunate enough to have had wonderful students and fellows with whom I worked in various ways that later informed this book, in particular Gaia Bernstein, Mike Burstein, John Kuzin, Greg Pomerantz, Steve Snyder, and Alan Toner.
+
+Since 2001, first as a visitor and now as a member, I have had the remarkable pleasure of being part of the intellectual community that is Yale Law School. The book in its present form, structure, and emphasis is a direct reflection of my immersion in this wonderful community. Practically every single one of my colleagues has read articles I have written over this period, attended workshops where I presented my work, provided comments that helped to improve the articles--and through them, this book, as well. I owe each and every one of them thanks, not least to Tony Kronman, who made me see that it would be so. To list them all would be redundant. To list some would inevitably underrepresent the various contributions they have made. Still, I will try to say a few of the special thanks, owing much yet to ,{[pg xii]}, those I will not name. Working out the economics was a precondition of being able to make the core political claims. Bob Ellickson, Dan Kahan, and Carol Rose all engaged deeply with questions of reciprocity and commons-based production, while Jim Whitman kept my feet to the fire on the relationship to the anthropology of the gift. Ian Ayres, Ron Daniels during his visit, Al Klevorick, George Priest, Susan Rose-Ackerman, and Alan Schwartz provided much-needed mixtures of skepticism and help in constructing the arguments that would allay it. Akhil Amar, Owen Fiss, Jerry Mashaw, Robert Post, Jed Rubenfeld, Reva Siegel, and Kenji Yoshino helped me work on the normative and constitutional questions. The turn I took to focusing on global development as the core aspect of the implications for justice, as it is in chapter 9, resulted from an invitation from Harold Koh and Oona Hathaway to speak at their seminar on globalization, and their thoughtful comments to my paper. The greatest influence on that turn has been Amy Kapczynski's work as a fellow at Yale, and with her, the students who invited me to work with them on university licensing policy, in particular, Sam Chaifetz.
+
+Oddly enough, I have never had the proper context in which to give two more basic thanks. My father, who was swept up in the resistance to British colonialism and later in Israel's War of Independence, dropped out of high school. He was left with a passionate intellectual hunger and a voracious appetite for reading. He died too young to even imagine sitting, as I do today with my own sons, with the greatest library in human history right there, at the dinner table, with us. But he would have loved it. Another great debt is to David Grais, who spent many hours mentoring me in my first law job, bought me my first copy of Strunk and White, and, for all practical purposes, taught me how to write in English; as he reads these words, he will be mortified, I fear, to be associated with a work of authorship as undisciplined as this, with so many excessively long sentences, replete with dependent clauses and unnecessarily complex formulations of quite simple ideas.
+
+Finally, to my best friend and tag-team partner in this tussle we call life, Deborah Schrag, with whom I have shared nicely more or less everything since we were barely adults. ,{[pg 1]},
+
+1~1 Chapter 1 - Introduction: A Moment of Opportunity and Challenge
+
+Information, knowledge, and culture are central to human freedom and human development. How they are produced and exchanged in our society critically affects the way we see the state of the world as it is and might be; who decides these questions; and how we, as societies and polities, come to understand what can and ought to be done. For more than 150 years, modern complex democracies have depended in large measure on an industrial information economy for these basic functions. In the past decade and a half, we have begun to see a radical change in the organization of information production. Enabled by technological change, we are beginning to see a series of economic, social, and cultural adaptations that make possible a radical transformation of how we make the information environment we occupy as autonomous individuals, citizens, and members of cultural and social groups. It seems passé today to speak of "the Internet revolution." In some academic circles, it is positively naïve. But it should not be. The change brought about by the networked information environment is deep. It is structural. It goes to the very foundations of how liberal markets and liberal democracies have coevolved for almost two centuries. ,{[pg 2]},
+
+A series of changes in the technologies, economic organization, and social practices of production in this environment has created new opportunities for how we make and exchange information, knowledge, and culture. These changes have increased the role of nonmarket and nonproprietary production, both by individuals alone and by cooperative efforts in a wide range of loosely or tightly woven collaborations. These newly emerging practices have seen remarkable success in areas as diverse as software development and investigative reporting, avant-garde video and multiplayer online games. Together, they hint at the emergence of a new information environment, one in which individuals are free to take a more active role than was possible in the industrial information economy of the twentieth century. This new freedom holds great practical promise: as a dimension of individual freedom; as a platform for better democratic participation; as a medium to foster a more critical and self-reflective culture; and, in an increasingly information-dependent global economy, as a mechanism to achieve improvements in human development everywhere.
+
+The rise of greater scope for individual and cooperative nonmarket production of information and culture, however, threatens the incumbents of the industrial information economy. At the beginning of the twenty-first century, we find ourselves in the midst of a battle over the institutional ecology of the digital environment. A wide range of laws and institutions--from broad areas like telecommunications, copyright, or international trade regulation, to minutiae like the rules for registering domain names or whether digital television receivers will be required by law to recognize a particular code--are being tugged and warped in efforts to tilt the playing field toward one way of doing things or the other. How these battles turn out over the next decade or so will likely have a significant effect on how we come to know what is going on in the world we occupy, and to what extent and in what forms we will be able--as autonomous individuals, as citizens, and as participants in cultures and communities--to affect how we and others see the world as it is and as it might be.
+
+2~ THE EMERGENCE OF THE NETWORKED INFORMATION ECONOMY
+
+The most advanced economies in the world today have made two parallel shifts that, paradoxically, make possible a significant attenuation of the limitations that market-based production places on the pursuit of the political ,{[pg 3]}, values central to liberal societies. The first move, in the making for more than a century, is to an economy centered on information (financial services, accounting, software, science) and cultural (films, music) production, and the manipulation of symbols (from making sneakers to branding them and manufacturing the cultural significance of the Swoosh). The second is the move to a communications environment built on cheap processors with high computation capabilities, interconnected in a pervasive network--the phenomenon we associate with the Internet. It is this second shift that allows for an increasing role for nonmarket production in the information and cultural production sector, organized in a radically more decentralized pattern than was true of this sector in the twentieth century. The first shift means that these new patterns of production--nonmarket and radically decentralized--will emerge, if permitted, at the core, rather than the periphery of the most advanced economies. It promises to enable social production and exchange to play a much larger role, alongside property- and market-based production, than they ever have in modern democracies.
+
+The first part of this book is dedicated to establishing a number of basic economic observations. Its overarching claim is that we are seeing the emergence of a new stage in the information economy, which I call the "networked information economy." It is displacing the industrial information economy that typified information production from about the second half of the nineteenth century and throughout the twentieth century. What characterizes the networked information economy is that decentralized individual action--specifically, new and important cooperative and coordinate action carried out through radically distributed, nonmarket mechanisms that do not depend on proprietary strategies--plays a much greater role than it did, or could have, in the industrial information economy. The catalyst for this change is the happenstance of the fabrication technology of computation, and its ripple effects throughout the technologies of communication and storage. The declining price of computation, communication, and storage has, as a practical matter, placed the material means of information and cultural production in the hands of a significant fraction of the world's population--on the order of a billion people around the globe. The core distinguishing feature of communications, information, and cultural production since the mid-nineteenth century was that effective communication spanning the ever-larger societies and geographies that came to make up the relevant political and economic units of the day required ever-larger investments of physical capital. Large-circulation mechanical presses, the telegraph ,{[pg 4]}, system, powerful radio and later television transmitters, cable and satellite, and the mainframe computer became necessary to make information and communicate it on scales that went beyond the very local. Wanting to communicate with others was not a sufficient condition to being able to do so. As a result, information and cultural production took on, over the course of this period, a more industrial model than the economics of information itself would have required. The rise of the networked, computer-mediated communications environment has changed this basic fact. The material requirements for effective information production and communication are now owned by numbers of individuals several orders of magnitude larger than the number of owners of the basic means of information production and exchange a mere two decades ago.
+
+The removal of the physical constraints on effective information production has made human creativity and the economics of information itself the core structuring facts in the new networked information economy. These have quite different characteristics than coal, steel, and manual human labor, which characterized the industrial economy and structured our basic thinking about economic production for the past century. They lead to three observations about the emerging information production system. First, nonproprietary strategies have always been more important in information production than they were in the production of steel or automobiles, even when the economics of communication weighed in favor of industrial models. Education, arts and sciences, political debate, and theological disputation have always been much more importantly infused with nonmarket motivations and actors than, say, the automobile industry. As the material barrier that ultimately nonetheless drove much of our information environment to be funneled through the proprietary, market-based strategies is removed, these basic nonmarket, nonproprietary motivations and organizational forms should in principle become even more important to the information production system.
+
+Second, we have in fact seen the rise of nonmarket production to much greater importance. Individuals can reach and inform or edify millions around the world. Such a reach was simply unavailable to diversely motivated individuals before, unless they funneled their efforts through either market organizations or philanthropically or state-funded efforts. The fact that every such effort is available to anyone connected to the network, from anywhere, has led to the emergence of coordinate effects, where the aggregate effect of individual action, even when it is not self-consciously cooperative, produces ,{[pg 5]}, the coordinate effect of a new and rich information environment. One needs only to run a Google search on any subject of interest to see how the "information good" that is the response to one's query is produced by the coordinate effects of the uncoordinated actions of a wide and diverse range of individuals and organizations acting on a wide range of motivations--both market and nonmarket, state-based and nonstate.
+
+Third, and likely most radical, new, and difficult for observers to believe, is the rise of effective, large-scale cooperative efforts--peer production of information, knowledge, and culture. These are typified by the emergence of free and open-source software. We are beginning to see the expansion of this model not only to our core software platforms, but beyond them into every domain of information and cultural production--and this book visits these in many different domains--from peer production of encyclopedias, to news and commentary, to immersive entertainment.
+
+It is easy to miss these changes. They run against the grain of some of our most basic Economics 101 intuitions, intuitions honed in the industrial economy at a time when the only serious alternative seen was state Communism--an alternative almost universally considered unattractive today. The undeniable economic success of free software has prompted some leading-edge economists to try to understand why many thousands of loosely networked free software developers can compete with Microsoft at its own game and produce a massive operating system--GNU/Linux. That growing literature, consistent with its own goals, has focused on software and the particulars of the free and open-source software development communities, although Eric von Hippel's notion of "user-driven innovation" has begun to expand that focus to thinking about how individual need and creativity drive innovation at the individual level, and its diffusion through networks of likeminded individuals. The political implications of free software have been central to the free software movement and its founder, Richard Stallman, and were developed provocatively and with great insight by Eben Moglen. Free software is but one salient example of a much broader phenomenon. Why can fifty thousand volunteers successfully coauthor /{Wikipedia}/, the most serious online alternative to the Encyclopedia Britannica, and then turn around and give it away for free? Why do 4.5 million volunteers contribute their leftover computer cycles to create the most powerful supercomputer on Earth, SETI@Home? Without a broadly accepted analytic model to explain these phenomena, we tend to treat them as curiosities, perhaps transient fads, possibly of significance in one market segment or another. We ,{[pg 6]}, should try instead to see them for what they are: a new mode of production emerging in the middle of the most advanced economies in the world--those that are the most fully computer networked and for which information goods and services have come to occupy the highest-valued roles.
+
+Human beings are, and always have been, diversely motivated beings. We act instrumentally, but also noninstrumentally. We act for material gain, but also for psychological well-being and gratification, and for social connectedness. There is nothing new or earth-shattering about this, except perhaps to some economists. In the industrial economy in general, and the industrial information economy as well, most opportunities to make things that were valuable and important to many people were constrained by the physical capital requirements of making them. From the steam engine to the assembly line, from the double-rotary printing press to the communications satellite, the capital constraints on action were such that simply wanting to do something was rarely a sufficient condition to enable one to do it. Financing the necessary physical capital, in turn, oriented the necessarily capital-intensive projects toward a production and organizational strategy that could justify the investments. In market economies, that meant orienting toward market production. In state-run economies, that meant orienting production toward the goals of the state bureaucracy. In either case, the practical individual freedom to cooperate with others in making things of value was limited by the extent of the capital requirements of production.
+
+In the networked information economy, the physical capital required for production is broadly distributed throughout society. Personal computers and network connections are ubiquitous. This does not mean that they cannot be used for markets, or that individuals cease to seek market opportunities. It does mean, however, that whenever someone, somewhere, among the billion connected human beings, and ultimately among all those who will be connected, wants to make something that requires human creativity, a computer, and a network connection, he or she can do so--alone, or in cooperation with others. He or she already has the capital capacity necessary to do so; if not alone, then at least in cooperation with other individuals acting for complementary reasons. The result is that a good deal more that human beings value can now be done by individuals, who interact with each other socially, as human beings and as social beings, rather than as market actors through the price system. Sometimes, under conditions I specify in some detail, these nonmarket collaborations can be better at motivating effort and can allow creative people to work on information projects more ,{[pg 7]}, efficiently than would traditional market mechanisms and corporations. The result is a flourishing nonmarket sector of information, knowledge, and cultural production, based in the networked environment, and applied to anything that the many individuals connected to it can imagine. Its outputs, in turn, are not treated as exclusive property. They are instead subject to an increasingly robust ethic of open sharing, open for all others to build on, extend, and make their own.
+
+Because the presence and importance of nonmarket production has become so counterintuitive to people living in market-based economies at the end of the twentieth century, part I of this volume is fairly detailed and technical; overcoming what we intuitively "know" requires disciplined analysis. Readers who are not inclined toward economic analysis should at least read the introduction to part I, the segments entitled "When Information Production Meets the Computer Network" and "Diversity of Strategies in our Current Production System" in chapter 2, and the case studies in chapter 3. These should provide enough of an intuitive feel for what I mean by the diversity of production strategies for information and the emergence of nonmarket individual and cooperative production, to serve as the basis for the more normatively oriented parts of the book. Readers who are genuinely skeptical of the possibility that nonmarket production is sustainable and effective, and in many cases is an efficient strategy for information, knowledge, and cultural production, should take the time to read part I in its entirety. The emergence of precisely this possibility and practice lies at the very heart of my claims about the ways in which liberal commitments are translated into lived experiences in the networked environment, and forms the factual foundation of the political-theoretical and the institutional-legal discussion that occupies the remainder of the book.
+
+2~ NETWORKED INFORMATION ECONOMY AND LIBERAL, DEMOCRATIC SOCIETIES
+
+How we make information, how we get it, how we speak to others, and how others speak to us are core components of the shape of freedom in any society. Part II of this book provides a detailed look at how the changes in the technological, economic, and social affordances of the networked information environment affect a series of core commitments of a wide range of liberal democracies. The basic claim is that the diversity of ways of organizing information production and use opens a range of possibilities for pursuing ,{[pg 8]}, the core political values of liberal societies--individual freedom, a more genuinely participatory political system, a critical culture, and social justice. These values provide the vectors of political morality along which the shape and dimensions of any liberal society can be plotted. Because their practical policy implications are often contradictory, rather than complementary, the pursuit of each places certain limits on how we pursue the others, leading different liberal societies to respect them in different patterns. How much a society constrains the democratic decision-making powers of the majority in favor of individual freedom, or to what extent it pursues social justice, have always been attributes that define the political contours and nature of that society. But the economics of industrial production, and our pursuit of productivity and growth, have imposed a limit on how we can pursue any mix of arrangements to implement our commitments to freedom and justice. Singapore is commonly trotted out as an extreme example of the trade-off of freedom for welfare, but all democracies with advanced capitalist economies have made some such trade-off. Predictions of how well we will be able to feed ourselves are always an important consideration in thinking about whether, for example, to democratize wheat production or make it more egalitarian. Efforts to push workplace democracy have also often foundered on the shoals--real or imagined--of these limits, as have many plans for redistribution in the name of social justice. Market-based, proprietary production has often seemed simply too productive to tinker with. The emergence of the networked information economy promises to expand the horizons of the feasible in political imagination. Different liberal polities can pursue different mixtures of respect for different liberal commitments. However, the overarching constraint represented by the seeming necessity of the industrial model of information and cultural production has significantly shifted as an effective constraint on the pursuit of liberal commitments.
+
+3~ Enhanced Autonomy
+
+The networked information economy improves the practical capacities of individuals along three dimensions: (1) it improves their capacity to do more for and by themselves; (2) it enhances their capacity to do more in loose commonality with others, without being constrained to organize their relationship through a price system or in traditional hierarchical models of social and economic organization; and (3) it improves the capacity of individuals to do more in formal organizations that operate outside the market sphere. This enhanced autonomy is at the core of all the other improvements I ,{[pg 9]}, describe. Individuals are using their newly expanded practical freedom to act and cooperate with others in ways that improve the practiced experience of democracy, justice and development, a critical culture, and community.
+
+I begin, therefore, with an analysis of the effects of networked information economy on individual autonomy. First, individuals can do more for themselves independently of the permission or cooperation of others. They can create their own expressions, and they can seek out the information they need, with substantially less dependence on the commercial mass media of the twentieth century. Second, and no less importantly, individuals can do more in loose affiliation with others, rather than requiring stable, long-term relations, like coworker relations or participation in formal organizations, to underwrite effective cooperation. Very few individuals living in the industrial information economy could, in any realistic sense, decide to build a new Library of Alexandria of global reach, or to start an encyclopedia. As collaboration among far-flung individuals becomes more common, the idea of doing things that require cooperation with others becomes much more attainable, and the range of projects individuals can choose as their own therefore qualitatively increases. The very fluidity and low commitment required of any given cooperative relationship increases the range and diversity of cooperative relations people can enter, and therefore of collaborative projects they can conceive of as open to them.
+
+These ways in which autonomy is enhanced require a fairly substantive and rich conception of autonomy as a practical lived experience, rather than the formal conception preferred by many who think of autonomy as a philosophical concept. But even from a narrower perspective, which spans a broader range of conceptions of autonomy, at a minimum we can say that individuals are less susceptible to manipulation by a legally defined class of others--the owners of communications infrastructure and media. The networked information economy provides varied alternative platforms for communication, so that it moderates the power of the traditional mass-media model, where ownership of the means of communication enables an owner to select what others view, and thereby to affect their perceptions of what they can and cannot do. Moreover, the diversity of perspectives on the way the world is and the way it could be for any given individual is qualitatively increased. This gives individuals a significantly greater role in authoring their own lives, by enabling them to perceive a broader range of possibilities, and by providing them a richer baseline against which to measure the choices they in fact make. ,{[pg 10]},
+
+3~ Democracy: The Networked Public Sphere
+
+The second major implication of the networked information economy is the shift it enables from the mass-mediated public sphere to a networked public sphere. This shift is also based on the increasing freedom individuals enjoy to participate in creating information and knowledge, and the possibilities it presents for a new public sphere to emerge alongside the commercial, mass-media markets. The idea that the Internet democratizes is hardly new. It has been a staple of writing about the Internet since the early 1990s. The relatively simple first-generation claims about the liberating effects of the Internet, summarized in the U.S. Supreme Court's celebration of its potential to make everyone a pamphleteer, came under a variety of criticisms and attacks over the course of the past half decade or so. Here, I offer a detailed analysis of how the emergence of a networked information economy in particular, as an alternative to mass media, improves the political public sphere. The first-generation critique of the democratizing effect of the Internet was based on various implications of the problem of information overload, or the Babel objection. According to the Babel objection, when everyone can speak, no one can be heard, and we devolve either to a cacophony or to the reemergence of money as the distinguishing factor between statements that are heard and those that wallow in obscurity. The second-generation critique was that the Internet is not as decentralized as we thought in the 1990s. The emerging patterns of Internet use show that very few sites capture an exceedingly large amount of attention, and millions of sites go unnoticed. In this world, the Babel objection is perhaps avoided, but only at the expense of the very promise of the Internet as a democratic medium.
+
+In chapters 6 and 7, I offer a detailed and updated analysis of this, perhaps the best-known and most contentious claim about the Internet's liberalizing effects. First, it is important to understand that any consideration of the democratizing effects of the Internet must measure its effects as compared to the commercial, mass-media-based public sphere, not as compared to an idealized utopia that we embraced a decade ago of how the Internet might be. Commercial mass media that have dominated the public spheres of all modern democracies have been studied extensively. They have been shown in extensive literature to exhibit a series of failures as platforms for public discourse. First, they provide a relatively limited intake basin--that is, too many observations and concerns of too many people in complex modern ,{[pg 11]}, societies are left unobserved and unattended to by the small cadre of commercial journalists charged with perceiving the range of issues of public concern in any given society. Second, particularly where the market is concentrated, they give their owners inordinate power to shape opinion and information. This power they can either use themselves or sell to the highest bidder. And third, whenever the owners of commercial media choose not to exercise their power in this way, they then tend to program toward the inane and soothing, rather than toward that which will be politically engaging, and they tend to oversimplify complex public discussions. On the background of these limitations of the mass media, I suggest that the networked public sphere enables many more individuals to communicate their observations and their viewpoints to many others, and to do so in a way that cannot be controlled by media owners and is not as easily corruptible by money as were the mass media.
+
+The empirical and theoretical literature about network topology and use provides answers to all the major critiques of the claim that the Internet improves the structure of the public sphere. In particular, I show how a wide range of mechanisms--starting from the simple mailing list, through static Web pages, the emergence of writable Web capabilities, and mobility--are being embedded in a social system for the collection of politically salient information, observations, and comments, and provide a platform for discourse. These platforms solve some of the basic limitations of the commercial, concentrated mass media as the core platform of the public sphere in contemporary complex democracies. They enable anyone, anywhere, to go through his or her practical life, observing the social environment through new eyes--the eyes of someone who could actually inject a thought, a criticism, or a concern into the public debate. Individuals become less passive, and thus more engaged observers of social spaces that could potentially become subjects for political conversation; they become more engaged participants in the debates about their observations. The various formats of the networked public sphere provide anyone with an outlet to speak, to inquire, to investigate, without need to access the resources of a major media organization. We are seeing the emergence of new, decentralized approaches to fulfilling the watchdog function and to engaging in political debate and organization. These are being undertaken in a distinctly nonmarket form, in ways that would have been much more difficult to pursue effectively, as a standard part of the construction of the public sphere, before the networked information environment. Working through detailed examples, I try ,{[pg 12]}, to render the optimism about the democratic advantages of the networked public sphere a fully specified argument.
+
+The networked public sphere has also begun to respond to the information overload problem, but without re-creating the power of mass media at the points of filtering and accreditation. There are two core elements to these developments: First, we are beginning to see the emergence of nonmarket, peer-produced alternative sources of filtration and accreditation in place of the market-based alternatives. Relevance and accreditation are themselves information goods, just like software or an encyclopedia. What we are seeing on the network is that filtering for both relevance and accreditation has become the object of widespread practices of mutual pointing, of peer review, of pointing to original sources of claims, and its complement, the social practice that those who have some ability to evaluate the claims in fact do comment on them. The second element is a contingent but empirically confirmed observation of how users actually use the network. As a descriptive matter, information flow in the network is much more ordered than a simple random walk in the cacophony of information flow would suggest, and significantly less centralized than the mass media environment was. Some sites are much more visible and widely read than others. This is true both when one looks at the Web as a whole, and when one looks at smaller clusters of similar sites or users who tend to cluster. Most commentators who have looked at this pattern have interpreted it as a reemergence of mass media--the dominance of the few visible sites. But a full consideration of the various elements of the network topology literature supports a very different interpretation, in which order emerges in the networked environment without re-creating the failures of the mass-media-dominated public sphere. Sites cluster around communities of interest: Australian fire brigades tend to link to other Australian fire brigades, conservative political blogs (Web logs or online journals) in the United States to other conservative political blogs in the United States, and to a lesser but still significant extent, to liberal political blogs. In each of these clusters, the pattern of some high visibility nodes continues, but as the clusters become small enough, many more of the sites are moderately linked to each other in the cluster. Through this pattern, the network seems to be forming into an attention backbone. "Local" clusters--communities of interest--can provide initial vetting and "peer-review-like" qualities to individual contributions made within an interest cluster. Observations that are seen as significant within a community ,{[pg 13]}, of interest make their way to the relatively visible sites in that cluster, from where they become visible to people in larger ("regional") clusters. This continues until an observation makes its way to the "superstar" sites that hundreds of thousands of people might read and use. This path is complemented by the practice of relatively easy commenting and posting directly to many of the superstar sites, which creates shortcuts to wide attention. It is fairly simple to grasp intuitively why these patterns might emerge. Users tend to treat other people's choices about what to link to and to read as good indicators of what is worthwhile for them. They are not slavish in this, though; they apply some judgment of their own as to whether certain types of users--say, political junkies of a particular stripe, or fans of a specific television program--are the best predictors of what will be interesting for them. The result is that attention in the networked environment is more dependent on being interesting to an engaged group of people than it is in the mass-media environment, where moderate interest to large numbers of weakly engaged viewers is preferable. Because of the redundancy of clusters and links, and because many clusters are based on mutual interest, not on capital investment, it is more difficult to buy attention on the Internet than it is in mass media outlets, and harder still to use money to squelch an opposing view. These characteristics save the networked environment from the Babel objection without reintroducing excessive power in any single party or small cluster of them, and without causing a resurgence in the role of money as a precondition to the ability to speak publicly.
+
+3~ Justice and Human Development
+
+Information, knowledge, and information-rich goods and tools play a significant role in economic opportunity and human development. While the networked information economy cannot solve global hunger and disease, its emergence does open reasonably well-defined new avenues for addressing and constructing some of the basic requirements of justice and human development. Because the outputs of the networked information economy are usually nonproprietary, it provides free access to a set of the basic instrumentalities of economic opportunity and the basic outputs of the information economy. From a liberal perspective concerned with justice, at a minimum, these outputs become more readily available as "finished goods" to those who are least well off. More importantly, the availability of free information resources makes participating in the economy less dependent on ,{[pg 14]}, surmounting access barriers to financing and social-transactional networks that made working out of poverty difficult in industrial economies. These resources and tools thus improve equality of opportunity.
+
+From a more substantive and global perspective focused on human development, the freedom to use basic resources and capabilities allows improved participation in the production of information and information-dependent components of human development. First, and currently most advanced, the emergence of a broad range of free software utilities makes it easier for poor and middle-income countries to obtain core software capabilities. More importantly, free software enables the emergence of local capabilities to provide software services, both for national uses and as a basis for participating in a global software services industry, without need to rely on permission from multinational software companies. Scientific publication is beginning to use commons-based strategies to publish important sources of information in a way that makes the outputs freely available in poorer countries. More ambitiously, we begin to see in agricultural research a combined effort of public, nonprofit, and open-source-like efforts being developed and applied to problems of agricultural innovation. The ultimate purpose is to develop a set of basic capabilities that would allow collaboration among farmers and scientists, in both poor countries and around the globe, to develop better, more nutritious crops to improve food security throughout the poorer regions of the world. Equally ambitious, but less operationally advanced, we are beginning to see early efforts to translate this system of innovation to health-related products.
+
+All these efforts are aimed at solving one of the most glaring problems of poverty and poor human development in the global information economy: Even as opulence increases in the wealthier economies--as information and innovation offer longer and healthier lives that are enriched by better access to information, knowledge, and culture--in many places, life expectancy is decreasing, morbidity is increasing, and illiteracy remains rampant. Some, although by no means all, of this global injustice is due to the fact that we have come to rely ever-more exclusively on proprietary business models of the industrial economy to provide some of the most basic information components of human development. As the networked information economy develops new ways of producing information, whose outputs are not treated as proprietary and exclusive but can be made available freely to everyone, it offers modest but meaningful opportunities for improving human development everywhere. We are seeing early signs of the emergence of an innovation ,{[pg 15]}, ecosystem made of public funding, traditional nonprofits, and the newly emerging sector of peer production that is making it possible to advance human development through cooperative efforts in both rich countries and poor.
+
+3~ A Critical Culture and Networked Social Relations
+
+The networked information economy also allows for the emergence of a more critical and self-reflective culture. In the past decade, a number of legal scholars--Niva Elkin-Koren, Terry Fisher, Larry Lessig, and Jack Balkin--have begun to examine how the Internet democratizes culture. Following this work and rooted in the deliberative strand of democratic theory, I suggest that the networked information environment offers us a more attractive cultural production system in two distinct ways: (1) it makes culture more transparent, and (2) it makes culture more malleable. Together, these mean that we are seeing the emergence of a new folk culture--a practice that has been largely suppressed in the industrial era of cultural production--where many more of us participate actively in making cultural moves and finding meaning in the world around us. These practices make their practitioners better "readers" of their own culture and more self-reflective and critical of the culture they occupy, thereby enabling them to become more self-reflective participants in conversations within that culture. This also allows individuals much greater freedom to participate in tugging and pulling at the cultural creations of others, "glomming on" to them, as Balkin puts it, and making the culture they occupy more their own than was possible with mass-media culture. In these senses, we can say that culture is becoming more democratic: self-reflective and participatory.
+
+Throughout much of this book, I underscore the increased capabilities of individuals as the core driving social force behind the networked information economy. This heightened individual capacity has raised concerns by many that the Internet further fragments community, continuing the long trend of industrialization. A substantial body of empirical literature suggests, however, that we are in fact using the Internet largely at the expense of television, and that this exchange is a good one from the perspective of social ties. We use the Internet to keep in touch with family and intimate friends, both geographically proximate and distant. To the extent we do see a shift in social ties, it is because, in addition to strengthening our strong bonds, we are also increasing the range and diversity of weaker connections. Following ,{[pg 16]}, Manuel Castells and Barry Wellman, I suggest that we have become more adept at filling some of the same emotional and context-generating functions that have traditionally been associated with the importance of community with a network of overlapping social ties that are limited in duration or intensity.
+
+2~ FOUR METHODOLOGICAL COMMENTS
+
+There are four methodological choices represented by the thesis that I have outlined up to this point, and therefore in this book as a whole, which require explication and defense. The first is that I assign a very significant role to technology. The second is that I offer an explanation centered on social relations, but operating in the domain of economics, rather than sociology. The third and fourth are more internal to liberal political theory. The third is that I am offering a liberal political theory, but taking a path that has usually been resisted in that literature--considering economic structure and the limits of the market and its supporting institutions from the perspective of freedom, rather than accepting the market as it is, and defending or criticizing adjustments through the lens of distributive justice. Fourth, my approach heavily emphasizes individual action in nonmarket relations. Much of the discussion revolves around the choice between markets and nonmarket social behavior. In much of it, the state plays no role, or is perceived as playing a primarily negative role, in a way that is alien to the progressive branches of liberal political thought. In this, it seems more of a libertarian or an anarchistic thesis than a liberal one. I do not completely discount the state, as I will explain. But I do suggest that what is special about our moment is the rising efficacy of individuals and loose, nonmarket affiliations as agents of political economy. Just like the market, the state will have to adjust to this new emerging modality of human action. Liberal political theory must first recognize and understand it before it can begin to renegotiate its agenda for the liberal state, progressive or otherwise.
+
+3~ The Role of Technology in Human Affairs
+
+The first methodological choice concerns how one should treat the role of technology in the development of human affairs. The kind of technological determinism that typified Lewis Mumford, or, specifically in the area of communications, Marshall McLuhan, is widely perceived in academia today ,{[pg 17]}, as being too deterministic, though perhaps not so in popular culture. The contemporary effort to offer more nuanced, institution-based, and political-choice-based explanations is perhaps best typified by Paul Starr's recent and excellent work on the creation of the media. While these contemporary efforts are indeed powerful, one should not confuse a work like Elizabeth Eisenstein's carefully argued and detailed The Printing Press as an Agent of Change, with McLuhan's determinism. Assuming that technologies are just tools that happen, more or less, to be there, and are employed in any given society in a pattern that depends only on what that society and culture makes of them is too constrained. A society that has no wheel and no writing has certain limits on what it can do. Barry Wellman has imported into sociology a term borrowed from engineering--affordances.~{ Barry Wellman et al., "The Social Affordances of the Internet for Networked Individualism," JCMC 8, no. 3 (April 2003). }~ Langdon Winner called these the "political properties" of technologies.~{ Langdon Winner, ed., "Do Artifacts Have Politics?" in The Whale and The Reactor: A Search for Limits in an Age of High Technology (Chicago: University of Chicago Press, 1986), 19-39. }~ An earlier version of this idea is Harold Innis's concept of "the bias of communications."~{ Harold Innis, The Bias of Communication (Toronto: University of Toronto Press, 1951). Innis too is often lumped with McLuhan and Walter Ong as a technological determinist. His work was, however, one of a political economist, and he emphasized the relationship between technology and economic and social organization, much more than the deterministic operation of technology on human cognition and capability. }~ In Internet law and policy debates this approach has become widely adopted through the influential work of Lawrence Lessig, who characterized it as "code is law."~{ Lawrence Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999). }~
+
+The idea is simple to explain, and distinct from a naïve determinism. Different technologies make different kinds of human action and interaction easier or harder to perform. All other things being equal, things that are easier to do are more likely to be done, and things that are harder to do are less likely to be done. All other things are never equal. That is why technological determinism in the strict sense--if you have technology "t," you should expect social structure or relation "s" to emerge--is false. Ocean navigation had a different adoption and use when introduced in states whose land empire ambitions were effectively countered by strong neighbors--like Spain and Portugal--than in nations that were focused on building a vast inland empire, like China. Print had different effects on literacy in countries where religion encouraged individual reading--like Prussia, Scotland, England, and New England--than where religion discouraged individual, unmediated interaction with texts, like France and Spain. This form of understanding the role of technology is adopted here. Neither deterministic nor wholly malleable, technology sets some parameters of individual and social action. It can make some actions, relationships, organizations, and institutions easier to pursue, and others harder. In a challenging environment--be the challenges natural or human--it can make some behaviors obsolete by increasing the efficacy of directly competitive strategies. However, within the realm of the feasible--uses not rendered impossible by the adoption or rejection of a technology--different patterns of adoption and use ,{[pg 18]}, can result in very different social relations that emerge around a technology. Unless these patterns are in competition, or unless even in competition they are not catastrophically less effective at meeting the challenges, different societies can persist with different patterns of use over long periods. It is the feasibility of long-term sustainability of different patterns of use that makes this book relevant to policy, not purely to theory. The same technologies of networked computers can be adopted in very different patterns. There is no guarantee that networked information technology will lead to the improvements in innovation, freedom, and justice that I suggest are possible. That is a choice we face as a society. The way we develop will, in significant measure, depend on choices we make in the next decade or so.
+
+3~ The Role of Economic Analysis and Methodological Individualism
+
+It should be emphasized, as the second point, that this book has a descriptive methodology that is distinctly individualist and economic in orientation, which is hardly the only way to approach this problem. Manuel Castells's magisterial treatment of the networked society~{ Manuel Castells, The Rise of the Network Society (Cambridge, MA, and Oxford: Blackwell Publishers, 1996). }~ locates its central characteristic in the shift from groups and hierarchies to networks as social and organizational models--looser, flexible arrangements of human affairs. Castells develops this theory as he describes a wide range of changes, from transportation networks to globalization and industrialization. In his work, the Internet fits into this trend, enabling better coordination and cooperation in these sorts of loosely affiliated networks. My own emphasis is on the specific relative roles of market and nonmarket sectors, and how that change anchors the radical decentralization that he too observes, as a matter of sociological observation. I place at the core of the shift the technical and economic characteristics of computer networks and information. These provide the pivot for the shift toward radical decentralization of production. They underlie the shift from an information environment dominated by proprietary, market-oriented action, to a world in which nonproprietary, nonmarket transactional frameworks play a large role alongside market production. This newly emerging, nonproprietary sector affects to a substantial degree the entire information environment in which individuals and societies live their lives. If there is one lesson we can learn from globalization and the ever-increasing reach of the market, it is that the logic of the market exerts enormous pressure on existing social structures.
If we are indeed seeing the emergence of a substantial component of nonmarket production at the very ,{[pg 19]}, core of our economic engine--the production and exchange of information, and through it of information-based goods, tools, services, and capabilities-- then this change suggests a genuine limit on the extent of the market. Such a limit, growing from within the very market that it limits, in its most advanced loci, would represent a genuine shift in direction for what appeared to be the ever-increasing global reach of the market economy and society in the past half century.
+
+3~ Economic Structure in Liberal Political Theory
+
+The third point has to do with the role of economic structure in liberal political theory. My analysis in this regard is practical and human centric. By this, I mean to say two things: First, I am concerned with human beings, with individuals as the bearers of moral claims regarding the structure of the political and economic systems they inhabit. Within the liberal tradition, the position I take is humanistic and general, as opposed to political and particular. It is concerned first and foremost with the claims of human beings as human beings, rather than with the requirements of democracy or the entitlements of citizenship or membership in a legitimate or meaningfully self-governed political community. There are diverse ways of respecting the basic claims of human freedom, dignity, and well-being. Different liberal polities do so with different mixes of constitutional and policy practices. The rise of global information economic structures and relationships affects human beings everywhere. In some places, it complements democratic traditions. In others, it destabilizes constraints on liberty. An understanding of how we can think of this moment in terms of human freedom and development must transcend the particular traditions, both liberal and illiberal, of any single nation. The actual practice of freedom that we see emerging from the networked environment allows people to reach across national or social boundaries, across space and political division. It allows people to solve problems together in new associations that are outside the boundaries of formal, legal-political association. In this fluid social economic environment, the individual's claims provide a moral anchor for considering the structures of power and opportunity, of freedom and well-being. Furthermore, while it is often convenient and widely accepted to treat organizations or communities as legal entities, as "persons," they are not moral agents. 
Their role in an analysis of freedom and justice is derivative from their role--both enabling and constraining--as structuring context in which human beings, ,{[pg 20]}, the actual moral agents of political economy, find themselves. In this regard, my positions here are decidedly "liberal," as opposed to either communitarian or critical.
+
+Second, I am concerned with actual human beings in actual historical settings, not with representations of human beings abstracted from their settings. These commitments mean that freedom and justice for historically situated individuals are measured from a first-person, practical perspective. No constraints on individual freedom and no sources of inequality are categorically exempt from review, nor are any considered privileged under this view. Neither economy nor cultural heritage is given independent moral weight. A person whose life and relations are fully regimented by external forces is unfree, no matter whether the source of regimentation can be understood as market-based, authoritarian, or traditional community values. This does not entail a radical anarchism or libertarianism. Organizations, communities, and other external structures are pervasively necessary for human beings to flourish and to act freely and effectively. This does mean, however, that I think of these structures only from the perspective of their effects on human beings. Their value is purely derivative from their importance to the actual human beings that inhabit them and are structured--for better or worse--by them. As a practical matter, this places concern with market structure and economic organization much closer to the core of questions of freedom than liberal theory usually is willing to do. Liberals have tended to leave the basic structure of property and markets either to libertarians--who, like Friedrich Hayek, accepted its present contours as "natural," and a core constituent element of freedom--or to Marxists and neo-Marxists. I treat property and markets as just one domain of human action, with affordances and limitations. Their presence enhances freedom along some dimensions, but their institutional requirements can become sources of constraint when they squelch freedom of action in nonmarket contexts. 
Calibrating the reach of the market, then, becomes central not only to the shape of justice or welfare in a society, but also to freedom.
+
+3~ Whither the State?
+
+The fourth and last point emerges in various places throughout this book, but deserves explicit note here. What I find new and interesting about the networked information economy is the rise of individual practical capabilities, and the role that these new capabilities play in increasing the relative salience of nonproprietary, often nonmarket individual and social behavior. ,{[pg 21]},
+
+In my discussion of autonomy and democracy, of justice and a critical culture, I emphasize the rise of individual and cooperative private action and the relative decrease in the dominance of market-based and proprietary action. Where in all this is the state? For the most part, as you will see particularly in chapter 11, the state in both the United States and Europe has played a role in supporting the market-based industrial incumbents of the twentieth-century information production system at the expense of the individuals who make up the emerging networked information economy. Most state interventions have been in the form of either captured legislation catering to incumbents, or, at best, well-intentioned but wrongheaded efforts to optimize the institutional ecology for outdated modes of information and cultural production. In the traditional mapping of political theory, a position such as the one I present here--that freedom and justice can and should best be achieved by a combination of market action and private, voluntary (not to say charitable) nonmarket action, and that the state is a relatively suspect actor--is libertarian. Perhaps, given that I subject to similar criticism rules styled by their proponents as "property"--like "intellectual property" or "spectrum property rights"--it is anarchist, focused on the role of mutual aid and highly skeptical of the state. (It is quite fashionable nowadays to be libertarian, as it has been for a few decades, and more fashionable to be anarchist than it has been in a century.)
+
+The more modest truth is that my position is not rooted in a theoretical skepticism about the state, but in a practical diagnosis of opportunities, barriers, and strategies for achieving improvements in human freedom and development given the actual conditions of technology, economy, and politics. I have no objection in principle to an effective, liberal state pursuing one of a range of liberal projects and commitments. Here and there throughout this book you will encounter instances where I suggest that the state could play constructive roles, if it stopped listening to incumbents for long enough to realize this. These include, for example, municipal funding of neutral broadband networks, state funding of basic research, and possible strategic regulatory interventions to negate monopoly control over essential resources in the digital environment. However, the necessity for the state's affirmative role is muted because of my diagnosis of the particular trajectory of markets, on the one hand, and individual and social action, on the other hand, in the digitally networked information environment. The particular economics of computation and communications; the particular economics of information, knowledge, and cultural production; and the relative role of ,{[pg 22]}, information in contemporary, advanced economies have coalesced to make nonmarket individual and social action the most important domain of action in the furtherance of the core liberal commitments. Given these particular characteristics, there is more freedom to be found through opening up institutional spaces for voluntary individual and cooperative action than there is in intentional public action through the state. Nevertheless, I offer no particular reasons to resist many of the roles traditionally played by the liberal state. 
I offer no reason to think that, for example, education should stop being primarily a state-funded, public activity and a core responsibility of the liberal state, or that public health should not be so. I have every reason to think that the rise of nonmarket production enhances, rather than decreases, the justifiability of state funding for basic science and research, as the spillover effects of publicly funded information production can now be much greater and more effectively disseminated and used to enhance the general welfare.
+
+The important new fact about the networked environment, however, is the efficacy and centrality of individual and collective social action. In most domains, freedom of action for individuals, alone and in loose cooperation with others, can achieve much of the liberal desiderata I consider throughout this book. From a global perspective, enabling individuals to act in this way also extends the benefits of liberalization across borders, increasing the capacities of individuals in nonliberal states to grab greater freedom than those who control their political systems would like. By contrast, as long as states in the most advanced market-based economies continue to try to optimize their institutional frameworks to support the incumbents of the industrial information economy, they tend to threaten rather than support liberal commitments. Once the networked information economy has stabilized and we come to understand the relative importance of voluntary private action outside of markets, the state can begin to adjust its policies to facilitate nonmarket action and to take advantage of its outputs to improve its own support for core liberal commitments.
+
+2~ THE STAKES OF IT ALL: THE BATTLE OVER THE INSTITUTIONAL ECOLOGY OF THE DIGITAL ENVIRONMENT
+
+No benevolent historical force will inexorably lead this technological-economic moment to develop toward an open, diverse, liberal equilibrium. ,{[pg 23]}, If the transformation I describe as possible occurs, it will lead to substantial redistribution of power and money from the twentieth-century industrial producers of information, culture, and communications--like Hollywood, the recording industry, and perhaps the broadcasters and some of the telecommunications services giants--to a combination of widely diffuse populations around the globe, and the market actors that will build the tools that make this population better able to produce its own information environment rather than buying it ready-made. None of the industrial giants of yore are taking this reallocation lying down. The technology will not overcome their resistance through an insurmountable progressive impulse. The reorganization of production and the advances it can bring in freedom and justice will emerge, therefore, only as a result of social and political action aimed at protecting the new social patterns from the incumbents' assaults. It is precisely to develop an understanding of what is at stake and why it is worth fighting for that I write this book. I offer no reassurances, however, that any of this will in fact come to pass.
+
+The battle over the relative salience of the proprietary, industrial models of information production and exchange and the emerging networked information economy is being carried out in the domain of the institutional ecology of the digital environment. In a wide range of contexts, a similar set of institutional questions is being contested: To what extent will resources necessary for information production and exchange be governed as a commons, free for all to use and biased in their availability in favor of none? To what extent will these resources be entirely proprietary, and available only to those functioning within the market or within traditional forms of well-funded nonmarket action like the state and organized philanthropy? We see this battle played out at all layers of the information environment: the physical devices and network channels necessary to communicate; the existing information and cultural resources out of which new statements must be made; and the logical resources--the software and standards--necessary to translate what human beings want to say to each other into signals that machines can process and transmit. Its central question is whether there will, or will not, be a core common infrastructure that is governed as a commons and therefore available to anyone who wishes to participate in the networked information environment outside of the market-based, proprietary framework.
+
+This is not to say that property is in some sense inherently bad. Property, together with contract, is the core institutional component of markets, and ,{[pg 24]}, a core institutional element of liberal societies. It is what enables sellers to extract prices from buyers, and buyers to know that when they pay, they will be secure in their ability to use what they bought. It underlies our capacity to plan actions that require use of resources that, without exclusivity, would be unavailable for us to use. But property also constrains action. The rules of property are circumscribed and intended to elicit a particular datum--willingness and ability to pay for exclusive control over a resource. They constrain what one person or another can do with regard to a resource; that is, use it in some ways but not others, reveal or hide information with regard to it, and so forth. These constraints are necessary so that people must transact with each other through markets, rather than through force or social networks, but they do so at the expense of constraining action outside of the market to the extent that it depends on access to these resources.
+
+Commons are another core institutional component of freedom of action in free societies, but they are structured to enable action that is not based on exclusive control over the resources necessary for action. For example, I can plan an outdoor party with some degree of certainty by renting a private garden or beach, through the property system. Alternatively, I can plan to meet my friends on a public beach or at Sheep's Meadow in Central Park. I can buy an easement from my neighbor to reach a nearby river, or I can walk around her property using the public road that makes up our transportation commons. Each institutional framework--property and commons--allows for a certain freedom of action and a certain degree of predictability of access to resources. Their complementary coexistence and relative salience as institutional frameworks for action determine the relative reach of the market and the domain of nonmarket action, both individual and social, in the resources they govern and the activities that depend on access to those resources. Now that material conditions have enabled the emergence of greater scope for nonmarket action, the scope and existence of a core common infrastructure that includes the basic resources necessary to produce and exchange information will shape the degree to which individuals will be able to act in all the ways that I describe as central to the emergence of a networked information economy and the freedoms it makes possible.
+
+At the physical layer, the transition to broadband has been accompanied by a more concentrated market structure for physical wires and connections, and less regulation of the degree to which owners can control the flow of ,{[pg 25]}, information on their networks. The emergence of open wireless networks, based on "spectrum commons," counteracts this trend to some extent, as does the current apparent business practice of broadband owners not to use their ownership to control the flow of information over their networks. Efforts to overcome the broadband market concentration through the development of municipal broadband networks are currently highly contested in legislation and courts. The single most threatening development at the physical layer has been an effort driven primarily by Hollywood, over the past few years, to require the manufacturers of computation devices to design their systems so as to enforce the copyright claims and permissions imposed by the owners of digital copyrighted works. Should this effort succeed, the core characteristic of computers--that they are general-purpose devices whose abilities can be configured and changed over time by their owners as uses and preferences change--will be abandoned in favor of machines that can be trusted to perform according to factory specifications, irrespective of what their owners wish. The primary reason that these laws have not yet passed, and are unlikely to pass, is that the computer hardware and software, and electronics and telecommunications industries all understand that such a law would undermine their innovation and creativity. At the logical layer, we are seeing a concerted effort, again headed primarily by Hollywood and the recording industry, to shape the software and standards to make sure that digitally encoded cultural products can continue to be sold as packaged goods. The Digital Millennium Copyright Act and the assault on peer-to-peer technologies are the most obvious in this regard.
+
+More generally, information, knowledge, and culture are being subjected to a second enclosure movement, as James Boyle has recently explored in depth. The freedom of action for individuals who wish to produce information, knowledge, and culture is being systematically curtailed in order to secure the economic returns demanded by the manufacturers of the industrial information economy. A rich literature in law has developed in response to this increasing enclosure over the past twenty years. It started with David Lange's evocative exploration of the public domain and Pamela Samuelson's prescient critique of the application of copyright to computer programs and digital materials, and continued through Jessica Litman's work on the public domain and digital copyright and Boyle's exploration of the basic romantic assumptions underlying our emerging "intellectual property" construct and the need for an environmentalist framework for preserving the public domain. It reached its most eloquent expression in Lawrence Lessig's arguments ,{[pg 26]}, for the centrality of free exchange of ideas and information to our most creative endeavors, and his diagnoses of the destructive effects of the present enclosure movement. This growing skepticism among legal academics has been matched by a long-standing skepticism among economists (to which I devote much discussion in chapter 2). The lack of either analytic or empirical foundation for the regulatory drive toward ever-stronger proprietary rights has not, however, resulted in a transformed politics of the regulation of intellectual production.
Only recently have we begun to see a politics of information policy and "intellectual property" emerge from a combination of popular politics among computer engineers, college students, and activists concerned with the global poor; a reorientation of traditional media advocates; and a very gradual realization by high-technology firms that rules pushed by Hollywood can impede the growth of computer-based businesses. This political countermovement is tied to quite basic characteristics of the technology of computer communications, and to the persistent and growing social practices of sharing--some, like p2p (peer-to-peer) file sharing, in direct opposition to proprietary claims; others, increasingly, are instances of the emerging practices of making information on nonproprietary models and of individuals sharing what they themselves made in social, rather than market patterns. These economic and social forces are pushing at each other in opposite directions, and each is trying to mold the legal environment to better accommodate its requirements. We still stand at a point where information production could be regulated so that, for most users, it will be forced back into the industrial model, squelching the emerging model of individual, radically decentralized, and nonmarket production and its attendant improvements in freedom and justice.
+
+Social and economic organization is not infinitely malleable. Neither is it always equally open to affirmative design. The actual practices of human interaction with information, knowledge, and culture and with production and consumption are the consequence of a feedback effect between social practices, economic organization, technological affordances, and formal constraints on behavior through law and similar institutional forms. These components of the constraints and affordances of human behavior tend to adapt dynamically to each other, so that the tension between the technological affordances, the social and economic practices, and the law is often not too great. During periods of stability, these components of the structure within which human beings live are mostly aligned and mutually reinforce ,{[pg 27]}, each other, but the stability is subject to shock at any one of these dimensions. Sometimes shock can come in the form of economic crisis, as it did in the United States during the Great Depression. Often it can come from an external physical threat to social institutions, like a war. Sometimes, though probably rarely, it can come from law, as, some would argue, it came from the desegregation decision in /{Brown v. Board of Education}/. Sometimes it can come from technology; the introduction of print was such a perturbation, as was, surely, the steam engine. The introduction of the high-capacity mechanical presses and telegraph ushered in the era of mass media. The introduction of radio created a similar perturbation, which for a brief moment destabilized the mass-media model, but quickly converged to it. In each case, the period of perturbation offered more opportunities and greater risks than the periods of relative stability. During periods of perturbation, more of the ways in which society organizes itself are up for grabs; more can be renegotiated, as the various other components of human stability adjust to the changes.
To borrow Stephen Jay Gould's term from evolutionary theory, human societies exist in a series of punctuated equilibria. The periods of disequilibrium are not necessarily long. A mere twenty-five years passed between the invention of radio and its adaptation to the mass-media model. A similar period passed between the introduction of telephony and its adoption of the monopoly utility form that enabled only one-to-one limited communications. In each of these periods, various paths could have been taken. Radio showed us even within the past century how, in some societies, different paths were in fact taken and then sustained over decades. After a period of instability, however, the various elements of human behavioral constraint and affordances settled on a new stable alignment. During periods of stability, we can probably hope for little more than tinkering at the edges of the human condition.
+
+This book is offered, then, as a challenge to contemporary liberal democracies. We are in the midst of a technological, economic, and organizational transformation that allows us to renegotiate the terms of freedom, justice, and productivity in the information society. How we shall live in this new environment will in some significant measure depend on policy choices that we make over the next decade or so. To be able to understand these choices, to be able to make them well, we must recognize that they are part of what is fundamentally a social and political choice--a choice about how to be free, equal, productive human beings under a new set of technological and ,{[pg 28]}, economic conditions. As economic policy, allowing yesterday's winners to dictate the terms of tomorrow's economic competition would be disastrous. As social policy, missing an opportunity to enrich democracy, freedom, and justice in our society while maintaining or even enhancing our productivity would be unforgivable. ,{[pg 29]},
+
+:C~ Part One - The Networked Information Economy
+
+1~p1 Introduction
+
+For more than 150 years, new communications technologies have tended to concentrate and commercialize the production and exchange of information, while extending the geographic and social reach of information distribution networks. High-volume mechanical presses and the telegraph combined with new business practices to change newspapers from small-circulation local efforts into mass media. Newspapers became means of communications intended to reach ever-larger and more dispersed audiences, and their management required substantial capital investment. As the size of the audience and its geographic and social dispersion increased, public discourse developed an increasingly one-way model. Information and opinion that was widely known and formed the shared basis for political conversation and broad social relations flowed from ever more capital-intensive commercial and professional producers to passive, undifferentiated consumers. It was a model easily adopted and amplified by radio, television, and later cable and satellite communications. This trend did not cover all forms of communication and culture. Telephones and personal interactions, most importantly, ,{[pg 30]}, and small-scale distributions, like mimeographed handbills, were obvious alternatives. Yet the growth of efficient transportation and effective large-scale managerial and administrative structures meant that the sources of effective political and economic power extended over larger geographic areas and required reaching a larger and more geographically dispersed population. The economics of long-distance mass distribution systems necessary to reach this constantly increasing and more dispersed relevant population were typified by high up-front costs and low marginal costs of distribution. 
These cost characteristics drove cultural production toward delivery to ever-wider audiences of increasingly high production-value goods, whose fixed costs could be spread over ever-larger audiences--like television series, recorded music, and movies. Because of these economic characteristics, the mass-media model of information and cultural production and transmission became the dominant form of public communication in the twentieth century.
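The cost arithmetic behind this argument can be made concrete with a minimal sketch. The numbers below are invented and purely illustrative (nothing in the text specifies them); the point is only the shape of the curve: with high up-front costs and low marginal costs, the average cost per copy falls steeply as the audience grows, which is what rewards selling many units of a few high production-value goods.

```python
# Illustrative only: invented numbers for a high fixed-cost, low marginal-cost good.
FIXED_COST = 10_000_000   # hypothetical up-front production cost (e.g., a television series)
MARGINAL_COST = 0.50      # hypothetical cost of delivering one additional copy

def average_cost_per_copy(copies):
    """Average cost per copy: fixed cost spread over the audience, plus marginal cost."""
    return FIXED_COST / copies + MARGINAL_COST

# Spreading the same fixed cost over ever-larger audiences:
for audience in (10_000, 1_000_000, 100_000_000):
    print(f"{audience:>11,} copies -> ${average_cost_per_copy(audience):,.2f} per copy")
```

Under these assumed figures, per-copy cost drops from roughly a thousand dollars at a small audience toward the marginal cost alone at mass-media scale, which is the economic logic driving production toward ever-wider audiences.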
+
+The Internet presents the possibility of a radical reversal of this long trend. It is the first modern communications medium that expands its reach by decentralizing the capital structure of production and distribution of information, culture, and knowledge. Much of the physical capital that embeds most of the intelligence in the network is widely diffused and owned by end users. Network routers and servers are not qualitatively different from the computers that end users own, unlike broadcast stations or cable systems, which are radically different in economic and technical terms from the televisions that receive their signals. This basic change in the material conditions of information and cultural production and distribution has substantial effects on how we come to know the world we occupy and the alternative courses of action open to us as individuals and as social actors. Through these effects, the emerging networked environment structures how we perceive and pursue core values in modern liberal societies.
+
+Technology alone does not, however, determine social structure. The introduction of print in China and Korea did not induce the kind of profound religious and political reformation that followed the printed Bible and disputations in Europe. But technology is not irrelevant, either. Luther's were not the first disputations nailed to a church door. Print, however, made it practically feasible for more than 300,000 copies of Luther's publications to be circulated between 1517 and 1520 in a way that earlier disputations could not have been.~{ Elizabeth Eisenstein, Printing Press as an Agent of Change (Cambridge: Cambridge University Press, 1979). }~ Vernacular reading of the Bible became a feasible form of religious self-direction only when printing these Bibles and making them ,{[pg 31]}, available to individual households became economically feasible, and not when all copyists were either monks or otherwise dependent on the church. Technology creates feasibility spaces for social practice. Some things become easier and cheaper, others harder and more expensive to do or to prevent under different technological conditions. The interaction between these technological-economic feasibility spaces, and the social responses to these changes--both in terms of institutional changes, like law and regulation, and in terms of changing social practices--define the qualities of a period. The way life is actually lived by people within a given set of interlocking technological, economic, institutional, and social practices is what makes a society attractive or unattractive, what renders its practices laudable or lamentable.
+
+A particular confluence of technical and economic changes is now altering the way we produce and exchange information, knowledge, and culture in ways that could redefine basic practices, first in the most advanced economies, and eventually around the globe. The potential break from the past 150 years is masked by the somewhat liberal use of the term "information economy" in various permutations since the 1970s. The term has been used widely to signify the dramatic increase in the importance of usable information as a means of controlling production and the flow of inputs, outputs, and services. While often evoked as parallel to the "postindustrial" stage, in fact, the information economy was tightly linked throughout the twentieth century with controlling the processes of the industrial economy. This is clearest in the case of accounting firms and financial markets, but is true of the industrial modalities of organizing cultural production as well. Hollywood, the broadcast networks, and the recording industry were built around a physical production model. Once the cultural utterances, the songs or movies, were initially produced and fixed in some means of storage and transmission, the economics of production and distribution of these physical goods took over. Making the initial utterances and the physical goods that embodied them required high capital investment up front. Making many copies was not much more expensive than making few copies, and very much cheaper on a per-copy basis. These industries therefore organized themselves to invest large sums in making a small number of high production-value cultural "artifacts," which were then either replicated and stamped onto many low-cost copies of each artifact, or broadcast or distributed through high-cost systems for low marginal cost ephemeral consumption on screens and with receivers. This required an effort to manage demand for those ,{[pg 32]}, products that were in fact recorded and replicated or distributed, so as to make sure that the producers could sell many units of a small number of cultural utterances at a low per-unit cost, rather than few units each of many cultural utterances at higher per-unit costs. Because of its focus around capital-intensive production and distribution techniques, this first stage might best be thought of as the "industrial information economy."
+
+Radical decentralization of intelligence in our communications network and the centrality of information, knowledge, culture, and ideas to advanced economic activity are leading to a new stage of the information economy-- the networked information economy. In this new stage, we can harness many more of the diverse paths and mechanisms for cultural transmission that were muted by the economies of scale that led to the rise of the concentrated, controlled form of mass media, whether commercial or state-run. The most important aspect of the networked information economy is the possibility it opens for reversing the control focus of the industrial information economy. In particular, it holds out the possibility of reversing two trends in cultural production central to the project of control: concentration and commercialization.
+
+Two fundamental facts have changed in the economic ecology in which the industrial information enterprises have arisen. First, the basic output that has become dominant in the most advanced economies is human meaning and communication. Second, the basic physical capital necessary to express and communicate human meaning is the connected personal computer. The core functionalities of processing, storage, and communications are widely owned throughout the population of users. Together, these changes destabilize the industrial stage of the information economy. Both the capacity to make meaning--to encode and decode humanly meaningful statements-- and the capacity to communicate one's meaning around the world, are held by, or readily available to, at least many hundreds of millions of users around the globe. Any person who has information can connect with any other person who wants it, and anyone who wants to make it mean something in some context, can do so. The high capital costs that were a prerequisite to gathering, working, and communicating information, knowledge, and culture, have now been widely distributed in the society. The entry barrier they posed no longer offers a condensation point for the large organizations that once dominated the information environment. Instead, emerging models of information and cultural production, radically decentralized and based on ,{[pg 33]}, emergent patterns of cooperation and sharing, but also of simple coordinate coexistence, are beginning to take on an ever-larger role in how we produce meaning--information, knowledge, and culture--in the networked information economy.
+
+A Google response to a query, which returns dozens or more sites with answers to an information question you may have, is an example of coordinate coexistence producing information. As Jessica Litman demonstrated in Sharing and Stealing, hundreds of independent producers of information, acting for reasons ranging from hobby and fun to work and sales, produce information, independently and at widely varying costs, related to what you were looking for. They all coexist without knowing of each other, most of them without thinking or planning on serving you in particular, or even a class of user like you. Yet the sheer volume and diversity of interests and sources allows their distributed, unrelated efforts to be coordinated-- through the Google algorithm in this case, but also through many others-- into a picture that has meaning and provides the answer to your question. Other, more deeply engaged and cooperative enterprises are also emerging on the Internet. /{Wikipedia}/, a multilingual encyclopedia coauthored by fifty thousand volunteers, is one particularly effective example of many such enterprises.
+
+The technical conditions of communication and information processing are enabling the emergence of new social and economic practices of information and knowledge production. Eisenstein carefully documented how print loosened the power of the church over information and knowledge production in Europe, and enabled, particularly in the Protestant North, the emergence of early modern capitalist enterprises in the form of print shops. These printers were able to use their market revenues to become independent of the church or the princes, as copyists never were, and to form the economic and social basis of a liberal, market-based freedom of thought and communication. Over the past century and a half, these early printers turned into the commercial mass media: A particular type of market-based production--concentrated, largely homogenous, and highly commercialized--that came to dominate our information environment by the end of the twentieth century. Against the background of that dominant role, the possibility that a radically different form of information production will emerge--decentralized; socially, no less than commercially, driven; and as diverse as human thought itself--offers the promise of a deep change in how we see the world ,{[pg 34]}, around us, how we come to know about it and evaluate it, and how we are capable of communicating with others about what we know, believe, and plan.
+
+This part of the book is dedicated to explaining the technological-economic transformation that is making these practices possible. Not because economics drives all; not because technology determines the way society or communication go; but because it is the technological shock, combined with the economic sustainability of the emerging social practices, that creates the new set of social and political opportunities that are the subject of this book. By working out the economics of these practices, we can understand the economic parameters within which practical political imagination and fulfillment can operate in the digitally networked environment. I describe sustained productive enterprises that take the form of decentralized and nonmarket-based production, and explain why productivity and growth are consistent with a shift toward such modes of production. What I describe is not an exercise in pastoral utopianism. It is not a vision of a return to production in a preindustrial world. It is a practical possibility that directly results from our economic understanding of information and culture as objects of production. It flows from fairly standard economic analysis applied to a very nonstandard economic reality: one in which all the means of producing and exchanging information and culture are placed in the hands of hundreds of millions, and eventually billions, of people around the world, available for them to work with not only when they are functioning in the market to keep body and soul together, but also, and with equal efficacy, when they are functioning in society and alone, trying to give meaning to their lives as individuals and as social beings. ,{[pg 35]},
+
+1~2 Chapter 2 - Some Basic Economics of Information Production and Innovation
+
+There are no noncommercial automobile manufacturers. There are no volunteer steel foundries. You would never choose to have your primary source of bread depend on voluntary contributions from others. Nevertheless, scientists working at noncommercial research institutes funded by nonprofit educational institutions and government grants produce most of our basic science. Widespread cooperative networks of volunteers write the software and standards that run most of the Internet and enable what we do with it. Many people turn to National Public Radio or the BBC as a reliable source of news. What is it about information that explains this difference? Why do we rely almost exclusively on markets and commercial firms to produce cars, steel, and wheat, but much less so for the most critical information our advanced societies depend on? Is this a historical contingency, or is there something about information as an object of production that makes nonmarket production attractive?
+
+The technical economic answer is that certain characteristics of information and culture lead us to understand them as "public ,{[pg 36]}, goods," rather than as "pure private goods" or standard "economic goods." When economists speak of information, they usually say that it is "nonrival." We consider a good to be nonrival when its consumption by one person does not make it any less available for consumption by another. Once such a good is produced, no more social resources need be invested in creating more of it to satisfy the next consumer. Apples are rival. If I eat this apple, you cannot eat it. If you nonetheless want to eat an apple, more resources (trees, labor) need to be diverted from, say, building chairs, to growing apples, to satisfy you. The social cost of your consuming the second apple is the cost of not using the resources needed to grow the second apple (the wood from the tree) in their next best use. In other words, it is the cost to society of not having the additional chairs that could have been made from the tree. Information is nonrival. Once a scientist has established a fact, or once Tolstoy has written War and Peace, neither the scientist nor Tolstoy need spend a single second on producing additional War and Peace manuscripts or studies for the one-hundredth, one-thousandth, or one-millionth user of what they wrote. The physical paper for the book or journal costs something, but the information itself need only be created once. Economists call such goods "public" because a market will not produce them if priced at their marginal cost--zero. In order to provide Tolstoy or the scientist with income, we regulate publishing: We pass laws that enable their publishers to prevent competitors from entering the market. Because no competitors are permitted into the market for copies of War and Peace, the publishers can price the contents of the book or journal at above their actual marginal cost of zero. They can then turn some of that excess revenue over to Tolstoy. Even if these laws are therefore necessary to create the incentives for publication, the market that develops based on them will, from the technical economic perspective, systematically be inefficient. As Kenneth Arrow put it in 1962, "precisely to the extent that [property] is effective, there is underutilization of the information."~{ The full statement was: "[A]ny information obtained, say a new method of production, should, from the welfare point of view, be available free of charge (apart from the costs of transmitting information). This insures optimal utilization of the information but of course provides no incentive for investment in research. In a free enterprise economy, inventive activity is supported by using the invention to create property rights; precisely to the extent that it is successful, there is an underutilization of information." Kenneth Arrow, "Economic Welfare and the Allocation of Resources for Invention," in Rate and Direction of Inventive Activity: Economic and Social Factors, ed. Richard R. Nelson (Princeton, NJ: Princeton University Press, 1962), 616-617. }~ Because welfare economics defines a market as producing a good efficiently only when it is pricing the good at its marginal cost, a good like information (and culture and knowledge are, for purposes of economics, forms of information), which can never be sold both at a positive (greater than zero) price and at its marginal cost, is fundamentally a candidate for substantial nonmarket production.
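Arrow's underutilization point can be made concrete with a toy calculation; the numbers below are hypothetical, chosen only to illustrate the logic, not drawn from the text. With marginal cost at zero, serving every reader who places any value on the work is efficient; any positive price excludes some of them.

```python
# Toy illustration of Arrow's underutilization point (hypothetical numbers).
# Ten potential readers value a nonrival work at $1..$10; marginal cost is $0,
# so serving an extra reader consumes no social resources.
valuations = list(range(1, 11))
MARGINAL_COST = 0

def surplus(price):
    """Total value delivered to everyone willing to buy at this price."""
    return sum(v - MARGINAL_COST for v in valuations if v >= price)

efficient = surplus(0)  # all ten readers served: 1 + 2 + ... + 10 = 55

# A rights holder instead picks the revenue-maximizing price.
monopoly_price = max(range(11), key=lambda p: p * sum(1 for v in valuations if v >= p))
served_surplus = surplus(monopoly_price)
deadweight_loss = efficient - served_surplus

print(monopoly_price, served_surplus, deadweight_loss)  # → 5 45 10
```

Here the revenue-maximizing price of 5 serves only the six highest-valuation readers; the value the other four would have enjoyed at no social cost is simply lost.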
+
+This widely held explanation of the economics of information production has led to an understanding that markets based on patents or copyrights involve a trade-off between static and dynamic efficiency. That is, looking ,{[pg 37]}, at the state of the world on any given day, it is inefficient that people and firms sell the information they possess. From the perspective of a society's overall welfare, the most efficient thing would be for those who possess information to give it away for free--or rather, for the cost of communicating it and no more. On any given day, enforcing copyright law leads to inefficient underutilization of copyrighted information. However, looking at the problem of information production over time, the standard defense of exclusive rights like copyright expects firms and people not to produce if they know that their products will be available for anyone to take for free. In order to harness the efforts of individuals and firms that want to make money, we are willing to trade off some static inefficiency to achieve dynamic efficiency. That is, we are willing to have some inefficient lack of access to information every day, in exchange for getting more people involved in information production over time. Authors and inventors or, more commonly, companies that contract with musicians and filmmakers, scientists, and engineers, will invest in research and create cultural goods because they expect to sell their information products. Over time, this incentive effect will give us more innovation and creativity, which will outweigh the inefficiency at any given moment caused by selling the information at above its marginal cost. This defense of exclusive rights is limited by the extent to which it correctly describes the motivations of information producers and the business models open to them to appropriate the benefits of their investments. If some information producers do not need to capture the economic benefits of their particular information outputs, or if some businesses can capture the economic value of their information production by means other than exclusive control over their products, then the justification for regulating access by granting copyrights or patents is weakened. As I will discuss in detail, both of these limits on the standard defense are in fact the case.
+
+Nonrivalry, moreover, is not the only quirky characteristic of information production as an economic phenomenon. The other crucial quirkiness is that information is both input and output of its own production process. In order to write today's academic or news article, I need access to yesterday's articles and reports. In order to write today's novel, movie, or song, I need to use and rework existing cultural forms, such as story lines and twists. This characteristic is known to economists as the "on the shoulders of giants" effect, recalling a statement attributed to Isaac Newton: "If I have seen farther it is because I stand on the shoulders of giants."~{ Suzanne Scotchmer, "Standing on the Shoulders of Giants: Cumulative Research and the Patent Law," Journal of Economic Perspectives 5 (1991): 29-41. }~ This second quirkiness ,{[pg 38]}, of information as a production good makes property-like exclusive rights less appealing as the dominant institutional arrangement for information and cultural production than it would have been had the sole quirky characteristic of information been its nonrivalry. The reason is that if any new information good or innovation builds on existing information, then strengthening intellectual property rights increases the prices that those who invest in producing information today must pay to those who did so yesterday, in addition to increasing the rewards an information producer can get tomorrow. Given the nonrivalry, those payments made today for yesterday's information are all inefficiently too high, from today's perspective. They are all above the marginal cost--zero. Today's users of information are not only today's readers and consumers. They are also today's producers and tomorrow's innovators. Their net benefit from a strengthened patent or copyright regime, given not only increased potential revenues but also the increased costs, may be negative. If we pass a law that regulates information production too strictly, allowing its beneficiaries to impose prices that are too high on today's innovators, then we will have not only too little consumption of information today, but also too little production of new information for tomorrow.
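The trade-off described above can be sketched as back-of-the-envelope arithmetic; the figures are hypothetical, meant only to show how the net effect turns negative for creators who rely on many existing inputs.

```python
# Hypothetical sketch of the "on the shoulders of giants" trade-off.
# Stronger exclusive rights raise the reward on a creator's one new work,
# but also raise the license fees owed on every existing work it builds upon.
def net_gain(inputs_used, reward_increase, fee_increase_per_input):
    """Change in a creator's net position when rights are strengthened."""
    return reward_increase - inputs_used * fee_increase_per_input

# A creator reworking five protected inputs loses overall,
# while one drawing on a single input still gains.
print(net_gain(5, 100, 30))  # → -50
print(net_gain(1, 100, 30))  # → 70
```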
+
+Perhaps the most amazing document of the consensus among economists today that, because of the combination of nonrivalry and the "on the shoulders of giants" effect, excessive expansion of "intellectual property" protection is economically detrimental, was the economists' brief filed in the Supreme Court case of /{Eldred v. Ashcroft}/.~{ Eldred v. Ashcroft, 537 U.S. 186 (2003). }~ The case challenged a law that extended the term of copyright protection from lasting for the life of the author plus fifty years, to life of the author plus seventy years, or from seventy-five years to ninety-five years for copyrights owned by corporations. If information were like land or iron, the ideal length of property rights would be infinite from the economists' perspective. In this case, however, where the "property right" was copyright, more than two dozen leading economists volunteered to sign a brief opposing the law, counting among their number five Nobel laureates, including that well-known market skeptic, Milton Friedman.
+
+The efficiency of regulating information, knowledge, and cultural production through strong copyright and patent is not only theoretically ambiguous, it also lacks empirical basis. The empirical work trying to assess the impact of intellectual property on innovation has focused to date on patents. The evidence provides little basis to support stronger and increasing exclusive ,{[pg 39]}, rights of the type we saw in the last two and a half decades of the twentieth century. Practically no studies show a clear-cut benefit to stronger or longer patents.~{ Adam Jaffe, "The U.S. Patent System in Transition: Policy Innovation and the Innovation Process," Research Policy 29 (2000): 531. }~ In perhaps one of the most startling papers on the economics of innovation published in the past few years, Josh Lerner looked at changes in intellectual property law in sixty countries over a period of 150 years. He studied close to three hundred policy changes, and found that, both in developing countries and in economically advanced countries that already have patent law, patenting both at home and abroad by domestic firms of the country that made the policy change, a proxy for their investment in research and development, decreases slightly when patent law is strengthened!~{ Josh Lerner, "Patent Protection and Innovation Over 150 Years" (working paper no. 8977, National Bureau of Economic Research, Cambridge, MA, 2002). }~ The implication is that when a country--either one that already has a significant patent system, or a developing nation--increases its patent protection, it slightly decreases the level of investment in innovation by local firms. Going on intuitions alone, without understanding the background theory, this seems implausible--why would inventors or companies innovate less when they get more protection? Once you understand the interaction of nonrivalry and the "on the shoulders of giants" effect, the findings are entirely consistent with theory. Increasing patent protection, both in developing nations that are net importers of existing technology and science, and in developed nations that already have a degree of patent protection, and therefore some nontrivial protection for inventors, increases the costs that current innovators have to pay on existing knowledge more than it increases their ability to appropriate the value of their own contributions. When one cuts through the rent-seeking politics of intellectual property lobbies like the pharmaceutical companies or Hollywood and the recording industry; when one overcomes the honestly erroneous, but nonetheless conscience-soothing beliefs of lawyers who defend the copyright and patent-dependent industries and the judges they later become, the reality of both theory and empirics in the economics of intellectual property is that there is remarkably little support for regulating information, knowledge, and cultural production through the tools of intellectual property law.
+
+Where do innovation and information production come from, then, if they do not come as much from intellectual-property-based market actors, as many generally believe? The answer is that they come mostly from a mixture of (1) nonmarket sources--both state and nonstate--and (2) market actors whose business models do not depend on the regulatory framework of intellectual property. The former type of producer is the expected answer, ,{[pg 40]}, within mainstream economics, for a public goods problem like information production. The National Institutes of Health, the National Science Foundation, and the Defense Department are major sources of funding for research in the United States, as are government agencies in Europe (at both the national and the European level), in Japan, and in other major industrialized nations. The latter type--that is, the presence and importance of market-based producers whose business models do not require and do not depend on intellectual property protection--is not theoretically predicted by that model, but is entirely obvious once you begin to think about it.
+
+Consider a daily newspaper. Normally, we think of newspapers as dependent on copyrights. In fact, however, that would be a mistake. No daily newspaper would survive if it depended for its business on waiting until a competitor came out with an edition, then copied the stories, and reproduced them in a competing edition. Daily newspapers earn their revenue from a combination of low-priced newsstand sales or subscriptions together with advertising revenues. Neither of those is copyright dependent once we understand that consumers will not wait half a day until the competitor's paper comes out to save a nickel or a quarter on the price of the newspaper. If all copyright on newspapers were abolished, the revenues of newspapers would be little affected.~{ At most, a "hot news" exception on the model of /{International News Service v. Associated Press}/, 248 U.S. 215 (1918), might be required. Even that, however, would only be applicable to online editions that are for pay. In paper, habits of reading, accreditation of the original paper, and first-to-market advantages of even a few hours would be enough. Online, where the first-to-market advantage could shrink to seconds, "hot news" protection may be worthwhile. However, almost all papers are available for free and rely solely on advertising. The benefits of reading a copied version are, at that point, practically insignificant to the reader. }~ Take, for example, the 2003 annual reports of a few of the leading newspaper companies in the United States. The New York Times Company receives a little more than $3 billion a year from advertising and circulation revenues, and a little more than $200 million a year in revenues from all other sources. Even if the entire amount of "other sources" were from syndication of stories and photos--which likely overstates the role of these copyright-dependent sources--it would account for little more than 6 percent of total revenues. The net operating revenues for the Gannett Company were more than $5.6 billion in newspaper advertising and circulation revenue, relative to about $380 million in all other revenues. As with the New York Times, at most a little more than 6 percent of revenues could be attributed to copyright-dependent activities. For Knight Ridder, the 2003 numbers were $2.8 billion and $100 million, respectively, or a maximum of about 3.5 percent from copyrights. Given these numbers, it is safe to say that daily newspapers are not a copyright-dependent industry, although they are clearly a market-based information production industry.
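The percentages cited in this paragraph are simple revenue shares. As a quick check, a short calculation (figures in millions of dollars, rounded as in the text) reproduces them:

```python
# Revenue shares for 2003, as cited in the text (millions of dollars, rounded).
# "Other" revenue is treated as an upper bound on copyright-dependent income.
companies = {
    "New York Times Co.": {"ads_and_circulation": 3000, "other": 200},
    "Gannett": {"ads_and_circulation": 5600, "other": 380},
    "Knight Ridder": {"ads_and_circulation": 2800, "other": 100},
}

for name, rev in companies.items():
    total = rev["ads_and_circulation"] + rev["other"]
    share = 100 * rev["other"] / total
    print(f"{name}: at most {share:.1f}% from copyright-dependent sources")
```

The shares come out at roughly 6.3, 6.4, and 3.4 percent, matching the "little more than 6 percent" and "about 3.5 percent" figures in the text.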
+
+As it turns out, repeated survey studies since 1981 have shown that in all industrial sectors except for very few--most notably pharmaceuticals--firm managers do not see patents as the most important way they capture the ,{[pg 41]}, benefits of their research and development.~{ Wesley Cohen, R. Nelson, and J. Walsh, "Protecting Their Intellectual Assets: Appropriability Conditions and Why U.S. Manufacturing Firms Patent (or Not)" (working paper no. 7552, National Bureau of Economic Research, Cambridge, MA, 2000); Richard Levin et al., "Appropriating the Returns from Industrial Research and Development," Brookings Papers on Economic Activity 3 (1987): 783; Edwin Mansfield et al., "Imitation Costs and Patents: An Empirical Study," The Economic Journal 91 (1981): 907. }~ They rank the advantages that strong research and development gives them in lowering the cost or improving the quality of manufacture, being the first in the market, or developing strong marketing relationships as more important than patents. The term "intellectual property" has high cultural visibility today. Hollywood, the recording industry, and pharmaceuticals occupy center stage on the national and international policy agenda for information policy. However, in the overall mix of our information, knowledge, and cultural production system, the total weight of these exclusivity-based market actors is surprisingly small relative to the combination of nonmarket sectors, government and nonprofit, and market-based actors whose business models do not depend on proprietary exclusion from their information outputs.
+
+The upshot of the mainstream economic analysis of information production today is that the widely held intuition that markets are more or less the best way to produce goods, that property rights and contracts are efficient ways of organizing production decisions, and that subsidies distort production decisions, is only very ambiguously applicable to information. While exclusive rights-based production can partially solve the problem of how information will be produced in our society, a comprehensive regulatory system that tries to mimic property in this area--such as both the United States and the European Union have tried to implement internally and through international agreements--simply cannot work perfectly, even in an ideal market posited by the most abstract economics models. Instead, we find the majority of businesses in most sectors reporting that they do not rely on intellectual property as a primary mechanism for appropriating the benefits of their research and development investments. In addition, we find mainstream economists believing that there is a substantial role for government funding; that nonprofit research can be more efficient than for-profit research; and, otherwise, that nonproprietary production can play an important role in our information production system.
+
+2~ THE DIVERSITY OF STRATEGIES IN OUR CURRENT INFORMATION PRODUCTION SYSTEM
+
+The actual universe of information production in the economy, then, is not as dependent on property rights and markets in information goods as the last quarter century's increasing obsession with "intellectual property" might ,{[pg 42]}, suggest. Instead, what we see both from empirical work and theoretical work is that individuals and firms in the economy produce information using a wide range of strategies. Some of these strategies indeed rely on exclusive rights like patents or copyrights, and aim at selling information as a good into an information market. Many, however, do not. In order to provide some texture to what these models look like, we can outline a series of ideal-type "business" strategies for producing information. The point here is not to provide an exhaustive map of the empirical business literature. It is, instead, to offer a simple analytic framework within which to understand the mix of strategies available for firms and individuals to appropriate the benefits of their investments--of time, money, or both, in activities that result in the production of information, knowledge, and culture. The differentiating parameters are simple: cost minimization and benefit maximization. Any of these strategies could use inputs that are already owned--such as existing lyrics for a song or a patented invention to improve on--by buying a license from the owner of the exclusive rights for the existing information. Cost minimization here refers purely to ideal-type strategies for obtaining as many of the information inputs as possible at their marginal cost of zero, instead of buying licenses to inputs at a positive market price. It can be pursued by using materials from the public domain, by using materials the producer itself owns, or by sharing/bartering for information inputs owned by others in exchange for one's own information inputs. Benefits can be obtained either in reliance on asserting one's exclusive rights, or by following a nonexclusive strategy, using some other mechanism that improves the position of the information producer because they invested in producing the information. Nonexclusive strategies for benefit maximization can be pursued both by market actors and by nonmarket actors. Table 2.1 maps nine ideal-type strategies characterized by these components.
+
+The ideal-type strategy that underlies patents and copyrights can be thought of as the "Romantic Maximizer." It conceives of the information producer as a single author or inventor laboring creatively--hence romantic--but in expectation of royalties, rather than immortality, beauty, or truth. An individual or small start-up firm that sells software it developed to a larger firm, or an author selling rights to a book or a film, typifies this model. The second ideal type that arises within exclusive-rights-based industries, "Mickey," is a larger firm that already owns an inventory of exclusive rights, some through in-house development, some by buying from Romantic Maximizers. ,{[pg 43]},
+
+!_ Table 2.1: Ideal-Type Information Production Strategies
+
+table{~h c4; 25; 25; 25; 25;
+
+Cost Minimization / Benefit Acquisition
+Public Domain
+Intrafirm
+Barter/Sharing
+
+Rights-based exclusion (make money by exercising exclusive rights - licensing or blocking competition)
+Romantic Maximizers (authors, composers; sell to publishers; sometimes sell to Mickeys)
+Mickey (Disney reuses inventory for derivative works; buys outputs of Romantic Maximizers)
+RCA (small number of companies hold blocking patents; they create patent pools to build valuable goods)
+
+Nonexclusion - Market (make money from information production but not by exercising the exclusive rights)
+Scholarly Lawyers (write articles to get clients; other examples include bands that give music out for free as advertisements for touring and charge money for performance; software developers who develop software and make money from customizing it to a particular client, on-site management, advice and training, not from licensing)
+Know-How (firms whose research gives them cheaper or better production processes, lowering their costs or improving the quality of other goods or services; law offices that build on existing forms)
+Learning Networks (share information with similar organizations - make money from early access to information. For example, newspapers join together to create a wire service; engineers and scientists from different firms attend professional societies to diffuse knowledge)
+
+Nonexclusion - Nonmarket
+Joe Einstein (give away information for free in return for status, benefits to reputation, value for the innovation to themselves; wide range of motivations. Includes members of amateur choirs who perform for free, academics who write articles for fame, people who write op-eds, contribute to mailing lists; many free software developers and free software generally for most uses)
+Los Alamos (share in-house information, rely on in-house inputs to produce valuable public goods used to secure additional government funding and status)
+Limited sharing networks (release a paper to a small number of colleagues to get comments so you can improve it before publication. Make use of the time delay to gain a relative advantage later on, using the Joe Einstein strategy. Share one's information on a formal condition of reciprocity, like "copyleft" conditions on derivative works for distribution)
+
+}table
+
+,{[pg 44]},
+
+A defining cost-reduction mechanism for Mickey is that it applies creative people to work on its own inventory, for which it need not pay above marginal cost prices in the market. This strategy is the most advantageous in an environment of very strong exclusive rights protection for a number of reasons. First, the ability to extract higher rents from the existing inventory of information goods is greatest for firms that (a) have an inventory and (b) rely on asserting exclusive rights as their mode of extracting value. Second, the increased costs of production associated with strong exclusive rights are cushioned by the ability of such firms to rework their existing inventory, rather than trying to work with materials from an ever-shrinking public domain or paying for every source of inspiration and element of a new composition. The coarsest version of this strategy might be found if Disney were to produce a "winter sports" thirty-minute television program by tying together scenes from existing cartoons, say, one in which Goofy plays hockey followed by a snippet of Donald Duck ice skating, and so on. More subtle, and representative of the type of reuse relevant to the analysis here, would be the case where Disney buys the rights to Winnie-the-Pooh, and, after producing an animated version of stories from the original books, then continues to work with the same characters and relationships to create a new film, say, Winnie-the-Pooh--Frankenpooh (or Beauty and the Beast--Enchanted Christmas; or The Little Mermaid--Stormy the Wild Seahorse). The third exclusive-rights-based strategy, which I call "RCA," is barter among the owners of inventories. Patent pools, cross-licensing, and market-sharing agreements among the radio patent holders in 1920-1921, which I describe in chapter 6, are a perfect example.
RCA, GE, AT&T, and Westinghouse held blocking patents that prevented each other and anyone else from manufacturing the best radios possible given technology at that time. The four companies entered an agreement to combine their patents and divide the radio equipment and services markets, which they used throughout the 1920s to exclude competitors and to capture precisely the postinnovation monopoly rents sought to be created by patents.
+
+Exclusive-rights-based business models, however, represent only a fraction of our information production system. There are both market-based and nonmarket models to sustain and organize information production. Together, these account for a substantial portion of our information output. Indeed, industry surveys concerned with patents have shown that the vast majority of industrial R&D is pursued with strategies that do not rely primarily on patents. This does not mean that most or any of the firms that ,{[pg 45]}, pursue these strategies possess or seek no exclusive rights in their information products. It simply means that their production strategy does not depend on asserting these rights through exclusion. One such cluster of strategies, which I call "Scholarly Lawyers," relies on demand-side effects of access to the information the producer distributes. It relies on the fact that sometimes using an information good that one has produced makes its users seek out a relationship with the author. The author then charges for the relationship, not for the information. Doctors or lawyers who publish in trade journals, become known, and get business as a result are an instance of this strategy. An enormously creative industry, much of which operates on this model, is software. About two-thirds of industry revenues in software development come from activities that the Economic Census describes as: (1) writing, modifying, testing, and supporting software to meet the needs of a particular customer; (2) planning and designing computer systems that integrate computer hardware, software, and communication technologies; (3) on-site management and operation of clients' computer systems and/or data processing facilities; and (4) other professional and technical computer-related advice and services, systems consultants, and computer training.
"Software publishing," by contrast, the business model that relies on sales based on copyright, accounts for a little more than one-third of the industry's revenues.~{ In the 2002 Economic Census, compare NAICS categories 5415 (computer systems and related services) to NAICS 5112 (software publishing). Between the 1997 Economic Census and the 2002 census, this ratio remained stable, at about 36 percent in 1997 and 37 percent in 2002. See 2002 Economic Census, "Industry Series, Information, Software Publishers, and Computer Systems, Design and Related Services" (Washington, DC: U.S. Census Bureau, 2004). }~ Interestingly, this is the model of appropriation that more than a decade ago, Esther Dyson and John Perry Barlow heralded as the future of music and musicians. They argued in the early 1990s for more or less free access to copies of recordings distributed online, which would lead to greater attendance at live gigs. Revenue from performances, rather than recording, would pay artists.
+
+The most common models of industrial R&D outside of pharmaceuticals, however, depend on supply-side effects of information production. One central reason to pursue research is its effects on firm-specific advantages, like production know-how, which permit the firm to produce more efficiently than competitors and sell better or cheaper competing products. Daily newspapers collectively fund news agencies, and individually fund reporters, because their ability to find information and report it is a necessary input into their product--timely news. As I have already suggested, they do not need copyright to protect their revenues. Those are protected by the short half-life of dailies. The investments come in order to be able to play in the market for daily newspapers. Similarly, the learning curve and know-how effects in semiconductors are such that early entry into the market for ,{[pg 46]}, a new chip will give the first mover significant advantages over competitors. Investment is then made to capture that position, and the investment is captured by the quasi-rents available from the first-mover advantage. In some cases, innovation is necessary in order to be able to produce at the state of the art. Firms participate in "Learning Networks" to gain the benefits of being at the state of the art, and sharing their respective improvements. However, they can only participate if they innovate. If they do not innovate, they lack the in-house capacity to understand the state of the art and play at it. Their investments are then recouped not from asserting their exclusive rights, but from the fact that they sell into one of a set of markets, access into which is protected by the relatively small number of firms with such absorption capacity, or the ability to function at the edge of the state of the art.
Firms of this sort might barter their information for access, or simply be part of a small group of organizations with enough knowledge to exploit the information generated and informally shared by all participants in these learning networks. They obtain rents from the concentrated market structure, not from assertion of property rights.~{ Levin et al., "Appropriating the Returns," 794-796 (secrecy, lead time, and learning-curve advantages regarded as more effective than patents by most firms). See also F. M. Scherer, "Learning by Doing and International Trade in Semiconductors" (faculty research working paper series R94-13, John F. Kennedy School of Government, Harvard University, Cambridge, MA, 1994), an empirical study of the semiconductor industry suggesting that for industries with steep learning curves, investment in information production is driven by the advantages of being first down the learning curve rather than the expectation of legal rights of exclusion. The absorption effect is described in Wesley M. Cohen and Daniel A. Levinthal, "Innovation and Learning: The Two Faces of R&D," The Economic Journal 99 (1989): 569-596. The collaboration effect was initially described in Richard R. Nelson, "The Simple Economics of Basic Scientific Research," Journal of Political Economy 67 (June 1959): 297-306. The most extensive work over the past fifteen years, and the source of the term "learning networks," has been that of Woody Powell on knowledge and learning networks. The role of markets made concentrated by the limited ability to use information, rather than by exclusive rights, was identified in F. M. Scherer, "Nordhaus's Theory of Optimal Patent Life: A Geometric Reinterpretation," American Economic Review 62 (1972): 422-427.}~
+
+An excellent example of a business strategy based on nonexclusivity is IBM's. The firm has obtained the largest number of patents every year from 1993 to 2004, amassing in total more than 29,000 patents. IBM has also, however, been one of the firms most aggressively engaged in adapting its business model to the emergence of free software. Figure 2.1 shows what happened to the relative weight of patent royalties, licenses, and sales in IBM's revenues and revenues that the firm described as coming from "Linux-related services." Within a span of four years, the Linux-related services category moved from accounting for practically no revenues to providing double the revenues from all patent-related sources for the firm that has been the most patent-productive in the United States. IBM has described itself as having invested more than a billion dollars in free software development, has hired programmers to help develop the Linux kernel and other free software, and has donated patents to the Free Software Foundation. What this does for the firm is provide it with a better operating system for its server business--making the servers better, faster, more reliable, and therefore more valuable to consumers. Participating in free software development has also allowed IBM to develop service relationships with its customers, building on free software to offer customer-specific solutions. In other words, IBM has combined both supply-side and demand-side strategies to adopt a nonproprietary business model that has generated more than $2 billion yearly of business ,{[pg 47]}, for the firm. Its strategy is, if not symbiotic, certainly complementary to free software.
+
+{won_benkler_2_1.png "Figure 2.1: Selected IBM Revenues, 2000-2003" }http://www.jus.uio.no/sisu/
+
+I began this chapter with a puzzle--advanced economies rely on nonmarket organizations for information production much more than in other sectors. The puzzle reflects the fact that alongside the diversity of market-oriented business models for information production there is a wide diversity of nonmarket models as well. At a broad level of abstraction, I designate this diversity of motivations and organizational forms as "Joe Einstein"--to underscore the breadth of the range of social practices and practitioners of nonmarket production. These include universities and other research institutes, government research labs that publicize their work, and government information agencies like the Census Bureau. They also include individuals, like academics, and authors and artists who play to "immortality" rather than seek to maximize the revenue from their creations. Eric von Hippel has for many years documented user innovation in areas ranging from surfboard design to new mechanisms for pushing electric wiring through insulation tiles.~{ Eric von Hippel, Democratizing Innovation (Cambridge, MA: MIT Press, 2005). }~ The Oratorio Society of New York, whose chorus ,{[pg 48]}, members are all volunteers, has filled Carnegie Hall every December with a performance of Handel's Messiah since the hall's first season in 1891. Political parties, advocacy groups, and churches are but a few of the stable social organizations that fill our information environment with news and views. For symmetry purposes in table 2.1, we also see reliance on internal inventories by some nonmarket organizations, like secret government labs that do not release their information outputs, but use them to continue to obtain public funding. This is what I call "Los Alamos." Sharing in limited networks also occurs in nonmarket relationships, as when academic colleagues circulate a draft to get comments.
In the nonmarket, nonproprietary domain, however, these strategies were, in the past, smaller in scope and significance than the simple act of taking from the public domain and contributing back to it that typifies most Joe Einstein behaviors. Only since the mid-1980s have we begun to see a shift from releasing into the public domain to adoption of commons-binding licensing, like the "copyleft" strategies I describe in chapter 3. What makes these strategies distinct from Joe Einstein is that they formalize the requirement of reciprocity, at least for some set of the rights shared.
+
+My point is not to provide an exhaustive list of all the ways we produce information. It is simply to offer some texture to the statement that information, knowledge, and culture are produced in diverse ways in contemporary society. Doing so allows us to understand the comparatively limited role that production based purely on exclusive rights--like patents, copyrights, and similar regulatory constraints on the use and exchange of information--has played in our information production system to this day. It is not new or mysterious to suggest that nonmarket production is important to information production. It is not new or mysterious to suggest that efficiency increases whenever it is possible to produce information in a way that allows the producer--whether market actor or not--to appropriate the benefits of production without actually charging a price for use of the information itself. Such strategies are legion among both market and nonmarket actors. Recognizing this raises two distinct questions: First, how does the cluster of mechanisms that make up intellectual property law affect this mix? Second, how do we account for the mix of strategies at any given time? Why, for example, did proprietary, market-based production become so salient in music and movies in the twentieth century, and what is it about the digitally networked environment that could change this mix? ,{[pg 49]},
+
+2~ THE EFFECTS OF EXCLUSIVE RIGHTS
+
+Once we recognize that there are diverse strategies of appropriation for information production, we come to see a new source of inefficiency caused by strong "intellectual property"-type rights. Recall that in the mainstream analysis, exclusive rights always cause static inefficiency--that is, they allow producers to charge positive prices for products (information) that have a zero marginal cost. Exclusive rights have a more ambiguous effect dynamically. They raise the expected returns from information production, and thereby are thought to induce investment in information production and innovation. However, they also increase the costs of information inputs. If existing innovations are more likely to be covered by patent, then current producers will more often have to pay for innovations or uses that in the past would have been freely available from the public domain. Whether, overall, any given regulatory change that increases the scope of exclusive rights improves or undermines new innovation therefore depends on whether, given the level of appropriability that preceded it, it increased input costs more or less than it increased the prospect of being paid for one's outputs.
+
+The diversity of appropriation strategies adds one more kink to this story. Consider the following very simple hypothetical. Imagine an industry that produces "infowidgets." There are ten firms in the business. Two of them are infowidget publishers on the Romantic Maximizer model. They produce infowidgets as finished goods, and sell them based on patent. Six firms produce infowidgets on supply-side (Know-How) or demand-side (Scholarly Lawyer) effects: they make their Realwidgets or Servicewidgets more efficient or desirable to consumers, respectively. Two firms are nonprofit infowidget producers that exist on a fixed, philanthropically endowed income. Each firm produces five infowidgets, for a total market supply of fifty. Now imagine a change in law that increases exclusivity. Assume that this is a change in law that, absent diversity of appropriation, would be considered efficient. Say it increases input costs by 10 percent and appropriability by 20 percent, for a net expected gain of 10 percent. The two infowidget publishers would each see a 10 percent net gain, and let us assume that this would cause each to increase its efforts by 10 percent and produce 10 percent more infowidgets. Looking at these two firms alone, the change in law caused an increase from ten infowidgets to eleven--a gain for the policy change. Looking at the market as a whole, however, eight firms see an increase of 10 percent in costs, and no gain in appropriability. This is because none of these firms ,{[pg 50]}, actually relies on exclusive rights to appropriate its product's value. If, commensurate with our assumption for the publishers, we assume that this results in a decline in effort and productivity of 10 percent for the eight firms, we would see these firms decline from forty infowidgets to thirty-six, and total market production would decline from fifty infowidgets to forty-seven.
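The arithmetic of this hypothetical can be restated as a minimal sketch. The numbers come from the text; the assumption that each firm's output scales one-for-one with its net expected gain, and the function name, are mine:

```python
# Sketch of the infowidget hypothetical: ten firms, five infowidgets each.
# Only the two publishers appropriate value through exclusive rights, so only
# they benefit from the 20% rise in appropriability; all ten firms bear the
# 10% rise in input costs. Output is assumed to scale one-for-one with each
# firm's net expected gain.

def total_output(publishers=2, other_firms=8, base_output=5,
                 appropriability_gain=0.20, input_cost_increase=0.10):
    # Publishers: +20% appropriability, +10% input costs -> net +10% output.
    publisher_widgets = publishers * base_output * (
        1 + appropriability_gain - input_cost_increase)   # 10 -> 11
    # Nonexclusive firms: only the +10% cost increase -> net -10% output.
    other_widgets = other_firms * base_output * (
        1 - input_cost_increase)                          # 40 -> 36
    return publisher_widgets + other_widgets

# Market-wide output falls from 50 infowidgets to 47, even though the change
# looked like a net gain when only the two publishers were considered.
print(round(total_output(), 2))  # 47.0
```

The same function shows the baseline: with no change in law (both parameters zero), the ten firms produce the original fifty infowidgets.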
+
+Another kind of effect for the change in law may be to persuade some of the firms to shift strategies or to consolidate. Imagine, for example, that most of the inputs required by each of the two publishers were owned by the other. If the two firms merged into one Mickey, each could use the outputs of the other at its marginal cost--zero--instead of at its exclusive-rights market price. The increase in exclusive rights would then not affect the merged firm's costs, only the costs of outside firms that would have to buy the merged firm's outputs from the market. Given this dynamic, strong exclusive rights drive concentration of inventory owners. We see this very clearly in the increasing sizes of inventory-based firms like Disney. Moreover, the increased appropriability in the exclusive-rights market will likely shift some firms at the margin of the nonproprietary business models to adopt proprietary business models. This, in turn, will increase the amount of information available only from proprietary sources. The feedback effect will further accelerate the rise in information input costs, increasing the gains from shifting to a proprietary strategy and to consolidating larger inventories with new production.
+
+Given diverse strategies, the primary unambiguous effect of increasing the scope and force of exclusive rights is to shape the population of business strategies. Strong exclusive rights increase the attractiveness of exclusive-rights-based strategies at the expense of nonproprietary strategies, whether market-based or nonmarket-based. They also increase the value and attraction of consolidation of large inventories of existing information with new production.
+
+2~ WHEN INFORMATION PRODUCTION MEETS THE COMPUTER NETWORK
+
+Music in the nineteenth century was largely a relational good. It was something people did in the physical presence of each other: in the folk way through hearing, repeating, and improvising; in the middle-class way of buying sheet music and playing for guests or attending public performances; or in the upper-class way of hiring musicians. Capital was widely distributed ,{[pg 51]}, among musicians in the form of instruments, or geographically dispersed in the hands of performance hall (and drawing room) owners. Market-based production depended on performance through presence. It provided opportunities for artists to live and perform locally, or to reach stardom in cultural centers, but without displacing the local performers. With the introduction of the phonograph, a new, more passive relationship to played music was made possible in reliance on the high-capital requirements of recording, copying, and distributing specific instantiations of recorded music--records. What developed was a concentrated, commercial industry, based on massive financial investments in advertising, or preference formation, aimed at getting ever-larger crowds to want those recordings that the recording executives had chosen. In other words, the music industry took on a more industrial model of production, and many of the local venues--from the living room to the local dance hall--came to be occupied by mechanical recordings rather than amateur and professional local performances. This model crowded out some, but not all, of the live-performance-based markets (for example, jazz clubs, piano bars, or weddings), and created new live-performance markets--the megastar concert tour. The music industry shifted from a reliance on Scholarly Lawyer and Joe Einstein models to reliance on Romantic Maximizer and Mickey models.
As computers became more music-capable and digital networks became a ubiquitously available distribution medium, we saw the emergence of the present conflict over the regulation of cultural production--the law of copyright--between the twentieth-century, industrial model recording industry and the emerging amateur distribution systems coupled, at least according to its supporters, to a reemergence of decentralized, relation-based markets for professional performance artists.
+
+This stylized story of the music industry typifies the mass media more generally. Since the introduction of the mechanical press and the telegraph, followed by the phonograph, film, the high-powered radio transmitter, and through to the cable plant or satellite, the capital costs of fixing information and cultural goods in a transmission medium--a high-circulation newspaper, a record or movie, a radio or television program--have been high and increasing. The high physical and financial capital costs involved in making a widely accessible information good and distributing it to the increasingly larger communities (brought together by better transportation systems and more interlinked economic and political systems) muted the relative role of nonmarket production, and emphasized the role of those firms that could ,{[pg 52]}, muster the financial and physical capital necessary to communicate on a mass scale. Just as these large, industrial-age machine requirements increased the capital costs involved in information and cultural production, thereby triggering commercialization and concentration of much of this sector, so too ubiquitously available cheap processors have dramatically reduced the capital input costs required to fix information and cultural expressions and communicate them globally. By doing so, they have rendered feasible a radical reorganization of our information and cultural production system, away from heavy reliance on commercial, concentrated business models and toward greater reliance on nonproprietary appropriation strategies, in particular nonmarket strategies whose efficacy was dampened throughout the industrial period by the high capital costs of effective communication.
+
+Information and cultural production have three primary categories of inputs. The first is existing information and culture. We already know that existing information is a nonrival good--that is, its real marginal cost at any given moment is zero. The second major cost is that of the mechanical means of sensing our environment, processing it, and communicating new information goods. This is the high cost that typified the industrial model, and which has drastically declined in computer networks. The third factor is human communicative capacity--the creativity, experience, and cultural awareness necessary to take from the universe of existing information and cultural resources and turn them into new insights, symbols, or representations meaningful to others with whom we converse. Given the zero cost of existing information and the declining cost of communication and processing, human capacity becomes the primary scarce resource in the networked information economy.
+
+Human communicative capacity, however, is an input with radically different characteristics than those of, say, printing presses or satellites. It is held by each individual, and cannot be "transferred" from one person to another or aggregated like so many machines. It is something each of us innately has, though in divergent quanta and qualities. Individual human capacities, rather than the capacity to aggregate financial capital, become the economic core of our information and cultural production. Some of that human capacity is currently, and will continue to be, traded through markets in creative labor. However, its liberation from the constraints of physical capital leaves creative human beings much freer to engage in a wide range of information and cultural production practices than those they could afford to participate in when, in addition to creativity, experience, cultural awareness ,{[pg 53]}, and time, one needed a few million dollars to engage in information production. From our friendships to our communities we live life and exchange ideas, insights, and expressions in many more diverse relations than those mediated by the market. In the physical economy, these relationships were largely relegated to spaces outside of our economic production system. The promise of the networked information economy is to bring this rich diversity of social life smack into the middle of our economy and our productive lives.
+
+Let's do a little experiment. Imagine that you were performing a Web search with me. Imagine that we were using Google as our search engine, and that what we wanted to do was answer the questions of an inquisitive six-year-old about Viking ships. What would we get, sitting in front of our computers and plugging in a search request for "Viking Ships"? The first site is Canadian, and includes a collection of resources, essays, and worksheets. An enterprising elementary school teacher at the Gander Academy in Newfoundland seems to have put these together. He has essays on different questions, and links to sites hosted by a wide range of individuals and organizations, such as a Swedish museum, individual sites hosted on geocities, and even to a specific picture of a replica Viking ship, hosted on a commercial site dedicated to selling nautical replicas. In other words, it is a Joe Einstein site that points to other sites, which in turn use either Joe Einstein or Scholarly Lawyer strategies. This multiplicity of sources of information that show up on the very first site is then replicated as one continues to explore the remaining links. The second link is to a Norwegian site called "the Viking Network," a Web ring dedicated to preparing and hosting short essays on Vikings. It includes brief essays, maps, and external links, such as one to an article in Scientific American. "To become a member you must produce an Information Sheet on the Vikings in your local area and send it in electronic format to Viking Network. Your info-sheet will then be included in the Viking Network web." The third site is maintained by a Danish commercial photographer, and hosted in Copenhagen, in a portion dedicated to photographs of archeological finds and replicas of Danish Viking ships. A retired professor from the University of Pittsburgh runs the fourth. 
The fifth is somewhere between a hobby and a showcase for the services of an individual, independent Web publisher offering publishing-related services. The sixth and seventh are museums, in Norway and Virginia, respectively. The eighth is the Web site of a hobbyists' group dedicated to building Viking Ship replicas. The ninth includes classroom materials and ,{[pg 54]}, teaching guides made freely available on the Internet by PBS, the American Public Broadcasting Service. Certainly, if you perform this search now, as you read this book, the rankings will change from those I saw when I ran it; but I venture that the mix, the range and diversity of producers, and the relative salience of nonmarket producers will not change significantly.
+
+The difference that the digitally networked environment makes is its capacity to increase the efficacy, and therefore the importance, of many more, and more diverse, nonmarket producers falling within the general category of Joe Einstein. It makes nonmarket strategies--from individual hobbyists to formal, well-funded nonprofits--vastly more effective than they could be in the mass-media environment. The economics of this phenomenon are neither mysterious nor complex. Imagine the grade-school teacher who wishes to put together ten to twenty pages of materials on Viking ships for schoolchildren. Pre-Internet, he would need to go to one or more libraries and museums, find books with pictures, maps, and text, or take his own photographs (assuming he was permitted by the museums) and write his own texts, combining this research. He would then need to select portions, clear the copyrights to reprint them, find a printing house that would set his text and pictures in a press, pay to print a number of copies, and then distribute them to all children who wanted them. Clearly, research today is simpler and cheaper. Cutting and pasting pictures and texts that are digital is cheaper. Depending on where the teacher is located, it is possible that these initial steps would have been insurmountable, particularly for a teacher in a poorly endowed community without easy access to books on the subject, where research would have required substantial travel. Even once these barriers were surmounted, in the precomputer, pre-Internet days, turning out materials that looked and felt like a high-quality product, with high-resolution pictures and maps and legible print, required access to capital-intensive facilities. The cost of creating even one copy of such a product would likely dissuade the teacher from producing the booklet. At most, he might have produced a mimeographed bibliography, and perhaps some text reproduced on a photocopier.
Now, place the teacher with a computer and a high-speed Internet connection, at home or in the school library. The costs of producing and distributing the products of his effort are trivial. A Web site can be maintained for a few dollars a month. The computer itself is widely accessible throughout the developed world. It becomes trivial for a teacher to produce the "booklet"--with more information, available to anyone in the world, anywhere, at any time, as long as he is willing to spend ,{[pg 55]}, some of his free time putting together the booklet rather than watching television or reading a book.
+
+When you multiply these very simple stylized facts by the roughly billion people who live in societies sufficiently wealthy to allow cheap ubiquitous Internet access, the breadth and depth of the transformation we are undergoing begins to become clear. A billion people in advanced economies may have between two billion and six billion spare hours among them, every day. In order to harness these billions of hours, it would take the whole workforce of almost 340,000 workers employed by the entire motion picture and recording industries in the United States put together, assuming each worker worked forty-hour weeks without taking a single vacation, for between three and eight and a half years! Beyond the sheer potential quantitative capacity, however one wishes to discount it to account for different levels of talent, knowledge, and motivation, a billion volunteers have qualities that make them more likely to produce what others want to read, see, listen to, or experience. They have diverse interests--as diverse as human culture itself. Some care about Viking ships, others about the integrity of voting machines. Some care about obscure music bands, others share a passion for baking. As Eben Moglen put it, "if you wrap the Internet around every person on the planet and spin the planet, software flows in the network. It's an emergent property of connected human minds that they create things for one another's pleasure and to conquer their uneasy sense of being too alone."~{ Eben Moglen, "Anarchism Triumphant: Free Software and the Death of Copyright," First Monday (1999), http://www.firstmonday.dk/issues/issue4_8/moglen/. 
}~ It is this combination of a will to create and to communicate with others, and a shared cultural experience that makes it likely that each of us wants to talk about something that we believe others will also want to talk about, that makes the billion potential participants in today's online conversation, and the six billion in tomorrow's conversation, affirmatively better than the commercial industrial model. When the economics of industrial production require high up-front costs and low marginal costs, the producers must focus on creating a few superstars and making sure that everyone tunes in to listen or watch them. This requires that they focus on averaging out what consumers are most likely to buy. This works reasonably well as long as there is no better substitute. As long as it is expensive to produce music or the evening news, there are indeed few competitors for top billing, and the star system can function. Once every person on the planet, or even only every person living in a wealthy economy and 10-20 percent of those living in poorer countries, can easily talk to their friends and compatriots, the competition becomes tougher. It does not mean that there is no continued role ,{[pg 56]}, for the mass-produced and mass-marketed cultural products--be they Britney Spears or the broadcast news. It does, however, mean that many more "niche markets"--if markets, rather than conversations, are what they should be called--begin to play an ever-increasing role in the total mix of our cultural production system. The economics of production in a digital environment should lead us to expect an increase in the relative salience of nonmarket production models in the overall mix of our information production system, and it is efficient for this to happen--more information will be produced, and much of it will be available for its users at its marginal cost.
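The spare-hours comparison above is easy to verify. The sketch below (Python, using only the figures quoted in the text: a 340,000-person workforce, forty-hour weeks, and two to six billion spare hours per day) checks how long that workforce would need to log one day's worth of spare hours.

```python
# Back-of-envelope check of the spare-hours comparison in the text.
workers = 340_000            # U.S. motion picture and recording industry workforce
hours_per_week = 40          # forty-hour weeks, no vacations
weekly_capacity = workers * hours_per_week   # industry-wide hours per week

# One day's worth of spare hours in advanced economies: 2 to 6 billion.
for daily_spare_hours in (2e9, 6e9):
    years = daily_spare_hours / weekly_capacity / 52
    print(f"{daily_spare_hours:.0e} spare hours ~ {years:.1f} workforce-years")
```

The low estimate comes out at roughly 2.8 workforce-years and the high estimate at roughly 8.5, which matches the "between three and eight and a half years" figure in the text.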
+
+The known quirky characteristics of information and knowledge as production goods have always given nonmarket production a much greater role in this production system than was common in capitalist economies for tangible goods. The dramatic decline in the cost of the material means of producing and exchanging information, knowledge, and culture has substantially decreased the costs of information expression and exchange, and thereby increased the relative efficacy of nonmarket production. When these facts are layered over the fact that information, knowledge, and culture have become the central high-value-added economic activities of the most advanced economies, we find ourselves in a new and unfamiliar social and economic condition. Social behavior that traditionally was relegated to the peripheries of the economy has become central to the most advanced economies. Nonmarket behavior is becoming central to producing our information and cultural environment. Sources of knowledge and cultural edification, through which we come to know and comprehend the world, to form our opinions about it, and to express ourselves in communication with others about what we see and believe have shifted from heavy reliance on commercial, concentrated media, to being produced on a much more widely distributed model, by many actors who are not driven by the imperatives of advertising or the sale of entertainment goods.
+
+2~ STRONG EXCLUSIVE RIGHTS IN THE DIGITAL ENVIRONMENT
+
+We now have the basic elements of a clash between incumbent institutions and emerging social practice. Technologies of information and cultural production initially led to the increasing salience of commercial, industrial-model production in these areas. Over the course of the twentieth century, ,{[pg 57]}, in some of the most culturally visible industries like movies and music, copyright law coevolved with the industrial model. By the end of the twentieth century, copyright was longer, broader, and vastly more encompassing than it had been at the beginning of that century. Other exclusive rights in information, culture, and the fruits of innovation expanded following a similar logic. Strong, broad, exclusive rights like these have predictable effects. They preferentially improve the returns to business models that rely on exclusive rights, like copyrights and patents, at the expense of information and cultural production outside the market or in market relationships that do not depend on exclusive appropriation. They make it more lucrative to consolidate inventories of existing materials. The businesses that developed around the material capital required for production fed back into the political system, which responded by serially optimizing the institutional ecology to fit the needs of the industrial information economy firms at the expense of other information producers.
+
+The networked information economy has upset the apple cart on the technical, material cost side of information production and exchange. The institutional ecology, the political framework (the lobbyists, the habits of legislatures), and the legal culture (the beliefs of judges, the practices of lawyers) have not changed. They are as they developed over the course of the twentieth century--centered on optimizing the conditions of those commercial firms that thrive in the presence of strong exclusive rights in information and culture. The outcome of the conflict between the industrial information economy and its emerging networked alternative will determine whether we evolve into a permission culture, as Lessig warns and projects, or into a society marked by social practice of nonmarket production and cooperative sharing of information, knowledge, and culture of the type I describe throughout this book, and which I argue will improve freedom and justice in liberal societies. Chapter 11 chronicles many of the arenas in which this basic conflict is played out. However, for the remainder of this part and part II, the basic economic understanding I offer here is all that is necessary.
+
+There are diverse motivations and strategies for organizing information production. Their relative attractiveness is to some extent dependent on technology, to some extent on institutional arrangements. The rise that we see today in the efficacy and scope of nonmarket production, and of the peer production that I describe and analyze in the following two chapters, is well within the predictable, given our understanding of the economics of information production. The social practices of information production ,{[pg 58]}, that form the basis of much of the normative analysis I offer in part II are internally sustainable given the material conditions of information production and exchange in the digitally networked environment. These patterns are unfamiliar to us. They grate on our intuitions about how production happens. They grate on the institutional arrangements we developed over the course of the twentieth century to regulate information and cultural production. But that is because they arise from a quite basically different set of material conditions. We must understand these new modes of production. We must learn to evaluate them and compare their advantages and disadvantages to those of the industrial information producers. And then we must adjust our institutional environment to make way for the new social practices made possible by the networked environment. ,{[pg 59]},
+
+1~3 Chapter 3 - Peer Production and Sharing
+
+At the heart of the economic engine of the world's most advanced economies, we are beginning to notice a persistent and quite amazing phenomenon. A new model of production has taken root, one that should not be there, at least according to our most widely held beliefs about economic behavior. It should not, the intuitions of the late-twentieth-century American would say, be the case that thousands of volunteers will come together to collaborate on a complex economic project. It certainly should not be that these volunteers will beat the largest and best-financed business enterprises in the world at their own game. And yet, this is precisely what is happening in the software world.
+
+Industrial organization literature provides a prominent place for the transaction costs view of markets and firms, based on insights of Ronald Coase and Oliver Williamson. On this view, people use markets when the gains from doing so, net of transaction costs, exceed the gains from doing the same thing in a managed firm, net of the costs of organizing and managing a firm. Firms emerge when the opposite is true, and transaction costs can best be reduced by ,{[pg 60]}, bringing an activity into a managed context that requires no individual transactions to allocate this resource or that effort. The emergence of free and open-source software, and the phenomenal success of its flagships, the GNU/Linux operating system, the Apache Web server, Perl, and many others, should cause us to take a second look at this dominant paradigm.~{ For an excellent history of the free software movement and of open-source development, see Glyn Moody, Rebel Code: Inside Linux and the Open Source Revolution (New York: Perseus Publishing, 2001). }~ Free software projects do not rely on markets or on managerial hierarchies to organize production. Programmers do not generally participate in a project because someone who is their boss told them to, though some do. They do not generally participate in a project because someone offers them a price to do so, though some participants do focus on long-term appropriation through money-oriented activities, like consulting or service contracts. However, the critical mass of participation in projects cannot be explained by the direct presence of a price or even a future monetary return. This is particularly true of the all-important, microlevel decisions: who will work, with what software, on what project. In other words, programmers participate in free software projects without following the signals generated by market-based, firm-based, or hybrid models.
In chapter 2 I focused on how the networked information economy departs from the industrial information economy by improving the efficacy of nonmarket production generally. Free software offers a glimpse at a more basic and radical challenge. It suggests that the networked environment makes possible a new modality of organizing production: radically decentralized, collaborative, and nonproprietary; based on sharing resources and outputs among widely distributed, loosely connected individuals who cooperate with each other without relying on either market signals or managerial commands. This is what I call "commons-based peer production."
+
+"Commons" refers to a particular institutional form of structuring the rights to access, use, and control resources. It is the opposite of "property" in the following sense: With property, law determines one particular person who has the authority to decide how the resource will be used. That person may sell it, or give it away, more or less as he or she pleases. "More or less" because property doesn't mean anything goes. We cannot, for example, decide that we will give our property away to one branch of our family, as long as that branch has boys, and then if that branch has no boys, decree that the property will revert to some other branch of the family. That type of provision, once common in English property law, is now legally void for public policy reasons. There are many other things we cannot do with our property--like build on wetlands. However, the core characteristic of property ,{[pg 61]}, as the institutional foundation of markets is that the allocation of power to decide how a resource will be used is systematically and drastically asymmetric. That asymmetry permits the existence of "an owner" who can decide what to do, and with whom. We know that transactions must be made--rent, purchase, and so forth--if we want the resource to be put to some other use. The salient characteristic of commons, as opposed to property, is that no single person has exclusive control over the use and disposition of any particular resource in the commons. Instead, resources governed by commons may be used or disposed of by anyone among some (more or less well-defined) number of persons, under rules that may range from "anything goes" to quite crisply articulated formal rules that are effectively enforced.
+
+Commons can be divided into four types based on two parameters. The first parameter is whether they are open to anyone or only to a defined group. The oceans, the air, and highway systems are clear examples of open commons. Various traditional pasture arrangements in Swiss villages or irrigation regions in Spain are now classic examples, described by Elinor Ostrom, of limited-access common resources--where access is limited only to members of the village or association that collectively "owns" some defined pasturelands or irrigation system.~{ Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge: Cambridge University Press, 1990). }~ As Carol Rose noted, these are better thought of as limited common property regimes, rather than commons, because they behave as property vis-à-vis the entire world except members of the group who together hold them in common. The second parameter is whether a commons system is regulated or unregulated. Practically all well-studied, limited common property regimes are regulated by more or less elaborate rules--some formal, some social-conventional--governing the use of the resources. Open commons, on the other hand, vary widely. Some commons, called open access, are governed by no rule. Anyone can use resources within these types of commons at will and without payment. Air is such a resource, with respect to air intake (breathing, feeding a turbine). However, air is a regulated commons with regard to outtake. For individual human beings, breathing out is mildly regulated by social convention--you do not breathe too heavily on another human being's face unless forced to. Air is a more extensively regulated commons for industrial exhalation--in the shape of pollution controls.
The most successful and obvious regulated commons in contemporary landscapes are the sidewalks, streets, roads, and highways that cover our land and regulate the material foundation of our ability to move from one place to the other. In all these cases, however, the characteristic of commons is that the constraints, if any, are symmetric ,{[pg 62]}, among all users, and cannot be unilaterally controlled by any single individual. The term "commons-based" is intended to underscore that what is characteristic of the cooperative enterprises I describe in this chapter is that they are not built around the asymmetric exclusion typical of property. Rather, the inputs and outputs of the process are shared, freely or conditionally, in an institutional form that leaves them equally available for all to use as they choose at their individual discretion. This latter characteristic--that commons leave individuals free to make their own choices with regard to resources managed as a commons--is at the foundation of the freedom they make possible. This is a freedom I return to in the discussion of autonomy. Not all commons-based production efforts qualify as peer production. Any production strategy that manages its inputs and outputs as commons locates that production modality outside the proprietary system, in a framework of social relations. It is the freedom to interact with resources and projects without seeking anyone's permission that marks commons-based production generally, and it is also that freedom that underlies the particular efficiencies of peer production, which I explore in chapter 4.
+
+The term "peer production" characterizes a subset of commons-based production practices. It refers to production systems that depend on individual action that is self-selected and decentralized, rather than hierarchically assigned. "Centralization" is a particular response to the problem of how to make the behavior of many individual agents cohere into an effective pattern or achieve an effective result. Its primary attribute is the separation of the locus of opportunities for action from the authority to choose the action that the agent will undertake. Government authorities, firm managers, teachers in a classroom, all occupy a context in which potentially many individual wills could lead to action, and reduce the number of people whose will is permitted to affect the actual behavior patterns that the agents will adopt. "Decentralization" describes conditions under which the actions of many agents cohere and are effective despite the fact that they do not rely on reducing the number of people whose will counts to direct effective action. A substantial literature in the past twenty years, typified, for example, by Charles Sabel's work, has focused on the ways in which firms have tried to overcome the rigidities of managerial pyramids by decentralizing learning, planning, and execution of the firm's functions in the hands of employees or teams. The most pervasive mode of "decentralization," however, is the ideal market. Each individual agent acts according to his or her will. Coherence and efficacy emerge because individuals signal their wishes, and plan ,{[pg 63]}, their behavior not in cooperation with others, but by coordinating, understanding the will of others and expressing their own through the price system.
+
+What we are seeing now is the emergence of more effective collective action practices that are decentralized but do not rely on either the price system or a managerial structure for coordination. In this, they complement the increasing salience of uncoordinated nonmarket behavior that we saw in chapter 2. The networked environment not only provides a more effective platform for action to nonprofit organizations that organize action like firms or to hobbyists who merely coexist coordinately. It also provides a platform for new mechanisms for widely dispersed agents to adopt radically decentralized cooperation strategies other than by using proprietary and contractual claims to elicit prices or impose managerial commands. This kind of information production by agents operating on a decentralized, nonproprietary model is not completely new. Science is built by many people contributing incrementally--not operating on market signals, not being handed their research marching orders by a boss--independently deciding what to research, bringing their collaboration together, and creating science. What we see in the networked information economy is a dramatic increase in the importance and the centrality of information produced in this way.
+
+2~ FREE/OPEN-SOURCE SOFTWARE
+
+The quintessential instance of commons-based peer production has been free software. Free software, or open source, is an approach to software development that is based on shared effort on a nonproprietary model. It depends on many individuals contributing to a common project, with a variety of motivations, and sharing their respective contributions without any single person or entity asserting rights to exclude either from the contributed components or from the resulting whole. In order to avoid having the joint product appropriated by any single party, participants usually retain copyrights in their contribution, but license them to anyone--participant or stranger--on a model that combines a universal license to use the materials with licensing constraints that make it difficult, if not impossible, for any single contributor or third party to appropriate the project. This model of licensing is the most important institutional innovation of the free software movement. Its central instance is the GNU General Public License, or GPL. ,{[pg 64]},
+
+The GPL requires anyone who modifies software and distributes the modified version to license it under the same free terms as the original software. While there have been many arguments about how widely the provisions that prevent downstream appropriation should be used, the practical adoption patterns have been dominated by forms of licensing that prevent anyone from exclusively appropriating the contributions or the joint product. More than 85 percent of active free software projects include some version of the GPL or similarly structured license.~{ Josh Lerner and Jean Tirole, "The Scope of Open Source Licensing" (Harvard NOM working paper no. 02-42, table 1, Cambridge, MA, 2002). The figure is computed out of the data reported in this paper for the number of free software development projects that Lerner and Tirole identify as having "restrictive" or "very restrictive" licenses. }~
+
+Free software has played a critical role in the recognition of peer production, because software is a functional good with measurable qualities. It can be more or less authoritatively tested against its market-based competitors. And, in many instances, free software has prevailed. About 70 percent of Web server software, in particular for critical e-commerce sites, runs on the Apache Web server--free software.~{ Netcraft, April 2004 Web Server Survey, http://news.netcraft.com/archives/web_server_survey.html. }~ More than half of all back-office e-mail functions are run by one free software program or another. Google, Amazon, and CNN.com, for example, run their Web servers on the GNU/Linux operating system. They do this, presumably, because they believe this peer-produced operating system is more reliable than the alternatives, not because the system is "free." It would be absurd to risk a higher rate of failure in their core business activities in order to save a few hundred thousand dollars on licensing fees. Companies like IBM and Hewlett Packard, consumer electronics manufacturers, as well as military and other mission-critical government agencies around the world have begun to adopt business and service strategies that rely on and extend free software. They do this because it allows them to build better equipment, sell better services, or better fulfill their public role, even though they do not control the software development process and cannot claim proprietary rights of exclusion in the products of their contributions.
+
+The story of free software begins in 1984, when Richard Stallman started working on a project of building a nonproprietary operating system he called GNU (GNU's Not Unix). Stallman, then at the Massachusetts Institute of Technology (MIT), operated from political conviction. He wanted a world in which software enabled people to use information freely, where no one would have to ask permission to change the software they use to fit their needs or to share it with a friend for whom it would be helpful. These freedoms to share and to make your own software were fundamentally incompatible with a model of production that relies on property rights and markets, he thought, because in order for there to be a market in uses of ,{[pg 65]}, software, owners must be able to make the software unavailable to people who need it. These people would then pay the provider in exchange for access to the software or modification they need. If anyone can make software or share software they possess with friends, it becomes very difficult to write software on a business model that relies on excluding people from software they need unless they pay. As a practical matter, Stallman started writing software himself, and wrote a good bit of it. More fundamentally, he adopted a legal technique that started a snowball rolling. He could not write a whole operating system by himself. Instead, he released pieces of his code under a license that allowed anyone to copy, distribute, and modify the software in whatever way they pleased. He required only that, if the person who modified the software then distributed it to others, he or she do so under the exact same conditions that he had distributed his software. In this way, he invited all other programmers to collaborate with him on this development program, if they wanted to, on the condition that they be as generous with making their contributions available to others as he had been with his. 
Because he retained the copyright to the software he distributed, he could write this condition into the license that he attached to the software. This meant that anyone using or distributing the software as is, without modifying it, would not violate Stallman's license. They could also modify the software for their own use, and this would not violate the license. However, if they chose to distribute the modified software, they would violate Stallman's copyright unless they included a license identical to his with the software they distributed. This license became the GNU General Public License, or GPL. The legal jujitsu Stallman used--asserting his own copyright claims, but only to force all downstream users who wanted to rely on his contributions to make their own contributions available to everyone else--came to be known as "copyleft," an ironic twist on copyright. This legal artifice allowed anyone to contribute to the GNU project without worrying that one day they would wake up and find that someone had locked them out of the system they had helped to build.
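The licensing mechanics just described reduce to one conditional rule, which can be captured in a toy model. The sketch below is an illustration, not legal advice; the `Work` class and `may_distribute` function are hypothetical names invented for the example, and the model encodes only the single condition discussed here: a modified version of GPL-licensed software may be distributed only under the same license, while unmodified use and private modification are unconstrained.

```python
# Toy model of the copyleft rule described above (illustration only, not legal advice).
from dataclasses import dataclass

@dataclass
class Work:
    author: str
    license: str   # e.g. "GPL" or "proprietary"

def may_distribute(derivative: Work, original: Work) -> bool:
    """A derivative of a GPL-licensed work may be distributed only under
    the same terms; non-copyleft originals impose no such condition in
    this simplified model."""
    if original.license == "GPL":
        return derivative.license == "GPL"
    return True

gnu = Work("Stallman", "GPL")
print(may_distribute(Work("contributor", "GPL"), gnu))          # permitted
print(may_distribute(Work("contributor", "proprietary"), gnu))  # violates the license
```

The asymmetry is the point: the condition propagates downstream with every distributed modification, which is what keeps any contributor from being locked out of the joint product.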
+
+The next major step came when a person with a more practical, rather than prophetic, approach to his work began developing one central component of the operating system--the kernel. Linus Torvalds began to share the early implementations of his kernel, called Linux, with others, under the GPL. These others then modified, added, contributed, and shared among themselves these pieces of the operating system. Building on top of Stallman's foundation, Torvalds crystallized a model of production that was fundamentally ,{[pg 66]}, different from those that preceded it. His model was based on voluntary contributions and ubiquitous, recursive sharing; on small incremental improvements to a project by widely dispersed people, some of whom contributed a lot, others a little. Based on our usual assumptions about volunteer projects and decentralized production processes that have no managers, this was a model that could not succeed. But it did.
+
+It took almost a decade for the mainstream technology industry to recognize the value of free or open-source software development and its collaborative production methodology. As the process expanded and came to encompass more participants, and produce more of the basic tools of Internet connectivity--Web server, e-mail server, scripting--more of those who participated sought to "normalize" it, or, more specifically, to render it apolitical. Free software is about freedom ("free as in free speech, not free beer" is Stallman's slogan for it). "Open-source software" was chosen as a term that would not carry the political connotations. It was simply a mode of organizing software production that may be more effective than market-based production. This move to depoliticize peer production of software led to something of a schism between the free software movement and the communities of open-source software developers. It is important to understand, however, that from the perspective of society at large and the historical trajectory of information production generally, the abandonment of political motivation and the importation of free software into the mainstream have not made it less politically interesting, but more so. Open source and its wide adoption in the business and bureaucratic mainstream allowed free software to emerge from the fringes of the software world and move to the center of the public debate about practical alternatives to the current way of doing things.
+
+So what is open-source software development? The best source for a phenomenology of open-source development continues to be Eric Raymond's /{The Cathedral and the Bazaar}/, written in 1998. Imagine that one person, or a small group of friends, wants a utility. It could be a text editor, photo-retouching software, or an operating system. The person or small group starts by developing a part of this project, up to a point where the whole utility--if it is simple enough--or some important part of it, is functional, though it might have much room for improvement. At this point, the person makes the program freely available to others, with its source code--instructions in a human-readable language that explain how the software does whatever it does when compiled into a machine-readable language. When others begin ,{[pg 67]}, to use it, they may find bugs, or related utilities that they want to add (e.g., the photo-retouching software only increases size and sharpness, and one of its users wants it to allow changing colors as well). The person who has found the bug or is interested in how to add functions to the software may or may not be the best person in the world to actually write the software fix. Nevertheless, he reports the bug or the new need in an Internet forum of users of the software. That person, or someone else, then thinks that they have a way of tweaking the software to fix the bug or add the new utility. They then do so, just as the first person did, and release a new version of the software with the fix or the added utility. The result is a collaboration between three people--the first author, who wrote the initial software; the second person, who identified a problem or shortcoming; and the third person, who fixed it. This collaboration is not managed by anyone who organizes the three, but is instead the outcome of them all reading the same Internet-based forum and using the same software, which is released under an open, rather than proprietary, license.
This enables some of its users to identify problems and others to fix these problems without asking anyone's permission and without engaging in any transactions.
+
+The most surprising thing that the open source movement has shown, in real life, is that this simple model can operate on very different scales, from the small, three-person model I described for simple projects, up to the many thousands of people involved in writing the Linux kernel and the GNU/Linux operating system--an immensely difficult production task. SourceForge, the most popular hosting-meeting place of such projects, has close to 100,000 registered projects, and nearly a million registered users. The economics of this phenomenon are complex. In the larger-scale models, the actual organizational form is more diverse than the simple, three-person model. In particular, in some of the larger projects, most prominently the Linux kernel development process, a certain kind of meritocratic hierarchy is clearly present. However, it is a hierarchy that is very different in style, practical implementation, and organizational role from that of the manager in the firm. I explain this in chapter 4, as part of the analysis of the organizational forms of peer production. For now, all we need is a broad outline of how peer-production projects look, as we turn to observe case studies of kindred production models in areas outside of software. ,{[pg 68]},
+
+2~ PEER PRODUCTION OF INFORMATION, KNOWLEDGE, AND CULTURE GENERALLY
+
+Free software is, without a doubt, the most visible instance of peer production at the turn of the twenty-first century. It is by no means, however, the only instance. Ubiquitous computer communications networks are bringing about a dramatic change in the scope, scale, and efficacy of peer production throughout the information and cultural production system. As computers become cheaper and as network connections become faster, cheaper, and ubiquitous, we are seeing the phenomenon of peer production of information scale to much larger sizes, performing more complex tasks than were possible in the past for nonprofessional production. To make this phenomenon more tangible, I describe a number of such enterprises, organized to demonstrate the feasibility of this approach throughout the information production and exchange chain. While it is possible to break an act of communication into finer-grained subcomponents, largely we see three distinct functions involved in the process. First, there is an initial utterance of a humanly meaningful statement. Writing an article or drawing a picture, whether done by a professional or an amateur, whether high quality or low, is such an action. Second, there is a separate function of mapping the initial utterances on a knowledge map. In particular, an utterance must be understood as "relevant" in some sense, and "credible." Relevance is a subjective question of mapping an utterance on the conceptual map of a given user seeking information for a particular purpose defined by that individual. Credibility is a question of quality by some objective measure that the individual adopts as appropriate for purposes of evaluating a given utterance. The distinction between the two is somewhat artificial, however, because very often the utility of a piece of information will depend on a combined valuation of its credibility and relevance. I therefore refer to "relevance/accreditation" as a single function for purposes of this discussion, keeping in mind that the two are complementary and not entirely separable functions that an individual requires as part of being able to use utterances that others have uttered in putting together the user's understanding of the world. Finally, there is the function of distribution, or how one takes an utterance produced by one person and distributes it to other people who find it credible and relevant. In the mass-media world, these functions were often, though by no means always, integrated. NBC news produced the utterances, gave them credibility by clearing them on the evening news, and distributed ,{[pg 69]}, them simultaneously. What the Internet is permitting is much greater disaggregation of these functions.
+
+3~ Uttering Content
+
+NASA Clickworkers was "an experiment to see if public volunteers, each working for a few minutes here and there can do some routine science analysis that would normally be done by a scientist or graduate student working for months on end." Users could mark craters on maps of Mars, classify craters that have already been marked, or search the Mars landscape for "honeycomb" terrain. The project was "a pilot study with limited funding, run part-time by one software engineer, with occasional input from two scientists." In its first six months of operation, more than 85,000 users visited the site, with many contributing to the effort, making more than 1.9 million entries (including redundant entries of the same craters, used to average out errors). An analysis of the quality of markings showed "that the automatically computed consensus of a large number of clickworkers is virtually indistinguishable from the inputs of a geologist with years of experience in identifying Mars craters."~{ Clickworkers Results: Crater Marking Activity, July 3, 2001, http://clickworkers.arc.nasa.gov/documents/crater-marking.pdf. }~ The tasks performed by clickworkers (like marking craters) were discrete, each easily performed in a matter of minutes. As a result, users could choose to work for a few minutes doing a single iteration or for hours by doing many. An early study of the project suggested that some clickworkers indeed worked on the project for weeks, but that 37 percent of the work was done by one-time contributors.~{ /{B. Kanefsky, N. G. Barlow, and V. C. Gulick}/, Can Distributed Volunteers Accomplish Massive Data Analysis Tasks? http://www.clickworkers.arc.nasa.gov/documents/abstract.pdf. }~
+
+The clickworkers project was a particularly clear example of how a complex professional task that requires a number of highly trained individuals on full-time salaries can be reorganized so as to be performed by tens of thousands of volunteers in increments so minute that the tasks could be performed on a much lower budget. The low budget would be devoted to coordinating the volunteer effort. However, the raw human capital needed would be contributed for the fun of it. The professionalism of the original scientists was replaced by a combination of high modularization of the task, built-in redundancy, and automated averaging of results. The organizers broke a large, complex task into small, independent modules. They built in redundancy and automated averaging out of both errors and purposeful erroneous markings--like those of an errant art student who thought it amusing to mark concentric circles on the map. What the NASA scientists running this experiment had tapped into was a vast pool of five-minute increments of human judgment, applied with motivation to participate in a task unrelated to "making a living." ,{[pg 70]},
+
+While clickworkers was a distinct, self-conscious experiment, it suggests characteristics of distributed production that are, in fact, quite widely observable. We have already seen in chapter 2, in our little search for Viking ships, how the Internet can produce encyclopedic or almanac-type information. The power of the Web to answer such an encyclopedic question comes not from the fact that one particular site has all the great answers. It is not an Encyclopedia Britannica. The power comes from the fact that it allows a user looking for specific information at a given time to collect answers from a sufficiently large number of contributions. The task of sifting and accrediting falls to the user, motivated by the need to find an answer to the question posed. As long as there are tools to lower the cost of that task to a level acceptable to the user, the Web shall have "produced" the information content the user was looking for. These are not trivial considerations, but they are also not intractable. As we shall see, some of the solutions can themselves be peer produced, and some solutions are emerging as a function of the speed of computation and communication, which enables more efficient technological solutions.
+
+Encyclopedic and almanac-type information emerges on the Web out of the coordinate but entirely independent action of millions of users. This type of information also provides the focus of one of the most successful collaborative enterprises that have developed in the first five years of the twenty-first century, /{Wikipedia}/. /{Wikipedia}/ was founded by an Internet entrepreneur, Jimmy Wales. Wales had earlier tried to organize an encyclopedia named Nupedia, which was built on a traditional production model, but whose outputs were to be released freely: its contributors were to be PhDs, using a formal, peer-reviewed process. That project appears to have failed to generate a sufficient number of high-quality contributions, but its outputs were used in /{Wikipedia}/ as the seeds for a radically new form of encyclopedia writing. Founded in January 2001, /{Wikipedia}/ combines three core characteristics: First, it uses a collaborative authorship tool, Wiki. This platform enables anyone, including anonymous passersby, to edit almost any page in the entire project. It stores all versions, makes changes easily visible, and enables anyone to revert a document to any prior version as well as to add changes, small and large. All contributions and changes are rendered transparent by the software and database. Second, it is a self-conscious effort at creating an encyclopedia--governed first and foremost by a collective informal undertaking to strive for a neutral point of view, within the limits of substantial self-awareness as to the difficulties of such an enterprise. An effort ,{[pg 71]}, to represent sympathetically all views on a subject, rather than to achieve objectivity, is the core operative characteristic of this effort. Third, all the content generated by this collaboration is released under the GNU Free Documentation License, an adaptation of the GNU GPL to texts. The shift in strategy toward an open, peer-produced model proved enormously successful. The site saw tremendous growth both in the number of contributors, including the number of active and very active contributors, and in the number of articles included in the encyclopedia (table 3.1). Most of the early growth was in English, but more recently there has been an increase in the number of articles in many other languages: most notably in German (more than 200,000 articles), Japanese (more than 120,000 articles), and French (about 100,000), but also in another five languages that have between 40,000 and 70,000 articles each, another eleven languages with 10,000 to 40,000 articles each, and thirty-five languages with between 1,000 and 10,000 articles each.
+
+The first systematic study of the quality of /{Wikipedia}/ articles was published as this book was going to press. The journal Nature compared 42 science articles from /{Wikipedia}/ to the gold standard of the Encyclopedia Britannica, and concluded that "the difference in accuracy was not particularly great."~{ J. Giles, "Special Report: Internet Encyclopedias Go Head to Head," Nature, December 14, 2005, available at http://www.nature.com/news/2005/051212/full/438900a.html. }~ On November 15, 2004, Robert McHenry, a former editor in chief of the Encyclopedia Britannica, published an article criticizing /{Wikipedia}/ as "The Faith-Based Encyclopedia."~{ http://www.techcentralstation.com/111504A.html. }~ As an example, McHenry mocked the /{Wikipedia}/ article on Alexander Hamilton. He noted that Hamilton biographers have a problem fixing his birth year--whether it is 1755 or 1757. /{Wikipedia}/ glossed over this ambiguity, fixing the date at 1755. McHenry then went on to criticize the way the dates were treated throughout the article, using it as an anchor to his general claim: /{Wikipedia}/ is unreliable because it is not professionally produced. What McHenry did not note was that the other major online encyclopedias--like Columbia or Encarta--similarly failed to deal with the ambiguity surrounding Hamilton's birth date. Only the Britannica did. However, McHenry's critique triggered the /{Wikipedia}/ distributed correction mechanism. Within hours of the publication of McHenry's Web article, the reference was corrected. The following few days saw intensive cleanup efforts to conform all references in the biography to the newly corrected version. Within a week or so, /{Wikipedia}/ had a correct, reasonably clean version. It now stood alone with the Encyclopedia Britannica as a source of accurate basic encyclopedic information. In coming to curse it, McHenry found himself blessing /{Wikipedia}/. He had demonstrated ,{[pg 72]}, precisely the correction mechanism that makes /{Wikipedia}/, in the long term, a robust model of reasonably reliable information.
+
+!_ Table 3.1: Contributors to Wikipedia, January 2001 - June 2005
+
+{table~h 24; 12; 12; 12; 12; 12; 12;}
+ |Jan. 2001|Jan. 2002|Jan. 2003|Jan. 2004|July 2004|June 2005
+Contributors* | 10| 472| 2,188| 9,653| 25,011| 48,721
+Active contributors** | 9| 212| 846| 3,228| 8,442| 16,945
+Very active contributors*** | 0| 31| 190| 692| 1,639| 3,016
+No. of English language articles| 25| 16,000| 101,000| 190,000| 320,000| 630,000
+No. of articles, all languages | 25| 19,000| 138,000| 490,000| 862,000|1,600,000
+
+\* Contributed at least ten times; \*\* at least 5 times in last month; \*\*\* more than 100 times in last month.
+
+Perhaps the most interesting characteristic about /{Wikipedia}/ is the self-conscious social-norms-based dedication to objective writing. Unlike some of the other projects that I describe in this chapter, /{Wikipedia}/ does not include elaborate software-controlled access and editing capabilities. It is generally open for anyone to edit the materials, delete another's change, debate the desirable contents, survey archives for prior changes, and so forth. It depends on self-conscious use of open discourse, usually aimed at consensus. While there is the possibility that a user will call for a vote of the participants on any given definition, such calls can be, and usually are, ignored by the community unless a sufficiently large number of users have decided that debate has been exhausted. While the system operators and server host--Wales--have the practical power to block users who are systematically disruptive, this power seems to be used rarely. The project relies instead on social norms to secure the dedication of project participants to objective writing. So, while not entirely anarchic, the project is nonetheless substantially more social, human, and intensively discourse- and trust-based than the other major projects described here. The following fragments from an early version of the self-described essential characteristics and basic policies of /{Wikipedia}/ are illustrative:
+
+_1 First and foremost, the /{Wikipedia}/ project is self-consciously an encyclopedia--rather than a dictionary, discussion forum, web portal, etc. /{Wikipedia}/'s participants ,{[pg 73]}, commonly follow, and enforce, a few basic policies that seem essential to keeping the project running smoothly and productively. First, because we have a huge variety of participants of all ideologies, and from around the world, /{Wikipedia}/ is committed to making its articles as unbiased as possible. The aim is not to write articles from a single objective point of view--this is a common misunderstanding of the policy--but rather, to fairly and sympathetically present all views on an issue. See "neutral point of view" page for further explanation. ~{ Yochai Benkler, "Coase's Penguin, or Linux and the Nature of the Firm," Yale Law Journal 112 (2002): 369. }~
+
+The point to see from this quotation is that the participants of /{Wikipedia}/ are plainly people who like to write. Some of them participate in other collaborative authorship projects. However, when they enter the common project of /{Wikipedia}/, they undertake to participate in a particular way--a way that the group has adopted to make its product be an encyclopedia. On their interpretation, that means conveying in brief terms the state of the art on the item, including divergent opinions about it, but not the author's opinion. Whether that is an attainable goal is a subject of interpretive theory, and is a question as applicable to a professional encyclopedia as it is to /{Wikipedia}/. As the project has grown, it has developed more elaborate spaces for discussing governance and for conflict resolution. It has developed structures for mediation, and if that fails, arbitration, of disputes about particular articles.
+
+The important point is that /{Wikipedia}/ requires not only mechanical cooperation among people, but a commitment to a particular style of writing and describing concepts that is far from intuitive or natural to people. It requires self-discipline. It enforces the behavior it requires primarily through appeal to the common enterprise that the participants are engaged in, coupled with a thoroughly transparent platform that faithfully records and renders all individual interventions in the common project and facilitates discourse among participants about how their contributions do, or do not, contribute to this common enterprise. This combination of an explicit statement of common purpose, transparency, and the ability of participants to identify each other's actions and counteract them--that is, edit out "bad" or "faithless" definitions--seems to have succeeded in keeping this community from devolving into inefficacy or worse. A case study by IBM showed, for example, that while there were many instances of vandalism on /{Wikipedia}/, including deletion of entire versions of articles on controversial topics like "abortion," the ability of users to see what was done and to fix it with a single click by reverting to a past version meant that acts of vandalism were ,{[pg 74]}, corrected within minutes. Indeed, corrections were so rapid that vandalism acts and their corrections did not even appear on a mechanically generated image of the abortion definition as it changed over time.~{ IBM Collaborative User Experience Research Group, History Flows: Results (2003), http://www.research.ibm.com/history/results.htm. }~ What is perhaps surprising is that this success occurs not in a tightly knit community with many social relations to reinforce the sense of common purpose and the social norms embodying it, but in a large and geographically dispersed group of otherwise unrelated participants. It suggests that even in a group of this size, social norms coupled with a facility to allow any participant to edit out purposeful or mistaken deviations in contravention of the social norms, and a robust platform for largely unmediated conversation, keep the group on track.
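The one-click revert that defeats vandalism depends on the wiki platform keeping every version of a page. The idea can be sketched minimally as follows; this is a hypothetical class for illustration, not MediaWiki's implementation.

```python
class WikiPage:
    """Minimal sketch of a wiki article that keeps every version, so any
    reader can inspect the full history and undo vandalism in one step."""

    def __init__(self, text=""):
        self.history = [text]  # history[0] is the original version

    @property
    def current(self):
        return self.history[-1]

    def edit(self, new_text):
        self.history.append(new_text)  # every change is preserved

    def revert_to(self, version):
        # Reverting is itself recorded as a new edit, keeping the audit trail.
        self.history.append(self.history[version])

page = WikiPage("Abortion is a contested topic...")
page.edit("")        # a vandal blanks the article
page.revert_to(0)    # any user restores it with one action
assert page.current == "Abortion is a contested topic..."
```

Because nothing is ever destroyed, the cost of repairing vandalism is one action, while the vandal must keep working to do damage; the asymmetry favors the cooperators.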
+
+A very different cultural form of distributed content production is presented by the rise of massive multiplayer online games (MMOGs) as immersive entertainment. These fall in the same cultural "time slot" as television shows and movies of the twentieth century. The interesting thing about these types of games is that they organize the production of "scripts" very differently from movies or television shows. In a game like Ultima Online or EverQuest, the role of the commercial provider is not to tell a finished, highly polished story to be consumed start to finish by passive consumers. Rather, the role of the game provider is to build tools with which users collaborate to tell a story. There have been observations about this approach for years, regarding MUDs (Multi-User Dungeons) and MOOs (Multi-User Object Oriented games). The point to understand about MMOGs is that they produce a discrete element of "content" that was in the past dominated by centralized professional production. The screenwriter of an immersive entertainment product like a movie is like the scientist marking Mars craters--a professional producer of a finished good. In MMOGs, this function is produced by using the appropriate software platform to allow the story to be written by the many users as they experience it. The individual contributions of the users/coauthors of the story line are literally done for fun--they are playing a game. However, they are spending real economic goods--their attention and substantial subscription fees--on a form of entertainment that uses a platform for active coproduction of a story line to displace what was once passive reception of a finished, commercially and professionally manufactured good.
+
+By 2003, a company called Linden Lab took this concept a major step forward by building an online game environment called Second Life. Second Life began almost entirely devoid of content. It was tools all the way down. ,{[pg 75]}, Within a matter of months, it had thousands of subscribers, inhabiting a "world" that had thousands of characters, hundreds of thousands of objects, multiple areas, villages, and "story lines." The individual users themselves had created more than 99 percent of all objects in the game environment, and all story lines and substantive frameworks for interaction--such as a particular village or group of theme-based participants. The interactions in the game environment involved a good deal of gift giving and a good deal of trade, but also some very surprising structured behaviors. Some users set up a university, where lessons were given in both in-game skills and in programming. Others designed spaceships and engaged in alien abductions (undergoing one seemed to become a status symbol within the game). At one point, aiming (successfully) to prevent the company from changing its pricing policy, users staged a demonstration by making signs and picketing the entry point to the game; and a "tax revolt" by placing large numbers of "tea crates" around an in-game reproduction of the Washington Monument. Within months, Second Life had become an immersive experience, like a movie or book, but one where the commercial provider offered a platform and tools, while the users wrote the story lines, rendered the "set," and performed the entire play.
+
+3~ Relevance/Accreditation
+
+How are we to know that the content produced by widely dispersed individuals is not sheer gobbledygook? Can relevance and accreditation itself be produced on a peer-production model? One type of answer is provided by looking at commercial businesses that successfully break off precisely the "accreditation and relevance" piece of their product, and rely on peer production to perform that function. Amazon and Google are probably the two most prominent examples of this strategy.
+
+Amazon uses a mix of mechanisms to get in front of their buyers of books and other products that the users are likely to purchase. A number of these mechanisms produce relevance and accreditation by harnessing the users themselves. At the simplest level, the recommendation "customers who bought items you recently viewed also bought these items" is a mechanical means of extracting judgments of relevance and accreditation from the actions of many individuals, who produce the datum of relevance as a byproduct of making their own purchasing decisions. Amazon also allows users to create topical lists and track other users as their "friends and favorites." Amazon, like many consumer sites today, also provides users with the ability ,{[pg 76]}, to rate books they buy, generating a peer-produced rating by averaging the ratings. More fundamentally, the core innovation of Google, widely recognized as the most efficient general search engine during the first half of the 2000s, was to introduce peer-based judgments of relevance. Like other search engines at the time, Google used a text-based algorithm to retrieve a given universe of Web pages initially. Its major innovation was its PageRank algorithm, which harnesses peer production of ranking in the following way. The engine treats links from other Web sites pointing to a given Web site as votes of confidence. Whenever someone who authors a Web site links to someone else's page, that person has stated quite explicitly that the linked page is worth a visit. Google's search engine counts these links as distributed votes of confidence in the quality of the page pointed to. Pages that are heavily linked-to count as more important votes of confidence. If a highly linked-to site links to a given page, that vote counts for more than the vote of a site that no one else thinks is worth visiting. The point to take home from looking at Google and Amazon is that corporations that have done immensely well at acquiring and retaining users have harnessed peer production to enable users to find things they want quickly and efficiently.
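The link-as-vote logic can be illustrated with a toy iteration of the PageRank recursion: a page's rank is fed by the ranks of the pages linking to it, divided among their outgoing links. This is a textbook-style sketch of the idea, not Google's production algorithm; the damping factor and iteration count are conventional assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: each page's rank is distributed across its outlinks,
    so a link from a highly ranked page is a weightier vote of confidence."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / len(pages)
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# "a" is linked to by every other page, so it accumulates the highest rank.
web = {"a": ["b"], "b": ["a"], "c": ["a"], "d": ["a", "b"]}
ranks = pagerank(web)
assert max(ranks, key=ranks.get) == "a"
```

Note how "d", which splits its vote between two pages, counts for half as much per link as "c", which votes for only one: exactly the weighting of votes of confidence described above.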
+
+The most prominent example of a distributed project self-consciously devoted to peer production of relevance is the Open Directory Project. The site relies on more than sixty thousand volunteer editors to determine which links should be included in the directory. Acceptance as a volunteer requires application. Quality relies on a peer-review process based substantially on seniority as a volunteer and level of engagement with the site. The site is hosted and administered by Netscape, which pays for server space and a small number of employees to administer the site and set up the initial guidelines. Licensing is free and presumably adds value partly to America Online's (AOL's) and Netscape's commercial search engine/portal and partly through goodwill. Volunteers are not affiliated with Netscape and receive no compensation. They spend time selecting sites for inclusion in the directory (in small increments of perhaps fifteen minutes per site reviewed), producing the most comprehensive, highest-quality human-edited directory of the Web--at this point outshining the directory produced by the company that pioneered human-edited directories of the Web: Yahoo!.
+
+Perhaps the most elaborate platform for peer production of relevance and accreditation, at multiple layers, is used by Slashdot. Billed as "News for Nerds," Slashdot has become a leading technology newsletter on the Web, coproduced by hundreds of thousands of users. Slashdot primarily consists ,{[pg 77]}, of users commenting on initial submissions that cover a variety of technology-related topics. The submissions are typically a link to an off-site story, coupled with commentary from the person who submits the piece. Users follow up the initial submission with comments that often number in the hundreds. The initial submissions themselves, and more importantly, the approach to sifting through the comments of users for relevance and accreditation, provide a rich example of how this function can be performed on a distributed, peer-production model.
+
+First, it is important to understand that the function of posting a story from another site onto Slashdot, the first "utterance" in a chain of comments on Slashdot, is itself an act of relevance production. The person submitting the story is telling the community of Slashdot users, "here is a story that `News for Nerds' readers should be interested in." This initial submission of a link is itself very coarsely filtered by editors who are paid employees of Open Source Technology Group (OSTG), which runs a number of similar platforms--like SourceForge, the most important platform for free software developers. OSTG is a subsidiary of VA Software, a software services company. The FAQ (Frequently Asked Questions) response to "how do you verify the accuracy of Slashdot stories?" is revealing: "We don't. You do. If something seems outrageous, we might look for some corroboration, but as a rule, we regard this as the responsibility of the submitter and the audience. This is why it's important to read comments. You might find something that refutes, or supports, the story in the main." In other words, Slashdot very self-consciously is organized as a means of facilitating peer production of accreditation; it is at the comments stage that the story undergoes its most important form of accreditation--peer review ex-post.
+
+Filtering and accreditation of comments on Slashdot offer the most interesting case study of peer production of these functions. Users submit comments that are displayed together with the initial submission of a story. Think of the "content" produced in these comments as a cross between academic peer review of journal submissions and a peer-produced substitute for television's "talking heads." It is in the means of accrediting and evaluating these comments that Slashdot's system provides a comprehensive example of peer production of relevance and accreditation. Slashdot implements an automated system to select moderators from the pool of users. Moderators are chosen according to several criteria; they must be logged in (not anonymous), they must be regular users (who use the site with average frequency, not one-time page loaders or compulsive users), they must have been using ,{[pg 78]}, the site for a while (this defeats people who try to sign up just to moderate), they must be willing, and they must have positive "karma." Karma is a number assigned to a user that primarily reflects whether he or she has posted good or bad comments (according to ratings from other moderators). If a user meets these criteria, the program assigns the user moderator status and the user gets five "influence points" to review comments. The moderator rates a comment of his or her choice using a drop-down list with words such as "flamebait" and "informative." A positive word increases the rating of a comment one point and a negative word decreases the rating a point. Each time a moderator rates a comment, it costs one influence point, so he or she can only rate five comments for each moderating period. The period lasts for three days and if the user does not use the influence points, they expire. The moderation setup is designed to give many users a small amount of power. This decreases the effect of users with an ax to grind or with poor judgment. The site also implements some automated "troll filters," which prevent users from sabotaging the system. Troll filters stop users from posting more than once every sixty seconds, prevent identical posts, and will ban a user for twenty-four hours if he or she has been moderated down several times within a short time frame. Slashdot then provides users with a "threshold" filter that allows each user to block lower-quality comments. The scheme uses the numerical rating of the comment (ranging from -1 to 5). Comments start out at 0 for anonymous posters, 1 for registered users, and 2 for registered users with good "karma." As a result, if a user sets his or her filter at 1, the user will not see any comments from anonymous posters unless the comments' ratings were increased by a moderator. A user can set his or her filter anywhere from -1 (viewing all of the comments) to 5 (where only the posts that have been upgraded by several moderators will show up).
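The scoring and threshold rules just described can be condensed into a short sketch. The function and field names here are illustrative, not Slashdot's actual code; only the starting scores, the -1..5 scale, and the reader-set threshold come from the description above.

```python
def initial_score(user):
    """Starting score under the scheme described above: 0 for anonymous
    posts, 1 for registered users, 2 for registered users with good karma."""
    if user is None:
        return 0
    return 2 if user.get("good_karma") else 1

def moderate(comment, delta):
    """A moderator spends one influence point to move a comment's score
    up or down by one, clamped to the site's -1..5 range."""
    comment["score"] = max(-1, min(5, comment["score"] + delta))

def visible(comments, threshold):
    """A reader's threshold filter: show only comments at or above it."""
    return [c["text"] for c in comments if c["score"] >= threshold]

comments = [
    {"text": "anonymous tip", "score": initial_score(None)},
    {"text": "registered comment", "score": initial_score({"good_karma": False})},
    {"text": "high-karma post", "score": initial_score({"good_karma": True})},
]
moderate(comments[2], +1)  # rated "informative": 2 -> 3
assert visible(comments, 1) == ["registered comment", "high-karma post"]
assert visible(comments, -1) == [c["text"] for c in comments]
```

The design point is visible even in this toy: no single moderator decides what anyone reads; each reader's threshold converts the aggregate of many small judgments into a personal filter.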
+
+Relevance, as distinct from accreditation, is also tied into the Slashdot scheme because off-topic posts should receive an "off topic" rating by the moderators and sink below the threshold level (assuming the user has the threshold set above the minimum). However, the moderation system is limited to choices that sometimes are not mutually exclusive. For instance, a moderator may have to choose between "funny" (+1) and "off topic" (-1) when a post is both funny and off topic. As a result, an irrelevant post can increase in ranking and rise above the threshold level because it is funny or informative. It is unclear, however, whether this is a limitation on relevance, or indeed mimics our own normal behavior, say in reading a newspaper or browsing a library, where we might let our eyes linger longer on a funny or ,{[pg 79]}, informative tidbit, even after we have ascertained that it is not exactly relevant to what we were looking for.
+
+The primary function of moderation is to provide accreditation. If a user sets a high threshold level, they will only see posts that are considered of high quality by the moderators. Users also receive accreditation through their karma. If their posts consistently receive high ratings, their karma will increase. At a certain karma level, their comments will start off with a rating of 2, thereby giving them a louder voice in the sense that users with a threshold of 2 will now see their posts immediately, and fewer upward moderations are needed to push their comments even higher. Conversely, a user with bad karma from consistently poorly rated comments can lose accreditation by having his or her posts initially start off at 0 or 1. In addition to the mechanized means of selecting moderators and minimizing their power to skew the accreditation system, Slashdot implements a system of peer-review accreditation for the moderators themselves. Slashdot accomplishes this "metamoderation" by making any user that has an account from the first 90 percent of accounts created on the system eligible to evaluate the moderators. Each eligible user who opts to perform metamoderation review is provided with ten random moderator ratings of comments. The user/metamoderator then rates the moderator's rating as either unfair, fair, or neither. The metamoderation process affects the karma of the original moderator, which, when lowered sufficiently by cumulative judgments of unfair ratings, will remove the moderator from the moderation system.
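The metamoderation feedback loop described above can likewise be sketched schematically. The point values and removal cutoff here are invented for illustration; the text specifies only that cumulative "unfair" judgments lower a moderator's karma until the moderator is removed from the system.

```python
# Hypothetical sketch of the metamoderation loop described above.
# The point values and cutoff are invented; Slashdot's real numbers differ.

def metamoderate(moderator_karma, verdicts, removal_cutoff=-5):
    """Each 'unfair' verdict lowers the moderator's karma, 'fair' raises it,
    and 'neither' leaves it unchanged. Returns (new_karma, still_eligible)."""
    for verdict in verdicts:
        if verdict == "unfair":
            moderator_karma -= 1
        elif verdict == "fair":
            moderator_karma += 1
    return moderator_karma, moderator_karma > removal_cutoff

karma, eligible = metamoderate(0, ["unfair"] * 6)
assert karma == -6 and not eligible   # cumulative unfair verdicts remove the moderator
```

What matters is the recursive structure: the same aggregation of many small judgments that accredits comments is turned on the moderators themselves.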
+
+Together, these mechanisms allow for distributed production of both relevance and accreditation. Because there are many moderators who can moderate any given comment, and thanks to the mechanisms that explicitly limit the power of any one moderator to overinfluence the aggregate judgment, the system evens out differences in evaluation by aggregating judgments. It then allows individual users to determine what level of accreditation pronounced by this aggregate system fits their particular time and needs by setting their filter to be more or less inclusive. By introducing "karma," the system also allows users to build reputation over time, and to gain greater control over the accreditation of their own work relative to the power of the critics. Users, moderators, and metamoderators are all volunteers.
+
+The primary point to take from the Slashdot example is that the same dynamic that we saw used for peer production of initial utterances, or content, can be implemented to produce relevance and accreditation. Rather than using the full-time effort of professional accreditation experts, the system ,{[pg 80]}, is designed to permit the aggregation of many small judgments, each of which entails a trivial effort for the contributor, regarding both relevance and accreditation of the materials. The software that mediates the communication among the collaborating peers embeds both the means to facilitate the participation and a variety of mechanisms designed to defend the common effort from poor judgment or defection.
+
+3~ Value-Added Distribution
+
+Finally, when we speak of information or cultural goods that exist (content has been produced) and are made usable through some relevance and accreditation mechanisms, there remains the question of distribution. To some extent, this is a nonissue on the Internet. Distribution is cheap. All one needs is a server and large pipes connecting one's server to the world. Nonetheless, this segment of the publication process has also provided us with important examples of peer production, including one of its earliest examples--Project Gutenberg.
+
+Project Gutenberg entails hundreds of volunteers who scan in and correct books so that they are freely available in digital form. It has amassed more than 13,000 books, and makes the collection available to everyone for free. The vast majority of the "e-texts" offered are public domain materials. The site itself presents the e-texts in ASCII format, the lowest technical common denominator, but does not discourage volunteers from offering the e-texts in markup languages. It contains a search engine that allows a reader to search for typical fields such as subject, author, and title. Project Gutenberg volunteers can select any book that is in the public domain to transform into an e-text. The volunteer submits a copy of the title page of the book to Michael Hart--who founded the project--for copyright research. The volunteer is notified to proceed if the book passes the copyright clearance. The decision on which book to convert to e-text is left up to the volunteer, subject to copyright limitations. Typically, a volunteer converts a book to ASCII format using OCR (optical character recognition) and proofreads it one time in order to screen it for major errors. He or she then passes the ASCII file to a volunteer proofreader. This exchange is orchestrated with very little supervision. The volunteers use a Listserv mailing list and a bulletin board to initiate and supervise the exchange. In addition, books are labeled with a version number indicating how many times they have been proofed. The site encourages volunteers to select a book that has a low number and proof it. The Project Gutenberg proofing process is simple. ,{[pg 81]}, Proofreaders (aside from the first pass) are not expected to have access to the book, but merely review the e-text for self-evident errors.
+
+Distributed Proofreading, a site originally unaffiliated with Project Gutenberg, is devoted to proofing Project Gutenberg e-texts more efficiently, by distributing the volunteer proofreading function in smaller and more information-rich modules. Charles Franks, a computer programmer from Las Vegas, decided that he had a more efficient way to proofread these e-texts. He built an interface that allowed volunteers to compare scanned images of original texts with the e-texts available on Project Gutenberg. In the Distributed Proofreading process, scanned pages are stored on the site, and volunteers are shown a scanned page and a page of the e-text simultaneously so that they can compare the e-text to the original page. Because of the fine-grained modularity, proofreaders can come on the site and proof one or a few pages and submit them. By contrast, on the Project Gutenberg site, the entire book is typically exchanged, or at minimum, a chapter. In this fashion, Distributed Proofreading clears the proofing of tens of thousands of pages every month. After a couple of years of working independently, Franks joined forces with Hart. By late 2004, the site had proofread more than five thousand volumes using this method.
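The fine-grained modularity that distinguishes Distributed Proofreading from whole-book exchange can be pictured as a simple work queue in which each scanned page is an independent unit that any volunteer can claim and return. The structure below is a schematic illustration, not the site's actual software; the class and method names are invented.

```python
# Schematic of page-level modularity: each scanned page is an independent
# work unit, so a volunteer can contribute one page rather than a whole book.
# Invented names; not Distributed Proofreading's actual code.

from collections import deque

class PageQueue:
    def __init__(self, n_pages):
        self.pending = deque(range(1, n_pages + 1))  # pages awaiting proofing
        self.done = {}

    def claim(self):
        """A volunteer takes one page -- the smallest unit of contribution."""
        return self.pending.popleft() if self.pending else None

    def submit(self, page, corrected_text):
        self.done[page] = corrected_text

book = PageQueue(n_pages=3)
while (page := book.claim()) is not None:      # each iteration could be a different volunteer
    book.submit(page, f"proofed text of page {page}")
assert len(book.done) == 3
```

Shrinking the unit of contribution from a book to a page is what lets many casual volunteers, each doing trivially little, collectively clear tens of thousands of pages a month.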
+
+3~ Sharing of Processing, Storage, and Communications Platforms
+
+All the examples of peer production that we have seen up to this point have been examples where individuals pool their time, experience, wisdom, and creativity to form new information, knowledge, and cultural goods. As we look around the Internet, however, we find that users also cooperate in similar loosely affiliated groups, without market signals or managerial commands, to build supercomputers and massive data storage and retrieval systems. In their radical decentralization and reliance on social relations and motivations, these sharing practices are similar to peer production of information, knowledge, and culture. They differ in one important aspect: Users are not sharing their innate and acquired human capabilities, and, unlike information, their inputs and outputs are not public goods. The participants are, instead, sharing material goods that they privately own, mostly personal computers and their components. They produce economic, not public, goods--computation, storage, and communications capacity.
+
+As of the middle of 2004, the fastest supercomputer in the world was SETI@home. It ran about 75 percent faster than the supercomputer that ,{[pg 82]}, was then formally known as "the fastest supercomputer in the world": the IBM Blue Gene/L. And yet, there was and is no single SETI@home computer. Instead, the SETI@home project has developed software and a collaboration platform that have enabled millions of participants to pool their computation resources into a single powerful computer. Every user who participates in the project must download a small screen saver. When a user's personal computer is idle, the screen saver starts up, downloads problems for calculation--in SETI@home, these are radio astronomy signals to be analyzed for regularities--and calculates the problem it has downloaded. Once the program calculates a solution, it automatically sends its results to the main site. The cycle continues for as long as, and repeats every time that, the computer is idle from its user's perspective. As of the middle of 2004, the project had harnessed the computers of 4.5 million users, allowing it to run computations at speeds greater than those achieved by the fastest supercomputers in the world that private firms, using full-time engineers, developed for the largest and best-funded government laboratories in the world. SETI@home is the most prominent, but is only one among dozens of similarly structured Internet-based distributed computing platforms. Another, whose structure has been the subject of the most extensive formal analysis by its creators, is Folding@home. As of mid-2004, Folding@home had amassed contributions of about 840,000 processors contributed by more than 365,000 users.
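The idle-cycle harvesting loop common to SETI@home-style projects, as described above, reduces to three steps: fetch a work unit from the central server, compute while the machine would otherwise be idle, and report the result back. The sketch below simulates the server side with a plain dictionary; the function names and the stand-in computation are invented for illustration.

```python
# Schematic of the distributed-computing cycle described above: fetch a
# work unit, compute it during idle time, report the result. The "server"
# is simulated locally; names and the toy computation are invented.

def fetch_work_unit(server):
    return server["queue"].pop(0) if server["queue"] else None

def compute(work_unit):
    # Stand-in for the real analysis (e.g. scanning a chunk of radio
    # telescope signal for regularities).
    return sum(work_unit)

def report(server, result):
    server["results"].append(result)

server = {"queue": [[1, 2, 3], [4, 5]], "results": []}
while (unit := fetch_work_unit(server)) is not None:
    report(server, compute(unit))   # in practice, runs only while the PC is idle
assert server["results"] == [6, 9]
```

Because each work unit is independent, millions of intermittently available home machines can be pooled into a single computation without any coordination among the participants themselves.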
+
+SETI@home and Folding@home provide a good basis for describing the fairly common characteristics of Internet-based distributed computation projects. First, these are noncommercial projects, engaged in pursuits understood as scientific, for the general good, seeking to harness contributions of individuals who wish to contribute to such larger-than-themselves goals. SETI@home helps in the search for extraterrestrial intelligence. Folding@home helps in protein folding research. Fightaids@home is dedicated to running models that screen compounds for the likelihood that they will provide good drug candidates to fight HIV/AIDS. Genome@home is dedicated to modeling artificial genes that would be created to generate useful proteins. Other sites, like those dedicated to cryptography or mathematics, have a narrower appeal, and combine "altruistic" with hobby as their basic motivational appeal. The absence of money is, in any event, typical of the large majority of active distributed computing projects. Less than one-fifth of these projects mention money at all. Most of those that do mention money refer to the contributors' eligibility for a share of a generally available ,{[pg 83]}, prize for solving a scientific or mathematical challenge, and mix an appeal to hobby and altruism with the promise of money. Only two of about sixty projects active in 2004 were built on a pay-per-contribution basis, and these were quite small-scale by comparison to many of the others.
+
+Most of the distributed computing projects provide a series of utilities and statistics intended to allow contributors to attach meaning to their contributions in a variety of ways. The projects appear to be eclectic in their implicit social and psychological theories of the motivations for participation in the projects. Sites describe the scientific purpose of the models and the specific scientific output, including posting articles that have used the calculations. In these components, the project organizers seem to assume some degree of taste for generalized altruism and the pursuit of meaning in contributing to a common goal. They also implement a variety of mechanisms to reinforce the sense of purpose, such as providing aggregate statistics about the total computations performed by the project as a whole. However, the sites also seem to assume a healthy dose of what is known in the anthropology of gift literature as agonistic giving--that is, giving intended to show that the person giving is greater than or more important than others, who gave less. For example, most of the sites allow individuals to track their own contributions, and provide "user of the month"-type rankings. An interesting characteristic of quite a few of these is the ability to create "teams" of users, who in turn compete on who has provided more cycles or work units. SETI@home in particular taps into ready-made nationalisms, by offering country-level statistics. Some of the team names on Folding@home also suggest other, out-of-project bonding measures, such as national or ethnic bonds (for example, Overclockers Australia or Alliance Francophone), technical minority status (for example, Linux or MacAddict4Life), and organizational affiliation (University of Tennessee or University of Alabama), as well as shared cultural reference points (Knights who say Ni!). 
In addition, the sites offer platforms for simple connectedness and mutual companionship, by offering user fora to discuss the science and the social participation involved. It is possible that these sites are shooting in the dark, as far as motivating sharing is concerned. It is also possible, however, that they have tapped into a valuable insight, which is that people behave sociably and generously for all sorts of different reasons, and that at least in this domain, adding reasons to participate--some agonistic, some altruistic, some reciprocity-seeking--does not have a crowding-out effect.
+
+Like distributed computing projects, peer-to-peer file-sharing networks are ,{[pg 84]}, an excellent example of a highly efficient system for storing and accessing data in a computer network. These networks of sharing are much less "mysterious," in terms of understanding the human motivation behind participation. Nevertheless, they provide important lessons about the extent to which large-scale collaboration among strangers or loosely affiliated users can provide effective communications platforms. For fairly obvious reasons, we usually think of peer-to-peer networks, beginning with Napster, as a "problem." This is because they were initially overwhelmingly used to perform an act that, by the analysis of almost any legal scholar, was copyright infringement. To a significant extent, they are still used in this form. There were, and continue to be, many arguments about whether the acts of the firms that provided peer-to-peer software were responsible for the violations. However, there has been little argument that anyone who allows thousands of other users to make copies of his or her music files is violating copyright-- hence the public interpretation of the creation of peer-to-peer networks as primarily a problem. From the narrow perspective of the law of copyright or of the business model of the recording industry and Hollywood, this may be an appropriate focus. From the perspective of diagnosing what is happening to our social and economic structure, the fact that the files traded on these networks were mostly music in the first few years of this technology's implementation is little more than a distraction. Let me explain why.
+
+Imagine for a moment that someone--be it a legislator defining a policy goal or a businessperson defining a desired service--had stood up in mid-1999 and set the following requirements: "We would like to develop a new music and movie distribution system. We would like it to store all the music and movies ever digitized. We would like it to be available from anywhere in the world. We would like it to be able to serve tens of millions of users at any given moment." Any person at the time would have predicted that building such a system would cost tens if not hundreds of millions of dollars; that running it would require large standing engineering staffs; that managing it so that users could find what they wanted and not drown in the sea of content would require some substantial number of "curators"--DJs and movie buffs--and that it would take at least five to ten years to build. Instead, the system was built cheaply by a wide range of actors, starting with Shawn Fanning's idea and implementation of Napster. Once the idea was out, others perfected the idea further, eliminating the need for even the one centralized feature that Napster included--a list of who had what files on which computer that provided the matchmaking function in the Napster ,{[pg 85]}, network. Since then, under the pressure of suits from the recording industry and a steady and persistent demand for peer-to-peer music software, rapid successive generations of Gnutella, and then the FastTrack clients KaZaa and Morpheus, Overnet and eDonkey, the improvements of BitTorrent, and many others have enhanced the reliability, coverage, and speed of the peer-to-peer music distribution system--all under constant threat of litigation, fines, police searches and even, in some countries, imprisonment of the developers or users of these networks.
+
+What is truly unique about peer-to-peer networks as a signal of what is to come is the fact that with ridiculously low financial investment, a few teenagers and twenty-something-year-olds were able to write software and protocols that allowed tens of millions of computer users around the world to cooperate in producing the most efficient and robust file storage and retrieval system in the world. No major investment was necessary in creating a server farm to store and make available the vast quantities of data represented by the media files. The users' computers are themselves the "server farm." No massive investment in dedicated distribution channels made of high-quality fiber optics was necessary. The standard Internet connections of users, with some very intelligent file transfer protocols, sufficed. Architecture oriented toward enabling users to cooperate with each other in storage, search, retrieval, and delivery of files was all that was necessary to build a content distribution network that dwarfed anything that existed before.
+
+Again, there is nothing mysterious about why users participate in peer-to-peer networks. They want music; they can get it from these networks for free; so they participate. The broader point to take from looking at peer-to-peer file-sharing networks, however, is the sheer effectiveness of large-scale collaboration among individuals once they possess, under their individual control, the physical capital necessary to make their cooperation effective. These systems are not "subsidized," in the sense that they do not pay the full marginal cost of their service. Remember, music, like all information, is a nonrival public good whose marginal cost, once produced, is zero. Moreover, digital files are not "taken" from one place in order to be played in the other. They are replicated wherever they are wanted, and thereby made more ubiquitous, not scarce. The only actual social cost involved at the time of the transmission is the storage capacity, communications capacity, and processing capacity necessary to store, catalog, search, retrieve, and transfer the information necessary to replicate the files from where copies reside to where more copies are desired. As with any nonrival good, if Jane is willing ,{[pg 86]}, to spend the actual social costs involved in replicating the music file that already exists and that Jack possesses, then it is efficient that she do so without paying the creator a dime. It may throw a monkey wrench into the particular way in which our society has chosen to pay musicians and recording executives. This, as we saw in chapter 2, trades off efficiency for longer-term incentive effects for the recording industry. However, it is efficient within the normal meaning of the term in economics in a way that it would not have been had Jane and Jack used subsidized computers or network connections.
+
+As with distributed computing, peer-to-peer file-sharing systems build on the fact that individual users own vast quantities of excess capacity embedded in their personal computers. As with distributed computing, peer-to-peer networks developed architectures that allowed users to share this excess capacity with each other. By cooperating in these sharing practices, users construct together systems with capabilities far exceeding those that they could have developed by themselves, as well as the capabilities that even the best-financed corporations could provide using techniques that rely on components they fully owned. The network components owned by any single music delivery service cannot match the collective storage and retrieval capabilities of the universe of users' hard drives and network connections. Similarly, the processors arrayed in the supercomputers find it difficult to compete with the vast computation resource available on the millions of personal computers connected to the Internet, and the proprietary software development firms find themselves competing, and in some areas losing to, the vast pool of programming talent connected to the Internet in the form of participants in free and open source software development projects.
+
+In addition to computation and storage, the last major element of computer communications networks is connectivity. Here, too, perhaps more dramatically than in either of the two other functionalities, we have seen the development of sharing-based techniques. The most direct transfer of the design characteristics of peer-to-peer networks to communications has been the successful development of Skype--an Internet telephony utility that allows the owners of computers to have voice conversations with each other over the Internet for free, and to dial into the public telephone network for a fee. As of this writing, Skype is already used by more than two million users at any given moment in time. They use a FastTrack-like architecture to share their computing and communications resources to create a global ,{[pg 87]}, telephone system running on top of the Internet. It was created, and is run by, the developers of KaZaa.
+
+Most dramatically, however, we have seen these techniques emerging in wireless communications. Throughout almost the entire twentieth century, radio communications used a single engineering approach to allow multiple messages to be sent wirelessly in a single geographic area. This approach was to transmit each of the different simultaneous messages by generating separate electromagnetic waves for each, which differed from each other by the frequency of oscillation, or wavelength. The receiver could then separate out the messages by ignoring all electromagnetic energy received at its antenna unless it oscillated at the frequency of the desired message. This engineering technique, adopted by Marconi in 1900, formed the basis of our notion of "spectrum": the range of frequencies at which we know how to generate electromagnetic waves with sufficient control and predictability that we can encode and decode information with them, as well as the notion that there are "channels" of spectrum that are "used" by a communication. For more than half a century, radio communications regulation was thought necessary because spectrum was scarce, and unless regulated, everyone would transmit at all frequencies causing chaos and an inability to send messages. From 1959, when Ronald Coase first published his critique of this regulatory approach, until the early 1990s, when spectrum auctions began, the terms of the debate over "spectrum policy," or wireless communications regulation, revolved around whether the exclusive right to transmit radio signals in a given geographic area should be granted as a regulatory license or a tradable property right. In the 1990s, with the introduction of auctions, we began to see the adoption of a primitive version of a property-based system through "spectrum auctions." 
By the early 2000s, this system allowed the new "owners" of these exclusive rights to begin to shift what were initially purely mobile telephony systems to mobile data communications as well.
+
+By this time, however, the century-old engineering assumptions that underlay the regulation-versus-property conceptualization of the possibilities open for the institutional framework of wireless communications had been rendered obsolete by new computation and network technologies.~{ For the full argument, see Yochai Benkler, "Some Economics of Wireless Communications," Harvard Journal of Law and Technology 16 (2002): 25; and Yochai Benkler, "Overcoming Agoraphobia: Building the Commons of the Digitally Networked Environment," Harvard Journal of Law and Technology 11 (1998): 287. For an excellent overview of the intellectual history of this debate and a contribution to the institutional design necessary to make space for this change, see Kevin Werbach, "Supercommons: Towards a Unified Theory of Wireless Communication," Texas Law Review 82 (2004): 863. The policy implications of computationally intensive radios using wide bands were first raised by George Gilder in "The New Rule of the Wireless," Forbes ASAP, March 29, 1993, and Paul Baran, "Visions of the 21st Century Communications: Is the Shortage of Radio Spectrum for Broadband Networks of the Future a Self Made Problem?" (keynote talk transcript, 8th Annual Conference on Next Generation Networks, Washington, DC, November 9, 1994). Both statements focused on the potential abundance of spectrum, and how it renders "spectrum management" obsolete. Eli Noam was the first to point out that, even if one did not buy the idea that computationally intensive radios eliminated scarcity, they still rendered spectrum property rights obsolete, and enabled instead a fluid, dynamic, real-time market in spectrum clearance rights. See Eli Noam, "Taking the Next Step Beyond Spectrum Auctions: Open Spectrum Access," Institute of Electrical and Electronics Engineers Communications Magazine 33, no. 12 (1995): 66-73; later elaborated in Eli Noam, "Spectrum Auction: Yesterday's Heresy, Today's Orthodoxy, Tomorrow's Anachronism. 
Taking the Next Step to Open Spectrum Access," Journal of Law and Economics 41 (1998): 765, 778-780. The argument that equipment markets based on a spectrum commons, or free access to frequencies, could replace the role planned for markets in spectrum property rights with computationally intensive equipment and sophisticated network sharing protocols, and would likely be more efficient even assuming that scarcity persists, was made in Benkler, "Overcoming Agoraphobia." Lawrence Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999) and Lawrence Lessig, The Future of Ideas: The Fate of the Commons in a Connected World (New York: Random House, 2001) developed a rationale based on the innovation dynamic in support of the economic value of open wireless networks. David Reed, "Comments for FCC Spectrum Task Force on Spectrum Policy," filed with the Federal Communications Commission July 10, 2002, crystallized the technical underpinnings and limitations of the idea that spectrum can be regarded as property. }~ The dramatic decline in computation cost and improvements in digital signal processing, network architecture, and antenna systems had fundamentally changed the design space of wireless communications systems. Instead of having one primary parameter with which to separate out messages--the ,{[pg 88]}, frequency of oscillation of the carrier wave--engineers could now use many different mechanisms to allow much smarter receivers to separate out the message they wanted to receive from all other sources of electromagnetic radiation in the geographic area they occupied. Radio transmitters could now transmit at the same frequency, simultaneously, without "interfering" with each other--that is, without confusing the receivers as to which radiation carried the required message and which did not. 
Just like automobiles that can share a commons-based medium--the road--and unlike railroad cars, which must use dedicated, owned, and managed railroad tracks--these new radios could share "the spectrum" as a commons. It was no longer necessary, or even efficient, to pass laws--be they in the form of regulations or of exclusive property-like rights--that carved up the usable spectrum into exclusively controlled slices. Instead, large numbers of transceivers, owned and operated by end users, could be deployed and use equipment-embedded protocols to coordinate their communications.
+
+The reasons that owners would share the excess capacity of their new radios are relatively straightforward in this case. Users want to have wireless connectivity all the time, to be reachable and immediately available everywhere. However, they do not actually want to communicate every few microseconds. They will therefore be willing to purchase and keep turned on equipment that provides them with such connectivity. Manufacturers, in turn, will develop and adhere to standards that will improve capacity and connectivity. As a matter of engineering, what has been called "cooperation gain"--the improved quality of the system gained when the nodes cooperate--is the most promising source of capacity scaling for distributed wireless systems.~{ See Benkler, "Some Economics," 44-47. The term "cooperation gain" was developed by Reed to describe a somewhat broader concept than "diversity gain" is in multiuser information theory. }~ Cooperation gain is easy to understand from day-to-day interactions. When we sit in a lecture and miss a word or two, we might turn to a neighbor and ask, "Did you hear what she said?" In radio systems, this kind of cooperation among the antennae (just like the ears) of neighbors is called antenna diversity, and is the basis for the design of a number of systems to improve reception. We might stand in a loud crowd without being able to shout or walk over to the other end of the room, but ask a friend: "If you see so and so, tell him x"; that friend then bumps into a friend of so and so and tells that person: "If you see so and so, tell him x"; and so forth. When we do this, we are using what in radio engineering is called repeater networks. These kinds of cooperative systems can carry much higher loads without interference, sharing wide swaths of spectrum, ,{[pg 89]}, in ways that are more efficient than systems that rely on explicit market transactions based on property in the right to emit power in discrete frequencies. 
The design of such "ad hoc mesh networks"--that is, networks of radios that can configure themselves into cooperative networks as need arises, and help each other forward messages and decipher incoming messages over the din of radio emissions--is the most dynamic area in radio engineering today.
+
+This technological shift gave rise to the fastest-growing sector in the wireless communications arena in the first few years of the twenty-first century-- WiFi and similar unlicensed wireless devices. The economic success of the equipment market that utilizes the few primitive "spectrum commons" available in the United States--originally intended for low-power devices like garage openers and the spurious emissions of microwave ovens--led toward at first slow, and more recently quite dramatic, change in U.S. wireless policy. In the past two years alone, what have been called "commons-based" approaches to wireless communications policy have come to be seen as a legitimate, indeed a central, component of the Federal Communications Commission's (FCC's) wireless policy.~{ Spectrum Policy Task Force Report to the Commission (Federal Communications Commission, Washington, DC, 2002); Michael K. Powell, "Broadband Migration III: New Directions in Wireless Policy" (Remarks at the Silicon Flatiron Telecommunications Program, University of Colorado at Boulder, October 30, 2002). }~ We are beginning to see in this space the most prominent example of a system that was entirely oriented toward regulation aimed at improving the institutional conditions of market-based production of wireless transport capacity sold as a finished good (connectivity minutes), shifting toward enabling the emergence of a market in shareable goods (smart radios) designed to provision transport on a sharing model.
+
+I hope these detailed examples provide a common set of mental pictures of what peer production looks like. In the next chapter I explain the economics of peer production of information and the sharing of material resources for computation, communications, and storage in particular, and of nonmarket, social production more generally: why it is efficient, how we can explain the motivations that lead people to participate in these great enterprises of nonmarket cooperation, and why we see so much more of it online than we do off-line. The moral and political discussion throughout the remainder of the book does not, however, depend on your accepting the particular analysis I offer in chapter 4 to "domesticate" these phenomena within more or less standard economics. At this point, it is important that the stories have provided a texture for, and established the plausibility of, ,{[pg 90]}, the claim that nonmarket production in general and peer production in particular are phenomena of much wider application than free software, and exist in important ways throughout the networked information economy. For purposes of understanding the political implications that occupy most of this book, that is all that is necessary. ,{[pg 91]},
+
+1~4 Chapter 4 - The Economics of Social Production
+
+The increasing salience of nonmarket production in general, and peer production in particular, raises three puzzles from an economics perspective. First, why do people participate? What is their motivation when they work for or contribute resources to a project for which they are not paid or directly rewarded? Second, why now, why here? What, if anything, is special about the digitally networked environment that would lead us to believe that peer production is here to stay as an important economic phenomenon, as opposed to a fad that will pass as the medium matures and patterns of behavior settle toward those more familiar to us from the economy of steel, coal, and temp agencies? Third, is it efficient to have all these people sharing their computers and donating their time and creative effort? Moving through the answers to these questions, it becomes clear that the diverse and complex patterns of behavior observed on the Internet, from Viking ship hobbyists to the developers of the GNU/Linux operating system, are perfectly consistent with much of our contemporary understanding of human economic behavior. We need to assume no fundamental change in the nature of humanity; ,{[pg 92]}, we need not declare the end of economics as we know it. We merely need to see that the material conditions of production in the networked information economy have changed in ways that increase the relative salience of social sharing and exchange as a modality of economic production. That is, behaviors and motivation patterns familiar to us from social relations generally continue to cohere in their own patterns. What has changed is that now these patterns of behavior have become effective beyond the domains of building social relations of mutual interest and fulfilling our emotional and psychological needs of companionship and mutual recognition.
They have come to play a substantial role as modes of motivating, informing, and organizing productive behavior at the very core of the information economy. And it is this increasing role as a modality of information production that ripples through the rest of this book. It is the feasibility of producing information, knowledge, and culture through social, rather than market and proprietary relations--through cooperative peer production and coordinate individual action--that creates the opportunities for greater autonomous action, a more critical culture, a more discursively engaged and better informed republic, and perhaps a more equitable global community.
+
+2~ MOTIVATION
+
+Much of economics achieves analytic tractability by adopting a very simple model of human motivation. The basic assumption is that all human motivations can be more or less reduced to something like positive and negative utilities--things people want, and things people want to avoid. These are capable of being summed, and are usually translatable into a universal medium of exchange, like money. Adding more of something people want, like money, to any given interaction will, all things considered, make that interaction more desirable to rational people. While simplistic, this highly tractable model of human motivation has enabled policy prescriptions that have proven far more productive than prescriptions that depended on other models of human motivation--such as assuming that benign administrators will be motivated to serve their people, or that individuals will undertake self-sacrifice for the good of the nation or the commune.
+
+Of course, this simple model underlying much of contemporary economics is wrong. At least it is wrong as a universal description of human motivation. If you leave a fifty-dollar check on the table at the end of a dinner party at a friend's house, you do not increase the probability that you will ,{[pg 93]}, be invited again. We live our lives in diverse social frames, and money has a complex relationship with these--sometimes it adds to the motivation to participate, sometimes it detracts from it. While this is probably a trivial observation outside of the field of economics, it is quite radical within that analytic framework. The present generation's efforts to formalize and engage it began with the Titmuss-Arrow debate of the early 1970s. In a major work, Richard Titmuss compared the U.S. and British blood supply systems. The former was largely commercial at the time, organized by a mix of private for-profit and nonprofit actors; the latter entirely voluntary and organized by the National Health Service. Titmuss found that the British system had higher-quality blood (as measured by the likelihood of recipients contracting hepatitis from transfusions), less blood waste, and fewer blood shortages at hospitals. Titmuss also attacked the U.S. system as inequitable, arguing that the rich exploited the poor and desperate by buying their blood. He concluded that an altruistic blood procurement system is both more ethical and more efficient than a market system, and recommended that the market be kept out of blood donation to protect the "right to give."~{ Richard M. Titmuss, The Gift Relationship: From Human Blood to Social Policy (New York: Vintage Books, 1971), 94. }~ Titmuss's argument came under immediate attack from economists. Most relevant for our purposes here, Kenneth Arrow agreed that the differences in blood quality indicated that the U.S. blood system was flawed, but rejected Titmuss's central theoretical claim that markets reduce donative activity. 
Arrow reported the alternative hypothesis held by "economists typically," that if some people respond to exhortation/moral incentives (donors), while others respond to prices and market incentives (sellers), these two groups likely behave independently--neither responds to the other's incentives. Thus, the decision to allow or ban markets should have no effect on donative behavior. Removing a market could, however, remove incentives of the "bad blood" suppliers to sell blood, thereby improving the overall quality of the blood supply. Titmuss had not established his hypothesis analytically, Arrow argued, and its proof or refutation would lie in empirical study.~{ Kenneth J. Arrow, "Gifts and Exchanges," Philosophy & Public Affairs 1 (1972): 343. }~ Theoretical differences aside, the U.S. blood supply system did in fact transition to an all-volunteer system of social donation after the 1970s. In surveys since, blood donors have reported that they "enjoy helping" others, experienced a sense of moral obligation or responsibility, or exhibited characteristics of reciprocators after they or their relatives received blood.
+
+A number of scholars, primarily in psychology and economics, have attempted to resolve this question both empirically and theoretically. The most systematic work within economics is that of Swiss economist Bruno Frey ,{[pg 94]}, and various collaborators, building on the work of psychologist Edward Deci.~{ Bruno S. Frey, Not Just for Money: An Economic Theory of Personal Motivation (Brookfield, VT: Edward Elgar, 1997); Bruno S. Frey, Inspiring Economics: Human Motivation in Political Economy (Northampton, MA: Edward Elgar, 2001), 52-72. An excellent survey of this literature is Bruno S. Frey and Reto Jegen, "Motivation Crowding Theory," Journal of Economic Surveys 15, no. 5 (2001): 589. For a crystallization of the underlying psychological theory, see Edward L. Deci and Richard M. Ryan, Intrinsic Motivation and Self-Determination in Human Behavior (New York: Plenum, 1985). }~ A simple statement of this model is that individuals have intrinsic and extrinsic motivations. Extrinsic motivations are imposed on individuals from the outside. They take the form of either offers of money for, or prices imposed on, behavior, or threats of punishment or reward from a manager or a judge for complying with, or failing to comply with, specifically prescribed behavior. Intrinsic motivations are reasons for action that come from within the person, such as pleasure or personal satisfaction. Extrinsic motivations are said to "crowd out" intrinsic motivations because they (a) impair self-determination--that is, people feel pressured by an external force, and therefore feel overjustified in maintaining their intrinsic motivation rather than complying with the will of the source of the extrinsic reward; or (b) impair self-esteem--they cause individuals to feel that their internal motivation is rejected, not valued, and as a result, their self-esteem is diminished, causing them to reduce effort. 
Intuitively, this model relies on there being a culturally contingent notion of what one "ought" to do if one is a well-adjusted human being and member of a decent society. Being offered money to do something you know you "ought" to do, and that self-respecting members of society usually in fact do, implies that the person offering the money believes that you are not a well-adjusted human being or an equally respectable member of society. This causes the person offered the money either to believe the offerer, and thereby lose self-esteem and reduce effort, or to resent him and resist the offer. A similar causal explanation is formalized by Roland Benabou and Jean Tirole, who claim that the person receiving the monetary incentives infers that the person offering the compensation does not trust the offeree to do the right thing, or to do it well of their own accord. The offeree's self-confidence and intrinsic motivation to succeed are reduced to the extent that the offeree believes that the offerer--a manager or parent, for example--is better situated to judge the offeree's abilities.~{ Roland Benabou and Jean Tirole, "Self-Confidence and Social Interactions" (working paper no. 7585, National Bureau of Economic Research, Cambridge, MA, March 2000). }~
+
+More powerful than the theoretical literature is the substantial empirical literature--including field and laboratory experiments, econometrics, and surveys--that has developed since the mid-1990s to test the hypotheses of this model of human motivation. Across many different settings, researchers have found substantial evidence that, under some circumstances, adding money for an activity previously undertaken without price compensation reduces, rather than increases, the level of activity. The work has covered contexts as diverse as the willingness of employees to work more or to share their experience and knowledge with team members, of communities to ,{[pg 95]}, accept locally undesirable land uses, or of parents to pick up children from day-care centers punctually.~{ Truman F. Bewley, "A Depressed Labor Market as Explained by Participants," American Economic Review (Papers and Proceedings) 85 (1995): 250, provides survey data about managers' beliefs about the effects of incentive contracts; Margit Osterloh and Bruno S. Frey, "Motivation, Knowledge Transfer, and Organizational Form," Organization Science 11 (2000): 538, provides evidence that employees with tacit knowledge communicate it to coworkers more efficiently without extrinsic motivations, with the appropriate social motivations, than when money is offered for "teaching" their knowledge; Bruno S. Frey and Felix Oberholzer-Gee, "The Cost of Price Incentives: An Empirical Analysis of Motivation Crowding-Out," American Economic Review 87 (1997): 746; and Howard Kunreuther and Douglas Easterling, "Are Risk-Benefit Tradeoffs Possible in Siting Hazardous Facilities?" 
American Economic Review (Papers and Proceedings) 80 (1990): 252-286, describe empirical studies where communities became less willing to accept undesirable public facilities (Not in My Back Yard or NIMBY) when offered compensation, relative to when the arguments made were policy-based and appealed to the common weal; Uri Gneezy and Aldo Rustichini, "A Fine Is a Price," Journal of Legal Studies 29 (2000): 1, found that introducing a fine for tardy pickup of kindergarten kids increased, rather than decreased, the tardiness of parents, and once the sense of social obligation was lost to the sense that it was "merely" a transaction, the parents continued to be late at pickup, even after the fine was removed. }~ The results of this empirical literature strongly suggest that across various domains some displacement or crowding out can be identified between monetary rewards and nonmonetary motivations. This does not mean that offering monetary incentives does not increase extrinsic rewards--it does. Where extrinsic rewards dominate, this will increase the activity rewarded as usually predicted in economics. However, the effect on intrinsic motivation, at least sometimes, operates in the opposite direction. Where intrinsic motivation is an important factor because pricing and contracting are difficult to achieve, or because the payment that can be offered is relatively low, the aggregate effect may be negative. Persuading experienced employees to communicate their tacit knowledge to the teams they work with is a good example of the type of behavior that is very hard to specify for efficient pricing, and therefore occurs more effectively through social motivations for teamwork than through payments. 
Negative effects of small payments on participation in work that was otherwise volunteer-based are an example of low payments recruiting relatively few people, but making others shift their efforts elsewhere and thereby reducing, rather than increasing, the total level of volunteering for the job.
+
+The psychology-based alternative to the "more money for an activity will mean more of the activity" assumption implicit in most of these new economic models is complemented by a sociology-based alternative. This comes from one branch of the social capital literature--the branch that relates back to Mark Granovetter's 1974 book, Getting a Job, and was initiated as a crossover from sociology to economics by James Coleman.~{ James S. Coleman, "Social Capital in the Creation of Human Capital," American Journal of Sociology 94, supplement (1988): S95, S108. For important early contributions to this literature, see Mark Granovetter, "The Strength of Weak Ties," American Journal of Sociology 78 (1973): 1360; Mark Granovetter, Getting a Job: A Study of Contacts and Careers (Cambridge, MA: Harvard University Press, 1974); Yoram Ben-Porath, "The F-Connection: Families, Friends and Firms and the Organization of Exchange," Population and Development Review 6 (1980): 1. }~ This line of literature rests on the claim that, as Nan Lin puts it, "there are two ultimate (or primitive) rewards for human beings in a social structure: economic standing and social standing."~{ Nan Lin, Social Capital: A Theory of Social Structure and Action (New York: Cambridge University Press, 2001), 150-151. }~ These rewards are understood as instrumental and, in this regard, are highly amenable to economics. Both economic and social aspects represent "standing"--that is, a relational measure expressed in terms of one's capacity to mobilize resources. Some resources can be mobilized by money. Social relations can mobilize others. For a wide range of reasons--institutional, cultural, and possibly technological--some resources are more readily capable of being mobilized by social relations than by money. If you want to get your nephew a job at a law firm in the United States today, a friendly relationship with the firm's hiring partner is more likely to help than passing on an envelope full of cash. 
If this theory of social capital is correct, then sometimes you should be willing to trade off financial rewards for social ,{[pg 96]}, capital. Critically, the two are not fungible or cumulative. A hiring partner paid in an economy where monetary bribes for job interviews are standard does not acquire a social obligation. That same hiring partner in that same culture, who is also a friend and therefore forgoes payment, however, probably does acquire a social obligation, tenable for a similar social situation in the future. The magnitude of the social debt, however, may now be smaller. It is likely measured by the amount of money saved from not having to pay the price, not by the value of getting the nephew a job, as it would likely be in an economy where jobs cannot be had for bribes. There are things and behaviors, then, that simply cannot be commodified for market exchange, like friendship. Any effort to mix the two, to pay for one's friendship, would render it something completely different--perhaps a psychoanalysis session in our culture. There are things that, even if commodified, can still be used for social exchange, but the meaning of the social exchange would be diminished. One thinks of borrowing eggs from a neighbor, or lending a hand to friends who are moving their furniture to a new apartment. And there are things that, even when commodified, continue to be available for social exchange with its full force. Consider gamete donations as an example in contemporary American culture. It is important to see, though, that there is nothing intrinsic about any given "thing" or behavior that makes it fall into one or another of these categories. The categories are culturally contingent and cross-culturally diverse. 
What matters for our purposes here, though, is only the realization that for any given culture, there will be some acts that a person would prefer to perform not for money, but for social standing, recognition, and probably, ultimately, instrumental value obtainable only if that person has performed the action through a social, rather than a market, transaction.
+
+It is not necessary to pin down precisely the correct or most complete theory of motivation, or the full extent and dimensions of crowding out nonmarket rewards by the introduction or use of market rewards. All that is required to outline the framework for analysis is recognition that there is some form of social and psychological motivation that is neither fungible with money nor simply cumulative with it. Transacting within the price system may either increase or decrease the social-psychological rewards (be they intrinsic or extrinsic, functional or symbolic). The intuition is simple. As I have already said, leaving a fifty-dollar check on the table after one has finished a pleasant dinner at a friend's house would not increase the host's ,{[pg 97]}, social and psychological gains from the evening. Most likely, it would diminish them sufficiently that one would never again be invited. A bottle of wine or a bouquet of flowers would, to the contrary, improve the social gains. And if dinner is not intuitively obvious, think of sex. The point is simple. Money-oriented motivations are different from socially oriented motivations. Sometimes they align. Sometimes they collide. Which of the two will be the case is historically and culturally contingent. The presence of money in sports or entertainment reduced the social-psychological gains from performance in late-nineteenth-century Victorian England, at least for members of the middle and upper classes. This is reflected in the long-standing insistence on the "amateur" status of the Olympics, or the status of "actors" in Victorian society. This has changed dramatically more than a century later, where athletes' and popular entertainers' social standing is practically measured in the millions of dollars their performances can command.
+
+The relative relationships of money and social-psychological rewards are, then, dependent on culture and context. Similar actions may have different meanings in different social or cultural contexts. Consider three lawyers contemplating whether to write a paper presenting their opinion--one is a practicing attorney, the second is a judge, and the third is an academic. For the first, money and honor are often, though not always, positively correlated. Being able to command a very high hourly fee for writing the requested paper is a mode of expressing one's standing in the profession, as well as a means of putting caviar on the table. Yet, there are modes of acquiring esteem--like writing the paper as a report for a bar committee--that are not improved by the presence of money, and are in fact undermined by it. This latter effect is sharpest for the judge. If a judge is approached with an offer of money for writing an opinion, not only is this not a mark of honor, it is a subversion of the social role and would render corrupt the writing of the opinion. For the judge, the intrinsic "rewards" for writing the opinion when matched by a payment for the product would be guilt and shame, and the offer therefore an expression of disrespect. Finally, if the same paper is requested of the academic, the presence of money is located somewhere in between the judge and the practitioner. To a high degree, like the judge, the academic who writes for money is rendered suspect in her community of scholarship. A paper clearly funded by a party, whose results support the party's regulatory or litigation position, is practically worthless as an academic work. In a mirror image of the practitioner, however, there ,{[pg 98]}, are some forms of money that add to and reinforce an academic's social psychological rewards--peer-reviewed grants and prizes most prominent among them.
+
+Moreover, individuals are not monolithic agents. While it is possible to posit idealized avaricious money-grubbers, altruistic saints, or social climbers, the reality of most people is a composite of all of these, and one that is not like any of them. Clearly, some people are more focused on making money, and others are more generous; some more driven by social standing and esteem, others by a psychological sense of well-being. The for-profit and nonprofit systems probably draw people with different tastes for these desiderata. Academic science and commercial science also probably draw scientists with similar training but different tastes for types of rewards. However, well-adjusted, healthy individuals are rarely monolithic in their requirements. We would normally think of someone who chose to ignore and betray friends and family to obtain either more money or greater social recognition as a fetishist of some form or another. We spend some of our time making money, some of our time enjoying it hedonically; some of our time being with and helping family, friends, and neighbors; some of our time creatively expressing ourselves, exploring who we are and what we would like to become. Some of us, because of the economic conditions we occupy, or because of our tastes, spend very large amounts of time trying to make money--whether to become rich or, more commonly, just to make ends meet. Others spend more time volunteering, chatting, or writing.
+
+For all of us, there comes a time on any given day, week, and month, every year and in different degrees over our lifetimes, when we choose to act in some way that is oriented toward fulfilling our social and psychological needs, not our market-exchangeable needs. It is that part of our lives and our motivational structure that social production taps, and on which it thrives. There is nothing mysterious about this. It is evident to any of us who rush home to our family or to a restaurant or bar with friends at the end of a workday, rather than staying on for another hour of overtime or to increase our billable hours; or at least regret it when we cannot. It is evident to any of us who has ever brought a cup of tea to a sick friend or relative, or received one; to anyone who has lent a hand moving a friend's belongings; played a game; told a joke, or enjoyed one told by a friend. What needs to be understood now, however, is under what conditions these many and diverse social actions can turn into an important modality of economic production. When can all these acts, distinct from our desire for ,{[pg 99]}, money and motivated by social and psychological needs, be mobilized, directed, and made effective in ways that we recognize as economically valuable?
+
+2~ SOCIAL PRODUCTION: FEASIBILITY CONDITIONS AND ORGANIZATIONAL FORM
+
+The core technologically contingent fact that enables social relations to become a salient modality of production in the networked information economy is that all the inputs necessary to effective productive activity are under the control of individual users. Human creativity, wisdom, and life experience are all possessed uniquely by individuals. The computer processors, data storage devices, and communications capacity necessary to make new meaningful conversational moves from the existing universe of information and stimuli, and to render and communicate them to others near and far are also under the control of these same individual users--at least in the advanced economies and in some portions of the population of developing economies. This does not mean that all the physical capital necessary to process, store, and communicate information is under individual user control. That is not necessary. It is, rather, that the majority of individuals in these societies have the threshold level of material capacity required to explore the information environment they occupy, to take from it, and to make their own contributions to it.
+
+There is nothing about computation or communication that naturally or necessarily enables this fact. It is a felicitous happenstance of the fabrication technology of computing machines in the last quarter of the twentieth century, and, it seems, in the reasonably foreseeable future. It is cheaper to build freestanding computers that enable their owners to use a wide and dynamically changing range of information applications, and that are cheap enough that each machine is owned by an individual user or household, than it is to build massive supercomputers with incredibly high-speed communications to yet cheaper simple terminals, and to sell information services to individuals on an on-demand or standardized package model. Natural or contingent, it is nevertheless a fact of the industrial base of the networked information economy that individual users--susceptible as they are to acting on diverse motivations, in diverse relationships, some market-based, some social--possess and control the physical capital necessary to make effective the human capacities they uniquely and individually possess. ,{[pg 100]},
+
+Now, having the core inputs of information production ubiquitously distributed in society is a core enabling fact, but it alone cannot assure that social production will become economically significant. Children and teenagers, retirees, and very rich individuals can spend most of their lives socializing or volunteering; most other people cannot. While creative capacity and judgment are universally distributed in a population, available time and attention are not, and human creative capacity cannot be fully dedicated to nonmarket, nonproprietary production all the time. Someone needs to work for money, at least some of the time, to pay the rent and put food on the table. Personal computers too are only used for earnings-generating activities some of the time. In both these resources, there remain large quantities of excess capacity--time and interest in human beings; processing, storage, and communications capacity in computers--available to be used for activities whose rewards are not monetary or monetizable, directly or indirectly.
+
+For this excess capacity to be harnessed and become effective, the information production process must effectively integrate widely dispersed contributions, from many individual human beings and machines. These contributions are diverse in their quality, quantity, and focus, in their timing and geographic location. The great success of the Internet generally, and peer-production processes in particular, has been the adoption of technical and organizational architectures that have allowed them to pool such diverse efforts effectively. The core characteristics underlying the success of these enterprises are their modularity and their capacity to integrate many fine-grained contributions.
+
+"Modularity" is a property of a project that describes the extent to which it can be broken down into smaller components, or modules, that can be independently produced before they are assembled into a whole. If modules are independent, individual contributors can choose what and when to contribute independently of each other. This maximizes their autonomy and flexibility to define the nature, extent, and timing of their participation in the project. Breaking up the maps of Mars involved in the clickworkers project (described in chapter 3) and rendering them in small segments with a simple marking tool is a way of modularizing the task of mapping craters. In the SETI@home project (see chapter 3), the task of scanning radio astronomy signals is broken down into millions of little computations as a way of modularizing the calculations involved.
+
+"Granularity" refers to the size of the modules, in terms of the time and effort that an individual must invest in producing them. The five minutes ,{[pg 101]}, required for moderating a comment on Slashdot, or for metamoderating a moderator, is more fine-grained than the hours necessary to participate in writing a bug fix in an open-source project. More people can participate in the former than in the latter, independent of the differences in the knowledge required for participation. The number of people who can, in principle, participate in a project is therefore inversely related to the size of the smallest-scale contribution necessary to produce a usable module. The granularity of the modules therefore sets the smallest possible individual investment necessary to participate in a project. If this investment is sufficiently low, then "incentives" for producing that component of a modular project can be of trivial magnitude. Most importantly for our purposes of understanding the rising role of nonmarket production, the time can be drawn from the excess time we normally dedicate to having fun and participating in social interactions. If the finest-grained contributions are relatively large and would require a large investment of time and effort, the universe of potential contributors decreases. A successful large-scale peer-production project must therefore have a predominant portion of its modules be relatively fine-grained.
+
+Perhaps the clearest example of how large-grained modules can make projects falter is the condition, as of the middle of 2005, of efforts to peer produce open textbooks. The largest such effort is Wikibooks, a site associated with /{Wikipedia}/, which has not taken off as did its famous parent project. Very few texts there have reached maturity to the extent that they could be usable as a partial textbook, and those few that have were largely written by one individual with minor contributions by others. Similarly, an ambitious initiative launched in California in 2004 still had not gone far beyond an impassioned plea for help by mid-2005. The project that seems most successful as of 2005 was a South African project, Free High School Science Texts (FHSST), founded by a physics graduate student, Mark Horner. As of this writing, that three-year-old project had more or less completed a physics text, and was about halfway through chemistry and mathematics textbooks. The whole FHSST project involves a substantially more managed approach than is common in peer-production efforts, with a core group of dedicated graduate student administrators recruiting contributors, assigning tasks, and integrating the contributions. Horner suggests that the basic limiting factor is that in order to write a high school textbook, the output must comply with state-imposed guidelines for content and form. To achieve these requirements, the various modules must cohere to a degree ,{[pg 102]}, much larger than necessary in a project like /{Wikipedia}/, which can endure high diversity in style and development without losing its utility. As a result, the individual contributions have been kept at a high level of abstraction--an idea or principle explained at a time. The minimal time commitment required of each contributor is therefore large, and has led many of those who volunteered initially to not complete their contributions. 
In this case, the guideline requirements constrained the project's granularity, and thereby impeded its ability to grow and capture the necessary thousands of small-grained contributions. With orders of magnitude fewer contributors, each must be much more highly motivated and available than is necessary in /{Wikipedia}/, Slashdot, and similar successful projects.
+
+It is not necessary, however, that each and every chunk or module be fine grained. Free software projects in particular have shown us that successful peer-production projects may also be structured, technically and culturally, in ways that make it possible for different individuals to contribute vastly different levels of effort commensurate with their ability, motivation, and availability. The large free software projects might integrate thousands of people who are acting primarily for social psychological reasons--because it is fun or cool; a few hundred young programmers aiming to make a name for themselves so as to become employable; and dozens of programmers who are paid to write free software by firms that follow one of the nonproprietary strategies described in chapter 2. IBM and Red Hat are the quintessential examples of firms that contribute paid employee time to peer-production projects in this form. This form of link between a commercial firm and a peer production community is by no means necessary for a peer-production process to succeed; it does, however, provide one constructive interface between market- and nonmarket-motivated behavior, through which actions on the two types of motivation can reinforce, rather than undermine, each other.
+
+The characteristics of planned modularization of a problem are highly visible and explicit in some peer-production projects--the distributed computing projects like SETI@home are particularly good examples of this. However, if we were to step back and look at the entire phenomenon of Web-based publication from a bird's-eye view, we would see that the architecture of the World Wide Web, in particular the persistence of personal Web pages and blogs and their self-contained, technical independence of each other, gives the Web as a whole the characteristics of modularity and variable but fine-grained granularity. Imagine that you were trying to evaluate ,{[pg 103]}, how, if at all, the Web is performing the task of media watchdog. Consider one example, which I return to in chapter 7: The Memory Hole, a Web site created and maintained by Russ Kick, a freelance author and editor. Kick spent some number of hours preparing and filing a Freedom of Information Act request with the Defense Department, seeking photographs of coffins of U.S. military personnel killed in Iraq. He was able to do so over some period, not having to rely on "getting the scoop" to earn his dinner. At the same time, tens of thousands of other individual Web publishers and bloggers were similarly spending their time hunting down stories that moved them, or that they happened to stumble across in their own daily lives. When Kick eventually got the photographs, he could upload them onto his Web site, where they were immediately available for anyone to see. Because each contribution like Kick's can be independently created and stored, because no single permission point or failure point is present in the architecture of the Web--it is merely a way of conveniently labeling documents stored independently by many people who are connected to the Internet and use HTML (hypertext markup language) and HTTP (hypertext transfer protocol)--as an "information service," it is highly modular and diversely granular.
Each independent contribution comprises as large or small an investment as its owner-operator chooses to make. Together, they form a vast almanac, trivia trove, and news and commentary facility, to name but a few, produced by millions of people at their leisure--whenever they can or want to, about whatever they want.
+
+The independence of Web sites is what marks their major difference from more organized peer-production processes, where contributions are marked not by their independence but by their interdependence. The Web as a whole requires no formal structure of cooperation. As an "information good" or medium, it emerges as a pattern out of the coordinate coexistence of millions of entirely independent acts. All it requires is a pattern recognition utility superimposed over the outputs of these acts--a search engine or directory. Peer-production processes, to the contrary, do generally require some substantive cooperation among users. A single rating of an individual comment on Slashdot does not by itself moderate the comment up or down, nor does an individual marking of a crater. Spotting a bug in free software, proposing a fix, reviewing the proposed fix, and integrating it into the software are interdependent acts that require a level of cooperation. This necessity for cooperation requires peer-production processes to adopt more engaged strategies for assuring that everyone who participates is doing so in ,{[pg 104]}, good faith, competently, and in ways that do not undermine the whole, and for weeding out those would-be participants who are not.
+
+Cooperation in peer-production processes is usually maintained by some combination of technical architecture, social norms, legal rules, and a technically backed hierarchy that is validated by social norms. /{Wikipedia}/ is the strongest example of a discourse-centric model of cooperation based on social norms. However, even /{Wikipedia}/ includes, ultimately, a small number of people with system administrator privileges who can eliminate accounts or block users in the event that someone is being genuinely obstructionist. This technical fallback, however, appears only after substantial play has been given to self-policing by participants, and to informal and quasi-formal community-based dispute resolution mechanisms. Slashdot, by contrast, provides a strong model of a sophisticated technical system intended to assure that no one can "defect" from the cooperative enterprise of commenting and moderating comments. It limits behavior enabled by the system to avoid destructive behavior before it happens, rather than policing it after the fact. The Slash code does this by technically limiting the power any given person has to moderate anyone else up or down, and by making every moderator the subject of a peer review system whose judgments are enforced technically--that is, when any given user is described by a sufficiently large number of other users as unfair, that user automatically loses the technical ability to moderate the comments of others. The system itself is a free software project, licensed under the GPL (General Public License)--which is itself the quintessential example of how law is used to prevent some types of defection from the common enterprise of peer production of software. The particular type of defection that the GPL protects against is appropriation of the joint product by any single individual or firm, the risk of which would make it less attractive for anyone to contribute to the project to begin with.
The GPL assures that, as a legal matter, no one who contributes to a free software project need worry that some other contributor will take the project and make it exclusively their own. The ultimate quality judgments regarding what is incorporated into the "formal" releases of free software projects provide the clearest example of the extent to which a meritocratic hierarchy can be used to integrate diverse contributions into a finished single product. In the case of the Linux kernel development project (see chapter 3), it was always within the power of Linus Torvalds, who initiated the project, to decide which contributions should be included in a new release, and which should not. But it is a funny sort of hierarchy, whose quirkiness Steve Weber ,{[pg 105]}, well explicates.~{ Steve Weber, The Success of Open Source (Cambridge, MA: Harvard University Press, 2004). }~ Torvalds's authority is persuasive, not legal or technical, and certainly not determinative. He can do nothing, except persuade, to prevent others from developing anything they want and adding it to their kernel, or from distributing that alternative version of the kernel. There is nothing he can do to prevent the entire community of users, or some subsection of it, from rejecting his judgment about what ought to be included in the kernel. Anyone is legally free to do as they please. So these projects are based on a hierarchy of meritocratic respect, on social norms, and, to a great extent, on the mutual recognition by most players in this game that it is to everybody's advantage to have someone overlay a peer review system with some leadership.
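The two technical rules the Slash code is described above as enforcing--a hard cap on any one person's moderation power, and automatic revocation of that power when enough peers judge a moderator unfair--can be sketched in a toy model. This is an illustrative sketch only, not the actual Slash code: the class, the point budget of 5, and the unfairness threshold of 3 are all hypothetical numbers chosen to make the mechanism concrete.

```python
# Toy model of Slashdot-style moderation limits and metamoderation.
# NOT the real Slash code: the point budget and threshold are hypothetical.

class Moderator:
    def __init__(self, name, mod_points=5, unfair_threshold=3):
        self.name = name
        self.mod_points = mod_points          # cap on moderation power
        self.unfair_votes = 0                 # adverse metamoderation judgments
        self.unfair_threshold = unfair_threshold

    def can_moderate(self):
        # Power is lost either by exhausting the budget or by peer judgment.
        return self.mod_points > 0 and self.unfair_votes < self.unfair_threshold

    def moderate(self, comment_scores, comment_id, delta):
        # delta is +1 ("mod up") or -1 ("mod down"); each use spends a point,
        # so no single moderator can dominate the conversation.
        if not self.can_moderate():
            return False
        comment_scores[comment_id] = comment_scores.get(comment_id, 0) + delta
        self.mod_points -= 1
        return True

    def metamoderate(self, fair):
        # Other users review this moderator's past judgments.
        if not fair:
            self.unfair_votes += 1

scores = {}
m = Moderator("alice")
m.moderate(scores, "c1", +1)     # one point spent moderating a comment up
for _ in range(3):               # three peers judge her moderation unfair...
    m.metamoderate(fair=False)
assert not m.can_moderate()      # ...and the revocation is enforced technically
```

The point of the sketch is that both constraints operate before the fact: the system simply refuses further moderation, rather than relying on after-the-fact policing.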
+
+In combination then, three characteristics make possible the emergence of information production that is not based on exclusive proprietary claims, not aimed toward sales in a market for either motivation or information, and not organized around property and contract claims to form firms or market exchanges. First, the physical machinery necessary to participate in information and cultural production is almost universally distributed in the population of the advanced economies. Certainly, personal computers as capital goods are under the control of numbers of individuals that are orders of magnitude larger than the number of parties controlling the use of mass-production-capable printing presses, broadcast transmitters, satellites, or cable systems, record manufacturing and distribution chains, and film studios and distribution systems. This means that the physical machinery can be put in service and deployed in response to any one of the diverse motivations individual human beings experience. They need not be deployed in order to maximize returns on the financial capital, because financial capital need not be mobilized to acquire and put in service any of the large capital goods typical of the industrial information economy. Second, the primary raw materials in the information economy, unlike the industrial economy, are public goods--existing information, knowledge, and culture. Their actual marginal social cost is zero. Unless regulatory policy makes them purposefully expensive in order to sustain the proprietary business models, acquiring raw materials also requires no financial capital outlay. Again, this means that these raw materials can be deployed for any human motivation. They need not maximize financial returns.
Third, the technical architectures, organizational models, and social dynamics of information production and exchange on the Internet have developed so that they allow us to structure the solution to problems--in particular to information production problems--in ways ,{[pg 106]}, that are highly modular. This allows many diversely motivated people to act for a wide range of reasons that, in combination, cohere into new useful information, knowledge, and cultural goods. These architectures and organizational models allow both independent creation that coexists and coheres into usable patterns, and interdependent cooperative enterprises in the form of peer-production processes.
+
+Together, these three characteristics suggest that the patterns of social production of information that we are observing in the digitally networked environment are not a fad. They are, rather, a sustainable pattern of human production given the characteristics of the networked information economy. The diversity of human motivation is nothing new. We now have a substantial literature documenting its importance in free and open-source software development projects, from Josh Lerner and Jean Tirole, Rishab Ghosh, Eric von Hippel and Karim Lakhani, and others. Neither is the public goods nature of information new. What is new are the technological conditions that allow these facts to provide the ingredients of a much larger role in the networked information economy for nonmarket, nonproprietary production to emerge. As long as capitalization and ownership of the physical capital base of this economy remain widely distributed and as long as regulatory policy does not make information inputs artificially expensive, individuals will be able to deploy their own creativity, wisdom, conversational capacities, and connected computers, both independently and in loose interdependent cooperation with others, to create a substantial portion of the information environment we occupy. Moreover, we will be able to do so for whatever reason we choose--through markets or firms to feed and clothe ourselves, or through social relations and open communication with others, to give our lives meaning and context.
+
+2~ TRANSACTION COSTS AND EFFICIENCY
+
+For purposes of analyzing the political values that are the concern of most of this book, all that is necessary is that we accept that peer production in particular, and nonmarket information production and exchange in general, are sustainable in the networked information economy. Most of the remainder of the book seeks to evaluate why, and to what extent, the presence of a substantial nonmarket, commons-based sector in the information production system is desirable from the perspective of various aspects of freedom and justice. Whether this sector is "efficient" within the meaning of the ,{[pg 107]}, word in welfare economics is beside the point to most of these considerations. Even a strong commitment to a pragmatic political theory, one that accepts and incorporates into its consideration the limits imposed by material and economic reality, need not aim for "efficient" policy in the welfare sense. It is sufficient that the policy is economically and socially sustainable on its own bottom--in other words, that it does not require constant subsidization at the expense of some other area excluded from the analysis. It is nonetheless worthwhile spending a few pages explaining why, and under what conditions, commons-based peer production, and social production more generally, are not only sustainable but actually efficient ways of organizing information production.
+
+The efficient allocation of two scarce resources and one public good is at stake in the choice between social production--whether it is peer production or independent nonmarket production--and market-based production. Because most of the outputs of these processes are nonrival goods--information, knowledge, and culture--the fact that the social production system releases them freely, without extracting a price for using them, means that it would, all other things being equal, be more efficient for information to be produced on a nonproprietary social model, rather than on a proprietary market model. Indeed, all other things need not even be equal for this to hold. It is enough that the net value of the information produced by commons-based social production processes and released freely for anyone to use as they please is no less than the total value of information produced through property-based systems minus the deadweight loss caused by the above-marginal-cost pricing practices that are the intended result of the intellectual property system.
+
+The two scarce resources are: first, human creativity, time, and attention; and second, the computation and communications resources used in information production and exchange. In both cases, the primary reason to choose between proprietary and nonproprietary strategies, between market-based systems--be they direct market exchange or firm-based hierarchical production--and social systems, is the comparative transaction costs of each, and the extent to which these transaction costs either outweigh the benefits of working through each system, or cause the system to distort the information it generates so as to systematically misallocate resources.
+
+The first thing to recognize is that markets, firms, and social relations are three distinct transactional frameworks. Imagine that I am sitting in a room and need paper for my printer. I could (a) order paper from a store; (b) call ,{[pg 108]}, the storeroom, if I am in a firm or organization that has one, and ask the clerk to deliver the paper I need; or (c) walk over to a neighbor and borrow some paper. Choice (a) describes the market transactional framework. The store knows I need paper immediately because I am willing to pay for it now. Alternative (b) is an example of the firm as a transactional framework. The paper is in the storeroom because someone in the organization planned that someone else would need paper today, with some probability, and ordered enough to fill that expected need. The clerk in the storeroom gives it to me because that is his job; again, defined by someone who planned to have someone available to deliver paper when someone else in the proper channels of authority says that she needs it. Comparing and improving the efficiency of (a) and (b), respectively, has been a central project in transaction-costs organization theory. We might compare, for example, the costs of taking my call, verifying the credit card information, and sending a delivery truck for my one batch of paper, to the costs of someone planning for the average needs of a group of people like me, who occasionally run out of paper, and stocking a storeroom with enough paper and a clerk to fill our needs in a timely manner. However, notice that (c) is also an alternative transactional framework. I could, rather than incurring the costs of transacting through the market with the local store or of building a firm with sufficient lines of authority to stock and manage the storeroom, pop over to my neighbor and ask for some paper. 
This would make sense even within an existing firm when, for example, I need two or three pages immediately and do not want to wait for the storeroom clerk to do his rounds, or more generally, if I am working at home and the costs of creating "a firm," stocking a storeroom, and paying a clerk are too high for my neighbors and me. Instead, we develop a set of neighborly social relations, rather than a firm-based organization, to deal with shortfalls during periods when it would be too costly to assure a steady flow of paper from the market--for example, late in the evening, on a weekend, or in a sparsely populated area.
+
+The point is not, of course, to reduce all social relations and human decency to a transaction-costs theory. Too many such straight planks have already been cut from the crooked timber of humanity to make that exercise useful or enlightening. The point is that most of economics has been ignoring the social transactional framework as an alternative whose relative efficiency can be accounted for and considered in much the same way as the relative cost advantages of simple markets when compared to the hierarchical organizations that typify much of our economic activity--firms. ,{[pg 109]},
+
+A market transaction, in order to be efficient, must be clearly demarcated as to what it includes, so that it can be priced efficiently. That price must then be paid in equally crisply delineated currency. Even if a transaction initially may be declared to involve sale of "an amount reasonably required to produce the required output," for a "customary" price, at some point what was provided and what is owed must be crystallized and fixed for a formal exchange. The crispness is a functional requirement of the price system. It derives from the precision and formality of the medium of exchange--currency--and the ambition to provide refined representations of the comparative value of marginal decisions through denomination in an exchange medium that represents these incremental value differences. Similarly, managerial hierarchies require a crisp definition of who should be doing what, when, and how, in order to permit the planning and coordination process to be effective.
+
+Social exchange, on the other hand, does not require the same degree of crispness at the margin. As Maurice Godelier put it in /{The Enigma of the Gift}/, "the mark of the gift between close friends and relatives . . . is not the absence of obligations, it is the absence of `calculation.' "~{ Maurice Godelier, The Enigma of the Gift, trans. Nora Scott (Chicago: University of Chicago Press, 1999), 5. }~ There are, obviously, elaborate and formally ritualistic systems of social exchange, in both ancient societies and modern. There are common-property regimes that monitor and record calls on the common pool very crisply. However, in many of the common-property regimes, one finds mechanisms of bounding or fairly allocating access to the common pool that more coarsely delineate the entitlements, behaviors, and consequences than is necessary for a proprietary system. In modern market society, where we have money as a formal medium of precise exchange, and where social relations are more fluid than in traditional societies, social exchange certainly occurs as a fuzzier medium. Across many cultures, generosity is understood as imposing a debt of obligation; but none of the precise amount of value given, the precise nature of the debt to be repaid, or the date of repayment need necessarily be specified. Actions enter into a cloud of goodwill or membership, out of which each agent can understand him- or herself as being entitled to a certain flow of dependencies or benefits in exchange for continued cooperative behavior. This may be an ongoing relationship between two people, a small group like a family or group of friends, and up to a general level of generosity among strangers that makes for a decent society. 
The point is that social exchange does not require defining, for example, "I will lend you my car and help you move these five boxes on Monday, and in exchange you will feed my ,{[pg 110]}, fish next July," in the same way that the following would: "I will move five boxes on Tuesday for $100, six boxes for $120." This does not mean that social systems are cost free--far from it. They require tremendous investment, acculturation, and maintenance. This is true in this case every bit as much as it is true for markets or states. Once functional, however, social exchanges require less information crispness at the margin.
+
+Both social and market exchange systems require large fixed costs--the setting up of legal institutions and enforcement systems for markets, and creating social networks, norms, and institutions for the social exchange. Once these initial costs have been invested, however, market transactions systematically require a greater degree of precise information about the content of actions, goods, and obligations, and more precision of monitoring and enforcement on a per-transaction basis than do social exchange systems.
+
+This difference between markets and hierarchical organizations, on the one hand, and peer-production processes based on social relations, on the other, is particularly acute in the context of human creative labor--one of the central scarce resources that these systems must allocate in the networked information economy. The levels and focus of individual effort are notoriously hard to specify for pricing or managerial commands, considering all aspects of individual effort and ability--talent, motivation, workload, and focus--as they change in small increments over the span of an individual's full day, let alone months. What we see instead is codification of effort types--a garbage collector, a law professor--that are priced more or less finely. However, we only need to look at the relative homogeneity of law firm starting salaries as compared to the high variability of individual ability and motivation levels of graduating law students to realize that pricing of individual effort can be quite crude. Similarly, these attributes are also difficult to monitor and verify over time, though perhaps not quite as difficult as predicting them ex ante. Pricing therefore continues to be a function of relatively crude information about the actual variability among people. More importantly, as aspects of performance that are harder to fully specify in advance or monitor--like creativity over time given the occurrence of new opportunities to be creative, or implicit know-how--become a more significant aspect of what is valuable about an individual's contribution, market mechanisms become more and more costly to maintain efficiently, and, as a practical matter, simply lose a lot of information.
+
+People have different innate capabilities; personal, social, and educational histories; emotional frameworks; and ongoing lived experiences, which make ,{[pg 111]}, for immensely diverse associations with, idiosyncratic insights into, and divergent utilization of existing information and cultural inputs at different times and in different contexts. Human creativity is therefore very difficult to standardize and specify in the contracts necessary for either market-cleared or hierarchically organized production. As the weight of human intellectual effort increases in the overall mix of inputs into a given production process, an organization model that does not require contractual specification of the individual effort required to participate in a collective enterprise, and which allows individuals to self-identify for tasks, will be better at gathering and utilizing information about who should be doing what than a system that does require such specification. Some firms try to solve this problem by utilizing market- and social-relations-oriented hybrids, like incentive compensation schemes and employee-of-the-month-type social motivational frameworks. These may be able to improve on firm-only or market-only approaches. It is unclear, though, how well they can overcome the core difficulty: that is, that both markets and firm hierarchies require significant specification of the object of organization and pricing--in this case, human intellectual input. The point here is qualitative. It is not only, or even primarily, that more people can participate in production in a commons-based effort. It is that the widely distributed model of information production will better identify the best person to produce a specific component of a project, considering all abilities and availability to work on the specific module within a specific time frame.
With enough uncertainty as to the value of various productive activities, and enough variability in the quality of both information inputs and human creative talent vis-à-vis any set of production opportunities, freedom of action for individuals coupled with continuous communications among the pool of potential producers and consumers can generate better information about the most valuable productive actions, and the best human inputs available to engage in these actions at a given time. Markets and firm incentive schemes are aimed at producing precisely this form of self-identification. However, the rigidities associated with collecting and comprehending bids from individuals through these systems (that is, transaction costs) limit the efficacy of self-identification by comparison to a system in which, once an individual self-identifies for a task, he or she can then undertake it without permission, contract, or instruction from another. The emergence of networked organizations (described and analyzed in the work of Charles Sabel and others) suggests that firms are in fact trying to overcome these limitations by developing parallels to the freedom to learn, ,{[pg 112]}, innovate, and act on these innovations that is intrinsic to peer-production processes by loosening the managerial bonds, locating more of the conception and execution of problem solving away from the managerial core of the firm, and implementing these through social, as well as monetary, motivations. However, the need to assure that the value created is captured within the organization limits the extent to which these strategies can be implemented within a single enterprise, as opposed to their implementation in an open process of social production. This effect, in turn, is in some sectors attenuated through the use of what Walter Powell and others have described as learning networks.
Engineers and scientists often create frameworks that allow them to step out of their organizational affiliations, through conferences or workshops. By reproducing the social production characteristics of academic exchange, they overcome some of the information loss caused by the boundary of the firm. While these organizational strategies attenuate the problem, they also underscore the degree to which it is widespread and understood by organizations as such. The fact that the direction of the solutions business organizations choose tends to shift elements of the production process away from market- or firm-based models and toward networked social production models is revealing. Now, the self-identification that is central to the relative information efficiency of peer production is not always perfect. Some mechanisms used by firms and markets to codify effort levels and abilities--like formal credentials--are the result of experience with substantial errors or misstatements by individuals of their capacities. To succeed, therefore, peer-production systems must also incorporate mechanisms for smoothing out incorrect self-assessments--as peer review does in traditional academic research or in the major sites like /{Wikipedia}/ or Slashdot, or as redundancy and statistical averaging do in the case of NASA clickworkers. The prevalence of misperceptions that individual contributors have about their own ability and the cost of eliminating such errors will be part of the transaction costs associated with this form of organization. They parallel quality control problems faced by firms and markets.
+
+The lack of crisp specification of who is giving what to whom, and in exchange for what, also bears on the comparative transaction costs associated with the allocation of the second major type of scarce resource in the networked information economy: the physical resources that make up the networked information environment--communications, computation, and storage capacity. It is important to note, however, that these are very different from creativity and information as inputs: they are private goods, not a ,{[pg 113]}, public good like information, and they are standardized goods with well-specified capacities, not heterogeneous and highly uncertain attributes like human creativity at a given moment and context. Their outputs, unlike information, are not public goods. The reasons that they are nonetheless subject to efficient sharing in the networked environment therefore require a different economic explanation. However, the sharing of these material resources, like the sharing of human creativity, insight, and attention, nonetheless relies on both the comparative transaction costs of markets and social relations and the diversity of human motivation.
+
+Personal computers, wireless transceivers, and Internet connections are "shareable goods." The basic intuition behind the concept of shareable goods is simple. There are goods that are "lumpy": given a state of technology, they can only be produced in certain discrete bundles that offer discontinuous amounts of functionality or capacity. In order to have any ability to run a computation, for example, a consumer must buy a computer processor. These, in turn, only come in discrete units with a certain speed or capacity. One could easily imagine a world where computers are very large and their owners sell computation capacity to consumers "on demand," whenever they needed to run an application. That is basically the way the mainframe world of the 1960s and 1970s worked. However, the economics of microchip fabrication and of network connections over the past thirty years, followed by storage technology, have changed that. For most functions that users need, the price-performance trade-off favors stand-alone, general-purpose personal computers, owned by individuals and capable of running locally most applications users want, over remote facilities capable of selling on-demand computation and storage. So computation and storage today come in discrete, lumpy units. You can decide to buy a faster or slower chip, or a larger or smaller hard drive, but once you buy them, you have the capacity of these machines at your disposal, whether you need it or not.
+
+Lumpy goods can, in turn, be fine-, medium-, or large-grained. A large-grained good is one that is so expensive it can only be used by aggregating demand for it. Industrial capital equipment, like a steam engine, is of this type. Fine-grained goods are of a granularity that allows consumers to buy precisely as much of the goods needed for the amount of capacity they require. Medium-grained goods are small enough for an individual to justify buying for her own use, given their price and her willingness and ability to pay for the functionality she plans to use. A personal computer is a medium-grained lumpy good in the advanced economies and among the more well-to-do ,{[pg 114]}, in poorer countries, but is a large-grained capital good for most people in poor countries. If, given the price of such a good and the wealth of a society, a large number of individuals buy and use such medium-grained lumpy goods, that society will have a large amount of excess capacity "out there," in the hands of individuals. Because these machines are put into service to serve the needs of individuals, their excess capacity is available for these individuals to use as they wish--for their own uses, to sell to others, or to share with others. It is the combination of the fact that these machines are available at prices (relative to wealth) that allow users to put them in service based purely on their value for personal use, and the fact that they have enough capacity to facilitate additionally the action and fulfill the needs of others, that makes them "shareable." If they were so expensive that they could only be bought by pooling the value of a number of users, they would be placed in service either using some market mechanism to aggregate that demand, or through formal arrangements of common ownership by all those whose demand was combined to invest in purchasing the resource.
If they were so finely grained in their capacity that there would be nothing left to share, again, sharing would be harder to sustain. The fact that they are both relatively inexpensive and have excess capacity makes them the basis for a stable model of individual ownership of resources combined with social sharing of that excess capacity.
+
+Because social sharing requires less precise specification of the transactional details with each transaction, it has a distinct advantage over market-based mechanisms for reallocating the excess capacity of shareable goods, particularly when they have small quanta of excess capacity relative to the amount necessary to achieve the desired outcome. For example, imagine that there are one thousand people in a population of computer owners. Imagine that each computer is capable of performing one hundred computations per second, and that each computer owner needs to perform about eighty operations per second. Every owner, in other words, has twenty operations of excess capacity every second. Now imagine that the marginal transaction costs of arranging a sale of these twenty operations--exchanging PayPal (a widely used low-cost Internet-based payment system) account information, insurance against nonpayment, specific statement of how much time the computer can be used, and so forth--cost ten cents more than the marginal transaction costs of sharing the excess capacity socially. John wants to render a photograph in one second, which takes two hundred operations per second. Robert wants to model the folding of proteins, which takes ten thousand ,{[pg 115]}, operations per second. For John, a sharing system would save fifty cents--assuming he can use his own computer for half of the two hundred operations he needs. He needs to transact with five other users to "rent" their excess capacity of twenty operations each. Robert, on the other hand, needs to transact with five hundred individual owners in order to use their excess capacity, and for him, using a sharing system is fifty dollars cheaper. The point of the illustration is simple. The cost advantage of sharing as a transactional framework relative to the price system increases linearly with the number of transactions necessary to acquire the level of resources necessary for an operation. 
If excess capacity in a society is very widely distributed in small dollops, and for any given use of the excess capacity it is necessary to pool the excess capacity of thousands or even millions of individual users, the transaction-cost advantages of the sharing system become significant.
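The arithmetic in the example above can be sketched directly. This is a minimal illustration, not part of the original text, taking the text's numbers as given: twenty operations per second of excess capacity per owner, and a ten-cent marginal cost difference per transaction between market exchange and social sharing.

```python
# Sketch of the transaction-cost arithmetic from the example above.
# Assumed from the text: each owner has 20 ops/sec of excess capacity,
# and each market transaction costs $0.10 more at the margin than the
# equivalent social-sharing transaction.

EXCESS_PER_OWNER = 20      # operations/sec of spare capacity per owner
EXTRA_COST_PER_TXN = 0.10  # marginal cost of a market vs. a sharing transaction

def sharing_advantage(ops_needed, own_capacity=0):
    """Dollars saved by sharing rather than buying the needed excess capacity."""
    ops_to_pool = ops_needed - own_capacity
    # number of owners whose excess capacity must be pooled (rounded up)
    transactions = -(-ops_to_pool // EXCESS_PER_OWNER)
    return transactions * EXTRA_COST_PER_TXN

# John: 200 ops, covers 100 on his own machine -> 5 transactions, $0.50 saved
print(sharing_advantage(200, own_capacity=100))
# Robert: 10,000 ops -> 500 transactions, $50.00 saved
print(sharing_advantage(10_000))
```

The advantage grows linearly with the number of owners whose small dollops of capacity must be pooled, which is the point of the illustration.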
+
+The transaction-cost effect is reinforced by the motivation crowding out theory. When many discrete chunks of excess capacity need to be pooled, each distinct contributor cannot be paid a very large amount. Motivation crowding out theory would predict that when the monetary rewards to an activity are low, the negative effect of crowding out the social-psychological motivation will weigh more heavily than any increased incentive that is created by the promise of a small payment to transfer one's excess capacity. The upshot is that when the technological state results in excess capacity of physical capital being widely distributed in small dollops, social sharing can outperform secondary markets as a mechanism for harnessing that excess capacity. This is so because of both transaction costs and motivation. Fewer owners will be willing to sell their excess capacity cheaply than to give it away for free in the right social context, and the transaction costs of selling will be higher than those of sharing.
+
+From an efficiency perspective, then, there are clear reasons to think that social production systems--both peer production of information, knowledge, and culture and sharing of material resources--can be more efficient than market-based systems to motivate and allocate both human creative effort and the excess computation, storage, and communications capacity that typify the networked information economy. That does not mean that all of us will move out of market-based productive relationships all of the time. It does mean that alongside our market-based behaviors we generate substantial amounts of human creativity and mechanical capacity. The transaction costs of clearing those resources through the price system or through ,{[pg 116]}, firms are substantial, and considerably larger for the marginal transaction than clearing them through social-sharing mechanisms as a transactional framework. With the right institutional framework and peer-review or quality-control mechanisms, and with well-modularized organization of work, social sharing is likely to identify the best person available for a job and make it feasible for that person to work on that job using freely available information inputs. Similarly, social transactional frameworks are likely to be substantially less expensive than market transactions for pooling large numbers of discrete, small increments of the excess capacity of the personal computer processors, hard drives, and network connections that make up the physical capital base of the networked information economy. In both cases, given that much of what is shared is excess capacity from the perspective of the contributors, available to them after they have fulfilled some threshold level of their market-based consumption requirements, social-sharing systems are likely to tap into social psychological motivations that money cannot tap, and, indeed, that the presence of money in a transactional framework could nullify.
Because of these effects, social sharing and collaboration can provide not only a sustainable alternative to market-based and firm-based models of provisioning information, knowledge, culture, and communications, but also an alternative that more efficiently utilizes the human and physical capital base of the networked information economy. A society whose institutional ecology permitted social production to thrive would be more productive under these conditions than a society that optimized its institutional environment solely for market- and firm-based production, ignoring its detrimental effects on social production.
+
+2~ THE EMERGENCE OF SOCIAL PRODUCTION IN THE DIGITALLY NETWORKED ENVIRONMENT
+
+There is a curious congruence between the anthropologists of the gift and mainstream economists today. Both treat the gift literature as being about the periphery, about societies starkly different from modern capitalist societies. As Godelier puts it, "What a contrast between these types of society, these social and mental universes, and today's capitalist society where the majority of social relations are impersonal (involving the individual as citizen and the state, for instance), and where the exchange of things and services is conducted for the most part in an anonymous marketplace, leaving little room for an economy and moral code based on gift-giving."~{ Godelier, The Enigma, 106. }~ And yet, ,{[pg 117]}, sharing is everywhere around us in the advanced economies. Since the 1980s, we have seen an increasing focus, in a number of literatures, on production practices that rely heavily on social rather than price-based or governmental policies. These include, initially, the literature on social norms and social capital, or trust.~{ In the legal literature, Robert Ellickson, Order Without Law: How Neighbors Settle Disputes (Cambridge, MA: Harvard University Press, 1991), is the locus classicus for showing how social norms can substitute for law. For a bibliography of the social norms literature outside of law, see Richard H. McAdams, "The Origin, Development, and Regulation of Norms," Michigan Law Review 96 (1997): 338n1, 339n2. Early contributions were: Edna Ullman-Margalit, The Emergence of Norms (Oxford: Clarendon Press, 1977); James Coleman, "Norms as Social Capital," in Economic Imperialism: The Economic Approach Applied Outside the Field of Economics, ed. Peter Bernholz and Gerard Radnitsky (New York: Paragon House Publishers, 1987), 133-155; Sally E. Merry, "Rethinking Gossip and Scandal," in Toward a Theory of Social Control, Fundamentals, ed. Donald Black (New York: Academic Press, 1984). 
}~ Both these lines of literature, however, are statements of the institutional role of social mechanisms for enabling market exchange and production. More direct observations of social production and exchange systems are provided by the literature on social provisioning of public goods--like social norm enforcement as a dimension of policing criminality, and the literature on common property regimes.~{ On policing, see Robert C. Ellickson, "Controlling Chronic Misconduct in City Spaces: Of Panhandlers, Skid Rows, and Public-Space Zoning," Yale Law Journal 105 (1996): 1165, 1194-1202; and Dan M. Kahan, "Between Economics and Sociology: The New Path of Deterrence," Michigan Law Review 95 (1997): 2477. }~ The former are limited by their focus on public goods provisioning. The latter are usually limited by their focus on discretely identifiable types of resources--common pool resources--that must be managed as among a group of claimants while retaining a proprietary outer boundary toward nonmembers. The focus of those who study these phenomena is usually on relatively small and tightly knit communities, with clear boundaries between members and nonmembers.~{ An early and broad claim in the name of commons in resources for communication and transportation, as well as human community building--like roads, canals, or social-gathering places--is Carol Rose, "The Comedy of the Commons: Custom, Commerce, and Inherently Public Property," University of Chicago Law Review 53 (1986): 711. Condensing around the work of Elinor Ostrom, a more narrowly defined literature developed over the course of the 1990s: Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (New York: Cambridge University Press, 1990). Another seminal study was James M. Acheson, The Lobster Gangs of Maine (New Hampshire: University Press of New England, 1988).
A brief intellectual history of the study of common resource pools and common property regimes can be found in Charlotte Hess and Elinor Ostrom, "Ideas, Artifacts, Facilities, and Content: Information as a Common-Pool Resource," Law & Contemporary Problems 66 (2003): 111. }~
+
+These lines of literature point to an emerging understanding of social production and exchange as an alternative to markets and firms. Social production is not limited to public goods, to exotic, out-of-the-way places like surviving medieval Spanish irrigation regions or the shores of Maine's lobster fishing grounds, or even to the ubiquitous phenomenon of the household. As SETI@home and Slashdot suggest, it is not necessarily limited to stable communities of individuals who interact often and know each other, or who expect to continue to interact personally. Social production of goods and services, both public and private, is ubiquitous, though unnoticed. It sometimes substitutes for, and sometimes complements, market and state production everywhere. It is, to be fanciful, the dark matter of our economic production universe.
+
+Consider the way in which the following sentences are intuitively familiar, yet as a practical matter, describe the provisioning of goods or services that have well-defined NAICS categories (the categories used by the Economic Census to categorize economic sectors) whose provisioning through the markets is accounted for in the Economic Census, but that are commonly provisioned in a form consistent with the definition of sharing--on a radically distributed model, without price or command.
+
+group{
+
+NAICS 624410 [Babysitting services, child day care]
+ "John, could you pick up Bobby today when you take Lauren to soccer?
+I have a conference call I have to make." ,{[pg 118]},
+ "Are you doing homework with Zoe today, or shall I?"
+
+}group
+
+group{
+
+NAICS 484210 [Trucking used household, office, or institutional
+furniture and equipment]
+ "Jane, could you lend a hand moving this table to the dining room?"
+ "Here, let me hold the elevator door for you, this looks heavy."
+
+}group
+
+group{
+
+NAICS 484122 [Trucking, general freight, long-distance,
+less-than-truckload]
+ "Jack, do you mind if I load my box of books in your trunk so
+you can drop it off at my brother's on your way to Boston?"
+
+}group
+
+group{
+
+NAICS 514110 [Traffic reporting services]
+ "Oh, don't take I-95, it's got horrible construction traffic to
+exit 39."
+
+}group
+
+group{
+
+NAICS 711510 [Newspaper columnists, independent (freelance)]
+ "I don't know about Kerry, he doesn't move me, I think he should be
+more aggressive in criticizing Bush on Iraq."
+
+}group
+
+group{
+
+NAICS 621610 [Home health-care services]
+ "Can you please get me my medicine? I'm too wiped to get up."
+ "Would you like a cup of tea?"
+
+}group
+
+group{
+
+NAICS 561591 [Tourist information bureaus]
+ "Excuse me, how do I get to Carnegie Hall?"
+
+}group
+
+group{
+
+NAICS 561321 [Temporary help services]
+ "I've got a real crunch on the farm, can you come over on Saturday
+and lend a hand?"
+ "This is crazy, I've got to get this document out tonight, could you
+lend me a hand with proofing and pulling it all together tonight?"
+
+}group
+
+group{
+
+NAICS 71 [Arts, entertainment, and recreation]
+ "Did you hear the one about the Buddhist monk, the Rabbi, and
+the Catholic priest...?"
+ "Roger, bring out your guitar...."
+ "Anybody up for a game of...?"
+
+}group
+
+The litany of examples generalizes through a combination of four dimensions that require an expansion from the current focus of the literatures related to social production. First, they relate to production of goods and services, not only of norms or rules. Social relations provide the very motivations for, and information relating to, production and exchange, not only the institutional framework for organizing action, which itself is motivated, informed, and organized by markets or managerial commands. Second, they relate to all kinds of goods, not only public goods. In particular, the paradigm cases of free software development and distributed computing involve labor and shareable goods--each plainly utilizing private goods as inputs, ,{[pg 119]}, and, in the case of distributed computing, producing private goods as outputs. Third, at least some of them relate not only to relations of production within well-defined communities of individuals who have repeated interactions, but extend to cover baseline standards of human decency. These enable strangers to ask one another for the time or for directions, enable drivers to cede the road to each other, and enable strangers to collaborate on software projects, on coauthoring an online encyclopedia, or on running simulations of how proteins fold. Fourth, they may either complement or substitute for market and state production systems, depending on the social construction of mixed provisioning. It is hard to measure the weight that social and sharing-based production has in the economy. Our intuitions about capillary systems would suggest that the total volume of boxes or books moved or lifted, instructions given, news relayed, and meals prepared by family, friends, neighbors, and minimally decent strangers would be very high relative to the amount of substitutable activity carried on through market exchanges or state provisioning.
+
+Why do we, despite the ubiquity of social production, generally ignore it as an economic phenomenon, and why might we now reconsider its importance? A threshold requirement for social sharing to be a modality of economic production, as opposed to one purely of social reproduction, is that sharing-based action be effective. Efficacy of individual action depends on the physical capital requirements for action to become materially effective, which, in turn, depend on technology. Effective action may have very low physical capital requirements, so that every individual has, by natural capacity, "the physical capital" necessary for action. Social production or sharing can then be ubiquitous (though in practice, it may not be). Vocal cords to participate in a sing-along or muscles to lift a box are obvious examples. When the capital requirements are nontrivial, but the capital good is widely distributed and available, sharing can similarly be ubiquitous and effective. This is true both when the shared resource or good is the capacity of the capital good itself--as in the case of shareable goods--and when some widely distributed human capacity is made effective through the use of the widely distributed capital goods--as in the case of human creativity, judgment, experience, and labor shared in online peer-production processes--in which participants contribute using the widespread availability of connected computers. When use of larger-scale physical capital goods is a threshold requirement of effective action, we should not expect to see widespread reliance on decentralized sharing as a standard modality of production. Industrial ,{[pg 120]}, mass-manufacture of automobiles, steel, or plastic toys, for example, is not the sort of thing that is likely to be produced on a social-sharing basis, because of the capital constraints.
This is not to say that even for large-scale capital projects, like irrigation systems and dams, social production systems cannot step into the breach. We have those core examples in the common-property regime literature, and we have worker-owned firms as examples of mixed systems. However, those systems tend to replicate the characteristics of firm, state, or market production--using various combinations of quotas, scrip systems, formal policing by "professional" officers, or management within worker-owned firms. By comparison, the "common property" arrangements described among lobster gangs of Maine or fishing groups in Japan, where capital requirements are much lower, tend to be more social-relations-based systems, with less formalized or crisp measurement of contributions to, and calls on, the production system.
+
+To say that sharing is technology dependent is not to deny that it is a ubiquitous human phenomenon. Sharing is so deeply engrained in so many of our cultures that it would be difficult to argue that with the "right" (or perhaps "wrong") technological contingencies, it would simply disappear. My claim, however, is narrower. It is that the relative economic role of sharing changes with technology. There are technological conditions that require more or less capital, in larger or smaller packets, for effective provisioning of goods, services, and resources that people value. As these conditions change, the relative scope for social-sharing practices to play a role in production changes. When goods, services, and resources are widely dispersed, their owners can choose to engage with each other through social sharing instead of through markets or a formal, state-based relationship, because individuals have available to them the resources necessary to engage in such behavior without recourse to capital markets or the taxation power of the state. If technological changes make the resources necessary for effective action rare or expensive, individuals may wish to interact in social relations, but they can now only do so ineffectively, or in different fields of endeavor that do not similarly require high capitalization. Large-packet, expensive physical capital draws the behavior into one or the other of the modalities of production that can collect the necessary financial capital--through markets or taxation. Nothing, however, prevents change from happening in the opposite direction. Goods, services, and resources that, in the industrial stage of the information economy, required large-scale, concentrated capital investment to provision, are now subject to a changing technological environment ,{[pg 121]}, that can make sharing a better way of achieving the same results than can states, markets, or their hybrid, regulated industries.
+
+Because of changes in the technology of the industrial base of the most advanced economies, social sharing and exchange is becoming a common modality of production at their very core--in the information, culture, education, computation, and communications sectors. Free software, distributed computing, ad hoc mesh wireless networks, and other forms of peer production offer clear examples of large-scale, measurably effective sharing practices. The highly distributed capital structure of contemporary communications and computation systems is largely responsible for this increased salience of social sharing as a modality of economic production in that environment. By lowering the capital costs required for effective individual action, these technologies have allowed various provisioning problems to be structured in forms amenable to decentralized production based on social relations, rather than through markets or hierarchies.
+
+My claim is not, of course, that we live in a unique moment of humanistic sharing. It is, rather, that our own moment in history suggests a more general observation. The technological state of a society, in particular the extent to which individual agents can engage in efficacious production activities with material resources under their individual control, affects the opportunities for, and hence the comparative prevalence and salience of, social, market-- both price-based and managerial--and state production modalities. The capital cost of effective economic action in the industrial economy shunted sharing to its economic peripheries--to households in the advanced economies, and to the global economic peripheries that have been the subject of the anthropology of gift or the common-property regime literatures. The emerging restructuring of capital investment in digital networks--in particular, the phenomenon of user-capitalized computation and communications capabilities--are at least partly reversing that effect. Technology does not determine the level of sharing. It does, however, set threshold constraints on the effective domain of sharing as a modality of economic production. Within the domain of the practically feasible, the actual level of sharing practices will be culturally driven and cross-culturally diverse.
+
+Most practices of production--social or market-based--are already embedded in a given technological context. They present no visible "problem" to solve or policy choice to make. We do not need to be focused consciously on improving the conditions under which friends lend a hand to each other to move boxes, make dinner, or take kids to school. We feel no need to ,{[pg 122]}, reconsider the appropriateness of market-based firms as the primary modality for the production of automobiles. However, in moments where a field of action is undergoing a technological transition that changes the opportunities for sharing as a modality of production, understanding that sharing is a modality of production becomes more important, as does understanding how it functions as such. This is so, as we are seeing today, when prior technologies have already set up market- or state-based production systems that have the law and policy-making systems already designed to fit their requirements. While the prior arrangement may have been the most efficient, or even may have been absolutely necessary for the incumbent production system, its extension under new technological conditions may undermine, rather than improve, the capacity of a society to produce and provision the goods, resources, or capacities that are the object of policy analysis. This is, as I discuss in part III, true of wireless communications regulation, or "spectrum management," as it is usually called; of the regulation of information, knowledge, and cultural production, or "intellectual property," as it is usually now called; and it may be true of policies for computation and wired communications networks, as distributed computing and the emerging peer-to-peer architectures suggest.
+
+2~ THE INTERFACE OF SOCIAL PRODUCTION AND MARKET-BASED BUSINESSES
+
+The rise of social production does not entail a decline in market-based production. Social production first and foremost harnesses impulses, time, and resources that, in the industrial information economy, would have been wasted or used purely for consumption. Its immediate effect is therefore likely to increase overall productivity in the sectors where it is effective. But that does not mean that its effect on market-based enterprises is neutral. A newly effective form of social behavior, coupled with a cultural shift in tastes as well as the development of new technological and social solution spaces to problems that were once solved through market-based firms, exercises a significant force on the shape and conditions of market action. Understanding the threats that these developments pose to some incumbents explains much of the political economy of law in this area, which will occupy chapter 11. At the simplest level, social production in general and peer production in particular present new sources of competition to incumbents that produce information goods for which there are now socially produced substitutes. ,{[pg 123]}, Open source software development, for example, first received mainstream media attention in 1998 due to publication of a leaked internal memorandum from Microsoft, which came to be known as The Halloween Memo. In it, a Microsoft strategist identified the open source methodology as the one major potential threat to the company's dominance over the desktop. As we have seen since, definitively in the Web server market and gradually in segments of the operating system market, this prediction proved prescient. Similarly, /{Wikipedia}/ now presents a source of competition to online encyclopedias like Columbia, Grolier, or Encarta, and may well come to be seen as an adequate substitute for Britannica as well. 
Most publicly visible, peer-to-peer file sharing networks have come to compete with the recording industry as an alternative music distribution system, to the point where the long-term existence of that industry is in question. Some scholars like William Fisher, and artists like Jenny Toomey and participants in the Future of Music Coalition, are already looking for alternative ways of securing for artists a living from the music they make.
+
+The competitive threat from social production, however, is merely a surface phenomenon. Businesses often face competition or its potential, and this is a new source, with new economics, which may or may not put some of the incumbents out of business. But there is nothing new about entrants with new business models putting slow incumbents out of business. More basic is the change in opportunity spaces, the relationships of firms to users, and, indeed, the very nature of the boundary of the firm that those businesses that are already adapting to the presence and predicted persistence of social production are exhibiting. Understanding the opportunities social production presents for businesses begins to outline how a stable social production system can coexist and develop a mutually reinforcing relationship with market-based organizations that adapt to and adopt, instead of fight, them.
+
+Consider the example I presented in chapter 2 of IBM's relationship to the free and open source software development community. IBM, as I explained there, has shown more than $2 billion a year in "Linux-related revenues." Prior to IBM's commitment to adapting to what the firm sees as the inevitability of free and open source software, the company either developed in house or bought from external vendors the software it needed as part of its hardware business, on the one hand, and its software services--customization, enterprise solutions, and so forth--on the other hand. In each case, the software development follows a well-recognized supply chain model. Through either an employment contract or a supply contract the ,{[pg 124]}, company secures a legal right to require either an employee or a vendor to deliver a given output at a given time. In reliance on that notion of a supply chain that is fixed or determined by a contract, the company turns around and promises to its clients that it will deliver the integrated product or service that includes the contracted-for component. With free or open source software, that relationship changes. IBM is effectively relying for its inputs on a loosely defined cloud of people who are engaged in productive social relations. It is making the judgment that the probability that a sufficiently good product will emerge out of this cloud is high enough that it can undertake a contractual obligation to its clients, even though no one in the cloud is specifically contractually committed to it to produce the specific inputs the firm needs in the time frame in which it needs them. This apparent shift from a contractually deterministic supply chain to a probabilistic supply chain is less dramatic, however, than it seems.
Even when contracts are signed with employees or suppliers, they merely provide a probability that the employee or the supplier will in fact supply in time and at appropriate quality, given the difficulties of coordination and implementation. A broad literature in organization theory has developed around the effort to map the various strategies of collaboration and control intended to improve the likelihood that the different components of the production process will deliver what they are supposed to: from early efforts at vertical integration, to relational contracting, pragmatic collaboration, or Toyota's fabled flexible specialization. The presence of a formalized enforceable contract, for outputs in which the supplier can claim and transfer a property right, may change the probability of the desired outcome, but not the fact that in entering its own contract with its clients, the company is making a prediction about the required availability of necessary inputs in time. When the company turns instead to the cloud of social production for its inputs, it is making a similar prediction. And, as with more engaged forms of relational contracting, pragmatic collaborations, or other models of iterated relations with coproducers, the company may engage with the social process in order to improve the probability that the required inputs will in fact be produced in time. In the case of companies like IBM or Red Hat, this means, at least partly, paying employees to participate in the open source development projects. But managing this relationship is tricky. The firms must do so without seeking to, or even seeming to seek to, take over the project; for to take over the project in order to steer it more "predictably" toward the firm's needs is to kill the goose that lays the golden eggs. 
For IBM and more recently Nokia, supporting ,{[pg 125]}, the social processes on which they rely has also meant contributing hundreds of patents to the Free Software Foundation, or openly licensing them to the software development community, so as to extend the protective umbrella created by these patents against suits by competitors. As the companies that adopt this strategic reorientation become more integrated into the peer-production process itself, the boundary of the firm becomes more porous. Participation in the discussions and governance of open source development projects creates new ambiguity as to where, in relation to what is "inside" and "outside" of the firm boundary, the social process is. In some cases, a firm may begin to provide utilities or platforms for the users whose outputs it then uses in its own products. The Open Source Development Group (OSDG), for example, provides platforms for Slashdot and SourceForge. In these cases, the notion that there are discrete "suppliers" and "consumers," and that each of these is clearly demarcated from the other and outside of the set of stable relations that form the inside of the firm, becomes somewhat attenuated.
+
+As firms have begun to experience these newly ambiguous relationships with individuals and social groups, they have come to wrestle with questions of leadership and coexistence. Businesses like IBM, or eBay, which uses peer production as a critical component of its business ecology--the peer-reviewed system of creating trustworthiness, without which person-to-person transactions among individual strangers at a distance would be impossible--have to structure their relationship to the peer-production processes that they coexist with in a helpful and non-threatening way. Sometimes, as we saw in the case of IBM's contributions to the social process, this may mean support without attempting to assume "leadership" of the project. Sometimes, as when peer production is integrated more directly into what is otherwise a commercially created and owned platform--as in the case of eBay--the relationship is more like that of a peer-production leader than of a commercial actor. Here, the critical and difficult point for business managers to accept is that bringing the peer-production community into the newly semi-porous boundary of the firm--taking those who used to be customers and turning them into participants in a process of coproduction--changes the relationship of the firm's managers and its users. Linden Labs, which runs Second Life, learned this in the context of the tax revolt described in chapter 3. Users cannot be ordered around like employees. Nor can they be simply advertised to and manipulated, or even passively surveyed, like customers. To do that would be to lose the creative and generative social ,{[pg 126]}, character that makes integration of peer production into a commercial business model so valuable for those businesses that adopt it.
Instead, managers must be able to identify patterns that emerge in the community and inspire trust that they are correctly judging the patterns that are valuable from the perspective of the users, not only the enterprise, so that the users in fact coalesce around and extend these patterns.
+
+The other quite basic change wrought by the emergence of social production, from the perspective of businesses, is a change in taste. Active users require and value new and different things than passive consumers did. The industrial information economy specialized in producing finished goods, like movies or music, to be consumed passively, and well-behaved appliances, like televisions, whose use was fully specified at the factory door. The emerging businesses of the networked information economy are focusing on serving the demand of active users for platforms and tools that are much more loosely designed, late-binding--that is, optimized only at the moment of use and not in advance--variable in their uses, and oriented toward providing users with new, flexible platforms for relationships. Personal computers, camera phones, audio and video editing software, and similar utilities are examples of tools whose value increases for users as they are enabled to explore new ways to be creative and productively engaged with others. In the network, we are beginning to see business models emerge to allow people to come together, like MeetUp, and to share annotations of Web pages they read, like del.icio.us, or photographs they took, like Flickr. Services like Blogger and Technorati similarly provide platforms for the new social and cultural practices of personal journals, or the new modes of expression described in chapters 7 and 8.
+
+The overarching point is that social production is reshaping the market conditions under which businesses operate. To some of the incumbents of the industrial information economy, the pressure from social production is experienced as pure threat. It is the clash between these incumbents and the new practices that was most widely reported in the media in the first five years of the twenty-first century, and that has driven much of policy making, legislation, and litigation in this area. But the much more fundamental effect on the business environment is that social production is changing the relationship of firms to individuals outside of them, and through this changing the strategies that firms internally are exploring. It is creating new sources of inputs, and new tastes and opportunities for outputs. Consumers are changing into users--more active and productive than the consumers of the ,{[pg 127]}, industrial information economy. The change is reshaping the relationships necessary for business success, requiring closer integration of users into the process of production, both in inputs and outputs. It requires different leadership talents and foci. By the time of this writing, in 2005, these new opportunities and adaptations have begun to be seized upon as strategic advantages by some of the most successful companies working around the Internet and information technology, and increasingly now around information and cultural production more generally. Eric von Hippel's work has shown how the model of user innovation has been integrated into the business model of innovative firms even in sectors far removed from either the network or from information production--like designing kite-surfing equipment or mountain bikes. As businesses begin to do this, the platforms and tools for collaboration improve, the opportunities and salience of social production increase, and the political economy begins to shift.
And as these firms and social processes coevolve, the dynamic accommodation they are developing provides us with an image of what the future stable interface between market-based businesses and the newly salient social production is likely to look like. ,{[pg 128]}, ,{[pg 129]},
+
+:C~ Part Two - The Political Economy of Property and Commons
+
+1~p2 Introduction
+
+How a society produces its information environment goes to the very core of freedom. Who gets to say what, to whom? What is the state of the world? What counts as credible information? How will different forms of action affect the way the world can become? These questions go to the foundations of effective human action. They determine what individuals understand to be the range of options open to them, and the range of consequences to their actions. They determine what is understood to be open for debate in a society, and what is considered impossible as a collective goal or a collective path for action. They determine whose views count toward collective action, and whose views are lost and never introduced into the debate of what we should do as political entities or social communities. Freedom depends on the information environment that those individuals and societies occupy. Information underlies the very possibility of individual self-direction. Information and communication constitute the practices that enable a community to form a common range of understandings of what is at stake and what paths are open for the taking. They are constitutive ,{[pg 130]}, components of both formal and informal mechanisms for deciding on collective action. Societies that embed the emerging networked information economy in an institutional ecology that accommodates nonmarket production, both individual and cooperative, will improve the freedom of their constituents along all these dimensions.
+
+The networked information economy makes individuals better able to do things for and by themselves, and makes them less susceptible to manipulation by others than they were in the mass-media culture. In this sense, the emergence of this new set of technical, economic, social, and institutional relations can increase the relative role that each individual is able to play in authoring his or her own life. The networked information economy also promises to provide a much more robust platform for public debate. It enables citizens to participate in public conversation continuously and pervasively, not as passive recipients of "received wisdom" from professional talking heads, but as active participants in conversations carried out at many levels of political and social structure. Individuals can find out more about what goes on in the world, and share it more effectively with others. They can check the claims of others and produce their own, and they can be heard by others, both those who are like-minded and opponents. At a more foundational level of collective understanding, the shift from an industrial to a networked information economy increases the extent to which individuals can become active participants in producing their own cultural environment. It opens the possibility of a more critical and reflective culture.
+
+Unlike the relationship of information production to freedom, the relationship between the organization of information production and distributive justice is not intrinsic. However, the importance of knowledge in contemporary economic production makes a change in the modality of information production important to justice as well. The networked information economy can provide opportunities for global development and for improvements in the justice of distribution of opportunities and capacities everywhere. Economic opportunity and welfare today--of an individual, a social group, or a nation--depend on the state of knowledge and access to opportunities to learn and apply practical knowledge. Transportation networks, global financial markets, and institutional trade arrangements have made material resources and outputs capable of flowing more efficiently from any one corner of the globe to another than they were at any previous period. Economic welfare and growth now depend more on knowledge and social ,{[pg 131]}, organization than on natural resources. Knowledge transfer and social reform, probably more than any other set of changes, can affect the economic opportunities and material development of different parts of the global economic system, within economies both advanced and less developed. The emergence of a substantial nonmarket sector in the networked information economy offers opportunities for providing better access to knowledge and information as input from, and better access for information outputs of, developing and less-developed economies and poorer geographic and social sectors in the advanced economies. Better access to knowledge and the emergence of less capital-dependent forms of productive social organization offer the possibility that the emergence of the networked information economy will open up opportunities for improvement in economic justice, on scales both global and local.
+
+The basic intuition and popular belief that the Internet will bring greater freedom and global equity has been around since the early 1990s. It has been the technophile's basic belief, just as the horrors of cyberporn, cybercrime, or cyberterrorism have been the standard gut-wrenching fears of the technophobe. The technophilic response is reminiscent of claims made in the past for electricity, for radio, or for telegraph, expressing what James Carey described as "the mythos of the electrical sublime." The question this part of the book explores is whether this claim, given the experience of the past decade, can be sustained on careful analysis, or whether it is yet another instance of a long line of technological utopianism. The fact that earlier utopias were overly optimistic does not mean that these previous technologies did not in fact alter the conditions of life--material, social, and intellectual. They did, but they did so differently in different societies, and in ways that diverged from the social utopias attached to them. Different nations absorbed and used these technologies differently, diverging in social and cultural habits, but also in institutional strategies for adoption--some more state-centric, others more market based; some more controlled, others less so. Utopian or at least best-case conceptions of the emerging condition are valuable if they help diagnose the socially and politically significant attributes of the emerging networked information economy correctly and allow us to form a normative conception of their significance. At a minimum, with these in hand, we can begin to design our institutional response to the present technological perturbation in order to improve the conditions of freedom and justice over the next few decades. ,{[pg 132]},
+
+The chapters in this part focus on major liberal commitments or concerns. Chapter 5 addresses the question of individual autonomy. Chapters 6, 7, and 8 address democratic participation: first in the political public sphere and then, more broadly, in the construction of culture. Chapter 9 deals with justice and human development. Chapter 10 considers the effects of the networked information economy on community. ,{[pg 133]},
+
+1~5 Chapter 5 - Individual Freedom: Autonomy, Information, and Law
+
+The emergence of the networked information economy has the potential to increase individual autonomy. First, it increases the range and diversity of things that individuals can do for and by themselves. It does this by lifting, for one important domain of life, some of the central material constraints on what individuals can do that typified the industrial information economy. The majority of materials, tools, and platforms necessary for effective action in the information environment are in the hands of most individuals in advanced economies. Second, the networked information economy provides nonproprietary alternative sources of communications capacity and information, alongside the proprietary platforms of mediated communications. This decreases the extent to which individuals are subject to being acted upon by the owners of the facilities on which they depend for communications. The construction of consumers as passive objects of manipulation that typified television culture has not disappeared overnight, but it is losing its dominance in the information environment. Third, the networked information environment qualitatively increases the range and diversity of information ,{[pg 134]}, available to individuals. It does so by enabling sources commercial and noncommercial, mainstream and fringe, domestic or foreign, to produce information and communicate with anyone. This diversity radically changes the universe of options that individuals can consider as open for them to pursue. It provides them a richer basis to form critical judgments about how they could live their lives, and, through this opportunity for critical reflection, why they should value the life they choose.
+
+2~ FREEDOM TO DO MORE FOR ONESELF, BY ONESELF, AND WITH OTHERS
+
+Rory Cejas was a twenty-six-year-old firefighter/paramedic with the Miami Fire Department in 2003, when he enlisted the help of his brother, wife, and a friend to make a Star Wars-like fan film. Using a simple camcorder and tripod, and widely available film and image generation and editing software on his computer, he made a twenty-minute film he called /{The Jedi Saga}/. The film is not a parody. It is not social criticism. It is a straightforward effort to make a movie in the genre of /{Star Wars}/, using the same type of characters and story lines. In the predigital world, it would have been impossible, as a practical matter, for Cejas to do this. It would have been an implausible part of his life plan to cast his wife as a dark femme fatale, or his brother as a Jedi Knight, so they could battle shoulder-to-shoulder, light sabers drawn, against a platoon of Imperial clone soldiers. And it would have been impossible for him to distribute the film he had made to friends and strangers. The material conditions of cultural production have changed, so that it has now become part of his feasible set of options. He needs no help from government to do so. He needs no media access rules that give him access to fancy film studios. He needs no cable access rules to allow him to distribute his fantasy to anyone who wants to watch it. The new set of feasible options open to him includes not only the option passively to sit in the theatre or in front of the television and watch the images created by George Lucas, but also the option of trying his hand at making this type of film by himself.
+
+/{Jedi Saga}/ will not be a blockbuster. It is not likely to be watched by many people. Those who do watch it are not likely to enjoy it in the same way that they enjoyed any of Lucas's films, but that is not its point. When someone like Cejas makes such a film, he is not displacing what Lucas does. He is changing what he himself does--from sitting in front of a screen that ,{[pg 135]}, is painted by another to painting his own screen. Those who watch it will enjoy it in the same way that friends and family enjoy speaking to each other or singing together, rather than watching talking heads or listening to Talking Heads. Television culture, the epitome of the industrial information economy, structured the role of consumers as highly passive. While media scholars like John Fiske noted the continuing role of viewers in construing and interpreting the messages they receive, the role of the consumer in this model is well defined. The media product is a finished good that they consume, not one that they make. Nowhere is this clearer than in the movie theatre, where the absence of light, the enveloping sound, and the size of the screen are all designed to remove the viewer as agent, leaving only a set of receptors--eyes, ears--through which to receive the finished good that is the movie. There is nothing wrong with the movies as one mode of entertainment. The problem emerges, however, when the movie theatre becomes an apt metaphor for the relationship the majority of people have with most of the information environment they occupy. That increasing passivity of television culture came to be a hallmark of life for most people in the late stages of the industrial information economy. The couch potato, the eyeball bought and sold by Madison Avenue, has no part in making the information environment he or she occupies.
+
+Perhaps no single entertainment product better symbolizes the shift that the networked information economy makes possible from television culture than the massive multiplayer online game. These games are typified by two central characteristics. First, they offer a persistent game environment. That is, any action taken or "object" created anywhere in the game world persists over time, unless and until it is destroyed by some agent in the game; and it exists to the same extent for all players. Second, the games are effectively massive collaboration platforms for thousands, tens of thousands--or in the case of Lineage, the most popular game in South Korea, more than four million--users. These platforms therefore provide individual players with various contexts in which to match their wits and skills with other human players. The computer gaming environment provides a persistent relational database of the actions and social interactions of players. The first games that became mass phenomena, like Ultima Online or Everquest, started with an already richly instantiated context. Designers of these games continue to play a large role in defining the range of actions and relations feasible for players. The basic medieval themes, the role of magic and weapons, and the types and ranges of actions that are possible create much of the context, and ,{[pg 136]}, therefore the types of relationships pursued. Still, these games leave qualitatively greater room for individual effort and personal taste in producing the experience, the relationships, and hence the story line, relative to a television or movie experience. Second Life, a newer game by Linden Labs, offers us a glimpse into the next step in this genre of immersive entertainment. Like other massively multiplayer online games, Second Life is a persistent collaboration platform for its users. 
Unlike other games, however, Second Life offers only tools, with no story line, stock objects, or any cultural or meaning-oriented context whatsoever. Its users have created 99 percent of the objects in the game environment. The medieval village was nothing but blank space when they started. So was the flying vehicle design shop, the futuristic outpost, or the university, where some of the users are offering courses in basic programming skills and in-game design. Linden Labs charges a flat monthly subscription fee. Its employees focus on building tools that enable users to do everything from basic story concept down to the finest details of their own appearance and of objects they use in the game world. The in-game human relationships are those made by the users as they interact with each other in this immersive entertainment experience. The game's relationship to its users is fundamentally different from that of the movie or television studio. Movies and television seek to control the entire experience--rendering the viewer inert, but satisfied. Second Life sees the users as active makers of the entertainment environment that they occupy, and seeks to provide them with the tools they need to be so. The two models assume fundamentally different conceptions of play. Whereas in front of the television, the consumer is a passive receptacle, limited to selecting which finished good he or she will consume from a relatively narrow range of options, in the world of Second Life, the individual is treated as a fundamentally active, creative human being, capable of building his or her own fantasies, alone and in affiliation with others.
+
+Second Life and /{Jedi Saga}/ are merely examples, perhaps trivial ones, within the entertainment domain. They represent a shift in possibilities open both to human beings in the networked information economy and to the firms that sell them the tools for becoming active creators and users of their information environment. They are stark examples because of the centrality of the couch potato as the image of human action in television culture. Their characteristics are representative of the shift in the individual's role that is typical of the networked information economy in general and of peer production in particular. Linus Torvalds, the original creator of the Linux kernel ,{[pg 137]}, development community, was, to use Eric Raymond's characterization, a designer with an itch to scratch. Peer-production projects often are composed of people who want to do something in the world and turn to the network to find a community of peers willing to work together to make that wish a reality. Michael Hart had been working in various contexts for more than thirty years when he--at first gradually, and more recently with increasing speed--harnessed the contributions of hundreds of volunteers to Project Gutenberg in pursuit of his goal to create a globally accessible library of public domain e-texts. Charles Franks was a computer programmer from Las Vegas when he decided he had a more efficient way to proofread those e-texts, and built an interface that allowed volunteers to compare scanned images of original texts with the e-texts available on Project Gutenberg. After working independently for a couple of years, he joined forces with Hart. Franks's facility now clears the volunteer work of more than one thousand proofreaders, who proof between two hundred and three hundred books a month. 
Each of the thousands of volunteers who participate in free software development projects, in /{Wikipedia}/, in the Open Directory Project, or in any of the many other peer-production projects, is living some version, as a major or minor part of their lives, of the possibilities captured by the stories of a Linus Torvalds, a Michael Hart, or /{The Jedi Saga}/. Each has decided to take advantage of some combination of technical, organizational, and social conditions within which we have come to live, and to become an active creator in his or her world, rather than merely to accept what was already there. The belief that it is possible to make something valuable happen in the world, and the practice of actually acting on that belief, represent a qualitative improvement in the condition of individual freedom. They mark the emergence of new practices of self-directed agency as a lived experience, going beyond mere formal permissibility and theoretical possibility.
+
+Our conception of autonomy has not only been forged in the context of the rise of the democratic, civil rights-respecting state over its major competitors as a political system. In parallel, we have occupied the context of the increasing dominance of market-based industrial economy over its competitors. The culture we have developed over the past century is suffused with images that speak of the loss of agency imposed by that industrial economy. No cultural image better captures the way that mass industrial production reduced workers to cogs and consumers to receptacles than the one-dimensional curves typical of welfare economics--those that render human beings as mere production and demand functions. Their cultural, if ,{[pg 138]}, not intellectual, roots are in Frederick Taylor's Theory of Scientific Management: the idea of abstracting and defining all motions and actions of employees in the production process so that all the knowledge was in the system, while the employees were barely more than its replaceable parts. Taylorism, ironically, was a vast improvement over the depredations of the first industrial age, with its sweatshops and child labor. It nonetheless resolved into the kind of mechanical existence depicted in Charlie Chaplin's tragic-comic portrait, /{Modern Times}/. While the grind of industrial Taylorism seems far from the core of the advanced economies, shunted as it is now to poorer economies, the basic sense of alienation and lack of effective agency persists. Scott Adams's /{Dilbert}/ comic strip, devoted to the life of a white-collar employee in a nameless U.S. corporation, thoroughly alienated from the enterprise, crimped by corporate hierarchy, resisting in all sorts of ways--but trapped in a cubicle--powerfully captures this sense for the industrial information economy in much the same way that Chaplin's /{Modern Times}/ did for the industrial economy itself.
+
+In the industrial economy and its information adjunct, most people live most of their lives within hierarchical relations of production, and within relatively tightly scripted possibilities after work, as consumers. It did not necessarily have to be this way. Michael Piore and Charles Sabel's Second Industrial Divide and Roberto Mangabeira Unger's False Necessity were central to the emergence of a "third way" literature that developed in the 1980s and 1990s to explore the possible alternative paths to production processes that did not depend so completely on the displacement of individual agency by hierarchical production systems. The emergence of radically decentralized, nonmarket production provides a new outlet for the attenuation of the constrained and constraining roles of employees and consumers. It is not limited to Northern Italian artisan industries or imagined for emerging economies, but is at the very heart of the most advanced market economies. Peer production and otherwise decentralized nonmarket production can alter the producer/consumer relationship with regard to culture, entertainment, and information. We are seeing the emergence of the user as a new category of relationship to information production and exchange. Users are individuals who are sometimes consumers and sometimes producers. They are substantially more engaged participants, both in defining the terms of their productive activity and in defining what they consume and how they consume it. In these two great domains of life--production and consumption, work and play--the networked information economy promises to enrich individual ,{[pg 139]}, autonomy substantively by creating an environment built less around control and more around facilitating action.
+
+The emergence of radically decentralized nonmarket production in general and of peer production in particular as feasible forms of action opens new classes of behaviors to individuals. Individuals can now justifiably believe that they can in fact do things that they want to do, and build things that they want to build in the digitally networked environment, and that this pursuit of their will need not, perhaps even cannot, be frustrated by insurmountable cost or an alien bureaucracy. Whether their actions are in the domain of political organization (like the organizers of MoveOn.org), or of education and professional attainment (as with the case of Jim Cornish, who decided to create a worldwide center of information on the Vikings from his fifth-grade schoolroom in Gander, Newfoundland), the networked information environment opens new domains for productive life that simply were not there before. In doing so, it has provided us with new ways to imagine our lives as productive human beings. Writing a free operating system or publishing a free encyclopedia may have seemed quixotic a mere few years ago, but these are now far from delusional. Human beings who live in a material and social context that lets them aspire to such things as possible for them to do, in their own lives, by themselves and in loose affiliation with others, are human beings who have a greater realm for their agency. We can live a life more authored by our own will and imagination than by the material and social conditions in which we find ourselves. At least we can do so more effectively than we could until the last decade of the twentieth century.
+
+This new practical individual freedom, made feasible by the digital environment, is at the root of the improvements I describe here for political participation, for justice and human development, for the creation of a more critical culture, and for the emergence of the networked individual as a more fluid member of community. In each of these domains, the improvements in the degree to which these liberal commitments are honored and practiced emerge from new behaviors made possible and effective by the networked information economy. These behaviors emerge now precisely because individuals have a greater degree of freedom to act effectively, unconstrained by a need to ask permission from anyone. It is this freedom that increases the salience of nonmonetizable motivations as drivers of production. It is this freedom to seek out whatever information we wish, to write about it, and to join and leave various projects and associations with others that underlies ,{[pg 140]}, the new efficiencies we see in the networked information economy. These behaviors underlie the cooperative news and commentary production that form the basis of the networked public sphere, and in turn enable us to look at the world as potential participants in discourse, rather than as potential viewers only. They are at the root of making a more transparent and reflective culture. They make possible the strategies I suggest as feasible avenues to assure equitable access to opportunities for economic participation and to improve human development globally.
+
+Treating these new practical opportunities for action as improvements in autonomy is not a theoretically unproblematic proposition. For all its intuitive appeal and centrality, autonomy is a notoriously nebulous concept. In particular, there are deep divisions within the literature as to whether it is appropriate to conceive of autonomy in substantive terms--as Gerald Dworkin, Joseph Raz, and Joel Feinberg most prominently have, and as I have here--or in formal terms. Formal conceptions of autonomy are committed to assuming that all people have the capacity for autonomous choice, and do not go further in attempting to measure the degree of freedom people actually exercise in the world in which they are in fact constrained by circumstances, both natural and human. This commitment is not rooted in some stubborn unwillingness to recognize the slings and arrows of outrageous fortune that actually constrain our choices. Rather, it comes from the sense that only by treating people as having these capacities and abilities can we accord them adequate respect as free, rational beings, and avoid sliding into overbearing paternalism. As Robert Post put it, while autonomy may well be something that needs to be "achieved" as a descriptive matter, the "structures of social authority" will be designed differently depending on whether or not individuals are treated as autonomous. "From the point of view of the designer of the structure, therefore, the presence or absence of autonomy functions as an axiomatic and foundational principle."~{ Robert Post, "Meiklejohn's Mistake: Individual Autonomy and the Reform of Public Discourse," University of Colorado Law Review 64 (1993): 1109, 1130-1132. }~ Autonomy theory that too closely aims to understand the degree of autonomy people actually exercise under different institutional arrangements threatens to form the basis of an overbearing benevolence that would undermine the very possibility of autonomous action.
+
+While the fear of an overbearing bureaucracy benevolently guiding us through life toward becoming more autonomous is justifiable, the formal conception of autonomy pays a high price in its bluntness as a tool to diagnose the autonomy implications of policy. Given how we are: situated, ,{[pg 141]}, context-bound, messy individuals, it would be a high price to pay to lose the ability to understand how law and policy actually affect whatever capacity we do have to be the authors of our own life choices in some meaningful sense. We are individuals who have the capacity to form beliefs and to change them, to form opinions and plans and defend them--but also to listen to arguments and revise our beliefs. We experience some decisions as being more free than others; we mock or lament ourselves when we find ourselves trapped by the machine or the cubicle, and we do so in terms of a sense of helplessness, a negation of freedom, not only, or even primarily, in terms of lack of welfare; and we cherish whatever conditions those are that we experience as "free" precisely for that freedom, not for other reasons. Certainly, the concerns with an overbearing state, whether professing benevolence or not, are real and immediate. No one who lives with the near past of the totalitarianism of the twentieth century or with contemporary authoritarianism and fundamentalism can belittle these. But the great evils that the state can impose through formal law should not cause us to adopt methodological commitments that would limit our ability to see the many ways in which ordinary life in democratic societies can nonetheless be more or less free, more or less conducive to individual self-authorship.
+
+If we take our question to be one concerned with diagnosing the condition of freedom of individuals, we must observe the conditions of life from a first-person, practical perspective--that is, from the perspective of the person whose autonomy we are considering. If we accept that all individuals are always constrained by personal circumstances both physical and social, then the way to think about autonomy of human agents is to inquire into the relative capacity of individuals to be the authors of their lives within the constraints of context. From this perspective, whether the sources of constraint are private actors or public law is irrelevant. What matters is the extent to which a particular configuration of material, social, and institutional conditions allows an individual to be the author of his or her life, and to what extent these conditions allow others to act upon the individual as an object of manipulation. As a means of diagnosing the conditions of individual freedom in a given society and context, we must seek to observe the extent to which people are, in fact, able to plan and pursue a life that can reasonably be described as a product of their own choices. It allows us to compare different conditions, and determine that a certain condition allows individuals to do more for themselves, without asking permission from anyone. In this sense, we can say that the conditions that enabled Cejas ,{[pg 142]}, to make /{Jedi Saga}/ are conditions that made him more autonomous than he would have been without the tools that made that movie possible. It is in this sense that the increased range of actions we can imagine for ourselves in loose affiliation with others--like creating a Project Gutenberg--increases our ability to imagine and pursue life plans that would have been impossible in the recent past.
+
+From the perspective of the implications of autonomy for how people act in the digital environment, and therefore how they are changing the conditions of freedom and justice along the various dimensions explored in these chapters, this kind of freedom to act is central. It is a practical freedom sufficient to sustain the behaviors that underlie the improvements in these other domains. From an internal perspective of the theory of autonomy, however, this basic observation that people can do more by themselves, alone or in loose affiliation with others, is only part of the contribution of the networked information economy to autonomy, and a part that will only be considered an improvement by those who conceive of autonomy as a substantive concept. The implications of the networked information economy for autonomy are, however, broader, in ways that make them attractive across many conceptions of autonomy. To make that point, however, we must focus more specifically on law as the source of constraint, a concern common to both substantive and formal conceptions of autonomy. As a means of analyzing the implications of law for autonomy, the perspective offered here requires that we broaden our analysis beyond laws that directly limit autonomy. We must also look to laws that structure the conditions of action for individuals living within the ambit of their effect. In particular, where we have an opportunity to structure a set of core resources necessary for individuals to perceive the state of the world and the range of possible actions, and to communicate their intentions to others, we must consider whether the way we regulate these resources will create systematic limitations on the capacity of individuals to control their own lives, and increase their susceptibility to manipulation and control by others. Once we recognize that there cannot be a person who is ideally "free," in the sense of being unconstrained or uncaused by the decisions of others, we are left to measure the effects of all sorts of constraints that predictably flow from a particular legal arrangement, in terms of the effect they have on the relative role that individuals play in authoring their own lives. ,{[pg 143]},
+
+2~ AUTONOMY, PROPERTY, AND COMMONS
+
+The first legal framework whose role is altered by the emergence of the networked information economy is the property-like regulatory structure of patents, copyrights, and similar exclusion mechanisms applicable to information, knowledge, and culture. Property is usually thought in liberal theory to enhance, rather than constrain, individual freedom, in two quite distinct ways. First, it provides security of material context--that is, it allows one to know with some certainty that some set of resources, those that belong to her, will be available for her to use to execute her plans over time. This is the core of Kant's theory of property, which relies on a notion of positive liberty, the freedom to do things successfully based on life plans we can lay for ourselves. Second, property and markets provide greater freedom of action for the individual owner as compared both, as Marx diagnosed, to the feudal arrangements that preceded them, and, as he decidedly did not but Hayek did, to the models of state ownership and regulation that competed with them throughout most of the twentieth century.
+
+Markets are indeed institutional spaces that enable a substantial degree of free choice. "Free," however, does not mean "anything goes." If John possesses a car and Jane possesses a gun, a market will develop only if John is prohibited from running Jane over and taking her gun, and also if Jane is prohibited from shooting at John or threatening to shoot him if he does not give her his car. A market that is more or less efficient will develop only if many other things are prohibited to, or required of, one or both sides--like monopolization or disclosure. Markets are, in other words, structured relationships intended to elicit a particular datum--the comparative willingness and ability of agents to pay for goods or resources. The most basic set of constraints that structure behavior in order to enable markets are those we usually call property. Property is a cluster of background rules that determine what resources each of us has when we come into relations with others, and, no less important, what "having" or "lacking" a resource entails in our relations with these others. These rules impose constraints on who can do what in the domain of actions that require access to resources that are the subjects of property law. They aim to crystallize asymmetries of power over resources, which then form the basis for exchanges--I will allow you to do X, which I am asymmetrically empowered to do (for example, watch television using this cable system), and you, in turn, will allow me to do Y, which you are asymmetrically empowered to do (for example, receive payment ,{[pg 144]}, from your bank account). While a necessary precondition for markets, property also means that choice in markets is itself not free of constraints, but is instead constrained in a particular pattern. It makes some people more powerful with regard to some things, and must constrain the freedom of action of others in order to achieve this asymmetry.~{ This conception of property was first introduced and developed systematically by Robert Lee Hale in the 1920s and 1930s, and was more recently integrated with contemporary postmodern critiques of power by Duncan Kennedy, Sexy Dressing Etc.: Essays on the Power and Politics of Cultural Identity (Cambridge, MA: Harvard University Press, 1993). }~
+
+Commons are an alternative form of institutional space, where human agents can act free of the particular constraints required for markets, and where they have some degree of confidence that the resources they need for their plans will be available to them. Both freedom of action and security of resource availability are achieved in very different patterns than they are in property-based markets. As with markets, commons do not mean that anything goes. Managing resources as commons does, however, mean that individuals and groups can use those resources under different types of constraints than those imposed by property law. These constraints may be social, physical, or regulatory. They may make individuals more free or less so, in the sense of permitting a greater or lesser freedom of action to choose among a range of actions that require access to resources governed by them than would property rules in the same resources. Whether having a particular type of resource subject to a commons, rather than a property-based market, enhances freedom of action and security, or harms them, is a context-specific question. It depends on how the commons is structured, and how property rights in the resource would have been structured in the absence of a commons. The public spaces in New York City, like Central Park, Union Square, or any sidewalk, afford more people greater freedom than does a private backyard--certainly to all but its owner. Given the diversity of options that these public spaces make possible as compared to the social norms that neighbors enforce against each other, they probably offer more freedom of action than a backyard offers even to its owner in many loosely urban and suburban communities. Swiss pastures or irrigation districts of the type that Elinor Ostrom described as classic cases of long-standing sustainable commons offer their participants security of holdings at least as stable as any property system, but place substantial traditional constraints on who can use the resources, how they can use them, and how, if at all, they can transfer their rights and do something completely different. These types of commons likely afford their participants less, rather than more, freedom of action than would have been afforded had they owned the same resource in a market-alienable property arrangement, although they retain security in much the same way. Commons, like the air, the sidewalk, the road and highway, the ,{[pg 145]}, ocean, or the public beach, achieve security on a very different model. I can rely on the resources so managed in a probabilistic, rather than deterministic sense. I can plan to meet my friends for a picnic in the park, not because I own the park and can direct that it be used for my picnic, but because I know there will be a park, that it is free for me to use, and that there will be enough space for us to find a corner to sit in. This is also the sort of security that allows me to plan to leave my house at some hour, and plan to be at work at some other hour, relying not on owning the transportation path, but on the availability to me of the roads and highways on terms symmetric to their availability to everyone else. If we look more closely, we will see that property and markets also offer only a probabilistic security of context, whose parameters are different--for example, the degree of certainty we have as to whether the resource we rely on as our property will be stolen or damaged, whether it will be sufficient for what we need, or if we need more, whether it will be available for sale and whether we will be able to afford it.
+
+Like property and markets, then, commons provide both freedom of action and security of context. They do so, however, through the imposition of different constraints than do property and market rules. In particular, what typifies all these commons in contradistinction to property is that no actor is empowered by law to act upon another as an object of his or her will. I can impose conditions on your behavior when you are walking on my garden path, but I have no authority to impose on you when you walk down the sidewalk. Whether one or the other of the two systems, used exclusively, will provide "greater freedom" in some aggregate sense is not a priori determinable. It will depend on the technical characteristics of the resource, the precise contours of the rules of, respectively, the proprietary market and the commons, and the distribution of wealth in society. Given the diversity of resources and contexts, and the impossibility of a purely "anything goes" absence of rules for either system, some mix of the two different institutional frameworks is likely to provide the greatest diversity of freedom to act in a material context. This diversity, in turn, enables the greatest freedom to plan action within material contexts, allowing individuals to trade off the availabilities of, and constraints on, different resources to forge a context sufficiently provisioned to enable them to execute their plans, while being sufficiently unregulated to permit them to do so. Freedom inheres in diversity of constraint, not in the optimality of the balance of freedom and constraint represented by any single institutional arrangement. It is the diversity of constraint that allows individuals to plan to live out different ,{[pg 146]}, portions and aspects of their lives in different institutional contexts, taking advantage of the different degrees of freedom and security they make possible.
+
+In the context of information, knowledge, and culture, because of the nonrivalry of information and its characteristic as input as well as output of the production process, the commons provides substantially greater security of context than it does when material resources, like parks or roadways, are at stake. Moreover, peer production and the networked information economy provide an increasingly robust source of new information inputs. This reduces the risk of lacking resources necessary to create new expressions or find out new things, and renders more robust the freedom to act without being susceptible to constraint from someone who holds asymmetrically greater power over the information resources one needs. As to information, then, we can say with a high degree of confidence that a more expansive commons improves individual autonomy, while enclosure of the public domain undermines it. This is less determinate with communications systems. Because computers and network connections are rival goods, there is less certainty that a commons will deliver the required resources. Under present conditions, a mixture of commons-based and proprietary communications systems is likely to improve autonomy. If, however, technological and social conditions change so that, for example, sharing on the model of peer-to-peer networks, distributed computation, or wireless mesh networks will be able to offer as dependable a set of communications and computation resources as the Web offers information and knowledge resources, the relative attractiveness of commons-oriented communications policies will increase from the perspective of autonomy.
+
+2~ AUTONOMY AND THE INFORMATION ENVIRONMENT
+
+The structure of our information environment is constitutive of our autonomy, not only functionally significant to it. While the capacity to act free of constraints is most immediately and clearly changed by the networked information economy, information plays an even more foundational role in our very capacity to make and pursue life plans that can properly be called our own. A fundamental requirement of self-direction is the capacity to perceive the state of the world, to conceive of available options for action, to connect actions to consequences, to evaluate alternative outcomes, and to ,{[pg 147]}, decide upon and pursue an action accordingly. Without these, no action, even if mechanically self-directed in the sense that my brain consciously directs my body to act, can be understood as autonomous in any normatively interesting sense. All of the components of decision making prior to action, and those actions that are themselves communicative moves or require communication as a precondition to efficacy, are constituted by the information and communications environment we, as agents, occupy. Conditions that cause failures at any of these junctures, that introduce bottlenecks or failures of communication, or that provide opportunities for manipulation by a gatekeeper in the information environment, create threats to the autonomy of individuals in that environment. The shape of the information environment, and the distribution of power within it to control information flows to and from individuals, are, as we have seen, the contingent product of a combination of technology, economic behavior, social patterns, and institutional structure or law.
+
+In 1999, Cisco Systems issued a technical white paper, which described a new router that the company planned to sell to cable broadband providers. In describing advantages that these new "policy routers" offer cable providers, the paper explained that if the provider's users want to subscribe to a service that "pushes" information to their computer: "You could restrict the incoming push broadcasts as well as subscribers' outgoing access to the push site to discourage its use. At the same time, you could promote your own or a partner's services with full speed features to encourage adoption of your services."~{ White Paper, "Controlling Your Network, A Must for Cable Operators" (1999), http://www.cptech.org/ecom/openaccess/cisco1.html. }~
+
+In plain English, the broadband provider could inspect the packets flowing to and from a customer, and decide which packets would go through faster and more reliably, and which would slow down or be lost. Its engineering purpose was to improve quality of service. However, it could readily be used to make it harder for individual users to receive information that they want to subscribe to, and easier for them to receive information from sites preferred by the provider--for example, the provider's own site, or sites of those who pay the cable operator for using this function to help "encourage" users to adopt their services. There are no reports of broadband providers using these capabilities systematically. But occasional events, such as when Canada's second-largest telecommunications company blocked access to the website of the Telecommunications Workers Union in 2005 for all its subscribers and those of smaller Internet service providers that relied on its network, suggest that the concern is far from imaginary. ,{[pg 148]},
+
+It is fairly clear that the new router increases the capacity of cable operators to treat their subscribers as objects, and to manipulate their actions in order to make them act as the provider wills, rather than as they would have had they had perfect information. It is less obvious whether this is a violation of, or a decrease in, the autonomy of the users. At one extreme, imagine the home as a black box with no communications capabilities save one--the cable broadband connection. Whatever comes through that cable is, for all practical purposes, "the state of the world," as far as the inhabitants of that home know. In this extreme situation, the difference between a completely neutral pipe that carries large amounts of information indiscriminately, and a pipe finely controlled by the cable operator is a large one, in terms of the autonomy of the home's inhabitants. If the pipe is indiscriminate, then the choices of the users determine what they know; decisions based on that knowledge can be said to be autonomous, at least to the extent that whether they are or are not autonomous is a function of the state of the agent's knowledge when forming a decision. If the pipe is finely controlled and purposefully manipulated by the cable operator, by contrast, then decisions that individuals make based on the knowledge they acquire through that pipe are substantially a function of the choices of the controller of the pipe, not of the users. At the other extreme, if each agent has dozens of alternative channels of communication to the home, and knows how the information flow of each one is managed, then the introduction of policy routers into one or some of those channels has no real implications for the agent's autonomy. While it may render one or more channels manipulable by their provider, the presence of alternative, indiscriminate channels, on the one hand, and of competition and choice among various manipulated channels, on the other hand, attenuates the extent to which the choices of the provider structure the universe of information within which the individual agent operates. The provider no longer can be said to shape the individual's choices, even if it tries to shape the information environment observable through its channel with the specific intent of manipulating the actions of users who view the world through its pipe. With sufficient choice among pipes, and sufficient knowledge about the differences between pipes, the very choice to use the manipulated pipe can be seen as an autonomous act. The resulting state of knowledge is self-selected by the user. Even if that state of knowledge then is partial and future actions constrained by it, the limited range of options is itself an expression of the user's autonomy, not a hindrance on it. For example, consider the following: Odysseus and his men mix different ,{[pg 149]}, forms of freedom and constraint in the face of the Sirens. Odysseus maintains his capacity to acquire new information by leaving his ears unplugged, but binds himself to stay on the ship by having his men tie him to the mast. His men choose the same course at the same time, but bind themselves to the ship by having Odysseus stop their ears with wax, so that they do not get the new information--the siren songs--that might change their minds and cause them not to stay the course. Both are autonomous when they pass by the Sirens, though both are free only because of their current incapacity. Odysseus's incapacity to jump into the water and swim to the Sirens and his men's incapacity to hear the siren songs are a result of their autonomously chosen past actions.
+
+The world we live in is neither black box nor cornucopia of well-specified communications channels. However, characterizing the range of possible configurations of the communications environment we occupy as lying on a spectrum from one to the other provides us with a framework for describing the degree to which actual conditions of a communications environment are conducive to individual autonomy. More important perhaps, it allows us to characterize policy and law that affects the communications environment as improving or undermining individual autonomy. Law can affect the range of channels of communications available to individuals, as well as the rules under which they are used. How many communications channels and sources of information can an individual receive? How many are available for him or her to communicate with others? Who controls these communications channels? What does control over the communications channels to an agent entail? What can the controller do, and what can it not? All of these questions are the subject of various forms of policy and law. Their implications affect the degree of autonomy possessed by individuals operating with the institutional-technical-economic framework thus created.
+
+There are two primary types of effects that information law can have on personal autonomy. The first type is concerned with the relative capacity of some people systematically to constrain the perceptions or shape the preferences of others. A law that systematically gives some people the power to control the options perceived by, or the preferences of, others, is a law that harms autonomy. Government regulation of the press and its propaganda that attempts to shape its subjects' lives is a special case of this more general concern. This concern is in some measure quantitative, in the sense that a greater degree of control to which one is subject is a greater offense to autonomy. More fundamentally, a law that systematically makes one adult ,{[pg 150]}, susceptible to the control of another offends the autonomy of the former. Law has created the conditions for one person to act upon another as an object. This is the nonpragmatic offense to autonomy committed by abortion regulations upheld in /{Planned Parenthood v. Casey}/ --such as requirements that women who seek abortions listen to lectures designed to dissuade them. These were justified by the plurality there, not by the claim that they did not impinge on a woman's autonomy, but that the state's interest in the potential life of a child trumps the autonomy of the pregnant woman.
+
+The second type of effect that law can have on autonomy is to reduce significantly the range and variety of options open to people in society generally, or to certain classes of people. This is different from the concern with government intervention generally. It is not focused on whether the state prohibits these options, but only on whether the effect of the law is to remove options. It is less important whether this effect is through prohibition or through a set of predictable or observable behavioral adaptations among individuals and organizations that, as a practical matter, remove these options. I do not mean to argue for the imposition of restraints, in the name of autonomy, on any lawmaking that results in a removal of any single option, irrespective of the quantity and variety of options still open. Much of law does that. Rather, the autonomy concern is implicated by laws that systematically and significantly reduce the number, and more important, impoverish the variety, of options open to people in the society for which the law is passed.
+
+"Number and variety" is intended to suggest two dimensions of effect on the options open to an individual. The first is quantitative. For an individual to author her own life, she must have a significant set of options from which to choose; otherwise, it is the choice set--or whoever, if anyone, made it so--and not the individual, that is governing her life. This quantitative dimension, however, does not mean that more choices are always better, from the individual's perspective. It is sufficient that the individual have some adequate threshold level of options in order for him or her to exercise substantive self-authorship, rather than being authored by circumstances. Beyond that threshold level, additional options may affect one's welfare and success as an autonomous agent, but they do not so constrain an individual's choices as to make one not autonomous. Beyond quantitative adequacy, the options available to an individual must represent meaningfully different paths, not merely slight variations on a theme. Qualitatively, autonomy requires the availability of options in whose adoption or rejection the individual ,{[pg 151]}, can practice critical reflection and life choices. In order to sustain the autonomy of a person born and raised in a culture with a set of socially embedded conventions about what a good life is, one would want a choice set that included at least some unconventional, non-mainstream, if you will, critical options. If all the options one has--even if, in a purely quantitative sense, they are "adequate"--are conventional or mainstream, then one loses an important dimension of self-creation. The point is not that to be truly autonomous one necessarily must be unconventional. Rather, if self-governance for an individual consists in critical reflection and re-creation by making choices over the course of his life, then some of the options open must be different from what he would choose simply by drifting through life, adopting a life plan for no reason other than that it is accepted by most others. A person who chooses a conventional life in the presence of the option to live otherwise makes that conventional life his or her own in a way that a person who lives a conventional life without knowing about alternatives does not.
+
+As long as our autonomy analysis of information law is sensitive to these two effects on information flow to, from, and among individuals and organizations in the regulated society, it need not conflict with the concerns of those who adopt the formal conception of autonomy. It calls for no therapeutic agenda to educate adults in a wide range of options. It calls for no one to sit in front of educational programs. It merely focuses on two core effects that law can have through the way it structures the relationships among people with regard to the information environment they occupy. If a law--passed for any reason that may or may not be related to autonomy concerns--creates systematic shifts of power among groups in society, so that some have a greater ability to shape the perceptions of others with regard to available options, consequences of action, or the value of preferences, then that law is suspect from an autonomy perspective. It makes the choices of some people less their own and more subject to manipulation by those to whom the law gives the power to control perceptions. Furthermore, a law that systematically and severely limits the range of options known to individuals is one that imposes a normative price, in terms of autonomy, for whatever value it is intended to deliver. As long as the focus of autonomy as an institutional design desideratum is on securing the best possible information flow to the individual, the designer of the legal structure need not assume that individuals are not autonomous, or have failures of autonomy, in order to serve autonomy. All the designer need assume is that individuals ,{[pg 152]}, will not act in order to optimize the autonomy of their neighbors. Law then responds by avoiding institutional designs that facilitate the capacity of some groups of individuals to act on others in ways that are systematically at the expense of the ability of those others to control their own lives, and by implementing policies that predictably diversify the set of options that all individuals are able to see as open to them.
+
+Throughout most of the 1990s and currently, communications and information policy around the globe was guided by a wish to "let the private sector lead," interpreted in large measure to mean that various property and property-like regulatory frameworks should be strengthened, while various regulatory constraints on property-like rights should be eased. The drive toward proprietary, market-based provisioning of communications and information came from disillusionment with regulatory systems and state-owned communications networks. It saw the privatization of national postal, telephone, and telegraph authorities (PTTs) around the world. Even a country with a long tradition of state-centric communications policy, like France, privatized much of its telecommunications systems. In the United States, this model translated into efforts to shift telecommunications from the regulated monopoly model it followed throughout most of the twentieth century to a competitive market, and to shift Internet development from being primarily a government-funded exercise, as it had been from the late 1960s to the mid-1990s, to being purely private property, market based. This model was declared in the Clinton administration's 1993 National Information Infrastructure: Agenda for Action, which pushed for privatization of Internet deployment and development. It was the basis of that administration's 1995 White Paper on Intellectual Property, which mapped the most aggressive agenda ever put forward by any American administration in favor of perfect enclosure of the public domain; and it was in those years when the Federal Communications Commission (FCC) first implemented spectrum auctions aimed at more thorough privatization of wireless communications in the United States. 
The general push for stronger intellectual property rights and more market-centric telecommunications systems also became a central tenet of international trade regimes, pushing similar policies in smaller and developing economies.
+
+The push toward private provisioning and deregulation has led to the emergence of a near-monopolistic market structure for wired physical broadband services. By the end of 2003, more than 96 percent of homes and small offices in the United States that had any kind of "high-speed" ,{[pg 153]}, Internet services received their service from either their incumbent cable operator or their incumbent local telephone company. If one focuses on the subset of these homes and offices that get service that provides more substantial room for autonomous communicative action--that is, those that have upstream service at high speed, enabling them to publish and participate in online production efforts and not simply to receive information at high speeds--the picture is even more dismal. Less than 2 percent of homes and small offices receive their broadband connectivity from someone other than their cable carrier or incumbent telephone carrier. More than 83 percent of these users get their access from their cable operator. Moreover, the growth rate in adoption of cable broadband and local telephone digital subscriber line (DSL) has been high and positive, whereas the growth rate of the few competing platforms, like satellite broadband, has been stagnant or shrinking. The proprietary wired environment is gravitating toward a high-speed connectivity platform that will be either a lopsided duopoly, or eventually resolve into a monopoly platform.~{ Data are all based on FCC Report on High Speed Services, Appendix to Fourth 706 Report NOI (Washington, DC: Federal Communications Commission, December 2003). }~ These owners are capable, both technically and legally, of installing the kind of policy routers with which I opened the discussion of autonomy and information law--routers that would allow them to speed up some packets and slow down or reject others in ways intended to shape the universe of information available to users of their networks.
+
+The alternative of building some portions of our telecommunications and information production and exchange systems as commons was not understood in the mid-1990s, when the policy that resulted in this market structure for communications was developed. As we saw in chapter 3, however, wireless communications technology has progressed to the point where it is now possible for users to own equipment that cooperates in mesh networks to form a "last-mile" infrastructure that no one other than the users own. Radio networks can now be designed so that their capital structure more closely approximates the Internet and personal computer markets, bringing with it a greater scope for commons-based peer production of telecommunications infrastructure. Throughout most of the twentieth century, wireless communications combined high-cost capital goods (radio transmitters and antennae towers) with cheaper consumer goods (radio receivers), using regulated proprietary infrastructure, to deliver a finished good of wireless communications on an industrial model. Now WiFi is marking the possibility of an inversion of the capital structure of wireless communication. We see end-user equipment manufacturers like Intel, Cisco, and others producing ,{[pg 154]}, and selling radio "transceivers" that are shareable goods. By using ad hoc mesh networking techniques, some early versions of which are already being deployed, these transceivers allow their individual owners to cooperate and coprovision their own wireless communications network, without depending on any cable carrier or other wired provider as a carrier of last resort. Almost the entire debate around spectrum policy and the relative merits of markets and commons in wireless policy is conducted today in terms of efficiency and innovation. A common question these days is which of the two approaches will lead to greater growth of wireless communications capacity and will more efficiently allocate the capacity we already have. 
I have contributed my fair share of this form of analysis, but the question that concerns us here is different. We must ask what, if any, are the implications of the emergence of a feasible, sustainable model of a commons-based physical infrastructure for the first and last mile of the communications environment, in terms of individual autonomy?
+
+The choice between proprietary and commons-based wireless data networks takes on new significance in light of the market structure of the wired network, and the power it gives owners of broadband networks to control the information flow into the vast majority of homes. Commons-based wireless systems become the primary legal form of communications capacity that does not systematically subject its users to manipulation by an infrastructure owner.
+
+Imagine a world with four agents--A, B, C, and D--connected to each other by a communications network. Each component, or route, of the network could be owned or unowned. If all components are unowned, that is, are organized as a commons, each agent has an equal privilege to use any component of the network to communicate with any other agent. If all components are owned, the owner of any network component can deny to any other agent use of that network component to communicate with anyone else. This translates in the real world into whether or not there is a "spectrum owner" who "owns" the link between any two users, or whether the link is simply a consequence of the fact that two users are communicating with each other in a way that no one has a right to prevent them from doing.
+
+In this simple model, if the network is unowned, then for any communication all that is required is a willing sender and a willing recipient. No third agent gets a say as to whether any other pair will communicate with each other. Each agent determines independently of the others whether to ,{[pg 155]}, participate in a communicative exchange, and communication occurs whenever all its participants, and only they, agree to communicate with each other. For example, A can exchange information with B, as long as B consents. The only person who has a right to prevent A from receiving information from, or sending information to, B, is B, in the exercise of B's own autonomous choice whether to change her information environment. Under these conditions, neither A nor B is subject to control of her information environment by others, except where such control results from denying her the capacity to control the information environment of another. If all network components are owned, on the other hand, then for any communication there must be a willing sender, a willing recipient, and a willing infrastructure owner. In a pure property regime, infrastructure owners have a say over whether, and the conditions under which, others in their society will communicate with each other. It is precisely the power to prevent others from communicating that makes infrastructure ownership a valuable enterprise: One can charge for granting one's permission to communicate. For example, imagine that D owns all lines connecting A to B directly or through D, and C owns all lines connecting A or B to C. As in the previous scenario, A wishes to exchange information with B. Now, in addition to B, A must obtain either C's or D's consent. A now functions under two distinct types of constraint. The first, as before, is a constraint imposed by B's autonomy: A cannot change B's information environment (by exchanging information with her) without B's consent. 
The second constraint is that A must persuade an owner of whatever carriage medium connects A to B to permit A and B to communicate. The communication is not sent to or from C or D. It does not change C's or D's information environment, and that is not A's intention. C and D's ability to consent or withhold consent is not based on the autonomy principle. It is based, instead, on an instrumental calculus: namely, that creating such property rights in infrastructure will lead to the right incentives for the deployment of infrastructure necessary for A and B to communicate in the first place.
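The consent rule in this model can be sketched in a few lines of code: on an unowned (commons) route a communication needs only a willing sender and a willing recipient, while on an owned route the infrastructure owner's consent is required as well. The function name and the use of agents A, B, and D below are purely illustrative, following the example in the text.

```python
# Minimal sketch of the consent model: who must agree before a
# communication can take place, under commons vs. owned infrastructure.

def can_communicate(sender, recipient, route_owner=None, consents=None):
    """Return True if everyone whose consent the model requires agrees.

    route_owner is None for a commons (unowned) route; when the route is
    owned, the owner is a third party whose consent is also required.
    """
    consents = consents or set()
    required = {sender, recipient}
    if route_owner is not None:
        required.add(route_owner)
    return required <= consents  # all required parties have consented

# Commons: only A and B, the actual parties, need to agree.
assert can_communicate("A", "B", route_owner=None, consents={"A", "B"})

# Owned route: even with A and B willing, owner D can block the exchange...
assert not can_communicate("A", "B", route_owner="D", consents={"A", "B"})
# ...and may charge for, or attach conditions to, granting permission.
assert can_communicate("A", "B", route_owner="D", consents={"A", "B", "D"})
```

The third assertion is where the text locates the value of infrastructure ownership: D's permission is a gate that can be priced or conditioned, which is exactly the "influence exaction" discussed below.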
+
+Now imagine that D owns the entire infrastructure. If A wants to get information from B or to communicate to C in order to persuade C to act in a way that is beneficial to A, A needs D's permission. D may grant or withhold permission, and may do so either for a fee or upon the imposition of conditions on the communication. Most significantly, D can choose to prevent anyone from communicating with anyone else, or to expose each participant to the communications of only some, but not all, members of ,{[pg 156]}, society. This characteristic of her ownership gives D the power to shape A's information environment by selectively exposing A to information in the form of communications from others. Most commonly, we might see this where D decides that B will pay more if all infrastructure is devoted to permitting B to communicate her information to A and C, rather than any of it used to convey A's statements to C. D might then refuse to carry A's message to C and permit only B to communicate to A and C. The point is that from A's perspective, A is dependent upon D's decisions as to what information can be carried on the infrastructure, among whom, and in what directions. To the extent of that dependence, A's autonomy is compromised. We might call the requirement that D can place on A as a precondition to using the infrastructure an "influence exaction."
+
+The magnitude of the negative effect on autonomy, or of the influence exaction, depends primarily on (a) the degree to which it is hard or easy to get around D's facility, and (b) the degree of transparency of the exaction. Compare, for example, Cisco's policy router for cable broadband, which allows the cable operator to speed up and slow down packets based on its preferences, to Amazon's brief experiment in 1998-1999 with accepting undisclosed payments from publishers in exchange for recommending their books. If a cable operator programs its routers to slow down packets of competitors, or of information providers that do not pay, this practice places a significant exaction on users. First, the exaction is entirely nontransparent. There are many reasons that different sites load at different speeds, or even fail to load altogether. Users, the vast majority of whom are unaware that the provider could, if it chose, regulate the flow of information to them, will assume that it is the target site that is failing, not that their own service provider is manipulating what they can see. Second, there is no genuine work-around. Cable broadband covers roughly two-thirds of the home market, in many places without alternative; and where there is an alternative, there is only one--the incumbent telephone company. Without one of these noncompetitive infrastructure owners, the home user has no broadband access to the Internet. In Amazon's case, the consumer outrage when the practice was revealed focused on the lack of transparency. Users had little objection to clearly demarcated advertisement. The resistance was to the nontransparent manipulation of the recommendation system aimed at causing the consumers to act in ways consistent with Amazon's goals, rather than their own. In that case, however, there were alternatives. 
There are many different places from which to find book reviews and recommendations, and ,{[pg 157]}, at the time, barnesandnoble.com was already available as an online bookseller--and had not significantly adopted similar practices. The exaction was therefore less significant. Moreover, once the practice was revealed, Amazon publicly renounced it and began to place advertisements in a clearly recognizable separate category. The lesson was not lost on others. When Google began at roughly the same time as a search engine, it broke with the then-common practice of selling search-result location. When the company later introduced advertised links, it designed its interface to separate out clearly the advertisements from the algorithm-based results, and to give the latter more prominent placement than the former. This does not mean that any search engine that accepts payments for linking is necessarily bad. A search engine like Overture, which explicitly and publicly returns results ranked according to which, among the sites retrieved, paid Overture the most, has its own value for consumers looking for commercial sites. A transparent, nonmonopolistic option of this sort increases, rather than decreases, the freedom of users to find the information they want and act on it. The problem would be with search engines that mix the two strategies and hide the mix, or with a monopolistic search engine.
+
+Because of the importance of being able to work around the owned infrastructure, the degree of competitiveness of any market in such infrastructure is important. Before considering the limits of even competitive markets by comparison to commons, however, it is important to recognize that a concern with autonomy provides a distinct justification for the policy concern with media concentration. To understand the effects of concentration, we can think of freedom from constraint as a dimension of welfare. Just as we have no reason to think that in a concentrated market, total welfare, let alone consumer welfare, will be optimal, we also have no reason to think that a component of welfare--freedom from constraint as a condition to access one's communicative environment--will be optimal. Moreover, when we use a "welfare" calculus as a metaphor for the degree of autonomy users have in the system, we must optimize not total welfare, as we do in economic analysis, but only what in the metaphorical calculus would count as "consumer surplus." In the domain of influence and autonomy, only "consumer surplus" counts as autonomy enhancing. "Producer surplus," the degree of successful imposition of influence on others as a condition of service, translates in an autonomy calculus into control exerted by some people (providers) over others (consumers). It reflects the successful negation of autonomy. The monopoly case therefore presents a new normative ,{[pg 158]}, dimension of the well-known critiques of media concentration. Why, however, is this not solely an analysis of media concentration? Why does a competitive market in infrastructure not solve the autonomy deficit of property?
+
+If we make standard assumptions of perfectly competitive markets and apply them to our A-B-D example, one would think that the analysis must change. D no longer has monopoly power. We would presume that the owners of infrastructure would be driven by competition to allocate infrastructure to uses that users value most highly. If one owner "charges" a high price in terms of conditions imposed on users, say to forgo receiving certain kinds of speech uncongenial to the owner, then the users will go to a competitor who does not impose that condition. This standard market response is far from morally irrelevant if one is concerned with autonomy. If, in fact, every individual can choose precisely the package of influence exactions and the cash-to-influence trade-off under which he or she is willing to communicate, then the autonomy deficit that I suggest is created by property rights in communications infrastructure is minimal. If all possible degrees of freedom from the influence of others are available to autonomous individuals, then respecting their choices, including their decisions to subject themselves to the influence of others in exchange for releasing some funds so they are available for other pursuits, respects their autonomy.
+
+Actual competition, however, will not eliminate the autonomy deficit of privately owned communications infrastructure, for familiar reasons. The most familiar constraint on the "market will solve it" hunch is imposed by transaction costs--in particular, information-gathering and negotiation costs. Influence exactions are less easily homogenized than prices expressed in currency. They will therefore be more expensive to eliminate through transactions. Some people value certain kinds of information lobbed at them positively; others negatively. Some people are more immune to suggestion, others less. The content and context of an exaction will have a large effect on its efficacy as a device for affecting the choices of the person subject to its influence, and these could change from communication to communication for the same person, let alone for different individuals. Both users and providers have imperfect information about the users' susceptibility to manipulated information flows; they have imperfect information about the value that each user would place on being free of particular exactions. Obtaining the information necessary to provide a good fit for each consumer's preferences regarding the right influence-to-cash ratio for a given service ,{[pg 159]}, would be prohibitively expensive. Even if the information were obtained, negotiating the precise cash-to-influence trade-off would be costly. Negotiation also may fail because of strategic behavior. The consumer's ideal outcome is to labor under an exaction that is ineffective. If the consumer can reduce the price by submitting to constraints on communication that would affect an average consumer, but will not change her agenda or subvert her capacity to author her life, she has increased her welfare without compromising her autonomy. The vendor's ideal outcome, however, is that the influence exaction be effective--that it succeed in changing the recipient's preferences or her agenda to fit those of the vendor. 
The parties, therefore, will hide their true beliefs about whether a particular condition to using proprietary infrastructure is of a type that is likely to be effective at influencing the particular recipient. Under anything less than a hypothetical and practically unattainable perfect market in communications infrastructure services, users of a proprietary infrastructure will face a less-than-perfect menu of influence exactions that they must accept before they can communicate using owned infrastructure.
+
+Adopting a regulatory framework under which all physical means of communication are based on private property rights in the infrastructure will therefore create a cost for users, in terms of autonomy. This cost is the autonomy deficit of exclusive reliance on proprietary models. If ownership of infrastructure is concentrated, or if owners can benefit from exerting political, personal, cultural, or social influence over others who seek access to their infrastructure, they will impose conditions on use of the infrastructure that will satisfy their will to exert influence. If agents other than owners (advertisers, tobacco companies, the U.S. drug czar) value the ability to influence users of the infrastructure, then the influence-exaction component of the price of using the infrastructure will be sold to serve the interests of these third parties. To the extent that these influence exactions are effective, a pure private-property regime for infrastructure allows owners to constrain the autonomy of users. The owners can do this by controlling and manipulating the users' information environment to shape how they perceive their life choices in ways that make them more likely to act in a manner that the owners prefer.
+
+The traditional progressive or social-democratic response to failures of property-based markets has been administrative regulation. In the area of communications, these responses have taken the form of access regulations--ranging from common carriage to more limited right-of-reply, fairness ,{[pg 160]}, doctrine-type regulations. Perfect access regulation--in particular, common-carrier obligations--like a perfectly competitive market, could in principle alleviate the autonomy deficit of property. Like markets, however, actual regulation that limits the powers that go with property in infrastructure suffers from a number of limitations. First, the institutional details of the common-carriage regime can skew incentives for what types of communications will be available, and with what degree of freedom. If we learned one thing from the history of American communications policy in the twentieth century, it is that regulated entities are adept at shaping their services, pricing, and business models to take advantage of every weakness in the common-carriage regulatory system. They are even more adept at influencing the regulatory process to introduce lucrative weaknesses into the regulatory system. At present, cable broadband has succeeded in achieving a status almost entirely exempt from access requirements that might mitigate its power to control how the platform is used, and broadband over legacy telephone systems is increasingly winning a parallel status of unregulated semimonopoly. Second, the organization that owns the infrastructure retains the same internal incentives to control content as it would in the absence of common carriage and will do so to the extent that it can sneak by any imperfections in either the carriage regulations or their enforcement. 
Third, as long as the network is built to run through a central organizational clearinghouse, that center remains a potential point at which regulators can reassert control or delegate to owners the power to prevent unwanted speech by purposefully limiting the scope of the common-carriage requirements.
+
+As a practical matter, then, if all wireless systems are based on property, just like the wired systems are, then wireless will offer some benefits through the introduction of some, albeit imperfect, competition. However, it will not offer the autonomy-enhancing effects that a genuine diversity of constraint can offer. If, on the other hand, policies currently being experimented with in the United States do result in the emergence of a robust, sustainable wireless communications infrastructure, owned and shared by its users and freely available to all under symmetric technical constraints, it will offer a genuinely alternative communications platform. It may be as technically good as the wired platforms for all users and uses, or it may not. Nevertheless, because of its radically distributed capitalization, and its reliance on commons rendered sustainable by equipment-embedded technical protocols, rather than on markets that depend on institutionally created asymmetric power over communications, a commons-based wireless system will offer an ,{[pg 161]}, infrastructure that operates under genuinely different institutional constraints. Such a system can become an infrastructure of first and last resort for uses that would not fit the constraints of the proprietary market, or for users who find the price-to-influence exaction bundles offered in the market too threatening to their autonomy.
+
+The emerging viability of commons-based strategies for the provisioning of communications, storage, and computation capacity enables us to take a practical, real-world look at the autonomy deficit of a purely property-based communications system. As we compare property to commons, we see that property, by design, introduces a series of legal powers that asymmetrically enable owners of infrastructure to exert influence over users of their systems. This asymmetry is necessary for the functioning of markets. Predictably and systematically, however, it allows one group of actors--owners--to act upon another group of actors--consumers--as objects of manipulation. No single idiom in contemporary culture captures this characteristic better than the term "the market in eyeballs," used to describe the market in advertising slots. Commons, on the other hand, do not rely on asymmetric constraints. They eliminate points of asymmetric control over the resources necessary for effective communication, thereby eliminating the legal bases of the objectification of others. These are not spaces of perfect freedom from all constraints. However, the constraints they impose are substantively different from those generated by either the property system or by an administrative regulatory system. Their introduction alongside proprietary networks therefore diversifies the constraints under which individuals operate. By offering alternative transactional frameworks for alternative information flows, these networks substantially and qualitatively increase the freedom of individuals to perceive the world through their own eyes, and to form their own perceptions of what options are open to them and how they might evaluate alternative courses of action.
+
+2~ AUTONOMY, MASS MEDIA, AND NONMARKET INFORMATION PRODUCERS
+
+The autonomy deficit of private communications and information systems is a result of the formal structure of property as an institutional device and the role of communications and information systems as basic requirements in the ability of individuals to formulate purposes and plan actions to fit their lives. The gains flow directly from the institutional characteristics of ,{[pg 162]}, commons. The emergence of the networked information economy makes one other important contribution to autonomy. It qualitatively diversifies the information available to individuals. Information, knowledge, and culture are now produced by sources that respond to a myriad of motivations, rather than primarily the motivation to sell into mass markets. Production is organized in any one of a myriad of productive organizational forms, rather than solely the for-profit business firm. The supplementation of the profit motive and the business organization by other motivations and organizational forms--ranging from individual play to large-scale peer-production projects--provides not only a discontinuously dramatic increase in the number of available information sources but, more significantly, an increase in available information sources that are qualitatively different from others.
+
+Imagine three storytelling societies: the Reds, the Blues, and the Greens. Each society follows a set of customs as to how they live and how they tell stories. Among the Reds and the Blues, everyone is busy all day, and no one tells stories except in the evening. In the evening, in both of these societies, everyone gathers in a big tent, and there is one designated storyteller who sits in front of the audience and tells stories. It is not that no one is allowed to tell stories elsewhere. However, in these societies, given the time constraints people face, if anyone were to sit down in the shade in the middle of the day and start to tell a story, no one else would stop to listen. Among the Reds, the storyteller is a hereditary position, and he or she alone decides which stories to tell. Among the Blues, the storyteller is elected every night by simple majority vote. Every member of the community is eligible to offer him- or herself as that night's storyteller, and every member is eligible to vote. Among the Greens, people tell stories all day, and everywhere. Everyone tells stories. People stop and listen if they wish, sometimes in small groups of two or three, sometimes in very large groups. Stories in each of these societies play a very important role in understanding and evaluating the world. They are the way people describe the world as they know it. They serve as testing grounds to imagine how the world might be, and as a way to work out what is good and desirable and what is bad and undesirable. The societies are isolated from each other and from any other source of information.
+
+Now consider Ron, Bob, and Gertrude, individual members of the Reds, Blues, and Greens, respectively. Ron's perception of the options open to him and his evaluation of these options are largely controlled by the hereditary storyteller. He can try to contact the storyteller to persuade him to tell ,{[pg 163]}, different stories, but the storyteller is the figure who determines what stories are told. To the extent that these stories describe the universe of options Ron knows about, the storyteller defines the options Ron has. The storyteller's perception of the range of options largely will determine the size and diversity of the range of options open to Ron. This not only limits the range of known options significantly, but it also prevents Ron from choosing to become a storyteller himself. Ron is subjected to the storyteller's control to the extent that, by selecting which stories to tell and how to tell them, the storyteller can shape Ron's aspirations and actions. In other words, both the freedom to be an active producer and the freedom from the control of another are constrained. Bob's autonomy is constrained not by the storyteller, but by the majority of voters among the Blues. These voters select the storyteller, and the way they choose will affect Bob's access to stories profoundly. If the majority selects only a small group of entertaining, popular, pleasing, or powerful (in some other dimension, like wealth or political power) storytellers, then Bob's perception of the range of options will be only slightly wider than Ron's, if at all. The locus of power to control Bob's sense of what he can and cannot do has shifted. It is not the hereditary storyteller, but rather the majority. Bob can participate in deciding which stories can be told. He can offer himself as a storyteller every night. He cannot, however, decide to become a storyteller independently of the choices of a majority of Blues, nor can he decide for himself what stories he will hear. 
He is significantly constrained by the preferences of a simple majority. Gertrude is in a very different position. First, she can decide to tell a story whenever she wants to, subject only to whether there is any other Green who wants to listen. She is free to become an active producer except as constrained by the autonomy of other individual Greens. Second, she can select from the stories that any other Green wishes to tell, because she and all those surrounding her can sit in the shade and tell a story. No one person, and no majority, determines for her whether she can or cannot tell a story. No one can unilaterally control whose stories Gertrude can listen to. And no one can determine for her the range and diversity of stories that will be available to her from any other member of the Greens who wishes to tell a story.
+
+The difference between the Reds, on the one hand, and the Blues or Greens, on the other hand, is formal. Among the Reds, only the storyteller may tell the story as a matter of formal right, and listeners only have a choice of whether to listen to this story or to no story at all. Among the ,{[pg 164]}, Blues and the Greens anyone may tell a story as a matter of formal right, and listeners, as a matter of formal right, may choose from whom they will hear. The difference between the Reds and the Blues, on the one hand, and the Greens, on the other hand, is economic. In the former, opportunities for storytelling are scarce. The social cost is higher, in terms of stories unavailable for hearing, or of choosing one storyteller over another. The difference between the Blues and the Greens, then, is not formal, but practical. The high cost of communication created by the Blues' custom of listening to stories only in the evening, in a big tent, together with everyone else, makes it practically necessary to select "a storyteller" who occupies an evening. Since the stories play a substantive role in individuals' perceptions of how they might live their lives, that practical difference alters the capacity of individual Blues and Greens to perceive a wide and diverse set of options, as well as to exercise control over their perceptions and evaluations of options open for living their lives and to exercise the freedom themselves to be storytellers. The range of stories Bob is likely to listen to, and the degree to which he can choose unilaterally whether he will tell or listen, and to which story, are closer, as a practical matter, to those of Ron than to those of Gertrude. Gertrude has many more stories and storytelling settings to choose from, and many more instances where she can offer her own stories to others in her society. She, and everyone else in her society, can be exposed to a wider variety of conceptions of how life can and ought to be lived. 
This wider diversity of perceptions gives her greater choice and increases her ability to compose her own life story out of the more varied materials at her disposal. She can be more self-authored than either Ron or Bob. This diversity replicates, in large measure, the range of perceptions of how one might live a life that can be found among all Greens, precisely because the storytelling customs make every Green a potential storyteller, a potential source of information and inspiration about how one might live one's life.
+
+All this could sound like a morality tale about how wonderfully the market maximizes autonomy. The Greens easily could sound like Greenbacks, rather than like environmentalists staking out public parks as information commons. However, this is not the case in the industrial information economy, where media markets have high entry barriers and large economies of scale. It is costly to start up a television station, not to speak of a network, a newspaper, a cable company, or a movie distribution system. It is costly to produce the kind of content delivered over these systems. Once production costs or the costs of laying a network are incurred, the additional marginal ,{[pg 165]}, cost of making information available to many users, or of adding users to the network, is much smaller than the initial cost. This is what gives information and cultural products and communications facilities supply-side economies of scale and underlies the industrial model of producing them. The result is that the industrial information economy is better stylized by the Reds and Blues than by the Greens. While there is no formal limitation on anyone producing and disseminating information products, the economic realities limit the opportunities for storytelling in the mass-mediated environment and make storytelling opportunities a scarce good. It is very costly to tell stories in the mass-mediated environment. Therefore, most storytellers are commercial entities that seek to sell their stories to the audience. Given the discussion earlier in this chapter, it is fairly straightforward to see how the Greens represent greater freedom to choose to become an active producer of one's own information environment. It is similarly clear that they make it exceedingly difficult for any single actor to control the information flow to any other actor. 
We can now focus on how the story provides a way of understanding the justification and contours of the third focus of autonomy-respecting policy: the requirement that government not limit the quantity and diversity of information available.
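The cost structure just described can be made concrete with a few lines of arithmetic. The sketch below is illustrative only; the dollar figures are hypothetical, not drawn from the text:

```python
# Supply-side economies of scale in information goods: a large
# first-copy cost is spread over the audience, while the marginal
# cost of reaching one more user is near zero.

def average_cost(first_copy_cost: float, marginal_cost: float, users: int) -> float:
    """Average cost per user of an information good."""
    return first_copy_cost / users + marginal_cost

# A hypothetical production: $10M first copy, one cent per added viewer.
for audience in (1_000, 100_000, 10_000_000):
    print(audience, round(average_cost(10_000_000, 0.01, audience), 2))
```

Average cost falls steeply with audience size, which is why producers who have already sunk the first-copy cost are driven toward the largest possible audience.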
+
+The fact that our mass-mediated environment is mostly commercial makes it more like the Blues than the Reds. These outlets serve the tastes of the majority--expressed in some combination of cash payment and attention to advertising. I do not offer here a full analysis--covered so well by Baker in /{Media, Markets, and Democracy}/--as to why mass-media markets do not reflect the preferences of their audiences very well. Presented here is a tweak of an older set of analyses of whether monopoly or competition is better in mass-media markets to illustrate the relationship between markets, channels, and diversity of content. In chapter 6, I describe in greater detail the Steiner-Beebe model of diversity and number of channels. For our purposes here, it is enough to note that this model shows how advertiser-supported media tend to program lowest-common-denominator programs, intended to "capture the eyeballs" of the largest possible number of viewers. These media do not seek to identify what viewers intensely want to watch, but tend to clear programs that are tolerable enough to viewers so that they do not switch off their television. The presence or absence of smaller-segment oriented television depends on the shape of demand in an audience, the number of channels available to serve that audience, and the ownership structure. The relationship between diversity of content and diversity of structure or ownership ,{[pg 166]}, is not smooth. It occurs in leaps. Small increases in the number of outlets continue to serve large clusters of low-intensity preferences--that is, what people find acceptable. A new channel that is added will more often try to take a bite out of a large pie represented by some lowest-common-denominator audience segment than to try to serve a new niche market. 
Only after a relatively high threshold number of outlets is reached do advertiser-supported media have sufficient reason to try to capture much smaller and higher-intensity preference clusters--what people are really interested in. The upshot is that if all storytellers in society are profit-maximizing and operate in a market, the number of storytellers and venues matters tremendously for the diversity of stories told in a society. It is quite possible to have very active market competition in how well the same narrow set of stories is told, as opposed to what stories are told, even though there are many people who would rather hear different stories altogether, but who are in clusters too small, too poor, or too uncoordinated to persuade the storytellers to change their stories rather than their props.
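The "leaps" described here can be reproduced in a toy version of the Steiner-style entry logic. The cluster sizes and the even-split assumption below are illustrative choices of mine, not taken from the Steiner-Beebe literature:

```python
# Each entering channel picks the program type that wins it the most
# viewers, assuming viewers of a type split evenly among the channels
# carrying that type (ties go to the larger cluster).

def program_choices(cluster_sizes, n_channels):
    """Return the program type (cluster index) each entrant chooses."""
    carried = [0] * len(cluster_sizes)  # channels already airing each type
    choices = []
    for _ in range(n_channels):
        # Audience an entrant would capture from each cluster.
        share = [s / (c + 1) for s, c in zip(cluster_sizes, carried)]
        best = share.index(max(share))
        carried[best] += 1
        choices.append(best)
    return choices

clusters = [60, 20, 10, 6, 4]  # hypothetical audience clusters (percent)
print(program_choices(clusters, 3))   # few channels: all duplicate the big cluster
print(program_choices(clusters, 10))  # more channels: a niche is finally served
```

With three channels, all three air the lowest-common-denominator program; even with ten, the smallest, most intense preference clusters remain unserved--diversity arrives only in steps, past threshold numbers of outlets.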
+
+The networked information economy is departing from the industrial information economy along two dimensions that suggest a radical increase in the number of storytellers and the qualitative diversity of stories told. At the simplest level, the cost of a channel is so low that some publication capacity is becoming available to practically every person in society. Ranging from an e-mail account, to a few megabytes of hosting capacity to host a subscriber's Web site, to space on a peer-to-peer distribution network available for any kind of file (like FreeNet or eDonkey), individuals are now increasingly in possession of the basic means necessary to have an outlet for their stories. The number of channels is therefore in the process of jumping from some infinitesimally small fraction of the population--whether this fraction is three networks or five hundred channels almost does not matter by comparison--to a number of channels roughly equal to the number of users. This dramatic increase in the number of channels is matched by the fact that the low costs of communications and production enable anyone who wishes to tell a story to do so, whether or not the story they tell will predictably capture enough of a paying (or advertising-susceptible) audience to recoup production costs. Self-expression, religious fervor, hobby, community seeking, political mobilization, any one of the many and diverse reasons that might drive us to want to speak to others is now a sufficient reason to enable us to do so in mediated form to people both distant and close. The basic filter of marketability has been removed, allowing anything ,{[pg 167]}, that emerges out of the great diversity of human experience, interest, taste, and expressive motivation to flow to and from everyone connected to everyone else. 
Given that all diversity within the industrial information economy needed to flow through the marketability filter, the removal of that filter marks a qualitative increase in the range and diversity of life options, opinions, tastes, and possible life plans available to users of the networked information economy.
+
+The image of everyone being equally able to tell stories brings, perhaps more crisply than any other image, two critical objections to the attractiveness of the networked information economy: quality and cacophony. The problem of quality is easily grasped, but is less directly connected to autonomy. Having many high school plays and pickup basketball games is not the same as having Hollywood movies or the National Basketball Association (NBA). The problem of quality understood in these terms, to the extent that the shift from industrial to networked information production in fact causes it, does not represent a threat to autonomy as much as a welfare cost of making the autonomy-enhancing change. More troubling from the perspective of autonomy is the problem of information overload, which is related to, but distinct from, production quality. The cornucopia of stories out of which each of us can author our own will only enhance autonomy if it does not resolve into a cacophony of meaningless noise. How, one might worry, can a system of information production enhance the ability of an individual to author his or her life, if it is impossible to tell whether this or that particular story or piece of information is credible, or whether it is relevant to the individual's particular experience? Will individuals spend all their time sifting through mounds of inane stories and fairy tales, instead of evaluating which life is best for them based on a small and manageable set of credible and relevant stories? None of the philosophical accounts of substantive autonomy suggests that there is a linearly increasing relationship between the number of options open to an individual--or in this case, perceivable by an individual--and that person's autonomy. Information overload and decision costs can get in the way of actually living one's autonomously selected life.
+
+The quality problem is often raised in public discussions of the Internet, and takes the form of a question: Where will high-quality information products, like movies, come from? This form of the objection, while common, is underspecified normatively and overstated descriptively. First, it is not at all clear what might be meant by "quality," insofar as it is a characteristic of ,{[pg 168]}, information, knowledge, and cultural production that is negatively affected by the shift from an industrial to a networked information economy. Chapter 2 explains that information has always been produced in various modalities, not only in market-oriented organizations and certainly not in proprietary strategies. Political theory is not "better" along any interesting dimension when written by someone aiming to maximize her own or her publisher's commercial profits. Most of the commercial, proprietary online encyclopedias are not better than /{Wikipedia}/ along any clearly observable dimension. Moreover, many information and cultural goods are produced on a relational model, rather than a packaged-goods model. The emergence of the digitally networked environment does not much change their economics or sustainability. Professional theatre that depends on live performances is an example, as are musical performances. To the extent, therefore, that the emergence of substantial scope for nonmarket, distributed production in a networked information economy places pressure on "quality," it is quality of a certain kind. The threatened desiderata are those that are uniquely attractive about industrially produced mass-market products. The high-production-cost Hollywood movie or television series is the threatened species. Even that species is not entirely endangered, and the threat varies for different industries, as explained in some detail in chapter 11. Some movies, particularly those currently made for video release only, may well, in fact, recede. 
However, truly high-production-value movies will continue to have a business model through release windows other than home video distribution. Independently, the pressure on advertising-supported television from multichannel video--cable and satellite--is pushing for more low-cost productions like reality TV. That internal development in mass media, rather than the networked information economy, is already pushing industrial producers toward low-cost, low-quality productions. Moreover, as a large section of chapter 7 illustrates, peer production and nonmarket production are producing desirable public information--news and commentary--that offer qualities central to democratic discourse. Chapter 8 discusses how these two forms of production provide a more transparent and plastic cultural environment--both central to the individual's capacity for defining his or her goals and options. What emerges in the networked information environment, therefore, will not be a system for low-quality amateur mimicry of existing commercial products. What will emerge is space for much more expression, from diverse sources and of diverse qualities. Freedom--the freedom to speak, but also to be free from manipulation and to be cognizant ,{[pg 169]}, of many and diverse options--inheres in this radically greater diversity of information, knowledge, and culture through which to understand the world and imagine how one could be.
+
+Rejecting the notion that there will be an appreciable loss of quality in some absolute sense does not solve the deeper problem of information overload, or having too much information to be able to focus or act upon it. Having too much information with no real way of separating the wheat from the chaff forms what we might call the Babel objection. Individuals must have access to some mechanism that sifts through the universe of information, knowledge, and cultural moves in order to whittle them down to a manageable and usable scope. The question then becomes whether the networked information economy, given the human need for filtration, actually improves the information environment of individuals relative to the industrial information economy. There are three elements to the answer: First, as a baseline, it is important to recognize the power that inheres in the editorial function. The extent to which information overload inhibits autonomy relative to the autonomy of an individual exposed to a well-edited information flow depends on how much the editor who whittles down the information flow thereby gains power over the life of the user of the editorial function, and how he or she uses that power. Second, there is the question of whether users can select and change their editor freely, or whether the editorial function is bundled with other communicative functions and sold by service providers among which users have little choice. Finally, there is the understanding that filtration and accreditation are themselves information goods, like any other, and that they too can be produced on a commons-based, nonmarket model, and therefore without incurring the autonomy deficit that a reintroduction of property to solve the Babel objection would impose.
+
+Relevance filtration and accreditation are integral parts of all communications. A communication must be relevant for a given sender to send to a given recipient and relevant for the recipient to receive. Accreditation further filters relevant information for credibility. Decisions of filtration for purposes of relevance and accreditation are made with reference to the values of the person filtering the information, not the values of the person receiving the information. For instance, the editor of a cable network newsmagazine decides whether a given story is relevant to send out. The owner of the cable system decides whether it is, in the aggregate, relevant to its viewers to see that newsmagazine on its system. Only if both so decide, does each viewer ,{[pg 170]}, get the residual choice of whether to view the story. Of the three decisions that must coincide to mark the newsmagazine as relevant to the viewer, only one is under the control of the individual recipient. And, while the editor's choice might be perceived in some sense as inherent to the production of the information, the cable operator's choice is purely a function of its role as proprietor of the infrastructure. The point to focus on is that the recipient's judgment is dependent on the cable operator's decision as to whether to release the program. The primary benefit of proprietary systems as mechanisms of avoiding the problem of information overload or the Babel objection is precisely the fact that the individual cannot exercise his own judgment as to all the programs that the cable operator--or other commercial intermediary between someone who makes a statement and someone who might receive it--has decided not to release.
+
+As with any flow, control over a necessary passageway or bottleneck in the course of a communication gives the person controlling that point the power to direct the entire flow downstream from it. This power enables the provision of a valuable filtration service, which promises the recipient that he or she will not spend hours gazing at irrelevant materials. However, filtration only enhances the autonomy of users if the editor's notions of relevance and quality resemble those of the sender and the recipient. Imagine a recipient who really wants to be educated about African politics, but also likes sports. Under perfect conditions, he would seek out information on African politics most of the time, with occasional searches for information on sports. The editor, however, makes her money by selling advertising. For her, the relevant information is whatever will keep the viewer's attention most closely on the screen while maintaining a pleasantly acquisitive mood. Given a choice between transmitting information about famine in Sudan, which she worries will make viewers feel charitable rather than acquisitive, and transmitting a football game that has no similar adverse effects, she will prefer the latter. The general point should be obvious. For purposes of enhancing the autonomy of the user, the filtering and accreditation function suffers from an agency problem. To the extent that the values of the editor diverge from those of the user, an editor who selects relevant information based on her values and plans for the users does not facilitate user autonomy, but rather imposes her own preferences regarding what should be relevant to users given her decisions about their life choices. A parallel effect occurs with accreditation. An editor might choose to treat as credible a person whose views or manner of presentation draw audiences, rather than necessarily ,{[pg 171]}, the wisest or best-informed of commentators. 
The wide range in quality of talking heads on television should suffice as an example. The Babel objection may give us good reason to pause before we celebrate the networked information economy, but it does not provide us with reasons to celebrate the autonomy effects of the industrial information economy.
+
+The second component of the response to the Babel objection has to do with the organization of filtration and accreditation in the industrial information economy. The cable operator owns its cable system by virtue of capital investment and (perhaps) expertise in laying cables, hooking up homes, and selling video services. However, it is control over the pipeline into the home that gives it the editorial role in the materials that reach the home. Given the concentrated economics of cable systems, this editorial power is not easy to replace and is not subject to open competition. The same phenomenon occurs with other media that are concentrated and where the information production and distribution functions are integrated with relevance filtration and accreditation: from one-newspaper towns to broadcasters or cable broadband service providers. An edited environment that frees the individual to think about and choose from a small selection of information inputs becomes less attractive when the editor takes on that role as a result of the ownership of carriage media, a large printing press, or copyrights in existing content, rather than as a result of selection by the user as a preferred editor or filter. The existence of an editor means that there is less information for an individual to process. It does not mean that the values according to which the information was pared down are those that the user would have chosen absent the tied relationship between editing and either proprietary content production or carriage.
+
+Finally, and most important, just like any other form of information, knowledge, and culture, relevance and accreditation can be, and are, produced in a distributed fashion. Instead of relying on the judgment of a record label and a DJ of a commercial radio station for what music is worth listening to, users can compare notes as to what they like, and give music to friends whom they think will like it. This is the virtue of music file-sharing systems as distribution systems. Moreover, some of the most interesting experiments in peer production described in chapter 3 are focused on filtration. From the discussions of /{Wikipedia}/ to the moderation and metamoderation scheme of Slashdot, and from the sixty thousand volunteers that make up the Open Directory Project to the PageRank system used by Google, the means of filtering data are being produced within the networked information ,{[pg 172]}, economy using peer production and the coordinate patterns of nonproprietary production more generally. The presence of these filters provides the most important answer to the Babel objection. The presence of filters that do not depend on proprietary control, and that do not bundle proprietary content production and carriage services with filtering, offers a genuinely distinct approach toward presenting autonomous individuals with a choice among different filters that reflect genuinely diverse motivations and organizational forms of the providers.
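The common thread of these peer-produced filters--standing conferred by many peers' judgments rather than by a proprietor--can be illustrated with a minimal rendering of the link-voting idea behind PageRank. The toy graph and the damping value are illustrative; this is a sketch of the idea, not Google's implementation:

```python
# Each page's rank comes from the links ("votes") of other pages,
# weighted by the rank of the voter--accreditation by mutual referencing.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:  # p's rank is shared among the pages it cites
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(toy_web)
print(max(ranks, key=ranks.get))  # the most-referenced page ranks highest
```

No single actor assigns the ranking; it emerges from the aggregate of independent referencing decisions, which is exactly the sense in which filtration here is commons-based.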
+
+Beyond the specific efforts at commons-based accreditation and relevance filtration, we are beginning to observe empirically that patterns of use of the Internet and the World Wide Web exhibit a significant degree of order. In chapter 7, I describe in detail and apply the literature that has explored network topology to the Babel objection in the context of democracy and the emerging networked public sphere, but its basic lesson applies here as well. In brief, the structure of linking on the Internet suggests that, even without quasi-formal collaborative filtering, the coordinate behavior of many autonomous individuals settles on an order that permits us to make sense of the tremendous flow of information that results from universal practical ability to speak and create. We observe the Web developing an order--with high-visibility nodes, and clusters of thickly connected "regions" where groups of Web sites accredit each other by mutual referencing. The high-visibility Web sites provide points of condensation for informing individual choices, every bit as much as they form points of condensation for public discourse. The enormous diversity of topical and context-dependent clustering, whose content is nonetheless available for anyone to reach from anywhere, provides both a way of slicing through the information and rendering it comprehensible, and a way of searching for new sources of information beyond those that one interacts with as a matter of course. The Babel objection is partly solved, then, by the fact that people tend to congregate around common choices. We do this not as a result of purposeful manipulation, but rather because in choosing whether or not to read something, we probably give some weight to whether or not other people have chosen to read it. 
Unless one assumes that individual human beings are entirely dissimilar from each other, the fact that many others have chosen to read something is a reasonable signal that it may be worthwhile for me to read it. This phenomenon is both universal--as we see with the fact that Google successfully provides useful ranking by aggregating all judgments around the Web as to the relevance of any given Web site--and recursively ,{[pg 173]}, present within interest-based and context-based clusters or groups. The clustering and actual degree distribution in the Web suggest, however, that people do not simply follow the herd--they will not read whatever a majority reads. Rather, they will make additional rough judgments about which other people's preferences are most likely to predict their own, or which topics to look into. From these very simple rules--other people share something with me in their tastes, and some sets of other people share more with me than others--we see the Babel objection solved on a distributed model, without anyone exerting formal legal control or practical economic power.
+
+Why, however, is this not a simple reintroduction of heteronomy, of dependence on the judgment of others that subjects individuals to their control? The answer is that, unlike with proprietary filters imposed at bottlenecks or gateways, attention-distribution patterns emerge from many small-scale, independent choices where free choice exists. They are not easily manipulable by anyone. Significantly, the millions of Web sites that do not have high traffic do not "go out of business." As Clay Shirky puts it, while my thoughts about the weekend are unlikely to be interesting to three random users, they may well be interesting, and a basis for conversation, for three of my close friends. The fact that power law distributions of attention to Web sites result from random distributions of interests, not from formal or practical bottlenecks that cannot be worked around, means that whenever an individual chooses to search based on some mechanism other than the simplest, thinnest belief that individuals are all equally similar and dissimilar, a different type of site will emerge as highly visible. Topical sites cluster, unsurprisingly, around topical preference groups; one site does not account for all readers irrespective of their interests. We, as individuals, also go through an iterative process of assigning a likely relevance to the judgments of others. Through this process, we limit the information overload that would threaten to swamp our capacity to know; we diversify the sources of information to which we expose ourselves; and we avoid a stifling dependence on an editor whose judgments we cannot circumvent. We might spend some of our time using the most general, "human interest has some overlap" algorithm represented by Google for some things, but use political common interest, geographic or local interest, hobbyist, subject matter, or the like, to slice the universe of potential others with whose judgments we will choose to affiliate for any given search. 
By a combination of random searching and purposeful deployment of social mapping--who is likely to be interested in what is relevant to me now--we can solve the Babel objection while subjecting ,{[pg 174]}, ourselves neither to the legal and market power of proprietors of communications infrastructure or media products nor to the simple judgments of the undifferentiated herd. These observations have the virtue of being not only based on rigorous mathematical and empirical studies, as we see in chapter 7, but also more consistent with the intuitive experience of anyone who has used the Internet for any decent length of time. We do not degenerate into mindless meandering through a cacophonous din. We find things we want quite well. We stumble across things others suggest to us. When we do go on an unplanned walk, within a very short number of steps we either find something interesting or go back to looking in ways that are more self-conscious and ordered.
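The emergence of skewed attention from many small, independent choices can be simulated directly. The model below is a simplified stand-in for the preferential-attachment dynamics behind the power laws discussed here; the site and reader counts are illustrative, not empirical:

```python
import random

# Each reader gives some weight to what others have already chosen
# ("rich get richer"), plus a constant term standing in for independent
# discovery. Attention concentrates without any gatekeeper choosing.

def attention(n_sites=200, n_readers=5_000, seed=1):
    rng = random.Random(seed)
    reads = [0] * n_sites
    for _ in range(n_readers):
        # Probability of picking a site grows with its prior readership.
        weights = [r + 1 for r in reads]
        site = rng.choices(range(n_sites), weights=weights)[0]
        reads[site] += 1
    return sorted(reads, reverse=True)

ranked = attention()
print("top site:", ranked[0], "median site:", ranked[len(ranked) // 2])
```

A handful of sites end up with far more attention than the median, yet the low-traffic sites retain their readers and no bottleneck owner set the ordering--the point made above about high-visibility nodes emerging from uncoordinated choice.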
+
+The core response to the Babel objection is, then, to accept that filtration is crucial to an autonomous individual. Nonetheless, that acknowledgement does not suggest that the filtration and accreditation systems that the industrial information economy has in fact produced, tied to proprietary control over content production and exchange, are the best means to protect autonomous individuals from the threat of paralysis due to information overload. Property in infrastructure and content affords control that can be used to provide filtration. To that extent, property provides the power for some people to shape the will-formation processes of others. The adoption of distributed information-production systems--both structured as cooperative peer-production enterprises and unstructured coordinate results of individual behavior, like the clustering of preferences around Web sites--does not mean that filtration and accreditation lose their importance. It only means that autonomy is better served when these communicative functions, like others, are available from a nonproprietary, open model of production alongside the proprietary mechanisms of filtration. Being autonomous in this context does not mean that we have to make all the information, read it all, and sift through it all by ourselves. It means that the combination of institutional and practical constraints on who can produce information, who can access it, and who can determine what is worth reading leaves each individual with a substantial role in determining what he shall read, and whose judgment he shall adhere to in sifting through the information environment, for what purposes, and under what circumstances. As always in the case of autonomy for context-bound individuals, the question is the relative role that individuals play, not some absolute, context-independent role that could be defined as being the condition of freedom.
+
+The increasing feasibility of nonmarket, nonproprietary production of information, ,{[pg 175]}, knowledge, and culture, and of communications and computation capacity holds the promise of increasing the degree of autonomy for individuals in the networked information economy. By removing basic capital and organizational constraints on individual action and effective cooperation, the networked information economy allows individuals to do more for and by themselves, and to form associations with others whose help they require in pursuing their plans. We are beginning to see a shift from the highly constrained roles of employee and consumer in the industrial economy, to more flexible, self-authored roles of user and peer participant in cooperative ventures, at least for some part of life. By providing as commons a set of core resources necessary for perceiving the state of the world, constructing one's own perceptions of it and one's own contributions to the information environment we all occupy, the networked information economy diversifies the set of constraints under which individuals can view the world and attenuates the extent to which users are subject to manipulation and control by the owners of core communications and information systems they rely on. By making it possible for many more diversely motivated and organized individuals and groups to communicate with each other, the emerging model of information production provides individuals with radically different sources and types of stories, out of which we can work to author our own lives. Information, knowledge, and culture can now be produced not only by many more people than could do so in the industrial information economy, but also by individuals and in subjects and styles that could not pass the filter of marketability in the mass-media environment. The result is a proliferation of strands of stories and of means of scanning the universe of potential stories about how the world is and how it might become, leaving individuals with much greater leeway to choose, and therefore a much greater role in weaving their own life tapestry. ,{[pg 176]},
+
+1~6 Chapter 6 - Political Freedom Part 1: The Trouble with Mass Media
+
+Modern democracies and mass media have coevolved throughout the twentieth century. The first modern national republics--the early American Republic, the French Republic from the Revolution to the Terror, the Dutch Republic, and the early British parliamentary monarchy--preexisted mass media. They provide us with some model of the shape of the public sphere in a republic without mass media, what Jurgen Habermas called the bourgeois public sphere. However, the expansion of democracies in complex modern societies has largely been a phenomenon of the late nineteenth and twentieth centuries--in particular, the post-World War II years. During this period, the platform of the public sphere was dominated by mass media--print, radio, and television. In authoritarian regimes, these means of mass communication were controlled by the state. In democracies, they operated either under state ownership, with varying degrees of independence from the sitting government, or under private ownership financially dependent on advertising markets. We do not, therefore, have examples of complex modern democracies whose public sphere is built on a platform that is widely ,{[pg 177]}, distributed and independent of both government control and market demands. The Internet as a technology, and the networked information economy as an organizational and social model of information and cultural production, promise the emergence of a substantial alternative platform for the public sphere. The networked public sphere, as it is currently developing, suggests that it will have no obvious points of control or exertion of influence--either by fiat or by purchase. It seems to invert the mass-media model in that it is driven heavily by what dense clusters of users find intensely interesting and engaging, rather than by what large swathes of them find mildly interesting on average. And it promises to offer a platform for engaged citizens to cooperate and provide observations and opinions, and to serve as a watchdog over society on a peer-production model.
+
+The claim that the Internet democratizes is hardly new. "Everyone a pamphleteer" has been an iconic claim about the Net since the early 1990s. It is a claim that has been subjected to significant critique. What I offer, therefore, in this chapter and the next is not a restatement of the basic case, but a detailed analysis of how the Internet and the emerging networked information economy provide us with distinct improvements in the structure of the public sphere over the mass media. I will also explain and discuss the solutions that have emerged within the networked environment itself to some of the persistent concerns raised about democracy and the Internet: the problems of information overload, fragmentation of discourse, and the erosion of the watchdog function of the media.
+
+For purposes of considering political freedom, I adopt a very limited definition of "public sphere." The term is used in reference to the set of practices that members of a society use to communicate about matters they understand to be of public concern and that potentially require collective action or recognition. Moreover, not even all communications about matters of potential public concern can be said to be part of the public sphere. Communications within self-contained relationships whose boundaries are defined independently of the political processes for collective action are "private," if those communications remain purely internal. Dinner-table conversations, grumblings at a bridge club, or private letters have that characteristic, if they occur in a context where they are not later transmitted across the associational boundaries to others who are not part of the family or the bridge club. Whether these conversations are, or are not, part of the public sphere depends on the actual communications practices in a given society. The same practices can become an initial step in generating public opinion ,{[pg 178]}, in the public sphere if they are nodes in a network of communications that do cross associational boundaries. A society with a repressive regime that controls the society-wide communications facilities nonetheless may have an active public sphere if social networks and individual mobility are sufficient to allow opinions expressed within discrete associational settings to spread throughout a substantial portion of the society and to take on political meaning for those who discuss them. The public sphere is, then, a sociologically descriptive category. It is a term for signifying how, if at all, people in a given society speak to each other in their relationship as constituents about what their condition is and what they ought or ought not to do as a political unit. This is a purposefully narrow conception of the public sphere. It is intended to focus on the effects of the networked environment on what has traditionally been understood to be political participation in a republic. I postpone consideration of a broader conception of the public sphere, and of the political nature of who gets to decide meaning and how cultural interpretations of the conditions of life and the alternatives open to a society are created and negotiated in a society, until chapter 8.
+
+The practices that define the public sphere are structured by an interaction of culture, organization, institutions, economics, and technical communications infrastructure. The technical platforms of ink and rag paper, handpresses, and the idea of a postal service were equally present in the early American Republic, Britain, and France of the late eighteenth and early nineteenth centuries. However, the degree of literacy, the social practices of newspaper reading, the relative social egalitarianism as opposed to elitism, the practices of political suppression or subsidy, and the extent of the postal system led to a more egalitarian, open public sphere, shaped as a network of smaller-scale local clusters in the United States, as opposed to the more tightly regulated and elitist national and metropolis-centered public spheres of France and Britain. The technical platforms of mass-circulation print and radio were equally available in the Soviet Union and Nazi Germany, in Britain, and in the United States in the 1930s. Again, however, the vastly different political and legal structures of the former created an authoritarian public sphere, while the latter two, both liberal public spheres, differed significantly in the business organization and economic model of production, the legal framework, and the cultural practices of reading and listening--leading to the then still elitist overlay on the public sphere in Britain relative to a more populist public sphere in the United States.
+
+Mass media structured the public sphere of the twentieth century in all ,{[pg 179]}, advanced modern societies. They combined a particular technical architecture, a particular economic cost structure, a limited range of organizational forms, two or three primary institutional models, and a set of cultural practices typified by consumption of finished media goods. The structure of the mass media resulted in a relatively controlled public sphere--although the degree of control was vastly different depending on whether the institutional model was liberal or authoritarian--with influence over the debate in the public sphere heavily tilted toward those who controlled the means of mass communications. The technical architecture was a one-way, hub-and-spoke structure, with unidirectional links to its ends, running from the center to the periphery. A very small number of production facilities produced large numbers of identical copies of statements or communications, which could then be efficiently sent in identical form to very large numbers of recipients. There was no return loop to send observations or opinions back from the edges to the core of the architecture in the same channel and with similar salience to the communications process, and no means within the mass-media architecture for communication among the end points about the content of the exchanges. Communications among the individuals at the ends were shunted to other media--personal communications or telephones--which allowed communications among the ends. However, these edge media were either local or one-to-one. Their social reach, and hence potential political efficacy, was many orders of magnitude smaller than that of the mass media.
+
+The economic structure was typified by high-cost hubs and cheap, ubiquitous, reception-only systems at the ends. This led to a limited range of organizational models available for production: those that could collect sufficient funds to set up a hub. These included state-owned hubs in most countries; advertising-supported commercial hubs in some of the liberal states, most distinctly in the United States; and, particularly for radio and television, the British Broadcasting Corporation (BBC) model or hybrid models like the Canadian Broadcasting Corporation (CBC) in Canada. The role of hybrid and purely commercial, advertising-supported media increased substantially around the globe outside the United States in the last two to three decades of the twentieth century. Over the course of the century, there also emerged civil-society or philanthropy-supported hubs, like the party presses in Europe, nonprofit publications like Consumer Reports (later, in the United States), and, more important, public radio and television. The one-way technical architecture and the mass-audience organizational model underwrote ,{[pg 180]}, the development of a relatively passive cultural model of media consumption. Consumers (or subjects, in authoritarian systems) at the ends of these systems would treat the communications that filled the public sphere as finished goods. These were to be treated not as moves in a conversation, but as completed statements whose addressees were understood to be passive: readers, listeners, and viewers.
+
+The Internet's effect on the public sphere is different in different societies, depending on what salient structuring components of the existing public sphere its introduction perturbs. In authoritarian countries, it is the absence of a single or manageably small set of points of control that is placing the greatest pressure on the capacity of the regimes to control their public sphere, and thereby to simplify the problem of controlling the actions of the population. In liberal countries, the effect of the Internet operates through its implications for economic cost and organizational form. In both cases, however, the most fundamental and potentially long-standing effect that Internet communications are having is on the cultural practice of public communication. The Internet allows individuals to abandon the idea of the public sphere as primarily constructed of finished statements uttered by a small set of actors socially understood to be "the media" (whether state owned or commercial) and separated from society, and to move toward a set of social practices that see individuals as participating in a debate. Statements in the public sphere can now be seen as invitations for a conversation, not as finished goods. Individuals can work their way through their lives, collecting observations and forming opinions that they understand to be practically capable of becoming moves in a broader public conversation, rather than merely the grist for private musings.
+
+2~ DESIGN CHARACTERISTICS OF A COMMUNICATIONS PLATFORM FOR A LIBERAL PUBLIC PLATFORM OR A LIBERAL PUBLIC SPHERE
+
+How is private opinion about matters of collective, formal, public action formed? How is private opinion communicated to others in a form and in channels that allow it to be converted into a public, political opinion, and a position worthy of political concern by the formal structures of governance of a society? How, ultimately, is such a political and public opinion converted into formal state action? These questions are central to understanding how ,{[pg 181]}, individuals in complex contemporary societies, located at great distances from each other and possessing completely different endowments of material, intellectual, social, and formal ties and capabilities, can be citizens of the same democratic polity rather than merely subjects of a more or less responsive authority. In the idealized Athenian agora or New England town hall, the answers are simple and local. All citizens meet in the agora, they speak in a way that all relevant citizens can hear, they argue with each other, and ultimately they also constitute the body that votes and converts the opinion that emerges into a legitimate action of political authority. Of course, even in those small, locally bounded polities, things were never quite so simple. Nevertheless, the idealized version does at least give us a set of functional characteristics that we might seek in a public sphere: a place where people can come to express and listen to proposals for agenda items--things that ought to concern us as members of a polity and that have the potential to become objects of collective action; a place where we can make and gather statements of fact about the state of our world and about alternative courses of action; where we can listen to opinions about the relative quality and merits of those facts and alternative courses of action; and a place where we can bring our own concerns to the fore and have them evaluated by others.
+
+Understood in this way, the public sphere describes a social communication process. Habermas defines the public sphere as "a network for communicating information and points of view (i.e., opinions expressing affirmative or negative attitudes)," which, in the process of communicating this information and these points of view, filters and synthesizes them "in such a way that they coalesce into bundles of topically specified public opinions."~{ Jurgen Habermas, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy (Cambridge, MA: MIT Press, 1996). }~ Taken in this descriptive sense, the public sphere does not relate to a particular form of public discourse that is normatively attractive from some perspective or another. It defines a particular set of social practices that are necessary for the functioning of any complex social system that includes elements of governing human beings. There are authoritarian public spheres, where communications are regimented and controlled by the government in order to achieve acquiescence and to mobilize support, rather than relying solely on force to suppress dissent and opposition. There are various forms of liberal public spheres, constituted by differences in the political and communications systems scattered around liberal democracies throughout the world. The BBC or the state-owned televisions throughout postwar Western European democracies, for example, constituted the public spheres in different ,{[pg 182]}, ways than did the commercial mass media that dominated the American public sphere. As advertiser-supported mass media have come to occupy a larger role even in places where they were not dominant before the last quarter of the twentieth century, the long American experience with this form provides useful insight globally.
+
+In order to consider the relative advantages and failures of various platforms for a public sphere, we need to define a minimal set of desiderata that such a platform must possess. My point is not to define an ideal set of constraints and affordances of the public sphere that would secure legitimacy or would be most attractive under one conception of democracy or another. Rather, my intention is to define a design question: What characteristics of a communications system and practices are sufficiently basic to be desired by a wide range of conceptions of democracy? With these in hand, we will be able to compare the commercial mass media and the emerging alternatives in the digitally networked environment.
+
+/{Universal Intake}/. Any system of government committed to the idea that, in principle, the concerns of all those governed by that system are equally respected as potential proper subjects for political action and that all those governed have a say in what government should do requires a public sphere that can capture the observations of all constituents. These include at least their observations about the state of the world as they perceive and understand it, and their opinions of the relative desirability of alternative courses of action with regard to their perceptions or those of others. It is important not to confuse "universal intake" with more comprehensive ideas, such as that every voice must be heard in actual political debates, or that all concerns deserve debate and answer. Universal intake does not imply these broader requirements. It is, indeed, the role of filtering and accreditation to whittle down what the universal intake function drags in and make it into a manageable set of political discussion topics and interventions. However, the basic requirement of a public sphere is that it must in principle be susceptible to perceiving and considering the issues of anyone who believes that their condition is a matter appropriate for political consideration and collective action. The extent to which that personal judgment about what the political discourse should be concerned with actually coincides with what the group as a whole will consider in the public sphere is a function of the filtering and accreditation functions. ,{[pg 183]},
+
+/{Filtering for Potential Political Relevance}/. Not everything that someone considers to be a proper concern for collective action is perceived as such by most other participants in the political debate. A public sphere that has some successful implementation of universal intake must also have a filter to separate out those matters that are plausibly within the domain of organized political action and those that are not. What constitutes the range of plausible political topics is locally contingent, changes over time, and is itself a contested political question, as was shown most obviously by the "personal is political" feminist intellectual campaign. While it left "my dad won't buy me the candy I want" out of the realm of the political, it insisted on treating "my husband is beating me" as critically relevant in political debate. An overly restrictive filtering system is likely to impoverish a public sphere and rob it of its capacity to develop legitimate public opinion. It tends to exclude views and concerns that are in fact held by a sufficiently large number of people, or to affect people in sufficiently salient ways that they turn out, in historical context, to place pressure on the political system that fails to consider them or provide a legitimate answer, if not a solution. A system that is too loose tends to fail because it does not allow a sufficient narrowing of focus to provide the kind of sustained attention and concentration necessary to consider a matter and develop a range of public opinions on it.
+
+/{Filtering for Accreditation}/. Accreditation is different from relevance, requires different kinds of judgments, and may be performed in different ways than basic relevance filtering. A statement like "the president has sold out space policy to Martians" is different from "my dad won't buy me the candy I want." It is potentially as relevant as "the president has sold out energy policy to oil companies." What makes the former a subject for entertainment, not political debate, is its lack of credibility. Much of the function of journalistic professional norms is to create and preserve the credibility of the professional press as a source of accreditation for the public at large. Parties provide a major vehicle for passing the filters of both relevance and accreditation. Academia gives its members a source of credibility, whose force (ideally) varies with the degree to which their statements come out of, and pertain to, their core roles as creators of knowledge through their disciplinary constraints. Civil servants in reasonably professional systems can provide a source of accreditation. Large corporations have come to play such a role, though with greater ambiguity. The emerging role of nongovernment organizations ,{[pg 184]}, (NGOs) very often is intended precisely to preorganize opinion that does not easily pass the relevant public sphere's filters of relevance and accreditation and provide it with a voice that will. Note that accreditation of a move in political discourse is very different from accreditation of a move in, for example, academic discourse, because the objective of each system is different. In academic discourse, the fact that a large number of people hold a particular opinion ("the universe was created in seven days") does not render that opinion credible enough to warrant serious academic discussion. In political discourse, say, about public school curricula, the fact that a large number of people hold the same view and are inclined to have it taught in public schools makes that claim highly relevant and "credible." In other words, it is credible that this could become a political opinion that forms a part of public discourse with the potential to lead to public action. Filters, both for relevance and accreditation, provide a critical point of control over the debate, and hence are extremely important design elements.
+
+/{Synthesis of "Public Opinion."}/ The communications system that offers the platform for the public sphere must also enable the synthesis of clusters of individual opinion that are sufficiently close and articulated to form something more than private opinions held by some number of individuals. How this is done is tricky, and what counts as "public opinion" may vary among different theories of democracy. In deliberative conceptions, this might make requirements of the form of discourse. Civic republicans would focus on open deliberation among people who see their role as deliberating about the common good. Habermas would focus on deliberating under conditions that assure the absence of coercion, while Bruce Ackerman would admit to deliberation only arguments formulated so as to be neutral as among conceptions of the good. In pluralist conceptions, like John Rawls's in Political Liberalism, which do not seek ultimately to arrive at a common understanding but instead seek to peaceably clear competing positions as to how we ought to act as a polity, this might mean the synthesis of a position that has sufficient overlap among those who hold it that they are willing to sign on to a particular form of statement in order to get the bargaining benefits of scale as an interest group with a coherent position. That position then comes to the polls and the bargaining table as one that must be considered, overpowered, or bargained with. In any event, the platform has to provide some capacity to synthesize the finely disparate and varied versions of beliefs and positions held by actual individuals into articulated positions amenable for ,{[pg 185]}, consideration and adoption in the formal political sphere and by a system of government, and to render them in ways that make them sufficiently salient in the overall mix of potential opinions to form a condensation point for collective action.
+
+/{Independence from Government Control}/. The core role of the political public sphere is to provide a platform for converting privately developed observations, intuitions, and opinions into public opinions that can be brought to bear in the political system toward determining collective action. One core output of these communications is instructions to the administration sitting in government. To the extent that the platform is dependent on that same sitting government, there is a basic tension between the role of debate in the public sphere as issuing instructions to the executive and the interests of the sitting executive to retain its position and its agenda and have it ratified by the public. This does not mean that the communications system must exclude government from communicating its positions, explaining them, and advocating them. However, when it steps into the public sphere, the locus of the formation and crystallization of public opinion, the sitting administration must act as a participant in explicit conversation, and not as a platform controller that can tilt the platform in its direction.
+
+2~ THE EMERGENCE OF THE COMMERCIAL MASS-MEDIA PLATFORM FOR THE PUBLIC SPHERE
+
+Throughout the twentieth century, the mass media have played a fundamental constitutive role in the construction of the public sphere in liberal democracies. Over this period, first in the United States and later throughout the world, the commercial, advertising-supported form of mass media has become dominant in both print and electronic media. Sometimes, these media have played a role that has drawn admiration as "the fourth estate." Here, the media are seen as a critical watchdog over government processes, and as a major platform for translating the mobilization of social movements into salient, and ultimately actionable, political statements. These same media, however, have also drawn mountains of derision for the power they wield, as well as fail to wield, and for the shallowness of public communication they promote in the normal course of the business of selling eyeballs to advertisers. Nowhere was this clearer than in the criticism of the large role that television came to play in American public culture and its public ,{[pg 186]}, sphere. Contemporary debates bear the imprint of the three major networks, which in the early 1980s still accounted for 92 percent of television viewers and were turned on and watched for hours a day in typical American homes. These inspired works like Neil Postman's Amusing Ourselves to Death or Robert Putnam's claim, in Bowling Alone, that television seemed to be the primary identifiable discrete cause of the decline of American civic life. Nevertheless, whether positive or negative, variants of the mass-media model of communications have been dominant throughout the twentieth century, in both print and electronic media. The mass-media model has been the dominant model of communications in both democracies and their authoritarian rivals throughout the period when democracy established itself, first against monarchies, and later against communism and fascism. To say that mass media were dominant is not to say that only technical systems of remote communications form the platform of the public sphere. As Theda Skocpol and Putnam have each traced in the context of the American and Italian polities, organizations and associations of personal civic involvement form an important platform for public participation. And yet, as both have recorded, these platforms have been on the decline. So "dominant" does not mean sole, but instead means overridingly important in the structuring of the public sphere. It is this dominance, not the very existence, of mass media that is being challenged by the emergence of the networked public sphere.
+
+The roots of the contemporary industrial structure of mass media presage both the attractive and unattractive aspects of the media we see today. Pioneered by the Dutch printers of the seventeenth century, a commercial press that did not need to rely on government grants and printing contracts, or on the church, became a source of a constant flow of heterodox literature and political debate.~{ Elizabeth Eisenstein, The Printing Press as an Agent of Change (New York: Cambridge University Press, 1979); Jeremy Popkin, News and Politics in the Age of Revolution: Jean Luzac's Gazette de Leyde (Ithaca, NY: Cornell University Press, 1989). }~ However, a commercial press has always also been sensitive to the conditions of the marketplace--costs, audience, and competition. In seventeenth-century England, the Stationers' Monopoly provided its insiders enough market protection from competitors that its members were more than happy to oblige the Crown with a compliant press in exchange for monopoly. It was only after the demise of that monopoly that a genuinely political press appeared in earnest, only to be met by a combination of libel prosecutions, high stamp taxes, and outright bribery and acquisition by government.~{ Paul Starr, The Creation of the Media: Political Origins of Modern Communications (New York: Basic Books, 2004), 33-46. }~ These, like the more direct censorship and sponsorship relationships that typified the prerevolutionary French press, kept newspapers and gazettes relatively compliant, and their distribution largely limited to elite audiences. Political dissent did not form part of a stable and ,{[pg 187]}, independent market-based business model. As Paul Starr has shown, the evolution of the British colonies in America was different. While the first century or so of settlement saw few papers, and those mostly "authorized" gazettes, competition began to increase over the course of the eighteenth century. The levels of literacy, particularly in New England, were exceptionally high, the population was relatively prosperous, and the regulatory constraints that applied in England, including the Stamp Tax of 1712, did not apply in the colonies. As second and third newspapers emerged in cities like Boston, Philadelphia, and New York, and were no longer supported by the colonial governments through postal franchises, the public sphere became more contentious. This was now a public sphere whose voices were self-supporting, like Benjamin Franklin's Pennsylvania Gazette. The mobilization of much of this press during the revolutionary era, and the broad perception that it played an important role in constituting the American public, allowed the commercial press to continue to play an independent and critical role after the revolution as well, a fate not shared by the brief flowering of the press immediately after the French Revolution. A combination of high literacy and high government tolerance, but also of postal subsidies, led the new United States to have a number and diversity of newspapers unequalled anywhere else, with a higher weekly circulation by 1840 in the 17-million-strong United States than in all of Europe with its population then of 233 million. By 1830, when Tocqueville visited America, he was confronted with a widespread practice of newspaper reading--not only in towns, but in far-flung farms as well--newspapers that were a primary organizing mechanism for political association.~{ Starr, Creation of the Media, 48-62, 86-87. }~
+
+This widespread development of small-circulation, mostly local, competitive commercial press that carried highly political and associational news and opinion came under pressure not from government, but from the economies of scale of the mechanical press, the telegraph, and the ever-expanding political and economic communities brought together by rail and industrialization. Harold Innis argued more than half a century ago that the increasing costs of mechanical presses, coupled with the much-larger circulation they enabled and the availability of a flow of facts from around the world through telegraph, reoriented newspapers toward a mass-circulation, relatively low-denominator advertising medium. These internal economies, as Alfred Chandler and, later, James Beniger showed in their work, intersected with the vast increase in industrial output, which in turn required new mechanisms of demand management--in other words, more sophisticated ,{[pg 188]}, advertising to generate and channel demand. In the 1830s, the Sun and Herald were published in New York on large-circulation scales, reducing prices to a penny a copy and shifting content from mostly politics and business news to new forms of reporting: petty crimes from the police courts, human-interest stories, and outright entertainment-value hoaxes.~{ Starr, Creation of the Media, 131-133. }~ The startup cost of founding such mass-circulation papers rapidly increased over the second quarter of the nineteenth century, as figure 6.1 illustrates. James Gordon Bennett founded the Herald in 1835, with an investment of five hundred dollars, equal to a little more than $10,400 in 2005 dollars. By 1840, the necessary investment was ten to twenty times greater, between five and ten thousand dollars, or $106,000-$212,000 in 2005 terms. By 1850, that amount had again grown tenfold, to $100,000, about $2.38 million in 2005.~{ Starr, Creation of the Media, 135. 
}~ In the span of fifteen years, the costs of starting a newspaper rose from a number that many could conceive of spending for a wide range of motivations using a mix of organizational forms, to something that required a more or less industrial business model to recoup a very substantial financial investment. The new costs reflected mutually reinforcing increases in organizational cost (because of the professionalization of the newspaper publishing model) and the introduction of high-capacity, higher-cost equipment: electric presses (1839); the Hoe double-cylinder rotary press (1846), which raised output from the five hundred to one thousand sheets per hour of the early steam presses (up from 250 sheets for the handpress) to twelve thousand sheets per hour; and eventually William Bullock's roll-fed rotary press that produced twelve thousand complete newspapers per hour by 1865. The introduction of telegraph and the emergence of news agencies--particularly the Associated Press (AP) in the United States and Reuters in England--completed the basic structure of the commercial printed press. These characteristics--relatively high cost, professional, advertising supported, dependent on access to a comparatively small number of news agencies (which, in the case of the AP, were often used to anticompetitive advantage by their members until the mid-twentieth-century antitrust case)--continued to typify print media. With the introduction of competition from radio and television, these effects tended to lead to greater concentration, with a majority of papers facing no local competition, and an ever-increasing number of papers coming under the joint ownership of a very small number of news publishing houses.
+
+The introduction of radio was the next and only serious potential inflection point, prior to the emergence of the Internet, at which some portion of the public sphere could have developed away from the advertiser- ,{[pg 189]},
+
+{won_benkler_6_1.png "Figure 6.1: Start-up Costs of a Daily Newspaper, 1835-1850 (in 2005 dollars)" }http://www.jus.uio.no/sisu
+
+supported mass-media model. In most of Europe, radio followed the path of state-controlled media, with variable degrees of freedom from the executive at different times and places. Britain developed the BBC, a public organization funded by government-imposed levies, but granted sufficient operational freedom to offer a genuine platform for a public sphere, as opposed to a reflection of the government's voice and agenda. While this model successfully developed what is perhaps the gold standard of broadcast journalism, it also grew as a largely elite institution throughout much of the twentieth century. The BBC model of state-based funding and monopoly with genuine editorial autonomy became the basis of the broadcast model in a number of former colonies: Canada and Australia adopted a hybrid model in the 1930s. This included a well-funded public broadcaster, but did not impose a monopoly in its favor, allowing commercial broadcasters to grow alongside it. Newly independent former colonies in the postwar era that became democracies, like India and Israel, adopted the model with monopoly, levy-based funding, and a degree of editorial independence. The most currently visible adoption of a hybrid model based on some state funding but with editorial freedom is Al Jazeera, the Arab satellite station partly funded by the Emir of Qatar, but apparently free to pursue its own editorial policy, whose coverage stands in sharp contrast to that of the state-run broadcasters ,{[pg 190]}, in the region. In none of these BBC-like places did broadcast diverge from the basic centralized communications model of the mass media, but it followed a path distinct from the commercial mass media. 
Radio, and later television, was a more tightly controlled medium than was the printed press; its intake, filtering, and synthesis of public discourse were relatively insulated from the pressure of both markets, which typified the American model, and politics, which typified the state-owned broadcasters. These were instead controlled by the professional judgments of their management and journalists, and showed both the high professionalism that accompanied freedom along both those dimensions and the class and professional elite filters that typify those who control the media under that organizational model. The United States took a different path that eventually replicated, extended, and enhanced the commercial, advertiser-supported mass-media model originated in the printed press. This model was to become the template for the development of similar broadcasters alongside the state-owned and independent BBC-model channels adopted throughout much of the rest of the world, and of programming production for newer distribution technologies, like cable and satellite stations. The birth of radio as a platform for the public sphere in the United States was on election night in 1920.~{ The following discussion of the birth of radio is adapted from Yochai Benkler, "Overcoming Agoraphobia: Building the Commons of the Digitally Networked Environment," Harvard Journal of Law and Technology 11 (Winter 1997-1998): 287. That article provides the detailed support for the description. The major secondary works relied on are Erik Barnouw, A History of Broadcasting in the United States (New York: Oxford University Press, 1966-1970); Gleason Archer, History of Radio to 1926 (New York: Arno Press, 1971); and Philip T. Rosen, Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934 (Westport, CT: Greenwood Press, 1980). }~ Two stations broadcast the election returns as their launchpad for an entirely new medium--wireless broadcast to a wide audience. 
One was the Detroit News amateur station, 8MK, a broadcast that was framed and understood as an internal communication of a technical fraternity--the many amateurs who had been trained in radio communications for World War I and who then came to form a substantial and engaged technical community. The other was KDKA Pittsburgh, launched by Westinghouse as a bid to create demand for radio receivers of a kind that it had geared up to make during the war. Over the following four or five years, it was unclear which of these two models of communication would dominate the new medium. By 1926, however, the industrial structure that would lead radio to follow the path of commercial, advertiser-supported, concentrated mass media, dependent on government licensing and specializing in influencing its own regulatory oversight process was already in place.
+
+Although this development had its roots in the industrial structure of radio production as it emerged from the first two decades of innovation and businesses in the twentieth century, it was shaped significantly by political-regulatory choices during the 1920s. At the turn of the twentieth century, radio was seen exclusively as a means of wireless telegraphy, emphasizing ,{[pg 191]}, ship-to-shore and ship-to-ship communications. Although some amateurs experimented with voice programs, broadcast was a mode of point-to-point communications; entertainment was not seen as its function until the 1920s. The first decade and a half of radio in the United States saw rapid innovation and competition, followed by a series of patent suits aimed to consolidate control over the technology. By 1916, the ideal transmitter based on technology available at the time required licenses of patents held by Marconi, AT&T, General Electric (GE), and a few individuals. No licenses were in fact granted. The industry had reached stalemate. When the United States joined the war, however, the navy moved quickly to break the stalemate, effectively creating a compulsory cross-licensing scheme for war production, and brought in Westinghouse, the other major potential manufacturer of vacuum tubes alongside GE, as a participant in the industry. The two years following the war saw intervention by the U.S. government to assure that the American radio industry would not be controlled by British Marconi because of concerns in the navy that British control over radio would render the United States vulnerable to the same tactic Britain used against Germany at the start of the war--cutting off all transoceanic telegraph communications. The navy brokered a deal in 1919 whereby a new company was created--the Radio Corporation of America (RCA)--which bought Marconi's American business. 
By early 1920, RCA, GE, and AT&T entered into a patent cross-licensing model that would allow each to produce for a market segment: RCA would control transoceanic wireless telegraphy, while GE and AT&T's Western Electric subsidiary would make radio transmitters and sell them under the RCA brand. This left Westinghouse with production facilities developed for the war, but shut out of the existing equipment markets by the patent pool. Launching KDKA Pittsburgh was part of its response: Westinghouse would create demand for small receivers that it could manufacture without access to the patents held by the pool. The other part of its strategy consisted of acquiring patents that, within a few months, enabled Westinghouse to force its inclusion in the patent pool, redrawing the market division map to give Westinghouse 40 percent of the receiving equipment market. The first part of Westinghouse's strategy, adoption of broadcasting to generate demand for receivers, proved highly successful and in the long run more important. Within two years, there were receivers in 10 percent of American homes. Throughout the 1920s, equipment sales were big business.
+
+Radio stations, however, were not dominated by the equipment manufacturers, or by anyone else for that matter, in the first few years. While the ,{[pg 192]}, equipment manufacturers did build powerful stations like KDKA Pittsburgh, WJZ Newark, KYW Chicago (Westinghouse), and WGY Schenectady (GE), they did not sell advertising, but rather made their money from equipment sales. These stations did not, in any meaningful sense of the word, dominate the radio sphere in the first few years of radio, as the networks would indeed come to do within a decade. In November 1921, the first five licenses were issued by the Department of Commerce under the new category of "broadcasting" of "news, lectures, entertainment, etc." Within eight months, the department had issued another 453 licenses. Many of these went to universities, churches, and unions, as well as local shops hoping to attract business with their broadcasts. Universities, seeing radio as a vehicle for broadening their role, began broadcasting lectures and educational programming. Seventy-four institutes of higher learning operated stations by the end of 1922. The University of Nebraska offered two-credit courses whose lectures were transmitted over the air. Churches, newspapers, and department stores each forayed into this new space, much as we saw the emergence of Web sites for every organization over the course of the mid-1990s. Thousands of amateurs were experimenting with technical and format innovations. While receivers were substantially cheaper than transmitters, it was still possible to assemble and sell relatively cheap transmitters, for local communications, at prices sufficiently low that thousands of individual amateurs could take to the air. At this point in time, then, it was not yet foreordained that radio would follow the mass-media model, with a small number of well-funded speakers and hordes of passive listeners. 
Within a short period, however, a combination of technology, business practices, and regulatory decisions did in fact settle on the model, comprised of a small number of advertiser-supported national networks, that came to typify the American broadcast system throughout most of the rest of the century and that became the template for television as well.
+
+Herbert Hoover, then secretary of commerce, played a pivotal role in this development. Throughout the first few years after the war, Hoover had positioned himself as the champion of making control over radio a private market affair, allying himself both with commercial radio interests and with the amateurs against the navy and the postal service, each of which sought some form of nationalization of radio similar to what would happen more or less everywhere else in the world. In 1922, Hoover assembled the first of four annual radio conferences, representing radio manufacturers, broadcasters, and some engineers and amateurs. This forum became Hoover's primary ,{[pg 193]}, stage. Over the next four years, he used its annual meeting to derive policy recommendations, legitimacy, and cooperation for his regulatory action, all without a hint of authority under the Radio Act of 1912. Hoover relied heavily on the rhetoric of public interest and on the support of amateurs to justify his system of private broadcasting coordinated by the Department of Commerce. From 1922 on, however, he followed a pattern that would systematically benefit large commercial broadcasters over small ones; commercial broadcasters over educational and religious broadcasters; and the one-to-many broadcasts over the point-to-point, small-scale wireless telephony and telegraphy that the amateurs were developing. After January 1922, the department inserted a limitation on amateur licenses, excluding from their coverage the broadcast of "weather reports, market reports, music, concerts, speeches, news or similar information or entertainment." This, together with a Department of Commerce order to all amateurs to stop broadcasting at 360 meters (the wave assigned broadcasting), effectively limited amateurs to shortwave radiotelephony and telegraphy in a set of frequencies then thought to be commercially insignificant. 
In the summer, the department assigned broadcasters, in addition to 360 meters, another band, at 400 meters. Licenses in this Class B category were reserved for transmitters operating at power levels of 500-1,000 watts, who did not use phonograph records. These limitations on Class B licenses made the newly created channel a feasible home only to broadcasters who could afford the much-more-expensive, high-powered transmitters and could arrange for live broadcasts, rather than simply play phonograph records. The success of this new frequency was not immediate, because many receivers could not tune out stations broadcasting at the two frequencies in order to listen to the other. Hoover, failing to move Congress to amend the radio law to provide him with the power necessary to regulate broadcasting, relied on the recommendations of the Second Radio Conference in 1923 as public support for adopting a new regime, and continued to act without legislative authority. He announced that the broadcast band would be divided in three: high-powered (500-1,000 watts) stations serving large areas would have no interference in those large areas, and would not share frequencies. They would transmit on frequencies between 300 and 545 meters. Medium-powered stations would serve smaller areas without interference, and would operate at assigned channels between 222 and 300 meters. The remaining low-powered stations would not be eliminated, as the bigger actors wanted, but would remain at 360 meters, with limited hours of operation and geographic reach. Many of these lower-powered broadcasters ,{[pg 194]}, were educational and religious institutions that perceived Hoover's allocation as a preference for the RCA-GE-AT&T-Westinghouse alliance. 
Despite his protestations against commercial broadcasting ("If a speech by the President is to be used as the meat in a sandwich of two patent medicine advertisements, there will be no radio left"), Hoover consistently reserved clear channels and issued high-power licenses to commercial broadcasters. The final policy action based on the radio conferences came in 1925, when the Department of Commerce stopped issuing licenses. The result was a secondary market in licenses, in which some religious and educational stations were bought out by commercial concerns. These purchases further gravitated radio toward commercial ownership. The licensing preference for stations that could afford high-powered transmitters, long hours of operation, and compliance with high technical constraints continued after the Radio Act of 1927. As a practical matter, it led to assignment of twenty-one out of the twenty-four clear channel licenses created by the Federal Radio Commission to the newly created network-affiliated stations.
+
+Over the course of this period, tensions also began to emerge within the patent alliance. The phenomenal success of receiver sales tempted Western Electric into that market. In the meantime, AT&T, almost by mistake, began to challenge GE, Westinghouse, and RCA in broadcasting as an outgrowth of its attempt to create a broadcast common-carriage facility. Despite the successes of broadcast and receiver sales, it was not clear in 1922-1923 how the cost of setting up and maintaining stations would be paid for. In England, a tax was levied on radio sets, and its revenue used to fund the BBC. No such proposal was considered in the United States, but the editor of Radio Broadcast proposed a national endowed fund, like those that support public libraries and museums, and in 1924, a committee of New York businessmen solicited public donations to fund broadcasters (the response was so pitiful that the funds were returned to their donors). AT&T was the only company to offer a solution. Building on its telephone service experience, it offered radio telephony to the public for a fee. Genuine wireless telephony, even mobile telephony, had been the subject of experimentation since the second decade of radio, but that was not what AT&T offered. In February 1922, AT&T established WEAF in New York, a broadcast station over which AT&T was to provide no programming of its own, but instead would enable the public or program providers to pay on a per-time basis. AT&T treated this service as a form of wireless telephony so that it would fall, under the patent alliance agreements of 1920, under the exclusive control of AT&T. ,{[pg 195]},
+
+RCA, Westinghouse, and GE could not compete in this area. "Toll broadcasting" was not a success by its own terms. There was insufficient demand for communicating with the public to sustain a full schedule that would justify listeners tuning into the station. As a result, AT&T produced its own programming. In order to increase the potential audience for its transmissions while using its advantage in wired facilities, AT&T experimented with remote transmissions, such as live reports from sports events, and with simultaneous transmissions of its broadcasts by other stations, connected to its New York feed by cable. In its effort to launch toll broadcasting, AT&T found itself by mid-1923 with the first functioning precursor to an advertiser-supported broadcast network.
+
+The alliance members now threatened each other: AT&T threatened to enter into receiver manufacturing and broadcast, and the RCA alliance, with its powerful stations, threatened to adopt "toll broadcasting," or advertiser-supported radio. The patent allies submitted their dispute to an arbitrator, who was to interpret the 1920 agreements, reached at a time of wireless telegraphy, to divide the spoils of the broadcast world of 1924. In late 1924, the arbitrator found for RCA-GE-Westinghouse on almost all issues. Capitalizing on RCA's difficulties with the antitrust authorities and congressional hearings over aggressive monopolization practices in the receiving set market, however, AT&T countered that if the 1920 agreements meant what the arbitrator said they meant, they were a combination in restraint of trade to which AT&T would not adhere. Bargaining in the shadow of the mutual threats of contract and antitrust actions, the former allies reached a solution that formed the basis of future radio broadcasting. AT&T would leave broadcasting. A new company, owned by RCA, GE, and Westinghouse, would be formed, and would purchase AT&T's stations. The new company would enter into a long-term contract with AT&T to provide the long-distance communications necessary to set up the broadcast network that David Sarnoff envisioned as the future of broadcast. This new entity would, in 1926, become the National Broadcasting Company (NBC). AT&T's WEAF station would become the center of one of NBC's two networks, and the division arrived at would thereafter form the basis of the broadcast system in the United States.
+
+By the middle of 1926, then, the institutional and organizational elements that became the American broadcast system were, to a great extent, in place. The idea of government monopoly over broadcasting, which became dominant in Great Britain, Europe, and their former colonies, was forever abandoned. ,{[pg 196]}, The idea of a private-property regime in spectrum, which had been advocated by commercial broadcasters to spur investment in broadcast, was rejected on the backdrop of other battles over conservation of federal resources. The Radio Act of 1927, passed by Congress in record speed a few months after a court invalidated Hoover's entire regulatory edifice as lacking legal foundation, enacted this framework as the basic structure of American broadcast. A relatively small group of commercial broadcasters and equipment manufacturers took the lead in broadcast development. A governmental regulatory agency, using a standard of "the public good," allocated frequency, time, and power assignments to minimize interference and to resolve conflicts. The public good, by and large, correlated to the needs of commercial broadcasters and their listeners. Later, the broadcast networks supplanted the patent alliance as the primary force to which the Federal Radio Commission paid heed. The early 1930s still saw battles over the degree of freedom that these networks had to pursue their own commercial interests, free of regulation (studied in Robert McChesney's work).~{ Robert Waterman McChesney, Telecommunications, Mass Media, and Democracy: The Battle for the Control of U.S. Broadcasting, 1928-1935 (New York: Oxford University Press, 1993). }~ By that point, however, the power of the broadcasters was already too great to be seriously challenged. 
Interests like those of the amateurs, whose romantic pioneering mantle still held strong purchase on the process, educational institutions, and religious organizations continued to exercise some force on the allocation and management of the spectrum. However, they were addressed on the periphery of the broadcast platform, leaving the public sphere to be largely mediated by a tiny number of commercial entities running a controlled, advertiser-supported platform of mass media. Following the settlement around radio, there were no more genuine inflection points in the structure of mass media. Television followed radio, and was even more concentrated. Cable networks and satellite networks varied to some extent, but retained the basic advertiser-supported model, oriented toward luring the widest possible audience to view the advertising that paid for the programming.
+
+2~ BASIC CRITIQUES OF MASS MEDIA
+
+The cluster of practices that form the mass-media model was highly conducive to social control in authoritarian countries. The hub-and-spoke technical architecture and unidirectional endpoint-reception model of these systems made it very simple to control, by controlling the core--the state-owned television, radio, and newspapers. The high cost of providing ,{[pg 197]}, high-circulation statements meant that subversive publications were difficult to make and communicate across large distances and to large populations of potential supporters. Samizdat of various forms and channels have existed in most if not all authoritarian societies, but at great disadvantage relative to public communication. The passivity of readers, listeners, and viewers coincided nicely with the role of the authoritarian public sphere--to manage opinion in order to cause the widest possible willing, or at least quiescent, compliance, and thereby to limit the need for using actual repressive force.
+
+In liberal democracies, the same technical and economic cost characteristics resulted in a very different pattern of communications practices. However, these practices relied on, and took advantage of, some of the very same basic architectural and cost characteristics. The practices of commercial mass media in liberal democracies have been the subject of a vast literature, criticizing their failures and extolling their virtues as a core platform for the liberal public sphere. There have been three primary critiques of these media: First, their intake has been seen as too limited. Too few information collection points leave too many views entirely unexplored and unrepresented because they are far from the concerns of the cadre of professional journalists, or cannot afford to buy their way to public attention. The debates about localism and diversity of ownership of radio and television stations have been the clearest policy locus of this critique in the United States. They are based on the assumption that local and socially diverse ownership of radio stations will lead to better representation of concerns as they are distributed in society. Second, concentrated mass media has been criticized as giving the owners too much power--which they either employ themselves or sell to the highest bidder--over what is said and how it is evaluated. Third, the advertising-supported media needs to attract large audiences, leading programming away from the genuinely politically important, challenging, and engaging, and toward the titillating or the soothing. This critique has emphasized the tension between business interests and journalistic ethics, and the claims that market imperatives and the bottom line lead to shoddy or cowering reporting; quiescence in majority tastes and positions in order to maximize audience; spectacle rather than substantive conversation of issues even when political matters are covered; and an emphasis on entertainment over news and analysis.
+
+Three primary defenses or advantages have also been seen in these media: first is their independence from government, party, or upper-class largesse, particularly against the background of the state-owned media in authoritarian ,{[pg 198]}, regimes, and given the high cost of production and communication, commercial mass media have been seen as necessary to create a public sphere grounded outside government. Second is the professionalism and large newsrooms that commercial mass media can afford to support to perform the watchdog function in complex societies. Because of their market-based revenues, they can replace universal intake with well-researched observations that citizens would not otherwise have made, and that are critical to a well-functioning democracy. Third, their near-universal visibility and independence enable them to identify important issues percolating in society. They can provide a platform to put them on the public agenda. They can express, filter, and accredit statements about these issues, so that they become well-specified subjects and feasible objects for public debate among informed citizens. That is to say, the limited number of points to which all are tuned and the limited number of "slots" available for speaking on these media form the basis for providing the synthesis required for public opinion and raising the salience of matters of public concern to the point of potential collective action. In the remainder of this chapter, I will explain the criticisms of the commercial mass media in more detail. I then take up in chapter 7 the question of how the Internet in general, and the rise of nonmarket and cooperative individual production in the networked information economy in particular, can solve or alleviate those problems while fulfilling some of the important roles of mass media in democracies today.
+
+3~ Mass Media as a Platform for the Public Sphere
+
+The structure of mass media as a mode of communications imposes a certain set of basic characteristics on the kind of public conversation it makes possible. First, it is always communication from a small number of people, organized into an even smaller number of distinct outlets, to an audience several orders of magnitude larger, unlimited in principle in its membership except by the production capacity of the media itself--which, in the case of print, may mean the number of copies, and in radio, television, cable, and the like, means whatever physical-reach constraints, if any, are imposed by the technology and business organizational arrangements used by these outlets. In large, complex, modern societies, no one knows everything. The initial function of a platform for the public sphere is one of intake--taking into the system the observations and opinions of as many members of society as possible as potential objects of public concern and consideration. The ,{[pg 199]}, radical difference between the number of intake points the mass media have and the range and diversity of human existence in large complex societies assures a large degree of information loss at the intake stage. Second, the vast difference between the number of speakers and the number of listeners, and the finished-goods style of mass-media products, imposes significant constraints on the extent to which these media can be open to feedback-- that is, to responsive communications that are tied together as a conversation with multiple reciprocal moves from both sides of the conversation. Third, the immense and very loosely defined audience of mass media affects the filtering and synthesis functions of the mass media as a platform for the public sphere. 
One of the observations regarding the content of newspapers in the late eighteenth to mid-nineteenth centuries was the shift they took as their circulation increased--from party-oriented, based in relatively thick communities of interest and practice, to fact- and sensation-oriented, with content that made thinner requirements on their users in order to achieve broader and more weakly defined readership. Fourth, and finally, because of the high costs of organizing these media, the functions of intake, sorting for relevance, accrediting, and synthesis are all combined in the hands of the same media operators, selected initially for their capacity to pool the capital necessary to communicate the information to wide audiences. While all these functions are necessary for a usable public sphere, the correlation of capacity to pool capital resources with capacity to offer the best possible filtering and synthesis is not obvious. In addition to basic structural constraints that come from the characteristic of a communications modality that can properly be called "mass media," there are also critiques that arise more specifically from the business models that have characterized the commercial mass media over the course of most of the twentieth century. Media markets are relatively concentrated, and the most common business model involves selling the attention of large audiences to commercial advertisers.
+
+3~ Media Concentration: The Power of Ownership and Money
+
+The Sinclair Broadcast Group is one of the largest owners of television broadcast stations in the United States. The group's 2003 Annual Report proudly states in its title, "Our Company. Your Message. 26 Million Households"; that is, roughly one quarter of U.S. households. Sinclair owns and operates or provides programming and sales to sixty-two stations in the United States, including multiple local affiliates of NBC, ABC, CBS, and ,{[pg 200]}, Fox. In April 2004, ABC News's program Nightline dedicated a special program to reading the names of American service personnel who had been killed in the Iraq War. The management of Sinclair decided that its seven ABC affiliates would not air the program, defending its decision on the ground that the program "appears to be motivated by a political agenda designed to undermine the efforts of the United States in Iraq."~{ "Names of U.S. Dead Read on Nightline," Associated Press Report, May 1, 2004, http://www.msnbc.msn.com/id/4864247/. }~ At the time, the rising number of American casualties in Iraq was already a major factor in the 2004 presidential election campaign, and both ABC's decision to air the program and Sinclair's decision to refuse to carry it could be seen as interventions by the media in setting the political agenda and contributing to the public debate. It is difficult to gauge the politics of a commercial organization, but one rough proxy is political donations. In the case of Sinclair, 95 percent of the donations made by individuals associated with the company during the 2004 election cycle went to Republicans, while only 5 percent went to Democrats.~{ The numbers given here are taken from The Center for Responsive Politics, http://www.opensecrets.org/, and are based on information released by the Federal Elections Commission. }~ Contributions from Disney, the owner of the ABC network, on the other hand, split about seventy-thirty in favor of Democrats.
It is difficult to parse the extent to which political leanings of this sort are personal to the executives and professional employees who make decisions about programming, and to what extent these are more organizationally self-interested, depending on the respective positions of the political parties on the conditions of the industry's business. In some cases, it is quite obvious that the motives are political. When one looks, for example, at contributions by Disney's film division, they are distributed 100 percent in favor of Democrats. This mostly seems to reflect the large contributions of the Weinstein brothers, who run the semi-independent studio Miramax, which also distributed Michael Moore's politically explosive criticism of the Bush administration, Fahrenheit 9/11, in 2004. Sinclair's contributions were aligned with, though more skewed than, those of the National Association of Broadcasters political action committee, which were distributed 61 percent to 39 percent in favor of Republicans. Here the possible motivation is that Republicans have espoused a regulatory agenda at the Federal Communications Commission that allows broadcasters greater freedom to consolidate and to operate more as businesses and less as public trustees.
+
+The basic point is not, of course, to trace the particular politics of one programming decision or another. It is the relative power of those who manage the mass media when it so dominates public discourse as to shape public perceptions and public debate. This power can be brought to bear throughout the components of the platform, from the intake function (what ,{[pg 201]}, facts about the world are observed) to the filtration and synthesis (the selection of materials, their presentation, and the selection of who will debate them and in what format). These are all central to forming the agenda that the public perceives, choreographing the discussion, the range of opinions perceived and admitted into the conversation, and, through these, ultimately choreographing the perceived consensus and the range of permissible debate. One might think of this as "the Berlusconi effect." A particular individual, known for a personal managerial style, who translated control over media into his election as prime minister of his country, symbolizes the concern well. It does not, of course, exhaust the problem, which is both broader and more subtle than the possibility that mass media will be owned by individuals who exert total control over these media and translate that control into immediate political power, manufacturing and shaping the appearance of a public sphere rather than providing a platform for one.
+
+The power of the commercial mass media depends on the degree of concentration in mass-media markets. A million equally watched channels do not exercise power. Concentration is a common word used to describe the power media exercise when there are only a few outlets, but a tricky one because it implies two very distinct phenomena. The first is a lack of competition in a market, to a degree sufficient to allow a firm to exercise power over its pricing. This is the antitrust sense. The second, very different concern might be called "mindshare." That is, media are "concentrated" when a small number of media firms play a large role as the channel from and to a substantial majority of readers, viewers, and listeners in a given politically relevant social unit.
+
+If one thinks that commercial firms operating in a market will always "give the audience what it wants" and that what the audience wants is a fully representative cross-section of all observations and opinions relevant to public discourse, then the antitrust sense would be the only one that mattered. A competitive market would force any market actor simply to reflect the range of available opinions actually held in the public. Even by this measure, however, there continue to be debates about how one should define the relevant market and what one is measuring. The more one includes all potential nationally available sources of information, newspapers, magazines, television, radio, satellite, cable, and the like, the less concentrated the market seems. However, as Eli Noam's recent work on local media concentration has argued, treating a tiny television station on Long Island as equivalent to ,{[pg 202]}, WCBS in New York severely underrepresents the power of mass media over their audience. Noam offered the most comprehensive analysis currently available of the patterns of concentration where media are actually accessed--locally, where people live--from 1984 to 2001-2002. Most media are consumed locally--because of the cost of national distribution of paper newspapers, and because of the technical and regulatory constraints on nationwide distribution of radio and television. Noam computed three measures of market concentration for each of thirty local markets: the Herfindahl-Hirschman Index (HHI), a standard method used by the Department of Justice to measure market concentration for antitrust purposes; what he calls a C4 index--that is, the market share of the top four firms in a market; and C1, the share of the top single firm in the market. He found that, based on the HHI, all the local media markets are highly concentrated.
In the standard measure, a market with an index below 1,000 is not concentrated, a market with an index of 1,000-1,800 is moderately concentrated, and a market with an index above 1,800 is highly concentrated. Noam found that the index for local radio, which was below 1,000 between 1984 and 1992, rose substantially over the following years. Regulatory restrictions were loosened over the course of the 1990s, resulting by the end of the decade in an HHI of 2,400 for big cities, and higher for medium-sized and small markets. And yet, radio is less concentrated than local multichannel television (cable and satellite), with an HHI of 6,300; local magazines, with an HHI of 6,859; and local newspapers, with an HHI of 7,621. The only form of media whose concentration has declined to less than highly concentrated (HHI 1,714) is local television, as the rise of new networks and local stations' viability on cable has moved us away from the three-network world of 1984. It is still the case, however, that the top four television stations capture 73 percent of the viewers in most markets, and 62 percent in large markets. The most concentrated media in local markets are newspapers, which, except for the few largest markets, operate on a one-newspaper town model. C1 concentration has grown in this area to 83 percent of readership for the leading papers, with an HHI of 7,621.
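The arithmetic of these concentration measures is simple enough to sketch in a few lines of Python: the HHI is the sum of the squared percentage market shares of all firms in a market, and C4 and C1 are the combined shares of the top four firms and of the top firm. The market shares in the example below are hypothetical, chosen only to illustrate the thresholds cited in the text; they are not Noam's data.

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

def c_n(shares, n):
    """Combined market share of the top n firms, in percent."""
    return sum(sorted(shares, reverse=True)[:n])

def classify(index):
    """The Department of Justice concentration bands described in the text."""
    if index < 1000:
        return "not concentrated"
    if index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# A hypothetical one-newspaper town: one dominant paper and two small
# free sheets splitting the remaining readership.
shares = [86, 8, 6]
print(hhi(shares))            # 7496
print(classify(hhi(shares)))  # highly concentrated
print(c_n(shares, 1))         # 86
```

On these numbers, a single paper with 86 percent of readership pushes the index far above the 1,800 threshold by itself, which is why one-newspaper towns produce HHI figures in the 7,000s.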
+
+The degree of concentration in media markets supports the proposition that owners of media can either exercise power over the programming they provide or what they write, or sell their power over programming to those who would like to shape opinions. Even if one were therefore to hold the Pollyannaish view that market-based media in a competitive market would ,{[pg 203]}, be constrained by competition to give citizens what they need, as Ed Baker put it, there is no reason to think the same in these kinds of highly concentrated markets. As it turns out, a long tradition of scholarship has also developed the claim that even without such high levels of concentration in the antitrust sense, advertiser-supported media markets are hardly good mechanisms for assuring that the contents of the media provide a good reflection of the information citizens need to know as members of a polity, the range of opinions and views about what ought to occupy the public, and what solutions are available to those problems that are perceived and discussed.~{ A careful catalog of these makes up the first part of C. Edwin Baker, Media, Markets, and Democracy (New York: Cambridge University Press, 2002). }~ First, we have long known that advertiser-supported media suffer from more or less well-defined failures, purely as market mechanisms, at representing the actual distribution of first-best preferences of audiences. As I describe in more detail in the next section, whether providers in any market structure, from monopoly to full competition, will even try to serve first-best preferences of their audience turns out to be a function of the distribution of actual first-best and second-best preferences, and the number of "channels." Second, there is a systematic analytic problem with defining consumer demand for information. Perfect information is a precondition to an efficient market, not its output.
In order for consumers to value information or an opinion fully, they must know it and assimilate it to their own worldview and understanding. However, the basic problem to be solved by media markets is precisely to select which information people will value if they in fact come to know it, so it is impossible to gauge the value of a unit of information before it has been produced, and hence to base production decisions on actual existing user preferences. The result is that, even if media markets were perfectly competitive, a substantial degree of discretion and influence would remain in the hands of commercial media owners.
+
+The actual cultural practice of mass-media production and consumption is more complex than either the view of "efficient media markets" across the board or the general case against media concentration and commercialism. Many of the relevant companies are public companies, answerable to at least large institutional shareholders, and made up of managements that need not be monolithic in their political alignment or judgment as to the desirability of making political gains as opposed to market share. Unless there is economic or charismatic leadership of the type of a William Randolph Hearst or a Rupert Murdoch, organizations usually have complex structures, with varying degrees of freedom for local editors, reporters, and midlevel managers to tug and pull at the fabric of programming. Different media companies ,{[pg 204]}, also have different business models, and aim at different market segments. The New York Times, Wall Street Journal, and Washington Post do not aim at the same audience as most daily local newspapers in the United States. They are aimed at elites, who want to buy newspapers that can credibly claim to embody highly professional journalism. This requires separation of editorial from business decisions--at least for some segments of the newspapers that are critical in attracting those readers. The degree to which the Berlusconi effect--in its full-blown form of individual, self-consciously directed political power through shaping of the public sphere--will apply is not a question that can be answered a priori, within a single theoretical framework, for all mass media. Instead, it is a concern, a tendency, whose actual salience in any given public sphere or set of firms is the product of historical contingency, different from one country to another and one period to another. It will depend on the strategies of particular companies and their relative mindshare in a society.
However, it is clear and structurally characteristic of mass media that a society that depends for its public sphere on a relatively small number of actors, usually firms, to provide most of the platform of its public sphere, is setting itself up for, at least, a form of discourse elitism. In other words, those who are on the inside of the media will be able to exert substantially greater influence over the agenda, the shape of the conversation, and through these the outcomes of public discourse, than other individuals or groups in society. Moreover, for commercial organizations, this power could be sold--and as a business model, one should expect it to be. The most direct way to sell influence is explicit political advertising, but just as we see "product placement" in movies as a form of advertising, we see advertiser influence on the content of the editorial materials. Part of this influence is directly substantive and political. Another is the source of the second critique of commercial mass media.
+
+3~ Commercialism, Journalism, and Political Inertness
+
+The second cluster of concerns about the commercial mass media is the degree to which their commercialism undermines their will and capacity to provide a platform for public, politically oriented discourse. The concern is, in this sense, the opposite of the concern with excessive power. Rather than the fear that the concentrated mass media will exercise their power to pull opinion in their owners' interest, the fear is that the commercial interests of these media will cause them to pull content away from matters of genuine ,{[pg 205]}, political concern altogether. It is typified in a quote offered by Ben Bagdikian, attributed to W. R. Nelson, publisher of the Kansas City Star in 1915: "Newspapers are read at the breakfast table and dinner tables. God's great gift to man is appetite. Put nothing in the paper that will destroy it."~{ Ben H. Bagdikian, The Media Monopoly, 5th ed. (Boston: Beacon Press, 1997), 118. }~ Examples abound, but the basic analytic structure of the claim is fairly simple and consists of three distinct components. First, advertiser-supported media need to achieve the largest audience possible, not the most engaged or satisfied audience possible. This leads such media to focus on lowest-common-denominator programming and materials that have broad second-best appeal, rather than trying to tailor their programming to the true first-best preferences of well-defined segments of the audience. Second, issues of genuine public concern and potential political contention are toned down and structured as a performance between iconic representations of large bodies of opinion, in order to avoid alienating too much of the audience. This is the reemergence of spectacle that Habermas identified in The Structural Transformation of the Public Sphere.
The tendency toward lowest-common-denominator programming translates in the political sphere into a focus on fairly well-defined, iconic views, and to avoidance of genuinely controversial material, because it is easier to lose an audience by offending its members than by being only mildly interesting. The steady structuring of the media as professional, commercial, and one way over 150 years has led to a pattern whereby, when political debate is communicated, it is mostly communicated as performance. Someone represents a party or widely known opinion, and is juxtaposed with others who similarly represent alternative widely known views. These avatars of public opinion then enact a clash of opinion, orchestrated in order to leave the media neutral and free of blame, in the eyes of their viewers, for espousing an offensively partisan view. Third, and finally, this business logic often stands in contradiction to journalistic ethic. While there are niche markets for high-end journalism and strong opinion, outlets that serve those markets are specialized. Those that cater to broader markets need to subject journalistic ethic to business necessity, emphasizing celebrities or local crime over distant famines or a careful analysis of economic policy.
+
+The basic drive behind programming choices in advertising-supported mass media was explored in the context of the problem of "program diversity" and competition. It relies on a type of analysis introduced by Peter Steiner in 1952. The basic model argued that advertiser-supported media are sensitive only to the number of viewers, not the intensity of their satisfaction. This created an odd situation, where competitors would tend to divide ,{[pg 206]}, among them the largest market segments, and leave smaller slices of the audience unserved, whereas a monopolist would serve each market segment, in order of size, until it ran out of channels. Because it has no incentive to divide all the viewers who want, for example, sitcoms, among two or more stations, a monopolist would program a sitcom on one channel, and the next-most-desired program on the next channel. Two competitors, on the other hand, would both potentially program sitcoms, if dividing those who prefer sitcoms in half still yields a larger total audience size than airing the next-most-desired program. To illustrate this effect with a rather extreme hypothetical example, imagine that we are in a television market of 10 million viewers. Suppose that the distribution of preferences in the audience is as follows: 1,000,000 want to watch sitcoms; 750,000 want sports; 500,000 want local news; 250,000 want action movies; 9,990 are interested in foreign films; and 9,980 want programs on gardening. The stark drop-off between action movies, on the one hand, and foreign films and gardening, on the other, is intended to reflect the fact that the 7.5 million potential viewers who do not fall into one of the first four clusters are distributed in hundreds of small clusters, none commanding more than 10,000 viewers. Before we examine why this extreme assumption is likely correct, let us first see what would happen if it were. ,{[pg 207]},
Table 6.1 presents the programming choices that would typify those of competing channels, based on the number of channels competing and the distribution of preferences in the audience. It reflects the assumptions that each programmer wants to maximize the number of viewers of its channel and that the viewers are equally likely to watch one channel as another if both offer the same type of programming. The numbers in parentheses next to the programming choice represent the number of viewers the programmer can hope to attract given these assumptions, not including the probability that some of the 7.5 million viewers outside the main clusters will also tune in. In this extreme example, one would need a system with more than 250 channels in order to start seeing something other than sitcoms, sports, local news, and action movies. Why, however, is such a distribution likely, or even plausible? The assumption is not intended to represent an actual distribution of what people most prefer to watch. Rather, it reflects the notion that many people have best preferences, fallback preferences, and tolerable options. Their first-best preferences reflect what they really want to watch, and people are highly diverse in this dimension. Their fallback and tolerable preferences reflect the kinds of things they would be willing to watch if nothing else is available, rather than getting up off the sofa and going to a local cafe or reading a book. ,{[pg 207]},
+
+!_ Table 6.1: Distribution of Channels Hypothetical
+
+table{~h c2; 10; 90
+
+No. of channels
+Programming Available (in thousands of viewers)
+
+1
+sitcom (1000)
+
+2
+sitcom (1000), sports (750)
+
+3
+sitcom (1000 or 500), sports (750), indifferent between sitcoms and local news (500)
+
+4
+sitcom (500), sports (750), sitcom (500), local news (500)
+
+5
+sitcom (500), sports (375), sitcom (500), local news (500), sports (375)
+
+6
+sitcom (333), sports (375), sitcom (333), local news (500), sports (375), sitcom (333)
+
+7
+sitcom (333), sports (375), sitcom (333), local news (500), sports (375), sitcom (333), action movies (250)
+
+8
+sitcom (333), sports (375), sitcom (333), local news (250), sports (375), sitcom (333), action movies (250), local news (250)
+
+9
+sitcom (250), sports (375), sitcom (250), local news (250), sports (375), sitcom (250), action movies (250), local news (250), sitcom (250)
+
+***
+***
+
+250
+100 channels of sitcom (10); 75 channels of sports (10); 50 channels of local news (10); 25 channels of action movies (10)
+
+251
+100 channels of sitcom (10); 75 channels of sports (10); 50 channels of local news (10); 25 channels of action movies (10); 1 foreign film channel (9.99)
+
+252
+100 channels of sitcom (10); 75 channels of sports (10); 50 channels of local news (10); 25 channels of action movies (10); 1 foreign film channel (9.99); 1 gardening channel (9.98)
+
+}table
+
+Here represented by sitcoms, sports, and the like, fallback options are more widely shared, even among people whose first-best preferences differ widely, because they represent what people will tolerate before switching, a much less strict requirement than what they really want. This assumption follows Jack Beebe's refinement of Steiner's model. Beebe established that media monopolists would show nothing but common-denominator programs and that competition among broadcasters would begin to serve the smaller preference clusters only if a large enough number of channels were available. Such a model would explain the broad cultural sense of Bruce Springsteen's song, "57 Channels (And Nothin' On)," and why we saw the emergence of channels like Black Entertainment Television, Univision (the Spanish-language channel in the United States), or The History Channel only when cable systems significantly expanded channel capacity, as well as why direct- ,{[pg 208]}, broadcast satellite and, more recently, digital cable offerings were the first venue for twenty-four-hour-a-day cooking channels and smaller minority-language channels.~{ Peter O. Steiner, "Program Patterns and Preferences, and the Workability of Competition in Radio Broadcasting," The Quarterly Journal of Economics 66 (1952): 194. The major other contribution in this literature is Jack H. Beebe, "Institutional Structure and Program Choices in Television Markets," The Quarterly Journal of Economics 91 (1977): 15. A parallel line of analysis of the relationship between programming and the market structure of broadcasting began with Michael Spence and Bruce Owen, "Television Programming, Monopolistic Competition, and Welfare," The Quarterly Journal of Economics 91 (1977): 103. For an excellent review of this literature, see Matthew L. Spitzer, "Justifying Minority Preferences in Broadcasting," Southern California Law Review 64 (1991): 293, 304-319. }~
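The logic generating Table 6.1 can be sketched as a small greedy simulation, assuming, with the hypothetical in the text, that each entrant programs whatever genre maximizes its own audience and that viewers of a genre split evenly among the channels carrying it. This is a simplification of the Steiner and Beebe models, not their formal treatment, and the tie-breaking rule (the first genre listed wins) is an arbitrary assumption standing in for the cases where the table notes a channel is indifferent between genres.

```python
# Audience clusters from the hypothetical, in thousands of viewers.
CLUSTERS = {
    "sitcom": 1000,
    "sports": 750,
    "local news": 500,
    "action movies": 250,
    "foreign films": 9.99,
    "gardening": 9.98,
}

def allocate(n_channels):
    """Greedy entry: each new channel picks the genre offering the largest
    per-channel audience, i.e. cluster size / (channels already there + 1)."""
    counts = {genre: 0 for genre in CLUSTERS}
    for _ in range(n_channels):
        best = max(CLUSTERS, key=lambda g: CLUSTERS[g] / (counts[g] + 1))
        counts[best] += 1
    return {genre: c for genre, c in counts.items() if c}

print(allocate(4))    # {'sitcom': 2, 'sports': 1, 'local news': 1}
print(allocate(250))  # 100 sitcom, 75 sports, 50 local news, 25 action movies
print(allocate(251))  # ...plus the first foreign-film channel
```

Only past 250 channels does any entrant do better serving the 9,990-viewer foreign-film cluster than splitting one of the four big clusters further, which is the point the table makes about channel capacity and niche programming.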
+
+While this work was developed in the context of analyzing media diversity of offerings, it provides a foundation for understanding the programming choices of all advertiser-supported mass media, including the press, in domains relevant to the role they play as a platform for the public sphere. It provides a framework for understanding, but also limiting, the applicability of the idea that mass media will put nothing in the newspaper that will destroy the reader's appetite. Controversial views and genuinely disturbing images, descriptions, or arguments have a higher likelihood of turning readers, listeners, and viewers away than entertainment, mildly interesting and amusing human-interest stories, and a steady flow of basic crime and courtroom dramas and similar fare typical of local television newscasts and newspapers. On the other hand, depending on the number of channels, there are clearly market segments of "political junkies," or engaged elites, that can support some small number of outlets aimed at that crowd. The New York Times or the Wall Street Journal are examples in print; programs like Meet the Press or Nightline, and perhaps channels like CNN and Fox News, are examples of the possibility and limitations of this exception to the general entertainment-oriented, noncontroversial, and politically inert style of commercial mass media. The dynamic of programming to the lowest common denominator can, however, iteratively replicate itself even within relatively news- and elite-oriented media outlets. Even among news junkies, larger news outlets must cater relatively to the mainstream of their intended audiences. Too strident a position or too probing an inquiry may slice the market segment to which they sell too thin. This is likely what leads to the common criticism, from both the Right and the Left, that the same media are too "liberal" and too "conservative," respectively.
By contrast, magazines, whose business model can support much lower circulation levels, exhibit a substantially greater will for political engagement and analysis than even the relatively political-readership-oriented, larger-circulation mass media. By definition, however, the media that cater to these niche markets serve only a small segment of the political community. Fox News in the United States appears to be a powerful counterexample to this trend. It is difficult to pinpoint why. The channel likely represents a composite of the Berlusconi effect, the high market segmentation made possible by high-capacity cable ,{[pg 209]}, systems, the very large market segment of Republicans, and the relatively polarized tone of American political culture since the early 1990s.
+
+The mass-media model as a whole, with the same caveat for niche markets, does not lend itself well to in-depth discussion and dialog. High professionalism can, to some extent, compensate for the basic structural problem of a medium built on the model of a small number of producers transmitting to an audience that is many orders of magnitude larger. The basic problem occurs at the intake and synthesis stages of communication. However diligent they may be, a small number of professional reporters, embedded as they are within social segments that are part of social, economic, and political elites, are a relatively stunted mechanism for intake. If one seeks to collect the wide range of individual observations, experiences, and opinions that make up the actual universe of concerns and opinions of a large public as a basic input into the public sphere, before filtering, the centralized model of mass media provides a limited means of capturing those insights. On the back end of the communication of public discourse, concentrated media of necessity must structure most "participants" in the debate as passive recipients of finished messages and images. That is the core characteristic of mass media: Content is produced prior to transmission in a relatively small number of centers, and when finished is then transmitted to a mass audience, which consumes it. This is the basis of the claim of the role of professional journalism to begin with, separating it from nonprofessional observations of those who consume its products. The result of this basic structure of the media product is that discussion and analysis of issues of common concern is an iconic representation of discussion, a choreographed enactment of public debate. 
The participants are selected for the fact that they represent well-understood, well-defined positions among those actually prevalent in a population, the images and stories are chosen to represent issues, and the public debate that is actually facilitated (and is supposedly where synthesis of the opinions in public debate actually happens) is in fact an already presynthesized portrayal of an argument among avatars of relatively large segments of opinion as perceived by the journalists and stagers of the debate. In the United States, this translates into fairly standard formats of "on the left X, on the right Y," or "the Republicans' position" versus "the Democrats' position." It translates into "photo-op" moments of publicly enacting an idea, a policy position, or a state of affairs--whether it is a president landing on an aircraft carrier to represent security and the successful completion of a ,{[pg 210]}, controversial war, or a candidate hunting with his buddies to represent a position on gun control. It is important to recognize that by describing these characteristics, I am not identifying failures of imagination, thoughtfulness, or professionalism on the part of media organizations. These are simply characteristics of a mass-mediated public sphere; modes of communication that offer the path of least resistance given the characteristics of the production and distribution process of mass media, particularly commercial mass media. There are partial exceptions, as there are to the diversity of content or the emphasis on entertainment value, but these do not reflect what most citizens read, see, or hear. The phenomenon of talk radio and call-in shows represents a very different, but certainly not more reflective, form.
They represent the pornography and violence of political discourse--a combination of exhibitionism and voyeurism intended to entertain us with opportunities to act out suppressed desires and to glimpse what we might be like if we allowed ourselves more leeway from what it means to be a well-socialized adult.
+
+The two basic critiques of commercial mass media coalesce on the conflict between journalistic ethics and the necessities of commercialism. If professional journalists seek to perform a robust watchdog function, to inform their readers and viewers, and to provoke and explore in depth, then the dynamics of both power and lowest-common-denominator appeal push back. Different organizations, with different degrees of managerial control, editorial independence, internal organizational culture, and freedom from competitive pressures, with different intended market segments, will resolve these tensions differently. A quick reading of the conclusions of some media scholarship, and more commonly, arguments made in public debates over the media, would tend to lump "the media" as a single entity, with a single set of failures. In fact, unsurprisingly, the literature suggests substantial heterogeneity among organizations and media. Television seems to be the worst culprit on the dimension of political inertness. Print media, both magazines and some newspapers, include significant variation in the degree to which they fit these general models of failure.
+
+As we turn now to consider the advantages of the introduction of Internet communications, we shall see how this new model can complement the mass media and alleviate its worst weaknesses. In particular, the discussion focuses on the emergence of the networked information economy and the relatively larger role it makes feasible for nonmarket actors and for radically distributed production of information and culture. One need not adopt the position ,{[pg 211]}, that the commercial mass media are somehow abusive, evil, corporate-controlled giants, and that the Internet is the ideal Jeffersonian republic in order to track a series of genuine improvements represented by what the new emerging modalities of public communication can do as platforms for the public sphere. Greater access to means of direct individual communications, to collaborative speech platforms, and to nonmarket producers more generally can complement the commercial mass media and contribute to a significantly improved public sphere. ,{[pg 212]},
+
+1~7 Chapter 7 - Political Freedom Part 2: Emergence of the Networked Public Sphere
+
+The fundamental elements of the difference between the networked information economy and the mass media are network architecture and the cost of becoming a speaker. The first element is the shift from a hub-and-spoke architecture with unidirectional links to the end points in the mass media, to distributed architecture with multidirectional connections among all nodes in the networked information environment. The second is the practical elimination of communications costs as a barrier to speaking across associational boundaries. Together, these characteristics have fundamentally altered the capacity of individuals, acting alone or with others, to be active participants in the public sphere as opposed to its passive readers, listeners, or viewers. For authoritarian countries, this means that it is harder and more costly, though not perhaps entirely impossible, to both be networked and maintain control over their public spheres. China seems to be doing too good a job of this in the middle of the first decade of this century for us to say much more than that it is harder to maintain control, and therefore that at least in some authoritarian regimes, control will be looser. In ,{[pg 213]}, liberal democracies, ubiquitous individual ability to produce information creates the potential for near-universal intake. It therefore portends significant, though not inevitable, changes in the structure of the public sphere from the commercial mass-media environment. These changes raise challenges for filtering. They underlie some of the critiques of the claims about the democratizing effect of the Internet that I explore later in this chapter. Fundamentally, however, they are the roots of possible change. Beginning with the cost of sending an e-mail to some number of friends or to a mailing list of people interested in a particular subject, to the cost of setting up a Web site or a blog, and through to the possibility of maintaining interactive conversations with large numbers of people through sites like Slashdot, the cost of being a speaker in a regional, national, or even international political conversation is several orders of magnitude lower than the cost of speaking in the mass-mediated environment. This, in turn, leads to several orders of magnitude more speakers and participants in conversation and, ultimately, in the public sphere.
+
+The change is as much qualitative as it is quantitative. The qualitative change is represented in the experience of being a potential speaker, as opposed to simply a listener and voter. It relates to the self-perception of individuals in society and the culture of participation they can adopt. The easy possibility of communicating effectively into the public sphere allows individuals to reorient themselves from passive readers and listeners to potential speakers and participants in a conversation. The way we listen to what we hear changes because of this; as does, perhaps most fundamentally, the way we observe and process daily events in our lives. We no longer need to take these as merely private observations, but as potential subjects for public communication. This change affects the relative power of the media. It affects the structure of intake of observations and views. It affects the presentation of issues and observations for discourse. It affects the way issues are filtered, for whom and by whom. Finally, it affects the ways in which positions are crystallized and synthesized, sometimes still by being amplified to the point that the mass media take them as inputs and convert them into political positions, but occasionally by direct organization of opinion and action to the point of reaching a salience that drives the political process directly.
+
+The basic case for the democratizing effect of the Internet, as seen from the perspective of the mid-1990s, was articulated in an opinion of the /{U.S. Supreme Court in Reno v. ACLU}/: ,{[pg 214]},
+
+_1 The Web is thus comparable, from the readers' viewpoint, to both a vast library including millions of readily available and indexed publications and a sprawling mall offering goods and services. From the publishers' point of view, it constitutes a vast platform from which to address and hear from a world-wide audience of millions of readers, viewers, researchers, and buyers. Any person or organization with a computer connected to the Internet can "publish" information. Publishers include government agencies, educational institutions, commercial entities, advocacy groups, and individuals. . . .
+
+_1 Through the use of chat rooms, any person with a phone line can become a town crier with a voice that resonates farther than it could from any soapbox. Through the use of Web pages, mail exploders, and newsgroups, the same individual can become a pamphleteer. As the District Court found, "the content on the Internet is as diverse as human thought."~{ /{Reno v. ACLU}/, 521 U.S. 844, 852-853, and 896-897 (1997). }~
+
+The observations of what is different and unique about this new medium relative to those that dominated the twentieth century are already present in the quotes from the Court. There are two distinct types of effects. The first, as the Court notes from "the readers' perspective," is the abundance and diversity of human expression available to anyone, anywhere, in a way that was not feasible in the mass-mediated environment. The second, and more fundamental, is that anyone can be a publisher, including individuals, educational institutions, and nongovernmental organizations (NGOs), alongside the traditional speakers of the mass-media environment--government and commercial entities.
+
+Since the end of the 1990s there has been significant criticism of this early conception of the democratizing effects of the Internet. One line of critique includes variants of the Babel objection: the concern that information overload will lead to fragmentation of discourse, polarization, and the loss of political community. A different and descriptively contradictory line of critique suggests that the Internet is, in fact, exhibiting concentration: Both infrastructure and, more fundamentally, patterns of attention are much less distributed than we thought. As a consequence, the Internet diverges from the mass media much less than we thought in the 1990s and significantly less than we might hope.
+
+I begin the chapter by offering a menu of the core technologies and usage patterns that can be said, as of the middle of the first decade of the twenty-first century, to represent the core Internet-based technologies of democratic discourse. I then use two case studies to describe the social and economic practices through which these tools are implemented to construct the public ,{[pg 215]}, sphere, and how these practices differ quite radically from the mass-media model. Against the background of these stories, we are then able to consider the critiques that have been leveled against the claim that the Internet democratizes. Close examination of the application of networked information economy to the production of the public sphere suggests that the emerging networked public sphere offers significant improvements over one dominated by commercial mass media. Throughout the discussion, it is important to keep in mind that the relevant comparison is always between the public sphere that we in fact had throughout the twentieth century, the one dominated by mass media. That is the baseline for comparison, not the utopian image of the "everyone a pamphleteer" that animated the hopes of the 1990s for Internet democracy. Departures from the naïve utopia are not signs that the Internet does not democratize, after all. They are merely signs that the medium and its analysis are maturing.
+
+2~ BASIC TOOLS OF NETWORKED COMMUNICATION
+
+Analyzing the effect of the networked information environment on public discourse by cataloging the currently popular tools for communication is, to some extent, self-defeating. These will undoubtedly be supplanted by new ones. Analyzing this effect without having a sense of what these tools are or how they are being used is, on the other hand, impossible. This leaves us with the need to catalog what is, while trying to abstract from what is being used to what relationships of information and communication are emerging, and from these to transpose to a theory of the networked information economy as a new platform for the public sphere.
+
+E-mail is the most popular application on the Net. It is cheap and trivially easy to use. Basic e-mail, as currently used, is not ideal for public communications. While it provides a cheap and efficient means of communicating with large numbers of individuals who are not part of one's basic set of social associations, the presence of large amounts of commercial spam and the amount of mail flowing in and out of mailboxes make indiscriminate e-mail distributions a relatively poor mechanism for being heard. E-mails to smaller groups, preselected by the sender for having some interest in a subject or relationship to the sender, do, however, provide a rudimentary mechanism for communicating observations, ideas, and opinions to a significant circle, on an ad hoc basis. Mailing lists are more stable and self-selecting, and ,{[pg 216]}, therefore more significant as a basic tool for the networked public sphere. Some mailing lists are moderated or edited, and run by one or a small number of editors. Others are not edited in any significant way. What separates mailing lists from most Web-based uses is the fact that they push the information on them into the mailbox of subscribers. Because of their attention limits, individuals restrict their subscriptions, so posting on a mailing list tends to be done by and for people who have self-selected as having a heightened degree of common interest, substantive or contextual. It therefore enhances the degree to which one is heard by those already interested in a topic. It is not a communications model of one-to-many, or few-to-many as broadcast is to an open, undefined class of audience members. Instead, it allows one, or a few, or even a limited large group to communicate to a large but limited group, where the limit is self-selection as being interested or even immersed in a subject.
+
+The World Wide Web is the other major platform for tools that individuals use to communicate in the networked public sphere. It enables a wide range of applications, from basic static Web pages, to, more recently, blogs and various social-software-mediated platforms for large-scale conversations of the type described in chapter 3--like Slashdot. Static Web pages are the individual's basic "broadcast" medium. They allow any individual or organization to present basic texts, sounds, and images pertaining to their position. They allow small NGOs to have a worldwide presence and visibility. They allow individuals to offer thoughts and commentaries. They allow the creation of a vast, searchable database of information, observations, and opinions, available at low cost for anyone, both to read and write into. This does not yet mean that all these statements are heard by the relevant others to whom they are addressed. Substantial analysis is devoted to that problem, but first let us complete the catalog of tools and information flow structures.
+
+One Web-based tool and an emerging cultural practice around it that extends the basic characteristics of Web sites as media for the political public sphere are Web logs, or blogs. Blogs are a tool and an approach to using the Web that extends the use of Web pages in two significant ways. Technically, blogs are part of a broader category of innovations that make the Web "writable." That is, they make Web pages easily capable of modification through a simple interface. They can be modified from anywhere with a networked computer, and the results of writing onto the Web page are immediately available to anyone who accesses the blog to read. This technical change resulted in two divergences from the cultural practice of Web sites ,{[pg 217]}, in the 1990s. First, they allowed the evolution of a journal-style Web page, where individual short posts are added to the Web site at short or long intervals. As practice has developed over the past few years, these posts are usually archived chronologically. For many users, this means that blogs have become a form of personal journal, updated daily or so, for their own use and perhaps for the use of a very small group of friends. What is significant about this characteristic from the perspective of the construction of the public sphere is that blogs enable individuals to write to their Web pages in journalism time--that is, hourly, daily, weekly--whereas the Web page culture that preceded it tended to be slower moving: less an equivalent of reportage than of the essay. Today, one certainly finds individuals using blog software to maintain what are essentially static Web pages, to which they add essays or content occasionally, and Web sites that do not use blogging technology but are updated daily. The public sphere function is based on the content and cadence--that is, the use practice--not the technical platform.
+
+The second critical innovation of the writable Web in general and of blogs in particular was the fact that in addition to the owner, readers/users could write to the blog. Blogging software allows the person who runs a blog to permit some, all, or none of the readers to post comments to the blog, with or without retaining power to edit or moderate the posts that go on, and those that do not. The result is therefore not only that many more people write finished statements and disseminate them widely, but also that the end product is a weighted conversation, rather than a finished good. It is a conversation because of the common practice of allowing and posting comments, as well as comments to these comments. Blog writers--bloggers--often post their own responses in the comment section or address comments in the primary section. Blog-based conversation is weighted, because the culture and technical affordances of blogging give the owner of the blog greater weight in deciding who gets to post or comment and who gets to decide these questions. Different blogs use these capabilities differently; some opt for broader intake and discussion on the board, others for a more tightly edited blog. In all these cases, however, the communications model or information-flow structure that blogs facilitate is a weighted conversation that takes the form of one or a group of primary contributors/authors, together with some larger number, often many, secondary contributors, communicating to an unlimited number of many readers.
+
+The writable Web also encompasses another set of practices that are distinct, but that are often pooled in the literature together with blogs. These ,{[pg 218]}, are the various larger-scale, collaborative-content production systems available on the Web, of the type described in chapter 3. Two basic characteristics make sites like Slashdot or /{Wikipedia}/ different from blogs. First, they are intended for, and used by, very large groups, rather than intended to facilitate a conversation weighted toward one or a small number of primary speakers. Unlike blogs, they are not media for individual or small group expression with a conversation feature. They are intrinsically group communication media. They therefore incorporate social software solutions to avoid deterioration into chaos--peer review, structured posting privileges, reputation systems, and so on. Second, in the case of Wikis, the conversation platform is anchored by a common text. From the perspective of facilitating the synthesis of positions and opinions, the presence of collaborative authorship of texts offers an additional degree of viscosity to the conversation, so that views "stick" to each other, must jostle for space, and accommodate each other. In the process, the output is more easily recognizable as a collective output and a salient opinion or observation than where the form of the conversation is more free-flowing exchange of competing views.
+
+Common to all these Web-based tools--both static and dynamic, individual and cooperative--are linking, quotation, and presentation. It is at the very core of the hypertext markup language (HTML) to make referencing easy. And it is at the very core of a radically distributed network to allow materials to be archived by whoever wants to archive them, and then to be accessible to whoever has the reference. Around these easy capabilities, the cultural practice has emerged to reference through links for easy transition from your own page or post to the one you are referring to--whether as inspiration or in disagreement. This culture is fundamentally different from the mass-media culture, where sending a five-hundred-page report to millions of users is hard and expensive. In the mass media, therefore, instead of allowing readers to read the report alongside its review, all that is offered is the professional review in the context of a culture that trusts the reviewer. On the Web, linking to original materials and references is considered a core characteristic of communication. The culture is oriented toward "see for yourself." Confidence in an observation comes from a combination of the reputation of the speaker as it has emerged over time, reading underlying sources you believe you have some competence to evaluate for yourself, and knowing that for any given referenced claim or source, there is some group of people out there, unaffiliated with the reviewer or speaker, who will have access to the source and the means for making their disagreement with the ,{[pg 219]}, speaker's views known. Linking and "see for yourself" represent a radically different and more participatory model of accreditation than typified the mass media.
+
+Another dimension that is less well developed in the United States than it is in Europe and East Asia is mobility, or the spatial and temporal ubiquity of basic tools for observing and commenting on the world we inhabit. Dan Gillmor is clearly right to include these basic characteristics in his book /{We the Media}/, adding short message service (SMS) and mobile connected cameras to the core tools of what he describes as a transformation in journalism: mailing lists, Web logs, Wikis, and other tools. The United States has remained mostly a PC-based networked system, whereas in Europe and Asia, there has been more substantial growth in handheld devices, primarily mobile phones. In these domains, SMS--the "e-mail" of mobile phones--and camera phones have become critical sources of information, in real time. In some poor countries, where cell phone minutes remain very (even prohibitively) expensive for many users and where landlines may not exist, text messaging is becoming a central and ubiquitous communication tool. What these suggest to us is a transition, as the capabilities of both systems converge, to widespread availability of the ability to register and communicate observations in text, audio, and video, wherever we are and whenever we wish. Drazen Pantic tells of how listeners of Internet-based Radio B-92 in Belgrade reported events in their neighborhoods after the broadcast station had been shut down by the Milosevic regime. Howard Rheingold describes in /{Smart Mobs}/ how citizens of the Philippines used SMS to organize real-time movements and action to overthrow their government. In a complex modern society, where things that matter can happen anywhere and at any time, the capacities of people armed with the means of recording, rendering, and communicating their observations change their relationship to the events that surround them. Whatever one sees and hears can be treated as input into public debate in ways that were impossible when capturing, rendering, and communicating were facilities reserved to a handful of organizations and a few thousands of their employees.
+
+2~ NETWORKED INFORMATION ECONOMY MEETS THE PUBLIC SPHERE
+
+The networked public sphere is not made of tools, but of social production practices that these tools enable. The primary effect of the Internet on the ,{[pg 220]}, public sphere in liberal societies relies on the information and cultural production activity of emerging nonmarket actors: individuals working alone and cooperatively with others, more formal associations like NGOs, and their feedback effect on the mainstream media itself. These enable the networked public sphere to moderate the two major concerns with commercial mass media as a platform for the public sphere: (1) the excessive power it gives its owners, and (2) its tendency, when owners do not dedicate their media to exert power, to foster an inert polity. More fundamentally, the social practices of information and discourse allow a very large number of actors to see themselves as potential contributors to public discourse and as potential actors in political arenas, rather than mostly passive recipients of mediated information who occasionally can vote their preferences. In this section, I offer two detailed stories that highlight different aspects of the effects of the networked information economy on the construction of the public sphere. The first story focuses on how the networked public sphere allows individuals to monitor and disrupt the use of mass-media power, as well as organize for political action. The second emphasizes in particular how the networked public sphere allows individuals and groups of intense political engagement to report, comment, and generally play the role traditionally assigned to the press in observing, analyzing, and creating political salience for matters of public interest. The case studies provide a context both for seeing how the networked public sphere responds to the core failings of the commercial, mass-media-dominated public sphere and for considering the critiques of the Internet as a platform for a liberal public sphere.
+
+Our first story concerns Sinclair Broadcasting and the 2004 U.S. presidential election. It highlights the opportunities that mass-media owners have to exert power over the public sphere, the variability within the media itself in how this power is used, and, most significant for our purposes here, the potential corrective effect of the networked information environment. At its core, it suggests that the existence of radically decentralized outlets for individuals and groups can provide a check on the excessive power that media owners were able to exercise in the industrial information economy.
+
+Sinclair, which owns major television stations in a number of what were considered the most competitive and important states in the 2004 election--including Ohio, Florida, Wisconsin, and Iowa--informed its staff and stations that it planned to preempt the normal schedule of its sixty-two stations to air a documentary called /{Stolen Honor: The Wounds That Never Heal}/, as a news program, a week and a half before the elections.~{ Elizabeth Jensen, "Sinclair Fires Journalist After Critical Comments," Los Angeles Times, October 19, 2004. }~ The documentary ,{[pg 221]}, was reported to be a strident attack on Democratic candidate John Kerry's Vietnam War service. One reporter in Sinclair's Washington bureau, who objected to the program and described it as "blatant political propaganda," was promptly fired.~{ Jensen, "Sinclair Fires Journalist"; Sheridan Lyons, "Fired Reporter Tells Why He Spoke Out," Baltimore Sun, October 29, 2004. }~ The fact that Sinclair owns stations reaching one quarter of U.S. households, that it used its ownership to preempt local broadcast schedules, and that it fired a reporter who objected to its decision, makes this a classic "Berlusconi effect" story, coupled with a poster-child case against media concentration and the ownership of more than a small number of outlets by any single owner. The story of Sinclair's plans broke on Saturday, October 9, 2004, in the Los Angeles Times. Over the weekend, "official" responses were beginning to emerge in the Democratic Party. The Kerry campaign raised questions about whether the program violated election laws as an undeclared "in-kind" contribution to the Bush campaign. By Tuesday, October 12, the Democratic National Committee announced that it was filing a complaint with the Federal Elections Commission (FEC), while seventeen Democratic senators wrote a letter to the chairman of the Federal Communications Commission (FCC), demanding that the commission investigate whether Sinclair was abusing the public trust in the airwaves. Neither the FEC nor the FCC, however, acted or intervened throughout the episode.
+
+Alongside these standard avenues of response in the traditional public sphere of commercial mass media, their regulators, and established parties, a very different kind of response was brewing on the Net, in the blogosphere. On the morning of October 9, 2004, the Los Angeles Times story was blogged on a number of political blogs--Josh Marshall on talkingpointsmemo.com, Chris Bowers on MyDD.com, and Markos Moulitsas on dailyKos.com. By midday that Saturday, October 9, two efforts aimed at organizing opposition to Sinclair were posted on dailyKos and MyDD. A "boycottSinclair" site was set up by one individual, and was pointed to by these blogs. Chris Bowers on MyDD provided a complete list of Sinclair stations and urged people to call the stations and threaten to picket and boycott. By Sunday, October 10, the dailyKos posted a list of national advertisers with Sinclair, urging readers to call them. On Monday, October 11, MyDD linked to that list, while another blog, theleftcoaster.com, posted a variety of action agenda items, from picketing affiliates of Sinclair to suggesting that readers oppose Sinclair license renewals, providing a link to the FCC site explaining the basic renewal process and listing public-interest organizations to work with. That same day, another individual, Nick Davis, started a Web site, ,{[pg 222]}, BoycottSBG.com, on which he posted the basic idea that a concerted boycott of local advertisers was the way to go, while another site, stopsinclair.org, began pushing for a petition. In the meantime, TalkingPoints published a letter from Reed Hundt, former chairman of the FCC, to Sinclair, and continued finding tidbits about the film and its maker. Later on Monday, TalkingPoints posted a letter from a reader who suggested that stockholders of Sinclair could bring a derivative action. By 5:00 a.m. on the dawn of Tuesday, October 12, however, TalkingPoints began pointing toward Davis's database on BoycottSBG.com. By 10:00 that morning, Marshall posted on TalkingPoints a letter from an anonymous reader, which began by saying: "I've worked in the media business for 30 years and I guarantee you that sales is what these local TV stations are all about. They don't care about license renewal or overwhelming public outrage. They care about sales only, so only local advertisers can affect their decisions." This reader then outlined a plan for how to watch and list all local advertisers, and then write to the sales managers--not general managers--of the local stations and tell them which advertisers you are going to call, and then call those. By 1:00 p.m. Marshall posted a story of his own experience with this strategy. He used Davis's database to identify an Ohio affiliate's local advertisers. He tried to call the sales manager of the station, but could not get through. He then called the advertisers. The post is a "how to" instruction manual, including admonitions to remember that the advertisers know nothing of this, the story must be explained, and accusatory tones avoided, and so on. Marshall then began to post letters from readers who explained with whom they had talked--a particular sales manager, for example--and who were then referred to national headquarters. He continued to emphasize that advertisers were the right addressees. By 5:00 p.m. that same Tuesday, Marshall was reporting more readers writing in about experiences, and continued to steer his readers to sites that helped them to identify their local affiliate's sales manager and their advertisers.~{ The various posts are archived and can be read, chronologically, at http://www.talkingpointsmemo.com/archives/week_2004_10_10.php. }~
+
+By the morning of Wednesday, October 13, the boycott database already included eight hundred advertisers, and was providing sample letters for users to send to advertisers. Later that day, BoycottSBG reported that some participants in the boycott had received reply e-mails telling them that their unsolicited e-mail constituted illegal spam. Davis explained that the CAN-SPAM Act, the relevant federal statute, applied only to commercial spam, and pointed users to a law firm site that provided an overview of CAN-SPAM. By October 14, the boycott effort was clearly bearing fruit. Davis ,{[pg 223]}, reported that Sinclair affiliates were threatening advertisers who cancelled advertisements with legal action, and called for volunteer lawyers to help respond. Within a brief period, he collected more than a dozen volunteers to help the advertisers. Later that day, another blogger at grassrootsnation.com had set up a utility that allowed users to send an e-mail to all advertisers in the BoycottSBG database. By the morning of Friday, October 15, Davis was reporting more than fifty advertisers pulling ads, and three or four mainstream media reports had picked up the boycott story and reported on it. That day, an analyst at Lehman Brothers issued a research report that downgraded the expected twelve-month outlook for the price of Sinclair stock, citing concerns about loss of advertiser revenue and risk of tighter regulation. Mainstream news reports over the weekend and the following week systematically placed that report in context of local advertisers pulling their ads from Sinclair. On Monday, October 18, the company's stock price dropped by 8 percent (while the S&P 500 rose by about half a percent). The following morning, the stock dropped a further 6 percent, before beginning to climb back, as Sinclair announced that it would not show /{Stolen Honor}/, but would provide a balanced program with only portions of the documentary and one that would include arguments on the other side. On that day, the company's stock price had reached its lowest point in three years. The day after the announced change in programming decision, the share price bounced back to where it had been on October 15. There were obviously multiple reasons for the stock price losses, and Sinclair stock had been losing ground for many months prior to these events. Nonetheless, as figure 7.1 demonstrates, the market responded quite sluggishly to the announcements of regulatory and political action by the Democratic establishment earlier in the week of October 12, by comparison to the precipitous decline and dramatic bounce-back surrounding the market projections that referred to advertising loss. While this does not prove that the Web-organized, blog-driven and -facilitated boycott was the determining factor, as compared to fears of formal regulatory action, the timing strongly suggests that the efficacy of the boycott played a very significant role.
+
+The first lesson of the Sinclair Stolen Honor story is about commercial mass media themselves. The potential for the exercise of inordinate power by media owners is not an imaginary concern. Here was a publicly traded firm whose managers supported a political party and who planned to use their corporate control over stations reaching one quarter of U.S. households, many in swing states, to put a distinctly political message in front of this large audience. ,{[pg 224]},
+
+{won_benkler_7_1.png "Figure 7.1: Sinclair Stock, October 8-November 5, 2004" }http://www.jus.uio.no/sisu
+
+- We also learn, however, that in the absence of monopoly, such decisions do not determine what everyone sees or hears, and that other mass-media outlets will criticize each other under these conditions. This criticism alone, however, cannot stop a determined media owner from trying to exert its influence in the public sphere, and if placed as Sinclair was, in locations with significant political weight, such intervention could have substantial influence. Second, we learn that the new, network-based media can exert a significant counterforce. They offer a completely new and much more widely open intake basin for insight and commentary. The speed with which individuals were able to set up sites to stake out a position, to collect and make available information relevant to a specific matter of public concern, and to provide a platform for others to exchange views about the appropriate political strategy and tactics was completely different from anything that the economics and organizational structure of mass media make feasible. The third lesson is about the internal dynamics of the networked public sphere. Filtering and synthesis occurred through discussion, trial, and error. Multiple proposals for action surfaced, and the practice of linking allowed most anyone interested who connected to one of the nodes in the network to follow ,{[pg 225]}, quotations and references to get a sense of the broad range of proposals. Different people could coalesce on different modes of action--150,000 signed the petition on stopsinclair.org, while others began to work on the boycott. Setting up the mechanism was trivial, both technically and as a matter of cost--something a single committed individual could choose to do. Pointing and adoption provided the filtering, and feedback about the efficacy, again distributed through a system of cross-references, allowed for testing and accreditation of this course of action. 
High-visibility sites, like Talkingpointsmemo or the dailyKos, offered transmission hubs that disseminated information about the various efforts and provided a platform for interest-group-wide tactical discussions. It remains ambiguous to what extent these dispersed loci of public debate still needed mass-media exposure to achieve broad political salience. BoycottSBG.com received more than three hundred thousand unique visitors during its first week of operations, and more than one million page views. It successfully coordinated a campaign that resulted in real effects on advertisers in a large number of geographically dispersed media markets. In this case, at least, mainstream media reports on these efforts were few, and the most immediate "transmission mechanism" of their effect was the analyst's report from Lehman, not the media. It is harder to judge the extent to which those few mainstream media reports that did appear featured in the decision of the analyst to credit the success of the boycott efforts. The fact that mainstream media outlets may have played a role in increasing the salience of the boycott does not, however, take away from the basic role played by these new mechanisms of bringing information and experience to bear on a broad public conversation combined with a mechanism to organize political action across many different locations and social contexts.
+
+Our second story focuses not on the new reactive capacity of the networked public sphere, but on its generative capacity. In this capacity, it begins to outline the qualitative change in the role of individuals as potential investigators and commentators, as active participants in defining the agenda and debating action in the public sphere. This story is about Diebold Election Systems (one of the leading manufacturers of electronic voting machines and a subsidiary of one of the foremost ATM manufacturers in the world, with more than $2 billion a year in revenue), and the way that public criticism of its voting machines developed. It provides a series of observations about how the networked information economy operates, and how it allows large numbers of people to participate in a peer-production enterprise of ,{[pg 226]}, news gathering, analysis, and distribution, applied to a quite unsettling set of claims. While the context of the story is a debate over electronic voting, that is not what makes it pertinent to democracy. The debate could have centered on any corporate and government practice that had highly unsettling implications, was difficult to investigate and parse, and was largely ignored by mainstream media. The point is that the networked public sphere did engage, and did successfully turn something that was not a matter of serious public discussion into a public discussion that led to public action.
+
+Electronic voting machines were first used to a substantial degree in the United States in the November 2002 elections. Prior to, and immediately following that election, there was sparse mass-media coverage of electronic voting machines. The emphasis was mostly on the newness, occasional slips, and the availability of technical support staff to help at polls. An Atlanta Journal-Constitution story, entitled "Georgia Puts Trust in Electronic Voting, Critics Fret about Absence of Paper Trails,"~{ Duane D. Stanford, Atlanta Journal-Constitution, October 31, 2002, 1A. }~ is not atypical of coverage at the time, which generally reported criticism by computer engineers, but conveyed an overall soothing message about the efficacy of the machines and about efforts by officials and companies to make sure that all would be well. The New York Times report of the Georgia effort did not even mention the critics.~{ Katherine Q. Seelye, "The 2002 Campaign: The States; Georgia About to Plunge into Touch-Screen Voting," New York Times, October 30, 2002, A22. }~ The Washington Post reported on the fears of failure with the newness of the machines, but emphasized the extensive efforts that the manufacturer, Diebold, was making to train election officials and to have hundreds of technicians available to respond to failure.~{ Edward Walsh, "Election Day to Be Test of Voting Process," Washington Post, November 4, 2002, A1. }~ After the election, the Atlanta Journal-Constitution reported that the touch-screen machines were a hit, burying in the text any references to machines that highlighted the wrong candidates or the long lines at the booths, while the Washington Post highlighted long lines in one Maryland county, but smooth operation elsewhere. Later, the Post reported a University of Maryland study that surveyed users and stated that quite a few needed help from election officials, compromising voter privacy.~{ Washington Post, December 12, 2002. 
}~ Given the centrality of voting mechanisms for democracy, the deep concerns that voting irregularities determined the 2000 presidential elections, and the sense that voting machines would be a solution to the "hanging chads" problem (the imperfectly punctured paper ballots that came to symbolize the Florida fiasco during that election), mass-media reports were remarkably devoid of any serious inquiry into how secure and accurate voting machines were, and included a high quotient of soothing comments from election officials who bought the machines and executives of the manufacturers who sold them. No mass-media outlet sought to go ,{[pg 227]}, behind the claims of the manufacturers about their machines, to inquire into their security or the integrity of their tallying and transmission mechanisms against vote tampering. No doubt doing so would have been difficult. These systems were protected as trade secrets. State governments charged with certifying the systems were bound to treat what access they had to the inner workings as confidential. Analyzing these systems requires high degrees of expertise in computer security. Getting around these barriers is difficult. However, it turned out to be feasible for a collection of volunteers in various settings and contexts on the Net.
+
+In late January 2003, Bev Harris, an activist focused on electronic voting machines, was doing research on Diebold, which has provided more than 75,000 voting machines in the United States and produced many of the machines used in Brazil's purely electronic voting system. Harris had set up a whistle-blower site as part of a Web site she ran at the time, blackboxvoting.com. Apparently working from a tip, Harris found out about an openly available site where Diebold stored more than forty thousand files about how its system works. These included specifications for, and the actual code of, Diebold's machines and vote-tallying system. In early February 2003, Harris published two initial journalistic accounts on an online journal in New Zealand, Scoop.co.nz--whose business model includes providing an unedited platform for commentators who wish to use it to publish their materials. She also set up a space on her Web site for technically literate users to comment on the files she had retrieved. In early July of that year, she published an analysis of the results of the discussions on her site, which pointed out how access to the Diebold open site could have been used to affect the 2002 election results in Georgia (where there had been a tightly contested Senate race). In an editorial attached to the publication, entitled "Bigger than Watergate," the editors of Scoop claimed that what Harris had found was nothing short of a mechanism for capturing the U.S. elections process. They then inserted a number of lines that go to the very heart of how the networked information economy can use peer production to play the role of watchdog:
+
+_1 We can now reveal for the first time the location of a complete online copy of the original data set. As we anticipate attempts to prevent the distribution of this information we encourage supporters of democracy to make copies of these files and to make them available on websites and file sharing networks: http://users.actrix.co.nz/dolly/. As many of the files are zip password protected you may need some assistance in opening them, we have found that the utility available at ,{[pg 228]}, the following URL works well: http://www.lostpassword.com. Finally some of the zip files are partially damaged, but these too can be read by using the utility at: http://www.zip-repair.com/. At this stage in this inquiry we do not believe that we have come even remotely close to investigating all aspects of this data; i.e., there is no reason to believe that the security flaws discovered so far are the only ones. Therefore we expect many more discoveries to be made. We want the assistance of the online computing community in this enterprise and we encourage you to file your findings at the forum HERE [providing link to forum].
+
+A number of characteristics of this call to arms would have been simply infeasible in the mass-media environment. They represent a genuinely different mind-set about how news and analysis are produced and how censorship and power are circumvented. First, the ubiquity of storage and communications capacity means that public discourse can rely on "see for yourself" rather than on "trust me." The first move, then, is to make the raw materials available for all to see. Second, the editors anticipated that the company would try to suppress the information. Their response was not to use a counterweight of the economic and public muscle of a big media corporation to protect use of the materials. Instead, it was widespread distribution of information--about where the files could be found, and about where tools to crack the passwords and repair bad files could be found--matched with a call for action: get these files, copy them, and store them in many places so they cannot be squelched. Third, the editors did not rely on large sums of money flowing from being a big media organization to hire experts and interns to scour the files. Instead, they posed a challenge to whoever was interested--there are more scoops to be found, this is important for democracy, good hunting!! Finally, they offered a platform for integration of the insights on their own forum. This short paragraph outlines a mechanism for radically distributed storage, distribution, analysis, and reporting on the Diebold files.
+
+As the story unfolded over the next few months, this basic model of peer production of investigation, reportage, analysis, and communication indeed worked. It resulted in the decertification of some of Diebold's systems in California, and contributed to a shift in the requirements of a number of states, which now require voting machines to produce a paper trail for recount purposes. The first analysis of the Diebold system based on the files Harris originally found was performed by a group of computer scientists at the Information Security Institute at Johns Hopkins University and released ,{[pg 229]}, as a working paper in late July 2003. The Hopkins Report, or Rubin Report as it was also named after one of its authors, Aviel Rubin, presented deep criticism of the Diebold system and its vulnerabilities on many dimensions. The academic credibility of its authors required a focused response from Diebold. The company published a line-by-line response. Other computer scientists joined in the debate. They showed the limitations and advantages of the Hopkins Report, but also where the Diebold response was adequate and where it provided implicit admission of the presence of a number of the vulnerabilities identified in the report. The report and comments to it sparked two other major reports, commissioned by Maryland in the fall of 2003 and later in January 2004, as part of that state's efforts to decide whether to adopt electronic voting machines. Both studies found a wide range of flaws in the systems they examined and required modifications (see figure 7.2).
+
+Meanwhile, trouble was brewing elsewhere for Diebold. In early August 2003, someone provided Wired magazine with a very large cache containing thousands of internal e-mails of Diebold. Wired reported that the e-mails were obtained by a hacker, emphasizing this as another example of the laxity of Diebold's security. However, the magazine provided neither an analysis of the e-mails nor access to them. Bev Harris, the activist who had originally found the Diebold materials, on the other hand, received the same cache, and posted the e-mails and memos on her site. Diebold's response was to threaten litigation. Claiming copyright in the e-mails, the company demanded from Harris, her Internet service provider, and a number of other sites where the materials had been posted, that the e-mails be removed. The e-mails were removed from these sites, but the strategy of widely distributed replication of data and its storage in many topologically and organizationally diverse settings made Diebold's efforts ultimately futile. The protagonists from this point on were college students. First, two students at Swarthmore College in Pennsylvania, and quickly students in a number of other universities in the United States, began storing the e-mails and scouring them for evidence of impropriety. In October 2003, Diebold proceeded to write to the universities whose students were hosting the materials. The company invoked provisions of the Digital Millennium Copyright Act that require Web-hosting companies to remove infringing materials when copyright owners notify them of the presence of these materials on their sites. The universities obliged, and required the students to remove the materials from their sites. The students, however, did not disappear quietly into the ,{[pg 230]}, night.
+
+{won_benkler_7_2.png "Figure 7.2: Analysis of the Diebold Source Code Materials" }http://www.jus.uio.no/sisu
+
+- On October 21, 2003, they launched a multipronged campaign of what they described as "electronic civil disobedience." First, they kept moving the files from one student to another's machine, encouraging students around the country to resist the efforts to eliminate the material. Second, they injected the materials into FreeNet, the anticensorship peer-to-peer publication network, and into other peer-to-peer file-sharing systems, like eDonkey and BitTorrent. Third, supported by the Electronic Frontier Foundation, one of the primary civil-rights organizations concerned with Internet freedom, the students brought suit against Diebold, seeking a judicial declaration that their posting of the materials was privileged. They won both the insurgent campaign and the formal one. As a practical matter, the materials remained publicly available throughout this period. As a matter of law, the litigation went badly enough for Diebold that the company issued a letter promising not to sue the students. The court nonetheless awarded the students damages and attorneys' fees because it found that Diebold had "knowingly and materially misrepresented" that the publication of the e-mail archive was a copyright violation in its letters to the Internet service providers.~{ /{Online Policy Group v. Diebold}/, Inc., 337 F. Supp. 2d 1195 (2004). }~ ,{[pg 231]},
+
+Central from the perspective of understanding the dynamics of the networked public sphere is not, however, the court case--it was resolved almost a year later, after most of the important events had already unfolded--but the efficacy of the students' continued persistent publication in the teeth of the cease-and-desist letters and the willingness of the universities to comply. The strategy of replicating the files everywhere made it impracticable to keep the documents from the public eye. And the public eye, in turn, scrutinized. Among the things that began to surface as users read the files were internal e-mails recognizing problems with the voting system, with the security of the FTP site from which Harris had originally obtained the specifications of the voting systems, and e-mail that indicated that the machines implemented in California had been "patched" or updated after their certification. That is, the machines actually being deployed in California were at least somewhat different from the machines that had been tested and certified by the state. This turned out to have been a critical find.
+
+California had a Voting Systems Panel within the office of the secretary of state that reviewed and certified voting machines. On November 3, 2003, two weeks after the students launched their electronic disobedience campaign, the agenda of the panel's meeting was to include a discussion of proposed modifications to one of Diebold's voting systems. Instead of discussing the agenda item, however, one of the panel members made a motion to table the item until the secretary of state had an opportunity to investigate, because "It has come to our attention that some very disconcerting information regarding this item [sic] and we are informed that this company, Diebold, may have installed uncertified software in at least one county before it was certified."~{ California Secretary of State Voting Systems Panel, Meeting Minutes, November 3, 2003, http://www.ss.ca.gov/elections/vsp_min_110303.pdf. }~ The source of the information is left unclear in the minutes. A later report in Wired cited an unnamed source in the secretary of state's office as saying that somebody within the company had provided this information. The timing and context, however, suggest that it was the revelation and discussion of the e-mail memoranda online that played that role. Two of the members of the public who spoke on the record mention information from within the company. One specifically mentions the information gleaned from company e-mails. In the next committee meeting, on December 16, 2003, one member of the public who was in attendance specifically referred to the e-mails on the Internet, referencing in particular a January e-mail about upgrades and changes to the certified systems. By that December meeting, the independent investigation by the secretary of state had found systematic discrepancies between the systems actually installed ,{[pg 232]}, and those tested and certified by the state. 
The following few months saw more studies, answers, debates, and the eventual decertification of many of the Diebold machines installed in California (see figures 7.3a and 7.3b).
+
+The structure of public inquiry, debate, and collective action exemplified by this story is fundamentally different from the structure of public inquiry and debate in the mass-media-dominated public sphere of the twentieth century. The initial investigation and analysis were done by a committed activist, operating on a low budget and with no financing from a media company. The output of this initial inquiry was not a respectable analysis by a major player in the public debate. It was access to raw materials and initial observations about them, available to start a conversation. Analysis then emerged from a widely distributed process undertaken by Internet users of many different types and abilities. In this case, it included academics studying electronic voting systems, activists, computer systems practitioners, and mobilized students. When the pressure from a well-financed corporation mounted, it was not the prestige and money of a Washington Post or a New York Times that protected the integrity of the information and its availability for public scrutiny. It was the radically distributed cooperative efforts of students and peer-to-peer network users around the Internet. These efforts were, in turn, nested in other communities of cooperative production--like the free software community that developed some of the applications used to disseminate the e-mails after Swarthmore removed them from the students' own site. There was no single orchestrating power--neither party nor professional commercial media outlet. There was instead a series of uncoordinated but mutually reinforcing actions by individuals in different settings and contexts, operating under diverse organizational restrictions and affordances, to expose, analyze, and distribute criticism and evidence for it. The networked public sphere here does not rely on advertising or capturing large audiences to focus its efforts.
What became salient for the public agenda and shaped public discussion was what intensely engaged active participants, rather than what kept the moderate attention of large groups of passive viewers. Instead of the lowest-common-denominator focus typical of commercial mass media, each individual and group can--and, indeed, most likely will--focus precisely on what is most intensely interesting to its participants. Instead of iconic representation built on the scarcity of time slots and space on the air or on the page, we see the emergence of a "see for yourself" culture. Access to underlying documents and statements, and to ,{[pg 233]}, the direct expression of the opinions of others, becomes a central part of the medium.
+
+{won_benkler_7_3a.png "Figure 7.3a: Diebold Internal E-mails Discovery and Distribution" }http://www.jus.uio.no/sisu
+
+2~ CRITIQUES OF THE CLAIMS THAT THE INTERNET HAS DEMOCRATIZING EFFECTS
+
+It is common today to think of the 1990s, out of which came the Supreme Court's opinion in /{Reno v. ACLU}/, as a time of naïve optimism about the Internet, expressing in political optimism the same enthusiasm that drove the stock market bubble, with the same degree of justifiability. An ideal liberal public sphere did not, in fact, burst into being from the Internet, fully grown like Athena from the forehead of Zeus. The detailed criticisms of the early claims about the democratizing effects of the Internet can be characterized as variants of five basic claims:
+
+{won_benkler_7_3b.png "Figure 7.3b: Internal E-mails Translated to Political and Judicial Action" }http://www.jus.uio.no/sisu
+
+1. /{Information overload.}/ A basic problem created when everyone can speak is that there will be too many statements, or too much information. Too ,{[pg 234]}, many observations and too many points of view make the problem of sifting through them extremely difficult, leading to an unmanageable din. This overall concern, a variant of the Babel objection, underlies three more specific arguments: that money will end up dominating anyway, that there will be fragmentation of discourse, and that fragmentation of discourse will lead to its polarization.
+
+/{Money will end up dominating anyway.}/ A point originally raised by Eli Noam is that in this explosively large universe, getting attention will be as difficult as getting your initial message out in the mass-media context, if not more so. The same means that dominated the capacity to speak in the mass-media environment--money--will dominate the capacity to be heard on the Internet, even if it no longer controls the capacity to speak.
+
+/{Fragmentation of attention and discourse.}/ A point raised most explicitly by Cass Sunstein in /{Republic.com}/ is that the ubiquity of information and the absence of the mass media as condensation points will impoverish public discourse by fragmenting it. There will be no public sphere. ,{[pg 235]}, Individuals will view the world through millions of personally customized windows that will offer no common ground for political discourse or action, except among groups of highly similar individuals who customize their windows to see similar things.
+
+/{Polarization.}/ A descriptively related but analytically distinct critique of Sunstein's was that the fragmentation would lead to polarization. When information and opinions are shared only within groups of like-minded participants, he argued, they tend to reinforce each other's views and beliefs without engaging with alternative views or seeing the concerns and critiques of others. This makes each view more extreme in its own direction and increases the distance between positions taken by opposing camps.
+
+2. /{Centralization of the Internet.}/ A second-generation criticism of the democratizing effects of the Internet is that it turns out, in fact, not to be as egalitarian or distributed as the 1990s conception had suggested. First, there is concentration in the pipelines and basic tools of communications. Second, and more intractable to policy, even in an open network, a high degree of attention is concentrated on a few top sites--a tiny number of sites are read by the vast majority of readers, while many sites are never visited by anyone. In this context, the Internet is replicating the mass-media model, perhaps adding a few channels, but not genuinely changing anything structural.
+
+Note that the concern with information overload is in direct tension with the second-generation concerns. To the extent that the concerns about Internet concentration are correct, they suggest that the information overload is not a deep problem. Sadly, from the perspective of democracy, it turns out that according to the concentration concern, there are few speakers to which most people listen, just as in the mass-media environment. While this means that the supposed benefits of the networked public sphere are illusory, it also means that the information overload concerns about what happens when there is no central set of speakers to whom most people listen are solved in much the same way that the mass-media model deals with the factual diversity of information, opinion, and observations in large societies--by consigning them to public oblivion. The response to both sets of concerns will therefore require combined consideration of a series of questions: To what extent are the claims of concentration correct? How do they solve the information overload ,{[pg 236]}, problem? To what extent does the observed concentration replicate the mass-media model?
+
+3. /{Centrality of commercial mass media to the Fourth Estate function.}/ The importance of the press to the political process is nothing new. It earned the press the nickname "the Fourth Estate" (a reference to the three estates that made up the prerevolutionary French Estates-General, the clergy, nobility, and townsmen), which has been in use for at least a hundred and fifty years. In American free speech theory, the press is often described as fulfilling "the watchdog function," deriving from the notion that the public representatives must be watched over to assure they do the public's business faithfully. In the context of the Internet, the concern, most clearly articulated by Neil Netanel, has been that in the modern complex societies in which we live, commercial mass media are critical for preserving the watchdog function of the media. Big, sophisticated, well-funded government and corporate market actors have enormous resources at their disposal to act as they please and to avoid scrutiny and democratic control. Only similarly big, powerful, independently funded media organizations, whose basic market roles are to observe and criticize other large organizations, can match these established elite organizational actors. Individuals and collections of volunteers talking to each other may be nice, but they cannot seriously replace well-funded, economically and politically powerful media.
+
+4. /{Authoritarian countries can use filtering and monitoring to squelch Internet use.}/ A distinct set of claims and their critiques have to do with the effects of the Internet on authoritarian countries. The critique is leveled at a basic belief supposedly, and perhaps actually, held by some cyberlibertarians, that with enough access to Internet tools freedom will burst out everywhere. The argument is that China, more than any other country, shows that it is possible to allow a population access to the Internet--it is now home to the second-largest national population of Internet users--and still control that use quite substantially.
+
+5. /{Digital divide.}/ While the Internet may increase the circle of participants in the public sphere, access to its tools is skewed in favor of those who already are well-off in society--in terms of wealth, race, and skills. I do not respond to this critique in this chapter. First, in the United States, this is less stark today than it was in the late 1990s. Computers and Internet connections are becoming cheaper and more widely available in public libraries and schools. As they become more central to life, they ,{[pg 237]}, seem to be reaching higher penetration rates, and growth rates among underrepresented groups are higher than the growth rate among the highly represented groups. The digital divide with regard to basic access within advanced economies is important as long as it persists, but seems to be a transitional problem. Moreover, it is important to recall that the democratizing effects of the Internet must be compared to democracy in the context of mass media, not in the context of an idealized utopia. Computer literacy and skills, while far from universal, are much more widely distributed than the skills and instruments of mass-media production. Second, I devote chapter 9 to the question of how and why the emergence specifically of nonmarket production provides new avenues for substantial improvements in equality of access to various desiderata that the market distributes unevenly, both within advanced economies and globally, where the maldistribution is much more acute. While the digital divide critique can therefore temper our enthusiasm for how radical the change represented by the networked information economy may be in terms of democracy, the networked information economy is itself an avenue for alleviating maldistribution.
+
+The remainder of this chapter is devoted to responding to these critiques, providing a defense of the claim that the Internet can contribute to a more attractive liberal public sphere. As we work through these objections, we can develop a better understanding of how the networked information economy responds to or overcomes the particular systematic failures of mass media as platforms for the public sphere. Throughout this analysis, it is comparison of the attractiveness of the networked public sphere to that baseline--the mass-media-dominated public sphere--not comparison to a nonexistent ideal public sphere or to the utopia of "everyone a pamphleteer," that should matter most to our assessment of its democratic promise.
+
+2~ IS THE INTERNET TOO CHAOTIC, TOO CONCENTRATED, OR NEITHER?
+
+The first-generation critique of the claims that the Internet democratizes focused heavily on three variants of the information overload or Babel objection. The basic descriptive proposition that animated the Supreme Court in /{Reno v. ACLU}/ was taken as more or less descriptively accurate: Everyone would be equally able to speak on the Internet. However, this basic observation ,{[pg 238]}, was then followed by a descriptive or normative explanation of why this development was a threat to democracy, or at least not much of a boon. The basic problem that is diagnosed by this line of critique is the problem of attention. When everyone can speak, the central point of failure becomes the capacity to be heard--who listens to whom, and how that question is decided. Speaking in a medium that no one will actually hear with any reasonable likelihood may be psychologically satisfying, but it is not a move in a political conversation. Noam's prediction was, therefore, that there would be a reconcentration of attention: money would reemerge in this environment as a major determinant of the capacity to be heard, certainly no less, and perhaps even more so, than it was in the mass-media environment.~{ Eli Noam, "Will the Internet Be Bad for Democracy?" (November 2001), http://www.citi.columbia.edu/elinoam/articles/int_bad_dem.htm. }~ Sunstein's theory was different. He accepted Nicholas Negroponte's prediction that people would be reading "The Daily Me," that is, that each of us would create highly customized windows on the information environment that would be narrowly tailored to our unique combination of interests. From this assumption about how people would be informed, he spun out two distinct but related critiques. The first was that discourse would be fragmented.
With no six o'clock news to tell us what is on the public agenda, there would be no public agenda, just a fragmented multiplicity of private agendas that never coalesce into a platform for political discussion. The second was that, in a fragmented discourse, individuals would cluster into groups of self-reinforcing, self-referential discussion groups. These types of groups, he argued from social scientific evidence, tend to render their participants' views more extreme and less amenable to the conversation across political divides necessary to achieve reasoned democratic decisions.
+
+Extensive empirical and theoretical studies of actual use patterns of the Internet over the past five to eight years have given rise to a second-generation critique of the claim that the Internet democratizes. According to this critique, attention is much more concentrated on the Internet than we thought a few years ago: a tiny number of sites are highly linked, the vast majority of "speakers" are not heard, and the democratic potential of the Internet is lost. If correct, these claims suggest that Internet use patterns solve the problem of discourse fragmentation that Sunstein was worried about. Rather than each user reading a customized and completely different "newspaper," the vast majority of users turn out to see the same sites. In a network with a small number of highly visible sites that practically everyone reads, the discourse fragmentation problem is resolved. Because they are seen by most people, the polarization problem too is solved--the highly visible sites are ,{[pg 239]}, not small-group interactions with homogeneous viewpoints. While resolving Sunstein's concerns, this pattern is certainly consistent with Noam's prediction that money would have to be paid to reach visibility, effectively replicating the mass-media model. While centralization would resolve the Babel objection, it would do so only at the expense of losing much of the democratic promise of the Net.
+
+Therefore, we now turn to the question: Is the Internet in fact too chaotic or too concentrated to yield a more attractive democratic discourse than the mass media did? I suggest that neither is the case. At the risk of appearing a chimera of Goldilocks and Pangloss, I argue instead that the observed use of the network exhibits an order that is not too concentrated and not too chaotic, but rather, if not "just right," at least structures a networked public sphere more attractive than the mass-media-dominated public sphere.
+
+There are two very distinct types of claims about Internet centralization. The first, and earlier, has the familiar ring of media concentration. It is the simpler of the two, and is tractable to policy. The second, concerned with the emergent patterns of attention and linking on an otherwise open network, is more difficult to explain and intractable to policy. I suggest, however, that it actually stabilizes and structures democratic discourse, providing a better answer to the fears of information overload than either the mass media or any efforts to regulate attention to matters of public concern.
+
+The media-concentration type argument has been central to arguments about the necessity of open access to broadband platforms, made most forcefully over the past few years by Lawrence Lessig. The argument is that the basic instrumentalities of Internet communications are subject to concentrated markets. This market concentration in basic access becomes a potential point of concentration of the power to influence the discourse made possible by access. Eli Noam's recent work provides the most comprehensive study currently available of the degree of market concentration in media industries. It offers a bleak picture.~{ Eli Noam, "The Internet: Still Wide Open and Competitive?" Paper presented at The Telecommunications Policy Research Conference, September 2003, http://www.tprc.org/papers/2003/200/noam_TPRC2003.pdf. }~ Noam looked at markets in basic infrastructure components of the Internet: Internet backbones, Internet service providers (ISPs), broadband providers, portals, search engines, browser software, media player software, and Internet telephony. Aggregating across all these sectors, he found that the Internet sector defined in terms of these components was, throughout most of the period from 1984 to 2002, concentrated according to traditional antitrust measures. Between 1992 and 1998, however, this sector was "highly concentrated" by the Justice Department's measure of market concentration for antitrust purposes. Moreover, the power ,{[pg 240]}, of the top ten firms in each of these markets, and in aggregate for firms that had large market segments in a number of these markets, shows that an ever-smaller number of firms were capturing about 25 percent of the revenues in the Internet sector.
A cruder, but consistent finding is the FCC's, showing that 96 percent of homes and small offices get their broadband access either from their incumbent cable operator or their incumbent local telephone carrier.~{ Federal Communications Commission, Report on High Speed Services, December 2003. }~ It is important to recognize that these findings are suggesting potential points of failure for the networked information economy. They are not a critique of the democratic potential of the networked public sphere, but rather show us how we could fail to develop it by following the wrong policies.
+
+The risk of concentration in broadband access services is that a small number of firms, sufficiently small to have economic power in the antitrust sense, will control the markets for the basic instrumentalities of Internet communications. Recall, however, that the low cost of computers and the open-ended architecture of the Internet protocol itself are the core enabling facts that have allowed us to transition from the mass-media model to the networked information model. As long as these basic instrumentalities are open and neutral as among uses, and are relatively cheap, the basic economics of nonmarket production described in part I should not change. Under competitive conditions, as technology makes computation and communications cheaper, a well-functioning market should ensure that outcome. Under oligopolistic conditions, however, there is a threat that the network will become too expensive to be neutral as among market and nonmarket production. If basic upstream network connections, server space, and up-to-date reading and writing utilities become so expensive that one needs to adopt a commercial model to sustain them, then the basic economic characteristic that typifies the networked information economy--the relatively large role of nonproprietary, nonmarket production--will have been reversed. However, the risk is not focused solely or even primarily on explicit pricing. One of the primary remaining scarce resources in the networked environment is user time and attention. As chapter 5 explained, owners of communications facilities can extract value from their users in ways that are more subtle than increasing price. 
In particular, they can make some sites and statements easier to reach and see--more prominently displayed on the screen, faster to load--and sell that relative ease to those who are willing to pay.~{ See Eszter Hargittai, "The Changing Online Landscape: From Free-For-All to Commercial Gatekeeping," http://www.eszter.com/research/pubs/hargittai-onlinelandscape.pdf. }~ In that environment, nonmarket sites are systematically disadvantaged irrespective of the quality of their content. ,{[pg 241]},
+
+The critique of concentration in this form therefore does not undermine the claim that the networked information economy, if permitted to flourish, will improve the democratic public sphere. It underscores the threat of excessive monopoly in infrastructure to the sustainability of the networked public sphere. The combination of observations regarding market concentration and an understanding of the importance of a networked public sphere to democratic societies suggests that a policy intervention is possible and desirable. Chapter 11 explains why the relevant intervention is to permit substantial segments of the core common infrastructure--the basic physical transport layer of wireless or fiber and the software and standards that run communications--to be produced and provisioned by users and managed as a commons.
+
+2~ ON POWER LAW DISTRIBUTIONS, NETWORK TOPOLOGY, AND BEING HEARD
+
+A much more intractable challenge to the claim that the networked information economy will democratize the public sphere emerges from observations of a set of phenomena that characterize the Internet, the Web, the blogosphere, and, indeed, most growing networks. In order to extract information out of the universe of statements and communications made possible by the Internet, users are freely adopting practices that lead to the emergence of a new hierarchy. Rather than succumb to the "information overload" problem, users are solving it by congregating in a small number of sites. This conclusion is based on a new but growing literature on the likelihood that a Web page will be linked to by others. The distribution of that probability turns out to be highly skew. That is, there is a tiny probability that any given Web site will be linked to by a huge number of people, and a very large probability that for a given Web site only one other site, or even no site, will link to it. This fact is true of large numbers of very different networks described in physics, biology, and social science, as well as in communications networks. If true in this pure form about Web usage, this phenomenon presents a serious theoretical and empirical challenge to the claim that Internet communications of the sorts we have seen here meaningfully decentralize democratic discourse. It is not a problem that is tractable to policy. We cannot as a practical matter force people to read different things than what they choose to read; nor should we wish to. If users avoid information overload by focusing on a small subset of sites in an otherwise ,{[pg 242]}, open network that allows them to read more or less whatever they want and whatever anyone has written, policy interventions aimed to force a different pattern would be hard to justify from the perspective of liberal democratic theory.
+
+The sustained study of the distribution of links on the Internet and the Web is relatively new--only a few years old. There is significant theoretical work in a field of mathematics called graph theory, or network topology, on power law distributions in networks, on skew distributions that are not pure power law, and on the mathematically related small-worlds phenomenon in networks. The basic intuition is that, if indeed a tiny minority of sites gets a large number of links, and the vast majority gets few or no links, it will be very difficult to be seen unless you are on the highly visible site. Attention patterns make the open network replicate mass media. While explaining this literature over the next few pages, I show that what is in fact emerging is very different from, and more attractive than, the mass-media-dominated public sphere.
+
+While the Internet, the Web, and the blogosphere are indeed exhibiting much greater order than the freewheeling, "everyone a pamphleteer" image would suggest, this structure does not replicate a mass-media model. We are seeing a newly shaped information environment, where indeed few are read by many, but clusters of moderately read sites provide platforms for vastly greater numbers of speakers than were heard in the mass-media environment. Filtering, accreditation, synthesis, and salience are created through a system of peer review by information affinity groups, topical or interest based. These groups filter the observations and opinions of an enormous range of people, and transmit those that pass local peer review to broader groups and ultimately to the polity more broadly, without recourse to market-based points of control over the information flow. Intense interest and engagement by small groups that share common concerns, rather than lowest-common-denominator interest in wide groups that are largely alienated from each other, is what draws attention to statements and makes them more visible. This makes the emerging networked public sphere more responsive to intensely held concerns of a much wider swath of the population than the mass media were capable of seeing, and creates a communications process that is more resistant to corruption by money.
+
+In what way, first, is attention concentrated on the Net? We are used to seeing probability distributions that describe social phenomena following a Gaussian distribution: where the mean and the median are the same and the ,{[pg 243]}, probabilities fall off symmetrically as we describe events that are farther from the median. This is the famous Bell Curve. Some phenomena, however, observed initially in Pareto's work on income distribution and Zipf's on the probability of the use of English words in text and in city populations, exhibit completely different probability distributions. These distributions have very long "tails"--that is, they are characterized by a very small number of very high-yield events (like the number of words that have an enormously high probability of appearing in a randomly chosen sentence, like "the" or "to") and a very large number of events that have a very low probability of appearing (like the probability that the word "probability" or "blogosphere" will appear in a randomly chosen sentence). To grasp intuitively how unintuitive such distributions are to us, we could think of radio humorist Garrison Keillor's description of the fictitious Lake Wobegon, where "all the children are above average." That statement is amusing because we assume intelligence follows a normal distribution. If intelligence were distributed according to a power law, most children there would actually be below average--the median is well below the mean in such distributions (see figure 7.4). Later work by Herbert Simon in the 1950s, and by Derek de Solla Price in the 1960s, on cumulative advantage in scientific citations~{ Derek de Solla Price, "Networks of Scientific Papers," Science 149 (1965): 510; Herbert Simon, "On a Class of Skew Distribution Functions," Biometrika 42 (1955): 425-440, reprinted in Herbert Simon, Models of Man: Social and Rational; Mathematical Essays on Rational Human Behavior in a Social Setting (New York: Garland, 1957).
}~ presaged an emergence at the end of the 1990s of intense interest in power law characterizations of degree distributions, or the number of connections any point in a network has to other points, in many kinds of networks--from networks of neurons and axons, to social networks and communications and information networks.
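
The gap between mean and median in such distributions can be made concrete in a few lines of code. The sketch below is illustrative only (it is not from the book): it computes the analytic mean and median of a Pareto power law with minimum value 1, using the standard closed-form expressions mean = alpha/(alpha - 1) and median = 2^(1/alpha).

```python
# Illustrative sketch: in a Pareto power-law distribution the median
# falls well below the mean -- the sense in which "most children would
# actually be below average" if intelligence followed a power law.

def pareto_mean(alpha: float) -> float:
    """Mean of a Pareto(x_min=1, alpha) distribution (finite for alpha > 1)."""
    return alpha / (alpha - 1)

def pareto_median(alpha: float) -> float:
    """Median of a Pareto(x_min=1, alpha) distribution."""
    return 2 ** (1 / alpha)

for alpha in (1.5, 2.1):
    m, med = pareto_mean(alpha), pareto_median(alpha)
    print(f"alpha={alpha}: mean={m:.2f}, median={med:.2f}")
```

For either exponent the median sits far below the mean; in a Gaussian distribution the two would coincide.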
+
+The Internet and the World Wide Web offered a testable setting, where large-scale investigation could be done automatically by studying link structure (who is linked-in to and by whom, who links out and to whom, how these are related, and so on), and where the practical applications of better understanding were easily articulated--such as the design of better search engines. In 1999, Albert-László Barabasi and Reka Albert published a paper in /{Science}/ showing that a variety of networked phenomena have a predictable topology: The distribution of links into and out of nodes on the network follows a power law. There is a very low probability that any vertex, or node, in the network will be very highly connected to many others, and a very large probability that a very large number of nodes will be connected only very loosely, or perhaps not at all. Intuitively, a lot of Web sites link to information that is located on Yahoo!, while very few link to any randomly selected individual's Web site. Barabasi and Albert hypothesized a mechanism ,{[pg 244]},
+
+{won_benkler_7_4.png "Figure 7.4: Illustration of How Normal Distribution and Power Law Distribution Would Differ in Describing How Many Web Sites Have Few or Many Links Pointing at Them" }http://www.jus.uio.no/sisu
+
+for this distribution to evolve, which they called "preferential attachment." That is, new nodes prefer to attach to already well-attached nodes. Any network that grows through the addition of new nodes, and in which nodes preferentially attach to nodes that are already well attached, will eventually exhibit this distribution.~{ Albert-Laszlo Barabasi and Reka Albert, "Emergence of Scaling in Random Networks," Science 286 (1999): 509. }~ In other words, the rich get richer. At the same time, two computer scientists, Lada Adamic and Bernardo Huberman, published a study in Nature that identified the presence of power law distributions in the number of Web pages in a given site. They hypothesized not that new nodes preferentially attach to old ones, but that each site has an intrinsically different growth rate, and that new sites are formed at an exponential rate.~{ Bernardo Huberman and Lada Adamic, "Growth Dynamics of the World Wide Web," Nature 401 (1999): 131. }~ The intrinsically different growth rates could be interpreted as quality, interest, or perhaps investment of money in site development and marketing. They showed that on these assumptions, a power law distribution would emerge. Since the publication of these articles we have seen an explosion of theoretical and empirical literature on graph theory, or the structure and growth of networks, and particularly on link structure in the World Wide Web. It has consistently shown that the number of links into and out of Web sites follows power laws and that the exponent (the exponential ,{[pg 245]}, factor that determines that the drop-off between the most linked-to site and the second most linked-to site, and the third, and so on, will be so dramatically rapid, and how rapid it is) for inlinks is roughly 2.1 and for outlinks 2.7.
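
A toy simulation can show how the "rich get richer" dynamic generates this skew. The sketch below is not Barabasi and Albert's code, and it simplifies their model to a single link per new node, but it follows the preferential-attachment rule: each newcomer links to an existing node with probability proportional to that node's current degree.

```python
import random

def preferential_attachment(n_nodes: int, seed: int = 0) -> list[int]:
    """Grow a network one node at a time; each new node links to one
    existing node chosen with probability proportional to its current
    degree (a simplified Barabasi-Albert 'rich get richer' rule).
    Returns the final degree of every node."""
    rng = random.Random(seed)
    # Start from two connected nodes.  `targets` holds one entry per
    # link endpoint, so uniform sampling from it is degree-weighted.
    degree = [1, 1]
    targets = [0, 1]
    for new in range(2, n_nodes):
        chosen = rng.choice(targets)
        degree.append(1)
        degree[chosen] += 1
        targets.extend([new, chosen])
    return degree

degrees = preferential_attachment(5000)
print("largest hub:", max(degrees), "typical node:", sorted(degrees)[2500])
```

Run for a few thousand nodes, this yields a handful of heavily linked hubs while most nodes keep the single link they arrived with: the qualitative shape of the power law distributions described above.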
+
+If one assumes that most people read things by either following links, or by using a search engine, like Google, that heavily relies on counting inlinks to rank its results, then it is likely that the number of visitors to a Web page, and more recently, the number of readers of blogs, will follow a similarly highly skew distribution. The implication for democracy that comes most immediately to mind is dismal. While, as the Supreme Court noted with enthusiasm, on the Internet everyone can be a pamphleteer or have their own soapbox, the Internet does not, in fact, allow individuals to be heard in ways that are substantially more effective than standing on a soapbox in a city square. Many Web pages and blogs will simply go unread, and will not contribute to a more engaged polity. This argument was most clearly made in Barabasi's popularization of his field, Linked: "The most intriguing result of our Web-mapping project was the complete absence of democracy, fairness, and egalitarian values on the Web. We learned that the topology of the Web prevents us from seeing anything but a mere handful of the billion documents out there."~{ Albert-Laszlo Barabasi, Linked: How Everything Is Connected to Everything Else and What It Means for Business, Science, and Everyday Life (New York: Penguin, 2003), 56-57. One unpublished quantitative study showed specifically that the skewness holds for political Web sites related to various hot-button political issues in the United States--like abortion, gun control, or the death penalty. A small fraction of the Web sites discussing these issues account for the large majority of links into them. Matthew Hindman, Kostas Tsioutsiouliklis, and Judy Johnson, " `Googlearchy': How a Few Heavily Linked Sites Dominate Politics on the Web," July 28, 2003, http://www.scholar.google.com/url?sa=U&q=http://www.princeton.edu/~mhindman/googlearchy-hindman.pdf. }~
+
+The stories offered in this chapter and throughout this book present a puzzle for this interpretation of the power law distribution of links in the network as re-creating a concentrated medium. The success of Nick Davis's site, BoycottSBG, would be a genuine fluke. The probability that such a site could be established on a Monday, and by Friday of the same week would have had three hundred thousand unique visitors and would have orchestrated a successful campaign, is so small as to be negligible. The probability that a completely different site, StopSinclair.org, of equally network-obscure origins, would be established on the very same day and also successfully catch the attention of enough readers to collect 150,000 signatures on a petition to protest Sinclair's broadcast, rather than wallowing undetected in the mass of self-published angry commentary, is practically insignificant. And yet, intuitively, it seems unsurprising that a large population of individuals who are politically mobilized on the same side of the political map and share a political goal in the public sphere--using a network that makes it trivially simple to set up new points of information and coordination, tell each other about them, and reach and use them from anywhere--would, in fact, inform each other and gather to participate in a political demonstration. We saw ,{[pg 246]}, that the boycott technique that Davis had designed his Web site to facilitate was discussed on TalkingPoints--a site near the top of the power law distribution of political blogs--but that it was a proposal by an anonymous individual who claimed to know what makes local affiliates tick, not of TalkingPoints author Josh Marshall. By midweek, after initially stoking the fires of support for Davis's boycott, Marshall had stepped back, and Davis's site became the clearing point for reports, tactical conversations, and mobilization. 
Davis not only was visible, but rather than being drowned out by the high-powered transmitter, TalkingPoints, his relationship with the high-visibility site was part of his success. This story alone cannot, of course, "refute" the power law distribution of network links, nor is it offered as a refutation. It does, however, provide a context for looking more closely at the emerging understanding of the topology of the Web, and how it relates to the fears of concentration of the Internet, and the problems of information overload, discourse fragmentation, and the degree to which money will come to dominate such an unstructured and wide-open environment. It suggests a more complex story than simply "the rich get richer" and "you might speak, but no one will hear you." In this case, the topology of the network allowed rapid emergence of a position, its filtering and synthesis, and its rise to salience. Network topology helped facilitate all these components of the public sphere, rather than undermined them. We can go back to the mathematical and computer science literature to begin to see why.
+
+Within two months of the publication of Barabasi and Albert's article, Adamic and Huberman had published a letter arguing that, if Barabasi and Albert were right about preferential attachment, then older sites should systematically be among those that are at the high end of the distribution, while new ones will wallow in obscurity. The older sites are already attached, so newer sites would preferentially attach to the older sites. This, in turn, would make them even more attractive when a new crop of Web sites emerged and had to decide which sites to link to. In fact, however, Adamic and Huberman showed that there is no such empirical correlation among Web sites. They argued that their mechanism--that nodes have intrinsic growth rates that are different--better describes the data. In their response, Barabasi and Albert showed that on their data set, the older nodes are actually more connected in a way that follows a power law, but only on average--that is to say, the average number of connections of a class of older nodes related to the average number of links to a younger class of nodes follows a power law. This argued that their basic model was sound, but ,{[pg 247]}, required that they modify their equations to include something similar to what Huberman and Adamic had proposed--an intrinsic growth factor for each node, as well as the preferential connection of new nodes to established nodes.~{ Lada Adamic and Bernardo Huberman, "Power Law Distribution of the World Wide Web," Science 287 (2000): 2115. }~ This modification is important because it means that not every new node is doomed to be unread relative to the old ones, only that on average they are much less likely to be read. It makes room for rapidly growing new nodes, but does not theorize what might determine the rate of growth. It is possible, for example, that money could determine growth rates: In order to be seen, new sites or statements would have to spend money to gain visibility and salience.
As the BoycottSBG and Diebold stories suggest, however, as does the Lott story described later in this chapter, there are other ways of achieving immediate salience. In the case of BoycottSBG, it was providing a solution that resonated with the political beliefs of many people and was useful to them for their expression and mobilization. Moreover, the continued presence of preferential attachment suggests that noncommercial Web sites that are already highly connected because of the time they were introduced (like the Electronic Frontier Foundation), because of their internal attractiveness to large communities (like Slashdot), or because of their salience to the immediate interests of users (like BoycottSBG), will have persistent visibility even in the face of large infusions of money by commercial sites. Developments in network topology theory and its relationship to the structure of the empirically mapped real Internet offer a map of the networked information environment that is indeed quite different from the naïve model of "everyone a pamphleteer." To the limited extent that these findings have been interpreted for political meaning, they have been seen as a disappointment--the real world, as it turns out, does not measure up to anything like that utopia. However, that is the wrong baseline. There never has been a complex, large modern democracy in which everyone could speak and be heard by everyone else. The correct baseline is the one-way structure of the commercial mass media. The normatively relevant descriptive questions are whether the networked public sphere provides broader intake, participatory filtering, and relatively incorruptible platforms for creating public salience. I suggest that it does. Four characteristics of network topology structure the Web and the blogosphere in an ordered, but nonetheless meaningfully participatory form. 
First, at a microlevel, sites cluster--in particular, topically and interest-related sites link much more heavily to each other than to other sites. Second, at a macrolevel, the Web and the blogosphere have ,{[pg 248]}, giant, strongly connected cores--"areas" where 20-30 percent of all sites are highly and redundantly interlinked; that is, tens or hundreds of millions of sites, rather than ten, fifty, or even five hundred television stations. That pattern repeats itself in smaller subclusters as well. Third, as the clusters get small enough, the obscurity of sites participating in the cluster diminishes, while the visibility of the superstars remains high, forming a filtering and transmission backbone for universal intake and local filtering. Fourth and finally, the Web exhibits "small-world" phenomena, making most Web sites reachable through shallow paths from most other Web sites. I will explain each of these below, as well as how they interact to form a reasonably attractive image of the networked public sphere.
+
+First, links are not smoothly distributed throughout the network. Sites cluster into densely linked "regions" or communities of interest. Computer scientists have looked at clustering from the perspective of what topical or other correlated characteristics describe these relatively high-density interconnected regions of nodes. What they found was perhaps entirely predictable from an intuitive perspective of the network users, but important as we try to understand the structure of information flow on the Web. Web sites cluster into topical and social/organizational clusters. Early work done in the IBM Almaden Research Center on how link structure could be used as a search technique showed that by mapping densely interlinked sites without looking at content, one could find communities of interest that identify very fine-grained topical connections, such as Australian fire brigades or Turkish students in the United States.~{ Ravi Kumar et al., "Trawling the Web for Emerging Cyber-Communities," WWW8/Computer Networks 31, nos. 11-16 (1999): 1481-1493. }~ A later study out of the NEC Research Institute more formally defined the interlinking that would identify a "community" as one in which the nodes were more densely connected to each other than they were to nodes outside the cluster by some amount. The study also showed that topically connected sites meet this definition. For instance, sites related to molecular biology clustered with each other--in the sense of being more interlinked with each other than with off-topic sites--as did sites about physics and black holes.~{ Gary W. Flake et al., "Self-Organization and Identification of Web Communities," IEEE Computer 35, no. 3 (2002): 66-71. Another paper that showed significant internal citations within topics was Soumen Chakrabarti et al., "The Structure of Broad Topics on the Web," WWW2002, Honolulu, HI, May 7-11, 2002.
}~ Lada Adamic and Natalie Glance recently showed that liberal political blogs and conservative political blogs densely interlink with each other, mostly pointing within each political leaning but with about 15 percent of links posted by the most visible sites also linking across the political divide.~{ Lada Adamic and Natalie Glance, "The Political Blogosphere and the 2004 Election: Divided They Blog," March 1, 2005, http://www.blogpulse.com/papers/2005/AdamicGlanceBlogWWW.pdf. }~ Physicists analyze clustering as the property of transitivity in networks: the increased probability that if node A is connected to node B, and node B is connected to node C, that node A also will be connected to node C, forming a triangle. Newman has shown that ,{[pg 249]}, the clustering coefficient of a network that exhibits power law distribution of connections or degrees--that is, its tendency to cluster--is related to the exponent of the distribution. At low exponents, below 2.333, the clustering coefficient becomes high. This explains analytically the empirically observed high level of clustering on the Web, whose exponent for inlinks has been empirically shown to be 2.1.~{ M.E.J. Newman, "The Structure and Function of Complex Networks," Society for Industrial and Applied Mathematics Review 45, section 4.2.2 (2003): 167-256; S. N. Dorogovtsev and J.F.F. Mendes, Evolution of Networks: From Biological Nets to the Internet and WWW (Oxford: Oxford University Press, 2003). }~
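
The transitivity property the physicists measure can be stated directly in code. The following is a minimal illustrative sketch (the node names and "communities" are invented for the example): it counts what fraction of connected triples A-B, B-C close into triangles with an A-C link, which is the global clustering coefficient.

```python
def transitivity(edges):
    """Global clustering coefficient: the fraction of connected triples
    (A-B and B-C) that close into a triangle (A-C also linked)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    triples = closed = 0
    for center, nbrs in adj.items():
        nbrs = sorted(nbrs)
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                triples += 1                      # a connected triple
                if nbrs[j] in adj[nbrs[i]]:
                    closed += 1                   # ...that forms a triangle
    return closed / triples if triples else 0.0

# Two topical "communities" joined by a single bridge link: clustering
# stays high even though the graph as a whole is sparse.
community_a = [("a1", "a2"), ("a2", "a3"), ("a1", "a3")]
community_b = [("b1", "b2"), ("b2", "b3"), ("b1", "b3")]
bridge = [("a1", "b1")]
print(transitivity(community_a + community_b + bridge))  # → 0.6
```

In a network of topical clusters, most open triples sit inside a densely linked community and therefore close, which is the analytic signature of the clustering described above.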
+
+Second, at a macrolevel and in smaller subclusters, the power law distribution does not resolve into everyone being connected in a mass-media model relationship to a small number of major "backbone" sites. As early as 1999, Broder and others showed that a very large number of sites occupy what has been called a giant, strongly connected core (SCC).~{ This structure was first described by Andrei Broder et al., "Graph Structure of the Web," paper presented at www9 conference (1999), http://www.almaden.ibm.com/webfountain/resources/GraphStructureintheWeb.pdf. It has since been further studied, refined, and substantiated in various studies. }~ That is, nodes within this core are heavily linked and interlinked, with multiple redundant paths among them. Empirically, as of 2001, this core comprised about 28 percent of nodes. At the same time, about 22 percent of nodes had links into the core, but were not linked to from it--these may have been new sites, or relatively lower-interest sites. The same proportion of sites was linked to from the core, but did not link back to it--these might have been ultimate depositories of documents, or internal organizational sites. Finally, roughly the same proportion of sites occupied "tendrils" or "tubes" that cannot reach, or be reached from, the core. Tendrils can be reached from the group of sites that link into the strongly connected core or can reach into the group that can be connected to from the core. Tubes connect the inlinking sites to the outlinked sites without going through the core. About 10 percent of sites are entirely isolated. This structure has been called a "bow tie"--with a large core and equally sized in- and outflows to and from that core (see figure 7.5).
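The bow-tie decomposition Broder and his coauthors describe can be computed from raw link data with nothing more than breadth-first search. The sketch below is illustrative only; the five-site link map and its node names are invented, and real studies use far more scalable strongly-connected-component algorithms.

```python
from collections import deque

def reachable(adj, start):
    """All nodes reachable from `start` by following directed links."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def bow_tie(adj):
    """Split a directed link graph into Broder's bow-tie components:
    the strongly connected core (SCC), IN (reaches the core), OUT
    (reachable from the core), and the rest (tendrils, tubes, islands)."""
    nodes = set(adj) | {n for outs in adj.values() for n in outs}
    rev = {n: set() for n in nodes}          # reversed links
    for src, outs in adj.items():
        for dst in outs:
            rev[dst].add(src)
    # A node's SCC is everything it can reach that can also reach it
    # back; keep the largest such component as the core.
    core = max((reachable(adj, n) & reachable(rev, n) for n in nodes),
               key=len)
    seed = next(iter(core))
    out_part = reachable(adj, seed) - core   # reachable from the core
    in_part = reachable(rev, seed) - core    # can reach the core
    rest = nodes - core - in_part - out_part
    return core, in_part, out_part, rest

links = {"a": {"core1"}, "core1": {"core2"},
         "core2": {"core1", "b"}, "b": set(), "island": set()}
print(bow_tie(links))
```

Here "a" lands in IN, "b" in OUT, and "island" in the disconnected remainder, mirroring in miniature the roughly 28/22/22 percent proportions reported for the Web of 2001.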
+
+One way of interpreting this structure as counterdemocratic is to say: This means that half of all Web sites are not reachable from the other half--the "IN," "tendrils," and disconnected portions cannot be reached from any of the sites in SCC and OUT. This is indeed disappointing from the "everyone a pamphleteer" perspective. On the other hand, one could say that half of all Web pages, the SCC and OUT components, are reachable from IN and SCC. That is, hundreds of millions of pages are reachable from hundreds of millions of potential entry points. This represents a very different intake function and freedom to speak in a way that is potentially accessible to others than a five-hundred-channel, mass-media model. More significant yet, Dill and others showed that the bow tie structure appears not only at the level of the Web as a whole, but repeats itself within clusters. That is, the Web ,{[pg 250]},
+
+{won_benkler_7_5.png "Figure 7.5: Bow Tie Structure of the Web" }http://www.jus.uio.no/sisu
+
+appears to show characteristics of self-similarity, up to a point--links within clusters also follow a power law distribution and cluster, and have a bow tie structure of similar proportions to that of the overall Web. Tying together the two points about clustering and the presence of a strongly connected core, Dill and his coauthors showed that what they called "thematically unified clusters," such as geographically or content-related groupings of Web sites, themselves exhibit strongly connected cores that provide a thematically defined navigational backbone to the Web. It is not that one or two major sites were connected to by all thematically related sites; rather, as at the network level, on the order of 25-30 percent were highly interlinked, and another 25 percent were reachable from within the strongly connected core.~{ Dill et al., "Self-Similarity in the Web" (San Jose, CA: IBM Almaden Research Center, 2001); S. N. Dorogovtsev and J.F.F. Mendes, Evolution of Networks. }~ Moreover, when the data was pared down to treat only the home page, rather than each Web page within a single site, as a distinct "node" (that is, everything that came under www.foo.com was treated as one node, as opposed to the usual method, where www.foo.com, www.foo.com/nonsuch, and www.foo.com/somethingelse are each treated as a separate node), fully 82 percent of the nodes were in the strongly connected core, and an additional 13 percent were reachable from the SCC as the OUT group.
+
+Third, another finding of Web topology and critical adjustment to the ,{[pg 251]}, basic Barabasi and Albert model is that when the topically or organizationally related clusters become small enough--on the order of hundreds or even low thousands of Web pages--they no longer follow a pure power law distribution. Instead, they follow a distribution that still has a very long tail--these smaller clusters still have a few genuine "superstars"--but the body of the distribution is substantially more moderate: beyond the few superstars, the shape of the link distribution looks a little more like a normal distribution. Instead of continuing to drop off exponentially, many sites exhibit a moderate degree of connectivity. Figure 7.6 illustrates how a hypothetical distribution of this sort would differ both from the normal and power law distributions illustrated in figure 7.4. David Pennock and others, in their paper describing these empirical findings, hypothesized a uniform component added to the purely exponential original Barabasi and Albert model. This uniform component could be random (as they modeled it), but might also stand for quality of materials, or level of interest in the site by participants in the smaller cluster. At large numbers of nodes, the exponent dominates the uniform component, accounting for the pure power law distribution when looking at the Web as a whole, or even at broadly defined topics. In smaller clusters of sites, however, the uniform component begins to exert a stronger pull on the distribution. The exponent keeps the long tail intact, but the uniform component accounts for a much more moderate body. Many sites will have dozens, or even hundreds of links. The Pennock paper reduced the number of sites studied by looking only at the sites of certain kinds of organizations--universities or public companies. Chakrabarti and others later confirmed this finding for topical clusters as well. That is, when they looked at small clusters of topically related sites, the distribution of links still has a long tail for a small number of highly connected sites in every topic, but the body of the distribution diverges from a power law distribution, and represents a substantial proportion of sites that are moderately linked.~{ Soumen Chakrabarti et al., "The Structure of Broad Topics on the Web," WWW2002, Honolulu, HI, May 7-11, 2002. }~ Even more specifically, Daniel Drezner and Henry Farrell reported that the Pennock modification better describes the distribution of links to and among political blogs.~{ Daniel W. Drezner and Henry Farrell, "The Power and Politics of Blogs" (July 2004), http://www.danieldrezner.com/research/blogpaperfinal.pdf. }~
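Pennock's suggested modification is easy to simulate. The sketch below grows a network in which each new link attaches preferentially (rich-get-richer) with probability 1 - alpha and uniformly at random with probability alpha; the function and parameter names are my own, not the paper's, and alpha = 0 recovers a pure Barabasi-Albert-style process.

```python
import random

def grow_network(n, m=3, alpha=0.5, seed=0):
    """Grow an n-node network; each new node sends m links.  With
    probability alpha a link lands on a uniformly random existing
    node (the uniform component); otherwise it lands on a node chosen
    in proportion to its current indegree (preferential attachment)."""
    rng = random.Random(seed)
    indegree = {i: 1 for i in range(m)}   # small seed group, one link each
    endpoints = list(range(m))            # one entry per inbound link, so a
                                          # uniform draw here is preferential
    for new in range(m, n):
        existing = list(indegree)         # snapshot before adding `new`
        indegree[new] = 0
        for _ in range(m):
            if rng.random() < alpha:
                dst = rng.choice(existing)    # uniform component
            else:
                dst = rng.choice(endpoints)   # rich-get-richer component
            indegree[dst] += 1
            endpoints.append(dst)
    return indegree

links = grow_network(2000, m=3, alpha=0.5)
print(max(links.values()), sorted(links.values())[len(links) // 2])
```

With alpha near zero the indegree distribution is sharply skewed; raising alpha leaves a few superstars in place but thickens the moderately linked body, matching the divergence from a pure power law that Pennock and others observed in small clusters.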
+
+These findings are critical to the interpretation of the distribution of links as it relates to human attention and communication. There is a big difference between a situation where no one is looking at any of the sites on the low end of the distribution, because everyone is looking only at the superstars, and a situation where dozens or hundreds of sites at the low end are looking at each other, as well as at the superstars. The former leaves all but the very ,{[pg 252]},
+
+{won_benkler_7_6.png "Figure 7.6: Illustration of a Skew Distribution That Does Not Follow a Power Law" }http://www.jus.uio.no/sisu
+
+few languishing in obscurity, with no one to look at them. The latter, as explained in more detail below, offers a mechanism for topically related and interest-based clusters to form a peer-reviewed system of filtering, accreditation, and salience generation. It gives the long tail on the low end of the distribution heft (and quite a bit of wag).
+
+The fourth and last piece of mapping the network as a platform for the public sphere is called the "small-worlds effect." Based on Stanley Milgram's sociological experiment and on mathematical models later proposed by Duncan Watts and Steven Strogatz, both theoretical and empirical work has shown that the number of links that must be traversed from any point in the network to any other point is relatively small.~{ D. J. Watts and S. H. Strogatz, "Collective Dynamics of `Small World' Networks," Nature 393 (1998): 440-442; D. J. Watts, Small Worlds: The Dynamics of Networks Between Order and Randomness (Princeton, NJ: Princeton University Press, 1999). }~ Fairly shallow "walks"--that is, clicking through three or four layers of links--allow a user to cover a large portion of the Web.
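The small-worlds claim is, concretely, a claim about shortest path lengths, which breadth-first search computes directly. The tiny hub-and-spoke link map below is invented for illustration.

```python
from collections import deque

def click_distances(adj, start):
    """Number of links ("clicks") from `start` to every reachable page,
    by breadth-first search over directed links."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

# A highly linked hub keeps every page within a few clicks of every other.
web = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"},
       "c": {"hub", "d"}, "d": set()}
print(click_distances(web, "a")["d"])  # 3: a -> hub -> c -> d
```

On the Web itself, the presence of highly connected nodes is what keeps these distances short even across hundreds of millions of pages.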
+
+What is true of the Web as a whole turns out to be true of the blogosphere as well, and even of the specifically political blogosphere. Early 2003 saw increasing conversations in the blogosphere about the emergence of an "A-list," a number of highly visible blogs that were beginning to seem more like mass media than like blogs. In two blog-based studies, Clay Shirky and then Jason Kottke published widely read explanations of how the blogosphere ,{[pg 253]}, was simply exhibiting the power law characteristics common on the Web.~{ Clay Shirky, "Power Law, Weblogs, and Inequality" (February 8, 2003), http://www.shirky.com/writings/powerlaw_weblog.htm; Jason Kottke, "Weblogs and Power Laws" (February 9, 2003), http://www.kottke.org/03/02/weblogs-and-power-laws. }~ The emergence in 2003 of discussions of this sort in the blogosphere is, it turns out, hardly surprising. In a time-sensitive study also published in 2003, Kumar and others provided an analysis of the network topology of the blogosphere. They found that it was very similar to that of the Web as a whole--both at the macro- and microlevels. Interestingly, they found that the strongly connected core only developed after a certain threshold, in terms of total number of nodes, had been reached, and that it began to develop extensively only in 2001, reached about 20 percent of all blogs in 2002, and continued to grow rapidly. They also showed that what they called the "community" structure--the degree of clustering or mutual pointing within groups--was high, an order of magnitude higher than a random graph with a similar power law exponent would have generated. Moreover, the degree to which a cluster is active or inactive, highly connected or not, changes over time. In addition to time-insensitive superstars, there are also flare-ups of connectivity for sites depending on the activity and relevance of their community of interest. This latter observation is consistent with what we saw happen for BoycottSBG.com. Kumar and his collaborators explained these phenomena by the not-too-surprising claim that bloggers link to each other based on topicality--that is, their judgment of the quality and relevance of the materials--not only on the basis of how well connected they are already.~{ Ravi Kumar et al., "On the Bursty Evolution of Blogspace," Proceedings of WWW2003, May 20-24, 2003, http://www2003.org/cdrom/papers/refereed/p477/p477-kumar/p477-kumar.htm. }~
+
+This body of literature on network topology suggests a model for how order has emerged on the Internet, the World Wide Web, and the blogosphere. The networked public sphere allows hundreds of millions of people to publish whatever and whenever they please without disintegrating into an unusable cacophony, as the first-generation critics argued, and it filters and focuses attention without re-creating the highly concentrated model of the mass media that concerned the second-generation critique. We now know that the network at all its various layers exhibits a degree of order, in which some sites are vastly more visible than most. This order is loose enough, however, and exhibits a sufficient number of redundant paths from an enormous number of sites to another enormous number, that the effect is fundamentally different from that of the mass media, with its small number of commercial professional editors.
+
+Individuals and individual organizations cluster around topical, organizational, or other common features. At a sufficiently fine-grained degree of clustering, a substantial proportion of the clustered sites are moderately connected, ,{[pg 254]}, and each can therefore be a point of intake that will effectively transmit observations or opinions within and among the users of that topical or interest-based cluster. Because even in small clusters the distribution of links still has a long tail, these smaller clusters still include high-visibility nodes. These relatively high-visibility nodes can serve as points of transfer to larger clusters, acting as an attention backbone that transmits information among clusters. Subclusters within a general category--such as liberal and conservative blogs clustering within the broader cluster of political blogs--are also interlinked, though less densely than within-cluster connectivity. The higher-level or larger clusters again exhibit a similar feature, where higher-visibility nodes can serve as clearinghouses and connectivity points among clusters and across the Web. These are all highly connected with redundant links within a giant, strongly connected core--comprising more than a quarter of the nodes in any given level of cluster. The small-worlds phenomenon means that individual users who travel a small number of different links from similar starting points within a cluster cover large portions of the Web and can find diverse sites. By then linking to them on their own Web sites, or passing them along by e-mail or blog post, users provide multiple redundant paths, open to many other users, to and from most statements on the Web. High-visibility nodes amplify and focus attention on given statements, and in this regard, have greater power in the information environment they occupy. However, there is sufficient redundancy of paths through high-visibility nodes that no single node or small collection of nodes can control the flow of information in the core and around the Web. This is true both at the level of the cluster and at the level of the Web as a whole.
+
+The result is an ordered system of intake, filtering, and synthesis that can in theory emerge in networks generally, and empirically has been shown to have emerged on the Web. It does not depend on single points of control. It avoids the generation of a din through which no voice can be heard, as the fears of fragmentation predicted. And, while money may be useful in achieving visibility, the structure of the Web means that money is neither necessary nor sufficient to grab attention--because the networked information economy, unlike its industrial predecessor, does not offer simple points of dissemination and control for purchasing assured attention. What the network topology literature allows us to do, then, is to offer a richer, more detailed, and empirically supported picture of how the network can be a platform for the public sphere that is structured in a fundamentally different way than the mass-media model. The problem is approached ,{[pg 255]}, through a self-organizing principle, beginning with communities of interest on smallish scales, practices of mutual pointing, and the fact that, with freedom to choose what to see and who to link to, with some codependence among the choices of individuals as to whom to link, highly connected points emerge even at small scales, and continue to be replicated with ever-larger visibility as the clusters grow. Without forming or requiring a formal hierarchy, and without creating single points of control, each cluster generates a set of sites that offer points of initial filtering, in ways that are still congruent with the judgments of participants in the highly connected small cluster. The process is replicated at larger and more general clusters, to the point where positions that have been synthesized "locally" and "regionally" can reach Web-wide visibility and salience. It turns out that we are not intellectual lemmings. We do not use the freedom that the network has made possible to plunge into the abyss of incoherent babble. Instead, through iterative processes of cooperative filtering and "transmission" through the high-visibility nodes, the low-end thin tail turns out to be a peer-produced filter and transmission medium for a vastly larger number of speakers than was imaginable in the mass-media model.
+
+The effects of the topology of the network are reinforced by the cultural forms of linking, e-mail lists, and the writable Web. The network topology literature treats every page or site as a node. The emergence of the writable Web, however, allows each node to itself become a cluster of users and posters who, collectively, gain salience as a node. Slashdot is "a node" in the network as a whole, one that is highly linked and visible. Slashdot itself, however, is a highly distributed system for peer production of observations and opinions about matters that people who care about information technology and communications ought to care about. Some of the most visible blogs, like the dailyKos, are cooperative blogs with a number of authors. More important, the major blogs receive input--through posts or e-mails--from their users. Recall, for example, that the original discussion of a Sinclair boycott that would focus on local advertisers arrived on TalkingPoints through an e-mail comment from a reader. TalkingPoints regularly solicits and incorporates input from and research by its users. The cultural practice of writing to highly visible blogs with far greater ease than writing a letter to the editor and with looser constraints on what gets posted makes these nodes themselves platforms for the expression, filtering, and synthesis of observations and opinions. Moreover, as Drezner and Farrell have shown, blogs have developed cultural practices of mutual citation--when one blogger ,{[pg 256]}, finds a source by reading another, the practice is to link to the original blog, not only directly to the underlying source. Jack Balkin has argued that the culture of linking more generally and the "see for yourself" culture also significantly militate against fragmentation of discourse, because users link to materials they are commenting on, even in disagreement.
+
+Our understanding of the emerging structure of the networked information environment, then, provides the basis for a response to the family of criticisms of the first-generation claims that the Internet democratizes. Recall that these criticisms, rooted in the problem of information overload, or the Babel objection, revolved around three claims. The first claim was that the Internet would result in a fragmentation of public discourse. The clustering of topically related sites, such as politically oriented sites, and of communities of interest, the emergence of high-visibility sites that the majority of sites link to, and the practices of mutual linking show quantitatively and qualitatively what Internet users likely experience intuitively. While there is enormous diversity on the Internet, there are also mechanisms and practices that generate a common set of themes, concerns, and public knowledge around which a public sphere can emerge. Any given site is likely to be only a very small number of clicks away from a site that is visible from a very large number of other sites, and these form a backbone of common materials, observations, and concerns. All the findings of power law distribution of linking, clustering, and the presence of a strongly connected core, as well as the linking culture and "see for yourself," oppose the fragmentation prediction. Users self-organize to filter the universe of information that is generated in the network. This self-organization includes a number of highly salient sites that provide a core of common social and cultural experiences and knowledge that can provide the basis for a common public sphere, rather than a fragmented one.
+
+The second claim was that fragmentation would cause polarization. Because like-minded people would talk only to each other, they would tend to amplify their differences and adopt more extreme versions of their positions. Given that the evidence demonstrates there is no fragmentation, in the sense of a lack of a common discourse, it would be surprising to find higher polarization because of the Internet. Moreover, as Balkin argued, the fact that the Internet allows widely dispersed people with extreme views to find each other and talk is not a failure for the liberal public sphere, though it may present new challenges for the liberal state in constraining extreme action. Only polarization of discourse in society as a whole can properly be ,{[pg 257]}, considered a challenge to the attractiveness of the networked public sphere. However, the practices of linking, "see for yourself," or quotation of the position one is criticizing, and the widespread practice of examining and criticizing the assumptions and assertions of one's interlocutors actually point the other way, militating against polarization. A potential counterargument, however, was created by the most extensive recent study of the political blogosphere. In that study, Adamic and Glance showed that only about 10 percent of the links on any randomly selected political blog linked to a site across the ideological divide. The number increased for the "A-list" political blogs, which linked across the political divide about 15 percent of the time. The picture that emerges is one of distinct "liberal" and "conservative" spheres of conversation, with very dense links within, and more sparse links between them. On one interpretation, then, although there are salient sites that provide a common subject matter for discourse, actual conversations occur in distinct and separate spheres--exactly the kind of setting that Sunstein argued would lead to polarization. 
Two of the study's findings, however, suggest a different interpretation. The first was that there was still a substantial amount of cross-divide linking. One out of every six or seven links in the top sites on each side of the divide linked to the other side in roughly equal proportions (although conservatives tended to link slightly more overall--both internally and across the divide). The second was that, in an effort to see whether the more closely interlinked conservative sites therefore showed greater convergence "on message," Adamic and Glance found that greater interlinking did not correlate with less diversity in external (outside of the blogosphere) reference points.~{ Both of these findings are consistent with even more recent work by E. Hargittai, J. Gallo, and S. Zehnder, "Mapping the Political Blogosphere: An Analysis of Large-Scale Online Political Discussions," 2005, poster presented at the International Communication Association meetings, New York. }~ Together, these findings suggest a different interpretation. Each cluster of more or less like-minded blogs tended to read each other and quote each other much more than they did the other side. This operated not so much as an echo chamber as a forum for working out observations and interpretations internally, among like-minded people. Many of these initial statements or inquiries die because the community finds them uninteresting or fruitless. Some reach greater salience, and are distributed through the high-visibility sites throughout the community of interest. Issues that in this form reached political salience became topics of conversation and commentary across the divide. This is certainly consistent with both the BoycottSBG and Diebold stories, where we saw a significant early working out of strategies and observations before the criticism reached genuine political salience.
There would have been no point for opponents to link to and criticize early ideas kicked around within the community, ,{[pg 258]}, like opposing Sinclair station renewal applications. Only after a few days, when the boycott was crystallizing, would opponents have reason to point out the boycott effort and discuss it. This interpretation also well characterizes the way in which the Trent Lott story described later in this chapter began percolating on the liberal side of the blogosphere, but then migrated over to the center-right.
+
+The third claim was that money would reemerge as the primary source of power brokerage because of the difficulty of getting attention on the Net. Descriptively, it shares a prediction with the second-generation claims: Namely, that the Internet will centralize discourse. It differs in the mechanism of concentration: it will not be the result of an emergent property of large-scale networks, but rather of an old, tried-and-true way of capturing the political arena--money. But the peer-production model of filtering and discussion suggests that the networked public sphere will be substantially less corruptible by money. In the interpretation that I propose, filtering for the network as a whole is done as a form of nested peer-review decisions, beginning with the speaker's closest information affinity group. Consistent with what we have been seeing in more structured peer-production projects like /{Wikipedia}/, Slashdot, or free software, communities of interest use clustering and mutual pointing to peer produce the basic filtering mechanism necessary for the public sphere to be effective and avoid being drowned in the din of the crowd. The nested structure of the Web, whereby subclusters form relatively dense higher-level clusters, which then again combine into even higher-level clusters, and in each case, have a number of high-end salient sites, allows for the statements that pass these filters to become globally salient in the relevant public sphere. This structure, which describes the analytic and empirical work on the Web as a whole, fits remarkably well as a description of the dynamics we saw in looking more closely at the success of the boycott on Sinclair, as well as the successful campaign to investigate and challenge Diebold's voting machines.
+
+The peer-produced structure of the attention backbone suggests that money is neither necessary nor sufficient to attract attention in the networked public sphere (although nothing suggests that money has become irrelevant to political attention given the continued importance of mass media). It renders less surprising Howard Dean's strong campaign for the Democratic presidential primaries in 2003 and the much more stable success of MoveOn.org since the late 1990s. These suggest that attention on the network has more to do with mobilizing the judgments, links, and cooperation ,{[pg 259]}, of large bodies of small-scale contributors than with applying large sums of money. There is no obvious broadcast station that one can buy in order to assure salience. There are, of course, the highly visible sites, and they do offer a mechanism of getting your message to large numbers of people. However, the degree of engaged readership, interlinking, and clustering suggests that, in fact, being exposed to a certain message in one or a small number of highly visible places accounts for only a small part of the range of "reading" that gets done. More significantly, it suggests that reading, as opposed to having a conversation, is only part of what people do in the networked environment. In the networked public sphere, receiving information or getting out a finished message are only parts, and not necessarily the most important parts, of democratic discourse. The central desideratum of a political campaign that is rooted in the Internet is the capacity to engage users to the point that they become effective participants in a conversation and an effort; one that they have a genuine stake in and that is linked to a larger, society-wide debate. This engagement is not easily purchased, nor is it captured by the concept of a well-educated public that receives all the information it needs to be an informed citizenry. 
Instead, it is precisely the varied modes of participation in small-, medium-, and large-scale conversations, with varied but sustained degrees of efficacy, that make the public sphere of the networked environment different, and more attractive, than was the mass-media-based public sphere.
+
+The networked public sphere is not only more resistant to control by money, but it is also less susceptible to the lowest-common-denominator orientation that the pursuit of money often leads mass media to adopt. Because communication in peer-produced media starts from an intrinsic motivation--writing or commenting about what one cares about--it begins with the opposite of lowest common denominator. It begins with what irks you, the contributing peer, individually, the most. This is, in the political world, analogous to Eric Raymond's claim that every free or open-source software project begins with programmers with an itch to scratch--something directly relevant to their lives and needs that they want to fix. The networked information economy, which makes it possible for individuals alone and in cooperation with others to scour the universe of politically relevant events, to point to them, and to comment and argue about them, follows a similar logic. This is why one freelance writer with lefty leanings, Russ Kick, is able to maintain a Web site, The Memory Hole, with documents that he gets by filing Freedom of Information Act requests. In April ,{[pg 260]}, 2004, Kick was the first to obtain the U.S. military's photographs of the coffins of personnel killed in Iraq being flown home. No mainstream news organization had done so, but many published the photographs almost immediately after Kick had obtained them. Like free software, like Davis and the bloggers who participated in the debates over the Sinclair boycott, or the students who published the Diebold e-mails, the decision of what to publish does not start from a manager's or editor's judgment of what would be relevant and interesting to many people without being overly upsetting to too many others. It starts with the question: What do I care about most now?
+
+To conclude, we need to consider the attractiveness of the networked public sphere not from the perspective of the mid-1990s utopianism, but from the perspective of how it compares to the actual media that have dominated the public sphere in all modern democracies. The networked public sphere provides an effective nonmarket alternative for intake, filtering, and synthesis outside the market-based mass media. This nonmarket alternative can attenuate the influence over the public sphere that can be achieved through control over, or purchase of control over, the mass media. It offers a substantially broader capture basin for intake of observations and opinions generated by anyone with a stake in the polity, anywhere. It appears to have developed a structure that allows for this enormous capture basin to be filtered, synthesized, and made part of a polity-wide discourse. This nested structure of clusters of communities of interest, typified by steadily increasing visibility of superstar nodes, allows for both the filtering and salience to climb up the hierarchy of clusters, but offers sufficient redundant paths and interlinking to avoid the creation of a small set of points of control where power can be either directly exercised or bought.
+
+There is, in this story, an enormous degree of contingency and factual specificity. That is, my claims on behalf of the networked information economy as a platform for the public sphere are not based on general claims about human nature, the meaning of liberal discourse, context-independent efficiency, or the benevolent nature of the technology we happen to have stumbled across at the end of the twentieth century. They are instead based on, and depend on the continued accuracy of, a description of the economics of fabrication of computers and network connections, and a description of the dynamics of linking in a network of connected nodes. As such, my claim is not that the Internet inherently liberates. I do not claim that commons-based production of information, knowledge, and culture will win out by ,{[pg 261]}, some irresistible progressive force. That is what makes the study of the political economy of information, knowledge, and culture in the networked environment directly relevant to policy. The literature on network topology suggests that, as long as there are widely distributed capabilities to publish, link, and advise others about what to read and link to, networks enable intrinsic processes that allow substantial ordering of the information. The pattern of information flow in such a network is more resistant to the application of control or influence than was the mass-media model. But things can change. Google could become so powerful on the desktop, in the e-mail utility, and on the Web, that it will effectively become a supernode that will indeed raise the prospect of a reemergence of a mass-media model. Then the politics of search engines, as Lucas Introna and Helen Nissenbaum called it, become central. The zeal to curb peer-to-peer file sharing of movies and music could lead to a substantial redesign of computing equipment and networks, to a degree that would make it harder for end users to exchange information of their own making.
Understanding what we will lose if such changes indeed warp the topology of the network, and through it the basic structure of the networked public sphere, is precisely the object of this book as a whole. For now, though, let us say that the networked information economy as it has developed to this date has a capacity to take in, filter, and synthesize observations and opinions from a population that is orders of magnitude larger than the population that was capable of being captured by the mass media. It has done so without re-creating identifiable and reliable points of control and manipulation that would replicate the core limitation of the mass-media model of the public sphere--its susceptibility to the exertion of control by its regulators, owners, or those who pay them.
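The topology dynamics this passage invokes can be illustrated with a minimal preferential-attachment simulation. This is a standard toy model, not one taken from the text: each new node links to an existing node with probability proportional to that node's current degree, which is enough to produce a few highly visible "superstar" nodes while the typical node keeps only a link or two.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, seed=42):
    """Grow a network in which each new node links to one existing node
    chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    # 'ends' holds one entry per edge endpoint, so sampling uniformly
    # from it is exactly degree-proportional sampling.
    edges = [(0, 1)]
    ends = [0, 1]
    for new in range(2, n_nodes):
        target = rng.choice(ends)
        edges.append((new, target))
        ends.extend([new, target])
    return edges

edges = preferential_attachment(1000)
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

top5 = [d for _, d in degree.most_common(5)]
median = sorted(degree.values())[len(degree) // 2]
print("top-5 degrees:", top5)    # a handful of 'superstar' nodes
print("median degree:", median)  # while the typical node has one or two links
```

Even this crude sketch reproduces the skew the text describes: salience concentrates in a few nodes, yet every node remains connected to the rest of the network through redundant chains of links.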
+
+2~ WHO WILL PLAY THE WATCHDOG FUNCTION?
+
+A distinct critique leveled at the networked public sphere as a platform for democratic politics is the concern over who will fill the role of watchdog. Neil Netanel made this argument most clearly. His concern was that perhaps freedom of expression for all is a good thing, and perhaps we could even overcome information overflow problems, but we live in a complex world with powerful actors. Government and corporate power is large, and individuals, no matter how good their tools, cannot be a serious alternative to a well-funded, independent press that can pay investigative reporters, defend lawsuits, and generally act like the New York Times and the Washington Post ,{[pg 262]}, when they published the Pentagon Papers in the teeth of the Nixon administration's resistance, providing some of the most damning evidence against the planning and continued prosecution of the war in Vietnam. Netanel is cognizant of the tensions between the need to capture large audiences and sell advertising, on the one hand, and the role of watchdog, on the other. He nonetheless emphasizes that the networked public sphere cannot investigate as deeply or create the public salience that the mass media can. These limitations make commercial mass media, for all their flaws, necessary for a liberal public sphere.
+
+This diagnosis of the potential of the networked public sphere underrepresents its productive capacity. The Diebold story provides in narrative form a detailed response to each of the concerns. The problem of voting machines has all the characteristics of an important, hard subject. It stirs deep fears that democracy is being stolen, and is therefore highly unsettling. It involves a difficult set of technical judgments about the functioning of voting machines. It required exposure and analysis of corporate-owned materials in the teeth of litigation threats and efforts to suppress and discredit the criticism. At each juncture in the process, the participants in the critique turned iteratively to peer production and radically distributed methods of investigation, analysis, distribution, and resistance to suppression: the initial observations of the whistle-blower or the hacker; the materials made available on a "see for yourself" and "come analyze this and share your insights" model; the distribution by students; and the fallback option when their server was shut down of replication around the network. At each stage, a peer-production solution was interposed in place of where a well-funded, high-end mass-media outlet would have traditionally applied funding in expectation of sales of copy. And it was only after the networked public sphere developed the analysis and debate that the mass media caught on, and then only gingerly.
+
+The Diebold case was not an aberration, but merely a particularly rich case study of a much broader phenomenon, most extensively described in Dan Gillmor's We the Media. The basic production modalities that typify the networked information economy are now being applied to the problem of producing politically relevant information. In 2005, the most visible example of application of the networked information economy--both in its peer-production dimension and more generally by combining a wide range of nonproprietary production models--to the watchdog function of the media is the political blogosphere. The founding myth of the blogosphere's ,{[pg 263]}, journalistic potency was built on the back of then Senate majority leader Trent Lott. In 2002, Lott had the indiscretion of saying, at the one-hundredth-birthday party of Republican Senator Strom Thurmond, that if Thurmond had won his Dixiecrat presidential campaign, "we wouldn't have had all these problems over all these years." Thurmond had run on a segregationist campaign, splitting from the Democratic Party in opposition to Harry Truman's early civil rights efforts, as the post-World War II winds began blowing toward the eventual demise of formal, legal racial segregation in the United States. Few positions are taken to be more self-evident in the national public morality of early twenty-first-century America than that formal, state-imposed, racial discrimination is an abomination. And yet, the first few days after the birthday party at which Lott made his statement saw almost no reporting on the statement. ABC News and the Washington Post made small mention of it, but most media outlets reported merely on a congenial salute and farewell celebration of the Senate's oldest and longest-serving member. Things were different in the blogosphere.
At first liberal blogs, and within three days conservative bloggers as well, began to excavate past racist statements by Lott, and to beat the drums calling for his censure or removal as Senate leader. Within about a week, the story surfaced in the mainstream media, became a major embarrassment, and led to Lott's resignation as Senate majority leader about a week later. A careful case study of this event leaves it unclear why the mainstream media initially ignored the story.~{ Harvard Kennedy School of Government, Case Program: " `Big Media' Meets `Bloggers': Coverage of Trent Lott's Remarks at Strom Thurmond's Birthday Party," http://www.ksg.harvard.edu/presspol/Research_Publications/Case_Studies/1731_0.pdf.}~ It may have been that the largely social event drew the wrong sort of reporters. It may have been that reporters and editors who depend on major Washington, D.C., players were reluctant to challenge Lott. Perhaps they thought it rude to emphasize this indiscretion, or too upsetting to us all to think of just how close to the surface thoughts that we deem abominable can lurk. There is little disagreement that the day after the party, the story was picked up and discussed by Marshall on TalkingPoints, as well as by another liberal blogger, Atrios, who apparently got it from a post on Slate's "Chatterbox," which picked it up from ABC News's own The Note, a news summary made available on the television network's Web site. While the mass media largely ignored the story, and the two or three mainstream reporters who tried to write about it were getting little traction, bloggers were collecting more stories about prior instances where Lott's actions tended to suggest support for racist causes. Marshall, for example, found that Lott had filed a 1981 amicus curiae brief in support of Bob Jones University's effort to retain its tax-exempt status. The U.S. 
government had rescinded ,{[pg 264]}, that status because the university practiced racial discrimination--such as prohibiting interracial dating. By Monday of the following week, four days after the remarks, conservative bloggers like Glenn Reynolds on Instapundit, Andrew Sullivan, and others were calling for Lott's resignation. It is possible that, absent the blogosphere, the story would still have flared up. There were two or so mainstream reporters still looking into the story. Jesse Jackson had come out within four days of the comment and said Lott should resign as majority leader. Eventually, when the mass media did enter the fray, its coverage clearly dominated the public agenda and its reporters uncovered materials that helped speed Lott's exit. However, given the short news cycle, the lack of initial interest by the media, and the large time lag between the event itself and when the media actually took the subject up, it seems likely that without the intervention of the blogosphere, the story would have died. What happened instead is that the cluster of political blogs--starting on the Left but then moving across the Left-Right divide--took up the subject, investigated, wrote opinions, collected links and public interest, and eventually captured enough attention to make the comments a matter of public importance. Free from the need to appear neutral and not to offend readers, and free from the need to keep close working relationships with news subjects, bloggers were able to identify something that grated on their sensibilities, talk about it, dig deeper, and eventually generate a substantial intervention into the public sphere. That intervention still had to pass through the mass media, for we still live in a communications environment heavily based on those media. However, the new source of insight, debate, and eventual condensation of effective public opinion came from within the networked information environment.
+
+The point is not to respond to the argument with a litany of anecdotes. The point is that the argument about the commercial media's role as watchdog turns out to be a familiar argument--it is the same argument that was made about software and supercomputers, encyclopedias and immersive entertainment scripts. The answer, too, is by now familiar. Just as the World Wide Web can offer a platform for the emergence of an enormous and effective almanac, just as free software can produce excellent software and peer production can produce a good encyclopedia, so too can peer production produce the public watchdog function. In doing so, clearly the unorganized collection of Internet users lacks some of the basic tools of the mass media: dedicated full-time reporters; contacts with politicians who need media to survive, and therefore cannot always afford to stonewall questions; or ,{[pg 265]}, public visibility and credibility to back their assertions. However, network-based peer production also avoids the inherent conflicts between investigative reporting and the bottom line--its cost, its risk of litigation, its risk of withdrawal of advertising from alienated corporate subjects, and its risk of alienating readers. Building on the wide variation and diversity of knowledge, time, availability, insight, and experience, as well as the vast communications and information resources on hand for almost anyone in advanced economies, we are seeing that the watchdog function too is being peer produced in the networked information economy.
+
+Note that while my focus in this chapter has been mostly the organization of public discourse, both the Sinclair and the Diebold case studies also identify characteristics of distributed political action. We see collective action emerging from the convergence of independent individual actions, with no hierarchical control like that of a political party or an organized campaign. There may be some coordination and condensation points--like BoycottSBG.com or blackboxvoting.org. Like other integration platforms in peer-production systems, these condensation points provide a critical function. They do not, however, control the process. One manifestation of distributed coordination for political action is something Howard Rheingold has called "smart mobs"--large collections of individuals who are able to coordinate real-world action through widely distributed information and communications technology. He tells of the "People Power II" revolution in Manila in 2001, where demonstrations to oust then-president Estrada were coordinated spontaneously through extensive text messaging.~{ Howard Rheingold, Smart Mobs: The Next Social Revolution (Cambridge, MA: Perseus Publishing, 2002). }~ Few images in the early twenty-first century can convey this phenomenon more vividly than the demonstrations around the world on February 15, 2003. Between six and ten million protesters were reported to have gone to the streets of major cities in about sixty countries in opposition to the American-led invasion of Iraq. There had been no major media campaign leading up to the demonstrations--though there was much media attention to them later. There had been no organizing committee. Instead, there was a network of roughly concordant actions, none controlling the other, all loosely discussing what ought to be done and when. MoveOn.org in the United States provides an example of a coordination platform for a network of politically mobilized activities. 
It builds on e-mail and Web-based media to communicate opportunities for political action to those likely to be willing and able to take it. Radically distributed, network-based solutions to the problems of political mobilization rely on the same characteristics as networked information production ,{[pg 266]}, more generally: extensive communications leading to concordant and cooperative patterns of behavior without the introduction of hierarchy or the interposition of payment.
+
+2~ USING NETWORKED COMMUNICATION TO WORK AROUND AUTHORITARIAN CONTROL
+
+The Internet and the networked public sphere offer a different set of potential benefits, and suffer a different set of threats, as a platform for liberation in authoritarian countries. State-controlled mass-media models are highly conducive to authoritarian control. Because they usually rely on a small number of technical and organizational points of control, mass media offer a relatively easy target for capture and control by governments. Successful control of such universally visible media then becomes an important tool of information manipulation, which, in turn, eases the problem of controlling the population. Not surprisingly, capture of the national television and radio stations is invariably an early target of coups and revolutions. The highly distributed networked architecture of the Internet makes it harder to control communications in this way.
+
+The case of Radio B92 in Yugoslavia offers an example. B92 was founded in 1989, as an independent radio station. Over the course of the 1990s, it developed a significant independent newsroom broadcast over the station itself, and syndicated through thirty affiliated independent stations. B92 was banned twice after the NATO bombing of Belgrade, in an effort by the Milosevic regime to control information about the war. In each case, however, the station continued to produce programming, and distributed it over the Internet from a server based in Amsterdam. The point is a simple one. Shutting down a broadcast station is simple. There is one transmitter with one antenna, and police can find and hold it. It is much harder to shut down all connections from all reporters to a server and from the server back into the country wherever a computer exists.
+
+This is not to say that the Internet will of necessity in the long term lead all authoritarian regimes to collapse. One option open to such regimes is simply to resist Internet use. In 2003, Burma, or Myanmar, had 28,000 Internet users out of a population of more than 42 million, or one in fifteen hundred, as compared, for example, to 6 million out of 65 million in neighboring Thailand, or roughly one in eleven. Most countries are not, however, willing to forgo the benefits of connectivity to maintain their control. Iran's ,{[pg 267]}, population of 69 million includes 4.3 million Internet users, while China has about 80 million users, second only to the United States in absolute terms, out of a population of 1.3 billion. That is, both China and Iran have a density of Internet users of about one in sixteen.~{ Data taken from CIA World Fact Book (Washington, DC: Central Intelligence Agency, 2004). }~ Burma's negligible level of Internet availability is a compound effect of low gross domestic product (GDP) per capita and government policies. Some countries with similar GDP levels still have levels of Internet users in the population that are two orders of magnitude higher: Cameroon (1 Internet user for every 27 residents), Moldova (1 in 30), and Mongolia (1 in 55). Even very large poor countries have several times more users per population than Myanmar: Pakistan (1 in 100), Mauritania (1 in 300), and Bangladesh (1 in 580). Lawrence Solum and Minn Chung outline how Myanmar achieves its high degree of control and low degree of use.~{ Lawrence Solum and Minn Chung, "The Layers Principle: Internet Architecture and the Law" (working paper no. 55, University of San Diego School of Law, Public Law and Legal Theory, June 2003). }~ Myanmar has only one Internet service provider (ISP), owned by the government. The government must authorize anyone who wants to use the Internet or create a Web page within the country. 
Some of the licensees, like foreign businesses, are apparently permitted and enabled only to send e-mail, while using the Web is limited to security officials who monitor it. With this level of draconian regulation, Myanmar can avoid the liberating effects of the Internet altogether, at the cost of losing all its economic benefits. Few regimes are willing to pay that price.
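The density figures quoted above follow directly from the raw counts the text gives; a quick check, using only numbers stated in the paragraph:

```python
# User and population counts as given in the text (CIA World Fact Book, 2003-2004).
countries = {
    "Myanmar":  (28_000, 42_000_000),        # "one in fifteen hundred"
    "Thailand": (6_000_000, 65_000_000),     # "roughly one in eleven"
    "Iran":     (4_300_000, 69_000_000),     # "about one in sixteen"
    "China":    (80_000_000, 1_300_000_000), # "about one in sixteen"
}
for name, (users, population) in countries.items():
    print(f"{name}: one Internet user per {round(population / users)} residents")
```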
+
+Introducing Internet communications into a society does not, however, immediately and automatically mean that an open, liberal public sphere emerges. The Internet is technically harder to control than mass media. It increases the cost and decreases the efficacy of information control. However, a regime willing and able to spend enough money and engineering power, and to limit its population's access to the Internet sufficiently, can have substantial success in controlling the flow of information into and out of its country. Solum and Chung describe in detail one of the most extensive and successful of these efforts, the one that has been conducted by China--home to the second-largest population of Internet users in the world, whose policies controlled use of the Internet by two out of every fifteen Internet users in the world in 2003. In China, the government holds a monopoly over all Internet connections going into and out of the country. It either provides or licenses the four national backbones that carry traffic throughout China and connect it to the global network. ISPs that hang off these backbones are licensed, and must provide information about the location and workings of their facilities, as well as comply with a code of conduct. Individual ,{[pg 268]}, users must register and provide information about their machines, and the many Internet cafes are required to install filtering software that will filter out subversive sites. There have been crackdowns on Internet cafes to enforce these requirements. This set of regulations has replicated one aspect of the mass-medium model for the Internet--it has created a potential point of concentration or centralization of information flow that would make it easier to control Internet use. 
The highly distributed production capabilities of the networked information economy, however, as opposed merely to the distributed carriage capability of the Internet, mean that more must be done at this bottleneck to squelch the flow of information and opinion than would have to be done with mass media. That "more" in China has consisted of an effort to employ automatic filters--some at the level of the cybercafe or the local ISP, some at the level of the national backbone networks. The variability of these loci and their effects is reflected in partial efficacy and variable performance for these mechanisms. The most extensive study of the efficacy of these strategies for controlling information flows over the Internet to China was conducted by Jonathan Zittrain and Ben Edelman. From servers within China, they sampled about two hundred thousand Web sites and found that about fifty thousand were unavailable at least once, and close to nineteen thousand were unavailable on two distinct occasions. The blocking patterns seemed to follow mass-media logic--BBC News was consistently unavailable, as CNN and other major news sites often were; the U.S. court system official site was unavailable. However, Web sites that provided similar information--like those that offered access to all court cases but were outside the official system--were available. The core Web sites of human rights organizations or of Taiwan and Tibet-related organizations were blocked, and about sixty of the top one hundred results for "Tibet" on Google were blocked. What is also apparent from their study, however, and confirmed by Amnesty International's reports on Internet censorship in China, is that while censorship is significant, it is only partially effective.~{ Amnesty International, People's Republic of China, State Control of the Internet in China (2002). 
}~ The Amnesty report noted that Chinese users were able to use a variety of techniques to avoid the filtering, such as the use of proxy servers, but even Zittrain and Edelman, apparently testing for filtering as experienced by unsophisticated or compliant Internet users in China, could access many sites that would, on their face, seem potentially destabilizing.
+
+This level of censorship may indeed be effective enough for a government negotiating economic and trade expansion with political stability and control. It suggests, however, limits of the ability of even a highly dedicated ,{[pg 269]}, government to control the capacity of Internet communications to route around censorship and to make it much easier for determined users to find information they care about, and to disseminate their own information to others. Iran's experience, with a similar level of Internet penetration, emphasizes the difficulty of maintaining control of Internet publication.~{ A synthesis of news-based accounts is Babak Rahimi, "Cyberdissent: The Internet in Revolutionary Iran," Middle East Review of International Affairs 7, no. 3 (2003). }~ Iran's network emerged from 1993 onward from the university system, quite rapidly complemented by commercial ISPs. Because deployment and use of the Internet preceded its regulation by the government, its architecture is less amenable to centralized filtering and control than China's. Internet access through university accounts and cybercafes appears to be substantial, and until the past three or four years, had operated free of the crackdowns and prison terms suffered by opposition print publications and reporters. The conservative branches of the regime seem to have taken a greater interest in suppressing Internet communications since the publication of imprisoned Ayatollah Montazeri's critique of the foundations of the Islamic state on the Web in December 2000. While the original Web site, montazeri.com, seems to have been eliminated, the site persists as montazeri.ws, using a Western Samoan domain name, as do a number of other Iranian publications. There are now dozens of chat rooms, blogs, and Web sites, and e-mail also seems to be playing an increasing role in the education and organization of an opposition. 
While the conservative branches of the Iranian state have been clamping down on these forms, and some bloggers and Web site operators have found themselves subject to the same mistreatment as journalists, the efficacy of these efforts to shut down opposition seems to be limited and uneven.
+
+Media other than static Web sites present substantially deeper problems for regimes like those of China and Iran. Scanning the text of e-mail messages of millions of users who can encrypt their communications with widely available tools creates a much more complex problem. Ephemeral media like chat rooms and writable Web tools allow the content of an Internet communication or Web site to be changed easily and dynamically, so that blocking sites becomes harder, while coordinating moves to new sites to route around blocking becomes easier. At one degree of complexity deeper, the widely distributed architecture of the Net also allows users to build censorship-resistant networks by pooling their own resources. The pioneering example of this approach is Freenet, initially developed in 1999-2000 by Ian Clarke, an Irish programmer fresh out of a degree in computer science and artificial intelligence at Edinburgh University. Now a broader free-software project, Freenet ,{[pg 270]}, is a peer-to-peer application specifically designed to be censorship resistant. Unlike the more famous peer-to-peer network developed at the time--Napster--Freenet was not intended to store music files on the hard drives of users. Instead, it stores bits and pieces of publications, and then uses sophisticated algorithms to deliver the documents to whoever seeks them, in encrypted form. This design trades off easy availability for a series of security measures that prevent even the owners of the hard drives on which the data resides--or government agents that search their computers--from knowing what is on their hard drive or from controlling it. As a practical matter, if someone in a country that prohibits certain content but enables Internet connections wants to publish content--say, a Web site or blog--safely, they can inject it into the Freenet system. 
The content will be encrypted and divided into little bits and pieces that are stored in many different hard drives of participants in the network. No single computer will have all the information, and shutting down any given computer will not make the information unavailable. It will continue to be accessible to anyone running the Freenet client. Freenet indeed appears to be used in China, although the precise scope is hard to determine, as the network is intended to mask the identity and location of both readers and publishers in this system. The point to focus on is not the specifics of Freenet, but the feasibility of constructing user-based censorship-resistant storage and retrieval systems that would be practically impossible for a national censorship system to identify and block subversive content.
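The storage design described in this paragraph can be sketched in miniature. The following is a toy illustration of the principle only (encrypt, chunk, scatter across participants' drives), with an invented hash-based keystream cipher and simple round-robin chunk placement; it is not Freenet's actual routing or encryption algorithm.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key||counter (toy cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def publish(document: bytes, key: bytes, nodes: list, chunk_size: int = 16):
    """Encrypt a document, split it into chunks, and scatter the chunks
    across nodes. Returns the list of chunk hashes needed to retrieve it."""
    cipher = bytes(a ^ b for a, b in zip(document, keystream(key, len(document))))
    manifest = []
    for i in range(0, len(cipher), chunk_size):
        chunk = cipher[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        nodes[(i // chunk_size) % len(nodes)][h] = chunk  # round-robin placement
        manifest.append(h)
    return manifest

def retrieve(manifest, key, nodes):
    """Collect the chunks from whichever nodes hold them and decrypt."""
    cipher = b"".join(next(n[h] for n in nodes if h in n) for h in manifest)
    return bytes(a ^ b for a, b in zip(cipher, keystream(key, len(cipher))))

nodes = [{} for _ in range(5)]  # five participants' hard drives
secret = b"samizdat: the minister lied about the vote count"
manifest = publish(secret, b"publisher-key", nodes)
assert retrieve(manifest, b"publisher-key", nodes) == secret
# No single node holds the whole document, and none holds any plaintext.
assert all(len(n) < len(manifest) for n in nodes)
```

The property the text emphasizes falls out of the structure: each drive stores only an encrypted fragment, so inspecting or seizing any one machine reveals nothing, while any reader holding the manifest can reassemble the document from the surviving nodes.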
+
+To conclude, in authoritarian countries, the introduction of Internet communications makes it harder and more costly for governments to control the public sphere. If these governments are willing to forgo the benefits of Internet connectivity, they can avoid this problem. If they are not, they find themselves with less control over the public sphere. There are, obviously, other means of more direct repression. However, control over the mass media was, throughout most of the twentieth century, a core tool of repressive governments. It allowed them to manipulate what the masses of their populations knew and believed, and thus limited the portion of the population that the government needed to physically repress to a small and often geographically localized group. The efficacy of these techniques of repression is blunted by adoption of the Internet and the emergence of a networked information economy. Low-cost communications, distributed technical and organizational structure, and ubiquitous presence of dynamic authorship ,{[pg 271]}, tools make control over the public sphere difficult, and practically never perfect.
+
+2~ TOWARD A NETWORKED PUBLIC SPHERE
+
+The first generation of statements that the Internet democratizes was correct but imprecise. The Internet does restructure public discourse in ways that give individuals a greater say in their governance than the mass media made possible. The Internet does provide avenues of discourse around the bottlenecks of older media, whether these are held by authoritarian governments or by media owners. But the mechanisms for this change are more complex than those articulated in the past. And these more complex mechanisms respond to the basic critiques that have been raised against the notion that the Internet enhances democracy.
+
+Part of what has changed with the Internet is technical infrastructure. Network communications do not offer themselves up as easily for single points of control as did the mass media. While it is possible for authoritarian regimes to try to retain bottlenecks in the Internet, the cost is higher and the efficacy lower than in mass-media-dominated systems. While this does not mean that introduction of the Internet will automatically result in global democratization, it does make the work of authoritarian regimes harder. In liberal democracies, the primary effect of the Internet runs through the emergence of the networked information economy. We are seeing the emergence to much greater significance of nonmarket, individual, and cooperative peer-production efforts to produce universal intake of observations and opinions about the state of the world and what might and ought to be done about it. We are seeing the emergence of filtering, accreditation, and synthesis mechanisms as part of network behavior. These rely on clustering of communities of interest and association and highlighting of certain sites, but offer tremendous redundancy of paths for expression and accreditation. These practices leave no single point of failure for discourse: no single point where observations can be squelched or attention commanded--by fiat or with the application of money. Because of these emerging systems, the networked information economy is solving the information overload and discourse fragmentation concerns without reintroducing the distortions of the mass-media model. Peer production, both long-term and organized, as in the case of Slashdot, and ad hoc and dynamically formed, as in the case of blogging or ,{[pg 272]}, the Sinclair or Diebold cases, is providing some of the most important functionalities of the media. 
These efforts provide a watchdog, a source of salient observations regarding matters of public concern, and a platform for discussing the alternatives open to a polity.
+
+In the networked information environment, everyone is free to observe, report, question, and debate, not only in principle, but in actual capability. They can do this, if not through their own widely read blog, then through a cycle of mailing lists, collective Web-based media like Slashdot, comments on blogs, or even merely through e-mails to friends who, in turn, have meaningful visibility in a smallish-scale cluster of sites or lists. We are witnessing a fundamental change in how individuals can interact with their democracy and experience their role as citizens. Ideal citizens need not be seen purely as trying to inform themselves about what others have found, so that they can vote intelligently. They need not be limited to reading the opinions of opinion makers and judging them in private conversations. They are no longer constrained to occupy the role of mere readers, viewers, and listeners. They can be, instead, participants in a conversation. Practices that begin to take advantage of these new capabilities shift the locus of content creation from the few professional journalists trolling society for issues and observations, to the people who make up that society. They begin to free the public agenda setting from dependence on the judgments of managers, whose job it is to assure that the maximum number of readers, viewers, and listeners are sold in the market for eyeballs. The agenda thus can be rooted in the life and experience of individual participants in society--in their observations, experiences, and obsessions. The network allows all citizens to change their relationship to the public sphere. They no longer need be consumers and passive spectators. They can become creators and primary subjects. It is in this sense that the Internet democratizes. ,{[pg 273]},
+
+1~8 Chapter 8 - Cultural Freedom: A Culture Both Plastic and Critical
+
+poem{
+
+Gone with the Wind
+
+There was a land of Cavaliers and Cotton Fields called the Old South.
+Here in this pretty world, Gallantry took its last bow. Here was the
+last ever to be seen of Knights and their Ladies Fair, of Master and
+of Slave. Look for it only in books, for it is no more than a dream
+remembered, a Civilization gone with the wind.
+
+--MGM (1939) film adaptation of Margaret Mitchell's novel (1936)
+
+}poem
+
+poem{
+
+Strange Fruit
+
+Southern trees bear strange fruit,
+Blood on the leaves and blood at the root,
+Black bodies swinging in the southern breeze,
+Strange fruit hanging from the poplar trees.
+
+Pastoral scene of the gallant south,
+The bulging eyes and the twisted mouth,
+Scent of magnolias, sweet and fresh,
+Then the sudden smell of burning flesh.
+
+Here is the fruit for the crows to pluck,
+For the rain to gather, for the wind to suck,
+For the sun to rot, for the trees to drop,
+Here is a strange and bitter crop.
+
+--Billie Holiday (1939)
+ from lyrics by Abel Meeropol (1937)
+
+}poem
+
+,{[pg 274]},
+
+In 1939, Gone with the Wind reaped seven Oscars, while Billie Holiday's song reached number 16 on the charts, even though Columbia Records refused to release it: Holiday had to record it with a small company that was run out of a storefront in midtown Manhattan. On the eve of the second reconstruction era, which was to overhaul the legal framework of race relations over the two decades beginning with the desegregation of the armed forces in the late 1940s and culminating with the civil rights acts passed between 1964 and 1968, the two sides of the debate over desegregation and the legacy of slavery were minting new icons through which to express their most basic beliefs about the South and its peculiar institutions. As the following three decades unfolded and the South was gradually forced to change its ways, the cultural domain continued to work out the meaning of race relations in the United States and the history of slavery. The actual slogging of regulation of discrimination, implementation of desegregation and later affirmative action, and the more local politics of hiring and firing were punctuated throughout this period by salient iconic retellings of the stories of race relations in the United States, from Guess Who's Coming to Dinner? to Roots. The point of this chapter, however, is not to discuss race relations, but to understand culture and cultural production in terms of political theory. Gone with the Wind and Strange Fruit or Guess Who's Coming to Dinner? offer us intuitively accessible instances of a much broader and more basic characteristic of human understanding and social relations. Culture, shared meaning, and symbols are how we construct our views of life across a wide range of domains--personal, political, and social. How culture is produced is therefore an essential ingredient in structuring how freedom and justice are perceived, conceived, and pursued. 
In the twentieth century, Hollywood and the recording industry came to play a very large role in this domain. The networked information economy now seems poised to attenuate that role in favor of a more participatory and transparent cultural production system.
+
+Cultural freedom occupies a position that relates to both political freedom and individual autonomy, but is synonymous with neither. The root of its importance is that none of us exist outside of culture. As individuals and as political actors, we understand the world we occupy, evaluate it, and act in it from within a set of understandings and frames of meaning and reference that we share with others. What institutions and decisions are considered "legitimate" and worthy of compliance or participation; what courses of ,{[pg 275]}, action are attractive; what forms of interaction with others are considered appropriate--these are all understandings negotiated from within a set of shared frames of meaning. How those frames of meaning are shaped and by whom become central components of the structure of freedom for those individuals and societies that inhabit it and are inhabited by it. They define the public sphere in a much broader sense than we considered in the prior chapters.
+
+The networked information economy makes it possible to reshape both the "who" and the "how" of cultural production relative to cultural production in the twentieth century. It adds to the centralized, market-oriented production system a new framework of radically decentralized individual and cooperative nonmarket production. It thereby affects the ability of individuals and groups to participate in the production of the cultural tools and frameworks of human understanding and discourse. It affects the way we, as individuals and members of social and political clusters, interact with culture, and through it with each other. It makes culture more transparent to its inhabitants. It makes the process of cultural production more participatory, in the sense that more of those who live within a culture can actively participate in its creation. We are seeing the possibility of an emergence of a new popular culture, produced on the folk-culture model and inhabited actively, rather than passively consumed by the masses. Through these twin characteristics--transparency and participation--the networked information economy also creates greater space for critical evaluation of cultural materials and tools. The practice of producing culture makes us all more sophisticated readers, viewers, and listeners, as well as more engaged makers.
+
+Throughout the twentieth century, the making of widely shared images and symbols was a concentrated practice that went through the filters of Hollywood and the recording industry. The radically declining costs of manipulating video and still images, audio, and text have, however, made culturally embedded criticism and broad participation in the making of meaning much more feasible than in the past. Anyone with a personal computer can cut and mix files, make their own files, and publish them to a global audience. This is not to say that cultural bricolage, playfulness, and criticism did not exist before. One can go to the avant-garde movement, but equally well to African-Brazilian culture or to Our Lady of Guadalupe to find them. Even with regard to television, that most passive of electronic media, John Fiske argued under the rubric of "semiotic democracy" that viewers engage ,{[pg 276]}, in creative play and meaning making around the TV shows they watch. However, the technical characteristics of digital information technology, the economics of networked information production, and the social practices of networked discourse qualitatively change the role individuals can play in cultural production.
+
+The practical capacity individuals and noncommercial actors have to use and manipulate cultural artifacts today, playfully or critically, far outstrips anything possible in television, film, or recorded music, as these were organized throughout the twentieth century. The diversity of cultural moves and statements that results from these new opportunities for creativity vastly increases the range of cultural elements accessible to any individual. Our ability, therefore, to navigate the cultural environment and make it our own, both through creation and through active selection and attention, has increased to the point of making a qualitative difference. In the academic law literature, Niva Elkin-Koren wrote early about the potential democratization of "meaning making processes," William Fisher about "semiotic democracy," and Jack Balkin about a "democratic culture." Lessig has explored the generative capacity of the freedom to create culture, and its contribution to creativity itself. These efforts revolve around the idea that there is something normatively attractive, from the perspective of "democracy" as a liberal value, about the fact that anyone, using widely available equipment, can take from the existing cultural universe more or less whatever they want, cut it, paste it, mix it, and make it their own--equally well expressing their adoration as their disgust, their embrace of certain images as their rejection of them.
+
+Building on this work, this chapter seeks to do three things: First, I claim that the modalities of cultural production and exchange are a proper subject for normative evaluation within a broad range of liberal political theory. Culture is a social-psychological-cognitive fact of human existence. Ignoring it, as rights-based and utilitarian versions of liberalism tend to do, disables political theory from commenting on central characteristics of a society and its institutional frameworks. Analyzing the attractiveness of any given political institutional system without considering how it affects cultural production, and through it the production of the basic frames of meaning through which individual and collective self-determination functions, leaves a large hole in our analysis. Liberal political theory needs a theory of culture and agency that is viscous enough to matter normatively, but loose enough to give its core foci--the individual and the political system--room to be effective ,{[pg 277]}, independently, not as a mere expression or extension of culture. Second, I argue that cultural production in the form of the networked information economy offers individuals a greater participatory role in making the culture they occupy, and makes this culture more transparent to its inhabitants. This descriptive part occupies much of the chapter. Third, I suggest the relatively straightforward conclusion of the prior two observations. From the perspective of liberal political theory, the kind of open, participatory, transparent folk culture that is emerging in the networked environment is normatively more attractive than was the industrial cultural production system typified by Hollywood and the recording industry.
+
+A nine-year-old girl searching Google for Barbie will quite quickly find links to AdiosBarbie.com, to the Barbie Liberation Organization (BLO), and to other, similarly critical sites interspersed among those dedicated to selling and playing with the doll. The contested nature of the doll becomes publicly and everywhere apparent, liberated from the confines of feminist-criticism symposia and undergraduate courses. This simple Web search represents both of the core contributions of the networked information economy. First, from the perspective of the searching girl, it represents a new transparency of cultural symbols. Second, from the perspective of the participants in AdiosBarbie or the BLO, the girl's use of their site completes their own quest to participate in making the cultural meaning of Barbie. The networked information environment provides an outlet for contrary expression and a medium for shaking what we accept as cultural baseline assumptions. Its radically decentralized production modes provide greater freedom to participate effectively in defining the cultural symbols of our day. These characteristics make the networked environment attractive from the perspectives of both personal freedom of expression and an engaged and self-aware political discourse.
+
+We cannot, however, take for granted that the technological capacity to participate in the cultural conversation, to mix and make our own, will translate into the freedom to do so. The practices of cultural and countercultural creation are at the very core of the battle over the institutional ecology of the digital environment. The tension is perhaps not new or unique to the Internet, but its salience is now greater. The makers of the 1970s comic strip Air Pirates already found their comics confiscated when they portrayed Mickey and Minnie and Donald and Daisy in various compromising countercultural postures. Now, the ever-increasing scope and expanse ,{[pg 278]}, of copyright law and associated regulatory mechanisms, on the one hand, and of individual and collective nonmarket creativity, on the other hand, have heightened the conflict between cultural freedom and the regulatory framework on which the industrial cultural production system depends. As Lessig, Jessica Litman, and Siva Vaidhyanathan have each portrayed elegantly and in detail, the copyright industries have on many dimensions persuaded both Congress and courts that individual, nonmarket creativity using the cultural outputs of the industrial information economy is to be prohibited. As we stand today, freedom to play with the cultural environment is nonetheless preserved in the teeth of the legal constraints, because of the high costs of enforcement, on the one hand, and the ubiquity and low cost of the means to engage in creative cultural bricolage, on the other hand. These social, institutional, and technical facts still leave us with quite a bit of unauthorized creative expression. These facts, however, are contingent and fragile. Chapter 11 outlines in some detail the long trend toward the creation of ever-stronger legal regulation of cultural production, and in particular, the enclosure movement that began in the 1970s and gained steam in the mid-1990s. 
A series of seemingly discrete regulatory moves threatens the emerging networked folk culture. Ranging from judicial interpretations of copyright law to efforts to regulate the hardware and software of the networked environment, we are seeing a series of efforts to restrict nonmarket use of twentieth-century cultural materials in order to preserve the business models of Hollywood and the recording industry. These regulatory efforts threaten the freedom to participate in twenty-first-century cultural production, because current creation requires taking and mixing the twentieth-century cultural materials that make up who we are as culturally embedded beings. Here, however, I focus on explaining how cultural participation maps onto the project of liberal political theory, and why the emerging cultural practices should be seen as attractive within that normative framework. I leave development of the policy implications to part III.
+
+2~ CULTURAL FREEDOM IN LIBERAL POLITICAL THEORY
+
+Utilitarian and rights-based liberal political theories have an awkward relationship to culture. Both major strains of liberal theory make a certain set of assumptions about the autonomous individuals with which they are concerned. Individuals are assumed to be rational and knowledgeable, at least ,{[pg 279]}, about what is good for them. They are conceived of as possessing a capacity for reason and a set of preferences prior to engagement with others. Political theory then proceeds to concern itself with political structures that respect the autonomy of individuals with such characteristics. In the political domain, this conception of the individual is easiest to see in pluralist theories, which require institutions for collective decision making that clear what are treated as already-formed preferences of individuals or voluntary groupings.
+
+Culture represents a mysterious category for these types of liberal political theories. It is difficult to specify how it functions in terms readily amenable to a conception of individuals whose rationality and preferences for their own good are treated as though they preexist and are independent of society. A concept of culture requires some commonly held meaning among these individuals. Even the simplest intuitive conception of what culture might mean would treat this common frame of meaning as the result of social processes that preexist any individual, and partially structure what it is that individuals bring to the table as they negotiate their lives together, in society or in a polity. Inhabiting a culture is a precondition to any interpretation of what is at stake in any communicative exchange among individuals. A partly subconscious, lifelong dynamic social process of becoming and changing as a cultural being is difficult to fold into a collective decision-making model that focuses on designing a discursive platform for individuated discrete participants who are the bearers of political will. It is easier to model respect for an individual's will when one adopts a view of that will as independent, stable, and purely internally generated. It is harder to do so when one conceives of that individual will as already in some unspecified degree rooted in exchange with others about what an individual is to value and prefer.
+
+Culture has, of course, been incorporated into political theory as a central part of the critique of liberalism. The politics of culture have been a staple of critical theory since Marx first wrote that "Religion . . . is the opium of the people" and that "to call on them to give up their illusions about their condition is to call on them to give up a condition that requires illusions."~{ Karl Marx, "Introduction to a Contribution to the Critique of Hegel's Philosophy of Right," Deutsch-Französische Jahrbücher (1844). }~ The twentieth century saw a wide array of critique, from cultural Marxism to poststructuralism and postmodernism. However, much of mainstream liberal political theory has chosen to ignore, rather than respond and adapt to, these critiques. In Political Liberalism, for example, Rawls acknowledges "the fact" of reasonable pluralism--of groups that persistently and reasonably hold competing comprehensive doctrines--and aims for political pluralism as a mode of managing the irreconcilable differences. This leaves the formation ,{[pg 280]}, of the comprehensive doctrine and the systems of belief within which it is rendered "reasonable" a black box to liberal theory. This may be an adequate strategy for analyzing the structure of formal political institutions at the broadest level of abstraction. However, it disables liberal political theory from dealing with more fine-grained questions of policy that act within the black box.
+
+As a practical matter, treating culture as a black box disables a political theory as a mechanism for diagnosing the actual conditions of life in a society in terms of its own political values. It does so in precisely the same way that a formal conception of autonomy disables those who hold it from diagnosing the conditions of autonomy in practical life. Imagine for a moment that we had received a revelation that a crude version of Antonio Gramsci's hegemony theory was perfectly correct as a matter of descriptive sociology. Ruling classes do, in fact, consciously and successfully manipulate the culture in order to make the oppressed classes compliant. It would be difficult, then, to continue to justify holding a position about political institutions, or autonomy, that treated the question of how culture, generally, or even the narrow subset of reasonably held comprehensive doctrines like religion, are made, as a black box. It would be difficult to defend respect for autonomous choices as respect for an individual's will, if an objective observer could point to a social process, external to the individual and acting upon him or her, as the cause of the individual holding that will. It would be difficult to focus one's political design imperatives on public processes that allow people to express their beliefs and preferences, argue about them, and ultimately vote on them, if it is descriptively correct that those beliefs and preferences are themselves the product of manipulation of some groups by others.
+
+The point is not, of course, that Gramsci was descriptively right or that any of the broad range of critical theories of culture is correct as a descriptive matter. It is that liberal theories that ignore culture are rendered incapable of answering some questions that arise in the real world and have real implications for individuals and polities. There is a range of sociological, psychological, or linguistic descriptions that could characterize the culture of a society as more or less in accord with the concern of liberalism with individual and collective self-determination. Some such descriptive theory of culture can provide us with enough purchase on the role of culture to diagnose the attractiveness of a cultural production system from a political-theory perspective. It does not require that liberal theory abandon individuals ,{[pg 281]}, as the bearers of the claims of political morality. It does not require that liberal political theory refocus on culture as opposed to formal political institutions. It does require, however, that liberal theory at least be able to diagnose different conditions in the practical cultural life of a society as more or less attractive from the perspective of liberal political theory.
+
+The efforts of deliberative liberal theories to account for culture offer the most obvious source of such an insight. These political theories have worked to develop a conception of culture and its relationship to liberalism precisely because at a minimum, they require mutual intelligibility across individuals, which cannot adequately be explained without some conception of culture. In Jurgen Habermas's work, culture plays the role of a basis for mutual intelligibility. As the basis for "interpersonal intelligibility," we see culture playing such a role in the work of Bruce Ackerman, who speaks of acculturation as the necessary condition to liberal dialogue. "Cultural coherence" is something he sees children requiring as a precondition to becoming liberal citizens: it allows them to "Talk" and defend their claims in terms without which there can be no liberal conversation.~{ Bruce A. Ackerman, Social Justice and the Liberal State (New Haven, CT, and London: Yale University Press, 1980), 333-335, 141-146. }~ Michael Walzer argues that, "in matters of morality, argument is simply the appeal to common meanings."~{ Michael Walzer, Spheres of Justice: A Defense of Pluralism and Equality (New York: Basic Books, 1983), 29. }~ Will Kymlicka claims that for individual autonomy, "freedom involves making choices amongst various options, and our societal culture not only provides these options, but makes them meaningful to us." A societal culture, in turn, is a "shared vocabulary of tradition and convention" that is "embodied in social life[,] institutionally embodied--in schools, media, economy, government, etc."~{ Will Kymlicka, Multicultural Citizenship: A Liberal Theory of Minority Rights (Oxford: Clarendon Press, 1995), 76, 83. }~ Common meanings in all these frameworks must mean more than simple comprehension of the words of another. 
It provides a common baseline, which is not itself at that moment the subject of conversation or inquiry, but forms the background on which conversation and inquiry take place. Habermas's definition of lifeworld as "background knowledge," for example, is a crisp rendering of culture in this role:
+
+_1 the lifeworld embraces us as an unmediated certainty, out of whose immediate proximity we live and speak. This all-penetrating, yet latent and unnoticed presence of the background of communicative action can be described as a more intense, yet deficient, form of knowledge and ability. To begin with, we make use of this knowledge involuntarily, without reflectively knowing that we possess it at all. What enables background knowledge to acquire absolute certainty in this way, and even augments its epistemic quality from a subjective standpoint, is precisely the property that robs it of a constitutive feature of knowledge: we make use of ,{[pg 282]}, such knowledge without the awareness that it could be false. Insofar as all knowledge is fallible and is known to be such, background knowledge does not represent knowledge at all, in a strict sense. As background knowledge, it lacks the possibility of being challenged, that is, of being raised to the level of criticizable validity claims. One can do this only by converting it from a resource into a topic of discussion, at which point--just when it is thematized--it no longer functions as a lifeworld background but rather disintegrates in its background modality.~{ Jurgen Habermas, Between Facts and Norms, Contributions to a Discourse Theory of Law and Democracy (Cambridge, MA: MIT Press, 1998), 22-23. }~
+
+In other words, our understandings of meaning--how we are, how others are, what ought to be--are in some significant portion unexamined assumptions that we share with others, and to which we appeal as we engage in communication with them. This does not mean that culture is a version of false consciousness. It does not mean that background knowledge cannot be examined rationally or otherwise undermines the very possibility or coherence of a liberal individual or polity. It does mean, however, that at any given time, in any given context, there will be some set of historically contingent beliefs, attitudes, and social and psychological conditions that will in the normal course remain unexamined, and form the unexamined foundation of conversation. Culture is revisable through critical examination, at which point it ceases to be "common knowledge" and becomes a contested assumption. Nevertheless, some body of unexamined common knowledge is necessary for us to have an intelligible conversation that does not constantly go around in circles, challenging the assumptions underlying whichever conversational move is made.
+
+Culture, in this framework, is not destiny. It does not predetermine who we are, or what we can become or do, nor is it a fixed artifact. It is the product of a dynamic process of engagement among those who make up a culture. It is a frame of meaning from within which we must inevitably function and speak to each other, and whose terms, constraints, and affordances we always negotiate. There is no point outside of culture from which to do otherwise. An old Yiddish folktale tells of a naïve rabbi who, for safekeeping, put a ten-ruble note inside his copy of the Torah, at the page of the commandment, "thou shalt not steal." That same night, a thief stole into the rabbi's home, took the ten-ruble note, and left a five-ruble note in its place, at the page of the commandment, "thou shalt love thy neighbor as thyself." The rabbi and the thief share a common cultural framework (as do we, across the cultural divide), through which their various actions can be understood; indeed, without which their actions would be unintelligible. ,{[pg 283]}, The story offers a theory of culture, power, and freedom that is more congenial to liberal political theory than critical theories, and yet provides a conception of the role of culture in human relations that provides enough friction, or viscosity, to allow meaning making in culture to play a role in the core concerns of liberal political theory. Their actions are part strategic and part communicative--that is to say, to some extent they seek to force an outcome, and to some extent they seek to engage the other in a conversation in order to achieve a commonly accepted outcome. The rabbi places the ten-ruble note in the Bible in order to impress upon the putative thief that he should leave the money where it is. He cannot exert force on the thief by locking the money up in a safe because he does not own one. Instead, he calls upon a shared understanding and a claim of authority within the governed society to persuade the thief. 
The thief, to the contrary, could have physically taken the ten-ruble note without replacing it, but he does not. He engages the rabbi in the same conversation. In part, he justifies his claim to five rubles. In part, he resists the authority of the rabbi--not by rejecting the culture that renders the rabbi a privileged expert, but by playing the game of Talmudic disputation. There is a price, though, for participating in the conversation. The thief must leave the five-ruble note; he cannot take the whole amount.
+
+In this story, culture is open to interpretation and manipulation, but not infinitely so. Some moves may be valid within a cultural framework and alter it; others simply will not. The practical force of culture, on the other hand, is not brute force. It cannot force an outcome, but it can exert a real pull on the range of behaviors that people will seriously consider undertaking, both as individuals and as polities. The storyteller relies on the listener's cultural understanding about the limits of argument, or communicative action. The story exploits the open texture of culture, and the listener's shared cultural belief that stealing is an act of force, not a claim of justice; that those who engage in it do not conceive of themselves as engaged in legitimate defensible acts. The rabbi was naïve to begin with, but the thief's disputation is inconsistent with our sense of the nature of the act of stealing in exactly the same way that the rabbi's was, but inversely. The thief, the rabbi, and the storyteller participate in making, and altering, the meaning of the commandments.
+
+Culture changes through the actions of individuals in the cultural context. Beliefs, claims, communicative moves that have one meaning before an intervention ,{[pg 284]}, may begin to shift in their meaning as a result of other moves, made by other participants in the same cultural milieu. One need not adopt any given fully fledged meme theory of culture--like Richard Dawkins's, or Balkin's political adaptation of it as a theory of ideology--to accept that culture is created through communication among human beings, that it exerts some force on what they can say to each other and how it will be received, and that the parameters of a culture as a platform for making meaning in interaction among human beings change over time with use. How cultural moves are made, by whom, and with what degree of perfect replication or subtle (and not so subtle) change, become important elements in determining the rate and direction of cultural change. These changes, over time, alter the platform individuals must use to make sense of the world they occupy, and for participants in conversation to be able to make intelligible communications to each other about the world they share and where it can and ought to go. Culture so understood is a social fact about particular sets of human beings in historical context. As a social fact, it constrains and facilitates the development, expression, and questioning of beliefs and positions. Whether and how Darwinism should be taught in public schools, for example, is a live political question in vast regions of the United States, and is played out as a debate over whether evolution is "merely a theory." Whether racial segregation should be practiced in these schools is no longer a viable or even conceivable political agenda. The difference between Darwinism and the undesirability of racial segregation is not that one is scientifically true and the other is not. 
The difference is that the former is not part of the "common knowledge" of a large section of society, whereas the latter is, in a way that no longer requires proof by detailed sociological and psychological studies of the type cited by the Supreme Court in support of its holding, in /{Brown v. Board of Education}/, that segregation in education was inherently unequal.
+
+If culture is indeed part of how we form a shared sense of unexamined common knowledge, it plays a significant role in framing the meaning of the state of the world, the availability and desirability of choices, and the organization of discourse. The question of how culture is framed (and through it, meaning and the baseline conversational moves) then becomes germane to a liberal political theory. Between the Scylla of a fixed culture (with hierarchical, concentrated power to control its development and interpretation) and the Charybdis of a perfectly open culture (where nothing ,{[pg 285]}, is fixed and everything is up for grabs, offering no anchor for meaning and mutual intelligibility), there is a wide range of practical social and economic arrangements around the production and use of culture. In evaluating the attractiveness of various arrangements from the perspective of liberal theory, we come to an already familiar trade-off, and an already familiar answer. As in the case of autonomy and political discourse, a greater ability of individuals to participate in the creation of the cultural meaning of the world they occupy is attractive from the perspective of the liberal commitments to individual freedom and democratic participation. As in both areas that we have already considered, a Babel objection appears: Too much freedom to challenge and remake our own cultural environment will lead to a lack of shared meaning. As in those two cases, however, the fears of too active a community of meaning making are likely exaggerated. Loosening the dominant power of Hollywood and television over contemporary culture is likely to represent an incremental improvement, from the perspective of liberal political commitments. 
It will lead to a greater transparency of culture, and therefore a greater capacity for critical reflection, and it will provide more opportunities for participating in the creation of culture, for interpolating individual glosses on it, and for creating shared variations on common themes.
+
+2~ THE TRANSPARENCY OF INTERNET CULTURE
+
+If you run a search for "Barbie" on three separate search engines--Google, Overture, and Yahoo!--you will get quite different results. Table 8.1 lists these results in the order in which they appear on each search engine. Overture is a search engine that sells placement to the parties who are being searched. Hits on this search engine are therefore ranked based on whoever paid Overture the most in order to be placed highly in response to a query. On this list, none of the top ten results represent anything other than sales-related Barbie sites. Critical sites begin to appear only around the twenty-fifth result, presumably after all paying clients have been served. Google, as we already know, uses a radically decentralized mechanism for assigning relevance. It counts how many sites on the Web have linked to a particular site that has the search term in it, and ranks the search results by placing a site with a high number of incoming links above a site with a low number of incoming links. In effect, each Web site publisher "votes" for a site's ,{[pg 286]},
+
+!_ Table 8.1: Results for "Barbie" - Google versus Overture and Yahoo!
+
+table{~h c3; 33; 33; 33;
+
+Google
+Overture
+Yahoo!
+
+Barbie.com (Mattel's site)
+Barbie at Amazon.com
+Barbie.com
+
+AdiosBarbie.com: A Body Image for Every Body (site created by women critical of Barbie's projected body image)
+Barbie on Sale at KBToys
+Barbie Collector
+
+Barbie Bazar magazine (Barbie collectible news and Information)
+Target.com: Barbies
+My Scene.com
+
+If You Were a Barbie, Which Messed Up Version Would You Be?
+Barbie: Best prices and selection (bizrate.com)
+EverythingGirl.com
+
+Visible Barbie Project (macabre images of Barbie sliced as though in a science project)
+Barbies, New and Pre-owned at NetDoll
+Barbie History (fan-type history, mostly when various dolls were released)
+
+Barbie: The Image of Us All (1995 undergraduate paper about Barbie's cultural history)
+Barbies - compare prices (nextag.com)
+Mattel, Inc.
+
+Audigraph.free.fr (Barbie and Ken sex animation)
+Barbie Toys (complete line of Barbie electronics online)
+Spatula Jackson's Barbies (pictures of Barbie as various counter-cultural images)
+
+Suicide bomber Barbie (Barbie with explosives strapped to waist)
+Barbie Party supplies
+Barbie! (fan site)
+
+Barbies (Barbie dressed and painted as counter-cultural images)
+Barbie and her accessories online
+The Distorted Barbie
+
+}table
+
+,{[pg 287]},
+
+relevance by linking to it, and Google aggregates these votes and renders them on their results page as higher ranking. The little girl who searches for Barbie on Google will encounter a culturally contested figure. The same girl, searching on Overture, will encounter a commodity toy. In each case, the underlying efforts of Mattel, the producer of Barbie, have not changed. What is different is that in an environment where relevance is measured in nonmarket action--placing a link to a Web site because you deem it relevant to whatever you are doing with your Web site--as opposed to in dollars, Barbie has become a more transparent cultural object. It is easier for the little girl to see that the doll is not only a toy, not only a symbol of beauty and glamour, but also a symbol of how norms of female beauty in our society can be oppressive to women and girls. The transparency does not force the girl to choose one meaning of Barbie or another. It does, however, render transparent that Barbie can have multiple meanings and that choosing meanings is a matter of political concern for some set of people who coinhabit this culture. Yahoo! occupies something of a middle ground--its algorithm does link to two of the critical sites among the top ten, and within the top twenty, identifies most of the sites that appear on Google's top ten that are not related to sales or promotion.
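The link-counting mechanism described above can be sketched in deliberately simplified form. The pages, link graph, and function name below are hypothetical illustrations, and the sketch ignores the recursive weighting of Google's actual PageRank algorithm, in which votes from highly linked sites themselves count for more:

```python
# Minimal sketch of ranking by inbound links ("votes"), as described
# in the text. Every link counts equally here; a real engine weights
# votes recursively. All page and site names are hypothetical.

from collections import Counter

def rank_by_inbound_links(pages_with_term, links):
    """Rank pages containing the search term by how many other
    sites link to them. `links` is a list of (source, target) pairs."""
    votes = Counter(target for _, target in links)
    # Pages with more incoming links ("votes") rank higher.
    return sorted(pages_with_term, key=lambda p: votes[p], reverse=True)

# Hypothetical link graph for a query like "Barbie":
pages = ["barbie.com", "adiosbarbie.com", "store.example.com"]
links = [
    ("blog1.example", "adiosbarbie.com"),
    ("blog2.example", "adiosbarbie.com"),
    ("blog3.example", "barbie.com"),
    ("blog1.example", "barbie.com"),
    ("blog4.example", "adiosbarbie.com"),
    ("mall.example", "store.example.com"),
]

print(rank_by_inbound_links(pages, links))
```

Here the critical site, with three nonmarket "votes," outranks both the producer's site and the sales site, regardless of what anyone paid.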
+
+A similar phenomenon repeats itself in the context of explicit efforts to define Barbie--encyclopedias. There are, as of this writing, six general-interest online encyclopedias that are reasonably accessible on the Internet--that is to say, can be found with reasonable ease by looking at major search engines, sites that focus on education and parenting, and similar techniques. Five are commercial, and one is a quintessential commons-based peer-production project--/{Wikipedia}/. Of the five commercial encyclopedias, only one is available at no charge, the Columbia Encyclopedia, which is packaged in two primary forms--as encyclopedia.com and as part of Bartleby.com.~{ Encyclopedia.com is a part of Highbeam Research, Inc., which combines free and pay research services. Bartleby provides searching and access to many reference and high-culture works at no charge, combining it with advertising, a book store, and many links to Amazon.com or to the publishers for purchasing the printed versions of the materials. }~ The other four--Britannica, Microsoft's Encarta, the World Book, and Grolier's Online Encyclopedia--charge various subscription rates that range around fifty to sixty dollars a year. The Columbia Encyclopedia includes no reference to Barbie, the doll. The World Book has no "Barbie" entry, but does include a reference to Barbie as part of a fairly substantial article on "Dolls." The only information that is given is that the doll was introduced in 1959, that she has a large wardrobe, and in a different place, that dark-skinned Barbies were introduced in the 1980s. The article concludes with a guide of about three hundred words to good doll-collecting practices. Microsoft's ,{[pg 288]}, Encarta also includes Barbie in the article on "Doll," but provides a brief separate definition as well, which replicates the World Book information in slightly different form: 1959, large wardrobe, and introduction of dark-skinned Barbies. 
The online photograph available with the definition is of a brown-skinned, black-haired Barbie. Grolier's Online's major general-purpose encyclopedia, Americana, also has no entry for Barbie, but makes reference to the doll as part of the article on dolls. Barbie is described as a revolutionary new doll, made to resemble a teenage fashion model as part of a trend to realism in dolls. Grolier's Online does, however, include a more specialized American Studies encyclopedia that has an article on Barbie. That article heavily emphasizes the number of dolls sold and their value, provides some description of the chronological history of the doll, and makes opaque references to Barbie's physique and her emphasis on consumption. While the encyclopedia includes bibliographic references to critical works about Barbie, the textual references to cultural critique or problems she raises are very slight and quite oblique.
+
+Only two encyclopedias focus explicitly on Barbie's cultural meaning: Britannica and /{Wikipedia}/. The Britannica entry was written by M. G. Lord, a professional journalist who authored a book entitled Forever Barbie: The Unauthorized Biography of a Real Doll. It is a tightly written piece that underscores the critique of Barbie, both on body dimensions and its relationship to the body image of girls, and excessive consumerism. It also, however, makes clear the fact that Barbie was the first doll to give girls a play image that was not focused on nurturing and family roles, but was an independent, professional adult: playing roles such as airline pilot, astronaut, or presidential candidate. The article also provides brief references to the role of Barbie in a global market economy--its manufacture outside the United States, despite its marketing as an American cultural icon, and its manufacturer's early adoption of direct-to-children marketing. /{Wikipedia}/ provides more or less all the information provided in the Britannica definition, including a reference to Lord's own book, and adds substantially more material from within Barbie lore itself and a detailed time line of the doll's history. It has a strong emphasis on the body image controversy, and emphasizes both the critique that Barbie encourages girls to focus on shallow consumption of fashion accessories, and that she represents an unattainable lifestyle for most girls who play with her. The very first version of the definition, posted January 3, 2003, included only a brief reference to a change in Barbie's waistline as a result of efforts by parents and anorexia groups ,{[pg 289]}, concerned with the doll's impact on girls' nutrition. This remained the only reference to the critique of Barbie until December 15, 2003, when a user who was not logged in introduced a fairly roughly written section that emphasized both the body image concerns and the consumerism concerns with Barbie. 
During the same day, a number of regular contributors (that is, users with log-in names and their own talk pages) edited the new section and improved its language and flow, but kept the basic concepts intact. Three weeks later, on January 5, 2004, another regular user rewrote the section, reorganized the paragraphs so that the critique of Barbie's emphasis on high consumption was separated from the emphasis on Barbie's body dimensions, and also separated and clarified the qualifying claims that Barbie's independence and professional outfits may have had positive effects on girls' perception of possible life plans. This contributor also introduced a reference to the fact that the term "Barbie" is often used to denote a shallow or silly girl or woman. After that, with a change three weeks later from describing Barbie as available for most of her life only as "white Anglo-Saxon (and probably protestant)" to "white woman of apparently European descent," this part of the definition stabilized. As this description aims to make clear, /{Wikipedia}/ makes the history of the evolution of the article entirely transparent. The software platform allows any reader to look at prior versions of the definition, to compare specific versions, and to read the "talk" pages--the pages where the participants discuss their definition and their thoughts about it.
+
+The relative emphasis of Google and /{Wikipedia}/, on the one hand, and Overture, Yahoo!, and the commercial encyclopedias other than Britannica, on the other hand, is emblematic of a basic difference between markets and social conversations with regard to culture. If we focus on the role of culture as "common knowledge" or background knowledge, its relationship to the market--at least for theoretical economists--is exogenous. It can be taken as given and treated as "taste." In more practical business environments, culture is indeed a source of taste and demand, but it is not taken as exogenous. Culture, symbolism, and meaning, as they are tied with market-based goods, become a major focus of advertising and of demand management. No one who has been exposed to the advertising campaigns of Coca-Cola, Nike, or Apple Computers, as well as practically to any one of a broad range of advertising campaigns over the past few decades, can fail to see that these are not primarily a communication about the material characteristics or qualities of the products or services sold by the advertisers. ,{[pg 290]},
+
+They are about meaning. These campaigns try to invest the act of buying their products or services with a cultural meaning that they cultivate, manipulate, and try to generalize in the practices of the society in which they are advertising, precisely in order to shape taste. They offer an opportunity to generate rents, because the consumer has to have this company's shoe rather than that one, because that particular shoe makes the customer this kind of person rather than that kind--cool rather than stuffy, sophisticated rather than common. Neither the theoretical economists nor the marketing executives have any interest in rendering culture transparent or writable. Whether one treats culture as exogenous or as a domain for limiting the elasticity of demand for one's particular product, there is no impetus to make it easier for consumers to see through the cultural symbols, debate their significance, or make them their own. If there is business reason to do anything about culture, it is to try to shape the cultural meaning of an object or practice, in order to shape the demand for it, while keeping the role of culture hidden and assuring control over the careful cultural choreography of the symbols attached to the company. Indeed, in 1995, the U.S. Congress enacted a new kind of trademark law, the Federal Antidilution Act, which for the first time disconnects trademark protection from protecting consumers from confusion by knockoffs. The Antidilution Act of 1995 gives the owner of any famous mark--and only famous marks--protection from any use that dilutes the meaning that the brand owner has attached to its own mark. It can be entirely clear to consumers that a particular use does not come from the owner of the brand, and still, the owner has a right to prevent this use. 
While there is some constitutional free-speech protection for criticism, there is also a basic change in the understanding of trademark law-- from a consumer protection law intended to assure that consumers can rely on the consistency of goods marked in a certain way, to a property right in controlling the meaning of symbols a company has successfully cultivated so that they are, in fact, famous. This legal change marks a major shift in the understanding of the role of law in assigning control for cultural meaning generated by market actors.
+
+Unlike market production of culture, meaning making as a social, nonmarket practice has no similar systematic reason to accept meaning as it comes. Certainly, some social relations do. When girls play with dolls, collect them, or exhibit them, they are rarely engaged in reflection on the meaning of the dolls, just as fans of Scarlett O'Hara, of which a brief Internet search suggests there are many, are not usually engaged in critique of Gone with the ,{[pg 291]}, Wind as much as in replication and adoption of its romantic themes. Plainly, however, some conversations we have with each other are about who we are, how we came to be who we are, and whether we view the answers we find to these questions as attractive or not. In other words, some social interactions do have room for examining culture as well as inhabiting it, for considering background knowledge for what it is, rather than taking it as a given input into the shape of demand or using it as a medium for managing meaning and demand. People often engage in conversations with each other precisely to understand themselves in the world, their relationship to others, and what makes them like and unlike those others. One major domain in which this formation of self- and group identity occurs is the adoption or rejection of, and inquiry into, cultural symbols and sources of meaning that will make a group cohere or splinter; that will make people like or unlike each other.
+
+The distinction I draw here between market-based and nonmarket-based activities is purposefully overstated to clarify the basic structural differences between these two modes of organizing communications and the degree of transparency of culture they foster. As even the very simple story of how Barbie is defined in Internet communications demonstrates, practices are not usually as cleanly divided. Like the role of the elite newspapers in providing political coverage, discussed in chapter 6, some market-based efforts do provide transparency; indeed, their very market rationale pushes them to engage in a systematic effort to provide transparency. Google's strategy from the start has been to assume that what individuals are interested in is a reflection of what other individuals--who are interested in roughly the same area, but spend more time on it, that is, Web page authors--think is worthwhile. The company built its business model around rendering transparent what people and organizations that make their information available freely consider relevant. Occasionally, Google has had to deal with "search engine optimizers," who have advised companies on how to game its search engine to achieve a high ranking. Google has fought these optimizers, sometimes by outright blocking access to traffic that originates with them. In these cases, we see a technical competition between firms--the optimizers--whose interest is in capturing attention based on the interests of those who pay them, and a firm, Google, whose strategic choice is to render the distributed judgments of relevance on the Web more or less faithfully. There, the market incentive actually drives Google's investment affirmatively toward transparency. However, the market decision must be strategic, not tactical, for this ,{[pg 292]}, to be the case. Fear of litigation has, for example, caused Google to bury links that threatened it with liability. 
The most prominent of these cases occurred when the Church of Scientology threatened to sue Google over presenting links to www.xenu.net, a site dedicated to criticizing scientology. Google initially removed the link. However, its strategic interest was brought to the fore by widespread criticism of its decision on the Internet, and the firm relented. A search for "Scientology" as of this writing reveals a wide range of sites, many critical of scientology, and xenu.net is the second link. A search for "scientology Google" will reveal many stories, not quite flattering either to Google or to the Church of Scientology, as the top links. We see similar diversity among the encyclopedias. Britannica offered as clear a presentation of the controversy over Barbie as /{Wikipedia}/. Britannica has built its reputation and business model on delivery of the knowledge and opinions of those in positions to claim authority in the name of high culture professional competence, and delivering that perspective to those who buy the encyclopedia precisely to gain access to that kind of knowledge base, judgment, and formal credibility. In both cases, the long-term business model of the companies calls for reflecting the views and insights of agents who are not themselves thoroughly within the market--whether they are academics who write articles for Britannica, or the many and diverse Web page owners on the Internet. In both cases, these business models lead to a much more transparent cultural representation than what Hollywood or Madison Avenue produce. Just as not all market-based organizations render culture opaque, not all nonmarket or social-relations-based conversations aim to explore and expose cultural assumptions. Social conversations can indeed be among the most highly deferential to cultural assumptions, and can repress critique more effectively and completely than market-based conversations. 
Whether in communities of unquestioning religious devotion or those that enforce strict egalitarian political correctness, we commonly see, in societies both traditional and contemporary, significant social pressures against challenging background cultural assumptions within social conversations. We have, for example, always had more cultural experimentation and fermentation in cities, where social ties are looser and communities can exercise less social control over questioning minds and conversation. Ubiquitous Internet communications expand something of the freedom of city parks and streets, but also the freedom of cafes and bars--commercial platforms for social interaction--so that it is available everywhere.
+
+The claim I make here, as elsewhere throughout this book, is not that ,{[pg 293]}, nonmarket production will, in fact, generally displace market production, or that such displacement is necessary to achieve the improvement in the degree of participation in cultural production and legibility. My claim is that the emergence of a substantial nonmarket alternative path for cultural conversation increases the degrees of freedom available to individuals and groups to engage in cultural production and exchange, and that doing so increases the transparency of culture to its inhabitants. It is a claim tied to the particular technological moment and its particular locus of occurrence--our networked communications environment. It is based on the fact that it is displacing the particular industrial form of information and cultural production of the twentieth century, with its heavy emphasis on consumption in mass markets. In this context, the emergence of a substantial sector of nonmarket production, and of peer production, or the emergence of individuals acting cooperatively as a major new source of defining widely transmissible statements and conversations about the meaning of the culture we share, makes culture substantially more transparent and available for reflection, and therefore for revision.
+
+Two other dimensions are made very clear by the /{Wikipedia}/ example. The first is the degree of self-consciousness that is feasible with open, conversation-based definition of culture that is itself rendered more transparent. The second is the degree to which the culture is writable, the degree to which individuals can participate in mixing and matching and making their own emphases, for themselves and for others, on the existing set of symbols. Fisher, for example, has used the term "semiotic democracy" to describe the potential embodied in the emerging openness of Internet culture to participation by users. The term originates from Fiske's Television Culture as a counterpoint to the claim that television was actually a purely one-way medium that only enacted culture on viewers. Instead, Fiske claimed that viewers resist these meanings, put them in their own contexts, use them in various ways, and subvert them to make their own meaning. However, much of this resistance is unstated, some of it unself-conscious. There are the acts of reception and interpretation, or of using images and sentences in different contexts of life than those depicted in the television program; but these acts are local, enacted within small-scale local cultures, and are not the result of a self-conscious conversation among users of the culture about its limits, its meanings, and its subversions. One of the phenomena we are beginning to observe on the Internet is an emerging culture of conversation about culture, which is both self-conscious and informed by linking or quoting from specific ,{[pg 294]}, reference points. The /{Wikipedia}/ development of the definition of Barbie, its history, and the availability of a talk page alongside it for discussion about the definition, are an extreme version of self-conscious discussion about culture. 
The basic tools enabled by the Internet--cutting, pasting, rendering, annotating, and commenting--make active utilization and conscious discussion of cultural symbols and artifacts easier to create, sustain, and read more generally.
+
+The flexibility with which cultural artifacts--meaning-carrying objects-- can be rendered, preserved, and surrounded by different context and discussion makes it easy for anyone, anywhere, to make a self-conscious statement about culture. They enable what Balkin has called "glomming on"-- taking that which is common cultural representation and reworking it into your own move in a cultural conversation.~{ Jack Balkin, "Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society," New York University Law Review 79 (2004): 1. }~ The low cost of storage, and the ubiquitous possibility of connecting from any connection location to any storage space make any such statement persistent and available to others. The ease of commenting, linking, and writing to other locations of statements, in turn, increases the possibility of response and counterresponse. These conversations can then be found by others, and at least read if not contributed to. In other words, as with other, purposeful peer-produced projects like /{Wikipedia}/, the basic characteristics of the Internet in general and the World Wide Web in particular have made it possible for anyone, anywhere, for any reason to begin to contribute to an accretion of conversation about well-defined cultural objects or about cultural trends and characteristics generally. These conversations can persist across time and exist across distance, and are available for both active participation and passive reading by many people in many places. The result is, as we are already seeing it, the emergence of widely accessible, self-conscious conversation about the meaning of contemporary culture by those who inhabit it. This "writability" is also the second characteristic that the /{Wikipedia}/ definition process makes very clear, and the second major change brought about by the networked information economy in the digital environment.
+
+2~ THE PLASTICITY OF INTERNET CULTURE: THE FUTURE OF HIGH-PRODUCTION-VALUE FOLK CULTURE
+
+I have already described the phenomena of blogs, of individually created movies like /{The Jedi Saga}/, and of Second Life, the game platform where ,{[pg 295]}, users have made all the story lines and all the objects, while the commercial provider created the tools and hosts the platform for their collective storytelling. We are seeing the broad emergence of business models that are aimed precisely at providing users with the tools to write, compose, film, and mix existing materials, and to publish, play, render, and distribute what we have made to others, everywhere. Blogger, for example, provides simple tools for online publication of written materials. Apple Computer offers a product called GarageBand, which lets users compose and play their own music. It includes a large library of prerecorded building blocks--different instruments, riffs, loops--and an interface that allows the user to mix, match, record and add their own, and produce their own musical composition and play it. Video-editing utilities, coupled with the easy malleability of digital video, enable people to make films--whether about their own lives or, as in the case of /{The Jedi Saga}/, of fantasies. The emerging phenomenon of Machinima--short movies that are made using game platforms--underscores how digital platforms can also become tools for creation in unintended ways. Creators use the 3-D rendering capabilities of an existing game, but use the game to stage a movie scene or video presentation, which they record as it is played out. This recording is then distributed on the Internet as a standalone short film. While many of these are still crude, the basic possibilities they present as modes of making movies are significant. Needless to say, not everyone is Mozart. Not everyone is even a reasonably talented musician, author, or filmmaker. 
Much of what can be and is done is not wildly creative, and much of it takes the form of Balkin's "glomming on": That is, users take existing popular culture, or otherwise professionally created culture, and perform it, sometimes with an effort toward fidelity to the professionals, but often with their own twists, making it their own in an immediate and unmediated way. However, just as learning how to read music and play an instrument can make one a better-informed listener, so too a ubiquitous practice of making cultural artifacts of all forms enables individuals in society to be better readers, listeners, and viewers of professionally produced culture, as well as contributors of our own statements into this mix of collective culture.
+
+People have always created their own culture. Popular music did not begin with Elvis. There has always been a folk culture--of music, storytelling, and theater. What happened over the course of the twentieth century in advanced economies, and to a lesser extent but still substantially around the globe, is the displacement of folk culture by commercially produced mass popular ,{[pg 296]}, culture. The role of the individuals and communities vis-à-vis cultural artifacts changed, from coproducers and replicators to passive consumers. The time frame where elders might tell stories, children might put on a show for the adults, or those gathered might sing songs came to be occupied by background music, from the radio or phonograph, or by television. We came to assume a certain level of "production values"--quality of sound and image, quality of rendering and staging--that are unattainable with our crude means and our relatively untrained voices or use of instruments. Not only was time for local popular creation displaced, therefore, but also a sense of what counted as engaging, delightful articulation of culture. In a now-classic article from 1937, "The Work of Art in the Age of Mechanical Reproduction," Walter Benjamin authored one of the only instances of critical theory that took an optimistic view of the emergence of popular culture in the twentieth century as a potentially liberating turn. Benjamin's core claim was that with mechanical replication of art, the "aura" that used to attach to single works of art is dissipated. Benjamin saw this aura of unique works of art as reinforcing a distance between the masses and the representations of culture, reinforcing the perception of their weakness and distance from truly great things. He saw in mechanical reproducibility the possibility of bringing copies down to earth, to the hands of the masses, and reversing the sense of distance and relative weakness of the mass culture. 
What Benjamin did not yet see were the ways in which mechanical reproduction would insert a different kind of barrier between many dispersed individuals and the capacity to make culture. The barrier of production costs, production values, and the star system that came along with them, replaced the iconic role of the unique work of art with new, but equally high barriers to participation in making culture. It is precisely those barriers that the capabilities provided by digital media begin to erode. It is becoming feasible for users to cut and paste, "glom on," to existing cultural materials; to implement their intuitions, tastes, and expressions through media that render them with newly acceptable degrees of technical quality, and to distribute them among others, both near and far. As Hollywood begins to use more computer-generated special effects and, more important, entirely computer-generated films--2004 alone saw major releases like Shrek 2, The Incredibles, and Polar Express--and as the quality of widely available image-generation software and hardware improves, the production value gap between individual users or collections of users and the commercial-professional studios will decrease. As this book is completed in early 2005, nothing makes clearer the value of retelling basic stories through ,{[pg 297]}, the prism of contemporary witty criticism of prevailing culture than do Shrek 2 and The Incredibles, and, equally, nothing exposes the limits of purely technical, movie-star-centered quality more than the lifelessness of Polar Express. As online games like Second Life provide users with new tools and platforms to tell and retell their own stories, or their own versions of well-trodden paths, as digital multimedia tools do the same for individuals outside of the collaborative storytelling platforms, we can begin to see a reemergence of folk stories and songs as widespread cultural practices. 
And as network connections become ubiquitous, and search engines and filters improve, we can begin to see this folk culture emerging to play a substantially greater role in the production of our cultural environment.
+
+2~ A PARTICIPATORY CULTURE: TOWARD POLICY
+
+Culture is too broad a concept to suggest an all-encompassing theory centered on technology in general or the Internet in particular. My focus is therefore much narrower, along two dimensions. First, I am concerned with thinking about the role of culture in human interactions that can be understood in terms of basic liberal political commitments--that is to say, a concern for the degree of freedom individuals have to form and pursue a life plan, and the degree of participation they can exercise in debating and determining collective action. Second, my claim is focused on the relative attractiveness of the twentieth-century industrial model of cultural production and what appears to be emerging as the networked model in the early twenty-first century, rather than on the relationship of the latter to some theoretically defined ideal culture.
+
+A liberal political theory cannot wish away the role of culture in structuring human events. We engage in wide ranges of social practices of making and exchanging symbols that are concerned with how our life is and how it might be, with which paths are valuable for us as individuals to pursue and which are not, and with what objectives we as collective communities-- from the local to the global--ought to pursue. This unstructured, ubiquitous conversation is centrally concerned with things that a liberal political system speaks to, but it is not amenable to anything like an institutionalized process that could render its results "legitimate." Culture operates as a set of background assumptions and common knowledge that structure our understanding of the state of the world and the range of possible actions and outcomes open to us individually and collectively. It constrains the range of conversational ,{[pg 298]}, moves open to us to consider what we are doing and how we might act differently. In these regards, it is a source of power in the critical-theory sense--a source that exerts real limits on what we can do and how we can be. As a source of power, it is not a natural force that stands apart from human endeavor and is therefore a fact that is not itself amenable to political evaluation. As we see well in the efforts of parents and teachers, advertising agencies and propaganda departments, culture is manipulable, manageable, and a direct locus of intentional action aimed precisely at harnessing its force as a way of controlling the lives of those who inhabit it. At the same time, however, culture is not the barrel of a gun or the chains of a dungeon. There are limits on the degree to which culture can actually control those who inhabit it. 
Those degrees depend to a great extent on the relative difficulty or ease of seeing through culture, of talking about it with others, and of seeing other alternatives or other ways of symbolizing the possible and the desirable.
+
+Understanding that culture is a matter of political concern even within a liberal framework does not, however, translate into an agenda of intervention in the cultural sphere as an extension of legitimate political decision making. Cultural discourse is systematically not amenable to formal regulation, management, or direction from the political system. First, participation in cultural discourse is intimately tied to individual self-expression, and its regulation would therefore require levels of intrusion in individual autonomy that would render any benefits in terms of a participatory political system Pyrrhic indeed. Second, culture is much more intricately woven into the fabric of everyday life than political processes and debates. It is language-- the basic framework within which we can comprehend anything, and through which we do so everywhere. To regulate culture is to regulate our very comprehension of the world we occupy. Third, therefore, culture infuses our thoughts at a wide range of levels of consciousness. Regulating culture, or intervening in its creation and direction, would entail self-conscious action to affect citizens at a subconscious or weakly conscious level. Fourth, and finally, there is no Archimedean point outside of culture on which to stand and decide--let us pour a little bit more of this kind of image or that, so that we achieve a better consciousness, one that better fits even our most just and legitimately arrived-at political determinations.
+
+A systematic commitment to avoid direct intervention in cultural exchange does not leave us with nothing to do or say about culture, and about law or policy as it relates to it. What we have is the capacity and need ,{[pg 299]}, to observe a cultural production and exchange system and to assure that it is as unconstraining and free from manipulation as possible. We must diagnose what makes a culture more or less opaque to its inhabitants; what makes it more or less liable to be strictly constraining of the conversations that rely on it; and what makes the possibility of many and diverse sources and forms of cultural intervention more or less likely. On the background of this project, I suggest that the emergence of Internet culture is an attractive development from the perspective of liberal political theory. This is so both because of the technical characteristics of digital objects and computer network communications, and because of the emerging industrial structure of the networked information economy--typified by the increased salience of nonmarket production in general and of individual production, alone or in concert with others, in particular. The openness of digital networks allows for a much wider range of perspectives on any particular symbol or range of symbols to be visible for anyone, everywhere. The cross section of views that makes it easy to see that Barbie is a contested symbol makes it possible more generally to observe very different cultural forms and perspectives for any individual. This transparency of background unstated assumptions and common knowledge is the beginning of self-reflection and the capacity to break out of given molds. 
Greater transparency is also a necessary element in, and a consequence of, collaborative action, as various participants either explicitly, or through negotiating the divergence of their nonexplicit different perspectives, come to a clearer statement of their assumptions, so that these move from the background to the fore, and become more amenable to examination and revision. The plasticity of digital objects, in turn, improves the degree to which individuals can begin to produce a new folk culture, one that already builds on the twentieth-century culture that was highly unavailable for folk retelling and re-creation. This plasticity, and the practices of writing your own culture, then feed back into the transparency, both because the practice of making one's own music, movie, or essay makes one a more self-conscious user of the cultural artifacts of others, and because in retelling anew known stories, we again come to see what the originals were about and how they do, or do not, fit our own sense of how things are and how they ought to be. There is emerging a broad practice of learning by doing that makes the entire society more effective readers and writers of their own culture.
+
+By comparison to the highly choreographed cultural production system of the industrial information economy, the emergence of a new folk culture ,{[pg 300]}, and of a wider practice of active personal engagement in the telling and retelling of basic cultural themes and emerging concerns and attachments offers new avenues for freedom. It makes culture more participatory, and renders it more legible to all its inhabitants. The basic structuring force of culture is not eliminated, of course. The notion of floating monads disconnected from a culture is illusory. Indeed, it is undesirable. However, the framework that culture offers us, the language that makes it possible for us to make statements and incorporate the statements of others in the daily social conversation that pervades life, is one that is more amenable to our own remaking. We become more sophisticated users of this framework, more self-conscious about it, and have a greater capacity to recognize, challenge, and change that which we find oppressive, and to articulate, exchange, and adopt that which we find enabling. As chapter 11 makes clear, however, the tension between the industrial model of cultural production and the networked information economy is nowhere more pronounced than in the question of the degree to which the new folk culture of the twenty-first century will be permitted to build upon the outputs of the twentieth-century industrial model. In this battle, the stakes are high. One cannot make new culture ex nihilo. We are as we are today, as cultural beings, occupying a set of common symbols and stories that are heavily based on the outputs of that industrial period. If we are to make this culture our own, render it legible, and make it into a new platform for our needs and conversations today, we must find a way to cut, paste, and remix present culture. 
And it is precisely this freedom that most directly challenges the laws written for the twentieth-century technology, economy, and cultural practice. ,{[pg 301]},
+
+1~9 Chapter 9 - Justice and Development
+
+How will the emergence of a substantial sector of nonmarket, commons-based production in the information economy affect questions of distribution and human well-being? The pessimistic answer is, very little. Hunger, disease, and deeply rooted racial, ethnic, or class stratification will not be solved by a more decentralized, nonproprietary information production system. Without clean water, basic literacy, moderately well-functioning governments, and universal practical adoption of the commitment to treat all human beings as fundamentally deserving of equal regard, the fancy Internet-based society will have little effect on the billions living in poverty or deprivation, either in the rich world, or, more urgently and deeply, in poor and middle-income economies. There is enough truth in this pessimistic answer to require us to tread lightly in embracing the belief that the shift to a networked information economy can indeed have meaningful effects in the domain of justice and human development.
+
+Despite the caution required to avoid overstating the role that the networked information economy can play in solving issues of justice, ,{[pg 302]}, it is important to recognize that information, knowledge, and culture are core inputs into human welfare. Agricultural knowledge and biological innovation are central to food security. Medical innovation and access to its fruits are central to living a long and healthy life. Literacy and education are central to individual growth, to democratic self-governance, and to economic capabilities. Economic growth itself is critically dependent on innovation and information. For all these reasons, information policy has become a critical element of development policy and the question of how societies attain and distribute human welfare and well-being. Access to knowledge has become central to human development. The emergence of the networked information economy offers definable opportunities for improvement in the normative domain of justice, as it does for freedom, by comparison to what was achievable in the industrial information economy.
+
+We can analyze the implications of the emergence of the networked information economy for justice or equality within two quite different frames. The first is liberal, and concerned primarily with some form of equality of opportunity. The second is social-democratic, or development oriented, and focused on universal provision of a substantial set of elements of human well-being. The availability of information from nonmarket sources and the range of opportunities to act within a nonproprietary production environment improve distribution in both these frameworks, but in different ways. Despite the differences, within both frameworks the effect crystallizes into one of access--access to opportunities for one's own action, and access to the outputs and inputs of the information economy. The industrial economy creates cost barriers and transactional-institutional barriers to both these domains. The networked information economy reduces both types of barriers, or creates alternative paths around them. It thereby equalizes, to some extent, both the opportunities to participate as an economic actor and the practical capacity to partake of the fruits of the increasingly information-based global economy.
+
+The opportunities that the networked information economy offers, however, often run counter to the central policy drive of both the United States and the European Union in the international trade and intellectual property systems. These two major powers have systematically pushed for ever-stronger proprietary protection and increasing reliance on strong patents, copyrights, and similar exclusive rights as the core information policy for growth and development. Chapter 2 explains why such a policy is suspect from a purely economic perspective concerned with optimizing innovation. ,{[pg 303]}, A system that relies too heavily on proprietary approaches to information production is not, however, merely inefficient. It is unjust. Proprietary rights are designed to elicit signals of people's willingness and ability to pay. In the presence of extreme distribution differences like those that characterize the global economy, the market is a poor measure of comparative welfare. A system that signals what innovations are most desirable and rations access to these innovations based on ability, as well as willingness, to pay, overrepresents welfare gains of the wealthy and underrepresents welfare gains of the poor. Twenty thousand American teenagers can simply afford, and will be willing to pay, much more for acne medication than the more than a million Africans who die of malaria every year can afford to pay for a vaccine. A system that relies too heavily on proprietary models for managing information production and exchange is unjust because it is geared toward serving small welfare increases for people who can pay a lot for incremental improvements in welfare, and against providing large welfare increases for people who cannot pay for what they need.
+
+2~ LIBERAL THEORIES OF JUSTICE AND THE NETWORKED INFORMATION ECONOMY
+
+Liberal theories of justice can be categorized according to how they characterize the sources of inequality in terms of luck, responsibility, and structure. By luck, I mean reasons for the poverty of an individual that are beyond his or her control, and that are part of that individual's lot in life unaffected by his or her choices or actions. By responsibility, I mean causes for the poverty of an individual that can be traced back to his or her actions or choices. By structure, I mean causes for the inequality of an individual that are beyond his or her control, but are traceable to institutions, economic organizations, or social relations that form a society's transactional framework and constrain the behavior of the individual or undermine the efficacy of his or her efforts at self-help.
+
+We can think of John Rawls's /{Theory of Justice}/ as based on a notion that the poorest people are the poorest because of dumb luck. His proposal for a systematic way of defending and limiting redistribution is the "difference principle." A society should organize its redistribution efforts in order to make those who are least well-off as well-off as they can be. The theory of desert is that, because any of us could in principle be the victim of this dumb luck, we would all have agreed, if none of us had known where we ,{[pg 304]}, would be on the distribution of bad luck, to minimize our exposure to really horrendous conditions. The practical implication is that while we might be bound to sacrifice some productivity to achieve redistribution, we cannot sacrifice too much. If we did that, we would most likely be hurting, rather than helping, the weakest and poorest. Libertarian theories of justice, most prominently represented by Robert Nozick's entitlement theory, on the other hand, tend to ignore bad luck or impoverishing structure. They focus solely on whether the particular holdings of a particular person at any given moment are unjustly obtained. If they are not, they may not justly be taken from the person who holds them. Explicitly, these theories ignore the poor. As a practical matter and by implication, they treat responsibility as the source of the success of the wealthy, and by negation, the plight of the poorest--leading them to be highly resistant to claims of redistribution.
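Rawls's difference principle, as described above, can be stated as a simple maximin selection rule: among feasible welfare distributions, prefer the one whose worst-off member is best off. The sketch below is a toy formalization of that rule, not anything drawn from the text itself; the regime names and welfare figures are invented purely to illustrate the book's point that, past some level, further redistribution can reduce productivity enough to leave even the poorest worse off.

```python
# Rawls's difference principle as a maximin rule: among feasible welfare
# distributions, choose the one that maximizes the minimum welfare.

def difference_principle(distributions):
    """Return the distribution whose worst-off member is best off."""
    return max(distributions, key=min)

# Hypothetical welfare levels for four income quartiles under three regimes.
regimes = {
    "laissez_faire":           [1, 4, 20, 75],
    "moderate_redistribution": [5, 8, 18, 60],
    "heavy_redistribution":    [4, 5, 6, 7],  # lost productivity hurts everyone
}

chosen = difference_principle(regimes.values())
# The maximin rule picks moderate redistribution: its worst-off member (5)
# is better off than under either laissez-faire (1) or heavy redistribution (4).
```

Note how the rule itself embeds the book's caveat: it sanctions redistribution only up to the point where the poorest still benefit.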
+
+The basic observation that an individual's economic condition is a function of his or her own actions does not necessarily resolve into a blanket rejection of redistribution, as we see in the work of other liberals. Ronald Dworkin's work on inequality offers a critique of Rawls's, in that it tries to include a component of responsibility alongside recognition of the role of luck. In his framework, if (1) resources were justly distributed and (2) bad luck in initial endowment were compensated through some insurance scheme, then poverty that resulted from bad choices, not bad luck, would not deserve help through redistribution. While Rawls's theory ignores personal responsibility and, in this regard, is less attractive from the perspective of a liberal theory that respects individual autonomy, it has the advantage of offering a much clearer metric for a just system. One can measure the welfare of the poorest under different redistribution rules in market economies. One can then see how much redistribution is too much, in the sense that welfare is reduced to the point that the poorest are actually worse off than they would be under a less-egalitarian system. You could compare the Soviet Union, West Germany, and the United States of the late 1960s-early 1970s, and draw conclusions. Dworkin's insurance scheme would require too fine an ability to measure the expected incapacitating effect of various low endowments--from wealth to intelligence to health--in a market economy, and to calibrate wealth endowments to equalize them, to offer a measuring rod for policy. It does, however, have the merit of distinguishing--for purposes of judging desert to benefit from society's redistribution efforts--between a child of privilege who fell into poverty through bad investments coupled with sloth and a person born into a poor family with severe mental ,{[pg 305]}, defects.
Bruce Ackerman's /{Social Justice and the Liberal State}/ also provides a mechanism for differentiating the deserving from the undeserving, but adds policy tractability by including the dimension of structure alongside luck and responsibility. In addition to the dumb luck of how wealthy your parents are when you are born and what genetic endowment you are born with, there are also questions of the education system you grow up with and the transactional framework through which you live your life--which opportunities it affords, and which it cuts off or burdens. His proposals therefore seek to provide basic remedies for those failures, to the extent that they can, in fact, be remedied. One such proposal is Anne Alstott and Ackerman's idea of a government-funded personal endowment at birth, coupled with the freedom to squander it and suffer the consequential reduction in welfare.~{ Anne Alstott and Bruce Ackerman, The Stakeholder Society (New Haven, CT: Yale University Press, 1999). }~ He also emphasizes a more open and egalitarian transactional framework that would allow anyone access to opportunities to transact with others, rather than depending on, for example, unequal access to social links as a precondition to productive behavior.
+
+The networked information economy improves justice from the perspective of every single one of these theories of justice. Imagine a good that improves the welfare of its users--it could be software, or an encyclopedia, or a product review. Now imagine a policy choice that could make production of that good on a nonmarket, peer-production basis too expensive to perform, or make it easy for an owner of an input to exclude competitors-- both market-based and social-production based. For example, a government might decide to: recognize patents on software interfaces, so that it would be very expensive to buy the right to make your software work with someone else's; impose threshold formal education requirements on the authors of any encyclopedia available for school-age children to read, or impose very strict copyright requirements on using information contained in other sources (as opposed to only prohibiting copying their language) and impose high penalties for small omissions; or give the putative subjects of reviews very strong rights to charge for the privilege of reviewing a product--such as by expanding trademark rights to refer to the product, or prohibiting a reviewer to take apart a product without permission. The details do not matter. I offer them only to provide a sense of the commonplace kinds of choices that governments could make that would, as a practical matter, differentially burden nonmarket producers, whether nonprofit organizations or informal peer-production collaborations. Let us call a rule set that is looser from the perspective of access to existing information resources Rule Set A, and a rule ,{[pg 306]}, set that imposes higher costs on access to information inputs Rule Set B. As explained in chapter 2, it is quite likely that adopting B would depress information production and innovation, even if it were intended to increase the production of information by, for example, strengthening copyright or patent. 
This is because the added incentives for some producers who produce with the aim of capturing the rents created by copyright or patents must be weighed against their costs. These include (a) the higher costs even for those producers and (b) the higher costs for all producers who do not rely on exclusive rights at all, but instead use either a nonproprietary market model--like service--or a nonmarket model, like nonprofits and individual authors, and that do not benefit in any way from the increased appropriation. However, let us make here a much weaker assumption--that an increase in the rules of exclusion will not affect overall production. Let us assume that there will be exactly enough increased production by producers who rely on a proprietary model to offset the losses of production in the nonproprietary sectors.
+
+It is easy to see why a policy shift from A to B would be regressive from the perspective of theories like Rawls's or Ackerman's. Under Rule A, let us say that in this state of affairs, State A, there are five online encyclopedias. One of them is peer produced and freely available for anyone to use. Rule B is passed. In the new State B, there are still five encyclopedias. It has become too expensive to maintain the free encyclopedia, however, and more profitable to run commercial online encyclopedias. A new commercial encyclopedia has entered the market in competition with the four commercial encyclopedias that existed in State A, and the free encyclopedia folded. From the perspective of the difference principle, we can assume that the change has resulted in a stable overall welfare in the Kaldor-Hicks sense. (That is, overall welfare has increased enough so that, even though some people may be worse off, those who have been made better off are sufficiently better off that they could, in principle, compensate everyone who is worse off enough to make everyone either better off or no worse off than they were before.) There are still five encyclopedias. However, now they all charge a subscription fee. The poorest members of society are worse off, even if we posit that total social welfare has remained unchanged. In State A, they had access for free to an encyclopedia. They could use the information (or the software utility, if the example were software) without having to give up any other sources of welfare. In State B, they must choose between the same amount ,{[pg 307]}, of encyclopedia usage as they had before, and less of some other source of welfare, or the same welfare from other sources, and no encyclopedia. 
If we assume, contrary to theory and empirical evidence from the innovation economics literature, that the move to State B systematically and predictably improves the incentives and investments of the commercial producers, that would still by itself not justify the policy shift from the perspective of the difference principle. One would have to sustain a much stricter claim: that the marginal improvement in the quality of the encyclopedias, and a decline in price from the added market competition that was not felt by the commercial producers when they were competing with the free, peer-produced version, would still make the poorest better off, even though they now must pay for any level of encyclopedia access, than they were when they had four commercial competitors with their prior levels of investment operating in a competitive landscape of four commercial and one free encyclopedia.
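The State A / State B comparison above can be made concrete with a toy model. All of the numbers below (the income distribution, the subscription fee, and the value figures) are invented for illustration only: the sketch shows how aggregate welfare can rise in the Kaldor-Hicks sense--the winners could, in principle, compensate the losers--while the poorest, who cannot pay the subscription fee, are strictly worse off because the compensation is never actually paid.

```python
def encyclopedia_welfare(incomes, fee, value):
    """Per-person net welfare, and aggregate welfare (consumer surplus plus
    producer revenue), from encyclopedia access at a given subscription fee.
    A person uses the encyclopedia only if they can afford the fee; the fee
    is a transfer to producers, so it nets out of the aggregate but not out
    of the individual user's welfare."""
    individual = [value - fee if y >= fee else 0.0 for y in incomes]
    producer_revenue = fee * sum(1 for y in incomes if y >= fee)
    return sum(individual) + producer_revenue, individual

# Hypothetical income distribution for a five-person society.
incomes = [1, 2, 10, 20, 50]

# State A: one free, peer-produced encyclopedia; everyone gets the same value.
agg_a, ind_a = encyclopedia_welfare(incomes, fee=0.0, value=5.0)

# State B: subscription-only encyclopedias, assumed higher quality (value=9),
# so aggregate welfare rises in the Kaldor-Hicks sense...
agg_b, ind_b = encyclopedia_welfare(incomes, fee=3.0, value=9.0)

# ...yet the two poorest members, priced out of access, fall from 5 to 0.
```

Running the numbers: aggregate welfare rises from 25 to 27, a Kaldor-Hicks improvement, while the welfare of the poorest falls from 5 to 0--exactly the regressive pattern the difference principle flags.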
+
+From the perspective of Ackerman's theory of justice, the advantages of the networked information economy are clearer yet. Ackerman characterizes some of the basic prerequisites for participating in a market economy as access to a transactional framework, to basic information, and to an adequate educational endowment. To the extent that any of the basic utilities required to participate in an information economy at all are available without sensitivity to price--that is, free to anyone--they are made available in a form that is substantially insulated from the happenstance of initial wealth endowments. In this sense at least, the development of a networked information economy overcomes some of the structural components of continued poverty--lack of access to information about market opportunities for production and cheaper consumption, about the quality of goods, or lack of communications capacity to people or places where one can act productively. While Dworkin's theory does not provide a similarly clear locus for mapping the effect of the networked information economy on justice, there is some advantage, and no loss, from this perspective, in having more of the information economy function on a nonmarket basis. As long as one recognizes bad luck as a partial reason for poverty, then having information resources available for free use is one mechanism of moderating the effects of bad luck in endowment, and lowers the need to compensate for those effects insofar as they translate to lack of access to information resources. This added access results from voluntary communication by the producers and a respect for their willingness to communicate what they produced freely. ,{[pg 308]}, While the benefits flow to individuals irrespective of whether their present state is due to luck or irresponsibility, it does not involve a forced redistribution from responsible individuals to irresponsible individuals.
+
+From the perspective of liberal theories of justice, then, the emergence of the networked information economy is an unqualified improvement. Except under restrictive assumptions inconsistent with what we know as a matter of both theory and empirics about the economics of innovation and information production, the emergence of a substantial sector of information production and exchange that is based on social transactional frameworks, rather than on a proprietary exclusion business model, improves distribution in society. Its outputs are available freely to anyone, as basic inputs into their own actions--whether market-based or nonmarket-based. The facilities it produces improve the prospects of all who are connected to the Internet-- whether they are seeking to use it as consumers or as producers. It softens some of the effects of resource inequality. It offers platforms for greater equality of opportunity to participate in market- and nonmarket-based enterprises. This characteristic is explored in much greater detail in the next segment of this chapter, but it is important to emphasize here that equality of opportunity to act in the face of unequal endowment is central to all liberal theories of justice. As a practical matter, these characteristics of the networked information economy make the widespread availability of Internet access a more salient objective of redistribution policy. They make policy debates, which are mostly discussed in today's political sphere in terms of innovation and growth, and sometimes in terms of freedom, also a matter of liberal justice.
+
+2~ COMMONS-BASED STRATEGIES FOR HUMAN WELFARE AND DEVELOPMENT
+
+There is a long social-democratic tradition of focusing not on theoretical conditions of equality in a liberal society, but on the actual well-being of human beings in a society. This conception of justice shares with liberal theories the acceptance of a market economy as a fundamental component of free societies. However, its emphasis is not on equality of opportunity, or even on some level of social insurance that still allows the slothful to fall, but on assuring a basic degree of well-being to everyone in society. Particularly in the European social democracies, the ambition has been to make that basic level quite high, but the basic framework of even American Social Security-- ,{[pg 309]}, unless it is fundamentally changed in the coming years--has this characteristic. The literature on global poverty and its alleviation was initially independent of this concern, but as global communications and awareness increased, and as the conditions of life in most advanced market economies for most people improved, the lines between the concerns with domestic conditions and global poverty blurred. We have seen an increasing merging of the concerns into a concern for basic human well-being everywhere. It is represented in no individual's work more clearly than in that of Amartya Sen, who has focused on the centrality of development everywhere to the definition not only of justice, but of freedom as well.
+
+The emerging salience of global development as the core concern of distributive justice is largely based on the sheer magnitude of the problems faced by much of the world's population.~{ Numbers are all taken from the 2004 Human Development Report (New York: UN Development Programme, 2004). }~ In the world's largest democracy, 80 percent of the population--slightly more people than the entire population of the United States and the expanded European Union combined-- lives on less than two dollars a day, 39 percent of adults are illiterate, and 47 percent of children under the age of five are underweight for their age. In Africa's wealthiest democracy, a child at birth has a 45 percent probability of dying before he or she reaches the age of forty. India and South Africa are far from being the worst-off countries. The scope of destitution around the globe exerts a moral pull on any acceptable discussion of justice. Intuitively, these problems seem too fundamental to be seriously affected by the networked information economy--what has /{Wikipedia}/ got to do with the 49 percent of the population of Congo that lacks sustainable access to improved water sources? It is, indeed, important not to be overexuberant about the importance of information and communications policy in the context of global human development. But it is also important not to ignore the centrality of information to most of our more-advanced strategies for producing core components of welfare and development. To see this, we can begin by looking at the components of the Human Development Index (HDI).
+
+The Human Development Report was initiated in 1990 as an effort to measure a broad set of components of what makes a life livable, and, ultimately, attractive. It was developed in contradistinction to indicators centered on economic output, like gross domestic product (GDP) or economic growth alone, in order to provide a more refined sense of what aspects of a nation's economy and society make it more or less livable. It allows a more nuanced approach toward improving the conditions of life everywhere. As ,{[pg 310]}, Sen pointed out, the people of China, Kerala in India, and Sri Lanka lead much longer and healthier lives than the people of other countries, like Brazil or South Africa, which have a higher per capita income.~{ Amartya Sen, Development as Freedom (New York: Knopf, 1999), 46-47. }~ The Human Development Report measures a wide range of outcomes and characteristics of life. The major composite index it tracks is the Human Development Index. The HDI tries to capture the capacity of people to live long and healthy lives, to be knowledgeable, and to have material resources sufficient to provide a decent standard of living. It does so by combining three major components: life expectancy at birth, adult literacy and school enrollment, and GDP per capita. As Figure 9.1 illustrates, in the global information economy, each and every one of these measures is significantly, though not solely, a function of access to information, knowledge, and information-embedded goods and services. Life expectancy is affected by adequate nutrition and access to lifesaving medicines. Biotechnological innovation for agriculture, along with agronomic innovation in cultivation techniques and other, lower-tech modes of innovation, account for a high portion of improvements in the capacity of societies to feed themselves and in the availability of nutritious foods.
Medicines depend on pharmaceutical research and access to its products, and health care depends on research and publication for the development and dissemination of information about best-care practices. Education is also heavily dependent, not surprisingly, on access to materials and facilities for teaching. This includes access to basic textbooks, libraries, computation and communications systems, and the presence of local academic centers. Finally, economic growth has been understood for more than half a century to be centrally driven by innovation. This is particularly true of latecomers, who can improve their own condition most rapidly by adopting best practices and advanced technology developed elsewhere, and then adapting to local conditions and adding their own from the new technological platform achieved in this way. All three of these components are, then, substantially affected by access to, and use of, information and knowledge. The basic premise of the claim that the emergence of the networked information economy can provide significant benefits to human development is that the manner in which we produce new information--and equally important, the institutional framework we use to manage the stock of existing information and knowledge around the world--can have significant impact on human development. ,{[pg 311]},
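For concreteness, the way the HDI combines its three components can be sketched in code. The sketch below follows the pre-2010 methodology of the Human Development Report cited above (fixed "goalposts" for each component, with income discounted logarithmically); the function names are illustrative, and the sample inputs in the usage note are rough approximations of India's figures in the 2004 report, not official UNDP code or data.

```python
from math import log

# Sketch of the pre-2010 HDI methodology (2004 Human Development Report).
# Goalposts are the report's published ones; function names are illustrative.

def life_expectancy_index(life_expectancy):
    # Goalposts: 25 years (minimum) and 85 years (maximum).
    return (life_expectancy - 25) / (85 - 25)

def education_index(adult_literacy, gross_enrollment):
    # Two-thirds weight on adult literacy, one-third on combined gross
    # enrollment; both inputs are fractions in [0, 1].
    return (2 / 3) * adult_literacy + (1 / 3) * gross_enrollment

def gdp_index(gdp_per_capita_ppp):
    # Income between $100 and $40,000 (PPP US$) is discounted
    # logarithmically, reflecting diminishing returns to income.
    return (log(gdp_per_capita_ppp) - log(100)) / (log(40000) - log(100))

def hdi(life_expectancy, adult_literacy, gross_enrollment, gdp_per_capita_ppp):
    # The HDI is the unweighted mean of the three component indices.
    return (
        life_expectancy_index(life_expectancy)
        + education_index(adult_literacy, gross_enrollment)
        + gdp_index(gdp_per_capita_ppp)
    ) / 3
```

With inputs approximating India's 2002 figures (life expectancy 63.7 years, adult literacy 61.3 percent, gross enrollment 55 percent, GDP per capita PPP $2,670), `hdi(63.7, 0.613, 0.55, 2670)` comes out at roughly 0.595, close to the value the 2004 report gives for India.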
+
+{won_benkler_9_1.png "Figure 9.1: HDI and Information" }http://www.jus.uio.no/sisu
+
+2~ INFORMATION-EMBEDDED GOODS AND TOOLS, INFORMATION, AND KNOWLEDGE
+
+One can usefully idealize three types of information-based advantages that developed economies have, and that would need to be available to developing and less-developed economies if one's goal were the improvement in conditions in those economies and the opportunities for innovation in them. These include information-embedded material resources--consumption goods and production tools--information, and knowledge.
+
+/{Information-Embedded Goods}/. These are goods that are not themselves information, but that are better, more plentiful, or cheaper because of some technological advance embedded in them or associated with their production. Pharmaceuticals and agricultural goods are the most obvious examples in the areas of health and food security, respectively. While there are other constraints on access to innovative products in these areas--regulatory and political in nature--a perennial barrier is cost. And a perennial barrier to competition that could reduce the cost is the presence of exclusive rights, ,{[pg 312]}, mostly in the form of patents, but also in the form of internationally recognized breeders' rights and regulatory data exclusivity. In the areas of computation and communication, hardware and software are the primary domains of concern. With hardware, there have been some efforts toward developing cheaper equipment, like the Simputer and the Jhai computer. Because of the relatively commoditized state of most components of these systems, however, marginal cost, rather than exclusive rights, has been the primary barrier to access. The solution, if one has emerged, has been aggregation of demand--a networked computer for a village, rather than an individual. For software, the initial solution was piracy. More recently, we have seen an increased use of free software instead. The former cannot genuinely be described as a "solution," and is being eliminated gradually by trade policy efforts. The latter--adoption of free software to obtain state-of-the-art software--forms the primary template for the class of commons-based solutions to development that I explore in this chapter.
+
+/{Information-Embedded Tools}/. One level deeper than the actual useful material things one would need to enhance welfare are tools necessary for innovation itself. In the areas of agricultural biotechnology and medicines, these include enabling technologies for advanced research, as well as access to materials and existing compounds for experimentation. Access to these is perhaps the most widely understood to present problems in the patent system of the developed world, as much as it is for the developing world--an awareness that has mostly crystallized under Michael Heller's felicitous phrase "anti-commons," or Carl Shapiro's "patent thicket." The intuition, whose analytic basis is explained in chapter 2, is that innovation is encumbered more than it is encouraged when basic tools for innovation are proprietary, where the property system gives owners of these tools proprietary rights to control innovation that relies on their tools, and where any given new innovation requires the consent of, and payment to, many such owners. This problem is not unique to the developing world. Nonetheless, because of the relatively small dollar value of the market for medicines that treat diseases that affect only poorer countries or of crop varieties optimized for those countries, the cost hurdle weighs more heavily on the public or nonprofit efforts to achieve food security and health in poor and middle-income countries. These nonmarket-based research efforts into diseases and crops of concern purely to these areas are not constructed to appropriate gains from ,{[pg 313]}, exclusive rights to research tools, but only bear their costs on downstream innovation.
+
+/{Information}/. The distinction between information and knowledge is a tricky one. I use "information" here colloquially, to refer to raw data, scientific reports of the output of scientific discovery, news, and factual reports. I use "knowledge" to refer to the set of cultural practices and capacities necessary for processing the information into either new statements in the information exchange, or, more important in our context, for practical use of the information in appropriate ways to produce more desirable actions or outcomes from action. Three types of information that are clearly important for purposes of development are scientific publications, scientific and economic data, and news and factual reports. Scientific publication has seen a tremendous cost escalation, widely perceived to have reached crisis proportions even by the terms of the best-endowed university libraries in the wealthiest countries. Over the course of the 1990s, some estimates saw a 260 percent increase in the prices of scientific publications, and libraries were reported choosing between journal subscription and monograph purchases.~{ Carol Tenopir and Donald W. King, Towards Electronic Journals: Realities for Scientists, Librarians, and Publishers (Washington, DC: Special Libraries Association, 2000), 273. }~ In response to this crisis, and in reliance on what were perceived to be the publication cost-reduction opportunities for Internet publication, some scientists--led by Nobel laureate and then head of the National Institutes of Health Harold Varmus--began to agitate for a scientist-based publication system.~{ Harold Varmus, E-Biomed: A Proposal for Electronic Publications in the Biomedical Sciences (Bethesda, MD: National Institutes of Health, 1999). }~ The debates were, and continue to be, heated in this area.
However, currently we are beginning to see the emergence of scientist-run and -driven publication systems that distribute their papers for free online, either within a traditional peer-review system like the Public Library of Science (PLoS), or within tightly knit disciplines like theoretical physics, with only post-publication peer review and revision, as in the case of the Los Alamos Archive, or ArXiv.org. Together with free software and peer production on the Internet, the PLoS and ArXiv.org models offer insights into the basic shape of the class of commons-based, nonproprietary production solutions to problems of information production and exchange unhampered by intellectual property.
+
+Scientific and economic data present a parallel conceptual problem, but in a different legal setting. In the case of both types of data, much of it is produced by government agencies. In the United States, however, raw data is in the public domain, and while initial access may require payment of the cost of distribution, reworking of the data as a tool in information production ,{[pg 314]}, and innovation--and its redistribution by those who acquired access initially--is considered to be in the public domain. In Europe, this has not been the case since the 1996 Database Directive, which created a property-like right in raw data in an effort to improve the standing of European database producers. Efforts to pass similar legislation in the United States have been mounted and stalled in practically every Congress since the mid-1990s. These laws continue to be introduced, driven by the lobby of the largest owners of nongovernment databases, and irrespective of the fact that for almost a decade, Europe's database industry has grown only slowly in the presence of a right, while the U.S. database industry has flourished without an exclusive rights regime.
+
+News, market reports, and other factual reporting seem to have escaped the problems of barriers to access. Here it is most likely that the value-appropriation model simply does not depend on exclusive rights. Market data is generated as a by-product of the market function itself. Tiny time delays are sufficient to generate a paying subscriber base, while leaving the price trends necessary for, say, farmers to decide at what prices to sell their grain in the local market, freely available.~{ C. K. Prahalad, The Fortune at the Bottom of the Pyramid: Eradicating Poverty Through Profits (Upper Saddle River, NJ: Wharton School Publishing, 2005), 319-357, Section 4, "The ITC e-Choupal Story." }~ As I suggested in chapter 2, the advertising-supported press has never been copyright dependent, but has instead depended on timely updating of news to capture attention, and then attach that attention to advertising. This has not changed, but the speed of the update cycle has increased and, more important, distribution has become global, so that obtaining most information is now trivial to anyone with access to an Internet connection. While this continues to raise issues with deployment of communications hardware and the knowledge of how to use it, these issues can be, and are being, approached through aggregation of demand in either public or private forms. These types of information do not themselves appear to exhibit significant barriers to access once network connectivity is provided.
+
+/{Knowledge}/. In this context, I refer mostly to two types of concern. The first is the possibility of the transfer of implicit knowledge, which resists codification into what would here be treated as "information"--for example, training manuals. The primary mechanism for transfer of knowledge of this type is learning by doing, and knowledge transfer of this form cannot happen except through opportunities for local practice of the knowledge. The second type of knowledge transfer of concern here is formal instruction in an education context (as compared with dissemination of codified outputs for self- ,{[pg 315]}, teaching). Here, there is a genuine limit on the capacity of the networked information economy to improve access to knowledge. Individual, face-to-face instruction does not scale across participants, time, and distance. However, some components of education, at all levels, are nonetheless susceptible to improvement with the increase in nonmarket and radically decentralized production processes. The MIT OpenCourseWare initiative is instructive as to how the universities of advanced economies can attempt to make at least their teaching materials and manuals freely available to teachers throughout the world, thereby leaving the pedagogy in local hands but providing more of the basic inputs into the teaching process on a global scale. More important perhaps is the possibility that teachers and educators can collaborate, both locally and globally, on an open platform model like /{Wikipedia}/, to coauthor learning objects, teaching modules, and, more ambitiously, textbooks that could then be widely accessed by local teachers.
+
+2~ INDUSTRIAL ORGANIZATION OF HDI-RELATED INFORMATION INDUSTRIES
+
+The production of information and knowledge is very different from the production of steel or automobiles. Chapter 2 explains in some detail that information production has always included substantial reliance on nonmarket actors and on nonmarket, nonproprietary settings as core modalities of production. In software, for example, we saw that Mickey and romantic maximizer-type producers, who rely on exclusive rights directly, have accounted for a stable 36-37 percent of market-based revenues for software developers, while the remainder was focused on both supply-side and demand-side improvements in the capacity to offer software services. This number actually overstates the importance of software publishing, because it does not at all count free software development except when it is monetized by an IBM or a Red Hat, leaving tremendous value unaccounted for. A very large portion of the investments and research in any of the information production fields important to human development occur within the category that I have broadly described as "Joe Einstein." These include both those places formally designated for the pursuit of information and knowledge in themselves, like universities, and those that operate in the social sphere, but produce information and knowledge as a more or less central part of their existence--like churches or political parties. Moreover, individuals acting as social beings have played a central role in our information ,{[pg 316]}, production and exchange system. In order to provide a more sector-specific analysis of how commons-based, as opposed to proprietary, strategies can contribute to development, I offer here a more detailed breakdown specifically of software, scientific publication, agriculture, and biomedical innovation than is provided in chapter 2.
+
+Table 9.1 presents a higher-resolution statement of the major actors in these fields, within both the market and the nonmarket sectors, from which we can then begin to analyze the path toward, and the sustainability of, more significant commons-based production of the necessities of human development. Table 9.1 identifies the relative role of each of the types of main actors in information and knowledge production across the major sectors relevant to contemporary policy debates. It is most important to extract from this table the diversity of business models and roles not only in each industry, but also among industries. This diversity means that different types of actors can have different relative roles: nonprofits as opposed to individuals, universities as opposed to government, or nonproprietary market actors--that is, market actors whose business model is service based or otherwise does not depend on exclusive appropriation of information--as compared to nonmarket actors. The following segments look at each of these sectors more specifically, and describe the ways in which commons-based strategies are already, or could be, used to improve the access to information, knowledge, and the information-embedded goods and tools for human development. However, even a cursory look at the table shows that the current production landscape of software is particularly well suited to having a greater role for commons-based production. For example, exclusive proprietary producers account for only one-third of software-related revenues, even within the market. The remainder is covered by various services and relationships that are compatible with nonproprietary treatment of the software itself. Individuals and nonprofit associations also have played a very large role, and continue to do so, not only in free software development, but in the development of standards as well.
As we look at each sector, we see that they differ in their incumbent industrial landscape, and these differences mean that each sector may be more or less amenable to commons-based strategies, and, even if in principle amenable, may present harder or easier transition problems. ,{[pg 317]},
+
+!_ Table 9.1: Map of Players and Roles in Major Relevant Sectors
+
+table{~h c7; 14; 14; 14; 14; 14; 14; 14;
+
+Actor Sector
+Government
+Universities, Libraries, etc.
+IP-Based Industry
+Non-IP-Based Industry
+NGOs/ Nonprofits
+Individuals
+
+Software
+Research funding, defense, procurement
+Basic research and design; components "incubate" much else
+Software publishing (1/3 annual revenue)
+Software services, customization (2/3 annual revenue)
+FSF; Apache; W3C; IETF
+Free/ opensource software
+
+Scientific publication
+Research funding
+University presses; salaries; promotions and tenure
+Elsevier Science; professional associations
+BioMed Central
+PLoS; ArXiv
+Working papers; Web-based self-publishing
+
+Agricultural Biotech
+Grants and government labs
+Basic research; tech transfer (50%)
+Big Pharma; Biotech (50%)
+Generics
+One-World Health
+None
+
+}table
+
+2~ TOWARD ADOPTING COMMONS-BASED STRATEGIES FOR DEVELOPMENT
+
+The mainstream understanding of intellectual property by its dominant policy-making institutions--the Patent Office and U.S. trade representative in the United States, the Commission in the European Union, and the World Intellectual Property Organization (WIPO) and Trade-Related Aspects of Intellectual Property (TRIPS) systems internationally--is that strong protection is good, and stronger protection is better. In development and trade policy, this translates into a belief that the primary mechanism for knowledge transfer and development in a global information economy is for ,{[pg 318]}, all nations, developing as well as developed, to ratchet up their intellectual property law standards to fit the most protective regimes adopted in the United States and Europe. As a practical political matter, the congruence between the United States and the European Union in this area means that this basic understanding is expressed in the international trade system, in the World Trade Organization (WTO) and its TRIPS agreement, and in international intellectual property treaties, through the WIPO. The next few segments present an alternative view. Intellectual property as an institution is substantially more ambiguous in its effects on information production than the steady drive toward expansive rights would suggest. The full argument is in chapter 2.
+
+Intellectual property is particularly harmful to net information importers. In our present world trade system, these are the poor and middle-income nations. Like all users of information protected by exclusive rights, these nations are required by strong intellectual property rights to pay more than the marginal cost of the information at the time that they buy it. In the standard argument, this is intended to give producers incentives to create information that users want. Given the relative poverty of these countries, however, practically none of the intellectual-property-dependent producers develop products specifically with returns from poor or even middle-income markets in mind. The pharmaceutical industry receives about 5 percent of its global revenues from low- and middle-income countries. That is why we have so little investment in drugs for diseases that affect only those parts of the world. It is why most agricultural research that has focused on agriculture in poorer areas of the world has been public sector and nonprofit. Under these conditions, the above-marginal-cost prices paid in these poorer countries are purely regressive redistribution. The information, knowledge, and information-embedded goods paid for would have been developed in expectation of rich world rents alone. The prospects of rents from poorer countries do not affect their development. They do not affect either the rate or the direction of research and development. They simply place some of the rents that pay for technology development in the rich countries on consumers in poor and middle-income countries. The morality of this redistribution from the world's poor to the world's rich has never been confronted or defended in the European or American public spheres. It simply goes unnoticed. When crises in access to information-embedded goods do appear--such as in the AIDS/HIV access to medicines crisis--these are seldom tied to our ,{[pg 319]}, basic institutional choice. 
In our trade policies, Americans and Europeans push for ever-stronger protection. We thereby systematically benefit those who own much of the stock of usable human knowledge. We do so at the direct expense of those who need access to knowledge in order to feed themselves and heal their sick.
+
+The practical politics of the international intellectual property and trade regime make it very difficult to reverse the trend toward ever-increasing exclusive property protections. The economic returns to exclusive proprietary rights in information are highly concentrated in the hands of those who own such rights. The costs are widely diffuse in the populations of both the developing and developed world. The basic inefficiency of excessive property protection is difficult to understand by comparison to the intuitive, but mistaken, Economics 101 belief that property is good, more property is better, and intellectual property must be the same. The result is that pressures on the governments that represent exporters of intellectual property rights permissions--in particular, the United States and the European Union--come in this area mostly from the owners, and they continuously push for ever-stronger rights. Monopoly is a good thing to have if you can get it. It is no less valuable for rent extraction to a database or patent-based company than to the dictator's nephew in a banana republic. However, its value to these supplicants does not make it any more efficient or desirable. The political landscape is, however, gradually beginning to change. Since the turn of the twenty-first century, and particularly in the wake of the urgency with which the HIV/AIDS crisis in Africa has infused the debate over access to medicines, there has been a growing public interest advocacy movement focused on the intellectual property trade regime. This movement is, however, confronted with a highly playable system. A victory for developing world access in one round in the TRIPS context always leaves other places to construct mechanisms for exclusivity. Bilateral trade negotiations are one domain that is beginning to play an important role.
In these, the United States or the European Union can force a rice- or cotton-exporting country to concede a commitment to strong intellectual property protection in exchange for favorable treatment for their core export. The intellectual property exporting nations can then go to WIPO, and push for new treaties based on the emerging international practice of bilateral agreements. This, in turn, would cycle back and be generalized and enforced through the trade regimes. Another approach is for the exporting nations to change their own ,{[pg 320]}, laws, and then drive higher standards elsewhere in the name of "harmonization." Because the international trade and intellectual property system is highly "playable" and manipulable in these ways, systematic resistance to the expansion of intellectual property laws is difficult.
+
+The promise of the commons-based strategies explored in the remainder of this chapter is that they can be implemented without changes in law-- either national or international. They are paths that the emerging networked information economy has opened to individuals, nonprofits, and public-sector organizations that want to help in improving human development in the poorer regions of the world to take action on their own. As with decentralized speech for democratic discourse, and collaborative production by individuals of the information environment they occupy as autonomous agents, here too we begin to see that self-help and cooperative action outside the proprietary system offer an opportunity for those who wish to pursue it. In this case, it is an opportunity to achieve a more just distribution of the world's resources and a set of meaningful improvements in human development. Some of these solutions are "commons-based," in the sense that they rely on free access to existing information that is in the commons, and they facilitate further use and development of that information and those information-embedded goods and tools by releasing their information outputs openly, and managing them as a commons, rather than as property. Some of the solutions are specifically peer-production solutions. We see this most clearly in software, and to some extent in the more radical proposals for scientific publication. I will also explore here the viability of peer-production efforts in agricultural and biomedical innovation, although in those fields, commons-based approaches grafted onto traditional public-sector and nonprofit organizations at present hold the more clearly articulated alternatives.
+
+3~ Software
+
+The software industry offers a baseline case because of the proven large scope for peer production in free software. As in other information-intensive industries, government funding and research have played an enormously important role, and university research provides much of the basic science. However, the relative role of individuals, nonprofits, and nonproprietary market producers is larger in software than in the other sectors. First, two-thirds of revenues derived from software in the United States are from services ,{[pg 321]}, and do not depend on proprietary exclusion. Like IBM's "Linux-related services" category, for which the company claimed more than two billion dollars of revenue for 2003, these services do not depend on exclusion from the software, but on charging for service relationships.~{ For the sources of numbers for the software industry, see chapter 2 in this volume. IBM numbers, in particular, are identified in figure 2.1. }~ Second, some of the most basic elements of the software environment--like standards and protocols--are developed in nonprofit associations, like the Internet Engineering Task Force or the World Wide Web Consortium. Third, the role of individuals engaged in peer production--the free and open-source software development communities--is very large. Together, these make for an organizational ecology highly conducive to nonproprietary production, whose outputs can be freely usable around the globe. The other sectors have some degree of similar components, and commons-based strategies for development can focus on filling in the missing components and on leveraging nonproprietary components already in place.
+
+In the context of development, free software has the potential to play two distinct and significant roles. The first is offering low-cost access to high-performing software for developing nations. The second is creating the potential for participation in software markets based on human ability, even without access to a stock of exclusive rights in existing software. At present, there is a movement in both developing and the most advanced economies to increase reliance on free software. In the United States, the President's Information Technology Advisory Committee advised the president in 2000 to increase use of free software in mission-critical applications, citing the high quality and dependability of such systems. To the extent that quality, reliability, and ease of self-customization are consistently better with certain free software products, they are attractive to developing-country governments for the same reasons that they are to the governments of developed countries. In the context of developing nations, the primary additional arguments that have been made include cost, transparency, freedom from reliance on a single foreign source (read, Microsoft), and the potential of local software programmers to learn the program, acquire skills, and therefore easily enter the global market with services and applications for free software.~{ These arguments were set out most clearly and early in a public exchange of letters between Representative Villanueva Nunez in Peru and Microsoft's representatives in that country. The exchange can be found on the Web site of the Open Source Initiative, http://www.opensource.org/docs/peru_and_ms.php. }~ The question of cost, despite the confusion that often arises from the word "free," is not obvious. It depends to some extent on the last hope--that local software developers will become skilled in the free software platforms.
The cost of software to any enterprise includes the extent, cost, and efficacy with which the software can be maintained, upgraded, and fixed when errors occur. Free ,{[pg 322]}, software may or may not involve an up-front charge. Even if it does not, that does not make it cost-free. However, free software enables an open market in free software servicing, which in turn improves and lowers the cost of servicing the software over time. More important, because the software is open for all to see and because developer communities are often multinational, local developers can come, learn the software, and become relatively low-cost software service providers for their own government. This, in turn, helps realize the low-cost promise over and above the licensing fees avoided. Other arguments in favor of government procurement of free software focus on the value of transparency of software used for public purposes. The basic thrust of these arguments is that free software makes it possible for constituents to monitor the behavior of machines used in governments, to make sure that they are designed to do what they are publicly reported to do. The most significant manifestation of this sentiment in the United States is the hitherto-unsuccessful, but fairly persistent effort to require states to utilize voting machines that use free software, or at a minimum, to use software whose source code is open for public inspection. This is a consideration that, if valid, is equally suitable for developing nations. The concern with independence from a single foreign provider, in the case of operating systems, is again not purely a developing-nation concern. Just as the United States required American Marconi to transfer its assets to an American company, RCA, so that it would not be dependent for a critical infrastructure on a foreign provider, other countries may have similar concerns about Microsoft. 
Again, to the extent that this is a valid concern, it is so for rich nations as much as it is for poor, with the exceptions of the European Union and Japan, which likely do have bargaining power with Microsoft to a degree that smaller markets do not.
+
+The last and quite distinct potential gain is the possibility of creating a context and an anchor for a free software development sector based on service. This was cited as the primary reason behind Brazil's significant push to use free software in government departments and in telecenters that the federal government is setting up to provide Internet service access to some of its poorer and more remote areas. Software services represent a very large industry. In the United States, software services are an industry roughly twice the size of the movie and video industry. Software developers from low- and middle-income countries can participate in the growing free software segment of this market by using their skills alone. Unlike with service for the proprietary domain, they need not buy licenses to learn and practice the ,{[pg 323]}, services. Moreover, if Brazil, China, India, Indonesia, and other major developing countries were to rely heavily on free software, then the "internal market," within the developing world, for free software-related services would become very substantial. Building public-sector demand for these services would be one place to start. Moreover, because free software development is a global phenomenon, free software developers who learn their skills within the developing world would be able to export those skills elsewhere. Just as India's call centers leverage the country's colonial past with its resulting broad availability of English speakers, so too countries like Brazil can leverage their active free software development community to provide software services for free software platforms anywhere in the developed and developing worlds. With free software, the developing-world providers can compete as equals. They do not need access to permissions to operate. 
Their relationships need not replicate the "outsourcing" model so common in proprietary industries, where permission to work on a project is the point of control over the ability to do so. There will still be branding issues that undoubtedly will affect access to developed markets. However, there will be no baseline constraints of minimal capital necessary to enter the market and try to develop a reputation for reliability. As a development strategy, then, utilization of free software achieves transfer of information-embedded goods for free or at low cost. It also transfers information about the nature of the product and its operation--the source code. Finally, it enables transfer, at least potentially, of opportunities for learning by doing and of opportunities for participating in the global market. These would depend on knowledge of a free software platform that anyone is free to learn, rather than on access to financial capital or intellectual property inventories as preconditions to effective participation.
+
+3~ Scientific Publication
+
+Scientific publication is a second sector where a nonproprietary strategy can be implemented readily and is already developing to supplant the proprietary model. Here, the existing market structure is quite odd in a way that likely makes it unstable. Authoring and peer review, the two core value-creating activities, are done by scientists who perform neither task in expectation of royalties or payment. The model of most publications, however, is highly proprietary. A small number of business organizations, like Elsevier Science, control most of the publications. Alongside them, professional associations of scientists also publish their major journals using a proprietary model. ,{[pg 324]}, Universities, whose scientists need access to the papers, incur substantial cost burdens to pay for the publications as a basic input into their own new work. While the effects of this odd system are heavily felt in universities in rich countries, the burden of subscription rates that go into the thousands of dollars per title makes access to up-to-date scientific research prohibitive for universities and scientists working in poorer economies. Nonproprietary solutions are already beginning to emerge in this space. They fall into two large clusters.
+
+The first cluster is closer to the traditional peer-review publication model. It uses Internet communications to streamline the editorial and peer-review system, but still depends on a small, salaried editorial staff. Instead of relying on subscription payments, it relies on other forms of payments that do not require charging a price for the outputs. In the case of the purely nonprofit Public Library of Science (PLoS), the sources of revenue combine author payments for publication, philanthropic support, and university memberships. In the case of the for-profit BioMed Central, based in the United Kingdom, it is a combination of author payments, university memberships, and a variety of customized derivative products like subscription-based literature reviews and customized electronic update services. Author payments--fees authors must pay to have their work published--are built into the cost of scientific research and included in grant applications. In other words, they are intended to be publicly funded. Indeed, in 2005, the National Institutes of Health (NIH), the major funding agency for biomedical science in the United States, announced a requirement that all NIH-funded research be made freely available on the Web within twelve months of publication. Both PLoS and BioMed Central have waiver processes for scientists who cannot pay the publication fees. The articles on both systems are available immediately for free on the Internet. The model exists. It works internally and is sustainable as such. What is left in determining the overall weight that these open-access journals will have in the landscape of scientific publication is the relatively conservative nature of universities themselves. The established journals, like Science or Nature, still carry substantially more prestige than the new journals. 
As long as this is the case, and as long as hiring and promotion decisions continue to be based on the prestige of the journal in which a scientist's work is published, the ability of the new journals to replace the traditional ones will be curtailed. Some of the established journals, however, are operated by professional associations of scientists. There is an internal tension between the interests of the associations in securing ,{[pg 325]}, their revenue and the growing interest of scientists in open-access publication. Combined with the apparent economic sustainability of the open-access journals, it seems that some of these established journals will likely shift over to the open-access model. At a minimum, policy interventions like those proposed by the NIH will force traditional publications to adapt their business model by making access free after a few months. The point here, however, is not to predict the overall likely success of open-access journals. It is to combine them with what we have seen happening in software as another example of a reorganization of the components of the industrial structure of an information production system. Individual scientists, government funding agencies, nonprofits and foundations, and nonproprietary commercial business models can create the same good--scientific publication--but without the cost barrier that the old model imposed on access to its fruits. Such a reorientation would significantly improve the access of universities and physicians in developing nations to the most advanced scientific publication.
+
+The second approach to scientific publication parallels more closely free software development and peer production. This is typified by ArXiv and the emerging practices of self-archiving or self-publishing. ArXiv.org is an online repository of working papers in physics, mathematics, and computer science. It started out focusing on physics, and that is where it has become the sine qua non of publication in some subdisciplines. The archive does not perform review except for technical format compliance. Quality control is maintained by postpublication review and commentary, as well as by hosting updated versions of the papers with explanations (provided by authors) of the changes. It is likely that the reason ArXiv.org has become so successful in physics is the very small and highly specialized nature of the discipline. The universe of potential readers is small, and their capacity to distinguish good arguments from bad is high. Reputation effects of poor publications are likely immediate.
+
+While ArXiv offers a single repository, a much broader approach has been the developing practice of self-archiving. Academics post their completed work on their own Web sites and make it available freely. The primary limitation of this mechanism is the absence of an easy, single location where one can search for papers on a topic of concern. And yet we are already seeing the emergence of tagging standards and protocols that allow anyone to search the universe of self-archived materials. Once completed, such a development process would in principle render archiving by single points of reference unnecessary. The University of Michigan Digital Library Production ,{[pg 326]}, Service, for example, has developed a protocol called OAIster (pronounced like oyster, with the tagline "find the pearls"), which combines the acronym of Open Archives Initiative with the "ster" ending made popular in reference to peer-to-peer distribution technologies since Napster (AIMster, Grokster, Friendster, and the like). The basic impulse of the Open Archives Initiative is to develop a sufficiently refined set of meta-data tags that would allow materials archived with OAI-compliant tagging to be searched easily, quickly, and accurately on the Web. In that case, a general Web search becomes a targeted academic search in a "database" of scientific publications. However, the database is actually a network of self-created, small personal databases that comply with a common tagging and search standard. Again, my point here is not to explore the details of one or another of these approaches. If scientists and other academics adopt this approach of self-archiving coupled with standardized interfaces for global, well-delimited searches, the problem of lack of access to academic publications because of their high cost will be eliminated.
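The common tagging standard described above can be made concrete with a short sketch. OAI-compliant archives expose each paper's metadata as an unqualified Dublin Core record; any harvester that understands the standard element names can index records from thousands of independent sites uniformly. The Python sketch below is purely illustrative--the record, its metadata values, and the field names chosen for the extracted dictionary are invented for the example and belong to none of the projects discussed--but the XML namespaces are the standard OAI/Dublin Core ones.

```python
# Hypothetical sketch: parsing an OAI-style Dublin Core record of the kind
# a self-archiving academic's repository would expose for harvesters.
import xml.etree.ElementTree as ET

OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
DC = "http://purl.org/dc/elements/1.1/"

# A minimal record with invented metadata values, for illustration only.
record = f"""
<oai_dc:dc xmlns:oai_dc="{OAI_DC}" xmlns:dc="{DC}">
  <dc:title>On the Economics of Self-Archiving</dc:title>
  <dc:creator>Example, Author</dc:creator>
  <dc:subject>scholarly communication</dc:subject>
  <dc:subject>open access</dc:subject>
  <dc:date>2005-01-01</dc:date>
</oai_dc:dc>
"""

root = ET.fromstring(record)
# Because every compliant archive uses the same element names, a harvester
# can extract the same fields from any self-archived site and build one
# searchable index across all of them.
metadata = {
    "title": root.findtext(f"{{{DC}}}title"),
    "creator": root.findtext(f"{{{DC}}}creator"),
    "subjects": [e.text for e in root.findall(f"{{{DC}}}subject")],
}
print(metadata["title"])
print(metadata["subjects"])
```

The point of the standard is visible in the last step: the harvester never needs to know anything about the individual site, only the shared tag vocabulary.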
+
+Other types of documents, for example, primary- and secondary-education textbooks, are in a much more rudimentary stage of the development of peer-production models. First, it should be recognized that responses to illiteracy and low educational completion in the poorer areas of the world are largely a result of lack of schoolteachers, physical infrastructure for classrooms, demand for children's schooling among parents who are themselves illiterate, and lack of effectively enforced compulsory education policy. The cost of textbooks contributes only a portion of the problem of cost. The opportunity cost of children's labor is probably the largest factor. Nonetheless, outdated materials and poor quality of teaching materials are often cited as one limit on the educational achievement of those who do attend school. The costs of books, school fees, uniforms, and stationery can amount to 20-30 percent of a family's income.~{ A good regional study of the extent and details of educational deprivation is Mahbub ul Haq and Khadija ul Haq, Human Development in South Asia 1998: The Education Challenge (Islamabad, Pakistan: Human Development Center). }~ The component of the problem contributed by the teaching materials may be alleviated by innovative approaches to textbook and education materials authoring. Chapter 4 already discussed some textbook initiatives. The most successful commons-based textbook authoring project, which is also the most relevant from the perspective of development, is the South African project, Free High School Science Texts (FHSST). The FHSST initiative is more narrowly focused than the broader efforts of Wikibooks or the California initiative, more managed, and more successful. Nonetheless, in three years of substantial effort by a group of dedicated volunteers who administer the project, its product is one physics ,{[pg 327]}, high school text, and advanced drafts of two other science texts. 
The main constraint on the efficacy of collaborative textbook authoring is that compliance requirements imposed by education ministries tend to require a great degree of coherence, which constrains the degree of modularity that these text-authoring projects adopt. The relatively large-grained contributions required limit the number of contributors, slowing the process. The future of these efforts is therefore likely to be determined by the extent to which their designers are able to find ways to make finer-grained modules without losing the coherence required for primary- and secondary-education texts. Texts at the post-secondary level likely present less of a problem, because of the greater freedom instructors have to select texts. This allows an initiative like MIT's Open Courseware Initiative to succeed. That initiative provides syllabi, lecture notes, problem sets, etc. from over 1,100 courses. The basic creators of the materials are paid academics who produce these materials for one of their core professional roles: teaching college- and graduate-level courses. The content is, by and large, a "side-effect" of teaching. What is left to be done is to integrate, create easy interfaces and search capabilities, and so forth. The university funds these functions through its own resources and dedicated grant funding. In the context of MIT, then, these functions are performed on a traditional model--a large, well-funded nonprofit provides an important public good through the application of full-time staff aimed at non-wealth-maximizing goals. The critical point here was the radical departure of MIT from the emerging culture of the 1980s and 1990s in American academia. When other universities were thinking of "distance education" in terms of selling access to taped lectures and materials so as to raise new revenue, MIT thought of what its basic mandate to advance knowledge and educate students in a networked environment entailed. 
The answer was to give anyone, anywhere, access to the teaching materials of some of the best minds in the world. As an intervention in the ecology of free knowledge and information and an act of leadership among universities, the MIT initiative was therefore a major event. As a model for organizational innovation in the domain of information production generally and the creation of educational resources in particular, it was less significant.
+
+Software and academic publication, then, offer the two most advanced examples of commons-based strategies employed in a sector whose outputs are important to development, in ways that improve access to basic information, knowledge, and information-embedded tools. Building on these basic cases, we can begin to see how similar strategies can be employed to ,{[pg 328]}, create a substantial set of commons-based solutions that could improve the distribution of information germane to human development.
+
+2~ COMMONS-BASED RESEARCH FOR FOOD AND MEDICINES
+
+While computation and access to existing scientific research are important in the development of any nation, they still operate at a remove from the most basic needs of the world's poor. On its face, it is far from obvious how the emergence of the networked information economy can grow rice to feed millions of malnourished children or deliver drugs to millions of HIV/AIDS patients. On closer observation, however, a tremendous proportion of the way modern societies grow food and develop medicines is based on scientific research and technical innovation. We have seen how the functions of mass media can be fulfilled by nonproprietary models of news and commentary. We have seen the potential of free and open source software and open-access publications to replace and redress some of the failures of proprietary software and scientific publication, respectively. These cases suggest that the basic choice between a system that depends on exclusive rights and business models that use exclusion to appropriate research outputs and a system that weaves together various actors--public and private, organized and individual--in a nonproprietary social network of innovation, has important implications for the direction of innovation and for access to its products. Public attention has focused mostly on the HIV/AIDS crisis in Africa and the lack of access to existing drugs because of their high costs. However, that crisis is merely the tip of the iceberg. It is the most visible to many because of the presence of the disease in rich countries and its cultural and political salience in the United States and Europe. The exclusive rights system is a poor institutional mechanism for serving the needs of those who are worst off around the globe. 
Its weaknesses pervade the problems of food security and agricultural research aimed at increasing the supply of nourishing food throughout the developing world, and of access to medicines in general, and to medicines for developing-world diseases in particular. Each of these areas has seen a similar shift in national and international policy toward greater reliance on exclusive rights, most important of which are patents. Each area has also begun to see the emergence of commons-based models to alleviate the problems of patents. However, they differ from each other still. Agriculture offers more immediate opportunities for improvement ,{[pg 329]}, because of the relatively larger role of public research--national, international, and academic--and of the long practices of farmer innovation in seed associations and local and regional frameworks. I explore it first in some detail, as it offers a template for what could be a path for development in medical research as well.
+
+2~ Food Security: Commons-Based Agricultural Innovation
+
+Agricultural innovation over the past century has led to a vast increase in crop yields. Since the 1960s, innovation aimed at increasing yields and improving quality has been the centerpiece of efforts to secure the supply of food to the world's poor, to avoid famine and eliminate chronic malnutrition. These efforts have produced substantial increases in the production of food and decreases in its cost, but their benefits have varied widely in different regions of the world. Now, increases in productivity are not alone a sufficient condition to prevent famine. Sen's observations that democracies have no famines--that is, that good government and accountability will force public efforts to prevent famine--are widely accepted today. The contributions of the networked information economy to democratic participation and transparency are discussed in chapters 6-8; to the extent that those chapters correctly characterize the changes in political discourse, those contributions should help alleviate human poverty through their effects on democracy. However, the cost and quality of food available to accountable governments of poor countries, or to international aid organizations or nongovernment organizations (NGOs) that step in to try to alleviate the misery caused by ineffective or malicious governments, affect how much can be done to avoid not only catastrophic famine, but also chronic malnutrition. Improvements in agriculture make it possible for anyone addressing food security to perform better than they could were yields lower, the food less nutritious, and prices higher. Despite its potential benefits, however, agricultural innovation has been subject to an unusual degree of sustained skepticism aimed at the very project of organized scientific and scientifically based innovation. Criticism combines biological-ecological concerns with social and economic concerns. 
Nowhere is this criticism more strident, or more successful at moving policy, than in current European resistance to genetically modified (GM) foods. The emergence of commons-based production strategies can go some way toward allaying the biological-ecological fears by locating much of the innovation at the local level. Its primary benefit, however, ,{[pg 330]}, is likely to be in offering a path for agricultural and biological innovation that is sustainable and low cost, and that need not result in appropriation of the food production chain by a small number of multinational businesses, as many critics fear.
+
+Scientific plant improvement in the United States dates back to the establishment of the U.S. Department of Agriculture, the land-grant universities, and later the state agricultural experiment stations during the Civil War and in the decades that followed. Public-sector investment dominated agricultural research at the time, and with the rediscovery of Mendel's work in 1900, took a turn toward systematic selective breeding. Through crop improvement associations, seed certification programs, and open-release policies allowing anyone to breed and sell the certified new seeds, farmers were provided access to the fruits of public research in a reasonably efficient and open market. The development of hybrid corn through this system was the first major modern success that vastly increased agricultural yields. It reshaped our understanding not only of agriculture, but also more generally of the value of innovation, by comparison to efficiency, to growth. Yields in the United States doubled between the mid-1930s and the mid-1950s, and by the mid-1980s, cornfields had a yield six times greater than they had fifty years before. Beginning in the early 1960s, with funding from the Rockefeller and Ford foundations, and continuing over the following forty years, agricultural research designed to increase the supply of agricultural production and lower its cost became a central component of international and national policies aimed at securing the supply of food to the world's poor populations, avoiding famines and, ultimately, eliminating chronic malnutrition. The International Rice Research Institute (IRRI) in the Philippines was the first such institute, founded in the 1960s, followed by the International Center for Wheat and Maize Improvement (CIMMYT) in Mexico (1966), and the two institutes for tropical agriculture in Colombia and Nigeria (1967). 
Together, these became the foundation for the Consultative Group for International Agricultural Research (CGIAR), which now includes sixteen centers. Over the same period, National Agricultural Research Systems (NARS) also were created around the world, focusing on research specific to local agroecological conditions. Research in these centers preceded the biotechnology revolution, and used various experimental breeding techniques to obtain high-yielding plants: for example, plants with shorter growing seasons, or plants better adapted to intensive fertilizer use. These efforts later introduced varieties ,{[pg 331]}, that were resistant to local pests and diseases, and to various harsh environmental conditions.
+
+The "Green Revolution," as the introduction of these new, scientific-research-based varieties has been called, indeed resulted in substantial increases in yields, initially in rice and wheat, in Asia and Latin America. The term "Green Revolution" is often limited to describing these changes in those regions in the 1960s and 1970s. A recent study shows, however, that the growth in yields has continued throughout the last forty years, and has, to varying degrees, occurred around the world.~{ Robert Evenson and D. Gollin, eds., Crop Variety Improvement and Its Effect on Productivity: The Impact of International Agricultural Research (New York: CABI Pub., 2002); results summarized in Robert Evenson and D. Gollin, "Assessing the Impact of the Green Revolution, 1960-2000," Science 300 (May 2003): 758-762. }~ More than eight thousand modern varieties of rice, wheat, maize, other major cereals, and root and protein crops have been released over the course of this period by more than four hundred public breeding programs. One of the most interesting findings of this study was that fewer than 1 percent of these modern varieties had any crosses with public or private breeding programs in the developed world, and that private-sector contributions in general were limited to hybrid maize, sorghum, and millet. The effort, in other words, was almost entirely public sector, and almost entirely based in the developing world, with complementary efforts of the international and national programs. Yields in Asia increased sevenfold from 1961 to 2000, and fivefold in Latin America, the Middle East/North Africa, and Sub-Saharan Africa. More than 60 percent of the growth in Asia and Latin America occurred in the 1960s-1980s, while the primary growth in Sub-Saharan Africa began in the 1980s. 
In Latin America, most of the early-stage increases in yields came from increasing cultivated areas (about 40 percent), and from other changes in cultivation--increased use of fertilizer, mechanization, and irrigation. About 15 percent of the growth in the early period was attributable to the use of modern varieties. In the latter twenty years, however, more than 40 percent of the total increase in yields was attributable to the use of new varieties. In Asia in the early period, about 19 percent of the increase came from modern varieties, but almost the entire rest of the increase came from increased use of fertilizer, mechanization, and irrigation, not from increased cultivated areas. It is easy to see why changes of this sort would elicit both an environmental and a social-economic critique of the industrialization of farm work. Again, though, in the latter twenty years, 46 percent of the increase in yields is attributable to the use of modern varieties. Modern varieties played a significantly less prominent role in the Green Revolution of the Middle East and Africa, contributing 5-6 percent of the growth in yields. In Sub-Saharan Africa, for example, ,{[pg 332]}, early efforts to introduce varieties from Asia and Latin America failed, and local developments only began to be adopted in the 1980s. In the latter twenty-year period, however, the Middle East and North Africa did see a substantial role for modern varieties--accounting for close to 40 percent of a more than doubling of yields. In Sub-Saharan Africa, the overwhelming majority of the tripling of yields came from increasing area of cultivation, and about 16 percent came from modern varieties. Over the past forty years, then, research-based improvements in plants have come to play a larger role in increasing agricultural yields in the developing world. Their success was, however, more limited in the complex and very difficult environments of Sub-Saharan Africa. 
Much of the benefit has to do with local independence, as opposed to heavier dependence on food imports. Evenson and Gollin, for example, conservatively estimate that higher prices and a greater reliance on imports in the developing world in the absence of the Green Revolution would have resulted in 13-14 percent lower caloric intake in the developing world, and in a 6-8 percent higher proportion of malnourished children. While these numbers may not seem eye-popping, for populations already living on marginal nutrition, they represent significant differences in quality of life and in physical and mental development for millions of children and adults.
+
+The agricultural research that went into much of the Green Revolution did not involve biotechnology--that is, manipulation of plant varieties at the genetic level through recombinant DNA techniques. Rather, it occurred at the level of experimental breeding. In the developed world, however, much of the research over the past twenty-five years has been focused on the use of biotechnology to achieve more targeted results than breeding can, has been more heavily based on private-sector investment, and has resulted in more private-sector ownership over the innovations. The promise of biotechnology, and particularly of genetically engineered or modified foods, has been that they could provide significant improvements in yields as well as in health effects, quality of the foods grown, and environmental effects. Plants engineered to be pest resistant could decrease the need to use pesticides, resulting in environmental benefits and health benefits to farmers. Plants engineered for ever-higher yields without increasing tilled acreage could limit the pressure for deforestation. Plants could be engineered to carry specific nutritional supplements, like golden rice with beta-carotene, so as to introduce necessary nutritional requirements into subsistence diets. Beyond the hypothetically optimistic possibilities, there is little question that genetic engineering has already produced crops that lower the cost of production ,{[pg 333]}, for farmers by increasing herbicide and pest tolerance. As of 2002, more than 50 percent of the world's soybean acreage was covered with genetically modified (GM) soybeans, and 20 percent of its cotton acreage with GM cotton. Twenty-seven percent of acreage covered with GM crops is in the developing world. 
This number will grow significantly now that Brazil has decided to permit the introduction of GM crops, given its growing agricultural role, and now that India, as the world's largest cotton producer, has approved the use of Bt cotton--a GM form of cotton that improves its resistance to a common pest. There are, then, substantial advantages to farmers, at least, and widespread adoption of GM crops both in the developed world outside of Europe and in the developing world.
+
+This largely benign story of increasing yields, resistance, and quality has not been without critics, to put it mildly. The criticism predates biotechnology and the development of transgenic varieties. Its roots are in criticism of experimental breeding programs of the American agricultural sectors and the Green Revolution. However, the greatest public visibility and political success of these criticisms has been in the context of GM foods. The critique brings together odd intellectual and political bedfellows, because it includes five distinct components: social and economic critique of the industrialization of agriculture; environmental effects; health effects; consumer preference for "natural" or artisan production of foodstuffs; and, perhaps to a more limited extent, protectionism of domestic farm sectors.
+
+Perhaps the oldest component of the critique is the social-economic critique. One arm of the critique focuses on how mechanization, increased use of chemicals, and ultimately the use of nonreproducing proprietary seed led to incorporation of the agricultural sector into the capitalist form of production. In the United States, even with its large "family farm" sector, purchased inputs now greatly exceed nonpurchased inputs, production is highly capital intensive, and large-scale production accounts for the majority of land tilled and the majority of revenue captured from farming.~{ Jack R. Kloppenburg, Jr., First the Seed: The Political Economy of Plant Biotechnology 1492-2000 (Cambridge and New York: Cambridge University Press, 1988), table 2.2. }~ In 2003, 56 percent of farms had sales of less than $10,000 a year. Roughly 85 percent of farms had less than $100,000 in sales.~{ USDA National Agriculture Statistics Survey (2004), http://www.usda.gov/ nass/aggraphs/fncht3.htm. }~ These farms account for only 42 percent of the farmland. By comparison, 3.4 percent of farms have sales of more than $500,000 a year, and account for more than 21 percent of land. In the aggregate, the 7.5 percent of farms with sales over $250,000 account for 37 percent of land cultivated. Of all principal owners of farms in the United States in 2002, 42.5 percent reported something other than farming as their principal occupation, and many reported spending two hundred or ,{[pg 334]}, more days off-farm, or even no work days at all on the farm. The growth of large-scale "agribusiness," that is, mechanized, rationalized industrial-scale production of agricultural products, and more important, of agricultural inputs, is seen as replacing the family farm and the small-scale, self-sufficient farm, and bringing farm labor into the capitalist mode of production. 
As scientific development of seeds and chemical applications increases, the seed as input becomes separated from the grain as output, making farmers dependent on the purchase of industrially produced seed. This further removes farmwork from traditional modes of self-sufficiency and craftlike production to an industrial mode. This basic dynamic is repeated in the critique of the Green Revolution, with the added overlay that the industrial producers of seed are seen to be multinational corporations, and the industrialization of agriculture is seen as creating dependencies in the periphery on the industrial-scientific core of the global economy.
+
+The social-economic critique has been enmeshed, as a political matter, with environmental, health, and consumer-oriented critiques as well. The environmental critiques focus on describing the products of science as monocultures, which, lacking the genetic diversity of locally used varieties, are more susceptible to catastrophic failure. Critics also fear contamination of existing varieties, unpredictable interactions with pests, and negative effects on indigenous species. The health concerns focused initially on whether breeding for yield had decreased nutritional content; in the more recent GM food debates, they center on the fear that genetically altered foods will have unanticipated negative health effects that become apparent only many years from now. The consumer concerns have to do with quality and an aesthetic attraction to artisan-mode agricultural products and aversion to eating industrial outputs. These social-economic and environmental-health-consumer concerns tend also to be aligned with protectionist lobbies, not only for economic purposes, but also reflecting a strong cultural attachment to the farming landscape and human ecology, particularly in Europe.
+
+This combination of social-economic and postcolonial critique, environmentalism, public-health concerns, consumer advocacy, and farm-sector protectionism against the relatively industrialized American agricultural sector reached a height of success in the 1999 five-year ban imposed by the European Union on all GM food sales. A recent study by a governmental Science Review Board in the United Kingdom, however, found that there was no ,{[pg 335]}, evidence for any of the environmental or health critiques of GM foods.~{ First Report of the GM Science Review Panel, An Open Review of the Science Relevant to GM Crops and Food Based on the Interests and Concerns of the Public, United Kingdom, July 2003. }~ Indeed, as Peter Pringle masterfully chronicled in Food, Inc., both sides of the political debate could be described as having buffed their cases significantly. The successes and potential benefits have undoubtedly been overstated by enamored scientists and avaricious vendors. There is little doubt, too, that the near-hysterical pitch at which the failures and risks of GM foods have been trumpeted has little science to back it, and the debate has degenerated to a state that makes reasoned, evidence-based consideration difficult. In Europe in general, however, there is wide acceptance of what is called a "precautionary principle." One way of putting it is that absence of evidence of harm is not evidence of absence of harm, and caution counsels against adoption of the new and at least theoretically dangerous. It was this precautionary principle rather than evidence of harm that was at the base of the European ban. This ban has recently been lifted, in the wake of a WTO trade dispute with the United States and other major producers who challenged the ban as a trade barrier. However, the European Union retained strict labeling requirements.
This battle among wealthy countries, between the conservative "Fortress Europe" mentality and the growing reliance of American agriculture on biotechnological innovation, would have little moral valence if it did not affect funding for, and availability of, biotechnological research for the populations of the developing world. Partly as a consequence of the strong European resistance to GM foods, the international agricultural research centers that led the way in the development of the Green Revolution varieties, and that released their developments freely for anyone to sell and use without proprietary constraint, were slow to develop capacity in genetic engineering and biotechnological research more generally. Rather than the public national and international efforts leading the way, a study of GM use in developing nations concluded that practically all GM acreage is sown with seed obtained in the finished form from a developed-world supplier, for a price premium or technology licensing fee.~{ Robert E. Evenson, "GMOs: Prospects for Productivity Increases in Developing Countries," Journal of Agricultural and Food Industrial Organization 2 (2004): article 2. }~ The seed, and its improvements, is proprietary to the vendor in this model. It is not supplied in a form or with the rights to further improve locally and independently. Because of the critique of innovation in agriculture as part of the process of globalization and industrialization, of environmental degradation, and of consumer exploitation, the political forces that would have been most likely to support public-sector investment in agricultural innovation are in opposition to such investments. The result has not been retardation of biotechnological innovation ,{[pg 336]}, in agriculture, but its increasing privatization: primarily in the United States and now increasingly in Latin America, whose role in global agricultural production is growing.
+
+Private-sector investment, in turn, operates within a system of patents and other breeders' exclusive rights, whose general theoretical limitations are discussed in chapter 2. In agriculture, this has two distinct but mutually reinforcing implications. The first is that, while private-sector innovation has indeed accounted for most genetically engineered crops in the developing world, research aimed at improving agricultural production in the neediest places has not been significantly pursued by the major private-sector firms. A sector based on expectation of sales of products embedding its patents will not focus its research where human welfare will be most enhanced. It will focus where human welfare can best be expressed in monetary terms. The poor are systematically underserved by such a system. It is intended to elicit investments in research in directions that investors believe will result in outputs that serve the needs of those with the greatest willingness and ability to pay. The second is that even where the products of innovation can, as a matter of biological characteristics, be taken as inputs into local research and development--by farmers or by national agricultural research systems--the international system of patents and plant breeders' rights enforcement makes it illegal to do so without a license. This again retards the ability of poor countries and their farmers and research institutes to conduct research into local adaptations of improved crops.
+
+The central question raised by the increasing privatization of agricultural biotechnology over the past twenty years is: What can be done to employ commons-based strategies to provide a foundation for research that will be focused on the food security of developing world populations? Is there a way of managing innovation in this sector so that it will not be heavily weighted in favor of populations with a higher ability to pay, and so that its outputs allow farmers and national research efforts to improve and adapt to highly variable local agroecological environments? The continued presence of the public-sector research infrastructure--including the international and national research centers, universities, and NGOs dedicated to the problem of food security--and the potential of harnessing individual farmers and scientists to cooperative development of open biological innovation for agriculture suggest that commons-based paths for development in the area of food security and agricultural innovation are indeed feasible.
+
+First, some of the largest and most rapidly developing nations that still ,{[pg 337]}, have large poor populations--most prominently, China, India, and Brazil-- can achieve significant advances through their own national agricultural research systems. Their research can, in turn, provide a platform for further innovation and adaptation by projects in poorer national systems, as well as in nongovernmental public and peer-production efforts. In this regard, China seems to be leading the way. The first rice genome to be sequenced was japonica, apparently sequenced in 2000 by scientists at Monsanto, but not published. The second, an independent sequence of japonica produced by scientists at Syngenta, appeared in Science in April 2002 as the first published rice genome sequence. To protect its proprietary interests, Syngenta entered a special agreement with Science, which permitted the authors not to deposit the genomic information into the public GenBank maintained by the National Institutes of Health in the United States.~{ Elliot Marshall, "A Deal for the Rice Genome," Science 296 (April 2002): 34. }~ Depositing the information in GenBank makes it immediately available for other scientists to work with freely. All the major scientific publications require that such information be deposited and made publicly available as a standard condition of publication, but Science waived this requirement for the Syngenta japonica sequence. The same issue of Science, however, carried a similar publication, the sequence of Oryza sativa L.ssp. indica, the most widely cultivated subspecies in China. This was sequenced by a public Chinese effort, and its outputs were immediately deposited in GenBank.
The simultaneous publication of the rice genome by a major private firm and a Chinese public effort was the first public exposure to the enormous advances that China's public sector has made in agricultural biotechnology, and its focus first and foremost on improving Chinese agriculture. While its investments are still an order of magnitude smaller than those of public and private sectors in the developed countries, China has been reported as the source of more than half of all expenditures in the developing world.~{ Jikun Huang et al., "Plant Biotechnology in China," Science 295 (2002): 674. }~ China's longest experience with GM agriculture is with Bt cotton, which was introduced in 1997. By 2000, 20 percent of China's cotton acreage was sown to Bt cotton. One study showed that the average farm planted less than 0.5 hectare of cotton, and the trait most valuable to these smallholders was Bt cotton's reduced need for pesticide. Those who adopted Bt cotton used less pesticide, reducing labor for pest control and the pesticide cost per kilogram of cotton produced. This allowed an average cost savings of 28 percent. Another effect suggested by survey data--which, if confirmed over time, would be very important as a matter of public health, but also to the political economy of the agricultural biotechnology debate--is that farmers ,{[pg 338]}, who did not use Bt cotton were four times as likely to report symptoms of a degree of toxic exposure following application of pesticides as farmers who did adopt Bt cotton.~{ Huang et al., "Plant Biotechnology." }~ The point is not, of course, to sing the praises of GM cotton or the Chinese research system. China's efforts offer an example of how the larger national research systems can provide an anchor for agricultural research, supplying solutions for their own populations and, by making the products of their research publicly and freely available, a foundation for the work of others.
+
+Alongside the national efforts in developing nations, there are two major paths for commons-based research and development in agriculture that could serve the developing world more generally. The first is based on existing research institutes and programs cooperating to build a commons-based system, cleared of the barriers of patents and breeders' rights, outside and alongside the proprietary system. The second is based on the kind of loose affiliation of university scientists, nongovernmental organizations, and individuals that we saw play such a significant role in the development of free and open-source software. The most promising current efforts in the former vein are the PIPRA (Public Intellectual Property for Agriculture) coalition of public-sector universities in the United States, and, if it delivers on its theoretical promises, the Generation Challenge Program led by CGIAR (the Consultative Group on International Agricultural Research). The most promising model of the latter, and probably the most ambitious commons-based project for biological innovation currently contemplated, is BIOS (Biological Innovation for an Open Society).
+
+PIPRA is a collaboration effort among public-sector universities and agricultural research institutes in the United States, aimed at managing their rights portfolio in a way that will give their own and other researchers freedom to operate in an institutional ecology increasingly populated by patents and other rights that make work difficult. The basic thesis and underlying problem that led to PIPRA's founding were expressed in an article in Science coauthored by fourteen university presidents.~{ Richard Atkinson et al., "Public Sector Collaboration for Agricultural IP Management," Science 301 (2003): 174. }~ They underscored the centrality of public-sector, land-grant university-based research to American agriculture, and the shift over the last twenty-five years toward increased use of intellectual property rules to cover basic discoveries and tools necessary for agricultural innovation. These strategies have been adopted by both commercial firms and, increasingly, by public-sector universities as the primary mechanism for technology transfer from the scientific institute to the commercializing firms. The problem they saw was that in agricultural research, ,{[pg 339]}, innovation is incremental. It relies on access to existing germplasm and crop varieties that, with each generation of innovation, bring with them an ever-increasing set of intellectual property claims that must be licensed in order to obtain permission to innovate further. The universities decided to use the power that ownership over roughly 24 percent of the patents in agricultural biotechnology innovations provides them as a lever with which to unravel the patent thickets and to reduce the barriers to research that they increasingly found themselves dealing with. The main story, one might say the "founding myth" of PIPRA, was the story of golden rice. Golden rice is a variety of rice that was engineered to provide dietary vitamin A.
It was developed with the hope that it could introduce vitamin A supplementation to populations in which vitamin A deficiency causes roughly 500,000 cases of blindness a year and contributes to more than 2 million deaths a year. However, when it came to translating the research into deliverable plants, the developers encountered more than seventy patents in a number of countries and six materials transfer agreements that restricted the work and delayed it substantially. PIPRA was launched as an effort of public-sector universities to cooperate in achieving two core goals that would respond to this type of barrier--preserving the right to pursue applications to subsistence crops and other developing-world-related crops, and preserving their own freedom to operate vis-a-vis each other's patent portfolios.
+
+The basic insight of PIPRA, which can serve as a model for university alliances in the context of the development of medicines as well as agriculture, is that universities are not profit-seeking enterprises, and university scientists are not primarily driven by a profit motive. In a system that offers opportunities for academic and business tracks for people with similar basic skills, academia tends to attract those who are more driven by nonmonetary motivations. While universities have invested a good deal of time and money since the Bayh-Dole Act of 1980 permitted and indeed encouraged them to patent innovations developed with public funding, patent and other exclusive-rights-based revenues have not generally emerged as an important part of the revenue scheme of universities. As table 9.2 shows, except for one or two outliers, patent revenues have been all but negligible in university budgets.~{ This table is a slightly expanded version of one originally published in Yochai Benkler, "Commons Based Strategies and the Problems of Patents," Science 305 (2004): 1110. }~ This fact makes it fiscally feasible for universities to use their patent portfolios to maximize the global social benefit of their research, rather than trying to maximize patent revenue. In particular, universities can aim to include provisions in their technology licensing agreements that are aimed at the dual goals of (a) delivering products embedding their innovations ,{[pg 340]},
+
+!_ Table 9.2: Selected University Gross Revenues and Patent Licensing Revenues
+
+table{~h c6; 28; 14; 14; 14; 14; 14;
+
+.
+Total Revenues (millions)
+Licensing & Royalties (mil.)
+Licensing & Royalties (% of total)
+Gov. Grants & Contracts (mil.)
+Gov. Grants & Contracts (% of total)
+
+All universities
+$227,000
+$1270
+0.56%
+$31,430
+13.85%
+
+Columbia University
+$2,074
+$178.4 $100-120a
+8.6% 4.9-5.9%
+$532
+25.65%
+
+University of California
+$14,166
+$81.3 $55(net)b
+0.57% 0.39%
+$2372
+16.74%
+
+Stanford University
+$3,475
+$43.3 $36.8c
+1.25% 1.06%
+$860
+24.75%
+
+Florida State
+$2,646
+$35.6
+1.35%
+$238
+8.99%
+
+University of Wisconsin-Madison
+$1,696
+$32
+1.89%
+$417.4
+24.61%
+
+University of Minnesota
+$1,237
+$38.7
+3.12%
+$323.5
+26.15%
+
+Harvard
+$2,473
+$47.9
+1.94%
+$416 $548.7d
+16.82% 22.19%
+
+Cal Tech
+$531
+$26.7e $15.7f
+5.02% 2.95%
+$268
+50.47%
+
+}table
+
+Sources: Aggregate revenues: U.S. Dept. of Education, National Center for Education Statistics, Enrollment in Postsecondary Institutions, Fall 2001, and Financial Statistics, Fiscal Year 2001 (2003), Table F; Association of University Technology Management, Annual Survey Summary FY 2002 (AUTM 2003), Table S-12. Individual institutions: publicly available annual reports of each university and/or its technology transfer office for FY 2003.
+
+Notes:
+
+a. Large ambiguity results because the technology transfer office reports increased revenues for year-end 2003 as $178M without reporting expenses; the University Annual Report combines licensing revenue with all "revenue from other educational and research activities," and reports a 10 percent decline in this category, "reflecting an anticipated decline in royalty and license income," from the $133M for the previous year-end, 2002. The table reflects an assumed net contribution to university revenues of between $100M and $120M (depending on whether the entire decline in the category is attributed to royalties, or royalties are assumed to have declined proportionately with the category).
+
+b. University of California Annual Report of the Office of Technology Transfer is more transparent than most in providing expenses--both net legal expenses and tech transfer direct operating expenses, which allows a clear separation of net revenues from technology transfer activities.
+
+c. Minus direct expenses, not including expenses for unlicensed inventions.
+
+d. Federal- and nonfederal-sponsored research.
+
+e. Almost half of this amount is in income from a single Initial Public Offering, and therefore does not represent a recurring source of licensing revenue.
+
+f. Technology transfer gross revenue minus the one-time event of an initial public offering of LiquidMetal Technologies.
+
+,{[pg 341]},
+
+to developing nations at reasonable prices and (b) providing researchers and plant breeders the freedom to operate that would allow them to research, develop, and ultimately produce crops that would improve food security in the developing world.
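The percentage columns in Table 9.2 are simple ratios of the licensing and grant figures against total revenues. As an illustrative sanity check (the dollar figures below are copied from the table; the script itself is only a sketch, and the three selected rows are an arbitrary subset), they can be recomputed directly:

```python
# Recompute the two percentage columns of Table 9.2 from the dollar figures.
# All figures are in millions of dollars, taken from the table above.
rows = {
    # name: (total revenues, licensing & royalties, gov. grants & contracts)
    "All universities": (227_000, 1_270, 31_430),
    "Columbia": (2_074, 178.4, 532),
    "Stanford": (3_475, 43.3, 860),
}

for name, (total, licensing, grants) in rows.items():
    print(f"{name}: licensing {licensing / total:.2%} of total, "
          f"grants {grants / total:.2%} of total")
# All universities: licensing 0.56% of total, grants 13.85% of total
```

The output matches the table's figures (0.56% and 13.85% for all universities, 8.60% and 25.65% for Columbia, 1.25% and 24.75% for Stanford), underscoring the text's point: even at the outliers, licensing income is a small fraction of the share contributed by government grants and contracts.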
+
+While PIPRA shows an avenue for collaboration among universities in the public interest, it is an avenue that does not specifically rely on, or benefit in great measure from, the information networks or the networked information economy. It continues to rely on the traditional model of publicly funded research. More explicit in its effort to leverage the cost savings made possible by networked information systems is the Generation Challenge Program (GCP). The GCP is an effort to bring the CGIAR into the biotechnology sphere, carefully, given the political resistance to genetically modified foods, and quickly, given the already relatively late start that the international research centers have had in this area. Its stated emphasis is on building an architecture of innovation, or network of research relationships, that will provide low-cost techniques for the basic contemporary technologies of agricultural research. The program has five primary foci, but the basic thrust is to generate improvements both in basic genomics science and in breeding and farmer education, in both cases for developing world agriculture. One early focus would be on building a communications system that allows participating institutions and scientists to move information efficiently and utilize computational resources to pursue research. There are hundreds of thousands of samples of germplasm, from "landrace" (that is, locally agriculturally developed) and wild varieties to modern varieties, located in databases around the world in international, national, and academic institutions. There are tremendous high-capacity computation resources in some of the most advanced research institutes, but not in many of the national and international programs. One of the major goals articulated for the GCP is to develop Web-based interfaces to share these data and computational resources. Another is to provide a platform for sharing new questions and directions of research among participants. 
The work in this network will, in turn, rely on materials that have proprietary interests attached to them, and will produce outputs that could have proprietary interests attached to them as well. Just like the universities, the GCP institutes (national, international, and nonprofit) are looking for an approach aimed at securing open access to research materials and tools and providing humanitarian access to its products, particularly for subsistence crop development and use. As of this writing, however, the GCP is still in a formative stage, more an aspiration than ,{[pg 342]}, a working model. Whether it will succeed in overcoming the political constraints placed on the CGIAR, as well as the relative latecomer status of the international public efforts in this area of work, remains to be seen. But the elements of the GCP certainly exhibit an understanding of the possibilities presented by commons-based networked collaboration, and an ambition to both build upon them and contribute to their development.
+
+The most ambitious effort to create a commons-based framework for biological innovation in this field is BIOS. BIOS is an initiative of CAMBIA (Center for the Application of Molecular Biology to International Agriculture), a nonprofit agricultural research institute based in Australia, which was founded and is directed by Richard Jefferson, a pioneer in plant biotechnology. BIOS is based on the observation that much of contemporary agricultural research depends on access to tools and enabling technologies-- such as mechanisms to identify genes or for transferring them into target plants. When these tools are appropriated by a small number of firms and available only as part of capital-intensive production techniques, they cannot serve as the basis for innovation at the local level or for research organized on nonproprietary models. One of the core insights driving the BIOS initiative is the recognition that when a subset of necessary tools is available in the public domain, but other critical tools are not, the owners of those tools appropriate the full benefits of public domain innovation without at the same time changing the basic structural barriers to use of the proprietary technology. To overcome these problems, the BIOS initiative includes both a strong informatics component and a fairly ambitious "copyleft"-like model (similar to the GPL described in chapter 3) of licensing CAMBIA's basic tools and those of other members of the BIOS initiative. The informatics component builds on a patent database that has been developed by CAMBIA for a number of years, and whose ambition is to provide as complete as possible a dataset of who owns what tools, what the contours of ownership are, and by implication, who needs to be negotiated with and where research paths might emerge that are not yet appropriated and therefore may be open to unrestricted innovation.
+
+The licensing or pooling component is more proactive, and is likely the most significant component of the project. BIOS is setting up a licensing and pooling arrangement, "primed" by CAMBIA's own significant innovations in tools, which are licensed to all of the initiative's participants on a free model, with grant-back provisions that perform an openness-binding function similar to copyleft.~{ Wim Broothaertz et al., "Gene Transfer to Plants by Diverse Species of Bacteria," Nature 433 (2005): 629. }~ In coarse terms, this means that anyone who builds upon the ,{[pg 343]}, contributions of others must contribute improvements back to the other participants. One aspect of this model is that it does not assume that all research comes from academic institutions or from traditional government-funded, nongovernmental, or intergovernmental research institutes. It tries to create a framework that, like the open-source development community, engages commercial and noncommercial, public and private, organized and individual participants in a cooperative research network. The platform for this collaboration is "BioForge," styled after SourceForge, one of the major free and open-source software development platforms. The commitment to engage many different innovators is most clearly seen in the efforts of BIOS to include major international commercial providers and local potential commercial breeders alongside the more likely targets of a commons-based initiative. Central to this move is the belief that in agricultural science, the basic tools can, although this may be hard, be separated from specific applications or products. All actors, including the commercial ones, therefore have an interest in the open and efficient development of tools, leaving competition and profit making for the market in applications.
At the other end of the spectrum, BIOS's focus on making tools freely available is built on the proposition that innovation for food security involves more than biotechnology alone. It involves environmental management, locale-specific adaptations, and social and economic adoption in forms that are locally and internally sustainable, as opposed to dependent on a constant inflow of commoditized seed and other inputs. The range of participants is, then, much wider than envisioned by PIPRA or the GCP. It ranges from multinational corporations through academic scientists, to farmers and local associations, pooling their efforts in a communications platform and institutional model that is very similar to the way in which the GNU/Linux operating system has been developed. As of this writing, the BIOS project is still in its early infancy, and cannot be evaluated by its outputs. However, its structure offers the crispest example of the extent to which the peer-production model in particular, and commons-based production more generally, can be transposed into other areas of innovation at the very heart of what makes for human development--the ability to feed oneself adequately.
+
+PIPRA and the BIOS initiative are the most salient examples of, and the most significant first steps in, the development of commons-based strategies to achieve food security. Their vitality and necessity challenge the conventional wisdom that ever-increasing intellectual property rights are necessary to secure greater investment in research, or that the adoption of proprietary ,{[pg 344]}, rights is benign. Increasing appropriation of basic tools and enabling technologies creates barriers to entry for innovators--public-sector, nonprofit organizations, and the local farmers themselves--concerned with feeding those who cannot signal with their dollars that they are in need. The emergence of commons-based techniques--particularly of an open innovation platform that can incorporate farmers and local agronomists from around the world into the development and feedback process through networked collaboration platforms--promises the most likely avenue to achieve research oriented toward increased food security in the developing world. It promises a mechanism of development that will not increase the relative weight and control of a small number of commercial firms that specialize in agricultural production. It will instead release the products of innovation into a self-binding commons--one that is institutionally designed to defend itself against appropriation. It promises an iterative collaboration platform that would be able to collect environmental and local feedback in the way that a free software development project collects bug reports--through a continuous process of networked conversation among the user-innovators themselves.
In combination with public investments from national governments in the developing world, from the developed world, and from more traditional international research centers, agricultural research for food security may be on a path of development toward constructing a sustainable commons-based innovation ecology alongside the proprietary system. Whether it follows this path will be partly a function of the engagement of the actors themselves, but partly a function of the extent to which the international intellectual property/trade system will refrain from raising obstacles to the emergence of these commons-based efforts.
+
+3~ Access to Medicines: Commons-Based Strategies for Biomedical Research
+
+Nothing has played a more important role in exposing the systematic problems that the international trade and patent system presents for human development than access to medicines for HIV/AIDS. This is so for a number of reasons. First, HIV/AIDS has reached pandemic proportions. One quarter of all deaths from infectious and parasitic diseases in 2002 were caused by AIDS, accounting for almost 5 percent of all deaths in the world that year.~{ These numbers and others in this paragraph are taken from the 2004 WHO World Health Report, Annex Table 2. }~ Second, it is a new condition, unknown to medicine a mere twenty-five years ago; it is communicable, and in principle is of a type--infectious diseases--that we have come to see modern medicine as capable of solving. ,{[pg 345]}, This makes it different from much bigger killers--like the many cancers and forms of heart disease--which account for about nine times as many deaths globally. Third, it has a significant presence in the advanced economies. Because it was perceived there as a disease primarily affecting the gay community, it had a strong and well-defined political lobby and high cultural salience. Fourth, and finally, there have indeed been enormous advances in the development of medicines for HIV/AIDS. Mortality for patients who are treated is therefore much lower than for those who are not. These treatments are new, under patent, and enormously expensive. As a result, death--as opposed to chronic illness--has become overwhelmingly a consequence of poverty. More than 75 percent of deaths caused by AIDS in 2002 were in Africa. HIV/AIDS drugs offer a vivid example of an instance where drugs exist for a disease but cannot be afforded in the poorest countries. They represent, however, only a part, and perhaps the smaller part, of the limitations that a patent-based drug development system presents for providing medicines to the poor.
No less important is the absence of a market pull for drugs aimed at diseases that are solely or primarily developing-world diseases--like drugs for tropical diseases, or the still-elusive malaria vaccine.
+
+To the extent that the United States and Europe are creating a global innovation system that relies on patents and market incentives as its primary driver of research and innovation, these wealthy democracies are, of necessity, choosing to neglect diseases that disproportionately affect the poor. There is nothing evil about a pharmaceutical company that is responsible to its shareholders deciding to invest where it expects to reap profit. It is not immoral for a firm to invest its research funds in finding a drug to treat acne, which might affect 20 million teenagers in the United States, rather than a drug that will cure African sleeping sickness, which affects 66 million Africans and kills about fifty thousand every year. If there is immorality to be found, it is in the legal and policy system that relies heavily on the patent system to induce drug discovery and development, and does not adequately fund and organize biomedical research to solve the problems that cannot be solved by relying solely on market pull. However, the politics of public response to patents for drugs are similar in structure to those that have to do with agricultural biotechnology exclusive rights. There is a very strong patent-based industry--much stronger than in any other patent-sensitive area. The rents from strong patents are enormous, and a rational monopolist will pay up to the value of its rents to maintain and improve its monopoly. The primary potential political push-back in the pharmaceutical area, which does ,{[pg 346]}, not exist in the agricultural innovation area, is that the exorbitant costs of drugs developed under this system are hurting even the well-endowed purses of developed-world populations. The policy battles in the United States and throughout the developed world around drug cost containment may yet result in a sufficient loosening of the patent constraints to deliver positive side effects for the developing world.
However, they may also work in the opposite direction. The unwillingness of the wealthy populations in the developed world to pay high rents for drugs retards the most immediate path to lower-cost drugs in the developing world--simple subsidy of below-cost sales in poor countries cross-subsidized by above-cost rents in wealthy countries.
+
+The industrial structure of biomedical research and pharmaceutical development is different from that of agricultural science in ways that still leave a substantial potential role for commons-based strategies. However, these would be organized and aligned differently than in agriculture. First, while governments play an enormous role in funding basic biomedical science, there are no real equivalents of the national and international agricultural research institutes. In other words, there are few public-sector laboratories that actually produce finished drugs for delivery in the developing world, on the model of the International Rice Research Institute or one of the national agricultural research systems. On the other hand, there is a thriving generics industry, based in both advanced and developing economies, that stands ready to produce drugs once these are researched. The primary constraint on harnessing its capacity for low-cost drug production and delivery for poorer nations is the international intellectual property system. The other major difference is that, unlike with software, scientific publication, or farmers in agriculture, there is no existing framework for individuals to participate in research and development on drugs and treatments. The primary potential sources of nongovernmental investment of effort and thought in biomedical research and development are universities as institutions and individual scientists, if they choose to organize themselves into effective peer-production communities.
+
+Universities and scientists have two complementary paths open to them to pursue commons-based strategies to provide improved research on the relatively neglected diseases of the poor and improved access to existing drugs that are available in the developed world but unaffordable in the developing. The first involves leveraging existing university patent portfolios--much as the universities allied in PIPRA are exploring and as CAMBIA is doing more ,{[pg 347]}, aggressively. The second involves work in an entirely new model--constructing collaboration platforms to allow scientists to engage in peer production, cross-cutting the traditional grant-funded lab, and aiming toward research into diseases that do not exercise a market pull on the biomedical research system in the advanced economies.
+
+/{Leveraging University Patents}/. In February 2001, the humanitarian organization Doctors Without Borders (also known as Medecins Sans Frontieres, or MSF) asked Yale University, which held the key South African patent on stavudine--one of the drugs then most commonly used in combination therapies--for permission to use generic versions in a pilot AIDS treatment program. At the time, the licensed version of the drug, sold by Bristol-Myers Squibb (BMS), cost $1,600 per patient per year. A generic version, manufactured in India, was available for $47 per patient per year. At that point in history, thirty-nine drug manufacturers were suing the South African government to strike down a law permitting importation of generics in a health crisis, and no drug company had yet made concessions on pricing in developing nations. Within weeks of receiving MSF's request, Yale negotiated with BMS to secure the sale of stavudine for fifty-five dollars a year in South Africa. Yale, the University of California at Berkeley, and other universities have, in the years since, entered into similar ad hoc agreements with regard to developing-world applications or distribution of drugs that depend on their patented technologies. These successes provide a template for a much broader realignment of how universities use their patent portfolios to alleviate the problems of access to medicines in developing nations.
+
+We have already seen in table 9.2 that while universities own a substantial and increasing number of patents, they do not fiscally depend in any significant way on patent revenue. Such revenues play a very small part in the overall scheme of university revenues. This makes it practical for universities to reconsider how they use their patents and to reorient toward using them to maximize their beneficial effects on equitable access to pharmaceuticals developed in the advanced economies. Two distinct moves are necessary to harness publicly funded university research toward building an information commons that is easily accessible for global redistribution. The first is internal to the university process itself. The second has to do with the interface between the university and patent-dependent and similar exclusive-rights-dependent market actors.
+
+Universities are internally conflicted about their public and market goals. ,{[pg 348]}, Dating back to the passage of the Bayh-Dole Act, universities have increased their patenting practices for the products of publicly funded research. Technology transfer offices that have been set up to facilitate this practice are, in many cases, measured by the number of patent applications, grants, and dollars they bring in to the university. These metrics for measuring the success of these offices tend to make them function, and understand their role, in a way that is parallel to exclusive-rights-dependent market actors, instead of as public-sector, publicly funded, and publicly minded institutions. A technology transfer officer who has successfully provided a royalty-free license to a nonprofit concerned with developing nations has no obvious metric in which to record and report the magnitude of her success (saving X millions of lives or displacing Y misery), unlike her colleague who can readily report X millions of dollars from a market-oriented license, or even merely Y dozens of patents filed. Universities must consider more explicitly their special role in the global information and knowledge production system. If they recommit to a role focused on serving the improvement of the lot of humanity, rather than maximization of their revenue stream, they should adapt their patenting and licensing practices appropriately. In particular, it will be important following such a rededication to redefine the role of technology transfer offices in terms of lives saved, quality-of-life measures improved, or similar substantive measures that reflect the mission of university research, rather than the present metrics borrowed from the very different world of patent-dependent market production. While the internal process is culturally and politically difficult, it is not, in fact, analytically or technically complex. 
Universities have, for a very long time, seen themselves primarily as dedicated to the advancement of knowledge and human welfare through basic research, reasoned inquiry, and education. The long-standing social traditions of science have always stood apart from market incentives and orientations. The problem is therefore one of reawakening slightly dormant cultural norms and understandings, rather than creating new ones in the teeth of long-standing contrary traditions. The problem should be substantially simpler than, say, persuading companies that have traditionally thought of their innovation in terms of patents granted or royalties claimed to adopt free software strategies, as some technology industry participants have done.
+
+If universities do make the change, then the more complex problem will remain: designing an institutional interface between universities and the pharmaceutical industry that will provide sustainable, significant benefits for developing-world distribution of drugs and for research opportunities into ,{[pg 349]}, developing-world diseases. As we already saw in the context of agriculture, patents create two discrete kinds of barriers: The first is on distribution, because of the monopoly pricing power they purposefully confer on their owners. The second is on research that requires access to tools, enabling technologies, data, and materials generated by the developed-world research process, and that could be useful to research on developing-world diseases. Universities working alone will not provide access to drugs. While universities perform more than half of the basic scientific research in the United States, their effort is concentrated in early-stage science: more than 93 percent of university research expenditures go to basic and applied science, leaving less than 7 percent for development--the final research necessary to convert a scientific project into a usable product.~{ National Science Foundation, Division of Science Resource Statistics, Special Report: National Patterns of Research and Development Resources: 2003 NSF 05-308 (Arlington, VA: NSF, 2005), table 1. }~ Universities therefore cannot simply release their own patents and expect treatments based on their technologies to become accessible. Instead, a change is necessary in licensing practices that takes an approach similar to a synthesis of the General Public License (GPL), BIOS's licensing approach, and PIPRA.
+
+Universities can cooperate to include in their licenses provisions that would secure freedom to operate for anyone conducting research into developing-world diseases or production for distribution in poorer nations. The institutional details of such a licensing regime are relatively complex and arcane, but efforts are, in fact, under way to develop such licenses and to have them adopted by universities.~{ The detailed analysis can be found in Amy Kapczynski et al., "Addressing Global Health Inequities: An Open Licensing Paradigm for Public Sector Inventions," Berkeley Journal of Law and Technology (Spring 2005). }~ What is important here, for understanding the potential, is the basic idea and framework. In exchange for access to the university's patents, the pharmaceutical licensees will agree not to assert any of their own rights in drugs that require a university license against generics manufacturers who make generic versions of those drugs purely for distribution in low- and middle-income countries. An Indian or American generics manufacturer could produce patented drugs that relied on university patents and were licensed under this kind of equitable-access license, as long as it distributed its products solely in poor countries. A government or nonprofit research institute operating in South Africa could work with patented research tools without concern that doing so would violate the patents. However, neither could then import the products of their production or research into the developed world without violating the patents of both the university and the drug company.
The licenses would create a mechanism for redistribution of drug products and research tools from the developed economies to the developing. They would do so without requiring the kind of regulatory changes advocated by others, such as ,{[pg 350]}, Jean Lanjouw, whose proposals aim similarly to achieve differential pricing in the developing and developed worlds.~{ See Jean Lanjouw, "A New Global Patent Regime for Diseases: U.S. and International Legal Issues," Harvard Journal of Law & Technology 16 (2002). }~ Because this redistribution could be achieved by universities acting through licensing, instead of through changes in law, it offers a more feasible political path for achieving the desired result. Such action by universities would, of course, not solve all the problems of access to medicines. First, not all health-related products are based on university research. Second, patents do not account for all, or perhaps even most, of the reason that patients in poor nations are not treated. A lack of delivery infrastructure, public-health monitoring and care, and stable conditions to implement disease-control policy likely weigh more heavily. Nonetheless, there are successful and stable government and nonprofit programs that could treat hundreds of thousands or millions of patients more than they do now, if the cost of drugs were lower. Achieving improved access for those patients seems a goal worthy of pursuit, even if it is no magic bullet to solve all the illnesses of poverty.
+
+/{Nonprofit Research}/. Even a successful campaign to change the licensing practices of universities in order to achieve inexpensive access to the products of pharmaceutical research would leave the problem of research into diseases that affect primarily the poor. This is because, unless universities themselves undertake the development process, patent-based pharmaceutical firms have no reason to. The "simple" answer to this problem is more funding from the public sector or foundations for both basic research and development. This avenue has made some progress, and some foundations--particularly, in recent years, the Gates Foundation--have invested enormous amounts of money in searching for cures and improving basic public-health conditions of disease in Africa and elsewhere in the developing world. It has received a particularly interesting boost since 2000, with the founding of the Institute for One World Health, a nonprofit pharmaceutical company dedicated to research and development specifically into developing-world diseases. The basic model of One World Health begins by taking contributions of drug leads that are deemed unprofitable by the pharmaceutical industry--from both universities and pharmaceutical companies. The firms have no reason not to contribute their patents on leads purely for purposes they do not intend to pursue. The group then relies on foundation and public-sector funding to perform synthesis and preclinical and clinical trials, in collaboration with research centers in the United States, India, Bangladesh, and Thailand; when the time comes for manufacturing, the institute collaborates with manufacturers ,{[pg 351]}, in developing nations to produce low-cost instances of the drugs, and with government and NGO public-health providers to organize distribution. This model is new, and has not yet had enough time to mature and provide measurable success. However, it is promising.
+
+/{Peer Production of Drug Research and Development}/. Scientists, scientists-in-training, and, to some extent, nonscientists can complement university licensing practices and formally organized nonprofit efforts as a third component of the ecology of commons-based producers. The initial response to the notion that peer production can be used for drug development is that the process is too complex, expensive, and time consuming to succumb to commons-based strategies. This may, at the end of the day, prove true. However, the same was once thought of complex software projects and of supercomputing, until free software and distributed computing projects like SETI@Home and Folding@Home came along and proved otherwise. The basic point is to see how distributed nonmarket efforts are organized, and to see how the scientific production process can be broken up to fit a peer-production model.
+
+First, anything that can be done through computer modeling or data analysis can, in principle, be done on a peer-production basis. Increasing portions of biomedical research are done today through modeling, computer simulation, and data analysis of large and growing databases, including a wide range of genetic, chemical, and biological information. As more of the process of discovering potential drug leads can be done by modeling and computational analysis, more can be organized for peer production. The relevant model here is open bioinformatics. Bioinformatics generally is the practice of pursuing solutions to biological questions using mathematics and information technology. Open bioinformatics is a movement within bioinformatics aimed at developing the tools in an open-source model, and at providing access to the tools and the outputs on a free and open basis. Such projects include the Ensembl Genome Browser, operated by the European Bioinformatics Institute and the Sanger Centre, and the National Center for Biotechnology Information (NCBI), both of which use computer databases to provide access to data and to run various searches on combinations, patterns, and so forth, in the data. In both cases, access to the data and the value-adding functionalities are free. The software, too, is developed on a free software model. These, in turn, are complemented by database policies like those of the International HapMap Project, an effort to map ,{[pg 352]}, common variations in the human genome, whose participants have committed to releasing all the data they collect freely into the public domain. The economics of this portion of research into drugs are very similar to the economics of software and computation. The models are just software. Some models will be able to run on the ever-more-powerful basic machines that the scientists themselves use. However, anything that requires serious computation could be modeled for distributed computing.
This would allow projects to harness volunteer computation resources, like Folding@Home, Genome@Home, or FightAIDS@Home--sites that already harness the computing power of hundreds of thousands of users to attack biomedical science questions. This stage of the process is the one that can most directly be translated into a peer-production model, and, in fact, there have been proposals along these lines, such as the Tropical Disease Initiative of Maurer, Sali, and Rai.~{ S. Maurer, A. Sali, and A. Rai, "Finding Cures for Tropical Disease: Is Open Source the Answer?" Public Library of Science: Medicine 1, no. 3 (December 2004): e56. }~
+
+Second, and more complex, is the problem of building wet-lab science on a peer-production basis. Some efforts would have to focus on the basic science. Some might be at the phase of optimization and chemical synthesis. Some, even more ambitiously, would be at the stage of preclinical animal trials and even clinical trials. The wet lab seems to present an insurmountable obstacle for a serious role for peer production in biomedical science. Nevertheless, it is not clear that it is actually any more so than it might have seemed for the development of an operating system, or a supercomputer, before these were achieved. Laboratories have two immensely valuable resources that may be capable of being harnessed to peer production. Most important by far are postdoctoral fellows. These are the same characters who populate so many free software projects, only geeks of a different feather. They are at a similar life stage. They have the same hectic, overworked lives, and yet the same capacity to work one more hour on something else, something interesting, exciting, or career enhancing, like a special grant announced by the government. The other resources that have overcapacity might be thought of as petri dishes, or if that sounds too quaint and old-fashioned, polymerase chain reaction (PCR) machines or electrophoresis equipment. The point is simple. Laboratory funding currently is silo-based. Each lab is usually funded to have all the equipment it needs for run-of-the-mill work, except for very large machines operated on time-share principles. Those machines that are redundantly provisioned in laboratories have downtime. That downtime coupled with a postdoctoral fellow in the lab is an experiment waiting to happen.
If a group that is seeking to start a project ,{[pg 353]}, defines discrete modules of a common experiment, and provides a communications platform to allow people to download project modules, perform them, and upload results, it would be possible to harness the overcapacity that exists in laboratories. In principle, although this is a harder empirical question, the same could be done for other widely available laboratory materials and even animals for preclinical trials on the model of, "brother, can you spare a mouse?" One fascinating proposal and early experiment was suggested by William Scott, a chemistry professor at Indiana University-Purdue University Indianapolis. Scott proposed developing simple, low-cost kits for training undergraduate students in chemical synthesis, but which would use targets and molecules identified by computational biology as potential treatments for developing-world diseases as their output. With enough redundancy across different classrooms and institutions around the world, the results could be verified while screening and synthesizing a significant number of potential drugs. The undergraduate educational experience could actually contribute to new experiments, as opposed simply to synthesizing outputs that are not really needed by anyone. Clinical trials provide yet another level of complexity, because the problem of delivering consistent drug formulations for testing to physicians and patients stretches the imagination. One option would be that research centers in countries affected by the diseases in question could pick up the work at this point, and create and conduct clinical trials. These too could be coordinated across regions and countries among the clinicians administering the tests, so that accruing patients and obtaining sufficient information could be achieved more rapidly and at lower cost.
As in the case of One World Health, production and regulatory approval, from this stage on, could be taken up by the generics manufacturers. In order to prevent the outputs from being appropriated at this stage, every stage in the process would require a public-domain-binding license that would prevent a manufacturer from taking the outputs and, by making small changes, patenting the ultimate drug.
+
+This proposal about medicine is, at this stage, the most imaginary among the commons-based strategies for development suggested here. However, it is analytically consistent with them, and, in principle, should be attainable. In combination with the more traditional commons-based approaches, university research, and the nonprofit world, peer production could contribute to an innovation ecology that could overcome the systematic inability of a purely patent-based system to register and respond to the health needs of the world's poor. ,{[pg 354]},
+
+3~ COMMONS-BASED STRATEGIES FOR DEVELOPMENT: CONCLUSION
+
+Welfare, development, and growth outside of the core economies heavily depend on the transfer of information-embedded goods and tools, information, and knowledge from the technologically advanced economies to the developing and less-developed economies and societies around the globe. These are important partly as finished usable components of welfare. Perhaps more important, however, they are necessary as tools and platforms on which innovation, research, and development can be pursued by local actors in the developing world itself--from the free software developers of Brazil to the agricultural scientists and farmers of Southeast Asia. The primary obstacles to diffusion of these desiderata in the required direction are the institutional framework of intellectual property and trade and the political power of the patent-dependent business models in the information-exporting economies. This is not because the proprietors of information goods and tools are evil. It is because their fiduciary duty is to maximize shareholder value, and the less-developed and developing economies have little money. As rational maximizers with a legal monopoly, the patent holders restrict output and sell at higher rates. This is not a bug in the institutional system we call "intellectual property." It is a known feature that has known undesirable side effects of inefficiently restricting access to the products of innovation. In the context of vast disparities in wealth across the globe, however, this known feature does not merely lead to less than theoretically optimal use of the information. It leads to predictable increase of morbidity and mortality and to higher barriers to development.
+
+The rise of the networked information economy provides a new framework for thinking about how to work around the barriers that the international intellectual property regime places on development. Public-sector and other nonprofit institutions that have traditionally played an important role in development can do so with a greater degree of efficacy. Moreover, the emergence of peer production provides a model for new solutions to some of the problems of access to information and knowledge. In software and communications, these are directly available. In scientific information and some educational materials, we are beginning to see adaptations of these models to support core elements of development and learning. In food security and health, the translation process may be more difficult. In agriculture, we are seeing more immediate progress in the development of a woven ,{[pg 355]}, fabric of public-sector, academic, nonprofit, and individual innovation and learning to pursue biological innovation outside of the markets based on patents and breeders' rights. In medicine, we are still at a very early stage of organizational experiments and institutional proposals. The barriers to implementation are significant. However, there is growing awareness of the human cost of relying solely on the patent-based production system, and of the potential of commons-based strategies to alleviate these failures.
+
+Ideally, perhaps, the most direct way to arrive at a better system for harnessing innovation to development would pass through a new international politics of development, which would result in a better-designed international system of trade and innovation policy. There is in fact a global movement of NGOs and developing nations pursuing this goal. It is possible, however, that the politics of international trade are sufficiently bent to the purposes of incumbent industrial information economy proprietors and the governments that support them as a matter of industrial policy that the political path of formal institutional reform will fail. Certainly, the history of the TRIPS agreement and, more recently, efforts to pass new expansive treaties through the WIPO suggest this. However, one of the lessons we learn as we look at the networked information economy is that the work of governments through international treaties is not the final word on innovation and its diffusion across boundaries of wealth. The emergence of social sharing as a substantial mode of production in the networked environment offers an alternative route for individuals and nonprofit entities to take a much more substantial role in delivering actual desired outcomes independent of the formal system. Commons-based and peer production efforts may not be a cure-all. However, as we have seen in the software world, these strategies can make a big contribution to quite fundamental aspects of human welfare and development. And this is where freedom and justice coincide.
+
+The practical freedom of individuals to act and associate freely--free from the constraints of proprietary endowment, free from the constraints of formal relations of contract or stable organizations--allows individual action in ad hoc, informal association to emerge as a new global mover. It frees the ability of people to act in response to all their motivations. In doing so, it offers a new path, alongside those of the market and formal governmental investment in public welfare, for achieving definable and significant improvements in human development throughout the world. ,{[pg 356]},
+
+1~10 Chapter 10 - Social Ties: Networking Together
+
+Increased practical individual autonomy has been central to my claims throughout this book. It underlies the efficiency and sustainability of nonproprietary production in the networked information economy. It underlies the improvements I describe in both freedom and justice. Many have raised concerns that this new freedom will fray social ties and fragment social relations. On this view, the new freedom is one of detached monads, a freedom to live arid, lonely lives free of the many constraining attachments that make us grounded, well-adjusted human beings. Bolstered by early sociological studies, this perspective was one of two diametrically opposed views that typified the way the Internet's effect on community, or close social relations, was portrayed in the 1990s. The other view, popular among the digerati, was that "virtual communities" would come to represent a new form of human communal existence, providing new scope for building a shared experience of human interaction. Within a few short years, however, empirical research suggests that while neither view had it completely right, it was the ,{[pg 357]}, dystopian view that got it especially wrong. The effects of the Internet on social relations are obviously complex. It is likely too soon to tell which social practices this new mode of communication will ultimately settle on. The most recent research, however, suggests that the Internet has some fairly well-defined effects on human community and intimate social relations. These effects mark neither breakdown nor transcendence, but they do represent an improvement over the world of television and telephone along most dimensions of normative concern with social relations.
+
+We are seeing two effects: first, and most robustly, we see a thickening of preexisting relations with friends, family, and neighbors, particularly with those who were not easily reachable in the pre-Internet-mediated environment. Parents, for example, use instant messages to communicate with their children who are in college. Friends who have moved away from each other are keeping in touch more than they did before they had e-mail, because e-mail does not require them to coordinate a time to talk or to pay long-distance rates. However, this thickening of contacts seems to occur alongside a loosening of the hierarchical aspects of these relationships, as individuals weave their own web of supporting peer relations into the fabric of what might otherwise be stifling familial relationships. Second, we are beginning to see the emergence of greater scope for limited-purpose, loose relationships. These may not fit the ideal model of "virtual communities." They certainly do not fit a deep conception of "community" as a person's primary source of emotional context and support. They are nonetheless effective and meaningful to their participants. It appears that, as the digitally networked environment begins to displace mass media and telephones, its salient communications characteristics provide new dimensions to thicken existing social relations, while also providing new capabilities for looser and more fluid, but still meaningful social networks. A central aspect of this positive improvement in loose ties has been the technical-organizational shift from an information environment dominated by commercial mass media on a one-to-many model, which does not foster group interaction among viewers, to an information environment that both technically and as a matter of social practice enables user-centric, group-based active cooperation platforms of the kind that typify the networked information economy.
This is not to say that the Internet necessarily affects all people, all social groups, and networks identically. The effects on different people in different settings and networks will likely vary, certainly in their magnitude. My purpose here, however, is ,{[pg 358]}, to respond to the concern that enhanced individual capabilities entail social fragmentation and alienation. The available data do not support that claim as a description of a broad social effect.
+
+2~ FROM "VIRTUAL COMMUNITIES" TO FEAR OF DISINTEGRATION
+
+Angst about the fragmentation of organic deep social ties, the gemeinschaft community, the family, is hardly a creature of the Internet. In some form or another, the fear that cities, industrialization, rapid transportation, mass communications, and other accoutrements of modern industrial society are leading to alienation, breakdown of the family, and the disruption of community has been a fixed element of sociology since at least the mid-nineteenth century. Its mirror image--the search for real or imagined, more or less idealized community, "grounded" in preindustrial pastoral memory or postindustrial utopia--was often not far behind. Unsurprisingly, this patterned opposition of fear and yearning was replayed in the context of the Internet, as the transformative effect of this new medium made it a new focal point for both strands of thought.
+
+In the case of the Internet, the optimists preceded the pessimists. In his now-classic The Virtual Community, Howard Rheingold put it most succinctly in 1993:
+
+_1 My direct observations of online behavior around the world over the past ten years have led me to conclude that whenever CMC [computer mediated communications] technology becomes available to people anywhere, they inevitably build virtual communities with it, just as microorganisms inevitably create colonies. I suspect that one of the explanations for this phenomenon is the hunger for community that grows in the breasts of people around the world as more and more informal public spaces disappear from our real lives. I also suspect that these new media attract colonies of enthusiasts because CMC enables people to do things with each other in new ways, and to do altogether new kinds of things--just as telegraphs, telephones, and televisions did.
+
+/{The Virtual Community}/ was grounded on Rheingold's own experience in the WELL (Whole Earth `Lectronic Link). The WELL was one of the earliest well-developed instances of large-scale social interaction among people who started out as strangers but came to see themselves as a community. Its members eventually began to organize meetings in real space to strengthen ,{[pg 359]}, the bonds, while mostly continuing their interaction through computer-mediated communications. Note the structure of Rheingold's claim in this early passage. There is a hunger for community, no longer satisfied by the declining availability of physical spaces for human connection. There is a newly available medium that allows people to connect despite their physical distance. This new opportunity inevitably and automatically brings people to use its affordances--the behaviors it makes possible--to fulfill their need for human connection. Over and above this, the new medium offers new ways of communicating and new ways of doing things together, thereby enhancing what was previously possible. Others followed Rheingold over the course of the 1990s in many and various ways. The basic structure of the claim about the potential of cyberspace to forge a new domain for human connection, one that overcomes the limitations that industrial mass-mediated society places on community, was oft repeated. The basic observation that the Internet permits the emergence of new relationships that play a significant role in their participants' lives and are anchored in online communications continues to be made. As discussed below, however, much of the research suggests that the new online relationships develop in addition to, rather than instead of, physical face-to-face human interaction in community and family--which turns out to be alive and well.
+
+It was not long before a very different set of claims emerged about the Internet. Rather than a solution to the problems that industrial society creates for family and society, the Internet was seen as increasing alienation by absorbing its users. It made them unavailable to spend time with their families. It immersed them in diversions from the real world with its real relationships. In a social-relations version of the Babel objection, it was seen as narrowing the set of shared cultural experiences to such an extent that people, for lack of a common sitcom or news show to talk about, become increasingly alienated from each other. One strand of this type of criticism questioned the value of online relationships themselves as plausible replacements for real-world human connection. Sherry Turkle, the most important early explorer of virtual identity, characterized this concern as: "is it really sensible to suggest that the way to revitalize community is to sit alone in our rooms, typing at our networked computers and filling our lives with virtual friends?"~{ Sherry Turkle, "Virtuality and Its Discontents, Searching for Community in Cyberspace," The American Prospect 7, no. 24 (1996); Sherry Turkle, Life on the Screen: Identity in the Age of the Internet (New York: Simon & Schuster, 1995). }~ Instead of investing themselves with real relationships, risking real exposure and connection, people engage in limited-purpose, low-intensity relationships. If it doesn't work out, they can always sign off, and no harm done. ,{[pg 360]},
+
+Another strand of criticism focused less on the thinness, not to say vacuity, of online relations, and more on sheer time. According to this argument, the time and effort spent on the Net came at the expense of time spent with family and friends. Prominent and oft cited in this vein were two early studies. The first, entitled Internet Paradox, was led by Robert Kraut.~{ Robert Kraut et al., "Internet Paradox, A Social Technology that Reduces Social Involvement and Psychological Well-Being," American Psychologist 53 (1998): 1017-1031. }~ It was the first longitudinal study of a substantial number of users--169 users in the first year or two of their Internet use. Kraut and his collaborators found a slight, but statistically significant, correlation between increases in Internet use and (a) decreases in family communication, (b) decreases in the size of social circle, both near and far, and (c) an increase in depression and loneliness. The researchers hypothesized that use of the Internet replaces strong ties with weak ties. They ideal-typed these communications as exchanging knitting tips with participants in a knitting Listserv, or jokes with someone you would meet on a tourist information site. These trivialities, they thought, came to fill time that, in the absence of the Internet, would be spent with people with whom one has stronger ties. From a communications theory perspective, this causal explanation was more sophisticated than the more widely claimed assimilation of the Internet and television--that a computer monitor is simply one more screen to take away from the time one has to talk to real human beings.~{ A fairly typical statement of this view, quoted in a study commissioned by the Kellogg Foundation, was: "TV or other media, such as computers, are no longer a kind of `electronic hearth,' where a family will gather around and make decisions or have discussions. 
My position, based on our most recent studies, is that most media in the home are working against bringing families together." Christopher Lee et al., "Evaluating Information and Communications Technology: Perspective for a Balanced Approach," Report to the Kellogg Foundation (December 17, 2001), http://www.si.umich.edu/pne/kellogg/013.html. }~ It recognized that using the Internet is fundamentally different from watching TV. It allows users to communicate with each other, rather than, like television, encouraging passive reception in a kind of "parallel play." Using a distinction between strong ties and weak ties, introduced by Mark Granovetter in what later became the social capital literature, these researchers suggested that the kind of human contact that was built around online interactions was thinner and less meaningful, so that the time spent on these relationships, on balance, weakened one's stock of social relations.
+
+A second, more sensationalist release of a study followed two years later. In 2000, the Stanford Institute for the Quantitative Study of Society's "preliminary report" on Internet and society, more of a press release than a report, emphasized the finding that "the more hours people use the Internet, the less time they spend with real human beings."~{ Norman H. Nie and Lutz Erbring, "Internet and Society, A Preliminary Report," Stanford Institute for the Quantitative Study of Society, February 17, 2000, 15 (Press Release), http://www.pkp.ubc.ca/bctf/Stanford_Report.pdf. }~ The actual results were somewhat less stark than the widely reported press release. As among all Internet users, only slightly more than 8 percent reported spending less time with family; 6 percent reported spending more time with family, and 86 percent spent about the same amount of time. Similarly, 9 percent reported spending less time with friends, 4 percent spent more time, and 87 percent spent the ,{[pg 361]}, same amount of time.~{ Ibid., 42-43, tables CH-WFAM, CH-WFRN. }~ The press release probably should not have read, "social isolation increases," but instead, "Internet seems to have indeterminate, but in any event small, effects on our interaction with family and friends"--hardly the stuff of front-page news coverage.~{ See John Markoff, "A Newer, Lonelier Crowd Emerges in Internet Study," New York Times, February 16, 2000, section A, page 1, column 1. }~ The strongest result supporting the "isolation" thesis in that study was that 27 percent of respondents who were heavy Internet users reported spending less time on the phone with friends and family. The study did not ask whether they used e-mail instead of the phone to keep in touch with these family and friends, and whether they thought they had more or less of a connection with these friends and family as a result. 
Instead, as the author reported in his press release, "E-mail is a way to stay in touch, but you can't share coffee or beer with somebody on e-mail, or give them a hug" (as opposed, one supposes, to the common practice of phone hugs).~{ Nie and Erbring, "Internet and Society," 19. }~ As Amitai Etzioni noted in his biting critique of that study, the truly significant findings were that Internet users spent less time watching television and shopping. Forty-seven percent of those surveyed said that they watched less television than they used to, and that number reached 65 percent for heavy users and 27 percent for light users. Only 3 percent of those surveyed said they watched more TV. Nineteen percent of all respondents and 25 percent of those who used the Internet more than five hours a week said they shopped less in stores, while only 3 percent said they shopped more in stores. The study did not explore how people were using the time they freed by watching less television and shopping less in physical stores. It did not ask whether they used any of this newfound time to increase and strengthen their social and kin ties.~{ Amitai Etzioni, "Debating the Societal Effects of the Internet: Connecting with the World," Public Perspective 11 (May/June 2000): 42, also available at http://www.gwu.edu/ccps/etzioni/A273.html. }~
+
+2~ A MORE POSITIVE PICTURE EMERGES OVER TIME
+
+The concerns represented by these early studies of the effects of Internet use on community and family seem to fall into two basic bins. The first is that sustained, more or less intimate human relations are critical to well-functioning human beings as a matter of psychological need. The claims that Internet use is associated with greater loneliness and depression map well onto the fears that human connection, ground into a thin gruel of electronic bits, simply will not give people the kind of human connectedness they need as social beings. The second bin of concerns falls largely within the "social capital" literature, and, like that literature itself, can be divided largely into two main subcategories. The first, following James Coleman and Mark Granovetter, focuses on the ,{[pg 362]}, economic function of social ties and the ways in which people who have social capital can be materially better off than people who lack it. The second, exemplified by Robert Putnam's work, focuses on the political aspects of engaged societies, and on the ways in which communities with high social capital--defined as social relations with people in local, stable, face-to-face interactions--will lead to better results in terms of political participation and the provisioning of local public goods, like education and community policing. For this literature, the shape of social ties, their relative strength, and who is connected to whom become more prominent features.
+
+There are, roughly speaking, two types of responses to these concerns. The first is empirical. In order for these concerns to be valid as applied to increasing use of Internet communications, it must be the case that Internet communications, with all of their inadequacies, come to supplant real-world human interactions, rather than simply to supplement them. Unless Internet connections actually displace direct, unmediated, human contact, there is no basis to think that using the Internet will lead to a decline in those nourishing connections we need psychologically, or in the useful connections we make socially, that are based on direct human contact with friends, family, and neighbors. The second response is theoretical. It challenges the notion that the socially embedded individual is a fixed entity with unchanging needs that are, or are not, fulfilled by changing social conditions and relations. Instead, it suggests that the "nature" of individuals changes over time, based on actual social practices and expectations. In this case, we are seeing a shift from individuals who depend on social relations that are dominated by locally embedded, thick, unmediated, given, and stable relations, into networked individuals--who are more dependent on their own combination of strong and weak ties, who switch networks, cross boundaries, and weave their own web of more or less instrumental, relatively fluid relationships. Manuel Castells calls this the "networked society,"~{ Manuel Castells, The Rise of the Network Society, 2d ed. (Malden, MA: Blackwell Publishers, Inc., 2000). }~ Barry Wellman, "networked individualism."~{ Barry Wellman et al., "The Social Affordances of the Internet for Networked Individualism," Journal of Computer Mediated Communication 8, no. 3 (April 2003). }~ To simplify vastly, it is not that people cease to depend on others and their context for both psychological and social well-being and efficacy. 
It is that the kinds of connections that we come to rely on for these basic human needs change over time. Comparisons of current practices to the old ways of achieving the desiderata of community, and fears regarding the loss of community, are more a form of nostalgia than a diagnosis of present social malaise. ,{[pg 363]},
+
+3~ Users Increase Their Connections with Preexisting Relations
+
+The most basic response to the concerns over the decline of community and its implications for both the psychological and the social capital strands is the empirical one. Relations with one's local geographic community and with one's intimate friends and family do not seem to be substantially affected by Internet use. To the extent that these relationships are affected, the effect is positive. Kraut and his collaborators continued their study, for example, and followed up with their study subjects for an additional three years. They found that the negative effects they had reported in the first year or two dissipated over the total period of observation.~{ Robert Kraut et al., "Internet Paradox Revisited," Journal of Social Issues 58, no. 1 (2002): 49. }~ Their basic hypothesis that the Internet probably strengthened weak ties, however, is consistent with other research and theoretical work. One of the earliest systematic studies of high-speed Internet access and its effects on communities in this vein was by Keith Hampton and Barry Wellman.~{ Keith Hampton and Barry Wellman, "Neighboring in Netville: How the Internet Supports Community and Social Capital in a Wired Suburb," City & Community 2, no. 4 (December 2003): 277. }~ They studied the aptly named Toronto suburb Netville, where homes had high-speed wiring years before broadband access began to be adopted widely in North America. One of their most powerful findings was that people who were connected recognized three times as many of their neighbors by name and regularly talked with twice as many as those who were not wired. On the other hand, however, stronger ties--indicated by actually visiting neighbors, as opposed to just knowing their name or stopping to say good morning--were associated with how long a person had lived in the neighborhood, not with whether or not they were wired. 
In other words, weak ties of the sort of knowing another's name or stopping to chat with them were significantly strengthened by Internet connection, even within a geographic neighborhood. Stronger ties were not. Using applications like a local e-mail list and personal e-mails, wired residents communicated with others in their neighborhood much more often than did nonwired residents. Moreover, wired residents recognized the names of people in a wider radius from their homes, while nonwired residents tended to know only people within their block, or even a few homes on each side. However, again, stronger social ties, like visiting and talking face-to-face, tended to be concentrated among physically proximate neighbors. Other studies also observed this increase of weak ties in a neighborhood with individuals who are more geographically distant than one's own immediate street or block.~{ Gustavo S. Mesch and Yael Levanon, "Community Networking and Locally-Based Social Ties in Two Suburban Localities," City & Community 2, no. 4 (December 2003): 335. }~ Perhaps the most visible aspect of the social capital implications of a well-wired geographic community was the finding that ,{[pg 364]},
+
+wired neighbors began to sit on their front porches, instead of in their backyard, thereby providing live social reinforcement of community through daily brief greetings, as well as creating a socially enforced community policing mechanism.
+
+We now have quite a bit of social science research on the side of a number of factual propositions.~{ Useful surveys include: Paul DiMaggio et al., "Social Implications of the Internet," Annual Review of Sociology 27 (2001): 307-336; Robyn B. Driskell and Larry Lyon, "Are Virtual Communities True Communities? Examining the Environments and Elements of Community," City & Community 1, no. 4 (December 2002): 349; James E. Katz and Ronald E. Rice, Social Consequences of Internet Use: Access, Involvement, Interaction (Cambridge, MA: MIT Press, 2002). }~ Human beings, whether connected to the Internet or not, continue to communicate preferentially with people who are geographically proximate rather than with those who are distant.~{ Barry Wellman, "Computer Networks as Social Networks," Science 293, issue 5537 (September 2001): 2031. }~ Nevertheless, people who are connected to the Internet communicate more with people who are geographically distant without decreasing the number of local connections. While the total number of connections continues to be greatest with proximate family members, friends, coworkers, and neighbors, the Internet's greatest effect is in improving the ability of individuals to add to these proximate relationships new and better-connected relationships with people who are geographically distant. This includes keeping more in touch with friends and relatives who live far away, and creating new weak-tie relationships around communities of interest and practice. To the extent that survey data are reliable, the most comprehensive and updated surveys support these observations. It now seems clear that Internet users "buy" their time to use the Internet by watching less television, and that the more Internet experience they have, the less they watch TV. People who use the Internet claim to have increased the number of people they stay in touch with, while mostly reporting no effect on time they spend with their family.~{ Jeffrey I. 
Cole et al., "The UCLA Internet Report: Surveying the Digital Future, Year Three" (UCLA Center for Communication Policy, January 2003), 33, 55, 62, http://www.ccp.ucla.edu/pdf/UCLA-Internet-Report-Year-Three.pdf. }~
+
+Connections with family and friends seemed to be thickened by the new channels of communication, rather than supplanted by them. Emblematic of this were recent results of a survey conducted by the Pew project on "Internet and American Life" on Holidays Online. Almost half of respondents surveyed reported using e-mail to organize holiday activities with family (48 percent) and friends (46 percent), 27 percent reported sending or receiving holiday greetings, and while a third described themselves as shopping online in order to save money, 51 percent said they went online to find an unusual or hard-to-find gift. In other words, half of those who used the Internet for holiday shopping did so in order to personalize their gift further, rather than simply to take advantage of the most obvious use of e-commerce--price comparison and time savings. Further support for this position is offered in another Pew study, entitled "Internet and Daily Life." In that survey, the two most common uses--both of which respondents claimed they did more of because of the Net than they otherwise would have--were connecting ,{[pg 365]}, with family and friends and looking up information.~{ Pew Internet and Daily Life Project (August 11, 2004), report available at http://www.pewinternet.org/PPF/r/131/report_display.asp. }~ Further evidence that the Internet is used to strengthen and service preexisting relations, rather than create new ones, is the fact that 79 percent of those who use the Internet at all do so to communicate with friends and family, while only 26 percent use the Internet to meet new people or to arrange dates. Another point of evidence is the use of instant messaging (IM). IM is a synchronous communications medium that requires its users to set time aside to respond and provides information to those who wish to communicate with an individual about whether that person is or is not available at any given moment. 
Because it is so demanding, IM is preferentially useful for communicating with individuals with whom one already has a preexisting relationship. This preferential use for strengthening preexisting relations is also indicated by the fact that two-thirds of IM users report using IM with no more than five others, while only one in ten users reports instant messaging with more than ten people. A recent Pew study of instant messaging shows that 53 million adults--42 percent of Internet users in the United States--trade IM messages. Forty percent use IM to contact coworkers, one-third family, and 21 percent use it to communicate equally with both. Men and women IM in equal proportions, but women IM more than men do, averaging 433 minutes per month as compared to 366 minutes, respectively, and households with children IM more than households without children.
+
+These studies are surveys and local case studies. They cannot offer a knockdown argument about how "we"--everyone, everywhere--are using the Internet. The same technology likely has different effects when it is introduced into cultures that differ from each other in their pre-Internet baseline.~{ See Barry Wellman, "The Social Affordances of the Internet for Networked Individualism," Journal of Computer Mediated Communication 8, no. 3 (April 2003); Gustavo S. Mesch and Yael Levanon, "Community Networking and Locally-Based Social Ties in Two Suburban Localities," City & Community 2, no. 4 (December 2003): 335. }~ Despite these cautions, these studies do offer the best evidence we have about Internet use patterns. As best we can tell from contemporary social science, Internet use increases the contact that people have with others who traditionally have been seen as forming a person's "community": family, friends, and neighbors. Moreover, the Internet is also used as a platform for forging new relationships, in addition to those that are preexisting. These relationships are more limited in nature than ties to friends and family. They are detached from spatial constraints, and even time synchronicity; they are usually interest or practice based, and therefore play a more limited role in people's lives than the more demanding and encompassing relationships with family or intimate friends. Each discrete connection or cluster of connections that forms a social network, or a network of social relations, plays some role, but not a definitive one, in each participant's life. There is little disagreement ,{[pg 366]}, among researchers that these kinds of weak ties or limited-liability social relationships are easier to create on the Internet, and that we see some increase in their prevalence among Internet users. 
The primary disagreement is interpretive--in other words, is it, on balance, a good thing that we have multiple, overlapping, limited emotional liability relationships, or does it, in fact, undermine our socially embedded being?
+
+2~ Networked Individuals
+
+The interpretive argument about the normative value of the increase in weak ties is colored by the empirical finding that the time spent on the Internet in these limited relationships does not come at the expense of the number of communications with preexisting, real-world relationships. Given our current state of sociological knowledge, the normative question cannot be whether online relations are a reasonable replacement for real-world friendship. Instead, it must be how we understand the effect of the interaction between an increasingly thickened network of communications with preexisting relations and the casting of a broader net that captures many more, and more varied, relations. What is emerging in the work of sociologists is a framework that sees the networked society or the networked individual as entailing an abundance of social connections and more effectively deployed attention. The concern with the decline of community conceives of a scarcity of forms of stable, nurturing, embedding relations, which are mostly fixed over the life of an individual and depend on long-standing and interdependent relations in stable groups, often with hierarchical relations. What we now see emerging is a diversity of forms of attachment and an abundance of connections that enable individuals to attain discrete components of the package of desiderata that "community" has come to stand for in sociology. As Wellman puts it: "Communities and societies have been changing towards networked societies where boundaries are more permeable, interactions are with diverse others, linkages switch between multiple networks, and hierarchies are flatter and more recursive. . . . Their work and community networks are diffuse, sparsely knit, with vague, overlapping, social and spatial boundaries."~{ Barry Wellman, "The Social Affordances of the Internet." 
}~ In this context, the range and diversity of network connections beyond the traditional family, friends, stable coworkers, or village becomes a source of dynamic stability, rather than tension and disconnect.
+
+The emergence of networked individuals is not, however, a mere overlay, "floating" on top of thickened preexisting social relations without touching them except to add more relations. The interpolation of new networked ,{[pg 367]}, connections, and the individual's role in weaving those for him- or herself, allows individuals to reorganize their social relations in ways that fit them better. They can use their network connections to loosen social bonds that are too hierarchical and stifling, while filling in the gaps where their real-world relations seem lacking. Nowhere is this interpolation clearer than in Mizuko Ito's work on the use of mobile phones, primarily for text messaging and e-mail, among Japanese teenagers.~{ A review of Ito's own work and that of other scholars of Japanese techno-youth culture is Mizuko Ito, "Mobile Phones, Japanese Youth, and the Re-Placement of Social Contact," forthcoming in Mobile Communications: Re-negotiation of the Social Sphere, ed. Rich Ling and P. Pedersen (New York: Springer, 2005). }~ Japanese urban teenagers generally live in tighter physical quarters than their American or European counterparts, and within quite strict social structures of hierarchy and respect. Ito and others have documented how these teenagers use mobile phones--primarily as platforms for text messages, that is, as a mobile cross between e-mail and instant messaging, and more recently images--to loosen the constraints under which they live. They text at home and in the classroom, making connections to meet in the city and be together, and otherwise succeed in constructing a network of time- and space-bending emotional connections with their friends, without--and this is the critical observation--breaking the social molds they otherwise occupy. They continue to spend time in their home, with their family. They continue to show respect and play the role of child at home and at school. 
However, they interpolate that role and those relations with a sub-rosa network of connections that fulfill otherwise suppressed emotional needs and ties.
+
+The phenomenon is not limited to youths, but is applicable more generally to the capacity of users to rely on their networked connections to escape or moderate some of the more constraining effects of their stable social connections. In the United States, a now iconic case--mostly described in terms of privacy--was that of U.S. Navy sailor Timothy McVeigh (not the Oklahoma bomber). McVeigh was discharged from the navy when his superiors found out that he was gay by accessing his AOL (America Online) account. The case was primarily considered in terms of McVeigh's e-mail account privacy. It settled for an undisclosed sum, and McVeigh retired from the navy with benefits. However, what is important for us here is not the "individual rights" category under which the case was fought, but the practice that it revealed. Here was an eighteen-year veteran of the navy who used the space-time breaking possibilities of networked communications to loosen one of the most constraining attributes imaginable of the hierarchical framework that he nonetheless chose to be part of--the U.S. Navy. It would be odd to think that the navy did not provide McVeigh with a sense of identity and camaraderie that closely knit communities provide their ,{[pg 368]}, members. Yet at the same time, it also stifled his ability to live one of the most basic of all human ties--his sexual identity. He used the network and its potential for anonymous and pseudonymous existence to coexist between these two social structures.
+
+At the other end of the spectrum of social ties, we see new platforms emerging to generate the kinds of bridging relations that were so central to the identification of "weak ties" in social capital literature. Weak ties are described in the social capital literature as allowing people to transmit information across social networks about available opportunities and resources, as well as provide at least a limited form of vouching for others--as one introduces a friend to a friend of a friend. What we are seeing on the Net is an increase in the platforms developed to allow people to create these kinds of weak ties based on an interest or practice. Perhaps clearest of these is Meetup.com. Meetup is a Web site that allows users to search for others who share an interest and who are locally available to meet face-to-face. The search results show users what meetings are occurring within their requested area and interest. The groups then meet periodically, and those who sign up for them also are able to provide a profile and photo of themselves, to facilitate and sustain the real-world group meetings. The power of this platform is that it is not intended as a replacement for real-space meetings. It is intended as a replacement for the happenstance of social networks as they transmit information about opportunities for interest- and practice-based social relations. The vouching function, on the other hand, seems to have more mixed efficacy, as Danah Boyd's ethnography of Friendster suggests.~{ Danah M. Boyd, "Friendster and Publicly Articulated Social Networking," Conference on Human Factors and Computing Systems (CHI 2004) (Vienna: ACM, April 24-29, 2004). }~ Friendster was started as a dating Web site. 
It was built on the assumption that dating a friend of a friend of a friend is safer and more likely to be successful than dating someone based on a similar profile, located on a general dating site like match.com--in other words, that vouching as friends provides valuable information. As Boyd shows, however, the attempt of Friendster to articulate and render transparent the social networks of its users met with less than perfect success. The platform only permits users to designate friend/not friend, without the finer granularity enabled by a face-to-face conversation about someone, where one can answer or anticipate the question, "just how well do you know this person?" with a variety of means, from tone to express reservations. On Friendster, it seems that people cast broader networks, and for fear of offending or alienating others, include many more "friendsters" than they actually have "friends." The result is a weak platform for mapping general connections, rather than a genuine articulation ,{[pg 369]}, of vouching through social networks. Nonetheless, it does provide a visible rendering of at least the thinnest of weak ties, and strengthens their effect in this regard. It enables very weak ties to perform some of the roles of real-world weak social ties.
+
+2~ THE INTERNET AS A PLATFORM FOR HUMAN CONNECTION
+
+Communication is constitutive of social relations. We cannot have relationships except by communicating with others. Different communications media differ from each other--in who gets to speak to whom and in what can be said. These differences structure the social relations that rely on these various modes of communication so that they differ from each other in significant ways. Technological determinism is not required to accept this. Some aspects of the difference are purely technical. Script allows text and more or less crude images to be transmitted at a distance, but not voice, touch, smell, or taste. To the extent that there are human emotions, modes of submission and exertion of authority, irony, love or affection, or information that is easily encoded and conveyed in face-to-face communications but not in script, script-based communications are a poor substitute for presence. A long and romantic tradition of love letters and poems notwithstanding, there is a certain thinness to that mode in the hands of all but the most gifted writers relative to the fleshiness of unmediated love. Some aspects of the difference among media of communication are not necessarily technical, but are rather culturally or organizationally embedded. Television can transmit text. However, text distribution is not television's relative advantage in a sociocultural environment that already has mass-circulation print media, and in a technical context where the resolution of television images is relatively low. As a matter of cultural and business practice, therefore, from its inception, television emphasized moving images and sound, not text transmission. Radio could have been deployed as short-range, point-to-point personal communications systems, giving us a nation of walkie-talkies. However, as chapter 6 described, doing so would have required a very different set of regulatory and business decisions between 1919 and 1927. 
Communications media take on certain social roles, structures of control, and emphases of style that combine their technical capacities and limits with the sociocultural business context into which they were introduced, and through which they developed. The result is a cluster of use characteristics that define how a ,{[pg 370]}, given medium is used within a given society, in a given historical context. They make media differ from each other, providing platforms with very different capacities and emphases for their users.
+
+As a technical and organizational matter, the Internet allows for a radically more diverse suite of communications models than any of the twentieth-century systems permitted. It allows for textual, aural, and visual communications. It permits spatial and temporal asynchronicity, as in the case of e-mail or Web pages, but also enables temporal synchronicity--as in the case of IM, online game environments, or Voice over Internet Protocol (VoIP). It can even be used for subchannel communications within a spatially synchronous context, such as in a meeting where people pass electronic notes to each other by e-mail or IM. Because it is still highly textual, it requires more direct attention than radio, but like print, it is highly multiplexable--both between uses of the Internet and other media, and among Internet uses themselves. Similar to print media, you can pick your head up from the paper, make a comment, and get back to reading. Much more richly, one can be on a voice over IP conversation and e-mail at the same time, or read news interlaced with receiving and responding to e-mail. It offers one-to-one, one-to-few, few-to-few, one-to-many, and many-to-many communications capabilities, more diverse in this regard than any medium for social communication that preceded it, including--on the dimensions of distance, asynchronicity, and many-to-many capabilities--even that richest of media: face-to-face communications.
+
+Because of its technical flexibility and the "business model" of Internet service providers as primarily carriers, the Internet lends itself to being used for a wide range of social relations. Nothing in "the nature of the technology" requires that it be the basis of rich social relations, rather than becoming, as some predicted in the early 1990s, a "celestial jukebox" for the mass distribution of prepackaged content to passive end points. In contradistinction to the dominant remote communications technologies of the twentieth century, however, the Internet offers some new easy ways to communicate that foster both of the types of social communication that the social science literature seems to be observing. Namely, it makes it easy to increase the number of communications with preexisting friends and family, and increases communication with geographically distant or more loosely affiliated others. Print, radio, television, film, and sound recording all operated largely on a one-to-many model. They did not, given the economics of production and transmission, provide a usable means of remote communication for individuals ,{[pg 371]}, at the edges of these communication media. Television, film, sound recording, and print industries were simply too expensive, and their business organization was too focused on selling broadcast-model communications, to support significant individual communication. When cassette tapes were introduced, we might have seen people recording a tape instead of writing a letter to friends or family. However, this was relatively cumbersome, low quality, and time consuming. Telephones were the primary means of communications used by individuals, and they indeed became the primary form of mediated personal social communications. However, telephone conversations require synchronicity, which means that they can only be used for socializing purposes when both parties have time. 
They were also only usable throughout this period for serial, one-to-one conversations. Moreover, for most of the twentieth century, a long-distance call was a very expensive proposition for most nonbusiness users, and outside of the United States, local calls too carried nontrivial time-sensitive prices in most places. Telephones were therefore a reasonable medium for social relations with preexisting friends and family. However, their utility dropped off radically with the cost of communication, which was at a minimum associated with geographic distance. In all these dimensions, the Internet makes it easier and cheaper to communicate with family and friends, at close proximity or over great distances, through the barriers of busy schedules and differing time zones. Moreover, because of the relatively low-impact nature of these communications, the Internet allows people to experiment with looser relations more readily. In other words, the Internet does not make us more social beings. It simply offers more degrees of freedom for each of us to design our own communications space than were available in the past. It could have been that we would have used that design flexibility to re-create the mass-media model. But to predict that it would be used in this fashion requires a cramped view of human desire and connectedness. It was much more likely that, given the freedom to design our own communications environment flexibly and to tailor it to our own individual needs dynamically over time, we would create a system that lets us strengthen the ties that are most important to us. It was perhaps less predictable, but unsurprising after the fact, that this freedom would also be used to explore a wider range of relations than simply consuming finished media goods.
+
+There is an appropriate wariness in contemporary academic commentary about falling into the trap of "the mythos of the electrical sublime" by adopting a form of Internet utopianism.~{ James W. Carey, Communication as Culture: Essays on Media and Society (Boston: Unwin Hyman, 1989). }~ It is important, however, not to ,{[pg 372]}, let this caution blind us to the facts about Internet use, and the technical, business, and cultural capabilities that the Internet makes feasible. The cluster of technologies of computation and communications that characterize the Internet today are, in fact, used in functionally different ways, and make for several different media of communication than we had in the twentieth century. The single technical platform might best be understood to enable several different "media"--in the sense of clusters of technical-social-economic practices of communication--and the number of these enabled media is growing. Instant messaging came many years after e-mail, and a few years after Web pages. Blogging one's daily journal on LiveJournal so that a group of intimates can check in on one's life as it unfolds was not a medium that was available to users until even more recently. The Internet is still providing its users with new ways to communicate with each other, and these represent a genuinely wide range of new capabilities. It is therefore unsurprising that connected social beings, such as we are, will take advantage of these new capabilities to form connections that were practically infeasible in the past. This is not media determinism. This is not millenarian utopianism. It is a simple observation. People do what they can, not what they cannot. In the daily humdrum of their lives, individuals do more of what is easier to do than what requires great exertion. When a new medium makes it easy for people to do new things, they may well, in fact, do them.
And when these new things are systematically more user-centric, dialogic, flexible in terms of the temporal and spatial synchronicity they require or enable, and multiplexable, people will communicate with each other in ways and amounts that they could not before.
+
+2~ THE EMERGENCE OF SOCIAL SOFTWARE
+
+The design of the Internet itself is agnostic as among the social structures and relations it enables. At its technical core is a commitment to push all the detailed instantiations of human communications to the edges of the network--to the applications that run on the computers of users. This technical agnosticism leads to a social agnosticism. The possibility of large-scale sharing and cooperation practices, of medium-scale platforms for collaboration and discussion, and of small-scale, one-to-one communications has led to the development of a wide range of software designs and applications to facilitate different types of communications. The World Wide Web was used initially as a global broadcast medium available to anyone and everyone, ,{[pg 373]}, everywhere. In e-mail, we see a medium available for one-to-one, few-to-few, one-to-many and, to a lesser extent, many-to-many use. One of the more interesting phenomena of the past few years is the emergence of what is beginning to be called "social software." As a new design space, it is concerned with groups that are, as defined by Clay Shirky, who first articulated the concept, "Larger than a dozen, smaller than a few hundred, where people can actually have these conversational forms that can't be supported when you're talking about tens of thousands or millions of users, at least in a single group." The definition of the term is somewhat amorphous, but the basic concept is software whose design characteristic is that it treats genuine social phenomena as different from one-to-one or one-to-many communications. It seeks to build one's expectations about the social interactions that the software will facilitate into the design of the platform.
The design imperative was most clearly articulated by Shirky when he wrote that from the perspective of the software designer, the user of social software is the group, not the individual.~{ Clay Shirky, "A Group Is Its Own Worst Enemy," published first in Networks, Economics and Culture mailing list July 1, 2003. }~
+
+A simple example will help to illustrate. Take any given site that uses a collaborative authorship tool, like the Wiki that is the basis of /{Wikipedia}/ and many other cooperative authorship exercises. From the perspective of an individual user, the ease of posting a comment on the Wiki, and the ease of erasing one's own comments from it, would be important characteristics: The fewer registration and sign-in procedures, the better. Not so from the perspective of the group. The group requires some "stickiness" to make the group as a group, and the project as a project, avoid the rending forces of individualism and self-reference. So, for example, design components that require registration for posting, or give users different rights to post and erase comments over time, depending on whether they are logged in or not, or depending on a record of their past cooperative or uncooperative behavior, are a burden for the individual user. However, that is precisely their point. They are intended to give those users with a greater stake in the common enterprise a slight, or sometimes large, edge in maintaining the group's cohesion. Similarly, erasing past comments may be useful for the individual, for example, if they were silly or untempered. Keeping the comments there is, however, useful to the group--as a source of experience about the individual or part of the group's collective memory about mistakes made in the past that should not be repeated by someone else. Again, the needs of the group as a group often differ from those of the individual participant. Thinking of the platform as social software entails designing it with characteristics ,{[pg 374]}, that have a certain social-science or psychological model of the interactions of a group, and building the platform's affordances in order to enhance the survivability and efficacy of the group, even if it sometimes comes at the expense of the individual user's ease of use or comfort.
+
+This emergence of social software--like blogs with opportunities to comment, Wikis, as well as social-norm-mediated Listservs or uses of the "cc" line in e-mail--underscores the nondeterministic nature of the claim about the relationship between the Internet and social relations. The Internet makes possible all sorts of human communications that were not technically feasible before its widespread adoption. Within this wide range of newly feasible communications patterns, we are beginning to see the emergence of different types of relationships--some positive, some, like spam (unsolicited commercial e-mail), decidedly negative. In seeking to predict and diagnose the relationship between the increasing use of Internet communications and the shape of social relations, we see that the newly emerging constructive social possibilities are leading to new design challenges. These, in turn, are finding engineers and enthusiasts willing and able to design for them. The genuinely new capability--connecting among few and many at a distance in a dialogic, recursive form--is leading to the emergence of new design problems. These problems come from the fact that the new social settings come with their own social dynamics, but without long-standing structures of mediation and constructive ordering. Hence the early infamy of the tendency of Usenet and Listserv discussions to deteriorate into destructive flame wars. As social habits of using these kinds of media mature, so that users already know that letting loose on a list will likely result in a flame war and will kill the conversation, and as designers come to understand these social dynamics--including both those that allow people to form and sustain groups and those that rend them apart with equal if not greater force--we are seeing the coevolution of social norms and platform designs that are intended to give play to the former, and mediate or moderate the latter.
These platforms are less likely to matter for sustaining the group in preexisting relations--as among friends or family. The structuring of those relationships is dominated by social norms. However, they do offer a new form and a stabilizing context for the newly emerging diverse set of social relations--at a distance, across interests and contexts--that typify both peer production and many forms of social interaction aimed purely at social reproduction.
+
+The peer-production processes that are described in primarily economic ,{[pg 375]}, terms in chapter 3--like free software development, /{Wikipedia}/, or the Open Directory Project--represent one cluster of important instances of this new form of social relations. They offer a type of relationship that is nonhierarchical and organized in a radically decentralized pattern. Their social valence is given by some combination of the shared experience of joint creativity they enable, as well as their efficacy--their ability to give their users a sense of common purpose and mutual support in achieving it. Individuals adopt projects and purposes they consider worth pursuing. Through these projects they find others, with whom they initially share only a general sense of human connectedness and common practical interest, but with whom they then interact in ways that allow the relationship to thicken over time. Nowhere is this process clearer than on the community pages of /{Wikipedia}/. Because of the limited degree to which that platform uses technical means to constrain destructive behavior, the common enterprise has developed practices of user-to-user communication, multiuser mediation, and user-appointed mediation to resolve disputes and disagreements. Through their involvement in these, users increase their participation, their familiarity with other participants--at least in this limited role as coauthors--and their practices of mutual engagement with these others. In this way, peer production offers a new platform for human connection, bringing together otherwise unconnected individuals and replacing common background or geographic proximity with a sense of well-defined purpose and the successful common pursuit of this purpose as the condensation point for human connection. Individuals who are connected to each other in a peer-production community may or may not be bowling alone when they are off-line, but they are certainly playing together online.
+
+2~ THE INTERNET AND HUMAN COMMUNITY
+
+This chapter began with a basic question. While the networked information economy may enhance the autonomy of individuals, does it not also facilitate the breakdown of community? The answer offered here has been partly empirical and partly conceptual.
+
+Empirically, it seems that the Internet is allowing us to eat our cake and have it too, apparently keeping our (social) figure by cutting down on the social equivalent of deep-fried dough--television. That is, we communicate more, rather than less, with the core constituents of our organic communities--our family and our friends--and we seem, in some places, also to ,{[pg 376]}, be communicating more with our neighbors. We also communicate more with loosely affiliated others, who are geographically remote, and who may share only relatively small slivers of overlapping interests, or for only short periods of life. The proliferation of potential connections creates the social parallel to the Babel objection in the context of autonomy--with all these possible links, will any of them be meaningful? The answer is largely that we do, in fact, employ very strong filtering on our Internet-based social connections in one obvious dimension: We continue to use the newly feasible lines of communication primarily to thicken and strengthen connections with preexisting relationships--family and friends. The clearest indication of this is the parsimony with which most people use instant messaging. The other mechanism we seem to be using to avoid drowning in the noise of potential chitchat with ever-changing strangers is that we tend to find networks of connections that have some stickiness from our perspective. This stickiness could be the efficacy of a cluster of connections in pursuit of a goal one cares about, as in the case of the newly emerging peer-production enterprises. It could be the ways in which the internal social interaction has combined social norms with platform design to offer relatively stable relations with others who share common interests. Users do not amble around in a social equivalent of Brownian motion. They tend to cluster in new social relations, albeit looser and for more limited purposes than the traditional pillars of community.
+
+The conceptual answer has been that the image of "community" that seeks a facsimile of a distant pastoral village is simply the wrong image of how we interact as social beings. We are a networked society now--networked individuals connected with each other in a mesh of loosely knit, overlapping, flat connections. This does not leave us in a state of anomie. We are well-adjusted, networked individuals; well-adjusted socially in ways that those who seek community would value, but in new and different ways. In a substantial departure from the range of feasible communications channels available in the twentieth century, the Internet has begun to offer us new ways of connecting to each other in groups small and large. As we have come to take advantage of these new capabilities, we see social norms and software coevolving to offer new, more stable, and richer contexts for forging new relationships beyond those that in the past have been the focus of our social lives. These do not displace the older relations. They do not mark a fundamental shift in human nature into selfless, community-conscious characters. We continue to be complex beings, radically individual and self-interested ,{[pg 377]}, at the same time that we are entwined with others who form the context out of which we take meaning, and in which we live our lives. However, we now have new scope for interaction with others. We have new opportunities for building sustained limited-purpose relations, weak and intermediate-strength ties that have significant roles in providing us with context, with a source of defining part of our identity, with potential sources for support, and with human companionship. That does not mean that these new relationships will come to displace the centrality of our more immediate relationships.
They will, however, offer increasingly attractive supplements as we seek new and diverse ways to embed ourselves in relation to others, to gain efficacy in weaker ties, and to interpolate different social networks in combinations that provide us both stability of context and a greater degree of freedom from the hierarchical and constraining aspects of some of our social relations. ,{[pg 378]}, ,{[pg 379]},
+
+:C~ Part Three - Policies of Freedom at a Moment of Transformation
+
+1~p3 Introduction
+
+Part I of this book offers a descriptive, progressive account of emerging patterns of nonmarket individual and cooperative social behavior, and an analysis of why these patterns are internally sustainable and increase information economy productivity. Part II combines descriptive and normative analysis to claim that these emerging practices offer defined improvements in autonomy, democratic discourse, cultural creation, and justice. I have noted periodically, however, that the descriptions of emerging social practices and the analysis of their potential by no means imply that these changes will necessarily become stable or provide the benefits I ascribe them. They are not a deterministic consequence of the adoption of networked computers as core tools of information production and exchange. There is no inevitable historical force that drives the technological-economic moment toward an open, diverse, liberal equilibrium. If the transformation I describe actually generalizes and stabilizes, it could lead to substantial redistribution of power and money. The twentieth-century industrial producers of information, culture, and communications--like Hollywood, the recording industry, ,{[pg 380]}, and some of the telecommunications giants--stand to lose much. The winners would be a combination of the widely diffuse population of individuals around the globe and the firms or other toolmakers and platform providers who supply these newly capable individuals with the context for participating in the networked information economy. None of the industrial giants of yore are taking this threat lying down. Technology will not overcome their resistance through an insurmountable progressive impulse of history. 
The reorganization of production and the advances it can bring in freedom and justice will emerge only as a result of social practices and political actions that successfully resist efforts to regulate the emergence of the networked information economy in order to minimize its impact on the incumbents.
+
+Since the middle of the 1990s, we have seen intensifying battles over the institutional ecology within which the industrial mode of information production and the newly emerging networked modes compete. Partly, this has been a battle over telecommunications infrastructure regulation. Most important, however, this has meant a battle over "intellectual property" protection, very broadly defined. Building upon and extending a twenty-five-year trend of expansion of copyrights, patents, and similar exclusive rights, the last half-decade of the twentieth century saw expansion of institutional mechanisms for exerting exclusive control in multiple dimensions. The term of copyright was lengthened. Patent rights were extended to cover software and business methods. Trademarks were extended by the Antidilution Act of 1995 to cover entirely new values, which became the basis for liability in the early domain-name trademark disputes. Most important, we saw a move to create new legal tools with which information vendors could hermetically seal access to their materials to an extent never before possible. The Digital Millennium Copyright Act (DMCA) prohibited the creation and use of technologies that would allow users to get at materials whose access owners control through encryption. It prohibited even technologies that users can employ to use the materials in ways that the owners have no right to prevent. Today we are seeing efforts to further extend similar technological regulations--down to the level of regulating hardware to make sure that it complies with design specifications created by the copyright industries. At other layers of the communications environment, we see efforts to expand software patents, to control the architecture of personal computing devices, and to create ever-stronger property rights in physical infrastructure--be it the telephone lines, cable plant, or wireless frequencies.
Together, these legislative and judicial ,{[pg 381]}, acts have formed what many have been calling a second enclosure movement: A concerted effort to shape the institutional ecology in order to help proprietary models of information production at the expense of burdening nonmarket, nonproprietary production.~{ For a review of the literature and a substantial contribution to it, see James Boyle, "The Second Enclosure Movement and the Construction of the Public Domain," Law and Contemporary Problems 66 (Winter-Spring 2003): 33-74. }~ The new enclosure movement is not driven purely by avarice and rent seeking--though it has much of that too. Some of its components are based in well-meaning judicial and regulatory choices that represent a particular conception of innovation and its relationship to exclusive rights. That conception, focused on mass-media-type content, movies, and music, and on pharmaceutical-style innovation systems, is highly solicitous of the exclusive rights that are the bread and butter of those culturally salient formats. It is also suspicious of, and detrimental to, the forms of nonmarket, commons-based production emerging in the networked information economy.
+
+This new enclosure movement has been the subject of sustained and diverse academic critique since the mid-1980s.~{ Early versions in the legal literature of the skepticism regarding the growth of exclusive rights were Ralph Brown's work on trademarks, Benjamin Kaplan's caution over the gathering storm that would become the Copyright Act of 1976, and Stephen Breyer's work questioning the economic necessity of copyright in many industries. Until, and including the 1980s, these remained, for the most part, rare voices--joined in the 1980s by David Lange's poetic exhortation for the public domain; Pamela Samuelson's systematic critique of the application of copyright to computer programs, long before anyone was paying attention; Jessica Litman's early work on the political economy of copyright legislation and the systematic refusal to recognize the public domain as such; and William Fisher's theoretical exploration of fair use. The 1990s saw a significant growth of academic questioning of enclosure: Samuelson continued to press the question of copyright in software and digital materials; Litman added a steady stream of prescient observations as to where the digital copyright was going and how it was going wrong; Peter Jaszi attacked the notion of the romantic author; Ray Patterson developed a user-centric view of copyright; Diane Zimmerman revitalized the debate over the conflict between copyright and the first amendment; James Boyle introduced erudite criticism of the theoretical coherence of the relentless drive to propertization; Niva Elkin Koren explored copyright and democracy; Keith Aoki questioned trademark, patents, and global trade systems; Julie Cohen early explored technical protection systems and privacy; and Eben Moglen began mercilessly to apply the insights of free software to hack at the foundations of intellectual property apologia. 
Rebecca Eisenberg, and more recently, Arti Rai, questioned the wisdom of patents on research tools to biomedical innovation. In this decade, William Fisher, Larry Lessig, Litman, and Siva Vaidhyanathan have each described the various forms that the enclosure movement has taken and exposed its many limitations. Lessig and Vaidhyanathan, in particular, have begun to explore the relations between the institutional battles and the freedom in the networked environment. }~ The core of this rich critique has been that the cases and statutes of the past decade or so have upset the traditional balance, in copyrights in particular, between seeking to create incentives through the grant of exclusive rights and assuring access to information through the judicious limitation of these rights and the privileging of various uses. I do not seek to replicate that work here, or to offer a comprehensive listing of all the regulatory moves that have increased the scope of proprietary rights in digital communications networks. Instead, I offer a way of framing these various changes as moves in a large-scale battle over the institutional ecology of the digital environment. By "institutional ecology," I mean to say that institutions matter to behavior, but in ways that are more complex than usually considered in economic models. They interact with the technological state, the cultural conceptions of behaviors, and with incumbent and emerging social practices that may be motivated not only by self-maximizing behavior, but also by a range of other social and psychological motivations. In this complex ecology, institutions--most prominently, law--affect these other parameters, and are, in turn, affected by them. Institutions coevolve with technology and with social and market behavior. This coevolution leads to periods of relative stability, punctuated by periods of disequilibrium, which may be caused by external shocks or internally generated phase shifts. 
During these moments, the various parameters will be out of step, and will pull and tug at the pattern of behavior, at the technology, and at the institutional forms of the behavior. After the tugging and pulling has shaped the various parameters in ways that are more consistent ,{[pg 382]}, with each other, we should expect to see periods of relative stability and coherence.
+
+Chapter 11 is devoted to an overview of the range of discrete policy areas that are shaping the institutional ecology of digital networks, in which proprietary, market-based models of information production compete with those that are individual, social, and peer produced. In almost all contexts, when presented with a policy choice, advanced economies have chosen to regulate information production and exchange in ways that make it easier to pursue a proprietary, exclusion-based model of production of entertainment goods at the expense of commons- and service-based models of information production and exchange. This has been true irrespective of the political party in power in the United States, or the cultural differences in the salience of market orientation between Europe and the United States. However, the technological trajectory, the social practices, and the cultural understanding are often working at cross-purposes with the regulatory impulse. The equilibrium on which these conflicting forces settle will shape, to a large extent, the way in which information, knowledge, and culture are produced and used over the coming few decades. Chapter 12 concludes the book with an overview of what we have seen about the political economy of information and what we might therefore understand to be at stake in the policy choices that liberal democracies and advanced economies will be making in the coming years. ,{[pg 383]},
+
+1~11 Chapter 11 - The Battle Over the Institutional Ecology of the Digital Environment
+
+The decade straddling the turn of the twenty-first century has seen high levels of legislative and policy activity in the domains of information and communications. Between 1995 and 1998, the United States completely overhauled its telecommunications law for the first time in sixty years, departed drastically from decades of practice on wireless regulation, revolutionized the scope and focus of trademark law, lengthened the term of copyright, criminalized individual user infringement, and created new paracopyright powers for rights holders that were so complex that the 1998 Digital Millennium Copyright Act (DMCA) that enacted them was longer than the entire Copyright Act. Europe covered similar ground on telecommunications, and added a new exclusive right in raw facts in databases. Both the United States and the European Union drove for internationalization of the norms they adopted, through the new World Intellectual Property Organization (WIPO) treaties and, more important, through the inclusion of intellectual property concerns in the international trade regime. In the seven years since then, legal battles have raged over the meaning of these changes, as well ,{[pg 384]}, as over efforts to extend them in other directions. From telecommunications law to copyrights, from domain name assignment to trespass to server, we have seen a broad range of distinct regulatory moves surrounding the question of control over the basic resources needed to create, encode, transmit, and receive information, knowledge, and culture in the digital environment. As we telescope up from the details of sundry regulatory skirmishes, we begin to see a broad pattern of conflict over the way that access to these core resources will be controlled.
+
+Much of the formal regulatory drive has been to increase the degree to which private, commercial parties can gain and assert exclusivity in core resources necessary for information production and exchange. At the physical layer, the shift to broadband Internet has been accompanied by less competitive pressure and greater legal freedom for providers to exclude competitors from, and shape the use of, their networks. That freedom from both legal and market constraints on exercising control has been complemented by increasing pressures from copyright industries to require that providers exercise greater control over the information flows in their networks in order to enforce copyrights. At the logical layer, anticircumvention provisions and the efforts to squelch peer-to-peer sharing have created institutional pressures on software and protocols to offer a more controlled and controllable environment. At the content layer, we have seen a steady series of institutional changes aimed at tightening exclusivity.
+
+At each of these layers, however, we have also seen countervailing forces. At the physical layer, the Federal Communications Commission's (FCC's) move to permit the development of wireless devices capable of self-configuring as user-owned networks offers an important avenue for a commons-based last mile. The open standards used for personal computer design have provided an open platform. The concerted resistance against efforts to require computers to be designed so they can more reliably enforce copyrights against their users has, to this point, prevented extension of the DMCA approach to hardware design. At the logical layer, the continued centrality of open standard-setting processes and the emergence of free software as a primary modality of producing mission-critical software provide significant resistance to efforts to enclose the logical layer. At the content layer, where law has been perhaps most systematically one-sided in its efforts to enclose, the cultural movements and the technical affordances that form the foundation of the transformation described throughout this book stand as the most significant barrier to enclosure. ,{[pg 385]},
+
+It is difficult to tell how much is really at stake, from the long-term perspective, in all these legal battles. From one point of view, law would have to achieve a great deal in order to replicate the twentieth-century model of industrial information economy in the new technical-social context. It would have to curtail some of the most fundamental technical characteristics of computer networks and extinguish some of our most fundamental human motivations and practices of sharing and cooperation. It would have to shift the market away from developing ever-cheaper general-purpose computers whose value to users is precisely their on-the-fly configurability over time, toward more controllable and predictable devices. It would have to squelch the emerging technologies in wireless, storage, and computation that are permitting users to share their excess resources ever more efficiently. It would have to dampen the influence of free software, and prevent people, young and old, from doing the age-old human thing: saying to each other, "here, why don't you take this, you'll like it," with things they can trivially part with and share socially. It is far from obvious that law can, in fact, achieve such basic changes. From another viewpoint, there may be no need to completely squelch all these things. Lessig called this the principle of bovinity: a small number of rules, consistently applied, suffice to control a herd of large animals. There is no need to assure that all people in all contexts continue to behave as couch potatoes for the true scope of the networked information economy to be constrained. It is enough that the core enabling technologies and the core cultural practices are confined to small groups--some teenagers, some countercultural activists. There have been places like the East Village or the Left Bank throughout the period of the industrial information economy. 
For the gains in autonomy, democracy, justice, and a critical culture that are described in part II to materialize, the practices of nonmarket information production, individually free creation, and cooperative peer production must become more than fringe practices. They must become a part of life for substantial portions of the networked population. The battle over the institutional ecology of the digitally networked environment is waged precisely over how many individual users will continue to participate in making the networked information environment, and how much of the population of consumers will continue to sit on the couch and passively receive the finished goods of industrial information producers. ,{[pg 386]},
+
+2~ INSTITUTIONAL ECOLOGY AND PATH DEPENDENCE
+
+The century-old pragmatist turn in American legal thought has led to the development of a large and rich literature about the relationship of law to society and economy. It has both Right and Left versions, and has disciplinary roots in history, economics, sociology, psychology, and critical theory. Explanations are many: some simple, some complex; some analytically tractable, many not. I do not make a substantive contribution to that debate here, but rather build on some of its strains to suggest that the process is complex, and particularly, that the relationship of law to social relations is one of punctuated equilibrium--there are periods of stability followed by periods of upheaval, and then adaptation and stabilization anew, until the next cycle. Hopefully, the preceding ten chapters have provided sufficient reason to think that we are going through a moment of social-economic transformation today, rooted in a technological shock to our basic modes of information, knowledge, and cultural production. Most of this chapter offers a sufficient description of the legislative and judicial battles of the past few years to make the case that we are in the midst of a significant perturbation of some sort. I suggest that the heightened activity is, in fact, a battle, in the domain of law and policy, over the shape of the social settlement that will emerge around the digital computation and communications revolution.
+
+The basic claim is made up of fairly simple components. First, law affects human behavior on a micromotivational level and on a macro-social-organizational level. This is in contradistinction to, on the one hand, the classical Marxist claim that law is epiphenomenal, and, on the other hand, the increasingly rare simple economic models that ignore transaction costs and institutional barriers and simply assume that people will act in order to maximize their welfare, irrespective of institutional arrangements. Second, the causal relationship between law and human behavior is complex. Simple deterministic models of the form "if law X, then behavior Y" have been used as assumptions, but these are widely understood as, and criticized for being, oversimplifications for methodological purposes. Laws do affect human behavior by changing the payoffs to regulated actions directly. However, they also shape social norms with regard to behaviors, psychological attitudes toward various behaviors, the cultural understanding of actions, and the politics of claims about behaviors and practices. These effects are not all linearly additive. Some push back and nullify the law, some amplify its ,{[pg 387]}, effects; it is not always predictable which of these any legal change will be. Decreasing the length of a "Walk" signal to assure that pedestrians are not hit by cars may trigger wider adoption of jaywalking as a norm, affecting ultimate behavior in exactly the opposite direction of what was intended. This change may, in turn, affect enforcement regarding jaywalking, or the length of the signals set for cars, because the risks involved in different signal lengths change as actual expected behavior changes, which again may feed back on driving and walking practices. Third, and as part of the complexity of the causal relation, the effects of law differ in different material, social, and cultural contexts. 
The same law introduced in different societies or at different times will have different effects. It may enable and disable a different set of practices, and trigger a different cascade of feedback and countereffects. This is because human beings are diverse in their motivational structure and their cultural frames of meaning for behavior, for law, or for outcomes. Fourth, the process of lawmaking is not exogenous to the effects of law on social relations and human behavior. One can look at positive political theory or at the history of social movements to see that the shape of law itself is contested in society because it makes (through its complex causal mechanisms) some behaviors less attractive, valuable, or permissible, and others more so. The "winners" and the "losers" battle each other to tweak the institutional playing field to fit their needs. As a consequence of these, there is relatively widespread acceptance that there is path dependence in institutions and social organization. That is, the actual organization of human affairs and legal systems is not converging through a process of either Marxist determinism or its neoclassical economics mirror image, "the most efficient institutions win out in the end." Different societies will differ in initial conditions and their historically contingent first moves in response to similar perturbations, and variances will emerge in their actual practices and institutional arrangements that persist over time--irrespective of their relative inefficiency or injustice.
+
+The term "institutional ecology" refers to this context-dependent, causally complex, feedback-ridden, path-dependent process. An example of this interaction in the area of communications practices is the description in chapter 6 of how the introduction of radio was received and embedded in different legal and economic systems early in the twentieth century. A series of organizational and institutional choices converged in all nations on a broadcast model, but the American broadcast model, the BBC model, and the state-run monopoly radio models created very different journalistic styles, ,{[pg 388]}, consumption expectations and styles, and funding mechanisms in these various systems. These differences, rooted in a series of choices made during a short period in the 1920s, persisted for decades in each of the respective systems. Paul Starr has argued in The Creation of the Media that basic institutional choices--from postage pricing to freedom of the press--interacted with cultural practices and political culture to underwrite substantial differences in the print media of the United States, Britain, and much of the European continent in the late eighteenth and throughout much of the nineteenth centuries.~{ Paul Starr, The Creation of the Media: Political Origins of Modern Communications (New York: Basic Books, 2004). }~ Again, the basic institutional and cultural practices were put in place around the time of the American Revolution, and were later overlaid with the introduction of mass-circulation presses and the telegraph in the mid-1800s. Ithiel de Sola Pool's Technologies of Freedom describes the battle between newspapers and telegraph operators in the United States and Britain over control of telegraphed news flows. In Britain, this resulted in the nationalization of telegraph and the continued dominance of London and The Times. 
In the United States, it resolved into the pooling model of the Associated Press, based on private lines for news delivery and sharing--the prototype for newspaper chains and later network-television models of mass media.~{ Ithiel de Sola Pool, Technologies of Freedom (Cambridge, MA: Belknap Press, 1983), 91-100. }~ The possibility of multiple stable equilibria alongside each other evoked by the stories of radio and print media is a common characteristic of both ecological models and analytically tractable models of path dependency. Both methodological approaches depend on feedback effects and therefore suggest that for any given path divergence, there is a point in time where early actions that trigger feedbacks can cause large and sustained differences over time.
+
+Systems that exhibit path dependencies are characterized by periods of relative pliability followed by periods of relative stability. Institutions and social practices coevolve through a series of adaptations--feedback effects from the institutional system to social, cultural, and psychological frameworks; responses into the institutional system; and success and failure of various behavioral patterns and belief systems--until a society reaches a stage of relative stability. It can then be shaken out of that stability by external shocks--like Admiral Perry's arrival in Japan--or internal buildup of pressure to a point of phase transition, as in the case of slavery in the United States. Of course, not all shocks can so neatly be categorized as external or internal--as in the case of the Depression and the New Deal. To say that there are periods of stability is not to say that in such periods, everything is just dandy for everyone. It is only to say that the political, social, economic ,{[pg 389]}, settlement is too comfortable for, accepted by, or acquiesced in by, too many of the agents who in that society have the power to change practices, for institutional change to have substantial effects on the range of lived human practices.
+
+The first two parts of this book explained why the introduction of digital computer-communications networks presents a perturbation of transformative potential for the basic model of information production and exchange in modern complex societies. They focused on the technological, economic, and social patterns that are emerging, and how they differ from the industrial information economy that preceded them. This chapter offers a fairly detailed map of how law and policy are being tugged and pulled in response to these changes. Digital computers and networked communications as a broad category will not be rolled back by these laws. Instead, we are seeing a battle--often but not always self-conscious--over the precise shape of these technologies. More important, we are observing a series of efforts to shape the social and economic practices as they develop to take advantage of these new technologies.
+
+2~ A FRAMEWORK FOR MAPPING THE INSTITUTIONAL ECOLOGY
+
+Two specific examples will illustrate the various levels at which law can operate to shape the use of information and its production and exchange. The first example builds on the story from chapter 7 of how embarrassing internal e-mails from Diebold, the electronic voting machine maker, were exposed by investigative journalism conducted on a nonmarket and peer-production model. After students at Swarthmore College posted the files, Diebold made a demand under the DMCA that the college remove the materials or face suit for contributory copyright infringement. The students were therefore forced to remove the materials. However, in order to keep the materials available, the students asked students at other institutions to mirror the files, and injected them into the eDonkey, BitTorrent, and FreeNet file-sharing and publication networks. Ultimately, a court held that the unauthorized publication of files that were not intended for sale and carried such high public value was a fair use. This meant that the underlying publication of the files was not itself a violation, and therefore the Internet service provider was not liable for providing a conduit. However, the case was decided on September 30, 2004--long after the information would have been relevant ,{[pg 390]}, to the voting equipment certification process in California. What kept the information available for public review was not the ultimate vindication of the students' publication. It was the fact that the materials were kept in the public sphere even under threat of litigation. Recall also that at least some of the earlier set of Diebold files that were uncovered by the activist who had started the whole process in early 2003 were zipped, or perhaps encrypted in some form. 
Scoop, the Web site that published the revelation of the initial files, published--along with its challenge to the Internet community to scour the files and find holes in the system--links to locations in which utilities necessary for reading the files could be found.
+
+There are four primary potential points of failure in this story that could have conspired to prevent the revelation of the Diebold files, or at least to suppress the peer-produced journalistic mode that made them available. First, if the service provider--the college, in this case--had been a sole provider with no alternative physical transmission systems, its decision to block the materials under threat of suit would have prevented publication of the materials throughout the relevant period. Second, the existence of peer-to-peer networks that overlay the physical networks and were used to distribute the materials made expunging them from the Internet practically impossible. There was no single point of storage that could be locked down. This made the prospect of threatening other universities futile. Third, those of the original files that were not in plain text were readable with software utilities that were freely available on the Internet, and to which Scoop pointed its readers. This made the files readable to many more critical eyes than they otherwise would have been. Fourth, and finally, the fact that access to the raw materials--the e-mails--was ultimately found to be privileged under the fair-use doctrine in copyright law allowed all the acts that had been performed in the preceding period under a shadow of legal liability to proceed in the light of legality.
+
+The second example does not involve litigation, but highlights more of the levers open to legal manipulation. In the weeks preceding the American-led invasion of Iraq, a Swedish video artist produced an audio version of Diana Ross and Lionel Richie's love ballad, "Endless Love," lip-synched to news footage of U.S. president George Bush and British prime minister Tony Blair. By carefully synchronizing the lip movements from the various news clips, the video produced the effect of Bush "singing" Richie's part, and Blair "singing" Ross's, serenading each other with an eternal love ballad. No legal action with regard to the release of this short video has been reported. However, ,{[pg 391]}, the story adds two components not available in the Diebold files context. First, it highlights that quotation from video and music requires actual copying of the digital file. Unlike text, you cannot simply transcribe the images or the sound. This means that access to the unencrypted bits is more important than in the case of text. Second, it is not at all clear that using the entire song, unmodified, is a "fair use." While it is true that the Swedish video is unlikely to cut into the market for the original song, there is nothing in the video that is a parody either of the song itself or of the news footage. The video uses "found materials," that is, materials produced by others, to mix them in a way that is surprising, creative, and creates a genuinely new statement. However, its use of the song is much more complete than the minimalist uses of digital sampling in recorded music, where using a mere two-second, three-note riff from another's song has been found to be a violation unless done with a negotiated license.~{ /{Bridgeport Music, Inc. v. Dimension Films}/, 2004 U.S. App. LEXIS 26877. }~
+
+Combined, the two stories suggest that we can map the resources necessary for a creative communication, whether produced on a market model or a nonmarket model, as including a number of discrete elements. First, there is the universe of "content" itself: existing information, cultural artifacts and communications, and knowledge structures. These include the song and video footage, or the e-mail files, in the two stories. Second, there is the cluster of machinery that goes into capturing, manipulating, fixing and communicating the new cultural utterances or communications made of these inputs, mixed with the creativity, knowledge, information, or communications capacities of the creator of the new statement or communication. These include the physical devices--the computers used by the students and the video artist, as well as by their readers or viewers--and the physical transmission mechanisms used to send the information or communications from one place to another. In the Diebold case, the firm tried to use the Internet service provider liability regime of the DMCA to cut off the machine storage and mechanical communications capacity provided to the students by the university. However, the "machinery" also includes the logical components-- the software necessary to capture, read or listen to, cut, paste, and remake the texts or music; the software and protocols necessary to store, retrieve, search, and communicate the information across the Internet.
+
+As these stories suggest, freedom to create and communicate requires use of diverse things and relationships--mechanical devices and protocols, information, cultural materials, and so forth. Because of this diversity of components ,{[pg 392]}, and relationships, the institutional ecology of information production and exchange is a complex one. It includes regulatory and policy elements that affect different industries, draw on various legal doctrines and traditions, and rely on diverse economic and political theories and practices. It includes social norms of sharing and consumption of things conceived of as quite different--bandwidth, computers, and entertainment materials. To make these cohere into a single problem, for several years I have been using a very simple, three-layered representation of the basic functions involved in mediated human communications. These are intended to map how different institutional components interact to affect the answer to the basic questions that define the normative characteristics of a communications system--who gets to say what, to whom, and who decides?~{ Other layer-based abstractions have been proposed, most effectively by Lawrence Solum and Minn Chung, The Layers Principle: Internet Architecture and the Law, University of San Diego Public Law Research Paper No. 55. Their model more closely hews to the OSI layers, and is tailored to being more specifically usable for a particular legal principle--never regulate at a level lower than you need to. I seek a higher-level abstraction whose role is not to serve as a tool to constrain specific rules, but as a map for understanding the relationships between diverse institutional elements as they relate to the basic problem of how information is produced and exchanged in society. }~
+
+These are the physical, logical, and content layers. The physical layer refers to the material things used to connect human beings to each other. These include the computers, phones, handhelds, wires, wireless links, and the like. The content layer is the set of humanly meaningful statements that human beings utter to and with one another. It includes both the actual utterances and the mechanisms, to the extent that they are based on human communication rather than mechanical processing, for filtering, accreditation, and interpretation. The logical layer represents the algorithms, standards, ways of translating human meaning into something that machines can transmit, store, or compute, and something that machines process into communications meaningful to human beings. These include standards, protocols, and software--both general enabling platforms like operating systems, and more specific applications. A mediated human communication must use all three layers, and each layer therefore represents a resource or a pathway that the communication must use or traverse in order to reach its intended destination. In each and every one of these layers, we have seen the emergence of technical and practical capabilities for using that layer on a nonproprietary model that would make access cheaper, less susceptible to control by any single party or class of parties, or both. In each and every layer, we have seen significant policy battles over whether these nonproprietary or open-platform practices will be facilitated or even permitted. Looking at the aggregate effect, we see that at all these layers, a series of battles is being fought over the degree to which some minimal set of basic resources and capabilities necessary to use and participate in constructing the information environment will be available for use on a nonproprietary, nonmarket basis. ,{[pg 393]},
+
+In each layer, the policy debate is almost always carried out in local, specific terms. We ask questions like, Will this policy optimize "spectrum management" in these frequencies, or, Will this decrease the number of CDs sold? However, the basic, overarching question that we must learn to ask in all these debates is: Are we leaving enough institutional space for the social-economic practices of networked information production to emerge? The networked information economy requires access to a core set of capabilities--existing information and culture, mechanical means to process, store, and communicate new contributions and mixes, and the logical systems necessary to connect them to each other. What nonmarket forms of production need is a core common infrastructure that anyone can use, irrespective of whether their production model is market-based or not, proprietary or not. In almost all these dimensions, the current trajectory of technological-economic-social trends is indeed leading to the emergence of such a core common infrastructure, and the practices that make up the networked information economy are taking advantage of open resources. Wireless equipment manufacturers are producing devices that let users build their own networks, even if these are now at a primitive stage. The open-innovation ethos of the programmer and Internet engineering community produces both free software and proprietary software that rely on open standards for providing an open logical layer. The emerging practices of free sharing of information, knowledge, and culture that occupy most of the discussion in this book are producing an ever-growing stream of freely and openly accessible content resources. The core common infrastructure appears to be emerging without need for help from a guiding regulatory hand. This may or may not be a stable pattern. 
It is possible that by some happenstance one or two firms, using one or two critical technologies, will be able to capture and control a bottleneck. At that point, perhaps regulatory intervention will be required. However, from the beginning of legal responses to the Internet and up to this writing in the middle of 2005, the primary role of law has been reactive and reactionary. It has functioned as a point of resistance to the emergence of the networked information economy. It has been used by incumbents from the industrial information economies to contain the risks posed by the emerging capabilities of the networked information environment. What the emerging networked information economy therefore needs, in almost all cases, is not regulatory protection, but regulatory abstinence.
+
+The remainder of this chapter provides a more or less detailed presentation of the decisions being made at each layer, and how they relate to the freedom ,{[pg 394]}, to create, individually and with others, without having to go through proprietary, market-based transactional frameworks. Because so many components are involved, and so much has happened since the mid-1990s, the discussion is of necessity both long in the aggregate and truncated in each particular category. To overcome this expositional problem, I have collected the various institutional changes in table 11.1. For readers interested only in the overarching claim of this chapter--that is, that there is, in fact, a battle over the institutional environment, and that many present choices interact to increase or decrease the availability of basic resources for information production and exchange--table 11.1 may provide sufficient detail. For those interested in a case study of the complex relationship between law, technology, social behavior, and market structure, the discussion of peer-to-peer networks may be particularly interesting to pursue.
+
+A quick look at table 11.1 reveals that there is a diverse set of sources of openness. A few of these are legal. Mostly, they are based on technological and social practices, including resistance to legal and regulatory drives toward enclosure. Examples of policy interventions that support an open core common infrastructure are the FCC's increased permission to deploy open wireless networks and the various municipal broadband initiatives. The former is a regulatory intervention, but its form is largely removal of past prohibitions on an entire engineering approach to building wireless systems. Municipal efforts to produce open broadband networks are being resisted at the state legislative level, with statutes that remove the power to provision broadband from the home rule powers of municipalities. For the most part, the drive for openness is based on individual and voluntary cooperative action, not law. The social practices of openness take on a quasi-normative face when practiced in standard-setting bodies like the Internet Engineering Task Force (IETF) or the World Wide Web Consortium (W3C). However, none of these have the force of law. Legal devices also support openness when used in voluntaristic models like free software licensing and Creative Commons-type licensing. Most often, however, when law has intervened in its regulatory force, as opposed to its contractual-enablement force, it has done so almost entirely on the side of proprietary enclosure.
+
+Another characteristic of the social-economic-institutional struggle is an alliance between a large number of commercial actors and the social sharing culture. We see this in the way that wireless equipment manufacturers are selling into a market of users of WiFi and similar unlicensed wireless devices. We see this in the way that personal computer manufacturers are competing ,{[pg 395]},
+
+!_ Table 11.1: Overview of the Institutional Ecology
+
+table{~h c3; 33; 33; 33;
+
+.
+Enclosure
+Openness
+
+Physical Transport
Broadband treated by FCC as information service
+Open wireless networks
+
+.
+DMCA ISP liability
+Municipal broadband initiatives
+
+.
+Municipal broadband barred by states
+.
+
+Physical Devices
CBDTPA: regulatory requirements to implement "trusted systems"; private efforts towards the same goal
+Standardization
+
+.
+Operator-controlled mobile phones
+Fiercely competitive market in commodity components
+
+Logical Transmission protocols
+Privatized DNS/ICANN
+TCP/IP
+
+.
+.
+IETF
+
+.
+.
P2P networks
+
+Logical Software
+DMCA anticircumvention; Proprietary OS; Web browser; Software Patents
+Free Software
+
+.
+.
+W3C
+
+.
+.
P2P software widely used
+
+.
+.
Social acceptability of widespread hacking of copy protection
+
+Content
Copyright expansion: "Right to read"; No de minimis digital sampling; "Fair use" narrowed: effect on potential market; "commercial" defined broadly; Criminalization; Term extension
+Increasing sharing practices and adoption of sharing licensing practices
+
+.
+Contractual enclosure: UCITA
+Musicians distribute music freely
+
+.
+Trademark dilution
+Creative Commons; other open publication models
+
+.
+Database protection
+Widespread social disdain for copyright
+
+.
+Linking and trespass to chattels
+International jurisdictional arbitrage
+
+.
+International "harmonization" and trade enforcement of maximal exclusive rights regimes
+Early signs of a global access to knowledge movement combining developing nations with free information ecology advocates, both markets and non-market, raising a challenge to the enclosure movement
+
+}table
+
+,{[pg 396]},
+
+over decreasing margins by producing the most general-purpose machines that would be most flexible for their users, rather than machines that would most effectively implement the interests of Hollywood and the recording industry. We see this in the way that service and equipment-based firms, like IBM and Hewlett-Packard (HP), support open-source and free software. The alliance between the diffuse users and the companies that are adapting their business models to serve them as users, instead of as passive consumers, affects the political economy of this institutional battle in favor of openness. On the other hand, security consciousness in the United States has led to some efforts to tip the balance in favor of closed proprietary systems, apparently because these are currently perceived as more secure, or at least more amenable to government control. While orthogonal in its political origins to the battle between proprietary and commons-based strategies for information production, this drive does tilt the field in favor of enclosure, at least at the time of this writing in 2005.
+
+Over the past few years, we have also seen that the global character of the Internet is a major limit on effective enclosure, when openness is a function of technical and social practices, and enclosure is a function of law.~{ The first major treatment of this phenomenon was Michael Froomkin, "The Internet as a Source of Regulatory Arbitrage" (1996), http://www.law.miami.edu/froomkin/articles/arbitr.htm. }~ When Napster was shut down in the United States, for example, KaZaa emerged in the Netherlands, from where it later moved to Australia. This force is meeting the countervailing force of international harmonization--a series of bilateral and multilateral efforts to "harmonize" exclusive rights regimes internationally and efforts to coordinate international enforcement. It is difficult at this stage to predict which of these forces will ultimately have the upper hand. It is not too early to map in which direction each is pushing. And it is therefore not too early to characterize the normative implications of the success or failure of these institutional efforts.
+
+2~ THE PHYSICAL LAYER
+
+The physical layer encompasses both transmission channels and devices for producing and communicating information. In the broadcast and telephone era, devices were starkly differentiated. Consumers owned dumb terminals. Providers owned sophisticated networks and equipment: transmitters and switches. Consumers could therefore consume whatever providers could produce most efficiently that the providers believed consumers would pay for. Central to the emergence of the freedom of users in the networked environment is an erosion of the differentiation between consumer and provider ,{[pg 397]}, equipment. Consumers came to use general-purpose computers that could do whatever their owners wanted, instead of special-purpose terminals that could only do what their vendors designed them to do. These devices were initially connected over a transmission network--the public phone system--that was regulated as a common carrier. Common carriage required the network owners to carry all communications without differentiating by type or content. The network was neutral as among communications. The transition to broadband networks, and to a lesser extent the emergence of Internet services on mobile phones, are threatening to undermine that neutrality and nudge the network away from its end-to-end, user-centric model to one designed more like a five-thousand-channel broadcast model. At the same time, Hollywood and the recording industry are pressuring the U.S. Congress to impose regulatory requirements on the design of personal computers so that they can be relied on not to copy music and movies without permission. In the process, the law seeks to nudge personal computers away from being purely general-purpose computation devices toward being devices with factory-defined behaviors vis-a-vis predicted-use patterns, like glorified televisions and CD players.
The emergence of the networked information economy as described in this book depends on the continued existence of an open transport network connecting general-purpose computers. It therefore also depends on the failure of the efforts to restructure the network on the model of proprietary networks connecting terminals with sufficiently controlled capabilities to be predictable and well behaved from the perspective of incumbent production models.
+
+3~ Transport: Wires and Wireless
+
+Recall the Cisco white paper quoted in chapter 5. In it, Cisco touted the value of its then-new router, which would allow a broadband provider to differentiate streams of information going to and from the home at the packet level. If the packet came from a competitor, or someone the user wanted to see or hear but the owner preferred that the user did not, the packet could be slowed down or dropped. If it came from the owner or an affiliate, it could be speeded up. The purpose of the router was not to enable evil control over users. It was to provide better-functioning networks. America Online (AOL), for example, has been reported as blocking its users from reaching Web sites that have been advertised in spam e-mails. The theory is that if spammers know their Web site will be inaccessible to AOL customers, they will stop.~{ Jonathan Krim, "AOL Blocks Spammers' Web Sites," Washington Post, March 20, 2004, p. A01; also available at http://www.washingtonpost.com/ac2/wp-dyn?pagename=article&contentId=A9449-2004Mar19&notFound=true. }~ The ability of service providers to block sites or packets from ,{[pg 398]}, certain senders and promote packets from others may indeed be used to improve the network. However, whether this ability will in fact be used to improve service depends on the extent to which the interests of all users, and particularly those concerned with productive uses of the network, are aligned with the interests of the service providers. Clearly, when in 2005 Telus, Canada's second-largest telecommunications company, blocked access to the Web site of the Telecommunications Workers Union for all of its own clients and those of Internet service providers that relied on its backbone network, it was not seeking to improve service for those customers' benefit, but to control a conversation in which it had an intense interest.
When there is a misalignment, the question is what, if anything, disciplines the service providers' use of the technological capabilities they possess? One source of discipline would be a genuinely competitive market. The transition to broadband has, however, severely constrained the degree of competition in Internet access services. Another would be regulation: requiring owners to treat all packets equally. This solution, while simple to describe, remains highly controversial in the policy world. It has strong supporters and strong opposition from the incumbent broadband providers, and has, as a practical matter, been rejected for the time being by the FCC. The third type of solution would be both more radical and less "interventionist" from the perspective of regulation. It would involve eliminating contemporary regulatory barriers to the emergence of a user-owned wireless infrastructure. It would allow users to deploy their own equipment, share their wireless capacity, and create a "last mile" owned by all users in common, and controlled by none. This would, in effect, put equipment manufacturers in competition to construct the "last mile" of broadband networks, and thereby open up the market in "middle-mile" Internet connection services.
+
+Since the early 1990s, when the Clinton administration announced its "Agenda for Action" for what was then called "the information superhighway," it has been the policy of the United States to "let the private sector lead" in deployment of the Internet. To a greater or lesser degree, this commitment to private provisioning was adopted in most other advanced economies in the world. In the first few years, this meant that investment in the backbone of the Internet was private, and heavily funded by the stock bubble of the late 1990s. It also meant that the last distribution bottleneck--the "last mile"--was privately owned. Until the end of the 1990s, the last mile was made mostly of dial-up connections over the copper wires of the incumbent local exchange carriers. This meant that the physical layer was not only ,{[pg 399]}, proprietary, but that it was, for all practical purposes, monopolistically owned. Why, then, did the early Internet nonetheless develop into a robust, end-to-end neutral network? As Lessig showed, this was because the telephone carriers were regulated as common carriers. They were required to carry all traffic without discrimination. Whether a bit stream came from Cable News Network (CNN) or from an individual blog, all streams--upstream from the user and downstream to the user--were treated neutrally.
+
+4~ Broadband Regulation
+
+The end of the 1990s saw the emergence of broadband networks. In the United States, cable systems, using hybrid fiber-coaxial systems, moved first, and became the primary providers. The incumbent local telephone carriers have been playing catch-up ever since, using digital subscriber line (DSL) techniques to squeeze sufficient speed out of their copper infrastructure to remain competitive, while slowly rolling out fiber infrastructure closer to the home. As of 2003, the incumbent cable carriers and the incumbent local telephone companies accounted for roughly 96 percent of all broadband access to homes and small offices.~{ FCC Report on High Speed Services, December 2003 (Appendix to Fourth 706 Report NOI). }~ In 1999-2000, as cable was beginning to move into a more prominent position, academic critique began to emerge, stating that the cable broadband architecture could be manipulated to deviate from the neutral, end-to-end architecture of the Internet. One such paper was written by Jerome Saltzer, one of the authors of the paper that originally defined the "end-to-end" design principle of the Internet in 1980; Lessig and Mark Lemley wrote another. These papers began to emphasize that cable broadband providers technically could, and had commercial incentive to, stop treating all communications neutrally. They could begin to move from a network where almost all functions are performed by user-owned computers at the ends of the network to one where more is done by provider equipment at the core. The introduction of the Cisco policy router was seen as a stark marker of how things could change.
+
+The following two years saw significant regulatory battles over whether the cable providers would be required to behave as common carriers. In particular, the question was whether they would be required to offer competitors nondiscriminatory access to their networks, so that these competitors could compete in Internet services. The theory was that competition would keep the incumbents from skewing their networks too far away from what users valued as an open Internet. The first round of battles occurred at the municipal level. Local franchising authorities tried to use their power ,{[pg 400]}, over cable licenses to require cable operators to offer open access to their competitors if they chose to offer cable broadband. The cable providers challenged these regulations in courts. The most prominent decision came out of Portland, Oregon, where the Federal Court of Appeals for the Ninth Circuit held that broadband was part information service and part telecommunications service, but not a cable service. The FCC, not the cable franchising authority, had power to regulate it.~{ 216 F.3d 871 (9th Cir. 2000). }~ At the same time, as part of the approval of the AOL-Time Warner merger, the Federal Trade Commission (FTC) required the new company to give at least three competitors open access to its broadband facilities, should AOL be offered over Time Warner's cable broadband facilities.
+
+The AOL-Time Warner merger requirements, along with the Ninth Circuit's finding that cable broadband included a telecommunications component, seemed to indicate that cable broadband transport would come to be treated as a common carrier. This was not to be. In late 2001 and the middle of 2002, the FCC issued a series of reports that would reach the exact opposite result. Cable broadband, the commission held, was an information service, not a telecommunications service. This created an imbalance with the telecommunications status of broadband over telephone infrastructure, which at the time was treated as a telecommunications service. The commission dealt with this imbalance by holding that broadband over telephone infrastructure, like broadband over cable, was now to be treated as an information service. Adopting this definition was perhaps admissible as a matter of legal reasoning, but it certainly was not required by either sound legal reasoning or policy. The FCC's reasoning effectively took the business model that cable operators had successfully used to capture two-thirds of the market in broadband--bundling two discrete functionalities, transport (carrying bits) and higher-level services (like e-mail and Web hosting)--and treated it as though it described the intrinsic nature of "broadband cable" as a service. Because that service included more than just carriage of bits, it could be called an information service. Of course, it would have been as legally admissible, and more technically accurate, to do as the Ninth Circuit had done. That is, to say that cable broadband bundles two distinct services: carriage and information-use tools. The former is a telecommunications service. 
In June of 2005, the Supreme Court in the Brand X case upheld the FCC's authority to make this legally admissible policy error, upholding as a matter of deference to the expert agency the Commission's position that cable broadband services should be treated as information services.~{ /{National Cable and Telecommunications Association v. Brand X Internet Services}/ (decided June 27, 2005). }~ As a matter ,{[pg 401]}, of policy, the designation of broadband services as "information services" more or less locked the FCC into a "no regulation" approach. As information services, broadband providers obtained the legal power to "edit" their programming, just like any operator of an information service, like a Web site. Indeed, this new designation has placed a serious question mark over whether future efforts to regulate carriage decisions would be considered constitutional, or would instead be treated as violations of the carriers' "free speech" rights as a provider of information. Over the course of the 1990s, there were a number of instances where carriers--particularly cable, but also telephone companies--were required by law to carry some signals from competitors. In particular, cable providers were required to carry over-the-air broadcast television, telephone carriers, in FCC rules called "video dialtone," were required to offer video on a common carriage basis, and cable providers that chose to offer broadband were required to make their infrastructure available to competitors on a common carrier model. In each of these cases, the carriage requirements were subjected to First Amendment scrutiny by courts. In the case of cable carriage of broadcast television, the carriage requirements were only upheld after six years of litigation.~{ /{Turner Broad. Sys. v. FCC}/, 512 U.S. 622 (1994) and /{Turner Broad. Sys. v. FCC}/, 520 U.S. 180 (1997). 
}~ In cases involving video common carriage requirements applied to telephone companies and cable broadband, lower courts struck down the carriage requirements as violating the telephone and cable companies' free-speech rights.~{ /{Chesapeake & Potomac Tel. Co. v. United States}/, 42 F.3d 181 (4th Cir. 1994); /{Comcast Cablevision of Broward County, Inc. v. Broward County}/, 124 F. Supp. 2d 685, 698 (D. Fla., 2000). }~ To a large extent, then, the FCC's regulatory definition left the incumbent cable and telephone providers--who control 96 percent of broadband connections to home and small offices--unregulated, and potentially constitutionally immune to access regulation and carriage requirements.
+
+Since 2003, the cable access debate--over whether competitors should get access to the transport networks of incumbent broadband carriers--has been replaced with an effort to seek behavioral regulation in the form of "network neutrality." This regulatory concept would require broadband providers to treat all packets equally, without forcing them to open their networks to competitors or imposing any of the other commitments associated with common carriage. The concept has the backing of some very powerful actors, including Microsoft, and more recently MCI, which still owns much of the Internet backbone, though not the last mile. For this reason, if for no other, it remains as of this writing a viable path for institutional reform that would balance the basic structural shift of Internet infrastructure from a common-carriage to a privately controlled model. Even if successful, the drive to network neutrality would keep the physical infrastructure a technical bottleneck, ,{[pg 402]}, owned by a small number of firms facing very limited competition, with wide legal latitude for using that control to affect the flow of information over their networks.
+
+4~ Open Wireless Networks
+
+A more basic and structural opportunity to create an open broadband infrastructure is, however, emerging in the wireless domain. To see how, we must first recognize that opportunities to control the broadband infrastructure in general are not evenly distributed throughout the networked infrastructure. The long-haul portions of the network have multiple redundant paths with no clear choke points. The primary choke point over the physical transport of bits across the Internet is in the last mile of all but the most highly connected districts. That is, the primary bottleneck is the wire or cable connecting the home and small office to the network. It is here that cable and local telephone incumbents control the market. It is here that the high costs of digging trenches, pulling fiber, and getting wires through and into walls pose a prohibitive barrier to competition. And it is here, in the last mile, that unlicensed wireless approaches now offer the greatest promise to deliver a common physical infrastructure of first and last resort, owned by its users, shared as a commons, and offering no entity a bottleneck from which to control who gets to say what to whom.
+
+As discussed in chapter 6, from the end of World War I and through the mid-twenties, improvements in the capacity of expensive transmitters and a series of strategic moves by the owners of the core patents in radio transmission led to the emergence of the industrial model of radio communications that typified the twentieth century. Radio came to be dominated by a small number of professional, commercial networks, based on high-capital-cost transmitters. These were supported by a regulatory framework tailored to making the primary model of radio utilization for most Americans passive reception, with simple receivers, of commercial programming delivered with high-powered transmitters. This industrial model, which assumed large-scale capital investment in the core of the network and small-scale investments at the edges, optimized for receiving what is generated at the core, was imprinted on wireless communications systems both at the level of design and at the level of regulation. When mobile telephony came along, it replicated the same model, using relatively cheap handsets oriented toward an infrastructure-centric deployment of towers. The regulatory model followed Hoover's initial pattern and perfected it. A government agency strictly controlled who could ,{[pg 403]}, place a transmitter, where, with what antenna height, and using what power. The justification was avoidance of interference. The presence of strict licensing was used as the basic assumption in the engineering of wireless systems throughout this period. Since 1959, economic analysis of wireless regulation has criticized this approach, but only on the basis that it inefficiently regulated the legal right to construct a wireless system by using strictly regulated spectrum licenses, instead of creating a market in "spectrum use" rights.~{ The locus classicus of the economists' critique was Ronald Coase, "The Federal Communications Commission," Journal of Law and Economics 2 (1959): 1.
The best worked-out version of how these property rights would look remains Arthur S. De Vany et al., "A Property System for Market Allocation of the Electromagnetic Spectrum: A Legal-Economic-Engineering Study," Stanford Law Review 21 (1969): 1499. }~ This critique kept the basic engineering assumptions stable--for radio to be useful, a high-powered transmitter must be received by simple receivers. Given this engineering assumption, someone had to control the right to emit energy in any range of radio frequencies. The economists wanted the controller to be a property owner with a flexible, transferable right. The regulators wanted it to be a licensee subject to regulatory oversight and approval by the FCC.
+
+As chapter 3 explained, by the time that legislatures in the United States and around the world had begun to accede to the wisdom of the economists' critique, it had been rendered obsolete by technology. In particular, it had been rendered obsolete by the fact that the declining cost of computation and the increasing sophistication of communications protocols among end-user devices in a network made possible new, sharing-based solutions to the problem of how to allow users to communicate without wires. Instead of having a regulation-determined exclusive right to transmit, which may or may not be subject to market reallocation, it is possible to have a market in smart radio equipment owned by individuals. These devices have the technical ability to share capacity and cooperate in the creation of wireless carriage capacity. These radios can, for example, cooperate by relaying each other's messages or temporarily "lending" their antennae to neighbors to help them decipher messages of senders, without anyone having exclusive use of the spectrum. Just as PCs can cooperate to create a supercomputer in SETI@Home by sharing their computation, and a global-scale, peer-to-peer data-storage and retrieval system by sharing their hard drives, computationally intensive radios can share their capacity to produce a local wireless broadband infrastructure. Open wireless networks allow users to install their own wireless device--much like the WiFi devices that have become popular. These devices then search automatically for neighbors with similar capabilities, and self-configure into a high-speed wireless data network. Reaching this goal does not, at this point, require significant technological innovation. The technology is there, though it does require substantial engineering ,{[pg 404]}, effort to implement. The economic incentives to develop such devices are fairly straightforward. Users already require wireless local networks.
They will gain added utility from extending their range for themselves, coupled with the possibility of sharing with others to provide significant wide-area network capacity whose availability does not depend on any particular provider. Ultimately, it would be a way for users to circumvent the monopoly last mile and recapture some of the rents they currently pay. Equipment manufacturers obviously have an incentive to try to cut into the rents captured by the broadband monopoly/oligopoly by offering an equipment-embedded alternative.
+
+My point here is not to consider the comparative efficiency of a market in wireless licenses and a market in end-user equipment designed for sharing channels that no one owns. It is to highlight the implications of the emergence of a last mile that is owned by no one in particular, and is the product of cooperation among neighbors in the form of, "I'll carry your bits if you carry mine." At the simplest level, neighbors could access locally relevant information directly, over a wide-area network. More significant, the fact that users in a locality coproduced their own last-mile infrastructure would allow commercial Internet providers to set up Internet points of presence anywhere within the "cloud" of the locale. The last mile would be provided not by these competing Internet service providers, but by the cooperative efforts of the residents of local neighborhoods. Competitors in providing the "middle mile"--the connection from the last mile to the Internet cloud--could emerge, in a way that they cannot if they must first lay their own last mile all the way to each home. The users, rather than the middle-mile providers, will have paid the capital cost of producing the local transmission system--their own cooperative radios. The presence of a commons-based, coproduced last mile alongside the proprietary broadband network eliminates the last mile as a bottleneck for control over who speaks, with what degree of ease, and with what types of production values and interactivity.
+
+The development of open wireless networks, owned by their users and focused on sophisticated general-purpose devices at their edges, also offers a counterpoint to the emerging trend among mobile telephony providers to offer a relatively limited and controlled version of the Internet over the phones they sell. Some wireless providers are simply offering mobile Internet connections throughout their networks, for laptops. Others, however, are using their networks to allow customers to use their ever-more-sophisticated phones to surf portions of the Web. These latter services diverge in their ,{[pg 405]}, styles. Some tend to be limited, offering only a set of affiliated Web sites rather than genuine connectivity to the Internet itself with a general-purpose device. Sprint's "News" offerings, for example, connect users to CNNtoGo, ABCNews.com, and the like, but will not enable a user to reach the blogosphere to upload, say, a photo of protesters being manhandled. So while mobility in principle increases the power of the Web, and text messaging puts e-mail-like capabilities everywhere, the effect of the implementations of the Web on phones is more ambiguous. It could be more like a Web-enabled reception device than a genuinely active node in a multidirectional network. Widespread adoption of open wireless networks would give mobile phone manufacturers a new option. They could build into the mobile telephones the ability to tap into open wireless networks, and use them as general-purpose access points to the Internet. The extent to which this will be a viable option for the mobile telephone manufacturers depends on how much the incumbent mobile telephone service providers, those who purchased their licenses at high-priced auctions, will resist this move. Most users buy their phones from their providers, not from general electronic equipment stores. Phones are often tied to specific providers in ways that users are not able to change for themselves.
In these conditions, it is likely that mobile providers will resist the competition from free open wireless systems for "data minutes" by refusing to sell dual-purpose equipment. Worse, they may boycott manufacturers who make mobile phones that are also general-purpose Web-surfing devices over open wireless networks. How that conflict will go, and whether users would be willing to carry a separate small device to enable them to have open Internet access alongside their mobile phone, will determine the extent to which the benefits of open wireless networks will be transposed into the mobile domain. Normatively, that outcome has significant implications. From the perspective of the citizen watchdog function, ubiquitous availability of capture, rendering, and communication capabilities is important. From the perspective of personal autonomy as informed action in context, extending openness to mobile units would provide significant advantages, allowing individuals to construct their own information environment on the go, as they confront decisions and points of action in their daily lives.
+
+4~ Municipal Broadband Initiatives
+
+One alternative path for the emergence of basic physical information transport infrastructure on a nonmarket model is the drive to establish municipal ,{[pg 406]}, systems. These proposed systems would not be commons-based in the sense that they would not be created by the cooperative actions of individuals without formal structure. They would be public, like highways, sidewalks, parks, and sewage systems. Whether they are, or are not, ultimately to perform as commons would depend on how they would be regulated. In the United States, given the First Amendment constraints on government preferring some speech to other speech in public fora, it is likely that municipal systems would be managed as commons. In this regard, they would have parallel beneficial characteristics to those of open wireless systems. The basic thesis underlying municipal broadband initiatives is similar to that which has led some municipalities to create municipal utilities or transportation hubs. Connectivity has strong positive externalities. It makes a city's residents more available for the information economy and the city itself a more attractive locale for businesses. Most of the efforts have indeed been phrased in these instrumental terms. The initial drive has been the creation of municipal fiber-to-the-home networks. The town of Bristol, Virginia, is an example. It has a population of slightly more than seventeen thousand. Median household income is 68 percent of the national median. These statistics made it an unattractive locus for early broadband rollout by incumbent providers. However, in 2003, Bristol residents had one of the most advanced residential fiber-to-the-home networks in the country, available for less than forty dollars a month. Unsurprisingly, therefore, the city had broadband penetration rivaling many of the top U.S. markets with denser and wealthier populations. 
The "miracle" of Bristol is that the residents of the town, fed up with waiting for the local telephone and cable companies, built their own, municipally owned network. Theirs has become among the most ambitious and successful of more than five hundred publicly owned utilities in the United States that offer high-speed Internet, cable, and telephone services to their residents. Some of the larger cities--Chicago and Philadelphia, most prominently--are moving as of this writing in a similar direction. The idea in Chicago is that basic "dark fiber"--that is, the physical fiber going to the home, but without the electronics that would determine what kinds of uses the connectivity could be put to--would be built by the city. Access to use this entirely neutral, high-capacity platform would then be open to anyone--commercial and noncommercial alike. The drive in Philadelphia emphasizes the other, more recently available avenue--wireless. The quality of WiFi and the widespread adoption of wireless techniques have moved other municipalities to adopt wireless or mixed fiber-wireless strategies. Municipalities are ,{[pg 407]}, proposing to use publicly owned facilities to place wireless points of access around the town, covering the area in a cloud of connectivity and providing open Internet access from anywhere in the city. Philadelphia's initiative has received the widest public attention, although other, smaller cities are closer to having a wireless cloud over the city already.
+
+The incumbent broadband providers have not taken kindly to the municipal assault on their monopoly (or oligopoly) profits. When the city of Abilene, Texas, tried to offer municipal broadband service in the late 1990s, Southwestern Bell (SBC) persuaded the Texas legislature to pass a law that prohibited local governments from providing high-speed Internet access. The town appealed to the FCC and the Federal Court of Appeals in Washington, D.C. Both bodies held that when Congress passed the 1996 Telecommunications Act, and said that, "no state . . . regulation . . . may prohibit . . . the ability of any entity to provide . . . telecommunications service," municipalities were not included in the term "any entity." As the D.C. Circuit put it, "any" might have some significance "depending on the speaker's tone of voice," but here it did not really mean "any entity," only some. And states could certainly regulate the actions of municipalities, which are treated in U.S. law as merely their subdivisions or organs.~{ /{City of Abilene, Texas v. Federal Communications Commission}/, 164 F.3d 49 (D.C. Cir. 1999). }~ Bristol, Virginia, had to fight off similar efforts to prohibit its plans through state law before it was able to roll out its network. In early 2004, the U.S. Supreme Court was presented with the practice of state preemption of municipal broadband efforts and chose to leave the municipalities to fend for themselves. A coalition of Missouri municipalities challenged a Missouri law that, like the Texas law, prohibited them from stepping in to offer their citizens broadband service. The Court of Appeals for the Eighth Circuit agreed with the municipalities. The 1996 Act, after all, was intended precisely to allow anyone to compete with the incumbents.
The section that prohibited states from regulating the ability of "any entity" to enter the telecommunications service market precisely anticipated that the local incumbents would use their clout in state legislatures to thwart the federal policy of introducing competition into the local loop. Here, the incumbents were doing just that, but the Supreme Court reversed the Eighth Circuit decision. Without dwelling too much on the wisdom of allowing citizens of municipalities to decide for themselves whether they want a municipal system, the court issued an opinion that was technically defensible in terms of statutory interpretation, but effectively invited the incumbent broadband providers to put their lobbying efforts into persuading state legislators to prohibit municipal efforts.~{ /{Nixon v. Missouri Municipal League}/, 541 U.S. 125 (2004). }~ After ,{[pg 408]}, Philadelphia rolled out its wireless plan, it was not long before the Pennsylvania legislature passed a similar law prohibiting municipalities from offering broadband. While Philadelphia's plan itself was grandfathered, future expansion from a series of wireless "hot spots" in open areas to a genuine municipal network will likely be challenged under the new state law. Other municipalities in Pennsylvania are entirely foreclosed from pursuing this option. In this domain, at least as of 2005, the incumbents seem to have had some substantial success in containing the emergence of municipal broadband networks as a significant approach to eliminating the bottleneck in local network infrastructure.
+
+3~ Devices
+
+The second major component of the physical layer of the networked environment comprises the devices people use to compute and communicate. Personal computers, handhelds, game consoles, and to a lesser extent, but lurking in the background, televisions, are the primary relevant devices. In the United States, personal computers are the overwhelmingly dominant mode of connectivity. In Europe and Japan, mobile handheld devices occupy a much larger space. Game consoles are beginning to provide an alternative computationally intensive device, and Web-TV has been a background idea for a while. The increasing digitization of both over-the-air and cable broadcast makes digital TV a background presence, if not an immediate alternative avenue, to Internet communications. None of these devices is constructed by a commons--in the way that open wireless networks, free software, or peer-produced content can be. Personal computers, however, are built on an open architecture, using highly standardized commodity components and open interfaces in an enormously competitive market. As a practical matter, therefore, PCs provide an open-platform device. Handhelds, game consoles, and digital televisions, on the other hand, use more or less proprietary architectures and interfaces and are produced in a less-competitive market--not because there is no competition among the manufacturers, but because the distribution chain, through the service providers, is relatively controlled. The result is that configurations and features can more readily be customized for personal computers. New uses can be developed and implemented in the hardware without permission from any owner of a manufacturing or distribution outlet. As handhelds grow in their capabilities, and personal computers collapse in size, the two modes of communicating are bumping into each other's turf. At the moment, there is no obvious regulatory push to ,{[pg 409]}, nudge one or the other out.
Observing the evolution of these markets therefore has less to do with policy. As we look at these markets, however, it is important to recognize that the outcome of this competition is not normatively neutral. The capabilities made possible by personal computers underlie much of the social and economic activity described throughout this book. Proprietary handhelds, and even more so, game consoles and televisions, are, presently at least, platforms that choreograph their use. They structure their users' capabilities according to design requirements set by their producers and distributors. A physical layer usable with general-purpose computers is one that is pliable and open for any number of uses by individuals, in a way that a physical layer used through more narrowly scripted devices is not.
+
+The major regulatory threat to the openness of personal computers comes from efforts to regulate the use of copyrighted materials. This question is explored in greater depth in the context of discussing the logical layer. Here, I only note that peer-to-peer networks, and what Fisher has called "promiscuous copying" on the Internet, have created a perceived threat to the very existence of the major players in the industrial cultural production system-- Hollywood and the recording industry. These industries are enormously adept at driving the regulation of their business environment--the laws of copyright, in particular. As the threat of copying and sharing of their content by users increased, these industries have maintained a steady pressure on Congress, the courts, and the executive to ratchet up the degree to which their rights are enforced. As we will see in looking at the logical and content layers, these efforts have been successful in changing the law and pushing for more aggressive enforcement. They have not, however, succeeded in suppressing widespread copying. Copying continues, if not entirely unabated, certainly at a rate that was impossible a mere six years ago.
+
+One major dimension of the effort to stop copying has been a drive to regulate the design of personal computers. Pioneered by Senator Fritz Hollings in mid-2001, a number of bills were drafted and lobbied for: the first was the Security Systems Standards and Certification Act; the second, the Consumer Broadband and Digital Television Promotion Act (CBDTPA), was actually introduced in the Senate in 2002.~{ Bill Number S. 2048, 107th Congress, 2nd Session. }~ The basic structure of these proposed statutes was that they required manufacturers to design their computers to be "trusted systems." The term "trusted," however, had a very odd meaning. The point is that the system, or computer, can be trusted to perform in certain predictable ways, irrespective of what its owner wishes. ,{[pg 410]},
+
+The impulse is trivial to explain. If you believe that most users are using their personal computers to copy films and music illegally, then you can think of these users as untrustworthy. In order to distribute films and music in a digital environment that can be trusted, one must disable users from behaving as they would otherwise choose. The result is a range of efforts at producing what has derisively been called "the Fritz chip": legal mandates that systems be designed so that personal computers cannot run programs that are not properly certified to the chip. The most successful of these campaigns was Hollywood's achievement in persuading the FCC to require manufacturers of all devices capable of receiving digital television signals from the television set to comply with a particular "trusted system" standard. This "broadcast flag" regulation was odd in two distinct ways. First, the rule-making documents show quite clearly that this was a rule driven by Hollywood, not by the broadcasters. This is unusual because the industries that usually play a central role in these rule makings are those regulated by the FCC, such as broadcasters and cable systems. Second, the FCC was not, in fact, regulating the industries that it normally has jurisdiction to regulate. Instead, the rule applied to any device that could use digital television signals after they had already been received in the home. In other words, the FCC was regulating practically every computer and digital-video-capable consumer electronics device imaginable. The Court of Appeals ultimately struck down the regulation as wildly beyond the agency's jurisdiction, but the broadcast flag nonetheless is the closest that the industrial information economy incumbents have come to achieving regulatory control over the design of computers.
+
+The efforts to regulate hardware to fit the distribution model of Hollywood and the recording industry pose a significant danger to the networked information environment. The core design principle of general-purpose computers is that they are open for varied uses over time, as their owners change their priorities and preferences. It is this general-purpose character that has allowed personal computers to take on such varied roles since their adoption in the 1980s. The purpose of the Fritz chip-style laws is to make computing devices less flexible. It is to define a range of socially, culturally, and economically acceptable uses of the machines that are predicted by the legislature and the industry actors, and to implement factory-defined capabilities that are not flexible, and do not give end users the freedom to change the intended use over time and to adapt to changing social and economic conditions and opportunities. ,{[pg 411]},
+
+The political economy of this regulatory effort, and of similar drives that have been more successful in the logical and content layers, is uncharacteristic of American politics. Personal computers, software, and telecommunications services are significantly larger industries than Hollywood and the recording industry. Verizon alone has annual revenues roughly equal to those of the entire U.S. movie industry. Each one of the industries that the content industries have tried to regulate has revenues several times greater than those of the movie and music industries combined. The relative successes of Hollywood and the recording industry in regulating the logical and content layers, and the viability of their efforts to pass a Fritz chip law, attest to the remarkable cultural power of these industries and to their lobbying prowess. The reason is likely historical. The software and hardware industries in particular developed mostly outside of the regulatory arena; only around 2002 did they begin to understand that what goes on in Washington could really hurt them. The telecommunications carriers, which are some of the oldest hands at the regulatory game, have had some success in preventing regulations that would force them to police their users and limit Internet use. However, the bulk of their lobbying efforts have been aimed elsewhere. The institutions of higher education, which have found themselves under attack for not policing their students' use of peer-to-peer networks, have been entirely ineffective at presenting their cultural and economic value and the importance of open Internet access to higher education, as compared to the hypothetical losses of Hollywood and the recording industry. Despite the past successes of these entertainment-industry incumbents, two elements suggest that physical device regulation of the CBDTPA form will not follow the same successful path as similar legislation at the logical layer, the DMCA of 1998.
The first element is the fact that, unlike in 1998, the technology industries have now realized that Hollywood is seeking to severely constrain their design space. Industries with half a trillion dollars a year in revenues tend to have significant pull in American and international lawmaking bodies, even against industries, like movies and sound recording, that have high cultural visibility but no more than seventy-five billion dollars a year in revenues. The second is that in 1998, there were very few public advocacy organizations operating in the space of intellectual property and trying to play watchdog and to speak for the interests of users. By 2004, a number of organizations dedicated to users' rights in the digital environment had emerged to make that conflict clear. The combination of well-defined business interests with increasing representation of user interests creates a political landscape ,{[pg 412]}, in which it will be difficult to pass sweeping laws to limit the flexibility of personal computers. The most recent iteration of the Fritz chip agenda, the Inducing Infringement of Copyrights Act of 2004, was indeed defeated, for the time being, by a coalition of high-technology firms and people who would formerly have been seen as left-of-center media activists.
+
+Regulation of device design remains at the frontier of the battles over the institutional ecology of the digital environment. It is precisely ubiquitous access to basic, general-purpose computers, as opposed to glorified televisions or telephone handsets, that lies at the very heart of the networked information economy. And it is therefore precisely ubiquitous access to such basic machines that is a precondition to the improvements in freedom and justice that we can see emerging in the digital environment.
+
+2~ THE LOGICAL LAYER
+
+At the logical layer, most of the efforts aimed to secure a proprietary model and a more tightly controlled institutional ecology follow a similar pattern to the efforts to regulate device design. They come from the needs of the content-layer businesses--Hollywood and the recording industry, in particular. Unlike the physical transmission layer, which is historically rooted in a proprietary but regulated organizational form, most of the logical layer of the Internet has its roots in open, nonproprietary protocols and standards. The broad term "logical layer" combines a wide range of quite different functionalities. The most basic logical components--the basic protocols and standards for Internet connectivity--have from the beginning of the Internet been open, unowned, and used in common by all Internet users and applications. They were developed by computer scientists funded primarily with public money. The basic Internet Protocol (IP) and Transmission Control Protocol (TCP) are open for all to use. Most of the basic standards for communicating were developed in the IETF, a loosely defined standards-setting body that works almost entirely on a meritocratic basis--a body that Michael Froomkin once suggested is the closest earthly approximation of Habermas's ideal speech situation. Individual computer engineers contributed irrespective of formal status or organizational affiliation, and the organization ran on the principle that Dave Clark termed "rough consensus and running code." The World Wide Web protocols and authoring conventions, HTTP and HTML, were created, and over the course of their lives shepherded, by Tim Berners-Lee, who has chosen to dedicate his efforts to making ,{[pg 413]}, the Web a public good rather than cashing in on his innovation.
The sheer technical necessity of these basic protocols and the cultural stature of their achievement within the engineering community have given these open processes and their commonslike institutional structure a strong gravitational pull on the design of other components of the logical layer, at least insofar as it relates to the communication side of the Internet.
+
+This basic open model has been in constant tension with the proprietary models that have come to use and focus on the Internet in the past decade. By the mid-1990s, the development of graphical user interfaces to the Web drove Internet use out of universities and into homes. Commercial actors began to look for ways to capture the commercial value of the human potential of the World Wide Web and the Internet, while Hollywood and the recording industry saw the threat of one giant worldwide copying machine looming large. At the same time, the Clinton administration's search for a "third-way" liberal agenda manifested itself in these areas as a commitment to "let the private sector lead" in deployment of the Internet, and an "intellectual property" policy based on extreme protectionism for the exclusive-rights-dependent industries, aimed, in the metaphors of that time, at getting cars on the information superhighway or helping the Internet become a celestial jukebox. The result was a series of moves designed to make the institutional ecology of the Internet more conducive to the proprietary model.
+
+3~ The Digital Millennium Copyright Act of 1998
+
+No piece of legislation more clearly represents the battle over the institutional ecology of the digital environment than the pompously named Digital Millennium Copyright Act of 1998 (DMCA). The DMCA was the culmination of more than three years of lobbying and varied efforts, both domestically in the United States and internationally, over the passage of two WIPO treaties in 1996. The basic worldview behind it, expressed in a 1995 white paper issued by the Clinton administration, was that in order for the National Information Infrastructure (NII) to take off, it had to have "content," and that its great promise was that it could deliver the equivalent of thousands of channels of entertainment. This would only happen, however, if the NII was made safe for delivery of digital content, so that content could not be easily copied and distributed without authorization and without payment. The two core recommendations of that early road map were focused on regulating technology and organizational responsibility. First, law was to regulate ,{[pg 414]}, the development of technologies that might defeat any encryption or other mechanisms that the owners of copyrighted materials would use to prevent use of their works. Second, Internet service providers were to be held accountable for infringements made by their users, so that they would have an incentive to police their systems. Early efforts to pass this agenda in legislation were resisted, primarily by the large telecommunications service providers. The Baby Bells--the U.S. regional telephone companies created by the 1984 breakup of AT&T (Ma Bell), which split up the company in order to introduce a more competitive structure into the telecom industry--also played a role in partly defeating implementation of this agenda in the negotiations toward new WIPO treaties in 1996, treaties that ultimately included a much-muted version of the white paper agenda.
Nonetheless, the following year saw significant lobbying for "implementing legislation" to bring U.S. law in line with the requirements of the new WIPO treaties. This new posture placed the emphasis of congressional debates on national industrial policy and the importance of strong protection to the export activities of the U.S. content industries. It was enough to tip the balance in favor of passage of the DMCA. The Internet service provider liability portions bore the marks of a hard-fought battle. The core concerns of the telecommunications companies were addressed by creating an explicit exemption for pure carriage of traffic. Furthermore, providers of more sophisticated services, like Web hosting, were provided immunity from liability for simple failure to police their system actively. In exchange, however, service providers were required to respond to requests by copyright owners by immediately removing materials that the copyright owners deemed infringing. This was the provision under which Diebold forced Swarthmore to remove the embarrassing e-mail records from the students' Web sites. The other, more basic, element of the DMCA was the anticircumvention regime it put in place. Pamela Samuelson has described the anticircumvention provisions of the DMCA as the result of a battle between Hollywood and Silicon Valley. At the time, unlike the telecommunications giants who were born of and made within the regulatory environment, Silicon Valley did not quite understand that what happened in Washington, D.C., could affect its business. The Act was therefore an almost unqualified victory for Hollywood, moderated only by a long list of weak exemptions for various parties that bothered to show up and lobby against it.
+
+The central feature of the DMCA, a long and convoluted piece of legislation, ,{[pg 415]}, is its anticircumvention and antidevice provisions. These provisions made it illegal to use, develop, or sell technologies that had certain properties. Copyright owners believed that it would be possible to build strong encryption into media products distributed on the Internet. If they did so successfully, the copyright owners could charge for digital distribution and users would not be able to make unauthorized copies of the works. If this outcome was achieved, the content industries could simply keep their traditional business model--selling movies or music as discrete packages--at lower cost, and with a more refined ability to extract the value users got from using their materials. The DMCA was intended to make this possible by outlawing technologies that would allow users to get around, or circumvent, the protection measures that the owners of copyrighted materials put in place. At first blush, this proposition sounds entirely reasonable. If you think of the content of a music file as a home, and of the copy protection mechanism as its lock, then all the DMCA does is prohibit the making and distributing of burglary tools. This is indeed how the legislation was presented by its supporters. From this perspective, even the relatively draconian consequences spelled out in the DMCA's criminal penalties seem defensible.
+
+There are two distinct problems with this way of presenting what the DMCA does. First, copyrights are far from coextensive with real property. There are many uses of existing works that are permissible to all. They are treated in copyright law like walking on the sidewalk or in a public park is treated in property law, not like walking across the land of a neighbor. This is true, most obviously, for older works whose copyright has expired. This is true for certain kinds of uses of a work, like quoting it for purposes of criticism or parody. Encryption and other copy-protection techniques are not limited by the definition of legal rights. They can be used to protect all kinds of digital files--whether their contents are still covered by copyright or not, and whether the uses that users wish to make of them are privileged or not. Circumvention techniques, similarly, can be used to circumvent copy-protection mechanisms for purposes both legitimate and illegitimate. A barbed wire cutter, to borrow Boyle's metaphor, could be a burglary tool if the barbed wire is placed at the property line. However, it could equally be a tool for exercising your privilege if the private barbed wire has been drawn around public lands or across a sidewalk or highway. The DMCA prohibited all wire cutters, even though many of these technologies could be used for legal purposes. Imagine a ten-year-old girl doing her homework on the history of the Holocaust. She includes in her multimedia paper ,{[pg 416]}, a clip from Steven Spielberg's film, Schindler's List, in which a little girl in red, the only color image on an otherwise black-and-white screen, walks through the pandemonium of a deportation. In her project, the child painstakingly superimposes her own face over that of the girl in the film for the entire sequence, frame by frame. She calls the paper, "My Grandmother."
There is little question that most copyright lawyers (not retained by the owner of the movie) would say that this use would count as a "fair use," and would be privileged under the Copyright Act. There is also little question that if Schindler's List was only available in encrypted digital form, a company would have violated the DMCA if it distributed a product that enabled the girl to get around the encryption in order to use the snippet she needed, and which by traditional copyright law she was permitted to use. It is in the face of this concern about overreaching by those who employ technological protection measures that Julie Cohen argued for the "right to hack"--to circumvent code that impedes one's exercise of one's privileged uses.
+
+The second problem with the DMCA is that its definitions are broad and malleable. Simple acts like writing an academic paper on how the encryption works, or publishing a report on the Web that tells users where they can find information about how to circumvent a copy-protection mechanism, could be included in the definition of providing a circumvention device. Edward Felten is a computer scientist at Princeton. As he was preparing to publish an academic paper on encryption, he received a threatening letter from the Recording Industry Association of America (RIAA), telling him that publication of the paper constituted a violation of the DMCA. The music industry had spent substantial sums on developing encryption for digital music distribution. In order to test the system before it actually entrusted music to this wrapper, the industry issued a public challenge, inviting cryptographers to try to break the code. Felten succeeded in doing so, but did not continue to test his solutions because the industry required that, in order to continue testing, he sign a nondisclosure agreement. Felten is an academic, not a businessperson. He works to make knowledge public, not to keep it secret. He refused to sign the nondisclosure agreement, and prepared to publish his initial findings, which he had made without entering any nondisclosure agreement. As he did so, he received the RIAA's threatening letter. In response, he asked a federal district court to declare that publication of his findings was not a violation of the DMCA. The RIAA, realizing that trying to silence academic publication of a criticism of the ,{[pg 417]}, weakness of its approach to encryption was not the best litigation stance, moved to dismiss the case by promising it would never bring suit.~{ /{Felten v. Recording Indust. Assoc. of America Inc.}/, No. CV-01-2669 (D.N.J. June 26, 2001). }~
+
+Another case did not end so well for the defendant. It involved a suit by the eight Hollywood studios against a hacker magazine, 2600. The studios sought an injunction prohibiting 2600 from making available a program called DeCSS, which circumvents the copy-protection scheme used to control access to DVDs, named CSS. CSS prevents copying or any use of DVDs unauthorized by the vendor. DeCSS was written by a fifteen-year-old Norwegian named Jon Johansen, who claimed (though the district court discounted his claim) to have written it as part of an effort to create a DVD player for GNU/Linux-based machines. A copy of DeCSS, together with a story about it, was posted on the 2600 site. The industry obtained an injunction against 2600, prohibiting not only the posting of DeCSS, but also its linking to other sites that post the program--that is, telling users where they can get the program, rather than actually distributing a circumvention program. That decision may or may not have been correct on the merits. There are strong arguments in favor of the proposition that making DVDs compatible with GNU/Linux systems is a fair use. There are strong arguments that the DMCA goes much farther than it needs to in restricting speech of software programmers and Web authors, and so is invalid under the First Amendment. The court rejected these arguments.
+
+The point here is not, however, to revisit the legal correctness of that decision, but to illustrate the effects of the DMCA as an element in the institutional ecology of the logical layer. The DMCA is intended as a strong legal barrier to certain technological paths of innovation at the logical layer of the digital environment. It is intended specifically to preserve the "thing-" or "goods"-like nature of entertainment products--music and movies, in particular. As such, it is intended to, and does to some extent, shape the technological development toward treating information and culture as finished goods, rather than as the outputs of social and communications processes that blur the production-consumption distinction. It makes it more difficult for individuals and nonmarket actors to gain access to digital materials that the technology, the market, and the social practices, left unregulated, would have made readily available. It makes practices of cutting and pasting, changing and annotating existing cultural materials harder to do than the technology would have made possible. I have argued elsewhere that when Congress self-consciously makes it harder for individuals to use whatever technology is available to them, to speak as they please and to whomever ,{[pg 418]}, they please, in the interest of some public goal (in this case, preservation of Hollywood and the recording industry for the public good), it must justify its acts under the First Amendment. However, the important question is not one of U.S. constitutional law.
+
+The more general claim, true for any country that decides to enforce a DMCA-like law, is that prohibiting technologies that allow individuals to make flexible and creative uses of digital cultural materials burdens the development of the networked information economy and society. It burdens individual autonomy, the emergence of the networked public sphere and critical culture, and some of the paths available for global human development that the networked information economy makes possible. All these losses will be incurred in expectation of improvements in creativity, even though it is not at all clear that doing so would actually improve, even on a simple utilitarian calculus, the creative production of any given country or region. Passing a DMCA-type law will not by itself squelch the development of nonmarket and peer production. Indeed, many of these technological and social-economic developments emerged and have flourished after the DMCA was already in place. It does, however, represent a choice to tilt the institutional ecology in favor of industrial production and distribution of cultural packaged goods, at the expense of commons-based relations of sharing information, knowledge, and culture. Twentieth-century cultural materials provide the most immediate and important source of references and images for contemporary cultural creation. Given the relatively recent provenance of movies, recorded music, and photography, much of contemporary culture was created in these media. These basic materials for the creation of contemporary multimedia culture are, in turn, encoded in formats that cannot simply be copied by hand, as texts might be even in the teeth of technical protection measures. The capacity to copy mechanically is a necessary precondition for the capacity to quote and combine existing materials of these kinds into new cultural statements and conversational moves. 
Preserving the capacity of industrial cultural producers to maintain a hermetic seal on the use of materials to which they own copyright can be bought only at the cost of disabling the newly emerging modes of cultural production from quoting and directly building upon much of the culture of the last century.
+
+3~ The Battle over Peer-to-Peer Networks
+
+The second major institutional battle over the technical and social trajectory of Internet development has revolved around peer-to-peer (p2p) networks. I ,{[pg 419]}, offer a detailed description of it here, though not because I think its outcome will make or break the networked information economy. If any laws have that determinative a power, they are the Fritz chip and the DMCA. The peer-to-peer legal battle does, however, offer an excellent case study of just how difficult it is to evaluate the effects of institutional ecology on technology, economic organization, and social practice.
+
+Peer-to-peer technologies as a global phenomenon emerged from Napster and its use by tens of millions of users around the globe for unauthorized sharing of music files. In the six years since their introduction, p2p networks have developed robust and impressive technical capabilities. They have been adopted by more than one hundred million users, and are increasingly applied to uses well beyond music sharing. These developments have occurred despite a systematic and aggressive campaign of litigation and criminal enforcement in a number of national systems against both developers and users. Technically, p2p networks are algorithms that run on top of the Internet and allow users to connect directly from one end user's machine to another. In theory, that is how the whole Internet works--or at least how it worked when there were a small number of computers attached to it. In practice, most users connect through an Internet service provider, and most content available for access on the Internet was available on a server owned and operated by someone distinct from its users. In the late 1990s, there were rudimentary utilities that allowed one user to access information stored on the computer of another, but no widely used utility allowed large numbers of individuals to search each other's hard drives and share data directly from one user to another. Around 1998-1999, early Internet music distribution models, like MP3.com, therefore provided a centralized distribution point for music. This made them highly vulnerable to legal attack. Shawn Fanning, then eighteen years old, was apparently looking for ways to do what teenagers always do--share their music with friends--in a way that would not involve a central point of storing and copying. He developed Napster--the first major, widely adopted p2p technology. Unlike MP3.com, Napster let users connect their computers directly--one person could download a song stored on the computer of another without mediation. 
All that the Napster site itself did, in addition to providing the end-user software, was to provide a centralized directory of which songs resided on which machine. There is little disagreement in the literature that it is an infringement under U.S. copyright law for any given user to allow others to duplicate copyrighted music from his or her computer to theirs. The centralizing role of Napster ,{[pg 420]}, in facilitating these exchanges, alongside a number of ill-considered statements by some of its principals, were enough to render the company liable for contributory copyright infringement.
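The division of labor the text describes--a central directory that only answers "who has this song?", with the copy itself passing directly between users--can be sketched in a few lines. This is a hedged, illustrative model, not Napster's actual protocol; all class, method, and peer names here are assumptions.

```python
# Minimal sketch of a Napster-style architecture: a central directory maps
# song titles to the peers that hold them; the transfer itself happens
# peer to peer, so no copy ever passes through the directory.

class Directory:
    """The centralized piece: an index of which songs reside on which machine."""
    def __init__(self):
        self.index = {}  # song title -> set of peer addresses

    def register(self, peer, songs):
        for song in songs:
            self.index.setdefault(song, set()).add(peer)

    def lookup(self, song):
        return sorted(self.index.get(song, set()))

class Peer:
    """An end user's machine: it both serves and fetches files."""
    def __init__(self, address, songs):
        self.address = address
        self.songs = set(songs)

    def download(self, directory, song):
        # Ask the directory *where* the song is, then fetch it directly
        # from the other peer (simulated here by copying the title).
        for source in directory.lookup(song):
            if source != self.address:
                self.songs.add(song)
                return source
        return None

directory = Directory()
alice = Peer("alice:6699", ["song-a", "song-b"])
bob = Peer("bob:6699", ["song-c"])
directory.register(alice.address, alice.songs)
directory.register(bob.address, bob.songs)

source = bob.download(directory, "song-a")
print(source)                  # alice:6699
print("song-a" in bob.songs)   # True
```

The legal exposure follows the architecture: the `Directory` object is the single component a court can order shut down, which is why later designs pushed even the index out to the peers.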
+
+The genie of p2p technology and the social practice of sharing music, however, were already out of the bottle. The story of the following few years, to the extent that one can tell a history of the present and the recent past, offers two core insights. First, it shows how institutional design can be a battleground over the conditions of cultural production in the digital environment. Second, it exposes the limits of the extent to which the institutional ecology can determine the ultimate structure of behavior at a moment of significant and rapid technological and social perturbation. Napster's judicial closure provided no real respite for the recording industry. As Napster was winding down, Gnutella, a free software alternative, had already begun to replace it. Gnutella did not depend on any centralized component, not even to facilitate search. This meant that there was no central provider. There was no firm against which to bring action. Even if there were, it would be impossible to "shut down" use of the program. Gnutella was a freestanding program that individual users could install. Once installed, its users could connect to anyone else who had installed the program, without passing through any choke point. There was no central server to shut down. Gnutella had some technical imperfections, but these were soon overcome by other implementations of p2p. The most successful improvement over Gnutella was the FastTrack architecture, now used by Kazaa, Grokster, and other applications, including some free software applications. It improves on the search capabilities of Gnutella by designating some users as "supernodes," which store information about what songs are available in their "neighborhood." This avoids Gnutella's primary weakness, the relatively high degree of network overhead traffic. The supernodes operate on an ad hoc basis. They change based on whose computer is available with enough storage and bandwidth. 
They too, therefore, provide no litigation target. Other technologies have developed to speed up or make more robust the distribution of files, including BitTorrent, eDonkey and its free-software relative eMule, and many others. Within less than two years of Napster's closure, more people were using these various platforms to share files than Napster had users at its height. Some of these new firms found themselves again under legal assault--both in the United States and abroad.
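The supernode idea--answering searches from a few well-provisioned peers' indexes instead of flooding every node, as Gnutella did--can be sketched as follows. This is an illustrative simplification of the FastTrack approach, not its real protocol; the class and peer names are assumptions.

```python
# Hedged sketch of FastTrack-style search: ordinary peers report their file
# lists to a nearby supernode; a query consults the supernodes' indexes
# rather than flooding every peer (Gnutella's main source of overhead).

class Supernode:
    """An ordinary user's machine temporarily elected to index its neighborhood."""
    def __init__(self):
        self.neighborhood = {}  # peer address -> set of shared files

    def register(self, peer, files):
        self.neighborhood[peer] = set(files)

    def search(self, filename):
        return [peer for peer, files in self.neighborhood.items()
                if filename in files]

def query(supernodes, filename):
    """A peer asks its supernode, which consults the other supernodes."""
    hits = []
    for sn in supernodes:
        hits.extend(sn.search(filename))
    return hits

sn1, sn2 = Supernode(), Supernode()
sn1.register("peer-1", ["a.mp3", "b.mp3"])
sn1.register("peer-2", ["c.mp3"])
sn2.register("peer-3", ["a.mp3"])

print(query([sn1, sn2], "a.mp3"))  # ['peer-1', 'peer-3']
```

Because any sufficiently capable peer can become a supernode and step down again, the index exists only as a shifting overlay on users' own machines--which is precisely why, as the text notes, there is no stable target for litigation.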
+
+As the technologies grew and developed, and as the legal attacks increased, the basic problem presented by the litigation against technology manufacturers ,{[pg 421]}, became evident. Peer-to-peer techniques can be used for a wide range of uses, only some of which are illegal. At the simplest level, they can be used to distribute music that an increasing number of bands release freely. These bands hope to get exposure that they can parlay into concert performances. As recorded music from the 1950s begins to fall into the public domain in Europe and Australia, golden oldies become another legitimate reason to use p2p technologies. More important, p2p systems are being adapted to different kinds of uses. Chapter 7 discusses how FreeNet is being used to disseminate subversive documents, using the persistence and robustness of p2p networks to evade detection and suppression by authoritarian regimes. BitTorrent was initially developed to deal with the large file transfers required for free software distributions. BitTorrent and eDonkey were both used by the Swarthmore students when their college shut down their Internet connection in response to Diebold's letter threatening action under the service provider liability provisions of the DMCA. The founders of KaZaa have begun to offer an Internet telephony utility, Skype, which allows users to make phone calls from one computer to another for free, and from their computer to the telephone network for a small fee. Skype is a p2p technology.
+
+In other words, p2p is developing as a general approach toward producing distributed data storage and retrieval systems, just as open wireless networks and distributed computing are emerging to take advantage of personal devices to produce distributed communications and computation systems, respectively. As the social and technological uses of p2p technologies grow and diversify, the legal assault on all p2p developers becomes less sustainable--both as a legal matter and as a social-technical matter. KaZaa was sued in the Netherlands, and moved to Australia. It was later subject to actions in Australia, but by that time, the Dutch courts had found the company not to be liable to the music labels. Grokster, a firm based in the United States, was initially found to have offered a sufficiently diverse set of capabilities, beyond merely facilitating copyright infringement, that the Court of Appeals for the Ninth Circuit refused to find it liable simply for making and distributing its software. The Supreme Court reversed that holding, however, returning the case to the lower courts to determine, factually, whether Grokster had actual intent to facilitate illegal copying.~{ /{Metro-Goldwyn-Mayer v. Grokster, Ltd.}/ (decided June 27, 2005). }~ Even if Grokster ultimately loses, the FastTrack network architecture will not disappear; clients (that is, end-user software) will continue to exist, including free software clients. Perhaps it will be harder to raise money for businesses located within the United States ,{[pg 422]}, to operate in this technological space, because the new rule announced by the Supreme Court in Grokster raises the risk of litigation for innovators in the p2p space. However, as with encryption regulation in the mid-1990s, it is not clear that the United States can unilaterally prevent the development of technology for which there is worldwide demand and globally accessible development talent.
+
+How important more generally are these legal battles to the organization of cultural production in the networked environment? There are two components to the answer: The first component considers the likely effect of the legal battles on the development and adoption of the technology and the social practice of promiscuous copying. In this domain, law seems unlikely to prevent the continued development of p2p technologies. It has, however, had two opposite results. First, it has affected the path of the technological evolution in a way that is contrary to the industry interests but consistent with increasing distribution of the core functions of the logical layer. Second, it seems to have dampened somewhat the social practice of file sharing. The second component assumes that a range of p2p technologies will continue to be widely adopted, and that some significant amount of sharing will continue to be practiced. The question then becomes what effect this will have on the primary cultural industries that have fought this technology-- movies and recorded music. Within this new context, music will likely change more radically than movies, and the primary effect will be on the accreditation function--how music is recognized and adopted by fans. Film, if it is substantially affected, will likely be affected largely by a shift in tastes.
+
+MP3.com was the first major music distribution site shut down by litigation. From the industry's perspective, it should have represented an entirely unthreatening business model. Users paid a subscription fee, in exchange for which they were allowed to download music. There were various quirks and kinks in this model that made it unattractive to the music industry at the time: the industry did not control this major site, and therefore had to share the rents from the music; and more important, there was no effective control over the music files once downloaded. However, from the perspective of 2005, MP3.com was a vastly more manageable technology for the sound recording business model than a free software file-sharing client. MP3.com was a single site, with a corporate owner that could be (and was) held responsible. It controlled which user had access to what files--by requiring each user to insert a CD into the computer to prove that he or she had bought the CD--so that usage could in principle be monitored and, if ,{[pg 423]}, desired, compensation could be tied to usage. It did not fundamentally change the social practice of choosing music. It provided something that was more like a music-on-demand jukebox than a point of music sharing. As a legal matter, MP3.com's infringement was centered on the fact that it stored and delivered the music from this central server instead of from the licensed individual copies. In response to the shutdown of MP3.com, Napster redesigned the role of the centralized node, leaving storage in the hands of users and keeping only the directory and search functions centralized. When Napster was shut down, Gnutella and later FastTrack further decentralized the system, offering a fully decentralized, ad hoc reconfigurable cataloging and search function. Because these algorithms represent an architecture and a protocol-based network, not a particular program, they are usable in many different implementations. 
This includes free software programs like MLDonkey--a nascent file-sharing system that aims to run simultaneously across most of the popular file-sharing networks, including FastTrack, BitTorrent, and Overnet, the eDonkey network. These programs are now written by, and available from, many different jurisdictions. There is no central point of control over their distribution. There is no central point through which to measure and charge for their use. They are, from a technical perspective, much more resilient to litigation attack, and much less friendly to various possible models of charging for downloads or usage. From a technological perspective, then, the litigation backfired. It created a network that is less susceptible to integration into an industrial model of music distribution based on royalty payments per user or use.
+
+It is harder to gauge, however, whether the litigation was a success or a failure from a social-practice point of view. There have been conflicting reports on the effects of file sharing and the litigation on CD sales. The recording industry claimed that CD sales were down because of file sharing, but more independent academic studies suggested that CD sales were not independently affected by file sharing, as opposed to the general economic downturn.~{ See Felix Oberholzer and Koleman Strumpf, "The Effect of File Sharing on Record Sales" (working paper), http://www.unc.edu/cigar/papers/FileSharing_March2004.pdf. }~ User survey data from the Pew Internet and American Life Project suggests that the litigation strategy against individual users has dampened the use of file sharing, though file sharing is still substantially more common among users than paying for files from the newly emerging pay-per-download authorized services. In mid-2003, the Pew study found that 29 percent of Internet users surveyed said they had downloaded music files, identical to the percentage of users who had downloaded music in the first quarter of 2001, the heyday of Napster. Twenty-one percent responded that ,{[pg 424]}, they allow others to download from their computer.~{ Mary Madden and Amanda Lenhart, "Music Downloading, File-Sharing, and Copyright" (Pew, July 2003), http://www.pewinternet.org/pdfs/PIP_Copyright_Memo.pdf/. }~ This meant that somewhere between twenty-six and thirty-five million adults in the United States alone were sharing music files in mid-2003, when the recording industry began to sue individual users. Of these, fully two-thirds expressly stated that they did not care whether the files they downloaded were or were not copyrighted. By the end of 2003, five months after the industry began to sue individuals, the number of respondents who admitted to downloading music dropped by half. 
During the next few months, these numbers increased slightly to twenty-three million adults, remaining below the mid-2003 numbers in absolute terms and more so in terms of percentage of Internet users. Of those who had at one point downloaded, but had stopped, roughly a third said that the threat of suit was the reason they had stopped file sharing.~{ Lee Rainie and Mary Madden, "The State of Music Downloading and File-Sharing Online" (Pew, April 2004), http://www.pewinternet.org/pdfs/PIP_Filesharing_April_04.pdf. }~ During this same period, use of paid online music download services, like iTunes, rose to about 7 percent of Internet users. Sharing of all kinds of media files--music, movies, and games--was at 23 percent of adult Internet users. These numbers do indeed suggest that, in the aggregate, music downloading is reported somewhat less often than it was in the past. It is hard to tell how much of this reduction is due to actual behavioral change as compared to an unwillingness to self-report on behavior that could subject one to litigation. It is impossible to tell how much of an effect the litigation has had specifically on sharing by younger people--teenagers and college students--who make up a large portion of both CD buyers and file sharers. Nonetheless, the reduction in the total number of self-reported users and the relatively steady percentage of total Internet users who share files of various kinds suggest that the litigation does seem to have had a moderating effect on file sharing as a social practice. It has not, however, prevented file sharing from continuing to be a major behavioral pattern among one-fifth to one-quarter of Internet users, and likely a much higher proportion in the most relevant populations from the perspective of the music and movie industries--teenagers and young adults.
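The twenty-six-to-thirty-five-million estimate follows from applying the two Pew percentages to the adult U.S. Internet population. A back-of-the-envelope check, where the population base of roughly 120 million adult users in mid-2003 is an assumption of mine rather than a figure from the text:

```python
# Rough check of the survey arithmetic. Only the percentages come from the
# Pew figures quoted in the text; the population base is an assumed
# round number for mid-2003.
adult_internet_users = 120_000_000  # assumption

downloaders = 0.29 * adult_internet_users  # said they had downloaded music
sharers = 0.21 * adult_internet_users      # let others download from them

print(round(sharers / 1e6))      # 25 (million)
print(round(downloaders / 1e6))  # 35 (million)
```

With that assumed base, the two percentages bracket roughly 25 to 35 million people, consistent with the range stated in the text.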
+
+From the perspective of understanding the effects of institutional ecology, then, the still-raging battle over peer-to-peer networks presents an ambiguous picture. One can speculate with some degree of confidence that, had Napster not been stopped by litigation, file sharing would have been a much wider social practice than it is today. The application was extremely easy to use; it offered a single network for all file-sharing users, thereby offering an extremely diverse and universal content distribution network; and for a brief period, it was a cultural icon and a seemingly acceptable social practice. The ,{[pg 425]}, period of regrouping that followed its closure; the imperfect interfaces of early Gnutella clients; the relative fragmentation of file sharing into a number of networks, each with less complete coverage of content than Napster had offered; and the fear of personal litigation risk are likely to have limited adoption. On the other hand, in the longer run, the technological developments have created platforms that are less compatible with the industrial model, and which would be harder to integrate into a stable settlement for music distribution in the digital environment.
+
+Prediction aside, it is not immediately obvious why peer-to-peer networks contribute to the kinds of nonmarket production and creativity that I have focused on as the core of the networked information economy. At first blush, they seem simply to be mechanisms for fans to get industrially produced recorded music without paying musicians. This has little to do with democratization of creativity. To see why p2p networks nonetheless are a part of the development of a more attractive cultural production system, and how they can therefore affect the industrial organization of cultural production, we can look first at music, and then, independently, at movies. The industrial structure of each is different, and the likely effects of p2p networks are different in each case.
+
+Recorded music began with the phonograph--a packaged good intended primarily for home consumption. The industry that grew around the ability to stamp and distribute records divided the revenue structure such that artists have been paid primarily from live public performances and merchandizing. Very few musicians, including successful recording artists, make money from recording royalties. The recording industry takes almost all of the revenues from record and CD sales, and provides primarily promotion and distribution. It does not bear the capital cost of the initial musical creation; artists do. With the declining cost of computation, that cost has become relatively low, often simply a computer owned by artists themselves, much as they own their instruments. Because of this industrial structure, peer-to-peer networks are a genuine threat to displacing the entire recording industry, while leaving musicians, if not entirely unaffected, relatively insulated from the change and perhaps mildly better off. Just as the recording industry stamps CDs, promotes them on radio stations, and places them on distribution chain shelves, p2p networks produce the physical and informational aspects of a music distribution system. However, p2p networks do so collaboratively, by sharing the capacity of their computers, hard drives, and network connections. Filtering and accreditation, or "promotion," are produced on the ,{[pg 426]}, model that Eben Moglen called "anarchist distribution." Jane's friends and friends of her friends are more likely to know exactly what music would make her happy than are recording executives trying to predict which song to place, on which station and which shelf, to expose her to exactly the music she is most likely to buy in a context where she would buy it. File-sharing systems produce distribution and "promotion" of music in a social-sharing modality. Alongside peer-produced music reviews, they could entirely supplant the role of the recording industry.
+
+Musicians and songwriters seem to be relatively insulated from the effects of p2p networks, and on balance, are probably affected positively. The most comprehensive survey data available, from mid-2004, shows that 35 percent of musicians and songwriters said that free downloads have helped their careers. Only 5 percent said it has hurt them. Thirty percent said it increased attendance at concerts, 21 percent that it helped them sell CDs and other merchandise, and 19 percent that it helped them gain radio playing time. These results are consistent with what one would expect given the revenue structure of the industry, although the study did not separate answers out based on whether the respondent was able to live entirely or primarily on their music, which represented only 16 percent of the respondents to the survey. In all, it appears that much of the actual flow of revenue to artists-- from performances and other sources--is stable. This is likely to remain true even if the CD market were entirely displaced by peer-to-peer distribution. Musicians will still be able to play for their dinner, at least not significantly less so than they can today. Perhaps there will be fewer millionaires. Perhaps fewer mediocre musicians with attractive physiques will be sold as "geniuses," and more talented musicians will be heard than otherwise would have, and will as a result be able to get paying gigs instead of waiting tables or "getting a job." But it would be silly to think that music, a cultural form without which no human society has existed, will cease to be in our world if we abandon the industrial form it took for the blink of a historical eye that was the twentieth century. Music was not born with the phonograph, nor will it die with the peer-to-peer network. The terms of the debate, then, are about cultural policy; perhaps about industrial policy. Will we get the kind of music we want in this system, whoever "we" are? 
Will American recording companies continue to get the export revenue streams they do? Will artists be able to live from making music? Some of these arguments are serious. Some are but a tempest in a monopoly-rent teapot. It is clear that a technological change has rendered obsolete a particular mode of distributing ,{[pg 427]}, information and culture. Distribution, once the sole domain of market-based firms, now can be produced by decentralized networks of users, sharing instantiations of music they deem attractive with others, using equipment they own and generic network connections. This distribution network, in turn, allows a much more diverse range of musicians to reach much more finely grained audiences than were optimal for industrial production and distribution of mechanical instantiations of music in vinyl or CD formats. The legal battles reflect an effort by an incumbent industry to preserve its very lucrative business model. The industry has, to this point, delayed the transition to peer-based distribution, but it is unclear for how long or to what extent it will be successful in preventing the gradual transition to user-based distribution.
+
+The movie industry has a different industrial structure and likely a different trajectory in its relations to p2p networks. First and foremost, movies began as a relatively high capital cost experience good. Making a movie, as opposed to writing a song, was something that required a studio and a large workforce. It could not be done by a musician with a guitar or a piano. Furthermore, movies were, throughout most of their history, collective experience goods. They were a medium for public performance experienced outside of the home, in a social context. With the introduction of television, it was easy to adapt the movie revenue structure by delaying release of films to television viewing until after demand for the movie at the theater declined, as well as to develop the industry's capabilities into a new line of business--television production. However, theatrical release continued to be the major source of revenue. When video came along, the movie industry cried murder in the Sony Betamax case, but actually found it quite easy to work videocassettes into yet another release window, like television, and another medium, the made-for-video movie. Digital distribution affects the distribution of cultural artifacts as packaged goods for home consumption. It does not affect the social experience of going out to the movies. At most, it could affect the consumption of the twenty-year-old mode of movie distribution: videos and DVDs. As recently as the year 2000, when the Hollywood studios were litigating the DeCSS case, they represented to the court that home video sales were roughly 40 percent of revenue, a number consistent with other reports.~{ See 111 F.Supp.2d at 310, fns. 69-70; PBS Frontline report, http://www.pbs.org/wgbh/pages/frontline/shows/hollywood/business/windows.html. 
}~ The remainder, composed of theatrical release revenues and various television releases, remains reasonably unthreatened as a set of modes of revenue capture to sustain the high-production value, high-cost movies that typify Hollywood. Forty percent is undoubtedly a large chunk, but unlike ,{[pg 428]}, the recording industry, which began with individually owned recordings, the movie industry preexisted videocassettes and DVDs, and is likely to outlive them even if p2p networks were to eliminate that market entirely, which is doubtful.
+
+The harder and more interesting question is whether cheap high-quality digital video-capture and editing technologies combined with p2p networks for efficient distribution could make film a more diverse medium than it is now. The hypothetical promise of p2p networks like BitTorrent is that they could offer very robust and efficient distribution networks for films outside the mainstream industry. Unlike garage bands and small-scale music productions, however, this promise is as yet speculative. We do not invest in public education for film creation, as we do in the teaching of writing. Most of the raw materials out of which a culture of digital capture and amateur editing could develop are themselves under copyright, a subject we return to when considering the content layer. There are some early efforts, like atomfilms.com, at short movie distribution. The technological capabilities are there. It is possible that if films older than thirty or even fifty years were released into the public domain, they would form the raw material out of which a new cultural production practice would form. If it did, p2p networks would likely play an important role in their distribution. However, for now, although the sound recording and movie industries stand shoulder to shoulder in the lobbying efforts, their circumstances and trajectories in relation to file sharing are likely quite different.
+
+The battles over p2p and the DMCA offer some insight into the potential, but also the limits, of tweaking the institutional ecology. The ambition of the industrial cultural producers in both cases was significant. They sought to deploy law to shape emerging technologies and social practices to make sure that the business model they had adopted for the technologies of film and sound recording continued to work in the digital environment. Doing so effectively would require substantial elimination of certain lines of innovation, like certain kinds of decryption and p2p networks. It would require outlawing behavior widely adopted by people around the world--social sharing of most things that they can easily share--which, in the case of music alone, tens of millions of people practice. The belief that all this could be changed in a globally interconnected network through the use of law was perhaps naïve. Nonetheless, the legal efforts have had some impact on social practices and on the ready availability of materials ,{[pg 429]}, for free use. The DMCA may not have made any single copyright protection mechanism hold up to the scrutiny of hackers and crackers around the Internet. However, it has prevented circumvention devices from being integrated into mainstream platforms, like the Windows operating system or some of the main antivirus programs, which would have been "natural" places for them to appear in consumer markets. The p2p litigation did not eliminate the p2p networks, but it does seem to have successfully dampened the social practice of file sharing. One can take quite different views of these effects from a policy perspective. However, it is clear that they are self-conscious efforts to tweak the institutional ecology of the digital environment in order to dampen the most direct threats it poses for the twentieth-century industrial model of cultural production. 
In the case of the DMCA, this is done at the direct cost of making it substantially harder for users to make creative use of the existing stock of audiovisual materials from the twentieth century--materials that are absolutely central to our cultural self-understanding at the beginning of the twenty-first century. In the case of p2p networks, the cost to nonmarket production is more indirect, and may vary across different cultural forms. The most important long-term effect of the pressure that this litigation has put on technology to develop decentralized search and retrieval systems may, ultimately and ironically, be to improve the efficiency of radically decentralized cultural production and distribution, and make decentralized production more, rather than less, robust to the vicissitudes of institutional ecology.
+
+3~ The Domain Name System: From Public Trust to the Fetishism of Mnemonics
+
+Not all battles over the role of property-like arrangements at the logical layer originate from Hollywood and the recording industry. One of the major battles outside of the ambit of the copyright industries concerned the allocation and ownership of domain names. At stake was the degree to which brand name ownership in the material world could be leveraged into attention on the Internet. Domain names are alphanumeric mnemonics used to represent actual Internet addresses of computers connected to the network. While 130.132.51.8 is hard for human beings to remember, www.yale.edu is easier. The two strings have identical meaning to any computer connected to the Internet--they refer to a server that responds to World Wide Web queries for Yale University's main site. Every computer connected to the Internet has a unique address, either permanent or assigned by a provider ,{[pg 430]}, for the session. That requires that someone distribute addresses--both numeric and mnemonic. Until 1992, names and numbers were assigned on a purely first-come, first-served basis by Jon Postel, one of the very first developers of the Internet, under U.S. government contract. Postel also ran a computer, called the root server, to which all computers would turn to ask the numeric address of letters.mnemonic.edu, so they could translate what the human operator remembered as the address into one their machine could use. Postel called this system "the Internet Assigned Numbers Authority, IANA," whose motto he set as, "Dedicated to preserving the central coordinating functions of the global Internet for the public good." In 1992, Postel got tired of this coordinating job, and the government contracted it to a private firm called Network Solutions, Inc., or NSI. As the number of applications grew, and as the administration sought to make this system pay for itself, NSI was allowed in 1995 to begin to charge fees for assigning names and numbers. 
At about the same time, widespread adoption of a graphical browser made using the World Wide Web radically simpler and more intuitive to the uninitiated. These two developments brought two forces to bear on the domain name issue--each with a very different origin and intent. The first force consisted of the engineers who had created and developed the Internet, led by Postel, who saw the domain name space as a public trust and resisted its commercialization by NSI. The second force consisted of trademark owners and their lawyers, who suddenly realized the potential for using control over domain names to extend the value of their brand names to a new domain of trade--e-commerce. These two forces placed the U.S. government under pressure to do two things: (1) release the monopoly that NSI--a for-profit corporation--had on the domain name space, and (2) find an efficient means of allowing trademark owners to control the use of alphanumeric strings used in their trademarks as domain names. Postel initially tried to "take back the root" by asking various regional domain name servers to point to his computer, instead of to the one maintained by NSI in Virginia. This caused uproar in the government, and Postel was accused of attacking and hijacking the Internet! His stature and passion, however, placed significant weight on the side of keeping the naming system as an open public trust. That position came to an abrupt end with his death in 1998. By late 1996, a self-appointed International Ad Hoc Committee (IAHC) was formed, with the blessing of the Internet Society (ISOC), a professional membership society for individuals and organizations involved in Internet planning. IAHC's membership was about half intellectual property ,{[pg 431]}, lawyers and half engineers. In February 1997, IAHC came out with a document called the gTLD-MoU (generic top-level domain name memorandum of understanding).
Although the product of a small group, the gTLD-MoU claimed to speak for "The Internet Community." Although it involved no governments, it was deposited "for signature" with the International Telecommunications Union (ITU). Dutifully, some 226 organizations--Internet services companies, telecommunications providers, consulting firms, and a few chapters of the ISOC--signed on. Section 2 of the gTLD-MoU, announcing its principles, reveals the driving forces of the project. While it begins with the announcement that the top-level domain space "is a public resource and is subject to the public trust," it quickly commits to the principle that "the current and future Internet name space stakeholders can benefit most from a self-regulatory and market-oriented approach to Internet domain name registration services." This results in two policy principles: (1) commercial competition in domain name registration, by releasing the monopoly NSI had, and (2) protection of trademarks in the alphanumeric strings that make up the second-level domain names. The final, internationalizing component of the effort--represented by the interests of the WIPO and ITU bureaucracies--was attained by creating a Council of Registrars as a Swiss corporation, and by creating special relationships with the ITU and the WIPO.
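The translation Postel's root server coordinated--from memorable mnemonic to numeric address--is the same lookup every networked program still performs. A minimal sketch in Python (the language choice and the use of "localhost" as the example name are mine, not the text's):

```python
import socket

# A domain name is only a mnemonic; the resolver walks the DNS
# hierarchy, anchored at the root servers, to translate it into
# the numeric address that machines actually use.
def resolve(name: str) -> str:
    """Return the IPv4 address currently mapped to a hostname."""
    return socket.gethostbyname(name)

# "localhost" resolves without consulting the wider network.
print(resolve("localhost"))  # 127.0.0.1
```

Whoever answers this lookup authoritatively--Postel, NSI, or later ICANN--controls, in practice, who is found at a given name.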
+
+None of this institutional edifice could be built without the U.S. government. In early 1998, the administration responded to this ferment with a green paper, seeking the creation of a private, nonprofit corporation registered in the United States to take on management of the domain name issue. By its own terms, the green paper responded to concerns of the domain name registration monopoly and of trademark issues in domain names, first and foremost, and to some extent to increasing clamor from abroad for a voice in Internet governance. Despite a cool response from the European Union, the U.S. government proceeded to finalize a white paper and authorize the creation of its preferred model--the private, nonprofit corporation. Thus was born the Internet Corporation for Assigned Names and Numbers (ICANN) as a private, nonprofit California corporation. Over time, it succeeded in large measure in loosening NSI's monopoly on domain name registration. Its efforts on the trademark side effectively created a global preemptive property right. Following an invitation in the U.S. government's white paper for ICANN to study the proper approach to trademark enforcement in the domain name space, ICANN and WIPO initiated a process ,{[pg 432]}, that began in July 1998 and ended in April 1999. As Froomkin, who served as a public-interest expert in this process, describes it, the process feigned transparency and open discourse, but was in actuality an opaque, staff-driven drafting effort.~{ A. M. Froomkin, "Semi-Private International Rulemaking: Lessons Learned from the WIPO Domain Name Process," http://www.personal.law.miami.edu/froomkin/articles/TPRC99.pdf. }~ The result was a very strong global property right available to trademark owners in the alphanumeric strings that make up domain names. This was supported by binding arbitration. Because it controlled the root server, ICANN could enforce its arbitration decisions worldwide.
If ICANN decided that, say, the McDonald's fast-food corporation, and not a hypothetical farmer named Old McDonald, owned www.mcdonalds.com, all computers in the world would be referred to the corporate site, not the personal one. Not entirely satisfied with the degree to which the ICANN-WIPO process protected their trademarks, some of the major trademark owners lobbied the U.S. Congress to pass an even stricter law. This law would make it easier for the owners of commercial brand names to obtain domain names that include their brand, whether or not there was any probability that users would actually confuse sites like the hypothetical Old McDonald's with that of the fast-food chain.
+
+The degree to which the increased appropriation of the domain name space is important is a function of the extent to which the cultural practice of using human memory to find information will continue to be widespread. The underlying assumption of the value of trademarked alphanumeric strings as second-level domain names is that users will approach electronic commerce by typing in "www.brandname.com" as their standard way of relating to information on the Net. This is far from obviously the most efficient solution. In physical space, where collecting comparative information on price, quality, and so on is very costly, brand names serve an important informational role. In cyberspace, where software can compare prices, and product-review services that link to vendors are easy to set up and cheap to implement, the brand name becomes an encumbrance on good information, not its facilitator. If users are limited, for instance, to hunting around as to whether information they seek is on www.brandname.com, www.brand_name.com, or www.brand.net, name recognition from the real world becomes a bottleneck to e-commerce. And this is precisely the reason why owners of established marks sought to assure early adoption of trademarks in domain names--it assures users that they can, in fact, find their accustomed products on the Web without having to go through search algorithms that might expose them to comparison with pesky start-up competitors. As search engines become better and more tightly integrated into the basic ,{[pg 433]}, browser functionality, the idea that a user who wants to buy from Delta Airlines would simply type "www.delta.com," as opposed to plugging "delta airlines" into an integrated search toolbar and getting the airline as the first hit, becomes quaint. However, quaint inefficient cultural practices can persist. And if this indeed is one that will persist, then the contours of the property right matter.
As the law has developed over the past few years, ownership of a trademark that includes a certain alphanumeric string almost always gives the owner of the trademark a preemptive right in using the letters and numbers incorporated in that mark as a domain name.
+
+Domain name disputes have fallen into three main categories. There are cases of simple arbitrage. Individuals who predicted that having a domain name with the brand name in it would be valuable registered such domain names aplenty, and waited for the flat-footed brand name owners to pay them to hand over the domain. There is nothing more inefficient about this form of arbitrage than any other. The arbitrageurs "reserved" commercially valuable names so they could be auctioned, rather than taken up by someone who might have a non-negotiable interest in the name--for example, someone whose personal name it was. These arbitrageurs were nonetheless branded pirates and hijackers, and the consistent result of all the cases on domain names has been that the corporate owners of brand names receive the domain names associated with their brands without having to pay the arbitrageurs. Indeed, the arbitrageurs were subject to damage judgments. A second kind of case involved bona fide holders of domain names that made sense for them, but were nonetheless shared with a famous brand name. One child nicknamed "Pokey" registered "pokey.org," and his battle to keep that name against a toy manufacturer that sold a toy called "pokey" became a poster child for this type of case. Results here have been more mixed, depending on how sympathetic the early registrant was. The third type of case--in many senses the most important from the perspective of freedom to participate in the networked environment not merely as a consumer, but as a producer--involves those who use brand names to draw attention to the fact that they are attacking the owner of the brand. One well-known example occurred when Verizon Wireless was launched. The same hacker magazine involved in the DeCSS case, 2600, purchased the domain name "verizonreallysucks.com" to poke fun at Verizon.
In response to a letter demanding that it give up the domain name, the magazine purchased the domain name "VerizonShouldSpendMoreTimeFixingItsNetworkAndLessMoneyOnLawyers.com." These types of cases have again met with varying ,{[pg 434]}, degrees of sympathy from courts and arbitrators under the ICANN process, although it is fairly obvious that using a brand name in order to mock and criticize its owner, and the cultural meaning it tries to attach to its mark, is at the very core of fair use, cultural criticism, and free expression.
+
+The point here is not to argue for one type of answer or another in terms of trademark law, constitutional law, or the logic of ICANN. It is to identify points of pressure where the drive to create proprietary rights is creating points of control over the flow of information and the freedom to make meaning in the networked environment. The domain name issue was seen by many as momentous when it was new. ICANN has drawn both yearnings and fears--as a potential source of democratic governance for the Internet, or as a platform for U.S. hegemony. I suspect that neither of these will turn out to be true. The importance of property rights in domain names is directly based on the search practices of users. Search engines, directories, review sites, and referrals through links play a large role in enabling users to find information they are interested in. Control over the domain name space is unlikely to provide a real bottleneck that will prevent commercial competitors and individual speakers alike from drawing attention to their competition or criticism. However, the battle is indicative of the efforts to use proprietary rights in a particular element of the institutional ecology of the logical layer--trademarks in domain names--to tilt the environment in favor of the owners of famous brand names, and against individuals, noncommercial actors, and smaller, less-known competitors.
+
+3~ The Browser Wars
+
+A much more fundamental battle over the logical layer has occurred in the browser wars. Here, the "institutional" component is not formal institutions, like laws or regulations, but technical practice institutions--the standards for Web site design. Unlike on the network protocol side, the device side of the logical layer--the software running personal computers--was thoroughly property-based by the mid-1990s. Microsoft's dominance in desktop operating systems was well established, and there was a strong presence of other software publishers in consumer applications, pulling the logical layer toward a proprietary model. In 1995, Microsoft came to perceive the Internet, and particularly the World Wide Web, as a threat to its control over the desktop. The user-side Web browser threatened to make the desktop a more open environment that would undermine its monopoly. Since that time, the two pulls--the openness of the nonproprietary network and the closed nature ,{[pg 435]}, of the desktop--have engaged in a fairly energetic tug-of-war over the digital environment. This push-me-pull-you game is played out both in the domain of market share, where Microsoft has been immensely successful, and in the domain of standard setting, where it has been only moderately successful. In market share, the story is well known and has been well documented in the Microsoft antitrust litigation. Part of the reason that it is so hard for a new operating system to compete with Microsoft's is that application developers write first, and sometimes only, for the already-dominant operating system. A firm investing millions of dollars in developing a new piece of photo-editing software will usually choose to write it so that it works with the operating system that has two hundred million users, not the one that has only fifteen million users.
Microsoft feared that Netscape's browser, dominant in the mid-1990s, would come to be a universal translator among applications--that developers could write their applications to run on the browser, and the browser would handle translation across different operating systems. If that were to happen, Microsoft's operating system would have to compete on intrinsic quality. Windows would lose the boost of the felicitous feedback effect, whereby more users mean more applications, and this greater number of applications in turn draws more new users, and so forth. To prevent this eventuality, Microsoft engaged in a series of practices, ultimately found to have violated the antitrust laws, aimed at getting a dominant majority of Internet users to adopt Microsoft's Internet Explorer (IE). Illegal or not, these practices succeeded in making IE the dominant browser, overtaking the original market leader, Netscape, within a few years. By the time the antitrust case was completed, Netscape had turned browser development over to the open-source development community, but under licensing conditions sufficiently vague that the project generated little early engagement. Only around 2001-2002 did the Mozilla browser development project get sufficient independence and security for developers to begin to contribute energetically. It was only in late 2004 and early 2005 that Mozilla Firefox became the first major release of a free software browser that showed promise of capturing some user share back from IE.
+
+Microsoft's dominance over the operating system and browser has not, as a practical matter, resulted in tight control over information flow and use on the Internet. This is so for three reasons. First, the TCP/IP protocol is more fundamental to Internet communications. It allows any application or content to run across the network, as long as it knows how to translate itself into very simple packets with standard addressing information. Preventing ,{[pg 436]}, applications from doing this over basic TCP/IP would substantially cripple the Microsoft operating system for many application developers, which brings us to the second reason. Microsoft's dominance depends to a great extent on the vastly greater library of applications available to run on Windows. To make this library possible, Microsoft makes available a wide range of application program interfaces that developers can use without seeking Microsoft's permission. As a strategic decision about what enhances its core dominance, Microsoft may tilt the application development arena in its favor, but not enough to make it too hard for most applications to be implemented on a Windows platform. While not nearly as open as a genuinely open-source platform, Windows is also a far cry from a completely controlled platform, whose owner seeks to control all applications that are permitted to be developed for, and all uses that can be made of, its platform. Third, while IE controls much of the browser market share, Microsoft has not succeeded in dominating the standards for Web authoring. Web browser standard setting happens on the turf of the mythic creator of the Web--Tim Berners-Lee. Berners-Lee chairs the W3C, a nonprofit organization that sets the standard ways in which Web pages are authored so that they have a predictable appearance on the browser's screen.
Microsoft has, over the years, introduced various proprietary extensions that are not part of the Web standard, and has persuaded many Web authors to optimize their Web sites for IE. If it succeeds, it will have wrested practical control over standard setting from the W3C. However, as of this writing, Web pages generally continue to be authored using mostly standard, open extensions, and anyone browsing the Internet with a free software browser, like any of the Mozilla family, will be able to read and interact with most Web sites, including the major e-commerce sites, without encountering nonstandard interfaces optimized for IE. At a minimum, these sites are able to query the browser as to whether or not it is IE, and serve it with either the open-standard or the proprietary-standard version accordingly.
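The browser query such sites perform is ordinary User-Agent sniffing: the server inspects the User-Agent request header and branches on it. A minimal server-side sketch in Python (the function and variant names are illustrative assumptions, not drawn from the text; Internet Explorer of that era identified itself with the token "MSIE"):

```python
# Serve an IE-optimized page to Internet Explorer, and the open,
# W3C-standard page to every other browser. The variant names are
# illustrative, not an actual site's.
def choose_variant(user_agent: str) -> str:
    if "MSIE" in user_agent:  # token IE placed in its User-Agent header
        return "ie-optimized"
    return "w3c-standard"

print(choose_variant("Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"))  # ie-optimized
print(choose_variant("Mozilla/5.0 (X11; Linux i686) Gecko/20041107 Firefox/1.0"))  # w3c-standard
```

This is why a standards-compliant browser can still read such sites: the proprietary variant is served only when the header announces IE.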
+
+3~ Free Software
+
+The role of Mozilla in the browser wars points to the much more substantial and general role of the free software movement and the open-source development community as major sources of openness, and as a backstop against appropriation of the logical layer. In some of the most fundamental uses of the Internet--Web-server software, Web-scripting software, and e-mail servers--free or open-source software has a dominant user share. In others, like ,{[pg 437]}, the operating system, it offers a robust alternative sufficiently significant to prevent enclosure of an entire component of the logical layer. Because of its licensing structure and the fact that the technical specifications are open for inspection and use by anyone, free software offers the most completely open, commons-based institutional and organizational arrangement for any resource or capability in the digital environment. Any resource in the logical layer that is the product of a free software development project is institutionally designed to be available for nonmarket, nonproprietary strategies of use. The same openness, however, makes free software resistant to control. If one writes a constraining implementation of a certain function--for example, an audio driver that will not allow music to be played without proper authorization from a copyright holder--the openness of the code for inspection will allow users to identify what, and how, the software is constraining. The same institutional framework will allow any developer to "fix" the problem and change the way the software behaves. This is how free and open-source software is developed to begin with. One cannot limit access to the software--for purposes of inspection and modification--to developers whose behavior can be controlled by contract or property and still have the software be "open source" or free.
As long as free software can provide a fully implemented alternative to the computing functionalities users want, perfect enclosure of the logical layer is impossible. This openness is a boon for those who wish the network to develop in response to a wide range of motivations and practices. However, it presents a serious problem for anyone who seeks to constrain the range of uses made of the Internet. And, just as they did in the context of trusted systems, the incumbent industrial culture producers--Hollywood and the recording industry--would, in fact, like to control how the Internet is used and how software behaves.
+
+3~ Software Patents
+
+Throughout most of its history, software has been protected primarily by copyright, if at all. Beginning in the early 1980s, and culminating formally in the late 1990s, the Federal Circuit, the appellate court that oversees U.S. patent law, made clear that software was patentable. The result has been that software has increasingly become the subject of patent rights. There is now pressure for the European Union to pass a similar reform, and to internationalize the patentability of software more generally. There are a variety of policy questions surrounding the advisability of software patents. Software ,{[pg 438]}, development is a highly incremental process. This means that patents tend to impose a burden on a substantial amount of future innovation, and to reward innovation steps whose qualitative improvement over past contributions may be too small to justify the discontinuity represented by a patent grant. Moreover, innovation in the software business has flourished without patents, and there is no obvious reason to implement a new exclusive right in a market that seems to have been enormously innovative without it. Most important, software components interact with each other constantly. Sometimes interoperating with a certain program may be absolutely necessary to perform a function, not because the software is so good, but because it has become the standard. The patent then may extend to the very functionality, whereas a copyright would have extended only to the particular code by which it was achieved. The primary fear is that patents over standards could become major bottlenecks.
+
+From the perspective of the battle over the institutional ecology, free software and open-source development stand to lose the most from software patents. A patent holder may charge a firm that develops dependent software in order to capture rents. However, there is no obvious party to charge for free software development. Even if the patent owner has a very open licensing policy--say, licensing the patent nonexclusively to anyone without discrimination for $10,000--most free software developers will not be able to play. IBM and Red Hat may pay for licenses, but the individual contributor hacking away at his or her computer will not be able to. The basic driver of free software innovation is easy, ubiquitous access to the state of the art, coupled with diverse motivations and talents brought to bear on a particular design problem. If working on a problem requires a patent license, and if any new development must not only write new source code, but also avoid replicating a broad-scope patent or else pay a large fee, then the conditions for free software development are thoroughly undermined. Free software is responsible for some of the most basic and widely used innovations and utilities on the Internet today. Software more generally is heavily populated by service firms that do not functionally rely on exclusive rights, copyrights, or patents. Neither free software nor service-based software development needs patents, and both, particularly free and open-source software, stand to be stifled significantly by widespread software patenting. As seen in the case of the browser wars, the case of Gnutella, and the much more widely used basic utilities of the Web--Apache server software, a number of free e-mail servers, and the Perl scripting language--free and open-source ,{[pg 439]}, software developers provide central chunks of the logical layer. They do so in a way that leaves that layer open for anyone to use and build upon.
The drive to increase the degree of exclusivity available for software by adopting patents over and above copyright threatens the continued vitality of this development methodology. In particular, it threatens to take certain discrete application areas that may require access to patented standard elements or protocols out of the domain of what can be done by free software. As such, it poses a significant threat to the availability of an open logical layer for at least some forms of network use.
+
+2~ THE CONTENT LAYER
+
+The last set of resources necessary for information production and exchange is the universe of existing information, knowledge, and culture. The battle over the scope, breadth, extent, and enforcement of copyright, patent, trademarks, and a variety of exotic rights like trespass to chattels or the right to link has been the subject of a large legal literature. Instead of covering the entire range of enclosure efforts of the past decade or more, I offer a set of brief descriptions of the choices being made in this domain. The intention is not to criticize or judge the intrinsic logic of any of these legal changes, but merely to illustrate how all these toggles of institutional ecology are being set in favor of proprietary strategies, at the expense of nonproprietary producers.
+
+3~ Copyright
+
+The first domain in which we have seen a systematic preference for commercial producers that rely on property over commons-based producers is in copyright. This preference arises from a combination of expansive interpretations of what rights include, a niggardly interpretive attitude toward users' privileges, especially fair use, and increased criminalization. These have made copyright law significantly more friendly to industrial production than it was in the past, or than it need be from the perspective of optimizing creativity or welfare in the networked information economy rather than rent extraction by incumbents.
+
+/{Right to Read}/. Jessica Litman was early to diagnose an emerging new "right to read."~{ Jessica Litman, "The Exclusive Right to Read," Cardozo Arts and Entertainment Law Journal 13 (1994): 29. }~ The basic right of copyright, to control copying, was never seen to include the right to control who reads an existing copy, when, and how ,{[pg 440]}, many times. Once a user bought a copy, he or she could read it many times, lend it to a friend, or leave it on the park bench or in the library for anyone else to read. This provided a coarse valve to limit the deadweight loss associated with appropriating a public good like information. As a happenstance of computer technology, reading on a screen involves making a temporary copy of a file onto the temporary memory of the computer. An early decision of the Ninth Circuit Court of Appeals, MAI Systems, treated RAM (random-access memory) copies of this sort as "copies" for purposes of copyright.~{ /{MAI Systems Corp. v. Peak Computer, Inc.}/, 991 F.2d 511 (9th Cir. 1993). }~ This position, while weakly defended, was not later challenged or rejected by other courts. Its result is that every act of reading on a screen involves "making a copy" within the meaning of the Copyright Act. As a practical matter, this interpretation expands the formal rights of copyright holders to cover any and all computer-mediated uses of their works, because no use can be made with a computer without at least formally implicating the right to copy. More important than the formal legal right, however, this universal baseline claim to a right to control even simple reading of one's copyrighted work marked a change in attitude. Justified later through various claims--such as the efficiency of private ordering or of price discrimination--it came to stand for a fairly broad proposition: Owners should have the right to control all valuable uses of their works.
Combined with the possibility and existence of technical controls on actual use and the DMCA's prohibition on circumventing those controls, this means that copyright law has shifted. It existed throughout most of its history as a regulatory provision that reserved certain uses of works for exclusive control by authors, but left other, not explicitly constrained uses free. It has now become a law that gives rights holders the exclusive right to control any computer-mediated use of their works, and captures in its regulatory scope all uses that were excluded from control in prior media.
+
+/{Fair Use Narrowed}/. Fair use in copyright was always a judicially created concept with a large degree of uncertainty in its application. This uncertainty, coupled with a broader interpretation of what counts as a commercial use, a restrictive judicial view of what counts as fair, and increased criminalization have narrowed its practical scope.
+
+First, it is important to recognize that the theoretical availability of the fair-use doctrine does not, as a practical matter, help most productions. This is due to a combination of two factors: (1) fair-use doctrine is highly fact specific and uncertain in application, and (2) the Copyright Act provides ,{[pg 441]}, large fixed statutory damages, even if there is no actual damage to the copyright owner. Lessig demonstrated this effect most clearly by working through an example of a documentary film.~{ Lawrence Lessig, Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity (New York: Penguin Press, 2004). }~ A film will not be distributed without liability insurance. Insurance, in turn, will not be issued without formal clearance, or permission, from the owner of each copyrighted work, any portion of which is included in the film, even if the amount used is trivially small and insignificant to the documentary. A five-second snippet of a television program that happened to play on a television set in the background of a sequence captured in documentary film can therefore prevent distribution of the film, unless the filmmaker can persuade the owner of that program to grant rights to use the materials. Copyright owners in such television programs may demand thousands of dollars for even such a minimal and incidental use of "their" images. This is not because a court would ultimately find that using the image as is, with the tiny fraction of the television program in the background, was not covered by fair use. It probably would be a fair use. It is because insurance companies and distributors would refuse to incur the risk of litigation.
+
+Second, in the past few years, even this uncertain scope has been constricted by expanding the definitions of what counts as interference with a market and what counts as a commercial use. Consider the Free Republic case. In that case, a political Web site offered a forum for users to post stories from various newspapers as grist for a political discussion of their contents or their slant. The court held that because newspapers may one day sell access to archived articles, and because some users may read some articles on the Web forum instead of searching and retrieving them from the newspapers' archive, the use interfered with a potential market. Moreover, because Free Republic received donations from users (although it did not require them) and exchanged advertising arrangements with other political sites, the court treated the site as a "commercial user," and its use of newspaper articles to facilitate political discussion of them "a commercial use." These factors enabled the court to hold that posting an article from a medium--daily newspapers--whose existence does not depend on copyright, in a way that may one day come to have an effect on uncertain future revenues, which in any case would be marginal to the business model of the newspapers, was not a fair use even when done for purposes of political commentary.
+
+/{Criminalization}/. Copyright enforcement has also been substantially criminalized in the past few years. Beginning with the No Electronic Theft Act ,{[pg 442]}, (NET Act) in 1997, and continuing with provisions later incorporated into the DMCA, criminal copyright liability has become much more expansive than it was only a few years ago. Prior to passage of the NET Act, only commercial pirates--those who slavishly made thousands of copies of video or audiocassettes and sold them for profit--would have qualified as criminal violators of copyright. Criminal liability has now been expanded to cover private copying and free sharing of copyrighted materials whose cumulative nominal price (irrespective of actual displaced demand) is quite low. As criminal copyright law is currently written, many of the tens of millions using p2p networks are felons. It is one thing when the recording industry labels tens of millions of individuals in a society "pirates" in a rhetorical effort to conform social norms to its members' business model. It is quite another when the state brands them felons and fines or imprisons them. Litman has offered the most plausible explanation of this phenomenon.~{ Jessica Litman, "Electronic Commerce and Free Speech," Journal of Ethics and Information Technology 1 (1999): 213. }~ As the network makes low-cost production and exchange of information and culture easier, the large-scale commercial producers are faced with a new source of competition--volunteers, people who provide information and culture for free. As the universe of people who can threaten the industry has grown to encompass more or less the entire universe of potential customers, the plausibility of using civil actions to force individuals to buy rather than share information goods decreases. Suing all of one's intended customers is not a sustainable business model.
In the interest of maintaining the business model that relies on control over information goods and their sale as products, the copyright industry has instead enlisted criminal enforcement by the state to prevent the emergence of such a system of free exchange. These changes in formal law have, in what is perhaps a more important development, been coupled with changes in the Justice Department's enforcement policy, leading to a substantial increase in the shadow of criminal enforcement in this area.~{ See Department of Justice Intellectual Property Policy and Programs, http://www.usdoj.gov/criminal/cybercrime/ippolicy.htm. }~
+
+/{Term Extension}/. The change in copyright law that received the most widespread public attention was the extension of copyright term in the Sonny Bono Copyright Term Extension Act of 1998. The statute became a cause célèbre in the early 2000s because it was the basis of a major public campaign and constitutional challenge in the case of /{Eldred v. Ashcroft}/.~{ /{Eldred v. Ashcroft}/, 537 U.S. 186 (2003). }~ The actual marginal burden of this statute on use of existing materials could be seen as relatively small. The length of copyright protection was already very long--seventy-five years for corporate-owned materials, life of the author plus fifty for materials initially owned by human authors. The Sonny Bono Copyright ,{[pg 443]}, Term Extension Act increased these two numbers to ninety-five and life plus seventy, respectively. The major implication, however, was that the Act showed that retroactive extension was always available. As materials that were still valuable in the stocks of Disney, in particular, came close to the public domain, their lives would be extended indefinitely. The legal challenge to the statute brought to public light the fact that, as a practical matter, almost the entire stock of twentieth-century culture and beyond would stay privately owned, and its copyright would be renewed indefinitely. For video and sound recordings, this meant that almost the entire universe of materials would never become part of the public domain; would never be available for free use as inputs into nonproprietary production. The U.S. Supreme Court upheld the retroactive extension.
The inordinately long term of protection in the United States, initially passed under the pretext of "harmonizing" the length of protection in the United States and in Europe, is now being used as an excuse to "harmonize" the length of protection for various kinds of materials--like sound recordings--that actually have shorter terms of protection in Europe or other countries, like Australia. At stake in all these battles is the question of when, if ever, Errol Flynn's or Mickey Mouse's movies, or Elvis's music, will become part of the public domain. When will these be available for individual users on the same terms that Shakespeare or Mozart are available? The implication of /{Eldred}/ is that they may never join the public domain, unless the politics of term-extension legislation change.
+
+/{No de Minimis Digital Sampling}/. A narrower, but revealing change is the recent elimination of digital sampling from the universe of ex ante permissible actions, even when all that is taken is a tiny snippet. The case is recent and has not been generalized by other courts as of this writing. However, it offers insight into the mind-set of judges who are confronted with digital opportunities, and who in good faith continue to see the stakes as involving purely the organization of a commercial industry, rather than defining the comparative scope of commercial industry and nonmarket commons-based creativity. Courts seem blind to the effects of their decisions on the institutional ecology within which nonproprietary, individual, and social creation must live. In /{Bridgeport Music, Inc.}/, the Sixth Circuit was presented with the following problem: The defendant had created a rap song.~{ /{Bridgeport Music, Inc. v. Dimension Films}/, 383 F.3d 390 (6th Cir. 2004). }~ In making it, he had digitally copied a two-second guitar riff from a digital recording of a 1970s song, and then looped and inserted it in various places to create a completely different musical effect than the original. The district court ,{[pg 444]}, had decided that the amount borrowed was so small as to make the borrowing de minimis--too little for the law to be concerned with. The Court of Appeals, however, decided that it would be too burdensome for courts to have to decide, on a case-by-case basis, how much was too little for law to be concerned with. Moreover, it would create too much uncertainty for recording companies; it is, as the court put it, "cheaper to license than to litigate."~{ 383 F.3d 390, 400. }~ The court therefore held that any digital sampling, no matter how trivial, could be the basis of a copyright suit.
Such a bright-line rule that makes all direct copying of digital bits, no matter how small, an infringement, makes digital sound recordings legally unavailable for noncommercial, individually creative mixing. There are now computer programs, like Garage Band, that allow individual users to cut and mix existing materials to create their own music. These may not result in great musical compositions. But they may. That, in any event, is not their point. They allow users to have a very different relationship to recorded music than merely passively listening to finished, unalterable musical pieces. By imagining that the only parties affected by copyright coverage of sampling are recording artists who have contracts with recording studios and seek to sell CDs, and can therefore afford to pay licensing fees for every two-second riff they borrow, the court effectively outlawed an entire model of user creativity. Given how easy it is to cut, paste, loop, slow down, and speed up short snippets, and how creatively exhilarating it is for users--young and old--to tinker with creating musical compositions with instruments they do not know how to play, it is likely that the opinion has rendered illegal a practice that will continue, at least for the time being. Whether the social practice will ultimately cause the law to change or vice versa is more difficult to predict.
+
+3~ Contractual Enclosure: Click-Wrap Licenses and the Uniform Computer Information Transactions Act (UCITA)
+
+Practically all academic commentators on copyright law--whether critics or proponents of this provision or that--understand copyright to be a public policy accommodation between the goal of providing incentives to creators and the goal of providing efficiently priced access to both users and downstream creators. Ideally, it takes into consideration the social costs and benefits of one settlement or another, and seeks to implement an optimal tradeoff. Beginning in the 1980s, software and other digital goods were sold with "shrink-wrap licenses." These were licenses to use the software, which purported ,{[pg 445]}, to apply to mass-market buyers because the buyer would be deemed to have accepted the contract by opening the packaging of the software. These practices later transmuted online into click-wrap licenses familiar to most anyone who has installed software and had to click "I Agree" once or more before the software would install. Contracts are not bound by the balance struck in public law. Licensors can demand, and licensees can agree to, almost any terms. Among the terms most commonly inserted in such licenses that restrict the rights of users are prohibitions on reverse engineering, and restrictions on the use of raw data in compilations, even though copyright law itself does not recognize rights in data. As Mark Lemley showed, most courts prior to the mid-1990s did not enforce such terms.~{ Mark A. Lemley, "Intellectual Property and Shrinkwrap Licenses," Southern California Law Review 68 (1995): 1239, 1248-1253. }~ Some courts refused to enforce shrink-wrap licenses in mass-market transactions by relying on state contract law, finding an absence of sufficient consent or an unenforceable contract of adhesion. 
Others relied on federal preemption, stating that to the extent state contract law purported to enforce a contract that prohibited fair use or otherwise protected material in the public domain--like the raw information contained in a report--it was preempted by federal copyright law that chose to leave this material in the public domain, freely usable by all. In 1996, in /{ProCD v. Zeidenberg}/, the Seventh Circuit held otherwise, arguing that private ordering would be more efficient than a single public determination of what the right balance was.~{ 86 F.3d 1447 (7th Cir. 1996). }~
+
+The following few years saw substantial academic debate as to the desirability of contractual opt-outs from the public policy settlement. More important, the five years that followed saw a concerted effort to introduce a new part to the Uniform Commercial Code (UCC)--a model commercial law that, though nonbinding, is almost universally adopted at the state level in the United States, with some modifications. The proposed new UCC Article 2B was to eliminate the state law concerns by formally endorsing the use of standard shrink-wrap licenses. The proposed article generated substantial academic and political heat, ultimately being dropped by the American Law Institute, one of the main sponsors of the UCC. A model law did ultimately pass under the name of the Uniform Computer Information Transactions Act (UCITA), as part of a less universally adopted model law effort. Only two states adopted the law--Virginia and Maryland. A number of other states then passed anti-UCITA laws, which gave their residents a safe harbor from having UCITA applied to their click-wrap transactions.
+
+The reason that /{ProCD}/ and UCITA generated so much debate was the concern that click-wrap licenses were operating in an inefficient market, and ,{[pg 446]}, that they were, as a practical matter, displacing the policy balance represented by copyright law. Mass-market transactions do not represent a genuine negotiated agreement, in the individualized case, as to what the efficient contours of permissions are for the given user and the given information product. They are, rather, generalized judgments by the vendor as to what terms are most attractive for it that the market will bear. Unlike rival economic goods, information goods sold at a positive price in reliance on copyright are, by definition, priced above marginal cost. The information itself is nonrival. Its marginal cost is zero. Any transaction priced above the cost of communication is evidence of some market power in the hands of the provider, used to price based on value and elasticity of demand, not on marginal cost. Moreover, the vast majority of users are unlikely to pay close attention to license details they consider to be boilerplate. This means there is likely a significant information shortfall on the part of consumers as to the content of the licenses, and the sensitivity of demand to overreaching contract terms is likely low. This is not because consumers are stupid or slothful, but because the probability that either they would be able to negotiate out from under a standard provision, or that a court would enforce a truly abusive provision against them, is too low to justify investing in reading and arguing about contracts for all but their largest purchases.
In combination, these considerations make it difficult to claim as a general matter that privately set licensing terms would be more efficient than the publicly set background rules of copyright law.~{ For a more complete technical explanation, see Yochai Benkler, "An Unhurried View of Private Ordering in Information Transactions," Vanderbilt Law Review 53 (2000): 2063. }~ The combination of mass-market contracts enforced by technical controls over use of digital materials, which in turn are protected by the DMCA, threatens to displace the statutorily defined public domain with a privately defined realm of permissible use.~{ James Boyle, "Cruel, Mean or Lavish? Economic Analysis, Price Discrimination and Digital Intellectual Property," Vanderbilt Law Review 53 (2000); Julie E. Cohen, "Copyright and the Jurisprudence of Self-Help," Berkeley Technology Law Journal 13 (1998): 1089; Niva Elkin-Koren, "Copyright Policy and the Limits of Freedom of Contract," Berkeley Technology Law Journal 12 (1997): 93. }~ This privately defined settlement would be arrived at in non-negotiated mass-market transactions, in the presence of significant information asymmetries between consumers and vendors, and in the presence of systematic market power of at least some degree.
+
+3~ Trademark Dilution
+
+As discussed in chapter 8, the centrality of commercial interaction to social existence in early-twenty-first-century America means that much of our core iconography is commercial in origin and owned as a trademark. Mickey, Barbie, Playboy, or Coke are important signifiers of meaning in contemporary culture. Using iconography is a central means of creating rich, culturally situated expressions of one's understanding of the world. Yet, as Boyle ,{[pg 447]}, has pointed out, now that we treat flag burning as a constitutionally protected expression, trademark law has made commercial icons the sole remaining venerable objects in our law. Trademark law permits the owners of culturally significant images to control their use, to squelch criticism, and to define exclusively the meaning that the symbols they own carry.
+
+Three factors make trademark protection today more of a concern as a source of enclosure than it might have been in the past. First is the introduction of the federal Anti-Dilution Act of 1995. Second is the emergence of the brand as the product, as opposed to a signifier for the product. Third is the substantial reduction in search and other information costs created by the Net. Together, these three factors mean that owned symbols are becoming increasingly important as cultural signifiers, are being enclosed much more extensively than before precisely as cultural signifiers, and with less justification beyond the fact that trademarks, like all exclusive rights, are economically valuable to their owners.
+
+In 1995, Congress passed the first federal Anti-Dilution Act. Though treated as a trademark protection law, and codifying doctrines that arose in trademark common law, antidilution is a fundamentally different economic right than trademark protection. Traditional trademark protection is focused on preventing consumer confusion. It is intended to assure that consumers can cheaply tell the difference between one product and another, and to give producers incentives to create consistent quality products that can be associated with their trademark. Trademark law traditionally reflected these interests. Likelihood of consumer confusion was the sine qua non of trademark infringement. If I wanted to buy a Coca-Cola, I did not want to have to make sure I was not buying a different dark beverage in a red can called Coca-Gola. Infringement actions were mostly limited to suits among competitors in similar relevant markets, where confusion could occur. So, while trademark law restricted how certain symbols could be used, it did so only as among competitors, and only as to the commercial, not cultural, meaning of their trademark. The antidilution law changes the most relevant factors. It is intended to protect famous brand names, irrespective of a likelihood of confusion, from being diluted by use by others. The association between a particular corporation and a symbol is protected for its value to that corporation, irrespective of the use. It no longer regulates solely competitors to the benefit of competition. It prohibits many more possible uses of the symbol than was the case under traditional trademark law. It applies even to noncommercial users where there is no possibility of confusion. The emergence ,{[pg 448]}, of this antidilution theory of exclusivity is particularly important as brands have become the product itself, rather than a marker for the product.
Nike and Calvin Klein are examples: The product sold in these cases is not a better shoe or shirt--the product sold is the brand. And the brand is associated with a cultural and social meaning that is developed purposefully by the owner of the brand so that people will want to buy it. This development explains why dilution has become such a desirable exclusive right for those who own it. It also explains the cost of denying to anyone the right to use the symbol, now a signifier of general social meaning, in ways that do not confuse consumers in the traditional trademark sense, but provide cultural criticism of the message signified.
+
+Ironically, the increase in the power of trademark owners to control uses of their trademark comes at a time when its functional importance as a mechanism for reducing search costs is declining. Traditional trademark's most important justification was that it reduced information collection costs and thereby facilitated welfare-enhancing trade. In the context of the Internet, this function is significantly less important. General search costs are lower. Individual items in commerce can provide vastly greater amounts of information about their contents and quality. Users can use machine processing to search and sift through this information and to compare views and reviews of specific items. Trademark has become less, rather than more, functionally important as a mechanism for dealing with search costs. When we move in the next few years to individual-item digital marking, such as with RFID (radio frequency identification) tags, all the relevant information about contents, origin, and manufacture down to the level of the item, as opposed to the product line, will be readily available to consumers in real space, by scanning any given item, even if it is not otherwise marked at all. In this setting, where the information qualities of trademarks will significantly decline, the antidilution law nonetheless assures that owners can control the increasingly important cultural meaning of trademarks. Trademark, including dilution, is subject to a fair use exception like that of copyright. For the same reasons as operated in copyright, however, the presence of such a doctrine only ameliorates, but does not solve, the limits that a broad exclusive right places on the capacity of nonmarket-oriented creative uses of materials--in this case, culturally meaningful symbols. ,{[pg 449]},
+
+3~ Database Protection
+
+In 1991, in /{Feist Publications, Inc. v. Rural Tel. Serv. Co.}/, the Supreme Court held that raw facts in a compilation, or database, were not covered by the Copyright Act. The constitutional clause that grants Congress the power to create exclusive rights for authors, the Court held, required that protected works be original with the author. The creative element of the compilation--its organization or selectivity, for example, if sufficiently creative--could therefore be protected under copyright law. However, the raw facts compiled could not. Copying data from an existing compilation was therefore not "piracy"; it was not unfair or unjust; it was purposefully privileged in order to advance the goals of the constitutional power to make exclusive grants--the advancement of progress and creative uses of the data.~{ /{Feist Publications, Inc. v. Rural Telephone Service Co., Inc.}/, 499 U.S. 340, 349-350 (1991). }~ A few years later, the European Union passed a Database Directive, which created a discrete and expansive right in raw data compilations.~{ Directive No. 96/9/EC on the legal protection of databases, 1996 O.J. (L 77) 20. }~ The years since the Court decided /{Feist}/ have seen repeated efforts by the larger players in the database publishing industry to pass similar legislation in the United States that would, as a practical matter, overturn /{Feist}/ and create exclusive private rights in the raw data in compilations. "Harmonization" with Europe has been presented as a major argument in favor of this law. Because the /{Feist}/ Court based its decision on limits to the constitutional power to create exclusive rights in raw information, efforts to protect database providers mostly revolved around an unfair competition law, based in the Commerce Clause, rather than on precisely replicating the European right. In fact, however, the primary draft that has repeatedly been introduced walks, talks, and looks like a property right.
+
+Sustained and careful work, most prominently by Jerome Reichman and Paul Uhlir, has shown that the proposed database right is unnecessary and detrimental, particularly to scientific research.~{ J. H. Reichman and Paul F. Uhlir, "Database Protection at the Crossroads: Recent Developments and Their Impact on Science and Technology," Berkeley Technology Law Journal 14 (1999): 793; Stephen M. Maurer and Suzanne Scotchmer, "Database Protection: Is It Broken and Should We Fix It?" Science 284 (1999): 1129. }~ Perhaps no example explains this point better than the "natural experiment" that Boyle has pointed to, and which the United States and Europe have been running over the past decade or so. The United States has formally had no exclusive right in data since 1991. Europe has explicitly had such a right since 1996. One would expect that both the European Union and the United States would look to the comparative effects on the industries in both places when the former decides whether to keep its law, and the latter decides whether to adopt one like it. The evidence is reasonably consistent and persuasive. Following the /{Feist}/ decision, the U.S. database industry continued to grow steadily, without ,{[pg 450]}, a blip. The "removal" of the property right in data by /{Feist}/ had no effect on growth. Europe at the time had a much smaller database industry than did the United States, as measured by the number of databases and database companies. Maurer, Hugenholtz, and Onsrud showed that, following the introduction of the European sui generis right, each country saw a one-time spike in the number of databases and new database companies, but this was followed within a year or two by a decline to the levels seen before the Directive, which have been fairly stagnant since the early 1990s.~{ See Stephen M. Maurer, P. Bernt Hugenholtz, and Harlan J. Onsrud, "Europe's Database Experiment," Science 294 (2001): 789; Stephen M. Maurer, "Across Two Worlds: Database Protection in the U.S. 
and Europe," paper prepared for Industry Canada's Conference on Intellectual Property and Innovation in the Knowledge-Based Economy, May 23-24, 2001. }~ Another study, more specifically oriented toward the appropriate policy for government-collected data, compared the practices of Europe--where government agencies are required to charge what the market will bear for access to data they collect--and the United States, where the government makes data it collects freely available at the cost of reproduction, as well as for free on the Web. That study found that secondary uses of freely accessed government weather data--in both the commercial and noncommercial sectors, such as markets in commercial risk management and meteorological services--contributed vastly more to the economy of the United States than the equivalent market sectors in Europe were able to contribute to their respective economies.~{ Peter Weiss, "Borders in Cyberspace: Conflicting Public Sector Information Policies and their Economic Impacts" (U.S. Dept. of Commerce, National Oceanic and Atmospheric Administration, February 2002). }~ The evidence suggests, then, that the artificial imposition of rents for proprietary data is suppressing growth in European market-based commercial services and products that rely on access to data, relative to the steady growth in the parallel U.S. markets, where no such right exists. It is trivial to see that a cost structure that suppresses growth among market-based entities that would at least partially benefit from being able to charge more for their outputs would have an even more deleterious effect on nonmarket information production and exchange activities, which are burdened by the higher costs and gain no benefit from the proprietary rights.
+
+There is, then, mounting evidence that rights in raw data are unnecessary to create a basis for a robust database industry. Database manufacturers rely on relational contracts--subscriptions to continuously updated databases-- rather than on property-like rights. The evidence suggests that, in fact, exclusive rights are detrimental to various downstream industries that rely on access to data. Despite these fairly robust observations from a decade of experience, there continues to be a threat that such a law will pass in the U.S. Congress. This continued effort to pass such a law underscores two facts. First, much of the legislation in this area reflects rent seeking, rather than reasoned policy. Second, the deeply held belief that "more property-like ,{[pg 451]}, rights will lead to more productivity" is hard to shake, even in the teeth of both theoretical analysis and empirical evidence to the contrary.
+
+3~ Linking and Trespass to Chattels: New Forms of Information Exclusivity
+
+Some litigants have turned to state law remedies to protect their data indirectly, by developing a common-law, trespass-to-server form of action. The primary instance of this trend is /{eBay v. Bidder's Edge}/, a suit by the leading auction site against an aggregator site. Aggregators collect information about what is being auctioned in multiple locations, and make the information about the items available in one place so that a user can search eBay and other auction sites simultaneously. The eventual bidding itself is done on the site on which the item's owner chose to make his or her item available, under the terms required by that site. The court held that the automated information collection process--running a computer program that automatically requests information from the server about what is listed on it, called a spider or a bot--was a "trespass to chattels."~{ /{eBay, Inc. v. Bidder's Edge, Inc.}/, 2000 U.S. Dist. LEXIS 13326 (N.D. Cal. 2000). }~ This ancient form of action, originally intended to apply to actual taking or destruction of goods, mutated into a prohibition on unlicensed automated searching. The injunction led to Bidder's Edge closing its doors before the Ninth Circuit had an opportunity to review the decision. A common-law decision like /{eBay v. Bidder's Edge}/ creates a common-law exclusive private right in information by the back door. In principle, the information itself is still free of property rights. Reading it mechanically--an absolute necessity given the volume of the information and its storage on magnetic media accessible only by mechanical means--can, however, be prohibited as "trespass." The practical result would be equivalent to some aspects of a federal exclusive private right in raw data, but without the mitigating attributes of any exceptions that would be directly introduced into legislation.
It is still too early to tell whether cases such as these ultimately will be considered preempted by federal copyright law,~{ The preemption model could be similar to the model followed by the Second Circuit in /{NBA v. Motorola}/, 105 F.3d 841 (2d Cir. 1997), which restricted state misappropriation claims to narrow bounds delimited by federal policy embedded in the Copyright Act. This might require actual proof that the bots have stopped service, or threaten the service's very existence. }~ or perhaps would be limited by First Amendment law on the model of /{New York Times v. Sullivan}/.~{ /{New York Times v. Sullivan}/, 376 U.S. 254, 266 (1964). }~
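Mechanically, the "spider" or "bot" at issue in /{eBay v. Bidder's Edge}/ is a very simple kind of program: it requests listing pages from a server, one after another, and extracts the items they describe. The sketch below is purely illustrative--the URLs, page format, and parsing rule are hypothetical, not Bidder's Edge's actual code:

```python
# Minimal sketch of an aggregator "spider": it requests listing pages
# from an auction site and collects the items found on each page.
# All URLs, the page markup, and the parsing rule are hypothetical.

import re
from typing import Callable, Dict, List

def crawl_listings(fetch: Callable[[str], str], urls: List[str]) -> Dict[str, List[str]]:
    """Fetch each listing page and extract item titles.

    `fetch` is injected so the sketch can be run against a real HTTP
    client or, as below, against canned pages with no network access.
    Each page is assumed to mark items as <li class="item">Title</li>.
    """
    results: Dict[str, List[str]] = {}
    for url in urls:
        page = fetch(url)  # one automated request per listing page
        results[url] = re.findall(r'<li class="item">([^<]+)</li>', page)
    return results

# Usage against canned pages (no network access needed):
pages = {
    "https://auction.example/listings?p=1":
        '<ul><li class="item">Vintage radio</li><li class="item">Oak desk</li></ul>',
    "https://auction.example/listings?p=2":
        '<ul><li class="item">Tin toy</li></ul>',
}
found = crawl_listings(pages.__getitem__, list(pages))
```

Run repeatedly across many pages and many sites, this is all an aggregator's bot does; the trespass-to-chattels theory treats each such automated request as an unauthorized use of the server it touches.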
+
+Beyond the roundabout exclusivity in raw data, trespass to chattels presents one instance of a broader question that is arising in application of both common-law and statutory provisions. At stake is the legal control over information about information, like linking and other statements people make about the availability and valence of some described information. Linking--the mutual pointing of many documents to each other--is the very ,{[pg 452]}, core idea of the World Wide Web. In a variety of cases, parties have attempted to use law to control the linking practices of others. The basic structure of these cases is that A wants to tell users M and N about information presented by B. The meaning of a link is, after all, "here you can read information presented by someone other than me that I deem interesting or relevant to you, my reader." Someone, usually B, but possibly some other agent C, wants to control what M and N know or do with regard to the information B is presenting. B (or C) then sues A to prevent A from linking to the information on B's site.
+
+The simplest instance of such a case involved a service that Microsoft offered--sidewalk.com--that provided access to, among other things, information on events in various cities. If a user wanted a ticket to the event, the sidewalk site linked that user directly to a page on ticketmaster.com where the user could buy a ticket. Ticketmaster objected to this practice, preferring instead that sidewalk.com link to its home page, in order to expose the users to all the advertising and services Ticketmaster provided, rather than solely to the specific service sought by the user referred by sidewalk.com. At stake in these linking cases is who will control the context in which certain information is presented. If deep linking is prohibited, Ticketmaster will control the context--the other movies or events available to be seen, their relative prominence, reviews, and so forth. The right to control linking then becomes a right to shape the meaning and relevance of one's statements for others. While the choice between Ticketmaster and Microsoft as controllers of the context of information may seem of little normative consequence, it is important to recognize that the right to control linking could easily apply to a local library, or church, or a neighbor as they participate in peer-producing relevance and accreditation of the information to which they link.
+
+The general point is this: On the Internet, there are a variety of ways that some people can let others know about information that exists somewhere on the Web. In doing so, these informers loosen someone else's control over the described information--be it the government, a third party interested in limiting access to the information, or the person offering the information. In a series of instances over the past half decade or more we have seen attempts by people who control certain information to limit the ability of others to challenge that control by providing information about the information. These are not cases in which a person without access to information is seeking affirmative access from the "owner" of information. These are ,{[pg 453]}, cases where someone who dislikes what another is saying about particular information is seeking the aid of law to control what other parties can say to each other about that information. Understood in these terms, the restrictive nature of these legal moves in terms of how they burden free speech in general, and impede the freedom of anyone, anywhere, to provide information, relevance, and accreditation, becomes clear. The /{eBay v. Bidder's Edge}/ case suggests one particular additional aspect. While much of the political attention focuses on formal "intellectual property"-style statutes passed by Congress, in the past few years we have seen that state law and common-law doctrine are also being drafted to create areas of exclusivity and boundaries on the free use of information. These efforts are often less well informed, and because they were arrived at ad hoc, often without understanding that they are actually forms of regulating information production and exchange, they include none of the balancing privileges or limitations of rights that are so common in the formal statutory frameworks.
+
+3~ International "Harmonization"
+
+One theme that has repeatedly appeared in the discussion of databases, the DMCA, and term extension, is the way in which "harmonization" and internationalization of exclusive rights are used to ratchet up the degree of exclusivity afforded rights holders. It is trite to point out that the most advanced economies in the world today are information and culture exporters. This is true of both the United States and Europe. Some of the cultural export industries--most notably Hollywood, the recording industry, some segments of the software industry, and pharmaceuticals--have business models that rely on the assertion of exclusive rights in information. Both the United States and the European Union, therefore, have spent the past decade and a half pushing for ever-more aggressive and expansive exclusive rights in international agreements and for harmonization of national laws around the world toward the highest degrees of protection. Chapter 9 discusses in some detail why this was not justified as a matter of economic rationality, and why it is deleterious as a matter of justice. Here, I only note the characteristic of internationalization and harmonization as a one-way ratchet toward ever-expanding exclusivity.
+
+Take a simple provision like the term of copyright protection. In the mid-1990s, Europe was providing for many works (but not all) a term of life of the author plus seventy years, while the United States provided exclusivity for the life of the author plus fifty. A central argument for the Sonny Bono ,{[pg 454]}, Copyright Term Extension Act of 1998 was to "harmonize" with Europe. In the debates leading up to the law, one legislator actually argued that if our software manufacturers had a shorter term of copyright, they would be disadvantaged relative to the European firms. This argument assumes, of course, that U.S. software firms could stay competitive in the software business by introducing nothing new in software for seventy-five years, and that it would be the loss of revenues from products that had not been sufficiently updated for seventy-five years to warrant new copyright that would place them at a disadvantage. The newly extended period created by the Sonny Bono Copyright Term Extension Act is, however, longer in some cases than the protection afforded in Europe. Sound recordings, for example, are protected for fifty years in Europe. The arguments are now flowing in the opposite direction--harmonization toward the American standard for all kinds of works, for fear that the recordings of Elvis or the Beatles will fall into the European public domain within a few paltry years. "Harmonization" is never invoked to de-escalate exclusivity--for example, as a reason to eliminate the European database right in order to harmonize with the obviously successful American model of no protection, or to shorten the length of protection for sound recordings in the United States.
+
+International agreements also provide a fertile forum for ratcheting up protection. Lobbies achieve a new right in a given jurisdiction--say an extension of term, or a requirement to protect technological protection measures on the model of the DMCA. The host country, usually the United States, the European Union, or both, then presents the new right for treaty approval, as the United States did in the context of the WIPO treaties in the mid-1990s. Where this fails, the United States has more recently begun to negotiate bilateral free trade agreements (FTAs) with individual nations. The structure of negotiation is roughly as follows: The United States will say to Thailand, or India, or whoever the trading partner is: If you would like preferential treatment of your core export, say textiles or rice, we would like you to include this provision or that in your domestic copyright or patent law. Once this is agreed to in a number of bilateral FTAs, the major IP exporters can come back to the multilateral negotiations and claim an emerging international practice, which may provide more exclusivity than their then applicable domestic law. With changes to international treaties in hand, domestic resistance to legislation can be overcome, as we saw in the United States when the WIPO treaties were used to push through Congress the DMCA anticircumvention provisions that had failed to pass two years ,{[pg 455]}, earlier. Any domestic efforts to reverse and limit exclusivity then have to overcome substantial hurdles placed by the international agreements, like the agreement on Trade Related Aspects of Intellectual Property (TRIPS). The difficulty of amending international agreements to permit a nation to decrease the degree of exclusivity it grants copyright or patent holders becomes an important one-way ratchet, preventing de-escalation.
+
+3~ Countervailing Forces
+
+As this very brief overview demonstrates, most of the formal institutional moves at the content layer are pushing toward greater scope and reach for exclusive rights in the universe of existing information, knowledge, and cultural resources. The primary countervailing forces in the content layer are similar to the primary countervailing forces in the logical layer--that is, social and cultural push-back against exclusivity. Recall how central free software and the open, cooperative, nonproprietary standard-setting processes are to the openness of the logical layer. In the content layer, we are seeing the emergence of a culture of free creation and sharing developing as a countervailing force to the increasing exclusivity generated by the public, formal lawmaking system. The Public Library of Science discussed in chapter 9 is an initiative of scientists who, frustrated with the extraordinarily high costs of academic journals, have begun to develop systems for scientific publication whose outputs are immediately and freely available everywhere. The Creative Commons is an initiative to develop a series of licenses that allow individuals who create information, knowledge, and culture to attach simple licenses that define what others may, or may not, do with their work. The innovation represented by these licenses relative to the background copyright system is that they make it trivial for people to give others permission to use their creations. Before their introduction, there were no widely available legal forms to make it clear to the world that it is free to use my work, with or without restrictions. More important than the institutional innovation of Creative Commons is its character as a social movement. Under the moniker of the "free culture" movement, it aims to encourage widespread adoption of sharing one's creations with others.
What a mature movement like the free software movement, or nascent movements like the free culture movement and the scientists' movement for open publication and open archiving are aimed at is the creation of a legally self-reinforcing domain of open cultural sharing. They do not negate property-like rights in information, knowledge, and culture. Rather, they represent a ,{[pg 456]}, self-conscious choice by their participants to use copyrights, patents, and similar rights to create a domain of resources that are free to all for common use.
+
+Alongside these institutionally instantiated moves to create a self-reinforcing set of common resources, there is a widespread, global culture of ignoring exclusive rights. It is manifest in the widespread use of file-sharing software to share copyrighted materials. It is manifest in the widespread acclaim that those who crack copy-protection mechanisms receive. This culture has developed a rhetoric of justification that focuses on the overreaching of the copyright industries and on the ways in which the artists themselves are being exploited by rights holders. While clearly illegal in the United States, there are places where courts have sporadically treated participation in these practices as copying for private use, which is exempted in some countries, including a number of European countries. In any event, the sheer size of this movement and its apparent refusal to disappear in the face of lawsuits and public debate present a genuine countervailing pressure against the legal tightening of exclusivity. As a practical matter, efforts to impose perfect private ordering and to limit access to the underlying digital bits in movies and songs through technical means have largely failed under the sustained gaze of the community of computer scientists and hackers who have shown its flaws time and again. Moreover, the mechanisms developed in response to a large demand for infringing file-sharing utilities were the very mechanisms that were later available to the Swarthmore students to avoid having the Diebold files removed from the Internet and that are shared by other censorship-resistant publication systems. The tools that challenge the "entertainment-as-finished-good" business model are coming into much wider and unquestionably legitimate use. Litigation may succeed in dampening use of these tools for copying, but also creates a heightened political awareness of information-production regulation.
The same students involved in the Diebold case, radicalized by the lawsuit, began a campus "free culture" movement. It is difficult to predict how this new political awareness will play out in a political arena--the making of copyrights, patents, and similar exclusive rights--that for decades has functioned as a technical backwater that could never provoke a major newspaper editorial, and was therefore largely controlled by the industries whose rents it secured. ,{[pg 457]},
+
+2~ THE PROBLEM OF SECURITY
+
+This book as a whole is dedicated to the emergence of commons-based information production and its implications for liberal democracies. Of necessity, the emphasis of this chapter too is on institutional design questions that are driven by the conflict between the industrial and networked information economies. Orthogonal to this conflict, but always relevant to it, is the perennial concern of communications policy with security and crime. Throughout much of the 1990s, this concern manifested primarily as a conflict over encryption. The "crypto-wars," as they were called, revolved around the FBI's efforts to force industry to adopt technology that had a backdoor--then called the "Clipper Chip"--that would facilitate wiretapping and investigation. After retarding encryption adoption in the United States for almost a decade, the federal government ultimately decided that trying to hobble security in most American systems (that is, forcing everyone to adopt weaker encryption) in order to assure that the FBI could better investigate the failures of security that would inevitably follow use of such weak encryption was a bad idea. The fact that encryption research and business was moving overseas--giving criminals alternative sources for obtaining excellent encryption tools while the U.S. industry fell behind--did not help the FBI's cause. The same impulse is to some extent at work again, with the added force of the post-9/11 security mind-set.
+
+One concern is that open wireless networks are available for criminals to hide their tracks--the criminal uses someone else's Internet connection through their unencrypted WiFi access point, and when the authorities successfully track the Internet address back to the WiFi router, they find an innocent neighbor rather than the culprit. This concern has led to some proposals that manufacturers of WiFi routers set their defaults so that, out of the box, the router is encrypted. Given how "sticky" defaults are in technology products, this would have enormously deleterious effects on the development of open wireless networks. Another concern is that free and open-source software reveals its design to anyone who wants to read it. This makes it easier to find flaws that could be exploited by attackers and nearly impossible to hide purposefully designed weaknesses, such as susceptibility to wiretapping. A third is that a resilient, encrypted, anonymous peer-to-peer network, like FreeNet or some of the major p2p architectures, offers the criminals or terrorists communications systems that are, for all practical purposes, beyond the control of law enforcement and counterterrorism efforts. To the extent ,{[pg 458]}, that they take this form, security concerns tend to support the agenda of the proprietary producers.
+
+However, security concerns need not support proprietary architectures and practices. On the wireless front, there is a very wide range of anonymization techniques available for criminals and terrorists who use the Internet to cover their tracks. The marginally greater difficulty that shutting off access to WiFi routers would impose on determined criminals bent on covering their tracks is unlikely to be worth the loss of an entire approach toward constructing an additional last-mile loop for local telecommunications. One of the core concerns of security is the preservation of network capacity as a critical infrastructure. Another is assuring communications for critical security personnel. Open wireless networks that are built from ad hoc, self-configuring mesh networks are the most robust design for a local communications loop currently available. It is practically impossible to disrupt local communications in such a network, because these networks are designed so that each router will automatically look for the next available neighbor with which to make a network. These systems will self-heal in response to any attack on communications infrastructure as a function of their basic normal operational design. They can then be available both for their primary intended critical missions and for first responders as backup data networks, even when main systems have been lost--as they were, in fact, lost in downtown Manhattan after the World Trade Center attack. To imagine that security is enhanced by eliminating the possibility that such a backup local communications network will emerge in exchange for forcing criminals to use more anonymizers and proxy servers instead of a neighbor's WiFi router requires a very narrow view of security. Similarly, the same ease of study that makes flaws in free software observable to potential terrorists or criminals makes them available to the community of developers, who quickly shore up the defenses of the programs. 
Over the past decade, security flaws in proprietary programs, which are not open to inspection by such large numbers of developers and testers, have been much more common than security breaches in free software. Those who argue that proprietary software is more secure and allows for better surveillance seem to be largely rehearsing the thought process that typified the FBI's position in the Clipper Chip debate.
+
+More fundamentally, the security concerns represent a lack of ease with the great freedom enabled by the networked information environment. Some of the individuals who can now do more alone and in association with others want to do harm to the United States in particular, and to advanced liberal ,{[pg 459]}, market-based democracies more generally. Others want to trade Nazi memorabilia or child pornography. Just as the Internet makes it harder for authoritarian regimes to control their populations, so too the tremendous openness and freedom of the networked environment requires new ways of protecting open societies from destructive individuals and groups. And yet, particularly in light of the systematic and significant benefits of the networked information economy and its sharing-based open production practices to the core political commitments of liberal democracies, preserving security in these societies by eliminating the technologies that can support improvements in the very freedom being protected is perverse. Given Abu Ghraib and Guantanamo Bay, however, squelching the emergence of an open networked environment and economy hardly seems to be the most glaring of self-defeating moves in the war to protect freedom and human dignity in liberal societies. It is too early to tell whether the security urge will ultimately weigh in on the side of the industrial information economy incumbents, or will instead follow the path of the crypto-wars, and lead security concerns to support the networked information economy's ability to provide survivable, redundant, and effective critical infrastructures and information production and exchange capabilities. If the former, this impulse may well present a formidable obstacle to the emergence of an open networked information environment. ,{[pg 460]},
+
+1~12 Chapter 12 - Conclusion: The Stakes of Information Law and Policy
+
+Complex modern societies have developed in the context of mass media and the industrial information economy. Our theories of growth and innovation assume that industrial models of innovation are dominant. Our theories about how effective communications in complex societies are achieved center on market-based, proprietary models, with a professional commercial core and a dispersed, relatively passive periphery. Our conceptions of human agency, collective deliberation, and common culture in these societies are embedded in the experience and practice of capital-intensive information and cultural production practices that emphasize proprietary, market-based models and starkly separate production from consumption. Our institutional frameworks reflect these conceptual models of information production and exchange, and have come, over the past few years, to enforce these conceptions as practiced reality, even when they need not be.
+
+This book began with four economic observations. First, the baseline conception that proprietary strategies are dominant in our information production system is overstated. The education system, ,{[pg 461]}, from kindergarten to doctoral programs, is thoroughly infused with nonproprietary motivations, social relations, and organizational forms. The arts and sciences are replete with voluntarism and actions oriented primarily toward social-psychological motivations rather than market appropriation. Political and theological discourses are thoroughly based in nonmarket forms and motivations. Perhaps most surprisingly, even industrial research and development, while market oriented, is in most industries not based on proprietary claims of exclusion, but on improved efficiencies and customer relations that can be captured and that drive innovation, without need for proprietary strategies of appropriation. Despite the continued importance of nonproprietary production in information as a practical matter, the conceptual nuance required to acknowledge its importance ran against the grain of the increasingly dominant thesis that property and markets are the roots of all growth and productivity. Partly as a result of the ideological and military conflict with Communism, partly as a result of the theoretical elegance of a simple and tractable solution, policy makers and their advisers came to believe toward the end of the twentieth century that property in information and innovation was like property in wristwatches and automobiles. The more clearly you defined and enforced it, and the closer it was to perfect exclusive rights, the more production you would get. The rising dominance of this conceptual model combined with the rent-seeking lobbying of industrial-model producers to underwrite a fairly rapid and substantial tipping of the institutional ecology of innovation and information production in favor of proprietary models. The U.S.
patent system was overhauled in the early 1980s, in ways that strengthened and broadened the reach and scope of exclusivity. Copyright was vastly expanded in the mid-1970s, and again in the latter 1990s. Trademark was vastly expanded in the 1990s. Other associated rights were created and strengthened throughout these years.
+
+The second economic point is that these expansions of rights operate, as a practical matter, as a tax on nonproprietary models of production in favor of the proprietary models. It makes access to information resources more expensive for all, while improving appropriability only for some. Introducing software patents, for example, may help some of the participants in the one-third of the software industry that depends on sales of finished software items. But it clearly raises the costs without increasing benefits for the two-thirds of the industry that is service based and relational. As a practical matter, the substantial increases in the scope and reach of exclusive rights have adversely affected the operating conditions of nonproprietary producers. ,{[pg 462]},
+
+Universities have begun to seek patents and pay royalties, impeding the sharing of information that typified past practice. Businesses that do not actually rely on asserting patents for their business model have found themselves amassing large patent portfolios at great expense, simply to fend off the threat of suit by others who would try to hold them up. Older documentary films, like Eyes on the Prize, have been hidden from public view for years, because of the cost and complexity of clearing the rights to every piece of footage or trademark that happens to have been captured by the camera. New documentaries require substantially greater funding than would have been necessary to pay for their creation, because of the costs of clearing newly expanded rights.
+
+The third economic observation is that the basic technologies of information processing, storage, and communication have made nonproprietary models more attractive and effective than was ever before possible. Ubiquitous low-cost processors, storage media, and networked connectivity have made it practically feasible for individuals, alone and in cooperation with others, to create and exchange information, knowledge, and culture in patterns of social reciprocity, redistribution, and sharing, rather than proprietary, market-based production. The basic material capital requirements of information production are now in the hands of a billion people around the globe who are connected to each other more or less seamlessly. These material conditions have given individuals a new practical freedom of action. If a person or group wishes to start an information-production project for any reason, that group or person need not raise significant funds to acquire the necessary capital. In the past, the necessity to obtain funds constrained information producers to find a market-based model to sustain the investment, or to obtain government funding. The funding requirements, in turn, subordinated the producers either to the demands of markets, in particular to mass-market appeal, or to the agendas of state bureaucracies. The networked information environment has permitted the emergence to much greater significance of the nonmarket sector, the nonprofit sector, and, most radically, of individuals.
+
+The fourth and final economic observation describes and analyzes the rise of peer production. This cluster of phenomena, from free and open-source software to /{Wikipedia}/ and SETI@Home, presents a stark challenge to conventional thinking about the economics of information production. Indeed, it challenges the economic understanding of the relative roles of market-based and nonmarket production more generally. It is important to see these ,{[pg 463]}, phenomena not as exceptions, quirks, or ephemeral fads, but as indications of a fundamental fact about transactional forms and their relationship to the technological conditions of production. It is a mistake to think that we have only two basic free transactional forms--property-based markets and hierarchically organized firms. We have three, and the third is social sharing and exchange. It is a widespread phenomenon--we live and practice it every day with our household members, coworkers, and neighbors. We coproduce and exchange economic goods and services. But we do not count these in the economic census. Worse, we do not count them in our institutional design. I suggest that the reason social production has been shunted to the peripheries of the advanced economies is that the core economic activities of the economies of steel and coal required large capital investments. These left markets, firms, or state-run enterprises dominant. As the first stage of the information economy emerged, existing information and human creativity--each a "good" with fundamentally different economic characteristics than coal or steel--became important inputs. The organization of production nevertheless followed an industrial model, because information production and exchange itself still required high capital costs--a mechanical printing press, a broadcast station, or later, an IBM mainframe. The current networked stage of the information economy emerged when the barrier of high capital costs was removed.
The total capital cost of communication and creation did not necessarily decline. Capital investment, however, became widely distributed in small dollops, owned by individuals connected in a network. We came to a stage where the core economic activities of the most advanced economies--the production and processing of information--could be achieved by pooling physical capital owned by widely dispersed individuals and groups, who have purchased the capital means for personal, household, and small-business use. Then, human creativity and existing information were left as the main remaining core inputs. Something new and radically different started to happen. People began to apply behaviors they practice in their living rooms or in the elevator--"Here, let me lend you a hand," or "What did you think of last night's speech?"--to production problems that had, throughout the twentieth century, been solved on the model of Ford and General Motors. The rise of peer production is neither mysterious nor fickle when viewed through this lens. It is as rational and efficient given the objectives and material conditions of information production at the turn of the twenty-first century as the assembly line was for the conditions at the turn of the twentieth. The pooling of human creativity and of ,{[pg 464]}, computation, communication, and storage enables nonmarket motivations and relations to play a much larger role in the production of the information environment than it has been able to for at least decades, perhaps for as long as a century and a half.
+
+A genuine shift in the way we produce the information environment that we occupy as individual agents, as citizens, as culturally embedded creatures, and as social beings goes to the core of our basic liberal commitments. Information and communications are core elements of autonomy and of public political discourse and decision making. Communication is the basic unit of social existence. Culture and knowledge, broadly conceived, form the basic frame of reference through which we come to understand ourselves and others in the world. For any liberal political theory--any theory that begins with a focus on individuals and their freedom to be the authors of their own lives in connection with others--the basic questions of how individuals and communities come to know and evaluate are central to the project of characterizing the normative value of institutional, social, and political systems. Independently, in the context of an information- and innovation-centric economy, the basic components of human development also depend on how we produce information and innovation, and how we disseminate its implementations. The emergence of a substantial role for nonproprietary production offers discrete strategies to improve human development around the globe. Productivity in the information economy can be sustained without the kinds of exclusivity that have made it difficult for knowledge, information, and their beneficial implementations to diffuse beyond the circles of the wealthiest nations and social groups. We can provide a detailed and specific account of why the emergence of nonmarket, nonproprietary production to a more significant role than it had in the industrial information economy could offer improvements in the domains of both freedom and justice, without sacrificing--indeed, while improving--productivity.
+
+From the perspective of individual autonomy, the emergence of the networked information economy offers a series of identifiable improvements in how we perceive the world around us, the extent to which we can affect our perceptions of the world, the range of actions open to us and their possible outcomes, and the range of cooperative enterprises we can seek to enter to pursue our choices. It allows us to do more for and by ourselves. It allows us to form loose associations with others who are interested in a particular outcome they share with us, allowing us to provide and explore many more ,{[pg 465]}, diverse avenues of learning and speaking than we could achieve by ourselves or in association solely with others who share long-term strong ties. By creating sources of information and communication facilities that no one owns or exclusively controls, the networked information economy removes some of the most basic opportunities for manipulation of those who depend on information and communication by the owners of the basic means of communications and the producers of the core cultural forms. It does not eliminate the possibility that one person will try to act upon another as object. But it removes the structural constraints that make it impossible to communicate at all without being subject to such action by others.
+
+From the perspective of democratic discourse and a participatory republic, the networked information economy offers a genuine reorganization of the public sphere. Except in the very early stages of a small number of today's democracies, modern democracies have largely developed in the context of mass media as the core of their public spheres. A systematic and broad literature has explored the basic limitations of commercial mass media as the core of the public sphere, as well as its advantages. The emergence of a networked public sphere is attenuating, or even solving, the most basic failings of the mass-mediated public sphere. It attenuates the power of the commercial mass-media owners and those who can pay them. It provides an avenue for substantially more diverse and politically mobilized communication than was feasible in a commercial mass media with a small number of speakers and a vast number of passive recipients. The views of many more individuals and communities can be heard. Perhaps most interestingly, the phenomenon of peer production is now finding its way into the public sphere. It is allowing loosely affiliated individuals across the network to fulfill some of the basic and central functions of the mass media. We are seeing the rise of nonmarket, distributed, and collaborative investigative journalism, critical commentary, and platforms for political mobilization and organization. We are seeing the rise of collaborative filtering and accreditation, which allows individuals engaged in public discourse to be their own source of deciding whom to trust and whose words to question.
+
+A common critique of claims that the Internet improves democracy and autonomy is centered on information overload and fragmentation. What we have seen emerging in the networked environment is a combination of self-conscious peer-production efforts and emergent properties of large systems of human beings that have avoided this unhappy fate. We have seen the adoption of a number of practices that have made for a reasonably navigable ,{[pg 466]}, and coherent information environment without re-creating the mass-media model. There are organized nonmarket projects for producing filtering and accreditation, ranging from the Open Directory Project to mailing lists of like-minded people, like MoveOn.org. There is a widespread cultural practice of mutual pointing and linking; a culture of "Here, see for yourself, I think this is interesting." The basic model of observing the judgments of others as to what is interesting and valuable, coupled with exercising one's own judgment about who shares one's interests and whose judgment seems to be sound, has created a pattern of linking and usage of the Web and the Internet that is substantially more ordered than a cacophonous free-for-all, and less hierarchically organized and controlled by few than was the mass-media environment. It turns out that we are not intellectual lemmings. Given freedom to participate in making our own information environment, we neither descend into Babel, nor do we replicate the hierarchies of the mass-mediated public spheres to avoid it.
+
+The concepts of culture and society occupy more tenuous positions in liberal theory than autonomy and democracy. As a consequence, mapping the effects of the changes in information production and exchange on these domains as aspects of liberal societies is more complex. As to culture, the minimum that we can say is that the networked information environment is rendering culture more transparent. We all "occupy" culture; our perceptions, views, and structures of comprehension are all always embedded in culture. And yet there are degrees to which this fact can be rendered more or less opaque to us as inhabitants of a culture. In the networked information environment, as individuals and groups use their newfound autonomy to engage in personal and collective expression through existing cultural forms, these forms become more transparent--both through practice and through critical examination. The mass-media television culture encouraged passive consumption of polished, finished goods. The emergence of what might be thought of as a newly invigorated folk culture--created by and among individuals and groups, rather than by professionals for passive consumption--provides both a wider set of cultural forms and practices and a better-educated or better-practiced community of "readers" of culture. From the perspective of a liberal theory unwilling simply to ignore the fact that culture structures meaning, personal values, and political conceptions, the emergence of a more transparent and participatory cultural production system is a clear improvement over the commercial, professional mass culture of the twentieth century.
In the domain of social relations, the degree of autonomy and the ,{[pg 467]}, loose associations made possible by the Internet, which play such an important role in the gains for autonomy, democracy, and a critical culture, have raised substantial concerns about how the networked environment will contribute to a further erosion of community and solidarity. As with the Babel objection, however, it appears that we are not using the Internet further to fragment our social lives. The Internet is beginning to replace twentieth-century remote media--television and telephone. The new patterns of use that we are observing as a result of this partial displacement suggest that much of network use focuses on enhancing and deepening existing real-world relations, as well as adding new online relations. Some of the time that used to be devoted to passive reception of standardized finished goods through a television is now reoriented toward communicating and making together with others, in both tightly and loosely knit social relations. Moreover, the basic experience of treating others, including strangers, as potential partners in cooperation contributes to a thickening of the sense of possible social bonds beyond merely co-consumers of standardized products. Peer production can provide a new domain of reasonably thick connection with remote others.
+
+The same capabilities to make information and knowledge, to innovate, and to communicate that lie at the core of the gains in freedom in liberal societies also underlie the primary advances I suggest are possible in terms of justice and human development. From the perspective of a liberal conception of justice, the possibility that more of the basic requirements of human welfare and the capabilities necessary to be a productive, self-reliant individual are available outside of the market insulates access to these basic requirements and capabilities from the happenstance of wealth distribution. From a more substantive perspective, information and innovation are central components of all aspects of a rich meaning of human development. Information and innovation are central to human health--in the production and use of both food and medicines. They are central to human learning and the development of the knowledge any individual needs to make life richer. And they are, and have for more than fifty years been known to be, central to growth of material welfare. Along all three of these dimensions, the emergence of a substantial sector of nonmarket production that is not based on exclusivity and does not require exclusion to feed its own engine contributes to global human development. The same economic characteristics that make exclusive rights in information a tool that imposes barriers to access in advanced economies make these rights a form of tax on technological latecomers. ,{[pg 468]}, What most poor and middle-income countries lack is not human creativity, but access to the basic tools of innovation. The cost of the material requirements of innovation and information production is declining rapidly in many domains, as more can be done with ever-cheaper computers and communications systems. 
But exclusive rights in existing innovation tools and information resources remain a significant barrier to innovation, education, and the use of information-embedded tools and goods in low- and middle-income countries. As new strategies for the production of information and knowledge are making their outputs available freely for use and continuing innovation by everyone everywhere, the networked information economy can begin to contribute significantly to improvements in human development. We already see free software and free and open Internet standards playing that role in information technology sectors. We are beginning to see it take form in academic publishing, raw information, and educational materials, like multilingual encyclopedias, around the globe. More tentatively, we are beginning to see open commons-based innovation models and peer production emerge in areas of agricultural research and bioagricultural innovation, as well as, even more tentatively, in the area of biomedical research. These are still very early examples of what can be produced by the networked information economy, and how it can contribute, even if only to a limited extent, to the capacity of people around the globe to live a long and healthy, well-educated, and materially adequate life.
+
+If the networked information economy is indeed a significant inflection point for modern societies along all these dimensions, it is so because it upsets the dominance of proprietary, market-based production in the sphere of the production of knowledge, information, and culture. This upset is hardly uncontroversial. It will likely result in significant redistribution of wealth, and no less importantly, power, from previously dominant firms and business models to a mixture of individuals and social groups on the one hand, and on the other hand businesses that reshape their business models to take advantage of, and build tools and platforms for, the newly productive social relations. As a practical matter, the major economic and social changes described here are not deterministically preordained by the internal logic of technological progress. What we see instead is that the happenstance of the fabrication technology of computation, in particular, as well as storage and communications, has created technological conditions conducive to a significant realignment of our information production and exchange system. The actual structure of the markets, technologies, and social practices that ,{[pg 469]}, have been destabilized by the introduction of computer-communications networks is now the subject of a large-scale and diffuse institutional battle.
+
+We are seeing significant battles over the organization and legal capabilities of the physical components of the digitally networked environment. Will all broadband infrastructures be privately owned? If so, how wide a margin of control will owners have to prefer some messages over others? Will we, to the contrary, permit open wireless networks to emerge as an infrastructure of first and last resort, owned by its users and exclusively controlled by no one? The drives to greater private ownership in wired infrastructure, and the push by Hollywood and the recording industry to require digital devices mechanically to comply with exclusivity-respecting standards are driving the technical and organizational design toward a closed environment that would be more conducive to proprietary strategies. Open wireless networks and the present business model of the large and successful device companies--particularly, personal computers--to use open standards push in the opposite direction. End-user equipment companies are mostly focused on making their products as valuable as possible to their users, and are therefore oriented toward offering general-purpose platforms that can be deployed by their owners as they choose. These then become equally available for market-oriented as for social behaviors, for proprietary consumption as for productive sharing.
+
+At the logical layer, the ethic of open standards in the technical community, the emergence of the free software movement and its apolitical cousin, open-source development practices, on the one hand, and the antiauthoritarian drives behind encryption hacking and some of the peer-to-peer technologies, on the other hand, are pushing toward an open logical layer available for all to use. The efforts of the content industries to make the Internet manageable--most visibly, the DMCA and the continued dominance of Microsoft over the desktop, and the willingness of courts and legislatures to try to stamp out copyright-defeating technologies even when these obviously have significant benefits to users who have no interest in copying the latest song in order not to pay for the CD--are the primary sources of institutional constraint on the freedom to use the logical resources necessary to communicate in the network.
+
+At the content layer--the universe of existing information, knowledge, and culture--we are observing a fairly systematic trend in law, but a growing countertrend in society. In law, we see a continual tightening of the control that the owners of exclusive rights are given. Copyrights are longer, apply ,{[pg 470]}, to more uses, and are interpreted as reaching into every corner of valuable use. Trademarks are stronger and more aggressive. Patents have expanded to new domains and are given greater leeway. All these changes are skewing the institutional ecology in favor of business models and production practices that are based on exclusive proprietary claims; they are lobbied for by firms that collect large rents if these laws are expanded, followed, and enforced. Social trends in the past few years, however, are pushing in the opposite direction. These are precisely the trends of the networked information economy, of nonmarket production, of an increased ethic of sharing, and an increased ambition to participate in communities of practice that produce vast quantities of information, knowledge, and culture for free use, sharing, and follow-on creation by others.
+
+The political and judicial pressures to form an institutional ecology that is decidedly tilted in favor of proprietary business models are running head-on into the emerging social practices described throughout this book. To flourish, a networked information economy rich in social production practices requires a core common infrastructure, a set of resources necessary for information production and exchange that are open for all to use. This requires physical, logical, and content resources from which to make new statements, encode them for communication, and then render and receive them. At present, these resources are available through a mixture of legal and illegal, planned and unplanned sources. Some aspects come from the happenstance of the trajectories of very different industries that have operated under very different regulatory frameworks: telecommunications, personal computers, software, Internet connectivity, public- and private-sector information, and cultural publication. Some come from more or less widespread adoption of practices of questionable legality or outright illegality. Peer-to-peer file sharing includes many instances of outright illegality practiced by tens of millions of Internet users. But simple uses of quotations, clips, and mix-and-match creative practices that may, or, increasingly, may not, fall into the narrowing category of fair use are also priming the pump of nonmarket production. At the same time, we are seeing an ever-more self-conscious adoption of commons-based practices as a modality of information production and exchange. Free software, Creative Commons, the Public Library of Science, the new guidelines of the National Institutes of Health (NIH) on free publication of papers, new open archiving practices, librarian movements, and many other communities of practice are developing what was a contingent fact into a self-conscious social movement.
As ,{[pg 471]}, the domain of existing information and culture comes to be occupied by information and knowledge produced within these free sharing movements and licensed on the model of open-licensing techniques, the problem of the conflict with the proprietary domain will recede. Twentieth-century materials will continue to be a point of friction, but a sufficient quotient of twenty-first-century materials seems now to be increasingly available from sources that are happy to share them with future users and creators. If this social-cultural trend continues over time, access to content resources will present an ever-lower barrier to nonmarket production.
+
+The relationship of institutional ecology to social practice is a complex one. It is hard to predict at this point whether a successful sustained effort on the part of the industrial information economy producers will succeed in flipping even more of the institutional toggles in favor of proprietary production. There is already a more significant social movement than existed in the 1990s in the United States, in Europe, and around the world that is resisting current efforts to further enclose the information environment. This social movement is getting support from large and wealthy industrial players who have reoriented their business model to become the platforms, toolmakers, and service providers for and alongside the emerging nonmarket sector. IBM, Hewlett-Packard, and Cisco, for example, might stand shoulder to shoulder with a nongovernment organization (NGO) like Public Knowledge in an effort to block legislation that would require personal computers to comply with standards set by Hollywood for copy protection. When Hollywood sued Grokster, the file-sharing company, and asked the Supreme Court to expand contributory liability of the makers of technologies that are used to infringe copyrights, it found itself arrayed against amicus briefs filed by Intel, the Consumer Electronics Association, and Verizon, SBC, AT&T, MCI, and Sun Microsystems, alongside briefs from the Free Software Foundation, and the Consumer Federation of America, Consumers Union, and Public Knowledge.
+
+Even if laws that favor enclosure do pass in one, or even many jurisdictions, it is not entirely clear that law can unilaterally turn back a trend that combines powerful technological, social, and economic drivers. We have seen even in the area of peer-to-peer networks, where the arguments of the incumbents seemed the most morally compelling and where their legal successes have been the most complete, that stemming the tide of change is difficult--perhaps impossible. Bits are a part of a flow in the networked information environment, and trying to legislate that fact away in order to ,{[pg 472]}, preserve a business model that sells particular collections of bits as discrete, finished goods may simply prove to be impossible. Nonetheless, legal constraints significantly shape the parameters of what companies and individuals decide to market and use. It is not hard to imagine that, were Napster seen as legal, it would have by now encompassed a much larger portion of the population of Internet users than the number of users who actually now use file-sharing networks. Whether the same moderate levels of success in shaping behavior can be replicated in areas where the claims of the incumbents are much more tenuous, as a matter of both policy and moral claims--such as in the legal protection of anticircumvention devices or the contraction of fair use--is an even harder question. The object of a discussion of the institutional ecology of the networked environment is, in any event, not prognostication. It is to provide a moral framework within which to understand the many and diverse policy battles we have seen over the past decade, and which undoubtedly will continue into the coming decade, that I have written this book.
+
+We are in the midst of a quite basic transformation in how we perceive the world around us, and how we act, alone and in concert with others, to shape our own understanding of the world we occupy and that of others with whom we share it. Patterns of social practice, long suppressed as economic activities in the context of industrial economy, have now emerged to greater importance than they have had in a century and a half. With them, they bring the possibility of genuine gains in the very core of liberal commitments, in both advanced economies and around the globe. The rise of commons-based information production, of individuals and loose associations producing information in nonproprietary forms, presents a genuine discontinuity from the industrial information economy of the twentieth century. It brings with it great promise, and great uncertainty. We have early intimations as to how market-based enterprises can adjust to make room for this newly emerging phenomenon--IBM's adoption of open source, Second Life's adoption of user-created immersive entertainment, or Open Source Technology Group's development of a platform for Slashdot. We also have very clear examples of businesses that have decided to fight the new changes by using every trick in the book, and some, like injecting corrupt files into peer-to-peer networks, that are decidedly not in the book. Law and regulation form one important domain in which these battles over the shape of our emerging information production system are fought. As we observe these battles; as we participate in them as individuals choosing how to behave and ,{[pg 473]}, what to believe, as citizens, lobbyists, lawyers, or activists; as we act out these legal battles as legislators, judges, or treaty negotiators, it is important that we understand the normative stakes of what we are doing.
+
+We have an opportunity to change the way we create and exchange information, knowledge, and culture. By doing so, we can make the twenty-first century one that offers individuals greater autonomy, political communities greater democracy, and societies greater opportunities for cultural self-reflection and human connection. We can remove some of the transactional barriers to material opportunity, and improve the state of human development everywhere. Perhaps these changes will be the foundation of a true transformation toward more liberal and egalitarian societies. Perhaps they will merely improve, in well-defined but smaller ways, human life along each of these dimensions. That alone is more than enough to justify an embrace of the networked information economy by anyone who values human welfare, development, and freedom.
+
+1~blurb Blurb
+
+_1 "In this book, Benkler establishes himself as the leading intellectual of the information age. Profoundly rich in its insight and truth, this work will be the central text for understanding how networks have changed how we understand the world. No work to date has more carefully or convincingly made the case for a fundamental change in how we understand the economy of society." Lawrence Lessig, professor of law, Stanford Law School
+
+_1 "A lucid, powerful, and optimistic account of a revolution in the making." Siva Vaidhyanathan, author of /{The Anarchist in the Library}/
+
+_1 "This deeply researched book documents the fundamental changes in the ways in which we produce and share ideas, information, and entertainment. Then, drawing widely on the literatures of philosophy, economics, and political theory, it shows why these changes should be welcomed, not resisted. The trends examined, if allowed to continue, will radically alter our lives--and no other scholar describes them so clearly or champions them more effectively than Benkler." William W. Fisher III, Hale and Dorr Professor of Intellectual Property Law, Harvard University, and director, Berkman Center for Internet and Society
+
+_1 "A magnificent achievement. Yochai Benkler shows us how the Internet enables new commons-based methods for producing goods, remaking culture, and participating in public life. /{The Wealth of Networks}/ is an indispensable guide to the political economy of our digitally networked world." Jack M. Balkin, professor of law and director of the Information Society Project, Yale University.
+
+A dedicated wiki may be found at: http://www.benkler.org/wealth_of_networks/index.php/Main_Page
+
+Including a pdf: http://www.benkler.org/wonchapters.html
+
+The author's website is: http://www.benkler.org/
+
+The book may be purchased at bookshops, including { Amazon.com }http://www.amazon.com/Wealth-Networks-Production-Transforms-Markets/dp/0300110561/ or at { Barnes & Noble }http://search.barnesandnoble.com/booksearch/isbnInquiry.asp?isbn=0300110561
+
+% Not final copy: markup not final, output numbering is therefore subject to change
+
+% italics need checking
+
+% hyphenation in markup text needs review, pdf to text transformation not perfect
+
+% book index can be preserved as book pages have been kept after a fashion, should be added
+
+% original footnotes do not have meaning in this copy and have been removed
diff --git a/data/sisu_markup_samples/non-free/un_contracts_international_sale_of_goods_convention_1980.sst b/data/sisu_markup_samples/non-free/un_contracts_international_sale_of_goods_convention_1980.sst
new file mode 100644
index 0000000..ff9151e
--- /dev/null
+++ b/data/sisu_markup_samples/non-free/un_contracts_international_sale_of_goods_convention_1980.sst
@@ -0,0 +1,783 @@
+% SiSU 0.38
+
+@title: United Nations Convention On Contracts For The International Sale Of Goods, 1980 (CISG)
+
+@creator: http://www.un.org/ United Nations (UN)
+
+@source: UNCITRAL, United Nations
+
+@subject: UNCITRAL, United Nations, sale of goods
+
+@keywords: UNCITRAL, United Nations, sale of goods
+
+@type: convention, international sales, sale of goods, UNCITRAL, United Nations
+
+@date: 1980
+
+@structure: PART; Chapter; Section; Article;
+
+@level: new=:A,:B; break=:C
+
+@skin: skin_sisu
+
+:A~ United Nations Convention On Contracts For The International Sale Of Goods, 1980 (CISG)
+
+1~pre [Preamble]-#
+
+THE STATES PARTIES TO THIS CONVENTION,
+
+BEARING IN MIND the broad objectives in the resolutions adopted by the sixth special session of the General Assembly of the United Nations on the establishment of a New International Economic Order,
+
+CONSIDERING that the development of international trade on the basis of equality and mutual benefit is an important element in promoting friendly relations among States,
+
+BEING OF THE OPINION that the adoption of uniform rules which govern contracts for the international sale of goods and take into account the different social, economic and legal systems would contribute to the removal of legal barriers in international trade and promote the development of international trade,
+
+HAVE DECREED as follows:
+
+PART I - Sphere of Application and General Provisions
+
+Chapter I - Sphere of Application
+
+Article 1
+
+(1) This Convention applies to contracts of sale of goods between parties whose places of business are in different States:
+
+(a) when the States are Contracting States; or
+
+(b) when the rules of private international law lead to the application of the law of a Contracting State.
+
+(2) The fact that the parties have their places of business in different States is to be disregarded whenever this fact does not appear either from the contract or from any dealings between, or from information disclosed by, the parties at any time before or at the conclusion of the contract.
+
+(3) Neither the nationality of the parties nor the civil or commercial character of the parties or of the contract is to be taken into consideration in determining the application of this Convention.
+
+Article 2
+
+This Convention does not apply to sales:
+
+(a) of goods bought for personal, family or household use, unless the seller, at any time before or at the conclusion of the contract, neither knew nor ought to have known that the goods were bought for any such use;
+
+(b) by auction;
+
+(c) on execution or otherwise by authority of law;
+
+(d) of stocks, shares, investment securities, negotiable instruments or money;
+
+(e) of ships, vessels, hovercraft or aircraft;
+
+(f) of electricity.
+
+Article 3
+
+(1) Contracts for the supply of goods to be manufactured or produced are to be considered sales unless the party who orders the goods undertakes to supply a substantial part of the materials necessary for such manufacture or production.
+
+(2) This Convention does not apply to contracts in which the preponderant part of the obligations of the party who furnishes the goods consists in the supply of labour or other services.
+
+Article 4
+
+This Convention governs only the formation of the contract of sale and the rights and obligations of the seller and the buyer arising from such a contract. In particular, except as otherwise expressly provided in this Convention, it is not concerned with:
+
+(a) the validity of the contract or of any of its provisions or of any usage;
+
+(b) the effect which the contract may have on the property in the goods sold.
+
+Article 5
+
+This Convention does not apply to the liability of the seller for death or personal injury caused by the goods to any person.
+
+Article 6
+
+The parties may exclude the application of this Convention or, subject to article 12, derogate from or vary the effect of any of its provisions.
+
+Chapter II - General Provisions
+
+Article 7
+
+(1) In the interpretation of this Convention, regard is to be had to its international character and to the need to promote uniformity in its application and the observance of good faith in international trade.
+
+(2) Questions concerning matters governed by this Convention which are not expressly settled in it are to be settled in conformity with the general principles on which it is based or, in the absence of such principles, in conformity with the law applicable by virtue of the rules of private international law.
+
+Article 8
+
+(1) For the purposes of this Convention statements made by and other conduct of a party are to be interpreted according to his intent where the other party knew or could not have been unaware what that intent was.
+
+(2) If the preceding paragraph is not applicable, statements made by and other conduct of a party are to be interpreted according to the understanding that a reasonable person of the same kind as the other party would have had in the same circumstances.
+
+(3) In determining the intent of a party or the understanding a reasonable person would have had, due consideration is to be given to all relevant circumstances of the case including the negotiations, any practices which the parties have established between themselves, usages and any subsequent conduct of the parties.
+
+Article 9
+
+(1) The parties are bound by any usage to which they have agreed and by any practices which they have established between themselves.
+
+(2) The parties are considered, unless otherwise agreed, to have impliedly made applicable to their contract or its formation a usage of which the parties knew or ought to have known and which in international trade is widely known to, and regularly observed by, parties to contracts of the type involved in the particular trade concerned.
+
+Article 10
+
+For the purposes of this Convention:
+
+(a) if a party has more than one place of business, the place of business is that which has the closest relationship to the contract and its performance, having regard to the circumstances known to or contemplated by the parties at any time before or at the conclusion of the contract;
+
+(b) if a party does not have a place of business, reference is to be made to his habitual residence.
+
+Article 11
+
+A contract of sale need not be concluded in or evidenced by writing and is not subject to any other requirement as to form. It may be proved by any means, including witnesses.
+
+Article 12
+
+Any provision of article 11, article 29 or Part II of this Convention that allows a contract of sale or its modification or termination by agreement or any offer, acceptance or other indication of intention to be made in any form other than in writing does not apply where any party has his place of business in a Contracting State which has made a declaration under article 96 of this Convention. The parties may not derogate from or vary the effect of this article.
+
+Article 13
+
+For the purposes of this Convention "writing" includes telegram and telex.
+
+PART II - Formation of the Contract
+
+Article 14
+
+(1) A proposal for concluding a contract addressed to one or more specific persons constitutes an offer if it is sufficiently definite and indicates the intention of the offeror to be bound in case of acceptance. A proposal is sufficiently definite if it indicates the goods and expressly or implicitly fixes or makes provision for determining the quantity and the price.
+
+(2) A proposal other than one addressed to one or more specific persons is to be considered merely as an invitation to make offers, unless the contrary is clearly indicated by the person making the proposal.
+
+Article 15
+
+(1) An offer becomes effective when it reaches the offeree.
+
+(2) An offer, even if it is irrevocable, may be withdrawn if the withdrawal reaches the offeree before or at the same time as the offer.
+
+Article 16
+
+(1) Until a contract is concluded an offer may be revoked if the revocation reaches the offeree before he has dispatched an acceptance.
+
+(2) However, an offer cannot be revoked:
+
+(a) if it indicates, whether by stating a fixed time for acceptance or otherwise, that it is irrevocable; or
+
+(b) if it was reasonable for the offeree to rely on the offer as being irrevocable and the offeree has acted in reliance on the offer.
+
+Article 17
+
+An offer, even if it is irrevocable, is terminated when a rejection reaches the offeror.
+
+Article 18
+
+(1) A statement made by or other conduct of the offeree indicating assent to an offer is an acceptance. Silence or inactivity does not in itself amount to acceptance.
+
+(2) An acceptance of an offer becomes effective at the moment the indication of assent reaches the offeror. An acceptance is not effective if the indication of assent does not reach the offeror within the time he has fixed or, if no time is fixed, within a reasonable time, due account being taken of the circumstances of the transaction, including the rapidity of the means of communication employed by the offeror. An oral offer must be accepted immediately unless the circumstances indicate otherwise.
+
+(3) However, if, by virtue of the offer or as a result of practices which the parties have established between themselves or of usage, the offeree may indicate assent by performing an act, such as one relating to the dispatch of the goods or payment of the price, without notice to the offeror, the acceptance is effective at the moment the act is performed, provided that the act is performed within the period of time laid down in the preceding paragraph.
+
+Article 19
+
+(1) A reply to an offer which purports to be an acceptance but contains additions, limitations or other modifications is a rejection of the offer and constitutes a counter-offer.
+
+(2) However, a reply to an offer which purports to be an acceptance but contains additional or different terms which do not materially alter the terms of the offer constitutes an acceptance, unless the offeror, without undue delay, objects orally to the discrepancy or dispatches a notice to that effect. If he does not so object, the terms of the contract are the terms of the offer with the modifications contained in the acceptance.
+
+(3) Additional or different terms relating, among other things, to the price, payment, quality and quantity of the goods, place and time of delivery, extent of one party's liability to the other or the settlement of disputes are considered to alter the terms of the offer materially.
+
+Article 20
+
+(1) A period of time for acceptance fixed by the offeror in a telegram or a letter begins to run from the moment the telegram is handed in for dispatch or from the date shown on the letter or, if no such date is shown, from the date shown on the envelope. A period of time for acceptance fixed by the offeror by telephone, telex or other means of instantaneous communication, begins to run from the moment that the offer reaches the offeree.
+
+(2) Official holidays or non-business days occurring during the period for acceptance are included in calculating the period. However, if a notice of acceptance cannot be delivered at the address of the offeror on the last day of the period because that day falls on an official holiday or a non-business day at the place of business of the offeror, the period is extended until the first business day which follows.
+
+Article 21
+
+(1) A late acceptance is nevertheless effective as an acceptance if without delay the offeror orally so informs the offeree or dispatches a notice to that effect.
+
+(2) If a letter or other writing containing a late acceptance shows that it has been sent in such circumstances that if its transmission had been normal it would have reached the offeror in due time, the late acceptance is effective as an acceptance unless, without delay, the offeror orally informs the offeree that he considers his offer as having lapsed or dispatches a notice to that effect.
+
+Article 22
+
+An acceptance may be withdrawn if the withdrawal reaches the offeror before or at the same time as the acceptance would have become effective.
+
+Article 23
+
+A contract is concluded at the moment when an acceptance of an offer becomes effective in accordance with the provisions of this Convention.
+
+Article 24
+
+For the purposes of this Part of the Convention, an offer, declaration of acceptance or any other indication of intention "reaches" the addressee when it is made orally to him or delivered by any other means to him personally, to his place of business or mailing address or, if he does not have a place of business or mailing address, to his habitual residence.
+
+PART III - Sale of Goods
+
+Chapter I - General Provisions
+
+Article 25
+
+A breach of contract committed by one of the parties is fundamental if it results in such detriment to the other party as substantially to deprive him of what he is entitled to expect under the contract, unless the party in breach did not foresee and a reasonable person of the same kind in the same circumstances would not have foreseen such a result.
+
+Article 26
+
+A declaration of avoidance of the contract is effective only if made by notice to the other party.
+
+Article 27
+
+Unless otherwise expressly provided in this Part of the Convention, if any notice, request or other communication is given or made by a party in accordance with this Part and by means appropriate in the circumstances, a delay or error in the transmission of the communication or its failure to arrive does not deprive that party of the right to rely on the communication.
+
+Article 28
+
+If, in accordance with the provisions of this Convention, one party is entitled to require performance of any obligation by the other party, a court is not bound to enter a judgement for specific performance unless the court would do so under its own law in respect of similar contracts of sale not governed by this Convention.
+
+Article 29
+
+(1) A contract may be modified or terminated by the mere agreement of the parties.
+
+(2) A contract in writing which contains a provision requiring any modification or termination by agreement to be in writing may not be otherwise modified or terminated by agreement. However, a party may be precluded by his conduct from asserting such a provision to the extent that the other party has relied on that conduct.
+
+Chapter II - Obligations of the Seller
+
+Article 30
+
+The seller must deliver the goods, hand over any documents relating to them and transfer the property in the goods, as required by the contract and this Convention.
+
+Section I - Delivery of the goods and handing over of documents
+
+Article 31
+
+If the seller is not bound to deliver the goods at any other particular place, his obligation to deliver consists:
+
+(a) if the contract of sale involves carriage of the goods - in handing the goods over to the first carrier for transmission to the buyer;
+
+(b) if, in cases not within the preceding subparagraph, the contract relates to specific goods, or unidentified goods to be drawn from a specific stock or to be manufactured or produced, and at the time of the conclusion of the contract the parties knew that the goods were at, or were to be manufactured or produced at, a particular place - in placing the goods at the buyer's disposal at that place;
+
+(c) in other cases - in placing the goods at the buyer's disposal at the place where the seller had his place of business at the time of the conclusion of the contract.
+
+Article 32
+
+(1) If the seller, in accordance with the contract or this Convention, hands the goods over to a carrier and if the goods are not clearly identified to the contract by markings on the goods, by shipping documents or otherwise, the seller must give the buyer notice of the consignment specifying the goods.
+
+(2) If the seller is bound to arrange for carriage of the goods, he must make such contracts as are necessary for carriage to the place fixed by means of transportation appropriate in the circumstances and according to the usual terms for such transportation.
+
+(3) If the seller is not bound to effect insurance in respect of the carriage of the goods, he must, at the buyer's request, provide him with all available information necessary to enable him to effect such insurance.
+
+Article 33
+
+The seller must deliver the goods:
+
+(a) if a date is fixed by or determinable from the contract, on that date;
+
+(b) if a period of time is fixed by or determinable from the contract, at any time within that period unless circumstances indicate that the buyer is to choose a date; or
+
+(c) in any other case, within a reasonable time after the conclusion of the contract.
+
+Article 34
+
+If the seller is bound to hand over documents relating to the goods, he must hand them over at the time and place and in the form required by the contract. If the seller has handed over documents before that time, he may, up to that time, cure any lack of conformity in the documents, if the exercise of this right does not cause the buyer unreasonable inconvenience or unreasonable expense. However, the buyer retains any right to claim damages as provided for in this Convention.
+
+Section II - Conformity of the goods and third party claims
+
+Article 35
+
+(1) The seller must deliver goods which are of the quantity, quality and description required by the contract and which are contained or packaged in the manner required by the contract.
+
+(2) Except where the parties have agreed otherwise, the goods do not conform with the contract unless they:
+
+(a) are fit for the purposes for which goods of the same description would ordinarily be used;
+
+(b) are fit for any particular purpose expressly or impliedly made known to the seller at the time of the conclusion of the contract, except where the circumstances show that the buyer did not rely, or that it was unreasonable for him to rely, on the seller's skill and judgement;
+
+(c) possess the qualities of goods which the seller has held out to the buyer as a sample or model;
+
+(d) are contained or packaged in the manner usual for such goods or, where there is no such manner, in a manner adequate to preserve and protect the goods.
+
+(3) The seller is not liable under subparagraphs (a) to (d) of the preceding paragraph for any lack of conformity of the goods if at the time of the conclusion of the contract the buyer knew or could not have been unaware of such lack of conformity.
+
+Article 36
+
+(1) The seller is liable in accordance with the contract and this Convention for any lack of conformity which exists at the time when the risk passes to the buyer, even though the lack of conformity becomes apparent only after that time.
+
+(2) The seller is also liable for any lack of conformity which occurs after the time indicated in the preceding paragraph and which is due to a breach of any of his obligations, including a breach of any guarantee that for a period of time the goods will remain fit for their ordinary purpose or for some particular purpose or will retain specified qualities or characteristics.
+
+Article 37
+
+If the seller has delivered goods before the date for delivery, he may, up to that date, deliver any missing part or make up any deficiency in the quantity of the goods delivered, or deliver goods in replacement of any non-conforming goods delivered or remedy any lack of conformity in the goods delivered, provided that the exercise of this right does not cause the buyer unreasonable inconvenience or unreasonable expense. However, the buyer retains any right to claim damages as provided for in this Convention.
+
+Article 38
+
+(1) The buyer must examine the goods, or cause them to be examined, within as short a period as is practicable in the circumstances.
+
+(2) If the contract involves carriage of the goods, examination may be deferred until after the goods have arrived at their destination.
+
+(3) If the goods are redirected in transit or redispatched by the buyer without a reasonable opportunity for examination by him and at the time of the conclusion of the contract the seller knew or ought to have known of the possibility of such redirection or redispatch, examination may be deferred until after the goods have arrived at the new destination.
+
+Article 39
+
+(1) The buyer loses the right to rely on a lack of conformity of the goods if he does not give notice to the seller specifying the nature of the lack of conformity within a reasonable time after he has discovered it or ought to have discovered it.
+
+(2) In any event, the buyer loses the right to rely on a lack of conformity of the goods if he does not give the seller notice thereof at the latest within a period of two years from the date on which the goods were actually handed over to the buyer, unless this time-limit is inconsistent with a contractual period of guarantee.
+
+Article 40
+
+The seller is not entitled to rely on the provisions of articles 38 and 39 if the lack of conformity relates to facts of which he knew or could not have been unaware and which he did not disclose to the buyer.
+
+Article 41
+
+The seller must deliver goods which are free from any right or claim of a third party, unless the buyer agreed to take the goods subject to that right or claim. However, if such right or claim is based on industrial property or other intellectual property, the seller's obligation is governed by article 42.
+
+Article 42
+
+(1) The seller must deliver goods which are free from any right or claim of a third party based on industrial property or other intellectual property, of which at the time of the conclusion of the contract the seller knew or could not have been unaware, provided that the right or claim is based on industrial property or other intellectual property:
+
+(a) under the law of the State where the goods will be resold or otherwise used, if it was contemplated by the parties at the time of the conclusion of the contract that the goods would be resold or otherwise used in that State; or
+
+(b) in any other case, under the law of the State where the buyer has his place of business.
+
+(2) The obligation of the seller under the preceding paragraph does not extend to cases where:
+
+(a) at the time of the conclusion of the contract the buyer knew or could not have been unaware of the right or claim; or
+
+(b) the right or claim results from the seller's compliance with technical drawings, designs, formulae or other such specifications furnished by the buyer.
+
+Article 43
+
+(1) The buyer loses the right to rely on the provisions of article 41 or article 42 if he does not give notice to the seller specifying the nature of the right or claim of the third party within a reasonable time after he has become aware or ought to have become aware of the right or claim.
+
+(2) The seller is not entitled to rely on the provisions of the preceding paragraph if he knew of the right or claim of the third party and the nature of it.
+
+Article 44
+
+Notwithstanding the provisions of paragraph (1) of article 39 and paragraph (1) of article 43, the buyer may reduce the price in accordance with article 50 or claim damages, except for loss of profit, if he has a reasonable excuse for his failure to give the required notice.
+
+Section III - Remedies for breach of contract by the seller
+
+Article 45
+
+(1) If the seller fails to perform any of his obligations under the contract or this Convention, the buyer may:
+
+(a) exercise the rights provided in articles 46 to 52;
+
+(b) claim damages as provided in articles 74 to 77.
+
+(2) The buyer is not deprived of any right he may have to claim damages by exercising his right to other remedies.
+
+(3) No period of grace may be granted to the seller by a court or arbitral tribunal when the buyer resorts to a remedy for breach of contract.
+
+Article 46
+
+(1) The buyer may require performance by the seller of his obligations unless the buyer has resorted to a remedy which is inconsistent with this requirement.
+
+(2) If the goods do not conform with the contract, the buyer may require delivery of substitute goods only if the lack of conformity constitutes a fundamental breach of contract and a request for substitute goods is made either in conjunction with notice given under article 39 or within a reasonable time thereafter.
+
+(3) If the goods do not conform with the contract, the buyer may require the seller to remedy the lack of conformity by repair, unless this is unreasonable having regard to all the circumstances. A request for repair must be made either in conjunction with notice given under article 39 or within a reasonable time thereafter.
+
+Article 47
+
+(1) The buyer may fix an additional period of time of reasonable length for performance by the seller of his obligations.
+
+(2) Unless the buyer has received notice from the seller that he will not perform within the period so fixed, the buyer may not, during that period, resort to any remedy for breach of contract. However, the buyer is not deprived thereby of any right he may have to claim damages for delay in performance.
+
+Article 48
+
+(1) Subject to article 49, the seller may, even after the date for delivery, remedy at his own expense any failure to perform his obligations, if he can do so without unreasonable delay and without causing the buyer unreasonable inconvenience or uncertainty of reimbursement by the seller of expenses advanced by the buyer. However, the buyer retains any right to claim damages as provided for in this Convention.
+
+(2) If the seller requests the buyer to make known whether he will accept performance and the buyer does not comply with the request within a reasonable time, the seller may perform within the time indicated in his request. The buyer may not, during that period of time, resort to any remedy which is inconsistent with performance by the seller.
+
+(3) A notice by the seller that he will perform within a specified period of time is assumed to include a request, under the preceding paragraph, that the buyer make known his decision.
+
+(4) A request or notice by the seller under paragraph (2) or (3) of this article is not effective unless received by the buyer.
+
+Article 49
+
+(1) The buyer may declare the contract avoided:
+
+(a) if the failure by the seller to perform any of his obligations under the contract or this Convention amounts to a fundamental breach of contract; or
+
+(b) in case of non-delivery, if the seller does not deliver the goods within the additional period of time fixed by the buyer in accordance with paragraph (1) of article 47 or declares that he will not deliver within the period so fixed.
+
+(2) However, in cases where the seller has delivered the goods, the buyer loses the right to declare the contract avoided unless he does so:
+
+(a) in respect of late delivery, within a reasonable time after he has become aware that delivery has been made;
+
+(b) in respect of any breach other than late delivery, within a reasonable time:
+
+(i) after he knew or ought to have known of the breach;
+
+(ii) after the expiration of any additional period of time fixed by the buyer in accordance with paragraph (1) of article 47, or after the seller has declared that he will not perform his obligations within such an additional period; or
+
+(iii) after the expiration of any additional period of time indicated by the seller in accordance with paragraph (2) of article 48, or after the buyer has declared that he will not accept performance.
+
+Article 50
+
+If the goods do not conform with the contract and whether or not the price has already been paid, the buyer may reduce the price in the same proportion as the value that the goods actually delivered had at the time of the delivery bears to the value that conforming goods would have had at that time. However, if the seller remedies any failure to perform his obligations in accordance with article 37 or article 48 or if the buyer refuses to accept performance by the seller in accordance with those articles, the buyer may not reduce the price.
+
+Article 51
+
+(1) If the seller delivers only a part of the goods or if only a part of the goods delivered is in conformity with the contract, articles 46 to 50 apply in respect of the part which is missing or which does not conform.
+
+(2) The buyer may declare the contract avoided in its entirety only if the failure to make delivery completely or in conformity with the contract amounts to a fundamental breach of the contract.
+
+Article 52
+
+(1) If the seller delivers the goods before the date fixed, the buyer may take delivery or refuse to take delivery.
+
+(2) If the seller delivers a quantity of goods greater than that provided for in the contract, the buyer may take delivery or refuse to take delivery of the excess quantity. If the buyer takes delivery of all or part of the excess quantity, he must pay for it at the contract rate.
+
+Chapter III - Obligations of the Buyer
+
+Article 53
+
+The buyer must pay the price for the goods and take delivery of them as required by the contract and this Convention.
+
+Section I - Payment of the price
+
+Article 54
+
+The buyer's obligation to pay the price includes taking such steps and complying with such formalities as may be required under the contract or any laws and regulations to enable payment to be made.
+
+Article 55
+
+Where a contract has been validly concluded but does not expressly or implicitly fix or make provision for determining the price, the parties are considered, in the absence of any indication to the contrary, to have impliedly made reference to the price generally charged at the time of the conclusion of the contract for such goods sold under comparable circumstances in the trade concerned.
+
+Article 56
+
+If the price is fixed according to the weight of the goods, in case of doubt it is to be determined by the net weight.
+
+Article 57
+
+(1) If the buyer is not bound to pay the price at any other particular place, he must pay it to the seller:
+
+(a) at the seller's place of business; or
+
+(b) if the payment is to be made against the handing over of the goods or of documents, at the place where the handing over takes place.
+
+(2) The seller must bear any increase in the expenses incidental to payment which is caused by a change in his place of business subsequent to the conclusion of the contract.
+
+Article 58
+
+(1) If the buyer is not bound to pay the price at any other specific time, he must pay it when the seller places either the goods or documents controlling their disposition at the buyer's disposal in accordance with the contract and this Convention. The seller may make such payment a condition for handing over the goods or documents.
+
+(2) If the contract involves carriage of the goods, the seller may dispatch the goods on terms whereby the goods, or documents controlling their disposition, will not be handed over to the buyer except against payment of the price.
+
+(3) The buyer is not bound to pay the price until he has had an opportunity to examine the goods, unless the procedures for delivery or payment agreed upon by the parties are inconsistent with his having such an opportunity.
+
+Article 59
+
+The buyer must pay the price on the date fixed by or determinable from the contract and this Convention without the need for any request or compliance with any formality on the part of the seller.
+
+Section II - Taking delivery
+
+Article 60
+
+The buyer's obligation to take delivery consists:
+
+(a) in doing all the acts which could reasonably be expected of him in order to enable the seller to make delivery; and
+
+(b) in taking over the goods.
+
+Section III - Remedies for breach of contract by the buyer
+
+Article 61
+
+(1) If the buyer fails to perform any of his obligations under the contract or this Convention, the seller may:
+
+(a) exercise the rights provided in articles 62 to 65;
+
+(b) claim damages as provided in articles 74 to 77.
+
+(2) The seller is not deprived of any right he may have to claim damages by exercising his right to other remedies.
+
+(3) No period of grace may be granted to the buyer by a court or arbitral tribunal when the seller resorts to a remedy for breach of contract.
+
+Article 62
+
+The seller may require the buyer to pay the price, take delivery or perform his other obligations, unless the seller has resorted to a remedy which is inconsistent with this requirement.
+
+Article 63
+
+(1) The seller may fix an additional period of time of reasonable length for performance by the buyer of his obligations.
+
+(2) Unless the seller has received notice from the buyer that he will not perform within the period so fixed, the seller may not, during that period, resort to any remedy for breach of contract. However, the seller is not deprived thereby of any right he may have to claim damages for delay in performance.
+
+Article 64
+
+(1) The seller may declare the contract avoided:
+
+(a) if the failure by the buyer to perform any of his obligations under the contract or this Convention amounts to a fundamental breach of contract; or
+
+(b) if the buyer does not, within the additional period of time fixed by the seller in accordance with paragraph (1) of article 63, perform his obligation to pay the price or take delivery of the goods, or if he declares that he will not do so within the period so fixed.
+
+(2) However, in cases where the buyer has paid the price, the seller loses the right to declare the contract avoided unless he does so:
+
+(a) in respect of late performance by the buyer, before the seller has become aware that performance has been rendered; or
+
+(b) in respect of any breach other than late performance by the buyer, within a reasonable time:
+
+(i) after the seller knew or ought to have known of the breach; or
+
+(ii) after the expiration of any additional period of time fixed by the seller in accordance with paragraph (1) of article 63, or after the buyer has declared that he will not perform his obligations within such an additional period.
+
+Article 65
+
+(1) If under the contract the buyer is to specify the form, measurement or other features of the goods and he fails to make such specification either on the date agreed upon or within a reasonable time after receipt of a request from the seller, the seller may, without prejudice to any other rights he may have, make the specification himself in accordance with the requirements of the buyer that may be known to him.
+
+(2) If the seller makes the specification himself, he must inform the buyer of the details thereof and must fix a reasonable time within which the buyer may make a different specification. If, after receipt of such a communication, the buyer fails to do so within the time so fixed, the specification made by the seller is binding.
+
+Chapter IV - Passing of Risk
+
+Article 66
+
+Loss of or damage to the goods after the risk has passed to the buyer does not discharge him from his obligation to pay the price, unless the loss or damage is due to an act or omission of the seller.
+
+Article 67
+
+(1) If the contract of sale involves carriage of the goods and the seller is not bound to hand them over at a particular place, the risk passes to the buyer when the goods are handed over to the first carrier for transmission to the buyer in accordance with the contract of sale. If the seller is bound to hand the goods over to a carrier at a particular place, the risk does not pass to the buyer until the goods are handed over to the carrier at that place. The fact that the seller is authorized to retain documents controlling the disposition of the goods does not affect the passage of the risk.
+
+(2) Nevertheless, the risk does not pass to the buyer until the goods are clearly identified to the contract, whether by markings on the goods, by shipping documents, by notice given to the buyer or otherwise.
+
+Article 68
+
+The risk in respect of goods sold in transit passes to the buyer from the time of the conclusion of the contract. However, if the circumstances so indicate, the risk is assumed by the buyer from the time the goods were handed over to the carrier who issued the documents embodying the contract of carriage. Nevertheless, if at the time of the conclusion of the contract of sale the seller knew or ought to have known that the goods had been lost or damaged and did not disclose this to the buyer, the loss or damage is at the risk of the seller.
+
+Article 69
+
+(1) In cases not within articles 67 and 68, the risk passes to the buyer when he takes over the goods or, if he does not do so in due time, from the time when the goods are placed at his disposal and he commits a breach of contract by failing to take delivery.
+
+(2) However, if the buyer is bound to take over the goods at a place other than a place of business of the seller, the risk passes when delivery is due and the buyer is aware of the fact that the goods are placed at his disposal at that place.
+
+(3) If the contract relates to goods not then identified, the goods are considered not to be placed at the disposal of the buyer until they are clearly identified to the contract.
+
+Article 70
+
+If the seller has committed a fundamental breach of contract, articles 67, 68 and 69 do not impair the remedies available to the buyer on account of the breach.
+
+Chapter V - Provisions Common to the Obligations of the Seller and of the Buyer
+
+Section I - Anticipatory breach and instalment contracts
+
+Article 71
+
+(1) A party may suspend the performance of his obligations if, after the conclusion of the contract, it becomes apparent that the other party will not perform a substantial part of his obligations as a result of:
+
+(a) a serious deficiency in his ability to perform or in his creditworthiness; or
+
+(b) his conduct in preparing to perform or in performing the contract.
+
+(2) If the seller has already dispatched the goods before the grounds described in the preceding paragraph become evident, he may prevent the handing over of the goods to the buyer even though the buyer holds a document which entitles him to obtain them. The present paragraph relates only to the rights in the goods as between the buyer and the seller.
+
+(3) A party suspending performance, whether before or after dispatch of the goods, must immediately give notice of the suspension to the other party and must continue with performance if the other party provides adequate assurance of his performance.
+
+Article 72
+
+(1) If prior to the date for performance of the contract it is clear that one of the parties will commit a fundamental breach of contract, the other party may declare the contract avoided.
+
+(2) If time allows, the party intending to declare the contract avoided must give reasonable notice to the other party in order to permit him to provide adequate assurance of his performance.
+
+(3) The requirements of the preceding paragraph do not apply if the other party has declared that he will not perform his obligations.
+
+Article 73
+
+(1) In the case of a contract for delivery of goods by instalments, if the failure of one party to perform any of his obligations in respect of any instalment constitutes a fundamental breach of contract with respect to that instalment, the other party may declare the contract avoided with respect to that instalment.
+
+(2) If one party's failure to perform any of his obligations in respect of any instalment gives the other party good grounds to conclude that a fundamental breach of contract will occur with respect to future instalments, he may declare the contract avoided for the future, provided that he does so within a reasonable time.
+
+(3) A buyer who declares the contract avoided in respect of any delivery may, at the same time, declare it avoided in respect of deliveries already made or of future deliveries if, by reason of their interdependence, those deliveries could not be used for the purpose contemplated by the parties at the time of the conclusion of the contract.
+
+Section II - Damages
+
+Article 74
+
+Damages for breach of contract by one party consist of a sum equal to the loss, including loss of profit, suffered by the other party as a consequence of the breach. Such damages may not exceed the loss which the party in breach foresaw or ought to have foreseen at the time of the conclusion of the contract, in the light of the facts and matters of which he then knew or ought to have known, as a possible consequence of the breach of contract.
+
+Article 75
+
+If the contract is avoided and if, in a reasonable manner and within a reasonable time after avoidance, the buyer has bought goods in replacement or the seller has resold the goods, the party claiming damages may recover the difference between the contract price and the price in the substitute transaction as well as any further damages recoverable under article 74.
+
+Article 76
+
+(1) If the contract is avoided and there is a current price for the goods, the party claiming damages may, if he has not made a purchase or resale under article 75, recover the difference between the price fixed by the contract and the current price at the time of avoidance as well as any further damages recoverable under article 74. If, however, the party claiming damages has avoided the contract after taking over the goods, the current price at the time of such taking over shall be applied instead of the current price at the time of avoidance.
+
+(2) For the purposes of the preceding paragraph, the current price is the price prevailing at the place where delivery of the goods should have been made or, if there is no current price at that place, the price at such other place as serves as a reasonable substitute, making due allowance for differences in the cost of transporting the goods.
+
+Article 77
+
+A party who relies on a breach of contract must take such measures as are reasonable in the circumstances to mitigate the loss, including loss of profit, resulting from the breach. If he fails to take such measures, the party in breach may claim a reduction in the damages in the amount by which the loss should have been mitigated.
+
+Section III - Interest
+
+Article 78
+
+If a party fails to pay the price or any other sum that is in arrears, the other party is entitled to interest on it, without prejudice to any claim for damages recoverable under article 74.
+
+Section IV - Exemptions
+
+Article 79
+
+(1) A party is not liable for a failure to perform any of his obligations if he proves that the failure was due to an impediment beyond his control and that he could not reasonably be expected to have taken the impediment into account at the time of the conclusion of the contract or to have avoided or overcome it or its consequences.
+
+(2) If the party's failure is due to the failure by a third person whom he has engaged to perform the whole or a part of the contract, that party is exempt from liability only if:
+
+(a) he is exempt under the preceding paragraph; and
+
+(b) the person whom he has so engaged would be so exempt if the provisions of that paragraph were applied to him.
+
+(3) The exemption provided by this article has effect for the period during which the impediment exists.
+
+(4) The party who fails to perform must give notice to the other party of the impediment and its effect on his ability to perform. If the notice is not received by the other party within a reasonable time after the party who fails to perform knew or ought to have known of the impediment, he is liable for damages resulting from such non-receipt.
+
+(5) Nothing in this article prevents either party from exercising any right other than to claim damages under this Convention.
+
+Article 80
+
+A party may not rely on a failure of the other party to perform, to the extent that such failure was caused by the first party's act or omission.
+
+Section V - Effects of avoidance
+
+Article 81
+
+(1) Avoidance of the contract releases both parties from their obligations under it, subject to any damages which may be due. Avoidance does not affect any provision of the contract for the settlement of disputes or any other provision of the contract governing the rights and obligations of the parties consequent upon the avoidance of the contract.
+
+(2) A party who has performed the contract either wholly or in part may claim restitution from the other party of whatever the first party has supplied or paid under the contract. If both parties are bound to make restitution, they must do so concurrently.
+
+Article 82
+
+(1) The buyer loses the right to declare the contract avoided or to require the seller to deliver substitute goods if it is impossible for him to make restitution of the goods substantially in the condition in which he received them.
+
+(2) The preceding paragraph does not apply:
+
+(a) if the impossibility of making restitution of the goods or of making restitution of the goods substantially in the condition in which the buyer received them is not due to his act or omission;
+
+(b) if the goods or part of the goods have perished or deteriorated as a result of the examination provided for in article 38; or
+
+(c) if the goods or part of the goods have been sold in the normal course of business or have been consumed or transformed by the buyer in the course of normal use before he discovered or ought to have discovered the lack of conformity.
+
+Article 83
+
+A buyer who has lost the right to declare the contract avoided or to require the seller to deliver substitute goods in accordance with article 82 retains all other remedies under the contract and this Convention.
+
+Article 84
+
+(1) If the seller is bound to refund the price, he must also pay interest on it, from the date on which the price was paid.
+
+(2) The buyer must account to the seller for all benefits which he has derived from the goods or part of them:
+
+(a) if he must make restitution of the goods or part of them; or
+
+(b) if it is impossible for him to make restitution of all or part of the goods or to make restitution of all or part of the goods substantially in the condition in which he received them, but he has nevertheless declared the contract avoided or required the seller to deliver substitute goods.
+
+Section VI - Preservation of the goods
+
+Article 85
+
+If the buyer is in delay in taking delivery of the goods or, where payment of the price and delivery of the goods are to be made concurrently, if he fails to pay the price, and the seller is either in possession of the goods or otherwise able to control their disposition, the seller must take such steps as are reasonable in the circumstances to preserve them. He is entitled to retain them until he has been reimbursed his reasonable expenses by the buyer.
+
+Article 86
+
+(1) If the buyer has received the goods and intends to exercise any right under the contract or this Convention to reject them, he must take such steps to preserve them as are reasonable in the circumstances. He is entitled to retain them until he has been reimbursed his reasonable expenses by the seller.
+
+(2) If goods dispatched to the buyer have been placed at his disposal at their destination and he exercises the right to reject them, he must take possession of them on behalf of the seller, provided that this can be done without payment of the price and without unreasonable inconvenience or unreasonable expense. This provision does not apply if the seller or a person authorized to take charge of the goods on his behalf is present at the destination. If the buyer takes possession of the goods under this paragraph, his rights and obligations are governed by the preceding paragraph.
+
+Article 87
+
+A party who is bound to take steps to preserve the goods may deposit them in a warehouse of a third person at the expense of the other party provided that the expense incurred is not unreasonable.
+
+Article 88
+
+(1) A party who is bound to preserve the goods in accordance with article 85 or 86 may sell them by any appropriate means if there has been an unreasonable delay by the other party in taking possession of the goods or in taking them back or in paying the price or the cost of preservation, provided that reasonable notice of the intention to sell has been given to the other party.
+
+(2) If the goods are subject to rapid deterioration or their preservation would involve unreasonable expense, a party who is bound to preserve the goods in accordance with article 85 or 86 must take reasonable measures to sell them. To the extent possible he must give notice to the other party of his intention to sell.
+
+(3) A party selling the goods has the right to retain out of the proceeds of sale an amount equal to the reasonable expenses of preserving the goods and of selling them. He must account to the other party for the balance.
+
+PART IV - Final Provisions
+
+Article 89
+
+The Secretary-General of the United Nations is hereby designated as the depositary for this Convention.
+
+Article 90
+
+This Convention does not prevail over any international agreement which has already been or may be entered into and which contains provisions concerning the matters governed by this Convention, provided that the parties have their places of business in States parties to such agreement.
+
+Article 91
+
+(1) This Convention is open for signature at the concluding meeting of the United Nations Conference on Contracts for the International Sale of Goods and will remain open for signature by all States at the Headquarters of the United Nations, New York until 30 September 1981.
+
+(2) This Convention is subject to ratification, acceptance or approval by the signatory States.
+
+(3) This Convention is open for accession by all States which are not signatory States as from the date it is open for signature.
+
+(4) Instruments of ratification, acceptance, approval and accession are to be deposited with the Secretary-General of the United Nations.
+
+Article 92
+
+(1) A Contracting State may declare at the time of signature, ratification, acceptance, approval or accession that it will not be bound by Part II of this Convention or that it will not be bound by Part III of this Convention.
+
+(2) A Contracting State which makes a declaration in accordance with the preceding paragraph in respect of Part II or Part III of this Convention is not to be considered a Contracting State within paragraph (1) of article 1 of this Convention in respect of matters governed by the Part to which the declaration applies.
+
+Article 93
+
+(1) If a Contracting State has two or more territorial units in which, according to its constitution, different systems of law are applicable in relation to the matters dealt with in this Convention, it may, at the time of signature, ratification, acceptance, approval or accession, declare that this Convention is to extend to all its territorial units or only to one or more of them, and may amend its declaration by submitting another declaration at any time.
+
+(2) These declarations are to be notified to the depositary and are to state expressly the territorial units to which the Convention extends.
+
+(3) If, by virtue of a declaration under this article, this Convention extends to one or more but not all of the territorial units of a Contracting State, and if the place of business of a party is located in that State, this place of business, for the purposes of this Convention, is considered not to be in a Contracting State, unless it is in a territorial unit to which the Convention extends.
+
+(4) If a Contracting State makes no declaration under paragraph (1) of this article, the Convention is to extend to all territorial units of that State.
+
+Article 94
+
+(1) Two or more Contracting States which have the same or closely related legal rules on matters governed by this Convention may at any time declare that the Convention is not to apply to contracts of sale or to their formation where the parties have their places of business in those States. Such declarations may be made jointly or by reciprocal unilateral declarations.
+
+(2) A Contracting State which has the same or closely related legal rules on matters governed by this Convention as one or more non-Contracting States may at any time declare that the Convention is not to apply to contracts of sale or to their formation where the parties have their places of business in those States.
+
+(3) If a State which is the object of a declaration under the preceding paragraph subsequently becomes a Contracting State, the declaration made will, as from the date on which the Convention enters into force in respect of the new Contracting State, have the effect of a declaration made under paragraph (1), provided that the new Contracting State joins in such declaration or makes a reciprocal unilateral declaration.
+
+Article 95
+
+Any State may declare at the time of the deposit of its instrument of ratification, acceptance, approval or accession that it will not be bound by subparagraph (1)(b) of article 1 of this Convention.
+
+Article 96
+
+A Contracting State whose legislation requires contracts of sale to be concluded in or evidenced by writing may at any time make a declaration in accordance with article 12 that any provision of article 11, article 29, or Part II of this Convention, that allows a contract of sale or its modification or termination by agreement or any offer, acceptance, or other indication of intention to be made in any form other than in writing, does not apply where any party has his place of business in that State.
+
+Article 97
+
+(1) Declarations made under this Convention at the time of signature are subject to confirmation upon ratification, acceptance or approval.
+
+(2) Declarations and confirmations of declarations are to be in writing and be formally notified to the depositary.
+
+(3) A declaration takes effect simultaneously with the entry into force of this Convention in respect of the State concerned. However, a declaration of which the depositary receives formal notification after such entry into force takes effect on the first day of the month following the expiration of six months after the date of its receipt by the depositary. Reciprocal unilateral declarations under article 94 take effect on the first day of the month following the expiration of six months after the receipt of the latest declaration by the depositary.
+
+(4) Any State which makes a declaration under this Convention may withdraw it at any time by a formal notification in writing addressed to the depositary. Such withdrawal is to take effect on the first day of the month following the expiration of six months after the date of the receipt of the notification by the depositary.
+
+(5) A withdrawal of a declaration made under article 94 renders inoperative, as from the date on which the withdrawal takes effect, any reciprocal declaration made by another State under that article.
+
+Article 98
+
+No reservations are permitted except those expressly authorized in this Convention.
+
+Article 99
+
+(1) This Convention enters into force, subject to the provisions of paragraph (6) of this article, on the first day of the month following the expiration of twelve months after the date of deposit of the tenth instrument of ratification, acceptance, approval or accession, including an instrument which contains a declaration made under article 92.
+
+(2) When a State ratifies, accepts, approves or accedes to this Convention after the deposit of the tenth instrument of ratification, acceptance, approval or accession, this Convention, with the exception of the Part excluded, enters into force in respect of that State, subject to the provisions of paragraph (6) of this article, on the first day of the month following the expiration of twelve months after the date of the deposit of its instrument of ratification, acceptance, approval or accession.
+
+(3) A State which ratifies, accepts, approves or accedes to this Convention and is a party to either or both the Convention relating to a Uniform Law on the Formation of Contracts for the International Sale of Goods done at The Hague on 1 July 1964 (1964 Hague Formation Convention) and the Convention relating to a Uniform Law on the International Sale of Goods done at The Hague on 1 July 1964 (1964 Hague Sales Convention) shall at the same time denounce, as the case may be, either or both the 1964 Hague Sales Convention and the 1964 Hague Formation Convention by notifying the Government of the Netherlands to that effect.
+
+(4) A State party to the 1964 Hague Sales Convention which ratifies, accepts, approves or accedes to the present Convention and declares or has declared under article 92 that it will not be bound by Part II of this Convention shall at the time of ratification, acceptance, approval or accession denounce the 1964 Hague Sales Convention by notifying the Government of the Netherlands to that effect.
+
+(5) A State party to the 1964 Hague Formation Convention which ratifies, accepts, approves or accedes to the present Convention and declares or has declared under article 92 that it will not be bound by Part III of this Convention shall at the time of ratification, acceptance, approval or accession denounce the 1964 Hague Formation Convention by notifying the Government of the Netherlands to that effect.
+
+(6) For the purpose of this article, ratifications, acceptances, approvals and accessions in respect of this Convention by States parties to the 1964 Hague Formation Convention or to the 1964 Hague Sales Convention shall not be effective until such denunciations as may be required on the part of those States in respect of the latter two Conventions have themselves become effective. The depositary of this Convention shall consult with the Government of the Netherlands, as the depositary of the 1964 Conventions, so as to ensure necessary co-ordination in this respect.
+
+Article 100
+
+(1) This Convention applies to the formation of a contract only when the proposal for concluding the contract is made on or after the date when the Convention enters into force in respect of the Contracting States referred to in subparagraph (1)(a) or the Contracting State referred to in subparagraph (1)(b) of article 1.
+
+(2) This Convention applies only to contracts concluded on or after the date when the Convention enters into force in respect of the Contracting States referred to in subparagraph (1)(a) or the Contracting State referred to in subparagraph (1)(b) of article 1.
+
+Article 101
+
+(1) A Contracting State may denounce this Convention, or Part II or Part III of the Convention, by a formal notification in writing addressed to the depositary.
+
+(2) The denunciation takes effect on the first day of the month following the expiration of twelve months after the notification is received by the depositary. Where a longer period for the denunciation to take effect is specified in the notification, the denunciation takes effect upon the expiration of such longer period after the notification is received by the depositary.
+
+:B~ [Post Provisions]-#
+
+1~post [Post Clauses (If any: Signed; Witnessed; Done; Authentic Texts; & Deposited Clauses)]-#
+
+DONE at Vienna, this eleventh day of April, one thousand nine hundred and eighty, in a single original, of which the Arabic, Chinese, English, French, Russian and Spanish texts are equally authentic.
+
+IN WITNESS WHEREOF the undersigned plenipotentiaries, being duly authorized by their respective Governments, have signed this Convention.