
In 1981 Nelson would expand on his vision of Xanadu in Literary Machines, a sequel to Dream Machines that would ultimately provide the direct inspiration for the World Wide Web (a debt readily acknowledged by Tim Berners-Lee). By this time Nelson had spent seven years working with a team of programmers to prototype a real-world implementation of Xanadu. The experience had given him an opportunity to refine his ideas and articulate in greater detail how the system would actually work:

 

The Xanadu system, designed to address many forms of text structure, has grown into a design for the universal storage of all interactive media, and, indeed, all data.… From this you might get the idea that the Xanadu program is an enormous piece of software. On the contrary: it is one relatively small computer program, set up to run in each storage machine of an ever-growing network.[41]

 

This vision of a small program (which today we might call a “browser”) resident on the user’s computer interacting with “storage machines” (which today we would call “servers”) provided the essential blueprint for the operation of the World Wide Web.
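Read with modern eyes, the passage describes what we would now call a client-server architecture. The toy sketch below (in Python, with entirely hypothetical names) illustrates the division of labor: a small resident program that locates and fetches documents, and a set of storage machines that hold them. It is an analogy for illustration, not a reconstruction of Xanadu's actual design.

```python
# A toy, hypothetical rendering of Nelson's blueprint in modern terms: one
# small resident program (the "browser") querying many "storage machines"
# (the "servers"), each holding part of an ever-growing network of documents.

storage_machines = {
    "machine-a": {"doc-1": "Text held on the first storage machine."},
    "machine-b": {"doc-2": "Text held on the second storage machine."},
}

def locate(document_id: str) -> str:
    """Find which storage machine holds a given document."""
    for machine, documents in storage_machines.items():
        if document_id in documents:
            return machine
    raise KeyError(f"no storage machine holds {document_id!r}")

def fetch(document_id: str) -> str:
    """The small client program: resolve the document to a machine, then
    retrieve it, much as a browser resolves a URL and requests a page."""
    return storage_machines[locate(document_id)][document_id]

print(fetch("doc-2"))  # -> "Text held on the second storage machine."
```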

However, Nelson’s vision reverberated far beyond the mechanics of client-server networking. He also proved eerily prescient in anticipating a series of potential concerns with implementing a hypertext environment, such as privacy, copyright, archiving, and version control—all problems very much in evidence in today’s implementation of the Web.

Xanadu has taken on mythical proportions in the literature of computer science. Despite several attempts, the product has never successfully launched (at one point the rights were acquired by Autodesk). But Nelson soldiers on, most recently as a fellow at the Oxford Internet Institute, where he continues to pursue his inspired and idiosyncratic vision of an alternative universe of humanist computing. Despite his outsize influence on the evolution of the Web, Nelson remains surprisingly little read today. Dream Machines has long been out of print; copies now fetch $300 or more (I was fortunate enough to find a dog-eared copy kept under lock and key at the San Francisco Public Library). While Nelson has worked with large corporations and institutions in trying to bring his vision to life, his colorful style and virulent antiestablishmentarianism seem to have guaranteed his status as a committed outsider.

In the struggle between networks and hierarchies, Nelson has cast his lot squarely on the side of the network, insisting on a purely bottom-up model of information storage and retrieval, absent familiar hierarchies like files and folders, and liberated from the control of the computer priesthood that he has devoted his career to confronting.

Perhaps Nelson’s hopes for a humanist vision of computing amount to tilting at windmills. For all its promise of individual creativity and liberation, control of today’s Web ultimately rests in the hands of corporate and governmental entities. While many scholars have decried the commercialization of the Web, perhaps we can take some solace in knowing that every major information technology in human history has taken hold primarily through commercial impetus. Writing emerged at the hands of merchants; the printing press spread not because of the Gutenberg Bible but on the strength of a booming business in religious indulgences and contracts; so it should come as no surprise that adoption of the Web has been fueled largely by commercial interests. But Nelson’s work still resonates as an inspired, if at times hyperventilated, alternative vision of a humanist computer environment.

OTHER XANADUS
 

While Nelson and his band of idealistic programmers were pursuing their vision outside the institutional mainstream, the burgeoning computer industry was growing mainly at the behest of large organizations like banks, insurance companies, the government, and the military. Computers lived in the so-called back office, tended by specialized information technology staff—Nelson’s “priesthood”—who typically had little direct contact with the front-office marketing, sales, and management staff. Corporate computing followed a strictly hierarchical model, in which systems architects carefully translated the requirements of the front office into structured documents that formed the basis for rigid, deterministic programming regimens designed to maximize efficiency and minimize risk. Information flowed in tightly controlled channels. The age of the desktop PC was still over the horizon.

The command-and-control model of enterprise computing fueled the industry’s growth in the postwar era, but during the late 1960s and 1970s an alternative vision of human-centered computing began to percolate. In Palo Alto, California, a group of former Engelbart disciples came together at the legendary Xerox PARC, a research and development facility that Xerox founded in 1970 with the express mission of creating “the architecture of information.” That group’s accomplishments, including the graphical user interface, WYSIWYG (What You See Is What You Get) text editing tools, the Ethernet protocol, the laser printer, and the pioneering Alto personal computer, have been amply chronicled elsewhere.

By the early 1980s, many universities began to experience the emergence of two separate computing cultures. The first was the administrative information technology (I/T) infrastructure of the formal organization, which handled the payroll, ran the online library catalog, and processed billing, payments, and course registrations. These were Nelson’s “priests.” Typically working behind an organizational firewall, the I/T department usually maintained its own battery of mainframes and minicomputers. End users in this culture still used green-screen “dumb” terminals, but all the actual computing happened behind closed doors. Meanwhile, in the computer labs, faculty offices, and dorm rooms, students and faculty were starting to build their own computing infrastructure, often operating outside the administrative I/T department. Whereas in the 1970s everyone had shared space on the campus mainframe, now the two cultures were beginning to diverge: minicomputers and mainframes for the administration, Unix workstations and personal computers for the academics, who were increasingly using PCs to share information directly with one another.

Across the continent, another group of researchers explored the possibilities of networked information retrieval in an academic setting. In the late 1960s Andries van Dam led a research team at Brown University in producing the first working prototype of an interactive hypertext system on a commercially available mainframe, the IBM 360/50. The system grew out of a chance meeting between van Dam and his Swarthmore contemporary Ted Nelson at the 1967 Spring Joint Computer Conference, and came to be called the Hypertext Editing System (HES). Nelson helped van Dam formulate requirements for the system, designing the hypertext features and contributing a set of interconnected patents on electroplating. Eventually, Nelson grew disappointed with the project’s emphasis on paper printout capabilities (by Nelson’s own account, the Brown researchers viewed him as a “raving” and “flaming” intruder), and he left Brown to pursue his hypertext vision in the form of his new Xanadu project. Nonetheless, HES marked an important early validation of Nelson’s vision: it proved that a hypertext environment could work in practice. Van Dam, for his part, went on to build a series of pioneering hypertext systems that would ultimately be used by thousands of people around the world.[42]

The original HES system had two equally important goals: (1) to support individual authors in the process of creating documents or other work products; and (2) to explore the notions of nonlinear information structures that Ted Nelson had dubbed hypertext. These goals arose out of the needs of a user community that wrote papers on typewriters and had no interest in working with the teletype-driven text editors that programmers used to write programs at the time. Nelson’s vision of hypertext suggested a new way of working with ideas that would be free of the linear constraints imposed by the then-ubiquitous paper media.

HES ran on standard hardware and software (IBM 360/50 OS/MFT with a 2250 graphics display terminal), combining nonhierarchical and flexible document structures with the first true non-line-oriented word processing system. The Brown team envisioned HES as a research system, intended to explore notions of document structures and text editing; as such it did not immediately come into wide use on campus, especially since it was quickly replaced by research on its successor, FRESS, a system designed to overcome the limitations of HES, extend its strengths, and incorporate lessons learned from Doug Engelbart’s NLS. Nonetheless, IBM released it as part of its Type Three Library of customer-generated applications. Years later van Dam would receive a letter from a NASA program manager, informing him that HES had been used to produce Apollo documentation that went to the Moon.

In 1967, almost as soon as HES was deployed, van Dam and his team envisioned a more accessible tool, one that could be employed across campus. The result was the File Retrieving and Editing SyStem (FRESS). Once again, the team had designed a computing environment as both a process and a product. When HES development started, van Dam hadn’t known about Engelbart’s work with NLS, but by the time FRESS development was under way he did. Consequently, FRESS was a response to both the experience with HES and the insights gained from NLS, both positive and negative.

FRESS was a multiuser, device-independent system. It ran on equipment ranging from typewriter terminals and glass teletypes to high-end vector graphics terminals with light pens. The system was designed to support online collaborative work, the writing process, and document production. It had bidirectional links, macros, metadata in the form of both display and textual keywords, and a notion of document structures that in some sense paralleled programming structures, along with fully conditional branching, multiple windows, view specifications, structured links that allowed documents and document views to be dynamically structured on the fly, and dynamic access control mechanisms that accommodated conflicting demands on the same information corpus.

Users could include two kinds of links: “tags” pointing to another part of the same document, such as an annotation or footnote, and “jumps” providing bidirectional links between documents. Developed in the years before the computer mouse became a common feature, the system at first allowed users to manipulate objects using a light pen and a foot pedal (“point and kick,” to borrow Gillies and Cailliau’s pun).

As primitive as the system might have seemed, with its foot pedals, green screens, and text-only readouts, FRESS was in some ways more sophisticated than today’s Web. Links worked in both directions: a link from one document to another would always invoke a reciprocal link going the other way. If someone linked to your document, you would know about it. For example, a class of students could annotate a set of documents, adding their own links, and view whichever links they wanted to see from their peers. FRESS also provided a facility for keyword annotations, allowing readers to search and filter by subject and to assign names to discrete blocks of text for later research. The ability to assign free-form keywords—now often referred to as “tagging”—allowed users to begin creating their own idiosyncratic metastructure for document collections. Thus, readers could view each other’s annotations, and even filter by reader—a major step toward what Bush called “associative trails.”[43]
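To make the contrast with today’s one-way links concrete, here is a minimal sketch, with entirely hypothetical names, of the behavior described above: creating a “jump” records the link at both endpoints, and free-form keywords let readers search and filter. It illustrates the idea rather than FRESS’s actual implementation.

```python
# A minimal, hypothetical sketch of FRESS-style bidirectional "jumps" and
# free-form keyword tagging -- illustrative only, not the actual FRESS design.
from collections import defaultdict


class LinkStore:
    """Records inter-document links so that both endpoints know about them."""

    def __init__(self):
        # outgoing and incoming are kept in sync: every jump is visible
        # from both the source and the target document.
        self.outgoing = defaultdict(set)
        self.incoming = defaultdict(set)
        self.keywords = defaultdict(set)  # free-form keywords per document

    def add_jump(self, source, target):
        """Create a link; the reciprocal direction is recorded automatically."""
        self.outgoing[source].add(target)
        self.incoming[target].add(source)

    def backlinks(self, doc):
        """Answer 'who links here?' -- a query today's one-way Web cannot make."""
        return self.incoming[doc]

    def tag(self, doc, keyword):
        self.keywords[doc].add(keyword)

    def find_by_keyword(self, keyword):
        return {d for d, kws in self.keywords.items() if keyword in kws}


store = LinkStore()
store.add_jump("student-essay.txt", "keats-ode.txt")
store.tag("student-essay.txt", "poetry")
print(store.backlinks("keats-ode.txt"))   # {'student-essay.txt'}
print(store.find_by_keyword("poetry"))    # {'student-essay.txt'}
```

Because the two indexes are kept in sync, the question of who links to a given document is always answerable, which is exactly the property the Web’s embedded one-way links lack.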

FRESS also marked the first experiment with hypertext in the classroom. Brown offered two experimental courses—one on energy, another on poetry—in which students conducted all of their reading, writing, and postclass discussion online. Class dialogue followed the now-familiar threaded discussion format, with commentary threads attached to the related document or resource. Students wrote three times as much as in offline classes, and got better grades than their peers in a control group; class participation proved much higher than in physical classrooms, as quiet wallflowers felt comfortable piping up in the online arena. After the classes wound down, the system maintained the hypertext “corpus” of information that each class had created.[44]

The early Brown experiments proved enormously influential, allowing nonspecialist users to read and write documents online without understanding how to program a computer. Although including the term “hypertext” in the name may have been a stretch—as Nelson points out, the system had few truly hypertext capabilities beyond document linking—it marked an important milestone in the history of computing. And if nothing else, it lent an institutional imprimatur to the possibilities of hypertext in the academy.

In the years leading up to the emergence of the Web, Brown’s early work on HES and FRESS would provide the foundation for an even more ambitious effort—IRIS Intermedia—which was the most fully fledged hypertext environment to emerge before the Web, indeed a work whose collaborative and interactive authoring capabilities have yet to be equaled by the Web. The Institute for Research in Information and Scholarship (IRIS) was founded in 1983 with grants from the Annenberg Foundation and IBM. Led by Norm Meyrowitz and Nicole Yankelovich, its 20-person staff set to work on a new kind of scholarly computing environment: an ambitious interactive hypertext system dubbed Intermedia.

The authoring environment provided users with a suite of interlinked tools for text editing, vector graphics, photo editing, and video and timeline editing. By letting users build collections of interlinked original material online, the system enabled them to create a “web” of knowledge about any topic. To keep track of the relationships between objects, Intermedia stored links in a central repository database—an approach with great advantages for ensuring the integrity of links, but one that ultimately precluded the system from scaling beyond a relatively limited local deployment.
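The trade-off described above can be sketched in a few lines. In the hypothetical sketch below (not Intermedia’s actual schema), a central repository can guarantee that links never dangle, because every document and every link passes through it; the price is that every participant must reach that single database.

```python
# A hypothetical sketch of the central-repository approach to links: links
# live in one shared database rather than inside the documents themselves.
# Names and structure are illustrative, not Intermedia's actual schema.

class LinkRepository:
    def __init__(self):
        self.documents = set()
        self.links = []  # (source, target) pairs held centrally

    def register(self, doc: str) -> None:
        self.documents.add(doc)

    def link(self, source: str, target: str) -> None:
        # Because the repository knows every document, it can refuse to
        # create a dangling link -- an integrity guarantee that embedded
        # one-way links (as on today's Web) cannot make.
        if source not in self.documents or target not in self.documents:
            raise ValueError("both endpoints must be registered documents")
        self.links.append((source, target))

    def remove(self, doc: str) -> None:
        # Deleting a document also cleans up every link that touches it.
        self.documents.discard(doc)
        self.links = [(s, t) for s, t in self.links if doc not in (s, t)]


repo = LinkRepository()
repo.register("essay.txt")
repo.register("timeline.txt")
repo.link("essay.txt", "timeline.txt")
repo.remove("timeline.txt")  # the link disappears along with the document
```

The same centralization that makes the integrity guarantee possible is also the scaling bottleneck the text mentions: every link operation must funnel through this one store, which is workable on a campus network but not at the scale of the global Web.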
