Ink to Link: A Hypertext History in 36 Nodes (I)

John E. McEneaney
It would probably be appropriate to begin this article on hypertext with a definition. But sometimes the shortest distance between two points is not a straight line. So I'll start with a story instead.
In May of this year I was in San Antonio, Texas, for Hypertext 2000, the 11th hypertext conference sponsored by the Association for Computing Machinery. One of the workshop sessions was devoted to defining what we mean when we use the word hypertext. It was sobering to discover that those at the workshop couldn't agree on a definition. In fact, we found ourselves hopelessly deadlocked on a preliminary question: “Does hypertext require computers or can it be realized in noncomputer forms?”
The absence of consensus was sobering since we were all hypertext “professionals” in one sense or another. Even if there might be uncertainty among those on the “outside,” surely within this specialized group we could agree on fundamentals. But we couldn't.
Frustrating though this was, our lack of agreement seemed to me a useful reminder of the fundamentally messy human questions we inevitably address when we ask questions related to reading. So, for now at least, I'm going to avoid definitions. I'll stick to stories and let you wrestle with whatever definitions emerge from a hypertext history in 36 nodes. You can get started by clicking on the “Path” button at the top right of the screen.
Some tips on navigating this column might be useful.
1. Window management
You have probably noticed that, once you left the article's introduction, Ink to Link opened into a new window. This smaller window for the article “floats” on top of its home in ROL's Articles section.
If you click outside the article window, it will “lose focus” and seem to disappear, though it's actually still on your screen: the window has simply been covered by one or more other windows that were on your desktop (most likely the Ink to Link home page in ROL). You can return to the article window by
clicking the appropriate browser task bar button at the bottom (on a Windows-based PC), or
clicking the appropriate browser window (on a Mac).
To exit the article, click the Articles section icon (the open laptop on the red background) at the top right of the screen. This will close the column window and return you to the Articles page, where you have access to navigation for all of Reading Online.
2. Browser buttons
Use your browser's “Back” and “Forward” buttons as you usually do to backtrack or retrace a path. Other browser buttons will also work as usual.
3. Link semantics
There are two distinctly different kinds of links in this article. Links triggered by buttons in the banner at the top are “local” to other parts of the article; as you move through the article you will find that these buttons give you access to the reference list, a “Help” page, and the actual article content pages. There are, however, also links embedded in those content pages. All of these links lead to pages outside the article. Most external links lead to pages on other sites out on the Web; one leads to an article recently posted here at the ROL site. If you follow an external link (even the one to the ROL article), use your browser's “Back” button to return to the column.
This document is organized as a grid of 36 nodes (depicted graphically in the image at the upper left). Each row of nodes adopts a theme (people, places, things, etc.); each column of nodes focuses on an era (1930s and '40s through to “on the horizon”). The history can be read in several ways. You can
cycle through a theme by clicking on the “Theme >” button;
cycle through an era by clicking on the “Era >” button;
read the hypertext in a linear sequence by using the “Path >” button;
go directly to individual pages by clicking on a node in the grid image at the top left;
navigate from the theme-based index pages, linked in blue at the top.
The six themes and six eras discussed in this hypertext (and listed below) are “crossed,” resulting in 36 different entries or pages, one for each theme in each era. Keep in mind that I've interpreted the themes and eras rather loosely: some “things,” for example, might fit as well under the category of “events,” or vice versa. The organizational system is designed primarily as a narrative device.
 
Hypertext Eras
Conceptual origins (1940s)
Laboratory prototypes (1960s)
Personal computing (1980s)
Professionalization (later 1980s to mid-1990s)
The Web (mid-1990s to the present)
On the horizon (the future)

Hypertext Themes
People (some central figures)
Places (places and centers where work was done)
Things (hardware, software, concepts)
Events (important historical points)
Systems (tools and models for hypertext)
Applications (applied hypertext)
 
Vannevar Bush: An engineer responds to the needs of the information age
Vannevar Bush was science adviser to U.S. President Franklin Delano Roosevelt during World War II. At that time, science and its associated technologies were vital to American national interests, so the job of advising the president on these matters was no small task.
Perhaps in part because of his wide-ranging responsibilities, Bush began to think about the problem of information management. In 1945 he published an article in The Atlantic Monthly titled “As We May Think,” in which he describes “the Memex” (for memory extender), a device intended to help people manage information.
The Memex was designed as a personal microfilm library with sophisticated storage and retrieval mechanisms that would allow readers to build large libraries yet still access information quickly. Especially important from Bush's perspective was “associative indexing,” a reader-controlled linking of one document to another.
Bush's insight was that the “meaning” of information depends on the objectives of the reader -- and, therefore, for any given collection of information, readers with different interests and objectives will organize information in different ways. Associative indexing was seen by Bush as the key to personal organization of information.
Associative indexing is also the key to hypertext as we know it. Our use of the word link in the hypertext context is synonymous with the kind of association Bush had in mind. Although a functional Memex was never constructed, Bush's work established the conceptual foundations upon which modern computer-based hypertext systems have been erected.
The Office of Scientific Research and Development: Hypertext as a military technology
Vannevar Bush was a brilliant engineer but the events that led up to the Memex probably had more to do with his administrative responsibilities than his technical skills. Appointed director of the U.S. Office of Scientific Research and Development (OSRD) in 1941, Bush was responsible for managing a wide range of scientific and engineering projects intended to support the war effort (including the Manhattan project).
His intellectual skills notwithstanding, it's likely Bush soon became aware of his limits as a reader and manager of information. He was, after all, the chief administrator in a major government office. Being an engineer, it occurred to him that there might be ways technology could be applied to help him solve the kinds of information management problems he dealt with on a daily basis. And, as Nielsen (1995, p. 36) has suggested, it may well be that not knowing much about microfilm technologies encouraged him to be less inhibited in his speculations on the Memex.
Ultimately, although the connections between hypertext and military matters are rarely emphasized, I think a fairly strong case can be made that the links are real, rooted both in hypertext history and the urgent nature of information management in war-time decision making. In the context of the OSRD, the Memex was a tool for understanding, designing, and successfully implementing weapons systems. In keeping with this interpretation, it is interesting to note that the application Bush describes in his 1945 “As We May Think” is a “hypertext” that documents the superiority of the Turkish short bow over the English long bow and the consequences of this for Crusades-era warfare.
The Memex: The office workstation of its day
The Memex, as conceived by Vannevar Bush in the 1940s, was intended to function as a high-end microfilm workstation. Although it was never actually built, Bush's descriptions of the Memex suggest a desk-like piece of furniture. Built into this desk would be two or more microfilm readers, a microfilm storage facility, and sophisticated microfilm retrieval and indexing mechanisms. A keyboard would provide access to these mechanisms, allowing readers to open and link documents in the system. A small headset microphone would give users access to built-in voice annotation capabilities. A “scanning” mechanism would allow the user to create new microfilm documents from print sources, and an annotation system would support direct modification of existing microfilms from the keyboard.
As conceptualized by Bush, the Memex was to be a personal system, intended as a self-contained unit that operated independently. In this sense Bush's Memex is more like a stand-alone scanner workstation (that is, a computer and a scanner) than an Internet-connected computer. Bush (1945) did, however, seem to appreciate the potential of “networking.” He suggested that Memex users would share the “trails” they created and that “trail blazers” (a new class of information worker) would organize the “enormous mass of the common record” in important new ways. In Bush's view, links might come to be seen as important and informative as the documents they connect.
Publication of “As We May Think”: Vannevar Bush's ideas about the Memex appear in print
Vannevar Bush's thinking about the issue of information management had begun sometime in the early to middle 1930s but didn't lead to tangible results until 1939, when he completed the first draft of the manuscript that would become “As We May Think.” Bush's professional priorities at the time were directed elsewhere, however, and his paper describing the Memex was not published until 1945.
The appearance of “As We May Think” that year in the Atlantic Monthly created something of a stir. After all, Bush was the U.S. president's advisor on matters of science and technology and he had a very prominent role in the war effort. His article was followed by stories in widely read publications including Time and Life. Ultimately, however, not much came of Bush's proposal for the Memex. Perhaps it was the sheer complexity of the mechanics required to make it work (although Bush was no stranger to complex machinery, having built a legendary mechanical equation solver while at the Massachusetts Institute of Technology). Perhaps it reflected the nation's desire to set aside the priorities and agendas of the war years. Whatever the reason, Bush's Memex was never constructed.
But something else was happening that would, after a suitable gestation period, bring Bush's vision to life.
The Memex as a hypertext system (and Vannevar Bush as a literacy theorist)
It strikes me as remarkable that Vannevar Bush's Memex sprang forth more than 55 years ago as a fully formed hypertext system. While it was never actually implemented, it was specified in considerable detail (Bush, 1945). Functionally, it included all of the essential elements of “modern” hypertext: links, blocks of text, trails, and webs. Even more important, however, are the (largely implicit) philosophical foundations upon which this system was erected.
Bush adopted a philosophy of reading and writing that seems to me to be even more radical than that of his contemporary Louise Rosenblatt (1994/1938), our currently reigning “ahead of her time” reading and writing theorist. Not only did Bush acknowledge the active role of readers in constructing meaning, he believed reading technologies should actively work to dissolve the distinctions between readers and writers. The Memex was not simply to serve as a repository for other people's text: reader annotations were an essential element of the Memex reading theory and were supported by a feature that allowed readers to contribute to the Memex library. Memex readers wouldn't just construct meaning in the abstract, they would create it in concrete ways that undermined the traditional distinction of reader and writer. This is a full hypertext system in a very modern sense (Landow, 1997, p. 6).
The Memex as an application: What if?
Since the Memex was never actually created, there are no Memex applications. Given the circumstances and time of its envisioning, however, we can imagine one application that probably occurred to Vannevar Bush on more than a few occasions: the weapons systems hypertext. The purpose of this application would be to assist those in a wartime scientific research and development office responsible for the management of weapons development. Individual project managers would coordinate planning and work using the Memex. Documentation would be available, as well as test reports, field trial data, and other materials. Project materials would also be linked to nonmilitary resources provided by parts and materials suppliers.
Standard operating procedures would surely have a role in such an institutionalized system, but in the spirit of Vannevar Bush, individual project managers would have some freedom to create personal links as well. Presumably, an office that had the benefit of such a system would operate more efficiently through the sharing of information and more creatively by promoting flexible thinking through associative linking.
Although, as educators, we might not like the idea of a literacy technology intended to support weapons development, I would argue that the weapons hypertext I've described is true to the historical circumstances of the Memex. It also seems reasonable to suppose that the urgency of wartime weapons development probably played a role in the interest both Bush and the public showed in the Memex. In an age of information warfare, however, the childhood adage “Sticks and stones may break my bones but words will never hurt me” may no longer apply. Words (i.e., bytes) may end up being the ultimate weapon; sticks and stones may simply be what are left when the words have done their work.
Ted Nelson: Word maker and hypertext visionary
Although Vannevar Bush is acknowledged as its “inventor,” it was Ted Nelson (1965) who first proposed the word hypertext to describe “non-sequential writing -- text that branches and allows choice to the reader, best read at an interactive screen.”
Nelson's Xanadu project, the long-term effort for which he is perhaps best known, has attracted both admiration and derision, sometimes from one and the same person (see, for example, Wolf, 1995). The admiration arises at least in part from the audacity of Xanadu, a project to create a universal world-wide document database with byte-level (i.e., letter-level) tracking to manage royalty payments to publishers. The derision is a consequence of the fact that, after more than 35 years, only limited portions of Xanadu have been implemented. In August 1999 elements of the Xanadu system were finally released as open source code (Kahney, 1999), but after years of unfulfilled promises and the arrival of a hugely popular alternative (the Web), interest in Nelson's system is limited.
Nelson is certainly the idea man in hypertext and almost invariably is referred to as a visionary. His ideas have inspired and motivated many in the field. The depth of his influence became clear to me during a special session at a recent hypertext conference. A video Ted sent from Japan (where he currently lives and works) was played. The talk was a folksy, reflective monologue he had filmed himself as he roamed the small rooms of his home. As the video began the hall seemed to settle into an almost reverent state of attention. While it might be true that Xanadu is nearly as mythical as its namesake, that “magic place of literary memory” (Nelson, 1993) in Samuel Taylor Coleridge's poem “Kubla Khan,” it was evident that even the hard-boiled back-end developers in the audience wanted to hear what Ted Nelson had to say.
Brown University: An important center for basic and applied work in hypertext
In the early 1990s when I started exploring hypertext, many of the references I found were associated in one way or another with Brown University in Providence, Rhode Island. At first, I attributed this to the fact that George Landow, whose work happened to be relevant to mine, was at Brown developing and studying humanities-oriented hypertext applications. I soon discovered, however, that research and development of both hypertext systems and applications had been going on at Brown for at least 25 years.
Andries van Dam is an important hypertext pioneer whose early work at Brown led to a number of different hypertext systems and applications, including the Hypertext Editing System (HES), the File Retrieval and Editing System (FRESS), and Intermedia. A particularly important aspect of the work at Brown was that, while technical folks like van Dam were busy developing new systems, faculty members like George Landow were thinking of new ways to create learning-oriented applications. This long-term interaction of developers, researchers, and educators, supported and nurtured by Brown's Institute for Research in Information and Scholarship (IRIS), led to numerous hypertext applications that have become modern classics (Dickens Web, Victorian Web, In Memoriam, Hypertext).
The mouse: Rethinking the traditional user interface
Few computer interface devices have had the kind of impact on users' experience as has Douglas Engelbart's mouse. Before the mouse, interaction with the computer was mediated through an abstract system of commands operating on the password metaphor: you either knew the command (i.e., the password) you needed, or you were out of luck.
The mouse changed our computing experience, but attributing this change solely to a small hardware peripheral overlooks the larger ideas that were required for the device to make sense at all. Engelbart's stunning innovation was to completely rethink the password metaphor. His solution was to create an interface that conceptualized the computer as a spatially distributed workbench with tools represented by icons, distinctive visual images that are much easier to remember and interpret than commands. The workbench metaphor was further extended by the notion of a window, a discrete portion of the computer screen treated as an independent region within the workspace. The mouse was a natural consequence of this new spatial thinking. The computer as a virtual space was realized, and interactive computing was revolutionized.
Engelbart is a gifted engineer, but it is of critical importance to note that his objective with the mouse and other hardware developments was to “augment” human intellectual capabilities, not simply to build better computers (Engelbart, 1963). This distinction is crucial since computers that are technically “better” may actually turn out to be worse from a user perspective. Human cognition has its own distinctive features and capabilities, and it was Engelbart's attention to these matters that really distinguished his work from that of others and led to his remarkably prolific creativity.
Douglas Engelbart's work continues at the Bootstrap Institute.
Demonstration of the oN-Line System: Showmanship at a 1968 computer conference
New technologies typically get a cultural foothold by helping us do things the “old” way -- but faster, more efficiently, or more conveniently. It usually takes some time before genuinely new ways of doing and thinking start to appear. In the case of computers, there were well-established antecedents -- automated machinery and calculating devices chief among them. To most people, computers were simply high-powered amalgams of these earlier devices, enormously faster but still based on the same fundamental principles and applied in familiar ways.
Sometimes the most important intellectual pioneers are those who think about technology in new ways. Douglas Engelbart is that kind of pioneer. An electrical engineer by training, Engelbart has devoted his career to using computers to “augment” human intellectual capacity. Central to Engelbart's numerous innovations has been a focus on designing computers based on human ways of working rather than on technical requirements. Among other things, Engelbart is credited with inventing video conferencing, hypermedia, and the mouse.
Best of all, Engelbart had the daring and sense of drama to put his ideas and their executions together (at considerable risk and expense) for a remarkable show, a special demonstration of his oN-Line System (NLS) at the 1968 Fall Joint Computer Conference. Putting a substantial part of his research funding into the equipment he needed, Engelbart set out to debut some of the most revolutionary ideas about computing technology of the past 35 years, and he did it without a net -- live, and in front of lots of colleagues.
Engelbart's show was an extraordinary event that changed the course of hypertext research and development in dramatic fashion. In 1975, however, after he had developed perhaps half the concepts on which modern interactive computing is founded, Engelbart's funding for the Augment Project was terminated (Nielsen, 1995, p. 37).
Xanadu: Revolutionizing hypertext theory (and frustrating application developers)
By some accounts (see, e.g., Wolf, 1995), the only thing Xanadu has lived up to is its name: Coleridge's mythical place of literary memory (with the emphasis on “mythical”). Ted Nelson has been developing the ideas behind Xanadu for more than 30 years; he coined the term hypertext in 1965 and has exerted a powerful influence on our thinking about it ever since. Yet apart from intriguing concepts, Xanadu has offered little to application developers. Although an open source version was posted to the Web in 1999 (Kahney, 1999), the momentum of Web standards and coding practices will make it very difficult for the Xanadu system to achieve the user base Nelson had hoped for.
Xanadu's distinguishing feature is its ambition. Nelson, the author of Xanadu, proposes nothing less than a worldwide hypertext system linking every version of every document ever published with an infrastructure that tracks document use and manages royalty payments to publishers. In some respects, the Web has begun to realize elements of Xanadu. One example is the way Ted Nelson invites others to make use of materials he posts to his Web site. Specifically, Nelson invites Web developers who want to use his images to “transclude” them from his server in Japan rather than create permanent copies on their own servers. Transclusion downloads images from another server only when they are needed and simply inserts them (with the appropriate copyright citation) in the new document. Although Nelson doesn't require any fees for use of images on his site, every request goes through his server, so he retains control over the images and their use.
A fully realized Xanadu could keep track of the users who download any file (or portion of a file), and follow up by assigning the appropriate fees (if fees are involved) and sending a bill to the server that requested the download.
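The transclusion idea described above can be illustrated with a minimal sketch in Python. Everything here is hypothetical (the ORIGIN registry, the transclude and render functions, and the placeholder syntax are inventions for illustration; a real system would fetch content over HTTP): a document stores references rather than copies, references are resolved each time the document is rendered, and the origin therefore sees, and can account for, every use.

```python
import re

# Hypothetical stand-in for remote servers: URL -> (content, copyright notice).
ORIGIN = {
    "http://example.jp/images/xanadu.png": ("<xanadu-image-bytes>", "(c) T. Nelson"),
}

request_log = []  # the origin's record of each use (the basis for any fee accounting)

def transclude(url):
    """Fetch content from its origin at render time; never store a local copy."""
    request_log.append(url)         # every use is visible to the origin
    content, notice = ORIGIN[url]
    return f"{content} [{notice}]"  # inserted with its copyright citation

def render(document):
    """Resolve {{transclude:URL}} placeholders each time the document is read."""
    return re.sub(r"\{\{transclude:(.+?)\}\}",
                  lambda m: transclude(m.group(1)), document)

page = "Here is Nelson's image: {{transclude:http://example.jp/images/xanadu.png}}"
print(render(page))
print(render(page))
print(len(request_log))  # two renderings -> two requests reach the origin
```

Note the design consequence: because the page holds only a reference, reading it twice generates two requests, which is exactly what gives the origin the control (and billing opportunity) that copying would destroy.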
The Hypertext Editing System: Associative indexing realized
By the middle 1960s people were beginning to return to Vannevar Bush's ideas. Douglas Engelbart at the Stanford Research Institute in California was incorporating ideas from Bush in the oN-Line System (NLS) he was using to document his Augment Project. On the east coast, Andries van Dam and Ted Nelson (then both at Brown University) were working on the Hypertext Editing System (HES).
Although NLS usually gets the nod as the first system to incorporate hypertext-like cross-referencing, it enforced a specifically hierarchical approach to linking, and thus did not allow the “associative” linking that is the hallmark of a hypertext. As a result, van Dam and Nelson usually are credited with creating the first fully functional hypertext system.
In addition to providing true associative indexing (i.e., unconstrained linking), HES introduced the use of specialized peripherals including function keyboards and light pen technology. It also allowed linking to and from text segments within nodes and offered an on-screen menu system that allowed readers to adopt a simple branching path approach to navigation. Shortly after its completion, HES was purchased by NASA to manage documentation for Project Apollo and the United States' race to the moon.
A number of other important hypertext systems followed HES at Brown, including the File Retrieval and Editing System (FRESS) and the Electronic Document System, both developed for broader commercial distribution, and the Intermedia system specifically developed with educational applications in mind.
Ben Shneiderman: Applying principles of user interface design to hypertext
Ben Shneiderman, a professor in the Department of Computer Science at the University of Maryland, has been an important figure in user interface design and hypertext for more than 20 years. He also was an early adopter of hypertext as a medium for scholarly communication, as witnessed in Hypertext Hands-On! (Shneiderman & Kearsley, 1989).
Hypertext Hands-On! is based on the HyperTies system developed by Shneiderman in the early and middle 1980s. It was published in traditional print form, but an accompanying computer diskette incorporated a reader version of the HyperTies system along with all the book's content in HyperTies format. Besides being noteworthy as one of the early commercial hypertexts, Hypertext Hands-On! had a simple user interface, was developed to run under DOS, and introduced a unique linking system that provided readers with information about a potential destination node before that node was actually selected.
Shneiderman's work has exerted an important influence on my own hypertext studies. In the middle 1990s when my scholarship began to focus on hypertext, I came across a paper by Botafogo, Rivlin, and Shneiderman (1992) describing techniques to measure the structural complexity of hypertext documents. Ultimately, my work shifted focus from the structure of hypertexts to the structure of reader navigation (i.e., the paths that readers take when they read a hypertext), but the work of Shneiderman and his colleagues continued to be crucial in my effort to develop path measures. To my delight, I found that these measures were consistently good predictors of hypertext comprehension, resulting in correlations that exceeded even print comprehension scores (McEneaney, 1999, 2000b).
The University of Southampton: Creating “open” hypermedia that can use files in many formats
One of the important centers of hypertext research and development in Europe is located at the University of Southampton in the United Kingdom. In the middle 1980s, Wendy Hall and Hugh Davis began work on the Microcosm hypertext system (Hall, Davis, & Hutchings, 1996). One distinctive feature of this work has been a systematic focus on developing and studying models for “open” hypertext systems that can incorporate documents in a wide range of formats.
Before Microcosm appeared, hypertext systems required documents to adhere to system-specific formats and standards. This proprietary approach meant that existing electronic materials had to be reformatted for use in hypertext applications -- and reformatting can be as expensive as starting from scratch.
The work at Southampton explores ways to create hypertext systems that can use existing documents in many different formats. In this approach, a hypertext application can include word-processed files, spreadsheets, media clips, and other independently developed materials without requiring any reformatting.
Commercial hypertext tools: Bringing hypertext to your desktop
In the mid-1980s, computers were beginning to make their way into a wide range of nontechnical domains. In 1984 the first Apple Macintosh computers capitalized on Douglas Engelbart's new ways of thinking about interface, and suddenly computing became accessible to large numbers of people. As user-friendly systems enticed new users, software developers found a rapidly increasing market for their work, and general purpose hypertext development systems started appearing.
Ben Shneiderman began development of the HyperTies system at the University of Maryland in 1983; the resulting (Wintel) commercial product was available in 1987. Janet Walker's Symbolics Document Examiner was developed for a high-end Symbolics workstation, but because it was used to access system documentation, it was shipped with every Symbolics computer, resulting in a fairly large base of users. The Guide system was developed by Peter Brown at the University of Kent in the United Kingdom and was released by Office Workstations Ltd. (OWL) in a Macintosh version in 1986, with versions for Wintel and Unix computers available shortly thereafter.
Perhaps most important of all the early commercial systems was HyperCard, released by Apple in 1987 and bundled with every Apple Macintosh sold between 1987 and 1992. Although not intentionally designed as a hypertext system (Bill Atkinson developed HyperCard as a visual programming tool), the use of a “card” screen metaphor made it natural for users to think of it as such. New users were attracted to HyperCard's extensive libraries of buttons, graphics, and other widgets that made developing professional-looking cards easy. And experienced programmers were delighted to find that the HyperTalk programming language (part of HyperCard) was easy to learn but powerful enough to develop serious applications.
Hypertext development tools were becoming part of the standard computing toolbox, and with ever larger numbers of computer users out there, the number of potential hypertext authors and readers was expanding dramatically.
The Hypertext '87 conference: A profession emerges
By the late 1980s, a number of commercial general purpose hypertext development tools had appeared, including HyperTies, Guide, and HyperCard (the system that delivered hypertext to most of us working at that time). As the community of hypertext researchers and developers grew, an increasing need for interaction and collaboration arose. There was also a desire among hypertext professionals to distinguish their domain from the work of colleagues in other areas of computer science.
Hypertext '87, the first hypertext conference sponsored by the Association for Computing Machinery (ACM), was one result of these needs and desires. The conference took place at the University of North Carolina at Chapel Hill and attracted nearly every active researcher and developer in the field. Jay David Bolter and Michael Joyce considered hypertext as a medium for creative writing. George Landow gave a talk on the rhetoric of hypertext. Janet Walker presented a paper on her Symbolics Document Examiner, and Frank Halasz delivered his now classic “seven issues” paper outlining limitations of and future areas for development in hypertext systems (Halasz, 1987).
Organizers of the conference were unprepared for the response. Having planned for 200 or 300 participants, they were chagrined to find 500 lined up to register. Nearly half had to be turned away. Two years later, in 1989, this situation was repeated at the first open hypertext conference in Europe. Something was happening. Hypertext research and development was emerging as a distinct field, and those who identified with it were eager to professionalize their interests.
Intermedia: A hypertext system designed with educational applications in mind
Intermedia is one of a number of hypertext systems developed at Brown University since the middle 1960s. What most distinguishes it from its predecessors is the fact that it was specifically designed with educational applications in mind, and educators at Brown have actively applied it as a teaching and learning tool. Intermedia's most enduring legacy as a system may well be that it stimulated new ways of thinking about hypertext from distinctly literary and educational perspectives (Landow, 1997).
Intermedia capitalized on fine-grained multinode linking. Authors could link individual words and phrases to one another across nodes, and a single link could access several destination anchors. More traditional one-to-one node linking was still possible, but Intermedia's advanced linking system provided far more powerful support for the complex analysis typical in literary studies (where the system was most consistently applied). Another important aspect of the Intermedia system was that it (like Bush's Memex) de-emphasized the roles of author and reader by supporting large-scale collaborative development and allowing annotations. This was, in fact, central to Intermedia's special power as a learning tool: students did not simply sit passively before an Intermedia application receiving content; they were expected to contribute to the ongoing development of the application by adding their own original work and responding to the work they read through annotations and critiques.
If Intermedia had an Achilles heel, it was that it was designed for use with the A/UX operating system, Apple's version of Unix. A/UX had a fairly limited user base, and in the early 1990s was dropped by Apple -- which left Intermedia stranded. Some of the Intermedia applications developed in its golden years at Brown have been ported to Storyspace and the Web, but not all the features that made Intermedia such a powerful environment could be translated (particularly to the Web).
Document Examiner: Hypertext as an alternative to printed documentation
Document Examiner was one of the first widely used hypertext applications. Although other systems and applications had appeared earlier (e.g., HES, HyperTies, NLS), they were designed for specialized settings that tended to limit or exclude more general use. Document Examiner was intended to reach a large audience: all users of Symbolics computers.
Symbolics built and sold high-end workstations. Then as now, such equipment tends to arrive with considerable documentation: manuals, guides, references, and so on. Although now it's common for these materials to arrive in digital form (typically on a compact disc) with access provided by some hypertext-like help system, in the middle 1980s documentation was shipped in printed form. Every Symbolics workstation left the factory with 8,000 pages of printed support material.
Although it would be 10 years before Jakob Nielsen (1993) articulated his first law of computer manuals (“Users do not read manuals”), the folks at Symbolics had the insight to devote some of the horsepower packaged in each workstation to Document Examiner, which allowed management and presentation of support documents. In effect, the modern computer-based help system was born.
The Document Examiner project began in 1982; by 1985 it was a part of every Symbolics workstation that was shipped. A threshold had been crossed. Hypertext was no longer simply a laboratory tool: it was a working technology that was becoming an essential part of desktop computers, another technology poised on the edge of its own revolution.