Ink to Link: A Hypertext History in 36 Nodes (II)


Mark Bernstein: Researcher, developer, entrepreneur, and impresario of hypertext

Mark Bernstein's official title at Eastgate Systems is “Chief Scientist.” This might suggest his responsibilities are limited to rocket science and other high-minded matters, but that is not the case. At a recent hypertext conference it was Mark who wrote out receipts and made change for people purchasing Eastgate software (including Storyspace, the widely used development system for literary hypertext).

Many of us “do” hypertext as one aspect of our professional lives. We also tend to have specialized views of hypertext (e.g., technical versus literary) that sometimes exclude or marginalize other perspectives. Mark, on the other hand, seems to live and breathe hypertext as our preeminent disciplinary entrepreneur. Best of all, he seems to have a fly-like compound view of the discipline that fuses business strategies, programming techniques, literary criticism, and the cunning and showmanship of an impresario.

Mark Bernstein has helped root hypertext in new soil, far from its technical origins, both through his continuing support and development of the Storyspace writing system and through his publication at Eastgate of literary works by authors such as Michael Joyce, Deena Larsen, Cathy Marshall, and many others. Although his contributions to the field are not yet routinely cited in hypertext histories, I think he'll get a section of his own when the dust finally settles and we figure out how hypertext relates to its linear cousins that still dominate literacy theory, practice, and commerce.

Association for Computing Machinery hypertext conferences: Defining a “place” for hypertext

In the early 1990s when my interest first turned to hypertext, I started looking for materials that would help me develop my knowledge and skills. I came across the proceedings of the annual hypertext conferences sponsored by the Association for Computing Machinery (ACM), beginning with the 1987 volume documenting the first such conference in Chapel Hill, North Carolina. We were approaching the end of the fiscal year at my university and, to my good fortune, the dean announced that funds were available to purchase books and other academic materials. I ordered a stack of proceedings and dove in.

I was spellbound. Some of the articles targeted programmers and system developers, appealing to my technical side. Others addressed issues from literary perspectives I had never considered before. Still others described empirical evaluation studies using designs and analytic tools familiar to me from the reading research literature. What was most astonishing, however, was the discovery that hypertext researchers and developers were exploring many of the same issues (e.g., readability, text coherence, comprehension and visualization, text structures, and reader expectations) that we were confronting in reading research, but they had developed completely different theoretical systems and analytical approaches in their work.

At some point that summer my interest in hypertext got serious. I attribute this to the glimpse I had of the emerging hypertext research and development community that was gathering at ACM conferences in the United States and Europe.

Storyspace: A commercial hypertext development tool for literary applications

Most hypertext systems (both for authoring and presentation) have been developed to address technical needs. This isn't terribly surprising given that technical communities have “literary” needs (e.g., documentation and project management) and tend to express them in the tools they generate. What I find more surprising is the relative invisibility of nontechnical development tools in the hypertext histories I have found. The case of Storyspace, the one commercially available hypertext tool developed for literary applications, comes to mind.

I suspect if I took a poll among literary hypertext authors to ask them what had been the most influential technology development with respect to their writing, a substantial number would identify Storyspace, despite the obvious and compelling influence of the Web. In fact, nearly every major hypertext author who publishes commercial literary work today uses the Storyspace system.

A significant part of Storyspace's appeal is that it was developed by programmers and literary authors working in collaboration, so it includes features attuned to literary work. Among other things, it provides authors with link and node management tools, visualization features, and methods to create default and conditional links. It is also quite inexpensive and is marketed by Eastgate Systems, the major publisher of literary hypertexts.
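
To make the link features concrete, here is a minimal Python sketch of how default and conditional links might be resolved. It is an illustration of the idea only, not Storyspace's actual implementation; the class, function, and node names are all invented.

```python
# A minimal sketch of default and conditional link resolution.
# Illustrative only -- not Storyspace's actual implementation.

class Link:
    def __init__(self, target, guard=None):
        self.target = target          # the node this link leads to
        self.guard = guard or set()   # nodes the reader must visit first

    def available(self, visited):
        # A conditional link fires only once all its guard nodes
        # appear in the reader's visit history.
        return self.guard <= visited

def follow(links, default, visited):
    """Return the target of the first available link, else the default."""
    for link in links:
        if link.available(visited):
            return link.target
    return default

# Example: "ending" becomes reachable only after "clue1" and "clue2".
links = [Link("ending", guard={"clue1", "clue2"})]
print(follow(links, "continue", {"clue1"}))           # -> continue
print(follow(links, "continue", {"clue1", "clue2"}))  # -> ending
```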

Originally developed by Jay David Bolter, Michael Joyce, and John B. Smith in the middle 1980s, Storyspace was used by Joyce to create afternoon, a story, the 1987 hypertext classic that effectively inaugurated the modern era of hypertext fiction. About 1990, Eastgate Systems opted to drop its Hypergate development system in favor of a version of Storyspace revised by Mark Bernstein.

Storyspace's limited capability for exporting hypertexts to hypertext mark-up language (HTML) is perhaps its greatest weakness in this Web-centric age, but even this rather serious limitation rarely seems to elicit the kinds of complaints you might expect. In the hypertext writers' workshops I have attended there are the usual wish-list brainstorming sessions, but everyone seems to swear by their Storyspace.

The New York Times hypertext reviews

One day in September 1993, I was browsing a local bookstore and came across the New York Times Book Review for August 29. What caught my eye was Robert Coover's cover story, titled “Hyperfiction: Novels for the Computer.” “Hypertext is entering the literary mainstream!” I thought.

Inside was a lengthy review of Stuart Moulthrop's Victory Garden, brief reviews of ten other hypertexts, an explanatory sidebar on hypertext, and an article (Jones, 1993) about William Gibson's Agrippa, a US$2,000 electronic book that destroys itself as it is read (as the book's content scrolls off the screen it is permanently deleted, though the link to it I've included above allows the complete text to be viewed and re-viewed). In reading the shorter reviews I discovered that Coover's effort wasn't the Book Review's first exploration and evaluation of electronic literature. Apparently, there had been another essay and collection of reviews by Coover a year earlier.

Surely, I thought, literary hypertext has arrived. But I was wrong. Hypertext was finally being noticed, but 10 years after Mark Bernstein first uttered his famous question, we're still asking “Where are the hypertexts?” Technical users have largely abandoned printed manuals, reference materials, and learning guides in favor of hypertext (or hypertext-like) electronic media. But readers interested in satisfying their literary appetites still overwhelmingly choose traditional printed formats. Why is it that readers have been so slow to appreciate the literary potential of hypertext?

Trellis: Designing hypertexts from a user's perspective

Trellis is a method of conceptualizing hypertext originally developed by David Stotts and Rick Furuta in the late 1980s and subsequently extended. As in many other hypertext models, Trellis hypertexts are displayed as nodes and links. What distinguishes the Trellis model from others (and, in my view, makes it especially important for educators) is its capacity to model browsing semantics.

Briefly, the term browsing semantics refers to how readers can (and cannot) move around in a document. In traditional models, movement is dictated by two rules:

  1. Readers occupy one, and only one, node at a time.
  2. Links are static (i.e., they cannot be altered).

Trellis rewrites both these rules. Readers can open (and close) multiple nodes simultaneously, and links are conditional in the sense that a link may or may not appear on a given page, depending on the pages a reader has already visited. Recently, Stotts, Furuta, and Cabarrus (1998) have described methods to test complex hypertext systems to determine whether certain kinds of movement are (or are not) possible, an important capability if we want to be sure that hypertexts we develop meet certain use conditions (e.g., “all readers must read page X before they can select a link to page Y”) that may be difficult or impossible to check by hand.
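
To make the testing idea concrete, here is a toy verifier in Python. It is not the Trellis machinery itself (Stotts, Furuta, and Cabarrus work with formal models and model checking); it simply enumerates browsing states by brute force to test a precedence condition of the kind quoted above. The hypertext graph and node names are invented for illustration.

```python
# A toy check of the browsing property "no reader reaches Y without
# having read X first," by exhaustive search over browsing states.
# Illustrative only -- not the formal Trellis model checker.

from collections import deque

def violates_precedence(graph, start, x, y):
    """Return True if some path reaches y without visiting x first."""
    seen = set()
    queue = deque([(start, frozenset([start]))])
    while queue:
        node, visited = queue.popleft()
        if node == y and x not in visited:
            return True  # property violated: y reached, x never read
        for nxt in graph.get(node, []):
            state = (nxt, visited | {nxt})
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return False

# A hypothetical hypertext: can a reader reach "Y" while skipping "X"?
graph = {"start": ["X", "A"], "A": ["Y"], "X": ["Y"]}
print(violates_precedence(graph, "start", "X", "Y"))  # True: start -> A -> Y
```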

One reason views of hypertext that include multiple nodes are important in educational contexts is that learning materials should help students make connections across facts and ideas. Multinode presentation is one way to do this. Even when learning is intended to occur within a single self-contained document, a case can be made for content and process nodes presented in parallel that help students see both what is to be learned and how to go about learning it (McEneaney, 2000a). Conditional linking has clear implications for designing hypertexts that depend on certain reading sequences or prior learning, and the capacity to test hypertext systems mechanically for browsing features is especially important as the size and complexity of learning materials increases.

In effect, traditional hypertext models focus on the hypertext itself, not on readers. Trellis, while still rather text-centric, introduces elements (i.e., browsing semantics and conditional linking) that start to bridge the gap between reader and text. For that reason alone, Trellis deserves the attention of educators.

The Perseus Project: A high-tech approach to classical studies

At first blush, the Perseus Project (Mylonas, 1992) seems an unlikely mix of content and format. This cutting-edge computer project funded by (among others) the U.S. National Science Foundation and the Digital Libraries Initiative operates out of the classics department at Tufts University in Massachusetts. Its objective is to provide a rich text and media resource for students of classical languages, culture, and archeology.

The apparent incongruity of classical studies and high-tech tools, however, quickly resolves when one realizes that the long-term utility of a hypertext is directly proportional to the stability of the information we put into it. In a sense, content drawn from classical studies is ideal for hypertext, since it allows developers to create large-scale applications with fewer worries about updating and maintaining the application.

The project began at Harvard University in the late 1980s, and the first disc version of Perseus appeared in 1991. A second edition was released as a CD set in 1995. In its current incarnation, the Perseus Project is available both in CD format and on the Web.

Perseus includes a wide range of text and media resources. There are primary texts in their original languages, translations (that can be opened beside the original), secondary materials, and a wide range of image databases of maps, archeological sites, and classical artifacts. There are also search tools to help users access information in Perseus and materials specifically developed to support use of Perseus as a learning and teaching system.

Tim Berners-Lee: Reinventing the world as a Web

There is a tendency to think of the Web as something that leapt forth fully formed in the early 1990s. People are sometimes surprised to learn the Internet has been around since the early '80s, and that the ARPANET, its predecessor, dates back to the 1960s. I suspect we have similar ideas about the millionaire “geeks” who suddenly are thrust into the public eye. The truth is that the geeks often have been hard at work for a very long time before the media creates the illusion of their instantaneous elevation to wealthy guru-hood.

Tim Berners-Lee is one such celebrity whose long-term efforts ultimately led to a better way of doing things. In the early 1980s, Berners-Lee worked at CERN, the European Organization for Nuclear Research. He faced a problem common among computer-oriented technical workers: his work required access to documents, mailing lists, data sets, and other materials on a variety of computers that employed different operating systems, file formatting conventions, and applications. Just keeping up with this kind of techno-Babel is a headache. Attempting to integrate and apply information from disparate formats takes on migraine-like dimensions.

As it happened, Berners-Lee left CERN (and the particulars that motivated his early interest in solving his technical problem). But in 1989 he returned. In his absence, two very significant changes had overtaken the computing landscape there: new standards for the transfer of information over networks (TCP/IP) had been established, and the Internet had gone from a small-scale experimental system to a worldwide network.

In March 1989, shortly after returning to CERN, Berners-Lee prepared an initial proposal. In a subsequent revision he described a hypertext-like system that would provide “a single user-interface to many large classes of stored information such as reports, notes, data-bases, computer documentation, and on-line systems help.” In October 1990, the proposal that had been circulating was revised again by Berners-Lee and Robert Cailliau, who had agreed to refer to this system as the “World Wide Web.” In 1991, CERN released Berners-Lee's Web browser and server software to the public, along with the standards and protocols that would be used in the exchange of information (HTTP, HTML, and URL). The Web was live.

It would take another 2 years for the Mosaic browser to break the Web wide open, but once that happened, the place of Tim Berners-Lee in the pantheon of modern computing was assured.

Cyberspace: From dark imaginary beginnings to ubiquitous ambiguity

William Gibson coined the term “cyberspace” in the 1984 cult classic novel Neuromancer, an unflinchingly violent and chaotic view of a high-tech future populated by “cyberpunks” and gangsters. (This fictional world is depicted in Gibson-inspired films such as Johnny Mnemonic and The Matrix.) Within just 10 years of the novel's publication, cyberspace had arrived.

Today's real-world cyberspace has its origins in the late 1960s, when the United States military began creating a communications network (ARPANET) to link national centers of research and development. In the early 1980s, another important step was taken when new methods of data transmission (TCP/IP) were adopted, new networks emerged (BITNET and CSNET), and civilian and military computers were divided into separate networks. In the early 1990s standards began to emerge that supported more user-friendly access to this network, and a few years thereafter the population of cyberspace was soaring, as easy-to-use Web browsers became widely available.

It now appears that the pioneer phase of cyberspace settlement is well behind us, with the entertaining speculation and abstractions of the middle 1980s giving way both to stunningly practical matters of stock offerings, markets, and monopolies, and to more sobering Gibson-like visions having to do with privacy, manipulation, and malicious virtual assault.

The Web: From a trickle to a stream to a torrent

Although the Internet goes back to the early 1980s and the proposal in which Tim Berners-Lee described the Web is more than a decade old, it was the Mosaic Web browser that really got things rolling. Growth of activity on the Web was substantial in the months prior to the browser's 1993 release, but that growth increased by a factor of 10 in the months following the appearance of Mosaic.

It isn't difficult to see why the release of Mosaic had the impact it did. It was easy to use, adopting a simple point-and-click interface, and it was available for a wide range of computer platforms (Macintosh, Wintel, Unix, etc.), so it wasn't limited to a small group of potential users. Its hypermedia capabilities were a significant step forward from then-popular text-based browsers (e.g., Lynx), and third-party add-on software modules could significantly extend its capabilities.

Soon after the National Center for Supercomputing Applications at the University of Illinois released Mosaic, other commercial browsers began to appear (nearly all based on Mosaic). Netscape's browser established itself as the early leader, but when Microsoft finally joined the fray, its Internet Explorer quickly became a major player.

The graphical point-and-click browser had put a user-friendly face on the Web, and the stampede by everyday folks to get online began.

The database: The Web's “killer app”

Literary hypertext may still be waiting for its “killer app” (a revolutionary software application), but on the Web at least, that battle seems settled: database-driven Web sites rule, and this is nowhere more evident than in the explosion of online e-commerce. Never mind that they're still figuring out how to turn a profit; businesses can't seem to get enough of the Web. Banner ads are as ubiquitous as the blinking pink text on neon green backgrounds of Netscape's early days. And behind almost every large-scale Web site is a database.

Essentially, a database is a collection of structured information, rather like the printed tables of products, prices, and distributors a shop clerk consults when a customer wants to make a special order. The power of a database is that the tables that present information can be broken down and reassembled in new ways, depending on particular needs. This means databases let users decide both what information to view and how they will view it. Databases are behind most online catalogs that allow you to read about and view products. Even more important, databases are used to track the business you do online and support the financial transactions that allow you to make purchases.
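
The shop-clerk analogy translates almost directly into code. Here is a minimal sketch using Python's built-in sqlite3 module; the table, columns, and data are invented for illustration, but they show how a single stored table can be broken down and reassembled to answer different questions.

```python
# A minimal sketch of "breaking down and reassembling" database tables.
# The schema and data here are invented for illustration.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL, supplier TEXT)")
db.executemany("INSERT INTO products VALUES (?, ?, ?)",
               [("notebook", 2.50, "PaperCo"),
                ("pen", 1.25, "InkWorks"),
                ("binder", 4.00, "PaperCo")])

# One view of the data: what does PaperCo sell, and for how much?
for name, price in db.execute(
        "SELECT name, price FROM products WHERE supplier = ?", ("PaperCo",)):
    print(name, price)

# Another view, assembled from the same table: a special-order total.
(total,) = db.execute("SELECT SUM(price) FROM products").fetchone()
print(total)
```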

From a literacy perspective, however, the most important thing about the appearance of databases on the Web is that this technology supports text that is literally interactive (as opposed to the metaphorical way the term is often used). Choices readers make during reading can determine the text that is subsequently presented. The implications this has for our study of reading and for future technologies of reader-sensitive text are absolutely astounding. Taken to its logical limit, reader-sensitive text takes on the character of a conversation with an intelligent text-base, realizing Alan Turing's ideal for an artificially intelligent device.

HTML: The struggle to define the language of the Web

Tim Berners-Lee, Robert Cailliau, and others at CERN developed what ultimately became the Web in the late 1980s and early '90s as a system for sharing documents on a network. The first official version of the language of the Web, hypertext mark-up language (HTML 1.0), was proposed by Berners-Lee and Daniel Connolly in June 1993. Since then the World Wide Web Consortium (W3C), the international Web governing body, has released several major revisions. The current standard is HTML 4.01/XHTML 1.0.

From a classical (i.e., pre-Web) perspective, HTML was important because it offered a nonproprietary path to hypertext development. Until 1993, hypertexts created on one system generally couldn't be used on another. Part of the reason for this state of affairs was that defining common standards requires compromise and, in the absence of a clear need, it made little sense to sacrifice power in an effort to achieve broad compatibility. The folks at CERN, on the other hand, were motivated by a need to establish broad compatibility: they wanted to create a system that would be both easy to use and powerful enough to create a worldwide network for exchange of scientific information. They were prepared to compromise to establish a workable system.

Hypertext mark-up language also has been a critically important element in the development of the Web since standards are what allow different browsers on different computers to display a common set of files. Unfortunately (or fortunately, depending on your point of view), much of the material on the Web is nonstandard. People with a financial stake in the Web (particularly the folks at Netscape and Microsoft) often stepped outside of W3C standards to create features they felt users wanted, and that helped to identify their products. In some cases, the features they created were eventually adopted as standards. In others, they ended up as browser-specific oddities.

Given the current state of browser technology it's difficult to predict where HTML is headed. It's gone through a stunning array of mutations over the last decade. Moreover, browsers are now explicitly developed to support add-on software modules that operate outside of HTML, using them to access databases, play movies, read word-processed files, and run animations. Although HTML was once the universal language of the Web, its dominance seems to be yielding to powerful -- but complex and often proprietary -- add-ons.

Literary hypertexts: Where are the readers?

Where are the hypertexts? This blunt question was first raised by Mark Bernstein at the 1990 European Conference on Hypertext. It expresses both a genuine question and the frustration literary hypertext developers and promoters experience as a community still waiting to emerge from the shadow of the print tradition.

While expository hypertext materials are now comfortably established in a number of genres (help systems, reference sources, computer manuals and system guides, for example) and have even replaced some print predecessors (when was the last time you used a print-based library catalog?), hypertext fiction and poetry still represent niche literatures. This leads to a strong local sense of community, but growing frustration over the readers who aren't participating.

The real question being asked, of course, isn't about the hypertexts at all: it's about hypertext readers. And this is where the reasons for their scarcity become clearer. Computer help systems are most likely to be useful when a person is seated at a computer (no surprise there). Moreover, someone seated at a computer is more likely to be favorably disposed to read online (a bit more subtle, but still fairly self-evident). Given these circumstances it's hardly surprising that online help systems and documentation have in part led the charge into the new literacies.

Literary hypertexts, on the other hand, seem to find circumstances stacked against them. Readers of literature are accustomed to text that is portable, and they often take special pleasure in the sheer physicality of the medium. Literary hypertexts, by contrast, are still typically chained to suitcase-sized hardware (with some notable exceptions that nevertheless seem not to be making any dramatic inroads). And hypertext, by its very nature, eschews the physicality of the text in favor of the freedom and flexibility of a virtual format.

The Web may well change some attitudes (see, e.g., the Word Circuits and Electronic Literature Organization sites). But a compelling case for literary hypertext (our literary “killer app”) has yet to emerge, despite the outstanding works that have appeared.

Our future selves: Expectation, habit, and the irresistible pull of words

When I teach reading courses I like to do a little in-class experiment to dramatize the power of the written word. I present PowerPoint slides (acetate overhead transparencies work fine, too) displaying random letter strings and words in different colors. The students are required to call out the color of the letters they see.

Students generally do pretty well at first (especially on the random letter strings). There is a steady buzz of color names as we cycle through my stack of slides. Even when real words appear, the students do OK. The fun starts, however, with the last slide -- where the word blue appears in bright red letters. Most students seem to have a mental hiccup, stop, and start to laugh. Others are surprised to find themselves blurting out, “Blue!”
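
For readers who want to try the demo without preparing slides, it is easy to improvise at a computer. Here is a minimal Python sketch using ANSI color codes (it assumes a color-capable terminal); the word list is my own invented example.

```python
# A minimal terminal version of the color-naming demo.
# Assumes a terminal that understands ANSI color escape codes.

COLORS = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m"}
RESET = "\033[0m"

items = [
    ("xqzrw", "green"),   # random strings: naming the ink color is easy
    ("chair", "blue"),    # unrelated real words: still manageable
    ("blue", "red"),      # the Stroop item: the word fights the ink color
]

for word, ink in items:
    input(COLORS[ink] + word + RESET + "  (press Enter for the next item)")
```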

We talk about the idea of automaticity and how experience shapes cognitive processes. Although color perception seems the more fundamental process, in this situation (which illustrates the well-known Stroop effect) words insist on being read. We go on to talk about the human habits and expectations that help us make meaning out of the torrent of sensory data that floods into our brains every waking second.

I find myself wondering about the expectations and habits -- some more obvious than others -- I have acquired in half a lifetime of reading print. The stream of thought flows on to “If scientists were fish, the last thing they'd study would be water.” Then I bump up against the obvious once again: point and click and the kind of interactivity common on the Web aren't much like print. What kinds of habits and expectations does that kind of reading lead to? What happens when words take on the same fleeting virtual character as sensory data?

Unconnected places: The issue of the “digital divide”

In the excitement of new technologies and possibilities that dawn each day, we can sometimes lose sight of the fact that, while we ride the crest of this wave, others cannot rise out of the trough. In fact, a great majority of the world's population has never engaged in the interactive literacies that are now redefining modern society -- not because they don't choose to, but because they cannot (Wresch, 1996).

Perhaps the greatest paradox of the new literacies is the fact that, while they certainly liberate and empower readers in ways that the print tradition never could, they require a complex social and technical infrastructure and level of “buy in” that was never part of that print tradition. Back in the dark ages, a single person could assemble the resources needed to publish a book. Once that book was published, it could travel the world under that person's arm (or anyone else's). In the new age of literacy, both publication and access are tied to large-scale social technologies whose successful operation depends on the cooperation and interaction of many people -- and in the absence of the required infrastructure, both publication and access come to a grinding halt.

And even when all the required resources are available, the pace of development in this new literacy community means that joining it involves risks comparable to hopping a moving freight train. If you're lucky, you'll get a grip as the train passes and will hold on long enough to recover from the violence of sudden acceleration into your future. If you're not so lucky, you'll be left in the dust and grime of the freight yard, nursing bruises and pulled muscles.

Intellectual property: New realities, new rules

Issues of intellectual property pose some of the more pressing problems facing the new literacy. We have well established protocols and statutes in the print domain, but our legal, social, and technical infrastructures are ill equipped to deal with the ambiguities, gray areas, and new ideas that electronic text introduces.

Part of the problem is that digital materials can be reproduced so easily and with such fidelity that the temptation to engage in or use unauthorized reproductions is much stronger than it was in the past. Reproducing printed work at a photocopier might not be cheaper than simply buying the text from the publisher, and the quality of the resulting product is almost invariably lower. Digital reproduction, on the other hand, is generally perfect and typically requires little more than executing a “copy” command.

Other aspects of the problem are still more fundamental since they call into question our most basic concepts about what information is, who owns it, and how we transfer rights to use it. Drug companies, for instance, want to patent naturally occurring gene sequences. Napster, a Web-based music exchange service, lets users copy music files from one another's computers but never engages in the reproduction itself. Music industry publishers and performers lobby to ban the sale of used compact discs, arguing that the fee exchanged when a CD is purchased transfers a license to listen rather than rights to the object itself.

Ted Nelson claims the Xanadu system is capable of resolving many of these issues as they relate to hypertext and hypermedia. More than a few commentators (see, e.g., Keep & McLaughlin, 1995; Nielsen, 1995, p. 38; Wolf, 1995) have, however, expressed reservations about whether Xanadu will ever live up to the vision of its creator.

We are struggling both with the matter of translating existing problems and ideas into forms that will work in online environments and with the more daunting challenge of recognizing and defining the problems and concepts that will be driving development and distribution of information in the years ahead.

Xanadu realized: Will the Web reveal more than we want to know?

It hasn't happened yet. It most likely won't happen for some time to come. But there may be a day when Ted Nelson's vision of a universal electronic “docuverse” is realized. While it probably won't play out quite as Nelson plans, there does seem to be a progression toward an ever more complete archive of human activity on the Web.

Literary works are daily being added to the Project Gutenberg collection. The U.S. Library of Congress provides access to an extraordinary array of online resources, and many print publications are now available in searchable online versions. Personal Web pages proliferate, and e-commerce is changing the way we do business.

Despite the promise of Xanadu, however, many believe that the problems we will face as we set out to create a universal human docuverse will be enormous. A hacker with even modest skills, given some patience and a suitably underdeveloped conscience, can usually compromise the security of a network server. There are also those who suggest that we have less to fear from hackers than we do from existing social institutions (big business, government, etc.) that have the resources to shape the Web in ways that suit bureaucratic and political interests at the expense of individual liberties.

Ultimately, whatever docuverse we end up creating is likely to reflect our values and social practices in ways we may not find flattering. I suspect we'll end up somewhere between Ted Nelson's “mythical place of literary memory” and a Database Nation (Garfinkel, 2000), driven and manipulated by the information brokers. If I, like Star Trek's Jean-Luc Picard, could bid someone to “make it so,” I'd go with Ted Nelson's utopian vision in a New York minute. For all my enthusiasm on matters technological, however, I can't shake the feeling that the future we create won't differ that much from the past we've left in our wake, and that makes me just a little apprehensive. What kind of world will it be when everything is online?

Convergent media: Literacies online, commitment, and dissonance

Convergence is one of the hot new buzzwords in online communication. It refers to the integration of numerous media under a single umbrella technology (digital transmission). Nowadays the excitement is about the Web as a universal channel for virtually any media someone might care to broadcast.

Of course, convergence is not exactly a new idea. Analog recording methods (e.g., audiocassettes and VHS) are gradually being replaced by digital techniques (compact discs, digital video). Real-time Web-based radio is now commonplace, and streaming video is poised to expand online video offerings. Not long ago one of my graduate students excitedly introduced me to a Web site that provides free telephone service between his desktop computer (using a headset) and the telephone in his parents' home in China.

I suspect, however, that our attitudes and ideas about literacy will not yield to the same almost invisible transition occurring in our technology. I remember being irritated by the inclusion of “media techniques” as an English language art when the Standards for the English Language Arts were released by the International Reading Association and the National Council of Teachers of English in 1996. I still have that kind of response when the word text is used to refer to spoken language, behavior, and other artifacts that have nothing whatever to do with written language. I find it impossible simply to set aside my emotional commitment to print literacy, despite a professional agenda explicitly devoted to undermining the kind of narrow perspective it embodies. I use the word hypertext rather than the broader hypermedia, and I qualify my technology work as “text oriented.”

Personal confessions aside, I don't think I'm the only one struggling with this kind of dissonance.

Links as content: Links, paths, and learning

It has always seemed to me that one of Vannevar Bush's most remarkable insights was his appreciation of the power of links to shape understanding. He believed that the associations readers might make would be personal, but he also felt that a person's path through a document could be very useful to someone else who shared the same interests or objectives. Bush even suggested that Memex “trail blazers” who created “paths” through complex sets of documents could be considered “authors” and that the paths they generated might be considered as valuable or more valuable than the content they linked.

Part of the reason I consider this insight important is related to my perspective as an educator. Most of my hypertext-oriented research and writing over the past 4 years has explored the idea of paths in hypertext, both as prescriptive devices intended to support learning (McEneaney, 2000a), and as devices to help us better understand readers (McEneaney, 1999, 2000b). Briefly, the work I have pursued has consistently demonstrated a significant relationship between the paths readers take and measures of effective hypertext use. How readers choose to navigate in hypertext is a powerful predictor of their ultimate success in using hypertext materials -- more powerful even than their print reading skills.
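
As an illustration of what it means to treat paths as data (this is not the actual measure used in the studies cited above), a reader's path can be recorded as a sequence of nodes and compared with a prescribed instructional path:

```python
# An invented, simplified path measure: what fraction of the prescribed
# node-to-node transitions did the reader actually make?

def path_adherence(reader_path, prescribed_path):
    """Fraction of prescribed transitions that appear in the reader's path."""
    prescribed = set(zip(prescribed_path, prescribed_path[1:]))
    made = set(zip(reader_path, reader_path[1:]))
    return len(prescribed & made) / len(prescribed)

prescribed = ["intro", "concept", "example", "practice", "summary"]
reader = ["intro", "concept", "practice", "example", "practice", "summary"]
print(path_adherence(reader, prescribed))  # 0.75: three of four transitions
```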

Strategic navigational thinking appears to be a new element in the reading mix. And in environments like the Web (our current latest-and-greatest boon to education), providing readers support and guidance is absolutely crucial. Links are the glue that holds the Web together. I'd argue that links serve a similar function for readers -- they form the structural framework on which readers ultimately hang their understanding.

References

Bolter, J.D., Joyce, M., Smith, J.B., & Bernstein, M. (1999). Storyspace [computer software]. Watertown, MA: Eastgate Systems.

Botafogo, R.A., Rivlin, E., & Shneiderman, B. (1992). Structural analysis of hypertexts: Identifying hierarchies and useful metrics. ACM Transactions on Information Systems, 10(2), 142-180.

Bush, V. (1945, July). As we may think. Atlantic Monthly, 176(1), 101-108. Available: http://www.theatlantic.com/unbound/flashbks/computer/bushf.htm

Coover, R. (1993, August 29). Hyperfiction: Novels for the computer. New York Times Book Review, 1, 8-12.

Engelbart, D.C. (1963). A conceptual framework for the augmentation of man's intellect. In P.W. Howerton & D.C. Weeks (Eds.), Vistas in information handling: Volume 1, The augmentation of man's intellect by machine (pp. 1-29). Washington, DC: Spartan.

Garfinkel, S. (2000). Database nation: The death of privacy in the 21st century. Sebastopol, CA: O'Reilly & Associates.

Gibson, W. (1984). Neuromancer. New York: Ace.

Halasz, F.G. (1987). Reflections on NoteCards: Seven issues for the next generation of hypermedia systems. In Proceedings of the ACM conference on hypertext (pp. 345-365). New York: Association for Computing Machinery.

Hall, W., Davis, H.C., & Hutchings, G. (1996). Rethinking hypermedia: The Microcosm approach. Dordrecht, Netherlands: Kluwer.

International Reading Association/National Council of Teachers of English. (1996). Standards for the English language arts. Newark, DE/Urbana, IL: Author.

Jones, G. (1993, August 29). The disappearing $2,000 book. New York Times Book Review, 12-13.

Joyce, M. (1987). afternoon, a story [computer hypertext]. Watertown, MA: Eastgate Systems.

Kahney, L. (1999, August 25). Programmer reaches his Xanadu. Wired News. Available: http://www.wired.com/news/technology/0,1282,21430,00.html

Keep, C., & McLaughlin, T. (1995). Ted Nelson and Xanadu. In The Electronic Labyrinth. Available: http://jefferson.village.virginia.edu/elab/hfl0155.html

Landow, G.P. (1997). Hypertext 2.0. Baltimore, MD: Johns Hopkins University Press.

McEneaney, J.E. (1999). Visualizing and assessing navigation in hypertext. In Proceedings of the Tenth ACM Conference on Hypertext and Hypermedia (pp. 61-70). New York: Association for Computing Machinery.

McEneaney, J.E. (2000a, January). Learning on the Web: A content literacy perspective. Reading Online. Available: http://www.readingonline.org/articles/art_index.asp?HREF=/articles/mceneaney/index.html

McEneaney, J.E. (2000b). Navigational correlates of comprehension in hypertext. In Proceedings of the Eleventh ACM Conference on Hypertext and Hypermedia (pp. 254-255). New York: Association for Computing Machinery. Available: www.acm.org/pubs/contents/proceedings/hypertext/336296/ (scroll contents listing to access metadata and full text PDF)

Moulthrop, S. (1991). Victory garden [computer hypertext]. Watertown, MA: Eastgate Systems.

Mylonas, E. (1992). An interface to classical Greek civilization. Journal of the American Society for Information Science, 43(2), 192-201.

Nielsen, J. (1993). Usability engineering. San Francisco, CA: Morgan Kaufmann.

Nielsen, J. (1995). Multimedia and hypertext: The Internet and beyond. Boston, MA: Academic Press.

Nelson, T.H. (1965). A file structure for the complex, the changing, and the indeterminate. In Proceedings of the ACM 20th National Conference (pp. 84-100). New York: Association for Computing Machinery.

Nelson, T.H. (1993). Literary machines 93.1. Watertown, MA: Eastgate Systems. Portions available: http://www.xanadu.com.au/ted/TN/PUBS/LM/LMpage.html

Rosenblatt, L.M. (1994/1938). The transactional theory of reading and writing. In R.D. Ruddell, M.R. Ruddell, & H. Singer (Eds.), Theoretical models and processes of reading (4th ed.). Newark, DE: International Reading Association.

Shneiderman, B., & Kearsley, G. (1989). Hypertext hands-on! Reading, MA: Addison-Wesley.

Stotts, P.D., Furuta, R., & Cabarrus, C.R. (1998). Hyperdocuments as automata: Verification of trace-based browsing properties by model checking. ACM Transactions on Information Systems, 16(1), 1-30.

Wolf, G. (1995, June). The curse of Xanadu. Wired, archive 3.06. Available: http://www.wired.com/wired/archive/3.06/xanadu.html

Wresch, W. (1996). Disconnected: Haves and have-nots in the information age. New Brunswick, NJ: Rutgers University Press.