First Monday

FM Reviews

++++++++++

J.R. Okin.
The Information Revolution: The Not–for–Dummies Guide to the History, Technology, and Use of the World Wide Web.
Winter Harbor, Maine: Ironbound Press, 2005.
cloth, 350 p., ISBN 0–976–38574–0, US$26.95.
Ironbound Press: http://www.IronboundPress.com

J.R. Okin. The Information Revolution

Anyone who was an active user of the Internet before 1993 knows what a different world it was from today. Prior to the development of Mosaic, the first popular Web browser, accessing the full capabilities of the Internet demanded a sophisticated knowledge of a variety of separate tools and protocols such as ftp, archie, telnet, gopher, veronica, wais, listservs and news groups. All of them had to be mastered to some extent if the user wanted to see the full range of what was available online then. This characteristic made the Internet very much a closed–off world, one that required real motivation to access.

All of this changed, of course, with the development and popularization of the World Wide Web. That development, along with an examination of the components that drive the Web, forms the focus of J.R. Okin’s second book in his Internet series, The Information Revolution: The Not–for–Dummies Guide to the History, Technology, and Use of the World Wide Web. Readers who imagine that the origins of the World Wide Web began with Tim Berners–Lee will receive many surprises, for Okin does a very good job of showing the long process of development surrounding hypertext and hyperlinking that necessarily predated the actual establishment of the first World Wide Web server in 1990. Without the work of early developers in hypertext software, such as van Dam and Nelson’s 1967 Hypertext Editing System, and commercial products like Apple’s HyperCard, the World Wide Web might either have been different from what it is today or at the very least delayed in its maturation.

But make no mistake: it was Berners–Lee who developed the necessary mechanisms required to make the World Wide Web operate as it does today, followed by the administrative structures needed to encourage its adoption and growth. Okin reviews how HTTP, URLs, HTML, browsers, and Web servers interact, and how each is a vital but not self–sufficient ingredient in the Web’s success. These accounts, while dry at times, are invariably lucid. They show clearly why Berners–Lee’s accomplishment was doubly impressive: it built upon previous work in hypertext and hyperlinking, yet also incorporated new components to create a truly useful, easy–to–use Internet information management system.

That system, which we know as the World Wide Web, also had the advantage of allowing access to previous information management systems in use then and now, including gopher, telnet and wais. Thus, the Web, while new, incorporated the old. This quality permitted an easy transition from the systems in use prior to the Mosaic browser, to information carried primarily or entirely through browsers via HTML and other file types on Web servers. Okin’s discussion of the history and mechanisms of the Web is supplemented by coverage of its uses by multimedia providers, businesses, and tracking services which employ features such as Web bugs. He gives several effective examples of business successes and failures, showing why success is linked to a business understanding of what the Web is and isn’t, and why it requires businesses to integrate their information interfaces with their Web presence.

The gopher protocol, which was barely mentioned in Okin’s first book in his series, The Internet Revolution, is given its due here. He correctly describes it as a quick, resource–frugal information management system whose hierarchical organization still makes it ideal for many types of databases. Eclipsed by the Web in the mid–90s, gopher remains to be discovered by those whose experiences with the Internet began after that time [1].
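
Okin’s point about gopher’s frugality is easy to see in practice: the protocol amounts to sending a selector string and reading back tab–delimited menu lines. The sketch below is illustrative rather than drawn from the book; it is a minimal Gopher client written in Python using only the standard library, and the Floodgap server mentioned in note 1 can serve as a test target.

    import socket

    def gopher_menu(host: str, selector: str = "", port: int = 70) -> list[str]:
        """Fetch a Gopher menu per RFC 1436: send the selector plus CRLF,
        then read until the server closes the connection."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(selector.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                chunks.append(chunk)
        text = b"".join(chunks).decode("latin-1")
        # Each menu line is: type character + display string, TAB, selector, TAB, host, TAB, port
        return [line for line in text.splitlines() if line and line != "."]

    if __name__ == "__main__":
        # The Floodgap server mentioned in note 1 below:
        for line in gopher_menu("gopher.floodgap.com"):
            print(line.split("\t")[0])   # item type plus human–readable display string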

It was gratifying to see the early Cello browser covered, one of several browser casualties caused by the success of Mosaic and the arrival of Netscape. But a surprising omission is any reference to Scott Yanoff’s famous Yanoff List. Before the advent of effective Web search engines the Yanoff List was a must–have for any Internet user. It consisted of a long list of Internet sites alphabetically grouped by subject, a kind of Internet Yellow Pages containing hundreds of ftp, gopher, Web, and other locations relevant to each subject. In the early ’90s, anyone using the Internet knew about the Yanoff List and eagerly awaited the updates that were available for downloading from a variety of locations.

For this reviewer Okin’s presentation on the Semantic Web, the last chapter in the book, is the most compelling in the volume. The author shows how the collective interaction of HTML, XML (Extensible Markup Language) and RDF (Resource Description Framework) could transform the Web from being information based to knowledge based. The current Web, which is HTML–centric, lacks the capability of showing how one kind of information differs from another. That’s why, when search engine queries are conducted, the results on a word or phrase can be about many more things than the searcher wanted to see. This forces the user to analyze each query result for the relevance the searcher had in mind. A Semantic Web, on the other hand, would return just those kinds of results the searcher wished to receive. Okin is optimistic about the development of the Semantic Web, believing that if the right tools are developed it could be merged, on a much larger scale than now, with the existing HTML information–only Web.
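
A small example makes the contrast concrete. The sketch below is illustrative rather than drawn from the book; it uses the Python rdflib library and a hypothetical example.org vocabulary to attach machine–readable statements to a document about jaguars the animal, so that a query can retrieve documents by intended sense, a distinction a keyword search over plain HTML cannot make.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/schema#")   # hypothetical vocabulary for illustration

    g = Graph()
    g.bind("ex", EX)

    doc = URIRef("http://example.org/articles/jaguar-habitat")
    g.add((doc, RDF.type, EX.Article))
    g.add((doc, EX.topic, Literal("jaguar")))
    g.add((doc, EX.topicSense, EX.AnimalSpecies))  # not the car, not the operating system

    # Ask for documents about the animal only; a keyword search over HTML
    # sees just the string "jaguar" and cannot make this distinction.
    results = g.query("""
        PREFIX ex: <http://example.org/schema#>
        SELECT ?d WHERE {
            ?d ex:topic "jaguar" .
            ?d ex:topicSense ex:AnimalSpecies .
        }
    """)
    for row in results:
        print(row.d)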

Far less compelling is the author’s discussion of commonplace uses of the Web, such as job hunting, Web dating, and genealogy. In a book deemed to be "not for dummies" it is difficult to imagine target readers needing any introduction or exposure to these everyday uses of the Web. Even casual users are likely to know as much about these subjects as Okin presents.

The book concludes with an appendix consisting of a Web dateline, Internet etiquette, and jargon. As with his first volume, The Internet Revolution, Okin retains the practice of making introductory statements outlining what he will be discussing in each section, followed by restatements of that information in what follows. This produces a very formal structure that lacks the more fluid effect that having each section introduce the next might bring. His structural approach also often makes for a laborious exposition, something one might expect in a text geared more toward novices — and an irony considering the "not for dummies" subtitle of each of the three books in the series.

Yet despite this, Okin’s second book retains the valuable documentation methods of his first, making it another good addition to an Internet library and a text readers may find themselves returning to as the Internet continues its fascinating development. A final volume in this series is slated from the author, on the impact of the Internet’s technology and its future development. — Douglas Kocher, Chair, Department of Communication, Valparaiso University End of Review

 

Note

1. Readers interested in learning more about the Gopher protocol should visit gopher://gopher.floodgap.com/. There is a great deal of useful information about Gopher, including a list of functioning gopher servers worldwide.

++++++++++

Richard Rogers.
Information Politics on the Web.
Cambridge, Mass.: MIT Press, 2004.
cloth, 200 p., ISBN 0–262–18242–4, US$35.00.
MIT Press: http://mitpress.mit.edu

Richard Rogers. Information Politics on the Web

The problem: In the U.S., media consolidation has been swift and certain until very recently (when the U.S. Congress blocked proposed Federal Communications Commission rules changes, backed by the former FCC chair, Michael Powell, that would have allowed further consolidation). Through horizontal integration, vertical integration, and cross–promotional synergies, a handful of deregulated media behemoths (Disney, General Electric, AOL Time Warner, Viacom and Vivendi), absorbed in enhancing their efficiency and profitability, have arguably subverted historical practices of democracy in the pursuit of maximal profit and power. "Their corporate interlocks and unified cultural and political values," says Ben Bagdikian, "raises troubling [and durable] questions about the [viability of] the individual’s role in ... [a] democracy" (Solomon, citing Bagdikian, www.tompaine.com/feature2.cfm/ID/3039/view/print).

In Amsterdam, Richard Rogers implicitly embraces the critiques of Bagdikian and others. In a November 2003 online prologue to "All American Issues: Seven Stories from the Homeland," Rogers lays out "Six Arguments Against News." Beginning (implicitly) with Altheide’s notion of news formats, Rogers notes that journalists no longer exclusively "routinize the non–routine." (The term refers to fitting the "news," the non–routine, into the daily routines of news production). For Rogers, these days, the task of narrative news formatting is not even primarily that of the journalist:

"Reality increasingly delivers the formats to the news. Delivered are press releases, sound–bites, story and video cams, and scripted events ... . The scandal is how media accepts [these formatted products, such as talking points] unproblematically as [reality–based] news" (Rogers, p. 2, http://www.issuenetwork.org/reports/news_networks/pdf/news_networks.pdf).

Rogers then catalogs the many problems created by these commercial news practices. Such problems include prominent and repetitive factual gaps, temporal mismatches, and the nearly daily creation of pre–mediation formatting. Pre–mediation formatting is produced by "media seeders" who "sell" scripts of "what will happen." The propagandist or public relations operative has a single goal: to turn a preferred "definition of a situation," a "spin," however contingent or improbable, into the appearance of the real and inevitable, for political gain, corporate profit, or both. Claims of imminent threats in the form of WMDs, as part of the "selling" of the Iraqi war, are a recent and prominent example.

Addressed to NGOs committed to social change, the purpose of the November 2003 conference, and an ongoing idée fixe for Rogers, was "to examine in some depth the conditions under which [commercial] news may be marginalized ... [with] the idea that [information] networks [on the Web] may serve as a new means to [effectively] do without [commercial] news [exposure]" (Rogers, 2003, p. 4). Assessing the present media system as "in tatters," and unable to represent reality or rational democratic discourse, Govcom.org and Rogers turn the attention of NGOs toward the promises and perils of emerging informational networks. Dedicated to reflexively mapping and theorizing about the actual and potential characteristics of Web–based information networks, Rogers’ new book, Information Politics on the Web, is part of Govcom.org’s ongoing epistemo–technological project of "defin[ing] reality more adequately" than the "myths and lies and crack–brained notions" that have so much contemporary currency, particularly in the U.S. (Rogers, citing Mills, 2004, p. 105).

The Web has never been a neutral location, uninhabited either by creative resistance or pliant purveyors of formally–mediated informational politics.

Mapping and Theorizing About Emerging Alternatives: The Web has never been a neutral location, uninhabited either by creative resistance or pliant purveyors of formally–mediated informational politics. Informational politics is what Castells calls the purveying of "official" reality, as opposed to the circulation of alternative [and potentially more credible] networked, Web–based accounts, which Rogers calls "information politics."

Rogers splits information politics into two components: "Back end" and "front end" issues. By "back end politics," Rogers refers to how algorithmic search technologies either privilege or marginalize the information they select and index (or fail to select and index). By "front end politics," Rogers refers to assessments of how diverse, inclusive and prominent the array of available sites on the Web is. To test these notions, Rogers and Govcom.org developed four pieces of software for mapping the activity of information networks in cyberspace. As "political instruments" for the Web, one program displays dynamic linking practices ("Issue Crawler"), while others plot oscillations of interest in "stable" issues ("Issue Barometer" and the "Web Issue Index"). Still another ("Election Tracker") registers shifting political loyalties in left activist subpopulations that Rogers axiomatically (and problematically) identifies as the entirety of "civil society."
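
To give a sense of what such mapping involves: the Issue Crawler is generally described as performing a form of co–link analysis, keeping only those pages that two or more starting points link to. The sketch below is a rough, standard–library Python illustration of that general idea; it is not Govcom.org’s code, and a production crawler would add politeness delays, error handling, and URL normalization.

    from collections import defaultdict
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class _LinkParser(HTMLParser):
        """Collect absolute http(s) links from a page."""
        def __init__(self):
            super().__init__()
            self.links = set()
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value and value.startswith("http"):
                        self.links.add(value)

    def fetch_outlinks(url: str) -> set[str]:
        """Fetch a page and return the absolute URLs it links to."""
        parser = _LinkParser()
        with urlopen(url, timeout=10) as response:
            parser.feed(response.read().decode("utf-8", errors="ignore"))
        return parser.links

    def colink_network(seed_urls, threshold=2):
        """Keep only targets linked to by at least `threshold` seed pages,
        the basic move of co-link analysis."""
        linkers = defaultdict(set)   # target URL -> seeds that link to it
        for seed in seed_urls:
            for target in fetch_outlinks(seed):
                linkers[target].add(seed)
        return {t: s for t, s in linkers.items() if len(s) >= threshold}

The resulting map of shared targets is the kind of "issue network" that a tool of this sort would then visualize as a graph.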

Chapter Two, "The Viagra Files," is accessible and entertaining. Conducted with two sets of students (Austrians, in 2000, and the Dutch, in 2001), the study, as Rogers describes it, had "surfer–experts" solicit, and then map, answers to these questions: "What is Viagra, and who is it for?" Via various search engines, surfers found that the drug’s connotations and alternative uses overwhelmed Pfizer’s "official" meaning. On the Web, Viagra was the following: A "Californian drug"; an underground money–maker; a substitute for herbal aphrodisiacs; "a smile"; a party drug; a female aphrodisiac; and, finally, an unrecognized quandary for ER doctors (the conventional intervention for stroke patients, nitrates, kills a Viagra user).

Consider the process of "Cool Hunting," where fashionista market researchers explore and catalog the marginal in adolescent culture. Capital commodifies, exploits, and exhausts what fashionistas find (thereby rendering it "uncool").

Yet Rogers’ assertions about the meaning of what he found are questionable. For example, Rogers describes the "official" and "unofficial" Viagra narratives as "in collision." However, the crude concept of a collision is inadequate as a theorization or description. Bakhtin’s semiotic notion of a profusion of significations and practices, the carnivalesque, is a better fit than Rogers’ indiscriminate notion of a collision. (Applying Barthes’ ideas, in S/Z, about the inversion between the denotative and connotative also would have been a better fit). Additionally, while Rogers’ "heart [is] gladdened" because "Viagra leads a richer, more youthful and experimental life [on the Web]," what Rogers does not recognize, in the play between marginal cultures and the commercial center, is the potential that underground uses may well be co–opted by transnational capital. For example, consider the process of "Cool Hunting," where fashionista market researchers explore and catalog the marginal in adolescent culture. Capital commodifies, exploits, and exhausts what fashionistas find (thereby rendering it "uncool"). In the midst of the process, "Cool Hunters" return to the margins, cataloging emerging marginalities, as the parasitic cycle repeats. Is Pfizer surreptitiously co–opting and commodifying alternative uses? We don’t know, but it’s possible.

In Chapter Four, "After Genoa: Remedying Informational Politics," Rogers details the construction and uses of "The Web Issue Index." It is

"a prototypical tracking [device] providing regular indications of leading (global and national) social issues [on] the Web. [It] strives to harness the value of ‘word on the net,’ and distill trends for ... analytical work" (Rogers, 2004, p. 95).

In the form of an "Issue Ticker," the Index tracks (among other functions) biases in corporate media coverage of recent dissent events (such as anti–globalization or anti–war protests). The Index allows for evidence–based interpretations of how and where corporatist media frames ascribe discredited identities and motives (such as "enemies of the poor," or "enemies of freedom") to activists. (The "Election Issue Tracker" does much the same, in monitoring the oscillation of electoral politics and issues). Not surprisingly, Rogers finds that Web–based "issue networks" facilitate the distribution of alternative narratives, although he’s careful to note that a Pew Center study shows that, over time, Web surfers narrow and routinize access to a small group of preferred sites.

The applied nature of Rogers’ work on networks is fascinating, and undoubtedly deserves continued energy and close attention. However, whether his work means what he says it means (in service of all of civil society), well, that’s another question. There are several unstated "black box" axioms at the core of Rogers’ exposition that deserve investigation. For example, Rogers splits the social world into unproblematic halves: The private sphere consists of transnational corporate interests and the governments that have become increasingly pliant tools of those interests. The public sphere consists solely of progressive left–wing NGOs, who are identified as representing the core of civil society. Framing his objects of analysis this way creates several problems. They are as follows:

The ghost of Habermas hovers over Rogers’ project.

First, the ghost of Habermas hovers over Rogers’ project. Rogers’ project subsumes Habermas’ definition of the public sphere as a place where

"Something approaching public opinion can be formed. Access is guaranteed. A portion of the public sphere [emerges] in every conversation in which private individuals assemble to form a public body ... conferring in an unrestricted fashion — [with] the freedom to express and publish opinions — about matters of general interest ... . This ... requires specific [technological] means ... ." (Habermas, in Kellner and Kay, 1989, pp. 136–144).

Whether rational–critical discursive practices ever constituted any human society remains a hotly debated factual and normative question, which involves issues of literacy and class privilege. These remain unaddressed. Then, Rogers identifies the entirety of civil society with left–leaning NGO agendas. Rogers de facto jettisons "non–members" (presumably conservative, neo–liberal or religious groups, for example) who do not embrace such agendas. Given Govcom.org’s claim that its key project is to augment and disseminate reality–based narrative mappings, such an excision is an extremely curious move. The resulting validity problem means that the global scope of Rogers’ claims has to be ratcheted back, from cataloging Web–based information politics across the entirety of civil society to mapping the claims of an elite, anti–corporatist fraction.

Second, Rogers avoids the rather troubling issue that comes with such public information network mapping — the problem of surveillance effects. For example, the U.S. government has a history (COINTELPRO, CISPES) of surveilling, mapping, infiltrating and attempting to subvert the activities and relationships of lawful dissenting groups. In the post–9/11 environment, characterized by DARPA’s infamous TIA (Total Information Awareness program) and the pre–mediation logic of threat preemption, it must be tempting to use left NGOs’ network mappings, based on or inspired by Govcom.org’s work, to identify potential "persons of interest." The transparency of these network maps extends from Govcom.org’s NGOs to the FBI, the CIA and neo–con and right–wing theocratic groups, such as the Family Research Council, or as exhibited by David Horowitz’s Manichean taxonomy of progressive evildoers at www.discoverthenetwork.org. Rogers would be well–advised to consider these unintended consequences.

In the end, what Rogers and Govcom.org are actually doing, by way of specific software development and network information heuristics, is intriguing. But just as they revel in the fact that their study shows that Viagra means something other than Pfizer’s official definition, so too, Rogers’ work also means something more complex and mixed than Govcom.org’s official and somewhat self–congratulatory self–definition. — Dion Dennis End of Review

++++++++++

John Thompson.
Books in the Digital Age: The Transformation of Academic and Higher Education Publishing in Britain and the United States.
Cambridge: Polity Press, 2005.
cloth, 480 p., ISBN 0–745–63477–X, US$79.95.
paper, 480 p., ISBN 0–745–63478–8, US$29.95.
Polity Press: http://www.polity.co.uk

John Thompson. Books in the Digital Age

To begin at the end, John Thompson’s Books in the Digital Age is a superbly well–researched work that has few peers in the scholarship on the modern book industry. Thompson has made an in–depth survey of academic publishing in the United States and Great Britain, talked to many industry practitioners, analyzed trade data, and has comprehensively detailed the many issues facing publishing today. No current practitioner in scholarly monograph and college textbook publishing can afford not to read this. Policy–makers will use this book to guide them in their work. Students of publishing, particularly attendees of the many publishing training courses, will read sections of the book as they prepare for their careers. All academic libraries will want to have a copy. This is the definitive "inside–out" work: How publishers view their own fields, with all the challenges and opportunities. The amount of work that went into this volume is simply staggering.

Having begun at the end, I have gotten ahead of myself. Thompson’s goal is to examine two segments of the publishing industry in two countries. Those segments (or fields, as Thompson calls them) are academic monographs (largely published by university presses) and college textbooks; the countries are the United States and Great Britain. Along the way he glances at publishing in other countries and languages and makes some good observations concerning other segments, especially trade publishing. There is a potted account of academic journals publishing, no coverage of K–12 textbooks or children’s books, and no real engagement with professional reference work (e.g., medical texts). Thompson’s reason for focusing on his two segments is that they are related — both source copyrights from the academic community and both (especially college texts) look to college instructors to influence purchasing decisions. I happen to believe that these segments are further apart than Thompson does and that they will diverge further as the digital age fully comes upon us (e.g., electronics will come to automate instruction, not just the texts), but as a way of drawing a box around a topic, this is a good one, and Thompson is to be commended for the hard–headed way he defines his topic and goes about studying it. And the quality of the study is truly impressive: I found only a couple of errors of fact (e.g., Amazon licenses its title database from Baker & Taylor, not Books in Print) and none that undermine Thompson’s argument. For a book of this size, this is truly amazing.

Publishing is not one thing; it is not even like one country.

At the center of Thompson’s analysis is the concept of a publishing field. A field is more than a category of publishing — more, that is, than the category of college texts or children’s books or adult trade publishing. A field is a category plus all the relationships that grow up around that category, including competitors, vendors, distribution channels, agents, and customers. Industry outsiders may not immediately grasp how important Thompson’s model is. Publishing is not one thing; it is not even like one country. Rather, it is like driving around Europe, where there is a common thread to the constituent nations, but there is also nonetheless a tangible difference when one crosses the frontier from Italy into France. Publishers are united in some things (importance of copyright), but have weighted views of others. The cost of paper, for instance, is of great importance to a trade publisher, but is a minor matter to a medical house; and "databasing" properties is central to reference publishers, but has only marginal significance to the publisher of Harry Potter. Few members of one field know members of others. I would be willing to bet that every trade publisher could name all 30 titles on the New York Times hardcover bestseller list, but none could name the top 10 titles in college texts or El–Hi. I was bemused recently to chat with the head of one of the largest U.S. college operations, who had never heard of CrossRef, JSTOR, or Highwire Press.

Thompson puts the idea of a field to good use, especially in sounding a warning to publishers trying to move into new fields, where their old networks may not be of much avail. One of the strongest parts of the book is the examination of university presses as they attempt to bolster their flagging monograph operations by moving into the trade (bad idea) and textbook publishing (for which Thompson makes the best case I have ever seen, though I remain skeptical). The concept of a publishing field also helps to explain how slow–moving traditional publishers are. To introduce a real innovation requires not only coaxing one’s own organization along (which can be very, very hard) but also getting the other elements of the field to move as well. A field, in other words, serves as a conservative influence, and that is why most innovation takes place at the margins of the industry or among industry outsiders who develop start–ups. A case in point: in 1989 I heard separate discussions about creating an online bookstore at Random House and Barnes & Noble, about six years before the launch of Amazon. Now, how did Jeff Bezos get there first?

Thompson is particularly good at recognizing the many ways the various elements of a field add value to the publishing process. Advocates of open access publishing should read this carefully, as it serves to describe the many aspects of publishing that will have to be re–created if the toll–access paradigm is to be subverted. There is a double–edged sword here: the field adds value through its various interconnections, but it resists innovation, which could put elements of the field at risk. The field exercises its influence in an almost organic manner, and no book exists outside a field.

A practitioner in this field will read this chapter and call for a whiskey.

In a book of exceptionally high quality, some stars nonetheless shine brighter than others. Thompson’s examination of the U.S. college textbook business is outstanding. He has captured the whole thing: the growth of the used book market; the insidious relationships between college bookstores and used book vendors; the "package wars" (adding more and more costly features to textbooks, which drives up prices); industry consolidation; college faculty as gatekeepers; and the tremendous pressure to get adoptions for the large enrollment courses targeted to freshmen and sophomores. Thompson sees the vicious cycle: competition for big adoptions causes publishers to add features (package wars); new features drive up prices; higher prices make the used book market viable; used books drive publishers to revise texts more often; new editions drive up prices; and used books become of increasing value to impecunious students. Thompson also sees how digital media potentially puts an end to this (lower–cost digital textbooks sold on a subscription basis would wipe out the used book market — and, incidentally, kill off college bookstores in the process), but notes with his characteristically cold eye that the market has not yet taken to the digital text. A practitioner in this field will read this chapter and call for a whiskey. Especially disheartening is Thompson’s claim that the British college publishing business is becoming more and more like the American. Bartender! Bartender!

The news is not entirely dire, though (except, perhaps, when the topic is U.S. university presses, for which there may be no hope). In the college textbook market Thompson properly identifies a key issue: there is meager sales representation for texts designed for upper–level courses, since the largest U.S. college publishers (the top six comprise perhaps 85 percent of the market, not counting used books) are pounding each other to get the lower–level adoptions. Thompson wants the university presses to jump in here and take the high ground. I agree that there is indeed an opportunity here, but I think this niche will be filled by entrepreneurs who work with purely electronic solutions and who sell their wares as site licenses to libraries rather than to the students themselves. Over time these entrepreneurs will work their way down the curriculum, ultimately challenging the incumbents. The future of the textbook business will look more like the early–stage Morgan & Claypool than the powerful, ponderous Pearson.

So what don’t I like about this very good book? Not too much, but here are the particulars:

A final word about Books in the Digital Age, and that is its sheer book–ness. This is a big book and it carries with it all the associations we have come to bring to books over the centuries. Even in paperback this book could serve as a doorstop, assuming one nowadays stops a door with a fat book and not with an exquisitely engineered GPS. The sheer heft, the suggestion of mass defoliation to produce the paper, implies a mind that has immersed itself in its subject and, over a long period of time, mastered all the intricacies. The design is horsy, promising an academic title — and it delivers, though this is perhaps a less exemplary aspect of the overall package. The iconic book is something of a monument to its author and lends an air of authority, which Thompson richly deserves. — Joseph J. Esposito (espositoj [at] gmail.com). End of Review



Copyright ©2005, First Monday