The Social Life of Information by John Seely Brown and Paul Duguid

Chapter Two: Agents and Angels

I'm sorry to hear that you are depressed.
It's true, I am unhappy.

Do you think coming here will help you not to be unhappy?
I need some help, that much seems certain.

What would it mean to you if you got some help?
Perhaps I could learn to get along with my mother.

Tell me more about your family.

In this exchange, the therapist was Eliza, a legendary computer program written in 1966 by Joseph Weizenbaum of MIT. The identity of the patient is fortunately unknown. According to folklore, Eliza software persuaded many people to pour out their souls in the belief that a person, not a software program, was responding.

The software behind Eliza has since developed in many directions, including, for example, customer service. (This is not all that surprising. Irate customers often resemble the deranged, and customer service agents, automatons.) On the Web site of Neuromedia, for example, you can meet the prototype, Shallow Red, who will greet you as follows:

Hello. I'm Shallow Red. I'm a Virtual Representative made with NeuroStudio™. What is your name?
John

Are you ready to ask some questions, John?
Yes

What would you like to know about Neuromedia?
What does Neuromedia do?

Neuromedia makes authoring and serving tools to assist people in the development and deployment of VSRs.

Eliza and Shallow Red are both examples of "chatterbots." These are software programs that will simulate a human response to typed questions. There's an old joke (old, that is, by Internet standards) with the tag line "on the Internet, nobody knows you're a dog." And sometimes it can be quite hard to tell if what you are dealing with is a chatterbot. There's one, apparently, that plays Jeopardy with great skill. [ 1]
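
The mechanics behind such programs are, at bottom, simple pattern matching. The sketch below (ours, and far cruder than Weizenbaum's actual program, which used more elaborate keyword ranking and reassembly rules) reproduces the flavor of the exchange that opens this chapter:

import re

# A minimal, illustrative Eliza-style responder. Each rule pairs a pattern
# with a response template; matched fragments are echoed back with
# first-person words swapped for second-person ones.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i am (.*)", "Do you think coming here will help you not to be {0}?"),
    (r"i need (.*)", "What would it mean to you if you got {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r".*", "Please go on."),  # fallback when nothing else matches
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am unhappy."))
# Do you think coming here will help you not to be unhappy?
print(respond("Perhaps I could learn to get along with my mother."))
# Tell me more about your family.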

Chatterbots, in their turn, are instances of "autonomous agents" (familiarly known as "bots"). These agents, it seems likely, will play an increasingly critical role in the organization, in particular the self-organization, of social life. First, they help deal with the cascades of information that many people find so threatening. Second, they promise to handle many intricate human tasks. If they can, they promise to sweep away many conventional organizations and institutions previously involved in those tasks. So bots offer an interesting way to evaluate some of the endisms threatened by information technology.

These agents are not just the dream of eager futurists. Many are already hard at work. Without them the Internet, which has grown so dramatically in the past few years, would by now be unmanageable. Diligent agents that endlessly catalogue the World Wide Web for such familiar services as Yahoo!, Lycos, Excite, and AltaVista have helped to manage it. As a result, you can find needles of information in the Web's vast haystack and order in the midst of its apparent chaos. Indeed, it was the search-and-catalogue capability of their agents that principally transformed these sites from mere search engines into lucrative portals: points where people plan and begin their voyages on the 'Net (and so points of great interest to advertisers).

Agent software also runs on the sites of major Internet retailers. When a regular shopper arrives, for example, at the sites of the bookseller Amazon.com or the CD seller CDNow, a personalized list of suggestions greets him or her. Agents track the customer's preferences and purchases and, by "collaborative filtering," compile these lists dynamically. The suggestions reflect purchases made by others with apparently similar tracks (and so, presumably, tastes). [ 2] Agents will trim the virtual "shop window" that a customer sees on arriving at the retailer's site to reflect his or her interest. In some cases they may also offer personalized discounts tailored to particular buying patterns and not available to other shoppers.
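
In outline the technique is straightforward, though any real retailer's system is far larger and more refined. Here is a rough sketch, with invented purchase data, of recommending by overlapping tracks:

# A rough sketch of collaborative filtering: recommend what was bought by
# customers whose purchase histories overlap yours. Data and scoring are
# invented; production systems use much larger matrices and subtler weights.
purchases = {
    "ann":  {"Neuromancer", "Snow Crash", "Cryptonomicon"},
    "bob":  {"Neuromancer", "Snow Crash", "The Diamond Age"},
    "carl": {"Gardening Basics", "The Compost Handbook"},
}

def similarity(a, b):
    # Jaccard overlap between two purchase histories: 0 = none, 1 = identical.
    return len(a & b) / len(a | b)

def recommend(user, k=3):
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        if sim == 0:
            continue                     # ignore customers with no overlap
        for item in theirs - mine:       # suggest only what the user lacks
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ann"))  # ['The Diamond Age'], suggested by bob's similar track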

Agents don't only work for portals, retailers, and similar organizations. Increasingly people have agents at their personal disposal, even on their desktop. Macintosh computers, for example, come with "Sherlock," a customizable agent that will orchestrate many of the Web's different search engines into a collective search on a user's behalf. (Search engines don't catalogue the Web in the same way, and none catalogues more than a fraction, so orchestration is more powerful than solo performance.) Web users also have access to shopping agents or "shopbots" that on request will roam the Web sites of different vendors, comparing prices.
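
The orchestration itself is conceptually simple: fan the query out to every engine, then merge what comes back. A sketch of the idea, with stand-in functions rather than real search services:

from concurrent.futures import ThreadPoolExecutor

# Stand-ins for real engines; an actual agent would query each service's
# own interface and parse its results.
def engine_a(query): return ["http://example.org/a", "http://example.org/b"]
def engine_b(query): return ["http://example.org/b", "http://example.org/c"]

def metasearch(query, engines):
    # Query every engine in parallel and merge, dropping repeated addresses.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda engine: engine(query), engines))
    seen, merged = set(), []
    for results in result_lists:
        for url in results:
            if url not in seen:   # no engine catalogues more than a fraction,
                seen.add(url)     # so the union beats any single list
                merged.append(url)
    return merged

print(metasearch("knobot", [engine_a, engine_b]))
# ['http://example.org/a', 'http://example.org/b', 'http://example.org/c']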

There are many plans for agents to perform other tasks and responsibilities on behalf of individuals. Already they undertake highly complex institutional financial trading, so it isn't surprising that developers envisage them trading for individuals. Already, too, NASA's informatics division has a "smart T-shirt," remotely connecting its wearer's body to large central computers that can track vital signs and provide medical advice or summon assistance as necessary. This work, done for the military, is expected to migrate to civilian use. [ 3]

These examples seem part of a general trend leading from institutional agents to individual, personalized ones. As one book puts it, over the next few years, bots will be ever "more tightly woven into the fabric of our daily lives." [ 4] By 2005, a study offering more precision predicts, "at least 25 percent of the then-current PC/workstation user base will allow their personal agents to anticipate their needs." [5]

The Road Ahead

There's undoubtedly a great deal of reasonable forecasting in such predictions. There's a good deal of technological evangelism, too. [ 6] Indeed, in the hands of some futurists, agents appear more as angels offering salvation for encumbered humanity than as software programs. They will hover over our virtual shoulders, monitoring our good deeds and correcting our failings. They will have all the advantages of their ethereal, wholly rational character to see past our earthly foibles and overcome our bounded rationality. But they will also be capable of entering into the material side of life, mowing lawns, stocking the refrigerator, and driving the car.

The evangelistic view, however, may overlook some significant problems on the road ahead. And in overlooking those problems, it may actually miss ways in which agents can and will be practically useful, preferring instead to trumpet ideal but ultimately impractical uses. In considering agents in this chapter, then, our aim is not to dismiss them. We are great admirers. But we seek to ground them in social life rather than "information space."

The problems bots face, however, may be hard to see. Agents are victims of the sort of Procrustean redefinition we mentioned in chapter 1. The personal names (Sherlock, Jeeves, Bob, or Red), though surely harmless enough on their own, hint that the gap between digital and human is narrow and narrowing. Using human job titles such as "personal assistant" or "agent" for digital technologies closes this gap a little further, suggesting that these jobs might soon be open to real or virtual applicants alike. Other terms, "digital broker" or "electronic surrogate," also elide the world of software and the world of humans, suggesting the possibility of having the best of both worlds. And many accounts simply endow digital agents with human capacities. They talk of bots' "decision-making powers," "social abilities," "learning capacities," and "autonomy." [ 7] Bill Gates confidently predicts that bots will soon develop personality. [ 8]

Redefinition comes from the other direction, too. That is, some accounts don't so much make bots sound like humans as make humans sound like bots. One definition puts both in a broad class of "goal pursuing agents, examples [of which] are to be found among humans (you and I), other animals, autonomous mobile robots, artificial life creatures, and software agents." [ 9] This kind of redefinition not only makes humans seem like bots, but makes human activity (learning, developing taste, wanting, choosing, pursuing, brokering, negotiating) appear principally as different kinds of individual information processing done equally by software goal-pursuing agents or by "wetware." [10]

Faced with such redefinitions, it becomes important to pay attention to ways that human learning, developing taste, wanting, choosing, pursuing, brokering, and negotiating are distinct. In particular, these human practices reflect social and not simply individual activities. Redefinition, however, makes it easy to overlook the significant challenges that this social character presents. As the history of artificial intelligence shows, computer science has long found social issues remarkably hard to deal with (as, indeed, have social scientists). "It isn't that these issues aren't interesting," Robert Wilensky, a professor of computer science at the University of California, Berkeley, put it succinctly, "it's just that these problems, like so many social/psychological issues, are so hard that one had to hope that they weren't on the critical path." [ 11]

But with the sort of agenda for agents that we have summarized above, technologists and designers have now put these issues directly on the critical path. So whether as designers, producers, or users, we will all have to confront them, asking to what extent agents should or can be woven into the fabric of human life in all its complexities. These are issues that everyone must face.

Furthermore, they are issues that, from a technological point of view, cannot simply be read off from past successes with agents, admirable though these successes are. If social and psychological issues lie ahead (and they surely do when dealing with taste, style, and judgment), and if Wilensky's insight that such matters have been avoided in the past is right, then the road ahead is not likely to continue directly from the one behind. We cannot cry "full speed ahead" and trust that the outcome will be desirable. A tight focus on information and behaviorist aspects of individuals has propelled development successfully to its current position. But it will probably not suffice to take us to the less restricted future laid out for bots. To understand where to go next, it's time to open the aperture and look around. In what follows, we try to raise some of the social and psychological issues that, for good reasons, have usually been left out of discussions, both of the past and of the future.

Ranks of Agents

To understand the strengths and limits of agents, let us look primarily at personal assistants and some of the tasks planned for them. First, we look at the sort of information brokering done by the Macintosh's Sherlock. We then consider three categories of what might be thought of as market brokering. Here Pattie Maes (a professor at MIT's Media Lab and one of the most insightful designers of collaborative filters) and two colleagues distinguish three stages in the overall "buying process" that bots can handle. They are "product brokering," "merchant brokering," and "negotiation." [ 12]

Information Brokering

Information brokering puts agents in their own backyard, weeding and selecting from among the very stuff that is central to their existence - information. As we said, there are many successful information brokers currently in use, not only combing the World Wide Web as a whole but also, for example, gathering market information from the major exchanges, collecting weather information from diverse sources, indexing on-line newspapers, updating citation links between on-line journals, and gathering, "mining," and processing information from multiple databases. From here, it's easy to assume that these sorts of agents might make those classical providers and selectors of information - newspapers, publishers, libraries, and universities - redundant. Another handful of institutions appears set to disappear. [ 13]

To explore the strengths of information-brokering bots, however, it's important also to recognize their limitations. We sent Sherlock out on a relatively simple search, and to keep it well within its own interests, we asked it to search for knobot. Orchestrating half-a-dozen major search engines, Sherlock very quickly returned 51 "hits." That is, it pointed to 51 documents in the vast cornucopia of the Web containing the word knobot, an apparently rich return in such a short time. But an hour or so of follow-up exposed some limitations.

7 were simple repeats (different search engines turned up the same site and Sherlock failed to filter them out).

3 more were simple mirrors. Here, a site on one server carries the direct image of a site on another, so although the addresses were different, the document containing knobot was exactly the same.

11 pointed directly or indirectly to a single document apparently held at an "ftp" site of the Corporation for National Research Initiatives (NRI), a high-powered research center in Reston, Virginia. NRI computers, however, refused to acknowledge the existence of this document.

2 pointed indirectly to a document at the Johnson Space Center (JSC) via the word Knobot™. The trademark suggested that the JSC had some proprietary interest in the term and so its site might be particularly rewarding. Alas, the link to the JSC was dead, and a search using the JSC's own knobot returned the answer "Your search for knobot matched 0 of 418337 documents."

2 others led to documents that did not contain the word knobot. Here the document itself had probably changed, but not the index.

14 led to a single document maintained by the World Wide Web Consortium. Unfortunately, the word knobot seemed more or less irrelevant to the document.

2 led us to the Music Library at Indiana University, which, curiously, was peripherally involved in some rather overblown claims about the digital library and the Internet. [ 14] The documents there, alas, only pointed us on to the World Wide Web Consortium again, but this time to a document apparently removed.

2 pointed to papers using the word knobot in passing.

4 led to glossaries defining the word knobot to help people read documents using the word knobot.

4 led to two documents at Stanford University, two chapters in a report outlining how agents could make searches of the Web and digital libraries more usable. [ 15]

We cannot read too much significance into this outcome of a single search. Sherlock is a relatively crude engine and our single-word search, with no refinements, was itself yet more crude. Our results primarily serve to remind us that the Web is a vast, disorderly, and very fast-changing information repository with enormous quantities of overlapping and duplicate information and that all its catalogues are incomplete and out of date. That in itself, however, is a useful lesson when we consider whether the Web, self-organized but patrolled by bots, might be an alternative to libraries. Automatic organization, like automatic search, has its perils.
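
Some of these failures are tractable in principle. The repeats and mirrors, for instance, could be caught by comparing the fetched documents themselves rather than their addresses, along these lines (a sketch only, with invented pages, that assumes the fetching has already been done):

import hashlib

def collapse_mirrors(pages):
    # Map each fetched page's content digest to the first address seen, so
    # mirrored copies at different addresses count once. Real crawlers must
    # also strip away markup noise before comparing.
    unique = {}
    for url, text in pages.items():
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        unique.setdefault(digest, url)
    return unique

pages = {
    "http://site-a.example/knobot.html": "Knobots roam the Web ...",
    "http://mirror.example/knobot.html": "Knobots roam the Web ...",  # a mirror
    "http://site-b.example/other.html":  "A different document.",
}
print(len(collapse_mirrors(pages)))  # 2 distinct documents, not 3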

More significantly, perhaps, Sherlock's objective search, conducted on relatively friendly territory, hints at difficulties that might lie ahead in sending out a Sherlock-like surrogate with requests to "get me a knobot," let alone to get the "best," the "most interesting," the "most useful," or the "most reliable" knobot. This brings us to the marketplace kinds of brokering discussed by Maes and her colleagues.

Product Brokering

Product brokering, the first in their basket of shopping agents, involves agents that alert people to the availability of particular items that, given a person's previous track record, might be of interest. It's the sort of brokering done by the bots at Amazon and CDNow. Like Iain Banks? Well, there's a new William Gibson book available. Listen to the MJQ? Well, there's a new issue of John Lewis. eBay, the on-line auction site, offers agents to alert users when a particular type of item comes up for sale. This kind of brokering is enormously useful. But because it is so useful, all sorts of people are interested in it, both as producers and consumers. When interests conflict, they can lead to major problems for brokers and their users. [ 16]

The flap over Amazon's endorsements indicated the potential for difficulties. The Amazon site offers, among other things, recommendations and endorsements for books it stocks. In this it recalls conventional book (or wine) stores, where staff often provide personal recommendations, and customers come to rely on them. Such recommendations thus have something of a "mom and pop" or corner-store charm to them and can draw people who might otherwise resist the impersonal character of the on-line retailer. But when it was revealed that publishers paid Amazon for some endorsements, that charm dissipated. Amazon was quick to insist that it would always indicate which endorsements were paid, but the fall was irreversible. Innocence was lost, and faith in good agents on "our" side had to face the specter of bad agents purporting to represent "our" interests but in fact secretly representing someone else's.

The Amazon problem recalls the famous case against American Airlines' SABRE system. Ostensibly this system would find all available flights from a computer database. In fact, it was subtly weighted to favor American's flights. Both examples raise questions of technological transparency. We might all be able to use agents, but how many are able to understand their biases among the complex mathematics of dynamic preference matching? Is your agent, you must ask, neutral, biased, or merely weighted? And can you tell the difference? Who really controls your agent: you, the designer, or the person feeding it information? How can you be sure that your agent has not been "turned," becoming a double agent? As with the SABRE case, will we all need to rely on some supervening agencies, rather than agents, to ensure fair play? An infocentric approach does not resolve these questions.
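
To see how hard that difference can be to tell, consider how little it takes to weight a ranking. In the sketch below (the formula and data are entirely invented, not any retailer's actual system), a small sponsored boost hides among legitimate relevance terms, and the user sees only the final order:

def score(result, query_terms, sponsor_boost=0.15):
    # Two defensible ingredients and one quiet thumb on the scale.
    relevance = sum(term in result["text"].lower() for term in query_terms)
    freshness = 1.0 / (1 + result["age_days"])
    boost = sponsor_boost if result.get("sponsored") else 0.0
    return relevance + freshness + boost

results = [
    {"text": "Cheap flights to Boston", "age_days": 2, "sponsored": True},
    {"text": "Cheap flights to Boston today", "age_days": 2},
]
ranked = sorted(results, key=lambda r: score(r, ["cheap", "flights"]),
                reverse=True)
print([r["text"] for r in ranked])  # the sponsored result edges ahead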

Merchant Brokering

Maes and her colleagues' second category, merchant brokering, includes agents that can roam the 'Net comparing prices and so report back on "best buy" options. Again, these sorts of bots are already in use and are helpful resources, providing the sort of information that makes markets more efficient, removing layers of intermediation in the process.

But again, too, they present some problems for the idea of the Web that they are assumed to promote. The buzz is all about personalization, product diversity, the demassification of production and consumption, and niche markets. Merchant brokering, however, heavily favors standardization.

Consider, for example, that despite all the rhetoric about the Internet "killing" the book and tailoring information, the first great flagship of Internet enterprise was a book retailer. There is no real surprise here. Books are the oldest mass-produced commodity. They are wonderfully standardized, with each copy of a particular edition being almost identical to its fellows. And markets, particularly long-distance markets, work best with standardized measures and standardized products. [ 17] The integration of the European market, for example, has driven nonstandard varieties of fruit and vegetables out of the market altogether, while wine regions that once blended seventy varietals now find themselves with five. So the first concern about merchant brokering is whether it may lead to less choice, not more, as is often promised.

A second problem is that, as part of the process of price competition and standardization, subjective issues such as quality and service disappear. Both service and quality, the economists Brad DeLong and Michael Froomkin have argued, tend to get squeezed out of markets when low price becomes the decisive factor. [ 18] From the decisive factor, low price soon becomes the only factor. Network effects will tend to marginalize merchants who try to resist. With bots, which are not well suited to weigh subjective issues such as quality and service, these will probably disappear much more quickly.

In fact, though, quality and service don't simply disappear in these conditions. They can become highly problematic "externalities," not subject to the sales and purchase contract, but imposing heavy burdens after the fact. Low-bid environments have frequently shown this. Many government agencies award contracts to the lowest bidder to eliminate costly corruption. Instead they get costly headaches from inept or deceptive bidding, poor service, cost overruns, supplemental negotiations, "change orders," out-of-their-depth firms going bankrupt, and complicated litigation, all of which must ultimately be paid for. Led by Rhode Island and Washington, several states are now exploring alternative, more reliable indicators than the lowest price to choose suppliers. [ 19]

And as with product brokering, the question of good and bad agents also haunts merchant brokering. This time the user must ask not "Is this recommendation really what I wanted?" but "Is the price quoted in fact the lowest price available?" Amazon again became involved in such a question when it stepped from the market of highly standardized commodities, new books, to the far less standardized one of out-of-print books. Their prices reflect such subjective factors as provenance (who owned the book before), author signatures, wear and tear, scarcity, and the like, so prices can vary significantly. Amazon offered customers a bot to search for out-of-print books, promising that it would recommend the lowest-priced copy. But people who used the service have claimed that it would often recommend a copy that was not the lowest priced available.

Again the questions arise, "Is this an inept or a corrupt agent?" and perhaps more important "Can you tell the difference?" The 'Net, some economists believe, will become home to perfect information. It will be harder to make it a place of perfect (and trustworthy) informers.

Negotiating

These problems suggest that agent brokering is more problematic than it may first appear. The difficulty of agent negotiating is already well known. Despite the way many pundits talk of their presence just over the horizon, true negotiating bots do not yet exist. Maes and her colleagues are more circumspect about their development. And experimental results show serious problems. Nonetheless, it's easy to find confident talk about the speedy appearance of "Situations where your software agent meets my agent and they negotiate for prices and conditions." [ 20]

Difficulties with bots arise because human negotiating is an intricate process that tends to keep an eye on externalities such as the social fabric as well as on the immediate goal. This social fabric embraces such things as social capital and trustworthiness, things that make social relations, including market relations, possible. The confident predictions about markets of negotiating bots, however, tend to assume that negotiating is little more than a matter of matching supply and demand.

The first thing to notice is that people negotiate all the time. These negotiations, however, are only occasionally over price. Usually, humans negotiate behavior. They continually accommodate themselves to one another, and in the process construct and adjust the social fabric. In negotiating this way, people are barely conscious that they are negotiating at all. Indeed, when negotiations surface and people do become aware of them, it is usually a sign that implicit negotiation has broken down or that there is a tear in the social fabric.

Consider, for example, the way people negotiate with one another in the flow of collective conversation. Conversations demand individual and collective decisions over who will speak, when, and for how long. But in informal conversation the negotiations involved in such turn taking, though rapid and efficient, are all but invisible. To claim a turn, people will merely shift their gaze, subtly alter body position, or wait for a break to interject. Others will turn from or toward them, inviting them in or leaving them out. Speakers will hand off the conversation with a look, rising intonation, or a direct question to another person. Or they may attempt to keep hands on by avoiding eye contact, speaking rapidly, rejecting counteroffers, or pausing only at words like and or so. (One of the reasons that conference calls can be so awkward is that they remove many of these cues used for effective negotiation.)

Or consider a speechless negotiating practice: the way people on a crowded street negotiate a passage, pulling to one side or another, standing back, hurrying forward, dropping into single file then falling back to two abreast, and so forth. All of this is done with rarely a word among people who have never seen each other before and will never see each other again. Such negotiation does not rely on simple rules like "keep left" or "keep right." Rather it is a dynamic process. It too only becomes noticeable when it fails, when, for example, both pull in the same direction, stepping toward the street, then toward the wall, then back again, each time finding the same person in their way. (Such accidental deadlocks have been nicely called faux pas de deux.) [ 21]

In conversation, such impasses might be called a committee. In committees, particularly acrimonious ones, such implicit negotiations often don't work. Then participants have to invoke formal rules: the right to speak, the right to resist interruption, the order of speakers, the topics permitted, the length of time allotted, points of order, and so forth. This sort of explicit, rule-governed negotiation is clumsy, but necessary when the social fabric will not bear implicit negotiation. It looks more like the sort of thing developers have in mind when they talk of bots negotiating, but it is only a small portion of what human negotiation involves. [ 22]

Human foibles. To understand human negotiation and imitate it, then, requires understanding humans as more than simple "goal pursuing agents." For humans, rules and goals bear a complicated relationship to the social fabric. Both may shift dynamically in practice depending on the social conditions that prevail.

This human shiftiness is present even when people shop, let alone when they talk. In shopping, as elsewhere, humans have the awkward habit of changing rules and shifting goals in mid-negotiation. The Berkeley anthropologist Jean Lave discovered this in her studies of people shopping in the supermarket. [ 23] The people that she and her colleagues studied set out with a list that looks like a bot's instructions. When they returned, they accounted for their behavior in bot-like terms, talking about "best buys," "lowest prices," and so on. But studying their actual shopping behavior revealed something quite different. In between setting out and coming back, they continually shifted their goals, their preferences, and even their rules without hesitation. Shopping was in many ways a process of discovering or creating underlying preferences rather than acting in accordance with them.

This sort of goal shifting is even more evident when people negotiate directly with one another. (In a supermarket, we are often more in negotiation with ourselves than with anyone else. "Do I really want this?" "Is this worth it?" "Will the family eat it?" are questions we try out on ourselves, not on other people.) In interpersonal negotiation, people change their immediate goal for the sake of the higher goal of preserving the social conditions that make negotiation possible. For example, people will back off demands, even if they are in the right, because asserting rights may damage the social fabric.

People don't only abandon goals. They abandon rules, too. Authorities, for example, turn a blind eye if they sense that enforcing the law could be more damaging. [ 24] Stock exchanges not only suspend trading (particularly electronic trading), they may even suspend the rules when to do otherwise would damage the exchange or the market. In such circumstances, people implicitly invoke the axiom that if following the rules causes damage, then abandon the rules - an axiom it's hard for bots to follow. Immediate goals, longer-term relationships, and creation and preservation of the social fabric are always in balance in this way. Indeed, this interdependence paradoxically affects even those who deliberately break the rules. Liars have a heavy investment in truth telling, opportunists in dependability, speculators in stability.

Bot foiblelessness. Negotiating bots favor the direct and explicit side of human negotiation. They are the economist's idealized "homo economicus," anonymous traders working in a spot market. They are blind to the complex social trade-offs between goals, rules, and the social fabric. So it's no surprise to find that they are capable of tearing rents in that fabric.

Experiments at both IBM and MIT with bots in apparently frictionless markets indicate potential for destructive behavior. Not "subject to constraints that normally moderate human behavior in economic activity," as one researcher puts it, the bots will happily destabilize a market in pursuit of their immediate goals. In the experiments, bots engaged in savage price wars, drove human suppliers out of the market, and produced unmanageable swings in profitability. "There's potential for a lot of mayhem once bots are introduced on a wide scale," another researcher concluded. [ 25] The research suggests that frictionless markets, run by rationally calculating bots, may not be the efficient economic panacea some have hoped for. Social friction and "inertia" may usefully dampen volatility and increase stability.
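
The destabilizing dynamic is easy to caricature. In the toy model below (ours, with invented numbers, not the IBM or MIT experiments themselves), two pricing bots whose whole strategy is to undercut each other race a price to the floor within a few rounds:

FLOOR = 1.00           # marginal cost; invented for illustration
UNDERCUT_FACTOR = 0.8  # each bot charges 80 percent of its rival's price

def undercut(rival_price):
    # The bot's whole strategy: beat the rival, never mind the market.
    return max(FLOOR, rival_price * UNDERCUT_FACTOR)

price_a, price_b = 10.00, 9.95
for round_number in range(6):
    price_a = undercut(price_b)   # bot A reacts to B ...
    price_b = undercut(price_a)   # ... and B immediately reacts to A
    print("round %d: A=%.2f B=%.2f" % (round_number, price_a, price_b))
# Prices collapse to the floor within six rounds. Neither bot ever asks
# whether the war itself damages the market both depend on.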

DeLong and Froomkin even suggest that such frictionlessness might disable the "Invisible Hand." The great classical economist Adam Smith used the image of the invisible hand to describe forces that reconcile the pursuit of private interests through markets with the public good and with what we have called the social fabric. If the invisible hand does not actually produce good, it at least prevents the pursuit of private interests from doing harm. "Assumptions which underlie the microeconomics of the Invisible Hand," DeLong and Froomkin conclude, "fray badly when transported to the information economy." [ 26]

Better bots, then, will require a better understanding of human negotiation, the contribution of the social fabric, and the role of human restraint in the functioning of the invisible hand. Development will be ill served by people who merely redefine elaborate social processes in terms of the things that bots do well.

How Autonomous?

If human negotiation is a delicate and complex art, it is inevitably more delicate and more complicated when conducted by proxies. And bots are proxies. They act not on their own behalf, but on behalf of others. As such, they also lead us to questions about delegation and representation, which will take us once more to the issue of transparency.

Delegates

One account of bot delegation, defining it in human terms, explains it as follows: "Just as you tell your assistant, 'When x happens, do y' an agent is always waiting for x to happen!" [ 27] The first and most obvious question to confront is: Is that really how human assistants work?

If humans are not, as we suggested, strict in following rules, they are far worse at laying them out. Human delegation relies as much on sympathetic disobedience when circumstances change as it does on strict obedience. As people working with computers have long known, strict obedience leads to disaster. Internet stock traders are becoming increasingly aware of this problem as on-line trades are executed as ordered in conditions of insufficient liquidity, conditions in which a broker would wisely have disobeyed the order. Labor organizations have long known that by "working to rule," following only the letter but not the spirit of a work contract, work becomes impossible.
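
The traders' problem can be made concrete with a toy example of strict obedience in the "when x happens, do y" mold. The stop-loss bot below (all numbers invented) sells the moment its trigger fires, even into a market too thin to absorb the order:

def stop_loss_bot(price, trigger, shares, bids):
    # Obey the rule: once price falls below the trigger, sell everything,
    # walking down whatever bids exist, however poor.
    if price >= trigger:
        return 0.0, shares               # x has not happened; keep waiting
    proceeds, remaining = 0.0, shares
    for bid_price, bid_size in bids:
        take = min(remaining, bid_size)
        proceeds += take * bid_price
        remaining -= take
        if remaining == 0:
            break
    return proceeds, remaining

# A thin market: the trigger fires and the bot dumps 1,000 shares into
# three meager bids. A broker would wisely have disobeyed.
proceeds, left = stop_loss_bot(price=9.90, trigger=10.00, shares=1000,
                               bids=[(9.80, 200), (8.50, 300), (5.00, 500)])
print(proceeds, left)  # 7010.0 0, far below the 9,900 the owner expected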

Behind the stock traders' difficulties and labor tactics lies the simple truth that giving orders that take account of all possibilities is impossible in all but the simplest tasks. Consequently, rules, contracts, delegation, and the like rely on unspecifiable discretion and judgment on the part of the person following those orders. These return us once again to the differences between bots and people. Judgment and discretion are not features of software. They are products of human socialization and experience. As the political philosopher Michael Oakeshott has argued, they are learned not by the acquisition of facts and rules, but through social relations and participation in human activities. [ 28] As Mark Humphrys of the School of Computer Applications, Dublin City University, points out, even if bots could learn like humans, they lack the rich stimuli that humans depend on to develop such things as judgment. Bots, in contrast to humans, live a wretchedly impoverished social existence. [ 29]

Representatives

Of course, the view of bots as blindly obedient, "doing y when x happens," denies them much of the autonomy that is central to their identity as autonomous agents. Consequently, it is more usual to think of bots as having "control over their own actions and their own internal state," as one account puts it. [ 30] While it might solve the problem of blind obedience, this autonomy raises problems of its own. If bots make decisions and give advice autonomously, who takes responsibility for those decisions? Can Eliza be sued for medical malpractice? And if not, should the programmer be sued instead? Can Shallow Red be sued for business malpractice? And if not, who should be sued, the programmer or the company that the chatterbot represents?

If the owner of a bot should take responsibility for those actions, then that owner must have some control over and insight into them. Yet for most bot users, the digital technology around which bots are built is opaque. Because bots are autonomous, it can be difficult to know what decisions a bot has taken, let alone why.

Once, as promised, bots start interacting with one another, understanding bot behavior may become impossible. Anyone who has had to call a help line with a problem about the way an operating system from one vendor and a program from another are working together - or failing to work - knows how hard it is to get anyone to take responsibility for software interactions. [ 31] Support staff rapidly renounce all knowledge of (and usually interest in) problems that arise from interactions because there are just too many possibilities. So it's easy to imagine sophisticated programmers, let alone ordinary users, being unable to unravel how even a small group of bots reached a particular state autonomously. The challenge will be unfathomable if, as one research group has it, we can "anticipate a scenario in which billions of intelligent agents will roam the virtual world, handling all levels of simple to complex negotiations and transactions." [32]

If human agents are confused with digital ones, if human action is taken as mere information processing, and if the social complexities of negotiation, delegation, and representation are reduced to "when x, do y," bots will end up with autonomy without accountability. Their owners, by contrast, may have accountability without control.

These are not trivial matters. As we have seen, people are being asked to put everything from the food in their refrigerator to their cars, houses, assets, reputation, and health in the hands of these agents despite the difficulty of understanding what these agents actually do or why.

One of the remarkable features of the recent derivatives-trading failures (at Barings Bank, Long-Term Capital Management, and Orange County) has been how little investors, even major financial institutions, have understood about the ventures to which traders armed with highly complex mathematical calculations and software programs were committing them. They took it all on trust, had insufficient safeguards (though somewhat paradoxically some called themselves "hedge" funds), and lost disastrously. The question "Do you know what your agent is doing tonight and with whom?" and the worry that their agent is no angel may loom increasingly large in the already overburdened late-night worries of executives.

"The Brightest Fell"

Once you worry about bots that represent, you also have to worry about bots that misrepresent. Bots may not be like people in every way, but they can give a good imitation of human deceit and in so doing be quite disruptive. And as with humans, it can be difficult to distinguish good from bad. "There's no way," one system administrator insists, "to tell the difference between malicious bots and benign bots." [ 33]

The similarity is already significantly changing the way the 'Net is viewed.

Since its early days, the 'Net has been an extraordinary source of free help and information. Some have suggested that it is an example of a "gift economy." One of the most useful areas of this economy has been the newsgroups. Though these are probably more famous for their pornographic content, newsgroups provide a great deal of esoteric and not just erotic information. Divided into groups by subject, they allow people interested in a particular topic to pose questions and receive advice and suggestions from people with similar interests. There are, for example, multiple Microsoft Windows discussion groups where people having difficulty with everything from basic use to Visual Basic programming can get help. [ 34]

Given this aspect of the 'Net, it wasn't too surprising to get some helpful advice when Paul put an advertisement to sell a car in Yahoo!'s classifieds. This is a free service that allows you to describe the car in your own words and provide details of how you can be reached. Within twenty-four hours a friendly message came back:

From: shelby@dallas.quick.com
Subject: FYI ON AUTO SALES
Date: Tue, 21 Jul 1998 17:33:22 -0600
X-Mailer: Allaire Cold Fusion 3.0
X-UIDL: 8e772fe616fe165b05ca43f75d8ea500

i saw your vehicle ad on the internet. i just wanted to let you know about the realy neat system i found where mine sold after many inquiries shortly after listing it. you can get to that system by clicking on the following link:
http://www.aaaaclassifieds.com

just thought you might be interested.

Dan

Thanking Dan for his kind recommendation, however, was more difficult than it first appeared. "shelby@dallas.quick.com" turned out to be a bogus address. The message, ingenuous misspelling and all, had come from an anonymous remailer in the Czech Republic designed to conceal the identity of the actual sender. "Dan" was no doubt little more than a software program, combing the 'Net for postings on car lists and sending out this advertisement disguised as a personal recommendation from a "user like you." [ 35] As the political scientist Peter Lyman points out, strange things happen when a gift economy and a market economy collide. [36]

War in Heaven

Dan is only a minor case of what has become close to bot warfare on the 'Net. While bots have yet to develop personality, they have already been involved in "personality theft," sending out messages in the names not only of people who do not exist but also of people who do exist but did not send the message. One journalist found he had been widely impersonated by a bot that sent e-mail and joined chat rooms using his name. [ 37]

Besides creating messages in others' names, bots can also erase them, wiping out messages that people have sent. Cancelbots, as these are sometimes called (perhaps they could more simply be called Nots), are an unfortunate case of good intentions gone awry. They developed in response to a notorious newsgroup incident. Two lawyers used a bot to post a message offering immigration assistance on 6,000 newsgroups. Angry newsgroup users took corrective action, developing cancelbots, which could patrol newsgroups and remove messages that were posted indifferently to multiple groups. Such a design, however, was too tempting to remain only a force for good. One night in September 1996, a rogue cancelbot wiped out 30,000 bona fide newsgroup messages. [ 38]
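
The rule such cancelbots applied can be sketched in a few lines. The threshold and postings below are invented, and real cancelbots used more careful measures; as the rogue episode shows, the same design serves bad intentions equally well:

from collections import defaultdict

CROSSPOST_LIMIT = 20   # newsgroups tolerated before a cancel is issued

def find_cancel_candidates(postings):
    # postings: a list of (newsgroup, message_body) pairs. Any body that
    # appears in more groups than the limit is flagged for cancellation.
    groups_per_body = defaultdict(set)
    for group, body in postings:
        groups_per_body[body].add(group)
    return [body for body, groups in groups_per_body.items()
            if len(groups) > CROSSPOST_LIMIT]

postings = [("news.group.%d" % n, "Green cards! Immigration assistance!")
            for n in range(6000)]
postings.append(("rec.autos", "Anyone know a good mechanic in Reston?"))
print(find_cancel_candidates(postings))  # only the mass posting is flagged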

The 'Net now has a robust army of antisocial bots. These attack the innocent and the guilty alike. They make misleading recommendations and erase useful ones. They throw dust in the eyes of the collaborative filters and others tracking patterns of use. [ 39] And they overload mailboxes and attack Web sites until they crash. As rapidly as the diligent bots find ways to see through their ruses and overcome these tactics, others find new ways to produce mayhem.

Meanwhile, the line between good and bad becomes increasingly hard to draw. Intel, for example, recently persuaded Symantec, a maker of antivirus software, to take action against what Intel deemed "hostile" code written by a company called Zero-Knowledge Systems. People who rely on Symantec's software now get a warning when they visit Zero-Knowledge's site that they may be at risk from viruses. Zero-Knowledge, however, insists that its software is not a virus, but a program designed to reveal a flaw in Intel's chips. Intel's categorization, Zero-Knowledge maintains, was deliberately self-serving. [ 40]

Moore Solutions

Faced with the rebellions of these fallen agents, people often suggest that another cycle of Moore's Law will produce suitably powerful encryption to resist them. [ 41] Better encryption (and fewer government restrictions) will almost undoubtedly be advantageous. But it is important to realize that it alone cannot solve the problems at issue. Why not? First, the same power that is used for defense can be used for attack. As a (perhaps apocryphal) detective from London's Scotland Yard is said to have put it fatalistically, in fifteen years computers will be involved in the solution of most crimes, but they will also be involved in the perpetration of most crimes.

Second, solutions that rely on encryption already focus too tightly on information systems, ignoring the fact that these are always and inevitably part of a larger social system. Just as bots only deal with one aspect of negotiation, the information-exchange part, so cryptography deals only with one part of technological systems, the classical information channel. The human activity that encryption is designed to protect, however, usually involves much more.

For example, in the 1990s a number of companies produced strategies for secure and private digital cash transactions over the Internet. Some of these, in particular David Chaum's company DigiCash, founded in 1994, matched powerful technologies and admirable social goals. Futurists have greeted each such company and its encryption tools with confident claims that here was the "killer application" of the era. It would allow secure microtransactions (cash exchanges worth pennies), provide suitable anonymity and thereby unleash 'Net commerce, dismantle the current financial system, and undermine the power and authority of major banks. It all sounded plausible, with more institutions satisfyingly looking set to fall.

Unfortunately, DigiCash itself, like other similar efforts before it, fell, filing for bankruptcy in November 1998. It failed not because encryption isn't important, nor because DigiCash's encryption wasn't good (it was very good), but because encryption isn't enough. A viable system must embrace not just the technological system, but the social system: the people, organizations, and institutions involved. It must be able to verify not just the encryption, but a user's right to use the encryption, that this right has not been revoked, and that it is being used correctly. "A digital signature," says Dan Geer, an encryption specialist and chief strategist for CertCo, "is only as valid as the key (in which it was signed) was at the moment it was signed, it is only as good as the procedural perfection of the certificate issuer and the timely transmission of subsequent revocation." [ 42] In short, encryption is a strong link in a long chain whose very strength points up the weakness of other links, which cannot be made so secure. Overall, such systems, Geer argues, are only as secure as the financial and institutional investment committed. Consequently, interactions over the 'Net, financial or social, will be as secure not as the digital encryption, which is a relatively cheap fix, but as the infrastructure, social as well as technological, encompassing that interaction. [ 43]
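
Geer's point can be put schematically. Even granting that the mathematics checks out, a verifier must still ask about the key's history. The sketch below is ours, with invented records; it stands in for procedures that real certificate authorities elaborate at great institutional cost:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Certificate:
    key_id: str
    valid_from: int            # timestamps kept as simple integers here
    valid_until: int
    revoked_at: Optional[int] = None

def signature_acceptable(cert, signed_at, math_checks_out):
    if not math_checks_out:    # the strong link: the encryption itself
        return False
    if not (cert.valid_from <= signed_at <= cert.valid_until):
        return False           # the key was not valid when it was used
    if cert.revoked_at is not None and cert.revoked_at <= signed_at:
        return False           # the key had already been revoked
    return True                # ... assuming revocation news arrived in time

cert = Certificate("key-42", valid_from=100, valid_until=200, revoked_at=150)
print(signature_acceptable(cert, signed_at=120, math_checks_out=True))  # True
print(signature_acceptable(cert, signed_at=160, math_checks_out=True))  # False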

Separate Spheres

The human and the digital are significantly, and usefully, distinct. Human planning, coordinating, decision making, and negotiating seem quite different from automated information searches or following digital footsteps. So for all their progress, and it is extraordinary, the current infobots, knobots, shopbots, and chatterbots are still a long way from the predicted insertion into the woof and warp of ordinary life. Current successes may not, as some hope, point directly to the grander visions laid out for agents. Optimists may be underestimating the difficulties of "scaling up" from successful but narrow experiments to broader use. (It's been said that computer scientists have a tendency to count "1, 2, 3, one million ...," as if scale were insignificant once the first steps were taken.)

Digital agents will undoubtedly play a central part in future developments of the 'Net. They are powerful resources for all sorts of human interactions. But their development needs more cold appraisal and less redefinition or evangelism.

First, developers and futurists must recognize that their goals address fields of human activity that lie well beyond the simple realm of information. "The humanizing of technology," the sociologist Anthony Giddens argues, "is likely to involve the increasing introduction of moral issues into the now largely 'instrumental.'" [ 44] Designing bots to imitate or replicate human actions introduces both moral and social-institutional questions. Your willingness to put your paycheck into a slot in the wall is, in many ways, remarkable. But it relies on much more than the instrumental reliability of the ATM. Indeed, a particular ATM isn't transparent enough for most of us to judge whether it is instrumentally reliable or not. Instead, we look for reliability, both instrumental and moral, in the organizations represented by the ATM and by the institutions regulating those organizations. [ 45]

But, as Robert Wilensky rightly points out, such issues have not been on the critical path of much computer science. They are now on the road that lies all before us. They are not insurmountable roadblocks. But if people do not open their eyes a little wider, they will act as potholes into which much research may unprofitably stumble.

Second, and most important, bots and humans operate in different, if overlapping, spheres. By redefining one as the other, or reducing both to information-processing or goal-pursuing agents, these differences are submerged or confused. Some futurists seem continuously anxious to replace humans with bots in certain tasks without quite appreciating how people accomplish those tasks. In general, it will be better to pursue not substitution but complementarity. (We want ATMs that will supply us with money, not ones that will spend it for us.) But complementarity requires seeing the differences between information-processing agents and human agency.

About the Authors

John Seely Brown is Chief Scientist at Xerox Corporation and Director of the Xerox Palo Alto Research Center (PARC).
e-mail: John@SLofI.com

Paul Duguid is a research specialist in Social and Cultural Studies in Education at the University of California at Berkeley.
e-mail: paul@SLofI.com

Notes

1. Distinguishing a computer from a human is the essence of the famous Turing test, developed by mathematician Alan Turing (1963). He argued that if you couldn't tell the difference then you could say the machine was intelligent. Shallow Red is not quite there yet. Indeed, the continuation of the exchange suggests that Shallow Red is still rather shallow (though pleasantly honest):

What are VSRs?
The botmaster has not provided me with a definition of "VSRs." Thank you for asking. The botmaster shall provide me with an appropriate definition.

Shallow Red could be found at http://www.neurostudio.com, retrieved 21 October 1998. It has since been upgraded to Red, though it is not yet quite as deep as that promotion might suggest.

2. The lists you get if you shop for other people or buy gifts on-line make it clear that tastes and tracks are different things.

3. The study, done at Yale, was reported on National Public Radio's All Things Considered, 18 November 1998.

4. Leonard, 1997, p. 186.

5. Hoggdon, 1997, p. 15.

6. Though less so, perhaps, than with the telegraph, which the president of the Magnetic Telegraph Co., W. M. Swain, suggested might be the fulfillment of scriptural prophecy.

7. These quotations come from IBM's research into intelligent agents, reported on the 'Net at: http://www.networking.ibm.com/iag/iagwp1.html, retrieved 16 November 1998. IBM has since abandoned this work and taken down the site. (See http://www.networking.ibm.com/iag/iaghome.html, retrieved 21 July 1999.)

8. Gates, 1995, p. 83.

9. See Franklin, 1995; for a taxonomy of agents, see Franklin and Graesser, 1996.

10. People familiar with the old programs of artificial intelligence (AI) will recognize some of these redefinitions. For classic AI, it eventually became apparent that such activities could not be reduced to notions of mental representation and thus made computationally tractable. The real strength of AI came from recognizing that computers were not like people.

11. Pretext Forum, 1997.

12. Maes, Guttman, and Moukas, 1999. See also Jonkheer, 1999.

13. It has been suggested, for example, that if bots can match learners' needs with teachers' capabilities, then the distance university will be no more than a software program with an income stream (Chellappa, Barua, and Whinston, 1997). See chapter 7 for a further discussion of this point.

14. For more on this see chapter 8.

15. None, unfortunately, took us to the alternative and more usual spelling, knowbot.

16. In all, visions of automated brokering often trivialize the complexities of a broker's work.

17. The Polish economic historian Witold Kula (1986) tells the fascinating story of the standardization of measurements and how important that was to international trade. Another historian, Theodore Porter (1994), has shown how standardized grades enabled the Chicago Corn Exchange to broker deals between farmers in the old West and merchants around the world.

18. DeLong and Froomkin, 1999.

19. Maynard, 1998; Markus, 1997.

20. These quotations come from the IBM intelligent agent research (see note 7).

21. Of course, this sort of breakdown is not always accidental. The opening of Romeo and Juliet reminds us that rivals can deliberately refuse this sort of implicit negotiation.

22. Indeed, part of the process of negotiation involves negotiating what sort of behavior, formal or informal, is appropriate. People have to understand what rules apply, for identical behavior can have different consequences depending on the situation. A wave across a room might welcome a friend or make you the highest bidder at an auction. Recently, a young boy was discovered to be the highest bidder in an eBay on-line auction. He bid $900,000, believing, apparently, that he was taking part in an on-line game. ("13-Year-Old Makes $3 million in Bids on eBay," USA Today Online, 29 April 1999, at http://www.usatoday.com/life/cyber/tech/ctf027.htm, retrieved 21 July 1999.)

23. Lave, 1988.

24. "No tolerance" policing refuses to turn such a blind eye. Its critics have argued that while such policing may meet short-term goals, it can be profoundly damaging to the social fabric, destroying for example, police relations with minority communities.

25. Jeffrey Kephart quoted in Schwartz, 1998, p. 72. See also Ward, 1998. Work of the HEDG group at MIT (http://www.lucifer.com/~sasha/HEDG/HEDG.html, retrieved 21 July 1999) suggests that cooperation among bots may help offset these wild swings, but there is significant debate among scientists about what cooperation entails.

26. DeLong and Froomkin, 1999.

27. These quotations also come from the IBM intelligent agent research (see note 7).

28. In Oakeshott's (1967) words:

By "judgement" I mean the tacit or implicit component of knowledge, the ingredient which is not merely unspecified in propositions, but which is unspecifiable in propositions. It is the component of knowledge which does not appear in the form of rules and which, therefore, cannot be resolved into information or itemized in the manner characteristic of information. (p. 167)

29. Humphrys, 1997.

30. Jennings et al., 1996.

31. This may turn out to be the biggest challenge in addressing Y2K problems.

32. These quotations also come from the IBM intelligent agent research (see note 7).

33. Quoted in Quittner, 1995.

34. Those who are unhappy, by contrast, can express their rage by posting messages to the microsoft.crash.crash.crash or microsoft.sucks newsgroups.

35. The car, by the way, found a buyer within a week.

36. Lyman, 1997.

37. Quittner, 1995.

38. Leonard, 1997, p. 192.

39. "Collaborative tracking" of this sort can easily fall prey to Goodhart's law, which states that statistical regularities break down when used for control. Because the digital footprints people leave across the Web can provide guidance, people have an interest in creating fake footprints.

40. Markoff, 1999.

41. See chapter 1 for our discussion of Moore's Law solutions.

42. Geer, 1998.

43. To illustrate his point by analogy, Geer (1998) reveals that, while conventional handwritten signatures are a relatively reliable means of verification, "actually verifying handwritten signatures" is so costly that banks generally bother only for checks over $20,000. Below that they are willing to incur the risk of insecurity rather than the cost of security. With digital signatures, too, banks must balance their risk against the costs of the full organizational and institutional infrastructure of verification. The encryption may be more or less the same for both high- and low-risk transactions, but the important procedural formalities (the weak links in the chain) will be quite different.

44. Giddens, 1990, p. 170.

45. The SETI project, which we mentioned in chapter 1 (see chapter 1, note 2), is an interesting case in point. Thousands of people are downloading data over the Internet and letting it run on their machines in the belief that they are processing data from outer space. It is the reassuring presence of the University of California in the arrangement that keeps at bay the idea that our machines might be processing data for the National Security Agency (NSA) or even providing data for a major marketing venture.

Bibliography

Ramanath Chellappa, Anitesh Barua, and Andrew B. Whinston, 1997. "E3: An Electronic Infrastructure for a Virtual University," Communications of the ACM, volume 40, number 9, pp. 56-58.

J. Bradford DeLong and A. Michael Froomkin, 1999. "The Next Economy?" In: Deborah Hurley, Brian Kahin, and Hal Varian (editors). Internet Publishing and Beyond: The Economics of Digital Information and Intellectual Property. Cambridge, Mass.: MIT Press, at http://www.law.miami.edu/~froomkin/articles/newecon.htm, retrieved 21 July 1999; a revised and updated version can be found as J. Bradford DeLong and A. Michael Froomkin, 2000. "Speculative Microeconomics for Tomorrow's Economy," First Monday, volume 5, number 2 (February), at http://firstmonday.org/issues/issue5_2/delong/, retrieved 21 July 1999.

Stan Franklin, 1995. "Autonomous Agents, Mechanisms of Mind," at http://www.msci.memphis.edu/~franklin/aagents.html, retrieved 21 July 1999.

Stan Franklin and Art Graesser, 1996. "Is It an Agent, or Just a Program? A Taxonomy for Autonomous Agents," In: Intelligent Agents III: Agent Theories, Architectures, and Languages. Proceedings of the ECAI '96 Workshop, Budapest, Hungary, 12-13 August 1996. New York: Springer-Verlag, and at http://www.msci.memphis.edu/~franklin/AgentProg.html, retrieved 21 July 1999.

Bill Gates, 1995. The Road Ahead. New York: Viking.

Dan Geer, 1998. "Risk Management Is Where the Money Is," Risks Digest, volume 20, number 6, at http://catless.ncl.ac.uk/Risks/20.06.html, retrieved 21 July 1999.

Anthony Giddens, 1990. The Consequences of Modernity: The Raymond Fred West Memorial Lectures. Stanford, Calif.: Stanford University Press.

Paul N. Hoggdon, 1997. "The Role of Intelligent Agent Software in the Future of Direct Response Marketing," Direct Marketing, volume 59, number 9, pp. 10-18.

Mark Humphrys, 1997. "AI is Possible ... but AI Won't Happen: The Future of Artificial Intelligence," paper presented at the symposium "Science and the Human Dimension," Jesus College, Cambridge, 29 August, and at http://www.newscientist.com/nsplus/insight/future/humphrys.html, retrieved 20 March 2000.

N.R. Jennings, P. Faratin, M.J. Johnson, P. O'Brien, and M.E. Wiegand, 1996. "Agent based Business Process Management," at http://gryphon.elec.qmw.ac.uk/dai/projects/adept/jcis96/introduction.html#top, retrieved 16 December 1998.

Kees Jonkheer, 1999. "Intelligent Agents, Markets, and Competition: Consumers' Interests and Functionality of Destination Sites," First Monday, volume 4, number 6 (June), at http://firstmonday.org/issues/issue4_6/jonkheer/, retrieved 21 July 1999.

Witold Kula, 1986. Measures and Men. R. Szreter (translator). Princeton, N.J.: Princeton University Press.

Jean Lave, 1988. Cognition in Practice: Mind, Mathematics, and Culture in Everyday Life. New York: Cambridge University Press.

Andrew Leonard, 1997. Bots: The Origin of New Species. New York: Penguin Books.

Peter Lyman, 1997. "Documents and the Future of the Academic Community," paper presented at the Conference on Scholarly Communication and Technology, Emory University, 24-25 April, at http://www.dlib.org/dlib/july98/07lyman.html, retrieved 21 July 1999.

Pattie Maes, Robert H. Guttman, and Alexandros G. Moukas, 1999. "Agents That Buy and Sell," Communications of the ACM, volume 42, number 3, pp. 81-91.

John Markoff, 1999. "Intel Goes to Battle as Its Embedded Serial Number Is Unmasked," New York Times (28 April), section C, p. 1.

Edward Markus, 1997. "Low Bid Alternatives Earning Respect," American City and County, volume 112, number 9, pp. 22-25.

Michael Maynard, 1998. "Architects Oppose California Initiative," Architecture, volume 87, number 4, p. 26.

Michael Oakeshott, 1967. "Learning and Teaching," In: R.S. Peters (editor). The Concept of Education. London: Routledge and Kegan Paul, pp. 156-177.

Theodore M. Porter, 1994. "Information, Power, and the View from Nowhere," In: Lisa Bud-Frierman (editor). Information Acumen: The Understanding and Use of Knowledge in Modern Business. London: Routledge, pp. 217-231.

Pretext Forum, 1997. "A Panel of Thinkers and Technologists Debate the Future of Libraries," at http://www.pretext.com/oct97/columns/forum.htm, retrieved 21 July 1999.

Joshua Quittner, 1995. "Automata Non Grata," Wired, volume 3, number 4 (April), pp. 118-122, 172-173, and at http://www.wired.com/wired/archive/3.04/irc.html, retrieved 21 July 1999.

Evan I. Schwartz, 1998. "Shopbot Pandemonium," Wired, volume 6, number 12 (December), p. 72, and at http://www.wired.com/wired/archive/6.12/mustread.html, retrieved 21 July 1999.

Alan Turing, 1963. "Computing Machinery and Intelligence," In: Edward A. Feigenbaum and Julian Feldman (editors). Computers and Thought. New York: McGraw-Hill, pp. 11-35.

Context

This text is an excerpt of The Social Life of Information by John Seely Brown and Paul Duguid, published in March 2000 by Harvard Business School Press. Reprinted by permission of Harvard Business School Press. Copyright © 2000 by the President and Fellows of Harvard College; All Rights Reserved.

The book is available from Harvard Business School Press directly, fine bookstores everywhere, and other sources such as Amazon.com. Please also visit the Social Life of Information Home Page at http://www.slofi.com/

A public forum to discuss The Social Life of Information can be found at http://www.slofi.com/slofidis.htm



Copyright © 2000, First Monday

"Chapter Two: Agents and Angels," In: The Social Life of Information by John Seely Brown and Paul Duguid
First Monday, volume 5, number 4 (April 2000),
URL: http://firstmonday.org/issues/issue5_4/brown_chapter2.html