

Software Agents Take the Internet as a Shortcut to Enter Society: A Survey of New Actors to Study for Social Theory

Contents

From Utopia via the Present Day into the Future
Machines as Agents
The Software Agent
Software Agents
Agents in Society
Conclusions

From Utopia via the Present Day into the Future

Dave: Hal, I'm in command of this ship. I order you to release the manual hibernation control.

HAL: I'm sorry, Dave, but in accordance with sub-routine C1532/4, quote, When the crew are dead or incapacitated, the computer must assume control, unquote. I must, therefore, override your authority now since you are not in any condition to intelligently exercise it.

Dave: Hal, unless you follow my instructions, I shall be forced to disconnect you.
A. C. Clarke, 1969. 2001: A Space Odyssey.

The telescreen received and transmitted simultaneously. Any sound that Winston made, above the level of a very low whisper, would be picked up by it; moreover so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to.
G. Orwell, 1949. 1984.

In 2001: A Space Odyssey the human crew of the spaceship Discovery become involved in a fatal conflict with their on-board computer HAL. In Orwell's novel 1984 computer technology provides the government with a means to oppress its citizens. Both stories are dystopias about a future where machines become problems, although these machines were originally invented for purposes other than to cause trouble. These two dystopias are no exceptions; countless other visions exist and many are either anarchistic or totalitarian.

Although 1984 and 2001 are outdated in some ways, they have not lost their appeal. Much of the fascination with the works of Clarke and Orwell stems from the way they crudely but emphatically draw the contours of an emerging reality, in both its optimistic and pessimistic aspects.

Overall, Clarke, Orwell, and others were decisively optimistic regarding technological progress. Machines with outstanding capabilities play a leading role in their works. Up to now, dreams of an omnipotent machine have remained in the realms of science fiction. Still, we are now reaching a technological level where computers as universal machines are capable of performing many of the feats regarded as fictional in 1984 and 2001. Processing capabilities continually increase, simple speech recognition systems act as reliable communication interfaces, chatterbots are convincing dialogue partners on very specific subjects, and Deep Blue beat chess grandmaster Garry Kasparov [1]. In essence, a rough portrait of today's machines emerges from achievements like these.

All these events are taking place as the meaning of a machine is changing. Today, the notion of a machine might better reflect software rather than hardware. The momentum seems to have shifted from the actual physical atoms that comprise a mechanical device to the bits that make up its digital program [ 2]. No longer is there a dedicated effort to craft artificial humans, but instead a move to create machines that specialise in certain functions. In other words, "computers are organised much more directly around what electronic circuits are good at than they are around what people are good at" [3].

Turning to the potential consequences of these technologies, Clarke and Orwell certainly drew a pessimistic picture in some respects. At this point, humans are not actively threatened by artificial actors. But if we ever build computers with the capabilities of HAL, we will have to deal with threats similar to those HAL posed. Since everything has a price, there are no technological achievements without their own particular downsides. For instance, errors in software design may lead to interruptions in the supply of infrastructural services like power, communications, or public transport. Daily, significant costs arise and human life is threatened by software bugs, particularly in medical and military systems [4]. Worldwide, a large number of software viruses are already known to infect programs and data, causing damages costing billions [5]. The increasing role of the Internet is leading to situations where people and their organisations can be more easily and more substantially damaged through the misuse of information. Spam is one example; Web pages that discriminate against ethnic groups or corporations are another. The Internet offers an environment where these activities can often be executed more efficiently by machines than by humans [6]. In addition, unintended harmful effects can arise when programs interact in ways that were not anticipated by their programmers, even when each individual program is error-free. In electronic commerce, shopping-bots and price-bots have the potential to cause unanticipated price wars [7].

While combinations of all sorts are possible, the problems induced by computers can be summarised in five categories: design errors, software bugs, viruses, misuse of information, and unanticipated interactions between otherwise error-free programs.

The pessimism of Clarke and Orwell certainly corresponds to some of our current computing realities. Their pessimism, however, goes much further than the typology of problems presented here; it is systematic in character. Clarke threatens his audience with anarchy while Orwell gives us totalitarianism. So far, these projections have not materialised. However, there is no guarantee that they will always remain fictional. Given that rapid improvements in computer hardware are met with only average improvements in computer software [8], there is room for concern. Recognised leaders in computer science argue that some of our current computing problems may some day amount to something worse. Joseph Weizenbaum emphasises that it is "better to think of new ways to avoid having problems than to simply throw more processing power at them" [9].

Taking Weizenbaum's remark as its cue, this paper proposes to constructively counter overly pessimistic visions of the future. While both the positive effects of technological progress and its negative side effects could be considered inevitable, history teaches us that societies can keep well clear of anarchic, totalitarian, and other systems where life is "solitary, poor, nasty, brutish and short" [10]. It is the social sciences - including economics, sociology, and political science - that aim to analyse and explain how societies manage to overcome problems of social order. The agent paradigm allows the social sciences to interact with computer science so that problems at this interdisciplinary frontier can be tackled. This paper examines software agents, operating in digital environments like the Internet, from the social science perspective. Besides reviewing research on agents and societies of agents, the distinctive characteristics of agents in human society are analysed. In sum, the paper illustrates the relevance of this new type of social actor and the opportunities for the social sciences to expand research into this area.

Machines as Agents

Many social scientists introduce their work by discussing a particular behavioural model they employ to conceptualise the human actor. In this paper, precedence is given to another actor, the machine. As a universal machine, the computer and particularly its software are the centre of interest. This does not seem a narrow focus if one remembers that computers are involved in a vast variety of human activities; pervasive computing and ubiquitous computing are illustrative keywords here. We will nevertheless focus on a particular software paradigm named agent technology. There are good reasons for such a step. Even though there is no conclusive proof of the superiority of agent technology over other approaches in computer science [11], software agents are rapidly gaining relevance in research and applications (see Table 1) [12].

On the Internet the agent paradigm prospers. For example, "The computational architecture that seems to be evolving out of an informationally chaotic web consists of numerous agents representing users, services and data resources" [13]. Empirical evidence for the massive penetration and the increasing variety of software agents comes from information services and databases such as Botspot or the Gartner Group [14]. Despite recent successes, critics have repeatedly questioned the relevance of agent technology; "the agent scenarios have a quarter-century history of failure" [15]. However, Jennings [16] argues that agents are more than a trend because their technology can deal with the challenges posed by the design of complex computer systems. Because of its characteristics, agent technology may become the dominant paradigm in computer science. This is supported by the fact that the conceptual basis of software is no longer determined by the computer architecture but instead depends on the problem itself. Additionally, agents support the use of existing systems and programs: existing code is not substituted but simply dressed in a new wrapper. Consequently, every software program could become an agent.

However, given the current stage of agent development, it might be premature to argue that agents will become the dominant paradigm in the long run. There is another reason to focus on agents: in contrast to other approaches, agent technology is based on a "cognitive and social view of computation" [17]. This implies that, through the agent paradigm, machine worlds can be theoretically accessed and understood by social science. It also means that, by the same route, specific ideas from social theory can enter computer science. In other words, if computer science and social science are to meet, then it will probably be via the agent paradigm.

The Software Agent

The notion of an agent can be traced back to the Latin expression "agens", meaning the cause of an effect, an active substance, a person or a thing that acts, or a representative [18].

There is no exact agreement on the definition of an agent as an artificial actor. The definition of Huhns and Singh [19], however, can be considered a common denominator of many existing definitions: "agents are active, persistent (software) components that perceive, reason, act, and communicate." Similarly, Franklin and Graesser [20] describe an agent as "a system situated within and part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future" [21]. Paradoxically, such definitions have been criticised both as too vague [22] and as too constrained [23]. A more comprehensive picture can be drawn by referring to a modifiable and extendible list of attributes that differentiate agents from conventional software [24].

Depending on the number and intensity of attributes it is appropriate to speak of "weak and strong agenthood" [25]. Strong agenthood requires the presence of mental states, i.e. characteristics that usually apply to humans. In contrast, the concept of weak agenthood describes much simpler agents. Compared to traditional software, these programs act largely without human intervention.

Beyond the externally observable features of an agent, we need to take into account agent architecture. There is a rich variety of architectures but some commonalities as well; "these are typically layered in some way with components for perception and action at the bottom and reasoning at the top. The perception feeds the reasoning subsystems, which governs the actions, including deciding what to perceive next" [26]. For our discussion, the reasoning system is at the centre of interest. Some architectures allow an agent to choose its actions based on an explicit representation of its environment. For example, a well established approach for cognitive agents is the so-called belief-desire-intention (BDI) architecture [27]. The agent's knowledge about itself and its environment is represented in the form of beliefs. In addition, an agent has desires about its preferred state of the environment. Based on its beliefs and desires, the agent forms intentions about the states of the environment it seeks to achieve. In order to function properly, the architecture systematically connects beliefs, desires, and intentions with the perceptions and actions of the agent [28]. The state of research on these components of the BDI framework differs. The representation of a system's environment has long been a fruitful field of research, and an agent's beliefs can build on these achievements. In contrast, computer software has traditionally been task-oriented, and it is predominantly due to the agent paradigm that goal orientation has started to play a role in software engineering. Consequently, the notions of desires and intentions can only build on a less well developed body of research.
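To make the interplay of these components concrete, the following minimal sketch shows one way a BDI-style deliberation loop could be organised in Python. It is an illustration under simplifying assumptions - the class structure, the dictionary of beliefs, and the trivial act step are invented for this purpose - and not the architecture of any particular BDI system.

# A minimal BDI-style agent loop (an illustrative sketch, not a
# reference implementation; all names and structures are assumptions).

class BDIAgent:
    def __init__(self, desires):
        self.beliefs = {}          # the agent's model of itself and its world
        self.desires = desires     # preferred states of the environment
        self.intentions = []       # states the agent has committed to achieve

    def perceive(self, percept):
        # Revise beliefs in the light of new perceptions.
        self.beliefs.update(percept)

    def deliberate(self):
        # Adopt as intentions those desires the beliefs do not yet satisfy.
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(d) is not True]

    def act(self):
        # Pursue one committed intention; acting here simply (and
        # optimistically) records the desired state as achieved.
        if self.intentions:
            goal = self.intentions.pop(0)
            self.beliefs[goal] = True
            return goal
        return None

agent = BDIAgent(desires=["room_lit"])
agent.perceive({"room_lit": False})
agent.deliberate()
print(agent.act())    # -> room_lit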

Reactive agents do not utilise a central reasoning system but are instead based on a collection of reactive modules. Complex behaviour emerges from many simple and autonomous actions of the individual components. One of the best known examples of this approach is a robot designed by Brooks [29] whose six legs coordinate without a central controlling unit. With respect to this technology, software agents offer even greater opportunities than robots [30]. Between purely cognitive and purely reactive architectures, hybrid approaches have been proposed that bridge the two concepts [31].

Naturally, the various architectures for agents are based on certain theories. The theoretical basis for cognitive agents, developed by McCarthy (1979) and Dennett (1987), holds that, in principle, cognitive states such as desires or intentions can be ascribed to every physical system. Whether the agent does in fact have intentions is not crucial. Shoham [32] uses the example of a light switch to demonstrate that such a perspective should only be adopted for explanation and prognosis if no simpler route, e.g. a mechanistic one, is available.

Ultimately, a theory of agents has to provide more than just a basic understanding of agents; "we regard agent theory as a specification for an agent; agent theories develop formalisms for representing the properties of agents, and using these formalisms, try to develop theories that capture desirable properties of agents" [33]. Since agents need to combine a large variety of properties, different kinds of theories become relevant for different property sets. These can be categorised as theories of rational, social, interactive, and adaptive behaviour [34].

Rationality is a central feature of the decision-making process of many existing agents and is responsible for creating consistency between cognitive states like beliefs, desires, and intentions. As agents usually do not act in isolation but in the presence of other agents or humans, theories of social and interactive behaviour become relevant. It is also important that agents can react flexibly to changes in their environment; here, theories of adaptive behaviour provide the background. This is certainly not the place to explore each of these categories in great detail. However, as this brief introduction makes evident, social science can make important contributions along all these lines. It follows that agents founded on such theories automatically become legitimate subjects of social science.

Software Agents

"The world in which we live is concurrent in the sense that there are multiple active entities; distributed such that there is a distance between entities that yields a propagation delay in communication between them; and open, meaning that the entities and their environment are always changing." [ 35]

The individual agent under discussion so far is not designed to run on an isolated computer. Rather, it is viewed as a social entity that is part of a multi-agent environment. Consequently, the interactive and interdependent behaviour of many agents is of importance. The chances and challenges arising from the existence of a large number of interconnected agents are being studied by researchers in the area of distributed artificial intelligence (DAI). DAI research aims to establish relationships between aggregated system behaviour, the behaviour of individual actors, and environmental conditions, including mechanisms for interaction [36]. Central tasks are based on communication, coordination, and cooperation.

Communication, coordination, and cooperation are of fundamental importance for multi-agent environments; any further objectives can only be pursued on the basis of specific approaches to these areas. Among such further objectives are the protection of individual agents, the guarantee of reliable system behaviour, and the efficient allocation of constrained resources within a system [37].

Communication in agent systems

As soon as agents are not conceived of as operating in isolation, the question arises how they are enabled to interact and especially to communicate with other agents. This task is solved by agent communication languages (ACLs); "we view an agent communication language as the medium through which the attitudes regarding the content of the exchange between agents are communicated; it suggests whether the content of a communication is an assertion, a request, a query etc." [38]. This definition illustrates the relevance of so-called "speech acts" [39] in ACLs, i.e. the language building blocks of communication protocols are selected so that the expressions of agents correspond to their actions (see Figure 1 for an example). Natural language delivers the input for ACLs. Today, several hundred speech acts have been categorised and are at the disposal of agents. Ideally, there would only be one ACL, a lingua franca, through which all existing agents could communicate. Common standards include the Knowledge Query and Manipulation Language (KQML) and a proposal by the Foundation for Intelligent Physical Agents (FIPA), based on the language Arcol. These developments are in part rivalled by increasingly sophisticated Internet markup languages like XML, which can also be used by agents. Such languages are supported by applications under development that allow for the building of Internet-based networks of interconnected services, such as Sun Microsystems' Jini or Hewlett Packard's e-speak.

While KQML only structures messages and transforms them into actions, a language like the common Knowledge Interchange Format (KIF) is required to allow for a more sophisticated transfer of content and knowledge. KIF can be understood as an interlingua: an agent might internally use another language to represent its knowledge but translates it into KIF before communicating it to others. Instead of speaking many languages, the communication partner then only has to understand KIF. KIF can be employed to express sentences in a first-order language and can be read by humans.
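To illustrate how performatives and content fit together, the following sketch shows a KQML-style message whose content is a KIF expression. The performative conveys the attitude (a query), while the :content slot carries the proposition; the agent names, the ontology label, and the price query itself are hypothetical, chosen only for illustration.

# A KQML-style message with KIF content (a sketch; agent names, the
# ontology, and the query are invented for illustration).
kqml_message = """
(ask-one
  :sender     agent-A
  :receiver   agent-B
  :language   KIF
  :ontology   electronic-commerce
  :reply-with query-1
  :content    "(price item-17 ?p)")
"""

# The outer performative states the speech act (a query); the inner KIF
# sentence states what is asked: bind ?p to the price of item-17.
print(kqml_message)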

The variety of domains where agents can be active requires agents to have access to and command of an extensive vocabulary. In addition, interacting agents need to have the same understanding of a particular vocabulary. This problem is solved by ontologies that cover the domains where agents operate. An ontology is a computerised representation or model of a specific part of the world; "together with a standard notation such as KIF, an ontology specifies a domain-specific language for agent interaction" [40]. As it is not feasible for every agent to store and update all existing ontologies, there will be specialised ontology-agents on the Net that can be queried by agents in need of a domain-specific vocabulary.

In spite of these developments, the options for agent communication remain rather limited. The speech acts supported by KQML or Arcol, for example, are limited to assertives and directives. Other important communication categories such as commissives, permissives, prohibitives, declaratives, or expressives are not covered [41]. Consequently, agents are constrained in a variety of ways; for instance, "Arcol agents must always speak the truth, believe each other, and help each other" [42].

Coordination and cooperation in agent systems

The approaches to communication discussed above, particularly speech acts, may produce the impression that they alone are the instruments of coordination and cooperation. Certainly, coordination and cooperation are crucial since individual agents operate under severe knowledge and resource constraints. At the same time their actions often show interdependencies; "an agent can be helped or damaged, favoured or threatened, it can compete or co-operate" [43].

These notions are defined differently in DAI and in social science; in DAI alone there is no agreement on a common definition. Jennings, Sycara, and Wooldridge [44], like others, distinguish cooperation (working towards a common goal), coordination (organising activities so that harmful interactions are prevented), and negotiation (reaching an agreement that is advantageous for all parties). Castelfranchi [45] distinguishes no fewer than nine principles of coordination, including cooperative coordination. In comparison, a common view can be borrowed from economics: actors face a coordination problem if all involved parties have an interest in a mutual adjustment of their behaviours; coordination can be reached through explicit agreement or may be unintended and spontaneous. Actors face a cooperation problem if their self-interest hinders the involved parties from cooperating to achieve behaviour that would be advantageous for all of them [46]. In what follows, these economic notions will be used.

In search of mechanisms for coordination and cooperation, DAI research refers to theories from the social sciences and the natural sciences, employing them as metaphors [47]. Researchers in DAI are guided by the conceptions these sciences offer, for example for organisms, markets, organisations, and other systems. All in all, two basic directions can be identified - distributed problem solving (DPS) and multi-agent systems (MAS). During the mid-1980s MAS evolved out of research in DPS.

DPS is based on the assertion that every intelligent system - whether it is artificial or natural - can only rely on limited capabilities and capacities. Merging formerly independent systems then promises to overcome existing limitations and allows for the solution of more complex problems. In other words, "distributed problem solving thus studies how a loosely coupled network of problem solving nodes (processing elements) can solve problems that are beyond the capabilities of the nodes individually. Each node is a sophisticated system that can modify its behavior as circumstances change and plan its own communication and cooperation strategies with other nodes" [48]. DPS is application-oriented. Solutions are provided for many practical problems, such as those named in Table 1.

DPS systems can be characterised by a few essential features. All subsystems and agents are designed centrally. The system designer possesses complete control over every single agent as well as over the coordination and cooperation mechanisms. Thanks to central design, all agents follow the same global goal: they maximise the utility of the system as a whole by solving a predetermined problem. Whether the agents act independently or cooperate depends on which kind of behaviour is better for the system. Essentially, DPS systems are built in a top-down manner.

Multi-agent systems also consist of many agents but, in contrast to DPS, may be designed by several different designers. "Research in MA[S] is concerned with coordinating intelligent behavior among a collection of autonomous (possibly heterogeneous) intelligent (possibly pre-existing) agents. In MA[S] there is no global control, no globally consistent knowledge, and no globally shared goals or success criteria" [49]. Consequently, the design of MAS is a bottom-up process. This results in systems where every agent follows its own objectives. With regard to the problems posed, individual agents are equipped with only limited information, and problem size usually exceeds their individual capabilities. There is no global control, data is decentralised, and processing is asynchronous.

So far, MAS have predominantly been applied to single practical problems. In principle, however, MAS can support the solution of multiple problems. This means that individual agents as well as groups of agents can look for solutions to their problems.

It is not always feasible to strictly differentiate between DPS and MAS. Rather, the concepts of DPS and MAS represent opposite points on a continuum of possible multi-agent environments. More specifically, these environments can be positioned within a two-dimensional matrix (see Figure 2) where the vertical dimension indicates the degree of autonomy of the agents and thus the degree of control the designers of the systems have over individual agents. The horizontal axis describes the degree of structure of the environment; along this dimension the spectrum ranges from fully structured lab systems to rather unstructured environments as they appear on the Internet. Closed and open systems can be found at the opposite ends [50].

In the light of technological, economic, and social developments, a trend towards the upper right sector of the matrix can be identified. This, however, implies that in the long run neither DPS nor MAS will capture what can be observed in reality. Already DPS is insufficient to deal with tasks within an organisation like a firm; "for the practical reason that the systems are too large and dynamic (i.e. open) for global solutions to be formulated and implemented, the agents need to execute autonomously and independently" [51]. As software systems approach the upper-right corner of Figure 2, absolute global design and control over a system become less feasible. We will examine these issues later in this paper. At this point, it is instructive to look at common approaches to DPS and MAS in more detail.

Overview of approaches to coordination and cooperation

DPS and MAS represent very general concepts for the coordination and cooperation of multiple agents. However, the basis for a functioning system has to be much more concrete, with the specific rules of interaction of crucial importance. "Using principled characterisations of interactions between agents and their environments to guide explanation and design" is the central task for the designers of DPS and MAS [52]. The selection and configuration of these principles determines to a large extent the properties of the system as a whole. It is only at this point that the transition from mere metaphor to implementation becomes feasible. Meanwhile, a variety of approaches to interaction exist. The essential categories discussed within DAI can be subsumed under a) ecosystems; b) social laws; c) multi-agent planning; d) social commitments; e) organisations; f) market-oriented systems; g) automated negotiations; and h) negotiations.

Ecosystems

It can be attractive to achieve "coordination without communication" [53]. One example is the already mentioned robot of Brooks with its six independent but reactive legs. Another example of coordination with minimal communication is the behaviour of ants. No individual ant has a blueprint for the nest it helps to construct, and no ant can recall the way to the source of food its colony uses at a given time. Nevertheless, these insects march in well-ordered parallel rows over long distances and construct complex, climatised nests. Their behaviour is based on a stimulus-response scheme, triggered by pheromones that they employ to mark each other and their environment [54]. The coordination within the ant ecosystem is "genetically hardwired" [55]. The ecosystem consists of reactive actors: all ants of the same status follow the same "if..then" rules. The coordination of ants is interesting for DAI because the interactions of agents can be structured on the same principles [56].
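A toy simulation can make the principle concrete. In the following sketch - with an invented one-dimensional world and arbitrary deposit and evaporation rates - reactive "ants" follow a single hardwired if-then rule: climb the local pheromone gradient, otherwise wander, and mark the environment in passing. Order around the food source emerges without any message passing between the ants themselves.

import random

CELLS = 20                          # a one-dimensional world
pheromone = [0.0] * CELLS
ants = [random.randrange(CELLS) for _ in range(10)]
FOOD = 15
pheromone[FOOD] = 5.0               # the food source is pre-marked

for step in range(50):
    for i, pos in enumerate(ants):
        left = pheromone[pos - 1] if pos > 0 else -1.0
        right = pheromone[pos + 1] if pos < CELLS - 1 else -1.0
        # The single hardwired rule: climb the pheromone gradient,
        # otherwise wander randomly.
        if left == right:
            pos += random.choice([-1, 1]) if 0 < pos < CELLS - 1 else 0
        else:
            pos += -1 if left > right else 1
        pheromone[pos] += 0.3       # mark the environment in passing
        ants[i] = pos
    pheromone = [p * 0.95 for p in pheromone]   # evaporation

print(sum(1 for a in ants if abs(a - FOOD) <= 1), "of 10 ants near the food")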

Social laws

One property of the ecosystems approach is that the genes or programs responsible for coordination fully determine the behaviour of the individual actor. It is, however, feasible to allow the individual actor to have goals of its own. In order not to jeopardise coordination, Shoham and Tennenholtz (1995) propose the implementation of social laws. Social laws constrain the actions of actors: "The constraints specify which of the actions that are in general available are in fact allowed in a given state; they do so by a predicate over that state" [57]. These general rules of conduct can be designed off-line, before the agents are put to use. In addition, rules can develop as conventions in the course of interaction [58].
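The following sketch illustrates the idea of an off-line social law in this spirit: a law is a predicate over the current state that restricts which of the generally available actions are allowed. The traffic-style state, actions, and right-of-way convention are invented for illustration and are not taken from Shoham and Tennenholtz.

from typing import Callable, Dict, List

State = Dict[str, int]
Law = Callable[[State, str], bool]

ACTIONS = ["move_left", "move_right", "stay"]

def right_of_way(state: State, action: str) -> bool:
    # Social law: on even rows agents may only move right, on odd rows
    # only left - a convention that rules out head-on collisions.
    if action == "stay":
        return True
    return action == ("move_right" if state["row"] % 2 == 0 else "move_left")

def allowed(state: State, laws: List[Law]) -> List[str]:
    # An action is legal only if every law admits it in this state.
    return [a for a in ACTIONS if all(law(state, a) for law in laws)]

print(allowed({"row": 2}, [right_of_way]))   # ['move_right', 'stay']
print(allowed({"row": 3}, [right_of_way]))   # ['move_left', 'stay']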

Multi-agent planning

Multi-agent planning goes beyond social laws. In order to prevent inconsistent and conflicting interactions, agents construct a multi-agent plan that is supposed to coordinate their actions. In the case of central planning, there exists one coordination agent. It receives partial plans from the individual agents, investigates them for potential conflicts, modifies them, and consolidates them into a multi-agent plan. Concepts like this already exist, for example, for air traffic control [59]. With decentralised planning, each agent maintains a model of the plans of the other agents. Through a process of mutual exchange and adaptation, individual plans converge into a multi-agent plan [60].
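A toy version of central planning might look as follows. A coordination agent collects partial plans - here simply lists of timed resource reservations - detects conflicts, and modifies plans by delaying them until the consolidated multi-agent plan is conflict-free. The plan representation and the notion of conflict are assumptions made for illustration.

def conflicts(plan_a, plan_b):
    # Two plans conflict if they claim the same resource at the same time.
    return bool(set(plan_a) & set(plan_b))

def shift(plan, delay):
    return [(t + delay, res) for t, res in plan]

def consolidate(partial_plans):
    # The coordination agent merges partial plans, delaying a plan
    # until it no longer conflicts with those already accepted.
    merged = []
    for plan in partial_plans:
        delay = 0
        while any(conflicts(shift(plan, delay), other) for other in merged):
            delay += 1
        merged.append(shift(plan, delay))
    return merged

# Two aircraft both want runway R1 at time 0.
plans = [[(0, "R1"), (1, "gate")], [(0, "R1"), (1, "gate")]]
print(consolidate(plans))
# [[(0, 'R1'), (1, 'gate')], [(1, 'R1'), (2, 'gate')]]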

Social commitments

Another important concept for coordination and cooperation is the social commitment, that is, an obligation (Castelfranchi, 1995). In contrast to general rules of conduct, commitments are generally conceived of as a bilateral concept because one agent is always committed towards another agent. Obligations can be part of a multi-agent plan [61]. But they can also receive a more fundamental, multilateral meaning if they are associated with social roles. "When an agent joins a group, it joins in one or more roles and acquires the commitments of that role" [62]. In this case, the roles and the commitments connected to them serve as the elementary building blocks for coordination and cooperation within the system.
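A minimal sketch of role-based commitments could look as follows; the group, roles, and obligations are invented for illustration.

ROLE_COMMITMENTS = {
    "seller": ["deliver goods after payment", "answer price queries"],
    "buyer":  ["pay after delivery"],
}

class Agent:
    def __init__(self, name):
        self.name = name
        self.commitments = []    # pairs of (obligation, group)

    def join(self, group, role):
        # Joining a group in a role creates the commitments of that
        # role towards the group.
        for obligation in ROLE_COMMITMENTS[role]:
            self.commitments.append((obligation, group))

a = Agent("agent-42")
a.join("fish-market", "seller")
print(a.commitments)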

Organisations

Organisations represent more rigid and largely closed forms of coordination and cooperation. They usually follow one specific goal and can thus be understood as DPS systems. A given organisation determines the competencies, responsibilities, and interconnections of individual agents as well as their processes and contents [63]. One approach to agent organisations relies on blackboard architectures. The blackboard can be considered the centre of control of the system: from here, agents are activated, receive commands and information, leave messages, and deliver results. DAI research no longer views organisations as fixed, predetermined, and detailed structures but rather as more open and dynamic [64].
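The following sketch illustrates the blackboard idea in miniature: the blackboard holds tasks and collects results, while agents activated from it pick up matching commands and leave their results behind. The structure is an assumption for illustration, not a reference blackboard architecture.

class Blackboard:
    # The centre of control: holds pending tasks and collects results.
    def __init__(self):
        self.tasks = []
        self.results = {}

class WorkerAgent:
    def __init__(self, skill, fn):
        self.skill, self.fn = skill, fn

    def step(self, board):
        # Activated via the blackboard: take a matching task and leave
        # the result behind.
        for task in list(board.tasks):
            kind, payload = task
            if kind == self.skill:
                board.tasks.remove(task)
                board.results[task] = self.fn(payload)

board = Blackboard()
board.tasks.append(("translate", "hello"))
WorkerAgent("translate", str.upper).step(board)
print(board.results)    # {('translate', 'hello'): 'HELLO'}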

Market-oriented systems

The first models of market-oriented computer systems still represented nothing but particular forms of organisations: although they operated in a decentralised manner, they were closed systems, designed centrally. The metaphor of the market, however, allowed for a transition from structure and output towards a more open process. The contract net (Smith, 1980) can be considered the classic point of departure. The contract net connects multiple agents, each of which handles a specified set of resources. Agents can adopt two kinds of roles, that of the manager and that of the contractor. As soon as an agent is unable to solve a problem with the given local resources, it divides the task into sub-tasks and adopts the role of a manager securing contractors for these tasks. Coordination and cooperation among agents function on the basis of contracts. Available contractors evaluate announced tasks and make offers. In some cases they can themselves act as managers and further sub-divide and distribute the respective tasks. Managers generally choose the best offer they receive, combine it with offers for the remaining sub-tasks, and hand out their offer for the complete task. Eventually, the contractor with the best offer for the final product is awarded the task. During this problem-solving process the respective managers are responsible for the control of all actions and for the exploitation of the results.
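A single contract-net round can be sketched as follows: a manager announces a task, available contractors evaluate it and make offers, and the manager awards the contract to the best offer. The task, the cost model, and the choice of "cheapest wins" are illustrative assumptions; recursive sub-contracting is omitted.

class Contractor:
    def __init__(self, name, cost_per_unit):
        self.name = name
        self.cost_per_unit = cost_per_unit

    def bid(self, task):
        # Evaluate the announced task and make an offer.
        return (task["size"] * self.cost_per_unit, self)

class Manager:
    def announce(self, task, contractors):
        # Collect bids and award the contract to the cheapest offer.
        bids = [c.bid(task) for c in contractors]
        price, winner = min(bids, key=lambda b: b[0])
        return winner.name, price

contractors = [Contractor("node-A", 3.0), Contractor("node-B", 2.5)]
print(Manager().announce({"name": "analyse-data", "size": 10}, contractors))
# ('node-B', 25.0)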

Automated negotiations

Coordination and cooperation by means of automated negotiations is exclusively process-oriented. This approach relies mainly on game theory to construct interaction mechanisms. Auction processes are used to implement the market metaphor [65]. Alternatively, political voting procedures can be employed [66]. At the core of these approaches lie so-called negotiation protocols. They determine the steps of the interaction process that are supposed to lead to consensus between the agents.
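As a simple example of such a protocol, the sketch below implements a first-price sealed-bid auction: each agent submits one bid, and the highest bidder wins and pays its own bid. The agents' valuations and the naive bidding rule are assumptions; game-theoretic treatments of optimal bidding are beyond this illustration.

import random

def sealed_bid_auction(valuations):
    # Step 1 of the protocol: each agent submits one sealed bid (here,
    # a random fraction of its private valuation).
    bids = {agent: value * random.uniform(0.5, 1.0)
            for agent, value in valuations.items()}
    # Step 2: the highest bidder wins and pays its own bid.
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

print(sealed_bid_auction({"agent-1": 10.0, "agent-2": 8.0}))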

Negotiations

Finally, an approach has to be mentioned that does not, ex ante, commit to any of the methods described so far: unstructured negotiation. In order to negotiate successfully without the aid of a pre-defined protocol, agents must have cognitive abilities and be able to draw conclusions about the beliefs, desires, and intentions of other agents. Influencing the cognitive states of other agents in turn requires sophisticated communication skills. Early examples are the "PERSUADER" system (Sycara, 1989), which employs argumentation tactics from the domain of collective bargaining, and an agent designed by Kraus and Lehmann (1995) using the principles of the strategy game "Diplomacy".

Figure 3 illustrates the principles of coordination and cooperation discussed so far, taking into consideration the degree to which the different approaches structure the environment and the degree of autonomy that agents retain. The Figure can only serve as an approximate and relative indication of the position of each approach because the paradigms partly overlap and are continuously evolving. As some of the examples have already shown, the approaches are not equally suited to the demands of different domains, such as trade, administration, or politics. Consequently, the approaches cannot be considered complete substitutes for each other. Nevertheless, multi-agent environments today regularly depend on a single, stand-alone approach to coordination and cooperation. In the long run, several different interaction mechanisms will have to be employed in a complementary fashion. DAI currently presents an arbitrary co-existence of various paradigms which are not related to each other and which are derived from sociological, ethical, and legal as well as economic theories [67].

In summary, all of the discussed approaches to coordination and cooperation play a role in the social order of multi-agent environments. Although there is not yet a functioning way to integrate them technically, the paradigms complement each other. In addition, these approaches imply specific requirements with regard to the agents operating within these environments (Wellman, 1998). It has to be pointed out that these approaches often implicitly assume that the involved agents naturally follow the coordination and cooperation principles. This rests on the assumption that "in a computational setting, these rules can be enforced as unbreakable physical laws" [68]. For this assumption to hold, an integrated approach to multi-agent environments is required in which individual architectures and interaction mechanisms are brought in line with each other. Yet a general theory of multi-agent environments does not exist. It appears to be a fruitful task to revisit these approaches from a social science perspective. Social science could offer a framework that analyses these paradigms and positions them in a larger context. Such a theoretical basis could facilitate the design of multi-agent environments and demonstrate the factors that constrain it.

Agents in Society

Taken seriously, agents do not make up an independent world; they are part of society. A whole range of everyday tasks are being delegated to agents or at least involve agent technology. Agents hence represent a new type of actor with a social role. In this section the particularities of this new type of social actor in the context of human society will be discussed.

The most basic feature of agents is that they are digital. A whole range of implications stems from this characteristic. First of all, digital data can be copied arbitrarily and with little effort. Consequently, agents will not be employed sporadically but in large numbers. In turn, we can expect a large variety of agents to exist at any given moment. Nevertheless, many of these digital entities will be largely or completely similar. This presumption is backed by the fact that a majority of agents are and will be designed with the aid of so-called agent frameworks [69]. These tools offer a basic agent architecture that can be specified by the user of the framework. As with other digital products, any user can be expected to gain easy access to agent technology. Similarly, many "originals" may exist, or the "original" may be impossible to trace. This implies that if a given agent has harmful effects, it might be impossible to trace the originators of that agent or the actual initiator of certain actions.

The vast amounts of electronic data and information that have been produced over the last couple of decades should be considered a "natural environment" for agents. Two trends point to the increasing relevance of this environment. On the one hand, factors like the Internet imply that the amount of electronically available information increases rapidly while the flow of information remains highly dynamic and only loosely structured. On the other hand, the number of users of electronically available information and data with limited computer-related experience steadily increases [70]. In this situation, agents principally take on two types of tasks. First, they operate as independent problem solvers and help to overcome the complexity of distributed computer systems by managing resources (such as routing e-mail and telephone calls, information retrieval, etc. [71]). Second, agents can assist humans personally. They can operate as intelligent user interfaces, supporting their owners by interacting with other computer applications and executing tasks autonomously. Negroponte [72] describes these personal digital assistants (PDAs) metaphorically as "well trained English butlers" who know their owner, share information with him, and are thus capable of operating effectively on his behalf. Examples of possible roles and tasks include learning interface agents that help their users handle e-mail and schedule meetings [73], adaptive autonomous agents [74], matchmaking agents that introduce people with shared interests [75], and marketplace agents that buy and sell goods on behalf of their users [76].

These examples illustrate that agents can be perceived as highly specialised actors. While humans unite many roles in one person, the existence of an agent is usually tied to one exclusive role, to which the agent is dedicated without interruption. In many cases such agents can only interact with actors active in the same domain. Any task that goes beyond the specified role of a given agent has to be solved in cooperation with other actors. In general, the utilisation of agents can be said to increase the degree of division of labour and specialisation in society, resulting in more interactions with and among software (see Figure 4).

Specialisation means that agents are actors facing severe constraints on their perceptual and reasoning capabilities; in other words, agents lack common sense. With respect to some economic activities, however, agents may prove superior to humans. First, they respond more quickly to changes in the environment; second, they handle certain complex structures, for example price structures, more easily (Kephart et al., 1999). It seems plausible that agents will be put to use massively in those areas where they are particularly suited. In contrast to simple artifacts such as pocket calculators, agents are capable of decision-making and transacting.

At the same time, agents are actors that can change quickly or be changed quickly. It is not only their decisions and actions that change but the whole actor. Individual change can be due to the interference of a third party, such as a programmer. Change can also take place autonomously, based on methods of learning, adaptation, and evolution. Thus, it may be difficult to identify a specific agent at different points in time. Even where identification is not a problem, it can be difficult to form expectations of how an agent's behavioural characteristics have changed since the last encounter.

This leads to the open question of what exactly constitutes a single agent. In contrast to biological actors, digital actors are not defined by physical boundaries. Still, the concept of embodiment is often thought to be vital for agents as well; "they must be embodied in the situated sense of being autonomous agents structurally coupled with their environment" [77]. Another proposition is to perceive an agent or a multi-agent system as a single actor if it acts as a social unit [78]. This can apply to a PDA as well as to a system of agents distributed over a whole enterprise. If the system is controlled centrally, say by the vice president of operations, then it can be understood as a single actor. If, however, many individual agents with diverging goals and principals use the agent system, then each of these agents has to be understood as a social unit. Eventually, this leads to an open-ended range of interaction opportunities, of which Figure 5 gives a generic overview. As a consequence, the social network of society becomes even more complex.

A discussion of agents would be incomplete if we ignored the human tendency to personify machines. Agents are designed to be social actors; consequently, it is not surprising that humans anthropomorphise agents [79]. Nevertheless, agents are things and not living creatures; "programs participating in a computational market are not people (or even animate), hence they are not hurt when they go broke" [80]. This, however, does not imply that it is always feasible to stop these programs or to eliminate them. Experiments with evolutionary agents demonstrate the potential danger that "if people tried to stop them, it's possible that they would evolve mechanisms to escape the attacks" [81]. Computer viruses, for example, are in effect much simpler than agents, yet many existing viruses are fought with only moderate success [82]. As agents do not age, their source code theoretically remains unaltered over time. Hence, in a given situation it might only be possible to deactivate an agent, not to eradicate it. This implies that agents have the potential to become a new kind of environmental problem.

Inevitably, a social profile of agents must remain incomplete. Although only the first steps have been taken so far, the dynamics and variety of agent research and development simply do not allow the construction of a complete picture. The feature that probably best characterises agents is their complexity. Agents operate as independent problem solvers or as personal assistants within a digital environment that is difficult for humans to access. In many ways they are unpredictable: they can change or be changed quickly, their boundaries as actors are unclear, their originators may be untraceable, and once released they may be impossible to stop.

These factors of unpredictability obscure the fact that, in most cases, agents are deterministic actors. It is this complexity that demands further discussion of agents as actors in society. Research should take into account not only particular problems (say, the construction of a shopping agent) but also the consequences of these developments for society (such as the effects of agent-mediated commerce).

Conclusions

From the perspectives of social science and computer science, the time has come to truly examine the social dimensions of machines. This paper first argued that it is sensible to narrow a social science of men and machines down to a social science of men and agents. Agents can function as single actors, as parts of multi-agent environments, and as actors in society. A variety of theories serve to design these complex machines, with communication, coordination, and cooperation as the fundamental dimensions of multi-agent environments. There is a wide and increasing range of mechanisms dealing with these problems. However, a general theory of multi-agent environments is absent. Such a theory would be useful because communication, coordination, and cooperation involve a variety of problems that occur in most multi-agent environments but of which every specialised approach only tackles a subset. Finally, as social actors in society, agents demonstrate features that distinguish them from traditional software on the one hand and from humans on the other.

In summary, we can now outline three broad implications of agent technology. First, agents have implications for the social order of society. Second, socially relevant aspects of agent technology demonstrate compatibility with the social sciences. And third, a comprehensive underlying social theory for multi-agent environments is still missing. In other words, programming has reached a stage that resembles the stage in the history of human behaviour when communities started to distinguish between their members, analysing the diversity of individual acts and community actions, especially when tensions between individuals and society as a whole started to become evident [83].

About the Author

Dirk Nicolas Wagner is a research assistant at the Center for Public Choice and at the Institute for Informatics at the University of Fribourg. His doctoral dissertation 'Software Agents and Liberal Order - An enquiry along the borderline between Economics and Computer Science' will be published in autumn 2000.
E-mail: DirkWagner@gmx.de

Acknowledgments

This research was supported by the Swiss National Science Foundation, grant no. 12/555 11.98.

Notes

1. Also Stork, 1997.

2. Bradshaw, 1997b, p. 4.

3. Bailey, 1992, p. 68.

4. Wiener, 1994; Birman and Renesse, 1997.

5. Kephart et al., 1998.

6. Leonard, 1997.

7. Kephart et al., 1998.

8. Brooks, 1995, p. 181.

9. Quoted in Leonard, 1997, p. 56.

10. Hobbes, 1996, p. 84.

11. Jennings, 1999, p. 1431.

12. Jennings and Wooldridge, 1998.

13. Huhns and Singh, 1998a, p. 1.

14. See http://www.botspot.com and http://www.gartner.com

15. Shneiderman, 1997, p. 100.

16. Jennings, 1999, pp. 1431ff.

17. Shoham, 1997, p. 271.

18. Tokoro, 1996, p. 1.

19. Huhns and Singh, 1998a, p. 1.

20. Franklin and Graesser, 1997.

21. For an explicit discussion of this definition, see Franklin, 1997.

22. E.g. Shneiderman, 1997, p. 99f.

23. E.g. Krogh, 1997, p. 149.

24. Kautz, 1998, p. 130.

25. Wooldridge and Jennings, 1995, p. 2.

26. Huhns and Singh, 1998a, p. 7.

27. Rao and Georgeff, 1995.

28. Huhns and Singh, 1998b.

29. Brooks, 1986.

30. Maes, 1994.

31. E.g. Fischer et al., 1998. For an extensive overview of agent architectures, see Wooldridge and Jennings, 1995, pp. 12ff.

32. Shoham, 1997, p. 273.

33. Wooldridge and Jennings, 1995, p. 3.

34. Huhns and Singh, 1998a, pp. 12ff.

35. Tokoro, 1998, p. 421.

36. Wellman, 1998.

37. For a more detailed discussion of the research agenda of DAI, see Gasser, 1991, p. 110 and Jennings, Sycara, and Wooldridge, 1998, pp. 17f.

38. Labrou and Finin, 1998, p. 235.

39. Austin, 1962.

40. Cutkosky et al., 1998, p. 49.

41. Singh, 1998, p. 43.

42. Singh, 1998, p. 45.

43. Castelfranchi, 1998, p. 160.

44. Jennings, Sycara, and Wooldridge, 1998, p. 9.

45. Castelfranchi, 1998, p. 166f.

46. Leipold, 1989, p. 130f.

47. Huberman, 1988.

48. Durfee, Lesser, and Corkill, 1992, p. 379.

49. Kraus, Wilkenfeld, and Zlotkin, 1995, p. 298.

50. Like all two-dimensional classifications, this matrix neglects important criteria for differentiation. For example, the degrees of distribution and heterogeneity can only be concluded indirectly.

51. Huhns, Singh, and Ksiezyk, 1998, p. 36.

52. Agre, 1995, p. 1.

53. Franklin, 1998.

54. Kelly, 1995, p. 306.

55. Macy, 1998, p. 1.

56. Meyer and Wilson, 1991; Ward, 1998.

57. Shoham and Tennenholtz, 1995, p. 243.

58. Shoham and Tennenholtz, 1997.

59. Cammarata et al., 1983.

60. Decker and Lesser, 1998.

61. Lesser, 1998, p. 98.

62. Huhns and Singh, 1998a, p. 15.

63. Nwana et al., 1996.

64. Moulin and Chaib-Draa, 1996, p. 18f.

65. Rosenschein and Zlotkin, 1994.

66. Ephriati and Rosenschein, 1996.

67. Singh and Huhns, 1999.

68. Miller and Drexler, 1988, p. 58.

69. Examples are the Java Agent Template (JAT), the Java Expert System Shell (JESS), IBM's Agent Building Environment (ABE), SRI's Open Agent Architecture, and ObjectSpace's Voyager.

70. Bradshaw, 1997b, pp. 12ff.

71. Ward, 1998; Willmott and Calisti, 2000.

72. Negroponte, 1997, p. 59.

73. Maes and Kozierok, 1993.

74. Maes, 1994.

75. E.g. Foner, 1997.

76. E.g. Chavez and Maes, 1996.

77. Franklin, 1997, p. 499.

78. Rammert, 1998, p. 3.

79. Nass et al., 1994.

80. Miller et al., 1996, p. 101.

81. Thomas Ray, cit. in Ward, 1998, p. 35.

82. Kephart et al., 1998.

83. Rammert, 1998, p. 98.

Bibliography

P. Agre, 1995. "Computational research on interaction and agency," Artificial Intelligence, volume 72, numbers 1-2, pp. 1-52, and at http://dlis.gseis.ucla.edu/people/pagre/aij-intro.html

J. L. Austin, 1962. How to do things with words. Cambridge, Mass.: Harvard University Press.

J. Bailey, 1992. "First we reshape our computers, then our computers reshape us: the broader intellectual impact of parallelism," Daedalus, volume 121, number 1, pp. 67-85.

K. P. Birman and R. van Renesse, 1997. "Software für zuverlässige Netzwerke," Spektrum der Wissenschaft, volume 9 (September), pp. 76-81.

J. M. Bradshaw (editor), 1997a. Software agents. Menlo Park, Calif.: AAAI Press; Cambridge, Mass.: MIT Press.

J. M. Bradshaw, 1997b. "An Introduction to software agents," In: J.M. Bradshaw, (editor). Software agents. Menlo Park, Calif.: AAAI Press; Cambridge, Mass.: MIT Press, pp. 3-48.

F. P. Brooks, 1995. The Mythical man-month: Essays on software engineering. Anniversary edition. Reading, Mass.: Addison-Wesley.

R. A. Brooks, 1986. "A Robust layered control system for a mobile robot," IEEE Journal of Robotics and Automation, volume 2, number 1, pp. 14-23, and at http://www.ai.mit.edu/people/brooks/papers/AIM-864.ps.Z

S. Cammarata, S. McArthur, and R. Steeb, 1983. "Strategies for cooperation in distributed problem solving," Proceedings of the 8th International Joint Conference on Artificial Intelligence (IJCAI-83), Karlsruhe.

C. Castelfranchi, 1995. "Commitments: From individual intentions to groups and organizations," Proceedings of the International Conference on Multiagent Systems (ICMAS-95), pp. 41-48.

C. Castelfranchi, 1998. "Modeling social action for AI agents," Artificial Intelligence, volume 103, numbers 1-2, pp. 157-182.

A. Chavez and P. Maes, 1996. "Kasbah: An Agent Marketplace for buying and selling goods," Proceedings of the First International Conference on the Practical Application of Agents and Multi-Agent Technology (April), London.

A. C. Clarke, 1969. 2001: A Space Odyssey. Screenplay. Hawk Films Ltd., c/o. M-G-M Studios, Boreham Wood, Herts., at http://www.palantir.net/2001/script.html

M. Cutkosky, R. Engelmore, R. Fikes, M. Genesereth, T. Gruber, W. Mark, J. Tenenbaum, and J. Weber, 1998. "PACT: An Experiment in integrating concurrent engineering systems," In: M. N. Huhns and M. P. Singh, (editors). Readings in agents. San Francisco: Morgan Kaufmann, pp. 46-55.

K. Decker and V. Lesser, 1998. "Designing a family of coordination algorithms," In: M. N. Huhns and M. P. Singh, (editors). Readings in agents. San Francisco: Morgan Kaufmann, pp. 450-457.

D. C. Dennett, 1987. The Intentional stance. Cambridge, Mass.: MIT Press.

E. H. Durfee, V. R. Lesser, and D. D. Corkill, 1992. "Distributed Problem Solving," In: S. C. Shapiro, (editor). Encyclopedia of Artificial Intelligence. New York: Wiley, pp. 379-388.

E. Ephrati and J. S. Rosenschein, 1996. "Deriving consensus in multiagent systems," Artificial Intelligence, volume 87, pp. 21-74.

K. Fischer, J. Müller, and M. Pischel, 1998. "A Pragmatic BDI architecture," In: M. N. Huhns and M. P. Singh, (editors). Readings in agents. San Francisco: Morgan Kaufmann, pp. 217-224.

L. Foner, 1997. "YENTA: A multi-agent, retrieval-based matchmaking system," Proceedings of the First International Conference on Autonomous Agents (February), Marina del Rey, Calif., and at http://foner.www.media.mit.edu/people/foner/Reports/Agents-97/

S. Franklin, 1997. "Autonomous agents as embodied AI," Cybernetics and Systems, volume 28, number 6, pp. 499-520, and at http://www.msci.memphis.edu/~franklin/AAEI.html

S. Franklin, 1998. Coordination without Communication. Working Paper, Insititute for Intelligent Systems and Department of Mathematical Sciences, University of Memphis, and at http://www.msci.memphis.edu/~franklin/coord.html

S. Franklin and A. Graesser, 1997. "Is it an agent or just a program?: A Taxonomy of autonomous agents," Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, published as Intelligent Agents III. New York: Springer, pp. 21-35, and at http://www.msci.memphis.edu/~franklin/AgentProg.html

L. Gasser, 1991. "Social conceptions of knowledge and action: DAI foundations and open systems semantics," Artificial Intelligence, volume 47, numbers 1-3, pp. 107-138.

T. Hobbes, 1996. Leviathan. Oxford: Oxford University Press.

B. A. Huberman (editor), 1988. The Ecology of computation. Amsterdam: North-Holland.

M. N. Huhns and M. P. Singh (editors), 1998a. Readings in agents. San Francisco: Morgan Kaufmann.

M. N. Huhns and M. P. Singh, 1998b. "Cognitive agents," IEEE Internet Computing, volume 2, number 6, pp. 87-89.

M. N. Huhns, M. P. Singh, and T. Ksiezyk, 1998. "Global information management via local autonomous agents," In: M. N. Huhns and M. P. Singh, (editors). Readings in agents. San Francisco: Morgan Kaufmann, pp. 36-45.

N. R. Jennings and M. J. Wooldridge, 1998. Agent technology: Foundations, applications, markets. New York: Springer.

N. R. Jennings, 1999. "Agent-based computing: Promise and perils," In: T. Dean (editor). Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI 1999). San Francisco: Morgan Kaufmann, pp. 1429-1436.

N. R. Jennings, K. Sycara, and M. Wooldridge, 1998. "A Roadmap of agent research and development," Autonomous Agents and Multi-Agent Systems, volume 1, number 1 (July), pp. 7-38.

H. Kautz, B. Selman, M. Coen, S. Ketchpel, and C. Ramming, 1998. "An Experiment in the design of software agents," In: M. N. Huhns and M. P. Singh, (editors). Readings in agents. San Francisco: Morgan Kaufmann, pp. 125-130.

K. Kelly, 1995. Out of control: The New biology of machines, social systems, and the economic world. Reading, Mass.: Addison-Wesley.

J. O. Kephart, R. Das, and J. K. MacKie-Mason, 1999. "Two-sided learning in an agent economy for information bundles," presented at the Workshop on Agent-Mediated Electronic Commerce Workshop at IJCAI '99 (AmEC-99), Stockholm (31 July), and at http://www.research.ibm.com/infoecon/paps/html/amec99_bundle/bundle.html

J. O. Kephart, G. Sorkin, D. Chess, and S. White, 1998. "Kampf den Computerviren," Spektrum der Wissenschaft (5 May), pp. 60-65.

S. Kraus, J. Wilkenfeld, and G. Zlotkin, 1995. "Multiagent negotiation under time constraints," Artificial Intelligence Journal, volume 75, number 2, pp. 297-345.

S. Kraus and D. Lehmann, 1995. "Designing and building a negotiating automated agent," Computational Intelligence, volume 11, number 1, pp. 132-171, and at http://www.umiacs.umd.edu/~sarit/Articles/13.ps

C. Krogh, 1997. Normative Structures in Natural and Artificial Systems. Oslo: Tano-Aschehoug, and at http://www.informatics.sintef.no/~chk/krogh/papers/complex97.ps

Y. Labrou and T. Finin, 1998. "Semantics and conversations for an agent communication language," In: M. N. Huhns and M. P. Singh, (editors). Readings in agents. San Francisco: Morgan Kaufmann, pp. 235-242.

H. Leipold, 1989. "Das Ordnungsproblem in der ökonomischen Institutionentheorie," ORDO, volume 40, pp. 129-146.

A. Leonard, 1997. Bots. San Francisco: Hardwired.

V. Lesser, 1998. "Reflections on the nature of multi-agent coordination and its implications for an agent architecture," Autonomous Agents and Multi-Agent Systems, volume 1, pp. 89-111.

M. W. Macy, 1998. "Social order in artificial worlds," Journal of Artificial Societies and Social Simulation, volume 1, number 1 (January), at http://www.soc.surrey.ac.uk/1/1/4.html

P. Maes, 1994. "Modeling adaptive autonomous agents," Artificial Life Journal, volume 1, numbers 1-2, and at http://pattie.www.media.mit.edu/people/pattie/alife-journal.ps

P. Maes and R. Kozierok, 1993. "Learning interface agents," In: Proceedings of the 11th National Conference on Artificial Intelligence (AAAI 1993), Menlo Park, Calif.: AAAI Press; Cambridge, Mass.: MIT Press, pp. 459-465.

J. McCarthy, 1979. "Ascribing mental qualities to machines," In: M. Ringle, (editor). Philosophical perspectives in artificial intelligence. Brighton: Harvester Press, and at http://steam.stanford.edu/jmc/ascribing/acribing.html

J.-A. Meyer and S. W. Wilson (editors), 1991. From animals to animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior. Cambridge, Mass.: MIT Press.

M. S. Miller and K. E. Drexler, 1988. "Comparative ecology: A Computional perspective," In: B. A. Huberman, (editor). The Ecology of computation. Amsterdam: North-Holland, and at http://www.agorics.com/agorpapers.html

B. Moulin and B. Chaib-Draa, 1996. "An Overview of distributed artificial intelligence," In: G. O'Hare and N. R. Jennings, (editors). Foundations of distributed artificial intelligence. New York: Wiley, pp. 3-55.

T. Mullen and M. Wellman, 1995. "Some issues in the design of market oriented agents," In: M. Wooldridge, J. P. Müller, and M. Tambe, (editors). Intelligent agents II: Agent theories, architectures, and languages. New York: Springer, pp. 283-298.

C. Nass, J. Steuer, and E. R. Tauber, 1994. "Computers are social actors," In: Conference on Human Factors and Computing Systems, pp. 72-77, and at http://hci.stanford.edu/cs147/readings/casa/index.html

N. Negroponte, 1997. "Agents: From direct manipulation to delegation," In: J. M. Bradshaw (editor). Software agents. Menlo Park, Calif.: AAAI Press; Cambridge, Mass.: MIT Press, pp. 57-66.

H. S. Nwana, L. Lee, and N. R. Jennings, 1996. "Coordination in software agent systems," BT Technology Journal, volume 14, number 4, pp. 79-88.

G. Orwell, 1984. 1984. (originally published in 1949). New York: Penguin.

W. Rammert, 1998. "Giddens und die Gesellschaft der Heinzelmännchen: Zur Soziologie technischer Agenten und der Multi-Agenten-Systeme," Working-Paper, Freie Universität, Berlin, and at http://userpage.fu-berlin.de/~rammert/articles/Multi.html

A. Rao and M. Georgeff, 1995. "BDI agents: From theory to practice," In: ICMAS-95: First International Conference on Multi-Agent Systems: July 12-14, 1995, San Francisco, California: Proceedings. Menlo Park, Calif.: AAAI Press; Cambridge, Mass.: MIT Press, pp. 312-319.

J. S. Rosenschein and G. Zlotkin, 1994. Rules of encounter: Designing conventions for automated negotiation among computers. Cambridge, Mass.: MIT Press.

B. Shneiderman, 1997. "Direct manipulation versus agents: Paths to predictable, controllable, and comprehensible interfaces," In: J. M. Bradshaw (editor). Software agents. Menlo Park, Calif.: AAAI Press; Cambridge, Mass.: MIT Press, pp. 97-108.

Y. Shoham, 1997. "An Overview of agent-oriented programming," In: J. M. Bradshaw (editor). Software agents. Menlo Park, Calif.: AAAI Press; Cambridge, Mass.: MIT Press, pp. 271-290.

Y. Shoham and M. Tennenholtz, 1995. "On social laws for artificial agent societies: Off-line design," Artificial Intelligence, volume 73, numbers 1-2, pp. 231-252.

Y. Shoham and M. Tennenholtz, 1997. "On the emergence of social conventions: Modeling, analysis, and simulations," Artificial Intelligence, volume 94, numbers 1-2, pp. 139-166.

M. P. Singh, 1998. "Agent communication languages: Rethinking the principles," IEEE Computer, volume 31, number 12 (December), pp. 40-47.

M. P. Singh and M. N. Huhns, 1999. "Principles of agents and multiagent systems: Social, ethical, and legal abstractions and reasoning," Tutorial A4, 16th International Joint Conference on Artificial Intelligence, IJCAI-99, Stockholm.

R. G. Smith, 1980. "The Contract net protocol: High-level communication and control in a distributed problem solver," IEEE Transactions on Computers, volume C-29, number 12, pp. 1104-1113.

D. G. Stork (editor), 1997. Hal's legacy: 2001's computer as dream and reality. Cambridge, Mass.: MIT Press.

K. P. Sycara, 1989. "Argumentation: Planning other agent's plans," In: N. S. Sridharan (editor). IJCAI-89: Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, August 20-25, 1989, Detroit, Michigan. San Mateo, Calif.: American Association of Artificial Intelligence, distributed by Morgan Kaufmann Publishers, pp. 517-523.

M. Tokoro, 1998. "The Society of objects," In: M. N. Huhns and M. P. Singh, (editors). Readings in agents. San Francisco: Morgan Kaufmann, pp. 421-429.

M. Tokoro, 1996. "Agents: Towards a society in which humans and computers cohabitate," In: J. W. Perram and J.-P. Müller, (editors). Distributed software agents and applications. New York: Springer.

M. Ward, 1998. "There's an ant in my phone...," New Scientist, number 2118 (24 January), pp. 32-35, and at http://www.newscientist.co.uk/nsplus/insight/ai/antinmyphone.html

M. P. Wellman, 1998. "Multiagent systems," In: R. Wilson and F. Keil, (editors). MIT Encyclopedia of the Cognitive Sciences. Cambridge, Mass.: MIT Press, pp. 573-574, and at http://vulture.eecs.umich.edu/faculty/wellman/pubs/multiagent-ECS.text

L. R. Wiener, 1994. Digitales Verhängnis: Gefahren der Abhängigkeit von Computern und Programmen. München: Addison-Wesley.

S. Willmott and M. Calisti, 2000. "An Agent future for network control?," Informatik-Informatique, number 1 (February), pp. 25-32, and at http://www.access.ch/sinfo/pages/issues/n001/a001Willmott.pdf

M. J. Wooldridge and N. R. Jennings (editors), 1995. Intelligent agents: ECAI-94 Workshop on Agent Theories, Architectures, and Languages, Amsterdam, the Netherlands, August 8-9, 1994: Proceedings. New York: Springer.


Editorial history

Paper received 29 March 2000; accepted 15 May 2000.



Copyright ©2000, First Monday

Software Agents Take the Internet as a Shortcut to Enter Society: A Survey of New Actors to Study for Social Theory by Dirk Nicolas Wagner
First Monday, volume 5, number 7 (July 2000)
URL: http://firstmonday.org/issues/issue5_7/wagner/index.html