First Monday

Intelligent Software Agents on the Internet: An Inventory of Currently Offered Functionality in the Information Society and a Prediction of (Near) Future Developments

by Björn Hermans

Contents


Chapter 6: Future and Near-Future Agent Trends & Developments
Chapter 7: Concluding Remarks
Bibliography
Notes

Chapter 6: Future and Near-Future Agent Trends & Developments

Introduction
"... it often is impossible to identify the effects of a technology. Consider the now ubiquitous computer. In the mid-1940s, when digital computers were first built, leading pioneers presumed that the entire country might need only a dozen or so. In the mid-1970s, few expected that within a decade the PC would become the most essential occupational tool in the world. Even fewer people realised that the PC was not a stand-alone technology, but the hub of a complex technological system that contained elements as diverse as on-line publishing, e-mail, computer games and electronic gambling." - "Cyber-Seers: Through A Glass, Darkly" by G. Pascal Zachary

In this chapter we will take a cautious look into the future of agents [ 82 ] and the agent-technique.

To do so, each section looks more closely at one important aspect of, or one party involved in, these developments. First, general remarks are made about it. Next, where possible, a rough chronology of expected and announced events and developments is sketched to give an idea of what may be expected with respect to this party [ 83 ]. The given chronologies are divided into three periods:

  • "short term", relating to the period one to two years from now (i.e. from now up to and including 1997);

  • "medium term", relating to the period three to five years from now (i.e. from 1998 until the year 2000);

  • "long term", relating to the period from six years from now and beyond (i.e. the period beyond the year 2000).

This partition is rather arbitrary, but it is the most practical and workable compromise.

Another thing that may look rather arbitrary is the list of parties that have been selected for a further examination. It - indeed - could have been much longer, but we have chosen to look only at those parties and techniques of which it is (almost) certain that they will be involved in, or have influence on, future agent developments.

The depth of the examination may also appear rather superficial. However, it seemed more sensible to "just" describe those factors and issues that will influence developments (and to clarify and illustrate them wherever possible), than to make bold predictions (implying that the future is straightforward and easy to predict) which are very hard to support with facts:

"Depending on the addressed area, carrying out [such an] analysis may be more or less easy: policy and regulatory trends for instance are quite easy to identify and understand. Business strategy too can be more or less easily deciphered. Yet this may already be a lot more complex since there is often a part of guessing or gambling behind corporate moves. Consumers' interest can also be guessed, for instance in the light of the skyrocketing popularity of the Internet or the multiplication of commercial on-line PC services.

The most difficult part of the exercise may in fact be to gauge the economic, social and cultural impact of new applications [such as agents]. Indeed, their visibility is still limited, making it all the more difficult to assess their penetration in the social fabric and in public interest areas." from "An Overview of 1995's Main Trends and Key Events" in Information Society Trends, special issue

Yet another compromise is the distribution of information over the various sections and the remarks that are made about it: there is quite some overlap in both of these.

The reason for this is twofold. Firstly, quite a lot of the information and remarks fit into more than one section; each item has been placed in the section it is thought to fit best, or where it was most practical to put it. Secondly, some of the mentioned parties (such as suppliers) can play more than one role and are linked to other parties. These links and roles are given in the various sections, but information about the involved parties is given only once.

The Agent-technique

General remarks
Agents will have a great impact, as was seen in the previous chapter. Some, mostly researchers, say they will appear in everyday products as an evolutionary process. Others, such as large companies, are convinced it will be a revolutionary process. The latter does not seem very likely as many parties are not (yet) familiar with agents, especially the future users of them. The most probable evolution will be that agents, initially, leverage simpler technologies available in most applications (e.g. word processors, spreadsheets or knowledge-based systems). After this stage, agents will gradually evolve into more complicated applications.

Developments that may be expected, and technical matters that will need to be given a lot of thought, are:

The chosen agent architecture / standards

This is a very important issue. On a few important points consensus already seems to have been reached: ACL (Agent Communication Language) has been adopted and is used by many parties as their agent communication language. ACL uses KIF (Knowledge Interchange Format) and KQML (Knowledge Query and Manipulation Language) to communicate knowledge and queries to others. KIF and KQML are also used by many parties, for instance by the Matchmaker project we saw in chapter four, and are currently being further extended. In general, standards are slow to emerge, but examples such as HTML have shown that a major standard can emerge in two to three years when it is good enough and meets the needs of large numbers of people.
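To make the division of labour between the two formats more concrete, the sketch below composes a single KQML performative whose :content field carries a KIF expression. It is a minimal illustration only: the performative and parameter names follow the KQML/KIF conventions mentioned above, but the kqml_message() helper and the agent names are hypothetical.

    # Minimal sketch of composing a KQML performative whose :content is a KIF
    # expression. Only the performative and parameter names follow the KQML/KIF
    # conventions; the helper function and agent names are illustrative.

    def kqml_message(performative, **params):
        """Render a KQML message as the Lisp-like string agents exchange."""
        fields = " ".join(f":{key.replace('_', '-')} {value}" for key, value in params.items())
        return f"({performative} {fields})"

    # An agent asks a matchmaker which agents can supply stock quotes;
    # the query itself is expressed in KIF.
    query = kqml_message(
        "ask-one",
        sender="user-agent-17",
        receiver="matchmaker",
        language="KIF",
        ontology="services",
        reply_with="q1",
        content="(capability ?agent stock-quotes)",
    )

    print(query)
    # prints, on one line:
    # (ask-one :sender user-agent-17 :receiver matchmaker :language KIF
    #          :ontology services :reply-with q1 :content (capability ?agent stock-quotes))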

Another, related and equally important issue is the agent architecture that will be pursued and will become the standard. No consensus has been reached about this yet.

There are two possible architectures that can be pursued, each of which strongly influences such aspects as required investments and agent system complexity [ 84 ]:

Homogeneous Architecture: here there is a single, all-encompassing system which handles all transactions [ 85 ] and functions [ 86 ]. Most of the current agent-enabled applications use this model, because the application can, itself, provide the entire agent system needed to make a complete, comprehensive system [ 87 ];

Heterogeneous Architecture: here there is a community within which agents interact with other agents. This community model assumes agents can have different users, skills, and costs.

There are various factors that influence which path the developments will follow, i.e. which of these two types of architectures will become predominant [ 88 ]:

1. The producer of the agent technique (i.e. used agent language) that has been chosen to be used in a homogeneous model: this producer will have to be willing to give out its source code so others are able to write applications and use it as the basis for further research.

If this producer is not willing to do so, other parties (such as universities) will experiment with and start to develop other languages. If the producer does share the source code with others, researchers, but also competitors, will be able to further elaborate the technique and develop applications of their own with it. It is because of this last consequence that most producers in this situation, at least all the commercial ones, will choose to keep the source code to themselves, as they would not want to destroy this very profitable monopoly. In the end, this producer's 'protectionism', combined with the findings of (university) research and market competition, will result in multiple alternative techniques being developed (i.e. lead to a heterogeneous architecture);

2. Interoperability requirements, i.e. the growing need to co-operate/interact with other parties in activities such as information searches (because doing it all by yourself will soon lead to unworkable situations). Here, a homogeneous architecture would clearly make things much easier compared to a heterogeneous architecture as one then does not need to worry about which agent language or system others may be using.

However, multi-agent systems - especially those involved in information access, selection, and processing - will depend upon access to existing facilities (so-called legacy systems). Application developers will be disinclined to rewrite these just to meet some standard. A form of translation will have to be developed to allow these applications to participate. In the final analysis it is clear that this can only be done when using a heterogeneous agent model [ 89 ]. Furthermore, agent systems will be developed in many places, at different times, with differing needs or constraints. It is highly unlikely that a single design will work for all;

3. Ultimately, the most important factor will be "user demand created by user perceived or real value". People will use applications that they like for some reason(s). The architecture that is used by (or best supports) these applications will become the prevailing architecture, and will set the standard for future developments and applications.

Although a homogeneous architecture has its advantages, it is very unlikely that all the problems that are linked to it can be solved. So, although the agent architecture of the future may be expected to be a heterogeneous one, this will not be because of its merits, but rather because of the demerits of a homogeneous one.

Legal and ethical issues (related to the technical aspects of agents):

This relates to such issues as:

  • Authentication: how can it be ensured that an agent is who it says it is, and that it is representing who it claims to be representing? (A minimal sketch of one possible mechanism follows this list.)

  • Secrecy: how can it be ensured that an agent keeps a user's information secret? How do you ensure that third parties cannot read a user's agent and execute it for their own gain?

  • Privacy: how can it be ensured that agents maintain a user's much needed privacy when acting on his behalf?

  • Responsibility which goes with relinquished authority: when a user relinquishes some of his responsibility to one or more software agents (as he would implicitly), he should be (explicitly) aware of the authority that is being transferred to it/them;

  • Ethical issues, such as tidiness (an agent should leave the world as it found it), thrift (an agent should limit its consumption of scarce resources) and vigilance (an agent should not allow client actions with unanticipated results).
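As a concrete illustration of the authentication point above, the following is a minimal sketch of how a receiving agent platform could check that a message really comes from the agent (and thus the user) it claims to come from. It assumes a shared secret agreed out of band; all names are illustrative and do not belong to any particular agent standard.

    # Minimal authentication sketch: the sending agent signs its message with a
    # secret it shares with the receiving platform; the platform recomputes the
    # tag to detect impersonation or tampering. Names are illustrative only.
    import hashlib
    import hmac

    SHARED_SECRET = b"established-out-of-band"

    def sign(message: bytes, secret: bytes) -> str:
        """Tag attached by the user's agent before the message is sent."""
        return hmac.new(secret, message, hashlib.sha256).hexdigest()

    def verify(message: bytes, tag: str, secret: bytes) -> bool:
        """Receiving platform recomputes the tag and compares in constant time."""
        return hmac.compare_digest(sign(message, secret), tag)

    request = b"(ask-one :sender user-agent-17 :content (price ?x book-42))"
    tag = sign(request, SHARED_SECRET)

    print(verify(request, tag, SHARED_SECRET))            # True: authentic request
    print(verify(request + b"!", tag, SHARED_SECRET))     # False: tampering detected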

Enabling, facilitating and managing agent collaboration/multi-agent systems:

A lot of research has to be done into the various aspects of collaborating agents, such as:

  • Interoperability/communication/brokering services: how can brokering/directory type services for locating engines and/or specific services, such as those we saw in chapter four, be provided? (A minimal broker sketch is given after this list.)

  • Inter-Agent co-ordination: this is a major issue in the design of these systems. Co-ordination is essential to enabling groups of agents to solve problems effectively. Co-ordination is also required due to the constraints of resource boundedness and time;

  • Stability, scalability and performance issues: these issues have yet to be acknowledged, let alone tackled, in collaborative agent systems. Although these issues are non-functional, they are crucial nonetheless;

  • Evaluation of collaborative agent systems: this problem is still outstanding. Methods and tests need to be developed to verify and validate the systems, so it can be ensured that they meet their functional specifications, and to check if such things as unanticipated events are handled properly.
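To make the brokering idea from the first bullet more tangible, the following is a minimal, in-memory sketch of a matchmaker-style directory: agents advertise the capabilities they offer, and other agents ask the broker who can handle a given task. The class and method names are purely illustrative and are not taken from any existing agent system.

    # Minimal sketch of a brokering/directory service: providers advertise
    # capabilities, requesters ask who can supply them. Illustrative names only.
    from collections import defaultdict

    class Matchmaker:
        def __init__(self):
            self._registry = defaultdict(list)  # capability -> advertising agents

        def advertise(self, agent: str, capability: str) -> None:
            """An agent registers a service it is able to perform."""
            self._registry[capability].append(agent)

        def recommend(self, capability: str) -> list:
            """Return all agents currently advertising the requested capability."""
            return list(self._registry[capability])

    broker = Matchmaker()
    broker.advertise("weather-agent", "weather-forecast")
    broker.advertise("news-agent", "news-filtering")
    broker.advertise("meteo-agent", "weather-forecast")

    print(broker.recommend("weather-forecast"))  # ['weather-agent', 'meteo-agent']
    print(broker.recommend("stock-quotes"))      # [] - no provider known yet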

Issues related to the User Interface:

Major (research) issues here are [ 90 ]:

  • Determining which learning techniques are preferable for what domains and why. This can be achieved by carrying out many experiments using various machine learning techniques over several domains;

  • Extending the range of applications of interface agents into other innovative areas (such as entertainment);

  • Demonstrating that the knowledge learned with interface agents can be truly used to reduce users' workload, and that users, indeed, want them;

  • Extending interface agents to be able to negotiate with other peer agents.

Miscellaneous technical issues:

There are many other technical issues which will need to be resolved, such as:

  • Legacy systems: techniques and methodologies need to be established for integrating agents and legacy systems;

  • Cash handling: how will the agent pay for services? How can a user ensure that it does not run amok and run up an outrageous bill on the user's behalf?

  • Improving/extending Agent intelligence: the intelligence of agents will continuously need to be improved/extended in all sorts of ways;

  • Improving and extending agent learning techniques: can agent learning lead to instability of its system? How can it be ensured that an agent does not spend (too) much of its time learning, instead of carrying out the tasks it was set up for?

  • Performance issues: what will be the effect of having hundreds, thousands or millions of agents on a network such as the Internet (or a large WAN)?

Chronological overview of expected/predicted developments

The short-term: basic agent-based applications
In the short term, basic agent-based software may be expected to emerge from research, e.g. basic interface agents such as mail filtering or calendar scheduling agents. Basic mobile agent services will also be provided now.

A "threat" in especially this period is that many software producers will claim that their products are agents or agent-based, whereas in reality they are not. In fact, the first manifestations of this are already becoming visible:

"... we are already hearing of 'compression agents' and 'system agents' when 'disk compressors' and 'operating systems' would do respectively, and have done in the past. [ 91 ]"

On the other hand, mainly from the domain of academic research, an opposite trend is starting to become visible as well, namely that of a further diversification and elaboration of (sub-)agent concepts. The origins of this lie in the constant expansion of the agent concept: it already is starting to get too broad to be used in any meaningful way. Therefore logical and workable sub-classes of agents, such as information agents and interface agents, are being stipulated and defined by researchers.

Available agent-applications (i.e. those offered by a significant number of producers/vendors) will allow users to specify a query/request by means of written sentences (which must not be ambiguous). Agents will then search for information with the aid of indices available at the source(s), irrespective of the application that developed the index. Searches can be based on keywords, but concepts may conveniently be used as well. The first mobile agents will also become available now.

Agents that are really used (by a significant number of users) are the well-known wizards. Wizards can be used to guide a user through some procedure (which may be creating a table in a word processor, but they can also be used to launch or set up agents), and can pop up when needed to give a user some advice or hints. Also used in this period are agents for information retrieval, where the user is helped by one or more agents which communicate with the user by means of a personalised user interface.

In this period, setting up agent-based applications is so difficult that only skilled users (such as researchers or software developers) are able to do so. It may be expected that a special branch of companies or organisations will emerge in this period, consisting of professionals who set up agents for others. As time goes by, and agents become more user-friendly to install (or even able to install their software themselves), the need for this profession should disappear again: toward 1998 it is expected that agent-based applications become available that can be set up by end users themselves.

The medium-term: further elaboration and enhancement
In this period more elaborate agent applications are available and used, as more mobile and information agent applications and languages become available. It is also by this time that the outlines of the most important agent-related standards should become clear. The different agent sub-types of the short term will now start to mature, and will be the subject of specialised research and conferences.

The first multi-agent systems, which may be using both mobile and non-mobile agents, and most probably are using a heterogeneous architecture, will be entering the market somewhere around 1998 or 1999. Significant usage of these systems may be expected at the turn of the century. It is also at this time that agents that are able to interact with other agents managed by other applications, are becoming available. Because of their increased usage, agents will probably by this time generate more traffic on the Internet than people do.

Around 1998-1999, agent applications can and will be set up by significant numbers of end-users themselves. Expectations are that a few years later, agents that are able to do this themselves (i.e., a user agent "sees" a need, and "proposes" a solution to its user in the form of a new agent) will become available.

Agent-empowered software that is as effective as a research librarian for content search will be available in 1998 [ 92 ], and may be expected to be used by a significant number of users near the year 2000. Agents that can understand a non-ambiguous, written request will be used in 1998 as well, just like indices that are based on a concept search (such as Oracle's ConText). It will probably not be until the year 2000 that the first agent applications become available that can understand any written request made in normal natural language (interaction with the user is used to resolve ambiguities in these requests).

The long term: agents grow to maturity
Beyond the year 2000, it is very hard to predict well what might happen:

"We may expect to see agents which approximate true 'smartness' in that they can collaborate and learn, in addition to being autonomous in their settings. They ... posses rich negotiation skills and some may demonstrate what may be referred to, arguably, as 'emotions'. ... It is also at this stage society would need to begin to confront some of the legal and ethical issues which are bound to follow the large scale fielding of agent technology. [ 93 ]"

End users may be expected to really start using anthropomorphic user interfaces. Agents will more and more be interacting with agents of other applications, will more or less set themselves up without the help of their user, and will get more powerful and more intelligent.

Users can state requests in normal language, where agents will resolve such problems as ambiguity by making use of user preferences and the user model (the expected date for such agent functionality to be available will at the earliest be in 2005).

The User

General remarks
"Agent-enablement will become a significant programming paradigm, ranking greater in importance than client/server or object orientation. The big difference will lie in increased user focus. Successful implementors will view their products in the context of personal aids, such as assistant, guide, wizard. [ 94 ]" Users are one of the most - if not the most - influential party involved in the developments around agents. However, it may be expected that most users will adopt a rather passive attitude with regard to agents: research and past experiences with other technologies have learned us that substantial user demand of new technologies is always lagging a few years behind the availability of it. So users may be called "passive" in a sense that they will only gradually start to use applications that employ the agent-technique. Moreover, they will not do this because of the fact that these applications use the agent technique, but simply because they find these application more efficient, convenient, faster, more user-friendly, etcetera. They may even find them "smarter", even though they have never heard of such things as intelligent software agents. Not until applications using agents are sweeping the market and users are more familiar with the concept of agents, will the role of users become more active in the sense that they knowingly favour agent-enabled applications over applications that do not use the agent-technique.

Ease of Use
"Software is too hard to use for the majority of people. Until computers become a more natural medium for people... something they can interact with in a more social way, the vast majority of features and technologies will be inaccessible and not widely used. [Our] industry has historically proven more finesse at delivering difficult and challenging technologies, than it has providing these in an approachable way." - a Delphi Process respondent [ 95 ]

In general, "ease of use" (or the lack of it) will be one of the most important issue in the agent-user area. If users do not feel comfortable working with agents, if they find them insecure or unreliable, or if they have to deal with hardware or software problems, agents will never be able to enter the mainstream. The issue of ease of use can be split up into a number of important sub issues:

The User Interface (broadly speaking)
The interface between the user and agents (i.e. agent applications) is a very important factor for success. Future agent user interfaces will have to bridge two gaps: the first is the gap between the user and the computer (in general) and the second is the gap between the computer user and agents:

"the end user first must feel comfortable with computers in general before attempting to get value from an agent-enabled application." a remark made by a respondent [ 96 ]

Special interface agents will have to be used to ensure that computer novices, or even users who have never worked with a computer at all, will be able to operate it and feel comfortable doing so:

"People don't understand what a computer is, and you ask them to work with a state of the art tool. First we must make them feel comfortable with computers." a remark made by a respondent [ 97 ]

A good agent/computer user interface will have to look friendly to the novice user. There are strong debates over the question whether or not anthropomorphic interfaces (i.e. interfaces that use techniques such as animated characters) are a good way of achieving this goal. Some say people like to treat computers as if they were humans, so providing an interface which gives a computer a more human appearance would fit perfectly with this attitude. Others think users may get fed up with anthropomorphic interfaces (e.g. find them too round-about, or too childish), or may be disappointed by the level of intelligence (i.e. by the perceived limitations) of such interfaces. Therefore, user interfaces will not only have to look good (e.g. more "human"), but they will also need to be "intelligent". Intelligence in this context relates to such abilities as being able to understand commands given in normal (i.e. natural) language (preferably with the additional ability to understand ambiguous sentences), or being able to take into consideration the context in which commands are given and by whom [ 98 ].

Security / Reliability
"Users must be comfortable trusting their intelligent agents. It is essential that people feel in control of their lives and surroundings. They must be comfortable with the actions performed for them by autonomous agents, in part through a feeling of understanding, and in part through confidence in the systems. Furthermore, people expect their safety and security to be guaranteed by intelligent agents." from "Intelligent Agents: a Technology and Business Applications analysis

Security and reliability (i.e. predictability) will be an important issue for many users. The rise of multi-agent systems complicates things even further, as it becomes very hard to keep a good overview of a situation in which several layers of agents and all types of agents are involved: how can one be sure that nothing is lost, changed or treated wrongly in a system where multiple kinds of agents need to work together to fulfil a request?

One possibility to offer a secure agent system is to use one common language, such as Telescript. But as has been pointed out earlier, it is very unlikely that all agents will use the same language.

Another complicating factor is the fact that agents are programmed asynchronously; agents are built at different moments in time, so each agent will have its own agenda and skills, which may not be easily compatible with (those of) other agents.

Respondents in a study by P. Janca were asked when agents will be relied on for complete personal information security (by users) [ 99 ]. The given answers (i.e. opinions) varied strongly.

Some thought that complete security could never be accomplished. New and better security techniques will be invented, but so will new "other" techniques which give rise to new security problems.

Others thought it would be possible within ten to twenty-five years. An additional remark made by these respondents was the expectation that it will take quite some time before people really trust agents. On the other hand, people (i.e. users) will have to trust agent security as "more and more information is imposed on us, we will not be able to manage all this by ourselves. We need to define templates and rules for different events, etc., and therefore pass the responsibility at least partially to an agent."

To one of the respondents agent security was sort of a non-issue, as he found that it is something an agent should not be concerned with in the same way an agent should not be concerned about the operating system a user is using: "If the question is "When will agents make my personal data secure?" my answer would be never - the technology would be misapplied, since secure communications technology covers this issue fairly completely now, and is constantly being improved with public encryption, authentication services."

Hardware Issues
The current PC operating system environment, as used by many users, makes it difficult to impossible to capture the type of information needed to measure a user's actions. Without these signals, user interface agents cannot determine when to intervene. A related problem is the non-standard environment. Every PC can be just a little bit different, making standard interface development a challenge.

Available applications
"Ease of use" is tightly coupled to another factor in user acceptation and adoption of agents: the availability of agent applications that the user finds useful, convenient, etcetera. User adoption of agents will not be driven by the agent technique's (cap)abilities, but by agent applications:

"The catalyst will be a few good agent applications controlling data that is important to users. The bar needs to be set and then customers will demand agents." respondent reaction [ 100 ]

Generally speaking, the following major user agent applications (related to user information needs) can be distinguished, each of which has been realised already, or can be realised within a few years [ 101 ]:

Personal assistants: here the agent system treats each user as an individual. As the system gets more and more experience, it will look more and more like a personal assistant. Examples of such personal assistants are Open Sesame! and Microsoft's Bob;

Information management: this relates to search engine improvements. One improvement would be the ability to go beyond the regular search environment. Another improvement would have an agent pre-determine which data sources it would check. A further improvement is the ability to do searches based on context rather than a search based on keywords, and to select data sources based on this context. An example of such an agent is Oracle's ConText which is a natural language processing technology capable of compressing and summarising documents. The way data is compressed may depend on personal taste: the type of data, day of the week, etc. Intelligence may be required to determine how to present such data;

Personal newspaper: a daily personal newspaper is presented to a user. This newspaper includes headlines and summaries of articles for maximum ease of use. The application will track which information the user reads first, and adjust future presentations to match this reading pattern (a small sketch of this idea is given after the last item in this list).

Examples of personal newspapers that are currently being offered are those of the Wall Street Journal, The Times and IBM's InfoSage;

Personal research assistant: here there is an agent (the assistant) which has knowledge of a user's preferences, as well as his or her standing requests for information on certain topics. It periodically scans appropriate databases, and delivers summaries on a scheduled or on-request basis. Eventually, the assistant will both understand - and communicate using - natural language.
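The reading-pattern idea behind the personal newspaper can be sketched very simply: every article the user actually opens raises the weight of its topic, and the next edition is ordered by those weights. The topics and article titles below are made-up illustrations, not examples taken from the services mentioned above.

    # Minimal sketch of relevance feedback for a personal newspaper: reading an
    # article raises the weight of its topic; tomorrow's edition is sorted by
    # those weights. All topics and titles are illustrative.
    from collections import Counter

    interest = Counter()  # topic -> how often the user chose to read it

    def record_read(article: dict) -> None:
        """Called whenever the user opens an article in today's edition."""
        interest[article["topic"]] += 1

    def compose_edition(candidates: list) -> list:
        """Order tomorrow's candidate articles by the user's observed interests."""
        return sorted(candidates, key=lambda a: interest[a["topic"]], reverse=True)

    for article in [
        {"title": "Agents roam the Web", "topic": "technology"},
        {"title": "Another software start-up", "topic": "technology"},
        {"title": "Election results", "topic": "politics"},
    ]:
        record_read(article)

    tomorrow = [
        {"title": "Budget debate", "topic": "politics"},
        {"title": "New browser released", "topic": "technology"},
        {"title": "Cup final tonight", "topic": "sports"},
    ]
    print([a["title"] for a in compose_edition(tomorrow)])
    # ['New browser released', 'Budget debate', 'Cup final tonight']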

Chronological overview of expected/predicted developments

The short term: first agent encounters
By the end of 1996, expectations [ 102 ] are that end users will have somehow heard of, and therefore recognise, the term "agent(s)", even though they may not be able to give even a rough definition or description of it.

One year later, in 1997, it is expected that about a quarter of the then current PC/workstation user base consider "agents" to be personally helpful (although they may be referring to simple wizard-like agent-applications) and will say they (themselves) are using, or have once used, a product or service incorporating agents.

Agent-applications that are available include user-invoked interfaces that enable a dialogue with an agent, and agents that can generate reports themselves at regular intervals or whenever necessary.

Agent-applications that are really used are those that can act as a personal assistant: they can effectively sort incoming mail and filter (electronically available) news articles that match a user's areas of interest.

The medium term: increased user confidence and agent usage
Useful, but still rather limited, interface agents will be available which perform such roles as that of an eager assistant, a WWW guide, memory aid, WWW filter/critic, and which can deliver entertainment. User communication will be by such means as anthropomorphic agent user interfaces (which are expected to become available somewhere around 1998-1999), as

"people love having a social entity to help them with a task. People are willing to pay a premium today for something as simple as the social entities in Bob. People use computers to do many of the things above, and will feel much, much more comfortable with a social entity or character. [ 103 ]" One out of every four users, by this time, will be so confident about agents, that he trusts his agent to navigate the network (Internet) to find candidate products for some purchase. In one study it is predicted by Delphi Process respondents that by the year 2000, these users may even trust their agent to make a purchase (although this probably won't go for such purchases as a new car or a new home) [ 104 ]. However, some of the respondents in this report were sure that users will never let an agent buy goods for them at all.

In the same report, it is predicted/expected that by the year 1999-2000, about 10% of the then current PC/workstation users will consider the following agent aspects to be "solved problems":

  • Ease of use;
  • Security;
  • Privacy;
  • Training and support;
  • Continuity (i.e. an initiator knows his agent traversed the network, and can rely on the reports of results).

About one year later, agent overload (i.e. an agent handles overload by modifying requests and/or ignoring some) will be added to this list.

By the same time, agents that can generate reports themselves at regular intervals, or whenever needed [ 105 ], are really used. Also in widespread use then (i.e. somewhere around the year 2000) is agent-empowered software that is as effective as a newspaper in its ability to write headlines and set document length based on the expected importance of an article for the user.

The long term: further agent confidence and task delegation?
How developments will continue in this period is rather uncertain, and therefore hard to predict precisely. What may be expected is a further increase of users' confidence in agents. For instance, it is predicted that in over ten years' time, a quarter of the then current PC/workstation user base will allow an/its agent to anticipate their needs/desires, find candidate products, and make the purchase [ 106 ].

Agent-applications that will start to be used by large numbers of users, are anthropomorphic user interfaces (this is predicted to happen somewhere near the year 2001).

The Suppliers & The Developers
This section is about suppliers and developers of agents and agent-based applications. However, in many places in this section, both these parties can be interpreted in a (much) broader sense: i.e., "suppliers" can also relate to suppliers of (information) services on the Internet using agents in some way for their service [ 107 ], and "developers" can also be related to researchers investigating the agent technique and related areas (such as that of Artificial Intelligence).

Two very important questions with regard to suppliers are who is going to offer agents and why/with what reasons they are doing this. For developers/researchers a very important question is what (functionality) agents will be offering. However, things will get very complicated in the future as both suppliers and developers can play the other's role as well; e.g. a supplier of agent software can do research into agents as well, which it can then use in all sorts of agent applications. And: suppliers and developers can be users of agents too!

It is probably not surprising that this makes it (very) difficult to predict how in the future these separate roles and aims might be intertwined.

Who will be developing agents and how will they be offered?
At this moment, many suppliers/producers use the one-way (business) model of Netscape/Java to reach consumers/users. Especially manufacturing and distribution companies, which produce a tangible product, are more likely to use this producer/distribution model.

At the same time, there is a trend for organisations whose main product is to provide service(s), i.e. where value is added through transaction processing (such as information (service) providers who are active on the Internet), to change their role from that of a rather passive supplier to a more active and more elaborate one. These organisations are switching from the production/distribution model to a consumer/push-pull model, and are very interested in applications and techniques that enable them to reach all those users whose needs they can cater for, and to improve and enhance the ways in which they offer their services.

Concepts such as the three layer model (as seen in chapter four) and software agents can help to offer/enable all of this. Non-commercial intermediaries, such as libraries or the government (and all its services and organisations), could use them to extend their services by providing these to a much larger user group (that of the Internet or an Intranet [ 108 ]) and by tapping into all the information sources the Internet provides access to. They can also help users select the right sources to match their needs (in the same way as these intermediaries have been doing for conventional media such as books and articles). Commercial intermediaries (or information brokers) can offer these services to this audience too, but in a more elaborate form and with various forms of support.

As said, the producer/distribution model is currently in vogue among many organisations "doing business on the Internet". One of the most important motives for them to do so is probably the fact that this allows producers to use the same techniques and the same materials (such as advertisements) on the Internet as they use, and have used, for other media (such as newspapers or television channels). The problem with using this simple model is that the user (i.e. consumer) must track down and access the content (i.e. a provider's Internet service). And it may be expected that future Internet users are unlikely to spend much time on doing this.

Although consumer/push-pull strategies have not yet proven successful on the Internet, they might be turned into profitable, successful strategies if certain principles are followed. One model - called the software economy - has been sketched [ 109 ]. In this "economy", the basic economic unit is a transaction, which is of a special kind: barter. Thus, the basic mechanism for making money becomes the transaction fee or commission. Another major difference between this model and the previous (current) one is that business is initiated predominantly by users/consumers:

"Someone somewhere wants to buy information, service, or a product. They enter into a community of buyers and sellers, e.g. a market, and obtain the service or product by electronic bartering. The intriguing prospect in this model, however, is the idea of a software agent - a program that roams the telesphere (cyberspace) looking for buyers and sellers (who are also software agents). Whenever two or more software agents meet in the telesphere, they barter for services and products, and then report back to their human owners." from "Stepping Out" [ 110 ]

Various (electronic) publishers, but also others, are doing a lot of research into all (new) sorts of electronic publishing that make use of the previously described principle.

An example of this is the InfoMarket project of IBM, where one of the concepts being researched is that of superdistribution. This technique makes it possible to package documents in such a way that they can be transported from, say, the author or the publisher to an (in theory) unlimited number of users without any copyright infringement. In fact, the more the document gets handed out to others (i.e. friends, colleagues, peers), the more the author or the publisher can earn with it.
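The core of the superdistribution idea - the document travels freely, but every access is metered and credited to the rights holder - can be sketched as follows. This is an illustrative toy only, not a description of the actual mechanism used in IBM's InfoMarket; the class, fee and reader names are assumptions.

    # Minimal sketch of superdistribution: copies may be passed around freely,
    # but each read is metered and credited to the publisher, so wider
    # redistribution means more revenue. Illustrative toy, not IBM's mechanism.
    class SuperdistributedDocument:
        def __init__(self, title: str, publisher: str, fee_per_read: float):
            self.title = title
            self.publisher = publisher
            self.fee_per_read = fee_per_read
            self.revenue = 0.0

        def read(self, reader: str) -> str:
            """Unlock the content for one reader and credit the publisher."""
            self.revenue += self.fee_per_read
            return f"{reader} reads '{self.title}'"

    doc = SuperdistributedDocument("Agent Trends", publisher="Example Press", fee_per_read=0.10)

    # The document is passed from colleague to colleague; every read still pays.
    for reader in ["alice", "bob (copy from alice)", "carol (copy from bob)"]:
        print(doc.read(reader))

    print(f"owed to {doc.publisher}: {doc.revenue:.2f}")  # owed to Example Press: 0.30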

What kinds of agents will be offered?
"Success in the Info Age [ 111 ] means learning to use technology to individualise and personalise services and products. In other words, technology which increases the human touch will succeed. Technology which plunges humankind deeper into machine-like efficiency will fail." from "Living in real time ..." [ 112 ]

Information service providers (but the same goes for just about any company or organisation offering services on the Internet) will soon have to adjust to the change described in the quotation above. Soon a situation will emerge in which there are so many suppliers offering seemingly identical products and/or services, that users (i.e. buyers, consumers, etc.) will need to be attracted by other means than sharp prices or nice advertisements. It has every appearance that delivered services (such as product information, user support, the availability of some kind of help desk, etcetera) will be that decisive factor. Agents can be used to offer individualised services/information (i.e. in the form that is suitable for a specific user), but also to provide these at any time that is suitable for the user (regardless of his location). They can be used for such purposes as have been described in earlier sections of this chapter, and for various functions in middle layer activities.

Here are a few examples of such possible future services:

  • Agents can be used to deliver tailor-made (personalised) services, e.g. (in the case of the government) help a person to find the right information about some juridical problem he or she has, and present it in the most comprehensive way (based on information provided by the user agent);

  • Publishers can use agents as a tool to pre-select information (such as news articles) for users. Because of their experience and authority, groups of users will keep on relying on publishers to pre-select information, just like they do now for the publication of magazines and professional literature (probably even regardless of whether or not a model like the three layer model is used on the Internet);

  • Small/individual suppliers (such as a real-estate agent) can use agents to provide personalised information about their service(s) or product(s), and can save money as they do not have to send out printed brochures which can only provide generic information.

No matter what will be offered in the future, it is vitally important that developers of agents and agent-based applications, especially, be real "ambassadors" for agents, in that they give good, reliable information about them: they should give a realistic representation of the possibilities of the agent-technique, to prevent overoptimistic expectations among (potential) users of agents. Neither users, nor suppliers, nor developers would benefit from such expectations.

Why/with what reasons will agents be developed and/or offered?

The reasons why organisations or companies develop agents can be very diverse. However, they all have their (minor or major) influences on agents, and the functionality they have to offer.

For commercial developers, there are probably three important reasons to develop agents:

1. First and foremost, agents are developed because they can and will be profitable. Judging by their necessity as we saw in chapters one and four, and adding to this the functionality they can offer (as we saw in chapters two, three and four), it may safely be concluded that money can be made with them. And if you, as a company, do not develop them or do research into applications for them, then your rivals surely will. And just as has been said about the Internet: it is better not to wait too long before doing so;

2. Suppliers and intermediaries/brokers can use agents to more effectively (i.e. better) reach their target groups (perhaps even target groups that until now could not, or not well enough be reached with the aid of traditional media). So, they have every reason to develop good (user, intermediary and supplier) agents that are able to do this;

3. The Internet makes it possible to reach a huge public at relatively small risk and cost. Agents can extend this advantage by making it possible to provide customised services night and day. Money can be saved in various ways, for instance because a software agent is much cheaper than a human agent.

An important, rather general reason to develop agents, for any kind of developer, is that they are able to perform many tasks. Better still, they can perform them more intelligently than conventional programs can, and much more. This saves time and money, provides better results, etcetera. Many parties can profit from this: not only IT-freaks, but many other groups as well, e.g. students, managers, and the average civilian [ 113 ].

Apart from reasons to develop and/or offer agents, there are also good reasons - for some - not to do so. Usage of the three layer model, and of agents, leads to a very transparent market, as users - by means of the middle layer and/or agents - can get a detailed overview of all (or at least many) of the suppliers in a specific market, of their prices, the service they offer, etcetera. Therefore, a very justified question to ask is whether suppliers want such a transparent market, as market obscurity can be an advantage to them (it makes it hard(er) for users to get an idea of whether or not there are better or cheaper alternatives). So, besides the reasons why agents are developed and/or offered, the reasons why some will not want them to be developed or offered will play a role as well. The influence of this, however, will not be that big, as it is very unlikely that parties adhering to this opinion will be able to exert a strong influence.

The Government
The influence of governments, their departments, and various services on the developments, at least in the foreseeable future, may be expected to be of a rather indirect nature in most cases (as will be seen in this section). In some cases, e.g. where juridical aspects of agents need to be addressed, this influence will have to be more than just superficial. Besides, there are quite a few important Internet-related governmental issues that can greatly benefit from the functionality offered by agents: the three layer model we saw in chapter four is an example of something that can very well be used in government policies or plans regarding "The Information Super-Highway".

We will step through the list of important governmental Internet-issues, and indicate in which way(s) agents and/or the three layer model can contribute to solving these.

1. The government, maybe through one of its departments, should ensure that developments regarding agents and the Internet go in the right direction. What that right direction should be is not very easy to determine. In fact, in the United States the government is more and more delegating this kind of decision making to third parties (e.g. groups of large IT-companies), as it thinks these are better suited to the task and/or because this saves the government lots of money. However, it can be said for sure that the right developments are not very likely to emerge from the action of market forces alone, as the largest and most powerful companies would dominate the others:

"The key issue for the emergence of new markets is the need for a new regulatory environment allowing full competition. This will be a prerequisite for mobilising the private capital necessary for innovation, growth and development. In order to function properly, the new market requires that all actors are equipped to participate successfully, or at least that they do not start with significant handicaps. All should be able to operate according to clear rules, within a single, fair and competitive framework. [ 114 ]"

Issues, such as the development of open standards for agents (or for Internet services), are of such great importance, that governments should (at least) supervise the whole process:

"In an efficient and expanding information infrastructure, ... components should work together. Assembling the various pieces of this complex system to meet the challenge of interoperability would be impossible without clear conventions. Standards are such conventions. [ 115 ]"

That said, this should not be supervision of the specific ins and outs of a standard. The government should engage in a steering role to ensure that standards are and remain really open, and do not become standards that are exacted or enforced by one party (leading to a stifling monopoly):

"Most people can agree that an ideal information infrastructure should have such qualities as extended interoperability, broad accessibility, and support for broad participation. ... Progress toward that ideal is more likely if the government can set an example with its own services and help enable a consensus on a vision of the future by removing barriers to its realisation. ... However, the government's role is as a partner and participant with the private sector, exercising its regulatory authority with restraint. [ 116 ]"

2. The previous quote mentions another reason why the government should be actively using the Internet and related services and techniques: governments can set an example for others and encourage them (other parties or even complete sectors) to follow it:

"Initiatives taking the form of experimental applications are the most effective means of addressing the slow take-off of demand and supply. They have a demonstration function which would help to promote their wider use; they provide an early test bed for suppliers to fine-tune applications to customer requirements, and they can stimulate advanced users ... [ 117 ]"

In fact agents can be of general use to (the) government(s). Governments can set an example by offering useful and convenient services on the Internet. They can do this by employing agents (we have seen many of such possibilities in the previous sections) and by adopting, actively using and maybe even promoting some form of the three layer model. After all, governments are active in all three layers/roles: as suppliers (of many services and all kinds of information), as intermediaries and as a user of services and information (e.g. reports) of others.

3. Governments are already addressing various juridical aspects of the Internet in general (such as copyright), but specific ones related to agents will need to be addressed as well. For instance, who is responsible for the actions of an agent? The user, one might say, but it is almost impossible for the average user to follow the actions of an agent all the time that it is active on his behalf. If it isn't the user, who is?

These matters will become pressing in about five years. And as it always takes quite a while before laws/rules are passed and ready to be used in legal practice, it may be a good idea for governments to address this problem as soon as possible, for instance by asking advice from all sorts of experts and institutions that operate in this field.

Related to this issue, and to that of point 1 in this section, is the need for security and encryption techniques to make sure that agents and the Internet can be used without endangering the secrecy of data or people's privacy.

4. Not only, as seen in point 2, can agents & the Internet be of great use to the government, they are (not surprisingly) of great use to many others too. This makes agents & the Internet in general something the government should be interested in. The Internet "industry" already is a growing economic sector, providing work to many. It may be expected that the agent industry (although on a smaller scale compared to the Internet) will also be a sector with many (economic) opportunities:

"nothing will happen automatically. We have to act to ensure that these jobs are created [...], and soon. [ 118 ]"

5. Agents and the three layer model can be introduced as a logical continuation of current Internet policies. The first aim of the government was to get people, companies and organisations to discover, make use of and perform research on the Internet. A second aim now should be to make use of the Internet easier, more efficient, more user-friendly, more profitable, etcetera:

"[...] Once people are comfortable finding information on the Internet, they will discover that they want much more: they will want help in locating reliable, useful information; they will want to discuss it with others, [...] generate it, and so on. [ 119 ]"

These are all things software agents (preferably combined with a three layer structure) can help to offer.

6. The Internet can be used for educational purposes (in a broad sense): "It is no longer a question of 'whether' it will happen - it is a question of 'how soon'". In chapter five it has been described why education will play an increasingly important role with regard to the aim that everyone (i.e. every civilian) should be able to use the Internet/NII/"Information Super-Highway".

The exact future applications of agents in education are not easy to foresee at this moment. But the functionality that can be offered at this moment, already looks very promising. For instance, agents can be used in education to gather all sorts of information, and to offer customised teaching programmes to scholars and students. They also can reduce costs, save time and improve the learning process:

"The Hudson Institute, headquartered in Indianapolis, Indiana, reviewed 20 years of research on computer-based instruction and found that students learn 30 percent more in 40 percent less time and at 30 percent less cost when using computer-aided instruction. Who says automated delivery isn't as good as delivery in the flesh?" from "Living in real time" [ 120 ]

The Internet & the WWW
As the most important application environment, the Internet and its services need to be taken into account as well when making predictions. Furthermore, more and more people are getting familiar with the Internet (in general), and more and more are making their first trips on it:

"Whether they have actually logged on to the Internet or not, Americans are optimistic about the new medium ... Eighty percent of those on-line, and 54 percent of those not on-line (59 percent of all respondents), say they believe the information on the Internet is useful." from Business Wire

However, after the initial period of introduction, many new users run into several problems. One of the most important is having difficulties finding (specific) information:

" ... there is a big difference between Americans' ideas about the value of information in Cyberspace, and their abilities to access that information: 54 percent of those on-line report they spend most of their time searching for information rather than finding it. And of those not on-line, 46 percent [!] believe that if they were on-line, they'd spend more time in search of information than actually finding it." from Business Wire

Contrary to what may be expected, this does not have a negative influence on the perception of the usefulness of the Internet as a source of information:

" ... Whether they are on-line or not, the Americans surveyed do not view the Internet as something that would complicate their lives. Further, they report that the new electronic medium with its oceans of information will neither complicate their lives nor does the prospect of going on-line make them feel they would be isolated from others.

Of all survey respondents, 63 percent say the Internet does not complicate life for them. (87 percent of those on-line, and 57 percent of those not on-line agreed.) In fact, 58 percent of all those surveyed actually thought the Internet would simplify their lives."
from Business Wire

Agents can be of great help to users that are searching for (specific) information. In fact, from the preceding it can be concluded that it is of vital importance that this functionality is offered.

Moreover, agents can not only be used as information gatherers, but also to provide suitable (and customised) interfaces to computers and the Internet.

Another pressing problem, which will exist at least in the forthcoming few years (i.e. the short term), is resource strain. The increased use of the Internet (i.e. the demand for bandwidth) is outrunning its capacity.

It is hard to predict whether this is only a "temporary" problem (i.e. it will end within a few years). It all depends on whether or not certain parties are willing to invest in more and faster network connections [ 121 ].

Agents play a strange role here. On the one hand, they can help to reduce the waste of bandwidth (e.g. by performing searches more efficiently); on the other hand, they will increase the usage of bandwidth, as the user-friendliness and convenience (e.g. efficiency, speed) they can offer will attract more users and lead to an increased use of the Internet as an information tool. This latter development is likely to eclipse the former. However, agents cannot be blamed for that.

Probably within a year, safe payment methods will be available which make it possible to easily make many small payments on the Internet [ 122 ]. This will strongly stimulate the demand for agents and agent-enabled applications, as performing information searches inefficiently does not only cost you time but also money. Weighing the (value of) offered services and information of the numerous suppliers against the money that has to be paid for (retrieving) it will be a task that is too complicated for humans to do all by themselves. Instead, in the medium term, they will farm out this task to agents. (See chapter four for more about this.)

Another medium term development may be a further rise in the number of Intranets. Intranets are easy to manage (also financially), they are well suited for multi-media applications, can act as a gateway to the Internet, and are basically secure (compared to the rather insecure Internet). For the rest, they have all the good qualities of the Internet (e.g. openness, robustness).

Intranets enable Internet providers to offer more differentiated Internet services. For instance, they can then offer cheap, but slow(er), services to those who favour low prices over fast connections, and offer faster and more elaborate services at higher rates. And they offer many large organisations and companies the means to connect their various offices and employees [ 123 ].

Fortunately, agents can be active on Intranets as well as on the Internet. And again, agents will probably be needed to weigh which connections to use: cheaper and slower ones, or more expensive but faster ones. This can depend on the nature and urgency of the task that has to be performed, the time of day at which it is performed, etcetera. Soon, this too may be expected to become a task that is too complicated for humans to do themselves.
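A minimal sketch of such a weighing rule is given below; the urgency scale, the assumed off-peak hours and the connection labels are illustrative assumptions only, not a description of any existing agent.

    # Illustrative sketch only: a simple rule a (hypothetical) agent might use
    # to pick a connection. The urgency scale, the assumed off-peak hours and
    # the connection labels are invented for this example.

    def choose_connection(urgency: float, hour_of_day: int) -> str:
        # urgency: 0.0 means the task can wait, 1.0 means it is needed at once.
        off_peak = hour_of_day < 7 or hour_of_day >= 22   # assumed cheap period
        if urgency >= 0.8:
            return "fast, expensive connection"        # the deadline outweighs the price
        if off_peak:
            return "fast connection at off-peak rate"  # speed happens to be cheap right now
        return "slow, cheap connection"                # no hurry, so save money

    print(choose_connection(urgency=0.9, hour_of_day=14))   # fast, expensive connection
    print(choose_connection(urgency=0.2, hour_of_day=23))   # fast connection at off-peak rate
    print(choose_connection(urgency=0.2, hour_of_day=14))   # slow, cheap connection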

Summary
Predicting how agents will develop in the future is not an easy task at all; not only because the agent-technique is still in its early years, but also because most of the factors and parties that are involved in the developments around it mutually influence each other. This makes it quite impossible to predict now what the state of affairs will be in - say - five years, and how the "environment" (e.g. users, suppliers, the Internet, computer technology) will have responded to the agent-technique and to many other developments (including its own).

The ultimate test of agents' success will be their acceptance and (mass) usage by users. The road to this success is most likely to be laid by developers and suppliers. Apart from them, it may be expected that many commercial companies and organisations will join in as well, as there are many interesting opportunities for them. Agents will enable all of them to offer personalised and "smart" services, delivered around the clock and (probably) at low(er) prices.

However, there are a few important points that need to be settled before this can really be done well. Solid standards need to be established for such things as the agent communication language that is used, some sort of list of standard agent queries and responses needs to be drawn up, etcetera. Furthermore, rules and possibly even laws are needed to regulate (unwanted) agent behaviour and to be able to deal with various (future) legal issues (e.g. who is responsible for an agent's actions?). Seeing that these standards emerge and that the juridical issues are dealt with is mainly a task for the most important "players" in the agent area (e.g. companies, researchers). The role of the government in all of this will be mostly supportive, but - because the government is also a party that can greatly benefit from agents in many ways - it can be an active one as well.
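By way of illustration, the sketch below shows what a standardised agent query and its response might look like, loosely in the spirit of performative-based agent communication languages such as KQML. The field names and values are assumptions chosen for this example; they are not the standard whose establishment is argued for here.

    # A minimal sketch of what a standardised agent query and response might look
    # like, loosely in the spirit of performative-based agent communication
    # languages such as KQML. The field names and values are assumptions chosen
    # for illustration; they do not describe an existing or proposed standard.

    query = {
        "performative": "ask",                 # what the sender wants done
        "sender": "user-agent-017",            # hypothetical agent names
        "receiver": "middle-layer-agent-3",
        "language": "keyword-list",            # how the content is expressed
        "content": ["intelligent agents", "standards"],
        "reply-with": "q-42",                  # tag so the answer can be matched
    }

    response = {
        "performative": "tell",
        "sender": "middle-layer-agent-3",
        "receiver": "user-agent-017",
        "in-reply-to": "q-42",
        "content": ["http://example.org/agent-standards.html"],  # hypothetical result
    }

    # With fixed fields like these, any two agents that follow the convention can
    # exchange queries and answers without knowing each other's internal workings.
    print(response["in-reply-to"] == query["reply-with"])   # True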

Chapter 7: Concluding Remarks

Intelligent Software Agents have been around for a few years now. But even though this technique is still young, it looks promising already. Promising, but also rather vague and a bit obscure to many. This thesis' aim was - and is - to provide an overview of what agents are offering now and what they are expected to offer in the future. For that purpose, practical examples have been given to indicate what has already been accomplished. A model was outlined which can be used to extend, enhance and amplify the functionality (individual) agents can offer. And trends and developments from past and present have been described, and future developments have been outlined.

One of the conclusions that can be drawn from these trends and developments is that users will be the ultimate test of agents' success. Users will also (albeit indirectly) drive agents' development; that is something that seems to be certain. What is uncertain is whether users will discover, use and adopt agents all by themselves, or whether they will just start to use them because they are (getting) incorporated into a majority of applications. Users may discover more or less on their own how handy, user-friendly and convenient agents are (or how they are not), just as many users have discovered or are discovering the pros and cons of the Internet and the World Wide Web. But it may just as well go as it did in the case of Operating Systems and GUIs, where the companies with the biggest market share have more or less imposed the usage of certain systems and software.

From the current situation it cannot be easily deduced which path future developments will follow. There is no massive supply of agents or agent-based applications yet, but what can be seen is that large software and hardware companies, such as IBM, Microsoft and Sun Microsystems, are busy studying and developing agents (or agent-like techniques) and applications. Initial user reactions to the first agent applications (not necessarily provided by these large companies) may be called promising: applications such as wizards (although these are not true agents, they are a good predecessor of them) and search-engines (which heavily employ all sorts of search agents, or agent-like variants of these) are eagerly used and viewed as (very) positive, sometimes even as a real relief. Also strongly gaining in popularity are personalised newspapers and search agents that continuously scan Usenet articles (sometimes even the entire Internet or the WWW) looking for information that matches certain keywords or topics [ 124 ].

And this only seems to be the beginning, as the agent-technique can be used in many more ways. The growing popularity of the Internet, but also the problems many people encounter when searching for or offering information or services on it, will only increase the number of possible applications or application areas: the Internet is an ideal environment for agents, as they are (or can be) well adapted to its uncertainty, and are better [ 125 ] at dealing with the Internet's complexity and extensiveness. In the future, agents should also be able to relieve humans of many other tasks, both mundane and more complicated ones (i.e. ones which require more "intelligence").

To get to this stage, however, some important obstacles need to be tackled first. For example: one of the interesting and powerful aspects of agents will be their ability to communicate with other agents, other applications and - of course - with humans. To do this, good and powerful interfaces and communication languages (i.e. protocols) have to be developed. Standards could be of great help here, but it also takes quite some time (at least some years) before these are drawn up. As much as they will help speed up developments from that moment on, the lack of them is likely to slow down developments up till then.

Other important issues that have not, or only partially been addressed and tackled, are such things as security, (user) privacy, means to accomplish real intelligent agent behaviour, and many ethical and juridical issues.

My expectation is that, within the foreseeable future (i.e. within five years), enough of these issues will have been sufficiently dealt with [ 126 ]. The situation for agents can, in a way, be compared to that in the area of Artificial Intelligence in general: critics have been, and still are, saying that it is unclear what AI exactly is and what its aims are, and that AI researchers are not able to come up with many concrete techniques or practical (usually meaning: profitable) applications. These critics seem to pass over the fact that, although a number of concepts are still rather vague or lack a clear definition, and although a lot of pieces are still missing from its puzzle, AI has managed to make impressive achievements: concepts and techniques like fuzzy logic and neural networks have been used and incorporated into many applications.

At this moment, agents seem to have become the critics' latest "moving target". Agents are being incorporated into future doom scenarios, where they are used (for instance by "Big Brother") to spy on Internet users, and where they turn people into solitary creatures that live their lives inside their own little virtual reality. Agents (in this view) are the latest hype and - as a technique - do not have much to offer.

As was said at the beginning: the agent-technique is still very young. Its 'growing up' takes time, and it will take a lot of trial and error, and a lot of experimenting, to make it mature. This is exactly the stage we are at now, so agents cannot be expected to be advanced and (nearly) perfect already. This paper has described just how advanced and "perfect" agents are at this moment, and how they are expected to mature in the future. Developments may not have got "there" yet, but they certainly have made enough progress to make agents more than just a hype.

Statement reviews
In chapter one, two statements were formulated. Let us now see how these statements - a claim and a prediction - have turned out [ 127 ].

The claim that was made with regard to the first part of this paper consisted of two parts. The first part was:

"Intelligent Software Agents make up a promising solution for the current (threat of an) information overkill on the Internet."

Judging from the information that we have seen in chapters two and three, and also judging from published research reports, new product announcements and articles in the media, it seems safe to conclude that agents are starting to lift off, and are judged by many as valuable, promising and useful. Numerous agent-like as well as real agent-enabled applications are available on the Internet (albeit often as test or beta versions). These are already able to offer a broad range of functions, which make it possible to perform all sorts of tasks on the Internet (some of which were not feasible in the past), and/or support users while doing them.

There are only a few objections that can be raised against the claim that agents "make up a promising solution" for the information overkill on the Internet. The objections that can be made concern the lack of standards with regard to vital agent aspects (such as the communication language and the architecture that will be used) and the vagueness of some of the agents' aspects (as seen in chapter two). While these are indeed valid objections, none of them really constitutes an insurmountable obstacle for the further development of the agent-technique as a whole, and of agent-enabled applications in particular.

The second part of the claim elaborated on the first part:

"The functionality of agents can be maximally utilised when they are employed in the (future) three layer structure of the Internet."

The current structure of the Internet seems to be missing something. Users complain that they are increasingly unable to find the information or services they are looking for. Suppliers complain that it is getting increasingly difficult to reach users, let alone the right ones. Both seem to find "it's a jungle out there". A worrying development, also for governments and many others who want the Internet (and all the information and services that are available through it) to be easily accessible and operable for all. What many seem to want, either implicitly (e.g. by stating that some sort of intermediary services are needed) or explicitly, is that a third party [ 128 ] or layer be added to the Internet. This layer or party will try to bring supply (i.e. suppliers) and demand (i.e. users) together in the best possible way. The three layer model, as seen in chapter four, is a way in which this can be accomplished.

So, adding a third layer or party to the Internet seems to be very promising and a way of offering new and powerful services to all on the Internet. But does it lead to agents being "maximally utilised"? First and foremost: it does not mean that agents have little to offer if they are not employed in a three layer structure for the Internet. Individual agents (or agent systems) are capable of doing many things, even when not employed in a three layer structure. But some of the offered functionality can be provided more efficiently, and probably more quickly or at lower cost, when the three layer structure is used (as was shown in chapter four). Moreover, the structure will enable tasks that a single agent is incapable of doing (well, or at all), such as finding information within a foreseeable period of time on (ideally) the whole Internet.

Putting the conclusions and remarks about the two sub-statements together, it can safely be concluded that agents, either individually or (preferably) employed in the three layer structure, have the potential to become a valuable tool in the (Internet's) information society.

With regard to the trends and developments of the second part of this paper, the following prediction was stated:

"Agents will be a highly necessary tool in the process of information supply and demand. However, agents will not yet be able to replace skilled human information intermediaries. In the forthcoming years their role will be that of a valuable personal assistant that can support all kinds of people with their information activities."

In the previous section it has been shown that agents are able to contribute in many ways to improving "the process of information supply and demand" (e.g. as intermediary agents). The question now is: are they better at doing this than, say, a human information broker?

When I started writing this paper, i.e. when I formulated this prediction, I assumed agents were not - and would not be - able to replace human intermediaries (at least not in the next three to five years). Now, lots of information, six chapters, and five months later, I would say that this assumption was more or less correct. "More or less" because it paints the future situation in darker colours than necessary: agents will not (yet) be able to replace skilled human information intermediaries in all areas. There are tasks that are so complicated (in the broadest sense) that they cannot be done by agents (yet, or maybe not at all). But there are still numerous other tasks that agents are very well capable of doing. What's more, there are tasks that agents will (soon) be better at than their human counterparts (such as performing massive information searches on the Internet, which agents can do faster and twenty-four hours a day).

So, agents will be 'nothing more' than "a valuable personal assistant" in some cases, but they will also be (or become) invaluable in other ones. And there will be cases that humans and agents are (more or less) equally good at. For instance, when a choice has to be made between a human and an electronic intermediary, the decision which of the two to approach (i.e. 'use') will depend on such factors as costs/prices and the additional services that can be delivered.

More generally, it will probably come down to a choice between doing it yourself (which leaves you in control, but may lead to a task being done inefficiently, incompletely or more expensively) and trusting agents to do it for you (with all the (dis)advantages we have seen in this paper).

Author
Björn Hermans is currently working as an Internet Application Engineer at Cap Gemini B.V. in the Netherlands. This paper originally appeared as his thesis for his studies into Language & Artificial Intelligence at Tilburg University. Hermans' e-mail address is hermans@hermans.org and his Web site can be found at http://www.hermans.org


Acknowledgements

There are many persons that have contributed to the realisation of this paper, and I am very grateful to all those who did.

There are a few persons that I would especially like to thank: Jan de Vuijst (for advising me, and for supporting me with the realisation of this thesis), Peter Janca, Leslie Daigle and Dan Kuokka (for the valuable information they sent me), and Jeff Bezemer (for his many valuable remarks).

Bibliography: Information sources

Literature

D. D'Aloisi and V. Giannini, 1995. The Info Agent: an Interface for Supporting Users in Intelligent Retrieval, (November).

The High-Level Group on the Information Society, 1994. Recommendations to the European Council - Europe and the global information society (The Bangemann Report). Brussels, (May).

L. Daigle, 1995. Position Paper. ACM SigComm'95 - MiddleWare Workshop (April).

Daigle, Deutsch, Heelan, Alpaugh, and Maclachlan, 1995. Uniform Resource Agents (URAs). Internet-Draft (November).

O. Etzioni and D. S. Weld, 1994. A Softbot-Based Interface to the Internet, Communications of the ACM, vol. 37, no. 7 (July), pp 72-76.

O. Etzioni and D. S. Weld, 1995. Intelligent Agents on the Internet - Fact, Fiction, and Forecast, IEEE Expert, no. 4, pp. 44-49, (August).

R. Fikes, R. Engelmore, A. Farquhar, and W. Pratt, 1995. Network-Based Information Brokers. Knowledge System Laboratory, Stanford University.

Gilbert, Aparicio, et al. The Role of Intelligent Agents in the Information Infrastructure. IBM, U. S.

IITA, Information Infrastructure Technology and Applications, 1993. Report of the IITA Task Group High Performance Computing, Communications and Information Technology Subcommittee.

P. Janca, 1995. Pragmatic Application of Information Agents. BIS Strategic Decisions, Norwell, U. S., (May).

Ted G. Lewis, 1995-1996. SpinDoctor WWW pages.

National Research Council, 1994. Realizing the Information Future - The Internet and Beyond. Washington D. C.

H. S. Nwana, 1996. Software Agents: An Overview. Intelligent Systems Research, BT Laboratories, Ipswich, U. K.

P. Resnick, R. Zeckhauser and C. Avery, 1995. Roles for Electronic Brokers, Cambridge, Mass.: MIT.

M. Rohs, 1995. WWW-Unterstützung durch intelligente Agenten. Elaborated version of a presentation given as part of the Proseminar "World-Wide-Web", Fachgebiet Verteilte Systeme des Fachbereichs Informatik der TH Darmstadt.

SRI International. Exploring the World Wide Web Population's Other Half, (June).

M. Wooldridge and N. R. Jennings, 1995. Intelligent Agents: Theory and Practice, (January).

R. H. Zakon, 1996. Hobbes' Internet Timeline v2.3a.

Information Sources on the Internet
The @gency: A WWW page by Serge Stinckwich, with some agent definitions, a list of agent projects and laboratories, and links to agent pages and other agent-related Internet resources.

Agent Info: A WWW page containing a substantial bibliography on and Web Links related to Interface Agents. It does provide some information on agents in general as well.

Agent Oriented Bibliography: Note that as this project is at beta stage, response times might be slow and the output is not yet perfect. Any new submissions are warmly welcomed.

Artificial Intelligence FAQ: Mark Kantrowitz' Artificial Intelligence Frequently Asked Questions contains information about AI resources on the Internet, AI Associations and Journals, answers to some of the most frequently asked questions about AI, and much more.

Global Intelligence Ideas for Computers: A WWW page by Eric Vereerstraeten about "assistants or agents [that] are appearing in new programs, [and that] are now wandering around the web to get you informed of what is going on in the world". It tries to give an impression of what the next steps in the development of these agents will be.

Intelligent Software Agents: These pages, by Ralph Becket, are intended as a repository for information about research into fields of AI concerning intelligent software agents.

Intelligent Software Agents: This is an extensive list that subdivides the various types of intelligent software agents into a number of comprehensive categories. Per category, organisations, groups, projects and (miscellaneous) resources are listed. The information is maintained by Sverker Janson.

Personal agents: A walk on the client side: A research paper by Sharp Laboratories. It outlines "the role of agent software in personal electronics in mediating between the individual user and the available services" and it projects "a likely sequence in which personal agent-based products will be successful". Other subjects that are discussed are "various standardisation and interoperability issues affecting the practicality of agents in this role".

Project Aristotle: Automated Categorization of Web Resources: This is "a clearinghouse of projects, research, products and services that are investigating or which demonstrate the automated categorization, classification or organization of Web resources. A working bibliography of key and significant reports, papers and articles, is also provided. Projects and associated publications have been arranged by the name of the university, corporation, or other organization, with which the principal investigator of a project is affiliated". It is compiled and maintained by Gerry McKiernan.

SIFT: SIFT is an abbreviation of "Stanford Information Filtering Tool", and it is a personalised Net information filtering service. "Everyday SIFT gathers tens of thousands of new articles appearing in USENET News groups, filters them against topics specified by you, and prepares all hits into a single web page for you." SIFT is a free service, provided as a part of the Stanford Digital Library Project.

The Software Agents Mailing List FAQ: A WWW page, maintained by Marc Belgrave, containing Frequently Asked Questions about this mailing list. Questions such as "how do I join the mailing list?", but also "what is a software agent?" and "where can I find technical papers and proceedings about agents?" are answered in this document.

UMBC AgentWeb: An information service of the UMBC's Laboratory for Advanced Information Technology, maintained by Tim Finin. It contains information and resources about intelligent information agents, intentional agents, software agents, softbots, knowbots, infobots, etcetera.


Notes
82. Note that whenever in this chapter things are being said about "agents", the words "agent-based applications" should be thought of as well wherever possible and applicable.

83. It is probably needless to say that all of the expectations in the chronologies are good guesses rather than hard facts.

84. But also on such aspects as marketing, development and investments. See, for instance, P. Janca, 1995. Pragmatic Application of Information Agents. BIS Strategic Decisions, (May).

85. i.e. correspondence between one or more agents (or users).

86. i.e. tasks that are performed by an agent.

87. General Magic's Telescript expands this premise into multi-agent systems. As long as all agents in the system use Telescript conventions, they are part of a single, all-encompassing system. Such a system can support multiple users, each (in theory) using a different application.

88. See chapter five of P. Janca, 1995. Pragmatic Application of Information Agents. BIS Strategic Decisions, (May).

89. Either that, or by means of a very complicated and extensive homogeneous architecture (as it has to be able to accommodate every possible legacy system).

90. See (also) section 5.2 of H. S. Nwana, 1996. Software Agents: An Overview. Intelligent Systems Research AA&T, BT Laboratories, Ipswich.

91. Nwana, op. cit.

92. Already, the first user-operated search engines which support conceptual searches are becoming available. The Infoseek Guide as offered by Infoseek Corporation is an example of such a search engine.

93. Nwana, op. cit.

94. P. Janca, 1995. Pragmatic Application of Information Agents. BIS Strategic Decisions, (May).

95. Janca, op. cit.

96. Janca, op. cit.

97. Janca, op. cit.

98. i.e. what one person means to say may be different from what another person means to say, even though they both use identical words. Furthermore, a person may wish a different outcome over time, even though the same expression is used. A related challenge is in setting appropriate thresholds to trigger intervention: novice users will be glad when an agent helps them without an explicit call for help, whereas a power user will soon get very annoyed when he is constantly being "helped" (i.e. interrupted) by agents.

99. See Appendix III of this report.

100. Janca, op. cit.

101. Janca, op. cit.

102. Janca, op. cit.

103. Janca, op. cit.

104. Janca, op. cit.

105. For instance, by reporting automatically about certain events (e.g. a report about monthly sales figures).

106. Janca, op. cit.

107. e.g. to deliver better services, to collect all sorts of (user) information, or to communicate with other/middle layer agents.

108. Intranets, which run on open TCP/IP networks, enable (large) organisations/companies to employ the same types of servers and browsers used for the World Wide Web for internal applications distributed over the corporate LAN. Because intranets are based on the same independent standard Internet protocols and technologies, they are accessible to every member within an organisation, regardless of their choice of hardware platform. Intranet servers enable business functionality such as publishing information, processing data and database applications, and collaboration among employees, vendors, and customers. Driven by the powerful combination of openness and security, intuitive access to detailed information, extreme cost-effectiveness, and flexibility for customisation in increasingly competitive times, Intranets are getting very popular nowadays.

109. T. G. Lewis, 1995-1996. SpinDoctor WWW pages.

110. Lewis, op. cit.

111. i.e. in the information society. The "Info Age" denotes the period following the current "Post-Industrial Age", and will be a period in which information is the most important good.

112. Lewis, op. cit.

113. The specific benefits agents can offer governments are discussed in the next section.

114. The High-Level Group on the Information Society, 1994. Recommendations to the European Council - Europe and the global information society (the Bangemann Report, http://www.earn.net/EC/report.html). Brussels.

115. Op. cit.

116. National Research Council, 1994. Realizing the Information Future - The Internet and Beyond.

117. The High-Level Group on the Information Society, 1994. Recommendations to the European Council - Europe and the global information society (the Bangemann Report, http://www.earn.net/EC/report.html). Brussels.

118. Op.cit.

119. National Research Council, 1994. Realizing the Information Future - The Internet and Beyond.

120. T. G. Lewis, 1995-1996. SpinDoctor WWW pages.

121. As long as most of the Internet services are free (largely due to the fact that there are not yet safe methods to make many small payments on the Internet - see next footnote), commercial parties will not be very inclined to do so. Non-commercial parties (such as the various governments) do not have sufficient funds to fully meet the increased demand for bandwidth.

122. In the near future it is very likely that very small amounts of money will need to be paid for each page of information that is retrieved. The exact amount does not have to be more than a few cents, as information pages (especially popular ones) get retrieved so often that only small charges will be sufficient to cover the costs of putting the information online.

To make this system work (i.e. be interesting for Internet users), there should preferably be no, or virtually no (i.e. not more than a few cents of), overhead costs per payment.

123. It is predicted that Intranets will, within a few years, become much cheaper alternatives to expensive groupware packages such as Lotus Notes. Some even argue that "Internet technologies are much more relevant and exploitable within a local LAN [i.e. an Intranet], right now, than over much slower, dial-up access routes associated with typical home-access to the Internet" (from "The Intranet - a Corporate Revolution" by JSB Computer Systems Ltd).

124. Examples of such services are the SIFT service of Stanford University and IBM's InfoSage.

125. That is, better than many conventional programs.

126. Which does not mean that they have been completely solved, but to such a degree that they do not interfere (much) with further developments.

127. About six months after they have been formulated.

128. Users and suppliers being the first and second one.



Copyright © 1997, First Monday