Norwegian University of Science and Technology, Trondheim
Faculty of Physics, Informatics and Mathematics
Department of Computer and Information Science


Intelligent Agents
and
Conceptual Modelling

Bjørn Christian Tørrissen

Trondheim, May 1996


Abstract

This paper is about "intelligent agents" and how they can be represented in conceptual models. An intelligent agent is a piece of software with an element of artificial intelligence, which can be used to support people in the use of computer applications. The way intelligent agents work, how the software seems to act intelligently on its own, whether over computer networks or not, can be difficult for people to understand. This new quality of software can make people sceptical; many have expressed fear of losing control over the software they use. This is why we have to find ways to explain the work of the agents, so that users can build up enough trust in the agents and dare to use the new technology to a larger extent.

The first part of the paper is an introduction to different fundamental principles for the construction and design of agents, how agents cooperate and communicate with the human user and with other software agents, what tasks agents can be set to do, and problems regarding the use of intelligent agents. After this introduction a scenario is presented, which describes how agents can be used to improve the work situation of university students.

Different properties of agents that may need to be modelled are presented in chapter 4, along with an evaluation of whether existing perspectives of conceptual modelling are capable of expressing these properties. Three factors to consider when choosing how to model an agent conceptually are identified:

The paper concludes that information systems with intelligent agents can be conceptually modelled through the use of combinations of existing modelling languages.


Preface

This paper is the result of a literature study that took place from January to May of 1996. The task was given by the Department of Computer and Information Science. Babak Amin Farshchian was the project supervisor, and professor Arne Sølvberg was coordinator of the department's projects. The intention of the project has been to give an overview of various aspects concerning the introduction of intelligent agents in software and information systems. The emphasis has been put on how conceptual modelling of agents can be done through existing perspectives for conceptual modelling.

The main part of the work has been to acquaint myself with intelligent agents in general. At the start of the project there was little or no literature on the topic to be found in the university library, and searching international literature databases didn't result in many hits. What could be found was generally on a relatively advanced level. I hope this paper can act as an introduction to the field of intelligent agents for others in need of such material. Most of the literature I build my paper upon has been found in articles accessible on the Internet's World Wide Web. The draft for the book "Information Systems Engineering - Advanced Conceptual Modelling" has also been of great help in gaining insight into the mysteries of conceptual models.

In addition to comments from my supervisor, I'd also like to thank Tor G. Syvertsen, professor of structural informatics at NTNU, for interesting comments on my scenario concerning the Computer Pencilcase, and Thomas Erickson, long-time researcher and designer of interactive systems, for thought-provoking e-mails concerning my ideas.
Trondheim, April 30, 1996

Bjørn Christian Tørrissen, stud.techn.


Table of Contents

1. Introduction

1.1 Point of Departure

1.2 Motivation

1.3 The Situation Today

1.4 Framework and Methods

1.5 A Guide to the Paper


2. An Introduction to Intelligent Agents

2.1 What Is An Agent?

2.2 Agent Architectures

2.3 What Does An Agent Look Like?

2.4 How Does An Agent Do Its Work?

2.5 Cooperation Between Agents

2.6 Areas of Application for Agents

2.7 Problems Concerning the Use of Agents


3. Agent Scenario - University

3.1 The Computer Pencilcase 2.0

3.2 Modelling of the Agents From the Scenario


4. Conceptual Modelling of Intelligent Agents

4.1 Agent Properties

4.2 Agent Architectures

4.3 Perspectives For Conceptual Models

4.4 Summary

5. Discussion

5.1 Literature Study Concerning Intelligent Agents

5.2 Intelligent Agents and Conceptual Models

6. Conclusion (at last)


Appendix A: Glossary

Appendix B: References


1. Introduction

1.1 Point of Departure

Intelligent agents are a new branch of the ever-growing field of applications and user interfaces, and there are still many obscurities concerning how to model and design/visualize the agents and their functionality. As opposed to the desktop metaphor, which in the late 80's appeared in a useful form in Apple and Windows products, the agent metaphor demands more than a good look to convince the people who are to use them. Objects like windows, folders and icons in the "desktop" metaphor are passive, visible, time- and place-dependent phenomena the user more or less fully controls. After the introduction of agents, however, the user meets properties of the software described as "intelligence", invisible helpers and geographically distributed autonomous actions [Erickson].

There are two main barriers to break: 1. Agents need to be skilled enough (competence), and 2. The users must be able to trust the agents to perform their tasks (reliability/trustworthiness) [Maes 1994a].

1.2 Motivation

It is critical for the successful use of intelligent agents that the users understand what the agent does, as well as how it does it. In the same way it is important that software developers understand the agent mechanisms when they develop applications where intelligent agents are part of the system. To be able to do this, we need conceptual models that explain the agents on suitable levels of understanding.

For the two (wildly?) differing target groups there will be a need for different ways to model and describe agents. For the end users of agent systems we need a model that changes the users' comprehension of the computer as a tool, a machine, towards making the users look at the computer as some kind of secretary or personal assistant. The model must be able to explain how the agent does its work, without going into so many technical details that the user gets confused. At the same time there is a need for a conceptual model to be used during the construction of systems and applications containing agents. These models should contain enough detailed information to be used as a foundation for implementation and maintenance.

1.3 The Situation Today

Agent technology has only recently been introduced in commercial applications. Basic artificial intelligence, "helpers/wizards" and exchange of documents between different applications are research areas of high priority both in the university and the commercial world these days. There have not been many major publications yet, although there is a lot of independent research, so there is no general agreement on exactly what defines the science of intelligent agents yet.

Most conceptual models for information systems in use today are not designed to cope with intelligent agents.

1.4 Framework and Methods

For several years now I have had an interest in understanding how people perceive computers and how people act when working on them. As a result I have supplemented my typical computer science courses with some philosophy and cognitive psychology. With this background I will look at different ways to model the behaviour and work of agents, for the whole range of human users, from system designers to end users.

The most interesting angle for my literature study is to look at different agent architectures, and how the choice of architecture partially decides how the agent will be presented in a conceptual model. I will try to draw a picture of what possibilities agent technology in general provides, rather than follow particular projects in depth.

Before this project I had very limited knowledge of what intelligent agents were all about. A major part of the work therefore became a literature study, where I oriented myself about the various sides of the concept "intelligent agents". Further on I looked at different perspectives for conceptual modelling of software systems, and evaluated their relevance to intelligent agents.

1.5 A Reader's Guide to the Paper

Chapter 2 is the result of the literature study, where what I have read is summarized in a concentrated description of the various areas that somehow belong to the field of intelligent agents. In chapter 3 a scenario is presented, where potentially realistic ways of using agents are shown in short examples. This scenario is used as a background for looking at and evaluating different perspectives of conceptual modelling in connection with intelligent agents in chapter 4. A discussion leads to a summary in chapter 5 and a conclusion in chapter 6.


2. An Introduction to Intelligent Agents

No one can lift ten tons by hand, so human beings chose to construct machines to do that for them. Few people are able to extract the square root of large numbers quickly and without making mistakes, so we have chosen to construct machines to do that as well. Now we're living in a world where it is impossible to know everything, or even where to look for everything that is available. More and more information is accessible through global computer networks, and we have started building "machines" to help us find the information we need. These "machines" are one of the applications of what has come to be known as "intelligent agents". Agents are also increasingly used to make software easier to use, something which is much needed when computers change from being a tool for the few to being a medium for the many.

2.1 What is an Agent?

In the software industry there is no commonly agreed definition of what belongs to the agent field and what doesn't. [Maes 1994b, p.71] defines it vaguely as software using techniques from artificial intelligence to assist a human user of a specific application. This definition covers a lot, from filters sorting electronic mail using simple rules to artificial intelligence that assists flight control personnel in making sure airplanes don't come too close to each other.

In this paper I will use the definition of [Wooldridge & Jennings] to delimit what agents I consider interesting. According to this definition agents have the following basic properties: they are autonomous (they operate without direct human intervention), they have social ability (they interact with humans and other agents), they are reactive (they perceive and respond to changes in their surroundings), and they are proactive (they take the initiative to act towards their goals).

In other words, an agent is software that lets the user define what is eventually or instantly wanted, and works towards that goal, without the user having to worry about anything else than waiting for the result of the agent's work.

To implement agents, a tradition of using functional languages has developed, where predicate logic is used to define the agent rules. A problem with classical logic is that it isn't suited for expressing human beliefs, wishes and intentions. In the sentence "Peter thinks Santa Claus is real", the truth value of the sentence does not depend on whether Santa Claus exists or not. A piece of artificial intelligence looking at this sentence may have "learned" that Santa Claus doesn't exist, and will therefore have a hard time deciding whether Peter can believe in Santa Claus or not.
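
A minimal formalization may make this concrete. Writing Bel(a, p) for "agent a believes proposition p" (a hypothetical belief operator, not part of classical predicate logic), the point is that belief is not truth-functional:

    % Classical logic is truth-functional: the truth value of a compound
    % sentence follows from the truth values of its parts. A belief
    % operator breaks this property. From the fact
    \neg \exists x \, \mathit{SantaClaus}(x)
    % ("Santa Claus does not exist"), nothing follows about the truth of
    \mathrm{Bel}(\mathit{peter},\ \exists x \, \mathit{SantaClaus}(x))
    % ("Peter believes that Santa Claus exists").

Modal logics of belief, desire and intention were developed to handle exactly this kind of sentence.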

2.2 Agent Architectures

An agent is supposed to come up with creative suggestions to its owner, that is, the user who activated the agent. Creativity may seem like a quality that won't fit in a computer. However, if we define creativity as "using old ideas in new combinations", it can be defended. By giving the agent a knowledge base from which a pattern of actions can be interpreted, the agent will be able to make good guesses about what may happen in the future, and about how it should act with this in mind to get closer to its goals.

An agent architecture specifies how an agent can be decomposed into a set of modules, and how these modules can cooperate to perceive the agent's surroundings, make guesses about the future and execute actions to get closer to the agent's goals.

Three different approaches have been tried in the search for the best agent architecture: programmable interface agents, knowledge base agents and self-learning agents.

2.2.1 Programmable Interface Agents

In this approach the agent is an integrated part of the application. The agent can be activated if the user wants it. Activation demands that the user learns the language of the agent and uses this language to instruct the agent on how to act.

How useful such an agent is depends on how user-friendly the programming of the agent is. So far most implementations of programmable interface agents have only offered rather traditional programming, meaning more or less cryptic languages with little or no linguistic flexibility. Because of this, the use of such agents has been reserved for experienced computer users. The development of other ways to program agents directly, e.g. through a graphical interface or natural language, may make this approach more attractive.

2.2.2 Knowledge Base Agents

Instead of the users programming the application to behave exactly as they would like, this approach includes relatively large, standard knowledge bases within the application. These knowledge bases tell the application how the users probably think, what they may want to happen and what they want their screens to look like. The same knowledge base is shared by many users, who may or may not be able to change it by adding their own rules.

This solution is handy for the user, but given the fact that most people differ in most ways and have different wishes for the world, a general knowledge base like this is a poor solution in most cases.

2.2.3 Self-learning Agents

Here the most attractive elements of the two previous approaches are combined, that is 1) the possibility for each user to adjust the agent's instructions, and 2) the use of knowledge bases. This combination has led to the architecture dominating the agent scene today.

The users are offered an agent that can be trained without the user having to learn an agent language. Instead of traditional programming, the agent is instructed through observing and imitating the user, through direct and indirect feedback from the user, through examples given explicitly by the user, and by asking other agents for advice [Maes 1994a].


Figure 2.1 How an agent builds its knowledge base

2.3 What Does An Agent Look Like?

In chapter 12 of his book "Being Digital", Nicholas Negroponte describes an intelligent agent using the metaphor of "the experienced butler" who aids his master in all kinds of tasks. There's some work ahead of us before we have multi-talented intelligent agents like that, but it is a suitable goal to work towards. Apple Computer shares Negroponte's view of agents in a video bordering on science fiction, "The Knowledge Navigator", where an educational agent, Phil, is pictured as a well-dressed, versatile and well-informed teacher.

An important part of creating an agent worthy of the user's trust is giving the agent a comfortable appearance. Maxims, an agent for the Eudora e-mail application, uses simple caricatures of typical facial expressions to describe the agent's different moods, e.g. "working", "in doubt", "satisfied", etc. [Maes 1994a]. The caricature changes depending on what the agent does.

This technique is basically the same as that used to create believable cartoon characters. If you want to achieve compassion, understanding and enthusiasm for a character, you have to consider that [Thomas & Johnston]:

  1. The emotions of the character must be visible all of the time.
  2. Thinking/acting is affected by state of mind.
  3. Emotions must be visible long enough for the audience to follow the way of thinking.

So, it is important to represent the way the agent works by expressing the various "states of mind" the agent goes through and bases its decisions on. In situations where the agent isn't actively working, but perhaps is waiting for a resource to be freed or is presenting its work to its owner, the same principles should be followed. Whether the agent "thinks" it is presenting the correct result or isn't quite sure about it should somehow show.

There is a risk in presenting the agent as too sophisticated. If the agent appears as a human figure, anthropomorphizing the agent, the user often automatically assumes that the agent is able to perform far more advanced functions than it actually is capable of.

For simpler agents where the goal isn't to create a feeling of cooperating with a "someone", for instance when offering a fill-in-and-collect-forms function, using normal rules for design of user interfaces is enough.

Eventually, the goal must be to concentrate the communication between the user and all the agents for different applications in a single, simple user interface, which gives the user the impression that there is only one agent doing all the work.

2.4 How Does An Agent Do Its Work?

The methods of an agent will vary depending on what kind of task it is set to do. In general, everything an agent can do could be done just as well by a person, given enough time. The agent apparently acts in intelligent ways by making more or less intelligent decisions.

Agents imitate intelligence by using available information in accordance with defined rules of operation and observations of their owner's actions, to predict the future and prepare for expected situations, or to reach certain goals defined by the owner.

A typical example is Letizia, an agent used in connection with reading documents off the World Wide Web (WWW). Most of the time when the user is accessing the WWW, the computer is idle, waiting for instructions from the user to retrieve a new document. Letizia uses this otherwise idle time to look for other documents that somehow are related to the document being read, so that the user, after having read the document, will get suggestions for other documents that might be of interest. Letizia thus bases its searching on the contents of recently read documents.
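
As a rough illustration, and not a description of Letizia's actual implementation, an agent of this kind can be sketched as an idle-time loop that follows links from the current document and scores the linked documents against a profile of words from recently read pages. The helper functions fetch and links_of are assumptions of this sketch:

    # A minimal sketch of a Letizia-style browsing assistant (Python).
    from collections import Counter

    def profile_from(recent_docs):
        """Crude interest profile: word frequencies over documents
        the user has recently read."""
        profile = Counter()
        for text in recent_docs:
            profile.update(word.lower() for word in text.split())
        return profile

    def score(text, profile):
        """Overlap between a candidate document and the profile."""
        return sum(profile[word.lower()] for word in text.split())

    def suggest(current_url, recent_docs, fetch, links_of, limit=3):
        """While the user reads, rank the pages linked from the current
        document and return the most promising ones."""
        profile = profile_from(recent_docs)
        ranked = sorted(links_of(current_url),
                        key=lambda url: score(fetch(url), profile),
                        reverse=True)
        return ranked[:limit]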

In the same way there are agents that work from subject categories in articles from Usenet News, or that filter e-mail into degrees of importance based on the order in which the user reads messages from different senders.

2.5 Cooperation Between Agents

We can distinguish between 1) cooperation between agents with similar functions, and 2) cooperation between agents with different functions.

2.5.1 Cooperation Between Agents With Similar Functions

For agents that perform information filtering from e.g. Usenet News, it may be an interesting option to export parts of an agent's knowledge base to one or more other agents. The rules an agent has struggled to acquire to be able to find interesting information about a certain topic would be nice for other people interested in the same topic to be able to copy. Or if an agent is unsure about what to do with a piece of information, it could ask another, more experienced agent what to do with it.

[Maes 1994a] mentions an example of how this can drastically shorten the "training period" before the agent is competent enough to make the right decisions.
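
A sketch of the mechanism, with the confidence threshold and the peers' interfaces as assumptions made for this illustration: the agent uses its own rules when confident, and otherwise polls its peers and follows the most trusted answer.

    # One agent asking more experienced peers for advice (Python sketch).
    def decide(situation, own_rules, peers, threshold=0.8):
        """own_rules and each peer return an (action, confidence) pair."""
        action, confidence = own_rules(situation)
        if confidence >= threshold:
            return action                       # confident enough on our own
        answers = [peer(situation) for peer in peers]
        answers.append((action, confidence))    # keep our own guess in the running
        return max(answers, key=lambda pair: pair[1])[0]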

2.5.2 Cooperation Between Agents With Different Functions

Even though it is desirable to have a single interface agent communicating with the user, there will actually be a whole network of agents working to interpret the user's wishes and produce results for the user. There will be agents for the user interface, for communication between different applications, for planning and coordinating the agents' work, and so on.

In [Kautz et al.] a bottom-up way to design agents is described. This is a good example of cooperating agents. Here task-specific agents are distinguished from personal user agents. The user agents transform requests from the task-specific agents into a comprehensible graphical presentation for the user. The user fills in a response, which the user agent relays back to the task-specific agent.

In addition to allowing a reduction of the size of each agent's knowledge base, distributing the knowledge between several agents brings other practical advantages. The task-specific agent, which can be shared by many users, won't have to worry about read/write access to the particular user's screen or files, as this can be left for the user (interface) agent to decide. Also, one agent can be changed without requiring changes elsewhere in the agent network. Different users can even use different versions of the same user agent, choosing the one they like best. The same user agent can also be used as an interface to several different types of task-specific agents for different applications.

2.6 Areas Of Application For Agents

Practical uses of agent technology have started to arrive increasingly often.

Firefly - http://www.firefly.com/
This agent lets the user rate music, books and movies, and uses these ratings in combination with other people's ratings to recommend e.g. music it believes the user will appreciate listening to. To finance the service there is an option to buy the recommended products through the agent interface. The whole concept is founded on the idea that when people like something, they are likely to also like something else, because someone else who likes that same thing also likes this specific "something else".
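
The underlying idea, user-user collaborative filtering, can be sketched in a few lines. The rating data and the simple agreement measure below are illustrative assumptions, not Firefly's actual algorithm:

    # "People who like what you like" as a Python sketch.
    def similarity(a, b):
        """Agreement between two users over the items both have rated."""
        shared = set(a) & set(b)
        if not shared:
            return 0.0
        return sum(1.0 for item in shared if a[item] == b[item]) / len(shared)

    def recommend(me, others):
        """Suggest items that users similar to me liked and I haven't rated."""
        scores = {}
        for other in others:
            weight = similarity(me, other)
            for item, rating in other.items():
                if item not in me and rating >= 4:          # "liked"
                    scores[item] = scores.get(item, 0.0) + weight
        return sorted(scores, key=scores.get, reverse=True)

    me = {"Abbey Road": 5, "Kind of Blue": 4}
    others = [{"Abbey Road": 5, "Kind of Blue": 5, "Blue Train": 5},
              {"Abbey Road": 1, "Nevermind": 5}]
    print(recommend(me, others))    # 'Blue Train' ranks first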

NewT - [Maes 1994a]
This is a filter agent for Usenet newsgroups. This forum has little by little developed into a massive message board, too large for anyone to keep fully oriented about what is posted there. Using NewT, the user can activate one or more news agents and train them to look for interesting postings, by showing them examples of articles the user finds interesting or not interesting at all. The agent performs a text analysis of these articles and makes a profile of what the contents of an article interesting to the user will look like. This profile is compared to articles on the Usenet, and the seemingly interesting articles are presented to the user by the agent.
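
The kind of profile involved can be sketched as word frequencies learned from labelled example articles; the scoring scheme below is an assumption made for the illustration, not NewT's actual text analysis:

    # A toy content profile in the spirit of NewT (Python sketch).
    from collections import Counter

    def train(liked, disliked):
        """Words from interesting articles count positively,
        words from uninteresting ones negatively."""
        profile = Counter()
        for text in liked:
            profile.update(text.lower().split())
        for text in disliked:
            profile.subtract(text.lower().split())
        return profile

    def interesting(article, profile, threshold=1):
        return sum(profile[word] for word in article.lower().split()) >= threshold

    profile = train(liked=["new jazz releases reviewed"],
                    disliked=["cheap watches for sale"])
    print(interesting("jazz releases this week", profile))   # True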

Open Sesame! - http://www.sesame.com/
Originally this was a workstation agent for PC and Mac; now it has extended its field of operation to entertainment as well. When activated it runs in the background and observes what the user does with the computer and in the applications, purely functionally. After a while it will catch patterns in the user's behaviour, and will offer to automate repetitive activities as soon as it has grown confident enough that something really is a pattern.

KidSim - [Smith et al.]
KidSim offers a simple, graphical user interface designed for use by children. The user can create a world with inhabitants and rules for how they are to act and behave towards each other. The rules are programmed by demonstrating behaviour by pointing and clicking; the user doesn't have to learn a traditional programming or scripting language. KidSim has been used to create an aquarium simulator, Pac-Man and similar simple games and animations.

2.7 Problems Concerning The Use Of Agents

As is often the case when introducing new technology, there are also some security measures that have to be taken when introducing agent technology to applications. Important questions are how much information about the user should be stored in the agent's knowledge base, and how it is to be protected from other agents the agent may be communicating with.

When it comes to exchange of agent functionality as described in 2.5.1, it must be possible to delimit what parts of the knowledge base you want to share with others. If not, there is a possibility that interests you have and want to keep hidden from others may become known to them through a look at your agent's instructions. For some people this is totally unacceptable, while others think that it doesn't matter if others know WHAT a person does, as long as they don't know WHY the person does it.

Another problem is the danger that computer networks can become so densely populated by agents tearing along on their quests for information that they clog the network. This will be a particularly large problem on the Internet, where uncontrolled use of bandwidth will lead to great damage. Directives for how agents are to use net resources must be drawn up and used by agent developers. This work has started; among others, [Eichmann] lists "ethical rules" for WWW agents to follow. Eventually we will also have to decide whether the owner/activator of an agent is responsible for anything the agent may do to fulfill the goals the owner defines for it.


3. Scenario

In this chapter we will take a look at a scenario regarding how a student at the Norwegian University of Science and Technology (NTNU) may, in a few years, use agents to improve the work situation when studying. The scenario takes as its starting point an imaginary look back at the Computer Pencilcase (CP) project, which was a planned pilot project at the Norwegian Institute of Technology in the fall of 1995. The system must be able to satisfy the needs of students from various studies, and it is not to be expected that the students are willing to spend a lot of time learning how to use a new system.

The chapter gives examples of how agents, the way they are described in chapter 2, can actually be used. The descriptions are based both on agent systems in use today and on agent systems under development.

3.1 The Computer Pencilcase 2.0

In 1996 the Computer Pencilcase was introduced as a trial project for the civil engineering and computer science departments of NTNU. The students at those departments had to obtain a portable PC for the classes, and the university supplied them with the necessary software, such as word processors, spreadsheets and various software for use in specific classes. In addition, standard packages of free Internet applications were put together. The idea was that this would guarantee the necessary foundation for using computers more actively in the classes. It worked fairly well; it became much easier for the students to actually get hold of a computer to do their homework and laboratory work on, since they were carrying one with them at all times, but the revolution that was hoped for just didn't happen. After a while complaints started to be heard from here and there, and the main problems seemed to be:

With this as a starting point, a student set out to find ways to use intelligent agents to improve the system, so that the large investments that had been made could be justified. Here are the main results from this student's work:

3.2 Modelling of the Agents From the Scenario

The agents in the scenario have different functions, but they are all coordinated through the user interface agent, so that the student experiences interaction with a single agent. For the users to have a realistic understanding of what is really going on, it is important to present a clear and understandable conceptual model of the agent system.

It seems to the users that the agents:

To understand how the agent can give this impression, it is necessary to understand the mechanisms in action. Users who don't know anything about this are likely to over-estimate the skills of the agents. The more intelligent the agent seems to be, the more important it is to explain to the user how the agent actually works when making its decisions.

To do this explaining, and to get rid of the scepticism users may have towards using "artificial intelligence" in their work, we need conceptual models the users can understand.

Different agents need different aspects emphasized when being modelled. For the agency described in the scenario, we will need several different modelling methods. In addition to choosing what aspects are to be modelled, it will also be necessary to be able to draw up models on different levels of detail, depending on for whom and for what purpose the model is being made. If the model is to be used to explain to a user how an agent works, it can be much less detailed than if it is to be used as a foundation for designing a new agent.

What modelling aspects can be used is the subject of the next chapter.


4. Conceptual Modelling of Intelligent Agents

Intelligent agents in connection with conceptual modelling is a field where few publications have appeared. This may be a result of the fact that the research community so far hasn't even been able to agree on what the best agent architecture is. The reason for this is probably that there is no such optimal architecture. The different functions agents have been designed and implemented for have different needs that the agent architecture, and conceptual models of it, have to cover.

In addition to being able to emphasize various properties of the agent, it is also necessary to be able to present the model at different levels of detail. For the construction of agent systems a high level of detail is required to present the full complexity of the system. For the end users of the agents it is most important to explain in an understandable way what really happens when the agents work, and what the agents can and cannot do for the users.

We will start off by looking at what properties of the agents it is necessary to model, and later look at how, or whether, this fits into existing frameworks for conceptual models.

4.1 Agent Properties

Depending on the agent to be modelled, there are various properties of the agent that should be expressed in the agent model. From chapter 2 and [Franklin & Graesser], the following properties are the most important: autonomy, reactivity, proactivity, continuity, learning, communication, flexibility and moods.

4.2 Agent Architectures

An advanced class of intelligent agents is the predictive agents. These agents have all the components an agent can have. Figure 4.1 is an attempt to show how these components work together. The figure is based on the article "Towards Anticipatory Agents" by Ekdahl, Astor & Davidsson, which can be found in [Wooldridge & Jennings].


Figure 4.1 The main components of an agent

The figure describes a circuit of events, where no event is to be considered a specific starting point. Still, starting in the bottom left of the figure, the agent's sensors sense the surroundings and construct a picture of what state the "agent's world" is in.

As more or less raw data, these sensed data are brought to the interpreter, which transforms the information into a format the agent can use in its calculations. This information is incorporated into the agent's model of the agent system and its surroundings, and passed to the analyzer. The analyzer is on some kind of meta-level, where the superior goal(s) of the agent are emphasized. This is where the owner inputs his or her wishes, and it is the analyzer that communicates with the user if the agent is in doubt about what the user really wants. The state of "the world" is compared to the user's wishes for "the ideal world", the ultimate goal for the agent, and the analyzer responds to the interpreter about what the agent should attempt to change in the "world". While the analyzer has been doing this, the predictor, a part of the interpreter, has used the model to simulate possible future states of the "world", based on what actions the agent can perform.

Desirable modifications of the "world" and possible future states of the "world" are taken to the decision maker. Here the various possible future states are compared to the state defined as the goal, and the action that seems to lead to the most desirable future state is chosen for execution. The necessary activations of tasks for the agent are generated and sent to the activator. The agent's actions affect the surroundings, and after some time the agent senses its surroundings once more and the circuit starts over, hopefully in a "world" somewhat closer to the main goal of the agent. This circuit continues over and over again until the defined goal state of the "world" is reached, or the owner of the agent redefines or cancels the goal.

Ideally the sensing and the execution of agent actions to change the surroundings run continuously, so that the agent can react immediately to changes in the "world". To reach a real-time agent system like that with the technology of today, the agent's sensing and reactions must be limited to a carefully chosen small subset of all variables in the agent's surroundings.
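
The circuit of Figure 4.1 can be summarized as a control loop. The sketch below is one possible reading of the figure; every component function is a placeholder that a real agent would implement as a substantial module:

    # The sense-interpret-analyse-predict-decide-act circuit (Python sketch).
    def run_agent(sense, interpret, analyse, predict, decide, act, goal_reached):
        world_model = {}
        while not goal_reached(world_model):
            percepts = sense()                  # sensors read the surroundings
            facts = interpret(percepts)         # raw data -> usable information
            world_model.update(facts)           # keep the world model current
            wishes = analyse(world_model)       # compare the world to the goal state
            futures = predict(world_model)      # simulate possible future states
            action = decide(wishes, futures)    # pick the most promising action
            act(action)                         # effectors change the surroundings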

4.2.1 A General Model of a User Interface Agent

The user interface agent from the scenario in chapter 3, which runs the user interface and automates repetitive patterns of actions on behalf of the user, can be molded into the components from figure 4.1. Showing the user such a model of the agent may help the user realize that the agent is not a thinking being crouched inside the computer, but a set of software modules cooperating and performing work according to certain rules specified by the user.

When the user performs an action through the desktop/user interface, the following agent-related activity takes place:

  1. User activity takes place...
  2. ... and is sensed by the agent, which runs in the background of the user interface at all times and records any activity.
  3. The agent interprets the intention of the user activity, e.g. "What menu item was chosen?"
  4. The interpreted meaning of the signalled desired action is sent to the model of the surroundings, while at the same time...
  5. ... the same signal is transmitted to the agent's knowledge base. The superior goal of the agent is "Automate repetitive patterns of user actions", and ...
  6. ... the state that is aimed for is a knowledge base filled with as much information about the user's actions as possible, in order to detect repetitive patterns in them.
  7. Based on the knowledge base and the currently signalled user action, the predictor can estimate probabilities for what the user's next action may be. These estimations go to...
  8. ... The decision maker, where the various probabilities are evaluated. What happens from here depends on whether or not the predictor has found a pattern of actions with a certain percentage probability. If the probability is high enough, the decision maker will decide to ask the user if this actually is a pattern, and if so, whether the user wants it automated or not. The decision is taken to...
  9. ... the unit which executes the agent's actions to do what the user requested, as well as asking the user whether automation is desired, if called for.
  10. Finally, the user gets to see the result of the agent's interpretation and evaluations, through feedback in the user interface.

An example of a pattern of actions the agent can perceive is if the user, each time he or she logs on, opens an e-mail application, a window showing the time, and a word processor. When the agent has seen the user do this a number of times, the probability of it happening again is so high that the agent asks the user if it actually is a pattern.
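
Steps 5-8 for this particular example can be sketched as a counter over the actions following each logon; the sequence length and the threshold for asking are assumptions of the sketch:

    # Detecting a repeated logon sequence (Python sketch).
    from collections import Counter

    class PatternWatcher:
        def __init__(self, min_seen=5):
            self.after_logon = Counter()    # sequences observed after logon
            self.min_seen = min_seen
            self.current = None

        def observe(self, action):
            if action == "logon":
                self.current = []
            elif self.current is not None:
                self.current.append(action)
                if len(self.current) == 3:  # look at the first three actions
                    self.after_logon[tuple(self.current)] += 1
                    self.current = None

        def suggestion(self):
            """Propose automation once one sequence dominates."""
            if self.after_logon:
                sequence, count = self.after_logon.most_common(1)[0]
                if count >= self.min_seen:
                    return "Automate " + " -> ".join(sequence) + " at logon?"
            return None

    watcher = PatternWatcher(min_seen=2)
    for _ in range(2):
        for a in ["logon", "start-email", "start-time", "start-wordprocessor"]:
            watcher.observe(a)
    print(watcher.suggestion())     # proposes automating the three actions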

If the user denies that this is a pattern, the agent stores this information in its knowledge base, so that it won't trouble the user by asking the same question again. But if the user acknowledges this as a pattern, the agent will ask the user if the user would like:

The agent must be equipped with certain safety rules, such as "Never delete a file without asking the user first!". The users must know that THEY are in charge of the computer; the agents are their servants. In the same way, the agents must somehow report what they have done and why they did it after having done something. This reporting can be implemented as a window where the status and actions of the agent can be observed, and every time the agent does something important and/or needs feedback from the user, a window with a message that must be clicked in order to continue with other operations can appear. In this case, when the user logs on, a window saying "Logon detected. Login sequence Start-email, Start-time, Start-wordprocessor executed." can be a decent agent report.

4.3 Perspectives For Conceptual Models

Although chapter 4.2 draws up a picture of how an agent works, the description there is much too rough to be of any use to a developer who wants to design an agent system or an end user who wants to know how her e-mail application functions. Because of the wide span of applications where agents can be introduced, it is impossible to make a general conceptual model of agents that can explain all variants of agents. Just as it's better for us to go to a swimming pool and experiment than to read instructions from a book when we want to learn how to swim, we have to choose the kind of model that suits a particular agent best.

In chapter 2.2, pp. 23-56 of [Krogstie & Sølvberg], a number of perspectives to choose between when constructing conceptual models are presented: the structural, functional, behavioural, rule, object, communication, and actor and role perspectives.

So far no modelling language, or even model perspective, has been chosen as the intelligent agent modelling tool. Existing agents have just been modelled and explained in ways that seemed to fit the particular agent. In the following we will look at what possibilities various existing system modelling languages offer for incorporating agents and agent behaviour. The theory behind the models will not be discussed in depth. Chapter 4.4 contains a summary of the evaluation of the various model perspectives.

4.3.1 Structural Perspective

The structural perspective is dominated by the Entity Relationship (ER) model and its variants and extensions. The model basically consists of entities, relationships between entities, and attributes describing entities and relationships.

The agent's goal can be modelled as a single-valued attribute or a multi-valued attribute belonging to the agent entity. The agent entity can also have an attribute describing the "mood" of the agent, e.g. "working and concentrated", "wondering", "self-assured" etc.

To separate the agent from other kinds of entities in the ER diagram, the symbol in Figure 4.3 is used in Figure 4.4. The symbol is derived from the stereotypical "real life" secret agent with sunglasses. Figure 4.4 is a simple ER model of the e-mail agent system described in the scenario in chapter 3.1.


Figure 4.4 Agent modelled in ER diagram

The user owns an agent and uses an e-mail application to read and write e-mails. The agent has an attribute describing its "mood" and a multi-valued attribute containing the agent's goals. Communication between user and agent takes place both directly between them and indirectly through the agent observing the user's actions. The agent reads incoming e-mail and treats it, by e.g. sorting the mailbox file, according to the agent's goals and its "mood". The sorted mail is presented to the user through the e-mail application.

This model can express:

The model can not express:

On the whole, the ER model as a tool for modelling agents must be said to be not particularly good. It can be used to show that there is an agent in the system, but it is not possible to express how it operates or what resources it is using at various stages of its activity. The model can be used to give end users a superficial picture of how the agent works, but it is not a good model for describing an implementation of the system and the agent.

4.3.2 Functional Perspective

In this perspective the processes in the system are put in focus. The dominating conceptual model framework based on processes is the Data Flow Diagram (DFD) model. Such models are built from the symbols of Figure 4.5.


Figure 4.5 The symbols of the DFD modelling language

These symbols can be put together in a network of processes, storage units and external entities, and in this way model a system. Every single process entity can be decomposed into a more detailed description of the process, without introducing any new symbols.

This perspective offers two ways to model an agent in a system. The easiest way is to model the agent as an external entity, which works outside the system itself by receiving a flow of information from the system, treating it and sending suggestions for actions back to the main system. However, most often the agent is to be considered an integrated part of the system, and must therefore be modelled as what it is, namely a set of processes.

This model can express:

The model can not express:

So on the whole, this model perspective can express many of the agent properties well, but the artificial intelligence part is not well handled. By having a DFD for each mood the agent can be in, this could be managed somehow. Combining DFD with descriptions of the processes in a rule perspective gives a better possibility of describing an agent system with great precision and varying levels of abstraction.

4.3.3 Behavioural Perspective

The main elements of this model are states and transitions between states as a result of events, as shown in Figure 4.6.


Figure 4.6 Transitions between states caused by events

To further specify the system model, conditions for an event to trigger a state transition can be defined. An event can also directly trigger other events. An extension of state transition diagrams which increases the span of what can be modelled is Statecharts [Harel]. Statecharts offer ways to express parallel processes and to divide the system into modules.

Petri nets are another model classified under the behavioural perspective. A Petri net is a state machine where the transitions are defined by tokens flowing through the model of the system. That makes it possible to model ordering, concurrency, synchronization, exclusiveness and iteration of states, so that the model can be very precise.

Figure 4.7 is a simple example of a state diagram. The model shows how an agent's mood changes as a result of communication between the user and the agent. This could be a model of the agent that comes up with suggestions for relevant updated information in the scenario in chapter 3.1. The top state in the figure represents the agent when working. Eventually it comes to a point where it needs to communicate with the user, by asking the user something or suggesting something to the user. Depending on to what extent the agent "believes" it has come up with something interesting, the agent will be unsure or confident when addressing the user.

If the agent is unsure and the user corrects the agent, the agent's mood will change into "confusion" when it continues its work. If the user confirms that the agent has done something useful, the agent will be "pleased" when it goes back to do more work.

If the agent is sure it has done its work well, but the user corrects the agent, the agent will be "surprised", and have to go back and do its work better before addressing the user again. If the user accepts the confident agent's suggestion, the agent is "satisfied", and goes back to work "feeling" it knows the user well.


Figure 4.7 The agent's moods changes, illustrated using a state diagram

The various moods the agent can be in decide how the agent works when it comes to precision, how much of the knowledge base is to be used, what resources the agent should occupy, and so on.
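
Since the figure defines a small, closed set of moods and transitions, it maps directly onto a transition table. The mood and event names below are paraphrased from the figure description above:

    # The mood transitions of Figure 4.7 as a table (Python sketch).
    TRANSITIONS = {
        ("working", "suggest unsure"):       "awaiting reply, unsure",
        ("working", "suggest confident"):    "awaiting reply, confident",
        ("awaiting reply, unsure", "user corrects"):    "confused",
        ("awaiting reply, unsure", "user confirms"):    "pleased",
        ("awaiting reply, confident", "user corrects"): "surprised",
        ("awaiting reply, confident", "user accepts"):  "satisfied",
        # after reacting, the agent returns to work in its new mood
        ("confused", "resume"):  "working",
        ("pleased", "resume"):   "working",
        ("surprised", "resume"): "working",
        ("satisfied", "resume"): "working",
    }

    def step(mood, event):
        return TRANSITIONS.get((mood, event), mood)   # ignore undefined events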

The behavioural perspective can model:

The model can not express:

This perspective can be used as part of a model to represent an agent system, for example to show how the agent changes directives for operation and how the agent in certain states reacts to events in the system.

4.3.4 Rule Perspective

This perspective looks at a system as a set of rules that are to be followed for all activity in the system. A rule typically has the format:

IF {condition} THEN {expression}

Here "condition" is a requirement for the state of parts of or the whole system to be modelled, and "expression" is a description of what is to be done given the fulfilled condition. Example: "IF neither keyboard nor mouse have been touched for 20 minutes THEN start Solitaire", or "IF sender of incoming e-mail is "Mummy" THEN play loud sound on computer speaker". This perspective is well suited for the actual implementation of agent systems, as this most often is done in functional languages, which are based on rules to define functionality.

The rules can be sorted into two sets: 1) rules that always must be followed, and 2) rules (deontic) that may be departed from to make it easier to reach temporary goals. In the field of artificial intelligence, rules have for a long time been the standard tool for representation of knowledge. The core of expert systems, databases and requirement specifications consists of rules. The advantages of using rules are many:

The rule perspective also has drawbacks. In some cases it may be difficult to model details in certain situations as being either true or false; an uncertainty operator would sometimes be desirable. It can also be difficult to decide which rules should be absolute and which should be deontic.

Even though rules can be used to remove ambiguity, the rules themselves can become complicated, so that a holistic understanding of the system is difficult to get. If one also decides to keep a flat structure for the rules, that is, using a single (low) level of abstraction, normal people will quickly be overwhelmed by the number of rules, no matter how simple each single rule is.

Another problem is avoiding contradictions between the rules. Example: To keep children from reading indecent texts, many programs for reading Usenet postings let the user easily introduce rules such as "I don't want to read messages with the word 'breast' in them". This is fine, until the user at a later stage maybe has a sick neighbour, and introduces the rule "I want to read all articles with the words 'breast cancer' in them"...
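
One common way out of such contradictions, offered here as an assumption rather than something taken from the cited literature, is to give the more specific rule priority, for instance by matching longer phrases first:

    # Most-specific-rule-wins conflict resolution (Python sketch).
    def allowed(subject, block=("breast",), permit=("breast cancer",)):
        """The longest matching phrase decides; permit rules with more
        specific patterns therefore override broader block rules."""
        for phrase in sorted(permit + block, key=len, reverse=True):
            if phrase in subject.lower():
                return phrase in permit
        return True

    print(allowed("new results on breast cancer"))  # True: specific rule wins
    print(allowed("breast implants discussed"))     # False: blocked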

Within the rule perspective there are also goal-oriented conceptual models. Six kinds of goals are described in [Sutcliffe & Maiden]:

  1. Positive goal states: States that are desired.
  2. Negative goal states: States that are to be avoided.
  3. Alternative goal states: The desired states vary, depending on the in-flow to the system at different times.
  4. Exception goal states: States in which the system cannot be operated in a normal way, even though the state itself isn't desired. In such cases special actions are to be executed.
  5. Feedback goals: In connection with a desired state a certain deviation from an ideal state is allowed.
  6. Combination goals: A combination of the first 5 kinds.

This or a similar separation of goals into classes is needed to get the flexibility an intelligent agent demands from the system to be able to make "intelligent" decisions.

By using the rule perspective, all the properties from chapter 4.1 can be modelled: autonomy, reactivity, proactivity, continuity, learning, communication, flexibility and moods.

It is evident from this list that the rule perspective can be used to model the most important aspects of intelligent agents. It is not thereby given that this perspective yields simple and understandable conceptual models. Different levels of abstraction are necessary for explaining agents to different groups of people. While system developers can understand rules on a level similar to the instructions the computer will use, end users will need a more metaphoric set of rules.

4.3.5 Object Perspective

Object-oriented techniques have been introduced to most fields within software development during the last decade. The basis of the perspective is identifying entities in the system and encapsulating them and their functionality in objects. Only the information necessary to use the object's functions is visible to the surrounding world; most of the details of the object are hidden inside it.

A set of objects sharing the same definitions of attributes and operations belongs to the same class. By letting the properties of a class be inherited by subclasses, and adding new properties to the subclasses, specialized objects are defined. By putting several classes together in a new class, new objects can be composed. This is called aggregation.

So far, object-oriented modelling has not been much used in the modelling of agents. [Saake et al.] have experimented with this using TROLL, an object-oriented specification language with first-order temporal logic as a basis. Here agents are looked upon as autonomous and intelligent objects, equipped with knowledge and the ability to reason in order to reach various goals.

The main differences between "normal" objects and agent-objects are:

To model agents, logical formulas are used to describe them, instead of the static values and attributes used in traditional objects. A change of state in the agent object is therefore a revision of knowledge, adding or removing logical formulas from the object. In this way relations to other objects and agents can be formed dynamically, depending on the agent's properties, behaviour and tasks.
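
A sketch of such an agent-object, with the formulas reduced to plain strings for the illustration (the TROLL experiments use proper temporal logic):

    # State as a revisable set of formulas (Python sketch).
    class AgentObject:
        def __init__(self, goals):
            self.knowledge = set()      # the agent's current formulas
            self.goals = set(goals)

        def assert_formula(self, formula):
            self.knowledge.add(formula)         # knowledge revision: add

        def retract_formula(self, formula):
            self.knowledge.discard(formula)     # knowledge revision: remove

        def believes(self, formula):
            return formula in self.knowledge

    mail_agent = AgentObject(goals={"sorted(inbox, by_importance)"})
    mail_agent.assert_formula("important(sender, supervisor)")
    mail_agent.retract_formula("important(sender, newsletter)")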

The various agent properties can be expressed in the object perspective:

An object-oriented approach to conceptual modelling of agents seems promising. A problem, however, is that some of an agent's properties can't be properly expressed through the use of logical formulas yet. In particular this is a problem when it comes to expressing flexible communication between agents. This kind of modelling is best suited for analysis and design of the system for actual implementation. As a conceptual model explaining the agent system to end users it is not particularly good.

4.3.6 Communication Perspective

This perspective is based on theories of language and action from philosophical linguistics. The fundamental assumption is that every work process takes place as a consequence of actors cooperating through conversations and entering into commitments. From this, speech act theory has been developed. Speech acts are elements in a conversational structure, which defines the possible courses a conversation between two people can take.

[Searle] formalizes the structure of conversations and classifies all speech acts into five basic illocutionary types. These categories cover all statements that can occur, not only explicit "action sentences" like "I will close the door now" and similar sentences. This is necessary to cover the whole span of possible ways to express the same intention. A person saying "Gee, it's really hot in here!" may be expressing a wish for someone to open the window.

The five categories are: assertives (committing the speaker to the truth of something), directives (attempting to get the hearer to do something), commissives (committing the speaker to a future action), expressives (expressing the speaker's psychological state), and declarations (changing the state of the world by the very utterance).

Action Workflow [Medina-Mora et al.], the "workflow paradigm", has been developed using speech act theory as a starting point. The workflow is defined as a coordination process between two acting roles, the customer and the service provider. The workflow is divided into four phases. In the first phase the customer requests something from the service provider, or the service provider offers the customer something. In the second phase the two actors negotiate a contract with conditions for what the final result of the process is to be. In the third phase the service provider performs the work, and in the fourth and final phase the service provider hands the product of the work over to the customer, who checks that the product is in accordance with the contract. See Figure 4.8 for a general view of the workflow process.


Figure 4.8 The main phases in Action Workflow

The results of the four phases can be summarized in the following speech acts: a request (or offer), a promise to deliver under the agreed conditions, an assertion that the work is completed, and a declaration that the result is satisfactory.

If the conditions are not fulfilled after one circulation through the phases, it will take one or more similar additional circulations of workflow before the customer is satisfied and the process is finished. If necessary, the service provider can initiate other workflows with other actors to get the work done.
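
The four phases and the possibility of repeated circulations can be sketched as a simple loop; the function interfaces are assumptions made for this illustration:

    # The Action Workflow loop between customer and provider (Python sketch).
    def workflow(request, negotiate, perform, accepts):
        """Circulate through the four phases until the customer is satisfied."""
        while True:
            wish = request()              # 1. customer requests / provider offers
            contract = negotiate(wish)    # 2. conditions for the result agreed upon
            product = perform(contract)   # 3. the provider performs the work
            if accepts(product, contract):
                return product            # 4. customer declares satisfaction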

Even though this workflow model was constructed to describe the course of work processes in business organizations, it really is a very general description of workflow, and can easily be used to describe coordination processes between human users and intelligent agents. The ten steps from chapter 4.2.1 are well suited to being modelled as speech acts between the various components of the agent.

By modelling the user as customer and the agent as service provider the following agent properties can be modelled:

The model can not express:

The communication perspective can be used to model the course of the work that must be done to reach the user-defined goal. What actually is done can not be shown in high detail, though. By combining this model with a model that can express reactivity, flexibility and mood, a holistic model useful for both end users and system developers can be created.

4.3.7 Actor and Role Perspective

This is a relatively new perspective, originating from work with object-oriented programming languages and artificial intelligence. A model using this perspective is ALBERT, "Agent-oriented Language for Building and Eliciting Real-Time requirements". As the name indicates, the model is suited for modelling complex, distributed, real-time systems, and it is oriented towards agents.

ALBERT describes these systems as agent communities, where each agent is responsible for certain functions in the system, and the agents have varying degrees of knowledge of other agents, what these other agents are doing, and what services they offer.

Requirements for structure, time-variable properties and values, functionality, behaviour, real-time response, and cooperation and communication between agents are met. Modelling can be done at two levels: 1) agent level, where the behaviour of the single agent is emphasized, with no thought of there being other agents in the system, or 2) community level, where the interplay between agents is modelled, and the community's common good is considered when the agents' behaviour is coordinated. The whole process is written in a formal language based on an extended temporal logic.

The actor and role perspective is well suited for uniting and modelling agent systems as a whole, like the example scenario in chapter 3, with agents in different roles in different situations.

Several models based on agents and users playing roles exist, but these haven't come into common use yet. However, it seems like models from this perspective will take over the agent modelling scene in the future, especially for distributed systems with many agents and/or agent components that have to cooperate to reach higher goals. What makes this approach different from previous models is that in this perspective the models are developed with agents in mind, instead of trying to incorporate agents into an existing model. So far it seems like these models mainly will be used for analysis and design of systems, as even though agents can be described well, the models soon become large and difficult to follow.

4.4 Summary

Table 4.1 summarizes the various perspectives of conceptual modelling and their applicability when it comes to modelling systems inhabited by intelligent agents. Each property it is desirable to model is represented by a row in the table. Likewise, each model perspective is represented by a column. An X in a column indicates that the property on that row can be modelled using the perspective in the column heading. A "-" indicates that the perspective can not model the property on that row. Below the property rows there are two rows with comments on what the perspective is well suited for and what its drawbacks are.

Agent property    Structural  Functional  Behavioural  Rule  Object  Communication  Actor and role
Autonomy              -           -            X         X      X          X               X
Reactivity            X           X            X         X      X          -               X
Proactivity           X           X            -         X      X          X               X
Continuity            X           X            X         X      X          X               X
Learning              X           X            X         X      X          X               X
Communicating         -           X            -         X     (X)         X               X
Flexibility           -           -            X         X      X          -               X
Moods                 X           -            X         X      X          -               X

Is suited for:
  Structural:       superficial system description
  Functional:       modelling of system processes
  Behavioural:      systems with clearly defined states
  Rule:             modelling how agents make decisions
  Object:           looking at the system as modules
  Communication:    systems built around communicating modules
  Actor and role:   systems with several agents with various tasks and goals

Drawbacks:
  Structural:       processes and use of resources are not shown
  Functional:       artificial intelligence poorly modelled
  Behavioural:      may easily become large and difficult to follow
  Rule:             difficult to get a holistic impression of the system
  Object:           some agent properties are difficult to express
  Communication:    models the course of processes, not the processes themselves
  Actor and role:   no established conceptual model/modelling language yet

Table 4.1 - The different perspectives' modelling abilities

Three of the perspectives, namely the rule perspective, the object perspective and the actor and role perspective, offer the possibility of modelling all the important properties of intelligent agents listed in chapter 4.1. The rule perspective has a drawback in that a complete set of rules describing anything but the tiniest agent system will be too large for a human being to keep track of and understand completely. If the rules are abstracted to a higher level, where the number of rules is significantly lower and the rules themselves more comprehensible, this perspective can be used to explain to the users how the agent makes its decisions. The perspective can be useful for system developers as well, as it can be used to describe the agent's work processes very accurately.

The object perspective is also a good developer's perspective, as it can be used to model how the different modules in the agent system are connected, and how the objects in the system interact. The problem with the object perspective is that some properties, such as flexible communication between agents, are difficult to describe the way object models are used today. The object perspective can also be used to explain to users how a system works, if aggregated objects with simplified functionality are used. In this way the abstraction level can be raised.
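A minimal sketch of such an aggregated object is given below; the module names (Sensors, Planner, Effectors) and their trivial behaviour are invented purely for illustration.

```python
# Object-perspective sketch: the agent is an aggregate object whose internal
# modules are hidden behind one simplified method. All names are hypothetical.

class Sensors:
    def observe(self) -> dict:
        return {"new_mail": 2}          # stand-in for real observations

class Planner:
    def decide(self, state: dict) -> str:
        return "sort_mail" if state.get("new_mail") else "idle"

class Effectors:
    def perform(self, action: str) -> None:
        print("performing:", action)

class Agent:
    """Aggregated object: a user-level model needs only step(), while a
    developer-level model can open the aggregate and show the modules."""

    def __init__(self) -> None:
        self._sensors = Sensors()
        self._planner = Planner()
        self._effectors = Effectors()

    def step(self) -> None:
        state = self._sensors.observe()
        self._effectors.perform(self._planner.decide(state))

Agent().step()                          # -> performing: sort_mail
```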

The actor and role perspective seems promising, but so far there are few actual modelling languages based on it. The perspective is especially good for modelling systems with several agents that must cooperate to reach the system's overall goals. It will probably not be used as a basis for conceptual models intended for end users, as it would be too complex and require prior knowledge that most end users do not have.

The four perspectives that cannot express all the important agent properties are still well suited for modelling agent systems with emphasis on different aspects. The structural perspective is good for emphasizing the entities of the system and the relations between them. The functional perspective describes the processes in the system well, at whatever level of detail is most practical. The behavioural perspective can paint a good picture of how the agents' work is driven by events caused by the agents themselves or by their surroundings. The communication perspective models the course of events in the agent system as agreements between the actors in the system, where agents, users, specific applications and so on can be actors.

These four perspectives are each well suited for describing agent systems in their own way, and will suit different kinds of agents. Graphical modelling languages are often based on one of these four perspectives. The level of abstraction can be varied, and if such a model is combined with, for example, a rule-based model below a certain level of abstraction, the result can be a good modelling language for both end users and system developers. By combining several modelling perspectives, a comprehensible modelling language able to express all important agent properties can be created.



5. Discussion

This paper has consisted of two main parts: a literature study on the topic of intelligent agents, and an evaluation of whether existing conceptual modelling languages can be used to model intelligent agents.

5.1 Literature Study Concerning Intelligent Agents

The field "intelligent agents" has become more visible on the software scene since the beginning of this project. The number of publications concerning agents availble on the Internet has increased dramatically. The most certain indication that intelligent agents are becoming more popular is maybe that they are mentioned more and more often in popular science articles in general, and even in magazines like Newsweek and Time Magazine. It also seems like all major software developers have one or more research groups dedicated to agent-related work by now.

It is difficult to predict what the introduction of intelligent agents to various situations will result in. One of the more important challenges is to make the new information-based world of global computer networks and ever faster computer technology more accessible to people both with and without a background in computer science. This is crucial for making people use the computer as a medium as much as a tool, a natural consequence of the increasing number of services available over computer networks.

Intelligent agents can be used in many different situations, but introducing them everywhere should not be the primary goal. For example, students searching for information may learn more through the process of searching, by talking to other students, professors or library personnel, than by simply being shown the results of an agent search. In such cases agents can play a different role: helping people who need information to find and communicate with other people who share their interests. Instead of bringing information to the user, agents can be used to establish contact between people.

In the long run, intelligent agents will become "smarter" and gain increasing functionality. The vision of people saying "Have your agent contact my agent for a meeting next week!" is already within reach, technologically. It is not so easy, however, to make people regard intelligent agents as trustworthy helpers in various tasks. A lot of work remains on the user interface between user and agent: it must inspire confidence, while at the same time not creating the impression that the agent can do something it actually cannot. Further, the information that reaches the user through the interface may be the result of a horde of intelligent agents' work, but it is desirable that the user perceives it as cooperation with a single agent. This keeps the relationship between the user and the agent technology as personal as possible, so that trust can be built. And it is, of course, of utmost importance that the agent actually does what it is supposed to do.
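One conceivable way to realize this single-agent impression is a simple facade, where the user only ever talks to one front agent that quietly delegates to the others. The sketch below is purely illustrative; the class and method names are invented.

```python
# Facade sketch: a horde of worker agents behind a single front agent, so
# that the user perceives one cooperating partner. All names are hypothetical.

class WorkerAgent:
    def __init__(self, speciality: str):
        self.speciality = speciality

    def handle(self, request: str) -> str:
        return f"[{self.speciality}] result for '{request}'"

class FrontAgent:
    """The only agent the user ever sees."""

    def __init__(self, workers: list):
        self._workers = workers

    def ask(self, request: str) -> str:
        # Collect the workers' partial answers and present them as one
        # reply, phrased as if a single agent did all the work.
        parts = [w.handle(request) for w in self._workers]
        return "Here is what I found: " + "; ".join(parts)

front = FrontAgent([WorkerAgent("mail"), WorkerAgent("web search")])
print(front.ask("papers on intelligent agents"))
```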

It may seem as if there will eventually be no limits to what agents can do for users. The limitation lies in how advanced an artificial intelligence we can build. Given the necessary technology, agents that think and act disturbingly like people can be created. Creativity can be defined as using old ideas in new situations; through a sensible connection between observations and computer logic, an agent can come up with suggestions for problem solving that may seem creative. In the future we should strive to give agents an appropriate face, as well as to improve the "thinking" technology behind them.

5.2 Intelligent Agents and Conceptual Models

As part of the task of developing agents and making them more attractive and accessible to users, we depend on conceptual models that people can understand. Agent systems may seem to be "merely" another branch of traditional software and information systems, tempting one to continue using established modelling languages for them. The introduction of intelligent agents means, however, that the software takes on new properties and qualities, and that the user experiences the agents' work without controlling it directly.

To get an accurate picture of how the intelligent agent can do this, explanations of how it is done must be available to the user. If a complete explanation is given, it will be too comprehensive for normal users to understand. If the explanation is over-simplified, there is a risk that users will anthropomorphize the agents, that is, ascribe to them complex human skills which they obviously do not possess.

When a conceptual model for explaining intelligent agents' mode of operation is to be constructed, a starting point can be found in the existing perspectives for conceptual modelling. Several modelling languages have resulted from analysis and design of systems and organizations. These languages can be divided into the following perspectives:

  1. The rule perspective
  2. The object perspective
  3. The actor and role perspective
  4. The structural perspective
  5. The functional perspective
  6. The behavioural perspective
  7. The communication perspective

Within the different perspectives there are several established modelling languages in use today. Some are based on graphical representations, while others are textual. Which one to use depends on the audience the model is meant for. The modelling language can also be chosen based on which basic agent properties it is necessary to model for the particular agent system.

The first three alternatives above are best suited for conceptual models, as they can express all of the eight most important basic agent properties identified in chapters 2 and 3:

  1. Autonomy
  2. Reactivity
  3. Proactivity
  4. Continuity
  5. Learning
  6. Communicating
  7. Flexibility
  8. Moods

The last four perspectives lack the ability to express one or more of the basic agent properties. Through combinations of different modelling languages, however, it is possible to create models that show every aspect of an agent system. Which properties are most important to model will vary from one agent system to another.

Even though it is often possible to model agents efficiently through more or less prosaic textual descriptions, existing perspectives for conceptual modelling can be used to model intelligent agents more precisely. As the definition of what intelligent agents are becomes clearer, modelling languages for agent systems will probably evolve from the existing ones.



6. Conclusion

The paper began by defining an intelligent agent as a piece of autonomous, communicating, reactive and proactive software, and went on to describe various aspects of intelligent agents through a literature study, including some fundamental principles of agent architecture.

The different topics in the introduction to intelligent agents in chapter 2 should give the reader a good basis for considering how user interfaces for intelligent agent systems should be designed.

To show some concrete examples of how intelligent agents could be used, chapter 3 draws up a scenario of how intelligent agents can aid students in their work at the university. Filtering and sorting e-mail, assistance in searching for information on the World Wide Web, and agent-managed user interfaces are some areas where agents can already be of help today.

It is necessary that users have as realistic an idea as possible of what agents are and how they work, so that they can trust their agents and use them in the best possible way. For this to happen, appropriate conceptual models of agents and their modes of operation must be developed.

Various traditional and experimental modelling languages have been evaluated based on their ability to express the properties intelligent agents may have. Comparing these abilities systematically shows that no single modelling perspective suggests itself as the ultimate solution for modelling intelligent agents.

There are three main questions one must ask oneself when a conceptual model for an agent system is to be chosen:

  1. What audience is the model meant for?
  2. What properties of the agent are to be emphasized?
  3. What level of abstraction shall the model have?

Some modelling languages can express all the basic agent properties, but quickly become too complex to be used for describing large systems. Other modelling languages give easily understood presentations of agents, but cannot model all necessary aspects of the system. Judging from the situation today, using combinations of the various existing modelling languages, guided by the three questions above, seems to be the best way to model agent systems.



Appendix A: Glossary

Agent
Software that operates more or less autonomously, acts socially towards other agents and human users, reacts to various forms of stimuli from its surroundings, and acts on its own to get closer to its defined goals.

Architecture
A specific methodology; a foundation for designing agents. It specifies how the agent can be decomposed into a set of modules, and how these modules are to cooperate.

Blackboard Architecture
An architecture where exchange of information between agents takes place on a "blackboard", a globally accessible data structure.

Computer Pencilcase
A project at NTNU. The idea is to modernize and increase the use of computers in education by making it easier for students to get the necessary computer access. The students buy laptop computers, while the university supplies software and sockets for connecting the laptops to the Internet and the university network.

Information agent
An agent that can respond to requests from a user or other agents by collecting and formatting information from various (often distributed) information sources.

Interface agent
An agent that uses artificial intelligence to aid a user in the use of a particular computer application.

NTNU
Norges teknisk-naturvitenskapelige universitet, or NUST, the Norwegian University of Science and Technology.

Plan
A presentation of a set of actions which, when executed, will lead to a goal being reached. A plan may involve several agents.

Predictive agent
An agent that works by predicting future system states: it looks at the present state, evaluates the consequences of its own potential actions, and picks the action that leads the system as a whole furthest towards the agent's goal(s). Also called an anticipatory agent.
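Read as an algorithm, this definition amounts to a one-step lookahead. A minimal sketch, with an invented numeric "distance to goal" standing in for a real evaluation of system states:

```python
# One-step lookahead sketch of a predictive (anticipatory) agent.
# The state, actions and scoring function are all hypothetical.

def predict(state: float, action: float) -> float:
    """Predict the next system state if the action is taken."""
    return state + action

def distance_to_goal(state: float, goal: float) -> float:
    return abs(goal - state)

def choose_action(state: float, actions: list, goal: float) -> float:
    """Pick the action whose predicted outcome lies closest to the goal."""
    return min(actions, key=lambda a: distance_to_goal(predict(state, a), goal))

print(choose_action(state=0.0, actions=[-1.0, 0.5, 2.0], goal=1.0))  # -> 0.5
```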

Proactive
The ability to act on one's own initiative; to not merely be driven by events, but to act rationally to reach one's own or defined goals.

Reactive
The ability to sense the surroundings and to react to sensed changes within reasonable time.

Speech Act
A pragmatic communication theory whose main axiom is that all communication is based on actions performed by a speaker with the intention of changing a listener's mental state.

Temporal logic
A branch of mathematical logic where modal operators are used to express when expressions are true or false in time, e.g. "is always true", "is true until..." etc.

World Wide Web
WWW for short. Term describing the most popular and widespread system for presentation and linking of information on the Internet. Documents presented on the WWW are written in HTML, Hyper Text Markup Language.


Appendix B: References

[Eichmann]
David Eichmann, "Ethical Web Agents", Repository Based Software Engineering Program, University of Houston - Clear Lake
http://rbse.jsc.nasa.gov/eichmann/www-f94/ethics/ethics.html

[Elofson]
Greg Elofson, "Intelligent Agents extend knowledge-based systems feasibility", IBM Systems Journal, Vol. 34, No. 1, 1995

[Erickson]
Thomas Erickson, "Designing Agents as if People Mattered", User Experience Laboratory, Advanced Technology, Apple Computer

[Franklin & Graesser]
Stan Franklin, Art Graesser, "Is it an agent, or just a Program? : A Taxonomy for Autonomous Agents", Institute for Intelligent Systems, University of Memphis
http://www.msci.memphis.edu/~franklin/

[Harel]
David Harel, "Statecharts: A Visual Formalism for Complex Systems", Science of Computer Programming 8, 1987, pp 231-274, North-Holland

[Kautz et al.]
Henry A. Kautz, Bart Selman, Michael Coen, "Bottom-Up Design of Software Agents", Communications of the ACM, July 1994, Vol. 37, No. 7

[Krogstie & Sølvberg]
John Krogstie, Arne Sølvberg (eds.), "Information Systems Engineering - Advanced Conceptual Modeling", Information Systems Group, The Norwegian University of Science and Technology, draft version, January 1996

[Lashkari et al.]
Yezdi Lashkari, Max Metral, Pattie Maes, "Collaborative Interface Agents", Conference of the American Association for Artificial Intelligence, Seattle, August 1994

[Lieberman]
Henry Lieberman, "Attaching Interface Agent Software to Applications", Media Laboratory, Massachusetts Institute of Technology

[Maes 1994a]
Pattie Maes, "Agents that Reduce Work and Information Overload", Communications of the ACM, July 1994, Vol. 37, No. 7

[Maes 1994b]
Pattie Maes, "Social Interface Agents: Acquiring competence by learning from users and other agents" in Etzioni (red.) "Software Agents - Papers from the 1994 Spring Symposium", AAAI Press

[Medina-Mora et al.]
Raul Medina-Mora, Terry Winograd, Rodrigo Flores, Fernando Flores, "The Action Workflow Approach to Workflow Management Technology", pp 1-10 CSCW '92 ACM Conference on Computer-Supported Cooperative Work, ACM, 1992

[Russel & Norvig]
Stuart Russell, Peter Norvig, "Artificial Intelligence - A Modern Approach", Prentice Hall, New Jersey, 1995, ISBN 0-13-103805-2

[Saake et al.]
Gunter Saake, Stefan Conrad, Can Türker, "From Object Specification towards Agent Design", Proceedings of the 14th International Conference on Object-Oriented and Entity-Relationship Modeling, Gold Coast, Australia, pp 329-340, LNCS 1021, Springer Verlag, Dec. 1995
http://wwwit.cs.uni-magdeburg.de/institut/veroeffentlichungen/95/SaaConTue95.html

[Searle]
J. R. Searle, "A Taxonomy of Illocutionary Acts", Cambridge University Press, Cambridge, 1975

[Smith et al.]
David Canfield Smith, Allen Cypher, Jim Spohrer, "KIDSIM: Programming Agents Without a Programming Language", Communications of the ACM, July 1994, Vol. 37, No. 7

[Sutcliffe & Maiden]
A. G. Sutcliffe, N. A. M. Maiden, "Bridging the Requirements Gap: Policies, Goals and Domains", Proceedings of the Seventh International Workshop on Software Specification and Design, pp 52-55, Redondo Beach, USA, December 6-7 1993

[Thomas & Johnston]
F. Thomas and O. Johnston, "Disney Animation: The Illusion of Life", Abbeville Press, New York 1981

[Winograd & Flores]
Terry Winograd, Fernando Flores, pp 54-69, "Understanding Computers and Cognition", Addison-Wesley, 1986

[Wooldridge & Jennings]
Michael J. Wooldridge, Nicholas R. Jennings (Eds.), "Intelligent Agents", ECAI-94 Workshop on Agent Theories, Architectures and Languages, Amsterdam, The Netherlands, August 1994

