CHI 97 Electronic Publications: Tutorials

Software Agents

Marc Millier
Intel Architecture Labs
2111 NE 25th
Hillsboro OR 97124 USA
+1 503 264 6770
mmillier@ibeam.intel.com

ABSTRACT

"Agents" and "Agent technology" have become the new buzzwords in computer software. Much of this `buzz' is pure hype similar to the AI hype of the 80's. The software agents tutorial is intended to provide the attendee an overview of the software and user interface technologies being applied to autonomous software modules known as "Agents". This overview should allow the student to separate the "wheat from the chaff" and provide pointers for the student's further research into the technology.

Keywords

Software Agents, Distributed Artificial Intelligence, Tutorial

© 1997 Copyright on this material is held by the authors.



INTRODUCTION

Goals

The tutorial is intended to provide information on agents and agent technology, educating more of the software development and CHI community about the nature and impact of current agent technology. By providing a reasonable backdrop for agents, this tutorial aims to let the student pursue additional reading material with a more objective eye for that which is real and that which is hyperbole (hysteria?).

Objectives

The tutorial student should leave with a general understanding of the structure and architectures of current agent technology, the terminology and common definitions in the field, and an understanding of some of the user interface issues in software agents.

TUTORIAL

What's an Agent?

The dictionary definition of agent turns out to be appropriate for this discussion:

agent \ˈā-jənt\ n 1 : something that produces or is capable of producing an effect : an active or efficient cause 2 : one who acts for or in the place of another by authority from him.

In our discussion of software agents, both of the above definitions apply. Software agents, by definition, are active, independent components. Most agents are designed to act as or for the user to help execute some task or operation.

Each developer and researcher in the agents field adopts their own definition of an agent. Leonard Foner defines an agent as a "program that performs tasks for a user" [1]. While this definition is accurate, it is not very useful. Pattie Maes offers a more useful one:

"A Software Agent is a computational system which has goals, sensors, and effectors, and decides autonomously which actions to take, and when"2

To my mind "Agent" is not a definition but a characteristic that software has to one degree or another. We can only define a software component as an agent by examining it's characteristics and behaviors. The first part of this tutorial is an examination of some of these characteristics and a demonstration of several software agents.

Kinds of Agents

Agents come in various forms for various purposes. In general, we can categorize software agents by the "sophistication" of the algorithms applied. This sophistication can range from simple rules (authored by the user or some expert) to fully knowledge-engineered, expert agents that embody some domain expertise. I have arbitrarily divided agent software into three categories: 1) Fixed Agenda/Rule-Based, 2) Learning or Adaptive, and 3) Intelligent (AI-Engineered).
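The first category can be made concrete with a short sketch. The following is a minimal, hypothetical example of a fixed-agenda, rule-based agent: a mail-filing assistant whose rules are authored directly by the user. All names and the message format here are illustrative, not taken from any real system.

```python
# A minimal sketch of a fixed-agenda, rule-based agent: a hypothetical
# mail-filing assistant. The rules are simple (condition, action) pairs
# authored by the user; there is no learning or inference involved.

class RuleBasedAgent:
    def __init__(self):
        self.rules = []  # ordered list of (condition, action) pairs

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def handle(self, message):
        # Apply the first rule whose condition matches; otherwise do nothing.
        for condition, action in self.rules:
            if condition(message):
                return action(message)
        return None

agent = RuleBasedAgent()
agent.add_rule(lambda m: "urgent" in m["subject"].lower(),
               lambda m: "flag:" + m["subject"])
agent.add_rule(lambda m: m["sender"].endswith("@lists.example.org"),
               lambda m: "file:mailing-lists")

print(agent.handle({"sender": "boss@example.com", "subject": "URGENT: demo"}))
# flag:URGENT: demo
```

Because the agenda is fixed, the agent's behavior is completely predictable from its rules; the learning and intelligent categories replace or augment this hand-authored rule list with adaptation or domain knowledge.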

These categories are implementation characteristics of an agent, not definitions of utility. The role an agent plays in the user's context is a more useful taxonomy.

Agent Roles

Pattie Maes, in previous CHI tutorials, has identified a set of roles that agents can play in helping the user [2]:

  1. Eager Assistants
  2. Guides
  3. Memory Aids
  4. Filters & Critics
  5. Matchmakers
  6. Buying & Selling
  7. Entertainment.

A particular agent may play multiple roles for the user. For instance, assisting the user with a specific operation while guiding the user toward a high level goal.

The tutorial notes list several examples of software agents for each of these roles.

User Interface Issues

One of the key characteristics of agent software is autonomous behavior. This raises several user interface issues for the agent developer (as well as the user). Agent user interfaces must be unobtrusive, and must also allow the user to learn to "trust" what the agent will do, and when.

While the user interface for an agent must be "translucent," the agent must also give the user sufficient control to feel that it is an effective assistant, yet "stay out of the way" while the user completes their tasks. These trust and control issues, along with the independent nature of each agent, encourage the application of other user interface modes, especially speech and gestural control.

Another user interface element often associated with agents is the "characterization" or "personification" of the agent, such as the AT&T doggie that goes off and does work for you. In many cases the personification of the agent user interface can help the user develop trust and build a mental model of how the agent works. However, personification can also lead the user to believe the agent is more capable than it is, and can become a distraction to the user when performing real tasks.

Agent Architectures

Given the set of functionality for agents that we have described, all agent architectures can be broken down into four functions: Observation, Recognition, Planning and/or Inference, and Action or Execution.

These four functions reflect Pattie Maes's description of agents as having sensors and effectors, with the further detail of recognition of what the agent senses and a planning or inference step to make decisions about appropriate actions.
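The four-function decomposition above can be sketched as a simple cycle. The concrete functions below (a toy thermostat-style "agent") are purely illustrative assumptions; the point is the shape of the loop: sensors feed Observation, Recognition classifies the percepts into a known situation, Planning/Inference selects an intention, and Action drives the effectors.

```python
# A minimal sketch of the Observe -> Recognize -> Plan/Infer -> Act cycle.
# The domain (a toy thermostat) is hypothetical; only the decomposition
# into four functions reflects the architecture described in the text.

def observe(env):
    # sensors: sample the environment
    return {"temperature": env["temperature"]}

def recognize(percepts):
    # classify raw percepts into a situation the agent knows about
    return "too_cold" if percepts["temperature"] < 18 else "ok"

def plan(situation):
    # decide autonomously which action to take
    return "heat_on" if situation == "too_cold" else "heat_off"

def act(intention, env):
    # effectors: change the environment
    env["heater"] = (intention == "heat_on")
    return env

def agent_cycle(env):
    return act(plan(recognize(observe(env))), env)

env = agent_cycle({"temperature": 15, "heater": False})
print(env["heater"])
# True
```

A real agent would run this cycle continuously, and the Recognition and Planning steps are where the rule-based, learning, and AI-engineered categories differ most.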

The Software Agent Group at the MIT Media Lab has developed a model of the agent as a "third party" to the user's interaction with a task-oriented application. The agent observes both the user and the application to determine the "right thing to do".

In general, we think of agents as "Autonomous Objects", and thus develop agents as an extension of the object-oriented development process. We have even used the phrase "Agent-Oriented Software".

Agent Technologies

Many AI technologies are applicable to agents. These include pattern recognition (Neural Nets), Associative Networks and Memories, as well as expert systems and inference engines.

Agent specific technologies include communication and knowledge sharing structures (blackboards and the like) and action selection networks.
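The blackboard mentioned above is simply a shared store that agents read from and post to, so that loosely coupled agents can share knowledge without calling each other directly. The sketch below is a hypothetical minimal version; the two "agents" and the topics they use are invented for illustration.

```python
# A minimal sketch of a blackboard as a knowledge-sharing structure.
# Agents never call each other directly; they communicate only by
# posting to and reading from the shared blackboard.

class Blackboard:
    def __init__(self):
        self.entries = {}

    def post(self, topic, value):
        self.entries[topic] = value

    def read(self, topic):
        return self.entries.get(topic)

def sensor_agent(bb):
    # contributes a raw observation to the blackboard
    bb.post("raw_text", "meeting tuesday 3pm")

def parser_agent(bb):
    # watches for raw text and posts a structured interpretation
    text = bb.read("raw_text")
    if text and "tuesday" in text:
        bb.post("event", {"day": "tuesday", "time": "3pm"})

bb = Blackboard()
sensor_agent(bb)
parser_agent(bb)
print(bb.read("event"))
# {'day': 'tuesday', 'time': '3pm'}
```

The appeal of the blackboard for agent systems is exactly this decoupling: new agents can be added that react to existing topics without changing any other agent.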

The future of Agents

"With Holodecks, Transporters, and Replicators, the problem is solved!"

Humor aside, the artifacts in the Star Trek programs all have an "agent" perspective on the user interface. The interfaces are translucent, multi-modal, and directive in nature. The users of the holodeck use voice for simple commands and call forth a more direct interface when required. The Transporter is constructed to be used by an expert because of the complexity of operation and the hazard of ambiguous input. These are not bad lessons for creators of software agents. As agent technology matures, the translucent interfaces illustrated above will become more and more important. Users need to be able to direct their agents, not command them.

Are agents a good thing?

Lest one think that everyone believes agents are a good idea, it is interesting to investigate alternative viewpoints. Jaron Lanier (of VR fame) has stated that "The whole notion of intelligent agents is both wrong and evil!" [3]. His premise is that users will "dumb themselves down" to make their agents work better. In a more technical vein, Ben Shneiderman of the Institute for Systems Research has been quoted as saying that "Well defined interfaces always provides a better solution" [4].

I think that agents and agent technology are here to stay. Agents are simply another way to build tools that make the interaction between the software and the user more effective and useful. As always, the success and utility of this technology will depend on its appropriate use.

ACKNOWLEDGMENTS

Pattie Maes and Alan Wexelblat of the MIT Media Lab, for their work in previous tutorials in this subject area, as well as their ongoing research in software agents.

With inspiration from Steve McGeady, Intel Corporation and Jim Larson, Intel Corporation & CHI '97 Tutorial Chair.

REFERENCES

  1. Leonard Foner, "What's an Agent Anyway?", Agents Memo 93-01, Agents Group, MIT Media Lab, 1993.
  2. Pattie Maes and Alan Wexelblat, "Software Agents", CHI 96 Tutorial, 1996.
  3. Jaron Lanier, "Agents of Alienation", HotWired debate with Pattie Maes, July 1995.
  4. Benjamin Shneiderman, Institute for Systems Research, University of Maryland.
