Tuesday, May 04, 2004

Intelligent Agent part 2: Architecture

Here is the second part of my research study on software intelligent agents.

Intelligent Agent Architecture

Current agent architectures can be classified as 1) logic-based agents, 2) reactive agents, 3) BDI (belief-desire-intention) agents, and 4) layered architectures. The following summarizes these approaches very briefly and provides some insight into their pros and cons.


The logic-based approach reduces the decision-making strategy to a problem of proof (predicate logic): the agent uses its symbolic representation of the world (usually logical formulae) to perform logical deduction and select an appropriate action. In this scheme, the deduction rules that determine the agent's behaviour are the “program” (execution), whereas the symbolic representation of the environment is the current database (data).
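As a minimal sketch of this scheme (the facts, rules and actions below are invented for illustration, not taken from the references): perception maps the raw environment to formulae in a database, and deduction rules map the database to an action.

```python
# Caricature of a logic-based agent (illustrative only).
# The "database" is a set of ground facts; each deduction "rule"
# pairs a premise over the database with the action it proves.

def see(environment):
    """Perception: map raw environment state to symbolic formulae."""
    facts = set()
    if environment["dirt"]:
        facts.add("Dirt(here)")
    return facts

# Deduction rules as (premise, action) pairs standing in for proofs.
RULES = [
    (lambda db: "Dirt(here)" in db, "suck"),
    (lambda db: True, "move"),          # default action
]

def next_action(database):
    for premise, action in RULES:
        if premise(database):
            return action

db = see({"dirt": True})
print(next_action(db))   # -> suck
```

Note that the rules are tried in a fixed order here; a real theorem prover would search for any provable action, which is exactly where the efficiency problems discussed below come from.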


This traditional approach, originating from early AI work, suffers from disadvantages such as the following:

  1. Because of calculative rationality (the optimal action is deduced at time j based on the environment as seen at time i, where j > i), we must assume that the environment does not change while the agent chooses its action, which reduces the practicality of such agents. Although compiling the logical specification into a more amenable form can improve efficiency, the issue persists in time-constrained and highly dynamic environments.

  2. The agent's perception is realised through a non-obvious mapping from real environment conditions to a set of symbolic formulae. It has been acknowledged that, as seductive and useful as logic is for conceptualising and specifying agents, it is quite difficult to represent and reason about complex, dynamic, physical or non-physical environments using explicit symbolic reasoning.



The reactive agent architecture (alternatively referred to as behavioural or situated), proposed by R. Brooks, aims to avoid the difficulty of symbolic representation by providing a more economical, elegant and robust agent whose behaviour is simply a function of its local environment. One of the most famous techniques, the subsumption architecture, is characterized by a mapping from perceptual input to action (situation → action) and a layered control hierarchy in charge of resolving conflicts between simultaneous actions (a lower layer's action takes precedence over a higher, more abstract layer's). Agent performance and complexity depend on the number of behaviours and percepts (raw sensor input is used rather than a complex symbolic representation), and hard-wiring the decision-making process can yield constant decision time, making such agents computationally simple and attractive. However, this simplicity leads to some disadvantages:

  1. Only local environment information is gathered, so non-local information cannot be taken into account.

  2. It is unclear how a purely reactive paradigm could lead to some form of learning from experience and performance improvement over time.

  3. Implementation engineering relies heavily on experimentation and a trial-and-error methodology.

  4. The hierarchical form of action control can lead to confusion when a large number of different behaviours are modelled.
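The subsumption idea can be sketched in a few lines (the behaviours and percepts here are invented for illustration; Brooks' original design wires layers together as augmented finite-state machines, not as a Python loop):

```python
# Subsumption-style control sketch (illustrative behaviours).
# Layers are tried lowest first; the first behaviour whose condition
# matches the raw percept wins, subsuming the higher layers.

def avoid_obstacle(percept):
    return "turn_left" if percept.get("obstacle") else None

def pick_up_sample(percept):
    return "grab" if percept.get("sample") else None

def wander(percept):
    return "move_random"          # always-applicable fallback

LAYERS = [avoid_obstacle, pick_up_sample, wander]  # priority order

def decide(percept):
    for behaviour in LAYERS:
        action = behaviour(percept)
        if action is not None:
            return action

print(decide({"obstacle": True, "sample": True}))  # -> turn_left
print(decide({"sample": True}))                    # -> grab
```

Decision time is bounded by the (fixed, small) number of layers, which is the computational attraction mentioned above.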



The BDI, or belief-desire-intention, architecture is influenced by the study of practical reasoning (the human reasoning that takes place during a continuous decision-action process). This philosophical concept explains any action as a combination of an intention (what goals are to be achieved) and a chosen option (how to proceed to achieve these goals). A BDI system assumes that a course of action taken to satisfy a goal should not stop simply because of a failure, but in the meantime it should not persist without any reasonable chance of success (or once the original intention is no longer relevant). This is the dilemma of finding the right balance between reactive and proactive behaviour in a BDI agent. Purely goal-directed agents never stop to reconsider their intentions, even after important environmental changes that could invalidate the original goal, whereas purely event-driven agents never spend enough time achieving their goals and waste all their computational time reacting to the environment. In general, the more static an environment is, the more proactive agents should be, whereas in more dynamic environments agents need the ability to react to changes.
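The reconsideration dilemma is usually rendered as a control loop. Below is a deliberately tiny, concrete sketch of one BDI-style cycle (the 1-D "reach a target cell" domain and all names are invented; real BDI systems work over plan libraries): beliefs are revised from the percept, the intention is reconsidered (dropped when achieved or hopeless), and otherwise an action serving the intention is chosen.

```python
# Minimal concrete BDI-style cycle (illustrative; domain invented:
# the agent intends to reach a target cell on a 1-D track).

def bdi_step(beliefs, intention, percept):
    """One cycle: revise beliefs, reconsider the intention, then act."""
    beliefs = dict(beliefs, **percept)            # belief revision
    # Reconsider: drop the intention once achieved or impossible.
    # Pure proactive agents would skip this test; pure reactive
    # agents would re-derive the intention on every cycle.
    if beliefs["pos"] == intention or beliefs.get("blocked"):
        intention = None
    if intention is None:
        return beliefs, intention, "idle"
    action = "right" if beliefs["pos"] < intention else "left"
    return beliefs, intention, action

beliefs, intention = {"pos": 0}, 2                # desire: reach cell 2
trace = []
for pos in (0, 1, 2):                             # percepts over time
    beliefs, intention, action = bdi_step(beliefs, intention, {"pos": pos})
    trace.append(action)
print(trace)   # -> ['right', 'right', 'idle']
```

How often and how cheaply the reconsideration test runs is exactly the static-vs-dynamic trade-off described above.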



The layered architecture aims to achieve a better balance between agent proactiveness and reactiveness by introducing dedicated, specialized layers. Layering the architecture into subsystems allows one to design specialized subsystems adapted to specific tasks; for example, one subsystem deals with reactive behaviour while another deals with proactive behaviour. Most of these architectures use either horizontal or vertical layering for their information and control flow. Horizontal layering requires some form of mediation between the layers, since they all have access to the input and can all produce action output. Vertical layering involves a flow sequence between the layers, with the lowest layer receiving the input and forwarding it to a higher, more abstract layer when necessary (i.e. when the current layer's expertise and competence cannot deal with the input). Information flowing from lowest to highest is referred to as bottom-up activation, while commands flowing from highest to lowest are referred to as top-down execution. This architecture is very popular due to its pragmatism and the intuitive mapping from proactive, reactive and social behaviours onto specific, dedicated subsystems.
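Vertical layering with bottom-up activation can be sketched as follows (the three layers and their competence tests are invented for illustration): each layer either handles the input within its competence or forwards it upward to a more abstract layer.

```python
# Vertical-layering sketch (illustrative layers).  The percept enters
# the lowest layer; a layer forwards it upward only when the input is
# outside its competence (bottom-up activation).

def reactive_layer(percept):
    if percept.get("obstacle"):
        return "swerve"            # competent: handle locally
    return None                    # not competent: pass upward

def planning_layer(percept):
    if percept.get("goal"):
        return "plan_route"
    return None

def social_layer(percept):
    return "ask_other_agents"      # most abstract layer: last resort

LAYERS = [reactive_layer, planning_layer, social_layer]  # lowest first

def handle(percept):
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:     # this layer's decision is executed
            return action

print(handle({"obstacle": True}))      # -> swerve
print(handle({"goal": "depot"}))       # -> plan_route
print(handle({}))                      # -> ask_other_agents
```

Note the structural similarity to the subsumption chain; the difference is one of interpretation: here each layer is a full subsystem (reactive, planning, social) rather than a single situated behaviour.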



Taken from

  1. “Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence”, Chapter 2 “Multiagent Systems and Societies of Agents”, Huhns and Stephens (1999)

Saturday, May 01, 2004

Intelligent Agent part 1: Definition and Theory

I did some research for a friend of mine (no more details are possible ;-) on intelligent (software) agents. This is quite a broad and very interesting area. Doing this academic-oriented research reminded me of my master's degree days, and I must admit that working within a theoretical framework greatly contrasts with my daily work at the job... which is all about practice and not much about theory.

I've decided to publish some of the theoretical material I've surveyed... here is the first part.

What are intelligent agents

Agents are loosely defined as computer-based systems characterized by their autonomy (no direct human intervention required), their social ability (allowing them to communicate with other agents or humans), their reactivity (allowing them to react to a changing environment) and finally their proactiveness (allowing them to take initiative to meet their goals). In fact, the key characteristics of reactivity, proactiveness and social ability can be considered the main differentiators between a “normal” agent and an “intelligent” agent.


Other, more formal definitions would also include human-like concepts such as knowledge, belief, intention and obligation, as well as additional properties like mobility, veracity, benevolence and rationality.



An important aspect of agents is the surrounding environment they operate in. To see the impact of environmental properties on agents, one can classify environments as being:

  1. Accessible/inaccessible: an accessible environment is one for which all states are known, up to date and accurate. In practically all cases the environment is inaccessible, e.g. the surrounding physical world, the Internet, etc.

  2. Deterministic/non-deterministic: except in the most trivial cases, agents have only partial, non-deterministic control over their environment, i.e. the same action performed twice in seemingly identical circumstances may have different consequences. Although non-determinism leads to more complex systems, to all intents and purposes it is recognized that a highly complex deterministic environment is just as difficult as a non-deterministic one.

  3. Episodic/non-episodic: in an episodic system, performance depends only on the current discrete episode. This simplifies design, since all possible scenarios and interactions in future episodes can be ignored when deciding which actions the agent should take.

  4. Static/dynamic: a static environment changes only in reaction to actions performed by the agent; in most situations, other known or unknown processes also operate on the environment, making it dynamic.

  5. Discrete/continuous: a discrete environment implies that only a finite number of actions and percepts are possible.


It is recognised that, with respect to building agents, the most complex class of environments are those that are inaccessible, non-deterministic, non-episodic, dynamic and continuous.
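The five dimensions above can be encoded as a small data structure (an illustrative encoding of the classification, nothing more), with the hardest class being the one where every property takes its difficult value:

```python
# The five environment dimensions as a small record
# (illustrative encoding of the classification above).

from dataclasses import dataclass

@dataclass
class Environment:
    accessible: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

    def is_hardest_class(self):
        """True for the most complex class for agent builders."""
        return not any((self.accessible, self.deterministic,
                        self.episodic, self.static, self.discrete))

internet = Environment(accessible=False, deterministic=False,
                       episodic=False, static=False, discrete=False)
print(internet.is_hardest_class())   # -> True
```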



Agent theory

A substantial part of agent theory is built around the concept of the intentional system, i.e. a system whose behaviour is predicted based on human-like attitudes: belief and knowledge (referred to as information attitudes), and desire, intention, obligation, commitment and choice (referred to as pro-attitudes). Following this concept, nearly all objects, from the simplest automata-like objects to the most complex ones, can be described as having some form of intentional stance. This provides a level of abstraction that is useful when dealing with the complexity of the latter.


As “compelling” as the intentional notions are for an intuitive representation, description, understanding and explanation of agent-based systems, they certainly represent a difficult challenge when one wishes to reason about them within a logical theoretical framework. A theory representing intentional notions cannot be based on classical logic, which relies on truth-functional operators, because these notions are referentially opaque. The substitution rules applicable to first-order logic do not work with intentional sentences such as “Janine believes p”, since replacing p with a logically equivalent formula does not preserve meaning.


Because of this limitation, other formalisms have been proposed for handling the syntactic rules and semantic models associated with intentional systems. One of the best-known semantic models is characterized by the so-called “possible worlds”, supported by an attractive mathematical tool called correspondence theory. However, as compelling as this model is for its logical representation, it suffers from limitations related to the famous “logical omniscience” problem (the agent is forced to believe every logical consequence of its beliefs, including every tautology). In their 1995 study, the authors recognized that the alternative theories developed for either belief/knowledge representation (information attitudes) or goals/desires (pro-attitudes) all suffer from various deficiencies and/or limitations, representing challenges for the agent theorist.
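The possible-worlds idea itself is simple to state: an agent believes p exactly when p holds in every world it considers possible. A toy sketch (the worlds and propositions are invented; real possible-worlds semantics is defined over Kripke structures, not Python sets):

```python
# Possible-worlds belief sketch: the agent believes a proposition
# iff it holds in every world the agent cannot rule out.

worlds = {
    "w1": {"door_open", "light_on"},
    "w2": {"door_open"},
}
accessible = {"w1", "w2"}      # worlds the agent considers possible

def believes(proposition):
    return all(proposition in worlds[w] for w in accessible)

print(believes("door_open"))   # -> True  (holds in w1 and w2)
print(believes("light_on"))    # -> False (fails in w2)
```

Logical omniscience follows directly from this definition: anything true in all worlds, such as a tautology, is automatically believed, however irrelevant or hard to compute it may be.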


A more useful and complete theory for agent systems should not only represent the intentional stance but should also embrace a larger logical framework covering other components, such as the dynamic aspects of agency, the relationship between an agent's information and its pro-attitudes, and how cognitive states change over time and react to environmental stimuli. Some tentative work aims to achieve this very challenging task.


Of particular interest is communication between agents, which is modelled using speech act theory: all communications are considered actions performed to bring about change in the world (usually in the listener's internal state). The theory attempts to classify the various forms of speech acts, the two most prominent being representative speech acts (informing) and directives (requesting). See the multiagent systems discussion below.
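In code, a speech act is just a message whose performative determines how it changes the receiver's state. The sketch below is loosely in the style of agent communication languages such as KQML/FIPA-ACL; the message structure and handler are invented for illustration:

```python
# Speech-act-style message sketch ("inform" for representatives,
# "request" for directives; structure invented, loosely ACL-like).

from dataclasses import dataclass

@dataclass
class Message:
    performative: str      # "inform" or "request"
    sender: str
    receiver: str
    content: str

def handle(message, beliefs, tasks):
    """Communication as action: change the listener's internal state."""
    if message.performative == "inform":
        beliefs.add(message.content)          # representative act
    elif message.performative == "request":
        tasks.append(message.content)         # directive act
    return beliefs, tasks

beliefs, tasks = set(), []
handle(Message("inform", "a1", "a2", "door_open"), beliefs, tasks)
handle(Message("request", "a1", "a2", "close_door"), beliefs, tasks)
print(beliefs, tasks)   # -> {'door_open'} ['close_door']
```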




Taken from

  1. “Intelligent Agents: Theory and Practice”, Wooldridge and Jennings (1995)

  2. “Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence”, Chapter 1 “Intelligent Agents”, Wooldridge (1999)