
9. The philosopher from Birmingham

Aaron Sloman, professor of philosophy at the School of Computer Science of the University of Birmingham, certainly counts as one of the most influential theoreticians in the field of computer models of emotion. In an article from 1981 titled "Why robots will have emotions" he stated:

"Emotions involve complex processes produced by interactions between motives, beliefs, percepts, etc. E.g. real or imagined fulfilment or violation of a motive, or triggering of a 'motive-generator', can disturb processes produced by other motives. To understand emotions, therefore, we need to understand motives and the types of processes they can produce. This leads to a study of the global architecture of a mind."

(Sloman, 1981, p. 1)

Like Bates, Reilly and Elliott, Sloman represents the broad-and-shallow approach: for him it is more important to develop a complete system with little depth than individual modules with great depth. He is convinced that only in this way can a model be developed which reflects reality with some degree of realism.

Sloman and his coworkers in the Cognition and Affect Project have published, since 1981, a large number of works on the topic of "intelligent systems with emotions", which can be divided roughly into three categories:

  1. works concerned with the fundamental approach to the construction of an intelligent system;
  2. works dealing with the fundamental elements of such a system; and
  3. works which attempt to implement such a system in practice.

To understand Sloman's approach correctly, one must see it in the context of his epistemological programme, which is concerned not primarily with emotions but with the construction of intelligent systems.

I shall briefly sketch the core ideas of Sloman's theory, because they form the basis for understanding the "libidinal computer" developed by Ian Wright (see below).

9.1. Approaches to the construction of intelligent systems

Sloman's interest lies not primarily in a simulation of the human mind, but in the development of a general "intelligent system", independent of its physical substrate. Humans, bonobos, computers and extraterrestrial beings are different implementations of such intelligent systems - the underlying construction principles are, however, identical.

Sloman divides past attempts to develop a theory of the functioning of the human mind (and thus of intelligent systems in general) into three large groups: semantics-based, phenomena-based and design-based.

Semantics-based approaches analyze how humans describe psychological states and processes, in order to determine the implicit meanings underlying the use of everyday language. Among these he counts, among others, the approaches of Ortony, Clore and Collins as well as of Johnson-Laird and Oatley. Sloman's argument against these approaches is: "As a source of information about mental processes such enquiries restrict us to current 'common sense' with all its errors and limitations." (Sloman, 1993, p. 3)

Some philosophers who analyze concepts also produce, according to Sloman, semantics-based theories. What differentiates them from the psychologists, however, is that they do not concentrate on existing concepts alone, but are often more interested in the set of all possible concepts.

Phenomena-based approaches assume that psychological phenomena like "emotion", "motivation" or "consciousness" are already clear, and that everybody can intuitively recognize concrete examples of them. They therefore merely try to correlate co-occurring measurable phenomena (e.g. physiological effects, behaviour, environmental characteristics) with the occurrence of such psychological phenomena. These approaches, Sloman argues, are found particularly among psychologists. His criticism of such approaches is:

"Phenomena-based theories that appear to be concerned with mechanisms, because they relate behaviour to neurophysiological structures or processes, often turn out on close examination to be concerned only with empirical correlations between behaviour and internal processes: they do not show why or how the mechanisms identified produce their alleged effects. That requires something analogous to a mathematical proof, or logical deduction, and most cognitive theories fall far short of that."

(Sloman, 1993, p. 3)

Design-based approaches transcend the limits of these two approaches. Sloman refers here expressly to the work of the philosopher Daniel Dennett, who decisively shaped the debate on intelligent systems and consciousness.

Dennett differentiates between three approaches one can take when making predictions about an entity: the physical stance, the design stance and the intentional stance. The physical stance is "simply the standard laborious method of the physical sciences" (Dennett, 1996, p. 28); the design stance, on the other hand, assumes "that an entity is designed as I suppose it to be, and that it will operate according to that design" (Dennett, 1996, p. 29). The intentional stance, which according to Dennett can also be regarded as a "sub-species" of the design stance, predicts the behaviour of an entity, for example of a computer program, "as if it were a rational agent" (Dennett, 1996, p. 31).

Representatives of the design-based approach take the position of an engineer who tries to design a system that produces the phenomena to be explained. A design, however, does not necessarily presuppose a designer:

"The concept of "design" used here is very general, and does not imply the existence of a designer. Neither does it require that where there's no designer there must have been something like an evolutionary selection process. We are not primarily concerned with origins, but with what underlies and explains capabilities of a working system."

(Sloman, 1993, p. 4)

Strictly speaking, a design is nothing but an abstraction which determines a class of possible instances. It need not itself be concrete or materially implemented - although its instances may well have a physical form.

For Sloman, the term "design" is closely linked with the term "niche". A niche, too, is neither a material entity nor a geographical region. Sloman defines it in a broad sense as a collection of requirements which a functioning system has to meet.

In the development of intelligent agents in AI, design and niche play a special role. Sloman speaks of design-space and niche-space. A genuinely intelligent system will interact with its environment and change in the course of its development. It thus moves along a certain trajectory through design-space. To this corresponds a certain trajectory through niche-space, because through its changes the system can occupy new niches:

"A design-based theory locates human mechanisms within a space of possible designs, covering both actual and possible organisms and also possible non-biological intelligent systems."

(Sloman, 1991, p. 5)

Sloman identifies different kinds of trajectories through design-space: individuals which can adapt and change traverse so-called i-trajectories. Evolutionary developments, which are possible only across generations of individuals, he calls e-trajectories. Finally, there are changes made to individuals from the outside (for example, the debugging of software), which he calls r-trajectories (r for repair).
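
The three kinds of trajectory can be made concrete with a minimal sketch, assuming a toy representation of designs as sets of capabilities (the names and representation are illustrative, not Sloman's formalism):

```python
# Illustrative sketch: trajectories through design-space as sequences of
# designs, distinguished by what drives the change between steps.
from enum import Enum
from typing import FrozenSet, List

class TrajectoryKind(Enum):
    I = "individual adaptation"   # i-trajectory: the individual itself adapts
    E = "evolutionary change"     # e-trajectory: change across generations
    R = "external repair"         # r-trajectory: e.g. software being debugged

Design = FrozenSet[str]           # a design abstracted as a set of capabilities

class Trajectory:
    def __init__(self, kind: TrajectoryKind, start: Design):
        self.kind = kind
        self.path: List[Design] = [start]

    def step(self, new_design: Design) -> None:
        self.path.append(new_design)   # one move through design-space

insect = frozenset({"reactive_layer"})
primate = insect | {"deliberative_layer"}
evolution = Trajectory(TrajectoryKind.E, insect)
evolution.step(primate)           # an e-trajectory adding a new layer
```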

Together these elements result in dynamic systems which can be implemented in different ways.

"Since niches and designs interact dynamically, we can regard them as parts of virtual machines in the biosphere consisting of a host of control mechanisms, feedback loops, and information structures (including gene pools). All of these are ultimately implemented in, and supervenient on physics and chemistry. But they and their causal interactions may be as real as poverty and crime and their interactions."

(Sloman, 1998b, p. 6)

For Sloman, one of the most urgent tasks consists in specifying biological terms such as niche, genotype, etc. more precisely, in order to understand exactly the relations between niches and designs for organisms. This would also mean substantial progress for psychology:

"This could lead to advances in comparative psychology. Understanding the precise variety of types of functional architectures in design space and the virtual machine processes they support, will enable us to describe and compare in far greater depth the capabilities of various animals. We'll also have a conceptual framework for saying precisely which subsets of human mental capabilities they have and which they lack. Likewise the discussion of mental capabilities of various sorts of machines could be put on a firmer scientific basis, with less scope for prejudice to determine which descriptions to use. E.g. instead of arguing about which animals, which machines, and which brain damaged humans have consciousness, we can determine precisely which sorts of consciousness they actually have."

(Sloman, 1998b, p. 10f.)

Sloman grants that the demands on design-based approaches are not trivial. He names five requirements which such an approach should fulfil:

  1. An analysis of the requirements for an autonomous intelligent agent;
  2. a design specification for a functioning system which fulfils the requirements of (1);
  3. a detailed implementation, or a specification for such an implementation, of a functioning system;
  4. a theoretical analysis of the extent to which the design specification and the implementation details do or do not fulfil the requirements;
  5. an analysis of the neighbourhood in design-space.

A design-based approach does not necessarily have to be a top-down approach. Sloman believes that models which combine top-down and bottom-up methods will be most successful.

For Sloman, design-based theories are more effective than other approaches, because:

"Considering alternative possible designs leads to deeper theories, partly because the contrast between different design options helps us understand the trade-offs addressed by any one design, and partly because an adequate design-based theory of human affective states would describe mechanisms capable of generating a wide range of phenomena, thereby satisfying one of the criteria for a good scientific theory: generality. Such a theory can also demonstrate the possibility of new kinds of phenomena, which might be produced by special training, new social conditions, brain damage, mental disturbance, etc."

(Sloman, 1991, p. 5)

9.2. The fundamental architecture of an intelligent system

What a design-based approach sketches are architectures. An architecture describes which states and processes are possible for a system that possesses it.

From the set of all possible architectures, Sloman is particularly interested in a certain class: "'high level' architectures which can provide a systematic non-behavioural conceptual framework for mentality (including emotional states)" (Sloman, 1998a, p. 1). Such a framework for mentality

"is primarily concerned with an "information level" architecture, close to the requirements specified by software engineers. This extends Dennett's "design stance" by using a level of description between physical levels (including physical design levels) and "holistic" intentional descriptions."

(Sloman, 1998a, p. 1)

An architecture for an intelligent system consists, according to Sloman, of four essential components: several functionally distinct layers, control states, motivators and filters, and a global alarm system. How these components might fit together is sketched below.
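
A minimal skeleton of such an architecture might look as follows (purely illustrative; Sloman gives no code, and all names here are my own):

```python
# Skeletal sketch of the four components Sloman names (illustrative only).
class ReactiveLayer: ...          # automatic, hard-wired processes
class DeliberativeLayer: ...      # planning, evaluation, resource allocation
class MetaManagementLayer: ...    # observes and evaluates internal states

class IntelligentAgent:
    def __init__(self):
        # 1. several functionally distinct layers
        self.layers = [ReactiveLayer(), DeliberativeLayer(),
                       MetaManagementLayer()]
        # 2. control states (e.g. attitudes, moods, long-term dispositions)
        self.control_states: dict = {}
        # 3. motivators competing for attention, and filters that protect
        #    the resource-limited layers from constant interruption
        self.motivators: list = []
        self.filters: list = []
        # 4. a global alarm system that can override all layers at once
        self.alarm_system = None
```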

9.2.1. The layers

Sloman postulates that every intelligent system possesses three layers:

a reactive layer, which contains automatic, hard-wired processes;

a deliberative layer for planning, evaluating and allocating resources etc.; and

a meta-management layer, which contains mechanisms for observing and evaluating internal states.

The reactive layer is the evolutionarily oldest, and there is a multitude of organisms which possess only this layer. Schematically, a purely reactive agent looks as follows:

Fig. 13: Reactive architecture (Sloman, 1997a, p. 5)

A reactive agent can neither make plans nor develop new structures. It is optimized for specific tasks; with new tasks, however, it cannot cope. What it lacks in flexibility, it gains in speed: since almost all processes are rigidly defined, its reaction rate is high. Insects are, according to Sloman, examples of such purely reactive systems, and they prove at the same time that the interaction of a number of such agents can produce astonishingly complex results (e.g. termite mounds).
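
The behaviour of such an agent can be sketched as a fixed table of condition-action rules, a minimal illustration under my own assumptions rather than Sloman's implementation:

```python
# Minimal sketch of a purely reactive agent: hard-wired condition-action
# rules, no planning, no long-term memory of past states.
from typing import Dict

Percept = Dict[str, float]

class ReactiveAgent:
    def __init__(self):
        # Rules are fixed at "design time" and checked in priority order.
        self.rules = [
            (lambda p: p.get("predator_distance", 99.0) < 1.0, "flee"),
            (lambda p: p.get("food_distance", 99.0) < 0.5, "eat"),
            (lambda p: p.get("food_distance", 99.0) < 5.0, "approach_food"),
        ]

    def act(self, percept: Percept) -> str:
        # No deliberation: the first matching rule fires immediately,
        # which is why reaction is fast but inflexible.
        for condition, action in self.rules:
            if condition(percept):
                return action
        return "wander"

agent = ReactiveAgent()
print(agent.act({"predator_distance": 0.5}))  # -> "flee"
```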

A second, phylogenetically younger layer gives an agent far more capabilities. Schematically, this looks as follows:

Fig. 14: Deliberative architecture (Sloman, 1997a, p. 6)

A deliberative agent can recombine its action repertoire arbitrarily, develop plans and evaluate them before execution. An essential precondition for this is a long-term memory, in order to store plans that are not yet completed and to assess the probable consequences of plans at a later time.

The construction of such plans proceeds gradually and is therefore not a continuous but a discrete process. Many of the processes in the deliberative layer are serial in nature and therefore resource-limited. This seriality offers a number of advantages: at any time it is clear to the system which plans have led to a success, so it can assign rewards accordingly; the execution of contradictory plans is prevented; and communication with long-term memory is to a large extent error-free.
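
The serial, step-by-step character of deliberation can be illustrated with a small sketch (the plan representation and the utility scoring are my assumptions, not Sloman's):

```python
# Illustrative sketch of a deliberative layer: plans are evaluated before
# execution, stored in long-term memory, and pursued strictly one at a time.
from dataclasses import dataclass
from typing import List

@dataclass
class Plan:
    goal: str
    steps: List[str]
    expected_utility: float
    completed: bool = False

class DeliberativeLayer:
    def __init__(self):
        self.long_term_memory: List[Plan] = []  # holds unfinished plans too

    def deliberate(self, candidates: List[Plan]) -> Plan:
        # Evaluate competing plans *before* any of them is executed.
        best = max(candidates, key=lambda p: p.expected_utility)
        self.long_term_memory.append(best)
        return best

    def execute(self, plan: Plan) -> None:
        # Serial bottleneck: exactly one plan runs at a time, so success
        # can be credited unambiguously and contradictory plans never
        # execute concurrently.
        for step in plan.steps:                 # discrete, stepwise process
            print("executing:", step)
        plan.completed = True
```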

Such a resource-limited subsystem is, of course, highly susceptible to disturbance. Filtering processes with variable thresholds are therefore necessary in order to guarantee that the system keeps working.
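
Such a filter might be sketched as follows, a minimal illustration in which the insistence values and the threshold dynamics are my own assumptions:

```python
# Sketch of a variable-threshold attention filter protecting the
# resource-limited deliberative layer from constant interruption.
class AttentionFilter:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def adjust(self, busyness: float) -> None:
        # The busier the deliberative layer, the higher the threshold:
        # routine distractions are blocked, while emergencies still pass.
        self.threshold = 0.3 + 0.6 * busyness

    def passes(self, insistence: float) -> bool:
        return insistence >= self.threshold

f = AttentionFilter()
f.adjust(busyness=0.9)      # agent deep in a demanding task
print(f.passes(0.4))        # minor urge         -> False (filtered out)
print(f.passes(0.95))       # alarm-level motive -> True (interrupts)
```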