9. The philosopher from Birmingham
Aaron Sloman, professor of philosophy at the School of Computer Science of the University of Birmingham, certainly counts as one of the most influential theoreticians on computer models of emotions. In an article from 1981 titled "Why robots will have emotions" he stated:
Like Bates, Reilly or Elliott, Sloman also follows the broad and shallow approach: for him it is more important to develop a complete system with little depth than individual modules with great depth. He is convinced that only in this way can a model be developed which reflects reality with some degree of realism. Since 1981, Sloman and his coworkers in the Cognition and Affect Project have published numerous works on the topic of "intelligent systems with emotions", which can be divided roughly into three categories:
To understand Sloman's approach correctly, one must see it in the context of his epistemological programme, which is concerned not primarily with emotions but with the construction of intelligent systems. I shall briefly sketch the core ideas of Sloman's theory, because they form the basis for understanding the "libidinal computer" developed by Ian Wright (see below).

9.1. Approaches to the construction of intelligent systems
Sloman's interest lies not primarily in a simulation of the human mind, but in the development of a general "intelligent system", independent of its physical substrate. Humans, bonobos, computers and extraterrestrial beings are different implementations of such intelligent systems; the underlying construction principles, however, are identical. Sloman divides past attempts to develop a theory of the workings of the human mind (and thus of intelligent systems in general) into three large groups: semantics-based, phenomena-based and design-based. Semantics-based approaches analyze how humans describe psychological states and processes, in order to determine the implicit meanings underlying the use of everyday language. Among these he counts, among others, the approaches of Ortony, Clore and Collins as well as of Johnson-Laird and Oatley.
Sloman's argument against these approaches is: "As a source of information about mental processes such enquiries restrict us to current 'common sense' with all its errors and limitations." (Sloman, 1993, p. 3) Some philosophers who examine concepts analytically also produce, according to Sloman, semantics-based theories. What differentiates them from the psychologists, however, is that they do not concentrate on existing concepts alone, but are often more interested in the set of all possible concepts.

Phenomena-based approaches
assume that psychological phenomena like "emotion", "motivation" or "consciousness" are already clear, and that everybody can intuitively recognize concrete examples of them. They therefore merely try to correlate measurable phenomena occurring at the same time (e.g. physiological effects, behaviour, environmental characteristics) with the occurrence of such psychological phenomena. Such approaches, Sloman argues, are found particularly among psychologists. His criticism of them is:
Design-based approaches transcend the limits of these two approaches. Sloman refers here expressly to the work of the philosopher Daniel Dennett, who essentially shaped the debate around intelligent systems and consciousness. Dennett differentiates between three stances one can take when making predictions about an entity: the physical stance, the design stance and the intentional stance. The physical stance is "simply the standard laborious method of the physical sciences" (Dennett, 1996, p. 28); the design stance, on the other hand, assumes "that an entity is designed as I suppose it to be, and that it will operate according to that design" (Dennett, 1996, p. 29). The intentional stance, which according to Dennett can also be regarded as a "sub-species" of the design stance, predicts the behaviour of an entity, for example a computer program, "as if it were a rational agent" (Dennett, 1996, p. 31). Representatives of the design-based approach take the position of an engineer who tries to design a system that produces the phenomena to be explained. A design, however, does not necessarily require a designer:
A design is, strictly speaking, nothing other than an abstraction which determines a class of possible instances. It need not be concrete or materially implemented, although its instances may well have a physical form. For Sloman, the term "design" is closely linked with the term "niche". A niche, too, is neither a material entity nor a geographical region; Sloman defines it broadly as a collection of requirements that a functioning system has to meet. Regarding the development
of intelligent agents in AI, design and niche play a special role. Sloman speaks of design-space and niche-space. A genuinely intelligent system will interact with its environment and will change in the course of its evolution. It thereby moves along a certain trajectory through design-space. To this corresponds a certain trajectory through niche-space, because through its changes the system can come to occupy new niches:
Sloman identifies different kinds of trajectories through design-space. Individuals which can adapt and change go through so-called i-trajectories. Evolutionary developments, possible only across generations of individuals, he calls e-trajectories. Finally, there are changes to individuals that are made from the outside (for example, debugging software), which he calls r-trajectories (r for repair). Together, these elements result in dynamic systems which can be implemented in different ways.
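To make this vocabulary concrete, the following minimal sketch models designs, niches and the three kinds of trajectories in Python. It is an illustration only; all names and the numeric "fit" measure are assumptions of this sketch, not Sloman's own formalism.

```python
from dataclasses import dataclass
from enum import Enum

class TrajectoryKind(Enum):
    I = "i-trajectory"   # adaptive change within one individual
    E = "e-trajectory"   # evolutionary change across generations
    R = "r-trajectory"   # change made from outside ("repair", e.g. debugging)

@dataclass(frozen=True)
class Design:
    """An abstraction determining a class of possible instances."""
    features: frozenset

@dataclass(frozen=True)
class Niche:
    """A collection of requirements a functioning system has to meet."""
    requirements: frozenset

    def fit(self, design: Design) -> float:
        """Fraction of this niche's requirements the design satisfies."""
        if not self.requirements:
            return 1.0
        return len(self.requirements & design.features) / len(self.requirements)

# A trajectory through design-space: the successive designs of one system.
# Because each new design satisfies different requirements, it induces a
# corresponding trajectory through niche-space.
design_trajectory = [
    (TrajectoryKind.I, Design(frozenset({"react"}))),
    (TrajectoryKind.I, Design(frozenset({"react", "learn"}))),
    (TrajectoryKind.R, Design(frozenset({"react", "learn", "plan"}))),
]

niche = Niche(frozenset({"react", "plan"}))
for kind, design in design_trajectory:
    print(kind.value, niche.fit(design))   # the fit grows as the design changes
```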
For Sloman, one of the most urgent tasks consists in specifying biological terms such as niche, genotype etc. more precisely, in order to understand exactly the relations between niches and designs for organisms. This would also be a substantial advance for psychology:
Sloman concedes that the requirements of a design-based approach are not trivial. He names five requirements which such an approach should fulfill:
A design-based approach does not necessarily have to be a top-down approach; Sloman believes that models which combine top-down and bottom-up will be the most successful. For Sloman, design-based theories are more effective than other approaches because:
9.2. The fundamental architecture of an intelligent system
What a design-based approach sketches are architectures. Such an architecture describes which states and processes are possible for a system that possesses it. From the set of all possible architectures, Sloman is particularly interested in a certain class: "... 'high level' architectures which can provide a systematic non-behavioural conceptual framework for mentality (including emotional states)." (Sloman, 1998a, p. 1) Such a framework for mentality
An architecture for an intelligent system consists, according to Sloman, of four essential components: several functionally different layers; control states; motivators and filters; and a global alarm system.

9.2.1. The layers
Sloman postulates that every intelligent system possesses three layers:
The reactive layer is the evolutionarily oldest, and there is a multitude of organisms which possess only this layer. Schematically, a purely reactive agent presents itself as follows:

Fig. 13: Reactive architecture (Sloman, 1997a, p. 5)

A reactive agent can neither make plans nor develop new structures. It is optimized for special tasks; with new tasks, however, it cannot cope. What it lacks in flexibility, it gains in speed: since almost all of its processes are clearly defined, its reaction rate is high. Insects are, according to Sloman, examples of such purely reactive systems, and they prove at the same time that the interaction of a number of such agents can produce astonishingly complex results (e.g. termite towers).
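A purely reactive agent of this kind can be caricatured in a few lines of code: a fixed table of condition-action rules, applied directly to the current percept. The rules and percept names below are invented for illustration; the point is only the absence of planning and the resulting speed and inflexibility.

```python
# Hard-wired condition-action rules: each recognized situation maps
# directly to a response; nothing is planned, stored or re-combined.
RULES = {
    "obstacle_ahead": "turn_left",
    "food_detected": "approach",
    "bright_light": "retreat",
}

def reactive_step(percept: str) -> str:
    """Map the current percept straight to an action.

    Fast, because no plan is built or evaluated; inflexible, because
    a situation outside the rule table cannot be handled.
    """
    return RULES.get(percept, "wander")

print(reactive_step("obstacle_ahead"))   # -> turn_left
print(reactive_step("novel_situation"))  # -> wander (the agent cannot cope)
```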
A second, phylogenetically younger layer gives an agent considerably more capabilities. Schematically, this looks as follows:

Fig. 14: Deliberative architecture (Sloman, 1997a, p. 6)

A deliberative agent can recombine its action repertoire arbitrarily, develop plans and evaluate them before execution. An essential precondition for this is a long-term memory, in which plans that are not yet completed can be stored and the probable consequences of plans evaluated later. The construction of such plans proceeds step by step and is therefore not a continuous but a discrete process. Many of the processes in the deliberative layer are serial in nature and therefore resource-limited. This seriality offers a number of advantages: at any time it is clear to the system which plans have led to a success, so it can assign rewards accordingly; the execution of contradictory plans is prevented; and communication with the long-term memory is largely error-free. Such a resource-limited subsystem is, of course, highly susceptible to disturbance. Filtering processes with variable thresholds are therefore necessary in order to guarantee the functioning of the system.
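The interplay between a serial, resource-limited deliberative process and a protecting filter with a variable threshold can be sketched roughly as follows. The insistence values, the threshold policy and all names are assumptions of this illustration, not taken from Sloman's own implementation work.

```python
from dataclasses import dataclass

@dataclass
class Motivator:
    goal: str
    insistence: float   # how strongly the motivator demands attention

class DeliberativeLayer:
    """Serial, resource-limited deliberation behind a variable-threshold filter."""

    def __init__(self) -> None:
        self.threshold = 0.5                   # variable: raised while deliberating
        self.agenda: list[Motivator] = []
        self.long_term_memory: list[str] = []  # unfinished plans, kept for later

    def submit(self, m: Motivator) -> bool:
        """Filter: only sufficiently insistent motivators may disturb the layer."""
        if m.insistence < self.threshold:
            return False                       # rejected; deliberation undisturbed
        self.agenda.append(m)
        return True

    def deliberate(self) -> str:
        """Process exactly one motivator at a time (seriality): it is always
        clear which plan produced a result, and contradictory plans are
        never executed concurrently."""
        self.threshold = 0.9                   # raise the filter while resources are bound
        m = max(self.agenda, key=lambda x: x.insistence)
        self.agenda.remove(m)
        plan = f"plan_for_{m.goal}"
        self.long_term_memory.append(plan)     # store for later evaluation
        self.threshold = 0.5                   # lower the filter again afterwards
        return plan

layer = DeliberativeLayer()
layer.submit(Motivator("tidy_desk", 0.3))      # filtered out: below threshold
layer.submit(Motivator("find_food", 0.8))      # passes the filter
print(layer.deliberate())                      # -> plan_for_find_food
```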