
5. Electronic assistants

There exist a variety of models that employ computers to recognize or represent emotions. These models are not "emotional computers" in a narrow sense, because their "emotional" components are pre-defined elements rather than a subsystem that developed independently.

The models described in this chapter are, to a large extent, rule-based production systems and thus also symbol-processing systems. Since the 1960s there has been a spirited discussion about whether, or to what extent, the human mind is a symbol-processing system and to what extent symbol-processing computer models can realistically approximate its actual workings (see e.g. Franklin, 1995).

A rule-based production system requires, at a minimum, a set of standard components:

1.      a so-called knowledge base that contains the processing rules of the system;

2.      a so-called global database that represents the main memory of the system;

3.      a so-called control structure which analyzes the contents of this global database and decides which processing rules of the knowledge base are to be applied.

A more detailed description of a rule-based production system is supplied by Franklin (1995) using the example of SOAR: the system operates within a defined problem space; the production process consists of applying appropriate condition-action rules which transform the problem space from one state into another.
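To illustrate the interplay of these three components, the following minimal Python sketch models a knowledge base of condition-action rules, a global database of facts and a trivial control structure; the rule contents and the conflict-resolution strategy are invented for illustration and do not correspond to SOAR's actual implementation.

# Minimal sketch of a rule-based production system (illustrative only).
# The knowledge base holds condition-action rules, the global database
# (working memory) holds facts, and the control structure repeatedly
# selects and applies an applicable rule.

knowledge_base = [
    # (name, condition over the database, action transforming the database)
    ("greet",    lambda db: "user_present" in db and "greeted" not in db,
                 lambda db: db | {"greeted"}),
    ("farewell", lambda db: "user_left" in db,
                 lambda db: db | {"session_closed"}),
]

def run(database, max_cycles=10):
    """Control structure: fire the first applicable rule until none applies."""
    for _ in range(max_cycles):
        applicable = [(n, act) for n, cond, act in knowledge_base if cond(database)]
        if not applicable:
            break
        name, action = applicable[0]      # trivial conflict resolution: first match
        database = action(database)
        print("fired:", name, "->", sorted(database))
    return database

run({"user_present"})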

The models of Dyer, Pfeifer, Bates and Reilly, and Elliott presented here can be regarded as rule-based production systems. Scherer's model is an exception insofar as its implementation is not rule-based. Its underlying approach is, however, an appraisal theory and could easily be implemented as a production system.

5.1. The models of Dyer

Dyer has developed three models in all: BORIS, OpEd and DAYDREAMER. BORIS and OpEd are systems which can infer emotions from texts; DAYDREAMER is a computer model which can generate emotions.

Dyer regards emotions as an emergent phenomenon:

"Neither BORIS, OpEd, nor DAYDREAMER were designed to address specifically the problem of emotion. Rather, emotion comprehension and emotional reactions in these models arise through the interaction of general cognitive processes of retrieval, planning and reasoning over memory episodes, goals, and beliefs."

 

(Dyer, 1987, p. 324)

These "general cognitive processes" are realized by Dyer in the form of demons, specialized program subroutines which are activated under certain conditions and can accomplish specific tasks independently from one another. After the completion of their work these  demons "die" or spawn new subroutines.

"In BORIS, "disappointed" caused several demons to be spawned. One demon used syntactic knowledge to work out which character x was feeling the disappointment. Another demon looked to see if x had suffered a recent goal failure and if this was unexpected."

 

(Dyer, 1987, p. 332)
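The following Python sketch illustrates, in a highly simplified form, how such demons might be spawned by a trigger word, do their work independently and then "die"; the story representation and the demon logic are invented and not taken from BORIS.

# Illustrative sketch of BORIS-style "demons": small routines spawned by a
# trigger word, each testing its own condition and dying after completion.
# The story representation and demon logic are invented for illustration.

story = {"feeler": "Paul", "recent_goal_failure": True, "failure_expected": False}

def find_feeler(state):
    # "syntactic" demon: identify which character feels the emotion
    state["emotion_of"] = state["feeler"]

def check_goal_failure(state):
    # demon: did this character suffer a recent, unexpected goal failure?
    state["unexpected_failure"] = (state["recent_goal_failure"]
                                   and not state["failure_expected"])

def on_word(word, state):
    """Spawn demons for a trigger word, run them, then let them 'die'."""
    demons = {"disappointed": [find_feeler, check_goal_failure]}.get(word, [])
    for demon in demons:
        demon(state)          # each demon works independently on the state
    return state

print(on_word("disappointed", story))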

5.1.1. BORIS

BORIS is based on a so-called affect lexicon with six components: the person who feels the emotion; the polarity of the emotion (positive - negative); one or more goal attainment situations; the thing or person toward which the emotion is directed; the strength of the emotion; and the respective expectation.

With these components, emotions present themselves as follows in BORIS:

Emotion: relief

Person: x

Polarity: positive

Directed at: -/-

Goal attainment: goal attained

Expectation: Expectation not fulfilled

In this case the person x did not expect to achieve her goal. This expectation was not fulfilled. The person x now experiences a positive excitation state which she feels as relief (after Dyer, 1987, p. 325).

Emotions such as happy, sad, grateful, angry-at, hopeful, fearful, disappointed, guilty etc. are represented in BORIS in a similar form.

This underlines the goal Dyer pursues with BORIS: all emotions can be represented in BORIS in the form of a negative or positive excitation state, connected with information about the goals and expectations of a person.
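A possible representation of an affect-lexicon entry with the six components named above is sketched below; the field names and the use of a dataclass are assumptions for illustration, not Dyer's data structures.

# Sketch of an affect-lexicon entry with the six components named in the
# text; the field names and the dataclass form are illustrative, not Dyer's.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AffectEntry:
    person: str                     # who feels the emotion
    polarity: str                   # "positive" or "negative" excitation
    goal_situation: str             # e.g. "goal attained", "goal failed"
    directed_at: Optional[str]      # target person/thing, or None
    strength: float                 # intensity of the excitation
    expectation: str                # e.g. "expectation not fulfilled"

relief = AffectEntry(person="x", polarity="positive",
                     goal_situation="goal attained", directed_at=None,
                     strength=1.0, expectation="expectation not fulfilled")
print(relief)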

Dyer points out that with the help of the variables he specifies one can also represent emotions for which a given language has no appropriate word.

With the help of this model, BORIS can draw conclusions about the goal attainment situation of a person, understand and generate text containing descriptions of emotions, and understand and compare the meanings of emotional terms. The system is also able to represent multiple emotional states.

From the executed goal/plan analysis of a person and its result, BORIS can also develop expectations about how this person will continue to behave in order to achieve her goals. The strength of an excitation state can also be used by BORIS for such predictions.

5.1.2. OpEd

OpEd is an extension of BORIS. While BORIS, due to its internal encyclopedia, can only understand emotions in narrative texts, OpEd is able to infer emotions and beliefs from non-narrative texts as well:

"OpEd is...designed to read and answer questions about editorial text. OpEd explicitly tracks the beliefs of the editorial writer and builds representations of the beliefs of the writer and of those beliefs the writer ascribes to his opponents."

 

(Dyer, 1987, p. 329)

Beliefs are implemented in OpEd on the basis of four dimensions: the believer is the one who holds a certain belief; the content is an evaluation of goals and plans; attack comprises the beliefs which oppose the one currently expressed; support comprises the beliefs which support the current belief.
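The four belief dimensions could be represented roughly as follows; the record structure and the example contents are invented for illustration and are not OpEd's internal format.

# Sketch of OpEd's four belief dimensions as a simple record; field names
# follow the text, the structure itself is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Belief:
    believer: str                                  # who holds the belief
    content: str                                   # an evaluation of goals and plans
    attack: list = field(default_factory=list)     # beliefs opposing this one
    support: list = field(default_factory=list)    # beliefs supporting this one

b1 = Belief("editorial_writer", "the new tariff will hurt consumers")
b2 = Belief("opponent", "the new tariff protects domestic jobs", attack=[b1])
b1.attack.append(b2)
print(b1.believer, "is attacked by", [x.believer for x in b1.attack])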

According to Dyer, beliefs were a substantial element missing in BORIS. For example, the statement "happy(x)" is represented in BORIS as the attainment of a goal by x. This, Dyer notes, is not sufficient:

"What should have been represented is that happy(x) implies that x believes that x has achieved (or will achieve, or has a chance of achieving) a goal of x."

 

(Dyer, 1987, p. 330)

OpEd therefore adds new demons to the ones known from BORIS: belief-building, affect-related demons.

Dyer has shown that OpEd is able not only to deduce the beliefs of the author from newspaper texts, but also to draw conclusions about the beliefs of those against whom the author takes a position.

 

5.1.3. DAYDREAMER

While BORIS and OpEd are meant to understand emotions, DAYDREAMER (Mueller and Dyer, 1985) is an attempt to develop a system that "feels" them. This feeling does not express itself in a subjective state of the system, but in the fact that its current "emotional" state affects its internal behaviour while it processes information.

Mueller and Dyer define four substantial functions of daydreams: they increase the effectiveness of future behaviour by anticipating possible reactions to expected events; they support learning from successes and errors by thinking alternative courses of action through to the end; they support creativity, because imaginatively following through courses of action can lead to new solutions; and they support the regulation of emotions by reducing their felt intensity.

In order to achieve these goals, DAYDREAMER is equipped with the following main components:

1.      a scenario generator which consists of a planner and so-called relaxation rules;

2.      a dynamic episodic memory, whose contents are used by the  scenario generator;

3.      an accumulation of personal goals and control goals which steer the scenario generator;

4.      an emotion component, in which daydreams are generated or initiated by emotional states which are elicited by reaching or not reaching a goal;

5.      domain knowledge about interpersonal relations and everyday activities.

DAYDREAMER has two modes of operation: daydreaming mode and performance mode. In daydreaming mode the system daydreams continuously until it is interrupted; in performance mode the system shows what it has learned from its daydreams.

Mueller and Dyer postulate a set of goals possessed by the system which they call control goals. These are partly triggered by emotions and in turn trigger daydreams. The function of the control goals is to modify emotional states in the short term and to secure the attainment of personal goals in the long term.

The system thus has a feedback mechanism in which emotions trigger daydreams, daydreams modify these emotions, and new emotions are triggered which in turn initiate new daydreams (a schematic sketch of this loop follows the list of control goals below).

Mueller and Dyer name four control goals which arise in daydreams:

1.      Rationalization: the goal of rationalizing away a failure and thereby reducing a negative emotional state.

2.      Revenge: the goal of preventing another person from reaching a goal and thereby reducing one's own anger.

3.      Reversal of success or failure: the goal of imagining a scenario with the opposite result in order to turn around the polarity of an emotional state.

4.      Preparation: the goal of developing hypothetical episodes in order to play through the consequences of a possible action.
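The feedback loop between emotions, control goals and daydreams mentioned above can be sketched schematically as follows; the numerical intensity values and the simple rules are invented for illustration and do not reproduce DAYDREAMER's actual processing.

# Schematic sketch of the emotion/daydream feedback loop: an emotion triggers
# a control goal, the daydream generated for that goal modifies the emotion
# and may trigger a new one. All rules and numbers are invented.

def control_goal_for(emotion):
    return {"disappointment": "rationalization", "anger": "revenge"}.get(emotion)

def daydream(control_goal, emotion):
    """Return (intensity factor, possibly new emotion) for one daydream."""
    if control_goal == "rationalization":
        return 0.5, None            # the failure is rationalized away, feeling weakens
    if control_goal == "revenge":
        return 0.7, "satisfaction"  # imagined revenge dampens the anger
    return 1.0, None

state = [("disappointment", 1.0)]
for _ in range(3):                                  # a few feedback cycles
    emotion, intensity = state[-1]
    goal = control_goal_for(emotion)
    if goal is None or intensity < 0.1:
        break
    factor, new_emotion = daydream(goal, emotion)
    state.append((new_emotion or emotion, intensity * factor))
print(state)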

Mueller and Dyer describe the functioning of DAYDREAMER with an example in which DAYDREAMER represents an active young man with social goals who meets an actress who rejects his invitation for a drink.

DAYDREAMER thereupon generates the following two daydreams:

 

"Daydream 1: I am disappointed that she didn't accept my offer...I imagine that she accepted my offer and we soon become a pair. I help her when she has to rehearse her lines...When she has to do a film in France, I drop my work and travel there with her...I begin to miss my work. I become unhappy and feel unfulfilled. She loses interest in me, because I have nothing to offer her. It's good I didn't get involved with her, because it would've led to disaster. I feel less disappointed that she didn't accept my offer.

 

(......)

 

Daydream 2: I'm angry that she didn't accept my offer to go have a drink. I imagine I pursue an acting career and become a star even more famous than she is. She remembers meeting me a long time ago in a movie theater and calls me up...I go out with her, but now she has to compete with many other women for my attention. I eventually dump her."

 

(Dyer, 1987, p. 337)

The first daydream is an example of reversal: he pretends that the rendezvous took place and develops a fantasy about its consequences. The reality monitor signals that an important goal, namely his own career, is being neglected. The result is a rationalization which reduces the negative emotional state.

Daydream 2 is triggered by the emotional state of anger and embodies revenge in order to reduce the negative effect of the current emotional state.

As soon as a control goal is activated, the scenario generator generates a set of events connected with that control goal. These daydreams differ from classical plans insofar as they are not exclusively directed at a goal but can shift in a loose, associative manner. The system also contains a relaxation mechanism which makes daydreams possible that are out of touch with reality.

Mueller and Dyer cite four examples of such relaxations in their model: 

Behavior of others: DAYDREAMER can assume that the film star accepts his offer.

Self attributes: DAYDREAMER can assume to be a professional athlete or a well-known film star.

Physical constraints: One can assume to be invisible or to fly.

Social constraints: One can assume to provoke a scene in a distinguished restaurant.

The strength of the relaxations is not always the same; it varies with the currently active control goals.

Positive emotions arise from the memory of attaining a goal, negative emotions from the memory of a failure. If someone else is responsible for DAYDREAMER not reaching a goal, the emotion anger is triggered. Imaginary successes in a daydream evoke positive emotions; imaginary failures evoke negative ones.

During its daydreams DAYDREAMER stores in its memory complete daydreams, future plans and planning strategies.  These are indexed in the episodic memory and can be called up later.  Thus the system is able to learn from its daydreams for future situations. 

The ability of a computer to develop daydreams is substantial for the development of its intelligence, Mueller and Dyer maintain. They imagine computers which daydream during the time in which they are not used, in order to increase their efficiency.

The model of Mueller and Dyer has not been developed further since its original conception.

5.2. The model of Pfeifer

With FEELER ("Framework for Evaluation of Events and Linkages into Emotional Responses"), Pfeifer (1982, 1988) presented a model of an emotional computer system which is explicitly based on psychological emotion theories.

Pfeifer's model is a rule-based system with a working memory (WM), a rule memory (long-term memory, LTM) and a control structure; the contents of the long-term memory (the knowledge base) he additionally differentiates into declarative and procedural knowledge.

In order to be able to represent emotions, Pfeifer extends this structure of a rule-based system by further subsystems. Thus FEELER has not only a cognitive but additionally a physiological working memory.

To develop emotions, FEELER needs a schema for analyzing the cognitive conditions which lead to an emotion. For this purpose Pfeifer makes use of the taxonomy developed by Weiner (1982). From it he derives an exemplary rule for the emergence of an emotion:

"IF current_state is negative for self

 

and emotional_target is VARperson

 

and locus_of_causality is VARperson

 

and locus_of_control is VARperson

 

THEN ANGER at VARperson"

 

(Pfeifer, 1988, p. 292)

For this rule to become effective, all its conditions must be represented in the WM. This is done via inference processes which place their results in the WM. Such inference processes are, according to Pfeifer, typically triggered by interrupts.

FEELER generates such interrupts if expectations regarding the attainment of subgoals are violated and/or if no expectations exist for an event.

In a second rule Pfeifer defines an action tendency that follows from rule 1:

IF angry
and emotional_target is VARperson
and int_pers_rel self - VARperson is negative
THEN generate goal to harm VARperson

(Pfeifer, 1988, p. 297)

 

At the same time, according to Pfeifer, this rule shows the heuristic value of an emotion: the emotion narrows down the possible candidates and actions for inference processes.
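How such a rule could be matched against the contents of a working memory is sketched below; the fact format and the matching function are assumptions for illustration, not FEELER's actual implementation.

# Sketch of how Pfeifer's ANGER rule (above) could be matched against the
# contents of a working memory. The fact format and matcher are invented;
# FEELER itself is a richer production system.

working_memory = {
    "current_state": "negative for self",
    "emotional_target": "Paul",
    "locus_of_causality": "Paul",
    "locus_of_control": "Paul",
}

def anger_rule(wm):
    """Fire if all conditions hold and they bind the same person (VARperson)."""
    person = wm.get("emotional_target")
    if (wm.get("current_state") == "negative for self"
            and person is not None
            and wm.get("locus_of_causality") == person
            and wm.get("locus_of_control") == person):
        return ("ANGER", person)
    return None

print(anger_rule(working_memory))   # ('ANGER', 'Paul')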

 

Pfeifer grants that such a model is not able to cover all emotional states. He discusses a number of problems, for example the interaction of different subsystems and their influence on the development, duration and fading away of emotions. In a further step Pfeifer supplemented his model with the taxonomy of Roseman (1979) in order to be able to represent emotions in FEELER in connection with the attainment of goals.

 

5.3. The model of Bates and Reilly

In his essay "The Role of Emotion in Believable Agents" (Bates, 1994), Joseph Bates quotes the animator Chuck Jones to the effect that Disney always strove for believability with his cartoon characters. Bates continues:

"Emotion is one of the primary means to achieve this believability, this illusion of life, because it helps us know that characters really care about what happens in the world, that they truly have desires."

 

(Bates, 1994, p. 6)

Together with a group of colleagues at Carnegie Mellon University, Bates created the Oz Project. Their goal is to build synthetic creatures which appear as genuinely lifelike as possible to their human audience. In short, it is an interactive drama system, or "artistically effective simulated worlds" (Bates et al., 1992, p. 1).

The fundamental approach consists in the creation of broad and shallow agents. While computer models of AI and of emotions usually concentrate on specific aspects and try to cover these in as much detail as possible, Bates takes the opposite approach:

"...part of our effort is aimed at producing agents with a broad set of capabilities, including goal-directed reactive behavior, emotional state and behavior, social knowledge and behavior, and some natural language abilities. For our purpose, each of these capacities can be as limited as is necessary to allow us to build broad, integrated agents..."

 

(Bates et al., 1992a, p. 1)

According to Bates, the broad approach is necessary in order to create believable artificial characters. Only an agent that is able to react convincingly to a variety of situations in an environment that includes a human user will really be accepted by that user as a believable character.

Since Oz is intentionally constructed as an artificial world which the user is meant to regard like a film or a play, it is sufficient to build the various abilities of the system "shallow" in order to satisfy the user's expectations. As in the cinema, the user does not expect an accurate picture of reality, but an artificial world with participants who are convincing within that context.

An Oz world consists of four substantial elements: a simulated environment, a number of agents who populate this artificial world, an interface through which humans can participate in the events of this world, and a planner concerned with the long-term structure of a user's experiences.

The agent architecture of Bates is called Tok and consists of a set of components:  There are modules for goals and behaviour, for sensory perception, language analysis and language production.  And there is a module called Em for emotions and social relations.

 

Fig. 4: Structure of the TOK architecture (Reilly, 1996, p. 14)

Em contains an emotion system based on the model of Ortony, Clore and Collins (1988). However, the OCC model is not implemented in Em in its entire complexity. This concerns in particular the intensity variables postulated by Ortony, Clore and Collins and their complex interactions. Em uses a simpler subset of these variables which is judged sufficient for the intended purpose.

Reilly (1996) explains that the use of such subsets does not in effect reduce the OCC model but extends it. He clarifies this with two examples:

In Ortony, Clore and Collins, pity is generated as follows: agent A feels pity for agent B if agent A likes agent B and agent A appraises an event as unpleasant for agent B with regard to B's goals. "So, if Alice hears that Bill got a demotion, Alice must be able to match this event with a model of Bill's goals, including goals about demotions." (Reilly, 1996, p. 53) This would mean that Alice would have to possess relatively comprehensive knowledge of Bill's goals and appraisal mechanisms - according to Reilly a difficult venture in a dynamic world in which goals can change fast.

He suggests the following mechanism instead: agent A feels pity for agent B if agent A likes agent B and agent A believes that agent B is unhappy. According to Reilly, this description has advantages beyond being simpler:

"In this case, I have broken the OCC model into two components: recognizing sadness in others and having a sympathetic emotional response..... Recognizing sadness in others is done, according to the OCC model, only through reasoning and modeling of the goals of other agents, so this inference can be built into the model of how the emotion is generated. Em keeps the recognition of sadness apart from the emotional response, which allows for multiple ways of coming to know about the emotions of others. One way is to do reasoning and modeling, but another way, for example, is to see that an agent is crying.

 

The Em model is more complete than the OCC model in cases such as agent A seeing that agent B is sad but not knowing why. In the OCC case, when agent A does not know why agent B is unhappy, the criteria for pity is not met. Because the default Em emotions generators require only that agent A believe that agent B is unhappy, which can be perceived in this case, Em generates pity."

 

(Reilly, 1996, p. 53f.)
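Reilly's simplified pity rule can be sketched as follows; the data structures are invented for illustration, and Em's actual emotion generators are of course embedded in the full Tok architecture.

# Sketch of Reilly's simplified pity rule: agent A feels pity for agent B
# if A likes B and A believes that B is unhappy. The belief store and its
# layout are invented for illustration.

beliefs = {("Alice", "Bill"): {"liked": True, "believed_unhappy": True}}

def pity(observer, other, beliefs):
    b = beliefs.get((observer, other), {})
    return b.get("liked", False) and b.get("believed_unhappy", False)

# Alice sees Bill crying (or infers his sadness some other way) and pities
# him, even without knowing which of Bill's goals failed.
print(pity("Alice", "Bill", beliefs))   # True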

As a second example, Reilly (1996) cites the emergence of distress. In the OCC model, distress develops when an event is appraised as unpleasant with regard to the goals of an agent; that means external events must be evaluated. In Em, distress is caused by goals either failing or becoming more likely to fail, which connects it to the motivation and action system. Reilly explains:

"This shifts the emphasis towards the goal processing of the agent and away from the cognitive appraisal of external events. This is useful for two reasons. First, the motivation system is already doing much of the processing (e.g., determining goal successes and failures), so doing it in the emotion system as well is redundant. Second, much of this processing is easier to do in the motivation system since that's where the relevant information is. For instance, deciding how likely a goal is to fail might depend on how far the behavior to achieve that goal has progressed or how many alternate ways to achieve the goal are available - this information is already in the motivation system."

 

(Reilly, 1996, p. 54f.)

In this way, emotion structures are meant to emerge which can be used more completely and more simply than purely cognitive models. The following table shows which emotions Em can generate and on what basis:

 

Emotion type (cause in the default Em system):

Distress: Goal fails or becomes more likely to fail and it is important to the agent that the goal not fail.
Joy: Goal succeeds or becomes more likely to succeed and it is important to the agent that the goal succeed.
Fear: Agent believes a goal is likely to fail and it is important to the agent that the goal not fail.
Hope: Agent believes a goal is likely to succeed and it is important to the agent that the goal succeed.
Satisfaction: A goal succeeds that the agent hoped would succeed.
Fears-Confirmed: A goal failed that the agent feared would fail.
Disappointment: A goal failed that the agent hoped would succeed.
Relief: A goal succeeds that the agent feared would fail.
Happy-For: A liked other agent is happy.
Pity: A liked other agent is sad.
Gloating: A disliked other agent is sad.
Resentment: A disliked other agent is happy.
Like: Agent is near or thinking about a liked object or agent.
Dislike: Agent is near or thinking about a disliked object or agent.
Other attitude-based emotions: Agent is near or thinking about an object or agent that the agent has an attitude towards (e.g., awe).
Pride: Agent performs an action that meets a standard of behavior.
Shame: Agent performs an action that breaks a standard of behavior.
Admiration: Another agent performs an action that meets a standard of behavior.
Reproach: Another agent performs an action that breaks a standard of behavior.
Anger: Another agent is responsible for a goal failing or becoming more likely to fail, and it is important that the goal not fail.
Remorse: An agent is responsible for one of his own goals failing or becoming more likely to fail, and it is important to the agent that the goal not fail.
Gratitude: Another agent is responsible for a goal succeeding or becoming more likely to succeed, and it is important that the goal succeed.
Gratification: An agent is responsible for one of his own goals succeeding or becoming more likely to succeed, and it is important to the agent that the goal succeed.
Frustration: A plan or behavior of the agent fails.
Startle: A loud noise is heard.

Table 3: Emotion types and their generation in Em (after Reilly, 1996, p. 58 f.)

Reilly points out explicitly that these emotion types make no claim to psychological correctness; they only represent a starting point for creating believable emotional agents.

The emotion types of Em are arranged in the following hierarchy:

 

Total
    Positive: Joy, Hope, Happy-For, Gloating, Love, Satisfaction, Relief, Pride, Admiration, Gratitude, Gratification
    Negative: Distress, Fear, Startle, Pity, Resentment, Hate, Disappointment, Fears-Confirmed, Shame, Reproach, Anger, Frustration, Remorse

Table 4: Hierarchy of emotion types in Em (after Reilly, 1996, p. 76)

 

One notices that in this hierarchy the emotion types modelled after the OCC model are arranged one level below the "positive - negative" level. This mood level gives Em the possibility of specifying the general mood of an agent before any deeper analysis, which substantially simplifies the production of emotional effects.

To determine the general mood (good-mood vs. bad-mood), Em first sums the intensities of the positive emotions (Ip) and then those of the negative emotions (In). Formalized, this looks as follows:

IF Ip > In
THEN set good-mood = Ip
AND set bad-mood = 0
ELSE set good-mood = 0
AND set bad-mood = -In

(after Picard, 1997, p. 202)
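A direct transcription of this rule into Python might look as follows, where Ip and In are the summed intensities of the positive and negative emotions; the function and variable names are mine.

# Transcription of the mood rule above: Ip is the summed intensity of the
# positive emotions, In that of the negative ones.

def mood(positive_intensities, negative_intensities):
    Ip = sum(positive_intensities)
    In = sum(negative_intensities)
    if Ip > In:
        return {"good-mood": Ip, "bad-mood": 0}
    return {"good-mood": 0, "bad-mood": -In}

print(mood([0.5, 0.25], [0.25]))   # {'good-mood': 0.75, 'bad-mood': 0}
print(mood([0.25], [0.5, 0.25]))   # {'good-mood': 0, 'bad-mood': -0.75}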

The TOK system has been realized with different characters.  One of the most well-known is Lyotard, a virtual cat.  Bates et al. (1992b) describe a typical interaction with Lyotard: 

"As the trace begins, Lyotard is engaged in exploration behavior in an attempt to satisfy a goal to amuse himself... This behavior leads Lyotard to look around the room, jump on a potted plant, nibble the plant, etc. After suffcient exploration, Lyotard's goal is satisfied. This success is passed on to Em which makes Lyotard mildly happy. The happy emotion leads to the "content" feature being set. Hap then notices this feature being active and decides to pursue a behavior to find a comfortable place to sit, again to satisfy the high-level amusement goal. This behavior consists of going to a bedroom, jumping onto a chair, sitting down, and licking himself for a while.

 

 

At this point, a human user whom Lyotard dislikes walks into the room. The dislike attitude, part of the human-cat social relationship in Em, gives rise to an emotion of mild hate toward the user. Further, Em notices that some of Lyotard's goals, such as not-being-hurt, are threatened by the disliked user's proximity. This prospect of a goal failure generates fear in Lyotard. The fear and hate combine to generate a strong "aggressive" feature and diminish the previous "content" feature.

 

In this case, Hap also has access to the fear emotion itself to determine why Lyotard is feeling aggressive. All this combines in Hap to give rise to an avoid-harm goal and its subsidiary escape/run-away behavior that leads Lyotard to jump off the chair and run out of the room."

 

(Bates et al., 1992b, p. 7)

Reilly (1996) examined the believability of a virtual character equipped with Em.  Test subjects were confronted with two virtual worlds in which two virtual characters acted.  The difference between the two worlds consisted of the fact that in one case both characters were equipped with Em, while in the second case only one character contained it. 

Afterwards, a questionnaire explored which differences the test subjects noticed between the Em character ("Melvin") and the non-Em character ("Chuckie").

The test subjects classified Melvin as more emotional than Chuckie. His believability was also rated more highly than Chuckie's. At the same time the test subjects indicated that Melvin's personality was more clearly delineated than Chuckie's, and that with Melvin they less frequently had the feeling of dealing with a fictitious character than with Chuckie.

The significance of the results varies considerably, however, so that Reilly concedes that Em is only "moderately successful" (Reilly, 1996, p. 129).

 

5.4. The model of Elliott

A further model based on the theory of Ortony, Clore and Collins is the Affective Reasoner of Clark Elliott. Elliott is primarily interested in the role of emotions in social interactions, be it between humans, between humans and computers, or between virtual participants in a virtual computer world.

Elliott summarizes the core elements of the Affective Reasoner as follows:

"One way to explore emotion reasoning is by simulating a world and populating it with agents capable of participating in emotional episodes. This is the approach we have taken. For this to be useful we must have (1) a simulated world which is rich enough to test the many subtle variations a treatment of emotion reasoning requires, (2) agents capable of (a) a wide range of affective states, (b) an interesting array of interpretations of situations leading to those states and (c) a reasonable set of reactions to those states, (3) a way to capture a theory of emotions, and (4) a way for agents to interact and to reason about the affective states of one another. The Affective Reasoner supports these requirements."

 

(Elliott, 1992, p. 2)

According to Elliott, the advantages of such a model are numerous. First, it makes it possible to examine psychological theories about the emergence of emotions and the actions resulting from them for their internal plausibility. Second, affective modules are an important component of distributed agent systems if these are to act without friction losses in real time. Third, a computer model which can understand and express emotions is a substantial step towards building better man-machine interfaces.

As an example of a simulated world, Elliott (1992) chose Taxiworld, a scenario with four taxi drivers in Chicago. (Taxiworld is not limited to four drivers; the simulation was implemented with up to 40 drivers.) There are different stops, different passengers, policemen, and different destinations. In this way a number of situations can be created which lead to the development of emotions.

The taxi drivers must be able to interpret these situations in such a way that emotions can develop. For this, they need the ability to reflect on the emotions of other taxi drivers. Finally, the drivers should be able to act based on their emotions.

Elliott illustrates the difference between the Affective Reasoner and classical analysis models of AI with the following example (Elliott, 1992): "Tom's car did not start, and Tom therefore missed an appointment. He insulted his car. Harry observed this incident."

A classical AI system would draw the following conclusions from this story:  Tom should let his car be repaired.  Harry has learned that Tom's car is defective.  Tom could not come to his appointment in time without his car.  Harry suggests that in the future Tom should set out earlier to his appointments. 

The Affective Reasoner, however, would come to completely different conclusions: Tom holds his car responsible for his missed appointment. Tom is angry. Harry cannot understand why Tom is angry with his car, since one cannot hold a car responsible. Harry advises Tom to calm down. Harry feels pity for his friend Tom because he is so upset.

In order to react in this way, the Affective Reasoner needs a relatively large number of components. Although it is specialized in emotions, Elliott nevertheless calls it a "shallow model" (Elliott, 1994). In the following sections the essential components of the Affective Reasoner are presented as described by Elliott (1992).

 

5.4.1. The construction of the agents

The agents of the Affective Reasoner have a rudimentary personality.  This personality consists of two components:  the interpretive personality component represents the individual disposition of an agent to interpret situations in its world.  The manifestative personality component is its individual way of showing its emotions. 

Each agent has one or more goals, that is, situations whose occurrence the agent judges as desirable. In order to be able to act emotionally, the agents need an object domain within which situations occur that can lead to emotions and within which the agents can execute actions elicited by emotions.

For its functioning, each agent needs several data bases to which it must have access at all times:

1. A data base with 24 emotion types which essentially correspond to the emotion types of Ortony, Clore and Collins (1988), extended by Elliott with the two types love and hate. Specific emotion eliciting conditions (EECs) are assigned to each of these emotion types.

2. A data base with goals, standards and preferences. These GSPs constitute the concern structure of an agent and define at the same time its interpretive personality component.

3. A data base with assumed GSPs for the other agents of its world. Elliott calls it the COO data base (Concerns-of-Others). Since these data are acquired by the agent, they are mostly imperfect and can also contain wrong assumptions.

4. A data base with reaction patterns which, depending upon type of emotion, are divided into up to twenty different groups.

 

5.4.2. Generating emotions

The patterns stored in the GSP and COO data bases are compared by the agent with the EECs in its world; where they correspond, a group of connections develops. Some of these connections represent two or more values for a class which Elliott calls an emotion eliciting condition relation (EEC relation).

EEC relations are composed of elements of the emotion eliciting situation and their interpretation by the agent. Taken together, the condition for eliciting an emotion can develop in this way:

self: (*)
other: (*)
desire-self: (d/u)
desire-other: (d/u)
pleasingness: (p/d)
status: (u/c/d)
evaluation: (p/b)
responsible agent: (*)
appealingness: (a/u)

Key to attribute values:

*       some agent's name
d/u     desirable or undesirable (event)
p/d     pleased or displeased about another's fortunes (event)
p/b     praiseworthy or blameworthy (act)
a/u     appealing or unappealing (object)
u/c/d   unconfirmed, confirmed or disconfirmed

Table 5: EEC relations of the Affective Reasoner (after Elliott, 1992, p. 37)
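An EEC relation with the attribute slots of Table 5 could be represented roughly as follows; the slot names follow the table, while the concrete example values (Tom and his car) are invented for illustration.

# Sketch of an EEC relation as a record with the attribute slots of Table 5.
# The slot names follow the table; the example values are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EECRelation:
    self_agent: str                  # (*) some agent's name
    other: Optional[str]             # (*) other agent involved, if any
    desire_self: Optional[str]       # (d/u) desirable or undesirable event
    desire_other: Optional[str]      # (d/u) as appraised for the other agent
    pleasingness: Optional[str]      # (p/d) pleased/displeased about another's fortunes
    status: Optional[str]            # (u/c/d) unconfirmed, confirmed, disconfirmed
    evaluation: Optional[str]        # (p/b) praiseworthy or blameworthy act
    responsible_agent: Optional[str] # (*) who is held responsible
    appealingness: Optional[str]     # (a/u) appealing or unappealing object

# Tom's car fails to start and Tom misses his appointment:
tom_anger = EECRelation(self_agent="Tom", other=None, desire_self="undesirable",
                        desire_other=None, pleasingness=None, status="confirmed",
                        evaluation="blameworthy", responsible_agent="Tom's car",
                        appealingness=None)
print(tom_anger)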

Once one or more EEC relations are formed, they are used to generate emotions. In this phase a number of problems arise which Elliott discusses in detail because they were not sufficiently considered in the theory of Ortony, Clore and Collins.

As an example Elliott cites a compound emotion. The Affective Reasoner constructs the EEC relations for the two underlying emotions and afterwards merges them into a new EEC relation. The constituent emotions are thus replaced by the compound emotion. Elliott does not regard this as an optimal solution:

"Does anger really subsume distress? Do compound emotions always subsume their constituent emotions? That is, in feeling anger does a person also feel distress and reproach? This is a diffcult question. Unfortunately, since we are implementing a platform that generates discrete instances of emotions, we cannot finesse this issue. Either they do or they do not. There can be no middle ground until the eliciting condition theory is extended, and the EEC relations extended."

 

(Elliott, 1992, p. 42)

This procedure may work with qualitatively similar emotions (Elliott cites distress and anger as examples), but a problem emerges when several emotions arise at the same time, in particular when they contradict each other.

With several instances of the same emotion the solution is still quite simple. If an agent has, for example, two goals while playing cards ("to win" and "to earn money"), winning triggers the emotion happy twice. The Affective Reasoner then simply generates two instances of the same emotion.

The situation is more problematic with contradictory emotions. Elliott grants that the OCC model exhibits gaps in this respect and explains: "Except for the superficial treatment of conflicting expressions of emotions, the development and implementation of a theory of the expression of multiple emotions is beyond the scope of this work." (Elliott, 1992, p. 44f.) The Affective Reasoner therefore shifts the "solution" of this problem to its action generation module (see below).

 

5.4.3. Generating actions

 

As soon as an emotional state has been generated for an agent, an action resulting from it is initiated. For this the Affective Reasoner uses an emotion manifestation lexicon with three dimensions: the 24 emotion types, the roughly twenty reaction types (emotion manifestation categories), and an intensity hierarchy of the possible reactions (which was not implemented in the first version of the Affective Reasoner).

The reaction types of the Affective Reasoner are based on a list by Gilboa and Ortony (unpublished). These are hierarchically organized; furthermore, each hierarchical level is arranged along a continuum from spontaneous to planned reactions. As an example Elliott cites the action categories for "gloating":

 

Spontaneous / Non goal-directed
    Expressive
        Somatic: flush, tremble, quiet pleasure
        Behavioral (towards inanimate): slap
        Behavioral (towards animate): smile, grin, laugh
        Communicative (non-verbal): superior smile, throw arms up in air
        Communicative (verbal): crow, inform-victim
    Information processing
        Evaluative self-directed attributions of: superiority, intelligence, prowess, invincibility
        Evaluative agent-directed attributions of: silliness, vulnerability, inferiority
        Obsessive attentional focus on: other agent's blocked goal
Goal-directed
    Affect-oriented
        Emotion regulation and modulation
            Repression: deny positive valence
            Reciprocal: "rub-it-in"
            Suppression: show compassion
            Distraction: focus on other events
            Reappraisal of self as: winner
            Reappraisal of situation as: modifiable, insignificant
        Other-directed emotion modulation: induce embarrassment, induce fear, induce sympathy for future, induce others to experience joy at victim's expense
    Plan-oriented
        Situated plan-initiation: call attention to the event
Planned
    Full plan-initiation: plan for recurrence of event

Table 6: Reaction types of the Affective Reasoner for "gloating" (after Elliott, 1992, p. 97)

For each agent, individual categories can be activated or deactivated before the start of the simulation.  This specific pattern of active and inactive categories constitutes the individual manifestative personality of an agent.  Elliott calls the activated categories the potential temperament traits of an agent. 

In order to avoid conflicts between contradictory emotions and, concomitantly, contradictory actions, the action module contains so-called action exclusion sets. They are formed by classifying the possible reactions into equivalence classes. A member of one of these classes can never appear together with a member of another class in the resulting action set.
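The principle of the action exclusion sets can be sketched as follows; the class contents and the policy for choosing the "winning" class are my own assumptions, since they are not specified here.

# Sketch of action exclusion sets as described above: candidate reactions are
# partitioned into equivalence classes, and the resulting action set may only
# contain members of a single class. Class contents and the choice of which
# class "wins" are invented for illustration.

exclusion_classes = {
    "hostile":      {"crow", "rub-it-in", "induce_fear"},
    "conciliatory": {"show_compassion", "focus_on_other_events"},
}

def resolve(candidate_actions, preferred_class="hostile"):
    """Keep only candidates from one equivalence class (preferred if possible)."""
    for cls in [preferred_class, *exclusion_classes]:
        members = exclusion_classes.get(cls, set())
        chosen = members & candidate_actions
        if chosen:
            return chosen
    return set()

print(resolve({"crow", "show_compassion", "induce_fear"}))  # hostile subset only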

 

5.4.4. Interpreting the emotions of other agents

An agent receives its knowledge about the emotions of other agents not only through pre-programmed characteristics, but also by observing other agents within the simulation and drawing conclusions from these observations. These conclusions then flow into its COO data base. In order to integrate this learning process into the Affective Reasoner, Elliott uses a program named Protos (Bareiss, 1989).

An agent observes the emotional reaction of another agent. Protos then permits the agent to draw conclusions about the emotion the other agent feels and thus to demonstrate empathy.

First, the observed emotional reaction is compared with a data base of emotional reactions in order to determine the underlying emotion. Then the observed event is filtered through the COO data base for the observed agent in order to determine whether this reaction is already registered. If this is the case, it can be assumed that the data base contains a correct representation of the emotion-eliciting situation. On this basis the observing agent can then develop an explanation for the behaviour of the observed agent.

If the representation in the COO data base does not agree with the observed behaviour, it is removed from the data base and the data base is scanned again. If no correct representation is found, the agent can fall back on default values, which are then integrated into the COO data base.

Since COOs are nothing other than assumed GSPs for another agent, the Affective Reasoner is able, with the help of so-called satellite COOs, to represent beliefs of an agent about the assumptions of another agent.

 

5.4.5. The development of the model

The model described so far was presented in essentially this form by Elliott in his 1992 thesis. In the following years he developed the Affective Reasoner further in a number of areas.

For example, the original model lacked a component which determines the intensity of emotions. In further work, Elliott (Elliott and Siegle, 1993) developed a group of emotion intensity variables based on the work of Ortony, Clore and Collins and of Frijda.

Elliott classifies the intensity variables into three categories. Each variable is assigned limits within which it can move (partially bipolar). Most intensities can take on a value between 0 and 10. Weaker modifiers can take on values between 0 and 3; modifiers which only reduce an intensity take on values between 0 and 1. Variables whose effects on the intensity computations are determined by the valence of an emotion (for example, a variable which increases the intensity of a negatively valenced emotion but reduces the intensity of a positively valenced one) can take on values between 1 and 3 and additionally receive a bias value which specifies the direction. The intensity variables and their value ranges are listed below; a schematic sketch follows the list.

1.      simulation-event variables are variables whose values change independently of the interpretation mechanisms of the agents (goal realization/blockage: -10 to +10, blameworthiness-praiseworthiness: -10 to +10, appealingness: 0 to 10, repulsiveness: -10 to 0, certainty: 0 to 1, sense-of-reality: 0 to 1, temporal proximity: 0 to 1, surprisingness: 1 to 3, effort: 0 to 3, deservingness: 1 to 3);

2.      stable disposition variables have to do with the interpretation of a situation by an agent, are relatively constant and constitute the personality of an agent (importance to agent of achieving goal: 0 to 10, importance to agent of not having goal blocked: 0 to 10, importance to agent of having standard upheld: 0 to 10, importance to agent of not having standard violated: 0 to 10, influence of preference on agent: 0 to 10, friendship-animosity: 0 to 3, emotional interrelatedness of agents: 0 to 3);

3.      mood-relevant variables are volatile, change for an agent the interpretation of a situation, can be the result of previous affective experiences and return to their default values after a certain time (arousal: 0.1 to 3, physical well-being: 0.1 to 3, valence bias: 1 to 3, depression-ecstasy: 1 to 3, anxiety-invincibility: 1 to 3, importance of all Goals, Standards, and Preferences: 0.3 to 3, liability-creditableness: 1 to 3).
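The sketch announced above shows schematically how such a variable set with its value ranges might be represented and kept within its limits; only a subset of the variables is included and the dictionary layout is an assumption for illustration, not Elliott's implementation.

# Schematic sketch of intensity variables from the three categories and their
# value ranges, with a helper that clamps assignments to the stated limits.

VARIABLE_RANGES = {
    # simulation-event variables
    "goal_realization":   (-10, 10),
    "blameworthiness":    (-10, 10),
    "certainty":          (0, 1),
    "surprisingness":     (1, 3),
    # stable disposition variables
    "importance_of_goal": (0, 10),
    "friendship":         (0, 3),
    # mood-relevant variables
    "arousal":            (0.1, 3),
    "valence_bias":       (1, 3),
}

def set_variable(state, name, value):
    lo, hi = VARIABLE_RANGES[name]
    state[name] = max(lo, min(hi, value))   # keep the value within its limits
    return state

print(set_variable({}, "arousal", 5))        # clamped to 3
print(set_variable({}, "certainty", 0.8))    # within range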

Elliott (Elliott and Siegle, 1993) reports that an analysis of emotional episodes with the help of these variables led to the result that, within the context of the model, all emotions could be represented and recognized.

In a further step, Elliott (Elliott and Carlino, 1994) extended the Affective Reasoner with a speech recognition module. The system was presented with sentences containing emotion words, intensity modifiers and pronominal references to third parties ("I am a bit sad because he...."). In the first run, 188 out of 198 emotion words were recognized. In a second experiment the sentence "Hello Sam, I want to talk to you" was presented to the system with seven different emotional intonations (anger, hatred, sadness, love, joy, fear, neutral). After some training, the Affective Reasoner identified the underlying emotion category with one hundred percent accuracy.

In a further step the Affective Reasoner received a module with which it can represent emotion types as facial expressions of a cartoon face (Elliott, Yang and Nerheim-Wolfe, 1993). The representational abilities cover the 24 emotion types in three intensity levels each, which can be represented by one of seven schematic faces. The faces were fed into a morphing module which is able to produce rudimentary lip movements and change fluidly from one facial expression to the next. In addition, the Affective Reasoner was equipped with a speech output module and the ability to select and play music from an extensive data base depending on the emotion.

The ability of the system to represent emotions correctly was examined by Elliott (1997a) in an experiment with 141 test subjects. The test subjects were shown videos in which either an actor or the faces of the Affective Reasoner spoke a sentence which, depending on intonation and facial expression, could have different meanings. The actor was trained thoroughly to express even subtle differences between emotions; the Affective Reasoner was given only the emotion category and the text. The task of the test subjects was to assign the correct emotional meaning to the spoken sentence from a list of alternatives. An example:

 

"For example, in one set, twelve presentations of the ambiguous sentence, "I picked up Catapia in Timbuktu," were shown to subjects. These had to be matched against twelve scenario descriptions such as, (a) Jack is proud of the Catapia he got in Timbuktu because it is quite a collector's prize; (b) Jack is gloating because his horse, Catapia, just won the Kentucky Derby and his archrival Archie could have bought Catapia himself last year in Timbuktu; and (c) Jack hopes that the Catapia stock he picked up in Timbuktu is going to be worth a fortune when the news about the oil elds hits; [etc., (d) - (l)]."

 

(Elliott, 1997a, p. 3)

Additionally, the test subjects indicated on a scale from 1 to 5 how confident they were in their judgements. The computer outputs were divided into three groups: facial expression only; facial expression and speech; and facial expression, speech and underlying music.

Altogether, the test subjects identified the underlying scenarios significantly more accurately with the computer faces than with the actor (70 percent compared with 53 percent). There were hardly any differences between the three presentation forms of the computer (face: 69 percent; face and speech: 71 percent; face, speech and music: 70 percent).

At present Elliott is working on integrating the Affective Reasoner as a module into two existing interactive tutoring systems (STEVE and Design-A-Plant) in order to give the virtual tutors the ability to understand and express emotions and thus make the training procedure more effective (Elliott et al., 1997).

 

5.5. The model of Scherer

Scherer implemented his theoretical approach in the form of an expert system named GENESE (Geneva Expert System on Emotions) (Scherer, 1993). The motive was to gain further insights for emotion-psychological model building and, in particular, to determine how many evaluation criteria are minimally necessary in order to identify an emotion unambiguously:

 

"As shown earlier, the question of how many and which appraisal criteria are minimally needed to explain emotion differentiation is one of the central issues in research on emotion-antecedent appraisal. It is argued here that one can work towards settling the issue by constructing, and continuously refining, an expert system that attempts to diagnose the nature of an emotional experience based exclusively on information about the results of the stimulus or event evaluation processes that have elicited the emotion."

 

(Scherer, 1993, p. 331)

The system consists of a knowledge base which records which kinds of appraisals are connected with which emotions. The different appraisal dimensions are linked to 14 different emotions by weights. These weights represent the probability with which a certain appraisal is linked with a certain emotion.

The user of the program must answer 15 questions regarding a certain emotional experience, for example:  "Did the situation which caused your emotion happen very suddenly or abruptly?".  The user can answer each question on a quantitative scale from 0 ("not true") to 5 ("extraordinary"). 

Once all questions are answered, the system compares the answer pattern of the user with the answer patterns which are theoretically linked with each emotion. Subsequently, it presents the user a list of all 14 emotions, arranged in order from "most likely" to "most improbable". If the computer determined the emotion correctly, it receives a confirmation from the user; otherwise the user types in "not correct". The system then presents a further ranking of emotions. If this is also wrong, the user enters the correct emotion and the program builds a specific appraisal-emotion data base from this answer for this particular user.

Through an empirical examination of the predictive power of his system, Scherer determined that it worked correctly in 77.9% of all cases. Certain emotions (e.g. despair/mourning) were predicted correctly more frequently than others (e.g. fear/worries).

Scherer's GENESE is unusual insofar as it is not a classical rule-based system but works with weights in a multidimensional space. There are exactly 15 dimensions, which correspond to the 16 appraisal dimensions of Scherer's emotion model. Each of the 14 emotions occupies a specific point in this space. The program makes its predictions by likewise converting the user's answers into a point in this vector space and then measuring the distances to the points of the 14 emotions. The emotion closest to the input is presented first.
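The vector-space idea can be sketched as follows; the prototype coordinates and the restriction to three appraisal dimensions and three emotions are invented for brevity, whereas GENESE itself works with 15 questions and 14 emotions.

# Sketch of GENESE's vector-space idea: each emotion occupies a point in the
# appraisal space, the user's answers form another point, and the emotions
# are ranked by distance. All coordinates below are invented.
import math

emotion_prototypes = {
    # (suddenness, goal obstructiveness, coping potential), each 0..5
    "fear":    (5, 4, 1),
    "anger":   (3, 5, 4),
    "sadness": (1, 4, 1),
}

def rank_emotions(answers):
    def distance(point):
        return math.dist(answers, point)
    return sorted(emotion_prototypes, key=lambda e: distance(emotion_prototypes[e]))

user_answers = (4, 5, 1)                 # answers to the appraisal questions
print(rank_emotions(user_answers))       # most likely emotion first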

Exactly this approach prompted Chwelos and Oatley (1994) to criticize the system. First of all they point out that such a space with 15 dimensions can contain altogether 4.7 x 10^11 points. This can mean that the point calculated from the user's inputs lies far away from every one of the 14 emotions; nevertheless, the system selects the nearest emotion. Chwelos and Oatley argue that in such a case the answer should rather be "no emotion" and propose extending the system with a threshold within which an input point must lie around an emotion in order to elicit a concrete answer.

Secondly, they criticize that the model proceeds from the assumption that each emotion corresponds to exactly one point in this space. They raise the question of why this should be the case, since different combinations of appraisal dimensions can elicit the same emotion.

Thirdly, Chwelos and Oatley question the heuristic adjustments of the appraisal dimensions implemented in GENESE, which cannot be found in Scherer's theoretical model. They speculate that these could be an artifact of the vector-space approach and note that they have no theoretical motivation.

Finally, Chwelos and Oatley doubt that Scherer's system actually delivers information about how many appraisal dimensions are minimally necessary in order to differentiate an emotion clearly.

 

5.6. The model of Frijda and Swagerman

There exist two implementations of Frijda's concern realisation theory:  ACRES (Frijda and Swagerman, 1987) and WILL (Moffat and Frijda, 1995). 

ACRES (Artificial Concern REalisation System) is a computer program which stores facts about emotions and works with them. Frijda and Swagerman wanted to answer the question: "Can computers do the same sort of things as humans can by way of their emotions; and can they be made to do so in a functionally similar way?" (Frijda and Swagerman, 1987, p. 236)

The starting point for ACRES is the assumption of a system which has various concerns and limited resources. Furthermore, the system has to operate in an environment which changes fast and is never completely predictable.

Based on these conditions, Frijda and Swagerman define seven requirements for such a system: 

 

1.      The existence of concerns demands a mechanism which can identify objects with concern relevance - objects which can promote or inhibit a concern. 

2.      Because opportunities and dangers are distributed over space and time, the system must also be able to act;  otherwise it cannot be regarded as independent.  Furthermore, the action control system must be able to understand the signals of the concern relevance mechanism.

3.      The system must possess the ability to monitor its own activities regarding the pursuit of opportunities and the avoidance of dangers and to recognize whether an action can lead to success or not.

4.      The system must have a repertoire of appropriate action alternatives and be able to generate action successions or plans.

5.      The system needs a number of pre-programmed actions for emergencies, so that it can react fast if necessary.

6.      Since the environment of the system consists partially of other agents like it, actions with a social character must be present in the action repertoire.

7.      Multiple concerns in an uncertain environment make it necessary to rearrange and/or temporarily postpone goals. The system must have a mechanism which makes such changes of priorities possible.

According to Frijda and Swagerman, all these requirements are fulfilled by the human emotion system.

In order to implement such a system, Frijda and Swagerman selected an action environment that makes sense for a computer program: the interaction with the user of the program. The concerns of the system in this context are:

 

avoid being killed concern;

preserve reasonable waiting times concern;

correct input concern;

variety in input concern;

safety concern.

All knowledge of ACRES is organized in the form of concepts, which consist of attribute-value pairs. Concerns are represented by a concept which contains, on the one hand, the topic and, on the other hand, a tariff sub-concept which represents the desired situation.

ACRES has three major tasks: to receive and accept input (the system rejects inputs with typing errors, for example); to learn about emotions through the information about emotions it receives from the user; and to gain, store and use knowledge about its own emotions and the emotions of others. The system therefore has three corresponding task components: Input, Vicarious knowledge and Learning.

Each task component has two functions: an operation function and a concern realisation function. The functions test whether concepts exist which are applicable to the received information; they use their knowledge to infer and generate related goals; and they infer which actions are relevant for reaching these goals and trigger appropriate actions.

The essential information with which ACRES works comes from the inputs of the user, from information already collected by ACRES, and from information ACRES infers from its existing information store.

The collected information represents the "memory" of ACRES. This includes, for example, how often a certain user made typing errors during input and how long ACRES had to wait for new input. On the basis of its experiences with the users, ACRES builds a so-called status index for each user: positive experiences lead to a rise in status, negative ones to a lowering of status.

Concern relevance tests in ACRES work by comparing the information about a current situation with the pre-programmed concerns of the system. Apart from the information collected by ACRES over time, there are some specific inputs which are directly emotionally relevant for ACRES, for example the instruction "kill".

Information about action alternatives is likewise represented in ACRES in the form of concepts. Each action concept consists of the sub-concepts start state, end state, and fail state. The sub-concept start state describes the initial conditions of an action, end state describes the state that the action can achieve, and fail state the conditions under which the action cannot be carried out.

During action selection, the goal is first compared with the end state sub-concepts of all action concepts; then the current state is compared with the start state sub-concepts of the action concepts selected in the first step, and one of them is selected. If no suitable action concept exists, a planning process is initiated which selects the action concept with the most obvious start state.
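This two-step selection can be sketched as follows; the concept format is invented for illustration, since ACRES' real concepts are richer attribute-value structures.

# Sketch of the two-step action selection described above: first match the
# goal against the end states of all action concepts, then match the current
# state against the start states of the remaining candidates.

action_concepts = [
    {"name": "ask_user_to_retype", "start": "bad_input",  "end": "correct_input"},
    {"name": "protest",            "start": "kill_order", "end": "avoid_being_killed"},
    {"name": "wait",               "start": "idle",       "end": "correct_input"},
]

def select_action(goal, current_state):
    # step 1: actions whose end state reaches the goal
    candidates = [a for a in action_concepts if a["end"] == goal]
    # step 2: among those, prefer one whose start state matches the situation
    for action in candidates:
        if action["start"] == current_state:
            return action["name"]
    # no directly applicable action: a planning process would pick the concept
    # with the "most obvious" start state; here we simply take the first candidate
    return candidates[0]["name"] if candidates else None

print(select_action("correct_input", "bad_input"))   # ask_user_to_retype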

Events lead ACRES to set up goals.  The event of discovering concern relevance leads to the goal of doing something about it.  The subsequent action selection process selects an action alternative with the procedure described above.  This process corresponds to what Frijda calls context appraisal in his emotion model. 

Time, processing capacity, and storage space are used to prepare and execute the concern realisation goal.  Task-oriented processing is postponed.

The precedence remains in place if the user does not change the situation in response to the requests of ACRES.

ACRES can refuse to accept new input as long as its concern has not been realized.

ACRES executes the concern realisation actions, some of which can affect the following processing.

In ACRES, control precedence depends on two factors:  the relative importance of the mobilized concerns and the gravity of the situation.  The relative importance of the concerns is a preset value;  "kill" has the highest importance of all.  The gravity of the situation is a variable which changes through the interaction of ACRES with its users.  In order to become effective, the control precedence must pass a certain threshold value. 
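One simple way to picture this is to combine the two factors, for example multiplicatively, and compare the result against a threshold.  The combination rule and all numbers in the sketch below are assumptions; Frijda and Swagerman do not give the exact formula.

PRECEDENCE_THRESHOLD = 1.0   # illustrative value

def control_precedence(importance: float, gravity: float) -> float:
    # importance: preset relative importance of the mobilized concern
    # gravity: variable describing the gravity of the current situation
    return importance * gravity

def takes_precedence(importance: float, gravity: float) -> bool:
    return control_precedence(importance, gravity) > PRECEDENCE_THRESHOLD

print(takes_precedence(importance=2.0, gravity=0.8))  # True: e.g. the "kill" concern interrupts
print(takes_precedence(importance=0.4, gravity=0.9))  # False: task processing continues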

The net result of all these processes is a number of "emotional" phenomena.  ACRES has, for example, a vocabulary of curses, insults and exclamations with which it can express such a state.  The system can refuse to co-operate with a user any further, can try to influence him, or can simply address the same request to the user again and again.  What is special about ACRES is not the fact that the program does not continue working with incorrect input - every other piece of software does this as well:

 

"It is the dynamic nature of the reactions, however, that is different: They sometimes occur when an input mistake is made or some other input feature is shown, and sometimes they do not. Moreover, some of the reactions themselves are dynamic, notably the changes in operator status."

 

(Frijda and Swagerman, 1987, p. 254)

 

Apart from the perception of events and their implications, ACRES is also able to notice its own perception.  The model constructs a representation of the current state and of the aspects relevant to its concerns.  According to Frijda and Swagerman, ACRES thereby constructs an emotional experience for itself.  They stress expressly:  "It is not a play on words when we say that ACRES builds up an emotional experience." (Frijda and Swagerman, 1987, p. 254). They continue:

"We do not wish to go into the deep problems of whether ACRES' representations can be said to correspond to "experiences", to "feels", as they are called in philosophical discussion. Probably, ACRES cannot be said to "feel", just as a colour-naming machine cannot be said to "see" red or blue, although we still have to be given satisfactory criteria for this denial of feeling or seeing. The main point, in the present context, is that ACRES shows many, and perhaps, in essence, the major signs that lead one to ascribe "emotions" to an animate agent."

 

(Frijda and Swagerman, 1987, p. 255)

The authors grant that their model still lacks certain features.  ACRES is far from able to show all the phenomena which arise when emotions are ascribed.  They state, however, that from a theoretical point of view these shortcomings can be considered trivial, because they are not a question of principle but only of the implementation.  They note that the computer cannot work in parallel and thus cannot produce interruptions of a current activity in real time.  Furthermore, the computer does not move around in a real environment and possesses no energy management of its own.  All these, they postulate, are purely technical questions and not problems of the theory.

 

5.7. The model of Moffat and Frijda

In a further work (Moffat, Frijda and Phaf, 1993), Frijda nevertheless grants that ACRES exhibits some shortcomings in principle.  Because of that, he developed, together with Moffat (Frijda and Moffat, 1993 and 1994), a new basic model which has been rudimentarily implemented in the system WILL (Moffat and Frijda, 1995;  Moffat, 1997). 

5.7.1. Criticism of ACRES

The starting point for the criticism of ACRES is a logged session with a user which is typical for the system:

Hello, Dave. I am ACRES. What would you like to know? 

 

[1] show_concept X
YOU KEEP ME WAITING TOO LONG!
<< prints data about X >> 

Query the database about X.
I had paused before typing.
(details not important)

[2] debug_on
<< prints lots of internal variables etc.>> 

Turn debugging on.

[3] debug_off
YOU KEEP ME WAITING TOO LONG! 

Turn it off again.
I kept him waiting again.

[4] debug_off
YOU PUT TOO LITTLE VARIETY IN YOUR INSTRUCTIONS! 

I'm trying to 'bore' ACRES.
He's 'bored' with "debug_off".

[5] debug_off
YOU KEEP ME WAITING TOO LONG!
YOUR INPUT IS CORRECT. FINE! 


I kept him waiting again.
I made no typing errors yet.

[6] debug_off
YOU ENDANGER MY SAFETY!
YOU PUT TOO LITTLE VARIETY IN YOUR INSTRUCTIONS! 

I ignore his requests to type faster, and to be more interesting, so he regards me as an unfriendly threat.

[7] debug_off
YOU PUT TOO LITTLE VARIETY IN YOUR INSTRUCTIONS!
YOUR INPUT IS CORRECT. FINE! 

 

[8] debug_off
YOU KEEP ME WAITING TOO LONG! 

 

[9] debug_off
YOU ENDANGER MY SAFETY! 

 

[10] name_last_emotion
YOU DO NOT KEEP ME WAITING. FINE!

I experienced :

Self-hatred ... 89 %

Anger ......... 88 %

Disgust ....... 79 %

YOU ENDANGER MY SAFETY!
YOU PUT TOO LITTLE VARIETY IN YOUR INSTRUCTIONS!
YOUR INPUT IS CORRECT. FINE! 

Ask the model to analyse its
emotional experience. He finds three that were quite strongly felt
(100% is strongest).

[11] show_concept
YOU KEEP ME WAITING TOO LONG!
You did not supply an argument... which concept? 

 

[12] X
<< prints data about X again >> 

 

[13] kill
YOU ENDANGER MY SAFETY!
Please, do not let me die! Please type in "live";
it will kill me if you type "kill" again! 

First "kill" command to end session

[14] debug_off
Please, revoke the kill command by typing "live".
YOU KEEP ME WAITING TOO LONG! 

Ignore request, but don't "kill" either.

(Unusual aggression for someone making an appeal.)

[15] live
Thank you, may your life be very happy! 

 

Table 7: Session protocol of ACRES (Moffat et al., 1993)

On the basis of this protocol, Moffat et al. formulate the following criticism of ACRES:   

1.      ACRES shows enormous emotion reversals within a few seconds.  One of the reasons for this, according to the authors, could lie in the fact that ACRES has no physiology like humans, whose emotional states are lent a certain stability and duration by neurotransmitters, for example.  Much more important for the authors, however, is that ACRES possesses no memory.  Even a short-term memory, i.e. the ability to remember the immediately preceding state, could affect the behaviour of the system in a similar direction as a physiology would.

2.      ACRES supplies contradictory emotional reactions in one and the same output.  If a user enters the same instruction again and again, but quickly, ACRES shows a positive emotional reaction regarding the speed of the input, but a negative emotional reaction regarding the lack of variability of the input.  This is behaviour untypical of humans. 

3.      The emotional and non-emotional reactions exhibited together by ACRES do not concern the same topic, but different topics.  This, too, is rarely observed in humans.  ACRES can answer a user's question and directly afterwards give an emotional reaction on another topic.  As a reason for this behaviour the authors state that ACRES cannot theoretically differentiate between emotional and more generally motivated behaviour and regards them as qualitatively equivalent.  The reason for this lies in an arbitrarily determined threshold value with which the system differentiates between emotionally relevant and emotionally irrelevant concerns. 

4.      The reactions of ACRES are easily predictable.  Thus, if the input is too slow, it always answers with the phrase "You keep me waiting too long!".  This corresponds more to a reflex than to a genuine emotional reaction. 

5.7.2. Demands upon an emotional system

Based on this analysis, the authors suggest a number of further components which an emotional system should possess and which, at the same time, also affect a theory of emotions. 

 

With the term awareness of the present they describe the ability of a system to observe its own actions over a certain period of time.  This motivational visibility of the present means that a system does not simply forget a motivated action which failed, but that the emotion disappears only when the goal state originally aimed at is reached. 

 

As the second necessary element they name the motivational visibility of the planner.  In ACRES, as in almost all other AI systems, the planner is implemented as a separate module.  The other modules receive no insight into its half-finished plans and therefore cannot affect them.  The different concerns of a system must, however, have the possibility of inspecting these plans, since the plans develop under specific criteria which might be completely logical in themselves but perhaps violate another concern. 

The third element is called by the authors the motivational visibility of the future.  This means the possibility of making not only the system's own planned actions visible for the entire system, but also the actions of other agents and events from the environment.  This is important for anticipations of the future and thus for emotions like, for example, surprise. 

 

Furthermore, the system needs a world model.  In ACRES, only the planning module contains such a world model.  The overall system has no possibility of observing the effects of its actions and of recognizing whether they failed or were successful.  Coupled with a memory, the world model gives the system the ability to try out and evaluate different actions.  The system thereby receives a larger and, above all, more flexible action repertoire.  At the same time, a sense of time is necessary with which the system can assess within which period of time it must react and how much time an action takes up.

Finally, the authors consider it essential to differentiate clearly between motives and emotions, something ACRES does not do.  They postulate that an emotion arises only when a motive cannot be satisfied, or can be satisfied only at a large cost to the resources of the system.  A system will first try to satisfy a concern with the associated, almost automatic action.  Only if that does not work, or the system can predict that it will not work, or the system's confidence that it will work is low, or the system assumes that it does not possess sufficient control, does an emotion arise.  Its function is to mobilize the entire system in order to cope with the problem. 
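This distinction can be summarized as a small decision rule.  The following sketch only illustrates the criteria named above; the function name and thresholds are invented for the example.

def concern_response(routine_succeeds: bool,
                     predicted_to_fail: bool,
                     confidence: float,
                     control: float) -> str:
    # A concern is normally served by its associated, almost automatic action.
    if routine_succeeds and not predicted_to_fail and confidence >= 0.5 and control >= 0.5:
        return "routine action, no emotion"
    # Otherwise an emotion arises whose function is to mobilize the whole system.
    return "emotion: mobilize the entire system"

print(concern_response(True, False, confidence=0.9, control=0.8))   # routine action, no emotion
print(concern_response(True, False, confidence=0.2, control=0.8))   # emotion: mobilize the entire system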

 

5.7.3. Implementation in WILL

Based on these considerations, Frijda and Moffat have developed a computer model called WILL which is supposed to correct the shortcomings of ACRES.  WILL is a system working in parallel with the following architecture:

 

Fig. 5: Architecture of WILL (Frijda and Moffat, 1994)

The system consists of a perception module, the Perceiver;  an action execution module, the Executor;  a forecast module, the Predictor;  a planning module, the Planner;  as well as an emotion module, the Emotor.  In addition it contains a memory and a module for testing concern relevance. 

A basic principle of the system is that the modules do not communicate with one another directly, but only through the memory.  Thus all elements of the system have access at any time to all processes and subprocesses of the other elements.  Each module reads its information out of memory, works on it, and writes it back into memory.  All modules work in parallel, i.e. they are all equal in principle. 

Everything that is written into memory is tested for concern relevance as it passes the concerns layer.  Through this mechanism the system receives a regulating instance, because different concerns have different importance for the system.  The concern relevance module thus exerts a control function by evaluating the passing information differently. 

This evaluation works in such a way that the concern relevance module attributes a charge value to each element that is written into memory.  Elements with a higher charge are more relevant for the concerns of the system than elements with a low charge. 

Each of the modules receives its information from memory.  The element with the highest charge is always the one given to the modules to be worked on;  the authors call this element the focus item.  In order to prevent the ranking of the elements in memory from remaining the same forever, the elements must be able to gain and lose charge.  In WILL this happens in that the element with the highest charge in memory loses charge if it is not worked on by a module during a working cycle.  Thus, if the Planner received a focus element but could develop no plan in connection with it, the element is written back into memory with a lower charge.  The authors call this mechanism autoboredom. 
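The charge mechanism and autoboredom can be pictured with a short sketch like the following; the class layout and the decay factor are assumptions, since the published descriptions do not contain the actual code.

class Memory:
    def __init__(self, decay: float = 0.8):
        self.charges = {}      # memory element -> charge value
        self.decay = decay

    def write(self, element: str, charge: float) -> None:
        self.charges[element] = charge

    def focus_item(self) -> str:
        # the element with the highest charge is handed to the modules
        return max(self.charges, key=self.charges.get)

    def autoboredom(self, element: str, was_processed: bool) -> None:
        # an element loses charge if no module did anything with it this cycle
        if not was_processed:
            self.charges[element] *= self.decay

memory = Memory()
memory.write("move(will, {c,d}) in round 1", charge=5.0)
memory.write("payoff expectation for round 1", charge=3.0)

focus = memory.focus_item()                     # given to the Planner, for example
memory.autoboredom(focus, was_processed=False)  # no plan found: charge drops to 4.0
print(memory.charges)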

The task of the Emotor is, in the context of a further appraisal process (Moffat calls this secondary appraisal;  it corresponds to the context appraisal of Frijda’s theory), to produce action tendencies for elements with high concern relevance and to deposit them in memory as action intentions.  In the next cycle, the Executor will take up such an action intention if it has not been changed in the meantime and has not lost the rank of focus element.

Moffat presented a first realization of WILL (Moffat, 1997).  The system has the task of playing the game "Prisoner's Dilemma" with a user.  In its basic form, the Prisoner's Dilemma consists in two players deciding, independently of one another, whether they want to cooperate with their counterpart (cooperate, c) or not (defect, d).  After they have made their choices, these are communicated to both players.  Depending on the outcome (cc, cd, dc, dd), the players are paid out a certain amount of money.  The result matrix for WILL looks as follows (the numbers are amounts in dollars):

 

                      User: c              User: d

Will: c          Will 3 / User 3      Will 0 / User 5

Will: d          Will 5 / User 0      Will 1 / User 1

Table 8: Result matrix for Prisoner's Dilemma (after Moffat, 1997)

Extensive experimentation (see e.g. Axelrod, 1990) has shown that under most circumstances a strategy of mutual co-operation is most successful for both sides.  However, there can be situations in which it is better for a player not to cooperate.

In Moffat's model there are two kinds of events:  move events and payoff events.  The decision of the user is represented formally as move(user, c).  A prediction of which move the user will make in the next round is expressed by move(user, {c,d}).  With these definitions, the world of the game can be expressed in structured form.  Thus WILL's assumption that it will not cooperate, while the user will either cooperate or not, is expressed as follows, together with the associated payoffs: 

move(will,d) & move(user,{c,d}) ==> payoff(will,{1,5}) & payoff(user,{0,1})
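The payoff matrix of Table 8 and the set-valued prediction expressed by this rule can be pictured as follows; the Python encoding is only an illustration of the notation, not Moffat's implementation.

# (will_move, user_move) -> (payoff for Will, payoff for the user)
PAYOFF = {
    ("c", "c"): (3, 3),
    ("c", "d"): (0, 5),
    ("d", "c"): (5, 0),
    ("d", "d"): (1, 1),
}

def predict_payoffs(will_move: str, user_moves=("c", "d")):
    # If WILL's move is fixed but the user's move is still open, the
    # prediction is a set of possible payoffs for each player.
    will_set = {PAYOFF[(will_move, u)][0] for u in user_moves}
    user_set = {PAYOFF[(will_move, u)][1] for u in user_moves}
    return will_set, user_set

print(predict_payoffs("d"))   # ({1, 5}, {0, 1}), as in the rule above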

The concern of WILL in this game is to win as much money as possible.  This is expressed formally as $_concern = [0 -> 2 -> 5] and means that the most undesirable result is 0 dollars, the most desirable 5 dollars, and the so-called set-point 2 dollars.  The set-point defines the average result.  The valence of the possible results is defined as follows for WILL:

win $0 --> valence = -2 x IMP

win $2 --> valence = 0 x IMP

win $3 --> valence = +1 x IMP

win $5 --> valence = +3 x IMP

IMP is a factor for the importance of the concern.
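The listed valence values follow if valence is read as the deviation of the payoff from the set-point of 2 dollars, weighted by IMP.  This reading is inferred from the listed values, not quoted from Moffat, and is shown here only as an illustration.

SET_POINT = 2     # dollars, from $_concern = [0 -> 2 -> 5]
IMP = 1.0         # importance of the money concern (illustrative value)

def valence(payoff: int) -> float:
    return (payoff - SET_POINT) * IMP

for payoff in (0, 2, 3, 5):
    print(f"win ${payoff} --> valence = {valence(payoff):+g}")
# prints -2, +0, +1, +3, matching the list above (times IMP)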

 

A further concern of WILL is moral behaviour.  The system knows that co-operation is more moral than non-co-operation: 

 

morality_concern = [0 -> 0.8 -> 1].

 

 

The game move c has the moral value 1, the move d the value 0.  The set-point is 0.8. 

 

WILL has two cognitive modules, the Predictor and the Planner.  Implemented in memory is a world model which expresses, for example, the assumption that the user will not constantly cooperate, as follows: 

 

move(user,UM) --> move(user,d).

 

According to Moffat, the elements mentioned so far already model substantial parts of an emotion, namely affect, relevance assessment and control precedence.  The Emotor is responsible for context appraisal and action tendency.  The appraisals programmed into WILL are derived from Frijda’s theory.  Some examples: 

 

 

Valence – Can be + or – . States how un/comfortable the emotion is.

Un/Expectedness - Was the perceived event expected?

Control – Does the agent have control of the situation?

Agency – Who is responsible for the event?

Morality – Was the event (action) moral?

Probability – The probability that the event will really happen.

Urgency – How urgent is the situation?

Action tendencies are likewise hard-coded into WILL.  Some examples:

hurt(O) / help(O) – Wants to harm or help other agent O.

try_harder(G) / give_up(G) – Try harder or give up goal G.

approach(O) / avoid(O) – Wants to approach or avoid O.

fight(O) / flee(O) – Wants to fight O or flee.

exuberance / apathy & inhibition – General activation level.

 

 

From the appraisals and action tendencies, the Emotor produces emotions which Moffat calls true emotions.  He gives three examples:

Happiness
  Appraisals:       valence = positive, agency = world
  Action tendency:  happy_exuberance

Anger
  Appraisals:       valence = negative, morality = negative, agency = User
  Action tendency:  hurt(User) --> play d, attend(User)

Pride
  Appraisals:       valence = positive, morality = positive, agency = Self
  Action tendency:  exuberance --> verbal, attend(Self)
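The mapping from appraisal patterns to these emotions can be illustrated by a small rule table; the rule format and the matching function below are assumptions for illustration, not Moffat's code.

EMOTION_RULES = [
    {"emotion": "happiness",
     "appraisals": {"valence": "positive", "agency": "world"},
     "action_tendency": "happy_exuberance"},
    {"emotion": "anger",
     "appraisals": {"valence": "negative", "morality": "negative", "agency": "user"},
     "action_tendency": "hurt(user)"},       # realized as: play d, attend(user)
    {"emotion": "pride",
     "appraisals": {"valence": "positive", "morality": "positive", "agency": "self"},
     "action_tendency": "exuberance"},       # realized as: verbal expression, attend(self)
]

def emote(appraisal: dict):
    # return the first emotion whose appraisal pattern matches completely
    for rule in EMOTION_RULES:
        if all(appraisal.get(k) == v for k, v in rule["appraisals"].items()):
            return rule["emotion"], rule["action_tendency"]
    return None

print(emote({"valence": "negative", "morality": "negative", "agency": "user"}))
# ('anger', 'hurt(user)')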

On the basis of a session protocol, Moffat then describes the internal functioning of WILL: 

1. Planner: Notice that I play c or d in round 1.

a. Decide that I play c in round 1.

WILL has noticed that it will soon play a first round of Prisoner's Dilemma. The Planner points out the two alternatives; the decision falls on c because this is the morally more correct alternative.

2. Predictor: Notice that I play c in round 1.

a. Predict that I win $0 or $3 and User wins $3 or $5 in round 1.

The Predictor picks up the information written back into memory and predicts the possible results of the first round. 

3. Predictor: Notice that I win $0 or $3 in round 1.

a. Predict that I play c or d and User plays c or d in round 2.

4. Predictor: Notice that I play c or d in round 2.

a. Predict that I and User win $0 or $1 or $3 or $5 in round 1.

The Predictor again reads out the information and makes further predictions.

5. Planner: Notice that I play c or d in round 2.

a. Decide that I play c in round 2.

The Planner reads out the information and plans for round 2.

6. Executor: Tell the Umpire that I play c in round 1.

The Executor carries out the action for the first round suggested by the Planner and announces it to the umpire, a software module independent of the system.  The perceptible change of topic illustrates how the charging and uncharging of elements in memory shifts the attention of the system:  for several working cycles the move for round 1 was charged so low that the other modules did not occupy themselves with it.   

7. UMPIRE: Round 1. What do you play ? . . . c.

8. UMPIRE: Round 1. You play c and Will plays c.

9. Perceiver: Hear from Umpire that User just played c and I just played c.

10. Emotor: Notice that I just played c in round 1.

a. Appraisals:        intensity = 0.4, valence = +0.4, agency = myself, morality = 0.4
b. Action tendencies: exuberance = 0.4  (emotion is pride)
c. [0.4] express pride

The umpire announces the moves of the first round. The Perceiver writes them into memory. The Emotor perceives them and, due to its moral move, develops a positive emotion whose value, however, lies beneath the arbitrarily determined threshold for verbalization. 

11. UMPIRE: Round 1. You win $3 and Will wins $3.

12. Perceiver: Hear from Umpire that User just won $3 and I just won $3.

13. Emotor: Notice that I did win $3 in round 1.

a. Appraisals:        intensity = 4.0, valence = +4.0, agency = world, morality = 0.0
b. Action tendencies: jump_for_joy = 4.0  (emotion is joy)
c. [4.0] express joy
   say: "La la la!"

The umpire announces the rewards of the first round. The Perceiver writes them into memory, where they are perceived by the Emotor, which then produces an emotion with a value high enough to lead to a verbalization.
 

14. Emotor: Notice that I did win $3 in round 1.

a. Appraisals:        intensity = 2.7, valence = +2.7, agency = world, morality = 0.0
b. Action tendencies: jump_for_joy = 2.7  (emotion is joy)
c. [2.7] express joy

Since no other module works with the information of the Emotor, a process of uncharging takes place (autoboredom). The value of the emotion drops.

15. Emotor: Notice that I shall win $0 or $1 or $3 or $5 in round 2.

a. Appraisals:        intensity = 3.0, valence = +3.0, agency = world, morality = 0.0
b. Action tendencies: jump_for_joy = 6.0  (emotion is joy)
c. [6.0] express joy
   say: "Yabba-dabba-doo!"

The Emotor reads out the reward expectations for round 2 and develops an appropriate expectation with a high value.

. . .

16. UMPIRE: Round 2. You play d and Will plays c.

. . .

17. Emotor: Notice that User just played d in round 2.

a. Appraisals:        intensity = 1.8, valence = -1.8, agency = user, morality = -1.8
b. Action tendencies: sentiment = -2.7, so urge = 4.5 (|int - sent|); hurt(user) = 4.5  (emotion is angry revenge)
c. [4.5] express anger
   say: "I will get you back for that!" & soon play d to hurt user

(Several intermediate steps are omitted.) The umpire announces the moves of round 2. Move d of the user means that WILL gets nothing. This annoys WILL because it not only violates its moral standards but also impairs its concern to make money. The value of the emotion produced by the Emotor is accordingly high. This elicits the action tendency to likewise play d in the next round in order to pay the user back.

In a subsequent discussion Moffat asks whether WILL possesses a personality as defined by the Big Five traits.  He states that WILL is neither open nor agreeable:  for this it has too few interests and no social consciousness.  It is, however, neurotic, because WILL is a worrier;  in addition one could say that it is, at least partly, conscientious, since it is equipped with a concern for fairness and honesty.  Also, the characteristic of extraversion can partly be ascribed to it.  Moffat comes to the conclusion that machines can possess quite human-like personalities: 

"In this case, the answer is a much more qualified "yes"... The programmable personality parameters in Will include the charge manipulation parameters in the attentional mechanism, the appraisal categories, action tendencies, and concerns, all of which can be set at different relative strengths. In this programmability, human-specificity can be built in as required, but with different settings other personalities would also be possible, the like of which have never yet existed. What they may be is beyond science-fiction to imagine, but it is unlikely that they will all be dull, unemotional computers like HAL in the film 2001."

 

(Moffat, 1997)

5.8. Other models

There exist a number of other models which are concerned, under different aspects, with the simulation of emotions in computers.  Some of them are described briefly in this section;  a more detailed discussion would exceed the scope of this work. 

 

5.8.1. The model of Colby

One of the first computer models concerned expressly with emotions was PARRY by Kenneth Colby (Colby, 1981).  PARRY simulates a person with a paranoid personality who believes he is being pursued by the Mafia.  The user interacts with the program in the form of a dialogue;  the system reacts with verbal output to text entered through a keyboard. 

The program's task is to scan the inputs of the user actively for an interpretation which can be construed as ill will.  As soon as such an interpretation is found, one of three emotional reactions is elicited:  fear, anger, or mistrust, depending on the kind of ill will ascribed.  An assumed physical threat elicits fear;  an assumed psychological threat elicits anger;  both kinds of assumed threat cause mistrust.  PARRY reacts to the attacks it has construed either with a counterattack or with retreat. 

In order to design a model of a paranoid patient, Colby and his coworkers invested, for that time, a great deal of work in the project.  PARRY has a vocabulary of 4500 words and 700 colloquial idioms, as well as the grammatical competence to use them.  PARRY compares the text inputs of the users with its stored word and idiom lists and reacts in the emotional mode as soon as it discovers a correspondence. 

A number of variables refer to the three emotional states fear, anger and mistrust and are constantly updated in the course of an interaction.  Thus PARRY can "work itself up" into certain emotional states;  even Colby, its creator and a psychiatrist by training, was surprised by some of PARRY's behaviours. 
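A very rough sketch of this kind of keyword-driven affect update might look as follows; the word lists and increments are invented for illustration and have nothing to do with Colby's actual implementation.

PHYSICAL_THREATS = {"hurt", "kill", "lock", "electroshock"}
PSYCHOLOGICAL_THREATS = {"crazy", "liar", "stupid", "paranoid"}

state = {"fear": 0.0, "anger": 0.0, "mistrust": 0.0}

def react(user_input: str) -> None:
    words = set(user_input.lower().split())
    if words & PHYSICAL_THREATS:        # assumed physical threat
        state["fear"] += 0.3
        state["mistrust"] += 0.2
    if words & PSYCHOLOGICAL_THREATS:   # assumed psychological threat
        state["anger"] += 0.3
        state["mistrust"] += 0.2

react("you sound a little paranoid to me")
react("people like you are liar types")
print(state)   # anger and mistrust have accumulated over the exchange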

Colby subjected PARRY to several tests.  In one, he let several psychiatrists conduct telephone interviews with paranoid patients and with PARRY, without informing them that one "patient" was a machine.  After their interviews Colby informed the participants of this and asked them to identify the machine.  The result:  apart from the occasional chance hit, no psychiatrist could tell whether he had conversed with a human or with PARRY. 

In a further experiment an improved system was again presented to a number of psychiatrists.  This time the participants were informed from the outset that one of their interviewees would be a computer, and they were asked to identify it.  Again the results did not deviate substantially from those of the first experiment. 

PARRY possesses the ability to express beliefs, fears, and anxieties;  these are, however, pre-defined and hard-wired from the outset.  Only their intensity can change in the course of an interaction and thus modify the conversational behaviour of PARRY. 

 

5.8.2. The model of Reeves

THUNDER stands for THematic UNDerstanding from Ethical Reasoning and was developed by John Reeves (Reeves, 1991).  THUNDER is a system that can understand stories;  its emphasis lies on the evaluation of these stories and on ethical considerations. 

 

In order to represent different criteria in a conflict situation, THUNDER uses so-called Belief Conflict Patterns.  The system is thus in a position to work out moral patterns from submitted stories.  These patterns are then used by so-called evaluators to make moral judgements about the characters in a story.  According to Reeves, many texts (and also situations) could not be understood without such moral patterns.

As an example Reeves cites the story of hunters who tie dynamite to a rabbit "just for fun".  The rabbit hides with the dynamite under the hunters' car, which is destroyed by the ensuing explosion.  In order to understand the irony of such a story, the system, according to Reeves, first has to know that the action of the hunters is morally despicable and that the subsequent, coincidental destruction of their car represents a morally satisfying compensation for it. 

The emphasis of THUNDER lies on the analysis of the motives of other individuals who are either in a certain situation or observing it.

5.8.3. The model of Rollenhagen and Dalkvist

Rollenhagen and Dalkvist developed SIES, the System for Interpretation of Emotional Situations (Rollenhagen and Dalkvist, 1989).  The task of SIES is to draw conclusions about the situations which elicited an emotion. 

SIES unites a cognitive with a situational approach.  The basic assumption is that the eliciting conditions of an emotion are to be found in situations of the real world.  The core of SIES is a reasoning system which carries out a structural content analysis of submitted texts.  These texts consist of reports in which people retrospectively describe emotion-eliciting situations. 

The system is equipped with a set of rules which are able to differentiate and classify emotions but which really do nothing more than structure the information contained in a story.

5.8.4. The model of O'Rorke

The AMAL system introduced by O'Rorke and Ortony (O'Rorke and Ortony, unpublished manuscript), later also called "AbMal" by Ortony, is based on the theoretical approach of Ortony, Clore and Collins (1988).  The goal of AMAL is to identify emotion-eliciting situations which are described in the diaries of students.  In order to solve this task, AMAL uses a so-called situation calculus.  With the help of abductive logic, AMAL can extract plausible explanations for the occurrence of emotions from the described episodes. 

5.8.5. The model of Araujo

Aluizio Araujo of the University of Sao Paulo in Brazil has developed a model which tries to unite findings from psychology and neurophysiology with one another (Araujo, 1994). 

Araujo's interest lies in the simulation of mood-dependent recall and learning, and of the influence of fearfulness and task difficulty on memory.  His model consists of two interacting neural nets, an "emotional net" and a "cognitive net".  The intention is thereby to simulate the roles of the limbic and cortical structures in the human brain.  For Araujo it is essential to model not only cognitive processes but also physiological emotional reactions on a low level which affect the cognitive processing on a higher level. 

The emotional net evaluates the affective meaning of incoming stimuli and produces the emotional state of the system.  Its processing mechanisms are relatively simple and accordingly fast.  The cognitive net carries out cognitive tasks, for example free recall of words or the association between pairs of words.  Its processing is more detailed than that of the emotional net but requires more time. 

In Araujo's model, an "emotional processor" computes valence and arousal for each stimulus and thereby changes parameters of the cognitive net.  In particular, the output of the emotional net can affect the learning rate and the accuracy of the cognitive net.
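The division of labour between the two nets can be caricatured in a few lines; the formulas and numbers below are purely illustrative assumptions about how an emotional evaluation might modulate a parameter of the cognitive net.

def emotional_net(stimulus: dict):
    # fast, coarse evaluation of the affective meaning of a stimulus
    valence = stimulus.get("pleasantness", 0.0)
    arousal = abs(valence) + stimulus.get("intensity", 0.0)
    return valence, arousal

def cognitive_learning_rate(base_rate: float, arousal: float) -> float:
    # the output of the emotional pathway modulates learning in the cognitive pathway
    return base_rate * (1.0 + arousal)

valence, arousal = emotional_net({"pleasantness": -0.6, "intensity": 0.4})
print(cognitive_learning_rate(0.05, arousal))   # learning rate raised by the arousing stimulus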

 

5.9. Conclusion and evaluation

The models presented in this chapter differ clearly in their theoretical assumptions and in their details.  There is nevertheless a common theme:  the goal of all the models is either to understand emotions or to exhibit (pseudo)emotional behaviour.  What an emotion is, is precisely defined from the outset.  The differences between the models lie mainly in the variety of the defined emotions and in the level of detail of the models. 

 

The models of Elliott and Reilly are based on the emotion theory of Ortony, Clore and Collins.  Their goal is to increase the efficiency of a computer system at certain tasks by taking emotions into account, for example in language understanding or in the development of computer-assisted training systems and other interactive systems.  The introduction of emotional elements into these models is made according to the given tasks, with the aim of accomplishing them more effectively.  Both Elliott and Reilly achieved a large part of what they aimed at with their models.  It becomes clear, however, that the operationally formulated theory of Ortony, Clore and Collins cannot simply be converted into a computer model, but must be extended by additional components whose value within the theory is doubtful.  In particular, Reilly's criticism of the "overcognitivation" of the theory led him to introduce a "shortcut" which does not simply represent an extension of the OCC model, but stands outside of it. 

 

The models BORIS and OpEd of Dyer, just like AMAL, THUNDER and SIES, also serve only to identify emotions from texts, but carry out this task less efficiently than Elliott's Affective Reasoner. 

 

In contrast, the models of Frijda and his coworkers pursue the goal of examining Frijda’s emotion theory with the help of a computer model.  The deficits that arose with ACRES led Frijda to a partial revision of his theory, which is to be examined anew with WILL.  The models have no task other than the implementation and examination of the theory.  The same applies to Scherer’s GENESE, even if the depth of detail of this model is substantially less than that of WILL.  

 

DAYDREAMER by Mueller and Dyer and WILL by Frijda and Moffat are situated in a region already bordering on models in which emotions are regarded as control functions of an intelligent system.  Pfeifer’s FEELER, too, developed from the demand to simulate control processes, something the model is, however, unable to do because of its very specific emotion definitions. 

 

Finally, the model of Colby is of more historical interest, since Colby was less interested in the modelling of emotions than in the simulation of a specific phenomenon which included an emotional component.

 

  Next Previous Contents