
3. Strange brains

Ever since computers have existed, there have been attempts to simulate processes of human thinking on them. The ability of an electronic machine to read, manipulate and output information at high speed tempted researchers from the outset to speculate about the equivalence of computers and brains.

Such speculations soon found their way into psychology. In particular, the ability of computing machines to process information in parallel corresponded to approaches in psychology which regarded the brain predominantly as a parallel processing system.

Against this background, computers were regarded as a means of clarifying still unexplored phenomena of the human mind through modelling. A good example is the Pandemonium model of Oliver Selfridge (Selfridge, 1959). Pandemonium is a model of visual pattern recognition. It consists of a multitude of demons working in parallel, each of which is specialized in recognizing a certain visual feature, for example a horizontal bar in the center of the presented stimulus or a curvature in its upper right corner.

If a demon recognizes "his" stimulus, he calls out to the central master demon. The more confident the demon is of a correct identification, the louder he shouts. All demons work independently of one another; none is affected by his neighbours.

On the basis of the shouts it receives, the master demon then decides which pattern the stimulus represents. In a later refinement of the model, the demons were organized hierarchically in order to relieve the master demon.
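
The principle can be sketched in a few lines of code. The following toy version is not Selfridge's original formulation; the feature names, the weights and the way the stimulus is encoded are purely illustrative assumptions.

```python
# A toy Pandemonium: feature demons shout with a loudness proportional to
# their confidence, and the master demon decides for the pattern whose
# evidence is loudest overall. All names and weights are illustrative.

# Each feature demon inspects one aspect of the stimulus.
FEATURE_DEMONS = {
    "horizontal_bar": lambda stim: stim.get("horizontal_bar", 0.0),
    "vertical_bar":   lambda stim: stim.get("vertical_bar", 0.0),
    "right_curve":    lambda stim: stim.get("right_curve", 0.0),
}

# Each pattern demon listens for a characteristic combination of features.
PATTERN_DEMONS = {
    "H": {"horizontal_bar": 1.0, "vertical_bar": 1.0},
    "D": {"vertical_bar": 1.0, "right_curve": 1.0},
}

def recognize(stimulus):
    # 1. Feature demons shout: loudness = estimated probability of their feature.
    shouts = {name: demon(stimulus) for name, demon in FEATURE_DEMONS.items()}
    # 2. The master demon weighs the shouts for each candidate pattern
    #    and decides in favour of the loudest total.
    scores = {
        pattern: sum(weight * shouts[feature] for feature, weight in features.items())
        for pattern, features in PATTERN_DEMONS.items()
    }
    return max(scores, key=scores.get), scores

# Example: a stimulus with strong horizontal and vertical bars.
print(recognize({"horizontal_bar": 0.9, "vertical_bar": 0.8, "right_curve": 0.1}))
```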

There is an astonishing similarity between these specialized demons in Selfridge's model and the feature detector cells that actually exist in the visual cortex. Indeed, it was Selfridge's model and its assumptions about perceptual processes that first suggested that such feature detectors might exist in humans. In this case, the model gave neurophysiologists the impetus to look for the corresponding cells.

Thus Pandemonium is a good example of how computer models can advance psychological research. On the other hand, it should not be forgotten that a system such as Pandemonium is unable to really "see". And this is a key point for critics who grant computer modelling a limited heuristic value, but otherwise deny any equivalence between humans and machines.

This is also one of the fundamental questions for the development of emotional computers: the equivalence of the systems "human" and "computer". Both AI research and past approaches to the development of emotional systems assume without question that "intelligence" and "emotion" in a computer are not fundamentally different from intelligence and emotion in humans.

This assumption largely abstracts from the specific hardware; "emotion" is understood as a pure software implementation. But it is quite questionable whether the two systems obey the same laws of development.

A computer is a discrete system that knows nothing but two different states. By combining several such elements, "intermediate states" can be created, but only at a certain level of abstraction from the underlying hardware.
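
A trivial sketch may make this point concrete: eight two-state elements, each of which knows only 0 or 1, are read together as a number and only at that level of interpretation reappear as a graded "intermediate" value. The bit pattern used here is, of course, arbitrary.

```python
# Eight two-state elements (bits) combined into one graded value.
# Individually each element knows only 0 or 1; the "intermediate state"
# exists only at the level of interpretation, not in the hardware itself.
bits = [1, 0, 1, 1, 0, 0, 1, 0]                            # eight binary elements
level = sum(b << i for i, b in enumerate(reversed(bits)))  # integer 0..255
graded = level / 255.0                                     # reinterpreted as a value in [0, 1]
print(level, round(graded, 3))                             # 178 0.698
```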

In contrast, the physiology of the emotional and cognitive system in humans is by no means a comparable mechanism; even at the lowest level, it consists of a multitude of mechanisms, some of which work more according to digital principles, others more according to analog principles.

Even one of the best researched mechanisms, the functioning of the neuron, is not exclusively an on/off mechanism, but consists of a multitude of differentiated partial mechanisms - and this at the level of the hardware.

The simulation of such mechanisms with computers is at present only possible in software. Simple neural switching patterns can, up to a point, also be modelled on parallel computers; such modelling is possible, however, only within certain limits and completely ignores the chemical processes which play an important part in the human brain.
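
A common textbook simplification, the leaky integrate-and-fire neuron, can serve as a sketch of this mixed character: the membrane potential evolves continuously (analog), while the spike itself is an all-or-none event (digital). The parameter values below are arbitrary, and the model deliberately omits exactly those chemical processes.

```python
# A leaky integrate-and-fire neuron: continuous (analog) membrane dynamics,
# discrete (digital) spikes. Parameter values are arbitrary illustrative choices;
# synaptic chemistry, neuromodulators etc. are ignored entirely.

def simulate_lif(input_current, dt=1.0, tau=20.0, threshold=1.0, v_reset=0.0):
    v = 0.0                 # membrane potential (continuous quantity)
    spikes = []             # spike times (discrete events)
    for t, i_in in enumerate(input_current):
        # analog part: the potential leaks towards rest and integrates the input
        v += dt / tau * (-v + i_in)
        # digital part: an all-or-none spike once the threshold is crossed
        if v >= threshold:
            spikes.append(t * dt)
            v = v_reset
    return spikes

# Constant drive strong enough to make the neuron fire periodically.
print(simulate_lif([1.5] * 200))
```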

Picard (1997) tries to solve the problem by abstracting from the difference between hardware and software and defining both as "computers". She justifies this position with the argument that emotional software agents can exist in "emotion-free" hardware.

A similar discussion deals with the comparability of emotions in humans and animals (see Dawkins, 1993). Here at least we have hardware made of identical elements, although of different complexity. In this case, too, it is not considered scientifically settled whether an emotion such as "mourning" is identical in humans and animals.

The matter is made still more difficult by the question whether a computer can be considered, in principle, a form of life. In the "Artificial Life" discussion of recent years, some attention has been given to this question. The evolutionary biologist Richard Dawkins (Dawkins, 1988), for instance, holds that the ability to reproduce is already sufficient to speak of life in the biological sense. Others extend the definition with the components "self-organization" and "autonomy".

If one sets aside the ethical and philosophical discussion of "life" and concentrates on the aspects of "self-organization" and "autonomy", then it is quite realistic to attribute these characteristics to computers and/or software. Self-organization in the sense of adaptation can be observed, for example, in neural nets working with genetic algorithms (see e.g. Holland, 1998). Autonomy in a reduced sense can be observed in robots and, in part, in autonomously operating programs, for example agents for the internet. Such programs also possess the ability to reproduce, which would fulfil the third condition.
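
What self-organization in the sense of adaptation means can be sketched with a toy genetic algorithm: a population of bit strings adapts to a fitness criterion through selection, crossover and mutation. The fitness function and all parameters are illustrative assumptions and stand in for the far more elaborate systems described by Holland.

```python
import random

# Toy genetic algorithm: a population of bit strings "self-organizes" towards
# a fitness criterion through selection, crossover and mutation.
# Fitness function and parameters are illustrative assumptions only.

GENOME_LENGTH = 20

def fitness(genome):
    # Toy criterion: the more 1-bits, the fitter the individual.
    return sum(genome)

def evolve(generations=50, pop_size=30, mutation_rate=0.01):
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # reproduction: single-point crossover plus occasional mutation
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, GENOME_LENGTH - 1)
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```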

The emphasis of AI and AL research currently lies on the advancement of such autonomous, self-organizing systems. The models used are partly based on functional models of the human brain; this should not, however, tempt one to rashly equate their mode of operation with that of the human brain.

Especially when software is optimized by genetic algorithms, human observers frequently do not know which self-organization processes the software uses to reach the optimization goal.

Sometimes, though, it might be useful to attribute mental abilities to a computer. Thus John McCarthy, one of the pioneers of artificial intelligence, explains:

".. Although we may know the program, its state at a given moment is usually not directly observable, and the facts we can obtain about its current state may be more readily expressed by ascribing certain beliefs and goals than in any other way... Ascribing beliefs may allow deriving general statements about the program's behavior that could not be obtained from any finite number of simulations.. The beliefs and goal structures we ascribe to the program may be easier to understand than the details of the program as expressed in its listing... The difference between this program and another actual or hypothetical program may be best expressed as a difference in belief structure."

(McCarthy, 1990, p. 96)

According to these remarks, the attribution of mental abilities has a purely functional character: it serves to express information about the state of a machine at a given time which could otherwise only be expressed through lengthy and complex descriptions of details.
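
The point can be made concrete with a deliberately simple sketch: the same internal state of a thermostat-like control program can be reported either as a dump of its variables or as an ascribed belief and goal. The class and its attributes are hypothetical and serve only to illustrate this functional reading.

```python
# A deliberately simple control program. The class and attribute names are
# hypothetical; they only illustrate the functional reading of belief ascription.
class Thermostat:
    def __init__(self, setpoint, reading):
        self.setpoint = setpoint            # desired temperature
        self.reading = reading              # last sensor value
        self.heater_on = reading < setpoint

    def raw_state(self):
        # the "listing" view: bare variables
        return {"setpoint": self.setpoint, "reading": self.reading,
                "heater_on": self.heater_on}

    def ascribed_state(self):
        # the intentional view: the same information expressed as belief and goal
        belief = ("the room is too cold" if self.reading < self.setpoint
                  else "the room is warm enough")
        goal = f"bring the temperature to {self.setpoint} degrees"
        return f"It believes {belief} and wants to {goal}."

t = Thermostat(setpoint=21.0, reading=18.5)
print(t.raw_state())
print(t.ascribed_state())
```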

McCarthy lists a number of mental characteristics which can be attributed to computers: introspection and self-knowledge, consciousness and self-consciousness, language and thinking, intentions, free will, understanding and creativity. At the same time he warns against equating such attributed mental qualities with human characteristics:

"The mental qualities of present machines are not the same as ours. While we will probably be able, in the future, to make machines with mental qualities more like our own, we'll probably never want to deal with a computer that loses its temper, or an automatic teller that falls in love? Computers will end up with the psychology that is convenient to their designers..."

(McCarthy, 1990, p. 185f.)

We now know that the last sentence need not necessarily be correct. There are first examples of self-organizing and self-optimizing hardware (Harvey and Thompson, 1997) whose modes of functioning are not known to their human designers. And current approaches to the design of emotional computers go far beyond modelling: they try to develop computers whose mental qualities are not pre-defined by the designer but develop independently.

Although certain basic assumptions of the designers naturally enter into such systems, this approach is nevertheless fundamentally different from the classical modelling approach observed in cognitive science. The question remains, however, whether the processes in a computer which, through this procedure, one day actually develops mental qualities are identical with the processes in the human body and brain.

Critics of such an approach point out that emotions are not comparable to purely cognitive processes, since they are affected by additional factors (e.g. hormones) and also require a subject. The modelling of these processes within a computer, a purely cognitive construction, would therefore be impossible - all the more so because a machine lacks the subjective element without which an emotion, whose essential component is a feeling, cannot be felt. There are several answers to this argument.

On the one hand, one cannot rule out that a computer can possess "feelings". From an evolutionary viewpoint, computers are an extremely young phenomenon which has, in its short existence, made a number of giant steps. Today there exist machines with hundreds of parallel processors, and impressive research progress is being made with biological and quantum computers. It might be just a question of time until a computer possesses hardware as complex as the human brain. With increasing complexity, the probability also increases that such a system will organize itself on a higher level. What must be laboriously programmed as a "monitoring instance" today might develop into something which one day might be called the "ego" of a computer.

On the other hand, it would be extremely anthropocentric to deny emotions to an intelligent system just because it does not possess human hormones. A computer consists of a multitude of "physiological" processes which could be perceived as "bodily feelings" once the system has been equipped with a proprioceptive subsystem. If, in addition, this computer is able to learn and move, one could imagine it reacting to certain situations with changes in such processes which have the same value for it as physiological changes have for our body.
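
As a purely speculative sketch, such a proprioceptive subsystem might map measurable "physiological" quantities of the machine onto coarse internal signals; the sensor values, thresholds and labels below are entirely hypothetical.

```python
# Speculative sketch of a proprioceptive subsystem: measurable "physiological"
# quantities of the machine are mapped onto coarse internal state signals.
# Sensor names, thresholds and labels are entirely hypothetical.

def proprioception(cpu_load, temperature_c, free_memory_ratio):
    signals = {}
    # sustained high load as something like "exertion"
    signals["exertion"] = min(cpu_load / 100.0, 1.0)
    # heat as something like "discomfort"
    signals["discomfort"] = max(0.0, (temperature_c - 60.0) / 40.0)
    # scarce memory as something like "pressure"
    signals["pressure"] = 1.0 - free_memory_ratio
    return signals

def bodily_feeling(signals):
    # a crude appraisal: the strongest signal dominates the overall "feeling"
    name, strength = max(signals.items(), key=lambda item: item[1])
    return f"dominant signal: {name} ({strength:.2f})"

readings = proprioception(cpu_load=85.0, temperature_c=78.0, free_memory_ratio=0.3)
print(readings)
print(bodily_feeling(readings))
```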

An emotional computer need not experience emotions like a human - no more than a visitor from Zeta Epsilon would. Nevertheless, its emotions can be as genuine to it as ours are to us - and can influence its thoughts and actions just as much.

We therefore cannot assume a priori that "emotions" developed by a computer are comparable to human emotions. But it is thoroughly justified to assume that the emotions of a computer serve the same functions for it as ours do for us. If this is the case, the computer modelling of emotions would not only be a way to learn more about human emotions; it would at the same time lay the foundations for a time when intelligent systems made of different "building blocks" will co-operate with one another.
