Arto Jarvinen LiTH-ISY-I-0994 1989-05-19
Information Representation in Neural Networks - a Survey
A. Jarvinen Computer Vision Laboratory University of Linkoping 581 83 Linkoping, Sweden
This report is a survey of information representations in both biological and artificial neural networks. The correct information representation is crucial for the dynamics and the adaptation algorithms of neural networks. A number of examples of existing information representations are given.
1 Introduction

Artificial neural networks (ANN) have during the last few years created much interest, and researchers from many disciplines have been drawn into the field. There are several classes of ANN, each solving a specific problem. The two main features of most ANN are:

1. They are massively parallel, i.e. they are built of a large number of interconnected, relatively simple processing elements.

2. They most often perform an input-output mapping which the net can adaptively `learn', either from examples (supervised algorithms) or by using some other kind of criterion (unsupervised algorithms).

The first types of ANN were described by Rosenblatt [23] and Widrow [28]. They implement a mapping from their input to their output. These early types of networks could only perform relatively simple mappings. With the later types of networks, more complicated input-output mappings also became possible [7], [1], [24]. A second type of ANN is the associative memory [11], [14], from which stored data can be retrieved by presenting the net with some data which has earlier been associated with the stored data. A third type is the self-organizing associative net described by Kohonen [15]. Its main feature is that it automatically constructs mappings from an arbitrary-dimensional feature space to a two-dimensional feature space.
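As an illustration of supervised learning by examples, a perceptron of the kind introduced by Rosenblatt can be trained with the classic error-correction rule. The following sketch is an assumption for illustration (the function name, learning rate, and the AND task are not taken from this report):

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Supervised learning by examples: adjust weights and bias from
    (input, target) pairs using the perceptron error-correction rule."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Threshold unit: fire (1) if the weighted sum exceeds zero.
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            # Move the weights in the direction that reduces the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the logical AND function from four examples.
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(samples)
```

Since AND is linearly separable, the perceptron convergence theorem guarantees that this rule finds a correct set of weights; tasks such as XOR, by contrast, are beyond a single such unit, which is why the early networks could only perform relatively simple mappings.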
A major problem for both biological neural networks (BNN) and ANN is that of information representation (IR). The problem really consists of two sub-problems:

1. What information do we wish to represent?

2. How is this information represented?

Two aspects of an IR are the unit, i.e. what quality or feature the IR gives information about, and a value that represents the quantity of this unit. An example would be the complex cells in the visual cortex, which fire for lines of a certain orientation and a certain direction of motion. The unit here would be `line-like structure at coordinate x, y in the retina moving in a given direction'. The value would be a function of the velocity and the contrast of the line, and can perhaps be seen as a measure of the certainty of the statement made by the firing cell or cells [4]. In ANN the information representation is crucial for the convergence of adaptation algorithms and for the efficiency and compactness of the network. With this survey I attempt to give an overview of different types of information representations in both biological and artificial neural networks. Many examples are from biological and artificial vision systems, the area most familiar to the author.
2 The Computing Element
In the brain there are some 1000 different types of neurons. They are computing elements with several (up to 10,000) inputs and one output. The output can branch to the inputs of several other neurons. There are also neurons which function as a whole group of neurons, in that inputs and outputs may be local to a small part of the neuron, e.g. certain types of amacrine cells in the retina [12]. The output signal is usually a function of the weighted sum of the input signals to the neuron. Most types of neurons in the brain use a frequency coding for their output: the frequency of the output signal thus represents the value of the output of the neuron. There are also neurons that have graded outputs, i.e. their output potential is proportional to the output value of the neuron. Such cells can be found for instance in the retina. The most common type of artificial neurons...
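The weighted-sum computation described above can be sketched in a few lines of Python. This is a minimal illustration, not a model from the report; the sigmoid activation and the function name are assumptions chosen for the example:

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """A generic artificial neuron: the output is a function (here a
    sigmoid) of the weighted sum of the input signals."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# With zero weights and zero bias the weighted sum is 0,
# and the sigmoid maps it to 0.5.
print(neuron_output([1.0, 1.0], [0.0, 0.0]))  # prints 0.5
```

The graded output of such a unit is loosely analogous to the graded-potential cells mentioned above, while a frequency-coded neuron would instead convey this value as a firing rate.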