intro for physicists and mushroom-pickers
The topics "brain" and "neural networks" might suggest that they are investigated by neurobiologists, mathematicians, or microprocessor engineers.
What can physicists do here?
Still, they really are doing a lot. Signal analysis, systems
with chaotic dynamics, computer simulations, mathematical modelling,
and phase-transition analysis have long been in the domain of modern
physics.
YES and NO
YES:
NO:
So simplifications are inevitable in modelling brain activity. Thanks to the YES, it is a tempting task to reveal in (over)simplified models the features pertinent to the real brain.
Neurophysiology is reluctant to answer our questions, and that is another reason for model simplification. On one hand, neurons are effectively damaged by investigations in vivo. On the other hand, the non-damaging methods provide little information on the activity of individual neurons.
Happily enough, a histologist tells us that ~70% of the neurons in the cerebral cortex are very similar. Given this, one would like to mimic, in a network of interconnected nodes - "neurons" - the features a real brain has. These may be:
Learning in networks is perhaps the subject most intensely studied by physicists. Two popular models have emerged; they are presented in some detail here.
Essentially a spin-glass model. A discontent function D is defined over the spin projections of the nodes (S(i) = +1, -1), the connection strengths J(ij), and the firing threshold potentials T(i):
D(S(all i, time)) = -1/2*SUM(ij){S(i, time)*J(ij)*S(j, time)} + SUM(i){S(i, time)*T(i)}
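In this notation D is easy to evaluate directly. A minimal sketch in Python; the couplings and thresholds below are made-up toy values, not taken from any real model:

```python
import numpy as np

def discontent(S, J, T):
    """D = -1/2 * SUM(ij) S(i)*J(ij)*S(j) + SUM(i) S(i)*T(i)."""
    return -0.5 * S @ J @ S + S @ T

J = np.array([[0.0, 1.0],      # symmetric couplings, J(ij) = J(ji)
              [1.0, 0.0]])     # zero diagonal: no self-coupling
T = np.array([0.5, 0.5])       # firing threshold potentials
S = np.array([1, -1])          # spin projections S(i) = +1, -1

print(discontent(S, J, T))     # prints 1.0
```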
The states S(i) of the neurons are updated one at a time, in random order. It turns out that the discontent function (also called the energy, cost function, or Lyapunov function) cannot increase under such a process if the interactions between neurons are symmetric:
J(ij)=J(ji) ,
because the change of D caused by updating the state of a single neuron (say, neuron 1) is non-positive:
DELTA(D) = -(S(1, time') - S(1, time))*(SUM(j){J(1j)*S(j, time)} - T(1)) <= 0,
since the update rule sets S(1, time') to the sign of the local field SUM(j){J(1j)*S(j, time)} - T(1). The symmetry J(ij)=J(ji) is essential here: the coupling of neuron 1 to any other neuron j enters the sum in D twice, once as J(1j) and once as J(j1), which cancels the factor 1/2.
The dynamics of such a network, represented as the movement of a point in state space, reveals basins of attraction: the point sinks into the local minima of D.
Thus a system with a given initial spin configuration will evolve in time until it reaches one of its stable states - a "memorized" one. In this way, for example, blurred patterns of characters may be recognized.
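The recall process can be sketched in a few lines. The couplings below follow the standard Hebb prescription J(ij) = S(i)*S(j)/N for a single stored pattern (an assumption, since the text does not fix a learning rule), thresholds are set to zero, and the updates run in sequential sweeps rather than random order so that the demo is deterministic:

```python
import numpy as np

pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])  # a made-up stored pattern
N = len(pattern)

J = np.outer(pattern, pattern) / N   # Hebb rule; symmetric, J(ij) = J(ji)
np.fill_diagonal(J, 0.0)             # no self-coupling

S = pattern.copy()                   # start from a blurred pattern:
S[0] *= -1                           # two spins flipped
S[3] *= -1

for _ in range(5):                   # a few full sweeps suffice
    for i in range(N):
        h = J[i] @ S                 # local field on neuron i
        S[i] = 1 if h >= 0 else -1   # align the spin with its field

print(np.array_equal(S, pattern))    # prints True: the memorized state is recalled
```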
Back in the 1950s, simple perceptrons raised great hopes in the artificial intelligence community. Those systems were shown to be able to extract global features from patterns (e.g. whether the rectangles in a picture are disconnected). However, the great hopes vanished when Minsky and Papert, in their book Perceptrons (1969), rigorously demonstrated the limits of those simple devices.
Multilayered perceptrons are far better at extracting global features of the presented input. The XOR (exclusive OR) operation, for instance, is easily done by a two-layer perceptron (i.e. one with a single layer of hidden units).
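A sketch of such a two-layer perceptron; the weights and thresholds are hand-picked illustrative values, not learned:

```python
def step(x):                        # threshold unit: fires if input > 0
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # hidden unit 1 computes OR
    h2 = step(x1 + x2 - 1.5)        # hidden unit 2 computes AND
    return step(h1 - 2 * h2 - 0.5)  # output: OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The hidden layer is what a single-layer perceptron lacks: XOR is not linearly separable in (x1, x2), but it becomes so in the hidden units' coordinates (h1, h2).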
Two basic learning strategies for perceptrons:
The second implies that teaching is carried out by moving the synaptic strengths down the error gradient:
DELTA(J(ij)) = -eta*d(error)/d(J(ij)),
where DELTA(J(ij)) is the "taught" change in the synaptic strength and eta is a small positive learning rate.
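A minimal sketch of this rule for a single linear "neuron" y = J*x with squared error; the target function y = 2x, the samples, and the learning rate eta are all made up for illustration:

```python
eta = 0.1                       # learning rate (assumed value)
J = 0.0                         # initial synaptic strength
samples = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]  # pairs (x, target 2x)

for _ in range(50):             # repeated presentations of the samples
    for x, t in samples:
        y = J * x               # neuron output
        dE_dJ = (y - t) * x     # gradient of error E = 1/2*(y - t)^2
        J += -eta * dE_dJ       # DELTA(J) = -eta * dE/dJ

print(round(J, 3))              # prints 2.0: the weight has been "taught"
```

For multilayered perceptrons the same rule is applied layer by layer, with the gradients obtained by the chain rule (back-propagation).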
Sejnowski and Rosenberg (1987) constructed NETtalk - a three-layer feed-forward system that converts written English text into phonemes. Training on the 1,000 most common English words yielded 98% correct pronunciation.
Networks of both types 1 and 2 have their drawbacks when compared to the functioning of the "grey matter". Among these,
Still, some features match, or may be made to match. What is more, the artificial networks exist and operate independently of investigations of the real brain.
The investigators, if you ask them, will assure you that they are not trying to make an artificial brain. No, no, no - they are not so ambitious. But to be able to trigger a new soul - how intriguing that would be!
Last edited 1997.12.16. If you have comments or suggestions, email me at
bernotas@itpa.lt