# Hebbian theory

Hebbian theory is a neuroscientific theory that explains the adaptation of neurons in the brain during the learning process. It describes a basic mechanism for synaptic plasticity wherein an increase in synaptic efficacy arises from the presynaptic cell's repeated and persistent stimulation of the postsynaptic cell. Introduced by Donald Hebb in his 1949 book The Organization of Behavior,[1] it is also called Hebb's rule, Hebb's postulate, and cell assembly theory, and states:

"Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability.… When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."[1]

The theory is often summarized as "Cells that fire together, wire together."[2] It attempts to explain "associative learning", in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. Such learning is known as Hebbian learning.

## Hebbian engrams and cell assembly theory

Hebbian theory concerns how neurons might connect themselves to become engrams. Hebb's theories on the form and function of cell assemblies can be understood from the following:

"The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated', so that activity in one facilitates activity in the other." (Hebb 1949, p. 70)

"When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell." (Hebb 1949, p. 63)

Gordon Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows:

"If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly interassociated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become 'auto-associated'. We may call a learned (auto-associated) pattern an engram." (Allport 1985, p. 44)
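Allport's notion of auto-association can be sketched concretely. The following is an illustrative toy example of ours, not taken from Hebb or Allport: elements are coded ±1, the weights are a Hebbian outer product, and a single update pulls a corrupted cue back to the stored pattern.

```python
# Toy auto-association sketch (illustrative assumption, not from the sources):
# elements are +1 (active) or -1 (inactive); Hebbian outer-product weights
# make each active element excite the others and inhibit the rest.

pattern = [1, 1, -1, 1, -1]          # the repeatedly presented activity pattern
n = len(pattern)

# Hebbian outer product, with no self-connections
w = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
     for i in range(n)]

def recall(cue):
    """One synchronous update: each element takes the sign of its net input."""
    return [1 if sum(w[i][j] * cue[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

# A corrupted cue (second element flipped) is restored to the stored pattern.
cue = [1, -1, -1, 1, -1]
print(recall(cue))   # [1, 1, -1, 1, -1]
```

Because each pattern element "turns on" the others (and, via negative weights, turns off non-pattern elements), the stored pattern is a fixed point of the update, just as the quote describes.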

Hebbian theory has been the primary basis for the conventional view that, when analyzed at a holistic level, engrams are neuronal nets or neural networks.

Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine gastropod Aplysia californica.

Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study reviews results from experiments indicating that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.

## Principles

From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously—and reduces if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.

For example, we have heard the word "Nokia" for many years, usually in the phrase "Nokia mobile phone", so the word "Nokia" has become associated with "mobile phone" in our minds. Every time we see a Nokia mobile phone, the association between the two words is strengthened. The association is so strong that if someone claimed Nokia was manufacturing cars and trucks, it would seem odd.

The following is a formulaic description of Hebbian learning (note that many other descriptions are possible):

$\,w_{ij}=x_ix_j$

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$ and $x_i$ is the input for neuron $i$. Note that this is pattern learning (weights updated after every training example). In a Hopfield network, connections $w_{ij}$ are set to zero if $i=j$ (no reflexive connections allowed). With binary neurons (activations either 0 or 1), a connection is set to 1 only if both connected neurons are active (activation 1) for a pattern.
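This pattern-learning rule can be sketched in a few lines of code. The following is a minimal illustration of ours, a direct reading of the formula above with the diagonal zeroed as in a Hopfield network; the function and variable names are our own.

```python
# Pattern learning: after each training example, the weight w[i][j] is set
# from that example alone, w_ij = x_i * x_j (diagonal zeroed: no reflexive
# connections). Names are illustrative, not from the source.

def hebb_pattern(x):
    """Weight matrix for one binary (0/1) input pattern x."""
    n = len(x)
    return [[0 if i == j else x[i] * x[j] for j in range(n)]
            for i in range(n)]

w = hebb_pattern([1, 0, 1, 1])
print(w[0])   # row for neuron 0: [0, 0, 1, 1]
```

Note how $w_{02} = 1$ because neurons 0 and 2 are both active in the pattern, while every weight involving the inactive neuron 1 is zero.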

Another formulaic description is:

$w_{ij} = \frac{1}{p} \sum_{k=1}^p x_i^k x_j^k\,$ ,

where $w_{ij}$ is the weight of the connection from neuron $j$ to neuron $i$, $p$ is the number of training patterns, and $x_i^k$ is the input to neuron $i$ in the $k$th pattern. This is learning by epoch (weights updated only after all the training examples are presented). Again, in a Hopfield network, connections $w_{ij}$ are set to zero if $i=j$ (no reflexive connections).
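The epoch-based formula can likewise be sketched in code. This is a minimal illustration of ours, directly implementing the average co-activation $\frac{1}{p}\sum_k x_i^k x_j^k$ above; names are our own.

```python
# Learning by epoch: each weight is the average co-activation over all
# p training patterns, w_ij = (1/p) * sum_k x_i^k * x_j^k, with the
# diagonal zeroed (no reflexive connections). Illustrative sketch.

def hebb_epoch(patterns):
    p = len(patterns)
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += x[i] * x[j] / p
    return w

patterns = [[1, 1, 0], [1, 0, 1]]
w = hebb_epoch(patterns)
print(w[0][1])   # 0.5: neurons 0 and 1 are co-active in half the patterns
```

Unlike pattern learning, the weights here are only finalized after the whole training set has been presented, so each weight reflects the fraction of patterns in which the two neurons fire together.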

A variation of Hebbian learning that accounts for blocking and many other neural learning phenomena is the mathematical model of Harry Klopf. Klopf's model reproduces a great many biological phenomena and is simple to implement.

## Generalization and stability

Hebb's rule is often generalized as

$\,\Delta w_i = \eta x_i y,$

that is, the change in the $i$th synaptic weight $w_i$ is equal to a learning rate $\eta$ times the $i$th input $x_i$ times the postsynaptic response $y$. Often cited is the case of a linear neuron,

$\,y = \sum_j w_j x_j,$

and the previous section's simplification takes both the learning rate and the input weights to be 1. This version of the rule is clearly unstable, as in any network with a dominant signal the synaptic weights will increase or decrease exponentially. However, it can be shown that for any neuron model, Hebb's rule is unstable[citation needed]. Therefore, network models of neurons usually employ other learning theories such as BCM theory, Oja's rule,[3] or the Generalized Hebbian Algorithm.
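The instability, and how a normalizing rule tames it, can be seen in a short single-neuron simulation. This is an illustrative sketch of ours: the one-input setup, learning rate, and iteration count are assumptions, and Oja's rule $\Delta w = \eta\, y\,(x - y w)$ is used as the stabilized variant.

```python
# Contrast plain Hebb (dw = eta * x * y) with Oja's rule
# (dw = eta * y * (x - y * w)) for a single linear neuron with one input.
# Setup and parameter values are illustrative assumptions.

eta, x = 0.1, 1.0
w_hebb = 0.5
w_oja = 0.5

for _ in range(200):
    # Plain Hebb: with a constant input, w grows geometrically without bound.
    y = w_hebb * x
    w_hebb += eta * x * y
    # Oja's rule: the -y*w decay term normalizes the weight.
    y = w_oja * x
    w_oja += eta * y * (x - y * w_oja)

print(w_hebb)            # diverges: exponential growth
print(round(w_oja, 3))   # 1.0: converges toward |w| = 1
```

Under plain Hebb the weight multiplies by $(1 + \eta x^2)$ every step, which is exactly the exponential growth described above; Oja's decay term drives the weight to a stable fixed point instead.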

## References

1. ^ a b Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley & Sons.
2. ^ The mnemonic phrase is usually attributed to Carla Shatz at Stanford University, referenced for example in Doidge, Norman (2007). The Brain That Changes Itself. United States: Viking Press. p. 427. ISBN 067003830X.
3. ^ Shouval, Harel (2005-01-03). "The Physics of the Brain". The Synaptic basis for Learning and Memory: A theoretical approach. The University of Texas Health Science Center at Houston. Archived from the original on 2007-06-10. Retrieved 2007-11-14.