Connectionist language modeling
Meeting 2: Neurons and neurodes
August 24, 2006
Structure of the neuron
- Most of the input to the neuron is collected through the dendrites.
- The neuron also has a cell body, which is responsible for maintaining the functioning of the cell.
- The axon stretches out to other neurons, forming connection points (synapses) at their dendrites or cell bodies.
Each neuron can be seen as an information processing unit
- Electrochemical charge is the common currency of this processing.
- Charge is input into the neuron across synapses from other neurons.
- The neuron processes this charge and arrives at its own charge.
- If the neuron's charge goes beyond a particular threshold, it outputs charge to other neurons.
Neurodes are like neurons
- Connections between neurodes have a strength, or weight, analogous to a synapse.
Neurodes (mostly) perform two operations:
- Sum their inputs
- Pass this sum through an activation function
Input summation is a very simple process. For a given node i:
- Take the activation of each node with a connection to i (or simply each input to i) and multiply it by the strength of its connection to i.
- Sum these products (see the formula below).
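In symbols (a standard formulation; the slide itself gives only the verbal recipe), the net input to node i is

    net_i = Σ_j a_j · w_ji

where a_j is the activation of input node j and w_ji is the weight of the connection from j to i.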
For example, take a node i which has inputs from two other nodes, g and h. The activations of g and h are 2 and -0.5, respectively. Their connection weights to i are -0.1 and -2, respectively. What is the sum of the inputs to node i? What would the sum be if the connection weights were 0.1 and 2?
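A minimal Python sketch of this computation (illustrative only; the function and variable names are my own, not from the course):

```python
# Net input to node i: each input activation times its connection weight, summed.
def net_input(activations, weights):
    return sum(a * w for a, w in zip(activations, weights))

acts = [2.0, -0.5]                       # activations of g and h

print(net_input(acts, [-0.1, -2.0]))     # 2*(-0.1) + (-0.5)*(-2) = 0.8
print(net_input(acts, [0.1, 2.0]))       # 2*0.1 + (-0.5)*2 = -0.8
```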
The second step is the passing of this sum of products to the activation function. In principle, any function is possible, but in practice, only a small number are used.
- The simplest is a linear function, which just makes the activation of neurode i equal to the sum of products passed into it.
- But neurons mostly don't work this way: they have thresholds below which they do not become active and above which they become active to some degree. Most connectionist models therefore make use of an activation function demonstrating this behavior: a sigmoid (simple sketches follow this list).
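For contrast, here are minimal sketches of a linear activation and a hard threshold; the sigmoid discussed next can be seen as a smooth compromise between the two. (Illustrative code; the threshold value is an arbitrary placeholder.)

```python
def linear(net):
    """Linear activation: the activation simply equals the net input."""
    return net

def step(net, threshold=0.0):
    """Hard threshold: fully off at or below the threshold, fully on above it."""
    return 1.0 if net > threshold else 0.0
```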
Sigmoid functions in connectionist models have the following properties (illustrated in the sketch after this list):
- They force the activation of a node to fall between given values, in most cases 0 and 1.
- Within a particular range, they are very sensitive to net inputs: a small change in net input yields a large change in activation.
- Globally, with large changes in net input, they display an on-off activation character.
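A small Python sketch using the logistic function, 1 / (1 + e^(-net)), one common choice of sigmoid (the slides do not commit to a particular one):

```python
import math

def logistic(net):
    """Logistic sigmoid: squashes any net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def neurode_activation(activations, weights):
    """Full neurode update: weighted input sum, then sigmoid activation."""
    net = sum(a * w for a, w in zip(activations, weights))
    return logistic(net)

# The three properties above, in order:
print(logistic(-10))                    # ~0.00005: bounded below, effectively off
print(logistic(0.5) - logistic(-0.5))   # ~0.245: highly sensitive near net = 0
print(logistic(10))                     # ~0.99995: bounded above, effectively on

# Applied to the earlier example (net input 0.8):
print(neurode_activation([2.0, -0.5], [-0.1, -2.0]))   # logistic(0.8) ~ 0.69
```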