Adaptive Linear Neuron (Adaline)


Adaline, which stands for Adaptive Linear Neuron, is a network with a single linear unit. It was developed by Widrow and Hoff in 1960. Some important points about Adaline are as follows:
·       It uses a bipolar activation function.
·       It uses the delta rule for training, to minimize the mean squared error (MSE) between the actual output and the desired/target output.
·       The weights and the bias are adjustable.
Architecture
The basic structure of Adaline is similar to that of the perceptron, with an extra feedback loop through which the actual output is compared with the desired/target output. On the basis of this comparison, the training algorithm updates the weights and the bias.

Training Algorithm
Step 1 − Initialize the following to start the training
§  Weights
§  Bias
§  Learning rate α
For easy calculation and simplicity, the weights and the bias are usually initialized to 0 and the learning rate to 1.
Step 2 − Continue steps 3-8 while the stopping condition is not true.
Step 3 − Continue steps 4-6 for every bipolar training pair s:t.
Step 4 − Activate each input unit as follows −
x_i = s_i (i = 1 to n)
Step 5 − Obtain the net input with the following relation −
y_in = b + Σ x_i w_i (sum over i = 1 to n)
Here ‘b’ is the bias and ‘n’ is the total number of input neurons.
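As a concrete check of the net-input relation in Step 5, here is a minimal Python sketch; the input, weight, and bias values are invented purely for illustration:

```python
import numpy as np

# Hypothetical values, chosen only for illustration
x = np.array([1.0, -1.0, 1.0])    # bipolar inputs x_1..x_n
w = np.array([0.2, -0.4, 0.1])    # current weights w_1..w_n
b = 0.5                           # current bias

# Net input: y_in = b + sum over i of x_i * w_i
y_in = b + np.dot(x, w)           # 0.5 + (0.2 + 0.4 + 0.1) = 1.2
```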
Step 6 − Apply the following activation function to obtain the final output −
y = f(y_in) = 1 if y_in ≥ 0, and −1 if y_in < 0
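The bipolar step activation of Step 6 can be sketched as a one-line Python function (the name `activation` is mine, not the article's):

```python
def activation(y_in):
    # Bipolar step: +1 for non-negative net input, -1 otherwise
    return 1 if y_in >= 0 else -1
```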
Step 7 − Adjust the weight and bias as follows −
Case 1 − if y ≠ t then,
w_i(new) = w_i(old) + α(t − y_in)x_i
b(new) = b(old) + α(t − y_in)
Case 2 − if y = t then,
w_i(new) = w_i(old)
b(new) = b(old)
Here ‘y’ is the actual output and ‘t’ is the desired/target output.
(t − y_in) is the computed error.
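A single Case 1 update from Step 7 can be sketched in Python; the learning rate and pattern values here are assumptions for illustration, not taken from the text:

```python
import numpy as np

alpha = 0.1                   # assumed learning rate
x = np.array([1.0, -1.0])     # one bipolar input pattern (invented)
t = 1.0                       # its target
w = np.zeros(2)               # weights initialized to 0 (Step 1)
b = 0.0                       # bias initialized to 0 (Step 1)

y_in = b + np.dot(x, w)       # net input (Step 5), here 0.0
err = t - y_in                # the computed error (t - y_in), here 1.0
w = w + alpha * err * x       # w_i(new) = w_i(old) + alpha*(t - y_in)*x_i
b = b + alpha * err           # b(new)   = b(old)   + alpha*(t - y_in)
```

After this update the weights become (0.1, −0.1) and the bias 0.1.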

Step 8 − Test for the stopping condition, which is met when there is no change in the weights, or when the largest weight change during training is smaller than a specified tolerance.
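Putting Steps 1-8 together, a minimal NumPy sketch of the whole training loop might look like this; the function and parameter names are my own, and the stopping test follows the largest-weight-change criterion of Step 8:

```python
import numpy as np

def train_adaline(samples, targets, alpha=0.05, tolerance=1e-4, max_epochs=200):
    """Delta-rule training for a single Adaline unit (illustrative sketch)."""
    X = np.asarray(samples, dtype=float)
    t = np.asarray(targets, dtype=float)
    w = np.zeros(X.shape[1])                 # Step 1: weights start at 0
    b = 0.0                                  # Step 1: bias starts at 0
    for _ in range(max_epochs):              # Step 2: loop while not stopped
        largest_change = 0.0
        for x_i, t_i in zip(X, t):           # Step 3: each training pair s:t
            y_in = b + np.dot(x_i, w)        # Step 5: net input
            err = t_i - y_in                 # (t - y_in), the computed error
            w += alpha * err * x_i           # Step 7: delta-rule weight update
            b += alpha * err                 # Step 7: bias update
            largest_change = max(largest_change, np.max(np.abs(alpha * err * x_i)))
        if largest_change < tolerance:       # Step 8: stopping condition
            break
    return w, b
```

For example, training on the bipolar AND patterns [[1, 1], [1, -1], [-1, 1], [-1, -1]] with targets [1, -1, -1, -1] drives the weights toward roughly w ≈ (0.5, 0.5) and b ≈ −0.5, the least-squares fit for that data, and the thresholded outputs then match all four targets.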
