Back Propagation Neural Network (BPN)
A Back Propagation Neural Network (BPN) is a multi-layer neural network consisting of an input layer, at least one hidden layer, and an output layer. As its name suggests, back propagation takes place in this network: the error, calculated at the output layer by comparing the target output with the actual output, is propagated back towards the input layer.
Architecture
The architecture of a BPN has three interconnected layers, with weights on the connections between them. The hidden layer and the output layer also have a bias unit, whose output is always 1, connected to them. The working of a BPN is in two phases: one phase sends the signal forward from the input layer to the output layer, and the other phase propagates the error back from the output layer to the input layer.
Training Algorithm
For training, BPN uses the binary sigmoid activation function $f(x) = \frac{1}{1 + e^{-x}}$, whose derivative $f'(x) = f(x)\,(1 - f(x))$ is needed during back propagation of the error.
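As a minimal sketch, the binary sigmoid and its derivative could be written in Python as follows (the function names here are illustrative, not part of the algorithm):

import numpy as np

def sigmoid(x):
    # Binary sigmoid: f(x) = 1 / (1 + exp(-x)), output in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # f'(x) = f(x) * (1 - f(x)), used for the delta terms below.
    fx = sigmoid(x)
    return fx * (1.0 - fx)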
The training of BPN has the following three phases −

Phase 1 − Feed Forward Phase

Phase 2 − Back Propagation of error

Phase 3 − Updating of weights
All these phases are combined in the step-by-step algorithm as follows −
Step 1 − Initialize the following to start the training −

Weights

Learning rate $\alpha$

For easy calculation and simplicity, take some small random values.
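A possible initialization in Python, assuming $n$ input units, $p$ hidden units and $m$ output units (the layer sizes and the value of $\alpha$ below are arbitrary choices for illustration):

import numpy as np

n, p, m = 3, 4, 2                        # assumed layer sizes
alpha = 0.25                             # assumed learning rate
rng = np.random.default_rng(0)

v = rng.uniform(-0.5, 0.5, size=(n, p))  # input-to-hidden weights v_ij
b_v = rng.uniform(-0.5, 0.5, size=p)     # hidden-layer biases b_0j
w = rng.uniform(-0.5, 0.5, size=(p, m))  # hidden-to-output weights w_jk
b_w = rng.uniform(-0.5, 0.5, size=m)     # output-layer biases b_0k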
Step 2 − Continue steps 3-11 while the stopping condition is not true.
Step 3 − Continue steps 4-10 for every training pair.
Phase 1
Step 4 − Each input unit receives the input signal $x_i$ and sends it to the hidden units, for all i = 1 to n.
Step 5 − Calculate the net input at the hidden unit using the following relation −

$$Q_{inj} = b_{0j} + \sum_{i=1}^{n} x_i\,v_{ij} \qquad j = 1 \text{ to } p$$

Here $b_{0j}$ is the bias on the hidden unit and $v_{ij}$ is the weight on unit $j$ of the hidden layer coming from unit $i$ of the input layer.

Now calculate the net output by applying the following activation function −

$$Q_j = f(Q_{inj})$$

Send these output signals of the hidden layer units to the output layer units.
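In Python, Steps 4 and 5 might look like this, reusing v, b_v and sigmoid from the sketches above (the sample input x is made up for illustration):

x = np.array([1.0, 0.0, 1.0])   # one training input, i = 1 to n

q_in = b_v + x @ v              # Q_inj = b_0j + sum_i x_i * v_ij
q = sigmoid(q_in)               # Q_j = f(Q_inj), sent to the output layer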
Step 6 − Calculate the net input at the output layer unit using the following relation −

$$y_{ink} = b_{0k} + \sum_{j=1}^{p} Q_j\,w_{jk} \qquad k = 1 \text{ to } m$$

Here $b_{0k}$ is the bias on the output unit and $w_{jk}$ is the weight on unit $k$ of the output layer coming from unit $j$ of the hidden layer.

Calculate the net output by applying the following activation function −

$$y_k = f(y_{ink})$$
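Continuing the same sketch, Step 6 reuses q, w and b_w from the snippets above:

y_in = b_w + q @ w              # y_ink = b_0k + sum_j Q_j * w_jk
y = sigmoid(y_in)               # y_k = f(y_ink), the actual output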
Phase 2
Step 7 − Compute the error correcting term at each output unit, in correspondence with the target pattern $t_k$ received there, as follows −

$$\delta_k = (t_k - y_k)\,f'(y_{ink})$$

On this basis, compute the weight and bias correction terms as follows −

$$\Delta w_{jk} = \alpha\,\delta_k\,Q_j \qquad \Delta b_{0k} = \alpha\,\delta_k$$

Then, send $\delta_k$ back to the hidden layer.
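A sketch of Step 7 in the same setting, where the target pattern t is assumed given:

t = np.array([1.0, 0.0])                       # target output, k = 1 to m

delta_k = (t - y) * sigmoid_derivative(y_in)   # delta_k = (t_k - y_k) f'(y_ink)
dw = alpha * np.outer(q, delta_k)              # correction term for w_jk
db_w = alpha * delta_k                         # correction term for b_0k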
Step 8 − Now each hidden unit sums its delta inputs from the output units −

$$\delta_{inj} = \sum_{k=1}^{m} \delta_k\,w_{jk}$$

The error term can then be calculated as follows −

$$\delta_j = \delta_{inj}\,f'(Q_{inj})$$

On this basis, compute the weight and bias correction terms as follows −

$$\Delta v_{ij} = \alpha\,\delta_j\,x_i \qquad \Delta b_{0j} = \alpha\,\delta_j$$
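Step 8 continues the same sketch, propagating the deltas back through w:

delta_in = delta_k @ w.T                        # delta_inj = sum_k delta_k * w_jk
delta_j = delta_in * sigmoid_derivative(q_in)   # delta_j = delta_inj * f'(Q_inj)
dv = alpha * np.outer(x, delta_j)               # correction term for v_ij
db_v = alpha * delta_j                          # correction term for b_0j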
Phase 3
Step 9 − Each output unit ($y_k$, k = 1 to m) updates its weight and bias as follows −

$$w_{jk}(\text{new}) = w_{jk}(\text{old}) + \Delta w_{jk}$$
$$b_{0k}(\text{new}) = b_{0k}(\text{old}) + \Delta b_{0k}$$
Step 10 − Each hidden unit ($z_j$, j = 1 to p) updates its weight and bias as follows −

$$v_{ij}(\text{new}) = v_{ij}(\text{old}) + \Delta v_{ij}$$
$$b_{0j}(\text{new}) = b_{0j}(\text{old}) + \Delta b_{0j}$$
Step 11 − Check for the stopping condition, which may be either reaching the specified number of epochs or the target output matching the actual output.
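The following self-contained Python sketch puts all eleven steps together on the XOR problem; the layer sizes, learning rate, epoch count and data set are all assumptions made for illustration:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)               # targets

n, p, m, alpha = 2, 4, 1, 0.5                   # Step 1: initialization
rng = np.random.default_rng(0)
v, b_v = rng.uniform(-0.5, 0.5, (n, p)), np.zeros(p)
w, b_w = rng.uniform(-0.5, 0.5, (p, m)), np.zeros(m)

for epoch in range(10000):                      # Step 2: stopping condition
    for x, t in zip(X, T):                      # Step 3: every training pair
        q = sigmoid(b_v + x @ v)                # Steps 4-5: hidden layer
        y = sigmoid(b_w + q @ w)                # Step 6: output layer
        delta_k = (t - y) * y * (1 - y)         # Step 7: output error term
        delta_j = (delta_k @ w.T) * q * (1 - q) # Step 8: hidden error term
        w += alpha * np.outer(q, delta_k)       # Step 9: update w_jk, b_0k
        b_w += alpha * delta_k
        v += alpha * np.outer(x, delta_j)       # Step 10: update v_ij, b_0j
        b_v += alpha * delta_j

# After training, the network output should be close to [0, 1, 1, 0].
print(sigmoid(b_w + sigmoid(b_v + X @ v) @ w).ravel())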