
Showing posts from July, 2019

BPN

Back Propagation Neural Networks

A Back Propagation Network (BPN) is a multilayer neural network consisting of an input layer, at least one hidden layer, and an output layer. As its name suggests, back-propagation takes place in this network: the error, calculated at the output layer by comparing the target output with the actual output, is propagated back toward the input layer.

Architecture

As shown in the diagram, the architecture of a BPN has three interconnected layers, with weights on the connections between them. The hidden layer and the output layer also each have a bias, applied to a fixed input of 1. As is clear from the diagram, a BPN works in two phases: one phase sends the signal forward from the input layer to the output layer, and the other propagates the error back from the output layer toward the input layer.

Training Algorithm

For training, a BPN uses the binary sigmoid activation function. The training of a BPN has the following three phases.

Phase 1 −...
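The forward and backward phases can be sketched in plain Python. This is a minimal illustration, not the post's exact algorithm: the function names (`train_bpn`, `predict`), the hyperparameters, and the XOR data set are assumptions of this sketch; only the binary sigmoid activation and the two-phase structure come from the text above.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_bpn(samples, n_hidden=8, lr=0.5, epochs=5000, seed=1):
    """One hidden layer, binary sigmoid, online weight updates (illustrative)."""
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    # Weights input->hidden and hidden->output; biases act on a fixed input of 1
    w_h = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    b_h = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    w_o = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    b_o = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, t in samples:
            # Phase 1: forward pass, input layer -> output layer
            z = [sigmoid(b + sum(w * xi for w, xi in zip(ws, x)))
                 for ws, b in zip(w_h, b_h)]
            y = sigmoid(b_o + sum(w * zi for w, zi in zip(w_o, z)))
            # Phase 2: propagate the output error back toward the input layer
            delta_o = (t - y) * y * (1 - y)
            delta_h = [delta_o * w * zi * (1 - zi) for w, zi in zip(w_o, z)]
            # Phase 3: update weights and biases
            w_o = [w + lr * delta_o * zi for w, zi in zip(w_o, z)]
            b_o += lr * delta_o
            w_h = [[w + lr * d * xi for w, xi in zip(ws, x)]
                   for ws, d in zip(w_h, delta_h)]
            b_h = [b + lr * d for b, d in zip(b_h, delta_h)]
    return w_h, b_h, w_o, b_o

def predict(params, x):
    w_h, b_h, w_o, b_o = params
    z = [sigmoid(b + sum(w * xi for w, xi in zip(ws, x)))
         for ws, b in zip(w_h, b_h)]
    return sigmoid(b_o + sum(w * zi for w, zi in zip(w_o, z)))

# XOR: the classic problem that needs a hidden layer
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
params = train_bpn(data)
print([round(predict(params, x)) for x, _ in data])
```

The hidden layer is what lets the network solve XOR at all; a single-layer network (a perceptron or Adaline) cannot, which is the usual motivation for back-propagation.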

Madaline

Multiple Adaptive Linear Neuron (Madaline)

Madaline, which stands for Multiple Adaptive Linear Neuron, is a network consisting of many Adalines in parallel, with a single output unit. Some important points about Madaline are as follows:

·         It is just like a multilayer perceptron, where the Adalines act as hidden units between the input and the Madaline layer.
·         The weights and the bias between the input and the Adaline layer, as in the Adaline architecture, are adjustable.
·         The weights and the bias between the Adaline and the Madaline layer are fixed at 1.
·         Training can be done with the help of the Delta rule.

Architecture

The architecture of Madaline consists of “n” neurons in the input layer, “m” neurons in the Adaline layer, and 1 neuron in the Madaline layer. The Adaline layer can be considered as the hidden layer as...
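The architecture above can be sketched as a forward pass in Python. This is an illustrative example, not from the post: the hidden Adaline weights are hand-picked here to make the two Adalines act as feature detectors for XOR, while the output Madaline unit uses the fixed weights and bias of 1 mentioned above (which makes it a bipolar OR). The name `madaline_xor` is an assumption of this sketch.

```python
def bipolar(net):
    # Bipolar activation: +1 or -1
    return 1 if net >= 0 else -1

def adaline_out(w, b, x):
    return bipolar(b + sum(wi * xi for wi, xi in zip(w, x)))

def madaline_xor(x):
    # Adaline (hidden) layer: weights chosen by hand for this sketch
    z1 = adaline_out((1, -1), -1, x)   # fires only for x = (+1, -1)
    z2 = adaline_out((-1, 1), -1, x)   # fires only for x = (-1, +1)
    # Madaline output unit: fixed weights and bias of 1 (a bipolar OR)
    return bipolar(1 + z1 + z2)

inputs = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
print([madaline_xor(x) for x in inputs])   # → [-1, 1, 1, -1]
```

With adjustable hidden weights and a fixed output unit, training (e.g. by the Delta rule, as the post notes) only has to adapt the input-to-Adaline layer.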

Adaptive Linear Neuron (Adaline)

Adaptive Linear Neuron (Adaline)

Adaline, which stands for Adaptive Linear Neuron, is a network having a single linear unit. It was developed by Widrow and Hoff in 1960. Some important points about Adaline are as follows:

·        It uses a bipolar activation function.
·        It uses the delta rule for training, to minimize the mean squared error (MSE) between the actual output and the desired/target output.
·        The weights and the bias are adjustable.

Architecture

The basic structure of Adaline is similar to the perceptron, with an extra feedback loop through which the actual output is compared with the desired/target output. After this comparison, on the basis of the training algorithm, the weights and bias are updated.

Training Algorithm

Step 1 − Initialize the following to start the training:
§   Weights
§   Bias
§   Learning rate α

For easy ca...
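The delta rule described above can be sketched in a few lines of Python. This is a minimal illustration, assuming bipolar inputs and targets; the names `train_adaline` and `predict` and the bipolar AND data are assumptions of this sketch, not from the post. Note that the delta rule updates weights from the raw net input (that is what minimizes MSE), while the bipolar activation is applied only when producing the final output.

```python
def train_adaline(samples, lr=0.1, epochs=50):
    """Single linear unit trained with the delta rule (LMS)."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            net = b + sum(wi * xi for wi, xi in zip(w, x))
            err = t - net                 # delta rule uses the raw net input
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    net = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if net >= 0 else -1          # bipolar activation

# Bipolar AND: output +1 only when both inputs are +1
data = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = train_adaline(data)
print([predict(w, b, x) for x, _ in data])   # → [1, -1, -1, -1]
```

Because the error is measured against the linear net input rather than the thresholded output, the weights keep being refined even after all patterns are classified correctly, converging toward the least-squares solution.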