Method description

A Feed Forward Neural Network consists of three types of neuron layers: the input layer, the output layer and the optional intermediate hidden layers.

Figure 34.1. Feed Forward Neural Network scheme

In the Feed Forward Neural Network model there is exactly one input layer and one output layer. The size of the input layer is determined by the dimension of the training data, and the size of the output layer by the number of possible values attained by the target variable.

While it is possible to build a neural network without any hidden layers, at least one hidden layer is usually used: without hidden layers the model can learn only linear relations. Two hidden layers are usually enough to learn any relation present in the data.
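The sizing rules above can be sketched as follows. This is an illustrative example only: the dataset shape (150 samples, 4 features, 3 classes) and the hidden layer sizes are assumptions, not values prescribed by the method.

```python
import numpy as np

# Hypothetical training set: 150 samples with 4 features each,
# and a target variable with 3 possible classes (assumed shapes).
X = np.zeros((150, 4))
n_classes = 3

input_size = X.shape[1]   # one input neuron per feature
output_size = n_classes   # one output neuron per class
hidden_sizes = [8, 8]     # two hidden layers (an assumed starting point)

layer_sizes = [input_size] + hidden_sizes + [output_size]

# In a fully connected network, every neuron in one layer connects
# to every neuron in the next; training time grows with this count.
n_connections = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
```

Shrinking `hidden_sizes` (or dropping a layer) directly reduces `n_connections`, which is the trade-off the following paragraphs describe.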

The time required to train the model increases with the number of connections between neurons. When the model is too large, some neurons or even whole hidden layers may have to be removed.

On the other hand, when the hidden part of the neural network is too small, the model may not be able to learn more complicated relations exhibited by the data. Thus, the number of layers in the model has to be chosen with the above considerations in mind.

Every neuron in a hidden layer receives inputs from the neurons in the previous layer and produces one output. A weight is associated with each of the neuron's inputs.

Figure 34.2. Neuron scheme

The base function is calculated for each neuron from its weights and inputs. The result is then used to calculate the output value, which is sent to the neurons in the next layer.
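A single neuron's computation can be sketched as below. The weighted sum plays the role of the base function; the logistic sigmoid is used as the output (activation) function, which is a common choice but an assumption here, since the text does not name a specific one.

```python
import numpy as np

def neuron_output(inputs, weights, bias):
    # Base function: weighted sum of the inputs plus a bias term.
    z = float(np.dot(weights, inputs) + bias)
    # Output function: logistic sigmoid (an assumed choice),
    # squashing the result into (0, 1) before it is passed on.
    return 1.0 / (1.0 + np.exp(-z))

# Example: two inputs, two weights, and a bias.
y = neuron_output(np.array([1.0, 2.0]), np.array([0.5, -0.25]), 0.1)
```

With zero weights and zero bias the sigmoid of 0 is returned, i.e. 0.5, regardless of the inputs.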

The classical Back Propagation algorithm is used to train the Feed Forward Neural Networks model.

Figure 34.3. Backpropagation algorithm scheme

The main idea is to propagate the classification error from the last layer back to the first hidden layer. See the References for more details.