Single layer networks
The leftmost layer in this network is called the input layer; the neurons within it are called input neurons.
The rightmost or output layer contains the output neurons, or, as in this case, a single output neuron.
The middle layer is called a hidden layer. The term "hidden" means nothing more than "not an input or an output".
Multiple layer networks
Multiple layer networks are sometimes called multilayer perceptrons or MLPs, despite being made up of sigmoid neurons, not perceptrons.
Example: Determine whether a handwritten image depicts a "9" or not.
A natural way to design the network is to encode the intensities of the image pixels into the input neurons. If the image is a 64 by 64 greyscale image, then we'd have 4,096 = 64 × 64 input neurons, with the intensities scaled appropriately between 0 and 1. The output layer will contain just a single neuron, with output values of less than 0.5 indicating "input image is not a 9", and values greater than 0.5 indicating "input image is a 9".
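As a rough sketch of this encoding scheme (the assumption that raw intensities run 0..255, and the function names, are illustrative choices, not details fixed by the text):

    import numpy as np

    def encode_image(pixels):
        # Flatten a 64x64 greyscale image into 4,096 input activations,
        # scaling raw intensities (assumed 0..255) into [0, 1].
        return (np.asarray(pixels, dtype=float) / 255.0).reshape(4096)

    def is_nine(output_activation):
        # Read out the single output neuron: > 0.5 means "image is a 9".
        return output_activation > 0.5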
Feedforward Neural Networks
Definition: Neural networks where the output from one layer is used as input to the next layer, which means there are no loops in the network - information is always fed forward, never fed back. If we did have loops, we'd end up with situations where the input to the σ function depended on the output. That'd be hard to make sense of, and so we don't allow such loops.
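A minimal sketch of this layer-by-layer flow, assuming sigmoid activations and NumPy weight matrices (the function and variable names are illustrative, not from the text):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def feedforward(a, weights, biases):
        # Each layer's output becomes the next layer's input;
        # there is no path by which information flows backward.
        for W, b in zip(weights, biases):
            a = sigmoid(W @ a + b)
        return a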
Recurrent Neural Networks
Definition: Models of artificial neural networks in which feedback loops are possible. The idea in these models is to have neurons which fire for some limited duration of time before becoming quiescent. That firing can stimulate other neurons, which may fire a little while later, also for a limited duration. That causes still more neurons to fire, and so over time we get a cascade of neurons firing. Loops don't cause problems in such a model, since a neuron's output only affects its input at some later time, not instantaneously. The learning algorithms for recurrent nets are, at least to date, less powerful than those for feedforward networks. But recurrent nets are much closer in spirit to how our brains work, and it's possible that they can solve important problems which can be solved only with great difficulty by feedforward networks.
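A minimal sketch of the time-delayed feedback idea, using the standard simple recurrent update h_t = σ(W x_t + U h_{t-1} + b) as an assumed concrete form (the text itself does not specify an update rule):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def recurrent_run(xs, W, U, b):
        # Run a simple recurrent layer over a sequence of inputs.
        # The loop is well defined because h feeds back into the update
        # only at the *next* time step, never instantaneously.
        h = np.zeros(b.shape)
        for x in xs:
            h = sigmoid(W @ x + U @ h + b)
        return h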
Example: A simple network to classify handwritten digits
Input Layer: Training data for the network will consist of many 28 by 28 pixel images of scanned handwritten digits, and so the input layer contains 784 = 28 × 28 neurons. The input pixels are greyscale, with a value of 0.0 representing white, a value of 1.0 representing black, and in-between values representing gradually darkening shades of grey.
Hidden Layer: We denote the number of neurons in this hidden layer by n, and we'll experiment with different values for n. The example shown illustrates a small hidden layer, containing just n=15 neurons.
Output Layer: The output layer of the network contains 10 neurons. If the first neuron fires, i.e., has an output ≈ 1, then that indicates the network thinks the digit is a 0. If the second neuron fires, that indicates the network thinks the digit is a 1, and so on.
A little more precisely, we number the output neurons from 0 through 9, and take the network's answer to be the digit whose neuron has the highest activation value.
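Putting the three layers together, a hedged sketch of this architecture and read-out rule (random weights stand in for a trained network purely to make the sketch runnable; np.argmax implements "the neuron with the highest activation"):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Layer sizes from the example: 784 inputs, n = 15 hidden neurons, 10 outputs.
    sizes = [784, 15, 10]
    weights = [np.random.randn(m, k) for k, m in zip(sizes[:-1], sizes[1:])]
    biases = [np.random.randn(m) for m in sizes[1:]]

    def classify(image):
        # Return the digit 0-9 whose output neuron has the highest activation.
        a = image.reshape(784)  # greyscale pixels already scaled to [0, 1]
        for W, b in zip(weights, biases):
            a = sigmoid(W @ a + b)
        return int(np.argmax(a))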
Origin: http://neuralnetworksanddeeplearning.com/chap1.html