Perceptron
A perceptron produces an output by weighing up evidence.
A perceptron takes several binary inputs, x1, x2, ..., and produces a single binary output:
Formula (threshold form):
evidence: x1, x2, ... - binary
weights: w1, w2, ... - show the importance of each piece of evidence
threshold - a real number
output - binary: 1 if sum_j wj*xj > threshold, otherwise 0
Formula (bias form):
evidence: x - binary
weight: w
bias: b = -threshold. The bias is a measure of how easy it is to get the perceptron to output a 1. For a perceptron with a really big bias, it's extremely easy to output a 1; for a very negative bias, it's extremely hard.
output - binary: 1 if w*x + b > 0, otherwise 0
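As a quick sketch of this rule in Python (the function name and example values are mine, not from the book):

```python
def perceptron(x, w, b):
    """Binary perceptron: output 1 if w*x + b > 0, else 0."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# Illustrative weights and bias of my own choosing:
print(perceptron(x=[1, 0], w=[0.6, 0.4], b=-0.5))  # 1, since 0.6 - 0.5 > 0
```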
Multi-layer Perceptron: A perceptron has just a single output. The multiple output arrows drawn in network diagrams are merely a useful way of indicating that the output from a perceptron is being used as the input to several other perceptrons.
Elementary Logical Functions: AND, OR, and NAND
Perceptrons can be used to compute elementary logical functions such as AND, OR, and NAND. For example, suppose we have a perceptron with two inputs, each with weight −2, and an overall bias of 3. Then the input 00 produces output 1, since (−2)*0 + (−2)*0 + 3 = 3 is positive. (Here the * symbol makes the multiplications explicit.) Similar calculations show that the inputs 01 and 10 produce output 1. But the input 11 produces output 0, since (−2)*1 + (−2)*1 + 3 = −1 is negative. And so our perceptron implements a NAND gate!
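As a check of the arithmetic above, here is a small Python sketch of my own that runs the example's weights and bias over all four inputs:

```python
w1, w2, b = -2, -2, 3  # the weights and bias from the NAND example above

for x1 in (0, 1):
    for x2 in (0, 1):
        z = w1 * x1 + w2 * x2 + b
        print(f"input {x1}{x2}: z = {z:2d} -> output {1 if z > 0 else 0}")
```

Running it prints output 1 for the inputs 00, 01, and 10, and output 0 for 11: the NAND truth table.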
The NAND example shows that we can use perceptrons to compute simple logical functions. In fact, we can use networks of perceptrons to compute any logical function, because the NAND gate is universal for computation; that is, we can build any computation up out of NAND gates (see the sketch below).
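To make the universality claim concrete, here is a sketch (the construction is the standard four-NAND XOR circuit; the function names are mine) that builds XOR, a function no single perceptron can compute, out of the NAND perceptron above:

```python
def nand(x1, x2):
    """The NAND perceptron from the example: weights -2, -2 and bias 3."""
    return 1 if -2 * x1 - 2 * x2 + 3 > 0 else 0

def xor(x1, x2):
    """XOR built from four NAND gates."""
    n1 = nand(x1, x2)
    return nand(nand(x1, n1), nand(x2, n1))

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor(a, b)}")  # prints 0, 1, 1, 0
```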
Sigmoid Neuron
Why Sigmoid Neurons?
A small change in the weights or bias of any single perceptron in the network can sometimes cause the output of that perceptron to completely flip, say from 0 to 1. That makes it difficult to see how to gradually modify the weights and biases so that the network gets closer to the desired behaviour.
evidence: x1, x2, ... = x - each can take any value between 0 and 1
weights: w1, w2, ... = w - show the importance of each piece of evidence
bias: b = -threshold
output: sigma(w*x + b), where sigma(z) = 1 / (1 + e^(-z)) is the sigmoid function
as z = w*x + b approaches positive infinity, sigma(z) approaches 1
as z = w*x + b approaches negative infinity, sigma(z) approaches 0
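A minimal sketch of the sigmoid neuron in Python (assuming the standard definition sigma(z) = 1/(1 + e^(-z)); the names are mine), showing the two limits numerically:

```python
import math

def sigmoid(z):
    """sigma(z) = 1 / (1 + e^(-z))"""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_neuron(x, w, b):
    """Output of a sigmoid neuron: sigma(w*x + b)."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return sigmoid(z)

for z in (-10, -1, 0, 1, 10):
    print(f"z = {z:3d}: sigma(z) = {sigmoid(z):.5f}")
# z = -10 gives ~0.00005 (near 0); z = 10 gives ~0.99995 (near 1)
```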
Sigmoid Function vs. Step Function
P.S. Actually, when w*x + b = 0 the perceptron outputs 0, while the step function outputs 1. So, strictly speaking, we'd need to modify the step function at that one point.
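A tiny sketch of that corner case (my own illustration):

```python
def perceptron_output(z):
    """Perceptron convention: output 0 when z = w*x + b is exactly 0."""
    return 1 if z > 0 else 0

def step(z):
    """Step function with the usual convention step(0) = 1."""
    return 1 if z >= 0 else 0

for z in (-0.5, 0.0, 0.5):
    print(z, perceptron_output(z), step(z))  # they disagree only at z = 0
```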
Δoutput - Sigmoid Neuron
From calculus: Δoutput is well approximated by a linear function of the changes Δwj and Δb in the weights and bias:

Δoutput ≈ sum_j (∂output/∂wj) Δwj + (∂output/∂b) Δb

This linearity makes it easy to choose small changes in the weights and biases to achieve any desired small change in the output.
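The approximation can be checked numerically. Below is a sketch (the weights, inputs, and changes are illustrative values of my own choosing) that compares the first-order prediction against the actual change in output, using the fact that sigma'(z) = sigma(z) * (1 - sigma(z)):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, x = [0.7, -0.3], 0.1, [0.5, 0.9]   # illustrative neuron and input
dw, db = [0.001, -0.002], 0.0005         # small changes to weights and bias

z = sum(wj * xj for wj, xj in zip(w, x)) + b
out = sigmoid(z)
dsigma = out * (1 - out)                 # sigma'(z)

# First-order (linear) prediction: sum_j dsigma*xj*dwj + dsigma*db,
# since d(output)/dwj = sigma'(z)*xj and d(output)/db = sigma'(z).
predicted = sum(dsigma * xj * dwj for xj, dwj in zip(x, dw)) + dsigma * db

# Actual change in the output after applying the small changes.
z_new = sum((wj + dwj) * xj for wj, dwj, xj in zip(w, dw, x)) + b + db
actual = sigmoid(z_new) - out

print(f"predicted: {predicted:.8f}")
print(f"actual:    {actual:.8f}")        # the two agree closely
```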
Origin: http://neuralnetworksanddeeplearning.com/index.html