The potential of a neuron is converted to an activation by an activation function. Since the potential is a linear combination of the incoming activations, weighted by the connection weights, and the activation function is typically nonlinear, the relationship between the potential and the activation is typically nonlinear. This nonlinearity is what allows a neural network to approximate nonlinear functions, and it is fundamental to machine learning.
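As a minimal sketch of why the nonlinearity matters (an illustration with hypothetical weight values, not taken from the text above): stacking purely linear layers collapses to a single linear map, while inserting a nonlinear activation between them does not.

```python
import math

# Hypothetical weights for two 1-D "layers".
w1, w2 = 2.0, 3.0

def linear_stack(x):
    # Two linear layers with no activation between them:
    # equivalent to the single linear map 6 * x.
    return w2 * (w1 * x)

def nonlinear_stack(x):
    # The same layers with a logistic sigmoid applied to the
    # first layer's potential: no longer linear in x.
    return w2 * (1.0 / (1.0 + math.exp(-w1 * x)))

# Additivity f(a + b) == f(a) + f(b) holds only for the linear stack.
print(linear_stack(1.0) + linear_stack(2.0) == linear_stack(3.0))           # True
print(nonlinear_stack(1.0) + nonlinear_stack(2.0) == nonlinear_stack(3.0))  # False
```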
\( a \) | This is the potential of a neuron in a layer of a feedforward neural network. |
\( \sigma \) | This symbol represents the activation function. It maps real values to real values, typically in a nonlinear way. |
\( x \) | This represents the activation of a neuron in a neural network. |
Suppose we use the logistic sigmoid, \(\sigma(a) = \frac{1}{1+e^{-a}}\), as the activation function \(\htmlClass{sdt-0000000051}{\sigma}\), and that the potential of the neuron is \(-2\). The neuron activation is then given by \[\htmlClass{sdt-0000000051}{\sigma}(-2)=\frac{1}{1+e^{2}}\approx0.119.\]
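This value can be checked directly; the sketch below implements the standard logistic sigmoid and evaluates it at the potential \(-2\).

```python
import math

def logistic_sigmoid(a):
    # Logistic sigmoid: maps any real potential to the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-a))

# Evaluate at the potential from the example above.
activation = logistic_sigmoid(-2)
print(round(activation, 3))  # 0.119
```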