Backpropagation - Unit Potential

Description

The potential of a neuron in a Feedforward Neural Network (FNN) is the weighted sum of the activations of the neurons in the previous layer, before any activation function is applied. It is used in backpropagation, where the potential of each neuron is computed during the forward pass.

\[\htmlClass{sdt-0000000099}{a}_{\htmlClass{sdt-0000000018}{i}}^\kappa=\htmlClass{sdt-0000000080}{\sum}_{\htmlClass{sdt-0000000011}{j}=1,\dots,\htmlClass{sdt-0000000119}{L}^{\kappa-1}}\htmlClass{sdt-0000000059}{\mathbf{W}}_{\htmlClass{sdt-0000000018}{i}\htmlClass{sdt-0000000011}{j}}^{\kappa}\htmlClass{sdt-0000000094}{\mathcal{x}}_{\htmlClass{sdt-0000000011}{j}}^{\kappa-1}\]

Symbols Used:

\( j \)

This is a secondary symbol for an iterator, a variable that changes value to refer to a series of elements.

\( i \)

This is the symbol for an iterator, a variable that changes value to refer to a sequence of elements.

\( \mathbf{W} \)

This symbol represents the matrix containing the weights and biases of a layer in a neural network.

\( \sum \)

This is the summation symbol in mathematics; it represents the sum of a sequence of numbers.

\( \mathcal{x} \)

This symbol represents the activations of a neural network layer in vector form.

\( a \)

This is the potential of a neuron in a layer of a feedforward neural network.

\( L \)

This symbol refers to the number of neurons in a layer.

Derivation

  1. Consider a neuron \(\htmlClass{sdt-0000000018}{i}\) in the \(\kappa\)-th layer of an FNN, such that there exists a \((\kappa-1)\)-th layer. The neuron \(\htmlClass{sdt-0000000018}{i}\) may be a hidden neuron or an output neuron.
  2. We have \(\htmlClass{sdt-0000000119}{L}^{\kappa-1}\) neurons in the \((\kappa-1)\)-th layer. The sum of the activations of the neurons in the \((\kappa-1)\)-th layer is therefore equal to \[\htmlClass{sdt-0000000080}{\sum}_{\htmlClass{sdt-0000000011}{j}=1,\dots,\htmlClass{sdt-0000000119}{L}^{\kappa-1}}\htmlClass{sdt-0000000094}{\mathcal{x}}_{\htmlClass{sdt-0000000011}{j}}^{\kappa-1}.\]
  3. Recall that the potential of neuron \(\htmlClass{sdt-0000000018}{i}\) is the weighted sum of activations, determined by the weight matrix of the \(\kappa\)-th layer, \(\htmlClass{sdt-0000000059}{\mathbf{W}}^\kappa\). The weight of the connection between neuron \(\htmlClass{sdt-0000000011}{j}\) and neuron \(\htmlClass{sdt-0000000018}{i}\) is given by \(\htmlClass{sdt-0000000059}{\mathbf{W}}_{\htmlClass{sdt-0000000018}{i}\htmlClass{sdt-0000000011}{j}}^\kappa\), which weighs the corresponding activation \(\htmlClass{sdt-0000000094}{\mathcal{x}}_{\htmlClass{sdt-0000000011}{j}}^{\kappa-1}\).
  4. Therefore, the potential of neuron \(\htmlClass{sdt-0000000018}{i}\) is given by \[\htmlClass{sdt-0000000099}{a}_{\htmlClass{sdt-0000000018}{i}}^\kappa=\htmlClass{sdt-0000000080}{\sum}_{\htmlClass{sdt-0000000011}{j}=1,\dots,\htmlClass{sdt-0000000119}{L}^{\kappa-1}}\htmlClass{sdt-0000000059}{\mathbf{W}}_{\htmlClass{sdt-0000000018}{i}\htmlClass{sdt-0000000011}{j}}^{\kappa}\htmlClass{sdt-0000000094}{\mathcal{x}}_{\htmlClass{sdt-0000000011}{j}}^{\kappa-1}\] as required.
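The steps above can be sketched numerically in NumPy. The layer sizes, weights, and activations below are hypothetical, chosen only to illustrate the weighted sum; the explicit loop mirrors the summation in the formula, and the matrix-vector product computes the same quantity:

```python
import numpy as np

# Hypothetical activations of the (kappa-1)-th layer: L^{kappa-1} = 3 neurons.
x_prev = np.array([0.5, -1.0, 2.0])

# Hypothetical weight matrix W^kappa of the kappa-th layer (2 neurons);
# entry W[i, j] weighs the connection from neuron j to neuron i.
W = np.array([[0.2, 0.4, -0.1],
              [1.0, 0.0, 0.5]])

# Potential of neuron i: a_i = sum over j of W[i, j] * x_prev[j],
# written as the explicit sum from the derivation (no activation applied):
a_loop = np.array([sum(W[i, j] * x_prev[j] for j in range(3))
                   for i in range(2)])

# The same computation as a matrix-vector product:
a = W @ x_prev

print(a_loop)             # [-0.5  1.5]
print(np.allclose(a, a_loop))  # True
```

Writing the potentials of a whole layer as `W @ x_prev` is the usual vectorized form of the per-neuron sum, which is why forward passes are typically implemented with matrix products rather than explicit loops.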

References

  1. Jaeger, H. (2024, May 18). Neural Networks (AI) (WBAI028-05) Lecture Notes BSc program in Artificial Intelligence. Retrieved from https://www.ai.rug.nl/minds/uploads/LN_NN_RUG.pdf