Delta Equation by Backpropagation

Prerequisites

Backpropagation - Unit Potential | \(a_{i}^\kappa=\sum_{j=1,\dots,L^{\kappa-1}}\mathbf{W}_{ij}^{\kappa}x_{j}^{\kappa-1}\)
Delta Equation | \(\delta_{i}^\kappa\dot{=}\frac{\partial L\left(\mathcal{N}_{\theta}(u),y\right)}{\partial a_{i}^\kappa}\)
Neuron Potential to Activation | \(\sigma(a)=x\)
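
To make the prerequisite equations concrete, the following minimal NumPy sketch computes one forward step. The logistic sigmoid and all variable names are illustrative assumptions, not part of the source.

```python
import numpy as np

def sigma(a):
    # Logistic sigmoid as an example activation function (an assumption;
    # any differentiable non-linearity works).
    return 1.0 / (1.0 + np.exp(-a))

# Example: a layer of 3 units fed by 2 activations from the previous layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))   # W^kappa, shape (L^kappa, L^(kappa-1))
x_prev = rng.standard_normal(2)   # activations x^(kappa-1)
a = W @ x_prev                    # potentials: a_i = sum_j W_ij x_j
x = sigma(a)                      # activations: x = sigma(a)
```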

Description

This equation computes the error of a hidden neuron from its potential and the weighted sum of the errors of the neurons in the next layer. This recursion is the key idea behind backpropagation and the reason it is computationally feasible: the errors of every layer are obtained in a single backward pass, reusing the errors already computed for the layer above instead of differentiating the loss from scratch for each neuron.

\[\delta_{i}^\kappa=\sigma'(a_{i}^\kappa)\sum_{j=1,\dots,L^{\kappa+1}}\delta_{j}^{\kappa+1}\mathbf{W}_{ji}^{\kappa+1}\]
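
In code, the equation is one matrix-vector product followed by an elementwise multiplication: in vector form, \(\delta^\kappa=\sigma'(a^\kappa)\odot\left((\mathbf{W}^{\kappa+1})^\top\delta^{\kappa+1}\right)\). A minimal NumPy sketch, assuming a logistic sigmoid (the function and argument names are illustrative):

```python
import numpy as np

def sigma_prime(a):
    # Derivative of the logistic sigmoid: sigma'(a) = sigma(a) * (1 - sigma(a)).
    s = 1.0 / (1.0 + np.exp(-a))
    return s * (1.0 - s)

def delta_hidden(a, W_next, delta_next):
    """Errors of layer kappa from the errors of layer kappa+1.

    a          -- potentials a^kappa, shape (L^kappa,)
    W_next     -- weights W^(kappa+1), shape (L^(kappa+1), L^kappa)
    delta_next -- errors delta^(kappa+1), shape (L^(kappa+1),)
    """
    # sum_j delta_j^(kappa+1) W_ji^(kappa+1) is the matrix-vector product W^T delta.
    return sigma_prime(a) * (W_next.T @ delta_next)
```

This is why backpropagation is cheap: each layer's errors cost a single matrix-vector product, the same order of work as the forward pass.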

Symbols Used:

\( j \)

This is a secondary symbol for an iterator, a variable that changes value to refer to a series of elements.

\( i \)

This is the symbol for an iterator, a variable that changes value to refer to a sequence of elements.

\( ' \)

This is the symbol for the derivative of a function. If \(f(x)\) is a function, then \(f'(x)\) is the derivative of that function.

\( \sigma \)

This symbol represents the activation function. It maps real values to other real values in a non-linear way.

\( \mathbf{W} \)

This symbol represents the matrix containing the weights and biases of a layer in a neural network.

\( \delta \)

This is the error of a neuron in a feedforward neural network.

\( \sum \)

This is the summation symbol in mathematics; it represents the sum of a sequence of numbers.

\( a \)

This is the potential of a neuron in a layer of a feedforward neural network.

\( L \)

This symbol refers to the number of neurons in a layer.

Derivation

  1. Recall the equation defining the error \(\delta\) of a neuron \(i\) in layer \(\kappa\), where \(L\left(\mathcal{N}_{\theta}(u),y\right)\) denotes the loss of the network output on input \(u\) against the target \(y\) (not to be confused with the layer size \(L^\kappa\)): \[\delta_{i}^\kappa\dot{=}\frac{\partial L\left(\mathcal{N}_{\theta}(u),y\right)}{\partial a_{i}^\kappa}\]
  2. The chain rule allows us to express the error of neuron \(i\) in terms of the potentials of the neurons in the next layer; this is valid because in a feedforward network \(a_{i}^\kappa\) influences the loss only through the potentials \(a_{j}^{\kappa+1}\) of layer \(\kappa+1\). This gives \[\delta_{i}^\kappa=\sum_{j=1,\dots,L^{\kappa+1}}\frac{\partial L(\mathcal{N}_{\theta}(u),y)}{\partial a_{j}^{\kappa+1}}\frac{\partial a_{j}^{\kappa+1}}{\partial a_{i}^\kappa}\]
  3. We can reuse the definition of \(\delta\) to recognize the first factor as \(\delta_{j}^{\kappa+1}\): \[\delta_{i}^\kappa=\sum_{j=1,\dots,L^{\kappa+1}}\delta_{j}^{\kappa+1}\frac{\partial a_{j}^{\kappa+1}}{\partial a_{i}^\kappa}.\]
  4. Now recall the equation to compute the potential of a neuron: \[a_{i}^\kappa=\sum_{j=1,\dots,L^{\kappa-1}}\mathbf{W}_{ij}^{\kappa}x_{j}^{\kappa-1}\] Applying it one layer up gives an expression for \(a_{j}^{\kappa+1}\). We get \[\delta_{i}^\kappa=\sum_{j=1,\dots,L^{\kappa+1}}\delta_{j}^{\kappa+1}\frac{\partial \sum_{k=1,\dots,L^\kappa}\mathbf{W}_{jk}^{\kappa+1}x_k^\kappa}{\partial a_{i}^\kappa}.\]
  5. Recall the relationship between the potential \(a\) and the activation \(x\) of a neuron, \[\sigma(a)=x,\] so we get \[\delta_{i}^\kappa=\sum_{j=1,\dots,L^{\kappa+1}}\delta_{j}^{\kappa+1}\frac{\partial \sum_{k=1,\dots,L^\kappa}\mathbf{W}_{jk}^{\kappa+1}\sigma(a_k^\kappa)}{\partial a_{i}^\kappa}.\]
  6. Only the term with \(k=i\) depends on \(a_{i}^\kappa\), so all other terms of the inner sum vanish under the partial derivative: \[\frac{\partial}{\partial a_{i}^\kappa}\sum_{k=1,\dots,L^\kappa}\mathbf{W}_{jk}^{\kappa+1}\sigma(a_k^\kappa)=\frac{\partial}{\partial a_{i}^\kappa}\,\mathbf{W}_{ji}^{\kappa+1}\sigma(a_{i}^\kappa),\] so we get \[\delta_{i}^\kappa=\sum_{j=1,\dots,L^{\kappa+1}}\delta_{j}^{\kappa+1}\frac{\partial \mathbf{W}_{ji}^{\kappa+1}\sigma(a_{i}^\kappa)}{\partial a_{i}^\kappa}.\]
  7. Computing the partial derivative gives \[\delta_{i}^\kappa=\sum_{j=1,\dots,L^{\kappa+1}}\delta_{j}^{\kappa+1}\mathbf{W}_{ji}^{\kappa+1}\sigma'(a_{i}^\kappa).\]
  8. Finally, since \(\sigma'(a_{i}^\kappa)\) does not depend on the summation index \(j\), we can factor it out of the sum to get \[\delta_{i}^\kappa=\sigma'(a_{i}^\kappa)\sum_{j=1,\dots,L^{\kappa+1}}\delta_{j}^{\kappa+1}\mathbf{W}_{ji}^{\kappa+1}\] as required.
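
As a sanity check on the derivation, the delta equation can be compared against a finite-difference estimate of \(\partial L/\partial a_{i}^\kappa\). The tiny network below (one hidden layer, logistic sigmoid, squared-error loss) is an illustrative assumption, not the source's setup:

```python
import numpy as np

sigma = lambda a: 1.0 / (1.0 + np.exp(-a))
sigma_prime = lambda a: sigma(a) * (1.0 - sigma(a))

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
u, y = rng.standard_normal(3), rng.standard_normal(2)

def loss_from_a1(a1):
    # Loss as a function of the hidden potentials a^kappa,
    # with squared-error loss L = 0.5 * ||x_out - y||^2.
    x_out = sigma(W2 @ sigma(a1))
    return 0.5 * np.sum((x_out - y) ** 2)

a1 = W1 @ u                                   # hidden potentials a^kappa
a2 = W2 @ sigma(a1)                           # output potentials a^(kappa+1)
delta2 = (sigma(a2) - y) * sigma_prime(a2)    # output-layer errors (definition of delta)
delta1 = sigma_prime(a1) * (W2.T @ delta2)    # the delta equation derived above

# Central differences on each hidden potential should match delta1.
eps = 1e-6
numeric = np.zeros_like(a1)
for i in range(a1.size):
    e = np.zeros_like(a1)
    e[i] = eps
    numeric[i] = (loss_from_a1(a1 + e) - loss_from_a1(a1 - e)) / (2 * eps)

assert np.allclose(delta1, numeric, atol=1e-8)
```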

References

  1. Jaeger, H. (2024, May 18). Neural Networks (AI) (WBAI028-05): Lecture notes, BSc program in Artificial Intelligence. Retrieved from https://www.ai.rug.nl/minds/uploads/LN_NN_RUG.pdf