This definition is used to compute how the loss changes with a change in the potential of a neuron in a given layer. It is used in the backward pass of backpropagation, where the gradients needed to update the model parameters are computed. The error term is computed differently depending on whether the neuron is a hidden neuron or an output neuron.
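Using the symbols defined below, the two cases can be sketched as follows. This is a sketch under the usual feedforward conventions; the activation function \( f \) and the weights \( w \) are assumptions, as they are not defined in this table:

\[
\delta_j^{(\text{out})} = \frac{\partial L}{\partial a_j^{(\text{out})}},
\qquad
\delta_j^{(l)} = f'\!\left(a_j^{(l)}\right) \sum_k w_{kj}^{(l+1)} \, \delta_k^{(l+1)}
\]

That is, the output error follows directly from differentiating the loss with respect to the output potential, while each hidden error is assembled from the errors of the layer above, propagated back through the connecting weights.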
\( \mathcal{N} \) | This is the symbol used for a function approximator, typically a neural network. |
\( i \) | This is the symbol for an index, a variable that ranges over a sequence of elements. |
\( y \) | This symbol stands for the ground truth of a sample. In supervised learning it is often paired with the corresponding input. |
\( L \) | This is the symbol for a loss function, a function that measures how far a model's prediction deviates from the ground truth. |
\( \delta \) | This is the error of a neuron in a feedforward neural network. |
\( \theta \) | This symbol represents the parameters of the model. |
\( a \) | This is the potential (pre-activation) of a neuron in a layer of a feedforward neural network. |
\( u \) | This symbol denotes the input of a model. |
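As a concrete illustration of how \( \delta \) ties these symbols together, here is a minimal sketch of one forward and backward pass for a tiny 1-1-1 network. The sigmoid activation, the squared-error loss, and the weight names `w1`/`w2` are assumptions chosen for the example, not something this table specifies:

```python
import math


def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))


def forward_backward(u, y, w1, w2):
    """One sample through a 1-1-1 net: input u, hidden neuron, output neuron."""
    # Forward pass: potentials a and activations.
    a1 = w1 * u            # hidden potential
    h1 = sigmoid(a1)       # hidden activation
    a2 = w2 * h1           # output potential
    y_hat = sigmoid(a2)    # network output N(u; theta)

    # Squared-error loss L = 0.5 * (y_hat - y)^2
    loss = 0.5 * (y_hat - y) ** 2

    # Output error: delta2 = dL/da2 = (y_hat - y) * sigmoid'(a2)
    delta2 = (y_hat - y) * y_hat * (1.0 - y_hat)
    # Hidden error: delta1 = delta2 * w2 * sigmoid'(a1)
    delta1 = delta2 * w2 * h1 * (1.0 - h1)

    # Parameter gradients for theta = (w1, w2), used by the update step.
    grad_w1 = delta1 * u
    grad_w2 = delta2 * h1
    return loss, delta1, delta2, grad_w1, grad_w2
```

Note how the hidden error reuses the output error rather than differentiating the loss from scratch; this reuse is what makes the backward pass efficient.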