
Neural Networks

This is a listing of all pages created for our pilot with the Neural Networks course, part of the BSc Artificial Intelligence program at the University of Groningen, taught by Prof. H. Jaeger.

Counting Loss
General Form of a Loss Function
Quadratic Loss (L2)
Risk of a Model
Mean Squared Error Loss (MSE)
MSE Minimization
Empirical Risk of a Model
Risk of Optimal Model
Operationalization of Supervised Learning
Loss Minimization with Regularization
General Form of a Regularization Function
Approximation of Performance Landscape
Update Rule of Gradient Descent
L2 Regularization
Goal of Supervised Learning
Polynomial Curve Fitting
RNN Update Equation
Function Estimated by Perceptron
Risk Minimization for MLPs
Gradient of the Performance Surface
Activation of a Neuron
General Form of an Activation Function
Rectified Linear Unit
Activation of a Layer
Activation of the Output Layer
Recurrent Neural Network Update Equations
Gradient of the Empirical Risk
Gradient of the Empirical Risk (Sum of Gradients)
Recurrent Neural Network with Output Feedback
Input Neuron of an LSTM
Output Gate of an LSTM
Forget Gate of an LSTM
Input Gate of an LSTM
Update of a Memory Cell in an LSTM
Recursive Definition of Recurrent Neural Networks
Temporal Evolution of Dynamical System
General Form of an Update Operator
Discrete-Time Update Operator
Stochastic Discrete-Time Update Operator
Continuous-Time Update Operator (ODE)
Discrete-Time System with Input
Discrete-Time Dynamical System
Stochastic Discrete-Time System with Input
Stochastic Discrete-Time Dynamical System
Continuous-Time Dynamical System
Continuous-Time System with Input
Backpropagation - Unit Potential
Markov Transition Matrix Entries
Output of an LSTM
Energy of a State in a Hopfield Network
Activation of a Neuron in a Hopfield Network
Weight Update of a Hopfield Network
Analytical Solution of a Hopfield Network
Delta Equation
Delta Equation by Backpropagation
Neuron Potential to Activation
Weight Update of a Heteroassociative Hopfield Network
Energy of a Specific State in a Boltzmann Machine
Boltzmann Normalization Constant/Partition Function
Boltzmann Normalization Constant/Partition Function (Discrete)
Boltzmann Distribution of Microstates
Boltzmann Acceptance Function
Metropolis Acceptance Function
Ratio Metropolis Acceptance Function
Energy Change When One Unit in a Boltzmann Machine Changes
Kullback-Leibler Divergence
Probability of Setting a Unit to 1 in a BM
Gradients of KL Divergence with Respect to Weights
Weight Update Rule for Boltzmann Machines
Loss Function
Model
Input
Ground Truth
Activation Function
Optimal Model
Expectation
Random Variable Input
Random Variable Output
Risk
Hypothesis Space
Parameter Space
Sample
Weight Vector
Regularization
Gradient
Learning Rate
RNN Hidden State
Network (Function Approximator)
Polynomial
Polynomial Constant
Neuron Activation
Layer Size
Layer Activation
Weight Matrix
Output Activation Vector
Sigmoid
Bias
Activation Vector
Number of Inputs
Number of Neurons
Number of Outputs
State of the Input Neuron of an LSTM
State of the Input Gate Neuron in an LSTM
State of the Output Gate of an LSTM
Memory Cell of an LSTM
State of the Forget Gate in an LSTM
Update Operator
State Space of Dynamical System
System State
Output Function
System Output
System Input
Potential of a Unit
Energy
Training Pattern
Random Variable
Transpose
Error of a Neuron
Generic Neuron Activation
Number of Samples
Temperature
Partition Function (Normalization constant)
Microstate
Space of Possible Microstates
Proposed Next State
Proposal Distribution
Acceptance Probability (Acceptance Function)
General Measure Function
Weight in BM
Average Probability Overall
Average Probability Samples
Markov Transition Matrix
Probability Distribution
Model Parameters
Markov Property