In contrast to feedforward neural networks, Recurrent Neural Networks (RNNs) maintain an internal state. Given the current state, the RNN's next state is computed by an update equation; this gives rise to a recurrence relation between states.
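Written out, the recurrence takes a form like the following (a minimal sketch: the time index \( t \) is a notational assumption, and the bias is taken to be absorbed into \( \mathbf{W} \), consistent with the symbol definitions below):

\( \mathbf{h}_t = \sigma\left( \mathbf{W} \mathbf{h}_{t-1} \right) \)

Each application of the update equation maps the previous hidden state to the next one, so unrolling the recurrence from an initial state \( \mathbf{h}_0 \) yields the whole state sequence.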
\( \sigma \) | This symbol represents the activation function. It maps real values to other real values in a non-linear way. |
\( \mathbf{W} \) | This symbol represents the matrix containing the weights and biases of a layer in a neural network. |
\( n \) | This symbol represents any given whole number, \( n \in \htmlClass{sdt-0000000014}{\mathbb{W}}\). |
\( \mathbf{h} \) | This symbol represents the hidden state of a recurrent neural network. |
\( \htmlClass{sdt-0000000059}{\mathbf{W}} \) is an \(\htmlClass{sdt-0000000117}{n} \times \htmlClass{sdt-0000000117}{n}\) matrix and \(\htmlClass{sdt-0000000125}{\mathbf{h}}\) is an \(\htmlClass{sdt-0000000117}{n}\)-dimensional vector. This ensures that the matrix-vector product is defined, and that the hidden state keeps the same dimension over time. A common choice for the activation function \(\htmlClass{sdt-0000000051}{\sigma}\) is \(\tanh\), which induces a non-linear relationship between successive states.
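The dimension argument above can be checked concretely. Below is a minimal sketch in NumPy (the function name `rnn_step` and the random initialization are illustrative choices, not from the source; the bias is assumed absorbed into \( \mathbf{W} \) as in the symbol definition above):

```python
import numpy as np

def rnn_step(W, h):
    """One update of the hidden state: h_next = tanh(W @ h).

    W: (n, n) weight matrix (bias assumed absorbed into W).
    h: (n,) hidden state vector.
    Returns an (n,) vector, so the state dimension is preserved.
    """
    return np.tanh(W @ h)

n = 4
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))   # n x n, so W @ h is defined
h = rng.standard_normal(n)        # initial hidden state

# Iterate the recurrence: the hidden state stays n-dimensional
# at every step, and tanh keeps each entry in (-1, 1).
for _ in range(3):
    h = rnn_step(W, h)

print(h.shape)  # → (4,)
```

Because \( \tanh \) is applied elementwise, it never changes the vector's shape; the shape is determined entirely by the matrix product, which is why \( \mathbf{W} \) must be square.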