General form of an activation function

Description

Matrix multiplications (with bias additions) are affine maps, and a composition of affine maps is still affine. To make a neural network a non-linear function, we therefore insert a non-linear function between layers, called an activation function. We typically denote it as \( \htmlClass{sdt-0000000051}{\sigma} \).

The function can operate on scalars or on vectors. If it operates on a vector, it works element-wise, applying the same function to each entry of the vector.

\[\htmlClass{sdt-0000000051}{\sigma}: \htmlClass{sdt-0000000045}{\mathbb{R}}^{\htmlClass{sdt-0000000117}{n}} \rightarrow \htmlClass{sdt-0000000045}{\mathbb{R}}^{\htmlClass{sdt-0000000117}{n}}\]
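The element-wise behaviour can be sketched in Python with NumPy. Here the logistic sigmoid is an assumed concrete choice of \( \sigma \); any other non-linear activation would be applied to a vector in the same way.

```python
import numpy as np

def sigma(x):
    """Logistic sigmoid: one concrete (assumed) choice of activation function."""
    return 1.0 / (1.0 + np.exp(-x))

# On a scalar, sigma maps a real number to a real number:
s = sigma(0.0)  # 0.5

# On a vector, the same function is applied to each entry independently,
# so the output has the same shape as the input:
v = np.array([-1.0, 0.0, 1.0])
out = sigma(v)
```

Because NumPy ufuncs operate element-wise by default, `out[i]` equals `sigma(v[i])` for every index `i`, matching the mapping \( \mathbb{R}^n \rightarrow \mathbb{R}^n \) above.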

Symbols Used:

\( n \)

This symbol represents any given whole number, \( n \in \htmlClass{sdt-0000000014}{\mathbb{W}}\).

\( \sigma \)

This symbol represents the activation function. It maps real values to other real values in a non-linear way.

\( \mathbb{R} \)

This is the symbol for the set of real numbers.
