Universal approximation theorem
Explanation
Any neural network architecture aims at finding a function that maps inputs to outputs.
The UAT states that NNs have a universality property: no matter the (continuous) target function, a feed-forward network with a single hidden layer and enough neurons can approximate it to any desired accuracy on a compact domain.
This means NNs are not limited to a specific type of function or problem; they can model a wide range of relationships between inputs and outputs.
Neural networks may not find the exact function, but they can achieve an approximation that gets arbitrarily close to the true function.
- ? Does this apply to non-continuous functions? (The classical statement assumes a continuous target on a compact set; discontinuous functions can still be approximated in weaker senses, e.g. in Lp norm or almost everywhere.)
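A minimal sketch of the constructive intuition behind the theorem (my own illustrative example, not from these notes, assuming numpy is available): steep sigmoids act as near-step functions, so a single hidden layer can build a piecewise-constant approximation of a continuous target, here sin(x). More hidden neurons (finer steps) give a smaller error.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for very steep weights
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

# Target function on a compact domain [0, 2*pi]
f = np.sin
grid = np.linspace(0.0, 2.0 * np.pi, 201)  # 200 intervals

# One hidden neuron per interval: weight k makes each sigmoid nearly a
# step, bias -k*x_i places the step at grid point x_i, and the output
# weight is the increment of f across that interval.
k = 500.0                   # steepness of each "step"
hidden_w = np.full(200, k)  # input -> hidden weights
hidden_b = -k * grid[:-1]   # hidden biases
out_w = np.diff(f(grid))    # hidden -> output weights

def network(x):
    """One-hidden-layer net: f(x_0) + sum_i out_w[i] * sigmoid(k*x + b_i)."""
    h = sigmoid(np.outer(x, hidden_w) + hidden_b)  # hidden activations
    return f(grid[0]) + h @ out_w

xs = np.linspace(0.0, 2.0 * np.pi, 1000)
max_err = np.max(np.abs(network(xs) - f(xs)))
```

Note the weights here are constructed by hand to show that an approximating network exists; the theorem says nothing about whether gradient descent would find them.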
Caveats
- The required network might be infeasibly large (the theorem guarantees existence, not a practical size)
- The model is not guaranteed to generalise: the theorem says nothing about finding the right weights from finite training data, or about behaviour outside it