Why has the Reproducing Kernel Hilbert Space (RKHS) become such an important concept in machine learning in recent times? Is it because it allows us to represent a function as a combination of linear functions?

What areas of mathematics does one need to cover before understanding RKHS?


#### Best Answer

As the name says, a reproducing kernel Hilbert space is a Hilbert space, so some knowledge of Hilbert spaces/functional analysis comes in handy … But you might as well start with RKHS, and then see what you do not understand, and what you need to read to cover that.

The usual example of a Hilbert space, $L_2$, has the problem that its members are not functions, but equivalence classes of functions that coincide except on a set of (Lebesgue) measure zero. That way, they always give the same results when integrated … and that is what $L_2$ spaces can be used for. Members of $L_2$ spaces cannot really be evaluated pointwise, since you can change the value at one point without changing the value of the integral.
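A small numerical sketch of this point (assuming NumPy): two functions that differ at the single point $x = 0.5$ have Riemann sums whose difference vanishes as the grid is refined, since one point contributes only width $1/n$ to the sum.

```python
import numpy as np

def f(x):
    return x ** 2

def f_tilde(x):
    # identical to f, except at the single point x = 0.5
    return np.where(x == 0.5, 100.0, x ** 2)

# left-endpoint Riemann sums on [0, 1]; the grid hits x = 0.5 when n is even
for n in [10, 100, 1000, 10000]:
    xs = np.linspace(0.0, 1.0, n + 1)
    dx = 1.0 / n
    diff = abs(np.sum(f_tilde(xs)) - np.sum(f(xs))) * dx
    print(n, diff)  # the single altered point contributes ~ 99.75 / n
```

The difference shrinks like $1/n$, so in the limit both functions have the same integral: as elements of $L_2$ they are indistinguishable.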

So in applications where you really want functions that you can evaluate at individual points (as in approximation theory, regression, …), RKHSs come in handy, because the defining property is equivalent to the requirement that the evaluation functional $$ E_x(f) = f(x) $$ be continuous in $f$ for each $x$. So you can evaluate the member functions, and replacing $f$ with some nearby function, say $f+\epsilon$ (in some sense …), will only change the value a little bit. That is the intuition you asked for.
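This can be made concrete (a sketch, assuming NumPy and a Gaussian RBF kernel): a member of the RKHS of the form $f = \sum_i \alpha_i\, k(\cdot, x_i)$ has RKHS norm $\|f\|^2 = \alpha^\top K \alpha$, and by the reproducing property plus Cauchy–Schwarz, evaluation is controlled by the norm: $|f(x)| = |\langle f, k(\cdot, x)\rangle| \le \|f\| \sqrt{k(x, x)}$.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel k(x, y) = exp(-gamma * |x - y|^2)
    return np.exp(-gamma * (x - y) ** 2)

rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=5)   # the points x_i
alpha = rng.normal(size=5)                 # the coefficients alpha_i

def f(x):
    # f = sum_i alpha_i k(., x_i): a member of the RKHS
    return float(np.sum(alpha * rbf(centers, x)))

# RKHS norm: ||f||^2 = alpha^T K alpha, with K the Gram matrix
K = rbf(centers[:, None], centers[None, :])
norm_f = np.sqrt(alpha @ K @ alpha)

# continuity of evaluation: |f(x)| <= ||f|| * sqrt(k(x, x))
x = 0.3
assert abs(f(x)) <= norm_f * np.sqrt(rbf(x, x))
```

So two functions that are close in the RKHS norm have pointwise values that are close everywhere, which is exactly the continuity of $E_x$ described above; for $L_2$ no such bound exists.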

### Similar Posts:

- Solved – How to prove there is no finite-dimensional feature space for Gaussian RBF kernel
- Solved – Can someone provide a brief explanation as to why reproducing kernel Hilbert space is so popular in machine learning
- Solved – Are “kernel methods” and “reproducing kernel Hilbert spaces” related
- Solved – Understanding the reproducing property of RKHS