Reproducing Kernel Hilbert Spaces

 Note: this was originally a thread on my twitter

I'm assuming the reader is already familiar with Hilbert spaces. If you need a refresher, here's my post about them.

We'll start out in the world of a generic Hilbert space and talk about dual spaces and bounded linear functionals. Conveniently, this is sort of what my linear algebra students are learning about right now!

A linear functional is a linear transformation from a vector space to its underlying field: for example, a linear map \(\Lambda\) from \(\ell^2(\mathbb{C})\) to \(\mathbb{C}\). Written in symbols, linearity means we can pull out constants and separate along addition:

\[ \Lambda(\alpha f + \beta g) = \alpha\,\Lambda(f) + \beta\,\Lambda(g). \]

There are lots of examples of linear functionals, but we'll care about the ones that are *bounded*, that is, the supremum of \(|\Lambda(f)|\) over the unit ball is finite:

\[ \sup_{\|f\| \le 1} |\Lambda(f)| < \infty. \]
There's a nice theorem that says that for linear functionals, being bounded is equivalent to being continuous and to being continuous at 0.

**Theorem.** Let \(\Lambda\) be a linear functional on a normed vector space. Then the following are equivalent: \(\Lambda\) is bounded; \(\Lambda\) is continuous; \(\Lambda\) is continuous at \(0\).

The space of continuous linear functionals on a normed vector space is called the dual of the space and has some other nifty properties that I'm not going to spend time on right now, but here is the norm in the dual space that I will be using:
\[ \|\Lambda\| = \sup_{\|f\| \le 1} |\Lambda(f)|. \]

Now we come to my favorite theorem! The Riesz representation theorem for Hilbert spaces! This says that any continuous linear functional on a Hilbert space can be represented as the inner product with a particular element of the Hilbert space. (This means we can say Hilbert spaces are "self dual," something not true for general normed spaces.)


**Theorem (Riesz Representation).** For every continuous linear functional \(\Lambda\) on a Hilbert space \(H\), there is a unique \(h \in H\) such that \(\Lambda(f) = \langle f, h \rangle\) for all \(f \in H\); moreover, \(\|\Lambda\| = \|h\|\).

(Once I mentioned that this was my favorite theorem and people laughed at me because it maybe isn't the most exciting theorem out there, but I love it because I remember learning it for the first time in undergrad and realizing that all these things are tied together, and that there was a world of analysis that connected to linear algebra! Purely nostalgic but I'll always love it.)

Now that we have Riesz Representation (RR from now on), we can actually get down to business defining Reproducing Kernel Hilbert Spaces! Everything I've done so far has been for general Hilbert spaces, but in my last thread, I mentioned that we sometimes distinguish between different Hilbert spaces by examining their OTHER properties.

Every space I'm going to talk about from here on out is a separable Hilbert space of analytic functions on some complex domain. Remember, analytic means I can write these functions as power series, and requiring that I have a SEPARABLE Hilbert space means that all of these are isomorphic to \(\ell^2\), though in some cases the isomorphism may not be obvious.


Our example from last time is the Hardy space of the unit disk, \(H^2(\mathbb{D})\). Remember, the norm in this space can be given in a couple of ways:


An analytic function on the disk can be written as a power series, \(f(z) = \sum_{n=0}^\infty a_n z^n\), and the Hardy space is the set of such functions with square-summable Taylor coefficients:

\[ H^2(\mathbb{D}) = \Big\{ f(z) = \sum_{n=0}^\infty a_n z^n \text{ analytic on } \mathbb{D} : \|f\|^2 = \sum_{n=0}^\infty |a_n|^2 < \infty \Big\}. \]

The norm can also be given as an integral mean:

\[ \|f\|^2 = \sup_{0 < r < 1} \frac{1}{2\pi} \int_0^{2\pi} |f(re^{i\theta})|^2 \, d\theta. \]
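(Not from the original thread, but here's a quick numerical sanity check that the coefficient sum and the integral mean agree. The particular polynomial, the radius, and the grid size are my own arbitrary choices; for a polynomial the supremum over \(r\) is just the limit as \(r \to 1\).)

```python
import numpy as np

# f(z) = 1 + z/2 + z^2/3, so the coefficient formula gives
# ||f||^2 = 1 + 1/4 + 1/9
coeffs = np.array([1.0, 0.5, 1.0 / 3.0])
norm_sq_coeffs = np.sum(np.abs(coeffs) ** 2)

# Integral mean at r close to 1, via uniform sampling of theta
# (which integrates trigonometric polynomials of low degree exactly)
theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
z = 0.999999 * np.exp(1j * theta)
f_vals = np.polyval(coeffs[::-1], z)  # polyval wants highest degree first
norm_sq_integral = np.mean(np.abs(f_vals) ** 2)
```

The two quantities agree to several decimal places, as the norm formulas predict.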
Another interesting space is the Bergman Hilbert space on the unit disk, \(A^2(\mathbb{D})\), the set of analytic functions that are square integrable with respect to area measure on the unit disk. This norm also has a summation representation:

\[ \|f\|_{A^2}^2 = \int_{\mathbb{D}} |f(z)|^2 \, dA(z) = \sum_{n=0}^\infty \frac{|a_n|^2}{n+1}, \]

where \(dA\) is normalized area measure on the disk.
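(Again not from the original thread: we can check the Bergman identity numerically by doing the area integral on a polar grid. The test function and grid resolution are my own choices.)

```python
import numpy as np

# f(z) = 1 + z: the coefficient formula gives ||f||^2 = 1/1 + 1/2 = 1.5
coeffs = np.array([1.0, 1.0])
norm_sq_coeffs = np.sum(np.abs(coeffs) ** 2 / (np.arange(len(coeffs)) + 1))

# Area integral with normalized area measure dA = dx dy / pi,
# computed on a polar grid: midpoint rule in r, uniform samples in theta
r = (np.arange(400) + 0.5) / 400
theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
R, T = np.meshgrid(r, theta)
Z = R * np.exp(1j * T)
F = np.polyval(coeffs[::-1], Z)
norm_sq_integral = np.sum(np.abs(F) ** 2 * R) * (1 / 400) * (2 * np.pi / 512) / np.pi
```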

A third example is the Dirichlet space on the unit disk. In some sense this is "the opposite of the Bergman space." The Dirichlet space on the unit disk is the set of analytic functions with *derivatives* that are square integrable with respect to the area measure.  The summation representation of the norm for the Dirichlet space has the weight of the Bergman space flipped:


\[ \|f\|_{\mathcal{D}}^2 = \|f\|_{H^2}^2 + \int_{\mathbb{D}} |f'(z)|^2 \, dA(z) = \sum_{n=0}^\infty (n+1) |a_n|^2. \]

Important note: this norm doesn't quite agree with just the Bergman norm of the derivative of a function. We have to be careful about the constant functions to make sure they won't have norm zero.
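(A little coefficient arithmetic, mine rather than the thread's, shows how the flipped weight comes about: the \(H^2\) norm contributes the weight \(1\) and the Bergman-type integral of \(f'\) contributes the weight \(n\), so together you get \(n+1\). The example polynomial is an arbitrary choice.)

```python
import numpy as np

# f(z) = 1 + 2z + z^2: the sum formula gives
# ||f||_D^2 = 1*(1) + 2*(4) + 3*(1) = 12
coeffs = np.array([1.0, 2.0, 1.0])
n = np.arange(len(coeffs))
norm_sq_coeffs = np.sum((n + 1) * np.abs(coeffs) ** 2)

# Split form: H^2 norm of f plus the Bergman integral of f'.
# f'(z) = 2 + 2z, so with normalized area measure
# int |f'|^2 dA = 4/1 + 4/2 = 6, and ||f||_{H^2}^2 = 1 + 4 + 1 = 6.
h2_part = np.sum(np.abs(coeffs) ** 2)
dcoeffs = coeffs[1:] * n[1:]                    # Taylor coefficients of f'
m = np.arange(len(dcoeffs))
bergman_of_derivative = np.sum(np.abs(dcoeffs) ** 2 / (m + 1))
norm_sq_split = h2_part + bergman_of_derivative
```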


All three of these are examples of RKHS, but how do we actually DEFINE an RKHS?

Let's go back to the Riesz Representation theorem, and then remember that these are spaces of FUNCTIONS: we can plug points into them!

When I first learned about this, I called this "the plug it in functional," but the standard, maybe more eloquent, name is "the point evaluation functional." For each point in your domain, you can define a functional by evaluating each function in your space at that point:

\[ \Lambda_w : H \to \mathbb{C}, \qquad \Lambda_w(f) = f(w). \]

NOW! For our three examples of function spaces above, every point evaluation functional is bounded and so RR says that for each point, there is an element of the function space that, when inner-product-ed with any function f gives the value of the function at that point. This is called the _reproducing kernel_ at that point because it "reproduces" the value of the function!

\[ f(w) = \langle f, k_w \rangle \quad \text{for all } f \in H. \]
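(Here's a numerical illustration of the reproducing property in the Hardy space, not from the original thread. I'm using the standard fact that the \(H^2\) inner product can be computed as a boundary integral over the unit circle; the point \(w\), the polynomial, and the number of sample points are my own choices.)

```python
import numpy as np

# In H^2, the kernel at w is k_w(z) = 1/(1 - conj(w) z).
w = 0.3 + 0.2j
coeffs = np.array([1.0, -2.0, 0.5])            # f(z) = 1 - 2z + z^2/2
f = lambda z: np.polyval(coeffs[::-1], z)
k_w = lambda z: 1.0 / (1.0 - np.conj(w) * z)

# <f, k_w> as a boundary integral mean over the unit circle
theta = np.linspace(0, 2 * np.pi, 2048, endpoint=False)
z = np.exp(1j * theta)
inner_product = np.mean(f(z) * np.conj(k_w(z)))  # should agree with f(w)
```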

So we define a Reproducing Kernel Hilbert Space as a space of analytic functions on a complex domain such that each point evaluation functional is bounded. Remember, each reproducing kernel \(k_w\) is an element of the Hilbert space: it's actually a function, and we can take its inner product with other reproducing kernels:

\[ k_w(z) = \langle k_w, k_z \rangle. \]

It's sometimes convenient to write this as a function of two variables, \(K(z, w) = k_w(z)\), called the kernel function for \(H\). One nifty thing is that this kernel function can be expanded in terms of any orthonormal basis for \(H\):

\[ K(z, w) = \sum_n e_n(z)\, \overline{e_n(w)}, \quad \text{where } \{e_n\} \text{ is any orthonormal basis for } H. \]
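(As a quick check of that expansion, and again my own addition: in \(H^2\) the monomials \(z^n\) form an orthonormal basis, so the sum is just the geometric series for the Hardy kernel. The sample points and truncation length are arbitrary.)

```python
import numpy as np

# Truncated basis expansion sum_n e_n(z) conj(e_n(w)) with e_n(z) = z^n,
# compared against the closed form 1/(1 - conj(w) z)
z, w = 0.4 + 0.1j, 0.2 - 0.3j
n = np.arange(200)
series = np.sum(z ** n * np.conj(w) ** n)
closed_form = 1.0 / (1.0 - np.conj(w) * z)
```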


You can write a more general definition of a kernel on a domain X (which I'll neglect here since it involves a little bit of technicality) and then it turns out that there is a bijective correspondence between Hilbert function spaces on X and kernels on X.

You can interpret that as saying that the reproducing kernel of a Hilbert Function Space "contains the information about the space."




Let's look at some examples! In particular, here are the reproducing kernels of the three example spaces I discussed above:

\[ \text{Hardy: } K(z, w) = \frac{1}{1 - \bar{w} z} \]
\[ \text{Bergman: } K(z, w) = \frac{1}{(1 - \bar{w} z)^2} \]
\[ \text{Dirichlet: } K(z, w) = \frac{1}{\bar{w} z} \log \frac{1}{1 - \bar{w} z} \]
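(The Dirichlet kernel is the least obvious of the three, so here's a numerical check of its closed form against its power series \(\sum_n (z\bar{w})^n/(n+1)\), which comes from the orthonormal basis \(z^n/\sqrt{n+1}\). This snippet and its sample points are my own addition.)

```python
import numpy as np

# Compare the Dirichlet kernel's power series with its closed form
z, w = 0.5 + 0.2j, 0.3 - 0.4j
x = z * np.conj(w)                       # |x| < 1, so the series converges
n = np.arange(200)
series = np.sum(x ** n / (n + 1))
closed_form = np.log(1.0 / (1.0 - x)) / x
```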

There are lots of other useful examples, both in one and in several variables, and looking at kernels that are "nice" can be helpful to find the "correct" generalization of a space like the Hardy space.


One particularly exciting example? There is a space of functions that has the RIEMANN ZETA function as its reproducing kernel!

Most of the information in this thread comes from "Pick Interpolation and Hilbert Function Spaces" by Agler and McCarthy. It's probably my go-to reference!
Another place for further reading is "An Introduction to the Theory of Reproducing Kernel Hilbert Spaces" by Paulsen and Raghupathi.
