Sobolev space

In mathematics, a Sobolev space is a normed space of functions obtained by imposing on a function f and its derivatives up to some order k the condition of finite L^{p} norm, for given p ≥ 1. It is named after Sergei L. Sobolev.
Introduction
There are many criteria for smoothness of mathematical functions. The most basic criterion may be that of continuity. A considerably stronger notion of smoothness is differentiability (differentiable functions are also continuous), and a yet stronger notion is that the derivative also be continuous (such functions are said to be <math>C^1</math>; see smooth function). Differentiable functions are important in many areas, and in particular for differential equations. In the twentieth century, however, it was observed that the space <math>C^1</math> was not exactly the right space in which to study solutions of differential equations.
Many physical problems, such as weather prediction or microwave oven design, are modelled by partial differential equations. In such problems there is some data (such as today's weather, or the shape and water distribution of the food in the oven) and there is a prediction (such as tomorrow's weather, or the time required to cook the food). In some cases it is difficult to do an accurate simulation: the butterfly effect makes long-term weather predictions extremely difficult. Scientists therefore need to be able to estimate the accuracy of their simulations. This can be turned into a mathematical question:
 If the initial data and/or the model are slightly wrong, how wrong can my prediction be?
By examining this question, mathematicians eventually gave precise descriptions of "slightly wrong data" and "wrong prediction". In so doing, it became apparent that the natural space of <math>C^1</math> functions was inadequate: once the meaning of "slightly wrong data" and "wrong prediction" was settled, it was clear that sometimes the "predictions" would not be <math>C^1</math>. This required a careful investigation of the meaning of a differential equation when the solution is not even differentiable.
The Sobolev spaces are the modern replacement for the space <math>C^1</math> of solutions of partial differential equations. In these spaces, we can estimate the size of the butterfly effect or, if it cannot be estimated, we can often prove that the butterfly effect is too strong to be controlled.
Technical discussion
We start by introducing Sobolev spaces in the simplest setting, the one-dimensional case on the unit circle. In this case the Sobolev space <math>W^{k,p}</math> is defined to be the subset of <math>L^p</math> of functions f such that f and its derivatives up to some order k have a finite <math>L^p</math> norm, for given p ≥ 1. Some care must be taken to define derivatives in the proper sense. In the one-dimensional problem it is enough to assume that <math>f^{(k-1)}</math> is differentiable almost everywhere and is equal to the Lebesgue integral of its derivative (this excludes examples such as the Cantor function, which are irrelevant to what the definition is trying to accomplish).
With this definition, the Sobolev spaces admit a natural norm,
 <math>\|f\|_{k,p}=\sum_{i=0}^k \|f^{(i)}\|_p = \sum_{i=0}^k \Big(\int |f^{(i)}(t)|^p\,dt \Big)^{1/p}.</math>
<math>W^{k,p}</math> equipped with the norm <math>\|\cdot\|_{k,p}</math> is a Banach space. It turns out that it is enough to take only the first and last terms in the sum, i.e. the norm defined by
 <math>\|f^{(k)}\|_p + \|f\|_p</math>
is equivalent to the norm above.
Examples
A few Sobolev spaces have simpler descriptions. For example, in one dimension, <math>W^{1,1}</math> is the space of absolutely continuous functions, while W^{1,∞} is the space of Lipschitz functions. Further, <math>W^{k,2}</math> can be described naturally in terms of Fourier series, namely,
 <math>W^{k,2}({\mathbb T}) = \Big\{ f\in L^2({\mathbb T}):\sum_{n=-\infty}^\infty (1+n^2 + \dotsb + n^{2k}) |\widehat{f}(n)|^2 < \infty\Big\}</math>
where <math>\widehat{f}(n)</math> denotes the Fourier coefficients of f. As above, one can use the equivalent norm
 <math>\|f\|^2=\sum_{n=-\infty}^\infty (1 + n^{2k}) |\widehat{f}(n)|^2.</math>
Both representations follow easily from Parseval's theorem and the fact that differentiation is equivalent to multiplying the Fourier coefficient by <math>in</math>. This specific case is so important that it has a special notation, <math>H^k</math>:
 <math>\,H^k = W^{k,2}.</math>
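As a quick numerical illustration (a sketch we add here, not part of the standard presentation; the grid size and the test function sin are arbitrary choices), the Fourier-side formula for the <math>H^1</math> norm can be checked against the derivative-side definition using the FFT:

```python
import numpy as np

# Sample f(t) = sin(t) on an N-point grid over the circle [0, 2*pi).
N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(t)

# Fourier coefficients, normalized so fhat[n] ~ (1/2pi) * integral of f * e^{-int}.
fhat = np.fft.fft(f) / N
n = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies 0, 1, ..., -2, -1

# Fourier-side squared H^1 norm: sum over n of (1 + n^2) |fhat(n)|^2.
h1_fourier = np.sum((1 + n**2) * np.abs(fhat) ** 2)

# Derivative-side squared norm ||f||_2^2 + ||f'||_2^2, using the
# normalized inner product (1/2pi) * integral over the circle.
fp = np.cos(t)  # f' computed by hand for this test function
h1_direct = np.mean(f**2) + np.mean(fp**2)

print(h1_fourier, h1_direct)  # both approximately 1.0
```

For sin(t) only the modes n = ±1 contribute, each with |f̂(n)| = 1/2, so both computations give 1.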
Sobolev spaces with non-integer k
To prevent confusion, when talking about an order k which is not an integer we will usually denote it by s, i.e. <math>W^{s,p}</math> or <math>H^s</math>.
The case p = 2
The case p = 2 is the easiest since the Fourier description is straightforward to generalize. We define the norm
 <math>\|f\|^2_{2,s}=\sum_{n=-\infty}^\infty (1+|n|^{2s})|\widehat{f}(n)|^2</math>
and the Sobolev space <math>H^s</math> as the space of all functions with finite norm.
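The same FFT computation as above adapts verbatim to fractional s (a hedged sketch; the function name and the choice s = 1/2 are ours, purely for illustration):

```python
import numpy as np

def h_s_norm_sq(f_vals, s):
    """Squared H^s norm on the circle: sum of (1 + |n|^(2s)) |fhat(n)|^2.

    f_vals: samples of f on a uniform grid over [0, 2*pi)."""
    N = len(f_vals)
    fhat = np.fft.fft(f_vals) / N
    n = np.fft.fftfreq(N, d=1.0 / N)
    return np.sum((1 + np.abs(n) ** (2 * s)) * np.abs(fhat) ** 2)

t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
# For sin(t) only the modes n = +-1 contribute, each with |fhat| = 1/2,
# so the squared norm is 2 * (1 + 1) * (1/4) = 1 for every s.
print(h_s_norm_sq(np.sin(t), 0.5))  # approximately 1.0
```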
Fractional order differentiation
A similar approach can be used if p is different from 2. In this case Parseval's theorem no longer holds, but differentiation still corresponds to multiplication in the Fourier domain and can be generalized to non-integer orders. Therefore we define an operator of fractional-order differentiation of order s by
 <math>F^s(f)=\sum_{n=-\infty}^\infty (in)^s\widehat{f}(n)e^{int}</math>
or in other words, taking the Fourier transform, multiplying by <math>(in)^s</math>, and then taking the inverse Fourier transform (operators defined by Fourier transform, multiplication, inverse Fourier transform are called multipliers and are a topic of research in their own right). This allows us to define the Sobolev norm <math>\|\cdot\|_{s,p}</math> by
 <math>\,\|f\|_{s,p}=\|f\|_p+\|F^s(f)\|_p</math>
and, as usual, the Sobolev space is the space of functions with finite Sobolev norm.
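A minimal numerical sketch of this multiplier (the function name is ours; for non-integer s the power <math>(in)^s</math> is taken on NumPy's principal branch, which is one possible convention, not one fixed by the discussion above):

```python
import numpy as np

def frac_diff(f_vals, s):
    """Fractional differentiation F^s on the circle: FFT, multiply the
    n-th coefficient by (i*n)^s, inverse FFT. The n = 0 mode is set to 0."""
    N = len(f_vals)
    fhat = np.fft.fft(f_vals)
    n = np.fft.fftfreq(N, d=1.0 / N)
    mult = np.zeros(N, dtype=complex)
    nz = n != 0
    mult[nz] = (1j * n[nz]) ** s  # principal branch for non-integer s
    return np.fft.ifft(mult * fhat)

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
# For s = 1 this is ordinary differentiation: F^1(sin) = cos.
print(np.allclose(frac_diff(np.sin(t), 1).real, np.cos(t)))  # True
```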
Complex interpolation
Another way of obtaining the "fractional Sobolev spaces" is given by complex interpolation. Complex interpolation is a general technique: for any 0 ≤ t ≤ 1 and any Banach spaces X and Y that are continuously included in some larger Banach space, we may create an "intermediate space" denoted [X,Y]_{t}. (Below we discuss a different method, the so-called real interpolation method, which is essential in Sobolev theory for the characterization of traces.)
Such a pair of spaces X and Y is called an interpolation pair.
We mention a couple of useful theorems about complex interpolation:
Theorem (reinterpolation): [ [X,Y]_{a} , [X,Y]_{b} ]_{c} = [X,Y]_{cb+(1-c)a}.
Theorem (interpolation of operators): if {X,Y} and {A,B} are interpolation pairs, and if T is a linear map defined on X+Y into A+B such that T is continuous from X to A and from Y to B, then T is continuous from [X,Y]_{t} to [A,B]_{t}, and we have the interpolation inequality:
<math>\|T\|_{[X,Y]_t \to [A,B]_t}\leq C\|T\|_{X\to A}^{1-t}\|T\|_{Y\to B}^t.</math>
See also: Riesz–Thorin theorem.
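As a concrete instance of these theorems (a standard fact we add for illustration; it is not claimed in the text above), complex interpolation of the Lebesgue spaces recovers the Riesz–Thorin theorem:

```latex
% Standard fact (Calderon): for 1 \le p_0, p_1 \le \infty and 0 \le t \le 1,
\[ [L^{p_0}, L^{p_1}]_t = L^p, \qquad \frac{1}{p} = \frac{1-t}{p_0} + \frac{t}{p_1}. \]
% The interpolation-of-operators theorem applied to this pair then yields
% the Riesz--Thorin bound
\[ \|T\|_{L^p \to L^q} \le \|T\|_{L^{p_0} \to L^{q_0}}^{1-t}\,
   \|T\|_{L^{p_1} \to L^{q_1}}^{t}, \qquad
   \frac{1}{q} = \frac{1-t}{q_0} + \frac{t}{q_1}. \]
```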
Returning to Sobolev spaces, we want to get <math>W^{s,p}</math> for non-integer s by interpolating between the spaces <math>W^{k,p}</math>. The first thing is of course to see that this gives consistent results, and indeed we have
Theorem: <math>\left[W^{0,p},W^{m,p}\right]_t=W^{n,p}</math> if n is an integer such that n = tm.
Hence, complex interpolation is a consistent way to get a continuum of spaces <math>W^{s,p}</math> between the <math>W^{k,p}</math>. Further, it gives the same spaces as fractional-order differentiation does (but see extension operators below for a twist).
Multiple dimensions
We now turn to the case of Sobolev spaces in R^{n} and subsets of R^{n}. The change from the circle to the line entails only technical changes in the Fourier formulas: basically a change of Fourier series to Fourier transforms, and sums to integrals. The transition to multiple dimensions brings more difficulties, starting from the very definition. The requirement that <math>f^{(k-1)}</math> be the integral of <math>f^{(k)}</math> does not generalize, and the simplest solution is to consider derivatives in the sense of distribution theory.
A formal definition now follows. Let D be an open set in R^{n}. We define the Sobolev space
 <math>\,W^{k,p}(D)</math>
as the family of functions f defined on D such that for every multi-index <math>\alpha</math> with
 <math>|\alpha|\leq k</math>
we have that <math>f^{(\alpha)}</math> is a function and
 <math>\|f^{(\alpha)}\|_p < \infty.</math>
The appropriate norm to take on it is the sum of those L^{p} norms over all such α. It is then complete, and so a Banach space.
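The set of multi-indices over which this norm sums can be enumerated directly (a small sketch; the function name is ours):

```python
from itertools import product

def multi_indices(dim, k):
    """All multi-indices alpha in N^dim with |alpha| = alpha_1 + ... + alpha_dim <= k.

    The W^{k,p} norm sums the L^p norms of the weak derivatives f^(alpha)
    over exactly this set."""
    return [alpha for alpha in product(range(k + 1), repeat=dim) if sum(alpha) <= k]

# In two dimensions with k = 1 the derivatives involved are
# f itself, df/dy and df/dx:
print(multi_indices(2, 1))  # [(0, 0), (0, 1), (1, 0)]
```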
Actually, this approach also works in one dimension, and is not very different from the one described under fractional-order differentiation above.
Examples
In multiple dimensions, it is no longer true that, for example, <math>W^{1,1}</math> contains only continuous functions. For example, 1/|x| belongs to <math>W^{1,1}(B^3)</math>, where <math>B^3</math> is the unit ball in three dimensions. It is true that for k sufficiently large, <math>W^{k,p}(D)</math> contains only continuous functions, but the smallest such k depends on both p and the dimension.
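The claim about 1/|x| can be verified directly in spherical coordinates (a short computation we add for completeness):

```latex
% With |x| = r and dx = r^2 \sin\theta \, dr \, d\theta \, d\phi on B^3:
\[ \int_{B^3} \frac{dx}{|x|} = 4\pi \int_0^1 \frac{r^2}{r}\,dr = 2\pi < \infty, \]
% and, since |\nabla(1/|x|)| = 1/|x|^2 away from the origin,
\[ \int_{B^3} \Big|\nabla \frac{1}{|x|}\Big|\,dx = 4\pi \int_0^1 \frac{r^2}{r^2}\,dr = 4\pi < \infty, \]
% so the function and its weak gradient are both integrable on B^3,
% even though the function is unbounded (hence discontinuous) at the origin.
```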
However, the descriptions of W^{1,∞} and <math>W^{k,2}</math> above hold, mutatis mutandis.
Traces
Let s > ½. If X is an open set such that its boundary G is "sufficiently smooth", then we may define the trace (that is, restriction) map P by
 <math>Pu=u|_G,</math>
i.e. u restricted to G. A sample smoothness condition is uniformly <math>C^m</math>, m ≥ s. (NB There is no connection here to the trace of a matrix.)
This trace map P as defined has domain <math>H^s(X)</math>, and its image is precisely <math>H^{s-1/2}(G)</math>. To be completely formal, P is first defined for infinitely differentiable functions and is extended by continuity to <math>H^s(X)</math>. Note that we 'lose half a derivative' in taking this trace.
Identifying the image of the trace map for <math>W^{s,p}</math> is considerably more difficult and demands the tool of real interpolation, which we shall not go into. The resulting spaces are the Besov spaces. It turns out that in the case of the <math>W^{s,p}</math> spaces, we don't lose half a derivative; rather, we lose 1/p of a derivative.
Extension operators
If X is an open domain whose boundary is not too poorly behaved (e.g., if its boundary is a manifold, or satisfies the more permissive but more obscure "cone condition"), then there is an operator A mapping functions on X to functions on R^{n} such that:
 Au(x) = u(x) for almost every x in X and
 A is continuous from <math>W^{k,p}(X)</math> to <math>W^{k,p}({\mathbb R}^n)</math>, for any 1 ≤ p ≤ ∞ and integer k.
We will call such an operator A an extension operator for X.
Extension operators are the most natural way to define <math>H^s(X)</math> for non-integer s (we cannot work directly on X since taking the Fourier transform is a global operation). We define <math>H^s(X)</math> by saying that u is in <math>H^s(X)</math> if and only if Au is in <math>H^s(\mathbb R^n)</math>. Equivalently, complex interpolation yields the same <math>H^s(X)</math> spaces so long as X has an extension operator. If X does not have an extension operator, complex interpolation is the only way to obtain the <math>H^s(X)</math> spaces.
As a result, the interpolation inequality still holds.
Extension by zero
We define <math>H^s_0(X)</math> to be the closure in <math>H^s(X)</math> of the space <math>C^\infty_c(X)</math> of infinitely differentiable compactly supported functions. Given the definition of a trace above, we may state the following
Theorem: Let X be uniformly C^{m} regular, m ≥ s, and let P be the linear map sending u in <math>H^s(X)</math> to
 <math>\left.\left(u,\frac{du}{dn},\dots,\frac{d^k u}{dn^k}\right)\right|_G</math>
where d/dn is the derivative normal to G, and k is the largest integer less than s. Then <math>H^s_0</math> is precisely the kernel of P.
If <math>u\in H^s_0(X)</math> we may define its extension by zero <math>\tilde u \in L^2({\mathbb R}^n)</math> in the natural way, namely
 <math>\tilde u(x)=\begin{cases} u(x) & \text{if } x \in X, \\ 0 & \text{otherwise.} \end{cases}</math>
Theorem: Let s > ½. The map taking u to <math>\tilde u</math> is continuous into <math>H^s({\mathbb R}^n)</math> if and only if s is not of the form n + ½ for n an integer.