Symmetry of second derivatives

In mathematics, the symmetry of second derivatives refers to the possibility of interchanging the order of taking partial derivatives of a function
 f(x_{1}, x_{2}, ..., x_{n})
of n variables. If the partial derivative with respect to x_{i} is denoted with a subscript i, then the symmetry is the assertion that
 f_{ij}
is an n × n symmetric matrix. This matrix is called the Hessian matrix of f. Its entries off the main diagonal are the mixed derivatives; that is, successive partial derivatives with respect to different variables. Under most ordinary circumstances the Hessian matrix is indeed symmetric; but from the point of view of mathematical analysis this is not automatic, without some hypothesis on f that goes further than simply stating the existence of the second derivatives at a particular point. Clairaut's theorem gives a sufficient condition on f for the symmetry to hold.
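As an illustration, one can check the symmetry of the Hessian for a particular smooth function with a computer algebra system; this is a sketch assuming the sympy library is available, with an arbitrarily chosen function:

```python
# Check that the Hessian of a smooth function is a symmetric matrix.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.exp(x1) * sp.sin(x2 * x3) + x1**2 * x3  # an arbitrary smooth function

H = sp.hessian(f, (x1, x2, x3))  # 3 x 3 matrix of second partials f_ij

# Symmetry: H equals its own transpose, i.e. f_ij = f_ji for all i, j.
print(sp.simplify(H - H.T) == sp.zeros(3, 3))  # True
```

Of course a single smooth example is no proof; it only exhibits the generic behaviour the theorem describes.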
In symbols, the symmetry says that, for example,
 <math>\frac {\partial}{\partial x} \left( \frac { \partial f }{ \partial y} \right) =
\frac {\partial}{\partial y} \left( \frac { \partial f }{ \partial x} \right)</math>
This equality can also be written as
 <math>\partial_{xy} f = \partial_{yx} f.</math>
Alternatively, the symmetry can be written as an algebraic statement involving the differential operator D_{i} which takes the partial derivative with respect to x_{i}:
 D_{i} D_{j} = D_{j} D_{i}.
From this relation it follows that the ring of differential operators with constant coefficients, generated by the D_{i}, is commutative. But one should naturally specify some domain for these operators. The symmetry is easy to check as applied to monomials, so one can take the polynomials in the x_{i} as a domain. In fact the space of smooth functions is also possible.
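The commutativity on a polynomial domain can be verified directly; a quick sanity check, assuming the sympy library is installed, with an arbitrarily chosen polynomial:

```python
# D1 and D2 commute on polynomials: applying them in either order agrees.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
D1 = lambda g: sp.diff(g, x1)  # partial derivative with respect to x1
D2 = lambda g: sp.diff(g, x2)  # partial derivative with respect to x2

p = 3*x1**4*x2**2 - x1*x2**3 + 7*x2  # an arbitrary polynomial in x1, x2

# D1 D2 p - D2 D1 p vanishes identically.
print(sp.expand(D1(D2(p)) - D2(D1(p))) == 0)  # True
```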
One can also apply the theory of distributions to get round any analytic problems with the symmetry. Firstly, the derivative of any function can be defined (provided it is integrable) as a distribution. Secondly, the use of integration by parts throws the symmetry question back onto the test functions, which are smooth and certainly satisfy the symmetry. One concludes that, in the sense of distributions, the symmetry always holds. (Another approach, where the Fourier transform of the function is defined, is to note that under the transform the partial derivatives become multiplication operators, which commute much more obviously.)
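The Fourier-transform remark can be spelled out in one line: differentiation with respect to a variable becomes multiplication by the corresponding dual variable, and the multiplications commute. A sketch in standard notation, writing the transform with a hat:

```latex
% Under the Fourier transform, \partial_x and \partial_y become
% multiplication by i\xi_1 and i\xi_2, so the mixed derivatives
% have the same transform:
\widehat{\partial_x \partial_y f}(\xi)
  = (i\xi_1)(i\xi_2)\,\hat{f}(\xi)
  = (i\xi_2)(i\xi_1)\,\hat{f}(\xi)
  = \widehat{\partial_y \partial_x f}(\xi).
```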
The fact remains that in the worst case there are counterexamples. In the case of two variables, near (0, 0) one can consider two limiting processes on
 f(h, k) − f(h, 0) − f(0, k) + f(0, 0)
corresponding to making h → 0 first, and to making k → 0 first. These processes need not commute: looking at the first-order terms, it can matter which is applied first. This leads to the construction of pathological examples in which the symmetry of second derivatives fails; one such example is given below. Given that the derivatives as Schwartz distributions are symmetric, this kind of example belongs to the 'fine' theory of real analysis.
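The two iterated limits of the second difference quotient can be written out explicitly; taking k → 0 first recovers a difference quotient for ∂f/∂y:

```latex
% Iterated limits of the second difference quotient at the origin:
\lim_{h \to 0} \lim_{k \to 0}
  \frac{f(h,k) - f(h,0) - f(0,k) + f(0,0)}{hk}
  = \lim_{h \to 0} \frac{(\partial_y f)(h,0) - (\partial_y f)(0,0)}{h}
  = \partial_x \partial_y f\,(0,0),
```

and similarly taking h → 0 first yields ∂_y ∂_x f (0,0); the counterexample below shows these two values can differ.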
A more sophisticated argument is this: consider the first-order differential operators D_{i} to be infinitesimal operators on Euclidean space. That is, D_{i} in a sense generates the one-parameter group of translations parallel to the x_{i}-axis. These groups certainly all commute with each other, and therefore we expect that the infinitesimal generators do also; the Lie bracket
 [D_{i}, D_{j}] = 0
is the way this is reflected. The question of whether this holds as applied to spaces of functions on Euclidean space is an elementary part of the theory of differentiable vectors in representation theory; that is, of finding a suitable subspace of a function space on which the Lie algebra representation derived from a representation of a Lie group has desirable and transparent properties.
Counterexample
Consider the function
 <math>f(x,y) = \frac{xy(x^2 - y^2)}{x^2+y^2}</math>
with <math>f(0,0) = 0</math>. Then the mixed partial derivatives of f exist, and are continuous everywhere except at <math>(0,0)</math>. Moreover
 <math>\frac {\partial}{\partial x} \left( \frac { \partial f }{ \partial y} \right) \ne
\frac {\partial}{\partial y} \left( \frac { \partial f }{ \partial x} \right)</math>
at <math>(0,0)</math>.
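The two mixed partials at the origin can be computed directly from the definition, since each first partial along a coordinate axis is a limit of difference quotients. A sketch of the computation, assuming the sympy library is installed:

```python
# Verify the counterexample: the two mixed partials at (0,0) differ.
import sympy as sp

x, y, h = sp.symbols('x y h')
f = x*y*(x**2 - y**2)/(x**2 + y**2)  # with f(0,0) defined as 0

# f_x along the y-axis: limit of (f(h,y) - f(0,y))/h as h -> 0, giving -y
fx_on_y_axis = sp.limit((f.subs(x, h) - f.subs(x, 0)) / h, h, 0)
# f_y along the x-axis: limit of (f(x,h) - f(x,0))/h as h -> 0, giving x
fy_on_x_axis = sp.limit((f.subs(y, h) - f.subs(y, 0)) / h, h, 0)

dyx = sp.diff(fx_on_y_axis, y).subs(y, 0)  # d/dy (df/dx) at the origin
dxy = sp.diff(fy_on_x_axis, x).subs(x, 0)  # d/dx (df/dy) at the origin

print(dxy, dyx)  # 1 -1
```

So here ∂/∂x(∂f/∂y) = 1 while ∂/∂y(∂f/∂x) = −1 at the origin, confirming the failure of symmetry.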