Determinant

In linear algebra, the determinant is a function that associates a scalar det(A) to every square matrix A. The fundamental geometric meaning of the determinant is as the scale factor for volume when A is regarded as a linear transformation. Determinants are important both in calculus, where they enter the substitution rule for several variables, and in multilinear algebra.
The determinant of <math>A<math> is also sometimes denoted by |A|, but this notation is ambiguous: it is also used for certain matrix norms, and for the square root of <math>{AA}^*<math>.
Determinants of 2-by-2 matrices
The 2×2 matrix
 <math>A=\begin{bmatrix}a&b\\
c&d\end{bmatrix}<math> has determinant
 <math>\det(A)=ad-bc \,<math>.
The interpretation is that this gives the area of the parallelogram with vertices at (0,0), (a,c), (b,d), and (a + b, c + d), with a sign factor (which is −1 if A as a transformation matrix flips the unit square over).
A formula for larger matrices will be given below.
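The 2-by-2 formula and its area interpretation can be written as a few lines of Python (a minimal sketch; the function name `det2` is ours, not a standard API):

```python
# A minimal sketch of the 2-by-2 formula det(A) = ad - bc.
# The name det2 is a hypothetical helper, not a standard API.
def det2(a, b, c, d):
    """Determinant of the matrix [[a, b], [c, d]]."""
    return a * d - b * c

# Area of the parallelogram spanned by the columns (a, c) = (3, 0)
# and (b, d) = (1, 2); taking the absolute value discards the
# orientation sign described above.
area = abs(det2(3, 1, 0, 2))
```

A determinant of −1, as for the column-swapping matrix [[0, 1], [1, 0]], signals the orientation flip mentioned above.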
Applications
Determinants are used to characterize invertible matrices (namely, as exactly those matrices with nonzero determinant), and to explicitly describe the solution to a system of linear equations with Cramer's rule. They can be used to find the eigenvalues of the matrix <math>A<math> through the characteristic polynomial <math>p(x) = \det(xI - A)<math> (where <math>I<math> is the identity matrix of the same size as <math>A<math>).
One often thinks of the determinant as assigning a number to every sequence of <math>n<math> vectors in <math>\Bbb{R}^n<math>, by using the square matrix whose columns are the given vectors. With this understanding, the sign of the determinant of a basis can be used to define the notion of orientation in Euclidean spaces. The determinant of a set of vectors is positive if the vectors form a right-handed coordinate system, and negative if left-handed.
Determinants are used to calculate volumes in vector calculus: the absolute value of the determinant of real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if the linear map <math>f: \Bbb{R}^n \rightarrow \Bbb{R}^n<math> is represented by the matrix <math>A<math>, and <math>S<math> is any measurable subset of <math>\Bbb{R}^n<math>, then the volume of <math>f(S)<math> is given by <math>\left| \det(A) \right| \times \operatorname{volume}(S)<math>. More generally, if the linear map <math>f: \Bbb{R}^n \rightarrow \Bbb{R}^m<math> is represented by the <math>m<math>-by-<math>n<math> matrix <math>A<math>, and <math>S<math> is any measurable subset of <math>\Bbb{R}^{n}<math>, then the <math>n<math>-dimensional volume of <math>f(S)<math> is given by <math>\sqrt{\det(A^t A)} \times \operatorname{volume}(S)<math>. By calculating the volume of the tetrahedron bounded by four points, they can be used to identify skew lines.
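The volume-scaling property can be checked numerically with NumPy; the matrix below is an arbitrary example, not one from the text:

```python
import numpy as np

# Arbitrary example matrix (upper triangular, so its determinant
# is the product of the diagonal entries: 2 * 3 * 1 = 6).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

# The unit cube has volume 1, so its image under A is a
# parallelepiped of volume |det(A)|.
scale = abs(np.linalg.det(A))
```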
General definition and computation
Suppose <math>A = (A_{i,j}) \,<math> is a square matrix.
If <math>A<math> is a 1-by-1 matrix, then <math>\det(A) = A_{1,1} \,<math>
If <math>A<math> is a 2-by-2 matrix, then <math>\det(A) = A_{1,1}A_{2,2} - A_{2,1}A_{1,2} \,<math>
For a 3-by-3 matrix <math>A<math>, the formula is more complicated:
 <math>
\det(A) = A_{1,1}A_{2,2}A_{3,3} + A_{1,3}A_{2,1}A_{3,2} + A_{1,2}A_{2,3}A_{3,1}
- A_{1,3}A_{2,2}A_{3,1} - A_{1,1}A_{2,3}A_{3,2} - A_{1,2}A_{2,1}A_{3,3}
\,<math>
For a general <math>n<math>-by-<math>n<math> matrix, the determinant was defined by Gottfried Leibniz with what is now known as the Leibniz formula:
 <math>\det(A) = \sum_{\sigma \in S_n}
\sgn(\sigma) \prod_{i=1}^n A_{i, \sigma(i)}<math>
The sum is computed over all permutations <math>\sigma<math> of the numbers {1,2,...,n}, and <math>\sgn(\sigma)<math> denotes the signature of the permutation <math>\sigma<math>: +1 if <math>\sigma<math> is an even permutation and −1 if it is odd. See even and odd permutations for details.
This formula contains <math>n!<math> (factorial) summands and is therefore impractical for calculating determinants of large matrices.
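The Leibniz formula translates directly into code. The Python sketch below (function names are ours) enumerates all <math>n!<math> permutations, which is exact but, as just noted, impractical for large <math>n<math>:

```python
from itertools import permutations
from math import prod

def sign(sigma):
    # Signature via inversion count: +1 for an even permutation,
    # -1 for an odd one (a sketch, not optimized).
    inv = sum(1 for i in range(len(sigma))
                for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return -1 if inv % 2 else 1

def det_leibniz(A):
    # Leibniz formula: sum over all n! permutations of 0..n-1.
    n = len(A)
    return sum(sign(s) * prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n)))
```

On integer matrices this returns the determinant exactly, with no rounding error.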
In general, determinants can be computed with the Gauss algorithm using the following rules:
 If <math>A<math> is a triangular matrix, i.e. <math>A_{i,j} = 0 \,<math> whenever <math>i > j<math>, then <math>\det(A) = A_{1,1} A_{2,2} \cdots A_{n,n} \,<math>
 If <math>B<math> results from <math>A<math> by exchanging two rows or columns, then <math>\det(B) = -\det(A) \,<math>
 If <math>B<math> results from <math>A<math> by multiplying one row or column by the number <math>c<math>, then <math>\det(B) = c\,\det(A) \,<math>
 If <math>B<math> results from <math>A<math> by adding a multiple of one row to another row, or a multiple of one column to another column, then <math>\det(B) = \det(A) \,<math>
Explicitly, starting out with some matrix, use the last three rules to convert it into a triangular matrix, then use the first rule to compute its determinant.
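The elimination procedure just described can be sketched in Python. This is a teaching sketch with no pivot-size strategy for numerical stability, and the function name is ours:

```python
def det_gauss(A):
    """Determinant via Gaussian elimination, tracking how each
    row operation changes the determinant (teaching sketch)."""
    M = [row[:] for row in A]          # work on a copy
    n = len(M)
    det = 1.0
    for k in range(n):
        # Find a nonzero pivot in column k; a row swap flips the sign.
        p = next((r for r in range(k, n) if M[r][k] != 0), None)
        if p is None:
            return 0.0                 # singular matrix
        if p != k:
            M[k], M[p] = M[p], M[k]
            det = -det
        # Adding a multiple of the pivot row leaves det unchanged.
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= f * M[k][c]
        det *= M[k][k]     # triangular rule: product of the diagonal
    return det
```

Unlike the Leibniz formula, this runs in <math>O(n^3)<math> time.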
It is also possible to expand a determinant along a row or column using Laplace's formula, which is efficient for relatively small matrices. To do this along row <math>i<math>, say, we write
 <math>\det(A) = \sum_{j=1}^n A_{i,j}C_{i,j}<math>
where the <math>C_{i,j}<math> represent the matrix cofactors, i.e. <math>C_{i,j}<math> is <math>(-1)^{i+j}<math> times the determinant of the matrix that results from <math>A<math> by removing the <math>i<math>th row and the <math>j<math>th column.
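Cofactor expansion can likewise be written as a short recursion; the sketch below (function name ours) expands along the first row and, like the Leibniz formula, takes <math>O(n!)<math> time, so it is only suitable for small matrices:

```python
def det_laplace(A):
    """Cofactor (Laplace) expansion along the first row;
    fine for small matrices, O(n!) in general."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        # (-1)**j is the cofactor sign (-1)^{1+(j+1)} in 1-based terms.
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total
```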
Example
Suppose we want to compute the determinant of
 <math>A = \begin{bmatrix}-2&2&-3\\
-1& 1& 3\\ 2 &0 &-1\end{bmatrix}<math> We can go ahead and use the Leibniz formula directly:
 <math>\det(A)=(-2)\cdot 1 \cdot (-1) + (-3)\cdot 0 \cdot (-1) + 2\cdot 3\cdot 2 - (-3)\cdot 1 \cdot 2 - (-2)\cdot 3 \cdot 0 - 2\cdot (-1) \cdot (-1)<math>
 <math>=2+0+12-(-6)-0-2 = 18.\;<math>
Alternatively, we can use Laplace's formula to expand the determinant along a row or column. It is best to choose a row or column with many zeros, so we will expand along the second column:
 <math>\det(A)=(-1)^{1+2}\cdot 2 \cdot \det \begin{bmatrix}-1&3\\
2 &-1\end{bmatrix} + (-1)^{2+2}\cdot 1 \cdot \det \begin{bmatrix}-2&-3\\ 2&-1\end{bmatrix}<math>
 <math>=(-2)\cdot((-1)\cdot(-1)-2\cdot3)+1\cdot((-2)\cdot(-1)-2\cdot(-3)) = (-2)(-5)+8 = 18.<math>
A third way (and the method of choice for larger matrices) would involve the Gauss algorithm. When doing computations by hand, one can often shorten things dramatically by smartly adding multiples of columns or rows to other columns or rows; this doesn't change the value of the determinant, but may create zero entries which simplify the subsequent calculations. In our example, adding the second column to the first one is especially useful:
 <math>\begin{bmatrix}0&2&-3\\
0 &1 &3\\ 2 &0 &-1\end{bmatrix}<math> and this determinant can be quickly expanded along the first column:
 <math>\det(A)=(-1)^{3+1}\cdot 2\cdot \det \begin{bmatrix}2&-3\\
1&3\end{bmatrix}<math>
 <math>=2\cdot(2\cdot3-1\cdot(-3)) = 2\cdot 9 = 18.<math>
Properties
The determinant is a multiplicative map in the sense that
 <math>\det(AB) = \det(A)\det(B) \,<math> for all <math>n<math>-by-<math>n<math> matrices <math>A<math> and <math>B<math>.
This is generalized by the Cauchy-Binet formula to products of non-square matrices.
It is easy to see that <math>\det(rI_n) = r^n \,<math> and thus
 <math>\det(rA) = r^n \det(A) \,<math> for all <math>n<math>-by-<math>n<math> matrices <math>A<math> and all scalars <math>r<math>.
The matrix <math>A<math> (over the real or complex numbers, or some other field) is invertible if and only if det(A)≠0; in this case we have
 <math>\det(A^{-1}) = \det(A)^{-1} \,<math>
Expressed differently: the vectors <math>v_1,\ldots,v_n<math> in <math>\Bbb{R}^n<math> form a basis if and only if <math>\det(v_1,\ldots,v_n)<math> is nonzero.
A matrix and its transpose have the same determinant:
 <math>\det(A) = \det(A^T) \,<math>.
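The properties above are easy to spot-check numerically with NumPy; the random 4-by-4 matrices below (almost surely invertible) and the seed are arbitrary choices:

```python
import numpy as np

# Spot-check of the determinant properties on arbitrary random matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
d = np.linalg.det

assert np.isclose(d(A @ B), d(A) * d(B))      # multiplicativity
assert np.isclose(d(3 * A), 3**4 * d(A))      # det(rA) = r^n det(A)
assert np.isclose(d(np.linalg.inv(A)), 1 / d(A))
assert np.isclose(d(A.T), d(A))               # transpose invariance
```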
If <math>A<math> and <math>B<math> are similar, i.e. if there exists an invertible matrix <math>X<math> such that <math>A = X^{-1} B X \,<math>, then by the multiplicative property,
 <math>\det(A) = \det(B) \,<math>
This means that the determinant is a similarity invariant. Because of this, the determinant of a linear transformation T : V → V on a finite-dimensional vector space V is independent of the chosen basis for V. The relationship is one-way, however: there exist matrices which have the same determinant but are not similar.
If <math>A<math> is a square <math>n<math>-by-<math>n<math> matrix with real or complex entries and if <math>\lambda_1,\ldots,\lambda_n<math> are the (complex) eigenvalues of <math>A<math> listed according to their algebraic multiplicities, then
 <math>\det(A) = \lambda_{1}\lambda_{2} \cdots \lambda_{n}<math>
This follows from the fact that <math>A<math> is always similar to its Jordan normal form, an upper triangular matrix with the eigenvalues on the main diagonal.
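A small numerical check of the eigenvalue product; the 2-by-2 symmetric matrix below is an arbitrary example:

```python
import numpy as np

# Arbitrary example: the eigenvalue product should equal
# det(A) = 2*3 - 1*1 = 5.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eigvals = np.linalg.eigvals(A)
assert np.isclose(np.prod(eigvals), np.linalg.det(A))
```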
From this connection between the determinant and the eigenvalues, one can derive a connection between the trace function, the exponential function, and the determinant:
 <math>\det(\exp(A)) = \exp(\operatorname{tr}(A))<math>.
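This identity can also be verified numerically. The sketch below rolls its own matrix exponential via a truncated Taylor series (adequate for small matrices of modest norm, not a production method) so that only NumPy is assumed; the example matrix is arbitrary:

```python
import numpy as np

def expm_series(A, terms=30):
    # Matrix exponential via truncated Taylor series -- a sketch,
    # fine for small matrices of modest norm.
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, terms):
        T = T @ A / k      # T holds A^k / k!
        E = E + T
    return E

# Arbitrary example: det(exp(A)) should equal exp(tr(A)) = e^3.
A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
assert np.isclose(np.linalg.det(expm_series(A)), np.exp(np.trace(A)))
```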
Derivative
The determinant of real square matrices is a polynomial function from <math>\Bbb{R}^{n \times n}<math> to <math>\Bbb{R}<math>, and as such is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:
 <math>d \,\det(A) = \operatorname{tr}(\operatorname{adj}(A) \,dA)<math>
where adj(A) denotes the adjugate of A. In particular, if A is invertible, we have
 <math>d \,\det(A) = \det(A) \,\operatorname{tr}(A^{-1} \,dA)<math>
or, more colloquially,
 <math>\det(A + X) - \det(A) \approx \det(A) \,\operatorname{tr}(A^{-1} X)<math>
if the entries in the matrix <math>X<math> are sufficiently small. The special case where <math>A<math> is equal to the identity matrix <math>I<math> yields
 <math>\det(I + X) \approx 1 + \operatorname{tr}(X)<math>.
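The first-order approximation is easy to confirm numerically; ε and X below are arbitrary small choices, and the discrepancy should shrink like ε²:

```python
import numpy as np

# First-order check of det(I + X) ~ 1 + tr(X) for a small arbitrary X.
eps = 1e-6
X = eps * np.array([[0.3, -0.7],
                    [0.2, 0.5]])
lhs = np.linalg.det(np.eye(2) + X)
rhs = 1 + np.trace(X)
# The error is of order eps**2, far below the tolerance used here.
assert abs(lhs - rhs) < 1e-10
```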
Generalizations and related functions
As was pointed out above, it is possible to unambiguously define the determinant of any linear map f : V → V, if V is a finite-dimensional vector space.
It makes sense to define the determinant for matrices whose entries come from any commutative ring. The computation rules, the Leibniz formula and the compatibility with matrix multiplication remain valid, except that now a matrix <math>A<math> is invertible if and only if <math>\det(A)<math> is an invertible element of the ground ring.
Abstractly, one may define the determinant as a certain antisymmetric multilinear map as follows: if <math>R<math> is a commutative ring and <math>M = R^n<math> denotes the free <math>R<math>-module with <math>n<math> generators, then
 <math>\det: M^n \rightarrow R<math>
is the unique map with the following properties:
 det is <math>R<math>linear in each of the <math>n<math> arguments.
 det is antisymmetric, meaning that if two of the <math>n<math> arguments are equal, then the determinant is zero.
 <math>\det(e_1,\ldots,e_n) = 1<math>, where <math>e_i<math> is that element of <math>M<math> which has a 1 in the <math>i<math>th coordinate and zeros elsewhere.
Linear algebraists prefer to use the multilinear map approach to define determinant, whereas combinatorialists may prefer the Leibniz formula. (Of course, even when using the above abstract approach, one has to use the Leibniz formula to show that such a multilinear map actually exists.)
The Pfaffian is an analog of the determinant for <math>2n\times 2n<math> antisymmetric matrices. It is a polynomial of degree <math>n<math>, and its square is equal to the determinant of the matrix.
There is no direct generalization of determinants, or of the notion of volume, to spaces of infinite dimension. There are various approaches possible, including the use of the extension of the trace of a matrix, and functional determinants.
History
Historically, determinants were considered before matrices. Originally, a determinant was defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is nonzero). In this sense, two-by-two determinants were considered by Cardano at the end of the 16th century, and larger ones by Leibniz about 100 years later. Following him, Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrent law was first announced by Bezout (1764).
It was Vandermonde (1771) who first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order. Lagrange was the first to apply determinants to questions outside elimination theory; he proved many special cases of general identities.
Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word determinants (Laplace had used resultant), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.
The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of <math>m<math> columns and <math>n<math> rows, which for the special case of <math>m = n<math> reduces to the multiplication theorem. On the same day (Nov. 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy-Binet formula.) In this he used the word determinant in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality.
The next important figure was Jacobi (from 1827). He early used the functional determinant which Sylvester later called the Jacobian, and in his memoirs in Crelle for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work.
The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886) and Weld (1893) published treatises.