Eigenvalue
In mathematics, a number is called an eigenvalue of a matrix if there exists a nonzero vector such that the matrix times the vector is equal to the same vector multiplied by the eigenvalue. This vector is then called an eigenvector associated with the eigenvalue.
The eigenvalues of a matrix or a differential operator often have important physical significance. In classical mechanics the eigenvalues of the governing equations typically correspond to the natural frequencies of vibration (see resonance). In quantum mechanics, the eigenvalues of an operator corresponding to some observable variable are those values of the observable that have non-zero probability of occurring.
The word eigenvalue comes from the German Eigenwert which means "proper or characteristic value."
Definition
Formally, we define eigenvectors and eigenvalues as follows. Let A be an n-by-n matrix of real or complex numbers (see below for generalizations). We say that λ ∈ C is an eigenvalue of A with eigenvector <math>v \in \mathbb{C}^n</math> if v is nonzero and
- <math>Av = \lambda v.</math>
The spectrum of A, denoted σ(A), is the set of all eigenvalues.
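To make the definition concrete, here is a minimal numerical check of the defining equation; the sketch uses NumPy, and the 2-by-2 matrix is chosen only for illustration:

```python
import numpy as np

# An arbitrary 2-by-2 matrix, chosen only for illustration.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# np.linalg.eig returns the spectrum and a matrix whose columns
# are corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

for lam, v in zip(eigenvalues, eigenvectors.T):
    # The defining property: A v equals lambda v for a nonzero v.
    assert np.allclose(A @ v, lam * v)

print("spectrum:", eigenvalues)   # 2.0 and 3.0 for this matrix
```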
Computing eigenvalues
Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.
Symbolic computations using the characteristic polynomial
The eigenvalues of a matrix are the zeros of its characteristic polynomial. Indeed, if λ is an eigenvalue of A with eigenvector v, then <math>(A - \lambda I)v = 0</math>, where I denotes the identity matrix. This is only possible if the determinant of <math>A - \lambda I</math> vanishes. But the characteristic polynomial is defined to be <math>p_A(\lambda) = \det(A - \lambda I)</math>.
It follows that we can compute all the eigenvalues of a matrix A by solving the equation <math>p_A(\lambda) = 0</math>. The fundamental theorem of algebra says that this equation has at least one solution, so every matrix has at least one eigenvalue.
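As a small illustration of the symbolic route, the following sketch uses SymPy; the 2-by-2 matrix is chosen only for illustration:

```python
import sympy as sp

lam = sp.symbols('lambda')

A = sp.Matrix([[2, 1],
               [1, 2]])

# The characteristic polynomial det(A - lambda*I).
p = (A - lam * sp.eye(2)).det()

print(sp.factor(p))       # (lambda - 1)*(lambda - 3)
print(sp.solve(p, lam))   # [1, 3] -- the eigenvalues of A
```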
Numerical computations
Main article: eigenvalue algorithm.
The Abel-Ruffini theorem implies that there is no general formula in radicals for the zeros of the characteristic polynomial once its degree exceeds four. Therefore, general eigenvalue algorithms are iterative. The easiest method is power iteration: we choose a random vector v and compute Av, <math>A^2v</math>, <math>A^3v</math>, ... The direction of this sequence almost always converges to an eigenvector corresponding to the dominant eigenvalue. The algorithm is simple, but not very useful by itself; however, popular methods such as the QR algorithm are based on it.
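A bare-bones sketch of power iteration in NumPy follows; the matrix is the one from the example below, and the step count is an arbitrary choice:

```python
import numpy as np

def power_iteration(A, num_steps=100):
    """Approximate a dominant eigenpair of A by repeated multiplication."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_steps):
        v = A @ v
        v /= np.linalg.norm(v)   # renormalize to keep the iterate bounded
    # With ||v|| = 1, the Rayleigh quotient v^T A v estimates the eigenvalue.
    return v @ A @ v, v

A = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 1.0,  0.0],
              [-1.0, 0.0,  1.0]])
lam, v = power_iteration(A)
print(lam)   # approximately 2, the dominant eigenvalue
```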
Example
Let us determine the eigenvalues of the matrix
- <math>
A = \begin{bmatrix}
0 & 1 & -1 \\ 1 & 1 & 0 \\ -1 & 0 & 1
\end{bmatrix}.
</math>
We first compute the characteristic polynomial of A:
- <math>
p(\lambda) = \det(A - \lambda I) = \det \begin{bmatrix}
-\lambda & 1 & -1 \\ 1 & 1-\lambda & 0 \\ -1 & 0 & 1-\lambda
\end{bmatrix} = -\lambda^3 + 2\lambda^2 + \lambda - 2.
</math>
This polynomial factorizes as <math>p(\lambda) = -(\lambda - 2)(\lambda - 1)(\lambda + 1)</math>. Therefore, the eigenvalues of A are 2, 1 and −1.
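The same answer can be checked numerically; a quick NumPy check:

```python
import numpy as np

A = np.array([[ 0.0, 1.0, -1.0],
              [ 1.0, 1.0,  0.0],
              [-1.0, 0.0,  1.0]])

# eigvals returns the zeros of the characteristic polynomial.
print(np.sort(np.linalg.eigvals(A)))   # approximately [-1, 1, 2]
```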
Multiplicity
The (algebraic) multiplicity of an eigenvalue λ of A is the order of λ as a zero of the characteristic polynomial of A; in other words, it is the number of times the corresponding linear factor occurs in a complete factorization of the characteristic polynomial. An n-by-n matrix has n eigenvalues, counted according to their algebraic multiplicity, because its characteristic polynomial has degree n.
An eigenvalue of algebraic multiplicity 1 is called a simple eigenvalue.
Occasionally, in an article on matrix theory, one may read a statement like
- "the eigenvalues of a matrix A are 4,4,3,3,3,2,2,1,"
meaning that the algebraic multiplicity of 4 is two, of 3 is three, of 2 is two and of 1 is one. This style is used because algebraic multiplicity is the key to many mathematical proofs in matrix theory.
The geometric multiplicity of an eigenvalue λ is the dimension of the associated eigenspace, which consists of all the eigenvectors associated with λ; in other words, it is the nullity of the matrix λI − A. The geometric multiplicity is less than or equal to the algebraic multiplicity.
Consider for example the matrix
- <math>\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.</math>
It has only one eigenvalue, namely λ = 1. The characteristic polynomial is <math>(\lambda-1)^2</math>, so this eigenvalue has algebraic multiplicity 2. However, the associated eigenspace is spanned by <math>(1, 0)^T</math>, so the geometric multiplicity is only 1.
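The two multiplicities can also be read off computationally; a sketch using SymPy:

```python
import sympy as sp

A = sp.Matrix([[1, 1],
               [0, 1]])

# eigenvects() yields (eigenvalue, algebraic multiplicity, eigenspace basis).
for lam, alg_mult, basis in A.eigenvects():
    print(lam, alg_mult, len(basis))   # prints: 1 2 1
```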
Properties
The spectrum is invariant under similarity transformations: the matrices A and <math>P^{-1}AP</math> have the same eigenvalues for any matrix A and any invertible matrix P. The spectrum is also invariant under transposition: the matrices A and <math>A^T</math> have the same eigenvalues.
A matrix is invertible if and only if zero is not an eigenvalue of the matrix.
A matrix is diagonalizable if and only if the algebraic and geometric multiplicities coincide for all its eigenvalues. In particular, an n-by-n matrix with n distinct eigenvalues is diagonalizable.
The location of the spectrum is often restricted if the matrix has a special form:
- All eigenvalues of a Hermitian matrix (<math>A = A^*</math>) are real; a numerical check follows this list. Furthermore, all eigenvalues of a positive-definite matrix (<math>v^*Av > 0</math> for all nonzero vectors v) are positive.
- All eigenvalues of a skew-Hermitian matrix (<math>A = -A^*</math>) are purely imaginary.
- All eigenvalues of a unitary matrix (<math>A^{-1} = A^*</math>) have absolute value one.
- The eigenvalues of a triangular matrix are the entries on the main diagonal. In particular, this holds for diagonal matrices.
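For the Hermitian case, NumPy's specialized routine eigvalsh returns the (real) eigenvalues directly; a small check with an arbitrarily generated matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a Hermitian matrix from an arbitrary complex matrix X.
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = X + X.conj().T   # H equals its own conjugate transpose

# eigvalsh assumes Hermitian input and returns real eigenvalues.
print(np.linalg.eigvalsh(H))
```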
The trace of a matrix equals the sum of its eigenvalues, and its determinant equals the product of its eigenvalues, in both cases counted according to algebraic multiplicity.
Suppose that A is an m-by-n matrix, with m ≤ n, and that B is an n-by-m matrix. Then the n-by-n matrix BA has the same eigenvalues as the m-by-m matrix AB, plus n − m additional eigenvalues equal to zero.
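Both facts are easy to verify numerically; a sketch with arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
M = rng.standard_normal((4, 4))
w = np.linalg.eigvals(M)
assert np.isclose(np.trace(M), w.sum())
assert np.isclose(np.linalg.det(M), w.prod())

# With A m-by-n and B n-by-m (m <= n), BA has the eigenvalues of AB
# together with n - m extra zeros.
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))
print(np.linalg.eigvals(A @ B))   # two eigenvalues
print(np.linalg.eigvals(B @ A))   # the same two, plus one near zero
```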
Extensions and generalizations
Eigenvalues of an operator
Suppose we have a linear operator A mapping the vector space V to itself. As in the matrix case, we say that λ ∈ C is an eigenvalue of A if there exists a nonzero v ∈ V such that Av = λv.
Suppose now that A is a bounded linear operator on a Banach space V. We say that λ ∈ C is a spectral value of A if the operator <math>A - \lambda I</math> is not invertible, where I denotes the identity operator. Note that by the closed graph theorem, if a bounded operator has an inverse, the inverse is necessarily bounded. The set of all spectral values is the spectrum of A.
If V is finite dimensional, then the spectrum of A is the same as the set of eigenvalues of A. This follows from the fact that on finite-dimensional spaces injectivity of a linear operator A is equivalent to surjectivity of A. However, an operator on an infinite-dimensional space may have no eigenvalues at all, while it always has spectral values. For example, the right shift operator on the Hilbert space <math>\ell^2</math>, which maps <math>(x_1, x_2, \dots)</math> to <math>(0, x_1, x_2, \dots)</math>, has no eigenvalues, yet its spectrum is the entire closed unit disc.
Eigenvalues of a matrix with entries from a ring
Suppose that A is a square matrix with entries in a ring R. An element λ ∈ R is called a right eigenvalue of A if there exists a nonzero column vector x such that Ax=λx, or a left eigenvalue if there exists a nonzero row vector y such that yA=yλ.
If R is commutative, the left eigenvalues of A are exactly the right eigenvalues of A and are simply called eigenvalues. If R is not commutative, as for the quaternions, they may differ.
Eigenvalues of a graph
An eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix <math>I - T^{-1/2}AT^{-1/2}</math>, where T is a diagonal matrix holding the degree of each vertex, and in <math>T^{-1/2}</math> the entry 0 is substituted for <math>0^{-1/2}</math> at any vertex of degree zero.
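A small worked example, sketched in NumPy for the path graph on three vertices (every degree is nonzero there, so the caveat about <math>0^{-1/2}</math> does not arise):

```python
import numpy as np

# Adjacency matrix of the path graph 0 -- 1 -- 2.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

# The degrees sit on the diagonal of T; form I - T^(-1/2) A T^(-1/2).
degrees = A.sum(axis=1)
T_inv_sqrt = np.diag(1.0 / np.sqrt(degrees))
L = np.eye(3) - T_inv_sqrt @ A @ T_inv_sqrt

print(np.linalg.eigvalsh(A))   # adjacency spectrum: -sqrt(2), 0, sqrt(2)
print(np.linalg.eigvalsh(L))   # Laplacian spectrum: 0, 1, 2
```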
Generalized eigenvalue problem
A generalized eigenvalue problem is of the form
- <math>Av = \lambda Bv \quad \quad (1)</math>
where A and B are matrices (with complex entries). The generalized eigenvalues λ can be obtained by solving the equation
- <math>\det(A - \lambda B) = 0.</math>
If B is invertible, then problem (1) can be written in the form
- <math>B^{-1}Av = \lambda v \quad \quad (2)</math>
which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally.
If A and B are symmetric matrices with real entries and B is in addition positive definite, then problem (1) has only real eigenvalues. This would not have been easy to see from the equivalent formulation (2), because the matrix <math>B^{-1}A</math> is not necessarily symmetric even if A and B are.
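SciPy's general-purpose routine accepts the pair (A, B) directly, avoiding the inversion; a minimal sketch, with arbitrarily chosen matrices (B positive definite):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# scipy.linalg.eig solves A v = lambda B v without forming B^{-1} A.
w, v = eig(A, B)
print(w.real)   # real generalized eigenvalues: A, B symmetric and B > 0
```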