Orthogonal matrix
In linear algebra, an orthogonal matrix is a square matrix G whose transpose is its inverse, i.e.,
- <math>GG^T=G^T G=I_n.</math>
This definition can be given for matrices with entries from any field, but the most common case is that of matrices with real entries, and only that case will be considered in the rest of this article.
A real square matrix is orthogonal if and only if its columns form an orthonormal basis of Rn with the ordinary Euclidean dot product, which is the case if and only if its rows form an orthonormal basis of Rn.
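For example, the 2-by-2 rotation matrix through an angle θ is orthogonal: its columns are orthonormal, and its transpose is its inverse,
- <math> G = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad GG^T = \begin{bmatrix} \cos^2\theta+\sin^2\theta & 0 \\ 0 & \cos^2\theta+\sin^2\theta \end{bmatrix} = I_2. </math>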
Geometrically, orthogonal matrices describe linear transformations of Rn which preserve angles and lengths, such as rotations and reflections. They are compatible with the Euclidean inner product in the following sense: if G is orthogonal and x and y are vectors in Rn, then
- <math>\langle Gx,Gy\rangle=\langle x,y \rangle.</math>
Conversely, if V is any finite-dimensional real inner product space and f : V → V is a linear map with
- <math>\langle f(x),f(y)\rangle=\langle x,y\rangle</math>
for all elements x, y of V, then f is described by an orthogonal matrix with respect to any orthonormal basis of V.
The inverse of every orthogonal matrix is again orthogonal, as is the matrix product of two orthogonal matrices. This shows that the set of all n×n orthogonal matrices forms a group. It is a Lie group of dimension n(n − 1)/2 and is called the orthogonal group, denoted by O(n).
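Both closure statements, under products and under inverses, follow directly from the definition: if G and H are orthogonal, then
- <math> (GH)(GH)^T = GHH^TG^T = GG^T = I_n, </math>
- <math> (G^{-1})(G^{-1})^T = G^T(G^T)^T = G^TG = I_n. </math>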
The determinant of any orthogonal matrix is 1 or −1. That can be shown as follows:
- <math>1=\det(I)=\det(GG^T)=\det(G)\det(G^T)=(\det(G))^2.</math>
The orthogonal matrices with determinant 1 correspond to proper rotations and those with determinant −1 to improper rotations. The set of all orthogonal matrices whose determinant is 1 is a subgroup of O(n) of index 2, the special orthogonal group SO(n).
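For example, the rotation matrix above is a proper rotation, while the reflection matrices used later in this article have determinant −1:
- <math> \det\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = \cos^2\theta+\sin^2\theta = 1, \qquad \det\begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} = -1. </math>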
Orthogonal matrices preserve lengths, and hence all their eigenvalues have absolute value 1, i.e., they lie on the unit circle centered at 0 in the complex plane. Eigenvectors for different eigenvalues are orthogonal.
If Q is orthogonal, then one can always find an orthogonal matrix P such that
- <math>P^{T}QP = \begin{bmatrix} \begin{matrix}R_1 & & \\ & \ddots & \\ & & R_k\end{matrix} & 0 \\ 0 & \begin{matrix}\pm 1 & & \\ & \ddots & \\ & & \pm 1\end{matrix} \end{bmatrix}</math>
where the matrices R1,...,Rk are 2-by-2 rotation matrices. Intuitively, this result means that every orthogonal matrix describes a combination of rotations and reflections. The matrices R1,...,Rk correspond to the non-real eigenvalues of Q.
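For example, the rotation matrix through the angle θ has the complex conjugate eigenvalues
- <math> \lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}, </math>
which lie on the unit circle; unless θ is a multiple of π it has no real eigenvectors, which is why the blocks R1,...,Rk cannot in general be reduced further over the real numbers.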
If A is an arbitrary m-by-n matrix of rank n, we can always write
- <math> A = Q \begin{pmatrix} R \\ 0 \end{pmatrix} </math>
where Q is an orthogonal m-by-m matrix and R is an upper triangular n-by-n matrix with positive main diagonal entries. This is known as a QR decomposition of A and can be proven by applying the Gram-Schmidt process to the columns of A. It is useful for numerically solving systems of linear equations and least squares problems.
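As a numerical illustration, here is a minimal sketch assuming NumPy is available. The function numpy.linalg.qr with mode='complete' returns an orthogonal m-by-m factor Q and the m-by-n block consisting of R stacked on zeros, although it does not force the diagonal of R to be positive, so the signs may differ from the convention above:
 import numpy as np
 # A 4-by-2 matrix of rank 2.
 A = np.array([[1.0, 2.0],
               [0.0, 1.0],
               [1.0, 0.0],
               [1.0, 1.0]])
 # 'complete' mode: Q is 4-by-4 and orthogonal, R_full is the 4-by-2 block (R over 0).
 Q, R_full = np.linalg.qr(A, mode='complete')
 print(np.allclose(Q @ Q.T, np.eye(4)))   # True: Q is orthogonal
 print(np.allclose(Q @ R_full, A))        # True: A = Q (R over 0)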
The complex analogues of orthogonal matrices are the unitary matrices.
Algorithm for adjusting a matrix so that it is orthogonal
In numerical computations the product of two orthogonal matrices may fail to be exactly orthogonal due to floating-point error. The following algorithm will clean up a nearly orthogonal matrix (assuming that the columns of the matrix are unit vectors):
 for col1 = first column to last column
     for col2 = col1 + 1 to last column
         let dp = dot product of col1 and col2
         for row = 1 to number of rows
             mtx(row, col2) -= mtx(row, col1) * dp
         let len = sqrt(dot product of col2 and col2)
         for row = 1 to number of rows
             mtx(row, col2) /= len
The first inner loop makes col2 orthogonal to col1 by subtracting from col2 its component in col1's direction. Doing this never disturbs the earlier adjustments, because col1 and col2 are already at right angles to all prior columns. The subtraction shortens col2, so it has to be re-normalised, which is what the second inner loop does.
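For concreteness, the pseudocode could be translated into Python roughly as follows (a sketch assuming NumPy, with the matrix stored as a 2-D floating-point array whose columns are unit vectors; the name reorthogonalize is chosen here only for illustration):
 import numpy as np
 def reorthogonalize(mtx):
     # Sweep over pairs of columns, exactly as in the pseudocode above:
     # make each later column orthogonal to the current one, then re-normalise it.
     n_cols = mtx.shape[1]
     for col1 in range(n_cols):
         for col2 in range(col1 + 1, n_cols):
             dp = np.dot(mtx[:, col1], mtx[:, col2])
             mtx[:, col2] -= dp * mtx[:, col1]  # remove the component along col1
             mtx[:, col2] /= np.sqrt(np.dot(mtx[:, col2], mtx[:, col2]))  # re-normalise
     return mtx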
Matrix representation of Clifford algebras
This section is meant as a simple introduction. There is a second geometrical meaning for orthogonal matrices: in matrix representations of Clifford algebras, some orthogonal matrices are regarded as basis vectors. Here is a simple example.
Normally in R2 we have the basis vectors e1 = [1 0] and e2 = [0 1], so that a point in this plane is
- [x y] = x·[1 0] + y·[0 1]
The orthogonal matrix
- <math> \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} </math>
represents a reflection in the line y = x (the bisecting line), because the two basis vectors are exchanged:
- <math> \begin{bmatrix} x&y \end{bmatrix} \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} y&x \end{bmatrix} </math>
The orthogonal matrix
- <math> \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} </math>
represents a reflection in the x-axis, because the point [x y] has [x −y] as its image:
- <math> \begin{bmatrix} x&y \end{bmatrix} \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} = \begin{bmatrix} x&-y \end{bmatrix} </math>
These two reflections anticommute (the result changes sign if the order is reversed):
- <math> \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} = \begin{bmatrix} 0&-1 \\ 1&0 \end{bmatrix}, </math>
which is a rotation, whereas
- <math> \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} = \begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix}. </math>
Suppose we now regard these matrices no longer as linear transformations but as basis vectors of a two-dimensional plane:
- <math> e_1 = \begin{bmatrix} 0&1 \\ 1&0 \end{bmatrix} </math>
- <math> e_2 = \begin{bmatrix} 1&0 \\ 0&-1 \end{bmatrix} </math>
- <math> \Rightarrow e_1^2 = e_2^2 = I \land e_1 e_2 = - e_2 e_1 </math>
A point with coordinates (x,y) would in this plane be represented by the matrix
- <math> \begin{bmatrix} y&x \\ x&-y \end{bmatrix} </math>
The square of this matrix is the square of its norm (the inner product of the vector with itself) times the identity matrix:
- <math> \begin{bmatrix} y&x \\ x&-y \end{bmatrix} \begin{bmatrix} y&x \\ x&-y \end{bmatrix} = \begin{bmatrix} x^2+y^2 & 0 \\ 0 & x^2+y^2 \end{bmatrix} </math>
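Writing the matrix as x·e1 + y·e2, the same result also follows from the relations e1² = e2² = I and e1e2 = −e2e1 given above:
- <math> (x e_1 + y e_2)^2 = x^2 e_1^2 + xy(e_1 e_2 + e_2 e_1) + y^2 e_2^2 = (x^2+y^2)I. </math>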
If we now define the inner product as
- <math> A \cdot B = \frac{1}{2}(AB + BA), </math>
then, because the basis vectors anticommute, we see that
- <math> e_1 \cdot e_2 = 0.</math>
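This can also be checked directly from the two products computed above:
- <math> e_1 \cdot e_2 = \frac{1}{2}\left( \begin{bmatrix} 0&-1 \\ 1&0 \end{bmatrix} + \begin{bmatrix} 0&1 \\ -1&0 \end{bmatrix} \right) = 0. </math>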
The matrices e1 and e2 are orthogonal in both senses:
- they are orthogonal matrices as defined in this article;
- they represent orthogonal basis vectors (at a right angle to each other), because they anticommute.
See more at representations of Clifford algebras.