Transpose
- See transposition for meanings of this term in telecommunication and music.
In mathematics, and in particular linear algebra, the transpose of a matrix is another matrix, produced by turning its rows into columns and vice versa. Informally, the transpose of a square matrix is obtained by reflecting it over the main diagonal (which runs from the top left to the bottom right of the matrix). The transpose of the matrix A is written as A<sup>tr</sup>, <sup>t</sup>A, A′, or A<sup>T</sup>, the last of these notations being the one preferred in Wikipedia.
Formally, the transpose of the m-by-n matrix A is the n-by-m matrix A<sup>T</sup> defined by A<sup>T</sup>[i, j] = A[j, i] for 1 ≤ i ≤ n and 1 ≤ j ≤ m.
For example,
- <math>\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} \quad\quad \mbox{and} \quad\quad \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}</math>
Properties
For any two m-by-n matrices A and B and every scalar c, we have (A + B)<sup>T</sup> = A<sup>T</sup> + B<sup>T</sup> and (cA)<sup>T</sup> = c(A<sup>T</sup>). This shows that the transpose is a linear map from the space of all m-by-n matrices to the space of all n-by-m matrices.
The transpose operation is self-inverse, i.e. taking the transpose of the transpose amounts to doing nothing: (A<sup>T</sup>)<sup>T</sup> = A.
If A is an m-by-n matrix and B an n-by-k matrix, then we have (AB)<sup>T</sup> = B<sup>T</sup>A<sup>T</sup>. Note that the order of the factors switches. From this one can deduce that a square matrix A is invertible if and only if A<sup>T</sup> is invertible, and in this case we have (A<sup>-1</sup>)<sup>T</sup> = (A<sup>T</sup>)<sup>-1</sup>.
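To see why the last identity holds (a short justification added here), transpose both sides of AA<sup>-1</sup> = I<sub>n</sub> using the product rule above:
- <math>I_n = I_n^{\mathrm{T}} = (A A^{-1})^{\mathrm{T}} = (A^{-1})^{\mathrm{T}} A^{\mathrm{T}} ,</math>
and likewise transposing A<sup>-1</sup>A = I<sub>n</sub> gives A<sup>T</sup>(A<sup>-1</sup>)<sup>T</sup> = I<sub>n</sub>, so (A<sup>-1</sup>)<sup>T</sup> is indeed the inverse of A<sup>T</sup>.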
The dot product of two vectors expressed as columns of their coordinates can be computed as
- <math>\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^{\mathrm{T}} \mathbf{b}</math>
where the product on the right is ordinary matrix multiplication.
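For example, with the column vectors a = (1, 2, 3)<sup>T</sup> and b = (4, 5, 6)<sup>T</sup> (an illustrative choice),
- <math>\mathbf{a}^{\mathrm{T}} \mathbf{b} = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = 1 \cdot 4 + 2 \cdot 5 + 3 \cdot 6 = 32 ,</math>
a 1-by-1 matrix that is identified with the scalar a · b.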
If A is an arbitrary m-by-n matrix with real entries, then A<sup>T</sup>A is a positive semidefinite matrix.
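One way to verify this (a brief justification added here): A<sup>T</sup>A is symmetric, because (A<sup>T</sup>A)<sup>T</sup> = A<sup>T</sup>(A<sup>T</sup>)<sup>T</sup> = A<sup>T</sup>A, and for every column vector x with n real entries
- <math>\mathbf{x}^{\mathrm{T}} (A^{\mathrm{T}} A) \mathbf{x} = (A\mathbf{x})^{\mathrm{T}} (A\mathbf{x}) = \| A\mathbf{x} \|^2 \ge 0 .</math>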
If A is an n-by-n matrix over some field, then A is similar to A<sup>T</sup>.
Further nomenclature
A square matrix whose transpose is equal to itself is called a symmetric matrix, i.e. A is symmetric iff:
- <math>A = A^{\mathrm{T}}</math>
A square matrix whose transpose is also its inverse is called an orthogonal matrix, i.e. G is orthogonal iff
- <math>G\, G^{\,\mathrm{T}} = G^{\,\mathrm{T}} G = I_n ,</math> where <math>I_n</math> is the identity matrix.
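A standard example (supplied here for illustration) is the 2-by-2 rotation matrix:
- <math>G = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} , \qquad G\, G^{\,\mathrm{T}} = \begin{bmatrix} \cos^2\theta + \sin^2\theta & 0 \\ 0 & \sin^2\theta + \cos^2\theta \end{bmatrix} = I_2 .</math>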
A square matrix whose transpose is equal to its negative is called skew-symmetric, i.e. A is skew-symmetric iff:
- <math>A = - A^{\mathrm{T}}</math>
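For instance (an example added for illustration), the matrix
- <math>A = \begin{bmatrix} 0 & a \\ -a & 0 \end{bmatrix}</math>
is skew-symmetric for every scalar a, since its transpose is
- <math>A^{\mathrm{T}} = \begin{bmatrix} 0 & -a \\ a & 0 \end{bmatrix} = -A .</math>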
The conjugate transpose of the complex matrix A, written as A*, is obtained by taking the transpose of A and then taking the complex conjugate of each entry.
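For example (a matrix chosen here for illustration),
- <math>\begin{bmatrix} 1 & 2+i \\ 3i & 4 \end{bmatrix}^{*} = \begin{bmatrix} 1 & -3i \\ 2-i & 4 \end{bmatrix} .</math>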
Transpose of linear maps
If f : V → W is a linear map between vector spaces V and W with dual spaces W* and V*, we define the transpose of f to be the linear map <sup>t</sup>f : W* → V* with
- <math>{}^t f (\phi) = \phi \circ f</math> for every <math>\phi</math> in W*.
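One immediate consequence (noted here as a remark): for composable linear maps f : U → V and g : V → W, the definition gives
- <math>{}^t (g \circ f) = {}^t f \circ {}^t g ,</math>
mirroring the matrix identity (AB)<sup>T</sup> = B<sup>T</sup>A<sup>T</sup> above.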
If the matrix A describes a linear map with respect to two bases, then the matrix A<sup>T</sup> describes the transpose of that linear map with respect to the dual bases. See dual space for more details on this.