Gaussian elimination

In mathematics, Gaussian elimination or Gauss-Jordan elimination, named after Carl Friedrich Gauss and Wilhelm Jordan (Gaussian elimination proper is often regarded as the front half of the complete Gauss-Jordan elimination), is an algorithm in linear algebra for determining the solutions of a system of linear equations, for determining the rank of a matrix, and for calculating the inverse of an invertible square matrix. When carried to completion on a matrix, the elimination produces what is called "reduced row echelon form".

The method is named after the mathematician Carl Friedrich Gauss, but it is far older: it appears in the important Chinese book Jiuzhang suanshu or The Nine Chapters on the Mathematical Art. The mathematician Liu Hui wrote a commentary on that work in A.D. 263; the book itself is older still.

Numerical analysis

The computational complexity of Gaussian elimination is O(n³); that is, the number of operations required is proportional to n³ if the matrix size is n-by-n.

This algorithm can be used on a computer for systems with thousands of equations and unknowns. It is, however, numerically unstable, at least on pathological examples: floating-point errors committed throughout the computation accumulate and may yield results far from the correct solution. For this reason, and because of its prohibitive cost on very large matrices, iterative methods, generally based on finding a fixed point of a contraction mapping (see Banach fixed point theorem), are often preferred for larger systems; there also exist specific methods for even larger systems whose coefficients follow a regular pattern. See system of linear equations. Gaussian elimination remains a good method for systems of equations over a field where computations are exact, such as finite fields.


Example

Suppose you need to find numbers x, y, and z such that the following three equations are all simultaneously true:
<math>2x + y - z = 8</math>,
<math>-3x - y + 2z = -11</math>,
<math>-2x + y + 2z = -3</math>

This is called a system of linear equations for the unknowns x, y, and z. The equations are called linear because each term is either a constant or a constant times a single variable to the first power. The goal is to transform this system into an equivalent one from which the solution can easily be read off. The operations that transform a system of equations into another, while preserving its solutions, are as follows:

  • multiply or divide an equation by a non-zero number
  • switch two equations
  • add or subtract a (not necessarily integer) multiple of one equation to another one

The strategy is as follows: eliminate x from all but the first equation, eliminate y from all but the second equation, and then eliminate z from all but the third equation.

In our example, we eliminate x from the second equation by adding 3/2 times the first equation to the second, and then we eliminate x from the third equation by adding the first equation to the third. The result is:
<math>2x + y - z = 8</math>,
<math>\frac{1}{2}y + \frac{1}{2}z = 1</math>,
<math>2y + z = 5</math>

Now we eliminate y from the first equation by adding −2 times the second equation to the first, and then we eliminate y from the third equation by adding −4 times the second equation to the third:
<math>2x - 2z = 6</math>,
<math>\frac{1}{2}y + \frac{1}{2}z = 1</math>,
<math>-z = 1</math>

Finally, we eliminate z from the first equation by adding −2 times the third equation to the first, and then we eliminate z from the second equation by adding 1/2 times the third equation to the second:
<math>2x = 4</math>,
<math>\frac{1}{2}y = \frac{3}{2}</math>,
<math>-z = 1</math>

We can now read off the solution by dividing: x = 2, y = 3 and z = −1.
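The elimination steps above can be carried out mechanically. The following Python sketch (our own illustration, not part of the classical presentation) applies exactly the row operations described, using exact rational arithmetic so that no rounding occurs:

```python
from fractions import Fraction

# Augmented matrix [A | b] of the example system, with exact rational entries.
M = [[Fraction(v) for v in row] for row in
     [[2, 1, -1, 8],
      [-3, -1, 2, -11],
      [-2, 1, 2, -3]]]

def add_multiple(M, src, dst, factor):
    """Add factor times row src to row dst (one of the three allowed operations)."""
    M[dst] = [a + factor * b for a, b in zip(M[dst], M[src])]

# Eliminate x from the second and third equations.
add_multiple(M, 0, 1, Fraction(3, 2))
add_multiple(M, 0, 2, Fraction(1))
# Eliminate y from the first and third equations.
add_multiple(M, 1, 0, Fraction(-2))
add_multiple(M, 1, 2, Fraction(-4))
# Eliminate z from the first and second equations.
add_multiple(M, 2, 0, Fraction(-2))
add_multiple(M, 2, 1, Fraction(1, 2))

# Divide each row by its leading coefficient and read off x, y, z.
solution = [row[3] / row[i] for i, row in enumerate(M)]
print([int(v) for v in solution])  # [2, 3, -1]
```

Because the arithmetic is exact, the final divisions recover precisely x = 2, y = 3, z = −1.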

This algorithm works in general, also for larger systems. Sometimes it is necessary to switch two equations: for instance, if y had not occurred in the second equation after our first step above, we would have switched the second and third equations and then eliminated y from the first equation. It is possible that the algorithm gets "stuck": for instance, if y had not occurred in the second or the third equation after our first step above. In this case, the system does not have a unique solution.

In practice, one usually does not manipulate the systems as equations but instead works with the augmented matrix of coefficients, which is also better suited to computer manipulation. Written this way, our original system looks like

<math> \begin{pmatrix} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{pmatrix} </math>

and in the end we are left with

<math> \begin{pmatrix} 2 & 0 & 0 & 4 \\ 0 & 1/2 & 0 & 3/2 \\ 0 & 0 & -1 & 1 \end{pmatrix} </math>

or, after dividing the rows by 2, 1/2 and -1, respectively:

<math> \begin{pmatrix} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{pmatrix} </math>

In this case we apply matrix operations equivalent to the equation operations, namely:

  • Multiply or divide a row by a non-zero value.
  • Switch two rows.
  • Add or subtract a (not necessarily integer) multiple of one row to another row.

Row echelon and reduced row echelon form

Two special forms of a matrix are called row echelon form and reduced row echelon form. The definitions of these terms depend on the first non-zero entry in each row, called the row's leading coefficient. For a matrix to be in row echelon form (REF), each leading coefficient must equal 1. Furthermore, the rows must be arranged as follows: as one moves from the top to the bottom of the matrix, the leading coefficients move from left to right, and rows without any leading coefficient (that is, rows of zeros) appear last. The final matrix in the example above is in row echelon form, as is this matrix:

<math> \begin{pmatrix} 0 & 1 & 4 & 0 & -3 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} </math>

A matrix in REF is further said to be in reduced row echelon form (RREF) if each leading coefficient is the only non-zero entry in its column. The above matrix is not in RREF because of the -3 entry. If that entry were 0, the matrix would be in RREF.

Gaussian elimination amounts to using the matrix operations to obtain a matrix in RREF. All matrices have a unique equivalent matrix in RREF. When solving a linear system, sometimes there are no solutions and sometimes there are multiple solutions. Whichever is the case, converting the final matrix back into equations reveals the solutions (if any) of the system. For instance, if the above matrix represents a system in four variables, the third row corresponds to the equation 0 = 1, so the matrix represents a system with no solutions.

The following matrix in RREF represents a system with many solutions:

<math> \begin{pmatrix} 1 & 2/3 & 0 & 11 \\ 0 & 0 & 1 & 7 \\ \end{pmatrix} </math>

In equation form, we have x + (2/3)y = 11 and z = 7. Each value of y yields a different value of x.
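Here y can be chosen freely; writing y = t, the general solution is

<math> x = 11 - \frac{2}{3}t, \quad y = t, \quad z = 7 </math>

for any value of t.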

Other applications

Finding the inverse of a matrix

Suppose A is a square n-by-n matrix and you need to calculate its inverse. The n-by-n identity matrix is augmented to the right of A, which produces an n-by-2n matrix. Then you perform the Gaussian elimination algorithm on that matrix. When the algorithm finishes, the identity matrix will appear on the left; the inverse of A can then be found to the right of the identity matrix.

If the algorithm gets "stuck" as explained above, then A is not invertible.

In practice, inverting a matrix is rarely required. Most of the time, one is really after the solution of a particular system of linear equations.
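The augmentation procedure can be sketched as follows in Python, assuming exact rational arithmetic (the function name `invert` is ours):

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].

    Returns None if A is singular (the algorithm gets "stuck")."""
    n = len(A)
    # Augment A with the n-by-n identity matrix on the right.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a row at or below `col` with a non-zero entry in column `col`.
        pivot_row = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot_row is None:
            return None  # stuck: A is not invertible
        M[col], M[pivot_row] = M[pivot_row], M[col]       # switch rows
        p = M[col][col]
        M[col] = [x / p for x in M[col]]                  # scale pivot row to 1
        for r in range(n):                                # clear the column elsewhere
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # The right half of the augmented matrix now holds the inverse.
    return [row[n:] for row in M]
```

For example, `invert([[2, 1], [1, 1]])` returns the inverse [[1, −1], [−1, 2]] (as Fractions), while a singular input yields None.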

The general algorithm to compute ranks and bases

The Gaussian elimination algorithm can be applied to any m-by-n matrix A. If we get "stuck" in a given column, we move on to the next column. In this way, for example, any 6-by-9 matrix can be transformed into a matrix in reduced row echelon form such as

<math> \begin{pmatrix} 1 & * & 0 & 0 & * & * & 0 & * & 0 \\ 0 & 0 & 1 & 0 & * & * & 0 & * & 0 \\ 0 & 0 & 0 & 1 & * & * & 0 & * & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & * & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} </math>

(the *s are arbitrary entries). This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 non-zero rows in T; the vector space spanned by the columns of A has as basis its first, third, fourth, seventh and ninth columns (the columns containing the leading 1s in T), and the *s tell you how the other columns of A can be written as linear combinations of the basis columns. The non-zero rows of T likewise form a basis of the row space of A.

Gaussian elimination can be performed over any field. The three elementary operations used in Gaussian elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix A by invertible m-by-m matrices from the left. In general, we can say:

To every m-by-n matrix A over the field K there exist an invertible m-by-m matrix S and a uniquely determined reduced row-echelon matrix T such that A = ST.
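Concretely, if E_1, ..., E_k denote the elementary matrices of the row operations that bring A to T, then

<math> T = E_k \cdots E_1 A, \qquad A = ST \quad \mbox{with} \quad S = (E_k \cdots E_1)^{-1} = E_1^{-1} \cdots E_k^{-1}, </math>

and each E_i^{-1} is again an elementary matrix.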

S is the inverse of the product of the elementary matrices corresponding to the row operations performed; since the inverse of an elementary matrix is again elementary, S is itself a product of elementary matrices.

The formal algorithm to compute T from A follows. We write A[i,j] for the entry in row i, column j in matrix A. The transformation is performed "in place", meaning that the original matrix A is lost and successively replaced by T.

i = 1
j = 1
while (i <= m and j <= n) do
  # Find pivot in column j, starting in row i:
  max_val = A[i,j]
  max_ind = i
  for k=i+1 to m do
    val = A[k,j]
    if abs(val) > abs(max_val) then
      max_val = val
      max_ind = k
  if max_val <> 0 then
    switch rows i and max_ind
    divide row i by max_val
    for u = 1 to m do
       if u <> i then
          add - A[u,j] * row i to row u
    i = i + 1
  j = j + 1

This algorithm differs slightly from the one discussed earlier, because before eliminating a variable, it first exchanges rows to move the entry with the largest absolute value to the "pivot position". Such a pivoting procedure improves the numerical stability of the algorithm; some variants are also in use.

Note that if the field is the real or complex numbers and floating point arithmetic is in use, the comparison "max_val <> 0" should be replaced by "abs(max_val) > eps" for some small, machine-dependent constant eps, since it is rarely correct to compare floating point numbers to zero.
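Under these conventions, the pseudocode translates directly into Python (a sketch with 0-based indexing; the names `to_rref` and `eps` are ours). It returns the number of pivots found, which is the rank of the matrix:

```python
def to_rref(A, eps=1e-12):
    """Reduce matrix A (a list of lists of floats) to reduced row echelon
    form in place, with partial pivoting, and return its rank."""
    m, n = len(A), len(A[0])
    i = j = 0
    rank = 0
    while i < m and j < n:
        # Find the entry of largest absolute value in column j, rows i..m-1.
        max_ind = max(range(i, m), key=lambda k: abs(A[k][j]))
        if abs(A[max_ind][j]) > eps:
            A[i], A[max_ind] = A[max_ind], A[i]      # switch rows i and max_ind
            pivot = A[i][j]
            A[i] = [x / pivot for x in A[i]]         # divide row i by the pivot
            for u in range(m):                       # clear column j elsewhere
                if u != i:
                    factor = A[u][j]
                    A[u] = [a - factor * b for a, b in zip(A[u], A[i])]
            i += 1
            rank += 1
        j += 1
    return rank

# The augmented matrix of the earlier example, in floating point.
M = [[2.0, 1.0, -1.0, 8.0],
     [-3.0, -1.0, 2.0, -11.0],
     [-2.0, 1.0, 2.0, -3.0]]
print(to_rref(M))                     # 3
print([round(row[3]) for row in M])   # [2, 3, -1]
```

Run on the augmented matrix of the worked example, the routine reduces it to reduced row echelon form whose last column holds the solution x = 2, y = 3, z = −1 (up to floating-point rounding).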

