Linear independence
In linear algebra, a family of vectors is linearly independent if none of them can be written as a linear combination of finitely many other vectors in the collection. For instance, in three-dimensional Euclidean space R3, the three vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1) are linearly independent, while (2, −1, 1), (1, 0, 1) and (3, −1, 2) are not (since the third vector is the sum of the first two). Vectors which are not linearly independent are called linearly dependent.
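A quick numerical cross-check of this example (a minimal sketch, assuming NumPy) computes the rank of the matrix whose rows are the given vectors: n vectors are linearly independent exactly when that rank equals n.

```python
import numpy as np

# Rows are the vectors from the example above.
independent = np.array([[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 1]])
dependent = np.array([[2, -1, 1],
                      [1, 0, 1],
                      [3, -1, 2]])

# n vectors are linearly independent exactly when the matrix
# built from them has rank n.
print(np.linalg.matrix_rank(independent))  # 3 -> independent
print(np.linalg.matrix_rank(dependent))    # 2 -> dependent
```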
Definition
Let v1, v2, ..., vn be vectors in Rm, that is, real vectors with the same number m of components. We say that they are linearly dependent if there exist real numbers a1, a2, ..., an, not all equal to zero, such that:
<math> a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_n \mathbf{v}_n = \mathbf{0}. </math>
Note that the zero on the right is the zero vector, not the number zero.
If such numbers do not exist, then the vectors are said to be linearly independent. This condition can be reformulated as follows: Whenever a1, a2, ..., an are numbers such that
<math> a_1 \mathbf{v}_1 + a_2 \mathbf{v}_2 + \cdots + a_n \mathbf{v}_n = \mathbf{0}, </math>
we have ai = 0 for i = 1, 2, ..., n.
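The definition amounts to asking whether the homogeneous system a1v1 + ··· + anvn = 0 has a solution other than a1 = ··· = an = 0. As a minimal sketch (assuming NumPy; the helper name `nontrivial_dependence` is introduced here purely for illustration), such a solution can be read off from the singular value decomposition of the matrix whose columns are the vectors.

```python
import numpy as np

def nontrivial_dependence(vectors, tol=1e-12):
    """Return coefficients a_1, ..., a_n with sum(a_i * v_i) = 0 and
    not all a_i zero, or None if the vectors are linearly independent.
    """
    # Columns of A are the vectors, so A @ a = a_1 v_1 + ... + a_n v_n.
    A = np.column_stack(vectors)
    # Right-singular vectors belonging to (numerically) zero singular
    # values span the null space of A, i.e. the nontrivial solutions.
    _, s, vt = np.linalg.svd(A)
    if s.size == A.shape[1] and s[-1] > tol:
        return None          # only the trivial solution: independent
    return vt[-1]            # a unit-norm dependence vector

print(nontrivial_dependence([(1, 1), (-3, 2)]))                 # None
print(nontrivial_dependence([(2, -1, 1), (1, 0, 1), (3, -1, 2)]))
# approximately [0.577, 0.577, -0.577] up to sign, i.e. v1 + v2 - v3 = 0
```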
More generally, let V be a vector space over a field K, and let {vi}i∈I be a family of elements of V. The family is linearly dependent over K if there exists a family {aj}j∈J of nonzero elements of K such that
<math> \sum_{j \in J} a_j v_j = \mathbf{0}, </math>
where the index set J is a nonempty, finite subset of I.
A set X of elements of V is linearly independent if the corresponding family {x}x∈X is linearly independent.
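Since the general definition is stated over an arbitrary field K, exact arithmetic is often preferable to the floating-point check sketched above. A small sketch (assuming SymPy, with K taken to be the field of rational numbers) computes a basis of all dependences among the vectors from the introduction.

```python
from sympy import Matrix

# Columns are the vectors (2, -1, 1), (1, 0, 1), (3, -1, 2);
# nullspace() uses exact rational arithmetic, so no tolerance is needed.
A = Matrix([[ 2, 1,  3],
            [-1, 0, -1],
            [ 1, 1,  2]])

deps = A.nullspace()   # basis of the space of dependences
print(deps)            # one basis vector (-1, -1, 1), i.e. -v1 - v2 + v3 = 0
```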
The concept of linear independence is important because a set of vectors that is linearly independent and spans a vector space forms a basis for that vector space.
The projective space of linear dependences
A linear dependence among vectors v1, ..., vn is a tuple (a1, ..., an) with n scalar components, not all zero, such that
<math> a_1 v_1 + \cdots + a_n v_n = 0. </math>
If such a linear dependence exists, then the n vectors are linearly dependent. It makes sense to identify two linear dependences if one arises as a non-zero multiple of the other, because in this case the two describe the same linear relationship among the vectors. Under this identification, the set of all linear dependences among v1, ..., vn is a projective space.
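To make this identification concrete (a small sketch using the vectors from the introduction, assuming NumPy): the tuples (1, 1, −1) and (−2, −2, 2) differ by the nonzero factor −2, and both express the same relation v1 + v2 = v3, so they correspond to the same point of this projective space.

```python
import numpy as np

v = [np.array([2, -1, 1]), np.array([1, 0, 1]), np.array([3, -1, 2])]

# Two dependences that differ by the nonzero factor -2.
a = np.array([1, 1, -1])
b = -2 * a

for coeffs in (a, b):
    combo = sum(c * vec for c, vec in zip(coeffs, v))
    print(coeffs, combo)   # both combinations give the zero vector [0 0 0]
```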
Example I
The vectors (1, 1) and (−3, 2) in R2 are linearly independent.
Proof:
Let a, b be two real numbers such that:
<math> a(1, 1) + b(-3, 2) = (0, 0). </math>
Then:
<math> (a - 3b, a + 2b) = (0, 0), </math>
that is,
<math> a - 3b = 0 \quad \text{and} \quad a + 2b = 0. </math>
Subtracting the first equation from the second gives 5b = 0, so b = 0, and then the first equation gives a = 0. Since the only solution is a = b = 0, the two vectors are linearly independent.
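As a cross-check of this result, not part of the proof (assuming NumPy): two vectors in R2 are linearly independent exactly when the 2 × 2 matrix with those vectors as columns has nonzero determinant.

```python
import numpy as np

# Columns are (1, 1) and (-3, 2); a nonzero determinant means the
# only solution of a*(1,1) + b*(-3,2) = (0,0) is a = b = 0.
A = np.array([[1, -3],
              [1,  2]])
print(np.linalg.det(A))   # 5.0, nonzero -> linearly independent
```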