## Linearly Independent Sets of Vectors

Theorem SLSLC tells us that a solution to a homogeneous system of equations is a linear combination of the columns of the coefficient matrix that equals the zero vector. We used just this situation to our advantage (twice!) in Example SCAD where we reduced the set of vectors used in a span construction from four down to two, by declaring certain vectors as surplus. The next two definitions will allow us to formalize this situation.

Definition RLDCV (Relation of Linear Dependence for Column Vectors) Given a set of vectors $S=\set{\vectorlist{u}{n}}$, a true statement of the form \begin{equation*} \lincombo{\alpha}{u}{n}=\zerovector \end{equation*} is a relation of linear dependence on $S$. If this statement is formed in a trivial fashion, i.e. $\alpha_i=0$, $1\leq i\leq n$, then we say it is the trivial relation of linear dependence on $S$.

Definition LICV (Linear Independence of Column Vectors) The set of vectors $S=\set{\vectorlist{u}{n}}$ is linearly dependent if there is a relation of linear dependence on $S$ that is not trivial. In the case where the only relation of linear dependence on $S$ is the trivial one, then $S$ is a linearly independent set of vectors.
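To make the definitions concrete, here is a minimal sketch in SymPy with three hypothetical vectors (not drawn from the text's examples), where the third vector is the sum of the first two, producing a nontrivial relation of linear dependence:

```python
from sympy import Matrix

# Hypothetical vectors in C^3: u3 = u1 + u2, so the scalars
# (1, 1, -1) -- not all zero -- form a nontrivial relation of
# linear dependence, and {u1, u2, u3} is linearly dependent.
u1 = Matrix([1, 0, 2])
u2 = Matrix([0, 1, 1])
u3 = Matrix([1, 1, 3])

relation = 1*u1 + 1*u2 + (-1)*u3
print(relation.T)  # the zero vector: Matrix([[0, 0, 0]])
```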

Notice that a relation of linear dependence is an equation. Though most of it is a linear combination, it is not itself a linear combination (that would be a vector). Linear independence is a property of a set of vectors. It is easy to take a set of vectors, and an equal number of scalars, all zero, and form a linear combination that equals the zero vector. When this easy way is the only way, we say the set is linearly independent. Here are a couple of examples.

Example LDS: Linearly dependent set in $\complex{5}$.

Example LIS: Linearly independent set in $\complex{5}$.

Example LDS and Example LIS relied on solving a homogeneous system of equations to determine linear independence. We can codify this process in a time-saving theorem.

Theorem LIVHS (Linearly Independent Vectors and Homogeneous Systems) Suppose that $A$ is an $m\times n$ matrix and $S=\set{\vectorlist{A}{n}}$ is the set of vectors in $\complex{m}$ that are the columns of $A$. Then $S$ is a linearly independent set if and only if the homogeneous system $\homosystem{A}$ has a unique solution.
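Theorem LIVHS translates directly into a computation. Below is a sketch using SymPy's exact arithmetic; `is_linearly_independent` is a hypothetical helper name, and the test vectors are made up for illustration:

```python
from sympy import Matrix

def is_linearly_independent(columns):
    """Apply Theorem LIVHS: stack the vectors as the columns of A,
    then test whether the homogeneous system LS(A, 0) has a unique
    (i.e. only the trivial) solution -- equivalently, whether the
    null space of A contains only the zero vector."""
    A = Matrix.hstack(*columns)
    return len(A.nullspace()) == 0

# Two independent vectors in C^3
print(is_linearly_independent([Matrix([1, 0, 1]),
                               Matrix([0, 1, 1])]))        # True
# Adding their sum creates a nontrivial relation of dependence
print(is_linearly_independent([Matrix([1, 0, 1]),
                               Matrix([0, 1, 1]),
                               Matrix([1, 1, 2])]))        # False
```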

Since Theorem LIVHS is an equivalence, we can use it to determine the linear independence or dependence of any set of column vectors, just by creating a corresponding matrix and analyzing the row-reduced form. Let's illustrate this with two more examples.

Example LIHS: Linearly independent, homogeneous system.

Example LDHS: Linearly dependent, homogeneous system.

As an equivalence, Theorem LIVHS gives us a straightforward way to determine if a set of vectors is linearly independent or dependent. Review Example LIHS and Example LDHS. They are very similar, differing only in the last two slots of the third vector. This resulted in slightly different matrices when row-reduced, and slightly different values of $r$, the number of nonzero rows. Notice, too, that we are less interested in the actual solution set, and more interested in its form or size. These observations allow us to make a slight improvement in Theorem LIVHS.

Theorem LIVRN (Linearly Independent Vectors, $r$ and $n$) Suppose that $A$ is an $m\times n$ matrix and $S=\set{\vectorlist{A}{n}}$ is the set of vectors in $\complex{m}$ that are the columns of $A$. Let $B$ be a matrix in reduced row-echelon form that is row-equivalent to $A$ and let $r$ denote the number of non-zero rows in $B$. Then $S$ is linearly independent if and only if $n=r$.
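As a sketch of Theorem LIVRN in SymPy (the matrix is hypothetical; SymPy's `rref` returns the reduced row-echelon form together with the pivot-column indices, whose count equals $r$):

```python
from sympy import Matrix

# Third column equals 2*(col 1) + 3*(col 2), so we expect dependence.
A = Matrix([[1, 0, 2],
            [0, 1, 3],
            [1, 1, 5]])
B, pivots = A.rref()
n = A.cols            # number of column vectors
r = len(pivots)       # number of nonzero rows in B
print(r, n, r == n)   # 2 3 False -- linearly dependent by Theorem LIVRN
```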

So now here's an example of the most straightforward way to determine if a set of column vectors is linearly independent or linearly dependent. While this method can be quick and easy, don't forget the logical progression from the definition of linear independence through homogeneous systems of equations which makes it possible.

Example LDRN: Linearly dependent, $r < n$.

Example LLDS: Large linearly dependent set in $\complex{4}$.

The situation in Example LLDS is slick enough to warrant formulating as a theorem.

Theorem MVSLD (More Vectors than Size implies Linear Dependence) Suppose that $S=\set{\vectorlist{u}{n}}$ is a set of vectors from $\complex{m}$, and that $n>m$. Then $S$ is a linearly dependent set.
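Theorem MVSLD needs no row-reduction at all: since $r\leq m<n$, Theorem LIVRN forces dependence no matter what the entries are. A quick SymPy illustration with a randomly generated matrix (the seed value is arbitrary):

```python
from sympy import randMatrix

# Five column vectors in C^4 (n = 5 > m = 4): whatever the entries,
# the rank r satisfies r <= m < n, so the columns must be dependent.
A = randMatrix(4, 5, seed=2024)
r = A.rank()
print(r <= 4, r < 5)  # True True, for any choice of entries
```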

## Linear Independence and Nonsingular Matrices

We will now specialize to sets of $n$ vectors from $\complex{n}$. This will put Theorem MVSLD off-limits, while Theorem LIVHS will involve square matrices. Let's begin by contrasting Archetype A and Archetype B.

Example LDCAA: Linearly dependent columns in Archetype A.

Example LICAB: Linearly independent columns in Archetype B.

That Archetype A and Archetype B have opposite properties for the columns of their coefficient matrices is no accident. Here's the theorem, and then we will update our equivalences for nonsingular matrices, Theorem NME1.

Theorem NMLIC (Nonsingular Matrices have Linearly Independent Columns) Suppose that $A$ is a square matrix. Then $A$ is nonsingular if and only if the columns of $A$ form a linearly independent set.
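A sketch of Theorem NMLIC on a hypothetical $3\times 3$ matrix, checking nonsingularity (here via the determinant, a convenient computational test in SymPy, though determinants arrive later in this development) against linear independence of the columns:

```python
from sympy import Matrix

A = Matrix([[1, 2, 0],
            [0, 1, 1],
            [1, 0, 1]])
nonsingular = A.det() != 0                     # det(A) = 3, nonzero
independent_columns = len(A.nullspace()) == 0  # trivial null space
print(nonsingular, independent_columns)        # True True -- they agree
```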

Here's an update to Theorem NME1.

Theorem NME2 (Nonsingular Matrix Equivalences, Round 2) Suppose that $A$ is a square matrix. The following are equivalent.

1. $A$ is nonsingular.
2. $A$ row-reduces to the identity matrix.
3. The null space of $A$ contains only the zero vector, $\nsp{A}=\set{\zerovector}$.
4. The linear system $\linearsystem{A}{\vect{b}}$ has a unique solution for every possible choice of $\vect{b}$.
5. The columns of $A$ form a linearly independent set.
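Several of these equivalences can be checked side by side in SymPy on a single hypothetical nonsingular matrix:

```python
from sympy import Matrix, eye

# A hypothetical nonsingular 2x2 matrix; the NME2 conditions all hold.
A = Matrix([[2, 1],
            [1, 1]])
rref_is_identity = A.rref()[0] == eye(2)       # item 2
null_space_trivial = len(A.nullspace()) == 0   # items 3 and 5
b = Matrix([3, 4])                             # one sample right-hand side
x = A.solve(b)                                 # the unique solution (item 4)
print(rref_is_identity, null_space_trivial, (A*x - b).is_zero_matrix)
```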

## Null Spaces, Spans, Linear Independence

In Subsection SS.SSNS (Spanning Sets of Null Spaces) we proved Theorem SSNS, which provided $n-r$ vectors that could be used with the span construction to build the entire null space of a matrix. As we have hinted in Example SCAD, and as we will see again going forward, linearly dependent sets carry redundant vectors with them when used in building a set as a span. Our aim now is to show that the vectors provided by Theorem SSNS form a linearly independent set, so in one sense they describe the null space as efficiently as possible. Notice that the vectors $\vect{z}_j$, $1\leq j\leq n-r$ first appear in the vector form of solutions to arbitrary linear systems (Theorem VFSLS). The exact same vectors appear again in the span construction in the conclusion of Theorem SSNS. Since this second theorem specializes to homogeneous systems, the only real difference is that the vector $\vect{c}$ in Theorem VFSLS is the zero vector for a homogeneous system. Finally, Theorem BNS will now show that these same vectors form a linearly independent set. We'll set the stage for the proof of this theorem with a moderately large example. Study the example carefully, as it will make it easier to understand the proof.

Example LINSB: Linear independence of null space basis.

The proof of Theorem BNS is really quite straightforward, and relies on the "pattern of zeros and ones" that arises in the vectors $\vect{z}_i$, $1\leq i\leq n-r$ in the entries that correspond to the free variables. Play along with Example LINSB as you study the proof. Also, take a look at Example VFSAD, Example VFSAI and Example VFSAL, especially at the conclusion of Step 2 (temporarily ignore the construction of the constant vector, $\vect{c}$). This proof is also a good first example of how to prove a conclusion that states a set is linearly independent.

Theorem BNS (Basis for Null Spaces) Suppose that $A$ is an $m\times n$ matrix, and $B$ is a row-equivalent matrix in reduced row-echelon form with $r$ nonzero rows. Let $D=\{d_1,\,d_2,\,d_3,\,\ldots,\,d_r\}$ and $F=\{f_1,\,f_2,\,f_3,\,\ldots,\,f_{n-r}\}$ be the sets of column indices where $B$ does and does not (respectively) have leading 1's. Construct the $n-r$ vectors $\vect{z}_j$, $1\leq j\leq n-r$ of size $n$ as \begin{equation*} \vectorentry{\vect{z}_j}{i}= \begin{cases} 1&\text{if $i\in F$, $i=f_j$}\\ 0&\text{if $i\in F$, $i\neq f_j$}\\ -\matrixentry{B}{k,f_j}&\text{if $i\in D$, $i=d_k$} \end{cases} \end{equation*} Define the set $S=\set{\vectorlist{z}{n-r}}$. Then

1. $\nsp{A}=\spn{S}$.
2. $S$ is a linearly independent set.
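The construction in Theorem BNS can be carried out mechanically. The sketch below uses a hypothetical $2\times 4$ matrix, builds each $\vect{z}_j$ from the reduced row-echelon form exactly as the theorem prescribes (a 1 in slot $f_j$, 0 in the other free slots, and $-\matrixentry{B}{k,f_j}$ in pivot slot $d_k$), and then checks that each vector lies in the null space and that the set is linearly independent:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 2, 0, 3],
            [2, 4, 1, 7]])
B, pivots = A.rref()
n = A.cols
D = list(pivots)                          # pivot (dependent) columns
F = [j for j in range(n) if j not in D]   # free columns

Z = []
for f in F:
    z = zeros(n, 1)
    z[f] = 1                              # the "pattern of zeros and ones"
    for k, d in enumerate(D):
        z[d] = -B[k, f]                   # negated entries of B
    Z.append(z)

# Conclusion 1 (in part): each z_j lies in the null space of A.
print(all((A*z).is_zero_matrix for z in Z))    # True
# Conclusion 2: S = {z_1, ..., z_{n-r}} is linearly independent.
print(Matrix.hstack(*Z).rank() == len(Z))      # True
```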

Example NSLIL: Null space spanned by linearly independent set, Archetype L.