Section 3.4 Subspaces of \(\mathbb{R}^n\)

In this section we will explore the subspaces of \(\mathbb{R}^n\text{.}\) The formal definition is below, but informally you should think of a subspace of \(\mathbb{R}^n\) as a "copy" of some \(\mathbb{R}^k\) sitting inside \(\mathbb{R}^n\text{,}\) in much the same way as a plane is a "copy" of \(\mathbb{R}^2\) sitting inside \(\mathbb{R}^3\text{.}\) Another good informal way of thinking about subspaces of \(\mathbb{R}^n\) is that they are collections of vectors in \(\mathbb{R}^n\) that we cannot "escape" using our fundamental operations of vector addition and scalar multiplication.

Subsection 3.4.1 Subspaces

Definition 3.4.1.

Let \(S\) be a collection of vectors in \(\mathbb{R}^n\text{.}\) We say that \(S\) is a subspace of \(\mathbb{R}^n\) if it has all three of the following properties:

  1. \(S\) is not empty (that is, there is at least one vector in \(S\)).

  2. Whenever \(\vec{v}\) and \(\vec{w}\) are vectors in \(S\) then \(\vec{v}+\vec{w}\) is a vector in \(S\text{.}\) (We say that \(S\) is closed under addition.)

  3. Whenever \(\vec{v}\) is a vector in \(S\) and \(c\) is a scalar then \(c\vec{v}\) is a vector in \(S\text{.}\) (We say that \(S\) is closed under scalar multiplication.)
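Before looking at examples, it may help to see the definition operationally. The sketch below (our own illustration, not part of the text) uses numpy to hunt for counterexamples to the two closure properties in a candidate collection. A random search like this can refute the subspace properties, but it can never verify them; that is why the examples that follow give algebraic arguments.

```python
import numpy as np

def find_closure_counterexample(in_S, sample, trials=1000):
    """Randomly search for violations of properties (2) and (3) of
    Definition 3.4.1.  `in_S(v)` tests membership in the candidate
    collection; `sample(rng)` returns a random vector of it.  A hit
    disproves the subspace properties; finding nothing proves nothing."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        v, w = sample(rng), sample(rng)
        if not in_S(v + w):
            return f"not closed under addition: {v} + {w}"
        c = rng.uniform(-10, 10)
        if not in_S(c * v):
            return f"not closed under scalar multiplication: {c:.2f} * {v}"
    return None

# The line y = x + 1 (not a subspace, as shown in Example 3.4.2) ...
def on_shifted_line(v):
    return np.isclose(v[1], v[0] + 1)

def sample_shifted_line(rng):
    x = rng.uniform(-5, 5)
    return np.array([x, x + 1])

# ... versus the line y = 3x (a subspace, as proved in Example 3.4.4).
def on_line_3x(v):
    return np.isclose(v[1], 3 * v[0])

def sample_line_3x(rng):
    x = rng.uniform(-5, 5)
    return np.array([x, 3 * x])

print(find_closure_counterexample(on_shifted_line, sample_shifted_line))
# reports a sum that leaves the line y = x + 1
print(find_closure_counterexample(on_line_3x, sample_line_3x))
# None: no counterexample found, consistent with y = 3x being a subspace
```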

Example 3.4.2.

The collection of all vectors on the line \(y=x+1\) is not a subspace of \(\mathbb{R}^2\text{,}\) because \(\begin{bmatrix}0\\1\end{bmatrix}\) is on the line \(y=x+1\) but \(2\begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}0\\2\end{bmatrix}\) is not on that line, so property (3) of the definition of subspace is not satisfied.

Example 3.4.3.

The collection of all vectors on the curve \(y = x^2\) is not a subspace of \(\mathbb{R}^2\text{.}\) The vectors \(\begin{bmatrix}1\\1\end{bmatrix}\) and \(\begin{bmatrix}2\\4\end{bmatrix}\) are both on the curve \(y=x^2\text{,}\) but their sum \(\begin{bmatrix}3\\5\end{bmatrix}\) is not, so condition (2) of the definition of subspace is not satisfied.

Example 3.4.4.

The collection of all vectors on the line \(y=3x\) is a subspace of \(\mathbb{R}^2\text{.}\) To prove this we need to verify all three conditions in the definition of "subspace". Let \(S\) denote the collection of all vectors \(\begin{bmatrix}x\\y\end{bmatrix}\) where \(y=3x\text{.}\)

  1. The collection \(S\) is not empty because it contains the vector \(\begin{bmatrix}0\\0\end{bmatrix}\text{.}\)

  2. Suppose that \(\begin{bmatrix}x_1\\y_1\end{bmatrix}\) and \(\begin{bmatrix}x_2\\y_2\end{bmatrix}\) are in \(S\text{.}\) Then \(y_1 = 3x_1\) and \(y_2=3x_2\) (by definition of \(S\)), so \(y_1+y_2 = 3x_1+3x_2 = 3(x_1+x_2)\text{.}\) Therefore the vector \(\begin{bmatrix}x_1\\y_1\end{bmatrix} + \begin{bmatrix}x_2\\y_2\end{bmatrix} = \begin{bmatrix}x_1+x_2\\y_1+y_2\end{bmatrix}\) is in \(S\text{.}\)

  3. Suppose that \(\begin{bmatrix}x\\y\end{bmatrix}\) is in \(S\) and \(c\) is a scalar. Then by definition of \(S\) we have \(y=3x\text{,}\) so \(cy=c(3x) = 3(cx)\text{.}\) Therefore the vector \(c\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}cx\\cy\end{bmatrix}\) is in \(S\text{.}\)

Since all three conditions in the definition of subspace are satisfied we can conclude that \(S\) is a subspace of \(\mathbb{R}^2\text{.}\)

Example 3.4.5.

Let \(S\) be the collection that contains only the vector \(\vec{0}\text{.}\) Then \(S\) is not empty (it has the zero vector in it!), and because \(\vec{0}+\vec{0} = \vec{0}\) and \(c\vec{0}=\vec{0}\) it is closed under addition and scalar multiplication. Thus \(S\) is a subspace of \(\mathbb{R}^n\text{.}\) This subspace is sometimes called the trivial subspace.

Lemma 3.4.6.

If \(S\) is a subspace of \(\mathbb{R}^n\text{,}\) then \(\vec{0}\) is in \(S\text{.}\)

Proof.

By definition of "subspace", \(S\) is not empty. Therefore we can find some vector \(\vec{v}\) in \(S\text{.}\) Since \(S\) is closed under scalar multiplication and \(0\vec{v} = \vec{0}\) we conclude that \(\vec{0}\) is in \(S\text{.}\)

The lemma sometimes gives a very quick way of checking that a collection is not a subspace of \(\mathbb{R}^n\text{:}\) any collection that does not contain \(\vec{0}\) cannot be a subspace. On the other hand, it does not help us to prove that a collection is a subspace, because there are collections that do contain \(\vec{0}\) yet are not subspaces (such as the one in Example 3.4.3).

Theorem 3.4.7.

For any vectors \(\vec{v_1}, \ldots, \vec{v_k}\) in \(\mathbb{R}^n\text{,}\) the collection \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\) is a subspace of \(\mathbb{R}^n\text{.}\)

Proof.

We check the three conditions of Definition 3.4.1.

  1. The vector \(\vec{v_1}\) is in \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\text{,}\) so \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\) is not empty.

  2. Suppose that \(\vec{v}\) and \(\vec{w}\) are in \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\text{.}\) Then there are scalars \(a_1, \ldots, a_k, b_1, \ldots, b_k\) such that \(\vec{v}=a_1\vec{v_1}+\cdots+a_k\vec{v_k}\) and \(\vec{w} = b_1\vec{v_1}+\cdots+b_k\vec{v_k}\text{.}\) Then

    \begin{align*} \vec{v}+\vec{w} \amp = (a_1\vec{v_1}+\cdots+a_k\vec{v_k}) + (b_1\vec{v_1}+\cdots+b_k\vec{v_k})\\ \amp = (a_1+b_1)\vec{v_1} + \cdots + (a_k+b_k)\vec{v_k}\text{.} \end{align*}
    This shows that \(\vec{v}+\vec{w}\) is in \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\text{.}\)

  3. Suppose that \(\vec{v}\) is in \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\text{,}\) and \(c\) is a scalar. Find scalars \(a_1, \ldots, a_k\) such that \(\vec{v} = a_1\vec{v_1} + \cdots + a_k\vec{v_k}\text{.}\) Then

    \begin{align*} c\vec{v} \amp = c(a_1\vec{v_1}+\cdots+a_k\vec{v_k})\\ \amp = (ca_1)\vec{v_1} + \cdots + (ca_k)\vec{v_k}\text{.} \end{align*}
    This shows that \(c\vec{v}\) is in \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\text{.}\)

Example 3.4.8.

Any line through the origin in \(\mathbb{R}^2\) is a subspace of \(\mathbb{R}^2\text{.}\) To see this, recall from Section 2.3 that a line in \(\mathbb{R}^2\) can be described in vector form as \(\vec{v} = t\vec{d}+\vec{p}\text{,}\) where \(\vec{d}\) is a direction vector for the line and \(\vec{p}\) is a point on the line. Since our line goes through the origin we can choose \(\vec{p}=\vec{0}\text{,}\) in which case we see that the vectors on our line are exactly those in \(\SpanS(\vec{d})\text{.}\) The line is therefore a subspace of \(\mathbb{R}^2\) by Theorem 3.4.7.

Nearly identical reasoning shows that a line through the origin in \(\mathbb{R}^3\) is a subspace of \(\mathbb{R}^3\text{.}\) Similarly, a plane through the origin in \(\mathbb{R}^3\) can be described as \(\SpanS(\vec{d_1}, \vec{d_2})\) where \(\vec{d_1}, \vec{d_2}\) are non-parallel direction vectors for the plane, so every plane through the origin of \(\mathbb{R}^3\) is a subspace of \(\mathbb{R}^3\text{.}\)
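Deciding whether a given vector \(\vec{b}\) lies in \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\) amounts to checking that the linear system whose coefficient columns are \(\vec{v_1}, \ldots, \vec{v_k}\) and whose right-hand side is \(\vec{b}\) is consistent; equivalently, appending \(\vec{b}\) as an extra column must not increase the rank. A short sympy sketch of ours illustrating this test on a plane through the origin:

```python
from sympy import Matrix

def in_span(vectors, b):
    """b lies in Span(vectors) iff the system [v1 ... vk | b] is
    consistent, i.e. appending b does not increase the rank."""
    A = Matrix.hstack(*[Matrix(v) for v in vectors])
    return A.rank() == A.row_join(Matrix(b)).rank()

# A plane through the origin in R^3 with direction vectors d1, d2.
d1, d2 = [1, 0, 2], [0, 1, -1]
print(in_span([d1, d2], [2, 3, 1]))  # True:  2*d1 + 3*d2 = (2, 3, 1)
print(in_span([d1, d2], [0, 0, 1]))  # False: (0, 0, 1) is off the plane
```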

Subsection 3.4.2 Basis

In light of Theorem 3.4.7 we might ask whether every subspace of \(\mathbb{R}^n\) can be written as \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\) for some vectors \(\vec{v_1}, \ldots, \vec{v_k}\text{.}\) The answer is "yes", but we will not give a rigorous proof of this fact, because it uses tools beyond the scope of this course. The idea is (roughly) to start with a vector \(\vec{v_1}\) from your subspace \(S\text{,}\) then check whether \(S = \SpanS(\vec{v_1})\text{.}\) If so, then you are done. If not, find a vector \(\vec{v_2}\) in \(S\) but not in \(\SpanS(\vec{v_1})\text{,}\) and check if \(S = \SpanS(\vec{v_1}, \vec{v_2})\text{.}\) Continue in this way until you eventually stop because \(S = \SpanS(\vec{v_1}, \ldots, \vec{v_k})\) (the fact that you do eventually get to stop is the part of this proof that is beyond the scope of our course).

Once we know that our subspace can be described as \(\SpanS(\vec{v_1}, \ldots, \vec{v_k})\) it is natural to want to do so with as few vectors as possible (that is, with the smallest possible value of \(k\)), so that our description has no redundancies. Fortunately, we already know (from Section 3.2) that this happens exactly when \(\vec{v_1}, \ldots, \vec{v_k}\) are linearly independent. We therefore have the following definition:

Definition 3.4.9.

Let \(S\) be a subspace of \(\mathbb{R}^n\text{.}\) A basis for \(S\) is a collection \(B\) of vectors in \(S\) such that \(\SpanS(B) = S\) and the vectors in \(B\) are linearly independent.

The definition of "basis" that we have given is the easiest to use in most situations. For the sake of helping to understand what makes a basis special, it is worth seeing the following theorem, which we state without proof.

Theorem 3.4.10.

Let \(B = \{\vec{v_1}, \ldots, \vec{v_k}\}\) be a basis for a subspace \(S\) of \(\mathbb{R}^n\text{.}\) Then every vector in \(S\) can be written as a linear combination of \(\vec{v_1}, \ldots, \vec{v_k}\) in exactly one way.

Theorem 3.4.11.

Every subspace of \(\mathbb{R}^n\) has a basis.

Proof.

Start by writing the subspace as \(S = \SpanS(\vec{v_1}, \ldots, \vec{v_k})\text{.}\) If \(\vec{v_1}, \ldots, \vec{v_k}\) are linearly independent then they are a basis for \(S\) and we are done. If not then by Theorem 3.2.7 there is some \(i\) such that \(\vec{v_1}, \ldots, \vec{v_{i-1}}, \vec{v_{i+1}}, \ldots, \vec{v_k}\) still span \(S\text{.}\) If these are linearly independent then they are a basis for \(S\) and we are done. If not, repeat the process of using Theorem 3.2.7 to remove vectors from the list until we do reach a linearly independent set.

Note 3.4.12.

In order to make the previous theorem statement completely true, and make the proof we provided correct, we need to deal with a minor annoyance concerning the trivial subspace (that is, the subspace which contains only the zero vector, Example 3.4.5). The problem is that the set containing only the zero vector is already linearly dependent (Example 3.2.3). To resolve the problem we need to consider the empty collection of vectors, which we denote \(\emptyset\text{,}\) and its span, \(\SpanS(\emptyset)\text{.}\) We declare that \(\SpanS(\emptyset)\) is the set containing only the zero vector (symbolically, \(\SpanS(\emptyset) = \{\vec{0}\}\)). You can check that \(\emptyset\) satisfies the definition of being linearly independent, so with this new convention \(\emptyset\) is a basis for the trivial subspace of \(\mathbb{R}^n\) (and the proof of the previous theorem does work, because at worst we end up eliminating all of the vectors and arriving at the independent set \(\emptyset\)). We emphasize that for our purposes this is just a minor inconvenience to make the rest of our theorems about subspaces and bases true, and we will rarely encounter it.

Example 3.4.13.

Let \(S = \SpanS\left(\begin{bmatrix}2\\1\\1\end{bmatrix}, \begin{bmatrix}4\\2\\2\end{bmatrix}, \begin{bmatrix}1\\0\\-1\end{bmatrix}, \begin{bmatrix}0\\1\\3\end{bmatrix}\right)\text{.}\) Find a basis for \(S\text{.}\)

Solution.

We are told that the vectors \(\begin{bmatrix}2\\1\\1\end{bmatrix}, \begin{bmatrix}4\\2\\2\end{bmatrix}, \begin{bmatrix}1\\0\\-1\end{bmatrix}, \begin{bmatrix}0\\1\\3\end{bmatrix}\) are a spanning set for \(S\text{,}\) so following the idea of the proof of Theorem 3.4.11 we check whether or not these vectors are linearly independent. In fact, we know that they are not independent, because of Theorem 3.3.10, but we still need to do the calculations to find out which vector we can remove without changing the span (you might be able to see by observation that the second vector is a multiple of the first; we will pretend that we did not notice this, in order to give a full demonstration of the method).

\begin{equation*} \matr{cccc|c}{2 \amp 4 \amp 1 \amp 0 \amp 0 \\ 1 \amp 2 \amp 0 \amp 1 \amp 0 \\ 1 \amp 2 \amp -1 \amp 3 \amp 0} \to \matr{cccc|c}{1 \amp 2 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \amp -2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 0}\text{.} \end{equation*}

The reduced row echelon form tells us that if we want to write

\begin{equation*} a_1\begin{bmatrix}2\\1\\1\end{bmatrix} + a_2\begin{bmatrix}4\\2\\2\end{bmatrix} +a_3\begin{bmatrix}1\\0\\-1\end{bmatrix} + a_4\begin{bmatrix}0\\1\\3\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix} \end{equation*}

then we must have \(a_1 + 2a_2 +a_4 = 0\) and \(a_3 -2a_4 = 0\text{.}\) One specific solution is \(a_1 = 1, a_2 = 0, a_3 = -2, a_4 = -1\text{,}\) which gives us the equation

\begin{equation*} \begin{bmatrix}2\\1\\1\end{bmatrix} - 2\begin{bmatrix}1\\0\\-1\end{bmatrix} -\begin{bmatrix}0\\1\\3\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}\text{,} \end{equation*}

or equivalently,

\begin{equation*} \begin{bmatrix}0\\1\\3\end{bmatrix} = \begin{bmatrix}2 \\ 1 \\ 1\end{bmatrix} - 2\begin{bmatrix}1\\0\\-1\end{bmatrix}\text{.} \end{equation*}

From here we see that \(\begin{bmatrix}0\\1\\3\end{bmatrix}\) is in the span of the other vectors, so by Theorem 3.1.7 we can remove it without changing the span. That is, we have that \(S = \SpanS\left(\begin{bmatrix}2\\1\\1\end{bmatrix}, \begin{bmatrix}4\\2\\2\end{bmatrix}, \begin{bmatrix}1\\0\\-1\end{bmatrix}\right)\text{.}\)

At this point we have reduced the size of our spanning set, but we don't know if we have done enough. So we repeat the entire process that we just went through, this time starting with our new, smaller, spanning set.

\begin{equation*} \matr{ccc|c}{2 \amp 4 \amp 1 \amp 0 \\ 1 \amp 2 \amp 0 \amp 0 \\ 1 \amp 2 \amp -1 \amp 0} \to \matr{ccc|c}{1 \amp 2 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0}\text{.} \end{equation*}

This time we see that to have

\begin{equation*} a_1\begin{bmatrix}2\\1\\1\end{bmatrix} + a_2\begin{bmatrix}4\\2\\2\end{bmatrix} +a_3\begin{bmatrix}1\\0\\-1\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}\text{,} \end{equation*}

we must have \(a_1 +2a_2 = 0\) and \(a_3=0\text{.}\) Choosing a specific solution, say \(a_1 = -2, a_2=1, a_3=0\text{,}\) we get

\begin{equation*} -2\begin{bmatrix}2\\1\\1\end{bmatrix} + \begin{bmatrix}4\\2\\2\end{bmatrix} = \begin{bmatrix}0\\0\\0\end{bmatrix}\text{,} \end{equation*}

or equivalently,

\begin{equation*} \begin{bmatrix}4\\2\\2\end{bmatrix} = 2\begin{bmatrix}2\\1\\1\end{bmatrix}\text{.} \end{equation*}

As before, this shows us that we can remove the vector \(\begin{bmatrix}4\\2\\2\end{bmatrix}\) without changing the span, so we now know that \(S = \SpanS\left(\begin{bmatrix}2\\1\\1\end{bmatrix}, \begin{bmatrix}1\\0\\-1\end{bmatrix}\right)\text{.}\)

We've got a smaller spanning set again, so we repeat the process of checking if they are linearly independent.

\begin{equation*} \matr{cc|c}{2 \amp 1 \amp 0 \\ 1 \amp 0 \amp 0 \\ 1 \amp -1 \amp 0} \to \matr{cc|c}{1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0}\text{.} \end{equation*}

Here we see that the system has a unique solution, so by Theorem 3.2.5 the columns of the coefficient matrix are linearly independent. We therefore conclude that \(\left\{\begin{bmatrix}2\\1\\1\end{bmatrix}, \begin{bmatrix}1\\0\\-1\end{bmatrix}\right\}\) forms a basis for \(S\text{.}\)
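The row reductions in this example can be reproduced in a few lines with sympy, whose `rref` and `columnspace` methods carry out exactly this selection of pivot columns (a quick check of ours, not part of the example):

```python
from sympy import Matrix

# The columns are the four spanning vectors of S from Example 3.4.13.
A = Matrix([[2, 4,  1, 0],
            [1, 2,  0, 1],
            [1, 2, -1, 3]])

rref_form, pivots = A.rref()
print(pivots)           # (0, 2): the first and third columns are pivots
print(A.columnspace())  # [Matrix([[2], [1], [1]]), Matrix([[1], [0], [-1]])]
```

The pivot columns are the first and third, so `columnspace` returns the same basis \(\left\{\begin{bmatrix}2\\1\\1\end{bmatrix}, \begin{bmatrix}1\\0\\-1\end{bmatrix}\right\}\) that we found above.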

Note 3.4.14.

You might have noticed in the solution to Example 3.4.13 we ended up essentially repeating the same row reduction several times, having just removed a column from the matrix each time. Later (in Section 4.6) we will see a more efficient way to find a basis for a subspace of \(\mathbb{R}^n\) by studying some specific subspaces associated to matrices.

Subsection 3.4.3 Dimension

Aside from the trivial subspace, every subspace of \(\mathbb{R}^n\) has many different bases. For example, you can check that \(\left\{\begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix}\right\}\text{,}\) \(\left\{\begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}1\\-1\end{bmatrix}\right\}\text{,}\) and \(\left\{\begin{bmatrix}2\\4\end{bmatrix}, \begin{bmatrix}17 \\ \pi/3\end{bmatrix}\right\}\) are three different examples of bases for \(\mathbb{R}^2\text{.}\) Given a subspace \(S\) and two bases \(B_1\) and \(B_2\) there may be no vectors in common between \(B_1\) and \(B_2\text{,}\) but it turns out that the number of vectors in the two bases will always be the same. That is, for example, it is impossible to find a basis for \(\mathbb{R}^2\) that has three vectors in it.
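Each of these three collections really is a basis: a quick way to confirm it (our check, not in the text) is that a \(2\times 2\) matrix with the two vectors as columns has nonzero determinant, so its columns are linearly independent and therefore span \(\mathbb{R}^2\text{.}\)

```python
from sympy import Matrix, pi

# The three candidate bases of R^2 listed above, as matrix columns.
candidate_bases = [Matrix([[1, 0], [0, 1]]),
                   Matrix([[1, 1], [1, -1]]),
                   Matrix([[2, 17], [4, pi / 3]])]
for M in candidate_bases:
    d = M.det()
    # Nonzero determinant <=> columns independent <=> columns span R^2.
    print(d, d != 0)  # 1, -2, and 2*pi/3 - 68 -- all nonzero
```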

Definition 3.4.16.

Let \(S\) be a subspace of \(\mathbb{R}^n\text{.}\) The number of vectors in any basis for \(S\) is called the dimension of \(S\text{,}\) and is denoted by \(\dim(S)\text{.}\)

For any \(n \geq 1\) the standard unit vectors \(\vec{e_1}, \ldots, \vec{e_n}\) form a basis for \(\mathbb{R}^n\text{,}\) and thus \(\dim(\mathbb{R}^n) = n\) (as you would hope!).

In Example 3.4.13 we considered \(S = \SpanS\left(\begin{bmatrix}2\\1\\1\end{bmatrix}, \begin{bmatrix}4\\2\\2\end{bmatrix}, \begin{bmatrix}1\\0\\-1\end{bmatrix}, \begin{bmatrix}0\\1\\3\end{bmatrix}\right)\text{,}\) which is a subspace of \(\mathbb{R}^3\text{.}\) In that example we found that \(\left\{\begin{bmatrix}2\\1\\1\end{bmatrix}, \begin{bmatrix}1\\0\\-1\end{bmatrix}\right\}\) is a basis for \(S\text{.}\) Therefore \(\dim(S) = 2\text{.}\)
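Since \(\dim(S)\) is the number of pivot columns of a matrix whose columns span \(S\text{,}\) sympy's `rank` returns it directly (our check, reusing the matrix from Example 3.4.13):

```python
from sympy import Matrix

A = Matrix([[2, 4,  1, 0],
            [1, 2,  0, 1],
            [1, 2, -1, 3]])
print(A.rank())  # 2, the dimension of S
```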

Consider a subspace \(S\) of \(\mathbb{R}^2\text{.}\) There are three possibilities:

  • \(\dim(S) = 0\text{:}\) In this case, \(S = \{\vec{0}\}\text{,}\) i.e., \(S\) contains only the origin.

  • \(\dim(S) = 1\text{:}\) Let \(B = \{\vec{d}\}\) be a basis for \(S\text{.}\) Then \(S = \SpanS(\vec{d})\text{,}\) so any vector \(\vec{v}\) in \(S\) can be expressed as \(\vec{v} = t\vec{d}\) for some scalar \(t\text{.}\) We recognize this (from Section 2.3) as the vector equation of a line through the origin, so \(S\) is a line through the origin.

  • \(\dim(S) = 2\text{:}\) In this case \(S = \mathbb{R}^2\text{.}\)

We thus have a complete description of all subspaces of \(\mathbb{R}^2\text{:}\) Any subspace of \(\mathbb{R}^2\) is either the trivial subspace, a line through the origin, or all of \(\mathbb{R}^2\text{.}\)

You can similarly check that any subspace of \(\mathbb{R}^3\) is either the trivial subspace (dimension \(0\)), a line through the origin (dimension \(1\)), a plane through the origin (dimension \(2\)), or all of \(\mathbb{R}^3\) (dimension \(3\)).

Exercises 3.4.4 Exercises

1.

Which of the following sets are subspaces of \(\mathbb{R}^3\text{?}\) Explain.
  1. \(\displaystyle V_1 = \left\{\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \ \Bigg| \ |u_1| \le 4 \right\}. \) Solution.

\(V_1\) is not closed under scalar multiplication: While the vector \(\mathbf{u} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\) is in \(V_{1}\text{,}\) the vector \(5\mathbf{u} = \begin{bmatrix} 5 \\ 0 \\ 0 \end{bmatrix}\) is not in \(V_{1}\text{.}\)
    Answer.

No, \(V_{1}\) is not a subspace of \(\mathbb{R}^{3}\text{.}\)

  2. \(\displaystyle V_2 = \left\{\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \ \Bigg| \ u_i \ge 0 \ \text{for each } i = 1,2,3 \right\}. \) Solution.

\(V_2\) is not closed under scalar multiplication: While the vector \(\mathbf{u} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\) is in \(V_{2}\text{,}\) the vector \((-1)\mathbf{u} = \begin{bmatrix} -1 \\ -1 \\ -1 \end{bmatrix}\) is not in \(V_{2}\text{.}\)
    Answer.

    No, \(V_{2}\) is not a subspace of \(\mathbb{R}^{3}\text{.}\)

  3. \(\displaystyle V_3 = \left\{\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \ \Bigg| \ u_3 + u_1 = 2u_2 \right\}. \) Solution.

    We claim that \(V_{3}\) is indeed a subspace of \(\mathbb{R}^{3}\text{,}\) so we have to check that it satisfies the properties listed in Definition 3.4.1:
    1. Non-empty: Since \(0+0=2\cdot 0\text{,}\) the vector \(\mathbf{u} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\) is an element of \(V_{3}\text{.}\)

    2. Closed under scalar multiplication: We take an arbitrary vector \(\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}\in V_{3}\) and any scalar \(\lambda\text{.}\) Since \(\mathbf{u}\in V_{3}\text{,}\) we know that its entries satisfy the equation \(u_3 + u_1 = 2u_2\text{.}\) Now, the components of \(\mathbf{v}:=\lambda \mathbf{u}\) are given by

      \begin{equation*} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} \lambda \cdot u_1 \\ \lambda \cdot u_2 \\ \lambda \cdot u_3 \end{bmatrix} \end{equation*}
      and they satisfy
      \begin{equation*} v_{3} + v_{1} = \lambda \cdot u_{3} + \lambda \cdot u_{1} = \lambda \cdot (u_{3} + u_{1}) = \lambda \cdot 2 u_{2} = 2 v_{2} . \end{equation*}
      Thus, \(\mathbf{v}\in V_{3}\text{.}\) We have shown that \(\lambda \mathbf{u}\) is an element of \(V_{3}\text{.}\)

    3. Closed under addition: We take two arbitrary vectors \(\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}\) and \(\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}\) in \(V_{3}\text{.}\) Since \(\mathbf{u}, \mathbf{v}\in V_{3}\text{,}\) we know that their entries satisfy the equation \(u_3 + u_1 = 2u_2\) and \(v_3 + v_1 = 2v_2\text{.}\) This means that the components of

      \begin{equation*} \mathbf{w}:= \mathbf{u} + \mathbf{v} = \begin{bmatrix} u_{1} + v_{1} \\ u_{2} + v_{2} \\ u_{3} + v_{3} \end{bmatrix} \end{equation*}
      satisfy:
      \begin{align*} w_{3} + w_{1} \amp= (u_{3}+v_{3}) + (u_{1} + v_{1}) = (u_{3}+u_{1}) + ( v_{3}+ v_{1})\\ \amp= 2u_{2} + 2v_{2} = 2 (u_{2}+v_{2}) = 2w_{2} . \end{align*}
      Thus, \(\mathbf{w}\in V_{3}\text{.}\) We have shown that \(\mathbf{u}+\mathbf{v}\) is an element of \(V_{3}\text{.}\)

    Answer.

    Yes, \(V_{3}\) is a subspace of \(\mathbb{R}^{3}\text{.}\)

  4. \(\displaystyle V_4 = \left\{\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \ \Bigg| \ u_3 \ge u_1 \right\}. \) Solution.

    \(V_{4}\) is not closed under scalar multiplication: While the vector \(\mathbf{u} = \begin{bmatrix} 1 \\ 0 \\ 5 \end{bmatrix}\) is in \(V_{4}\text{,}\) the vector \((-1)\mathbf{u} = \begin{bmatrix} -1 \\ 0 \\ -5 \end{bmatrix}\) is not in \(V_{4}\text{,}\) because \(-5 \not\geq -1\text{.}\)
    Answer.

    No, \(V_{4}\) is not a subspace of \(\mathbb{R}^{3}\text{.}\)

  5. \(\displaystyle V_5 = \left\{\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \ \Bigg| \ u_3 = u_1 =0 \right\}. \) Solution.

    We claim that \(V_{5}\) is indeed a subspace of \(\mathbb{R}^{3}\text{,}\) so we have to check that it satisfies the properties listed in Definition 3.4.1:
    1. Non-empty: The first and third component of \(\mathbf{u} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}\) are equal to zero, so it is an element of \(V_{5}\text{.}\)

    2. Closed under scalar multiplication: We take an arbitrary vector \(\mathbf{u}\in V_{5}\) and any scalar \(\lambda\text{.}\) Since \(\mathbf{u}\in V_{5}\text{,}\) we know that \(\mathbf{u} = \begin{bmatrix} 0 \\ u_{2} \\ 0 \end{bmatrix}\) for some number \(u_{2}\text{,}\) so that

      \begin{equation*} \lambda \mathbf{u} = \begin{bmatrix} 0 \\ \lambda \cdot u_{2} \\ 0 \end{bmatrix}. \end{equation*}
      The first and third component of \(\lambda \mathbf{u}\) are equal to zero, so it is an element of \(V_{5}\text{.}\)

    3. Closed under addition: We take two arbitrary vectors \(\mathbf{u},\mathbf{v}\in V_{5}\text{.}\) This means that

      \begin{equation*} \mathbf{u} = \begin{bmatrix} 0 \\ u_{2} \\ 0 \end{bmatrix}, \mathbf{v} = \begin{bmatrix} 0 \\ v_{2} \\ 0 \end{bmatrix} \quad \text{ for some numbers } u_{2}, v_{2}. \end{equation*}
      Therefore,
      \begin{equation*} \mathbf{u} + \mathbf{v} = \begin{bmatrix} 0 \\ u_{2} + v_{2} \\ 0 \end{bmatrix}. \end{equation*}
      The first and third component of \(\mathbf{u}+\mathbf{v}\) are equal to zero, so it is an element of \(V_{5}\text{.}\)

    Answer.

    Yes, \(V_{5}\) is a subspace of \(\mathbb{R}^{3}\text{.}\)

Hint.
Recall what it means for a collection of vectors to form a subspace: Definition 3.4.1.
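The closure computations in the solutions above, for instance for \(V_3\text{,}\) can also be checked symbolically. A sketch of ours using sympy: the membership constraint \(u_3 + u_1 - 2u_2 = 0\) is linear, and the two identities printed below are exactly why \(V_3\) is closed under addition and scalar multiplication.

```python
from sympy import symbols, expand

u1, u2, u3, v1, v2, v3, lam = symbols('u1 u2 u3 v1 v2 v3 lambda')

# Membership in V3 means the constraint  u3 + u1 - 2*u2  vanishes.
def constraint(a, b, c):
    return c + a - 2 * b

# Closed under addition: the constraint of u + v is the sum of the
# constraints of u and v, so it vanishes whenever both of them do.
lhs = constraint(u1 + v1, u2 + v2, u3 + v3)
print(expand(lhs - (constraint(u1, u2, u3) + constraint(v1, v2, v3))))  # 0

# Closed under scalars: the constraint of lam*u is lam times that of u.
print(expand(constraint(lam*u1, lam*u2, lam*u3) - lam*constraint(u1, u2, u3)))  # 0
```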

2.

Suppose \(V \) and \(W \) are subspaces of \(\mathbb{R}^n \text{.}\) Let \(V \cap W \) be all vectors which are in both \(V \) and \(W \text{.}\) Show that \(V \cap W \) is a subspace also.
Hint.
See Definition 3.4.1 for the definition of a subspace. Use the fact that both \(V\) and \(W\) have these properties.
Solution.
We have to check that \(V \cap W \) satisfies the properties listed in Definition 3.4.1:
  1. Non-empty: We know that \(V\) and \(W\) are both subspaces of \(\mathbb{R}^{n}\text{.}\) In particular, we know that they are both non-empty, so take any \(\mathbf{v}\in V\) and \(\mathbf{w}\in W\text{.}\) Since \(V\) and \(W\) are closed under scalar multiplication, we see that

    \begin{equation*} \mathbf{0} = 0 \cdot \mathbf{v} \in V \quad\text{ and }\quad \mathbf{0} = 0 \cdot \mathbf{w} \in W. \end{equation*}
    Thus, \(\mathbf{0}\) is in both \(V\) and \(W\text{,}\) so \(\mathbf{0}\in V \cap W \text{.}\)

  2. Closed under scalar multiplication: Suppose that \(\mathbf{u}\in V\cap W\) and \(\lambda\) is any scalar. By definition of \(V\cap W\text{,}\) we know that \(\mathbf{u} \in V\) and \(\mathbf{u} \in W\text{.}\) Since \(V\) and \(W\) are closed under scalar multiplication, we know that

    \begin{equation*} \lambda \cdot \mathbf{u} \in V \quad\text{ and }\quad \lambda \cdot \mathbf{u} \in W. \end{equation*}
    Thus, \(\lambda \cdot \mathbf{u} \in V \cap W\text{.}\) This shows that \(V\cap W\) is closed under scalar multiplication.

  3. Closed under addition: Assume that \(\mathbf{x}, \mathbf{y}\in V\cap W\text{.}\) In particular, \(\mathbf{x},\mathbf{y} \in V\) and \(\mathbf{x},\mathbf{y} \in W\text{.}\) Since \(V\) and \(W\) are closed under addition, we know that

    \begin{equation*} \mathbf{x}+\mathbf{y} \in V \quad\text{ and }\quad \mathbf{x}+\mathbf{y} \in W. \end{equation*}
    Thus, \(\mathbf{x}+\mathbf{y} \in V \cap W\text{.}\) This shows that \(V\cap W\) is closed under addition.
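The proof above is entirely abstract, but for concrete spans one can actually compute \(V \cap W\text{:}\) a vector lies in both spans exactly when it can be written as \(A\vec{s}\) and as \(B\vec{t}\text{,}\) where the columns of \(A\) span \(V\) and the columns of \(B\) span \(W\text{;}\) equivalently, when \((\vec{s}, \vec{t})\) is in the null space of \([A \mid -B]\text{.}\) A sympy sketch of ours, with a made-up pair of planes in \(\mathbb{R}^3\text{:}\)

```python
from sympy import Matrix

def intersection_span(A, B):
    """Vectors spanning Col(A) ∩ Col(B).  Each nullspace vector (s, t)
    of [A | -B] satisfies A*s = B*t, a vector lying in both spans.
    (When the columns of A and of B are each independent, the result
    is in fact a basis of the intersection.)"""
    M = A.row_join(-B)
    return [A * n[:A.cols, 0] for n in M.nullspace()]

V = Matrix([[1, 0], [0, 1], [0, 0]])  # columns span the xy-plane in R^3
W = Matrix([[0, 0], [1, 0], [0, 1]])  # columns span the yz-plane in R^3
print(intersection_span(V, W))        # [Matrix([[0], [1], [0]])]: the y-axis
```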

3.

Show that every proper subspace \(U \) of \(\mathbb{R}^2 \) is a line through the origin.
Hint 1.
Recall that \(U\) is a proper subspace of \(\mathbb{R}^2 \) if \(U\) is a subspace that is neither \(\{\mathbf{0}\}\) nor \(\mathbb{R}^2 \text{.}\)
Hint 2.
If \(\mathbf{d} \) is a nonzero vector in \(U \text{,}\) let \(\mathbb{R}\textbf{d} = \{ r\textbf{d} \ | \ r \text{ in } \mathbb{R}\} \) denote the line with direction vector \(\mathbf{d}. \) If \(\mathbf{u} \) is in \(U \) but not in \(\mathbb{R}\textbf{d} \text{,}\) argue that every vector in \(\mathbb{R}^2 \) is a linear combination of \(\mathbf{u} \) and \(\mathbf{d}. \)
Solution.
Since \(U\) is proper, there exists a non-zero vector in \(U\text{,}\) say \(\mathbf{d}\text{.}\) We want to show that \(U = \mathbb{R}\textbf{d}\text{,}\) where
\begin{equation*} \mathbb{R}\textbf{d}:=\{ r\textbf{d} \ | \ r \text{ in } \mathbb{R}\} \subseteq \mathbb{R}^{2}, \end{equation*}
a line through the origin in direction of \(\mathbf{d}\text{.}\) Assume otherwise: there exists \(\mathbf{u}\in U\) which is not in \(\mathbb{R}\textbf{d}\text{.}\) Note that, since \(U\) is a subspace, this means that \(\SpanS \{\mathbf{d} , \mathbf{u}\} \subseteq U\text{.}\) Consider the vector equation
\begin{equation} t \mathbf{d} + s \mathbf{u} = \mathbf{0}.\tag{3.4.1} \end{equation}
If \(s\neq 0\text{,}\) then this is equivalent to
\begin{equation*} \frac{-t}{s} \mathbf{d} = \mathbf{u}. \end{equation*}
If this were true for some choice of \(t, s\text{,}\) then that would mean \(\mathbf{u} \in \mathbb{R}\textbf{d},\) which contradicts our assumption. If \(s=0\text{,}\) then (3.4.1) is equivalent to
\begin{equation*} t \mathbf{d} = \mathbf{0}. \end{equation*}
Since \(\mathbf{d} \neq \mathbf{0}\) by assumption, this implies \(t=0\text{.}\) Thus, we have shown that the only solution to (3.4.1) is \((t,s)=(0,0)\text{.}\) This means exactly that \(\{\mathbf{d} , \mathbf{u}\}\) is linearly independent. Since \(\dim (\mathbb{R}^2) = 2\text{,}\) we conclude that \(\mathbb{R}^2 = \SpanS \{\mathbf{d} , \mathbf{u}\} \subseteq U \subsetneq \mathbb{R}^2\text{,}\) a contradiction to \(U\) being proper.