Section 5.1 Eigenvalues and eigenvectors

We have seen that linear transformations (or, equivalently, matrices) can be fairly complicated. In this chapter we consider what happens when a linear transformation acts on a vector in a very simple way: by stretching the vector without changing its direction. We will see that this leads to a powerful method for understanding some matrices in great detail.

For the material in this chapter to work out nicely it is necessary for us to allow the word "scalar" to mean "complex number", and to allow matrices and vectors with complex number entries. For that reason we will often speak of vectors in \(\mathbb{C}^n\) rather than \(\mathbb{R}^n\text{.}\) In fact, we will very often start with matrices with real number entries, but sometimes complex numbers will arise even in that case.

Everything that we have done so far in the course works just as well over the complex numbers as over the real numbers, except for anything involving the dot product. Fortunately, we will have no use for the material based on the dot product in this chapter.

Subsection 5.1.1 Definitions and examples

Definition 5.1.1.

Let \(T : \mathbb{C}^n \to \mathbb{C}^n\) be a linear transformation, and let \(\lambda\) be a scalar. A vector \(\vec{v}\) in \(\mathbb{C}^n\) is called an eigenvector for \(T\text{,}\) with eigenvalue \(\lambda\text{,}\) if \(T(\vec{v}) = \lambda \vec{v}\) and \(\vec{v} \neq \vec{0}\text{.}\)

Similarly, if \(A\) is an \(n \times n\) matrix, then \(\vec{v}\) is an eigenvector of \(A\) with eigenvalue \(\lambda\) if \(A\vec{v} = \lambda \vec{v}\) and \(\vec{v}\neq \vec{0}\text{.}\)

In both contexts, we say that \(\vec{v}\) is an eigenvector if it is an eigenvector for some eigenvalue \(\lambda\text{,}\) and similarly we say that \(\lambda\) is an eigenvalue if there is some eigenvector \(\vec{v}\) for \(\lambda\text{.}\)

In Section 4.1 we saw how to translate back and forth between the language of linear transformations and the language of matrices. Since most of the time it is easier to work with matrices, that is what we will focus on here. Any time we want to talk about eigenvalues or eigenvectors of a linear transformation \(T\) we will instead work with the matrix \([T]\text{.}\)

Notice that the equation \(A\vec{v} = \lambda\vec{v}\) can only have a chance of being true if \(A\) is a square matrix: If \(A\) is \(m \times n\) and \(\vec{v}\) is in \(\mathbb{C}^n\) then \(A\vec{v}\) is a vector in \(\mathbb{C}^m\) while \(\lambda \vec{v}\) is a vector in \(\mathbb{C}^n\text{,}\) so to have \(A\vec{v} = \lambda\vec{v}\) we must have \(m=n\text{.}\)

Example 5.1.2.

Let \(A = \begin{bmatrix}2 \amp -1 \\ -4 \amp 5\end{bmatrix}\text{,}\) let \(\vec{v} = \begin{bmatrix}-1 \\ 4\end{bmatrix}\) and \(\vec{w} = \begin{bmatrix}2\\3\end{bmatrix}\text{.}\) Then

\begin{equation*} A\vec{v} = \begin{bmatrix}2 \amp -1 \\ -4 \amp 5\end{bmatrix}\begin{bmatrix}-1 \\ 4\end{bmatrix} = \begin{bmatrix}-6 \\ 24\end{bmatrix} = 6\vec{v}\text{.} \end{equation*}

Thus \(\vec{v}\) is an eigenvector for \(A\) with eigenvalue \(6\text{.}\)

On the other hand,

\begin{equation*} A\vec{w} = \begin{bmatrix}2 \amp -1 \\ -4 \amp 5\end{bmatrix}\begin{bmatrix}2\\3\end{bmatrix} = \begin{bmatrix}1 \\ 7\end{bmatrix}\text{.} \end{equation*}

You can check that the equation \(\begin{bmatrix}1\\7\end{bmatrix} = \lambda \begin{bmatrix}2\\3\end{bmatrix}\) has no solutions, so there is no scalar \(\lambda\) such that \(A\vec{w} = \lambda\vec{w}\text{,}\) and thus \(\vec{w}\) is not an eigenvector for \(A\text{.}\)
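Checks like these are easy to automate. As an aside, here is a minimal sketch in Python (assuming the numpy library is available; this is not part of the text's development, just a way to double-check the arithmetic above):

    import numpy as np

    A = np.array([[2, -1], [-4, 5]])
    v = np.array([-1, 4])
    w = np.array([2, 3])

    print(A @ v)   # [-6 24], which equals 6*v, so v is an eigenvector with eigenvalue 6
    print(6 * v)   # [-6 24]
    print(A @ w)   # [1 7], which is not a scalar multiple of w = [2 3]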

Note 5.1.3.

In what follows it will be very useful to re-write the equation \(A\vec{v} = \lambda\vec{v}\) as an equation where the right side is \(\vec{0}\text{.}\) We can subtract \(\lambda\vec{v}\) from both sides to get \(A\vec{v} - \lambda\vec{v} = \vec{0}\text{.}\) At this point it is tempting to factor the \(\vec{v}\) on the left side of the equation, and write \((A-\lambda)\vec{v} = \vec{0}\text{.}\) However, this is nonsense, because \(A - \lambda\) does not make sense. To write something that is correct we must first re-write \(A\vec{v} - \lambda \vec{v}\) as \(A\vec{v} - (\lambda I_n) \vec{v}\text{.}\) Now both multiplications are multiplications of an \(n \times n\) matrix by a vector, and it is correct to factor this expression and obtain \((A-\lambda I_n)\vec{v} = \vec{0}\text{.}\)
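The same pitfall arises in numerical software. In numpy, for instance (an assumption here, not something the text relies on), subtracting a scalar from a matrix subtracts it from every entry, which is not what \(A - \lambda I_n\) means; a short sketch of the difference:

    import numpy as np

    A = np.array([[2.0, -1.0], [-4.0, 5.0]])
    lam = 6.0

    print(A - lam)              # subtracts 6 from EVERY entry: not A - lambda*I
    print(A - lam * np.eye(2))  # subtracts 6 along the diagonal only: this is A - lambda*I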

Subsection 5.1.2 Finding eigenvalues

Given a matrix \(A\text{,}\) we'd like to find all of the eigenvalues of \(A\) and their corresponding eigenvectors. We start by finding the eigenvalues.

By definition, \(\lambda\) is an eigenvalue of \(A\) if and only if there is some non-zero \(\vec{v}\) such that \(A\vec{v} = \lambda\vec{v}\text{,}\) or equivalently, \((A-\lambda I_n)\vec{v} = \vec{0}\text{.}\) Such a vector exists if and only if \(\NullSp(A-\lambda I_n)\) contains a non-zero vector. By Theorem 4.6.20 this happens if and only if \(\det(A - \lambda I_n) = 0\text{.}\) We record this conclusion as Theorem 5.1.4: \(\lambda\) is an eigenvalue of \(A\) if and only if \(\det(A - \lambda I_n) = 0\text{.}\)

Example 5.1.5.

Let \(A = \begin{bmatrix}1 \amp 4 \amp -2 \\ 3 \amp 2 \amp 0 \\ 0 \amp 0 \amp -2\end{bmatrix}\text{.}\) We will find all of the eigenvalues of \(A\text{.}\) Using Theorem 5.1.4 that means that we are looking for all numbers \(x\) such that \(\det(A - xI_3) = 0\text{.}\) So we calculate:

\begin{align*} \det(A - xI_3) \amp = \det\begin{bmatrix}1-x \amp 4 \amp -2 \\ 3 \amp 2-x \amp 0 \\ 0 \amp 0 \amp -2-x\end{bmatrix}\\ \amp = -x^3+x^2+16x+20 \\ \amp = -(x-5)(x+2)^2 \end{align*}

We now see that there are two eigenvalues, namely \(5\) and \(-2\text{.}\)

In particular, we have shown that for any \(\lambda\) other than \(5\) and \(-2\) it is impossible to solve the equation \(A\vec{v} = \lambda\vec{v}\) with \(\vec{v} \neq \vec{0}\text{!}\)
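If you'd like to confirm a calculation like this numerically, the following sketch (again assuming numpy) computes the eigenvalues of \(A\) directly. Note that numpy's np.poly uses the convention \(\det(xI - A)\text{,}\) which differs from our \(\chi_A(x)\) by a sign when \(n\) is odd.

    import numpy as np

    A = np.array([[1, 4, -2], [3, 2, 0], [0, 0, -2]])

    print(np.linalg.eigvals(A))  # expect 5 and -2 (with -2 appearing twice)
    print(np.poly(A))            # [1, -1, -16, -20], i.e. x^3 - x^2 - 16x - 20 = -chi_A(x)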

You might have noticed that at the end of the previous example we needed to find the roots of a polynomial, and also that one of the eigenvalues appeared in the factorization of that polynomial more than once. We capture those phenomena with some definitions.

Definition 5.1.6.

Let \(A\) be an \(n \times n\) matrix. The characteristic polynomial of \(A\) is the polynomial \(\chi_A(x) = \det(A - xI_n)\text{.}\)

If \(\lambda\) is a scalar, then the algebraic multiplicity of \(\lambda\) as an eigenvalue of \(A\text{,}\) denoted \(\alg_A(\lambda)\text{,}\) is the multiplicity with which \(\lambda\) appears as a root of \(\chi_A(x)\text{.}\)

Let \(A = \begin{bmatrix}0 \amp -1 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp -1 \amp 0\end{bmatrix}\text{.}\) The characteristic polynomial is

\begin{equation*} \chi_A(x) = \det(A-xI_4) = \det\begin{bmatrix}-x \amp -1 \amp 0 \amp 0 \\ 1 \amp -x \amp 0 \amp 0 \\ 0 \amp 0 \amp -x \amp 1 \\ 0 \amp 0 \amp -1 \amp -x\end{bmatrix} = x^4+2x^2+1 = (x-i)^2(x+i)^2\text{.} \end{equation*}

The eigenvalues are therefore \(i\) and \(-i\text{,}\) each with algebraic multiplicity \(2\text{.}\) The algebraic multiplicities add up to \(4\text{,}\) matching the fact that \(A\) is \(4\times 4\text{,}\) but if we considered only real number eigenvalues we wouldn't find any at all.
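Numerical libraries handle this automatically: numpy, for example, returns complex eigenvalues even for a real input matrix. A minimal check of this example (assuming numpy):

    import numpy as np

    A = np.array([[0, -1, 0, 0],
                  [1,  0, 0, 0],
                  [0,  0, 0, 1],
                  [0,  0, -1, 0]])

    # A has real entries, but its eigenvalues are i and -i, each appearing twice
    print(np.linalg.eigvals(A))  # approximately [1j, -1j, 1j, -1j], in some order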

For triangular matrices calculating determinants is easy, and so finding eigenvalues is also easy. Indeed, Theorem 5.1.9 says that the eigenvalues of a triangular matrix are exactly its diagonal entries, repeated according to their algebraic multiplicities.

Suppose that \(A\) is upper triangular, say \(A = \begin{bmatrix}\lambda_1 \amp * \amp * \amp \cdots \amp * \\ 0 \amp \lambda_2 \amp * \amp \cdots \amp * \\ 0 \amp 0 \amp \lambda_3 \amp \cdots \amp * \\ \vdots \amp \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \cdots \amp \lambda_n\end{bmatrix}\text{.}\) Then \(A - xI_n = \begin{bmatrix}\lambda_1 - x \amp * \amp * \amp \cdots \amp * \\ 0 \amp \lambda_2 - x\amp * \amp \cdots \amp * \\ 0 \amp 0 \amp \lambda_3 -x \amp \cdots \amp * \\ \vdots \amp \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \cdots \amp \lambda_n - x\end{bmatrix}\text{.}\) By Fact 4.5.11 we have

\begin{equation*} \chi_A(x) = \det(A - xI_n) = (\lambda_1 - x)(\lambda_2 - x)(\lambda_3 - x) \cdots (\lambda_n - x)\text{.} \end{equation*}

The proof when \(A\) is lower triangular is essentially the same.
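A quick numerical illustration (assuming numpy, with an arbitrary upper triangular matrix chosen just for this sketch):

    import numpy as np

    # an upper triangular matrix; the entries above the diagonal are arbitrary
    A = np.array([[3, 5, 2],
                  [0, -1, 8],
                  [0, 0, 7]])

    print(np.linalg.eigvals(A))  # 3, -1, 7 (in some order): exactly the diagonal entries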

Subsection 5.1.3 Finding eigenvectors

Now that we know how to find the eigenvalues of a matrix, we turn to finding the eigenvectors corresponding to each eigenvalue. Given a matrix \(A\) and a scalar \(\lambda\text{,}\) we want to find the non-zero vectors \(\vec{v}\) such that \(A\vec{v} = \lambda \vec{v}\text{.}\) Re-writing this equation as \((A-\lambda I_n)\vec{v} = \vec{0}\) we see that we are looking for the non-zero vectors in \(\NullSp(A - \lambda I_n)\text{.}\) The space \(\NullSp(A - \lambda I_n)\) appears so often in this context that it gets its own name.

Definition 5.1.10.

Let \(A\) be an \(n \times n\) matrix, and let \(\lambda\) be a scalar. Then we define the \(\lambda\)-eigenspace of \(A\) to be \(E_A(\lambda) = \NullSp(A - \lambda I_n)\text{.}\)

Note 5.1.11.

The vectors in \(E_A(\lambda)\) are not exactly the \(\lambda\)-eigenvectors of \(A\text{,}\) because \(\vec{0}\) is not an eigenvector but it does appear in \(E_A(\lambda)\text{.}\) However, this is the only difference. That is, for a non-zero vector \(\vec{v}\text{,}\) we have \(\vec{v}\) in \(E_A(\lambda)\) if and only if \(\vec{v}\) is a \(\lambda\)-eigenvector for \(A\text{.}\)

Example 5.1.12.

Let \(A = \begin{bmatrix}1 \amp 4 \amp -2 \\ 3 \amp 2 \amp 0 \\ 0 \amp 0 \amp -2\end{bmatrix}\text{.}\) In Example 5.1.5 we showed that the eigenvalues of \(A\) are \(5\) (with algebraic multiplicity \(1\)) and \(-2\) (with algebraic multiplicity \(2\)). Find a basis for \(E_A(-2)\text{.}\)

Solution.

We are asked to find a basis for \(E_A(-2) = \NullSp(A - (-2)I_3)\text{.}\) We have already seen how to find a basis for the null space of a matrix (see Example 4.6.17), so we apply that technique to the matrix \(A - (-2)I_3\text{.}\)

\begin{equation*} A - (-2)I_3 = \begin{bmatrix}3 \amp 4 \amp -2 \\ 3 \amp 4 \amp 0 \\ 0 \amp 0 \amp 0\end{bmatrix} \to \begin{bmatrix}1 \amp 4/3 \amp 0 \\ 0 \amp 0 \amp 1 \\ 0 \amp 0 \amp 0\end{bmatrix}\text{.} \end{equation*}

From here we see that \(\begin{bmatrix}x\\y\\z\end{bmatrix}\) is in \(E_A(-2) = \NullSp(A-(-2)I_3)\) if and only if \(x = -4/3 y\) and \(z = 0\text{,}\) which gives us

\begin{equation*} \begin{bmatrix}x\\y\\z\end{bmatrix} = \begin{bmatrix}-4/3y \\ y \\ 0\end{bmatrix} = y\begin{bmatrix}-4/3 \\ 1 \\0\end{bmatrix}\text{.} \end{equation*}

Therefore the single vector \(\begin{bmatrix}-4/3 \\ 1 \\0\end{bmatrix}\) is a basis for \(E_A(-2)\text{.}\)
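For a numerical cross-check, scipy's null_space routine (assuming scipy is installed) returns an orthonormal basis of a null space; here it produces a single unit vector spanning the same line as \(\begin{bmatrix}-4/3 \\ 1 \\ 0\end{bmatrix}\text{.}\)

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1, 4, -2], [3, 2, 0], [0, 0, -2]])

    # an orthonormal basis for NullSp(A - (-2)I) = E_A(-2)
    print(null_space(A - (-2) * np.eye(3)))  # one column, proportional to (-4/3, 1, 0)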

Definition 5.1.13.

Let \(A\) be an \(n \times n\) matrix, and let \(\lambda\) be an eigenvalue of \(A\text{.}\) The geometric multiplicity of \(\lambda\text{,}\) written \(\geo_A(\lambda)\text{,}\) is defined to be \(\geo_A(\lambda) = \dim(E_A(\lambda))\text{.}\)

In general, the geometric and algebraic multiplicities of an eigenvalue are not necessarily the same, but they sometimes are. In Section 5.2 we will see that marvellous things happen when they turn out to be the same for all of the eigenvalues of a matrix. Even though the two multiplicities of an eigenvalue may be different, they are not entirely unrelated; see Theorem 5.1.16.

Example 5.1.14.

For good measure, let's do an example where we go through the whole process from start to finish. Let \(A = \begin{bmatrix} 5 \amp 3 \amp -6 \amp -5 \\ -5 \amp -11 \amp 5 \amp 0 \\ -4 \amp -12 \amp 3 \amp -5 \\ 6 \amp 18 \amp -11 \amp -1\end{bmatrix}\text{.}\) Find all eigenvalues of \(A\text{.}\) For each eigenvalue, find the algebraic multiplicity, a basis for the corresponding eigenspace, and the geometric multiplicity.

Solution.

We start by finding and factoring the characteristic polynomial.

\begin{align*} \chi_A(x) \amp = \det(A - xI_4) \\ \amp = \det\begin{bmatrix} 5-x \amp 3 \amp -6 \amp -5 \\ -5 \amp -11-x \amp 5 \amp 0 \\ -4 \amp -12 \amp 3-x \amp -5 \\ 6 \amp 18 \amp -11 \amp -1-x\end{bmatrix} \\ \amp = x^4 + 4x^3 -44x^2 -96x +576 \\ \amp = (x-4)^2(x+6)^2\text{.} \end{align*}

Now we know that the eigenvalues are \(4\) and \(-6\text{,}\) and that each has algebraic multiplicity \(2\text{.}\)

Next we pick one of the eigenvalues and look for eigenvectors. Let's start with the eigenvalue \(4\text{.}\) We want to find a basis for \(E_A(4) = \NullSp(A - 4I_4)\text{,}\) so we row-reduce:

\begin{equation*} A - 4I_4 = \begin{bmatrix} 1 \amp 3 \amp -6 \amp -5 \\ -5 \amp -15 \amp 5 \amp 0 \\ -4 \amp -12 \amp -1 \amp -5 \\ 6 \amp 18 \amp -11 \amp -5\end{bmatrix} \to \begin{bmatrix}1 \amp 3 \amp 0 \amp 1 \\ 0 \amp 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 0 \amp 0\\ 0 \amp 0 \amp 0 \amp 0\end{bmatrix}\text{.} \end{equation*}

We've arrived at the equations \(x+3y+w=0\) and \(z+w=0\text{,}\) so vectors in this eigenspace look like:

\begin{equation*} \begin{bmatrix}x\\y\\z\\w\end{bmatrix} = \begin{bmatrix}-3y-w \\ y \\ -w \\ w\end{bmatrix} = y\begin{bmatrix}-3\\1\\0\\0\end{bmatrix} + w\begin{bmatrix}-1\\0\\-1\\1\end{bmatrix}\text{.} \end{equation*}

We have thus found that a basis for \(E_A(4)\) is \(\left\{\begin{bmatrix}-3\\1\\0\\0\end{bmatrix}, \begin{bmatrix}-1\\0\\-1\\1\end{bmatrix}\right\}\text{.}\) Since this basis has \(2\) vectors in it we conclude that \(\geo_A(4) = \dim(E_A(4)) = 2\text{.}\)

Now we turn to the other eigenvalue, \(-6\text{,}\) and repeat the process.

\begin{equation*} A - (-6)I_4 = \begin{bmatrix} 11 \amp 3 \amp -6 \amp -5 \\ -5 \amp -5 \amp 5 \amp 0 \\ -4 \amp -12 \amp 9 \amp -5 \\ 6 \amp 18 \amp -11 \amp 5\end{bmatrix} \to \begin{bmatrix}1 \amp 0 \amp 0 \amp -1 \\ 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp -1 \\ 0 \amp 0 \amp 0 \amp 0\end{bmatrix}\text{.} \end{equation*}

This time the equations we get are \(x-w=0\text{,}\) \(y=0\text{,}\) \(z-w=0\text{,}\) so vectors in \(E_A(-6)\) have this form:

\begin{equation*} \begin{bmatrix}x\\y\\z\\w\end{bmatrix} = \begin{bmatrix}w\\0\\w\\w\end{bmatrix} = w\begin{bmatrix}1\\0\\1\\1\end{bmatrix}\text{.} \end{equation*}

Therefore \(\left\{\begin{bmatrix}1\\0\\1\\1\end{bmatrix}\right\}\) is a basis for \(E_A(-6)\text{,}\) and since this basis contains only one vector we have \(\geo_A(-6) = 1\text{.}\)
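Here is a sketch (assuming numpy) that checks both the eigenvalues and the geometric multiplicities we just found, using the rank-nullity relationship \(\geo_A(\lambda) = \dim \NullSp(A - \lambda I_4) = 4 - \operatorname{rank}(A - \lambda I_4)\text{.}\)

    import numpy as np

    A = np.array([[ 5,   3,  -6, -5],
                  [-5, -11,   5,  0],
                  [-4, -12,   3, -5],
                  [ 6,  18, -11, -1]])

    print(np.linalg.eigvals(A))  # expect 4, 4, -6, -6 (in some order, up to rounding)

    # geometric multiplicity of lambda is 4 - rank(A - lambda*I)
    for lam in (4, -6):
        print(lam, 4 - np.linalg.matrix_rank(A - lam * np.eye(4)))  # 4 -> 2, -6 -> 1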

Subsection 5.1.4 Properties of eigenvalues and eigenvectors

In the material we've developed so far about eigenvalues and eigenvectors, you've seen that determinants and null spaces play prominent roles. You will hopefully be unsurprised that this means we're in a position to add something to the fundamental theorem!

The new items are (16) (the number \(0\) is not an eigenvalue of \(A\)) and (17) (\(E_A(0) = \{\vec{0}\}\)), which are equivalent to each other by the definitions of "eigenvalue", "eigenvector", and "eigenspace". To connect them to the other items on the list, notice that \(E_A(0) = \NullSp(A-0I) = \NullSp(A)\text{,}\) so (17) is just a rephrasing of (5).

We now state the promised relationship between \(\alg_A(\lambda)\) and \(\geo_A(\lambda)\text{,}\) though we won't prove it: Theorem 5.1.16 says that if \(\lambda\) is an eigenvalue of \(A\text{,}\) then \(1 \leq \geo_A(\lambda) \leq \alg_A(\lambda)\text{.}\)

Another useful fact is that if \(\vec{v}\) is an eigenvector of \(A\) with eigenvalue \(\lambda\text{,}\) then for every positive integer \(k\) the vector \(\vec{v}\) is an eigenvector of \(A^k\) with eigenvalue \(\lambda^k\text{.}\) A formal proof requires mathematical induction, but hopefully the following calculation (which proves the result for \(k=2\)) will convince you that you could continue this up to any \(k\text{:}\)

\begin{align*} A^2\vec{v} \amp = A(A\vec{v})\\ \amp = A(\lambda\vec{v}) \\ \amp = \lambda(A\vec{v}) \\ \amp = \lambda(\lambda\vec{v})\\ \amp = \lambda^2\vec{v} \end{align*}
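A quick numerical illustration using the matrix and eigenvector from Example 5.1.2 (a sketch assuming numpy):

    import numpy as np

    A = np.array([[2, -1], [-4, 5]])
    v = np.array([-1, 4])  # eigenvector of A with eigenvalue 6

    print(np.linalg.matrix_power(A, 3) @ v)  # [-216 864]
    print(6**3 * v)                          # [-216 864], i.e. A^3 v = 6^3 v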

Similarly, if \(A\) is invertible and \(\vec{v}\) is an eigenvector of \(A\) with eigenvalue \(\lambda\text{,}\) then \(\vec{v}\) is an eigenvector of \(A^{-1}\) with eigenvalue \(1/\lambda\text{.}\) We start with a calculation:

\begin{align*} \lambda(A^{-1}\vec{v}) \amp = A^{-1}(\lambda \vec{v}) \\ \amp = A^{-1}(A\vec{v})\\ \amp = (A^{-1}A)\vec{v}\\ \amp = I_n\vec{v}\\ \amp = \vec{v} \end{align*}

Now since \(A\) is assumed to be invertible we know that \(\lambda \neq 0\) (by Theorem 5.1.15), so we can divide both sides by \(\lambda\) to obtain

\begin{equation*} A^{-1}\vec{v} = \frac{1}{\lambda}\vec{v}\text{.} \end{equation*}
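Continuing the same numerical sketch (the matrix \(A\) from Example 5.1.2 is invertible, since \(\det(A) = 6 \neq 0\)):

    import numpy as np

    A = np.array([[2.0, -1.0], [-4.0, 5.0]])
    v = np.array([-1.0, 4.0])  # eigenvector of A with eigenvalue 6

    print(np.linalg.inv(A) @ v)  # approximately [-1/6, 2/3], i.e. (1/6) v
    print(v / 6)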

Finally, it will be useful in the next section to know that eigenvectors corresponding to distinct eigenvalues are linearly independent: if \(\vec{v_1}, \ldots, \vec{v_k}\) are eigenvectors of \(A\) with distinct eigenvalues \(\lambda_1, \ldots, \lambda_k\text{,}\) then \(\vec{v_1}, \ldots, \vec{v_k}\) are linearly independent. The proof technique is interesting, too!

For a contradiction, suppose that \(\vec{v_1}, \ldots, \vec{v_k}\) are linearly dependent. Let \(j\) be the smallest number such that \(\vec{v_1}, \ldots, \vec{v_j}\) are linearly dependent, so \(\vec{v_1}, \ldots, \vec{v_{j-1}}\) are linearly independent. Then there are scalars \(a_1, \ldots, a_j\text{,}\) not all zero, such that \(a_1\vec{v_1} + \cdots + a_{j-1}\vec{v_{j-1}} + a_j\vec{v_j} = \vec{0}\text{.}\) Because \(\vec{v_1}, \ldots, \vec{v_{j-1}}\) are linearly independent we cannot have \(a_j=0\text{,}\) so we can rearrange this equation to say \(\vec{v_j} = -\frac{a_1}{a_j}\vec{v_1} - \cdots - \frac{a_{j-1}}{a_j}\vec{v_{j-1}}\text{.}\) To make this slightly easier to work with, let \(c_1 = -a_1/a_j\text{,}\) \(c_2 = -a_2/a_j\text{,}\) and so on. Then the equation we have is

\begin{equation*} \vec{v_j} = c_1\vec{v_1} + \cdots + c_{j-1}\vec{v_{j-1}}\text{.} \end{equation*}

If we multiply both sides of the above equation by \(\lambda_j\) we have:

\begin{equation*} \lambda_j\vec{v_j} = c_1\lambda_j\vec{v_1} + \cdots + c_{j-1}\lambda_j\vec{v_{j-1}}\text{.} \end{equation*}

On the other hand, we also have:

\begin{align*} \lambda_j\vec{v_j} \amp = A\vec{v_j} \\ \amp = A(c_1\vec{v_1} + \cdots + c_{j-1}\vec{v_{j-1}}) \\ \amp = c_1A\vec{v_1} + \cdots + c_{j-1}A\vec{v_{j-1}}\\ \amp = c_1\lambda_1\vec{v_1} + \cdots + c_{j-1}\lambda_{j-1}\vec{v_{j-1}} \text{.} \end{align*}

Now subtracting these two results, we obtain:

\begin{equation*} \vec{0} = c_1(\lambda_1 - \lambda_j)\vec{v_1} + \cdots + c_{j-1}(\lambda_{j-1} - \lambda_j)\vec{v_{j-1}}\text{.} \end{equation*}

By assumption, \(\vec{v_1}, \ldots, \vec{v_{j-1}}\) are linearly independent. Thus all of the coefficients above must be \(0\text{;}\) that is, for each \(r\text{,}\) \(c_r(\lambda_r - \lambda_j) = 0\text{.}\) The assumption of the theorem was that the eigenvalues \(\lambda_1, \ldots, \lambda_k\) are distinct, so we know that \(\lambda_r - \lambda_j \neq 0\) for all \(r\text{.}\) Thus for every \(r\) we must have \(c_r = 0\text{.}\) But then

\begin{equation*} \vec{v_j} = c_1\vec{v_1} + \cdots + c_{j-1}\vec{v_{j-1}} = 0\vec{v_1} + \cdots + 0\vec{v_{j-1}} = \vec{0}\text{,} \end{equation*}

and this contradicts the assumption that \(\vec{v_j}\) is an eigenvector of \(A\text{,}\) since eigenvectors are non-zero by definition.
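Numerically, independence can be checked by stacking the vectors as columns and computing the rank of the resulting matrix. A sketch (assuming numpy) using one eigenvector for each of the distinct eigenvalues \(4\) and \(-6\) found in Example 5.1.14:

    import numpy as np

    v1 = np.array([-3, 1, 0, 0])  # eigenvector for eigenvalue 4
    v2 = np.array([ 1, 0, 1, 1])  # eigenvector for eigenvalue -6

    # rank 2 means the two columns are linearly independent, as the theorem predicts
    print(np.linalg.matrix_rank(np.column_stack([v1, v2])))  # 2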

Exercises 5.1.5 Exercises

1.

Find the eigenvalues and a basis for each eigenspace for the matrix
\begin{equation*} \begin{bmatrix} -6 \amp -92 \amp 12 \\ 0 \amp 0 \amp 0 \\ -2 \amp -31 \amp 4 \end{bmatrix} \text{.} \end{equation*}
Hint.
Follow the process demonstrated in Example 5.1.14.
Answer.
The eigenvalues are \(-2\) (with algebraic multiplicity \(1\)) and \(0\) (with algebraic multiplicity \(2\)). A basis for \(E_A(-2)\) is \(\left\{\begin{bmatrix}3\\0\\1\end{bmatrix}\right\}\) and a basis for \(E_A(0)\) is \(\left\{\begin{bmatrix}2\\0\\1\end{bmatrix}\right\}\text{.}\)

2.

Find the eigenvalues and a basis of each eigenspace for the matrix
\begin{equation*} \begin{bmatrix} -2 \amp -17 \amp -6 \\ 0 \amp 0 \amp 0 \\ 1 \amp 9 \amp 3 \end{bmatrix} \text{.} \end{equation*}
Hint.
Follow the process demonstrated in Example 5.1.14.
Answer.
The eigenvalues are \(1\) (with algebraic multiplicity \(1\)) and \(0\) (with algebraic multiplicity \(2\)). A basis for \(E_A(1)\) is \(\left\{\begin{bmatrix}-2\\0\\1\end{bmatrix}\right\}\) and a basis for \(E_A(0)\) is \(\left\{\begin{bmatrix}-3\\0\\1\end{bmatrix}\right\}\text{.}\)

3.

Find the eigenvalues and a basis for each eigenspace for the matrix
\begin{equation*} \begin{bmatrix} 9 \amp 2 \amp 8 \\ 2 \amp -6 \amp -2 \\ -8 \amp 2 \amp -5 \end{bmatrix} \text{.} \end{equation*}
Hint.
Follow the process demonstrated in Example 5.1.14.
Answer.
The eigenvalues are \(-3\text{,}\) \(-1\text{,}\) and \(2\text{,}\) each with algebraic multiplicity \(1\text{.}\) A basis for \(E_{-3}(A)\) is \(\left\{\begin{bmatrix}1\\2\\-2\end{bmatrix}\right\}\text{,}\) a basis for \(E_{-1}(A)\) is \(\left\{\begin{bmatrix}2\\2\\-3\end{bmatrix}\right\}\text{,}\) and a basis for \(E_2(A)\) is \(\left\{\begin{bmatrix}2\\1\\-2\end{bmatrix}\right\}\text{.}\)

4.

Find the eigenvalues and a basis of each eigenspace for the matrix \(A = \begin{bmatrix}-2 \amp -1 \\ 5 \amp 2\end{bmatrix}\text{.}\)
Hint.
The eigenvalues are complex. Follow the process demonstrated in Example 5.1.14, this time using complex numbers in your calculations when they show up.
Answer.
The eigenvalues are \(i\) and \(-i\text{.}\) A basis for \(E_A(i)\) is \(\left\{\begin{bmatrix}\frac{1}{5}(i-2) \\ 1\end{bmatrix}\right\}\text{.}\) A basis for \(E_A(-i)\) is \(\left\{\begin{bmatrix}-\frac{1}{5}(i+2)\\1\end{bmatrix}\right\}\text{.}\)
Solution.
We start by finding the characteristic polynomial.
\begin{equation*} \chi_A(x) = \det(A - xI) = \det\begin{bmatrix}-2-x \amp -1 \\ 5 \amp 2-x\end{bmatrix} = x^2+1\text{.} \end{equation*}
Therefore the eigenvalues are the solutions to \(x^2+1=0\text{,}\) which are \(i\) and \(-i\text{.}\) For the eigenvalue \(i\text{,}\) we calculate:
\begin{equation*} A-iI = \begin{bmatrix}-2-i \amp -1 \\ 5 \amp 2-i\end{bmatrix} \to \begin{bmatrix}1 \amp \frac{1}{5}(2-i) \\ 0 \amp 0\end{bmatrix}\text{.} \end{equation*}
Therefore a basis for \(E_A(i)\) is \(\left\{\begin{bmatrix}\frac{1}{5}(i-2) \\ 1\end{bmatrix}\right\}\text{.}\) For the eigenvalue \(-i\text{,}\) a similar process shows that a basis for \(E_A(-i)\) is \(\left\{\begin{bmatrix}-\frac{1}{5}(i+2)\\1\end{bmatrix}\right\}\text{.}\)

5.

Without doing any explicit computation, find the eigenvalues, along with their algebraic and geometric multiplicities, of the matrix
\begin{equation*} B = \begin{bmatrix} 0 \amp 0 \\ 0 \amp -3 \end{bmatrix} \text{.} \end{equation*}
Hint.
This matrix is diagonal, so in particular it is triangular.
Answer.
The eigenvalues are \(0\) and \(-3\text{,}\) with \(\alg(0)=\geo(0)=\alg(-3)=\geo(-3)=1\text{.}\)
Solution.
This matrix is diagonal, so in particular it is triangular. Theorem 5.1.9 tells us that the eigenvalues are the diagonal entries of the matrix and that they are repeated according to their algebraic multiplicities. Thus the eigenvalues are \(0\) and \(-3\text{,}\) each with algebraic multiplicity \(1\text{.}\) By Theorem 5.1.16 we have \(1\leq \geo(0) \leq \alg(0) =1\text{,}\) so \(\geo(0) = 1\text{,}\) and similarly \(1 \leq \geo(-3) \leq \alg(-3) = 1\text{,}\) so \(\geo(-3) = 1\) as well.

6.

Without doing any explicit computation, explain why there are no real eigenvalues for the matrix
\begin{equation*} B = \begin{bmatrix}0 \amp 1 \\ -1 \amp 0\end{bmatrix}\text{.} \end{equation*}
Hint.
Think about what happens geometrically when a vector in \(\mathbb{R}^2\) is multiplied by \(B\text{.}\)
Solution.
We can recognize this matrix as the matrix of rotation by the angle \(\theta = \pi/2\text{.}\) Geometrically, rotating a non-zero vector in \(\mathbb{R}^2\) by \(\pi/2\) produces a vector perpendicular to the original, and a perpendicular vector is never a scalar multiple of the original. Hence no non-zero vector can satisfy \(B\vec{v} = \lambda\vec{v}\) for a real number \(\lambda\text{,}\) so this matrix has no real eigenvalues.

7.

Given \(A = \begin{bmatrix} a \amp b \\ c \amp d\end{bmatrix}\text{,}\) show that:
  1. \(\chi_A(x) = x^2-tr(A)x+\det(A)\text{,}\) where \(tr(A) = a+d\) is called the trace of \(A\text{.}\)

  2. The eigenvalues of \(A\) are \(\frac{1}{2}[ (a+d) \pm \sqrt{(a-d)^2+4bc}]\text{.}\)

Hint.
For the first part, just carry out the calculation of \(\chi_A(x)\text{.}\) For the second part, remember that the eigenvalues of \(A\) are the roots of \(\chi_A(x)\text{.}\)
Solution.
  1. We use the definition of \(\chi_A(x)\) to carry out a calculation:

    \begin{align*} \chi_A(x) \amp = \det(A - xI_2) \\ \amp = \det\left(\begin{bmatrix}a-x \amp b \\ c \amp d-x\end{bmatrix}\right)\\ \amp = (a-x)(d-x)-bc\\ \amp = x^2 - (a+d)x + (ad-bc)\\ \amp = x^2 - tr(A)x + \det(A) \end{align*}

  2. The eigenvalues of \(A\) are the roots of \(\chi_A(x)\text{.}\) In the previous part we saw that \(\chi_A(x) = x^2 - (a+d)x + (ad-bc)\text{,}\) and applying the quadratic formula then gives that the roots are

    \begin{equation*} \frac{(a+d)\pm\sqrt{(a+d)^2 - 4(ad-bc)}}{2} = \frac{(a+d) \pm \sqrt{(a-d)^2+4bc}}{2}, \end{equation*}
    as desired.
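If you want to double-check this algebra, here is a sketch using Python's sympy library (assuming it is available) to expand \(\chi_A(x)\) symbolically and solve for its roots:

    import sympy as sp

    a, b, c, d, x = sp.symbols('a b c d x')
    A = sp.Matrix([[a, b], [c, d]])

    chi = sp.expand((A - x * sp.eye(2)).det())
    print(chi)               # equals x**2 - (a + d)*x + (a*d - b*c), matching part 1
    print(sp.solve(chi, x))  # the two roots given by the quadratic formula, matching part 2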