Last lecture we saw that in order to find the vectors "stretched" by the operator with matrix A, we need to solve the characteristic equation

det(A − λI) = 0, (1)

which gives us the numbers λi — the coefficients showing how the vectors are scaled after applying the operator. Now we give the following definition.
Definition 1.1. Let V be a vector space, and let A be a linear operator on V. A nonzero vector x is called an eigenvector of the operator A if there exists a number λ, called an eigenvalue, such that A(x) = λx.
So, our goal is to find eigenvectors, since the following proposition holds:
Proposition 1.2. Let V be an n-dimensional vector space, and let A be a linear operator on V. If A has n linearly independent eigenvectors, then the matrix of A is diagonal in the basis consisting of these eigenvectors.
So far we know how to find the λi's — the eigenvalues of the operator. To find the eigenvectors, we need to solve the system

(A − λiI)x = 0 (2)

for every eigenvalue λi found. We will give an example of computing eigenvalues and eigenvectors.
Example 1.3. Let

A = ( 1 −3
      1  5 ).

Let's compute its eigenvalues and eigenvectors. We have

A − λI = ( 1 − λ    −3
             1    5 − λ ),

so

det(A − λI) = (1 − λ)(5 − λ) + 3 = 5 − 6λ + λ^2 + 3 = λ^2 − 6λ + 8.

The roots of the characteristic equation λ^2 − 6λ + 8 = 0 are λ1 = 2 and λ2 = 4. Now we'll find the eigenvectors corresponding to these eigenvalues.
λ = 2. Subtracting λ = 2 from the diagonal, we get the following matrix and system:

( −1 −3
   1  3 )

−x − 3y = 0
 x + 3y = 0.

From this system it follows that x = −3y, so every vector of the form (−3c, c) with c ≠ 0, e.g. (−3, 1), is an eigenvector corresponding to the eigenvalue λ = 2.
λ = 4. Subtracting λ = 4 from the diagonal, we get the following matrix and system:

( −3 −3
   1  1 )

−3x − 3y = 0
  x +  y = 0.

From this system it follows that x = −y, so every vector of the form (−c, c) with c ≠ 0, e.g. (−1, 1), is an eigenvector corresponding to the eigenvalue λ = 4.
So, in the basis consisting of the vectors e′1 = (−3, 1) and e′2 = (−1, 1), the matrix of the operator has the form

( 2 0
  0 4 ).

Now we can check our formula D = C⁻¹AC, where D is the diagonal form of the matrix and C is the change-of-basis matrix whose columns are the eigenvectors. We have

C = ( −3 −1          C⁻¹ = −1/2 (  1  1
       1  1 ),                    −1 −3 ),

so

C⁻¹AC = ( 2 0
          0 4 ) = D.
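We can verify this computation with a short Python sketch (using NumPy; the code and variable names are ours, not part of the lecture):

    import numpy as np

    # The matrix from Example 1.3.
    A = np.array([[1.0, -3.0],
                  [1.0,  5.0]])

    # Its eigenvalues should be 2 and 4.
    print(sorted(np.linalg.eigvals(A).real))   # [2.0, 4.0]

    # Change-of-basis matrix C: columns are the eigenvectors (-3, 1) and (-1, 1).
    C = np.array([[-3.0, -1.0],
                  [ 1.0,  1.0]])

    # D = C^{-1} A C should be the diagonal matrix diag(2, 4).
    D = np.linalg.inv(C) @ A @ C
    print(np.round(D, 10))                     # [[2. 0.] [0. 4.]]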
In the same way we can compute eigenvalues and eigenvectors for larger matrices, but this requires solving equations of degree higher than 2. Since we have no convenient general formulae for such equations, in practice we guess roots of the characteristic equation, or compute them numerically, as sketched below.
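Here is a minimal sketch of the numerical route (the 3 × 3 matrix is our own illustration, not one from the lecture):

    import numpy as np

    # An arbitrary 3x3 matrix; its characteristic polynomial has degree 3.
    B = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [0.0, 0.0, 5.0]])

    # Instead of guessing roots by hand, compute the eigenvalues numerically.
    eigenvalues, eigenvectors = np.linalg.eig(B)
    print(eigenvalues)   # [2. 3. 5.] -- the diagonal entries, since B is triangular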
This theorem (stating that A and C⁻¹AC have the same characteristic polynomial for every invertible C) allows us to define the characteristic polynomial of the operator without choosing a particular basis. Now our goal is to understand whether the operator is diagonalizable or not. Of course, we can compute its eigenvalues. If there are n different eigenvalues, the following theorem shows that in this case there are n linearly independent eigenvectors, and the basis with respect to which the operator is diagonal is exactly the basis consisting of these eigenvectors.
Theorem 2.5. Eigenvectors corresponding to different eigenvalues are linearly independent.
Proof. The proof goes by induction on the number of eigenvalues. For a single eigenvalue the claim holds, since an eigenvector is nonzero by definition. Now suppose λ1, λ2, ..., λk are distinct eigenvalues whose corresponding eigenvectors are linearly independent, i.e. if e1, e2, ..., ek are eigenvectors such that A(ei) = λiei for all i = 1, ..., k, and

d1e1 + d2e2 + ··· + dkek = 0,

then di = 0 for all i. Let us add another eigenvalue λk+1 with corresponding eigenvector ek+1, such that A(ek+1) = λk+1ek+1. We'll prove that the vectors e1, e2, ..., ek, ek+1 are still linearly independent. Consider a linear combination of them which is equal to 0:
c1e1 + c2e2 + ··· + ckek + ck+1ek+1 = 0. (3)
Now we can apply the operator A to both sides of this equality:
A(c1e1) + A(c2e2) + ··· + A(ckek) + A(ck+1ek+1) = 0.
This is equivalent to
c1A(e1) + c2A(e2) + ··· + ckA(ek) + ck+1A(ek+1) = 0,
and since they are eigenvectors, i.e. A(ei) = λiei, we have
c1λ1e1 + c2λ2e2 + ··· + ckλkek + ck+1λk+1ek+1 = 0. (4)
Now let's multiply equality (3) by λk+1 and subtract it from (4). We get:
c1(λ1 − λk+1)e1 + c2(λ2 − λk+1)e2 + ··· + ck(λk − λk+1)ek = 0
(note that the term with ek+1 is gone!). But λk+1 ≠ λi for i = 1, ..., k. So if some ci ≠ 0, we would get a nontrivial linear combination of e1, e2, ..., ek equal to zero, and the vectors e1, e2, ..., ek would not be linearly independent. But they are linearly independent! Thus c1 = c2 = ··· = ck = 0, and then (3) reduces to ck+1ek+1 = 0; since ek+1 ≠ 0, also ck+1 = 0. Hence the vectors e1, e2, ..., ek, ek+1 are linearly independent.
Now we can state the main corollary of this theorem.
Corollary 2.6. Let A be a linear operator on an n-dimensional space V. If the characteristic polynomial of A has n different roots, then A is diagonalizable, and the diagonalizing basis consists of eigenvectors.
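Here is a quick numerical illustration of the corollary (a sketch reusing the matrix from Example 1.3, whose two eigenvalues are distinct):

    import numpy as np

    A = np.array([[1.0, -3.0],
                  [1.0,  5.0]])

    eigenvalues, C = np.linalg.eig(A)        # columns of C are eigenvectors
    print(eigenvalues)                       # [2. 4.] -- two distinct roots
    # Rank 2 means the eigenvectors are linearly independent, so they form
    # a basis and A is diagonalizable, as the corollary predicts.
    print(np.linalg.matrix_rank(C))          # 2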
Now we will see that even if there are fewer than n different roots, a basis of eigenvectors may still exist.
Example 2.7. Let

A = ( 1 0
      0 1 ).

Then

A − λI = ( 1 − λ    0
             0    1 − λ ),

so

pA(λ) = (λ − 1)^2.

Thus, there exists only one eigenvalue, λ = 1. Subtracting λ = 1 from the diagonal elements of A we get the zero matrix. So every vector is an eigenvector of A, and thus of course there exists a basis consisting of eigenvectors, e.g. e1 = (1, 0) and e2 = (0, 1).
Example 2.8. Let

A = ( 1 1
      0 1 ).

Then

A − λI = ( 1 − λ    1
             0    1 − λ ),

so

pA(λ) = (λ − 1)^2.

Thus, there exists only one eigenvalue, λ = 1. Subtracting λ = 1 from the diagonal elements of A we get

( 0 1
  0 0 ).

The corresponding system is

0·x1 + x2 = 0
0·x1 + 0·x2 = 0.

So all eigenvectors have the form (c, 0), and there are no 2 linearly independent eigenvectors. Hence there is no basis consisting of eigenvectors, and this operator is not diagonalizable.
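The contrast between Examples 2.7 and 2.8 can also be seen numerically (a sketch using NumPy; for the defective matrix the solver returns two nearly parallel eigenvectors):

    import numpy as np

    for M in (np.array([[1.0, 0.0], [0.0, 1.0]]),    # Example 2.7
              np.array([[1.0, 1.0], [0.0, 1.0]])):   # Example 2.8
        eigenvalues, vectors = np.linalg.eig(M)
        # The rank of the eigenvector matrix counts the linearly independent
        # eigenvectors: 2 for the first matrix, only 1 for the second.
        print(eigenvalues, np.linalg.matrix_rank(vectors))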
3 Formulae for the characteristic polynomials of 2 × 2 and 3 × 3 matrices
Let

A = ( a b
      c d ).

Then the characteristic polynomial is equal to:

det(A − λI) = det ( a − λ    b
                      c    d − λ )
            = (a − λ)(d − λ) − bc = λ^2 − (a + d)λ + (ad − bc) = λ^2 − (tr A)λ + det A.
Recall that by tr A we denote the sum of the diagonal elements of A, the trace of the matrix A.
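As a quick check of this formula, the following sketch (with an arbitrary 2 × 2 matrix of our own) compares the coefficients of the characteristic polynomial with −tr A and det A:

    import numpy as np

    A = np.array([[2.0, 7.0],
                  [1.0, 4.0]])

    # np.poly(A) returns the characteristic polynomial's coefficients,
    # which for a 2x2 matrix should be [1, -(tr A), det A].
    print(np.poly(A))                           # [ 1. -6.  1.]
    print(1.0, -np.trace(A), np.linalg.det(A))  # 1.0 -6.0 1.0 (up to rounding)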