
MATH 731/732: MODERN ALGEBRA

University of Wisconsin–Milwaukee

2000 draft, printed September 3, 2008

Allen D. Bell

adbell@uwm.edu

Linear and Bilinear Algebra

In this appendix we review some important facts about linear algebra (the theory of vector spaces, linear maps, and bilinear forms) which will be used throughout the course. Because this material is review, we have not included many proofs. In the first section, we review basic facts, especially those concerning bases and dimension; at the end of the section we discuss direct sums and the rank of a linear map. In the second section, we discuss linear operators and matrices, and the connection between the two, including change of basis results and similar matrices. Then we review eigenvectors and canonical forms, including the Spectral Theorem and Jordan form. In the final section, we discuss bilinear forms and their matrices, again discussing change of basis and congruent matrices, and give the classification theorems for symmetric and alternating forms.

1. Bases and Dimension

We have already defined the notions of vector space and linear transformation between vector spaces. Let V be a vector space over a field F and let X be a subset of V. A linear combination of the elements of X is a vector ∑_{x∈X} λ_x x for scalars λ_x ∈ F, only finitely many of which are nonzero: the scalars λ_x are called the coefficients in the linear combination. Such a linear combination is called trivial if each λ_x is 0 and nontrivial otherwise. The span of X is the set of all linear combinations of the elements of X: it is precisely the subspace of V generated by X, and we denote it F X or span X. We say X spans V if F X = V. We say X is linearly independent if only the trivial linear combination of elements of X is 0 and linearly dependent if some nontrivial linear combination of elements of X is 0. A basis of V is a subset which spans V and is linearly independent. The canonical example is the space F^n of all n-tuples (which we also regard as n × 1 column vectors when convenient), which has basis e_1 = (1, 0, ..., 0, 0), ..., e_n = (0, 0, ..., 0, 1) (this is known as the standard basis).
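For concreteness, here is a small numerical sketch (Python with NumPy, taking F = R; the particular vectors are chosen only for illustration). A finite list of vectors in F^n, stacked as the columns of a matrix, is linearly independent exactly when the rank of that matrix equals the number of vectors, and spans F^n exactly when the rank equals n.

```python
import numpy as np

# Three vectors in R^3, written as the columns of a matrix.
# The third column is 2*(first) + 3*(second), so the set is dependent.
X = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 0.0]])

k = X.shape[1]                        # number of vectors
r = np.linalg.matrix_rank(X)          # dimension of their span

print("linearly independent:", r == k)       # False here
print("spans R^3:", r == X.shape[0])         # False here (the span has dimension 2)

# The standard basis e_1, e_2, e_3 gives the identity matrix, which has full rank.
E = np.eye(3)
print(np.linalg.matrix_rank(E) == 3)         # True
```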

Lemma 1.1. Let V be a vector space and X a subset of V. Then the following statements are equivalent.

(1) X is a basis of V.
(2) Every element of V can be written as a linear combination of the elements of X with unique coefficients.

Proof. Exercise.

Lemma 1.2. Let X be a subset of a vector space V and let v ∈ V.
(1) If v is in the span of X, then X ∪ {v} is linearly dependent and the span of X equals the span of X ∪ {v}.

(2) ⇒ (1). Let X be a maximal linearly independent subset of V and let v ∈ V. Since X ∪ {v} is linearly dependent, Lemma 1.2 implies v is in the span of X. Thus X spans V.
(3) ⇒ (1). Let X be a minimal spanning subset of V and suppose X is linearly dependent. By Lemma 1.2, there is some vector x ∈ X which can be written as a linear combination of the vectors in X \ {x}, and so again by the lemma, X and X \ {x} have the same span. This violates the minimality condition, whence X must be linearly independent.

We include a proof of the following theorem because it is an excellent example of the use of Zorn’s Lemma.

Theorem 1.4. Every vector space has a basis. In fact, every linearly independent subset of a vector space is contained in a basis and every spanning subset contains a basis.

Proof. The first sentence follows from the second, since every vector space contains some linearly independent set (e.g., ∅) and some spanning set (e.g., the whole space). Let X be a linearly independent subset of a vector space V and let I be the collection of all linearly independent subsets of V containing X. We make I into a poset by using the partial order ⊆. We will show every chain in I has an upper bound in I. Zorn's Lemma will then tell us that I contains a maximal element Y. By definition, Y ⊇ X,

and any subset larger than Y is linearly dependent. Thus by Theorem 1.3, Y is a basis of V.

Let C be a chain in I, and let Z = ∪C = ∪_{C∈C} C. If Z is not linearly independent, then there is some nontrivial linear combination of its elements which is 0. Such a linear combination can only involve finitely many elements z_1, ..., z_n with nonzero coefficients, so the set {z_1, ..., z_n} is linearly dependent. For each i, there is a set C_i ∈ C with z_i ∈ C_i. Since C is a chain, there is an index k such that C_k ⊇ C_i for all i = 1, ..., n, so z_1, ..., z_n ∈ C_k. But this implies C_k is linearly dependent, contrary to the assumption that C_k ∈ C ⊆ I. This proves Z must be linearly independent, so Z ∈ I. Now clearly Z contains every member of C, so Z is an upper bound for C. Thus the hypothesis of Zorn's Lemma is satisfied.

One approach to proving every spanning set X contains a basis is to look for a minimal element in the set S of all spanning sets of V contained in X , partially ordered by ⊆. However, “dualizing” the proof used above does not obviously work. We take a different tack. Redefine I to be the set of linearly independent subsets of the spanning set X. As above, I contains a maximal element B. We leave it as an exercise for the reader to show that B spans, and hence is a basis of, V.

(⇐) Let U be the subspace of V spanned by X and let π : V → V/U be the canonical projection. Then π agrees with the zero map on X, so these maps must be the same. Thus U = V, as required.

Theorem 1.8. Let V be a vector space and X be a subset of V. Then X is a basis of V if and only if for every vector space W and every collection of elements {w_x : x ∈ X} in W, there is a unique linear transformation φ : V → W with φ(x) = w_x for all x ∈ X.

Proof. (⇒) Suppose X is a basis of V, and define φ : V → W by φ(∑_{x∈X} λ_x x) = ∑_{x∈X} λ_x w_x: this makes sense by Lemma 1.1. It is easy to see that φ is a linear map with the required property. Moreover, since φ must be linear, this is the only possible way to define φ, and hence φ is unique.

(⇐) Suppose X has the stated property: by Lemma 1.7, X spans V (by uniqueness of φ). Suppose X is linearly dependent, let Y ⊂ X be a basis for V, and suppose x ∈ X \ Y. Let W be any nonzero vector space and choose w_x ∈ W nonzero and w_{x′} = 0 for all x′ ≠ x. Let φ be the map given in the hypothesis. Then both φ and the zero map are maps from V to W which agree on the basis Y, so they must be equal by Lemma 1.7. But φ(x) = w_x ≠ 0, so this is a contradiction. Thus X must be linearly independent.
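Theorem 1.8 is the reason a linear map out of F^n is "the same thing" as a matrix: the images of the standard basis vectors may be chosen freely, and they become the columns of the matrix. A minimal Python/NumPy sketch (F = R; the names w1, w2, w3, M are chosen just for this illustration):

```python
import numpy as np

# Theorem 1.8 for V = R^3 with the standard basis e_1, e_2, e_3:
# choosing arbitrary images w_1, w_2, w_3 in W = R^2 determines a unique
# linear map phi, whose matrix has the w_j as its columns.
w1, w2, w3 = np.array([1.0, 0.0]), np.array([2.0, -1.0]), np.array([0.0, 5.0])
M = np.column_stack([w1, w2, w3])        # 2 x 3 matrix of phi

def phi(v):
    return M @ v                         # phi(v) = sum_j v_j * w_j

e1 = np.array([1.0, 0.0, 0.0])
print(np.allclose(phi(e1), w1))          # True: phi(e_1) = w_1
print(phi(np.array([1.0, 1.0, 1.0])))    # w_1 + w_2 + w_3
```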

Theorem 1.9. Two vector spaces are isomorphic if and only if they have the same dimension.

Proof. (⇒) Clearly an isomorphism takes a basis to a basis, so it preserves dimension.
(⇐) This follows from the mapping property of the last theorem: we leave it as an exercise.

The following additive property of dimensions is quite useful.

Proposition 1.10. [Dimension Formula] Let V, W be vector spaces.
(1) If U is a subspace of V, then dim V = dim U + dim V/U.
(2) If φ : V → W is a linear map, then dim V = dim ker φ + dim im φ.

Proof. Exercise.

We define the rank of a linear map T to be dim im T.
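A quick numerical check of Proposition 1.10(2) for a map given by a matrix (Python/NumPy, F = R; the matrix A is an arbitrary choice). The nullity is computed here as the dimension of the domain minus the rank.

```python
import numpy as np

# Proposition 1.10(2) for phi: R^4 -> R^3 given by a matrix A:
# dim R^4 = rank(phi) + dim ker(phi).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])    # third row = first + second, so rank 2

n = A.shape[1]                          # dimension of the domain
rank = np.linalg.matrix_rank(A)         # dim im(phi)
nullity = n - rank                      # dim ker(phi)

print(rank, nullity, rank + nullity == n)   # 2 2 True
```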

Corollary 1.11. Let V be a finite dimensional vector space and let T : V → V be a linear map. Then the following are equivalent.

(1) T is one-to-one.
(2) T is onto.
(3) T is an isomorphism.

Proof. Exercise.

We leave the proof that A contains an r(A) × r(A) nonsingular submatrix as a moderately difficult exercise for the reader.

Remark. We can restrict the submatrices we consider in Proposition 1.13 to principal submatrices, that is, submatrices in which we retain the common elements of both rows i_1, ..., i_r and columns i_1, ..., i_r for some 1 ≤ i_1 < ··· < i_r ≤ n.

Corollary 1.14.
(1) The rank of any matrix equals the rank of its transpose.
(2) The nullity of a square matrix equals the nullity of its transpose.

Proof. Exercise.

Problem 1.A. Show that if A is an n × n matrix over a field F , then A is right invertible if and only if A is left invertible.

If V is a vector space and W_1, ..., W_n are subspaces, there is a natural linear map T : ⊕_{i=1}^n W_i → V defined by T((w_i)_{i=1}^n) = ∑_{i=1}^n w_i. When this map is an isomorphism, we say V is the internal direct sum of the subspaces W_1, ..., W_n, and we write V = ⊕_{i=1}^n W_i. (This means it is up to the reader to tell internal and external direct sums apart by the context.) This happens precisely when every element of V can be written uniquely in the form ∑_{i=1}^n w_i for some w_i ∈ W_i.

When W, W′ are subspaces of V, it is easy to see that V = W ⊕ W′ precisely when V = W + W′ and W ∩ W′ = {0}. In this case we can define a projection π : V → W by π(w + w′) = w: this will be a well-defined linear map with the property that π(w) = w for all w ∈ W.
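In coordinates, the projection onto W along W′ can be computed by expanding a vector in a basis adapted to the decomposition and discarding the W′-part. A short NumPy sketch for V = R^3 (the vectors u1, u2, u3 are chosen only for illustration); note that the matrix of the resulting map is idempotent, as in Problem 1.B below.

```python
import numpy as np

# V = R^3, W = span{u1, u2}, W' = span{u3}; these particular vectors satisfy V = W ⊕ W'.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
u3 = np.array([1.0, 1.0, 1.0])
B = np.column_stack([u1, u2, u3])     # basis of V adapted to the decomposition

def pi(v):
    """Projection onto W along W': expand v in u1, u2, u3 and drop the u3 part."""
    c = np.linalg.solve(B, v)
    return c[0] * u1 + c[1] * u2

v = np.array([2.0, 3.0, 4.0])
print(pi(v))                          # lies in W, and v - pi(v) is a multiple of u3

# The matrix of pi (columns are pi(e_j)) is idempotent, as in Problem 1.B.
P = np.column_stack([pi(col) for col in np.eye(3).T])
print(np.allclose(P @ P, P))          # True
```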

Problem 1.B. Let V be a vector space and let π : V → V be a linear map for which π^2 = π. Prove that V = im π ⊕ ker π.

The following fact is very useful.

Proposition 1.15. Let V be a vector space and let W be a subspace of V. Then there is a subspace W′ of V with V = W ⊕ W′.

Proof. Let Y be a basis for W, and extend it to a basis X for V; then take W′ to be the span of X \ Y.

2. Linear Operators and Matrices

If V is an n-dimensional vector space, then the set End_F(V) (or simply End(V)) of linear maps from V to V is a ring with pointwise addition and composition as multiplication. It is also an F-vector space of dimension n^2, and is in fact an F-algebra. The invertible elements of End(V) (that is, the linear isomorphisms) form a group under composition, which we call the general linear group GL(V). (We write M_n(F) for the ring of n × n matrices over F and GL_n(F) for its group of invertible elements.)

Theorem 2.1. Let V be an n-dimensional vector space, and fix an ordered basis for V. The map which sends a linear operator T : V → V to its matrix relative to this basis is an isomorphism of F-algebras from End(V) to M_n(F) and an isomorphism of groups from GL(V) to GL_n(F).

Proof. Exercise.

We also would like to know what happens when we pick a different basis. Let v_1, ..., v_n and w_1, ..., w_n be ordered bases for V. We define the change of basis matrix from v_1, ..., v_n to w_1, ..., w_n to be the n × n matrix P = (p_ij) defined by v_j = ∑_{i=1}^n p_ij w_i. That is, P is the matrix, relative to the new ordered basis w_1, ..., w_n, of the linear operator from V to V which takes w_i to v_i for all i = 1, ..., n, so its j-th column lists the coefficients for an expansion of v_j in terms of w_1, ..., w_n. We can write this symbolically as v = Pw. This leads us to w = P^{-1}v, whence we see that the change of basis matrix from w_1, ..., w_n to v_1, ..., v_n is P^{-1}.

Proposition 2.2. Let v_1, ..., v_n and w_1, ..., w_n be ordered bases for the vector space V, let P be the change of basis matrix, let T ∈ End(V), and let A, B be the matrices of T with respect to these two ordered bases. Then B = P A P^{-1}.

Proof. Moderately difficult exercise.
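Proposition 2.2 is easy to check numerically. In the sketch below (Python/NumPy, V = R^2), the columns of Mv and Mw hold the two ordered bases in standard coordinates and S is an operator written in standard coordinates; all of these names are ad hoc. With the convention above, the j-th column of P is the w-coordinate vector of v_j, so here P = Mw^{-1} Mv.

```python
import numpy as np

# Proposition 2.2 checked numerically in V = R^2.
Mv = np.array([[1.0, 1.0],
               [0.0, 1.0]])              # columns: ordered basis v_1, v_2
Mw = np.array([[2.0, 0.0],
               [1.0, 1.0]])              # columns: ordered basis w_1, w_2
S  = np.array([[3.0, 1.0],
               [0.0, 2.0]])              # an operator T in standard coordinates

A = np.linalg.inv(Mv) @ S @ Mv           # matrix of T in the basis v_1, v_2
B = np.linalg.inv(Mw) @ S @ Mw           # matrix of T in the basis w_1, w_2
P = np.linalg.inv(Mw) @ Mv               # change of basis matrix: column j = [v_j]_w

print(np.allclose(B, P @ A @ np.linalg.inv(P)))   # True
```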

Problem 2.A. Show that if P is the change of basis matrix from B to B′, then P^{-1} is the change of basis matrix from B′ to B.

We define two matrices A and B to be similar if B = P A P^{-1} for some invertible matrix P. This is clearly an equivalence relation: it is conjugacy. It is not hard to see that any invertible matrix P can be a change of basis matrix, and in fact, we may choose either the old or the new ordered basis arbitrarily, and there will be a unique other ordered basis that gives rise to P as the change of basis matrix. This leads to the following corollary.

Corollary 2.3. Let T be a linear operator on V and let A be the matrix of T relative to an ordered basis of V. Then a matrix represents T relative to some ordered basis of V if and only if it is similar to A.

Thus the problem of finding a “nice” matrix for T is equivalent to the problem of finding a “nice” matrix similar to a given one. This is the problem of finding a canonical form, such as Jordan form, for a matrix. The nicest form is diagonal: we say an operator T is diagonalizable if there is an ordered basis relative to which the matrix of T is diagonal.

An eigenvector for a linear operator T : V → V is a nonzero vector v ∈ V with the property that T(v) = λv for some scalar λ ∈ F. Similarly, an eigenvector for an n × n matrix A is a nonzero vector v ∈ F^n with Av = λv for some scalar λ ∈ F; in either case the scalar λ is called the corresponding eigenvalue.
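As a quick numerical sketch (Python/NumPy; the matrix A below is an arbitrary choice), a library routine returns eigenvalue/eigenvector pairs, and one can check Av = λv directly; when the eigenvectors form a basis, the operator is diagonalizable.

```python
import numpy as np

# Eigenvectors of a 2x2 real matrix: np.linalg.eig returns the eigenvalues
# and a matrix whose columns are corresponding eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    print(np.allclose(A @ v, lam * v))   # True for each eigenpair

# The eigenvectors here form a basis of R^2, so A is diagonalizable:
D = np.linalg.inv(vecs) @ A @ vecs
print(np.round(D, 10))                   # diagonal matrix of the eigenvalues
```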

The canonical example of an algebraically closed field is C. We say an n × n matrix A is a Jordan block with eigenvalue λ if a_ii = λ for i = 1, ..., n, a_{i,i+1} = 1 for i = 1, ..., n − 1 (just above the diagonal), and a_ij = 0 for all other i, j. We say a matrix A is in Jordan form if it is in block diagonal form with Jordan blocks as diagonal blocks.

Theorem 2.7. Every square matrix over F is similar to a matrix in Jordan form if and only if F is algebraically closed.

Proof. Very difficult exercise.
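Computer algebra systems can produce Jordan forms exactly over Q and its algebraic extensions. A short sketch assuming SymPy is available; its Matrix.jordan_form method returns P and J with M = P J P^{-1}.

```python
from sympy import Matrix

# A 2x2 rational matrix with characteristic polynomial (x - 2)^2 that is not 2*I,
# so its Jordan form is a single 2x2 Jordan block with eigenvalue 2.
M = Matrix([[3, 1],
            [-1, 1]])

P, J = M.jordan_form()             # M = P * J * P^{-1}
print(J)                           # Matrix([[2, 1], [0, 2]])
print(M == P * J * P.inv())        # True
```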

In order to compute the eigenvalues and eigenvectors of a linear operator or of a matrix, we need some additional functions. Recall that if A is an n × n matrix, its determinant is det A = ∑_{σ∈S_n} (sgn σ) a_{1,σ(1)} ··· a_{n,σ(n)} and its trace is tr A = ∑_{i=1}^n a_ii. These functions have many important properties. We recall some of them in the next result. First we make one more definition. For each i, j, let A_{i,j} be the matrix obtained from A by deleting the i-th row and j-th column. We define the adjugate of A to be the matrix adj A = ((−1)^{i+j} det A_{j,i})_{i,j}.

Proposition 2.8. Let A, B be n × n matrices and let λ ∈ F.
(1) det A^t = det A and tr A^t = tr A.
(2) det λA = λ^n det A and tr λA = λ tr A.
(3) det AB = det A det B and tr(A + B) = tr A + tr B.
(4) A · adj A = adj A · A = (det A)I.
(5) A^{-1} exists if and only if det A ≠ 0, and in this case det A^{-1} = (det A)^{-1}.
(6) tr AB = tr BA.
(7) If P is invertible, then det P A P^{-1} = det A and tr P A P^{-1} = tr A.

Proof. Moderately difficult exercise.

This result gives us a formula for A^{-1} when it exists, namely A^{-1} = (det A)^{-1} adj A. This formula also gives rise to a formula for solving systems of n linear equations in n unknowns. If we have a system ∑_{j=1}^n a_ij x_j = b_i of n equations (for i = 1, ..., n) in the n variables x_1, ..., x_n, we call A = (a_ij)_{i,j} the matrix of the system. This system has a unique solution if and only if det A ≠ 0, and in this case we can explicitly write down the solution as follows. Let A^{(j)} be the matrix we obtain from A by replacing the j-th column of A by the column vector (b_1, ..., b_n)^T. Then x_j = det A^{(j)} / det A. This is known as Cramer's rule.
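Cramer's rule is easy to try out numerically. The NumPy sketch below solves a small system both ways, by the determinant formula and by a direct solver, and checks that the answers agree (the particular A and b are arbitrary choices).

```python
import numpy as np

# Cramer's rule for a 3x3 system A x = b with det A != 0, compared with a direct solve.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

detA = np.linalg.det(A)                  # nonzero, so the solution is unique
x = np.empty(3)
for j in range(3):
    Aj = A.copy()
    Aj[:, j] = b                         # replace the j-th column by b
    x[j] = np.linalg.det(Aj) / detA      # x_j = det A^(j) / det A

print(np.allclose(x, np.linalg.solve(A, b)))   # True
```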

Since similar matrices have the same trace and same determinant, it makes sense to define the trace and determinant of a linear operator from a finite dimensional vector space to itself as the trace and determinant of any matrix representing it. One can actually give another characterization of trace and determinant over an algebraically closed field: using the Jordan form, one sees that the trace is the sum of the eigenvalues and the determinant is their product, each counted with multiplicity.