linear algebra friedberg solutions, Exercises of Linear Algebra

Complete solutions for "Linear Algebra" by Friedberg
Contents

  • 1 Vector Spaces
    • 1.1 Introduction
    • 1.2 Vector Spaces
    • 1.3 Subspaces
    • 1.4 Linear Combinations and Systems of Linear Equations
    • 1.5 Linear Dependence and Linear Independence
    • 1.6 Bases and Dimension
    • 1.7 Maximal Linearly Independent Subsets
  • 2 Linear Transformations and Matrices
    • 2.1 Linear Transformations, Null Spaces, and Ranges
    • 2.2 The Matrix Representation of a Linear Transformation
    • 2.3 Composition of Linear Transformations and Matrix Multiplication
    • 2.4 Invertibility and Isomorphisms
    • 2.5 The Change of Coordinate Matrix
    • 2.6 Dual Spaces
    • 2.7 Homogeneous Linear Differential Equations with Constant Coefficients
  • 3 Elementary Matrix Operations and Systems of Linear Equations
    • 3.1 Elementary Matrix Operations and Elementary Matrices
    • 3.2 The Rank of a Matrix and Matrix Inverses
    • 3.3 Systems of Linear Equation—Theoretical Aspects
    • 3.4 Systems of Linear Equations—Computational Aspects
  • 4 Determinants
    • 4.1 Determinants of Order 2
    • 4.2 Determinants of Order n
    • 4.3 Properties of Determinants
    • 4.4 Summary—Important Facts about Determinants
    • 4.5 A Characterization of the Determinant
  • 5 Diagonalization
    • 5.1 Eigenvalues and Eigenvectors
    • 5.2 Diagonalizability
    • 5.3 Matrix Limits and Markov Chains
    • 5.4 Invariant Subspace and the Cayley-Hamilton Theorem
  • 6 Inner Product Spaces
    • 6.1 Inner Products and Norms
      • 6.2 The Gram-Schmidt Orthogonalization Process and Orthogonal Complements
    • 6.3 The Adjoint of a Linear Operator
    • 6.4 Normal and Self-Adjoint Operators
    • 6.5 Unitary and Orthogonal Operators and Their Matrices
    • 6.6 Orthogonal Projections and the Spectral Theorem
    • 6.7 The Singular Value Decomposition and the Pseudoinverse
    • 6.8 Bilinear and Quadratic Forms
    • 6.9 Einstein’s Special Theory of Relativity
    • 6.10 Conditioning and the Rayleigh Quotient
    • 6.11 The Geometry of Orthogonal Operators
  • 7 Canonical Forms
    • 7.1 The Jordan Canonical Form I
    • 7.2 The Jordan Canonical Form II
    • 7.3 The Minimal Polynomial
    • 7.4 The Rational Canonical Form
  • GNU Free Documentation License
      1. APPLICABILITY AND DEFINITIONS
      2. VERBATIM COPYING
      3. COPYING IN QUANTITY
      4. MODIFICATIONS
      5. COMBINING DOCUMENTS
      6. COLLECTIONS OF DOCUMENTS
      7. AGGREGATION WITH INDEPENDENT WORKS
      8. TRANSLATION
      9. TERMINATION
      10. FUTURE REVISIONS OF THIS LICENSE
      11. RELICENSING
    • ADDENDUM: How to use this License for your documents
  • Appendices

1.1 Introduction

  1. Let the four vertices of the parallelogram be A, B, C, D counterclockwise. Say x = AB⃗ and y = AD⃗. Then the line joining points B and D can be parametrized as x + s(y − x), where s is in F, and the line joining points A and C as t(x + y), where t is in F. To find the intersection of the two lines we solve for s and t such that x + s(y − x) = t(x + y). Hence we have (1 − s − t)x = (t − s)y. But since x and y cannot be parallel, we have 1 − s − t = 0 and t − s = 0. So s = t = 1/2, and the intersection is the head of the vector (1/2)(x + y) emanating from A; by the previous exercise it is the midpoint of segment AC and of segment BD.
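A minimal sympy sketch of the computation above. The coordinates chosen for x and y are illustrative (not from the text); any non-parallel pair yields the same parameters.

```python
# Solve x + s(y - x) = t(x + y) for s and t, as in the argument above.
import sympy as sp

s, t = sp.symbols('s t')
x = sp.Matrix([3, 1])  # x = AB (hypothetical coordinates)
y = sp.Matrix([1, 2])  # y = AD (hypothetical coordinates)

sol = sp.solve(list(x + s*(y - x) - t*(x + y)), [s, t])
print(sol)  # {s: 1/2, t: 1/2}: the diagonals meet at their common midpoint
```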

1.2 Vector Spaces

  1. (a) Yes. It’s condition (VS 3). (b) No. If x, y are both zero vectors. Then by condition (VS 3) x = x + y = y. (c) No. Let e be the zero vector. We have 1e = 2 e. (d) No. It will be false when a = 0. (e) Yes. (f) No. It has m rows and n columns. (g) No. (h) No. For example, we have that x + (−x) = 0. (i) Yes. (j) Yes. (k) Yes. That’s the definition.
  2. It’s the 3 × 4 matrix with all entries =0.
  3. M13 = 3, M21 = 4, M22 = 5.
  4. (a)–(d) The answers are matrices, lost in this extraction. (e) 2x^4 + x^3 + 2x^2 − 2x + 10. (f) −x^3 + 7x^2 + 4. (g) 10x^7 − 30x^4 + 40x^2 − 15x. (h) 3x^5 − 6x^3 + 12x + 6.

6. Let M be the matrix describing the May inventory (its entries are lost in this extraction). Since all the entries have been doubled, 2M describes the inventory in June. Next, the matrix 2M − A describes the list of sold items. And the total number of sold items is the sum of all entries of 2M − A, which equals 24.
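A sketch of this bookkeeping; the matrices below are hypothetical, since the exercise's actual inventory matrices are not shown here (with those, the entries of 2M − A sum to 24).

```python
# 2M doubles the May inventory M; subtracting the matrix A leaves the
# items sold, as argued above. The matrices below are hypothetical.
import numpy as np

M = np.array([[1, 2], [3, 4]])  # hypothetical May inventory
A = np.array([[1, 1], [2, 3]])  # hypothetical matrix A
sold = 2 * M - A
print(sold.sum())  # total items sold (24 with the exercise's actual data)
```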

  1. It’s enough to check f ( 0 ) + g( 0 ) = 2 = h( 0 ) and f ( 1 ) + g( 1 ) = 6 = h( 1 ).
  2. By (VS 7) and (VS 8), we have (a + b)(x + y) = a(x + y) + b(x + y) = ax + ay + bx + by.
  3. For two zero vectors 0 and 0′, Thm 1.1 and the equalities 0 + x = x = 0′ + x imply 0 = 0′, where x is an arbitrary vector. If for a vector x we have two inverse vectors y and y′, then x + y = 0 = x + y′ implies y = y′. Finally we have 0a + 1a = (0 + 1)a = 1a = 0 + 1a, and so 0a = 0.
  4. The sum of two differentiable real-valued functions, or the product of a scalar and a differentiable real-valued function, is again a function of that kind. And the function f = 0 is the 0 of the vector space. Of course, here the field should be the real numbers.
  5. All conditions are easy to check because there is only one element.
  6. We have f(−t) + g(−t) = f(t) + g(t) and cf(−t) = cf(t) if f and g are both even functions. Furthermore, f = 0 is the zero vector. And the field here should be the real numbers.
  7. No. If it’s a vector space, we have 0(a 1 , a 2 ) = ( 0 , a 2 ) be the zero vector. But since a 2 is arbitrary, this is a contradiction to the uniqueness of zero vector.
  8. Yes. All the conditions are preserved when the field is the real numbers.
  9. No. Because the product of a real-valued vector and a complex scalar need not be a real-valued vector.
  10. Yes. All the conditions are preserved when the field is the rational numbers.
  11. No. Since 0(a1, a2) = (a1, 0) would be a zero vector, the zero vector would not be unique, so it cannot be a vector space.
  12. No. We have ((a1, a2) + (b1, b2)) + (c1, c2) = (a1 + 2b1 + 2c1, a2 + 3b2 + 3c2) but (a1, a2) + ((b1, b2) + (c1, c2)) = (a1 + 2b1 + 4c1, a2 + 3b2 + 9c2).

1.3 Subspaces

(Only fragments of the first solutions survive this extraction.) (f) [matrix lost] (g) (5 6 7). (h) [matrix lost], with tr = 2.

  1. Let M = aA + bB and N = aA^t + bB^t. Then we have Mij = aAij + bBij = Nji, and so M^t = N.
  2. We have (A^t)ij = Aji, and so ((A^t)^t)ij = (A^t)ji = Aij, that is, (A^t)^t = A.
  3. By the previous exercises we have (A + A^t)^t = A^t + (A^t)^t = A^t + A, and so it's symmetric.
  4. We have that tr(aA + bB) = ∑_{i=1}^n (aAii + bBii) = a ∑_{i=1}^n Aii + b ∑_{i=1}^n Bii = a tr(A) + b tr(B).
  5. If A is a diagonal matrix, we have Aij = 0 = Aji when i ≠ j.
  6. Just check whether it’s closed under addition and scalar multiplication and whether it contains 0. And here s and t are in R.

(a) Yes. It’s a line t( 3 , 1 , − 1 ). (b) No. It contains no ( 0 , 0 , 0 ). (c) Yes. It’s a plane with normal vector ( 2 , − 7 , 1 ). (d) Yes. It’s a plane with normal vector ( 1 , − 4 , − 1 ). (e) No. It contains no ( 0 , 0 , 0 ). (f) No. We have both (

5 , 0 ) and ( 0 ,

3 ) are elements of W 6 but their sum (

3 ) is not an element of W 6.

  1. We have W1 ∩ W3 = {0}, W1 ∩ W4 = W1, and W3 ∩ W4 is a line t(11, 3, −1).
  2. W1 is a subspace since it's a plane with normal vector (1, 1, ..., 1), though this should be checked carefully. And since 0 ∉ W2, W2 is not a subspace.
  3. No in general (but yes when n = 1), since W is not closed under addition: for example, when n = 2, (x^2 + x) + (−x^2) = x is not in W.
  4. Directly check that the sum of two upper triangular matrices and the product of a scalar and an upper triangular matrix are again upper triangular matrices. And of course the zero matrix is upper triangular.
  5. It’s closed under addition since (f + g)(s 0 ) = 0 + 0 = 0. It’s closed under scalar multiplication since cf (s 0 ) = c 0 = 0. And zero function is in the set.
  1. It’s closed under addition since the number of nonzero points of f +g is less than the number of union of nonzero points of f and g. It’s closed under scalar multiplication since the number of nonzero points of cf equals to the number of f. And zero function is in the set.
  2. Yes. The sum of two differentiable functions and the product of a scalar and a differentiable function are again differentiable. The zero function is differentiable.
  3. If f^(n) and g^(n) are the nth derivatives of f and g, then f^(n) + g^(n) is the nth derivative of f + g, and it is continuous if both f^(n) and g^(n) are continuous. Similarly, cf^(n) is the nth derivative of cf and it is continuous. This space has the zero function as the zero vector.
  4. There is only one condition different from those in Theorem 1.3. If W is a subspace, then 0 ∈ W implies W ≠ ∅. If W is a subset satisfying the conditions of this question, then we can pick x ∈ W since it's not empty, and the other conditions assure that 0x = 0 is an element of W.
  5. We may compare the conditions here with the conditions in Theorem 1.3. First let W be a subspace. Then cx is contained in W, and so is cx + y, if x and y are elements of W. Second, let W be a subset satisfying the condition of this question. Then by picking a = 1 or y = 0 we get the conditions in Theorem 1.3.
  6. It’s easy to say that is sufficient since if we have W 1 ⊂ W 2 or W 2 ⊂ W 1 then the union of W 1 and W 2 will be W 1 or W 2 , a space of course. To say it’s necessary we may assume that neither W 1 ⊂ W 2 nor W 2 ⊂ W 1 holds and then we can find some x ∈ W 1 ƒW 2 and y ∈ W 2 ƒW 1. Thus by the condition of subspace we have x + y is a vector in W 1 or in W 2 , say W 1. But this will make y = (x + y) − x should be in W 1. It will be contradictory to the original hypothesis that y ∈ W 2 ƒW 1.
  7. We have aiwi ∈ W for all i. And we can conclude inductively that a1w1, a1w1 + a2w2, a1w1 + a2w2 + a3w3, ... are in W.
  8. In a calculus course it is proven that {an + bn} and {can} converge. And the zero sequence, that is, the sequence with all entries zero, is the zero vector.
  9. The fact that each set is closed has been proved in the previous exercise. And the zero function is both an even function and an odd function.
  10. (a) We have (x1 + x2) + (y1 + y2) = (x1 + y1) + (x2 + y2) ∈ W1 + W2 and c(x1 + x2) = cx1 + cx2 ∈ W1 + W2 if x1, y1 ∈ W1 and x2, y2 ∈ W2. And we have 0 = 0 + 0 ∈ W1 + W2. Finally, W1 = {x + 0 ∶ x ∈ W1, 0 ∈ W2} ⊂ W1 + W2, and it's similar for the case of W2. (b) If U is a subspace containing both W1 and W2, then x + y must be a vector in U for all x ∈ W1 and y ∈ W2.

1.4 Linear Combinations and Systems of Linear Equations

  1. (a) Yes. Just pick every coefficient to be zero. (b) No. By definition it should be {0}. (c) Yes. Every subspace of which S is a subset contains span(S), and span(S) is a subspace. (d) No. This action may change the solution set of the system of linear equations. (e) Yes. (f) No. For example, 0x = 3 has no solution.
  2. (a) The original system is equivalent to

     x1 − x2 − 2x3 − x4 = −3
              x3 + 2x4 = 4
             4x3 + 8x4 = 16

     So the solution set is {(5 + s − 3t, s, 4 − 2t, t) ∶ s, t ∈ F}. (b) {(−2, −4, −3)}. (c) No solution. (d) {(−16 − 8s, 9 + 3s, s, 2) ∶ s ∈ F}. (e) {(−4 + 10s − 3t, 3 − 3s + 2t, r, s, 5) ∶ r, s, t ∈ F}. (f) {(3, 4, −2)}.
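A quick sympy check of part (a) on the reduced system (a sketch; x2 and x4 play the roles of the parameters s and t):

```python
# Verify the parametrized solution set of 2(a).
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
eqs = [x1 - x2 - 2*x3 - x4 + 3,  # x1 - x2 - 2x3 - x4 = -3
       x3 + 2*x4 - 4,            # x3 + 2x4 = 4
       4*x3 + 8*x4 - 16]         # 4x3 + 8x4 = 16 (redundant)
print(sp.linsolve(eqs, [x1, x2, x3, x4]))
# {(x2 - 3*x4 + 5, x2, 4 - 2*x4, x4)}, i.e. (5 + s - 3t, s, 4 - 2t, t)
```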

  1. (a) Yes. Solve the equation x1(1, 3, 0) + x2(2, 4, −1) = (−2, 0, 3) and we have the solution (x1, x2) = (4, −3); see the numeric check after this list. (b) Yes. (c) No. (d) No. (e) No. (f) Yes.
  2. (a) Yes. (b) No. (c) Yes. (d) Yes. (e) No. (f) No.
  3. (a) Yes. (b) No. (c) No.

(d) Yes. (e) Yes. (f) No. (g) Yes. (h) No.
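Membership in a span reduces to solvability of a linear system, as in 1(a) above. A minimal numeric check, using numpy's least-squares solver and testing that the residual vanishes:

```python
# Is (-2, 0, 3) a linear combination of (1, 3, 0) and (2, 4, -1)?
import numpy as np

A = np.column_stack([(1, 3, 0), (2, 4, -1)])  # generators as columns
b = np.array([-2, 0, 3])
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print(coeffs)                      # [ 4. -3.]
print(np.allclose(A @ coeffs, b))  # True, so the vector is in the span
```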

  1. For every (x1, x2, x3) ∈ F^3 we may assume

     y1(1, 1, 0) + y2(1, 0, 1) + y3(0, 1, 1) = (x1, x2, x3)

     and solve the system of linear equations. We get (x1, x2, x3) = (1/2)(x1 + x2 − x3)(1, 1, 0) + (1/2)(x1 − x2 + x3)(1, 0, 1) + (1/2)(−x1 + x2 + x3)(0, 1, 1).
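A symbolic sanity check of these coefficients (a sketch in sympy):

```python
# Verify that the three coefficients reproduce (x1, x2, x3).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
y1 = (x1 + x2 - x3) / 2   # coefficient of (1, 1, 0)
y2 = (x1 - x2 + x3) / 2   # coefficient of (1, 0, 1)
y3 = (-x1 + x2 + x3) / 2  # coefficient of (0, 1, 1)
combo = (y1 * sp.Matrix([1, 1, 0]) + y2 * sp.Matrix([1, 0, 1])
         + y3 * sp.Matrix([0, 1, 1]))
print(sp.expand(combo - sp.Matrix([x1, x2, x3])))  # Matrix([[0], [0], [0]])
```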

  1. For every (x1, x2, ..., xn) ∈ F^n we can write (x1, x2, ..., xn) = x1e1 + x2e2 + ⋯ + xnen.
  2. It’s similar to exercise 1.4.7.
  3. It’s similar to exercise 1.4.7.
  4. For x ≠ 0 the statement follows from the definition of linear combination, and the set is a line. For x = 0 both sides of the equation equal {0}, and the set is the origin.
  5. To prove it’s sufficient we can use Theorem 1.5 and then we know W = span(W ) is a subspace. To prove it’s necessary we can also use Theorem 1.5. Since W is a subspace contains W , we have span(W ) ⊂ W. On the other hand, it’s natural that span(W ) ⊃ W.
  6. To prove span(S1) ⊂ span(S2), let v ∈ span(S1). Then we can write v = a1x1 + a2x2 + ⋯ + anxn, where each xi is an element of S1 and hence of S2. But this means v is a linear combination of S2, and we complete the proof. If span(S1) = V, then span(S2) is a subspace containing span(S1) = V, so it must be V.
  7. We prove span(S1 ∪ S2) ⊂ span(S1) + span(S2) first. For v ∈ span(S1 ∪ S2) we have v = ∑_{i=1}^n aixi + ∑_{j=1}^m bjyj with xi ∈ S1 and yj ∈ S2. Since the first summation is in span(S1) and the second summation is in span(S2), we have v ∈ span(S1) + span(S2). For the converse, let u + v ∈ span(S1) + span(S2) with u ∈ span(S1) and v ∈ span(S2). We can write u + v = ∑_{i=1}^n aixi + ∑_{j=1}^m bjyj with xi ∈ S1 and yj ∈ S2, and this means u + v ∈ span(S1 ∪ S2).
  8. For v ∈ span(S1 ∩ S2) we may write v = ∑_{i=1}^n aixi with xi ∈ S1 and xi ∈ S2. So v is an element of both span(S1) and span(S2), and hence an element of span(S1) ∩ span(S2). For an example where equality holds, take S1 = S2 = {(1, 0)}; then the two sides are the same. For an example where the inclusion is proper, take S1 = {(1, 0), (0, 1)} and S2 = {(1, 1), (1, −1)}; then the left hand side is {0} while the right hand side is the whole plane R^2.
  1. Let Eij be the matrix whose only nonzero entry is a 1 in the ij position. Then {E11, E22} is a generating set.

1.5 Linear Dependence and Linear Independence

  2. (a) The equation x1(1, 1, 0) + x2(1, 0, 1) + x3(0, 1, 1) = 0 has only the trivial solution when F = R. (b) When F has characteristic 2, we have 1 + 1 = 0 and so (1, 1, 0) + (1, 0, 1) + (0, 1, 1) = (0, 0, 0); see the sketch after this list.
  3. It’s sufficient since if u = tv for some t ∈ F then we have u − tv = 0. While it’s also necessary since if au + bv = 0 for some a, b ∈ F with at least one of the two coefficients not zero then we may assume a ≠ 0 and u = − ba v.
  4. Pick v1 = (1, 1, 0), v2 = (1, 0, 0), v3 = (0, 1, 0). Then none of the three is a multiple of another, yet they are dependent since v1 − v2 − v3 = 0.
  5. Vectors in span(S) are linear combinations of vectors in S, and they all have different representations by the remark after the definition of linear independence. So there are 2^n representations and hence 2^n vectors.
  6. Since S1 is linearly dependent, there are finitely many vectors x1, x2, ..., xn in S1, and so in S2, such that a1x1 + a2x2 + ⋯ + anxn = 0 is a nontrivial representation. But that nontrivial representation is also a nontrivial representation for S2. And the Corollary is just the contrapositive of Theorem 1.6.
  7. (a) Sufficiency: If {u + v, u − v} is linearly independent, then a(u + v) + b(u − v) = 0 implies a = b = 0. Assuming cu + dv = 0, we can deduce that ((c + d)/2)(u + v) + ((c − d)/2)(u − v) = 0 and hence (c + d)/2 = (c − d)/2 = 0. This means c = d = 0 if the characteristic is not two. Necessity: If {u, v} is linearly independent, then au + bv = 0 implies a = b = 0. Assuming c(u + v) + d(u − v) = 0, we can deduce that (c + d)u + (c − d)v = 0, hence c + d = c − d = 0 and 2c = 2d = 0. This means c = d = 0 if the characteristic is not two. (b) Sufficiency: If au + bv + cw = 0, we have ((a + b − c)/2)(u + v) + ((a − b + c)/2)(u + w) + ((−a + b + c)/2)(v + w) = 0 and hence a = b = c = 0. Necessity: If a(u + v) + b(u + w) + c(v + w) = 0, we have (a + b)u + (a + c)v + (b + c)w = 0 and hence a = b = c = 0.
  8. Sufficiency: It’s natural that 0 is linearly dependent. If v is a linear combination of u 1 , u 2 ,... , un , say v = a 1 u 1 + a 2 u 2 + ⋯anun, then v − a 1 u 1 − a 2 u 2 − ⋯ − anun = 0 implies S is linearly dependent. Necessity: If S is linearly dependent and S ≠ { 0 } we have some nontrivial representation a 0 u 0 + a 1 u 1 + ⋯ + anun = 0 with at least one of the coefficients is zero, say a 0 = 0 without loss the generality. Then we can let v = u 0 = − (^) a^10 (a 1 u 1 + a 2 u 2 + ⋯ + anun).
  9. Sufficiency: If u1 = 0 then S is linearly dependent. If

     u_{k+1} ∈ span({u1, u2, ..., uk})

     for some k, say u_{k+1} = a1u1 + a2u2 + ⋯ + akuk, then a1u1 + a2u2 + ⋯ + akuk − u_{k+1} = 0 is a nontrivial representation. Necessity: If S is linearly dependent, there is some integer k such that there is a nontrivial representation a1u1 + a2u2 + ⋯ + akuk + a_{k+1}u_{k+1} = 0. Furthermore we may assume a_{k+1} ≠ 0; otherwise we may decrease k until a_{k+1} ≠ 0. Hence we have u_{k+1} = −(1/a_{k+1})(a1u1 + a2u2 + ⋯ + akuk), and so u_{k+1} ∈ span({u1, u2, ..., uk}).
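The characteristic-2 phenomenon in 2(b) above can be illustrated numerically (a sketch; arithmetic mod 2 stands in for a field of characteristic 2):

```python
# The three vectors sum to zero mod 2, so they are dependent over a
# field of characteristic 2, while over R the matrix has full rank.
import numpy as np

v = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
print(v.sum(axis=0) % 2)          # [0 0 0]: a nontrivial dependence mod 2
print(np.linalg.matrix_rank(v))   # 3: independent over R, as in part (a)
```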

  1. Sufficiency: We prove the contrapositive. If S is linearly dependent, we can find a nontrivial representation a1u1 + a2u2 + ⋯ + anun = 0. But then the finite set {u1, u2, ..., un} is a finite subset of S that is linearly dependent. Necessity: This is Theorem 1.6.
  2. Let C1, C2, ..., Cn be the columns of M. If a1C1 + a2C2 + ⋯ + anCn = 0, then we have an = 0 by comparing the nth entries. And inductively we have a_{n−1} = 0, a_{n−2} = 0, ..., a1 = 0.
  3. It’s similar to exercise 1.5.17.
  4. We have that a1A1^t + a2A2^t + ⋯ + akAk^t = 0 implies a1A1 + a2A2 + ⋯ + akAk = 0. Then we have a1 = a2 = ⋯ = ak = 0.
  5. If {f, g} were linearly dependent, we would have f = kg. But this means 1 = f(0) = kg(0) = k × 1, and hence k = 1. Then e^r = f(1) = kg(1) = e^s means r = s, a contradiction.

1.6 Bases and Dimension

  1. (a) No. The empty set is its basis. (b) Yes. This is a result of the Replacement Theorem. (c) No. For example, the set of all polynomials has no finite basis. (d) No. R^2 has {(1, 0), (1, 1)} and {(1, 0), (0, 1)} as bases. (e) Yes. This is the Corollary after the Replacement Theorem. (f) No. It's n + 1. (g) No. It's m × n. (h) Yes. This is the Replacement Theorem. (i) No. For S = {1, 2}, a subset of R, 5 = 1 × 1 + 2 × 2 = 3 × 1 + 1 × 2. (j) Yes. This is Theorem 1.11. (k) Yes. They are {0} and V respectively. (l) Yes. This is Corollary 2 after the Replacement Theorem.
  2. It’s enough to check there are 3 vectors and the set is linear independent.

(a) Yes.

[Parts of the solutions here are lost in the extraction.] ...write the vectors as the rows of a matrix M and do Gaussian elimination (the matrix itself is lost). The rows with all entries 0 can then be omitted^1, so {u1, u3, u6, u7} would be the basis for W (the answer here is not unique).

^1 Which rows have all entries 0 is important here, so the operation is not quite standard Gaussian elimination, since we cannot change the order of two rows here.

  1. If a1u1 + a2u2 + a3u3 + a4u4 = (a1, a1 + a2, a1 + a2 + a3, a1 + a2 + a3 + a4) = 0, we have a1 = 0 by comparing the first entries, and then a2 = a3 = a4 = 0. For the second question we can solve (a1, a2, a3, a4) = a1u1 + (a2 − a1)u2 + (a3 − a2)u3 + (a4 − a3)u4.
  2. The polynomials found by the Lagrange interpolation formula are the answer. Each has the smallest degree, since the set of polynomials given by the Lagrange interpolation formula is a basis.

(a) −4x^2 − x + 8. (b) −3x + 12. (c) −x^3 + 2x^2 + 4x − 5. (d) 2x^3 − x^2 − 6x + 15.
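A sketch of the Lagrange construction used above. The data points below are hypothetical, chosen to lie on the answer polynomial of (a); the exercise's actual points are not shown in this extraction.

```python
# Build the interpolating polynomial as a sum of Lagrange basis
# polynomials l_i, where l_i(x_j) is 1 if i == j and 0 otherwise.
import numpy as np

def lagrange(points):
    xs, ys = zip(*points)
    n = len(xs)
    poly = np.zeros(n)
    for i in range(n):
        li = np.array([1.0])  # coefficients of l_i, highest degree first
        for j in range(n):
            if j != i:
                li = np.polymul(li, [1.0, -xs[j]]) / (xs[i] - xs[j])
        poly = poly + ys[i] * li
    return poly

# Three hypothetical points on -4x^2 - x + 8:
print(lagrange([(0, 8), (1, 3), (-1, 5)]))  # [-4. -1.  8.]
```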

  1. If {u, v} is a basis, then the dimension of V is two. So it's enough to check that both {u + v, au} and {au, bv} are linearly independent (recall a and b are nonzero). Assuming s(u + v) + t(au) = (s + ta)u + sv = 0, we have s + ta = s = 0 and hence s = t = 0. Assuming s(au) + t(bv) = 0, we have sa = tb = 0 and hence s = t = 0.
  2. If {u, v, w} is a basis, then the dimension of V is three. So it's enough to check that {u + v + w, v + w, w} is linearly independent. Assuming a(u + v + w) + b(v + w) + cw = au + (a + b)v + (a + b + c)w = 0, we have a = a + b = a + b + c = 0 and hence a = b = c = 0.
  3. We can subtract twice the first equation from the second. Then we have

     x1 − 2x2 + x3 = 0
          x2 − x3 = 0

     Let x3 = s; hence x2 = s and x1 = s. The solution set is {(s, s, s) = s(1, 1, 1) ∶ s ∈ R}, and the basis is {(1, 1, 1)}.
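The same answer falls out of sympy's nullspace routine (a quick check):

```python
# The solution space of the homogeneous system is the nullspace
# of its coefficient matrix.
import sympy as sp

A = sp.Matrix([[1, -2, 1],
               [0, 1, -1]])  # the reduced system above
print(A.nullspace())          # [Matrix([[1], [1], [1]])]: basis {(1, 1, 1)}
```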

  1. For W1, observe that by setting a2 = p, a3 = q, a4 = s, and a5 = t we can solve a1 = q + s. So W1 = {(q + s, p, q, s, t) = p(0, 1, 0, 0, 0) + q(1, 0, 1, 0, 0) + s(1, 0, 0, 1, 0) + t(0, 0, 0, 0, 1) ∶ p, q, s, t ∈ F}, and

     {(0, 1, 0, 0, 0), (1, 0, 1, 0, 0), (1, 0, 0, 1, 0), (0, 0, 0, 0, 1)}

     is a basis. The dimension is four. Similarly for W2 we may set a4 = s and a5 = t. Then we have a1 = −t, a2 = a3 = a4 = s, and

     W2 = {(−t, s, s, s, t) = s(0, 1, 1, 1, 0) + t(−1, 0, 0, 0, 1) ∶ s, t ∈ F}.

     Hence {(0, 1, 1, 1, 0), (−1, 0, 0, 0, 1)} is a basis of W2. The dimension is two.
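A sketch checking both dimensions, with the constraint matrices read off from the parametrizations above (a1 = a3 + a4 for W1; a2 = a3 = a4 and a1 = −a5 for W2):

```python
# The dimension of a solution space equals the number of
# nullspace basis vectors of its constraint matrix.
import sympy as sp

C1 = sp.Matrix([[1, 0, -1, -1, 0]])      # a1 - a3 - a4 = 0
C2 = sp.Matrix([[0, 1, -1, 0, 0],        # a2 - a3 = 0
                [0, 0, 1, -1, 0],        # a3 - a4 = 0
                [1, 0, 0, 0, 1]])        # a1 + a5 = 0
print(len(C1.nullspace()), len(C2.nullspace()))  # 4 2
```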


[The beginning of this solution is lost in the extraction; it ends as follows.] ...independent set with size greater than that of β. So we can conclude that dim(W1 ∩ W2) = dim(W1). For the converse, if we have W1 ⊂ W2, then W1 ∩ W2 = W1 and hence they have the same dimension.

  1. Let α and β be bases of W1 and W2 respectively. By the definition, both α and β are bases of finite size.

(a) The condition is that v ∈ W1. If v ∉ W1 = span(α), then α ∪ {v} would be an independent set with size greater than that of α, and by the Replacement Theorem dim(W1) < dim(W2). For the converse, if v ∈ W1 = span({v1, v2, ..., vk}), we actually have W2 = span({v1, v2, ..., vk, v}) = span({v1, v2, ..., vk}) = W1, and hence they have the same dimension. (b) Since W1 ⊂ W2, in general we have dim(W1) ≤ dim(W2).

  1. By exercise 1.5.18 we have that β = {f^(i)}_{i=0,1,...,n} is independent, since the polynomials all have different degrees. And since dim(Pn(R)) = n + 1, we can conclude that β is a basis and hence generates the space Pn(R).
  2. It would be m + n, since (α, 0) ∪ (0, β) would be a basis of Z if α and β are bases of V and W respectively, where (α, 0) = {(u, 0) ∈ Z ∶ u ∈ α} and (0, β) = {(0, u) ∈ Z ∶ u ∈ β}.
  3. It would be n, since {x − a, x^2 − a^2, ..., x^n − a^n} is a basis.
  4. The dimensions of W1 ∩ Pn(F) and W2 ∩ Pn(F) are ⌊(n + 1)/2⌋ and ⌈(n + 1)/2⌉ respectively, since {x^i ∶ 0 ≤ i ≤ n, i odd} and {x^j ∶ 0 ≤ j ≤ n, j even} are bases of the two spaces respectively.
  5. If α is a basis of V over C, then α ∪ iα is a basis of V over R, where iα = {iv ∈ V ∶ v ∈ α}.
  6. (a) Using the notation of the Hint, if we assume

     a1u1 + ⋯ + akuk + b1v1 + ⋯ + bmvm + c1w1 + ⋯ + cnwn = 0,

     then we have

     v = b1v1 + ⋯ + bmvm = −(a1u1 + ⋯ + akuk) − (c1w1 + ⋯ + cnwn)

     contained in both W1 and W2 and hence in W1 ∩ W2. But if v ≠ 0, it could be expressed as v = a′1u1 + ⋯ + a′kuk, and then b1v1 + ⋯ + bmvm − (a′1u1 + ⋯ + a′kuk) = 0 would be a nontrivial representation, contradicting that {u1, ..., uk, v1, ..., vm} is a basis of W1. Hence we have

     v = b1v1 + ⋯ + bmvm = −(a1u1 + ⋯ + akuk) − (c1w1 + ⋯ + cnwn) = 0,

     and since {u1, ..., uk, v1, ..., vm} and {u1, ..., uk, w1, ..., wn} are bases, this means ai = bj = cl = 0 for all indices i, j, and l. So the set β = {u1, ..., uk, v1, ..., vm, w1, ..., wn} is linearly independent. Furthermore, for every x + y ∈ W1 + W2 with x ∈ W1 and y ∈ W2 we can find representations x = ∑_{i=1}^k diui + ∑_{i=1}^m bivi and y = ∑_{i=1}^k d′iui + ∑_{i=1}^n ciwi. Hence we have

     x + y = ∑_{i=1}^k (di + d′i)ui + ∑_{i=1}^m bivi + ∑_{i=1}^n ciwi,

     a linear combination of β. Finally we have dim(W1 + W2) = k + m + n = dim(W1) + dim(W2) − dim(W1 ∩ W2), and hence W1 + W2 is finite-dimensional.

     (b) With the formula in the previous part we have

     dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2) = dim(W1) + dim(W2)

     if and only if dim(W1 ∩ W2) = 0. And dim(W1 ∩ W2) = 0 if and only if W1 ∩ W2 = {0}. And this is the sufficient and necessary condition for V = W1 ⊕ W2.
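A numeric illustration of the dimension formula (a sketch: the subspaces below are chosen so that W1 ∩ W2 is the span of a single vector u, and dimensions are computed as matrix ranks):

```python
# dim(W1 + W2) = dim(W1) + dim(W2) - dim(W1 n W2), checked on an example.
import numpy as np

u  = np.array([[1, 1, 1, 1, 0]])
B1 = np.vstack([u, [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]]])  # rows span W1
B2 = np.vstack([u, [[0, 0, 1, 0, 0], [0, 0, 0, 0, 1]]])  # rows span W2

rank = np.linalg.matrix_rank
print(rank(B1), rank(B2), rank(np.vstack([B1, B2])))  # 3 3 5 = 3 + 3 - 1
```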

  1. It can be checked that W1 and W2 are subspaces with dimensions 3 and 2. We can also find that

     W1 ∩ W2 = { ( 0  a ; −a  0 ) ∈ V ∶ a ∈ F }

     and it has dimension 1. By the formula of the previous exercise, the dimension of W1 + W2 is 2 + 3 − 1 = 4.

  1. (a) This is the conclusion of W1 ∩ W2 ⊂ W2. (b) By the formula in 1.6.29(a), the left hand side equals m + n − dim(W1 ∩ W2) ≤ m + n, since dim(W1 ∩ W2) ≥ 0.

  1. (a) Let W1 be the xy-plane with m = 2 and W2 be the x-axis with n = 1. Then W1 ∩ W2 = W2 has dimension 1. (b) Let W1 be the xy-plane with m = 2 and W2 be the z-axis with n = 1. Then W1 + W2 = R^3 has dimension 3 = 2 + 1. (c) Let W1 be the xy-plane with m = 2 and W2 be the xz-plane with n = 2. Then W1 ∩ W2 is the x-axis with dimension 1, and W1 + W2 is R^3 with dimension 3 ≠ 2 + 2.
  2. (a) Since V = W1 ⊕ W2 means W1 ∩ W2 = {0}, and a basis is linearly independent and so contains no 0, we have β1 ∩ β2 = ∅. And it is a special case of exercise 1.6.29(a) that β1 ∪ β2 is a basis.