Applied Linear Algebra
Instructor’s Solutions Manual
by Peter J. Olver and Chehrzad Shakiban
Table of Contents
Chapter Page
1. Linear Algebraic Systems . . . . . . . . . . . . . . . . . . . . . . . . . 1
2. Vector Spaces and Bases . . . . . . . . . . . . . . . . . . . . . . . . . 46
3. Inner Products and Norms . . . . . . . . . . . . . . . . . . . . . . . 78
4. Minimization and Least Squares Approximation . . . . . . . 114
5. Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6. Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
7. Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
8. Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
9. Linear Dynamical Systems . . . . . . . . . . . . . . . . . . . . . . . 262
10. Iteration of Linear Systems . . . . . . . . . . . . . . . . . . . . . . 306
11. Boundary Value Problems in One Dimension . . . . . . . . . 346


Solutions — Chapter 1

1.1.1.
(a) Reduce the system to x − y = 7, 3 y = −4; then use Back Substitution to solve for x = 17/3, y = −4/3.
(b) Reduce the system to 6 u + v = 5, −(5/2) v = 5/2; then use Back Substitution to solve for u = 1, v = −1.
(c) Reduce the system to p + q − r = 0, −3 q + 5 r = 3, −r = 6; then solve for p = 5, q = −11, r = −6.
(d) Reduce the system to 2 u − v + 2 w = 2, −(3/2) v + 4 w = 2, −w = 0; then solve for u = 1/3, v = −4/3, w = 0.
(e) Reduce the system to 5 x1 + 3 x2 − x3 = 9, (1/5) x2 − (2/5) x3 = −2/5, 2 x3 = −2; then solve for x1 = 4, x2 = −4, x3 = −1.
(f) Reduce the system to x + z − 2 w = −3, −y + 3 w = 1, −4 z − 16 w = −4, 6 w = 6; then solve for x = 2, y = 2, z = −3, w = 1.
(g) Reduce the system to 3 x1 + x2 = 1, (8/3) x2 + x3 = 2/3, (21/8) x3 + x4 = 3/4, (55/21) x4 = 5/7; then solve for x1 = 3/11, x2 = 2/11, x3 = 2/11, x4 = 3/11.
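Remark: the Back Substitution step used in these reductions is easy to prototype. The sketch below is illustrative (not code from the manual); exact rational arithmetic with `fractions.Fraction` reproduces answers such as x = 17/3, y = −4/3 for part (a).

```python
from fractions import Fraction as F

def back_substitute(U, c):
    """Solve U x = c, where U is upper triangular with nonzero diagonal.

    Works from the last equation upward, substituting the components
    already found into each earlier equation.
    """
    n = len(c)
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x
```

For the reduced system of part (a), x − y = 7 and 3 y = −4, this returns x = 17/3, y = −4/3.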

1.1.2. Plugging in the given values of x, y and z gives a + 2 b − c = 3, a − 2 − c = 1, 1 + 2 b + c = 2. Solving this system yields a = 4, b = 0, and c = 1.

♥ 1.1.3. (a) With Forward Substitution, we just start with the top equation and work down. Thus 2 x = −6 so x = −3. Plugging this into the second equation gives 12 + 3 y = 3, and so y = −3. Plugging the values of x and y into the third equation yields −3 + 4(−3) − z = 7, and so z = −22. (b) We will get a diagonal system with the same solution. (c) Start with the last equation and, assuming the coefficient of the last variable is nonzero, use the operation to eliminate the last variable in all the preceding equations. Then, again assuming the coefficient of the next-to-last variable is nonzero, eliminate it from all but the last two equations, and so on. (d) For the systems in Exercise 1.1.1, the method works in all cases except (c) and (f). Solving the reduced system by Forward Substitution reproduces the same solution (as it must): (a) The system reduces to (3/2) x = 17/2, x + 2 y = 3. (b) The reduced system is (15/2) u = 15/2, 3 u − 2 v = 5. (c) The method doesn't work since r doesn't appear in the last equation. (d) Reduce the system to (3/2) u = 1/2, (7/2) u − v = 5/2, 3 u − 2 w = −1. (e) Reduce the system to (2/3) x1 = 8/3, 4 x1 + 3 x2 = 4, x1 + x2 + x3 = −1. (f) Doesn't work since, after the first reduction, z doesn't occur in the next-to-last equation. (g) Reduce the system to (55/21) x1 = 5/7, x2 + (21/8) x3 = 3/4, x3 + (8/3) x4 = 2/3, x3 + 3 x4 = 1.
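Remark: Forward Substitution, the mirror image of Back Substitution described in part (c), can be sketched the same way (an illustrative sketch, not from the manual). Applied to the reduced lower triangular system of part (d)(b), (15/2) u = 15/2 and 3 u − 2 v = 5, it reproduces u = 1, v = −1.

```python
from fractions import Fraction as F

def forward_substitute(L, c):
    """Solve L x = c, where L is lower triangular with nonzero diagonal.

    Starts with the top equation and works down, the reverse of
    Back Substitution.
    """
    n = len(c)
    x = [F(0)] * n
    for i in range(n):
        s = sum(L[i][k] * x[k] for k in range(i))
        x[i] = (c[i] - s) / L[i][i]
    return x
```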

1.2.1. (a) 3 × 4, (b) 7, (c) 6, (d) ( −2, 0, 1, 2 ), (e) [matrix illegible in the source].

1.2.8. Only the third pair commute.

1.2.9. 1 , 6 , 11 , 16.

1.2.10. (a) [matrix illegible in the source], (b) [matrix illegible in the source].

1.2.11. (a) True, (b) true.

♥ 1.2.12. (a) Let A = ( x, y ; z, w ). Then A D = ( a x, b y ; a z, b w ), while D A = ( a x, a y ; b z, b w ), so if a ≠ b these are equal if and only if y = z = 0. (b) Every 2 × 2 matrix commutes with ( a, 0 ; 0, a ) = a I. (c) Only 3 × 3 diagonal matrices. (d) Any matrix of the form A = ( x, 0, 0 ; 0, y, z ; 0, u, v ). (e) Let D = diag (d1, ..., dn). The (i, j) entry of A D is aij dj; the (i, j) entry of D A is di aij. If di ≠ dj, this requires aij = 0, and hence, if all the di's are different, then A is diagonal.

1.2.13. We need A of size m × n and B of size n × m for both products to be defined. Further, A B has size m × m while B A has size n × n, so the sizes agree if and only if m = n.

1.2.14. B = ( x, y ; 0, x ), where x, y are arbitrary.

1.2.15. (a) (A + B)^2 = (A + B)(A + B) = AA + AB + BA + BB = A^2 + 2AB + B^2, since AB = BA. (b) An example: [matrices illegible in the source].

1.2.16. If A B is defined and A is an m × n matrix, then B is an n × p matrix and A B is an m × p matrix; on the other hand, if B A is defined we must have p = m, and then B A is an n × n matrix. Now, since A B = B A, we must have p = m = n.

1.2.17. A On×p = Om×p, Ol×m A = Ol×n.

1.2.18. The (i, j) entry of the matrix equation c A = O is c aij = 0. If any aij ≠ 0 then c = 0, so the only possible way that c ≠ 0 is if all aij = 0 and hence A = O.

1.2.19. False: for example, [matrices illegible in the source; one factor is ( 0, 0 ; 1, 0 )].

1.2.20. False — unless they commute: A B = B A.

1.2.21. Let v be the column vector with 1 in its jth position and all other entries 0. Then A v is the same as the jth column of A. Thus, the hypothesis implies all columns of A are 0 and hence A = O.

1.2.22. (a) A must be a square matrix. (b) By associativity, A A^2 = A A A = A^2 A = A^3. (c) The naïve answer is n − 1. A more sophisticated answer is to note that you can compute A^2 = A A, A^4 = A^2 A^2, A^8 = A^4 A^4, and, by induction, A^(2^r) with only r matrix multiplications. More generally, if the binary expansion of n has r + 1 digits, with s nonzero digits, then we need r + s − 1 multiplications. For example, A^13 = A^8 A^4 A since 13 is 1101 in binary, for a total of 5 multiplications: 3 to compute A^2, A^4 and A^8, and 2 more to multiply them together to obtain A^13.
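Remark: the binary (repeated squaring) method of part (c) can be checked directly. The sketch below (illustrative, not from the manual) counts the matrix multiplications as it goes; for n = 13 it performs exactly r + s − 1 = 3 + 3 − 1 = 5 of them.

```python
def mat_mul(A, B):
    """Product of two square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    """Compute A**n by repeated squaring, returning (A**n, #multiplications)."""
    n_mults = 0
    result = None       # running product of the squarings selected by the bits of n
    square = A          # A, A^2, A^4, A^8, ... in successive passes
    while n:
        if n & 1:
            if result is None:
                result = square
            else:
                result = mat_mul(result, square)
                n_mults += 1
        n >>= 1
        if n:
            square = mat_mul(square, square)
            n_mults += 1
    return result, n_mults
```

With A = ( 1, 1 ; 0, 1 ), for which A^k = ( 1, k ; 0, 1 ), `mat_pow(A, 13)` returns A^13 after 5 multiplications.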

1.2.23. A = [matrix illegible in the source].

♦ 1.2.24. (a) If the ith row of A has all zero entries, then the (i, j) entry of A B is ai1 b1j + · · · + ain bnj = 0 b1j + · · · + 0 bnj = 0, which holds for all j, so the ith row of A B will have all 0's. (b) [matrices illegible in the source].

1.2.25. The same solution X = [matrix illegible in the source] in both cases.

1.2.26. (a) [matrix illegible in the source], (b) [matrix illegible in the source]. They are not the same.

1.2.27. (a) X = O. (b) Yes, for instance, A = [illegible], B = [illegible], X = [illegible in the source].

1.2.28. A = (1/c) I when c ≠ 0. If c = 0 there is no solution.

♦ 1.2.29. (a) The ith entry of A z is 1 ai1 + 1 ai2 + · · · + 1 ain = ai1 + · · · + ain, which is the ith row sum. (b) Each row of W has n − 1 entries equal to 1/n and one entry equal to (1 − n)/n, and so its row sums are (n − 1)/n + (1 − n)/n = 0. Therefore, by part (a), W z = 0. Consequently, the row sums of B = A W are the entries of B z = A W z = A 0 = 0, and the result follows. (c) [the explicit matrices in this part are illegible in the source; z is the all-ones vector, A z lists the row sums of A, and B = A W has B z = 0].
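Remark: parts (a) and (b) are easy to confirm numerically. In the sketch below (illustrative, not from the manual), `make_W` builds a hypothetical matrix W of the kind described in part (b): each row has n − 1 entries equal to 1/n and one diagonal entry equal to (1 − n)/n, so every row sum is zero.

```python
from fractions import Fraction as F

def matvec(A, v):
    """Matrix-vector product A v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def row_sums(A):
    """Part (a): multiplying by the all-ones vector z gives the row sums of A."""
    z = [1] * len(A[0])
    return matvec(A, z)

def make_W(n):
    """A hypothetical W as in part (b): n - 1 entries 1/n per row and one
    diagonal entry (1 - n)/n, so each row sums to (n - 1)/n + (1 - n)/n = 0."""
    W = [[F(1, n)] * n for _ in range(n)]
    for i in range(n):
        W[i][i] = F(1 - n, n)
    return W
```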

♦ 1.2.30. Assume A has size m × n, B has size n × p and C has size p × q. The (k, j) entry of B C is Σ_{l=1}^{p} bkl clj, so the (i, j) entry of A (B C) is

Σ_{k=1}^{n} aik ( Σ_{l=1}^{p} bkl clj ) = Σ_{k=1}^{n} Σ_{l=1}^{p} aik bkl clj.

On the other hand, the (i, l) entry of A B is Σ_{k=1}^{n} aik bkl, so the (i, j) entry of (A B) C is

Σ_{l=1}^{p} ( Σ_{k=1}^{n} aik bkl ) clj = Σ_{k=1}^{n} Σ_{l=1}^{p} aik bkl clj.

The two results agree, and so A (B C) = (A B) C. Remark: A more sophisticated, simpler proof can be found in Exercise 7.1.44.
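Remark: the triple-sum identity can also be confirmed on concrete rectangular matrices. The sketch below is illustrative (not from the manual); any compatible sizes m × n, n × p, p × q work.

```python
def mat_mul(A, B):
    """(i, j) entry of A B: sum over k of A[i][k] * B[k][j]."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```

Multiplying a 2 × 3, a 3 × 2, and a 2 × 2 matrix in either grouping gives the same 2 × 2 result, as the proof guarantees.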

♥ 1.2.31. (a) We need A B and B A to have the same size, and so this follows from Exercise 1.2.13. (b) A B − B A = O if and only if A B = B A.

(c) (i) [illegible], (ii) [illegible], (iii) [illegible in the source];

(d) (i) [ c A + d B, C ] = (c A + d B) C − C (c A + d B) = c (A C − C A) + d (B C − C B) = c [ A, C ] + d [ B, C ]; [ A, c B + d C ] = A (c B + d C) − (c B + d C) A = c (A B − B A) + d (A C − C A) = c [ A, B ] + d [ A, C ]. (ii) [ A, B ] = A B − B A = − (B A − A B) = − [ B, A ].
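Remark: the bilinearity and antisymmetry identities of part (d) hold for every choice of matrices, so they can be spot-checked exactly with integer arithmetic. The sketch below is illustrative (not code from the manual).

```python
def mm(A, B):
    """Matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(A, B):
    """Entrywise sum."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msc(c, A):
    """Scalar multiple c A."""
    return [[c * a for a in row] for row in A]

def comm(A, B):
    """Commutator [A, B] = A B - B A."""
    return madd(mm(A, B), msc(-1, mm(B, A)))
```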

(a) Check that S^2 = A by direct computation. Another example: S = [matrix illegible in the source]. Or, more generally, 2 times any of the matrices in part (c). (b) S^2 is only defined if S is square. (c) Any of the matrices ± I, or ( a, b ; c, −a ), where a is arbitrary and b c = 1 − a^2. (d) Yes: for example [matrix illegible in the source].

♥ 1.2.37. (a) M has size (i + j) × (k + l). (b) M = [block matrix illegible in the source]. (c) Since matrix addition is done entry-wise, adding the entries of each block is the same as adding the blocks. (d) X has size k × m, Y has size k × n, Z has size l × m, and W has size l × n. Then A X + B Z will have size i × m. Its (p, q) entry is obtained by multiplying the pth row of M times the qth column of P, which is ap1 x1q + · · · + api xiq + bp1 z1q + · · · + bpl zlq, and equals the sum of the (p, q) entries of A X and B Z. A similar argument works for the remaining three blocks. (e) For example, if X = ( 1 ), Y = ( 2, 0 ), Z = [illegible], W = [illegible], then P and M P, along with the individual block products, are [illegible in the source].

1.3.1.
(a) [augmented matrix illegible] −→ (2 R1 + R2) [illegible]. Back Substitution yields x2 = 2, x1 = −10.
(b) [illegible] −→ (−(2/3) R1 + R2) [illegible]. Back Substitution yields w = 2, z = 3.
(c) [illegible] −→ (4 R1 + R3) −→ ((3/2) R2 + R3) [illegible]. Back Substitution yields z = 3, y = 16, x = 29.
(d) [illegible] −→ (2 R1 + R2) −→ (−3 R1 + R3) −→ ((7/4) R2 + R3) [illegible]. Back Substitution yields r = 3, q = 2, p = −1.
(e) [illegible] reduces to [illegible]. Solution: x4 = −3, x3 = −3/2, x2 = −1, x1 = −4.
(f) [illegible] reduces to [illegible]. Solution: w = 2, z = 0, y = −1, x = 1.

1.3.2. (a) 3 x + 2 y = 2, −4 x − 3 y = −1; solution: x = 4, y = −5. (b) x + 2 y = −3, −x + 2 y + z = −6, −2 x − 3 z = 1; solution: x = 1, y = −2, z = −1. (c) 3 x − y + 2 z = −3, −2 y − 5 z = −1, 6 x − 2 y + z = −3; solution: x = 2/3, y = 3, z = −1. (d) 2 x − y = 0, −x + 2 y − z = 1, −y + 2 z − w = 1, −z + 2 w = 0; solution: x = 1, y = 2, z = 2, w = 1.

1.3.3. (a) x = 17/3, y = −4/3; (b) u = 1, v = −1; (c) u = 3/2, v = −1/3, w = 1/6; (d) x1 = 11/3, x2 = −10/3, x3 = −2/3; (e) p = −2/3, q = 19/6, r = 5/2; (f) a = 1/3, b = 0, c = 4/3, d = −2/3; (g) x = 1/3, y = 7/6, z = −8/3, w = 9/2.

1.3.4. Solving 6 = a + b + c, 4 = 4 a + 2 b + c, 0 = 9 a + 3 b + c, yields a = − 1 , b = 1, c = 6, so y = −x^2 + x + 6.

1.3.5.
(a) Regular: [matrices illegible in the source].
(b) Not regular.
(c) Regular: [matrices illegible in the source].
(d) Not regular: [matrices illegible in the source].
(e) Regular: [the sequence of matrices is illegible in the source].

(a) [ −i, 1 + i | −1 ; 1 − i, 1 | −3 i ] −→ [ −i, 1 + i | −1 ; 0, 1 − 2 i | 1 − 2 i ]; use Back Substitution to obtain the solution y = 1, x = 1 − 2 i.

(b) [ i, 0, 1 − i | 2 i ; 0, 2 i, 1 + i | 2 ; −1, 2 i, i | 1 − 2 i ] −→ [ i, 0, 1 − i | 2 i ; 0, 2 i, 1 + i | 2 ; 0, 0, −2 − i | 1 − 2 i ]; solution: z = i, y = −1/2 − (3/2) i, x = 1 + i.

(c) [ 1 − i, 2 | i ; −i, 1 + i | −1 ] −→ [ 1 − i, 2 | i ; 0, 2 i | −3/2 − (1/2) i ]; solution: y = −1/4 + (3/4) i, x = 1/2.

by the jth column of A^k, whose first j − k − 1 entries are non-zero, and all the rest are zero, according to the induction hypothesis; therefore, if i > j − k − 1, every term in the sum producing this entry is 0, and the induction is complete. In particular, for k = n, every entry of A^n is zero, and so A^n = O. (c) The matrix A = [matrix illegible in the source] has A^2 = O.

(a) Add −2 times the second row to the first row of a 2 × n matrix. (b) Add 7 times the first row to the second row of a 2 × n matrix. (c) Add −5 times the third row to the second row of a 3 × n matrix. (d) Add 12 times the first row to the third row of a 3 × n matrix. (e) Add −3 times the fourth row to the second row of a 4 × n matrix.

1.3.15. (a)–(d) [the four elementary matrices are illegible in the source].

1.3.16. L3 L2 L1 = [matrix illegible in the source] ≠ L1 L2 L3.

1.3.17. E3 E2 E1 = [illegible], E1 E2 E3 = [illegible in the source]. The second is easier to predict since its entries are the same as the corresponding entries of the Ei.

1.3.18. (a) Suppose that E adds c ≠ 0 times row i to row j ≠ i, while Ẽ adds d ≠ 0 times row k to row l ≠ k. If r1, ..., rn are the rows, then the effect of E Ẽ is to replace (i) rj by rl + c ri + d rk for j = l; (ii) rj by rj + c ri and rl by rl + (c d) ri + d rj for j = k; (iii) rj by rj + c ri and rl by rl + d rk otherwise. On the other hand, the effect of Ẽ E is to replace (i) rj by rl + c ri + d rk for j = l; (ii) rj by rj + c ri + (c d) rk and rl by rl + d rk for i = l; (iii) rj by rj + c ri and rl by rl + d rk otherwise. Comparing results, we see that E Ẽ = Ẽ E whenever i ≠ l and j ≠ k. (b) E1 E2 = E2 E1, E1 E3 ≠ E3 E1, and E3 E2 = E2 E3. (c) See the answer to part (a).

1.3.19. (a) Upper triangular; (b) both special upper and special lower triangular; (c) lower triangular; (d) special lower triangular; (e) none of the above.

1.3.20. (a) aij = 0 for all i ≠ j; (b) aij = 0 for all i > j; (c) aij = 0 for all i > j and aii = 1 for all i; (d) aij = 0 for all i < j; (e) aij = 0 for all i < j and aii = 1 for all i.

♦ 1.3.21. (a) Consider the product L M of two lower triangular n × n matrices. The last n − i entries in the ith row of L are zero, while the first j − 1 entries in the jth column of M are zero. So if i < j, each summand in the product of the ith row times the jth column is zero, and so all entries above the diagonal in L M are zero. (b) The ith diagonal entry of L M is the product of the ith diagonal entry of L times the ith diagonal entry of M. (c) Special matrices have all 1's on the diagonal, and so, by part (b), does their product.
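Remark: parts (a) and (b) are easy to confirm on an example (an illustrative sketch, not from the manual): the product of two lower triangular matrices is again lower triangular, with diagonal entries equal to the products of the corresponding diagonal entries.

```python
def mat_mul(A, B):
    """Matrix product via rows of A against columns of B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def is_lower_triangular(A):
    """True when every entry strictly above the diagonal is zero."""
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(i + 1, n))
```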

1.3.22. (a)–(i) [the L and U factors are illegible in the source].

1.3.23. (a) Add 3 times first row to second row. (b) Add −2 times first row to third row. (c) Add 4 times second row to third row.

1.3.24.
(a) [matrix illegible in the source].
(b) (1) Add −2 times first row to second row. (2) Add −3 times first row to third row. (3) Add −5 times first row to fourth row. (4) Add −4 times second row to third row. (5) Add −6 times second row to fourth row. (6) Add −7 times third row to fourth row.
(c) Use the order given in part (b).

♦ 1.3.25. See equation (4.51) for the general case.

[ 1, 1 ; t1, t2 ] = [ 1, 0 ; t1, 1 ] [ 1, 1 ; 0, t2 − t1 ],

[ 1, 1, 1 ; t1, t2, t3 ; t1^2, t2^2, t3^2 ] = [ 1, 0, 0 ; t1, 1, 0 ; t1^2, t1 + t2, 1 ] [ 1, 1, 1 ; 0, t2 − t1, t3 − t1 ; 0, 0, (t3 − t1)(t3 − t2) ],

[ 1, 1, 1, 1 ; t1, t2, t3, t4 ; t1^2, t2^2, t3^2, t4^2 ; t1^3, t2^3, t3^3, t4^3 ] = [ 1, 0, 0, 0 ; t1, 1, 0, 0 ; t1^2, t1 + t2, 1, 0 ; t1^3, t1^2 + t1 t2 + t2^2, t1 + t2 + t3, 1 ] [ 1, 1, 1, 1 ; 0, t2 − t1, t3 − t1, t4 − t1 ; 0, 0, (t3 − t1)(t3 − t2), (t4 − t1)(t4 − t2) ; 0, 0, 0, (t4 − t1)(t4 − t2)(t4 − t3) ].
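Remark: the Vandermonde factorizations above can be checked by running Gaussian elimination in exact arithmetic. The sketch below (illustrative, not from the manual) computes the Doolittle L U factors of a regular matrix; for t1, t2, t3 = 2, 3, 5 the factors match the closed forms, e.g. L has (3, 2) entry t1 + t2 and U has (3, 3) entry (t3 − t1)(t3 − t2).

```python
from fractions import Fraction as F

def lu(A):
    """Doolittle L U factorization (no pivoting) of a regular matrix."""
    n = len(A)
    U = [[F(x) for x in row] for row in A]
    L = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]          # the multiplier l_ij
            U[i] = [u - L[i][j] * p for u, p in zip(U[i], U[j])]
    return L, U
```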

(a) L = [illegible], U = [illegible]; x1 = ( −5/11, 2/11 ), x2 = ( 1, 1 ), x3 = ( 9/11, 3/11 );
(b)–(f) [the L U factors and most solution vectors are illegible in the source; in (e), x1 = ( 5/4, −1/4, 1/4, 1/4 ) and x2 = ( 1/14, −5/14, 1/14, 1/2 )].

1.4.1. The nonsingular matrices are (a), (c), (d), (h).

1.4.2. (a) Regular and nonsingular, (b) singular, (c) nonsingular, (d) regular and nonsingular.

1.4.3. (a) x1 = −5/3, x2 = −10/3, x3 = 5; (b) x1 = 0, x2 = −1, x3 = 2; (c) x1 = −6, x2 = 2, x3 = −2; (d) x = −13/2, y = −9/2, z = −1, w = −3; (e) x1 = −11, x2 = −10/3, x3 = −5, x4 = −7.

1.4.4. Solve the equations −1 = 2 b + c, 3 = − 2 a + 4 b + c, −3 = 2 a − b + c, for a = − 4 , b = − 2 , c = 3, giving the plane z = − 4 x − 2 y + 3.

1.4.5. (a) Suppose A is nonsingular. If a ≠ 0 and c ≠ 0, then we subtract c/a times the first row from the second, producing the (2, 2) pivot entry (a d − b c)/a ≠ 0. If c = 0, then the pivot entry is d, and so a d − b c = a d ≠ 0. If a = 0, then c ≠ 0, as otherwise the first column would not contain a pivot. Interchanging the two rows gives the pivots c and b, and so a d − b c = −b c ≠ 0. (b) Regularity requires a ≠ 0. Proceeding as in part (a), we conclude that a d − b c ≠ 0 also.
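Remark: the case analysis above translates directly into a small routine (an illustrative sketch, not from the manual) that returns the pivots of a 2 × 2 matrix; in every nonsingular case the product of the pivots is ± (a d − b c).

```python
from fractions import Fraction as F

def pivots_2x2(a, b, c, d):
    """Pivots of [[a, b], [c, d]], interchanging rows when a = 0."""
    if a != 0:
        # subtract (c/a) times row 1 from row 2: the (2,2) entry is (ad - bc)/a
        return [F(a), F(a * d - b * c, a)]
    if c == 0:
        raise ValueError("singular: no pivot in the first column")
    # a = 0, c != 0: interchange the two rows; the pivots are c and b
    return [F(c), F(b)]
```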

1.4.6. True. All regular matrices are nonsingular.

♦ 1.4.7. Since A is nonsingular, we can reduce it to upper triangular form with nonzero diagonal entries (by applying the operations # 1 and # 2). The rest of the argument is the same as in Exercise 1.3.8.

1.4.8. By applying the operations # 1 and # 2 to the system A x = b, we obtain an equivalent upper triangular system U x = c. Since A is nonsingular, uii ≠ 0 for all i, so by Back Substitution each solution component, namely

x_n = c_n / u_nn  and  x_i = (1 / u_ii) ( c_i − Σ_{k=i+1}^{n} u_ik x_k ),  for i = n − 1, n − 2, ..., 1,

is uniquely defined.

1.4.9. (a) P1 = [illegible], (b) P2 = [illegible in the source]. (c) No, they do not commute. (d) P1 P2 arranges the rows in the order 4, 1, 3, 2, while P2 P1 arranges them in the order 2, 4, 3, 1.

1.4.10. (a)–(d) [the permutation matrices are illegible in the source].

1.4.11. The (i, j) entry of the following Multiplication Table indicates the product Pi Pj, where P1, ..., P6 are the six 3 × 3 permutation matrices [their explicit forms are illegible in the source]. The commutative pairs are P1 Pi = Pi P1, i = 1, ..., 6, and P2 P3 = P3 P2.

        P1  P2  P3  P4  P5  P6
    P1  P1  P2  P3  P4  P5  P6
    P2  P2  P3  P1  P6  P4  P5
    P3  P3  P1  P2  P5  P6  P4
    P4  P4  P5  P6  P1  P2  P3
    P5  P5  P6  P4  P3  P1  P2
    P6  P6  P4  P5  P2  P3  P1

1.4.12. (a) [five matrices illegible in the source]; (b) [permuted factorization illegible], x = ( 5/4, 3/4, −1/4 ); (c) [illegible], x = [illegible]; (d) [illegible], x = [illegible]; (e) [illegible], x = [illegible]; (f) [illegible], x = [illegible].

(a) [factorization illegible in the source]; solution: x1 = 5/4, x2 = 7/4, x3 = 3/2.
(b) [factorization illegible in the source]; solution: x = 4, y = 0, z = 1, w = 1.
(c) [factorization illegible in the source]; solution: x = 19/3, y = −5/3, z = −3, w = −2.

♦ 1.4.21. (a) They are all of the form P A = L U, where P is a permutation matrix. In the first case, we interchange rows 1 and 2; in the second case, we interchange rows 1 and 3; in the third case, we interchange rows 1 and 3 first and then interchange rows 2 and 3. (b) Same solution x = 1, y = 1, z = −2 in all cases. Each is obtained by a sequence of elementary row operations, which do not change the solution.

1.4.22. There are four in all: [the four permuted L U factorizations are illegible in the source]. The other two permutation matrices are not regular.

1.4.23. The maximum is 6, since there are 6 different 3 × 3 permutation matrices. [The six example factorizations are illegible in the source.]

1.4.24. False. Changing the permutation matrix typically changes the pivots.

♠ 1.4.25.

Permuted L U factorization

    start
       set P = I, L = I, U = A
       for j = 1 to n
          if ukj = 0 for all k ≥ j, stop; print "A is singular"
          if ujj = 0 but ukj ≠ 0 for some k > j then
             interchange rows j and k of U
             interchange rows j and k of P
             for m = 1 to j − 1
                interchange ljm and lkm
             next m
          for i = j + 1 to n
             set lij = uij / ujj
             add − lij times row j to row i of U
          next i
       next j
    end
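Remark: the algorithm can be prototyped directly in exact arithmetic. The sketch below is illustrative (not code from the manual); rows are interchanged only when a zero pivot appears, and the result satisfies P A = L U.

```python
from fractions import Fraction as F

def permuted_lu(A):
    """P A = L U, interchanging rows only when a zero pivot is encountered."""
    n = len(A)
    U = [[F(x) for x in row] for row in A]
    L = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    P = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    for j in range(n):
        if all(U[k][j] == 0 for k in range(j, n)):
            raise ValueError("A is singular")
        if U[j][j] == 0:
            # find a lower row with a nonzero entry in column j and interchange
            k = next(k for k in range(j + 1, n) if U[k][j] != 0)
            U[j], U[k] = U[k], U[j]
            P[j], P[k] = P[k], P[j]
            for m in range(j):                  # swap the completed part of L
                L[j][m], L[k][m] = L[k][m], L[j][m]
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]
            U[i] = [u - L[i][j] * p for u, p in zip(U[i], U[j])]
    return P, L, U
```

For A = ( 0, 1 ; 2, 3 ), the routine interchanges the two rows and returns P = ( 0, 1 ; 1, 0 ), L = I, U = ( 2, 3 ; 0, 1 ).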

(c) Yes: P = [matrix illegible in the source] interchanges two pairs of rows.

1.5.9. (a)–(d) [the matrices are illegible in the source].

(a) If i and j = π(i) are the entries in the ith column of the 2 × n matrix corresponding to the permutation, then the entries in the jth column of the 2 × n matrix corresponding to the inverse permutation are j and i = π^{-1}(j). Equivalently, permute the columns so that the second row is in the order 1, 2, ..., n and then switch the two rows. (b) The permutations correspond to (i)–(iv) [matrices illegible in the source]; the inverse permutations correspond to (i)–(iv) [matrices illegible in the source].

1.5.11. If a = 0, the first row is all zeros, and so A is singular. Otherwise, we can make d = 0 by an elementary row operation. If e = 0, then the resulting matrix has a row of all zeros. Otherwise, we can make h = 0 by another elementary row operation, and the result is a matrix with a row of all zeros.

1.5.12. This is true if and only if A^2 = I, and so, according to Exercise 1.2.36, A is either of the form ± I or ( a, b ; c, −a ), where a is arbitrary and b c = 1 − a^2.

1.5.13. (3 I − A)A = 3A − A^2 = I , so 3 I − A is the inverse of A.

1.5.14. ( (1/c) A^{-1} ) (c A) = ( (1/c) c ) A^{-1} A = I.

1.5.15. Indeed, (A^n)^{-1} = (A^{-1})^n.

1.5.16. If all the diagonal entries are nonzero, then D^{-1} D = I. On the other hand, if one of the diagonal entries is zero, then all the entries in that row are zero, and so D is not invertible.

1.5.17. Since U^{-1} is also upper triangular, the only nonzero summand in the product of the ith row of U and the ith column of U^{-1} is the product of their diagonal entries, which must equal 1 since U U^{-1} = I.

♦ 1.5.18. (a) A = I^{-1} A I. (b) If B = S^{-1} A S, then A = S B S^{-1} = T^{-1} B T, where T = S^{-1}. (c) If B = S^{-1} A S and C = T^{-1} B T, then C = T^{-1} (S^{-1} A S) T = (S T)^{-1} A (S T).

♥ 1.5.19. (a) Suppose D^{-1} = ( X, Y ; Z, W ). Then, in view of Exercise 1.2.37, the equation D D^{-1} = I = ( I, O ; O, I ) requires A X = I, A Y = O, B Z = O, B W = I. Thus, X = A^{-1}, W = B^{-1}, and, since A and B are invertible, Y = A^{-1} O = O, Z = B^{-1} O = O. (b) [the two matrices are illegible in the source].

(a) B A = [matrices illegible in the source].

(b) A X = I does not have a solution. Indeed, the first column of this matrix equation is the linear system x − y = 1, y = 0, x + y = 0, which has no solutions since these equations are incompatible.

(c) Yes: for instance, B = [illegible in the source]. More generally, B A = I if and only if B = ( 1 − z, 1 − 2 z, z ; −w, 1 − 2 w, w ), where z, w are arbitrary.

1.5.21. The general solution to A X = I is X = ( −2 y, 1 − 2 v ; y, v ; −1, 1 ), where y, v are arbitrary. Any of these matrices serves as a right inverse. On the other hand, the linear system Y A = I is incompatible and has no solution.

1.5.22.
(a) No. The only solutions are complex, with a = ( −1/2 ± (√3/2) i ) b, where b ≠ 0 is any nonzero complex number.
(b) Yes. A simple example is A = [illegible], B = [illegible in the source]. The general solution to the 2 × 2 matrix equation has the form A = B M, where M = ( x, y ; z, w ) is any matrix with tr M = x + w = −1 and det M = x w − y z = 1. To see this, if we set A = B M, then (I + M)^{-1} = I + M^{-1}, which is equivalent to I + M + M^{-1} = O. Writing this out using the formula (1.38) for the inverse, we find that if det M = x w − y z = 1 then tr M = x + w = −1, while if det M ≠ 1, then y = z = 0 and x + x^{-1} + 1 = 0 = w + w^{-1} + 1, in which case, as in part (a), there are no real solutions.

1.5.23. E = [illegible], E^{-1} = [illegible in the source].

1.5.24. (a) [illegible], (b) ( −1/8, 3/8 ; 3/8, −1/8 ), (c) [largely illegible; entries include 3/5, 4/5, −4/5], (d) no inverse, (e) [illegible], (f) [largely illegible; entries include 7/8, −3/8, 1/8], (g) [illegible in the source],

1 CC A,