Math Camp Notes: Linear Algebra I

Basic Matrix Operations and Properties

Consider two n × m matrices:

$$A = \begin{pmatrix} a_{11} & \dots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \dots & a_{nm} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & \dots & b_{1m} \\ \vdots & \ddots & \vdots \\ b_{n1} & \dots & b_{nm} \end{pmatrix}$$

Then the basic matrix operations are as follows:

1. $$A + B = \begin{pmatrix} a_{11}+b_{11} & \dots & a_{1m}+b_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1}+b_{n1} & \dots & a_{nm}+b_{nm} \end{pmatrix}$$

2. $$\lambda A = \begin{pmatrix} \lambda a_{11} & \dots & \lambda a_{1m} \\ \vdots & \ddots & \vdots \\ \lambda a_{n1} & \dots & \lambda a_{nm} \end{pmatrix}, \quad \text{where } \lambda \in \mathbb{R}$$

Notice that the elements in the matrix are indexed as $a_{i,j}$, where $i$ is the row and $j$ is the column in which the element $a_{i,j}$ is found.

In order to multiply matrices $CD$, the number of columns in the $C$ matrix must be equal to the number of rows in the $D$ matrix. Say $C$ is an $n \times m$ matrix, and $D$ is an $m \times k$ matrix. Then multiplication is defined as follows:

$$E = \underbrace{C}_{n \times m}\,\underbrace{D}_{m \times k} = \underbrace{\begin{pmatrix} \sum_{q=1}^{m} c_{1,q}d_{q,1} & \dots & \sum_{q=1}^{m} c_{1,q}d_{q,k} \\ \vdots & \ddots & \vdots \\ \sum_{q=1}^{m} c_{n,q}d_{q,1} & \dots & \sum_{q=1}^{m} c_{n,q}d_{q,k} \end{pmatrix}}_{n \times k}$$
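To see the definition in action, here is a minimal numpy sketch (the matrices C and D are arbitrary illustrative values, not from the notes) that builds E entry by entry from the sum above and checks it against numpy's built-in product:

```python
import numpy as np

# Arbitrary illustrative matrices: C is n x m, D is m x k.
C = np.array([[1., 2., 3.],
              [4., 5., 6.]])           # n = 2, m = 3
D = np.array([[7., 8.],
              [9., 10.],
              [11., 12.]])             # m = 3, k = 2

n, m = C.shape
k = D.shape[1]

# Build E entry by entry: e_{i,j} = sum_q c_{i,q} * d_{q,j}.
E = np.zeros((n, k))
for i in range(n):
    for j in range(k):
        E[i, j] = sum(C[i, q] * D[q, j] for q in range(m))

assert np.allclose(E, C @ D)           # agrees with numpy's matrix product
```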

There are two notable special cases for multiplication of matrices. The first is called the inner product, which occurs when two vectors of the same length are multiplied together such that the result is a scalar:

$$v \cdot z = \underbrace{v}_{1 \times n}\,\underbrace{z'}_{n \times 1} = (v_1 \ \dots \ v_n)\begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix} = \sum_{i=1}^{n} v_i z_i$$

The second is called the outer product:

$$\underbrace{v'}_{n \times 1}\,\underbrace{z}_{1 \times n} = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}(z_1 \ \dots \ z_n) = \underbrace{\begin{pmatrix} v_1 z_1 & \dots & v_1 z_n \\ \vdots & \ddots & \vdots \\ v_n z_1 & \dots & v_n z_n \end{pmatrix}}_{n \times n}.$$

Note that when we multiplied the matrices $C$ and $D$ together, the resulting $e_{i,j}$th element of $E$ was just the inner product of the $i$th row of $C$ and the $j$th column of $D$. Also, note that even if two matrices $X$ and $Y$ are both $n \times n$, then $XY \neq YX$, except in special cases.
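Both special cases, and the failure of commutativity, are easy to verify numerically; a small sketch with arbitrary example vectors and matrices:

```python
import numpy as np

v = np.array([1., 2., 3.])
z = np.array([4., 5., 6.])

inner = v @ z                # scalar: sum_i v_i * z_i = 32
outer = np.outer(v, z)       # 3 x 3 matrix with (i, j) entry v_i * z_j
assert np.isclose(inner, sum(v[i] * z[i] for i in range(3)))
assert outer.shape == (3, 3)

# Even square matrices generally fail to commute: XY != YX.
X = np.array([[1., 2.], [3., 4.]])
Y = np.array([[0., 1.], [1., 0.]])
assert not np.allclose(X @ Y, Y @ X)
```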

Types of Matrices

Zero Matrices

A zero matrix is a matrix where each element is 0:

$$0 = \underbrace{\begin{pmatrix} 0 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & 0 \end{pmatrix}}_{n \times k}$$

The following properties hold for zero matrices:

1. $A + 0 = A$.
2. If $AB = 0$, it is not necessarily the case that $A = 0$ or $B = 0$.

Identity Matrices

The identity matrix is a matrix with zeroes everywhere except along the diagonal, where each entry is a one. Note that the number of columns must equal the number of rows:

$$I = \underbrace{\begin{pmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{pmatrix}}_{n \times n}$$

The reason it is called the identity matrix is because $AI = IA = A$.

Square, Symmetric, and Transpose Matrices

A square matrix is a matrix whose number of rows is the same as its number of columns. For example, the identity matrix is always square. If a square matrix has the property that $a_{i,j} = a_{j,i}$ for all its elements, then we call it a symmetric matrix.

The transpose of a matrix $A$, denoted $A'$, is the matrix such that for each element of $A'$, $a'_{i,j} = a_{j,i}$; that is, the rows of $A$ become the columns of $A'$. Note that a matrix $A$ is symmetric if $A = A'$.

The following properties of the transpose hold:

1. $(A')' = A$.
2. $(A + B)' = A' + B'$.
3. $(\alpha A)' = \alpha A'$.
4. $(AB)' = B'A'$.
5. If the matrix $A$ is $n \times k$, then $A'$ is $k \times n$.
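These properties can be sanity-checked numerically; a minimal sketch with arbitrary example matrices:

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.]])     # 2 x 3
B = np.array([[7., 8., 9.], [0., 1., 2.]])     # 2 x 3
D = np.array([[1., 0.], [2., 1.], [3., 4.]])   # 3 x 2
alpha = 2.5

assert np.allclose(A.T.T, A)                   # (A')' = A
assert np.allclose((A + B).T, A.T + B.T)       # (A + B)' = A' + B'
assert np.allclose((alpha * A).T, alpha * A.T) # (aA)' = a A'
assert np.allclose((A @ D).T, D.T @ A.T)       # (AD)' = D'A'
assert A.T.shape == (3, 2)                     # n x k  ->  k x n
```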

Diagonal and Triangular Matrices

A square matrix $A$ is diagonal if it is all zeroes except along the diagonal:

$$\begin{pmatrix} a_{11} & 0 & \dots & 0 \\ 0 & a_{22} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{nn} \end{pmatrix}$$

Note that all diagonal matrices are also symmetric.

- Let $A$ be an $n \times n$ matrix. Let $A_{ij}$ be the $(n-1) \times (n-1)$ submatrix obtained by deleting row $i$ and column $j$ from $A$. Then the scalar $M_{ij} = \det(A_{ij})$ is called the $(i,j)$th minor of $A$.
- The scalar $C_{ij} = (-1)^{i+j} M_{ij}$ is called the $(i,j)$th cofactor of $A$. The cofactor is merely the signed minor.

Armed with these two definitions, we notice that the determinant of the $2 \times 2$ matrix is

$$\det(A) = aM_{11} - bM_{12} = aC_{11} + bC_{12},$$

and the determinant of the $3 \times 3$ matrix is

$$\det(A) = aM_{11} - bM_{12} + cM_{13} = aC_{11} + bC_{12} + cC_{13}.$$

Therefore, we can define the determinant for an $n \times n$ square matrix as follows:

$$\det\Big(\underbrace{A}_{n \times n}\Big) = a_{11}C_{11} + a_{12}C_{12} + \dots + a_{1n}C_{1n}.$$

Notice that the definition of the determinant uses elements and cofactors for the top row only. This is called a cofactor expansion along the first row. However, a cofactor expansion along any row or column will be equal to the determinant. The proof of this assertion is left as a homework problem for the $3 \times 3$ case.

Example: Expand the determinant of the $3 \times 3$ matrix

$$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$$

along the first row:

$$aC_{11} + bC_{12} + cC_{13} = a \cdot \det\begin{pmatrix} e & f \\ h & i \end{pmatrix} - b \cdot \det\begin{pmatrix} d & f \\ g & i \end{pmatrix} + c \cdot \det\begin{pmatrix} d & e \\ g & h \end{pmatrix}.$$

Now let's expand along the second column instead:

$$bC_{12} + eC_{22} + hC_{32} = -b \cdot \det\begin{pmatrix} d & f \\ g & i \end{pmatrix} + e \cdot \det\begin{pmatrix} a & c \\ g & i \end{pmatrix} - h \cdot \det\begin{pmatrix} a & c \\ d & f \end{pmatrix}.$$

Multiplying out both expansions and collecting terms gives the same polynomial, $aei - afh - bdi + bfg + cdh - ceg$, confirming that the two expansions agree.
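The cofactor expansion translates directly into a recursive function. This is a didactic sketch only (the helper name is ours, and np.linalg.det is far faster in practice):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M_{1,j+1}: delete row 1 and column j+1 (0-indexed 0 and j).
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        cofactor = (-1) ** j * det_cofactor(minor)  # (-1)^(1 + (j+1)) = (-1)^j
        total += A[0, j] * cofactor
    return total

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
assert np.isclose(det_cofactor(A), np.linalg.det(A))   # both give -3.0
```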

Singular and Non-singular Matrices

Consider the system of equations:

$$\begin{aligned} y_1 &= a_1 x_1 + b_1 x_2 + \dots + c_1 x_n \\ y_2 &= a_2 x_1 + b_2 x_2 + \dots + c_2 x_n \\ &\ \,\vdots \\ y_m &= a_m x_1 + b_m x_2 + \dots + c_m x_n \end{aligned}$$

The functions can be written in matrix form as

$$\underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}}_{m \times 1} = \underbrace{\begin{pmatrix} a_1 & b_1 & \dots & c_1 \\ a_2 & b_2 & \dots & c_2 \\ \vdots & \vdots & & \vdots \\ a_m & b_m & \dots & c_m \end{pmatrix}}_{m \times n} \underbrace{\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}}_{n \times 1}$$

In short, we can write this system as $y = Ax$. For simplicity, denote the $i$th row in the matrix as a separate vector $v_i$, so that

$$A = \underbrace{\begin{pmatrix} a_1 & b_1 & \dots & c_1 \\ a_2 & b_2 & \dots & c_2 \\ \vdots & \vdots & & \vdots \\ a_m & b_m & \dots & c_m \end{pmatrix}}_{m \times n} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{pmatrix}$$

We say the vectors $v_1, \dots, v_m$ are linearly dependent if there exist scalars $q_1, \dots, q_m$, not all zero, such that

$$\sum_{i=1}^{m} q_i v_i = 0.$$

We say the vectors $v_1, \dots, v_m$ are linearly independent if the only scalars $q_1, \dots, q_m$ such that

$$\sum_{i=1}^{m} q_i v_i = 0$$

are $q_1 = \dots = q_m = 0$. We can use this definition of linear independence and dependence for columns as well.

The rank of a matrix is the number of linearly independent rows or columns in a matrix. (Note: the number of linearly independent rows is the same as the number of linearly independent columns.)

If we draw the lines in the system in $\mathbb{R}^n$, then they will all cross either at no points, at one point, or at infinitely many points. Therefore, the system may have no solutions, one solution, or many solutions for a given vector $y$. If it has more than one solution, it will have infinitely many solutions, since straight lines can only intersect once if they do not coincide. The following are conditions on the number of solutions of a system:

1. A system of linear equations with coefficient matrix $A$ will have a solution for each choice of $y$ iff rank $A$ = the number of rows in $A$.
2. A system of linear equations with coefficient matrix $A$ will have at most one solution for each choice of $y$ iff rank $A$ = the number of columns in $A$.

This implies that a system of linear equations with coefficient matrix $A$ will have exactly one solution for each choice of $y$ iff rank $A$ = the number of columns in $A$ = the number of rows of $A$. If a square matrix has exactly one solution for each $y$, we say the matrix is non-singular. Otherwise, it is singular, and for a given $y$ the system has either no solution or infinitely many solutions.
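A short numpy illustration of the rank conditions, using arbitrary example matrices:

```python
import numpy as np

# Full rank: rank A = rows = columns = 3, so Ax = y has exactly one
# solution for every y.
A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
print(np.linalg.matrix_rank(A))    # 3

# Rank deficient: the third row is the sum of the first two, so the
# rank is 2 and the matrix is singular.
S = np.array([[1., 2., 3.], [4., 5., 6.], [5., 7., 9.]])
print(np.linalg.matrix_rank(S))    # 2
```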

Elementary Row Operations

Recall the system of equations:

$$\begin{aligned} y_1 &= a_1 x_1 + b_1 x_2 + \dots + c_1 x_n \\ y_2 &= a_2 x_1 + b_2 x_2 + \dots + c_2 x_n \end{aligned}$$

Now subtract four times the first row from the second row to obtain $A_3$, then rearrange to get the matrix in row echelon form, $A_4$.

When a matrix is in row echelon form, it is easy to check whether all the rows are linearly independent: for linear independence, all the rows in the row echelon form must be non-zero. If not, then the matrix has linearly dependent rows. We have shown that all the rows in $A$ are linearly independent because the row echelon form contains no zero rows.

Another way of defining the rank of a matrix is as the number of non-zero (i.e., linearly independent) rows in its row echelon form. For example, the matrix $A$ is of full rank since all its rows are linearly independent.

Using Elementary Row Operations to Solve a System of Equations

We can solve a system of equations by writing the matrix

$$\left(\begin{array}{cccc|c} a_1 & b_1 & \dots & c_1 & y_1 \\ a_2 & b_2 & \dots & c_2 & y_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_m & b_m & \dots & c_m & y_m \end{array}\right),$$

called the augmented matrix of $A$, and using elementary row operations.

Example:

Let $A$ be the matrix from before, and say that the vector $y = (1, 1, 1)'$. Form the augmented matrix and perform the same row operations as before. We continue the row operations until the left hand side of the augmented matrix looks like the identity matrix:

$$\left(\begin{array}{ccc|c} 1 & 0 & 0 & \tfrac{3}{2} \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & \tfrac{1}{2} \end{array}\right)$$

Notice that this implies

$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} \tfrac{3}{2} \\ -1 \\ \tfrac{1}{2} \end{pmatrix},$$

so we have found a solution using elementary row operations.

In summary, if we form the augmented matrix of $A$ and reduce the left hand side of the matrix to its reduced row echelon form (so that each row is all zeros except possibly for a leading one, which is the only nonzero entry in its column) through elementary row operations, then the remaining vector on the right hand side will be the solution to the system.
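The whole procedure can be written as a compact Gauss-Jordan routine. This is an illustrative sketch (function name and test matrix are ours; np.linalg.solve is the practical tool):

```python
import numpy as np

def gauss_jordan_solve(A, y):
    """Solve Ax = y for non-singular A by reducing [A | y] to [I | x]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), y.reshape(-1, 1).astype(float)])
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                   # scale pivot row to a leading 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # zero out the rest of the column
    return M[:, -1]                             # right-hand side is now the solution

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 10.]])
y = np.array([1., 1., 1.])
x = gauss_jordan_solve(A, y)
assert np.allclose(A @ x, y)
assert np.allclose(x, np.linalg.solve(A, y))
```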

Inverse Matrices

It can be shown that the inverse of a square matrix will exist if its determinant is non-zero. The following statements are all equivalent for a square matrix $A$:

1. $A$ is non-singular.
2. All the columns and rows in $A$ are linearly independent.
3. $A$ has full rank.
4. Exactly one solution $X^*$ exists for each vector $Y^*$.
5. $A$ is invertible.
6. $\det(A) \neq 0$.
7. The row echelon form of the matrix is upper triangular with no zero rows.
8. The reduced row echelon form is the identity matrix.

Calculating the Inverse Matrix by the Adjoint Matrix

The adjoint matrix of a square matrix $A$ is the transposed matrix of cofactors of $A$, or

$$\operatorname{adj}(A) = \begin{pmatrix} C_{11} & C_{21} & \dots & C_{n1} \\ C_{12} & C_{22} & \dots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \dots & C_{nn} \end{pmatrix}$$

Notice that the adjoint of a $2 \times 2$ matrix is

$$\begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}$$

The inverse of the matrix $A$ can be found by

$$A^{-1} = \frac{1}{\det(A)} \cdot \operatorname{adj}(A).$$

Therefore, the inverse of a $2 \times 2$ matrix is

$$A^{-1} = \frac{1}{a_{11}a_{22} - a_{12}a_{21}} \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}$$
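A sketch of the adjoint-based inverse built from cofactors (the helper name is ours; illustrative only, since np.linalg.inv is the practical choice):

```python
import numpy as np

def adjoint_inverse(A):
    """A^{-1} = adj(A) / det(A); assumes det(A) != 0."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # cofactor C_ij
    return C.T / np.linalg.det(A)    # adjoint = transposed cofactor matrix

A = np.array([[1., 2.], [3., 4.]])
assert np.allclose(adjoint_inverse(A), np.linalg.inv(A))
```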

Example:

List all the principal minors of the $3 \times 3$ matrix:

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$

Answer: There is one third-order principal minor of $A$, namely $\det(A)$. There are three second-order principal minors:

1. $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, formed by deleting the third row and column of $A$.
2. $\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix}$, formed by deleting the second row and column of $A$.
3. $\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}$, formed by deleting the first row and column of $A$.

There are also three first-order principal minors: $a_{11}$, by deleting the last two rows and columns; $a_{22}$, by deleting the first and last rows and columns; and $a_{33}$, by deleting the first two rows and columns.

Leading Principal Minors

The $k$th-order leading principal minor is the determinant of the leading principal submatrix obtained by deleting the last $n - k$ rows and columns of an $n \times n$ matrix $A$.

Example:

List all the leading principal minors of the $3 \times 3$ matrix:

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$

Answer: There are three leading principal minors:

1. $|a_{11}|$, formed by deleting the last two rows and columns of $A$.
2. $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, formed by deleting the last row and column of $A$.
3. $\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$, formed by deleting no rows or columns of $A$.
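A small helper makes the leading principal minors concrete (the function name and example matrix are ours):

```python
import numpy as np

def leading_principal_minors(A):
    """k-th entry: determinant of the top-left k x k submatrix, k = 1..n."""
    n = A.shape[0]
    return [float(np.linalg.det(A[:k, :k])) for k in range(1, n + 1)]

A = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])
print(leading_principal_minors(A))   # [2.0, 3.0, 4.0]
```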

Why in the world do we care about principal and leading principal minors? We need to calculate the signs of the leading principal minors in order to determine the definiteness of a matrix. We need definiteness to check second-order conditions for maxima and minima. We also need definiteness of the Hessian matrix to check whether or not we have a concave function.

Quadratic Forms and Definiteness

Quadratic Forms

Consider the function $F: \mathbb{R}^2 \to \mathbb{R}$, where

$$F(x_1, x_2) = a_{11}x_1^2 + a_{12}x_1x_2 + a_{22}x_2^2.$$

We call this a quadratic form in $\mathbb{R}^2$. Notice that this can be expressed in matrix form as

$$F(x) = (x_1 \ x_2) \begin{pmatrix} a_{11} & \tfrac{1}{2}a_{12} \\ \tfrac{1}{2}a_{12} & a_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x'Ax,$$

where $x = (x_1, x_2)$, and $A$ is unique and symmetric.

The quadratic form in $\mathbb{R}^n$ is

$$F(x) = \sum_{i \le j} a_{ij} x_i x_j,$$

where $x = (x_1, \dots, x_n)$, and $A$ is unique and symmetric. This can also be expressed in matrix form:

$$F(x) = (x_1 \ x_2 \ \dots \ x_n) \begin{pmatrix} a_{11} & \tfrac{1}{2}a_{12} & \dots & \tfrac{1}{2}a_{1n} \\ \tfrac{1}{2}a_{12} & a_{22} & \dots & \tfrac{1}{2}a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tfrac{1}{2}a_{1n} & \tfrac{1}{2}a_{2n} & \dots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = x'Ax.$$
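A quick numerical check that the matrix representation reproduces the coefficient form, using arbitrary example coefficients:

```python
import numpy as np

# F(x) = 3 x1^2 + 4 x1 x2 + 5 x2^2, so a11 = 3, a12 = 4, a22 = 5.
A = np.array([[3., 2.],
              [2., 5.]])      # off-diagonal entries are a12 / 2

x = np.array([1.5, -2.0])
assert np.isclose(x @ A @ x, 3 * x[0]**2 + 4 * x[0] * x[1] + 5 * x[1]**2)
```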

Definiteness

Let $A$ be an $n \times n$ symmetric matrix. Then $A$ is:

1. positive definite if $x'Ax > 0 \ \forall\ x \neq 0$ in $\mathbb{R}^n$.
2. positive semidefinite if $x'Ax \ge 0 \ \forall\ x \neq 0$ in $\mathbb{R}^n$.
3. negative definite if $x'Ax < 0 \ \forall\ x \neq 0$ in $\mathbb{R}^n$.
4. negative semidefinite if $x'Ax \le 0 \ \forall\ x \neq 0$ in $\mathbb{R}^n$.
5. indefinite if $x'Ax > 0$ for some $x \in \mathbb{R}^n$, and $x'Ax < 0$ for some other $x \in \mathbb{R}^n$.

We can test for the definiteness of the matrix in the following fashion:

1. $A$ is positive definite iff all of its $n$ leading principal minors are strictly positive.
2. $A$ is negative definite iff all of its $n$ leading principal minors alternate in sign, where $|A_1| < 0$, $|A_2| > 0$, $|A_3| < 0$, etc.
3. If some $k$th-order leading principal minor of $A$ is nonzero but does not fit either of the above sign patterns, then $A$ is indefinite.

If the matrix $A$ would meet the criterion for positive or negative definiteness if we relaxed the strict inequalities to weak inequalities (i.e., we allow zero to fit into the pattern), then although the matrix is not positive or negative definite, it may be positive or negative semidefinite. In this case, we employ the following tests:

1. $A$ is positive semidefinite iff every principal minor of $A$ is $\ge 0$.
2. $A$ is negative semidefinite iff every principal minor of $A$ of odd order is $\le 0$ and every principal minor of even order is $\ge 0$.

Notice that for determining semidefiniteness, we can no longer check just the leading principal minors; we must check all principal minors. What a pain!
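The strict-definiteness test translates directly into code. A sketch under the notes' leading-principal-minor rules (helper name and examples are ours; semidefiniteness would require checking all principal minors):

```python
import numpy as np

def classify_definiteness(A):
    """Classify a symmetric matrix by the signs of its leading principal minors."""
    n = A.shape[0]
    minors = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    if all(m > 0 for m in minors):
        return "positive definite"
    # Alternating signs starting negative: |A1| < 0, |A2| > 0, ...
    if all(m * (-1) ** k > 0 for k, m in enumerate(minors, start=1)):
        return "negative definite"
    return "not definite: indefinite, or check all principal minors"

print(classify_definiteness(np.array([[2., 1.], [1., 2.]])))    # positive definite
print(classify_definiteness(np.array([[-2., 1.], [1., -2.]])))  # negative definite
```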

Homework

Do the following:

1. Let $A = \dots$ and $B = \dots$. Find $A - B$, $A + B$, $AB$, and $BA$.
2. Let $v = \dots$ and $u = \dots$. Find $u \cdot v$, $u'v$, and $v'u$.


3. Determine the ranks of the matrices in problem 14. How many linearly independent rows are in each?
4. Calculate the determinants of the matrices in problem 14. Which have inverses?