Consider two n × m matrices:
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nm} \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & & \vdots \\ b_{n1} & \cdots & b_{nm} \end{pmatrix}$$
Then the basic matrix operations are as follows:
$$A + B = \begin{pmatrix} a_{11} + b_{11} & \cdots & a_{1m} + b_{1m} \\ \vdots & & \vdots \\ a_{n1} + b_{n1} & \cdots & a_{nm} + b_{nm} \end{pmatrix}, \qquad \lambda A = \begin{pmatrix} \lambda a_{11} & \cdots & \lambda a_{1m} \\ \vdots & & \vdots \\ \lambda a_{n1} & \cdots & \lambda a_{nm} \end{pmatrix}, \quad \text{where } \lambda \in \mathbb{R}.$$
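These operations translate directly into code. Below is a minimal Python sketch using a list-of-rows representation for matrices; the helper names add and scale are illustrative choices, not notation from the notes.

```python
def add(A, B):
    """Element-wise sum of two n x m matrices stored as lists of rows."""
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

def scale(lam, A):
    """Multiply every element of the matrix A by the scalar lam."""
    return [[lam * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(add(A, B))    # [[6, 8], [10, 12]]
print(scale(2, A))  # [[2, 4], [6, 8]]
```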
Notice that the elements of a matrix are indexed as $a_{i,j}$, where i is the row and j is the column in which the element $a_{i,j}$ is found.
In order to multiply matrices CD, the number of columns in the C matrix must be equal to the number of rows in the D matrix. Say C is an n × m matrix, and D is an m × k matrix. Then multiplication is defined as follows:
$$\underbrace{C}_{n\times m}\,\underbrace{D}_{m\times k} = \begin{pmatrix} \sum_{q=1}^{m} c_{1,q}d_{q,1} & \cdots & \sum_{q=1}^{m} c_{1,q}d_{q,k} \\ \vdots & & \vdots \\ \sum_{q=1}^{m} c_{n,q}d_{q,1} & \cdots & \sum_{q=1}^{m} c_{n,q}d_{q,k} \end{pmatrix}_{n\times k}$$
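The summation formula translates directly into code. Here is a minimal Python sketch of this definition; the name matmul and the list-of-rows representation are assumptions for illustration.

```python
def matmul(C, D):
    """Multiply an n x m matrix C by an m x k matrix D, both stored as lists
    of rows. Element (i, j) of the product is sum over q of C[i][q] * D[q][j]."""
    n, m, k = len(C), len(D), len(D[0])
    assert all(len(row) == m for row in C), "columns of C must equal rows of D"
    return [[sum(C[i][q] * D[q][j] for q in range(m)) for j in range(k)]
            for i in range(n)]

# Example: a 2x3 matrix times a 3x2 matrix gives a 2x2 matrix.
C = [[1, 2, 3],
     [4, 5, 6]]
D = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(C, D))  # [[58, 64], [139, 154]]
```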
There are two notable special cases for multiplication of matrices. The first is called the inner product, which occurs when two vectors of the same length are multiplied together such that the result is a scalar:
$$v \cdot z = \underbrace{v}_{1\times n}\,\underbrace{z'}_{n\times 1} = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix} \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix} = \sum_{i=1}^{n} v_i z_i$$
The second is called the outer product:
$$\underbrace{v'}_{n\times 1}\,\underbrace{z}_{1\times n} = \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} \begin{pmatrix} z_1 & \cdots & z_n \end{pmatrix} = \begin{pmatrix} v_1 z_1 & \cdots & v_1 z_n \\ \vdots & & \vdots \\ v_n z_1 & \cdots & v_n z_n \end{pmatrix}_{n\times n}$$
Note that when we multiplied the matrices C and D together, the resulting $e_{i,j}$th element of E was just the inner product of the ith row of C and the jth column of D. Also, note that even if two matrices X and Y are both n × n, XY ≠ YX, except in special cases.
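As a sketch of the two special cases, and of non-commutativity, the snippet below reuses the illustrative matmul helper defined above; inner and outer are again names chosen for illustration, not from the notes.

```python
def inner(v, z):
    """Inner product of two vectors of equal length: a scalar."""
    assert len(v) == len(z)
    return sum(vi * zi for vi, zi in zip(v, z))

def outer(v, z):
    """Outer product v'z: an n x n matrix whose (i, j) element is v[i] * z[j]."""
    return [[vi * zj for zj in z] for vi in v]

v, z = [1, 2, 3], [4, 5, 6]
print(inner(v, z))   # 32
print(outer(v, z))   # [[4, 5, 6], [8, 10, 12], [12, 15, 18]]

# Non-commutativity: even for square X and Y, XY != YX in general.
X, Y = [[1, 2], [3, 4]], [[0, 1], [1, 0]]
print(matmul(X, Y))  # [[2, 1], [4, 3]]
print(matmul(Y, X))  # [[3, 4], [1, 2]]
```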
Zero Matrices
A zero matrix is a matrix in which every element is 0:
$$\mathbf{0}_{n\times k} = \begin{pmatrix} 0 & \cdots & 0 \\ \vdots & & \vdots \\ 0 & \cdots & 0 \end{pmatrix}$$
The following properties hold for zero matrices: A + 0 = 0 + A = A, A − A = 0, and A0 = 0, 0A = 0 whenever the dimensions conform.
Identity Matrices
The identity matrix is a matrix with ones along the diagonal and zeroes everywhere else. Note that the number of columns must equal the number of rows:
$$I_{n\times n} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$
The reason it is called the identity matrix is because AI = IA = A.
Square, Symmetric, and Transpose Matrices
A square matrix is a matrix whose number of rows is the same as its number of columns. For example, the
identity matrix is always square. If a square matrix has the property that $a_{i,j} = a_{j,i}$ for all its elements, then we call it a symmetric matrix.
The transpose of a matrix A, denoted A′, is a matrix such that for each element of A′, $a'_{i,j} = a_{j,i}$. For example, the transpose of the matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $\begin{pmatrix} a & c \\ b & d \end{pmatrix}$. Note that a matrix A is symmetric if A = A′.
The following properties of the transpose hold:
1. (A′)′ = A.
2. (A + B)′ = A′ + B′.
3. (αA)′ = αA′.
4. (AB)′ = B′A′.
5. If A is n × k, then A′ is k × n.
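A short Python illustration of the transpose and of property 4, again reusing the illustrative matmul helper from above; the transpose name is likewise an illustrative choice.

```python
def transpose(A):
    """Transpose: element (i, j) of A' is element (j, i) of A."""
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]   # 2 x 3
print(transpose(A))          # [[1, 4], [2, 5], [3, 6]]  (3 x 2)

# Check (AB)' == B'A' on a small example.
B = [[1, 0], [0, 1], [1, 1]]
print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True
```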
Diagonal and Triangular Matrices
A square matrix A is diagonal if it is all zeroes except along the diagonal:
$$\begin{pmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}$$
Note that all diagonal matrices are also symmetric.
Let $A_{ij}$ be the matrix obtained by deleting row i and column j from A. Then the scalar $M_{ij} = \det(A_{ij})$ is called the (i, j)th minor of A. The scalar $C_{ij} = (-1)^{i+j} M_{ij}$ is called the (i, j)th cofactor of A. The cofactor is merely the signed minor.
Armed with these two definitions, we notice that the determinant of a 2 × 2 matrix is
$$\det(A) = aM_{11} - bM_{12} = aC_{11} + bC_{12},$$
and the determinant of a 3 × 3 matrix is
$$\det(A) = aM_{11} - bM_{12} + cM_{13} = aC_{11} + bC_{12} + cC_{13}.$$
Therefore, we can define the determinant of an n × n square matrix as follows:
$$\det(A) = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n}.$$
Notice that the definition of the determinant uses elements and cofactors for the top row only. This is called a cofactor expansion along the first row. However, a cofactor expansion along any row or column will be equal to the determinant. The proof of this assertion is left as a homework problem for the 3 × 3 case.
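As a sketch of cofactor expansion along the first row, here is a recursive Python implementation; the function name det is an illustrative choice.

```python
def det(A):
    """Determinant via cofactor expansion along the first row:
    det(A) = a11*C11 + a12*C12 + ... + a1n*C1n, where C1j = (-1)^(1+j) * M1j."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_1j: delete row 0 and column j (0-indexed, so the sign is (-1)**j).
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 1], [0, 3, 0], [1, 0, 2]]))   # 9
```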
Example: Find the determinant of the 3 × 3 matrix
$$A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$$
by cofactor expansion.
Answer: Expanding along the first row, the determinant is
$$aC_{11} + bC_{12} + cC_{13} = a \det\begin{pmatrix} e & f \\ h & i \end{pmatrix} - b \det\begin{pmatrix} d & f \\ g & i \end{pmatrix} + c \det\begin{pmatrix} d & e \\ g & h \end{pmatrix}.$$
Now let's expand along the second column instead of the first row:
$$bC_{12} + eC_{22} + hC_{32} = -b \det\begin{pmatrix} d & f \\ g & i \end{pmatrix} + e \det\begin{pmatrix} a & c \\ g & i \end{pmatrix} - h \det\begin{pmatrix} a & c \\ d & f \end{pmatrix}.$$
Multiplying out both expressions shows that the two cofactor expansions give the same value for det(A).
Consider the system of equations:
$$\begin{aligned} y_1 &= a_1 x_1 + b_1 x_2 + \cdots + c_1 x_n \\ y_2 &= a_2 x_1 + b_2 x_2 + \cdots + c_2 x_n \\ &\ \vdots \\ y_m &= a_m x_1 + b_m x_2 + \cdots + c_m x_n \end{aligned}$$
The functions can be written in matrix form as
$$\underbrace{\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix}}_{m\times 1} = \underbrace{\begin{pmatrix} a_1 & b_1 & \cdots & c_1 \\ a_2 & b_2 & \cdots & c_2 \\ \vdots & & & \vdots \\ a_m & b_m & \cdots & c_m \end{pmatrix}}_{m\times n} \underbrace{\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}}_{n\times 1}$$
In short, we can write this system as y = Ax. For simplicity, denote the ith row in the matrix as a separate vector $v_i$, so that
$$\underbrace{\begin{pmatrix} a_1 & b_1 & \cdots & c_1 \\ a_2 & b_2 & \cdots & c_2 \\ \vdots & & & \vdots \\ a_m & b_m & \cdots & c_m \end{pmatrix}}_{m\times n} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{pmatrix}$$
We say the vectors $v_1, \ldots, v_m$ are linearly dependent if there exist scalars $q_1, \ldots, q_m$, not all zero, such that
$$\sum_{i=1}^{m} q_i v_i = 0.$$
We say the vectors $v_1, \ldots, v_m$ are linearly independent if the only scalars $q_1, \ldots, q_m$ such that
$$\sum_{i=1}^{m} q_i v_i = 0$$
are $q_1 = \cdots = q_m = 0$. We can use this definition of linear independence and dependence for columns as well.
The rank of a matrix is the number of linearly independent rows or columns in the matrix. (Note: the number of linearly independent rows is the same as the number of linearly independent columns.)
If we draw the lines in the system in $\mathbb{R}^n$, then they will all cross either at no points, at one point, or at infinitely many points. Therefore, the system may have no solutions, one solution, or many solutions for a given vector y. If it has more than one solution, it will have infinitely many solutions, since straight lines can only intersect once if they do not coincide. The following are conditions on the number of solutions of a system:
1. The system Ax = y has at least one solution for every y iff rank A = the number of rows in A.
2. The system Ax = y has at most one solution for each choice of y iff rank A = the number of columns in A.
This implies that a system of linear equations with coefficient matrix A will have exactly one solution for each choice of y iff rank A = the number of columns in A = the number of rows of A. If a square matrix has exactly one solution for each y, we say the matrix is non-singular. Otherwise, it is singular, and the system has either no solutions or infinitely many solutions.
Recall the system of equations:
$$\begin{aligned} y_1 &= a_1 x_1 + b_1 x_2 + \cdots + c_1 x_n \\ y_2 &= a_2 x_1 + b_2 x_2 + \cdots + c_2 x_n \end{aligned}$$
Now subtract four times the first row from the second row to obtain
Then rearrange to get the matrix in row echelon form
When a matrix is in row echelon form, it is easy to check whether all the rows are linearly independent. For linear independence, we must have that all the rows in the row echelon form are non-zero. If not, then the matrix has linearly dependent rows. We have shown that all the rows in A are linearly independent because the row echelon form contains no zero rows.
Another way of defining the rank of a matrix is as the number of non-zero (or linearly independent) rows in its row echelon form. For example, the matrix A is of full rank since all its rows are linearly independent.
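As an illustration, the following Python sketch computes the rank by forward elimination to row echelon form and then counting the non-zero rows; the function name rank and the numerical tolerance are illustrative assumptions.

```python
def rank(A, tol=1e-12):
    """Rank = number of non-zero rows after reducing A to row echelon form."""
    M = [row[:] for row in A]          # work on a copy
    n_rows, n_cols = len(M), len(M[0])
    r = 0                              # index of the next pivot row
    for col in range(n_cols):
        # Find a row at or below r with a non-zero entry in this column.
        pivot = next((i for i in range(r, n_rows) if abs(M[i][col]) > tol), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Eliminate the entries below the pivot.
        for i in range(r + 1, n_rows):
            factor = M[i][col] / M[r][col]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print(rank([[1, 2], [2, 4]]))  # 1 (the second row is a multiple of the first)
print(rank([[1, 0], [0, 1]]))  # 2 (full rank)
```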
Using Elementary Row Operations to Solve a System of Equations
We can solve a system of equations by writing the matrix
$$\left(\begin{array}{cccc|c} a_1 & b_1 & \cdots & c_1 & y_1 \\ a_2 & b_2 & \cdots & c_2 & y_2 \\ \vdots & & & \vdots & \vdots \\ a_m & b_m & \cdots & c_m & y_m \end{array}\right),$$
called the augmented matrix of A, and using elementary row operations.
Example:
Let
as before. Say that the vector y = (1, 1, 1)′. Then the augmented matrix is
Performing the same row operations as before, we continue until the left hand side of the augmented matrix looks like the identity matrix:
$$\left(\begin{array}{ccc|c} 1 & 0 & 0 & \tfrac{3}{2} \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & \tfrac{1}{2} \end{array}\right)$$
Notice that this implies
$$I \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} \tfrac{3}{2} \\ -1 \\ \tfrac{1}{2} \end{pmatrix} \;\Rightarrow\; x = \begin{pmatrix} \tfrac{3}{2} \\ -1 \\ \tfrac{1}{2} \end{pmatrix},$$
so we have found a solution using elementary row operations.
In summary, if we form the augmented matrix of A and reduce the left hand side of the matrix to its reduced row echelon form through elementary row operations (so that each row is all zeros except possibly for a leading one, and each leading one is the only non-zero entry in its column), then the remaining vector on the right hand side will be the solution to the system.
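The whole procedure can be sketched in Python as Gauss-Jordan elimination on the augmented matrix; the function name solve and the partial-pivoting detail are illustrative choices, not part of the notes.

```python
def solve(A, y):
    """Solve Ax = y for a square, non-singular A by reducing the augmented
    matrix [A | y] to reduced row echelon form with elementary row operations."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]  # build the augmented matrix
    for col in range(n):
        # Swap the row with the largest pivot into place (partial pivoting).
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry is 1.
        p = M[col][col]
        M[col] = [entry / p for entry in M[col]]
        # Subtract multiples of the pivot row from every other row.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return [row[-1] for row in M]  # the right-hand column is the solution

print(solve([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```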
It can be shown that the inverse of a square matrix will exist if its determinant is non-zero. The following statements are all equivalent for a square n × n matrix A:
1. A is non-singular.
2. det(A) ≠ 0.
3. A has full rank: rank A = n.
4. Ax = y has exactly one solution for each y.
5. A has an inverse A⁻¹ such that AA⁻¹ = A⁻¹A = I.
Calculating the Inverse Matrix by the Adjoint Matrix
The adjoint matrix of a square matrix A is the transposed matrix of cofactors of A, or
$$\mathrm{adj}(A) = \begin{pmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & & & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{pmatrix}$$
Notice that the adjoint of a 2 × 2 matrix is
$$\mathrm{adj}(A) = \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}.$$
The inverse of the matrix A can be found by
$$A^{-1} = \frac{1}{\det(A)} \, \mathrm{adj}(A).$$
Therefore, the inverse of a 2 × 2 matrix is
$$A^{-1} = \frac{1}{a_{11}a_{22} - a_{12}a_{21}} \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}.$$
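Here is a minimal Python sketch of the 2 × 2 adjoint formula, reusing the illustrative matmul helper from above to verify that AA⁻¹ = I; the name inverse_2x2 is an illustrative choice.

```python
def inverse_2x2(A):
    """Invert a 2x2 matrix using the adjoint formula A^(-1) = adj(A) / det(A)."""
    (a, b), (c, d) = A
    det_A = a * d - b * c
    if det_A == 0:
        raise ValueError("matrix is singular")
    adj = [[d, -b], [-c, a]]
    return [[entry / det_A for entry in row] for row in adj]

A = [[4, 7], [2, 6]]
print(inverse_2x2(A))             # [[0.6, -0.7], [-0.2, 0.4]]
print(matmul(A, inverse_2x2(A)))  # [[1.0, 0.0], [0.0, 1.0]]
```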
Example: List all the principal minors of the 3 × 3 matrix:
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
Answer: There is one third order principal minor of A, det(A). There are three second order principal minors:
1. $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, formed by deleting the third row and column of A.
2. $\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix}$, formed by deleting the second row and column of A.
3. $\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}$, formed by deleting the first row and column of A.
There are also three first order principal minors: $a_{11}$, formed by deleting the last two rows and columns; $a_{22}$, formed by deleting the first and last rows and columns; and $a_{33}$, formed by deleting the first two rows and columns.
Leading Principal Minors
The kth order leading principal minor is the determinant of the leading principal submatrix obtained by deleting the last n − k rows and columns of an n × n matrix A.
Example:
List all the leading principal minors of the 3 × 3 matrix:
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$
Answer: There are three leading principal minors:
1. $|a_{11}|$, formed by deleting the last two rows and columns of A.
2. $\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$, formed by deleting the last row and column of A.
3. $\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$, formed by deleting no rows or columns of A.
Why in the world do we care about principal and leading principal minors? We need to calculate the signs of the leading principal minors in order to determine the definiteness of a matrix. We need definiteness to check second-order conditions for maxima and minima. We also need definiteness of the Hessian matrix to check whether or not we have a concave function.
Quadratic Forms
Consider the function F : $\mathbb{R}^2 \to \mathbb{R}$, where $F(x) = a_{11}x_1^2 + a_{12}x_1x_2 + a_{22}x_2^2$ for x ∈ $\mathbb{R}^2$. Notice that this can be expressed in matrix form as
$$F(x) = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} a_{11} & \tfrac{1}{2}a_{12} \\ \tfrac{1}{2}a_{12} & a_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x'Ax,$$
where x = (x_1, x_2), and A is unique and symmetric.
The quadratic form in $\mathbb{R}^n$ is
$$F(x) = \sum_{i \le j} a_{ij} x_i x_j,$$
where x = (x_1, ..., x_n), and A is unique and symmetric. This can also be expressed in matrix form:
$$F(x) = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix} \begin{pmatrix} a_{11} & \tfrac{1}{2}a_{12} & \cdots & \tfrac{1}{2}a_{1n} \\ \tfrac{1}{2}a_{12} & a_{22} & \cdots & \tfrac{1}{2}a_{2n} \\ \vdots & & \ddots & \vdots \\ \tfrac{1}{2}a_{1n} & \tfrac{1}{2}a_{2n} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = x'Ax.$$
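A quadratic form is straightforward to evaluate numerically. The sketch below assumes the symmetric matrix A has already been formed, with half of each cross coefficient placed off the diagonal; quadratic_form is an illustrative name.

```python
def quadratic_form(A, x):
    """Evaluate F(x) = x'Ax for a square matrix A and a vector x."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# F(x) = 3*x1^2 + 4*x1*x2 + 2*x2^2 written as x'Ax with the symmetric matrix
# A = [[3, 2], [2, 2]]: the off-diagonal entries are a12/2 = 4/2 = 2.
A = [[3, 2], [2, 2]]
print(quadratic_form(A, [1, 1]))  # 3 + 4 + 2 = 9
```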
Definiteness
Let A be an n × n symmetric matrix. Then A is:
1. positive definite if $x'Ax > 0$ for all x ≠ 0 in $\mathbb{R}^n$;
2. positive semidefinite if $x'Ax \ge 0$ for all x ≠ 0 in $\mathbb{R}^n$;
3. negative definite if $x'Ax < 0$ for all x ≠ 0 in $\mathbb{R}^n$;
4. negative semidefinite if $x'Ax \le 0$ for all x ≠ 0 in $\mathbb{R}^n$;
5. indefinite if $x'Ax > 0$ for some x ∈ $\mathbb{R}^n$, and $x'Ax < 0$ for some other x ∈ $\mathbb{R}^n$.
We can test for the definiteness of the matrix in the following fashion:
1. A is positive definite iff all of its leading principal minors are strictly positive: $|A_1| > 0$, $|A_2| > 0$, $|A_3| > 0$, etc.
2. A is negative definite iff its leading principal minors alternate in sign, beginning with a negative: $|A_1| < 0$, $|A_2| > 0$, $|A_3| < 0$, etc.
3. If some non-zero leading principal minors violate both of these sign patterns, then A is indefinite.
If the matrix A would meet the criterion for positive or negative definiteness if we relaxed the strict inequalities to weak inequalities (i.e. we allow zero to fit into the pattern), then although the matrix is not positive or negative definite, it may be positive or negative semidefinite. In this case, we employ the following tests:
1. A is positive semidefinite iff every principal minor of A is ≥ 0.
2. A is negative semidefinite iff every principal minor of odd order is ≤ 0 and every principal minor of even order is ≥ 0.
Notice that for determining semidefiniteness, we can no longer check just the leading principal minors, but we must check all principal minors. What a pain!
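As a sketch of the definiteness test, the following Python snippet computes the leading principal minors (reusing the recursive det from above) and checks the two sign patterns; classify is an illustrative name, and it deliberately defers to the full principal-minor check when neither pattern holds.

```python
def leading_principal_minors(A):
    """Return [|A1|, |A2|, ..., |An|] for an n x n matrix A."""
    n = len(A)
    return [det([row[:k] for row in A[:k]]) for k in range(1, n + 1)]

def classify(A):
    """Classify a symmetric matrix by the signs of its leading principal minors."""
    minors = leading_principal_minors(A)
    if all(m > 0 for m in minors):
        return "positive definite"
    # Negative definite: signs alternate starting with a negative |A1|.
    if all((m < 0 if k % 2 == 1 else m > 0) for k, m in enumerate(minors, 1)):
        return "negative definite"
    return "not definite (check all principal minors for semidefiniteness)"

print(classify([[2, 0], [0, 3]]))    # positive definite
print(classify([[-2, 0], [0, -3]]))  # negative definite
print(classify([[1, 0], [0, -1]]))   # not definite (...)
```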
Do the following:
(a) Let A = … and B = … . Find A − B, A + B, AB, and BA.
(b) Let v = … and u = … . Find u · v, u'v, and v'u.