Short Guides to Microeconometrics
Fall 2021
Kurt Schmidheiny/Klaus Neusser
Universität Basel
Elements of Matrix Algebra
Contents
1 Definitions 2
2 Matrix Operations 3
3 Rank of a Matrix 6
4 Special Functions of Square Matrices 7
5 Systems of Equations 10
6 Eigenvalue, -vector and Decomposition 11
7 Quadratic Forms 13
8 Partitioned Matrices 15
9 Derivatives with Matrix Algebra 16
10 Kronecker Product 18
References 19
Formula Sources and Proofs 20
Version: 29-9-2021, 15:49



Elements of Matrix Algebra 2

Foreword

These lecture notes summarize the main results concerning matrix algebra as they are used in econometrics and economics. For a deeper discussion of the material, the interested reader should consult the references listed at the end.

1 Definitions

A matrix is a rectangular array of numbers. Here we consider only real numbers. If the matrix has n rows and m columns, we say that the matrix is of dimension (n × m). We denote matrices by capital bold letters:

A = (A)ij = (aij) =

⎛ a11  a12  ...  a1m ⎞
⎜ a21  a22  ...  a2m ⎟
⎜  ⋮    ⋮          ⋮ ⎟
⎝ an1  an2  ...  anm ⎠

The numbers aij are called the elements of the matrix. An (n × 1) matrix is a column vector with n elements. Similarly, a (1 × m) matrix is a row vector with m elements. We denote vectors by bold letters.

    ⎛ a1 ⎞
a = ⎜ a2 ⎟        b = ( b1  b2  ...  bm )
    ⎜ ⋮  ⎟
    ⎝ an ⎠

A (1 × 1) matrix is a scalar which is denoted by an italic letter. The null matrix (O) is a matrix whose elements are all equal to zero, i.e. aij = 0 for all i = 1, ..., n and j = 1, ..., m. A square matrix is a matrix with the same number of columns and rows, i.e. n = m.
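These definitions map directly onto array objects in numerical software. A minimal sketch in Python with NumPy (NumPy and the variable names are this illustration's assumptions, not part of the notes):

```python
import numpy as np

# A (2 x 3) matrix: n = 2 rows, m = 3 columns
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
n, m = A.shape

a = np.array([[1.0], [2.0], [3.0]])   # (3 x 1) column vector
b = np.array([[4.0, 5.0]])            # (1 x 2) row vector
O = np.zeros((n, m))                  # null matrix: all elements zero
S = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # square matrix: n = m = 2

assert (n, m) == (2, 3)
assert S.shape[0] == S.shape[1]       # square: same number of rows and columns
```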

Elements of Matrix Algebra 4

2.3 Addition and Subtraction

The addition and subtraction of matrices are defined only for matrices of the same dimension.

Definition 2. The sum of two matrices A and B of the same dimensions is given by the sum of their elements, i.e.

C = A + B ⇐⇒ cij = aij + bij for all i and j

The sum of a matrix A and a scalar b is a matrix C = A + b with cij = aij + b. Note that A + b = b + A. We have the following calculation rules if matrix dimensions agree:

  • A + O = A (2.3)
  • A − B = A + (−B) (2.4)
  • A + B = B + A (2.5)
  • (A + B) + C = A + (B + C) (2.6)
  • (A + B)′ = A′ + B′ (2.7)
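Rules (2.3)-(2.7) can be checked numerically on random matrices; a quick sketch in Python/NumPy (an assumption of this illustration, the notes themselves contain no code):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
C = rng.normal(size=(3, 3))
O = np.zeros((3, 3))

assert np.allclose(A + O, A)                    # (2.3)
assert np.allclose(A - B, A + (-B))             # (2.4)
assert np.allclose(A + B, B + A)                # (2.5)
assert np.allclose((A + B) + C, A + (B + C))    # (2.6)
assert np.allclose((A + B).T, A.T + B.T)        # (2.7)
```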

2.4 Product

Definition 3. The inner product (dot product, scalar product) of two vectors a and b of the same dimension (n × 1) is a scalar (real number) defined as:

a′b = b′a = a1b1 + a2b2 + · · · + anbn = ∑ᵢ₌₁ⁿ aibi.

The product of a scalar c and a matrix A is a matrix B = cA with bij = caij. Note that cA = Ac when c is a scalar.
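A small numerical check of the inner product and scalar multiplication (again Python/NumPy, which is this illustration's assumption):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

inner = a @ b                             # a'b
assert np.isclose(inner, np.sum(a * b))   # sum of elementwise products
assert np.isclose(inner, b @ a)           # a'b = b'a
assert np.isclose(inner, 32.0)            # 1*4 + 2*5 + 3*6 = 32

# scalar times matrix: B = cA with b_ij = c * a_ij, and cA = Ac
c = 2.0
A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(c * A, A * c)
```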

Definition 4. The product of two matrices A and B with dimensions (n × k) and (k × m), respectively, is given by the matrix C with dimension (n × m) such that

C = A B ⇐⇒ cij = ∑ₛ₌₁ᵏ ais bsj for all i and j

Remark 1. The matrix product is only defined if the number of columns of the first matrix equals the number of rows of the second matrix. Thus, although A B may be defined, B A is only defined if n = m. For square matrices both A B and B A are defined.

Remark 2. The product of two matrices is in general not commutative, i.e. A B ≠ B A.

Remark 3. The product A B may also be defined as

cij = (C)ij = a′i• b•j

where a′ i• denotes the i-th row of A and b•j the j-th column of B.

We have the following calculation rules if matrix dimensions agree:

  • AI = A, IA = A (2.8)
  • AO = O, OA = O (2.9)
  • (AB)C = A(BC) = ABC (2.10)
  • A(B + C) = AB + AC (2.11)
  • (B + C)A = BA + CA (2.12)
  • c(A + B) = cA + cB (2.13)
  • (AB)′ = B′A′ (order!) (2.14)
  • (ABC)′ = C′B′A′ (order!) (2.15)
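Conformability and rules (2.10)-(2.15) can be verified directly; a sketch in Python/NumPy (not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4))
C = rng.normal(size=(4, 2))

assert np.allclose((A @ B) @ C, A @ (B @ C))        # (2.10) associativity
assert np.allclose((A @ B).T, B.T @ A.T)            # (2.14) order reverses
assert np.allclose((A @ B @ C).T, C.T @ B.T @ A.T)  # (2.15)

# square matrices: XY and YX are both defined but generally unequal
X = rng.normal(size=(2, 2))
Y = rng.normal(size=(2, 2))
assert not np.allclose(X @ Y, Y @ X)
```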


4 Special Functions of Square Matrices

In this section only square (n × n) matrices are considered.

4.1 Trace of a Matrix

Definition 5. The trace of a matrix A, denoted by tr(A), is the sum of its diagonal elements:

tr(A) = ∑ᵢ₌₁ⁿ aii

The following calculation rules hold if matrix dimensions agree:

  • tr(cA) = c tr(A) (4.1)
  • tr(A′) = tr(A) (4.2)
  • tr(A + B) = tr(A) + tr(B) (4.3)
  • tr(AB) = tr(BA) (4.4)
  • tr(ABC) = tr(BCA) = tr(CAB) (4.5)
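The trace rules (4.1)-(4.4) are easy to confirm numerically; a sketch assuming Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
c = 2.5

assert np.isclose(np.trace(c * A), c * np.trace(A))            # (4.1)
assert np.isclose(np.trace(A.T), np.trace(A))                  # (4.2)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))  # (4.3)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))            # (4.4) cyclic
```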

4.2 Determinant

The determinant of an (n × n) matrix A with n > 1 can be computed according to the following formula:

|A| = ∑ᵢ₌₁ⁿ aij (−1)^(i+j) |Aij| for some arbitrary j

The determinant, computed as above, is said to be developed according to the j-th column. The term (−1)^(i+j) |Aij| is called the cofactor of the element aij. Thereby Aij is a matrix of dimension ((n − 1) × (n − 1)) which is obtained by deleting the i-th row and the j-th column.


    ⎛ a11  ...  a1j  ...  a1n ⎞
    ⎜  ⋮         ⋮         ⋮  ⎟
A = ⎜ ai1  ...  aij  ...  ain ⎟
    ⎜  ⋮         ⋮         ⋮  ⎟
    ⎝ an1  ...  anj  ...  ann ⎠

For n = 1, i.e. if A is a scalar, the determinant is the scalar itself, |A| = a11. For n = 2, the determinant is given by:

|A| = a11 a22 − a12 a21.

If at least two columns (rows) are linearly dependent, the determinant is equal to zero and the inverse of A does not exist. The matrix is called singular in this case. If the matrix is nonsingular, all columns (rows) are linearly independent. If a column or a row has just zeros as its elements, the determinant is equal to zero. If two columns (rows) are interchanged, the determinant changes its sign. Calculation rules for the determinant are:

  • |A| = |A′| (4.6)
  • |AB| = |A| · |B| (4.7)
  • |cA| = cⁿ |A| (4.8)
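The cofactor expansion translates into a short recursive function. A sketch in Python/NumPy (the name det_cofactor is this illustration's, not the notes'); its runtime is factorial in n, so it only mirrors the formula and is not for practical use:

```python
import numpy as np

def det_cofactor(A, j=0):
    """Determinant by cofactor expansion along column j (illustration only)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]                # |A| of a scalar is the scalar itself
    total = 0.0
    for i in range(n):
        # minor A_ij: delete row i and column j
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        total += A[i, j] * (-1) ** (i + j) * det_cofactor(minor)
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(det_cofactor(A), 1 * 4 - 2 * 3)      # a11 a22 - a12 a21 = -2
assert np.isclose(det_cofactor(A), np.linalg.det(A))   # agrees with the library
```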

4.3 Inverse of a Matrix

If A is a square matrix, there may exist a matrix B with the property AB = BA = I. If such a matrix exists, it is called the inverse of A and is denoted by A⁻¹, hence AA⁻¹ = A⁻¹A = I. The inverse of a matrix can be computed as follows:

A⁻¹ = (1/|A|) ·

⎛ (−1)^(1+1)|A11|  (−1)^(2+1)|A21|  ...  (−1)^(n+1)|An1| ⎞
⎜ (−1)^(1+2)|A12|  (−1)^(2+2)|A22|  ...  (−1)^(n+2)|An2| ⎟
⎜        ⋮                 ⋮                     ⋮       ⎟
⎝ (−1)^(1+n)|A1n|  (−1)^(2+n)|A2n|  ...  (−1)^(n+n)|Ann| ⎠
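This cofactor (adjugate) formula can also be coded directly. A sketch assuming Python/NumPy (the helper name inverse_adjugate is hypothetical); it is numerically inferior to library inverses and shown only to mirror the formula:

```python
import numpy as np

def inverse_adjugate(A):
    """Inverse via the cofactor formula: A^-1 = adj(A) / |A|."""
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # minor A_ij: delete row i and column j
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    # the adjugate is the transpose of the cofactor matrix
    return cof.T / np.linalg.det(A)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
assert np.allclose(A @ inverse_adjugate(A), np.eye(2))
assert np.allclose(inverse_adjugate(A), np.linalg.inv(A))
```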


5 Systems of Equations

Consider the following system of m equations in n unknowns x1, ..., xn:

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
⋮
am1 x1 + am2 x2 + · · · + amn xn = bm

If we collect the unknowns into a vector x = (x1, ..., xn)′, the constants b1, ..., bm into a vector b, and the coefficients (aij) into an (m × n) matrix A, we can rewrite the equation system compactly in matrix form as follows:

⎛ a11  a12  ...  a1n ⎞ ⎛ x1 ⎞   ⎛ b1 ⎞
⎜ a21  a22  ...  a2n ⎟ ⎜ x2 ⎟ = ⎜ b2 ⎟
⎜  ⋮    ⋮         ⋮  ⎟ ⎜ ⋮  ⎟   ⎜ ⋮  ⎟
⎝ am1  am2  ...  amn ⎠ ⎝ xn ⎠   ⎝ bm ⎠
          A              x        b

A x = b

This equation system has a unique solution if m = n, i.e. if A is a square matrix, and A is nonsingular, i.e. A⁻¹ exists. The solution is then given by

x = A⁻¹ b

Remark 4. To achieve numerical accuracy it is preferable not to compute the inverse explicitly. There are efficient numerical algorithms which can solve the equation system without computing the inverse.
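Remark 4 in practice: NumPy's np.linalg.solve (an assumption of this Python sketch, not part of the notes) factorizes A rather than forming A⁻¹ explicitly:

```python
import numpy as np

# 3 x1 + x2 = 9,  x1 + 2 x2 = 8
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)     # preferred: solves without an explicit inverse
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.inv(A) @ b)   # same result, less stable route
```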


6 Eigenvalue, -vector and Decomposition

6.1 Eigenvalue and Eigenvector

A scalar λ is said to be an eigenvalue of the square matrix A if there exists a vector x ≠ 0 such that

A x = λx

The vector x is called an eigenvector corresponding to λ. If x is an eigenvector, then αx with α ≠ 0 is also an eigenvector. Eigenvectors are therefore not unique. It is sometimes useful to normalize the length of the eigenvectors to one, i.e. to choose the eigenvector such that x′x = 1.

6.2 Characteristic Equation

In order to find the eigenvalues and eigenvectors of a square matrix, one has to solve the equation system

A x = λx = λI x ⇐⇒ (A − λ I)x = 0.

This equation system has a nontrivial solution, x ≠ 0, if and only if the matrix (A − λI) is singular, or equivalently if and only if the determinant of (A − λI) is equal to zero. This leads to an equation in the unknown parameter λ:

|A − λI| = 0.

This equation is called the characteristic equation of the matrix A and corresponds to a polynomial equation of order n. The n solutions of this equation (roots) are the eigenvalues of the matrix. The solutions may be complex numbers. Some solutions may appear several times. Eigenvectors corresponding to some eigenvalue λ can be obtained from the equation (A − λ I)x = 0. We have the following relations for an (n × n) matrix A:

  • tr(A) = ∑ᵢ₌₁ⁿ λi (6.1)
  • |A| = ∏ᵢ₌₁ⁿ λi (6.2)
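Relations (6.1) and (6.2) can be checked with a library eigenvalue routine; a sketch assuming Python/NumPy (eigenvalues of a general real matrix may be complex, so only the real part of the sum and product is compared):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
lam = np.linalg.eigvals(A)    # possibly complex eigenvalues

assert np.isclose(np.trace(A), lam.sum().real)        # (6.1) trace = sum
assert np.isclose(np.linalg.det(A), lam.prod().real)  # (6.2) det = product
```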


7 Quadratic Forms

For a vector x ∈ Rⁿ and a symmetric matrix A of dimension (n × n) the scalar function

f(x) = x′Ax = ∑ⱼ₌₁ⁿ ∑ᵢ₌₁ⁿ xi xj aij

is called a quadratic form. The quadratic form x′Ax, and therefore the matrix A, is called positive (negative) definite if and only if

x′Ax > 0 (< 0) for all x ≠ 0.

The property that A is positive definite implies that

  • λi > 0 for all i = 1, ..., n (7.1)
  • |A| > 0 (7.2)
  • A−^1 exists and is positive definite (7.3)
  • tr(A) > 0 (7.4)

The first property is an alternative definition for a positive definite matrix. The quadratic form x′Ax, and therefore the matrix A, is called nonnegative definite or positive semi-definite, if and only if

x′Ax ≥ 0 for all x.

For nonnegative definite matrices we have:

  • λi ≥ 0 for all i = 1, ..., n (7.5)
  • |A| ≥ 0 (7.6)
  • tr(A) ≥ 0 (7.7)

The first property is an alternative definition for nonnegative definiteness.


For an (n × m) matrix B,

  • B′B is nonnegative definite (7.8)
  • B′B is positive definite if B has full column rank (7.9)
  • BB′^ is nonnegative definite (7.10)
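Property (7.8) gives an easy way to construct a nonnegative definite matrix, and the eigenvalue characterizations (7.5)-(7.7) can then be checked numerically; a sketch assuming Python/NumPy (the small tolerances guard against floating-point noise):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(5, 3))
A = B.T @ B                        # symmetric, nonnegative definite by (7.8)

lam = np.linalg.eigvalsh(A)        # eigenvalues of a symmetric matrix
assert np.all(lam >= -1e-12)       # (7.5) all eigenvalues nonnegative
assert np.linalg.det(A) >= -1e-12  # (7.6) determinant nonnegative
assert np.trace(A) >= 0            # (7.7) trace nonnegative
```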

If the (n × m) matrix B has rank m (full column rank) and the (n × n) matrix A is positive definite then

  • B′AB is positive definite (7.11)

The inverse of a symmetric positive definite (n × n) matrix A can be decomposed into

A⁻¹ = C′C where CAC′ = I

and C is an (n × n) matrix.
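One concrete choice of C uses the eigendecomposition A = QΛQ′ of a symmetric positive definite A, taking C = Λ^(−1/2) Q′; this particular construction is this illustration's choice (the notes do not prescribe how C is obtained). A sketch assuming Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.normal(size=(4, 4))
A = B @ B.T + 4 * np.eye(4)     # symmetric positive definite by construction

lam, Q = np.linalg.eigh(A)      # A = Q diag(lam) Q'
C = np.diag(lam ** -0.5) @ Q.T  # C = diag(lam^-1/2) Q'

assert np.allclose(C @ A @ C.T, np.eye(4))         # C A C' = I
assert np.allclose(C.T @ C, np.linalg.inv(A))      # A^-1 = C'C
```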


9 Derivatives with Matrix Algebra

A linear function f from the n-dimensional vector space of real numbers, Rⁿ, to the real numbers, R, f : Rⁿ → R, is determined by the coefficient vector a = (a1, ..., an)′:

y = f(x) = a′x = ∑ᵢ₌₁ⁿ ai xi = a1x1 + a2x2 + · · · + anxn

where x is a column vector of dimension n and y a scalar. The derivative of y = f(x) with respect to the column vector x is defined as follows:

∂y/∂x = ∂(a′x)/∂x = ∂(x′a)/∂x = (∂y/∂x1, ∂y/∂x2, ..., ∂y/∂xn)′ = (a1, a2, ..., an)′ = a

and with respect to the row vector x′ as follows:

∂y/∂x′ = ∂(a′x)/∂x′ = ∂(x′a)/∂x′ = [ ∂y/∂x1  ∂y/∂x2  ...  ∂y/∂xn ] = [ a1  a2  ...  an ] = a′

The simultaneous equation system y = Ax can be viewed as m linear functions yi = a′i x, where a′i denotes the i-th row of the (m × n) dimensional matrix A. Thus the derivative of yi with respect to x is given by

∂yi/∂x = ∂(a′i x)/∂x = ai

Consequently the derivative of y = Ax with respect to the row vector x′ can be defined as

∂y/∂x′ = ∂(Ax)/∂x′ =

⎛ ∂y1/∂x′ ⎞   ⎛ a′1 ⎞
⎜ ∂y2/∂x′ ⎟ = ⎜ a′2 ⎟ = A.
⎜    ⋮    ⎟   ⎜  ⋮  ⎟
⎝ ∂ym/∂x′ ⎠   ⎝ a′m ⎠
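This Jacobian identity can be confirmed against central finite differences; a sketch assuming Python/NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([0.5, -1.0, 2.0])
f = lambda x: A @ x

# finite-difference Jacobian: entry (i, j) approximates dy_i / dx_j
eps = 1e-6
J = np.empty((2, 3))
for j in range(3):
    e = np.zeros(3)
    e[j] = eps
    J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)

assert np.allclose(J, A, atol=1e-6)   # dy/dx' = A
```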


The derivative of y = Ax with respect to the column vector x is therefore the transpose,

∂y/∂x = ∂(Ax)/∂x = (∂y/∂x′)′ = A′

For a square matrix A of dimension (n × n) and the quadratic form x′Ax = ∑ⱼ₌₁ⁿ ∑ᵢ₌₁ⁿ xi xj aij, the derivative with respect to the column vector x is defined as

∂(x′Ax)/∂x = (A + A′)x.

If A is a symmetric matrix this reduces to ∂(x′Ax)/∂x = 2Ax. The derivative of the quadratic form x′Ax with respect to the matrix element aij is given by ∂(x′Ax)/∂aij = xi xj. Therefore the derivative with respect to the matrix A is given by

∂(x′Ax)/∂A = xx′.
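The gradient (A + A′)x of the quadratic form can be verified against central finite differences; a sketch assuming Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(3, 3))           # not necessarily symmetric
x = rng.normal(size=3)
f = lambda v: v @ A @ v               # quadratic form x'Ax

g = (A + A.T) @ x                     # analytic gradient

# central-difference gradient, one coordinate at a time
eps = 1e-6
g_num = np.array([
    (f(x + eps * np.eye(3)[i]) - f(x - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
assert np.allclose(g, g_num, atol=1e-5)
```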


References

[1] Abadir, K.M. and J.R. Magnus, Matrix Algebra, Cambridge: Cambridge University Press, 2005.

[2] Amemiya, T., Introduction to Statistics and Econometrics, Cambridge, Massachusetts: Harvard University Press, 1994.

[3] Dhrymes, P.J., Introductory Econometrics, New York: Springer-Verlag, 1978.

[4] Meyer, C.D., Matrix Analysis and Applied Linear Algebra, Philadelphia: SIAM, 2000.

[5] Strang, G., Linear Algebra and its Applications, 3rd Edition, San Diego: Harcourt Brace Jovanovich, 1986.

[6] Magnus, J.R., and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics, Chichester: John Wiley,


Formula Sources and Proofs

(2.8) Abadir and Magnus (2005), p. 28, ex. 2.18 (b).
(2.10) Abadir and Magnus (2005), p. 25, ex. 2.14 (a).
(2.11) Abadir and Magnus (2005), p. 25, ex. 2.14 (b).
(2.14) Abadir and Magnus (2005), p. 26, ex. 2.15 (a).
(2.15) Abadir and Magnus (2005), p. 26, ex. 2.15 (b).
(3.1) Abadir and Magnus (2005), pp. 78-79, ex. 4.7 (a).
(3.2) Abadir and Magnus (2005), pp. 77-78, ex. 4.5.
(3.3) Abadir and Magnus (2005), p. 81, ex. 4.13 (d).
(3.4) Abadir and Magnus (2005), p. 81, ex. 4.15 (b).
(3.5) Abadir and Magnus (2005), p. 85, ex. 4.25 (c).
(3.6) Abadir and Magnus (2005), p. 85, ex. 4.25 (d).
(3.7) Abadir and Magnus (2005), p. 221, ex. 8.27 (a).
(3.8) Abadir and Magnus (2005), p. 221, ex. 8.26 (a).
(4.1) Abadir and Magnus (2005), p. 30, ex. 2.24 (b).
(4.2) Abadir and Magnus (2005), p. 30, ex. 2.24 (c).
(4.3) Abadir and Magnus (2005), p. 30, ex. 2.24 (a).
(4.4) Abadir and Magnus (2005), p. 30, ex. 2.26 (a).
(4.5) Abadir and Magnus (2005), p. 31, ex. 2.26 (c).
(4.6) Abadir and Magnus (2005), p. 88, ex. 4.30.
(4.7) Abadir and Magnus (2005), p. 94, ex. 4.42.
(4.8) Abadir and Magnus (2005), p. 90, ex. 4.35 (a).
(4.9) Abadir and Magnus (2005), p. 84, ex. 4.22 (b).
(4.10) Abadir and Magnus (2005), p. 84, ex. 4.22 (d).
(4.11) Abadir and Magnus (2005), p. 84, ex. 4.22 (c).
(4.12) Abadir and Magnus (2005), p. 95, ex. 4.44 (a).