PHYS-4007/5007: Computational Physics
Course Lecture Notes
Section VIII
Dr. Donald G. Luttermoser
East Tennessee State University
Version 4.1
Abstract

These class notes are designed for use by the instructor and students of the course PHYS-4007/5007: Computational Physics taught by Dr. Donald Luttermoser at East Tennessee State University.


  1. Also, since matrices play a big role in quantum mechanics (QM), we will use the formalism that is used in QM to describe vectors and matrices.

B. Linear Algebra.

  1. In classical mechanics, vectors are typically defined in Cartesian coordinates as

α = αx x̂ + αy ŷ + αz ẑ. (VIII-8)

Note that one also can use the î, ĵ, k̂ notation for the unit vectors.

a) Vectors are added via the component method such that

α ± β = (αx ± βx) x̂ + (αy ± βy) ŷ + (αz ± βz) ẑ. (VIII-9)

b) However, in quantum mechanics we will often have more than 3 coordinates to worry about — indeed, sometimes there may be an infinite number of coordinates!

c) As such, we will introduce a new notation (the so-called bra-and-ket notation) to describe vectors:

α ≡ |α〉 (ket), α* ≡ 〈α| (bra). (VIII-10)

Note that the * in the “bra” definition means take the complex conjugate (multiply all i = √−1 terms by −1) in vector α.
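This notation maps naturally onto complex arrays in code. A minimal numpy sketch (illustrative only, not part of the original notes): a ket is a complex vector, and the corresponding bra is its complex conjugate (transpose).

```python
import numpy as np

# A ket |alpha> represented as a complex vector with 3 components.
alpha = np.array([1.0 + 2.0j, 0.0, 3.0j])

# The bra <alpha| is the complex conjugate (transpose) of the ket.
bra_alpha = alpha.conj()

# <alpha|alpha> is then a bra acting on a ket.
print((bra_alpha @ alpha).real)   # 14.0 = |1+2i|^2 + 0 + |3i|^2
```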

  2. A vector space consists of a set of vectors (|α〉, |β〉, |γ〉, ...), together with a set of (real or complex) scalars (a, b, c, ...), which are subject to 2 operations:

a) Vector addition: The sum of any 2 vectors is another vector:

|α〉 + |β〉 = |γ〉. (VIII-11)


i) Vector addition is commutative:

|α〉 + |β〉 = |β〉 + |α〉. (VIII-12)

ii) Vector addition is associative:

|α〉 + (|β〉 + |γ〉) = (|α〉 + |β〉) + |γ〉. (VIII-13)

iii) There exists a zero (or null) vector, | 0 〉, with the property that

|α〉 + | 0 〉 = |α〉, (VIII-14)

for every vector |α〉.

iv) For every vector |α〉 there is an associated inverse vector (|−α〉) such that

|α〉 + |−α〉 = | 0 〉. (VIII-15)

b) Scalar multiplication: The product of any scalar with any vector is another vector:

a|α〉 = |γ〉. (VIII-16)

i) Scalar multiplication is distributive with respect to vector addition:

a(|α〉 + |β〉) = a|α〉 + a|β〉, (VIII-17)

and with respect to scalar addition:

(a + b)|α〉 = a|α〉 + b|α〉. (VIII-18)

ii) It is also associative:

a(b|α〉) = (ab)|α〉. (VIII-19)


v) It is often easier to work with components than with the abstract vectors themselves. Use whichever method you are most comfortable with.

  3. In 3 dimensions, we encounter 2 kinds of vector products: the dot product and the cross product. The latter does not generalize in any natural way to n-dimensional vector spaces, but the former does and is called the inner product.

a) The inner product of 2 vectors (|α〉 and |β〉) is a complex number (which we write as 〈α|β〉), with the following properties:

〈β|α〉 = 〈α|β〉* (VIII-25)
〈α|α〉 ≥ 0 (VIII-26)
〈α|α〉 = 0 ⇔ |α〉 = | 0 〉 (VIII-27)
〈α|(b|β〉 + c|γ〉) = b〈α|β〉 + c〈α|γ〉 (VIII-28)
〈α|β〉 = ∑_{n=1}^{N} α*n βn. (VIII-29)

b) A vector space with an inner product is called an inner product space.

c) Because the inner product of any vector with itself is a non-negative number (Eq. VIII-26), its square root is real — we call this the norm (think of this as the length) of the vector:

‖α‖ ≡ √〈α|α〉. (VIII-30)

d) A unit vector, whose norm is 1, is said to be normalized.

e) Two vectors whose inner product is zero are called orthogonal ⇒ a collection of mutually orthogonal normalized vectors,

〈αi|αj〉 = δij, (VIII-31)

is called an orthonormal set, where δij is the Kronecker delta.

f) Components of vectors can be written as ai = 〈ei|α〉. (VIII-32)

g) For any 2 vectors, the Schwarz inequality

|〈α|β〉|² ≤ 〈α|α〉〈β|β〉 (VIII-33)

holds (with equality when the vectors are co-linear, i.e., proportional to each other), and we can define the (complex) angle between |α〉 and |β〉 by the formula

cos θ = √( 〈α|β〉〈β|α〉 / 〈α|α〉〈β|β〉 ). (VIII-34)
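These inner-product properties are easy to exercise numerically. A short sketch (an illustration, not from the notes) using numpy's vdot, which conjugates its first argument exactly as Eq. (VIII-29) requires:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
alpha = rng.standard_normal(n) + 1j * rng.standard_normal(n)
beta = rng.standard_normal(n) + 1j * rng.standard_normal(n)

inner = np.vdot(alpha, beta)   # <alpha|beta> = sum_n alpha_n* beta_n (VIII-29)
norm_alpha = np.sqrt(np.vdot(alpha, alpha).real)   # ||alpha|| (Eq. VIII-30)

# Schwarz inequality (Eq. VIII-33).
lhs = abs(inner) ** 2
rhs = np.vdot(alpha, alpha).real * np.vdot(beta, beta).real
assert lhs <= rhs

# Angle between the vectors (Eq. VIII-34).
cos_theta = np.sqrt(lhs / rhs)
print(norm_alpha, cos_theta)
```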

  4. A linear transformation (T̂ — from this point forward, the hat on an operator will imply that the operator is a linear transformation; don't confuse it with the hat of a unit vector) takes each vector in a vector space and “transforms” it into some other vector (|α〉 → |α′〉 = T̂|α〉), with the proviso that the operator is linear:

T̂(a|α〉 + b|β〉) = a(T̂|α〉) + b(T̂|β〉). (VIII-35)

a) We can write the linear transformation of basis vectors as

T̂|ej〉 = ∑_{i=1}^{n} Tij |ei〉, (j = 1, 2, ..., n). (VIII-36)

This is also the definition of a tensor; as such, the operator T̂ is also a tensor.

b) If |α〉 is an arbitrary vector:

|α〉 = a1|e1〉 + · · · + an|en〉 = ∑_{j=1}^{n} aj |ej〉, (VIII-37)
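A compact way to see Eqs. (VIII-36) and (VIII-37) in practice (a numpy sketch, not part of the original notes; the matrix T is an arbitrary example): the j-th column of the matrix representing T̂ is just T̂ applied to the basis vector |ej〉.

```python
import numpy as np

# Matrix representing a linear transformation on a 3-dimensional space.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

e = np.eye(3)   # columns are the basis vectors |e_j>

# T|e_j> reproduces the j-th column of T (Eq. VIII-36).
for j in range(3):
    assert np.allclose(T @ e[:, j], T[:, j])

# Acting on an arbitrary |alpha> = sum_j a_j |e_j> is matrix-vector
# multiplication on the components (Eq. VIII-37).
a = np.array([1.0, -2.0, 0.5])
print(T @ a)
```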


f) The transpose of a matrix (T̃) is the same set of elements as in T, but with the rows and columns interchanged:

T̃ = [ T11 T21 · · · Tn1
      T12 T22 · · · Tn2
       ⋮    ⋮          ⋮
      T1n T2n · · · Tnn ] . (VIII-45)

Note that the transpose of a column matrix is a row matrix!

g) A square matrix is symmetric if it is equal to its transpose (reflection in the main diagonal — upper left to lower right — leaves it unchanged); it is antisymmetric if this operation reverses the sign:

SYMMETRIC: T̃ = T; ANTISYMMETRIC: T̃ = −T. (VIII-46)

h) The (complex) conjugate (T*) is obtained by taking the complex conjugate of every element:

T* = [ T*11 T*12 · · · T*1n
       T*21 T*22 · · · T*2n
        ⋮     ⋮           ⋮
       T*n1 T*n2 · · · T*nn ] ;   a* = [ a*1
                                         a*2
                                          ⋮
                                         a*n ] . (VIII-47)

i) A matrix is real if all its elements are real and imaginary if they are all imaginary:

REAL: T* = T; IMAGINARY: T* = −T. (VIII-48)

j) A square matrix is Hermitian (or self-adjoint) if it is equal to its Hermitian conjugate, defined by T† ≡ T̃*; if Hermitian conjugation introduces a minus sign, the matrix is skew Hermitian (or anti-Hermitian):

HERMITIAN: T† = T; SKEW HERMITIAN: T† = −T. (VIII-49)
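In numpy the Hermitian conjugate is simply the conjugate of the transpose, so definition (VIII-49) can be checked directly (a small sketch, not from the notes; the matrix is an arbitrary example):

```python
import numpy as np

# A Hermitian example: real diagonal, conjugate-symmetric off-diagonal.
T = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

T_dagger = T.conj().T              # Hermitian conjugate (Eq. VIII-49)
print(np.allclose(T_dagger, T))    # True -> T is Hermitian
```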

Donald G. Luttermoser, ETSU VIII–

k) With this notation, the inner product of 2 vectors (with respect to an orthonormal basis) can be written in matrix form:

〈α|β〉 = a†b. (VIII-50)

l) Matrix multiplication is not, in general, commutative (ST ≠ TS) — the difference between the 2 orderings is called the commutator:

[S, T] ≡ ST − TS. (VIII-51)

It can also be shown that one can write the following commutator relation:

[ÂB̂, Ĉ] = Â[B̂, Ĉ] + [Â, Ĉ]B̂. (VIII-52)
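The identity in Eq. (VIII-52) is quick to verify numerically (an illustrative sketch, not part of the notes; the matrices are random examples):

```python
import numpy as np

def commutator(X, Y):
    """[X, Y] = X Y - Y X (Eq. VIII-51)."""
    return X @ Y - Y @ X

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# [AB, C] = A[B, C] + [A, C]B (Eq. VIII-52).
lhs = commutator(A @ B, C)
rhs = A @ commutator(B, C) + commutator(A, C) @ B
print(np.allclose(lhs, rhs))   # True
```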

m) The transpose of a product is the product of the transposes in reverse order:

(ST)~ = T̃ S̃, (VIII-53)

and the same goes for Hermitian conjugates:

(ST)† = T† S†. (VIII-54)

n) The unit matrix is defined by

1 = [ 1 0 · · · 0
      0 1 · · · 0
      ⋮  ⋮       ⋮
      0 0 · · · 1 ] . (VIII-55)

In other words,

1ij = δij. (VIII-56)

o) The inverse of a matrix (written T⁻¹) is defined by

T⁻¹ T = T T⁻¹ = 1. (VIII-57)


iv) For this 3×3 matrix, the matrix of cofactors is given by (writing det[a b; c d] ≡ ad − bc for each 2×2 determinant)

C = [ +det[T22 T23; T32 T33]   −det[T21 T23; T31 T33]   +det[T21 T22; T31 T32]
      −det[T12 T13; T32 T33]   +det[T11 T13; T31 T33]   −det[T11 T12; T31 T32]
      +det[T12 T13; T22 T23]   −det[T11 T13; T21 T23]   +det[T11 T12; T21 T22] ] . (VIII-61)

v) The transpose of this cofactor matrix is then (see Eq. VIII-45)

C̃ = [ +det[T22 T32; T23 T33]   −det[T12 T32; T13 T33]   +det[T12 T22; T13 T23]
      −det[T21 T31; T23 T33]   +det[T11 T31; T13 T33]   −det[T11 T21; T13 T23]
      +det[T21 T31; T22 T32]   −det[T11 T31; T12 T32]   +det[T11 T21; T12 T22] ] . (VIII-62)

vi) A matrix without an inverse is said to be singular.

vii) The inverse of a product (assuming it exists) is the product of the inverses in reverse order:

(ST)⁻¹ = T⁻¹ S⁻¹. (VIII-63)

p) A matrix is unitary if its inverse is equal to its Hermitian conjugate:

UNITARY: U† = U⁻¹. (VIII-64)
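Numerically, inverses and the reverse-order rule of Eq. (VIII-63) look like this (a sketch, not from the notes; S and T are arbitrary nonsingular examples):

```python
import numpy as np

S = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
T = np.array([[2.0, 0.0, -1.0],
              [0.0, 1.0, 0.0],
              [1.0, 3.0, 2.0]])

# A matrix has an inverse only if det != 0 (otherwise it is singular).
assert abs(np.linalg.det(T)) > 1e-12

T_inv = np.linalg.inv(T)
print(np.allclose(T @ T_inv, np.eye(3)))   # T T^-1 = 1 (Eq. VIII-57)

# (ST)^-1 = T^-1 S^-1 (Eq. VIII-63).
print(np.allclose(np.linalg.inv(S @ T), T_inv @ np.linalg.inv(S)))  # True
```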

q) The trace of a matrix is the sum of the diagonal elements:

Tr(T) ≡ ∑_{i=1}^{n} Tii, (VIII-65)

and has the property

Tr(T1 T2) = Tr(T2 T1). (VIII-66)
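Eq. (VIII-66) holds even when the two matrices do not commute; a one-line numerical check (illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
T1 = rng.standard_normal((4, 4))
T2 = rng.standard_normal((4, 4))

# Tr(T1 T2) = Tr(T2 T1) even though T1 T2 != T2 T1 (Eq. VIII-66).
print(np.isclose(np.trace(T1 @ T2), np.trace(T2 @ T1)))   # True
```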

  5. A vector under a linear transformation that obeys the following equation:

T̂|α〉 = λ|α〉, (VIII-67)

is called an eigenvector of the transformation, and the (complex) number λ is called the eigenvalue.

a) Notice that any (nonzero) multiple of an eigenvector is still an eigenvector with the same eigenvalue.

b) In matrix form, the eigenvector equation takes the form:

Ta = λa (VIII-68)

(for nonzero a), or

(T − λ1)a = 0 (VIII-69)

(here 0 is the zero matrix, whose elements are all zero).

c) If the matrix (T − λ1) had an inverse, we could multiply both sides of Eq. (VIII-69) by (T − λ1)⁻¹ and conclude that a = 0. But by assumption, a is not zero, so the matrix (T − λ1) must in fact be singular, which means that its determinant vanishes:

det(T − λ1) = det[ (T11 − λ)   T12      · · ·   T1n
                    T21      (T22 − λ)  · · ·   T2n
                     ⋮           ⋮                ⋮
                    Tn1         Tn2     · · ·  (Tnn − λ) ] = 0. (VIII-70)

d) Expansion of the determinant yields an algebraic equation for λ:

Cn λⁿ + Cn−1 λⁿ⁻¹ + · · · + C1 λ + C0 = 0, (VIII-71)
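In practice one rarely expands the determinant by hand; library eigensolvers find the roots of this characteristic polynomial directly. A sketch (not from the notes; T is an arbitrary example) comparing the two routes:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial (Eq. VIII-71) and its roots...
print(np.roots(np.poly(T)))        # [3., 1.]

# ...agree with the eigenvalues from the standard solver.
eigenvalues, eigenvectors = np.linalg.eig(T)
print(eigenvalues)                 # [3., 1.]

# Each eigenvector column satisfies T a = lambda a (Eq. VIII-68).
for lam, a in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(T @ a, lam * a)
```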


diagonal and all other elements zero:

T = [ λ1  0  · · ·  0
      0  λ2  · · ·  0
      ⋮   ⋮          ⋮
      0   0  · · ·  λn ] . (VIII-72)

c) The (normalized) eigenvectors are equally simple:

a⁽¹⁾ = (1, 0, ..., 0)ᵀ, a⁽²⁾ = (0, 1, ..., 0)ᵀ, ..., a⁽ⁿ⁾ = (0, 0, ..., 1)ᵀ, (VIII-73)

where ᵀ denotes the transpose (these are column vectors).

d) A matrix that can be brought to diagonal form (Eq. VIII-72) by change of basis is said to be diagonalizable.

e) In a geometrical sense, diagonalizing a matrix is equivalent to rotating the basis of the space until all of the off-diagonal elements go to zero. If D is the diagonalized form of matrix M, the operation that diagonalizes M is

D = S M S⁻¹, (VIII-74)

where the matrix S is called a similarity transformation. Note that the inverse of the similarity matrix can be constructed by using the eigenvectors (in the old basis) as the columns of S⁻¹:

(S⁻¹)ij = (a⁽ʲ⁾)i. (VIII-75)

f) There is great advantage in bringing a matrix to diagonal form — it is much easier to work with. Unfortunately, not every matrix can be diagonalized — the eigenvectors have to span the space for a matrix to be diagonalizable.
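Eqs. (VIII-74) and (VIII-75) translate directly into code: the eigenvector columns returned by the solver form S⁻¹. A sketch (not from the notes; M is an arbitrary example):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Columns of S_inv are the eigenvectors of M in the old basis (VIII-75).
eigenvalues, S_inv = np.linalg.eig(M)
S = np.linalg.inv(S_inv)

# D = S M S^-1 is diagonal with the eigenvalues on the diagonal (VIII-74).
D = S @ M @ S_inv
print(np.round(D, 12))   # diag(3, 1); off-diagonal elements vanish
```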

  7. The Hermitian conjugate (or adjoint) of a linear transformation is that transformation T̂† which, when applied to the first member of an inner product, gives the same result as if T̂ itself had been applied to the second vector:

〈T̂†α|β〉 = 〈α|T̂β〉 (VIII-76)

(for all vectors |α〉 and |β〉).

a) Note that the notation used in Eq. (VIII-76) is commonly used but imprecise: |T̂β〉 actually means T̂|β〉, and 〈T̂†α|β〉 means the inner product of the vector T̂†|α〉 with the vector |β〉.

b) Note that we can also write

〈α|T̂β〉 = a†Tb = (T†a)†b = 〈T̂†α|β〉. (VIII-77)

c) In quantum mechanics, a fundamental role is played by Hermitian transformations (T̂† = T̂). The eigenvectors and eigenvalues of a Hermitian transformation have 3 crucial properties:

i) The eigenvalues of a Hermitian transformation are real.

ii) The eigenvectors of a Hermitian transformation belonging to distinct eigenvalues are orthogonal.

iii) The eigenvectors of a Hermitian transformation span the space.
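All 3 properties can be seen with numpy's Hermitian eigensolver eigh (an illustrative sketch, not from the notes; the matrix is an arbitrary Hermitian example):

```python
import numpy as np

# A Hermitian matrix (T_dagger == T).
T = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

eigenvalues, V = np.linalg.eigh(T)   # eigh assumes/exploits Hermiticity

print(eigenvalues)                   # real numbers (property i)
# Columns of V are orthonormal eigenvectors spanning the space
# (properties ii and iii): V_dagger V = 1.
print(np.allclose(V.conj().T @ V, np.eye(2)))   # True
```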


[ −3          (1 + 3i)    3i
  (2 + 3i)       9       (3 − 2i)
  (−6 + 3i)   (6 + i)     −6 ] .

Solution (d): Transpose of A — flip A about the diagonal:

Ã = [ −1   2   2i
       1   0  −2i
       i   3    2 ] .

Solution (e): Complex conjugate of A — multiply each i term by −1 in A:

A* = [ −1    1   −i
        2    0    3
      −2i   2i    2 ] .

Solution (f): Hermitian conjugate of A:

A† ≡ Ã* = [ −1   2  −2i
             1   0   2i
            −i   3    2 ] .

Solution (g): Trace of B:

Tr(B) = ∑_{i=1}^{3} Bii = 2 + 1 + 2 = 5.

Solution (h): Determinant of B: det(B) = 2(2 − 0) − 0(0 − 0) − i(0 − i) = 4 − 0 − 1 = 3.

Solution (i): Inverse of B:

B⁻¹ = (1/det B) C̃,


where

C = [ +det[1 0; 3 2]    −det[0 0; i 2]    +det[0 1; i 3]
      −det[0 −i; 3 2]   +det[2 −i; i 2]   −det[2 0; i 3]
      +det[0 −i; 1 0]   −det[2 −i; 0 0]   +det[2 0; 0 1] ]

  = [  2   0  −i
      −3i  3  −6
       i   0   2 ] ,

then

B⁻¹ = (1/3) [  2  −3i   i
               0   3    0
              −i  −6    2 ] .

As a check,

BB⁻¹ = (1/3) [ (4 + 0 − 1)    (−6i + 0 + 6i)   (2i + 0 − 2i)
               (0 + 0 + 0)    (0 + 3 + 0)      (0 + 0 + 0)
               (2i + 0 − 2i)  (3 + 9 − 12)     (−1 + 0 + 4) ]
     = (1/3) [ 3 0 0
               0 3 0
               0 0 3 ]
     = [ 1 0 0
         0 1 0
         0 0 1 ] = 1.

If det(A) ≠ 0, then A has an inverse:

det(A) = −1(0 + 6i) − 1(4 − 6i) + i(−4i − 0) = −6i − 4 + 6i + 4 = 0.

As such, A does not have an inverse.
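All of the arithmetic in this example is easy to confirm numerically. A sketch (not part of the notes), with A and B written out as reconstructed from the solutions above:

```python
import numpy as np

A = np.array([[-1, 1, 1j],
              [2, 0, 3],
              [2j, -2j, 2]])
B = np.array([[2, 0, -1j],
              [0, 1, 0],
              [1j, 3, 2]])

print(np.trace(B).real)                  # 5.0 -> Tr(B) = 5
print(np.linalg.det(B).real)             # 3.0 -> det(B) = 3
print(np.round(np.linalg.inv(B), 12))    # (1/3) C-tilde, as above
print(abs(np.linalg.det(A)))             # ~0  -> A is singular (no inverse)
```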

Example VIII–2. Find the eigenvalues and normalized eigenvectors of the following matrix:

M = [ 1 1
      0 1 ] .

Can this matrix be diagonalized?

Solution:

0 = det(M − λ1) = det[ (1 − λ)    1
                         0     (1 − λ) ] = (1 − λ)².
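A numerical look at this example (an illustrative sketch, not from the notes): the characteristic polynomial has the double root λ = 1, but the eigenvectors do not span the space, so M cannot be diagonalized.

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)
print(eigenvalues)     # [1., 1.] -> lambda = 1 is a double root

# Both eigenvector columns are parallel to (1, 0): only one linearly
# independent eigenvector, so no similarity transformation (VIII-74)
# can bring M to diagonal form.
print(eigenvectors)
print(abs(np.linalg.det(eigenvectors)))   # ~0 -> eigenvectors don't span
```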