Dr. Donald G. Luttermoser East Tennessee State University
Version 4.
Abstract
These class notes are designed for use of the instructor and students of the course PHYS-4007/5007: Computational Physics taught by Dr. Donald Luttermoser at East Tennessee State University.
B. Linear Algebra.
α ± β = (αx ± βx) x̂ + (αy ± βy) ŷ
b) However, in quantum mechanics we will often have more than 3 coordinates to worry about; indeed, sometimes there may be an infinite number of coordinates!
c) As such, we will introduce a new notation (the so-called bra-and-ket notation) to describe vectors:
α ≡ |α〉 (ket), α* ≡ 〈α| (bra). (VIII-10)
Note that the * in the "bra" definition means take the complex conjugate (multiply all i = √−1 terms by −1) in vector α.
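Bras and kets are easy to mimic on a computer. The following NumPy sketch is an illustration added to these notes (the example components are arbitrary): a ket is stored as a complex column array, and the matching bra is its conjugate transpose.

    import numpy as np

    # A ket |alpha> stored as a complex column vector (3 components here).
    alpha_ket = np.array([[1 + 2j],
                          [3 - 1j],
                          [0 + 1j]])

    # The bra <alpha| is the conjugate transpose of the ket: every
    # i = sqrt(-1) term is multiplied by -1 and the column becomes a row.
    alpha_bra = alpha_ket.conj().T

    print(alpha_bra)        # [[1.-2.j  3.+1.j  0.-1.j]]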
i) Vector addition is commutative:
|α〉 + |β〉 = |β〉 + |α〉. (VIII-12)
ii) Vector addition is associative:
|α〉 + (|β〉 + |γ〉) = (|α〉 + |β〉) + |γ〉. (VIII-13)
iii) There exists a zero (or null) vector, |0〉, with the property that
|α〉 + |0〉 = |α〉, (VIII-14)
for every vector |α〉.
iv) For every vector |α〉 there is an associated inverse vector (|−α〉) such that
|α〉 + |−α〉 = |0〉. (VIII-15)
b) Scalar multiplication: The product of any scalar with any vector is another vector:
a|α〉 = |γ〉. (VIII-16)
i) Scalar multiplication is distributive with respect to vector addition:
a(|α〉 + |β〉) = a|α〉 + a|β〉, (VIII-17)
and with respect to scalar addition:
(a + b)|α〉 = a|α〉 + b|α〉. (VIII-18)
ii) It is also associative:
a(b|α〉) = (ab)|α〉. (VIII-19)
v) It is often easier to work with components than with the abstract vectors themselves. Use whichever method you are most comfortable with.
〈α|β〉 = ∑_{n=1}^{N} αn* βn. (VIII-29)
b) A vector space with an inner product is called an inner product space.
c) Because the inner product of any vector with itself is a non-negative number (Eq. VIII-26), its square root is real; we call this the norm (think of this as the length) of the vector:
‖α‖ ≡ √〈α|α〉. (VIII-30)
d) A unit vector, whose norm is 1, is said to be normalized.
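Eqs. (VIII-29) and (VIII-30) translate directly into NumPy, whose vdot routine conjugates its first argument. A minimal sketch, added here for illustration with arbitrary example components:

    import numpy as np

    alpha = np.array([1 + 1j, 2 - 1j, 0 + 3j])
    beta  = np.array([2 + 0j, 1 + 1j, 1 - 1j])

    # Inner product <alpha|beta> = sum_n alpha_n* beta_n  (Eq. VIII-29).
    inner = np.vdot(alpha, beta)       # vdot conjugates its first argument

    # Norm ||alpha|| = sqrt(<alpha|alpha>)  (Eq. VIII-30); always real.
    norm_alpha = np.sqrt(np.vdot(alpha, alpha).real)

    # Dividing by the norm normalizes the vector (unit norm).
    alpha_hat = alpha / norm_alpha
    print(inner, norm_alpha, np.linalg.norm(alpha_hat))   # last value -> 1.0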
e) Two vectors whose inner product is zero are called orthogonal =⇒ a collection of mutually orthogonal normalized vectors,
〈αi|αj〉 = δij, (VIII-31)
is called an orthonormal set, where δij is the Kronecker delta.
f) Components of vectors can be written as ai = 〈ei|α〉. (VIII-32)
g) For any pair of vectors, the Schwarz inequality holds:
|〈α|β〉|² ≤ 〈α|α〉〈β|β〉, (VIII-33)
with equality only for vectors that are co-linear (proportional to each other). We can then define the (complex) angle between |α〉 and |β〉 by the formula
cos θ = √( 〈α|β〉〈β|α〉 / 〈α|α〉〈β|β〉 ).
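Checking the Schwarz inequality and evaluating this angle numerically is a one-liner each. The sketch below is an added illustration with arbitrary vectors, assuming NumPy:

    import numpy as np

    alpha = np.array([1 + 1j, 2 - 1j, 0 + 3j])
    beta  = np.array([2 + 0j, 1 + 1j, 1 - 1j])

    lhs = abs(np.vdot(alpha, beta))**2                        # |<alpha|beta>|^2
    rhs = (np.vdot(alpha, alpha) * np.vdot(beta, beta)).real  # <a|a><b|b>
    assert lhs <= rhs + 1e-12      # Schwarz inequality (Eq. VIII-33)

    # The "angle" between |alpha> and |beta>; by Schwarz, 0 <= cos_theta <= 1.
    cos_theta = np.sqrt(lhs / rhs)
    print(cos_theta)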
T̂|ej〉 = ∑_{i=1}^{n} Tij |ei〉, (j = 1, 2, ..., n). (VIII-36)
This is also the definition of a tensor; as such, the operator T̂ is also a tensor.
b) If |α〉 is an arbitrary vector:
|α〉 = a1|e1〉 + · · · + an|en〉 = ∑_{j=1}^{n} aj |ej〉, (VIII-37)
f) The transpose of a matrix (T̃) is the same set of elements in T, but with the rows and columns interchanged:

T̃ =
  T11  T21  · · ·  Tn1
  T12  T22  · · ·  Tn2
  ...  ...         ...
  T1n  T2n  · · ·  Tnn  . (VIII-45)

Note that the transpose of a column matrix is a row matrix!
g) A square matrix is symmetric if it is equal to its transpose (reflection in the main diagonal, upper left to lower right, leaves it unchanged); it is antisymmetric if this operation reverses the sign:
SYMMETRIC: T̃ = T; ANTISYMMETRIC: T̃ = −T. (VIII-46)
h) The (complex) conjugate (T*) is obtained by taking the complex conjugate of every element:

T* =
  T11*  T12*  · · ·  T1n*
  T21*  T22*  · · ·  T2n*
  ...   ...          ...
  Tn1*  Tn2*  · · ·  Tnn*  ;

a* =
  a1*
  a2*
  ...
  an*  . (VIII-47)
i) A matrix is real if all its elements are real and imaginary if they are all imaginary:
REAL: T* = T; IMAGINARY: T* = −T. (VIII-48)
j) A square matrix is Hermitian (or self-adjoint, as defined by T† ≡ T̃*) if it is equal to its Hermitian conjugate; if Hermitian conjugation introduces a minus sign, the matrix is skew Hermitian (or anti-Hermitian):
HERMITIAN: T† = T; SKEW HERMITIAN: T† = −T. (VIII-49)
k) With this notation, the inner product of 2 vectors (with respect to an orthonormal basis) can be written in matrix form:
〈α|β〉 = a†b. (VIII-50)
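In NumPy, the transpose, conjugate, and Hermitian conjugate of items f) through k) are one-line operations. The sketch below is an added illustration (the example matrix and vectors are arbitrary):

    import numpy as np

    T = np.array([[1 + 1j, 0 + 2j],
                  [3 + 0j, 4 - 1j]])

    T_tilde  = T.T           # transpose (rows and columns interchanged)
    T_star   = T.conj()      # complex conjugate of every element
    T_dagger = T.conj().T    # Hermitian conjugate (adjoint)

    # Hermitian test (Eq. VIII-49): T is Hermitian iff T_dagger equals T.
    print(np.allclose(T_dagger, T))         # False for this T

    # Inner product as a matrix product, <alpha|beta> = a^dagger b (VIII-50).
    a = np.array([[1 + 1j], [2 - 1j]])      # column vectors
    b = np.array([[0 + 1j], [3 + 0j]])
    print((a.conj().T @ b).item(), np.vdot(a, b))   # the same number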
l) Matrix multiplication is not, in general, commutative (ST ≠ TS); the difference between the 2 orderings is called the commutator:
[S, T] ≡ ST − TS. (VIII-51)
It can also be shown that one can write the following commutator relation:
[ÂB̂, Ĉ] = Â[B̂, Ĉ] + [Â, Ĉ]B̂. (VIII-52)
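Eq. (VIII-52) is easy to verify numerically for random matrices; the sketch below is an added illustration (not part of the original notes), assuming NumPy:

    import numpy as np

    def comm(S, T):
        # Commutator [S, T] = ST - TS  (Eq. VIII-51).
        return S @ T - T @ S

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    C = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

    # Matrix multiplication is generally non-commutative, so [A, B] != 0 ...
    print(np.allclose(comm(A, B), 0))                                    # False

    # ... yet the identity [AB, C] = A[B, C] + [A, C]B holds exactly.
    print(np.allclose(comm(A @ B, C), A @ comm(B, C) + comm(A, C) @ B))  # True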
m) The transpose of a product is the product of the transposes in reverse order:
(ST)˜ = T̃S̃, (VIII-53)
and the same goes for Hermitian conjugates:
(ST)† = T†S†. (VIII-54)
n) The unit matrix is defined as the matrix with 1's on the main diagonal and 0's everywhere else:

1 =
  1  0  · · ·  0
  0  1  · · ·  0
  ...  ...    ...
  0  0  · · ·  1  . (VIII-55)

In other words, (1)ij = δij. (VIII-56)
o) The inverse of a matrix (written T⁻¹) is defined by
T⁻¹T = TT⁻¹ = 1. (VIII-57)
iv) For this 3×3 matrix, the matrix of cofactors is built from signed 2×2 determinants: the ij-th cofactor is (−1)^(i+j) times the minor Mij, where Mij is the determinant of the 2×2 submatrix that remains after deleting row i and column j.
v) The transpose of this cofactor matrix (see Eq. VIII-45) is the adjugate of the original matrix; dividing the adjugate by the determinant then yields the inverse, T⁻¹ = C̃/det(T), as is carried out explicitly in Example VIII-1 below.
vi) A matrix without an inverse is said to be singular.
vii) The inverse of a product (assuming it exists) is the product of the inverses in reverse order:
(ST)⁻¹ = T⁻¹S⁻¹. (VIII-63)
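NumPy's linalg module computes inverses directly and raises an error for singular matrices. A minimal sketch, added here for illustration with arbitrary matrices:

    import numpy as np

    S = np.array([[2.0, 1.0], [1.0, 1.0]])
    T = np.array([[1.0, 3.0], [0.0, 2.0]])

    # (ST)^-1 = T^-1 S^-1  (Eq. VIII-63): inverses multiply in reverse order.
    left  = np.linalg.inv(S @ T)
    right = np.linalg.inv(T) @ np.linalg.inv(S)
    print(np.allclose(left, right))            # True

    # A singular matrix (zero determinant) has no inverse.
    M = np.array([[1.0, 2.0], [2.0, 4.0]])     # second row = 2 x first row
    print(np.linalg.det(M))                    # ~0.0
    try:
        np.linalg.inv(M)
    except np.linalg.LinAlgError as err:
        print("singular matrix:", err)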
p) A matrix is unitary if its inverse is equal to its Hermitian conjugate:
UNITARY: U† = U⁻¹. (VIII-64)
q) The trace of a matrix is the sum of the diagonal elements:
Tr(T) ≡ ∑_{i=1}^{n} Tii, (VIII-65)
and has the property
Tr(T1T2) = Tr(T2T1). (VIII-66)
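Both the trace and its cyclic property (Eq. VIII-66) are quick to check with NumPy; this added sketch uses random example matrices:

    import numpy as np

    rng = np.random.default_rng(1)
    T1 = rng.normal(size=(3, 3))
    T2 = rng.normal(size=(3, 3))

    # Trace = sum of diagonal elements (Eq. VIII-65).
    print(np.trace(T1), T1.diagonal().sum())                  # identical values

    # Tr(T1 T2) = Tr(T2 T1) even though T1 @ T2 != T2 @ T1 in general.
    print(np.isclose(np.trace(T1 @ T2), np.trace(T2 @ T1)))   # True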
b) In matrix form, the eigenvector equation takes the form:
Ta = λa (VIII-68)
(for nonzero a), or
(T − λ1)a = 0, (VIII-69)
(here 0 is the zero matrix, whose elements are all zero).
c) If the matrix (T − λ1) had an inverse, we could multiply both sides of Eq. (VIII-69) by (T − λ1)⁻¹ and conclude that a = 0. But by assumption, a is not zero, so the matrix (T − λ1) must in fact be singular, which means that its determinant vanishes:
det(T − λ1) =
  | (T11 − λ)   T12        · · ·   T1n       |
  |  T21       (T22 − λ)   · · ·   T2n       |
  |  ...        ...                ...       |
  |  Tn1        Tn2        · · ·  (Tnn − λ)  |
= 0. (VIII-70)
d) Expansion of the determinant yields an algebraic equation for λ:
Cnλⁿ + Cn−1λⁿ⁻¹ + · · · + C1λ + C0 = 0, (VIII-71)
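In practice one rarely expands this characteristic polynomial by hand; NumPy can produce both its coefficients and its roots. A sketch added for illustration (the 2×2 example matrix is arbitrary):

    import numpy as np

    T = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Coefficients C_n ... C_0 of det(T - lambda*1) = 0  (Eq. VIII-71).
    coeffs = np.poly(T)             # [1., -4., 3.]  ->  lam^2 - 4*lam + 3
    lam_roots = np.roots(coeffs)    # [3., 1.]

    # Direct eigenvalue solvers are preferred for large n, since forming
    # the polynomial explicitly is numerically ill-conditioned.
    lam, vecs = np.linalg.eig(T)
    print(lam_roots, lam)           # same eigenvalues: 3 and 1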
with the eigenvalues down the diagonal and all other elements zero:

  λ1  0   · · ·  0
  0   λ2  · · ·  0
  ... ...       ...
  0   0   · · ·  λn  . (VIII-72)
c) The (normalized) eigenvectors are equally simple:
a(1) = (1, 0, . . . , 0)ᵀ, a(2) = (0, 1, . . . , 0)ᵀ, . . . , a(n) = (0, 0, . . . , 1)ᵀ. (VIII-73)
d) A matrix that can be brought to diagonal form (Eq. VIII-72) by change of basis is said to be diagonalizable.
e) In a geometrical sense, diagonalizing a matrix is equivalent to rotating the bases of a matrix about some point in the space until all of the off-diagonal elements go to zero. If D is the diagonalized matrix of matrix M, the operation that diagonalizes M is
D = SMS⁻¹, (VIII-74)
where matrix S is called a similarity transformation. Note that the inverse of the similarity matrix can be constructed by using the eigenvectors (in the old basis) as the columns of S⁻¹:
(S⁻¹)ij = (a(j))i. (VIII-75)
f) There is great advantage in bringing a matrix to diagonal form, since it is much easier to work with. Unfortunately,
not every matrix can be diagonalized; the eigenvectors have to span the space for a matrix to be diagonalizable.
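Eqs. (VIII-74) and (VIII-75) can be carried out directly with NumPy, whose eig routine already returns the eigenvectors as the columns of S⁻¹. A sketch added for illustration (the 2×2 matrix is arbitrary):

    import numpy as np

    M = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Columns of S^-1 are the eigenvectors in the old basis (Eq. VIII-75).
    lam, S_inv = np.linalg.eig(M)     # eigenvectors come back as columns
    S = np.linalg.inv(S_inv)

    # D = S M S^-1 is diagonal with the eigenvalues on the diagonal (VIII-74).
    D = S @ M @ S_inv
    print(np.round(D, 12))            # diag(3, 1); off-diagonals ~ 0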
The Hermitian conjugate T̂† of a transformation T̂ is defined by the requirement that
〈T̂†α|β〉 = 〈α|T̂β〉 (VIII-76)
(for all vectors |α〉 and |β〉).
a) Note that the notation used in Eq. (VIII-76) is commonly used but incorrect: |T̂β〉 actually means T̂|β〉, and 〈T̂†α|β〉 means the inner product of the vector T̂†|α〉 with the vector |β〉.
b) Note that we can also write
〈α|T̂β〉 = a†Tb = (T†a)†b = 〈T̂†α|β〉. (VIII-77)
c) In quantum mechanics, a fundamental role is played by Hermitian transformations (T̂† = T̂). The eigenvectors and eigenvalues of a Hermitian transformation have 3 crucial properties:
i) The eigenvalues of a Hermitian transformation are real.
ii) The eigenvectors of a Hermitian transformation belonging to distinct eigenvalues are orthogonal.
iii) The eigenvectors of a Hermitian transformation span the space.
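All 3 properties can be seen numerically with NumPy's eigh, the eigensolver specialized to Hermitian matrices. This added sketch uses an arbitrary 2×2 Hermitian example:

    import numpy as np

    # A Hermitian matrix: equal to its own conjugate transpose.
    H = np.array([[2.0, 1 - 1j],
                  [1 + 1j, 3.0]])
    assert np.allclose(H, H.conj().T)

    lam, V = np.linalg.eigh(H)       # eigh assumes (and exploits) Hermiticity

    print(lam)                                      # (i) eigenvalues are real
    print(np.allclose(V.conj().T @ V, np.eye(2)))   # (ii) orthonormal columns
    print(np.linalg.matrix_rank(V) == 2)            # (iii) they span the space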
Solution (c):
  −3          (1 + 3i)   3i
  (2 + 3i)     9         (3 − 2i)
  (−6 + 3i)   (6 + i)   −6  .
Solution (d): Transpose of A (flip A about the diagonal):
Ã =
  −1   2    2i
   1   0   −2i
   i   3    2  .
Solution (e): Complex conjugate of A (multiply each i term by −1 in A):
A* =
  −1    1   −i
   2    0    3
  −2i   2i   2  .
Solution (f): Hermitian conjugate of A:
A† =
  −1   2   −2i
   1   0    2i
  −i   3    2  .
Solution (g): Trace of B:
Tr(B) = ∑_{i=1}^{3} Bii = 2 + 1 + 2 = 5.
Solution (h): Determinant of B: det(B) = 2(2 − 0) − 0(0 − 0) − i(0 − i) = 4 − 0 − 1 = 3.
Solution (i): Inverse of B:
B⁻¹ = (1 / det(B)) C̃,
where C is the matrix of cofactors of B. Evaluating each signed 2×2 minor,

C =
  +|1 0; 3 2|    −|0 0; i 2|    +|0 1; i 3|
  −|0 −i; 3 2|   +|2 −i; i 2|   −|2 0; i 3|
  +|0 −i; 1 0|   −|2 −i; 0 0|   +|2 0; 0 1|
=
   2    0   −i
  −3i   3   −6
   i    0    2  ,

then

B⁻¹ = (1/3) ×
   2  −3i   i
   0    3   0
  −i   −6   2  .
As a check, multiplying B by B⁻¹ recovers the unit matrix:

B B⁻¹ = (1/3) ×
  (4 + 0 − 1)    (−6i + 0 + 6i)   (2i + 0 − 2i)
  (0 + 0 + 0)    (0 + 3 + 0)      (0 + 0 + 0)
  (2i + 0 − 2i)  (3 + 9 − 12)     (−1 + 0 + 4)
= (1/3) ×
  3  0  0
  0  3  0
  0  0  3
= 1.
If det(A) ≠ 0, then A has an inverse:
det(A) = −1(0 + 6i) − 1(4 − 6i) + i(−4i − 0) = −6i − 4 + 6i + 4 = 0.
As such, A does not have an inverse.
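All of these solutions can be confirmed with NumPy. The statement of the example precedes this excerpt, so the matrices A and B below are reconstructed from the printed solutions (an added illustration):

    import numpy as np

    A = np.array([[-1, 1, 1j],
                  [2, 0, 3],
                  [2j, -2j, 2]])
    B = np.array([[2, 0, -1j],
                  [0, 1, 0],
                  [1j, 3, 2]])

    print(A.T)                       # solution (d): transpose
    print(A.conj())                  # solution (e): complex conjugate
    print(A.conj().T)                # solution (f): Hermitian conjugate
    print(np.trace(B).real)          # solution (g): 5
    print(np.linalg.det(B).real)     # solution (h): 3
    print(np.linalg.inv(B))          # solution (i): (1/3) * adjugate above
    print(abs(np.linalg.det(A)))     # ~0, so A is singular (no inverse)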
Example VIII–2. Find the eigenvalues and normalized eigenvectors of the following matrix:
M =
  1  1
  0  1  .
Can this matrix be diagonalized?
Solution:
0 = det(M − λ1) =
  | (1 − λ)    1       |
  |   0       (1 − λ)  |
= (1 − λ)².
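The double root λ = 1 supplies only one linearly independent eigenvector, (1, 0)ᵀ, so the eigenvectors do not span the space and M cannot be diagonalized. A NumPy sketch, added here for illustration, makes this visible:

    import numpy as np

    M = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

    lam, vecs = np.linalg.eig(M)
    print(lam)          # [1., 1.]  -- the double root of (1 - lambda)^2

    # Both returned eigenvector columns are (numerically) parallel to (1, 0):
    print(vecs)
    print(np.linalg.matrix_rank(vecs, tol=1e-8))   # 1 independent eigenvector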