Math 54 Cheat Sheet: Definitions and Formulas

This cheat sheet covers the following topics: Vector spaces, Diagonalization, Orthogonality, Second-order and higher-order differential equations, Systems of differential equations, Coupled mass-spring systems, and Partial differential equations.

Math 54 Cheat Sheet

Vector spaces

Subspace: If u and v are in W, then u + v is in W, and cu is in W.
Nul(A): Solutions of Ax = 0. Row-reduce A.
Row(A): Space spanned by the rows of A: row-reduce A and choose the rows that contain the pivots.
Col(A): Space spanned by the columns of A: row-reduce A and choose the columns of A that contain the pivots.
Rank(A) = dim(Col(A)) = number of pivots.
Rank-Nullity theorem: Rank(A) + dim(Nul(A)) = n, where A is m × n.
Linear transformation: T(u + v) = T(u) + T(v), T(cu) = cT(u), where c is a scalar.
T is one-to-one if T(u) = 0 ⇒ u = 0.
T is onto if Col(T) = R^n, the codomain.
Linear independence: a1v1 + a2v2 + ··· + anvn = 0 ⇒ a1 = a2 = ··· = an = 0. To show lin. ind., form the matrix A of the vectors and show that Nul(A) = {0}.
Linear dependence: a1v1 + a2v2 + ··· + anvn = 0 for some a1, a2, ···, an not all zero.
Span: Set of linear combinations of v1, ···, vn.
Basis B for V: A linearly independent set such that Span(B) = V.
To show something is a basis, show it is linearly independent and spans.
To find a basis from a collection of vectors, form the matrix A of the vectors and find Col(A).
To find a basis for a vector space, take any element of that v.s. and express it as a linear combination of 'simpler' vectors. Then show those vectors form a basis.
Dimension: Number of elements in a basis. To find dim, find a basis and count its elements.
Theorem: If V has a basis of n vectors, then every basis of V must have n vectors.
Basis theorem: If V is an n-dimensional v.s., then any lin. ind. set with n elements is a basis, and any set of n elements which spans V is a basis.
Matrix of a lin. transf. T with respect to bases B and C: For every vector v in B, evaluate T(v) and express T(v) as a linear combination of the vectors in C. Put the coefficients in a column vector, and then form the matrix of the column vectors you found!
Coordinates: To find [x]_B, express x in terms of the vectors in B. x = P_B [x]_B, where P_B is the matrix whose columns are the vectors in B.
Invertible matrix theorem: If A is invertible, then: A is row-equivalent to I, A has n pivots, T(x) = Ax is one-to-one and onto, Ax = b has a unique solution for every b, A^T is invertible, det(A) ≠ 0, the columns of A form a basis for R^n, Nul(A) = {0}, Rank(A) = n.
[a b; c d]^{-1} = 1/(ad − bc) [d −b; −c a]
[A | I] → [I | A^{-1}]
Change of basis: [x]_C = P_{C←B} [x]_B (think of C as the new, cool basis). [C | B] → [I | P_{C←B}]. P_{C←B} is the matrix whose columns are [b]_C, where b is in B.
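To sanity-check the Nul/Row/Col/Rank recipes above on a computer, here is a minimal SymPy sketch (the matrix A below is just a made-up example, not from the sheet):

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 0, 1]])                      # made-up example matrix

    rref, pivot_cols = A.rref()                  # row-reduce; pivot_cols = pivot column indices
    print("Rank(A) =", A.rank())                 # = number of pivots
    print("Col(A) basis:", [A.col(j) for j in pivot_cols])          # pivot columns of A itself
    print("Row(A) basis:", [rref.row(i) for i in range(A.rank())])  # nonzero rows of the rref
    print("Nul(A) basis:", A.nullspace())        # solutions of Ax = 0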

Diagonalization

Diagonalizability: A is diagonalizable if A = P D P^{-1} for some diagonal D and invertible P.
A and B are similar if A = P B P^{-1} for P invertible.
Theorem: A is diagonalizable ⇔ A has n linearly independent eigenvectors.
Theorem: IF A has n distinct eigenvalues, THEN A is diagonalizable, but the converse is not always true!
Notes: A can be diagonalizable even if it's not invertible (Ex: A = [0 0; 0 0]). Not all matrices are diagonalizable (Ex: [1 1; 0 1]).
Consequence: A = P D P^{-1} ⇒ A^n = P D^n P^{-1}.
How to diagonalize: To find the eigenvalues, calculate det(A − λI) and find the roots of that. To find the eigenvectors, for each λ find a basis for Nul(A − λI), which you do by row-reducing.
Rational roots theorem: If p(λ) = 0 has a rational root r = a/b, then a divides the constant term of p and b divides the leading coefficient. Use this to guess zeros of p. Once you have a zero that works, use long division! Then A = P D P^{-1}, where D = diagonal matrix of eigenvalues, P = matrix of eigenvectors.
Complex eigenvalues: If λ = a + bi and v is an eigenvector, then A = P C P^{-1}, where P = [Re(v) Im(v)], C = [a b; −b a].
C is a scaling by √(det(A)) followed by a rotation by θ, where: (1/√(det(A))) C = [cos(θ) sin(θ); −sin(θ) cos(θ)].
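A quick numerical check of A = P D P^{-1} and of the consequence A^n = P D^n P^{-1}, sketched with NumPy (the matrix is an arbitrary example assumed to be diagonalizable):

    import numpy as np

    A = np.array([[4., 1.],
                  [2., 3.]])                      # made-up example with distinct eigenvalues

    eigvals, P = np.linalg.eig(A)                 # columns of P are eigenvectors
    D = np.diag(eigvals)                          # diagonal matrix of eigenvalues

    print(np.allclose(A, P @ D @ np.linalg.inv(P)))          # A = P D P^{-1}
    n = 5
    print(np.allclose(np.linalg.matrix_power(A, n),
                      P @ np.diag(eigvals**n) @ np.linalg.inv(P)))  # A^n = P D^n P^{-1}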

Orthogonality

u, v are orthogonal if u · v = 0.
‖u‖ = √(u · u)
{u1, ···, un} is orthogonal if ui · uj = 0 for i ≠ j, and orthonormal if in addition ui · ui = 1.
W⊥: Set of all v which are orthogonal to every w in W.
If {u1, ···, un} is an orthogonal basis, then: y = c1u1 + ··· + cnun ⇒ cj = (y · uj)/(uj · uj).
An orthogonal matrix Q has orthonormal columns! Consequences: Q^T Q = I, Q Q^T = orthogonal projection onto Col(Q), ‖Qx‖ = ‖x‖, (Qx) · (Qy) = x · y.
Orthogonal projection: If {u1, ···, uk} is an orthogonal basis for W, then the orthogonal projection of y onto W is: ŷ = ((y · u1)/(u1 · u1)) u1 + ··· + ((y · uk)/(uk · uk)) uk.
y − ŷ is orthogonal to ŷ; the shortest distance between y and W is ‖y − ŷ‖.
Gram-Schmidt: Start with B = {u1, ···, un}. Let:
v1 = u1
v2 = u2 − ((u2 · v1)/(v1 · v1)) v1
v3 = u3 − ((u3 · v1)/(v1 · v1)) v1 − ((u3 · v2)/(v2 · v2)) v2
Then {v1, ···, vn} is an orthogonal basis for Span(B), and if wi = vi/‖vi‖, then {w1, ···, wn} is an orthonormal basis for Span(B).
QR-factorization: To find Q, apply G-S to the columns of A (and normalize). Then R = Q^T A.
Least squares: To solve Ax = b in the least-squares way, solve A^T A x = A^T b. The least-squares solution makes ‖Ax − b‖ smallest. x̂ = R^{-1} Q^T b, where A = QR.
Inner product spaces: f · g = ∫_a^b f(t) g(t) dt. G-S applies with this inner product as well.
Cauchy-Schwarz: |u · v| ≤ ‖u‖ ‖v‖
Triangle inequality: ‖u + v‖ ≤ ‖u‖ + ‖v‖
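The Gram-Schmidt recipe, the factorization R = Q^T A, and the least-squares formula x̂ = R^{-1} Q^T b can all be checked with a short NumPy sketch (A and b below are made-up examples):

    import numpy as np

    def gram_schmidt(U):
        # Orthogonalize the columns of U exactly as in the recipe above
        V = []
        for u in U.T:                              # work column by column
            v = u.astype(float)
            for w in V:
                v = v - (u @ w) / (w @ w) * w      # subtract the projection onto each earlier v
            V.append(v)
        return np.column_stack(V)

    A = np.array([[1., 1.],
                  [1., 0.],
                  [0., 1.]])                       # made-up example
    b = np.array([2., 1., 1.])

    V = gram_schmidt(A)                            # orthogonal basis for Col(A)
    Q = V / np.linalg.norm(V, axis=0)              # normalize -> orthonormal columns
    R = Q.T @ A                                    # R = Q^T A, so A = QR
    x_hat = np.linalg.solve(R, Q.T @ b)            # least-squares solution x̂ = R^{-1} Q^T b
    print(x_hat, np.allclose(A.T @ A @ x_hat, A.T @ b))   # also solves the normal equations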

Symmetric matrices (A = A^T)

A symmetric matrix has n real eigenvalues, is always diagonalizable, and is orthogonally diagonalizable (A = P D P^T with P an orthogonal matrix; this is equivalent to symmetry!).
Theorem: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
How to orthogonally diagonalize: First diagonalize, then apply G-S on each eigenspace and normalize. Then P = matrix of (orthonormal) eigenvectors, D = matrix of eigenvalues.
Quadratic forms: To find the matrix, put the x_i^2-coefficients on the diagonal and evenly distribute the other terms. For example, if the x1x2-term is 6, then the (1,2) and (2,1) entries of A are 3. Then orthogonally diagonalize A = P D P^T. Then let y = P^T x; the quadratic form becomes λ1 y1^2 + ··· + λn yn^2, where the λi are the eigenvalues.
Spectral decomposition: A = λ1 u1 u1^T + λ2 u2 u2^T + ··· + λn un un^T
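A minimal NumPy sketch of orthogonally diagonalizing a quadratic form, reusing the "x1x2-term is 6" example above (the diagonal entries 2 are assumed just for illustration):

    import numpy as np

    # Quadratic form 2x1^2 + 6x1x2 + 2x2^2: x_i^2 coefficients on the diagonal,
    # the x1x2 coefficient 6 split as 3 and 3 (made-up diagonal values)
    A = np.array([[2., 3.],
                  [3., 2.]])

    eigvals, P = np.linalg.eigh(A)                 # for symmetric A, eigh returns an orthogonal P
    D = np.diag(eigvals)
    print(np.allclose(P.T @ P, np.eye(2)))         # P is orthogonal
    print(np.allclose(A, P @ D @ P.T))             # A = P D P^T
    # In the variables y = P^T x the form becomes eigvals[0]*y1^2 + eigvals[1]*y2^2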

Second-order and Higher-order differential equations

Homogeneous solutions: Auxiliary equation: Replace the equation by a polynomial, so y''' becomes r^3 etc. Then find the zeros (use the rational roots theorem and long division, see the 'Diagonalization' section). Simple zeros give you e^{rt}, repeated zeros (multiplicity m) give you A e^{rt} + B t e^{rt} + ··· + Z t^{m−1} e^{rt}, and complex zeros r = a + bi give you A e^{at} cos(bt) + B e^{at} sin(bt).
Undetermined coefficients: y(t) = y0(t) + yp(t), where y0 solves the hom. eqn. (equation = 0) and yp is a particular solution. To find yp:
If the inhom. term is C t^m e^{rt}, then yp = t^s (A_m t^m + ··· + A_1 t + A_0) e^{rt}, where s = m if r is a root of the auxiliary polynomial with multiplicity m, and s = 0 if r is not a root.
If the inhom. term is C t^m e^{at} sin(βt), then yp = t^s (A_m t^m + ··· + A_1 t + A_0) e^{at} cos(βt) + t^s (B_m t^m + ··· + B_1 t + B_0) e^{at} sin(βt), where s = m if a + βi is a root of the auxiliary polynomial with multiplicity m (s = 0 if not). cos always goes with sin and vice-versa; also, you have to look at a + βi as one entity.
Variation of parameters: First, make sure the leading coefficient (usually the coeff. of y'') equals 1. Then y = y0 + yp as above. Now suppose yp(t) = v1(t) y1(t) + v2(t) y2(t), where y1 and y2 are your hom. solutions. Then [y1 y2; y1' y2'] [v1'; v2'] = [0; f(t)]. Invert the matrix and solve for v1' and v2', integrate to get v1 and v2, and finally use yp(t) = v1(t) y1(t) + v2(t) y2(t).
Useful formulas: [a b; c d]^{-1} = 1/(ad − bc) [d −b; −c a]
∫ sec(t) dt = ln|sec(t) + tan(t)|, ∫ tan(t) dt = ln|sec(t)|, ∫ tan^2(t) dt = tan(t) − t, ∫ ln(t) dt = t ln(t) − t
Linear independence: f, g, h are linearly independent if a f(t) + b g(t) + c h(t) = 0 ⇒ a = b = c = 0. To show linear dependence, do it directly. To show linear independence, form the Wronskian: W̃(t) = [f(t) g(t); f'(t) g'(t)] (for 2 functions), W̃(t) = [f(t) g(t) h(t); f'(t) g'(t) h'(t); f''(t) g''(t) h''(t)] (for 3 functions). Then pick a point t0 where det(W̃(t0)) is easy to evaluate. If det ≠ 0, then f, g, h are linearly independent! Try to look for simplifications before you differentiate.
Fundamental solution set: f, g, h form one if they are solutions and linearly independent.
Largest interval of existence: First make sure the leading coefficient equals 1. Then look at the domain of each term. For each domain, consider the part of the interval which contains the initial condition. Finally, intersect the intervals and change any brackets to parentheses.
Harmonic oscillator: m y'' + b y' + k y = 0 (m = inertia, b = damping, k = stiffness)
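The Wronskian test is easy to automate; a small SymPy sketch (the three candidate functions are an illustrative choice, not from the sheet):

    import sympy as sp

    t = sp.symbols('t')
    f, g, h = sp.exp(t), t*sp.exp(t), sp.exp(-t)   # made-up candidate solutions

    W = sp.Matrix([[f, g, h],
                   [sp.diff(f, t), sp.diff(g, t), sp.diff(h, t)],
                   [sp.diff(f, t, 2), sp.diff(g, t, 2), sp.diff(h, t, 2)]])

    detW = sp.simplify(W.det())
    print(detW)                   # nonzero at some t0  =>  f, g, h linearly independent
    print(detW.subs(t, 0))        # pick an easy point t0 = 0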

Systems of differential equations

To solve x' = Ax: x(t) = A e^{λ1 t} v1 + B e^{λ2 t} v2 + C e^{λ3 t} v3 (the λi are your eigenvalues, the vi are your eigenvectors).
Fundamental matrix: Matrix whose columns are the solutions, without the constants (the columns are solutions and linearly independent).
Complex eigenvalues: If λ = α + iβ and v = a + ib, then: x(t) = A (e^{αt} cos(βt) a − e^{αt} sin(βt) b) + B (e^{αt} sin(βt) a + e^{αt} cos(βt) b).
Notes: You only need to consider one complex eigenvalue of each conjugate pair. For real eigenvalues, use the formula above. Also, 1/(a + bi) = (a − bi)/(a^2 + b^2).
Generalized eigenvectors: If you only find one eigenvector v (even though there are supposed to be 2), then solve the following equation for u: (A − λI) u = v (one solution is enough). Then: x(t) = A e^{λt} v + B (t e^{λt} v + e^{λt} u).
Undetermined coefficients: First find the hom. solution. Then find xp just like with regular undetermined coefficients, except that instead of guessing xp(t) = a e^t + b cos(t) with scalars, you guess a e^t + b cos(t) where a = [a1; a2] (and likewise b) is a vector. Then plug into x' = Ax + f and solve for a etc.
Variation of parameters: First find the hom. solution xh(t) = A x1(t) + B x2(t). Then suppose xp(t) = v1(t) x1(t) + v2(t) x2(t), and solve W̃(t) [v1'; v2'] = f, where W̃(t) = [x1(t) | x2(t)]. Multiply both sides by W̃(t)^{-1}, integrate and solve for v1(t), v2(t), and plug back into xp. Finally, x = xh + xp.
Matrix exponential: e^{At} = Σ_{n=0}^{∞} A^n t^n / n!. To calculate e^{At}, either diagonalize: A = P D P^{-1} ⇒ e^{At} = P e^{Dt} P^{-1}, where e^{Dt} is a diagonal matrix with diagonal entries e^{λi t}; or, if A only has one eigenvalue λ with multiplicity m, use e^{At} = e^{λt} Σ_{n=0}^{m−1} (A − λI)^n t^n / n!. The solution of x' = Ax is then x(t) = e^{At} c, where c is a constant vector.

Coupled mass-spring system

Case N = 2
Equation: x'' = Ax, A = [−2 1; 1 −2]
Proper frequencies: The eigenvalues of A are λ = −1, −3, so the proper frequencies are ±i, ±√3 i (± the square roots of the eigenvalues).
Proper modes: v1 = [sin(π/3); sin(2π/3)] = [√3/2; √3/2], v2 = [sin(2π/3); sin(4π/3)] = [√3/2; −√3/2]
Case N = 3
Equation: x'' = Ax, A = [−2 1 0; 1 −2 1; 0 1 −2]
Proper frequencies: The eigenvalues of A are λ = −2, −2 − √2, −2 + √2, so the proper frequencies are ±√2 i, ±√(2 + √2) i, ±√(2 − √2) i.
Proper modes: v1 = [sin(π/4); sin(2π/4); sin(3π/4)] = [√2/2; 1; √2/2], v2 = [sin(2π/4); sin(4π/4); sin(6π/4)] = [1; 0; −1], v3 = [sin(3π/4); sin(6π/4); sin(9π/4)] = [√2/2; −1; √2/2]
General case (just in case!)
Equation: x'' = Ax, where A is the N × N tridiagonal matrix with −2 on the diagonal and 1 just above and below it.
Proper frequencies: ±2i sin(kπ/(2(N + 1))), k = 1, 2, ···, N
Proper modes: vk = [sin(kπ/(N + 1)); sin(2kπ/(N + 1)); ···; sin(Nkπ/(N + 1))]
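A quick NumPy check of the N = 3 case, using the tridiagonal matrix assumed above (−2 on the diagonal, 1 off the diagonal):

    import numpy as np

    N = 3
    # Tridiagonal coupling matrix: -2 on the diagonal, 1 on the off-diagonals
    A = -2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)

    eigvals, V = np.linalg.eigh(A)                 # A is symmetric; eigenvalues are real and negative
    freqs = np.sqrt(-eigvals)                      # proper frequencies are ±i*sqrt(-lambda)
    print(eigvals)                                 # approx -3.414, -2, -0.586 (= -2-sqrt(2), -2, -2+sqrt(2))
    print(freqs)                                   # approx 1.848, 1.414, 0.765
    print(V)                                       # columns are the proper modes (up to scaling)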

Partial differential equations

Full Fourier series: For f defined on (−T, T): f(x) ~ Σ_{m=0}^{∞} [a_m cos(πmx/T) + b_m sin(πmx/T)], where:
a_0 = 1/(2T) ∫_{−T}^{T} f(x) dx
a_m = 1/T ∫_{−T}^{T} f(x) cos(πmx/T) dx
b_0 = 0
b_m = 1/T ∫_{−T}^{T} f(x) sin(πmx/T) dx
Cosine series: For f defined on (0, T): f(x) ~ Σ_{m=0}^{∞} a_m cos(πmx/T), where:
a_0 = 2/(2T) ∫_0^T f(x) dx (not a typo)
a_m = 2/T ∫_0^T f(x) cos(πmx/T) dx
Sine series: For f defined on (0, T): f(x) ~ Σ_{m=0}^{∞} b_m sin(πmx/T), where:
b_0 = 0
b_m = 2/T ∫_0^T f(x) sin(πmx/T) dx
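The coefficient formulas can be evaluated numerically; a sketch with scipy.integrate.quad, taking T = 1 and f(x) = x as a made-up example:

    import numpy as np
    from scipy.integrate import quad

    T = 1.0
    f = lambda x: x                                # made-up f on (-T, T)

    def a(m):
        if m == 0:
            return quad(f, -T, T)[0] / (2*T)
        return quad(lambda x: f(x)*np.cos(np.pi*m*x/T), -T, T)[0] / T

    def b(m):
        if m == 0:
            return 0.0
        return quad(lambda x: f(x)*np.sin(np.pi*m*x/T), -T, T)[0] / T

    print([round(a(m), 4) for m in range(4)])      # f is odd, so all a_m are ~0
    print([round(b(m), 4) for m in range(4)])      # b_m = 2(-1)^{m+1}/(pi*m) for f(x) = x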

Tabular integration: (IBP: ∫ f'g = fg − ∫ fg') To integrate ∫ f(t) g(t) dt where f is a polynomial, make a table whose first row is f(t) and g(t). Then differentiate f as many times as needed until you get 0, and antidifferentiate g as many times until it aligns with the 0 for f. Then multiply the diagonal terms and do + first term − second term etc.
Orthogonality formulas:
∫_{−T}^{T} cos(πmx/T) sin(πnx/T) dx = 0
∫_{−T}^{T} cos(πmx/T) cos(πnx/T) dx = 0 if m ≠ n
∫_{−T}^{T} sin(πmx/T) sin(πnx/T) dx = 0 if m ≠ n
Heat/Wave equations:
Step 1: Suppose u(x, t) = X(x) T(t), plug this into the PDE, and group the X-terms and the T-terms. Then X''(x)/X(x) = λ, so X'' = λX. Then find a differential equation for T. Note: If you have an α-term, put it with T.
Step 2: Deal with X'' = λX. Use the boundary conditions to find X(0) etc. (if you have ∂u/∂x, you might have X'(0) instead of X(0)).
Step 3: Case 1: λ = ω^2, then X(x) = A e^{ωx} + B e^{−ωx}; you then find ω = 0, a contradiction. Case 2: λ = 0, then X(x) = Ax + B; you then either find X(x) = 0 (contradiction) or X(x) = A. Case 3: λ = −ω^2, then X(x) = A cos(ωx) + B sin(ωx). Then solve for ω, usually ω = πm/T. Also, if case 2 works you should find cos, and if case 2 doesn't work you should find sin. Finally, λ = −ω^2, and X(x) = whatever you found above, without the constant.
Step 4: Solve for T(t) with the λ you found. Remember that for the heat equation: T' = λT ⇒ T(t) = Ã_m e^{λt}. And for the wave equation: T'' = λT ⇒ T(t) = Ã_m cos(ωt) + B̃_m sin(ωt).
Step 5: Then u(x, t) = Σ_{m=0}^{∞} T(t) X(x) (if case 2 works), u(x, t) = Σ_{m=1}^{∞} T(t) X(x) (if case 2 doesn't work!).
Step 6: Use u(x, 0) and plug in t = 0. Then use Fourier cosine or sine series, or just 'compare', i.e. if u(x, 0) = 4 sin(2πx) + 3 sin(3πx), then Ã_2 = 4, Ã_3 = 3, and Ã_m = 0 if m ≠ 2, 3.
Step 7 (only for the wave equation): Use ∂u/∂t (x, 0): differentiate Step 5 with respect to t and set t = 0. Then use Fourier cosine or sine series, or 'compare'.
Nonhomogeneous heat equation: ∂u/∂t = β ∂²u/∂x² + P(x), u(0, t) = U_1, u(L, t) = U_2, u(x, 0) = f(x).
Then u(x, t) = v(x) + w(x, t), where:
v(x) = [U_2 − U_1 + ∫_0^L ∫_0^z (1/β) P(s) ds dz] (x/L) + U_1 − ∫_0^x ∫_0^z (1/β) P(s) ds dz
and w(x, t) solves the hom. eqn: ∂w/∂t = β ∂²w/∂x², w(0, t) = 0, w(L, t) = 0, w(x, 0) = f(x) − v(x).
D'Alembert's formula: ONLY works for the wave equation with −∞ < x < ∞: u(x, t) = (1/2)(f(x + αt) + f(x − αt)) + (1/(2α)) ∫_{x−αt}^{x+αt} g(s) ds, where u_tt = α² u_xx, u(x, 0) = f(x), ∂u/∂t (x, 0) = g(x). The integral just means 'antidifferentiate and plug in'.
Laplace equation: Same as for Heat/Wave, but T(t) becomes Y(y), and we get Y''(y) = −λ Y(y). Also, instead of writing Y(y) = Ã_m e^{ωy} + B̃_m e^{−ωy}, write Y(y) = Ã_m cosh(ωy) + B̃_m sinh(ωy). Remember cosh(0) = 1, sinh(0) = 0.
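Putting Steps 1-6 together for the standard problem u_t = β u_xx on (0, L) with u(0, t) = u(L, t) = 0 (so case 2 fails and you get a sine series): a numerical sketch, with β, L and the initial data f chosen as made-up examples:

    import numpy as np
    from scipy.integrate import quad

    beta, L = 1.0, 1.0
    f = lambda x: x*(L - x)                        # made-up initial temperature u(x, 0)

    def B(m):
        # Fourier sine coefficients of f on (0, L)  (Step 6)
        return 2.0/L * quad(lambda x: f(x)*np.sin(m*np.pi*x/L), 0, L)[0]

    def u(x, t, terms=50):
        # u(x, t) = sum_m B_m sin(m*pi*x/L) exp(-beta*(m*pi/L)^2 * t)   (Steps 3-5)
        return sum(B(m)*np.sin(m*np.pi*x/L)*np.exp(-beta*(m*np.pi/L)**2 * t)
                   for m in range(1, terms + 1))

    print(u(0.5, 0.0), f(0.5))                     # at t = 0 the series reproduces f
    print(u(0.5, 0.1))                             # the temperature decays as t grows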