
LECTURE 27. COMPLEX SOLUTIONS AND THE FUNDAMENTAL MATRIX

Complex eigenvalues. We continue studying

(27.1) y′ = Ay,

where A = (a_{ij}) is a constant n × n matrix. In this subsection, moreover, A is a real matrix. When A has a complex eigenvalue, it yields a complex solution of (27.1). The following principle of equating real parts then allows us to construct real solutions of (27.1) from the complex solution.

Lemma 27.1. If y(t) = α(t) + iβ(t), where α(t) and β(t) are real vector-valued functions, is a complex solution of (27.1), then both α(t) and β(t) are real solutions of (27.1).

The proof is nearly the same as that for the scalar equation, and it is omitted.

Exercise. If a real matrix A has an eigenvalue λ with an eigenvector v, then show that A also has the eigenvalue λ̄ with an eigenvector v̄.

Example 27.2. We continue studying

A = \begin{pmatrix} 1 & 1 \\ -4 & 1 \end{pmatrix}.

Recall that

p_A(λ) = \begin{vmatrix} 1-λ & 1 \\ -4 & 1-λ \end{vmatrix} = λ^2 − 2λ + 5

has two complex eigenvalues 1 ± 2i.

If λ = 1 + 2i, then

A − λI = \begin{pmatrix} -2i & 1 \\ -4 & -2i \end{pmatrix}

has an eigenvector \begin{pmatrix} 1 \\ 2i \end{pmatrix}. The result of the above exercise then ensures that \begin{pmatrix} 1 \\ -2i \end{pmatrix} is an eigenvector of the eigenvalue λ = 1 − 2i.

In order to find real solutions of (27.1), we write

e^{(1+2i)t} \begin{pmatrix} 1 \\ 2i \end{pmatrix} = e^t \begin{pmatrix} \cos 2t \\ -2 \sin 2t \end{pmatrix} + i e^t \begin{pmatrix} \sin 2t \\ 2 \cos 2t \end{pmatrix}.

The above lemma then asserts that e^t \begin{pmatrix} \cos 2t \\ -2 \sin 2t \end{pmatrix} and e^t \begin{pmatrix} \sin 2t \\ 2 \cos 2t \end{pmatrix} are real solutions of (27.1). Moreover, they are linearly independent. Therefore, the general real solution of (27.1) is

c_1 e^t \begin{pmatrix} \cos 2t \\ -2 \sin 2t \end{pmatrix} + c_2 e^t \begin{pmatrix} \sin 2t \\ 2 \cos 2t \end{pmatrix}.
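As a quick numerical sanity check (an addition of ours, not part of the original notes; it assumes NumPy and uses the matrix A as written above), one can confirm the eigenvalues and verify that the first real solution indeed satisfies (27.1):

    import numpy as np

    A = np.array([[1.0, 1.0], [-4.0, 1.0]])      # matrix of Example 27.2 (as written above)

    # Eigenvalues should be 1 + 2i and 1 - 2i (order may vary).
    print(np.linalg.eigvals(A))

    # y(t) = e^t (cos 2t, -2 sin 2t)^T should satisfy y' = A y; test at a sample point.
    t = 0.7
    y  = np.exp(t) * np.array([np.cos(2*t), -2.0*np.sin(2*t)])
    dy = np.exp(t) * np.array([np.cos(2*t) - 2.0*np.sin(2*t),
                               -2.0*np.sin(2*t) - 4.0*np.cos(2*t)])
    print(np.allclose(dy, A @ y))                # True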

The fundamental matrix. The linear operator T y := y′ − Ay has a natural extension from vectors to matrices. For example, when n = 2, let

T \begin{pmatrix} y_{11} \\ y_{21} \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \qquad T \begin{pmatrix} y_{12} \\ y_{22} \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 \end{pmatrix}.

Then,

T \begin{pmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{pmatrix} = \begin{pmatrix} f_1 & g_1 \\ f_2 & g_2 \end{pmatrix}.

In general, if A is an n × n matrix and Y = (y_1 · · · y_n) is an n × n matrix whose j-th column is y_j, then

T Y = T(y_1 · · · y_n) = (T y_1 · · · T y_n).

In this sense, y′ = Ay extends to Y′ = AY.


Exercise. Show that T(U + V) = TU + TV, T(UC) = (TU)C, and T(Uc) = (TU)c, where U, V are n × n matrix-valued functions, C is a constant n × n matrix, and c is a column vector.

That means T is a linear operator defined on the class of matrix-valued functions Y differentiable on an interval I. The following existence and uniqueness result is standard.

Existence and uniqueness result. If A(t) and F(t) are continuous and bounded matrix-valued functions on an interval I, then for any t_0 ∈ I and any matrix Y_0 the initial value problem

Y′ = A(t)Y + F(t),  Y(t_0) = Y_0

has a unique solution on t ∈ I.

Working assumption. A(t), F (t), and f (t) are always continuous and bounded on an interval t ∈ I.

Definition 27.3. A fundamental matrix of T Y = 0 is a solution U(t) for which |U(t_0)| ≠ 0 at some point t_0.
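For instance, with the matrix A of Example 27.2 (a worked illustration added here, not part of the original notes), placing the two real solutions found above side by side gives

U(t) = e^t \begin{pmatrix} \cos 2t & \sin 2t \\ -2 \sin 2t & 2 \cos 2t \end{pmatrix},

which satisfies U′ = AU and has |U(0)| = 2 ≠ 0, so U(t) is a fundamental matrix of T Y = 0 for this A.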

We note that the condition |U(t_0)| ≠ 0 implies that |U(t)| ≠ 0 for all t ∈ I. We use this fact to derive solution formulas. As an application of U(t), we obtain solution formulas for the initial value problem

y′ = A(t)y + f(t),  y(t_0) = y_0.

Let U(t) be a fundamental matrix of Y′ = A(t)Y. In the homogeneous case f(t) = 0, let y(t) = U(t)c, where c is an arbitrary column vector. Then,

y′ = U′c = (A(t)U)c = A(t)(Uc) = A(t)y,

that is, y is a solution of the homogeneous system. The initial condition then determines c, namely c = U^{-1}(t_0) y_0. Next, for a general f(t), we use variation of parameters by setting y(t) = U(t)v(t), where v is a vector-valued function. Then,

y′ = (Uv)′ = U′v + Uv′ = A(t)Uv + Uv′ = A(t)y + Uv′.

Hence, Uv′ = f(t), and

y(t) = U(t) \int U^{-1}(t) f(t) \, dt.
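A brief numerical sketch of these solution formulas (our addition; it assumes NumPy and SciPy, reuses the matrix A of Example 27.2, and takes a hypothetical forcing term f(t) = (sin t, 1)^T):

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[1.0, 1.0], [-4.0, 1.0]])

    def U(t):
        # Fundamental matrix assembled from the two real solutions of Example 27.2.
        return np.exp(t) * np.array([[np.cos(2*t),      np.sin(2*t)],
                                     [-2.0*np.sin(2*t), 2.0*np.cos(2*t)]])

    def f(t):
        # Hypothetical forcing term, chosen only for illustration.
        return np.array([np.sin(t), 1.0])

    t0, t1 = 0.0, 1.0
    y0 = np.array([1.0, 0.0])

    # y(t1) = U(t1) [ U^{-1}(t0) y0 + \int_{t0}^{t1} U^{-1}(s) f(s) ds ]
    s = np.linspace(t0, t1, 2001)
    integrand = np.array([np.linalg.solve(U(si), f(si)) for si in s])
    integral = np.trapz(integrand, s, axis=0)
    y_formula = U(t1) @ (np.linalg.solve(U(t0), y0) + integral)

    # Compare with a direct numerical integration of y' = A y + f(t).
    sol = solve_ivp(lambda t, y: A @ y + f(t), (t0, t1), y0, rtol=1e-10, atol=1e-12)
    print(y_formula, sol.y[:, -1])               # the two results should agree closely

The trapezoidal quadrature only approximates the integral of U^{-1}(s) f(s), so the agreement is up to discretization error.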

Liouville’s equation. We prove a theorem of Liouville, which generalizes Abel’s identity for the Wronskian.

Theorem 27.4 (Liouville’s Theorem). If Y′(t) = A(t)Y(t) on an interval t ∈ I, then

(27.2) |Y(t)|′ = tr A(t) |Y(t)|.

Proof. First, if |Y(t_0)| = 0 at a point t_0 ∈ I, then |Y(t)| = 0 for all t ∈ I, and we are done. We therefore assume that |Y(t)| ≠ 0 for all t ∈ I. Let Y(t_0) = I at a point t_0. That is, Y(t_0) = (y_1(t_0) · · · y_n(t_0)) = (E_1 E_2 · · · E_n). Here, the E_j are the unit coordinate vectors in R^n, that is, the n-vector E_j has 1 in the j-th position and zero otherwise.
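As an illustration of (27.2) (again our addition, assuming NumPy): the matrix A of Example 27.2 is constant with tr A = 2, so Liouville’s equation predicts |Y(t)| = |Y(0)| e^{2t}; for the fundamental matrix U(t) built above, |U(t)| = 2 e^{2t}, which can be checked numerically:

    import numpy as np

    def U(t):
        # Fundamental matrix from Example 27.2 (as assembled above).
        return np.exp(t) * np.array([[np.cos(2*t),      np.sin(2*t)],
                                     [-2.0*np.sin(2*t), 2.0*np.cos(2*t)]])

    for t in (0.0, 0.5, 1.3):
        # det U(t) should equal det U(0) * exp(tr(A) * t) = 2 * exp(2 t).
        print(np.isclose(np.linalg.det(U(t)), 2.0*np.exp(2.0*t)))   # True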
