CE 341/441 - Lecture 3 - Fall 2004

LECTURE 3
DIRECT SOLUTIONS TO LINEAR ALGEBRAIC SYSTEMS - CONTINUED

Ill-conditioning of Matrices
• There is no clear cut or precise definition of an ill-conditioned matrix.

Effects of ill-conditioning
• Roundoff error accrues in the calculations
• Can potentially result in very inaccurate solutions
• Small variation in matrix coefficients causes large variations in the solution

Detection of ill-conditioning in a matrix
• An inaccurate solution for X can satisfy an ill-conditioned matrix quite well!
• Apply back substitution to check for ill-conditioning (see the sketch after this list)
  • Solve AX = B through Gauss or other direct method → X_poor
  • Back substitute: A X_poor → B_poor
  • Comparing, we find that B_poor ≈ B
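As an illustration of the point above (a sketch that is not part of the original slides): the 6×6 Hilbert matrix is a classic ill-conditioned example, and perturbing the true solution along its weakest singular direction produces an X_poor that is visibly wrong yet reproduces B almost exactly. NumPy and the SVD are used here purely as construction devices for the demo.

```python
import numpy as np

# Classic ill-conditioned example: the 6x6 Hilbert matrix, a_ij = 1/(i + j + 1).
N = 6
A = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])

X_true = np.ones(N)                 # choose a known solution
B = A @ X_true                      # build the corresponding right-hand side

# Perturb X along the direction of the smallest singular value of A -- the
# direction in which roundoff error tends to accumulate in a direct solve.
_, _, Vt = np.linalg.svd(A)
X_poor = X_true + 0.1 * Vt[-1]      # noticeably wrong solution
B_poor = A @ X_poor                 # back substitute the poor solution

print("max |X_poor - X_true| =", np.max(np.abs(X_poor - X_true)))   # noticeable
print("max |B_poor - B|      =", np.max(np.abs(B_poor - B)))        # essentially zero
```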


• Back substitution is not a good detection technique.
• The effects of ill-conditioning are very subtle!
  • Examine the inverse of matrix A (the sketch after this list applies the checks below)
    • If there are elements of A⁻¹ which are many orders of magnitude larger than those of the original matrix, A, then A is probably ill-conditioned
  • It is always best to normalize the rows of the original matrix such that the maximum magnitude is of order 1
  • Evaluate A⁻¹ using the same method with which you are solving the system of equations. Now compute A⁻¹A and compare the result to I. If there is a significant deviation, serious roundoff is present!
  • Compute (A⁻¹)⁻¹ using the same method with which you are solving the system of equations. This is a more severe test of roundoff since error is accumulated in both the original inversion and the re-inversion.
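A minimal sketch (not from the slides) of the three inverse-based checks, again using a Hilbert matrix as the ill-conditioned example; `np.linalg.inv` stands in here for "the same method with which you are solving the system of equations".

```python
import numpy as np

N = 8
A = np.array([[1.0 / (i + j + 1) for j in range(N)] for i in range(N)])  # Hilbert matrix
A_inv = np.linalg.inv(A)

# Check 1: elements of A^-1 many orders of magnitude larger than those of A.
print("max |a_ij|       =", np.max(np.abs(A)))       # order 1 (rows roughly normalized)
print("max |(A^-1)_ij|  =", np.max(np.abs(A_inv)))   # many orders of magnitude larger

# Check 2: A^-1 A should equal the identity matrix I.
print("max |A^-1 A - I| =", np.max(np.abs(A_inv @ A - np.eye(N))))

# Check 3 (more severe): re-invert the inverse and compare with the original A,
# so roundoff accumulates in both the inversion and the re-inversion.
A_reinv = np.linalg.inv(A_inv)
print("max |(A^-1)^-1 - A| =", np.max(np.abs(A_reinv - A)))
```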


• Effects of ill-conditioning are most serious in large dense matrices (e.g. especially those obtained in such problems as curve fitting by least squares)
• Sparse banded matrices which result from Finite Difference and Finite Element methods are typically much better conditioned (i.e. can solve fairly large sets of equations without excessive roundoff error problems)
• Ways to overcome ill-conditioning
  • Make sure you pivot!
  • Use large word size (use double precision)
  • Can use error correction schemes to improve the accuracy of the answers (see the sketch after this list)
  • Use iterative methods
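One common error correction scheme is iterative refinement. The sketch below is not from the slides; `np.linalg.solve` stands in for whatever direct solver and factorization are actually in use.

```python
import numpy as np

def iterative_refinement(A, B, sweeps=3):
    """Improve a direct-solve answer by repeatedly solving for the residual error.
    In practice the same LU factors are reused for every correction solve, and the
    residual is ideally accumulated in higher precision than the factorization."""
    X = np.linalg.solve(A, B)            # initial direct solution
    for _ in range(sweeps):
        R = B - A @ X                    # residual of the current solution
        X = X + np.linalg.solve(A, R)    # solve A*dX = R and apply the correction
    return X
```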


Factor Method (Cholesky Method)
• Problem with Gauss elimination
  • Right hand side "load" vector, B, must be available at the time of matrix triangulation
  • If B is not available during the triangulation process, the entire triangulation process must be repeated!
  • Procedure is not well suited for solving problems in which B changes:

      AX = B_1   →   O(N³) steps + O(N²) steps
      AX = B_2   →   O(N³) steps + O(N²) steps
      ...
      AX = B_R   →   O(N³) steps + O(N²) steps

  • Using Gauss elimination, O(N³R) operations, where N = size of the system of equations and R = the number of different load vectors which must be solved for
• Concept of the factor method is to facilitate the solution of multiple right hand sides without having to go through a re-triangulation process for each B_r


• Reduce the number of unknowns by selecting either
  - Doolittle Method: p_ii = 1, i = 1, ..., N
  - Crout Method:     q_ii = 1, i = 1, ..., N
• Now we only have N² unknowns! We can solve for all unknown elements of P and Q by proceeding from left to right and top to bottom:

      | a11  a12  a13 |   | 1    0    0 |   | q11  q12  q13 |
      | a21  a22  a23 | = | p21  1    0 | · | 0    q22  q23 |
      | a31  a32  a33 |   | p31  p32  1 |   | 0    0    q33 |

      | a11  a12  a13 |   | q11        q12                  q13                     |
      | a21  a22  a23 | = | p21·q11    p21·q12 + q22        p21·q13 + q23           |
      | a31  a32  a33 |   | p31·q11    p31·q12 + p32·q22    p31·q13 + p32·q23 + q33 |


• Factorization proceeds from left to right and then top to bottom as:
  • Red → current unknown being solved
  • Blue → unknown value already solved

      a11 = q11          a12 = q12                    a13 = q13
      a21 = p21·q11      a22 = p21·q12 + q22          a23 = p21·q13 + q23
      a31 = p31·q11      a32 = p31·q12 + p32·q22      a33 = p31·q13 + p32·q23 + q33
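A minimal sketch (not from the slides) of the Doolittle sweep above in Python/NumPy: row i of Q, then column i of P, moving left to right and top to bottom. The function name and test matrix are illustrative.

```python
import numpy as np

def doolittle_lu(A):
    """Factor A = P Q with unit diagonal on P (Doolittle convention, no pivoting),
    sweeping left to right and top to bottom as in the 3x3 equations above."""
    N = A.shape[0]
    P = np.eye(N)               # lower triangular factor, p_ii = 1
    Q = np.zeros((N, N))        # upper triangular factor
    for i in range(N):
        for j in range(i, N):       # row i of Q:  q_ij = a_ij - sum_k p_ik * q_kj
            Q[i, j] = A[i, j] - P[i, :i] @ Q[:i, j]
        for j in range(i + 1, N):   # column i of P:  p_ji = (a_ji - sum_k p_jk * q_ki) / q_ii
            P[j, i] = (A[j, i] - P[j, :i] @ Q[:i, i]) / Q[i, i]
    return P, Q

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
P, Q = doolittle_lu(A)
print(np.allclose(P @ Q, A))    # True: the factors reproduce A
```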


• Now considering the equation to be solved: AX = B
• However A = LU, where L and U are known, so (LU)X = B

Forward/backward substitution procedures to obtain a solution
• Changing the order in which the product is formed: L(UX) = B
• Now let UX = Y
• Hence we have two systems of simultaneous equations:

      LY = B
      UX = Y


• Apply a forward substitution sweep to solve for Y for the system of equations LY = B
• Apply a backward substitution sweep to solve for X for the system of equations UX = Y (both sweeps are worked in the sketch below)

Notes on Factorization Methods
• Procedure
  • Perform the factorization by solving for L and U
  • Perform the sequential forward and backward substitution procedures to solve for Y and X
• The factor method is very similar to Gauss elimination, although the order in which the operations are carried out is somewhat different.
• Number of operations
  - O(N³) for LU decomposition (same as triangulation for Gauss)
  - O(N²) for forward/backward substitution (same as backward sweep for Gauss)
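A minimal sketch (not from the slides) of the two substitution sweeps. The unit-diagonal L and the U used here are illustrative placeholder factors, and B is built from a known solution so the result can be checked.

```python
import numpy as np

def forward_substitution(L, B):
    """Solve L Y = B for Y, sweeping rows from top to bottom."""
    N = len(B)
    Y = np.zeros(N)
    for i in range(N):
        Y[i] = (B[i] - L[i, :i] @ Y[:i]) / L[i, i]
    return Y

def backward_substitution(U, Y):
    """Solve U X = Y for X, sweeping rows from bottom to top."""
    N = len(Y)
    X = np.zeros(N)
    for i in range(N - 1, -1, -1):
        X[i] = (Y[i] - U[i, i + 1:] @ X[i + 1:]) / U[i, i]
    return X

# Placeholder factors (L carries the unit diagonal of the Doolittle convention).
L = np.array([[1.0,  0.0,   0.0],
              [0.5,  1.0,   0.0],
              [0.25, 0.625, 1.0]])
U = np.array([[4.0, 2.0, 1.0],
              [0.0, 4.0, 2.5],
              [0.0, 0.0, 4.1875]])
A = L @ U
B = A @ np.array([1.0, 2.0, 3.0])      # right-hand side with known solution (1, 2, 3)

X = backward_substitution(U, forward_substitution(L, B))   # L Y = B, then U X = Y
print(X)                                                    # ~ [1. 2. 3.]
```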


• LU factorization costs:
  - Factorization Cost = O(N³)
  - Back/Forward Substitution Cost = R·[O(N²)]
  - Total Cost = O(N³) + R·[O(N²)]
  - Total Cost for R ≫ N ≈ R·[O(N²)]
• Considering some typical values for N and R (table in the original slide): Gauss elimination costs O(RN³), LU factorization costs O(RN²), and the ratio of costs is O(N)
• We can also implement LU factorization (decomposition) in banded mode, and the savings compared to banded Gauss elimination would be O(M) (where M = bandwidth)
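• For example, with N = 1000 and R = 100 (values chosen here only for illustration), re-triangulating for every load vector costs on the order of R·N³ = 10¹¹ operations, while factoring once costs about N³ + R·N² = 10⁹ + 10⁸ ≈ 1.1×10⁹ operations, roughly 90 times less; as R grows well beyond N the cost ratio approaches O(N)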


Other Factorization Methods for Symmetrical Matrices
• Solve AX = B assuming that A is symmetrical, i.e. Aᵀ = A or a_ij = a_ji

Cholesky Square Root Method
• Requires that A is symmetrical (A = Aᵀ) and positive definite (UᵀAU = c, where U = any nonzero vector and c is a positive number)
• First step is to decompose the matrix: A = LLᵀ or A = UᵀU
• Diagonal terms on L or U don't equal unity
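A minimal sketch (not from the slides) of the Cholesky square root decomposition A = LLᵀ; note the square root on the diagonal, which is why the diagonal terms of L do not equal unity. The test matrix is an illustrative symmetric positive definite choice.

```python
import numpy as np

def cholesky_llt(A):
    """Cholesky square root method: factor a symmetric positive definite A as L L^T."""
    N = A.shape[0]
    L = np.zeros((N, N))
    for i in range(N):
        for j in range(i):      # off-diagonal terms of row i
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        # Diagonal term: the "square root" that gives the method its name.
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
    return L

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])      # symmetric and positive definite
L = cholesky_llt(A)
print(np.allclose(L @ L.T, A))       # True
```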


LDLᵀ Method
• Decompose A = LDLᵀ
  • where L is a lower triangular matrix
  • where D = diagonal matrix
• Set the diagonal terms of L to unity
• Solving for the elements of L and D
• Substituting and changing the order in which the products are formed: L(DLᵀX) = B
• Now let DLᵀX = Y


• Now sequentially solve:

      LY = B        by forward substitution
      LᵀX = D⁻¹Y    by backward substitution

• Note that the terms of D⁻¹ are just 1/(diagonal terms of D) and are easily computed (the sketch below carries out the full LDLᵀ factor-and-solve)
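A minimal sketch (not from the slides) of the LDLᵀ factorization and the forward/scale/backward solve just described; function names and the test system are illustrative. Because only the diagonal D is inverted, no square roots are required.

```python
import numpy as np

def ldlt(A):
    """Factor a symmetric A as L D L^T, with unit diagonal on L and D diagonal."""
    N = A.shape[0]
    L = np.eye(N)
    d = np.zeros(N)                     # diagonal of D, stored as a vector
    for j in range(N):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, N):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

def ldlt_solve(L, d, B):
    """Solve A X = B with A = L D L^T: forward sweep, scale by D^-1, backward sweep."""
    N = len(B)
    Y = np.zeros(N)
    for i in range(N):                  # L Y = B by forward substitution (L_ii = 1)
        Y[i] = B[i] - L[i, :i] @ Y[:i]
    Z = Y / d                           # D^-1 Y: just divide by the diagonal terms
    X = np.zeros(N)
    for i in range(N - 1, -1, -1):      # L^T X = D^-1 Y by backward substitution
        X[i] = Z[i] - L[i + 1:, i] @ X[i + 1:]
    return X

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
B = np.array([7.0, 10.0, 10.0])
L, d = ldlt(A)
print(np.allclose(A @ ldlt_solve(L, d, B), B))   # True
```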