CE 341/441 - Lecture 4 - Fall 2004
LECTURE 4: ITERATIVE SOLUTIONS TO LINEAR ALGEBRAIC EQUATIONS

• As finer discretizations are being applied with Finite Difference and Finite Element codes:
- Matrices are becoming increasingly larger
- Density of matrices is becoming increasingly smaller
- Banded-storage direct solution algorithms no longer remain attractive as solvers for very large systems of simultaneous equations
Example

• For a typical Finite Difference or Finite Element code, the resulting algebraic equations have between 5 and 10 nonzero entries per matrix row (i.e. per algebraic equation associated with each node)
• The coefficient matrix $A$ is therefore sparse: each row contains only a few nonzero entries (labelled $\alpha$, $\beta$, $\gamma$, $\delta$, $\varepsilon$ in the original figure, with a few additional entries $\sigma$, $\tau$ farther from the diagonal), the diagonal entry being $\gamma$; all remaining entries are zero.
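Since only a handful of entries per row are nonzero, storing and operating on the full $N \times N$ array wastes memory and arithmetic. The following is a minimal sketch of a nonzero-entry-only (compressed sparse row style) storage scheme; the example entries and row layout are illustrative assumptions, not taken from the slide.

```python
import numpy as np

# Minimal CSR-style (compressed sparse row) storage sketch.
# Only nonzero values are kept, with their column indices and
# row start pointers -- the zeros are never stored or touched.
values  = []   # nonzero entries, row by row
columns = []   # column index of each stored entry
row_ptr = [0]  # index into `values` where each row starts

def add_row(entries):
    """entries: list of (column, value) pairs for one matrix row."""
    for col, val in entries:
        columns.append(col)
        values.append(val)
    row_ptr.append(len(values))

# Illustrative 4x4 matrix with at most 3 nonzeros per row (assumed values).
add_row([(0, 4.0), (1, -1.0)])
add_row([(0, -1.0), (1, 4.0), (2, -1.0)])
add_row([(1, -1.0), (2, 4.0), (3, -1.0)])
add_row([(2, -1.0), (3, 4.0)])

def matvec(x):
    """Multiply the sparse matrix by a vector, skipping all zero entries."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[columns[k]]
    return y

print(matvec(np.ones(4)))   # -> [3. 2. 2. 3.]
```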
CE 341/441 - Lecture 4 - Fall 2004
p. 4.
(Point) Jacobi Method - An Iterative Method

• Let's consider the following set of algebraic equations
• Guess a set of values $X^{[0]}$ for the unknowns
• Now solve each equation for the unknown that corresponds to the diagonal term in $A$, using the guessed values for all other unknowns:
$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2$$
$$a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3$$

$$x_1^{[1]} = \frac{b_1 - a_{12}\,x_2^{[0]} - a_{13}\,x_3^{[0]}}{a_{11}}$$

$$x_2^{[1]} = \frac{b_2 - a_{21}\,x_1^{[0]} - a_{23}\,x_3^{[0]}}{a_{22}}$$

$$x_3^{[1]} = \frac{b_3 - a_{31}\,x_1^{[0]} - a_{32}\,x_2^{[0]}}{a_{33}}$$
- Arrive at a second estimate, $X^{[1]}$
- Continue the procedure until you reach convergence (by comparing results of 2 consecutive iterations)
- This method is referred to as the (Point) Jacobi Method

• The (Point) Jacobi Method is formally described in vector notation as follows:
- Decompose $A$ as $A = D - C$
  - Such that all diagonal elements of $A$ are put into $D$
  - Such that the negatives of all off-diagonal elements of $A$ are put into $C$
- The scheme is now defined as:

$$D\,X^{[k+1]} = C\,X^{[k]} + B \quad\Rightarrow\quad X^{[k+1]} = D^{-1} C\, X^{[k]} + D^{-1} B$$

- Recall that inversion of a diagonal matrix (to find $D^{-1}$) is obtained simply by taking the reciprocal of each diagonal term
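The scheme translates almost line for line into code. Below is a minimal sketch of the point Jacobi iteration in element-wise form (equivalent to $X^{[k+1]} = D^{-1} C X^{[k]} + D^{-1} B$); the function name, tolerance, iteration cap, and test system are illustrative choices, not part of the lecture.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_cycles=500):
    """Point Jacobi iteration: every new value uses only the previous cycle's values."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for k in range(max_cycles):
        x_new = np.empty(n)
        for i in range(n):
            # Solve equation i for the diagonal unknown, using old values elsewhere.
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x_new - x)) <= tol:   # absolute convergence test
            return x_new, k + 1
        x = x_new
    return x, max_cycles

# Illustrative diagonally dominant system (assumed, not from the slides).
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x, cycles = jacobi(A, b, x0=[0.0, 0.0, 0.0])
print(x, "after", cycles, "cycles")
```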
- Total number of operations for full storage mode $= O(N^2 K)$, where
  $K$ = number of cycles required for convergence
  - Note that you don't a priori know the number of cycles, $K$, required to achieve a certain degree of convergence and therefore accuracy
- Total number of operations for sparse (non-zero-entry-only) storage mode $= O(N \alpha K)$, where
  $\alpha$ = number of non-zero entries per equation
  $K$ = number of cycles required for convergence
- The operation count dramatically reduces for sparse storage modes and is only a function of the number of non-zero entries and the number of cycles. Note that $\alpha$ is not related to the size of the problem, $N$, but to the local grid structure and algorithm
- Iterative methods are ideally suited for:
  - Very large matrices, since they reduce the roundoff problem
  - Sparse but not banded matrices, since they can reduce computational effort by not operating on zeroes
  - Very large sparse banded matrices, due to efficiency
Example

• Solve by point Jacobi method a system of two equations in the unknowns $x$ and $y$
- Rearranging each equation for its diagonal unknown gives the iteration

$$x^{[k+1]} = \frac{b_1 - a_{12}\, y^{[k]}}{a_{11}}, \qquad y^{[k+1]} = \frac{b_2 - a_{21}\, x^{[k]}}{a_{22}}$$

- Start with solution guess $x^{[0]}$, $y^{[0]}$ and start iterating on the solution
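As a concrete illustration (the actual coefficients from the slide are not recoverable, so this uses an assumed diagonally dominant system $4x + y = 9$, $x + 3y = 10$), the iteration can be carried out as follows:

```python
# Point Jacobi on an assumed 2x2 system:
#   4x + y  = 9
#    x + 3y = 10
# Rearranged:  x = (9 - y)/4,   y = (10 - x)/3
x, y = 0.0, 0.0              # initial guess x[0], y[0]
for k in range(10):
    x_new = (9.0 - y) / 4.0      # uses y from the previous cycle
    y_new = (10.0 - x) / 3.0     # uses x from the previous cycle
    x, y = x_new, y_new
    print(f"k={k+1}:  x={x:.6f}  y={y:.6f}")
# Converges toward the exact solution x = 17/11 ~ 1.5455, y = 31/11 ~ 2.8182
```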
Iterative convergence

• Is the $(k+1)^{th}$ solution better than the $k^{th}$ solution?
- The iterative process can be convergent/divergent
- A necessary condition for convergence is that the set have a strong diagonal
  - This requires that one of the coefficients in each of the equations be greater than all the others and that this "strong coefficient" be contained in a different position in each equation
  - We can re-arrange all strong elements onto diagonal positions by switching columns → this now makes the matrix strongly diagonal
- A sufficient condition to ensure convergence is that the matrix is diagonally dominant:

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{N} |a_{ij}|, \qquad i = 1, \ldots, N$$

- There are less stringent conditions for convergence
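The sufficient condition above is easy to check in code. The sketch below tests strict diagonal dominance row by row (the function name and return convention are illustrative choices):

```python
import numpy as np

def is_diagonally_dominant(A):
    """Return True if |a_ii| > sum of |a_ij| (j != i) for every row i."""
    A = np.abs(np.asarray(A, float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

print(is_diagonally_dominant([[4, 1, 1], [1, 5, 2], [1, 2, 6]]))  # True
print(is_diagonally_dominant([[1, 3], [2, 1]]))                   # False
```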
- A poor first guess will prolong the iterative process but will not make it diverge if the
matrix is such that convergence is assured.
- Therefore better guesses will speed up the iterative process
Criteria for ascertaining convergence

• Absolute convergence criterion

$$\left| x_i^{[k+1]} - x_i^{[k]} \right| \leq \varepsilon, \qquad i = 1, \ldots, N$$

where $\varepsilon \equiv$ a user-specified tolerance or accuracy

- The absolute convergence criterion is best used if you have a good idea of the magnitude of the $x_i$'s

• Relative convergence criterion

$$\left| \frac{x_i^{[k+1]} - x_i^{[k]}}{x_i^{[k]}} \right| \leq \varepsilon, \qquad i = 1, \ldots, N$$

- This criterion is best used if the magnitudes of the $x_i$'s are not known
- There are also problems with this criterion if $x_i \approx 0$
• The Gauss-Seidel method is formally described in vector form as follows:
- Decompose $A$ as $A = D - L - U$
  - Put the diagonal elements of $A$ into $D$
  - Put the negatives of the elements of $A$ below the diagonal into $L$
  - Put the negatives of the elements of $A$ above the diagonal into $U$
- The scheme is then defined as:

$$D\,X^{[k+1]} = L\,X^{[k+1]} + U\,X^{[k]} + B$$

$$\Rightarrow\quad X^{[k+1]} = D^{-1} L\, X^{[k+1]} + D^{-1} U\, X^{[k]} + D^{-1} B$$

• The Gauss-Seidel method is formally described using index notation as:

$$x_i^{[k+1]} = -\sum_{j=1}^{i-1} \frac{a_{ij}}{a_{ii}}\, x_j^{[k+1]} \;-\; \sum_{j=i+1}^{N} \frac{a_{ij}}{a_{ii}}\, x_j^{[k]} \;+\; \frac{b_i}{a_{ii}}, \qquad i = 1, \ldots, N, \quad k = \text{iteration number}$$
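In code, the only change from Jacobi is that updated values are used as soon as they become available within the current cycle. A minimal sketch of the index form above (names, defaults, and the test system are illustrative):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_cycles=500):
    """Gauss-Seidel iteration: new values are used immediately within a cycle."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for k in range(max_cycles):
        x_old = x.copy()
        for i in range(n):
            s_new = sum(A[i, j] * x[j] for j in range(i))          # already-updated values
            s_old = sum(A[i, j] * x[j] for j in range(i + 1, n))   # previous-cycle values
            x[i] = (b[i] - s_new - s_old) / A[i, i]
        if np.max(np.abs(x - x_old)) <= tol:
            return x, k + 1
    return x, max_cycles

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
print(gauss_seidel(A, b, x0=[0.0, 0.0, 0.0]))
```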
Point Relaxation Methods (Successive/Systematic (Over) Relaxation - SOR)

• The SOR approach improves the calculated values at the $(k+1)^{th}$ iteration obtained using Gauss-Seidel by calculating a weighted average of the $k^{th}$ and $(k+1)^{th}$ iterations and using this for the next iteration:

$$x_i^{[k+1]} = \lambda\, x_i^{[k+1]*} + (1 - \lambda)\, x_i^{[k]}$$

- $x_i^{[k+1]*}$ is the value obtained from the current Gauss-Seidel iteration
- $\lambda$ is the relaxation factor, which must be specified

• $\lambda$ values
- $\lambda$ ranges between 0 and 2
- $\lambda = 1$ → Gauss-Seidel
- $\lambda < 1$ → Under-relaxation
- $\lambda > 1$ → Over-relaxation
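The relaxation step slots directly into the Gauss-Seidel loop: compute the Gauss-Seidel value, then blend it with the previous value. A minimal sketch follows (the test system, $\lambda$ value, and names are illustrative assumptions):

```python
import numpy as np

def sor(A, b, x0, lam=1.2, tol=1e-8, max_cycles=500):
    """Successive over-relaxation: weighted average of Gauss-Seidel and previous values."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for k in range(max_cycles):
        x_old = x.copy()
        for i in range(n):
            # Value from the current Gauss-Seidel step, x_i^{[k+1]*}
            gs = (b[i] - sum(A[i, j] * x[j] for j in range(n) if j != i)) / A[i, i]
            # Relaxed update: lam = 1 recovers Gauss-Seidel
            x[i] = lam * gs + (1.0 - lam) * x_old[i]
        if np.max(np.abs(x - x_old)) <= tol:
            return x, k + 1
    return x, max_cycles

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
print(sor(A, b, x0=[0.0, 0.0, 0.0], lam=1.1))
```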
• Selection of the optimal $\lambda$ value is quite complex
- Depends on the characteristics of the matrix
- Certain "classes" of problems will have optimal ranges
- Trial and error is very useful
- We can apply different values of $\lambda$ for different blocks within a matrix which exhibit significantly different characteristics (different blocks in the matrix may be associated with different p.d.e.'s in a coupled system)
Application of Gauss-Seidel to Non-Linear Equations

• Gauss-Seidel (with relaxation) is a very popular method for solving systems of nonlinear equations (a small sketch follows below)
- Notes:
  - Multiple solutions exist for nonlinear equations
  - There must be linear components included in the equations such that a diagonal is formed
  - No general theory on iterative convergence is available for nonlinear equations
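A minimal sketch of the idea, using an assumed toy system $4x + y^2 = 8$, $x + 3y = 6$ (the linear terms $4x$ and $3y$ provide the diagonal); each equation is solved for its diagonal unknown with the latest available values, exactly as in the linear case:

```python
# Gauss-Seidel on an assumed nonlinear system (illustrative, not from the slides):
#   4x + y^2 = 8   ->  x = (8 - y**2) / 4
#    x + 3y  = 6   ->  y = (6 - x) / 3
x, y = 0.0, 0.0
for k in range(20):
    x = (8.0 - y**2) / 4.0   # nonlinear term evaluated with the latest y
    y = (6.0 - x) / 3.0      # uses the freshly updated x
print(x, y)   # approaches one root of the system, roughly x ~ 1.417, y ~ 1.528
```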
Block Iterative Methods

• Instead of operating on a point by point basis, we solve simultaneously for entire groups of unknowns using direct methods
- Partition the coefficient matrix into blocks. All elements in a block are then solved in one step using a direct method

INSERT FIGURE NO. 126 and 127

Direct/Iterative Methods

• Can correct errors due to roundoff in direct solutions by applying an iterative solution after the direct solution has been implemented
[Figure: a coefficient matrix for unknowns 1-8 partitioned into blocks; each block of unknowns is solved directly in one step, using iteration $[k+1]$ values for blocks already updated and iteration $[k]$ values for the remaining blocks]
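The last point, correcting direct-solution roundoff with a follow-up iteration, is commonly known as iterative refinement: solve, compute the residual, solve for a correction, and repeat. A minimal sketch follows (numpy's dense solver stands in for the direct method; names and the test system are illustrative):

```python
import numpy as np

def refine(A, b, cycles=3):
    """Direct solve followed by iterative refinement of roundoff errors."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.linalg.solve(A, b)          # direct solution (may carry roundoff error)
    for _ in range(cycles):
        r = b - A @ x                  # residual of the current solution
        dx = np.linalg.solve(A, r)     # direct solve for the correction
        x = x + dx                     # iteratively improved solution
    return x

A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
x = refine(A, b)
print(x, "residual norm:", np.linalg.norm(b - A @ x))
```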