
CSE 275 Matrix Computation

Ming-Hsuan Yang

Electrical Engineering and Computer Science
University of California at Merced
Merced, CA 95344
http://faculty.ucmerced.edu/mhyang

Lecture 19

Overview

- Generalized minimal residuals
- Lanczos method

Residual minimization in Kn

Let Kn = [ b  Ab  · · ·  A^{n−1} b ] be the m × n Krylov matrix, so that we have

AKn = [ Ab  A^2 b  · · ·  A^n b ]

- The column space of this matrix is A Kn, and thus our problem is to find a vector c ∈ C^n such that ‖AKn c − b‖ = minimum, where ‖ · ‖ = ‖ · ‖2
- This can be done with a QR factorization of AKn; once c is found, we can set xn = Kn c
- However, this procedure is numerically unstable, and the factor R is not needed
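As a concrete illustration (not from the slides), here is a minimal numpy sketch of this unstable approach: build Kn explicitly and solve the least-squares problem by QR. The test matrix and right-hand side are made up for the example.

```python
import numpy as np

def krylov_lsq(A, b, n):
    """Residual minimization over K_n via the explicit Krylov matrix.

    Minimizes ||A K_n c - b|| with a QR factorization of A K_n.
    Numerically unstable: the columns of K_n become nearly
    linearly dependent as n grows.
    """
    m = len(b)
    K = np.empty((m, n))
    K[:, 0] = b
    for j in range(1, n):
        K[:, j] = A @ K[:, j - 1]
    Q, R = np.linalg.qr(A @ K)              # A K_n = Q R
    c = np.linalg.solve(R, Q.conj().T @ b)  # solve R c = Q* b
    return K @ c                            # x_n = K_n c

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 5 * np.eye(50)
b = rng.standard_normal(50)
x5 = krylov_lsq(A, b, 5)
print(np.linalg.norm(b - A @ x5))           # residual norm at n = 5
```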

Residual minimization in Kn (cont’d)

Use the Arnoldi iteration to construct a sequence of matrices Qn whose orthonormal columns q1, q2, ... span the successive Krylov subspaces Kn. Thus we write xn = Qn y instead of xn = Kn c, and the least squares problem is to find y ∈ C^n such that

‖AQn y − b‖ = minimum

Superficially, the above problem has dimensions m × n; because of the special structure of Krylov subspaces, it is essentially of dimension (n + 1) × n. Recall that AQn = Qn+1 H̃n in the Arnoldi iteration, and thus

‖Qn+1 H̃n y − b‖ = minimum

Both vectors inside the norm lie in the column space of Qn+1, and thus multiplying on the left by Q∗n+1 does not change the norm:

‖H̃n y − Q∗n+1 b‖ = minimum

Since q1 = b/‖b‖ and the remaining columns of Qn+1 are orthogonal to b, we have Q∗n+1 b = ‖b‖ e1, which gives the form of the least squares problem used below

Mechanics of GMRES

GMRES

q1 = b/‖b‖
for n = 1, 2, ... do
    Step n of the Arnoldi iteration
    Find y to minimize ‖rn‖ = ‖H̃n y − ‖b‖ e1‖
    xn = Qn y
end for

- At each step, GMRES minimizes the norm of the residual rn = b − Axn over all vectors xn ∈ Kn
- The quantity ‖rn‖ is computed in the course of finding y
- The "Find y" step is an (n + 1) × n matrix least squares problem with Hessenberg structure
- Using QR factorization to solve for y costs O(n^2) flops thanks to the Hessenberg structure
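A minimal numpy sketch of this loop (an illustration, not the lecture's code): the Arnoldi step is inlined, and the small least-squares problem is solved with a dense solver rather than the Givens-rotation update discussed next. The test problem is made up, and breakdown (H̃ subdiagonal becoming zero) is not handled.

```python
import numpy as np

def gmres(A, b, n_max, tol=1e-10):
    """Minimal GMRES sketch: Arnoldi plus a small least-squares at each step."""
    m = len(b)
    beta = np.linalg.norm(b)
    Q = np.zeros((m, n_max + 1))
    H = np.zeros((n_max + 1, n_max))   # extended Hessenberg matrix H~
    Q[:, 0] = b / beta
    for n in range(n_max):
        # Step n of the Arnoldi iteration
        v = A @ Q[:, n]
        for j in range(n + 1):
            H[j, n] = Q[:, j] @ v
            v -= H[j, n] * Q[:, j]
        H[n + 1, n] = np.linalg.norm(v)    # assumes no breakdown
        Q[:, n + 1] = v / H[n + 1, n]
        # Find y minimizing ||H~ y - ||b|| e1||
        rhs = np.zeros(n + 2)
        rhs[0] = beta
        y, *_ = np.linalg.lstsq(H[:n + 2, :n + 1], rhs, rcond=None)
        if np.linalg.norm(H[:n + 2, :n + 1] @ y - rhs) < tol:
            break
    return Q[:, :n + 1] @ y                # x_n = Q_n y

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100)) + 10 * np.eye(100)
b = rng.standard_normal(100)
x = gmres(A, b, 30)
print(np.linalg.norm(b - A @ x))
```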

Mechanics of GMRES (cont’d)

Rather than construct QR factorizations of the successive matrices H̃1, H̃2, ... independently, one can use an update process to obtain the QR factorization of H̃n from that of H̃n−1. All that is required is a single Givens rotation and O(n) work
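To make the building block concrete, here is a small sketch (names and values are illustrative) of the single Givens rotation involved: it zeroes a subdiagonal entry such as hn+1,n.

```python
import numpy as np

def givens(a, b):
    """Return (c, s) so that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    return a / r, b / r

# Zero out the new subdiagonal entry in a column of H~
h = np.array([3.0, 4.0])        # (h_nn, h_{n+1,n}) after earlier rotations
c, s = givens(h[0], h[1])
G = np.array([[c, s], [-s, c]])
print(G @ h)                    # -> [5, 0]
```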

GMRES and polynomial approximation (cont’d)

Since xn ∈ Kn, we can write xn = qn(A) b for some polynomial qn of degree at most n − 1. The corresponding residual rn = b − Axn is rn = (I − A qn(A)) b; with pn the polynomial defined by pn(z) = 1 − z qn(z), we thus have

rn = pn(A)b

for some polynomial pn ∈ Pn, where Pn denotes the polynomials p of degree ≤ n normalized so that p(0) = 1

- The GMRES process chooses the coefficients of pn to minimize the norm of this residual
- GMRES thus solves the following approximation problem successively for n = 1, 2, 3, ...: find pn ∈ Pn such that

‖pn(A)b‖ = minimum
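The identity rn = pn(A)b can be checked numerically, reusing the gmres sketch above (an illustration under those assumptions): recover the coefficients of qn from xn = Kn c, evaluate pn(A)b = b − A qn(A)b via powers of A, and compare with the directly computed residual.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 4
A = rng.standard_normal((m, m)) + 8 * np.eye(m)
b = rng.standard_normal(m)

x = gmres(A, b, n)                          # x_n from the sketch above
# Krylov matrix K_n = [b, Ab, ..., A^{n-1} b]
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(n)])
c, *_ = np.linalg.lstsq(K, x, rcond=None)   # x_n = q_n(A) b with coeffs c
# p_n(A) b = b - A q_n(A) b, evaluated through powers of A
pAb = b - A @ sum(c[j] * np.linalg.matrix_power(A, j) @ b for j in range(n))
print(np.linalg.norm(pAb), np.linalg.norm(b - A @ x))   # the two agree
```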

Lanczos iteration

- The Lanczos iteration is the Arnoldi iteration specialized to the case where A is Hermitian
- We further assume A is real and symmetric in the following analysis
- First, note that Hn is symmetric and real; thus its eigenvalues, the Ritz values or Lanczos estimates, are also real
- Second, since Hn is both symmetric and Hessenberg, it is tridiagonal
- This means that in the inner loop of the Arnoldi iteration, the limits 1 to n can be replaced by n − 1 to n
- Instead of the (n + 1)-term recurrence at step n, the Lanczos iteration involves just a three-term recurrence
- Each step of the Lanczos iteration is much cheaper than the corresponding step of the Arnoldi iteration

The Lanczos iteration

Since a symmetric tridiagonal matrix is determined by just two vectors of entries (its diagonal and its off-diagonal), we replace the entries hij of Hn by new variables: let αn = hnn and βn = hn+1,n = hn,n+1. Then Hn becomes

Tn = [ α1   β1                  ]
     [ β1   α2   β2             ]
     [      β2   α3    ⋱        ]
     [           ⋱     ⋱   βn−1 ]
     [               βn−1   αn  ]

with the three-term recurrence

Aqn = βn−1 qn−1 + αn qn + βn qn+1
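For concreteness, Tn can be assembled from the two coefficient vectors in one line of numpy (the values here are illustrative):

```python
import numpy as np

alpha = np.array([2.0, 2.0, 2.0, 2.0])   # diagonal entries α1..αn
beta = np.array([1.0, 1.0, 1.0])         # off-diagonal entries β1..βn−1
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
print(T)
```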

The Lanczos iteration algorithm

β0 = 0, q0 = 0, b = arbitrary, q1 = b/‖b‖
for n = 1, 2, 3, ... do
    v = Aqn
    αn = qnᵀ v
    v = v − βn−1 qn−1 − αn qn
    βn = ‖v‖
    qn+1 = v/βn
end for

- Each step consists of a matrix-vector multiplication, an inner product, and a couple of vector operations
- If A has enough sparsity or other structure that matrix-vector products can be computed cheaply, the iteration can be applied without much difficulty to problems of dimension in the tens or hundreds of thousands
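A direct numpy transcription of this loop (a sketch for real symmetric A; it does no reorthogonalization, so the qn lose orthogonality in floating point over many steps, and breakdown βn = 0 is not handled):

```python
import numpy as np

def lanczos(A, b, n_max):
    """Lanczos iteration: returns the tridiagonal coefficients (alpha, beta)."""
    m = len(b)
    alpha = np.zeros(n_max)
    beta = np.zeros(n_max)
    q_prev = np.zeros(m)              # q0 = 0
    q = b / np.linalg.norm(b)         # q1 = b/||b||
    beta_prev = 0.0                   # β0 = 0
    for n in range(n_max):
        v = A @ q
        alpha[n] = q @ v
        v = v - beta_prev * q_prev - alpha[n] * q
        beta[n] = np.linalg.norm(v)   # assumes no breakdown (βn != 0)
        q_prev, q = q, v / beta[n]
        beta_prev = beta[n]
    return alpha, beta[:-1]           # β1..βn−1 form the off-diagonal of T_n
```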

Numerical example

Let A be the 203 × 203 matrix A = diag(0, 0.01, 0.02, ..., 1.99, 2, 2.5, 3.0)

- The spectrum of A consists of a dense collection of eigenvalues throughout [0, 2] together with two outliers, 2.5 and 3.0
- Run the Lanczos iteration beginning with a random starting vector q1
- At step 9, seven of the Ritz values lie in [0, 2], and the associated Lanczos polynomial is uniformly small on that interval
- The leading Ritz values are 1.93, 2.48, 2.
- The polynomial is small throughout [0, 2] ∪ {2.5} ∪ {3.0}

Numerical example (cont'd)

At step 20, the leading Ritz values are

1.9906, 2.49999999999987, 3.00000000000000
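This example can be reproduced (up to the random starting vector) with the lanczos sketch above; the Ritz values at step n are the eigenvalues of Tn. The seed is arbitrary.

```python
import numpy as np

# The 203×203 diagonal test matrix from the example
d = np.concatenate([np.arange(0, 2.005, 0.01), [2.5, 3.0]])
A = np.diag(d)
rng = np.random.default_rng(3)
b = rng.standard_normal(203)

alpha, beta = lanczos(A, b, 20)          # sketch from above
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)             # Ritz values at step 20
print(ritz[-3:])                         # outliers 2.5 and 3.0 resolved
                                         # to many digits by step 20
```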