I’m doing some programming work for Professor Bindel in the CS department, and it’s pretty novel stuff to me. I figured that if I write out what I know about the original method he’s trying to improve on, it’ll help me with the overall picture. Here it is:
The entire premise of GMRES (generalized minimal residual method) is to solve a system of equations like $Ax = b$, where $A$ is a matrix and $x, b$ are vectors with appropriate dimensions. First though, we need some additional machinery before we can cover the main method (don’t worry, the machinery does most of the work).
Overview
First some definitions and an overview before the nitty-gritty: a Krylov subspace is a magic thing which is quite simple to think about. It’s defined as $\mathcal{K}_k(A, b) = \operatorname{span}\{b, Ab, A^2b, \dots, A^{k-1}b\}$. It’s really nothing but the span of a sequence of vectors which we get from multiplying $b$ by $A$ lots of times.
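To make that concrete, here’s a tiny sketch in numpy (the function name and the toy matrix are my own, just for illustration) that stacks the Krylov vectors $b, Ab, A^2b, \dots$ into a matrix:

```python
import numpy as np

def krylov_matrix(A, b, k):
    """Return the n-by-k matrix whose columns are b, Ab, ..., A^(k-1) b."""
    cols = [b]
    for _ in range(k - 1):
        cols.append(A @ cols[-1])  # next Krylov vector: hit it with A again
    return np.column_stack(cols)

# Toy example (values chosen arbitrarily)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 0.0])
print(krylov_matrix(A, b, 3))
```

One thing worth knowing: these raw Krylov vectors quickly become nearly parallel (they all drift toward the dominant eigenvector), which is exactly why we’ll orthonormalize them in a moment.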
What GMRES aims to do is to minimize the residual $\|r_k\| = \|b - Ax_k\|$, where $x_k \in \mathcal{K}_k(A, b)$.
Arnoldi Iterations
I’m going to assume that you’ve seen the power iteration (those Krylov vectors should look familiar). We basically apply GS (Gram–Schmidt) on those Krylov vectors to obtain an orthonormal basis. Our end goal is to obtain an expression of the form $AV_k = V_{k+1}\tilde{H}_k$, where $A$ is the matrix defined in our problem and the $V$ matrices are the orthonormalized Krylov subspace vectors combined into a matrix, column by column. The $(k+1) \times k$ matrix $\tilde{H}_k$ is called an upper Hessenberg matrix (i.e. an upper triangular matrix with the first subdiagonal filled in).
The algorithm is as follows (shamelessly copied from wiki):
- Start with an arbitrary vector $q_1$ with norm 1.
- Repeat for $k = 2, 3, \dots$:
  - $q_k \leftarrow Aq_{k-1}$
  - for $j$ from 1 to $k-1$:
    - $h_{j,k-1} \leftarrow q_j^* q_k$
    - $q_k \leftarrow q_k - h_{j,k-1} q_j$
  - endfor
  - $h_{k,k-1} \leftarrow \|q_k\|$
  - $q_k \leftarrow q_k / h_{k,k-1}$
What does it mean? It’s just like (modified) GS! For each additional vector from the Krylov subspace, you subtract off the components in the directions of the vectors already processed (storing those coefficients in the $\tilde{H}$ matrix), leaving only the part orthogonal to them. Sounds simple but looks rough… (see the sketch below).
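Here’s the iteration above as a short numpy sketch (my own illustrative code, not from any particular library); it returns $V_{k+1}$ and $\tilde{H}_k$ so we can check the relation $AV_k = V_{k+1}\tilde{H}_k$ directly:

```python
import numpy as np

def arnoldi(A, b, k):
    """Run k steps of Arnoldi; return V (n x (k+1)) and H ((k+1) x k)
    satisfying A @ V[:, :k] == V @ H (up to roundoff)."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)    # v_1 = b / ||b||
    for j in range(k):
        w = A @ V[:, j]                # next Krylov direction
        for i in range(j + 1):         # modified Gram-Schmidt sweep
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]  # would break down if w == 0
    return V, H

# Quick check on a made-up matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
b = rng.standard_normal(6)
V, H = arnoldi(A, b, 4)
print(np.allclose(A @ V[:, :4], V @ H))  # True: A V_k = V_{k+1} H_k
```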
GMRES
Now we can actually get to the method (which is simple if you’re still reading). We try to minimize the residual $\|b - Ax_k\|$ (in the Euclidean norm, for those wondering) for $x_k \in \mathcal{K}_k(A, b)$ (i.e. the $k$th Krylov subspace). We can rewrite $x_k$ as $V_k y$ since, from our discussion of the Arnoldi iterations above, $V_k$ is the matrix of the orthonormalized Krylov subspace vectors.
Now, we perform some algebra…

$$\|b - Ax_k\| = \|b - AV_k y\| = \|b - V_{k+1}\tilde{H}_k y\| = \sqrt{\left\|V_{k+1}^T b - \tilde{H}_k y\right\|^2 + C^2}$$
The first equality comes from substitution, the second from the Arnoldi iteration result, and the last step is kind of tricky. $C$ in our equation is the norm of the projection of $b$ onto the orthogonal complement (i.e. the subspace complementary to the span of $V_{k+1}$’s columns) of the Krylov subspace. Think of this as applying the Pythagorean theorem to the part of the residual that lies within the span and the part that lies in its complement.
Finally,

$$\min_{x_k \in \mathcal{K}_k} \|b - Ax_k\| = \min_y \left\| \|b\| e_1 - \tilde{H}_k y \right\|$$
The last bit is just the observation that the first column of $V_{k+1}$ is $b$, normalized to unit length, so $V_{k+1}^T b = \|b\| e_1$ (and $C = 0$, since $b$ then lies in the span of $V_{k+1}$’s columns). What’s left is a small $(k+1) \times k$ least-squares problem, which is cheap to solve.
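Putting the pieces together, here’s a bare-bones GMRES sketch (again my own illustrative code, reusing the `arnoldi` function above) that solves the small least-squares problem $\min_y \|\,\|b\| e_1 - \tilde{H}_k y\|$ and maps back via $x_k = V_k y$:

```python
import numpy as np

def gmres_basic(A, b, k):
    """Naive GMRES: k Arnoldi steps, then one small least-squares solve.
    Returns the approximate solution x_k in the k-th Krylov subspace."""
    V, H = arnoldi(A, b, k)        # V: n x (k+1), H: (k+1) x k
    e1 = np.zeros(k + 1)
    e1[0] = np.linalg.norm(b)      # the projected right-hand side ||b|| e_1
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return V[:, :k] @ y            # x_k = V_k y

# Sanity check against the true solution on a toy system
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)  # well-conditioned toy matrix
b = rng.standard_normal(6)
x = gmres_basic(A, b, 6)           # k = n steps should nail it
print(np.linalg.norm(b - A @ x))   # residual ~ machine precision
```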
Problems with GMRES
If you look at the Arnoldi iterations, you have more and more vectors to loop over as the Krylov subspace gets larger… and you have to store all those vectors too! This is why people commonly use GMRES with restarts, where every so many iterations they erase all the previous Krylov vectors and reconstruct the Krylov subspace starting from the current, closest solution.
The problem with this is that there are situations where convergence to the solution actually depends on the restart! If you don’t choose a good restart value, convergence to the solution might not occur at all. That’s bad.
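For completeness, here’s what a restarted solve looks like with an off-the-shelf implementation (scipy’s `gmres`, whose `restart` parameter controls exactly this trade-off; the toy system here is made up):

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(2)
n = 200
A = np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

# GMRES(20): rebuild the Krylov subspace every 20 inner iterations,
# so we never store more than ~20 basis vectors at a time.
x, info = gmres(A, b, restart=20, maxiter=1000)
print(info, np.linalg.norm(b - A @ x))  # info == 0 means it converged
```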
Another way to deal with this is to demand that the matrix of basis vectors be well conditioned… but I don’t know much about this. And another way is to use Chebyshev polynomials to somehow do it (more on this later, after I read the paper).