### Classical iterative algorithms

#### by Yi Zhang

For this topic, many excellent textbooks are available. Most modern numerical PDE solvers rely on one iterative algorithm or another to handle the linear system produced by discretization (I know very little about direct methods). In this scenario, one has the linear system

$$Ax = b$$

and tries to come up with some iterative scheme

$$x^{k+1} = Gx^k + c$$

with

$$x^k \to x \quad (k \to \infty),$$

meaning convergence to the “real” solution, or, to be exact, to the solution a direct solver would give if we disregard round-off error. By this setup, one can see that the solution $x$ is the *fixed point* of the map $x \mapsto Gx + c$. So the convergence of the algorithm is equivalent to either of two statements: the spectral radius satisfies $\rho(G) < 1$, or there exists an operator norm, induced by some norm on the linear space to which $x$ belongs, in which $\|G\| < 1$. The convergence speed is measured by the average rate of convergence of $G$ (for $k$ iterations), defined by

$$R_k(G) = -\frac{\ln \|G^k\|}{k},$$

which comes from the fact that the error satisfies $e^k = G^k e^0$, so $\|e^k\| \le \|G^k\|\,\|e^0\|$ and the average error reduction per iteration is at most $\|G^k\|^{1/k} = e^{-R_k(G)}$. In order to put the original problem into this fixed-point setup, $A$ is split:

$$A = M - N, \qquad x^{k+1} = M^{-1}N x^k + M^{-1}b, \qquad G = M^{-1}N.$$

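As a small numerical illustration of the fixed-point picture, here is a sketch (plain Python, with a hypothetical $2\times 2$ iteration matrix chosen so that $\rho(G) < 1$; the names `G`, `c`, and `iterate` are illustrative, not from the text):

```python
# Sketch: a stationary iteration x <- G x + c converges to the fixed
# point x = G x + c whenever the spectral radius of G is below one.

def iterate(G, c, x, steps):
    """Apply x <- G x + c repeatedly (2x2 case, plain lists)."""
    for _ in range(steps):
        x = [G[0][0] * x[0] + G[0][1] * x[1] + c[0],
             G[1][0] * x[0] + G[1][1] * x[1] + c[1]]
    return x

# Diagonal G with spectral radius 0.5 < 1, so the map contracts.
G = [[0.5, 0.0], [0.0, 0.25]]
c = [1.0, 1.0]
x = iterate(G, c, [10.0, -3.0], 60)
# The fixed point solves x0 = 0.5*x0 + 1 and x1 = 0.25*x1 + 1,
# i.e. x0 = 2 and x1 = 4/3; the iterate approaches it geometrically.
```

With $\rho(G) = 0.5$, the error shrinks by at least a factor of two per step, so after 60 steps the iterate agrees with the fixed point to machine precision.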
If $A$ is split according to the positions of its entries as $A = D - L - U$, where $D$ is the diagonal part and $-L$, $-U$ are the strictly lower and strictly upper triangular parts, one has the classical algorithms:

Jacobi:

$$x^{k+1} = D^{-1}(L + U)\,x^k + D^{-1}b$$

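A minimal Jacobi sketch in plain Python may make the update concrete (the test system, zero initial guess, and iteration cap are illustrative assumptions):

```python
# Jacobi iteration: each new component is computed from the PREVIOUS
# iterate only, so all components can be updated independently.

def jacobi(A, b, iters=100):
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
             / A[i][i]
             for i in range(n)]
    return x

# Strictly diagonally dominant system, so Jacobi converges;
# the exact solution is (2, 1).
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 9.0]
x = jacobi(A, b)
```

Because every component reads only the old iterate, Jacobi parallelizes trivially, which is one reason it survives despite its slow convergence.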
Gauss-Seidel:

$$x^{k+1} = (D - L)^{-1}U\,x^k + (D - L)^{-1}b$$

SOR:

$$x_i^{k+1} = (1 - \omega)\,x_i^k + \omega\,\tilde{x}_i^{k+1}, \qquad \tilde{x}_i^{k+1} = \frac{1}{a_{ii}}\Bigl(b_i - \sum_{j < i} a_{ij}x_j^{k+1} - \sum_{j > i} a_{ij}x_j^k\Bigr)$$

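The componentwise form translates almost line for line into code. Below is a sketch (plain Python; the test system and the value $\omega = 1.2$ are illustrative assumptions), in which setting $\omega = 1$ recovers Gauss-Seidel:

```python
# SOR sweep: compute the Gauss-Seidel value for component i, then blend
# it with the old value using the relaxation parameter omega.

def sor(A, b, omega=1.2, iters=100):
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Entries j < i are already updated (Gauss-Seidel style).
            gs = (b[i] - sum(A[i][j] * x[j]
                             for j in range(n) if j != i)) / A[i][i]
            x[i] = (1.0 - omega) * x[i] + omega * gs
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # exact solution (2, 1)
b = [9.0, 9.0]
x = sor(A, b)                  # omega = 1.2, overrelaxation
x_gs = sor(A, b, omega=1.0)    # omega = 1 is plain Gauss-Seidel
```

Note that, unlike Jacobi, the sweep updates `x` in place, so component $i$ immediately sees the new values of components $j < i$.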
The last one is written in componentwise form, in which $\tilde{x}_i^{k+1}$ is the Gauss-Seidel result, and the relaxation parameter $\omega$ lies between zero and two (even though the name “overrelaxation” refers to the case $\omega > 1$). Each of these algorithms has a direct (but different in implementation) block-version analogue. All these methods are *stationary*, i.e. the iteration operator $G$ is unchanged from one iteration to the next. If this is not the case, we have *nonstationary* methods, in which information is re-gathered after each iteration in order to determine the next update. When $A$ is an SPD matrix, a large family of nonstationary methods can be formed by solving a minimization problem at every step (as in the minimal residual method, MINRES). The classical CG method I wrote about before belongs to this family. A modern member of this family is GMRES, by Saad and Schultz in 1986.
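To contrast with the stationary methods above, here is a minimal conjugate gradient sketch for SPD systems (plain Python; the $2\times 2$ test system, tolerance, and iteration cap are illustrative assumptions). The step length `alpha` and the new search direction are recomputed from the current residual at every iteration, which is exactly what makes the method nonstationary:

```python
# Conjugate gradient for SPD A: at each step, minimize along the
# current search direction, then pick a new A-conjugate direction.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def cg(A, b, iters=50, tol=1e-12):
    n = len(b)
    x = [0.0] * n
    r = b[:]                 # residual b - A x for the zero guess
    p = r[:]                 # first search direction
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)            # exact line search along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # SPD; exact solution (1/11, 7/11)
b = [1.0, 2.0]
x = cg(A, b)
```

In exact arithmetic CG on an $n \times n$ SPD system terminates in at most $n$ steps, so this $2\times 2$ example finishes in two iterations.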