Review of CG method: II

by Yi Zhang

Now let’s examine the convergence of the steepest descent method. For this purpose, define the error relative to the exact solution x^{\ast} at each iteration step

e_i\equiv x_i-x^{\ast}

By the iteration scheme x_{i+1}=x_i+l_ir_i and the identity r_i=b-Ax_i=-Ae_i, there is

e_{i+1}=e_i+l_ir_i=(I-l_iA)e_i
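This error recursion can be checked numerically. The sketch below runs a few exact-line-search steepest descent steps; the matrix A, right-hand side b, and starting point are hypothetical example values (A only needs to be symmetric positive definite):

```python
import numpy as np

# Hypothetical example system; A must be symmetric positive definite.
A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
x_star = np.linalg.solve(A, b)       # exact solution

x = np.array([-2.0, -2.0])           # arbitrary starting point
for _ in range(3):
    r = b - A @ x                    # residual, r_i = -A e_i
    l = (r @ r) / (r @ A @ r)        # exact line-search step length
    e = x - x_star                   # error before the step
    x = x + l * r
    # the new error is (I - l_i A) applied to the old error
    assert np.allclose(x - x_star, (np.eye(2) - l * A) @ e)
```

The assertion inside the loop confirms e_{i+1}=(I-l_iA)e_i holds at every step.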
Since the matrix A is applied at every step, and the iterative behaviour of a matrix is largely determined by its eigenvalues, one’s first reaction should be to expand the error in the eigenbasis of A:

e_i=\sum_j\xi_{(i)j}v_j
where \{v_j\} is an orthonormal basis of eigenvectors of A. By this definition, there are

r_i=-Ae_i=-\sum_j\xi_{(i)j}\lambda_jv_j

l_i=\frac{r_i^Tr_i}{r_i^TAr_i}=\frac{\sum_j\xi_{(i)j}^2\lambda_j^2}{\sum_j\xi_{(i)j}^2\lambda_j^3}
In which \lambda_j is the eigenvalue corresponding to v_j. One can see that if the spectral condition number \kappa(A)\equiv\lambda_{\max}(A)/\lambda_{\min}(A)=1, i.e. all the eigenvalues of A are the same (\lambda_j=\lambda for all j), then l_i=1/\lambda, so I-l_iA=0 and the error at the next step immediately drops to zero. Geometrically this simply means the contours are spheres and the steepest descent direction points toward the centre. Generally, there is
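The \kappa(A)=1 case is easy to verify numerically. In this sketch, A=cI with a hypothetical c=2 and an arbitrary right-hand side; one steepest descent step reaches the exact solution:

```python
import numpy as np

# When all eigenvalues coincide (A = c*I, kappa(A) = 1), steepest descent
# converges in a single step. c and b are hypothetical example values.
c = 2.0
A = c * np.eye(3)
b = np.array([1.0, -1.0, 4.0])
x_star = b / c                       # exact solution of A x = b

x = np.zeros(3)                      # starting guess
r = b - A @ x                        # residual
l = (r @ r) / (r @ A @ r)            # exact line-search step; equals 1/c here
x = x + l * r

assert np.isclose(l, 1.0 / c)
assert np.allclose(x, x_star)        # exact after one iteration
```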

e_{i+1}=(I-l_iA)e_i\Rightarrow \|e_{i+1}\|_A^2=\|e_i\|_A^2\omega_i^2

where the norm is the energy norm induced by A,

\|e\|_A\equiv(e^TAe)^{1/2}
and \omega_i (the subscript emphasizes the dependence on each step) is determined by the spectral condition number \kappa(A) and the slope \mu_i=\xi_{(i)\min}/\xi_{(i)\max}, the ratio of the error component along the eigenvector of \lambda_{\min} to that along the eigenvector of \lambda_{\max}. In the two-eigenvalue case,

\omega_i^2=1-\frac{(\kappa^2+\mu_i^2)^2}{(\kappa+\mu_i^2)(\kappa^3+\mu_i^2)}
The picture below shows the dependence of \omega on these two variables. One can see that the slowest convergence happens when both the condition number and the slope are large. A is referred to as ill-conditioned when \kappa is large.
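The same dependence can be probed numerically. This sketch (\kappa=10 is a hypothetical example value) scans the slope for a fixed condition number; the worst case occurs at \mu=\pm\kappa, where \omega attains the classical steepest descent bound (\kappa-1)/(\kappa+1):

```python
import numpy as np

# Scan slopes mu at fixed kappa; the worst omega occurs at mu = +/- kappa
# and equals the classical bound (kappa - 1)/(kappa + 1).
kappa = 10.0                         # hypothetical, moderately ill-conditioned
mu = np.linspace(-30.0, 30.0, 601)   # grid includes mu = +/- kappa
omega_sq = 1 - (kappa**2 + mu**2)**2 / ((kappa + mu**2) * (kappa**3 + mu**2))
bound = (kappa - 1) / (kappa + 1)

assert omega_sq.max() <= bound**2 + 1e-12       # never worse than the bound
assert abs(np.sqrt(omega_sq.max()) - bound) < 1e-9  # bound attained at mu = +/- kappa
```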