Power method

The power method is the simplest eigenvalue algorithm; it finds the eigenvalue of largest modulus.

Consider a diagonalizable matrix $A \in \mathbb{C}^{n \times n}$; then its eigenvectors $v_1, \dots, v_n$ (with eigenvalues ordered so that $|\lambda_1| > |\lambda_2| \ge \dots \ge |\lambda_n|$) form a basis for $\mathbb{C}^n$. Therefore, for every vector $x_0$, its expansion in the eigenbasis is

$$x_0 = \sum_{i=1}^{n} \alpha_i v_i.$$

If we apply the matrix $A$ iteratively, i.e. $x_k = A^k x_0$, in the eigenvector basis this is just

$$A^k x_0 = \sum_{i=1}^{n} \alpha_i \lambda_i^k v_i.$$

Factoring out the largest eigenvalue $\lambda_1$,

$$A^k x_0 = \lambda_1^k \left( \alpha_1 v_1 + \sum_{i=2}^{n} \alpha_i \left( \frac{\lambda_i}{\lambda_1} \right)^{k} v_i \right).$$

In the limit $k \to \infty$, the terms $(\lambda_i/\lambda_1)^k \to 0$ for $i \ge 2$, since $|\lambda_i/\lambda_1| < 1$; the iterate therefore aligns with the dominant eigenvector $v_1$.

The error term goes to zero linearly, as $|\lambda_2/\lambda_1|^k$. So convergence is faster if the largest eigenvalue is well separated from the rest of the spectrum.

To avoid overflow or underflow ($\lambda_1^k \to \infty$ or $\to 0$), at each iteration we normalize the vector.
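
A minimal sketch of the normalized iteration in NumPy; the test matrix, tolerance, and maximum iteration count are illustrative choices, not prescribed by the notes:

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=1000):
    """Normalized power iteration: returns an estimate of (lambda_1, v_1)."""
    x = x0 / np.linalg.norm(x0)       # normalize the start vector
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                     # apply the matrix
        lam_new = x @ y               # Rayleigh quotient estimate of lambda_1
        x = y / np.linalg.norm(y)     # renormalize to avoid over/underflow
        if abs(lam_new - lam) < tol * abs(lam_new):
            lam = lam_new
            break
        lam = lam_new
    return lam, x

# Example: the dominant eigenvalue of a small symmetric matrix (about 4.414).
A = np.array([[4.0, 1.0], [1.0, 2.0]])
lam, v = power_method(A, np.random.default_rng(0).standard_normal(2))
print(lam, v)
```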

Remark: the starting vector should have a nonzero component along $v_1$ (i.e. $\alpha_1 \ne 0$); it should not be perpendicular to $v_1$. This is however unlikely, and even then rounding errors will introduce a nonzero component, helping us!

If there is more than one eigenvalue of maximum modulus, the algorithm will converge to a vector in the corresponding eigenspace.
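
For example, with a repeated dominant eigenvalue the limit depends on the start vector, but it is still an eigenvector; a quick check reusing the `power_method` sketch above (a hypothetical helper, not part of the notes):

```python
A = np.diag([3.0, 3.0, 1.0])   # dominant eigenvalue 3 has a 2-dimensional eigenspace
lam, v = power_method(A, np.random.default_rng(1).standard_normal(3))
print(lam)                              # ~3.0
print(np.linalg.norm(A @ v - lam * v))  # small residual: v lies (nearly) in that eigenspace
```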

Shifts

Consider the shifted matrix $A - \sigma I$, for a shift $\sigma \in \mathbb{C}$. The spectrum of the shifted matrix is just the spectrum of $A$ shifted by $\sigma$, while the eigenvectors are the same:

$$(A - \sigma I)\, v_i = (\lambda_i - \sigma)\, v_i.$$

With shifts we can compute eigenvalues other than the one of greatest absolute value!

Shifting can also be used to speed up convergence, by decreasing the ratio $|\lambda_2 - \sigma| / |\lambda_1 - \sigma|$.
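
A sketch of both uses, again reusing the hypothetical `power_method` from above; the diagonal test matrix and the shift values are illustrative assumptions:

```python
import numpy as np

def shifted_power_method(A, x0, sigma, **kwargs):
    """Power iteration on A - sigma*I; undo the shift on the returned eigenvalue."""
    n = A.shape[0]
    lam_shifted, v = power_method(A - sigma * np.eye(n), x0, **kwargs)
    return lam_shifted + sigma, v   # eigenvalue of A farthest from sigma

A = np.diag([5.0, 4.0, 1.0])        # eigenvalues 5, 4, 1
x0 = np.random.default_rng(2).standard_normal(3)

# Plain power iteration targets lambda_1 = 5, converging like (4/5)^k = 0.8^k.
print(power_method(A, x0)[0])                      # ~5.0

# sigma = 2.5 maps the spectrum to 2.5, 1.5, -1.5: the ratio drops to 0.6,
# so we still get lambda_1 = 5 (after undoing the shift), but faster.
print(shifted_power_method(A, x0, sigma=2.5)[0])   # ~5.0

# sigma = 5 maps the spectrum to 0, -1, -4: now lambda = 1 is farthest
# from the shift, so the iteration converges to it instead.
print(shifted_power_method(A, x0, sigma=5.0)[0])   # ~1.0
```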