Summary by ChatGPT
In Krylov subspace methods, a sequence of increasingly accurate approximate solutions is generated by projecting the linear system onto a sequence of nested subspaces, called Krylov subspaces. These subspaces are constructed using the coefficient matrix of the linear system and a chosen starting vector. The method then iteratively computes the next approximation in the sequence by minimizing the residual (or some error norm, depending on the method) over the subspace.
One advantage of the Krylov subspace method is that it can be applied to large, sparse linear systems, which are common in many scientific and engineering applications. Another advantage is that it can be easily parallelized, which can make it more efficient on modern parallel computing architectures.
There are several variations of the Krylov subspace method, including the Conjugate Gradient method and the GMRES method. These variations differ in how they construct the Krylov subspaces and how they solve for the next approximation in the sequence.
Overall, the Krylov subspace method is a powerful tool for solving large, sparse linear systems and has many applications in scientific and engineering computing.
| Method | Pros | Cons |
|---|---|---|
| Conjugate Gradient | Fast convergence for well-conditioned systems | Requires a symmetric positive definite matrix |
| GMRES | Can handle poorly conditioned, nonsymmetric systems | Requires more memory and computation per iteration than other methods |
| BiCGSTAB | Can handle poorly conditioned, nonsymmetric systems | Convergence can be slower or irregular |
- The Conjugate Gradient method is known for its fast convergence on well-conditioned systems, but it applies only to symmetric positive definite matrices.
- The GMRES method can handle poorly conditioned and nonsymmetric systems, but it stores the whole Krylov basis, so its memory and computational cost grow with the iteration count (restarting mitigates this at the cost of slower convergence).
- The BiCGSTAB method can also handle poorly conditioned and nonsymmetric systems with short recurrences (low memory), but its convergence can be slower or irregular.
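As a sketch of how these methods are used in practice, the three solvers above are all available in SciPy's `scipy.sparse.linalg` module; the test matrix below (a 1D Laplacian) is purely illustrative:

```python
# Sketch: trying SciPy's Krylov solvers on a small sparse SPD system.
# The solver names are SciPy's actual API; the matrix is an illustrative example.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# Symmetric positive definite tridiagonal matrix (1D Laplacian stencil).
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_cg, info_cg = spla.cg(A, b)        # requires SPD A; info == 0 means converged
x_gm, info_gm = spla.gmres(A, b)     # works for general A, higher memory cost
x_bi, info_bi = spla.bicgstab(A, b)  # general A, short recurrences

for name, x in [("CG", x_cg), ("GMRES", x_gm), ("BiCGSTAB", x_bi)]:
    print(name, np.linalg.norm(b - A @ x))
```

Note that all three solvers only access `A` through matrix-vector products, which is what makes them suitable for large sparse systems.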
Krylov subspace
The Krylov subspace is generated by repeatedly applying the coefficient matrix to the starting vector and taking the span of the resulting vectors.
Definition. The Krylov subspace of order $m$, where $1 \le m \le n$, relative to the matrix $A \in \mathbb{R}^{n \times n}$ and the vector $v \in \mathbb{R}^n$, is the subspace:

$$K_m(A, v) = \operatorname{span}\{v, Av, A^2 v, \dots, A^{m-1} v\}$$
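This construction can be written down directly in a few lines of NumPy; the matrix and vector below are illustrative examples, not part of the definition:

```python
# Sketch: building the (unnormalized) Krylov vectors {v, Av, ..., A^{m-1} v}
# for a small example; the names A, v, m match the definition above.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 0.0])
m = 2

vectors = [v]
for _ in range(m - 1):
    vectors.append(A @ vectors[-1])  # next vector: A applied to the previous one
K = np.column_stack(vectors)         # columns span K_m(A, v)

print(np.linalg.matrix_rank(K))      # dimension of the Krylov subspace
```

In floating point the raw vectors $A^k v$ quickly become nearly parallel, which is why practical methods orthonormalize them (see the Arnoldi iteration below in exact arithmetic terms: an orthonormal basis of the same span).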
Remarks. It follows from the definition that:
- if $w \in K_m(A, v)$ then there exists a polynomial $p$ of degree at most $m-1$ such that $w = p(A)v$
In the setting of solving a linear system $Ax = b$ given an initial guess $x_0$ and its corresponding residual $r_0 = b - Ax_0$, we define the affine subspace:

$$x_0 + K_m(A, r_0)$$
The idea is to find the new approximate solution $x_m$ in this affine subspace. The new solution is chosen so that it minimizes some distance (a norm that depends on the method).
Each method has its own condition. For instance, GMRES stands for Generalized Minimal RESidual: it chooses $x_m \in x_0 + K_m(A, r_0)$ that minimizes the Euclidean norm of the residual, $\|b - Ax_m\|_2$.
The Conjugate Gradient method instead minimizes the energy norm of the error, $\|x - x_m\|_A = \sqrt{(x - x_m)^\top A (x - x_m)}$ (defined when $A$ is symmetric positive definite).
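The GMRES condition can be made concrete as an explicit least-squares problem over the Krylov basis. This is a sketch for intuition only (the matrix, vector, and the use of `lstsq` are illustrative); real GMRES implementations work with the Arnoldi relation instead of raw Krylov vectors:

```python
# Sketch: the minimal-residual condition as a least-squares problem.
# Write x = x0 + K y, with the columns of K spanning K_m(A, r0), and
# minimize ||b - A(x0 + K y)||_2 = ||r0 - (A K) y||_2 over y.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])
x0 = np.zeros(2)
r0 = b - A @ x0
m = 2

K = np.column_stack([np.linalg.matrix_power(A, k) @ r0 for k in range(m)])
y, *_ = np.linalg.lstsq(A @ K, r0, rcond=None)
x = x0 + K @ y

print(np.linalg.norm(b - A @ x))  # minimal residual over x0 + K_m(A, r0)
```

Here $m$ equals the dimension of the system, so the minimal residual is (numerically) zero; for $m < n$ the same least-squares problem yields the best approximation available in the subspace.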
It is useful to find an orthonormal basis for a given Krylov subspace. The Arnoldi iteration is a numerical algorithm based on a (modified) Gram-Schmidt procedure.