Commit 0bf5ca77 authored by conmccoid

Extrap: equivalence between MPE and GMRES, with both numerical and logical evidence

parent eb72d8aa
% Example - Extrapolation: fixed point iteration on linear function
% solve the equation x = Ax + b for random A and b
- d = 10; n=2*d;
+ d = 5; n=2*d;
A = rand(d); b = rand(d,1); x_true = -(A - eye(d)) \ b;
X = b;
for i=1:n
@@ -25,7 +25,7 @@ for i=1:n
res_MPE_v2(i) = norm(A*x_v2(:,i) + b - x_v2(:,i));
end
- [x_GMRES,~,~,~,res_GMRES]=gmres(A-eye(d),-b,d,0,n);
+ [x_GMRES,~,~,~,res_GMRES]=gmres(A-eye(d),-b,d,0,n,eye(d),eye(d),X(:,1));
figure(1)
semilogy(1:n,Error_v1,'b*--',1:n,Error_v2,'k.--',1:length(res_GMRES),res_GMRES,'ro')
@@ -561,5 +561,47 @@ Table \ref{tab:KrylovExtrap} gives the orthogonalization conditions and corresponding
\end{table}
The methods become Krylov methods most readily when the sequence to be accelerated may be expressed as $\vec{x}_{n+1} = A \vec{x}_n + \vec{b}$.
Under this sequence the function $\fxi{n}=\vec{x}_{n+1}-\vec{x}_n$ satisfies
\begin{align*}
\fxi{n}= & \vec{x}_{n+1} - \vec{x}_n \\
= & A \vec{x}_n + \vec{b} - \vec{x}_n \\
= & (A-I) \vec{x}_n + \vec{b}, \\
\fxi{n+1} = & \vec{x}_{n+2} - \vec{x}_{n+1} \\
= & A \vec{x}_{n+1} + \vec{b} - A \vec{x}_n - \vec{b} \\
= & A (\vec{x}_{n+1} - \vec{x}_n) \\
= & A \fxi{n}.
\end{align*}
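This identity is easy to verify numerically; the following is a minimal MATLAB sketch (new here, independent of the experiment above):
\begin{verbatim}
% Check f(x_{n+1}) = A f(x_n) for the iteration x_{n+1} = A x_n + b
d = 5;
A = rand(d); b = rand(d,1); x0 = rand(d,1);
x1 = A*x0 + b;  x2 = A*x1 + b;
f0 = x1 - x0;   f1 = x2 - x1;
norm(f1 - A*f0)   % zero to machine precision
\end{verbatim}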
Suppose we wish to solve $(A-I) \vec{x} = -\vec{b}$, whose solution is the fixed point of the sequence.
The goal is then to minimize the residual norm $\norm{(A-I) \hat{\vec{x}} + \vec{b}}$ over approximate solutions $\hat{\vec{x}}$.
We consider three types of solutions:
\begin{enumerate}
\item $\hat{\vec{x}} = X_{n,k} \vec{u}_k$ such that $\vec{1}^\top \vec{u}_k=1$;
\begin{align*}
\norm{(A-I) X_{n,k} \vec{u}_k + \vec{b}} = & \norm{((A-I) X_{n,k} + \vec{b} \vec{1}^\top) \vec{u}_k} \\
= & \norm{F_{n,k} \vec{u}_k}.
\end{align*}
\item $\hat{\vec{x}} = \vec{x}_n + X_{n,k} \Delta \tilde{\vec{u}}_k$ where
\begin{equation*}
\Delta = \begin{bmatrix} -1 \\ 1 & \ddots \\ & \ddots & -1 \\ & & 1 \end{bmatrix};
\end{equation*}
\begin{align*}
\norm{(A-I) \vec{x}_n + (A-I) X_{n,k} \Delta \tilde{\vec{u}}_k + \vec{b}} = & \norm{\fxi{n} + F_{n,k} \Delta \tilde{\vec{u}}_k} \\
= & \norm{\fxi{n} + (A-I) F_{n,k-1} \tilde{\vec{u}}_k}.
\end{align*}
\item $\hat{\vec{x}} =\vec{x}_n + Q_k \vec{y}_k$ where $Q_k$ is derived from the Arnoldi iteration on the Krylov subspace $\mathcal{K}_{k-1}(A-I,\fxi{n})$;
\begin{align*}
\norm{(A-I) \vec{x}_n + (A-I) Q_k \vec{y}_k + \vec{b}} = & \norm{\fxi{n} + (A-I) Q_k \vec{y}_k}.
\end{align*}
\end{enumerate}
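The following MATLAB sketch (not part of the original experiment; variable names mirror the notation above with $n=0$, so \texttt{X(:,j)} holds $\vec{x}_{j-1}$) computes all three minimizers on a small random problem and confirms that they attain the same residual norm:
\begin{verbatim}
d = 5; k = 3;
A = rand(d)/d; b = rand(d,1);        % scaled so the iteration converges
X = rand(d,1);                       % x_0
for j = 1:k
    X(:,j+1) = A*X(:,j) + b;         % x_0, ..., x_k
end
F = (A - eye(d))*X + b*ones(1,k+1);  % F(:,j) = f(x_{j-1})

% Method 1 (MPE): min ||F*u|| s.t. sum(u) = 1, via a Lagrange multiplier
e = ones(k+1,1);
sol = [2*(F'*F) e; e' 0] \ [zeros(k+1,1); 1];
xhat1 = X*sol(1:k+1);

% Method 2: xhat = x_0 + X*Delta*w, unconstrained least squares
Delta = diff(eye(k+1))';             % (k+1)-by-k difference matrix
w = -(F*Delta) \ F(:,1);
xhat2 = X(:,1) + X*Delta*w;

% Method 3 (GMRES): k steps on (A-I)x = -b, started from x_0
xhat3 = gmres(A - eye(d), -b, k, 0, 1, [], [], X(:,1));

r = @(x) norm((A - eye(d))*x + b);   % residual norm
[r(xhat1) r(xhat2) r(xhat3)]         % all three agree
\end{verbatim}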
Method 1 is equivalent to MPE, since minimizing this norm under the constraint $\vec{1}^\top \vec{u}_k = 1$ amounts to solving the normal equations that define MPE.
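Concretely (a standard characterization of MPE): introducing a Lagrange multiplier $\lambda$ for the constraint $\vec{1}^\top \vec{u}_k = 1$, the minimizer satisfies
\begin{align*}
F_{n,k}^\top F_{n,k} \vec{u}_k = & \lambda \vec{1}, \\
\vec{1}^\top \vec{u}_k = & 1,
\end{align*}
which are precisely the normal equations defining MPE.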
We have previously shown that methods 1 and 2 are equivalent.
Method 3 is GMRES; to show it is equivalent to method 2 it suffices to prove that the Krylov subspace $\mathcal{K}_{k-1}(A-I,\fxi{n})$ equals the column space of $F_{n,k-1}$.
This follows directly: since $\fxi{n+j} = A^j \fxi{n}$, the columns of $F_{n,k-1}$ span $\mathcal{K}_{k-1}(A,\fxi{n})$, and shifting the matrix by the identity leaves Krylov subspaces unchanged, so $\mathcal{K}_{k-1}(A,\fxi{n}) = \mathcal{K}_{k-1}(A-I,\fxi{n})$.
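The same fact can be checked numerically (a sketch; each loop iteration appends one column to each basis):
\begin{verbatim}
% Column space of F_{n,k-1} vs. Krylov space K_{k-1}(A-I, f(x_n))
d = 6; k = 4;
A = rand(d); b = rand(d,1); x = rand(d,1);
f0 = (A - eye(d))*x + b;             % f(x_n)
F = zeros(d,k); K = zeros(d,k);
f = f0; v = f0;
for j = 1:k
    F(:,j) = f;                      % f(x_{n+j-1})
    K(:,j) = v;                      % (A-I)^(j-1) f(x_n)
    f = A*f;                         % f(x_{n+j}) = A f(x_{n+j-1})
    v = (A - eye(d))*v;
end
[rank(F) rank(K) rank([F K])]        % equal ranks <=> same column space
\end{verbatim}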
If $Q_k$ is derived by some other method but shares the column space of $F_{n,k-1}$, then methods 2 and 3 remain mathematically equivalent.
If one minimizes with respect to a different norm, the resulting schemes correspond to the other methods discussed here.
\end{document}
\ No newline at end of file