Commit 856c6234 authored by conmccoid

Extrap: update to notes, initial commit on code folder and AltS algorithm (4 lines)

parent 492662b6
function [u1,u2] = ALGO_extrap_AltS_v1_20210708(F1,F2,u2,N)
% ALTS Performs alternating Schwarz
% [u1,u2] = AltS(F1,F2,u2,N) returns u1 and u2, the solution on the first
% and second subdomains, respectively, using N iterations of the
% alternating Schwarz method. The functionals F1 and F2 compute the
% solutions on the respective subdomains by taking the solution on the
% previous subdomain for its boundary data.
for i = 1:N
    u1 = F1(u2);
    u2 = F2(u1);
end
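As a quick sanity check, the loop above can be transcribed to Python and run on two illustrative scalar "subdomain solves" (the contractions `F1` and `F2` below are hypothetical, not from the notes); the alternating iteration converges to their common fixed point.

```python
def alt_schwarz(F1, F2, u2, N):
    """Python transcription of the MATLAB AltS loop above.

    Each functional takes the other subdomain's current solution
    as its boundary data; N is the number of sweeps."""
    for _ in range(N):
        u1 = F1(u2)
        u2 = F2(u1)
    return u1, u2

# Hypothetical scalar "subdomain solves": both are contractions,
# with common fixed point u1 = u2 = 1.
F1 = lambda u2: (1 + u2) / 2
F2 = lambda u1: (2 + u1) / 3
u1, u2 = alt_schwarz(F1, F2, 0.0, 20)
```

With contraction factors 1/2 and 1/3, the composed sweep contracts by 1/6 per iteration, so 20 sweeps reduce the error far below rounding interest.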
\begin{align*}
d^n = & G d^{n-1} = G^n d^0 \\
\sum \gamma_j d^j = & \sum \gamma_j G^j d^0 = p_n(G) d^0.
\end{align*}
Minimizing this last expression is called Minimized Polynomial Extrapolation (MPE).
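Concretely, the minimization can be posed as a least-squares problem on the difference vectors $d^j$, with the coefficients normalized so that $\sum \gamma_j = 1$. The following Python sketch assumes an illustrative linear fixed-point iteration $x_{n+1} = G x_n + b$ (the matrix $G$ and vector $b$ are made up for the example); when $k$ reaches the degree of the minimal polynomial of $G$ with respect to $d^0$, the extrapolated point is the exact fixed point.

```python
import numpy as np

def mpe(X):
    """Minimized Polynomial Extrapolation from columns x_0, ..., x_{k+1}."""
    U = np.diff(X, axis=1)                        # differences d^0, ..., d^k
    # least-squares solve for c in U[:, :k] c = -d^k, then append gamma_k = 1
    c = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)[0]
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()                          # enforce sum(gamma) = 1
    return X[:, :-1] @ gamma                      # T = sum_j gamma_j x_j

# Illustrative linear iteration x_{n+1} = G x_n + b (not from the notes).
G = np.array([[0.5, 0.1],
              [0.2, 0.3]])
b = np.ones(2)
X = np.zeros((2, 4))                              # columns x_0, ..., x_3
for j in range(3):
    X[:, j + 1] = G @ X[:, j] + b
T = mpe(X)   # with k = 2 = dim, T solves (I - G) x = b exactly
```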
Overdetermine the system:
\begin{equation*}
which may be readily obtained by rearranging the relation $R$.
\section{Summary of Sidi proof of MPE=Arnoldi}
Let $(x_n)$ be a vector sequence to be accelerated, then we produce approximations
\begin{equation}
T_n^{(k)} = \sum_{j=0}^k \gamma_j^{(n,k)} x_{n+j}
\end{equation}
where
\sum_{j=0}^k \gamma_j^{(n,k)} = 1 .
\end{equation*}
The $\gamma_j^{(n,k)}$ also satisfy
\begin{equation} \label{eq:gamma 2}
\sum_{j=0}^k \langle \Delta x_{n+i}, \Delta x_{n+j} \rangle \gamma_j^{(n,k)} = 0, \quad 0 \leq i \leq k-1
\end{equation}
where $\langle \cdot, \cdot \rangle$ is the L2 inner product.
Moreover,
\end{equation}
Thus,
\begin{equation}
r(T_n^{(k)}) = \sum \gamma_i^{(n,k)} \Delta x_{n+i} \in \Span \set{\Delta x_n, A \Delta x_n, \dots , A^k \Delta x_n} = K_{k+1}(A,\Delta x_n)
\end{equation}
where $K_{k+1}(A,\Delta x_n)$ is the Krylov subspace of dimension $k+1$ generated by $A$ and $\Delta x_n$.
By Equation (\ref{eq:gamma 2}) we have that $\langle v, r(T_n^{(k)}) \rangle = 0$ for all $v \in K_{k}(A,\Delta x_n)$.
Therefore, MPE is an orthogonal projection method.
In fact, it is identical in methodology to the Arnoldi process.
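This orthogonality can be checked numerically. The Python sketch below uses an illustrative random linear iteration (the matrix $A$ and vector $b$ are invented for the example), builds the $\gamma_j$ from the Gram orthogonality conditions above together with the normalization $\sum_j \gamma_j = 1$, and verifies that the residual $r(T_n^{(k)})$ is orthogonal to $K_k(A, \Delta x_n)$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 6, 3
A = rng.standard_normal((m, m)) / (2 * np.sqrt(m))   # illustrative contraction
b = rng.standard_normal(m)

# Fixed-point iteration x_{n+1} = A x_n + b, starting from x_0 = 0.
X = np.zeros((m, k + 2))
for j in range(k + 1):
    X[:, j + 1] = A @ X[:, j] + b
D = np.diff(X, axis=1)          # columns Delta x_0, ..., Delta x_k

# gamma: k Gram conditions <Delta x_i, sum_j gamma_j Delta x_j> = 0
# (0 <= i <= k-1) plus the normalization sum_j gamma_j = 1.
M = np.vstack([D[:, :k].T @ D, np.ones(k + 1)])
rhs = np.zeros(k + 1)
rhs[k] = 1.0
gamma = np.linalg.solve(M, rhs)

r = D @ gamma                   # residual r(T_0^{(k)})
# r lies in K_{k+1}(A, Delta x_0) and should be orthogonal to
# K_k = span{Delta x_0, ..., Delta x_{k-1}}.
print(np.abs(D[:, :k].T @ r).max())
```

The printed quantity is the largest inner product of $r$ against the basis of $K_k$, and sits at the level of rounding error, which is exactly the orthogonal-projection property shared with the Arnoldi process.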
\end{document}