Commit 4fc64130 authored by Conor McCoid's avatar Conor McCoid

Tetra: application of gen alg to tetra case, first attempt at algo for higher dim

parent bd4b5fb6
function output = SUB_Stepm_v1_20211116(D,J,H)
% Step m of the general algorithm (first attempt).
% Each input carries a reference column followed by the candidate columns.
d=D(:,1); D=D(:,2:end);
j=J(:,1); J=J(:,2:end);
h=H(:,1); H=H(:,2:end); % reference column of H, not yet used below
while ~isempty(D)
    % collect the columns of D identical to the reference column d
    indT=prod(d==D); D=D(:,indT==0);
    JT=J(:,indT==1); J=J(:,indT==0);
    HT=H(:,indT==1); H=H(:,indT==0); % fixed: was HT=J(:,indT==1)
    while ~isempty(JT)
        % split off the columns of JT that differ from j in exactly two entries
        indj=sum(j~=JT);
        Jj=JT(:,indj==2); JT=JT(:,indj<2);
        Hj=HT(:,indj==2); HT=HT(:,indj<2);
    end
    % NOTE: the loops do not yet advance their reference columns (d, j), the
    % groups Jj and Hj are not yet consumed, and output is never assigned;
    % this first attempt is incomplete and may not terminate in general.
end
\ No newline at end of file
@@ -978,8 +978,8 @@ The vertices of $X$ are found by solving the system
where $\vec{v}_0$ is the vertex of $V$ mapped to the origin, $\vec{v}_i$ is the vector between $\vec{v}_0$ and the $i$--th vertex of $V$, and the columns of $\hat{U}$ are the vertices of $U$.
A vertex of $X$, $\vec{x}_i$, lies inside $Y$ only if $\vec{x}_i \cdot \vec{e}_\gamma \geq 0$ for all $\gamma$.
It is also necessary that $1 - \sum \vec{x}_i \cdot \vec{e}_\gamma \geq 0$.
For the sake of notation we denote $\vec{x} \cdot \vec{e}_0 = 1 - \sum \vec{x} \cdot \vec{e}_\gamma$.
The $(ij)$--th edge of $X$ intersects an $(n-1)$--dimensional hyperplane of $Y$, defined as $P_\gamma = \Set{\vec{x} \in \bbr^n}{\vec{x} \cdot \vec{e}_\gamma = 0}$, if $\sign(\vec{x}_i \cdot \vec{e}_\gamma) \neq \sign(\vec{x}_j \cdot \vec{e}_\gamma)$.
Proposition \ref{prop:tetra intersections} can be extended to this general dimension.
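A minimal MATLAB sketch of these two tests (an illustration only; the variable Xv and its values are assumed, not taken from the repository):

% Assumption: Xv holds the vertices x_i of X as columns, expressed in the
% coordinates e_1,...,e_n of Y (hypothetical values with n = 3).
Xv = [0.2  0.9 -0.1;
      0.3  0.4  0.5;
      0.1 -0.2  0.4];
dots   = [Xv; 1 - sum(Xv,1)];  % row gamma: x_i.e_gamma; last row: x_i.e_0
inside = all(dots >= 0, 1);    % vertex i lies inside Y iff its column is nonnegative
% the (ij)-th edge of X crosses P_gamma iff sign(x_i.e_gamma) ~= sign(x_j.e_gamma);
% the same comparison applies to the e_0 row
i = 1; j = 2;
crossings = sign(dots(1:end-1,i)) ~= sign(dots(1:end-1,j));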
@@ -1131,4 +1131,19 @@ Each step of the algorithm has been proven to be self-consistent and consistent
By induction, the algorithm as a whole is consistent for any dimension.
\end{proof}
If we apply this general algorithm in dimension 3 to the intersection of two tetrahedra, then we expect to recover results identical to those of the algorithm proposed in the previous sections.
In this particular case $\vec{x}_i = \begin{bmatrix} x_i & y_i & z_i \end{bmatrix}^\top$.
The first two steps of the two algorithms already correspond to one another.
In the third step of the general algorithm, $\vec{h}(\set{i,k} | \set{j}) \cdot \vec{e}_\eta$ is calculated.
Suppose $j=1$ and $\eta=2$, then
\begin{equation*}
\vec{h}(\set{i,k}|\set{1}) \cdot \vec{e}_2 = \frac{\begin{vmatrix} y_i & x_i \\ y_k & x_k \end{vmatrix}}{\begin{vmatrix} 1 & x_i \\ 1 & x_k \end{vmatrix}}.
\end{equation*}
This is identical to $q^{ik}_x$ from the tetrahedral algorithm.
It is straightforward to work out that the other combinations of $j$ and $\eta$ will give the equations for $q^{ik}_\gamma$, $r^{ik}_\gamma$ and $1-q^{ik}_\gamma-r^{ik}_\gamma$.
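For illustration, the same quantity in MATLAB for two hypothetical vertices (the coordinate values and variable names are assumed):

xi = [ 0.6; 0.2; 0.1];  % hypothetical x_i = [x_i; y_i; z_i]
xk = [-0.3; 0.5; 0.4];  % hypothetical x_k = [x_k; y_k; z_k]
num = det([xi(2) xi(1); xk(2) xk(1)]);  % | y_i x_i ; y_k x_k |
den = det([ 1    xi(1);  1    xk(1)]);  % | 1   x_i ; 1   x_k |
h12 = num/den;  % h({i,k}|{1}).e_2, which the text identifies with q^{ik}_x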
Likewise, one can show that the calculations in step m, repeated once in this case, are identical to those of step 2bi of the tetrahedral algorithm.
Since these calculations are the same, the comparison of the signs of the intersections will return the same results.
Thus, both algorithms will agree as to which vertices of $Y$ lie inside $X$.
\end{document}
\ No newline at end of file