Commit 90f0a946 authored by Conor McCoid

Tetra and thesis: moved sign of denominator to thesis; finished lemma on adjacency of next generation, but it does not consider the possibility of smaller cycles containing three of four vertex sets
parent f4d59d15
@@ -1082,82 +1082,6 @@ For each of these the numerator of $\vec{h}(J | \Gamma_j) \cdot \vec{e}_j$ is th
By this corollary, if there is a change in sign of $\vec{h}(J | \Gamma) \cdot \vec{e}_\eta$ then the entire $m$--face of $X$ defined by the indices $J$ ends up on the other side of the $(n-m)$--face of $Y$ defined by $\Gamma \cup \set{\eta}$.
If the $J$--th $m$--face of $X$ does not have $m+1$ intersections then the signs of all existing intersections can be found using the intersections of the $(m-1)$--faces of $X$ with indices that are subsets of $J$.
For the sake of notation let
\begin{align*}
X_J = & \begin{bmatrix} \vec{x}_{i_0} & \dots & \vec{x}_{i_m} \end{bmatrix}, \quad &
I_\Gamma = & \begin{bmatrix} \vec{e}_{\gamma_1} & \dots & \vec{e}_{\gamma_m} \end{bmatrix}
\end{align*}
for $J = \set{i_j}_{j=0}^m$ and $\Gamma = \set{\gamma_j}_{j=1}^m$.
Then $$\vec{h}(J|\Gamma) \cdot \vec{e}_\eta = \frac{\begin{vmatrix} X_J^\top \vec{e}_\eta & X_J^\top I_\Gamma \end{vmatrix} }{ \begin{vmatrix} \vec{1} & X_J^\top I_\Gamma \end{vmatrix}}.$$
\begin{lemma}
Suppose $\sign(\vec{h}(J \setminus \set{i} | \Gamma) \cdot \vec{e}_\eta) \neq \sign(\vec{h}(J \setminus \set{j} | \Gamma) \cdot \vec{e}_\eta)$ and that an intersection $\vec{h}(J \setminus \set{i,j} | \Gamma \setminus \set{\gamma})$ has been calculated for some $\gamma \in \Gamma$. Then
\begin{equation*}
\sign \left ( \begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \right ) =
\sign \left ( \begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
\begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix} \right ).
\end{equation*}
\end{lemma}
\begin{proof}
Without loss of generality, suppose $\vec{x}_i$ is the first column of $X_{J \setminus \set{j}}$ and likewise $\vec{x}_j$ is the first column of $X_{J \setminus \set{i}}$.
Furthermore, to adhere to the notation already established, the first column of $I_{\Gamma \cup \set{\eta}}$ must be $\vec{e}_\eta$.
By assumption $\vec{h}(J \setminus \set{i} | \Gamma) \cdot \vec{e}_\eta - \vec{h}(J \setminus \set{j} | \Gamma) \cdot \vec{e}_\eta$ has the same sign as $\vec{h}(J \setminus \set{i} | \Gamma) \cdot \vec{e}_\eta$.
We begin by simplifying this expression:
\begin{align*}
\vec{h}(J \setminus \set{i} | \Gamma)\cdot \vec{e}_\eta & - \vec{h}(J \setminus \set{j} | \Gamma) \cdot \vec{e}_\eta =
\frac{ \begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
}{ \begin{vmatrix} \vec{1} & X_{J \setminus \set{i}}^\top I_\Gamma \end{vmatrix}} -
\frac{ \begin{vmatrix} X_{J \setminus \set{j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
}{ \begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix}} \\
= & \frac{ \begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} -
\begin{vmatrix} \vec{1} & X_{J \setminus \set{i}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} X_{J \setminus \set{j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} }{
\begin{vmatrix} \vec{1} & X_{J \setminus \set{i}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix} }.
\end{align*}
We expand the numerator of this expression:
\begin{align*}
& \left ( \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} + \sum\limits_{k=1}^{m} (-1)^k \vec{x}_i^\top \vec{e}_{\gamma_k} \begin{vmatrix} \vec{1} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \setminus \set{\gamma_k}} \end{vmatrix} \right ) \begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \\
& - \left ( \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} + \sum\limits_{k=1}^{m} (-1)^k \vec{x}_j^\top \vec{e}_{\gamma_k} \begin{vmatrix} \vec{1} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \setminus \set{\gamma_k}} \end{vmatrix} \right ) \begin{vmatrix} X_{J \setminus \set{j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \\
& = \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} 1 & \vec{x}_i^\top I_{\Gamma \cup \set{\eta}} \\ 1 & \vec{x}_j^\top I_{\Gamma \cup \set{\eta}} \\ \vec{0} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
+ \begin{vmatrix} \vec{x}_i^\top I_\Gamma \vec{w} & \vec{x}_i^\top I_{\Gamma \cup \set{\eta}} \\ \vec{x}_j^\top I_\Gamma \vec{w} & \vec{x}_j^\top I_{\Gamma \cup \set{\eta}} \\ \vec{0} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \\
& = \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} 1 & \vec{x}_i^\top I_{\Gamma \cup \set{\eta}} \\ 1 & \vec{x}_j^\top I_{\Gamma \cup \set{\eta}} \\ \vec{0} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
+ \begin{vmatrix} 0 & \vec{x}_i^\top I_{\Gamma \cup \set{\eta}} \\ 0 & \vec{x}_j^\top I_{\Gamma \cup \set{\eta}} \\ -X_{J \setminus \set{i,j}}^\top I_\Gamma \vec{w} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \\
& = \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} \begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix},
\end{align*}
where $w_k = (-1)^k \begin{vmatrix} \vec{1} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \setminus \set{\gamma_k}} \end{vmatrix}$ for $k = 1, \dots, m$.
To prove the last equality, note that
each element of $X_{J \setminus \set{i,j}}^\top I_\Gamma \vec{w}$ is equal to the same value:
\begin{align*}
\left ( X_{J \setminus \set{i,j}}^\top I_\Gamma \vec{w} \right )_l = &
\sum_{k=1}^{m} (-1)^k \vec{x}_l^\top \vec{e}_{\gamma_k} \begin{vmatrix} \vec{1} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \setminus \set{\gamma_k}} \end{vmatrix} \\
= & \begin{vmatrix} 0 & \vec{x}_l^\top I_\Gamma \\ \vec{1} & X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} \\
= & \begin{vmatrix} -1 & \vec{0}^\top \\ \vec{1} & X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} \\
= & - \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}.
\end{align*}
The simplified expression is then
\begin{equation*}
\vec{h}(J \setminus \set{i} | \Gamma)\cdot \vec{e}_\eta - \vec{h}(J \setminus \set{j} | \Gamma) \cdot \vec{e}_\eta =
\frac{ \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} }{
\begin{vmatrix} \vec{1} & X_{J \setminus \set{i}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix}} .
\end{equation*}
Therefore, the sign of $\begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}$ is
\begin{equation*}
\sign \left ( \begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \right ) =
\sign \left ( \begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
\begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix} \right ).
\end{equation*}
\end{proof}
\subsection{Algorithm for the intersection of n-dimensional simplices}
Let $[a..b]$ represent the set of integers between $a$ and $b$.
@@ -7,6 +7,93 @@ investigates methodology for various aspects of the algorithm, including how to
There are a number of practical concerns when it comes to implementing the algorithm.
\subsection{Connectivity of signs}
As has been explained in detail in Section (nb: reference), the intersections of a given $m$--face of $X$ with a given collection of hyperplanes of $Y$ are connected through the signs of their components.
This connectivity is what lends the algorithm its consistency of shape.
This connectivity was presented above through what is effectively Cramer's rule applied to solve for the intersections.
The numerators and denominators then have specific connections with one another.
However, Cramer's rule is a particularly inaccurate, unstable and inefficient way of calculating the intersections, as will be discussed in more detail further on (nb: ref?).
It is then desirable to maintain the connectivity of the Cramer's rule representation while using better subroutines to calculate the intersections.
There exist connections between the denominators of one generation of intersections and the numerators of previous generations.
These connections are important for the total connectivity of the signs.
For the sake of notation let
\begin{align*}
X_J = & \begin{bmatrix} \vec{x}_{i_0} & \dots & \vec{x}_{i_m} \end{bmatrix}, \quad &
I_\Gamma = & \begin{bmatrix} \vec{e}_{\gamma_1} & \dots & \vec{e}_{\gamma_m} \end{bmatrix}
\end{align*}
for $J = \set{i_j}_{j=0}^m$ and $\Gamma = \set{\gamma_j}_{j=1}^m$.
Then $$\vec{h}(J|\Gamma) \cdot \vec{e}_\eta = \frac{\begin{vmatrix} X_J^\top \vec{e}_\eta & X_J^\top I_\Gamma \end{vmatrix} }{ \begin{vmatrix} \vec{1} & X_J^\top I_\Gamma \end{vmatrix}}.$$
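As a purely illustrative sketch, the quotient of determinants above can be evaluated directly.
The function below (hypothetical name \texttt{h\_component}, assuming the vertices $\vec{x}_{i_0}, \dots, \vec{x}_{i_m}$ are stored as the columns of an $n \times (m+1)$ array) simply mirrors the formula with NumPy; it is not one of the better subroutines referred to above.
\begin{verbatim}
import numpy as np

def h_component(X_J, Gamma, eta):
    # X_J   : (n, m+1) array whose columns are x_{i_0}, ..., x_{i_m}
    # Gamma : list of the m coordinate indices gamma_1, ..., gamma_m
    # eta   : the extra coordinate index
    XtI = X_J.T[:, Gamma]                                 # X_J^T I_Gamma, (m+1) x m
    num = np.column_stack([X_J.T[:, eta], XtI])           # [X_J^T e_eta, X_J^T I_Gamma]
    den = np.column_stack([np.ones(X_J.shape[1]), XtI])   # [1, X_J^T I_Gamma]
    return np.linalg.det(num) / np.linalg.det(den)
\end{verbatim}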
\begin{lemma}
Suppose $\sign(\vec{h}(J \setminus \set{i} | \Gamma) \cdot \vec{e}_\eta) \neq \sign(\vec{h}(J \setminus \set{j} | \Gamma) \cdot \vec{e}_\eta)$ and that an intersection $\vec{h}(J \setminus \set{i,j} | \Gamma \setminus \set{\gamma})$ has been calculated for some $\gamma \in \Gamma$. Then
\begin{equation*}
\sign \left ( \begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \right ) =
\sign \left ( \begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
\begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix} \right ).
\end{equation*}
\end{lemma}
\begin{proof}
Without loss of generality, suppose $\vec{x}_i$ is the first column of $X_{J \setminus \set{j}}$ and likewise $\vec{x}_j$ is the first column of $X_{J \setminus \set{i}}$.
Furthermore, to adhere to the notation already established, the first column of $I_{\Gamma \cup \set{\eta}}$ must be $\vec{e}_\eta$.
By assumption $\vec{h}(J \setminus \set{i} | \Gamma) \cdot \vec{e}_\eta - \vec{h}(J \setminus \set{j} | \Gamma) \cdot \vec{e}_\eta$ has the same sign as $\vec{h}(J \setminus \set{i} | \Gamma) \cdot \vec{e}_\eta$.
We begin by simplifying this expression:
\begin{align*}
\vec{h}(J \setminus \set{i} | \Gamma)\cdot \vec{e}_\eta & - \vec{h}(J \setminus \set{j} | \Gamma) \cdot \vec{e}_\eta =
\frac{ \begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
}{ \begin{vmatrix} \vec{1} & X_{J \setminus \set{i}}^\top I_\Gamma \end{vmatrix}} -
\frac{ \begin{vmatrix} X_{J \setminus \set{j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
}{ \begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix}} \\
= & \frac{ \begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} -
\begin{vmatrix} \vec{1} & X_{J \setminus \set{i}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} X_{J \setminus \set{j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} }{
\begin{vmatrix} \vec{1} & X_{J \setminus \set{i}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix} }.
\end{align*}
We expand the numerator of this expression:
\begin{align*}
& \left ( \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} + \sum\limits_{k=1}^{m} (-1)^k \vec{x}_i^\top \vec{e}_{\gamma_k} \begin{vmatrix} \vec{1} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \setminus \set{\gamma_k}} \end{vmatrix} \right ) \begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \\
& - \left ( \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} + \sum\limits_{k=1}^{m} (-1)^k \vec{x}_j^\top \vec{e}_{\gamma_k} \begin{vmatrix} \vec{1} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \setminus \set{\gamma_k}} \end{vmatrix} \right ) \begin{vmatrix} X_{J \setminus \set{j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \\
& = \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} 1 & \vec{x}_i^\top I_{\Gamma \cup \set{\eta}} \\ 1 & \vec{x}_j^\top I_{\Gamma \cup \set{\eta}} \\ \vec{0} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
+ \begin{vmatrix} \vec{x}_i^\top I_\Gamma \vec{w} & \vec{x}_i^\top I_{\Gamma \cup \set{\eta}} \\ \vec{x}_j^\top I_\Gamma \vec{w} & \vec{x}_j^\top I_{\Gamma \cup \set{\eta}} \\ \vec{0} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \\
& = \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} 1 & \vec{x}_i^\top I_{\Gamma \cup \set{\eta}} \\ 1 & \vec{x}_j^\top I_{\Gamma \cup \set{\eta}} \\ \vec{0} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
+ \begin{vmatrix} 0 & \vec{x}_i^\top I_{\Gamma \cup \set{\eta}} \\ 0 & \vec{x}_j^\top I_{\Gamma \cup \set{\eta}} \\ -X_{J \setminus \set{i,j}}^\top I_\Gamma \vec{w} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \\
& = \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} \begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix},
\end{align*}
where $w_k = (-1)^k \begin{vmatrix} \vec{1} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \setminus \set{\gamma_k}} \end{vmatrix}$ for $k = 1, \dots, m$.
To prove the last equality, note that
each element of $X_{J \setminus \set{i,j}}^\top I_\Gamma \vec{w}$ is equal to the same value:
\begin{align*}
\left ( X_{J \setminus \set{i,j}}^\top I_\Gamma \vec{w} \right )_l = &
\sum_{k=1}^{m} (-1)^k \vec{x}_l^\top \vec{e}_{\gamma_k} \begin{vmatrix} \vec{1} & X_{J \setminus \set{i,j}}^\top I_{\Gamma \setminus \set{\gamma_k}} \end{vmatrix} \\
= & \begin{vmatrix} 0 & \vec{x}_l^\top I_\Gamma \\ \vec{1} & X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} \\
= & \begin{vmatrix} -1 & \vec{0}^\top \\ \vec{1} & X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix} \\
= & - \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}.
\end{align*}
The simplified expression is then
\begin{equation*}
\vec{h}(J \setminus \set{i} | \Gamma)\cdot \vec{e}_\eta - \vec{h}(J \setminus \set{j} | \Gamma) \cdot \vec{e}_\eta =
\frac{ \begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} }{
\begin{vmatrix} \vec{1} & X_{J \setminus \set{i}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix}} .
\end{equation*}
Therefore, the sign of $\begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}$ is
\begin{equation*}
\sign \left ( \begin{vmatrix} \vec{1} & X_J^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix} \right ) =
\sign \left ( \begin{vmatrix} X_{J \setminus \set{i}}^\top I_{\Gamma \cup \set{\eta}} \end{vmatrix}
\begin{vmatrix} X_{J \setminus \set{i,j}}^\top I_\Gamma \end{vmatrix}
\begin{vmatrix} \vec{1} & X_{J \setminus \set{j}}^\top I_\Gamma \end{vmatrix} \right ).
\end{equation*}
\end{proof}
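In implementation terms, the lemma reduces the sign of the new denominator to the sign of a product of three determinants involving only the two parents and the earlier intersection, so no new determinant needs to be evaluated.
A minimal sketch, with hypothetical variable names for the three factors of the lemma:
\begin{verbatim}
import numpy as np

# det_parent_i  = | X_{J\{i}}^T I_{Gamma+eta} |
# det_previous  = | X_{J\{i,j}}^T I_Gamma |
# den_parent_j  = | 1  X_{J\{j}}^T I_Gamma |
def sign_new_denominator(det_parent_i, det_previous, den_parent_j):
    return np.sign(det_parent_i * det_previous * den_parent_j)
\end{verbatim}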
\subsection{Adjacent intersections}
Assuming that a set of intersections for a given collection $\Gamma$ of hyperplanes forms a convex object,
@@ -39,13 +126,19 @@ Therefore, any other child of either parent, called a sibling, will be adjacent
It remains to determine whether any other intersections of this generation are adjacent.
\begin{lemma}
Call one parent of a child node its father and the other parent its mother.
Two children of a given generation of intersections are adjacent under one of three circumstances:
\begin{description}
\item[siblings] the children share exactly one parent;
\item[cousins] the two fathers are adjacent, as are the two mothers, i.e.\ the four parents form a 4-cycle;
\item[second cousins] the two fathers are adjacent while the two mothers are both adjacent to a fifth intersection, i.e.\ the four parents are part of a 5-cycle, and all four parents share all but exactly three vertices.
\end{description}
\end{lemma}
\begin{proof}
Consider four nodes of $H(\Gamma)$, indexed by $J_0$, $J_1$, $J_2$ and $J_3$.
Suppose $J_0$ and $J_1$ are adjacent, as are $J_2$ and $J_3$.
These intersections may be configured as seen in Figure \ref{fig:config four}, where there are at least $n$ edges between nodes $J_1$ and $J_2$ that do not pass through $J_3$ and at least $m$ edges between $J_0$ and $J_3$ that do not pass through $J_2$.
Without loss of generality $n \leq m$.
\begin{figure}
\centering
@@ -122,11 +215,21 @@ This means that either $J_3$ contains neither $J_0 \setminus J_1$ nor $J_1 \setm
The former case has cardinality identical to when $|J_0 \cap J_3| = |J_1| - (n+2)$, while the latter case has cardinality identical to when it equals $|J_1| - n$.
This latter case therefore allows two children to be adjacent for $n=1$, which will be referred to as second cousins.
The configuration with $n=1$ and $m=2$ necessarily places the four nodes into a 5-cycle.
The arrangements of vertex sets that adhere to this condition are limited.
If the five vertex sets share all but four or more vertices then all five are adjacent.
The same is true if they share all but one.
The two cases are then when the five sets share all but two or three vertices.
\newcommand{\PentaConfig}{
\foreach \x in {0,1,...,4} {
\coordinate (P\x) at (198-\x*360/5:1);
\filldraw (P\x) circle (1pt);}
\draw[thick,black] (P1) \foreach \x in {2,3,4,0} {-- (P\x)} -- cycle;
\foreach \x in {1,2,3,4,0} {
\tikzmath{\y = int(mod(\x+1,5));}
\coordinate (E\x) at ($ (P\x) !.5! (P\y) $);}
\draw[thick,dotted] (E0) \foreach \x in {1,2,...,4} { -- (E\x)} -- cycle;
}
\begin{figure}
\centering
@@ -137,20 +240,68 @@ This latter case therefore allows two children to be adjacent for $n=1$, which w
\node[above] at (P2) {$jk$};
\node[right] at (P3) {$kl$};
\node[below] at (P4) {$lm$};
\node[left] at (P0) {$mi$}; &
\PentaConfig
\foreach \x in {0,1,...,4} {
\tikzmath{\y = int(mod(\x+2,5));}
\draw[thick,dotted] (E\x) -- (E\y);}
\node[above] at (P1) {$ijk$};
\node[above] at (P2) {$jkl$};
\node[right] at (P3) {$klm$};
\node[below] at (P4) {$ilm$};
\node[left] at (P0) {$ijm$}; \\};
\end{tikzpicture}
\caption{The two possible cases of second cousins.
Left: only siblings are adjacent.
Right: all children are adjacent.}
\label{fig:second cousins}
\end{figure}
Figure \ref{fig:second cousins} shows these two scenarios.
To construct them, one may begin at any node with two vertices.
Travel along one edge to the second node by changing one of the vertices.
Travel along the other edge incident to the first node by changing a different vertex, so that this third node is not adjacent to the second.
For example, in the case where the vertex sets share all but two vertices, one can start with the set $\set{i,j}$.
The adjacent sets are then $\set{j,k}$ and $\set{m,i}$.
The final two nodes are adjacent to one another as well as to the second and third nodes, but not to the first.
This uniquely determines the sets.
As shown in Figure \ref{fig:second cousins}, the children in the case where all but two vertices are shared are adjacent only if they are siblings.
Take $J_0 = \set{i,j}$ and $J_3 = \set{l,m}$.
The set $J_3$ clearly does not contain $i$ or $k$, the vertices of $J_0 \setminus J_1$ and $J_1 \setminus J_0$, respectively.
The children $\set{k,l,m}$ and $\set{i,j,k}$ differ by two vertices and therefore are not adjacent.
Meanwhile, all children are adjacent when the vertex sets share all but three vertices.
Using $J_0 = \set{i,j,k}$ and $J_3 = \set{i,l,m}$, both $i$ and $l$ lie within $J_3$.
The children $\set{i,k,l,m}$ and $\set{i,j,k,l}$ differ by only one vertex and are adjacent.
By symmetry the same is true of the other children around this cycle.
\end{proof}
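The two configurations of Figure \ref{fig:second cousins} can also be checked mechanically.
The sketch below (hypothetical helper names; two equally sized vertex sets are taken to be adjacent when they differ by exactly one vertex, and each child is taken to be the union of its two parents' vertex sets) reproduces the claim that only siblings are adjacent in the first case while all children are adjacent in the second.
\begin{verbatim}
def adjacent(a, b):
    # equally sized vertex sets that differ by exactly one vertex
    return len(a) == len(b) and len(a ^ b) == 2

def children(cycle):
    # one child per edge of the 5-cycle: the union of the two parent sets
    return [cycle[k] | cycle[(k + 1) % len(cycle)] for k in range(len(cycle))]

case_all_but_two   = [set('ij'), set('jk'), set('kl'), set('lm'), set('mi')]
case_all_but_three = [set('ijk'), set('jkl'), set('klm'), set('ilm'), set('ijm')]

for cycle in (case_all_but_two, case_all_but_three):
    kids = children(cycle)
    print([adjacent(kids[a], kids[b])
           for a in range(5) for b in range(a + 1, 5)])
# first case: True only for the sibling pairs; second case: all True
\end{verbatim}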
As a corollary to this lemma it is impossible to determine $A(\Gamma \cup \set{\eta})$ from $A(\Gamma)$ alone.
This is because the case of second cousins requires knowledge of the shared vertices of the vertex sets of the parents.
Therefore, we cannot completely eliminate the comparison of vertex sets when computing intersections of the next generation.
For each intersection in $H(\Gamma)$, compare its sign in the $\vec{e}_\eta$ direction with that of each adjacent intersection, as identified in $A(\Gamma)$.
If the signs differ, calculate an intersection in $H(\Gamma \cup \set{\eta})$.
Add a column to $B(\Gamma \cup \set{\eta})$, the adjacency matrix between $H(\Gamma)$ and $H(\Gamma \cup \set{\eta})$.
There are two non-zero entries in this column, one for each parent.
The matrix $B(\Gamma \cup \set{\eta})$ must be formed before forming $A(\Gamma \cup \set{\eta})$.
The matrix $A(\Gamma \cup \set{\eta})$ must be formed in three parts, one part each for siblings, cousins and second cousins.
The sibling part is formed easily.
Each column of $B(\Gamma \cup \set{\eta})$ is associated with a child in $H(\Gamma \cup \set{\eta})$.
For each of these children the associated row of $A(\Gamma \cup \set{\eta})$ is the `xor' combination of the two rows of $B(\Gamma \cup \set{\eta})$ associated with the child's parents.
Each parent has a limited number of children (nb: prove exactly how many?) and this `xor' combination need only be done for twice this number, with all other entries being zero.
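A hedged sketch of this bookkeeping is given below, with placeholder names throughout.
It builds one column of $B(\Gamma \cup \set{\eta})$ per sign change across an edge of $A(\Gamma)$ and fills the sibling block of $A(\Gamma \cup \set{\eta})$ by the `xor' of the two parent rows of $B(\Gamma \cup \set{\eta})$; the cousin and second cousin parts still require the vertex set comparisons described above.
\begin{verbatim}
import numpy as np

def sibling_block(A_prev, signs):
    # A_prev : (p, p) boolean adjacency matrix A(Gamma) of H(Gamma)
    # signs  : length-p array of signs along e_eta of the intersections in H(Gamma)
    p = len(signs)
    parents = [(i, j) for i in range(p) for j in range(i + 1, p)
               if A_prev[i, j] and signs[i] != signs[j]]   # one child per sign change
    c = len(parents)
    B = np.zeros((p, c), dtype=bool)                       # parent/child adjacency B
    for col, (i, j) in enumerate(parents):
        B[i, col] = B[j, col] = True                       # two non-zero entries per column
    A_sib = np.zeros((c, c), dtype=bool)
    for col, (i, j) in enumerate(parents):
        A_sib[col] = B[i] ^ B[j]                           # xor of the two parent rows
    return B, A_sib
\end{verbatim}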
%---Efficiency comparison---%
Based on Proposition (nb: check ref on this) the number of intersections can grow quadratically with the generations.
For $n$--simplices there are up to $n-1$ generations, meaning at the last generation there can, in theory, be up to $(n/2)^{2(n-1)}$ intersections.
If using a brute force method to determine adjacency between intersections, this will require $(n/2)^{2(n-1)} \cdot (n+1)$ comparisons.
%---end---%
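For a rough sense of scale, the bounds above can be tabulated for small $n$ (purely illustrative):
\begin{verbatim}
for n in (3, 4, 5, 6):
    intersections = (n / 2) ** (2 * (n - 1))   # worst-case size of the last generation
    comparisons = intersections * (n + 1)      # brute force adjacency comparisons
    print(n, round(intersections), round(comparisons))
\end{verbatim}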
Take the graph of the set of intersections $\set{\vec{h}(J | \Gamma)}_J$ with edges defined by its adjacency matrix $A$.
Place the nodes of this graph into two groups, one with non-negative values along the $\vec{e}_\gamma$ direction and one with negative values.
Remove the intragroup edges.
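A minimal sketch of this bipartition step, with placeholder names and the adjacency matrix stored as a boolean array:
\begin{verbatim}
import numpy as np

def remove_intragroup_edges(A, values):
    # A      : (p, p) boolean adjacency matrix of the intersections h(J | Gamma)
    # values : their components along the e_gamma direction
    nonneg = np.asarray(values) >= 0                 # the non-negative group
    crossing = np.logical_xor.outer(nonneg, nonneg)  # endpoints lie in different groups
    return A & crossing                              # intragroup edges removed
\end{verbatim}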
@@ -8,7 +8,7 @@
\usepackage{subcaption}
\usepackage{float}
\usepackage{tikz}
\usetikzlibrary{decorations.pathreplacing,positioning,calc,intersections,3d,shapes.geometric,math}
\newcommand{\dxdy}[2]{\frac{d #1}{d #2}}
\newcommand{\dxdyk}[3]{\frac{d^{#3} #1}{d {#2}^{#3}}}