
\begin{comment}

This was an overstatement.


We have clarified that we are concerned with affine transformations for triangles with reasonable aspect ratios and angles.

\end{comment}

• In the last sentence of Section 3.2 it is said that the redundant test is useful for the robustness of the algorithm. How can this be if all decisions are consistent? This needs some explanation or at least a reference to the respective results given in the paper.

\begin{comment}

The pairing of intersections described above now ensures this consistency.


While there is still redundancy, the two tests now necessarily agree with one another, as explained in Section 5.2 of the new version of the manuscript.

One can remove the redundant test with an optimized implementation.
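To illustrate the kind of consistency meant here, consider a generic sketch (illustrative Python using standard orientation predicates, not the manuscript's MATLAB code): when both of a pair of crossing decisions are derived from the same four signed-area values, the redundant second evaluation can never disagree with the first, even under floating-point rounding.

```python
# Generic illustration only -- not the manuscript's implementation.
# Both crossing decisions below are derived from the same four
# orientation (signed-area) values, so a "redundant" repeat of the
# test can never disagree with the first evaluation.

def orient(p, q, r):
    """Signed area of triangle pqr (sign: side of line pq on which r lies)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(a, b, c, d):
    """Proper-crossing test for segments ab and cd."""
    o1, o2 = orient(a, b, c), orient(a, b, d)
    o3, o4 = orient(c, d, a), orient(c, d, b)
    return ((o1 > 0) != (o2 > 0)) and ((o3 > 0) != (o4 > 0))

a, b = (0.0, 0.0), (1.0, 1.0)
c, d = (0.0, 1.0), (1.0, 0.0)
# Swapping the roles of the two segments reuses exactly the same four
# orientation values, so the two decisions necessarily agree.
assert segments_cross(a, b, c, d) == segments_cross(c, d, a, b)
```

Because `segments_cross(a, b, c, d)` and `segments_cross(c, d, a, b)` evaluate exactly the same four `orient` values, the redundancy costs nothing in consistency.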

\end{comment}

• Lemma 1 and Corollary 1 are evident in the given form. Without the proof it is not clear that they are meant in regard to the decision of the algorithm. This should be made more precise.

...

...

Lemma 1 and Corollaries 1 and 2 have been combined and now refer more clearly to the decision of the algorithm.

• In the proof of Lemma 2, the equivalences seem to be derived under the assumption that d is neither equal to a nor to $\tilde{a}$.

\begin{comment}


We have removed Lemma 2 in favour of an exhaustive list of intersection errors.

However, to the point indicated, $d=a$ would be a degenerate case where the intersection is coincident with a vertex of $Y$.

Its equivalence to the case where $d \neq a$ is explained in Section 6.1.

\end{comment}

...

...

In addition, I would like to give some ideas on how to improve the readability of the paper.

1) The authors of PANG claim that their algorithm is numerically robust. They compare their approach with the classical polygon clipping algorithm. I think that it would be very helpful for a quick lead-in into the topic to add a few lines of explanation. What does PANG do differently from the classical approach, and why do the authors think that their algorithm is numerically stable? What is causing the issue in Fig. 1? Furthermore, in the introduction a number of algorithms for the intersection of polygons are cited. How do these algorithms compare in speed, consistency, robustness, etc.? It is difficult to classify the proposed algorithm without any comparison to alternative approaches. In particular, it is not clear if the proposed algorithm can be generalized to higher dimensions.

\begin{comment}


We have added a paragraph after Theorem 1 to give a brief overview of the algorithm with reference to algorithms that use similar steps.

It is difficult to compare the robustness of these algorithms as each is likely to fail on a highly specific subset of triangle-triangle intersections.

For example, PANG failed when applied to nearly coincident triangles, whereas other algorithms may fail for triangles with disparate aspect ratios.

We have, however, added a comparison of computation times and accuracy as Section 8.1.

\end{comment}

2) If we just want to fix the issue of PANG discussed in the introduction, we could simply add some tolerances within the PointsOfXinY function. Then also vertices of X that lie slightly outside of Y might be detected as lying inside of Y, but these vertices do not change the overall shape and will be removed within the SortAndRemoveDoubles function if redundant. What is the advantage of the proposed approach? How does the count of floating-point operations compare to the original approach? Could we use the values of $q_0$ to find and reduce clustered vertices without using the SortAndRemoveDoubles function?
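For concreteness, the tolerance modification the reviewer sketches could look like the following (illustrative Python only; `point_in_triangle`, `orient`, and the tolerance `eps` are hypothetical stand-ins and do not reproduce the paper's PointsOfXinY):

```python
# Hypothetical sketch of the reviewer's suggestion; not the paper's code.
# A point slightly outside the triangle is still accepted when its
# signed areas are within a tolerance eps of zero.

def orient(p, q, r):
    """Signed area of triangle pqr."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def point_in_triangle(p, tri, eps=1e-12):
    """True if p is inside the counter-clockwise triangle tri,
    or within signed-area slack eps of its boundary."""
    a, b, c = tri
    return (orient(a, b, p) >= -eps and
            orient(b, c, p) >= -eps and
            orient(c, a, p) >= -eps)

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
assert point_in_triangle((0.25, 0.25), tri)    # clearly inside
assert point_in_triangle((0.5, -1e-15), tri)   # just outside: admitted by eps
assert not point_in_triangle((2.0, 2.0), tri)  # clearly outside
```

As the reviewer notes, vertices admitted this way would not change the overall shape and could be merged afterwards; the open question is whether a single `eps` can be chosen robustly across aspect ratios.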

...

...

While adding a tolerance is intuitively the correct thing to do, it has not been

3) Section 2 and the Subsections 3.4, 3.5 could be combined and summarized in one section. The detailed description of the reference-free parametrization and its FLOP count seems to add little to the overall discussion. The Subsections 3.1-3.3 describe the actual intersection algorithm. It might be beneficial to connect the descriptions with the respective instructions in Program 1 and then summarize the algorithm as done in Section 4.

\begin{comment}


We have combined and pared down these sections.

Certain non-trivial details of the reference-free parametrization, such as the pairing of its intersection calculations (a new addition in this revision), warrant its inclusion.


We have added comments to Program 1 to connect the description of the algorithm with the code.

\end{comment}

Finally, in some places the language is very figurative. Some passages should be rephrased to make the corresponding statements more precise.

p.3: "To PointsOfXinY ... . To EdgeIntersections ..." - Functions are not aware of decisions.

\begin{comment}


We have changed the sentences to ``According to...'' to make clear that the functions are not reacting to decisions but are making determinations about the geometry.

\end{comment}

p.3: "The goal of this paper is to write ..." - Paper has no goal or writes something.

\begin{comment}

...

...

More often than not, I had to draw my own figures to understand the proofs, because I could not make sense of the ones from the paper.

\begin{comment}

Thank you for these observations.

We have re-examined every figure and, with few exceptions, overhauled each to represent concrete examples.


Many of the figures were drawn to be abstract and apply to the broadest possible set of situations.

In fact, the proof of Lemma 3, now Lemma 4, was designed under the expectation that the reader would draw the described figures.

We note that this has not been well received.

Abstraction has been reduced to a minimum and all drawings made in Lemma 4 are now present in the paper, either in Figure 12 or Figure 20.

\end{comment}

Let us consider Figure 3 as an example. It is supposed to show the case

...

...

what they can do to make them clearer and more illustrative of the core of the article.

\begin{comment}


We have replaced Figure 7 with Figure 9, listing all intersection errors with specific examples.

We have removed Figure 5 due to the related lemma and corollaries being condensed.

Figure 10, now Figure 13, no longer shows triangles with two right angles each.


We have restated the proof attached to this figure to rely less heavily on a graphic description.

\end{comment}

For Figure 9, the authors should indicate the various configurations in

...

...

that all the geometries are legal (yet a triangle with six edges is certainly not).

\begin{comment}

Thank you for the excellent suggestion.

We have elected to use a table to connect the graphs and pairings.

This puts all the information in one compact place and shows how the four restrictions help to eliminate impossible pairings.


We have added the illegal graph configurations as Appendix B.

Where possible we have made the graphs look like triangles.

Obviously this is impossible for the illegal configurations.

\end{comment}

...

...

the "sheltered polygon" property.

\begin{comment}

We've added labels to the columns and rows of this figure.


Based on this comment we have added further discussion to Section 6.

This figure is intended to show that we have accounted for the errors, not the configurations.

Lemma 4 and Corollary 1 are intended to account for all configurations.

The respective statements should now make this clear.

...

...

would not have to make this kind of assumption.

\begin{comment}

We have made the suggested replacement of ``edge'' with ``graph edge'' where applicable.

We have removed the reference to planar embedding by making the straight-line assumption earlier in the text.


We have rewritten the definition of the triangle condition.

\end{comment}

The proof of Lemma 3 contains a bit too much of handwaving, to the point

...

...

of X are 'below' the reference line". It would be great if the proof could be straightened a bit.

\begin{comment}


Thank you; we have cleaned up the offending section of the proof.

We think that it is sufficiently rigorous after these revisions.

\end{comment}

As I understand it, Lemma 4 is the crux of the matter. Unfortunately,

...

...

I do not understand its very short proof. Perhaps I am missing something obvious.

\begin{comment}


We have entirely rewritten the proof of this lemma, now numbered 3.

We believe it is much clearer.

We have also restructured Section 6 to bring focus to Lemma 4 and Corollary 1.

\end{comment}

...

...

The following are minor remarks.

1. You wrote floating point arithmetic. It should be replaced by floating-point arithmetic.

\begin{comment}


We've fixed this.

\end{comment}

2. You counted floating-point operations by flops for the proposed method. It is useful in

...

...

to state that flops indicates floating-point operations, because people in high performance computing think of floating-point operations per second.

\begin{comment}


The FLOP count compares only the expected computational time between the change of coordinates and reference-free parametrization.

With proper implementation and efficient system solves, the difference may be negligible.


Indeed, the implementation of the reference-free parametrization is faster in the new example we added in Section 8.1 compared with this version of the change of coordinates.


We have defined the term FLOP in the relevant section.

\end{comment}

3. Matlab may be replaced by MATLAB.

...

...

Also fixed.

4. Could you provide a figure for Lemma 2?

\begin{comment}


We've rewritten this section to be more concise and relevant.

We have, however, added a figure that might help; see Figure 8.