
fix: Handle degenerate configurations and improve test robustness#116

Merged
acgetchell merged 8 commits into main from fix/tds-insertion on Oct 31, 2025

Conversation

@acgetchell (Owner)

This commit addresses several issues related to degenerate point configurations and improves the robustness of property-based tests:

Duplicate Vertex Handling

  • Add duplicate coordinate detection to Tds::new() constructor
  • Silently skip vertices with identical coordinates during batch construction
  • Document behavior difference between new() (permissive) and add() (strict)
  • Matches expected behavior for batch initialization from potentially unfiltered data
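
The batch filter can be sketched with a std-only example. This is illustrative, not the crate's actual `Tds::new()` code; `dedup_coordinates` and the bare `[f64; D]` coordinate representation are assumptions for the sketch.

```rust
// Hypothetical sketch of the duplicate-coordinate filter applied in Tds::new().
// `dedup_coordinates` is an illustrative name, not the crate's actual API.
use std::collections::HashSet;

/// Keep only the first vertex for each distinct coordinate tuple. Coordinates
/// are keyed by their f64 bit patterns, so the comparison is exact (no epsilon).
fn dedup_coordinates<const D: usize>(vertices: &[[f64; D]]) -> Vec<[f64; D]> {
    let mut seen: HashSet<[u64; D]> = HashSet::new();
    let mut unique = Vec::with_capacity(vertices.len());
    for v in vertices {
        let key: [u64; D] = v.map(f64::to_bits);
        if seen.insert(key) {
            unique.push(*v); // first occurrence wins; later duplicates are skipped
        }
    }
    unique
}

fn main() {
    let pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 0.0], [0.0, 1.0]];
    // new() would triangulate the filtered set; add() would instead error here.
    let unique = dedup_coordinates(&pts);
    assert_eq!(unique.len(), 3); // the repeated origin is silently dropped
}
```

This mirrors the permissive/strict split above: batch construction filters first, while single-vertex insertion treats a duplicate as an error.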

Property Test Improvements

  • Filter degenerate configurations in convex hull proptests (collinear/coplanar points with no boundary facets)
  • Use approx crate for floating-point comparisons in serialization tests
  • Add BoundaryAnalysis trait import where needed
  • Fix debug-only variable compilation with #[cfg(debug_assertions)]
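
A minimal sketch of the kind of degeneracy predicate such proptests can filter on (hand-rolled, 2D only; the predicate name and tolerance are assumptions, and the PR's actual helpers may differ):

```rust
// Sketch: reject 2D point sets whose points are all (near-)collinear, since
// they yield no boundary facets for a convex hull property test.
fn is_degenerate_2d(points: &[[f64; 2]]) -> bool {
    if points.len() < 3 {
        return true; // too few points to span 2D
    }
    let [ax, ay] = points[0];
    let [bx, by] = points[1];
    points[2..].iter().all(|&[cx, cy]| {
        // Signed twice-area of triangle (a, b, c); ~0 means collinear.
        let cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        cross.abs() < 1e-12
    })
}

fn main() {
    assert!(is_degenerate_2d(&[[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])); // collinear
    assert!(!is_degenerate_2d(&[[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])); // triangle
}
```

In a proptest this would typically sit behind `prop_assume!(!is_degenerate_2d(&pts))`, so degenerate draws are discarded rather than reported as failures. A production filter would also pick a non-coincident base pair before testing the rest.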

Serialization Test Fixes

  • Address JSON floating-point precision limitations (not a bug)
  • Use relative_eq! from approx crate instead of exact equality
  • JSON typically preserves 15-17 significant digits for f64
  • Minor coordinate differences after roundtrip are expected and acceptable
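
The effect can be demonstrated with a std-only sketch; `relative_eq` here is a hand-rolled stand-in for the approx crate's `relative_eq!` macro, and the 15-digit formatting simulates a lossy serializer.

```rust
// Why exact equality fails after a lossy float roundtrip, and why a relative
// tolerance passes. `relative_eq` stands in for approx's relative_eq! macro.
fn relative_eq(a: f64, b: f64, max_rel: f64) -> bool {
    if a == b {
        return true; // also covers both values being zero
    }
    (a - b).abs() <= max_rel * a.abs().max(b.abs())
}

fn main() {
    let x = 0.1_f64 + 0.2_f64; // 0.30000000000000004
    // A serializer emitting only 15 fractional digits drops the last ulps:
    let roundtripped: f64 = format!("{x:.15}").parse().unwrap();
    assert_ne!(x, roundtripped); // exact comparison fails
    assert!(relative_eq(x, roundtripped, 1e-14)); // relative comparison passes
}
```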

New Property Tests

  • Add comprehensive proptest suites for Bowyer-Watson, convex hull, cells, facets, geometry utilities, quality metrics, and serialization
  • Include proptest regression files to track minimal failing cases
  • Add serialization vertex preservation integration test

All tests now pass successfully. Degenerate cases are filtered in property tests since they're covered by dedicated edge case tests.


coderabbitai bot commented Oct 30, 2025

Walkthrough

Adds topology-safety checks (facet deduplication and preventive filtering with rollback) to insertion flows, batch duplicate/UUID handling in TDS construction, a Vertex test constructor, many new proptest and integration tests with regression seeds, new Justfile recipes, a small cspell addition, and README/REFERENCES updates.

Changes

Cohort / File(s) Summary
Spell Check
cspell.json
Added "inrad" to the cspell word list.
Build tooling & Docs
justfile, docs/code_organization.md
Added many Justfile recipes (ci, lint*, perf*, profile*, compare-storage, setup, validate-*) and updated docs to list new test modules.
Insertion algorithm (trait + logic)
src/core/traits/insertion_algorithm.rs
Added InsertionError::DuplicateBoundaryFacets, public trait methods deduplicate_boundary_facet_info(...) and filter_boundary_facets_by_valid_facet_sharing(...); integrated deduplication/filtering into insertion phases with rollback/error paths and replaced bbox saturating helpers.
Bowyer–Watson tests
src/core/algorithms/robust_bowyer_watson.rs
Relaxed test assertion for post-insertion cell counts to require non-empty cell set (accepts increase/decrease/no-change as valid).
Triangulation construction & errors
src/core/triangulation_data_structure.rs
new() now deduplicates input coordinates, skips duplicates, enforces UUID collision checks (returns TriangulationConstructionError::DuplicateUuid), exposes DuplicateCoordinates, and recomputes construction state; preserves add() semantics.
Vertex helper
src/core/vertex.rs
Added pub const fn new_with_uuid(point, uuid, data) (doc-hidden test helper) to construct Vertex with explicit UUID.
Geometry: Point tests refactor
src/geometry/point.rs
Replaced many per-dimension tests with a macro-driven multi-dimensional test suite covering construction, conversions, errors, hashing, ordering, and edge cases.
Property-based tests: insertion & cells
tests/proptest_bowyer_watson.rs, tests/proptest_bowyer_watson.proptest-regressions, tests/proptest_cell.rs
Added proptest suites for Bowyer–Watson insertion properties and Cell invariants across D=2..5; added regression seeds.
Property-based tests: convex hull & facets
tests/proptest_convex_hull.rs, tests/proptest_convex_hull.proptest-regressions, tests/proptest_facet.rs
Added hull and facet proptest suites and regression seeds validating hull/facet properties across dimensions.
Property-based tests: geometry & quality
tests/proptest_geometry.rs, tests/proptest_quality.rs, tests/proptest_quality.proptest-regressions
Added proptest suites for geometric invariants and quality metrics (radius ratio, inradius, volume) across D=2..5 with regression seeds.
Property-based tests: serialization
tests/proptest_serialization.rs, tests/proptest_serialization.proptest-regressions
Added JSON roundtrip proptest tests asserting preservation of vertex/cell counts, validity, and neighbor relationships; added regression seeds.
Integration tests: serialization & TDS construction
tests/serialization_vertex_preservation.rs, tests/tds_basic_integration.rs
Added integration tests for vertex preservation through serialization (including duplicate handling) and Tds::new construction-state / duplicate-UUID behavior.
Proptest regression seeds
proptest-regressions/..., tests/*.proptest-regressions
Added multiple proptest regression seed files to replay known failure cases (insertion_algorithm, bowyer_watson, convex_hull, quality, serialization, etc.).
Repository metadata
README.md, REFERENCES.md
Added AI-assistance attribution and References entries.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant TDS as Triangulation
    participant IA as InsertionAlgorithm
    participant Cavity

    Client->>TDS: add(new_vertex)
    TDS->>IA: begin insertion (find bad cells)
    IA->>Cavity: find_bad_cells(new_vertex)
    Cavity-->>IA: boundary_facet_info[]

    rect rgb(230,245,255)
      note over IA: Phase 1 — deduplicate boundary facets
      IA->>IA: deduplicate_boundary_facet_info(boundary_infos)
    end

    rect rgb(245,230,255)
      note over IA: Phase 2 — preventive filtering & commit
      IA->>IA: filter_boundary_facets_by_valid_facet_sharing(..., inserted_vk)
      alt facets remain
        IA->>TDS: create_cells_from_deduped_facets(...)
        TDS-->>IA: success
        IA->>TDS: commit vertex & cells
        TDS-->>Client: Ok
      else no facets remain
        IA->>TDS: rollback_created_cells_and_vertex()
        TDS-->>Client: Err (Duplicate/Filtered facets)
      end
    end
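
The Phase 1 deduplication step in the diagram can be sketched as follows; the `BoundaryFacetInfo` shape and field names here are illustrative assumptions, not the trait's actual types.

```rust
use std::collections::HashSet;

// Illustrative stand-in for the boundary-facet bookkeeping; the real trait
// method deduplicate_boundary_facet_info(...) operates on the crate's types.
#[derive(Clone, Debug, PartialEq)]
struct BoundaryFacetInfo {
    facet_key: u64, // canonical key derived from the facet's vertex set
    cell: usize,
    facet_index: usize,
}

/// Keep the first entry seen for each facet key; duplicates would otherwise
/// produce two new cells sharing the same boundary facet.
fn deduplicate_boundary_facet_info(infos: Vec<BoundaryFacetInfo>) -> Vec<BoundaryFacetInfo> {
    let mut seen = HashSet::new();
    infos
        .into_iter()
        .filter(|info| seen.insert(info.facet_key)) // false for repeated keys
        .collect()
}

fn main() {
    let infos = vec![
        BoundaryFacetInfo { facet_key: 1, cell: 0, facet_index: 0 },
        BoundaryFacetInfo { facet_key: 1, cell: 2, facet_index: 1 }, // duplicate key
        BoundaryFacetInfo { facet_key: 2, cell: 3, facet_index: 0 },
    ];
    assert_eq!(deduplicate_boundary_facet_info(infos).len(), 2);
}
```

Phase 2's filtering then drops any surviving facets that would still be shared invalidly, rolling back if nothing remains.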

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Areas to focus on:

  • Deduplication and preventive facet-sharing logic and correctness in src/core/traits/insertion_algorithm.rs.
  • Rollback atomicity and cleanup across insertion flows and error paths.
  • Batch new() deduplication, UUID Entry API usage, and mapping integrity in src/core/triangulation_data_structure.rs.
  • Macro-generated proptest suites and regression seeds for correct parameterization and degenerate-case handling.
  • Serialization roundtrip assumptions in the new integration tests.

Poem

🐇 I hopped through facets under moonlit code tonight,

deduped the edges and kept each UUID tight.
Regression seeds tucked, proptests ready to run,
justfile and checks hum till the build is done.
A rabbit’s small cheer — tests green in the light.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Title Check ✅ Passed The pull request title "fix: Handle degenerate configurations and improve test robustness" directly aligns with the primary objectives of this changeset. The raw summary indicates substantial changes across three main areas: (1) duplicate vertex handling in Tds::new() and related deduplication logic in boundary facet processing, (2) multiple property-based test improvements and new test suites with filtering for degenerate cases, and (3) serialization precision fixes. The title captures both key themes—handling degenerate configurations and improving test robustness—with clear, concise language. It does not use vague terminology or misleading descriptions, and a developer scanning the history would understand that this PR addresses edge cases and test reliability.
Description Check ✅ Passed The pull request description is well-structured and directly related to the changeset. It provides specific details across four key areas: duplicate vertex handling in batch construction, property test improvements with filtering and better assertions, serialization test fixes addressing floating-point precision, and comprehensive new property test suites. Each section explains both what was changed and the rationale behind it. The description is not vague or generic; it clearly conveys the nature and scope of the work performed and matches the detailed changes reflected in the raw summary.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

📜 Recent review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a419af and 11a42f4.

📒 Files selected for processing (1)
  • src/core/traits/insertion_algorithm.rs (23 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (5)
src/core/cell.rs (5)
  • neighbors (639-643)
  • new (476-499)
  • map (411-411)
  • vertices (672-674)
  • number_of_vertices (757-759)
src/core/facet.rs (13)
  • tds (406-408)
  • facet_index (307-309)
  • facet_index (399-401)
  • new (254-259)
  • new (427-451)
  • new (739-756)
  • new (828-836)
  • key (576-597)
  • facet_key_from_vertices (928-941)
  • cell_key (282-284)
  • cell_key (392-394)
  • vertices (493-514)
  • facet_vertices (1085-1109)
src/core/triangulation_data_structure.rs (5)
  • new (2287-2349)
  • vertices (604-606)
  • number_of_cells (1020-1022)
  • number_of_vertices (865-867)
  • empty (2169-2178)
src/core/collections.rs (4)
  • fast_hash_set_with_capacity (806-808)
  • fast_hash_set_with_capacity (902-902)
  • fast_hash_map_with_capacity (772-774)
  • fast_hash_map_with_capacity (899-899)
src/core/boundary.rs (1)
  • boundary_facets (76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: Codacy Security Scan
  • GitHub Check: Code Coverage
  • GitHub Check: performance-regression
🔇 Additional comments (1)
src/core/traits/insertion_algorithm.rs (1)

301-337: Restore saturating bbox math for integer coordinate types.

bbox_sub/bbox_add now perform raw subtraction/addition. For integer CoordinateScalar (which we still support—calculate_margin explicitly handles the integer branch), expanding the bounding box in is_vertex_likely_exterior will panic in debug builds as soon as min_coords[i] - margin underflows (e.g. a 2D triangulation with i32 coordinates near i32::MIN). The previous saturating helpers prevented that. We need to reinstate saturating behaviour (or otherwise guard integer paths) before this ships, or any integer-based triangulation regresses into debug-time panics.

-use num_traits::NumCast;
-use num_traits::{One, Zero, cast};
+use num_traits::NumCast;
+use num_traits::{One, Zero, Saturating, cast};

-#[inline]
-fn bbox_sub<T>(a: T, b: T) -> T
-where
-    T: CoordinateScalar + Sub<Output = T>,
-{
-    // Plain subtraction; floats naturally produce -infinity on underflow
-    a - b
-}
+#[inline]
+fn bbox_sub<T>(a: T, b: T) -> T
+where
+    T: CoordinateScalar + Sub<Output = T> + Saturating + Copy,
+{
+    a.saturating_sub(&b)
+}

-#[inline]
-fn bbox_add<T>(a: T, b: T) -> T
-where
-    T: CoordinateScalar + Add<Output = T>,
-{
-    // Plain addition; floats naturally produce +infinity on overflow
-    a + b
-}
+#[inline]
+fn bbox_add<T>(a: T, b: T) -> T
+where
+    T: CoordinateScalar + Add<Output = T> + Saturating + Copy,
+{
+    a.saturating_add(&b)
+}

Also applies to: 1202-1211
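
The failure mode is easy to reproduce with std's built-in saturating ops (num_traits' `Saturating` generalizes the same behavior to a generic `T`):

```rust
// Demonstrates the underflow the review flags: expanding a bounding box below
// i32::MIN overflows with plain `-` (a debug-build panic), while saturating
// arithmetic clamps to the type's minimum instead.
fn main() {
    let min_coord: i32 = i32::MIN + 10;
    let margin: i32 = 100;
    // Plain `min_coord - margin` would overflow; checked_sub makes that visible:
    assert_eq!(min_coord.checked_sub(margin), None);
    // Saturating subtraction clamps to i32::MIN instead of panicking:
    assert_eq!(min_coord.saturating_sub(margin), i32::MIN);
}
```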

⛔ Skipped due to learnings
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.


@coderabbitai bot added labels documentation (Improvements or additions to documentation) and rust (Pull requests that update rust code) on Oct 30, 2025

codacy-production bot commented Oct 30, 2025

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation Diff coverage
+0.09% (target: -1.00%) 64.38%
Coverage variation details
Coverable lines Covered lines Coverage
Common ancestor commit (1f070dd) 4228 2895 68.47%
Head commit (11a42f4) 4335 (+107) 2972 (+77) 68.56% (+0.09%)

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details
Coverable lines Covered lines Diff coverage
Pull request (#116) 160 103 64.38%

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%



codecov bot commented Oct 30, 2025

Codecov Report

❌ Patch coverage is 64.37500% with 57 lines in your changes missing coverage. Please review.
✅ Project coverage is 68.58%. Comparing base (1f070dd) to head (11a42f4).
⚠️ Report is 1 commits behind head on main.

Files with missing lines Patch % Lines
src/core/traits/insertion_algorithm.rs 62.67% 53 Missing ⚠️
src/core/triangulation_data_structure.rs 76.47% 4 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #116      +/-   ##
==========================================
+ Coverage   68.50%   68.58%   +0.08%     
==========================================
  Files          19       19              
  Lines        4226     4333     +107     
==========================================
+ Hits         2895     2972      +77     
- Misses       1331     1361      +30     
Flag Coverage Δ
unittests 68.58% <64.37%> (+0.08%) ⬆️


@coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/core/triangulation_data_structure.rs (1)

2316-2319: Prevent duplicate-UUID corruption during batch insert

Directly inserting into vertices then doing uuid_to_vertex_key.insert(uuid, key) can silently overwrite the mapping when the input contains duplicate UUIDs, leaving two stored vertices with one UUID mapping. Use the Entry API (or insert_vertex_with_mapping) to enforce UUID uniqueness.

-            let key = tds.vertices.insert(*vertex);
-            let uuid = vertex.uuid();
-            tds.uuid_to_vertex_key.insert(uuid, key);
+            let uuid = vertex.uuid();
+            match tds.uuid_to_vertex_key.entry(uuid) {
+                Entry::Occupied(_) => {
+                    return Err(TriangulationConstructionError::DuplicateUuid {
+                        entity: EntityKind::Vertex,
+                        uuid,
+                    });
+                }
+                Entry::Vacant(e) => {
+                    let key = tds.vertices.insert(*vertex);
+                    e.insert(key);
+                }
+            }

Alternatively, call the existing helper (note: bumps generation per insert):

-            let key = tds.vertices.insert(*vertex);
-            let uuid = vertex.uuid();
-            tds.uuid_to_vertex_key.insert(uuid, key);
+            tds.insert_vertex_with_mapping(*vertex)?;
src/core/traits/insertion_algorithm.rs (2)

1588-1655: Abort when preventive filtering removes every boundary facet before Phase 3

filter_boundary_facets_by_valid_facet_sharing can legitimately return an empty list (or at least fewer entries than the original boundary set). Right now we still march into Phase 2/3: we create zero new cells, then remove_bad_cells tears out the old cavity cells, leaving a hole and dangling neighbor pointers. That is a transactional correctness break—one failing insertion permanently corrupts the TDS.

Make the preventive filter a hard-stop: if it removes all boundary facets (or even just one of them), roll back the newly inserted vertex and bail out with an error instead of proceeding to removal.

         let boundary_infos =
             Self::filter_boundary_facets_by_valid_facet_sharing(tds, boundary_infos, inserted_vk);
 
+        if boundary_infos.is_empty() {
+            Self::rollback_created_cells_and_vertex(tds, &[], vertex, vertex_existed_before);
+            return Err(InsertionError::TriangulationState(
+                TriangulationValidationError::FailedToCreateCell {
+                    message: "Preventive facet filtering rejected every cavity facet; aborting to keep the triangulation intact."
+                        .to_string(),
+                },
+            ));
+        }

(and if you expect only a subset to be dropped, take the same rollback path whenever boundary_infos.len() < boundary_facet_handles.len().)


2493-2560: Rollback the vertex when create_cells_from_facet_handles fails

This helper advertises atomic semantics, but after ensure_vertex_in_tds we now return the "Failed to create any cells …" error without undoing the vertex insert. That leaves an orphan vertex in the TDS even though the caller sees an error—exactly the atomicity violation the helper is meant to prevent.

Before returning that error (and similarly if the preventive filter drops every facet) call rollback_created_cells_and_vertex so the vertex is removed when vertex_existed_before was false.

         let boundary_infos =
             Self::filter_boundary_facets_by_valid_facet_sharing(tds, boundary_infos, inserted_vk);
 
+        if boundary_infos.is_empty() {
+            Self::rollback_created_cells_and_vertex(tds, &created_cell_keys, vertex, vertex_existed_before);
+            return Err(InsertionError::TriangulationState(
+                TriangulationValidationError::FailedToCreateCell {
+                    message: "Preventive facet filtering removed every boundary facet; aborting cell creation to keep the TDS consistent."
+                        .to_string(),
+                },
+            ));
+        }
+
         // Phase 2: Create all cells from deduplicated boundary facets
         // Tracking created cell keys for potential rollback
         let mut created_cell_keys = Vec::with_capacity(boundary_infos.len());
@@
         let cells_created = created_cell_keys.len();
 
         // Validate that we created at least some cells
         if cells_created == 0 && !facet_handles.is_empty() {
+            Self::rollback_created_cells_and_vertex(tds, &created_cell_keys, vertex, vertex_existed_before);
             return Err(InsertionError::TriangulationState(
                 TriangulationValidationError::FailedToCreateCell {
                     message: format!(

That restores the documented transactional behavior.

🧹 Nitpick comments (5)
src/core/triangulation_data_structure.rs (3)

2181-2191: Clarify new() vs add() semantics and edge-case outcome

Great to document duplicate skipping. Please also note explicitly that if unique vertices after filtering are fewer than D+1, new() returns InsufficientVertices (while new(&[]) returns an empty TDS). This avoids surprises in batch construction.

Apply this doc tweak:

 /// # Duplicate Handling
 ///
 /// This method **silently skips** vertices with duplicate coordinates during construction.
@@
-/// **Note**: This behavior differs from [`Self::add()`], which returns an error for duplicates.
+/// **Note**: This behavior differs from [`Self::add()`], which returns an error for duplicates.
+/// If, after filtering, fewer than D+1 unique vertices remain, `new()` returns
+/// `TriangulationConstructionError::InsufficientVertices`. Passing an empty slice (`&[]`)
+/// is allowed and yields an empty triangulation.

2204-2209: List InsufficientVertices explicitly in Errors

The Errors section should call out the “insufficient unique vertices” case post-filtering.

 /// # Errors
 ///
-/// Returns a `TriangulationConstructionError` if:
+/// Returns a `TriangulationConstructionError` if:
+/// - There are insufficient unique vertices after duplicate filtering (fewer than D+1)
 /// - Triangulation computation fails during the Bowyer-Watson algorithm
 /// - Cell creation or validation fails
 /// - Neighbor assignment or duplicate cell removal fails

9148-9156: Avoid double is_valid() per iteration in test loop

is_valid() is expensive; the assertion calls it twice per iteration. Compute once and reuse.

-            assert!(
-                tds.is_valid().is_ok(),
-                "TDS should remain valid after insertion {}: {:?}",
-                i + 1,
-                tds.is_valid().err()
-            );
+            let valid = tds.is_valid();
+            assert!(
+                valid.is_ok(),
+                "TDS should remain valid after insertion {}: {:?}",
+                i + 1,
+                valid.err()
+            );

Also applies to: 9159-9163

tests/serialization_vertex_preservation.rs (2)

86-89: Consider using approximate comparison for consistency with other serialization tests.

This test uses exact coordinate equality via HashSet (which relies on Hash + Eq), but tests/proptest_serialization.rs uses approximate comparison with relative_eq! (epsilon=1e-14) to handle JSON floating-point precision limits. For consistency and to avoid potential flakiness with JSON roundtrips, consider using the points_approx_equal helper pattern from proptest_serialization.rs.

Example refactor:

-    // Verify coordinate preservation
-    assert_eq!(
-        tds_coords, deser_coords,
-        "Coordinates changed during serialization"
-    );
+    // Verify coordinate preservation (with tolerance for JSON precision)
+    assert_eq!(tds_coords.len(), deser_coords.len(), "Coordinate count mismatch");
+    for tds_coord in &tds_coords {
+        assert!(
+            deser_coords.iter().any(|dc| {
+                tds_coord.coords().iter().zip(dc.coords().iter())
+                    .all(|(a, b)| (a - b).abs() < 1e-14 * a.abs().max(1.0))
+            }),
+            "Coordinate {:?} not found in deserialized set (within tolerance)", tds_coord
+        );
+    }

202-205: Same concern: exact equality may be too strict for JSON roundtrips.

Similar to the earlier test, this assertion uses exact equality for coordinates after JSON serialization. Consider using approximate comparison consistent with the PR objectives and tests/proptest_serialization.rs approach.

-    assert_eq!(
-        original_coords, deserialized_coords,
-        "Vertex coordinates must be exactly preserved through serialization"
-    );
+    // Verify coordinate preservation with tolerance for JSON precision
+    assert_eq!(original_coords.len(), deserialized_coords.len(), "Coordinate count mismatch");
+    for orig_coord in &original_coords {
+        assert!(
+            deserialized_coords.iter().any(|dc| {
+                orig_coord.coords().iter().zip(dc.coords().iter())
+                    .all(|(a, b)| (a - b).abs() < 1e-14 * a.abs().max(1.0))
+            }),
+            "Coordinate {:?} not preserved through serialization (within tolerance)", orig_coord
+        );
+    }
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1f070dd and 4460c4f.

📒 Files selected for processing (18)
  • cspell.json (1 hunks)
  • docs/code_organization.md (1 hunks)
  • justfile (3 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (1 hunks)
  • src/core/traits/insertion_algorithm.rs (9 hunks)
  • src/core/triangulation_data_structure.rs (3 hunks)
  • tests/proptest_bowyer_watson.proptest-regressions (1 hunks)
  • tests/proptest_bowyer_watson.rs (1 hunks)
  • tests/proptest_cell.rs (1 hunks)
  • tests/proptest_convex_hull.proptest-regressions (1 hunks)
  • tests/proptest_convex_hull.rs (1 hunks)
  • tests/proptest_facet.rs (1 hunks)
  • tests/proptest_geometry.rs (1 hunks)
  • tests/proptest_quality.proptest-regressions (1 hunks)
  • tests/proptest_quality.rs (1 hunks)
  • tests/proptest_serialization.proptest-regressions (1 hunks)
  • tests/proptest_serialization.rs (1 hunks)
  • tests/serialization_vertex_preservation.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
PR: acgetchell/delaunay#60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.

Applied to files:

  • cspell.json
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • cspell.json
  • tests/proptest_quality.rs
  • tests/serialization_vertex_preservation.rs
  • tests/proptest_convex_hull.rs
  • docs/code_organization.md
  • tests/proptest_bowyer_watson.proptest-regressions
  • tests/proptest_serialization.rs
  • tests/proptest_cell.rs
  • tests/proptest_bowyer_watson.rs
  • src/core/algorithms/robust_bowyer_watson.rs
  • src/core/traits/insertion_algorithm.rs
  • tests/proptest_geometry.rs
  • tests/proptest_facet.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • tests/proptest_quality.rs
  • tests/serialization_vertex_preservation.rs
  • tests/proptest_convex_hull.rs
  • docs/code_organization.md
  • tests/proptest_serialization.rs
  • tests/proptest_cell.rs
  • tests/proptest_bowyer_watson.rs
  • src/core/algorithms/robust_bowyer_watson.rs
  • src/core/triangulation_data_structure.rs
  • tests/proptest_geometry.rs
  • tests/proptest_facet.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.

Applied to files:

  • tests/proptest_quality.rs
  • tests/proptest_convex_hull.rs
  • docs/code_organization.md
  • tests/proptest_serialization.rs
  • tests/proptest_bowyer_watson.rs
  • src/core/triangulation_data_structure.rs
  • tests/proptest_geometry.rs
  • tests/proptest_facet.rs
🧬 Code graph analysis (10)
tests/proptest_quality.rs (4)
src/geometry/util.rs (3)
  • core (1810-1810)
  • circumradius (836-842)
  • inradius (1207-1281)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
src/geometry/quality.rs (1)
  • radius_ratio (222-269)
src/core/triangulation_data_structure.rs (3)
  • new (2280-2326)
  • vertices (602-604)
  • cell_keys (620-622)
tests/serialization_vertex_preservation.rs (2)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
src/core/triangulation_data_structure.rs (2)
  • new (2280-2326)
  • vertices (602-604)
tests/proptest_convex_hull.rs (3)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
src/core/triangulation_data_structure.rs (3)
  • vertices (602-604)
  • new (2280-2326)
  • add (2468-2615)
src/geometry/algorithms/convex_hull.rs (3)
  • from_triangulation (650-692)
  • is_valid_for_tds (517-523)
  • facet_count (327-329)
tests/proptest_serialization.rs (2)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
src/core/triangulation_data_structure.rs (8)
  • vertices (602-604)
  • new (2280-2326)
  • number_of_vertices (863-865)
  • number_of_cells (1018-1020)
  • dim (953-959)
  • is_valid (3916-3949)
  • cells (537-539)
  • cells (4185-4193)
tests/proptest_cell.rs (3)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
src/core/triangulation_data_structure.rs (5)
  • vertices (602-604)
  • new (2280-2326)
  • cells (537-539)
  • cells (4185-4193)
  • is_valid (3916-3949)
src/core/cell.rs (1)
  • neighbors (639-643)
tests/proptest_bowyer_watson.rs (2)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
src/core/triangulation_data_structure.rs (6)
  • new (2280-2326)
  • vertices (602-604)
  • add (2468-2615)
  • is_valid (3916-3949)
  • cells (537-539)
  • cells (4185-4193)
src/core/triangulation_data_structure.rs (3)
src/core/facet.rs (2)
  • vertices (493-514)
  • tds (406-408)
src/core/cell.rs (2)
  • vertices (672-674)
  • is_valid (1098-1138)
src/core/vertex.rs (1)
  • is_valid (610-628)
src/core/traits/insertion_algorithm.rs (3)
src/core/facet.rs (9)
  • tds (406-408)
  • new (254-259)
  • new (427-451)
  • new (739-756)
  • new (828-836)
  • key (576-597)
  • facet_key_from_vertices (928-941)
  • cell_key (282-284)
  • cell_key (392-394)
src/core/triangulation_data_structure.rs (3)
  • new (2280-2326)
  • vertex_keys (611-613)
  • cell_keys (620-622)
src/core/cell.rs (2)
  • new (476-499)
  • map (411-411)
tests/proptest_geometry.rs (2)
src/geometry/util.rs (4)
  • circumcenter (731-802)
  • circumradius (836-842)
  • inradius (1207-1281)
  • squared_norm (569-574)
src/geometry/point.rs (1)
  • coords (100-102)
tests/proptest_facet.rs (4)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
tests/proptest_bowyer_watson.rs (1)
  • finite_coordinate (23-25)
src/core/triangulation_data_structure.rs (4)
  • vertices (602-604)
  • new (2280-2326)
  • cell_keys (620-622)
  • get_cell (630-632)
src/core/facet.rs (1)
  • facet_vertices (1085-1109)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (8)
src/core/algorithms/robust_bowyer_watson.rs (1)

2093-2101: Relaxed cell-count assertion aligns with topology filtering

Thanks for loosening the expectation to “> 0” — this keeps the guardrail that the triangulation isn’t empty while avoiding false negatives when hull extension replaces facets without strictly increasing cell cardinality.

tests/proptest_convex_hull.proptest-regressions (1)

1-10: Captured regression seeds look good

Appreciate checking in the new seeds; they’ll keep the convex-hull proptests reproducible as we iterate on degenerate filtering.

tests/proptest_facet.rs (1)

32-156: Facet property proptests cover the right invariants

The macro-driven suite gives us solid coverage across dimensions on vertex counts, facet validity, and facet/cardinality relationships. Nice work keeping the checks both strict and lightweight.

tests/proptest_geometry.rs (1)

32-183: Geometry proptests exercise the critical numerics

Really like how these tests pin down circumcenter/radius consistency, volume/inradius positivity, and basic norm laws with reasonable tolerances. That should catch a wide class of regressions in util math.

tests/proptest_cell.rs (1)

32-129: Cell invariants are well covered

The cell-focused proptests nicely enforce vertex uniqueness, neighbor bounds, and UUID consistency across dimensions — great addition to the structural test net.

tests/proptest_convex_hull.rs (1)

1-292: LGTM! Well-structured property-based tests for convex hull operations.

The test suite comprehensively validates convex hull properties across dimensions 2–5. Good use of prop_assume! to filter degenerate configurations (lines 50, 137, and 152) rather than asserting on them, which aligns with the PR objectives. The macro-based test generation effectively reduces duplication while maintaining dimension-specific parameters.
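The filtering idea is simple: compute a degeneracy predicate on the generated input and discard the case (via prop_assume! in the real tests) instead of failing on it. A minimal std-only sketch of such a predicate — the collinearity check is illustrative, not the crate's actual filter:

```rust
/// Sketch of a degeneracy predicate a property test can gate on:
/// three 2D points are (nearly) collinear when the cross product
/// of the spanning edges vanishes.
fn is_collinear_2d(a: [f64; 2], b: [f64; 2], c: [f64; 2], eps: f64) -> bool {
    let cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
    cross.abs() <= eps
}

fn main() {
    // Degenerate: all three points lie on the line y = x -> skip this case.
    assert!(is_collinear_2d([0.0, 0.0], [1.0, 1.0], [2.0, 2.0], 1e-12));
    // Proper triangle -> keep the input and run the property body.
    assert!(!is_collinear_2d([0.0, 0.0], [1.0, 0.0], [0.0, 1.0], 1e-12));
}
```

In a proptest body this would appear as `prop_assume!(!is_collinear_2d(a, b, c, eps))`, leaving degenerate inputs to the dedicated edge-case tests.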

tests/proptest_serialization.rs (1)

1-212: LGTM! Appropriate handling of JSON floating-point precision.

The test suite correctly uses approximate comparison (relative_eq! with epsilon=1e-14, lines 18-25) for coordinate preservation after JSON roundtrips, which aligns with the PR objectives acknowledging JSON's 15-17 significant digit precision for f64. The filtering of minimal/degenerate configurations (line 120) and comprehensive coverage of structure, validity, coordinates, and neighbor relationships provide robust serialization testing.
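To make the precision issue concrete, here is a std-only sketch of why tolerant comparison is needed after a lossy textual roundtrip; `relative_eq` below mimics the spirit of approx's `relative_eq!` (the real tests use the approx crate):

```rust
/// Minimal relative comparison, similar in spirit to `approx::relative_eq!`.
fn relative_eq(a: f64, b: f64, epsilon: f64) -> bool {
    if a == b {
        return true;
    }
    (a - b).abs() <= a.abs().max(b.abs()) * epsilon
}

fn main() {
    let original: f64 = 0.1 + 0.2; // 0.30000000000000004...
    // A writer emitting only ~15 significant digits drops the last bits.
    let truncated: f64 = format!("{original:.14e}").parse().unwrap();
    assert!(original != truncated); // exact equality fails after the roundtrip
    assert!(relative_eq(original, truncated, 1e-14)); // tolerant compare passes
    // Rust's default formatting prints enough digits to round-trip exactly.
    let full: f64 = original.to_string().parse().unwrap();
    assert!(full == original);
}
```

17 significant digits are always enough to round-trip an f64 exactly; anything less, as some JSON writers emit, makes exact equality checks flaky.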

tests/proptest_bowyer_watson.rs (1)

1-255: LGTM! Comprehensive Bowyer-Watson property tests.

The test suite thoroughly validates insertion algorithm properties across dimensions 2–5. The relaxed cell count assertion (lines 229-237) correctly acknowledges that cell count may vary due to topology-safety filtering and degenerate case handling, aligning with the PR objectives and related changes in src/core/algorithms/robust_bowyer_watson.rs. The strict vertex count check (line 221) and separate tracking of successful insertions (line 87) demonstrate appropriate invariant verification.

Implement atomic insertion semantics by adding rollback mechanisms when
cell creation fails during vertex insertion. This prevents TDS
corruption from partial insertions that leave dangling references.

Key changes:
- Add rollback after preventive facet filtering to handle edge cases
  where all boundary facets are unexpectedly removed
- Add rollback when cell creation yields zero cells despite valid
  facets, preventing inconsistent triangulation state
- Implement Point Hash + Eq for efficient duplicate detection in
  Tds::new() using HashSet instead of O(n²) sequential scan
- Add Point import and move HashSet to top-level imports
- Fix degenerate simplex generator in proptest_quality to create
  full-dimensional thin simplices using safe_usize_to_scalar
- Add proptest regression file for insertion_algorithm edge cases
- Improve serialization test coverage for vertex preservation

The rollback logic ensures failed insertions don't corrupt the
triangulation, maintaining the invariant that operations either fully
succeed or leave the TDS unchanged. Property tests now properly
exercise quality metric comparisons between regular and degenerate
simplices.

Fixes edge cases where invalid facet sharing or degenerate geometry
would previously cause TDS invariant violations.
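The all-or-nothing contract described above can be sketched in a few lines; the identifiers here are illustrative stand-ins, not the crate's TDS API:

```rust
use std::collections::HashMap;

/// Hedged sketch of atomic insertion: stage new cells, validate, and roll
/// back on failure so the structure is either fully updated or untouched.
/// (Assumes the new keys are fresh; the real TDS tracks created keys.)
fn insert_cells_atomically<F>(
    cells: &mut HashMap<u32, Vec<u32>>, // cell id -> vertex ids (TDS stand-in)
    new_cells: Vec<(u32, Vec<u32>)>,
    validate: F,
) -> Result<(), &'static str>
where
    F: Fn(&HashMap<u32, Vec<u32>>) -> bool,
{
    let mut inserted = Vec::with_capacity(new_cells.len());
    for (key, cell) in new_cells {
        cells.insert(key, cell);
        inserted.push(key);
    }
    if validate(cells) {
        Ok(())
    } else {
        // Rollback: remove everything we just added, leaving `cells` unchanged.
        for key in inserted {
            cells.remove(&key);
        }
        Err("validation failed; insertion rolled back")
    }
}

fn main() {
    let mut cells = HashMap::from([(0u32, vec![1u32, 2, 3])]);
    // A validator that rejects everything forces the rollback path.
    let result = insert_cells_atomically(&mut cells, vec![(1, vec![2, 3, 4])], |_| false);
    assert!(result.is_err());
    assert_eq!(cells.len(), 1); // the map is exactly as it was before the call
}
```

The PR applies this pattern at two failure points: after preventive facet filtering removes all boundary facets, and when cell creation yields zero cells despite valid facets.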
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/proptest_quality.rs (1)

148-158: Past review concern has been addressed.

The degenerate simplex generator now creates a thin but full-dimensional simplex as intended. The construction starts with the origin and adds D vertices stretched along the first axis with unique small offsets in orthogonal directions, ensuring the resulting simplex is not collapsed to a lower dimension. The if let guards on lines 162-183 will now actually exercise the quality comparison assertion.

🧹 Nitpick comments (2)
src/core/triangulation_data_structure.rs (2)

2297-2305: construction_state should reflect unique vertex count (post‑dedup)

State is set from the raw input length before filtering duplicates. Recalculate after the dedup loop for accuracy.

             generation: Arc::new(AtomicU64::new(0)),
         };
@@
         for vertex in vertices {
             ...
         }
+
+        // Recompute construction_state based on unique vertices after dedup
+        let uniq = tds.vertices.len();
+        tds.construction_state = if uniq == 0 {
+            TriangulationConstructionState::Incomplete(0)
+        } else if uniq < D + 1 {
+            TriangulationConstructionState::Incomplete(uniq)
+        } else {
+            TriangulationConstructionState::Constructed
+        };

2312-2327: Duplicate UUIDs with duplicate coordinates are skipped silently — clarify intent

Because coordinate duplicates are skipped before the UUID check, a second vertex with identical coords and same UUID won’t trigger DuplicateUuid. That’s likely OK, but please document this nuance in the new() “Duplicate Handling” section (or add a debug assertion to catch same-UUID cases even when coords match).

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4460c4f and 12d9ec0.

📒 Files selected for processing (6)
  • proptest-regressions/core/traits/insertion_algorithm.txt (1 hunks)
  • src/core/traits/insertion_algorithm.rs (11 hunks)
  • src/core/triangulation_data_structure.rs (8 hunks)
  • src/geometry/point.rs (7 hunks)
  • tests/proptest_quality.rs (1 hunks)
  • tests/serialization_vertex_preservation.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • proptest-regressions/core/traits/insertion_algorithm.txt
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/geometry/point.rs
  • tests/serialization_vertex_preservation.rs
  • src/core/traits/insertion_algorithm.rs
  • src/core/triangulation_data_structure.rs
  • tests/proptest_quality.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/geometry/point.rs
  • tests/serialization_vertex_preservation.rs
  • src/core/traits/insertion_algorithm.rs
  • src/core/triangulation_data_structure.rs
  • tests/proptest_quality.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.

Applied to files:

  • src/geometry/point.rs
  • src/core/triangulation_data_structure.rs
  • tests/proptest_quality.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/triangulation_data_structure.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/triangulation_data_structure.rs
🧬 Code graph analysis (5)
src/geometry/point.rs (2)
src/core/triangulation_data_structure.rs (2)
  • new (2287-2340)
  • dim (955-961)
src/geometry/traits/coordinate.rs (7)
  • new (676-676)
  • to_array (684-684)
  • dim (663-665)
  • validate (754-754)
  • origin (723-728)
  • get (706-706)
  • ordered_equals (780-780)
tests/serialization_vertex_preservation.rs (4)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
tests/proptest_serialization.rs (1)
  • points_approx_equal (20-25)
src/core/triangulation_data_structure.rs (2)
  • new (2287-2340)
  • vertices (604-606)
src/geometry/point.rs (1)
  • new (115-117)
src/core/traits/insertion_algorithm.rs (4)
src/core/facet.rs (13)
  • tds (406-408)
  • new (254-259)
  • new (427-451)
  • new (739-756)
  • new (828-836)
  • key (576-597)
  • facet_key_from_vertices (928-941)
  • cell_key (282-284)
  • cell_key (392-394)
  • vertices (493-514)
  • facet_index (307-309)
  • facet_index (399-401)
  • facet_vertices (1085-1109)
src/core/triangulation_data_structure.rs (7)
  • new (2287-2340)
  • vertex_keys (613-615)
  • cell_keys (622-624)
  • vertices (604-606)
  • number_of_cells (1020-1022)
  • number_of_vertices (865-867)
  • empty (2169-2178)
src/core/cell.rs (4)
  • new (476-499)
  • map (411-411)
  • vertices (672-674)
  • number_of_vertices (757-759)
src/core/boundary.rs (1)
  • boundary_facets (76-84)
src/core/triangulation_data_structure.rs (2)
src/core/vertex.rs (2)
  • point (475-477)
  • uuid (530-532)
src/geometry/point.rs (1)
  • new (115-117)
tests/proptest_quality.rs (5)
src/geometry/util.rs (3)
  • core (1810-1810)
  • circumradius (836-842)
  • inradius (1207-1281)
src/core/vertex.rs (2)
  • point (475-477)
  • from_points (419-424)
src/geometry/quality.rs (1)
  • radius_ratio (222-269)
src/core/triangulation_data_structure.rs (3)
  • new (2287-2340)
  • vertices (604-606)
  • cell_keys (622-624)
src/geometry/point.rs (4)
  • new (115-117)
  • from (404-406)
  • from (423-425)
  • coords (100-102)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: performance-regression
  • GitHub Check: Code Coverage
  • GitHub Check: Codacy Security Scan
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: build (windows-latest)
🔇 Additional comments (11)
tests/proptest_quality.rs (2)

1-27: LGTM - Well-structured test module setup.

The documentation clearly describes the properties being tested, and the finite_coordinate() strategy appropriately generates finite values in a reasonable range for geometric tests.


192-195: LGTM - Appropriate macro invocations.

The test generation covers dimensions 2-5 with appropriate vertex count ranges for each dimension. The num_points parameter is correctly set to D+1 for simplex construction.

src/geometry/point.rs (3)

463-753: Excellent refactoring with macro-driven dimensional tests.

The macro-based approach effectively reduces code duplication while maintaining comprehensive test coverage across dimensions 2D-5D. The test patterns (creation, equality, hashing, ordering, validation, serialization, origin, hashmap, copy) cover all essential Point behaviors consistently. The use of assert_relative_eq! for floating-point comparisons is appropriate.


2520-2663: LGTM - Comprehensive TryFrom conversion testing.

The tests thoroughly cover error cases (overflow, non-finite results) and successful conversions. Error detail verification (coordinate_index, coordinate_value) ensures proper error reporting. Edge cases and various integer-to-float conversions are appropriately tested.


2684-2890: LGTM - Thorough explicit method and 1D point testing.

The explicit tests for dim(), to_array(), ordered_equals(), and hash_coordinate() ensure direct API coverage beyond trait implementations. The comprehensive 1D point tests are particularly valuable as an edge case, verifying that all Point behaviors work correctly in the minimal dimension.

tests/serialization_vertex_preservation.rs (4)

1-27: LGTM - Clear documentation and appropriate tolerance.

The module documentation clearly explains the test lifecycle and what's being investigated. The points_approx_equal helper uses an appropriate tolerance (1e-14) for JSON serialization roundtrips, consistent with other serialization tests in the codebase.


28-111: LGTM - Comprehensive test with excellent diagnostics.

The test correctly distinguishes between expected vertex loss during Tds construction (duplicate removal) and critical vertex loss during serialization. The comprehensive logging and error messages help diagnose issues quickly. The coordinate verification with tolerance (lines 101-108) is appropriate for JSON precision.


113-151: LGTM - Solid baseline test.

This test establishes the baseline expectation that unique vertices should be preserved through both Tds construction and serialization. The two-stage verification (lines 139-148) clearly separates concerns and makes test failures easy to diagnose.


153-238: LGTM - Excellent stress test and exact verification.

The many-duplicates test (lines 153-195) effectively stress-tests duplicate handling with 10 copies of the same coordinate. The exact coordinate preservation test (lines 197-238) goes beyond count comparison to verify that each original coordinate is preserved through the roundtrip with appropriate tolerance, catching potential coordinate corruption issues.

src/core/triangulation_data_structure.rs (2)

681-725: Neighbor assignment guard looks solid

Early facet over‑subscription check (len ≥ 2 before push) is a good safety net; saturating sizes and SmallBuffer reuse are nice touches. LGTM.


2308-2333: Review comment is incorrect—Point<T,D> already provides NaN-aware Eq/Hash semantics.

Point<T,D> implements custom Hash and PartialEq that delegate to ordered_equals(), which calls ordered_eq() on each coordinate. OrderedEq is defined in traits/coordinate.rs and provides NaN-aware equality (NaN == NaN) and treats 0.0 == -0.0. Tests confirm NaN points work reliably as HashMap keys.

The code at lines 2308–2333 using HashSet<Point<T,D>> for dedup is safe and correct. No refactor needed.

Likely an incorrect or invalid review comment.
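The semantics the reviewer verified — NaN == NaN and 0.0 == -0.0 for hashing — can be mimicked with a std-only bit-canonicalization key; the real `Point<T, D>` implements this via `OrderedEq` and its `Hash` impl, so everything below is an illustrative sketch:

```rust
use std::collections::HashSet;

/// Canonicalize a coordinate's bit pattern so that all NaNs compare equal
/// and 0.0 == -0.0, matching OrderedEq-style semantics.
fn canonical_bits(c: f64) -> u64 {
    if c.is_nan() {
        f64::NAN.to_bits() // collapse every NaN payload to one representative
    } else if c == 0.0 {
        0.0f64.to_bits() // normalize -0.0 to +0.0
    } else {
        c.to_bits()
    }
}

/// O(n) duplicate filtering with a HashSet, replacing an O(n^2) pairwise scan.
fn dedup_points<const D: usize>(points: &[[f64; D]]) -> Vec<[f64; D]> {
    let mut seen: HashSet<[u64; D]> = HashSet::with_capacity(points.len());
    points
        .iter()
        .copied()
        .filter(|p| seen.insert(p.map(canonical_bits)))
        .collect()
}

fn main() {
    let pts = [[0.0, 1.0], [-0.0, 1.0], [f64::NAN, 2.0], [f64::NAN, 2.0]];
    let unique = dedup_points(&pts);
    assert_eq!(unique.len(), 2); // -0.0 matches 0.0; NaN matches NaN
}
```

This is why the HashSet-based dedup in Tds::new() is sound even for non-finite inputs: the key type's equality is total, not IEEE partial equality.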

Fix critical bug in preventive facet filtering where all valid interior
boundary facets were incorrectly rejected, and address UUID duplicate
detection and construction_state accuracy issues.

**Preventive Facet Filter Fix (CRITICAL)**
- Fixed filter_boundary_facets_by_valid_facet_sharing to discount bad
  cell contributions before checking facet sharing limits
- Bad cells are about to be deleted, but were being counted as if they
  would remain, causing every interior facet to be flagged as over-sharing
- Added unit test test_preventive_filter_does_not_reject_valid_interior_facets
  to prevent regression of this critical bug
- Captured bad_cell before moving info to avoid borrow issues

**UUID Duplicate Detection Fix**
- Reordered Tds::new() to check UUID collisions before coordinate deduplication
- Previously duplicate UUIDs were missed when coordinates were also duplicated
- Added Vertex::new_with_uuid() test helper (marked #[doc(hidden)])
- Added 4 integration tests for UUID collision detection scenarios

**Construction State Accuracy Fix**
- Recalculate construction_state after deduplication loop in Tds::new()
- Now reflects actual unique vertex count instead of raw input length
- Added 2 integration tests verifying correct state calculation

All quality checks pass (979 unit tests, 8 integration tests, clippy pedantic).
The preventive filter now correctly allows valid cavity reconstructions to proceed.
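The counting bug and its fix can be illustrated with a small sketch (identifiers are hypothetical, not the crate's types): when checking the "a facet is shared by at most 2 cells" constraint, cells already marked bad must not contribute, because they are deleted before the cavity is refilled.

```rust
use std::collections::HashMap;

/// Count how many cells share each facet, discounting cells that are
/// about to be removed (the "bad cells" of the Bowyer-Watson cavity).
fn facet_share_counts(
    cells: &HashMap<u32, Vec<u64>>, // cell id -> facet keys (illustrative)
    bad_cells: &[u32],
) -> HashMap<u64, usize> {
    let mut counts: HashMap<u64, usize> = HashMap::new();
    for (cell_id, facets) in cells {
        if bad_cells.contains(cell_id) {
            continue; // discount: this cell will be deleted before refill
        }
        for &facet in facets {
            *counts.entry(facet).or_insert(0) += 1;
        }
    }
    counts
}

fn main() {
    // Facet 7 is shared by cells 0 and 1, but cell 1 is a bad cell.
    let cells = HashMap::from([(0u32, vec![7u64, 8]), (1u32, vec![7, 9])]);
    let counts = facet_share_counts(&cells, &[1]);
    // After discounting, facet 7 has one surviving owner, so adding one new
    // cell on that facet still satisfies the <= 2 sharing constraint.
    assert_eq!(counts.get(&7), Some(&1));
    assert_eq!(counts.get(&9), None); // facets of bad cells drop out entirely
}
```

Counting the bad cell as if it would remain is exactly what made every interior boundary facet look over-shared and get rejected before the fix.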
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/core/traits/insertion_algorithm.rs (1)

279-301: Integer overflow risk: “saturating_*_for_bbox” are not saturating for integers.

Docs promise saturating math for integer CoordinateScalar, but implementations use plain +/- which can debug‑panic or wrap. This affects is_vertex_likely_exterior for integer Ts.

Fix by using checked_* with Bounded fallback; keep plain ops for floats.

-/// Saturating subtraction for bbox expansion - prevents overflow panics.
-/// For integer types: uses `saturating_sub`. For floats: normal subtraction.
-#[inline]
-fn saturating_sub_for_bbox<T>(a: T, b: T) -> T
-where
-    T: CoordinateScalar + Sub<Output = T>,
-{
-    // For floating-point types: normal subtraction (overflow → -infinity)
-    // For integer types: this trait bound ensures safe arithmetic
-    a - b
-}
+/// Saturating subtraction for bbox expansion.
+#[inline]
+fn saturating_sub_for_bbox<T>(a: T, b: T) -> T
+where
+    T: CoordinateScalar
+        + Sub<Output = T>
+        + num_traits::CheckedSub
+        + num_traits::Bounded
+        + num_traits::Zero
+        + PartialOrd,
+{
+    // For floats, checked_sub() returns Some; for ints, use saturating fallback on overflow.
+    a.checked_sub(&b).unwrap_or_else(|| {
+        // If subtracting a positive pushes below min, clamp to min; otherwise clamp to max.
+        if b >= T::zero() { T::min_value() } else { T::max_value() }
+    })
+}
 
-/// Saturating addition for bbox expansion - prevents overflow panics.
-/// For integer types: uses `saturating_add`. For floats: normal addition.
-#[inline]
-fn saturating_add_for_bbox<T>(a: T, b: T) -> T
-where
-    T: CoordinateScalar + Add<Output = T>,
-{
-    // For floating-point types: normal addition (overflow → infinity)
-    // For integer types: this trait bound ensures safe arithmetic
-    a + b
-}
+/// Saturating addition for bbox expansion.
+#[inline]
+fn saturating_add_for_bbox<T>(a: T, b: T) -> T
+where
+    T: CoordinateScalar
+        + Add<Output = T>
+        + num_traits::CheckedAdd
+        + num_traits::Bounded
+        + num_traits::Zero
+        + PartialOrd,
+{
+    a.checked_add(&b).unwrap_or_else(|| {
+        if b >= T::zero() { T::max_value() } else { T::min_value() }
+    })
+}
♻️ Duplicate comments (1)
src/core/traits/insertion_algorithm.rs (1)

2664-2783: Preventive facet filter fix LGTM; resolves prior “rejects all interior facets” bug.

You now discount the bad cell’s contribution when evaluating/accumulating facet counts, aligning pre-check with post-commit topology. This addresses the earlier failure mode where every interior boundary facet was dropped.

Based on past review feedback.

🧹 Nitpick comments (3)
src/core/vertex.rs (1)

629-655: Consider restricting visibility to prevent misuse.

The new_with_uuid function is documented as a test helper and marked #[doc(hidden)], but it's still pub, which means external crates can discover and use it through IDE autocomplete or by reading the source. Since this function bypasses UUID auto-generation and performs no validation, it could be misused to create invalid vertices.

Consider one of these alternatives:

Option 1 (Recommended): Use pub(crate) visibility

-    #[doc(hidden)]
-    pub const fn new_with_uuid(point: Point<T, D>, uuid: Uuid, data: Option<U>) -> Self {
+    /// Create a vertex for testing with a specific UUID.
+    ///
+    /// **Internal use only**: This helper is for integration tests.
+    pub(crate) const fn new_with_uuid(point: Point<T, D>, uuid: Uuid, data: Option<U>) -> Self {

This restricts usage to the same crate while still allowing integration tests (which are part of the crate) to use it.

Option 2: Use a test-only module

Move this to a #[cfg(test)] module or pub mod test_helpers that's clearly marked as test infrastructure.

src/core/traits/insertion_algorithm.rs (2)

2526-2536: Clearer error path and earlier exit for empty facet handle list.

create_cells_from_facet_handles errors when the filtered set is empty, but the message blames “preventive filtering” even for an initially empty handle list, and we also insert+rollback a vertex unnecessarily.

  • Improve UX: short‑circuit if facet_handles.is_empty() before ensure_vertex_in_tds.
  • Make the error message reflect the actual cause.
-        // Phase 1.5: Filter boundary facets to prevent invalid facet sharing
+        // Early exit: empty input
+        if facet_handles.is_empty() {
+            return Err(InsertionError::TriangulationState(
+                TriangulationValidationError::FailedToCreateCell {
+                    message: "No facet handles provided for cell creation".to_string(),
+                },
+            ));
+        }
+
+        // Phase 1.5: Filter boundary facets to prevent invalid facet sharing
         // This prevents creating cells that would violate the "facet shared by at most 2 cells" constraint
         let boundary_infos =
             Self::filter_boundary_facets_by_valid_facet_sharing(tds, boundary_infos, inserted_vk);
 
         // Hard-stop if preventive filter removed all boundary facets
-        // Creating zero cells would be invalid
+        // Creating zero cells would be invalid
         if boundary_infos.is_empty() {
             Self::rollback_created_cells_and_vertex(tds, &[], vertex, vertex_existed_before);
             return Err(InsertionError::TriangulationState(
                 TriangulationValidationError::FailedToCreateCell {
-                    message: "Preventive facet filtering removed every boundary facet; aborting cell creation to keep the TDS consistent."
+                    message: "No boundary facets available after filtering; aborting to keep the TDS consistent."
                         .to_string(),
                 },
             ));
         }

Also applies to: 3899-3921


2615-2634: Dedup set choice and micro-alloc nit.

Current dedup works; consider FastHashSet for consistency/perf and preallocate to boundary_infos.len().

-        use std::collections::HashSet;
-        let mut seen_vertex_sets = HashSet::new();
-        let mut deduplicated = Vec::new();
+        use crate::core::collections::{FastHashSet, fast_hash_set_with_capacity};
+        let mut seen_vertex_sets: FastHashSet<Vec<VertexKey>> =
+            fast_hash_set_with_capacity(boundary_infos.len());
+        let mut deduplicated = Vec::with_capacity(boundary_infos.len());
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 12d9ec0 and 6785788.

📒 Files selected for processing (4)
  • src/core/traits/insertion_algorithm.rs (11 hunks)
  • src/core/triangulation_data_structure.rs (8 hunks)
  • src/core/vertex.rs (1 hunks)
  • tests/tds_basic_integration.rs (2 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • tests/tds_basic_integration.rs
  • src/core/triangulation_data_structure.rs
  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • tests/tds_basic_integration.rs
  • src/core/triangulation_data_structure.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.

Applied to files:

  • src/core/triangulation_data_structure.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/triangulation_data_structure.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/triangulation_data_structure.rs
🧬 Code graph analysis (4)
src/core/vertex.rs (1)
src/core/cell.rs (1)
  • uuid (784-786)
tests/tds_basic_integration.rs (3)
src/core/vertex.rs (3)
  • point (475-477)
  • uuid (530-532)
  • new_with_uuid (648-655)
src/core/triangulation_data_structure.rs (2)
  • new (2287-2349)
  • vertices (604-606)
src/geometry/point.rs (1)
  • new (115-117)
src/core/triangulation_data_structure.rs (2)
src/core/vertex.rs (2)
  • point (475-477)
  • uuid (530-532)
src/geometry/point.rs (1)
  • new (115-117)
src/core/traits/insertion_algorithm.rs (4)
src/core/facet.rs (15)
  • tds (406-408)
  • new (254-259)
  • new (427-451)
  • new (739-756)
  • new (828-836)
  • key (576-597)
  • facet_key_from_vertices (928-941)
  • cell_key (282-284)
  • cell_key (392-394)
  • next (767-802)
  • next (847-859)
  • vertices (493-514)
  • facet_index (307-309)
  • facet_index (399-401)
  • facet_vertices (1085-1109)
src/core/triangulation_data_structure.rs (6)
  • new (2287-2349)
  • cell_keys (622-624)
  • vertices (604-606)
  • number_of_vertices (865-867)
  • empty (2169-2178)
  • is_valid (3939-3972)
src/core/cell.rs (5)
  • new (476-499)
  • map (411-411)
  • vertices (672-674)
  • number_of_vertices (757-759)
  • is_valid (1098-1138)
src/core/boundary.rs (1)
  • boundary_facets (76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: performance-regression
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: Code Coverage
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (9)
tests/tds_basic_integration.rs (5)

14-14: LGTM!

The new imports are necessary for the duplicate handling and construction state tests added below.

Also applies to: 18-18


176-201: LGTM!

This test correctly verifies that duplicate UUIDs are caught even when coordinates are identical, which validates the fix mentioned in the PR objectives where UUID checking happens before coordinate deduplication.


203-226: LGTM!

This test complements the previous test by verifying duplicate UUID detection when coordinates differ, ensuring UUID uniqueness is enforced independently of coordinate deduplication.


228-259: LGTM!

This test effectively verifies that the construction state is calculated based on unique vertices after deduplication. The choice of 2D (requiring only 3 vertices) makes the test more focused, and the case-insensitive error message check is appropriately robust.


261-299: LGTM!

This test provides good coverage of the successful path where deduplication leaves enough unique vertices for construction. The use of pattern matching on TriangulationConstructionState::Constructed is more robust than string-based error checking, and the test complements the previous insufficient-vertices test well.

src/core/triangulation_data_structure.rs (1)

2310-2332: Great call on HashSet-based dedupe

Switching to HashSet<Point<…>> brings the duplicate filter down to O(n) while preserving the existing UUID guard semantics. Nice tidy win.
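The O(n) duplicate filter praised here can be sketched with plain std types. Note that `dedup_points`, the fixed `[f64; 3]` representation, and bit-pattern hashing are illustrative assumptions — the real filter lives inside `Tds::new()` and hashes `Point` values via their `Hash` impl:

```rust
use std::collections::HashSet;

/// Hypothetical stand-in for the duplicate-coordinate filter in `Tds::new()`.
/// Keeps the first occurrence of each coordinate set; later duplicates are
/// silently skipped, matching the permissive batch-construction semantics.
fn dedup_points(points: &[[f64; 3]]) -> Vec<[f64; 3]> {
    // f64 does not implement Hash, so hash the raw IEEE 754 bit patterns.
    let mut seen: HashSet<[u64; 3]> = HashSet::with_capacity(points.len());
    let mut unique = Vec::with_capacity(points.len());
    for p in points {
        let bits = [p[0].to_bits(), p[1].to_bits(), p[2].to_bits()];
        if seen.insert(bits) {
            unique.push(*p);
        }
    }
    unique
}

fn main() {
    let pts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]];
    // The duplicate origin is dropped, leaving two unique points.
    assert_eq!(dedup_points(&pts).len(), 2);
}
```

Each membership test is O(1) expected, so the whole pass is O(n) rather than the O(n²) of pairwise comparison.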

src/core/traits/insertion_algorithm.rs (3)

3292-3305: Defense-in-depth retained after preventive filtering.

Keeping fix_invalid_facet_sharing post-insertion is good fallback coverage for edge cases when the facet map couldn’t be built or unseen cases slip through.


1588-1591: Transactional Phase 1/2 prechecks are correctly ordered.

Dedup first, ensure vertex, derive inserted_vk, then apply preventive facet filter with rollback on empty result. This sequencing avoids corrupting the TDS if reconstruction would be impossible.

If available, run the new regression covering interior facets to ensure it passes consistently.

Also applies to: 1610-1626
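The dedup → ensure-vertex → filter → rollback-on-empty sequencing can be sketched minimally; `transactional_insert` and the `Vec<&str>` stand-in for the TDS are hypothetical simplifications of the real trait method:

```rust
/// Sketch of the transactional precheck ordering described above.
/// The vertex is inserted first, but rolled back if preventive
/// filtering leaves no usable boundary facets, so the TDS never
/// ends up holding a vertex with no cells.
fn transactional_insert(
    tds_vertices: &mut Vec<&'static str>,
    filtered_facets: &[u64],
) -> Result<(), &'static str> {
    // Phase 1: ensure the vertex exists in the TDS.
    tds_vertices.push("inserted_vertex");
    // Phase 2: if filtering left nothing to reconstruct from, roll back.
    if filtered_facets.is_empty() {
        tds_vertices.pop();
        return Err("No boundary facets available after filtering");
    }
    Ok(())
}

fn main() {
    let mut tds: Vec<&'static str> = Vec::new();
    assert!(transactional_insert(&mut tds, &[]).is_err());
    assert!(tds.is_empty()); // rollback restored the pre-insertion state
    assert!(transactional_insert(&mut tds, &[7]).is_ok());
    assert_eq!(tds.len(), 1);
}
```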


7451-7554: Regression for preventive filter is solid.

The test demonstrates the earlier false‑positive and verifies at least one interior boundary facet survives filtering. Good guard against regressions.

Add early exit for empty facet handles, optimize deduplication, clarify
documentation, and restrict test helper visibility.

**Early Exit for Empty Input**
- Add check at start of create_cells_from_facet_handles for empty handles
- Returns clear error "No facet handles provided" instead of misleading
  "preventive filtering" message
- Avoids unnecessary vertex insertion and rollback operations

**Improved Error Messages**
- Changed post-filtering error to "No boundary facets available after filtering"
- More accurate since empty result can come from filtering, deduplication, or
  empty input

**Performance Optimization**
- Replace std::HashSet with FastHashSet in deduplicate_boundary_facet_info
- Pre-allocate both collections with boundary_infos.len() capacity
- Improves deduplication performance and consistency with codebase

**Documentation Updates**
- Clarify saturating_add/sub_for_bbox only support floats (not integers)
- CoordinateScalar is only implemented for f32/f64
- Update test comments to reflect early exit behavior
- Add prominent warning to Vertex::new_with_uuid test helper
- Keep pub visibility for integration tests but enhance safety warnings
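The early-exit guard described in the first bullet can be sketched as follows; the error enum and simplified signature are assumptions (the real method takes `&mut Tds` and typed facet handles):

```rust
/// Hypothetical stand-in for the crate's insertion error type.
#[derive(Debug, PartialEq)]
enum InsertionError {
    EmptyInput(&'static str),
}

/// Sketch of the early exit: reject an empty facet-handle list before
/// any vertex is inserted, so no rollback is ever needed and the error
/// message names the actual problem rather than "preventive filtering".
fn create_cells_from_facet_handles(facet_handles: &[u64]) -> Result<usize, InsertionError> {
    if facet_handles.is_empty() {
        return Err(InsertionError::EmptyInput("No facet handles provided"));
    }
    // ... real implementation would insert the vertex and build one
    // cell per boundary facet here ...
    Ok(facet_handles.len())
}

fn main() {
    assert!(create_cells_from_facet_handles(&[]).is_err());
    assert_eq!(create_cells_from_facet_handles(&[1, 2, 3]), Ok(3));
}
```

Checking the input before mutating anything is what keeps the operation atomic: the failure path has no side effects to undo.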
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/core/traits/insertion_algorithm.rs (1)

3898-3936: Align empty-handles tests with new API contract (now errors).

One test asserts Err (good), another swallows Err with unwrap_or(0). Make them consistent and assert the error explicitly.

-        let cells_created = IncrementalBowyerWatson::<f64, Option<()>, Option<()>, 3>::create_cells_from_facet_handles(
+        let result = IncrementalBowyerWatson::<f64, Option<()>, Option<()>, 3>::create_cells_from_facet_handles(
             &mut tds,
             &empty_facet_handles,
             &another_vertex,
-        ).unwrap_or(0);
-
-        assert_eq!(
-            cells_created, 0,
-            "Should create no cells from empty boundary facets"
-        );
+        );
+        assert!(result.is_err(), "Empty facet handle list should error");

Also applies to: 5719-5733

♻️ Duplicate comments (1)
src/core/traits/insertion_algorithm.rs (1)

2648-2795: Preventive filter now discounts the bad cell — resolves the earlier oversharing false positives.

You subtract the bad cell from facet counts both when checking and updating, preventing legitimate interior boundary facets from being dropped. This addresses the earlier bug where every interior boundary facet was rejected.
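The count-discount check can be sketched as a pure function; `facet_would_overshare` and its arguments are hypothetical simplifications (the real code keys on canonical facet hashes inside the filter loop):

```rust
use std::collections::HashMap;

/// Sketch of the corrected sharing check: the bad cell is about to be
/// deleted, so it must be discounted before the ">= 2 cells share this
/// facet" test, otherwise every interior boundary facet is rejected.
fn facet_would_overshare(
    facet_cell_counts: &HashMap<u64, usize>,
    facet_key: u64,
    bad_cell_contains_facet: bool,
) -> bool {
    let raw = facet_cell_counts.get(&facet_key).copied().unwrap_or(0);
    // Discount the soon-to-be-removed bad cell from the running count.
    let effective = if bad_cell_contains_facet {
        raw.saturating_sub(1)
    } else {
        raw
    };
    effective >= 2
}

fn main() {
    let mut counts = HashMap::new();
    // Facet currently shared by the bad cell plus one neighbor: without
    // the discount this legitimate boundary facet would be rejected.
    counts.insert(42_u64, 2_usize);
    assert!(!facet_would_overshare(&counts, 42, true));
    // Genuinely overshared even after discounting the bad cell.
    counts.insert(42, 3);
    assert!(facet_would_overshare(&counts, 42, true));
}
```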

🧹 Nitpick comments (3)
src/core/traits/insertion_algorithm.rs (3)

2606-2647: Dedup by sorted vertex sets is fine; minor perf nit.

Storing a `Vec<VertexKey>` in the set allocates per facet. If you want to shave allocations, consider hashing the SmallBuffer slice once (facet_key_from_vertices) and dedup on the canonical u64 instead.


2684-2701: Use FastHashMap for facet_cell_counts to match hot-path containers.

You already rely on FastHashSet/Map elsewhere. Swapping std::collections::HashMap here trims hashing overhead in the N·D inner loop.

-        use std::collections::HashMap;
+        use crate::core::collections::{FastHashMap, fast_hash_map_with_capacity};
...
-        let mut facet_cell_counts: HashMap<u64, usize> = HashMap::new();
+        let mut facet_cell_counts: FastHashMap<u64, usize> =
+            fast_hash_map_with_capacity(facet_map.len());

279-303: Helper naming vs behavior (floats).

saturating_{sub,add}_for_bbox perform plain FP +/- (no integer saturating). Given T: CoordinateScalar excludes integers here, the code is correct. Consider renaming or tightening the doc comment to “FP-safe bbox +/-” to avoid implying integer saturation.

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6785788 and dc53f4d.

📒 Files selected for processing (2)
  • src/core/traits/insertion_algorithm.rs (12 hunks)
  • src/core/vertex.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/core/vertex.rs
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (4)
src/core/facet.rs (13)
  • tds (406-408)
  • new (254-259)
  • new (427-451)
  • new (739-756)
  • new (828-836)
  • key (576-597)
  • facet_key_from_vertices (928-941)
  • cell_key (282-284)
  • cell_key (392-394)
  • vertices (493-514)
  • facet_index (307-309)
  • facet_index (399-401)
  • facet_vertices (1085-1109)
src/core/cell.rs (4)
  • new (476-499)
  • map (411-411)
  • vertices (672-674)
  • number_of_vertices (757-759)
src/core/collections.rs (2)
  • fast_hash_set_with_capacity (806-808)
  • fast_hash_set_with_capacity (902-902)
src/core/boundary.rs (1)
  • boundary_facets (76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Code Coverage
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: Codacy Security Scan
  • GitHub Check: build (macos-latest)
  • GitHub Check: performance-regression
🔇 Additional comments (4)
src/core/traits/insertion_algorithm.rs (4)

1590-1593: Good: dedup + preventive filtering wired into transactional flow.

This sequencing (dedup → ensure vertex → preventive filter → rollback on empty) is correct and avoids creating duplicate/overshared cells while keeping the TDS atomic. Looks good.

Also applies to: 1612-1627


2503-2511: Early error on empty facet handle list is correct and prevents side effects.

Defensive early-exit before any insertion is the right call for atomicity.


3304-3312: Defense-in-depth keeps topology safe.

Keeping fix_invalid_facet_sharing post-creation as a safety net is prudent given preventive filtering can be bypassed by other paths.


1118-1187: Direct T comparisons in bbox heuristics are fine.

is_vertex_likely_exterior uses < and > on T; CoordinateScalar guarantees PartialOrd via Float, so no f64 bridging needed. Based on learnings

Reduce allocations in deduplication and improve hashing performance in
the preventive facet filter's inner loop.

**Deduplication Optimization**
- Use u64 facet keys instead of Vec<VertexKey> for deduplication
- Leverages canonical facet_key_from_vertices() hashing
- Eliminates Vec allocation per facet (N allocations saved)
- Consistent with hashing used throughout codebase

**Facet Filter Optimization**
- Replace std::HashMap with FastHashMap in filter_boundary_facets_by_valid_facet_sharing
- Pre-allocate with facet_map.len() capacity to avoid rehashing
- Reduces hashing overhead in N·D inner loop
- Improves consistency with other hot-path containers

**Documentation Clarification**
- Clarify saturating_add/sub_for_bbox perform standard FP arithmetic
- Add note explaining "saturating" naming is for bbox semantic consistency
- Not actual integer saturation since CoordinateScalar only supports floats

**Test Alignment**
- Fix test_create_cells_from_boundary_facets to assert error on empty input
- Remove unwrap_or(0) pattern in favor of explicit error assertion
- Consistent with new API contract where empty handles error immediately
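The u64-key deduplication from the first bullet can be sketched like this; `facet_key` is a hypothetical stand-in for the crate's `facet_key_from_vertices`, using `DefaultHasher` only for illustration:

```rust
use std::collections::HashSet;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Canonical key for a facet: sort the vertex keys so ordering does not
/// matter, then hash them down to a single u64.
fn facet_key(vertex_keys: &[u64]) -> u64 {
    let mut sorted = vertex_keys.to_vec();
    sorted.sort_unstable();
    let mut hasher = DefaultHasher::new();
    sorted.hash(&mut hasher);
    hasher.finish()
}

/// Dedup facets on the canonical u64 key: the seen-set stores plain u64s
/// instead of an owned Vec<VertexKey> per facet (the optimization above).
fn dedup_facets(facets: Vec<Vec<u64>>) -> Vec<Vec<u64>> {
    let mut seen: HashSet<u64> = HashSet::with_capacity(facets.len());
    facets
        .into_iter()
        .filter(|f| seen.insert(facet_key(f)))
        .collect()
}

fn main() {
    // The same facet listed with two vertex orderings collapses to one.
    let facets = vec![vec![1, 2, 3], vec![3, 2, 1], vec![2, 3, 4]];
    assert_eq!(dedup_facets(facets).len(), 2);
}
```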
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
src/core/traits/insertion_algorithm.rs (3)

502-538: Drop const from mutating stats methods (unnecessary MSRV pressure).

These methods mutate self and don’t need to be const. Keeping them non-const avoids avoidable MSRV constraints.

-    pub const fn reset(&mut self) {
+    pub fn reset(&mut self) {
         *self = Self::new();
     }
-    pub const fn record_vertex_insertion(&mut self, info: &InsertionInfo) {
+    pub fn record_vertex_insertion(&mut self, info: &InsertionInfo) {
         self.vertices_processed += 1;
         self.total_cells_created += info.cells_created;
         self.total_cells_removed += info.cells_removed;
         …
     }
-    pub const fn record_fallback_usage(&mut self) {
+    pub fn record_fallback_usage(&mut self) {
         self.fallback_strategies_used += 1;
     }
-    pub const fn record_cavity_boundary_failure(&mut self) {
+    pub fn record_cavity_boundary_failure(&mut self) {
         self.cavity_boundary_failures += 1;
     }
-    pub const fn record_cavity_boundary_recovery(&mut self) {
+    pub fn record_cavity_boundary_recovery(&mut self) {
         self.cavity_boundary_recoveries += 1;
     }

279-310: Clarify naming of FP bbox helpers (non-saturating).

These perform plain FP +/- (as documented). To avoid confusion with integer “saturating” ops, consider renaming to bbox_sub_fp/bbox_add_fp, or keep names and add a brief “FP-only, not integer-saturating” note at call sites.


4099-4106: Update comment: duplicates are now deduped, not created.

The test comment suggests duplicate cells “might” be created; implementation deduplicates boundary facets by vertex set, so duplicates won’t be formed. Adjust the comment to reflect current behavior.

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dc53f4d and f335256.

📒 Files selected for processing (1)
  • src/core/traits/insertion_algorithm.rs (13 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (5)
src/core/facet.rs (13)
  • tds (406-408)
  • new (254-259)
  • new (427-451)
  • new (739-756)
  • new (828-836)
  • key (576-597)
  • facet_key_from_vertices (928-941)
  • cell_key (282-284)
  • cell_key (392-394)
  • vertices (493-514)
  • facet_index (307-309)
  • facet_index (399-401)
  • facet_vertices (1085-1109)
src/core/triangulation_data_structure.rs (8)
  • new (2287-2349)
  • cell_keys (622-624)
  • vertex_keys (613-615)
  • vertices (604-606)
  • number_of_cells (1020-1022)
  • number_of_vertices (865-867)
  • empty (2169-2178)
  • is_valid (3939-3972)
src/core/cell.rs (5)
  • new (476-499)
  • map (411-411)
  • vertices (672-674)
  • number_of_vertices (757-759)
  • is_valid (1098-1138)
src/core/collections.rs (4)
  • fast_hash_set_with_capacity (806-808)
  • fast_hash_set_with_capacity (902-902)
  • fast_hash_map_with_capacity (772-774)
  • fast_hash_map_with_capacity (899-899)
src/core/boundary.rs (1)
  • boundary_facets (76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Codacy Security Scan
  • GitHub Check: Code Coverage
  • GitHub Check: performance-regression
🔇 Additional comments (4)
src/core/traits/insertion_algorithm.rs (4)

2691-2803: Preventive facet filtering fix looks correct (discounts bad cell).

The logic now subtracts the soon-to-be-deleted bad cell from facet counts before checking and when updating running totals. This prevents false “>=2” rejections of valid interior boundary facets. Nice recovery and alignment with commit 6785788.


1596-1634: Good: transactional flow guards against empty facet set.

Dedup → ensure vertex → preventive filter → hard-stop+rollback on empty set. This preserves atomicity and avoids creating holes.


3411-3416: pastey::paste! usage is correct; the deprecation concern does not apply.

Based on verification using the retrieved learnings, the review comment is incorrect. The codebase is correctly using pastey::paste!, which is the maintained replacement for the deprecated paste crate. The pastey crate is already properly declared in Cargo.toml as a dev-dependency (pastey = "0.1"), and its usage throughout the codebase is appropriate.

Likely an incorrect or invalid review comment.


383-389: const f64 equality comparisons in const items are stable on the declared MSRV.

Based on my verification, the pattern const _: () = assert!(condition) is stable and works for most cases. More specifically, as of Rust 1.57, with const_panic stabilized, this pattern has been possible.

The confusion in the review comment stems from conflating two different Rust contexts:

  • Const items (like your code): f64 equality comparisons are allowed
  • Const fn bodies: f64 comparisons are restricted with error E0658

Your MSRV is 1.90.0, which is far newer than the 1.57 requirement. The code pattern comparing two const f64 literals is well-supported.

No action required. The code is correct for the declared MSRV.

Likely an incorrect or invalid review comment.

Add AI-assisted development tool acknowledgements:
- List AI tools used in project development (ChatGPT, Claude, CodeRabbit,
  GitHub Copilot, KiloCode, WARP) in README.md
- Add detailed citations and human oversight note in REFERENCES.md
- Clarify all AI output was reviewed/edited by maintainer

Clarify floating-point bbox helper documentation:
- Simplify doc comments for saturating_add/sub_for_bbox functions
- Emphasize "saturating" naming is for semantic consistency, not actual
  integer saturation behavior
- Update test comment to reflect deduplication via canonical vertex sets
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/core/traits/insertion_algorithm.rs (1)

1427-1459: Don’t skip facets when neighbor slot is missing; treat as boundary.

If neighbors.get(facet_idx) returns None (buffer shorter than D+1), this skips a potentially valid boundary facet. Safer: consider a missing entry as “no neighbor” (boundary) instead of continue.

-            for facet_idx in 0..=D {
-                let Some(&neighbor_key_opt) = neighbors.get(facet_idx) else {
-                    continue;
-                };
-
-                // Boundary facet if: no neighbor OR neighbor is not bad
-                let is_boundary = neighbor_key_opt.is_none_or(|n| !bad_cell_set.contains(&n));
+            for facet_idx in 0..=D {
+                // Missing slot => treat as boundary
+                let is_boundary = match neighbors.get(facet_idx) {
+                    None => true,
+                    Some(&neighbor_key_opt) => neighbor_key_opt.is_none_or(|n| !bad_cell_set.contains(&n)),
+                };
                 if !is_boundary {
                     continue; // Interior facet; skip
                 }
🧹 Nitpick comments (7)
src/core/traits/insertion_algorithm.rs (7)

2610-2651: Dedup by canonical facet key looks correct; consider debug assert.

Implementation preserves order and removes duplicates via canonical keys. Optional: add a debug assert that output contains unique keys to catch regressions.

         for info in boundary_infos {
             let facet_key = facet_key_from_vertices(&info.facet_vertex_keys);
             if seen_facet_keys.insert(facet_key) {
                 deduplicated.push(info);
             }
         }
+        #[cfg(debug_assertions)]
+        {
+            use std::collections::HashSet;
+            let uniq: HashSet<_> = deduplicated.iter()
+                .map(|i| facet_key_from_vertices(&i.facet_vertex_keys))
+                .collect();
+            debug_assert_eq!(uniq.len(), deduplicated.len(), "Facet dedup failed to ensure uniqueness");
+        }
         deduplicated

279-307: Naming nit: “saturating_*_for_bbox” suggests integer semantics.

These are float‑only under CoordinateScalar. Consider renaming to bbox_sub/bbox_add or tightening docs to avoid confusion.
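The distinction matters because float +/- does not saturate at a finite bound the way integer saturating ops do: it overflows to infinity. A minimal sketch, using the suggested `bbox_sub`/`bbox_add`-style names (hypothetical renames, not the crate's current API):

```rust
/// Plain floating-point bbox expansion helpers. Unlike integer
/// saturating arithmetic, FP addition never clamps at f64::MAX;
/// overflow produces +/- infinity instead.
fn bbox_sub_fp(a: f64, b: f64) -> f64 {
    a - b
}

fn bbox_add_fp(a: f64, b: f64) -> f64 {
    a + b
}

fn main() {
    assert_eq!(bbox_sub_fp(3.0, 1.0), 2.0);
    // FP overflow goes to infinity rather than clamping at f64::MAX.
    assert_eq!(bbox_add_fp(f64::MAX, f64::MAX), f64::INFINITY);
}
```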


4035-4038: Avoid brittle message substring checks in error assertions.

You already match the error variant; prefer asserting enum/fields over message text to reduce false breaks from wording changes.


6896-6912: Strengthen rollback test: assert pre‑state restored exactly.

Also compare post‑rollback cell count to the initial cell count captured before creation for a stronger guarantee.

-        assert_eq!(
-            tds.number_of_cells(),
-            cells_after_creation - created_cells.len(),
-            "Cells should be removed"
-        );
+        // Should match the pre-creation state exactly
+        assert_eq!(tds.number_of_cells(), _initial_cell_count, "Cell count should return to initial after rollback");

7199-7218: Add an assertion; test currently only prints.

State the expected behavior explicitly (e.g., single‑vertex TDS ⇒ likely exterior returns true).

-        println!("  ✓ Single vertex TDS: is_exterior = {is_exterior}");
+        assert!(is_exterior, "With a single existing vertex, a different vertex should be classified as exterior");

7231-7245: Add assertions for the “all same point” case.

Either assert constructor fails (degenerate) or assert is_vertex_likely_exterior consistency when it succeeds.

-            Ok(tds) => {
+            Ok(tds) => {
                 let test_vertex = vertex!([2.0, 2.0, 2.0]);
                 let is_exterior = IncrementalBowyerWatson::<f64, Option<()>, Option<()>, 3>::is_vertex_likely_exterior(
                     &tds,
                     &test_vertex,
                 );
-                println!("  ✓ All same point: is_exterior = {is_exterior}");
+                assert!(is_exterior, "Vertex outside degenerate bbox should be considered exterior");
             }

4035-4038: Consolidate error checks to variants; drop message heuristics.

Where you already match InsertionError::TriangulationState(TriangulationValidationError::InconsistentDataStructure { .. }), avoid extra contains(...) on strings.

Also applies to: 3985-3987

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f335256 and bc7fc0d.

📒 Files selected for processing (3)
  • README.md (2 hunks)
  • REFERENCES.md (1 hunks)
  • src/core/traits/insertion_algorithm.rs (14 hunks)
✅ Files skipped from review due to trivial changes (1)
  • README.md
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • REFERENCES.md
  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-28T14:12:46.198Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CITATION.cff:5-7
Timestamp: 2025-08-28T14:12:46.198Z
Learning: The DOI 10.5281/zenodo.16931097 in the CITATION.cff file belongs to the Rust delaunay triangulation library by acgetchell, not to any Julia DelaunayTriangulation.jl package.

Applied to files:

  • REFERENCES.md
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (4)
src/core/facet.rs (13)
  • tds (406-408)
  • new (254-259)
  • new (427-451)
  • new (739-756)
  • new (828-836)
  • key (576-597)
  • facet_key_from_vertices (928-941)
  • cell_key (282-284)
  • cell_key (392-394)
  • vertices (493-514)
  • facet_index (307-309)
  • facet_index (399-401)
  • facet_vertices (1085-1109)
src/core/triangulation_data_structure.rs (8)
  • new (2287-2349)
  • cell_keys (622-624)
  • vertex_keys (613-615)
  • vertices (604-606)
  • number_of_cells (1020-1022)
  • number_of_vertices (865-867)
  • empty (2169-2178)
  • is_valid (3939-3972)
src/core/cell.rs (5)
  • new (476-499)
  • map (411-411)
  • vertices (672-674)
  • number_of_vertices (757-759)
  • is_valid (1098-1138)
src/core/collections.rs (4)
  • fast_hash_set_with_capacity (806-808)
  • fast_hash_set_with_capacity (902-902)
  • fast_hash_map_with_capacity (772-774)
  • fast_hash_map_with_capacity (899-899)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (7)
REFERENCES.md (1)

28-38: Excellent transparency and attribution for AI-assisted development.

The new "AI-Assisted Development Tools" section clearly discloses the AI tools used during development and explicitly states that all generated output was reviewed/edited by the maintainer with no content used without human oversight. The formatting is consistent with the document's markdown style, tool references are accurate with proper URLs, and the placement between the citation guidance and core algorithm references is logical.

src/core/traits/insertion_algorithm.rs (6)

2507-2551: Early‑exit on empty handles and pre‑filtering are spot on.

Rejecting empty input before inserting the vertex and running dedup/filter avoids side effects and keeps semantics atomic.


2653-2801: Preventive filter fix correctly discounts the bad cell.

The count‑discount before the ≥2 check and during running updates resolves the “reject all interior facets” bug from the earlier review. Keeping the reactive fix as safety net is prudent.


3310-3318: Defense‑in‑depth comment is helpful; keep reactive fix.

Retaining fix_invalid_facet_sharing post‑creation covers facet‑map build failures and edge cases.


3917-3940: Great: empty handle path asserts no side effects.

Test verifies early error and unchanged TDS (cells and vertices).


1594-1631: Insertion paths consistently apply dedup + filter; no bypasses detected.

Verified via codebase scan: both main insertion paths (lines 1594–1631 and 2520–2539) uniformly apply deduplicate_boundary_facet_info followed by filter_boundary_facets_by_valid_facet_sharing before cell creation. The transactional sequence—dedup, preventive filter, atomic rollback on failure—is consistent across all paths. No alternate insertion routes that skip these checks were found.


279-307: CoordinateScalar is Float-backed; bbox operations are justified.

CoordinateScalar trait inherits from Float, confirming that the float-only saturating_add_for_bbox and saturating_sub_for_bbox operations are correctly constrained. The natural overflow handling (to ±infinity) documented in the comments is appropriate for floating-point arithmetic.

…nt filtering

Replace silent deduplication with explicit error handling to catch algorithmic
bugs in cavity boundary detection early.

**Breaking API Changes**
- deduplicate_boundary_facet_info: Returns Result<SmallBuffer, InsertionError>
  instead of Vec (errors on duplicates, uses SmallBuffer for performance)
- filter_boundary_facets_by_valid_facet_sharing: Returns Result<SmallBuffer>
  instead of Vec (propagates errors from facet map building)
- Add InsertionError::DuplicateBoundaryFacets variant with duplicate_count and
  total_count fields

**Rationale**
The cavity boundary should form a topological sphere with no duplicate facets.
Duplicates indicate:
- Incorrect neighbor traversal logic
- Non-manifold mesh connectivity
- Data structure corruption

By returning errors instead of silently filtering, we surface these bugs
immediately rather than allowing corrupted triangulations.

**Performance Optimization**
- Use SmallBuffer (stack-allocated, falls back to heap) for typical D+1 facets
- Faster than Vec for small collections (common case in D dimensions)
- Maintains O(n) duplicate detection with FastHashSet

**Implementation Details**
- Propagate DuplicateBoundaryFacets errors through insert_vertex_cavity_based
  and create_cells_from_facet_handles
- Update tests to expect errors on duplicates instead of silent filtering
- Rename prop_deduplication_is_idempotent to prop_deduplication_detects_duplicates
- Fix test_create_cells_from_facet_handles_duplicate_handles to assert error
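The error-on-duplicates flow described in this commit message can be sketched roughly as follows. This is a simplified stand-in, not the crate's actual code: plain `Vec`/`HashSet` replace `SmallBuffer`/`FastHashSet`, and bare `u64` ids replace `VertexKey`s and canonical facet keys.

```rust
use std::collections::HashSet;

// Hypothetical simplified facet signature: a list of vertex ids.
type FacetSig = Vec<u64>;

#[derive(Debug, PartialEq)]
enum InsertionError {
    // Mirrors the DuplicateBoundaryFacets variant described above.
    DuplicateBoundaryFacets {
        duplicate_count: usize,
        total_count: usize,
    },
}

// Errors on duplicate cavity-boundary facets instead of silently
// filtering them, surfacing broken neighbor traversal early.
fn deduplicate_boundary_facets(
    facets: &[FacetSig],
) -> Result<Vec<FacetSig>, InsertionError> {
    let mut seen: HashSet<FacetSig> = HashSet::with_capacity(facets.len());
    let mut duplicate_count = 0;
    for facet in facets {
        // Canonicalize so vertex order does not matter: [2, 1] == [1, 2].
        let mut key = facet.clone();
        key.sort_unstable();
        if !seen.insert(key) {
            duplicate_count += 1;
        }
    }
    if duplicate_count > 0 {
        return Err(InsertionError::DuplicateBoundaryFacets {
            duplicate_count,
            total_count: facets.len(),
        });
    }
    Ok(facets.to_vec())
}
```

Duplicate detection stays O(n) via the hash set, matching the performance note above.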
Contributor

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 0

♻️ Duplicate comments (1)
src/core/traits/insertion_algorithm.rs (1)

2703-2857: Preventive facet-sharing filter now discounts bad-cell counts correctly.

This addresses the earlier “oversharing” false positives by subtracting the soon‑to‑be‑removed bad cell before checking counts.
If you want a quick sanity check, run the included test test_preventive_filter_does_not_reject_valid_interior_facets; it should pass under debug and release. Based on learnings

🧹 Nitpick comments (4)
src/core/traits/insertion_algorithm.rs (4)

3198-3207: Remove redundant sort before hashing facet signature.

facet_key_from_vertices() already sorts; drop the extra sort to save work in the hot wiring path.

-                let mut facet_sig: FacetSignature = cell_vertices
+                let mut facet_sig: FacetSignature = cell_vertices
                     .iter()
                     .enumerate()
                     .filter_map(|(i, &vk)| (i != opposite_idx).then_some(vk))
                     .collect();
-                facet_sig.sort_unstable();
 
                 let facet_key = facet_key_from_vertices(&facet_sig);

2157-2184: Avoid UUID set; prefer VertexKey membership for speed.

Build a FastHashSet for the facet, then scan adjacent cell keys without per‑vertex UUID hashing. You can fetch the single opposite vertex only once after finding its key.
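The suggested key-based membership test could look roughly like this. These are hypothetical stand-ins: a `u32` alias plays the role of the slotmap `VertexKey`, and std's `HashSet` stands in for `FastHashSet`; the point is that hashing small integer keys is cheaper than hashing 128-bit UUIDs per vertex.

```rust
use std::collections::HashSet;

// Hypothetical stand-in for the slotmap-backed VertexKey.
type VertexKey = u32;

// Finds the single cell vertex not on the facet by membership over
// cheap integer keys, avoiding per-vertex UUID hashing.
fn opposite_vertex(cell: &[VertexKey], facet: &[VertexKey]) -> Option<VertexKey> {
    let facet_set: HashSet<VertexKey> = facet.iter().copied().collect();
    cell.iter().copied().find(|k| !facet_set.contains(k))
}
```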


522-559: Don’t mark mutating methods as const.

reset/record_* mutate state and can’t be used in const contexts, so const brings no benefit and may confuse readers.

-    pub const fn reset(&mut self) {
+    pub fn reset(&mut self) {
         *self = Self::new();
     }
 
-    pub const fn record_vertex_insertion(&mut self, info: &InsertionInfo) {
+    pub fn record_vertex_insertion(&mut self, info: &InsertionInfo) {
         ...
     }
 
-    pub const fn record_fallback_usage(&mut self) {
+    pub fn record_fallback_usage(&mut self) {
         self.fallback_strategies_used += 1;
     }
 
-    pub const fn record_cavity_boundary_failure(&mut self) {
+    pub fn record_cavity_boundary_failure(&mut self) {
         self.cavity_boundary_failures += 1;
     }
 
-    pub const fn record_cavity_boundary_recovery(&mut self) {
+    pub fn record_cavity_boundary_recovery(&mut self) {
         self.cavity_boundary_recoveries += 1;
     }

301-329: Doc nit: clarify “saturating” naming for floats.

These helpers are plain +/- for floats; consider renaming or tightening docs to avoid implying integer‑style saturation.

Also applies to: 316-329

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc7fc0d and 4a419af.

📒 Files selected for processing (2)
  • proptest-regressions/core/traits/insertion_algorithm.txt (1 hunks)
  • src/core/traits/insertion_algorithm.rs (16 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • proptest-regressions/core/traits/insertion_algorithm.txt
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (5)
src/core/cell.rs (6)
  • neighbors (639-643)
  • new (476-499)
  • map (411-411)
  • vertices (672-674)
  • number_of_vertices (757-759)
  • is_valid (1098-1138)
src/core/facet.rs (13)
  • tds (406-408)
  • new (254-259)
  • new (427-451)
  • new (739-756)
  • new (828-836)
  • key (576-597)
  • facet_key_from_vertices (928-941)
  • cell_key (282-284)
  • cell_key (392-394)
  • vertices (493-514)
  • facet_index (307-309)
  • facet_index (399-401)
  • facet_vertices (1085-1109)
src/core/triangulation_data_structure.rs (8)
  • new (2287-2349)
  • cell_keys (622-624)
  • vertex_keys (613-615)
  • vertices (604-606)
  • number_of_cells (1020-1022)
  • number_of_vertices (865-867)
  • empty (2169-2178)
  • is_valid (3939-3972)
src/core/collections.rs (4)
  • fast_hash_set_with_capacity (806-808)
  • fast_hash_set_with_capacity (902-902)
  • fast_hash_map_with_capacity (772-774)
  • fast_hash_map_with_capacity (899-899)
src/core/boundary.rs (1)
  • boundary_facets (76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (windows-latest)
  • GitHub Check: performance-regression
  • GitHub Check: build (macos-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: Code Coverage
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (7)
src/core/traits/insertion_algorithm.rs (7)

219-241: Good explicit surfacing of duplicate cavity facets.

Clear error variant and message; this prevents silent topology breakage.


1451-1457: Boundary facet detection and dedup look correct.

Using Option::is_none_or and canonical facet keys fixes oversights and avoids duplicates.

Also applies to: 1467-1481


2633-2701: Deduplication via canonical facet keys is solid.

O(n) with FastHashSet and SmallBuffer is appropriate; erroring on duplicates is the right call here.


399-410: Nice const guard on threshold invariants.

Compile‑time assert keeps the integer optimization correct.


2530-2537: Early exit on empty facet handle input is good.

Fast fail with a clear error avoids unnecessary vertex insertion/rollback.


3366-3374: Defensive finalize keeps safety net despite preventive filtering.

Good defense‑in‑depth choice.


3465-3467: No issues found. Pastey dev-dependency is properly configured.

Verification confirms that pastey = "0.1" is present in dev-dependencies, and the codebase consistently uses pastey::paste! throughout—the maintained replacement for the deprecated paste crate. The code at lines 3465–3467 is correct as written.

The "saturating" terminology was misleading for floating-point operations.
These functions perform plain arithmetic, not saturating arithmetic.

**Renaming**
- saturating_sub_for_bbox → bbox_sub
- saturating_add_for_bbox → bbox_add

**Updated Documentation**
- Clarify these are plain float operations, NOT saturating arithmetic
- Explain float overflow behavior: naturally produces ±infinity
- Document that this is the desired behavior for bounding box expansion
  (ensures all vertices are contained even with extreme coordinates)
- Add explicit type constraints: only supports f32/f64 via CoordinateScalar
- Update all 28 call sites and test names

**Rationale**
"Saturating" implies integer-style clamping to min/max bounds, but:
- Floats don't saturate - they naturally produce ±infinity on overflow
- This is actually the correct behavior for bbox expansion
- The old naming suggested these functions did something special, when
  they're just plain +/- operations with overflow-to-infinity semantics
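The overflow-to-infinity behavior that motivates the rename can be checked with a tiny sketch. The `bbox_add`/`bbox_sub` below are minimal `f64` stand-ins, not the crate's generic `CoordinateScalar` versions:

```rust
// Plain float arithmetic used for bounding-box expansion: there is no
// integer-style saturation; overflow naturally yields +/- infinity,
// which still bounds every finite coordinate.
fn bbox_add(a: f64, b: f64) -> f64 {
    a + b
}

fn bbox_sub(a: f64, b: f64) -> f64 {
    a - b
}
```

An infinite bound is a valid (if loose) bbox edge: every finite coordinate compares less than `+inf`, so containment checks still hold.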

Labels

documentation Improvements or additions to documentation rust Pull requests that update rust code
