fix: Handle degenerate configurations and improve test robustness #116
acgetchell merged 8 commits into main from
Conversation
This commit addresses several issues related to degenerate point configurations and improves the robustness of property-based tests:

**Duplicate Vertex Handling**
- Add duplicate coordinate detection to the `Tds::new()` constructor
- Silently skip vertices with identical coordinates during batch construction
- Document the behavior difference between `new()` (permissive) and `add()` (strict)
- Matches the expected behavior for batch initialization from potentially unfiltered data

**Property Test Improvements**
- Filter degenerate configurations in convex hull proptests (collinear/coplanar points with no boundary facets)
- Use the `approx` crate for floating-point comparisons in serialization tests
- Add the `BoundaryAnalysis` trait import where needed
- Fix debug-only variable compilation with `#[cfg(debug_assertions)]`

**Serialization Test Fixes**
- Address JSON floating-point precision limitations (not a bug)
- Use `relative_eq!` from the `approx` crate instead of exact equality
- JSON typically preserves 15-17 significant digits for `f64`
- Minor coordinate differences after roundtrip are expected and acceptable

**New Property Tests**
- Add comprehensive proptest suites for Bowyer-Watson, convex hull, cells, facets, geometry utilities, quality metrics, and serialization
- Include proptest regression files to track minimal failing cases
- Add a serialization vertex preservation integration test

All tests now pass successfully. Degenerate cases are filtered in property tests since they're covered by dedicated edge case tests.
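The degenerate-configuration filtering described above can be illustrated with a small std-only sketch. This is a hypothetical helper (`is_degenerate_2d` is not the crate's API): it rejects 2D point sets that are (near-)collinear, the kind of input the proptests skip via `prop_assume!` because it yields no boundary facets.

```rust
/// Sketch of a degeneracy filter: a 2-D point set is degenerate if every
/// point lies (within eps) on the line through the first two points.
fn is_degenerate_2d(points: &[[f64; 2]], eps: f64) -> bool {
    if points.len() < 3 {
        return true; // too few points to span the plane
    }
    let [ax, ay] = points[0];
    let [bx, by] = points[1];
    // The cross product measures twice the signed area of triangle (A, B, P);
    // if it vanishes for every P, the set is collinear.
    points[2..].iter().all(|&[px, py]| {
        let cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax);
        cross.abs() <= eps
    })
}

fn main() {
    let collinear = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]];
    assert!(is_degenerate_2d(&collinear, 1e-12));

    let triangle = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]];
    assert!(!is_degenerate_2d(&triangle, 1e-12));
}
```

In a proptest this predicate would sit behind `prop_assume!(!is_degenerate_2d(&pts, eps))` so the generator discards the case rather than failing on it.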
Walkthrough

Adds topology-safety checks (facet deduplication and preventive filtering with rollback) to insertion flows, batch duplicate/UUID handling in TDS construction, a Vertex test constructor, many new proptest and integration tests with regression seeds, new Justfile recipes, a small cspell addition, and README/REFERENCES updates.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Client
participant TDS as Triangulation
participant IA as InsertionAlgorithm
participant Cavity
Client->>TDS: add(new_vertex)
TDS->>IA: begin insertion (find bad cells)
IA->>Cavity: find_bad_cells(new_vertex)
Cavity-->>IA: boundary_facet_info[]
rect rgb(230,245,255)
note over IA: Phase 1 — deduplicate boundary facets
IA->>IA: deduplicate_boundary_facet_info(boundary_infos)
end
rect rgb(245,230,255)
note over IA: Phase 2 — preventive filtering & commit
IA->>IA: filter_boundary_facets_by_valid_facet_sharing(..., inserted_vk)
alt facets remain
IA->>TDS: create_cells_from_deduped_facets(...)
TDS-->>IA: success
IA->>TDS: commit vertex & cells
TDS-->>Client: Ok
else no facets remain
IA->>TDS: rollback_created_cells_and_vertex()
TDS-->>Client: Err (Duplicate/Filtered facets)
end
end
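Phase 1 of the diagram above can be sketched in a few lines. This is a minimal std-only illustration with hypothetical types (plain `u32` vertex ids and a free function, not the crate's `deduplicate_boundary_facet_info`): two adjacent bad cells can report the same geometric facet with different vertex orderings, so the facets are keyed by their sorted vertex set before cavity filling.

```rust
use std::collections::HashSet;

/// Deduplicate boundary facets by a canonical key: the sorted vertex ids.
/// The same facet reported twice (in any vertex order) is kept only once.
fn dedup_boundary_facets(facets: &[Vec<u32>]) -> Vec<Vec<u32>> {
    let mut seen: HashSet<Vec<u32>> = HashSet::new();
    let mut unique = Vec::new();
    for f in facets {
        let mut key = f.clone();
        key.sort_unstable(); // canonical order: a facet is a vertex *set*
        if seen.insert(key) {
            unique.push(f.clone());
        }
    }
    unique
}

fn main() {
    // Facet {1, 2, 3} reported twice with different orderings, plus {2, 3, 4}.
    let facets = vec![vec![1, 2, 3], vec![3, 2, 1], vec![2, 3, 4]];
    let unique = dedup_boundary_facets(&facets);
    assert_eq!(unique.len(), 2); // the duplicate facet collapses to one entry
}
```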
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Areas to focus on:
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Codecov Report

❌ Patch coverage is

@@ Coverage Diff @@
##             main     #116      +/-   ##
==========================================
+ Coverage   68.50%   68.58%   +0.08%
==========================================
  Files          19       19
  Lines        4226     4333     +107
==========================================
+ Hits         2895     2972      +77
- Misses       1331     1361      +30

Flags with carried forward coverage won't be shown.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/core/triangulation_data_structure.rs (1)
2316-2319: Prevent duplicate-UUID corruption during batch insert

Directly inserting into `vertices` and then doing `uuid_to_vertex_key.insert(uuid, key)` can silently overwrite the mapping when the input contains duplicate UUIDs, leaving two stored vertices with one UUID mapping. Use the Entry API (or `insert_vertex_with_mapping`) to enforce UUID uniqueness.

- let key = tds.vertices.insert(*vertex);
- let uuid = vertex.uuid();
- tds.uuid_to_vertex_key.insert(uuid, key);
+ let uuid = vertex.uuid();
+ match tds.uuid_to_vertex_key.entry(uuid) {
+     Entry::Occupied(_) => {
+         return Err(TriangulationConstructionError::DuplicateUuid {
+             entity: EntityKind::Vertex,
+             uuid,
+         });
+     }
+     Entry::Vacant(e) => {
+         let key = tds.vertices.insert(*vertex);
+         e.insert(key);
+     }
+ }

Alternatively, call the existing helper (note: bumps generation per insert):

- let key = tds.vertices.insert(*vertex);
- let uuid = vertex.uuid();
- tds.uuid_to_vertex_key.insert(uuid, key);
+ tds.insert_vertex_with_mapping(*vertex)?;

src/core/traits/insertion_algorithm.rs (2)
1588-1655: Abort when preventive filtering removes every boundary facet before Phase 3

`filter_boundary_facets_by_valid_facet_sharing` can legitimately return an empty list (or at least fewer entries than the original boundary set). Right now we still march into Phase 2/3: we create zero new cells, then `remove_bad_cells` tears out the old cavity cells, leaving a hole and dangling neighbor pointers. That is a transactional correctness break: one failing insertion permanently corrupts the TDS.

Make the preventive filter a hard stop: if it removes all boundary facets (or even just one of them), roll back the newly inserted vertex and bail out with an error instead of proceeding to removal.

  let boundary_infos =
      Self::filter_boundary_facets_by_valid_facet_sharing(tds, boundary_infos, inserted_vk);
+ if boundary_infos.is_empty() {
+     Self::rollback_created_cells_and_vertex(tds, &[], vertex, vertex_existed_before);
+     return Err(InsertionError::TriangulationState(
+         TriangulationValidationError::FailedToCreateCell {
+             message: "Preventive facet filtering rejected every cavity facet; aborting to keep the triangulation intact."
+                 .to_string(),
+         },
+     ));
+ }

(And if you expect only a subset to be dropped, take the same rollback path whenever `boundary_infos.len() < boundary_facet_handles.len()`.)
2493-2560: Rollback the vertex when create_cells_from_facet_handles fails

This helper advertises atomic semantics, but after `ensure_vertex_in_tds` we now return the "Failed to create any cells …" error without undoing the vertex insert. That leaves an orphan vertex in the TDS even though the caller sees an error: exactly the atomicity violation the helper is meant to prevent.

Before returning that error (and similarly if the preventive filter drops every facet) call `rollback_created_cells_and_vertex` so the vertex is removed when `vertex_existed_before` was false.

  let boundary_infos =
      Self::filter_boundary_facets_by_valid_facet_sharing(tds, boundary_infos, inserted_vk);
+ if boundary_infos.is_empty() {
+     Self::rollback_created_cells_and_vertex(tds, &created_cell_keys, vertex, vertex_existed_before);
+     return Err(InsertionError::TriangulationState(
+         TriangulationValidationError::FailedToCreateCell {
+             message: "Preventive facet filtering removed every boundary facet; aborting cell creation to keep the TDS consistent."
+                 .to_string(),
+         },
+     ));
+ }
+
  // Phase 2: Create all cells from deduplicated boundary facets
  // Tracking created cell keys for potential rollback
  let mut created_cell_keys = Vec::with_capacity(boundary_infos.len());
@@
  let cells_created = created_cell_keys.len();

  // Validate that we created at least some cells
  if cells_created == 0 && !facet_handles.is_empty() {
+     Self::rollback_created_cells_and_vertex(tds, &created_cell_keys, vertex, vertex_existed_before);
      return Err(InsertionError::TriangulationState(
          TriangulationValidationError::FailedToCreateCell {
              message: format!(

That restores the documented transactional behavior.
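The all-or-nothing pattern the comments above ask for can be sketched generically. This is a std-only illustration with a hypothetical `Store` type (not the crate's `Tds`): record every key created during the operation, and on any failure undo them all before surfacing the error, so the caller never observes a half-applied insertion.

```rust
/// A toy container standing in for the triangulation data structure.
struct Store {
    items: Vec<i32>,
}

impl Store {
    /// Insert a batch atomically: on error, remove everything added so far
    /// and return the store to its pre-call state.
    fn insert_batch(&mut self, batch: &[i32]) -> Result<(), String> {
        let mut created = Vec::new(); // rollback log of indices we added
        for &v in batch {
            if v < 0 {
                // Failure path: undo in reverse creation order, then bail.
                for idx in created.into_iter().rev() {
                    self.items.remove(idx);
                }
                return Err(format!("rejected {v}; store left unchanged"));
            }
            created.push(self.items.len());
            self.items.push(v);
        }
        Ok(())
    }
}

fn main() {
    let mut s = Store { items: vec![1] };
    assert!(s.insert_batch(&[2, 3]).is_ok());
    assert!(s.insert_batch(&[4, -1]).is_err());
    assert_eq!(s.items, vec![1, 2, 3]); // the failed batch was fully rolled back
}
```

The reviewed code follows the same shape: `created_cell_keys` is the rollback log, and `rollback_created_cells_and_vertex` is the undo step taken on every error path.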
🧹 Nitpick comments (5)
src/core/triangulation_data_structure.rs (3)
2181-2191: Clarify new() vs add() semantics and edge-case outcome

Great to document duplicate skipping. Please also note explicitly that if unique vertices after filtering are fewer than D+1, new() returns InsufficientVertices (while new(&[]) returns an empty TDS). This avoids surprises in batch construction.

Apply this doc tweak:

  /// # Duplicate Handling
  ///
  /// This method **silently skips** vertices with duplicate coordinates during construction.
@@
- /// **Note**: This behavior differs from [`Self::add()`], which returns an error for duplicates.
+ /// **Note**: This behavior differs from [`Self::add()`], which returns an error for duplicates.
+ /// If, after filtering, fewer than D+1 unique vertices remain, `new()` returns
+ /// `TriangulationConstructionError::InsufficientVertices`. Passing an empty slice (`&[]`)
+ /// is allowed and yields an empty triangulation.
2204-2209: List InsufficientVertices explicitly in Errors

The Errors section should call out the "insufficient unique vertices" case post-filtering.

  /// # Errors
  ///
- /// Returns a `TriangulationConstructionError` if:
+ /// Returns a `TriangulationConstructionError` if:
+ /// - There are insufficient unique vertices after duplicate filtering (fewer than D+1)
  /// - Triangulation computation fails during the Bowyer-Watson algorithm
  /// - Cell creation or validation fails
  /// - Neighbor assignment or duplicate cell removal fails
9148-9156: Avoid double is_valid() per iteration in test loop

`is_valid()` is expensive; the assertion calls it twice per iteration. Compute once and reuse.

- assert!(
-     tds.is_valid().is_ok(),
-     "TDS should remain valid after insertion {}: {:?}",
-     i + 1,
-     tds.is_valid().err()
- );
+ let valid = tds.is_valid();
+ assert!(
+     valid.is_ok(),
+     "TDS should remain valid after insertion {}: {:?}",
+     i + 1,
+     valid.err()
+ );

Also applies to: 9159-9163
tests/serialization_vertex_preservation.rs (2)
86-89: Consider using approximate comparison for consistency with other serialization tests.

This test uses exact coordinate equality via `HashSet` (which relies on `Hash` + `Eq`), but `tests/proptest_serialization.rs` uses approximate comparison with `relative_eq!` (epsilon = 1e-14) to handle JSON floating-point precision limits. For consistency and to avoid potential flakiness with JSON roundtrips, consider using the `points_approx_equal` helper pattern from `proptest_serialization.rs`.

Example refactor:

- // Verify coordinate preservation
- assert_eq!(
-     tds_coords, deser_coords,
-     "Coordinates changed during serialization"
- );
+ // Verify coordinate preservation (with tolerance for JSON precision)
+ assert_eq!(tds_coords.len(), deser_coords.len(), "Coordinate count mismatch");
+ for tds_coord in &tds_coords {
+     assert!(
+         deser_coords.iter().any(|dc| {
+             tds_coord.coords().iter().zip(dc.coords().iter())
+                 .all(|(a, b)| (a - b).abs() < 1e-14 * a.abs().max(1.0))
+         }),
+         "Coordinate {:?} not found in deserialized set (within tolerance)", tds_coord
+     );
+ }
202-205: Same concern: exact equality may be too strict for JSON roundtrips.

Similar to the earlier test, this assertion uses exact equality for coordinates after JSON serialization. Consider using approximate comparison consistent with the PR objectives and the `tests/proptest_serialization.rs` approach.

- assert_eq!(
-     original_coords, deserialized_coords,
-     "Vertex coordinates must be exactly preserved through serialization"
- );
+ // Verify coordinate preservation with tolerance for JSON precision
+ assert_eq!(original_coords.len(), deserialized_coords.len(), "Coordinate count mismatch");
+ for orig_coord in &original_coords {
+     assert!(
+         deserialized_coords.iter().any(|dc| {
+             orig_coord.coords().iter().zip(dc.coords().iter())
+                 .all(|(a, b)| (a - b).abs() < 1e-14 * a.abs().max(1.0))
+         }),
+         "Coordinate {:?} not preserved through serialization (within tolerance)", orig_coord
+     );
+ }
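The relative comparison used in these suggestions can be hand-rolled without the `approx` crate. This is a sketch in the spirit of `relative_eq!` (the real macro also supports a separate absolute epsilon for values near zero, which this simplified version omits):

```rust
/// True when `a` and `b` agree to within `max_relative` of the larger
/// magnitude; exact matches (including same-sign infinities) short-circuit.
fn relative_eq(a: f64, b: f64, max_relative: f64) -> bool {
    if a == b {
        return true;
    }
    let diff = (a - b).abs();
    diff <= max_relative * a.abs().max(b.abs())
}

fn main() {
    let x = 0.1_f64 + 0.2; // 0.30000000000000004: the classic binary-float mismatch
    assert!(x != 0.3); // exact equality fails
    assert!(relative_eq(x, 0.3, 1e-14)); // but the values agree within 1e-14
    assert!(!relative_eq(1.0, 1.1, 1e-14)); // genuinely different values still fail
}
```

Scaling the tolerance by the larger magnitude is what makes this robust across coordinate ranges, which plain `(a - b).abs() < eps` is not.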
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- cspell.json (1 hunks)
- docs/code_organization.md (1 hunks)
- justfile (3 hunks)
- src/core/algorithms/robust_bowyer_watson.rs (1 hunks)
- src/core/traits/insertion_algorithm.rs (9 hunks)
- src/core/triangulation_data_structure.rs (3 hunks)
- tests/proptest_bowyer_watson.proptest-regressions (1 hunks)
- tests/proptest_bowyer_watson.rs (1 hunks)
- tests/proptest_cell.rs (1 hunks)
- tests/proptest_convex_hull.proptest-regressions (1 hunks)
- tests/proptest_convex_hull.rs (1 hunks)
- tests/proptest_facet.rs (1 hunks)
- tests/proptest_geometry.rs (1 hunks)
- tests/proptest_quality.proptest-regressions (1 hunks)
- tests/proptest_quality.rs (1 hunks)
- tests/proptest_serialization.proptest-regressions (1 hunks)
- tests/proptest_serialization.rs (1 hunks)
- tests/serialization_vertex_preservation.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
PR: acgetchell/delaunay#60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.
Applied to files:
cspell.json
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
cspell.json, tests/proptest_quality.rs, tests/serialization_vertex_preservation.rs, tests/proptest_convex_hull.rs, docs/code_organization.md, tests/proptest_bowyer_watson.proptest-regressions, tests/proptest_serialization.rs, tests/proptest_cell.rs, tests/proptest_bowyer_watson.rs, src/core/algorithms/robust_bowyer_watson.rs, src/core/traits/insertion_algorithm.rs, tests/proptest_geometry.rs, tests/proptest_facet.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
tests/proptest_quality.rs, tests/serialization_vertex_preservation.rs, tests/proptest_convex_hull.rs, docs/code_organization.md, tests/proptest_serialization.rs, tests/proptest_cell.rs, tests/proptest_bowyer_watson.rs, src/core/algorithms/robust_bowyer_watson.rs, src/core/triangulation_data_structure.rs, tests/proptest_geometry.rs, tests/proptest_facet.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
tests/proptest_quality.rs, tests/proptest_convex_hull.rs, docs/code_organization.md, tests/proptest_serialization.rs, tests/proptest_bowyer_watson.rs, src/core/triangulation_data_structure.rs, tests/proptest_geometry.rs, tests/proptest_facet.rs
🧬 Code graph analysis (10)
tests/proptest_quality.rs (4)
- src/geometry/util.rs (3): core (1810-1810), circumradius (836-842), inradius (1207-1281)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- src/geometry/quality.rs (1): radius_ratio (222-269)
- src/core/triangulation_data_structure.rs (3): new (2280-2326), vertices (602-604), cell_keys (620-622)

tests/serialization_vertex_preservation.rs (2)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- src/core/triangulation_data_structure.rs (2): new (2280-2326), vertices (602-604)

tests/proptest_convex_hull.rs (3)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- src/core/triangulation_data_structure.rs (3): vertices (602-604), new (2280-2326), add (2468-2615)
- src/geometry/algorithms/convex_hull.rs (3): from_triangulation (650-692), is_valid_for_tds (517-523), facet_count (327-329)

tests/proptest_serialization.rs (2)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- src/core/triangulation_data_structure.rs (8): vertices (602-604), new (2280-2326), number_of_vertices (863-865), number_of_cells (1018-1020), dim (953-959), is_valid (3916-3949), cells (537-539), cells (4185-4193)

tests/proptest_cell.rs (3)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- src/core/triangulation_data_structure.rs (5): vertices (602-604), new (2280-2326), cells (537-539), cells (4185-4193), is_valid (3916-3949)
- src/core/cell.rs (1): neighbors (639-643)

tests/proptest_bowyer_watson.rs (2)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- src/core/triangulation_data_structure.rs (6): new (2280-2326), vertices (602-604), add (2468-2615), is_valid (3916-3949), cells (537-539), cells (4185-4193)

src/core/triangulation_data_structure.rs (3)
- src/core/facet.rs (2): vertices (493-514), tds (406-408)
- src/core/cell.rs (2): vertices (672-674), is_valid (1098-1138)
- src/core/vertex.rs (1): is_valid (610-628)

src/core/traits/insertion_algorithm.rs (3)
- src/core/facet.rs (9): tds (406-408), new (254-259), new (427-451), new (739-756), new (828-836), key (576-597), facet_key_from_vertices (928-941), cell_key (282-284), cell_key (392-394)
- src/core/triangulation_data_structure.rs (3): new (2280-2326), vertex_keys (611-613), cell_keys (620-622)
- src/core/cell.rs (2): new (476-499), map (411-411)

tests/proptest_geometry.rs (2)
- src/geometry/util.rs (4): circumcenter (731-802), circumradius (836-842), inradius (1207-1281), squared_norm (569-574)
- src/geometry/point.rs (1): coords (100-102)

tests/proptest_facet.rs (4)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- tests/proptest_bowyer_watson.rs (1): finite_coordinate (23-25)
- src/core/triangulation_data_structure.rs (4): vertices (602-604), new (2280-2326), cell_keys (620-622), get_cell (630-632)
- src/core/facet.rs (1): facet_vertices (1085-1109)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (8)
src/core/algorithms/robust_bowyer_watson.rs (1)
2093-2101: Relaxed cell-count assertion aligns with topology filtering

Thanks for loosening the expectation to "> 0": this keeps the guardrail that the triangulation isn't empty while avoiding false negatives when hull extension replaces facets without strictly increasing cell cardinality.
tests/proptest_convex_hull.proptest-regressions (1)
1-10: Captured regression seeds look good

Appreciate checking in the new seeds; they'll keep the convex-hull proptests reproducible as we iterate on degenerate filtering.
tests/proptest_facet.rs (1)
32-156: Facet property proptests cover the right invariants

The macro-driven suite gives us solid coverage across dimensions on vertex counts, facet validity, and facet/cardinality relationships. Nice work keeping the checks both strict and lightweight.
tests/proptest_geometry.rs (1)
32-183: Geometry proptests exercise the critical numerics

Really like how these tests pin down circumcenter/radius consistency, volume/inradius positivity, and basic norm laws with reasonable tolerances. That should catch a wide class of regressions in util math.
tests/proptest_cell.rs (1)
32-129: Cell invariants are well covered

The cell-focused proptests nicely enforce vertex uniqueness, neighbor bounds, and UUID consistency across dimensions: a great addition to the structural test net.
tests/proptest_convex_hull.rs (1)
1-292: LGTM! Well-structured property-based tests for convex hull operations.

The test suite comprehensively validates convex hull properties across dimensions 2-5. Good use of `prop_assume!` to filter degenerate configurations (lines 50, 137, 152) rather than asserting on them, which aligns with the PR objectives. The macro-based test generation effectively reduces duplication while maintaining dimension-specific parameters.

tests/proptest_serialization.rs (1)
1-212: LGTM! Appropriate handling of JSON floating-point precision.

The test suite correctly uses approximate comparison (`relative_eq!` with epsilon = 1e-14, lines 18-25) for coordinate preservation after JSON roundtrips, which aligns with the PR objectives acknowledging JSON's 15-17 significant digit precision for f64. The filtering of minimal/degenerate configurations (line 120) and comprehensive coverage of structure, validity, coordinates, and neighbor relationships provide robust serialization testing.

tests/proptest_bowyer_watson.rs (1)
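The 15-17 significant digit behavior behind these tests can be demonstrated with std formatting alone. A decimal rendering with 17 significant digits always round-trips an `f64` exactly; shorter renderings, which some JSON writers emit, may land on a neighboring value, which is exactly why the tests compare with a relative tolerance:

```rust
fn main() {
    let x = 0.1_f64 + 0.2; // stored as 0.30000000000000004...

    // 17 significant digits: exact round-trip is guaranteed for any f64.
    let s17 = format!("{x:.16e}"); // 1 digit before the point + 16 after
    let back17: f64 = s17.parse().unwrap();
    assert_eq!(back17, x);

    // Only 15 significant digits: the round-trip lands on 0.3 instead.
    let s15 = format!("{x:.14e}");
    let back15: f64 = s15.parse().unwrap();
    assert!(back15 != x); // exact equality fails...
    assert!((back15 - x).abs() <= 1e-14 * x.abs()); // ...a relative check passes
}
```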
1-255: LGTM! Comprehensive Bowyer-Watson property tests.

The test suite thoroughly validates insertion algorithm properties across dimensions 2-5. The relaxed cell count assertion (lines 229-237) correctly acknowledges that cell count may vary due to topology-safety filtering and degenerate case handling, aligning with the PR objectives and related changes in `src/core/algorithms/robust_bowyer_watson.rs`. The strict vertex count check (line 221) and separate tracking of successful insertions (line 87) demonstrate appropriate invariant verification.
Implement atomic insertion semantics by adding rollback mechanisms when cell creation fails during vertex insertion. This prevents TDS corruption from partial insertions that leave dangling references.

Key changes:
- Add rollback after preventive facet filtering to handle edge cases where all boundary facets are unexpectedly removed
- Add rollback when cell creation yields zero cells despite valid facets, preventing inconsistent triangulation state
- Implement `Hash` + `Eq` for `Point` for efficient duplicate detection in `Tds::new()` using a `HashSet` instead of an O(n²) sequential scan
- Add the `Point` import and move `HashSet` to top-level imports
- Fix the degenerate simplex generator in `proptest_quality` to create full-dimensional thin simplices using `safe_usize_to_scalar`
- Add a proptest regression file for insertion_algorithm edge cases
- Improve serialization test coverage for vertex preservation

The rollback logic ensures failed insertions don't corrupt the triangulation, maintaining the invariant that operations either fully succeed or leave the TDS unchanged. Property tests now properly exercise quality metric comparisons between regular and degenerate simplices.

Fixes edge cases where invalid facet sharing or degenerate geometry would previously cause TDS invariant violations.
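The `Hash` + `Eq` change described above can be sketched with a hypothetical point type (`Pt` here, not the crate's `Point`). The subtlety is that `f64` is not `Eq`, so the impl compares and hashes the IEEE-754 bit pattern, which keeps `Hash` consistent with `Eq` and lets a `HashSet` detect duplicates in one O(n) pass:

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Toy fixed-dimension point with bitwise float equality.
#[derive(Debug, Clone, Copy)]
struct Pt<const D: usize>([f64; D]);

impl<const D: usize> PartialEq for Pt<D> {
    fn eq(&self, other: &Self) -> bool {
        // Bit-pattern equality: NaN == NaN, but 0.0 != -0.0.
        self.0.iter().zip(other.0.iter()).all(|(a, b)| a.to_bits() == b.to_bits())
    }
}
impl<const D: usize> Eq for Pt<D> {}

impl<const D: usize> Hash for Pt<D> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        for c in &self.0 {
            c.to_bits().hash(state); // hash must agree with the Eq above
        }
    }
}

fn main() {
    let pts = [Pt([0.0, 0.0, 0.0]), Pt([1.0, 0.0, 0.0]), Pt([0.0, 0.0, 0.0])];
    let mut seen = HashSet::new();
    // One linear pass replaces the O(n^2) pairwise coordinate comparison.
    let unique: Vec<_> = pts.iter().filter(|p| seen.insert(**p)).collect();
    assert_eq!(unique.len(), 2); // the duplicate origin is filtered out
}
```

Hashing bit patterns is the standard way to make floats usable as hash keys; whether `0.0` and `-0.0` should count as duplicates is a design decision the real impl has to make explicitly.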
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/proptest_quality.rs (1)
148-158: Past review concern has been addressed.

The degenerate simplex generator now creates a thin but full-dimensional simplex as intended. The construction starts with the origin and adds D vertices stretched along the first axis with unique small offsets in orthogonal directions, ensuring the resulting simplex is not collapsed to a lower dimension. The `if let` guards on lines 162-183 will now actually exercise the quality comparison assertion.
🧹 Nitpick comments (2)
src/core/triangulation_data_structure.rs (2)
2297-2305: construction_state should reflect unique vertex count (post-dedup)

State is set from the raw input length before filtering duplicates. Recalculate after the dedup loop for accuracy.

      generation: Arc::new(AtomicU64::new(0)),
  };
@@
  for vertex in vertices { ... }
+
+ // Recompute construction_state based on unique vertices after dedup
+ let uniq = tds.vertices.len();
+ tds.construction_state = if uniq == 0 {
+     TriangulationConstructionState::Incomplete(0)
+ } else if uniq < D + 1 {
+     TriangulationConstructionState::Incomplete(uniq)
+ } else {
+     TriangulationConstructionState::Constructed
+ };
2312-2327: Duplicate UUIDs with duplicate coordinates are skipped silently; clarify intent

Because coordinate duplicates are skipped before the UUID check, a second vertex with identical coords and the same UUID won't trigger DuplicateUuid. That's likely OK, but please document this nuance in the new() "Duplicate Handling" section (or add a debug assertion to catch same-UUID cases even when coords match).
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- proptest-regressions/core/traits/insertion_algorithm.txt (1 hunks)
- src/core/traits/insertion_algorithm.rs (11 hunks)
- src/core/triangulation_data_structure.rs (8 hunks)
- src/geometry/point.rs (7 hunks)
- tests/proptest_quality.rs (1 hunks)
- tests/serialization_vertex_preservation.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- proptest-regressions/core/traits/insertion_algorithm.txt
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/geometry/point.rs, tests/serialization_vertex_preservation.rs, src/core/traits/insertion_algorithm.rs, src/core/triangulation_data_structure.rs, tests/proptest_quality.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
src/geometry/point.rs, tests/serialization_vertex_preservation.rs, src/core/traits/insertion_algorithm.rs, src/core/triangulation_data_structure.rs, tests/proptest_quality.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
src/geometry/point.rs, src/core/triangulation_data_structure.rs, tests/proptest_quality.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/triangulation_data_structure.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/triangulation_data_structure.rs
🧬 Code graph analysis (5)
src/geometry/point.rs (2)
- src/core/triangulation_data_structure.rs (2): new (2287-2340), dim (955-961)
- src/geometry/traits/coordinate.rs (7): new (676-676), to_array (684-684), dim (663-665), validate (754-754), origin (723-728), get (706-706), ordered_equals (780-780)

tests/serialization_vertex_preservation.rs (4)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- tests/proptest_serialization.rs (1): points_approx_equal (20-25)
- src/core/triangulation_data_structure.rs (2): new (2287-2340), vertices (604-606)
- src/geometry/point.rs (1): new (115-117)

src/core/traits/insertion_algorithm.rs (4)
- src/core/facet.rs (13): tds (406-408), new (254-259), new (427-451), new (739-756), new (828-836), key (576-597), facet_key_from_vertices (928-941), cell_key (282-284), cell_key (392-394), vertices (493-514), facet_index (307-309), facet_index (399-401), facet_vertices (1085-1109)
- src/core/triangulation_data_structure.rs (7): new (2287-2340), vertex_keys (613-615), cell_keys (622-624), vertices (604-606), number_of_cells (1020-1022), number_of_vertices (865-867), empty (2169-2178)
- src/core/cell.rs (4): new (476-499), map (411-411), vertices (672-674), number_of_vertices (757-759)
- src/core/boundary.rs (1): boundary_facets (76-84)

src/core/triangulation_data_structure.rs (2)
- src/core/vertex.rs (2): point (475-477), uuid (530-532)
- src/geometry/point.rs (1): new (115-117)

tests/proptest_quality.rs (5)
- src/geometry/util.rs (3): core (1810-1810), circumradius (836-842), inradius (1207-1281)
- src/core/vertex.rs (2): point (475-477), from_points (419-424)
- src/geometry/quality.rs (1): radius_ratio (222-269)
- src/core/triangulation_data_structure.rs (3): new (2287-2340), vertices (604-606), cell_keys (622-624)
- src/geometry/point.rs (4): new (115-117), from (404-406), from (423-425), coords (100-102)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: performance-regression
- GitHub Check: Code Coverage
- GitHub Check: Codacy Security Scan
- GitHub Check: build (macos-latest)
- GitHub Check: build (ubuntu-latest)
- GitHub Check: build (windows-latest)
🔇 Additional comments (11)
tests/proptest_quality.rs (2)
1-27: LGTM - Well-structured test module setup.

The documentation clearly describes the properties being tested, and the `finite_coordinate()` strategy appropriately generates finite values in a reasonable range for geometric tests.
192-195: LGTM - Appropriate macro invocations.

The test generation covers dimensions 2-5 with appropriate vertex count ranges for each dimension. The `num_points` parameter is correctly set to D+1 for simplex construction.

src/geometry/point.rs (3)
463-753: Excellent refactoring with macro-driven dimensional tests. The macro-based approach effectively reduces code duplication while maintaining comprehensive test coverage across dimensions 2D-5D. The test patterns (creation, equality, hashing, ordering, validation, serialization, origin, hashmap, copy) cover all essential Point behaviors consistently. The use of `assert_relative_eq!` for floating-point comparisons is appropriate.
2520-2663: LGTM - Comprehensive TryFrom conversion testing. The tests thoroughly cover error cases (overflow, non-finite results) and successful conversions. Error detail verification (coordinate_index, coordinate_value) ensures proper error reporting. Edge cases and various integer-to-float conversions are appropriately tested.
2684-2890: LGTM - Thorough explicit method and 1D point testing. The explicit tests for `dim()`, `to_array()`, `ordered_equals()`, and `hash_coordinate()` ensure direct API coverage beyond trait implementations. The comprehensive 1D point tests are particularly valuable as an edge case, verifying that all Point behaviors work correctly in the minimal dimension.
tests/serialization_vertex_preservation.rs (4)
1-27: LGTM - Clear documentation and appropriate tolerance. The module documentation clearly explains the test lifecycle and what's being investigated. The `points_approx_equal` helper uses an appropriate tolerance (1e-14) for JSON serialization roundtrips, consistent with other serialization tests in the codebase.
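The relative-tolerance approach these tests rely on can be sketched with a minimal helper (illustrative only, not approx's actual `relative_eq!` implementation or the crate's `points_approx_equal`): two values match when their difference is small relative to the larger magnitude, which tolerates the last-ulp drift a JSON f64 roundtrip can introduce.

```rust
/// Relative floating-point comparison in the spirit of approx's
/// `relative_eq!` (simplified sketch). Returns true when the absolute
/// difference is within `max_relative` times the larger magnitude.
fn relative_eq(a: f64, b: f64, max_relative: f64) -> bool {
    if a == b {
        return true; // exact match, including signed zeros
    }
    let diff = (a - b).abs();
    diff <= max_relative * a.abs().max(b.abs())
}
```

A relative bound (rather than a fixed absolute epsilon) is what makes a single tolerance like 1e-14 work across coordinates of very different magnitudes.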
28-111: LGTM - Comprehensive test with excellent diagnostics. The test correctly distinguishes between expected vertex loss during Tds construction (duplicate removal) and critical vertex loss during serialization. The comprehensive logging and error messages help diagnose issues quickly. The coordinate verification with tolerance (lines 101-108) is appropriate for JSON precision.
113-151: LGTM - Solid baseline test. This test establishes the baseline expectation that unique vertices should be preserved through both Tds construction and serialization. The two-stage verification (lines 139-148) clearly separates concerns and makes test failures easy to diagnose.
153-238: LGTM - Excellent stress test and exact verification. The many-duplicates test (lines 153-195) effectively stress-tests duplicate handling with 10 copies of the same coordinate. The exact coordinate preservation test (lines 197-238) goes beyond count comparison to verify that each original coordinate is preserved through the roundtrip with appropriate tolerance, catching potential coordinate corruption issues.
src/core/triangulation_data_structure.rs (2)
src/core/triangulation_data_structure.rs (2)
681-725: Neighbor assignment guard looks solid. Early facet over-subscription check (len ≥ 2 before push) is a good safety net; saturating sizes and SmallBuffer reuse are nice touches. LGTM.
2308-2333: Review comment is incorrect: Point<T, D> already provides NaN-aware Eq/Hash semantics. Point<T, D> implements custom Hash and PartialEq that delegate to `ordered_equals()`, which calls `ordered_eq()` on each coordinate. OrderedEq is defined in traits/coordinate.rs and provides NaN-aware equality (NaN == NaN) and treats 0.0 == -0.0. Tests confirm NaN points work reliably as HashMap keys. The code at lines 2308-2333 using `HashSet<Point<T, D>>` for dedup is safe and correct. No refactor needed. Likely an incorrect or invalid review comment.
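The NaN-aware semantics described above can be modeled with a small stand-in type (hypothetical, not the crate's Point): hash and compare coordinates so that NaN == NaN and 0.0 == -0.0, which is exactly what makes a HashSet-based dedup well-defined even for degenerate inputs.

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Illustrative stand-in for a point with NaN-aware Eq/Hash:
/// all NaNs compare equal, and -0.0 compares equal to 0.0.
#[derive(Debug, Clone, Copy)]
struct OrderedPoint<const D: usize>([f64; D]);

impl<const D: usize> PartialEq for OrderedPoint<D> {
    fn eq(&self, other: &Self) -> bool {
        self.0.iter().zip(other.0.iter()).all(|(a, b)| {
            // NaN == NaN; adding 0.0 normalizes -0.0 to +0.0
            (a.is_nan() && b.is_nan()) || (a + 0.0) == (b + 0.0)
        })
    }
}
impl<const D: usize> Eq for OrderedPoint<D> {}

impl<const D: usize> Hash for OrderedPoint<D> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        for c in &self.0 {
            // Canonicalize so all NaNs hash alike and -0.0 hashes as 0.0,
            // keeping Hash consistent with the Eq impl above.
            let canon = if c.is_nan() { f64::NAN } else { c + 0.0 };
            canon.to_bits().hash(state);
        }
    }
}

/// O(n) duplicate filtering via a HashSet, preserving first occurrences.
fn dedup_points<const D: usize>(pts: &[OrderedPoint<D>]) -> Vec<OrderedPoint<D>> {
    let mut seen = HashSet::new();
    pts.iter().copied().filter(|p| seen.insert(*p)).collect()
}
```

The key design point is that Hash must agree with Eq: both normalize NaN and signed zero before comparing/hashing, so the set never sees two "equal" points with different hashes.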
Fix critical bug in preventive facet filtering where all valid interior boundary facets were incorrectly rejected, and address UUID duplicate detection and construction_state accuracy issues.

**Preventive Facet Filter Fix (CRITICAL)**
- Fixed filter_boundary_facets_by_valid_facet_sharing to discount bad cell contributions before checking facet sharing limits
- Bad cells are about to be deleted, but were being counted as if they would remain, causing every interior facet to be flagged as over-sharing
- Added unit test test_preventive_filter_does_not_reject_valid_interior_facets to prevent regression of this critical bug
- Captured bad_cell before moving info to avoid borrow issues

**UUID Duplicate Detection Fix**
- Reordered Tds::new() to check UUID collisions before coordinate deduplication
- Previously duplicate UUIDs were missed when coordinates were also duplicated
- Added Vertex::new_with_uuid() test helper (marked #[doc(hidden)])
- Added 4 integration tests for UUID collision detection scenarios

**Construction State Accuracy Fix**
- Recalculate construction_state after deduplication loop in Tds::new()
- Now reflects actual unique vertex count instead of raw input length
- Added 2 integration tests verifying correct state calculation

All quality checks pass (979 unit tests, 8 integration tests, clippy pedantic). The preventive filter now correctly allows valid cavity reconstructions to proceed.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/traits/insertion_algorithm.rs (1)
279-301: Integer overflow risk: "saturating_*_for_bbox" are not saturating for integers.

Docs promise saturating math for integer CoordinateScalar, but implementations use plain +/- which can debug-panic or wrap. This affects is_vertex_likely_exterior for integer Ts.

Fix by using checked_* with Bounded fallback; keep plain ops for floats.

```diff
-/// Saturating subtraction for bbox expansion - prevents overflow panics.
-/// For integer types: uses `saturating_sub`. For floats: normal subtraction.
-#[inline]
-fn saturating_sub_for_bbox<T>(a: T, b: T) -> T
-where
-    T: CoordinateScalar + Sub<Output = T>,
-{
-    // For floating-point types: normal subtraction (overflow → -infinity)
-    // For integer types: this trait bound ensures safe arithmetic
-    a - b
-}
+/// Saturating subtraction for bbox expansion.
+#[inline]
+fn saturating_sub_for_bbox<T>(a: T, b: T) -> T
+where
+    T: CoordinateScalar
+        + Sub<Output = T>
+        + num_traits::CheckedSub
+        + num_traits::Bounded
+        + num_traits::Zero
+        + PartialOrd,
+{
+    // For floats, checked_sub() returns Some; for ints, use saturating fallback on overflow.
+    a.checked_sub(&b).unwrap_or_else(|| {
+        // If subtracting a positive pushes below min, clamp to min; otherwise clamp to max.
+        if b >= T::zero() { T::min_value() } else { T::max_value() }
+    })
+}

-/// Saturating addition for bbox expansion - prevents overflow panics.
-/// For integer types: uses `saturating_add`. For floats: normal addition.
-#[inline]
-fn saturating_add_for_bbox<T>(a: T, b: T) -> T
-where
-    T: CoordinateScalar + Add<Output = T>,
-{
-    // For floating-point types: normal addition (overflow → infinity)
-    // For integer types: this trait bound ensures safe arithmetic
-    a + b
-}
+/// Saturating addition for bbox expansion.
+#[inline]
+fn saturating_add_for_bbox<T>(a: T, b: T) -> T
+where
+    T: CoordinateScalar
+        + Add<Output = T>
+        + num_traits::CheckedAdd
+        + num_traits::Bounded
+        + num_traits::Zero
+        + PartialOrd,
+{
+    a.checked_add(&b).unwrap_or_else(|| {
+        if b >= T::zero() { T::max_value() } else { T::min_value() }
+    })
+}
```
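The float/integer distinction at issue can be shown in a standalone sketch (names are illustrative, not the crate's helpers): IEEE-754 floats overflow to ±infinity under plain arithmetic and never panic, while integers need checked arithmetic with a clamped fallback to get true saturating behavior.

```rust
/// Bbox expansion for floats: plain subtraction is safe because
/// IEEE-754 overflow flows to -infinity instead of panicking.
fn bbox_sub_f64(a: f64, b: f64) -> f64 {
    a - b
}

/// Bbox expansion for integers: `checked_sub` detects overflow, and we
/// clamp to the type's bounds, mimicking `saturating_sub` semantics.
fn bbox_sub_i64(a: i64, b: i64) -> i64 {
    a.checked_sub(b)
        .unwrap_or(if b >= 0 { i64::MIN } else { i64::MAX })
}
```

This is why plain `a - b` is fine when the scalar is constrained to f32/f64, but would debug-panic or wrap for integer types without the checked/clamped path.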
♻️ Duplicate comments (1)
src/core/traits/insertion_algorithm.rs (1)
2664-2783: Preventive facet filter fix LGTM; resolves prior "rejects all interior facets" bug. You now discount the bad cell's contribution when evaluating/accumulating facet counts, aligning the pre-check with post-commit topology. This addresses the earlier failure mode where every interior boundary facet was dropped.
Based on past review feedback.
🧹 Nitpick comments (3)
src/core/vertex.rs (1)
629-655: Consider restricting visibility to prevent misuse.

The `new_with_uuid` function is documented as a test helper and marked `#[doc(hidden)]`, but it's still `pub`, which means external crates can discover and use it through IDE autocomplete or by reading the source. Since this function bypasses UUID auto-generation and performs no validation, it could be misused to create invalid vertices.

Consider one of these alternatives:

Option 1 (Recommended): Use `pub(crate)` visibility

```diff
-    #[doc(hidden)]
-    pub const fn new_with_uuid(point: Point<T, D>, uuid: Uuid, data: Option<U>) -> Self {
+    /// Create a vertex for testing with a specific UUID.
+    ///
+    /// **Internal use only**: This helper is for integration tests.
+    pub(crate) const fn new_with_uuid(point: Point<T, D>, uuid: Uuid, data: Option<U>) -> Self {
```

This restricts usage to the same crate while still allowing integration tests (which are part of the crate) to use it.

Option 2: Use a test-only module

Move this to a `#[cfg(test)]` module or `pub mod test_helpers` that's clearly marked as test infrastructure.

src/core/traits/insertion_algorithm.rs (2)
2526-2536: Clearer error path and earlier exit for empty facet handle list.

create_cells_from_facet_handles errors when the filtered set is empty, but the message blames "preventive filtering" even for an initially empty handle list, and we also insert+rollback a vertex unnecessarily.

- Improve UX: short-circuit if facet_handles.is_empty() before ensure_vertex_in_tds.
- Make the error message reflect the actual cause.

```diff
-        // Phase 1.5: Filter boundary facets to prevent invalid facet sharing
+        // Early exit: empty input
+        if facet_handles.is_empty() {
+            return Err(InsertionError::TriangulationState(
+                TriangulationValidationError::FailedToCreateCell {
+                    message: "No facet handles provided for cell creation".to_string(),
+                },
+            ));
+        }
+
+        // Phase 1.5: Filter boundary facets to prevent invalid facet sharing
         // This prevents creating cells that would violate the "facet shared by at most 2 cells" constraint
         let boundary_infos =
             Self::filter_boundary_facets_by_valid_facet_sharing(tds, boundary_infos, inserted_vk);

         // Hard-stop if preventive filter removed all boundary facets
         // Creating zero cells would be invalid
         if boundary_infos.is_empty() {
             Self::rollback_created_cells_and_vertex(tds, &[], vertex, vertex_existed_before);
             return Err(InsertionError::TriangulationState(
                 TriangulationValidationError::FailedToCreateCell {
-                    message: "Preventive facet filtering removed every boundary facet; aborting cell creation to keep the TDS consistent."
+                    message: "No boundary facets available after filtering; aborting to keep the TDS consistent."
                         .to_string(),
                 },
             ));
         }
```

Also applies to: 3899-3921
2615-2634: Dedup set choice and micro-alloc nit.

Current dedup works; consider FastHashSet for consistency/perf and preallocate to boundary_infos.len().

```diff
-        use std::collections::HashSet;
-        let mut seen_vertex_sets = HashSet::new();
-        let mut deduplicated = Vec::new();
+        use crate::core::collections::{FastHashSet, fast_hash_set_with_capacity};
+        let mut seen_vertex_sets: FastHashSet<Vec<VertexKey>> =
+            fast_hash_set_with_capacity(boundary_infos.len());
+        let mut deduplicated = Vec::with_capacity(boundary_infos.len());
```
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/core/traits/insertion_algorithm.rs (11 hunks)
- src/core/triangulation_data_structure.rs (8 hunks)
- src/core/vertex.rs (1 hunks)
- tests/tds_basic_integration.rs (2 hunks)
🧰 Additional context used
🧠 Learnings (6)
📓 Common learnings
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
tests/tds_basic_integration.rs, src/core/triangulation_data_structure.rs, src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
tests/tds_basic_integration.rs, src/core/triangulation_data_structure.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
src/core/triangulation_data_structure.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/triangulation_data_structure.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/triangulation_data_structure.rs
🧬 Code graph analysis (4)
src/core/vertex.rs (1)
src/core/cell.rs (1)
uuid(784-786)
tests/tds_basic_integration.rs (3)
src/core/vertex.rs (3)
point(475-477), uuid(530-532), new_with_uuid(648-655)
src/core/triangulation_data_structure.rs (2)
new(2287-2349), vertices(604-606)
src/geometry/point.rs (1)
new(115-117)
src/core/triangulation_data_structure.rs (2)
src/core/vertex.rs (2)
point(475-477), uuid(530-532)
src/geometry/point.rs (1)
new(115-117)
src/core/traits/insertion_algorithm.rs (4)
src/core/facet.rs (15)
tds(406-408), new(254-259), new(427-451), new(739-756), new(828-836), key(576-597), facet_key_from_vertices(928-941), cell_key(282-284), cell_key(392-394), next(767-802), next(847-859), vertices(493-514), facet_index(307-309), facet_index(399-401), facet_vertices(1085-1109)
src/core/triangulation_data_structure.rs (6)
new(2287-2349), cell_keys(622-624), vertices(604-606), number_of_vertices(865-867), empty(2169-2178), is_valid(3939-3972)
src/core/cell.rs (5)
new(476-499), map(411-411), vertices(672-674), number_of_vertices(757-759), is_valid(1098-1138)
src/core/boundary.rs (1)
boundary_facets(76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: performance-regression
- GitHub Check: build (windows-latest)
- GitHub Check: build (macos-latest)
- GitHub Check: build (ubuntu-latest)
- GitHub Check: Code Coverage
- GitHub Check: Codacy Security Scan
🔇 Additional comments (9)
tests/tds_basic_integration.rs (5)
14-14: LGTM! The new imports are necessary for the duplicate handling and construction state tests added below.
Also applies to: 18-18
176-201: LGTM! This test correctly verifies that duplicate UUIDs are caught even when coordinates are identical, which validates the fix mentioned in the PR objectives where UUID checking happens before coordinate deduplication.
203-226: LGTM! This test complements the previous test by verifying duplicate UUID detection when coordinates differ, ensuring UUID uniqueness is enforced independently of coordinate deduplication.
228-259: LGTM! This test effectively verifies that the construction state is calculated based on unique vertices after deduplication. The choice of 2D (requiring only 3 vertices) makes the test more focused, and the case-insensitive error message check is appropriately robust.
261-299: LGTM! This test provides good coverage of the successful path where deduplication leaves enough unique vertices for construction. The use of pattern matching on `TriangulationConstructionState::Constructed` is more robust than string-based error checking, and the test complements the previous insufficient-vertices test well.
src/core/triangulation_data_structure.rs (1)
2310-2332: Great call on HashSet-based dedupe. Switching to `HashSet<Point<…>>` brings the duplicate filter down to O(n) while preserving the existing UUID guard semantics. Nice tidy win.
src/core/traits/insertion_algorithm.rs (3)
3292-3305: Defense-in-depth retained after preventive filtering. Keeping fix_invalid_facet_sharing post-insertion is good fallback coverage for edge cases when the facet map couldn't be built or unseen cases slip through.
1588-1591: Transactional Phase 1/2 prechecks are correctly ordered. Dedup first, ensure vertex, derive inserted_vk, then apply preventive facet filter with rollback on empty result. This sequencing avoids corrupting the TDS if reconstruction would be impossible.
If available, run the new regression covering interior facets to ensure it passes consistently.
Also applies to: 1610-1626
7451-7554: Regression for preventive filter is solid. The test demonstrates the earlier false-positive and verifies at least one interior boundary facet survives filtering. Good guard against regressions.
Add early exit for empty facet handles, optimize deduplication, clarify documentation, and restrict test helper visibility.

**Early Exit for Empty Input**
- Add check at start of create_cells_from_facet_handles for empty handles
- Returns clear error "No facet handles provided" instead of misleading "preventive filtering" message
- Avoids unnecessary vertex insertion and rollback operations

**Improved Error Messages**
- Changed post-filtering error to "No boundary facets available after filtering"
- More accurate since empty result can come from filtering, deduplication, or empty input

**Performance Optimization**
- Replace std::HashSet with FastHashSet in deduplicate_boundary_facet_info
- Pre-allocate both collections with boundary_infos.len() capacity
- Improves deduplication performance and consistency with codebase

**Documentation Updates**
- Clarify saturating_add/sub_for_bbox only support floats (not integers)
- CoordinateScalar is only implemented for f32/f64
- Update test comments to reflect early exit behavior
- Add prominent warning to Vertex::new_with_uuid test helper
- Keep pub visibility for integration tests but enhance safety warnings
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/traits/insertion_algorithm.rs (1)
3898-3936: Align empty-handles tests with new API contract (now errors).

One test asserts Err (good); another swallows Err with unwrap_or(0). Make them consistent and assert the error explicitly.

```diff
-    let cells_created = IncrementalBowyerWatson::<f64, Option<()>, Option<()>, 3>::create_cells_from_facet_handles(
+    let result = IncrementalBowyerWatson::<f64, Option<()>, Option<()>, 3>::create_cells_from_facet_handles(
         &mut tds,
         &empty_facet_handles,
         &another_vertex,
-    ).unwrap_or(0);
-
-    assert_eq!(
-        cells_created, 0,
-        "Should create no cells from empty boundary facets"
-    );
+    );
+    assert!(result.is_err(), "Empty facet handle list should error");
```

Also applies to: 5719-5733
♻️ Duplicate comments (1)
src/core/traits/insertion_algorithm.rs (1)
2648-2795: Preventive filter now discounts the bad cell, resolving the earlier oversharing false positives. You subtract the bad cell from facet counts both when checking and updating, preventing legitimate interior boundary facets from being dropped. This addresses the earlier bug where every interior boundary facet was rejected.
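The fix can be modeled in miniature (all types and names here are illustrative, not the crate's API): a facet may be shared by at most 2 cells, but cells in the cavity (`bad_cells`) are about to be deleted, so only surviving cells count toward that limit when deciding whether a candidate boundary facet can take a new cell.

```rust
use std::collections::{HashMap, HashSet};

/// Minimal model of the preventive-filter fix. `facet_to_cells` maps a
/// facet key to the cells currently sharing it; cells in `bad_cells`
/// are about to be removed and must not count toward the 2-cell limit.
fn filter_boundary_facets(
    facet_to_cells: &HashMap<u64, Vec<u32>>,
    bad_cells: &HashSet<u32>,
    candidate_facets: &[u64],
) -> Vec<u64> {
    // Running count of new cells we have already committed per facet.
    let mut pending_new: HashMap<u64, usize> = HashMap::new();
    candidate_facets
        .iter()
        .copied()
        .filter(|fk| {
            // Count only cells that survive the cavity removal.
            let surviving = facet_to_cells.get(fk).map_or(0, |cells| {
                cells.iter().filter(|c| !bad_cells.contains(c)).count()
            });
            let pending = pending_new.entry(*fk).or_insert(0);
            if surviving + *pending >= 2 {
                false // adding another cell would over-share this facet
            } else {
                *pending += 1;
                true
            }
        })
        .collect()
}
```

Before the fix, the bad cell was counted as if it would remain: an interior boundary facet (one bad cell + one surviving cell) appeared to already have 2 owners and was rejected, which is how every interior facet ended up filtered out.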
🧹 Nitpick comments (3)
src/core/traits/insertion_algorithm.rs (3)
2606-2647: Dedup by sorted vertex sets is fine; minor perf nit. Storing Vec in the set allocates per facet. If you want to shave allocations, consider hashing the SmallBuffer slice once (facet_key_from_vertices) and deduplicating on the canonical u64 instead.
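The canonical-key idea can be sketched as follows (an illustrative stand-in; `facet_key` below is not the crate's facet_key_from_vertices): hash the sorted vertex ids to a single u64 so the same facet, listed in any vertex order, maps to the same key, and dedup on that key instead of storing owned vertex vectors.

```rust
use std::collections::HashSet;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Canonical key for a facet: hash the sorted vertex ids so vertex
/// order doesn't matter. (Sketch of the facet_key_from_vertices idea.)
fn facet_key(vertices: &[u64]) -> u64 {
    let mut sorted: Vec<u64> = vertices.to_vec();
    sorted.sort_unstable();
    let mut h = DefaultHasher::new();
    sorted.hash(&mut h);
    h.finish()
}

/// Dedup facets on the u64 key: one u64 per facet in the seen-set
/// instead of an owned Vec<VertexKey> per facet.
fn dedup_facets(facets: &[Vec<u64>]) -> Vec<Vec<u64>> {
    let mut seen: HashSet<u64> = HashSet::with_capacity(facets.len());
    facets
        .iter()
        .filter(|f| seen.insert(facet_key(f.as_slice())))
        .cloned()
        .collect()
}
```

The trade-off is the usual hashing caveat: distinct vertex sets could in principle collide on the u64, so this is a performance optimization appropriate when the key function is the same canonical hash used elsewhere in the codebase.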
2684-2701: Use FastHashMap for facet_cell_counts to match hot-path containers.

You already rely on FastHashSet/Map elsewhere. Swapping out std::collections::HashMap here trims hashing overhead in the N·D inner loop.

```diff
-        use std::collections::HashMap;
+        use crate::core::collections::{FastHashMap, fast_hash_map_with_capacity};
 ...
-        let mut facet_cell_counts: HashMap<u64, usize> = HashMap::new();
+        let mut facet_cell_counts: FastHashMap<u64, usize> =
+            fast_hash_map_with_capacity(facet_map.len());
```
279-303: Helper naming vs behavior (floats). saturating_{sub,add}_for_bbox perform plain FP +/- (no integer saturating). Given that T: CoordinateScalar excludes integers here, the code is correct. Consider renaming or tightening the doc comment to "FP-safe bbox +/-" to avoid implying integer saturation.
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- src/core/traits/insertion_algorithm.rs (12 hunks)
- src/core/vertex.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/core/vertex.rs
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (4)
src/core/facet.rs (13)
tds(406-408), new(254-259), new(427-451), new(739-756), new(828-836), key(576-597), facet_key_from_vertices(928-941), cell_key(282-284), cell_key(392-394), vertices(493-514), facet_index(307-309), facet_index(399-401), facet_vertices(1085-1109)
src/core/cell.rs (4)
new(476-499), map(411-411), vertices(672-674), number_of_vertices(757-759)
src/core/collections.rs (2)
fast_hash_set_with_capacity(806-808), fast_hash_set_with_capacity(902-902)
src/core/boundary.rs (1)
boundary_facets(76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Code Coverage
- GitHub Check: build (windows-latest)
- GitHub Check: build (ubuntu-latest)
- GitHub Check: Codacy Security Scan
- GitHub Check: build (macos-latest)
- GitHub Check: performance-regression
🔇 Additional comments (4)
src/core/traits/insertion_algorithm.rs (4)
1590-1593: Good: dedup + preventive filtering wired into transactional flow. This sequencing (dedup → ensure vertex → preventive filter → rollback on empty) is correct and avoids creating duplicate/overshared cells while keeping the TDS atomic. Looks good.
Also applies to: 1612-1627
2503-2511: Early error on empty facet handle list is correct and prevents side effects. Defensive early-exit before any insertion is the right call for atomicity.
3304-3312: Defense-in-depth keeps topology safe. Keeping fix_invalid_facet_sharing post-creation as a safety net is prudent given preventive filtering can be bypassed by other paths.
1118-1187: Direct T comparisons in bbox heuristics are fine. is_vertex_likely_exterior uses < and > on T; CoordinateScalar guarantees PartialOrd via Float, so no f64 bridging needed. Based on learnings
Reduce allocations in deduplication and improve hashing performance in the preventive facet filter's inner loop.

**Deduplication Optimization**
- Use u64 facet keys instead of Vec<VertexKey> for deduplication
- Leverages canonical facet_key_from_vertices() hashing
- Eliminates Vec allocation per facet (N allocations saved)
- Consistent with hashing used throughout codebase

**Facet Filter Optimization**
- Replace std::HashMap with FastHashMap in filter_boundary_facets_by_valid_facet_sharing
- Pre-allocate with facet_map.len() capacity to avoid rehashing
- Reduces hashing overhead in N·D inner loop
- Improves consistency with other hot-path containers

**Documentation Clarification**
- Clarify saturating_add/sub_for_bbox perform standard FP arithmetic
- Add note explaining "saturating" naming is for bbox semantic consistency
- Not actual integer saturation since CoordinateScalar only supports floats

**Test Alignment**
- Fix test_create_cells_from_boundary_facets to assert error on empty input
- Remove unwrap_or(0) pattern in favor of explicit error assertion
- Consistent with new API contract where empty handles error immediately
Actionable comments posted: 0
🧹 Nitpick comments (3)
src/core/traits/insertion_algorithm.rs (3)
502-538: Drop `const` from mutating stats methods (unnecessary MSRV pressure).

These methods mutate `self` and don't need to be `const`. Keeping them non-const avoids avoidable MSRV constraints.

```diff
-    pub const fn reset(&mut self) {
+    pub fn reset(&mut self) {
         *self = Self::new();
     }
-    pub const fn record_vertex_insertion(&mut self, info: &InsertionInfo) {
+    pub fn record_vertex_insertion(&mut self, info: &InsertionInfo) {
         self.vertices_processed += 1;
         self.total_cells_created += info.cells_created;
         self.total_cells_removed += info.cells_removed;
         …
     }
-    pub const fn record_fallback_usage(&mut self) {
+    pub fn record_fallback_usage(&mut self) {
         self.fallback_strategies_used += 1;
     }
-    pub const fn record_cavity_boundary_failure(&mut self) {
+    pub fn record_cavity_boundary_failure(&mut self) {
         self.cavity_boundary_failures += 1;
     }
-    pub const fn record_cavity_boundary_recovery(&mut self) {
+    pub fn record_cavity_boundary_recovery(&mut self) {
         self.cavity_boundary_recoveries += 1;
     }
```
279-310: Clarify naming of FP bbox helpers (non-saturating). These perform plain FP +/- (as documented). To avoid confusion with integer "saturating" ops, consider renaming to `bbox_sub_fp`/`bbox_add_fp`, or keep the names and add a brief "FP-only, not integer-saturating" note at call sites.
4099-4106: Update comment: duplicates are now deduped, not created. The test comment suggests duplicate cells "might" be created; the implementation deduplicates boundary facets by vertex set, so duplicates won't be formed. Adjust the comment to reflect current behavior.
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- src/core/traits/insertion_algorithm.rs (13 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (5)
src/core/facet.rs (13)
tds(406-408), new(254-259), new(427-451), new(739-756), new(828-836), key(576-597), facet_key_from_vertices(928-941), cell_key(282-284), cell_key(392-394), vertices(493-514), facet_index(307-309), facet_index(399-401), facet_vertices(1085-1109)
src/core/triangulation_data_structure.rs (8)
new(2287-2349), cell_keys(622-624), vertex_keys(613-615), vertices(604-606), number_of_cells(1020-1022), number_of_vertices(865-867), empty(2169-2178), is_valid(3939-3972)
src/core/cell.rs (5)
new(476-499), map(411-411), vertices(672-674), number_of_vertices(757-759), is_valid(1098-1138)
src/core/collections.rs (4)
fast_hash_set_with_capacity(806-808), fast_hash_set_with_capacity(902-902), fast_hash_map_with_capacity(772-774), fast_hash_map_with_capacity(899-899)
src/core/boundary.rs (1)
boundary_facets(76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: build (windows-latest)
- GitHub Check: build (macos-latest)
- GitHub Check: build (ubuntu-latest)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Codacy Security Scan
- GitHub Check: Code Coverage
- GitHub Check: performance-regression
🔇 Additional comments (4)
src/core/traits/insertion_algorithm.rs (4)
2691-2803: Preventive facet filtering fix looks correct (discounts the bad cell). The logic now subtracts the soon-to-be-deleted bad cell from facet counts before checking and when updating running totals. This prevents false ">=2" rejections of valid interior boundary facets. Nice recovery and alignment with commit 6785788.
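The count-discount idea praised above can be sketched with simplified stand-ins: plain `u64` facet keys replace the crate's canonical keys, a count map replaces the TDS facet index, and `filter_valid_facets` is an invented name, not the library's API.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins: u64 facet keys instead of the crate's canonical
// keys, and a plain count map instead of the TDS facet-to-cell index.
fn filter_valid_facets(
    facet_counts: &HashMap<u64, usize>, // cells currently sharing each facet
    bad_cell_facets: &[u64],            // facets of the cell about to be removed
    candidates: &[u64],                 // cavity boundary facet keys to keep or reject
) -> Vec<u64> {
    candidates
        .iter()
        .copied()
        .filter(|key| {
            let count = facet_counts.get(key).copied().unwrap_or(0);
            // Discount the soon-to-be-deleted bad cell before the check;
            // skipping this step is what caused the false ">=2" rejections.
            let discounted = if bad_cell_facets.contains(key) {
                count.saturating_sub(1)
            } else {
                count
            };
            // Creating one new cell on this facet must not exceed 2 sharers.
            discounted + 1 <= 2
        })
        .collect()
}

fn main() {
    let mut counts = HashMap::new();
    counts.insert(1u64, 2usize); // shared by the bad cell and one neighbor
    counts.insert(2u64, 1usize); // boundary facet of the bad cell only
    // With the discount, both facets remain valid cavity boundary facets.
    assert_eq!(filter_valid_facets(&counts, &[1, 2], &[1, 2]), vec![1, 2]);
    println!("both facets kept after discounting the bad cell");
}
```

Without the discount, facet 1 (already shared by two cells, one of them the bad cell) would be wrongly rejected even though removing the bad cell frees a slot.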
1596-1634: Good: the transactional flow guards against an empty facet set. Dedup → ensure vertex → preventive filter → hard stop + rollback on an empty set. This preserves atomicity and avoids creating holes.
3411-3416: Based on verification using the retrieved learnings, the review comment is incorrect. The codebase is correctly using `pastey::paste!`, which is the maintained replacement for the deprecated `paste` crate. The `pastey` crate is already properly declared in `Cargo.toml` as a dev-dependency (`pastey = "0.1"`), and its usage throughout the codebase is appropriate.
Likely an incorrect or invalid review comment.
383-389: The search results don't directly address const f64 equality comparisons. Let me search more specifically for when const float comparisons were stabilized in Rust.Let me verify more specifically whether the exact code pattern in the repository compiles successfully:
Based on my verification, the pattern `const _: () = assert!(condition)` is stable and works for most cases. More specifically, as of Rust 1.57, with `const_panic` stabilized, this pattern has been possible.
The confusion in the review comment stems from conflating two different Rust contexts:
- Const items (like your code): f64 equality comparisons are allowed
- Const fn bodies: f64 comparisons are restricted with error E0658
Your MSRV is 1.90.0, which is far newer than the 1.57 requirement. The code pattern comparing two const f64 literals is well-supported.
No action required. The code is correct for the declared MSRV.
Likely an incorrect or invalid review comment.
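The compile-time assertion pattern under discussion can be illustrated in isolation; the constant names below are invented for the example and are not from the crate.

```rust
// Invented constants for illustration; the point is the pattern, which is
// stable for const items since const_panic landed in Rust 1.57.
const TOLERANCE_DOCUMENTED: f64 = 1e-9;
const TOLERANCE_USED: f64 = 1e-9;

// Evaluated at compile time: if the two constants ever diverge, the build
// fails here instead of producing a subtle runtime inconsistency.
#[allow(clippy::float_cmp)] // exact equality of literal constants is intentional
const _: () = assert!(TOLERANCE_DOCUMENTED == TOLERANCE_USED);

fn main() {
    // Nothing to check at runtime; the invariant held during compilation.
    println!("compile-time f64 equality guard passed");
}
```

If the two constants disagreed, compilation would fail with a const-evaluation panic, which is exactly the early-failure behavior the review thread confirms is valid for const items.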
Add AI-assisted development tool acknowledgements: - List AI tools used in project development (ChatGPT, Claude, CodeRabbit, GitHub Copilot, KiloCode, WARP) in README.md - Add detailed citations and human oversight note in REFERENCES.md - Clarify all AI output was reviewed/edited by maintainer Clarify floating-point bbox helper documentation: - Simplify doc comments for saturating_add/sub_for_bbox functions - Emphasize "saturating" naming is for semantic consistency, not actual integer saturation behavior - Update test comment to reflect deduplication via canonical vertex sets
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/traits/insertion_algorithm.rs (1)
1427-1459: Don't skip facets when the neighbor slot is missing; treat them as boundary.

If `neighbors.get(facet_idx)` returns `None` (buffer shorter than D+1), this skips a potentially valid boundary facet. Safer: treat a missing entry as "no neighbor" (boundary) instead of `continue`.

```diff
-for facet_idx in 0..=D {
-    let Some(&neighbor_key_opt) = neighbors.get(facet_idx) else {
-        continue;
-    };
-
-    // Boundary facet if: no neighbor OR neighbor is not bad
-    let is_boundary = neighbor_key_opt.is_none_or(|n| !bad_cell_set.contains(&n));
+for facet_idx in 0..=D {
+    // Missing slot => treat as boundary
+    let is_boundary = match neighbors.get(facet_idx) {
+        None => true,
+        Some(&neighbor_key_opt) => neighbor_key_opt.is_none_or(|n| !bad_cell_set.contains(&n)),
+    };
     if !is_boundary {
         continue; // Interior facet; skip
     }
```
🧹 Nitpick comments (7)
src/core/traits/insertion_algorithm.rs (7)
2610-2651: Dedup by canonical facet key looks correct; consider a debug assert.

The implementation preserves order and removes duplicates via canonical keys. Optional: add a debug assertion that the output contains unique keys to catch regressions.

```diff
 for info in boundary_infos {
     let facet_key = facet_key_from_vertices(&info.facet_vertex_keys);
     if seen_facet_keys.insert(facet_key) {
         deduplicated.push(info);
     }
 }
+#[cfg(debug_assertions)]
+{
+    use std::collections::HashSet;
+    let uniq: HashSet<_> = deduplicated.iter()
+        .map(|i| facet_key_from_vertices(&i.facet_vertex_keys))
+        .collect();
+    debug_assert_eq!(uniq.len(), deduplicated.len(), "Facet dedup failed to ensure uniqueness");
+}
 deduplicated
```
279-307: Naming nit: "saturating_*_for_bbox" suggests integer semantics.

These helpers are float-only under `CoordinateScalar`. Consider renaming to `bbox_sub`/`bbox_add` or tightening the docs to avoid confusion.
4035-4038: Avoid brittle message-substring checks in error assertions. You already match the error variant; prefer asserting on the enum and its fields rather than message text, to reduce false breaks from wording changes.
6896-6912: Strengthen the rollback test: assert the pre-state is restored exactly.

Also compare the post-rollback cell count to the initial cell count captured before creation for a stronger guarantee.

```diff
-assert_eq!(
-    tds.number_of_cells(),
-    cells_after_creation - created_cells.len(),
-    "Cells should be removed"
-);
+// Should match the pre-creation state exactly
+assert_eq!(tds.number_of_cells(), _initial_cell_count, "Cell count should return to initial after rollback");
```
7199-7218: Add an assertion; the test currently only prints.

State the expected behavior explicitly (e.g., a single-vertex TDS likely means the queried vertex is exterior, so the check returns true).

```diff
-println!("  ✓ Single vertex TDS: is_exterior = {is_exterior}");
+assert!(is_exterior, "With a single existing vertex, a different vertex should be classified as exterior");
```
7231-7245: Add assertions for the "all same point" case.

Either assert that the constructor fails (degenerate input) or assert `is_vertex_likely_exterior` consistency when it succeeds.

```diff
 Ok(tds) => {
     let test_vertex = vertex!([2.0, 2.0, 2.0]);
     let is_exterior = IncrementalBowyerWatson::<f64, Option<()>, Option<()>, 3>::is_vertex_likely_exterior(
         &tds,
         &test_vertex,
     );
-    println!("  ✓ All same point: is_exterior = {is_exterior}");
+    assert!(is_exterior, "Vertex outside degenerate bbox should be considered exterior");
 }
```
4035-4038: Consolidate error checks to variants; drop message heuristics.

Where you already match `InsertionError::TriangulationState(TriangulationValidationError::InconsistentDataStructure { .. })`, avoid extra `contains(...)` checks on message strings.

Also applies to: 3985-3987
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- README.md (2 hunks)
- REFERENCES.md (1 hunks)
- src/core/traits/insertion_algorithm.rs (14 hunks)
✅ Files skipped from review due to trivial changes (1)
- README.md
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
- REFERENCES.md
- src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-28T14:12:46.198Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CITATION.cff:5-7
Timestamp: 2025-08-28T14:12:46.198Z
Learning: The DOI 10.5281/zenodo.16931097 in the CITATION.cff file belongs to the Rust delaunay triangulation library by acgetchell, not to any Julia DelaunayTriangulation.jl package.
Applied to files:
REFERENCES.md
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (4)
src/core/facet.rs (13): tds (406-408), new (254-259), new (427-451), new (739-756), new (828-836), key (576-597), facet_key_from_vertices (928-941), cell_key (282-284), cell_key (392-394), vertices (493-514), facet_index (307-309), facet_index (399-401), facet_vertices (1085-1109)
src/core/triangulation_data_structure.rs (8): new (2287-2349), cell_keys (622-624), vertex_keys (613-615), vertices (604-606), number_of_cells (1020-1022), number_of_vertices (865-867), empty (2169-2178), is_valid (3939-3972)
src/core/cell.rs (5): new (476-499), map (411-411), vertices (672-674), number_of_vertices (757-759), is_valid (1098-1138)
src/core/collections.rs (4): fast_hash_set_with_capacity (806-808), fast_hash_set_with_capacity (902-902), fast_hash_map_with_capacity (772-774), fast_hash_map_with_capacity (899-899)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (7)
REFERENCES.md (1)
28-38: Excellent transparency and attribution for AI-assisted development.The new "AI-Assisted Development Tools" section clearly discloses the AI tools used during development and explicitly states that all generated output was reviewed/edited by the maintainer with no content used without human oversight. The formatting is consistent with the document's markdown style, tool references are accurate with proper URLs, and the placement between the citation guidance and core algorithm references is logical.
src/core/traits/insertion_algorithm.rs (6)
2507-2551: Early exit on empty handles and pre-filtering are spot on. Rejecting empty input before inserting the vertex and running dedup/filter avoids side effects and keeps the semantics atomic.
2653-2801: Preventive filter fix correctly discounts the bad cell. The count discount before the ≥2 check and during running updates resolves the "reject all interior facets" bug from the earlier review. Keeping the reactive fix as a safety net is prudent.
3310-3318: Defense-in-depth comment is helpful; keep the reactive fix. Retaining `fix_invalid_facet_sharing` post-creation covers facet-map build failures and edge cases.
3917-3940: Great: the empty-handle path asserts no side effects. The test verifies the early error and an unchanged TDS (cells and vertices).
1594-1631: Insertion paths consistently apply dedup + filter; no bypasses detected. Verified via codebase scan: both main insertion paths (lines 1594-1631 and 2520-2539) uniformly apply `deduplicate_boundary_facet_info` followed by `filter_boundary_facets_by_valid_facet_sharing` before cell creation. The transactional sequence of dedup, preventive filter, and atomic rollback on failure is consistent across all paths. No alternate insertion routes that skip these checks were found.
279-307: CoordinateScalar is Float-backed; the bbox operations are justified. The `CoordinateScalar` trait inherits from `Float`, confirming that the float-only `saturating_add_for_bbox` and `saturating_sub_for_bbox` operations are correctly constrained. The natural overflow handling (to ±infinity) documented in the comments is appropriate for floating-point arithmetic.
…nt filtering Replace silent deduplication with explicit error handling to catch algorithmic bugs in cavity boundary detection early. **Breaking API Changes** - deduplicate_boundary_facet_info: Returns Result<SmallBuffer, InsertionError> instead of Vec (errors on duplicates, uses SmallBuffer for performance) - filter_boundary_facets_by_valid_facet_sharing: Returns Result<SmallBuffer> instead of Vec (propagates errors from facet map building) - Add InsertionError::DuplicateBoundaryFacets variant with duplicate_count and total_count fields **Rationale** The cavity boundary should form a topological sphere with no duplicate facets. Duplicates indicate: - Incorrect neighbor traversal logic - Non-manifold mesh connectivity - Data structure corruption By returning errors instead of silently filtering, we surface these bugs immediately rather than allowing corrupted triangulations. **Performance Optimization** - Use SmallBuffer (stack-allocated, falls back to heap) for typical D+1 facets - Faster than Vec for small collections (common case in D dimensions) - Maintains O(n) duplicate detection with FastHashSet **Implementation Details** - Propagate DuplicateBoundaryFacets errors through insert_vertex_cavity_based and create_cells_from_facet_handles - Update tests to expect errors on duplicates instead of silent filtering - Rename prop_deduplication_is_idempotent to prop_deduplication_detects_duplicates - Fix test_create_cells_from_facet_handles_duplicate_handles to assert error
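The error-on-duplicates policy described in this commit message can be sketched with std types standing in for the crate's `FastHashSet` and `SmallBuffer`; the trimmed-down error enum and the `dedup_or_error` name below are illustrative, not the crate's exact API.

```rust
use std::collections::HashSet;

// Trimmed-down version of the error variant described in the commit message.
#[derive(Debug, PartialEq)]
enum InsertionError {
    DuplicateBoundaryFacets {
        duplicate_count: usize,
        total_count: usize,
    },
}

// The cavity boundary should form a topological sphere, so any duplicate
// facet key signals an upstream bug; surface it instead of filtering it out.
fn dedup_or_error(facet_keys: &[u64]) -> Result<Vec<u64>, InsertionError> {
    let mut seen = HashSet::with_capacity(facet_keys.len());
    let mut unique = Vec::with_capacity(facet_keys.len());
    for &key in facet_keys {
        if seen.insert(key) {
            unique.push(key); // O(n) overall: one hash probe per facet
        }
    }
    let duplicate_count = facet_keys.len() - unique.len();
    if duplicate_count > 0 {
        return Err(InsertionError::DuplicateBoundaryFacets {
            duplicate_count,
            total_count: facet_keys.len(),
        });
    }
    Ok(unique)
}

fn main() {
    assert_eq!(dedup_or_error(&[10, 20, 30]), Ok(vec![10, 20, 30]));
    assert_eq!(
        dedup_or_error(&[10, 20, 20]),
        Err(InsertionError::DuplicateBoundaryFacets {
            duplicate_count: 1,
            total_count: 3,
        })
    );
    println!("duplicates are reported, not silently dropped");
}
```

The `duplicate_count`/`total_count` fields give the caller enough context to report how badly the cavity boundary was malformed, matching the rationale above.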
There was a problem hiding this comment.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/core/traits/insertion_algorithm.rs (1)
2703-2857: Preventive facet-sharing filter now discounts bad-cell counts correctly.

This addresses the earlier "oversharing" false positives by subtracting the soon-to-be-removed bad cell before checking counts. For a quick sanity check, run the included test `test_preventive_filter_does_not_reject_valid_interior_facets`; it should pass under both debug and release. Based on learnings
🧹 Nitpick comments (4)
src/core/traits/insertion_algorithm.rs (4)
3198-3207: Remove the redundant sort before hashing the facet signature.

`facet_key_from_vertices()` already sorts; drop the extra sort to save work in this hot wiring path.

```diff
 let mut facet_sig: FacetSignature = cell_vertices
     .iter()
     .enumerate()
     .filter_map(|(i, &vk)| (i != opposite_idx).then_some(vk))
     .collect();
-facet_sig.sort_unstable();
 let facet_key = facet_key_from_vertices(&facet_sig);
```
2157-2184: Avoid a UUID set; prefer VertexKey membership for speed.

Build a FastHashSet of the facet's VertexKeys, then scan adjacent cell keys without per-vertex UUID hashing. You can fetch the single opposite vertex only once after finding its key.
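The suggested key-membership approach can be sketched as follows, with `usize` standing in for the crate's `VertexKey` and a hypothetical `opposite_vertex` helper:

```rust
use std::collections::HashSet;

// Comparing small copyable keys avoids hashing a UUID per vertex, and the
// opposite vertex is fetched exactly once after its key is found.
fn opposite_vertex(cell_vertices: &[usize], facet_vertices: &[usize]) -> Option<usize> {
    let facet_set: HashSet<usize> = facet_vertices.iter().copied().collect();
    let mut not_in_facet = cell_vertices
        .iter()
        .copied()
        .filter(|v| !facet_set.contains(v));
    let candidate = not_in_facet.next();
    // A D-dimensional cell has D+1 vertices and a facet has D: exactly one
    // vertex should be left over. Anything else indicates malformed input.
    match (candidate, not_in_facet.next()) {
        (Some(v), None) => Some(v),
        _ => None,
    }
}

fn main() {
    // Tetrahedron cell with vertex keys 1..=4 and one of its facets.
    assert_eq!(opposite_vertex(&[1, 2, 3, 4], &[1, 2, 3]), Some(4));
    // A malformed facet (too few vertices) is rejected.
    assert_eq!(opposite_vertex(&[1, 2, 3, 4], &[1, 2]), None);
    println!("opposite vertex found by key membership");
}
```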
522-559: Don't mark mutating methods as `const`.

`reset`/`record_*` mutate state and can't be used in const contexts, so `const` brings no benefit and may confuse readers.

```diff
-pub const fn reset(&mut self) {
+pub fn reset(&mut self) {
     *self = Self::new();
 }
-pub const fn record_vertex_insertion(&mut self, info: &InsertionInfo) {
+pub fn record_vertex_insertion(&mut self, info: &InsertionInfo) {
     ...
 }
-pub const fn record_fallback_usage(&mut self) {
+pub fn record_fallback_usage(&mut self) {
     self.fallback_strategies_used += 1;
 }
-pub const fn record_cavity_boundary_failure(&mut self) {
+pub fn record_cavity_boundary_failure(&mut self) {
     self.cavity_boundary_failures += 1;
 }
-pub const fn record_cavity_boundary_recovery(&mut self) {
+pub fn record_cavity_boundary_recovery(&mut self) {
     self.cavity_boundary_recoveries += 1;
 }
```
301-329: Doc nit: clarify the "saturating" naming for floats. These helpers are plain +/- for floats; consider renaming or tightening the docs to avoid implying integer-style saturation.
Also applies to: 316-329
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- proptest-regressions/core/traits/insertion_algorithm.txt (1 hunks)
- src/core/traits/insertion_algorithm.rs (16 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- proptest-regressions/core/traits/insertion_algorithm.txt
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-08-20T21:55:04.552Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.
Applied to files:
src/core/traits/insertion_algorithm.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
Repo: acgetchell/delaunay PR: 65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (1)
src/core/traits/insertion_algorithm.rs (5)
src/core/cell.rs (6): neighbors (639-643), new (476-499), map (411-411), vertices (672-674), number_of_vertices (757-759), is_valid (1098-1138)
src/core/facet.rs (13): tds (406-408), new (254-259), new (427-451), new (739-756), new (828-836), key (576-597), facet_key_from_vertices (928-941), cell_key (282-284), cell_key (392-394), vertices (493-514), facet_index (307-309), facet_index (399-401), facet_vertices (1085-1109)
src/core/triangulation_data_structure.rs (8): new (2287-2349), cell_keys (622-624), vertex_keys (613-615), vertices (604-606), number_of_cells (1020-1022), number_of_vertices (865-867), empty (2169-2178), is_valid (3939-3972)
src/core/collections.rs (4): fast_hash_set_with_capacity (806-808), fast_hash_set_with_capacity (902-902), fast_hash_map_with_capacity (772-774), fast_hash_map_with_capacity (899-899)
src/core/boundary.rs (1): boundary_facets (76-84)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: build (windows-latest)
- GitHub Check: performance-regression
- GitHub Check: build (macos-latest)
- GitHub Check: build (ubuntu-latest)
- GitHub Check: Code Coverage
- GitHub Check: Codacy Security Scan
🔇 Additional comments (7)
src/core/traits/insertion_algorithm.rs (7)
219-241: Good explicit surfacing of duplicate cavity facets. Clear error variant and message; this prevents silent topology breakage.
1451-1457: Boundary facet detection and dedup look correct. Using `Option::is_none_or` and canonical facet keys fixes oversights and avoids duplicates.
Also applies to: 1467-1481
2633-2701: Deduplication via canonical facet keys is solid. O(n) with FastHashSet and SmallBuffer is appropriate; erroring on duplicates is the right call here.
399-410: Nice const guard on threshold invariants. The compile-time assert keeps the integer optimization correct.
2530-2537: Early exit on empty facet-handle input is good. Failing fast with a clear error avoids unnecessary vertex insertion/rollback.
3366-3374: Defensive finalize keeps the safety net despite preventive filtering. Good defense-in-depth choice.
3465-3467: No issues found. The pastey dev-dependency is properly configured. Verification confirms that `pastey = "0.1"` is present in dev-dependencies, and the codebase consistently uses `pastey::paste!` throughout, the maintained replacement for the deprecated `paste` crate. The code at lines 3465-3467 is correct as written.
The "saturating" terminology was misleading for floating-point operations. These functions perform plain arithmetic, not saturating arithmetic. **Renaming** - saturating_sub_for_bbox → bbox_sub - saturating_add_for_bbox → bbox_add **Updated Documentation** - Clarify these are plain float operations, NOT saturating arithmetic - Explain float overflow behavior: naturally produces ±infinity - Document that this is the desired behavior for bounding box expansion (ensures all vertices are contained even with extreme coordinates) - Add explicit type constraints: only supports f32/f64 via CoordinateScalar - Update all 28 call sites and test names **Rationale** "Saturating" implies integer-style clamping to min/max bounds, but: - Floats don't saturate - they naturally produce ±infinity on overflow - This is actually the correct behavior for bbox expansion - The old naming suggested these functions did something special, when they're just plain +/- operations with overflow-to-infinity semantics
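The overflow behavior that motivated this rename is easy to demonstrate; the `bbox_add` below is a stand-in mirroring the description above, not the crate's actual function.

```rust
// Stand-in mirroring the renamed helper: plain f64 addition, where overflow
// naturally yields +/- infinity instead of clamping like integer saturation.
fn bbox_add(a: f64, b: f64) -> f64 {
    a + b
}

fn main() {
    // Float "overflow" produces +infinity, which still contains every
    // vertex when used as a bounding-box bound: the desired behavior.
    let expanded = bbox_add(f64::MAX, f64::MAX);
    assert!(expanded.is_infinite() && expanded.is_sign_positive());

    // Integer saturating arithmetic clamps instead, which is what the old
    // "saturating_*_for_bbox" names misleadingly suggested.
    assert_eq!(i64::MAX.saturating_add(1), i64::MAX);

    println!("f64 overflows to +inf; i64 saturates at i64::MAX");
}
```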
This commit addresses several issues related to degenerate point configurations and improves the robustness of property-based tests:
Duplicate Vertex Handling
Property Test Improvements
Serialization Test Fixes
New Property Tests
All tests now pass successfully. Degenerate cases are filtered in property tests since they're covered by dedicated edge case tests.