Feature/migrate uuids to slotmap #77

Merged
acgetchell merged 18 commits into main from feature/migrate-uuids-to-slotmap
Sep 16, 2025

Conversation

@acgetchell
Owner

This pull request introduces substantial improvements to UUID migration and boundary facet analysis in the delaunay crate. The most significant change is the addition of a comprehensive migration inventory document, which outlines a strategy for replacing internal UUID usage with SlotMap keys while preserving public API compatibility. In the codebase, boundary facet identification is now more robust and consistent, using key-based logic for facet identification. Associated tests have been updated to improve clarity and reliability, and minor code cleanups and documentation improvements are included.

UUID Migration Documentation and Strategy

  • Added docs/UUID_MIGRATION_INVENTORY.md, detailing all UUID usages, categorizing them for migration, outlining a phased migration strategy, API compatibility plan, testing strategy, and success metrics.

Boundary Facet Analysis Improvements

  • Refactored is_boundary_facet in src/core/boundary.rs to compute facet keys using vertex keys and the new facet_key_from_vertex_keys method, ensuring consistency between facet identification and internal mapping.
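The key-based scheme described above can be sketched in a few lines. This is a minimal illustration under stated assumptions — `u64` stands in for the crate's `VertexKey`, and the standard library's `DefaultHasher` stands in for whatever hash `facet_key_from_vertex_keys` actually uses. The point is that sorting the keys before hashing makes the facet key order-independent, so both cells sharing a facet derive the same key:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a canonical facet key from the facet's vertex keys.
// Sorting first makes the key order-independent, so adjacent
// cells that list the same facet in different vertex orders
// still compute the same key.
fn facet_key_from_vertex_keys(vertex_keys: &[u64]) -> u64 {
    let mut sorted = vertex_keys.to_vec();
    sorted.sort_unstable();
    let mut hasher = DefaultHasher::new();
    sorted.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // The same facet seen from two adjacent cells, in different order.
    let from_cell_a = [3_u64, 1, 7];
    let from_cell_b = [7_u64, 3, 1];
    assert_eq!(
        facet_key_from_vertex_keys(&from_cell_a),
        facet_key_from_vertex_keys(&from_cell_b)
    );
}
```

Because both `is_boundary_facet` and `build_facet_to_cells_hashmap` go through one such derivation, a facet key computed in one place always matches the other.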

Test Suite Enhancements

  • Updated boundary facet tests in src/core/boundary.rs to count confirmed boundary facets rather than asserting each individually, improving test reliability and clarity for 2D, 3D, and 4D cases.
  • Cleaned up boundary facet verification logic in tests, removing redundant counters and improving assertion messages.

Algorithm Test Refactoring

  • Refactored tests in src/core/algorithms/robust_bowyer_watson.rs to use new methods (cell_key_from_uuid, cell_uuid_from_key) for key/UUID conversion, aligning with the migration strategy and simplifying code.

Documentation and Constant Visibility

  • Changed MAX_PRACTICAL_DIMENSION_SIZE in src/core/collections.rs from a private to a public constant for broader accessibility.
  • Updated spell-checking guidance in WARP.md to clarify exceptions for intentional misspellings in cspell.json.

Refactors the internal implementation to replace UUID lookups with direct SlotMap key operations for improved performance.
This change maintains 100% backward compatibility for the public API, which continues to use UUIDs. It also updates the serialization logic to reconstruct the UUID-to-key mappings on deserialization. This is an internal change.
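The serialization point can be illustrated with a small sketch: only the element records are serialized, and the one-way UUID→key map is rebuilt in a single pass on load. This is hypothetical code, not the crate's API — `u128` stands in for `Uuid` and a plain `Vec` index stands in for a `SlotMap` key:

```rust
use std::collections::HashMap;

// Stand-ins: u128 for Uuid, usize index for a SlotMap key.
struct Tds {
    #[allow(dead_code)]
    vertices: Vec<(u128, [f64; 3])>, // serialized
    uuid_to_vertex_key: HashMap<u128, usize>, // skipped; rebuilt on load
}

impl Tds {
    // After deserializing `vertices`, reconstruct the UUID→key map.
    fn rebuild_mappings(vertices: Vec<(u128, [f64; 3])>) -> Tds {
        let uuid_to_vertex_key = vertices
            .iter()
            .enumerate()
            .map(|(key, (uuid, _))| (*uuid, key))
            .collect();
        Tds { vertices, uuid_to_vertex_key }
    }

    // Public API still speaks UUIDs; hot paths then use the key directly.
    fn vertex_key_from_uuid(&self, uuid: u128) -> Option<usize> {
        self.uuid_to_vertex_key.get(&uuid).copied()
    }
}

fn main() {
    let tds = Tds::rebuild_mappings(vec![(42, [0.0; 3]), (7, [1.0, 0.0, 0.0])]);
    assert_eq!(tds.vertex_key_from_uuid(7), Some(1));
    assert_eq!(tds.vertex_key_from_uuid(99), None);
}
```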

Refs: Fixes #73
The `is_boundary_facet` method was using an incorrect method to retrieve the facet key, causing boundary detection to fail in some cases. This commit fixes the issue by ensuring that
`is_boundary_facet` uses the same method to compute the facet key as the `build_facet_to_cells_hashmap` method, resolving the inconsistency. Additionally, this commit improves the robustness of tests by explicitly checking the number of confirmed boundary facets, ensuring that all expected facets are correctly identified.
Copilot AI review requested due to automatic review settings on September 14, 2025 03:25
@coderabbitai
Contributor

coderabbitai bot commented Sep 14, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Replaces internal bidirectional BiMap UUID↔Key maps with one-way UUID→Key maps and key-based hot paths; adds facet-cache provider and caching utilities; introduces zero-allocation vertex UUID iterator and demo; migrates many algorithms and data structures to FastHashMap/FastHashSet/SmallBuffer; adds docs and bench-feature guidance; minor public API additions (cache/lookup helpers, RobustInsertionInfo).

Changes

Cohort / File(s) Summary
TDS core mapping & API
src/core/triangulation_data_structure.rs, src/core/collections.rs
Remove BiMap fields; add internal uuid_to_vertex_key / uuid_to_cell_key maps and public lookup helpers (vertex_key_from_uuid, cell_key_from_uuid, vertex_uuid_from_key, cell_uuid_from_key); expose MAX_PRACTICAL_DIMENSION_SIZE; add UuidToVertexKeyMap/UuidToCellKeyMap aliases (duplicate blocks present); serialization now reconstructs mappings.
Algorithms & insertion
src/core/traits/insertion_algorithm.rs, src/core/algorithms/robust_bowyer_watson.rs
Replace BiMap usage with uuid→key lookups; adopt FastHashMap/FastHashSet and SmallBuffer; add finalize_after_insertion(...) and RobustInsertionInfo; safer numeric conversions, expanded error context, refactored cavity/boundary helpers and loop optimizations.
Facet / Boundary / Cell
src/core/facet.rs, src/core/boundary.rs, src/core/cell.rs, src/core/traits/boundary_analysis.rs
Remove facet_key_from_vertices (Vertex→VertexKey path); rely on facet_key_from_vertex_keys and new derive_facet_key_from_vertices; add is_boundary_facet_with_map trait method and impl; Cell gains vertex_uuids() and zero-allocation vertex_uuid_iter(); into_hashmap → FastHashMap; tests updated.
Facet-key util & dedupe
src/core/util.rs
Add derive_facet_key_from_vertices utility (maps facet vertices → VertexKey using TDS) and tests; duplicate insertion of this function observed in diff.
Facet caching / provider
src/core/traits/facet_cache.rs, src/lib.rs, src/geometry/algorithms/convex_hull.rs
Add FacetCacheProvider trait (facet_cache, cached_generation, get_or_build_facet_cache, invalidate_facet_cache); re-export in lib; ConvexHull implements provider and exposes invalidate_cache.
Boundary analysis trait
src/core/traits/boundary_analysis.rs
Add is_boundary_facet_with_map(...) to allow prebuilt facet-to-cells checks in hot loops.
Collections / constants
src/core/collections.rs
Make MAX_PRACTICAL_DIMENSION_SIZE public; add public aliases for UUID→Key maps (duplicate blocks present); documentation tweaks.
Cell iteration ergonomics & examples
src/core/cell.rs, examples/..., examples/zero_allocation_iterator_demo.rs
Add vertex_uuid_iter() zero-allocation iterator and keep vertex_uuids() where needed; examples and benches updated to use tds.vertices() accessor; new demo example benchmarks iterator vs Vec allocation.
Convex hull & geometry changes
src/geometry/algorithms/convex_hull.rs
Derive facet keys via derive_facet_key_from_vertices, use get_or_build_facet_cache, propagate missing-vertex errors; add trait integration and invalidate_cache.
Optimization & strategy docs
docs/REFACTORING_STRATEGY.md, docs/UUID_MIGRATION_INVENTORY.md, docs/optimization_recommendations.md, docs/OPTIMIZING_BOWYER_WATSON.md
Add phased UUID→SlotMap key migration strategy, inventory of UUID usages, optimization recommendations, and Bowyer–Watson caching strategy with phased plans and success criteria.
Iterator docs & examples README
docs/vertex_uuid_iter_optimizations.md, docs/code_organization.md, examples/README.md
Document zero-allocation iterator usage/migration and bench-feature guidance; add zero-allocation demo entry and bench guidance.
Benchmarks / CI / features
benches/microbenchmarks.rs, Cargo.toml, WARP.md, CHANGELOG.md
Add bench feature for timing-style tests; update benches to use tds.vertices() accessor; bench guidance added to WARP.md and changelog; remove bimap dependency from Cargo.toml.
Spell-check / cspell
WARP.md, cspell.json
Add spell-check guidance to skip cspell.json (flagWords intentional); reformat/reorder cspell.json and add hotspot.
Misc docs & new examples
docs/*, examples/zero_allocation_iterator_demo.rs
Add multiple documentation files (migration, optimization, iterator guidance) and a new demo example for zero-allocation iteration.
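The zero-allocation iterator listed in the changes above can be sketched as follows. This is illustrative only — `u128` stands in for `Uuid`, and the struct layout is invented — but it shows why `vertex_uuid_iter()` avoids the per-call `Vec` that `vertex_uuids()` builds:

```rust
// Stand-ins: u128 for Uuid; a Cell simply owns its vertices here.
struct Vertex { uuid: u128 }
struct Cell { vertices: Vec<Vertex> }

impl Cell {
    // Allocating version: builds a fresh Vec on every call.
    fn vertex_uuids(&self) -> Vec<u128> {
        self.vertices.iter().map(|v| v.uuid).collect()
    }

    // Zero-allocation version: lazily yields UUIDs from existing storage.
    fn vertex_uuid_iter(&self) -> impl Iterator<Item = u128> + '_ {
        self.vertices.iter().map(|v| v.uuid)
    }
}

fn main() {
    let cell = Cell { vertices: vec![Vertex { uuid: 1 }, Vertex { uuid: 2 }] };
    // Same results; the iterator never touches the heap.
    assert_eq!(cell.vertex_uuid_iter().collect::<Vec<_>>(), cell.vertex_uuids());
}
```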

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client as Public API (UUIDs)
  participant Algo as Algorithm (Insertion / ConvexHull / Boundary)
  participant TDS as Tds (uuid_to_* helpers)
  participant Store as Internal Store (SlotMap keys / Fast structures)

  Client->>Algo: call algorithm using UUIDs
  Algo->>TDS: vertex_key_from_uuid(uuid)
  alt key found
    TDS-->>Algo: VertexKey
    Algo->>Store: operate using VertexKey/CellKey (FastHashMap/Set, SmallBuffer)
    Algo->>TDS: vertex_uuid_from_key(key)  -- when returning public results
    TDS-->>Algo: Uuid(s)
    Algo-->>Client: return results (UUIDs preserved)
  else missing key
    TDS-->>Algo: None
    Algo-->>Client: return FacetError::VertexNotFound
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Poem

I’m a rabbit with a tidy key,
UUIDs hop while SlotMaps flee,
Caches hum and iterators race,
Zero allocations save the place,
Thump—refactor done, tests leap with glee 🐇

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Title Check ✅ Passed The title "Feature/migrate uuids to slotmap" concisely and accurately captures the primary change in this PR—migrating internal UUID usage toward slotmap/key-based handling—and aligns with the provided diffs and objectives (docs, TDS and boundary refactors). It is clear and scan-able; the "Feature/" branch-style prefix is mildly noisy but not misleading.
Description Check ✅ Passed The pull request description is directly related to the changeset and accurately summarizes the main additions and refactors (UUID migration docs, boundary facet logic, test updates, and minor doc/constant visibility changes), providing reviewers with a clear sense of scope and intent. The level of detail is sufficient for this lenient description check.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


Contributor

Copilot AI left a comment


Pull Request Overview

This pull request introduces substantial improvements to UUID migration and boundary facet analysis in the delaunay crate. The most significant change is the addition of comprehensive migration documentation that outlines a strategy for replacing internal UUID usage with SlotMap keys while preserving public API compatibility. Additionally, the codebase has been refactored to use more efficient key-based logic for facet identification and boundary analysis.

  • Replaced BiMap with optimized UUID-to-key mappings for improved performance
  • Migrated facet identification to use consistent vertex key-based logic throughout
  • Added comprehensive UUID migration strategy documentation

Reviewed Changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 4 comments.

Show a summary per file
File Description
src/geometry/algorithms/convex_hull.rs Updated facet visibility logic to use vertex key-based facet identification
src/core/triangulation_data_structure.rs Major refactor replacing BiMap with optimized UUID→Key mappings and added key conversion methods
src/core/traits/insertion_algorithm.rs Updated insertion algorithms to use vertex key-based facet operations
src/core/facet.rs Added facet_key_from_vertex_keys function and removed BiMap dependency
src/core/collections.rs Made MAX_PRACTICAL_DIMENSION_SIZE public and added UUID-to-key mapping types
src/core/boundary.rs Refactored boundary facet analysis to use consistent key-based facet identification
src/core/algorithms/robust_bowyer_watson.rs Updated test assertions to use new key-based UUID conversion methods
docs/UUID_MIGRATION_INVENTORY.md Added comprehensive inventory of UUID usage categorized for migration
docs/REFACTORING_STRATEGY.md Added detailed phased migration strategy document
WARP.md Updated spell-checking guidance to clarify exceptions for intentional misspellings


Comment on lines +423 to +432
// Compute the facet key using VertexKeys (same method as build_facet_to_cells_hashmap)
let facet_vertices_for_key = facet.vertices();
let mut vertex_keys = Vec::with_capacity(facet_vertices_for_key.len());
for vertex in &facet_vertices_for_key {
match tds.vertex_key_from_uuid(&vertex.uuid()) {
Some(key) => vertex_keys.push(key),
None => return Err(FacetError::FacetNotFoundInTriangulation),
}
}
let facet_key = facet_key_from_vertex_keys(&vertex_keys);

Copilot AI Sep 14, 2025


[nitpick] The vertex key collection logic is duplicated from the earlier code (lines 364-366). Consider extracting this into a helper method to reduce code duplication and improve maintainability.
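One way to act on this suggestion — a hypothetical `vertex_keys_for_facet` helper, with `u128` standing in for `Uuid`, `usize` for `VertexKey`, and the TDS reduced to its lookup map — might look like:

```rust
use std::collections::HashMap;

// Stand-ins: u128 for Uuid, usize for VertexKey.
struct Tds { uuid_to_vertex_key: HashMap<u128, usize> }

#[derive(Debug, PartialEq)]
enum FacetError { VertexNotFound }

impl Tds {
    // Hypothetical helper centralizing the duplicated lookup loop:
    // map each facet vertex UUID to its key, failing on the first miss.
    fn vertex_keys_for_facet(&self, facet_vertex_uuids: &[u128]) -> Result<Vec<usize>, FacetError> {
        facet_vertex_uuids
            .iter()
            .map(|uuid| {
                self.uuid_to_vertex_key
                    .get(uuid)
                    .copied()
                    .ok_or(FacetError::VertexNotFound)
            })
            .collect() // Result<Vec<_>, _> short-circuits on the first error
    }
}

fn main() {
    let tds = Tds { uuid_to_vertex_key: HashMap::from([(10, 0), (11, 1)]) };
    assert_eq!(tds.vertex_keys_for_facet(&[10, 11]), Ok(vec![0, 1]));
    assert_eq!(tds.vertex_keys_for_facet(&[10, 99]), Err(FacetError::VertexNotFound));
}
```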

Comment on lines +1672 to +1683
// This is a two-phase process to appease the borrow checker. First we collect all the
// updates, then we apply them.
let updates: Vec<(CellKey, Vec<Option<Uuid>>)> = cell_neighbors
.iter()
.map(|(cell_key, neighbors)| {
let neighbor_uuids = neighbors
.iter()
.map(|&key| key.and_then(|k| self.cell_uuid_from_key(k)))
.collect();
(*cell_key, neighbor_uuids)
})
.collect();

Copilot AI Sep 14, 2025


[nitpick] The two-phase update process creates intermediate collections and performs UUID lookups for every neighbor. Consider storing neighbor relationships as keys internally and only converting to UUIDs when needed for public API access to avoid this overhead.
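The suggested direction — keep neighbor relationships as keys internally and convert to UUIDs only at the public boundary — can be sketched like this (hypothetical names; `u128` for `Uuid`, `usize` for `CellKey`):

```rust
use std::collections::HashMap;

// Stand-ins: u128 for Uuid, usize for CellKey. Neighbors stay key-based
// internally; UUIDs are produced only when a caller asks for them.
struct Tds {
    neighbors_by_key: HashMap<usize, Vec<Option<usize>>>,
    cell_uuid_by_key: HashMap<usize, u128>,
}

impl Tds {
    // Public accessor: convert keys to UUIDs lazily, per query,
    // instead of materializing UUID lists for every cell up front.
    fn neighbor_uuids(&self, cell: usize) -> Option<Vec<Option<u128>>> {
        self.neighbors_by_key.get(&cell).map(|neighbors| {
            neighbors
                .iter()
                .map(|k| k.and_then(|k| self.cell_uuid_by_key.get(&k).copied()))
                .collect()
        })
    }
}

fn main() {
    let tds = Tds {
        neighbors_by_key: HashMap::from([(0, vec![Some(1), None])]),
        cell_uuid_by_key: HashMap::from([(0, 100), (1, 101)]),
    };
    assert_eq!(tds.neighbor_uuids(0), Some(vec![Some(101), None]));
}
```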

Comment on lines +1930 to +1932
let Ok(vertex_keys) = self.vertex_keys_for_cell(cell) else {
continue; // Skip cells with missing vertex keys
};

Copilot AI Sep 14, 2025


[nitpick] Silently skipping cells with missing vertex keys could mask data corruption issues. Consider logging a warning or debug message when this occurs to aid in troubleshooting potential data inconsistencies.

Comment on lines +155 to +158
match self.uuid_to_vertex_key.get(&vertex.uuid()) {
Some(&key) => vertex_keys.push(key),
None => return false, // Vertex not in triangulation, so facet can't be boundary
}

Copilot AI Sep 14, 2025


[nitpick] The vertex key lookup logic is duplicated across multiple functions. Consider creating a helper method like get_vertex_keys_for_facet to reduce code duplication and centralize this common operation.

@coderabbitai bot added the labels documentation (Improvements or additions to documentation) and rust (Pull requests that update rust code) on Sep 14, 2025
@codacy-production

codacy-production bot commented Sep 14, 2025

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation: -0.83% (target: -1.00%); diff coverage: 62.50%

Coverage variation details:
  Common ancestor commit (a090e1e): 2691 coverable lines, 1886 covered (70.09%)
  Head commit (c21d250): 2784 (+93) coverable lines, 1928 (+42) covered (69.25%, -0.83%)

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details:
  Pull request (#77): 400 coverable lines, 250 covered (62.50%)

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


@codecov

codecov bot commented Sep 14, 2025

Codecov Report

❌ Patch coverage is 53.39367% with 103 lines in your changes missing coverage. Please review.
✅ Project coverage is 69.30%. Comparing base (a090e1e) to head (c21d250).
⚠️ Report is 1 commits behind head on main.

Files with missing lines Patch % Lines
src/core/algorithms/robust_bowyer_watson.rs 28.33% 43 Missing ⚠️
src/core/traits/insertion_algorithm.rs 51.61% 30 Missing ⚠️
src/core/boundary.rs 39.02% 25 Missing ⚠️
src/core/util.rs 66.66% 4 Missing ⚠️
src/core/traits/facet_cache.rs 96.29% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #77      +/-   ##
==========================================
- Coverage   70.13%   69.30%   -0.84%     
==========================================
  Files          17       18       +1     
  Lines        2689     2782      +93     
==========================================
+ Hits         1886     1928      +42     
- Misses        803      854      +51     
Flag Coverage Δ
unittests 69.30% <53.39%> (-0.84%) ⬇️

Flags with carried forward coverage won't be shown.

Contributor

@coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/core/triangulation_data_structure.rs (1)

1841-1859: Duplicate removal may drop valid cells on mapping drift

unwrap_or_default() yields empty vertex-keys, which can misclassify cells as duplicates and remove them. Prefer skipping cells whose keys can’t be resolved.

-        for (cell_key, cell) in &self.cells {
-            let mut vertex_keys = self.vertex_keys_for_cell(cell).unwrap_or_default();
+        for (cell_key, cell) in &self.cells {
+            let Ok(mut vertex_keys) = self.vertex_keys_for_cell(cell) else {
+                continue; // skip cells with unresolved vertex keys
+            };
             vertex_keys.sort_unstable();
 
             if let Some(_existing_cell_key) = unique_cells.get(&vertex_keys) {
                 cells_to_remove.push(cell_key);
             } else {
                 unique_cells.insert(vertex_keys, cell_key);
             }
         }
🧹 Nitpick comments (21)
WARP.md (1)

48-48: Fix typo in exception note.

Change “mispelled” → “misspelled” to avoid confusion.

- - **EXCEPTION**: **DO NOT** spell-check `cspell.json` itself - it contains intentional misspellings in `flagWords` from mispelled git commits
+ - **EXCEPTION**: **DO NOT** spell-check `cspell.json` itself - it contains intentional misspellings in `flagWords` from misspelled git commits
src/geometry/algorithms/convex_hull.rs (2)

423-433: Avoid duplicate facet.vertices() allocation; reuse the earlier slice.

You already computed facet_vertices on Line 365. Reuse it here.

-        let facet_vertices_for_key = facet.vertices();
-        let mut vertex_keys = Vec::with_capacity(facet_vertices_for_key.len());
-        for vertex in &facet_vertices_for_key {
+        let mut vertex_keys = Vec::with_capacity(facet_vertices.len());
+        for vertex in &facet_vertices {
             match tds.vertex_key_from_uuid(&vertex.uuid()) {
                 Some(key) => vertex_keys.push(key),
                 None => return Err(FacetError::FacetNotFoundInTriangulation),
             }
         }

427-431: Return a more specific error when a vertex key is missing.

Prefer preserving the missing vertex UUID in the error for diagnosability.

-                None => return Err(FacetError::FacetNotFoundInTriangulation),
+                None => return Err(FacetError::VertexNotFound { uuid: vertex.uuid() }),
src/core/boundary.rs (2)

148-166: Use helper API instead of accessing the map directly.

Use self.vertex_key_from_uuid(...) for consistency with the migration and to avoid tight coupling to the map field.

-        for vertex in &facet_vertices {
-            match self.uuid_to_vertex_key.get(&vertex.uuid()) {
-                Some(&key) => vertex_keys.push(key),
-                None => return false, // Vertex not in triangulation, so facet can't be boundary
-            }
-        }
+        for vertex in &facet_vertices {
+            match self.vertex_key_from_uuid(&vertex.uuid()) {
+                Some(key) => vertex_keys.push(key),
+                None => return false, // Vertex not in triangulation, so facet can't be boundary
+            }
+        }

426-426: Nit: ensure message remains consistent across tests.

No functional issue; just keep assertion messages uniform.

src/core/algorithms/robust_bowyer_watson.rs (2)

999-1011: Fix potential missing Into<[T; D]> impl for Vertex

let v_coords: [T; D] = (*v).into(); assumes a blanket conversion for Vertex that may not exist. Use the point’s conversion instead.

Apply this diff:

-                let v_coords: [T; D] = (*v).into();
+                let v_coords: [T; D] = v.point().into();

923-941: Avoid relying on T::from_usize/from_f64 (tighter bounds, clearer generics)

These helpers may not be guaranteed by current bounds. Prefer From<f64> (already in the impl bounds) and avoid optional conversions.

Apply this diff:

-        let num_vertices = T::from_usize(facet_vertices.len()).unwrap_or_else(T::one);
-        for coord in &mut centroid_coords {
-            *coord /= num_vertices;
-        }
+        // Use From<f64> which is already required by impl bounds
+        let inv = T::one() / T::from(facet_vertices.len() as f64);
+        for coord in &mut centroid_coords {
+            *coord *= inv;
+        }
@@
-        let threshold = self.predicate_config.perturbation_scale
-            * self.predicate_config.perturbation_scale
-            * T::from_f64(100.0).unwrap_or_else(|| T::one());
+        let threshold = self.predicate_config.perturbation_scale
+            * self.predicate_config.perturbation_scale
+            * T::from(100.0);
docs/UUID_MIGRATION_INVENTORY.md (2)

256-259: Clarify “O(1) to O(0)” phrasing

Big-O isn’t applicable here. Suggest “Eliminate internal UUID→Key lookups (0 occurrences in hot paths)” instead.


27-34: Avoid brittle absolute line numbers in docs

Line numbers drift. Reference symbols/sections (function names, module paths) or add permalinks to commits instead.

Also applies to: 37-52

src/core/traits/insertion_algorithm.rs (2)

637-671: Reuse central facet-to-cells map to avoid recomputation

You rebuild the map per call using vertex-keys. Prefer the canonical tds.build_facet_to_cells_hashmap() (which now uses vertex-key-based facet keys), then derive keys only for facets you inspect. Less work, one source of truth.

Apply this diff to drop the local rebuild:

-        // Build a complete mapping from facet keys to all cells that contain them
-        let mut facet_to_cells: HashMap<
-            u64,
-            Vec<crate::core::triangulation_data_structure::CellKey>,
-        > = HashMap::new();
-        for (cell_key, cell) in tds.cells() {
-            if let Ok(facets) = cell.facets() {
-                for facet in &facets {
-                    // Compute facet key using VertexKeys
-                    let facet_vertices = facet.vertices();
-                    let mut vertex_keys = Vec::with_capacity(facet_vertices.len());
-                    for vertex in &facet_vertices {
-                        if let Some(key) = tds.vertex_key_from_uuid(&vertex.uuid()) {
-                            vertex_keys.push(key);
-                        }
-                    }
-                    if vertex_keys.len() == facet_vertices.len() {
-                        let facet_key = facet_key_from_vertex_keys(&vertex_keys);
-                        facet_to_cells.entry(facet_key).or_default().push(cell_key);
-                    }
-                }
-            }
-        }
+        // Canonical, vertex-key-based facet map (u64 -> Vec<(CellKey, facet_index)>)
+        let facet_to_cells = tds.build_facet_to_cells_hashmap();

681-707: Adapt to canonical map shape (keeps code simple and faster)

With the canonical map, count using (CellKey, facet_index) pairs.

Apply this diff:

-                    if let Some(sharing_cells) = facet_to_cells.get(&facet_key) {
-                        // Count how many bad vs good cells share this facet
-                        let bad_count = sharing_cells
-                            .iter()
-                            .filter(|&&cell_key| bad_cell_set.contains(&cell_key))
-                            .count();
-                        let total_count = sharing_cells.len();
+                    if let Some(sharing_cells) = facet_to_cells.get(&facet_key) {
+                        // Count how many bad vs good cells share this facet
+                        let bad_count = sharing_cells
+                            .iter()
+                            .filter(|&&(cell_key, _)| bad_cell_set.contains(&cell_key))
+                            .count();
+                        let total_count = sharing_cells.len();
src/core/triangulation_data_structure.rs (10)

411-422: Public UUID→Key maps risk invariants; consider tightening or documenting

Exposing uuid_to_vertex_key and uuid_to_cell_key as pub invites external mutation and easy corruption (many tests exploit this). If public API parity is required, at least add clear safety docs; otherwise prefer pub(crate) plus read-only getters.

-    #[serde(skip)] // Skip serialization - can be reconstructed from vertices
-    pub uuid_to_vertex_key: UuidToVertexKeyMap,
+    #[serde(skip)] // Skip serialization - can be reconstructed from vertices
+    pub(crate) uuid_to_vertex_key: UuidToVertexKeyMap,
...
-    #[serde(skip)] // Skip serialization - can be reconstructed from cells
-    pub uuid_to_cell_key: UuidToCellKeyMap,
+    #[serde(skip)] // Skip serialization - can be reconstructed from cells
+    pub(crate) uuid_to_cell_key: UuidToCellKeyMap,

If you must keep them pub, add a module-level SAFETY note that mutating these maps invalidates TDS invariants.


772-800: Hot-path helper: add inline + debug assertion

vertex_keys_for_cell() is on hot paths. Add #[inline] and a debug assertion to catch mapping drift early.

-    fn vertex_keys_for_cell(
+    #[inline]
+    fn vertex_keys_for_cell(
         &self,
         cell: &Cell<T, U, V, D>,
     ) -> Result<Vec<VertexKey>, super::facet::FacetError> {
-        cell.vertices()
+        let keys: Vec<VertexKey> = cell.vertices()
             .iter()
             .map(|v| {
                 self.uuid_to_vertex_key
                     .get(&v.uuid())
                     .copied()
                     .ok_or_else(|| super::facet::FacetError::VertexNotFound { uuid: v.uuid() })
             })
-            .collect()
+            .collect();
+        debug_assert_eq!(keys.len(), cell.vertices().len());
+        Ok(keys)
     }

1358-1396: Good UUID duplication guard; minor API nit

Logic is correct. Consider updating Err("Uuid already exists!") to a typed error for consistency with the rest of the API.


1603-1692: Neighbor assignment: bump generation counter after structural update

You update generation in add() and some repair flows, but not here. Assigning neighbors mutates topology; bump to keep caches coherent when assign_neighbors() is called standalone (tests do this).

     for (cell_key, neighbor_uuids) in updates {
         if let Some(cell) = self.cells.get_mut(cell_key) {
             if neighbor_uuids.iter().all(Option::is_none) {
                 cell.neighbors = None;
             } else {
                 cell.neighbors = Some(neighbor_uuids);
             }
         }
     }
 
-    Ok(())
+    // Topology changed; invalidate caches.
+    self.generation.fetch_add(1, Ordering::Relaxed);
+    Ok(())
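The generation-bump pattern suggested here can be shown end to end in a small sketch. All names are illustrative stand-ins, not the crate's API; the essential idea is that any structural mutation increments a counter, and cached derived data is rebuilt whenever its recorded generation no longer matches:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct Tds {
    generation: AtomicU64,
}

struct FacetCache {
    built_at_generation: u64,
    #[allow(dead_code)]
    entries: Vec<u64>, // placeholder for the facet→cells map
}

impl Tds {
    fn assign_neighbors(&mut self) {
        // ... mutate topology ...
        // Topology changed; invalidate caches by bumping the generation.
        self.generation.fetch_add(1, Ordering::Relaxed);
    }

    // Rebuild the cache only when the generation has moved on.
    fn get_or_build_cache(&self, cache: &mut Option<FacetCache>) -> u64 {
        let current = self.generation.load(Ordering::Relaxed);
        match cache {
            Some(c) if c.built_at_generation == current => current,
            _ => {
                *cache = Some(FacetCache { built_at_generation: current, entries: Vec::new() });
                current
            }
        }
    }
}

fn main() {
    let mut tds = Tds { generation: AtomicU64::new(0) };
    let mut cache = None;
    let g0 = tds.get_or_build_cache(&mut cache);
    tds.assign_neighbors();
    let g1 = tds.get_or_build_cache(&mut cache);
    assert_eq!(g1, g0 + 1); // cache was rebuilt against the new generation
}
```

The standalone `assign_neighbors()` call in the tests is exactly the case this guards: without the bump, a cache built before the call would still look current afterward.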

1779-1813: Error mapping in incident assignment

Swallowing the original error into a generic InconsistentDataStructure loses root cause. Consider including the original FacetError message (you already do in neighbors path).


1930-1951: Micro-opt: reuse facet buffer to avoid per-iteration allocs

Preallocate facet_vertex_keys outside the inner loop and clear() each time (mirrors approach in assign_neighbors).

-        for (cell_id, cell) in &self.cells {
+        for (cell_id, cell) in &self.cells {
             let Ok(vertex_keys) = self.vertex_keys_for_cell(cell) else {
                 continue; // Skip cells with missing vertex keys
             };
 
-            for i in 0..cell.vertices().len() {
-                let mut facet_vertex_keys = Vec::with_capacity(cell.vertices().len() - 1);
+            let mut facet_vertex_keys = Vec::with_capacity(cell.vertices().len().saturating_sub(1));
+            for i in 0..cell.vertices().len() {
+                facet_vertex_keys.clear();
                 for (j, &key) in vertex_keys.iter().enumerate() {
                     if i != j {
                         facet_vertex_keys.push(key);
                     }
                 }

2422-2429: Leftover “bimap” wording in error msg

Message mentions “bimap” though code uses UUID→Key maps. Update for accuracy.

-                        message: format!(
-                            "Cell key {cell_id:?} has no UUID mapping in bimap during validation"
-                        ),
+                        message: format!(
+                            "Cell key {cell_id:?} has no UUID mapping during validation"
+                        ),

2600-2602: Update comments to reflect map-based design (not BiMap)

Comments still reference “BiMaps”. Replace with “UUID→Key maps”.

-        // BiMaps are derived from the vertices/cells, so if those match, the BiMaps should be consistent
-        // (We don't need to compare the BiMaps directly since they're just indexing structures)
+        // UUID→Key maps are derived from the vertices/cells, so if those match, the maps should be consistent
+        // (No need to compare the maps directly since they're just indexing structures)

Also applies to: 2677-2681


2796-2798: Deserialize: include legacy field names in FIELDS for best compatibility

You already handle VertexBimap/CellBimap in the visitor; include them in FIELDS to avoid “unknown field” messages from some formats.

-        const FIELDS: &[&str] = &["vertices", "cells"];
+        const FIELDS: &[&str] = &["vertices", "cells", "vertex_bimap", "cell_bimap"];

Also applies to: 2719-2763


2461-2463: Silent key drops here rely on prior mapping validation

Using filter_map is fine because is_valid() validates mappings first. Add a brief comment to signal this intentional dependency.

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a090e1e and 7c7c40c.

📒 Files selected for processing (10)
  • WARP.md (1 hunks)
  • docs/REFACTORING_STRATEGY.md (1 hunks)
  • docs/UUID_MIGRATION_INVENTORY.md (1 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (4 hunks)
  • src/core/boundary.rs (6 hunks)
  • src/core/collections.rs (2 hunks)
  • src/core/facet.rs (0 hunks)
  • src/core/traits/insertion_algorithm.rs (10 hunks)
  • src/core/triangulation_data_structure.rs (80 hunks)
  • src/geometry/algorithms/convex_hull.rs (2 hunks)
💤 Files with no reviewable changes (1)
  • src/core/facet.rs
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
PR: acgetchell/delaunay#60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.

Applied to files:

  • WARP.md
🧬 Code graph analysis (5)
src/geometry/algorithms/convex_hull.rs (2)
src/core/facet.rs (3)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
  • key (455-473)
src/core/cell.rs (1)
  • vertex_keys (602-615)
src/core/triangulation_data_structure.rs (4)
src/core/cell.rs (10)
  • uuid (502-504)
  • cell (2217-2217)
  • cell (2346-2346)
  • cell (2348-2348)
  • cell (2352-2352)
  • cell (2551-2551)
  • cell (2564-2564)
  • vertex_keys (602-615)
  • vertices (475-477)
  • map (341-341)
src/core/vertex.rs (2)
  • uuid (458-460)
  • map (268-268)
src/core/collections.rs (2)
  • fast_hash_map_with_capacity (527-530)
  • fast_hash_map_with_capacity (629-629)
src/core/facet.rs (7)
  • cell (325-327)
  • vertex (370-372)
  • key (455-473)
  • facet_key_from_vertex_keys (574-586)
  • new (287-293)
  • vertices (437-444)
  • map (217-217)
src/core/boundary.rs (2)
src/core/facet.rs (3)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
  • key (455-473)
src/core/cell.rs (1)
  • vertex_keys (602-615)
src/core/traits/insertion_algorithm.rs (2)
src/core/facet.rs (4)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
  • key (455-473)
  • cell (325-327)
src/core/cell.rs (7)
  • vertex_keys (602-615)
  • cell (2217-2217)
  • cell (2346-2346)
  • cell (2348-2348)
  • cell (2352-2352)
  • cell (2551-2551)
  • cell (2564-2564)
src/core/algorithms/robust_bowyer_watson.rs (2)
src/core/cell.rs (1)
  • uuid (502-504)
src/core/vertex.rs (1)
  • uuid (458-460)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (windows-latest)
  • GitHub Check: Code Coverage
  • GitHub Check: performance-regression
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (15)
docs/REFACTORING_STRATEGY.md (1)

1-451: Solid strategy doc; no blocking issues.

Plan, phases, and compatibility guarantees read well.

src/core/boundary.rs (4)

7-7: LGTM: using facet_key_from_vertex_keys for consistency.


247-257: Test change is good: count confirmed boundary facets.


295-304: Test change is good: count confirmed boundary facets.


340-349: Test change is good: count confirmed boundary facets.

src/core/collections.rs (2)

355-360: Approve — single public constant definition verified.
rg output shows only src/core/collections.rs:359 defines MAX_PRACTICAL_DIMENSION_SIZE; no duplicates found, no dedupe required.


423-488: Mark doctest examples no_run or use public APIs

Examples in src/core/collections.rs (lines 423–488) access tds.vertices / tds.cells() (likely private) and will fail cargo test --doc. Change the fenced blocks to rust,no_run or update the examples to use public accessors; run cargo test --doc --verbose locally to verify.

src/core/algorithms/robust_bowyer_watson.rs (1)

1419-1431: Key/UUID helper migration in tests looks solid

Using tds.cell_key_from_uuid(...) and tds.cell_uuid_from_key(...) makes neighbor symmetry checks explicit and decoupled from the removed bi-maps. Nice cleanup.

Also applies to: 1563-1575, 1721-1733, 1743-1745

src/core/traits/insertion_algorithm.rs (2)

1619-1656: Test update to vertex-key-based facet keys is correct

Computing the facet key via vertex_key_from_uuid and facet_key_from_vertex_keys aligns tests with the new invariant.


1388-1391: Map maintenance is correct; consider centralizing into Tds helpers later

Directly touching tds.uuid_to_vertex_key / uuid_to_cell_key is acceptable for the migration; add a thin Tds API (register_vertex, register_cell, unregister_cell) to reduce future churn.

I attempted the suggested ripgrep check but the run returned "No files were searched" (ripgrep likely skipped files due to ignore/glob rules), so I cannot confirm absence of pre-migration artifacts. Re-run with this command and attach the output for verification:

#!/bin/bash
rg -nP --hidden --no-ignore '\b(cell_bimap|vertex_bimap)\b|facet_key_from_vertices|\.key\(\)\s*(//.*)?$' -C2 || true

Also applies to: 1427-1428, 1486-1487, 1100-1103, 1115-1116
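A minimal sketch of what such register/unregister helpers could look like, modeled with std `HashMap`s (the struct, field names, and `u64`/`usize` types are stand-ins, not the crate's actual Tds/SlotMap types):

```rust
use std::collections::HashMap;

// Simplified model: `cells` stands in for the SlotMap, keyed by usize.
struct TdsModel {
    cells: HashMap<usize, u64>,            // key -> cell UUID
    uuid_to_cell_key: HashMap<u64, usize>, // UUID -> key
    next_key: usize,
}

impl TdsModel {
    // Insert into storage and the UUID->Key map in one place,
    // so callers cannot leave the two out of sync.
    fn register_cell(&mut self, uuid: u64) -> usize {
        let key = self.next_key;
        self.next_key += 1;
        self.cells.insert(key, uuid);
        self.uuid_to_cell_key.insert(uuid, key);
        key
    }

    // Remove from both structures atomically; returns the removed UUID.
    fn unregister_cell(&mut self, key: usize) -> Option<u64> {
        let uuid = self.cells.remove(&key)?;
        self.uuid_to_cell_key.remove(&uuid);
        Some(uuid)
    }
}
```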

src/core/triangulation_data_structure.rs (5)

1183-1203: Initialization path looks good

Maps are eagerly built from inserted vertices; consistent with new UUID→Key strategy.


167-170: Serde bounds/imports

The added DeserializeOwned bounds line up with manual Deserialize impl. Looks good.

Also applies to: 174-177


403-436: Serialize/skip strategy is sound

Skipping runtime-only fields and reconstructing maps on deserialize is the right call.


1487-1500: Insufficient-vertices error path verified — no change required.

The insertion algorithm explicitly returns TriangulationConstructionError::InsufficientVertices for 0 < |V| < D+1 (src/core/traits/insertion_algorithm.rs:1028–1034). bowyer_watson() still returns Ok(()) only for an empty vertex list (src/core/triangulation_data_structure.rs:1487–1490). Tests expecting InsufficientVertices (e.g. src/core/triangulation_data_structure.rs:4258–4264) remain valid.


1185-1187: Core insertion paths already update uuid_to_cell_key — no change required.
Verified that triangulate()/insert_vertex() flows (insertion_algorithm.rs and algorithm implementations) call cells.insert(...) and immediately call uuid_to_cell_key.insert(...). Examples: src/core/triangulation_data_structure.rs:1393–1395; src/core/traits/insertion_algorithm.rs:1112–1115,1424–1427. Exceptions (test/local TDS or docs inserts that do NOT update the map): src/core/triangulation_data_structure.rs:3339, src/core/triangulation_data_structure.rs:4896, docs/optimization_recommendations.md:629.

Addresses an edge case in centroid calculation for large facet vertex counts, enhancing numerical stability. Also refactors facet visibility checks, improves data structure consistency by streamlining the facet-to-cells mapping, and optimizes UUID-to-Key lookups.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (3)
src/core/triangulation_data_structure.rs (3)

1621-1640: Two‑phase neighbor update + UUID conversion overhead.

This keeps borrow-checker happy and preserves public Uuid neighbors. If perf matters, measure and consider a keyed neighbors field internally, exposing UUIDs only via accessors. This mirrors a prior nit.

Also applies to: 1690-1715
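One way the keyed-neighbors idea could look, sketched with std `HashMap`s (struct and field names are hypothetical; the crate would use its SlotMap keys and `Uuid` instead of `usize`/`u64`):

```rust
use std::collections::HashMap;

// Hypothetical internal layout: neighbors stored as keys, UUIDs exposed
// only through an accessor, so conversion happens at the API boundary
// rather than inside every neighbor-update pass.
struct CellsModel {
    key_to_uuid: HashMap<usize, u64>,              // stand-in for SlotMap lookup
    neighbors: HashMap<usize, Vec<Option<usize>>>, // keyed neighbors per cell
}

impl CellsModel {
    fn neighbor_uuids(&self, cell_key: usize) -> Vec<Option<u64>> {
        self.neighbors
            .get(&cell_key)
            .map(|ns| {
                ns.iter()
                    .map(|n| n.and_then(|k| self.key_to_uuid.get(&k).copied()))
                    .collect()
            })
            .unwrap_or_default()
    }
}
```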


1862-1866: Silent skips can hide data drift; emit debug-only diagnostics.

Continuing on missing vertex-key mappings can mask corruption. Emit a debug log when this path is hit. This mirrors an earlier comment.

Apply this minimal pattern at each continue site:

-            let Ok(mut vertex_keys) = self.vertex_keys_for_cell(cell) else {
-                continue; // Skip cells with unresolved vertex keys
-            };
+            let Ok(mut vertex_keys) = self.vertex_keys_for_cell(cell) else {
+                if cfg!(debug_assertions) {
+                    eprintln!("debug: skipping cell {} due to unresolved vertex keys", cell.uuid());
+                }
+                continue;
+            };

Also applies to: 1956-1959, 2031-2050, 2272-2275


1690-1701: Re: prior nit about intermediate collections.

The two-phase update still creates an intermediate Vec<(CellKey, Vec<Option<Uuid>>)>. If profiles show this hot, consider applying updates per-cell via temporary Option<SmallBuffer<...>> + scoped borrows to avoid the extra Vec.

🧹 Nitpick comments (15)
src/core/algorithms/robust_bowyer_watson.rs (4)

923-931: Safe usize→scalar conversion fallback is fine; tiny simplification possible

This path will never trigger for facet sizes (≤ D). Keeping it is harmless; alternatively, simplify by unwrapping with a debug assertion.

-        let num_vertices = crate::geometry::util::safe_usize_to_scalar::<T>(facet_vertices.len())
-            .unwrap_or_else(|_| { /* large fallback */ <T as From<f64>>::from(facet_vertices.len() as f64) });
+        let num_vertices = crate::geometry::util::safe_usize_to_scalar::<T>(facet_vertices.len())
+            .expect("facet vertex count is always small");

949-949: Use of <T as From<f64>>::from(100.0) is consistent with bounds

Given the outer impl bounds include T: From<f64>, this is fine. Consider a named constant to avoid magic numbers.


1008-1023: Avoid direct field access to tds.vertices if a public accessor exists

Mixing tds.cells() (accessor) with tds.vertices (field) leaks internals. Prefer a consistent accessor for vertices to decouple tests from TDS internals.

-        let nearby_vertices = tds
-            .vertices
-            .values()
+        let nearby_vertices = tds
+            .vertices()
+            .values()

If no accessor exists, consider adding pub fn vertices(&self) -> &IndexMap<VertexKey, Vertex<...>>.


300-347: Finalize-after-insertion error mapping is good; minor style tweak

Mapping to InconsistentDataStructure is appropriate. Optional: use with_context-style helpers if available to keep messages uniform.

src/core/triangulation_data_structure.rs (7)

411-427: UUID→Key maps: great direction; add a rebuild helper and keep them strictly in sync.

  • Nice replacement for public BiMaps. To make invariants harder to violate, add a small internal helper to (re)build cell mappings from SlotMap after bulk ops (triangulation, mass removals), and invoke it where appropriate.

Apply this diff to add a helper:

+impl<T, U, V, const D: usize> Tds<T, U, V, D>
+where
+    T: CoordinateScalar + DeserializeOwned,
+    U: DataType + DeserializeOwned,
+    V: DataType + DeserializeOwned,
+    [T; D]: Copy + Default + DeserializeOwned + Serialize + Sized,
+{
+    #[inline]
+    fn rebuild_uuid_to_cell_key(&mut self) {
+        self.uuid_to_cell_key.clear();
+        self.uuid_to_cell_key.reserve(self.cells.len());
+        for (k, c) in &self.cells {
+            self.uuid_to_cell_key.insert(c.uuid(), k);
+        }
+    }
+}

1498-1518: Ensure cell mappings are rebuilt post-triangulation.

Relying on external algorithm code to populate uuid_to_cell_key risks drift. Rebuild right after triangulate(self, &vertices)?;.

Apply this diff:

         let mut algorithm = IncrementalBoyerWatson::new();
-        algorithm.triangulate(self, &vertices)?;
+        algorithm.triangulate(self, &vertices)?;
+        // Keep UUID→CellKey mapping in sync after bulk cell creation
+        self.rebuild_uuid_to_cell_key();

778-819: Avoid per-call Vec alloc in hot path.

vertex_keys_for_cell allocates a fresh Vec each time. Consider a SmallVec/SmallBuffer-backed variant that writes into a caller-provided buffer for zero/low alloc in D≤MAX_PRACTICAL_DIMENSION_SIZE cases.

If you prefer an additive change, introduce:

#[inline]
fn vertex_keys_for_cell_into(
    &self,
    cell: &Cell<T, U, V, D>,
    out: &mut SmallBuffer<VertexKey, MAX_PRACTICAL_DIMENSION_SIZE>,
) -> Result<(), super::facet::FacetError> {
    out.clear();
    for v in cell.vertices() {
        out.push(*self.uuid_to_vertex_key.get(&v.uuid())
            .ok_or(super::facet::FacetError::VertexNotFound { uuid: v.uuid() })?);
    }
    Ok(())
}

1801-1834: Incident cells: good; minor ergonomics.

Consider a tiny helper to fetch cell_uuid_from_key with a clearer error to reduce duplication across call sites.


1947-1977: Facet index fits in u8: assert the contract.

Docs state D ≤ 254. Add a debug assertion once per call to surface violations early.

Apply this diff:

 pub fn build_facet_to_cells_hashmap(&self) -> FacetToCellsMap {
+    debug_assert!(
+        D <= 254,
+        "Facet index must fit in u8; got D={}",
+        D
+    );

2447-2455: Neighbor validation: assert no missing vertex keys in debug.

filter_map hides missing mappings by design; safe because is_valid() calls the mapping validators first. Add a debug assert to catch unexpected drift if call order changes.

Apply:

-            let vertex_keys: Vec<VertexKey> = cell
+            let vertex_keys: Vec<VertexKey> = cell
                 .vertices()
                 .iter()
                 .filter_map(|v| self.vertex_key_from_uuid(&v.uuid()))
                 .collect();
+            debug_assert_eq!(
+                vertex_keys.len(),
+                cell.vertices().len(),
+                "Unexpected missing vertex-key mapping during neighbor validation"
+            );

2730-2763: Deserialize: good reconstruction of UUID→Key maps.

Nice skip-serialize/rebuild pattern. Consider calling the new rebuild_uuid_to_cell_key() for symmetry, but current code is equivalent.

Also applies to: 2779-2781

src/core/traits/insertion_algorithm.rs (4)

485-517: Bounding-box init uses INF casts; produces bogus values for non-float T

cast(f64::INFINITY/NEG_INFINITY) can fail for integer-like T, making min≈1 and max≈−1 (Lines 486–491), which breaks the heuristic. Initialize from the first observed coords instead.

Apply this diff:

-        let mut min_coords = [cast(f64::INFINITY)
-            .unwrap_or_else(|| T::zero() + T::one() * cast(1e10f64).unwrap_or_else(T::one));
-            D];
-        let mut max_coords = [cast(f64::NEG_INFINITY)
-            .unwrap_or_else(|| T::zero() - T::one() * cast(1e10f64).unwrap_or_else(T::one));
-            D];
-        let mut vertex_count = 0;
+        let mut min_coords = [T::zero(); D];
+        let mut max_coords = [T::zero(); D];
+        let mut initialized = false;
+        let mut vertex_count = 0;
@@
-            for i in 0..D {
-                if coords[i] < min_coords[i] {
-                    min_coords[i] = coords[i];
-                }
-                if coords[i] > max_coords[i] {
-                    max_coords[i] = coords[i];
-                }
-            }
+            if !initialized {
+                min_coords = coords;
+                max_coords = coords;
+                initialized = true;
+            } else {
+                for i in 0..D {
+                    if coords[i] < min_coords[i] {
+                        min_coords[i] = coords[i];
+                    }
+                    if coords[i] > max_coords[i] {
+                        max_coords[i] = coords[i];
+                    }
+                }
+            }

637-637: Minor: Local import

Local use std::collections::HashSet; is fine, but consider moving to the module imports for consistency with the rest of the file.


648-679: Cavity boundary selection logic looks right; add a guard for invalid sharing

Facet selection (exactly one bad, total 1 or 2) is correct. Add a debug guard so unexpected total_count > 2 surfaces early.

Apply this diff:

             let bad_count = bad_cells_sharing.len();
             let total_count = sharing_cells.len();
+            #[cfg(debug_assertions)]
+            debug_assert!(
+                total_count <= 2,
+                "Facet shared by more than two cells (total_count = {})",
+                total_count
+            );

1069-1074: Don't mutate Tds internals directly — add/use Tds mapping helpers

uuid_to_vertex_key and uuid_to_cell_key are pub(crate) (so this compiles), but there are no existing wrapper helpers (e.g., vertex_key_from_uuid, insert_*with_mapping, remove_*unmap). Directly updating tds.vertices and the UUID↔Key maps risks invariant/consistency breakage and future drift. Add atomic Tds methods to insert/remove with mapping and refactor these sites to use them: src/core/traits/insertion_algorithm.rs (1069–1074, 1086–1087, 1359–1362, 1398–1399, 1457–1458).

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7c7c40c and 53dbea5.

📒 Files selected for processing (5)
  • WARP.md (1 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (7 hunks)
  • src/core/traits/insertion_algorithm.rs (9 hunks)
  • src/core/triangulation_data_structure.rs (80 hunks)
  • src/geometry/algorithms/convex_hull.rs (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • WARP.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/geometry/algorithms/convex_hull.rs
🧰 Additional context used
🧬 Code graph analysis (3)
src/core/triangulation_data_structure.rs (3)
src/core/cell.rs (9)
  • uuid (502-504)
  • cell (2217-2217)
  • cell (2346-2346)
  • cell (2348-2348)
  • cell (2352-2352)
  • cell (2551-2551)
  • cell (2564-2564)
  • vertices (475-477)
  • vertex_keys (602-615)
src/core/vertex.rs (1)
  • uuid (458-460)
src/core/facet.rs (5)
  • cell (325-327)
  • vertices (437-444)
  • vertex (370-372)
  • key (455-473)
  • facet_key_from_vertex_keys (574-586)
src/core/traits/insertion_algorithm.rs (4)
src/core/boundary.rs (2)
  • tds (494-498)
  • boundary_facets (82-103)
src/core/triangulation_data_structure.rs (1)
  • new (1196-1228)
src/core/facet.rs (4)
  • new (287-293)
  • cell (325-327)
  • vertex (370-372)
  • facet_key_from_vertex_keys (574-586)
src/core/cell.rs (8)
  • cell (2217-2217)
  • cell (2346-2346)
  • cell (2348-2348)
  • cell (2352-2352)
  • cell (2551-2551)
  • cell (2564-2564)
  • from (372-374)
  • vertex_keys (602-615)
src/core/algorithms/robust_bowyer_watson.rs (3)
src/geometry/util.rs (5)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/core/vertex.rs (4)
  • from (600-602)
  • from (614-616)
  • from (628-630)
  • uuid (458-460)
src/geometry/point.rs (2)
  • from (354-356)
  • from (373-375)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (13)
src/core/algorithms/robust_bowyer_watson.rs (6)

1011-1011: Good fix: convert via v.point().into()

Avoids implicit conversions on Vertex; matches the project’s point/vertex conversion utilities.


1427-1434: Neighbor UUID↔Key translation updated correctly

Using tds.cell_key_from_uuid and tds.cell_uuid_from_key aligns tests with the SlotMap migration and preserves symmetry checks.

If available, prefer a helper like tds.neighbor_keys(cell_key) to avoid nested lookups. If not, current approach is fine.

Also applies to: 1438-1438


1571-1573: Hull-extension neighbor checks correctly use new TDS translations

Consistent with the migration plan; assertions read clearly.

Also applies to: 1577-1577, 1582-1582


1729-1735: Two-cell facet neighbor symmetry assertions look solid

Fetching UUIDs via cell_uuid_from_key and checking neighbors.iter().flatten() is correct under the new mapping.

Also applies to: 1740-1740


1751-1753: Incident-cell verification correctly converts UUID→Key

This ensures incident-cell UUIDs still resolve after the UUID→SlotMap migration.


263-266: Keep let-chains — MSRV >= 1.88 / crate edition check required

The repository MSRV is pinned to 1.89.0 (rust-version + rust-toolchain show 1.89.0), and let-chains were stabilized on stable Rust in 1.88.0 (edition-gated to Rust 2024). Ensure the crate(s) using this syntax use edition = "2024"; if any relevant Cargo.toml still uses an older edition, replace the let-chain with a nested if or guarded match or update the edition.
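A sketch of the edition-independent rewrite mentioned above (`lookup` and `describe` are hypothetical functions used only to illustrate the transformation):

```rust
// Hypothetical helper for illustration.
fn lookup(x: i32) -> Option<i32> {
    if x > 0 { Some(x * 2) } else { None }
}

fn describe(x: i32) -> &'static str {
    // On edition 2024 (Rust 1.88+) this could be a let-chain:
    //   if let Some(v) = lookup(x) && v > 2 { return "big"; }
    // Nested-if form, valid on earlier editions:
    if let Some(v) = lookup(x) {
        if v > 2 {
            return "big";
        }
    }
    "small"
}
```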

src/core/triangulation_data_structure.rs (4)

875-938: Lookup helpers LGTM.
Clear, O(1) accessors; good public surface to replace BiMap reads.


1009-1086: Reverse lookups via SlotMap access LGTM.
Correctly avoids duplicating Key→UUID maps.


2160-2181: Mapping validators LGTM.

Solid consistency checks; good error messages.

Also applies to: 2232-2253


2590-2592: PartialEq: len checks on maps are sufficient here.

Given maps are derived from SlotMaps, length parity checks are enough; comment matches behavior. LGTM.

Also applies to: 2666-2669

src/core/traits/insertion_algorithm.rs (3)

681-694: LGTM: Boundary facet materialization

Facet construction via opposite-vertex index is consistent with the (Cell, opposite_vertex) facet definition and guarded against OOB.


1589-1625: LGTM: Test now exercises facet_key via VertexKeys

Using facet_key_from_vertex_keys + tds.vertex_key_from_uuid matches the canonical key path and validates the facet-to-cells map.


1112-1141: API inconsistency: finalize_triangulation vs finalize_after_insertion — clarify intended API and align callers

PR claimed a rename to finalize_after_insertion, but both helpers and mixed callers remain.

Found:

  • Definitions: src/core/traits/insertion_algorithm.rs — fn finalize_triangulation (around line 1111) and fn finalize_after_insertion (around line 1478).
  • Call sites still using finalize_triangulation: src/core/traits/insertion_algorithm.rs:1024 and src/core/algorithms/bowyer_watson.rs:954.
  • Call sites using finalize_after_insertion: src/core/algorithms/robust_bowyer_watson.rs (≈278, 328, 613).

Action:

  • Decide intended split (construction-time finalization vs post-insertion maintenance).
  • If split is intentional: document both behaviors and update callers so each uses the correct helper.
  • If it was meant to be a rename: consolidate to a single helper (remove finalize_triangulation), update all callers, and reconcile the returned error types.

Comment on lines +771 to +1087
impl<T, U, V, const D: usize> Tds<T, U, V, D>
where
T: CoordinateScalar,
U: DataType,
V: DataType,
[T; D]: Copy + Default + DeserializeOwned + Serialize + Sized,
{
/// Helper function to get vertex keys for a cell using the optimized UUID→Key mapping.
/// This eliminates the need for `BiMap` lookups in hot paths.
///
/// # Arguments
///
/// * `cell` - The cell whose vertex keys we need
///
/// # Returns
///
/// A `Result` containing a `Vec<VertexKey>` if all vertices are found in the mapping,
/// or a `FacetError` if any vertex is missing.
///
/// # Performance
///
/// This uses `FastHashMap` for O(1) UUID→Key lookups.
#[inline]
fn vertex_keys_for_cell(
&self,
cell: &Cell<T, U, V, D>,
) -> Result<Vec<VertexKey>, super::facet::FacetError> {
let keys: Result<Vec<VertexKey>, _> = cell
.vertices()
.iter()
.map(|v| {
self.uuid_to_vertex_key
.get(&v.uuid())
.copied()
.ok_or_else(|| super::facet::FacetError::VertexNotFound { uuid: v.uuid() })
})
.collect();

#[cfg(debug_assertions)]
if let Ok(ref k) = keys {
debug_assert_eq!(
k.len(),
cell.vertices().len(),
"Mapping drift detected: vertex count mismatch"
);
}

keys
}

/// Helper function to get a cell key from a cell UUID using the optimized UUID→Key mapping.
///
/// # Arguments
///
/// * `cell_uuid` - The UUID of the cell to look up
///
/// # Returns
///
/// An `Option<CellKey>` if the cell is found, `None` otherwise.
///
/// # Performance
///
/// This uses `FastHashMap` for O(1) UUID→Key lookups.
///
/// # Examples
///
/// Successfully finding a cell key from a UUID:
///
/// ```
/// use delaunay::core::triangulation_data_structure::Tds;
/// use delaunay::vertex;
///
/// // Create a triangulation with some vertices
/// let vertices = vec![
/// vertex!([0.0, 0.0, 0.0]),
/// vertex!([1.0, 0.0, 0.0]),
/// vertex!([0.0, 1.0, 0.0]),
/// vertex!([0.0, 0.0, 1.0]),
/// ];
///
/// let tds: Tds<f64, Option<()>, Option<()>, 3> = Tds::new(&vertices).unwrap();
///
/// // Get the first cell and its UUID
/// let (cell_key, cell) = tds.cells().iter().next().unwrap();
/// let cell_uuid = cell.uuid();
///
/// // Use the helper function to find the cell key from its UUID
/// let found_key = tds.cell_key_from_uuid(&cell_uuid);
/// assert_eq!(found_key, Some(cell_key));
/// ```
///
/// Returns `None` for non-existent UUID:
///
/// ```
/// use delaunay::core::triangulation_data_structure::Tds;
/// use uuid::Uuid;
///
/// let tds: Tds<f64, Option<()>, Option<()>, 3> = Tds::new(&[]).unwrap();
/// let random_uuid = Uuid::new_v4();
///
/// let result = tds.cell_key_from_uuid(&random_uuid);
/// assert_eq!(result, None);
/// ```
#[inline]
#[must_use]
pub fn cell_key_from_uuid(&self, cell_uuid: &Uuid) -> Option<CellKey> {
self.uuid_to_cell_key.get(cell_uuid).copied()
}

/// Helper function to get a vertex key from a vertex UUID using the optimized UUID→Key mapping.
/// This eliminates `BiMap` lookups in hot paths.
///
/// # Arguments
///
/// * `vertex_uuid` - The UUID of the vertex to look up
///
/// # Returns
///
/// An `Option<VertexKey>` if the vertex is found, `None` otherwise.
///
/// # Performance
///
/// This uses `FastHashMap` for O(1) UUID→Key lookups.
///
/// # Examples
///
/// Successfully finding a vertex key from a UUID:
///
/// ```
/// use delaunay::core::triangulation_data_structure::Tds;
/// use delaunay::vertex;
///
/// // Create a triangulation with some vertices
/// let vertices = vec![
/// vertex!([0.0, 0.0, 0.0]),
/// vertex!([1.0, 0.0, 0.0]),
/// vertex!([0.0, 1.0, 0.0]),
/// vertex!([0.0, 0.0, 1.0]),
/// ];
///
/// let tds: Tds<f64, Option<()>, Option<()>, 3> = Tds::new(&vertices).unwrap();
///
/// // Get the first vertex and its UUID
/// let (vertex_key, vertex) = tds.vertices.iter().next().unwrap();
/// let vertex_uuid = vertex.uuid();
///
/// // Use the helper function to find the vertex key from its UUID
/// let found_key = tds.vertex_key_from_uuid(&vertex_uuid);
/// assert_eq!(found_key, Some(vertex_key));
/// ```
///
/// Returns `None` for non-existent UUID:
///
/// ```
/// use delaunay::core::triangulation_data_structure::Tds;
/// use uuid::Uuid;
///
/// let tds: Tds<f64, Option<()>, Option<()>, 3> = Tds::new(&[]).unwrap();
/// let random_uuid = Uuid::new_v4();
///
/// let result = tds.vertex_key_from_uuid(&random_uuid);
/// assert_eq!(result, None);
/// ```
#[inline]
#[must_use]
pub fn vertex_key_from_uuid(&self, vertex_uuid: &Uuid) -> Option<VertexKey> {
self.uuid_to_vertex_key.get(vertex_uuid).copied()
}

/// Helper function to get a cell UUID from a cell key using direct `SlotMap` access.
/// This is the reverse of `cell_key_from_uuid()` for the less common Key→UUID direction.
///
/// # Arguments
///
/// * `cell_key` - The key of the cell to look up
///
/// # Returns
///
/// An `Option<Uuid>` if the cell is found, `None` otherwise.
///
/// # Performance
///
/// This uses direct `SlotMap` indexing for O(1) Key→UUID lookups.
///
/// # Examples
///
/// Successfully getting a UUID from a cell key:
///
/// ```
/// use delaunay::core::triangulation_data_structure::Tds;
/// use delaunay::vertex;
///
/// // Create a triangulation with some vertices
/// let vertices = vec![
/// vertex!([0.0, 0.0, 0.0]),
/// vertex!([1.0, 0.0, 0.0]),
/// vertex!([0.0, 1.0, 0.0]),
/// vertex!([0.0, 0.0, 1.0]),
/// ];
///
/// let tds: Tds<f64, Option<()>, Option<()>, 3> = Tds::new(&vertices).unwrap();
///
/// // Get the first cell key and expected UUID
/// let (cell_key, cell) = tds.cells().iter().next().unwrap();
/// let expected_uuid = cell.uuid();
///
/// // Use the helper function to get UUID from the cell key
/// let found_uuid = tds.cell_uuid_from_key(cell_key);
/// assert_eq!(found_uuid, Some(expected_uuid));
/// ```
///
/// Round-trip conversion between UUID and key:
///
/// ```
/// use delaunay::core::triangulation_data_structure::Tds;
/// use delaunay::vertex;
///
/// // Create a triangulation with some vertices
/// let vertices = vec![
/// vertex!([0.0, 0.0, 0.0]),
/// vertex!([1.0, 0.0, 0.0]),
/// vertex!([0.0, 1.0, 0.0]),
/// vertex!([0.0, 0.0, 1.0]),
/// ];
///
/// let tds: Tds<f64, Option<()>, Option<()>, 3> = Tds::new(&vertices).unwrap();
///
/// // Get the first cell's UUID
/// let (_, cell) = tds.cells().iter().next().unwrap();
/// let original_uuid = cell.uuid();
///
/// // Convert UUID to key, then key back to UUID
/// let cell_key = tds.cell_key_from_uuid(&original_uuid).unwrap();
/// let round_trip_uuid = tds.cell_uuid_from_key(cell_key).unwrap();
/// assert_eq!(original_uuid, round_trip_uuid);
/// ```
#[inline]
#[must_use]
pub fn cell_uuid_from_key(&self, cell_key: CellKey) -> Option<Uuid> {
self.cells.get(cell_key).map(super::cell::Cell::uuid)
}

/// Helper function to get a vertex UUID from a vertex key using direct `SlotMap` access.
/// This is the reverse of `vertex_key_from_uuid()` for the less common Key→UUID direction.
///
/// # Arguments
///
/// * `vertex_key` - The key of the vertex to look up
///
/// # Returns
///
/// An `Option<Uuid>` if the vertex is found, `None` otherwise.
///
/// # Performance
///
/// This uses direct `SlotMap` indexing for O(1) Key→UUID lookups.
///
/// # Examples
///
/// Successfully getting a UUID from a vertex key:
///
/// ```
/// use delaunay::core::triangulation_data_structure::Tds;
/// use delaunay::vertex;
///
/// // Create a triangulation with some vertices
/// let vertices = vec![
/// vertex!([0.0, 0.0, 0.0]),
/// vertex!([1.0, 0.0, 0.0]),
/// vertex!([0.0, 1.0, 0.0]),
/// vertex!([0.0, 0.0, 1.0]),
/// ];
///
/// let tds: Tds<f64, Option<()>, Option<()>, 3> = Tds::new(&vertices).unwrap();
///
/// // Get the first vertex key and expected UUID
/// let (vertex_key, vertex) = tds.vertices.iter().next().unwrap();
/// let expected_uuid = vertex.uuid();
///
/// // Use the helper function to get UUID from the vertex key
/// let found_uuid = tds.vertex_uuid_from_key(vertex_key);
/// assert_eq!(found_uuid, Some(expected_uuid));
/// ```
///
/// Round-trip conversion between UUID and key:
///
/// ```
/// use delaunay::core::triangulation_data_structure::Tds;
/// use delaunay::vertex;
///
/// // Create a triangulation with some vertices
/// let vertices = vec![
/// vertex!([0.0, 0.0, 0.0]),
/// vertex!([1.0, 0.0, 0.0]),
/// vertex!([0.0, 1.0, 0.0]),
/// vertex!([0.0, 0.0, 1.0]),
/// ];
///
/// let tds: Tds<f64, Option<()>, Option<()>, 3> = Tds::new(&vertices).unwrap();
///
/// // Get the first vertex's UUID
/// let (_, vertex) = tds.vertices.iter().next().unwrap();
/// let original_uuid = vertex.uuid();
///
/// // Convert UUID to key, then key back to UUID
/// let vertex_key = tds.vertex_key_from_uuid(&original_uuid).unwrap();
/// let round_trip_uuid = tds.vertex_uuid_from_key(vertex_key).unwrap();
/// assert_eq!(original_uuid, round_trip_uuid);
/// ```
#[inline]
#[must_use]
pub fn vertex_uuid_from_key(&self, vertex_key: VertexKey) -> Option<Uuid> {
self.vertices
.get(vertex_key)
.map(super::vertex::Vertex::uuid)
}
}

💡 Verification agent

🧩 Analysis chain

API compatibility check across repo.

Since public BiMaps were replaced by methods/uuid→key maps, ensure no remaining call sites reference removed fields/types.

Run:

Also applies to: 1498-1518, 1860-1883


🏁 Script executed:

#!/bin/bash
# Verify no remaining references to removed BiMaps or old API
rg -nP -C2 '\b(vertex_bimap|cell_bimap)\b|BiMap<\s*Uuid\s*,\s*(VertexKey|CellKey)\s*>' --glob '!**/target/**'
# Verify all conversions use the new helpers
rg -nP -C1 '\b(cell_key_from_uuid|vertex_key_from_uuid|cell_uuid_from_key|vertex_uuid_from_key)\b' --glob '!**/target/**'

Length of output: 297


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "PWD: $(pwd)"
echo "Git root: $(git rev-parse --show-toplevel 2>/dev/null || true)"
echo "Git branch: $(git rev-parse --abbrev-ref HEAD 2>/dev/null || true)"
echo "Top-level entries:"
ls -la | sed -n '1,200p'

PATTERNS=(
  '\b(vertex_bimap|cell_bimap)\b'
  'BiMap<\s*Uuid\s*,\s*(VertexKey|CellKey)\s*>'
  '\bbimap\b'
  '\bcell_key_from_uuid\b'
  '\bvertex_key_from_uuid\b'
  '\bcell_uuid_from_key\b'
  '\bvertex_uuid_from_key\b'
  '\buuid_to_vertex_key\b'
  '\buuid_to_cell_key\b'
)

for p in "${PATTERNS[@]}"; do
  echo
  echo "=== Searching for pattern: $p ==="
  rg -uu --hidden -n -C2 -S --no-messages -e "$p" || echo "No matches for: $p"
done

Length of output: 96760


Fix leftover BiMap usages — migrate code/tests/docs to the new uuid→key API

rg found live references to BiMap and vertex_bimap/cell_bimap in source, tests, benches and docs; these must be migrated to the TDS uuid_to_* maps or wrapped with compatibility helpers.

  • src/core/cell.rs: still imports bimap and exposes vertex_keys(vertex_bimap: &BiMap<Uuid, VertexKey>) — lines ~55 and ~602–605.
  • Multiple tests/examples in src/core/cell.rs (≈1688–1843) and benches/microbenchmarks.rs reference BiMap usage.
  • docs/optimization_recommendations.md and docs/REFACTORING_STRATEGY.md contain many examples/struct fields using cell_bimap / vertex_bimap (e.g. docs/optimization_recommendations.md:1124–1172, 1029–1041).
  • Cargo.toml / Cargo.lock still list the bimap crate (Cargo.toml:23).

Action: replace/remove BiMap types/usages in code/tests/benches and update docs to use uuid_to_vertex_key/uuid_to_cell_key and helper methods (vertex_key_from_uuid / cell_key_from_uuid), or add stable compatibility wrappers that forward to the new maps; then remove the bimap dependency if no code remains.
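The compatibility-wrapper option suggested above can be sketched in isolation. The `Tds`, `Uuid`, and `VertexKey` names below are stand-ins (plain `u128`/`u64` aliases and a `HashMap`), not the crate's real types; the sketch only illustrates forwarding old BiMap-style lookups to the new uuid→key map:

```rust
use std::collections::HashMap;

// Stand-ins for the crate's Uuid and slotmap key types.
type Uuid = u128;
type VertexKey = u64;

struct Tds {
    // New internal map replacing the old BiMap<Uuid, VertexKey>.
    uuid_to_vertex_key: HashMap<Uuid, VertexKey>,
}

impl Tds {
    // New-style helper that call sites should migrate to.
    fn vertex_key_from_uuid(&self, uuid: &Uuid) -> Option<VertexKey> {
        self.uuid_to_vertex_key.get(uuid).copied()
    }

    // Compatibility shim for code still written against BiMap-style lookup;
    // it forwards to the new map and can be deleted once callers migrate.
    #[allow(dead_code)]
    #[deprecated(note = "use vertex_key_from_uuid instead")]
    fn get_by_left(&self, uuid: &Uuid) -> Option<&VertexKey> {
        self.uuid_to_vertex_key.get(uuid)
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert(42u128, 7u64);
    let tds = Tds { uuid_to_vertex_key: map };
    assert_eq!(tds.vertex_key_from_uuid(&42), Some(7));
    println!("key = {:?}", tds.vertex_key_from_uuid(&42));
}
```

Keeping the shim `#[deprecated]` lets the compiler point out remaining BiMap-style call sites while the migration proceeds.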

🤖 Prompt for AI Agents
In src/core/triangulation_data_structure.rs around lines 771–1087, the review
found leftover BiMap usages across the repo; replace all BiMap types/usages
(vertex_bimap/cell_bimap) with the new Tds uuid_to_vertex_key / uuid_to_cell_key
maps and the helper methods (vertex_key_from_uuid / cell_key_from_uuid), or add
small compatibility wrapper functions on Tds that accept the old BiMap-style
arguments and forward lookups to the uuid_to_* maps; specifically update
src/core/cell.rs to remove bimap imports and change function signatures that
accept vertex_bimap: &BiMap<Uuid, VertexKey> to use Tds methods or a wrapper,
update all tests/benches/examples to call tds.vertex_key_from_uuid /
tds.cell_key_from_uuid (or the wrapper) instead of direct BiMap access, edit
docs to show the new API, and finally remove the bimap crate from
Cargo.toml/Cargo.lock if no references remain.

Refactors internal UUID handling to use SlotMap keys directly, improving triangulation performance by reducing hash lookups.
Maintains API compatibility by preserving UUIDs in public interfaces and serialization. Optimizes collections and algorithms for key-based operations. Removes bimap dependency.
Includes new refactoring documentation. (Internal change)

Refs: Fixes #72, #73
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 0

🧹 Nitpick comments (3)
src/core/cell.rs (1)

545-586: Add zero‑alloc iterator for vertex UUIDs

Current Vec allocation is fine; consider offering an iterator variant to avoid allocations on hot paths.

 pub fn vertex_uuids(&self) -> Vec<Uuid> {
     self.vertices()
         .iter()
-        .map(super::vertex::Vertex::uuid)
+        .map(super::vertex::Vertex::uuid)
         .collect()
 }
+
+#[inline]
+pub fn vertex_uuid_iter(&self) -> impl ExactSizeIterator<Item = Uuid> + '_ {
+    self.vertices().iter().map(super::vertex::Vertex::uuid)
+}
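Because `map` over a slice iterator preserves `ExactSizeIterator`, callers of such an iterator can size buffers or report counts without collecting. A stand-alone sketch of the pattern (the `Vertex`/`Cell` types and `u128` ids here are simplified stand-ins, not the crate's definitions):

```rust
// Stand-in for a vertex carrying a copyable id (the real type holds a Uuid).
struct Vertex {
    uuid: u128,
}

struct Cell {
    vertices: Vec<Vertex>,
}

impl Cell {
    // Zero-allocation iterator over vertex ids, mirroring the proposed
    // vertex_uuid_iter(): no intermediate Vec is built.
    fn vertex_uuid_iter(&self) -> impl ExactSizeIterator<Item = u128> + '_ {
        self.vertices.iter().map(|v| v.uuid)
    }
}

fn main() {
    let cell = Cell {
        vertices: vec![Vertex { uuid: 1 }, Vertex { uuid: 2 }],
    };
    let it = cell.vertex_uuid_iter();
    assert_eq!(it.len(), 2); // exact size known up front, no allocation
    let sum: u128 = it.sum();
    assert_eq!(sum, 3);
    println!("ids sum to {sum}");
}
```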
src/core/algorithms/robust_bowyer_watson.rs (2)

673-701: Facet→cells map validation: consider deduping cell keys defensively

If build_facet_to_cells_hashmap() can ever yield duplicate (cell_key, idx) pairs per facet, dedupe to keep invariants tight.

-            .map(|(facet_key, cell_facet_pairs)| {
+            .map(|(facet_key, cell_facet_pairs)| {
                 // Extract just the CellKeys, discarding facet indices
-                let cell_keys: Vec<CellKey> = cell_facet_pairs
-                    .iter()
-                    .map(|(cell_key, _)| *cell_key)
-                    .collect();
+                let mut cell_keys: Vec<CellKey> = cell_facet_pairs
+                    .iter()
+                    .map(|(cell_key, _)| *cell_key)
+                    .collect();
+                cell_keys.sort_unstable();
+                cell_keys.dedup();

927-951: Avoid panic on safe_usize_to_scalar failure

Even if it “should never fail”, prefer a non‑panicking path in library code.

-        let num_vertices = crate::geometry::util::safe_usize_to_scalar::<T>(facet_vertices.len())
-            .expect("facet vertex count is always small");
+        let num_vertices = match crate::geometry::util::safe_usize_to_scalar::<T>(facet_vertices.len()) {
+            Ok(n) => n,
+            Err(_) => return false, // Conservatively treat as not visible
+        };
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 53dbea5 and eeeb046.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (13)
  • Cargo.toml (0 hunks)
  • benches/microbenchmarks.rs (1 hunks)
  • cspell.json (1 hunks)
  • docs/REFACTORING_STRATEGY.md (1 hunks)
  • docs/UUID_MIGRATION_INVENTORY.md (1 hunks)
  • docs/optimization_recommendations.md (6 hunks)
  • examples/convex_hull_3d_50_points.rs (3 hunks)
  • examples/triangulation_3d_50_points.rs (1 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (15 hunks)
  • src/core/cell.rs (9 hunks)
  • src/core/collections.rs (2 hunks)
  • src/core/facet.rs (1 hunks)
  • src/core/traits/insertion_algorithm.rs (17 hunks)
💤 Files with no reviewable changes (1)
  • Cargo.toml
✅ Files skipped from review due to trivial changes (1)
  • cspell.json
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/core/collections.rs
👮 Files not reviewed due to content moderation or server errors (8)
  • examples/triangulation_3d_50_points.rs
  • src/core/facet.rs
  • docs/UUID_MIGRATION_INVENTORY.md
  • src/core/traits/insertion_algorithm.rs
  • docs/REFACTORING_STRATEGY.md
  • benches/microbenchmarks.rs
  • examples/convex_hull_3d_50_points.rs
  • docs/optimization_recommendations.md
🧰 Additional context used
🧬 Code graph analysis (5)
benches/microbenchmarks.rs (2)
src/core/cell.rs (8)
  • vertices (474-476)
  • cell (2184-2184)
  • cell (2313-2313)
  • cell (2315-2315)
  • cell (2319-2319)
  • cell (2518-2518)
  • cell (2531-2531)
  • uuid (501-503)
src/core/triangulation_data_structure.rs (2)
  • vertices (535-537)
  • cells_mut (862-864)
src/core/algorithms/robust_bowyer_watson.rs (3)
src/geometry/util.rs (6)
  • core (1388-1388)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/geometry/point.rs (2)
  • from (354-356)
  • from (373-375)
src/core/vertex.rs (3)
  • from (600-602)
  • from (614-616)
  • from (628-630)
src/core/cell.rs (1)
src/core/vertex.rs (3)
  • hash (659-663)
  • uuid (458-460)
  • into_hashmap (381-383)
examples/triangulation_3d_50_points.rs (1)
src/core/boundary.rs (1)
  • tds (494-498)
src/core/traits/insertion_algorithm.rs (3)
src/core/boundary.rs (2)
  • tds (494-498)
  • boundary_facets (82-103)
src/core/facet.rs (4)
  • new (287-293)
  • cell (325-327)
  • vertex (370-372)
  • facet_key_from_vertex_keys (574-586)
src/core/triangulation_data_structure.rs (2)
  • new (1364-1396)
  • default (206-208)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: performance-regression
  • GitHub Check: build (windows-latest)
  • GitHub Check: Codacy Security Scan
  • GitHub Check: Code Coverage
🔇 Additional comments (13)
src/core/cell.rs (3)

809-811: Return type change to FastHashMap looks good

Nice switch to the project’s fast map. Ensure any callers/tests expecting std::collections::HashMap are updated or covered by a compatibility alias as noted above.


1646-1827: vertex_uuids tests are thorough and clear

Solid coverage across dimensions and scalar types; uniqueness and nil checks are helpful.


751-811: Docs + API mismatch: into_hashmap returns FastHashMap — update docs or provide compatibility

  • Docstring in src/core/cell.rs still says HashMap and the examples mention HashMap while the function signature returns FastHashMap (fn at src/core/cell.rs:809); update docs/examples to FastHashMap.
  • This is a public-API change — confirm downstream crates are compatible or provide a compatibility path (e.g. pub type CellMap<T,U,V,const D:usize> = FastHashMap<Uuid, Cell<T,U,V,D>>, or add a companion into_fast_hashmap/deprecated shim).
src/core/algorithms/robust_bowyer_watson.rs (10)

7-7: Switch to FastHash{Map,Set}

Good consistency with collections module. Confirm other modules using these helpers are aligned.


279-285: Better finalize() error context

Helpful counts in the error message.


334-336: Improved hull‑extension finalize() error context

Good improvement for debugging failures.


519-526: Boundary detection: bad_cell_set + processed_facets

Correct use of sets; avoids duplicate work.


705-707: Boundary facet criterion is precise and readable

Matches (bad=1, total∈{1,2}) expectation.


1010-1024: Heuristic density check uses tds.vertices() accessor

Good encapsulation; coordinate conversion is clear.


1429-1441: Neighbor symmetry checks via UUID↔Key API

Nice migration to cell_key_from_uuid/cell_uuid_from_key; assertions read well.

Also applies to: 1579-1585


1619-1631: Inserted‑vertex presence check is robust

Comparing coords with EPSILON avoids UUID coupling for this test.


1688-1707: Duplicate cell detection via UUID signatures

Effective post‑insertion invariant check.


1751-1755: Neighbor cross‑references validated

Good use of translation helpers; asserts are precise.

Also applies to: 1730-1746

Replaces `vertex_uuids()` method calls with the more efficient `vertex_uuid_iter()` method throughout the delaunay codebase to reduce heap allocations and improve performance.

Includes comprehensive testing, performance analysis, and migration guidance.

Adds a new example to showcase the zero-allocation iterator's benefits.

Adds documentation about the new optimization.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/core/algorithms/robust_bowyer_watson.rs (1)

934-958: Missing DivAssign bound: /= on generic T will not compile without it.

*coord /= num_vertices; requires T: DivAssign<T>. Add the bound (or avoid /=).

Apply this diff:

-    fn fallback_visibility_heuristic(
+    fn fallback_visibility_heuristic(
         &self,
         facet: &Facet<T, U, V, D>,
         vertex: &Vertex<T, U, D>,
-    ) -> bool {
+    ) -> bool
+    where
+        T: std::ops::DivAssign<T>,
+    {

And (optional, for consistency) import the trait near the top:

-use std::ops::{AddAssign, Div, SubAssign};
+use std::ops::{AddAssign, Div, DivAssign, SubAssign};

Alternatively, replace /= with *coord = *coord / num_vertices; and add a T: std::ops::Div<T, Output = T> bound here.
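Either fix requires a division bound on the generic scalar. A standalone sketch of the `DivAssign` route (the function name and signature are illustrative, not the crate's):

```rust
use std::ops::{AddAssign, DivAssign};

// Averages coordinates in place. `T: DivAssign` is what permits `/=` on the
// generic scalar, mirroring the bound the review suggests adding.
fn centroid<T, const D: usize>(points: &[[T; D]], n: T) -> [T; D]
where
    T: Copy + Default + AddAssign + DivAssign,
{
    let mut acc = [T::default(); D];
    for p in points {
        for (a, c) in acc.iter_mut().zip(p.iter()) {
            *a += *c;
        }
    }
    for coord in &mut acc {
        *coord /= n; // compiles only because T: DivAssign
    }
    acc
}

fn main() {
    let pts = [[0.0_f64, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]];
    let c = centroid(&pts, 3.0);
    assert_eq!(c, [1.0, 1.0, 0.0]);
    println!("centroid = {c:?}");
}
```

The alternative (`*coord = *coord / n;` with `T: Div<Output = T>`) trades an extra copy for a weaker bound; either is fine as long as the bound is declared.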

🧹 Nitpick comments (6)
src/core/cell.rs (3)

854-856: Docs still say “HashMap” but API now returns FastHashMap.

Update rustdoc text and examples to avoid confusion.

Apply this diff to fix the docs:

-    /// Converts a vector of cells into a `HashMap` indexed by their UUIDs.
+    /// Converts a vector of cells into a `FastHashMap` indexed by their UUIDs.
@@
-    /// This utility function transforms a collection of cells into a hash map structure
+    /// This utility function transforms a collection of cells into a hash map structure
@@
-    /// by UUID.
+    /// by UUID. Uses `FastHashMap` for performance.
@@
-    /// # Returns
-    ///
-    /// A [`HashMap<Uuid, Self>`] where each key is a cell's UUID and each value
-    /// is the corresponding cell. The `HashMap` provides O(1) average-case lookups
-    /// by UUID.
+    /// # Returns
+    ///
+    /// A [`FastHashMap<Uuid, Self>`] where each key is a cell's UUID and each value
+    /// is the corresponding cell. The map provides O(1) average‑case lookups by UUID.
@@
-    /// // Empty vector produces empty HashMap
+    /// // Empty vector produces empty FastHashMap

Also applies to: 796-853, 848-853


1776-1789: Nit: 4D cell is a 4‑simplex, not a 5‑simplex.

Fix wording in the test comment.

-    /// // Test vertex_uuids with a 4D cell (5-simplex) using integer data
+    /// // Test vertex_uuids with a 4D cell (4-simplex) using integer data

3136-3250: Benchmark‑style test may be flaky and slow on CI.

Either mark as ignored or gate behind a feature to keep unit tests stable.

-    #[test]
+    #[test]
+    #[ignore = "perf/ergonomics demo; not a deterministic unit test"]
     fn test_vertex_uuid_iter_by_value_vs_by_reference_analysis() {

Alternatively, reduce iterations or guard with #[cfg(feature = "bench")].

examples/zero_allocation_iterator_demo.rs (2)

51-69: Iteration count is very high for an example.

Consider reducing to 10_000 to keep demo snappy; perf trend remains clear.


71-83: Minor: handle near‑zero durations more simply.

The INFINITY branch is fine; alternatively print N/A when iter_duration < 1ns to avoid misleading “∞x”.
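A guard of the suggested kind might look like this sketch; the 1 µs threshold is illustrative, not a value from the example:

```rust
use std::time::Duration;

// Returns None when either measurement is too small to trust, so the caller
// can print "N/A" instead of a misleading "infinite" speedup.
fn speedup(vec_time: Duration, iter_time: Duration) -> Option<f64> {
    const MIN_NANOS: u128 = 1_000; // below ~1µs, treat the timing as noise
    if vec_time.as_nanos() < MIN_NANOS || iter_time.as_nanos() < MIN_NANOS {
        return None;
    }
    Some(vec_time.as_nanos() as f64 / iter_time.as_nanos() as f64)
}

fn main() {
    // Too small to measure reliably: report N/A rather than a huge ratio.
    assert_eq!(speedup(Duration::from_nanos(5), Duration::from_nanos(1)), None);

    let s = speedup(Duration::from_micros(40), Duration::from_micros(10)).unwrap();
    assert!((s - 4.0).abs() < 1e-9);
    println!("speedup = {s}x");
}
```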

docs/vertex_uuid_iter_optimizations.md (1)

217-223: Avoid hard‑coding counts like “All 663 tests pass.”

These numbers drift quickly. Prefer “All tests pass” or include a commit/date qualifier.

Would you like me to update the doc with a short “as of <commit_sha> (<YYYY‑MM‑DD>)” note?

Also applies to: 291-299

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eeeb046 and 4e2bd42.

📒 Files selected for processing (5)
  • cspell.json (1 hunks)
  • docs/vertex_uuid_iter_optimizations.md (1 hunks)
  • examples/zero_allocation_iterator_demo.rs (1 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (16 hunks)
  • src/core/cell.rs (10 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cspell.json
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/algorithms/robust_bowyer_watson.rs
🧬 Code graph analysis (3)
examples/zero_allocation_iterator_demo.rs (1)
src/core/cell.rs (8)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • number_of_vertices (448-450)
  • uuid (501-503)
src/core/algorithms/robust_bowyer_watson.rs (3)
src/geometry/util.rs (6)
  • core (1388-1388)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/core/cell.rs (9)
  • from (371-373)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • vertex_uuids (581-586)
  • uuid (501-503)
src/core/vertex.rs (4)
  • from (600-602)
  • from (614-616)
  • from (628-630)
  • uuid (458-460)
src/core/cell.rs (1)
src/core/vertex.rs (4)
  • point (403-405)
  • hash (659-663)
  • uuid (458-460)
  • into_hashmap (381-383)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Code Coverage
  • GitHub Check: performance-regression
  • GitHub Check: build (windows-latest)
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (7)
src/core/cell.rs (2)

545-586: Additive UUID accessor looks good.

Straightforward, correct mapping from vertices to UUIDs. Complements the zero‑alloc iterator.


588-631: Zero‑allocation iterator is well‑designed.

ExactSizeIterator is preserved via slice::Iter; returning Uuid by value is appropriate here.

src/core/algorithms/robust_bowyer_watson.rs (5)

279-287: Improved error context on finalization.

Great addition; the created/removed counts will speed up debugging.

Also applies to: 333-337


520-559: Robust boundary facet detection reads well.

Using FastHashSet for dedup and tracking processed facets avoids redundant work.


674-707: Facet map validation is solid.

Dedup + “≤2 cells per facet” guard prevents silent topology corruption.
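The “≤2 cells per facet” invariant is easy to demonstrate in isolation. The `u64` facet keys and `usize` cell keys below are placeholders for the crate's real key types:

```rust
use std::collections::HashMap;

// Validates that, after deduping, no facet is owned by more than two cells.
// In a valid triangulation a facet belongs to exactly one cell (boundary)
// or two cells (interior); three or more owners means topology corruption.
fn validate_facet_map(facet_to_cells: &HashMap<u64, Vec<usize>>) -> Result<(), u64> {
    for (&facet_key, cells) in facet_to_cells {
        let mut keys = cells.clone();
        keys.sort_unstable();
        keys.dedup();
        if keys.len() > 2 {
            return Err(facet_key); // report the offending facet key
        }
    }
    Ok(())
}

fn main() {
    let mut map = HashMap::new();
    map.insert(1u64, vec![0usize, 1]); // interior facet: two cells
    map.insert(2u64, vec![3usize]); // boundary facet: one cell
    map.insert(3u64, vec![4usize, 4, 4]); // duplicates dedupe to one owner
    assert!(validate_facet_map(&map).is_ok());

    map.insert(4u64, vec![5, 6, 7]); // three distinct owners: invalid
    assert_eq!(validate_facet_map(&map), Err(4));
    println!("invariant checks passed");
}
```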


1017-1033: OK on density heuristic.

Iterating tds.vertices().values() and simple distance check is fine for the heuristic gate.


1569-1595: Good neighbor symmetry assertions in tests.

Translation via cell_key_from_uuid/cell_uuid_from_key aligns with the migration.

Also applies to: 1733-1749

Adds a `bench` feature to enable timing-based tests and performance analysis. The feature gates tests that may be flaky in CI environments and is intended for local performance investigations. Includes a new example demonstrating zero-allocation iterator performance. Also adds a missing `DivAssign` trait bound in `robust_bowyer_watson.rs` and `#[cfg(feature = "bench")]` guards around benchmark tests.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/core/algorithms/robust_bowyer_watson.rs (1)

466-476: Reduce per‑cell allocations in robust_find_bad_cells.

Reuse a single Vec<Point<_, D>> across the loop to cut allocs.

-        let mut bad_cells = Vec::new();
-
-        for (cell_key, cell) in tds.cells() {
-            // Extract vertex points from the cell
-            let vertex_points: Vec<Point<T, D>> =
-                cell.vertices().iter().map(|v| *v.point()).collect();
+        let mut bad_cells = Vec::new();
+        let mut vertex_points = Vec::with_capacity(D + 1);
+
+        for (cell_key, cell) in tds.cells() {
+            // Extract vertex points from the cell (reusing buffer)
+            vertex_points.clear();
+            vertex_points.extend(cell.vertices().iter().map(|v| *v.point()));

Also applies to: 505-506

♻️ Duplicate comments (1)
src/core/boundary.rs (1)

7-7: Past note on deduped vertex-key lookup helper still applies.

Centralizing vertex-key derivation for facets would reduce future duplication across modules.

🧹 Nitpick comments (9)
docs/code_organization.md (2)

265-266: Cross-reference the bench section

Link this bullet to the section above so readers know where to find commands.

-- **Performance Analysis**: `bench` feature flag for timing-based tests and performance demonstrations
+- **Performance Analysis**: `bench` feature flag for timing-based tests and performance demos (see “Benchmark-style tests” note above)

276-280: Avoid redundancy with item 4 (“Performance Focus”)

Differentiate “Performance Analysis” (opt-in timing tests) from CI performance focus, or merge to one bullet.

-6. **Performance Analysis**: Optional `bench` feature for timing-based tests and performance demonstrations
+6. **Performance Analysis (opt-in)**: `bench` feature for timing-based tests and ergonomics checks; distinct from CI-driven regression detection in item 4
examples/zero_allocation_iterator_demo.rs (3)

25-31: Guard against empty triangulations instead of unwrap().

Make the example resilient to edge cases (e.g., construction failing or producing no cells).

Apply:

-    // Get the first cell from the triangulation
-    let (_cell_key, cell) = tds.cells().iter().next().unwrap();
+    // Get the first cell from the triangulation
+    let Some((_cell_key, cell)) = tds.cells().iter().next() else {
+        eprintln!("No cells in triangulation; nothing to demo.");
+        return Ok(());
+    };

74-80: Speedup calculation: handle extremely small vec_duration too.

Current guard only checks iter_duration; protect the denominator more generally.

Apply:

-    if iter_duration.as_nanos() < 1000 {
+    if iter_duration.as_nanos() < 1000 || vec_duration.as_nanos() < 1000 {
         println!("  Speedup: N/A (iteration time too small to measure reliably)");
     } else {

26-26: Nit: prefer values() when you only need the cell.

Slightly clearer intent than iter().next().

-let (_cell_key, cell) = tds.cells().iter().next().unwrap();
+let cell = tds.cells().values().next().unwrap();
src/core/cell.rs (1)

545-586: Add #[inline] to vertex_uuids for parity with the iterator.

Tiny hot-path win; aligns with vertex_uuid_iter() which is #[inline].

-    pub fn vertex_uuids(&self) -> Vec<Uuid> {
+    #[inline]
+    pub fn vertex_uuids(&self) -> Vec<Uuid> {
         self.vertices()
             .iter()
             .map(super::vertex::Vertex::uuid)
             .collect()
     }
src/core/boundary.rs (2)

153-160: Avoid reaching into Tds internals; use the helper.

Use Tds::vertex_key_from_uuid(&Uuid) to improve encapsulation and future-proofing.

-        for vertex in &facet_vertices {
-            match self.uuid_to_vertex_key.get(&vertex.uuid()) {
-                Some(&key) => vertex_keys.push(key),
-                None => return false, // Vertex not in triangulation, so facet can't be boundary
-            }
-        }
+        for vertex in &facet_vertices {
+            match self.vertex_key_from_uuid(&vertex.uuid()) {
+                Some(key) => vertex_keys.push(key),
+                None => return false, // Vertex not in triangulation, so facet can't be boundary
+            }
+        }

144-167: Optional: add a cached variant to avoid rebuilding the map per call.

is_boundary_facet() rebuilds the facet map each time. Consider an overload that accepts a prebuilt map for tight loops.

Example signature to add (outside this hunk):

pub fn is_boundary_facet_with_map(
    &self,
    facet: &Facet<T, U, V, D>,
    facet_to_cells: &std::collections::HashMap<u64, Vec<(CellKey, u8)>>
) -> bool { /* same logic but reuse map */ }
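Amortizing the map build across many queries — the point of the proposed overload — looks like this in miniature (key types and names are placeholders, not the crate's API):

```rust
use std::collections::HashMap;

// One hash lookup per query against a prebuilt facet->cells map, instead of
// rebuilding the whole map inside every is_boundary_facet() call.
fn is_boundary(facet_key: u64, facet_to_cells: &HashMap<u64, Vec<usize>>) -> bool {
    facet_to_cells
        .get(&facet_key)
        .is_some_and(|cells| cells.len() == 1)
}

fn main() {
    // Build the map once...
    let mut facet_to_cells = HashMap::new();
    facet_to_cells.insert(10u64, vec![0usize]); // boundary: one owning cell
    facet_to_cells.insert(11u64, vec![0usize, 1]); // interior: two owners

    // ...then reuse it for every query in the tight loop.
    let boundary: Vec<u64> = [10u64, 11]
        .into_iter()
        .filter(|k| is_boundary(*k, &facet_to_cells))
        .collect();
    assert_eq!(boundary, vec![10]);
    println!("boundary facets: {boundary:?}");
}
```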
src/core/algorithms/robust_bowyer_watson.rs (1)

794-827: Minor: avoid temporary Vec in find_visible_boundary_facets.

Iterate the hashmap directly to save one allocation.

-        let boundary_facets: Vec<_> = facet_to_cells
-            .iter()
-            .filter(|(_, cells)| cells.len() == 1)
-            .collect();
-
-        for (_facet_key, cells) in boundary_facets {
+        for (_facet_key, cells) in facet_to_cells.iter().filter(|(_, cells)| cells.len() == 1) {
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e2bd42 and be31587.

📒 Files selected for processing (9)
  • Cargo.toml (1 hunks)
  • WARP.md (2 hunks)
  • docs/code_organization.md (3 hunks)
  • docs/vertex_uuid_iter_optimizations.md (1 hunks)
  • examples/README.md (1 hunks)
  • examples/zero_allocation_iterator_demo.rs (1 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (17 hunks)
  • src/core/boundary.rs (8 hunks)
  • src/core/cell.rs (11 hunks)
✅ Files skipped from review due to trivial changes (1)
  • examples/README.md
🚧 Files skipped from review as they are similar to previous changes (3)
  • WARP.md
  • Cargo.toml
  • docs/vertex_uuid_iter_optimizations.md
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • docs/code_organization.md
  • src/core/algorithms/robust_bowyer_watson.rs
🧬 Code graph analysis (4)
src/core/cell.rs (1)
src/core/vertex.rs (3)
  • hash (659-663)
  • uuid (458-460)
  • into_hashmap (381-383)
examples/zero_allocation_iterator_demo.rs (1)
src/core/cell.rs (8)
  • cell (2208-2208)
  • cell (2337-2337)
  • cell (2339-2339)
  • cell (2343-2343)
  • cell (2542-2542)
  • cell (2555-2555)
  • number_of_vertices (448-450)
  • uuid (501-503)
src/core/boundary.rs (1)
src/core/facet.rs (3)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
  • key (455-473)
src/core/algorithms/robust_bowyer_watson.rs (3)
src/geometry/util.rs (12)
  • core (1388-1388)
  • std (238-238)
  • std (278-278)
  • std (494-494)
  • std (502-502)
  • std (1971-1971)
  • std (2059-2059)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/core/cell.rs (9)
  • from (371-373)
  • cell (2208-2208)
  • cell (2337-2337)
  • cell (2339-2339)
  • cell (2343-2343)
  • cell (2542-2542)
  • cell (2555-2555)
  • vertex_uuids (581-586)
  • uuid (501-503)
src/core/vertex.rs (4)
  • from (600-602)
  • from (614-616)
  • from (628-630)
  • uuid (458-460)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: build (windows-latest)
  • GitHub Check: Code Coverage
  • GitHub Check: performance-regression
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (11)
docs/code_organization.md (2)

91-99: Add docs/UUID_MIGRATION_INVENTORY.md to the docs tree in docs/code_organization.md

Confirmed file present at docs/UUID_MIGRATION_INVENTORY.md — update the tree to include it.

 │   ├── code_organization.md                      # Code organization patterns (this file)
+│   ├── UUID_MIGRATION_INVENTORY.md               # UUID usages, categories, and migration strategy
 │   ├── numerical_robustness_guide.md             # Numerical robustness and stability guide

181-193: Resolved — bench feature and test gating already present

Cargo.toml already defines the bench feature and tests are gated with #[cfg(feature = "bench")]; no changes required.

src/core/cell.rs (2)

588-631: Zero‑allocation UUID iterator looks solid.

ExactSizeIterator propagation via map is correct; by‑value Uuid keeps API ergonomic and fast.


796-855: into_hashmap switch to FastHashMap LGTM.

API is consistent with Vertex::into_hashmap; docs match behavior.

src/core/boundary.rs (2)

246-257: Test updates are clearer and robust.

Counting confirmed boundary facets improves message quality; assertions read well.

Also applies to: 294-305, 339-350


621-631: rand::rng() — OK for rand = "0.9.2"

Cargo.toml lists rand = "0.9.2"; rand 0.9.x exposes rng() (thread_rng() is deprecated/renamed), so no change required. (docs.rs)

src/core/algorithms/robust_bowyer_watson.rs (5)

280-287: Improved error context in finalization paths.

Including created/removed counts in messages is a useful debugging boost.

Also applies to: 333-339


674-707: Validated facet mapping looks good.

Dedup + assert ≤2 owners enforces topological invariant early; correct use of FastHashMap.


922-963: Fallback visibility heuristic is defensively coded.

safe_usize_to_scalar + DivAssign bound are appropriate; threshold expression is clear.


1435-1453: Neighbor symmetry checks via UUID↔Key helpers look correct.

Good use of cell_key_from_uuid/cell_uuid_from_key; assertions exercise invariants.

Also applies to: 1589-1597, 1716-1752, 1757-1769


1628-1636: Use vertex_uuid_iter() in tests: nice coverage of the new API.

The signature and sorting guard against duplicate cells effectively.

Improves benchmark regression reporting and baseline handling, and fixes various documentation errors. Simplifies internal APIs and optimizes collection usage for performance gains.
Also refactors UUID usage to SlotMap keys internally.
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/core/boundary.rs (1)

144-168: De‑duplicate key derivation; reuse cached path.

is_boundary_facet re-computes both the map and the facet key logic also present in is_boundary_facet_with_map. Call the cached version to cut duplication and keep behavior aligned.

Apply this diff:

   fn is_boundary_facet(&self, facet: &Facet<T, U, V, D>) -> bool {
-        // Build the facet-to-cells map to check if the facet belongs to only one cell
-        // Note: This recomputes the map per call; for repeated queries, compute once and reuse.
-        let facet_to_cells = self.build_facet_to_cells_hashmap();
-
-        // Compute the facet key using VertexKeys (same method as build_facet_to_cells_hashmap)
-        // Get the vertex keys for the facet's vertices
-        let facet_vertices = facet.vertices();
-        let mut vertex_keys = VertexKeyBuffer::with_capacity(facet_vertices.len());
-
-        // Look up VertexKey for each vertex in the facet
-        for vertex in &facet_vertices {
-            match self.vertex_key_from_uuid(&vertex.uuid()) {
-                Some(key) => vertex_keys.push(key),
-                None => return false, // Vertex not in triangulation, so facet can't be boundary
-            }
-        }
-
-        // Generate the facet key using the same method as build_facet_to_cells_hashmap
-        let facet_key = facet_key_from_vertex_keys(&vertex_keys);
-
-        facet_to_cells
-            .get(&facet_key)
-            .is_some_and(|cells| cells.len() == 1)
+        let facet_to_cells = self.build_facet_to_cells_hashmap();
+        self.is_boundary_facet_with_map(facet, &facet_to_cells)
   }
src/core/algorithms/robust_bowyer_watson.rs (2)

536-544: Potential key mismatch: use VertexKeys→facet_key (not Facet::key()).

Facet::key() may diverge from the TDS map’s key derivation. Compute the key from vertex keys to ensure identical hashing.

Apply this diff:

-                    let facet_key = facet.key();
+                    // Derive key from vertex VertexKeys to match TDS mapping
+                    let facet_vertices = facet.vertices();
+                    let mut vertex_keys = VertexKeyBuffer::with_capacity(facet_vertices.len());
+                    for v in &facet_vertices {
+                        if let Some(k) = tds.vertex_key_from_uuid(&v.uuid()) {
+                            vertex_keys.push(k);
+                        } else {
+                            continue; // Vertex not in TDS; skip
+                        }
+                    }
+                    let facet_key = facet_key_from_vertex_keys(&vertex_keys);
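The reason both sites must derive the key the same way is that a facet key has to be order-independent over the facet's vertex keys. One plausible minimal derivation — an assumption for illustration; the crate's `facet_key_from_vertex_keys` may hash differently — is to sort before hashing:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sorts the vertex keys before hashing so any permutation of the same vertex
// set maps to the same facet key.
fn facet_key_from_vertex_keys(vertex_keys: &[u64]) -> u64 {
    let mut sorted = vertex_keys.to_vec();
    sorted.sort_unstable();
    let mut hasher = DefaultHasher::new();
    sorted.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same vertices in different order must produce the same key.
    let a = facet_key_from_vertex_keys(&[3, 1, 2]);
    let b = facet_key_from_vertex_keys(&[1, 2, 3]);
    assert_eq!(a, b);

    // A different vertex set should (with overwhelming probability) differ.
    let c = facet_key_from_vertex_keys(&[1, 2, 4]);
    assert_ne!(a, c);
    println!("keys agree across orderings");
}
```

Whatever the actual derivation, using two different ones (e.g. `Facet::key()` vs. the vertex-key path) silently breaks map lookups, which is the bug the diff above guards against.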

1628-1636: Compile error: converting Vertex to [f64; 3].

(*v).into() won’t compile. Use the point’s array (T=f64 in this test).

Apply this diff:

-                let vertex_found = tds.vertices().values().any(|v| {
-                    let v_coords: [f64; 3] = (*v).into();
-                    let test_coords: [f64; 3] = test_vertex.point().into();
-                    v_coords
-                        .iter()
-                        .zip(test_coords.iter())
-                        .all(|(a, b)| (a - b).abs() < f64::EPSILON)
-                });
+                let vertex_found = tds.vertices().values().any(|v| {
+                    let v_coords: [f64; 3] = v.point().to_array();
+                    let test_coords: [f64; 3] = test_vertex.point().to_array();
+                    v_coords
+                        .iter()
+                        .zip(test_coords.iter())
+                        .all(|(a, b)| (a - b).abs() < f64::EPSILON)
+                });
♻️ Duplicate comments (1)
src/core/cell.rs (1)

545-587: New vertex_uuids() API is clear and consistent.

Docs/readability are good; returning Vec is fine since a zero‑alloc iterator is also provided.

#!/bin/bash
# Ensure docs referencing Tds::vertex_key_from_uuid don't point to private helpers anymore
rg -n 'vertex_keys_for_cell' -g 'src/**'
# Confirm vertex_key_from_uuid is public
rg -nP 'pub\s+fn\s+vertex_key_from_uuid' src/core/triangulation_data_structure.rs
🧹 Nitpick comments (1)
src/core/traits/boundary_analysis.rs (1)

132-150: API addition looks good; add example and stabilize FacetToCellsMap.

  • Add a short example calling Tds::build_facet_to_cells_hashmap() (mirror boundary.rs docs).
  • FacetToCellsMap is declared pub at src/core/collections.rs:224 but not re-exported; if this trait is part of the public API, re-export FacetToCellsMap at a stable, higher-level path (crate root or crate::collections) or adjust the trait signature to avoid exposing internal module paths.
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between be31587 and d371993.

📒 Files selected for processing (7)
  • CHANGELOG.md (1 hunks)
  • docs/code_organization.md (3 hunks)
  • examples/zero_allocation_iterator_demo.rs (1 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (20 hunks)
  • src/core/boundary.rs (9 hunks)
  • src/core/cell.rs (11 hunks)
  • src/core/traits/boundary_analysis.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • examples/zero_allocation_iterator_demo.rs
  • docs/code_organization.md
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/algorithms/robust_bowyer_watson.rs
  • src/core/boundary.rs
🧬 Code graph analysis (4)
src/core/traits/boundary_analysis.rs (1)
src/core/boundary.rs (1)
  • is_boundary_facet_with_map (212-235)
src/core/algorithms/robust_bowyer_watson.rs (3)
src/geometry/util.rs (12)
  • core (1388-1388)
  • std (238-238)
  • std (278-278)
  • std (494-494)
  • std (502-502)
  • std (1971-1971)
  • std (2059-2059)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/core/traits/insertion_algorithm.rs (4)
  • new (88-100)
  • new (227-234)
  • with_capacity (238-245)
  • default (213-215)
src/core/triangulation_data_structure.rs (3)
  • new (1364-1396)
  • default (206-208)
  • cells (494-496)
src/core/cell.rs (1)
src/core/vertex.rs (4)
  • point (403-405)
  • hash (659-663)
  • uuid (458-460)
  • into_hashmap (381-383)
src/core/boundary.rs (3)
src/core/facet.rs (6)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
  • key (455-473)
  • new (287-293)
  • vertices (437-444)
  • cell (325-327)
src/core/triangulation_data_structure.rs (4)
  • cells (494-496)
  • new (1364-1396)
  • vertices (535-537)
  • default (206-208)
src/core/traits/boundary_analysis.rs (1)
  • is_boundary_facet_with_map (145-149)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Codacy Security Scan
  • GitHub Check: Code Coverage
  • GitHub Check: build (windows-latest)
  • GitHub Check: performance-regression
🔇 Additional comments (5)
src/core/cell.rs (2)

589-633: Zero‑allocation iterator LGTM.

Returning Uuid by value is reasonable; ExactSizeIterator is a nice touch.


854-856: into_hashmap() switch to FastHashMap is fine.

Local field access to uuid is valid inside impl; no issues.

src/core/boundary.rs (2)

7-9: Correct imports for key‑based facet derivation.

Brings in VertexKeyBuffer and facet_key_from_vertex_keys; matches the new mapping path.


733-734: No change required — rand::rng() is correct with rand = "0.9.2"
Cargo.toml lists rand = "0.9.2"; rand 0.9 renamed thread_rng() → rng(), so keeping rand::rng() is appropriate. (rust-random.github.io)

src/core/algorithms/robust_bowyer_watson.rs (1)

733-734: rand::rng() compatibility.

Cargo.toml pins rand = "0.9.2", so rand::rng() is supported here — no change required. Replace with rand::thread_rng() only if you need compatibility with rand < 0.9.

Fixes an issue where boundary facets were not being correctly identified due to inconsistent facet key generation.

The `is_boundary_facet` method now uses a helper function `facet_key_for_facet` to ensure consistent facet key calculation when checking if a facet belongs to only one cell. This resolves incorrect boundary determination.

Also includes a new test case to validate the helper function.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
src/core/traits/boundary_analysis.rs (1)

132-181: API surface: avoid referencing prelude from a core trait

Using crate::prelude::FacetToCellsMap in a core trait couples the trait to the prelude. Prefer the defining module’s alias (e.g., crate::core::collections::FacetToCellsMap) to keep layering clean. Also consider adding intra‑doc links to [build_facet_to_cells_hashmap] for rustdoc.

src/core/boundary.rs (2)

325-335: Test simplification: assert directly instead of counting

Use all() to make the intent explicit and reduce boilerplate.

-        let mut confirmed_boundary = 0;
-        for boundary_facet in &boundary_facets {
-            if tds.is_boundary_facet(boundary_facet) {
-                confirmed_boundary += 1;
-            }
-        }
-        assert_eq!(
-            confirmed_boundary, 3,
-            "All facets should be boundary facets in single triangle"
-        );
+        assert!(
+            boundary_facets.iter().all(|f| tds.is_boundary_facet(f)),
+            "All facets should be boundary facets in single triangle"
+        );

612-617: Tests: align facet-key grouping with new VertexKey‑based hashing

This test groups facets by facet.key() (UUID‑based). For consistency with production mapping, derive the key via vertex‑keys. Functionally equivalent here, but it future‑proofs the test.

         let mut facet_map: FastHashMap<u64, Vec<Uuid>> = FastHashMap::default();
         for cell in tds.cells().values() {
             for facet in cell.facets().expect("Should get cell facets") {
-                facet_map.entry(facet.key()).or_default().push(cell.uuid());
+                if let Some(fk) = tds.facet_key_for_facet(&facet) {
+                    facet_map.entry(fk).or_default().push(cell.uuid());
+                }
             }
         }
src/core/algorithms/robust_bowyer_watson.rs (1)

811-817: Nit: prefer usize::from(facet_index) over as usize

Minor readability/consistency improvement and avoids accidental sign/width surprises.

-                    if (facet_index as usize) < facets.len() {
-                        let facet = &facets[facet_index as usize];
+                    let idx = usize::from(facet_index);
+                    if idx < facets.len() {
+                        let facet = &facets[idx];
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d371993 and ccbcf73.

📒 Files selected for processing (3)
  • src/core/algorithms/robust_bowyer_watson.rs (20 hunks)
  • src/core/boundary.rs (9 hunks)
  • src/core/traits/boundary_analysis.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/algorithms/robust_bowyer_watson.rs
🧬 Code graph analysis (3)
src/core/traits/boundary_analysis.rs (1)
src/core/boundary.rs (1)
  • is_boundary_facet_with_map (191-202)
src/core/algorithms/robust_bowyer_watson.rs (5)
src/geometry/util.rs (12)
  • core (1388-1388)
  • std (238-238)
  • std (278-278)
  • std (494-494)
  • std (502-502)
  • std (1971-1971)
  • std (2059-2059)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/core/facet.rs (3)
  • facet_key_from_vertex_keys (574-586)
  • cell (325-327)
  • vertex (370-372)
src/geometry/point.rs (4)
  • std (323-323)
  • std (324-324)
  • from (354-356)
  • from (373-375)
src/core/triangulation_data_structure.rs (1)
  • cells (494-496)
src/core/vertex.rs (3)
  • from (600-602)
  • from (614-616)
  • from (628-630)
src/core/boundary.rs (4)
src/core/facet.rs (6)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
  • key (455-473)
  • new (287-293)
  • vertices (437-444)
  • cell (325-327)
src/core/traits/boundary_analysis.rs (1)
  • is_boundary_facet_with_map (176-180)
src/core/triangulation_data_structure.rs (3)
  • cells (494-496)
  • new (1364-1396)
  • vertices (535-537)
src/core/cell.rs (8)
  • vertices (474-476)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • facets (1052-1057)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (windows-latest)
  • GitHub Check: Codacy Security Scan
  • GitHub Check: performance-regression
  • GitHub Check: Code Coverage
🔇 Additional comments (6)
src/core/boundary.rs (4)

6-9: Good: unified facet-key derivation path

Importing VertexKeyBuffer and facet_key_from_vertex_keys aligns boundary analysis with the key‑based mapping and avoids UUID hashing drift.


146-147: Good: delegate to cached path

Having is_boundary_facet build the map once and delegate keeps the simple API while enabling the optimized variant for tight loops.


191-201: LGTM: map‑based check handles missing vertices safely

Early‑returning false when any facet vertex isn’t found prevents accidental partial-key hashing. Implementation matches the helper’s semantics.


240-282: Nice helper extraction

facet_key_for_facet removes duplication and centralizes the “all vertices must resolve” rule.

src/core/algorithms/robust_bowyer_watson.rs (2)

7-10: Good: import vertex‑key utilities and collections

Pulling in VertexKeyBuffer and facet_key_from_vertex_keys keeps facet‑key construction consistent with Tds.


464-471: Good: buffer reuse in hot path

Using SmallBuffer and reusing vertex_points avoids per‑cell allocations in robust_find_bad_cells.

Also applies to: 509-510

Fixes an issue where facets are skipped during Bowyer-Watson insertion if vertex keys are not found in the TDS.

Ensures facets are only processed if all vertex keys are successfully resolved. Also, fixes indexing errors when accessing facets and improves boundary facet detection.

Updates tests to use iterator patterns for clarity and conciseness. Adds a benchmark-only test for boundary facet detection in larger triangulations.

Refs: None

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
src/geometry/algorithms/convex_hull.rs (1)

424-437: Derive facet_key from VertexKeys — good; use SmallBuffer to avoid heap allocs

The VertexKey path aligns with how the map is built and fixes key consistency. To shave an alloc on hot paths, prefer SmallBuffer (stack) over Vec for ≤ MAX_PRACTICAL_DIMENSION_SIZE vertices.

Apply:

-        // Reuse the facet_vertices we already computed earlier
-        let mut vertex_keys = Vec::with_capacity(facet_vertices.len());
+        // Reuse the facet_vertices we already computed earlier (stack-allocate)
+        let mut vertex_keys: SmallBuffer<
+            crate::core::triangulation_data_structure::VertexKey,
+            crate::core::collections::MAX_PRACTICAL_DIMENSION_SIZE
+        > = SmallBuffer::new();
         for vertex in &facet_vertices {
             match tds.vertex_key_from_uuid(&vertex.uuid()) {
                 Some(key) => vertex_keys.push(key),
                 None => {
                     return Err(FacetError::VertexNotFound {
                         uuid: vertex.uuid(),
                     });
                 }
             }
         }
         let facet_key = facet_key_from_vertex_keys(&vertex_keys);
src/core/algorithms/robust_bowyer_watson.rs (1)

537-553: Do not hash partial vertex sets — fixed.

Requiring all facet vertices to resolve before hashing prevents misclassification. This addresses the earlier concern.

🧹 Nitpick comments (7)
src/geometry/algorithms/convex_hull.rs (1)

355-355: Split long function or keep the allow justifiable

Allowing clippy::too_many_lines is okay short‑term. Consider extracting cache fetch/rebuild and facet‑key derivation into small helpers to drop the allow and improve reuse.

src/core/boundary.rs (4)

240-282: Helper is correct; minor perf nit.

facet_key_for_facet allocates via facet.vertices(). If this shows up in profiles, consider a non-allocating vertices iterator on Facet to avoid the Vec.


368-380: Prefer all(..) over manual counting.

Matches the 2D test and reduces lines.

Apply:

-        let mut confirmed_boundary = 0;
-        for boundary_facet in &boundary_facets {
-            if tds.is_boundary_facet(boundary_facet) {
-                confirmed_boundary += 1;
-            }
-        }
-        assert_eq!(
-            confirmed_boundary, 4,
-            "All facets should be boundary facets in single tetrahedron"
-        );
+        assert!(
+            boundary_facets.iter().all(|f| tds.is_boundary_facet(f)),
+            "All facets should be boundary facets in single tetrahedron"
+        );

421-483: Avoid asserting hash uniqueness between different facets.

facet_key is a 64‑bit hash of vertex keys; collisions are astronomically unlikely but not impossible. The “different facets → different keys” assertion is therefore theoretically flaky.

Replace by removing the uniqueness check:

-        // Test that different facets produce different keys
-        if facets.len() > 1 {
-            let different_facet = &facets[1];
-            let different_key = tds.facet_key_for_facet(different_facet).unwrap();
-            assert_ne!(
-                facet_key, different_key,
-                "Different facets should produce different keys"
-            );
-        }

You already validate presence in facet_to_cells and boundary classification; that’s sufficient.


605-612: Collect CellKey instead of Uuid for facet_map.

This avoids converting back and forth when you only need counts.

Apply:

-        let mut facet_map: FastHashMap<u64, Vec<Uuid>> = FastHashMap::default();
-        for cell in tds.cells().values() {
+        let mut facet_map: FastHashMap<u64, Vec<_>> = FastHashMap::default();
+        for (cell_key, cell) in tds.cells() {
             for facet in cell.facets().expect("Should get cell facets") {
                 if let Some(fk) = tds.facet_key_for_facet(&facet) {
-                    facet_map.entry(fk).or_default().push(cell.uuid());
+                    facet_map.entry(fk).or_default().push(cell_key);
                 }
             }
         }
src/core/algorithms/robust_bowyer_watson.rs (2)

816-824: Index bounds handling.

You guard idx < facets.len(), but dropping silently may hide structural issues. Keep the guard, but add a debug assertion to catch unexpected facet indices in debug builds.

-                    if idx < facets.len() {
+                    debug_assert!(idx < facets.len(), "facet_index out of bounds");
+                    if idx < facets.len() {
                         let facet = &facets[idx];

953-958: Magic number for visibility heuristic.

Hardcoded multiplier 100.0 affects behavior; make it configurable (predicate_config or a module const) to tune per dataset.

-            const VISIBILITY_THRESHOLD_MULTIPLIER: f64 = 100.0;
+            const VISIBILITY_THRESHOLD_MULTIPLIER: f64 = 100.0; // consider moving to config

Alternatively, add a field to RobustPredicateConfig.

Also applies to: 973-979

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ccbcf73 and 9b5f957.

📒 Files selected for processing (6)
  • src/core/algorithms/robust_bowyer_watson.rs (20 hunks)
  • src/core/boundary.rs (10 hunks)
  • src/core/cell.rs (11 hunks)
  • src/core/collections.rs (3 hunks)
  • src/core/traits/boundary_analysis.rs (1 hunks)
  • src/geometry/algorithms/convex_hull.rs (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/core/traits/boundary_analysis.rs
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/algorithms/robust_bowyer_watson.rs
🧬 Code graph analysis (4)
src/geometry/algorithms/convex_hull.rs (1)
src/core/facet.rs (3)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
  • key (455-473)
src/core/boundary.rs (2)
src/core/facet.rs (5)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
  • key (455-473)
  • vertices (437-444)
  • cell (325-327)
src/core/traits/boundary_analysis.rs (2)
  • is_boundary_facet_with_map (178-182)
  • is_boundary_facet (130-130)
src/core/algorithms/robust_bowyer_watson.rs (3)
src/geometry/util.rs (12)
  • core (1388-1388)
  • std (238-238)
  • std (278-278)
  • std (494-494)
  • std (502-502)
  • std (1971-1971)
  • std (2059-2059)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/core/facet.rs (2)
  • facet_key_from_vertex_keys (574-586)
  • vertex (370-372)
src/core/vertex.rs (3)
  • from (600-602)
  • from (614-616)
  • from (628-630)
src/core/cell.rs (1)
src/core/vertex.rs (4)
  • point (403-405)
  • hash (659-663)
  • uuid (458-460)
  • into_hashmap (381-383)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Code Coverage
  • GitHub Check: build (windows-latest)
  • GitHub Check: performance-regression
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (21)
src/geometry/algorithms/convex_hull.rs (2)

1585-1611: Test refactor to iterator style looks good

Cleaner, keeps the same assertions while reducing boilerplate.


1702-1709: Consistency check is correct

The vector is guaranteed non‑empty (built from 0..5), so indexing [0] is safe; all() over iter is clean.

src/core/cell.rs (4)

589-633: Zero‑alloc vertex_uuid_iter is a good addition

Returning Uuid by value is fine here (16 bytes), avoids lifetimes and deref churn.


1696-1724: vertex_uuids test coverage is comprehensive

Covers 2D/3D/4D, f32/f64, uniqueness, non‑nil, and iter ergonomics (bench‑gated). Nicely done.

Also applies to: 1727-1743, 1746-1773, 1776-1812, 1815-1852, 3136-3251


545-587: Approve: helper is public — docs/examples are actionable
Tds::vertex_key_from_uuid is public at src/core/triangulation_data_structure.rs:1104.


797-855: Keep public API stable: restore std::collections::HashMap return for Cell::into_hashmap or add an opt-in fast variant

Cell::into_hashmap currently returns FastHashMap<Uuid, Self>, which changes the public return type (breaking for external consumers). Restore the original std::collections::HashMap signature and provide a new into_fast_hashmap returning FastHashMap.

-#[must_use]
-pub fn into_hashmap(cells: Vec<Self>) -> FastHashMap<Uuid, Self> {
-    cells.into_iter().map(|c| (c.uuid, c)).collect()
-}
+#[must_use]
+pub fn into_hashmap(cells: Vec<Self>) -> std::collections::HashMap<Uuid, Self> {
+    cells.into_iter().map(|c| (c.uuid, c)).collect()
+}

And add alongside (outside this hunk):

#[must_use]
pub fn into_fast_hashmap(cells: Vec<Self>) -> FastHashMap<Uuid, Self> {
    cells.into_iter().map(|c| (c.uuid, c)).collect()
}

Relevant locations: src/core/cell.rs:854, src/core/vertex.rs:381, src/core/collections.rs:143.

src/core/collections.rs (2)

355-360: Making MAX_PRACTICAL_DIMENSION_SIZE public is appropriate

Used widely in const‑generic SmallBuffer aliases; public visibility helps downstream crates stay consistent.


423-488: New UUID→Key map aliases match the migration plan

Clear docs, good guidance on reverse lookups via SlotMap. No issues spotted.

src/core/boundary.rs (6)

7-8: Consistent key-derivation imports — good.

Importing VertexKeyBuffer and facet_key_from_vertex_keys aligns boundary analysis with the SlotMap-based keys.


145-147: Delegation to cached path is appropriate.

is_boundary_facet now reuses the precomputed-map variant; this prevents repeated map rebuilds for callers that batch queries.


149-202: MSRV check: Option::is_some_and.

Option::is_some_and was stabilized in Rust 1.70. If your MSRV is lower, avoid it.

Consider this MSRV-friendly tweak:

-            return facet_to_cells
-                .get(&facet_key)
-                .is_some_and(|cells| cells.len() == 1);
+            if let Some(cells) = facet_to_cells.get(&facet_key) {
+                return cells.len() == 1;
+            }
+            return false;

326-329: 2D test assertion style — good.

Concise all(..) check reads well and matches the intent.


382-420: Cached path test — solid coverage.

Confirms parity with the non-cached method and encourages callers to reuse the map in tight loops.


799-804: Bench gating — good.

CI-friendly; benches only run when explicitly enabled.

Also applies to: 1021-1022

src/core/algorithms/robust_bowyer_watson.rs (7)

7-12: Imports aligned with key-based facet mapping — good.

Brings in VertexKeyBuffer, facet_key_from_vertex_keys, and safe_usize_to_scalar; consistent with boundary.rs.

Also applies to: 29-29


282-289: More informative finalization errors — good.

Including created/removed counts in error context will speed up failures triage.


337-339: Hull-extension error message improvement — good.

Clearer diagnostics on finalize_after_insertion.


464-472: Hot-path buffer reuse — good.

SmallBuffer reuse for vertex points reduces allocs; into_vec() keeps API ergonomic.

Also applies to: 509-510


266-269: MSRV check: if‑let chains.

Patterns like if let Ok(x) = ... && !x.is_empty() and if let Some(..) && let Ok(..) are let-chains, which require Rust 1.88 on edition 2024.

If MSRV is lower, rewrite with nested matches/ifs. Example for Lines 266-269:

-            if let Ok(boundary_facets) =
-                self.find_cavity_boundary_facets_with_robust_fallback(tds, &bad_cells)
-                && !boundary_facets.is_empty()
-            {
+            if let Ok(boundary_facets) =
+                self.find_cavity_boundary_facets_with_robust_fallback(tds, &bad_cells)
+            {
+                if !boundary_facets.is_empty() {
                     // ...
-            }
+                }
+            }

Apply similarly to the other occurrences.

Also applies to: 321-324, 533-535


806-807: rand::rng() availability.

rand::rng() depends on crate version. If CI toolchain pulls an older rand, prefer thread_rng() for broad compatibility.

-        let mut rng = rand::rng();
+        let mut rng = rand::thread_rng();

Also applies to: 1039-1040


1600-1613: Neighbor symmetry assertions — good.

Validates bidirectional neighbor links using key/UUID conversions; helps catch subtle topology bugs.

Also applies to: 1456-1463, 1754-1766, 1560-1566

Includes a utility function to derive facet keys from vertices, replacing redundant code. This change is internal; it doesn't modify the public API.

Implements a new FacetCacheProvider trait that is used by convex hull and boundary analysis. Will be used in Bowyer-Watson algorithms in a future update.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
docs/OPTIMIZING_BOWYER_WATSON.md (1)

140-147: Same explicit-deref nit in the second impl.

Mirror the fix here as well.

-    fn cached_generation(&self) -> &AtomicU64 {
-        &self.cached_generation
-    }
+    fn cached_generation(&self) -> &AtomicU64 {
+        self.cached_generation.as_ref()
+    }
🧹 Nitpick comments (21)
src/lib.rs (1)

172-177: Expose FacetCacheProvider in the prelude (optional).

You re-export facet_cache::* under core::traits, but the prelude omits it. Consider adding it for consistency with other traits.

Apply this diff in the prelude re-exports:

-        traits::{boundary_analysis::*, data_type::*, insertion_algorithm::*},
+        traits::{boundary_analysis::*, data_type::*, insertion_algorithm::*, facet_cache::*},
src/core/util.rs (2)

610-696: Deriving facet keys from Vertex UUIDs: good; document empty-input behavior explicitly.

Logic aligns with the new SlotMap key path and uses the same hasher via facet_key_from_vertex_keys. Please document that an empty facet_vertices slice returns Ok(0) to avoid ambiguity.

Suggested doc tweak:

 /// # Returns
 ///
-/// A `Result` containing the facet key or a `FacetError` if vertex lookup fails
+/// A `Result` containing the facet key or a `FacetError` if vertex lookup fails.
+/// Note: an empty `facet_vertices` slice yields `Ok(0)`.

1673-1683: Strengthen the empty-input test assertion.

The test accepts empty vertices but doesn’t assert the expected key. Assert 0 to lock the contract.

-        // Empty vertices should succeed and produce some key (probably 0)
-        assert!(
-            result_empty.is_ok(),
-            "Empty vertices should succeed (edge case)"
-        );
-        println!("  Empty vertices key: {}", result_empty.unwrap());
+        let key_empty = result_empty.expect("Empty vertices should succeed (edge case)");
+        assert_eq!(key_empty, 0, "Empty vertices should produce key 0");
src/core/traits/facet_cache.rs (1)

77-91: Trait bounds are heavier than needed; relax to reduce API friction.

The cache helper doesn’t use the algebraic/numeric bounds; requiring them at the trait level forces downstream types to satisfy unnecessary constraints.

  • Move or drop most bounds from the trait to call sites that truly need them (none here).
  • Minimal trait could constrain only U: DataType, V: DataType (or even no bounds) and keep generics for T, D.

Example:

-pub trait FacetCacheProvider<T, U, V, const D: usize>
-where
-    T: CoordinateScalar
-        + ComplexField<RealField = T>
-        + AddAssign<T>
-        + SubAssign<T>
-        + Sum
-        + From<f64>,
-    U: DataType,
-    V: DataType,
-    f64: From<T>,
-    for<'a> &'a T: Div<T>,
-    [T; D]: Copy + Default + DeserializeOwned + Serialize + Sized,
-    ordered_float::OrderedFloat<f64>: From<T>,
-{
+pub trait FacetCacheProvider<T, U, V, const D: usize> {
src/geometry/algorithms/convex_hull.rs (2)

455-512: Deduplicate cache logic — rely on the trait’s default method.

This local get_or_build_facet_cache duplicates the default provided by FacetCacheProvider. Remove it and keep calls as-is; method resolution will use the trait impl.

-    fn get_or_build_facet_cache(&self, tds: &Tds<T, U, V, D>) -> Arc<FacetToCellsMap> {
-        // ... entire method body ...
-    }

1140-1163: Make the Arc deref explicit rather than relying on deref coercion.

Return the &AtomicU64 via Arc::as_ref() for clarity.

-    fn cached_generation(&self) -> &AtomicU64 {
-        &self.cached_generation
-    }
+    fn cached_generation(&self) -> &AtomicU64 {
+        self.cached_generation.as_ref()
+    }
src/core/boundary.rs (1)

191-203: Map-based boundary check is correct; add a tiny guard (optional).

Current logic is fine. Optionally early-return if facet_vertices.len() != D to avoid deriving invalid keys.

     ) -> bool {
-        let facet_vertices = facet.vertices();
+        let facet_vertices = facet.vertices();
+        if facet_vertices.len() != D {
+            return false;
+        }
         if let Ok(facet_key) = derive_facet_key_from_vertices(&facet_vertices, self) {
             return facet_to_cells
                 .get(&facet_key)
                 .is_some_and(|cells| cells.len() == 1);
         }
         false
     }
docs/OPTIMIZING_BOWYER_WATSON.md (4)

123-130: Make the Arc deref in cached_generation explicit.

As written, &self.cached_generation yields &Arc<AtomicU64> and compiles only via deref coercion to &AtomicU64. Returning self.cached_generation.as_ref() makes the borrow explicit.

Apply this diff to both impl blocks shown in the doc:

-    fn cached_generation(&self) -> &AtomicU64 {
-        &self.cached_generation
-    }
+    fn cached_generation(&self) -> &AtomicU64 {
+        self.cached_generation.as_ref()
+        // or: &*self.cached_generation
+    }

100-106: Consider AtomicU64 ownership: prefer storing AtomicU64 directly if only shared by reference.

If mutation happens only via methods on &self and you never swap the AtomicU64 instance, an AtomicU64 field (not Arc<AtomicU64>) plus returning &AtomicU64 is simpler and avoids one heap allocation. Keep Arc only if you truly share this counter across cloned algorithm instances.


206-218: Spell out TDS generation bumps.

Automatic invalidation relies on TDS generation increments. Document which TDS mutators bump the counter (vertex insert/remove, neighbor updates, finalization, etc.) and add tests to assert cache invalidation on those paths.

I can add a checklist-based test that records cached_generation before/after key TDS operations.


275-281: Quantify performance claims with reproducible benches.

“50–90% reduction” is strong; add the benchmark setup (dataset size, dimension, hardware, bench harness) so readers can reproduce.

src/core/algorithms/robust_bowyer_watson.rs (10)

805-814: Avoid repeated facet-map rebuilds; use a cached mapping.

tds.build_facet_to_cells_hashmap() can be expensive. Since this method only needs boundary facets (entries with 1 cell), fetch the mapping once via a cache provider and reuse it.

Example:

-        let facet_to_cells = tds.build_facet_to_cells_hashmap();
+        // TODO: integrate FacetCacheProvider here once available:
+        // let facet_to_cells = self.get_or_build_facet_cache(&tds);
+        let facet_to_cells = tds.build_facet_to_cells_hashmap();

When the cache is wired, replace the TODO and drop the direct call.


687-701: Dedup without requiring CellKey: Ord.

sort_unstable + dedup needs Ord. If CellKey doesn’t (or shouldn’t) implement Ord, dedup with a set to avoid accidental trait coupling.

-                cell_keys.sort_unstable();
-                cell_keys.dedup();
+                {
+                    let mut seen = FastHashSet::with_capacity(cell_keys.len());
+                    cell_keys.retain(|k| seen.insert(*k));
+                }

If CellKey: Ord is already guaranteed, keep as-is.


529-568: Facet-key derivation: good fix; also guard facet_to_cells lookups.

Using derive_facet_key_from_vertices eliminates key skew. Add a fast-path continue if facet_to_cells lacks the key (shouldn’t happen, but avoids unwrap-y future edits).

-                    if let Some(sharing_cells) = facet_to_cells.get(&facet_key) {
+                    if let Some(sharing_cells) = facet_to_cells.get(&facet_key) {
                         // ...
-                    }
+                    } else {
+                        continue;
+                    }

457-472: Buffers: prefer SmallBuffer::new() and reserve once.

with_capacity(D + 1) per call is fine, but SmallBuffer::new() plus a one-time reserve_exact(D + 1) (or nothing, since the inline array fits ≤ MAX_PRACTICAL_DIMENSION_SIZE) is marginally leaner.


929-969: Heuristic constant should be configurable.

VISIBILITY_THRESHOLD_MULTIPLIER = 100.0 is baked in. Expose it via RobustPredicateConfig (or a dedicated visibility config) to allow dataset tuning and testing.

-            const VISIBILITY_THRESHOLD_MULTIPLIER: f64 = 100.0;
-            self.predicate_config.perturbation_scale
+            let multiplier: T = <T as From<f64>>::from(
+                self.predicate_config.visibility_threshold_multiplier.unwrap_or(100.0)
+            );
+            self.predicate_config.perturbation_scale
                 * self.predicate_config.perturbation_scale
-                * <T as From<f64>>::from(VISIBILITY_THRESHOLD_MULTIPLIER)
+                * multiplier

282-289: Error wrapping loses original variant/context.

Mapping finalize_after_insertion errors into a generic InconsistentDataStructure drops precise cause. Consider using with_context-style augmentation while preserving the original error (e.g., via an enum variant that nests the source).

-                <Self as InsertionAlgorithm<T, U, V, D>>::finalize_after_insertion(tds).map_err(
-                    |e| {
-                        TriangulationValidationError::InconsistentDataStructure {
-                            message: format!("...: {e}"),
-                        }
-                    },
-                )?;
+                <Self as InsertionAlgorithm<T, U, V, D>>::finalize_after_insertion(tds)
+                    .map_err(|e| TriangulationValidationError::FinalizationFailed {
+                        message: format!("..."),
+                        source: Box::new(e),
+                    })?;

If introducing a new variant isn’t desirable, at least include e’s discriminant or key fields explicitly.


101-111: Minor: initialize buffers capacity from dimension.

InsertionBuffers::with_capacity(100) is arbitrary. If buffers scale with simplex size, base capacity on D to reduce reallocs in high-D tests.


1705-1720: Duplicate-cell check: avoid allocation in signature creation.

Formatting a String per cell is costly. Use a small fixed container (e.g., sorted array of UUIDs hashed into a u64) or a SmallBuffer key to reduce allocations in tests.


1467-1473: Neighbor symmetry assertions: good; add index consistency spot-check.

Optionally assert that the facet indices across neighbors match the shared facet key to catch subtle mismatches early.

I can add a helper that recomputes the shared facet key from both cells and compares.


1027-1044: Heuristic scans all vertices; guard for big instances.

vertex_needs_robust_handling does an O(n) proximity scan. Consider an early exit after k close hits or gate behind a size threshold to keep amortized costs bounded.

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9b5f957 and b7bf370.

📒 Files selected for processing (7)
  • docs/OPTIMIZING_BOWYER_WATSON.md (1 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (20 hunks)
  • src/core/boundary.rs (10 hunks)
  • src/core/traits/facet_cache.rs (1 hunks)
  • src/core/util.rs (2 hunks)
  • src/geometry/algorithms/convex_hull.rs (8 hunks)
  • src/lib.rs (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/util.rs
  • src/core/algorithms/robust_bowyer_watson.rs
  • src/geometry/algorithms/convex_hull.rs
  • src/core/boundary.rs
🧬 Code graph analysis (6)
src/lib.rs (2)
src/core/traits/facet_cache.rs (2)
  • facet_cache (93-93)
  • facet_cache (227-229)
src/geometry/algorithms/convex_hull.rs (1)
  • facet_cache (1156-1158)
src/core/traits/facet_cache.rs (1)
src/geometry/algorithms/convex_hull.rs (3)
  • facet_cache (1156-1158)
  • cached_generation (1160-1162)
  • get_or_build_facet_cache (467-512)
src/core/util.rs (3)
src/core/facet.rs (6)
  • vertex (370-372)
  • facet_key_from_vertex_keys (574-586)
  • new (287-293)
  • key (455-473)
  • vertices (437-444)
  • cell (325-327)
src/core/triangulation_data_structure.rs (3)
  • new (1364-1396)
  • vertices (535-537)
  • default (206-208)
src/core/vertex.rs (1)
  • uuid (458-460)
src/core/algorithms/robust_bowyer_watson.rs (3)
src/geometry/util.rs (12)
  • core (1388-1388)
  • std (238-238)
  • std (278-278)
  • std (494-494)
  • std (502-502)
  • std (1971-1971)
  • std (2059-2059)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (662-695)
src/core/vertex.rs (4)
  • from (600-602)
  • from (614-616)
  • from (628-630)
  • uuid (458-460)
src/geometry/algorithms/convex_hull.rs (3)
src/core/traits/facet_cache.rs (6)
  • facet_cache (93-93)
  • facet_cache (227-229)
  • get_or_build_facet_cache (128-173)
  • cached_generation (96-96)
  • cached_generation (231-233)
  • new (218-223)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (662-695)
src/core/triangulation_data_structure.rs (2)
  • new (1364-1396)
  • vertices (535-537)
src/core/boundary.rs (6)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (662-695)
src/core/traits/boundary_analysis.rs (3)
  • is_boundary_facet_with_map (178-182)
  • boundary_facets (92-92)
  • is_boundary_facet (130-130)
src/core/facet.rs (4)
  • facet_vertices (699-711)
  • new (287-293)
  • vertices (437-444)
  • cell (325-327)
src/core/triangulation_data_structure.rs (4)
  • cells (494-496)
  • new (1364-1396)
  • vertices (535-537)
  • default (206-208)
src/core/cell.rs (8)
  • vertices (474-476)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • facets (1052-1057)
src/core/vertex.rs (1)
  • from_points (347-352)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: build (windows-latest)
  • GitHub Check: Code Coverage
  • GitHub Check: Codacy Security Scan
  • GitHub Check: performance-regression
🔇 Additional comments (11)
src/geometry/algorithms/convex_hull.rs (4)

5-8: Imports align with new cache+key flow.

Bringing FacetCacheProvider and derive_facet_key_from_vertices into scope matches usage. LGTM.


376-383: Use derived facet-key + cached map for adjacency: good.

Switching to derive_facet_key_from_vertices ensures consistency with TDS-built maps and avoids UUID-hash drift.


3603-3752: Tests correctly exercise cache build/reuse/invalidation.

After removing the local helper, these still work via the trait method (trait is in scope). If CI flags method resolution, add use crate::core::traits::facet_cache::FacetCacheProvider; inside the tests module.


753-756: MSRV: Option::is_none_or requires Rust 1.82

src/geometry/algorithms/convex_hull.rs:753-756 — Option::is_none_or was stabilized in Rust 1.82; ensure the crate MSRV ≥ 1.82, otherwise replace with map_or(true, |min| distance < min).

src/core/boundary.rs (5)

10-11: Key-derivation import: good alignment.

Using derive_facet_key_from_vertices centralizes facet-key computation with the TDS map semantics.


145-147: Route through cached-map variant: good default.

Delegating to is_boundary_facet_with_map keeps the single-source-of-truth and enables easy optimization when a map is available.


301-305: Tests: concise aggregate assertions.

Switch to .all(...) improves clarity and runtime. LGTM.


344-350: 3D single-cell aggregate check: LGTM.

Matches the new helper and avoids redundant counters.


568-575: Facet map construction via derived keys: LGTM.

Validates end-to-end key consistency between utility and TDS cache builder.

docs/OPTIMIZING_BOWYER_WATSON.md (1)

162-165: Define get_or_build_facet_cache in the trait (or show where it lives).

The doc uses self.get_or_build_facet_cache(&tds) but the trait surface isn’t shown. Please add its signature and semantics (return type, cloning/Arc behavior) for clarity.

Would you like me to draft the trait’s method signatures and a minimal default impl?

src/core/algorithms/robust_bowyer_watson.rs (1)

972-993: Potential Copy reliance for Vertex.

let mut perturbed_vertex = *vertex; assumes Vertex is Copy. If not guaranteed, switch to clone() or construct from parts.

-        let mut perturbed_vertex = *vertex;
+        let mut perturbed_vertex = vertex.clone();

If Vertex: Copy by design, ignore.

- Refactors robust Bowyer-Watson algorithm for enhanced stability and performance.
- Scales `InsertionBuffers` capacity with dimension for better memory management.
- Implements facet index consistency checks in tests to catch neighbor assignment errors.
- Adds visibility threshold tuning via `RobustPredicateConfig`.
- Improves boundary facet detection by validating vertex count.
- Implements early exit proximity scan in high-density vertex check.
- Includes documentation updates for facet caching and performance benchmarking.
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/core/algorithms/robust_bowyer_watson.rs (2)

266-279: if let + && guard requires let-chains (Rust 1.88+, edition 2024).

if let Ok(boundary_facets) = call && !boundary_facets.is_empty() only compiles with let-chains, stabilized in Rust 1.88 under edition 2024. If the crate must build on earlier editions, bind first, then guard on emptiness.

Apply this diff:

-            if let Ok(boundary_facets) =
-                self.find_cavity_boundary_facets_with_robust_fallback(tds, &bad_cells)
-                && !boundary_facets.is_empty()
-            {
+            if let Ok(boundary_facets) =
+                self.find_cavity_boundary_facets_with_robust_fallback(tds, &bad_cells)
+            {
+                if boundary_facets.is_empty() {
+                    // Fall back to trait method below
+                } else {
                 let cells_removed = bad_cells.len();
                 <Self as InsertionAlgorithm<T, U, V, D>>::remove_bad_cells(tds, &bad_cells);
                 <Self as InsertionAlgorithm<T, U, V, D>>::ensure_vertex_in_tds(tds, vertex);
                 let cells_created =
                     <Self as InsertionAlgorithm<T, U, V, D>>::create_cells_from_boundary_facets(
                         tds,
                         &boundary_facets,
                         vertex,
                     );
@@
-                return Ok(InsertionInfo {
+                    return Ok(InsertionInfo {
                         strategy: InsertionStrategy::CavityBased,
                         cells_removed,
                         cells_created,
                         success: true,
                         degenerate_case_handled: false,
-                });
+                    });
+                }
             }

320-341: Same if let + && let-chain construct here.

If targeting editions before 2024, split the binding from the non-empty check.

-        if let Ok(visible_facets) =
-            self.find_visible_boundary_facets_with_robust_fallback(tds, vertex)
-            && !visible_facets.is_empty()
-        {
+        if let Ok(visible_facets) =
+            self.find_visible_boundary_facets_with_robust_fallback(tds, vertex)
+        {
+            if visible_facets.is_empty() {
+                // Fall back to trait method below
+            } else {
             <Self as InsertionAlgorithm<T, U, V, D>>::ensure_vertex_in_tds(tds, vertex);
             let cells_created =
                 <Self as InsertionAlgorithm<T, U, V, D>>::create_cells_from_boundary_facets(
                     tds,
                     &visible_facets,
                     vertex,
                 );
@@
-            return Ok(InsertionInfo {
+                return Ok(InsertionInfo {
                     strategy: InsertionStrategy::HullExtension,
                     cells_removed: 0,
                     cells_created,
                     success: true,
                     degenerate_case_handled: false,
-            });
+                });
+            }
         }
♻️ Duplicate comments (1)
src/core/algorithms/robust_bowyer_watson.rs (1)

1537-1562: UUID↔Key symmetry checks look good; keep using helpers.

Using cell_key_from_uuid/cell_uuid_from_key in tests ensures the public UUID API remains intact during the SlotMap migration.

🧹 Nitpick comments (10)
src/core/traits/facet_cache.rs (3)

120-162: Prevent publishing a stale cache; set generation snapshot on initial build.

If current_generation == cached_generation and the cache is absent, we build and publish a cache but never set cached_generation. Worse, if TDS mutates during build, we may publish a stale cache for one call window. Capture the generation snapshot after install to align the two and narrow the race window.

Apply this diff inside the map_or_else None-branch:

-                || {
+                || {
                     // No cache exists yet - build and store it
                     let new_cache = tds.build_facet_to_cells_hashmap();
                     let new_cache_arc = Arc::new(new_cache);

                     // Try to swap in the new cache (another thread might have done it already)
                     // If another thread beat us to it, use their cache instead
                     let none_ref: Option<Arc<FacetToCellsMap>> = None;
                     let _old = self
                         .facet_cache()
                         .compare_and_swap(&none_ref, Some(new_cache_arc.clone()));
-                    // Load the current cache (could be ours or another thread's)
-                    self.facet_cache().load_full().unwrap_or(new_cache_arc)
+                    // Load the current cache (could be ours or another thread's)
+                    let cache = self.facet_cache().load_full().unwrap_or(new_cache_arc);
+                    // Record the generation snapshot that the cache corresponds to
+                    self.cached_generation().store(current_generation, Ordering::Relaxed);
+                    cache
                 },

117-123: Tighten atomic orderings for cross-thread visibility.

Use Acquire on generation loads and Release on stores to establish a happens-before with cache publication. Relaxed works in practice but weakens intent.

-        let current_generation = tds.generation.load(Ordering::Relaxed);
-        let cached_generation = self.cached_generation().load(Ordering::Relaxed);
+        let current_generation = tds.generation.load(Ordering::Acquire);
+        let cached_generation = self.cached_generation().load(Ordering::Acquire);
@@
-            self.facet_cache().store(Some(new_cache_arc.clone()));
+            self.facet_cache().store(Some(new_cache_arc.clone())); // ArcSwap ensures proper publication
@@
-            self.cached_generation()
-                .store(current_generation, Ordering::Relaxed);
+            self.cached_generation()
+                .store(current_generation, Ordering::Release);
@@
-        self.cached_generation().store(0, Ordering::Relaxed);
+        self.cached_generation().store(0, Ordering::Release);

Also applies to: 153-159, 179-187


73-80: Trait bounds are over-constrained for a cache provider.

FacetCacheProvider doesn’t use numeric ops; requiring AddAssign, SubAssign, Sum, NumCast, Div inflates compile time and restricts implementors. Keep only what’s actually needed (likely none beyond CoordinateScalar and array serde bounds).

src/core/boundary.rs (2)

145-147: Minor: avoid rebuilding the facet map per call in hot paths.

is_boundary_facet rebuilds the map every time. Prefer exposing a cached path (e.g., via a FacetCacheProvider implementation on TDS) and delegate to is_boundary_facet_with_map to cut O(N·F) recomputation in loops.


191-206: Graceful handling looks good; add a debug hint for unexpected vertex sets.

The D-check and fall-through false are fine. Consider a debug_assert_eq!(facet_vertices.len(), D) to surface invalid facets during dev builds without changing release behavior.

src/geometry/algorithms/convex_hull.rs (1)

482-531: Fallback heuristic: consider bounding by epsilon to avoid false positives near the hull.

Distance-vs-diameter works, but very near-surface outside points may flip due to numeric noise. Optionally add a small relative epsilon (e.g., 1e-12 scale) in the comparison.

src/core/util.rs (1)

673-683: Confirm SmallBuffer growth semantics for D > MAX_PRACTICAL_DIMENSION_SIZE.

If SmallBuffer spills to heap, we’re fine; if it panics on overflow, large-D runs will fail. Add a brief comment or assert to document behavior.

src/core/algorithms/robust_bowyer_watson.rs (3)

810-819: Bounds check on facet_index is correct; consider early-continue instead of nested if.

Minor readability: replace the inner if idx < facets.len() block with if idx >= facets.len() { continue; }.

-                    let idx = usize::from(facet_index);
-                    debug_assert!(idx < facets.len(), "facet_index out of bounds");
-                    if idx < facets.len() {
-                        let facet = &facets[idx];
+                    let idx = usize::from(facet_index);
+                    debug_assert!(idx < facets.len(), "facet_index out of bounds");
+                    if idx >= facets.len() {
+                        continue;
+                    }
+                    let facet = &facets[idx];

948-978: Avoid magic number for visibility heuristic multiplier.

Hard‑coding 100.0 makes tuning harder. Promote to config (e.g., optional field on RobustPredicateConfig with a sane default), or at least a module‑level const.

I can draft the config change and a migration note if you want.


463-509: SmallBuffer is SmallVec (growable); switching to Vec is optional.

SmallBuffer is a type alias for SmallVec<[T; N]> (src/core/collections.rs:199) and automatically falls back to heap allocation when exceeded — the panic concern is incorrect. The code already calls bad_cells.into_vec(), so it ultimately returns a Vec; replace with Vec only if you want explicit heap-backed semantics or expect many bad cells.

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b7bf370 and d62220a.

📒 Files selected for processing (6)
  • docs/OPTIMIZING_BOWYER_WATSON.md (1 hunks)
  • src/core/algorithms/robust_bowyer_watson.rs (25 hunks)
  • src/core/boundary.rs (10 hunks)
  • src/core/traits/facet_cache.rs (1 hunks)
  • src/core/util.rs (2 hunks)
  • src/geometry/algorithms/convex_hull.rs (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • docs/OPTIMIZING_BOWYER_WATSON.md
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/util.rs
  • src/geometry/algorithms/convex_hull.rs
  • src/core/algorithms/robust_bowyer_watson.rs
  • src/core/boundary.rs
🧬 Code graph analysis (5)
src/core/util.rs (4)
src/core/facet.rs (6)
  • vertex (370-372)
  • facet_key_from_vertex_keys (574-586)
  • new (287-293)
  • key (455-473)
  • vertices (437-444)
  • cell (325-327)
src/core/triangulation_data_structure.rs (3)
  • new (1370-1402)
  • vertices (541-543)
  • default (206-208)
src/core/vertex.rs (1)
  • uuid (458-460)
src/core/cell.rs (8)
  • vertices (474-476)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • uuid (501-503)
src/geometry/algorithms/convex_hull.rs (3)
src/core/traits/facet_cache.rs (5)
  • facet_cache (82-82)
  • facet_cache (216-218)
  • cached_generation (85-85)
  • cached_generation (220-222)
  • new (207-212)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/triangulation_data_structure.rs (2)
  • vertices (541-543)
  • new (1370-1402)
src/core/algorithms/robust_bowyer_watson.rs (4)
src/geometry/util.rs (12)
  • core (1388-1388)
  • std (238-238)
  • std (278-278)
  • std (494-494)
  • std (502-502)
  • std (1971-1971)
  • std (2059-2059)
  • safe_usize_to_scalar (158-158)
  • safe_usize_to_scalar (480-517)
  • safe_usize_to_scalar (1101-1101)
  • safe_usize_to_scalar (1410-1410)
  • v (143-143)
src/core/util.rs (2)
  • derive_facet_key_from_vertices (663-696)
  • k (367-367)
src/core/traits/insertion_algorithm.rs (3)
  • with_capacity (238-245)
  • new (88-100)
  • new (227-234)
src/core/triangulation_data_structure.rs (2)
  • new (1370-1402)
  • cells (500-502)
src/core/traits/facet_cache.rs (2)
src/geometry/algorithms/convex_hull.rs (3)
  • facet_cache (1097-1099)
  • cached_generation (1101-1103)
  • is_empty (969-971)
src/core/triangulation_data_structure.rs (2)
  • new (1370-1402)
  • vertices (541-543)
src/core/boundary.rs (4)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/traits/boundary_analysis.rs (2)
  • is_boundary_facet_with_map (178-182)
  • is_boundary_facet (130-130)
src/core/facet.rs (4)
  • facet_vertices (699-711)
  • new (287-293)
  • vertices (437-444)
  • cell (325-327)
src/core/triangulation_data_structure.rs (3)
  • cells (500-502)
  • new (1370-1402)
  • vertices (541-543)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: build (windows-latest)
  • GitHub Check: Code Coverage
  • GitHub Check: performance-regression
🔇 Additional comments (10)
src/core/traits/facet_cache.rs (1)

190-585: Tests: excellent coverage and concurrency checks.

Good lifecycle, wraparound, and concurrent-access tests. This significantly reduces risk around the cache.

src/core/boundary.rs (1)

83-104: Boundary extraction logic LGTM.

Uses the canonical facet-to-cells map, validates indices, and propagates errors from facets(). Good alignment with the Delaunay invariant.

src/geometry/algorithms/convex_hull.rs (3)

376-386: Good switch to keyed lookups and shared cache.

Deriving the facet key via derive_facet_key_from_vertices and using the shared cache removes repeated recomputation. Error mapping is appropriate.


1042-1078: Expose an invalidate method (nice).

invalidate_cache() correctly clears the ArcSwap and resets generation. This pairs well with the provider trait.


1081-1104: Correct trait implementation details.

Returning self.cached_generation.as_ref() matches the trait's &AtomicU64 requirement despite storing an Arc<AtomicU64>.

src/core/util.rs (1)

664-697: Facet-key derivation is consistent with TDS mapping.

Uses VertexKey lookup and the same canonical key function as the map builder. Docs cover empty slice as key 0. Looks good.

src/core/algorithms/robust_bowyer_watson.rs (4)

536-541: Good: facet keys derived from VertexKeys via utility, skipping incomplete facets.

This aligns with boundary.rs/TDS and avoids partial-hash bugs.

Consider logging (trace-only) when a facet is skipped due to missing vertex mapping to aid future debugging.


681-720: Validate-and-dedupe facet→cells mapping is solid.

  • Dedupe per-facet cell keys defensively.
  • Enforce ≤2 sharing cells invariant and bubble up a structured error.

1175-1225: Helpful neighbor-facet consistency test.

This catches subtle neighbor assignment/vertex‑key mismatches across the migration.


1814-1821: Efficient duplicate-cell detection.

Sorting UUIDs into a small buffer and hashing that signature is clean and fast.

Adds an epsilon-based threshold to the fallback visibility test in the convex hull algorithm to prevent false positives due to numeric noise near the hull surface. This improves the robustness of the convex hull computation in degenerate cases.

Also, this commit modifies facet cache to use Acquire/Release ordering for atomic operations to ensure data consistency across threads. Additionally, adds a debug assertion to validate facet vertex counts.
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
src/geometry/algorithms/convex_hull.rs (1)

534-543: Tidy epsilon fallback and reduce nesting.

Minor readability refactor; behavior unchanged.

Apply this diff:

-        // Add epsilon-based bound to avoid false positives from numeric noise
-        // Use a small relative epsilon (1e-12 scale) to handle near-surface points
-        let epsilon_factor = num_traits::NumCast::from(1e-12f64).unwrap_or_else(|| {
-            // Fallback for types that can't represent 1e-12 (e.g., very limited precision)
-            num_traits::NumCast::from(1e-6f64).unwrap_or_else(T::default_tolerance)
-        });
-        let epsilon_threshold = max_edge_sq * epsilon_factor;
-        let adjusted_threshold = max_edge_sq + epsilon_threshold;
+        // Add epsilon-based bound to avoid false positives from numeric noise
+        // Use a small relative epsilon (1e-12 scale) to handle near-surface points
+        let epsilon_factor: T = num_traits::NumCast::from(1e-12f64)
+            .or_else(|| num_traits::NumCast::from(1e-6f64))
+            .unwrap_or_else(|| T::default_tolerance());
+        let adjusted_threshold = max_edge_sq + max_edge_sq * epsilon_factor;
src/core/boundary.rs (2)

257-274: Remove the empty inherent impl block; it adds no behavior and duplicates heavy bounds.

This block enforces no additional constraints and only increases compile-time surface. Delete it to reduce noise.

-/// Additional helper methods for Tds boundary analysis.
-impl<T, U, V, const D: usize> Tds<T, U, V, D>
-where
-    T: CoordinateScalar
-        + AddAssign<T>
-        + ComplexField<RealField = T>
-        + SubAssign<T>
-        + Sum
-        + From<f64>
-        + DeserializeOwned,
-    U: DataType + DeserializeOwned,
-    V: DataType + DeserializeOwned,
-    f64: From<T>,
-    for<'a> &'a T: Div<T>,
-    [T; D]: Copy + Default + DeserializeOwned + Serialize + Sized,
-    ordered_float::OrderedFloat<f64>: From<T>,
-{
-}
+// (removed empty impl block)

434-436: Avoid asserting facet_key != 0.

Hash-derived keys can legitimately be 0; this assertion can produce false negatives with some inputs or hash implementations. Recommend removing it.

-        // Verify the key is non-zero (should be a proper hash)
-        assert_ne!(facet_key, 0, "Facet key should be non-zero");
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d62220a and c85bd71.

📒 Files selected for processing (3)
  • src/core/boundary.rs (11 hunks)
  • src/core/traits/facet_cache.rs (1 hunks)
  • src/geometry/algorithms/convex_hull.rs (9 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/core/traits/facet_cache.rs
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/boundary.rs
  • src/geometry/algorithms/convex_hull.rs
🧬 Code graph analysis (2)
src/core/boundary.rs (4)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/traits/boundary_analysis.rs (2)
  • is_boundary_facet_with_map (178-182)
  • is_boundary_facet (130-130)
src/core/facet.rs (4)
  • facet_vertices (699-711)
  • new (287-293)
  • vertices (437-444)
  • cell (325-327)
src/core/triangulation_data_structure.rs (3)
  • cells (500-502)
  • new (1370-1402)
  • vertices (541-543)
src/geometry/algorithms/convex_hull.rs (3)
src/core/traits/facet_cache.rs (5)
  • facet_cache (82-82)
  • facet_cache (220-222)
  • cached_generation (85-85)
  • cached_generation (224-226)
  • new (211-216)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/triangulation_data_structure.rs (2)
  • vertices (541-543)
  • new (1370-1402)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Code Coverage
  • GitHub Check: build (windows-latest)
  • GitHub Check: performance-regression
🔇 Additional comments (14)
src/geometry/algorithms/convex_hull.rs (7)

5-7: Imports look correct; ensure trait is in scope in tests too.

Bringing FacetCacheProvider and derive_facet_key_from_vertices into this module is right. Make sure the tests module also imports the trait so method resolution works there (see test comments below).


376-382: Good switch to cache + vertex-derived facet keys.

Using get_or_build_facet_cache and derive_facet_key_from_vertices aligns with the SlotMap/vertex-key source of truth and propagates FacetError cleanly.


459-466: Doc updates are clear and helpful.

The scale-adaptive fallback description is precise and actionable.

Also applies to: 476-484


1093-1116: FacetCacheProvider impl is minimal and correct.

Exposes the cache and generation snapshot as required for the shared cache helpers.


1578-1594: Nice iterator-based test flow.

Collecting visibility results and logging coordinates improves diagnosability without test fragility.


1595-1604: Counting via iterators is clean.

No concerns.


1695-1703: Determinism check reads well.

Using iter().all keeps it concise.

src/core/boundary.rs (7)

110-115: Good docs: clear performance guidance on map reuse.

The note sets correct expectations and nudges callers to the cached path. LGTM.


150-153: Nice delegation to the cached variant.

Making is_boundary_facet a thin wrapper over is_boundary_facet_with_map removes duplication and reduces drift.


317-321: Tests: concise “all facets are boundary” checks look good.

These assertions validate the hot path without over-specifying test internals.

Also applies to: 359-366


368-406: Parity test between regular and cached lookup is well structured.

Good to assert equality of both paths before asserting boundary-ness.


584-591: Ad-hoc facet map build in the test is fine here.

It independently validates derive_facet_key_from_vertices against expected sharing counts. LGTM.


197-219: Add #[inline]; keep Option::is_some_and (MSRV = 1.89.0). Cargo.toml sets rust-version = "1.89.0", so Option::is_some_and is supported — do not replace with map_or; add #[inline] to is_boundary_facet_with_map as a small nit.

Likely an incorrect or invalid review comment.
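For readers unfamiliar with the idiom under discussion, here is a minimal std-only sketch of the `Option::is_some_and` pattern used in `is_boundary_facet_with_map`, with a plain `HashMap<u64, Vec<u32>>` standing in for the crate's facet-to-cells map (the real key and cell types differ):

```rust
use std::collections::HashMap;

/// True when the facet key maps to exactly one incident cell, i.e. a
/// boundary facet. `Option::is_some_and` (stable since Rust 1.70) reads
/// more directly than the older `map_or(false, ...)` spelling.
fn is_boundary(facet_to_cells: &HashMap<u64, Vec<u32>>, key: u64) -> bool {
    facet_to_cells.get(&key).is_some_and(|cells| cells.len() == 1)
}

fn main() {
    let mut map = HashMap::new();
    map.insert(1u64, vec![10u32]); // one incident cell: boundary
    map.insert(2, vec![10, 11]);   // two incident cells: interior
    assert!(is_boundary(&map, 1));
    assert!(!is_boundary(&map, 2));
    assert!(!is_boundary(&map, 3)); // unknown key: not boundary
    println!("boundary checks passed");
}
```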


778-783: Bench-only tests: keep rand::rng() — workspace pins rand = "0.9.2"

Cargo.toml pins rand = "0.9.2"; rand 0.9 renamed thread_rng() → rng() and documents rand::rng() as the thread-local generator, so no change required. (rust-random.github.io)

Addresses potential false positives in convex hull calculations due to numeric noise by refining the epsilon threshold calculation. The calculation now ensures a small relative epsilon (1e-12 scale) is applied to handle near-surface points more robustly. Also adds a missing import for FacetCacheProvider to the test module.
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
src/geometry/algorithms/convex_hull.rs (2)

5-8: Good: correct trait/util imports for cache + key derivation.

This fixes method-resolution for get_or_build_facet_cache and aligns visibility with the new key-derivation utility.


1143-1144: Nice: tests import the trait explicitly.

Resolves the extension-method resolution issue previously flagged.

🧹 Nitpick comments (7)
src/core/boundary.rs (3)

89-101: Harden against out-of-bounds/missing entries in release builds.

Indexing facets[idx] will panic in release if facet_index is invalid; the debug_assert won’t guard there. Use safe access for both the single-entry vector and the facet lookup.

Apply this diff:

-            if cells.len() == 1 {
-                let cell_id = cells[0].0;
-                let facet_index = cells[0].1;
-                if let Some(cell) = self.cells().get(cell_id) {
-                    let facets = cell.facets()?;
-                    let idx = usize::from(facet_index);
-                    debug_assert!(idx < facets.len(), "facet_index out of bounds");
-                    boundary_facets.push(facets[idx].clone());
-                }
-            }
+            if cells.len() == 1 {
+                if let Some((cell_id, facet_index)) = cells.first().copied() {
+                    if let Some(cell) = self.cells().get(cell_id) {
+                        let facets = cell.facets()?;
+                        if let Some(f) = facets.get(usize::from(facet_index)) {
+                            boundary_facets.push(f.clone());
+                        } else {
+                            debug_assert!(false, "facet_index out of bounds");
+                        }
+                    } else {
+                        debug_assert!(false, "cell_id not found in SlotMap");
+                    }
+                }
+            }
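The point of the diff above, `get` returning `None` instead of panicking on a bad index, can be shown in isolation. This is a sketch with `&str` values standing in for the crate's `Facet` type:

```rust
/// Safe facet lookup: `slice::get` returns `None` for an out-of-range
/// index instead of panicking, which matters in release builds where
/// `debug_assert!` is compiled out.
fn facet_at<'a>(facets: &'a [&'a str], facet_index: u8) -> Option<&'a str> {
    facets.get(usize::from(facet_index)).copied()
}

fn main() {
    let facets = ["f0", "f1", "f2"];
    assert_eq!(facet_at(&facets, 1), Some("f1"));
    assert_eq!(facet_at(&facets, 7), None); // facets[7] would panic here
    println!("safe access ok");
}
```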

150-153: Inline tiny wrapper for hot paths.

Add #[inline] to avoid call overhead when spam‑checking facets.

-    fn is_boundary_facet(&self, facet: &Facet<T, U, V, D>) -> bool {
+    #[inline]
+    fn is_boundary_facet(&self, facet: &Facet<T, U, V, D>) -> bool {

474-483: Avoid rebuilding the map per facet in this tight loop.

Use the cached variant to cut repeated O(N·F) work in the test.

-        let mut confirmed_boundary = 0;
+        // Cache once
+        let facet_to_cells = tds.build_facet_to_cells_hashmap();
+        let mut confirmed_boundary = 0;
         for boundary_facet in &boundary_facets {
-            if tds.is_boundary_facet(boundary_facet) {
+            if tds.is_boundary_facet_with_map(boundary_facet, &facet_to_cells) {
                 confirmed_boundary += 1;
             }
         }
src/geometry/algorithms/convex_hull.rs (4)

1577-1603: Test micro‑tidy: avoid allocating visibility_results just to count.

You can compute counts directly and keep output.

-        // Test each point and collect visibility results
-        let visibility_results: Vec<(Point<f64, 3>, &str, bool)> = test_points
-            .into_iter()
-            .map(|(point, description)| {
-                let is_visible =
-                    ConvexHull::<f64, Option<()>, Option<()>, 3>::fallback_visibility_test(
-                        test_facet, &point,
-                    )
-                    .unwrap();
-
-                let coords: [f64; 3] = point.into();
-                println!("  Point {coords:?} ({description}) - Visible: {is_visible}");
-
-                (point, description, is_visible)
-            })
-            .collect();
-
-        // Count results using iterator patterns
-        let visible_count = visibility_results
-            .iter()
-            .filter(|(_, _, visible)| *visible)
-            .count();
-        let not_visible_count = visibility_results
-            .iter()
-            .filter(|(_, _, visible)| !*visible)
-            .count();
+        let mut visible_count = 0usize;
+        let mut not_visible_count = 0usize;
+        for (point, description) in test_points {
+            let is_visible =
+                ConvexHull::<f64, Option<()>, Option<()>, 3>::fallback_visibility_test(
+                    test_facet, &point,
+                )
+                .unwrap();
+            let coords: [f64; 3] = point.into();
+            println!("  Point {coords:?} ({description}) - Visible: {is_visible}");
+            if is_visible { visible_count += 1 } else { not_visible_count += 1 }
+        }

3601-3606: Prefer Arc::ptr_eq for pointer equality in tests.

It’s the idiomatic way to compare Arc identity.

-        // Verify the cache Arc is the same (reused)
-        let cache_ptr1 = Arc::as_ptr(&cache1);
-        let cache_ptr2 = Arc::as_ptr(&cache2);
-        assert_eq!(
-            cache_ptr1, cache_ptr2,
-            "Cache Arc should be reused when generation matches"
-        );
+        // Verify the cache Arc is the same (reused)
+        assert!(
+            Arc::ptr_eq(&cache1, &cache2),
+            "Cache Arc should be reused when generation matches"
+        );
@@
-        // But should be a different Arc instance
-        let cache_ptr3 = Arc::as_ptr(&cache3);
-        assert_ne!(
-            cache_ptr1, cache_ptr3,
-            "Rebuilt cache should be a new Arc instance"
-        );
+        // But should be a different Arc instance
+        assert!(
+            !Arc::ptr_eq(&cache1, &cache3),
+            "Rebuilt cache should be a new Arc instance"
+        );

Also applies to: 3623-3626
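A self-contained illustration of why `Arc::ptr_eq` is the right tool for identity checks: it compares allocations directly, while content equality can hold for two distinct `Arc`s. The `Vec<u64>` payload is a stand-in for the cached facet map:

```rust
use std::sync::Arc;

fn main() {
    let cache1 = Arc::new(vec![1u64, 2, 3]);
    let cache2 = Arc::clone(&cache1);        // same allocation (reused cache)
    let cache3 = Arc::new(vec![1u64, 2, 3]); // equal contents, new allocation

    // Arc::ptr_eq compares the underlying allocations, not the contents.
    assert!(Arc::ptr_eq(&cache1, &cache2));
    assert!(!Arc::ptr_eq(&cache1, &cache3));
    assert_eq!(*cache1, *cache3); // contents still compare equal
    println!("ptr_eq checks passed");
}
```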


3654-3690: Strengthen the helper‑integration test with an assertion on keys_found.

We’re verifying derivation→cache consistency; asserting at least one hit tightens the guarantee without assuming all hull facets are cached.

         println!(
             "  Found {keys_found}/{} hull facet keys in cache",
             hull.hull_facets.len()
         );
 
-        // Cache should be non-empty (contains facets from the TDS)
+        // Cache should be non-empty (contains facets from the TDS)
         assert!(
             !cache.is_empty(),
             "Cache should contain facets from the triangulation"
         );
+        assert!(
+            keys_found > 0,
+            "At least one hull facet key should be present in the cache"
+        );

534-542: Centralize relative epsilon and remove the local 1e-6 fallback.

CoordinateScalar::default_tolerance() exists, and other modules (e.g., robust_predicates) use cast(1e-12).unwrap_or_else(T::default_tolerance). Replace the local 1e-12 → 1e-6 → default_tolerance chain in src/geometry/algorithms/convex_hull.rs (around the epsilon_factor code) with a single, consistent fallback such as num_traits::NumCast::from(1e-12f64).unwrap_or_else(T::default_tolerance), or extract a crate-level RELATIVE_EPSILON constant, to keep behavior consistent across the crate.
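To make the relative-epsilon idea concrete, a minimal f64-only sketch: the threshold is scaled by the squared edge length so the tolerance tracks the geometry's scale. The 1e-12 factor follows the discussion above; the crate's generic `T` and its fallback chain are omitted:

```rust
/// Scale-relative tolerance: a fixed absolute epsilon misclassifies
/// points once coordinates grow large, so the threshold is inflated by
/// a small factor of the squared edge length instead.
fn exceeds_threshold(distance_sq: f64, max_edge_sq: f64) -> bool {
    const EPSILON_FACTOR: f64 = 1e-12;
    let adjusted_threshold = max_edge_sq + max_edge_sq * EPSILON_FACTOR;
    distance_sq > adjusted_threshold
}

fn main() {
    // A point clearly beyond the edge scale is flagged...
    assert!(exceeds_threshold(1.1e6, 1.0e6));
    // ...while numeric noise below the 1e-12 relative scale is absorbed.
    assert!(!exceeds_threshold(1.0e6 * (1.0 + 1e-13), 1.0e6));
    println!("threshold checks passed");
}
```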

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c85bd71 and abae967.

📒 Files selected for processing (2)
  • src/core/boundary.rs (11 hunks)
  • src/geometry/algorithms/convex_hull.rs (10 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/boundary.rs
  • src/geometry/algorithms/convex_hull.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/geometry/algorithms/convex_hull.rs
🧬 Code graph analysis (1)
src/core/boundary.rs (5)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/traits/boundary_analysis.rs (2)
  • is_boundary_facet_with_map (178-182)
  • is_boundary_facet (130-130)
src/core/facet.rs (4)
  • facet_vertices (699-711)
  • new (287-293)
  • vertices (437-444)
  • cell (325-327)
src/core/triangulation_data_structure.rs (4)
  • cells (500-502)
  • new (1370-1402)
  • vertices (541-543)
  • default (206-208)
src/core/vertex.rs (1)
  • from_points (347-352)
🪛 GitHub Check: build (ubuntu-latest)
src/core/boundary.rs

[warning] 412-412:
Diff in /home/runner/work/delaunay/delaunay/src/core/boundary.rs

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: performance-regression
  • GitHub Check: Codacy Security Scan
  • GitHub Check: build (windows-latest)
  • GitHub Check: Code Coverage
🔇 Additional comments (13)
src/core/boundary.rs (9)

6-11: Good move: centralize facet-key derivation.

Importing and using derive_facet_key_from_vertices aligns boundary checks with the SlotMap key path and avoids UUID drift.


110-115: Helpful perf note in docs.

Clear guidance to prefer the cached variant in hot paths.


299-303: Nice: assert all facets are boundary (2D).

Simple and correct; leverages the new API nicely.


342-348: Good parity check (3D).

Confirms behavior matches topology invariant.


350-387: Solid cached‑map test.

Verifies equivalence between regular and cached variants.


389-441: Good integration test for key derivation.

Covers determinism and consistency with the facet map.


564-571: Good cross‑check via explicit facet map.

Builds an independent proof of boundary vs. shared facets; aligns with the new key path.


197-219: Resolved. Option::is_some_and was stabilized in Rust 1.70, and Cargo.toml sets rust-version = "1.89.0", so it is supported; the map_or change is unnecessary, and adding #[inline] is an optional nit.


759-763: rand dependency confirmed — no change required

Cargo.toml pins rand = "0.9.2"; Cargo.lock contains both 0.9.2 and a transitive 0.8.5. No replacement with thread_rng() is necessary unless you intentionally need to support rand 0.8.x.

src/geometry/algorithms/convex_hull.rs (4)

376-382: Cache lookup + facet-key derivation are wired correctly.

Using get_or_build_facet_cache(tds) and derive_facet_key_from_vertices(...) matches the SlotMap migration plan and avoids UUID-based lookups.

Please confirm that FacetCacheProvider::get_or_build_facet_cache guarantees a non-empty map when boundary facets exist (panic note in docs could be removed if guaranteed).


459-466: Docs read well and reflect the scale-adaptive fallback approach.

Clear algorithm notes and error semantics.

Also applies to: 476-485


1091-1114: Trait impl looks correct and minimal.

facet_cache() and cached_generation() expose exactly what the default methods need; bounds mirror the parent impl.


1695-1701: LGTM: deterministic results check is concise.

Addresses a potential out-of-bounds access when determining boundary facets. This could occur when a facet index was incorrect or a cell ID was not found in the slot map.

Also, improves the robustness of the convex hull visibility test by adjusting the epsilon factor calculation and simplifies the test loop. This ensures accurate visibility determination even in the presence of numeric noise.
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
src/geometry/algorithms/convex_hull.rs (1)

1142-1142: Tests import the trait to resolve extension methods.

This fixes the method resolution issue noted earlier.

🧹 Nitpick comments (5)
src/core/boundary.rs (3)

91-103: Boundary extraction works; consider a tiny robustness/logging tweak.

Logic is correct. To aid debugging when maps and indices drift, log the unexpected branches instead of silent debug_assert!s in release. Optional.

Apply within this block:

-                    } else {
-                        debug_assert!(false, "facet_index out of bounds");
-                    }
+                    } else {
+                        eprintln!("boundary_facets: facet_index {facet_index:?} out of bounds for cell_id {cell_id:?}");
+                        debug_assert!(false, "facet_index out of bounds");
+                    }
                 } else {
+                    eprintln!("boundary_facets: cell_id {cell_id:?} not found");
                     debug_assert!(false, "cell_id not found in SlotMap");
                 }

160-224: Handle derivation failures explicitly for better diagnostics.

Today a failed key derivation returns false silently. Emit a debug message so mismatches are easier to find.

Apply this diff:

-        if let Ok(facet_key) = derive_facet_key_from_vertices(&facet_vertices, self) {
-            return facet_to_cells
-                .get(&facet_key)
-                .is_some_and(|cells| cells.len() == 1);
-        }
-        false
+        match derive_facet_key_from_vertices(&facet_vertices, self) {
+            Ok(facet_key) => {
+                facet_to_cells
+                    .get(&facet_key)
+                    .is_some_and(|cells| cells.len() == 1)
+            }
+            Err(e) => {
+                debug_assert!(false, "derive_facet_key_from_vertices failed: {e:?}");
+                false
+            }
+        }

569-576: Micro: pre-size the map in tests.

Tiny perf win when building facet_map.

Apply this diff:

-        let mut facet_map: FastHashMap<u64, Vec<_>> = FastHashMap::default();
+        let mut facet_map: FastHashMap<u64, Vec<_>> =
+            FastHashMap::with_capacity(tds.number_of_cells() * (D + 1));
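The pre-sizing suggestion in std terms, with `HashMap::with_capacity` standing in for the crate's `fast_hash_map_with_capacity` helper; each of the N cells contributes D+1 facets, so reserving that many slots up front avoids rehashing mid-build:

```rust
use std::collections::HashMap;

fn main() {
    // Illustrative sizes: 100 cells in 3D, each with D + 1 = 4 facets.
    let number_of_cells = 100;
    let d = 3;

    // Pre-size to the expected number of facet entries.
    let mut facet_map: HashMap<u64, Vec<usize>> =
        HashMap::with_capacity(number_of_cells * (d + 1));
    assert!(facet_map.capacity() >= number_of_cells * (d + 1));

    // Inserts up to the reserved capacity will not trigger a rehash.
    facet_map.entry(42).or_default().push(0);
    assert_eq!(facet_map.len(), 1);
    println!("capacity ok");
}
```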
src/geometry/algorithms/convex_hull.rs (2)

534-541: Guard degenerate facets in fallback threshold.

If all edge lengths collapse to zero (pathological input), the threshold stays zero; return not-visible instead of relying on centroid distance.

Apply this diff:

-        let epsilon_factor: T =
-            num_traits::NumCast::from(1e-12f64).unwrap_or_else(|| T::default_tolerance());
-        let adjusted_threshold = max_edge_sq + max_edge_sq * epsilon_factor;
+        if max_edge_sq == T::zero() {
+            // Degenerate facet geometry; treat as not visible.
+            return Ok(false);
+        }
+        let epsilon_factor: T =
+            num_traits::NumCast::from(1e-12f64).unwrap_or_else(|| T::default_tolerance());
+        let adjusted_threshold = max_edge_sq + max_edge_sq * epsilon_factor;

3640-3694: Strengthen expectation: every hull facet key should be in cache.

Hull facets are boundary facets, so each should map to exactly one cell in the FacetToCellsMap. Count all, not “> 0”.

Apply this diff:

-        let mut keys_found = 0;
+        let mut keys_found = 0usize;
         for (i, facet) in hull.hull_facets.iter().enumerate() {
             let facet_vertices = facet.vertices();
 
             let derived_key_result = derive_facet_key_from_vertices(&facet_vertices, &tds);
 
             if let Ok(derived_key) = derived_key_result {
                 if cache.contains_key(&derived_key) {
                     keys_found += 1;
                     println!("    Facet {i}: key {derived_key} found in cache ✓");
                 } else {
-                    println!("    Facet {i}: key {derived_key} NOT in cache (boundary facet)");
+                    println!("    Facet {i}: key {derived_key} NOT in cache (unexpected)");
                 }
             } else {
                 println!(
                     "    Facet {i}: key derivation failed: {:?}",
                     derived_key_result.err()
                 );
             }
         }
 
-        println!(
-            "  Found {keys_found}/{} hull facet keys in cache",
-            hull.hull_facets.len()
-        );
+        println!("  Found {keys_found}/{} hull facet keys in cache", hull.hull_facets.len());
 
         // Cache should be non-empty (contains facets from the TDS)
         assert!(
             !cache.is_empty(),
             "Cache should contain facets from the triangulation"
         );
-        assert!(
-            keys_found > 0,
-            "At least one hull facet key should be present in the cache"
-        );
+        assert_eq!(
+            keys_found,
+            hull.hull_facets.len(),
+            "Every hull facet key should be present in the cache"
+        );
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between abae967 and b36693e.

📒 Files selected for processing (2)
  • src/core/boundary.rs (12 hunks)
  • src/geometry/algorithms/convex_hull.rs (10 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/core/boundary.rs (5)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/triangulation_data_structure.rs (3)
  • cells (500-502)
  • new (1370-1402)
  • vertices (541-543)
src/core/cell.rs (9)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • facets (1052-1057)
  • from (371-373)
  • vertices (474-476)
src/core/facet.rs (4)
  • cell (325-327)
  • facet_vertices (699-711)
  • new (287-293)
  • vertices (437-444)
src/core/traits/boundary_analysis.rs (2)
  • is_boundary_facet (130-130)
  • is_boundary_facet_with_map (178-182)
src/geometry/algorithms/convex_hull.rs (4)
src/core/traits/facet_cache.rs (5)
  • facet_cache (82-82)
  • facet_cache (220-222)
  • cached_generation (85-85)
  • cached_generation (224-226)
  • new (211-216)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/geometry/traits/coordinate.rs (4)
  • default_tolerance (521-521)
  • default_tolerance (526-528)
  • default_tolerance (532-534)
  • new (643-643)
src/core/triangulation_data_structure.rs (2)
  • vertices (541-543)
  • new (1370-1402)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Code Coverage
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: performance-regression
  • GitHub Check: Codacy Security Scan
🔇 Additional comments (22)
src/core/boundary.rs (10)

10-10: Good call: centralize facet-key derivation.

Importing and using derive_facet_key_from_vertices removes duplication and keeps key computation consistent with build_facet_to_cells_hashmap.


114-119: Docs: clear performance warning.

Nice guidance toward the cached variant; this will save callers from accidental N·F recomputation in hot paths.


154-158: LGTM: thin wrapper delegates to cached path.

Inlining and delegation keep the non-cached API while nudging consumers toward the efficient version.


265-268: Test imports are correct.

Brings the helper into scope and matches the production path.


304-307: Nice: aggregate verification over all facets.

Concise and representative.


346-352: Consistent with 3D single-cell expectations.

All four facets must be boundary; assertion reads well.


354-391: Great addition: parity check between cached and non-cached paths.

This defends against cache staleness/logic drift.


393-444: Solid integration test for key derivation.

Covers determinism and map consistency.


476-489: Efficient cached verification in 4D test.

Good switch to is_boundary_facet_with_map and counting.


764-765: Bench gating acknowledged.

Nice to keep heavy tests behind a feature.

src/geometry/algorithms/convex_hull.rs (12)

5-8: Imports: cache provider + key derivation are appropriate.

Keeps convex hull on the same facet-key derivation as boundary analysis and enables reuse of the cache infra.


376-383: Good: reuse cached facet→cells map in visibility path.

Avoids rebuilding on each call; key derivation aligns with TDS map.


459-461: Docs upgrade: scale‑adaptive fallback rationale is clear.

Helpful for readers debugging predicate degeneracies.

Also applies to: 477-484


1090-1113: Trait impl hooks up cache state cleanly.

Exposes the swap and generation snapshot with minimal bounds; matches the trait’s expectations.


3544-3623: Cache build/reuse/invalidation test is thorough.

Covers first build, reuse via Arc::ptr_eq, and rebuild on generation change.


241-273: Generation snapshot approach is sensible.

Independent Arc<AtomicU64> prevents aliasing the TDS counter and enables clear staleness checks.


275-453: Visibility algorithm reads well and errors are granular.

Adjacency lookup via derived facet key + robust orientation with a degenerate fallback is a solid pattern.


1051-1088: Cache invalidation API is clear.

Resets both cache and snapshot; docs set the right expectations.


763-1000: Bench-only tests: good bounds and commentary.

Loose time bounds avoid flakiness; using robust algorithm is a nice touch.


3335-3543: Excellent coverage of generation drift paths.

Thoroughly validates initial build, drift detection, rebuild, manual invalidation, and result consistency.


588-711: Nearest-facet selection: numeric path looks stable.

Centroid heuristic + is_none_or keeps code concise; error mapping is appropriate.


1-1: Action: Audit & replace remaining facet.key()/UUID→Key call sites.

Scan found occurrences — review and fix the following call sites:

  • facet.key(): src/core/boundary.rs:663; src/core/triangulation_data_structure.rs (docs 2084–2092) and code 5718; examples/triangulation_3d_50_points.rs:225; examples/convex_hull_3d_50_points.rs:173.
  • UUID↔Key helpers (tds.vertex_key_from_uuid / tds.cell_key_from_uuid / cell_uuid_from_key): src/core/traits/insertion_algorithm.rs:1657; src/core/util.rs:685; src/core/algorithms/robust_bowyer_watson.rs:1538,1691,1854,1885; multiple mappings/uses in src/core/triangulation_data_structure.rs (1050,1110,1183,1251,1334,1883,2003,2457,2622,2661,2698,3189,4254,4321,4422,5718,6558,6569).

Replace usages outside TDS internals/examples with SlotMap keys or the appropriate Tds helpers; keep and document TDS-internal facet.key() uses that intentionally compute canonical facet keys. Re-run the repo-wide scan to confirm no remaining matches.

Fixes a potential edge case where boundary facets with degenerate geometry could cause incorrect visibility testing results.

Adds additional debug assertions and logging to aid in diagnosing boundary facet issues.

Improves facet key lookup performance in tests by pre-allocating hashmap capacity.
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/geometry/algorithms/convex_hull.rs (1)

3528-3533: Compile-time bug: invalid float conversion in test (From<_> for f64).

<f64 as From<_>>::from(i) is not valid; use i as f64 or NumCast.

Apply:

-                let test_pt = Point::new([<f64 as From<_>>::from(i).mul_add(0.1, 2.0), 2.0, 2.0]);
+                let x = (i as f64).mul_add(0.1, 2.0);
+                let test_pt = Point::new([x, 2.0, 2.0]);
♻️ Duplicate comments (1)
src/geometry/algorithms/convex_hull.rs (1)

1146-1146: Fix confirmed: tests now import the trait.

Bringing FacetCacheProvider into scope in the tests unblocks method resolution for get_or_build_facet_cache(...). Matches the earlier review note.

🧹 Nitpick comments (6)
src/geometry/algorithms/convex_hull.rs (2)

1094-1118: Over-constrained impl bounds for FacetCacheProvider.

This impl only returns references to fields; ComplexField, Sum, From, f64: From, OrderedFloat: From, and the Div bound are unnecessary. They tighten generics and risk version friction.

Consider reducing to the minimal bounds:

-impl<T, U, V, const D: usize> FacetCacheProvider<T, U, V, D> for ConvexHull<T, U, V, D>
-where
-    T: CoordinateScalar
-        + ComplexField<RealField = T>
-        + AddAssign<T>
-        + SubAssign<T>
-        + Sum
-        + From<f64>,
-    U: DataType,
-    V: DataType,
-    f64: From<T>,
-    for<'a> &'a T: std::ops::Div<T>,
-    [T; D]: Copy + Default + DeserializeOwned + Serialize + Sized,
-    OrderedFloat<f64>: From<T>,
-{
+impl<T, U, V, const D: usize> FacetCacheProvider<T, U, V, D> for ConvexHull<T, U, V, D>
+where
+    T: CoordinateScalar,
+    U: DataType,
+    V: DataType,
+    [T; D]: Copy + Default + DeserializeOwned + Serialize + Sized,
+{

1580-1582: Minor: the explicit counters could be replaced with iterator combinators.

Not a blocker, but visible_count/not_visible_count can be computed with filter(...).count() (or partition) to keep the test terse.

src/core/boundary.rs (4)

91-110: Consider avoiding repeated cell.facets() allocations.

For each boundary facet you rebuild the full Vec for that cell. If this path becomes hot, cache per-cell facets locally (e.g., FastHashMap<CellKey, Vec>) to reuse between entries referencing the same cell_id. Keep as-is unless profiling shows pressure.

If useful, I can sketch a small per-call cache keyed by cell_id. Do you want that?


208-233: Minor: drop redundant vertex-count check or make it debug-only.

Facet::vertices() should always return D vertices by construction. The early return false adds a branch in release. Consider relying on debug_assert! only and let derive_facet_key_from_vertices handle invalid cases.

-        let facet_vertices = facet.vertices();
-        if facet_vertices.len() != D {
-            debug_assert_eq!(...);
-            return false;
-        }
-        match derive_facet_key_from_vertices(&facet_vertices, self) {
+        match derive_facet_key_from_vertices(&facet.vertices(), self) {
             Ok(facet_key) => facet_to_cells
                 .get(&facet_key)
                 .is_some_and(|cells| cells.len() == 1),
             Err(e) => {
                 debug_assert!(false, "derive_facet_key_from_vertices failed: {e:?}");
                 false
             }
         }

403-453: Solid integration test for facet-key derivation.

Optional: additionally assert that the (cell_id, facet_index) from the map corresponds to test_facet (by reconstructing and comparing keys), which would further pin the index mapping invariant.


571-575: Generalize facet-map capacity estimate; avoid magic number 4.

Use dim()+1 instead of a hardcoded 4 to keep this test generic across dimensions.

-        let mut facet_map: FastHashMap<u64, Vec<_>> =
-            fast_hash_map_with_capacity(tds.number_of_cells() * 4);
+        let mut facet_map: FastHashMap<u64, Vec<_>> =
+            fast_hash_map_with_capacity(tds.number_of_cells() * (tds.dim() + 1));

Also applies to: 578-586

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b36693e and 86bd478.

📒 Files selected for processing (2)
  • src/core/boundary.rs (12 hunks)
  • src/geometry/algorithms/convex_hull.rs (10 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/core/boundary.rs (6)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/triangulation_data_structure.rs (3)
  • cells (500-502)
  • new (1370-1402)
  • vertices (541-543)
src/core/cell.rs (9)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • facets (1052-1057)
  • from (371-373)
  • vertices (474-476)
src/core/facet.rs (4)
  • cell (325-327)
  • facet_vertices (699-711)
  • new (287-293)
  • vertices (437-444)
src/core/traits/boundary_analysis.rs (2)
  • is_boundary_facet (130-130)
  • is_boundary_facet_with_map (178-182)
src/core/collections.rs (2)
  • fast_hash_map_with_capacity (527-530)
  • fast_hash_map_with_capacity (629-629)
src/geometry/algorithms/convex_hull.rs (3)
src/core/traits/facet_cache.rs (5)
  • facet_cache (82-82)
  • facet_cache (220-222)
  • cached_generation (85-85)
  • cached_generation (224-226)
  • new (211-216)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/triangulation_data_structure.rs (2)
  • vertices (541-543)
  • new (1370-1402)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: Code Coverage
  • GitHub Check: performance-regression
  • GitHub Check: build (windows-latest)
🔇 Additional comments (13)
src/geometry/algorithms/convex_hull.rs (5)

5-8: Good: trait import enables extension methods.

Importing FacetCacheProvider in this module is necessary so calls like self.get_or_build_facet_cache(...) resolve. Looks correct.


376-382: Correct: cache usage + key derivation align with SlotMap migration.

Using get_or_build_facet_cache(tds) and derive_facet_key_from_vertices(...) ensures facet IDs are computed from vertex keys and the cached map is reused.


1687-1694: Good: deterministic consistency assertion.

Collecting results and asserting all equal makes the fallback behavior expectations explicit.


3548-3627: Nice coverage for cache lifecycle.

Tests verify empty→build→reuse→invalidate paths, Arc reuse via Arc::ptr_eq, and generation sync. This will catch regressions in the FacetCacheProvider flow.


3644-3684: Sanity check: key derivation matches cache contents.

Verifying derive_facet_key_from_vertices against cache keys ensures consistency with boundary facet keying after the SlotMap migration.

If flakes appear, ensure build_facet_to_cells_hashmap and derive_facet_key_from_vertices share the same vertex-key ordering/canonicalization.
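A minimal sketch of the canonicalization requirement, using a hypothetical key function (std's `DefaultHasher` stands in for whatever hash the crate actually uses): sorting the vertex keys before hashing makes the facet key independent of vertex order, which is exactly the property both call sites must share.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical stand-in for derive_facet_key_from_vertices: sort the vertex
// keys first so the same facet always hashes to the same key.
fn canonical_facet_key(mut vertex_keys: Vec<u64>) -> u64 {
    vertex_keys.sort_unstable();
    let mut hasher = DefaultHasher::new();
    vertex_keys.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same facet presented in two different vertex orders.
    assert_eq!(
        canonical_facet_key(vec![3, 1, 2]),
        canonical_facet_key(vec![2, 3, 1])
    );
    println!("ok");
}
```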

src/core/boundary.rs (8)

10-11: Import looks good and aligns with the SlotMap migration.

Using derive_facet_key_from_vertices centralizes facet-key computation and avoids duplication.


120-125: Good perf note.

Clear guidance to prefer the cached variant reduces accidental N·F recomputation.


160-164: Delegation reduces duplication.

Inlining is fine; building the map once per call is explicit and easy to reason about.


313-316: Nice consistency checks in 2D/3D tests.

Asserting via is_boundary_facet on all returned facets exercises both APIs.

Also applies to: 356-361


364-401: Great cached-vs-regular parity test.

This protects the optimized path against drift.


486-497: 4D counting via cached map is clean.

Confirms the optimized path in higher dimensions.


91-94: MSRV note: relies on let_chains.

The pattern if cells.len() == 1 && let Some(...) = ... uses let_chains, stabilized in Rust 1.88 (edition 2024 only). Ensure your MSRV and CI match. If you support older compilers, split into nested ifs.
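For reference, the MSRV-friendly rewrite looks like this (toy types, not the crate's real signatures):

```rust
// Sketch of the nested-if alternative to a let-chain (toy types).
fn first_if_single(cells: &[(u32, u8)]) -> Option<(u32, u8)> {
    // let-chain form, requires Rust 1.88+ / edition 2024:
    //   if cells.len() == 1 && let Some(pair) = cells.first().copied() { ... }

    // MSRV-friendly nested form:
    if cells.len() == 1 {
        if let Some(pair) = cells.first().copied() {
            return Some(pair);
        }
    }
    None
}

fn main() {
    assert_eq!(first_if_single(&[(7, 2)]), Some((7, 2)));
    assert_eq!(first_if_single(&[(7, 2), (8, 0)]), None);
    println!("ok");
}
```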


774-789: rand API usage is correct for rand = "0.9.2"; do not switch to thread_rng()/gen.

Cargo.toml pins rand = "0.9.2" (rust-version = "1.89.0") and the crate 0.9 API uses rand::rng(), Rng::random(), random_range(), etc.; the existing calls in src/core/boundary.rs, src/geometry/util.rs, and src/core/triangulation_data_structure.rs match 0.9 and the suggested thread_rng()/gen change targets rand 0.8 — do not apply.

Likely an incorrect or invalid review comment.

Improves performance of boundary facet detection by caching cell facets to avoid redundant computations when multiple boundary facets reference the same cell. Also simplifies facet visibility checks and adds additional verification of facet mapping consistency.
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🧹 Nitpick comments (2)
src/geometry/algorithms/convex_hull.rs (2)

459-476: Docs: error variant mismatch in fallback_visibility_test.

The text says “NumericCastFailed” but the code returns CoordinateConversion. Update the docs for accuracy.

Apply this diff:

-    /// Returns a [`ConvexHullConstructionError::NumericCastFailed`] if numeric
-    /// conversion fails during centroid calculation or threshold computation.
+    /// Returns a [`ConvexHullConstructionError::CoordinateConversion`] if
+    /// coordinate conversion fails during centroid calculation or threshold computation.

576-596: Tests: avoid double-evaluating fallback_visibility_test.

You compute visibility in the loop, then recompute inside filter. Cache the booleans once and reuse for the counts.

Apply this diff:

-        for (point, description) in &test_points {
+        let mut flags = Vec::with_capacity(test_points.len());
+        for (point, description) in &test_points {
             let is_visible =
                 ConvexHull::<f64, Option<()>, Option<()>, 3>::fallback_visibility_test(
                     test_facet, point,
                 )
                 .unwrap();
+            flags.push(is_visible);
             let coords: [f64; 3] = (*point).into();
             println!("  Point {coords:?} ({description}) - Visible: {is_visible}");
         }
-
-        let visible_count = test_points
-            .iter()
-            .filter(|(point, _)| {
-                ConvexHull::<f64, Option<()>, Option<()>, 3>::fallback_visibility_test(
-                    test_facet, point,
-                )
-                .unwrap()
-            })
-            .count();
+        let visible_count = flags.iter().filter(|&&v| v).count();
         let not_visible_count = test_points.len() - visible_count;
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86bd478 and 8d7e521.

📒 Files selected for processing (2)
  • src/core/boundary.rs (12 hunks)
  • src/geometry/algorithms/convex_hull.rs (12 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/boundary.rs
  • src/geometry/algorithms/convex_hull.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/geometry/algorithms/convex_hull.rs
🧬 Code graph analysis (2)
src/core/boundary.rs (5)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/collections.rs (2)
  • fast_hash_map_with_capacity (527-530)
  • fast_hash_map_with_capacity (629-629)
src/core/triangulation_data_structure.rs (3)
  • cells (500-502)
  • vertices (541-543)
  • new (1370-1402)
src/core/facet.rs (5)
  • cell (325-327)
  • vertices (437-444)
  • new (287-293)
  • facet_vertices (699-711)
  • key (455-473)
src/core/cell.rs (9)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • facets (1052-1057)
  • from (371-373)
  • vertices (474-476)
src/geometry/algorithms/convex_hull.rs (2)
src/core/traits/facet_cache.rs (5)
  • facet_cache (82-82)
  • facet_cache (220-222)
  • cached_generation (85-85)
  • cached_generation (224-226)
  • new (211-216)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: performance-regression
  • GitHub Check: Code Coverage
  • GitHub Check: Codacy Security Scan
  • GitHub Check: build (windows-latest)
🔇 Additional comments (24)
src/core/boundary.rs (16)

9-10: LGTM - Clean imports for UUID-to-key migration

The addition of CellKey import and derive_facet_key_from_vertices utility function aligns well with the PR's objective of migrating from UUIDs to SlotMap keys while maintaining API compatibility.


12-12: LGTM - Import for FastHashMap optimization

Good addition of the fast_hash_map_with_capacity utility for consistent hash map creation with appropriate capacity hints.


90-96: LGTM - Smart caching optimization for boundary facet collection

This per-call cache intelligently avoids repeated cell.facets() allocations when multiple boundary facets reference the same cell, which is a good performance optimization for the boundary facet collection process.


99-127: LGTM - Robust boundary facet collection with caching

The implementation properly handles the cache lookup and insertion logic with appropriate error handling for out-of-bounds facet indices and missing cell IDs. The debug assertions and conditional logging provide good diagnostic capabilities without impacting release performance.


136-141: LGTM - Clear performance documentation

Good documentation of the O(N·F) complexity and recommendation to use is_boundary_facet_with_map() for multiple facet checks. This helps users make informed performance decisions.


176-180: LGTM - Clean delegation to optimized method

The refactor properly delegates to the new map-based method while maintaining the original API. The inline annotation is appropriate for this thin wrapper.


182-248: LGTM - Well-designed cached boundary check method

The new is_boundary_facet_with_map method is well-implemented with:

  • Clear documentation and examples
  • Proper debug assertions for facet vertex count invariant
  • Clean error handling using derive_facet_key_from_vertices
  • Efficient single hash map lookup

This addresses the performance concerns mentioned in past reviews by providing a cached version for hot paths.
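The core predicate the method implements can be sketched with plain std types (std's `HashMap` and plain integers stand in for the crate's `FastHashMap` and key types): a facet is a boundary facet exactly when a single cell references its key.

```rust
use std::collections::HashMap;

// Toy sketch of the map-based boundary test: a facet key is a boundary
// facet exactly when exactly one cell references it.
fn is_boundary(facet_to_cells: &HashMap<u64, Vec<u32>>, key: u64) -> bool {
    facet_to_cells.get(&key).is_some_and(|cells| cells.len() == 1)
}

fn sample_map() -> HashMap<u64, Vec<u32>> {
    let mut map = HashMap::new();
    map.insert(10, vec![1]); // referenced by one cell: boundary
    map.insert(11, vec![1, 2]); // shared by two cells: interior
    map
}

fn main() {
    let map = sample_map();
    assert!(is_boundary(&map, 10));
    assert!(!is_boundary(&map, 11));
    assert!(!is_boundary(&map, 12)); // unknown facet key: not boundary
    println!("ok");
}
```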


289-291: LGTM - Updated test imports for new utilities

Good addition of imports for FastHashMap, fast_hash_map_with_capacity, and derive_facet_key_from_vertices to support the updated test implementations.


328-331: LGTM - Efficient aggregate boundary check

Good refactor from individual facet assertions to using all() for a more concise and potentially more efficient check.


370-376: LGTM - Consistent test pattern

The same efficient aggregate checking pattern applied consistently across test cases.


378-415: LGTM - Excellent test for cached vs regular method consistency

This test effectively validates that both is_boundary_facet and is_boundary_facet_with_map produce identical results, ensuring the optimization doesn't introduce behavioral changes.


417-499: LGTM - Comprehensive integration test for key derivation

This test thoroughly validates the derive_facet_key_from_vertices integration with excellent coverage of:

  • Successful key derivation from valid facets
  • Deterministic key generation (same facet → same key)
  • Consistency with build_facet_to_cells_hashmap
  • Round-trip verification via cell/facet mapping
  • Facet key equivalence checks

This provides strong confidence in the UUID-to-key migration correctness.


532-543: LGTM - Optimized 4D test with cached map

Good optimization to build the facet-to-cells map once and use the cached version for boundary checks, demonstrating the performance benefits of the new API.


620-633: LGTM - Enhanced verification with key-based mapping

The test now uses derive_facet_key_from_vertices for building the facet map, providing better integration testing of the new key-based approach while maintaining the same verification logic.


820-820: LGTM - Appropriate feature gating for performance tests

Good practice to gate performance-sensitive tests with the bench feature flag, making them available for performance validation while not slowing down regular test runs.

Also applies to: 823-823, 955-955, 1041-1041


1042-1120: LGTM - Comprehensive performance benchmark

Excellent benchmark that demonstrates the practical benefits of the optimization with:

  • Multiple point counts for scalability testing
  • Robust triangulation algorithm usage
  • Multiple runs for stable measurements
  • Clear performance metrics and complexity documentation
  • Proper prevention of compiler optimizations with black_box

This provides concrete evidence of the performance improvements achieved through the UUID-to-key migration.

src/geometry/algorithms/convex_hull.rs (8)

5-8: Good: cache/key-derivation integration imports are correct.

Brings in FacetCacheProvider and derive_facet_key_from_vertices as needed for the UUID→key path.


22-22: OK: ops traits import.

Div/Sub and their Assign variants are used in bounds; no unused import issues.


376-383: LGTM: cache lookup + facet-key derivation.

Using get_or_build_facet_cache() and derive_facet_key_from_vertices() aligns with the SlotMap-key flow and avoids recomputing maps.


687-694: LGTM: deterministic consistency check.

Using iter().all against the first result is concise and clear.


1095-1113: LGTM: FacetCacheProvider impl with reduced bounds.

Correctly exposes cache and generation; bounds match trait usage and keep the impl lean.


3530-3532: LGTM: fused multiply-add in test.

mul_add usage is fine here and improves numeric stability a bit.


3550-3628: LGTM: cache build/reuse/invalidation test.

Solid coverage: empty → build → reuse (Arc::ptr_eq) → gen bump → rebuild → gen sync.


3631-3700: LGTM: helper-methods integration test.

Key derivation matches cache contents; end-to-end visibility call validates the wiring.

Addresses potential numeric instability issues in the fallback visibility test within the convex hull algorithm. Replaces `from_f64` with `NumCast::from` for safer numeric conversions and adds an epsilon-based threshold adjustment. Also corrects the capacity calculation for the facet map.
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

♻️ Duplicate comments (1)
src/geometry/algorithms/convex_hull.rs (1)

1142-1142: Trait import in tests unblocks extension methods.

Brings get_or_build_facet_cache into scope; resolves the earlier compile issue.

🧹 Nitpick comments (11)
docs/numerical_robustness_guide.md (1)

518-521: Docs and code out of sync: update fallback threshold to the new scale-adaptive formula.

The snippet still shows a constant threshold cast via NumCast, but src/geometry/algorithms/convex_hull.rs now uses diameter_sq * (1 + epsilon). Please align the docs to prevent drift.

Suggested patch:

-    // Note: NumCast::from requires num_traits::NumCast
-    let threshold = NumCast::from(1.0f64).unwrap_or_else(T::one);
-    distance_squared > threshold
+    // Scale-adaptive threshold: diameter_sq * (1 + epsilon)
+    // epsilon guards against numeric noise near the hull surface.
+    let epsilon: T = NumCast::from(1e-12f64)
+        .or_else(|| NumCast::from(f64::EPSILON))
+        .unwrap_or_else(T::zero);
+    let threshold = max_edge_sq + max_edge_sq * epsilon;
+    distance_squared > threshold

Also ensure the example includes the max_edge_sq computation (facet diameter squared) as in the code.
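A minimal f64 sketch of the scale-adaptive rule described above (names are illustrative, not the crate's API): a point is classified as visible only when its squared distance exceeds the facet's squared diameter by more than an epsilon of numeric noise.

```rust
// Illustrative sketch of the scale-adaptive visibility threshold:
// threshold = diameter_sq * (1 + epsilon).
fn is_visible(distance_sq: f64, max_edge_sq: f64) -> bool {
    let epsilon = 1e-12;
    let threshold = max_edge_sq + max_edge_sq * epsilon;
    distance_sq > threshold
}

fn main() {
    assert!(is_visible(4.0, 1.0)); // clearly beyond the facet scale
    assert!(!is_visible(1.0, 1.0)); // exactly at the scale: not visible
    assert!(!is_visible(1.0 + 1e-15, 1.0)); // within epsilon noise: not visible
    println!("ok");
}
```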

src/geometry/algorithms/convex_hull.rs (4)

534-546: Minor numeric hygiene: prefer Zero::is_zero() and keep epsilon local.

Use is_zero() instead of == T::zero() to avoid surprising PartialEq constraints.

-        if max_edge_sq == T::zero() {
+        if max_edge_sq.is_zero() {
             // Degenerate facet geometry; treat as not visible.
             return Ok(false);
         }

Everything else (NumCast fallback and adjusted_threshold) looks good.


1095-1113: Trait impl bounds can be trimmed (optional).

FacetCacheProvider only needs what its default methods use. The extra DeserializeOwned bounds on U/V may be unnecessary here unless required by FacetToCellsMap. If not, consider dropping them to widen usability.


1576-1584: Test assertion is ineffective; make it actually check classification.

visible_count > 0 || not_visible_count > 0 is always true for non-empty input. Assert both classes appear to validate the thresholding:

-        assert!(
-            visible_count > 0 || not_visible_count > 0,
-            "Scale-adaptive threshold should produce some classification"
-        );
+        assert!(
+            visible_count > 0 && not_visible_count > 0,
+            "Should classify some points as visible and some as not visible"
+        );

Also applies to: 1680-1687, 1594-1599


3523-3524: Simplify numeric expression; avoid UFCS + From<_> turbofish.

Current form is over-engineered and brittle. Plain method syntax is simpler and stays lossless (avoiding clippy's cast lints):

-                let x = <f64 as From<_>>::from(i).mul_add(0.1, 2.0);
+                let x = f64::from(i).mul_add(0.1, 2.0);
src/core/boundary.rs (6)

99-106: Avoid let-chains here; simplify and reduce MSRV risk.
The if condition uses if cells.len() == 1 && let ... which requires newer Rust. It’s also redundant because len()==1 guarantees first() is Some(_). Index directly.

Please confirm the project MSRV is >= 1.88 (let_chains stabilized in Rust 1.88, edition 2024) if you prefer to keep && let. Otherwise apply:

-            if cells.len() == 1
-                && let Some((cell_id, facet_index)) = cells.first().copied()
-            {
+            if cells.len() == 1 {
+                let (cell_id, facet_index) = cells[0];

103-111: Use a single lookup for the per‑cell facets cache.
Current code does a get, then insert, then get+unwrap. Either use Entry API or pre‑insert then read once.

Apply this minimal change (works regardless of FastHashMap backend):

-                    // Check cache first before computing facets
-                    let facets = if let Some(cached_facets) = cell_facets_cache.get(&cell_id) {
-                        cached_facets
-                    } else {
-                        let computed_facets = cell.facets()?;
-                        cell_facets_cache.insert(cell_id, computed_facets);
-                        cell_facets_cache.get(&cell_id).unwrap() // Safe: just inserted
-                    };
+                    // Cache facets per cell to avoid repeated allocations
+                    if !cell_facets_cache.contains_key(&cell_id) {
+                        let computed_facets = cell.facets()?;
+                        cell_facets_cache.insert(cell_id, computed_facets);
+                    }
+                    let facets = cell_facets_cache
+                        .get(&cell_id)
+                        .expect("inserted above; cell_id must exist");

If FastHashMap supports the entry API, a tighter version is possible later; note, however, that `?` cannot propagate out of an `or_insert_with` closure, so fallible initialization needs a match on the entry instead.
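With toy types, a fallible-initialization entry pattern would look like the following; `?` inside an `or_insert_with` closure does not compile, so the match form is needed:

```rust
use std::collections::HashMap;

// Toy stand-in for cell.facets(): any fallible computation of a cell's facets.
fn facets_for(_cell_id: u32) -> Result<Vec<u8>, String> {
    Ok(vec![0, 1, 2])
}

// Fallible cached lookup: compute and insert on first access, reuse afterwards.
fn cached_facets(
    cache: &mut HashMap<u32, Vec<u8>>,
    cell_id: u32,
) -> Result<&Vec<u8>, String> {
    use std::collections::hash_map::Entry;
    Ok(match cache.entry(cell_id) {
        Entry::Occupied(e) => e.into_mut(),
        // `?` works here because we are not inside a closure.
        Entry::Vacant(e) => e.insert(facets_for(cell_id)?),
    })
}

fn main() {
    let mut cache = HashMap::new();
    let first = cached_facets(&mut cache, 7).unwrap().clone();
    let second = cached_facets(&mut cache, 7).unwrap().clone();
    assert_eq!(first, second); // second call hits the cache
    println!("ok");
}
```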


112-121: Tighten debug assertions.
Instead of debug_assert!(false, "..."), assert the condition you expect (more descriptive) or use unreachable!() in the impossible branch.

Example:

-                    } else {
-                        debug_assert!(false, "facet_index out of bounds");
+                    } else {
+                        debug_assert!(
+                            usize::from(facet_index) < facets.len(),
+                            "facet_index out of bounds"
+                        );

224-248: Avoid recomputing facet.vertices(); minor perf tidy-up.
facet.vertices() allocates; it’s called three times (assert len, pass to derive, format len). Store once.

Apply:

-        // Facet::vertices() should always return D vertices by construction
-        // Only check this invariant in debug builds to avoid branch in release
-        debug_assert_eq!(
-            facet.vertices().len(),
-            D,
-            "Invalid facet: expected {} vertices, got {}",
-            D,
-            facet.vertices().len()
-        );
-
-        match derive_facet_key_from_vertices(&facet.vertices(), self) {
+        // Facet::vertices() should always return D vertices by construction
+        let vertices = facet.vertices();
+        debug_assert_eq!(
+            vertices.len(),
+            D,
+            "Invalid facet: expected {} vertices, got {}",
+            D,
+            vertices.len()
+        );
+
+        match derive_facet_key_from_vertices(&vertices, self) {

224-224: Inline the hot-path helper.
This method is used in loops; add #[inline].

-    fn is_boundary_facet_with_map(
+    #[inline]
+    fn is_boundary_facet_with_map(

327-331: Tests: use the cached map to avoid N× map rebuilds.
These small tests are fine, but switching to the cached variant keeps examples aligned with perf guidance.

-        assert!(
-            boundary_facets.iter().all(|f| tds.is_boundary_facet(f)),
-            "All facets should be boundary facets in single triangle"
-        );
+        let facet_to_cells = tds.build_facet_to_cells_hashmap();
+        assert!(
+            boundary_facets
+                .iter()
+                .all(|f| tds.is_boundary_facet_with_map(f, &facet_to_cells)),
+            "All facets should be boundary facets in single triangle"
+        );
-        assert!(
-            boundary_facets.iter().all(|f| tds.is_boundary_facet(f)),
-            "All facets should be boundary facets in single tetrahedron"
-        );
+        let facet_to_cells = tds.build_facet_to_cells_hashmap();
+        assert!(
+            boundary_facets
+                .iter()
+                .all(|f| tds.is_boundary_facet_with_map(f, &facet_to_cells)),
+            "All facets should be boundary facets in single tetrahedron"
+        );

Also applies to: 371-376

📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8d7e521 and 203ed6d.

📒 Files selected for processing (3)
  • docs/numerical_robustness_guide.md (1 hunks)
  • src/core/boundary.rs (12 hunks)
  • src/geometry/algorithms/convex_hull.rs (12 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/core/boundary.rs
  • src/geometry/algorithms/convex_hull.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/geometry/algorithms/convex_hull.rs
🧬 Code graph analysis (2)
src/core/boundary.rs (5)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/collections.rs (2)
  • fast_hash_map_with_capacity (527-530)
  • fast_hash_map_with_capacity (629-629)
src/core/triangulation_data_structure.rs (3)
  • cells (500-502)
  • vertices (541-543)
  • new (1370-1402)
src/core/cell.rs (9)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • facets (1052-1057)
  • from (371-373)
  • vertices (474-476)
src/core/facet.rs (5)
  • cell (325-327)
  • vertices (437-444)
  • new (287-293)
  • facet_vertices (699-711)
  • key (455-473)
src/geometry/algorithms/convex_hull.rs (3)
src/core/traits/facet_cache.rs (5)
  • facet_cache (82-82)
  • facet_cache (220-222)
  • cached_generation (85-85)
  • cached_generation (224-226)
  • new (211-216)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/triangulation_data_structure.rs (2)
  • new (1370-1402)
  • vertices (541-543)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Codacy Static Code Analysis
  • GitHub Check: build (windows-latest)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: performance-regression
  • GitHub Check: Codacy Security Scan
  • GitHub Check: Code Coverage
🔇 Additional comments (6)
src/geometry/algorithms/convex_hull.rs (3)

459-466: Good: clarified fallback docs (scale-adaptive + epsilon bound).

The method contract and rationale are now explicit and consistent with intended behavior.

Also applies to: 474-476, 477-485


3542-3621: Cache build/reuse/invalidation tests are solid.

Covers empty->build, Arc reuse via ptr_eq, generation-driven rebuild, and gen sync. Nice coverage.


3623-3693: Good integration test between key derivation and cache.

Verifies every hull facet key is present in the cache and exercises visibility path.

src/core/boundary.rs (3)

9-13: Imports LGTM; align with the new key-derivation flow.
No issues. These bring in CellKey and derive_facet_key_from_vertices needed by the refactor.


176-180: Good delegation.
is_boundary_facet delegating to the map-based variant keeps hot paths focused on *_with_map.


821-822: Bench-only tests: confirm rand API compatibility (Cargo.toml shows rand = "0.9.2")
Cargo.toml contains rand = "0.9.2". Repository search skipped files, so I couldn't verify whether the bench-only tests in src/core/boundary.rs (lines 821–822, 956–957, 1042–1043) call rand::rng(). Note that rand 0.9 renamed thread_rng() to rand::rng() (and Rng::gen() to Rng::random()), so rand::rng() calls are already correct for this dependency; switch to thread_rng() only if the crate is ever pinned back to rand 0.8.

Addresses an issue where boundary facet detection could fail due to incorrect facet key derivation. This commit ensures that facet keys are derived from facet vertices, resolving inconsistencies and improving reliability. Adds debug assertions to help detect and diagnose issues. Caches facets per cell to avoid repeated allocation, improving performance.
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

🧹 Nitpick comments (6)
src/core/boundary.rs (5)

90-96: Consider pre-allocating cache with better size estimate.

The cache is pre-allocated with number_of_cells(), but since each cell can have up to D+1 facets, you might want to consider a smaller initial capacity to avoid over-allocation in memory-constrained environments.

-        let mut cell_facets_cache: crate::core::collections::FastHashMap<
-            CellKey,
-            Vec<Facet<T, U, V, D>>,
-        > = fast_hash_map_with_capacity(self.number_of_cells());
+        // Pre-allocate with a more conservative estimate - not all cells will have boundary facets
+        let estimated_cells_with_boundaries = self.number_of_cells() / 2;
+        let mut cell_facets_cache: crate::core::collections::FastHashMap<
+            CellKey,
+            Vec<Facet<T, U, V, D>>,
+        > = fast_hash_map_with_capacity(estimated_cells_with_boundaries);

103-122: Simplify cache access pattern and improve error diagnostics.

The cache access pattern can be simplified, and the debug assertions could provide more actionable information.

                if let Some(cell) = self.cells().get(cell_id) {
                    // Cache facets per cell to avoid repeated allocations
-                    cell_facets_cache
-                        .entry(cell_id)
-                        .or_insert_with(|| cell.facets().unwrap_or_default());
-                    let facets = &cell_facets_cache[&cell_id];
+                    let facets = cell_facets_cache
+                        .entry(cell_id)
+                        .or_insert_with(|| cell.facets().unwrap_or_default());

                    if let Some(f) = facets.get(usize::from(facet_index)) {
                        boundary_facets.push(f.clone());
                    } else {
                        debug_assert!(
-                            usize::from(facet_index) < facets.len(),
-                            "facet_index {} out of bounds for {} facets",
-                            facet_index,
-                            facets.len()
+                            false,
+                            "facet_index {} out of bounds for cell_id {:?} with {} facets",
+                            facet_index,
+                            cell_id,
+                            facets.len()
                        );
                        #[cfg(debug_assertions)]
                        eprintln!(
-                            "boundary_facets: facet_index {facet_index:?} out of bounds for cell_id {cell_id:?}"
+                            "boundary_facets: facet_index {} out of bounds for cell_id {:?} (has {} facets)",
+                            facet_index, cell_id, facets.len()
                        );
                    }

245-253: Consider logging errors in production builds.

The error handling silently returns false in release builds when derive_facet_key_from_vertices fails. Consider logging this error in production as it could indicate data corruption.

        match derive_facet_key_from_vertices(&vertices, self) {
            Ok(facet_key) => facet_to_cells
                .get(&facet_key)
                .is_some_and(|cells| cells.len() == 1),
            Err(e) => {
                debug_assert!(false, "derive_facet_key_from_vertices failed: {e:?}");
+                #[cfg(not(debug_assertions))]
+                tracing::warn!("derive_facet_key_from_vertices failed: {:?}", e);
                false
            }
        }

637-644: Optimize map initialization with better size estimate.

The facet map is initialized with capacity for all possible facets, but many won't be boundary facets.

        let dim_usize = usize::try_from(tds.dim().max(0)).unwrap_or(0);
-        let mut facet_map: FastHashMap<u64, Vec<_>> =
-            fast_hash_map_with_capacity(tds.number_of_cells() * (dim_usize + 1));
+        // Estimate: boundary facets are typically a small fraction of total facets
+        let estimated_boundary_facets = (tds.number_of_cells() * (dim_usize + 1)) / 4;
+        let mut facet_map: FastHashMap<u64, Vec<_>> =
+            fast_hash_map_with_capacity(estimated_boundary_facets);
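The capacity estimate above can be illustrated with a plain `HashMap`. The 1/4 fraction is the reviewer's heuristic, and `boundary_capacity` is a hypothetical helper, not a crate API.

```rust
use std::collections::HashMap;

// Hypothetical sizing helper: reserve for an estimated fraction of boundary
// facets rather than for every facet of every cell.
fn boundary_capacity(n_cells: usize, dim: usize) -> usize {
    // A D-dimensional cell has D + 1 facets; assume roughly a quarter of all
    // facets lie on the boundary.
    (n_cells * (dim + 1)) / 4
}

fn main() {
    let cap = boundary_capacity(1_000, 3); // 1000 tetrahedra in 3D
    assert_eq!(cap, 1_000);
    let map: HashMap<u64, Vec<usize>> = HashMap::with_capacity(cap);
    assert!(map.capacity() >= cap);
    println!("{cap}");
}
```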

836-839: Consider removing or converting to integration test.

This test is gated with #[cfg(feature = "bench")] but doesn't appear to be a performance-sensitive unit test. Consider moving it to benchmarks or integration tests.

Move this test to a proper benchmark module or integration test suite, as unit tests shouldn't be conditionally compiled based on benchmark features.

src/geometry/algorithms/convex_hull.rs (1)

3640-3677: Verify all hull facets are boundary facets.

The test assumes all hull facets should be present in the facet-to-cells cache as boundary facets. This is correct, but consider adding a comment explaining this invariant.

        // For each facet in the hull, derive its key and check it exists in cache
+        // All hull facets are by definition boundary facets and should be in the cache
        let mut keys_found = 0usize;
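The invariant the test relies on can be sketched as follows: every hull facet key appears in the facet-to-cells cache with exactly one incident cell. Plain `u64` keys and `usize` cell ids stand in for the crate's actual types.

```rust
use std::collections::{HashMap, HashSet};

// A facet is a boundary facet iff exactly one cell is incident to it; hull
// facets are boundary facets by definition.
fn all_hull_facets_are_boundary(
    hull_keys: &HashSet<u64>,
    facet_to_cells: &HashMap<u64, Vec<usize>>,
) -> bool {
    hull_keys
        .iter()
        .all(|k| facet_to_cells.get(k).is_some_and(|cells| cells.len() == 1))
}

fn main() {
    let hull: HashSet<u64> = [1, 2].into();
    let mut cache: HashMap<u64, Vec<usize>> = HashMap::new();
    cache.insert(1, vec![0]); // boundary: one incident cell
    cache.insert(2, vec![1]); // boundary: one incident cell
    cache.insert(3, vec![0, 1]); // interior: shared by two cells
    assert!(all_hull_facets_are_boundary(&hull, &cache));
    println!("ok");
}
```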
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 203ed6d and c21d250.

📒 Files selected for processing (2)
  • src/core/boundary.rs (13 hunks)
  • src/geometry/algorithms/convex_hull.rs (13 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.

Applied to files:

  • src/geometry/algorithms/convex_hull.rs
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.

Applied to files:

  • src/geometry/algorithms/convex_hull.rs
  • src/core/boundary.rs
🧬 Code graph analysis (2)
src/geometry/algorithms/convex_hull.rs (2)
src/core/traits/facet_cache.rs (5)
  • facet_cache (82-82)
  • facet_cache (220-222)
  • cached_generation (85-85)
  • cached_generation (224-226)
  • new (211-216)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/boundary.rs (6)
src/core/util.rs (1)
  • derive_facet_key_from_vertices (663-696)
src/core/collections.rs (2)
  • fast_hash_map_with_capacity (527-530)
  • fast_hash_map_with_capacity (629-629)
src/core/triangulation_data_structure.rs (3)
  • cells (500-502)
  • vertices (541-543)
  • new (1370-1402)
src/core/cell.rs (9)
  • cell (2209-2209)
  • cell (2338-2338)
  • cell (2340-2340)
  • cell (2344-2344)
  • cell (2543-2543)
  • cell (2556-2556)
  • facets (1052-1057)
  • from (371-373)
  • vertices (474-476)
src/core/facet.rs (5)
  • cell (325-327)
  • vertices (437-444)
  • new (287-293)
  • facet_vertices (699-711)
  • key (455-473)
src/core/traits/boundary_analysis.rs (2)
  • is_boundary_facet (130-130)
  • is_boundary_facet_with_map (178-182)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: build (ubuntu-latest)
  • GitHub Check: build (windows-latest)
  • GitHub Check: Codacy Security Scan
  • GitHub Check: performance-regression
  • GitHub Check: Code Coverage
🔇 Additional comments (7)
src/core/boundary.rs (1)

1576-1588: Improved test ergonomics and clarity.

The refactored test code is more idiomatic and clearer, using iterator patterns and structured data collection to gather flags and count visible versus non-visible points.
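The iterator-counting pattern described above can be sketched minimally; the flag data here is made up for illustration.

```rust
// Collect visibility flags once, then derive both counts from them.
fn main() {
    let flags: Vec<bool> = vec![true, false, true, true];
    let visible = flags.iter().filter(|&&v| v).count();
    let not_visible = flags.len() - visible;
    assert_eq!((visible, not_visible), (3, 1));
    println!("{visible} {not_visible}");
}
```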

src/geometry/algorithms/convex_hull.rs (6)

376-382: Good migration to key derivation utility!

The switch from direct facet.key() to derive_facet_key_from_vertices with proper error propagation via ? is a clean improvement that centralizes the key derivation logic.


534-543: Robust epsilon handling for numerical stability.

The epsilon-based threshold adjustment with proper fallback handling for degenerate cases is well-implemented and properly documented.


1095-1113: Clean trait implementation with minimal constraints.

The FacetCacheProvider trait implementation has appropriately reduced the constraint set, removing unnecessary bounds like ComplexField, From<f64>, and OrderedFloat. This makes the trait more widely applicable.


3523-3524: Good use of safe numeric conversion.

Using NumCast::from with unwrap_or provides safe fallback for the numeric conversion.


3556-3621: Comprehensive cache testing coverage.

Excellent test coverage for the caching mechanism, including initialization, reuse, and invalidation scenarios. The assertions properly verify both the content and the Arc identity for cache reuse.


3680-3686: Good integration test for visibility with helper methods.

The test properly verifies that the helper methods work together correctly in the context of visibility testing.


Labels

documentation: Improvements or additions to documentation
rust: Pull requests that update rust code
