Changed: Optimizes internal collections with key-based access (#83)

acgetchell merged 15 commits into `main`.
Conversation
Migrates internal collections to use direct key-based access instead of UUID lookups. This change enhances performance by eliminating UUID-to-key mapping overhead in internal algorithms, improving memory efficiency and cache locality. (Internal change)
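The performance idea behind the change can be sketched with std types only. This is a hypothetical stand-in (a `Vec` index plays the SlotMap key, a `String` plays the UUID, and `Tds`, `cell_by_uuid`, and `cell_by_key` are invented names), not the crate's actual API:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins: a Vec index plays the role of a SlotMap CellKey,
// and a String plays the role of the UUID stored on each cell.
type CellKey = usize;

struct Tds {
    cells: Vec<&'static str>,              // key -> cell payload
    uuid_to_key: HashMap<String, CellKey>, // the extra map internal code bypasses
}

impl Tds {
    // Old internal path: hash the UUID, then index (two steps per access).
    fn cell_by_uuid(&self, uuid: &str) -> Option<&'static str> {
        let key = self.uuid_to_key.get(uuid)?;
        self.cells.get(*key).copied()
    }
    // New internal path: algorithms carry CellKey and index directly.
    fn cell_by_key(&self, key: CellKey) -> Option<&'static str> {
        self.cells.get(key).copied()
    }
}

// Both paths resolve the same cell; the key path skips the hash lookup.
fn paths_agree() -> bool {
    let tds = Tds {
        cells: vec!["cell-a", "cell-b"],
        uuid_to_key: HashMap::from([("u-a".to_string(), 0), ("u-b".to_string(), 1)]),
    };
    tds.cell_by_uuid("u-b") == tds.cell_by_key(1)
}

fn main() {
    assert!(paths_agree());
}
```

Cache locality improves for the same reason: hot loops touch only the slot storage, never the UUID map's buckets.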
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough: Adds Phase-1 key-based collection aliases and SlotMap keys; makes bad-cell detection fallible with robust fallback and merge logic; converts the per-call facet cache to a key-based map that propagates facet errors.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Algo as InsertionAlgorithm
    participant Fast as find_bad_cells (fast)
    participant Robust as robust_find_bad_cells
    participant Pred as robust predicates
    Algo->>Fast: find_bad_cells(tds, vertex) -> Result<Vec<CellKey>, BadCellsError>
    alt Ok(cells)
        Fast-->>Algo: Ok(cells)
    else Err(AllCellsBad | TooManyDegenerateCells)
        Fast-->>Algo: Err(...)
        Algo->>Robust: robust_find_bad_cells(tds, vertex) -> Vec<CellKey>
        Robust-->>Algo: Vec<CellKey>
    else Err(NoCells)
        Fast-->>Algo: Err(NoCells)
        Algo-->>Algo: treat as empty Vec or bail
    end
    rect rgba(230,245,255,0.6)
        note right of Algo: if bad_cells empty OR predicate tolerance > default, compute robust predicates
        Algo->>Pred: compute robust predicates -> Vec<CellKey>
        Pred-->>Algo: robust_keys
        Algo->>Algo: merge robust_keys into bad_cells using CellKeySet (dedupe)
    end
```

```mermaid
sequenceDiagram
    autonumber
    participant BF as boundary_facets
    participant Cache as KeyBasedCellMap
    participant TDS as TriangulationDataStructure
    participant Caller
    BF->>Cache: lookup(cell_key)
    alt Occupied
        Cache-->>BF: cached Vec<Facet>
    else Vacant
        BF->>TDS: cell.facets() (may Err -> FacetError)
        alt Ok(facets)
            TDS-->>BF: Vec<Facet>
            BF->>Cache: insert(cell_key, facets)
        else Err(FacetError)
            TDS-->>BF: Err(FacetError)
            BF-->>Caller: propagate FacetError
        end
    end
```
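The fallback-and-merge flow in the first diagram can be sketched in isolation. Everything below is an invented stand-in (a plain `usize` plays `CellKey`, the predicates are toy conditions, and the error variants only mirror the diagram's labels), not the crate's actual API:

```rust
use std::collections::HashSet;

type CellKey = usize;

#[derive(Debug)]
enum BadCellsError {
    NoCells,
    AllCellsBad,
    TooManyDegenerateCells,
}

// Fast path: may fail in ways that trigger the robust fallback.
fn find_bad_cells(degenerate: usize, total: usize) -> Result<Vec<CellKey>, BadCellsError> {
    if total == 0 {
        Err(BadCellsError::NoCells)
    } else if degenerate == total {
        Err(BadCellsError::AllCellsBad)
    } else if degenerate > total / 2 {
        Err(BadCellsError::TooManyDegenerateCells)
    } else {
        Ok((0..degenerate).collect())
    }
}

// Robust fallback: always produces a (possibly overlapping) candidate list.
fn robust_find_bad_cells(total: usize) -> Vec<CellKey> {
    (0..total).filter(|k| k % 2 == 0).collect()
}

// Merge as in the diagram: dedupe through a key set, keeping order stable.
fn merge_bad_cells(mut bad: Vec<CellKey>, robust: Vec<CellKey>) -> Vec<CellKey> {
    let mut seen: HashSet<CellKey> = bad.iter().copied().collect();
    for k in robust {
        if seen.insert(k) {
            bad.push(k);
        }
    }
    bad
}

fn resolve_bad_cells(degenerate: usize, total: usize) -> Vec<CellKey> {
    match find_bad_cells(degenerate, total) {
        Ok(cells) => cells,
        Err(BadCellsError::NoCells) => Vec::new(), // treat as empty
        Err(_) => merge_bad_cells(Vec::new(), robust_find_bad_cells(total)),
    }
}

fn main() {
    assert_eq!(resolve_bad_cells(1, 4), vec![0]); // fast path succeeds
    assert_eq!(resolve_bad_cells(0, 0), Vec::<usize>::new()); // NoCells -> empty
    assert_eq!(resolve_bad_cells(4, 4), vec![0, 2]); // fallback to robust
    assert_eq!(merge_bad_cells(vec![0, 2], vec![2, 3]), vec![0, 2, 3]); // dedupe
}
```

The set-based merge is what keeps the fast-path results and the robust results free of duplicates without a quadratic scan.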
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes.
Pre-merge checks and finishing touches: ✅ 3 checks passed.

Tip: 👮 Agentic pre-merge checks are now available in preview. Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.
Please see the documentation for more information. Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment
          variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be
          documented in the "Breaking Change" section of the PR description and in
          CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported
          from package entry points or explicitly marked as internal).
```
Pull Request Overview
This PR optimizes internal collections by migrating from UUID-based operations to direct key-based access patterns. The change enhances performance by eliminating UUID-to-key mapping overhead in internal algorithms, improving memory efficiency and cache locality through direct SlotMap key operations.
- Introduces new key-based collection types (`CellKeySet`, `VertexKeySet`, `KeyBasedCellMap`, etc.) for internal operations
- Migrates critical algorithms to use direct key-based methods instead of UUID lookups
- Optimizes validation logic to check key-to-UUID direction first (direct SlotMap access) before UUID-to-key verification
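The validation-direction optimization described above can be sketched with invented stand-ins (a slice of `(key, uuid)` pairs models iterating the SlotMap, which costs no hashing; `uuid_to_key` models the auxiliary reverse map):

```rust
use std::collections::HashMap;

type Key = usize;
type Uuid = &'static str;

fn validate_mappings(records: &[(Key, Uuid)], uuid_to_key: &HashMap<Uuid, Key>) -> bool {
    for &(key, uuid) in records {
        // The key -> UUID direction is already in hand from the iteration itself;
        // only then pay for the UUID -> key hash lookup to confirm agreement.
        match uuid_to_key.get(uuid) {
            Some(&mapped) if mapped == key => {}
            _ => return false,
        }
    }
    // Equal sizes rule out dangling reverse entries.
    records.len() == uuid_to_key.len()
}

fn demo() -> (bool, bool) {
    let records = [(0, "u-a"), (1, "u-b")];
    let good: HashMap<Uuid, Key> = [("u-a", 0), ("u-b", 1)].into_iter().collect();
    // Stale reverse map: "u-b" points at the wrong key.
    let stale: HashMap<Uuid, Key> = [("u-a", 0), ("u-b", 0)].into_iter().collect();
    (
        validate_mappings(&records, &good),
        validate_mappings(&records, &stale),
    )
}

fn main() {
    assert_eq!(demo(), (true, false));
}
```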
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| src/core/collections.rs | Adds Phase 1 key-based collection types and comprehensive documentation |
| src/core/triangulation_data_structure.rs | Implements direct key-based methods and migrates algorithms to eliminate UUID lookups |
| src/core/traits/insertion_algorithm.rs | Updates collection types from FastHashSet to CellKeySet |
| src/core/boundary.rs | Migrates cell facets cache to use KeyBasedCellMap |
| src/core/algorithms/robust_bowyer_watson.rs | Updates bad cell collection to use CellKeySet |
The flagged code:

```rust
super::facet::FacetError::VertexNotFound {
    // For error compatibility, we need a UUID - get it from the cell if possible
    // In practice this error case should be rare since we're working with valid keys
    uuid: uuid::Uuid::new_v4(), // Placeholder UUID for missing cell case
```

Using `Uuid::new_v4()` as a placeholder creates a random UUID that doesn't correspond to any actual cell, which could be misleading in error messages. Consider using a sentinel value like `Uuid::nil()` or a more descriptive approach to indicate this is a missing cell error.

```diff
-uuid: uuid::Uuid::new_v4(), // Placeholder UUID for missing cell case
+uuid: uuid::Uuid::nil(), // Sentinel UUID for missing cell case
```
The flagged hunk:

```diff
 let (first_cell_key, first_facet_index) = cell_facet_pairs[0];
-if let Some(first_cell) = self.cells.get(first_cell_key) {
-    let Ok(vertex_keys) = self.vertex_keys_for_cell(first_cell) else {
+if let Some(_first_cell) = self.cells.get(first_cell_key) {
```

The variable `_first_cell` is retrieved but not used. Since you're now using the key-based method, this lookup is unnecessary and should be removed to eliminate the redundant SlotMap access.

```diff
-if let Some(_first_cell) = self.cells.get(first_cell_key) {
+if self.cells.contains_key(first_cell_key) {
```
The flagged hunk:

```diff
 for &(cell_key, _facet_index) in &cell_facet_pairs {
-    if let Some(cell) = self.cells.get(cell_key) {
-        let Ok(cell_vertex_keys_vec) = self.vertex_keys_for_cell(cell) else {
+    if let Some(_cell) = self.cells.get(cell_key) {
```

The variable `_cell` is retrieved but not used. Since you're now using the key-based method, this lookup is unnecessary and should be removed to eliminate the redundant SlotMap access.

```diff
-if let Some(_cell) = self.cells.get(cell_key) {
+if self.cells.contains_key(cell_key) {
```
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/traits/insertion_algorithm.rs (1)
681-683: Bug: destructuring from Vec indexing takes a reference; needs a pattern match on `&tuple`. `bad_cells_sharing[0]` yields `&(CellKey, FacetIndex)`, so the current `let (bad_cell_key, facet_index) = bad_cells_sharing[0];` won't compile. Use `first()` with `&` destructuring (which also avoids a potential panic). Apply this diff:

```diff
-let (bad_cell_key, facet_index) = bad_cells_sharing[0];
-boundary_facet_specs.push((bad_cell_key, facet_index));
+if let Some(&(bad_cell_key, facet_index)) = bad_cells_sharing.first() {
+    boundary_facet_specs.push((bad_cell_key, facet_index));
+}
```
🧹 Nitpick comments (4)
src/core/collections.rs (2)
175-193: Minor doc nit: avoid “O(0)” phrasing and bench-like claims without context
- Replace “O(0) hash operations” with “avoids extra UUID→Key lookups” or similar.
- “2–3x faster” depends on workload; consider noting “in typical non-adversarial workloads” or link to FxHash benchmarks.
Also applies to: 730-747
846-863: Tests compile-check Phase 1 aliases. Lightweight presence tests are fine here. Consider adding a simple insert/contains roundtrip later.
src/core/triangulation_data_structure.rs (2)
995-1038: Well-implemented Phase 1 direct key-based method. The new `vertex_keys_for_cell_direct` method successfully eliminates UUID→Key lookups in hot paths by working directly with `CellKey`. This is a significant performance improvement for operations that iterate over many cells. The error handling with a placeholder UUID for missing cells is pragmatic given the rarity of this error case. Consider a future Phase 2 optimization where `Cell` could store `Vec<VertexKey>` directly instead of `Vec<Vertex>` to eliminate the remaining UUID lookups in lines 1029-1036.
2539-2548: Fix unused variable warning. The static analysis correctly identified an unused variable `cell` at line 2539. Since you only need the cell key, use `_` to indicate the value is intentionally unused.

```diff
-for (cell_key, cell) in &self.cells {
+for (cell_key, _cell) in &self.cells {
```
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- src/core/algorithms/robust_bowyer_watson.rs (2 hunks)
- src/core/boundary.rs (2 hunks)
- src/core/collections.rs (5 hunks)
- src/core/traits/insertion_algorithm.rs (2 hunks)
- src/core/triangulation_data_structure.rs (12 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/core/boundary.rs (2)
- src/core/collections.rs (2): `fast_hash_map_with_capacity` (714-717, 826)
- src/lib.rs (1): `fast_hash_map_with_capacity` (312)

src/core/triangulation_data_structure.rs (3)
- src/core/collections.rs (2): `fast_hash_map_with_capacity` (714-717, 826)
- src/core/cell.rs (7): `cell` (2209, 2338, 2340, 2344, 2543, 2556), `uuid` (501-503)
- src/core/vertex.rs (1): `uuid` (458-460)
🪛 GitHub Check: build (ubuntu-latest)
src/core/triangulation_data_structure.rs
[failure] 2539-2539:
unused variable: cell
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: build (windows-latest)
- GitHub Check: Code Coverage
- GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (21)
src/core/collections.rs (4)
- 80-94: Docs for Phase 1 key-based usage look good. Clear examples and rationale. No functional concerns.
- 514-579: CellKeySet/VertexKeySet aliases are a solid abstraction checkpoint. Good balance of perf and readability; keeps SlotMap keys front-and-center in hot paths.
- 580-643: KeyBasedCellMap/VertexMap are appropriate; generic V keeps them broadly useful. LGTM, no issues.
- 644-675: KeyBasedNeighborMap aligns with removing UUIDs from inner loops. Straightforward and consistent with the migration plan.

src/core/traits/insertion_algorithm.rs (2)
- 7: Import swap to CellKeySet is correct for key-first internals. Keeps algorithm code decoupled from the concrete HashSet type.
- 650: Using CellKeySet for bad cell membership tests is the right move. This avoids UUID translations in tight loops.

src/core/algorithms/robust_bowyer_watson.rs (2)
- 8: Importing CellKeySet is consistent with Phase 1 migration. Keeps local sets keyed directly by SlotMap keys.
- 523: Switch to CellKeySet for bad_cell_set is correct. Maintains consistent semantics with the trait implementation change.

src/core/boundary.rs (1)
- 9-12: Imports updated for the key-based cache map are correct. Removal of the CellKey import and addition of KeyBasedCellMap + helper is aligned with the new aliases.
src/core/triangulation_data_structure.rs (12)
- 173-177: LGTM! Successful Phase 1 key-based type migration. The import changes correctly reflect the migration to key-based collection types (`CellKeySet`, `VertexKeySet`) as part of the Phase 1 optimization. This eliminates UUID→Key hash lookups in hot paths by working directly with SlotMap keys.
- 2192-2200: Excellent hot-path optimization using direct key access. Great use of the new `vertex_keys_for_cell_direct` method in `build_facet_to_cells_hashmap`. This is exactly the type of hot-path optimization that benefits from the Phase 1 migration: avoiding UUID→Key lookups when iterating over all cells to build facet mappings.
- 2266-2267: Consistent use of CellKeySet for membership tracking. Good migration from the generic `FastHashSet` to the specialized `CellKeySet` type alias. This improves code clarity and maintains consistency with the Phase 1 migration pattern.
- 2273-2280: Consistent hot-path optimization pattern. Good application of `vertex_keys_for_cell_direct` in the facet sharing fix logic. The pattern of using direct key-based access in loops over cells is consistently applied throughout the codebase.
- 2291-2305: Proper type migration for vertex key collections. Good use of `VertexKeySet` for both cell vertex keys and facet vertex keys. The explicit type annotations improve code clarity and maintain consistency with the Phase 1 migration.
- 2425-2447: Smart optimization in validation: key-to-UUID check first. Excellent optimization strategy in `validate_vertex_mappings`. By checking the key-to-UUID direction first (direct SlotMap access with no hash lookup), you avoid the more expensive UUID-to-key hash lookup when possible. The comment clearly explains the optimization rationale.
- 2502-2524: Consistent validation optimization pattern. The same smart optimization pattern is correctly applied to `validate_cell_mappings`. This consistency in optimization approach across similar validation methods is excellent.
- 2532-2535: Clear documentation of Phase 1 migration benefits. Good documentation update explaining that this method now uses the optimized `vertex_keys_for_cell_direct` to eliminate UUID→Key hash lookups.
- 2541-2547: Consistent hot-path optimization in duplicate validation. Good use of `vertex_keys_for_cell_direct` in the duplicate cell validation logic. This maintains the pattern of avoiding UUID→Key lookups in validation hot paths.
- 2729-2731: Consistent use of VertexKeySet in neighbor validation. Good migration to `VertexKeySet` for vertex containment checks in the neighbor validation logic. The type consistency improves code clarity.
- 2777-2781: Proper use of VertexKeySet for set operations. Good use of `VertexKeySet` for computing expected shared vertices. The specialized collection type makes the set operations more efficient and the code more readable.
- 2498-2507: Excellent consistency in migration patterns. The Phase 1 migration has been consistently applied throughout the codebase with:
  - direct key-based methods (`vertex_keys_for_cell_direct`) eliminating UUID→Key lookups;
  - specialized collection types (`CellKeySet`, `VertexKeySet`) for better type safety;
  - smart validation optimizations checking cheaper operations first;
  - clear documentation of performance benefits.

  This systematic approach to the migration ensures maintainability and performance improvements across all hot paths.
Fixes a potential issue where facet errors were not propagated during boundary facet retrieval, ensuring errors are correctly handled. Improves performance by avoiding redundant UUID→Key lookups in internal data structures, especially within performance-critical algorithms related to cell and vertex management. This change optimizes data access patterns for `SlotMap` integration and enhances memory efficiency.
Coverage summary from Codacy. See diff coverage on Codacy.

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch. Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified.
Codecov Report ❌. Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main      #83      +/-   ##
==========================================
- Coverage   69.26%   67.96%   -1.31%
==========================================
  Files          18       18
  Lines        2782     2978     +196
==========================================
+ Hits         1927     2024      +97
- Misses        855      954      +99
```

Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
♻️ Duplicate comments (4)
src/core/boundary.rs (1)
93-109: Fixed: facet cache now propagates FacetError (no more `unwrap_or_default`). Matching on `Entry` and using `cell.facets()?` correctly preserves the `Result` contract. Thanks for addressing the earlier concern.
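The cache-or-compute pattern endorsed above can be sketched with std types. `FacetError` and `facets_for` are invented stand-ins for the fallible `cell.facets()` call; the point is that the vacant arm uses `?`, so a failure reaches the caller instead of a default being cached:

```rust
use std::collections::HashMap;
use std::collections::hash_map::Entry;

// Invented stand-in for the crate's FacetError.
#[derive(Debug, PartialEq)]
struct FacetError(&'static str);

// Invented stand-in for cell.facets(): fails for one sentinel key.
fn facets_for(cell_key: usize) -> Result<Vec<usize>, FacetError> {
    if cell_key == 99 {
        Err(FacetError("missing cell"))
    } else {
        Ok(vec![cell_key, cell_key + 1])
    }
}

// Vacant entries compute once; `?` propagates the failure to the caller
// instead of caching a default value (the bug the fix removed).
fn cached_facets(
    cache: &mut HashMap<usize, Vec<usize>>,
    cell_key: usize,
) -> Result<&Vec<usize>, FacetError> {
    let facets = match cache.entry(cell_key) {
        Entry::Occupied(e) => e.into_mut(),
        Entry::Vacant(v) => v.insert(facets_for(cell_key)?),
    };
    Ok(facets)
}

fn demo() -> bool {
    let mut cache: HashMap<usize, Vec<usize>> = HashMap::new();
    let first = cached_facets(&mut cache, 1).map(Vec::clone);
    let second = cached_facets(&mut cache, 1).map(Vec::clone); // cache hit
    let failed = cached_facets(&mut cache, 99).is_err();       // error surfaced
    first.is_ok() && first == second && failed && !cache.contains_key(&99)
}

fn main() {
    assert!(demo());
}
```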
src/core/triangulation_data_structure.rs (3)
995-1039: Use a stable sentinel UUID and a more precise error for a missing cell.
- Replace the random placeholder UUID with `Uuid::nil()` to avoid misleading, nondeterministic errors.
- Consider introducing a `FacetError::CellNotFound` (or map this case to `TriangulationValidationError` at call sites) instead of reusing `VertexNotFound` for a missing cell.

Apply minimal fix:

```diff
-super::facet::FacetError::VertexNotFound {
-    // For error compatibility, we need a UUID - get it from the cell if possible
-    // In practice this error case should be rare since we're working with valid keys
-    uuid: uuid::Uuid::new_v4(), // Placeholder UUID for missing cell case
-}
+super::facet::FacetError::VertexNotFound {
+    // Use a deterministic sentinel to avoid misleading error messages
+    uuid: uuid::Uuid::nil(),
+}
```
2272-2275: Remove the redundant SlotMap fetch; use `contains_key`. Avoids an unnecessary read and an unused binding.

```diff
-if let Some(_first_cell) = self.cells.get(first_cell_key) {
+if self.cells.contains_key(first_cell_key) {
```

2291-2306: Same here: drop the unused `_cell` binding. Use `contains_key` before resolving vertex keys.

```diff
-for &(cell_key, _facet_index) in &cell_facet_pairs {
-    if let Some(_cell) = self.cells.get(cell_key) {
+for &(cell_key, _facet_index) in &cell_facet_pairs {
+    if self.cells.contains_key(cell_key) {
```
🧹 Nitpick comments (3)
src/core/boundary.rs (1)

98-101: Idiomatic single-element match for clarity. Use a slice pattern to enforce "exactly one" without a separate `len()` check.

```diff
-if cells.len() == 1
-    && let Some((cell_id, facet_index)) = cells.first().copied()
-{
+if let [(cell_id, facet_index)] = cells.as_slice() {
```

src/core/collections.rs (1)

902-907: Minor: avoid referencing an integer literal for an `Option<&i32>` comparison. Use `copied()` to compare by value; a bit clearer.

```diff
-assert_eq!(vertex_map.get(&vertex_key1), Some(&42));
+assert_eq!(vertex_map.get(&vertex_key1).copied(), Some(42));
```

src/core/triangulation_data_structure.rs (1)
4499-4508: Optional: avoid tests depending on private helpers. These tests call private methods; prefer exercising the public API to reduce coupling.
Would you like a follow-up that rewrites these to use vertices()/cell UUIDs only?
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/core/boundary.rs (2 hunks)
- src/core/collections.rs (9 hunks)
- src/core/traits/insertion_algorithm.rs (3 hunks)
- src/core/triangulation_data_structure.rs (12 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/core/traits/insertion_algorithm.rs
🧰 Additional context used
🧬 Code graph analysis (2)
src/core/boundary.rs (3)
- src/core/triangulation_data_structure.rs (1): `cells` (500-502)
- src/core/facet.rs (1): `cell` (325-327)
- src/core/cell.rs (7): `cell` (2209, 2338, 2340, 2344, 2543, 2556), `facets` (1052-1057)

src/core/triangulation_data_structure.rs (2)
- src/core/cell.rs (7): `cell` (2209, 2338, 2340, 2344, 2543, 2556), `uuid` (501-503)
- src/core/vertex.rs (1): `uuid` (458-460)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Codacy Security Scan
- GitHub Check: performance-regression
- GitHub Check: build (windows-latest)
- GitHub Check: Code Coverage
🔇 Additional comments (19)

src/core/boundary.rs (1)
- 9 (also 12, 16): Scoped imports look good. Narrowing to Tds and introducing KeyBasedCellMap + Entry is correct and consistent with Phase 1 key-based access.

src/core/collections.rs (7)
- 80-94: Docs: Phase 1 key-based usage examples are clear. Good guidance on internal key-based collections and imports.
- 136-143: Docs: Performance notes are accurate and scoped. Clear non-cryptographic caveat; good to call out the typical 2-3x speedup context.
- 160-193: Docs: FastHashSet guidance (external vs internal) is well-balanced. Examples show both UUID and key-based usage; matches the migration plan.
- 452-459 (also 488-490): Docs: UUID→Key map sections read well. Speedup language and reverse-lookup notes are precise.
- 514-675: New key-based aliases (CellKeySet, VertexKeySet, KeyBased*Map): LGTM. Names are intuitive; docs spell out benefits and use cases. This aligns with the rest of the PR.
- 728-747: Docs: Capacity helper examples for key types are helpful. Clear comparison vs UUID usage; consistent with the migration narrative.
- 846-864: Tests: Presence/instantiation checks for Phase 1 types look good. Covers basic compile-time guarantees without overreach.

src/core/triangulation_data_structure.rs (11)
- 174-177: Imports aligned with key-based collections. The switch to CellKeySet/VertexKeySet looks good.
- 2192-2194: Good: direct key path in facet mapping. Eliminates UUID→Key lookups in a hot path.
- 2267: LGTM: CellKeySet adoption. Consistent with the migration.
- 2302-2306: LGTM: set-based containment via VertexKeySet. Efficient subset checks for facet validation.
- 2425-2444: Validation order optimization makes sense. Key→UUID check first reduces hash lookups.
- 2502-2523: Cell mapping validation optimization looks good. Consistent with the vertex mapping approach.
- 2533-2535: Doc note matches implementation. Clear Phase 1 rationale.
- 2539-2547: Direct key path with deterministic debug fallback is fine. The nil UUID fallback in debug output is appropriate.
- 2730-2731: Precomputed VertexKeySet is a good call. Improves neighbor validation without extra allocs.
- 2779-2782: LGTM: expected facet vertices via set ops. Clear and performant.
- 4536-4539: LGTM: set intersection for shared vertices. Concise and correct.
Addresses potential errors during boundary facet retrieval by hardening checks for cell and facet existence. This prevents out-of-bounds access and ensures that missing cells or vertices do not cause unexpected failures during triangulation processing. Also, simplifies logic for single-cell facets. Refs: None
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/core/triangulation_data_structure.rs (2)
2538-2549: The duplicate-cell validator should not skip cells with unresolved keys. Continuing on key resolution failure can falsely pass validation. Propagate an error instead. Apply:

```diff
-for (cell_key, _cell) in &self.cells {
-    // Phase 1: Use direct key-based method to avoid UUID→Key lookups
-    let Ok(vertex_keys) = self.vertex_keys_for_cell_direct(cell_key) else {
-        #[cfg(debug_assertions)]
-        eprintln!(
-            "debug: skipping cell {:?} due to missing vertex keys in duplicate validation",
-            self.cell_uuid_from_key(cell_key)
-                .unwrap_or_else(uuid::Uuid::nil)
-        );
-        continue; // Skip cells with missing vertex keys
-    };
+for (cell_key, _cell) in &self.cells {
+    let vertex_keys = self.vertex_keys_for_cell_direct(cell_key).map_err(|_| {
+        TriangulationValidationError::InconsistentDataStructure {
+            message: format!(
+                "Missing vertex keys while validating duplicates for cell {:?}",
+                self.cell_uuid_from_key(cell_key).unwrap_or_else(uuid::Uuid::nil)
+            ),
+        }
+    })?;
```
2191-2199: Fail fast — return an error instead of silently skipping cells in build_facet_to_cells_hashmapbuild_facet_to_cells_hashmap currently continues when vertex_keys_for_cell_direct(cell_id) fails (src/core/triangulation_data_structure.rs, ~lines 2182–2199), which can undercount facet owners and mask invalid topologies used by validate_facet_sharing() and downstream algorithms.
- Add a strict API: pub fn try_build_facet_to_cells_hashmap(&self) -> Result<FacetToCellsMap, TriangulationValidationError> and have validators/robust algorithms call it.
- Preserve a non-breaking convenience method only if it explicitly documents/handles error cases (do not silently swallow errors).
- Audit and fix other early-continue sites using vertex_keys_for_cell_direct in the same file (src/core/triangulation_data_structure.rs at ~2273, 2293, 2540) so correctness-sensitive call sites propagate errors.
♻️ Duplicate comments (4)
src/core/boundary.rs (1)
93-107: Great fix: the cache now propagates FacetError instead of swallowing it. The Entry-based cache correctly inserts `cell.facets()?`, preserving the function's `Result` contract. This resolves the earlier `unwrap_or_default` issue.

src/core/collections.rs (1)

894-901: Test fix confirmed: avoid referencing a temporary String. Using `.map(String::as_str)` removes the temporary borrow issue flagged earlier.
src/core/triangulation_data_structure.rs (2)

995-1038: Good follow-up to prior feedback: replaced the random v4 UUID with `Uuid::nil()`. This avoids misleading, non-deterministic IDs in error paths.

2265-2333: Facet-sharing repair: solid key-based checks; minor allocation tweak + determinism.
- You rebuild `facet_vertex_keys_set` on each inner iteration. Build it once before the loop and reuse it.
- If `valid_cells.len() > 2`, removal order is insertion-dependent. Consider a stable rule (e.g., keep the two cells with the smallest CellKey) for deterministic results.

Suggested allocation tweak:

```diff
     let mut valid_cells = ValidCellsBuffer::new();
+    // Build once; reuse inside the loop
+    let facet_vertex_keys_set: VertexKeySet =
+        facet_vertex_keys.iter().copied().collect();
     for &(cell_key, _facet_index) in &cell_facet_pairs {
         if self.cells.contains_key(cell_key) {
-            // Phase 1: Use direct key-based method to avoid UUID→Key lookups
-            let Ok(cell_vertex_keys_vec) =
-                self.vertex_keys_for_cell_direct(cell_key)
-            else {
+            let Ok(cell_vertex_keys_vec) = self.vertex_keys_for_cell_direct(cell_key) else {
                 #[cfg(debug_assertions)]
                 eprintln!(
                     "debug: skipping cell {cell_key:?} due to unresolved vertex keys in validation"
                 );
                 continue; // Skip cells with unresolved vertex keys
             };
-            let cell_vertex_keys: VertexKeySet =
-                cell_vertex_keys_vec.into_iter().collect();
-            let facet_vertex_keys_set: VertexKeySet =
-                facet_vertex_keys.iter().copied().collect();
+            let cell_vertex_keys: VertexKeySet = cell_vertex_keys_vec.into_iter().collect();
```
🧹 Nitpick comments (5)
src/core/boundary.rs (1)

98-109: Nit: avoid repetitive deref by normalizing borrowed IDs once. Minor readability tweak: bind `(*cell_id, *facet_index)` to locals so the rest of the block doesn't need `*` / `usize::from(*facet_index)` sprinkled around. Apply this diff within the block:

```diff
 if let [(cell_id, facet_index)] = cells.as_slice() {
-    if let Some(cell) = self.cells().get(*cell_id) {
+    let (cell_id, facet_index) = (*cell_id, *facet_index);
+    if let Some(cell) = self.cells().get(cell_id) {
         // Cache facets per cell to avoid repeated allocations, but propagate errors
-        let facets = match cell_facets_cache.entry(*cell_id) {
+        let facets = match cell_facets_cache.entry(cell_id) {
             Entry::Occupied(e) => e.into_mut(),
             Entry::Vacant(v) => {
                 let computed = cell.facets()?; // propagate FacetError
                 v.insert(computed)
             }
         };
-        if let Some(f) = facets.get(usize::from(*facet_index)) {
+        if let Some(f) = facets.get(usize::from(facet_index)) {
             boundary_facets.push(f.clone());
         } else {
             debug_assert!(
-                usize::from(*facet_index) < facets.len(),
+                usize::from(facet_index) < facets.len(),
                 "facet_index {} out of bounds for {} facets",
                 facet_index,
                 facets.len()
             );
             ...
         debug_assert!(
-            self.cells().contains_key(*cell_id),
+            self.cells().contains_key(cell_id),
             "cell_id {cell_id:?} should exist in SlotMap"
         );
```

Also applies to: 113-116, 125-126.
src/core/collections.rs (2)

269-271: Consider adding a key-based neighbor-list alias to mirror CellNeighborsMap. You added `KeyBasedNeighborMap = FastHashMap<CellKey, CellKey>` (1:1), but there's no keyed analogue for per-cell neighbor lists. Adding one keeps APIs symmetric and reduces UUID traffic in hot paths. Apply near the Phase 1 block:

```diff
+/// Phase 1: Per-cell neighbor lists using direct keys (positional, up to MAX_PRACTICAL_DIMENSION_SIZE).
+/// Mirrors `CellNeighborsMap` but avoids UUIDs for internal algorithms.
+pub type KeyBasedCellNeighborsMap =
+    FastHashMap<CellKey, SmallBuffer<Option<CellKey>, MAX_PRACTICAL_DIMENSION_SIZE>>;
```

514-675: Phase 1 key-based aliases: solid, minimal surface area. Type aliases for sets/maps over `CellKey` and `VertexKey` are cohesive and consistent with existing FastHash* conventions. Names are clear. Consider also exporting the proposed `KeyBasedCellNeighborsMap` for completeness (see earlier comment).
src/core/triangulation_data_structure.rs (2)

995-1038: CellKey miss returns FacetError::VertexNotFound (semantic mismatch). Using VertexNotFound with a nil UUID when the cell_key is missing conflates cell vs vertex failures. Recommend either:
- return a Tds-level error here (`Result<_, TriangulationValidationError>`) and map at call sites, or
- add a FacetError variant for cell-key misses and use that.

Phase-1 note: this still performs O(D) UUID→Key lookups; acceptable now, but plan to store vertex keys on Cell to make this truly key-based.

4498-4507: Tests: exercise the new key-based helper to cover the Phase-1 path. These tests still go through UUID→Key via `vertex_keys_for_cell(cell)`. Consider iterating over `(&cell_key, cell)` and using `vertex_keys_for_cell_direct(cell_key)` to cover the new path as well. Also applies to: 4524-4528, 4534-4538.
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/core/boundary.rs (3 hunks)
- src/core/collections.rs (9 hunks)
- src/core/triangulation_data_structure.rs (12 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/core/triangulation_data_structure.rs (3)
- src/core/collections.rs (2): `fast_hash_map_with_capacity` (714-717, 826)
- src/core/cell.rs (7): `cell` (2209, 2338, 2340, 2344, 2543, 2556), `uuid` (501-503)
- src/core/vertex.rs (1): `uuid` (458-460)

src/core/boundary.rs (4)
- src/core/collections.rs (2): `fast_hash_map_with_capacity` (714-717, 826)
- src/core/triangulation_data_structure.rs (1): `cells` (500-502)
- src/core/cell.rs (7): `cell` (2209, 2338, 2340, 2344, 2543, 2556), `facets` (1052-1057)
- src/core/traits/boundary_analysis.rs (1): `boundary_facets` (92)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: performance-regression
- GitHub Check: build (windows-latest)
- GitHub Check: Codacy Security Scan
- GitHub Check: Code Coverage
🔇 Additional comments (11)
src/core/collections.rs (8)
80-93: Docs: Phase 1 key-based usage examples are clear. The internal examples concisely show direct CellKey/VertexKey usage. No issues.
135-143: Docs: performance notes for FastHashMap are fine. The non-cryptographic disclaimer and speed notes are appropriate here.
159-193: Docs: external vs internal examples are helpful. Clear separation of UUID-based API and key-based internal usage.
249-249: FacetToCellsMap: good choice of SmallBuffer<(CellKey, FacetIndex), 2>. Fits boundary vs interior cases without heap churn.
449-512: Docs: UUID→Key mapping sections read well. Balanced guidance on when to use direct SlotMap access vs maps.
730-747: Docs: capacity helpers include key-based example. Good call showing CellKey usage to avoid UUID→Key lookups.
846-863: Tests: presence/instantiation of Phase 1 types covered. Sanity checks for emptiness/len are sufficient at this layer.
911-923: Roundtrip tests for key-based maps/sets look good. Covers insert/get/contains and size checks; nice.
src/core/triangulation_data_structure.rs (3)
174-177: Key-based collection imports look good. Adopts CellKeySet/VertexKeySet while retaining FastHashMap where needed. No issues.
2424-2445: Vertex mapping validation ordering is correct and efficient. Key→UUID check first, then UUID→Key. Nice improvement.
2501-2522: Cell mapping validation mirrors vertex path appropriately. Same efficient two-way check. LGTM.
Fixes an issue where boundary facets were not correctly identified when using key-based access to cells, leading to incomplete or incorrect boundary representations. This change ensures that facet sharing is validated correctly by using the new key-based access methods, which improves performance and reduces the risk of errors. It also introduces a stricter validation method and a fallback mechanism to handle cases where vertex keys are missing.
There was a problem hiding this comment.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/core/boundary.rs (1)
98-110: Facet cache now propagates FacetError on miss — correct fix. This resolves the previous error-swallowing behavior; using Entry and cell.facets()? is the right approach.
🧹 Nitpick comments (5)
src/core/boundary.rs (1)
111-124: Out-of-bounds facet_index only debug-asserts; consider failing fast. In release builds this silently skips a boundary facet. If FacetError (or a suitable error) exists for invalid facet indices, prefer returning Err to surface corruption early.
Would you accept an error return here instead of debug-only logging?
src/core/triangulation_data_structure.rs (4)
1851-1859: Use vertex_keys_for_cell_direct() in assign_neighbors for consistency. This block still uses UUID→Key lookups; switching to the direct helper removes unnecessary map accesses.
Apply:
```diff
 for (cell_key, cell) in &self.cells {
-    let vertex_keys = self.vertex_keys_for_cell(cell).map_err(|err| {
+    let vertex_keys = self.vertex_keys_for_cell_direct(cell_key).map_err(|err| {
         TriangulationValidationError::VertexKeyRetrievalFailed {
             cell_id: cell.uuid(),
             message: format!(
                 "Failed to retrieve vertex keys for cell during neighbor assignment: {err}"
             ),
         }
     })?;
```
2039-2043: Same here: prefer direct key path in assign_incident_cells. Aligns with Phase 1 and avoids UUID map lookups.
```diff
-let vertex_keys = self.vertex_keys_for_cell(cell)
+let vertex_keys = self.vertex_keys_for_cell_direct(cell_key)
     .map_err(|_e| TriangulationValidationError::InconsistentDataStructure {
         message: format!("Failed to get vertex keys for cell {}: Vertex UUID not found in vertex mapping", cell.uuid()),
     })?;
```
2215-2233: Facet index u8 guard: add explicit dim assertion. You already gate strictly in try_build; add a debug_assert!(D <= 254) once in each builder to surface misconfiguration earlier instead of silently continuing.
```diff
 // Iterate over all cells and their facets
 for (cell_id, cell) in &self.cells {
+    debug_assert!(D <= 254, "Facet index requires D <= 254");
```

(similarly in try_build)
Also applies to: 2273-2288
1000-1018: Doc comment notes Phase 2 optimization — consider TODO tag with tracking. Add a TODO with an issue reference to move cells to store VertexKey directly, so it doesn’t get lost.
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cspell.json (1 hunks)
- src/core/boundary.rs (2 hunks)
- src/core/triangulation_data_structure.rs (19 hunks)
✅ Files skipped from review due to trivial changes (1)
- cspell.json
🧰 Additional context used
🧬 Code graph analysis (2)
src/core/triangulation_data_structure.rs (4)
- src/core/collections.rs (2): fast_hash_map_with_capacity (714-717), fast_hash_map_with_capacity (826-826)
- src/core/facet.rs (4): cell (325-327), key (455-473), facet_key_from_vertex_keys (574-586), vertex (370-372)
- src/core/cell.rs (7): cell (2209-2209), cell (2338-2338), cell (2340-2340), cell (2344-2344), cell (2543-2543), cell (2556-2556), uuid (501-503)
- src/core/vertex.rs (1): uuid (458-460)
src/core/boundary.rs (3)
- src/core/collections.rs (2): fast_hash_map_with_capacity (714-717), fast_hash_map_with_capacity (826-826)
- src/core/triangulation_data_structure.rs (1): cells (500-502)
- src/core/cell.rs (7): cell (2209-2209), cell (2338-2338), cell (2340-2340), cell (2344-2344), cell (2543-2543), cell (2556-2556), facets (1052-1057)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Codacy Security Scan
- GitHub Check: performance-regression
- GitHub Check: build (windows-latest)
- GitHub Check: Code Coverage
🔇 Additional comments (15)
src/core/boundary.rs (4)
9-11: Import narrowing to Tds is consistent with key-based refactor. Good removal of CellKey exposure from this module’s public surface.
12-12: Key-based cache type is appropriate. Using KeyBasedCellMap with fast_hash_map_with_capacity(self.number_of_cells()) matches the new internal collections strategy.
16-16: Entry API import for cache miss handling looks good. Ready for Occupied/Vacant pattern below.
93-95: Per-call facets cache keyed by CellKey: LGTM. Capacity sizing to number_of_cells() is reasonable.
src/core/triangulation_data_structure.rs (11)
174-177: Adopting CellKeySet/VertexKeySet and keeping FastHashMap internally: LGTM. Imports align with Phase 1 key-based collections and keep FxHash for perf.
995-1045: vertex_keys_for_cell_direct: good Phase 1 helper with clear errors. Direct SlotMap access + explicit InconsistentDataStructure messages are solid.
2204-2213: build_facet_to_cells_hashmap now skips cells via key-based path. Lenient behavior is fine for this helper; debug message aids diagnosis.
2239-2297: try_build_facet_to_cells_hashmap adds strict variant — nice separation. Returning TriangulationValidationError on missing keys is the right contract here.
2338-2345: fix_invalid_facet_sharing: strict-then-lenient map + direct key path: LGTM. Deterministic keep (smallest keys) and proper error propagation read well.
Also applies to: 2351-2365, 2362-2365, 2368-2375, 2384-2391
2497-2518: Mapping validation optimized to check key→UUID first: good. Reduces hash lookups and keeps diagnostics intact.
2574-2595: Cell mapping validation mirrors vertex side: LGTM. Consistent optimization and messaging.
2611-2615: validate_no_duplicate_cells uses direct key lookup now: LGTM. Avoids UUID map in hot path.
2653-2655: validate_facet_sharing now uses strict builder: correct. Ensures missing vertex keys don’t mask topology errors.
2788-2797: Neighbor validation precomputes via direct key path and set ops: LGTM. Good balance of correctness and perf.
Also applies to: 2842-2846
3121-3133: rand::rng() — Cargo.toml pins rand = "0.9.2"; no action required. Cargo.toml (line 31) declares rand = "0.9.2" and the repo uses rand::rng() in tests/src; rand 0.9.x includes rng() (thread_rng → rng, gen → random), so the API matches the declared dependency (Cargo.lock also contains a transitive 0.8.5 entry). (docs.rs)
Improves error handling during boundary facet creation by introducing specific error types for invalid facet indices and missing cells. This change helps to quickly identify data corruption issues in the triangulation structure. Refs: None
There was a problem hiding this comment.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/core/boundary.rs (1)
96-97: Propagating cell.facets() errors via Entry cache fixes prior swallow. The cache now correctly uses Entry and propagates FacetError instead of defaulting to empty vecs. This addresses the earlier review note.
Run this to ensure there are no remaining call sites that still swallow cell.facets() errors:

```bash
#!/bin/bash
# Find unwrap_or_default/unwrap_or related to facets() to avoid silent failures
rg -nP -C2 '(facets\s*\(\).*\.(unwrap_or|unwrap_or_default)|unwrap_or(_default)?\s*\(\s*\)\s*;?\s*$)' --type=rust
# Also flag `.ok()` on facets()
rg -nP -C2 'facets\s*\(\)\.ok\(\)' --type=rust
```

Also applies to: 101-113
🧹 Nitpick comments (10)
src/core/facet.rs (1)
121-131: Good additions to FacetError; align with new boundary failure paths. The new variants precisely describe corruption scenarios and fit the updated boundary propagation.
For easier debugging in callers, consider adding debug context (e.g., cell_id) when constructing these errors at call sites (see boundary.rs suggestions).
src/core/boundary.rs (3)
117-121: Add debug assertion with context before failing on invalid index. Small ergonomics win: assert with cell_id to aid triage; no behavior change in release.
```diff
 if let Some(f) = facets.get(usize::from(facet_index)) {
     boundary_facets.push(f.clone());
 } else {
     // Fail fast: invalid facet index indicates data corruption
+    debug_assert!(
+        usize::from(facet_index) < facets.len(),
+        "Invalid facet index: idx={}, len={}, cell_id={:?}",
+        facet_index,
+        facets.len(),
+        cell_id
+    );
     return Err(FacetError::InvalidFacetIndex {
         index: facet_index,
         facet_count: facets.len(),
     });
 }
```
124-126: Optional: assert when a referenced cell is missing. A debug_assert here helps catch upstream corruption early in tests.
```diff
 } else {
     // Fail fast: cell not found indicates data corruption
+    debug_assert!(false, "Referenced cell {:?} not found in triangulation", cell_id);
     return Err(FacetError::CellNotFoundInTriangulation);
 }
```
100-128: Optional: assert unexpected adjacency counts (len != 1,2) in debug builds. We only act on boundary facets (len==1). Adding an else with a debug assertion flags 0 or >2, which violate the invariant, without changing release behavior.
```diff
 for (_facet_key, cells) in facet_to_cells {
     if let [(cell_id, facet_index)] = cells.as_slice() {
         // Bind dereferenced values once to avoid repetitive derefs
         let (cell_id, facet_index) = (*cell_id, *facet_index);
         if let Some(cell) = self.cells().get(cell_id) {
             // Cache facets per cell to avoid repeated allocations, but propagate errors
             let facets = match cell_facets_cache.entry(cell_id) {
                 Entry::Occupied(e) => e.into_mut(),
                 Entry::Vacant(v) => {
                     let computed = cell.facets()?; // propagate FacetError
                     v.insert(computed)
                 }
             };
             if let Some(f) = facets.get(usize::from(facet_index)) {
                 boundary_facets.push(f.clone());
             } else {
                 // Fail fast: invalid facet index indicates data corruption
                 return Err(FacetError::InvalidFacetIndex {
                     index: facet_index,
                     facet_count: facets.len(),
                 });
             }
         } else {
             // Fail fast: cell not found indicates data corruption
             return Err(FacetError::CellNotFoundInTriangulation);
         }
-    }
+    } else {
+        debug_assert!(
+            cells.len() == 2 || cells.is_empty(),
+            "Unexpected adjacent cell count for facet: {}",
+            cells.len()
+        );
+    }
 }
```

src/core/triangulation_data_structure.rs (6)
952-1005: Minor alloc nit: pre-reserve and push instead of collect for fewer reallocs. vertex_keys_for_cell_direct() builds a small Vec per cell; pre-reserving avoids potential reallocation on tight loops.
```diff
-let keys: Result<Vec<VertexKey>, _> = cell
-    .vertices()
-    .iter()
-    .map(|v| {
-        self.uuid_to_vertex_key
-            .get(&v.uuid())
-            .copied()
-            .ok_or_else(|| TriangulationValidationError::InconsistentDataStructure {
-                message: format!("Vertex UUID {} not found in mapping", v.uuid()),
-            })
-    })
-    .collect();
-
-keys
+let mut keys = Vec::with_capacity(cell.vertices().len());
+for v in cell.vertices() {
+    let key = self.uuid_to_vertex_key.get(&v.uuid()).copied().ok_or_else(|| {
+        TriangulationValidationError::InconsistentDataStructure {
+            message: format!("Vertex UUID {} not found in mapping", v.uuid()),
+        }
+    })?;
+    keys.push(key);
+}
+Ok(keys)
```
2060-2067: Consider surfacing a counter for skipped cells (release builds). Skipping cells with unresolved vertex keys is fine for lenient duplicate removal, but it can hide systemic issues. Optionally increment a debug metric/log once per run so production users can detect persistent skips.
2156-2161: Off-by-one in u8 facet index bound. Facet indices are 0..=D; u8 supports up to 255, so D can be 255 safely. Update the assert/message.
```diff
-debug_assert!(
-    D <= 254,
-    "Dimension D must be <= 254 to fit facet indices in u8"
-);
+debug_assert!(
+    D <= 255,
+    "Dimension D must be <= 255 to fit facet indices in u8 (indices 0..=D)"
+);
```

Also update the doc “Requirements” comment above to say D ≤ 255.
2205-2269: Strict facet map builder looks solid; mirror the u8 bound fix here. The error propagation is correct. Apply the same D ≤ 255 assert/message tweak.
```diff
-debug_assert!(
-    D <= 254,
-    "Dimension D must be <= 254 to fit facet indices in u8"
-);
+debug_assert!(
+    D <= 255,
+    "Dimension D must be <= 255 to fit facet indices in u8 (indices 0..=D)"
+);
```
2358-2360: Optional: avoid full sort when keeping two smallest keys. Given pathological facets rarely exceed 2 cells, you could keep the two minima in a single pass to shave work; current sort_unstable is fine if this isn’t hot.
// Track min1,min2 by key instead of sorting; remove others.
4545-4551: Test nit: avoid O(N) scan to recover cell_key. Use the mapping helper to get the key from UUID directly.
```diff
-let cell_key = tds
-    .cells
-    .iter()
-    .find(|(_, c)| c.uuid() == cell.uuid())
-    .map(|(k, _)| k)
-    .expect("Cell should have a key");
+let cell_key = tds
+    .cell_key_from_uuid(&cell.uuid())
+    .expect("Cell UUID should map to a key");
```
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/core/boundary.rs (3 hunks)
- src/core/facet.rs (1 hunks)
- src/core/triangulation_data_structure.rs (23 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/core/triangulation_data_structure.rs (2)
- src/core/collections.rs (2): fast_hash_map_with_capacity (714-717), fast_hash_map_with_capacity (826-826)
- src/core/facet.rs (3): key (466-484), facet_key_from_vertex_keys (585-597), vertex (381-383)
src/core/boundary.rs (3)
- src/core/collections.rs (2): fast_hash_map_with_capacity (714-717), fast_hash_map_with_capacity (826-826)
- src/core/triangulation_data_structure.rs (1): cells (500-502)
- src/core/cell.rs (7): cell (2209-2209), cell (2338-2338), cell (2340-2340), cell (2344-2344), cell (2543-2543), cell (2556-2556), facets (1052-1057)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: performance-regression
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: build (windows-latest)
- GitHub Check: Codacy Security Scan
- GitHub Check: Code Coverage
🔇 Additional comments (13)
src/core/boundary.rs (2)
9-9: Imports look correct and minimal. Switch to Tds-only import, addition of KeyBasedCellMap and Entry are appropriate for the refactor.
Also applies to: 12-12, 16-16
63-67: Docs: clear error surface description. The error contract now reflects actual propagation and data-corruption cases. Good.
src/core/triangulation_data_structure.rs (11)
174-177: Imports align with key-based migration — good switch. Replacing FastHashSet/Map usages with CellKeySet/VertexKeySet and fast_hash_map_with_capacity is consistent with the PR’s goal and improves clarity of intent.
1812-1820: Good: direct key path for neighbor assignment. Routing through vertex_keys_for_cell_direct() removes UUID→Key lookups from this hot path and surfaces precise errors.
1999-2003: Good: incident-cell mapping now uses direct keys. This avoids silent drops and produces actionable errors when mappings are inconsistent.
2171-2179: Reasonable leniency: skip cells with missing vertex keys. The debug print helps during development; behavior matches the method’s documented “lenient” contract.
2311-2317: Good fallback strategy. Strict-first then lenient fallback is a pragmatic repair path for corrupt states without derailing recovery.
2469-2491: Mapping validation order is sensible. Key→UUID check via SlotMap first, then UUID→key ensures precise diagnostics with minimal hashing.
2546-2567: Same here — solid two‑way mapping check.
2576-2587: Good: duplicate detection now on key path with fail-fast. This aligns with the new internal model and eliminates UUID round-trips.
2625-2627: Nice: use strict facet map for validation. This ensures missing vertex keys don’t mask invalid sharing.
2760-2767: Precomputed vertex key sets improve neighbor validation. Avoids recomputation and ensures consistent semantics in checks below.
4410-4417: Tests: exercising the direct key path is good coverage.
Improves the reliability of bad cell detection in Bowyer-Watson by handling degenerate cases and potential errors. This prevents panics and improves the overall stability of the triangulation process, including adding error handling for empty triangulations. Adds more error handling to the cell duplicate removal and ensures it gets propagated correctly.
There was a problem hiding this comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/core/triangulation_data_structure.rs (2)
901-906: Guard against duplicate vertex UUIDs in insert helper
insert_vertex_with_mapping can silently create multiple vertices sharing the same UUID, breaking mapping invariants (uuid_to_vertex_key.len() == vertices.len()) and causing validation to fail. Add a fast-path guard (or make this a fallible API) to keep invariants intact. Proposed minimal guard (non-breaking):
```diff
 pub fn insert_vertex_with_mapping(&mut self, vertex: Vertex<T, U, D>) -> VertexKey {
     let vertex_uuid = vertex.uuid();
+    // Prevent duplicate UUIDs from entering the structure.
+    if let Some(existing) = self.uuid_to_vertex_key.get(&vertex_uuid).copied() {
+        debug_assert!(
+            false,
+            "Attempted to insert duplicate vertex UUID: {vertex_uuid}"
+        );
+        return existing;
+    }
     let vertex_key = self.vertices.insert(vertex);
     self.uuid_to_vertex_key.insert(vertex_uuid, vertex_key);
     vertex_key
 }
```

Alternatively (preferred), introduce try_insert_vertex_with_mapping(...) -> Result<VertexKey, TriangulationConstructionError> and migrate internal call sites. Want a patch for that?
945-950: Same UUID collision risk for cells
insert_cell_with_mapping has the same invariant risk as the vertex variant. Add a guard (or make it fallible) to avoid two cells sharing one UUID.

```diff
 pub fn insert_cell_with_mapping(&mut self, cell: Cell<T, U, V, D>) -> CellKey {
     let cell_uuid = cell.uuid();
+    if let Some(existing) = self.uuid_to_cell_key.get(&cell_uuid).copied() {
+        debug_assert!(
+            false,
+            "Attempted to insert duplicate cell UUID: {cell_uuid}"
+        );
+        return existing;
+    }
     let cell_key = self.cells.insert(cell);
     self.uuid_to_cell_key.insert(cell_uuid, cell_key);
     cell_key
 }
```

src/core/traits/insertion_algorithm.rs (4)
504-515: Invalid if-let chain; this won’t compile.
if let ... && ... parses as assignment from a boolean. Replace with matches!.

```diff
-if let Ok(containment) = insphere(&vertex_points, *vertex.point())
-    && matches!(containment, InSphere::INSIDE)
-{
-    return true;
-}
+if matches!(
+    insphere(&vertex_points, *vertex.point()),
+    Ok(InSphere::INSIDE)
+) {
+    return true;
+}
```
637-647: Iteration over cells likely incorrect and extra allocation per loop.
for (k, v) in tds.cells() may not iterate (SlotMap typically needs .iter()). Also preallocate the vertex buffer.

```diff
-for (cell_key, cell) in tds.cells() {
+for (cell_key, cell) in tds.cells().iter() {
     // Skip cells with insufficient vertices
     if cell.vertices().len() < D + 1 {
         continue;
     }
     cells_tested += 1;
-    let vertex_points: Vec<_> = cell.vertices().iter().map(|v| *v.point()).collect();
+    let mut vertex_points = Vec::with_capacity(D + 1);
+    vertex_points.extend(cell.vertices().iter().map(|v| *v.point()));
```
1323-1328: Index destructuring moves from &Vec; fix pattern to destructure by reference.
cells[0] yields &(CellKey, u8). Destructure with &(...) to avoid the move.

```diff
 for (_facet_key, cells) in boundary_facets {
-    let (cell_key, facet_index) = cells[0];
+    let &(cell_key, facet_index) = cells
+        .first()
+        .expect("boundary facet must have exactly one adjacent cell");
```
1023-1031: Same destructuring issue here. Destructure by reference to avoid moving out of a borrowed Vec.
```diff
 if cells.len() == 1 {
-    let (cell_key, facet_index) = cells[0];
+    let &(cell_key, facet_index) = cells
+        .first()
+        .expect("boundary facet must have exactly one adjacent cell");
```
♻️ Duplicate comments (1)
src/core/triangulation_data_structure.rs (1)
2781-2790: Neighbor validation uses precomputed key sets correctly. Set-based checks and mutuality validation look correct and efficient. Thanks for addressing prior concerns about silent key drops.
Also applies to: 2813-2893
🧹 Nitpick comments (16)
src/core/triangulation_data_structure.rs (11)
2164-2170: Use structured logging for skipped-cell warnings (avoid unconditional stderr)
eprintln! in a library will spam stderr in release builds. Prefer log::warn! (or gate behind debug_assertions) to keep callers’ stderr clean.

```diff
-if skipped_cells > 0 {
-    eprintln!(
-        "Warning: Skipped {skipped_cells} cell(s) during facet mapping due to missing vertex keys"
-    );
-}
+if skipped_cells > 0 {
+    #[cfg(feature = "log")]
+    log::warn!(
+        "Skipped {skipped_cells} cell(s) during facet mapping due to missing vertex keys"
+    );
+    #[cfg(all(not(feature = "log"), debug_assertions))]
+    eprintln!(
+        "Warning: Skipped {skipped_cells} cell(s) during facet mapping due to missing vertex keys"
+    );
+}
```

If you’re open to it, I can wire up the crate’s logging feature flag.
Also applies to: 2213-2218
1887-1906: Avoid materializing the full updates Vec. You build updates: Vec<(CellKey, Vec<Option<Uuid>>)> only to immediately apply it. This is extra allocation proportional to cell count. You can iterate cell_neighbors and update cells in place by temporarily taking a mutable borrow to self.cells via key, converting on the fly. Happy to draft an in-place version if borrow checker friction is acceptable. Do you want me to sketch it?
2061-2069: Duplicate removal path: good use of key-based lookups; minor nits
- Nice: switching to vertex_keys_for_cell_direct avoids UUID→Key churn.
- Minor: consider reserving unique_cells with fast_hash_map_with_capacity(self.cells.len()) to reduce rehashes.

Also applies to: 2074-2086
2918-2987: PartialEq relies on Vertex/Cell equality semantics. Given UUIDs differ between instances, this hinges on Vertex/Cell equality ignoring UUIDs (comparing coordinates/vertex sets). If that’s already true, you’re good. If not, tds1 == tds2 for identical geometry may fail. Would you like a follow-up patch to make PartialEq explicitly compare by coordinates and vertex key sets only?
1383-1411: Construction state semantics could be clarified
new([]) returns Ok and later sets construction_state = Constructed even with 0 vertices. If “Constructed” is intended to mean “post-algorithm run,” that’s fine; otherwise, consider leaving it Incomplete(0) for empty inputs.

Also applies to: 1688-1701
1506-1552: Public add() vs. insert_* helpers divergence. add() enforces UUID/coordinate uniqueness; insert_*_with_mapping() does not. If the insert_* helpers are part of the public API, the asymmetry invites misuse. Consider:
- Making insert_*_with_mapping pub(crate); or
- Adding fallible try_insert_* public variants and documenting invariants.
1805-1829: Facet map capacity sizing. Nice pre-allocation. Tiny nit: for many small cells, (D+1) is exact facets per cell—fine. If you ever support mixed-dimensional cells, guard with cell.vertices().len() instead.
1957-1961: clear_all_neighbors() leaves generation unchanged. Everywhere else topology changes bump generation. Clearing neighbors arguably changes traversal behavior; consider incrementing generation for cache invalidation consistency.
2231-2245: Facet index u8 bound: debug_assert only. You already return an error on overflow in strict builder; good. Optionally, mirror that behavior in the lenient builder to avoid silent facet drops in release builds when D > 255.
2324-2424: Fix path: deterministic cell retention is arbitrary (smallest keys). Sorting keys and keeping the two smallest is deterministic but semantically arbitrary. If a geometric criterion (e.g., volume, quality) is easy to compute, prefer that for better robustness.
173-177: Imports: FastHashMap re-export remains; public API surface. You documented the move to *KeySet types. Ensure no remaining public exports require FastHashMap to prevent leaking implementation details. I can scan the crate for stray public FastHashMap types if helpful.
src/core/traits/insertion_algorithm.rs (5)
24-26: Make BadCellsError non-exhaustive and equatable. Future-proof the public error enum and enable straightforward testing.
```diff
 /// Error that can occur during bad cells detection
-#[derive(Debug, Clone)]
+#[non_exhaustive]
+#[derive(Debug, Clone, PartialEq, Eq)]
 pub enum BadCellsError {
```
536-579: Casting margin factor to T can silently zero-out expansion for integer T. For integer CoordinateScalar implementations, cast(0.1) becomes 0, disabling the margin. Consider a fallback > 0.

```diff
-let margin_factor: T = cast(MARGIN_FACTOR).unwrap_or_else(T::zero);
+let margin_factor: T = cast(MARGIN_FACTOR)
+    .unwrap_or_else(|| T::default_tolerance()); // ensures a small positive margin for non-float T
```

Alternatively, compute margins in f64 and cast back with saturation/rounding.
715-749: Avoid allocating bad_cells_sharing; stream-count and short-circuit. You can count and capture the single bad sharer without building a Vec.
Example tweak:
```rust
let mut bad_count = 0;
let mut single = None;
for &(ck, fi) in sharing_cells {
    if bad_cell_set.contains(&ck) {
        bad_count += 1;
        if bad_count == 1 {
            single = Some((ck, fi));
        } else {
            break;
        }
    }
}
if bad_count == 1 && (total_count == 2 || total_count == 1) {
    if let Some((bad_cell_key, facet_index)) = single {
        boundary_facet_specs.push((bad_cell_key, facet_index));
    }
}
```
1537-1541: Result::is_ok_and may exceed MSRV; prefer explicit match. Safer and clearer; avoids relying on Rust ≥1.74.
```diff
-CellBuilder::default()
-    .vertices(facet_vertices)
-    .build()
-    .is_ok_and(|new_cell| {
-        tds.insert_cell_with_mapping(new_cell);
-        true
-    })
+match CellBuilder::default().vertices(facet_vertices).build() {
+    Ok(new_cell) => {
+        tds.insert_cell_with_mapping(new_cell);
+        true
+    }
+    Err(_) => false,
+}
```

If MSRV ≥1.74, this is optional.
506-509: Minor alloc nit: preallocate per-cell buffer.
Vec::new() each loop causes churn. Use with_capacity(D+1) once and clear. I can wire this through InsertionBuffers if you want zero allocs on hot paths.
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- benches/microbenchmarks.rs (1 hunks)
- src/core/algorithms/bowyer_watson.rs (1 hunks)
- src/core/algorithms/robust_bowyer_watson.rs (3 hunks)
- src/core/traits/insertion_algorithm.rs (13 hunks)
- src/core/triangulation_data_structure.rs (25 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (4)
src/core/algorithms/bowyer_watson.rs (2)
- src/core/traits/insertion_algorithm.rs (1): find_bad_cells (616-680)
- src/core/triangulation_data_structure.rs (1): cells (500-502)
src/core/triangulation_data_structure.rs (4)
- src/core/collections.rs (2): fast_hash_map_with_capacity (714-717), fast_hash_map_with_capacity (826-826)
- src/core/facet.rs (4): cell (336-338), key (466-484), facet_key_from_vertex_keys (585-597), vertex (381-383)
- src/core/cell.rs (7): cell (2209-2209), cell (2338-2338), cell (2340-2340), cell (2344-2344), cell (2543-2543), cell (2556-2556), uuid (501-503)
- src/core/vertex.rs (1): uuid (458-460)
src/core/algorithms/robust_bowyer_watson.rs (1)
- src/core/traits/insertion_algorithm.rs (3): find_bad_cells (616-680), new (138-150), new (277-284)
src/core/traits/insertion_algorithm.rs (2)
- src/geometry/predicates.rs (1): insphere (346-416)
- src/core/triangulation_data_structure.rs (3): cells (500-502), number_of_cells (763-765), default (206-208)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Code Coverage
- GitHub Check: Codacy Security Scan
- GitHub Check: build (windows-latest)
- GitHub Check: performance-regression
🔇 Additional comments (15)
benches/microbenchmarks.rs (1)
131-131: LGTM! Error handling properly added for fallible remove_duplicate_cells API. The change correctly adapts to the new Result return type from remove_duplicate_cells, using .expect() with a descriptive error message for benchmark context. This is appropriate since benchmarks should panic on unexpected errors rather than silently proceeding with invalid state.
src/core/algorithms/bowyer_watson.rs (1)
741-754: Good migration to fallible find_bad_cells API with graceful error handling. The change correctly adapts to the new Result<Vec<CellKey>, BadCellsError> return type. The error handling strategy of returning an empty vector on error allows the test to continue, which is appropriate for diagnostic testing.
src/core/algorithms/robust_bowyer_watson.rs (4)
8-8: LGTM! Proper import of CellKeySet for key-based collection optimization. The addition of CellKeySet to the imports aligns with the PR’s objective of migrating to direct key-based access patterns.
373-395: Excellent robust fallback strategy for find_bad_cells errors. The implementation properly handles all error variants from the new BadCellsError enum:

- AllCellsBad and TooManyDegenerateCells: falls back to robust predicates
- NoCells: returns an empty vector immediately

This error handling pattern improves resilience against degenerate configurations.
397-409: Smart supplementation logic for edge cases. The logic properly supplements the initial results with robust predicates when:
- No bad cells are found (empty result)
- The base tolerance exceeds the default (indicating a need for more robustness)
The deduplication check before adding cells prevents duplicate entries in the merged result.
545-545: Good adoption of CellKeySet for efficient lookups. The change from FastHashSet<CellKey> to CellKeySet maintains the same collection semantics while aligning with the PR’s key-based optimization strategy. The .iter().copied().collect() pattern correctly populates the set.
src/core/triangulation_data_structure.rs (6)
952-1002: Direct key-based vertex lookup helper looks solid. Good error shaping, zero extra allocations beyond the pre-sized Vec, and clear Phase-2 plan. LGTM.
2246-2290: Strict facet map builder: good failure semantics. Returning a typed error on facet-index overflow and missing vertex keys is the right call for validation-critical paths. LGTM.
3056-3097: Serde: mapping rebuilds look correct. UUID→Key maps are reconstructed deterministically from deserialized SlotMaps; tests confirm round-trip. LGTM.
Also applies to: 6694-6715
2166-2169: u8 facet-index bound comment. Docs say “indices 0..=D fit into u8,” but when D=255 the max index is 255, still OK. All good—just noting the edge condition has been handled correctly.
Also applies to: 2250-2252
1887-1894: Potential None inflation on neighbor UUID conversion. and_then(self.cell_uuid_from_key) can yield None if a neighbor cell was concurrently removed. If this is impossible in single-threaded contexts, fine. Otherwise, consider surfacing an error to avoid silently dropping neighbors.
2048-2086: remove_duplicate_cells() return type change is great. Making this fallible and bumping generation on removals is the right tradeoff. LGTM.
src/core/traits/insertion_algorithm.rs (3)
46-70: LGTM: the Display implementation is clear and actionable. The error messages are concise and user-friendly.
1212-1217: UUID mapping still used in the “key-based” flow—confirm intent. These helpers touch `uuid_to_vertex_key`/`uuid_to_cell_key`. If the migration aims to de‑emphasize UUIDs internally, consider gating or documenting why these remain.

Would you like a follow-up PR to centralize key/UUID mapping behind a thin adapter?
Also applies to: 1590-1594
1028-1031: No change required — MSRV ≥ 1.70. Cargo.toml declares rust-version = "1.89.0" and rust-toolchain.toml sets channel = "1.89.0", so if-let chaining is supported; keep the code as-is.
Addresses potential issues in bad cell detection during vertex insertion by improving circumsphere containment checks and boundary facet handling, preventing premature termination and improving overall robustness of the triangulation process. Also guards against duplicate UUIDs during vertex and cell insertion.
There was a problem hiding this comment.
Actionable comments posted: 0
🧹 Nitpick comments (6)
src/core/triangulation_data_structure.rs (6)
247-254: Prefer a strongly-typed entity discriminator over String in DuplicateUuid. Using String for `entity_type` risks typos and weakens exhaustiveness. Introduce an enum and use it here and at call sites.

```diff
 #[derive(Clone, Debug, Error, PartialEq, Eq)]
 pub enum TriangulationConstructionError {
 @@
-    #[error("Duplicate UUID: {entity_type} with UUID {uuid} already exists")]
-    DuplicateUuid {
-        /// The type of entity (e.g., "vertex" or "cell").
-        entity_type: String,
-        /// The UUID that was duplicated.
-        uuid: Uuid,
-    },
+    #[error("Duplicate UUID: {entity:?} with UUID {uuid} already exists")]
+    DuplicateUuid {
+        /// The duplicated entity type.
+        entity: EntityKind,
+        /// The UUID that was duplicated.
+        uuid: Uuid,
+    },
 }
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum EntityKind {
+    Vertex,
+    Cell,
+}
```

Then return `DuplicateUuid { entity: EntityKind::Vertex, uuid }` / `EntityKind::Cell` at the call sites (see lines 912-919 and 958-963).
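A std-only sketch of the enum-discriminator idea (the real code would use `thiserror` and the crate's `Uuid` type; `u128` and the manual Display impls here are stand-ins for the demo):

```rust
use std::fmt;

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum EntityKind {
    Vertex,
    Cell,
}

impl fmt::Display for EntityKind {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            EntityKind::Vertex => write!(f, "vertex"),
            EntityKind::Cell => write!(f, "cell"),
        }
    }
}

// The enum replaces a stringly-typed `entity_type: String` field, so a typo
// like "vetex" becomes a compile error instead of a silent mismatch.
#[derive(Debug, PartialEq)]
struct DuplicateUuid {
    entity: EntityKind,
    uuid: u128, // stand-in for uuid::Uuid
}

impl fmt::Display for DuplicateUuid {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "Duplicate UUID: {} with UUID {:032x} already exists",
            self.entity, self.uuid
        )
    }
}

fn main() {
    let err = DuplicateUuid { entity: EntityKind::Vertex, uuid: 1 };
    assert!(err.to_string().starts_with("Duplicate UUID: vertex"));
}
```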
906-923: Avoid double-hashing on the duplicate check; use the Entry API to insert atomically. `contains_key` + `insert` does two lookups. Use `entry()` to check/insert once and keep the vertex/cell + mapping mutation atomic.

```diff
-        // Check for duplicate UUID
-        if self.uuid_to_vertex_key.contains_key(&vertex_uuid) {
-            return Err(TriangulationConstructionError::DuplicateUuid {
-                entity_type: "vertex".to_string(),
-                uuid: vertex_uuid,
-            });
-        }
-
-        let vertex_key = self.vertices.insert(vertex);
-        self.uuid_to_vertex_key.insert(vertex_uuid, vertex_key);
-        Ok(vertex_key)
+        match self.uuid_to_vertex_key.entry(vertex_uuid) {
+            hashbrown::hash_map::Entry::Occupied(_) => Err(
+                TriangulationConstructionError::DuplicateUuid {
+                    entity: EntityKind::Vertex,
+                    uuid: vertex_uuid,
+                },
+            ),
+            hashbrown::hash_map::Entry::Vacant(e) => {
+                let vertex_key = self.vertices.insert(vertex);
+                e.insert(vertex_key);
+                Ok(vertex_key)
+            }
+        }
```

```diff
-        // Check for duplicate UUID
-        if self.uuid_to_cell_key.contains_key(&cell_uuid) {
-            return Err(TriangulationConstructionError::DuplicateUuid {
-                entity_type: "cell".to_string(),
-                uuid: cell_uuid,
-            });
-        }
-
-        let cell_key = self.cells.insert(cell);
-        self.uuid_to_cell_key.insert(cell_uuid, cell_key);
-        Ok(cell_key)
+        match self.uuid_to_cell_key.entry(cell_uuid) {
+            hashbrown::hash_map::Entry::Occupied(_) => Err(
+                TriangulationConstructionError::DuplicateUuid {
+                    entity: EntityKind::Cell,
+                    uuid: cell_uuid,
+                },
+            ),
+            hashbrown::hash_map::Entry::Vacant(e) => {
+                let cell_key = self.cells.insert(cell);
+                e.insert(cell_key);
+                Ok(cell_key)
+            }
+        }
```

Also applies to: 951-968
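The single-lookup Entry pattern works the same with std's `HashMap` (the crate uses hashbrown and SlotMap; `u128`/`usize` here are illustrative stand-ins for UUIDs and keys):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum InsertError {
    DuplicateUuid(u128),
}

// One hash lookup covers both the duplicate check and the insert,
// instead of `contains_key` followed by `insert`.
fn insert_with_mapping(
    map: &mut HashMap<u128, usize>,
    next_key: usize,
    uuid: u128,
) -> Result<usize, InsertError> {
    use std::collections::hash_map::Entry;
    match map.entry(uuid) {
        Entry::Occupied(_) => Err(InsertError::DuplicateUuid(uuid)),
        Entry::Vacant(e) => {
            e.insert(next_key);
            Ok(next_key)
        }
    }
}

fn main() {
    let mut map = HashMap::new();
    assert_eq!(insert_with_mapping(&mut map, 0, 42), Ok(0));
    // Second insertion with the same UUID is rejected atomically.
    assert_eq!(insert_with_mapping(&mut map, 1, 42), Err(InsertError::DuplicateUuid(42)));
}
```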
2016-2020: Preserve underlying error details in the message. The current `map_err(|_e| ...)` discards the cause. Bubble the original message for better debugging.

```diff
-        let vertex_keys = self.vertex_keys_for_cell_direct(cell_key)
-            .map_err(|_e| TriangulationValidationError::InconsistentDataStructure {
-                message: format!("Failed to get vertex keys for cell {}: Vertex UUID not found in vertex mapping", cell.uuid()),
-            })?;
+        let vertex_keys = self.vertex_keys_for_cell_direct(cell_key).map_err(|e| {
+            TriangulationValidationError::InconsistentDataStructure {
+                message: format!(
+                    "Failed to get vertex keys for cell {}: {e}",
+                    cell.uuid()
+                ),
+            }
+        })?;
```
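A tiny std-only demonstration of the difference: binding the inner error as `e` and formatting it with `{e}` keeps the cause in the outer message (names invented for the demo):

```rust
use std::fmt;

#[derive(Debug)]
struct InnerError(&'static str);

impl fmt::Display for InnerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

fn lookup(ok: bool) -> Result<u32, InnerError> {
    if ok { Ok(7) } else { Err(InnerError("vertex key missing")) }
}

fn validate(ok: bool) -> Result<u32, String> {
    // `{e}` preserves the underlying cause instead of a fixed guess like
    // "Vertex UUID not found in vertex mapping".
    lookup(ok).map_err(|e| format!("Failed to get vertex keys: {e}"))
}

fn main() {
    assert_eq!(validate(true), Ok(7));
    assert_eq!(
        validate(false),
        Err("Failed to get vertex keys: vertex key missing".to_string())
    );
}
```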
2201-2210: Don’t print to stderr in library code; gate or use logging
`eprintln!` in normal builds is noisy for consumers. Gate under `debug_assertions` or use `log::warn!`/`tracing::warn!`.

```diff
         skipped_cells += 1;
         #[cfg(debug_assertions)]
         eprintln!(
             "debug: skipping cell {} due to missing vertex keys in facet mapping",
             cell.uuid()
         );
         continue; // Skip cells with missing vertex keys
```

```diff
-        if skipped_cells > 0 {
-            eprintln!(
-                "Warning: Skipped {skipped_cells} cell(s) during facet mapping due to missing vertex keys"
-            );
-        }
+        #[cfg(debug_assertions)]
+        if skipped_cells > 0 {
+            eprintln!(
+                "debug: skipped {skipped_cells} cell(s) during facet mapping due to missing vertex keys"
+            );
+        }
+        // Alternatively:
+        // if skipped_cells > 0 { log::warn!("Skipped {skipped_cells} cell(s) during facet mapping due to missing vertex keys"); }
```

Also applies to: 2243-2247
2361-2367: Repair path is sensible; minor determinism nit
- Fallback to lenient map is pragmatic.
- Sorting by `CellKey` ensures deterministic removal; consider documenting this explicitly in the function comment for reproducibility.

Also applies to: 2372-2376, 2385-2387, 2391-2396, 2408-2414
3788-3791: Updated tests to handle Result from duplicate removal. Good to assert unwraps; consider asserting error-free behavior with an `expect` message for clarity in failures.

Also applies to: 5360-5363
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cspell.json (1 hunks)
- src/core/traits/insertion_algorithm.rs (21 hunks)
- src/core/triangulation_data_structure.rs (31 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/core/traits/insertion_algorithm.rs
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
PR: acgetchell/delaunay#60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.
Applied to files:
cspell.json
🧬 Code graph analysis (1)
src/core/triangulation_data_structure.rs (4)
src/core/collections.rs (2)
- fast_hash_map_with_capacity (714-717), fast_hash_map_with_capacity (826-826)

src/core/facet.rs (5)
- vertex (381-383), cell (336-338), key (466-484), facet_key_from_vertex_keys (585-597), vertices (448-455)

src/core/cell.rs (8)
- cell (2209-2209), cell (2338-2338), cell (2340-2340), cell (2344-2344), cell (2543-2543), cell (2556-2556), uuid (501-503), vertices (474-476)

src/core/vertex.rs (1)
- uuid (458-460)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Codacy Security Scan
- GitHub Check: Code Coverage
- GitHub Check: build (windows-latest)
- GitHub Check: performance-regression
🔇 Additional comments (9)
cspell.json (1)
118-122: Dictionary updates look good; “itional” retention acknowledged
- “derefs” and “doctest” are correctly alphabetized.
- Thanks for keeping “itional” per the project learning.
src/core/triangulation_data_structure.rs (8)
997-1020: Direct key path is clean; good error on missing vertex-key mapping. Implementation reads well and pre-allocates. No action needed.
1826-1835: Good: neighbor build now fails fast on vertex-key retrieval issues. Using `vertex_keys_for_cell_direct()` with a precise `VertexKeyRetrievalFailed` improves diagnosability. LGTM.
1979-1981: Nice: invalidate generation after neighbor clear. Cache invalidation on topology change is correct. LGTM.

2275-2319: Strict facet map builder is solid and appropriately fail-fast. Error mapping and u8 bound checks look correct. LGTM.

2523-2544: Efficient mapping validation (key→UUID, then UUID→key) looks good. Nice optimization and error text. LGTM.

2600-2621: Cell mapping validation mirrors the vertex path correctly. Symmetric checks and messages are consistent. LGTM.

2814-2823: Precompute vertex keys by cell — good reuse and clarity. This reduces repeated lookups in neighbor validation. LGTM.

6765-6865: Excellent targeted tests for insert_*_with_mapping and duplicate UUIDs. These cover the happy path and duplicate rejections for both vertices and cells. LGTM.
Updates the facet cache to use RCU (Read-Copy-Update) for atomic read-modify-write operations, enhancing concurrency and reducing potential race conditions when updating the cache. This change ensures that the cache is updated atomically, preventing data corruption and improving the overall stability of the triangulation process. Also re-exports `Entry` for `FastHashMap` to allow atomic check-and-insert operations via the Entry API, and improves vertex insertion by leveraging `Entry` to avoid redundant checks and ensure thread safety. The error handling for duplicate UUIDs is updated to use the Entry API for atomic insertion and provides a more informative error message that includes the entity type. Additionally, it refactors duplicate cell removal to use the Entry API for atomic check-and-insert operations, ensuring thread safety and preventing race conditions.

Changed: Internal: Renames `entity_type` field to `entity` for clarity
Changed: Internal: Adds `EntityKind` enum for vertex and cell types
Changed: Internal: Improves debugging output for skipped cells

Refs: #86
There was a problem hiding this comment.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/core/collections.rs (1)
862-940: Tests cover key-based round‑trips; the prior temp‑String borrow bug is resolved. The `map(String::as_str)` assertion fixes the temporary-borrow issue called out earlier. Nice coverage using real `SlotMap` keys.

src/core/triangulation_data_structure.rs (1)
2391-2395: LGTM on switching to key existence checks. Replacing redundant fetches with `self.cells.contains_key(...)` aligns with the key‑based path and avoids unnecessary SlotMap reads.

Also applies to: 2409-2416
🧹 Nitpick comments (13)
src/core/traits/facet_cache.rs (2)
136-144: Potential wasted rebuild under contention; consider lazy-build inside rcu or a fast-path CAS. If multiple threads hit the empty-cache path, each computes `build_facet_to_cells_hashmap()`, then one wins the swap. To avoid duplicated work, either:

- Build lazily inside `rcu` only when `old.is_none()` (accepting possible re-execution), or
- Use a two‑step: try `compare_and_swap(None, Some(Arc::new(Default::default())))` as a placeholder, then build once and `store` the real cache.

Happy to draft a version if you want to try either route.
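As a std-only analogy for the "build once, one writer wins" pattern (the real code uses `arc_swap::ArcSwapOption::rcu`; here a `RwLock` double-check stands in for the lock-free swap, and `Vec<u32>` for the facet map):

```rust
use std::sync::{Arc, RwLock};

struct FacetCache {
    slot: RwLock<Option<Arc<Vec<u32>>>>,
}

impl FacetCache {
    fn get_or_build(&self, build: impl Fn() -> Vec<u32>) -> Arc<Vec<u32>> {
        // Fast path: cache already populated.
        if let Some(existing) = self.slot.read().unwrap().clone() {
            return existing;
        }
        // Slow path: re-check under the write lock so only one builder wins
        // and late arrivals reuse the stored Arc instead of rebuilding.
        let mut guard = self.slot.write().unwrap();
        if let Some(existing) = guard.clone() {
            return existing;
        }
        let built = Arc::new(build());
        *guard = Some(built.clone());
        built
    }
}

fn main() {
    let cache = FacetCache { slot: RwLock::new(None) };
    let a = cache.get_or_build(|| vec![1, 2, 3]);
    // Second call must not rebuild; the closure would panic if invoked.
    let b = cache.get_or_build(|| unreachable!("should not rebuild"));
    assert!(Arc::ptr_eq(&a, &b));
}
```

Unlike RCU, the write lock blocks concurrent readers briefly; the trade-off is that the expensive build provably runs once.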
165-172: Store-then-generate ordering is fine, but document the happens-before. ArcSwap operations are SeqCst, so publishing the new cache before updating `cached_generation` is safe. A brief comment noting that readers check both generation and cache presence would prevent future reordering “optimizations.” (woshilapin.github.io)

src/core/collections.rs (1)
531-692: Clarify `KeyBasedNeighborMap` semantics (single vs multi-neighbor). The alias is `FastHashMap<CellKey, CellKey>`, which encodes at most one neighbor per cell. Typical cells have D+1 neighbors; if this map is intended for a specific 1:1 relation (e.g., “preferred neighbor” or a temporary BFS parent), please:

- Rename to reflect the 1:1 nature (e.g., `PreferredNeighborMap`), or
- Switch to `FastHashMap<CellKey, SmallBuffer<CellKey, MAX_PRACTICAL_DIMENSION_SIZE>>` if it represents adjacency.

This avoids misuse and makes intent explicit.
```diff
-pub type KeyBasedNeighborMap = FastHashMap<CellKey, CellKey>;
+// If representing all neighbors per cell:
+pub type KeyBasedNeighborMap =
+    FastHashMap<CellKey, SmallBuffer<CellKey, MAX_PRACTICAL_DIMENSION_SIZE>>;
```

src/core/triangulation_data_structure.rs (3)
1027-1030: Include the cell UUID in the vertex‑mapping error for faster diagnostics. Augment the error to name the owning cell when a vertex UUID isn’t mapped.

```diff
-            .ok_or_else(|| TriangulationValidationError::InconsistentDataStructure {
-                message: format!("Vertex UUID {} not found in mapping", v.uuid()),
-            })?;
+            .ok_or_else(|| TriangulationValidationError::InconsistentDataStructure {
+                message: format!(
+                    "Vertex UUID {} not found in vertex mapping (cell {})",
+                    v.uuid(),
+                    cell.uuid()
+                ),
+            })?;
```
1583-1658: Unify the `add()` error type with the construction/validation errors. `add()` returns `Result<(), &'static str>`, while the rest of the API uses rich enums. Consider returning a proper error (e.g., `TriangulationConstructionError`) to preserve context and avoid string matching in callers.

If changing the signature is temporarily undesirable, introduce a small `AddVertexError` enum and convert from internal errors via `From` to avoid breaking downstream code now and ease a future switch.
917-935: Entry-based insert is correct; consider a minimal rollback comment. The Entry pattern prevents duplicate UUIDs. Add a brief comment noting that if `vertices.insert` panics, the map isn’t updated (safe), and if `e.insert` panics, the SlotMap still owns the vertex. Not actionable now, just documenting semantics.

src/core/traits/insertion_algorithm.rs (7)
24-46: Error taxonomy reads well; optional: use thiserror. Switching to `#[derive(thiserror::Error)]` would reduce boilerplate and standardize the Display/Error impls.
642-653: Avoid per-iteration allocation; tighten the vertex count check.

- Move the `vertex_points` allocation outside the loop; reuse it with `clear()`.
- Treat non-`D+1` vertex counts as degenerate to keep `degenerate_count` accurate.

```diff
-    for (cell_key, cell) in tds.cells() {
+    let mut vertex_points = Vec::with_capacity(D + 1);
+    for (cell_key, cell) in tds.cells() {
         // Skip cells with insufficient vertices
-        if cell.vertices().len() < D + 1 {
-            continue;
-        }
+        let v_count = cell.vertices().len();
+        if v_count != D + 1 {
+            degenerate_count += 1;
+            continue;
+        }
-        cells_tested += 1;
-        // Preallocate with exact capacity to avoid reallocation
-        let mut vertex_points = Vec::with_capacity(D + 1);
+        cells_tested += 1;
+        // Reuse buffer
+        vertex_points.clear();
         vertex_points.extend(cell.vertices().iter().map(|v| *v.point()));
```
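The hoisted-buffer pattern in isolation, as a tiny std-only sketch (function and data are invented for the demo):

```rust
// One scratch Vec is allocated before the loop and cleared each iteration,
// so its capacity is reused instead of reallocated per element.
fn sum_of_rows(rows: &[Vec<i64>]) -> Vec<i64> {
    let mut scratch: Vec<i64> = Vec::with_capacity(4);
    let mut sums = Vec::with_capacity(rows.len());
    for row in rows {
        scratch.clear(); // keeps the previously grown capacity
        scratch.extend(row.iter().map(|x| x * 2));
        sums.push(scratch.iter().sum());
    }
    sums
}

fn main() {
    assert_eq!(sum_of_rows(&[vec![1, 2], vec![3]]), vec![6, 6]);
}
```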
723-765: Boundary facet identification logic is solid; pre-reserve for fewer reallocations. Minor perf: reserve `boundary_facets` with `boundary_facet_specs.len()`.

```diff
     // Now create the actual Facet objects for the identified boundary facets
+    boundary_facets.reserve(boundary_facet_specs.len());
```
1041-1044: Don’t panic on malformed facet map; return a validation error.
Replace `expect` with graceful handling.

```diff
-        let &(cell_key, facet_index) = cells
-            .first()
-            .expect("boundary facet must have exactly one adjacent cell");
+        let Some(&(cell_key, facet_index)) = cells.first() else {
+            return Err(TriangulationValidationError::InconsistentDataStructure {
+                message: "Boundary facet had no adjacent cell".to_string(),
+            });
+        };
```
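The `let ... else` pattern in isolation, std-only (error type and data invented for the demo):

```rust
#[derive(Debug, PartialEq)]
enum ValidationError {
    InconsistentDataStructure { message: String },
}

// Turn a would-be panic (`.first().expect(...)`) into a structured error.
fn first_adjacent(cells: &[(u64, u8)]) -> Result<(u64, u8), ValidationError> {
    let Some(&(cell_key, facet_index)) = cells.first() else {
        return Err(ValidationError::InconsistentDataStructure {
            message: "Boundary facet had no adjacent cell".to_string(),
        });
    };
    Ok((cell_key, facet_index))
}

fn main() {
    assert_eq!(first_adjacent(&[(3, 1), (4, 0)]), Ok((3, 1)));
    assert!(first_adjacent(&[]).is_err());
}
```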
1350-1353: Avoid expect in visibility path too.
Same rationale as the fallback path; return a structured error instead of panicking.

```diff
-        let &(cell_key, facet_index) = cells
-            .first()
-            .expect("boundary facet must have exactly one adjacent cell");
+        let Some(&(cell_key, facet_index)) = cells.first() else {
+            return Err(TriangulationValidationError::InconsistentDataStructure {
+                message: "Boundary facet had no adjacent cell".to_string(),
+            });
+        };
```
1528-1533: Don’t silently swallow vertex insertion mapping errors.
At least assert in debug to catch unexpected states.

```diff
-    if let Entry::Vacant(_) = tds.uuid_to_vertex_key.entry(vertex.uuid()) {
-        // Note: We silently ignore duplicate UUID errors here since we're checking first
-        // If the check passes but insertion fails, it means concurrent modification
-        // which shouldn't happen in single-threaded context
-        let _ = tds.insert_vertex_with_mapping(*vertex);
-    }
+    if let Entry::Vacant(_) = tds.uuid_to_vertex_key.entry(vertex.uuid()) {
+        let res = tds.insert_vertex_with_mapping(*vertex);
+        debug_assert!(
+            res.is_ok(),
+            "insert_vertex_with_mapping unexpectedly failed: {res:?}"
+        );
+    }
```
1849-1851: Tests add good coverage for interior/exterior and error cases. Consider adding one test where no cells are testable (e.g., cells with the wrong vertex count) to distinguish `NoCells` from “no valid cells”. Optional.

Also applies to: 1883-1885, 1914-1921, 1924-1930, 1936-1936, 2192-2240
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/core/collections.rs (9 hunks)
- src/core/traits/facet_cache.rs (1 hunks)
- src/core/traits/insertion_algorithm.rs (21 hunks)
- src/core/triangulation_data_structure.rs (35 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (3)
src/core/collections.rs (1)
src/core/triangulation_data_structure.rs (1)
default(206-208)
src/core/traits/insertion_algorithm.rs (3)
src/geometry/predicates.rs (1)
- insphere (346-416)

src/geometry/traits/coordinate.rs (4)
- default_tolerance (521-521), default_tolerance (526-528), default_tolerance (532-534), new (643-643)

src/core/triangulation_data_structure.rs (4)
- new (1410-1442), cells (519-521), number_of_cells (782-784), default (206-208)
src/core/triangulation_data_structure.rs (4)
src/core/collections.rs (2)
fast_hash_map_with_capacity(731-734)fast_hash_map_with_capacity(843-843)src/core/facet.rs (5)
vertex(381-383)cell(336-338)key(466-484)facet_key_from_vertex_keys(585-597)vertices(448-455)src/core/cell.rs (8)
cell(2209-2209)cell(2338-2338)cell(2340-2340)cell(2344-2344)cell(2543-2543)cell(2556-2556)uuid(501-503)vertices(474-476)src/core/vertex.rs (1)
uuid(458-460)
🪛 GitHub Check: build (ubuntu-latest)
src/core/triangulation_data_structure.rs
[failure] 2214-2214:
variable skipped_cells is assigned to, but never used
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Codacy Security Scan
- GitHub Check: build (windows-latest)
- GitHub Check: Code Coverage
🔇 Additional comments (19)
src/core/traits/facet_cache.rs (1)
136-152: RCU use looks correct; generation update gated on first set. `ArcSwapOption::rcu` returns the previous value; checking `result.is_none()` correctly detects the initial set-from-None case before storing the generation. Good use of Acquire/Release around the generation gates. (docs.rs)

src/core/collections.rs (2)
ArcSwapOption::rcureturns the previous value; checkingresult.is_none()correctly detects the initial set-from-None case before storing the generation. Good use of Acquire/Release around the generation gates. (docs.rs)src/core/collections.rs (2)
159-175: Re‑exporting `Entry` is helpful for ergonomics. It makes `FastHashMap` entry APIs discoverable without extra imports. Nice.
747-764: Docs are clear on the Phase‑1 key‑based capacity helpers. Good examples contrasting UUID vs key usage; no issues.
src/core/triangulation_data_structure.rs (3)
2001-2067: Good: incident‑cell assignment uses the key path and precise errors. Using `vertex_keys_for_cell_direct` with explicit `InconsistentDataStructure` messages improves debuggability and avoids silent drops.
1719-1732: Construction state update placement is correct. Setting `construction_state = Constructed` after successful triangulation keeps the serialization/validation paths consistent.
2294-2335: Strict facet map builder looks solid. The Result-returning variant is precise, with clear bounds checks on `u8` facet indices and no silent skips.
7-7: LGTM: imports align with key-based collections.
47-71: Display messages: concise and actionable.
73-73: Error impl OK.
507-513: Good: buffer reuse in is_vertex_interior.
614-627: Docs and Result-based API are clear.
670-677: All-cells-bad guard now correct (uses cells_tested).
679-685: Degenerate thresholding reads well.
862-895: Error mapping in cavity-based path is clear and user-focused.
1219-1253: Initial simplex creation: mapping + insertion flow looks correct.
1287-1289: Good: remove_duplicate_cells now propagates errors.
1653-1653: Nice: finalize_after_insertion removes duplicates early.
1569-1572: MSRV confirmed — Rust toolchain = 1.89.0; no change required.
rust-toolchain pins channel = "1.89.0", so Result::is_ok_and (stabilized in 1.70) is supported; the code uses .is_ok() already.
574-577: No action needed — CoordinateScalar only implemented for floats
CoordinateScalar::default_tolerance() is defined and implemented for f32 and f64 (src/geometry/traits/coordinate.rs); no CoordinateScalar impls for integer types were found, so the integer-zero-margin path is not applicable.
Removes the `KeyBasedNeighborMap` collection as it was unused, incorrectly defined, and unnecessary for the planned migration. This simplifies the codebase by eliminating an obsolete data structure. The actual neighbor storage is handled directly in `Cell.neighbors` which will be updated in Phase 1. `CellNeighborsMap` is used for temporary neighbor collections during algorithms.
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/core/traits/insertion_algorithm.rs (1)
753-767: Don’t swallow Facet::new errors — propagate as validation errors. Ignoring `Facet::new` failures can mask topology/data issues and later surface as “No boundary facets” noise.
Apply:
```diff
-            if let Ok(facet) = Facet::new(cell.clone(), opposite_vertex) {
-                boundary_facets.push(facet);
-            }
+            let facet = Facet::new(cell.clone(), opposite_vertex).map_err(|e| {
+                TriangulationValidationError::InconsistentDataStructure {
+                    message: format!("Failed to construct boundary facet: {e}"),
+                }
+            })?;
+            boundary_facets.push(facet);
```

src/core/triangulation_data_structure.rs (1)
2104-2143: remove_duplicate_cells leaves stale topology; rebuild neighbors/incident cells when removals occur. After deletions, vertices may point to removed cells via `incident_cell`, and neighbor arrays can reference removed keys. Recompute to maintain invariants.

```diff
         if duplicate_count > 0 {
-            self.generation.fetch_add(1, Ordering::Relaxed);
+            // Rebuild topology to avoid stale references
+            self.assign_neighbors()?;
+            self.assign_incident_cells()?;
         }
         Ok(duplicate_count)
```
♻️ Duplicate comments (2)
src/core/collections.rs (1)
840-909: Round‑trip tests avoid the temporary String borrow pitfall. The `map(String::as_str)` pattern fixes the prior lifetime issue; tests cover basic ops for the key-based types.

src/core/triangulation_data_structure.rs (1)
src/core/triangulation_data_structure.rs (1)
2233-2249: Nice: fixed the debug-only `skipped_cells` build break. Guarding the declaration and increments under `cfg(debug_assertions)` resolves the earlier CI failure.

Also applies to: 2284-2290
🧹 Nitpick comments (17)
src/core/traits/facet_cache.rs (3)
129-155: RCU build can still double-compute under heavy contention. Two threads can observe None and both compute `build_facet_to_cells_hashmap()`; only one wins the CAS, and the loser drops its freshly built map. Not a correctness bug, but it wastes work on hot paths.

Consider memoizing the built Arc within the closure attempt to avoid recomputation on retry:

```diff
-        let built_cache = self.facet_cache().rcu(|old| {
-            old.as_ref().map_or_else(
-                || {
-                    let new_cache = tds.build_facet_to_cells_hashmap();
-                    Some(Arc::new(new_cache))
-                },
-                |existing| Some(existing.clone()),
-            )
-        });
+        // Avoid recomputation across potential internal RCU retries.
+        let mut built: Option<Arc<FacetToCellsMap>> = None;
+        let built_cache = self.facet_cache().rcu(|old| {
+            if let Some(existing) = old {
+                return Some(existing.clone());
+            }
+            let arc = built
+                .get_or_insert_with(|| Arc::new(tds.build_facet_to_cells_hashmap()));
+            Some(arc.clone())
+        });
```

This way, even if RCU retries the closure, the expensive build happens at most once per caller.
166-178: Publish-order comments are right; add a reader-side note (Acquire) for `cached_generation`. Writers store the cache (SeqCst), then store the generation (Release). Readers use Acquire loads for both generations; good. Add a brief comment near the generation loads (Lines 122–124) stating that the Acquire pairs with the Release on `cached_generation` to prevent reordering-visible torn publication.
No code change required; doc comment only.
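The generation-gating scheme those comments describe can be sketched std-only (`AtomicU64` stands in for both the TDS topology counter and the published cache generation; the real code swaps an `Arc` cache alongside):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct GenerationGate {
    topology_generation: AtomicU64,
    cached_generation: AtomicU64,
}

impl GenerationGate {
    fn is_cache_fresh(&self) -> bool {
        // Acquire pairs with the Release store in publish(), so a reader
        // that sees the fresh generation also sees the published cache.
        self.cached_generation.load(Ordering::Acquire)
            == self.topology_generation.load(Ordering::Acquire)
    }

    fn mutate_topology(&self) {
        // Any topology change bumps the counter, invalidating the cache.
        self.topology_generation.fetch_add(1, Ordering::Release);
    }

    fn publish(&self) {
        // In the real code the cache Arc is swapped first, then the
        // generation is stored, so readers never observe a fresh
        // generation paired with a stale cache.
        self.cached_generation.store(
            self.topology_generation.load(Ordering::Acquire),
            Ordering::Release,
        );
    }
}

fn main() {
    let gate = GenerationGate {
        topology_generation: AtomicU64::new(0),
        cached_generation: AtomicU64::new(0),
    };
    assert!(gate.is_cache_fresh());
    gate.mutate_topology();
    assert!(!gate.is_cache_fresh());
    gate.publish();
    assert!(gate.is_cache_fresh());
}
```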
201-206: Invalidate order: keep “cache None” before the generation reset (good); consider a fence after `store(None)` only if future readers change. Given ArcSwap’s SeqCst store, you’re fine. If future refactors relax it, add an explicit atomic fence to maintain the happens-before. For now, just a note.
src/core/collections.rs (1)
531-669: Phase‑1 key-based aliases look coherent; keep the neighbor-map removal note synced with the `Cell.neighbors` migration. The comment block is helpful. When you flip `Cell.neighbors` to Option<Vec<Option>>, revisit the `CellNeighborsMap` docs to prevent drift.
src/core/traits/insertion_algorithm.rs (7)
485-496: Avoid per-cell Vec allocation churn with a reusable buffer. You already preallocate, then clear per iteration; good! Minor nit: reserve exactly D+1 once outside the loop to avoid potential growth when D changes via const generics in other contexts.
551-562: Integer T margin fallback may be zero. For integer CoordinateScalar implementations, `default_tolerance` might be zero, which would nullify the margin expansion.

Consider clamping to at least 1 unit for integer types:

```diff
-    let margin_factor: T = cast(MARGIN_FACTOR).unwrap_or_else(|| T::default_tolerance());
+    let margin_factor: T = cast(MARGIN_FACTOR)
+        .unwrap_or_else(|| T::one().checked_div(&cast(10).unwrap_or(T::one())).unwrap_or(T::one()));
```

Or gate this path behind a Float bound when integers aren’t meaningful here.
610-614: NoCells is returned only when the slots are empty, not when all cells are degenerate. If the tds has cells but none with D+1 vertices, `cells_tested` stays 0 and you’ll hit TooManyDegenerateCells with `total_tested = 0`. That’s acceptable, but the error message “Too many degenerate circumspheres (k/0)” reads oddly.
Tweak the message when total_tested == 0 to say “All candidate cells were degenerate (k)”.
651-666: The “too many degenerate cells” threshold uses >50%; consider making it tunable. Hardcoding `degenerate_count * 2 > cells_tested` is fine, but some datasets benefit from a tighter threshold.

Expose a const or parameter to tune the threshold for algorithms that want stricter or looser behavior.
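A std-only sketch of a tunable version of that check, using cross-multiplied integer arithmetic so no float comparison is needed (`threshold_permille` is a hypothetical parameter, not the crate's API):

```rust
// degenerate/tested > threshold_permille/1000, cross-multiplied to stay
// in integers. threshold_permille = 500 reproduces the hardcoded
// `degenerate_count * 2 > cells_tested` (>50%) behavior.
fn too_many_degenerate(degenerate: usize, tested: usize, threshold_permille: usize) -> bool {
    degenerate * 1000 > tested * threshold_permille
}

fn main() {
    // 50% threshold: 3 of 5 degenerate trips it, 2 of 5 does not.
    assert!(too_many_degenerate(3, 5, 500));
    assert!(!too_many_degenerate(2, 5, 500));
    // With zero tested cells, any degenerate count trips the check.
    assert!(too_many_degenerate(1, 0, 500));
}
```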
1216-1222: ensure_vertex_in_tds: the Entry::Vacant check is fine; consider handling insert errors in release builds. Currently this only debug_asserts on failure; in release, a silent failure would proceed with an unregistered vertex.
Propagate or log a TriangulationConstructionError on failure instead of relying solely on debug_assert.
1556-1559: insert_cell_with_mapping result intentionally ignored — document why. You note duplicate UUIDs as the failure path; add a short comment that this returns false for dupes and is handled by the caller (e.g., not counted as created).
2179-2195: Error tests cover NoCells; consider adding a degenerate-only test (cells_tested == 0, degenerate_count > 0). This ensures the TooManyDegenerateCells path with total_tested = 0 behaves as intended.
I can add a focused test that constructs a cell with wrong vertex count to trigger this path.
src/core/triangulation_data_structure.rs (6)
925-943: Clarify that the low‑level insert does no coordinate‑duplication checks, and tighten the docs. `insert_vertex_with_mapping` intentionally skips coordinate-dup checks. Make this explicit to prevent misuse; callers should prefer `add()` for user‑facing insertions.

```diff
     /// The `VertexKey` that can be used to access the inserted vertex.
     ///
     /// # Errors
     ///
     /// Returns `TriangulationConstructionError::DuplicateUuid` if a vertex with the
     /// same UUID already exists in the triangulation.
+    ///
+    /// Note:
+    /// - This method does NOT check for duplicate coordinates. For public API use,
+    ///   prefer `add()`, which enforces coordinate-uniqueness.
```
971-989: Cell insertion lacks structural validation; confirm upstream guarantees. This helper bypasses structural checks (e.g., vertices exist in `uuid_to_vertex_key`, cell not duplicate). If upstream always guarantees the invariants, fine; otherwise, add lightweight debug assertions.

Proposed debug assertions:

```diff
 pub(crate) fn insert_cell_with_mapping(
     &mut self,
     cell: Cell<T, U, V, D>,
 ) -> Result<CellKey, TriangulationConstructionError> {
     let cell_uuid = cell.uuid();
+    debug_assert!(cell.vertices().len() == D + 1, "cell must be a D-simplex");
+    debug_assert!(
+        cell.vertices().iter().all(|v| self.uuid_to_vertex_key.contains_key(&v.uuid())),
+        "all cell vertices must already exist in the vertex mapping"
+    );
```
2221-2291: The lenient facet map silently skips cells; surface this in the API or telemetry. Current behavior logs only in debug. Consider returning the skip count, or emitting a `tracing::warn!` behind a feature flag for production observability.
3017-3032: PartialEq sorting on floats with partial_cmp can mis-handle NaNs. `partial_cmp(...).unwrap_or(Ordering::Equal)` treats incomparable (NaN) coordinates as equal, risking false equality.

Options:

- For f32/f64 builds, compare with `total_cmp` behind a trait-bound specialization.
- Reject NaNs at vertex construction (preferred for geometry libs) and document it.
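A std-only demonstration of the two comparison strategies (data invented for the demo): `partial_cmp` with `unwrap_or(Equal)` collapses NaN comparisons, while `f64::total_cmp` gives a total order that places NaN deterministically after the finite values.

```rust
use std::cmp::Ordering;

// The pattern flagged above: NaN comparisons silently become "equal".
fn sort_lossy(values: &mut [f64]) {
    values.sort_by(|a, b| a.partial_cmp(b).unwrap_or(Ordering::Equal));
}

// IEEE 754 totalOrder: every pair of floats is comparable, NaNs included.
fn sort_total(values: &mut [f64]) {
    values.sort_by(f64::total_cmp);
}

fn main() {
    let mut v = [2.0, f64::NAN, 1.0];
    sort_total(&mut v);
    // total_cmp orders (positive) NaN after all finite values.
    assert_eq!(v[0], 1.0);
    assert_eq!(v[1], 2.0);
    assert!(v[2].is_nan());

    let mut w = [2.0, 1.0];
    sort_lossy(&mut w);
    assert_eq!(w, [1.0, 2.0]);
}
```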
4173-4185: Seed the RNG in tests for reproducibility. Non‑deterministic tests can flake; use a fixed seed.

```diff
-    let mut rng = rand::rng();
+    use rand::{SeedableRng, rngs::StdRng};
+    let mut rng = StdRng::seed_from_u64(42);
```
1595-1665: No double-insert — algorithm implementations expect the vertex already in the TDS; clarify the API. Verified: IncrementalBowyerWatson and RobustBowyerWatson do not call `tds.vertices.insert` (they operate on the passed Vertex by reference), so `add()` inserting into the SlotMap before calling `algorithm.insert_vertex(...)` does not cause a duplicate insertion.
Actionable recommendation: document the InsertionAlgorithm contract (that the TDS must already contain the vertex) or change the trait to accept &Vertex or a VertexKey/UUID to make ownership/semantics explicit.
Locations: src/core/triangulation_data_structure.rs::add (≈ lines 1621–1664); implementations in src/core/algorithms/bowyer_watson.rs and src/core/algorithms/robust_bowyer_watson.rs.
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- src/core/collections.rs (9 hunks)
- src/core/traits/facet_cache.rs (3 hunks)
- src/core/traits/insertion_algorithm.rs (21 hunks)
- src/core/triangulation_data_structure.rs (39 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (3)
src/core/triangulation_data_structure.rs (4)
src/core/collections.rs (2)
- fast_hash_map_with_capacity (708-711), fast_hash_map_with_capacity (820-820)

src/core/facet.rs (5)
- vertex (381-383), cell (336-338), key (466-484), facet_key_from_vertex_keys (585-597), vertices (448-455)

src/core/cell.rs (8)
- cell (2209-2209), cell (2338-2338), cell (2340-2340), cell (2344-2344), cell (2543-2543), cell (2556-2556), uuid (501-503), vertices (474-476)

src/core/vertex.rs (1)
- uuid (458-460)
src/core/traits/insertion_algorithm.rs (4)
src/core/boundary.rs (1)
boundary_facets(88-131)src/geometry/traits/coordinate.rs (3)
default_tolerance(521-521)default_tolerance(526-528)default_tolerance(532-534)src/geometry/predicates.rs (1)
insphere(346-416)src/core/triangulation_data_structure.rs (3)
cells(527-529)number_of_cells(790-792)default(206-208)
src/core/traits/facet_cache.rs (1)
src/core/triangulation_data_structure.rs (1)
new(1422-1454)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: performance-regression
- GitHub Check: Codacy Security Scan
- GitHub Check: build (windows-latest)
- GitHub Check: Code Coverage
🔇 Additional comments (13)
src/core/traits/facet_cache.rs (1)
146-154: Generation update tie-in relies on RCU return semantics
You key the generation update on built_cache.is_none(). That’s correct if (and only if) rcu returns the previous value from the final successful attempt. If ArcSwap changes ever alter that guarantee, the generation may not advance for the builder.
Please confirm arc_swap::ArcSwapOption::rcu returns the old pointer from the last attempt. If not, gate the update on load().is_some() before RCU vs after (by comparing pre/post) to robustly detect “we created the first cache”.
src/core/collections.rs (4)
159-175: Re-exporting Entry helps ergonomics
The Entry re-export aligns with FastHashMap usage and avoids std imports at call sites. Nice touch.
247-267: FacetToCellsMap value uses SmallBuffer<(CellKey, FacetIndex), 2> — good stack locality
Design matches the “1 or 2 sharers” invariant; solid for cache lines.
286-288: CellNeighborsMap still UUID-based; Phase‑1 note is clear
Docs explain this is a temporary state before switching to CellKey. Ensure cross-file callers don’t assume keys are CellKey yet.
724-741: Capacity helpers: good examples for UUID vs key-based
Examples make intent obvious. No issues.
src/core/traits/insertion_algorithm.rs (2)
1018-1028: Boundary facet guard is solid
Using first() with explicit error keeps invariants clear. Good defensive programming.
1640-1655: Post-insertion finalize orders look correct
remove_duplicate_cells → fix_invalid_facet_sharing → assign_neighbors → assign_incident_cells is a safe sequence. Nice.
src/core/triangulation_data_structure.rs (6)
1859-1867: Good: direct key-path eliminates UUID lookups in hot facet loop
Using vertex_keys_for_cell_direct here removes UUID→Key overhead in the neighbor build path.
2012-2013: Good: invalidate caches on neighbor clear
Bumping generation after clear_all_neighbors() avoids stale cache consumers.
2318-2362: Strict facet map is solid
Fail‑fast on missing vertex keys and u8 overflow keeps topology consistent. Good separation from lenient path.
2560-2592: Mapping validation looks good; key→UUID checked before UUID→key
The two‑way check is efficient and correct.
6825-6863: insert_vertex_with_mapping tests: good coverage
Covers happy path and duplicate UUID error; assertions are tight.
6866-6925: insert_cell_with_mapping tests: good smoke coverage
Validates mapping and basic invariants post‑insert. Looks good.
Propagates errors that may occur during vertex insertion within the robust Bowyer-Watson algorithm. This ensures that any failures in adding a vertex to the triangulation data structure are properly handled and reported, preventing inconsistent states. Also, improves degenerate cell detection by refining the logic for determining when the number of degenerate cells makes the results unreliable. It introduces a threshold to avoid false positives. Finally, enhances facet cache management, adding ordering constraints to prevent torn reads, ensuring cache and generation updates are consistent.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/core/triangulation_data_structure.rs (1)
1925-1931: Guard SmallBuffer capacity to avoid panic for large D
neighbors uses SmallBuffer<_, MAX_PRACTICAL_DIMENSION_SIZE>; resizing to vertex_count can overflow the inline capacity if D+1 > MAX_PRACTICAL_DIMENSION_SIZE. Add a hard check and fail fast.
for (cell_key, cell) in &self.cells { let vertex_count = cell.vertices().len(); - let mut neighbors = SmallBuffer::with_capacity(vertex_count); + if vertex_count > MAX_PRACTICAL_DIMENSION_SIZE { + return Err(TriangulationValidationError::InconsistentDataStructure { + message: format!( + "neighbors buffer overflow: vertex_count={} exceeds MAX_PRACTICAL_DIMENSION_SIZE={}", + vertex_count, MAX_PRACTICAL_DIMENSION_SIZE + ), + }); + } + let mut neighbors = SmallBuffer::with_capacity(vertex_count); neighbors.resize(vertex_count, None); cell_neighbors.insert(cell_key, neighbors); }
src/core/algorithms/robust_bowyer_watson.rs (1)
841-852: Fail fast on invalid facet_index instead of silently skipping
If facet_index is out of bounds, returning an error surfaces TDS corruption; silently continuing can mask bugs.
- if idx < facets.len() { - let facet = &facets[idx]; - // Test visibility using robust orientation predicates with fallback - if self.is_facet_visible_from_vertex_robust(tds, facet, vertex, cell_key) { - visible_facets.push(facet.clone()); - } - } + if idx < facets.len() { + let facet = &facets[idx]; + if self.is_facet_visible_from_vertex_robust(tds, facet, vertex, cell_key) { + visible_facets.push(facet.clone()); + } + } else { + return Err(TriangulationValidationError::InconsistentDataStructure { + message: format!( + "facet_index {idx} out of bounds ({} facets) during visibility computation", + facets.len() + ), + }); + }
🧹 Nitpick comments (12)
src/core/triangulation_data_structure.rs (8)
1668-1676: Deterministic initial simplex vertices
Iterating SlotMap::values() yields non-stable order across runs. Sort vertices before building the first cell to ensure reproducible topology.
- let all_vertices: Vec<_> = self.vertices.values().copied().collect(); + let mut all_vertices: Vec<_> = self.vertices.values().copied().collect(); + all_vertices.sort_unstable_by_key(Vertex::uuid); let cell = CellBuilder::default() .vertices(all_vertices) .build()
1050-1077: Avoid per-call heap allocations in hot path
vertex_keys_for_cell_direct allocates a Vec each call. Consider using a stack-backed small buffer or caller-provided scratch to reduce pressure in facet/neighbor loops.
Example (non-breaking first step): reserve exact D+1 and reuse local buffer in callers.
- let mut keys = Vec::with_capacity(cell.vertices().len()); + let mut keys = Vec::with_capacity(D + 1);
Longer term: add a variant that fills &mut SmallBuffer<VertexKey, MAX_PRACTICAL_DIMENSION_SIZE>.
2157-2170: HashMap<Vec> key reuse: avoid cloning on duplicate path
unique_cells.entry(vertex_keys) moves the Vec; on duplicate you don't need to keep another owned Vec. Current code is fine; if this shows up in profiles, hash a fixed-size [u64; D+1] or use stable facet_key to reduce heap traffic.
2480-2491: Set conversion creates extra alloc; reuse a single buffer
In fix_invalid_facet_sharing, converting cell_vertex_keys_vec into a set each iteration allocates. Cache a VertexKeySet per cell in a prepass (like validate_neighbors_internal does) or reuse a set by clear+extend.
- let cell_vertex_keys_vec = - self.vertex_keys_for_cell_direct(cell_key)?; - let cell_vertex_keys: VertexKeySet = - cell_vertex_keys_vec.into_iter().collect(); + let cell_vertex_keys_vec = self.vertex_keys_for_cell_direct(cell_key)?; + // Reuse a preallocated set if available; otherwise: + let cell_vertex_keys: VertexKeySet = cell_vertex_keys_vec.iter().copied().collect();
979-1006: insert_cell_with_mapping: validate D+1 and vertex existence — good; add a debug assert
Add a debug_assert_eq!(cell.vertices().len(), D + 1) before the error path for quicker failure in dev.
+ debug_assert_eq!(cell.vertices().len(), D + 1); if cell.vertices().len() != D + 1 {
Also applies to: 1009-1020
1634-1649: O(N) coordinate-duplicate scan in add()
Acceptable for now; consider a stable-hash cache (e.g., hash of quantized coords) behind a feature flag if this path is hot.
If you want, I can sketch a FastHashMap<[OrderedFloat; D], VertexKey> variant for non-NaN builds.
Also applies to: 1641-1650
3045-3050: NaN handling in PartialEq is surprising
Treating NaN as equal to preserve reflexivity is non-standard and may hide issues. If NaN is invalid by design, consider asserting/rejecting earlier and using unwrap() here, or gate this behavior behind cfg(test).
Also applies to: 3074-3091
1896-1916: assign_neighbors: conversion to UUIDs creates an intermediate Vec
You already note this in comments. If this shows up in profiles, consider storing neighbors as Option in Cell and expose UUIDs via accessors.
Also applies to: 1975-1985
src/core/algorithms/robust_bowyer_watson.rs (1)
403-413: Avoid O(n²) merge on bad_cells; use a set to dedup in O(n)
contains() inside a loop turns merging robust_bad_cells into O(n²) for large cavities. Track membership with CellKeySet.
- // Add any cells found by robust method that weren't found by the standard method - for cell_key in robust_bad_cells { - if !bad_cells.contains(&cell_key) { - bad_cells.push(cell_key); - } - } + // Add any cells found by robust method that weren't found by the standard method + let mut seen: CellKeySet = bad_cells.iter().copied().collect(); + for cell_key in robust_bad_cells { + if seen.insert(cell_key) { + bad_cells.push(cell_key); + } + }
src/core/traits/insertion_algorithm.rs (3)
498-507: Nit: redundant reserve_exact inside loop
Capacity is preallocated to D+1; clearing is enough. The extra reserve_exact adds branches.
- // Reserve exact capacity if not already present (avoids growing) - if vertex_points.capacity() < D + 1 { - vertex_points.reserve_exact((D + 1).saturating_sub(vertex_points.capacity())); - }
1393-1417: Propagate invalid facet index instead of silently skipping
As in robust path, treat out‑of‑range indices as structural errors.
- if let Some(facet) = facets.get(<usize as From<_>>::from(facet_index)) { - // Test visibility using proper orientation predicates - if Self::is_facet_visible_from_vertex_impl(tds, facet, vertex, cell_key) { - visible_facets.push(facet.clone()); - } - } + let idx = <usize as From<_>>::from(facet_index); + if let Some(facet) = facets.get(idx) { + if Self::is_facet_visible_from_vertex_impl(tds, facet, vertex, cell_key) { + visible_facets.push(facet.clone()); + } + } else { + return Err(TriangulationValidationError::InconsistentDataStructure { + message: format!("facet_index {idx} out of bounds ({} facets)", facets.len()), + }); + }
905-937: Minor: visibility test could degrade to integer math if T is integral
fallback_visibility_heuristic divides and squares in T; with integral T this truncates. If CoordinateScalar allows ints, consider constraining T to floating scalars here or compute in f64 locally.
Also applies to: 941-949
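To make the truncation concern concrete, here is a tiny std-only illustration (the helper names are hypothetical, not the crate's API) of why a ratio computed in an integral scalar loses information while a local f64 computation keeps it:

```rust
// Hypothetical helpers, not the crate's API: integer division truncates
// toward zero, so a visibility-style ratio like 3/4 collapses to 0.
fn ratio_in_t(num: i64, den: i64) -> i64 {
    num / den // truncates: 3 / 4 == 0
}

// Converting to f64 locally preserves the fractional part.
fn ratio_in_f64(num: i64, den: i64) -> f64 {
    num as f64 / den as f64 // 3 / 4 == 0.75
}

fn main() {
    assert_eq!(ratio_in_t(3, 4), 0);
    assert_eq!(ratio_in_f64(3, 4), 0.75);
}
```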
🔇 Additional comments (15)
src/core/triangulation_data_structure.rs (6)
2273-2286: Debug-only skipped_cells is well-guarded
The cfg(debug_assertions) gating avoids the prior build failure. Nice cleanup.
2369-2413: Strict facet map builder: good fail-fast behavior
Returning an error on any missing vertex keys is the right call for validation paths.
906-922: insert_vertex_with_mapping: atomic semantics look solid
Entry API avoids races/inconsistency; docstrings clearly scope coordinate-uniqueness to add(). LGTM.
Ensure all test helpers use this method rather than mutating uuid_to_vertex_key directly except where intentionally corrupting state.
Also applies to: 929-947
2609-2646: Mapping consistency checks are thorough
Both directions verified with entity context in errors. Nice improvement to diagnosability.
Also applies to: 2689-2726
2049-2051: Good: generation bump on topology changes
Cache invalidation via AtomicU64 is consistently applied here and elsewhere.
2455-2460: Lenient fallback in fix_invalid_facet_sharing is a good resilience choice
Graceful degradation when strict build fails is appropriate for repair paths.
src/core/algorithms/robust_bowyer_watson.rs (4)
272-275: Good: ensure vertex registration before cell creation
Propagating errors from ensure_vertex_in_tds avoids silent topology corruption.
283-293: Good: explicit finalization error context
Mapping finalize_after_insertion errors to FinalizationFailed with details will speed up triage.
327-329: Good: consistent vertex registration in hull-extension path
Prevents missing-vertex mapping during exterior insertions.
550-556: Good: switch to CellKeySet for membership checks
This aligns with key-based collections and reduces allocs compared to ad‑hoc hash sets.
src/core/traits/facet_cache.rs (2)
122-127: Good: Acquire loads on generation checks
Matches the later Release stores; prevents torn reads of (cache, generation).
171-183: Good: ordering commentary and SeqCst store on cache update
Strong store before generation update ensures readers see a consistent pair.
src/core/traits/insertion_algorithm.rs (3)
1083-1106: Good: conservative fallback insertion with immediate finalization
This is a safe, bounded attempt path that won’t over-correct.
1331-1333: Good: unify finalization in construction and insertion flows
Centralizing duplicate-removal, facet sharing, neighbors, and incident cells reduces drift.
Also applies to: 1715-1731
1679-1690: Resolved — callers already invalidate TDS generation after removals
remove_bad_cells mutates cells/uuid map but the insertion flow rebuilds topology and calls methods that increment the TDS generation (tds.remove_duplicate_cells(), fix_invalid_facet_sharing(), assign_neighbors(), assign_incident_cells()), so facet caches are invalidated. See call sites: src/core/traits/insertion_algorithm.rs:958, src/core/algorithms/robust_bowyer_watson.rs:271; generation updates at src/core/triangulation_data_structure.rs:1684, 1712, 1997, 2050, 2191, 2549; insertion-flow topology rebuilds at src/core/traits/insertion_algorithm.rs:1331-1350 and 1716-1730.
- Fixes a potential out-of-bounds access in `is_facet_visible_from_vertex_robust` by adding a check to ensure the facet index is within the valid range. This prevents potential panics and data structure corruption.
- Improves handling of degenerate cells during bad cell detection. Now utilizes a dedicated `TooManyDegenerateCellsError` struct and adjusts the error handling logic to use the robust method as a fallback when too many degenerate cells are encountered, preventing potential infinite loops and improving stability.
- Adds further robustness to the facet cache by retrying cache retrieval if it's invalidated concurrently
- Improves initial simplex selection by sorting vertices for deterministic behavior
Refs: None
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/core/traits/insertion_algorithm.rs (1)
693-698: Correct detection logic for "all cells bad" condition
The condition cells_tested > 1 && bad_cells.len() == cells_tested properly compares against actually tested cells rather than total cells, fixing the issue from the previous review.
🧹 Nitpick comments (8)
src/core/traits/facet_cache.rs (1)
134-147: Consider extracting the cache building logic for clarity
The RCU closure with memoization is correct but could be more readable if the cache building logic was extracted to a helper method.
- // Build cache lazily inside RCU to minimize duplicate work under contention. - // We memoize the built cache outside the RCU closure to avoid recomputation - // if RCU needs to retry due to concurrent updates. - let mut built: Option<Arc<FacetToCellsMap>> = None; - let built_cache = self.facet_cache().rcu(|old| { - if let Some(existing) = old { - // Another thread built the cache while we were waiting - return Some(existing.clone()); - } - // Build the cache only once, even if RCU retries - let arc = built - .get_or_insert_with(|| Arc::new(tds.build_facet_to_cells_hashmap())); - Some(arc.clone()) - }); + // Build cache lazily inside RCU to minimize duplicate work under contention. + let built_cache = self.build_cache_with_rcu(tds);
Then add a helper method:
fn build_cache_with_rcu(&self, tds: &Tds<T, U, V, D>) -> Option<Arc<FacetToCellsMap>> { let mut built: Option<Arc<FacetToCellsMap>> = None; self.facet_cache().rcu(|old| { if let Some(existing) = old { return Some(existing.clone()); } let arc = built .get_or_insert_with(|| Arc::new(tds.build_facet_to_cells_hashmap())); Some(arc.clone()) }) }
src/core/traits/insertion_algorithm.rs (2)
1106-1110: Consider using if-let-chains for cleaner pattern matching
The nested pattern matching could be simplified:
- let Some(&(cell_key, facet_index)) = cells.first() else { - return Err(TriangulationValidationError::InconsistentDataStructure { - message: "Boundary facet had no adjacent cell".to_string(), - }); - }; + let &(cell_key, facet_index) = cells.first() + .ok_or_else(|| TriangulationValidationError::InconsistentDataStructure { + message: "Boundary facet had no adjacent cell".to_string(), + })?;
1652-1655: Consider logging when vertex registration fails
While returning false is appropriate, consider adding debug logging to help diagnose registration failures:
// Ensure the vertex is registered in the TDS vertex mapping // If registration fails, we treat it as a failed cell creation if Self::ensure_vertex_in_tds(tds, vertex).is_err() { + #[cfg(debug_assertions)] + eprintln!("Failed to register vertex {:?} in TDS", vertex.uuid()); return false; }src/core/triangulation_data_structure.rs (5)
21-22: Docs: reflect internal key-based access (not just UUIDs)
Update the feature bullet to clarify "internal key-based storage with UUIDs for external identity", matching the Phase‑1 migration.
1055-1082: Avoid per-call allocation in hot path
vertex_keys_for_cell_direct allocates a new Vec each call. Provide an _into variant that fills a caller-provided buffer.
I can draft a vertex_keys_for_cell_direct_into(&self, cell_key, out: &mut Vec<VertexKey>) -> Result<(), _> and migrate hot callers (assign_neighbors, *_facet_*, validation) to reuse one buffer.
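A minimal std-only sketch of the suggested buffer-reuse pattern, with simplified stand-in types (`VertexKey` and `Cell` here are placeholders for the crate's real types): the caller keeps one scratch Vec alive across the loop, so the hot path reuses a single allocation instead of allocating per cell.

```rust
// Simplified stand-ins for the crate's types (illustration only).
type VertexKey = usize;

struct Cell {
    vertices: Vec<VertexKey>,
}

// Fill-into variant: clear() keeps the existing allocation, so repeated
// calls with the same `out` buffer do not allocate once capacity suffices.
fn vertex_keys_for_cell_into(cell: &Cell, out: &mut Vec<VertexKey>) {
    out.clear();
    out.extend_from_slice(&cell.vertices);
}

fn main() {
    let cells = vec![
        Cell { vertices: vec![0, 1, 2, 3] },
        Cell { vertices: vec![2, 3, 4, 5] },
    ];
    // One scratch buffer, reused across the whole loop.
    let mut scratch: Vec<VertexKey> = Vec::with_capacity(4);
    for cell in &cells {
        vertex_keys_for_cell_into(cell, &mut scratch);
        assert_eq!(scratch.len(), 4);
    }
}
```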
1646-1655: Minor: compute new_coords once
Avoid recomputing inside the loop.
- // Iterate over self.vertices.values() to check for coordinate duplicates - for val in self.vertices.values() { - let existing_coords: [T; D] = val.into(); - let new_coords: [T; D] = (&vertex).into(); + // Check for coordinate duplicates + let new_coords: [T; D] = (&vertex).into(); + for val in self.vertices.values() { + let existing_coords: [T; D] = val.into(); if existing_coords == new_coords { return Err(TriangulationConstructionError::DuplicateCoordinates { coordinates: format!("{new_coords:?}"), }); } }
If duplicate checks are hot, consider a per-TDS coordinate index (hash of coords) for O(1) lookups.
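The offered FastHashMap<[OrderedFloat; D], VertexKey> idea can be sketched with std types only, avoiding the OrderedFloat dependency: hashing the raw bit patterns of finite f64 coordinates gives an O(1) duplicate check. This assumes NaN-free input and accepts that 0.0 and -0.0 (which compare equal) have distinct bit patterns and would not collide.

```rust
use std::collections::HashMap;

// Stand-in for the crate's SlotMap key (illustration only).
type VertexKey = usize;

// Bitwise key for a coordinate tuple. Assumes NaN-free input: distinct NaN
// payloads would hash to different keys.
fn coord_key<const D: usize>(coords: [f64; D]) -> [u64; D] {
    coords.map(f64::to_bits)
}

fn main() {
    let mut index: HashMap<[u64; 3], VertexKey> = HashMap::new();
    // First insertion: no duplicate.
    assert!(index.insert(coord_key([1.0, 2.0, 3.0]), 0).is_none());
    // Bit-identical duplicate detected in O(1).
    assert!(index.insert(coord_key([1.0, 2.0, 3.0]), 1).is_some());
    // Caveat: 0.0 == -0.0 numerically, but their bit patterns differ,
    // so they would not be treated as duplicates by this scheme.
    assert_ne!(coord_key([0.0_f64]), coord_key([-0.0_f64]));
}
```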
2476-2484: Lenient fallback is pragmatic; log at info once in release builds
When strict facet-map build fails, you silently fall back in release. Consider a single log::info! to aid field debugging without panicking.
I can add a gated log call (no-std aware) if you’re using log elsewhere.
5049-5095: Test brittleness: don’t assert a constant value for MAX_PRACTICAL_DIMENSION_SIZE
Asserting it equals 8 couples tests to an internal tuning knob.
- assert_eq!( - MAX_PRACTICAL_DIMENSION_SIZE, 8, - "MAX_PRACTICAL_DIMENSION_SIZE should be 8" - ); + assert!( + MAX_PRACTICAL_DIMENSION_SIZE >= 8, + "MAX_PRACTICAL_DIMENSION_SIZE should be large enough for practical Ds" + );
🔇 Additional comments (17)
src/core/traits/facet_cache.rs (1)
156-162: Good RCU-based implementation with proper race condition handling
The updated implementation correctly addresses the potential race condition by recursively calling get_or_build_facet_cache if the cache is concurrently invalidated between RCU and load_full(). This is a robust approach to handle the edge case.
src/core/algorithms/robust_bowyer_watson.rs (5)
272-275: Good error propagation with the ? operator
Properly propagates errors from ensure_vertex_in_tds instead of silently ignoring them.
406-414: Efficient O(1) duplicate checking with CellKeySet
Good use of CellKeySet for O(1) membership checking when merging bad cells, avoiding O(n²) complexity.
851-861: Proper bounds check prevents array index panic
Converting from a debug assertion to an actual bounds check with error return is the right approach for production code. This prevents potential panics from invalid facet indices.
377-399: Well-structured error handling with appropriate fallback paths
The match statement properly handles all BadCellsError variants with appropriate fallback strategies for AllCellsBad and TooManyDegenerateCells, while returning early for NoCells.
283-292: Good finalization with error context
The addition of finalize_after_insertion with proper error mapping ensures triangulation consistency after insertion operations.
src/core/traits/insertion_algorithm.rs (7)
36-52: Clean Display implementation for TooManyDegenerateCellsError
The conditional formatting based on total_tested provides clear, context-appropriate error messages.
719-721: Efficient threshold comparison for 50% case
Good use of integer arithmetic to avoid floating-point precision issues when the threshold is exactly 0.5.
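For illustration, the integer form of that 50% comparison looks like this (the function name is illustrative, not the crate's): comparing 2 * degenerate > total is exact, whereas a float ratio compared against 0.5 can misbehave right at the boundary.

```rust
// Illustrative name, not the crate's API: "more than half degenerate"
// expressed purely in integer arithmetic, exact at the 50% boundary.
fn too_many_degenerate(degenerate: usize, total: usize) -> bool {
    2 * degenerate > total
}

fn main() {
    assert!(!too_many_degenerate(5, 10)); // exactly 50%: not "too many"
    assert!(too_many_degenerate(6, 10));  // strictly more than half
    assert!(!too_many_degenerate(0, 10));
}
```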
802-816: Efficient bad cell counting without allocation
The implementation counts bad cells in a single pass without creating intermediate collections, properly using single_bad_cell to capture the necessary information for boundary facet creation.
841-846: Proper error propagation from Facet::new
Converting FacetError to TriangulationValidationError with descriptive context is appropriate for maintaining the error chain.
1431-1440: Good defensive programming with bounds check
The addition of bounds checking with descriptive error message prevents array index panics and provides clear diagnostic information about the corruption.
1618-1626: Clear and well-documented utility method
The ensure_vertex_in_tds method properly encapsulates vertex registration logic with appropriate error handling.
587-601: Good handling of integer types with proper margin calculation
The code properly handles integer types by ensuring at least 1 unit of margin when the computed margin would be zero.
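A sketch of that clamping rule for an integer scalar (the function name and the divide-by-10 margin rule are illustrative assumptions, not the crate's exact code): integer division rounds toward zero, so a small range would otherwise produce a zero margin and make the expansion a no-op.

```rust
// Illustrative sketch: a tenth-of-range margin, clamped so integer
// division can never collapse it to zero.
fn margin(range: i64) -> i64 {
    (range / 10).max(1) // 5 / 10 == 0 would erase the margin; clamp to 1
}

fn main() {
    assert_eq!(margin(5), 1);    // small range: at least 1 unit of margin
    assert_eq!(margin(100), 10); // large range: proportional margin
}
```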
src/core/triangulation_data_structure.rs (4)
468-475: Confirm memory ordering for generation counter
Relaxed is usually fine for cache invalidation, but verify readers don’t assume stronger ordering across threads.
Would you like me to scan for all reads/writes of generation to assess ordering assumptions?
247-263: Error taxonomy additions look solid
Adding DuplicateUuid, DuplicateCoordinates, and EntityKind improves diagnostics without widening the public surface unexpectedly.
Also applies to: 265-272, 312-316
1935-1947: Good guard against neighbor buffer overflow
Early error with MAX_PRACTICAL_DIMENSION_SIZE prevents UB in SmallBuffer.
2068-2070: Good: invalidates caches after clearing neighbors
This keeps dependent caches consistent.
- Add generation.fetch_add(1) to both insert_vertex_with_mapping and insert_cell_with_mapping
- This ensures facet caches are invalidated when topology changes through these methods
- Prevents potential stale cache usage when these internal methods are called directly
- Maintains consistency with other topology-changing operations in the codebase
Improves the facet cache building process by introducing a helper method `build_cache_with_rcu` that uses Read-Copy-Update (RCU) to minimize duplicate work under high contention. Also refactors the cell creation logic to handle potential errors and ensure vertex registration. Adds concurrent test to confirm only one thread builds the cache.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/traits/insertion_algorithm.rs (1)
439-451: Add missing trait bounds (One, Zero, PartialEq) to callers of is_vertex_likely_exterior
determine_strategy and determine_strategy_default call is_vertex_likely_exterior, which requires T: One + Zero + PartialEq. Without mirroring these bounds, generic implementations can fail to compile.
Apply this diff:
@@ ) -> InsertionStrategy where - T: AddAssign<T> + SubAssign<T> + Sum + NumCast, + T: AddAssign<T> + SubAssign<T> + Sum + NumCast + One + Zero + PartialEq, for<'a> &'a T: Div<T>, { @@ ) -> InsertionStrategy where - T: AddAssign<T> + SubAssign<T> + Sum + NumCast, + T: AddAssign<T> + SubAssign<T> + Sum + NumCast + One + Zero + PartialEq, for<'a> &'a T: Div<T>, {
Also applies to: 466-495
♻️ Duplicate comments (1)
src/core/triangulation_data_structure.rs (1)
935-948: Prior review feedback addressed.
- Generation bump added on vertex/cell insert.
- Skipped-cells counter and logs gated under cfg(debug_assertions).
- filter_map-based silent key drops replaced with vertex_keys_for_cell_direct.
All look correct. Thanks for tightening these paths.
Also applies to: 1016-1029, 2307-2323, 2941-2949
🧹 Nitpick comments (13)
src/core/traits/facet_cache.rs (2)
149-151: Clarify Acquire/Release pairing in the comment
Acquire on tds.generation and cached_generation here pairs with your later Release store to cached_generation (in this module). It does not synchronize with TDS writers unless they use Release (tests currently use Relaxed). Consider rewording to avoid implying cross-module synchronization.
161-178: Avoid recursive retry; prefer a small loop to handle invalidation races
The recursive fallback on concurrent invalidation is correct but could recurse repeatedly under a flapping invalidator. A local retry loop (with a quick yield) would prevent unbounded stack growth and keep control flow in one place.
Would you like me to provide a loop-based rewrite of this branch?
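A loop-based shape for that branch might look like the following std-only sketch, where an atomic generation counter and a build closure stand in for the real facet-cache and RCU machinery; the real code would re-run its RCU path on each iteration instead of recursing.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch with stand-ins: `build` plays the role of the cache builder and
// `generation` the TDS generation counter. If an invalidation races with
// the build, loop and rebuild instead of recursing.
fn get_or_build(generation: &AtomicU64, build: impl Fn(u64) -> String) -> String {
    loop {
        let before = generation.load(Ordering::Acquire);
        let cache = build(before);
        // No invalidation raced with the build: the cache is current.
        if generation.load(Ordering::Acquire) == before {
            return cache;
        }
        // Brief backoff before retrying, bounding work per iteration.
        std::thread::yield_now();
    }
}

fn main() {
    let generation = AtomicU64::new(7);
    let cache = get_or_build(&generation, |g| format!("cache@{g}"));
    assert_eq!(cache, "cache@7");
}
```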
src/core/triangulation_data_structure.rs (6)
468-475: Make generation private and centralize bumps
generation being public invites accidental external mutation. Prefer private (or pub(crate)) plus a helper for consistent bumps.
Apply:
- #[serde(skip)] - pub generation: Arc<AtomicU64>, + #[serde(skip)] + generation: Arc<AtomicU64>,
Add a helper (outside this hunk):
impl<T, U, V, const D: usize> Tds<T, U, V, D> { #[inline] fn bump_generation(&self) { // Relaxed is fine for an invalidation counter self.generation.fetch_add(1, Ordering::Relaxed); } }
Then replace direct fetch_add calls with self.bump_generation() at the mutation sites.
1059-1086: Avoid heap alloc in hot path
vertex_keys_for_cell_direct allocates a Vec each call. For D ≤ 8 (your MAX_PRACTICAL_DIMENSION_SIZE), a smallvec/small buffer avoids heap.
Example:
- let mut keys = Vec::with_capacity(D + 1); + let mut keys: smallvec::SmallVec<[VertexKey; { D + 1 }]> = smallvec::SmallVec::with_capacity(D + 1); ... - Ok(keys) + Ok(keys.into_vec())
(Or reuse your SmallBuffer abstraction if available for VertexKey.)
1651-1658: Exact float-equality for duplicate coordinates can be surprising and is O(n).
- Exact equality on floats will miss near-duplicates and can reject only bit-identical values. If that’s intended, document it.
- The scan is O(n). Consider an optional coordinate index (hash on quantized/OrderedFloat coordinates) behind a feature flag.
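To make the optional-index idea concrete, here is a hedged sketch assuming f64 coordinates: values are quantized to a fixed grid so near-duplicates hash to the same bucket. `QUANTUM`, `CoordIndex`, and the `usize` key type are illustrative, not the crate's real types.

```rust
use std::collections::HashMap;

// Grid size for quantization; tune to the tolerance the dedup should use.
const QUANTUM: f64 = 1e-9;

fn quantize<const D: usize>(coords: &[f64; D]) -> [i64; D] {
    let mut q = [0i64; D];
    for (out, c) in q.iter_mut().zip(coords.iter()) {
        // Round to the nearest grid cell; values within ~QUANTUM/2 collide.
        *out = (c / QUANTUM).round() as i64;
    }
    q
}

struct CoordIndex<const D: usize> {
    // Quantized coordinates -> vertex slot.
    map: HashMap<[i64; D], usize>,
}

impl<const D: usize> CoordIndex<D> {
    fn new() -> Self {
        Self { map: HashMap::new() }
    }

    /// Returns the previous key if these (quantized) coordinates were present.
    fn insert(&mut self, coords: &[f64; D], key: usize) -> Option<usize> {
        self.map.insert(quantize(coords), key)
    }

    fn lookup(&self, coords: &[f64; D]) -> Option<usize> {
        self.map.get(&quantize(coords)).copied()
    }
}
```

This replaces the O(n) scan with an O(1) expected-time lookup, at the cost of the quantization tolerance becoming part of the dedup semantics.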
2480-2483: Log when the strict facet-map build falls back to lenient. Today the fallback silently masks the strict error cause. Emit a debug log (or a feature‑gated warn) to aid diagnosis.
Apply:
```diff
- let facet_to_cells = self
-     .try_build_facet_to_cells_hashmap()
-     .unwrap_or_else(|_| self.build_facet_to_cells_hashmap());
+ let facet_to_cells = match self.try_build_facet_to_cells_hashmap() {
+     Ok(m) => m,
+     Err(err) => {
+         #[cfg(debug_assertions)]
+         eprintln!("debug: strict facet map failed ({}); falling back to lenient build", err);
+         self.build_facet_to_cells_hashmap()
+     }
+ };
```
3069-3074: Doc note about NaN handling is misleading. The equality path still relies on Vertex equality; NaNs won't compare equal. Either remove the "treated as equal" claim or explicitly reject NaNs at construction and state that here.
Proposed tweak:
```diff
-/// **Note on NaN handling:** ... they are treated as equal to maintain reflexivity ...
+/// Note: Vertices with NaN coordinates are rejected during construction; equality assumes no NaNs.
```
2069-2074: Replace direct generation.fetch_add calls with self.bump_generation(). Replace all occurrences of
self.generation.fetch_add(1, Ordering::Relaxed)
with
self.bump_generation()
to keep generation-bump sites uniform. Locations:
- src/core/triangulation_data_structure.rs:945
- src/core/triangulation_data_structure.rs:1026
- src/core/triangulation_data_structure.rs:1695
- src/core/triangulation_data_structure.rs:1723
- src/core/triangulation_data_structure.rs:2020
- src/core/triangulation_data_structure.rs:2073
- src/core/triangulation_data_structure.rs:2214
- src/core/triangulation_data_structure.rs:2573
src/core/traits/insertion_algorithm.rs (5)
526-528: Tighten the insphere check for readability and fewer temporaries. Use matches! directly; this avoids a temporary binding in the let‑chain.
Apply this diff:
```diff
- if let Ok(containment) = insphere(&vertex_points, *vertex.point())
-     && matches!(containment, InSphere::INSIDE)
- {
+ if matches!(insphere(&vertex_points, *vertex.point()), Ok(InSphere::INSIDE)) {
      return true;
  }
```
717-745: Degenerate threshold path is only optimized for 0.5; guard against accidental drift. The comment says the threshold is tweakable, but the logic explicitly special‑cases 0.5. If this const changes later, behavior silently shifts to the float path. Consider extracting a helper that compares degenerate_count/total_cells > threshold using integer math when the threshold is a simple rational (p/q), and falls back to f64 otherwise; assert at startup when using the optimized branch.
Would you like me to provide a small helper fn compare_ratio_usize(lhs_num, lhs_den, rhs_num, rhs_den) -> bool and wire it here?
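As a hedged sketch of that offer: for a rational threshold, the comparison count/total > num/den can be done with integer cross-multiplication, avoiding float precision loss for large usize counts. The name `ratio_exceeds` is illustrative.

```rust
// count/total > num/den  <=>  count * den > num * total (all non-negative).
// Widening to u128 makes the products overflow-free for any usize inputs.
fn ratio_exceeds(count: usize, total: usize, num: usize, den: usize) -> bool {
    debug_assert!(total > 0 && den > 0, "denominators must be non-zero");
    (count as u128) * (den as u128) > (num as u128) * (total as u128)
}
```

With a threshold of 1/2 this reduces to the existing `saturating_mul(2)` shortcut, but it stays exact for any p/q without branching on a float constant.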
796-801: Fail fast when a facet is shared by more than two cells. Currently this is only a debug_assert, and the loop silently skips invalid cases. Prefer returning a structured validation error in release builds to surface TDS corruption early.
Apply this diff:
```diff
- #[cfg(debug_assertions)]
- debug_assert!(
-     total_count <= 2,
-     "Facet shared by more than two cells (total_count = {total_count})"
- );
+ if total_count > 2 {
+     return Err(TriangulationValidationError::InconsistentDataStructure {
+         message: format!(
+             "Facet shared by more than two cells (total_count = {total_count})"
+         ),
+     });
+ }
```

Also applies to: 817-825
2442-2449: Tests assert on error strings; prefer matching variants. String `contains` checks ("does not contain the vertex") are brittle. Match FacetError::CellDoesNotContainVertex (or the mapped TriangulationValidationError variant) instead, to keep tests stable across message-wording changes.
If FacetError isn’t re-exported here, I can adjust the test to pattern-match the TriangulationValidationError produced by find_cavity_boundary_facets instead—confirm which error surface you want tests to assert on.
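For illustration, the variant-matching pattern looks like the sketch below. The enum and stub function are local stand-ins, not the crate's real `FacetError` definition; only the variant name comes from the comment above.

```rust
// Illustrative stand-in for the crate's error enum.
#[derive(Debug)]
#[allow(dead_code)]
enum FacetError {
    CellDoesNotContainVertex,
    InvalidFacetIndex(u8),
}

// Hypothetical stub standing in for the function under test.
fn find_cavity_boundary_facets_stub() -> Result<Vec<u8>, FacetError> {
    Err(FacetError::CellDoesNotContainVertex)
}

fn is_cell_missing_vertex(result: &Result<Vec<u8>, FacetError>) -> bool {
    // Stable across message-wording changes, unlike
    // err.to_string().contains("does not contain the vertex").
    matches!(result, Err(FacetError::CellDoesNotContainVertex))
}
```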
1299-1305: UUID mapping usage remains in an otherwise key-based refactor. These paths still touch uuid_to_vertex_key/uuid_to_cell_key. If the PR aims to remove UUID lookups internally, consider gating these through a thin adapter or migrating to pure key flows here as well (e.g., pre‑allocating vertex keys, returning keys from insert helpers).
Do you want a follow-up patch that introduces key-first insert helpers (returning VertexKey/CellKey) and deprecates the uuid mapping paths behind a feature flag?
Also applies to: 1315-1320, 1729-1735
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/core/traits/facet_cache.rs (7 hunks)
src/core/traits/insertion_algorithm.rs (30 hunks)
src/core/triangulation_data_structure.rs (47 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (3)
src/core/triangulation_data_structure.rs (4)
src/core/collections.rs (2)
fast_hash_map_with_capacity(708-711)fast_hash_map_with_capacity(820-820)src/core/facet.rs (6)
vertex(381-383)cell(336-338)vertices(448-455)key(466-484)new(298-304)facet_key_from_vertex_keys(585-597)src/core/cell.rs (9)
cell(2209-2209)cell(2338-2338)cell(2340-2340)cell(2344-2344)cell(2543-2543)cell(2556-2556)vertices(474-476)uuid(501-503)is_valid(905-941)src/core/vertex.rs (3)
uuid(458-460)from_points(347-352)is_valid(538-554)
src/core/traits/facet_cache.rs (2)
src/core/triangulation_data_structure.rs (1)
new(1463-1495)src/geometry/algorithms/convex_hull.rs (2)
facet_cache(1106-1108)is_empty(983-985)
src/core/traits/insertion_algorithm.rs (4)
src/geometry/predicates.rs (2)
insphere(346-416)simplex_orientation(113-161)src/core/facet.rs (4)
cell(336-338)vertex(381-383)new(298-304)vertices(448-455)src/core/cell.rs (9)
cell(2209-2209)cell(2338-2338)cell(2340-2340)cell(2344-2344)cell(2543-2543)cell(2556-2556)from(371-373)facets(1052-1057)vertices(474-476)src/core/triangulation_data_structure.rs (4)
new(1463-1495)cells(527-529)default(206-208)vertices(568-570)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Codacy Security Scan
- GitHub Check: performance-regression
- GitHub Check: Code Coverage
- GitHub Check: build (windows-latest)
🔇 Additional comments (16)
src/core/traits/facet_cache.rs (8)
87-113: RCU helper looks solid; single-build avoidance achieved. Good use of ArcSwapOption::rcu with external memoization to prevent duplicate builds under retries. The "builder-only updates generation" protocol aligns with the call-site logic.
132-133: Doc update: clear and helpful. The contention note accurately reflects the new RCU path.
189-201: Publish-before-advance protocol LGTM. Storing the new cache before advancing cached_generation preserves the "cache and epoch move together" guarantee for readers.
225-228: Invalidation ordering comment LGTM. Clearing via ArcSwap then resetting the snapshot is the right order to force rebuilds deterministically.
231-233: Release is sufficient here. Release on the generation reset is appropriate given readers use Acquire.
525-577: RCU unit test verifies intended semantics. Nice validation that build_cache_with_rcu returns the previous value (None on first build) and does not rebuild when the cache already exists.
579-644: Concurrent single-builder semantics well covered. Barrier-coordinated test and an AtomicUsize counter confirm exactly one builder under contention.
If you want an additional end-to-end check on the public path (get_or_build_facet_cache), I can craft a concurrent test ensuring only one actual build occurs when multiple threads enter that method simultaneously.
87-113: Confirmed: arc-swap@1.7.1 — ArcSwapOption::rcu returns the previous value (Option<Arc<_>>). Cargo.toml/Cargo.lock pin arc-swap = "1.7.1". (docs.rs)
ArcSwapAny::rcu returns the stored T (the previous value), and ArcSwapOption is a type alias for ArcSwapAny<Option<Arc<T>>> — your assumption (Some if the cache existed, None on first build) is correct. (docs.rs)
src/core/triangulation_data_structure.rs (8)
174-177: Imports align with the key-based migration. Brings in the right collections and key-set types for SlotMap usage. LGTM.
247-263: New DuplicateUuid/DuplicateCoordinates errors: good API surface. Clear diagnostics and entity context will help callers. LGTM.
265-272: EntityKind introduction is appropriate. Minimal, precise context for errors; fits the public API. LGTM.
311-316: Validation error enriched with entity context. Improves debuggability without changing semantics. LGTM.
1677-1686: Deterministic initial simplex construction. Sorting by UUID ensures stable builds; clear error mapping on CellBuilder failure. LGTM.
1907-1916: Neighbor assignment via facet keys is sound; bounds/guards look good.
- Properly uses direct keys, enforces MAX_PRACTICAL_DIMENSION_SIZE, and preserves positional semantics.
- Generation bump at the end is correct.
Also applies to: 1936-1951, 1998-2006, 2019-2021
2297-2301: Lenient facet-map builder: debug-only skips are handled cleanly. The u8 facet-index guard and debug accounting prevent surprises in release. LGTM.
Also applies to: 2336-2347, 2358-2364
2395-2436: Strict facet-map builder is correct. Early, explicit failure on facet-index overflow and missing keys is the right default for validators. LGTM.
Refactors facet cache to use a generation counter for invalidation and improves bounding box margin calculation, ensuring that caches are rebuilt when underlying data changes. This change introduces a `generation()` method and bumps the generation on mutations, which improves the facet cache and convex hull algorithm staleness detection. Also fixes an integer margin calculation bug in bounding box expansion logic for vertex inclusion and adjusts validation to allow the propagation of cell errors.
There was a problem hiding this comment.
Actionable comments posted: 1
♻️ Duplicate comments (4)
src/core/traits/facet_cache.rs (1)
161-178: Good: removed the expect() panic race on concurrent invalidation. The retry via load_full().unwrap_or_else(|| self.get_or_build_facet_cache(tds)) resolves the prior panic window. Consider a small loop to avoid recursion under worst‑case churn.

```diff
- self.facet_cache().load_full().unwrap_or_else(|| {
-     self.get_or_build_facet_cache(tds)
- })
+ {
+     // Avoid recursion in rare churn.
+     if let Some(c) = self.facet_cache().load_full() {
+         c
+     } else {
+         // Slow path
+         self.get_or_build_facet_cache(tds)
+     }
+ }
```

src/core/triangulation_data_structure.rs (1)
2333-2402: build_facet_to_cells_hashmap(): debug‑only counters properly guarded. The cfg(debug_assertions) fix eliminates the "assigned but unused" build failure.
src/core/traits/insertion_algorithm.rs (2)
36-55: Good fix: stable Display for TooManyDegenerateCellsError. Replacing the fragile inline formatting with a manual Display impl removes macro brittleness and is test‑friendly.
711-718: All-cells-bad check now uses cells_tested — correct.
Comparing against `cells_tested` (and gating on `cells_tested > 1`) fixes the earlier false‑positive scenario when some cells are skipped.
🧹 Nitpick comments (18)
src/core/traits/facet_cache.rs (2)
164-169: Update the generation only after confirming the cache is present, to avoid a stale store. If the cache was invalidated between the RCU and the store, we may record current_generation despite a cleared cache. Store the generation only if load_full() returns Some(_).

```diff
- if built_cache.is_none() {
-     self.cached_generation()
-         .store(current_generation, Ordering::Release);
- }
+ if built_cache.is_none() {
+     if self.facet_cache().load_full().is_some() {
+         self.cached_generation().store(current_generation, Ordering::Release);
+     }
+ }
```
3345-3505: End‑to‑end cache invalidation tests look good. Thorough checks across initial build, TDS mutation, manual invalidation, and rebuild. Just ensure println! noise stays behind a test-only feature flag if CI logs are noisy.
Also applies to: 3500-3503, 3542-3614
src/geometry/algorithms/convex_hull.rs (3)
1086-1092: Use Release when resetting cached_generation, for consistency with readers. invalidate_cache() uses Relaxed. Since readers elsewhere load with Acquire, prefer Release here (mirrors the trait's invalidate_facet_cache).

```diff
- self.cached_generation.store(0, Ordering::Relaxed);
+ self.cached_generation.store(0, Ordering::Release);
```
404-412: Optional: avoid UUID equality in hot paths. Now that collections are key-based, consider comparing vertex keys (or direct pointer/key equality) instead of UUIDs inside facet membership checks to reduce overhead.
1398-1401: Tests are verbose but comprehensive. Great coverage across dimensions and numeric types. Consider gating stdout prints behind a cargo feature to keep CI logs clean.
Also applies to: 1708-1717, 2421-2425
src/core/triangulation_data_structure.rs (10)
21-21: Docs: call out the Phase-1/Phase-2 trade‑off explicitly. Nice addition. Consider a short note that vertex lookups are still UUID→Key (O(D)) until Phase‑2, to set expectations for hot paths.
247-263: Stronger error typing (DuplicateUuid/Coordinates + EntityKind). Good improvement. Consider including a brief remediation hint in Display (e.g., "use a fresh UUID").
Also applies to: 265-272
1006-1058: insert_cell_with_mapping: invariants and referential checks are good.
- Early D+1 vertex count guard helps.
- Verifying all vertex UUIDs exist avoids phantom references.
Minor: error message could include the missing vertex index for faster diagnosis.
1945-1953: assign_neighbors(): robust with semantic ordering; one portability nit.
- Strict facet sharing semantics are enforced—good.
- Guard against MAX_PRACTICAL_DIMENSION_SIZE overflow—good.
- Consider using u16 for facet indices to remove the D ≤ 255 constraint across profiles (try_ variant already errors on u8 overflow).
Suggested change (non‑blocking):
- let Ok(facet_index_u8) = u8::try_from(i) else { + let Ok(facet_index_u16) = u16::try_from(i) else { ... }; - .push((cell_id, facet_index_u8)); + .push((cell_id, facet_index_u16 as u8)) // or change FacetIndex typedef to u16 end-to-endAlso applies to: 1977-1987, 2057-2058
2146-2151: assign_incident_cells(): direct-key path + precise errors. Looks good. Question: should this also bump the generation to reflect vertex metadata changes? Today callers bump after invoking; documenting this would help avoid foot‑guns.
2333-2338: Facet index u8 bound is only debug‑asserted here. Release builds may still silently skip indices >255. Align behavior with the try_ variant (return an error) or document this explicitly.
Also applies to: 2373-2385
2406-2473: try_build_facet_to_cells_hashmap(): the strict path is clean. Nice separation of strict vs lenient builders. Consider planning the rename in docs with a deprecation notice to ease migration.
156-176: add(): exact‑equality dedup is O(n); an optional index can help large builds. If vertex inserts become hot, consider an auxiliary HashMap<[T; D], VertexKey> (behind a feature) for exact‑match dedup to avoid linear scans.

```diff
+ // Optional: exact-coordinate index to accelerate duplicate checks in add()
+ // #[cfg(feature = "exact_dedup_index")]
+ // coord_index: FastHashMap<[T; D], VertexKey>,
```

Also applies to: 1666-1695
5085-5129: Test doesn't exercise the runtime overflow guard. This asserts constants rather than the guard path. Either convert it to a doc test or gate it under cfg(doc) to avoid a brittle unit test. Alternatively, instantiate Tds with D = MAX_PRACTICAL_DIMENSION_SIZE + 1 in a compile‑fail (ui) test.
7139-7207: insert_cell_with_mapping() tests: good coverage; minor enhancement. Consider also asserting that generation() increases after insert, for completeness.
src/core/traits/insertion_algorithm.rs (3)
588-589: Avoid 100% expansion when cast(10) fails (make "ten" robust without casting). Falling back to `T::one()` makes `range / ten == range` if `cast(10)` returns None, re‑introducing a 100% margin for exotic `T`s. Build `ten` via addition to guarantee 10 across numeric `T`s.

Apply:

```diff
- // Precompute 10 for integer division fallback
- let ten: T = cast(10).unwrap_or_else(T::one);
+ // Precompute 10 for integer division fallback without relying on casts
+ let mut ten = T::zero();
+ for _ in 0..10 {
+     ten += T::one();
+ }
```

Also applies to: 598-621
736-764: Simplify the 0.5 threshold branch guard. `abs_diff_eq!(DEGENERATE_CELL_THRESHOLD, 0.5, epsilon = f64::EPSILON)` is effectively an exact compare on a const. A direct `== 0.5` is clearer, and you can document that only 0.5 uses the integer shortcut.

```diff
- if abs_diff_eq!(DEGENERATE_CELL_THRESHOLD, 0.5, epsilon = f64::EPSILON) {
+ if DEGENERATE_CELL_THRESHOLD == 0.5 {
```
445-447: Tighten generic bounds (Div on &T appears unused). `for<'a> &'a T: Div<T>` isn't exercised in these impls; remove it if unnecessary to reduce the trait surface. Apply similarly to other methods that repeat this bound.

Also applies to: 471-473
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
src/core/traits/facet_cache.rs (10 hunks)
src/core/traits/insertion_algorithm.rs (33 hunks)
src/core/triangulation_data_structure.rs (47 hunks)
src/geometry/algorithms/convex_hull.rs (9 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (2)
src/core/traits/facet_cache.rs (1)
src/geometry/algorithms/convex_hull.rs (2)
facet_cache(1106-1108)cached_generation(1110-1112)
src/core/traits/insertion_algorithm.rs (3)
src/geometry/predicates.rs (2)
insphere(346-416)simplex_orientation(113-161)src/core/facet.rs (4)
cell(336-338)vertex(381-383)new(298-304)vertices(448-455)src/core/cell.rs (9)
cell(2209-2209)cell(2338-2338)cell(2340-2340)cell(2344-2344)cell(2543-2543)cell(2556-2556)from(371-373)facets(1052-1057)vertices(474-476)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Code Coverage
- GitHub Check: build (windows-latest)
- GitHub Check: build (ubuntu-latest)
- GitHub Check: Codacy Security Scan
- GitHub Check: performance-regression
🔇 Additional comments (24)
src/core/traits/facet_cache.rs (5)
225-233: Align invalidation orderings with readers. You reset the generation with Release in the trait; good. If any readers use Acquire (they do), this preserves happens‑before. Keep comments accurate and ensure future relaxations add fences as you note.
Please confirm all generation reads use Acquire (they do in Line 153). If any use Relaxed, upgrade to Acquire.
532-585: Tests: solid coverage of the RCU path and single‑builder semantics. Once build_cache_with_rcu() is fixed to return the intended "old" value, these tests validate correctness under contention. LGTM.
Also applies to: 586-652
654-719: Nice: lifecycle under multiple generation bumps. Covers successive add() operations and rebuilds. LGTM.
87-112: Incorrect — `rcu` returns the previous value (`Option<Arc<_>>`); no type mismatch.
arc-swap's `rcu` returns `T` (for `ArcSwapOption`, `T = Option<Arc<_>>`), so the current signature and return in `build_cache_with_rcu` are correct and the proposed refactor is unnecessary.

Likely an incorrect or invalid review comment.
189-201: Incorrect — ArcSwapOption::store() provides SeqCst.
arc-swap's docs state that store (via swap) includes at least one SeqCst atomic read–write; update the inline comment to say SeqCst ordering, and note that no explicit fence is required.

Likely an incorrect or invalid review comment.
src/geometry/algorithms/convex_hull.rs (2)
271-272: Good: snapshot generation via tds.generation(). Switching to the accessor avoids direct atomic loads and aligns with the new TDS API.
246-273: Trait impl bounds trimmed appropriately. ConvexHull's FacetCacheProvider impl uses the reduced bound set; consistent with trait requirements. LGTM.
src/core/triangulation_data_structure.rs (12)
174-177: Import surface looks consistent with the new key‑based types. Good switch to crate-local Entry/FastHashMap aliases; avoids std/hashbrown drift.
474-474: Generation counter field: good placement and serde skip. Arc with skip-serialize is appropriate.
899-927: bump_generation() + generation(): relaxed ordering is fine here. An invalidation counter doesn't need ordering; this is correct.
935-977: insert_vertex_with_mapping: atomic check/insert + generation bump. Looks solid and fixes the stale cache risk.
1089-1116: vertex_keys_for_cell_direct(): correct failure-on-miss semantics. Good use of the SmallVec buffer and explicit error paths.
2110-2111: clear_all_neighbors() bumps the generation — good. Consistent with "topology changed" semantics.
2218-2254: remove_duplicate_cells(): correct two‑pass + topology rebuild.
- Using vertex_keys_for_cell_direct avoids silent drops.
- Rebuilding neighbors/incident cells and bumping generation prevents stale refs.
LGTM.
2800-2804: validate_no_duplicate_cells(): Phase‑1 key path used correctly. Good propagation of errors from direct lookups.
2977-2986: validate_neighbors_internal(): precompute and set checks are solid. Efficient set operations; early exits keep this tight.
Also applies to: 3031-3036
3106-3108: The NaN note in the PartialEq docs is helpful. Good to preempt undefined ordering issues.
4372-4376: Seeded RNG in tests for reproducibility — good. Deterministic tests reduce flakiness.
7100-7137: insert_vertex_with_mapping() tests: cover success + duplicate paths thoroughly. Nice use of an explicit UUID to validate the error payload.
src/core/traits/insertion_algorithm.rs (5)
1438-1461: Consistent fail‑fast on invalid facet indices in the visibility path. Great: this mirrors the integrity guarantees expected from the TDS and prevents papering over corruption.
518-530: Nice reuse of the per‑cell buffer in is_vertex_interior. Prealloc + clear avoids churn in tight loops.
947-981: Good error mapping from BadCellsError to TriangulationValidationError. Clear, actionable messages; preserves degeneracy diagnostics.
1778-1783: Post‑insertion finalization: the order looks right. Dedup → fix facet sharing → neighbors → incident cells is a sensible sequence.
2489-2597: Solid regression tests for the integer margin logic. Covers int/float and the small‑range minimum; protects against the prior 100% expansion bug.
Fixes a potential issue where a stale generation value could be stored in the facet cache due to a race condition during cache building. This ensures that the generation is only stored if the cache is actually present, preventing incorrect cache invalidation. Also, adds a check for out-of-bounds facet indices when building the cavity boundary during insertion, and returns an error if an invalid index is encountered. This prevents potential panics due to out-of-bounds access and provides a more informative error message, improving debugging. Updates facet cache reset to use Release ordering to ensure consistency with Acquire loads by readers.
There was a problem hiding this comment.
Actionable comments posted: 2
🧹 Nitpick comments (7)
src/core/traits/facet_cache.rs (1)
161-179: Make the cache rebuild path non-recursive; update the generation only if we actually built. Recursing into `get_or_build_facet_cache` can degenerate under churn and risks deep stacks; also, after the refactor above, gate the generation store on the returned `built_by_this_thread` flag.

```diff
- || {
-     // Build cache lazily inside RCU to minimize duplicate work under contention.
-     let built_cache = self.build_cache_with_rcu(tds);
-
-     // Update generation if we were the ones who built it
-     // Note: built_cache is the old value before our update
-     // Only store generation if the cache is actually present to avoid stale store
-     if built_cache.is_none() && self.facet_cache().load_full().is_some() {
-         self.cached_generation()
-             .store(current_generation, Ordering::Release);
-     }
-
-     // Return the cache; if concurrently invalidated, retry via slow path
-     // Another thread could invalidate between RCU and load_full()
-     self.facet_cache().load_full().unwrap_or_else(|| {
-         // Generation may have changed; fall back to rebuild path
-         // This handles the race where another thread invalidated after our RCU
-         self.get_or_build_facet_cache(tds)
-     })
- },
+ || {
+     // Build cache lazily inside RCU; avoid duplicate work.
+     let (_cache, built_by_this_thread) = self.build_cache_with_rcu(tds);
+     if built_by_this_thread {
+         self.cached_generation().store(current_generation, Ordering::Release);
+     }
+     // Retry load in case of concurrent invalidation; loop instead of recursion.
+     loop {
+         if let Some(c) = self.facet_cache().load_full() {
+             break c;
+         }
+         let (_c2, built2) = self.build_cache_with_rcu(tds);
+         if built2 {
+             self.cached_generation().store(current_generation, Ordering::Release);
+         }
+     }
+ },
```

src/geometry/algorithms/convex_hull.rs (1)
404-407: Hot path: replace UUID equality with key-based comparison. This aligns with the PR's objective. Recommend extending Facet to carry vertex keys or exposing a `vertex_keys()` accessor to enable O(D) key lookups without UUID costs.

```rust
// Example direction (API sketch):
impl<T, U, V, const D: usize> Facet<T, U, V, D> {
    pub fn vertex_keys(&self) -> &[VertexKey] { /* ... */ }
}

// Then:
let facet_keys = facet.vertex_keys();
let inside_key = adjacent_cell
    .vertex_keys()
    .iter()
    .find(|k| !facet_keys.contains(k))
    .copied()
    .ok_or(FacetError::InsideVertexNotFound)?;
```

src/core/traits/insertion_algorithm.rs (5)
82-88: Generalize the degenerate-threshold check; avoid hard-coding the 0.5 special case. Encapsulate the comparison in a helper and reuse it. This keeps the hot path simple and supports arbitrary thresholds without branching on an exact float.

Apply:

```diff
@@
 const DEGENERATE_CELL_THRESHOLD: f64 = 0.5;
+
+#[inline]
+fn degenerate_threshold_exceeded(degenerate_count: usize, total_cells: usize) -> bool {
+    if total_cells == 0 {
+        return degenerate_count > 0;
+    }
+    if abs_diff_eq!(DEGENERATE_CELL_THRESHOLD, 0.5, epsilon = f64::EPSILON) {
+        return degenerate_count.saturating_mul(2) > total_cells;
+    }
+    #[allow(clippy::cast_precision_loss)]
+    (degenerate_count as f64) / (total_cells as f64) > DEGENERATE_CELL_THRESHOLD
+}
@@
     } else if cells_tested > 0 && degenerate_count > 0 {
@@
-        if abs_diff_eq!(DEGENERATE_CELL_THRESHOLD, 0.5, epsilon = f64::EPSILON) {
-            // Use optimized integer arithmetic for 50% threshold
-            let threshold_exceeded = degenerate_count.saturating_mul(2) > total_cells;
-
-            if threshold_exceeded {
-                return Err(BadCellsError::TooManyDegenerateCells(
-                    TooManyDegenerateCellsError {
-                        degenerate_count,
-                        total_tested: cells_tested,
-                    },
-                ));
-            }
-        } else {
-            // For non-0.5 thresholds, we must use floating point comparison
-            // We accept the precision loss here as it's unavoidable for large usize values
-            #[allow(clippy::cast_precision_loss)]
-            let degenerate_ratio = (degenerate_count as f64) / (total_cells as f64);
-            if degenerate_ratio > DEGENERATE_CELL_THRESHOLD {
-                return Err(BadCellsError::TooManyDegenerateCells(
-                    TooManyDegenerateCellsError {
-                        degenerate_count,
-                        total_tested: cells_tested,
-                    },
-                ));
-            }
-        }
+        if degenerate_threshold_exceeded(degenerate_count, total_cells) {
+            return Err(BadCellsError::TooManyDegenerateCells(
+                TooManyDegenerateCellsError {
+                    degenerate_count,
+                    total_tested: cells_tested,
+                },
+            ));
+        }
```

Also applies to: 736-764
445-447: Tighten generic bounds (drop the unused `for<'a> &'a T: Div<T>` here). These fns don't divide references; requiring `&T / T` increases friction without benefit.

```diff
- where
-     T: AddAssign<T> + SubAssign<T> + Sum + NumCast + One + Zero + PartialEq + Div<Output = T>,
-     for<'a> &'a T: Div<T>,
+ where
+     T: AddAssign<T> + SubAssign<T> + Sum + NumCast + One + Zero + PartialEq + Div<Output = T>,
```

Do the same in `determine_strategy_default`.

Also applies to: 471-473
586-624: Extract the margin calculation into a small helper; improves readability and reuse. The logic is correct but dense. A helper makes the integer/float branches self‑documenting and reusable from tests.

Add near the threshold helper:

```rust
#[inline]
fn bbox_margin<T>(range: T) -> T
where
    T: Zero + One + NumCast + PartialEq + Div<Output = T> + Copy + std::ops::Mul<Output = T>,
{
    let ten: T = cast(10).unwrap_or_else(T::one);
    cast::<f64, T>(MARGIN_FACTOR).map_or_else(
        || {
            let mut m = range / ten;
            if m == T::zero() {
                m = T::one();
            }
            m
        },
        |mf| {
            if mf == T::zero() {
                let mut m = range / ten;
                if m == T::zero() {
                    m = T::one();
                }
                m
            } else {
                range * mf
            }
        },
    )
}
```

Then replace the local `margin` computation with `let margin = bbox_margin(range);`.
711-718: Order of checks: consider the degeneracy check before "all cells bad". When degeneracy is high, returning "TooManyDegenerateCells" may be more actionable than "AllCellsBad".

```diff
- // Check for problematic conditions
- if cells_tested > 1 && bad_cells.len() == cells_tested {
+ // Prefer reporting high degeneracy before "all bad" to guide callers
+ // Check for problematic conditions
+ if cells_tested > 1
+     && bad_cells.len() == cells_tested
+     && degenerate_count == 0
+ {
```
1333-1341: UUID→key paths remain; align with the key‑first objective. These calls keep UUID maps hot on the critical path. If Phase‑1 allows, gate them behind a feature or provide a key‑first fast path with UUID fallback:

- Add a `feature = "uuid_compat"` to guard UUID map mutations.
- Provide an `ensure_vertex_in_tds_by_key` path to skip UUID lookups when the caller already has keys.

Would you like a small patch introducing a `uuid_compat` cargo feature and swapping these blocks behind it?

Also applies to: 1351-1356, 1650-1662
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/core/traits/facet_cache.rs (10 hunks)
src/core/traits/insertion_algorithm.rs (33 hunks)
src/geometry/algorithms/convex_hull.rs (11 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
src/core/traits/insertion_algorithm.rs
🧬 Code graph analysis (3)
src/core/traits/facet_cache.rs (1)
- src/core/triangulation_data_structure.rs (1): new (1493-1525)

src/core/traits/insertion_algorithm.rs (5)
- src/geometry/predicates.rs (2): insphere (346-416), simplex_orientation (113-161)
- src/core/facet.rs (4): cell (336-338), vertex (381-383), new (298-304), vertices (448-455)
- src/core/cell.rs (9): cell (2209-2209), cell (2338-2338), cell (2340-2340), cell (2344-2344), cell (2543-2543), cell (2556-2556), from (371-373), vertices (474-476), facets (1052-1057)
- src/core/boundary.rs (2): tds (701-705), boundary_facets (88-131)
- src/core/triangulation_data_structure.rs (5): new (1493-1525), vertices (568-570), cells (527-529), default (206-208), number_of_cells (790-792)

src/geometry/algorithms/convex_hull.rs (4)
- src/core/traits/facet_cache.rs (1): new (254-259)
- src/core/facet.rs (3): new (298-304), vertices (448-455), vertex (381-383)
- src/core/triangulation_data_structure.rs (2): new (1493-1525), vertices (568-570)
- src/core/cell.rs (1): vertices (474-476)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build (windows-latest)
- GitHub Check: performance-regression
- GitHub Check: Codacy Security Scan
- GitHub Check: Code Coverage
🔇 Additional comments (19)
src/core/traits/facet_cache.rs (8)
149-152: Acquire/Release commentary LGTM. The ordering notes are accurate and align with ArcSwap's SeqCst-including guarantees. (woshilapin.github.io)
190-205: Publication ordering is sound. Swapping in the new cache before storing the generation with Release is correct; ArcSwap operations include a SeqCst RMW, providing a strong global order. (woshilapin.github.io)
To tighten the reasoning further, add a brief comment citing docs.rs noting that rcu/store perform a SeqCst read-modify-write.
226-233: Invalidate path ordering is appropriate. Storing None and then resetting the generation with Release provides the intended happens-before for readers using Acquire. Looks good.
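The publish/invalidate protocol the three comments above approve can be sketched with stdlib primitives only: write the cache first, then store the generation with Release so an Acquire reader that observes the generation also observes the cache. `CacheSlot` and its method names are illustrative, and a `Mutex<Option<Arc<…>>>` stands in for the real `ArcSwapOption`.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};

// Minimal stand-in for the ArcSwap-based facet cache; names are illustrative.
struct CacheSlot {
    cache: Mutex<Option<Arc<Vec<u32>>>>, // ArcSwapOption stand-in
    generation: AtomicU64,
}

impl CacheSlot {
    fn publish(&self, facets: Vec<u32>, tds_generation: u64) {
        // Swap the cache in first, then store the generation with Release so
        // an Acquire reader that sees the generation also sees the cache.
        *self.cache.lock().unwrap() = Some(Arc::new(facets));
        self.generation.store(tds_generation, Ordering::Release);
    }

    fn invalidate(&self) {
        // Store None, then reset the generation with Release (mirrors the
        // invalidate path approved above).
        *self.cache.lock().unwrap() = None;
        self.generation.store(0, Ordering::Release);
    }

    fn read(&self, expected_generation: u64) -> Option<Arc<Vec<u32>>> {
        // Acquire pairs with the publisher's Release store.
        if self.generation.load(Ordering::Acquire) != expected_generation {
            return None; // stale or invalidated
        }
        self.cache.lock().unwrap().clone()
    }
}

fn main() {
    let slot = CacheSlot { cache: Mutex::new(None), generation: AtomicU64::new(0) };
    slot.publish(vec![1, 2, 3], 7);
    assert_eq!(slot.read(7).as_deref(), Some(&vec![1, 2, 3]));
    slot.invalidate();
    assert!(slot.read(7).is_none());
}
```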
655-720: Generation bump/overflow handling tests: solid coverage. Good end-to-end validation of generation tracking across adds and neighbor clears.
318-325: Use of tds.generation() in tests is consistent with the new API. Nice alignment with the TDS accessor.
3348-3354: Cache invalidation behavior tests read well. Good verification of snapshot semantics and cache lifecycle. No issues.
Also applies to: 3378-3395, 3398-3421, 3434-3440, 3487-3491, 3503-3507
3545-3566: Run the test suite and confirm no callers rely on the old `build_cache_with_rcu` signature. `rg` shows the function and its callers in src/core/traits/facet_cache.rs (fn ≈ line 99; callers ≈ lines 162, 534–546, 562–566, 608–612). Run `cargo test` to ensure the builder-path generation-update assertions still pass.
87-112: Incorrect assumption about ArcSwapAny::rcu — original code is correct. arc-swap's `rcu` returns the previous (old) value, so the function's docstring and returning the old cache are correct; the proposed refactor is unnecessary.
Likely an incorrect or invalid review comment.
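The contract in question — install a new value, get the previous one back — can be shown with a stdlib analog, since that is all the docstring claims. `swap_in` below is an illustrative stand-in, not arc-swap's API; it mirrors the shape of `rcu`'s return value using a `Mutex<Arc<T>>`.

```rust
use std::sync::{Arc, Mutex};

// Stdlib analog of the contract discussed above: like arc-swap's `rcu`,
// `swap_in` installs a new value and hands back the previous one.
// The name `swap_in` is illustrative.
fn swap_in<T>(slot: &Mutex<Arc<T>>, new: Arc<T>) -> Arc<T> {
    let mut guard = slot.lock().unwrap();
    std::mem::replace(&mut *guard, new)
}

fn main() {
    let slot = Mutex::new(Arc::new(1u32));
    let old = swap_in(&slot, Arc::new(2));
    // The *previous* value comes back, exactly as the reviewed docstring says.
    assert_eq!(*old, 1);
    assert_eq!(**slot.lock().unwrap(), 2);
}
```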
src/geometry/algorithms/convex_hull.rs (3)
271-272: Decouple hull snapshot from TDS atomic. Initializing `cached_generation` with the numeric snapshot (`tds.generation()`) avoids aliasing the TDS atomic. Good.
1094-1096: Release store on generation reset is appropriate. Matches the cache invalidation contract readers rely on.
3348-3387: Tests: migration to tds.generation() and stronger assertions. All good; the assertions reflect the new semantics and improve resilience.
Also applies to: 3398-3423, 3470-3494, 3503-3507, 3531-3532
src/core/traits/insertion_algorithm.rs (8)
36-52: Good: clean Display impl avoids macro fragility. Moving formatting logic out of #[error(...)] is safer and keeps messages stable.
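The pattern being endorsed looks roughly like the sketch below: the error type keeps its variants, but the message text lives in a hand-written `Display` impl rather than in `#[error(...)]` attribute strings. `BadCellsError`'s variants appear in this PR, but the exact fields and message wording here are illustrative.

```rust
use std::fmt;

// Illustrative error type; variant fields and message text are assumptions.
#[derive(Debug, PartialEq, Eq)]
enum BadCellsError {
    NoCells,
    AllCellsBad { count: usize },
}

// Formatting lives in a hand-written Display impl, not in #[error(...)]
// attribute strings, so messages stay stable and easy to test.
impl fmt::Display for BadCellsError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::NoCells => write!(f, "triangulation contains no cells"),
            Self::AllCellsBad { count } => {
                write!(f, "all {count} cells were marked bad")
            }
        }
    }
}

impl std::error::Error for BadCellsError {}

fn main() {
    assert_eq!(
        BadCellsError::NoCells.to_string(),
        "triangulation contains no cells"
    );
    assert_eq!(
        BadCellsError::AllCellsBad { count: 4 }.to_string(),
        "all 4 cells were marked bad"
    );
}
```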
518-529: Nice: prealloc + reuse buffer in the inner loop. This avoids churn in a hot path.
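The prealloc-and-reuse pattern praised here amounts to allocating one scratch buffer before the loop and calling `clear()` each iteration, which keeps the allocation alive instead of reallocating. The helper below is a minimal illustrative sketch, not the reviewed code.

```rust
// Illustrative sketch: one buffer, allocated once, reused per iteration.
fn doubled_count(cells: &[Vec<u32>]) -> usize {
    // Allocate once with a capacity bound known up front...
    let mut scratch: Vec<u32> = Vec::with_capacity(8);
    let mut total = 0;
    for cell in cells {
        scratch.clear(); // ...then reuse: clear() keeps the allocation.
        scratch.extend(cell.iter().map(|v| v * 2));
        total += scratch.len();
    }
    total
}

fn main() {
    assert_eq!(doubled_count(&[vec![1, 2, 3], vec![4, 5], vec![6]]), 6);
}
```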
670-674: Good: explicit NoCells error early. Prevents ambiguous downstream states.
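The point of the early `NoCells` return is that an empty triangulation becomes an explicit error rather than an ambiguous empty result. A minimal sketch, with the real TDS and vertex arguments elided and replaced by hypothetical scalar inputs:

```rust
// Illustrative sketch of the early NoCells return; the real find_bad_cells
// takes a TDS and a vertex, both elided here.
#[derive(Debug, PartialEq)]
enum BadCellsError {
    NoCells,
}

fn find_bad_cells(cell_count: usize, bad: &[usize]) -> Result<Vec<usize>, BadCellsError> {
    if cell_count == 0 {
        // Fail fast: an empty triangulation is an explicit error, not an
        // ambiguous empty Vec.
        return Err(BadCellsError::NoCells);
    }
    Ok(bad.to_vec())
}

fn main() {
    assert_eq!(find_bad_cells(0, &[]), Err(BadCellsError::NoCells));
    assert_eq!(find_bad_cells(3, &[1]), Ok(vec![1]));
}
```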
853-885: Fail-fast on invalid facet index/cell key is correct. Early Err maintains TDS invariants and simplifies recovery.
915-929: Visibility helper is sound and side-effect free. Opposite-vertex selection and orientation comparison match the standard test.
Also applies to: 1493-1565
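The "standard test" referenced above: a facet is visible from a query point when the point lies on the opposite side of the facet's hyperplane from the cell's remaining (opposite) vertex. The sketch below shows this in 2D, where a facet is an edge and orientation is the sign of a 2×2 determinant; it is illustrative and does not use the crate's robust predicates.

```rust
// 2D orientation: positive when c is left of the directed edge a -> b.
fn orient2d(a: [f64; 2], b: [f64; 2], c: [f64; 2]) -> f64 {
    (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
}

// Visible iff the query point and the opposite vertex disagree in sign,
// i.e. they lie on opposite sides of the facet's supporting line.
fn facet_visible(edge: ([f64; 2], [f64; 2]), opposite: [f64; 2], query: [f64; 2]) -> bool {
    let side_opposite = orient2d(edge.0, edge.1, opposite);
    let side_query = orient2d(edge.0, edge.1, query);
    side_opposite * side_query < 0.0
}

fn main() {
    let edge = ([0.0, 0.0], [1.0, 0.0]);
    let opposite = [0.5, 1.0]; // interior vertex above the edge
    assert!(facet_visible(edge, opposite, [0.5, -1.0])); // below: visible
    assert!(!facet_visible(edge, opposite, [0.5, 2.0])); // same side: hidden
}
```

Near-degenerate inputs (a query point almost on the line) are exactly where the robust predicate path discussed elsewhere in this review takes over.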
2179-2191: Test assertion pattern is fine. Using `result.err()` only on failure avoids moving the value on success.
2338-2430: Great: direct tests for error variants and messages. Covers AllCellsBad/TooManyDegenerateCells/NoCells formatting and equality.
1141-1167: MSRV and edition OK — let-chain syntax supported. Cargo.toml declares rust-version = "1.89.0" (line 5). Let-chains were stabilized in Rust 1.88 and are edition-gated to 2024; repo docs (WARP.md) already reference Edition 2024, so the `if let … && let …` guard is supported. (blog.rust-lang.org)
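For readers on older editions, the guard in question desugars to nested `if let`s. The sketch below runs the edition-independent nested form and shows the equivalent let-chain in a comment; the `lookup` helper and its map shape are illustrative, not the reviewed code.

```rust
use std::collections::HashMap;

// Illustrative helper: two fallible bindings guarding one return.
fn lookup(map: &HashMap<u32, Option<u32>>, k: u32) -> Option<u32> {
    // Edition-2024 let-chain form (as in the reviewed code):
    //   if let Some(slot) = map.get(&k) && let Some(v) = slot {
    //       return Some(*v);
    //   }
    // Edition-independent nested equivalent:
    if let Some(slot) = map.get(&k) {
        if let Some(v) = slot {
            return Some(*v);
        }
    }
    None
}

fn main() {
    let mut map = HashMap::new();
    map.insert(1u32, Some(10u32));
    map.insert(2, None);
    assert_eq!(lookup(&map, 1), Some(10));
    assert_eq!(lookup(&map, 2), None);
    assert_eq!(lookup(&map, 3), None);
}
```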