feat: Memory profiling system and performance optimizations (v0.4.3) #68
acgetchell merged 13 commits into main from
Conversation
Streamlines benchmark workflow by replacing complex bash scripts with Python utility functions. This simplifies maintenance, improves code readability, and reduces the risk of errors. The changes encompass baseline generation, comparison, commit extraction, skip determination and result display within GitHub Actions workflows.
This commit introduces a comprehensive profiling suite for in-depth performance analysis and a separate memory stress test job. The profiling suite includes:

- Large-scale triangulation performance analysis (10³-10⁶ points)
- Multiple point distributions (random, grid, Poisson disk)
- Memory allocation tracking (with `count-allocations` feature)
- Query latency analysis
- Multi-dimensional scaling (2D-5D)
- Algorithmic bottleneck identification

It's integrated into GitHub Actions with scheduled runs and manual triggering, along with uploading profiling results and baselines. The memory stress test runs independently to exercise allocation APIs and memory scaling under load. Also ignores the "benches/**" directory in codecov, and adds the profiling suite to the README.md.
…0.4.3

- Add allocation counter infrastructure with count-allocations feature flag
- Implement memory tracking for triangulation and convex hull operations
- Add profiling benchmarks in GitHub Actions workflow
- Optimize collections with FxHashMap/FxHashSet for better performance
- Add domain-specific collection types and small buffer optimizations
- Clean up examples by removing test functions (convert to pure demonstrations)
- Update WARP.md guidelines and project documentation
- Add comprehensive memory analysis examples across dimensions (2D-5D)
Walkthrough

This PR introduces a new core collections module and migrates triangulation internals to optimized FxHash/SmallVec-based types. It adds extensive geometry utilities for safe conversions and random data generation, overhauls benchmarks (new profiling suite, CI suite refactors), significantly expands Python benchmark tooling and tests, updates workflows, and adjusts examples/documentation.
Sequence Diagram(s)

sequenceDiagram
autonumber
actor GH as GitHub Actions
participant UV as uv + Python
participant BU as benchmark-utils
participant Repo as Repo Artifacts
GH->>UV: Setup Python (uv) env
GH->>BU: prepare-baseline (SOURCE_TYPE/tag)
alt Baseline found
BU-->>GH: baseline path + metadata
GH->>BU: extract-baseline-commit
GH->>BU: determine-skip(current vs baseline)
alt Should skip
BU-->>GH: skip reason
GH->>BU: display-skip-message
else Run
GH->>BU: run-regression-test(baseline_results.txt)
BU-->>Repo: write compare_results.txt
GH->>BU: display-results
GH->>BU: generate-summary
end
else No baseline
GH->>BU: display-no-baseline
end
sequenceDiagram
autonumber
participant TDS as Tds
participant COL as core::collections
participant CH as ConvexHull
participant Cache as ArcSwapOption<FacetToCellsMap>
TDS->>COL: build_facet_to_cells_hashmap() -> FacetToCellsMap
CH->>Cache: load_full()
alt Cache miss or stale
CH->>TDS: build_facet_to_cells_hashmap()
TDS-->>CH: FacetToCellsMap
CH->>Cache: store(Arc<FacetToCellsMap>)
end
CH-->>CH: use FacetToCellsMap for queries
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes

Possibly related issues
Possibly related PRs
Pre-merge checks: ✅ 3 passed
Pull Request Overview
This PR adds comprehensive memory profiling capabilities and implements safe performance optimizations for the v0.4.3 release. The changes introduce allocation tracking infrastructure, optimize collection usage with fast hash implementations, and create an extensive profiling suite for detailed performance analysis.
- Added memory tracking and profiling infrastructure with `count-allocations` feature flag
- Optimized data structures using FxHashMap/FxHashSet and SmallVec for better performance
- Created comprehensive profiling benchmarks and workflow automation for performance monitoring
Reviewed Changes
Copilot reviewed 20 out of 21 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| src/lib.rs | Added new collections module for optimized data structures |
| src/geometry/util.rs | Added grid and Poisson disk point generation utilities with comprehensive tests |
| src/geometry/predicates.rs | Added inline optimization hints for performance-critical functions |
| src/geometry/algorithms/convex_hull.rs | Migrated to optimized collection types for better performance |
| src/core/triangulation_data_structure.rs | Replaced standard collections with optimized variants throughout |
| src/core/collections.rs | New module providing high-performance collection type aliases |
| scripts/tests/test_benchmark_utils.py | Added comprehensive test coverage for new workflow helper classes |
| scripts/benchmark_utils.py | Extended with workflow automation and regression testing utilities |
| examples/*.rs | Cleaned up by removing test functions, converting to pure demonstrations |
| benches/profiling_suite.rs | New comprehensive profiling suite for optimization work |
| .github/workflows/*.yml | Enhanced automation with Python-based workflow management |
| Cargo.toml | Added performance dependencies (fxhash, smallvec) and profiling configuration |
/// Generate points arranged in a regular grid pattern.
///
/// This function creates points arranged in a D-dimensional hypercube grid,
/// which provides a structured, predictable point distribution useful for
/// benchmarking and testing geometric algorithms under best-case scenarios.
///
/// # Arguments
///
/// * `points_per_dim` - Number of points along each dimension
/// * `spacing` - Distance between adjacent grid points
/// * `offset` - Translation offset for the entire grid
///
/// # Returns
///
/// Vector of grid points, or a `RandomPointGenerationError` if parameters are invalid.
///
/// # Errors
///
/// * `RandomPointGenerationError::InvalidPointCount` if `points_per_dim` is zero
///
/// # Examples
///
/// ```
[nitpick] The documentation states the function generates 'D-dimensional hypercube grid' but the examples show different dimensional grids. Consider clarifying that this generates points in D-dimensional space arranged in a regular grid pattern, where D is the const generic parameter.
#[must_use]
pub fn fast_hash_map_with_capacity<K, V>(capacity: usize) -> FastHashMap<K, V> {
    use std::hash::BuildHasherDefault;
    FastHashMap::with_capacity_and_hasher(capacity, BuildHasherDefault::default())
}
The function creates a BuildHasherDefault::default() on each call. Consider creating a static or const instance of the hasher to avoid repeated allocations, especially since this is a performance-focused utility function.
benches/profiling_suite.rs
Outdated
let points_per_dim = ((count as f64).powf(1.0 / D as f64).ceil() as usize).max(2);
generate_grid_points(points_per_dim, 10.0, [0.0; D]).unwrap()
The calculation of points_per_dim could overflow or produce unexpected results for very large count values or high dimensions. Consider adding bounds checking or using checked arithmetic to prevent potential issues in production profiling runs.
Suggested change:

-let points_per_dim = ((count as f64).powf(1.0 / D as f64).ceil() as usize).max(2);
-generate_grid_points(points_per_dim, 10.0, [0.0; D]).unwrap()
+// Checked arithmetic and bounds checking for points_per_dim
+let raw_points_per_dim = (count as f64).powf(1.0 / D as f64).ceil();
+let max_points_per_dim = usize::MAX as f64;
+let safe_points_per_dim = if raw_points_per_dim.is_finite() && raw_points_per_dim >= 2.0 {
+    if raw_points_per_dim > max_points_per_dim {
+        eprintln!(
+            "Warning: points_per_dim ({}) exceeds usize::MAX ({}); clamping to usize::MAX.",
+            raw_points_per_dim, usize::MAX
+        );
+        usize::MAX
+    } else {
+        raw_points_per_dim as usize
+    }
+} else {
+    eprintln!(
+        "Warning: points_per_dim ({}) is not finite or less than 2; using minimum value 2.",
+        raw_points_per_dim
+    );
+    2
+};
+generate_grid_points(safe_points_per_dim, 10.0, [0.0; D]).unwrap()
scripts/benchmark_utils.py
Outdated
commit_ref = f"{baseline_commit}^{{commit}}"
subprocess.run(["git", "cat-file", "-e", commit_ref], check=True, capture_output=True)  # noqa: S603,S607
Although the baseline_commit is validated with regex, using string formatting to construct git commands could still be risky. Consider using subprocess with explicit arguments list: subprocess.run([\"git\", \"cat-file\", \"-e\", baseline_commit], check=True, capture_output=True) to avoid shell interpretation of the commit reference format.
Suggested change:

-commit_ref = f"{baseline_commit}^{{commit}}"
-subprocess.run(["git", "cat-file", "-e", commit_ref], check=True, capture_output=True)  # noqa: S603,S607
+# Construct commit reference safely using only validated SHA
+subprocess.run(["git", "cat-file", "-e", f"{baseline_commit}^{{commit}}"], check=True, capture_output=True)  # noqa: S603,S607
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/geometry/algorithms/convex_hull.rs (3)
162-167: Cache staleness bug: cached_generation shares AtomicU64 with TDS (always equal) and invalidate_cache mutates TDS.

You clone the Arc from TDS, so current_generation == cached_generation is always true; the cache never rebuilds on TDS changes. invalidate_cache() also resets TDS’s generation to 0 via the shared Arc. Keep an independent snapshot in ConvexHull.
Apply this minimal fix:
@@
-    cached_generation: Arc<AtomicU64>,
+    cached_generation: Arc<AtomicU64>,
@@
-        Ok(Self {
-            hull_facets,
-            facet_to_cells_cache: ArcSwap::from_pointee(None),
-            cached_generation: tds.generation.clone(),
-        })
+        Ok(Self {
+            hull_facets,
+            facet_to_cells_cache: ArcSwap::from_pointee(None),
+            // Snapshot the current TDS generation; do not share the AtomicU64
+            cached_generation: Arc::new(AtomicU64::new(
+                tds.generation.load(Ordering::Relaxed),
+            )),
+        })
@@
-        // Check if cache is stale and needs to be invalidated
-        let current_generation = tds.generation.load(Ordering::Relaxed);
-        let cached_generation = self.cached_generation.load(Ordering::Relaxed);
+        // Check if cache is stale and needs to be invalidated
+        let current_generation = tds.generation.load(Ordering::Relaxed);
+        let cached_generation = self.cached_generation.load(Ordering::Relaxed);
@@
-        // Update the generation counter
-        self.cached_generation
-            .store(current_generation, Ordering::Relaxed);
+        // Update the generation snapshot
+        self.cached_generation
+            .store(current_generation, Ordering::Relaxed);
@@
     pub fn invalidate_cache(&self) {
         // Clear the cache
         self.facet_to_cells_cache.store(Arc::new(None));
-        // Reset the generation counter to 0 to force cache rebuild
-        self.cached_generation.store(0, Ordering::Relaxed);
+        // Reset only our snapshot to force rebuild on next access
+        self.cached_generation.store(0, Ordering::Relaxed);
     }

Update tests accordingly (see below).
Also applies to: 264-268, 372-409, 1084-1090
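The snapshot pattern proposed above can be exercised in isolation. A minimal, stdlib-only sketch, with hypothetical `Source`/`CachedView` types standing in for `Tds`/`ConvexHull` (not the crate's actual API):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

// `Source` plays the role of Tds; `CachedView` the role of ConvexHull.
// The view snapshots the generation instead of cloning the Arc,
// so staleness is actually detectable.
struct Source {
    generation: Arc<AtomicU64>,
}

struct CachedView {
    cached_generation: AtomicU64, // independent snapshot, not a shared Arc
}

impl CachedView {
    fn new(source: &Source) -> Self {
        Self {
            cached_generation: AtomicU64::new(source.generation.load(Ordering::Relaxed)),
        }
    }

    /// Returns true (and refreshes the snapshot) when the source has changed.
    fn is_stale(&self, source: &Source) -> bool {
        let current = source.generation.load(Ordering::Relaxed);
        if current != self.cached_generation.load(Ordering::Relaxed) {
            self.cached_generation.store(current, Ordering::Relaxed);
            true
        } else {
            false
        }
    }
}

fn main() {
    let source = Source { generation: Arc::new(AtomicU64::new(0)) };
    let view = CachedView::new(&source);
    assert!(!view.is_stale(&source)); // fresh right after construction
    source.generation.fetch_add(1, Ordering::Relaxed);
    assert!(view.is_stale(&source)); // bump detected
    assert!(!view.is_stale(&source)); // snapshot refreshed
    println!("ok");
}
```

With the shared-Arc version, `is_stale` could never return true, which is exactly the bug described above.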
718-721: MSRV-safe Option check: replace is_none_or with map_or.

To keep compatibility with older Rust versions (e.g., 1.70 seen in baseline text), avoid Option::is_none_or, which was stabilized only in Rust 1.82.
-            if min_distance.is_none_or(|min_dist| distance < min_dist) {
+            if min_distance.map_or(true, |min_dist| distance < min_dist) {
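The substitution is behavior-preserving: `map_or(true, f)` returns `true` for `None` and `f(v)` for `Some(v)`, exactly like `is_none_or(f)`. A small standalone sketch (the `is_closer` helper is illustrative, not from the codebase):

```rust
// MSRV-friendly equivalent of Option::is_none_or:
// None means "no minimum recorded yet", so any distance qualifies.
fn is_closer(min_distance: Option<f64>, distance: f64) -> bool {
    min_distance.map_or(true, |min_dist| distance < min_dist)
}

fn main() {
    assert!(is_closer(None, 3.0));       // no minimum yet: always closer
    assert!(is_closer(Some(5.0), 3.0));  // 3.0 < 5.0
    assert!(!is_closer(Some(2.0), 3.0)); // 3.0 >= 2.0
    println!("ok");
}
```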
3338-3346: Fix tests to reflect independent generation snapshot.

Tests currently assert that ConvexHull shares the same Arc as TDS. With the bug fix, update them to check staleness detection instead of pointer equality.
-        // Verify they share the same Arc (same memory location)
-        assert!(
-            std::ptr::eq(tds.generation.as_ref(), hull.cached_generation.as_ref()),
-            "TDS and ConvexHull should share the same Arc<AtomicU64> for generation tracking"
-        );
+        // ConvexHull keeps an independent snapshot for staleness detection
+        assert!(
+            !std::ptr::eq(tds.generation.as_ref(), hull.cached_generation.as_ref()),
+            "ConvexHull should keep an independent generation snapshot"
+        );
@@
-        // Verify cache exists by checking that it contains expected facet keys
+        // Verify cache exists by checking that it was built
         let cache_arc = hull.facet_to_cells_cache.load();
         assert!(
             cache_arc.is_some(),
             "Cache should exist after first visibility test"
         );
@@
-        // Now simulate TDS modification by incrementing its generation counter
-        // This simulates what would happen when the TDS is actually modified
+        // Simulate TDS modification by incrementing its generation counter
         let old_generation = tds.generation.load(Ordering::Relaxed);
         tds.generation.store(old_generation + 1, Ordering::Relaxed);
@@
-        // Verify that both see the same change since they share the Arc
-        assert_eq!(
-            modified_tds_gen, stale_hull_gen,
-            "Both should see the generation change since they share the same Arc"
-        );
+        // Hull snapshot is now stale relative to TDS
+        assert!(modified_tds_gen > stale_hull_gen);
@@
-        // First, let's create a separate ConvexHull with its own generation counter
-        // to simulate what would happen with independent generation tracking
-        let hull_with_independent_generation = ConvexHull {
-            hull_facets: hull.hull_facets.clone(),
-            facet_to_cells_cache: ArcSwap::from_pointee(Some(
-                cache_arc.as_ref().as_ref().unwrap().clone(),
-            )),
-            cached_generation: Arc::new(AtomicU64::new(old_generation)), // Stale generation
-        };
+        // Next visibility call should rebuild the cache due to stale snapshot
+        let hull_with_independent_generation = &hull;
@@
-        let result2 =
-            hull_with_independent_generation.is_facet_visible_from_point(facet, &test_point, &tds);
+        let result2 =
+            hull_with_independent_generation.is_facet_visible_from_point(facet, &test_point, &tds);
@@
-        let updated_independent_gen = hull_with_independent_generation
-            .cached_generation
-            .load(Ordering::Relaxed);
+        let updated_independent_gen = hull_with_independent_generation
+            .cached_generation
+            .load(Ordering::Relaxed);

Also applies to: 3349-3368, 3410-3444, 3453-3496, 3500-3514
.github/workflows/generate-baseline.yml (2)
43-54: Fix ordering: using uv before installing it will fail

You invoke “uv run …” in “Determine tag information” before uv is installed. Move “Install uv” and “Verify uv installation” above the tag step (or install uv inline).
Apply:
-      - name: Determine tag information
-        id: tag_info
-        run: uv run benchmark-utils determine-tag
-
-      - name: Install uv (Python package manager)
+      - name: Install uv (Python package manager)
         uses: astral-sh/setup-uv@557e51de59eb14aaaba2ed9621916900a91d50c6 # v6.6.1
         with:
           version: "latest"
       - name: Verify uv installation
         run: uv --version
+
+      - name: Determine tag information
+        id: tag_info
+        run: uv run benchmark-utils determine-tag
55-71: Create output directory before writing baseline

baseline-artifact/ may not exist if benchmark-utils doesn’t create it. Safer to create it explicitly.
       run: |
         echo "🚀 Generating performance baseline for tag $TAG_NAME"
-
+        mkdir -p baseline-artifact
         # Generate baseline using Python CLI tool
         uv run benchmark-utils generate-baseline \
           --output "baseline-artifact/baseline-$TAG_NAME.txt" \
           --tag "$TAG_NAME"

src/core/triangulation_data_structure.rs (1)
1216-1247: Preserve per-vertex neighbor mapping or clarify docs

In assign_neighbors (src/core/triangulation_data_structure.rs:1393), the call
let neighbors: Vec<Uuid> = neighbor_options.into_iter().flatten().collect();
drops None placeholders, so the guarantee “neighbors[i] is opposite vertices[i]” is lost.

- Either change the neighbors field to Option<Vec<Option<Uuid>>> to retain positional semantics,
- Or keep the flat Vec<Uuid>, add an accessor returning per-vertex Option<Uuid>, and update docs to state that Cell::neighbors is unordered.

Will you include a patch for the accessor + doc update?
🧹 Nitpick comments (40)
cspell.json (2)
31-31: Add CamelCase variants for ahash types to prevent false positives

If used in code/docs, consider adding: AHashMap, AHashSet, AHasher.

186-186: Add CamelCase variant for SeaHash

If referenced as SeaHash in docs/code, add “SeaHash” to words.
src/geometry/predicates.rs (1)
445-451: #[inline] on insphere + doc duplication nearby

Inlining is reasonable, but the preceding docs for insphere appear duplicated (large repeated sections). Deduplicate to reduce rustdoc bloat.
Apply this minimal doc cleanup pattern near the second repeated block:
-/// Check if a point is contained within the circumsphere of a simplex using matrix determinant.
-///
-/// This is the `InSphere` predicate test, ...
+/// See function-level docs above for detailed explanation and references.

benches/README.md (1)
61-66: All-bench invocation note (optional)

Optional: add “cargo bench --no-run” tip for CI compile-only checks when iterating.
docs/code_organization.md (1)
574-581: Avoid brittle exact line counts

Stating util.rs as “3,806 lines” will drift. Prefer approximate wording or omit counts to keep docs evergreen.

-#### `util.rs` (3,806 lines)
+#### `util.rs` (large module; thousands of lines)
16-21: Subprocess usage: add timeouts and centralized wrapper

Direct subprocess.run calls (git diff/cat-file) can hang in CI. Prefer a thin wrapper with timeouts and consistent error reporting.

-    result = subprocess.run(["git", "diff", "--name-only", diff_range], capture_output=True, text=True, check=True)  # noqa: S603,S607
+    result = subprocess.run(
+        ["git", "diff", "--name-only", diff_range],
+        capture_output=True, text=True, check=True, timeout=60  # 60s safety timeout
+    )  # noqa: S603,S607

Optionally, expose a run_git_command(args: list[str], timeout=60) helper for reuse.
99-124: Baseline rounding strategy: acceptable, but document it

Rounding to 2 decimals when writing baselines slightly increases noise in %-change calculations. Consider documenting this or storing higher precision in the baseline file.

-            .with_timing(round(low_us, 2), round(mean_us, 2), round(high_us, 2), "µs")
+            .with_timing(round(low_us, 2), round(mean_us, 2), round(high_us, 2), "µs")  # baseline stored at 2dp
190-229: Bench runs: consider reusing dev-mode args

The dev args list is duplicated here and in comparator. Factor into a constant to keep them in sync.
542-549: Unit handling: support both micro symbols

Some tools emit “μs” (Greek mu) vs “µs” (micro sign). Add both to unit_scale to avoid “unit mismatch”.

-    unit_scale = {"ns": 1e-3, "µs": 1.0, "us": 1.0, "ms": 1e3, "s": 1e6}
+    unit_scale = {"ns": 1e-3, "µs": 1.0, "μs": 1.0, "us": 1.0, "ms": 1e3, "s": 1e6}
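The same normalization can be sketched in Rust; the `to_micros` helper below is hypothetical (it mirrors the Python `unit_scale` dict) and shows that U+00B5 (micro sign) and U+03BC (Greek small mu) are distinct codepoints that both need handling:

```rust
/// Convert a timing value to microseconds, accepting both mu spellings
/// that different benchmark tools emit, plus an ASCII fallback.
fn to_micros(value: f64, unit: &str) -> Option<f64> {
    let scale = match unit {
        "ns" => 1e-3,
        "µs" | "μs" | "us" => 1.0, // micro sign, Greek mu, ASCII
        "ms" => 1e3,
        "s" => 1e6,
        _ => return None, // unknown unit: report mismatch upstream
    };
    Some(value * scale)
}

fn main() {
    assert_eq!(to_micros(1.5, "ms"), Some(1500.0));
    // Both mu codepoints normalize identically:
    assert_eq!(to_micros(2.0, "µs"), to_micros(2.0, "μs"));
    assert_eq!(to_micros(1.0, "furlongs"), None);
    println!("ok");
}
```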
577-716: WorkflowHelper outputs: consistent with GHA

Emits to GITHUB_OUTPUT and stdout. Looks correct. Minor nit: avoid importing re inside a function; reuse the top-level import.

-        import re
         # Replace any non-alphanumeric characters (except . _ -) with underscore
         clean_name = re.sub(r"[^a-zA-Z0-9._-]", "_", tag_name)
830-872: Skip logic: handle shallow clones and irrelevant diffs

Nice coverage of edge cases. Consider also skipping when HEAD is a merge commit of the same baseline (rare); optional.
913-927: Regression run doesn’t propagate “regression found” to env

generate_summary infers status by scanning compare_results.txt, which is fine. If you need machine-readable output, optionally set an env var when regressions are present.
scripts/tests/test_benchmark_utils.py (5)
699-727: Simplify env mocking: prefer patch.dict only (drop os.getenv mocking).

Mocking both os.environ and benchmark_utils.os.getenv is redundant and can mask real behavior. Using patch.dict is sufficient and clearer.
-@patch.dict(os.environ, {"GITHUB_REF": "refs/tags/v1.2.3"}, clear=False)
-@patch("benchmark_utils.os.getenv")
-def test_determine_tag_name_from_github_ref(self, mock_getenv):
+@patch.dict(os.environ, {"GITHUB_REF": "refs/tags/v1.2.3"}, clear=False)
+def test_determine_tag_name_from_github_ref(self):
     """Test tag name determination from GITHUB_REF with tag."""
-    mock_getenv.side_effect = lambda key, default="": {
-        "GITHUB_REF": "refs/tags/v1.2.3",
-        "GITHUB_OUTPUT": None,
-    }.get(key, default)
     tag_name = WorkflowHelper.determine_tag_name()
     assert tag_name == "v1.2.3"

Likewise in test_determine_tag_name_generated, keep only patch.dict for GITHUB_REF and patch benchmark_utils.datetime for determinism.
Also applies to: 729-746
841-891: stdout vs stderr assertions: be explicit.

display_baseline_summary prints the “not found” error to stderr. You’re correctly filtering by file==sys.stderr. For positive cases, assert once on stdout-only to avoid false positives from prior prints in the same patched context.
Also applies to: 892-911
913-952: Artifact name sanitization: cover full replacement length and escaping.

The expected underscores count in "@#$%^&*()[]{}|\<>?" is brittle. Consider asserting the regex rule outcome instead of literal length.

-assert artifact_name == "performance-baseline-__________________"
+import re
+assert re.fullmatch(r"performance-baseline-[A-Za-z0-9._-]+", artifact_name)
+assert "_" in artifact_name  # at least one replacement happened
1186-1193: Message expectation may not match implementation.

Test expects “💡 To enable performance regression testing…”, but display_no_baseline_message in benchmark_utils prints two bullet lines without that hint. Either update the implementation to include the hint or relax the assertion to the existing lines.
1254-1312: generate_summary: set CWD only around the call.

Good use of chdir in a try/finally. Minor nit: use contextlib.ExitStack or a helper to avoid repeating cwd save/restore across tests.
src/lib.rs (1)
162-164: New core::collections module: consider prelude exposure and docs caveat.
- Optional: re-export frequently used aliases (e.g., FacetToCellsMap) from prelude if you expect downstream usage in examples.
- Add a doc note that fast hashes (e.g., FxHash) are not DoS-resistant and should not be used with untrusted attacker-controlled keys.
src/geometry/algorithms/convex_hull.rs (2)
162-167: ArcSwap<Option> works; ArcSwapOption would be cleaner.

ArcSwapOption removes Some/None wrapping boilerplate and avoids double as_ref() later.
If you prefer minimal churn, keep as-is; otherwise, migrate to ArcSwapOption.
930-949: FastHashMap in validation is fine; consider SmallVec for positions.

positions: Vec is tiny; using SmallVec<[usize; D]> avoids heap for common cases.
.github/workflows/generate-baseline.yml (1)
99-120: Use the sanitized artifact name in the summary

You echo “performance-baseline-$TAG_NAME”, but the upload uses the sanitized name from the sanitize step. Echo the actual artifact name to avoid confusion.

-          echo "  Artifact: performance-baseline-$TAG_NAME"
+          echo "  Artifact: ${{ steps.safe_name.outputs.artifact_name }}"

.github/workflows/profiling-benchmarks.yml (4)
71-76: Long condition and style: split for readability and linter

The if condition line exceeds the configured limit and is hard to read. Use a temp variable and compare explicitly.

-          if [[ "${{ github.event.inputs.mode }}" == "development" ]] || [[ "${{ github.event_name }}" == "workflow_dispatch" && -z "${{ github.event.inputs.mode }}" ]]; then
+          INPUT_MODE="${{ github.event.inputs.mode }}"
+          if [[ "$INPUT_MODE" == "development" ]] || { [[ "${{ github.event_name }}" == "workflow_dispatch" ]] && [[ -z "$INPUT_MODE" ]]; }; then
82-104: Potential duplicate memory run

You run with --features count-allocations once for all benches and again for memory_profiling unless the filter contains “memory”. If the first run already included memory benches, this doubles work. Consider gating the first run or skipping the second when BENCH_FILTER is empty.
Example tweak:
- Run the first pass without count-allocations unless the filter requests memory.
- Keep the dedicated memory run for memory_profiling only.
152-170: Artifact duplication

You upload target/criterion/ in both “results” and “baseline” artifacts for tags. Consider excluding it from the first artifact when tagging to avoid doubling artifact size.

-          path: |
-            profiling-results/
-            target/criterion/
+          path: profiling-results/

(Keep target/criterion only in the baseline artifact for tags.)
40-40: Trim trailing spaces flagged by the linter

Trailing spaces at these lines were reported by CI. Removing them will quiet the checks.
Also applies to: 86-86, 92-92, 110-110, 117-117
src/geometry/util.rs (3)
1295-1307: Guard against extremely large total_points

points_per_dim.pow(D as u32) can overflow Vec capacity or exhaust memory. Add a sanity cap (e.g., 50M points) and return a clear error if exceeded to avoid OOM in CI.

     let total_points = points_per_dim.pow(d_u32);
+    const MAX_POINTS: usize = 50_000_000;
+    if total_points > MAX_POINTS {
+        return Err(RandomPointGenerationError::RandomGenerationFailed {
+            min: "n/a".into(),
+            max: "n/a".into(),
+            details: format!("Requested {total_points} grid points exceeds safety cap ({MAX_POINTS})"),
+        });
+    }
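The guard's arithmetic can be checked standalone with `usize::checked_pow`, which returns `None` on overflow. A minimal sketch; the `total_grid_points` helper is illustrative, with the 50M cap matching the constant suggested above:

```rust
/// Compute points_per_dim^dim with overflow and memory-cap checks.
fn total_grid_points(points_per_dim: usize, dim: u32) -> Result<usize, String> {
    const MAX_POINTS: usize = 50_000_000; // safety cap to avoid OOM in CI
    let total = points_per_dim
        .checked_pow(dim)
        .ok_or_else(|| format!("{points_per_dim}^{dim} overflows usize"))?;
    if total > MAX_POINTS {
        return Err(format!("{total} grid points exceeds cap ({MAX_POINTS})"));
    }
    Ok(total)
}

fn main() {
    assert_eq!(total_grid_points(10, 3), Ok(1000));
    assert!(total_grid_points(100, 4).is_err());       // 1e8 exceeds the cap
    assert!(total_grid_points(1_000_000, 5).is_err()); // overflows usize
    println!("ok");
}
```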
1418-1437: Use to_array() instead of Into for clarity

Using (&candidate).into() and existing_point.into() relies on From/Into impls; prefer the explicit Point::to_array() used elsewhere for consistency.

-        for existing_point in &points {
-            let existing_coords: [T; D] = existing_point.into();
-            let candidate_coords: [T; D] = (&candidate).into();
+        for existing_point in &points {
+            let existing_coords: [T; D] = existing_point.to_array();
+            let candidate_coords: [T; D] = candidate.to_array();
1399-1416: Validate min_distance early (optional)

If min_distance <= 0, the rejection loop adds overhead with no effect. Treat non-positive min_distance as “no spacing constraint” and short-circuit.

-    let mut rng = rand::rngs::StdRng::seed_from_u64(seed);
+    let mut rng = rand::rngs::StdRng::seed_from_u64(seed);
+    if min_distance <= T::zero() {
+        let mut points = Vec::with_capacity(n_points);
+        for _ in 0..n_points {
+            let coords = [T::zero(); D].map(|_| rng.random_range(bounds.0..bounds.1));
+            points.push(Point::new(coords));
+        }
+        return Ok(points);
+    }
489-527: Hoist per-cell preprocessing out of the inner query loop

You recompute vertex_coords for every query_point. Precompute once per cell to cut redundant allocations and copies.

-        for query_point in &query_points {
-            let query_coords: [f64; 3] = query_point.into();
-            let vertex_coords: Vec<[f64; 3]> = cell
-                .1
-                .vertices()
-                .iter()
-                .map(std::convert::Into::into)
-                .collect();
-            if vertex_coords.len() == 4 {
-                let points_for_test: Vec<Point<f64, 3>> =
-                    vertex_coords.into_iter().map(Point::new).collect();
-                let query_point_obj = Point::new(query_coords);
-                #[allow(clippy::items_after_statements)]
-                {
-                    use delaunay::geometry::predicates::insphere_lifted;
-                    let result = insphere_lifted(&points_for_test, query_point_obj);
-                    query_results.push(result);
-                }
-            }
+        // Precompute once per cell
+        let vertex_coords: Vec<[f64; 3]> = cell
+            .1
+            .vertices()
+            .iter()
+            .map(std::convert::Into::into)
+            .collect();
+        if vertex_coords.len() == 4 {
+            let points_for_test: Vec<Point<f64, 3>> =
+                vertex_coords.iter().copied().map(Point::new).collect();
+            for query_point in &query_points {
+                let query_coords: [f64; 3] = (*query_point).into();
+                let query_point_obj = Point::new(query_coords);
+                #[allow(clippy::items_after_statements)]
+                {
+                    use delaunay::geometry::predicates::insphere_lifted;
+                    let result = insphere_lifted(&points_for_test, query_point_obj);
+                    query_results.push(result);
+                }
+                if query_results.len() >= 1000 { break; }
+            }
+        }
272-291: “Bytes per point” uses requested count, not actual generated count

Grid can overshoot and Poisson can undershoot the requested count, so the reported bytes/point is misleading. Track and pass the actual vertices.len() to print_alloc_summary.
551-571: Measure the success path of boundary_facets

You’re benchmarking a Result, not the operation’s payload. Unwrap to fail fast and avoid measuring Err paths.

-                let boundary_facets = tds.boundary_facets();
-                let _ = black_box(boundary_facets);
+                let boundary_facets = tds.boundary_facets().expect("boundary_facets failed");
+                black_box(boundary_facets);
169-191: Throughput can be inaccurate for Grid/Poisson

Throughput::Elements(count) won’t match actual input sizes for Grid/Poisson. Either compute actual_count in setup and use that, or omit throughput to avoid skewed metrics.
src/core/triangulation_data_structure.rs (6)
1392-1400: Avoid flattening if positional semantics must hold; otherwise adjust docs

Flattening neighbor_options drops None slots. If you decide to keep the unordered representation, update this method’s rustdoc to remove the positional guarantee to avoid misleading users.

-        // Filter out None values to get only actual neighbors
-        let neighbors: Vec<Uuid> = neighbor_options.into_iter().flatten().collect();
+        // Collect only actual neighbors; order is not guaranteed to correspond to vertex indices
+        let neighbors: Vec<Uuid> = neighbor_options.into_iter().flatten().collect();
1629-1651: Docstring still references HashMap; return type is FacetToCellsMap

Adjust docs to match the new alias to avoid confusion.

-    /// Builds a `HashMap` mapping facet keys to the cells and facet indices that contain them.
+    /// Builds a `FacetToCellsMap` mapping facet keys to the cells and facet indices that contain them.
 ...
-    /// A `HashMap<u64, Vec<(CellKey, usize)>>` where:
+    /// A `FacetToCellsMap` where:
2145-2153: Comment/code mismatch: storing VertexKey, not UUIDs

You precompute FastHashSet but the comment says “vertex UUIDs”. Fix the comment to avoid confusion.

-        // Pre-compute vertex UUIDs for all cells to avoid repeated computation
+        // Pre-compute vertex keys for all cells to avoid repeated computation
2188-2196: Avoid per-iteration set allocation for mutual neighbor check

Neighbor lists are tiny (≤ D+1). A linear any() is cheaper than building a set each time.

-                if let Some(neighbor_neighbors) = &neighbor_cell.neighbors {
-                    let neighbor_set: FastHashSet<_> = neighbor_neighbors.iter().collect();
-                    if !neighbor_set.contains(&cell.uuid()) {
+                if let Some(neighbor_neighbors) = &neighbor_cell.neighbors {
+                    if !neighbor_neighbors.iter().any(|u| *u == cell.uuid()) {
                         return Err(TriangulationValidationError::InvalidNeighbors {
                             message: format!(
                                 "Neighbor relationship not mutual: {:?} → {neighbor_uuid:?}",
                                 cell.uuid()
                             ),
                         });
                     }
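Both forms answer the same membership question; a small stdlib sketch comparing them (the helper names are illustrative, not from the codebase):

```rust
use std::collections::HashSet;

// Linear scan: no allocation, ideal for lists of length <= D+1.
fn contains_linear(neighbors: &[u64], target: u64) -> bool {
    neighbors.iter().any(|&u| u == target)
}

// Set-based check: allocates and hashes every element on each call,
// which only pays off for repeated lookups against the same list.
fn contains_via_set(neighbors: &[u64], target: u64) -> bool {
    let set: HashSet<u64> = neighbors.iter().copied().collect();
    set.contains(&target)
}

fn main() {
    let neighbors = [11u64, 42, 7]; // tiny, like a low-dimensional cell's neighbor list
    assert_eq!(contains_linear(&neighbors, 42), contains_via_set(&neighbors, 42));
    assert_eq!(contains_linear(&neighbors, 99), contains_via_set(&neighbors, 99));
    println!("ok");
}
```

For a single containment check per neighbor pair, the linear scan does strictly less work.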
1549-1555: Prefer sort_unstable for UUID vectors in hot paths

Minor perf win and less overhead; order stability isn’t required here.

-        vertex_uuids.sort();
+        vertex_uuids.sort_unstable();
1479-1484: Pre-allocate vertex_to_cellsYou know the upper bound (vertices.len()); pre-alloc to cut rehashing on large meshes.
```diff
-        let mut vertex_to_cells: VertexToCellsMap = FastHashMap::default();
+        let mut vertex_to_cells: VertexToCellsMap =
+            fast_hash_map_with_capacity(self.vertices.len());
```

src/core/collections.rs (2)
68-69: Re-export coupling may create tight module interdependence

collections re-exports CellKey/VertexKey from triangulation_data_structure, while triangulation_data_structure imports types from collections. Verify this doesn't cause cyclic init or name-resolution headaches. If it does, consider moving keys to a small keys.rs module used by both.
331-335: Be explicit about the hasher to avoid inference pitfalls

Using BuildHasherDefault::default() works but is opaque. Consider FxBuildHasher for clarity.
```diff
-    use std::hash::BuildHasherDefault;
-    FastHashMap::with_capacity_and_hasher(capacity, BuildHasherDefault::default())
+    use fxhash::FxBuildHasher;
+    FastHashMap::with_capacity_and_hasher(capacity, FxBuildHasher::default())
```
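The construction pattern behind this suggestion can be shown std-only. Note the assumption: std's `DefaultHasher` stands in for fxhash's `FxBuildHasher` so the sketch compiles without external crates; the `with_capacity_and_hasher` call shape is identical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::BuildHasherDefault;

// Sketch of pre-sizing a map with an explicit hasher. DefaultHasher is a
// stand-in for FxBuildHasher here; only the construction pattern matters.
type FastMap<K, V> = HashMap<K, V, BuildHasherDefault<DefaultHasher>>;

fn fast_map_with_capacity<K, V>(capacity: usize) -> FastMap<K, V> {
    FastMap::with_capacity_and_hasher(capacity, BuildHasherDefault::default())
}

fn main() {
    let mut m: FastMap<u64, &str> = fast_map_with_capacity(16);
    m.insert(1, "vertex");
    // At least the requested capacity is reserved up front — no rehashing
    // while filling the first 16 entries.
    assert!(m.capacity() >= 16);
    assert_eq!(m.get(&1), Some(&"vertex"));
}
```

Naming the hasher type at the construction site (rather than relying on `BuildHasherDefault::default()` inference) is what makes the intent readable.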
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (20)
- .codecov.yml (1 hunks)
- .github/workflows/benchmarks.yml (2 hunks)
- .github/workflows/generate-baseline.yml (2 hunks)
- .github/workflows/profiling-benchmarks.yml (1 hunks)
- Cargo.toml (3 hunks)
- WARP.md (1 hunks)
- benches/README.md (2 hunks)
- benches/profiling_suite.rs (1 hunks)
- cspell.json (2 hunks)
- docs/code_organization.md (1 hunks)
- examples/convex_hull_3d_50_points.rs (0 hunks)
- examples/triangulation_3d_50_points.rs (0 hunks)
- scripts/benchmark_utils.py (4 hunks)
- scripts/tests/test_benchmark_utils.py (3 hunks)
- src/core/collections.rs (1 hunks)
- src/core/triangulation_data_structure.rs (15 hunks)
- src/geometry/algorithms/convex_hull.rs (3 hunks)
- src/geometry/predicates.rs (2 hunks)
- src/geometry/util.rs (2 hunks)
- src/lib.rs (1 hunks)
💤 Files with no reviewable changes (2)
- examples/convex_hull_3d_50_points.rs
- examples/triangulation_3d_50_points.rs
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
scripts/benchmark_utils.py
🧠 Learnings (3)
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
PR: acgetchell/delaunay#60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.
Applied to files:
cspell.json
📚 Learning: 2025-09-02T02:40:54.439Z
Learnt from: acgetchell
PR: acgetchell/delaunay#58
File: scripts/tests/test_subprocess_utils.py:92-101
Timestamp: 2025-09-02T02:40:54.439Z
Learning: In Rust projects, cargo is expected to be available in development and CI environments, so conditional skipping of cargo-dependent tests is typically unnecessary.
Applied to files:
WARP.md
📚 Learning: 2025-09-04T20:03:49.859Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.859Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
docs/code_organization.md
🧬 Code graph analysis (6)
src/core/triangulation_data_structure.rs (1)
src/core/collections.rs (2)
fast_hash_map_with_capacity (332-335), fast_hash_map_with_capacity (434)
scripts/tests/test_benchmark_utils.py (1)
scripts/benchmark_utils.py (18)
BenchmarkRegressionHelper (718-988), CriterionParser (78-179), PerformanceComparator (273-574), WorkflowHelper (577-715), determine_tag_name (581-606), create_metadata (609-653), display_baseline_summary (656-689), sanitize_artifact_name (692-715), prepare_baseline (722-761), set_no_baseline_status (764-774), extract_baseline_commit (777-827), determine_benchmark_skip (830-871), display_skip_message (874-887), display_no_baseline_message (890-900), compare_with_baseline (281-338), run_regression_test (903-930), display_results (933-945), generate_summary (948-988)
src/core/collections.rs (2)
src/core/triangulation_data_structure.rs (3)
map (2415), map (2426), new (874-906)

src/core/traits/insertion_algorithm.rs (1)
with_capacity(234-241)
benches/profiling_suite.rs (3)
src/geometry/util.rs (36)
generate_grid_points (1290-1352; referenced at 3547, 3580, 3596, 3612, 3621, 3635, 3648, 3661), generate_poisson_points (1390-1457; referenced at 3679, 3712, 3745-3746, 3760, 3767, 3778, 3794, 3817, 3829), generate_random_points_seeded (1224-1251; referenced at 3309-3310, 3329-3330, 3348-3349, 3367-3368, 3387-3388, 3392-3393, 3397-3398, 3402)
insphere_lifted(600-716)src/geometry/algorithms/convex_hull.rs (1)
from_triangulation(238-269)
src/geometry/util.rs (2)
src/geometry/point.rs (2)
try_from (313-336), new (74-76)

src/geometry/traits/coordinate.rs (2)
dim(630-632)new(643-643)
scripts/benchmark_utils.py (2)
scripts/tests/test_benchmark_utils.py (3)
comparator (214-217), comparator (504-507), comparator (621-624)

scripts/hardware_utils.py (1)
main(578-625)
🪛 GitHub Check: build (ubuntu-latest)
.github/workflows/profiling-benchmarks.yml
[failure] 117-117:
117:1 [trailing-spaces] trailing spaces
[failure] 110-110:
110:1 [trailing-spaces] trailing spaces
[failure] 92-92:
92:1 [trailing-spaces] trailing spaces
[failure] 86-86:
86:1 [trailing-spaces] trailing spaces
[failure] 71-71:
71:121 [line-length] line too long (172 > 120 characters)
[failure] 42-42:
42:5 [indentation] wrong indentation: expected 6 but found 4
[failure] 40-40:
40:1 [trailing-spaces] trailing spaces
[failure] 13-13:
13:9 [indentation] wrong indentation: expected 10 but found 8
.github/workflows/benchmarks.yml
[failure] 280-280:
280:1 [trailing-spaces] trailing spaces
[failure] 243-243:
243:1 [trailing-spaces] trailing spaces
🪛 GitHub Actions: CI
.github/workflows/benchmarks.yml
[error] 243-243: Trailing spaces detected (trailing-spaces).
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: build (windows-latest)
- GitHub Check: performance-regression
- GitHub Check: Code Coverage
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: performance-regression
🔇 Additional comments (20)
.codecov.yml (1)
15-15: Exclude benches from coverage: LGTM

Ignoring benches in Codecov aligns with tarpaulin excludes and avoids skewed coverage. No action needed.
Cargo.toml (3)
24-28: Performance deps: clarify fxhash threat model

fxhash is fast but collision-prone. Confirm it's used only with trusted/non-adversarial keys (e.g., integers, UUIDs) or document the threat model; otherwise consider rustc-hash/ahash.
Would you like a quick scan to list maps/sets switched to FxHash* and their key types?
76-79: New benchmark target: looks good

profiling_suite bench entry is correctly configured with harness = false.

85-90: Tarpaulin exclusions consistent with Codecov

Excluding benches and examples in tarpaulin matches Codecov settings. Good consistency.
src/geometry/predicates.rs (1)
112-116: #[inline] on simplex_orientation: measure impact

Inlining a determinant-building routine may increase code size without wins. Keep #[inline] (not always) but please confirm with perf/size measurements in hot paths; remove if no benefit.
benches/README.md (3)
32-45: Profiling suite commands: clear and actionable

Commands and dev mode flag are clear. Consider adding a note that count-allocations requires feature gating in Cargo.toml (already present).

47-57: Runtime disclaimer: good

Strong warning about runtime is helpful for contributors. No changes needed.

423-457: GitHub Actions integration: verify schedule text vs workflow

Ensure the documented monthly schedule ("First Sunday at 2 AM UTC") matches the cron in profiling-benchmarks.yml to avoid confusion.
WARP.md (1)
258-270: TDD guidance: helpful and concise

The Red–Green–Refactor section is solid. You might cross-link to tests/README.md for concrete examples.
scripts/benchmark_utils.py (7)
245-271: Baseline metadata: good resilience

Handles missing git/hardware gracefully and writes metadata first. LGTM.

721-762: prepare_baseline: robust env plumbing

Sets BASELINE_* vars and previews metadata. Looks good.

777-827: extract_baseline_commit: safe parsing + env export

Good validation of SHA and fallback to metadata.json. LGTM.

991-1049: CLI surface: comprehensive

Subcommands are well structured. No issues spotted.

1051-1061: find_project_root: early exit path

Clear error messaging and exit code. Good.

1151-1177: Command dispatcher: straightforward

Clean routing across command groups. LGTM.
146-179: Criterion path pattern too specific

Verify that your benchmarks actually output under `target/criterion/*/tds_new_*d/tds_new/estimates.json`; if your Criterion IDs differ, replace the hardcoded glob with a recursive search for `estimates.json` (e.g. `path.rglob("estimates.json")`).

.github/workflows/generate-baseline.yml (1)
84-89: sanitize_artifact_name emits `artifact_name` to GITHUB_OUTPUT

Confirmed that `sanitize_artifact_name` opens the file at `$GITHUB_OUTPUT` and appends `artifact_name=<value>\n`, so the step output is correctly set.

.github/workflows/benchmarks.yml (1)
232-246: Ignore baseline preparation env clarity comment

The workflow step echoes BASELINE_ORIGIN and BASELINE_TAG to $GITHUB_ENV before invoking benchmark-utils, and prepare-baseline appends BASELINE_EXISTS, BASELINE_SOURCE and (when present) BASELINE_COMMIT to $GITHUB_ENV. Existing tests cover all success, missing, and commit scenarios.

Likely an incorrect or invalid review comment.
src/geometry/util.rs (1)
161-168: rand API usage is compatible with rand 0.9.2

The rand 0.9.2 changelog shows that `rand::thread_rng()` was renamed to `rand::rng()`, `Rng::gen_range` to `random_range`, and the `SampleUniform` trait in `rand::distr::uniform` is available as of 0.9 (docs.rs).

src/core/collections.rs (1)
149-168: Good choice of inline capacities for SmallVec-backed maps

Inline 2 for facets and 8 for neighbors/vertex-incidence matches typical D+1 patterns and should reduce allocations on common cases.
.github/workflows/benchmarks.yml
Outdated
| echo " Baseline source: ${BASELINE_SOURCE:-unknown}" | ||
| echo " Baseline origin: ${BASELINE_ORIGIN:-unknown}" | ||
|
|
||
| # This will exit with code 1 if significant regressions are found | ||
| echo " Using CI performance suite against ${BASELINE_ORIGIN:-unknown} baseline" | ||
| BASELINE_PATH="baseline-artifact/baseline_results.txt" | ||
| if uv run benchmark-utils --help >/dev/null 2>&1; then | ||
| uv run benchmark-utils compare --baseline "$BASELINE_PATH" | ||
| elif uv run python -c "import importlib; importlib.import_module('scripts.benchmark_utils')" \ | ||
| >/dev/null 2>&1; then | ||
| uv run python -m scripts.benchmark_utils compare --baseline "$BASELINE_PATH" | ||
| else | ||
| echo "❌ benchmark-utils entrypoint and module not found" >&2 | ||
| exit 2 | ||
| fi | ||
|
|
||
| # Run regression test using Python utility | ||
| uv run benchmark-utils run-regression-test \ | ||
| --baseline "baseline-artifact/baseline_results.txt" | ||
|
|
Typo in variable name: echo uses BASELINE_SOURCE, but you set BASELINE_ORIGIN
This prints “unknown” even when BASELINE_ORIGIN is set.
- echo " Baseline source: ${BASELINE_SOURCE:-unknown}"
+ echo " Baseline origin: ${BASELINE_ORIGIN:-unknown}"
- echo " Baseline origin: ${BASELINE_ORIGIN:-unknown}"
+ # (printed above)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| echo " Baseline source: ${BASELINE_SOURCE:-unknown}" | |
| echo " Baseline origin: ${BASELINE_ORIGIN:-unknown}" | |
| # This will exit with code 1 if significant regressions are found | |
| echo " Using CI performance suite against ${BASELINE_ORIGIN:-unknown} baseline" | |
| BASELINE_PATH="baseline-artifact/baseline_results.txt" | |
| if uv run benchmark-utils --help >/dev/null 2>&1; then | |
| uv run benchmark-utils compare --baseline "$BASELINE_PATH" | |
| elif uv run python -c "import importlib; importlib.import_module('scripts.benchmark_utils')" \ | |
| >/dev/null 2>&1; then | |
| uv run python -m scripts.benchmark_utils compare --baseline "$BASELINE_PATH" | |
| else | |
| echo "❌ benchmark-utils entrypoint and module not found" >&2 | |
| exit 2 | |
| fi | |
| # Run regression test using Python utility | |
| uv run benchmark-utils run-regression-test \ | |
| --baseline "baseline-artifact/baseline_results.txt" | |
| echo " Baseline origin: ${BASELINE_ORIGIN:-unknown}" | |
| # (printed above) | |
| # Run regression test using Python utility | |
| uv run benchmark-utils run-regression-test \ | |
| --baseline "baseline-artifact/baseline_results.txt" |
🧰 Tools
🪛 GitHub Check: build (ubuntu-latest)
[failure] 280-280:
280:1 [trailing-spaces] trailing spaces
🤖 Prompt for AI Agents
In .github/workflows/benchmarks.yml around lines 278 to 284 the echo line prints
BASELINE_SOURCE but the workflow actually sets/uses BASELINE_ORIGIN, causing
"unknown" to be shown; update the echo to reference ${BASELINE_ORIGIN:-unknown}
(or consistently set BASELINE_SOURCE where intended) so the correct variable
value is displayed, keeping the rest of the block unchanged.
Refactors benchmark workflows for better performance profiling, improves memory allocation tracking, and enhances numerical stability. Updates profiling benchmarks to track actual point counts and adds safety cap to prevent out-of-memory errors. Switches `Cell` neighbor storage to `Option` for correct positional semantics. Changes ConvexHull caching to `ArcSwapOption`. These are internal changes to improve performance and robustness of the library.
Coverage summary from Codacy

See diff coverage on Codacy
Coverage variation details
Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch.

Diff coverage details
Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified.
Codecov Report

❌ Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## main #68 +/- ##
==========================================
+ Coverage 69.60% 70.50% +0.90%
==========================================
Files 18 17 -1
Lines 2520 2689 +169
==========================================
+ Hits 1754 1896 +142
- Misses 766 793 +27
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/core/algorithms/robust_bowyer_watson.rs (2)
261-266: Invalid if-let chain with extra boolean guard; won't compile.

`if let ... && !boundary_facets.is_empty()` is a let-chain, which is not valid syntax on stable Rust until let-chains are stabilized. Split into nested if or use match.
```diff
-        if let Ok(boundary_facets) =
-            self.find_cavity_boundary_facets_with_robust_fallback(tds, &bad_cells)
-            && !boundary_facets.is_empty()
-        {
+        if let Ok(boundary_facets) =
+            self.find_cavity_boundary_facets_with_robust_fallback(tds, &bad_cells)
+        {
+            if !boundary_facets.is_empty() {
                 let cells_removed = bad_cells.len();
                 <Self as InsertionAlgorithm<T, U, V, D>>::remove_bad_cells(tds, &bad_cells);
                 <Self as InsertionAlgorithm<T, U, V, D>>::ensure_vertex_in_tds(tds, vertex);
                 let cells_created =
                     <Self as InsertionAlgorithm<T, U, V, D>>::create_cells_from_boundary_facets(
                         tds, &boundary_facets, vertex,
                     );
                 // Maintain invariants after structural changes
                 <Self as InsertionAlgorithm<T, U, V, D>>::finalize_after_insertion(tds).map_err(
                     |e| TriangulationValidationError::InconsistentDataStructure {
                         message: format!(
                             "Failed to finalize triangulation after cavity-based insertion: {e}"
                         ),
                     },
                 )?;
                 return Ok(InsertionInfo {
                     strategy: InsertionStrategy::CavityBased,
                     cells_removed,
                     cells_created,
                     success: true,
                     degenerate_case_handled: false,
                 });
-        }
+            }
+        }
```
315-318: Same invalid if-let pattern for visible facets.

Split the boolean check out of the if let.
```diff
-        if let Ok(visible_facets) =
-            self.find_visible_boundary_facets_with_robust_fallback(tds, vertex)
-            && !visible_facets.is_empty()
-        {
+        if let Ok(visible_facets) = self.find_visible_boundary_facets_with_robust_fallback(tds, vertex) {
+            if !visible_facets.is_empty() {
                 <Self as InsertionAlgorithm<T, U, V, D>>::ensure_vertex_in_tds(tds, vertex);
                 let cells_created =
                     <Self as InsertionAlgorithm<T, U, V, D>>::create_cells_from_boundary_facets(
                         tds, &visible_facets, vertex,
                     );
                 // Maintain invariants after structural changes
                 <Self as InsertionAlgorithm<T, U, V, D>>::finalize_after_insertion(tds).map_err(
                     |e| TriangulationValidationError::InconsistentDataStructure {
                         message: format!(
                             "Failed to finalize triangulation after hull extension insertion: {e}"
                         ),
                     },
                 )?;
                 return Ok(InsertionInfo {
                     strategy: InsertionStrategy::HullExtension,
                     cells_removed: 0,
                     cells_created,
                     success: true,
                     degenerate_case_handled: false,
                 });
-        }
+            }
+        }
```

src/geometry/util.rs (1)
413-449: Adjust f64 integer precision limit to 2⁵³−1

Replace every use of `MAX_PRECISE_USIZE_IN_F64 = 1_u64 << 52` (2⁵²) and comments/tests referencing "2^52" with a new constant:

```rust
// f64 integers are exact up to 2^53 − 1 (implicit leading 1 bit)
const MAX_EXACT_INT_IN_F64: u64 = (1_u64 << 53) - 1; // 9_007_199_254_740_991
```

and update all checks (`value_u64 > …`) accordingly.
Occurrences to update (e.g. via `rg -nP '2\^52|<<\s*52|MAX_PRECISE_USIZE_IN_F64'`):
- src/geometry/util.rs lines 371, 410–429
- Tests at lines 1691–1739, 1821–1824
- Example print at line 1864
Adjust doc comments and assertions to expect the 2^53−1 boundary.

src/core/triangulation_data_structure.rs (2)
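The 2⁵³−1 boundary discussed above is easy to verify empirically; this std-only snippet (not part of the PR) demonstrates why the cap at 2⁵² is needlessly tight and where exactness actually breaks:

```rust
// Demonstrates why the exact-integer limit for f64 is 2^53 - 1, not 2^52:
// every integer up to 2^53 - 1 round-trips through f64 exactly, and
// 2^53 + 1 is the first integer that does not.
fn main() {
    const MAX_EXACT_INT_IN_F64: u64 = (1_u64 << 53) - 1; // 9_007_199_254_740_991

    // Exact at the proposed boundary...
    assert_eq!(MAX_EXACT_INT_IN_F64 as f64 as u64, MAX_EXACT_INT_IN_F64);

    // ...and values just above 2^52 are also exact, so a 2^52 cap is too low.
    assert_eq!(((1_u64 << 52) + 1) as f64 as u64, (1_u64 << 52) + 1);

    // First failure: 2^53 + 1 rounds to 2^53 (ties-to-even).
    assert_eq!(((1_u64 << 53) + 1) as f64 as u64, 1_u64 << 53);
}
```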
2419-2425: Fix: manual Deserialize uses serde_json::Value incorrectly (will not compile).

`deserialize_bimap` expects a Deserializer, but a serde_json::Value is passed. Remove serde_json entirely here and deserialize into a HashMap, then build the BiMap.

Apply:
```diff
-            // Use the custom deserialize function for BiMap
-            let vertex_bimap_deserializer =
-                map.next_value::<serde_json::Value>()?;
-            vertex_bimap = Some(
-                deserialize_bimap(vertex_bimap_deserializer)
-                    .map_err(de::Error::custom)?,
-            );
+            // Deserialize into a HashMap then construct the BiMap
+            let raw_map: std::collections::HashMap<Uuid, VertexKey> = map.next_value()?;
+            let mut bm = BiMap::new();
+            for (uuid, key) in raw_map {
+                bm.insert(uuid, key);
+            }
+            vertex_bimap = Some(bm);
```
2427-2436: Same issue for Cell BiMap deserialization.

Mirror the fix for the cell BiMap arm.

Apply:
```diff
-            // Use the custom deserialize function for BiMap
-            let cell_bimap_deserializer = map.next_value::<serde_json::Value>()?;
-            cell_bimap = Some(
-                deserialize_cell_bimap(cell_bimap_deserializer)
-                    .map_err(de::Error::custom)?,
-            );
+            let raw_map: std::collections::HashMap<Uuid, CellKey> = map.next_value()?;
+            let mut bm = BiMap::new();
+            for (uuid, key) in raw_map {
+                bm.insert(uuid, key);
+            }
+            cell_bimap = Some(bm);
```
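The "HashMap first, then rebuild the bidirectional map" idea can be sketched without serde or the bimap crate. Everything here is a stand-in: `BiMap` is hand-rolled, and `u64` replaces the Uuid/CellKey types.

```rust
use std::collections::HashMap;

// Std-only sketch of the suggested fix: deserialize into a plain HashMap,
// then rebuild the bidirectional mapping entry by entry. `BiMap` is a
// stand-in for the bimap crate's type; u64 stands in for Uuid/CellKey.
struct BiMap {
    forward: HashMap<u64, u64>,
    reverse: HashMap<u64, u64>,
}

impl BiMap {
    fn new() -> Self {
        Self { forward: HashMap::new(), reverse: HashMap::new() }
    }
    fn insert(&mut self, uuid: u64, key: u64) {
        self.forward.insert(uuid, key);
        self.reverse.insert(key, uuid);
    }
}

fn main() {
    // Stand-in for `map.next_value::<HashMap<Uuid, CellKey>>()?`.
    let raw_map: HashMap<u64, u64> = HashMap::from([(10, 1), (20, 2)]);
    let mut bm = BiMap::new();
    for (uuid, key) in raw_map {
        bm.insert(uuid, key);
    }
    assert_eq!(bm.forward[&10], 1);
    assert_eq!(bm.reverse[&2], 20);
}
```

The key point of the fix is that no intermediate `serde_json::Value` is needed: the serde machinery can deserialize directly into the plain map type.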
♻️ Duplicate comments (7)
scripts/benchmark_utils.py (1)
48-65: Use the secure git wrapper instead of redefining run_git_command.

This duplicates functionality already provided by scripts/subprocess_utils.run_git_command and bypasses the get_safe_executable safety. Remove this local helper and import/use the shared one for consistency and security.
```diff
-def run_git_command(args: list[str], timeout: int = 60) -> subprocess.CompletedProcess[str]:
-    """
-    Run git command with timeout and consistent error handling.
-    ...
-    """
-    cmd = ["git", *args]
-    return subprocess.run(cmd, capture_output=True, text=True, check=True, timeout=timeout)  # noqa: S603
+# Use the shared secure wrapper from subprocess_utils
```
- from subprocess_utils import get_git_commit_hash, run_cargo_command # type: ignore[no-redef] + from subprocess_utils import get_git_commit_hash, run_cargo_command, run_git_command # type: ignore[no-redef] @@ - from scripts.subprocess_utils import get_git_commit_hash, run_cargo_command # type: ignore[no-redef] + from scripts.subprocess_utils import get_git_commit_hash, run_cargo_command, run_git_command # type: ignore[no-redef].github/workflows/benchmarks.yml (1)
278-283: Variable fix confirmed.

Echo now uses BASELINE_ORIGIN (was BASELINE_SOURCE previously). Trailing-space issue from earlier comments also appears resolved.
.github/workflows/profiling-benchmarks.yml (2)
41-44: Indentation issues previously flagged look fixed.

The steps blocks are correctly indented now.
Also applies to: 191-194
124-131: Explicit compare for PROFILING_DEV_MODE looks good.

The boolean-like expression is now deterministic.
src/geometry/util.rs (2)
1272-1276: Clarify grid docs wording (not strictly a "hypercube grid")

Say "regular grid in D-dimensional space (Cartesian product of equally spaced coordinates)" to avoid confusion.
Apply:
```diff
-/// This function creates points arranged in a D-dimensional hypercube grid,
+/// This function creates points in D-dimensional space arranged in a regular grid
+/// (Cartesian product of equally spaced coordinates),
```
1344-1369: Mixed‑radix generator: great improvement over recursive tuple building

No extra O(N) allocations; iteration order is well-defined and cache-friendly. Nice.
benches/profiling_suite.rs (1)
134-143: Grid point count: avoid bench panics and handle safety-cap errors.

For large `count` or high D, `generate_grid_points` may exceed the safety cap and return Err; `.unwrap()` will panic mid-benchmark. Handle gracefully and clamp.

Apply:
```diff
-    let points_per_dim = ((count as f64).powf(1.0 / D as f64).ceil() as usize).max(2);
-    generate_grid_points(points_per_dim, 10.0, [0.0; D]).unwrap()
+    let points_per_dim = ((count as f64).powf(1.0 / D as f64).ceil() as usize).max(2);
+    match generate_grid_points(points_per_dim, 10.0, [0.0; D]) {
+        Ok(pts) => pts,
+        Err(e) => {
+            eprintln!(
+                "Grid generation capped/failed for D={}: count={}, points_per_dim={}, err={:?}. Falling back to random.",
+                D, count, points_per_dim, e
+            );
+            generate_random_points_seeded(count, (-100.0, 100.0), seed).unwrap()
+        }
+    }
```
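The sizing arithmetic behind this fallback can be sketched in isolation. This is a std-only illustration with illustrative names (`grid_points_per_dim`, `MAX_GRID_POINTS`): the real crate's functions and cap value may differ.

```rust
// Sketch of the grid-sizing logic: points per axis for a target count,
// plus an overflow-safe check against a safety cap before materializing
// the full D-dimensional grid. Names and the cap value are illustrative.
const MAX_GRID_POINTS: usize = 50_000_000;

fn grid_points_per_dim(count: usize, d: u32) -> usize {
    // ceil(count^(1/D)), clamped to at least 2 points per axis.
    ((count as f64).powf(1.0 / f64::from(d)).ceil() as usize).max(2)
}

fn grid_size_within_cap(points_per_dim: usize, d: u32) -> Option<usize> {
    // checked_pow guards against usize overflow for high D.
    let total = points_per_dim.checked_pow(d)?;
    (total <= MAX_GRID_POINTS).then_some(total)
}

fn main() {
    // Even a request for 1 point still yields a 2-per-axis grid.
    assert_eq!(grid_points_per_dim(1, 3), 2);
    // A 3D grid at 10 per axis is 1000 points — well under the cap.
    assert_eq!(grid_size_within_cap(10, 3), Some(1000));
    // A 5D grid at 100 per axis (10^10 points) blows the cap.
    assert_eq!(grid_size_within_cap(100, 5), None);
}
```

The `None` branch is exactly where the bench should fall back to random points instead of unwrapping.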
🧹 Nitpick comments (25)
REFERENCES.md (1)
99-109: Consistency: align book-entry formatting with the rest of the page.

Use a consistent style (italicize book titles, include publisher/city like other entries, and add DOIs or persistent links when available). This keeps the section uniform with surrounding references.
benches/README.md (2)
32-57: Set expectations about feature overhead and hardware.

Explicitly note that enabling the count-allocations feature can materially slow runs and increase memory usage; add a brief hardware guidance (RAM/cores) to prevent timeouts in local/CI runs.
61-69: Add a quick help tip for targeted runs.

Mention that cargo bench --bench profiling_suite -- --help shows available filters and Criterion flags; this helps users scope long runs.
docs/code_organization.md (2)
87-100: Update the scripts/tests tree to include new tests.

scripts/tests/test_benchmark_utils.py exists per this PR but is missing from the directory tree. Add it for completeness.
```diff
 │   ├── tests/                        # Python utility tests
 │   │   ├── __init__.py               # Test package initialization
+│   │   ├── test_benchmark_utils.py   # Tests for benchmark_utils.py
 │   │   ├── test_changelog_utils.py   # Comprehensive tests for changelog_utils.py
 │   │   ├── test_hardware_utils.py    # Tests for hardware_utils.py
 │   │   └── test_subprocess_utils.py  # Comprehensive tests for subprocess_utils.py
```
574-582: Tone down "thousands of lines" wording.

If util.rs isn't literally >2000 lines, consider "large module" to avoid future drift as the file changes.
scripts/benchmark_utils.py (1)
223-231: Docstring default path is stale.

generate_baseline writes to baseline-artifact/baseline_results.txt by default, but the docstring says benches/baseline_results.txt.
```diff
-        output_file: Output file path (default: benches/baseline_results.txt)
+        output_file: Output file path (default: baseline-artifact/baseline_results.txt)
```

src/core/algorithms/robust_bowyer_watson.rs (1)
665-697: Avoid rebuilding facet mappings; reuse Tds::build_facet_to_cells_hashmap().

You already provide a validated facet map later; consider reusing TDS's mapping then validating, to reduce duplication and improve performance.
```diff
-    fn build_validated_facet_mapping(
+    fn build_validated_facet_mapping(
         &self,
         tds: &Tds<T, U, V, D>,
-    ) -> Result<HashMap<u64, Vec<CellKey>>, TriangulationValidationError> {
-        let mut facet_to_cells: HashMap<u64, Vec<CellKey>> = HashMap::new();
-
-        for (cell_key, cell) in tds.cells() {
-            if let Ok(facets) = cell.facets() {
-                for facet in facets {
-                    facet_to_cells
-                        .entry(facet.key())
-                        .or_default()
-                        .push(cell_key);
-                }
-            }
-        }
+    ) -> Result<HashMap<u64, Vec<CellKey>>, TriangulationValidationError> {
+        // Reuse existing mapping from TDS to avoid recomputation here
+        let tds_map = tds.build_facet_to_cells_hashmap();
+        let mut facet_to_cells: HashMap<u64, Vec<CellKey>> = HashMap::new();
+        for (facet_key, cells) in tds_map {
+            // Strip indices; keep keys only
+            facet_to_cells.insert(facet_key, cells.iter().map(|(k, _)| *k).collect());
+        }
```
454-456: Prefer returning a slice over &Vec to reduce API coupling.

Returning &[Vertex<…>] preserves flexibility and avoids exposing Vec-specific semantics.
```diff
-    pub const fn vertices(&self) -> &Vec<Vertex<T, U, D>> {
-        &self.vertices
-    }
+    pub fn vertices(&self) -> &[Vertex<T, U, D>] {
+        &self.vertices[..]
+    }
```

.github/workflows/profiling-benchmarks.yml (3)
95-105: Don't run all benches with count-allocations by default; enable it only for memory-focused runs.

Always enabling the feature slows benchmarks and skews timing. Restrict it to memory-specific invocations.
```diff
-          # Run profiling with memory allocation tracking
+          # Run profiling (timing-focused)
           echo "Starting comprehensive profiling benchmarks..."
-          if [[ -z "$BENCH_FILTER" ]] || [[ "$BENCH_FILTER" != *"memory"* ]]; then
-            cargo bench --bench profiling_suite --features count-allocations $BENCH_FILTER \
+          if [[ -z "$BENCH_FILTER" ]] || [[ "$BENCH_FILTER" != *"memory"* ]]; then
+            cargo bench --bench profiling_suite $BENCH_FILTER \
               2>&1 | tee profiling-results/profiling_output.log
           else
             # If filter contains memory, run only the filtered benchmarks
-            cargo bench --bench profiling_suite --features count-allocations $BENCH_FILTER \
+            cargo bench --bench profiling_suite --features count-allocations $BENCH_FILTER \
               2>&1 | tee profiling-results/profiling_output.log
             echo "MEMORY_BENCHMARKS_RUN=true" >> $GITHUB_ENV
           fi
```

Also applies to: 107-113
124-134: Make string fallback explicit in expressions.

Avoid relying on truthiness of empty strings in GitHub expressions.
```diff
-          **Benchmark Filter**: ${{ github.event.inputs.benchmark_filter || 'All benchmarks' }}
+          **Benchmark Filter**: ${{ github.event.inputs.benchmark_filter != '' && github.event.inputs.benchmark_filter || 'All benchmarks' }}
```
7-14: Mode handling: default + logic mismatch.

With default: 'production', the "manual run with no input => dev" branch never triggers. Either default to 'development' or drop the special-case.
```diff
@@
-        default: 'production'
+        default: 'development'
@@
-          if [[ "$INPUT_MODE" == "development" ]] || \
-             { [[ "${{ github.event_name }}" == "workflow_dispatch" ]] && [[ -z "$INPUT_MODE" ]]; }; then
+          if [[ "$INPUT_MODE" == "development" ]]; then
             echo "PROFILING_DEV_MODE=1" >> $GITHUB_ENV
             echo "Running in development mode (reduced scale)"
           else
             echo "Running in production mode (full scale)"
           fi
```

Also applies to: 70-79
src/geometry/algorithms/convex_hull.rs (2)
935-959: SmallBuffer-based duplicate detection is good; consider MSRV-friendly into conversion.

The map(smallvec::SmallVec::into_vec) is fine; alternatively, clone into Vec when MSRV constraints require.
723-726: Use map_or instead of is_none_or for broader MSRV.

is_none_or requires newer Rust; map_or(true, …) avoids MSRV bumps without changing behavior.
```diff
-            if min_distance.is_none_or(|min_dist| distance < min_dist) {
+            if min_distance.map_or(true, |min_dist| distance < min_dist) {
                 min_distance = Some(distance);
                 nearest_facet = Some(facet_index);
             }
```
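The `map_or(true, …)` running-minimum pattern is self-contained enough to demonstrate. `nearest` and the plain `f64` distances are illustrative, not the convex-hull code's actual signature.

```rust
// Minimal sketch of the MSRV-friendly pattern: track a running minimum with
// Option::map_or instead of the newer Option::is_none_or. `nearest` is an
// illustrative stand-in for the facet-distance loop in the review.
fn nearest(distances: &[f64]) -> Option<usize> {
    let mut min_distance: Option<f64> = None;
    let mut nearest_idx = None;
    for (i, &d) in distances.iter().enumerate() {
        // True when no minimum exists yet, or when this distance beats it.
        if min_distance.map_or(true, |m| d < m) {
            min_distance = Some(d);
            nearest_idx = Some(i);
        }
    }
    nearest_idx
}

fn main() {
    assert_eq!(nearest(&[3.5, 1.2, 2.8]), Some(1));
    assert_eq!(nearest(&[]), None);
}
```

`map_or(true, f)` and `is_none_or(f)` are behaviorally identical here; only the minimum supported Rust version differs.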
1419-1437: Handle n_points == 0 early in Poisson generator

Return an empty Vec immediately; avoids unnecessary setup and the final empty-points error.
Apply:
```diff
     // Validate bounds
     if bounds.0 >= bounds.1 {
         return Err(RandomPointGenerationError::InvalidRange {
             min: format!("{:?}", bounds.0),
             max: format!("{:?}", bounds.1),
         });
     }
+    if n_points == 0 {
+        return Ok(Vec::new());
+    }
+
     let mut rng = rand::rngs::StdRng::seed_from_u64(seed);
```
1453-1464: Minor perf: avoid recomputing candidate.to_array() inside inner loop

Compute once per candidate.
Apply:
```diff
         // Check distance to all existing points
         let mut valid = true;
+        let candidate_coords: [T; D] = candidate.to_array();
         for existing_point in &points {
-            let existing_coords: [T; D] = existing_point.to_array();
-            let candidate_coords: [T; D] = candidate.to_array();
+            let existing_coords: [T; D] = existing_point.to_array();
```
3675-3681: Tests: avoid hard‑coding the safety cap value

Asserting on the literal "50000000" ties tests to an implementation detail and will break if the cap changes (or moves to byte-based). Reference the constant or just assert on the message.
Apply (current point-count cap):
```diff
-        assert!(error_msg.contains("exceeds safety cap"));
-        assert!(error_msg.contains("50000000"));
+        assert!(error_msg.contains("exceeds safety cap"));
+        assert!(error_msg.contains(&super::MAX_POINTS_SAFETY_CAP.to_string()));
```

If you adopt the bytes-based cap above, change to:
```diff
-        assert!(error_msg.contains(&super::MAX_POINTS_SAFETY_CAP.to_string()));
+        assert!(error_msg.contains("exceeds safety cap"));
+        // Optionally assert the byte cap string if you surface the constant:
+        // assert!(error_msg.contains(&super::MAX_GRID_BYTES_SAFETY_CAP.to_string()));
```
873-906: Consider deferring vertex insertion until after feasibility checks.
new()inserts all vertices, then calls Bowyer–Watson, which may fail for insufficient/degenerate inputs. Safer to validate counts/degeneracy first to avoid populating SlotMaps that are then unused on error. Low priority since errors are bubbled, but it avoids transient inconsistent states.
1630-1647: Preallocate facet map capacity for fewer rehashes.

You know an upper bound: cells × (D+1). Preallocate for `facet_to_cells`.
facet_to_cells.Apply:
```diff
-        let mut facet_to_cells: FacetToCellsMap = FastHashMap::default();
+        let mut facet_to_cells: FacetToCellsMap =
+            fast_hash_map_with_capacity(self.cells.len() * (D + 1));
```
1936-1938: Use sort_unstable for speed (no stability required).

Duplicate detection doesn't need stable ordering.
Apply:
```diff
- vertex_uuids.sort();
+ vertex_uuids.sort_unstable();
```
1392-1399: Neighbor positional semantics: minor guard.

If a cell's vertex count changes (future mutations), `neighbors.len()` may not match. Consider asserting `neighbors.len() == cell.vertices().len()` in debug builds when setting `cell.neighbors`.
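The suggested guard is a one-line `debug_assert_eq!` at the point where neighbors are assigned. The `Cell` type below is a simplified stand-in for illustration, not the crate's actual struct:

```rust
// Illustrative sketch of the suggested debug-build guard.
struct Cell {
    vertices: Vec<u64>,
    neighbors: Option<Vec<Option<u64>>>,
}

impl Cell {
    fn set_neighbors(&mut self, neighbors: Vec<Option<u64>>) {
        // Positional semantics require neighbors[i] to oppose vertices[i],
        // so the lengths must stay in lockstep. Checked only in debug builds.
        debug_assert_eq!(
            neighbors.len(),
            self.vertices.len(),
            "neighbors length must equal vertex count"
        );
        self.neighbors = Some(neighbors);
    }
}

fn main() {
    let mut cell = Cell { vertices: vec![1, 2, 3, 4], neighbors: None };
    cell.set_neighbors(vec![None, Some(9), None, Some(7)]);
    println!("neighbors set: {}", cell.neighbors.as_ref().unwrap().len());
}
```

`debug_assert_eq!` compiles away in release builds, so the guard costs nothing in benchmarked code paths.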
2471-2523: Redundant serde helpers after manual Deserialize.

With a manual `Deserialize` impl, the `deserialize_with` field attributes aren't used. Keeping both can confuse future maintainers. Either remove the manual impl and rely on field-level helpers, or keep the manual impl and drop the `deserialize_*` helpers/attributes.

benches/profiling_suite.rs (4)
171-201: Minor: avoid double-generating sample points for throughput and bench body.

You generate once for throughput and again inside `iter_batched`. Cache `actual_count` by recomputing `points_per_dim.pow(D)` cheaply for Grid, or reuse the pre-generated points when possible to reduce setup noise.
320-399: Memory profiling: summarize 95th percentile too.

Mean hides spikes. Track and print the 95th percentile of `bytes_max` across iterations to better capture peak allocation behavior.
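A nearest-rank p95 over the collected `bytes_max` samples is only a few lines. This is a minimal sketch with made-up sample data; the helper name is illustrative:

```rust
// Sketch of the suggested summary statistic: 95th percentile of per-iteration
// peak allocation. Samples here are synthetic.
fn percentile_95(samples: &mut Vec<u64>) -> u64 {
    samples.sort_unstable();
    // Nearest-rank method: ceil(0.95 * n) as a 1-based rank.
    let rank = ((samples.len() as f64) * 0.95).ceil() as usize;
    samples[rank.saturating_sub(1)]
}

fn main() {
    let mut bytes_max: Vec<u64> = (1..=100).map(|i| i * 1024).collect();
    let p95 = percentile_95(&mut bytes_max);
    let mean: u64 = bytes_max.iter().sum::<u64>() / bytes_max.len() as u64;
    println!("mean = {mean} bytes, p95 = {p95} bytes");
}
```

Printing mean and p95 side by side makes allocation spikes visible that a mean alone would smooth over.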
528-569: Micro: precompute simplex vertices outside inner query loop.

You already do this per-cell; consider reusing a preallocated SmallBuffer to avoid per-iteration allocations in tight loops.
652-663: Bench config: make measurement_time overridable via env.

Allow `PROFILING_MEASUREMENT_SECS` to override the defaults to ease CI tuning without editing code.
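The override can be a small parsing helper that falls back to the default on a missing or malformed value. `PROFILING_MEASUREMENT_SECS` is the variable name proposed above; the helper itself is an illustrative sketch:

```rust
use std::time::Duration;

// Sketch: resolve the measurement time from an optional env value.
fn measurement_time(env_value: Option<&str>, default_secs: u64) -> Duration {
    let secs = env_value
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(default_secs);
    Duration::from_secs(secs)
}

fn main() {
    // In the bench config this would be driven by:
    //   std::env::var("PROFILING_MEASUREMENT_SECS").ok()
    assert_eq!(measurement_time(None, 10), Duration::from_secs(10));
    assert_eq!(measurement_time(Some("3"), 10), Duration::from_secs(3));
    assert_eq!(measurement_time(Some("oops"), 10), Duration::from_secs(10));
    println!("measurement_time override ok");
}
```

The result can then be passed to Criterion's `measurement_time` configuration, so CI can shorten runs without a code edit.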
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- `.github/workflows/benchmarks.yml` (2 hunks)
- `.github/workflows/generate-baseline.yml` (2 hunks)
- `.github/workflows/profiling-benchmarks.yml` (1 hunks)
- `REFERENCES.md` (1 hunks)
- `benches/README.md` (2 hunks)
- `benches/profiling_suite.rs` (1 hunks)
- `cspell.json` (5 hunks)
- `docs/code_organization.md` (1 hunks)
- `scripts/benchmark_utils.py` (9 hunks)
- `scripts/tests/test_benchmark_utils.py` (3 hunks)
- `src/core/algorithms/robust_bowyer_watson.rs` (5 hunks)
- `src/core/cell.rs` (6 hunks)
- `src/core/collections.rs` (1 hunks)
- `src/core/triangulation_data_structure.rs` (31 hunks)
- `src/geometry/algorithms/convex_hull.rs` (11 hunks)
- `src/geometry/predicates.rs` (2 hunks)
- `src/geometry/util.rs` (3 hunks)
- `src/lib.rs` (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
- scripts/tests/test_benchmark_utils.py
- src/geometry/predicates.rs
- src/core/collections.rs
- cspell.json
- .github/workflows/generate-baseline.yml
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
scripts/benchmark_utils.py
🧠 Learnings (3)
📚 Learning: 2025-09-04T20:03:49.859Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.859Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
docs/code_organization.md
🧬 Code graph analysis (7)
src/core/cell.rs (2)
src/core/facet.rs (1)
`cell` (325-327)

src/core/vertex.rs (1)
uuid(458-460)
src/geometry/util.rs (1)
src/geometry/point.rs (2)
`try_from` (313-336), `new` (74-76)
src/core/triangulation_data_structure.rs (4)
src/core/collections.rs (2)
`fast_hash_map_with_capacity` (361-364, 463)

src/core/cell.rs (8)

`cell` (2179, 2308, 2310, 2314, 2513, 2526), `uuid` (481-483), `vertices` (454-456)

src/core/vertex.rs (1)

`uuid` (458-460)

src/core/util.rs (1)
facets_are_adjacent(285-301)
src/lib.rs (1)
src/core/collections.rs (4)
`fast_hash_map_with_capacity` (361-364, 463), `fast_hash_set_with_capacity` (386-389, 466)
src/geometry/algorithms/convex_hull.rs (2)
src/core/vertex.rs (1)
`uuid` (458-460)

src/core/triangulation_data_structure.rs (1)
new(874-906)
benches/profiling_suite.rs (3)
src/geometry/util.rs (39)
`generate_grid_points` (1316-1372; also referenced at 3578, 3611, 3627, 3643, 3652, 3666, 3676, 3686, 3699), `generate_poisson_points` (1410-1488; also referenced at 3717, 3750, 3783-3784, 3798, 3805, 3816, 3829, 3835, 3844, 3867, 3879), `generate_random_points_seeded` (1241-1268; also referenced at 3340-3341, 3360-3361, 3379-3380, 3398-3399, 3418-3419, 3423-3424, 3428-3429, 3433)

src/core/triangulation_data_structure.rs (2)
`default` (207-209), `new` (874-906)

src/geometry/predicates.rs (1)
insphere_lifted(500-616)
scripts/benchmark_utils.py (3)
scripts/changelog_utils.py (2)
`run_git_command` (622-649), `main` (804-837)

scripts/subprocess_utils.py (1)

`run_git_command` (43-66)

scripts/hardware_utils.py (1)
main(578-625)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: performance-regression
- GitHub Check: build (windows-latest)
- GitHub Check: Code Coverage
- GitHub Check: Codacy Security Scan
🔇 Additional comments (12)
benches/README.md (1)
426-475: Nice CI overview.

The two-tier workflow and triggers are clearly documented. LGTM.
scripts/benchmark_utils.py (1)
852-877: Skip logic looks good; keep the safety checks.

SHA regex validation and explicit args to git prevent injection; continuing to use the shared `run_git_command` wrapper aligns with repo standards.
.github/workflows/benchmarks.yml (2)
238-246: Good: environment wiring and CLI usage.

Baseline origin/tag env setup plus delegating to `uv run benchmark-utils` is clear. LGTM.
299-299: Concise summary step.

Delegating summary generation to the Python CLI simplifies maintenance. LGTM.
src/core/algorithms/robust_bowyer_watson.rs (1)
1416-1430: Neighbor symmetry assertions look correct with Option.

The `iter().filter_map` / `iter().flatten` patterns are appropriate for optional neighbors.
Also applies to: 1572-1575, 1729-1735
src/lib.rs (3)
162-166: Public collections module exposure is appropriate.

Keeping `core::collections` public but not glob-re-exported avoids pollution while enabling direct use.
240-246: Prelude re-exports are pragmatic.

The selected collection types and helpers are likely what downstream code needs. LGTM.
292-320: Useful sanity test for prelude exports.

This ensures the prelude stays stable. LGTM.
.github/workflows/profiling-benchmarks.yml (1)
21-25: Cron doesn't mean "first Sunday of the month".

In GitHub Actions (POSIX cron), day-of-month and day-of-week are ORed when both are restricted. Your cron therefore runs on days 1–7 and on every Sunday. Either update the comment or change the schedule and gate in-step.
Option A (fix comment):
```diff
- # Monthly scheduled run for trend monitoring
+ # Scheduled run: days 1–7 and every Sunday (cron DOM/DOW fields are ORed)
@@
- # First Sunday of each month at 2 AM UTC
+ # 02:00 UTC on days 1–7 and on every Sunday
```

Option B (weekly Sunday + gate in step):
- Use cron: '0 2 * * 0'
- Add an early step: exit on non-first Sundays, i.e. when `$(date -u +%d)` is greater than 07.
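Option B's gate can be sketched as a small shell step. This is an assumed workflow step, not the repo's actual YAML; the function name is illustrative:

```shell
# Sketch: with cron '0 2 * * 0' the job fires every Sunday; this check lets
# only the first Sunday of each month proceed.
is_first_week() {
  # $1 is a zero-padded UTC day of month, e.g. from: date -u +%d
  day=$((10#$1))  # force base-10 so "08"/"09" parse correctly
  [ "$day" -le 7 ]
}

day_of_month=$(date -u +%d)
if is_first_week "$day_of_month"; then
  echo "day $day_of_month: first Sunday of the month, running profiling suite"
else
  echo "day $day_of_month: not the first Sunday, skipping"
  # In a workflow step you would: exit 0
fi
```

Because a Sunday with day-of-month ≤ 7 is by definition the first Sunday, the weekly cron plus this gate yields exactly one run per month.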
src/geometry/algorithms/convex_hull.rs (2)
378-421: Lock-free cache build/swap logic with ArcSwapOption looks solid.

Staleness check, CAS-on-None, and fallbacks are correct; this avoids double-build races.
1102-1107: Cache invalidation is correct and minimal.

`store(None)` + generation reset forces rebuild on next access without leaking.
src/core/triangulation_data_structure.rs (1)
1067-1070: Drop double-insert concern

The `insert_vertex` methods in the `IncrementalBoyerWatson` implementations (Bowyer–Watson and its robust variant) do not call `tds.vertices.insert` or `vertex_bimap.insert`, so the initial insertion before invoking the algorithm is the sole insertion.

Likely an incorrect or invalid review comment.
Enhances profiling benchmarks with improved memory allocation tracking, optimized query benchmarks, complete dimensional coverage (2D-5D), and environment variable control for faster iteration. It provides more comprehensive performance analysis for optimization work. This also introduces error handling for grid generation in benchmarks.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
scripts/benchmark_utils.py (1)
134-141: Guard division by zero in throughput calc

The confidence interval `lower_bound` can be 0 in very fast benches, making `thrpt_high = points * 1000 / low_us` blow up. Clamp denominators to a tiny epsilon.
```diff
- thrpt_mean = points * 1000 / mean_us
- thrpt_low = points * 1000 / high_us  # Lower time = higher throughput
- thrpt_high = points * 1000 / low_us  # Higher time = lower throughput
+ eps = 1e-9  # µs
+ thrpt_mean = points * 1000 / max(mean_us, eps)
+ thrpt_low = points * 1000 / max(high_us, eps)  # Lower time = higher throughput
+ thrpt_high = points * 1000 / max(low_us, eps)  # Higher time = lower throughput
```

src/core/triangulation_data_structure.rs (1)
2458-2475: Manual Deserialize bug: passing serde_json::Value to functions expecting a Deserializer.
`deserialize_bimap(..)` and `deserialize_cell_bimap(..)` expect a `Deserializer`, but the code passes a `serde_json::Value`. This will not compile. Deserialize the inner `HashMap` directly via `map.next_value()` and build the `BiMap`.

Apply:
```diff
- // Use the custom deserialize function for BiMap
- let vertex_bimap_deserializer =
-     map.next_value::<serde_json::Value>()?;
- vertex_bimap = Some(
-     deserialize_bimap(vertex_bimap_deserializer)
-         .map_err(de::Error::custom)?,
- );
+ // Deserialize as HashMap then build BiMap
+ let hm: std::collections::HashMap<Uuid, VertexKey> = map.next_value()?;
+ let mut bm = BiMap::new();
+ for (u, k) in hm { bm.insert(u, k); }
+ vertex_bimap = Some(bm);
```

```diff
- // Use the custom deserialize function for BiMap
- let cell_bimap_deserializer = map.next_value::<serde_json::Value>()?;
- cell_bimap = Some(
-     deserialize_cell_bimap(cell_bimap_deserializer)
-         .map_err(de::Error::custom)?,
- );
+ // Deserialize as HashMap then build BiMap
+ let hm: std::collections::HashMap<Uuid, CellKey> = map.next_value()?;
+ let mut bm = BiMap::new();
+ for (u, k) in hm { bm.insert(u, k); }
+ cell_bimap = Some(bm);
```

Optionally remove the now-unused helper fns `deserialize_bimap` / `deserialize_cell_bimap`.
🧹 Nitpick comments (25)
docs/code_organization.md (2)
90-96: Add note on test discovery for scripts/tests

Since scripts/tests/test_benchmark_utils.py is new, add a brief note that tests under scripts/ are executed via pytest (or `uv run pytest`) and discovered through pyproject.toml. This helps contributors run the right test target locally.
547-583: Tighten module-size claims and remove stale bullet
- “~2,400/2,000/1,400/400 lines” are good as approximate sizes; add “as of v0.4.3” to reduce future drift.
- The “Generic type coverage” bullet under util.rs reads like a leftover from the prior version; either expand it or drop it for consistency with the rest of the updated list.
docs/RELEASING.md (3)
86-106: Call out required toolchain and expected runtime explicitly

Add a short "Prereqs" note (Rust toolchain version, Criterion installed via Cargo, sufficient RAM/CPU, and that `--features count-allocations` increases runtime/memory). Also suggest running with `--release` and ensuring no debug assertions that could skew timings.
116-127: Include baseline artifact updates in the staged paths

If benches/compare_results.txt or baseline-artifact/metadata.json are generated as part of release profiling, consider staging them (or explicitly ignoring them) to avoid confusion across releases.
129-134: Add guidance for when profiling is skipped

Note that teams may skip step 4 in a pinch; recommend reusing the most recent baseline artifact and explicitly documenting that in the PR to maintain traceability.
scripts/subprocess_utils.py (2)
43-71: run_git_command: good flexibility; mirror kwargs on cargo/input variants

Allowing `**kwargs` (timeout, env, etc.) is great. For API consistency, mirror this in `run_cargo_command` and `run_git_command_with_input` so callers can set timeout/env uniformly.
Apply:
```diff
 def run_cargo_command(
-    args: list[str], cwd: Path | None = None, capture_output: bool = True, text: bool = True, check: bool = True
+    args: list[str],
+    cwd: Path | None = None,
+    **kwargs: Any,
 ) -> subprocess.CompletedProcess[str]:
@@
-    return subprocess.run(  # noqa: S603  # Uses validated full executable path, no shell=True
-        [cargo_path, *args], cwd=cwd, capture_output=capture_output, text=text, check=check
-    )
+    run_kwargs = {"capture_output": True, "text": True, "check": True, **kwargs}
+    return subprocess.run(  # noqa: S603,PLW1510
+        [cargo_path, *args], cwd=cwd, **run_kwargs
+    )
```

And:
```diff
-def run_git_command_with_input(
-    args: list[str], input_data: str, cwd: Path | None = None, text: bool = True, check: bool = True
-) -> subprocess.CompletedProcess[str]:
+def run_git_command_with_input(
+    args: list[str],
+    input_data: str,
+    cwd: Path | None = None,
+    **kwargs: Any,
+) -> subprocess.CompletedProcess[str]:
@@
-    return subprocess.run(  # noqa: S603  # Uses validated full executable path, no shell=True
-        [git_path, *args], cwd=cwd, input=input_data, text=text, check=check, capture_output=True
-    )
+    run_kwargs = {"capture_output": True, "text": True, "check": True, **kwargs}
+    return subprocess.run(  # noqa: S603,PLW1510
+        [git_path, *args], cwd=cwd, input=input_data, **run_kwargs
+    )
```
100-129: Document encoding behavior when text=True

Note that `CompletedProcess.stdout` uses locale encoding by default. If you need deterministic UTF-8 in CI, consider passing `encoding="utf-8"` via kwargs.

src/geometry/util.rs (7)
src/geometry/util.rs (7)
120-131: Make the safety cap configurable by env var4 GiB is reasonable, but CI runners vary. Allow overriding via an env, defaulting to 4 GiB. Keeps tests deterministic while enabling tuning.
-const MAX_GRID_BYTES_SAFETY_CAP: usize = 4_294_967_296; // 4 GiB +const MAX_GRID_BYTES_SAFETY_CAP_DEFAULT: usize = 4_294_967_296; // 4 GiB +fn max_grid_bytes_safety_cap() -> usize { + if let Ok(v) = std::env::var("MAX_GRID_BYTES_SAFETY_CAP") { + if let Ok(n) = v.parse::<usize>() { + return n; + } + } + MAX_GRID_BYTES_SAFETY_CAP_DEFAULT +}And in generate_grid_points:
```diff
-    if total_bytes > MAX_GRID_BYTES_SAFETY_CAP {
+    let cap = max_grid_bytes_safety_cap();
+    if total_bytes > cap {
         return Err(RandomPointGenerationError::RandomGenerationFailed {
@@
-                "Requested grid requires ~{total_bytes} bytes (> cap {MAX_GRID_BYTES_SAFETY_CAP})"
+                "Requested grid requires ~{total_bytes} bytes (> cap {cap})"
             ),
         });
     }
```
529-594: hypot: reduce duplication in 2D fallback

The 2D branch duplicates the scaled computation. Minor: factor a small helper to keep one implementation path and reduce branching.
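A factored helper in the spirit of the suggestion might look like the sketch below (illustrative, not the crate's code): one scaled code path that both callers share, avoiding overflow when squaring large magnitudes.

```rust
// Illustrative: a single scaled 2D hypot path shared by callers.
fn scaled_hypot(a: f64, b: f64) -> f64 {
    let (a, b) = (a.abs(), b.abs());
    let max = a.max(b);
    if max == 0.0 {
        return 0.0;
    }
    // Scale by the larger magnitude so squaring cannot overflow/underflow.
    let (x, y) = (a / max, b / max);
    max * (x * x + y * y).sqrt()
}

fn main() {
    assert_eq!(scaled_hypot(3.0, 4.0), 5.0);
    // Naive sqrt(a*a + b*b) would overflow to infinity here; the scaled form stays finite.
    assert!(scaled_hypot(1e200, 1e200).is_finite());
    assert_eq!(scaled_hypot(0.0, 0.0), 0.0);
    println!("scaled_hypot ok");
}
```

With one helper, the 2D fallback and the general path cannot drift apart in future edits.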
1046-1059: Avoid redundant cast/double factorial computation

`peroxide::statistics::ops::factorial(usize)` returns `f64`; casting to f64 and recomputing in the error message is unnecessary. Compute once and drop the unreachable error branch.
```diff
-    // Calculate (D-1)! factorial using peroxide's factorial function
-    let factorial_val = cast::<_, f64>(factorial(D - 1)).ok_or_else(|| {
-        CircumcenterError::ValueConversion(ValueConversionError::ConversionFailed {
-            value: format!("{}", factorial(D - 1)),
-            from_type: "usize",
-            to_type: "f64",
-            details: "Factorial value too large for f64 precision".to_string(),
-        })
-    })?;
-
-    sqrt_det / factorial_val
+    // (D-1)! as f64
+    let factorial_val = factorial(D - 1);
+    sqrt_det / factorial_val
```

And consider removing `ValueConversionError` if it's now unused.
1317-1379: Grid generation: great improvements; one small UX tweak

The error message references the cap as a raw number. Consider formatting bytes as human-readable (e.g., ~X.Y GiB) to aid users diagnosing limits.
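Formatting the cap for the error message is a small helper. This is a sketch with an illustrative name, not the crate's API:

```rust
// Sketch: render byte counts as human-readable GiB/MiB for error messages.
fn human_bytes(bytes: u64) -> String {
    const GIB: u64 = 1 << 30;
    const MIB: u64 = 1 << 20;
    if bytes >= GIB {
        format!("~{:.1} GiB", bytes as f64 / GIB as f64)
    } else if bytes >= MIB {
        format!("~{:.1} MiB", bytes as f64 / MIB as f64)
    } else {
        format!("{bytes} B")
    }
}

fn main() {
    assert_eq!(human_bytes(4_294_967_296), "~4.0 GiB");
    assert_eq!(human_bytes(5 * 1024 * 1024), "~5.0 MiB");
    assert_eq!(human_bytes(512), "512 B");
    println!(
        "example message: grid requires {} (> cap {})",
        human_bytes(6_442_450_944),
        human_bytes(4_294_967_296)
    );
}
```

"~6.0 GiB (> cap ~4.0 GiB)" is much easier to act on than two ten-digit byte counts.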
1417-1499: Poisson generator: bound attempts relative to volume and min_distance

`n_points * 30` may underflow in higher D or tight bounds. Consider deriving max_attempts from an estimated packing density or allow a caller-specified cap. At minimum, document that a tight min_distance can terminate early.
3582-3720: Tests: overflow test is good; add a bytes-cap assertion

Also assert that the bytes-cap error includes the computed cap and that the `total_bytes > cap` path is exercised (e.g., with D=5, points_per_dim large, spacing=1).
3737-3747: Negative spacing test: tighten expectation

Also assert that the max coordinate equals `offset[d]` and the min equals `offset[d] + (1 - points_per_dim) * spacing` to fully pin the range.
scripts/benchmark_utils.py (3)
229-234: Pass timeout to cargo benches (symmetry with git timeouts)

Long-running benches can hang in CI. Consider passing a configurable timeout via `run_cargo_command` kwargs (after updating the wrapper).
960-991: Summary output: consider setting an explicit machine flag

If regressions are detected, export BENCHMARK_REGRESSION_DETECTED=true to GITHUB_ENV for downstream steps.
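Exporting a flag to later workflow steps means appending `KEY=value` to the file whose path GitHub Actions exposes as `$GITHUB_ENV`. A minimal sketch (the helper name is illustrative; it no-ops outside CI):

```python
import os


def export_github_env(key: str, value: str) -> bool:
    """Append key=value to $GITHUB_ENV; return False when not running in Actions."""
    env_file = os.environ.get("GITHUB_ENV")
    if not env_file:
        return False
    with open(env_file, "a", encoding="utf-8") as fh:
        fh.write(f"{key}={value}\n")
    return True


if __name__ == "__main__":
    import tempfile

    # Simulate the Actions environment with a throwaway file.
    fd, path = tempfile.mkstemp()
    os.close(fd)
    os.environ["GITHUB_ENV"] = path
    export_github_env("BENCHMARK_REGRESSION_DETECTED", "true")
    with open(path, encoding="utf-8") as fh:
        print(fh.read().strip())  # BENCHMARK_REGRESSION_DETECTED=true
    os.remove(path)
```

Downstream steps can then condition on `env.BENCHMARK_REGRESSION_DETECTED` without re-parsing the comparison output.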
1054-1063: find_project_root: avoid process exit in library contextReturning an error instead of exiting would improve reusability (main can still exit). Optional.
src/core/algorithms/robust_bowyer_watson.rs (2)
670-696: Simplify the facet-to-cells mapping logic

The current implementation extracts CellKeys from the TDS mapping and then builds a new HashMap. Consider directly working with the TDS map's iterator to avoid intermediate collections and improve readability.
```diff
-    // Reuse existing mapping from TDS to avoid recomputation
-    let tds_map = tds.build_facet_to_cells_hashmap();
-    let mut facet_to_cells: HashMap<u64, Vec<CellKey>> = HashMap::new();
-
-    for (facet_key, cell_facet_pairs) in tds_map {
-        // Extract just the CellKeys, discarding facet indices
-        let cell_keys: Vec<CellKey> = cell_facet_pairs
-            .iter()
-            .map(|(cell_key, _)| *cell_key)
-            .collect();
-
-        // Validate that no facet is shared by more than 2 cells
-        if cell_keys.len() > 2 {
-            return Err(TriangulationValidationError::InconsistentDataStructure {
-                message: format!(
-                    "Facet {} is shared by {} cells (should be ≤2)",
-                    facet_key,
-                    cell_keys.len()
-                ),
-            });
-        }
-
-        facet_to_cells.insert(facet_key, cell_keys);
-    }
+    // Reuse existing mapping from TDS to avoid recomputation
+    let tds_map = tds.build_facet_to_cells_hashmap();
+
+    // Transform the TDS map into the required format with validation
+    let facet_to_cells: HashMap<u64, Vec<CellKey>> = tds_map
+        .into_iter()
+        .map(|(facet_key, cell_facet_pairs)| {
+            // Extract just the CellKeys, discarding facet indices
+            let cell_keys: Vec<CellKey> = cell_facet_pairs
+                .iter()
+                .map(|(cell_key, _)| *cell_key)
+                .collect();
+
+            // Validate that no facet is shared by more than 2 cells
+            if cell_keys.len() > 2 {
+                return Err(TriangulationValidationError::InconsistentDataStructure {
+                    message: format!(
+                        "Facet {} is shared by {} cells (should be ≤2)",
+                        facet_key,
+                        cell_keys.len()
+                    ),
+                });
+            }
+
+            Ok((facet_key, cell_keys))
+        })
+        .collect::<Result<HashMap<_, _>, _>>()?;
```
1415-1435: Simplify nested option handling in neighbor iteration

The current nested iteration over
`Option<Option<Uuid>>` can be simplified using `flatten()` for cleaner code.

```diff
 if let Some(neighbors) = &cell.neighbors {
-    for neighbor_uuid in neighbors.iter().filter_map(|n| n.as_ref()) {
+    for neighbor_uuid in neighbors.iter().flatten() {
```

Similarly at lines 1560-1573:
```diff
 if let Some(neighbors) = &cell.neighbors {
-    for neighbor_uuid in neighbors.iter().flatten() {
+    // This line is already using flatten() correctly
```

And at lines 1728-1733:
```diff
- neighbors1.iter().any(|n| n.as_ref() == Some(cell2_uuid)),
+ neighbors1.iter().flatten().any(|uuid| uuid == cell2_uuid),
- neighbors2.iter().any(|n| n.as_ref() == Some(cell1_uuid)),
+ neighbors2.iter().flatten().any(|uuid| uuid == cell1_uuid),
```

src/core/cell.rs (1)
244-250: Document neighbor positional semantics in public field documentation

The public
`neighbors` field documentation explains the positional semantics well, but consider adding an example to make it even clearer for API users.

```diff
 /// The neighboring cells connected to the current cell.
 /// This field enforces a length of exactly `D+1` when present. Each
 /// `Some(uuid)` represents a neighbor at that position, while `None`
 /// indicates no neighbor at that position. The positional semantics ensure
 /// that `neighbors[i]` is the neighbor opposite `vertices[i]`.
+///
+/// # Example
+/// For a 3D cell (tetrahedron) with 4 vertices:
+/// - `neighbors[0]` is the neighbor opposite `vertices[0]` (shares vertices 1, 2, 3)
+/// - `neighbors[1]` is the neighbor opposite `vertices[1]` (shares vertices 0, 2, 3)
+/// - And so on...
 #[builder(setter(skip), default = "None")]
 pub neighbors: Option<Vec<Option<Uuid>>>,
```

src/core/triangulation_data_structure.rs (5)
410-415: Serialize attributes OK; ensure deserialization remains symmetric.

Custom serializers on the BiMaps look fine, given that the manual Deserialize impl below will handle the reverse path. See my separate comment on the manual deserializer bug.
1285-1290: Neighbor assignment data-structure changes look solid.
- Facet map capacity pre-sizing and SmallBuffer for neighbors cut allocs.
- Positional SmallBuffer initialized with None avoids extra Option wrapping/dangling lengths.
Minor: you clone+remove in a tight loop to build facet keys. With D small this is fine, but you could avoid clone/remove by iterating indices and writing a branch in `facet_key_from_vertex_keys` to skip index i, if you ever chase another micro-opt here.

Also applies to: 1313-1321
1386-1400: Avoid extra Vec allocation when possible.

If `SmallBuffer` exposes `into_vec()`, prefer it over `into_iter().collect()` to reduce one layer of abstraction:

```diff
- let neighbors: Vec<Option<Uuid>> = neighbor_options.into_iter().collect();
+ let neighbors: Vec<Option<Uuid>> = neighbor_options.into_vec();
```

If not available, the current code is fine.
1694-1741: Follow-up: neighbor/incident recomputation after removals.

After removing invalid/excess cells, neighbor and incident mappings may be stale. Consider (best-effort) refreshing them here, or document that callers must do so.
Example:
```diff
  // Clean up any resulting duplicate cells
  let duplicate_cells_removed = self.remove_duplicate_cells();
+
+ // Optional: refresh topology (ignore errors to avoid changing signature)
+ let _ = self.assign_neighbors();
+ let _ = self.assign_incident_cells();
```

If you prefer strictness, plumb a `Result` with `TriangulationValidationError`.

Would you like me to open a follow-up issue to track this?
Also applies to: 1758-1771
2147-2156: Neighbor validation is thorough; a couple of small perf tidy-ups.
- You recompute `cell_vertex_keys` per cell; you already have the `this_vertices` set. If you need index alignment with `vertices()[i]`, build the keyed Vec once and reuse both structures.
- The mutual-neighbor check is a linear scan over D+1; fine for small D.
No functional blockers.
Also applies to: 2164-2173, 2177-2231, 2233-2263
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (12)
- `.github/workflows/profiling-benchmarks.yml` (1 hunks)
- `benches/README.md` (2 hunks)
- `benches/profiling_suite.rs` (1 hunks)
- `cspell.json` (6 hunks)
- `docs/RELEASING.md` (1 hunks)
- `docs/code_organization.md` (2 hunks)
- `scripts/benchmark_utils.py` (10 hunks)
- `scripts/subprocess_utils.py` (1 hunks)
- `src/core/algorithms/robust_bowyer_watson.rs` (6 hunks)
- `src/core/cell.rs` (13 hunks)
- `src/core/triangulation_data_structure.rs` (40 hunks)
- `src/geometry/util.rs` (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- benches/README.md
- .github/workflows/profiling-benchmarks.yml
- cspell.json
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
scripts/subprocess_utils.pyscripts/benchmark_utils.py
🧠 Learnings (3)
📚 Learning: 2025-09-04T20:03:49.859Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.859Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
docs/code_organization.md
🧬 Code graph analysis (7)
benches/profiling_suite.rs (5)
src/geometry/util.rs (41)
`core` (1338), `generate_grid_points` (1316-1379; also referenced at 3589, 3622, 3638, 3654, 3663, 3677, 3687, 3704, 3725, 3738), `generate_poisson_points` (1417-1499; also referenced at 3756, 3789, 3822-3823, 3837, 3844, 3855, 3868, 3874, 3883, 3906, 3918), `generate_random_points_seeded` (1240-1267; also referenced at 3351-3352, 3371-3372, 3390-3391, 3409-3410, 3429-3430, 3434-3435, 3439-3440, 3444)

src/core/triangulation_data_structure.rs (2)
`default` (207-209), `new` (868-900)

src/geometry/algorithms/convex_hull.rs (2)

`default` (1117-1123), `from_triangulation` (239-271)

src/core/cell.rs (7)

`vertices` (468-470), `cell` (2210, 2339, 2341, 2345, 2544, 2557)

src/geometry/predicates.rs (1)
insphere_lifted(500-616)
src/geometry/util.rs (2)
src/geometry/traits/coordinate.rs (1)
new(643-643)src/geometry/point.rs (1)
new(74-76)
src/core/triangulation_data_structure.rs (4)
src/core/collections.rs (2)
`fast_hash_map_with_capacity` (361-364, 463)

src/core/cell.rs (8)

`cell` (2210, 2339, 2341, 2345, 2544, 2557), `vertices` (468-470), `uuid` (495-497)

src/core/vertex.rs (1)

`uuid` (458-460)

src/core/util.rs (1)
facets_are_adjacent(285-301)
scripts/subprocess_utils.py (1)
scripts/changelog_utils.py (1)
run_git_command(622-649)
src/core/algorithms/robust_bowyer_watson.rs (1)
src/core/boundary.rs (1)
tds(463-467)
src/core/cell.rs (3)
src/core/facet.rs (3)
`vertices` (437-444), `cell` (325-327), `vertex` (370-372)

src/core/vertex.rs (2)

`is_valid` (538-554), `uuid` (458-460)

src/core/triangulation_data_structure.rs (1)
is_valid(2101-2137)
scripts/benchmark_utils.py (2)
scripts/subprocess_utils.py (3)
`get_git_commit_hash` (132-144), `run_cargo_command` (74-97), `run_git_command` (43-71)

scripts/hardware_utils.py (2)

`HardwareComparator` (409-575), `HardwareInfo` (33-406)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Codacy Security Scan
- GitHub Check: build (ubuntu-latest)
- GitHub Check: build (macos-latest)
- GitHub Check: build (windows-latest)
- GitHub Check: performance-regression
- GitHub Check: Code Coverage
🔇 Additional comments (14)
docs/code_organization.md (1)
140-149: Confirm debug test names still match file

Double-check that the function names in tests/circumsphere_debug_tools.rs still match the invocations shown (test_2d_circumsphere_debug, etc.). Update them if they've been renamed.
src/geometry/util.rs (1)
175-213: Confirm trait bounds cover Float usage

`safe_cast_to_f64`/`safe_cast_from_f64` rely on `is_finite`/`is_finite_generic` and `num_traits::cast`; `hypot` uses `Float` methods. Ensure `CoordinateScalar` implies `Float` for T or add explicit `T: Float` bounds where needed.
scripts/benchmark_utils.py (2)
535-543: Unit normalization: include "μs" and "µs" — good

Nice touch supporting both micro-sign variants. LGTM.
842-851: determine_benchmark_skip: safe git object check — good

Using `cat-file -e` with a validated SHA and no shell is solid.
src/core/cell.rs (1)
906-915: Good implementation of neighbors length validation. The validation correctly enforces the D+1 length invariant for the neighbors vector when present, ensuring positional semantics are maintained.
src/core/triangulation_data_structure.rs (9)
174-178: Good switch to specialized collections (FxHash, SmallBuffer, typed maps). The imports align with the PR goals (perf/memory). Clear improvement over std::collections for hot paths.
1234-1234: Neighbor semantics documentation is clear. Explicitly stating preservation of None slots opposite vertices[i] will save future debugging time.
1480-1483: Good: pre-allocated vertex→cells map. This should materially reduce rehashing under larger meshes.
1545-1547: Duplicate removal path improved. Using FastHashMap plus sort_unstable is the right tradeoff here. Also applies to: 1557-1557
1579-1592: Typed facet map API is cleaner and cheaper. Returning FacetToCellsMap clarifies intent, and capacity pre-sizing is good. No issues spotted. Also applies to: 1631-1654
2342-2360: Stable cell comparison with sort_unstable is appropriate. This helps PartialEq avoid order sensitivity. Looks good.
5185-5207: Tests: fast facet map and Option neighbor counts. The assertions match the new neighbor representation (counting non-None entries). Nice coverage.
Also applies to: 5225-5244
5738-5777: Topology reset helper validated. The clear_all_neighbors() test ensures invariants after clearing; good to have.
5799-5921: Serialization roundtrip test is valuable; will pass once the Deserialize bug is fixed. After applying the manual deserializer fix, this should be green. Keep this as a guardrail.
Enhances benchmark utilities to include timeout handling for cargo bench commands, preventing indefinite execution. Introduces `ProjectRootNotFoundError` for clearer error reporting when `Cargo.toml` is missing. Exports `BENCHMARK_REGRESSION_DETECTED` to GITHUB_ENV for better CI integration. Protects against division by zero in throughput calculations.
There was a problem hiding this comment.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/core/triangulation_data_structure.rs (1)
1079-1096: Generation counter not bumped when creating the very first cell (cache invalidation bug). In the “exactly D+1 vertices” branch, you return early after inserting the initial cell and assigning incident cells, but do not bump generation. Any cache keyed by generation will miss this topology change.
Apply this diff:
 // Case 2: Exactly D+1 vertices - create first cell directly
 if vertex_count == D + 1 && self.number_of_cells() == 0 {
     let all_vertices: Vec<_> = self.vertices.values().copied().collect();
     let cell = CellBuilder::default()
         .vertices(all_vertices)
         .build()
         .map_err(|_| "Failed to create initial cell from vertices")?;
     let cell_key = self.cells.insert(cell);
     let cell_uuid = self.cells[cell_key].uuid();
     self.cell_bimap.insert(cell_uuid, cell_key);
     // Assign incident cells to vertices
     self.assign_incident_cells()
         .map_err(|_| "Failed to assign incident cells")?;
-    return Ok(());
+    // Topology changed; invalidate caches.
+    self.generation.fetch_add(1, Ordering::Relaxed);
+    return Ok(());
 }
♻️ Duplicate comments (2)
src/core/collections.rs (1)
431-459: No need for a global hasher; current approach is fine. Prior comments suggested a static BuildHasher. FxBuildHasher is a ZST; default() is zero-cost. The current with_capacity_and_hasher(capacity, FxBuildHasher::default()) is optimal.
benches/profiling_suite.rs (1)
124-166: Grid points_per_dim: add bounds checks to avoid overflow/NaN; panic change is good. Fail-fast on grid generation is correct (addresses prior fallback concern). Add safe computation for points_per_dim to prevent edge-case overflow for extreme inputs.
- let points_per_dim = ((count as f64).powf(1.0 / D as f64).ceil() as usize).max(2);
+ let raw = (count as f64).powf(1.0 / D as f64).ceil();
+ let points_per_dim = if raw.is_finite() && raw >= 2.0 {
+     raw.min(usize::MAX as f64) as usize
+ } else {
+     2
+ };
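To see what the guarded computation does, here is a small Python model of the suggested bounds check (illustrative only; the bench helper itself is Rust, and `points_per_dim` here is a hypothetical name):

```python
import math

def points_per_dim(count: int, d: int) -> int:
    """Grid side length: ceil(count ** (1/D)), clamped to at least 2.

    Guards against non-finite intermediate values for extreme inputs,
    mirroring the bounds check suggested for the Rust bench helper.
    """
    raw = count ** (1.0 / d)  # float; may be NaN/inf for pathological inputs
    if not math.isfinite(raw) or raw < 2.0:
        return 2
    return math.ceil(raw)
```

For example, 1000 points in 3D need a 10×10×10 grid, while a degenerate request (1 point) still yields the minimum side length of 2.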
🧹 Nitpick comments (29)
WARP.md (1)
102-102: Quoting note for cross-shell usage (Windows/PowerShell). RUSTDOCFLAGS='-D warnings' is perfect for Bash. Consider adding a short note/example for PowerShell (e.g., $env:RUSTDOCFLAGS='-D warnings') to avoid confusion on Windows dev machines.
docs/code_organization.md (1)
151-159: Minor: add a quiet mode example for CI logs. Consider including uv run pytest -q and an example for running a single test (e.g., -k 'sanitiz(e|ation)') to keep CI logs terse when desired.
src/core/collections.rs (1)
184-197: Shrink facet index storage from usize to u8. Facet indices are in 0..=D (D ≤ MAX_PRACTICAL_DIMENSION_SIZE). Using u8 cuts tuple size and improves cache density.
Apply this diff:
+/// Compact index type for facet positions (D+1 fits in u8 for practical D).
+pub type FacetIndex = u8;
-
-pub type FacetToCellsMap = FastHashMap<u64, SmallBuffer<(CellKey, usize), 2>>;
+pub type FacetToCellsMap = FastHashMap<u64, SmallBuffer<(CellKey, FacetIndex), 2>>;

And adjust call sites where facet indices are produced/consumed (cast as needed).
scripts/benchmark_utils.py (2)
168-199: Make Criterion results discovery robust to bench/group renames. Hard-coding "tds_new_*d/tds_new" is brittle. Generalize to any "*d" dir and take its lone inner benchmark dir.
Apply this diff:
-    # Look for benchmark results in tds_new_*d directories
-    for dim_dir in sorted(criterion_dir.glob("tds_new_*d")):
-        dim = dim_dir.name.removeprefix("tds_new_").removesuffix("d")
-        if not dim.isdigit():
-            continue
-        benchmark_dir = dim_dir / "tds_new"
-        if not benchmark_dir.exists():
-            continue
+    # Look for benchmark results in *d directories (group names can change)
+    for dim_dir in sorted(p for p in criterion_dir.iterdir() if p.is_dir() and p.name.endswith("d")):
+        dim = dim_dir.name.removesuffix("d")
+        if not dim.isdigit():
+            # Fallback: extract trailing "<digits>d"
+            m = re.search(r"(\d+)d$", dim_dir.name)
+            if not m:
+                continue
+            dim = m.group(1)
+        # Criterion nests one directory per benchmark target under each *d group
+        benchmark_dir = next((p for p in dim_dir.iterdir() if p.is_dir()), None)
+        if benchmark_dir is None:
+            continue
943-947: Use project root detection instead of CWD in CI. GH Actions typically runs at repo root, but be defensive: use find_project_root().
Apply this diff:
- project_root = Path.cwd()
+ project_root = find_project_root()

src/core/algorithms/robust_bowyer_watson.rs (4)
7-7: Prefer fast collections for hot paths. Replace std::collections::HashMap/HashSet with core::collections::{FastHashMap, FastHashSet} to align with the new perf types used elsewhere.
Apply this diff:
-use std::collections::{HashMap, HashSet};
+use crate::core::collections::{FastHashMap as HashMap, FastHashSet as HashSet, SmallBuffer};
666-699: Avoid extra allocations when transforming facet maps. Return a compact map using SmallBuffer<CellKey, 2> to keep data on-stack in the common case.
Apply this diff:
-    ) -> Result<HashMap<u64, Vec<CellKey>>, TriangulationValidationError> {
+    ) -> Result<HashMap<u64, SmallBuffer<CellKey, 2>>, TriangulationValidationError> {
         // Reuse existing mapping from TDS to avoid recomputation
         let tds_map = tds.build_facet_to_cells_hashmap();
-        // Transform the TDS map into the required format with validation
-        let facet_to_cells: HashMap<u64, Vec<CellKey>> = tds_map
+        // Transform into a compact representation with validation
+        let facet_to_cells: HashMap<u64, SmallBuffer<CellKey, 2>> = tds_map
             .into_iter()
             .map(|(facet_key, cell_facet_pairs)| {
-                // Extract just the CellKeys, discarding facet indices
-                let cell_keys: Vec<CellKey> = cell_facet_pairs
-                    .iter()
-                    .map(|(cell_key, _)| *cell_key)
-                    .collect();
+                // Extract just the CellKeys, discarding facet indices
+                let mut cell_keys: SmallBuffer<CellKey, 2> = SmallBuffer::new();
+                cell_facet_pairs.iter().for_each(|(cell_key, _)| cell_keys.push(*cell_key));

Call sites that iterate over sharing_cells keep working (iter/len on SmallVec).
515-522: Use FastHashSet for bad_cell_set/processed_facets. These are on hot paths; swap to FastHashSet for consistency and speed.
- let bad_cell_set: HashSet<CellKey> = bad_cells.iter().copied().collect();
+ let bad_cell_set: HashSet<CellKey> = bad_cells.iter().copied().collect();
@@
- let mut processed_facets = HashSet::new();
+ let mut processed_facets = HashSet::default();
867-875: Micro-opt: avoid heap Vec for small simplices. Use a SmallBuffer for simplex points (D+1) to reduce allocs in tight loops.
-    let mut simplex_with_opposite: Vec<Point<T, D>> =
-        facet_vertices.iter().map(|v| *v.point()).collect();
+    let mut simplex_with_opposite: SmallBuffer<Point<T, D>, 8> =
+        facet_vertices.iter().map(|v| *v.point()).collect();
     simplex_with_opposite.push(*opposite_vertex.point());
-    let mut simplex_with_test: Vec<Point<T, D>> =
-        facet_vertices.iter().map(|v| *v.point()).collect();
+    let mut simplex_with_test: SmallBuffer<Point<T, D>, 8> =
+        facet_vertices.iter().map(|v| *v.point()).collect();
     simplex_with_test.push(*vertex.point());

src/core/cell.rs (1)
245-257: Add a safe setter to prevent invalid neighbor assignments. Direct public mutation invites mistakes. Provide a checked API to set neighbors.
impl<T, U, V, const D: usize> Cell<T, U, V, D>
where
    T: CoordinateScalar,
    U: DataType,
    V: DataType,
    [T; D]: Copy + Default + serde::de::DeserializeOwned + serde::Serialize + Sized,
{
    pub fn set_neighbors(&mut self, neighbors: Vec<Option<uuid::Uuid>>) -> Result<(), CellValidationError> {
        if neighbors.len() != D + 1 {
            return Err(CellValidationError::InvalidNeighborsLength {
                actual: neighbors.len(),
                expected: D + 1,
                dimension: D,
            });
        }
        self.neighbors = Some(neighbors);
        Ok(())
    }
}

benches/profiling_suite.rs (4)
53-56: Right-size SmallBuffer capacities. For 3D simplices D+1=4; 8 is safe but wastes stack. Consider 4.
-const SIMPLEX_VERTICES_BUFFER_SIZE: usize = 8; // For 3D simplex vertices (4) with some headroom
+const SIMPLEX_VERTICES_BUFFER_SIZE: usize = 4; // 3D simplex = 4 vertices
58-76: When count-allocations is disabled, emit one-time note to avoid misleading zero stats. Currently summaries will show zeros; clarify that it’s a no-op mode.
#[cfg(not(feature = "count-allocations"))]
fn print_count_allocations_banner_once() {
    use std::sync::Once;
    static ONCE: Once = Once::new();
    ONCE.call_once(|| eprintln!("count-allocations feature not enabled; memory stats are placeholders."));
}

Call at the start of benchmark_memory_profiling() before running groups:
#[cfg(not(feature = "count-allocations"))]
print_count_allocations_banner_once();
172-223: DRY env-based measurement time parsing. Factor out repeated BENCH_MEASUREMENT_TIME parsing into a helper.
fn bench_time(default_secs: u64) -> Duration {
    std::env::var("BENCH_MEASUREMENT_TIME")
        .ok()
        .and_then(|s| s.parse::<u64>().ok())
        .map(Duration::from_secs)
        .unwrap_or(Duration::from_secs(default_secs))
}

Then replace group.measurement_time(...) calls with group.measurement_time(bench_time(DEFAULT));
Also applies to: 221-229, 271-279, 316-323
358-371: 95th percentile index: use nearest-rank to avoid off-by-one at small N. Current index casts may under-select. Optional tweak:
- let index = ((values.len() as f64 * 0.95) as usize).min(values.len() - 1);
+ let n = values.len();
+ let rank = ((95 * (n.saturating_sub(1))) + 99) / 100; // nearest-rank, clamps at n-1
+ let index = rank.min(n - 1);
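A quick Python model of the two index computations (hypothetical helper names) makes it easy to check that the nearest-rank formula stays in bounds even for one- and two-element samples:

```python
def p95_naive(values):
    """Index = int(0.95 * n), clamped to the last element."""
    s = sorted(values)
    return s[min(int(len(s) * 0.95), len(s) - 1)]

def p95_nearest_rank(values):
    """Nearest-rank on zero-based indices: ceil(0.95 * (n-1)), clamped."""
    s = sorted(values)
    n = len(s)
    rank = ((95 * (n - 1)) + 99) // 100  # integer ceil of 0.95 * (n-1)
    return s[min(rank, n - 1)]
```

Both variants can then be compared over small sample sizes to confirm the nearest-rank version never indexes past the end.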
139-145: MSRV check: avoid let-chains in env parsing unless MSRV ≥ 1.88. If the crate's MSRV is below 1.88 (where let-chains were stabilized), `if let` chains won't compile. Prefer a nested parse to be safe.

fn max_grid_bytes_safety_cap() -> usize {
-    if let Ok(v) = std::env::var("MAX_GRID_BYTES_SAFETY_CAP")
-        && let Ok(n) = v.parse::<usize>()
-    {
-        return n;
-    }
+    if let Ok(v) = std::env::var("MAX_GRID_BYTES_SAFETY_CAP") {
+        if let Ok(n) = v.parse::<usize>() {
+            return n;
+        }
+    }
     MAX_GRID_BYTES_SAFETY_CAP_DEFAULT
 }
151-172: format_bytes: avoid f64 conversion; keep it integer for precision and simplicity. The current implementation converts to f64 (via safe_usize_to_scalar) and then divides, which is unnecessary and can introduce rounding quirks for large values. A pure-integer approach is simpler and exact.
fn format_bytes(bytes: usize) -> String {
-    const UNITS: &[&str] = &["B", "KiB", "MiB", "GiB", "TiB"];
-
-    // Use safe cast to avoid precision loss warnings
-    let Ok(mut size) = safe_usize_to_scalar::<f64>(bytes) else {
-        // Fallback for extremely large values
-        return format!("{bytes} B");
-    };
-
-    let mut unit_index = 0;
-
-    while size >= 1024.0 && unit_index < UNITS.len() - 1 {
-        size /= 1024.0;
-        unit_index += 1;
-    }
-
-    if unit_index == 0 {
-        format!("{} {}", bytes, UNITS[0])
-    } else {
-        format!("{:.1} {}", size, UNITS[unit_index])
-    }
+    const UNITS: &[&str] = &["B", "KiB", "MiB", "GiB", "TiB"];
+    let mut val = bytes;
+    let mut unit = 0;
+    while val >= 1024 && unit < UNITS.len() - 1 {
+        val /= 1024;
+        unit += 1;
+    }
+    if unit == 0 {
+        format!("{} {}", bytes, UNITS[0])
+    } else {
+        // Use the remainder to produce one decimal place without floating point
+        let higher = bytes >> (10 * unit);
+        let rem = bytes - (higher << (10 * unit));
+        let tenth = (rem * 10) >> (10 * unit);
+        format!("{}.{} {}", higher, tenth, UNITS[unit])
+    }
}
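The remainder trick in the suggested integer formatter can be cross-checked with a direct Python transliteration (illustrative only; this `format_bytes` is a port of the sketch above, not library code):

```python
def format_bytes(n: int) -> str:
    """Integer-only byte formatting with one decimal place.

    Shifts by multiples of 10 bits (1024 = 2**10) and derives the
    tenths digit from the remainder, avoiding floating point entirely.
    """
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    val, unit = n, 0
    while val >= 1024 and unit < len(units) - 1:
        val //= 1024
        unit += 1
    if unit == 0:
        return f"{n} {units[0]}"
    higher = n >> (10 * unit)           # whole units
    rem = n - (higher << (10 * unit))   # leftover bytes below one unit
    tenth = (rem * 10) >> (10 * unit)   # first decimal digit, truncated
    return f"{higher}.{tenth} {units[unit]}"
```

Note this truncates rather than rounds the decimal digit, which is worth keeping in mind when updating the existing rounding-sensitive tests.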
191-201: Trait bound clarity: scaled_hypot_2d uses Float APIs. scaled_hypot_2d calls Float::abs and Float::sqrt, so it implicitly requires T: Float. If CoordinateScalar doesn’t guarantee Float, add it here to make the bound explicit.

-fn scaled_hypot_2d<T: CoordinateScalar + num_traits::Zero>(x: T, y: T) -> T {
+fn scaled_hypot_2d<T: CoordinateScalar + num_traits::Zero + num_traits::Float>(x: T, y: T) -> T {
471-476: Use the correct f64 exact-integer limit (2^53−1), and align docs. f64 can exactly represent all integers up to 2^53−1. Using 2^52 is overly conservative and may error on values that are still exactly representable, surprising callers. Update the limit and docs.
-/// - `f64` mantissa has 52 bits of precision
-/// - `usize` values larger than 2^52 (4,503,599,627,370,496) may lose precision
+/// - f64 represents all integers exactly up to 2^53−1 (9,007,199,254,740,991)
+/// - `usize` values larger than 2^53−1 may lose precision when converted via f64
@@
-    const MAX_PRECISE_USIZE_IN_F64: u64 = 1_u64 << 52; // 2^52 = 4,503,599,627,370,496
+    const MAX_PRECISE_USIZE_IN_F64: u64 = (1_u64 << 53) - 1; // 2^53−1
@@
-    if value_u64 > MAX_PRECISE_USIZE_IN_F64 {
+    if value_u64 > MAX_PRECISE_USIZE_IN_F64 {
         return Err(CoordinateConversionError::ConversionFailed {

Note: Tests that assert the 2^52 boundary should be adjusted to use 2^53−1 as the safe limit and 2^53 as the first failing case. I can provide those diffs if you want.
Also applies to: 482-500
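The 2^53 boundary is easy to verify empirically. A short Python check (Python floats are IEEE-754 doubles, the same representation as Rust's f64):

```python
# Every integer up to 2**53 - 1 survives a round-trip through a double;
# 2**53 itself is representable (it is a power of two), but 2**53 + 1 is
# the first integer that cannot be represented and silently rounds away.
MAX_EXACT = 2**53 - 1

assert int(float(MAX_EXACT)) == MAX_EXACT       # exact round-trip
assert int(float(2**53)) == 2**53               # power of two: still exact
assert int(float(2**53 + 1)) != 2**53 + 1       # first lost integer
```

This is why the review suggests 2^53−1 as the strict "all integers below are exact" limit, rather than the 52-bit mantissa width alone.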
609-616: 2D hypot path: minor simplification. You convert to f64 and potentially fall back. Consider extracting the “convert-to/from f64 with fallback” into a small helper to reduce duplication if you end up using it elsewhere. No action required if this is the only call site.
819-825: Preserve error semantics in circumradius. Wrapping every error from circumradius_with_center as MatrixInversionFailed hides real causes (e.g., coordinate conversion). Propagate the original error instead.

-    let circumcenter = circumcenter(points)?;
-    circumradius_with_center(points, &circumcenter).map_err(|e| {
-        CircumcenterError::MatrixInversionFailed {
-            details: format!("Failed to calculate circumradius: {e}"),
-        }
-    })
+    let circumcenter = circumcenter(points)?;
+    circumradius_with_center(points, &circumcenter)
890-898: Docs mismatch with implementation for (D−1)-facet cases. The bullets say “2D: Area of triangle” but the 2D branch computes length of a line segment (correct for a (D−1)-facet with D=2). Please correct the docs to avoid confusion.

-/// - 1D: Distance between two points (length)
-/// - 2D: Area of triangle using cross product
-/// - 3D: Volume of tetrahedron using scalar triple product
+/// - 1D: Measure of a 0D facet (by convention 1)
+/// - 2D: Length of a line segment (1D facet in 2D)
+/// - 3D: Area of a triangle using cross product (2D facet in 3D)
1095-1105: Avoid usize factorial overflow/precision issues by computing in f64. peroxide::statistics::ops::factorial returns an integer type; converting very quickly exceeds 2^53−1 and triggers your precision guard. Since you ultimately need f64, compute the factorial directly in f64.

-    // Calculate (D-1)! factorial - peroxide's factorial function returns usize
-    let factorial_usize = factorial(D - 1);
-    let factorial_val = safe_usize_to_scalar::<f64>(factorial_usize).map_err(|_| {
-        CircumcenterError::ValueConversion(ValueConversionError::ConversionFailed {
-            value: factorial_usize.to_string(),
-            from_type: "usize",
-            to_type: "f64",
-            details: "Factorial value too large for f64 precision".to_string(),
-        })
-    })?;
-    sqrt_det / factorial_val
+    // Compute (D-1)! directly in f64 to avoid integer overflow/precision conversions
+    let factorial_val = (1..=D.saturating_sub(1))
+        .fold(1.0_f64, |acc, k| acc * (k as f64));
+    sqrt_det / factorial_val
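The suggested fold is just a running float product over 1..D-1. A small Python model (floats are IEEE-754 doubles, matching Rust's f64; `factorial_f64` is a hypothetical name) shows the empty-range convention yields 1.0, as required for D ≤ 2:

```python
def factorial_f64(d: int) -> float:
    """Compute (d-1)! as a float, mirroring the suggested Rust fold.

    An empty range (d <= 1) multiplies nothing and returns 1.0,
    which matches the mathematical convention 0! = 1.
    """
    acc = 1.0
    for k in range(1, d):  # k = 1 .. d-1 inclusive
        acc *= float(k)
    return acc
```

Each multiplication rounds to the nearest double, so very large D loses some low-order bits, but it never panics or trips an integer-conversion guard.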
1366-1416: generate_grid_points: clarify error message when index conversion fails. The error uses {idx:?} for the whole index vector, which is great, but the min/max fields currently reflect the index range, not coordinates. Consider setting min/max to something like format!("{}", offset[d]) and format!("{}", offset[d] + (points_per_dim - 1) * spacing) for better guidance. Optional.
1476-1552: Poisson sampling: O(n²) neighbor checks; consider a grid/bin accelerator. The all-pairs distance check will degrade as n grows. A simple uniform grid (cell size = min_distance) or spatial hash can reduce candidate checks to near-constant time on average, materially speeding up 2D/3D and keeping higher-D cases manageable. Keeping the current path as a fallback is fine.
I can sketch a drop-in grid-based acceleration that preserves determinism with the given seed.
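For reference, a minimal Python sketch of such a grid accelerator for the 2D case (Bridson-style cell size of min_dist/√2 so each cell holds at most one point; all names are hypothetical, not the crate's API):

```python
import math
import random

def poisson_points_grid(count, min_dist, size=1.0, seed=42, max_tries=10_000):
    """Rejection sampling with a uniform grid, so each candidate checks only
    nearby cells instead of every accepted point (near O(1) per candidate)."""
    rng = random.Random(seed)          # deterministic with a fixed seed
    cell = min_dist / math.sqrt(2.0)   # cell diagonal == min_dist
    grid = {}                          # (ix, iy) -> accepted point (x, y)
    pts = []
    tries = 0
    while len(pts) < count and tries < max_tries:
        tries += 1
        x, y = rng.uniform(0, size), rng.uniform(0, size)
        ix, iy = int(x / cell), int(y / cell)
        # Only cells within 2 steps can contain a point closer than min_dist.
        ok = True
        for dx in range(-2, 3):
            for dy in range(-2, 3):
                p = grid.get((ix + dx, iy + dy))
                if p is not None and math.hypot(p[0] - x, p[1] - y) < min_dist:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            grid[(ix, iy)] = (x, y)
            pts.append((x, y))
    return pts
```

Same-cell conflicts are rejected by the dx = dy = 0 check, so the one-point-per-cell invariant holds; determinism follows from the seeded RNG, which matters for reproducible benchmark baselines.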
4021-4045: format_bytes tests rely on specific rounding; keep behavior stable after refactor. If you adopt the integer formatter above, the outputs remain the same for these cases. If you keep f64, add a small epsilon-based formatting test to avoid flakiness on other platforms.
src/core/triangulation_data_structure.rs (4)
1288-1293: Avoid per-facet heap allocs; reuse a single buffer inside the loop. facet_vertices is rebuilt for each i with a fresh Vec, causing avoidable allocs. Reuse one Vec and clear it per iteration to cut churn in tight loops.
Apply this diff:
-            for i in 0..vertex_keys.len() {
-                // Compute facet key by creating a slice that excludes the i-th vertex
-                // More efficient than clone+remove: directly iterate without creating intermediate collection
-                let facet_vertices: Vec<_> = vertex_keys
-                    .iter()
-                    .enumerate()
-                    .filter(|(j, _)| *j != i)
-                    .map(|(_, &key)| key)
-                    .collect();
-                let facet_key = facet_key_from_vertex_keys(&facet_vertices);
-                // Store both the cell and the vertex index that is opposite to this facet
-                facet_map.entry(facet_key).or_default().push((cell_key, i));
-            }
+            let mut facet_vertices = Vec::with_capacity(vertex_keys.len().saturating_sub(1));
+            for i in 0..vertex_keys.len() {
+                facet_vertices.clear();
+                for (j, &key) in vertex_keys.iter().enumerate() {
+                    if j != i {
+                        facet_vertices.push(key);
+                    }
+                }
+                let facet_key = facet_key_from_vertex_keys(&facet_vertices);
+                facet_map.entry(facet_key).or_default().push((cell_key, i));
+            }

Also applies to: 1305-1314
1487-1490: Tiny fast-path: return early when no cells exist. Saves building vertex_to_cells on empty triangulations.
Apply this diff:
 pub fn assign_incident_cells(&mut self) -> Result<(), TriangulationValidationError> {
+    if self.cells.is_empty() {
+        return Ok(());
+    }
     // Build vertex_to_cells mapping using optimized collections
     let mut vertex_to_cells: VertexToCellsMap = fast_hash_map_with_capacity(self.vertices.len());
1552-1554: Bump generation when duplicates are removed. remove_duplicate_cells mutates core topology but doesn’t invalidate caches.
Apply this diff:
 // Second pass: remove duplicate cells and their corresponding UUID mappings
 for cell_key in &cells_to_remove {
     if let Some(removed_cell) = self.cells.remove(*cell_key) {
         self.cell_bimap.remove_by_left(&removed_cell.uuid());
     }
 }
-duplicate_count
+if duplicate_count > 0 {
+    self.generation.fetch_add(1, Ordering::Relaxed);
+}
+duplicate_count

Also applies to: 1576-1584
4685-4704: Stale test comment contradicts assertion. The comment claims “should pass” but the test intentionally asserts failure due to neighbors length 3 < D+1 = 4.
Apply this diff:
-    // This should pass validation (exactly D neighbors is valid)
-    let result = tds.is_valid();
-    // Should fail because the neighbors vector has wrong length (3 instead of 4 for 3D)
+    // Intentionally invalid: neighbors length is 3 (< D+1 = 4). Expect failure.
+    let result = tds.is_valid();
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- CHANGELOG.md (1 hunks)
- WARP.md (2 hunks)
- benches/profiling_suite.rs (1 hunks)
- docs/code_organization.md (3 hunks)
- scripts/benchmark_utils.py (12 hunks)
- scripts/subprocess_utils.py (3 hunks)
- scripts/tests/test_benchmark_utils.py (4 hunks)
- src/core/algorithms/robust_bowyer_watson.rs (6 hunks)
- src/core/cell.rs (13 hunks)
- src/core/collections.rs (1 hunks)
- src/core/triangulation_data_structure.rs (46 hunks)
- src/geometry/util.rs (5 hunks)
✅ Files skipped from review due to trivial changes (1)
- CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (1)
- scripts/tests/test_benchmark_utils.py
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
- scripts/subprocess_utils.py
- scripts/benchmark_utils.py
🧠 Learnings (4)
📚 Learning: 2025-09-02T02:40:54.439Z
Learnt from: acgetchell
PR: acgetchell/delaunay#58
File: scripts/tests/test_subprocess_utils.py:92-101
Timestamp: 2025-09-02T02:40:54.439Z
Learning: In Rust projects, cargo is expected to be available in development and CI environments, so conditional skipping of cargo-dependent tests is typically unnecessary.
Applied to files:
WARP.md
📚 Learning: 2025-09-04T20:03:49.859Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.859Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
docs/code_organization.md
🧬 Code graph analysis (8)
src/core/cell.rs (3)
- src/core/facet.rs (3): vertices(437-444), cell(325-327), vertex(370-372)
- src/core/vertex.rs (2): is_valid(538-554), uuid(458-460)
- src/core/triangulation_data_structure.rs (1): is_valid(2117-2153)

src/geometry/util.rs (2)
- src/geometry/point.rs (4): std(323-323), std(324-324), num_traits(257-257), new(74-76)
- src/geometry/traits/coordinate.rs (1): new(643-643)

benches/profiling_suite.rs (5)
- src/geometry/util.rs (42): core(1387-1387), generate_grid_points(1365-1431, 3655-3655, 3688-3688, 3704-3704, 3720-3720, 3729-3729, 3743-3743, 3753-3753, 3773-3773, 3794-3794, 3807-3807, 4052-4052), generate_poisson_points(1475-1565, 3843-3843, 3876-3876, 3909-3909, 3910-3910, 3924-3924, 3931-3931, 3942-3942, 3955-3955, 3961-3961, 3970-3970, 3993-3993, 4005-4005), generate_random_points_seeded(1289-1316, 3417-3418, 3437-3438, 3456-3457, 3475-3476, 3495-3496, 3500-3501, 3505-3506, 3510-3510)
- src/core/triangulation_data_structure.rs (2): default(207-209), new(871-903)
- src/geometry/algorithms/convex_hull.rs (2): default(1117-1123), from_triangulation(239-271)
- src/core/cell.rs (7): vertices(475-477), cell(2217-2217, 2346-2346, 2348-2348, 2352-2352, 2551-2551, 2564-2564)
- src/geometry/predicates.rs (1): insphere_lifted(500-616)

src/core/triangulation_data_structure.rs (3)
- src/core/collections.rs (2): fast_hash_map_with_capacity(431-434, 533-533)
- src/core/cell.rs (11): from(372-374), vertex_keys(602-615), cell(2217-2217, 2346-2346, 2348-2348, 2352-2352, 2551-2551, 2564-2564), vertices(475-477), uuid(502-504), map(341-341)
- src/core/facet.rs (6): key(455-473), facet_key_from_vertex_keys(574-586), cell(325-327), vertices(437-444), vertex(370-372), map(217-217)

scripts/subprocess_utils.py (1)
- scripts/changelog_utils.py (1): run_git_command(622-649)

scripts/benchmark_utils.py (3)
- scripts/subprocess_utils.py (3): get_git_commit_hash(145-157), run_cargo_command(78-110), run_git_command(43-75)
- scripts/changelog_utils.py (2): run_git_command(622-649), main(804-837)
- scripts/hardware_utils.py (3): HardwareComparator(409-575), HardwareInfo(33-406), main(578-625)

src/core/algorithms/robust_bowyer_watson.rs (1)
- src/core/boundary.rs (1): tds(463-467)

src/core/collections.rs (3)
- src/core/cell.rs (2): uuid(502-504), map(341-341)
- src/lib.rs (2): fast_hash_map_with_capacity(310-310), fast_hash_set_with_capacity(313-313)
- src/core/triangulation_data_structure.rs (1): new(871-903)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Codacy Security Scan
- GitHub Check: performance-regression
- GitHub Check: Code Coverage
- GitHub Check: build (windows-latest)
🔇 Additional comments (23)
WARP.md (1)
258-270: TDD section looks solid and aligned with repo workflows. No issues; clear Red-Green-Refactor guidance and Rust-specific notes are helpful.
docs/code_organization.md (2)
90-91: Good: new Python test file surfaced in tree. Entry matches the added tests around benchmark tooling.
556-592: Version references may be stale. These “as of v0.4.2” size notes likely changed with v0.4.3 edits. Please re-check counts or drop the specific line counts to avoid drift.
src/core/collections.rs (1)
81-92: Good centralization of perf-oriented collections + clear safety caveat. Aliasing FxHash* and SmallVec behind domain types is the right call; the DoS warning is explicit and appropriate.
scripts/benchmark_utils.py (1)
877-893: Good security posture on baseline SHA. Regex validation and arg-list invocation avoid injection; using cat-file -e "^{commit}" is safe without a shell.
src/core/cell.rs (9)
109-121: Good addition: explicit neighbors-length validation error. The dedicated InvalidNeighborsLength variant is clear and actionable.
225-229: Neighbors field docs and representation look correct. D+1 positional semantics are well stated and match downstream usage.
Also applies to: 245-257
457-457: API improvement: vertices() returns a slice. Return-by-slice avoids copies and matches call-sites. LGTM.
Also applies to: 475-476
530-534: Docs: neighbor example matches D+1 invariant. Example aligns with positional semantics. Good.
861-874: Validation docs updated to cover neighbors. Accurate and complete. Good.
1400-1402: Tests: neighbors updated to D+1 layout. These adjustments prevent silent misuse. Nice.
Also applies to: 1435-1437, 1635-1637
2723-2850: Solid test coverage for neighbors length invariants. Covers correct, too-short, too-long, and None. LGTM.
3050-3057: Equality tests ignore neighbors as intended. Confirms Eq/Hash contract based solely on vertices. Good.
913-923: MSRV supports let-chains used here. Cargo.toml declares rust-version = "1.89.0", and let-chains were stabilized in Rust 1.88.0, so the guard is fully supported.

benches/profiling_suite.rs (2)
1-39: Bench docs are clear and actionable. Usage, modes, and categories are well documented.
978-989: Criterion config is reasonable for long benches. Global settings align with category-specific overrides.
src/core/triangulation_data_structure.rs (7)
174-178: Good switch to fast, domain-specific collections + clean FacetError propagation. The crate-local FastHashMap/Set and SmallBuffer usage is appropriate for hot paths. Adding TriangulationValidationError::FacetError(#[from]) simplifies error plumbing across facet ops.
Also applies to: 315-318
413-413: Serde symmetry looks solid. Custom serialize_with for BiMaps plus manual Deserialize that rebuilds BiMaps from HashMaps ensures stable JSON shape and round-tripping. Tests below cover it.
Also applies to: 417-417, 2474-2494
1638-1641: Facet map preallocation LGTM. Returning FacetToCellsMap with fast_hash_map_with_capacity(self.cells.len() * (D+1)) is a sensible heuristic.
2163-2177: Neighbor validation logic is correct and thorough. Precomputing vertex sets, enforcing length D+1, positional semantics (opposite vertex), mutuality, and |shared| == D cover the essential invariants.
Also applies to: 2185-2192, 2200-2247, 2249-2270, 2273-2280
2359-2361: sort_unstable for UUID lists is appropriate. Reduces overhead in PartialEq comparisons for cells.
Also applies to: 2375-2377
5176-5226: Nice boundary/shared facet accounting in tests. FastHashMap<u64, Vec> with facet.key() aligns with production hashing. Non-None neighbor counts are asserted correctly.
5506-5603: Tests for FacetError conversion and neighbor error propagation are well targeted. They validate the new error variant, message formatting, and propagation through fix_invalid_facet_sharing().
Also applies to: 5605-5651, 5753-5754, 5810-5815, 5839-5841
Extends benchmark suite to include 5D circumsphere containment tests and edge cases across all dimensions, improving the profiling suite for more granular performance analysis. Adds memory profiling with 95th percentile stats and optimizes query benchmarks with precomputed simplices, while also introducing env vars for CI tuning. Refs: None
There was a problem hiding this comment.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
benches/circumsphere_containment.rs (1)
431-456: Guard against division by zero in the consistency report. If all three methods error for all 1000 cases, `total` stays 0 and percentages become NaN. Add an early return.

```diff
-    println!("Method Comparisons ({total} total tests):");
+    if total == 0 {
+        println!("Method Comparisons (0 total tests): no valid cases; skipping percentage report.");
+        return;
+    }
+    println!("Method Comparisons ({total} total tests):");
```
♻️ Duplicate comments (4)
scripts/tests/test_benchmark_utils.py (1)
1182-1219: Patch target is correct (local module subprocess). Thanks for switching to `@patch("benchmark_utils.subprocess.run")` so the helper's internal calls are isolated.
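For readers less familiar with `unittest.mock`, the rule behind this fix is: patch the name in the namespace where it is *looked up*, not where it is defined. A minimal, self-contained sketch (the `fake_utils` stand-in below is fabricated for illustration; it mirrors how `benchmark_utils` reaches `subprocess.run` through its own import):

```python
# Patch-where-it's-used demo: a stand-in module that, like benchmark_utils,
# does `import subprocess` and calls subprocess.run through its own namespace.
import subprocess
import types
from unittest.mock import Mock, patch

fake_utils = types.ModuleType("fake_utils")  # hypothetical stand-in module
fake_utils.subprocess = subprocess

def run_bench():
    # The name lookup goes through fake_utils.subprocess at call time,
    # so patching that attribute intercepts the call.
    return fake_utils.subprocess.run(["ignored"], capture_output=True)

with patch.object(fake_utils.subprocess, "run", return_value=Mock(stdout="mocked")) as mock_run:
    result = run_bench()

# Outside the with-block the real subprocess.run is restored automatically.
```

Patching `subprocess.run` in some other test module's namespace would leave this lookup path untouched, which is exactly the bleed-over the review comment is about.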
scripts/subprocess_utils.py (1)
43-64: LGTM: hardened defaults and docs match prior guidance. Secure defaults, UTF-8 text mode, and TimeoutExpired in the docstring look good and align with earlier feedback.
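The hardening being approved can be summarized in a small sketch. `run_checked` below is a hypothetical wrapper, not the repo's actual `subprocess_utils` API; it illustrates the same defaults (argument lists only, `text=True` with explicit UTF-8, `check=True`, and a bounded `timeout` so `TimeoutExpired` surfaces to the caller):

```python
import subprocess
import sys

def run_checked(cmd: list[str], *, timeout: float = 30.0, **kwargs):
    """Hardened subprocess wrapper (illustrative sketch, not the repo's API)."""
    kwargs.pop("shell", None)       # never allow shell=True from callers
    kwargs.pop("executable", None)  # never allow executable overrides
    return subprocess.run(
        cmd,
        check=True,           # non-zero exit raises CalledProcessError
        capture_output=True,
        text=True,            # stdout/stderr are str, not bytes
        encoding="utf-8",     # explicit, platform-independent decoding
        timeout=timeout,      # TimeoutExpired propagates to the caller
        **kwargs,
    )

result = run_checked([sys.executable, "-c", "print('ok')"])
```

Stripping `shell`/`executable` from `**kwargs` rather than trusting callers is the key defensive move: the invariant holds even if a call site is written carelessly.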
src/core/triangulation_data_structure.rs (1)
1715-1716: Caches recomputed and generation bumped after facet repair — thanks for closing the loop. This addresses the prior concern about stale dependent caches after topology changes.
Also applies to: 1724-1724, 1734-1734, 1740-1744, 1791-1797
scripts/benchmark_utils.py (1)
1334-1341: Addressed: safe git invocation for baseline commit existence. You now validate `baseline_commit` with a SHA regex and use `run_git_command` with an arg list; this resolves the earlier injection concern.
🧹 Nitpick comments (21)
benches/README.md (1)
116-125: Add a brief pre-req note for uv to reduce friction. Suggest adding one line above these commands: "Requires uv (`pipx install uv` or `brew install uv`)."

````diff
 ## Performance Data Maintenance
 @@
 ```bash
+# Prerequisite: install uv (e.g., `pipx install uv` or `brew install uv`)
 # Generate updated performance summary
 uv run performance-summary-utils generate
````

benches/circumsphere_containment.rs (2)
86-86: Remove unused `_rng`. Leftover variable; drop it to keep benches clean.

```diff
-    let _rng = rand::rng();
```
88-111: Reduce duplication with a small macro for per-dimension benches. Both "different dimensions" and "edge cases" repeat near-identical blocks. A macro like below keeps names consistent and trims ~200 LOC.

```rust
macro_rules! bench_simplex {
    ($c:ident, $dim:literal, $simplex:expr, $pt:expr) => {{
        $c.bench_function(concat!($dim, "d/insphere"), |b| {
            b.iter(|| black_box(insphere(black_box(&$simplex), black_box($pt)).unwrap()))
        });
        $c.bench_function(concat!($dim, "d/insphere_distance"), |b| {
            b.iter(|| black_box(insphere_distance(black_box(&$simplex), black_box($pt)).unwrap()))
        });
        $c.bench_function(concat!($dim, "d/insphere_lifted"), |b| {
            b.iter(|| black_box(insphere_lifted(black_box(&$simplex), black_box($pt)).unwrap()))
        });
    }};
}
```

Then call `bench_simplex!(c, 2, simplex_2d, test_point_2d);` etc., and similarly for edge-case names.

Also applies to: 112-136, 137-161, 163-189, 191-387
docs/code_organization.md (2)
63-71: Mark PERFORMANCE_RESULTS.md as generated and point to the CLI. Add a short note that the file is auto-generated and should be updated via the CLI.
Apply this diff to insert a one-line notice:
```diff
 │   ├── README.md                  # Benchmarking guide and usage instructions
 │   ├── PERFORMANCE_RESULTS.md     # Auto-generated performance results and analysis
+│   │                              # (Do not edit manually; update via: uv run performance-summary-utils generate)
```
563-599: Avoid hard-coded line counts; they drift quickly. Replace "~2,400 lines as of v0.4.2" style wording with "approximate" and/or current version tag to reduce doc churn.

Example edit:

```diff
-#### `cell.rs` (large module; ~2,400 lines as of v0.4.2)
+#### `cell.rs` (large module; ~2,400 lines, approximate as of v0.4.3)
```

scripts/performance_summary_utils.py (2)
294-309: Simplify benchmark run control flow. With `check=True`, failures raise; the "failed" branch is unreachable.

Apply this diff:

```diff
-    if result:
-        print("✅ Circumsphere benchmarks completed successfully")
-        return True
-    print("❌ Circumsphere benchmarks failed")
-    return False
+    print("✅ Circumsphere benchmarks completed successfully")
+    return True
```
23-31: Reuse shared project-root utilities from benchmark_utils. Remove the local `ProjectRootNotFoundError` and `find_project_root` definitions in `scripts/performance_summary_utils.py` (lines 98–102) and import them instead from `benchmark_utils`, e.g.:

```diff
 try:
     from hardware_utils import HardwareInfo  # type: ignore[no-redef]
     from subprocess_utils import get_git_commit_hash, run_cargo_command, run_git_command  # type: ignore[no-redef]
+    from benchmark_utils import find_project_root, ProjectRootNotFoundError  # type: ignore[no-redef]
 except ModuleNotFoundError:
     from scripts.hardware_utils import HardwareInfo  # type: ignore[no-redef]
     from scripts.subprocess_utils import get_git_commit_hash, run_cargo_command, run_git_command  # type: ignore[no-redef]
+    from scripts.benchmark_utils import find_project_root, ProjectRootNotFoundError  # type: ignore[no-redef]

-class ProjectRootNotFoundError(Exception):
-    """Raised when project root directory cannot be located."""
-
-def find_project_root() -> Path:
-    """Find the project root by looking for Cargo.toml."""
-    …
-    raise ProjectRootNotFoundError(msg)
+# (Use shared find_project_root and ProjectRootNotFoundError from benchmark_utils.)
```

src/core/boundary.rs (1)
95-96: Use safe indexing and explicit cast for `facet_index`. Prevent accidental panics; prefer `usize::from` and a debug assertion.

Apply this diff:

```diff
-    boundary_facets.push(facets[facet_index as usize].clone());
+    let idx = usize::from(facet_index);
+    debug_assert!(idx < facets.len(), "facet_index out of bounds");
+    boundary_facets.push(facets[idx].clone());
```

src/core/traits/insertion_algorithm.rs (3)
909-915: Use safe indexing via `facets.get(..)` and avoid repeated casts. Casting with `as` and manually checking length is fine, but using `usize::from(...)` and `facets.get(fi)` removes the double cast/index and avoids accidental panics if the check drifts.

Apply:

```diff
-        let (cell_key, facet_index) = cells[0];
-        if let Some(cell) = tds.cells().get(cell_key)
-            && let Ok(facets) = cell.facets()
-            && (facet_index as usize) < facets.len()
-        {
-            let facet = &facets[facet_index as usize];
+        let (cell_key, facet_index) = cells[0];
+        let fi = usize::from(facet_index);
+        if let Some(cell) = tds.cells().get(cell_key)
+            && let Ok(facets) = cell.facets()
+            && facets.get(fi).is_some()
+        {
+            let facet = &facets[fi];
```
931-938: Same cast/index pattern here — prefer `usize::from` + `get()`. Keeps the style consistent with the boundary-facet loop above and prevents duplicate casts.

```diff
-        for &(cell_key, facet_index) in cells {
-            if let Some(cell) = tds.cells().get(cell_key)
-                && let Ok(facets) = cell.facets()
-                && (facet_index as usize) < facets.len()
-            {
-                let facet = &facets[facet_index as usize];
+        for &(cell_key, facet_index) in cells {
+            let fi = usize::from(facet_index);
+            if let Some(cell) = tds.cells().get(cell_key)
+                && let Ok(facets) = cell.facets()
+                && facets.get(fi).is_some()
+            {
+                let facet = &facets[fi];
```
1191-1195: Tighten bounds-check-and-index to a single `get()`. This avoids manual `len()` checks and repeated casts.

```diff
-        if let Ok(facets) = cell.facets() {
-            if (facet_index as usize) < facets.len() {
-                let facet = &facets[facet_index as usize];
+        if let Ok(facets) = cell.facets() {
+            if let Some(facet) = facets.get(usize::from(facet_index)) {
                 // Test visibility using proper orientation predicates
                 if Self::is_facet_visible_from_vertex_impl(tds, facet, vertex, cell_key) {
                     visible_facets.push(facet.clone());
                 }
             }
```

scripts/tests/test_benchmark_utils.py (2)
34-43: Deduplicate temp_chdir helper across tests. This contextmanager appears in multiple test modules. Move it to a shared test utility (e.g., scripts/tests/_utils.py or conftest.py fixture) to DRY imports and maintenance.

Example conftest fixture:

```diff
+# conftest.py
+import os
+from pathlib import Path
+import pytest
+
+@pytest.fixture
+def temp_chdir():
+    def _enter(path):
+        original = Path.cwd()
+        os.chdir(path)
+        return original
+    yield _enter
+    # caller should chdir back explicitly if needed
```
1496-1500: Use capsys for output assertions (optional). Converting mock print calls to strings works but is brittle. capsys would assert stdout/stderr content directly and keep messages readable.
scripts/tests/test_performance_summary_utils.py (2)
31-43: Share temp_chdir helper. Same helper exists in other test modules. Centralize to conftest.py or a small tests/utils.py to avoid duplication.
161-173: Stabilize side effects for dual git calls. `__init__` may query both tag and tag date; using `return_value` risks feeding "v0.4.2" to the date path. Prefer `side_effect=["v0.4.2", "2025-01-15"]` even if this test doesn't assert date.

```diff
-    with patch("performance_summary_utils.run_git_command") as mock_git:
-        mock_git.return_value = "v0.4.2"
+    with patch("performance_summary_utils.run_git_command") as mock_git:
+        mock_git.side_effect = ["v0.4.2", "2025-01-01"]
```

scripts/subprocess_utils.py (1)
66-80: Optional: DRY the kwargs-hardening into a tiny helper. A private `_build_run_kwargs(**kwargs)` would reduce duplication and keep the invariant consistent across wrappers.
Also applies to: 108-121, 146-160, 257-270
src/core/triangulation_data_structure.rs (1)
1648-1651: Avoid panicking on facet_index → u8 conversion. Even if D is small, panics in library code are undesirable. Prefer a debug_assert then cast, or widen FacetIndex. Optionally const-assert D+1 ≤ 256.

```diff
-            facet_to_cells.entry(facet_key).or_default().push((
-                cell_id,
-                u8::try_from(facet_index).unwrap_or_else(|_| {
-                    panic!("facet_index {facet_index} too large for FacetIndex (u8)")
-                }),
-            ));
+            debug_assert!(facet_index <= u8::MAX as usize, "facet_index too large");
+            facet_to_cells
+                .entry(facet_key)
+                .or_default()
+                .push((cell_id, facet_index as u8));
```

Also applies to: 1662-1667
scripts/benchmark_utils.py (4)
48-50: Exception type location: consider de-duplication. `ProjectRootNotFoundError` may already exist in performance_summary_utils. Prefer a single definition exported from one module to avoid divergence.
168-180: Only parses the first nested benchmark directory per dimension. Criterion may lay out multiple benchmark targets under each group. Scanning just the first dir risks dropping data.

Apply this to iterate all nested targets:

```diff
-    benchmark_dir = next((p for p in dim_dir.iterdir() if p.is_dir()), None)
-    if not benchmark_dir or not benchmark_dir.exists():
-        continue
-
-    # Find point count directories
-    for point_dir in benchmark_dir.iterdir():
+    # Iterate all nested benchmark targets under the <Nd> group
+    for benchmark_dir in (p for p in dim_dir.iterdir() if p.is_dir()):
+        # Find point count directories
+        for point_dir in benchmark_dir.iterdir():
+            if not point_dir.is_dir():
+                continue
+            ...
```
263-267: Timeout UX is helpful; add a log line. Consider logging the exception details at debug level to aid CI forensics.

```diff
-    except subprocess.TimeoutExpired:
+    except subprocess.TimeoutExpired as e:
         print(f"❌ Benchmark execution timed out after {bench_timeout} seconds", file=sys.stderr)
         print("   Consider increasing --bench-timeout or using --dev mode for faster benchmarks", file=sys.stderr)
+        logging.debug("TimeoutExpired: %s", e)
         return False
```
1565-1582: Duplicate find_project_root implementation. There's an equivalent helper in performance_summary_utils. Prefer importing a single implementation to prevent drift; if this must stay, at least add a module-level TODO to consolidate.

```diff
-def find_project_root() -> Path:
-    """Find the project root by looking for Cargo.toml.
-    ...
-    raise ProjectRootNotFoundError(msg)
+from scripts.performance_summary_utils import find_project_root  # re-export for CLI entrypoint
```

If performance_summary_utils already defines ProjectRootNotFoundError, align both to the same source.
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- benches/PERFORMANCE_RESULTS.md (1 hunks)
- benches/README.md (2 hunks)
- benches/circumsphere_containment.rs (4 hunks)
- benches/memory_scaling.rs (1 hunks)
- benches/profiling_suite.rs (1 hunks)
- benches/triangulation_vs_hull_memory.rs (1 hunks)
- docs/code_organization.md (7 hunks)
- pyproject.toml (2 hunks)
- scripts/benchmark_utils.py (13 hunks)
- scripts/performance_summary_utils.py (1 hunks)
- scripts/subprocess_utils.py (4 hunks)
- scripts/tests/test_benchmark_utils.py (4 hunks)
- scripts/tests/test_performance_summary_utils.py (1 hunks)
- scripts/tests/test_subprocess_utils.py (2 hunks)
- src/core/algorithms/robust_bowyer_watson.rs (7 hunks)
- src/core/boundary.rs (1 hunks)
- src/core/collections.rs (1 hunks)
- src/core/traits/insertion_algorithm.rs (3 hunks)
- src/core/triangulation_data_structure.rs (48 hunks)
✅ Files skipped from review due to trivial changes (1)
- benches/PERFORMANCE_RESULTS.md
🚧 Files skipped from review as they are similar to previous changes (3)
- benches/profiling_suite.rs
- src/core/algorithms/robust_bowyer_watson.rs
- src/core/collections.rs
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
- scripts/performance_summary_utils.py
- scripts/subprocess_utils.py
- scripts/benchmark_utils.py
🧠 Learnings (3)
📚 Learning: 2025-09-04T20:03:49.859Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.859Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
benches/circumsphere_containment.rsdocs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
docs/code_organization.md
🧬 Code graph analysis (12)
src/core/traits/insertion_algorithm.rs (2)
src/geometry/algorithms/convex_hull.rs (1)
facets(880-882)src/core/cell.rs (1)
facets(1036-1041)
benches/memory_scaling.rs (1)
benches/triangulation_vs_hull_memory.rs (1)
new_placeholder(88-109)
src/core/boundary.rs (3)
src/core/traits/boundary_analysis.rs (1)
boundary_facets(92-92)src/geometry/algorithms/convex_hull.rs (1)
facets(880-882)src/core/cell.rs (1)
facets(1036-1041)
scripts/tests/test_performance_summary_utils.py (1)
scripts/performance_summary_utils.py (16)
CircumspherePerformanceData (34-40), CircumsphereTestCase (44-74), PerformanceSummaryGenerator (121-885), ProjectRootNotFoundError (98-99), VersionComparisonData (78-95), find_project_root (102-118), get_winner (51-55), get_relative_performance (57-74), _get_fallback_circumsphere_data (424-459), _parse_circumsphere_benchmark_results (314-422), _analyze_performance_ranking (737-782), _generate_dynamic_recommendations (784-857), generate_summary (137-171), _run_circumsphere_benchmarks (287-312), _get_circumsphere_performance_results (461-562), _get_update_instructions (859-885)
scripts/tests/test_subprocess_utils.py (1)
scripts/subprocess_utils.py (1)
run_safe_command(127-162)
benches/circumsphere_containment.rs (1)
src/geometry/predicates.rs (3)
insphere(346-416)insphere_distance(216-245)insphere_lifted(500-616)
scripts/performance_summary_utils.py (3)
scripts/hardware_utils.py (2)
HardwareInfo(33-406)get_hardware_info(335-359)scripts/subprocess_utils.py (3)
get_git_commit_hash(166-178)run_cargo_command(85-124)run_git_command(43-82)scripts/benchmark_utils.py (2)
ProjectRootNotFoundError(48-49)find_project_root(1565-1581)
scripts/tests/test_benchmark_utils.py (2)
scripts/benchmark_utils.py (24)
BenchmarkRegressionHelper (1204-1490), CriterionParser (97-206), PerformanceComparator (298-604), WorkflowHelper (607-1201), parse_estimates_json (101-149), determine_tag_name (611-636), create_metadata (639-683), display_baseline_summary (686-719), sanitize_artifact_name (722-743), prepare_baseline (1208-1247), set_no_baseline_status (1250-1260), extract_baseline_commit (1263-1313), determine_benchmark_skip (1316-1359), display_skip_message (1362-1375), display_no_baseline_message (1378-1388), compare_with_baseline (306-367), run_regression_test (1391-1424), display_results (1427-1439), generate_summary (745-779), generate_summary (1442-1490), find_project_root (1565-1581), ProjectRootNotFoundError (48-49), BaselineGenerator (209-295), generate_baseline (217-268)

scripts/tests/test_performance_summary_utils.py (1)
temp_chdir(32-41)
benches/triangulation_vs_hull_memory.rs (1)
benches/memory_scaling.rs (1)
new_placeholder(72-90)
scripts/subprocess_utils.py (1)
scripts/changelog_utils.py (1)
run_git_command(622-649)
src/core/triangulation_data_structure.rs (3)
src/core/collections.rs (2)
fast_hash_map_with_capacity(460-463)fast_hash_map_with_capacity(562-562)src/core/cell.rs (11)
from(372-374)vertex_keys(602-615)cell(2217-2217)cell(2346-2346)cell(2348-2348)cell(2352-2352)cell(2551-2551)cell(2564-2564)vertices(475-477)uuid(502-504)map(341-341)src/core/facet.rs (6)
key(455-473)facet_key_from_vertex_keys(574-586)cell(325-327)vertices(437-444)vertex(370-372)map(217-217)
scripts/benchmark_utils.py (3)
scripts/subprocess_utils.py (3)
get_git_commit_hash(166-178)run_cargo_command(85-124)run_git_command(43-82)scripts/hardware_utils.py (2)
HardwareComparator(409-575)HardwareInfo(33-406)scripts/performance_summary_utils.py (7)
ProjectRootNotFoundError(98-99)generate_summary(137-171)_run_circumsphere_benchmarks(287-312)_generate_markdown_content(173-239)_get_current_version(241-265)_get_version_date(267-285)find_project_root(102-118)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: build (ubuntu-latest)
- GitHub Check: build (windows-latest)
- GitHub Check: performance-regression
- GitHub Check: Code Coverage
- GitHub Check: Codacy Static Code Analysis
🔇 Additional comments (30)
benches/memory_scaling.rs (1)
72-77: Const placeholder ctor: good consistency and zero-cost. Making `new_placeholder` a `const fn` matches the other benches and keeps call sites valid. No issues.

benches/triangulation_vs_hull_memory.rs (1)
88-109: Const placeholder ctor: LGTM. Const construction of zeros and f64 literals is fine and keeps this usable in const contexts. Nice that the CSV push happens outside `b.iter()`, avoiding lock noise in the hot path.

pyproject.toml (1)
41-41: Performance-summary-utils CLI wiring verified. Module `scripts/performance_summary_utils.py` defines a top-level `main()` and is exercised by `scripts/tests/test_performance_summary_utils.py`; no further changes needed.

benches/README.md (2)
32-49: Profiling Suite section reads clearly. Commands and feature flag usage are concrete and actionable. No change requested.

60-67: Nice callouts on profiling features. Precomputing simplices and reporting percentiles are good improvements for signal quality.
docs/code_organization.md (3)
39-39: New core/collections module docs look accurate. Matches the new public module and prelude re-exports. No issues.

157-164: Python tests note is helpful and consistent. Command examples are clear; no changes needed.

174-175: Architecture section correctly references collections.rs. Good cross-link with the Core Library overview.
scripts/tests/test_subprocess_utils.py (2)
129-135: Good: enforce text=True semantics despite caller override. Accurately validates the API contract and typing stability of CompletedProcess[str].

150-155: LGTM: combined-kwargs path tested. Covers the "text=False is ignored" branch plus check/capture flags.
scripts/tests/test_benchmark_utils.py (1)
143-179: Nice coverage: protects division-by-zero in very fast benchmarks. This test meaningfully exercises the epsilon guard in CriterionParser.
scripts/tests/test_performance_summary_utils.py (3)
315-341: End-to-end summary happy path looks solid. Validates tag, commit stamping, and file creation without running benches.

398-413: Good: benchmark runner is timeout-bound and mocked. Covers success path and asserts the command dispatch; complements timeout tests elsewhere.

570-581: Comprehensive integration coverage. Great to see all major sections asserted; reduces regressions in presentation.
src/core/triangulation_data_structure.rs (8)
174-178: Switch to fast collections looks good. Imports and local aliases improve perf and readability without widening API surface.

315-318: Good: propagate FacetError via TriangulationValidationError. This simplifies error plumbing and preserves context.

413-418: Serde attributes on BiMaps: LGTM. Custom serialize_with aligns with manual Deserialize below.

1093-1095: Nice: bump generation after initial cell creation. Prevents stale caches on the D+1 transition.
1289-1294: Neighbor assignment algorithm is solid and preserves positional semantics. Facet map + pre-sized SmallBuffer minimizes allocs; keeping None slots enforces neighbors[i] ↔ opposite(vertices[i]).
Also applies to: 1305-1314, 1319-1327, 1392-1405
1486-1492: Early return for empty cells is correct. Avoids building maps for degenerate cases.

1554-1556: Duplicate removal logic and generation bump: LGTM. Using FastHashMap + sort_unstable is efficient; cache invalidation only on change is good.
Also applies to: 1565-1565, 1586-1587
2175-2190: Neighbor validation is thorough and efficient. Precomputed vertex maps, positional checks against opposite vertex, mutuality, and D-way intersection cover the invariants well.
Also applies to: 2198-2205, 2210-2260
scripts/benchmark_utils.py (8)
35-45: Nice: centralizing dev-mode args. Consolidating benchmark flags into DEV_MODE_BENCH_ARGS reduces drift between generate/compare paths and improves maintainability.
125-139: Throughput guard and bounds look correct. `eps` avoids div-by-zero and low/high mapping aligns with CI conventions (high time → low throughput).
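Spelled out, the guard works like this (a minimal sketch with hypothetical names; the actual constant and bounds logic in `benchmark_utils.py` may differ):

```python
EPS = 1e-12  # assumed floor; keeps 1/x defined for near-zero measurements

def throughput_bounds(time_low_s: float, time_high_s: float) -> tuple[float, float]:
    """Map a [low, high] time interval to a [low, high] throughput interval.

    High time maps to low throughput, so the bounds swap, and the
    max(..., EPS) guard prevents division by zero for very fast benchmarks.
    """
    low = 1.0 / max(time_high_s, EPS)   # slowest run -> throughput lower bound
    high = 1.0 / max(time_low_s, EPS)   # fastest run -> throughput upper bound
    return low, high
```

The swap is the subtle part worth testing: feeding the interval in the wrong order would silently invert every regression comparison.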
217-249: Bench timeout plumbed end-to-end. Passing bench_timeout into run_cargo_command for both dev/full runs is solid; also avoiding cargo clean in dev mode keeps iteration fast.

306-344: Symmetric timeout handling in compare path. Mirrors baseline generation behavior; good consistency.
570-593: Unit normalization is robust. Handling ns/µs/μs/us/ms/s avoids locale/encoding pitfalls. The OK/regression thresholds read well.
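For reference, this kind of normalization boils down to a lookup like the following (a hypothetical sketch; the real parser may differ in detail). The key subtlety is that 'µ' (U+00B5 MICRO SIGN) and 'μ' (U+03BC GREEK SMALL LETTER MU) are distinct code points, so both must map to the same factor, with ASCII "us" as a fallback:

```python
# Unit normalization tolerant of both micro-sign encodings (illustrative).
_UNIT_TO_NS = {
    "ns": 1.0,
    "\u00b5s": 1_000.0,  # 'µs', MICRO SIGN
    "\u03bcs": 1_000.0,  # 'μs', GREEK SMALL LETTER MU
    "us": 1_000.0,       # ASCII fallback
    "ms": 1_000_000.0,
    "s": 1_000_000_000.0,
}

def to_nanoseconds(value: float, unit: str) -> float:
    """Convert a Criterion-style (value, unit) pair to nanoseconds."""
    try:
        return value * _UNIT_TO_NS[unit.strip()]
    except KeyError as exc:
        raise ValueError(f"unknown time unit: {unit!r}") from exc
```

Raising on an unknown unit (rather than silently assuming ns) is what makes regression comparisons trustworthy when Criterion's output format shifts.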
1279-1314: Commit extraction is defensive and CI-friendly. Validates SHA format and exports BASELINE_COMMIT; good job handling metadata.json fallback and GITHUB_ENV writes.
1493-1520: CLI bench-timeout is practical. Exposing --bench-timeout and sourcing default from BENCHMARK_TIMEOUT keeps CI configurable.

1702-1717: CLI wiring and exit codes look correct. Graceful ProjectRootNotFoundError handling and command dispatch are clean.
Adds new benchmarks for triangulation creation and circumsphere containment, providing more comprehensive performance analysis. Refactors existing benchmark code to reduce duplication and improve maintainability. Updates code coverage configuration and adds mypy cache to gitignore. Refs: N/A
There was a problem hiding this comment.
Actionable comments posted: 5
♻️ Duplicate comments (4)
benches/profiling_suite.rs (1)
143-189: Fail-fast grid generation: addressed prior concern. Replacing the silent random fallback with a panic prevents mislabeled benchmarks. Nice.
scripts/subprocess_utils.py (2)
107-134: run_cargo_command mirrors git hardening. Prevents shell/executable overrides and enforces text/UTF-8. Matches previous recommendations.
223-256: stdin variant hardened consistently. Blocks shell/executable overrides; keeps typing stable via text=True.
scripts/tests/test_benchmark_utils.py (1)
1166-1176: Good fix: patch the right subprocess symbol. Switching to patch("benchmark_utils.subprocess.run") correctly targets the symbol used by the code under test and avoids bleed-over.
Also applies to: 1178-1190, 1192-1204
🧹 Nitpick comments (25)
scripts/enhance_commits.py (2)
330-336: Early commit-line collection is correct; precompile the bullet regex. Minor perf/readability: compile the `r"^- \*\*"` pattern once and reuse.
Apply:

```diff
-        if (section_state["in_changes_section"] or section_state["in_fixed_issues"]) and re.match(r"^- \*\*", line):
+        if (section_state["in_changes_section"] or section_state["in_fixed_issues"]) and COMMIT_BULLET_RE.match(line):
             entry, next_index = _collect_commit_entry(lines, line_index)
             categorize_entries_list.append(entry)
             line_index = next_index
             continue
```
Add near imports:

```python
# top-level
COMMIT_BULLET_RE = re.compile(r"^- \*\*")
```
342-368: Simplify release-end handling; remove unreachable branch. After handling a new "## " header you immediately continue, making the later is_release_end branch dead code.

Apply:

```diff
         if is_release_end or (is_file_end and categorize_entries_list):
             # Process any pending entries only if we have them
             if categorize_entries_list:
                 process_and_output_categorized_entries(categorize_entries_list, output_lines)
                 categorize_entries_list.clear()
             # Reset section state
             section_state.update(
                 {
                     "in_changes_section": False,
                     "in_fixed_issues": False,
                     "in_merged_prs_section": False,
                 }
             )
             # Add the release header to output if it's a new release
             if re.match(r"^## ", line):
                 output_lines.append("")  # Add blank line before new release
                 output_lines.append(line)
                 line_index += 1
                 continue
-        if is_release_end:
-            # Not a release header but is release end, add normally
-            output_lines.append(line)
-            line_index += 1
-            continue
```

CHANGELOG.md (3)
30-31: Fix category label mismatch under "Fixed". Bullet reads "Changed:" inside the Fixed section; make it consistent.

```diff
-- **Changed: Improves benchmark utils with timeout and error handling** [`56f322d`](https://github.com/acgetchell/delaunay/commit/56f322d875080eb2dcaef753184d6a5898256251)
+- **Fix: Improve benchmark utils with timeout and error handling** [`56f322d`](https://github.com/acgetchell/delaunay/commit/56f322d875080eb2dcaef753184d6a5898256251)
```
14-16: Drop redundant "Added:" prefix inside the Added section. Keeps bullets concise and aligned with Keep a Changelog style.

```diff
-- **Added: Comprehensive profiling benchmarks and memory stress tests** [`8a5f176`](https://github.com/acgetchell/delaunay/commit/8a5f17698b10aea44e81aca32397270baf95b6f0)
+- **Comprehensive profiling benchmarks and memory stress tests** [`8a5f176`](https://github.com/acgetchell/delaunay/commit/8a5f17698b10aea44e81aca32397270baf95b6f0)
```
20-27: Remove "Changed:" prefixes inside the Changed section. Avoids tautology and improves readability.

```diff
-- **Changed: Refactors benchmark workflow using Python utils (internal)** [`b121cca`](https://github.com/acgetchell/delaunay/commit/b121cca8d102dd6b7f3e3096cb2edb430cf1b160)
+- **Refactors benchmark workflow using Python utils (internal)** [`b121cca`](https://github.com/acgetchell/delaunay/commit/b121cca8d102dd6b7f3e3096cb2edb430cf1b160)
-- **Changed: Improves benchmark workflow and memory profiling** [`3d86e5b`](https://github.com/acgetchell/delaunay/commit/3d86e5beac64a1160ea44a2a11e929fe3ad01aa9)
+- **Improves benchmark workflow and memory profiling** [`3d86e5b`](https://github.com/acgetchell/delaunay/commit/3d86e5beac64a1160ea44a2a11e929fe3ad01aa9)
-- **Changed: Enhance profiling benchmarks for performance analysis** [`af5c3d9`](https://github.com/acgetchell/delaunay/commit/af5c3d9febd24b12d229b7d80a760ae6413aefff)
+- **Enhance profiling benchmarks for performance analysis** [`af5c3d9`](https://github.com/acgetchell/delaunay/commit/af5c3d9febd24b12d229b7d80a760ae6413aefff)
-- **Changed: Enhances benchmark suite for detailed analysis** [`e4ac997`](https://github.com/acgetchell/delaunay/commit/e4ac997ec3fcb43e13a52eb0b513012ae4132700)
+- **Enhances benchmark suite for detailed analysis** [`e4ac997`](https://github.com/acgetchell/delaunay/commit/e4ac997ec3fcb43e13a52eb0b513012ae4132700)
```

benches/circumsphere_containment.rs (2)
99-118: Add a "near-boundary" edge case to stress numerical tolerance. Current "boundary_point = simplex[i]" is trivial; consider also testing a point epsilon-away from the sphere to exercise tolerance handling.

Example:

```rust
let eps = 1e-9;
let near_boundary = Point::new([eps, 0.0, /* ... fill per D ... */]);
bench_edge_case!(c, 3, "near_boundary", simplex_3d, near_boundary);
```
120-163: Deduplicate simplex construction with a generic standard-simplex helper. Removes repeated literal vectors for 2D–5D and eases future extensions.

Proposed helper (place above benches):

```rust
fn standard_simplex<const D: usize>() -> Vec<Point<f64, D>> {
    let mut pts = Vec::with_capacity(D + 1);
    pts.push(Point::new([0.0; D]));
    for i in 0..D {
        let mut p = [0.0; D];
        p[i] = 1.0;
        pts.push(Point::new(p));
    }
    pts
}
```

Apply within this block (illustrative diffs):

```diff
-    let simplex_2d = vec![Point::new([0.0, 0.0]), Point::new([1.0, 0.0]), Point::new([0.0, 1.0])];
+    let simplex_2d = standard_simplex::<2>();
     ...
-    let simplex_3d = vec![ ... ];
+    let simplex_3d = standard_simplex::<3>();
     ...
-    let simplex_4d = vec![ ... ];
+    let simplex_4d = standard_simplex::<4>();
     ...
-    let simplex_5d = vec![ ... ];
+    let simplex_5d = standard_simplex::<5>();
```

benches/profiling_suite.rs (3)
108-115: Parse PROFILING_DEV_MODE as bool-like instead of presence-only. Treat "0/false" as off to avoid accidental enablement from empty/placeholder values.

```diff
-fn get_profiling_counts() -> &'static [usize] {
-    if std::env::var("PROFILING_DEV_MODE").is_ok() {
+fn get_profiling_counts() -> &'static [usize] {
+    let dev = std::env::var("PROFILING_DEV_MODE").ok();
+    if matches!(dev.as_deref(), Some("1" | "true" | "TRUE" | "yes" | "on")) {
         PROFILING_COUNTS_DEVELOPMENT
     } else {
         PROFILING_COUNTS_PRODUCTION
     }
 }
```
409-805: Factor repeated 2D–5D memory-profiling blocks into a generic helper. Reduces ~250 lines of repetition; makes it easier to adjust logic consistently.

Example helper sketch:

```rust
fn bench_memory_usage<const D: usize>(
    group: &mut criterion::BenchmarkGroup<'_, criterion::measurement::WallTime>,
    bench_id_prefix: &str,
    count: usize,
) {
    group.bench_with_input(BenchmarkId::new(bench_id_prefix, count), &count, |b, &count| {
        b.iter_custom(|iters| {
            let mut total = Duration::ZERO;
            let mut infos: SmallBuffer<AllocationInfo, BENCHMARK_ITERATION_BUFFER_SIZE> = SmallBuffer::new();
            let mut actual_counts: SmallBuffer<usize, BENCHMARK_ITERATION_BUFFER_SIZE> = SmallBuffer::new();
            for _ in 0..iters {
                let start = Instant::now();
                let info = measure(|| {
                    let points = generate_points_by_distribution::<D>(count, PointDistribution::Random, 42);
                    let vertices: Vec<_> = points.iter().map(|p| vertex!(*p)).collect();
                    actual_counts.push(points.len());
                    black_box(Tds::<f64, (), (), D>::new(&vertices).unwrap());
                });
                total += start.elapsed();
                infos.push(info);
            }
            if !infos.is_empty() {
                // same summary code...
            }
            total
        });
    });
}
```

Then call for D in {2, 3, 4, 5} with appropriate count guards.
872-878: Avoid redundant conversions in inner query loop. You already have Point<f64, 3>; no need to go to array and back.

```diff
-    let query_coords: [f64; 3] = (*query_point).into();
-    let query_point_obj = Point::new(query_coords);
+    let query_point_obj = *query_point;
```

scripts/tests/conftest.py (3)
16-43: Harden temp_chdir: validate that the target exists and accept PathLike

Check the path early and add a type hint so failures are clearer and usage is more flexible.
```diff
 @pytest.fixture
 def temp_chdir():
@@
-    @contextmanager
-    def _temp_chdir_context(path):
+    @contextmanager
+    def _temp_chdir_context(path: os.PathLike | str):
         """Context manager for temporarily changing working directory."""
-        original_cwd = Path.cwd()
-        os.chdir(path)
+        original_cwd = Path.cwd()
+        target = Path(path)
+        if not target.exists():
+            raise FileNotFoundError(target)
+        os.chdir(target)
         try:
             yield
         finally:
             os.chdir(original_cwd)
```
45-65: Make the git mock closer to subprocess.CompletedProcess

Set stdout to a real string; consumers can still call .strip(), and this better matches actual CompletedProcess behavior. Consider also setting .returncode and .args if needed.
```diff
 def _create_mock_result(output: str) -> Mock:
     """Create a mock CompletedProcess object for git commands."""
-    mock_result = Mock()
-    mock_result.stdout.strip.return_value = output
+    mock_result = Mock()
+    mock_result.stdout = output  # mimic CompletedProcess.stdout (str)
+    mock_result.returncode = 0
+    mock_result.args = ["git"]
     return mock_result
```
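A stricter variant of the same idea (an illustrative sketch, not part of the review diff) is to spec the mock against a `subprocess.CompletedProcess` instance, so attribute typos in tests fail fast instead of silently returning child mocks:

```python
import subprocess
from unittest.mock import Mock


def create_mock_result(output: str) -> Mock:
    """Build a mock constrained to real CompletedProcess attributes."""
    # Spec against an instance so its instance attributes (stdout, stderr,
    # args, returncode) become the allowed attribute set of the mock.
    template = subprocess.CompletedProcess(args=["git"], returncode=0, stdout="", stderr="")
    mock_result = Mock(spec=template)
    mock_result.stdout = output
    mock_result.returncode = 0
    mock_result.args = ["git"]
    return mock_result


result = create_mock_result("abc123\n")
print(result.stdout.strip())  # callers can keep calling .strip() on the real str
```

With the spec in place, accessing a misspelled attribute such as `result.stdoutt` raises `AttributeError`, which catches test bugs that a bare `Mock()` would hide.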
1-1: Remove the shebang from a test helper module

A shebang is unnecessary for an importable test utility file and can be misleading.
```diff
-#!/usr/bin/env python3
```

benches/ci_performance_suite.rs (1)
51-77: Prefer expect with context over unwrap; consider input determinism.
- Using expect gives clearer failure context in CI.
- If generate_random_points is non-deterministic, consider seeding to reduce variance in regression detection. If a seeded API isn’t available, please confirm CI’s tolerance thresholds account for randomness.
```diff
- black_box(Tds::<f64, (), (), $dim>::new(&vertices).unwrap());
+ black_box(
+     Tds::<f64, (), (), $dim>::new(&vertices)
+         .expect(concat!("Tds::new failed for ", stringify!($dim), "D"))
+ );
```

scripts/tests/test_enhance_commits.py (3)
47-66: Parametrize to reduce repetition in pattern tests

Turn per-string loops into parametrized tests for clearer failure reporting and less boilerplate.
```diff
-    def test_added_patterns(self):
-        """Test patterns for 'Added' category."""
-        patterns = _get_regex_patterns()["added"]
-
-        test_cases = [
-            "add new feature",
-            "adds support for",
-            "added functionality",
-            "create new module",
-            "enable advanced mode",
-            "implement algorithm",
-            "introduce new api",
-            "new feature for users",
-            "feat: add benchmarking",
-            "feat: implement caching",
-        ]
-
-        for text in test_cases:
-            assert any(_match_pattern(pattern, text) for pattern in patterns), f"'{text}' should match 'added' patterns"
+    @pytest.mark.parametrize(
+        "text",
+        [
+            "add new feature",
+            "adds support for",
+            "added functionality",
+            "create new module",
+            "enable advanced mode",
+            "implement algorithm",
+            "introduce new api",
+            "new feature for users",
+            "feat: add benchmarking",
+            "feat: implement caching",
+        ],
+    )
+    def test_added_patterns(self, text):
+        """Test patterns for 'Added' category."""
+        patterns = _get_regex_patterns()["added"]
+        assert any(_match_pattern(pattern, text) for pattern in patterns), f"'{text}' should match 'added' patterns"
```
507-535: Match file encodings with main()'s UTF-8 reads

Write and read using UTF-8 to avoid platform-default encoding surprises.
```diff
-        input_file.write_text(input_content)
+        input_file.write_text(input_content, encoding="utf-8")
@@
-        output_content = output_file.read_text()
+        output_content = output_file.read_text(encoding="utf-8")
```
590-606: Large-N test: keep, but monitor runtime

100 entries is fine; if CI time grows, consider trimming to 50 without losing coverage.
scripts/tests/test_benchmark_utils.py (1)
1989-2007: Make the write-failure test deterministic across OSes

chmod(0o444) on directories can be flaky on some platforms/filesystems. Prefer forcing an IOError via a targeted patch of Path.open for the specific file under test.
```diff
-        readonly_dir = Path(temp_dir) / "readonly"
-        readonly_dir.mkdir()
-        readonly_dir.chmod(0o444)  # Read-only
-        output_file = readonly_dir / "summary.md"
-
-        success = generator.generate_summary(output_path=output_file)
+        output_file = Path(temp_dir) / "readonly" / "summary.md"
+        with patch.object(Path, "open", side_effect=OSError("permission denied")):
+            success = generator.generate_summary(output_path=output_file)
```

benches/memory_scaling.rs (2)
153-173: Macro design looks solid; consider a minor style tweak

Returning from inside cfg blocks is fine, but you can reduce explicit returns and let the final expression be the value, which slightly simplifies the body.
294-316: Don't swallow I/O errors silently

write_memory_records_to_csv ignores write errors. Propagate them, or at least log failures, to avoid losing measurements silently.
```diff
 if let Ok(mut file) = File::create(&csv_path) {
     let _ = MemoryRecord::write_csv_header(&mut file);
@@
-    let _ = record.write_csv_row(&mut file);
+    if let Err(e) = record.write_csv_row(&mut file) {
+        eprintln!("failed writing CSV row: {e}");
+    }
     println!("Memory scaling results written to: {}", csv_path.display());
+} else {
+    eprintln!("failed to create {}", csv_path.display());
 }
```

benches/triangulation_creation.rs (2)
14-29: Nice DRY macro; add a tiny prealloc for vertices

Preallocating the vertices vector avoids a small reallocation.
```diff
- let vertices: Vec<Vertex<f64, (), $dim>> = points.iter().map(|p| vertex!(*p)).collect();
+ let mut vertices: Vec<Vertex<f64, (), $dim>> = Vec::with_capacity(points.len());
+ vertices.extend(points.iter().map(|p| vertex!(*p)));
```
23-27: Optional: annotate throughput

Adding Throughput::Elements(points.len() as u64) to the bench group helps Criterion report rates.
docs/code_organization.md (1)
158-166: Clarify CLI names for summaries

You now have two CLIs:
- uv run benchmark-utils generate-summary
- uv run performance-summary-utils generate
Consider listing both to avoid confusion, and note that benchmark-utils writes benches/PERFORMANCE_RESULTS.md while the dedicated CLI can be used standalone.
scripts/benchmark_utils.py (2)
1007-1055: Duplicate PerformanceSummaryGenerator across two modules

You now maintain two divergent implementations (here and in performance_summary_utils.py). These will drift. Consider re-exporting the dedicated version from benchmark_utils, or moving the common logic into a shared module.
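The re-export option is cheap to do. The sketch below uses stand-in names to show the pattern (the real fix would be a one-line `from performance_summary_utils import PerformanceSummaryGenerator` at the top of benchmark_utils.py, with the duplicated class body deleted):

```python
# Sketch of the re-export pattern; module layout here is a stand-in
# for scripts/performance_summary_utils.py and scripts/benchmark_utils.py.

# --- canonical module would define the single implementation ---
class PerformanceSummaryGenerator:
    """Stand-in for the canonical class in performance_summary_utils.py."""

    def generate_summary(self) -> str:
        return "summary"


# --- legacy module would contain only a re-export, e.g.:
#         from performance_summary_utils import PerformanceSummaryGenerator
# Existing `from benchmark_utils import PerformanceSummaryGenerator`
# imports keep working, but there is now exactly one implementation.
LegacyAlias = PerformanceSummaryGenerator

assert LegacyAlias is PerformanceSummaryGenerator
print(LegacyAlias().generate_summary())
```

The key property is identity: both import paths resolve to the same class object, so bug fixes land in one place.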
1391-1435: Git diff range checks: minor robustness

You already validate the SHA. Also consider passing cwd=find_project_root() to run_git_command for consistency in monorepo scenarios.
```diff
-    run_git_command(["cat-file", "-e", commit_ref], timeout=60)
+    root = find_project_root()
+    run_git_command(["cat-file", "-e", commit_ref], cwd=root, timeout=60)
@@
-    result = run_git_command(["diff", "--name-only", diff_range], timeout=60)
+    result = run_git_command(["diff", "--name-only", diff_range], cwd=root, timeout=60)
```
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (24)
- .github/workflows/codecov.yml (1 hunk)
- .gitignore (1 hunk)
- CHANGELOG.md (1 hunk)
- Cargo.toml (3 hunks)
- benches/README.md (2 hunks)
- benches/ci_performance_suite.rs (1 hunk)
- benches/circumsphere_containment.rs (4 hunks)
- benches/memory_scaling.rs (2 hunks)
- benches/profiling_suite.rs (1 hunk)
- benches/triangulation_creation.rs (1 hunk)
- cspell.json (8 hunks)
- docs/code_organization.md (7 hunks)
- scripts/benchmark_utils.py (13 hunks)
- scripts/enhance_commits.py (4 hunks)
- scripts/performance_summary_utils.py (1 hunk)
- scripts/subprocess_utils.py (4 hunks)
- scripts/tests/conftest.py (1 hunk)
- scripts/tests/test_benchmark_utils.py (4 hunks)
- scripts/tests/test_enhance_commits.py (1 hunk)
- scripts/tests/test_performance_summary_utils.py (1 hunk)
- scripts/tests/test_subprocess_utils.py (3 hunks)
- src/core/boundary.rs (1 hunk)
- src/core/traits/insertion_algorithm.rs (3 hunks)
- src/core/triangulation_data_structure.rs (48 hunks)
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (7)
- Cargo.toml
- scripts/tests/test_performance_summary_utils.py
- src/core/boundary.rs
- src/core/triangulation_data_structure.rs
- benches/README.md
- cspell.json
- src/core/traits/insertion_algorithm.rs
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
- scripts/benchmark_utils.py
- scripts/performance_summary_utils.py
- scripts/enhance_commits.py
- scripts/subprocess_utils.py
🧠 Learnings (4)
📚 Learning: 2025-09-04T20:03:49.859Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.859Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
- benches/circumsphere_containment.rs
- docs/code_organization.md
- benches/triangulation_creation.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-28T03:54:34.371Z
Learnt from: acgetchell
PR: acgetchell/delaunay#54
File: scripts/generate_changelog.sh:417-438
Timestamp: 2025-08-28T03:54:34.371Z
Learning: The generate_changelog.sh script uses a deliberate design pattern where the auto-changelog template uses simple generic headers (### Changes, ### Fixed Issues) and the enhancer function transforms these into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). This separation keeps the template simple while ensuring standardized output format.
Applied to files:
scripts/enhance_commits.py
🧬 Code graph analysis (11)
benches/ci_performance_suite.rs (2)
- src/core/triangulation_data_structure.rs (1): new (871-903)
- benches/profiling_suite.rs (4): points (226), points (264), points (304), points (343)
scripts/benchmark_utils.py (3)
- scripts/subprocess_utils.py (5): ProjectRootNotFoundError (259-260), find_project_root (263-279), get_git_commit_hash (162-174), run_cargo_command (107-133), run_git_command (78-104)
- scripts/hardware_utils.py (2): HardwareComparator (409-575), HardwareInfo (33-406)
- scripts/performance_summary_utils.py (10): _get_current_version (233-259), generate_summary (129-163), _run_circumsphere_benchmarks (282-304), _generate_markdown_content (165-231), _get_circumsphere_performance_results (453-554), _parse_baseline_results (556-594), _parse_comparison_results (596-634), _get_dynamic_analysis_sections (636-727), _get_update_instructions (851-877), _get_version_date (261-280)
scripts/tests/test_enhance_commits.py (2)
- scripts/enhance_commits.py (9): _add_section_with_entries (208-219), _categorize_entry (189-205), _collect_commit_entry (285-298), _extract_title_text (169-186), _get_regex_patterns (21-166), _process_changelog_lines (301-383), _process_section_header (266-282), main (386-406), process_and_output_categorized_entries (222-263)
- scripts/tests/conftest.py (1): temp_chdir (17-42)
scripts/tests/test_benchmark_utils.py (3)
- scripts/benchmark_utils.py (41): BenchmarkRegressionHelper (1279-1565), CriterionParser (106-212), PerformanceComparator (305-612), PerformanceSummaryGenerator (754-1276), WorkflowHelper (615-751), parse_estimates_json (110-158), determine_tag_name (619-644), create_metadata (647-691), display_baseline_summary (694-727), sanitize_artifact_name (730-751), prepare_baseline (1283-1322), set_no_baseline_status (1325-1335), extract_baseline_commit (1338-1388), determine_benchmark_skip (1391-1434), display_skip_message (1437-1450), display_no_baseline_message (1453-1463), compare_with_baseline (313-375), run_regression_test (1466-1499), display_results (1502-1514), generate_summary (764-798), generate_summary (1517-1565), BaselineGenerator (215-302), generate_baseline (223-275), _get_current_version (1007-1033), _get_version_date (1035-1055), _parse_baseline_results (852-896), _parse_comparison_results (898-946), _extract_benchmark_data (948-976), _parse_benchmark_header (978-988), BenchmarkData (65-103), _parse_time_data (990-1005), _parse_throughput_data (1057-1072), with_timing (79-85), with_throughput (87-93), _format_benchmark_tables (1074-1126), _format_time_value (1128-1148), _format_throughput_value (1150-1168), _get_circumsphere_performance_results (1170-1184), _get_update_instructions (1196-1219), _run_circumsphere_benchmarks (1221-1238), _get_static_content (1240-1276)
- scripts/tests/conftest.py (1): temp_chdir (17-42)
- scripts/subprocess_utils.py (2): find_project_root (263-279), ProjectRootNotFoundError (259-260)
scripts/tests/test_subprocess_utils.py (1)
- scripts/subprocess_utils.py (4): run_safe_command (136-158), run_git_command (78-104), run_cargo_command (107-133), run_git_command_with_input (223-255)
scripts/performance_summary_utils.py (3)
- scripts/hardware_utils.py (2): HardwareInfo (33-406), get_hardware_info (335-359)
- scripts/subprocess_utils.py (5): ProjectRootNotFoundError (259-260), find_project_root (263-279), get_git_commit_hash (162-174), run_cargo_command (107-133), run_git_command (78-104)
- scripts/benchmark_utils.py (10): PerformanceSummaryGenerator (754-1276), _get_current_version (1007-1033), _get_version_date (1035-1055), generate_summary (764-798), generate_summary (1517-1565), _run_circumsphere_benchmarks (1221-1238), _generate_markdown_content (800-850), _get_circumsphere_performance_results (1170-1184), _parse_baseline_results (852-896), _parse_comparison_results (898-946)
benches/circumsphere_containment.rs (1)
- src/geometry/predicates.rs (3): insphere (346-416), insphere_distance (216-245), insphere_lifted (500-616)
benches/triangulation_creation.rs (2)
- src/geometry/util.rs (16): generate_random_points (1226-1247, 3308, 3323, 3338, 3353, 3370, 3381, 3385, 3389, 3393, 3400, 3403, 3406, 3409, 3520, 3521)
- src/core/triangulation_data_structure.rs (1): new (871-903)
benches/memory_scaling.rs (2)
- benches/triangulation_vs_hull_memory.rs (4): new_placeholder (88-109), measure_with_result (271-280), measure_with_result (283-288), new (43-84)
- src/core/triangulation_data_structure.rs (4): cells (485-487), new (871-903), number_of_vertices (595-597), number_of_cells (753-755)
scripts/subprocess_utils.py (1)
- scripts/changelog_utils.py (1): run_git_command (622-649)
benches/profiling_suite.rs (7)
- src/geometry/util.rs (46): core (1387), generate_grid_points (1365-1431, 3655, 3688, 3704, 3720, 3729, 3743, 3753, 3773, 3794, 3807, 4052), generate_poisson_points (1475-1565, 3843, 3876, 3909, 3910, 3924, 3931, 3942, 3955, 3961, 3970, 3993, 4005), generate_random_points_seeded (1289-1316, 3417, 3418, 3437, 3438, 3456, 3457, 3475, 3476, 3495, 3496, 3500, 3501, 3505, 3506, 3510), safe_usize_to_scalar (155, 477-513, 1097, 1409)
- src/core/triangulation_data_structure.rs (2): default (207-209), new (871-903)
- src/geometry/algorithms/convex_hull.rs (2): default (1117-1123), from_triangulation (239-271)
- benches/memory_scaling.rs (1): new (37-68)
- src/core/cell.rs (7): vertices (475-477), cell (2217, 2346, 2348, 2352, 2551, 2564)
- src/core/boundary.rs (2): tds (465-469), boundary_facets (82-103)
- src/geometry/predicates.rs (1): insphere_lifted (500-616)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). Skipped checks (5):
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: build (windows-latest)
- GitHub Check: performance-regression
- GitHub Check: Code Coverage
- GitHub Check: Codacy Security Scan
🔇 Additional comments (23)
scripts/enhance_commits.py (2)
171-173: Robust title extraction guards look good

None/blank handling and the safe first-line fallback remove IndexErrors and noisy categories.
Also applies to: 179-187
378-382: EOF flush is correct

Prevents dropped entries when the file ends without a trailing header.
benches/circumsphere_containment.rs (4)
12-12: Docs update to 5D looks good

Accurately reflects the expanded coverage.
84-97: Macroized per-dimension benches: solid reduction of duplication

Names are clear and inputs are preallocated; unwraps are acceptable for benches.
165-216: Edge-case benches across dimensions look good

Clear labels and bounded "far" cases; unwraps are fine for failing fast in benches.
260-263: Good guard for zero valid tests

Prevents divide-by-zero in percentages and exits early.
benches/profiling_suite.rs (8)
81-87: One-time banner for missing allocation counter: good UX

Avoids noisy repeats while keeping runs informative.
117-124: Environment-driven measurement time is clean and predictable

Good defaulting and safe parsing.
196-237: 2D scaling bench setup is sound

Throughput uses the actual generated size; iter_batched avoids double work inside the measurement.
239-276: 3D scaling: sensible dev-mode skip for large counts

Prevents runaway times locally.
284-316: 4D scaling group looks good

Reasonable counts and a per-group measurement-time override.
317-355: 5D scaling group looks good

A tight count set for the very-high-complexity cases.
978-991: Criterion config: sensible defaults for a long-running suite

A lower sample_size and longer warmup are appropriate here.
361-376: MSRV supports div_ceil

Project MSRV is pinned to Rust 1.89.0, and u64::div_ceil was stabilized in Rust 1.73.0 (blog.rust-lang.org).

scripts/subprocess_utils.py (4)
43-76: Centralized hardened kwargs builder: well designed

Blocks shell/executable overrides and enforces text=True with UTF-8; consistent with the security goals.
136-159: run_safe_command: consistent hardening for arbitrary tools

Keeps the validated-executable-path invariant.
259-279: find_project_root utility is practical and safe

Clear error when Cargo.toml is not found; useful across tooling.
78-105: run_git_command hardening verified

No production calls pass shell= or executable= (overrides only appear in tests expecting rejection), so the security measures remain intact.
benches/ci_performance_suite.rs (1)
80-84: Macro-driven duplication removal looks good

Clear, DRY, and keeps all dimensions aligned.
benches/memory_scaling.rs (1)
70-90: Const placeholder ctor is fine under cfg

The const fn new_placeholder with cfg(not(feature = "count-allocations")) keeps call sites identical and compiles away cleanly in allocation-counting builds.
scripts/tests/test_subprocess_utils.py (2)
129-135: Text enforcement tests and the multi-kwargs case look good

These validate the CompletedProcess[str] contract and kwargs handling without over-constraining the implementation.
Also applies to: 150-155
257-278: Executable override protections are well covered

Comprehensive negative tests across git/cargo/safe/with_input harden the security posture.
docs/code_organization.md (1)
39-43: Doc addition for core::collections is helpful

The new collections.rs mention aligns with the refactor and improves discoverability.
.github/workflows/codecov.yml (Outdated)
```diff
-  --out xml \
-  --output-dir coverage/ \
-  -- --test-threads=1
+  cargo tarpaulin --config Cargo.toml
```
💡 Verification agent
🧩 Analysis chain
Config-driven tarpaulin is fine; ensure it still writes coverage/cobertura.xml.
Since the upload steps are hardcoded to coverage/cobertura.xml, verify [tool.tarpaulin] sets out=Xml and output-dir="coverage". Also confirm benches/examples exclusions moved into Cargo.toml/.codecov.yml are effective.
Run to verify config presence and targets:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Check [tool.tarpaulin] section and key settings"
rg -n '^\[tool\.tarpaulin\]' Cargo.toml
rg -nP '(?i)^\s*out\s*=\s*"xml"' Cargo.toml || echo "WARN: out not set to Xml"
rg -nP '^\s*output-dir\s*=\s*"(?:coverage|coverage/)"' Cargo.toml || echo "WARN: output-dir not set to coverage/"
rg -nP '^\s*exclude\s*=\s*\[.*\]' Cargo.toml | sed -n '1,3p'
rg -n 'benches/|examples/' Cargo.toml || echo "INFO: No benches/examples excludes detected in Cargo.toml"
echo "Check .codecov.yml ignores benches/** (if present)"
if [ -f .codecov.yml ]; then rg -n 'ignore:|benches/\*\*' .codecov.yml -C1; else echo "INFO: .codecov.yml not found"; fi
echo "Check workflow expects cobertura.xml at coverage/"
rg -n 'coverage/cobertura\.xml' .github/workflows/codecov.yml -C2
```

Length of output: 120
Set XML output and exclusions in the Tarpaulin config

- Under `[tool.tarpaulin]` in Cargo.toml, add `out = "Xml"` to emit a Cobertura-compatible XML report into `coverage/`.
- Ensure benches and examples are excluded (e.g. `exclude = ["benches/*", "examples/*"]`), either in Cargo.toml or via a `.codecov.yml` ignore section.
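A sketch of the suggested section follows. The key names are taken from the suggestion above and should be treated as assumptions; verify them against the cargo-tarpaulin configuration documentation (which, for example, commonly uses `exclude-files` for path exclusions) before committing.

```toml
# Hypothetical [tool.tarpaulin] section in Cargo.toml; confirm key names
# against the cargo-tarpaulin docs before relying on this.
[tool.tarpaulin]
out = "Xml"                            # emit a Cobertura-compatible cobertura.xml
output-dir = "coverage"                # matches the workflow's hardcoded coverage/ path
exclude = ["benches/*", "examples/*"]  # keep benches/examples out of coverage
```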
🤖 Prompt for AI Agents
In .github/workflows/codecov.yml around line 53, update the Tarpaulin
configuration by adding out = "Xml" under the [tool.tarpaulin] section in
Cargo.toml so Tarpaulin emits a Cobertura-compatible XML report (place the
produced file into a coverage/ directory or configure Tarpaulin’s output path
accordingly), and ensure benches and examples are excluded by adding exclude =
["benches/*","examples/*"] under [tool.tarpaulin] in Cargo.toml or by adding
those patterns to an .codecov.yml ignore section so those targets are not
included in coverage.
scripts/performance_summary_utils.py (Outdated)
```python
def __init__(self, project_root: Path):
    """Initialize with project root directory."""
    self.project_root = project_root
    self.baseline_file = project_root / "benches" / "baseline_results.txt"
    self.comparison_file = project_root / "benches" / "compare_results.txt"

    # Path for storing circumsphere benchmark results
    self.circumsphere_results_dir = project_root / "target" / "criterion"
```
🛠️ Refactor suggestion
Baseline path divergence: prefer baseline-artifact with fallback.
benchmark_utils writes baselines to baseline-artifact/baseline_results.txt. Defaulting here to benches/… may miss CI artifacts. Prefer baseline-artifact first, then fallback to benches.
```diff
-        self.baseline_file = project_root / "benches" / "baseline_results.txt"
+        # Prefer CI artifact location; fall back to benches/ for local runs
+        self.baseline_file = project_root / "baseline-artifact" / "baseline_results.txt"
+        self._baseline_fallback = project_root / "benches" / "baseline_results.txt"
```
+ self._baseline_fallback = project_root / "benches" / "baseline_results.txt"And when reading:
```diff
-        if self.baseline_file.exists():
+        if self.baseline_file.exists() or getattr(self, "_baseline_fallback", Path()).exists():
+            # Use fallback if primary is missing
+            if not self.baseline_file.exists():
+                self.baseline_file = self._baseline_fallback
             lines.extend(self._parse_baseline_results())
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```python
def __init__(self, project_root: Path):
    """Initialize with project root directory."""
    self.project_root = project_root
    self.baseline_file = project_root / "benches" / "baseline_results.txt"
    self.comparison_file = project_root / "benches" / "compare_results.txt"
    # Path for storing circumsphere benchmark results
    self.circumsphere_results_dir = project_root / "target" / "criterion"
```

After (suggested):

```python
def __init__(self, project_root: Path):
    """Initialize with project root directory."""
    self.project_root = project_root
    # Prefer CI artifact location; fall back to benches/ for local runs
    self.baseline_file = project_root / "baseline-artifact" / "baseline_results.txt"
    self._baseline_fallback = project_root / "benches" / "baseline_results.txt"
    self.comparison_file = project_root / "benches" / "compare_results.txt"
    # Path for storing circumsphere benchmark results
    self.circumsphere_results_dir = project_root / "target" / "criterion"

# in load_summary():
if self.baseline_file.exists() or getattr(self, "_baseline_fallback", Path()).exists():
    # Use fallback if primary is missing
    if not self.baseline_file.exists():
        self.baseline_file = self._baseline_fallback
    lines.extend(self._parse_baseline_results())
```
🤖 Prompt for AI Agents
In scripts/performance_summary_utils.py around lines 116 to 124, the constructor
currently defaults baseline paths to benches/... which can miss CI artifacts;
change the baseline_file to check for project_root / "baseline-artifact" /
"baseline_results.txt" first and use that if it exists, otherwise fall back to
project_root / "benches" / "baseline_results.txt"; similarly change
comparison_file to prefer project_root / "baseline-artifact" /
"compare_results.txt" with a fallback to benches; also update any reading code
that opens these files to rely on the chosen path (no separate hardcoded reads)
so the file selection logic is centralized in __init__.
scripts/performance_summary_utils.py (Outdated)
```python
try:
    with self.baseline_file.open("r", encoding="utf-8") as f:
        content = f.read()

    # Extract metadata from baseline
    metadata_lines = []
    for line in content.split("\n")[:10]:  # Check first 10 lines for metadata
        if line.startswith(("Generated at:", "Git commit:", "Hardware:")):
            metadata_lines.append(line)

    if metadata_lines:
        lines.extend(
            [
                "### Current Baseline Information",
                "",
            ]
        )
        for meta_line in metadata_lines:
            lines.append(f"- **{meta_line}**")
        lines.append("")

except Exception as e:
    lines.extend(
        [
            "### Baseline Results",
            "",
            f"*Error parsing baseline results: {e}*",
            "",
        ]
    )
```
🛠️ Refactor suggestion
Parse “Hardware Information” blocks too (not only “Hardware: …”).
Baselines produced by benchmark_utils embed a multi-line “Hardware Information:” section. Extend detection so this summary reliably shows hardware on real artifacts.
```diff
-            for line in content.split("\n")[:10]:  # Check first 10 lines for metadata
-                if line.startswith(("Generated at:", "Git commit:", "Hardware:")):
+            first_lines = content.split("\n")[:20]
+            for line in first_lines:
+                if line.startswith(("Generated at:", "Date:", "Git commit:", "Hardware:")):
                     metadata_lines.append(line)
+            if not any(l.startswith("Hardware:") for l in metadata_lines):
+                if "Hardware Information:" in content:
+                    # Emit a concise single-line summary from the block's first two fields
+                    for i, line in enumerate(first_lines):
+                        if line.startswith("Hardware Information:"):
+                            os_line = first_lines[i + 1].strip()
+                            cpu_line = first_lines[i + 2].strip()
+                            metadata_lines.append(f"Hardware: {cpu_line.removeprefix('CPU: ').strip()}")
+                            break
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```python
try:
    with self.baseline_file.open("r", encoding="utf-8") as f:
        content = f.read()
    # Extract metadata from baseline
    metadata_lines = []
    for line in content.split("\n")[:10]:  # Check first 10 lines for metadata
        if line.startswith(("Generated at:", "Git commit:", "Hardware:")):
            metadata_lines.append(line)
    if metadata_lines:
        lines.extend(["### Current Baseline Information", ""])
        for meta_line in metadata_lines:
            lines.append(f"- **{meta_line}**")
        lines.append("")
except Exception as e:
    lines.extend(["### Baseline Results", "", f"*Error parsing baseline results: {e}*", ""])
```

After (suggested):

```python
try:
    with self.baseline_file.open("r", encoding="utf-8") as f:
        content = f.read()
    # Extract metadata from baseline
    metadata_lines = []
    first_lines = content.split("\n")[:20]
    for line in first_lines:
        if line.startswith(("Generated at:", "Date:", "Git commit:", "Hardware:")):
            metadata_lines.append(line)
    if not any(l.startswith("Hardware:") for l in metadata_lines):
        if "Hardware Information:" in content:
            # Emit a concise single-line summary from the block's first two fields
            for i, line in enumerate(first_lines):
                if line.startswith("Hardware Information:"):
                    os_line = first_lines[i + 1].strip()
                    cpu_line = first_lines[i + 2].strip()
                    metadata_lines.append(f"Hardware: {cpu_line.removeprefix('CPU: ').strip()}")
                    break
    if metadata_lines:
        lines.extend(["### Current Baseline Information", ""])
        for meta_line in metadata_lines:
            lines.append(f"- **{meta_line}**")
        lines.append("")
except Exception as e:
    lines.extend(["### Baseline Results", "", f"*Error parsing baseline results: {e}*", ""])
```
🤖 Prompt for AI Agents
In scripts/performance_summary_utils.py around lines 563 to 592, the current
metadata extraction only detects single-line keys like "Hardware: ..." and
misses the multi-line "Hardware Information:" block; update the parsing to also
detect a line that starts with "Hardware Information:" and when found, include
that line and all following lines that belong to the block (e.g., until a blank
line or until a line that starts with another metadata key such as "Generated
at:" or "Git commit:") into metadata_lines so the full multi-line hardware
summary is captured and later rendered in the summary.
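The multi-line capture the prompt describes can be sketched as follows; this is a minimal illustration with assumed key names and a hypothetical helper, not the repository's actual code:

```python
# Assumed metadata keys; the real parser may recognize more.
METADATA_KEYS = ("Generated at:", "Date:", "Git commit:", "Hardware:")

def extract_metadata(lines):
    """Collect single-line keys, plus the whole 'Hardware Information:' block."""
    metadata = []
    i = 0
    while i < len(lines):
        line = lines[i].strip()
        if line.startswith(METADATA_KEYS):
            metadata.append(line)
        elif line.startswith("Hardware Information:"):
            # Capture the block until a blank line or another metadata key.
            metadata.append(line)
            i += 1
            while i < len(lines):
                nxt = lines[i].strip()
                if not nxt or nxt.startswith(METADATA_KEYS):
                    break
                metadata.append(nxt)
                i += 1
            continue
        i += 1
    return metadata

sample = [
    "Generated at: 2025-09-10",
    "Hardware Information:",
    "  OS: Linux",
    "  CPU: Example CPU",
    "",
    "Time: [1.0, 2.0] ms",
]
print(extract_metadata(sample))
```

The inner loop is what the single-key version misses: every indented field of the hardware block survives into `metadata_lines` instead of being dropped.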
```python
from enhance_commits import (
    _add_section_with_entries,
    _categorize_entry,
    _collect_commit_entry,
    _extract_title_text,
    _get_regex_patterns,
    _process_changelog_lines,
    _process_section_header,
    main,
    process_and_output_categorized_entries,
)
```
💡 Verification agent
🧩 Analysis chain
Ensure enhance_commits import resolves in CI.
The tests import enhance_commits as a top-level module. Verify PYTHONPATH includes the scripts/ directory or switch to a package-relative import (e.g., from scripts.enhance_commits import ...).
🏁 Script executed:
```bash
#!/bin/bash
# Verify module location and import path expectations
fd -a '^enhance_commits\.py$' -HI
rg -n --glob 'scripts/tests/**' 'sys\.path|PYTHONPATH'
python - <<'PY'
import sys, pprint
pprint.pp(sys.path[:5])
try:
    import enhance_commits
    print("Imported enhance_commits OK from:", enhance_commits.__file__)
except Exception as e:
    print("Import failed:", e)
PY
```

Length of output: 421
Ensure enhance_commits import resolves in CI
Add a path insertion or switch to package-relative imports so enhance_commits can be found:
- In scripts/tests/test_enhance_commits.py, prepend before importing `enhance_commits`, mirroring other tests:

  ```python
  import sys
  from pathlib import Path

  sys.path.insert(0, str(Path(__file__).parent.parent))
  ```

- Or change the imports to `from scripts.enhance_commits import (...)`.
🤖 Prompt for AI Agents
In scripts/tests/test_enhance_commits.py around lines 15 to 25, the test imports
enhance_commits directly which can fail in CI; either (A) insert the project
root into sys.path before importing by adding an import sys and Path and
sys.path.insert(0, str(Path(__file__).parent.parent)) immediately above the
import block, or (B) convert the imports to package-relative form (from
scripts.enhance_commits import ...) so the module resolves without modifying
sys.path; pick one approach and apply it consistently with other tests.
Updates benchmark tooling for more consistent and reliable performance measurements. Switches to tarpaulin.toml for configuration, excludes examples from coverage, and seeds random number generators in benchmarks to reduce variance and improve regression detection.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
benches/circumsphere_containment.rs (1)
64-103: Random 3D simplex can be degenerate → unwrap panics; also non-deterministic RNG

Unseeded RNG + potential degeneracy can make CI flaky. Seed the RNG and ensure the simplex is non-degenerate before benchmarking.
Apply these diffs:
- Use a seeded RNG:
```diff
-use rand::Rng;
+use rand::{Rng, SeedableRng, rngs::StdRng};
```

```diff
-let mut rng = rand::rng();
+let mut rng = StdRng::seed_from_u64(2025_09_10);
```
- Regenerate until the simplex is non-degenerate (with a cap):
```diff
-fn generate_random_simplex_3d(rng: &mut impl Rng) -> Vec<Point<f64, 3>> {
-    (0..4)
-        .map(|_| {
-            let x = rng.random::<f64>().mul_add(20.0, -10.0); // Range -10.0..10.0
-            let y = rng.random::<f64>().mul_add(20.0, -10.0);
-            let z = rng.random::<f64>().mul_add(20.0, -10.0);
-            Point::new([x, y, z])
-        })
-        .collect()
-}
+fn generate_random_simplex_3d(rng: &mut impl Rng) -> Vec<Point<f64, 3>> {
+    for _ in 0..10_000 {
+        let pts: Vec<Point<f64, 3>> = (0..4)
+            .map(|_| {
+                let x = rng.random::<f64>().mul_add(20.0, -10.0); // -10..10
+                let y = rng.random::<f64>().mul_add(20.0, -10.0);
+                let z = rng.random::<f64>().mul_add(20.0, -10.0);
+                Point::new([x, y, z])
+            })
+            .collect();
+        if let Ok(o) = simplex_orientation(&pts) {
+            if !matches!(o, Orientation::DEGENERATE) {
+                return pts;
+            }
+        }
+    }
+    // Fallback: use a standard simplex to avoid panics if RNG repeatedly degenerates
+    standard_simplex::<3>()
+}
```
1191-1226: Make Criterion results discovery robust across layout variations.

Add a recursive fallback scan so we still parse results if the directory layout changes.
Apply this diff:
```diff
     if estimates_file:
         benchmark_data = CriterionParser.parse_estimates_json(estimates_file, point_count, f"{dim}D")
         if benchmark_data:
             results.append(benchmark_data)
+    # Fallback: recursively discover estimates.json if nothing was found above
+    if not results:
+        seen: set[tuple[int, str]] = set()
+        for estimates_file in criterion_dir.rglob("estimates.json"):
+            parent_name = estimates_file.parent.name
+            if parent_name not in {"base", "new"}:
+                continue
+            # Find nearest numeric points dir and nearest "<Nd>" dir in ancestors
+            points_dir = next((p for p in estimates_file.parents if p.name.isdigit()), None)
+            dim_dir = next((p for p in estimates_file.parents if re.search(r"\d+d$", p.name)), None)
+            if not points_dir or not dim_dir:
+                continue
+            dim_match = re.search(r"(\d+)d$", dim_dir.name)
+            if not dim_match:
+                continue
+            points = int(points_dir.name)
+            dimension = f"{dim_match.group(1)}D"
+            key = (points, dimension)
+            # Prefer "new" over "base" when duplicates exist
+            if key in seen and parent_name == "base":
+                continue
+            bd = CriterionParser.parse_estimates_json(estimates_file, points, dimension)
+            if bd:
+                seen.add(key)
+                results.append(bd)
```
♻️ Duplicate comments (4)
benches/profiling_suite.rs (2)
176-184: Good: fail-fast on grid generation errors (no silent fallbacks)

This addresses the earlier concern about misleading benchmarks when grid gen fails. Thanks for making it explicit.
586-607: Good: cap precomputed simplices to bound memory

The sample limit avoids runaway allocations in large meshes and aligns with earlier feedback.
scripts/tests/test_benchmark_utils.py (1)
1117-1127: Good fix: patch the right subprocess symbol

Switching to `@patch("benchmark_utils.subprocess.run")` isolates the mock to the module under test. Matches prior guidance.
scripts/benchmark_utils.py (1)
1898-1911: Good hardening on git commit validation and safe invocation.

Regex validation of the SHA and argumentized git calls address prior injection concerns.
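The kind of validation praised above can be demonstrated in isolation; the regex and helper name here are illustrative assumptions based on common git SHA formats, not the repository's actual implementation:

```python
import re

# Accept only full or abbreviated lowercase hex SHAs before handing the
# value to a git subprocess; anything else is rejected outright.
SHA_RE = re.compile(r"^[0-9a-f]{7,40}$")

def is_valid_sha(ref: str) -> bool:
    return SHA_RE.fullmatch(ref) is not None

print(is_valid_sha("deadbeefcafe"))    # True
print(is_valid_sha("abc; rm -rf /"))   # False
```

Combined with passing git arguments as a list (never via a shell string), this closes the injection vector the review refers to.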
🧹 Nitpick comments (35)
cspell.json (3)
161-161: Add lowercase variant for "Precomputes" to avoid misses.

If "precomputes" appears mid-sentence, cspell may flag it. Add the lowercase variant.
Apply:
"powi", - "Precomputes", + "Precomputes", + "precomputes", "Preparata",
31-31: Consider adding crate-style capitalizations ("AHash", "SeaHash") if used in docs.

Rust crate names are lowercase, but prose sometimes uses "AHash"/"SeaHash". Add if referenced that way; otherwise ignore.
Proposed patch:
"ahash", + "AHash", "Algorithmica", ... "seahash", + "SeaHash", "Seedable",Also applies to: 191-191
66-69: Nit: keep words alphabetically sorted to reduce churn.

Local block is "coderabbit", "conftest", "coderabbitai"; strict alpha would place "conftest" after "coderabbitai".
"CODEOWNERS", "coderabbit", - "conftest", "coderabbitai", + "conftest", "collinearity",.github/workflows/codecov.yml (2)
62-77: Make uploads conditional on report presence to avoid noisy failures.

Gate the Codecov and Codacy steps on the report file. This keeps logs clean when coverage generation legitimately fails upstream.
Example:
```yaml
- name: Upload coverage to Codecov
  if: ${{ success() && hashFiles('coverage/cobertura.xml') != '' }}
  uses: codecov/codecov-action@8f6ec407a34d5ec2b33d7d8f7f50279493b5efb4
  with:
    files: coverage/cobertura.xml
    flags: unittests
    name: codecov-umbrella
    fail_ci_if_error: false
- name: Upload coverage to Codacy
  if: ${{ success() && hashFiles('coverage/cobertura.xml') != '' }}
  uses: codacy/codacy-coverage-reporter-action@89d6c85cfafaec52c72b6c5e8b2878d33104c699
  with:
    project-token: ${{ secrets.CODACY_PROJECT_TOKEN }}
    coverage-reports: coverage/cobertura.xml
    language: rust
  continue-on-error: true
```
53-58: .github/workflows/codecov.yml: Add a post-tarpaulin sanity check for cobertura.xml
The tarpaulin config already outputs XML to `coverage/`, matching your Codecov/Codacy steps; insert after `- run: cargo tarpaulin --config tarpaulin.toml` something like:
```bash
ls -la coverage || true
test -f coverage/cobertura.xml || {
  echo "::error::coverage/cobertura.xml not found. Verify tarpaulin.toml has out = [\"Xml\"] and output-dir = \"coverage/\"."
  find coverage -maxdepth 2 -type f -print 2>/dev/null || true
  exit 2
}
```

to fail early if the report isn't produced.
docs/code_organization.md (3)
39-39: Call out public exposure of collections in the tree entry

Small clarity boost: note that `collections` is re-exported via the prelude so users know where to import from.

```diff
-│   │   ├── collections.rs       # Optimized collection types and utilities
+│   │   ├── collections.rs       # Optimized collection types and utilities (publicly re-exported via lib.rs prelude)
```
70-71: Cross-link the profiling suite to its workflow

Readers will find the CI entry point faster with a link hint.

```diff
-│   ├── profiling_suite.rs       # Comprehensive profiling suite for large-scale performance analysis
+│   ├── profiling_suite.rs       # Comprehensive profiling suite for large-scale performance analysis (see .github/workflows/profiling-benchmarks.yml)
```
159-167: Document uv prerequisite and add pytest fallback

Many users won't have `uv` installed; include a one-liner to install it or a plain pytest fallback.

```diff
-**Note**: Python tests in `scripts/tests/` are executed via pytest (or `uv run pytest`) and discovered through the test configuration in `pyproject.toml`. Run with:
+**Note**: Python tests in `scripts/tests/` are executed via pytest (you can use `uv run pytest` for reproducible envs) and are discovered via `pyproject.toml`. Run with:
@@
-# Run all Python utility tests
-uv run pytest
-# Or run specific test files
-uv run pytest scripts/tests/test_benchmark_utils.py
+# Pre-req for uv users (optional):
+# pipx install uv  # or: pip install uv
+
+# Run all Python utility tests (using uv)
+uv run pytest -q
+# Or run specific test files (using uv)
+uv run pytest scripts/tests/test_benchmark_utils.py
+
+# Without uv:
+pytest -q
+pytest scripts/tests/test_benchmark_utils.py
```

benches/ci_performance_suite.rs (1)
59-90: Macroized benches and iter_with_setup: solid; consider bounding sample sizes

The macro removes duplication and keeps setup out of timing. To keep CI times stable across machines, consider setting per-group sample sizes.
Apply per-dimension sample sizes (example):
```diff
     let mut group = c.benchmark_group(concat!("tds_new_", stringify!($dim), "d"));
+    if $dim >= 4 {
+        group.sample_size(15);
+    }
```
153-177: Prevent optimizer from eliding work in allocation-measured path

Wrap vertices and the constructed Tds in black_box inside the measured closure to avoid differential optimization between feature flags.
Apply:
```diff
-    let (tds, info) = measure_with_result(|| {
-        let vertices: Vec<_> = points.iter().map(|p| vertex!(*p)).collect();
-        Tds::<f64, (), (), $dim>::new(&vertices).unwrap()
-    });
+    let (tds, info) = measure_with_result(|| {
+        let vertices: Vec<_> = points.iter().map(|p| vertex!(*p)).collect();
+        let vertices = black_box(vertices);
+        let tds = Tds::<f64, (), (), $dim>::new(&vertices).unwrap();
+        black_box(tds)
+    });
```
51-52: Drop unused serde import (and bounds below) to slim compile deps

These imports aren't used by the bench and pull in an unnecessary dev-dependency.
```diff
-use serde::{Deserialize, Serialize};
```
113-115: Make PROFILING_DEV_MODE parsing case-insensitive and cover common values

Accept "True/Yes/On" in any case.
```diff
-    matches!(dev.as_deref(), Some("1" | "true" | "TRUE" | "yes" | "on"))
+    dev.as_deref()
+        .map(|s| s == "1"
+            || s.eq_ignore_ascii_case("true")
+            || s.eq_ignore_ascii_case("yes")
+            || s.eq_ignore_ascii_case("on"))
+        .unwrap_or(false)
```
55-59: Replace magic seeds/limits with named constants

Centralize magic numbers used across the suite (seeds, max queries).
```diff
 const QUERY_RESULTS_BUFFER_SIZE: usize = 1024; // For bounded query result collections (max 1000 in code)
+
+// Reusable seeds and caps
+const DEFAULT_SEED: u64 = 42;
+const QUERY_SEED: u64 = 123;
+const MAX_QUERY_RESULTS: usize = 1_000;
```

Follow-up: replace hardcoded 42/123/1000 usages with these constants throughout this file. Want me to push a patch touching all sites?
152-198: Prefer expect(...) with context over unwrap() in point generation

Keep the fail-fast behavior, but add context for the Random and Poisson branches too.
```diff
-    generate_random_points_seeded(count, (-100.0, 100.0), seed).unwrap()
+    generate_random_points_seeded(count, (-100.0, 100.0), seed)
+        .expect("random point generation failed")
@@
-    generate_poisson_points(count, (-100.0, 100.0), min_distance, seed).unwrap()
+    generate_poisson_points(count, (-100.0, 100.0), min_distance, seed)
+        .expect("poisson point generation failed")
```
381-384: Fix percentile to standard nearest-rank (ceil(p·n), then 0-based index)

The current formula biases high/low for small samples. Use ceil(0.95·n), then subtract 1.
```diff
-    let n = values.len();
-    let rank = (95 * (n.saturating_sub(1))).div_ceil(100); // nearest-rank, clamps at n-1
-    let index = rank.min(n - 1);
+    let n = values.len();
+    let rank = (95 * n).div_ceil(100); // 1-based nearest-rank
+    let index = rank.saturating_sub(1).min(n - 1); // 0-based index
```
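The corrected nearest-rank rule can be sanity-checked in a few lines of Python (the function name is illustrative):

```python
import math

def percentile_nearest_rank(values, p):
    """Nearest-rank percentile: rank = ceil(p/100 * n) (1-based), then index rank-1."""
    if not values:
        raise ValueError("empty input")
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

# For 10 samples the 95th percentile is the 10th value (ceil(9.5) = 10),
# neither over- nor under-shooting the index.
print(percentile_nearest_rank(list(range(1, 11)), 95))  # 10
```

This matches the suggested Rust: `(95 * n).div_ceil(100)` is the 1-based rank, and subtracting 1 gives the 0-based index.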
389-416: Guard bytes-per-point division for robustness

Not hit today (counts ≥ 1_000), but cheap to harden against 0.
```diff
-    println!(
-        "Bytes per point (peak): {:.1}",
-        info.bytes_max as f64 / actual_point_count as f64
-    );
+    if actual_point_count > 0 {
+        println!(
+            "Bytes per point (peak): {:.1}",
+            info.bytes_max as f64 / actual_point_count as f64
+        );
+    } else {
+        println!("Bytes per point (peak): n/a (0 points)");
+    }
```
424-425: Remove unused serde bounds from bench_memory_usage signature

The bench doesn't serialize; these bounds just slow builds.
```diff
-)
-where
-    [f64; D]: Copy + Default + for<'de> Deserialize<'de> + Serialize + Sized,
+)
```
628-635: Use the MAX_QUERY_RESULTS constant instead of literal 1000

Avoid magic numbers; keeps buffer and cap aligned.
```diff
-    if query_results.len() >= 1000 {
+    if query_results.len() >= MAX_QUERY_RESULTS {
         break;
     }
@@
-    if query_results.len() >= 1000 {
+    if query_results.len() >= MAX_QUERY_RESULTS {
         break;
     }
```
720-725: Allow env override for default measurement_time, too

Use bench_time here for consistency with per-group overrides.
```diff
 config = Criterion::default()
     .sample_size(10) // Fewer samples due to long-running nature
     .warm_up_time(Duration::from_secs(10))
-    .measurement_time(Duration::from_secs(60));
+    .measurement_time(bench_time(60));
```

scripts/tests/test_benchmark_models.py (1)
8-20: Import path sanity check

If pytest isn't launched with PYTHONPATH including scripts/, `from benchmark_models import …` will fail. Consider making imports robust for local runs.

Example (in scripts/tests/conftest.py) to prepend scripts/ to sys.path:

```diff
+# Ensure `scripts/` is on sys.path for test imports
+import sys
+from pathlib import Path
+_scripts = Path(__file__).resolve().parents[1]
+if str(_scripts) not in sys.path:
+    sys.path.insert(0, str(_scripts))
```

scripts/tests/test_benchmark_utils.py (3)
1288-1306: Avoid leaking env vars across tests
`generate_summary()` sets BENCHMARK_REGRESSION_DETECTED in os.environ. Since the key isn't part of the patch.dict values, it can leak after the context. Use clear=True to sandbox the env.

```diff
-        with patch.dict(os.environ, env_vars), temp_chdir(temp_dir):
+        with patch.dict(os.environ, env_vars, clear=True), temp_chdir(temp_dir):
             BenchmarkRegressionHelper.generate_summary()
```
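A small standalone demonstration of the sandboxing (variable names are illustrative; `patch.dict` restores `os.environ` on exit, and `clear=True` additionally hides pre-existing host variables from the code under test):

```python
import os
from unittest.mock import patch

os.environ.pop("BENCHMARK_REGRESSION_DETECTED", None)

with patch.dict(os.environ, {"ONLY_VAR": "1"}, clear=True):
    # Only ONLY_VAR is visible here; the host environment is hidden.
    assert set(os.environ) == {"ONLY_VAR"}
    # A key set by the code under test...
    os.environ["BENCHMARK_REGRESSION_DETECTED"] = "true"

# ...does not remain visible once the context exits and the original
# environment is restored.
assert "BENCHMARK_REGRESSION_DETECTED" not in os.environ
print("environment restored")
```

This is why the review asks for `clear=True` in tests that assert on environment side effects: each test sees exactly the variables it declares.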
1321-1328: Also sandbox env when exporting to GITHUB_ENV

Same leak risk; use clear=True.
```diff
-        with patch.dict(os.environ, env_vars), temp_chdir(temp_dir):
+        with patch.dict(os.environ, env_vars, clear=True), temp_chdir(temp_dir):
             BenchmarkRegressionHelper.generate_summary()
```
837-853: Specify UTF-8 when writing files containing "µ"

On some platforms the default temp file encoding isn't UTF-8. Specify the encoding to avoid Unicode errors.
```diff
-        with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
+        with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False, encoding="utf-8") as f:
```

Apply similarly where baseline files with "µs" are written (e.g., Lines 869-883).

scripts/benchmark_models.py (4)
261-273: Normalize microsecond unit aliases ('us'/'μs')
`format_time_value` only recognizes "µs". Normalize common aliases to ensure consistent formatting and ms conversion.

```diff
 def format_time_value(value: float, unit: str) -> str:
-    # Convert µs to ms if >= 1000 µs
+    # Normalize microsecond aliases
+    unit = {"us": "µs", "μs": "µs"}.get(unit, unit)
+    # Convert µs to ms if >= 1000 µs
     if unit == "µs" and value >= 1000:
         return f"{value / 1000:.3f} ms"
```
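A quick standalone check of the alias normalization suggested above (the function name and alias map are illustrative, mirroring the diff rather than the repository's exact code):

```python
# ASCII "us" and Greek mu "μs" both normalize to the micro sign "µs".
US_ALIASES = {"us": "µs", "μs": "µs"}

def format_time(value: float, unit: str) -> str:
    unit = US_ALIASES.get(unit, unit)
    if unit == "µs" and value >= 1000:
        return f"{value / 1000:.3f} ms"
    return f"{value:.3f} {unit}"

print(format_time(2500.0, "us"))  # 2.500 ms
print(format_time(12.5, "μs"))    # 12.500 µs
```

Without the normalization, a baseline written with ASCII "us" would skip the ms conversion and render inconsistently next to "µs" entries.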
157-175: Allow scientific notation and signs in time values

Baseline lines like 1.2e3 or negative bounds won't parse with the current regex. Expand the numeric class.
- match = re.match(r"^Time: \[([0-9., ]+)\] (.+)$", line.strip()) + match = re.match(r"^Time:\s*\[([0-9eE+.\-,\s]+)\]\s+(.+)$", line.strip())
188-206: Same for throughput parsing

Support scientific notation and whitespace robustness.
- match = re.match(r"^Throughput: \[([0-9., ]+)\] (.+)$", line.strip()) + match = re.match(r"^Throughput:\s*\[([0-9eE+.\-,\s]+)\]\s+(.+)$", line.strip())
315-319: Numeric sort of dimensions

String sort yields "10D" before "2D". Sort by the numeric prefix when present, falling back to the name.
```diff
-    for dimension in sorted(by_dimension.keys()):
+    def _dim_key(d: str) -> tuple[int, str]:
+        m = re.match(r"^\s*(\d+)\s*[dD]\b", d)
+        return (int(m.group(1)) if m else 1_000_000, d)
+    for dimension in sorted(by_dimension.keys(), key=_dim_key):
```

scripts/benchmark_utils.py (5)
711-716: Sort dimensions numerically to avoid misordering (e.g., 10D before 9D).

The current key sorts by string length; prefer a numeric sort.
Apply this diff:
```diff
-    sorted_dims = sorted(cases_by_dimension.keys(), key=lambda x: (len(x), x))
+    sorted_dims = sorted(
+        cases_by_dimension.keys(),
+        key=lambda d: int(str(d).rstrip("D")) if str(d).rstrip("D").isdigit() else sys.maxsize,
+    )
```
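The numeric-key approach can be verified in isolation (helper name illustrative):

```python
import re
import sys

def dim_sort_key(d: str):
    # "3D" -> 3; anything unparsable sorts last.
    m = re.match(r"^(\d+)D$", str(d))
    return (int(m.group(1)) if m else sys.maxsize, d)

dims = ["10D", "2D", "5D", "3D"]
print(sorted(dims))                    # string sort: ['10D', '2D', '3D', '5D']
print(sorted(dims, key=dim_sort_key))  # numeric sort: ['2D', '3D', '5D', '10D']
```

The first line shows exactly the "10D before 2D" misordering the review flags; the keyed sort fixes it for any dimension count.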
284-291: Reuse DEV_MODE_BENCH_ARGS for consistency.

Keep dev/quick runs centralized and consistent with the rest of the tool.
Apply this diff:
```diff
-    result = run_cargo_command(
-        ["bench", "--bench", "circumsphere_containment", "--", "--sample-size", "10", "--measurement-time", "5", "--warm-up-time", "1"],
+    result = run_cargo_command(
+        ["bench", "--bench", "circumsphere_containment", "--", *DEV_MODE_BENCH_ARGS],
```
764-778: Guard hardware metadata parsing and include cores in summary.

Avoid a potential IndexError on short files and enrich the one-line summary.
Apply this diff:
```diff
-        if not any(line.startswith("Hardware:") for line in metadata_lines) and "Hardware Information:" in content:
+        if not any(line.startswith("Hardware:") for line in metadata_lines) and "Hardware Information:" in content:
             # Emit a concise single-line summary from the block's first two fields
             for i, line in enumerate(first_lines):
                 if line.startswith("Hardware Information:"):
-                    os_line = first_lines[i + 1].strip()
-                    cpu_line = first_lines[i + 2].strip()
-                    metadata_lines.append(f"Hardware: {cpu_line.removeprefix('CPU: ').strip()}")
+                    cpu_line = first_lines[i + 2].strip() if i + 2 < len(first_lines) else ""
+                    cores_line = first_lines[i + 3].strip() if i + 3 < len(first_lines) else ""
+                    cpu = cpu_line.removeprefix("CPU: ").strip()
+                    cores = cores_line.removeprefix("CPU Cores: ").strip()
+                    summary = f"{cpu} ({cores} cores)" if cpu and cores else cpu or "Unknown CPU"
+                    metadata_lines.append(f"Hardware: {summary}")
                     break
```
539-541: Specify encoding when reading JSON.

Be explicit for deterministic behavior across locales.
Apply this diff:
```diff
-        with estimates_file.open() as f:
+        with estimates_file.open(encoding="utf-8") as f:
```
351-356: Tweak phrasing to avoid hardcoding sample size.

The circumsphere suite's sample count may vary; consider a neutral phrasing.
Apply this diff:
- "Based on 1000 random test cases:", + "Based on random test cases:",scripts/enhance_commits.py (3)
174-176: Use compiled fallback regex and group() for clarity and speed.

Avoid recreating the fallback pattern per call and prefer group(1) over indexing for readability.
```diff
-    lines = entry.splitlines()
-    if not lines:
-        return ""
-
-    first = lines[0]
-    pattern = r"-\s+([^[(]+?)(?:\s+\(#\d+\))?\s*(?:\[`[a-f0-9]{7,40}`\].*)?$"
-    match = re.match(pattern, first, re.I)
-    return match[1].lower().strip() if match else ""
+    lines = entry.splitlines()
+    if not lines:
+        return ""
+    first = lines[0]
+    match = TITLE_FALLBACK_RE.match(first)
+    return match.group(1).lower().strip() if match else ""
```

Also applies to: 182-189
340-366: Avoid accidental double blank lines before new release headers.

Guard the blank-line insertion.
```diff
-    if re.match(r"^## ", line):
-        output_lines.append("")  # Add blank line before new release
-        output_lines.append(line)
+    if re.match(r"^## ", line):
+        if output_lines and output_lines[-1] != "":
+            output_lines.append("")  # Avoid double blank lines
+        output_lines.append(line)
         line_index += 1
         continue
```
376-380: Redundant clear() just before return.

Not harmful, but unnecessary; the function returns immediately after.
```diff
-    if categorize_entries_list:
-        process_and_output_categorized_entries(categorize_entries_list, output_lines)
-        categorize_entries_list.clear()
+    if categorize_entries_list:
+        process_and_output_categorized_entries(categorize_entries_list, output_lines)
```
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (20)
- .codecov.yml (1 hunks)
- .github/workflows/codecov.yml (1 hunks)
- .gitignore (2 hunks)
- Cargo.toml (3 hunks)
- benches/PERFORMANCE_RESULTS.md (1 hunks)
- benches/ci_performance_suite.rs (1 hunks)
- benches/circumsphere_containment.rs (4 hunks)
- benches/memory_scaling.rs (3 hunks)
- benches/profiling_suite.rs (1 hunks)
- benches/triangulation_creation.rs (2 hunks)
- cspell.json (9 hunks)
- docs/code_organization.md (7 hunks)
- scripts/benchmark_models.py (1 hunks)
- scripts/benchmark_utils.py (12 hunks)
- scripts/enhance_commits.py (5 hunks)
- scripts/tests/conftest.py (1 hunks)
- scripts/tests/test_benchmark_models.py (1 hunks)
- scripts/tests/test_benchmark_utils.py (3 hunks)
- scripts/tests/test_enhance_commits.py (1 hunks)
- tarpaulin.toml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tarpaulin.toml
🚧 Files skipped from review as they are similar to previous changes (6)
- scripts/tests/test_enhance_commits.py
- .gitignore
- scripts/tests/conftest.py
- .codecov.yml
- Cargo.toml
- benches/PERFORMANCE_RESULTS.md
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
scripts/benchmark_models.py, scripts/enhance_commits.py, scripts/benchmark_utils.py
🧠 Learnings (4)
📚 Learning: 2025-09-04T20:03:49.859Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.859Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
benches/circumsphere_containment.rs, benches/profiling_suite.rs, benches/triangulation_creation.rs, docs/code_organization.md
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
PR: acgetchell/delaunay#60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.
Applied to files:
cspell.json
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
docs/code_organization.md
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
docs/code_organization.md
🧬 Code graph analysis (8)
scripts/tests/test_benchmark_models.py (1)
scripts/benchmark_models.py (16)
BenchmarkData (15-56), CircumspherePerformanceData (60-66), CircumsphereTestCase (70-100), VersionComparisonData (104-121), extract_benchmark_data (208-244), format_benchmark_tables (295-352), format_throughput_value (275-292), format_time_value (250-272), parse_benchmark_header (127-143), parse_throughput_data (177-205), parse_time_data (146-174), with_timing (29-35), with_throughput (37-43), to_baseline_format (45-56), get_winner (77-81), get_relative_performance (83-100)
benches/circumsphere_containment.rs (3)
src/geometry/traits/coordinate.rs (1)
new (643-643)

src/geometry/point.rs (1)
new (74-76)

src/geometry/predicates.rs (3)
insphere (346-416), insphere_distance (216-245), insphere_lifted (500-616)
scripts/tests/test_benchmark_utils.py (4)
scripts/benchmark_models.py (1)
BenchmarkData(15-56)scripts/benchmark_utils.py (31)
BenchmarkRegressionHelper(1768-2055)CriterionParser(1120-1226)PerformanceComparator(1319-1626)PerformanceSummaryGenerator(82-1117)WorkflowHelper(1629-1765)parse_estimates_json(1124-1172)determine_tag_name(1633-1658)create_metadata(1661-1705)display_baseline_summary(1708-1741)sanitize_artifact_name(1744-1765)prepare_baseline(1772-1811)set_no_baseline_status(1814-1824)extract_baseline_commit(1827-1877)determine_benchmark_skip(1880-1924)display_skip_message(1927-1940)display_no_baseline_message(1943-1953)compare_with_baseline(1327-1389)run_regression_test(1956-1989)display_results(1992-2004)generate_summary(103-142)generate_summary(2007-2055)_get_current_version(225-251)_get_version_date(253-272)_parse_baseline_results(753-805)_parse_comparison_results(807-854)_generate_markdown_content(144-223)_get_circumsphere_performance_results(673-751)_get_update_instructions(1087-1117)_run_circumsphere_benchmarks(274-300)_get_static_sections(1030-1085)_parse_circumsphere_benchmark_results(389-411)scripts/tests/conftest.py (1)
temp_chdir(16-44)scripts/subprocess_utils.py (2)
find_project_root(263-279)ProjectRootNotFoundError(259-260)
benches/ci_performance_suite.rs (2)
src/geometry/util.rs (16)
generate_random_points_seeded(1289-1316)generate_random_points_seeded(3417-3417)generate_random_points_seeded(3418-3418)generate_random_points_seeded(3437-3437)generate_random_points_seeded(3438-3438)generate_random_points_seeded(3456-3456)generate_random_points_seeded(3457-3457)generate_random_points_seeded(3475-3475)generate_random_points_seeded(3476-3476)generate_random_points_seeded(3495-3495)generate_random_points_seeded(3496-3496)generate_random_points_seeded(3500-3500)generate_random_points_seeded(3501-3501)generate_random_points_seeded(3505-3505)generate_random_points_seeded(3506-3506)generate_random_points_seeded(3510-3510)src/core/triangulation_data_structure.rs (1)
new(871-903)
benches/profiling_suite.rs (6)
src/geometry/util.rs (52)
core (1387), generate_grid_points (1365-1431, 3655, 3688, 3704, 3720, 3729, 3743, 3753, 3773, 3794, 3807, 4052), generate_poisson_points (1475-1565, 3843, 3876, 3909, 3910, 3924, 3931, 3942, 3955, 3961, 3970, 3993, 4005), generate_random_points_seeded (1289-1316, 3417, 3418, 3437, 3438, 3456, 3457, 3475, 3476, 3495, 3496, 3500, 3501, 3505, 3506, 3510), safe_usize_to_scalar (155, 477-513, 1097, 1409), std (235, 275, 490, 498, 1812, 1900)

src/core/triangulation_data_structure.rs (2)
default (207-209), new (871-903)

src/geometry/algorithms/convex_hull.rs (2)
default (1117-1123), from_triangulation (239-271)

benches/memory_scaling.rs (1)
new (37-68)

src/core/boundary.rs (2)
tds (465-469), boundary_facets (82-103)

src/geometry/predicates.rs (1)
insphere_lifted(500-616)
benches/triangulation_creation.rs (3)
benches/profiling_suite.rs (4)
points (235, 273, 313, 352)

src/geometry/util.rs (12)
generate_random_points (1226-1247, 3308, 3323, 3338, 3353, 3370, 3381, 3385, 3389, 3393, 3400, 3403)

src/core/triangulation_data_structure.rs (1)
new(871-903)
benches/memory_scaling.rs (2)
benches/triangulation_vs_hull_memory.rs (5)
new_placeholder (88-109), measure_with_result (271-280, 283-288), new (43-84), write_csv_header (112-117)

src/core/triangulation_data_structure.rs (4)
cells (485-487), new (871-903), number_of_vertices (595-597), number_of_cells (753-755)
scripts/benchmark_utils.py (3)
scripts/benchmark_models.py (6)
BenchmarkData (15-56), CircumspherePerformanceData (60-66), CircumsphereTestCase (70-100), extract_benchmark_data (208-244), format_benchmark_tables (295-352), get_winner (77-81)

scripts/hardware_utils.py (3)
HardwareComparator (409-575), HardwareInfo (33-406), get_hardware_info (335-359)

scripts/subprocess_utils.py (5)
ProjectRootNotFoundError (259-260), find_project_root (263-279), get_git_commit_hash (162-174), run_cargo_command (107-133), run_git_command (78-104)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: performance-regression
- GitHub Check: Codacy Security Scan
- GitHub Check: build (windows-latest)
- GitHub Check: Code Coverage
🔇 Additional comments (31)
cspell.json (1)
31-31: LGTM: new dictionary entries align with PR scope.
ahash, capsys, conftest, Nijenhuis, pipx, Precomputes, seahash, TAOCP, uncategorized, and Wilf look appropriate and should reduce false positives. Noted that “itional” remains intentionally included per prior learning.

Also applies to: 48-48, 67-67, 137-137, 154-154, 161-161, 191-191, 215-215, 225-225, 234-234
docs/code_organization.md (11)
63-64: LGTM: benches index additions.
Good to document README.md and the auto-generated PERFORMANCE_RESULTS.md. Consider confirming whether the generated file is committed or ignored to avoid churn, but the content here reads fine.

If helpful, I can add a CI check ensuring the generated file isn’t out of date relative to the baseline. Want me to open an issue?
116-116: LGTM: profiling workflow entry listed.
The workflow reference aligns with the benches and profiling notes.

120-120: No action needed.
Listing dependabot.yml in the tree is fine; nothing to change here.

121-121: No action needed.
The .cargo/ entry is accurate; leaving as-is.
185-185: LGTM: collections documented in Core Library.
This aligns with the new module and past cleanup that consolidated trait docs (matches prior learnings about removing stale trait file references).

574-581: LGTM: cell.rs module notes.
Accurately characterizes the large module; no changes needed.

588-594: LGTM: facet.rs module notes.
Scope and emphasis look correct.

595-601: LGTM: boundary.rs module notes.
Concise and accurate.

168-175: Remove module-invocation suggestion; the CLI is already registered.
The benchmark-utils console script is defined under [project.scripts] in pyproject.toml, so the existing example (uv run benchmark-utils generate-summary) will work as shown.

Likely an incorrect or invalid review comment.
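For context, the registration mentioned above takes roughly this shape in pyproject.toml; the module and function names below are illustrative assumptions, not the project's actual entry-point path:

```toml
# Hypothetical sketch of a console-script registration; the real
# pyproject.toml entry point for benchmark-utils may differ.
[project.scripts]
benchmark-utils = "benchmark_utils:main"
```

With this in place, `uv run benchmark-utils generate-summary` resolves the command without invoking the module by path.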
101-108: No action required – all six Python utility modules listed in docs/code_organization.md exist in the scripts/ directory with matching filenames and top-level definitions.
93-99: All documented test modules are present and pytest will discover them by default.

benches/ci_performance_suite.rs (2)
31-39: Deterministic point generation: LGTM.
Fixed per-dimension seeds with clear expect messages improve CI reproducibility and debuggability.

Also applies to: 41-57

92-96: Macro invocations read well and keep naming consistent.
Good alignment with other benches; easy to extend.
benches/circumsphere_containment.rs (6)
23-42: standard_simplex(): LGTM.
Simple, valid, non-degenerate simplex for all D.

105-118: bench_simplex! macro: LGTM.
Clear per-dimension labels; consistent method coverage.

120-139: bench_edge_case! macro: LGTM.
Good coverage of boundary/far/near-boundary scenarios.

141-162: Dimension sweep benches: LGTM.
Concise, readable, and aligned with the macro pattern elsewhere.

164-206: Edge-case sweep across 2D–5D: LGTM.
Nice touch using a shared epsilon; thorough coverage.

250-253: Zero-division guard: LGTM.
Prevents flaky diagnostics when no valid cases exist.

benches/memory_scaling.rs (2)

72-90: new_placeholder() as const fn: LGTM.
Signature change is backward-compatible; call sites are unaffected.

306-326: CSV error handling: LGTM.
Header/row write checks make failures visible and avoid partial files.
benches/triangulation_creation.rs (1)
37-41: Consistent macro invocations: LGTM.
Uniform naming; easy to extend to more dimensions.
benches/profiling_suite.rs (1)
254-257: Dev-mode 3D skip threshold looks right.
Skipping >10k in dev prevents runaway runtime without affecting prod runs.
scripts/tests/test_benchmark_models.py (1)
224-248: Nice coverage of table formatting and scaling.
Assertions match the current formatting rules (2 dp for µs, 3 dp for throughput, 1 dp for scaling). Looks good.
scripts/benchmark_models.py (1)
45-57: Baseline format: looks consistent.
Header/time/optional throughput with a trailing blank line matches expectations. No issues.
scripts/benchmark_utils.py (3)
1157-1162: Throughput zero-division guard looks good.
Using eps to cap extremely small times prevents crashes and keeps ordering reasonable.

1593-1605: Solid unit normalization across ns/µs/ms/s.
Handles µ (U+00B5), μ (U+03BC), and us. Good defensive checks and messaging on mismatches.
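The normalization being praised can be sketched in a few lines; the helper name and unit table below are illustrative, not the script's actual code:

```python
# Illustrative sketch of time-unit normalization to nanoseconds;
# the real scripts/benchmark_utils.py implementation differs in detail.
NS_PER_UNIT = {
    "ns": 1.0,
    "us": 1e3,
    "\u00b5s": 1e3,  # micro sign µ (U+00B5)
    "\u03bcs": 1e3,  # Greek mu μ (U+03BC)
    "ms": 1e6,
    "s": 1e9,
}

def to_nanoseconds(value: float, unit: str) -> float:
    """Normalize a time value to ns, accepting both micro-sign variants."""
    unit = unit.strip()
    if unit not in NS_PER_UNIT:
        raise ValueError(f"unknown time unit: {unit!r}")
    return value * NS_PER_UNIT[unit]
```

Folding both Unicode micro variants and ASCII "us" into one table keeps comparisons between Criterion outputs from different runs unit-safe.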
1526-1537: Use of geometric mean for aggregate change is correct.
Appropriate for multiplicative effects across heterogeneous benchmarks.
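As a standalone illustration of why the geometric mean is the right aggregate here (a hypothetical sketch, not the script's code): per-benchmark time ratios compose multiplicatively, and the geometric mean is the only mean under which a 2x slowdown and a 2x speedup cancel exactly.

```python
import math

def geometric_mean_change(ratios: list[float]) -> float:
    """Aggregate per-benchmark time ratios (new/old) into one factor."""
    if not ratios or any(r <= 0 for r in ratios):
        raise ValueError("ratios must be positive and non-empty")
    # exp(mean(log r)) avoids overflow from multiplying many ratios
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))
```

An arithmetic mean of [2.0, 0.5] would report a spurious 1.25x regression; the geometric mean reports 1.0.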
scripts/enhance_commits.py (2)
333-339: Early commit collection is correct and prevents duplication.
Processing bullets before release-boundary checks and advancing line_index via _collect_commit_entry avoids double-emission. LGTM.
20-22: Verify changelog bullet formatting for non-bold entries
Script-based detection failed due to AWK syntax issues; please manually inspect CHANGELOG.md’s “Changes/Fixed/Added/etc.” sections to confirm whether any top-level bullets lack **bold** titles before broadening COMMIT_BULLET_RE and adding TITLE_FALLBACK_RE.
Refactors and expands the benchmark suite for more comprehensive performance analysis. Includes:
- Adds CI performance suite for regression testing.
- Adds triangulation creation benchmarks for dimension scaling.
- Enhances profiling benchmarks with memory usage and query latency analysis.
- Refactors circumsphere benchmarks with seeded generation for reproducible results.
- Excludes test files from semgrep error-prone checks.
- Adds coverage checks in CI.
There was a problem hiding this comment.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
scripts/enhance_commits.py (1)
41-44: Avoid false positives from “support/supports” in Added category.
Bare “support/supports” misclassifies “drop/remove support …” as Added due to category precedence.

Apply:

```diff
- r"\bsupport\b",
- r"\bsupports\b",
- r"\badding support\b",
+ r"\badd(?:s|ed|ing)?\s+support\b",
```

Optionally add similarly-scoped patterns if needed:

```diff
+ r"\bintroduc(?:e|es|ed|ing)\s+support\b",
```

src/geometry/util.rs (2)
480-516: Incorrect f64 precision threshold (2^52 vs 2^53) in safe_usize_to_scalar.
Integers up to 2^53 (inclusive) are exactly representable in f64. The current cap uses 2^52 and will incorrectly reject values in (2^52, 2^53], and misdocuments the limit.

Apply:

```diff
- // f64 has 52 bits of precision in the mantissa, so values larger than 2^52 may lose precision
- const MAX_PRECISE_USIZE_IN_F64: u64 = 1_u64 << 52; // 2^52 = 4,503,599,627,370,496
+ // f64 has 53 bits of integer precision (including the implicit bit).
+ // Integers up to 2^53 - 1 are exactly representable.
+ const MAX_PRECISE_USIZE_IN_F64: u64 = (1_u64 << 53) - 1; // 9,007,199,254,740,991
@@
  if value_u64 > MAX_PRECISE_USIZE_IN_F64 {
      return Err(CoordinateConversionError::ConversionFailed {
          coordinate_index: 0,
          coordinate_value: format!("{value}"),
          from_type: "usize",
          to_type: std::any::type_name::<T>(),
      });
  }
```

Also update the docs above (Lines 474–479) to say “2^53 − 1” instead of “2^52”:

```diff
-/// - `f64` mantissa has 52 bits of precision
-/// - `usize` values larger than 2^52 (4,503,599,627,370,496) may lose precision
+/// - `f64` integers are exact up to 2^53 − 1
+/// - `usize` values larger than 2^53 − 1 (9,007,199,254,740,991) may lose precision
```

Note: tests referencing 2^52 (e.g., test_safe_usize_to_scalar_precision_boundary) should be shifted to 2^53 accordingly.
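The 2^53 boundary is easy to check empirically. Python floats are IEEE-754 binary64, the same layout as Rust's f64, so this quick check (illustrative only) mirrors the behavior the comment describes:

```python
# Python floats are IEEE-754 binary64, identical in layout to Rust's f64.
exact_limit = 2**53  # 9_007_199_254_740_992

# Every integer up to 2^53 is exactly representable...
assert float(exact_limit) == exact_limit
assert float(exact_limit - 1) == exact_limit - 1

# ...but 2^53 + 1 cannot be represented and rounds back to 2^53.
assert float(exact_limit + 1) == float(exact_limit)

# By contrast, integers just above 2^52 are still exact, which is why
# capping at 2^52 rejects values that would convert losslessly.
assert float(2**52 + 1) == 2**52 + 1
```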
1230-1320: Verify rand::rng() and rng.random_range() against the pinned rand version.
Cargo.toml pins rand = "0.9.2". In rand 0.9, rand::thread_rng() was renamed to rand::rng() and Rng::gen_range to Rng::random_range, so these call sites already use the current API; the older thread_rng()/gen_range() names are deprecated. Only if the crate were downgraded to rand 0.8 would the sites below need the old names (rand::thread_rng(), rng.gen_range(a..b), with use rand::Rng;).

Affected locations:
- src/geometry/util.rs (lines 1241, 1245, 1313, 1505, 1530)
- src/core/boundary.rs (lines 604, 837)
- src/core/triangulation_data_structure.rs (line 3444)
scripts/benchmark_models.py (1)
298-361: Fix scaling baseline when first entry has zero/empty time.
Using 1.0 as a fallback inflates scaling. Prefer the first non-zero mean, else "N/A".

Apply:

```diff
- first_bench = dim_benchmarks[0] if dim_benchmarks else None
- baseline_time = first_bench.time_mean if first_bench and first_bench.time_mean > 0 else 1.0
+ first_nonzero = next((b for b in dim_benchmarks if b.time_mean and b.time_mean > 0), None)
+ baseline_time = first_nonzero.time_mean if first_nonzero else None
@@
- if bench.time_mean > 0 and baseline_time > 0:
-     scaling = bench.time_mean / baseline_time
+ if bench.time_mean > 0 and baseline_time and baseline_time > 0:
+     scaling = bench.time_mean / baseline_time
      scaling_str = f"{scaling:.1f}x"
  else:
      scaling_str = "N/A"
```
♻️ Duplicate comments (3)
.github/workflows/codecov.yml (1)
53-58: Correct the tarpaulin config section names in the comment (and verify the TOML schema).
Tarpaulin’s config file uses sections like [report], [run], and [exclude] (not [tool.tarpaulin]). The current comment may mislead future edits; also please verify tarpaulin.toml actually follows this schema so cobertura.xml is produced as expected.

Apply this inline comment tweak:

```diff
-# All configuration is in tarpaulin.toml [tool.tarpaulin] section
+# All configuration is in tarpaulin.toml ([report], [run], [exclude] sections)
 # - XML output: out = ["Xml"]
-# - Output directory: output-dir = "coverage/"
-# - Exclusions: exclude-files = ["benches/*", "examples/*"]
-# - Test threads: test-threads = 1
+# - Output directory: output-dir = "coverage/"
+# - Exclusions: [exclude] files = ["benches/*", "examples/*"]
+# - Test threads: [run] test-threads = 1
```

If needed, here’s a minimal tarpaulin.toml example (outside this file):

```toml
[report]
out = ["Xml"]
output-dir = "coverage"

[run]
test-threads = 1

[exclude]
files = ["benches/*", "examples/*"]
```

benches/profiling_suite.rs (1)

171-195: Fail-fast on grid generation is the right call.
Replacing the silent fallback with a panic prevents mislabeled runs and matches prior feedback. LGTM.
benches/triangulation_creation.rs (1)
71-75: Deterministic seeds: nice—matches earlier guidance.
🧹 Nitpick comments (43)
.github/workflows/codecov.yml (3)
59-59: Remove trailing whitespace.
Static analysis flagged trailing spaces on this blank line.

Apply:

```diff
-    
+
```
60-66: Split the long error message to satisfy line-length lint and improve readability.
Line-length check fails (135 > 120). Break it into two echoes.

Apply:

```diff
- test -f coverage/cobertura.xml || {
-   echo "::error::coverage/cobertura.xml not found. Verify tarpaulin.toml has out = [\"Xml\"] and output-dir = \"coverage/\"."
-   find coverage -maxdepth 2 -type f -print 2>/dev/null || true
-   exit 2
- }
+ test -f coverage/cobertura.xml || {
+   echo "::error::coverage/cobertura.xml not found."
+   echo "Hint: Ensure tarpaulin.toml sets [report].out = [\"Xml\"] and [report].output-dir = \"coverage/\"."
+   find coverage -maxdepth 2 -type f -print 2>/dev/null || true
+   exit 2
+ }
```
90-96: Only archive if the report exists (or ignore missing files).
As written, this step runs always(); if coverage generation fails earlier, upload-artifact may error on a missing folder. Guard it or ignore missing files.

Apply:

```diff
 - name: Archive coverage results
   uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
-  if: always()
+  if: ${{ always() && hashFiles('coverage/cobertura.xml') != '' }}
   with:
     name: coverage-report
     path: coverage/
+    if-no-files-found: ignore
```

cspell.json (1)
4-16: Nit: also ignore .venv directories.
Many contributors use .venv alongside venv; ignoring both avoids noisy scans.

Apply this diff:

```diff
 "ignorePaths": [
   "**/.git/**",
   "**/.cache/**",
   "**/.pytest_cache/**",
   "**/target/**",
   "**/node_modules/**",
   "**/dist/**",
   "**/build/**",
   "**/coverage/**",
-  "**/venv/**",
+  "**/venv/**",
+  "**/.venv/**",
   "**/.mypy_cache/**",
   "**/.ruff_cache/**"
 ],
```

scripts/enhance_commits.py (4)
19-21: Make commit-bullet regex resilient to leading whitespace.
In real-world changelogs bullets may be indented; the current pattern misses them.

Apply:

```diff
-COMMIT_BULLET_RE = re.compile(r"^- \*\*")
+COMMIT_BULLET_RE = re.compile(r"^\s*-\s*\*\*")
```
182-187: Minor: avoid building the whole lines list just to read the first line.
Small allocation win; behavior unchanged.

```diff
- lines = entry.splitlines()
- if not lines:
-     return ""
- first = lines[0]
+ first = entry.split("\n", 1)[0]
```
292-296: Generalize body-line collection to any indentation (spaces/tabs).
Prevents missing wrapped paragraphs that use tabs or >2 spaces.

```diff
- while next_line_index < len(lines) and (
-     lines[next_line_index].strip() == ""  # Empty line
-     or lines[next_line_index].startswith("  ")
- ):  # Indented body content
+ while next_line_index < len(lines) and (
+     lines[next_line_index].strip() == ""  # Empty line
+     or re.match(r"^\s{2,}", lines[next_line_index])  # Indented body content
+ ):
```
383-386: Provide a helpful usage message on bad args.
Improves UX in CI and local runs.

```diff
- if len(sys.argv) != 3:
-     sys.exit(1)
+ if len(sys.argv) != 3:
+     print(f"Usage: {Path(sys.argv[0]).name} <input_changelog> <output_changelog>", file=sys.stderr)
+     sys.exit(1)
```

.semgrep.yaml (2)
51-55: Don’t globally exclude tests from all rule sets; keep secrets scanning on tests.
Excluding scripts/tests/, tests/, and **/*test*.py will also suppress high-signal rules like p/secrets. Prefer rule-scoped excludes (e.g., suppress only assert-related checks) or run a second semgrep job limited to p/secrets on test paths.

Example adjustment (keep tests included for secrets while excluding them for other packs):

```diff
 exclude:
   - "target/"
   - "node_modules/"
   - "*.min.js"
   - "*.min.css"
   - "__pycache__/"
   - ".git/"
   - "Cargo.lock"
   - "benches/baseline_results.txt"
   - "benches/benchmark_results/"
-  # Exclude test files from error-prone checks - pytest uses assert statements
-  - "scripts/tests/"
-  - "tests/"
-  - "**/test_*.py"
-  - "**/*_test.py"
+  # Keep tests in scope globally; suppress noise via rule-scoped paths or a separate job
```

If you prefer to keep global excludes, add a dedicated CI step that runs p/secrets with an allowlist of test paths.
63-65: Document how test-path exclusions are handled.
The updated comment mentions “path exclusions” but the config now excludes all tests. Clarify whether secrets scanning still runs on tests via a separate job.
CHANGELOG.md (1)
32-35: Inconsistent category label under “Fixed”.
Entry reads “Changed: Improves benchmark utils...” beneath the “Fixed” section. Align the verb with the section to avoid confusing release notes.

```diff
- - **Changed: Improves benchmark utils with timeout and error handling** [`56f322d`](...)
+ - **Fixed: Improve benchmark utils with timeout and error handling** [`56f322d`](...)
```

Optional: avoid repeating the section name in bullets (use the verb alone).
benches/profiling_suite.rs (5)
136-143: Guard against zero/invalid BENCH_MEASUREMENT_TIME.
If set to “0”, Criterion may pass iters == 0, which would skip pushing data. Consider clamping to a minimum of 1 second:

```diff
 std::env::var("BENCH_MEASUREMENT_TIME")
     .ok()
     .and_then(|s| s.parse::<u64>().ok())
+    .map(|secs| secs.max(1))
     .map_or_else(|| Duration::from_secs(default_secs), Duration::from_secs)
```
396-412: Nearest-rank percentile: clarify behavior and avoid potential overflow.
let rank = (95 * n).div_ceil(100); can overflow if n is very large (unlikely here, but free to guard). Also consider making the percentile configurable.

```diff
- let rank = (95 * n).div_ceil(100); // 1-based nearest-rank
+ let rank = (95usize.saturating_mul(n)).div_ceil(100); // 1-based nearest-rank
```
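For clarity, the nearest-rank rule computes a 1-based rank = ⌈p·n/100⌉ over sorted samples; a small Python sketch of the same rule (illustrative, not the bench code):

```python
import math

def nearest_rank_percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: 1-based rank = ceil(p * n / 100) on sorted data."""
    if not samples or not 0 < p <= 100:
        raise ValueError("need non-empty samples and 0 < p <= 100")
    ordered = sorted(samples)
    rank = math.ceil(p * len(ordered) / 100)  # 1-based rank, always in [1, n]
    return ordered[rank - 1]
```

Unlike interpolating percentiles, nearest-rank always returns an observed sample, which is what you want when reporting a latency that actually occurred.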
448-541: Average computations: protect against empty buffers and division by zero.
You gate on !allocation_infos.is_empty(), good. Mirror that for actual_point_counts to avoid accidental div-by-zero if the loop shape changes.

```diff
- let avg_actual_count =
-     actual_point_counts.iter().sum::<usize>() / actual_point_counts.len();
+ let avg_actual_count = if actual_point_counts.is_empty() {
+     0
+ } else {
+     actual_point_counts.iter().sum::<usize>() / actual_point_counts.len()
+ };
```
618-642: Preallocate precomputed_simplices capacity.
Minor allocation win; we know the upper bound.

```diff
- let mut precomputed_simplices: Vec<
-     SmallBuffer<Point<f64, 3>, SIMPLEX_VERTICES_BUFFER_SIZE>,
- > = Vec::new();
+ let mut precomputed_simplices: Vec<
+     SmallBuffer<Point<f64, 3>, SIMPLEX_VERTICES_BUFFER_SIZE>,
+ > = Vec::with_capacity(MAX_PRECOMPUTED_SIMPLICES);
```
756-767: Criterion config: expose sample size and warm-up via env for CI stability.
Allow overriding sample_size/warm_up_time to stabilize across machines without code changes.

```diff
-    config = Criterion::default()
-        .sample_size(10) // Fewer samples due to long-running nature
-        .warm_up_time(Duration::from_secs(10))
-        .measurement_time(bench_time(60));
+    config = {
+        let ss = std::env::var("BENCH_SAMPLE_SIZE").ok().and_then(|v| v.parse().ok()).unwrap_or(10);
+        let wu = std::env::var("BENCH_WARMUP_SECS").ok().and_then(|v| v.parse().ok()).unwrap_or(10);
+        Criterion::default()
+            .sample_size(ss)
+            .warm_up_time(Duration::from_secs(wu))
+            .measurement_time(bench_time(60))
+    };
```

examples/convex_hull_3d_50_points.rs (4)
62-79: Avoid reaching into private/internal fields of Tds.
Accessing tds.vertices directly couples the example to the internal representation. Prefer a public iterator method if available (e.g., tds.vertices()), or add one.
86-88: Remove the unused original_vertices parameter.
test_point_containment(&tds, &[]) passes a dummy slice that’s no longer used. Simplify the signature and calls:

```diff
-fn test_point_containment(
-    tds: &Tds<f64, (), (), 3>,
-    _original_vertices: &[()], // Not needed anymore since we access vertices from tds
-) {
+fn test_point_containment(tds: &Tds<f64, (), (), 3>) {
```

And update the single call site accordingly.
239-249: Same note re: direct access to tds.vertices.
If there is a public API to iterate vertices, use it for forward compatibility.

460-470: Fix type mismatch in memory-size estimation.
You use ConvexHull<f64, (), (), 3> elsewhere, but estimate sizes for ConvexHull<f64, Option<()>, Option<()>, 3> and Facet<…, Option<()>, Option<()>, 3>. Make them consistent:

```diff
- let hull_size = std::mem::size_of::<ConvexHull<f64, Option<()>, Option<()>, 3>>();
- let estimated_hull_memory =
-     hull_size + (facet_count * std::mem::size_of::<Facet<f64, Option<()>, Option<()>, 3>>());
+ let hull_size = std::mem::size_of::<ConvexHull<f64, (), (), 3>>();
+ let estimated_hull_memory =
+     hull_size + (facet_count * std::mem::size_of::<Facet<f64, (), (), 3>>());
```
1369-1434: Grid generation: great move to mixed-radix; keep error text consistent.
Efficient, allocation-free index generation. Consider harmonizing error messages to avoid mixing raw “bytes” with human-readable units across branches; you already format with KiB/MiB/GiB, so use that consistently.

1479-1568: Poisson disk: clarify complexity and suggest spatial bucketing for larger n.
The current O(n²) rejection sampling is fine for examples/tests; note in the docs that for large n or D, a grid or k-d tree accelerator would be preferable.
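The spatial-bucketing idea can be sketched as follows (a hypothetical 2D Python version, not the crate's implementation): with a background grid of cell size r/√2, each cell holds at most one accepted sample, so a candidate only checks a constant number of neighboring cells instead of every accepted point.

```python
import math
import random

def poisson_disk_2d(width, height, r, k=30, seed=42):
    """Bridson-style Poisson disk sampling with a background grid.

    The grid cell size r/sqrt(2) guarantees at most one sample per cell,
    so the minimum-distance check scans only a 5x5 cell neighborhood.
    """
    rng = random.Random(seed)
    cell = r / math.sqrt(2)
    cols, rows = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * cols for _ in range(rows)]
    samples, active = [], []

    def fits(p):
        gx, gy = int(p[0] / cell), int(p[1] / cell)
        for y in range(max(gy - 2, 0), min(gy + 3, rows)):
            for x in range(max(gx - 2, 0), min(gx + 3, cols)):
                q = grid[y][x]
                if q is not None and math.dist(p, q) < r:
                    return False
        return True

    def accept(p):
        grid[int(p[1] / cell)][int(p[0] / cell)] = p
        samples.append(p)
        active.append(p)

    accept((rng.uniform(0, width), rng.uniform(0, height)))
    while active:
        base = active.pop(rng.randrange(len(active)))
        for _ in range(k):
            ang = rng.uniform(0.0, 2.0 * math.pi)
            rad = rng.uniform(r, 2.0 * r)
            p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
            if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                active.append(base)  # base may still spawn more candidates
                accept(p)
                break
    return samples
```

This turns the per-candidate cost from O(n) (checking all accepted points) into O(1), which is the acceleration the comment above suggests documenting.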
examples/memory_analysis.rs (3)
41-48: Avoid panicking on generation errors; handle the Result and continue gracefully.
This keeps the example resilient and still measures allocations/time correctly.

```diff
- let (tds, tri_info) = measure_with_result(|| {
-     generate_random_triangulation::<f64, (), (), $dim>(
-         n_points,
-         (-50.0, 50.0),
-         None,
-         Some(seed),
-     )
-     .expect("failed to build triangulation")
- });
+ let (tds_res, tri_info) = measure_with_result(|| {
+     generate_random_triangulation::<f64, (), (), $dim>(
+         n_points,
+         (-50.0, 50.0),
+         None,
+         Some(seed),
+     )
+ });
+ let tds = match tds_res {
+     Ok(t) => t,
+     Err(e) => {
+         eprintln!("✗ Failed to build triangulation: {e}");
+         return;
+     }
+ };
```
56-59: Same here: don’t expect on hull construction.
This avoids aborting the run and provides a clear error.

```diff
- let (hull, hull_info) = measure_with_result(|| {
-     ConvexHull::from_triangulation(&tds)
-         .expect("failed to construct convex hull from triangulation")
- });
+ let (hull_res, hull_info) = measure_with_result(|| {
+     ConvexHull::from_triangulation(&tds)
+ });
+ let hull = match hull_res {
+     Ok(h) => h,
+     Err(e) => {
+         eprintln!("✗ Failed to construct convex hull from triangulation: {e}");
+         return;
+     }
+ };
```
74-86: Clarify units and add MiB for readability.
Minor UX improvement when scanning outputs.

```diff
 let tri_kb = tri_bytes / 1024.0;
 let hull_kb = hull_bytes / 1024.0;
+let tri_mib = tri_kb / 1024.0;
+let hull_mib = hull_kb / 1024.0;
 if tri_info.bytes_total > 0 && num_vertices > 0 {
     let bytes_per_vertex = tri_bytes / num_vertices as f64;
     let hull_ratio = (hull_bytes / tri_bytes) * 100.0;
-    println!("  Triangulation memory: {tri_kb:.1} KB ({bytes_per_vertex:.0} bytes/vertex)");
-    println!("  Hull memory: {hull_kb:.1} KB ({hull_ratio:.1}% of triangulation)");
+    println!("  Triangulation memory: {tri_kb:.1} KiB ({tri_mib:.2} MiB, {bytes_per_vertex:.0} bytes/vertex)");
+    println!("  Hull memory: {hull_kb:.1} KiB ({hull_mib:.2} MiB, {hull_ratio:.1}% of triangulation)");
 } else {
-    println!("  Triangulation memory: {tri_kb:.1} KB");
-    println!("  Hull memory: {hull_kb:.1} KB");
+    println!("  Triangulation memory: {tri_kb:.1} KiB ({tri_mib:.2} MiB)");
+    println!("  Hull memory: {hull_kb:.1} KiB ({hull_mib:.2} MiB)");
 }
```

examples/triangulation_3d_50_points.rs (5)
54-55: Send errors to stderr.
Examples should print failures to stderr.

```diff
- println!("✗ Failed to create triangulation: {e}");
+ eprintln!("✗ Failed to create triangulation: {e}");
```
63-75: Idiomatic iteration over vertices; avoid (&map).into_iter() and a manual break.
Use iter().take(10) for clarity.

```diff
- println!("First few vertices:");
- for (displayed, (_key, vertex)) in (&tds.vertices).into_iter().enumerate() {
-     if displayed >= 10 {
-         break;
-     }
-     let coords: [f64; 3] = vertex.into();
-     println!(
-         "  v{:2}: [{:8.3}, {:8.3}, {:8.3}]",
-         displayed, coords[0], coords[1], coords[2]
-     );
- }
+ println!("First few vertices:");
+ for (i, (_key, vertex)) in tds.vertices.iter().take(10).enumerate() {
+     let coords: [f64; 3] = vertex.into();
+     println!("  v{:2}: [{:8.3}, {:8.3}, {:8.3}]", i, coords[0], coords[1], coords[2]);
+ }
```
116-139: “First few cells” condition prints by validity count, not position.
If early cells are invalid, you may print later ones. Track a separate counter.

```diff
- let mut valid_cells = 0;
+ let mut valid_cells = 0;
+ let mut shown = 0;
@@
- // Show details for first few cells
- if valid_cells <= 3 {
+ // Show details for first few valid cells
+ if shown < 3 {
      println!("  Cell {cell_key:?}:");
      println!("    Vertices: {}", cell.vertices().len());
      if let Some(neighbors) = &cell.neighbors {
          println!("    Neighbors: {}", neighbors.len());
      }
+     shown += 1;
  }
```
285-288: Avoid shadowing len_u32.
Minor readability tweak.

```diff
- let len_u32 = u32::try_from(boundary_times.len()).unwrap_or(1u32);
- let avg_boundary_time: std::time::Duration =
-     boundary_times.iter().sum::<std::time::Duration>() / len_u32;
+ let len_u32_boundary = u32::try_from(boundary_times.len()).unwrap_or(1u32);
+ let avg_boundary_time: std::time::Duration =
+     boundary_times.iter().sum::<std::time::Duration>() / len_u32_boundary;
```
248-305: Memory “estimation” undercounts heap allocations.
size_of::<Vertex/Cell>() excludes heap-owned members. Consider noting this in the output.

benches/triangulation_creation.rs (2)
32-33: Imports look good; add BatchSize for better timing control.

```diff
-use criterion::{Criterion, Throughput, criterion_group, criterion_main};
+use criterion::{BatchSize, Criterion, Throughput, criterion_group, criterion_main};
```
67-79: Measure creation time without including Drop by using iter_batched.
Returning the triangulation from the measured closure lets Criterion drop it outside the timed section.

```diff
- group.bench_function("triangulation", |b| {
-     b.iter(|| {
-         black_box(
-             generate_random_triangulation::<f64, (), (), D>(
-                 1_000,                   // Number of points
-                 (-100.0, 100.0),         // Coordinate bounds
-                 None,                    // No vertex data
-                 Some(10_864 + D as u64), // Seeded for reproducible results
-             )
-             .expect("Failed to generate triangulation"),
-         );
-     });
- });
+ group.bench_function("triangulation", |b| {
+     b.iter_batched(
+         || (),
+         |_| {
+             generate_random_triangulation::<f64, (), (), D>(
+                 1_000,
+                 (-100.0, 100.0),
+                 None,
+                 Some(10_864 + D as u64),
+             )
+             .expect("Failed to generate triangulation")
+         },
+         BatchSize::SmallInput,
+     );
+ });
```

scripts/tests/test_benchmark_utils.py (1)
339-345: Avoid brittle “-0.0%” expectation.
String-matching a signed zero is fragile across float formatting changes.

Use a regex tolerant of an optional minus:

```diff
- assert "Average time change: -0.0%" in result
+ assert re.search(r"Average time change:\s*-?0\.0%", result)
```
56-76: Optional: vary seeds per input size to avoid prefix correlation.
Using the same seed for all counts makes larger inputs prefixes of smaller ones; varying by count can reduce correlation and improve robustness.

Apply this diff inside the bench_with_input body:

```diff
- black_box(
-     generate_random_triangulation::<f64, (), (), $dim>(
-         count,
-         (-100.0, 100.0),
-         None,
-         Some($seed),
-     )
+ // Derive a per-run seed from the base seed and count to decorrelate inputs
+ let run_seed: u64 = ($seed as u64) ^ (count as u64).wrapping_mul(0x9E37_79B1_85EB_CA87);
+ black_box(
+     generate_random_triangulation::<f64, (), (), $dim>(
+         count,
+         (-100.0, 100.0),
+         None,
+         Some(run_seed),
+     )
```

benches/memory_scaling.rs (2)
158-164: Use expect instead of unwrap for clearer bench failures.
Provide dimension/context on failure to speed up triage.

```diff
- )
- .unwrap();
+ )
+ .expect(concat!("generate_random_triangulation failed for ", stringify!($dim), "D"));
```
299-303: Include context in CSV error logs.
Add the file path and record details to aid debugging partial-write issues.

```diff
- if let Err(e) = MemoryRecord::write_csv_header(&mut file) {
-     eprintln!("failed writing CSV header: {e}");
+ if let Err(e) = MemoryRecord::write_csv_header(&mut file) {
+     eprintln!("failed writing CSV header to {}: {e}", csv_path.display());
      return;
  }
```

```diff
- if let Err(e) = record.write_csv_row(&mut file) {
-     eprintln!("failed writing CSV row: {e}");
+ if let Err(e) = record.write_csv_row(&mut file) {
+     eprintln!(
+         "failed writing CSV row (dim={} points={}): {e}",
+         record.dimension, record.points
+     );
  }
```

Also applies to: 309-311
scripts/benchmark_models.py (3)
44-56: Guard throughput emission when fields are partially unset.
to_baseline_format() appends Throughput when throughput_mean is not None, but could emit "None" if callers set attributes manually without with_throughput(). Guard all throughput fields and the unit.

Apply:

```diff
- if self.throughput_mean is not None:
-     lines.append(f"Throughput: [{self.throughput_low}, {self.throughput_mean}, {self.throughput_high}] {self.throughput_unit}")
+ if (
+     self.throughput_low is not None
+     and self.throughput_mean is not None
+     and self.throughput_high is not None
+     and self.throughput_unit
+ ):
+     lines.append(
+         f"Throughput: [{self.throughput_low}, {self.throughput_mean}, {self.throughput_high}] {self.throughput_unit}"
+     )
```
145-175: Parsing is robust; consider compiling the regex for hot-path parsing.
Repeated re.match with complex classes is fine here, but if parsing large baselines in CI grows, precompile TIME_RE once. Optional.

251-276: Normalize and trim units before conversion.
Minor: strip whitespace to avoid "µs " mismatches.
Apply:
```diff
- unit = {"us": "µs", "μs": "µs"}.get(unit, unit)
+ unit = {"us": "µs", "μs": "µs"}.get((unit or "").strip(), (unit or "").strip())
```
716-719: Dimension sort: strip and handle mixed-case "D"Safer sort if dimension strings include spaces or varied case.
Apply:
- sorted_dims = sorted( - cases_by_dimension.keys(), - key=lambda d: int(str(d).rstrip("D")) if str(d).rstrip("D").isdigit() else sys.maxsize, - ) + sorted_dims = sorted( + cases_by_dimension.keys(), + key=lambda d: ( + int(str(d).strip().removesuffix("D").removesuffix("d")) + if str(d).strip().removesuffix("D").removesuffix("d").isdigit() + else sys.maxsize + ), + )
1273-1290: Criterion discovery: add case-insensitive group matching (optional).
If Criterion ever emits uppercase "2D" group names, broaden the filter.

Apply:

```diff
- for dim_dir in sorted(p for p in criterion_dir.iterdir() if p.is_dir() and p.name.endswith("d")):
+ for dim_dir in sorted(p for p in criterion_dir.iterdir() if p.is_dir() and re.search(r"\d+[dD]$", p.name)):
```
760-805: Baseline metadata extraction is pragmatic; consider bounds checks.
When collapsing "Hardware Information:" you index i+2/i+3. Add a guard to avoid an accidental IndexError if the format changes.

Apply:

```diff
- cpu_line = first_lines[i + 2].strip() if i + 2 < len(first_lines) else ""
- cores_line = first_lines[i + 3].strip() if i + 3 < len(first_lines) else ""
+ cpu_line = first_lines[i + 2].strip() if i + 2 < len(first_lines) else ""
+ cores_line = first_lines[i + 3].strip() if i + 3 < len(first_lines) else ""
+ if not cpu_line and not cores_line:
+     continue
```
1980-1991: Relevant-change detection: compile patterns once (micro).
Tiny optimization; avoids recompiling per file.

Apply:

```diff
- relevant_patterns = [r"^src/", r"^benches/", r"^Cargo\.toml$", r"^Cargo\.lock$"]
- changed_files = result.stdout.strip().split("\n") if result.stdout.strip() else []
- has_relevant_changes = any(re.match(pattern, file) for file in changed_files for pattern in relevant_patterns)
+ patterns = [re.compile(p) for p in (r"^src/", r"^benches/", r"^Cargo\.toml$", r"^Cargo\.lock$")]
+ changed_files = result.stdout.strip().split("\n") if result.stdout.strip() else []
+ has_relevant_changes = any(p.match(file) for file in changed_files for p in patterns)
```
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (23)
- .github/workflows/codecov.yml (2 hunks)
- .semgrep.yaml (2 hunks)
- CHANGELOG.md (1 hunks)
- benches/ci_performance_suite.rs (2 hunks)
- benches/circumsphere_containment.rs (3 hunks)
- benches/memory_scaling.rs (8 hunks)
- benches/profiling_suite.rs (1 hunks)
- benches/triangulation_creation.rs (1 hunks)
- cspell.json (9 hunks)
- docs/code_organization.md (7 hunks)
- examples/convex_hull_3d_50_points.rs (10 hunks)
- examples/memory_analysis.rs (3 hunks)
- examples/triangulation_3d_50_points.rs (8 hunks)
- scripts/benchmark_models.py (1 hunks)
- scripts/benchmark_utils.py (15 hunks)
- scripts/enhance_commits.py (6 hunks)
- scripts/hardware_utils.py (4 hunks)
- scripts/subprocess_utils.py (4 hunks)
- scripts/tests/conftest.py (1 hunks)
- scripts/tests/test_benchmark_models.py (1 hunks)
- scripts/tests/test_benchmark_utils.py (4 hunks)
- scripts/tests/test_enhance_commits.py (1 hunks)
- src/geometry/util.rs (6 hunks)
✅ Files skipped from review due to trivial changes (1)
- scripts/hardware_utils.py
🚧 Files skipped from review as they are similar to previous changes (5)
- scripts/tests/test_benchmark_models.py
- scripts/tests/conftest.py
- scripts/tests/test_enhance_commits.py
- benches/circumsphere_containment.rs
- scripts/subprocess_utils.py
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
- scripts/enhance_commits.py
- scripts/benchmark_models.py
- scripts/benchmark_utils.py
🧠 Learnings (6)
📚 Learning: 2025-09-04T20:03:49.859Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.859Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
- examples/memory_analysis.rs
- benches/memory_scaling.rs
- docs/code_organization.md
- benches/profiling_suite.rs
- examples/convex_hull_3d_50_points.rs
- benches/triangulation_creation.rs
- examples/triangulation_3d_50_points.rs
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
PR: acgetchell/delaunay#60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.
Applied to files:
cspell.json
📚 Learning: 2025-08-28T03:54:34.371Z
Learnt from: acgetchell
PR: acgetchell/delaunay#54
File: scripts/generate_changelog.sh:417-438
Timestamp: 2025-08-28T03:54:34.371Z
Learning: The generate_changelog.sh script uses a deliberate design pattern where the auto-changelog template uses simple generic headers (### Changes, ### Fixed Issues) and the enhancer function transforms these into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). This separation keeps the template simple while ensuring standardized output format.
Applied to files:
scripts/enhance_commits.py
📚 Learning: 2025-08-28T03:49:30.582Z
Learnt from: acgetchell
PR: acgetchell/delaunay#54
File: scripts/generate_changelog.sh:416-435
Timestamp: 2025-08-28T03:49:30.582Z
Learning: The generate_changelog.sh script processes template headers from auto-changelog (### Changes, ### Fixed Issues) and transforms them into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). When analyzing changelog generation scripts, check both the template and the final output to understand the transformation pipeline.
Applied to files:
scripts/enhance_commits.py
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The user corrected that finitecheck.rs, hashcoordinate.rs, and orderedeq.rs trait files do not exist in the current version of the delaunay repository under src/geometry/traits/. The current structure only contains coordinate.rs, which matches the documented structure in CONTRIBUTING.md.
Applied to files:
- docs/code_organization.md
- examples/triangulation_3d_50_points.rs
📚 Learning: 2025-08-30T02:40:22.032Z
Learnt from: acgetchell
PR: acgetchell/delaunay#55
File: CONTRIBUTING.md:152-167
Timestamp: 2025-08-30T02:40:22.032Z
Learning: The delaunay repository has stale documentation in WARP.md that references non-existent trait files (finitecheck.rs, hashcoordinate.rs, orderedeq.rs) in src/geometry/traits/, while the actual directory only contains coordinate.rs. The CONTRIBUTING.md documentation is accurate and reflects the current state.
Applied to files:
docs/code_organization.md
🧬 Code graph analysis (10)
examples/memory_analysis.rs (2)
src/geometry/util.rs (15)
generate_random_triangulation (1671-1723; declarations at 4261, 4270, 4279, 4283, 4295, 4305, 4316, 4320, 4341, 4348, 4355, 4368, 4386, 4394)
benches/triangulation_vs_hull_memory.rs (2)
measure_with_result (271-280, 283-288)
scripts/tests/test_benchmark_utils.py (3)
scripts/benchmark_models.py (3)
BenchmarkData (14-55), CircumspherePerformanceData (59-65), CircumsphereTestCase (69-99)
scripts/benchmark_utils.py (32)
BenchmarkRegressionHelper (1839-2126), CriterionParser (1130-1293), PerformanceComparator (1386-1697), PerformanceSummaryGenerator (82-1127), WorkflowHelper (1700-1836), parse_estimates_json (1134-1182), determine_tag_name (1704-1729), create_metadata (1732-1776), display_baseline_summary (1779-1812), sanitize_artifact_name (1815-1836), prepare_baseline (1843-1882), set_no_baseline_status (1885-1895), extract_baseline_commit (1898-1948), determine_benchmark_skip (1951-1995), display_skip_message (1998-2011), display_no_baseline_message (2014-2024), compare_with_baseline (1394-1456), run_regression_test (2027-2060), display_results (2063-2075), generate_summary (103-142, 2078-2126), _get_current_version (225-251), _get_version_date (253-272), _parse_baseline_results (760-815), _parse_comparison_results (817-864), _generate_markdown_content (144-223), _get_circumsphere_performance_results (677-758), _get_update_instructions (1097-1127), _parse_numerical_accuracy_output (302-341), _run_circumsphere_benchmarks (274-300), _get_static_sections (1040-1095), _parse_circumsphere_benchmark_results (389-411)
scripts/subprocess_utils.py (2)
find_project_root (272-288), ProjectRootNotFoundError (268-269)
benches/memory_scaling.rs (3)
src/geometry/util.rs (15)
generate_random_triangulation (1671-1723; declarations at 4261, 4270, 4279, 4283, 4295, 4305, 4316, 4320, 4341, 4348, 4355, 4368, 4386, 4394)
benches/triangulation_vs_hull_memory.rs (3)
measure_with_result (271-280, 283-288), new (43-84)
src/core/triangulation_data_structure.rs (4)
cells (485-487), number_of_vertices (595-597), number_of_cells (753-755), new (871-903)
src/geometry/util.rs (3)
src/core/facet.rs (3)
vertex (370-372), new (287-293), vertices (437-444)
src/core/triangulation_data_structure.rs (4)
new (871-903), number_of_vertices (595-597), dim (685-692), is_valid (2140-2176)
src/core/vertex.rs (3)
point (403-405), dim (502-504), is_valid (538-554)
scripts/benchmark_utils.py (3)
scripts/benchmark_models.py (8)
BenchmarkData (14-55), CircumspherePerformanceData (59-65), CircumsphereTestCase (69-99), extract_benchmark_data (209-245), format_benchmark_tables (298-360), get_winner (76-80), with_timing (28-34), with_throughput (36-42)
scripts/hardware_utils.py (3)
HardwareComparator (409-575), HardwareInfo (33-406), get_hardware_info (335-359)
scripts/subprocess_utils.py (5)
ProjectRootNotFoundError (268-269), find_project_root (272-288), get_git_commit_hash (168-180), run_cargo_command (109-137), run_git_command (78-106)
benches/ci_performance_suite.rs (3)
benches/triangulation_creation.rs (1)
generate_random_triangulation (70)
src/geometry/util.rs (21)
generate_random_triangulation (1671-1723; declarations at 4261, 4270, 4279, 4283, 4295, 4305, 4316, 4320, 4341, 4348, 4355, 4368, 4386, 4394), std (238, 278, 493, 501, 1970, 2058)
src/core/triangulation_data_structure.rs (1)
new (871-903)
benches/profiling_suite.rs (3)
src/geometry/util.rs (52)
core (1390), generate_grid_points (1368-1434; declarations at 3813, 3846, 3862, 3878, 3887, 3901, 3911, 3931, 3952, 3965, 4210), generate_poisson_points (1478-1568; declarations at 4001, 4034, 4067-4068, 4082, 4089, 4100, 4113, 4119, 4128, 4151, 4163), generate_random_points_seeded (1292-1319; declarations at 3575-3576, 3595-3596, 3614-3615, 3633-3634, 3653-3654, 3658-3659, 3663-3664, 3668), safe_usize_to_scalar (158, 480-516, 1100, 1412), std (238, 278, 493, 501, 1970, 2058)
src/core/triangulation_data_structure.rs (2)
default (207-209), new (871-903)
src/geometry/predicates.rs (1)
insphere_lifted (500-616)
examples/convex_hull_3d_50_points.rs (3)
src/geometry/util.rs (21)
generate_random_triangulation (1671-1723; declarations at 4261, 4270, 4279, 4283, 4295, 4305, 4316, 4320, 4341, 4348, 4355, 4368, 4386, 4394), std (238, 278, 493, 501, 1970, 2058)
examples/triangulation_3d_50_points.rs (5)
std (293, 294), main (33-94), analyze_triangulation (97-150), performance_analysis (249-318)
src/geometry/algorithms/convex_hull.rs (1)
from_triangulation (239-271)
benches/triangulation_creation.rs (1)
src/geometry/util.rs (21)
generate_random_triangulation (1671-1723; declarations at 4261, 4270, 4279, 4283, 4295, 4305, 4316, 4320, 4341, 4348, 4355, 4368, 4386, 4394), std (238, 278, 493, 501, 1970, 2058)
examples/triangulation_3d_50_points.rs (2)
src/geometry/util.rs (21)
generate_random_triangulation (1671-1723; declarations at 4261, 4270, 4279, 4283, 4295, 4305, 4316, 4320, 4341, 4348, 4355, 4368, 4386, 4394), std (238, 278, 493, 501, 1970, 2058)
examples/convex_hull_3d_50_points.rs (4)
std (461, 463), main (36-98), analyze_triangulation (101-122)
🪛 GitHub Check: build (ubuntu-latest)
.github/workflows/codecov.yml
[failure] 63-63:
63:121 [line-length] line too long (135 > 120 characters)
[failure] 59-59:
59:1 [trailing-spaces] trailing spaces
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: build (windows-latest)
- GitHub Check: Code Coverage
- GitHub Check: Codacy Security Scan
- GitHub Check: performance-regression
🔇 Additional comments (32)
.github/workflows/codecov.yml (2)
71-77: Gating upload on report presence looks good. Conditional upload via `success() && hashFiles('coverage/cobertura.xml')` prevents noisy failures.
82-88: Codacy gating mirrors Codecov correctly. Same guard as above; sensible.
cspell.json (2)
31-31: LGTM: new dictionary entries are accurate and well-placed alphabetically.
- Terms reflect repo/tooling (pytest’s capsys/conftest, hash crates ahash/seahash, refs TAOCP/Nijenhuis/Wilf, pipeline/tooling pipx, common “misordering/uncategorized”).
- Including both “Precomputes” and “precomputes” covers sentence-case vs. lowercase correctly (cspell is case-sensitive by default).
- Noting prior learning: “itional” is intentionally present in this repo’s dictionary; no action needed.
Also applies to: 48-48, 67-67, 130-130, 138-138, 155-155, 162-163, 193-193, 217-217, 227-227, 236-236
21-26: cspell check passed – no new spelling issues detected.
scripts/enhance_commits.py (6)
2-2: Docstring clarity LGTM.
Concise and aligned with the script’s purpose.
174-176: Early-return guard LGTM.
Good defensive check for empty/blank entries.
343-356: Release-end flush and state reset look correct.
Avoids duplicated headers and resets section flags safely.
360-365: Nice touch: blank-line guard before emitting new release header.
Prevents extra vertical whitespace.
375-378: EOF flush safeguard LGTM.
Ensures tail entries aren’t dropped.
331-337: **Verify presence of indented "- " bullets in changelogs and docs.**
Use this updated script to scan all Markdown/RST/TXT files under changelog or docs paths for indented bullets before adjusting the regex:

```bash
#!/usr/bin/env bash
set -euo pipefail
# Find all .md/.rst/.txt files in changelog or docs directories
fd -a -H -E .git -E target -E node_modules -t f -e md -e rst -e txt --full-path 2>/dev/null |
  grep -Ei 'changelog.*\.(md|rst|txt)|docs/.*\.(md|rst|txt)' |
  while read -r f; do
    echo "FILE: $f"
    if rg -nP '^[[:space:]]+-\s+\*\*' "$f"; then
      :
    else
      echo "  No indented '**' bullets found"
    fi
  done
```

docs/code_organization.md (4)
39-43: New core/collections module reference looks good. Accurately reflects the addition and prelude re-export of collections.rs.
63-71: Bench/profiling docs align with PR scope. README, PERFORMANCE_RESULTS.md, and profiling_suite.rs entries are consistent with the new profiling workflow.
116-117: profiling-benchmarks.yml reference validated. Consistent with the workflow addition; no issues.
159-181: Console script entry point verified.
pyproject.toml already defines `benchmark-utils = "benchmark_utils:main"` under `[project.scripts]`, so `uv run benchmark-utils generate-summary` works as documented.
examples/convex_hull_3d_50_points.rs (1)
41-50: Good switch to generate_random_triangulation with a fixed seed. Reproducibility and simplified setup are 👍.
src/geometry/util.rs (3)
150-176: format_bytes: minor resilience is fine. The fallback on conversion failure is reasonable; no action needed.
597-646: Numerically stable hypot path is solid. 2D uses std hypot with safe conversions and a robust fallback; higher-D uses scaling. Looks good.
1672-1723: generate_random_triangulation API is cohesive. Clear error propagation and data-type constraints; matches example usage.
examples/memory_analysis.rs (1)
127-141: Deterministic seeds: good call. Using fixed seeds per dimension makes runs comparable and CI-friendly.
examples/triangulation_3d_50_points.rs (1)
206-227: Remove the `.key()` verification: `boundary_facets()` returns `Facet<…>`, which implements `key()`.
scripts/tests/test_benchmark_utils.py (2)
83-119: LGTM: division-by-zero protection test is solid. Good coverage for epsilon handling and unit conversion in parse_estimates_json.
2133-2146: LGTM: DEV_MODE_BENCH_ARGS contract. Sanity checks on structure (arg/value pairs) help keep workflows stable.
benches/ci_performance_suite.rs (1)
41-80: LGTM: macro-driven per-dimension benches reduce duplication cleanly. Deterministic seeds, dimension-aware sample sizes, and consistent throughput labeling look good.
benches/memory_scaling.rs (2)
68-88: LGTM: const placeholder ctor improves compile-time checks. Making new_placeholder const is a nice touch for immutability and simplicity when the feature is off.
151-181: LGTM: macro-generated measurement functions and call sites. Nice consolidation across 2D–5D with feature-gated allocation tracking.
Also applies to: 184-188
scripts/benchmark_models.py (2)
13-27: Solid, well-scoped data models for benchmarks. The dataclass fields and defaults look clean and practical for CI usage. Fluent setters make baseline generation terse.
76-101: Winner and relative performance helpers are coherent. The API is straightforward and safe against empty/missing baselines.
scripts/benchmark_utils.py (5)
253-273: Tag/date retrieval is correct and uses the secure git wrapper. Good use of run_git_command and fallbacks; handles missing tags gracefully.
1664-1673: Time unit normalization: good coverage. Covers ns/µs/μs/us/ms/s with defensive fallbacks. Nice.
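As an illustration of the pattern being praised here, unit normalization is typically a lookup keyed on the unit string with a defensive default. This is a hypothetical sketch with illustrative names, not the repository's actual `parse_estimates_json` internals:

```python
# Hypothetical sketch of time-unit normalization to nanoseconds; the
# real parser in scripts/benchmark_utils.py may differ in detail.
_UNIT_TO_NS = {
    "ns": 1.0,
    "us": 1e3,  # ASCII fallback spelling
    "µs": 1e3,  # micro sign (U+00B5)
    "μs": 1e3,  # Greek small letter mu (U+03BC)
    "ms": 1e6,
    "s": 1e9,
}

def to_nanoseconds(value: float, unit: str) -> float:
    """Convert a timing value to nanoseconds, defaulting defensively to ns."""
    return value * _UNIT_TO_NS.get(unit.strip(), 1.0)
```

Treating unknown units as nanoseconds keeps a malformed report from crashing the comparison; logging a warning at that point would make the fallback visible.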
1158-1179: Division-by-zero guard and rounding policy look good. The eps for µs and consistent rounding prevent noisy diffs in baselines.
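The guard pattern referenced here can be sketched as follows (hypothetical names; the project's comparator code may differ):

```python
# Hypothetical sketch of an epsilon-guarded percent change between a
# current and baseline timing in microseconds.
EPS_US = 1e-9  # floor for the denominator, in microseconds

def relative_change(current_us: float, baseline_us: float, eps: float = EPS_US) -> float:
    """Percent change vs. baseline, clamping near-zero denominators."""
    denom = baseline_us if abs(baseline_us) > eps else eps
    return round((current_us - baseline_us) / denom * 100.0, 2)
```

Rounding to a fixed number of decimals is what keeps regenerated baselines byte-stable across runs with negligible timing jitter.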
1350-1355: Graceful timeout handling with actionable guidance. Clear messaging and logging on TimeoutExpired. Nice touch for CI.
2201-2218: CLI surface is comprehensive and consistent with docs. generate-summary exists and flags match guidance. Good.
```diff
 #### `cell.rs` (large module)

 - Most comprehensive implementation
 - Multiple specialized implementation blocks
 - Extensive geometric predicates integration
 - Detailed Hash/Eq contract documentation

-#### `vertex.rs` (1,950 lines)
+#### `vertex.rs` (large module)

 - Strong focus on coordinate validation
 - Comprehensive equality testing
 - Multiple numeric type support
 - Detailed serialization testing

-#### `facet.rs` (1,420 lines)
+#### `facet.rs` (medium module)

 - Geometric relationship focus
 - Key generation utilities
 - Adjacency testing
 - Error handling for geometric constraints

-#### `boundary.rs` (415 lines)
+#### `boundary.rs` (small module)

 - Trait implementation focused
 - Algorithm-specific testing
 - Performance benchmarking
 - Integration with TDS

-#### `util.rs` (871 lines)
+#### `util.rs` (large module)

 - Function-focused (not struct-focused)
-- Extensive edge case testing
-- Generic type coverage
-- Utility function combinations
+- UUID generation and validation utilities with comprehensive error handling
+- Extreme coordinate finding functions for SlotMap-based vertex collections
+- Supercell simplex creation for triangulation initialization
+- Hash utilities for stable, deterministic hash computation
+- Facet adjacency checking and geometric utilities
+- Combination generation for k-simplex vertex combinations
+- Multi-dimensional testing across 1D-5D with both f32 and f64 coordinate types
+- Extensive edge case testing and error handling validation with systematic test organization
```
🛠️ Refactor suggestion
Disambiguate “util.rs” responsibilities (core vs geometry).
This section mixes responsibilities of src/core/util.rs (UUIDs, supercell/simplex, mapping helpers) with src/geometry/util.rs (hypot, circumcenter, facet/surface measures, point generators). Readers may assume a single util.rs provides all of these.
Propose splitting bullets into two subsections to match the actual code layout.
```diff
-#### `util.rs` (large module)
-
-- Function-focused (not struct-focused)
-- UUID generation and validation utilities with comprehensive error handling
-- Extreme coordinate finding functions for SlotMap-based vertex collections
-- Supercell simplex creation for triangulation initialization
-- Hash utilities for stable, deterministic hash computation
-- Facet adjacency checking and geometric utilities
-- Combination generation for k-simplex vertex combinations
-- Multi-dimensional testing across 1D-5D with both f32 and f64 coordinate types
-- Extensive edge case testing and error handling validation with systematic test organization
+#### `core/util.rs` (large module)
+
+- Function-focused (not struct-focused)
+- UUID generation/validation and custom error types
+- Extreme-coordinate search helpers over SlotMap-based vertex collections
+- Supercell simplex creation for triangulation bootstrapping
+- Conversion/mapping helpers used by TDS and related structures
+
+#### `geometry/util.rs` (large module)
+
+- Numerically stable hypot (d-dimensional), circumcenter/radius
+- Facet/surface measure (Gram matrix for 4D+)
+- Random/grid/Poisson point generators (2D–5D tested)
+- Random triangulation generation helper (seeded/unseeded)
+- Extensive tests across dimensions and f32/f64
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In docs/code_organization.md around lines 580 to 619, the "util.rs" entry
conflates responsibilities from src/core/util.rs and src/geometry/util.rs; split
this single "util.rs (large module)" bullet into two subsections labeled
"core/util.rs (large module)" and "geometry/util.rs (large module)" and move
bullets accordingly: put UUID generation/validation, slotmap vertex helpers,
supercell/simplex creation, mapping helpers and stable hash utilities under
core/util.rs; put hypotenuse/circumcenter, facet/surface measures, point
generators, geometric utilities (facet adjacency, combination generation) and
multi-dimensional testing under geometry/util.rs; keep test/edge-case notes
under each relevant subsection and ensure wording mirrors existing bullets for
consistency.
Addresses reliability issues in CI and benchmark suite. Explicitly configures tarpaulin to generate coverage reports with explicit arguments instead of relying on the config file, which was causing failures in some environments. Configures criterion benchmarks via environment variables and implements timeout handling to prevent hangs. Fixes a usize to scalar conversion precision issue and adds edge case handling to benchmarks.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/geometry/util.rs (1)
598-646: Same here: `hypot` uses Float operations; bound it explicitly. Makes intent clear and prevents subtle trait-bound drift.
```diff
-pub fn hypot<T, const D: usize>(coords: [T; D]) -> T
-where
-    T: CoordinateScalar + num_traits::Zero,
+pub fn hypot<T, const D: usize>(coords: [T; D]) -> T
+where
+    T: CoordinateScalar + Float,
```
♻️ Duplicate comments (1)
.github/workflows/codecov.yml (1)
53-60: Switch to explicit Tarpaulin CLI looks good and resolves earlier config concerns. Moving from config-driven to explicit CLI guarantees Cobertura XML at coverage/cobertura.xml and aligns the upload steps. This addresses the prior note about ensuring out=Xml and output-dir=coverage.
Also applies to: 70-70
🧹 Nitpick comments (38)
.github/workflows/codecov.yml (2)
82-101: Tighten verification: fix newline echoes and assert exclusions. `echo "\n..."` won't render a newline; use printf. Also, add a quick check ensuring benches/examples don't slip into the report.
Apply:
```diff
-echo "\nSearching for XML files:"
+printf "\nSearching for XML files:\n"
 find . -name "*.xml" -type f -ls 2>/dev/null || echo "No XML files found"
-echo "\nSearching for cobertura files:"
+printf "\nSearching for cobertura files:\n"
 find . -name "*cobertura*" -type f -ls 2>/dev/null || echo "No cobertura files found"
 if [ ! -f coverage/cobertura.xml ]; then
   echo "::error::coverage/cobertura.xml not found. Tarpaulin failed to generate XML output."
   echo "::error::Check tarpaulin logs above for errors."
   exit 2
 else
   echo "::notice::Coverage report generated successfully: $(wc -l < coverage/cobertura.xml) lines"
 fi
+# Sanity: ensure benches/examples aren’t present (defense-in-depth with .codecov.yml)
+if rg -n '(^|/)(benches|examples)/' coverage/cobertura.xml; then
+  echo "::warning::benches/ or examples/ paths detected in coverage report"
+fi
```
56-56: Strip trailing spaces to satisfy lint. CI flagged trailing spaces on these lines; trim them to clear the check.
Also applies to: 59-59, 81-81, 86-86, 89-89, 92-92
scripts/tests/test_subprocess_utils.py (8)
34-45: Make path checks OS-agnostic and avoid brittle assertions. Using startswith("/") is POSIX-only and parametrizing "echo"/"ls" is non-portable. Prefer Path.is_absolute() and either skip those commands on Windows or constrain them to POSIX. Also assert on the basename to avoid false matches.
Apply this diff within the test body:
```diff
-    result = get_safe_executable(command)
+    result = get_safe_executable(command)
     assert isinstance(result, str)
     assert len(result) > 0
-    assert command in result  # Command name should be in the path
+    assert Path(result).name.startswith(command)  # Command name should match basename
-
-    # Git should be an absolute path (important for project)
-    if command == "git":
-        assert result.startswith("/")
+    # Absolute path on all platforms
+    assert Path(result).is_absolute()
+    # Skip commands that are not guaranteed cross-platform
+    if sys.platform.startswith("win") and command in {"ls", "echo"}:
+        pytest.skip(f"{command} may not be an external executable on Windows")
```
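The pathlib-based assertions suggested above generalize into a small helper. This is an illustrative sketch (not the project's `get_safe_executable`), assuming only that unknown commands resolve to `None` via `shutil.which`:

```python
import shutil
from pathlib import Path

def check_executable(command: str) -> bool:
    """Resolve a command and apply the portable checks suggested above."""
    resolved = shutil.which(command)
    if resolved is None:
        return False
    p = Path(resolved)
    # is_absolute() handles both POSIX "/" and Windows drive-letter roots,
    # and comparing basenames avoids matching on parent directory names.
    return p.is_absolute() and p.name.lower().startswith(command.lower())
```

The same `Path` calls work unchanged on both platforms, which is the point of dropping the `startswith("/")` check.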
249-261: Nice security test; consider asserting fail-fast (no subprocess.run call). Optional: monkeypatch subprocess.run to ensure it isn't invoked when an executable override is provided, proving fail-fast behavior.
Example:
```diff
 @pytest.mark.parametrize(
@@
-    def test_rejects_executable_override(self, function, args, kwargs):
+    def test_rejects_executable_override(self, function, args, kwargs, monkeypatch):
         """Test that functions reject executable override for security."""
-        with pytest.raises(ValueError, match="Overriding 'executable' is not allowed"):
-            function(*args, **kwargs)
+        called = {"run": False}
+        def fake_run(*_a, **_k):
+            called["run"] = True  # should never be set
+            raise AssertionError("subprocess.run should not be called on override")
+        monkeypatch.setattr("subprocess.run", fake_run)
+        with pytest.raises(ValueError, match="Overriding 'executable' is not allowed"):
+            function(*args, **kwargs)
+        assert called["run"] is False
```
87-93: Gate Cargo-dependent tests to avoid environment flakiness. These will fail on machines/CI images without Rust. Skip when cargo isn't present.
Apply this diff:
```diff
-    def test_cargo_version(self):
+    @pytest.mark.skipif(__import__("shutil").which("cargo") is None, reason="cargo not installed in PATH")
+    def test_cargo_version(self):
```

Also add this import near the top of the file (outside this hunk):

```python
import shutil  # for which()
```
95-99: Similarly gate the dry-run check. Skip when cargo isn't available to keep the suite green across environments.

```diff
-    def test_cargo_command_with_custom_params(self):
+    @pytest.mark.skipif(__import__("shutil").which("cargo") is None, reason="cargo not installed in PATH")
+    def test_cargo_command_with_custom_params(self):
```
131-139: Echo may not be an external executable on Windows. There, echo is typically a shell builtin, making get_safe_executable("echo") unreliable. Skip on Windows or switch to a cross-platform command (e.g., git).

```diff
-        result = run_safe_command("echo", ["no capture"], capture_output=False)
+        if sys.platform.startswith("win"):
+            pytest.skip("echo may not be an external executable on Windows")
+        result = run_safe_command("echo", ["no capture"], capture_output=False)
```
245-247: Use a cross-platform failing command to validate the check=True default. `ls` may not exist on Windows; reuse git with an invalid subcommand to guarantee failure everywhere.

```diff
-        with pytest.raises(subprocess.CalledProcessError):
-            run_safe_command("ls", ["/definitely-nonexistent-directory"])
+        with pytest.raises(subprocess.CalledProcessError):
+            run_safe_command("git", ["invalid-git-subcommand-xyz"])
```
164-173: Make git-repo assumptions explicit to avoid false failures. These tests assume execution inside a git repo with history. Consider skipping when not inside a repo to reduce environmental brittleness.

```diff
-        assert check_git_repo() is True
+        if not check_git_repo():
+            pytest.skip("Not running inside a git repository")
+        assert check_git_repo() is True
@@
-        assert check_git_history() is True
+        if not check_git_history():
+            pytest.skip("Repository has no commit history")
+        assert check_git_history() is True
```
182-189: Remote name 'origin' isn't guaranteed. Some clones/remotes use a different default; skip if 'origin' is absent to keep tests resilient.

```diff
-        remote_url = get_git_remote_url("origin")
+        remotes = run_git_command(["remote"]).stdout.split()
+        if "origin" not in remotes:
+            pytest.skip("No 'origin' remote configured")
+        remote_url = get_git_remote_url("origin")
```

cspell.json (3)
14-14: Nice add: ignore local virtualenvs. Optional: add direnv, too. Consider also ignoring direnv environments to reduce noise from developers using it.

```diff
 "**/venv/**",
+"**/.venv/**",
+"**/.direnv/**",
```
156-165: Keep "Precomputes"; drop the lowercase form unless needed. Found "Precomputes" in benches/README.md:62; cspell.json contains both "Precomputes" and "precomputes" at lines 163–164. Remove the lowercase "precomputes" to slim the dictionary; keep both only if cspell still flags the lowercase form during CI/local runs.
237-247: Remove truncated tokens from cspell.json

`bdelet`, `bintroduc`, `bremov` are truncated tokens present only in cspell.json; remove them to avoid masking real spelling mistakes.
File: cspell.json (≈ lines 237–247)
"zerocopy", - "bdelet", - "bintroduc", - "bremov", "endgroup"scripts/tests/test_changelog_utils.py (6)
28-53: Good parameterized coverage for escape_markdown; add a couple of “no-op” cases.

Consider adding explicit cases that should remain unchanged (parentheses/braces) to guard against future over-escaping.
("[Link](url) with *emphasis*", "\\[Link\\](url) with \\*emphasis\\*"), # Edge cases ("", ""), ("No special chars", "No special chars"), ("***", "\\*\\*\\*"), ("___", "\\_\\_\\_"), ("```", "\\`\\`\\`"), ("[[[]]]", "\\[\\[\\[\\]\\]\\]"), + ("Text with (parens)", "Text with (parens)"), + ("Curly {braces}", "Curly {braces}"),
55-69: SemVer “valid” tests look solid; assert the return to lock in API.

The function returns True on success—assert it to codify the contract.
  def test_validate_semver_valid(self, version):
      """Test semantic version validation with valid versions."""
-     # Should not raise any exception
-     ChangelogUtils.validate_semver(version)
+     # Should not raise and should return True
+     assert ChangelogUtils.validate_semver(version) is True
71-86: Add leading-zero invalid cases; current validator likely accepts them.

Per SemVer 2.0.0, numeric identifiers must not have leading zeros (both in core and pre-release). Add tests; if they fail, tighten the regex in changelog_utils.validate_semver accordingly.
@pytest.mark.parametrize(
    "version",
    [
        "1.0.0",  # Missing 'v' prefix
        "v1.0",  # Missing patch version
        "v1",  # Missing minor and patch
        "vx.y.z",  # Non-numeric components
        "v1.0.0.0",  # Too many components
+       "v01.2.3",  # Leading zero in MAJOR
+       "v1.02.3",  # Leading zero in MINOR
+       "v1.2.03",  # Leading zero in PATCH
+       "v1.2.3-01",  # Leading zero in pre-release numeric id
+       "v1.2.3-rc.01",  # Leading zero in dotted pre-release numeric id
        "",  # Empty string
        "random-text",  # Not a version at all
    ],
)

If these additions fail, consider updating the validator (outside this test file) to a stricter SemVer regex:
# scripts/changelog_utils.py: validate_semver
semver_pattern = (
    r"^v"
    r"(0|[1-9]\d*)\."
    r"(0|[1-9]\d*)\."
    r"(0|[1-9]\d*)"
    r"(?:-(?:"
    r"(?:0|[1-9]\d*)"  # numeric id
    r"|(?:[0-9A-Za-z-][0-9A-Za-z-]*)"  # non-numeric id
    r")(?:\.(?:0|[1-9]\d*|[0-9A-Za-z-][0-9A-Za-z-]*))*"
    r")?"
    r"(?:\+[0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*)?$"
)
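One caveat with the pattern above: its non-numeric branch `[0-9A-Za-z-][0-9A-Za-z-]*` can still absorb an all-digit identifier like "01" in a dotted pre-release, so "v1.2.3-rc.01" would pass. A sketch closer to the semver.org reference grammar (whose alphanumeric branch requires a non-digit, so "01" can only parse as a numeric identifier and is rejected for its leading zero), with the repository's leading "v":

```python
import re

# Strict SemVer 2.0.0 core + pre-release + build metadata, "v"-prefixed.
# The alphanumeric-identifier branch \d*[A-Za-z-][0-9A-Za-z-]* demands a
# non-digit, which is what rejects leading-zero numeric ids everywhere.
SEMVER_RE = re.compile(
    r"^v(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-(?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[A-Za-z-][0-9A-Za-z-]*))*)?"
    r"(?:\+[0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*)?$"
)

for version in ("v1.2.3", "v1.2.3-rc.1", "v01.2.3", "v1.2.3-rc.01"):
    print(version, bool(SEMVER_RE.match(version)))
```

Either way, the leading-zero parametrize cases above will tell you which variant the validator actually implements.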
218-242: Avoid global chdir and ensure cleanup on setup failure.

Switch to tmp_path + monkeypatch.chdir (or pass cwd= to run_git_command) to reduce flakiness and guarantee cleanup even if setup fails before yield.
-@pytest.fixture
-def git_repo_fixture():
-    """Fixture for temporary git repository setup."""
-    temp_dir = tempfile.mkdtemp()
-    original_cwd = Path.cwd()
-
-    # Change to temp directory and initialize git repo
-    os.chdir(temp_dir)
-
-    # Initialize git repo with initial commit
-    run_git_command(["init"])
-    run_git_command(["config", "user.name", "Test User"])
-    run_git_command(["config", "user.email", "test@example.com"])
-
-    # Create initial commit
-    Path("README.md").write_text("# Test Repo\n")
-    run_git_command(["add", "README.md"])
-    run_git_command(["commit", "-m", "Initial commit"])
-
-    yield temp_dir
-
-    # Clean up
-    os.chdir(original_cwd)
-    if temp_dir:
-        shutil.rmtree(temp_dir, ignore_errors=True)
+@pytest.fixture
+def git_repo_fixture(tmp_path: Path, monkeypatch):
+    """Fixture for temporary git repository setup (isolated cwd)."""
+    monkeypatch.chdir(tmp_path)
+    run_git_command(["init"], cwd=tmp_path)
+    run_git_command(["config", "user.name", "Test User"], cwd=tmp_path)
+    run_git_command(["config", "user.email", "test@example.com"], cwd=tmp_path)
+    (tmp_path / "README.md").write_text("# Test Repo\n", encoding="utf-8")
+    run_git_command(["add", "README.md"], cwd=tmp_path)
+    run_git_command(["commit", "-m", "Initial commit"], cwd=tmp_path)
+    yield str(tmp_path)
279-304: Simplify: remote add/remove is unnecessary here.

Since get_git_remote_url is patched, adding/removing a test remote adds overhead without affecting behavior.
  def test_git_repository_url_normalization(self, git_repo_fixture, input_url, expected_url):
      """Test repository URL normalization from various formats."""
      # Verify we're in the git repository set up by the fixture
      assert str(git_repo_fixture) in str(Path.cwd())
-     # Add a test remote
-     run_git_command(["remote", "add", "test-origin", input_url])
-
-     try:
-         # Mock the get_git_remote_url to return our test URL
-         with patch("changelog_utils.get_git_remote_url", return_value=input_url):
-             result = ChangelogUtils.get_repository_url()
-             assert result == expected_url
-     finally:
-         # Clean up remote
-         run_git_command(["remote", "remove", "test-origin"])
+     # Mock the get_git_remote_url to return our test URL
+     with patch("changelog_utils.get_git_remote_url", return_value=input_url):
+         result = ChangelogUtils.get_repository_url()
+         assert result == expected_url
361-375: Broaden invalid URL coverage to catch near-miss patterns.

Add a few more unsupported forms to harden normalization boundaries.
@pytest.mark.parametrize(
    "invalid_url",
    [
        "not-a-url",
        "ftp://invalid.com/repo",
        "https://notgithub.com/owner/repo",
+       "ssh://git@gitlab.com/owner/repo.git",  # non-GitHub host
+       "git://notgithub.com/owner/repo",  # git protocol to non-GitHub
+       "git@github.com/owner/repo",  # scp-like form missing ':'
    ],
)

scripts/enhance_commits.py (1)
20-20: Broaden commit-bullet detection to allow “*” bullets.

Some generators and Markdown styles use “*” bullets. This keeps detection resilient.
-COMMIT_BULLET_RE = re.compile(r"^\s*-\s*\*\*")
+COMMIT_BULLET_RE = re.compile(r"^\s*[-*]\s*\*\*")

benches/profiling_suite.rs (3)
446-451: Percentile calculation uses floor; switch to true nearest-rank (ceil).

Current rank uses integer division (floor). Nearest-rank per Hyndman & Fan is ceil(p/100*n). Fix off-by-one bias.
- let n = values.len();
- let rank = percentile.saturating_mul(n).saturating_div(100).max(1); // 1-based nearest-rank, prevent overflow
- let index = rank.saturating_sub(1).min(n - 1); // 0-based index
+ let n = values.len();
+ // nearest-rank: ceil(p/100 * n), clamped to [1, n]
+ let rank = ((percentile.saturating_mul(n)).saturating_add(99)).saturating_div(100).clamp(1, n);
+ let index = rank - 1; // safe: rank in [1, n]
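The floor-vs-ceil distinction is easy to see with small samples. A sketch of the nearest-rank definition (same arithmetic as the Rust fix, written out in Python):

```python
import math

def nearest_rank_percentile(values, p):
    """Nearest-rank percentile: rank = ceil(p/100 * n), 1-based, clamped to [1, n]."""
    xs = sorted(values)
    n = len(xs)
    rank = min(max(math.ceil(p * n / 100), 1), n)
    return xs[rank - 1]

# For p95 over ten samples 1..10: floor gives rank 95*10//100 = 9 (value 9),
# while nearest-rank gives ceil(9.5) = 10, i.e. the tenth sample.
data = list(range(1, 11))
print(nearest_rank_percentile(data, 95))
```

The integer form in the diff, `(p*n + 99) / 100`, is exactly ceiling division by 100, so it matches `math.ceil(p * n / 100)` for non-negative integers.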
194-201: Avoid silent under-sizing on grid when f64→usize cast fails.
`cast::<f64, usize>(raw).unwrap_or(2)` collapses to a tiny grid if `raw` exceeds `usize`. Prefer a saturating clamp so failures don’t masquerade as small runs.

- let points_per_dim = if raw.is_finite() && raw >= 2.0 {
-     // Use safe conversion from f64 to usize
-     cast::<f64, usize>(raw).unwrap_or(2)
- } else {
-     2
- };
+ let points_per_dim = if raw.is_finite() && raw >= 2.0 {
+     // Saturate instead of shrinking to 2 on cast failure
+     cast::<f64, usize>(raw).unwrap_or(usize::MAX).max(2)
+ } else {
+     2
+ };
636-721: Expose query throughput in reports.

Add a throughput metric (queries executed) to make latency benches comparable across inputs.
Minimal change: set throughput to the cap before the bench.
 let mut group = c.benchmark_group("query_latency");
 group.measurement_time(bench_time(90));
+group.throughput(Throughput::Elements(MAX_QUERY_RESULTS as u64));
524-545: Param coverage for memory parsing looks solid; add a “TB” case.

Extend to cover larger-unit inputs to guard against future changes.
("16,5 GB", 16.5), # Comma decimal separator + ("1.0 TB", 1.0), # Larger unit still parses numeric partsrc/geometry/util.rs (1)
1052-1113: Factorial to f64: error-path is correct; minor simplification available.

Map the precise `CoordinateConversionError` into your `ValueConversionError` instead of rebuilding the message.

- let factorial_val = safe_usize_to_scalar::<f64>(factorial_usize).map_err(|_| {
-     CircumcenterError::ValueConversion(ValueConversionError::ConversionFailed {
-         value: factorial_usize.to_string(),
-         from_type: "usize",
-         to_type: "f64",
-         details: "Factorial value too large for f64 precision".to_string(),
-     })
- })?;
+ let factorial_val = safe_usize_to_scalar::<f64>(factorial_usize)
+     .map_err(|e| CircumcenterError::ValueConversion(ValueConversionError::ConversionFailed {
+         value: factorial_usize.to_string(),
+         from_type: "usize",
+         to_type: "f64",
+         details: e.to_string(),
+     }))?;

examples/memory_analysis.rs (3)
41-47: Unify bounds with benches via a named constant (consistency, easier tuning).

Use a shared BOUNDS constant (same as benches: -100..100) to keep distributions comparable and reduce magic numbers.
Apply within this hunk:
- generate_random_triangulation::<f64, (), (), $dim>(
-     n_points,
-     (-50.0, 50.0),
-     None,
-     Some(seed),
- )
+ generate_random_triangulation::<f64, (), (), $dim>(
+     n_points,
+     BOUNDS,
+     None,
+     Some(seed),
+ )

And add once near the top of the file (outside this hunk):
const BOUNDS: (f64, f64) = (-100.0, 100.0);
90-100: Trim noisy “0.00 MiB” output when below 1 MiB (cosmetic).

Minor UX tweak: only print MiB once KiB ≥ 1024 to reduce clutter.
Example pattern:
if tri_kb >= 1024.0 {
    println!("  Triangulation memory: {tri_kb:.1} KiB ({tri_mib:.2} MiB, {bytes_per_vertex:.0} bytes/vertex)");
} else {
    println!("  Triangulation memory: {tri_kb:.1} KiB ({bytes_per_vertex:.0} bytes/vertex)");
}
10-29: Deduplicate measure_with_result helper across examples/benches.

Consider a small internal module (cfg-gated) to avoid drift between copies.
benches/memory_scaling.rs (3)
23-29: Avoid lossy usize conversions; keep counters/bytes as u64.

On 32-bit runners or large benches, `try_into().unwrap_or(0)` can silently zero values. Store counts/bytes as u64 and assign directly.
- allocations_total: usize,
- bytes_total: usize,
- allocations_peak: usize,
- bytes_peak: usize,
+ allocations_total: u64,
+ bytes_total: u64,
+ allocations_peak: u64,
+ bytes_peak: u64,
@@
- allocations_total: info.count_total.try_into().unwrap_or(0),
- bytes_total: info.bytes_total.try_into().unwrap_or(0),
- allocations_peak: info.count_max.try_into().unwrap_or(0),
- bytes_peak: info.bytes_max.try_into().unwrap_or(0),
+ allocations_total: info.count_total,
+ bytes_total: info.bytes_total,
+ allocations_peak: info.count_max,
+ bytes_peak: info.bytes_max,

No other changes required; formatting with `{}` handles u64.
Also applies to: 59-66
158-163: Extract generation bounds to a const and reuse across benches/examples.

Reduces magic numbers and keeps distributions consistent with the example.
- (-100.0, 100.0),
+ BOUNDS,

Add near the top of this file:
const BOUNDS: (f64, f64) = (-100.0, 100.0);
349-358: Summary prints are useful; consider MiB alongside KB for large runs (optional).

Not required, but aligning display with the example can help eyeball larger datasets.
scripts/tests/test_benchmark_utils.py (3)
1778-1792: Stabilize mock: ensure stdout is a real string for parsing.

`_generator.run_circumsphere_benchmarks()` calls `result.stdout.split()`; returning a bare Mock can behave oddly under iteration. Set stdout to an empty string to avoid brittle behavior.
@patch("benchmark_utils.run_cargo_command") def test_run_circumsphere_benchmarks_success(self, mock_cargo): """Test running circumsphere benchmarks successfully.""" - mock_cargo.return_value = Mock() + mock_cargo.return_value = Mock(stdout="")
1680-1691: Avoid incidental git calls during generator initialization.

This test instantiates PerformanceSummaryGenerator which probes git in init. Patch run_git_command here too to keep the test hermetic.
-@patch("benchmark_utils.get_git_commit_hash")
-@patch("benchmark_utils.datetime")
-def test_generate_markdown_content(self, mock_datetime, mock_git_commit):
+@patch("benchmark_utils.get_git_commit_hash")
+@patch("benchmark_utils.run_git_command")
+@patch("benchmark_utils.datetime")
+def test_generate_markdown_content(self, mock_datetime, mock_run_git, mock_git_commit):
     """Test generating complete markdown content."""
+    # Avoid calling actual git in __init__ helpers
+    mock_run_git.side_effect = Exception("git unavailable in test")
1841-1862: Make summary generation test fully isolated from git.

generate_summary() triggers init → version/date discovery. Patch run_git_command to eliminate flakiness on CI hosts without git.
-    def test_generate_summary_success(self, capsys):
+    @patch("benchmark_utils.run_git_command")
+    def test_generate_summary_success(self, mock_git, capsys):
         """Test successful generation of performance summary."""
+        mock_git.side_effect = Exception("git unavailable in test")

scripts/benchmark_utils.py (4)
1977-1983: Tighten SHA validation to 7–40 hex (aligns with other validation).

extract_baseline_commit() uses 7–40; here it’s 6–40. Harmonize to reduce false positives on too-short prefixes.
- if not re.match(r"^[0-9A-Fa-f]{6,40}$", baseline_commit):
+ if not re.match(r"^[0-9A-Fa-f]{7,40}$", baseline_commit):
      return False, "invalid_baseline_sha"
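A quick check of the tightened pattern against representative inputs; 7 hex characters is git's default short-SHA length, so harmonizing on `{7,40}` matches what the other helper already accepts:

```python
import re

# Tightened commit-SHA check: 7-40 hex characters, rejecting 6-char
# prefixes that the looser {6,40} pattern used to accept.
SHA_RE = re.compile(r"^[0-9A-Fa-f]{7,40}$")

for candidate in ("abc1234", "abc123", "A" * 40, "xyz9999"):
    print(candidate, bool(SHA_RE.match(candidate)))
```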
1193-1200: Handle uppercase 'D' in dimension folders.

Fallback regex only matches trailing 'd'. Support both 'd' and 'D'.
- m = re.search(r"(\d+)d$", dim_dir.name) + m = re.search(r"(\d+)[dD]$", dim_dir.name)
1244-1251: Also accept uppercase 'D' during fallback discovery.

Consistent with the directory scan above.
- dim_match = re.search(r"(\d+)d$", dim_dir.name) + dim_match = re.search(r"(\d+)[dD]$", dim_dir.name)
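Both call sites use the same pattern, so the case-insensitive character class behaves identically in each. A sketch with hypothetical directory names for illustration:

```python
import re

# Trailing-dimension match accepting either case, e.g. "results_3d"
# or "results_4D" (directory names here are made up for the demo).
DIM_RE = re.compile(r"(\d+)[dD]$")

for name in ("results_3d", "results_4D", "results_5x"):
    m = DIM_RE.search(name)
    print(name, m.group(1) if m else None)
```

An equivalent alternative is compiling `r"(\d+)d$"` with `re.IGNORECASE`, which keeps the pattern itself unchanged.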
781-793: Leverage existing hardware parser to build the summary line.

You already import HardwareComparator; reuse its parser to avoid brittle index math and keep formats in sync with writers.
- if not any(line.startswith("Hardware:") for line in metadata_lines) and "Hardware Information:" in content:
-     # Emit a concise single-line summary from the block's first two fields
-     for i, line in enumerate(first_lines):
-         if line.startswith("Hardware Information:"):
-             cpu_line = first_lines[i + 2].strip() if i + 2 < len(first_lines) else ""
-             cores_line = first_lines[i + 3].strip() if i + 3 < len(first_lines) else ""
-             if not cpu_line and not cores_line:
-                 continue
-             cpu = cpu_line.removeprefix("CPU: ").strip()
-             cores = cores_line.removeprefix("CPU Cores: ").strip()
-             summary = f"{cpu} ({cores} cores)" if cpu and cores else cpu or "Unknown CPU"
-             metadata_lines.append(f"Hardware: {summary}")
-             break
+ if not any(line.startswith("Hardware:") for line in metadata_lines) and "Hardware Information:" in content:
+     hw = HardwareComparator.parse_baseline_hardware(content)
+     cpu = hw.get("CPU", "")
+     cores = hw.get("CPU_CORES", "")
+     if cpu:
+         summary = f"{cpu} ({cores} cores)" if cores else cpu
+         metadata_lines.append(f"Hardware: {summary}")
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- .github/workflows/codecov.yml (2 hunks)
- CHANGELOG.md (2 hunks)
- benches/memory_scaling.rs (8 hunks)
- benches/profiling_suite.rs (1 hunks)
- benches/triangulation_creation.rs (1 hunks)
- cspell.json (10 hunks)
- examples/memory_analysis.rs (4 hunks)
- examples/triangulation_3d_50_points.rs (9 hunks)
- scripts/benchmark_models.py (1 hunks)
- scripts/benchmark_utils.py (15 hunks)
- scripts/enhance_commits.py (9 hunks)
- scripts/tests/test_benchmark_models.py (1 hunks)
- scripts/tests/test_benchmark_utils.py (7 hunks)
- scripts/tests/test_changelog_utils.py (6 hunks)
- scripts/tests/test_enhance_commits.py (1 hunks)
- scripts/tests/test_hardware_utils.py (9 hunks)
- scripts/tests/test_subprocess_utils.py (4 hunks)
- src/geometry/util.rs (13 hunks)
✅ Files skipped from review due to trivial changes (1)
- CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (4)
- scripts/tests/test_enhance_commits.py
- scripts/benchmark_models.py
- benches/triangulation_creation.rs
- examples/triangulation_3d_50_points.rs
🧰 Additional context used
📓 Path-based instructions (1)
scripts/*.py
⚙️ CodeRabbit configuration file
scripts/*.py: These are Python utility scripts for changelog and commit processing.
Focus on code quality, maintainability, and adherence to Python best practices.
The scripts use ruff for comprehensive linting and formatting (pylint has been retired).
Files:
- scripts/enhance_commits.py
- scripts/benchmark_utils.py
🧠 Learnings (3)
📚 Learning: 2025-08-28T03:49:30.582Z
Learnt from: acgetchell
PR: acgetchell/delaunay#54
File: scripts/generate_changelog.sh:416-435
Timestamp: 2025-08-28T03:49:30.582Z
Learning: The generate_changelog.sh script processes template headers from auto-changelog (### Changes, ### Fixed Issues) and transforms them into Keep a Changelog format sections (### Added, ### Changed, ### Fixed, etc.). When analyzing changelog generation scripts, check both the template and the final output to understand the transformation pipeline.
Applied to files:
scripts/enhance_commits.py
📚 Learning: 2025-09-04T20:03:49.896Z
Learnt from: acgetchell
PR: acgetchell/delaunay#65
File: WARP.md:249-254
Timestamp: 2025-09-04T20:03:49.896Z
Learning: The delaunay repository contains all the specialized integration tests documented in WARP.md: circumsphere_debug_tools.rs, robust_predicates_comparison.rs, convex_hull_bowyer_watson_integration.rs, and allocation_api.rs, plus additional test files like robust_predicates_showcase.rs and coordinate_conversion_errors.rs.
Applied to files:
- benches/memory_scaling.rs
- examples/memory_analysis.rs
📚 Learning: 2025-09-02T20:32:05.985Z
Learnt from: acgetchell
PR: acgetchell/delaunay#60
File: cspell.json:103-103
Timestamp: 2025-09-02T20:32:05.985Z
Learning: In cspell.json for the delaunay project, the word "itional" is intentionally added to the dictionary because it comes from a regex pattern, not a typo.
Applied to files:
cspell.json
🧬 Code graph analysis (11)
scripts/enhance_commits.py (2)
scripts/benchmark_utils.py (1)
main(2339-2354)

scripts/changelog_utils.py (1)
main(804-837)
scripts/tests/test_benchmark_utils.py (2)
scripts/benchmark_models.py (2)
BenchmarkData(14-55)
with_timing(28-34)

scripts/benchmark_utils.py (28)
BenchmarkRegressionHelper(1846-2132)CriterionParser(1137-1300)PerformanceComparator(1393-1704)PerformanceSummaryGenerator(82-1134)WorkflowHelper(1707-1843)parse_estimates_json(1141-1189)determine_tag_name(1711-1736)create_metadata(1739-1783)display_baseline_summary(1786-1819)sanitize_artifact_name(1822-1843)prepare_baseline(1850-1889)set_no_baseline_status(1892-1902)extract_baseline_commit(1905-1955)determine_benchmark_skip(1958-2001)display_skip_message(2004-2017)display_no_baseline_message(2020-2030)compare_with_baseline(1401-1463)run_regression_test(2033-2066)display_results(2069-2081)_get_current_version(225-251)_get_version_date(253-272)_parse_baseline_results(764-821)_parse_comparison_results(823-870)_get_circumsphere_performance_results(677-762)_parse_numerical_accuracy_output(302-341)_run_circumsphere_benchmarks(274-300)_parse_circumsphere_benchmark_results(389-411)_get_numerical_accuracy_analysis(343-387)
benches/profiling_suite.rs (7)
src/geometry/util.rs (53)
core(1388-1388)generate_grid_points(1366-1432)generate_grid_points(3814-3814)generate_grid_points(3847-3847)generate_grid_points(3863-3863)generate_grid_points(3879-3879)generate_grid_points(3888-3888)generate_grid_points(3902-3902)generate_grid_points(3912-3912)generate_grid_points(3932-3932)generate_grid_points(3953-3953)generate_grid_points(3966-3966)generate_grid_points(4211-4211)generate_poisson_points(1476-1566)generate_poisson_points(4002-4002)generate_poisson_points(4035-4035)generate_poisson_points(4068-4068)generate_poisson_points(4069-4069)generate_poisson_points(4083-4083)generate_poisson_points(4090-4090)generate_poisson_points(4101-4101)generate_poisson_points(4114-4114)generate_poisson_points(4120-4120)generate_poisson_points(4129-4129)generate_poisson_points(4152-4152)generate_poisson_points(4164-4164)generate_random_points_seeded(1293-1317)generate_random_points_seeded(3576-3576)generate_random_points_seeded(3577-3577)generate_random_points_seeded(3596-3596)generate_random_points_seeded(3597-3597)generate_random_points_seeded(3615-3615)generate_random_points_seeded(3616-3616)generate_random_points_seeded(3634-3634)generate_random_points_seeded(3635-3635)generate_random_points_seeded(3654-3654)generate_random_points_seeded(3655-3655)generate_random_points_seeded(3659-3659)generate_random_points_seeded(3660-3660)generate_random_points_seeded(3664-3664)generate_random_points_seeded(3665-3665)generate_random_points_seeded(3669-3669)safe_usize_to_scalar(158-158)safe_usize_to_scalar(480-517)safe_usize_to_scalar(1101-1101)safe_usize_to_scalar(1410-1410)std(238-238)std(278-278)std(494-494)std(502-502)std(1971-1971)std(2059-2059)v(143-143)src/core/triangulation_data_structure.rs (2)
default(207-209)new(871-903)src/geometry/algorithms/convex_hull.rs (2)
default(1117-1123)from_triangulation(239-271)benches/memory_scaling.rs (1)
new(35-66)benches/triangulation_vs_hull_memory.rs (1)
new(43-84)src/core/boundary.rs (2)
tds(465-469)boundary_facets(82-103)src/geometry/predicates.rs (1)
insphere_lifted(500-616)
scripts/tests/test_benchmark_models.py (1)
scripts/benchmark_models.py (15)
BenchmarkData(14-55)CircumspherePerformanceData(59-65)CircumsphereTestCase(69-99)VersionComparisonData(103-120)format_benchmark_tables(306-368)format_throughput_value(286-303)format_time_value(255-283)parse_benchmark_header(126-142)parse_throughput_data(181-210)parse_time_data(149-178)with_timing(28-34)with_throughput(36-42)to_baseline_format(44-55)get_winner(76-80)get_relative_performance(82-99)
benches/memory_scaling.rs (3)
src/geometry/util.rs (15)
generate_random_triangulation(1669-1721)generate_random_triangulation(4262-4262)generate_random_triangulation(4271-4271)generate_random_triangulation(4280-4280)generate_random_triangulation(4284-4284)generate_random_triangulation(4296-4296)generate_random_triangulation(4306-4306)generate_random_triangulation(4317-4317)generate_random_triangulation(4321-4321)generate_random_triangulation(4342-4342)generate_random_triangulation(4349-4349)generate_random_triangulation(4356-4356)generate_random_triangulation(4369-4369)generate_random_triangulation(4387-4387)generate_random_triangulation(4395-4395)benches/triangulation_vs_hull_memory.rs (4)
measure_with_result(271-280)measure_with_result(283-288)new(43-84)write_csv_header(112-117)src/core/triangulation_data_structure.rs (4)
cells(485-487)number_of_vertices(595-597)number_of_cells(753-755)new(871-903)
scripts/benchmark_utils.py (3)
scripts/benchmark_models.py (8)
BenchmarkData(14-55)CircumspherePerformanceData(59-65)CircumsphereTestCase(69-99)extract_benchmark_data(213-249)format_benchmark_tables(306-368)get_winner(76-80)with_timing(28-34)with_throughput(36-42)scripts/hardware_utils.py (3)
HardwareComparator(409-575)HardwareInfo(33-406)get_hardware_info(335-359)scripts/subprocess_utils.py (5)
ProjectRootNotFoundError(268-269)find_project_root(272-288)get_git_commit_hash(168-180)run_cargo_command(109-137)run_git_command(78-106)
scripts/tests/test_subprocess_utils.py (1)
scripts/subprocess_utils.py (6)
get_safe_executable(24-40)ExecutableNotFoundError(20-21)run_safe_command(140-164)run_git_command(78-106)run_cargo_command(109-137)run_git_command_with_input(229-264)
scripts/tests/test_changelog_utils.py (2)
scripts/changelog_utils.py (7)
escape_markdown(303-314)validate_semver(138-157)VersionError(51-52)escape_version_for_regex(290-300)run_git_command(622-649)get_repository_url(232-270)GitRepoError(47-48)scripts/subprocess_utils.py (1)
run_git_command(78-106)
scripts/tests/test_hardware_utils.py (1)
scripts/hardware_utils.py (7)
HardwareInfo(33-406)_run_command(385-406)get_rust_info(308-333)get_hardware_info(335-359)format_hardware_info(361-383)HardwareComparator(409-575)_extract_memory_value(565-575)
examples/memory_analysis.rs (4)
src/geometry/util.rs (15)
generate_random_triangulation(1669-1721)generate_random_triangulation(4262-4262)generate_random_triangulation(4271-4271)generate_random_triangulation(4280-4280)generate_random_triangulation(4284-4284)generate_random_triangulation(4296-4296)generate_random_triangulation(4306-4306)generate_random_triangulation(4317-4317)generate_random_triangulation(4321-4321)generate_random_triangulation(4342-4342)generate_random_triangulation(4349-4349)generate_random_triangulation(4356-4356)generate_random_triangulation(4369-4369)generate_random_triangulation(4387-4387)generate_random_triangulation(4395-4395)benches/memory_scaling.rs (2)
measure_with_result(122-131)measure_with_result(134-139)src/core/triangulation_data_structure.rs (2)
number_of_vertices(595-597)number_of_cells(753-755)src/geometry/algorithms/convex_hull.rs (1)
from_triangulation(239-271)
src/geometry/util.rs (3)
src/geometry/point.rs (4)
std(323-323)std(324-324)new(74-76)try_from(313-336)src/core/triangulation_data_structure.rs (2)
new(871-903)is_valid(2140-2176)src/core/vertex.rs (2)
point(403-405)is_valid(538-554)
🪛 GitHub Check: build (ubuntu-latest)
.github/workflows/codecov.yml
[failure] 92-92:
92:1 [trailing-spaces] trailing spaces
[failure] 89-89:
89:1 [trailing-spaces] trailing spaces
[failure] 86-86:
86:1 [trailing-spaces] trailing spaces
[failure] 81-81:
81:1 [trailing-spaces] trailing spaces
[failure] 59-59:
59:1 [trailing-spaces] trailing spaces
[failure] 56-56:
56:1 [trailing-spaces] trailing spaces
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: build (macos-latest)
- GitHub Check: build (windows-latest)
- GitHub Check: Codacy Security Scan
- GitHub Check: Code Coverage
- GitHub Check: performance-regression
🔇 Additional comments (36)
.github/workflows/codecov.yml (1)
71-79: Use regex-style excludes and bump timeout to 300s.

Tarpaulin's `--exclude-files` is regex-based; glob-like `"benches/*"` / `"examples/*"` can fail. 120s is often tight on CI.
File: .github/workflows/codecov.yml Lines: 71-79
Apply:
cargo tarpaulin \
    --out Xml \
    --output-dir coverage \
-   --exclude-files "benches/*" "examples/*" \
+   --exclude-files '^benches/.*' \
+   --exclude-files '^examples/.*' \
    --all-features \
    --workspace \
-   --timeout 120 \
+   --timeout 300 \
    --verbose \
    --implicit-test-threads

Verification: coverage/cobertura.xml was not present in the run. Execute locally to confirm no benches/examples appear in the coverage report:
#!/bin/bash
# Fail if any benches/examples paths appear in the report
if rg -n '(^|/)(benches|examples)/' coverage/cobertura.xml; then
    echo "::error::Found benches/examples in coverage report"
    exit 1
fi

scripts/tests/test_subprocess_utils.py (4)
46-53: Good negative-path coverage

Asserting the error type, a stable fragment of the message, and inclusion of the command name is solid.
121-127: Text-enforcement behavior validated

This correctly asserts that text=False is ignored and stdout remains a string.
141-147: Multiple kwargs semantics look good

Checks for enforced text semantics alongside other kwargs are appropriate.
262-268: Coverage for stdin variant is complete

Good to see the override rejection is tested for the input-accepting helper as well.
cspell.json (6)
31-50: Dictionary additions look correct for hashing and pytest tooling.

"ahash" and "capsys" align with crates/pytest. No issues.
67-70: pytest term acknowledged.

"conftest" is commonly flagged; good to whitelist.
131-140: Proper nouns/terms are fine.

"misordering" and "Nijenhuis" look correct in this codebase context.
194-195: Hashing term OK.

"seahash" matches common crate/name usage.
218-229: Acronym and category term OK.

"TAOCP" and "uncategorized" are reasonable additions.
100-110: Note: “itional” is intentionally whitelisted.

Recalling prior guidance for this repo: “itional” comes from a regex pattern, not a typo. Leaving as-is.
scripts/tests/test_changelog_utils.py (3)
171-182: Regex-escaping tests: nice coverage of dots, hyphens, and plus.

Looks correct and aligns with `re.escape` behavior.
245-253: Nice: isolates git-operation tests under a dedicated class and verifies cwd from the fixture.
305-305: Good sanity checks around the fixture’s cwd before crafting commits.Also applies to: 307-308
scripts/enhance_commits.py (6)
330-336: LGTM: prioritize commit-line ingestion before release boundaries.

This reordering prevents interleaving with headers and improves categorization accuracy.
342-355: LGTM: release/file-end flushing and blank-line de-duping.

End-of-release/file flush plus final post-loop flush handle trailing entries cleanly; the extra blank-line guard avoids double spacers.
Also applies to: 359-364, 374-377
293-295: Confirm body-collection rules won’t truncate unindented fenced blocks.

The body collector stops at the first non-empty, non-indented line. If commit bodies include fenced code blocks (```), these often aren’t indented and may get split. If that’s intended, ignore; otherwise consider allowing fenced blocks until the closing fence.
I can add a unit test covering:
- A bullet with a multi-line body that includes an unindented fenced block.
- Expectation: either entire fence is captured or explicitly not, per desired behavior. Want me to open a follow-up?
41-43: Nice: explicit “add/remove … support” patterns.

These improve categorization signal for API/feature toggles.
Also applies to: 73-76
176-178: Good guard for empty entries.

Prevents regex work on None/blank input.
2-2: Docstring clarity improvement.

Clear statement of purpose; aligns with Keep a Changelog categorization.
scripts/tests/test_hardware_utils.py (2)
18-22: Good move to a pytest fixture.

Fixture-based reuse is clean and reduces setup duplication.
245-266: Nice isolation via patch.object on the instance.

Keeps tests hermetic without touching globals; good pattern.
src/geometry/util.rs (4)
150-175: format_bytes: good fallbacks and units.

Clear UX and safe conversions; tests cover multiple scales.
1367-1432: Grid generation: overflow/bytes cap and mixed-radix are solid.

The safety checks and iteration strategy look correct and memory-safe.
1230-1251: Resolved — rand = "0.9.2" confirmed; API usage matches.

Cargo.toml pins rand = "0.9.2"; repository uses `rand::rng()` and `rng.random_range`, and no `thread_rng`/`gen_range` occurrences were found.
194-204: T relies on Float methods; make the bound explicit.
`scaled_hypot_2d` calls `Float::abs`/`sqrt` but the where-clause only requires `Zero`. Tighten the bound to avoid accidental regressions if `CoordinateScalar` ever loosens constraints.

-fn scaled_hypot_2d<T: CoordinateScalar + num_traits::Zero>(x: T, y: T) -> T {
+fn scaled_hypot_2d<T: CoordinateScalar + Float>(x: T, y: T) -> T {

⛔ Skipped due to learnings
Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait includes PartialOrd through its Float supertrait bound, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

Learnt from: acgetchell
PR: acgetchell/delaunay#50
File: src/core/algorithms/robust_bowyer_watson.rs:191-195
Timestamp: 2025-08-20T21:55:04.552Z
Learning: CoordinateScalar trait guarantees PartialOrd, making direct comparisons between CoordinateScalar values valid without needing conversion to f64.

examples/memory_analysis.rs (4)
48-54: Error handling on triangulation creation is solid.

Early-return with a clear error message keeps the example resilient.
65-71: Good defensive construction of convex hull.

Graceful failure path mirrors triangulation handling; nice symmetry.
120-156: Seeds per dimension for reproducibility look good.

Deterministic runs make the example stable and comparable.
10-29: Run the example locally and confirm it builds and runs with and without the feature
Commands: `cargo run --example memory_analysis` and `cargo run --example memory_analysis --features count-allocations` — confirm both compile and run (no compile errors or panics) and that the cfg branches compile.
File: examples/memory_analysis.rs lines 10–29 (also applies to 41–47, 62–71). Automated sandbox check failed (no /proc/self/exe); manual verification required.benches/memory_scaling.rs (3)
151-171: Macro body looks clean; deterministic generation + black_box is appropriate.

Good balance between clarity and benchmark hygiene.
302-325: CSV writing resilience is solid.Header/row error handling with diagnostics is appreciated; keeps runs from aborting.
**188-193: Run benches locally and confirm the CSV is produced.**

The sandbox attempt failed with: "Could not locate working directory: no /proc/self/exe available. Is /proc mounted?". Run locally or in CI and confirm both cfg paths compile and write the CSV:

- `cargo bench --bench memory_scaling`
- `cargo bench --bench memory_scaling --features count-allocations`
- `test -f target/memory_scaling.csv && echo "CSV OK: target/memory_scaling.csv"`

Relevant locations: benches/memory_scaling.rs lines 188-193 (also applies to 195-213, 219-241, 244-266, 269-291, 382-398).
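Beyond checking that the file exists, a quick structural sanity check catches truncated or ragged output. This is a hypothetical post-bench helper (not part of the crate); the column names below are illustrative:

```python
import csv
import io

def check_csv(text):
    """Return (ok, message): the CSV must have a header row and every
    data row must have the same number of fields as the header."""
    rows = list(csv.reader(io.StringIO(text)))
    if not rows:
        return False, "empty CSV"
    header, data = rows[0], rows[1:]
    ragged = [i for i, r in enumerate(data, start=2) if len(r) != len(header)]
    if ragged:
        return False, f"ragged rows at lines {ragged}"
    return True, f"{len(data)} data rows"
```

Feed it the contents of `target/memory_scaling.csv` after a bench run to confirm the writer produced well-formed output on both cfg paths.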
scripts/tests/test_benchmark_models.py (2)
**24-181: Nice coverage and edge cases.**

Tests exercise happy paths, scientific notation, zero/negative values, sorting, and scaling-baseline pitfalls. Good balance between unit and formatting checks.

Also applies to: 183-455
**8-22: Ensure the module import works regardless of the pytest cwd (scripts/tests/test_benchmark_models.py).**

Add a small sys.path tweak at the top of the test so `from benchmark_models import ...` works when pytest is run from other directories:

```diff
@@
-import pytest
+import os
+import sys
+import pytest
+# Ensure scripts/ is importable when running from repo root or other dirs
+sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
```

Verification: couldn't run here — the provided run errored with "/bin/bash: line 3: pytest: command not found". Re-run after installing pytest, or run `PYTHONPATH=scripts python -m pytest -q scripts/tests/test_benchmark_models.py` to confirm.

scripts/benchmark_utils.py (1)
**685-699: Nice: actionable guidance with the correct CLI.**

The "No benchmark results" block points to the now-implemented generate-summary subcommand and includes --run-benchmarks. Looks good.
scripts/enhance_commits.py
```python
# Precompiled regex patterns for performance
COMMIT_BULLET_RE = re.compile(r"^\s*-\s*\*\*")
TITLE_FALLBACK_RE = re.compile(r"-\s+([^[(]+?)(?:\s+\(#\d+\))?\s*(?:\[`[a-f0-9]{7,40}`\].*)?$", re.IGNORECASE)
```
🛠️ Refactor suggestion
**Make the title fallback robust to indentation.**

`TITLE_FALLBACK_RE` fails when bullets are indented (common in Markdown). Anchor the pattern with optional leading whitespace:

```diff
-TITLE_FALLBACK_RE = re.compile(r"-\s+([^[(]+?)(?:\s+\(#\d+\))?\s*(?:\[`[a-f0-9]{7,40}`\].*)?$", re.IGNORECASE)
+TITLE_FALLBACK_RE = re.compile(
+    r"^\s*-\s+([^[(]+?)(?:\s+\(#\d+\))?\s*(?:\[`[a-f0-9]{7,40}`\].*)?$",
+    re.IGNORECASE,
+)
```

Also applies to: 184-187
🤖 Prompt for AI Agents
In scripts/enhance_commits.py around lines 21 and 184-187, TITLE_FALLBACK_RE
doesn't match indented Markdown bullets; modify the regex to anchor the pattern
with optional leading whitespace (e.g. change
r"-\s+([^[(]+?)(?:\s+\(#\d+\))?\s*(?:\[`[a-f0-9]{7,40}`\].*)?$" to
r"^\s*-\s+([^[(]+?)(?:\s+\(#\d+\))?\s*(?:\[`[a-f0-9]{7,40}`\].*)?$") and add the
re.MULTILINE flag to the compile call so it will match bullet lines anywhere in
a multi-line commit/message string; apply the same change to the other
occurrence at lines 184-187.
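To see why the anchored, `re.MULTILINE` variant matters, compare the two patterns on a multi-line changelog where the bullet is indented and is not the last line — the original pattern (whose `$` only matches at end of string) finds nothing, while the fixed one matches:

```python
import re

# Original pattern: no ^ anchor and no MULTILINE, so `$` can only match
# at the end of the whole string.
OLD_RE = re.compile(
    r"-\s+([^[(]+?)(?:\s+\(#\d+\))?\s*(?:\[`[a-f0-9]{7,40}`\].*)?$",
    re.IGNORECASE,
)
# Suggested fix: anchored with optional leading whitespace, plus MULTILINE
# so the pattern matches bullet lines anywhere in the string.
NEW_RE = re.compile(
    r"^\s*-\s+([^[(]+?)(?:\s+\(#\d+\))?\s*(?:\[`[a-f0-9]{7,40}`\].*)?$",
    re.IGNORECASE | re.MULTILINE,
)

changelog = "## Changes\n  - Fix title parsing (#68) [`abcdef1`]\ntrailing text\n"
```

`OLD_RE.search(changelog)` returns `None`; `NEW_RE.search(changelog)` captures the title "Fix title parsing" from the indented bullet.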
Refactors memory measurement, improves CI coverage reporting, and fixes benchmark issues.

- Refactors memory measurement and random point generation into utility modules for reuse across examples and benchmarks, promoting code sharing and consistency.
- Enhances CI coverage reporting by verifying the absence of `benches/` and `examples/` paths in coverage reports, ensuring that benchmark and example code does not skew coverage metrics of the core library code.
- Improves benchmark stability by capping point generation at `usize::MAX` and handling edge cases in percentile calculations. Adds throughput measurement to query latency benchmarks.
- Improves reporting in memory analysis tools.
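The percentile edge cases called out above (empty input, a single sample, p = 0 or 100) can be handled along these lines. A hypothetical Python sketch using linear interpolation; the benchmark's actual implementation is Rust and may differ:

```python
def percentile(sorted_vals, p):
    """Percentile of an ascending-sorted list via linear interpolation.
    Edge cases: empty input returns None; a single element is returned
    for any p; p=0 and p=100 map to the min and max exactly."""
    if not sorted_vals:
        return None
    if len(sorted_vals) == 1:
        return sorted_vals[0]
    rank = (p / 100.0) * (len(sorted_vals) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = rank - lo
    return sorted_vals[lo] * (1.0 - frac) + sorted_vals[hi] * frac
```

Guarding the empty and single-element cases up front is what keeps latency reports from panicking on short runs.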
Refactors import statements for improved code readability and maintainability. Configures ruff to enforce consistent import ordering and removes unnecessary local imports. Updates WARP.md with current ruff-based import checking guidance.
Overview
This PR adds comprehensive memory profiling capabilities and implements safe performance optimizations for the v0.4.3 release.
Key Changes
🔍 Memory Profiling System
- `count-allocations` feature flag

⚡ Performance Optimizations
🧹 Code Quality Improvements
📊 Benchmarking Infrastructure
Testing
Performance Impact
Version
This represents the v0.4.2 → v0.4.3 release with backward-compatible improvements.