
fix(cross): performance improvements #629

Merged
s0up4200 merged 62 commits into main from fix/perf/cross
Nov 24, 2025
Conversation

@KyleSanderson (Contributor) commented Nov 22, 2025

Summary by CodeRabbit

  • Refactor

    • Faster, more reliable torrent syncing and caching via batch operations and normalized hashes.
  • Behavior

    • Consistent string normalization and pointer-safe metadata handling; streaming-based matching and improved largest-file content detection.
  • Search / Indexers

    • Async prioritized multi-indexer scheduler, suggestion-driven indexer selection, unified cache badge UI, and ability to rebase search cache TTL.
  • Rate limiting

    • Per-request priority controls with configurable wait budgets and clearer wait-time errors.
  • Health & Metrics

    • Service health checks and Prometheus metrics added.
  • Tests

    • Expanded coverage for normalization, batching, caching, scheduling, rate limits, and content detection.


coderabbitai bot (Contributor) commented Nov 22, 2025

Walkthrough

Migrate many cross-seed subsystems to pointer-based release types, add a generic TTL-backed string Normalizer, replace single-item qBittorrent calls with filtered/batch/canonicalized APIs, implement batch DB/file caching and debounced sync, and add a prioritized Jackett scheduler with rate limiting and extensive tests.

Changes

Cohort / File(s) Summary
String normalizer
pkg/stringutils/normalize.go
Add generic Normalizer[K,V] with TTL cache, NewNormalizer/NewDefaultNormalizer, Normalize, Clear, and DefaultNormalizer.
Release parser & cache
pkg/releases/parser.go
Parser cache and Parse now use/return *rls.Release (pointer semantics); callers updated.
qBittorrent sync & tests
internal/qbittorrent/sync_manager.go, internal/qbittorrent/sync_manager_test.go
Add GetTorrents(ctx,instanceID,filter), GetTorrentFilesBatch, HasTorrentByAnyHash; canonicalize/normalize hashes; per-instance fetch semaphores, debounced sync, batch caching and unit tests.
Files manager (repo & service & tests)
internal/services/filesmanager/repository.go, internal/services/filesmanager/service.go, internal/services/filesmanager/service_test.go
Batch interning and grouped upserts for torrent files; add UpsertSyncInfoBatch, DeleteTorrentCache, DeleteCacheForRemovedTorrents, CacheFilesBatch, GetCachedFilesBatch, and related tests.
Cross‑seed core & pointer migration
internal/services/crossseed/service.go, internal/services/crossseed/*.go
Migrate many functions to *rls.Release, thread stringNormalizer through Service, add automation snapshots/context caching, batch-loading by canonicalHash, metrics, HealthCheck, and NewService wiring changes.
Matching / parsing / layout / search-query
internal/services/crossseed/matching.go, parsing.go, layout.go, search_query.go
Convert helpers to pointer releases, use Normalizer for comparisons and ignore matching, add streaming match paths and metrics hooks, update signatures.
Align / dedup / variants & tests
internal/services/crossseed/align.go, align_test.go, dedup_test.go, variant_overrides.go, variant_overrides_test.go
Update signatures to pointer releases; align uses canonicalHash and batch file lookups; add variant normalizer and transform helper; tests updated to pass pointers.
Cross‑seed tests & mocks
internal/services/crossseed/*_test.go
Tests and fake managers updated for batch qBittorrent APIs, inject stringutils.NewDefaultNormalizer(), pass *rls.Release, and implement mock methods (GetTorrents, GetTorrentFilesBatch, HasTorrentByAnyHash).
Jackett: scheduler & rate-limiter & tests
internal/services/jackett/*.go, ratelimiter.go, scheduler.go, scheduler_test.go
Add searchScheduler with heap-based prioritized queue and per-indexer workers; introduce RateLimitOptions/RateLimitPriority, RateLimitWaitError, queued execution, and comprehensive scheduler tests.
Jackett client (gojackett)
pkg/gojackett/*.go
New Jackett client: HTTP retry with body restore, domain XML models, Torznab item conversion utilities, constants, context-aware APIs and Torznab error handling.
Reannounce & API callers
internal/services/reannounce/service.go, internal/api/handlers/crossseed.go, cmd/qui/main.go
Replace broad torrent fetches with filtered GetTorrents calls; handlers propagate search priority and use cancel-safe contexts; update NewService callsite (clientPool removed).
Frontend UX
web/src/pages/Search.tsx, web/src/components/indexers/IndexersPage.tsx, web/src/components/torrents/TorrentContextMenu.tsx
Add suggestion-driven indexer selection helper, centralize cacheBadge logic, refactor concurrent indexer testing flow, and adjust cross-seed filter placement.
Database helpers & model batching
internal/dbinterface/querier.go, internal/dbinterface/string_pool.go, internal/database/db.go, internal/models/backups.go, internal/models/torznab_search_cache.go
Add DeferForeignKeyChecks, BuildQueryWithPlaceholders, prebuilt query templates, chunked/bulk INSERT optimizations, pre-interning of strings, and RebaseTTL for Torznab search cache (with tests).
Cross‑seed content-detection tests
internal/services/crossseed/content_detection_test.go
New tests validating largest-file-driven content detection and classification.
Misc tests & refactors
many *_test.go
Numerous tests updated for pointer params, normalizer injections, and new batch APIs across cross-seed, qbittorrent sync and filesmanager subsystems.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant API
    participant Service
    participant SyncMgr as qBittorrent
    participant FilesMgr
    participant Normalizer

    rect rgb(240,255,240)
    Client->>API: HTTP request
    API->>Service: ctx (WithSearchPriority / cancel-safe)
    end

    rect rgb(235,245,255)
    Service->>SyncMgr: GetTorrents(ctx, instanceID, filter)
    SyncMgr-->>Service: []qbt.Torrent
    Service->>SyncMgr: GetTorrentFilesBatch(ctx, instanceID, [hashes])
    SyncMgr-->>Service: map[canonicalHash]TorrentFiles
    Service->>FilesMgr: GetCachedFilesBatch(ctx, instanceID, [hashes])
    FilesMgr-->>Service: cached map + missing
    end

    rect rgb(255,250,230)
    Service->>Normalizer: Normalize(title/group/hash/patterns)
    Normalizer-->>Service: normalized values
    Service->>Service: pointer-based matching/enrich (*rls.Release)
    Service-->>Client: response
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

  • Areas needing extra attention:
    • internal/services/crossseed/service.go — pointer refactors, automation snapshot caching, batch fetch integration, metrics and HealthCheck.
    • internal/qbittorrent/sync_manager.go — canonicalization, GetTorrentFilesBatch semantics, concurrency semaphores, debounced sync timers, and cache correctness.
    • internal/services/filesmanager/repository.go — batched DB upserts/deletes, placeholder generation, DeferForeignKeyChecks usage and transaction safety.
    • internal/services/jackett/* — scheduler concurrency, rate-limit MaxWait logic, and search-cache integration.
    • matching/parsing/layout/search modules — pointer migrations, Normalizer propagation, nil-safety and behavior preservation.
    • pkg/gojackett HTTP retry/body reset logic and XML error decoding.

Possibly related PRs

Suggested reviewers

  • Audionut
  • buroa

Poem

🐇 I hopped through pointers and normalized strings,
I batched all the hashes and tuned tiny things.
I queued searching workers and kept rate limits meek,
Cached crumbs in neat piles so lookups run quick and sleek.
A rabbit cheers: merge — the meadow’s complete.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 12.77%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
  • Title Check: ❓ Inconclusive. The title 'fix(cross): performance improvements' is vague and generic, using non-descriptive language like 'performance improvements' without specifying which aspects or mechanisms were optimized. Consider a more specific title that highlights the primary optimization, such as 'fix(cross): optimize torrent matching with batch operations and caching' or 'fix(cross): reduce API calls via batching and hash normalization'.
✅ Passed checks (1 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.


coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (15)
internal/services/crossseed/layout_test.go (1)

9-10: Layout tests now correctly pass a normalizer

Passing stringutils.NewDefaultNormalizer() into classifyTorrentLayout aligns the test with the updated function signature and normalization strategy; behavior remains unchanged. If you ever care about tiny test overhead, you could instantiate a single normalizer before the loop and reuse it, but that’s strictly optional.

Also applies to: 56-60

internal/services/crossseed/service_search_test.go (1)

172-179: Unify skipCache key normalization in tests

Here skipCache is asserted with stringutils.DefaultNormalizer.Normalize("recent-hash"), while TestPropagateDuplicateSearchHistory still checks state.skipCache[strings.ToLower(hash)]. To keep tests resilient to future changes in the default normalization transform, consider using the same normalizer path (e.g., the Service.stringNormalizer or stringutils.DefaultNormalizer) in all skipCache-related assertions instead of mixing in strings.ToLower.

Based on learnings

Also applies to: 210-219

internal/services/crossseed/dedup_test.go (1)

10-17: Dedup tests now inject required parsing/normalization dependencies

Populating releaseCache: NewReleaseCache() and stringNormalizer: stringutils.NewDefaultNormalizer() (and syncManager in the second test) brings the test Service instances in line with the production Service. This ensures dedup decisions see the same parsed/normalized view as real runs. You might eventually factor a small helper to build a test Service to avoid repeating this wiring, but it’s not urgent.

Also applies to: 72-76

internal/services/crossseed/matching_layout_test.go (1)

102-110: Be mindful of reusing the same parsed release pointer across match calls

In TestFindBestCandidateMatch_PrefersTopLevelFolderOnTie, singleRelease := svc.releaseCache.Parse("Minimal.Payload") is passed by pointer into two separate getMatchType calls with different candidate file layouts. If getMatchType ever mutates the candidateRelease (e.g., adjusts type or fields based on layout), the second call will see a mutated release, which may diverge from how production code parses candidates independently. If getMatchType is intended to be pure with respect to its *rls.Release arguments, nothing needs changing; otherwise, consider cloning or re-parsing for the second call.

internal/services/crossseed/search_query.go (1)

27-27: Consider adding nil check for defensive programming.

The function now accepts a pointer parameter but doesn't check for nil before dereferencing (lines 30, 34, 46). While the current call sites appear to always pass valid pointers, adding a defensive nil check could prevent potential panics.

Apply this diff to add a nil guard:

 func buildSafeSearchQuery(name string, release *rls.Release, baseQuery string) SearchQuery {
+	if release == nil {
+		return SearchQuery{Query: strings.TrimSpace(name)}
+	}
 	// If rls already gave us structured series/episode info, keep it.
 	var seasonPtr, episodePtr *int
internal/services/crossseed/align.go (2)

386-393: Consider nil checks for robustness.

The function now accepts pointer parameters but dereferences them without nil checks (lines 388-389). While current usage appears safe, adding nil guards would improve defensive programming.

Apply this diff:

 func shouldRenameTorrentDisplay(newRelease, matchedRelease *rls.Release) bool {
+	if newRelease == nil || matchedRelease == nil {
+		return true
+	}
 	// Keep episode torrents named after the episode even when pointing at season pack files
 	if newRelease.Series > 0 && newRelease.Episode > 0 &&

395-401: Consider nil checks for robustness.

Similar to shouldRenameTorrentDisplay, this function dereferences pointer parameters without nil checks (lines 396-397).

Apply this diff:

 func shouldAlignFilesWithCandidate(newRelease, matchedRelease *rls.Release) bool {
+	if newRelease == nil || matchedRelease == nil {
+		return true
+	}
 	if newRelease.Series > 0 && newRelease.Episode > 0 &&
internal/services/crossseed/variant_overrides.go (1)

55-93: Consider nil check for robustness.

The method dereferences the Release pointer without checking for nil (lines 66, 88-90). While current usage appears safe, a defensive nil check would prevent potential panics.

Apply this diff:

 func (o variantOverrides) releaseVariants(r *rls.Release) map[string]struct{} {
+	if r == nil {
+		return make(map[string]struct{})
+	}
 	variants := make(map[string]struct{})
pkg/stringutils/normalize.go (1)

13-37: Normalizer implementation is clean; consider defensive nil handling

The generic Normalizer wrapper over ttlcache looks good and keeps the transform/caching concerns nicely encapsulated. Two optional hardening ideas:

  • Guard against a nil receiver/transform in Normalize to avoid panics if a caller ever forgets to initialize the normalizer.
  • Document that DefaultNormalizer is global and shared across concerns (hashes, indexer names, etc.), since the key space is shared even though the transform is benign.

These aren’t blockers but would make the utility more robust to misuse.

Also applies to: 40-42, 45-58, 60-61

internal/services/crossseed/parsing.go (1)

28-61: Pointer-based parsing helpers look correct; rely on non-nil releases

The migration of parsing helpers to *rls.Release (including isAdultContent, DetermineContentType, normalizeReleaseTypeForContent, looksLikeVideoRelease, and ParseMusicReleaseFromTorrentName) maintains the previous behavior while avoiding repeated value copies. The copy‑on‑write pattern (normalized := *release, musicRelease := *baseRelease) is sound.

This design does assume callers never pass a nil *rls.Release; given ReleaseCache.Parse is providing the inputs, that’s fine, but it’s worth keeping in mind if these helpers are ever exposed more broadly.

Also applies to: 142-161, 305-321, 323-351, 435-484
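The copy-on-write pattern mentioned above (normalized := *release) can be shown in isolation. The release struct and function below are hypothetical stand-ins for rls.Release and the parsing helpers, reduced to the minimum needed to demonstrate the idiom:

```go
package main

import "fmt"

// release stands in for rls.Release; only the fields needed for the sketch.
type release struct {
	Title string
	Type  string
}

// normalizeReleaseType shows copy-on-write: dereference the shared pointer
// into a local value, mutate the copy, and return a pointer to it, leaving
// the cached original untouched.
func normalizeReleaseType(r *release) *release {
	normalized := *r // value copy; r still points at the cached struct
	normalized.Type = "episode"
	return &normalized
}

func main() {
	cached := &release{Title: "Show.S01E01", Type: "unknown"}
	out := normalizeReleaseType(cached)
	fmt.Println(cached.Type, out.Type) // unknown episode
}
```

This is why the pointer migration stays safe even though cache entries are shared: mutations happen on private copies.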

internal/services/crossseed/matching.go (2)

88-124: Title normalization in releasesMatch depends on non-nil stringNormalizer

Switching releasesMatch to use s.stringNormalizer.Normalize for titles is a nice centralization of normalization and should help performance once the cache is warm.

One caveat: this now assumes s.stringNormalizer is non-nil. NewService and updated tests set it correctly, but a zero-valued Service (e.g. constructed via &Service{} elsewhere in the package) would now panic. If you want extra safety, you could:

  • Initialize stringNormalizer lazily when nil (falling back to stringutils.DefaultNormalizer), or
  • Add a small helper on Service that always returns a valid normalizer.

Not urgent, but it would make the API harder to misuse.

Also applies to: 166-201, 253-262
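The "small helper on Service" suggested above might look like this sketch. The normalizer type, defaultNormalizer variable, and helper name are all assumptions standing in for the stringutils types, not code from the PR:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizer stands in for *stringutils.Normalizer[string, string].
type normalizer struct{ transform func(string) string }

func (n *normalizer) Normalize(s string) string { return n.transform(s) }

// defaultNormalizer stands in for stringutils.DefaultNormalizer.
var defaultNormalizer = &normalizer{transform: func(s string) string {
	return strings.ToLower(strings.TrimSpace(s))
}}

type Service struct {
	stringNormalizer *normalizer
}

// normalizerOrDefault always returns a usable normalizer, so a zero-valued
// Service (e.g. &Service{} in a test) cannot panic inside releasesMatch.
func (s *Service) normalizerOrDefault() *normalizer {
	if s == nil || s.stringNormalizer == nil {
		return defaultNormalizer
	}
	return s.stringNormalizer
}

func main() {
	s := &Service{} // zero-valued, as some in-package tests construct it
	fmt.Println(s.normalizerOrDefault().Normalize("  Title.MIXED  ")) // title.mixed
}
```

Call sites would use s.normalizerOrDefault().Normalize(...) instead of touching the field directly.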


278-283: File-ignore logic with Normalizer is solid; consider pattern pre-normalization

Using shouldIgnoreFile(filename, patterns, normalizer) with a shared normalizer gives you case-insensitive suffix and glob checks and amortizes transform cost via the TTL cache.

If you ever find patterns large or very hot, a minor optimization would be to pre-normalize and cache the patterns once per call site, so Normalize on patterns is not repeated on every filename scan. This is purely an optimization consideration; the current design is correct and reasonable.

Also applies to: 592-619

internal/services/crossseed/service.go (3)

1785-1794: Pointer-based release flows and music-specific parsing in search are sound

The changes to:

  • Pass *rls.Release through processCrossSeedCandidate, findBestCandidateMatch, and getMatchType, and
  • Use a pointer-based contentDetectionRelease plus ParseMusicReleaseFromTorrentName in SearchTorrentMatches

are consistent with the broader pointer migration. The logic for:

  • Preferring the largest file’s enriched release where it yields a concrete Type,
  • Building a music-aware query (Artist + Title, album mapping), and
  • Reusing ParseMusicReleaseFromTorrentName both for the search query and for populating jackett.TorznabSearchRequest music fields

looks correct and should improve search quality without altering behavior for non‑music content.

Just keep relying on ReleaseCache.Parse never returning nil pointers; if that ever changes, these paths would need a nil guard.

Also applies to: 2091-2162, 2698-2990


2004-2051: Hash normalization now mixes normalizeHash and DefaultNormalizer; behavior is OK but could be unified

You now have two normalization paths for hashes:

  • normalizeHash (upper+trim) used for qBittorrent RPC and file lookups (batchLoadCandidateFiles, etc.).
  • stringutils.DefaultNormalizer.Normalize (lower+trim) used for:
    • searchResultCacheKey / asyncFilteringCacheKey,
    • getTorrentByHash comparisons,
    • torrentHasTopLevelFolderCached,
    • propagateDuplicateSearchHistory skipCache keys, and
    • shouldSkipCandidate skipCache keys.

Because both sides of each comparison go through the same transform, equality remains correct and you effectively gain case‑insensitive behavior. However, there is now some conceptual duplication around “how we normalize hashes”.

If you want to simplify mental overhead, you could:

  • Either switch the cache key helpers to call normalizeHash directly, or
  • Keep using the DefaultNormalizer but add a short comment near normalizeHash and the cache key helpers to clarify that one is for qBittorrent RPC and the other is for case‑insensitive in‑process keys.

Functionally this is fine; this is mainly about long‑term clarity.

Also applies to: 3367-3381, 3390-3396, 3640-3650, 3689-3690, 3761-3811
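The two-convention situation described above can be made concrete with a small sketch. These helpers approximate the transforms named in the comment (normalizeHash is documented as uppercase in this codebase; the DefaultNormalizer lowercases); the exact functions in the PR may differ in detail:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeHash mirrors the qBittorrent-facing convention: uppercase hex,
// used for RPC calls and batch file lookups.
func normalizeHash(h string) string { return strings.ToUpper(strings.TrimSpace(h)) }

// normalizeKey mirrors the DefaultNormalizer convention: lowercase, used
// for in-process cache keys (search result cache, skipCache, etc.).
func normalizeKey(h string) string { return strings.ToLower(strings.TrimSpace(h)) }

func main() {
	a, b := " aBc123 ", "ABC123"
	// Equality holds within each convention because both sides of any
	// comparison pass through the same transform...
	fmt.Println(normalizeHash(a) == normalizeHash(b)) // true
	fmt.Println(normalizeKey(a) == normalizeKey(b))   // true
	// ...but the two conventions produce different strings, which is the
	// conceptual duplication the review suggests unifying or documenting.
	fmt.Println(normalizeHash(a) == normalizeKey(b)) // false
}
```

Correctness is preserved as long as a hash never crosses conventions without re-normalizing, which is exactly the invariant worth a comment near normalizeHash.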


5038-5080: evaluateReleaseMatch pointer migration integrates well with search and webhook scoring

Converting evaluateReleaseMatch to accept *rls.Release and using it in both:

  • SearchTorrentMatches (to score candidate search results), and
  • CheckWebhook (to annotate webhook matches with score and reasons)

is consistent with the rest of the pointer-based changes. The scoring heuristic (group/resolution/source/season/episode/year/codec/audio) is unchanged and the “title match” fallback keeps a non-empty reason string.

As with other helpers, this assumes non-nil releases, but all current call sites are backed by ReleaseCache.Parse, so that’s acceptable.

Also applies to: 5464-5473

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between de0e00a and 71eb0c0.

📒 Files selected for processing (21)
  • internal/services/crossseed/align.go (2 hunks)
  • internal/services/crossseed/align_test.go (2 hunks)
  • internal/services/crossseed/crossseed_test.go (5 hunks)
  • internal/services/crossseed/dedup_test.go (2 hunks)
  • internal/services/crossseed/find_individual_episodes_test.go (2 hunks)
  • internal/services/crossseed/layout.go (2 hunks)
  • internal/services/crossseed/layout_test.go (2 hunks)
  • internal/services/crossseed/matching.go (11 hunks)
  • internal/services/crossseed/matching_layout_test.go (9 hunks)
  • internal/services/crossseed/matching_title_test.go (3 hunks)
  • internal/services/crossseed/parsing.go (8 hunks)
  • internal/services/crossseed/parsing_test.go (1 hunks)
  • internal/services/crossseed/search_query.go (1 hunks)
  • internal/services/crossseed/search_query_test.go (3 hunks)
  • internal/services/crossseed/service.go (17 hunks)
  • internal/services/crossseed/service_example_test.go (1 hunks)
  • internal/services/crossseed/service_search_test.go (3 hunks)
  • internal/services/crossseed/variant_overrides.go (2 hunks)
  • internal/services/crossseed/variant_overrides_test.go (6 hunks)
  • pkg/releases/parser.go (2 hunks)
  • pkg/stringutils/normalize.go (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-11-21T21:11:50.599Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.599Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/matching_layout_test.go
  • internal/services/crossseed/service_search_test.go
  • internal/services/crossseed/matching.go
  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service_search_test.go
  • internal/services/crossseed/layout.go
  • internal/services/crossseed/matching.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (14)
internal/services/crossseed/find_individual_episodes_test.go (1)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/parsing_test.go (1)
internal/services/crossseed/parsing.go (1)
  • DetermineContentType (143-300)
internal/services/crossseed/layout_test.go (1)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/dedup_test.go (3)
internal/services/crossseed/service.go (1)
  • Service (72-108)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/matching_layout_test.go (3)
internal/services/crossseed/service.go (1)
  • Service (72-108)
pkg/releases/parser.go (1)
  • NewDefaultParser (29-31)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/service_search_test.go (2)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
pkg/stringutils/normalize.go (2)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/services/crossseed/variant_overrides_test.go (2)
internal/services/crossseed/service.go (1)
  • Service (72-108)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/matching_title_test.go (2)
internal/services/crossseed/service.go (1)
  • Service (72-108)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/variant_overrides.go (1)
pkg/version/version.go (1)
  • Release (19-34)
internal/services/crossseed/layout.go (1)
pkg/stringutils/normalize.go (1)
  • Normalizer (19-22)
pkg/releases/parser.go (2)
pkg/version/version.go (1)
  • Release (19-34)
internal/database/db.go (1)
  • New (348-458)
internal/services/crossseed/crossseed_test.go (2)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/matching.go (2)
internal/services/crossseed/service.go (1)
  • Service (72-108)
pkg/stringutils/normalize.go (1)
  • Normalizer (19-22)
internal/services/crossseed/service.go (2)
pkg/stringutils/normalize.go (3)
  • Normalizer (19-22)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/services/crossseed/parsing.go (1)
  • ParseMusicReleaseFromTorrentName (435-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (22)
pkg/releases/parser.go (2)

18-26: Pointer-based cache wiring looks correct

Switching Parser.cache to *ttlcache.Cache[string, *rls.Release] and constructing it via ttlcache.Options[string, *rls.Release]{} is consistent and avoids copying rls.Release values on cache hits. No issues from a correctness or perf perspective.


34-50: Parse pointer semantics and early returns are safe

The new Parse implementation (nil receiver/empty key guards, caching &release, and returning pointers) is sound. Taking the address of the local release value is safe in Go and shares the same heap-allocated struct with the cache entry. Just be aware that callers now share a mutable *rls.Release across all users of a given key, which is usually desired but means it should be treated as read-only.

internal/services/crossseed/find_individual_episodes_test.go (1)

17-18: Service test setup correctly wires string normalizer

Injecting stringNormalizer: stringutils.NewDefaultNormalizer() into the test Service instance matches the production struct shape and prevents nil access when normalization is used during processing. Looks good.

Also applies to: 43-56

internal/services/crossseed/service_search_test.go (1)

16-22: Search queue test Service wiring matches production dependencies

Initializing the Service in TestRefreshSearchQueueCountsCooldownEligibleTorrents with releaseCache: NewReleaseCache() and stringNormalizer: stringutils.NewDefaultNormalizer() is consistent with the real struct and avoids nil dereferences during parsing/normalization. No issues here.

Also applies to: 130-146

internal/services/crossseed/service_example_test.go (1)

149-169: enrichReleaseFromTorrent test updated correctly for pointer API

Switching the call to enrichReleaseFromTorrent(&fileRelease, &torrentRelease) matches the new pointer-based signature and keeps the test logic intact. No further changes needed.

internal/services/crossseed/matching_layout_test.go (3)

15-16: Matching/layout tests updated consistently for parser + normalizer

Across these tests, constructing Service with releaseCache: releases.NewDefaultParser() and stringNormalizer: stringutils.NewDefaultNormalizer() plus switching all match helpers to accept *rls.Release keeps the tests aligned with the refactored APIs and avoids unnecessary value copying. This looks coherent and maintains the intended behaviors.

Also applies to: 21-24, 44-47, 77-80, 188-192, 209-212, 239-242, 267-270, 267-272


139-147: Test batch GetTorrentFilesBatch implementation matches normalizeHash usage

The test candidateSelectionSyncManager.GetTorrentFilesBatch normalizes keys with normalizeHash(h) after delegating to GetTorrentFiles, which lowercases the lookup key. This mirrors the production pattern of using normalized (e.g., canonicalized) hashes as map keys while still allowing flexible inputs. The behavior is consistent for the tests’ purposes.

Based on learnings


185-204: Title-based match tests are correctly migrated to pointer-based releases

All getMatchTypeFromTitle and getMatchType invocations now pass *rls.Release for target/candidate releases while the Service is initialized with the new parser and normalizer. This matches the updated matching/parsing code paths and keeps the title-based tests meaningful without changing their expectations.

Also applies to: 206-234, 236-262, 264-283

internal/services/crossseed/parsing_test.go (1)

279-288: DetermineContentType test correctly updated for pointer parameter

Passing &tt.release into DetermineContentType matches the new pointer-based signature. Because tt is the per-iteration copy from the range loop, any internal mutation of the release during classification won’t leak across test cases. The assertions remain valid.

internal/services/crossseed/crossseed_test.go (2)

25-25: LGTM!

The addition of the stringutils package import aligns with the introduction of centralized string normalization across the crossseed service.


1093-1098: LGTM!

The Service initialization correctly includes the new stringNormalizer field, ensuring consistent string normalization behavior throughout the crossseed tests.

internal/services/crossseed/search_query_test.go (1)

14-14: LGTM!

Test properly updated to pass a pointer to Release, aligning with the function signature change in buildSafeSearchQuery.

internal/services/crossseed/matching_title_test.go (2)

9-13: LGTM!

Tests properly initialize the Service with the string normalizer, aligning with the broader refactor to centralize string normalization.


30-31: LGTM!

Test calls correctly updated to pass pointer references to Release structs, matching the updated releasesMatch signature.

internal/services/crossseed/align_test.go (2)

142-146: LGTM!

Test calls correctly updated to pass pointer references, aligning with the updated function signatures in align.go.


155-159: LGTM!

Test calls properly updated to use pointer parameters, consistent with the signature changes.

internal/services/crossseed/layout.go (1)

51-56: Nil-handling verification complete: the normalizer is guaranteed non-nil by design.

Verification confirms the original concern: shouldIgnoreFile (matching.go:592) directly calls normalizer.Normalize() without nil checks on lines 593 and 596, which would panic if nil. However, the codebase analysis shows this is not a practical issue:

  • The Service constructor (service.go:142-167) always initializes stringNormalizer to stringutils.NewDefaultNormalizer(), guaranteeing it is never nil.
  • All production call sites pass s.stringNormalizer, which is guaranteed non-nil from Service initialization.
  • No instances exist where normalizer is set to nil.

Recommendation: The code is safe because the design pattern enforces non-nil through Service initialization. To improve defensive programming and clarity, consider adding a comment to shouldIgnoreFile documenting that the normalizer parameter must be non-nil, or add an explicit nil check early in the function.

internal/services/crossseed/variant_overrides_test.go (1)

11-12: Tests correctly updated for pointer-based API and normalizer dependency

Using &release everywhere and initializing Service{stringNormalizer: stringutils.NewDefaultNormalizer()} keeps the tests aligned with the new pointer + normalization behavior. This is the right pattern to avoid nil panics in releasesMatch and matches the runtime wiring in NewService.

Also applies to: 21-22, 31-32, 41-42, 49-50, 55-56, 60-87

internal/services/crossseed/matching.go (1)

35-52: Release-key creation and safe parsing align with pointer-based cache

makeReleaseKey(*rls.Release) and Service.parseReleaseName now line up with the pointer-returning release cache, and the nil/cache guard in parseReleaseName (&rls.Release{}) is a sensible defensive fallback for tests or partially-initialized services, avoiding panics in hot paths like matching/layout.

Also applies to: 73-79

internal/services/crossseed/service.go (3)

37-43: Service wiring for string normalization is coherent

Importing stringutils and adding stringNormalizer *stringutils.Normalizer[string, string] to Service, then initializing it in NewService with stringutils.NewDefaultNormalizer(), cleanly centralizes normalization for matching and ignore logic. This also keeps unit tests in the same package free to override or inject a different normalizer when needed.

Given how widely s.stringNormalizer is now used, treating it as a required dependency via NewService is appropriate.

Also applies to: 71-84, 141-175
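For readers unfamiliar with the new pkg/stringutils type, a simplified sketch of a TTL-cached normalizer follows. The map-based cache, the default TTL, and the lowercase+trim transform are assumptions for illustration; the real Normalizer is generic over key and value types:

```go
package main

import (
	"strings"
	"sync"
	"time"
)

// entry pairs a cached value with its expiry; a stand-in for the TTL
// cache backing stringutils.Normalizer.
type entry struct {
	value   string
	expires time.Time
}

type Normalizer struct {
	mu    sync.Mutex
	ttl   time.Duration
	cache map[string]entry
	fn    func(string) string
}

// NewDefaultNormalizer mirrors the constructor's role: a shared
// normalizer with a canonical transform (lowercase + trim assumed here).
func NewDefaultNormalizer() *Normalizer {
	return &Normalizer{
		ttl:   5 * time.Minute, // assumed default TTL
		cache: make(map[string]entry),
		fn: func(s string) string {
			return strings.ToLower(strings.TrimSpace(s))
		},
	}
}

// Normalize returns the cached canonical form when fresh, otherwise
// recomputes and caches it.
func (n *Normalizer) Normalize(s string) string {
	n.mu.Lock()
	defer n.mu.Unlock()
	if e, ok := n.cache[s]; ok && time.Now().Before(e.expires) {
		return e.value
	}
	v := n.fn(s)
	n.cache[s] = entry{value: v, expires: time.Now().Add(n.ttl)}
	return v
}

func main() {}
```

Injecting this through NewService, as the review notes, lets tests swap in a different transform without touching call sites.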


3383-3406: Hash-based lookups and dedup/skip caches using DefaultNormalizer are consistent

Using stringutils.DefaultNormalizer.Normalize for:

  • needle and candidate hashes in getTorrentByHash,
  • torrentWithRelease.release *rls.Release in dedup flows (paired with pointer-based parsing),
  • hash keys in torrentHasTopLevelFolderCached,
  • normalized duplicate hashes in propagateDuplicateSearchHistory, and
  • state.skipCache keys in shouldSkipCandidate

provides consistent, case-insensitive treatment of hashes across these in-memory structures. This pairs cleanly with the batch/individual file fetches that still use normalizeHash for qBittorrent calls.

Overall, the dedup, skip, and lookup behavior should remain unchanged while being slightly more robust to hash case/whitespace variations.

Also applies to: 3515-3527, 3636-3651, 3694-3715, 3756-3811


4801-4822: Indexer/domain normalization via DefaultNormalizer is thorough and consistent

normalizeIndexerName now feeds indexer names through stringutils.DefaultNormalizer.Normalize and then normalizeDomainName, and trackerDomainsMatchIndexer uses the same normalization pattern (plus TLD stripping and hardcoded mappings) to compare tracker domains against indexer names/domains.

Given the amount of fallback logic here (direct match, specific-domain match, partial + normalized matches), unifying everything through the same lower+trim Normalizer is a good move and should reduce subtle case/spacing issues when correlating trackers and indexers.

Also applies to: 4537-4773
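The unified flow described above, canonicalize first, then strip domain noise, can be sketched as follows. The specific TLD list and the www-prefix handling are illustrative assumptions, not the actual normalizeIndexerName rules:

```go
package main

import "strings"

// normalizeIndexerName sketches the described pipeline: canonicalize the
// raw name (lowercase + trim here; the real code goes through the shared
// Normalizer first) and then strip common domain noise.
func normalizeIndexerName(name string) string {
	n := strings.ToLower(strings.TrimSpace(name))
	for _, tld := range []string{".com", ".net", ".org"} { // assumed TLD set
		n = strings.TrimSuffix(n, tld)
	}
	return strings.TrimPrefix(n, "www.")
}

func main() {}
```

With one canonicalization path, the later fallback comparisons (direct, specific-domain, partial) all operate on the same form, which is what removes the case/spacing mismatches the review mentions.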

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (4)
internal/services/crossseed/dedup_test.go (1)

62-86: Test name may not reflect current behavior.

The test name TestService_deduplicateSourceTorrents_PrefersRootFolders suggests validation of a preference for root folders, but Line 85 now expects hash-flat (the non-root variant) to be selected with the comment "prefer oldest torrent". Since hash-flat has AddedOn: 1 (older) and hash-root has AddedOn: 2 (newer), the test is validating age-based selection rather than root-folder preference.

Consider renaming the test to better reflect the actual deduplication logic being validated, for example: TestService_deduplicateSourceTorrents_PrefersOldestTorrent or updating the test to explicitly validate both the age-based and structure-based logic.
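The age-based selection the test actually exercises reduces to picking the smallest AddedOn among duplicates. A minimal sketch, using placeholder types rather than the real dedup code, and the hash-flat/hash-root fixture values from the test:

```go
package main

// torrent carries only the fields relevant to this sketch.
type torrent struct {
	Hash    string
	AddedOn int64
}

// pickOldest models the behavior the test asserts: among duplicate
// torrents, keep the one added earliest. Assumes a non-empty slice.
func pickOldest(ts []torrent) torrent {
	best := ts[0]
	for _, t := range ts[1:] {
		if t.AddedOn < best.AddedOn {
			best = t
		}
	}
	return best
}

func main() {}
```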

internal/services/crossseed/service.go (3)

82-82: This rename suggestion appears to be a no-op.

The comment describes stringNormalizer as an exported (public) field, but the field as shown is already lowercase (unexported), and the suggested diff removes and re-adds an identical line. No rename is needed; if the field is unwanted, removing it outright is the cleaner fix.

2913-2916: Verify nil safety for sourceRelease dereferencing.

Lines 2913 and 2916 dereference sourceRelease and the result of ParseMusicReleaseFromTorrentName without nil checks. While the context suggests these should be non-nil (sourceRelease is derived from parsing on line 2653), adding defensive nil checks would improve robustness:

 	// Add music-specific parameters if we have them
 	if contentInfo.IsMusic {
 		// Parse music information from the source release or torrent name
 		var musicRelease rls.Release
-		if sourceRelease.Type == rls.Music && sourceRelease.Artist != "" {
+		if sourceRelease != nil && sourceRelease.Type == rls.Music && sourceRelease.Artist != "" {
 			musicRelease = *sourceRelease
 		} else {
 			// Try to parse music info from torrent name
-			musicRelease = *ParseMusicReleaseFromTorrentName(sourceRelease, sourceTorrent.Name)
+			musicReleasePtr := ParseMusicReleaseFromTorrentName(sourceRelease, sourceTorrent.Name)
+			if musicReleasePtr != nil {
+				musicRelease = *musicReleasePtr
+			}
 		}

2964-2969: Add nil checks before dereferencing for logging.

Lines 2966 and 2968 dereference pointers without nil checks. While this is used only for logging, adding nil checks prevents potential panics:

 		// Use the appropriate release object for logging based on content type
 		var logRelease rls.Release
 		if contentInfo.IsMusic && contentDetectionRelease.Type == rls.Music {
 			// For music, create a proper music release object by parsing the torrent name as music
-			logRelease = *ParseMusicReleaseFromTorrentName(sourceRelease, sourceTorrent.Name)
+			musicReleasePtr := ParseMusicReleaseFromTorrentName(sourceRelease, sourceTorrent.Name)
+			if musicReleasePtr != nil {
+				logRelease = *musicReleasePtr
+			}
 		} else {
-			logRelease = *sourceRelease
+			if sourceRelease != nil {
+				logRelease = *sourceRelease
+			}
 		}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 71eb0c0 and 1f3f941.

📒 Files selected for processing (8)
  • internal/qbittorrent/sync_manager.go (1 hunks)
  • internal/services/crossseed/align.go (4 hunks)
  • internal/services/crossseed/crossseed_test.go (6 hunks)
  • internal/services/crossseed/dedup_test.go (3 hunks)
  • internal/services/crossseed/find_individual_episodes_test.go (4 hunks)
  • internal/services/crossseed/matching_layout_test.go (10 hunks)
  • internal/services/crossseed/service.go (25 hunks)
  • internal/services/crossseed/service_search_test.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/services/crossseed/matching_layout_test.go
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/align.go
  • internal/services/crossseed/service_search_test.go
  • internal/services/crossseed/crossseed_test.go
  • internal/services/crossseed/find_individual_episodes_test.go
  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).

Applied to files:

  • internal/services/crossseed/service_search_test.go
  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service_search_test.go
🧬 Code graph analysis (6)
internal/qbittorrent/sync_manager.go (1)
web/src/types/index.ts (1)
  • Torrent (172-229)
internal/services/crossseed/dedup_test.go (3)
internal/services/crossseed/service.go (1)
  • Service (71-107)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/service_search_test.go (2)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
pkg/stringutils/normalize.go (2)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/services/crossseed/crossseed_test.go (2)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/find_individual_episodes_test.go (1)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/service.go (2)
pkg/stringutils/normalize.go (3)
  • Normalizer (19-22)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/services/crossseed/parsing.go (2)
  • FindLargestFile (603-625)
  • ParseMusicReleaseFromTorrentName (435-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (4)
internal/services/crossseed/service.go (4)

1487-1487: LGTM: GetTorrents API updated consistently.

The GetTorrents method calls have been consistently updated to use the new qbt.TorrentFilterOptions parameter. All usages correctly pass Filter: qbt.TorrentFilterAll, which is appropriate for the contexts where they're used (searching across all torrents).

Also applies to: 1802-1802, 3365-3365, 3679-3679, 5124-5124


3347-3347: LGTM: Consistent hash and string normalization.

The code consistently uses stringutils.DefaultNormalizer.Normalize() for hash and string normalization throughout. This centralized approach ensures:

  • Consistent uppercase normalization for hash comparisons
  • Proper handling of whitespace and case variations
  • Shared normalization logic with caching benefits

The use of the global DefaultNormalizer instead of the service's stringNormalizer field is appropriate as it provides a singleton instance with shared cache.

Also applies to: 3356-3356, 3370-3375, 3620-3620, 3673-3673, 3745-3745, 4785-4785


2024-2061: LGTM: Proper batch loading with error handling.

The batchLoadCandidateFiles function correctly:

  • Filters torrents to only 100% complete ones
  • Normalizes hashes before batch API call
  • Returns nil on errors (appropriate for map return type)
  • Logs warnings on batch failures

Callers properly check for nil/empty results before using the map.
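The pattern the bullets describe, filter to complete torrents, normalize hashes, batch-fetch, nil on error, can be sketched with stand-in types (the real function talks to the sync manager and logs via zerolog; everything below is illustrative):

```go
package main

import "strings"

type torrent struct {
	Hash     string
	Progress float64
}

type files []string

// fetchBatch stands in for syncManager.GetTorrentFilesBatch.
type fetchBatch func(hashes []string) (map[string]files, error)

// batchLoadCandidateFiles sketches the reviewed flow: keep only 100%
// complete torrents, normalize hashes (uppercase + trim, matching the
// codebase convention) before the batch call, and return nil on error
// so callers can len()-check the result.
func batchLoadCandidateFiles(ts []torrent, fetch fetchBatch) map[string]files {
	hashes := make([]string, 0, len(ts))
	for _, t := range ts {
		if t.Progress >= 1.0 {
			hashes = append(hashes, strings.ToUpper(strings.TrimSpace(t.Hash)))
		}
	}
	if len(hashes) == 0 {
		return nil
	}
	out, err := fetch(hashes)
	if err != nil {
		return nil // real code logs a warning with instanceID and hashCount
	}
	return out
}

func main() {}
```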


2280-2289: LGTM: Proper nil/empty checks for sourceFiles.

The code properly checks len(sourceFiles) > 0 before accessing file data at multiple locations (lines 2280, 2657, 3098). This defensive programming prevents panics after the batch API changes and handles cases where file retrieval might fail or return empty results.

Also applies to: 2656-2676, 3098-3101

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
internal/services/crossseed/service.go (1)

2272-2280: Normalize hashes before GetTorrentFilesBatch to align with batch API and internal conventions.

All three call sites currently pass and index with the raw hash / torrent.Hash, but the batch APIs expect callers to normalize hashes (uppercase/trimmed) for consistent keying, and other callers in this file already use normalizeHash for that purpose. This can lead to subtle mismatches if different hash casings are used.

Reusing normalizeHash here keeps behavior consistent with batchLoadCandidateFiles and the documented expectations in the sync manager. Suggested adjustments:

@@ func (s *Service) AnalyzeTorrentForSearchAsync(ctx context.Context, instanceID int, hash string, enableContentFiltering bool) (*AsyncTorrentAnalysis, error) {
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+	normalizedHash := normalizeHash(hash)
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 	if err != nil {
 		return nil, fmt.Errorf("failed to get torrent files: %w", err)
 	}
-	sourceFiles, ok := filesMap[hash]
+	sourceFiles, ok := filesMap[normalizedHash]
 	if !ok {
-		return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+		return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 	}
@@ func (s *Service) SearchTorrentMatches(ctx context.Context, instanceID int, hash string, opts TorrentSearchOptions) (*TorrentSearchResponse, error) {
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+	normalizedHash := normalizeHash(hash)
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 	if err != nil {
 		return nil, fmt.Errorf("failed to get torrent files: %w", err)
 	}
-	sourceFiles, ok := filesMap[hash]
+	sourceFiles, ok := filesMap[normalizedHash]
 	if !ok {
-		return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+		return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 	}
@@ func (s *Service) torrentHasTopLevelFolder(ctx context.Context, instanceID int, torrent *qbt.Torrent) bool {
-	if s == nil || s.syncManager == nil || torrent == nil {
-		return false
-	}
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{torrent.Hash})
+	if s == nil || s.syncManager == nil || torrent == nil {
+		return false
+	}
+	hash := normalizeHash(torrent.Hash)
+	if hash == "" {
+		return false
+	}
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
 	if err != nil {
 		return false
 	}
-	filesPtr, ok := filesMap[torrent.Hash]
+	filesPtr, ok := filesMap[hash]
 	if !ok || len(filesPtr) == 0 {
 		return false
 	}
 	return detectCommonRoot(filesPtr) != ""
 }

This keeps all batch file lookups on the same normalized hash convention as the rest of the cross-seed service.

Based on learnings

Also applies to: 2655-2662, 3629-3637
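The normalized-hash convention referenced throughout this thread (trim plus uppercase hex, per the retrieved learnings about normalizeHash) reduces to a one-liner; this sketch mirrors that convention rather than the exact internal helper:

```go
package main

import "strings"

// normalizeHash canonicalizes a torrent hash so every cache, batch API,
// and qBittorrent filter shares one key space: trimmed, uppercase hex.
func normalizeHash(hash string) string {
	return strings.ToUpper(strings.TrimSpace(hash))
}

func main() {}
```

Normalizing once at the call boundary, as the suggested diffs do, means map lookups keyed on the same value can never miss due to casing or stray whitespace.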

🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)

82-83: Service.stringNormalizer is currently unused; consider either wiring it through or removing it.

You’ve added stringNormalizer *stringutils.Normalizer[string, string] and initialize it in NewService, but all normalization in this file still goes through the global stringutils.DefaultNormalizer. That makes the field dead weight and slightly confusing.

Either drop the field/initializer for now, or (if you plan to support per-service normalizer state) route the key helpers to use it instead. For example, a minimal cleanup could be:

 type Service struct {
-	indexerDomainCache  *ttlcache.Cache[string, string]
-	stringNormalizer    *stringutils.Normalizer[string, string]
+	indexerDomainCache *ttlcache.Cache[string, string]
@@ func NewService( ... ) *Service {
-		indexerDomainCache:   indexerDomainCache,
-		stringNormalizer:     stringutils.NewDefaultNormalizer(),
+		indexerDomainCache:   indexerDomainCache,

Also applies to: 166-167

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1f3f941 and 3080ac6.

📒 Files selected for processing (1)
  • internal/services/crossseed/service.go (27 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
pkg/stringutils/normalize.go (3)
  • Normalizer (19-22)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/services/crossseed/parsing.go (2)
  • FindLargestFile (603-625)
  • ParseMusicReleaseFromTorrentName (435-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (2)
internal/services/crossseed/service.go (2)

3365-3379: DefaultNormalizer-based key normalization for hashes and indexer names looks consistent.

Using stringutils.DefaultNormalizer.Normalize for:

  • searchResultCacheKey / asyncFilteringCacheKey (lines 3365-3379),
  • cached folder detection (torrentHasTopLevelFolderCached, line 3612),
  • searchRunState.skipCache keys (line 3665),
  • candidate skip decisions (lines 3737-3787),
  • and normalizeIndexerName (lines 4783-4803),

provides a single normalization path for hashes and indexer identifiers, which should reduce cache key skew and make lookups more robust to caller input variations. No issues spotted with these usages given the existing fallback behavior when normalization returns an empty string.

Also applies to: 3612-3621, 3665-3665, 3737-3787, 4783-4803


5020-5062: evaluateReleaseMatch helper is a clear, extensible scoring function.

The new pointer-based evaluateReleaseMatch cleanly encapsulates release scoring, covers the key axes (group, resolution, source, season/episode, year, codec, audio), and returns a useful reason string for logging/UI without introducing nil-safety issues at current call sites. This is a solid improvement over ad-hoc scoring.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (4)
internal/services/crossseed/service.go (4)

2273-2280: Normalize hash before batch API call.

Based on the retrieved learning, GetTorrentFilesBatch expects uppercase hex hashes for keying. The code passes hash directly without normalization, but then uses the same hash as a map key.

Ensure the hash parameter is normalized to uppercase before the batch call:

+	normalizedHash := stringutils.DefaultNormalizer.Normalize(hash)
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 	if err != nil {
 		return nil, fmt.Errorf("failed to get torrent files: %w", err)
 	}
-	sourceFiles, ok := filesMap[hash]
+	sourceFiles, ok := filesMap[normalizedHash]
 	if !ok {
-		return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+		return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 	}

Based on learnings


2655-2662: Normalize hash before batch API call.

Same issue as in AnalyzeTorrentForSearchAsync. The hash parameter should be normalized to uppercase before calling GetTorrentFilesBatch and using it as a map key:

+	normalizedHash := stringutils.DefaultNormalizer.Normalize(hash)
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 	if err != nil {
 		return nil, fmt.Errorf("failed to get torrent files: %w", err)
 	}
-	sourceFiles, ok := filesMap[hash]
+	sourceFiles, ok := filesMap[normalizedHash]
 	if !ok {
-		return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+		return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 	}

Based on learnings


3174-3182: Normalize hash before GetTorrents filter.

The hash parameter should be normalized before being passed to GetTorrents:

+	normalizedHash := stringutils.DefaultNormalizer.Normalize(hash)
 	torrents, err := s.syncManager.GetTorrents(ctx, instanceID, qbt.TorrentFilterOptions{
-		Hashes: []string{hash},
+		Hashes: []string{normalizedHash},
 	})
 	if err != nil {
 		return nil, err
 	}
 	if len(torrents) == 0 {
-		return nil, fmt.Errorf("%w: torrent %s not found in instance %d", ErrTorrentNotFound, hash, instanceID)
+		return nil, fmt.Errorf("%w: torrent %s not found in instance %d", ErrTorrentNotFound, normalizedHash, instanceID)
 	}

4359-4368: Normalize hash before GetTorrents filter.

The hash parameter should be normalized before being passed to GetTorrents:

+	normalizedHash := stringutils.DefaultNormalizer.Normalize(hash)
 	torrents, err := s.syncManager.GetTorrents(ctx, instanceID, qbt.TorrentFilterOptions{
-		Hashes: []string{hash},
+		Hashes: []string{normalizedHash},
 	})
 	if err != nil {
 		return indexerIDs, nil, nil, err
 	}
 	if len(torrents) == 0 {
-		return indexerIDs, nil, nil, fmt.Errorf("%w: torrent %s not found in instance %d", ErrTorrentNotFound, hash, instanceID)
+		return indexerIDs, nil, nil, fmt.Errorf("%w: torrent %s not found in instance %d", ErrTorrentNotFound, normalizedHash, instanceID)
 	}
🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)

2024-2061: Consider returning empty map instead of nil on error.

When GetTorrentFilesBatch fails, the function returns nil instead of an empty map. While the callers appear to handle this correctly (checking len(filesByHash) == 0), returning an empty map would be more defensive and clearer:

 	filesByHash, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, hashes)
 	if err != nil {
 		log.Warn().
 			Int("instanceID", instanceID).
 			Int("hashCount", len(hashes)).
 			Err(err).
 			Msg("Failed to batch load torrent files for candidate selection")
-		return nil
+		return make(map[string]qbt.TorrentFiles)
 	}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3080ac6 and 3c6eab6.

📒 Files selected for processing (1)
  • internal/services/crossseed/service.go (27 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
pkg/stringutils/normalize.go (3)
  • Normalizer (19-22)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/services/crossseed/parsing.go (2)
  • FindLargestFile (603-625)
  • ParseMusicReleaseFromTorrentName (435-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (6)
internal/services/crossseed/service.go (6)

42-42: LGTM! String normalizer integration looks good.

The addition of the stringNormalizer field using the new stringutils.Normalizer and its initialization with NewDefaultNormalizer() provides centralized string normalization. This aligns well with the batch operations and hash normalization requirements.

Also applies to: 82-82, 166-166


3364-3379: LGTM! Correct hash normalization for cache keys.

Both searchResultCacheKey and asyncFilteringCacheKey properly normalize the hash using stringutils.DefaultNormalizer.Normalize(hash) before using it as a cache key. This is the correct pattern that ensures consistent key usage.


3516-3534: LGTM! Proper batch fetching and hash normalization.

The deduplication logic correctly:

  • Batch fetches torrent files for all hashes upfront
  • Normalizes hashes before using them as map keys (normalizeHash on line 3525)
  • Uses the normalized hashes consistently for map lookups

This is the correct pattern for batch operations.


3660-3676: LGTM! Correct use of GetTorrents with filter.

The function properly uses GetTorrents with TorrentFilterAll to fetch all torrents from the instance. The subsequent deduplication handles any necessary hash normalization.


5010-5010: LGTM! Pointer-based release signature.

The function signature correctly uses pointer-based releases (*rls.Release), which aligns with the PR objectives and improves memory efficiency for large release structures.


4772-4794: LGTM! Proper use of string normalizer.

The normalizeIndexerName function correctly uses stringutils.DefaultNormalizer.Normalize() to normalize indexer names before applying additional domain-specific transformations. This ensures consistent string comparison throughout the codebase.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c6eab6 and dbeef0d.

📒 Files selected for processing (1)
  • internal/services/reannounce/service.go (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/reannounce/service.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests

KyleSanderson and others added 3 commits November 22, 2025 02:11
- Canonicalize torrent hashes in SyncManager.GetTorrentFilesBatch and align cross-seed normalizeHash so file caches and progress lookups share a single key space.
- Add HasTorrentByAnyHash to avoid full-library scans when checking for existing torrents and wire it into processCrossSeedCandidate.
- Build per-run automation snapshots of instance torrents and reuse them in FindCandidates via an automation context to avoid repeated GetTorrents(FilterAll) calls.
- Introduce a per-instance semaphore and improved batching for GetTorrentFilesBatch, plus a torrent-files cache in the cross-seed service to reduce repeated /files calls across analyze/search flows.
- Debounce syncAfterModification per instance to coalesce bursts of mutations into fewer syncs.
- Add lightweight timing/logging around candidate search and batch file fetches to surface slow paths for future tuning.
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/services/crossseed/service.go (1)

5226-5268: Add nil guards to evaluateReleaseMatch

evaluateReleaseMatch now takes *rls.Release but dereferences both source and candidate unconditionally. A defensive check like if source == nil || candidate == nil { return 0, "no metadata" } (or similar) would prevent panics if any caller ever hands in a nil pointer, especially as more paths migrate to pointer-based releases.
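A minimal sketch of the suggested guard; the `Release` type and scoring below are placeholder stand-ins for `rls.Release` and the real match logic, not the service's actual implementation:

```go
package main

import "fmt"

// Release stands in for rls.Release; only the fields the sketch needs.
type Release struct {
	Title string
}

// evaluateReleaseMatch sketch: guard nil pointers before any dereference
// so callers migrating to pointer-based releases cannot trigger a panic.
func evaluateReleaseMatch(source, candidate *Release) (int, string) {
	if source == nil || candidate == nil {
		return 0, "no metadata"
	}
	if source.Title == candidate.Title {
		return 100, "title match"
	}
	return 0, "no match"
}

func main() {
	score, reason := evaluateReleaseMatch(nil, &Release{Title: "x"})
	fmt.Println(score, reason)
}
```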

♻️ Duplicate comments (3)
internal/services/crossseed/service.go (3)

2460-2488: Normalize hash consistently before GetTorrents filter

Here, AnalyzeTorrentForSearchAsync calls GetTorrents with Hashes: []string{hash}, but all downstream file and cache operations rely on a normalized form (via getTorrentFilesCached → cacheKeyForTorrentFiles → normalizeHash). To avoid edge cases where qBittorrent’s filter expects a canonicalized hash (case/whitespace) and to keep behavior consistent with your own helpers, consider normalizing once (e.g. normalizedHash := normalizeHash(hash)) and using that for both Hashes and subsequent lookups. The same pattern appears in other GetTorrents-with-Hashes call sites (e.g., SearchTorrentMatches, ApplyTorrentSearchResults, filterIndexersByExistingContent).

Based on learnings
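The normalize-once shape can be sketched compactly; the lowercase transform and the `getTorrents`/cache stand-ins here are illustrative, not the service's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeHash mirrors the idea of the service helper: trim whitespace
// and fold case so every lookup shares one canonical key space (the case
// choice here is illustrative).
func normalizeHash(hash string) string {
	return strings.ToLower(strings.TrimSpace(hash))
}

// analyze sketches the suggested pattern: normalize once, then reuse the
// canonical hash for both the qBittorrent filter and the cache key.
func analyze(hash string, getTorrents func(hashes []string) int, cache map[string]bool) string {
	normalized := normalizeHash(hash)
	getTorrents([]string{normalized}) // filter and cache share one key
	cache[normalized] = true
	return normalized
}

func main() {
	cache := map[string]bool{}
	key := analyze("  ABCDEF  ", func([]string) int { return 1 }, cache)
	fmt.Println(key, cache[key])
}
```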


2842-2874: Align SearchTorrentMatches hash usage with normalized keys

normalizedHash := normalizeHash(hash) is only used for the filesMap lookup from GetTorrentFilesBatch, while GetTorrents and the batch call both still receive the raw hash. For consistency and to avoid subtle mismatches if external code expects canonical hashes, it would be safer to pass normalizedHash into Hashes and GetTorrentFilesBatch, and keep all subsequent keys in that form; you already treat it as the canonical key when indexing filesMap.

Based on learnings

Also applies to: 2881-2901, 3322-3325


3386-3394: Reuse normalized hash for manual-apply lookup and caching

ApplyTorrentSearchResults now fetches the source torrent via GetTorrents with Hashes: []string{hash} and then uses hash again for searchResultCacheKey. Since searchResultCacheKey already uses stringutils.DefaultNormalizer.Normalize, you might want to normalize once per call and use that for both Hashes and cache key derivation, mirroring the helper behavior and further reducing chances of cache misses due to hash formatting.

Based on learnings

Also applies to: 3520-3544

🧹 Nitpick comments (6)
internal/services/crossseed/matching_layout_test.go (1)

120-147: Fake sync manager correctly updated to new qBittorrent interface (with normalized hash keys)

candidateSelectionSyncManager now implements GetTorrents, batch GetTorrentFilesBatch, and HasTorrentByAnyHash in a way that matches the new sync manager API and the cross-seed hash-normalization story (files keyed by normalizeHash(h)). This keeps the layout/matching tests representative without over-implementing unused methods.

If you ever need to assert on which hashes were requested in these tests, consider capturing the incoming hashes slice in GetTorrentFilesBatch for extra visibility.

Also applies to: 149-151

internal/services/crossseed/service_search_test.go (1)

16-16: Search service tests now exercise real parser/normalizer and normalized skipCache keys

Injecting NewReleaseCache() and stringutils.NewDefaultNormalizer() into Service in TestRefreshSearchQueueCountsCooldownEligibleTorrents is aligned with runtime wiring, and asserting skipCache via stringutils.DefaultNormalizer.Normalize(...) matches the new keying strategy in the implementation. Looks good.

For consistency with this test and the production normalization, consider updating any remaining uses of strings.ToLower for skipCache keys (e.g., in TestPropagateDuplicateSearchHistory) if the underlying code has been switched fully to the normalizer.

Also applies to: 135-146, 176-179

internal/qbittorrent/sync_manager.go (2)

46-54: New torrent client/lookup abstractions and SyncManager fields are well-scoped

Introducing torrentFilesClient / torrentLookup plus the corresponding provider fields on SyncManager, along with the debounce and file-fetch semaphore state, cleanly separates concerns and allows tests to inject stubs. NewSyncManager initializing timers, debounce delays, and per-instance semaphores gives a sensible default while still allowing zero-value SyncManager usage in tests.

If you foresee tuning debounce or file-fetch limits per-instance or per-call, consider exposing small setters (used only in tests/advanced config) instead of relying solely on struct field overrides.

Also applies to: 144-162, 181-192


2060-2135: Debounced sync and per-instance file-fetch semaphore are reasonable; minor timer-nit only

The new syncAfterModification / runDebouncedSync logic debounces syncs per instance using debouncedSyncTimers, applies a small jitter before Sync, and falls back to a sane default delay/minJitter when fields are zero. Combined with the fileFetchSem semaphore (and acquireFileFetchSlot / getFileFetchSemaphore), this should noticeably reduce sync and file-fetch pressure on large instances while remaining safe even for zero-value SyncManager used in tests.

One small nit: the debounce logic uses existing.Stop() and then existing.Reset(delay) without draining the timer or checking whether the callback has already fired. That’s usually fine for time.AfterFunc, but if you ever run into surprising double-firings or missed cleanups under heavy load, the timer lifecycle might be the first place to investigate.

If you want to make the timer handling more conservative, consider the pattern:

if existing, ok := sm.debouncedSyncTimers[instanceID]; ok {
    if existing.Stop() {
        existing.Reset(delay)
    }
    sm.syncDebounceMu.Unlock()
    return
}

and ensure runDebouncedSync doesn’t rely on timers that may have already been reused.

Also applies to: 2136-2173

internal/services/crossseed/crossseed_test.go (1)

1406-1433: fakeSyncManager updated to new sync interface with uppercased hash normalization

The fakeSyncManager now:

  • Implements GetTorrents with the new qbt.TorrentFilterOptions parameter, returning per-instance torrents.
  • Implements GetTorrentFilesBatch by normalizing requested hashes with normalizeHash(h) (likely uppercase) and copying from its pre-normalized files map, while still tolerating lowercased/raw keys as a fallback.
  • Implements HasTorrentByAnyHash by normalizing both the requested hashes and torrent Hash / InfohashV1 / InfohashV2 via normalizeHash, mirroring the production “match any hash variant” semantics.

This keeps cross-seed tests using the same hash-normalization convention as the service code while satisfying the expanded sync manager interface. Behavior here looks consistent and test-focused.

If you ever need to assert exactly which hashes were requested from GetTorrentFilesBatch in these tests, consider recording the incoming hashes slice on fakeSyncManager similar to the stubTorrentFilesClient pattern in sync_manager_test.go.

Also applies to: 1435-1456

internal/services/crossseed/align.go (1)

391-406: Defensive nil checks for *rls.Release helpers

Both shouldRenameTorrentDisplay and shouldAlignFilesWithCandidate dereference newRelease and matchedRelease without guarding against nil. If releaseCache.Parse or a future caller ever returns/passes nil, this will panic; consider early-returning true (i.e., “safe default”) when either argument is nil.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0390764 and b8f5ad2.

📒 Files selected for processing (9)
  • internal/qbittorrent/sync_manager.go (11 hunks)
  • internal/qbittorrent/sync_manager_test.go (1 hunks)
  • internal/services/crossseed/align.go (5 hunks)
  • internal/services/crossseed/crossseed_test.go (6 hunks)
  • internal/services/crossseed/dedup_test.go (3 hunks)
  • internal/services/crossseed/find_individual_episodes_test.go (5 hunks)
  • internal/services/crossseed/matching_layout_test.go (11 hunks)
  • internal/services/crossseed/service.go (40 hunks)
  • internal/services/crossseed/service_search_test.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/services/crossseed/dedup_test.go
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/qbittorrent/sync_manager_test.go
  • internal/services/crossseed/find_individual_episodes_test.go
  • internal/services/crossseed/matching_layout_test.go
  • internal/services/crossseed/align.go
  • internal/services/crossseed/service.go
  • internal/services/crossseed/crossseed_test.go
  • internal/services/crossseed/service_search_test.go
  • internal/qbittorrent/sync_manager.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/matching_layout_test.go
  • internal/services/crossseed/service.go
  • internal/services/crossseed/service_search_test.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (7)
internal/qbittorrent/sync_manager_test.go (1)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (145-162)
internal/services/crossseed/find_individual_episodes_test.go (2)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/jackett/service.go (1)
  • EnabledIndexerInfo (3038-3042)
internal/services/crossseed/matching_layout_test.go (3)
internal/services/crossseed/service.go (1)
  • Service (71-110)
pkg/releases/parser.go (1)
  • NewDefaultParser (29-31)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/service.go (3)
pkg/stringutils/normalize.go (3)
  • Normalizer (19-22)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/services/crossseed/models.go (3)
  • FindCandidatesResponse (128-131)
  • FindCandidatesRequest (106-119)
  • MatchedTorrent (64-69)
internal/services/crossseed/parsing.go (2)
  • FindLargestFile (603-625)
  • ParseMusicReleaseFromTorrentName (435-484)
internal/services/crossseed/crossseed_test.go (2)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
pkg/stringutils/normalize.go (1)
  • NewDefaultNormalizer (35-37)
internal/services/crossseed/service_search_test.go (2)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
pkg/stringutils/normalize.go (2)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/qbittorrent/sync_manager.go (1)
internal/qbittorrent/client.go (1)
  • Client (32-60)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (17)
internal/services/crossseed/matching_layout_test.go (1)

15-38: Service test wiring and pointer-based releases look consistent with production paths

Using releases.NewDefaultParser() and stringutils.NewDefaultNormalizer() on the test Service instances, and switching getMatchType / getMatchTypeFromTitle calls to *rls.Release, keeps tests exercising the same parser/normalizer and pointer-based signatures as the real service. No issues spotted here.

Also applies to: 44-72, 91-113, 192-208, 213-238, 243-266, 271-287

internal/services/crossseed/service_search_test.go (1)

226-238: queueTestSyncManager stub matches new sync manager surface

The queueTestSyncManager now exposes GetTorrents, a no-op GetTorrentFilesBatch, and HasTorrentByAnyHash to satisfy the expanded sync manager interface while keeping these tests focused on queue-refresh behavior. This keeps compilation and behavior correct without introducing unnecessary complexity.

internal/qbittorrent/sync_manager.go (2)

209-239: Provider-based accessors and GetTorrents wrapper improve testability without changing semantics

getTorrentFilesClient / getTorrentLookup guard against nil receivers and fall back to live getClientAndSyncManager when no providers are set, while tests can inject in-memory implementations. The new GetTorrents(ctx, instanceID, filter) simply delegates to the underlying qBittorrent SyncManager.GetTorrents, so callers can move off direct sync-manager usage without behavior change.

Also applies to: 263-274


1178-1314: Batch torrent-file fetching, caching, and hash normalization—verify hash-case alignment across components

The GetTorrentFilesBatch pipeline is well-structured:

  • Normalizes input hashes once via normalizeHashes, deduping canonical forms and building a lookup set.
  • Uses torrentFilesClient.getTorrentsByHashes(normalized.lookup) to populate progressByHash keyed by canonical (lowercase) hash.
  • Asks the FilesManager for cached files with GetCachedFilesBatch and only fetches missing entries concurrently using an errgroup plus acquireFileFetchSlot for per-instance throttling.
  • Caches fresh results back via CacheFiles keyed by canonical (lowercase) hash with the right progress, and returns a map keyed consistently by lowercase-trimmed hash.
  • Returns partial results on per-hash failures while aggregating them into fetchErrors and logging at debug-level for visibility.

GetTorrentFiles correctly canonicalizes the single hash and then delegates to the batch path, and HasTorrentByAnyHash + matchesAnyHash use the same normalizeHashes / canonicalizeHash machinery to match across Hash, InfohashV1, and InfohashV2.

One thing worth validating:

Hash case convention across components: Here, canonicalizeHash converts to lowercase, and all returned map keys are lowercase hex. The crossseed service has a separate normalizeHash function (line 2173) that historically used uppercase normalization. Ensure:

  • Callers in crossseed (align.go:81, service.go:2205, 2264, 2867, 3736) pass raw input hashes (not pre-uppercased keys) to GetTorrentFilesBatch, and
  • Any code consuming the returned map[string]qbt.TorrentFiles correctly handles the lowercase canonical keys (does not assume uppercase or re-normalize).
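The batch shape described above can be sketched in miniature; `cache`, `fetch`, and `canonical` are simplified stand-ins for the FilesManager, the per-torrent /files client, and `canonicalizeHash`:

```go
package main

import (
	"fmt"
	"strings"
)

func canonical(h string) string { return strings.ToLower(strings.TrimSpace(h)) }

// getFilesBatch sketches the pipeline: dedupe canonical hashes, serve
// cache hits, fetch only the misses, and write fresh results back.
func getFilesBatch(hashes []string, cache map[string][]string, fetch func(string) []string) map[string][]string {
	out := make(map[string][]string)
	seen := make(map[string]bool)
	for _, h := range hashes {
		key := canonical(h)
		if seen[key] {
			continue
		}
		seen[key] = true
		if files, ok := cache[key]; ok { // cache hit
			out[key] = files
			continue
		}
		files := fetch(key) // miss: fetch once and cache back
		cache[key] = files
		out[key] = files
	}
	return out
}

func main() {
	cache := map[string][]string{"aaa": {"cached.mkv"}}
	fetched := 0
	out := getFilesBatch([]string{"AAA", "aaa", "BBB"}, cache, func(string) []string {
		fetched++
		return []string{"fresh.mkv"}
	})
	fmt.Println(len(out), fetched)
}
```

Note the returned map is keyed by the canonical (here lowercase) form only, which is exactly the consumer contract the comment asks to verify.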
internal/qbittorrent/sync_manager_test.go (1)

12-118: New sync manager tests provide solid coverage for hash normalization, caching, and lookup behavior

These tests accurately pin down the intended behavior of normalizeHashes, GetTorrentFilesBatch, and HasTorrentByAnyHash:

  • TestNormalizeHashes enforces canonical lowercase keys, preferred original mapping, and the exact lookup ordering across trimmed/lower/upper variants.
  • TestGetTorrentFilesBatch_NormalizesAndCaches validates cache hits vs. misses, progress propagation into CacheFiles, and that getTorrentsByHashes is called with the expanded lookup set.
  • TestHasTorrentByAnyHash confirms matching against both Hash and InfohashV2.

The stub types (stubTorrentFilesClient, stubFilesManager, stubTorrentLookup) are minimal but precise, making these tests easy to reason about and extend.

Also applies to: 120-195

internal/services/crossseed/crossseed_test.go (1)

25-25: Cross-seed tests now inject string normalizer alongside release cache

Adding stringutils.NewDefaultNormalizer() to the Service initializations in webhook and candidate tests (alongside NewReleaseCache()) keeps the test wiring aligned with the production Service struct. This ensures that matching, layout decisions, and candidate selection in tests go through the same normalization layer as in real runs.

Also applies to: 1094-1098, 1167-1170, 1210-1214, 1327-1332

internal/services/crossseed/align.go (1)

50-86: Batch file lookup with canonical hash looks consistent

Using canonicalHash := normalizeHash(torrentHash) and preferring the live GetTorrentFilesBatch result over expectedSourceFiles is a good improvement and matches the batch-loading pattern used elsewhere, while safely falling back on error or missing entries.

internal/services/crossseed/find_individual_episodes_test.go (1)

17-18: Test wiring for new normalizer and batch APIs looks correct

The tests correctly initialize stringNormalizer on Service and extend episodeSyncManager to satisfy GetTorrents, GetTorrentFilesBatch, and HasTorrentByAnyHash. The batch implementation using strings.ToLower(h) with normalizeHash(h) for map keys matches the production normalization strategy and should keep the tests representative without overcomplicating the mock.

Also applies to: 43-56, 209-235

internal/services/crossseed/service.go (9)

42-43: New normalizer and torrent-file cache integration looks good

Introducing stringNormalizer and torrentFilesCache in Service, wiring them via NewService, and documenting GetTorrentFilesBatch on qbittorrentSync provides a clear, central normalization/caching story for hashes and file metadata. The TTL-backed caches and explicit comments keep the API surface understandable.

Also applies to: 70-83, 167-193


112-125: Automation snapshots and candidate caching are well-structured

automationInstanceSnapshot / automationSnapshots plus automationContext and automationCacheKey give a clean per-run snapshot and de-dup layer for FindCandidates, and findCandidatesWithAutomationContext falls back gracefully when no automation context is provided. This should significantly reduce repeated parsing and qB calls during automation runs without changing semantics.

Also applies to: 176-193, 1204-1207, 1545-1570


1620-1765: FindCandidates snapshot fallback and latency logging look solid

The updated findCandidates correctly prefers prebuilt snapshots when present, falls back to instanceStore.List otherwise, and limits qB calls via a torrentByHash set plus batchLoadCandidateFiles. The added elapsed-time check and debug logging for slow runs will help validate the new batching behavior in production.


2173-2220: Confirm sync-manager hash normalization matches normalizeHash

normalizeHash now lowercases hashes and is used both in cacheKeyForTorrentFiles and when indexing filesMap[normalizeHash(hash)] after GetTorrentFilesBatch. This assumes the sync manager normalizes batch keys with the same transform; if it still uses a different canonical form (e.g., uppercase from earlier revisions), lookups will quietly miss. Please verify GetTorrentFilesBatch’s key normalization matches normalizeHash, and if not, align one side or normalize the input slice before the batch call.

Based on learnings


2238-2275: Batch candidate file loading and best-match selection are coherent

batchLoadCandidateFiles now dedupes hashes via normalizeHash and calls GetTorrentFilesBatch once per instance, and findBestCandidateMatch operates on those normalized keys while also preferring candidates with a root folder and more files at equal score. This is a nice perf and UX win for cross-seed selection logic.

Also applies to: 2298-2348


3699-3750: Dedup root-folder probing via batch files is a good optimization

The new prefetch of all torrent files in deduplicateSourceTorrents using GetTorrentFilesBatch and hasRootCache[normalizeHash(hash)] nicely avoids per-torrent file calls and lets you prefer representatives with a root folder. This should substantially cut I/O for large queues without sacrificing correctness.


3943-3993: Skip-cache normalization is consistent with search history propagation

Using stringutils.DefaultNormalizer.Normalize(torrent.Hash) as the skipCache key, and the same normalizer when propagating duplicate search history, ensures you don’t end up with multiple entries for the same hash differing only in case/format. This keeps seeded-search cooldown logic predictable across deduplicated torrents.

Also applies to: 3847-3873


3576-3591: Consistent hash normalization for cache keys

Both searchResultCacheKey and asyncFilteringCacheKey now use stringutils.DefaultNormalizer.Normalize(hash), which should make cache lookups resilient to hash formatting differences. Just ensure DefaultNormalizer’s transform matches your intended canonical hash format (and is consistent with normalizeHash) so you don’t accidentally diverge between content caches and search/async caches.

Based on learnings

Also applies to: 3584-3591


4989-5010: Indexer name normalization pipeline is sensible

normalizeIndexerName running the indexer name through stringutils.DefaultNormalizer.Normalize, stripping tool suffixes like “(prowlarr)”/“(jackett)”, then feeding into normalizeDomainName gives a clean, reusable canonical indexer identifier for domain matching. This should make trackerDomainsMatchIndexer behavior more stable across different naming conventions.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (6)
internal/services/jackett/ratelimiter_test.go (1)

3-8: MaxWait test correctly exercises new RateLimiter options

  • Importing errors and updating BeforeRequest to BeforeRequest(ctx, indexer, nil) keeps existing tests aligned with the new API without changing semantics.
  • TestRateLimiterMaxWaitBudget is a good focused check that:
    • Seeds a prior request via RecordRequest.
    • Uses MinInterval: 50ms and MaxWait: 10ms so the computed wait is guaranteed to exceed MaxWait under normal execution.
    • Asserts the returned error is a *RateLimitWaitError and that Wait > MaxWait, which directly validates the new budget-enforcement behavior.

If you ever see flakes here due to extreme clock skew in CI, consider capturing now := time.Now() once and reusing it for RecordRequest and assertions, but current values are conservative enough that this should be reliable.

Also applies to: 20-29, 70-95

internal/services/jackett/service.go (4)

94-96: Search timeouts now account for per-request rate-limit budgets

The introduction of interactiveSearchMinInterval/interactiveSearchMaxWait, the rateLimit field on searchContext, and computeSearchTimeout ties search timeouts to the rate-limit configuration:

  • Search and SearchGeneric populate meta.rateLimit with interactive options, and computeSearchTimeout(meta, indexerCount) adds a single waitBudget (max of MinInterval and MaxWait) on top of timeouts.AdaptiveSearchTimeout. This ensures the outer search deadline has headroom for one rate-limit-induced wait.
  • Recent passes meta=nil, so computeSearchTimeout falls back to defaultMinRequestInterval as the wait budget.
  • The additional logging in SearchGeneric gives good observability into search_timeout and rate_limit_wait_budget.

One nuance: because computeSearchTimeout adds waitBudget after AdaptiveSearchTimeout has already clamped to MaxSearchTimeout, the effective deadline can exceed MaxSearchTimeout by up to waitBudget. If you expect MaxSearchTimeout to be a hard global cap, you may want to clamp again after adding the budget.

Also applies to: 142-148, 304-320, 358-370, 463-474, 547-560, 668-669, 1380-1393


993-999: Cache persistence now robust to nil and overly long contexts

Switching persistSearchCacheEntry to derive storeCtx via:

parent := ctx
if parent == nil {
	parent = context.Background()
}
storeCtx, cancel := context.WithTimeout(parent, 3*time.Second)

removes the risk of panicking on a nil context and bounds cache writes to 3 seconds, while still respecting upstream cancellation when ctx is non-nil. This is a solid hardening change for store operations.


1364-1378: Rate-limit option plumbing and error handling are consistent, but RateLimitWaitError semantics may be harsh

  • cloneRateLimitOptions gives each indexer goroutine its own copy of the options, avoiding shared mutable state.
  • asRateLimitWaitError cleanly wraps the errors.As pattern and is correctly used in searchMultipleIndexers.
  • computeSearchTimeout and resolveOptions both base their budgets on the same RateLimitOptions, so the timeout math is consistent with what BeforeRequest will actually wait.
  • In searchMultipleIndexers, BeforeRequest(ctx, idx, rateLimitOpts) now short-circuits with a RateLimitWaitError when the required wait exceeds MaxWait, and you:
    • Log a detailed warning with required vs. max wait and priority.
    • Push indexerResult{id: 0, err: waitErr} into the channel so coverage does not count that indexer as completed.
    • Log aggregate counters and a completion summary with explicit failure/timeout counts.

However, because RateLimitWaitError is treated as a non-timeout failure, if every non-timeout indexer hits the MaxWait budget, searchMultipleIndexers will return all %d indexers failed (last error: %w), which Search / SearchGeneric treat as a hard error rather than a “gracefully skipped due to rate limits” case.

If the intent is to degrade gracefully when all indexers exceed the wait budget (especially for interactive searches), consider either:

  • Treating RateLimitWaitError as a distinct “skipped” category that does not trigger the “all failed” aggregate error, or
  • Returning a more explicit top-level error indicating that the search did not run because of per-request wait budgets, not backend failures.

Also applies to: 1395-1403, 1558-1569, 1663-1668, 1683-1688


1850-1854: Prowlarr year workaround is simple and correct, but diverges from capability-aware behavior

applyProwlarrWorkaround now:

  • Accepts (*models.TorznabIndexer, map[string]string) and only runs for TorznabBackendProwlarr.
  • If "year" is non-empty, moves it into "q" (appending when q is non-empty) and deletes "year".
  • Logs a detailed debug message about the transformation.

This matches TestProwlarrYearParameterWorkaround expectations and fixes the previous URL-values-based signature.

Note though that TestProwlarrCapabilityAwareYearWorkaround models a more advanced behavior (keeping "year" when the indexer advertises "movie-search-year"). Right now, that logic is duplicated in the test but not wired into applyProwlarrWorkaround or applyCapabilitySpecificParams, so production always moves "year" into the query for Prowlarr regardless of capabilities.

To avoid this drift and ensure capability-aware handling is actually used at runtime, consider:

  • Refactoring the capability-aware logic from the test into applyProwlarrWorkaround (or into a shared helper), and
  • Updating applyProwlarrWorkaround to consult hasCapability before deciding whether to strip "year".

Also applies to: 1856-1884
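A hedged sketch of the capability-aware variant; hasCapability is a stand-in for however the service actually exposes indexer capabilities:

```go
package main

import "fmt"

// applyProwlarrYearWorkaround sketch: keep "year" when the indexer
// advertises native support, otherwise fold it into "q" and drop it.
func applyProwlarrYearWorkaround(params map[string]string, hasCapability func(string) bool) {
	year, ok := params["year"]
	if !ok || year == "" {
		return
	}
	if hasCapability("movie-search-year") {
		return // indexer handles year natively; leave params untouched
	}
	if q := params["q"]; q != "" {
		params["q"] = q + " " + year
	} else {
		params["q"] = year
	}
	delete(params, "year")
}

func main() {
	params := map[string]string{"q": "Inception", "year": "2010"}
	applyProwlarrYearWorkaround(params, func(string) bool { return false })
	fmt.Println(params["q"])
}
```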

internal/services/jackett/ratelimiter.go (1)

12-14: Rate limiter options and MaxWait enforcement are implemented cleanly

Key points in the updated rate limiter:

  • Raising defaultMinRequestInterval to 60s is a behavioral change for callers that rely on the default; background-style flows will now be much more conservative unless they override MinInterval via RateLimitOptions.
  • RateLimitPriority and RateLimitOptions give you a flexible surface to distinguish interactive vs background work and tweak MinInterval / MaxWait per request.
  • RateLimitWaitError:
    • Carries indexer ID, name, required wait, max wait, and priority.
    • Implements Error() with a descriptive message.
    • Implements Is so errors.Is(err, &RateLimitWaitError{}) works as expected.

In BeforeRequest:

  • resolveOptions merges caller-provided options with defaults, ensuring MinInterval is always > 0 (falling back to defaultMinRequestInterval or r.minInterval).
  • The main loop:
    • Computes wait via computeWaitLocked(indexer, now, cfg.MinInterval).
    • Immediately returns a RateLimitWaitError if cfg.MaxWait > 0 and wait > cfg.MaxWait, before allocating a timer.
    • Otherwise, waits with a time.Timer and respects ctx.Done() correctly, reacquiring the mutex after the wait.

In computeWaitLocked:

  • Accepting minInterval as a parameter lets per-request configuration override the global r.minInterval cleanly.
  • The function combines cooldown, per-request min interval, and hourly/daily windows into a single max-wait duration, which is exactly what BeforeRequest needs to decide between waiting or failing fast with RateLimitWaitError.

Overall this is a solid extension of the existing limiter. Just be aware of the new 60s default and audit any call sites that relied on the old 2s default but do not yet pass options.

Also applies to: 16-21, 23-49, 73-114, 190-240, 286-319

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 41431b0 and 8a4798f.

📒 Files selected for processing (4)
  • internal/services/jackett/ratelimiter.go (5 hunks)
  • internal/services/jackett/ratelimiter_test.go (3 hunks)
  • internal/services/jackett/service.go (16 hunks)
  • internal/services/jackett/service_test.go (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (3)
internal/services/jackett/ratelimiter_test.go (2)
internal/services/jackett/ratelimiter.go (4)
  • NewRateLimiter (63-71)
  • RateLimitOptions (23-27)
  • RateLimitPriorityInteractive (20-20)
  • RateLimitWaitError (29-35)
internal/models/torznab_indexer.go (1)
  • TorznabIndexer (64-83)
internal/services/jackett/ratelimiter.go (1)
internal/models/torznab_indexer.go (1)
  • TorznabIndexer (64-83)
internal/services/jackett/service.go (2)
internal/services/jackett/ratelimiter.go (3)
  • RateLimitOptions (23-27)
  • RateLimitPriorityInteractive (20-20)
  • RateLimitWaitError (29-35)
internal/pkg/timeouts/timeouts.go (2)
  • WithSearchTimeout (35-46)
  • AdaptiveSearchTimeout (19-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (3)
internal/services/jackett/service_test.go (1)

1435-1440: Updated workaround call matches new API and semantics

The test now calls service.applyProwlarrWorkaround(indexer, inputParams) with a map[string]string, which aligns with the updated method signature and the production call site in searchMultipleIndexers. This keeps the test exercising the actual in-place mutation behavior.

internal/services/jackett/service.go (2)

1480-1482: Search client invocations correctly switched to map-based params

The three search call sites now invoke the client with map[string]string params:

  • Native: client.SearchDirect(ctx, paramsMap)
  • Prowlarr: client.Search(ctx, indexerID, paramsMap)
  • Aggregator: client.Search(ctx, indexerID, paramsMap)

This aligns with the earlier conversion from url.Values to paramsMap and keeps the Prowlarr workaround and capability-based parameter logic working on a simple, mutable map.

Also applies to: 1497-1499, 1523-1525


190-240: RateLimiter integration points look consistent with new options

  • Search / SearchGeneric set rateLimit on searchContext and pass that meta down to searchMultipleIndexers.
  • computeWaitLocked uses the same min interval as resolved in resolveOptions, so the wait computation matches what Search assumed when constructing its timeout.
  • The new RateLimitWaitError flow in BeforeRequest is accurately tested in ratelimiter_test.go.

Overall, the wiring from search-level intent (interactive vs background) down to per-request rate-limit behavior appears coherent.

Also applies to: 296-319

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
internal/services/jackett/ratelimiter.go (1)

13-14: Per‑request option flow is solid; consider having NextWait respect MaxWait too

The switch to per‑request options in BeforeRequest (with resolveOptions) and the early RateLimitWaitError when wait > MaxWait look correct and give callers a clean way to bound blocking time. computeWaitLocked’s minInterval parameter plus a higher defaultMinRequestInterval also integrates cleanly with the new interactive vs background priorities.

One behavioral gap: NextWait currently ignores MaxWait and always returns the full computed wait, even though BeforeRequest will immediately fail when that wait exceeds MaxWait. This means the scheduler may sleep based on a long raw wait that can never actually be honored for interactive jobs. Consider clamping NextWait’s return value by MaxWait when it’s set, so scheduling decisions reflect the same wait budget as execution:

func (r *RateLimiter) NextWait(indexer *models.TorznabIndexer, opts *RateLimitOptions) time.Duration {
    if indexer == nil {
        return 0
    }
    cfg := r.resolveOptions(opts)
    r.mu.Lock()
    defer r.mu.Unlock()
    now := time.Now()
-   return r.computeWaitLocked(indexer, now, cfg.MinInterval)
+   wait := r.computeWaitLocked(indexer, now, cfg.MinInterval)
+   if cfg.MaxWait > 0 && wait > cfg.MaxWait {
+       wait = cfg.MaxWait
+   }
+   return wait
}

Also applies to: 73-99, 190-205, 286-333

internal/services/jackett/service.go (2)

561-575: Minor duplication of wait‑budget computation between SearchGeneric and computeSearchTimeout

SearchGeneric recomputes waitBudget for logging using essentially the same MinInterval/MaxWait max logic embodied in computeSearchTimeout. This is harmless today but creates a second place to keep in sync if the budgeting logic changes.

Consider factoring the wait‑budget calculation into a small helper (e.g., searchWaitBudget(meta *searchContext) time.Duration) and having both computeSearchTimeout and the log line call it, to avoid drift.

Also applies to: 1394-1405


1865-1868: Capability‑specific params hook is a stub; behavior remains unchanged

applyCapabilitySpecificParams currently no‑ops unless you later add capability‑driven parameter tweaks, so it doesn’t alter existing behavior. The Prowlarr year workaround is nicely isolated and logged, and being a pure transformation on params keeps it easy to extend if you want similar backend‑specific adjustments later.

Also applies to: 1872-1898

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 8a4798f and 4bc8b51.

📒 Files selected for processing (3)
  • internal/services/jackett/ratelimiter.go (6 hunks)
  • internal/services/jackett/scheduler.go (1 hunks)
  • internal/services/jackett/service.go (20 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (3)
internal/services/jackett/service.go (2)
internal/services/jackett/ratelimiter.go (3)
  • RateLimitOptions (23-27)
  • RateLimitPriorityInteractive (20-20)
  • RateLimitWaitError (29-35)
internal/pkg/timeouts/timeouts.go (2)
  • WithSearchTimeout (35-46)
  • AdaptiveSearchTimeout (19-32)
internal/services/jackett/ratelimiter.go (1)
internal/models/torznab_indexer.go (1)
  • TorznabIndexer (64-83)
internal/services/jackett/scheduler.go (2)
internal/services/jackett/client.go (1)
  • Result (84-101)
internal/services/jackett/ratelimiter.go (2)
  • RateLimiter (57-61)
  • RateLimitPriorityInteractive (20-20)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (9)
internal/services/jackett/ratelimiter.go (1)

16-35: New rate‑limit types and error wiring look coherent

RateLimitPriority, RateLimitOptions, and RateLimitWaitError are well factored and line up with how the scheduler/service consume them. The Error message is clear, and Is matching by type only is appropriate for errors.Is checks.

Also applies to: 37-48

internal/services/jackett/scheduler.go (1)

62-87: Queue ordering, cancellation, and runnable selection look correct

The scheduler’s FIFO‑within‑priority ordering, busy‑indexer checks, and nextRunnable logic (dropping cancelled jobs, computing a minimal nextWait, and returning immediately runnable jobs) are consistent and thread‑safe with s.mu. Using a per‑job respCh and feeding completions back through completeCh cleanly decouples job producers from the execution loop.

Also applies to: 150-188
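The FIFO-within-priority selection described above can be sketched with hypothetical types (the real scheduler uses its own job struct, mutex, and channels; this only shows the selection rule):

```go
package main

import "fmt"

// job is an illustrative stand-in for the scheduler's queued search job.
type job struct {
	priority int   // higher runs first
	seq      int64 // monotonically increasing enqueue counter
	indexer  int
}

// nextRunnable picks the highest-priority job whose indexer is not busy,
// breaking ties by enqueue order (FIFO within a priority level).
func nextRunnable(queue []job, busy map[int]bool) (job, bool) {
	best := -1
	for i, j := range queue {
		if busy[j.indexer] {
			continue // indexer already has a request in flight
		}
		if best == -1 ||
			j.priority > queue[best].priority ||
			(j.priority == queue[best].priority && j.seq < queue[best].seq) {
			best = i
		}
	}
	if best == -1 {
		return job{}, false
	}
	return queue[best], true
}

func main() {
	queue := []job{
		{priority: 1, seq: 1, indexer: 10},
		{priority: 2, seq: 2, indexer: 11},
		{priority: 2, seq: 3, indexer: 12},
	}
	busy := map[int]bool{11: true} // indexer 11 is mid-request
	next, ok := nextRunnable(queue, busy)
	fmt.Println(ok, next.indexer) // prints true 12
}
```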

internal/services/jackett/service.go (7)

64-83: Scheduler integration into Service is straightforward

Adding searchScheduler *searchScheduler to Service, constructing it in NewService, and wiring it to the shared RateLimiter keeps search orchestration encapsulated and maintains the existing executeSearch override hook. Apart from the initialization ordering concern already noted in scheduler.go, the overall service wiring and field layout look good.

Also applies to: 198-217


88-107: Interactive search rate‑limit metadata is well scoped

The introduction of interactiveSearchMinInterval/interactiveSearchMaxWait and the new searchContext.rateLimit field, populated in Search and SearchGeneric, cleanly separate interactive flows from background ones. Using RateLimitPriorityInteractive here lines up with jobPriority in the scheduler and with the per‑request options passed into the rate limiter.

Also applies to: 143-149, 324-333, 477-488


227-235: executeQueuedSearch and computeSearchTimeout give consistent budgeting across flows

Routing all networked searches (Search, SearchGeneric, Recent) through executeQueuedSearch when a scheduler is available, and falling back to executeSearch otherwise, preserves the existing behavior while enabling queueing over cooldown‑blocked indexers.

computeSearchTimeout’s AdaptiveSearchTimeout + waitBudget model, where waitBudget is derived from meta.rateLimit (or falls back to defaultMinRequestInterval), ensures there’s room in the overall deadline for both rate‑limit waiting and the actual HTTP calls. The subsequent timeout handling (DeadlineExceeded treated as partial results rather than hard failure) is consistent across the three entry points.

Also applies to: 372-385, 561-575, 682-687, 1394-1407


1007-1012: Defensive context handling in persistSearchCacheEntry is a nice robustness fix

Normalizing a nil ctx to context.Background() before WithTimeout avoids a potential panic path when persisting cache entries from callers that don’t pass a context. This change is backwards‑compatible and keeps the 3s timeout behavior intact.


1378-1407: Rate‑limit option cloning and reuse are safe and avoid shared mutation

cloneRateLimitOptions correctly deep‑copies the pointed‑to struct so each call site (scheduler and searchMultipleIndexers) can safely re‑use a shared options template without risk of concurrent mutation. asRateLimitWaitError centralizes error type extraction, and computeSearchTimeout reuses the same wait‑budget logic as the logs in SearchGeneric. Using a single cloned rateLimitOpts instance per search and passing it into BeforeRequest across goroutines is safe because the limiter never mutates the passed‑in struct.

Also applies to: 1416-1417


1496-1513: Prowlarr workaround and rate‑limit enforcement in searchMultipleIndexers look correct

The refactored applyProwlarrWorkaround (now backend‑aware and side‑effecting only the params map) plus the updated SearchDirect/Search calls that pass ctx align with the client API and keep the year‑parameter workaround scoped to Prowlarr.

Hooking s.rateLimiter.BeforeRequest(ctx, idx, rateLimitOpts) inside each indexer goroutine, and treating RateLimitWaitError specially for logging while skipping coverage (id: 0), integrates nicely with the new per‑search RateLimitOptions. The aggregate error logic still only fails the entire search when all non‑timeout indexers fail, so a few rate‑limit‑skipped indexers won’t poison successful results.

Also applies to: 1538-1539, 1568-1569, 1572-1585


1677-1683: Additional search metrics logging is helpful and low‑risk

The new debug and warn logs around indexers_requested/failed/successful/timed_out and the completion summary provide useful observability for tuning the new scheduler and rate‑limit behavior without affecting control flow.

Also applies to: 1697-1703

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
web/src/components/indexers/IndexersPage.tsx (2)

121-121: Standardize error message fallback for consistency.

Line 121 uses "Connection test failed" as fallback while Line 144 uses String(error). Consider using the same approach for consistency across both handlers.

-      const errorMsg = error instanceof Error ? error.message : String(error)
+      const errorMsg = error instanceof Error ? error.message : "Connection test failed"

Also applies to: 144-144


114-162: Consider adding cleanup for async operations on unmount.

Both handleTest and handleTestAll perform async operations that update state after completion. If the component unmounts while tests are running, these state updates could attempt to execute on an unmounted component. While React 19 handles this better than previous versions, consider using AbortController for proper cleanup.

Example pattern:

const handleTest = async (id: number) => {
  const abortController = new AbortController()
  updateIndexerTestState(id, "testing", undefined)
  
  try {
    await api.testTorznabIndexer(id, { signal: abortController.signal })
    updateIndexerTestState(id, "ok", undefined)
    toast.success("Connection test successful")
  } catch (error) {
    if (error instanceof Error && error.name === 'AbortError') return
    const errorMsg = error instanceof Error ? error.message : "Connection test failed"
    updateIndexerTestState(id, "error", errorMsg)
    toast.error(errorMsg)
  }
}

// Store abort controllers in a ref and cleanup in useEffect
useEffect(() => {
  return () => {
    // Cleanup logic to abort ongoing operations
  }
}, [])

Note: This assumes the API client supports AbortSignal.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 4bc8b51 and fced4ce.

📒 Files selected for processing (1)
  • web/src/components/indexers/IndexersPage.tsx (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (1)
web/src/components/indexers/IndexersPage.tsx (1)
web/src/lib/api.ts (1)
  • api (1574-1574)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (1)
web/src/components/indexers/IndexersPage.tsx (1)

164-179: Backend test handler persists results to database; review comment based on incomplete analysis.

The original concern is incorrect. Test results ARE persisted:

  1. Frontend calls api.testTorznabIndexer(id) → backend TestIndexer handler (jackett.go:579)
  2. Handler performs the actual test and calls updateTestStatusWithTimeout() → UpdateTestStatus() → database UPDATE (jackett.go lines 618, 625)
  3. Frontend's updateIndexerTestState() is a subsequent UI-only state update, not the persistence mechanism
  4. Page refresh fetches from backend via loadIndexers() → api.listTorznabIndexers(), restoring persisted test results

The local state update complements backend persistence for immediate UI feedback; it does not replace it. Test results persist across page refreshes.

Likely an incorrect or invalid review comment.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
internal/services/jackett/scheduler.go (1)

39-51: Still a potential data race on searchScheduler.rateLimiter initialization

newSearchScheduler starts the loop with rateLimiter set to nil, and the field is expected to be populated later from outside the scheduler. Since the scheduler loop (nextRunnable, via its maxWait computation) reads s.rateLimiter concurrently with that write, this is still a data race under the Go memory model.

Consider wiring the *RateLimiter in before starting the loop, e.g. by:

  • Changing newSearchScheduler to accept a *RateLimiter and store it before go s.loop(), or
  • Providing a constructor or factory in the Jackett service that creates both the rate limiter and scheduler and never mutates rateLimiter afterward.

This will keep the scheduler’s internal state write‑once and race‑free.

You can confirm the race by running go test ./... -race focusing on Jackett service tests that exercise concurrent searches.

Also applies to: 91-99, 192-205
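The constructor-wiring fix can be sketched as follows, using hypothetical minimal types. The key point is that a write made before the `go` statement happens-before everything the started goroutine does, so the field becomes write-once and race-free:

```go
package main

import "fmt"

// Minimal stand-ins for the real scheduler types (illustrative only).
type RateLimiter struct{ name string }

type searchScheduler struct {
	rateLimiter *RateLimiter
	done        chan struct{}
}

// newSearchScheduler assigns the limiter before go s.loop() runs, so the
// field is written exactly once and never mutated afterward.
func newSearchScheduler(rl *RateLimiter) *searchScheduler {
	s := &searchScheduler{rateLimiter: rl, done: make(chan struct{})}
	go s.loop()
	return s
}

func (s *searchScheduler) loop() {
	defer close(s.done)
	// safe: rateLimiter was assigned before this goroutine started
	fmt.Println("loop sees limiter:", s.rateLimiter.name)
}

func main() {
	s := newSearchScheduler(&RateLimiter{name: "shared"})
	<-s.done
}
```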

internal/services/crossseed/service.go (1)

2176-2185: Unify hash normalization with qBittorrent expectations across GetTorrents/GetTorrentFilesBatch/HasTorrentByAnyHash

There are several places where hash normalization and qBittorrent integration don’t line up cleanly:

  • normalizeHash now lowercases hashes:
    func normalizeHash(hash string) string {
        return strings.ToLower(strings.TrimSpace(hash))
    }
  • Calls into qBittorrent use raw hash values:
    • GetTorrents(..., TorrentFilterOptions{Hashes: []string{hash}}) in:
      • AnalyzeTorrentForSearchAsync (Lines 2463–2465),
      • SearchTorrentMatches (Lines 2855–2857),
      • ApplyTorrentSearchResults (Lines 3389–3397),
      • filterIndexersByExistingContent (Lines 4578–4588),
      • plus other non‑hash‑filtered calls that are fine.
    • GetTorrentFilesBatch(ctx, instanceID, []string{hash}) in getTorrentFilesCached (Lines 2208–2211) and SearchTorrentMatches (Lines 2870–2873).
  • Map lookups and caches use normalized hashes:
    • filesMap[normalizeHash(hash)] in getTorrentFilesCached.
    • filesByHash[normalizeHash(torrent.Hash)] in batchLoadCandidateFiles.
    • filesMap[hash] where hash is normalizeHash(t.Hash) in deduplicateSourceTorrents.
    • normalizedHash := normalizeHash(hash) and filesMap[normalizedHash] in SearchTorrentMatches.
    • Caches use stringutils.DefaultNormalizer.Normalize(hash) in searchResultCacheKey, asyncFilteringCacheKey, and skipCache.

Based on the retrieved learning, the qBittorrent sync layer expects uppercase hex for batch/hash‑keyed APIs (e.g. GetCachedFilesBatch / GetTorrentFilesBatch), and other parts of this repo historically normalized infohashes to uppercase via a shared normalizer. Mixing:

  • lowercased normalizeHash,
  • raw incoming hash for qBittorrent filters, and
  • stringutils.DefaultNormalizer elsewhere

creates a high risk that:

  • GetTorrents(...Hashes: []string{hash}) silently fails to match when hash casing differs from qBittorrent’s internal normalization, and
  • GetTorrentFilesBatch returns a map keyed by one canonical form (e.g., uppercase) while lookups use another (lowercase), causing false “torrent files not found” errors and breaking cross‑seed searches, Analyze, deduplication, and file‑level matching.

To make this robust and consistent, I recommend:

  1. Pick a single canonical hash normalizer (preferably the same one used by qBittorrent sync):

    If stringutils.DefaultNormalizer is already aligned with qBittorrent’s expectation (uppercase hex per the previous learning), use it everywhere for infohashes and drop the separate normalizeHash helper, or re‑implement normalizeHash in terms of it.

```diff
-func normalizeHash(hash string) string {
-	return strings.ToLower(strings.TrimSpace(hash))
-}
+func normalizeHash(hash string) string {
+	return stringutils.DefaultNormalizer.Normalize(strings.TrimSpace(hash))
+}
```
    
    
  2. Always normalize hashes before calling qBittorrent hash‑filtered APIs and reuse that same value for map keys and caches:

    Example for SearchTorrentMatches (pattern should be applied similarly in AnalyzeTorrentForSearchAsync, ApplyTorrentSearchResults, getTorrentFilesCached, batchLoadCandidateFiles, deduplicateSourceTorrents, and filterIndexersByExistingContent):

```diff
 if strings.TrimSpace(hash) == "" {
 	return nil, fmt.Errorf("%w: torrent hash is required", ErrInvalidRequest)
 }
+normalizedHash := normalizeHash(hash)
+if normalizedHash == "" {
+	return nil, fmt.Errorf("%w: torrent hash is required", ErrInvalidRequest)
+}
 ...
-torrents, err := s.syncManager.GetTorrents(ctx, instanceID, qbt.TorrentFilterOptions{
-	Hashes: []string{hash},
-})
+torrents, err := s.syncManager.GetTorrents(ctx, instanceID, qbt.TorrentFilterOptions{
+	Hashes: []string{normalizedHash},
+})
 ...
-filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 ...
 sourceFiles, ok := filesMap[normalizedHash]
 if !ok {
-	return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+	return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 }
```

    For `getTorrentFilesCached`:

```diff
-key := cacheKeyForTorrentFiles(instanceID, hash)
+normHash := normalizeHash(hash)
+key := cacheKeyForTorrentFiles(instanceID, normHash)
 ...
-filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normHash})
 ...
-files, ok := filesMap[normalizeHash(hash)]
+files, ok := filesMap[normHash]
```
    
    
  3. Ensure HasTorrentByAnyHash callers also feed normalized hashes:

    In processCrossSeedCandidate you currently do:

    existingTorrent, exists, err := s.syncManager.HasTorrentByAnyHash(ctx, candidate.InstanceID, []string{torrentHash})

    Normalize torrentHash once up front and reuse it there and in any subsequent comparisons.

This unification will keep:

  • qBittorrent filters (GetTorrents, HasTorrentByAnyHash) and batch file lookups,
  • internal maps (filesByHash, torrentByHash, deduplication), and
  • caches (searchResultCacheKey, asyncFilteringCacheKey, skipCache)

all speaking the same canonical infohash format, avoiding subtle “not found” and duplicate‑detection failures.

Based on learnings

Once updated, it’s worth:

  • Adding a small unit/integration test that calls GetTorrentFilesBatch and GetTorrents with mixed‑case hashes and asserts you can still find files and torrents, and
  • Verifying the underlying qBittorrent sync manager’s normalization (uppercase vs lowercase) so normalizeHash / DefaultNormalizer match it exactly.

Also applies to: 2196-2223, 2241-2268, 2463-2465, 2489-2493, 2845-2877, 2883-2899, 3389-3397, 3731-3753, 3880-3881, 3946-3950, 4005-4014, 4578-4588, 5333-5347, 3523-3542, 3580-3585, 3587-3594
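The suggested mixed-case test can be sketched like this. It assumes uppercase hex as the canonical form (per the retrieved learning about the sync layer); the map stands in for the result of GetTorrentFilesBatch:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeHash uses uppercase hex as the canonical form, matching the
// documented expectation of the qBittorrent sync layer (assumption).
func normalizeHash(h string) string {
	return strings.ToUpper(strings.TrimSpace(h))
}

func main() {
	canonical := normalizeHash("abcdef0123456789abcdef0123456789abcdef01")
	// stand-in for the map returned by GetTorrentFilesBatch
	filesMap := map[string][]string{canonical: {"movie.mkv"}}

	inputs := []string{
		"ABCDEF0123456789ABCDEF0123456789ABCDEF01",
		"AbCdEf0123456789abcdef0123456789ABCDEF01",
		"  abcdef0123456789abcdef0123456789abcdef01\n",
	}
	for _, in := range inputs {
		if _, ok := filesMap[normalizeHash(in)]; !ok {
			panic("lookup missed for input casing: " + in)
		}
	}
	fmt.Println("all casings resolved to the canonical key")
}
```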

🧹 Nitpick comments (7)
internal/services/crossseed/service.go (1)

4527-4720: Temporary testFilteringEnabled flag in filterIndexersByExistingContent should not live in production code

The content‑based indexer filtering logic is quite detailed and seems well‑instrumented, but there is a leftover test hook:

// TEMPORARY TEST: Remove all indexers to see if the filtering is working
testFilteringEnabled := false // Set to true to test if filtering is working
if testFilteringEnabled {
    log.Debug().Msg("[CROSSSEED-FILTER] *** TEST MODE: FILTERING OUT ALL INDEXERS ***")
    return []int{}, nil, nil, nil
}

Even though it’s currently false, shipping a “flip this to kill all indexers” switch in core logic is risky (accidental toggle, merge conflict, or debug change).

Recommend removing this block or replacing it with:

  • A dedicated unit test that forces the same conditions, or
  • A build‑tagged or environment‑driven debug path that can’t be toggled accidentally in normal builds.

If you keep any debug path, consider guarding it with an env var and logging prominently when it’s active so production behavior is obvious.

internal/services/jackett/service.go (6)

143-212: Search context and rate-limit priority handling look consistent; be aware of overwrite semantics

The searchContext + finalizeSearchContext / resolveSearchPriority / rateLimitOptionsForPriority + WithSearchPriority stack is clean: you normalize priority from (a) explicit options, (b) context, and (c) a fallback, and always end up with non‑nil RateLimitOptions. The searchPriorityKey{} context key is also safely namespaced.

One subtle behavior: finalizeSearchContext always replaces any pre‑existing meta.rateLimit struct with the canned options for the resolved priority. If there is (now or in the future) any call-path that populates custom RateLimitOptions (e.g., tuned MinInterval/MaxWait) on meta before calling finalizeSearchContext, those custom durations will be discarded. Given searchContext is internal and you currently only supply the priority via context/fallback, this is fine, but it’s worth keeping in mind if you later add more granular per‑search tuning.

Also applies to: 261-267


291-307: Queued search flow + timeout/partial handling are coherent with rate-limit metadata

The combination of:

  • Building meta via finalizeSearchContext (interactive priority for user‑initiated searches),
  • Computing searchTimeout := computeSearchTimeout(meta, len(indexersToSearch)) and wrapping via timeouts.WithSearchTimeout, and
  • Routing through executeQueuedSearch (which safely re‑finalizes meta in an idempotent way)

yields a sensible, rate‑limit‑aware search envelope. The new deadlineErr handling and Partial flag computation in Search also look correct and avoid treating deadline timeouts as hard failures when some results were gathered.

The only very minor nit is that executeQueuedSearch always calls finalizeSearchContext again even when meta was already finalized; you could micro‑optimize by skipping if meta != nil && meta.rateLimit != nil, but behavior today is correct.

Also applies to: 396-401, 439-451


544-551: SearchGeneric timeout and wait-budget logging are good; consider de-duplicating wait-budget logic

Using computeSearchTimeout(meta, len(indexersToSearch)) and logging both search_timeout and a derived rate_limit_wait_budget gives nice visibility into how much of the overall window is being reserved for rate‑limit waiting.

The waitBudget calculation here duplicates the logic already embedded in computeSearchTimeout (both derive it from meta.rateLimit.MinInterval/MaxWait with the same rules). That’s fine as‑is, but you could consider extracting a small helper (e.g., waitBudgetForMeta(meta *searchContext) time.Duration) used by both computeSearchTimeout and this log block to keep the behavior from ever drifting.

Also applies to: 623-640


1071-1076: Cache persistence now handles nil contexts safely; consider whether writes should outlive caller ctx

Switching to:

parent := ctx
if parent == nil {
    parent = context.Background()
}
storeCtx, cancel := context.WithTimeout(parent, 3*time.Second)

eliminates the risk of panicking when ctx is nil and keeps cache writes bounded by both a small timeout and the caller’s cancellation.

If, in future, you decide cache persistence should still succeed when the request has already timed out (e.g., to maximize reuse of expensive searches), you might consider always using context.Background() as the parent here instead of ctx, but the current behavior is internally consistent and safe.


1442-1471: Rate-limit option cloning and timeout computation look correct; shared options assumed read-only

cloneRateLimitOptions:

  • Produces a fresh copy of meta.rateLimit when present (so goroutines don’t share the original), and
  • Falls back to a “background” RateLimitOptions when meta/meta.rateLimit is nil.

computeSearchTimeout then cleanly layers a single “wait budget” (max of MinInterval/MaxWait, or defaultMinRequestInterval) on top of AdaptiveSearchTimeout, which matches the intuitive “network time + expected rate‑limit delay” model.

One nuance: in searchMultipleIndexers the single cloned rateLimitOpts pointer is shared among all goroutines. That’s perfectly fine as long as RateLimiter.BeforeRequest treats opts as immutable (which is the natural contract for an options struct). If you ever need to mutate opts inside BeforeRequest, consider cloning per goroutine instead of once at the top of the function to avoid a potential data race.


1473-1767: Rate-limiter integration and RateLimitWaitError handling in searchMultipleIndexers—semantics to double-check

The integration with the rate limiter is generally solid:

  • cloneRateLimitOptions(meta) ensures there is always a usable RateLimitOptions.
  • Calling BeforeRequest per indexer is the right hook point.
  • The RateLimitWaitError branch logs with detailed context and emits indexerResult{id: 0, err: waitErr}, so skipped‑due‑to‑budget indexers do not incorrectly contribute to coverage.

A couple of subtle behaviors to be aware of:

  1. Classification of wait-budget skips as “failures”:
    In the aggregation loop, any non‑timeout error (including RateLimitWaitError) increments failures. If all non‑timeout indexers end up in this bucket, you return a hard error ("all %d indexers failed (last error: %w)"), and the higher-level Search functions treat that as fatal (they only soften context.DeadlineExceeded). That means a search where every indexer exceeds its wait budget will fail the entire request, not return an empty/partial result. If the intent is instead “no indexers could run within the allowed budget, but the query itself is ‘successful’ with zero fresh results,” you may want to treat RateLimitWaitError more like a timeout (e.g., increment timeouts instead of failures, or exclude them from nonTimeoutIndexers).

  2. Coverage and caching semantics:
    Using id: 0 for RateLimitWaitError results means those indexers never enter the coverage set, which is desirable for cache coverage accounting—cached entries won’t be marked as covering an indexer that couldn’t be queried at all.

If this “all wait‑budget‑skipped indexers ⇒ hard error” behavior is intentional, the current implementation is fine. Otherwise, a small tweak to how RateLimitWaitError contributes to failures/nonTimeoutIndexers would get you to a softer “partial/empty but OK” outcome.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between fced4ce and 5549440.

📒 Files selected for processing (5)
  • internal/api/handlers/crossseed.go (2 hunks)
  • internal/services/crossseed/service.go (43 hunks)
  • internal/services/jackett/ratelimiter.go (6 hunks)
  • internal/services/jackett/scheduler.go (1 hunks)
  • internal/services/jackett/service.go (20 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (5)
internal/api/handlers/crossseed.go (2)
internal/services/jackett/service.go (1)
  • WithSearchPriority (262-267)
internal/services/jackett/ratelimiter.go (1)
  • RateLimitPriorityInteractive (19-19)
internal/services/jackett/scheduler.go (2)
internal/services/jackett/client.go (1)
  • Result (84-101)
internal/services/jackett/ratelimiter.go (4)
  • RateLimiter (59-63)
  • RateLimitPriorityInteractive (19-19)
  • RateLimitPriorityRSS (20-20)
  • RateLimitPriorityCompletion (21-21)
internal/services/jackett/ratelimiter.go (1)
internal/models/torznab_indexer.go (1)
  • TorznabIndexer (64-83)
internal/services/crossseed/service.go (4)
pkg/stringutils/normalize.go (3)
  • Normalizer (19-22)
  • NewDefaultNormalizer (35-37)
  • DefaultNormalizer (61-61)
internal/services/crossseed/models.go (2)
  • FindCandidatesResponse (128-131)
  • FindCandidatesRequest (106-119)
internal/services/jackett/service.go (1)
  • WithSearchPriority (262-267)
internal/services/crossseed/parsing.go (1)
  • FindLargestFile (603-625)
internal/services/jackett/service.go (3)
internal/services/jackett/ratelimiter.go (8)
  • RateLimitOptions (25-29)
  • RateLimitPriority (16-16)
  • RateLimitPriorityBackground (22-22)
  • RateLimitPriorityInteractive (19-19)
  • RateLimitPriorityRSS (20-20)
  • RateLimitPriorityCompletion (21-21)
  • NewRateLimiter (65-73)
  • RateLimitWaitError (31-37)
internal/models/torznab_indexer.go (1)
  • TorznabIndexer (64-83)
internal/pkg/timeouts/timeouts.go (2)
  • WithSearchTimeout (35-46)
  • AdaptiveSearchTimeout (19-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (12)
internal/api/handlers/crossseed.go (1)

20-21: Correct use of Jackett search priority for interactive API calls

Wrapping r.Context() with jackett.WithSearchPriority(...RateLimitPriorityInteractive) before calling SearchTorrentMatches cleanly tags interactive searches for the rate limiter without affecting cancellation or deadlines. This fits the new priority model and looks good.

If you want to double‑check behavior, run a quick manual search while an RSS run is active and confirm interactive searches aren’t starved behind background load.

Also applies to: 410-411

internal/services/jackett/scheduler.go (1)

64-89: Scheduler core loop and job lifecycle look sound

The queue ordering by (priority, createdAt), cancellation handling in nextRunnable, and completion path (startJob → completeCh → respCh) are logically consistent and avoid goroutine leaks (a buffered respCh means callers that time out don't block workers).

No changes needed here from a correctness perspective.

Also applies to: 123-151, 152-190

internal/services/jackett/ratelimiter.go (1)

16-37: Rate limiter option surface and error type are well structured

The introduction of RateLimitPriority, RateLimitOptions, and RateLimitWaitError, plus routing both BeforeRequest and NextWait through resolveOptions/computeWaitLocked, gives a clean, testable API for per‑request tuning. The Is method on RateLimitWaitError correctly supports errors.Is(err, &RateLimitWaitError{}) for caller matching.

Internals (cooldown, hourly/daily limits, and minInterval handling) remain consistent with prior behavior.

It’s worth adding/expanding tests to cover:

  • Per‑request MinInterval overrides,
  • MaxWait just below/above the threshold, and
  • Matching RateLimitWaitError via errors.Is.

Also applies to: 39-51, 75-101, 192-205, 288-300, 312-335

internal/services/crossseed/service.go (6)

37-43: Service wiring for new caches and string normalizer looks consistent

Adding stringNormalizer and torrentFilesCache to Service, and initializing them via stringutils.NewDefaultNormalizer() and a 5‑minute TTL cache, aligns with the broader normalization and caching strategy (search results, async filtering, domain cache). The updated qbittorrentSync interface with GetTorrents, GetTorrentFilesBatch, and HasTorrentByAnyHash matches how the rest of this file now consumes qBittorrent.

No issues spotted in this wiring.

Also applies to: 51-68, 70-83, 167-193


751-805: Jackett rate‑limit priorities are applied consistently across automation and completion

Three key paths now correctly tag searches with appropriate Jackett rate‑limit priorities:

  • Completion search (executeCompletionSearch): RateLimitPriorityCompletion.
  • RSS automation (executeAutomationRun): RateLimitPriorityRSS used for Recent.
  • Seeded Torrent Search automation (processSearchCandidate): RateLimitPriorityRSS for per‑candidate searches.

This cleanly separates interactive, RSS, and completion traffic for the new rate limiter and scheduler, without altering higher‑level error handling or timeouts. Looks good.

It’s worth running a quick load test (or prod observation) to confirm that:

  • RSS runs don’t starve completion searches, and
  • Interactive API searches remain responsive under background load.

Also applies to: 1185-1187, 4005-4083


1960-2048: Use of HasTorrentByAnyHash and add‑options in processCrossSeedCandidate looks correct

The new flow in processCrossSeedCandidate:

  • Uses HasTorrentByAnyHash to short‑circuit when the torrent already exists on a target instance, and
  • Correctly sets paused and stopped together for start state, with skip_checking=true on the first attempt and a retry without skip_checking on failure,

is in line with the project’s go‑qbittorrent usage and the prior learning about pairing paused and stopped flags.

No functional issues spotted here; just ensure the hash passed into HasTorrentByAnyHash is normalized consistently per the earlier comment.

You may want a small test to cover:

  • “exists” short‑circuit behavior with HasTorrentByAnyHash,
  • fallback from skip_checking=true to a recheck when AddTorrent fails.

Also applies to: 2071-2113, 2113-2161


3698-3753: Source torrent deduplication & search history propagation look well designed

The deduplication pipeline:

  • Parses all candidate torrents into torrentWithRelease and groups them via releasesMatch.
  • Prefers representatives that:
    • Have a top‑level folder (via batched GetTorrentFilesBatch + detectCommonRoot), then
    • Have the earliest AddedOn.
  • Tracks duplicates per representative and logs grouping decisions.
  • Propagates search history updates from each representative to its duplicates and marks them in state.skipCache.

This is a nice performance/UX win for Seeded Torrent Search, especially in instances with many cross‑seeds of the same content, and the grouping logic is careful about layout preferences.

Aside from the hash normalization concerns already called out (ensuring batch file keys match), the overall approach looks solid.

After fixing hash normalization, it’s worth adding tests that:

  • Verify deduplication prefers “with root folder” over “flat” torrents, and
  • Confirm duplicateHashes + propagateDuplicateSearchHistory actually prevent re‑processing duplicates within the cooldown window.

Also applies to: 3813-3847, 3879-3899, 3941-3997


5230-5272: New evaluateReleaseMatch scoring and webhook reuse look correct

evaluateReleaseMatch now provides a structured scoring and reason string based on:

  • Group, resolution, source, season/episode, year, codec, and audio.

That score is:

  • Used to bias Jackett search results ordering in SearchTorrentMatches, and
  • Logged for webhook checks to explain why a torrent was considered a match.

Baseline score 1.0 plus small increments per matching attribute is reasonable, and the “title match” fallback reason ensures logs are always interpretable.

The webhook integration correctly:

  • Uses releasesMatch for coarse filtering,
  • Applies isSizeWithinTolerance based on automation settings, and
  • Then uses evaluateReleaseMatch only for scoring/logging, not gating.

This all looks consistent and safe.

Adding tests around evaluateReleaseMatch (e.g., asserting relative ordering of otherwise similar releases) would help keep future tweaks from surprising search/webhook behavior.

Also applies to: 5395-5418, 5547-5745


3523-3545: Consistent use of stringutils.DefaultNormalizer for cache keys and skipCache is good

Using stringutils.DefaultNormalizer.Normalize(hash) in:

  • searchResultCacheKey,
  • asyncFilteringCacheKey, and
  • shouldSkipCandidate / propagateDuplicateSearchHistory (via skipCache),

ensures cross‑seed search results, async filtering state, and automation cooldown checks are keyed in a consistent, case‑insensitive manner across different API entrypoints (web UI hash, webhook hash, internal torrent hash).

Once the qBittorrent‑facing hash normalization is unified per earlier comments, this provides a solid, canonical layering between external input and internal caches.

After unifying hash normalization, it’s worth verifying that:

  • Running a search and then applying results with differently cased hashes still hits the same cache entries, and
  • Seeded search automation’s skipCache behaves identically regardless of hash casing from qBittorrent.

Also applies to: 3580-3594, 3946-3950

internal/services/jackett/service.go (3)

269-288: Scheduler wiring in NewService correctly respects options

Initializing searchScheduler by default and then reassigning its rateLimiter after applying ServiceOptions means:

  • Options can override either s.rateLimiter or s.searchScheduler.
  • The final scheduler always points at the final s.rateLimiter value, if non‑nil.

That ordering avoids the usual “scheduler captured old dependency before options ran” bug and makes testing with injected schedulers/rate‑limiters straightforward.


744-751: Recent now uses rate-limit-aware timeouts, which matches the other search paths

Using finalizeSearchContext with a background priority and then computeSearchTimeout(meta, len(indexersToSearch)) for Recent keeps it consistent with Search / SearchGeneric in terms of honoring rate‑limit wait budgets in the overall timeout. The subsequent partial‑result and logging behavior lines up with the other methods.


1935-1962: Context-free Prowlarr year workaround is straightforward and safe

Refactoring applyProwlarrWorkaround to be context‑free and take only idx + params simplifies the call sites and keeps the behavior focused:

  • It no‑ops for non‑Prowlarr backends.
  • For Prowlarr, it moves year into q and deletes the year param, with clear debug logging.

This aligns with the comment that the workaround should “always move the year parameter into the search query for Prowlarr indexers” and avoids any extra context plumbing.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/services/jackett/service.go (1)

270-289: Scheduler integration and fan‑out are solid; mind coverage semantics for skipped indexers

The default construction/injection of searchScheduler and later wiring of rateLimiter are correct, and executeQueuedSearch falls back cleanly to the old path. The fan‑out in searchIndexersWithScheduler with a buffered resultsCh sized to len(indexers) plus cloneValues per job avoids data races and goroutine leaks on context cancellation.

One behavioural difference to be aware of: in the scheduler path, coverage is whatever runIndexerSearch returns. When applyIndexerRestrictions short‑circuits an indexer (caps/categories mismatch), runIndexerSearch currently returns nil, nil, nil, so that indexer is treated as not covered. In the legacy searchMultipleIndexers path, the goroutine sends indexerResult{id: idx.ID} on such a skip, which causes that indexer to be counted in coverage even though no network call was made. If cache/partial‑result semantics should remain identical between the two execution strategies, consider returning []int{idx.ID} as coverage from runIndexerSearch when restrictions cause a skip:

-	if s.applyIndexerRestrictions(ctx, client, idx, "", meta, paramsMap) {
-		return nil, nil, nil
-	}
+	if s.applyIndexerRestrictions(ctx, client, idx, "", meta, paramsMap) {
+		// Treated as "handled" for coverage, even though no request was sent.
+		return nil, []int{idx.ID}, nil
+	}

(and similarly for the other backend branches where a restriction returns early).

Also applies to: 299-307, 309-375, 2289-2298

🧹 Nitpick comments (2)
internal/services/jackett/service.go (2)

1142-1147: Nil‑safe cache persistence and timeout helpers are well‑factored; consider clarifying RateLimitOptions mutability

The persistSearchCacheEntry change to guard ctx and fall back to context.Background() avoids a panic when callers pass nil, without changing behaviour when a real context is supplied.

cloneRateLimitOptions, asRateLimitWaitError, and computeSearchTimeout are clean and line up with how search timeouts and wait budgets are computed and logged.

One minor point: in searchMultipleIndexers you compute rateLimitOpts := cloneRateLimitOptions(meta) once and then pass the same pointer concurrently to BeforeRequest for multiple indexers. This is perfectly fine as long as BeforeRequest treats the options as read‑only (which appears to be the intent). If there’s any chance it might mutate the struct in the future, consider cloning inside the goroutine instead to avoid a potential data race:

-	rateLimitOpts := cloneRateLimitOptions(meta)
...
-	if err := s.rateLimiter.BeforeRequest(ctx, idx, rateLimitOpts); err != nil {
+	if err := s.rateLimiter.BeforeRequest(ctx, idx, cloneRateLimitOptions(meta)); err != nil {

Also applies to: 1513-1542


1842-1968: Shared per‑indexer search logic is correct; align coverage semantics for restriction skips

runIndexerSearch cleanly encapsulates the single‑indexer flow: rate‑limiter state initialization, API key decryption, client construction, Prowlarr year workaround, capability/category restriction checks, BeforeRequest with cloned options, latency recording, rate‑limit detection/handling, and tracker/indexer ID normalization. The 429/rate‑limit error enrichment matches the behaviour in searchMultipleIndexers, which keeps caller expectations consistent. The added meta == nil guard in applyCapabilitySpecificParams is also a good defensive fix.

As noted earlier, the only behavioural gap is that when applyIndexerRestrictions returns true, this helper currently returns nil, nil, nil, so the scheduler path will not treat that indexer as covered. If you want parity with the non‑scheduler path (where such indexers are effectively “handled” but skipped), having runIndexerSearch return []int{idx.ID} in those early‑return cases would close that gap without affecting error handling.

Also applies to: 2128-2131, 2135-2161

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5549440 and 2061809.

📒 Files selected for processing (1)
  • internal/services/jackett/service.go (21 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (1)
internal/services/jackett/service.go (4)
internal/services/jackett/ratelimiter.go (8)
  • RateLimitOptions (25-29)
  • RateLimitPriority (16-16)
  • RateLimitPriorityBackground (22-22)
  • RateLimitPriorityInteractive (19-19)
  • RateLimitPriorityRSS (20-20)
  • RateLimitPriorityCompletion (21-21)
  • NewRateLimiter (65-73)
  • RateLimitWaitError (31-37)
internal/models/torznab_indexer.go (3)
  • TorznabIndexer (64-83)
  • TorznabBackendNative (37-37)
  • TorznabBackendProwlarr (35-35)
internal/services/jackett/client.go (2)
  • Result (84-101)
  • NewClient (36-81)
internal/pkg/timeouts/timeouts.go (2)
  • WithSearchTimeout (35-46)
  • AdaptiveSearchTimeout (19-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (3)
internal/services/jackett/service.go (3)

69-69: Search context and priority plumbing looks consistent

searchContext, the priority helpers (searchPriorityKey, finalizeSearchContext, resolveSearchPriority, rateLimitOptionsForPriority), and WithSearchPriority form a coherent pipeline: priority resolution is idempotent, respects explicit options, then context hints, and defaults to background. The interactive/RSS/completion intervals and waits align with how computeSearchTimeout and RateLimiter.BeforeRequest are later used, so this wiring looks sound.

Also applies to: 95-97, 145-150, 152-212, 262-268


465-471: Timeout and queued‑search integration looks correct and consistent

The three entrypoints now consistently:

  • Build meta via finalizeSearchContext (with interactive vs background fallback),
  • Compute a timeout using computeSearchTimeout(meta, len(indexersToSearch)),
  • Wrap the call with timeouts.WithSearchTimeout,
  • Route through executeQueuedSearch and treat context.DeadlineExceeded as a partial‑results condition rather than a hard failure.

The extra debug log in SearchGeneric showing rate_limit_wait_budget plus the timeout should be very helpful when tuning priorities and intervals. This all looks coherent.

Also applies to: 509-521, 614-621, 694-707, 711-751, 815-821


1618-1631: Prowlarr year workaround and rate‑limit skip handling look sane

Applying applyProwlarrWorkaround to the per‑indexer paramsMap before capability/category restrictions and backend‑specific search calls is safe and keeps the original url.Values immutable. Handling RateLimitWaitError explicitly in searchMultipleIndexers—logging the wait/maxWait/priority and sending a result with id: 0—ensures these “budget‑skip” cases don’t incorrectly contribute to coverage, while still being counted toward overall failure stats.

The added debug logs for failure/timeout counters and the final completion summary give a clear view of how each batch behaved without changing error semantics.

Also applies to: 1647-1648, 1673-1674, 1703-1704, 1707-1717, 1812-1817, 1832-1837

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

♻️ Duplicate comments (2)
internal/services/jackett/service.go (1)

276-288: Scheduler initialization race condition (see scheduler.go review).

Lines 286-288 set s.searchScheduler.rateLimiter after the scheduler's loop has already started (in newSearchScheduler). This race condition is flagged in detail in the scheduler.go review.

internal/services/jackett/scheduler.go (1)

95-105: Race condition on searchScheduler.rateLimiter initialization

The rateLimiter field is nil when go s.loop() starts (line 103), and NewService assigns it later (line 287) without synchronization. Since jobPriority (line 273) and worker operations read s.rateLimiter concurrently, this creates a data race. The suggested fix in the existing review comment—passing the rate limiter to newSearchScheduler—should be implemented.

🧹 Nitpick comments (7)
internal/services/jackett/scheduler.go (3)

168-190: Dispatch loop may spin rapidly under certain conditions.

The loop calls dispatchTasks() (line 170) before every select, and the default case (lines 179-188) allows the loop to continue immediately if no channels are ready. If taskQueue has items but all worker queues are full, dispatchTasks() will return quickly (lines 221-225), and the loop will spin without blocking.

Consider:

  • Removing the default case to always block when no work is ready
  • Adding a small sleep in the default case if no progress is made
  • Using a condition variable to signal when dispatch should retry

107-166: Minor: Error condition could be more precise.

Line 161 checks len(results) == 0, but results could be legitimately empty even when some indexers succeeded (they just returned no matches). Consider checking len(coverage) == 0 instead, since coverage tracks indexers that completed successfully.

Apply this diff:

-	if failures == len(tasks) && lastErr != nil && len(results) == 0 {
+	if failures == len(tasks) && lastErr != nil && len(coverage) == 0 {

288-300: Consider using atomic operations for ID generation.

Lines 288-300 use mutex-protected counters for job and task IDs. Since these are simple integer increments without additional state changes, sync/atomic operations (atomic.AddUint64) would be more efficient and reduce lock contention.

Example refactor:

func (s *searchScheduler) nextJobID() uint64 {
	return atomic.AddUint64(&s.jobSeq, 1)
}

func (s *searchScheduler) nextTaskID() uint64 {
	return atomic.AddUint64(&s.taskSeq, 1)
}
internal/services/jackett/service.go (4)

322-388: Consider simplifying the goroutine architecture.

Lines 336-351 spawn one goroutine per indexer, and each calls s.searchScheduler.Submit (which internally creates per-indexer workers). This double-layered goroutine pattern seems redundant:

Current flow:

  1. searchIndexersWithScheduler creates N goroutines
  2. Each goroutine calls Submit with 1 indexer
  3. Submit enqueues task for worker
  4. Scheduler's getWorker creates/reuses worker goroutine

Simpler alternatives:

  • Call Submit once with all indexers (it already handles multiple indexers)
  • Or bypass Submit and directly dispatch to scheduler's task queue

The current pattern adds complexity without clear concurrency benefits, since the scheduler already manages per-indexer workers.


2311-2320: LGTM! cloneValues correctly prevents concurrent modification.

The function properly deep-copies url.Values by cloning each value slice, which is essential for the concurrent scheduler usage.

Minor style note: Line 2317's append([]string(nil), v...) is idiomatic but could be slightly more explicit as:

clone := make([]string, len(v))
copy(clone, v)
out[k] = clone

523-523: Minor inconsistency in logging.

SearchGeneric (lines 709-720) logs the rate limit wait budget, but Search and Recent don't. Consider adding similar logging to other entry points for consistency, or document why SearchGeneric needs extra verbosity.

Also applies to: 708-720, 830-830


1155-1160: Minor: Context handling can be simplified.

Lines 1155-1158 create an intermediate parent variable that can be eliminated:

-	parent := ctx
-	if parent == nil {
-		parent = context.Background()
-	}
-	storeCtx, cancel := context.WithTimeout(parent, 3*time.Second)
+	if ctx == nil {
+		ctx = context.Background()
+	}
+	storeCtx, cancel := context.WithTimeout(ctx, 3*time.Second)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2061809 and 4e79c39.

📒 Files selected for processing (3)
  • internal/services/jackett/scheduler.go (1 hunks)
  • internal/services/jackett/service.go (21 hunks)
  • internal/services/jackett/service_test.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/services/jackett/service_test.go
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (2)
internal/services/jackett/scheduler.go (2)
internal/services/jackett/client.go (1)
  • Result (84-101)
internal/services/jackett/ratelimiter.go (4)
  • RateLimiter (59-63)
  • RateLimitPriorityInteractive (19-19)
  • RateLimitPriorityRSS (20-20)
  • RateLimitPriorityCompletion (21-21)
internal/services/jackett/service.go (4)
internal/services/jackett/ratelimiter.go (8)
  • RateLimitOptions (25-29)
  • RateLimitPriority (16-16)
  • RateLimitPriorityBackground (22-22)
  • RateLimitPriorityInteractive (19-19)
  • RateLimitPriorityRSS (20-20)
  • RateLimitPriorityCompletion (21-21)
  • NewRateLimiter (65-73)
  • RateLimitWaitError (31-37)
internal/models/torznab_indexer.go (3)
  • TorznabIndexer (64-83)
  • TorznabBackendNative (37-37)
  • TorznabBackendProwlarr (35-35)
internal/services/jackett/client.go (2)
  • Result (84-101)
  • NewClient (36-81)
internal/pkg/timeouts/timeouts.go (2)
  • WithSearchTimeout (35-46)
  • AdaptiveSearchTimeout (19-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (9)
internal/services/jackett/scheduler.go (3)

1-11: LGTM!

The imports are appropriate for the scheduler implementation.


13-19: LGTM!

Priority constants are well-defined, with lower numeric values representing higher priority, consistent with the heap implementation.


272-286: jobPriority could panic if meta.rateLimit is accessed during initialization race.

Line 273 checks meta != nil && meta.rateLimit != nil, but if the race condition (flagged earlier) occurs, meta.rateLimit could reference a RateLimiter with uninitialized state. While the nil check protects against direct panics, the underlying race condition should be fixed first.

This concern is directly related to the initialization race flagged in the earlier comment. Fixing the scheduler initialization will resolve this.

internal/services/jackett/service.go (6)

152-213: LGTM! Well-structured priority resolution.

The priority resolution logic follows a clear precedence: context value → explicit options → fallback → background default. The rateLimitOptionsForPriority function correctly creates appropriate intervals and max-wait times for each priority level.


262-268: LGTM!

Public API for setting search priority via context follows Go conventions correctly.


299-314: LGTM! Sensible routing logic.

The fallback chain handles test executors, explicit indexer requests, missing scheduler, and normal scheduled execution correctly.


1526-1555: LGTM!

Helper functions are well-implemented:

  • cloneRateLimitOptions safely copies by value
  • asRateLimitWaitError uses standard error unwrapping
  • computeSearchTimeout correctly adds wait budget to adaptive timeout

478-483: LGTM! Appropriate priority assignments.

Search entry points correctly use Interactive priority for user-initiated searches (Search, SearchGeneric) and Background priority for automated queries (Recent).

Also applies to: 629-634, 828-828


2157-2183: All call sites have been successfully updated.

Verification confirms:

  • Function definition (line 2157) shows the new signature with ctx removed
  • All 3 call sites in the codebase use the new signature: lines 1643, 1894 in service.go and line 1448 in service_test.go
  • No calls with the old signature exist

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/qbittorrent/sync_manager.go (1)

1275-1311: Redundant cloning in batch caching reduces performance.

The code clones file slices multiple times:

  1. Line 1277-1278: Clone after fetch
  2. Line 1281: Store clone in filesByHash
  3. Line 1297-1299: Clone again before caching

The second clone (lines 1297-1299) is unnecessary because filesByHash[canonicalHash] already contains a copy from line 1277. This adds allocation overhead for every fetched file.

Consider this optimization:

     mu.Lock()
-    filesByHash[ch] = copied
+    filesByHash[ch] = copied  // Already a copy, safe to cache as-is
     mu.Unlock()

Then update the caching section:

     fetchedFiles := make(map[string]qbt.TorrentFiles)
     for _, canonicalHash := range hashesToFetch {
         if files, ok := filesByHash[canonicalHash]; ok {
-            // Cache a clone to keep cache entries isolated.
-            cloned := make(qbt.TorrentFiles, len(files))
-            copy(cloned, files)
-            fetchedFiles[canonicalHash] = cloned
+            // files is already a copy from the fetch, safe to cache directly
+            fetchedFiles[canonicalHash] = files
         }
     }
♻️ Duplicate comments (1)
internal/qbittorrent/sync_manager.go (1)

1407-1447: Hash normalization inconsistency remains unresolved.

Per the retrieved learning and past review comments, there's a critical inconsistency:

  1. canonicalizeHash (line 1408) uses strings.ToLower
  2. Documentation (line 39) mentions "lowercase hex"
  3. The learning states that normalizeHash in crossseed uses strings.ToUpper
  4. normalizeHashes compensates by creating multiple variants (lines 1434-1442)

While the multi-variant approach in normalizeHashes masks the issue during lookups, it's inefficient and confusing. Standardize to uppercase hex throughout the codebase as recommended in the past review.

Apply this diff to use uppercase as the canonical format:

 func canonicalizeHash(hash string) string {
-    return strings.ToLower(strings.TrimSpace(hash))
+    return strings.ToUpper(strings.TrimSpace(hash))
 }

Also update the documentation at line 39:

-    // Callers must pass hashes already trimmed/normalized (e.g. lowercase hex)
+    // Callers must pass hashes already trimmed/normalized (e.g. uppercase hex)

Based on learnings
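A minimal sketch of the recommended direction, assuming uppercase hex is adopted as the single canonical form (mirroring the `strings.ToUpper` convention the learning attributes to the crossseed service):

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalizeHash normalizes to uppercase hex, the single canonical form
// the review recommends standardizing on. With one canonical form on both
// the write and read paths, multi-variant lookup compensation goes away.
func canonicalizeHash(hash string) string {
	return strings.ToUpper(strings.TrimSpace(hash))
}

func main() {
	for _, h := range []string{" d2b6d27f ", "D2B6D27F", "d2B6d27F"} {
		fmt.Println(canonicalizeHash(h))
	}
	// All three case variants collapse to the one key "D2B6D27F".
}
```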

🧹 Nitpick comments (1)
internal/qbittorrent/sync_manager.go (1)

47-54: Consider pointer-based slice returns for efficiency.

The torrentFilesClient.getTorrentsByHashes method returns []qbt.Torrent (slice of values). If qbt.Torrent is a large struct, consider using []*qbt.Torrent to avoid copying overhead, especially when dealing with batch operations.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2f6561c and ea22571.

📒 Files selected for processing (1)
  • internal/qbittorrent/sync_manager.go (14 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/qbittorrent/sync_manager.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (4)
internal/qbittorrent/sync_manager.go (4)

2103-2112: Timer handling correctly avoids race conditions.

The implementation properly addresses timer safety by:

  1. Calling Stop() on the existing timer (best-effort)
  2. Always creating a new timer instead of calling Reset()

This pattern avoids the documented race condition where Reset() must only be called on stopped/drained timers. Good defensive coding.


2157-2194: File fetch concurrency control is well-implemented.

The per-instance semaphore pattern effectively limits concurrent file fetches:

  • Lazy initialization avoids overhead for unused instances
  • Context cancellation is properly handled
  • Release closure is safe with defer usage in callers

1355-1380: HasTorrentByAnyHash is correctly implemented.

The method properly:

  • Normalizes input hashes to handle case variations
  • Iterates through lookup variants efficiently
  • Verifies matches against canonical hash set
  • Handles edge cases (empty input, not found)
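A simplified sketch of that lookup flow, under the assumption that case variants are generated for each input and checked against the canonical set (function names here are illustrative, not the actual sync_manager API):

```go
package main

import (
	"fmt"
	"strings"
)

// lookupVariants trims the input and returns its distinct case variants,
// mirroring the multi-variant normalization described above.
func lookupVariants(h string) []string {
	h = strings.TrimSpace(h)
	seen := make(map[string]bool)
	var out []string
	for _, v := range []string{h, strings.ToLower(h), strings.ToUpper(h)} {
		if v != "" && !seen[v] {
			seen[v] = true
			out = append(out, v)
		}
	}
	return out
}

// hasTorrentByAnyHash reports whether any variant of any input hash is
// present in the canonical set. Empty inputs contribute no variants.
func hasTorrentByAnyHash(canonical map[string]bool, hashes []string) bool {
	for _, h := range hashes {
		for _, v := range lookupVariants(h) {
			if canonical[v] {
				return true
			}
		}
	}
	return false
}

func main() {
	canonical := map[string]bool{"abc123def": true} // keys stored lowercase here
	fmt.Println(hasTorrentByAnyHash(canonical, []string{" ABC123DEF "})) // true
	fmt.Println(hasTorrentByAnyHash(canonical, []string{"ffff0000"}))    // false
}
```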

264-274: GetTorrents wrapper is correctly implemented.

Clean delegation to the underlying sync manager with proper error handling.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
internal/services/crossseed/align.go (1)

328-339: Consider expanding the video extension list.

The sidecar file handling logic is well-implemented. However, the video extension list might be missing some common formats that could benefit from this normalization logic.

Consider adding these commonly used video extensions:

 		if dot := strings.LastIndex(base, "."); dot >= 0 && dot < len(base)-1 {
 			videoExt := strings.ToLower(base[dot+1:])
 			switch videoExt {
-			case "mkv", "mp4", "avi", "ts", "m2ts", "mov", "mpg", "mpeg":
+			case "mkv", "mp4", "avi", "ts", "m2ts", "mov", "mpg", "mpeg", "flv", "webm", "wmv", "vob":
 				base = base[:dot]
 			}
 		}

This ensures broader coverage for sidecar files with intermediate video extensions.
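The extension-stripping step under discussion can be isolated as a small helper (`stripIntermediateVideoExt` is a hypothetical name for the sketch): for a sidecar like `movie.mkv.nfo`, the base after removing `.nfo` is `movie.mkv`, and dropping the intermediate video extension yields `movie` so the sidecar maps to its video file:

```go
package main

import (
	"fmt"
	"strings"
)

// stripIntermediateVideoExt removes a trailing video extension from a
// base name so "movie.mkv" (from sidecar "movie.mkv.nfo") matches "movie".
// The extension list matches the diff before the review's suggested additions.
func stripIntermediateVideoExt(base string) string {
	if dot := strings.LastIndex(base, "."); dot >= 0 && dot < len(base)-1 {
		switch strings.ToLower(base[dot+1:]) {
		case "mkv", "mp4", "avi", "ts", "m2ts", "mov", "mpg", "mpeg":
			return base[:dot]
		}
	}
	return base
}

func main() {
	fmt.Println(stripIntermediateVideoExt("Some.Movie.2024.mkv")) // Some.Movie.2024
	fmt.Println(stripIntermediateVideoExt("Some.Movie.2024"))     // unchanged
}
```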

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a9ff749 and d2b6d27.

📒 Files selected for processing (2)
  • internal/services/crossseed/align.go (8 hunks)
  • internal/services/crossseed/align_test.go (3 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/align.go
🧬 Code graph analysis (1)
internal/services/crossseed/align.go (1)
internal/qbittorrent/sync_manager.go (1)
  • WithForceFilesRefresh (65-67)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (10)
internal/services/crossseed/align_test.go (3)

53-95: LGTM! Comprehensive test for sidecar file handling.

The new test effectively validates the sidecar file normalization logic where intermediate video extensions (e.g., .mkv in name.mkv.nfo) are stripped for matching purposes. The test covers both the video file and its associated sidecar file, ensuring both are correctly mapped.


186-189: LGTM! Pointer conversions align with API changes.

The function calls correctly pass pointer arguments to match the updated shouldRenameTorrentDisplay signature.


199-202: LGTM! Pointer conversions align with API changes.

The function calls correctly pass pointer arguments to match the updated shouldAlignFilesWithCandidate signature.

internal/services/crossseed/align.go (7)

13-14: LGTM! Import addition for batch file retrieval.

The import is necessary to access WithForceFilesRefresh for forced refresh context.


83-89: LGTM! Graceful batch retrieval with forced refresh.

The implementation correctly uses forced refresh context and gracefully falls back to the expected source files if the batch call fails or returns no results. The error handling ensures robustness.


94-97: LGTM! Logical root rename condition.

The updated condition correctly restricts root folder renaming to cases where no file-level rename plan exists. This prevents conflicts between root and individual file renames, falling back to a root rename only when file-level mappings cannot be established.


130-143: LGTM! Simplified file rename logic.

The direct use of instr.oldPath and instr.newPath from the plan is correct, as the plan now contains the final paths without requiring runtime adjustment for root renames.


188-196: LGTM! Efficient filtered torrent availability check.

The updated implementation correctly uses GetTorrents with a hash filter instead of retrieving all torrents, improving efficiency. The error handling and availability check are appropriate.


399-414: LGTM! Function signatures updated for pointer-based API.

The function signatures correctly updated to accept pointer parameters (*rls.Release), aligning with the broader refactor to pointer-based releases throughout the codebase.


52-52: Hash normalization is consistent—no changes needed.

Verification confirms that both normalizeHash (used at line 52) and GetTorrentFilesBatch use lowercase hex normalization via strings.ToLower, ensuring the lookup at line 86 will correctly match map keys returned by the batch API.

@s0up4200 s0up4200 merged commit 5f016a8 into main Nov 24, 2025
12 checks passed
@s0up4200 s0up4200 deleted the fix/perf/cross branch November 24, 2025 20:11
alexlebens pushed a commit to alexlebens/infrastructure that referenced this pull request Dec 3, 2025
This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [ghcr.io/autobrr/qui](https://github.com/autobrr/qui) | minor | `v1.7.0` -> `v1.8.1` |

---

### Release Notes

<details>
<summary>autobrr/qui (ghcr.io/autobrr/qui)</summary>

### [`v1.8.1`](https://github.com/autobrr/qui/releases/tag/v1.8.1)

[Compare Source](autobrr/qui@v1.8.0...v1.8.1)

#### Changelog

##### Bug Fixes

- [`61c87e1`](autobrr/qui@61c87e1): fix(torznab): use detached context for indexer tests ([#659](autobrr/qui#659)) ([@s0up4200](https://github.com/s0up4200))

**Full Changelog**: <autobrr/qui@v1.8.0...v1.8.1>

#### Docker images

- `docker pull ghcr.io/autobrr/qui:v1.8.1`
- `docker pull ghcr.io/autobrr/qui:latest`

#### What to do next?

- Join our [Discord server](https://discord.autobrr.com/qui)

Thank you for using qui!

### [`v1.8.0`](https://github.com/autobrr/qui/releases/tag/v1.8.0)

[Compare Source](autobrr/qui@v1.7.0...v1.8.0)

#### Changelog

##### New Features

- [`6903812`](autobrr/qui@6903812): feat(crossseed): batch torrent file lookups end-to-end ([#625](autobrr/qui#625)) ([@s0up4200](https://github.com/s0up4200))
- [`336ce48`](autobrr/qui@336ce48): feat(crossseed): persist seeded search settings ([#618](autobrr/qui#618)) ([@s0up4200](https://github.com/s0up4200))
- [`7b0b292`](autobrr/qui@7b0b292): feat(docker): add curl to Dockerfiles ([#570](autobrr/qui#570)) ([@onedr0p](https://github.com/onedr0p))
- [`91e1677`](autobrr/qui@91e1677): feat(filters): default-hide empty status/category/tag groups ([#581](autobrr/qui#581)) ([@s0up4200](https://github.com/s0up4200))
- [`f07bb8d`](autobrr/qui@f07bb8d): feat(header): add missing links to header burger menu ([#624](autobrr/qui#624)) ([@nuxencs](https://github.com/nuxencs))
- [`ee4c16b`](autobrr/qui@ee4c16b): feat(instances): allow disabling qbit instances ([#582](autobrr/qui#582)) ([@s0up4200](https://github.com/s0up4200))
- [`477db14`](autobrr/qui@477db14): feat(search): column filters ([#633](autobrr/qui#633)) ([@nuxencs](https://github.com/nuxencs))
- [`cd6db45`](autobrr/qui@cd6db45): feat(themes): add basic variation support ([#569](autobrr/qui#569)) ([@jabloink](https://github.com/jabloink))
- [`979a0d4`](autobrr/qui@979a0d4): feat(torrents): add clear filters action for empty filtered state ([#627](autobrr/qui#627)) ([@s0up4200](https://github.com/s0up4200))
- [`e06acb7`](autobrr/qui@e06acb7): feat(torrents): add cross-seeding and search ([#553](autobrr/qui#553)) ([@KyleSanderson](https://github.com/KyleSanderson))
- [`95cef23`](autobrr/qui@95cef23): feat(torrents): add reannounce monitor ([#606](autobrr/qui#606)) ([@s0up4200](https://github.com/s0up4200))
- [`098fdb0`](autobrr/qui@098fdb0): feat(torrents): add rename functionality in TorrentDetailsPanel ([#590](autobrr/qui#590)) ([@s0up4200](https://github.com/s0up4200))
- [`6e8fdbd`](autobrr/qui@6e8fdbd): feat(torrents): implement drag-and-drop file upload to add torrents ([#568](autobrr/qui#568)) ([@dthinhle](https://github.com/dthinhle))
- [`9240545`](autobrr/qui@9240545): feat(ui): add dense view mode for compact table display ([#643](autobrr/qui#643)) ([@s0up4200](https://github.com/s0up4200))
- [`77fad15`](autobrr/qui@77fad15): feat(ui): improve torrent details panel file tree and rename UX ([#650](autobrr/qui#650)) ([@s0up4200](https://github.com/s0up4200))
- [`8b1e70e`](autobrr/qui@8b1e70e): feat(web): Use original qBittorrent status names ([#595](autobrr/qui#595)) ([@FibreTTP](https://github.com/FibreTTP))
- [`01dd553`](autobrr/qui@01dd553): feat(web): show listening port in connectable status tooltip ([#635](autobrr/qui#635)) ([@s0up4200](https://github.com/s0up4200))
- [`3140739`](autobrr/qui@3140739): feat: make tracker icon column sortable ([#513](autobrr/qui#513)) ([@s0up4200](https://github.com/s0up4200))

##### Bug Fixes

- [`240b40d`](autobrr/qui@240b40d): fix(auth): avoid logout on license activation errors ([#602](autobrr/qui#602)) ([@s0up4200](https://github.com/s0up4200))
- [`7185408`](autobrr/qui@7185408): fix(backups): do not persist ZIPs to disk ([#632](autobrr/qui#632)) ([@KyleSanderson](https://github.com/KyleSanderson))
- [`de0e00a`](autobrr/qui@de0e00a): fix(content): use Hints for detection ([#621](autobrr/qui#621)) ([@KyleSanderson](https://github.com/KyleSanderson))
- [`5f016a8`](autobrr/qui@5f016a8): fix(cross): performance improvements ([#629](autobrr/qui#629)) ([@KyleSanderson](https://github.com/KyleSanderson))
- [`82c74ba`](autobrr/qui@82c74ba): fix(crossseed): flip deduplication to maps ([#622](autobrr/qui#622)) ([@KyleSanderson](https://github.com/KyleSanderson))
- [`b78a079`](autobrr/qui@b78a079): fix(crossseed): inherit TMM state from matched torrent ([#654](autobrr/qui#654)) ([@s0up4200](https://github.com/s0up4200))
- [`2438fc6`](autobrr/qui@2438fc6): fix(crossseed): process full RSS feeds ([#615](autobrr/qui#615)) ([@s0up4200](https://github.com/s0up4200))
- [`6f57090`](autobrr/qui@6f57090): fix(database): do not release mutex on tx err ([#571](autobrr/qui#571)) ([@KyleSanderson](https://github.com/KyleSanderson))
- [`74509d4`](autobrr/qui@74509d4): fix(incognito): prevent categories leaking ([#592](autobrr/qui#592)) ([@s0up4200](https://github.com/s0up4200))
- [`f08eff2`](autobrr/qui@f08eff2): fix(instances): support empty username for localhost bypass ([#575](autobrr/qui#575)) ([@s0up4200](https://github.com/s0up4200))
- [`cd3caaf`](autobrr/qui@cd3caaf): fix(license): cap 7d offline grace, ignore transient errors ([#617](autobrr/qui#617)) ([@s0up4200](https://github.com/s0up4200))
- [`59c747b`](autobrr/qui@59c747b): fix(reannounce): validate number fields and show min hints ([#613](autobrr/qui#613)) ([@s0up4200](https://github.com/s0up4200))
- [`f6bd1e6`](autobrr/qui@f6bd1e6): fix(themes): correct Nightwalker description from purple to blue ([#648](autobrr/qui#648)) ([@s0up4200](https://github.com/s0up4200))
- [`2b641c5`](autobrr/qui@2b641c5): fix(torznab): filter Prowlarr autodiscovery to enabled torrent indexers ([#638](autobrr/qui#638)) ([@s0up4200](https://github.com/s0up4200))
- [`1995783`](autobrr/qui@1995783): fix(ui): improve cross-seed mobile responsiveness ([#647](autobrr/qui#647)) ([@s0up4200](https://github.com/s0up4200))
- [`b83aebe`](autobrr/qui@b83aebe): fix(web): align CrossSeedDialog indexers with search flows ([#619](autobrr/qui#619)) ([@s0up4200](https://github.com/s0up4200))
- [`3b60821`](autobrr/qui@3b60821): fix(web): indent subcategories in SetCategoryDialog ([#636](autobrr/qui#636)) ([@s0up4200](https://github.com/s0up4200))
- [`82850cd`](autobrr/qui@82850cd): fix: glob pattern formatting in tooltip content ([#579](autobrr/qui#579)) ([@onedr0p](https://github.com/onedr0p))

##### Other Changes

- [`c20bc0a`](autobrr/qui@c20bc0a): build(vite): enable default minification ([#574](autobrr/qui#574)) ([@s0up4200](https://github.com/s0up4200))
- [`ceac8ca`](autobrr/qui@ceac8ca): chore(ci): upgrade Claude Code workflow to Opus 4.5 ([@s0up4200](https://github.com/s0up4200))
- [`9d6c10e`](autobrr/qui@9d6c10e): chore(deps): bump actions/checkout from 5 to 6 in the github group ([#628](autobrr/qui#628)) ([@dependabot](https://github.com/dependabot)[bot])
- [`f5704de`](autobrr/qui@f5704de): chore(deps): bump golang.org/x/crypto from 0.43.0 to 0.45.0 ([#611](autobrr/qui#611)) ([@dependabot](https://github.com/dependabot)[bot])
- [`0aae9aa`](autobrr/qui@0aae9aa): chore(deps): bump the golang group with 3 updates ([#546](autobrr/qui#546)) ([@dependabot](https://github.com/dependabot)[bot])
- [`0d97087`](autobrr/qui@0d97087): chore(themes): add crypto instructions in-app ([#620](autobrr/qui#620)) ([@s0up4200](https://github.com/s0up4200))
- [`e778865`](autobrr/qui@e778865): docs(funding): add donation methods and crypto addresses ([#583](autobrr/qui#583)) ([@s0up4200](https://github.com/s0up4200))
- [`563645c`](autobrr/qui@563645c): docs: update qui image ([#655](autobrr/qui#655)) ([@s0up4200](https://github.com/s0up4200))

**Full Changelog**: <autobrr/qui@v1.7.0...v1.8.0>

#### Docker images

- `docker pull ghcr.io/autobrr/qui:v1.8.0`
- `docker pull ghcr.io/autobrr/qui:latest`

#### What to do next?

- Join our [Discord server](https://discord.autobrr.com/qui)

Thank you for using qui!

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

♻ **Rebasing**: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update again.

---

 - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).

Reviewed-on: https://gitea.alexlebens.dev/alexlebens/infrastructure/pulls/2211
Co-authored-by: Renovate Bot <renovate-bot@alexlebens.net>
Co-committed-by: Renovate Bot <renovate-bot@alexlebens.net>