refactor(cross-seed): replace blocking rate limiter with async job-based scheduler #641

Merged: s0up4200 merged 63 commits into main from fix/rss-seeded-search-starvation-and-memory on Nov 29, 2025
Conversation

s0up4200 (Collaborator) commented Nov 26, 2025

Summary

  • Scheduler redesign: Replace blocking BeforeRequest() with async job-based scheduling
  • Dispatch-time rate limiting: Check NextWait() at dispatch, execute ready tasks immediately
  • Callback-based results: Submit() returns immediately, results delivered via callbacks
  • Priority enforcement: Interactive > RSS > Completion > Background, with priority-based MaxWait defaults
  • Fix starvation: RSS and seeded search no longer block each other
  • Rate limit escalation: Replace aggressive daily limit inference with Prowlarr-style escalating backoff (1m → 5m → 15m → ... → 24h max)
  • Memory optimization: Cache deduplication results, reduce unnecessary clones
  • Documentation: Added SCHEDULER.md explaining the architecture
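The async, callback-based model in the first three bullets can be sketched as follows. This is a minimal illustrative sketch, not the actual qui scheduler: the Job, Scheduler, and Submit names and signatures are assumptions for demonstration only.

```go
package main

import (
	"fmt"
	"sync"
)

// Job carries the work plus the callbacks that deliver results.
// Hypothetical shape, loosely mirroring the PR's description.
type Job struct {
	Priority   int
	OnComplete func(results []string, err error)
	OnJobDone  func()
}

type Scheduler struct {
	mu     sync.Mutex
	nextID int
	jobs   chan Job
}

func NewScheduler() *Scheduler {
	s := &Scheduler{jobs: make(chan Job, 64)}
	go s.loop()
	return s
}

// Submit enqueues the job and returns immediately with a job ID;
// no caller goroutine ever blocks waiting on rate limits.
func (s *Scheduler) Submit(j Job) int {
	s.mu.Lock()
	s.nextID++
	id := s.nextID
	s.mu.Unlock()
	s.jobs <- j
	return id
}

// loop is a stand-in for the real event loop: results are pushed
// back through the callbacks rather than returned to the caller.
func (s *Scheduler) loop() {
	for j := range s.jobs {
		if j.OnComplete != nil {
			j.OnComplete([]string{"result"}, nil)
		}
		if j.OnJobDone != nil {
			j.OnJobDone()
		}
	}
}

func main() {
	s := NewScheduler()
	done := make(chan struct{})
	id := s.Submit(Job{
		OnComplete: func(results []string, err error) {
			fmt.Println("got", len(results), "results")
		},
		OnJobDone: func() { close(done) },
	})
	fmt.Println("submitted job", id)
	<-done
}
```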

Problem

The blocking rate limiter model caused several issues:

  1. Resource waste: Workers called BeforeRequest() which slept until rate limits cleared—up to 10 goroutines sitting blocked
  2. Priority inversion: Sleeping workers couldn't yield to higher-priority work
  3. Mutual starvation: RSS and seeded search with identical settings would block each other
  4. Memory spikes: Repeated file fetches in deduplication caused 1.5GB+ allocations
  5. Aggressive rate limiting: A single 429 could trigger 24-hour blocking by auto-inferring DailyRequestLimit from request count

Solution

Complete scheduler redesign with "job in, scheduled, job done" model:

  • Async submission: Submit() returns immediately with job ID
  • Dispatch-time decisions: Scheduler queries NextWait() and decides to execute, defer, or skip
  • Worker pool: Semaphore-based concurrency (default 10), workers only run when there's executable work
  • Priority heap: Tasks ordered by priority, then FIFO within same priority
  • Retry timer: Single timer wakes dispatcher when earliest blocked task becomes ready
  • MaxWait defaults: RSS (15s), Completion (30s), Background (45s)—skip indexers that would wait too long
  • Escalating backoff: Rate limit failures escalate cooldown (0 → 1m → 5m → 15m → 30m → 1h → 3h → 6h → 12h → 24h max), reset on success

Changes

| Category | File | Change |
| --- | --- | --- |
| Scheduler | jackett/scheduler.go | Complete rewrite: async dispatch, worker pool, priority heap, callback model |
| Scheduler | jackett/scheduler_test.go | Expanded to 28+ tests covering dispatch, priorities, MaxWait, concurrency |
| Rate Limits | jackett/scheduler.go | Add escalating backoff (0, 1m, 5m, 15m, 30m, 1h, 3h, 6h, 12h, 24h), reset on success |
| Merged | jackett/ratelimiter.go | DELETED, merged into scheduler.go |
| Merged | jackett/ratelimiter_test.go | DELETED, merged into scheduler_test.go |
| Integration | jackett/service.go | Remove BeforeRequest() calls, remove unused onReady parameter |
| Cleanup | jackett/models.go | Remove unused OnReady callback field, remove unused context import |
| Cleanup | models/torznab_indexer.go | Remove HourlyRequestLimit, DailyRequestLimit fields and related methods |
| Cleanup | migrations/014_*.sql | Remove hourly_request_limit, daily_request_limit columns |
| Docs | jackett/SCHEDULER.md | NEW: architecture and design documentation |
| Priority | crossseed/service.go | Seeded search uses RateLimitPriorityBackground (60s interval, 45s max) |
| Coordination | crossseed/service.go | Seeded search waits for RSS automation (5-min timeout) |
| Caching | crossseed/service.go | Cache deduplication results with 5-minute TTL |
| Memory | qbittorrent/sync_manager.go | Remove redundant clone in GetTorrentFilesBatch |

Architecture

Submit(job) → returns immediately with jobID
    │
    ▼
loop() → event loop handles submissions, completions, retries
    │
    ▼
enqueueTasks() → push to priority heap
    │
    ▼
dispatchTasks() → for each task in priority order:
    ├─ Context cancelled?      → complete with error
    ├─ NextWait() > MaxWait?   → skip with RateLimitWaitError
    ├─ NextWait() > 0?         → re-queue, schedule retry timer
    ├─ Worker available?       → execute
    └─ No worker?              → re-queue
    │
    ▼
executeTask() → in goroutine
    ├─ RecordRequest() before execution
    ├─ Run actual search
    ├─ OnComplete(results, err) callback
    └─ OnJobDone() when all indexers finished

See internal/services/jackett/SCHEDULER.md for detailed documentation.

Test plan

  • All scheduler tests pass (28+ tests)
  • Build passes
  • RSS automation completes without "wait budget" warnings during seeded search
  • Seeded search completes without being starved by RSS
  • Memory usage stays stable during search operations
  • Dedup cache hits visible in trace logs after first cycle
  • Rate limit escalation increases cooldown progressively on repeated 429s
  • Successful request resets escalation to level 0

Summary by CodeRabbit

  • New Features

    • Search history and indexer activity APIs plus new UI panels; centralized scheduler with activity reporting; source-specific tagging controls (RSS/Seeded/Completion/Webhook) and "Inherit source tags"; improved run/cancel UI behaviors.
  • Bug Fixes

    • Improved save-path/rename alignment, recheck/resume recovery, dedup/caching robustness, and safer read-only torrent-file results.
  • Documentation

    • Added scheduler design doc and expanded README tagging guidance.
  • Tests

    • Expanded cross-seed/layout and variant tests; removed legacy rate-limiter tests.
  • Chores

    • DB migration removing per-indexer hourly/daily limits; minor API surface and formatting updates.


coderabbitai bot (Contributor) commented Nov 26, 2025

Walkthrough

Centralized Torznab scheduler, in-memory search history/outcomes, activity APIs/UI, and source-specific cross-seed tagging added; legacy per-indexer ratelimiter removed. qBittorrent FilesManager batch API extended with shared read-only slice semantics. Cross-seed gained dedup, recheck/resume scaffolding, expanded determineSavePath logic, and many tests/UI updates.

Changes

Cohort / File(s) / Summary
Jackett scheduler & history
internal/services/jackett/scheduler.go, internal/services/jackett/search_history.go, internal/services/jackett/models.go, internal/services/jackett/SCHEDULER.md, internal/services/jackett/scheduler_test.go
New centralized scheduler API (SubmitRequest, JobCallbacks, JobID), RateLimiter integration, prioritized heap dispatch, worker pool, in-memory SearchHistoryBuffer and IndexerOutcomeStore, status APIs and tests.
Removed legacy ratelimiter
internal/services/jackett/ratelimiter.go, internal/services/jackett/ratelimiter_test.go
Deleted previous per-indexer RateLimiter implementation and its tests.
Jackett service wiring & client/API
internal/services/jackett/service.go, internal/services/jackett/client.go, internal/api/handlers/jackett.go
Service wired to scheduler/history/outcomes; added GetSearchHistory and GetActivityStatus handlers; HTTP client adds User-Agent header.
Web UI, types & hooks
web/src/lib/api.ts, web/src/types/index.ts, web/src/components/indexers/*, web/src/hooks/useSearchHistory.ts, web/src/pages/CrossSeedPage.tsx, web/src/lib/dateTimeUtils.ts
Front-end: new types, API methods, IndexerActivityPanel and SearchHistoryPanel components, useSearchHistory hook, date utilities, CrossSeedPage source-tagging UI and UI updates.
qBittorrent sync batching
internal/qbittorrent/sync_manager.go
FilesManager interface extended with CacheFilesBatch; GetTorrentFilesBatch now shares a single callerCopy slice between caller results and cache and documents returned qbt.TorrentFiles as read-only.
Cross-seed core, recheck & tagging
internal/services/crossseed/service.go, internal/services/crossseed/models.go, internal/services/crossseed/*
Added dedup cache, pending recheck/resume queue and worker, retry helpers, GetAppPreferences hook, JobID propagation, richer determineSavePath/contentLayout handling, and source-specific tagging fields/flows.
Alignment, matching & variant rules
internal/services/crossseed/align.go, internal/services/crossseed/matching.go, internal/services/crossseed/variant_overrides.go
Improved rename alignment (single-file↔folder), folder-first renames, largest-size retention per release key, and stricter/nuanced variant compatibility (IMAX/REPACK/PROPER).
Cross-seed tests & helpers
internal/services/crossseed/*_test.go, internal/services/crossseed/crossseed_test.go
Expanded save-path/content-layout tests, new mocks/helpers (GetAppPreferences), recheck/recovery simulation scaffolding, numerous test updates.
Models / DB schema & migrations
internal/models/crossseed.go, internal/models/torznab_indexer.go, internal/database/migrations/014_add_torznab_indexers.sql, internal/database/migrations/020_add_source_specific_tags.sql
Removed per-indexer hourly/daily request limits; added source-specific tag columns and inherit_source_tags; models and persistence updated with encoding/decoding and defaults.
Handlers, patches & docs
internal/api/handlers/crossseed.go, internal/api/handlers/crossseed_patch_test.go, README.md
Removed top-level Tags handling, introduced per-source tag patch fields, updated handlers/tests/docs to use new tagging model.
Misc / deps / CLI
cmd/qui/main.go, go.mod, assorted tests
Added xxhash dependency, default WithSearchHistory/WithIndexerOutcomes wiring in CLI init, and various test adaptations/formatting changes.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant Client
    participant API as Torznab API
    participant Service as Jackett Service
    participant Scheduler
    participant Indexer
    participant History

    Client->>API: POST /torznab/search (params, optional releaseName)
    API->>Service: performSearch(..., releaseName, callbacks)
    Service->>Scheduler: SubmitRequest(ExecFn, Meta, JobCallbacks)
    Scheduler->>Scheduler: compute per-indexer NextWait & priority
    alt wait exceeds max for priority
        Scheduler->>History: Record(entry status=rate_limited)
        Scheduler->>Service: JobCallbacks.OnComplete(jobID, indexer, err=rate_limited)
    else allowed to run
        Scheduler->>Indexer: ExecFn(ctx, indexers, params, meta)
        Indexer-->>Scheduler: results / error
        Scheduler->>History: Record(SearchHistoryEntry)
        Scheduler->>Service: JobCallbacks.OnComplete(jobID, indexer, results, coverage, nil)
        Scheduler->>Service: JobCallbacks.OnJobDone(jobID)
    end
    Service->>Service: persist run/outcome (with retry helpers)
    API-->>Client: HTTP response (JobID/status)

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~90+ minutes

  • Areas needing extra attention:
    • internal/services/jackett/scheduler.go — heap/priority dispatch, concurrency, Stop/GetStatus, callbacks and rate-limiter integration.
    • internal/services/jackett/search_history.go — ring buffer thread-safety, ID generation, eviction correctness.
    • internal/services/crossseed/service.go — determineSavePath complexity, recheck/resume orchestration, dedup cache correctness.
    • internal/qbittorrent/sync_manager.go — shared-slice caching semantics and read-only contract.
    • Database migrations and model changes for source-specific tags and removed limit columns.
    • Web components/hooks — polling/refetch behavior and React Query integration.

Possibly related PRs

Suggested labels

enhancement

Suggested reviewers

  • Audionut
  • buroa

Poem

🐇 I hopped through queues and kept tasks in line,
I cached one slice so callers won't entwine.
I matched and rechecked, nudged stalled jobs awake,
Panels and tags and retries — a tidy bunny make.
Hooray for fresh runs; now go review and bake!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 31.58%, below the required 80.00% threshold. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the PR title refers to the rate limiter refactor to an async scheduler, which is the primary change across the jackett service layer, but omits the cross-seed changes (tagging, resilience, dedup caching) that comprise roughly 40% of the changeset.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
internal/services/jackett/ratelimiter.go (1)

111-124: Cooldown cap behavior vs RateLimitWaitError.Wait semantics

Capping the stored cooldown to cfg.MaxWait here is a nice way to avoid persisting very long blocks that could cascade across priorities. One subtlety: RateLimitWaitError.Wait still reports the full computed wait, so any caller that uses err.Wait to schedule future retries/cooldowns will believe the indexer is blocked for longer than the limiter actually enforces.

If RateLimitWaitError is only used for logging/metrics, this is fine. If it feeds external scheduling (e.g., RSS job backoff), consider either returning min(wait, cfg.MaxWait) in Wait or documenting that Wait is the theoretical requirement while the internal cooldown is capped at MaxWait.

internal/qbittorrent/sync_manager.go (2)

35-45: Hash normalization doc still mentions lowercase hex

The FilesManager comment still suggests callers pass hashes normalized as “e.g. lowercase hex”, but elsewhere (e.g. crossseed’s normalizeHash) the canonical form is uppercase and prior discussion pointed to standardizing on uppercase for inter-service keys.

Given canonicalizeHash currently lowercases internally, behavior is correct either way, but it would be good to align the documentation and cross-service conventions on a single canonical form (likely uppercase) to avoid confusion for implementers of FilesManager.

For example:

-	// Callers must pass hashes already trimmed/normalized (e.g. lowercase hex)
+	// Callers must pass hashes already trimmed/normalized (e.g. uppercase hex)

Optional, but helps keep the interface contract consistent across packages.
Based on learnings, this was previously called out as a desired change.


1181-1337: Clarify GetTorrentFilesBatch comments and aliasing contract

The new batching logic generally looks solid and the “one clone per network fetch” optimization is nice. A couple of nits around comments vs actual behavior and the aliasing trade-off:

  • Lines 1275–1277 say “The original API response will be used for caching… passing the original to cache before returning to caller,” but the code clones into callerCopy and never stores *files anywhere. The cache later receives the entries from filesByHash, i.e. the same callerCopy slices.
  • Lines 1292–1294 then say “We pass filesByHash entries directly – callers receive the same slice reference but are expected not to mutate,” which matches the code for fresh fetches, but contradicts the earlier comment and is only true for the initial miss-path (cache hits are cloned in the earlier GetCachedFilesBatch block).

So the effective behavior is:

  • Fresh fetch: one clone per torrent; that clone is shared between the result map and FilesManager’s CacheFilesBatch.
  • Subsequent hits: GetCachedFilesBatch data is cloned before being put into filesByHash, so callers get isolated slices.

This is a reasonable allocation/caching trade-off, but the comments should be updated to describe the current semantics clearly (and probably drop the “original API response” phrasing). If you ever decide you want full cache isolation as well, a small follow-up would be to keep a second map keyed by hash with the original *files for CacheFilesBatch, while still returning callerCopy to callers.

As-is, I’d suggest tightening the comments to: (a) explicitly state that fresh fetches share slices between caller and cache and (b) remind callers that qbt.TorrentFiles must be treated as read-only.

internal/services/crossseed/service.go (2)

3991-4002: Cache hit returns shared slice references.

The cached entry.deduplicated and entry.duplicateHashes are returned directly without copying. If any caller modifies these slices, the cached data would be corrupted.

Currently, callers (refreshSearchQueue) appear to use these in a read-only manner, so this is likely safe. However, for defensive programming, consider returning copies as done elsewhere in this codebase (e.g., cloneTorrentFiles).

 		if entry, ok := s.dedupCache.Get(cacheKey); ok && entry != nil {
 			log.Trace().
 				Int("instanceID", instanceID).
 				Int("cachedCount", len(entry.deduplicated)).
 				Msg("[CROSSSEED-DEDUP] Using cached deduplication result")
-			return entry.deduplicated, entry.duplicateHashes
+			// Return copies to prevent cache corruption if callers modify slices
+			dedupCopy := append([]qbt.Torrent(nil), entry.deduplicated...)
+			hashCopy := make(map[string][]string, len(entry.duplicateHashes))
+			for k, v := range entry.duplicateHashes {
+				hashCopy[k] = append([]string(nil), v...)
+			}
+			return dedupCopy, hashCopy
 		}

3948-3969: Sampling cache strategy is vulnerable to torrent reordering, causing potential cache misses.

The sampling approach (first, middle, last hashes) will generate different cache keys if the torrent list reorders between calls, even when the underlying data is identical. Since GetTorrents() returns torrents from the underlying qbittorrent library without sorting, and there's no explicit ordering guarantee documented, subsequent calls could return the same torrents in different orders.

Impact: This causes cache misses and performance degradation when torrents reorder, though data correctness is unaffected. False cache hits are unlikely (would require identical sampled hashes with different torrents), but remain possible if the torrent ordering is non-deterministic.

Recommendation: Sort the torrents slice before passing to dedupCacheKey() to ensure consistent cache keys, or consider a more robust cache key strategy (e.g., hashing all torrent hashes or using a set-based approach).
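The sorting recommendation can be sketched like this; dedupCacheKey here is a simplified hypothetical that keys on every hash rather than a sample, not the real function:

```go
package main

import (
	"fmt"
	"sort"
)

// dedupCacheKey builds an order-independent key by sorting a copy of the
// hashes first. Sorting a copy avoids mutating the caller's slice.
func dedupCacheKey(instanceID int, hashes []string) string {
	sorted := append([]string(nil), hashes...)
	sort.Strings(sorted)
	return fmt.Sprintf("%d:%d:%v", instanceID, len(sorted), sorted)
}

func main() {
	a := dedupCacheKey(1, []string{"AAA", "BBB", "CCC"})
	b := dedupCacheKey(1, []string{"CCC", "AAA", "BBB"}) // reordered input
	fmt.Println(a == b)                                  // same key regardless of order
}
```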

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dfad426 and a75f988.

📒 Files selected for processing (3)
  • internal/qbittorrent/sync_manager.go (2 hunks)
  • internal/services/crossseed/service.go (9 hunks)
  • internal/services/jackett/ratelimiter.go (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
📚 Learning: 2025-11-25T22:46:03.733Z (duplicate of the common learning above: fire-and-forget background downloads in internal/backups/service.go)

Applied to files:

  • internal/services/crossseed/service.go
  • internal/qbittorrent/sync_manager.go
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/qbittorrent/sync_manager.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (3)
web/src/types/index.ts (1)
  • Torrent (204-261)
internal/services/jackett/service.go (1)
  • WithSearchPriority (268-273)
internal/services/jackett/ratelimiter.go (1)
  • RateLimitPriorityBackground (38-38)
🔇 Additional comments (5)
internal/services/crossseed/service.go (5)

72-76: LGTM - Cache entry struct is well-defined.

The struct appropriately captures both the deduplicated torrent list and the mapping of representative hashes to their duplicates for propagating cooldowns.


256-259: Cache field declaration looks good.

The 5-minute TTL aligns with the existing torrentFilesCache TTL and is appropriate for deduplication results that may change as torrents are added/removed.


4194-4200: Cache storage after deduplication is correct.

The newly computed deduplicated slice and duplicateMap are owned by this function at this point, so storing them in the cache is safe.


4459-4473: RSS wait loop implementation addresses starvation concern.

The 5-minute cap with 500ms polling prevents indefinite blocking while allowing RSS automation to complete. The labeled break and context handling are correct.

One consideration: if RSS automation runs are very frequent, seeded searches could still wait up to 5 minutes each iteration. This is an acceptable trade-off given RSS has higher priority, and the timeout prevents complete starvation.


4484-4484: Background priority correctly prioritizes RSS over seeded searches.

Using RateLimitPriorityBackground for seeded torrent searches ensures they yield to higher-priority RSS automation requests in the rate limiter, complementing the wait loop above.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
internal/qbittorrent/sync_manager.go (1)

1275-1282: Single-clone sharing between caller and cache is correct but relies on immutability

The new pattern (clone once into callerCopy and then store that same slice both in filesByHash and in fetchedFiles, which is later handed to CacheFilesBatch) achieves the desired allocation reduction and is concurrency-safe given the errgroup+mutex usage.

The trade-off is that the first caller’s qbt.TorrentFiles slice and the cache now share the same backing array. That’s fine as long as:

  • Callers never mutate the returned TorrentFiles (treat as read-only), and
  • FilesManager implementations also treat cached slices as immutable (no in-place edits, appends, or re-slicing).

You’ve documented this locally in the comments; consider also spelling out the “read-only qbt.TorrentFiles” contract in the FilesManager interface or GetTorrentFilesBatch doc so future implementers/callers don’t accidentally violate it.

Also applies to: 1291-1299

internal/services/crossseed/service.go (2)

3949-3966: XOR-based signature has theoretical collision properties.

Using XOR for the signature is elegant for order-independence, but note that:

  1. Duplicate hashes cancel out (a ^ a = 0), though torrents should have unique hashes
  2. Different torrent sets can produce identical signatures if their XOR values collide

Given the short 5-minute TTL and the count being part of the key, the practical collision risk is low. If you observe unexpected cache hits in the future, this would be a place to revisit.
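The XOR properties described above are easy to demonstrate. This sketch uses stdlib FNV-1a in place of the xxhash dependency the PR actually pulls in, so it is illustrative only:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// signature XORs the per-hash digests together. XOR is commutative, so the
// result is independent of torrent order, but duplicates cancel (a ^ a = 0).
func signature(hashes []string) uint64 {
	var sig uint64
	for _, h := range hashes {
		f := fnv.New64a()
		f.Write([]byte(h))
		sig ^= f.Sum64()
	}
	return sig
}

func main() {
	fmt.Println(signature([]string{"A", "B"}) == signature([]string{"B", "A"})) // order-independent
	fmt.Println(signature([]string{"A", "A"}) == 0)                            // duplicates cancel
}
```

Including the count in the cache key, as the review notes, is what keeps the duplicate-cancellation property from producing false hits in practice.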


3991-3998: Cache aliasing: callers must not modify returned slices.

The comment correctly warns that returned slices are cache-backed, but this creates a fragile contract. If any caller (current or future) modifies entry.deduplicated or entry.duplicateHashes, it will corrupt the cache.

Consider returning defensive copies for safety, or document this prominently in the function's doc comment. The current approach is acceptable for performance if callers are carefully audited, but a defensive clone would prevent subtle bugs:

 			log.Trace().
 				Int("instanceID", instanceID).
 				Int("cachedCount", len(entry.deduplicated)).
 				Msg("[CROSSSEED-DEDUP] Using cached deduplication result")
-			// IMPORTANT: Returned slices are cache-backed. Do not modify.
-			return entry.deduplicated, entry.duplicateHashes
+			// Return defensive copies to prevent cache corruption
+			dedupCopy := make([]qbt.Torrent, len(entry.deduplicated))
+			copy(dedupCopy, entry.deduplicated)
+			hashCopy := make(map[string][]string, len(entry.duplicateHashes))
+			for k, v := range entry.duplicateHashes {
+				hashCopy[k] = append([]string(nil), v...)
+			}
+			return dedupCopy, hashCopy
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a75f988 and a7f11ad.

📒 Files selected for processing (4)
  • internal/qbittorrent/sync_manager.go (3 hunks)
  • internal/services/crossseed/service.go (10 hunks)
  • internal/services/jackett/ratelimiter.go (1 hunks)
  • internal/services/jackett/ratelimiter_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/services/jackett/ratelimiter.go
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/qbittorrent/sync_manager.go
📚 Learning: 2025-11-25T22:46:03.733Z (duplicate of the common learning above: fire-and-forget background downloads in internal/backups/service.go)

Applied to files:

  • internal/qbittorrent/sync_manager.go
  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
internal/services/jackett/service.go (1)
  • WithSearchPriority (268-273)
internal/services/jackett/ratelimiter.go (1)
  • RateLimitPriorityBackground (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (7)
internal/services/jackett/ratelimiter_test.go (1)

92-95: LGTM! Stricter assertion validates the capping behavior.

The change from waitErr.Wait <= waitErr.MaxWait to waitErr.Wait == waitErr.MaxWait ensures the implementation precisely caps the wait duration when it would exceed the budget. Given the test setup (required wait ~150ms vs MaxWait 5ms), this stricter equality check correctly validates that the capping logic in ratelimiter.go is functioning as intended.

internal/qbittorrent/sync_manager.go (1)

39-40: Hash normalization doc tweak looks good

Switching the example to “uppercase hex” in the GetCachedFilesBatch comment matches the normalization guidance used elsewhere, while still allowing callers that normalize differently (the “e.g.” wording keeps it flexible).

Based on learnings, this aligns the interface docs with the intended hash casing.

internal/services/crossseed/service.go (5)

73-77: LGTM on the cache entry structure.

The dedupCacheEntry struct appropriately encapsulates both the deduplicated torrent list and the duplicate hash mapping needed to propagate cooldowns.


257-260: Cache field and TTL are appropriate.

The 5-minute TTL aligns with the intended deduplication caching semantics and matches the PR objective of avoiding repeated expensive file fetches during search cycles.


4456-4470: RSS coordination loop is well-structured.

The implementation correctly:

  • Creates the timeout channel once before the loop
  • Handles context cancellation
  • Uses a labeled break for clean exit
  • Logs when the timeout is exceeded

This effectively yields to RSS automation while preventing indefinite blocking.


4481-4481: Core fix: seeded search now uses Background priority.

This change, combined with the RSS wait loop above, correctly addresses the starvation issue by:

  1. Giving seeded search a separate, lower priority (RateLimitPriorityBackground)
  2. Yielding to active RSS automation before starting searches

This should eliminate the "wait budget exceeded" warnings mentioned in the PR objectives.


4192-4197: Cache storage is appropriate.

The deduplicated slice and duplicateMap are freshly allocated within this function, so storing them directly in the cache is safe. The concern is on the read path (returning cache-backed slices to callers), which was addressed in the earlier comment.

coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/qbittorrent/sync_manager.go (1)

1198-1205: The shared slice reference between fresh fetch results and cache input creates a maintenance burden and potential corruption risk

The code at lines 1283–1305 clones the API response once (callerCopy) and reuses that single clone for both:

  • The caller's returned map (filesByHash[ch] = callerCopy)
  • The cache input (fetchedFiles[canonicalHash] = filesCacheFilesBatch)

While the current FilesManager implementation converts slices to database rows before storage (mitigating immediate risk), this design:

  1. Relies entirely on a read-only convention (documented at line 36 and 1296 but not enforced). Any future caller mutation—or alternative FilesManager implementation that stores slices directly (as shown in aliasingFilesManager test)—would corrupt cached data for all subsequent callers.

  2. Violates the isolation pattern used for cache hits (lines 1212–1216), where cached entries are explicitly cloned. Fresh fetches get no such protection.

  3. Creates an asymmetry that makes the code harder to reason about: cache hits return independent clones; fresh fetches share a reference between caller and cache.

I recommend either:

  • Restore a dedicated clone for cache input, or
  • Have CacheFilesBatch clone internally before storing, or
  • At minimum, strengthen the read-only convention (e.g., with linting rules or interface-level enforcement)
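The first option can be sketched in isolation. This is a minimal illustration of the defensive-copy pattern, not the actual sync_manager code; `torrentFile` and `cloneFiles` are hypothetical stand-ins for the real `qbt.TorrentFiles` plumbing:

```go
package main

import "fmt"

// torrentFile is a hypothetical stand-in for one entry of qbt.TorrentFiles.
type torrentFile struct {
	Name string
	Size int64
}

// cloneFiles returns an independent copy so the cache and the caller
// never share a backing array.
func cloneFiles(files []torrentFile) []torrentFile {
	cloned := make([]torrentFile, len(files))
	copy(cloned, files)
	return cloned
}

func main() {
	fresh := []torrentFile{{Name: "a.mkv", Size: 100}}
	cached := cloneFiles(fresh) // dedicated clone handed to the cache
	fresh[0].Name = "mutated"   // a later caller mutation...
	fmt.Println(cached[0].Name) // ...never reaches the cached copy: prints "a.mkv"
}
```

The same shape works for option two by moving the `cloneFiles` call inside `CacheFilesBatch` itself, so every implementation is isolated by construction rather than by convention.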
🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)

3963-3966: Normalize torrent hash before hashing for cache key consistency.

The cache key uses torrents[i].Hash directly without normalization. If hash strings have inconsistent casing (even though qBittorrent typically returns lowercase), the xxhash digest will differ, causing cache misses for the same torrent set.

Apply this diff to ensure consistent cache key generation:

 	// XOR all individual hash digests for order-independent signature.
 	// XOR is commutative and associative, so order doesn't matter.
 	var sig uint64
 	for i := range torrents {
-		sig ^= xxhash.Sum64String(torrents[i].Hash)
+		sig ^= xxhash.Sum64String(normalizeHash(torrents[i].Hash))
 	}
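As a quick illustration of why the XOR signature is order-independent, here is a standalone sketch (not the service's actual `dedupCacheKey`; the stdlib FNV hash stands in for `xxhash.Sum64String`, and the argument is identical for any 64-bit hash):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hash64 stands in for xxhash.Sum64String using the stdlib FNV-1a hash.
func hash64(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

// signature XORs the per-hash digests; XOR is commutative and
// associative, so the result does not depend on slice order.
func signature(hashes []string) uint64 {
	var sig uint64
	for _, h := range hashes {
		sig ^= hash64(h)
	}
	return sig
}

func main() {
	a := signature([]string{"AAA", "BBB", "CCC"})
	b := signature([]string{"CCC", "AAA", "BBB"})
	fmt.Println(a == b) // prints true: same set, different order
}
```

Feeding normalized hashes into `hash64` (as the diff above suggests) keeps casing variants of the same torrent set from producing different signatures.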
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a7f11ad and be8c97b.

📒 Files selected for processing (2)
  • internal/qbittorrent/sync_manager.go (3 hunks)
  • internal/services/crossseed/service.go (10 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/qbittorrent/sync_manager.go
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/qbittorrent/sync_manager.go
  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
internal/services/jackett/service.go (1)
  • WithSearchPriority (268-273)
internal/services/jackett/ratelimiter.go (1)
  • RateLimitPriorityBackground (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (6)
internal/qbittorrent/sync_manager.go (1)

35-47: FilesManager docs and CacheFilesBatch addition look consistent with normalization strategy

The read‑only contract on returned qbt.TorrentFiles and the updated note about callers passing pre‑normalized hashes (with the uppercase‑hex example) align well with how hashes are handled elsewhere (e.g., cross‑seed flows using strings.ToUpper). The CacheFilesBatch addition also matches the new batched caching usage in this file; Go will enforce that all implementations are updated at compile time.

Based on learnings, this now matches the intended “uppercase hex” guidance for batch cache keys.

internal/services/crossseed/service.go (5)

3991-4005: Intentional cache-backed return pattern is documented but fragile.

The decision to return cache-backed slices without defensive copies is documented and intentional for memory efficiency. Current callers are read-only, which is correct. However, this pattern is fragile for future maintenance.

Consider adding a compile-time or runtime assertion mechanism if feasible, or at minimum ensure code review vigilance for any future modifications to the returned slices.


4462-4488: LGTM! RSS coordination and priority separation look correct.

The RSS wait loop correctly:

  • Yields to RSS automation when active (runActive.Load())
  • Has a 5-minute timeout to prevent indefinite blocking
  • Respects context cancellation
  • Uses labeled break to exit cleanly

The priority change to RateLimitPriorityBackground properly separates seeded search from RSS automation, which should resolve the mutual starvation issue described in the PR.


4197-4203: Cache storage implementation is correct.

The cache stores freshly allocated local variables (deduplicated and duplicateMap), so there's no risk of external mutation affecting cached data. The 5-minute TTL aligns with the PR's stated goal.


295-296: Cache initialization is consistent with existing patterns.

The 5-minute TTL for the dedup cache aligns with the stated goal of avoiding repeated expensive file fetches while allowing periodic refresh.


257-260: Clear documentation for the dedup cache field.

The comment explains the cache key format and purpose well, which will help future maintainers understand the hashing strategy.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/services/crossseed/service.go (1)

2239-2284: AutoTMM/contentLayout handling is solid, but logging/messages use the wrong path

The new logic in processCrossSeedCandidate that:

  • Disables autoTMM when matchType == "partial-in-pack" or when the computed savePath differs from props.SavePath, and
  • Sets contentLayout = "NoSubfolder" whenever root folders differ or one side is rootless,

properly enforces the new path semantics and avoids qBittorrent silently overriding your explicit layout.

However, the user-facing strings still use props.SavePath:

result.Message = fmt.Sprintf("Added torrent paused to %s (match: %s)", props.SavePath, matchType)
// ...
result.Message = fmt.Sprintf("Added torrent with recheck to %s (match: %s)", props.SavePath, matchType)
// ...
log.Debug().Str("savePath", props.SavePath)...

When savePath != props.SavePath (e.g., partial‑in‑pack into a collection/season folder, or different roots), these messages/logs will report the wrong path.

Consider switching these to use the actual savePath you pass in options["savepath"] so logs and messages reflect where the torrent really lands.

Also applies to: 2296-2306, 2331-2338

🧹 Nitpick comments (6)
internal/services/crossseed/crossseed_test.go (2)

236-699: Expanded determineSavePath coverage is strong but getting dense

The expanded TestDetermineSavePath matrix does a good job pinning all the new path rules (partial-in-pack, ContentPath vs SavePath, single-file vs rooted, collections, edge chars/long names). The only concern is maintainability: this single table is now quite long and mixed (movies, TV, collections, edge cases). Consider, at some point, splitting it into smaller focused subtests (e.g. movies/TV/collections/edges) or helper builders for the common qbt.TorrentFiles patterns to keep it readable.
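One way to carve the table into focused groups could look roughly like this. The sketch uses hypothetical case data and a trivial stand-in for the path logic; in the real test file each group would become a `t.Run` subtest sharing a common runner:

```go
package main

import (
	"fmt"
	"path"
)

// pathCase is one hypothetical row of a determineSavePath-style table.
type pathCase struct {
	name, savePath, root, want string
}

// runGroup mimics what a t.Run("movies", ...) subtest would iterate over;
// each group gets its own small, readable table instead of one giant mix.
func runGroup(group string, cases []pathCase) int {
	failures := 0
	for _, c := range cases {
		got := path.Join(c.savePath, c.root) // stand-in for the real logic
		if got != c.want {
			failures++
			fmt.Printf("%s/%s: got %q, want %q\n", group, c.name, got, c.want)
		}
	}
	return failures
}

func main() {
	movieCases := []pathCase{
		{name: "rooted movie", savePath: "/data/movies", root: "Film.2020", want: "/data/movies/Film.2020"},
	}
	tvCases := []pathCase{
		{name: "season pack", savePath: "/data/tv", root: "Show.S01", want: "/data/tv/Show.S01"},
	}
	fmt.Println(runGroup("movies", movieCases) + runGroup("tv", tvCases)) // prints 0
}
```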


25-27: Service construction in integration tests depends on ReleaseCache staying an alias

The integration tests build Service manually using:

svc := &Service{
    releaseCache:     releases.NewDefaultParser(),
    stringNormalizer: stringutils.NewDefaultNormalizer(),
}

This works as long as Service.releaseCache remains compatible (e.g. a type alias or wrapper exposing Parse). If ReleaseCache ever stops being a thin alias around releases.Parser (e.g. gains extra behavior/metrics), these tests could silently diverge from the production configuration.

Consider switching these tests to use NewReleaseCache() (or a small helper that mirrors it) so they stay aligned with whatever ReleaseCache actually is going forward.

Also applies to: 721-816

internal/services/crossseed/service.go (4)

74-79: Dedup cache wiring is coherent and matches the perf goal

The introduction of dedupCacheEntry, the Service.dedupCache field, and its initialization with a 5-minute TTL cleanly encapsulate deduplication results and avoid redoing the expensive GetTorrentFilesBatch + grouping on every refresh. Using a per-service cache is fine given the single-process nature of the scheduler, and TTL bounds stale-data risk.

The in-code comment that cached slices must be treated as read-only is important; current callers (refreshSearchQueue / search run) respect that by only iterating and copying. Just keep that contract in mind if future code starts mutating deduplicated or duplicateHashes.

Also applies to: 255-262, 287-316


4015-4035: dedupCacheKey XOR signature is reasonable; consider normalizing hashes

Using an XOR of xxhash.Sum64String over the torrent hashes plus the count to get an order-independent signature is a pragmatic choice, and the 5-minute TTL + count in the key keep accidental collisions low.

If you want to reduce tiny edge risks and make the key robust against case-differences in hashes, you could feed normalizeHash(torrents[i].Hash) into Sum64String instead of the raw Hash. This would keep behavior consistent with the rest of the code, which treats normalized hashes as canonical.


4528-4554: RSS coordination and background priority for seeded search look correct

The new RSS coordination in processSearchCandidate:

  • Spins while s.runActive.Load() is true, yielding with a 500ms sleep and aborting on context cancellation or after 5 minutes, and
  • Then sets jackett.WithSearchPriority(..., RateLimitPriorityBackground) for seeded searches,

nicely prevents seeded search from competing with higher-priority RSS automation and completion flows. The wait loop respects ctx and has a bounded timeout, so it won’t wedge the run.

If you ever touch this again, you could replace time.After(5 * time.Minute) with an explicit time.NewTimer that you Stop() when done to avoid leaking a timer goroutine, but it’s a minor optimization given the long interval.


5031-5051: Leftover test flag in filterIndexersByExistingContent

filterIndexersByExistingContent still contains a testFilteringEnabled := false section that, when flipped, short-circuits all indexers:

// TEMPORARY TEST: Remove all indexers to see if the filtering is working
testFilteringEnabled := false // Set to true to test if filtering is working
if testFilteringEnabled {
    log.Debug().Msg("[CROSSSEED-FILTER] *** TEST MODE: FILTERING OUT ALL INDEXERS ***")
    return []int{}, nil, nil, nil
}

Since this is dead in production (hardcoded false) but would be very surprising if someone ever toggled it, it’s better to remove this block or hide it behind a proper debug feature flag/config rather than a local boolean.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between be8c97b and 900fe18.

📒 Files selected for processing (3)
  • go.mod (1 hunks)
  • internal/services/crossseed/crossseed_test.go (3 hunks)
  • internal/services/crossseed/service.go (15 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • go.mod
  • internal/services/crossseed/service.go
  • internal/services/crossseed/crossseed_test.go
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
internal/services/jackett/service.go (1)
  • WithSearchPriority (268-273)
internal/services/jackett/ratelimiter.go (1)
  • RateLimitPriorityBackground (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (2)
go.mod (1)

14-14: Direct xxhash dependency looks appropriate

Declaring github.com/cespare/xxhash/v2 v2.3.0 as a direct dependency matches the new usage in crossseed (dedup cache key) and avoids relying on it being pulled in transitively. Just ensure go.sum and any vendored modules are updated via go mod tidy / CI before merge.

internal/services/crossseed/service.go (1)

2684-2787: determineSavePath logic aligns with new partial‑in‑pack and root‑mismatch tests

The updated determineSavePath correctly:

  • Uses detectCommonRoot on sourceFiles/candidateFiles to distinguish single-file vs rooted torrents.
  • For matchType == "partial-in-pack":
    • Places loose files into the candidate’s root under SavePath when the source has no root but the candidate does.
    • Otherwise prefers matchedTorrent.ContentPath when available, which matches the “season pack / collection folder” expectations.
  • For differing non-empty roots with matchType != "partial-in-pack", builds SavePath + candidateRoot, matching the “spaces vs dots” and Plex-style folder cases.
  • Falls back to the matched torrent’s SavePath for same-root or non-episodic cases.

This matches the expanded TestDetermineSavePath cases and the new integration tests for episode-vs-pack and movie-collection scenarios.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
internal/services/crossseed/service.go (3)

74-78: Dedup cache design and keying look solid; consider minor guard rails

The dedupCache wiring and dedupCacheKey design (instanceID + count + order‑independent xxhash XOR) are coherent and should dramatically reduce repeated file‑metadata scans while remaining robust to small queue changes. The cache‑hit path in deduplicateSourceTorrents is also careful to treat returned slices as read‑only, which fits how callers use the results.

Two small follow‑ups you might consider:

  • Add a brief note to the deduplicateSourceTorrents doc comment that its return slices must not be mutated, so future refactors don’t accidentally append/sort in place (you already call this out at the cache hit, but surfacing it at the function boundary would help).
  • In dedupCacheKey, you could optionally normalize hashes (e.g., via normalizeHash) before feeding them to xxhash so the signature is invariant to upstream casing differences, even though current callers appear consistent.

Neither is blocking, but they’d make the contract and key stability a bit more future‑proof.

Please double‑check that github.com/cespare/xxhash/v2 is correctly added/pinned in go.mod and that its use aligns with your repo’s dependency policy.

Also applies to: 258-262, 296-297, 314-315, 4015-4035, 4057-4071, 4263-4269


2239-2284: Cross‑seed save‑path, AutoTMM, and contentLayout logic align with goals; minor logging nit

The expanded determineSavePath plus the new handling in processCrossSeedCandidate for:

  • matchType == "partial-in-pack" (using candidate root or ContentPath),
  • differing sourceRoot/candidateRoot (favoring the candidate folder),
  • deriving useAutoTMM from matchedTorrent.AutoManaged and disabling it only when you must enforce a specific savepath, and
  • setting contentLayout = "NoSubfolder" when roots differ or are missing,

all look consistent with the intent to avoid double subfolders and to better respect the source layout. The control‑flow around partial‑in‑pack vs root‑diff vs default path is also sound and won’t regress non‑episodic cases.

One small nit: the success/recheck log messages and result.Message still interpolate props.SavePath, which may now differ from the effective savePath you computed (e.g., partial‑in‑pack or root‑diff cases). If you want fully accurate diagnostics, consider logging the final path you passed to qBittorrent instead of the original props.SavePath.

Also applies to: 2296-2305, 2689-2787


4528-4553: RSS wait coordination and background priority are reasonable; watch timer churn

The new RSS coordination in processSearchCandidate—yielding while runActive is true (up to 5 minutes) and then running searches with RateLimitPriorityBackground—is a sensible way to avoid seeded search starving RSS automation or vice‑versa.

Two minor points to keep in mind:

  • runActive covers both scheduled and manual automation runs, so seeded search will also yield to long manual runs; this is probably acceptable but worth being aware of.
  • The inner loop uses a fresh time.After(500 * time.Millisecond) each iteration; if you ever see many concurrent seeded runs waiting on a long RSS run, swapping this for a time.Ticker would slightly reduce timer churn, though that’s an optimization rather than a correctness issue.

Overall, the coordination logic and priority choice look sound.

web/src/pages/CrossSeedPage.tsx (1)

233-233: Consider aligning fetch limit with display limit.

The query fetches 20 runs but the display logic limits each group (scheduled, manual, other) to 5, showing a maximum of 15 runs total. While the current approach provides a buffer for uneven distribution, you could optimize by fetching exactly what you display or adjusting the grouping logic to use all 20 fetched runs.

Also applies to: 829-834

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 900fe18 and b31b323.

📒 Files selected for processing (2)
  • internal/services/crossseed/service.go (16 hunks)
  • web/src/pages/CrossSeedPage.tsx (6 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
web/src/pages/CrossSeedPage.tsx (3)
web/src/lib/api.ts (1)
  • api (1613-1613)
web/src/components/ui/badge.tsx (1)
  • Badge (51-51)
web/src/components/ui/collapsible.tsx (2)
  • CollapsibleTrigger (14-14)
  • CollapsibleContent (14-14)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (6)
internal/services/crossseed/service.go (1)

5533-5541: recentResults now tracks only successful additions (good for UI signal‑to‑noise)

Limiting state.recentResults to result.Added == true gives the UI a much clearer “last N successful injections” view instead of being dominated by skips/failures. The cap at 10 and slice‑tail trimming are straightforward and safe.

web/src/pages/CrossSeedPage.tsx (5)

1134-1141: LGTM!

The conditional count badges provide a helpful summary of run distribution before expanding the collapsible section.


1150-1156: LGTM!

The compact group headers with inline counts improve readability and provide clear context for the displayed runs.


1161-1163: LGTM!

Displaying the run status in the badge is more valuable than the trigger type, especially since the trigger information is already conveyed by the group headers ("Scheduled" vs "Manual"). This improves the information hierarchy and makes it easier to spot failed or running operations at a glance.


1176-1176: The review comment is incorrect—totalFeedItems is guaranteed to be defined.

The CrossSeedRun interface (web/src/types/index.ts:1234) declares totalFeedItems: number as a required property without the optional ? modifier. TypeScript guarantees this property is always present, making the direct access at line 1176 safe and appropriate. No fallback handling is needed.


1533-1533: Verification results

Tracing the flow from the UI to the backend:

  1. Backend enforcement of "last 10 additions": In internal/services/crossseed/service.go, when results are added:

    • recentResults is initialized with capacity 10
    • When a result is added: if len(state.recentResults) > 10 { state.recentResults = state.recentResults[len(state.recentResults)-10:] }
    • This maintains a sliding window of exactly the last 10 added results
  2. "List clears when the run stops": In GetSearchRunStatus():

    • Returns an empty RecentResults array by default
    • Only populates it if state != nil (search is running)
    • When a search completes, s.searchState = nil, so subsequent calls return empty results

The UI text "Shows the last 10 additions during this run. List clears when the run stops." is accurate. Both behaviors are correctly implemented in the backend: the service maintains a sliding window of exactly 10 added results during a search run, and clears them when the search state is reset. No changes needed.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)

4532-4547: Add debug logging for RSS coordination to aid diagnostics.

The RSS coordination logic correctly prevents rate-limiter contention with a 5-minute timeout. However, logging only occurs on timeout (line 4542), not when the wait starts or completes successfully. Consider adding debug logging to help diagnose when seeded search is yielding to RSS automation:

+	if s.runActive.Load() {
+		log.Debug().Msg("[CROSSSEED-SEARCH] Waiting for RSS automation to complete before searching")
+	}
 	rssWaitDeadline := time.After(5 * time.Minute)
 rssWaitLoop:
 	for s.runActive.Load() {
 		select {
 		case <-ctx.Done():
 			return ctx.Err()
 		case <-rssWaitDeadline:
 			log.Warn().Msg("[CROSSSEED-SEARCH] RSS wait timeout exceeded, proceeding with search")
 			break rssWaitLoop
 		case <-time.After(500 * time.Millisecond):
 		}
 	}
+	if !s.runActive.Load() {
+		log.Debug().Msg("[CROSSSEED-SEARCH] RSS automation completed, proceeding with search")
+	}

This would provide visibility into coordination behavior without adding significant overhead.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b31b323 and 23083bd.

📒 Files selected for processing (1)
  • internal/services/crossseed/service.go (18 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (1)
internal/services/jackett/ratelimiter.go (1)
  • RateLimitPriorityBackground (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (3)
internal/services/crossseed/service.go (3)

4015-4035: XOR-based cache key is acceptable for the use case.

The order-independent hashing using XOR of individual hash digests is a pragmatic choice. While XOR has theoretical collision risk (different torrent sets could produce identical signatures), the 5-minute TTL, instance ID scoping, and torrent count make practical collisions extremely unlikely.

Importantly, false cache hits would only cause suboptimal deduplication (processing some torrents twice) rather than correctness issues, which is an acceptable tradeoff given the memory pressure reduction achieved by avoiding repeated expensive file fetches.


5537-5545: Good filtering of recent results for stable UI display.

The distinction between state.run.Results (all attempts) and state.recentResults (only successful additions) is well-designed. Filtering by result.Added ensures the UI shows a meaningful sliding window of actual cross-seed additions rather than being diluted by skipped or failed attempts.


74-78: Well-implemented dedup caching and RSS coordination.

The changes successfully address the PR objectives:

  1. Deduplication cache (lines 74-78, 296-297, 314, 4015-4035, 4056-4276): The 5-minute TTL cache with order-independent XOR-based keys effectively reduces memory pressure from repeated expensive file fetches. The read-only contract for cached slices is properly documented and respected by callers.

  2. Rate-limit priority separation (line 4557): Changing seeded search to RateLimitPriorityBackground prevents mutual starvation with RSS automation as intended.

  3. RSS coordination (lines 4532-4547): The wait loop with 5-minute timeout ensures seeded search yields to higher-priority RSS automation while preventing indefinite blocking.

The implementation is sound and aligns with the memory optimization and starvation prevention goals outlined in the PR objectives.

Also applies to: 258-261, 296-297, 314-314, 4015-4035, 4056-4276, 4532-4547, 4557-4557

@s0up4200 s0up4200 changed the title fix(crossseed): resolve RSS/seeded search starvation and memory pressure refactor(jackett): replace blocking rate limiter with async job-based scheduler Nov 26, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
internal/services/jackett/SCHEDULER.md (1)

17-66: LGTM: Comprehensive scheduler documentation.

The architecture documentation is thorough and well-structured, clearly explaining the async, job-based scheduler model and dispatch-time rate limiting.

Minor improvement: Add a language specifier to the code block (line 17) to address the linter warning:

-```
+```text
 ┌─────────────────────────────────────────────────────────────────────┐
internal/services/crossseed/service.go (1)

2690-2787: Expanded logic correctly handles partial-in-pack and root-folder scenarios with comprehensive test coverage

The new matchType, sourceFiles, and candidateFiles parameters enable handling of partial-in-pack and differing root-folder scenarios. The implementation is sound:

  1. Partial-in-pack (lines 2694-2721): When source lacks a root folder but candidate has one, the file is correctly placed inside the candidate's folder. When ContentPath is empty, the logic properly falls through to the root-diff check (lines 2723-2736).

  2. Root-folder handling (lines 2723-2736): Correctly detects when root folders differ and uses the candidate's folder path, handling cases like scene releases vs Plex-named folders.

  3. Test coverage: 45+ test cases confirm correct behavior across movies, TV shows, collections, and edge cases, including explicit coverage of empty ContentPath fallback behavior.

Minor improvement opportunity:

The function has grown to ~100 lines with distinct logical blocks. Consider extracting the partial-in-pack logic (lines 2694-2721) and root-diff logic (lines 2723-2736) into separate helper methods to improve readability and testability, though the current implementation is functionally correct.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 23083bd and 48e0206.

📒 Files selected for processing (8)
  • internal/services/crossseed/service.go (18 hunks)
  • internal/services/jackett/SCHEDULER.md (1 hunks)
  • internal/services/jackett/models.go (0 hunks)
  • internal/services/jackett/ratelimiter.go (0 hunks)
  • internal/services/jackett/ratelimiter_test.go (0 hunks)
  • internal/services/jackett/scheduler.go (9 hunks)
  • internal/services/jackett/scheduler_test.go (9 hunks)
  • internal/services/jackett/service.go (7 hunks)
💤 Files with no reviewable changes (3)
  • internal/services/jackett/models.go
  • internal/services/jackett/ratelimiter_test.go
  • internal/services/jackett/ratelimiter.go
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/service.go
  • internal/services/jackett/service.go
  • internal/services/jackett/SCHEDULER.md
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (2)
internal/services/jackett/scheduler.go (3)
internal/models/torznab_indexer.go (1)
  • TorznabIndexer (64-83)
internal/services/jackett/client.go (1)
  • Result (87-104)
pkg/gojackett/domain.go (1)
  • Indexers (16-93)
internal/services/crossseed/service.go (1)
internal/services/jackett/scheduler.go (1)
  • RateLimitPriorityBackground (42-42)
🪛 LanguageTool
internal/services/jackett/SCHEDULER.md

[style] ~169-~169: Try moving the adverb to make the sentence clearer.
Context: ...king cleared 5. Chain: completeCh signaled to potentially unblock more tasks ### 4. Retry Timer If tasks are block...

(SPLIT_INFINITIVE)

🪛 markdownlint-cli2 (0.18.1)
internal/services/jackett/SCHEDULER.md

17-17: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (18)
internal/services/jackett/scheduler_test.go (3)

18-113: LGTM: Test coverage for new scheduler API is comprehensive.

The tests properly exercise the new SubmitRequest and callback-based API, including priority ordering, worker pool limits, and rate limiting integration.


587-722: Well-designed test coverage for MaxWait behavior.

The tests for explicit MaxWait (lines 587-640) and priority-based default MaxWait (lines 642-722) properly verify that rate-limited indexers are skipped when wait time exceeds budget. This directly addresses the starvation issue mentioned in the PR objectives.


724-878: Comprehensive rate limiter unit tests.

The direct RateLimiter tests provide good coverage of cooldown tracking, interval enforcement, priority multipliers, and state management. These low-level tests complement the integration tests above.

internal/services/jackett/service.go (3)

326-417: Solid implementation of result aggregation for async scheduler.

The refactored searchIndexersWithScheduler correctly bridges the new callback-based scheduler API with the legacy synchronous result aggregation:

  • Per-indexer OnComplete callback aggregates results under mutex
  • RateLimitWaitError treated as skip (not failure) - correct per PR objectives
  • Final OnJobDone callback delivers aggregated results
  • Coverage tracking properly merges indexer IDs

1631-1633: Rate limiting delegation to the scheduler.

(Verification transcript trimmed: the scripted checks attempted to confirm that BeforeRequest was removed from the search path and that the scheduler now records requests via RecordRequest at dispatch time, but the tool output was not captured.)


304-324: The review comment's premise is incorrect—the onComplete parameter has NOT been removed from the signature.

The function signature at line 304 still includes the onComplete callback parameter:

func (s *Service) executeQueuedSearch(ctx context.Context, indexers []*models.TorznabIndexer, params url.Values, meta *searchContext, onComplete func(jobID uint64, indexerID int, err error), resultCallback func([]Result, []int, error)) error

Both call sites correctly pass this parameter:

  • Line 671: req.OnComplete
  • Line 752: nil

The review comment's claim that "executeQueuedSearch no longer takes onReady callback" contradicts the actual code. Either the review comment was written based on an outdated or different version of the code, or the refactoring described in the scratchpad was not actually implemented as stated.

Likely an incorrect or invalid review comment.

internal/services/jackett/scheduler.go (6)

79-97: Well-designed non-blocking rate limiter.

The RateLimiter correctly decouples rate limit checking from execution. The scheduler queries NextWait() at dispatch time to make scheduling decisions without blocking. This is a key design improvement over the previous model.


277-315: Priority multipliers effectively prevent starvation.

The resolveOptions method applies priority multipliers to reduce min intervals for higher-priority requests:

  • Interactive: 0.1x (6s instead of 60s default)
  • RSS: 0.5x (30s)
  • Completion: 0.7x (42s)
  • Background: 1.0x (60s)

This addresses the starvation issue mentioned in the PR objectives by allowing RSS automation to bypass longer waits.


579-663: Core dispatch logic correctly implements MaxWait budgets.

The dispatchTasks method properly implements the starvation mitigation strategy:

  1. Checks context cancellation (line 597)
  2. Prevents concurrent requests to same indexer (line 606)
  3. Calls NextWait to compute required wait (line 616)
  4. Skips with RateLimitWaitError if wait exceeds MaxWait budget (lines 619-637)
  5. Re-queues if wait > 0 but within budget (lines 639-643)
  6. Acquires worker slot and dispatches if ready (lines 646-653)

This ensures RSS and seeded searches won't wait indefinitely for rate-limited indexers.


665-708: RecordRequest timing is correct.

Line 693 calls RecordRequest BEFORE executing the search, which properly reserves the rate limit slot at dispatch time. This prevents race conditions where multiple tasks might dispatch for the same indexer before the first request is recorded.


734-755: Confirm MaxWait=0 for Interactive priority is intentional.

The getMaxWait function returns 0 for RateLimitPriorityInteractive (line 750), which means "no limit" per the SCHEDULER.md documentation. This is correctly handled in dispatch logic (line 619 checks maxWait > 0), allowing interactive searches to wait as long as needed.

This design choice should be acceptable for user-initiated searches, but verify that long waits won't cause UI timeouts.

Consider whether interactive searches need a timeout at a higher level (e.g., HTTP request timeout) to prevent indefinite hangs from the user's perspective. The scheduler itself won't limit interactive waits.


757-789: Retry timer optimization reduces dispatch latency.

The scheduleRetryTimerLocked method computes the minimum wait among blocked tasks and schedules a retry accordingly (lines 762-787). This ensures blocked tasks are re-dispatched as soon as they become eligible, minimizing unnecessary delays.

internal/services/crossseed/service.go (6)

74-78: LGTM: Deduplication cache infrastructure

The cache structure and initialization look good. The 5-minute TTL aligns with other caches and the PR's memory pressure reduction goal.

Also applies to: 258-261, 296-297, 314-314


4015-4035: Reasonable tradeoff: XOR-based cache key

The XOR approach has theoretical collision risk (different torrent sets could produce the same signature), but the mitigations are sound:

  • 64-bit hash space makes collisions unlikely
  • 5-minute TTL limits exposure window
  • Instance ID and count provide additional uniqueness

The performance benefit (O(n) vs O(n log n) for sorting) justifies the negligible collision risk for this use case.


4564-4564: LGTM: Search priority change prevents rate-limiter starvation

Changing from RateLimitPriorityRSS to RateLimitPriorityBackground implements the core PR objective. This gives seeded search a separate priority tier (60s interval, 45s max wait per PR description) to prevent mutual blocking with RSS automation.


2265-2307: LGTM: autoTMM and contentLayout logic correctly handles folder structure

The logic correctly coordinates autoTMM and contentLayout based on matchType and root folder comparison:

  1. autoTMM (lines 2268-2274): Properly disabled for partial-in-pack or when savePath differs, ensuring qBittorrent uses the calculated path instead of category defaults.

  2. contentLayout (lines 2299-2306): Correctly sets NoSubfolder when root folders differ to prevent creating incorrect subfolders, while allowing default behavior when roots match.

The implementation aligns with the PR's goal of handling different root folder scenarios (scene releases vs Plex-named folders).


5544-5547: LGTM: Filtering recentResults improves UI clarity

Only tracking successfully added torrents in recentResults provides a cleaner UI experience by showing a stable sliding window of actual additions rather than being diluted by skipped/failed results. The full result set is still available in state.run.Results for complete tracking.


4532-4553: RSS wait timeout design is appropriate and starvation is not possible

The 5-minute timeout is well-justified by the RSS automation architecture:

  1. Timeout appropriateness: RSS automation enforces a minimum 30-minute gap between runs (line 1349 in computeNextRunDelay). RSS runs should complete well within 5 minutes; if they exceed this, it indicates a stuck process that should fail fast rather than block seeded search indefinitely.

  2. No starvation risk: The automationLoop enforces minimum 30-minute intervals between RSS runs regardless of how long individual runs take. Even if seeded search times out after 5 minutes, the next RSS run won't start for 30+ minutes, preventing perpetual blocking.

  3. User experience: Debug logs track wait initiation and completion (lines 4536–4537, 4551–4552), and the 500ms poll interval is reasonable for this non-critical coordination.

The implementation correctly prioritizes RSS automation while preventing indefinite blocking.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
internal/services/jackett/scheduler_test.go (5)

18-50: Basic async Submit + callbacks test looks correct; consider guarding against hangs

The basic end‑to‑end test for Submit + JobCallbacks is clear and correctly asserts both execution and result propagation. As a minor robustness tweak (applies to other tests using bare <-done), you might wrap the wait in a select with time.After so a scheduler regression doesn’t hang the entire go test run.


311-332: Empty and nil indexer submissions are covered appropriately

The tests for empty indexer slices and nil indexers ensure Submit short‑circuits safely and still invokes OnJobDone. If you ever need to tighten behavior, you could also assert that exec is not called in these cases, but current coverage is reasonable.

Also applies to: 334-355


548-585: Relax timing thresholds in dispatch-time rate limiting test to reduce flakiness

The logic of “first call unthrottled, second call delayed” is right, but the hard elapsed1 < 50ms bound can be brittle on slow or noisy CI environments. You can keep the behavioral guarantee while giving more slack and making the second-call assertion relative:

-	elapsed1 := time.Since(start1)
-	assert.Less(t, elapsed1, 50*time.Millisecond)
+	elapsed1 := time.Since(start1)
+	// First dispatch should not be significantly delayed by rate limiting.
+	assert.Less(t, elapsed1, 500*time.Millisecond)
@@
-	elapsed2 := time.Since(start2)
-	// Should have waited for rate limit
-	assert.Greater(t, elapsed2, 50*time.Millisecond)
+	elapsed2 := time.Since(start2)
+	// Second dispatch should incur additional wait due to rate limiting.
+	assert.Greater(t, elapsed2, elapsed1+50*time.Millisecond)

This still fails if rate limiting is applied to the first request or not applied to the second, but is much less susceptible to timing noise.


587-640: MaxWait skip behavior test is good; consider asserting more details

The test correctly forces a long wait and verifies that a short MaxWait results in a RateLimitWaitError. If you want stronger guarantees, you could also assert waitErr.IndexerID/IndexerName and that waitErr.Wait > waitErr.MaxWait to fully validate the error contents.


642-722: Default MaxWait-by-priority behavior is well covered; one non-skip case could help

This table-driven test nicely asserts that RSS, completion, and background priorities pick up the expected default MaxWait and result in skips when NextWait exceeds those values. You might consider adding a case where NextWait is below the default (e.g., shorter limiter interval or longer explicit MaxWait) to exercise the non-skip branch too, but what’s here already covers the primary contract.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0719948 and fdaf464.

📒 Files selected for processing (1)
  • internal/services/jackett/scheduler_test.go (9 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/jackett/scheduler_test.go
🧬 Code graph analysis (1)
internal/services/jackett/scheduler_test.go (2)
internal/models/torznab_indexer.go (1)
  • TorznabIndexer (64-83)
internal/services/jackett/scheduler.go (10)
  • SubmitRequest (353-359)
  • JobCallbacks (343-350)
  • NewRateLimiter (88-97)
  • RateLimitPriority (36-36)
  • RateLimitOptions (45-49)
  • RateLimitPriorityBackground (42-42)
  • RateLimitPriorityInteractive (39-39)
  • RateLimitPriorityRSS (40-40)
  • RateLimitWaitError (51-57)
  • RateLimitPriorityCompletion (41-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build distribution binaries
🔇 Additional comments (11)
internal/services/jackett/scheduler_test.go (11)

52-113: Priority ordering test is well-constructed and race-safe

Using a single worker plus execMu to serialize access to executedTasks gives a clean signal that interactive work preempts background, and the completed counter avoids double‑closing done. This is a solid, deterministic priority test.


115-163: Worker pool concurrency limit test is correct and robust

The use of currentConcurrent/maxConcurrent with atomics to assert <= 2 concurrent execs gives a good safety net for future changes to the worker pool. The CAS loop is a bit heavy for test code but totally fine here.


165-208: Context cancellation behavior is validated correctly

The test reliably forces the exec function to observe ctx.Done() and checks context.Canceled propagation via OnComplete. The spin‑wait on started avoids a race where you cancel before the worker actually enters the select.


210-261: Panic recovery test covers the critical behavior well

Exercising a panic on the first indexer and then verifying a clean run on the second is a good pattern to ensure workers are recovered and the scheduler remains usable. The error message assertion for "scheduler worker panic" is a nice regression guard.


263-309: RSS deduplication test matches the intended behavior

The test correctly verifies that two overlapping RSS submissions to the same indexer result in only one exec invocation. The atomic.Int32 counter plus a slow exec make the dedup signal clear without introducing unnecessary complexity.


357-401: Concurrent submissions test gives good load and race coverage

Spawning multiple goroutines, each submitting several jobs, and counting completions with atomics is a solid stress test of the scheduler’s concurrency. The pattern of wg.Wait() for submissions plus a separate done channel for completions is appropriate here.


403-449: Multi-indexer submission behavior is clearly verified

Asserting that exec is called once per indexer and that OnComplete fires three times (with a WaitGroup) is a good way to lock in the “fan‑out per indexer” contract for a single Submit call.


451-502: Heap ordering and scheduler priority mapping tests are tight

The taskHeap ordering test cleanly asserts priority first, then FIFO by created time; the separate mapping test ensures RateLimitPriority* values are correctly translated to internal search priorities and have sane defaults. Both look correct and low‑maintenance.


504-517: Job and task ID generation test is straightforward and sufficient

Verifying monotonically increasing IDs starting from 1 for both jobs and tasks is exactly what you need here; no issues.


519-546: Error propagation through OnComplete is correctly asserted

The test nicely checks that an ExecFn error surfaces as‑is to OnComplete via the completeCh. This will catch regressions where errors are swallowed or wrapped incorrectly.


724-878: Rate limiter unit tests are comprehensive and realistic

The rate limiter tests collectively cover cooldowns, min interval behavior, “ready” cases, active cooldown enumeration, expiration, priority multipliers, recording requests (including zero-time handling), clearing cooldowns, and bulk loading. The chosen sleep intervals and thresholds look reasonable and unlikely to be flaky, given the margins you’ve used.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
internal/services/crossseed/service.go (4)

2714-2843: Suggest refactoring for reduced complexity.

The determineSavePath function has grown significantly with extensive branching logic for partial-in-pack, root-diff, and season pack scenarios. While the logic appears correct and logging is comprehensive, the increased cyclomatic complexity (10+ decision points) makes maintenance challenging. Consider:

  • Extracting helper functions for distinct scenarios:
    • handlePartialInPack(sourceRoot, candidateRoot, props, matchedTorrent) string
    • handleRootFolderDifference(sourceRoot, candidateRoot, props, matchedTorrent) string
    • handleSeasonPackPlacement(newRelease, matchedRelease, props) string
  • Using early returns consistently to flatten nesting
  • Adding comprehensive test coverage for each scenario

This would improve readability and reduce the risk of regression when adding new layout scenarios.


4071-4091: Consider collision-resistant alternative to XOR-based signature.

The XOR-based cache key generation has a theoretical collision risk: different torrent sets can produce identical signatures if their XOR results match (e.g., {A XOR B} could equal {C XOR D} for different torrents). While the 5-minute TTL and torrent count in the key reduce the practical impact, a collision would cause deduplicateSourceTorrents to return incorrect results, potentially processing duplicates or skipping valid torrents.

Consider a collision-resistant alternative:

 func dedupCacheKey(instanceID int, torrents []qbt.Torrent) string {
 	n := len(torrents)
 	if n == 0 {
 		return fmt.Sprintf("dedup:%d:0:0", instanceID)
 	}
 
-	// XOR all individual hash digests for order-independent signature.
-	// XOR is commutative and associative, so order doesn't matter.
-	var sig uint64
+	// Sort hashes and hash the concatenation for order-independent, collision-resistant signature.
+	hashes := make([]string, n)
 	for i := range torrents {
-		sig ^= xxhash.Sum64String(torrents[i].Hash)
+		hashes[i] = torrents[i].Hash
 	}
-
-	return fmt.Sprintf("dedup:%d:%d:%x", instanceID, n, sig)
+	sort.Strings(hashes)
+	sig := xxhash.Sum64String(strings.Join(hashes, "|"))
+	return fmt.Sprintf("dedup:%d:%d:%x", instanceID, n, sig)
 }

This approach sorts the hashes before hashing their concatenation, providing order-independence without XOR's collision risk. The performance impact is minimal (sorting ~100s of strings every 5 minutes).


4117-4131: Cache mutability contract requires vigilance.

The cache-hit path returns slices backed directly by cached data (lines 4125-4129) to reduce memory pressure, enforcing a "read-only" contract through documentation rather than type safety. While this aligns with the performance goal and was addressed in commit 23083bd (per past review comments), the lack of compile-time enforcement creates ongoing maintenance risk.

Consider adding runtime assertions in development/test builds to detect violations:

+// devMode can be set via build tags or environment variable
+var devMode = os.Getenv("QUI_DEV_MODE") == "true"
+
 func (s *Service) deduplicateSourceTorrents(ctx context.Context, instanceID int, torrents []qbt.Torrent) ([]qbt.Torrent, map[string][]string) {
 	if len(torrents) <= 1 {
 		return torrents, map[string][]string{}
 	}
 
 	cacheKey := dedupCacheKey(instanceID, torrents)
 	if s.dedupCache != nil {
 		if entry, ok := s.dedupCache.Get(cacheKey); ok && entry != nil {
+			if devMode {
+			// In dev mode, return a defensive copy so callers cannot corrupt the cache
+				cloned := append([]qbt.Torrent(nil), entry.deduplicated...)
+				return cloned, entry.duplicateHashes
+			}
 			// IMPORTANT: Returned slices are cache-backed. Do not modify.
 			return entry.deduplicated, entry.duplicateHashes
 		}
 	}

This shields the cache from accidental mutation during development while preserving the zero-copy fast path in production.


4588-4610: RSS wait timeout balances coordination with responsiveness.

The RSS wait loop (lines 4588-4610) blocks seeded search processing for up to 5 minutes per torrent while RSS automation runs, using 500ms polling. This coordination prevents rate limiter contention between RSS (higher priority) and background search (lower priority), aligning with the PR's goal to fix mutual starvation.

However, the blocking approach has trade-offs:

  • Per-torrent impact: Each torrent in the queue waits up to 5 minutes, potentially delaying the entire search run
  • Polling overhead: 500ms polling is simple but not event-driven
  • Timeout rationale: The 5-minute timeout exceeds typical RSS run durations but prevents indefinite blocking if RSS gets stuck

Consider an event-driven alternative using a channel to signal RSS completion:

 type Service struct {
+	rssCompleteChan chan struct{} // Closed when RSS automation completes
 	// ... other fields
 }

-	wasWaitingForRSS := s.runActive.Load()
-	if wasWaitingForRSS {
-		log.Debug().Msg("[CROSSSEED-SEARCH] Waiting for RSS automation to complete before searching")
-	}
-	rssWaitDeadline := time.After(5 * time.Minute)
-rssWaitLoop:
-	for s.runActive.Load() {
+	select {
+	case <-ctx.Done():
+		return ctx.Err()
+	case <-s.rssCompleteChan:
+		// RSS completed, proceed immediately
+	case <-time.After(5 * time.Minute):
+		log.Warn().Msg("[CROSSSEED-SEARCH] RSS wait timeout exceeded, proceeding with search")
+	}
-		select {
-		case <-ctx.Done():
-			return ctx.Err()
-		case <-rssWaitDeadline:
-			log.Warn().Msg("[CROSSSEED-SEARCH] RSS wait timeout exceeded, proceeding with search")
-			break rssWaitLoop
-		case <-time.After(500 * time.Millisecond):
-		}
-	}

This eliminates polling overhead and provides immediate wake-up when RSS completes, improving responsiveness for large search queues.
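One wrinkle the channel-based suggestion has to handle, sketched below under assumed names (rssSignal, start, finish are not from the PR): a closed channel stays closed forever, so a single rssCompleteChan field only works for one run. Each RSS run must install a fresh channel, guarded by a mutex so waiters always see the current one.

```go
package main

import (
	"fmt"
	"sync"
)

// rssSignal sketches the per-run channel lifecycle: close() broadcasts
// completion to all waiters at once, and start() swaps in a fresh channel
// for the next run.
type rssSignal struct {
	mu   sync.Mutex
	done chan struct{}
}

func newRSSSignal() *rssSignal {
	s := &rssSignal{done: make(chan struct{})}
	close(s.done) // no run active yet: waiters proceed immediately
	return s
}

// start installs a fresh channel for a new RSS run.
func (s *rssSignal) start() {
	s.mu.Lock()
	s.done = make(chan struct{})
	s.mu.Unlock()
}

// finish wakes every waiter by closing the current run's channel.
func (s *rssSignal) finish() {
	s.mu.Lock()
	close(s.done)
	s.mu.Unlock()
}

// wait returns the channel to select on alongside ctx.Done() and a timeout.
func (s *rssSignal) wait() <-chan struct{} {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.done
}

func main() {
	sig := newRSSSignal()
	sig.start()
	go sig.finish()
	<-sig.wait() // returns once finish() closes the run's channel
	fmt.Println("rss complete") // prints "rss complete"
}
```

Without the swap-on-start step, the second RSS run's waiters would fall through immediately on the already-closed channel.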

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fdaf464 and 963d3a0.

📒 Files selected for processing (7)
  • internal/services/crossseed/crossseed_test.go (4 hunks)
  • internal/services/crossseed/find_individual_episodes_test.go (1 hunks)
  • internal/services/crossseed/matching_layout_test.go (1 hunks)
  • internal/services/crossseed/release_cache_test.go (1 hunks)
  • internal/services/crossseed/service.go (22 hunks)
  • internal/services/crossseed/service_example_test.go (1 hunks)
  • internal/services/crossseed/service_search_test.go (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • internal/services/crossseed/service_example_test.go
  • internal/services/crossseed/release_cache_test.go
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/matching_layout_test.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/crossseed_test.go
  • internal/services/crossseed/service_search_test.go
  • internal/services/crossseed/find_individual_episodes_test.go
  • internal/services/crossseed/service.go
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/find_individual_episodes_test.go
  • internal/services/crossseed/service.go
🧬 Code graph analysis (3)
internal/services/crossseed/service_search_test.go (1)
web/src/types/index.ts (1)
  • AppPreferences (587-824)
internal/services/crossseed/find_individual_episodes_test.go (1)
web/src/types/index.ts (1)
  • AppPreferences (587-824)
internal/services/crossseed/service.go (1)
internal/services/jackett/scheduler.go (1)
  • RateLimitPriorityBackground (42-42)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (14)
internal/services/crossseed/service_search_test.go (1)

244-246: LGTM! Test mock properly extended.

The GetAppPreferences method correctly returns a valid content layout for test scenarios, aligning with the PR's goal of adding content-layout awareness to cross-seed functionality.

internal/services/crossseed/matching_layout_test.go (1)

139-141: LGTM! Consistent test mock implementation.

The GetAppPreferences method follows the same pattern as other test mocks in this PR, correctly supporting content-layout-aware test scenarios.

internal/services/crossseed/find_individual_episodes_test.go (1)

247-249: LGTM! Test mock properly extended.

Consistent with other test helpers in this PR, correctly supporting content-layout-aware cross-seed tests.

internal/services/crossseed/crossseed_test.go (5)

25-25: LGTM! Import supports expanded test coverage.

The releases package import is necessary for the new content-layout-aware test scenarios that use releases.NewDefaultParser().


236-1027: Excellent test coverage for content-layout-aware save path logic.

The expanded test suite comprehensively covers:

  • Season pack ↔ episode scenarios
  • Movie collections and partial-in-pack matches
  • Edge cases (deep nesting, unicode, recursion prevention, path separator variations)
  • Different content layouts (Original, Subfolder, NoSubfolder)

The test organization with clear sections and descriptive comments makes it easy to understand expected behavior for each scenario. This aligns well with the PR's goal of fixing ContentPath vs SavePath handling in cross-seed operations.


1029-1128: Excellent integration tests for partial-in-pack behavior.

These end-to-end tests verify the critical fix for ContentPath vs SavePath handling in partial-in-pack scenarios:

  1. Season pack → episode matching correctly produces "partial-in-pack" matchType
  2. determineSavePath uses ContentPath (not SavePath) for proper file placement
  3. Explicit negative assertions confirm SavePath is NOT used

This directly addresses the PR objective of fixing save path determination for cross-seed operations.


2284-2286: LGTM! Test mock properly extended.

Consistent with other test helpers, correctly supporting content-layout-aware test scenarios.


2428-2661: Excellent validation of layout-agnostic behavior.

This test comprehensively verifies that cross-seed save path determination works correctly across all qBittorrent content layout settings (Original, Subfolder, NoSubfolder). The structured approach—testing each key scenario with all three layouts—ensures that users' layout preferences don't break cross-seed functionality.

The clear test organization and descriptive assertions make it easy to understand expected behavior for each layout mode.

internal/services/crossseed/service.go (6)

63-63: LGTM! Interface extension for content layout support.

The GetAppPreferences method addition enables content-layout-aware save path decisions, which aligns with the PR objectives for fixing cross-seed placement logic.


75-79: LGTM! Clean cache entry structure.

The dedupCacheEntry structure appropriately captures both the deduplicated torrent list and the duplicate hash mappings needed for cooldown propagation.


259-262: LGTM! Consistent cache initialization.

The dedupCache initialization follows the same pattern as other service caches with a 5-minute TTL, which is appropriate for avoiding repeated expensive deduplication operations during automation runs.

Also applies to: 297-298, 315-315


4620-4620: LGTM! Priority change aligns with coordination goals.

The switch to RateLimitPriorityBackground for seeded search (line 4620) ensures that background searches yield to RSS automation, addressing the mutual starvation issue described in the PR objectives. Combined with the RSS wait logic (lines 4588-4610), this establishes clear priority ordering: Interactive > RSS > Completion > Background.


5597-5610: LGTM! Focused recentResults improves UI clarity.

The change to track only successfully added torrents in recentResults (line 5603) provides a cleaner UI experience by showing a stable sliding window of actual progress rather than being diluted by skipped/failed attempts. This aligns well with user expectations for a "recent successes" view.

Note: This filters out skipped/failed results from the recent list, though they remain in the full run.Results. Consider adding metrics or separate UI elements if visibility into recent failures is desired for troubleshooting.


2239-2244: The "Original" fallback is safe and intentionally designed for cross-seed placement logic.

The code explicitly documents (line 2323) that "Original" is the default qBittorrent behavior that preserves folder structure. The fallback is well-integrated into the placement logic: when folder structures differ between source and candidate torrents, the code intelligently overrides to "NoSubfolder" (lines 2316-2318, 2324-2326) to avoid creating incorrect subfolders. When structures match, it uses "Original" to maintain consistency. All test mocks validate this approach. The WARN-level logging (line 2242) adequately alerts operators.

Rather than adding retry logic (which would increase latency for transient failures), consider documenting the fallback behavior in the function comment to clarify that:

  • "Original" is the qBittorrent default preserving folder structure
  • Folder structure mismatches trigger automatic "NoSubfolder" override
  • If persistent preference fetch failures occur, investigate the sync manager connection

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)

945-961: Consider using retry helper for consistency.

The stuck-run recovery logic correctly marks runs that have been in "running" status for >5 minutes as failed. However, it uses the direct UpdateRun call rather than updateAutomationRunWithRetry, which could leave the database inconsistent if the update fails.

Apply this change for consistency:

-				if updated, updateErr := s.automationStore.UpdateRun(ctx, lastRun); updateErr != nil {
+				if updated, updateErr := s.updateAutomationRunWithRetry(ctx, lastRun); updateErr != nil {
 					log.Warn().Err(updateErr).Int64("runID", lastRun.ID).Msg("failed to update stuck automation run")
 				} else {
 					lastRun = updated
 				}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 963d3a0 and 166e474.

📒 Files selected for processing (1)
  • internal/services/crossseed/service.go (30 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (3)
web/src/types/index.ts (3)
  • AppPreferences (587-824)
  • Torrent (204-261)
  • CrossSeedRun (1227-1243)
internal/models/crossseed.go (2)
  • CrossSeedRun (144-160)
  • CrossSeedRunStatusRunning (118-118)
internal/services/jackett/scheduler.go (1)
  • RateLimitPriorityBackground (42-42)
🔇 Additional comments (11)
internal/services/crossseed/service.go (11)

872-926: Retry helpers look solid.

The linear backoff (100ms × attempt number) and maximum of 3 retries are reasonable for database update conflicts. Context cancellation is properly checked between attempts.


4145-4165: Order-independent cache key design is sound.

The XOR-based signature ensures the same set of torrents produces the same cache key regardless of order. While XOR collisions are theoretically possible, the combination of torrent count and 5-minute TTL makes practical impact negligible, as noted in the comments.


4191-4205: Cache-hit early return avoids recomputation as intended.

The function now returns cached slices directly without defensive copies to reduce memory pressure. The read-only contract is clearly documented in both the function doc comment and inline comments. This addresses the dedup cache behavior noted in previous reviews.

Based on past review comments, this concern was already addressed in commit 23083bd.


2788-2866: Complex save-path logic for partial-in-pack scenarios.

The new branching handles partial-in-pack and differing root folders with appropriate path construction. The defensive checks to avoid duplicating folder names in the path (e.g., checking if SavePath already ends with candidateRoot) are important safeguards.

Verify the path logic works correctly across different content layouts and operating systems during testing, particularly the folder duplication checks.


2313-2318: Graceful fallback for app preferences.

The error handling correctly falls back to "Original" content layout when preferences cannot be fetched, allowing the cross-seed operation to continue with a safe default.


2378-2404: ContentLayout decision logic aligns with qBittorrent behavior.

The branching correctly adapts the content layout based on the instance's TorrentContentLayout preference and whether the source/candidate root folders match. Forcing NoSubfolder when roots differ prevents qBittorrent from creating mismatched folder structures.
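A condensed sketch of the decision; chooseLayout is a hypothetical helper name, and the real branching in service.go considers more inputs:

```go
package main

import "fmt"

// chooseLayout forces "NoSubfolder" when the source and candidate root
// folders differ, so qBittorrent does not create a mismatched folder;
// otherwise it keeps the instance's configured layout, defaulting to
// "Original" (the qBittorrent default) when none is set.
func chooseLayout(instanceLayout, sourceRoot, candidateRoot string) string {
	if sourceRoot != candidateRoot {
		return "NoSubfolder"
	}
	if instanceLayout == "" {
		return "Original"
	}
	return instanceLayout
}

func main() {
	fmt.Println(chooseLayout("Subfolder", "Show.S01", "Show.S01"))     // roots match: keep layout
	fmt.Println(chooseLayout("Subfolder", "Show.S01", "Show.S01E01"))  // roots differ: force NoSubfolder
}
```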


2347-2366: AutoTMM disabling for precise path control is correct.

The logic correctly disables Automatic Torrent Management when the calculated save path differs from the matched torrent's path or for partial-in-pack scenarios. This ensures the cross-seed uses the specific computed location rather than qBittorrent's category-based path.


4671-4671: Background priority for seeded search aligns with PR objectives.

Setting seeded torrent search to RateLimitPriorityBackground ensures it yields to RSS automation, preventing the mutual starvation described in the PR objectives. This allows RSS (higher priority) to run without being blocked by long-running seeded searches.

Based on PR objectives, this is an essential part of the scheduler coordination strategy.


5648-5660: Recent results filtering improves UI signal-to-noise.

Filtering recentResults to only include successfully added torrents (result.Added == true) provides a cleaner UI experience by showing a focused view of actual work completed, while state.run.Results maintains the complete history of all attempts.


1220-1220: CompletedAt check prevents stale "running" status.

Adding state.run.CompletedAt == nil to the condition ensures the status correctly reflects when a run has actually finished, preventing the UI from showing "running" for completed runs.


4121-4123: Clearing current candidate on finalization prevents stale UI state.

Resetting currentCandidate to nil before marking the run as completed ensures the UI doesn't display outdated "currently processing" information after the search run finishes.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
internal/services/crossseed/crossseed_test.go (3)

236-1185: determineSavePath test matrix is thorough and encodes nuanced path invariants well

The expanded table for TestDetermineSavePath covers a ton of realistic movie/TV/collection and edge scenarios (deep nesting, unicode, mixed separators, empty paths, chained cross-seeds, etc.) and clearly documents the intended behavior via description. This is a solid executable spec for the path logic.

If this grows further, consider extracting a small helper to reduce repetition around building qbt.Torrent/TorrentProperties inputs, but I wouldn’t change anything for this PR.


2874-3236: mockRecoverSyncManager and recoverErroredTorrents tests give strong coverage; consider a few failure‑path cases

The new mockRecoverSyncManager plus the recoverErroredTorrents tests cover the happy paths and mixed‑state batching well: no‑errored, single/multiple errored, missing‑files, mixed states, empty list, and context‑cancellation.

Since the mock already has switches like bulkActionFails, keepInCheckingState, failGetTorrentsAfterRecheck, setProgressToThreshold, and secondRecheckCompletes, it might be worth adding one or two focused tests that flip these to verify:

  • how recoverErroredTorrents behaves when a bulk action fails (error surfacing / retry or abort behavior), and
  • how it reacts when progress lands exactly on the “good enough” threshold vs. slightly below, or when torrents disappear mid‑recheck.

Not required for this PR, but you’ve already done most of the setup to make those edge behaviors easy to lock in.


2453-2455: Hard-coded "Original" layout in the test mock is fine; parameterization is optional

The hard-coded "Original" layout in fakeSyncManager.GetAppPreferences is fine for deterministic testing. All three layout types (Original, Subfolder, NoSubfolder) are exercised by TestDetermineSavePathContentLayoutScenarios, which passes contentLayout directly to determineSavePath. Parameterizing the mock remains optional future work if tests later need to exercise layouts through the GetAppPreferences path rather than direct parameter passing.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c39399e and f51edc5.

📒 Files selected for processing (1)
  • internal/services/crossseed/crossseed_test.go (7 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.

Applied to files:

  • internal/services/crossseed/crossseed_test.go
📚 Learning: 2025-11-28T20:32:30.126Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.

Applied to files:

  • internal/services/crossseed/crossseed_test.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/crossseed_test.go
📚 Learning: 2025-11-25T22:46:03.762Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/crossseed_test.go
🧬 Code graph analysis (1)
internal/services/crossseed/crossseed_test.go (2)
pkg/releases/parser.go (1)
  • NewDefaultParser (29-31)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (3)
internal/services/crossseed/crossseed_test.go (3)

1187-1284: Partial‑in‑pack integration tests nicely validate matchType ↔ save‑path interplay

TestPartialInPackIntegration and TestPartialInPackMovieCollectionIntegration do a good job of exercising the full flow (releaseCache.Parse → getMatchType → determineSavePath) for key partial‑in‑pack scenarios (TV episode into season pack, movie into collection). The expectations align with the large table above (episodes use season pack ContentPath, single movies in collections use the collection path), which should catch regressions in either matcher or path logic.


1343-1465: Category / tag preservation tests and boolPtr helper look correct and comprehensive

The updated TestCrossSeed_CategoryAndTagPreservation cases exercise the important combinations: inheritance on/off, explicit indexer category, and explicit disabling of the cross-seed tag using addCrossSeedTag = boolPtr(false). That matches how I’d expect buildCrossSeedTags and determineCrossSeedCategory to behave.

boolPtr is a simple, local helper and keeps the table readable; keeping it in this file near the tests is fine.


2597-2872: Content‑layout scenario tests are a good cross‑check that determineSavePath is layout‑agnostic

TestDetermineSavePathContentLayoutScenarios clearly verifies that for the key scenarios (season packs, partial‑in‑pack episodes/movies, single‑file torrents), determineSavePath returns the same logical path regardless of "Original", "Subfolder", or "NoSubfolder" content layout, with behavior differences pushed into qBittorrent itself. This is a nice safeguard against regressions when wiring in TorrentContentLayout.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/services/crossseed/service.go (1)

1649-1659: Tagging & cross‑seed tag backward‑compat: verify defaults cover legacy callers

You’ve moved most tagging to source‑specific arrays:

  • RSS: RSSAutomationTags.
  • Seeded search: SeededSearchTags.
  • Webhook: WebhookTags.
  • Manual apply: either user‑supplied TagName or SeededSearchTags.

buildCrossSeedTags now:

  • Merges these “source” tags plus optional matched‑torrent tags when inheritSourceTags is true.
  • Treats AddCrossSeedTag purely as a removal signal: if explicitly false, it strips "cross-seed" from the final list; otherwise it doesn’t touch it.

That’s a nice cleanup, but it does mean that legacy callers who previously relied on AddCrossSeedTag=true without configuring any tags will no longer automatically get a "cross-seed" tag unless defaults for RSSAutomationTags/SeededSearchTags/WebhookTags now include it.

Please double‑check:

  • Default values for those tag arrays do include "cross-seed" where you expect the tag.
  • Older webhook/autobrr flows that only toggle AddCrossSeedTag (and never send Tags) still behave as intended.

If you haven’t already, a short note in the settings docs that "cross-seed" now comes from the per‑source tag lists (and AddCrossSeedTag only disables it) would help avoid surprises.

Also applies to: 2222-2245, 4090-4097, 4206-4237, 5303-5315, 6206-6253

♻️ Duplicate comments (3)
internal/services/crossseed/service.go (3)

414-430: Search run options & status: confirm completion‑run visibility choice

  • Adding InheritSourceTags to SearchRunOptions and wiring it from automation settings into StartSearchRun/executeCrossSeedSearchAttempt is coherent; seeded search now shares the same tag‑inheritance behavior as RSS/completion.
  • GetSearchRunStatus now suppresses runs where state.opts.RequestedBy == "completion". This matches the intent to hide completion‑triggered runs from the generic seeded‑search status endpoint but will also hide any user‑initiated runs that set RequestedBy="completion".

If the API is supposed to expose all user‑visible runs regardless of source, consider adding an explicit discriminator (e.g. SystemInitiated or a dedicated mode) rather than keying purely on RequestedBy. If hiding all completion runs is intentional, a brief comment on SearchRunOptions.RequestedBy or in GetSearchRunStatus would help future readers.

Also applies to: 1155-1178, 1233-1252, 5937-5947


208-211: Errored‑torrent recovery: good batching; minor timeout‑intent mismatch

The new recoverErroredTorrents flow is a substantial improvement:

  • Batches errored torrents, pauses them once, and rechecks via a single BulkAction call.
  • Polls every 2s instead of 10–100ms and uses GetTorrents with a Hashes filter, which is much gentler on qBittorrent.
  • Uses per‑torrent state with:
    • capped error retries (maxErrorRetries),
    • a 25‑minute limit while actively checking,
    • a second recheck after DiskCacheTTL.
  • Resumes only torrents that reach the computed minCompletionProgress, logs failures, and respects the caller’s ctx.

One nit: the 100ms settingsCtx in this function doesn’t actually constrain GetAutomationSettings, since that helper replaces the incoming context with context.WithoutCancel and then applies its own 5s timeout. If you no longer want a separate short bound here, you can drop the 100ms wrapper to avoid implying a tighter SLA. If you do want a genuinely short call, it has to be implemented inside GetAutomationSettings or via a lighter helper.

Also applies to: 6698-6995


968-985: Stuck automation‑run recovery no longer mislabels long runs

Gating the “mark last run as timed‑out” logic on !s.runActive.Load() fixes the earlier problem where a legitimately long‑running in‑memory automation could be briefly reported as Failed. The 5‑minute window is now only applied when nothing is currently active in memory, which matches the intended semantics.

🧹 Nitpick comments (3)
internal/services/crossseed/service.go (3)

200-214: Recheck‑resume worker: design matches stated invariants

The new recheckPollInterval/recheckAbsoluteTimeout/recheckAPITimeout constants, recheckResumeChan fields, queueRecheckResume, and recheckResumeWorker give you a single, batched worker that:

  • Groups pending hashes per instance and calls GetTorrents once per tick.
  • Resumes when Progress >= threshold and not checking, or drops entries on absolute timeout / below‑threshold completion.
  • Uses a background context so it survives per‑request cancellation.

Keying pending by hash only and running the worker for process lifetime are consistent with the documented constraint that background seeded‑search runs operate on a single instance, so I wouldn’t change that here. If/when you move to multi‑instance background runs, swapping the map key to instanceID|normalizedHash will be the right follow‑up.
Based on learnings, this is acceptable as‑is.

Also applies to: 281-285, 316-341, 2560-2591, 2593-2729


2290-2552: Cross‑seed add path: alignment, contentLayout, and recheck/resume

This refactor of processCrossSeedCandidate is quite a bit clearer and looks correct:

  • requiresAlignment/hasExtraFiles split rename‑alignment from “extra file” scenarios.
  • skip_checking is only set when there are no extra source files, so straight “all files already on disk” matches can skip verification.
  • sourceRoot/candidateRoot + isEpisodeInPack drive:
    • contentLayout="Subfolder" for single‑file → folder (non episode‑in‑pack),
    • contentLayout="NoSubfolder" for folder → single‑file,
    • leave default (“Original”) otherwise.
  • For episode‑in‑season‑pack matches, you disable TMM and use matchedTorrent.ContentPath as savepath, avoiding category default paths.
  • TMM is enabled only when a category is set and the match is not an episode‑in‑pack, which keeps season‑pack paths intact.
  • Recheck/resume behavior:
    • needsRecheckAndResume := requiresAlignment || hasExtraFiles triggers a manual recheck followed by queueRecheckResume.
    • Perfect matches with startPaused resume immediately; startPaused=false relies on qBittorrent starting the torrent from the initial add.

This matches the comments and should handle the previously tricky partial‑in‑pack and season‑pack scenarios more robustly.


4141-4279: ApplyTorrentSearchResults: concurrency & tagging semantics

The new implementation that processes selections concurrently via goroutines and a buffered resultChan is straightforward and safe:

  • Each goroutine is independent, indexes its own slot, and writes a single selectionResult back.
  • The collector closes resultChan after wg.Wait() and reconstructs results in original order.
  • Errors per selection are captured and surfaced per‑entry without failing the whole call.

Behavioral bits to note:

  • Manual apply tags now prefer an explicit TagName, then fall back to SeededSearchTags when useTag is true.
  • InheritSourceTags is pulled from automation settings so manual apply mirrors seeded‑search behavior.

Overall this is a nice win for responsiveness on multiple selections.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f51edc5 and b9b3ef5.

📒 Files selected for processing (1)
  • internal/services/crossseed/service.go (55 hunks)
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
📚 Learning: 2025-11-28T20:32:30.126Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-25T22:46:03.762Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-27T15:33:39.007Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
internal/models/crossseed.go (1)
  • CrossSeedRun (156-172)
internal/services/crossseed/models.go (1)
  • CrossSeedRequest (15-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (10)
internal/services/crossseed/service.go (10)

55-73: qbittorrentSync interface extension: ensure all implementations updated

Adding GetAppPreferences(ctx context.Context, instanceID int) (qbt.AppPreferences, error) is consistent with its use in recoverErroredTorrents, but it’s a breaking interface change. Please double‑check that SyncManager and any test/mocks implement this method with the same signature to avoid compile or runtime DI issues.


75-79: Dedup cache & keying: behavior is sound, collisions are a documented tradeoff

The dedupCache + dedupCacheKey wiring looks correct: a 5‑minute TTL, key includes instanceID + torrent count + order‑independent hash signature, and deduplicateSourceTorrents returns cache‑backed slices with a clear read‑only contract. Given the explicit comment about XOR’s theoretical collision risk and the short TTL, the tradeoff is acceptable and well‑documented.

Also applies to: 262-266, 313-315, 4443-4463, 4489-4503, 4695-4701


883-1139: Completion search refactor: tagging & inheritance path looks consistent

HandleTorrentCompletion now delegates to executeCompletionSearch, which:

  • Filters indexers asynchronously, then runs SearchTorrentMatches with RateLimitPriorityCompletion.
  • Builds a searchRunState used purely to pass options (category override, CompletionSearchTags, InheritSourceTags, ignore patterns, StartPaused) into executeCrossSeedSearchAttempt.
  • Does not persist a search run, which matches the “fire‑and‑forget” completion behavior.

This keeps completion tagging (CompletionSearchTags + inheritance) aligned with seeded search and RSS automation, without polluting the seeded‑search run history. The flow looks sound.


894-948: Retry helpers for run persistence are well‑factored

updateAutomationRunWithRetry and updateSearchRunWithRetry centralize transient‑error handling around store updates, and the callers (executeAutomationRun, GetAutomationStatus timeout update, finalizeSearchRun) now use them appropriately.

Given the low retry count and capped backoff, this should improve resilience against brief DB hiccups without materially extending request latency. No issues from my side.

Also applies to: 1459-1487, 1570-1574, 4405-4441, 5951-5962


1155-1177: Dedup + cooldown propagation in seeded search: behavior aligns with goals

The seeded‑search pipeline now:

  • Normalizes seeded‑search tags from settings when none are provided.
  • Deduplicates queue entries per content group via deduplicateSourceTorrents.
  • Propagates cooldown history to duplicate hashes in propagateDuplicateSearchHistory.

Combined, this should prevent repeatedly searching identical cross‑seeds while still respecting per‑torrent cooldown semantics across duplicates. The wiring to searchRunState.duplicateHashes looks consistent.

Also applies to: 1787-1815, 4723-4740


3015-3174: determineSavePath: expanded partial‑in‑pack logic is consistent with new add path

determineSavePath now:

  • Normalizes SavePath/ContentPath separators.
  • Handles partial-in-pack with nuanced behavior:
    • Single‑file candidate → SavePath.
    • TV episode into season pack → ContentPath.
    • Non‑TV single‑file‑into‑folder → SavePath with “Subfolder” layout at the add call site.
  • Adjusts when roots differ (sourceRoot != candidateRoot) by constructing a folder path under SavePath when appropriate.
  • Falls back to season‑pack/episode/same‑type scenarios based on rls metadata.

Given processCrossSeedCandidate now largely relies on TMM + contentLayout and savepath directly, this helper is mainly for other flows/tests. The branches are logically consistent with the comments and don’t introduce obvious path bugs.
Based on learnings, I’m intentionally not flagging the unused contentLayout parameter here.


3821-3826: Jackett scheduler integration: JobID plumbing and outcome reporting

The changes to SearchTorrentMatches and processSearchCandidate hook nicely into the new async Jackett scheduler:

  • You now propagate searchResp.JobID through TorrentSearchResponse.JobID.
  • Automation search paths aggregate indexer adds/fails in indexerAdds/indexerFails and call reportIndexerOutcomes(jobID, ...).
  • reportIndexerOutcomes correctly:
    • skips reporting when jobID == 0 (interactive/test paths),
    • merges indexer IDs across both maps and reports “added” vs “failed” per indexer.

That should give the scheduler enough signal to drive its escalating backoff without affecting interactive completion/manual flows (which use jobID=0). Looks good.

Also applies to: 4041-4042, 4095-4097, 5051-5078, 5108-5136


4965-4975: Background seeded search priority wiring

Using jackett.WithSearchPriority(..., jackett.RateLimitPriorityBackground) for library seeded‑search automation correctly sets these runs to the lowest scheduler priority (below RSS and completion). That matches the PR’s stated priority scheme and should help avoid starvation for interactive flows.


5493-5498: Content‑based indexer filtering: logging & matches exposure

The tweaks in filterIndexersByExistingContent:

  • Add a clear “filtering indexers by existing content” debug log with counts.
  • Collect human‑readable contentMatches ("Name (InstanceName)") that bubble up into async filtering state for UI/diagnostics.
  • Log excluded indexers with reasons when any are filtered.

This makes it much easier to understand why certain indexers were removed while keeping the core behavior unchanged.

Also applies to: 5525-5528, 5590-5597


5937-5947: Search recentResults now only track successful additions

Restricting recentResults to result.Added == true while still appending all results to state.run.Results gives the UI a stable rolling window of actual additions without being spammed by skips/failures. That’s a sensible UX choice and the guard if s.searchState == state avoids races with stale states.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

♻️ Duplicate comments (2)
internal/services/crossseed/service.go (2)

1233-1253: Search run visibility still hides all “completion” runs

GetSearchRunStatus continues to suppress any active state where state.opts.RequestedBy == "completion". That keeps internal completion-triggered runs out of the status API, but it also hides any future user-initiated runs that might reuse "completion" as a discriminator.

If the intent is “hide only system/internal completion runs”, consider either:

  • Using a more explicit flag (e.g. SystemInitiated bool) in SearchRunOptions, or
  • Adding a comment here clarifying that completion-tagged runs are intentionally excluded from the user-visible status API so future callers don’t rely on them appearing.

6713-6762: Errored-torrent recovery: good batching, but settings timeout wrapper is misleading

The new recoverErroredTorrents dramatically improves on the earlier per-torrent recheck logic: it batches pause/recheck operations, polls every 2 seconds, bounds “active checking” time per torrent (25 minutes), and does a second pass after DiskCacheTTL. That keeps both API load and total wall time more predictable.

Two minor points to consider:

  1. 100ms settings context is ineffective
    GetAutomationSettings immediately wraps the incoming context with context.WithoutCancel and applies its own 5s timeout. The 100ms settingsCtx here doesn’t actually limit the call, but it suggests it does.

    You could simplify by dropping the wrapper:

  -  settingsCtx, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
  -  settings, err := s.GetAutomationSettings(settingsCtx)
  -  cancel()
  +  settings, err := s.GetAutomationSettings(ctx)

or, if you genuinely need a short bound, refactor GetAutomationSettings (or add a lighter helper) to respect upstream deadlines.

  2. Worst-case blocking for manual flows
    Because this function blocks until both phases complete (or ctx is canceled), manual FindCandidates / CrossSeed calls can, in pathological cases with many errored torrents, run for a long time. If that becomes an issue in practice, you might cap the number of errored torrents recovered per invocation or gate this recovery to automation/search paths instead of all callers.

Also applies to: 6751-6762, 6820-6878, 6927-7008


🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)

281-285: Recheck-resume worker design is sound; one minor guard improvement

The single recheck-resume worker, per-instance batching, and use of a dedicated background context avoid goroutine leaks and keep qBittorrent polling under control. The queueRecheckResume threshold calculation from automation settings is also reasonable.

One small robustness tweak: in processCrossSeedCandidate, queueRecheckResume is called even if the explicit recheck BulkAction fails. That leaves an entry in the worker queue for a torrent that was never rechecked and will just sit until absolute timeout.

Consider only enqueueing when the initial recheck call succeeds:

```diff
-    if err := s.syncManager.BulkAction(ctx, candidate.InstanceID, []string{torrentHash}, "recheck"); err != nil {
-        log.Warn().
-            Err(err).
-            Int("instanceID", candidate.InstanceID).
-            Str("torrentHash", torrentHash).
-            Msg("Failed to trigger recheck after add")
-    }
-    // Queue for background resume when recheck completes
-    s.queueRecheckResume(ctx, candidate.InstanceID, torrentHash)
+    if err := s.syncManager.BulkAction(ctx, candidate.InstanceID, []string{torrentHash}, "recheck"); err != nil {
+        log.Warn().
+            Err(err).
+            Int("instanceID", candidate.InstanceID).
+            Str("torrentHash", torrentHash).
+            Msg("Failed to trigger recheck after add; not queueing for resume")
+        return result
+    }
+    // Queue for background resume when recheck completes
+    s.queueRecheckResume(ctx, candidate.InstanceID, torrentHash)
```

Also applies to: 316-341, 2473-2531, 2560-2591, 2593-2729

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b9b3ef5 and 3fc1abb.

📒 Files selected for processing (2)
  • internal/services/crossseed/content_detection_test.go (1 hunks)
  • internal/services/crossseed/service.go (56 hunks)
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
📚 Learning: 2025-11-28T20:32:30.126Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-25T22:46:03.762Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-27T15:33:39.007Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service.go
🧬 Code graph analysis (2)
internal/services/crossseed/content_detection_test.go (1)
internal/services/crossseed/models.go (1)
  • TorrentInfo (74-97)
internal/services/crossseed/service.go (2)
internal/models/crossseed.go (2)
  • CrossSeedRun (156-172)
  • CrossSeedAutomationSettings (20-48)
internal/services/crossseed/models.go (1)
  • CrossSeedRequest (15-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (12)
internal/services/crossseed/content_detection_test.go (2)

90-134: LGTM! Well-structured test for miniseries edge case.

The test correctly validates that episode markers in file names (E01, E02) are trusted over the torrent name when detecting TV content, even when the torrent name contains a year that might suggest a movie. The test data is realistic and the assertions verify the expected behavior.


136-184: LGTM! Well-structured test for anime episode detection.

The test correctly validates that anime-style episode markers (" - 01 ") in file names are trusted for TV content detection, even when the torrent name lacks episode indicators. The test data accurately reflects common anime naming conventions and the assertions verify the expected behavior.

internal/services/crossseed/service.go (10)

262-266: Dedup cache wiring and read-only contract look solid

The dedupCache field, its initialization with a 5‑minute TTL, and the dedupCacheKey/deduplicateSourceTorrents integration are consistent and well-documented. The explicit “results are read‑only” contract plus cache-backed slices avoid unnecessary allocations without sacrificing correctness for current call sites.

Also applies to: 313-315, 4458-4718


894-920: Retry helpers for run persistence improve resilience

The new updateAutomationRunWithRetry and updateSearchRunWithRetry helpers, plus the updated call sites in automation/search finalization, are a good durability improvement: transient store errors no longer immediately lose run state. Logging at WARN and then ERROR on exhaustion is appropriate, and the use of the original ctx (or context.Background() at finalize) makes sense to persist status even when the run’s work context has been canceled.

Also applies to: 922-948, 1459-1497, 1570-1574, 4439-4448


968-985: Stuck-run auto-recovery now correctly gated on in-memory activity

Gating the “mark stuck run as failed after 5 minutes” logic on !s.runActive.Load() fixes the earlier misclassification risk for legitimately long-running automations. This should prevent transient “failed” statuses when a run is still executing and only clean up genuinely orphaned runs.


414-430: Tag inheritance plumbing is coherent across all entry points

SearchRunOptions.InheritSourceTags, the various call sites that populate it from CrossSeedAutomationSettings.InheritSourceTags, and the centralized buildCrossSeedTags helper give a consistent tagging story for RSS, seeded search, completion, and webhook/apply flows. Respecting AddCrossSeedTag == false for backwards compatibility while letting settings drive the default tag sets is a nice touch.

Also applies to: 1094-1103, 1163-1168, 2217-2245, 5321-5326, 6221-6268


2949-2952: Cross-seed matching and save-path/content-layout handling look consistent

The swap in getMatchType argument order to treat existing files as the candidate set, along with the expanded determineSavePath and the new processCrossSeedCandidate logic (skip_checking heuristics, contentLayout decisions, TMM vs explicit savepath, and the isEpisodeInPack handling) form a coherent strategy for aligning cross-seeds with existing layouts, including tricky partial-in-pack and season-pack cases. The added comments make the intent clear and should greatly reduce path/layout regressions.

Also applies to: 3025-3189, 2373-2525


4156-4290: Parallelizing ApplyTorrentSearchResults is safe and improves throughput

The refactor to process selections in parallel (buffered resultChan, WaitGroup, and ordered collation by index) is correct and preserves deterministic result ordering. Each goroutine produces exactly one selectionResult, and all shared configuration (settings) is read-only, so there’s no new data race introduced here.


5067-5093: Indexer outcome reporting is wired correctly to scheduler JobID

Collecting per-indexer add/fail counts in processSearchCandidate and reporting them via reportIndexerOutcomes using searchResp.JobID lets Jackett’s scheduler track which indexers actually produced successful adds. The explicit jobID == 0 guard avoids spurious reporting for synchronous/non-scheduler searches, and aggregating counts before reporting keeps the number of calls bounded.

Also applies to: 5123-5151, 4050-4057, 4111-4112


5540-5543: Content-based indexer filtering adds useful signal with acceptable logging

The filterIndexersByExistingContent enrichments (tracking contentMatches and logging excludedIndexers when any are dropped) provide good observability for why indexers are being filtered without changing behavior for the non-match case. This should make it much easier to debug “why didn’t we search X?” questions.

Also applies to: 5605-5612


5954-5958: Recent-results window now focuses on successful additions

Restricting state.recentResults to only result.Added == true keeps the UI’s “recent” view focused on actual cross-seed injections instead of being flooded by skips/failures. The sliding window of 10 entries is small and stable enough to be cheap to maintain.


2825-2838: TV episode special-case in content detection is a nice correctness tweak

The new branch in selectContentDetectionRelease that trusts largest-file metadata when it clearly indicates TV episodes (episode/series set) despite the torrent-level metadata saying “movie” fixes the common “season pack named like a movie year” misclassification. The additional title-match guard prevents over-eager reclassification on unrelated content.

@s0up4200 s0up4200 added this to the v1.8.0 milestone Nov 29, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (3)
internal/services/crossseed/service.go (3)

1233-1252: Search run status hides completion‑triggered runs — document or refine the intent

GetSearchRunStatus now suppresses active runs where state.opts.RequestedBy == "completion". That matches the goal of hiding internal completion flows but will also hide any future user‑initiated runs that happen to reuse "completion".

Either:

  • Document this behavior on the API (e.g., “completion runs are intentionally omitted from status”), or
  • Use a more explicit discriminator (e.g., an internal/system flag) if you eventually need user‑visible completion runs.

4454-4474: dedupCacheKey XOR signature tradeoff is acceptable and well‑documented

The XOR‑of‑xxhash approach in dedupCacheKey gives you an order‑independent signature with minimal overhead, and the comments explicitly call out the theoretical collision risk and why it’s acceptable given the 5‑minute TTL and torrent-count baked into the key. Combined with the cached duplicateMap, this is a reasonable engineering tradeoff for a best‑effort performance cache.

Also applies to: 4697-4712


6696-6994: Confirmed: Settings timeout is ignored due to context.WithoutCancel, and perpetually queued torrents lack a wall-clock bound

The recoverErroredTorrents batched recovery is sound, but two design issues remain:

  1. Ineffective 100ms settings timeout: The settingsCtx created at line 6718 with a 100ms deadline is immediately stripped by GetAutomationSettings (line 505), which calls context.WithoutCancel before applying its own 5-second timeout. The 100ms wrapper reads like an intent to perform a cheap, fast read but is completely ignored. Either drop the 100ms wrapper here or modify GetAutomationSettings to honor upstream deadlines.

  2. No absolute timeout for perpetually queued torrents: Torrents that remain queued without ever entering a checking* state (rare but possible) will loop indefinitely. The maxCheckingTime of 25 minutes only applies to torrents actively checking. Consider adding an absolute "pending in recovery workflow" wall-clock timeout (e.g., reuse maxCheckingTime or define a separate constant) and log-and-skip hashes that exceed it.

🧹 Nitpick comments (6)
internal/services/crossseed/models.go (2)

23-32: Clarify InheritSourceTags override semantics (bool vs *bool).

Struct-wise this looks fine, but using bool here means you cannot distinguish “field omitted” from “explicitly set to false” in JSON. If the intent is strictly to drive behavior from global settings (and this field is only populated internally), that’s OK; if per-request overrides are expected later, consider switching to *bool to support a tri‑state (inherit default / force on / force off).


326-336: AutobrrApplyRequest.SkipIfExists shape is good; consider documenting it.

Adding ``SkipIfExists *bool `json:"skipIfExists,omitempty"` `` mirrors the existing `CrossSeedRequest.SkipIfExists` and gives autobrr a clean way to avoid duplicate adds. From a model perspective this is solid.

I’d suggest also wiring this into the README/autobrr examples so users know the flag exists and what the default behavior is when it’s omitted.

internal/services/crossseed/service.go (3)

962-986: Stuck automation run handling now correctly guards on in‑memory activity

The new timeout block in GetAutomationStatus only marks a running DB record as failed when no in‑memory run is active, which avoids misclassifying long but legitimate runs. The 5‑minute threshold is reasonable; if you want this more discoverable, consider extracting it into a named constant (purely cosmetic).


1094-1103: InheritSourceTags and source‑specific tags are wired consistently across flows

Completion search, RSS automation, AutobrrApply, seeded search runs, and manual apply all now feed tags through CrossSeedRequest.Tags plus InheritSourceTags, with buildCrossSeedTags centralizing the merge and dedup. This keeps tag behavior consistent no matter how cross‑seeding is initiated.

One optional thought: if tags should be case-insensitive in your UX, you could normalize (e.g., strings.ToLower) inside buildCrossSeedTags before inserting into the set; otherwise current behavior (case-sensitive distinct tags) is fine.

Also applies to: 1653-1655, 2222-2243, 5293-5322, 6217-6251


4142-4286: Parallelizing ApplyTorrentSearchResults is a nice usability/perf improvement

The new goroutine-per-selection pattern with a buffered results channel and index tagging:

  • keeps the external API order‑stable,
  • limits shared state to a read‑only cached slice and immutable settings,
  • and lets long‑running downloads or cross‑seed attempts overlap.

Given typical selection counts, this is a pragmatic concurrency model; if you ever see very large selection batches, a small worker pool would be an easy follow‑up.

internal/services/crossseed/crossseed_test.go (1)

2864-3027: Well-designed mock for recovery testing.

The mockRecoverSyncManager provides flexible behavior control for testing various torrent recovery scenarios. The implementation:

  • Tracks method calls for verification
  • Supports multiple failure modes and edge cases
  • Simulates state transitions correctly

Consider making the progress threshold (line 2958) a configurable parameter rather than hardcoding 0.95, to improve test maintainability if the threshold changes.

Apply this diff to make the threshold configurable:

 type mockRecoverSyncManager struct {
 	torrents                    map[string]*qbt.Torrent
 	calls                       []string
 	recheckCompletes            bool
 	disappearAfterRecheck       bool
 	bulkActionFails             bool
 	keepInCheckingState         bool
 	failGetTorrentsAfterRecheck bool
 	setProgressToThreshold      bool
+	progressThreshold           float64
 	hasRechecked                bool
 	secondRecheckCompletes      bool
 	recheckCount                int
 }
 
 func newMockRecoverSyncManager(initialTorrents []qbt.Torrent) *mockRecoverSyncManager {
 	torrents := make(map[string]*qbt.Torrent)
 	for _, t := range initialTorrents {
 		torrent := t
 		torrents[t.Hash] = &torrent
 	}
 	return &mockRecoverSyncManager{
 		torrents:                    torrents,
 		calls:                       []string{},
 		recheckCompletes:            true,
 		disappearAfterRecheck:       false,
 		bulkActionFails:             false,
 		keepInCheckingState:         false,
 		failGetTorrentsAfterRecheck: false,
 		setProgressToThreshold:      false,
+		progressThreshold:           0.95,
 		hasRechecked:                false,
 		secondRecheckCompletes:      false,
 		recheckCount:                0,
 	}
 }

Then use torrent.Progress = m.progressThreshold at line 2958 instead of hardcoding 0.95.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3fc1abb and ca7ae1b.

📒 Files selected for processing (5)
  • README.md (1 hunks)
  • internal/services/crossseed/crossseed_test.go (6 hunks)
  • internal/services/crossseed/models.go (3 hunks)
  • internal/services/crossseed/service.go (56 hunks)
  • internal/web/swagger/openapi.yaml (0 hunks)
💤 Files with no reviewable changes (1)
  • internal/web/swagger/openapi.yaml
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
📚 Learning: 2025-11-28T20:32:30.126Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.

Applied to files:

  • internal/services/crossseed/service.go
  • internal/services/crossseed/crossseed_test.go
  • internal/services/crossseed/models.go
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.

Applied to files:

  • internal/services/crossseed/service.go
  • internal/services/crossseed/crossseed_test.go
  • internal/services/crossseed/models.go
📚 Learning: 2025-11-25T22:46:03.762Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/crossseed/service.go
  • internal/services/crossseed/crossseed_test.go
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).

Applied to files:

  • internal/services/crossseed/service.go
📚 Learning: 2025-11-27T15:33:39.007Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.

Applied to files:

  • internal/services/crossseed/service.go
  • internal/services/crossseed/models.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.

Applied to files:

  • internal/services/crossseed/service.go
  • internal/services/crossseed/crossseed_test.go
🧬 Code graph analysis (2)
internal/services/crossseed/service.go (3)
internal/models/crossseed.go (3)
  • CrossSeedRun (156-172)
  • CrossSeedRunStatusRunning (130-130)
  • CrossSeedRunStatusFailed (133-133)
internal/services/crossseed/models.go (1)
  • CrossSeedRequest (15-38)
internal/services/jackett/scheduler.go (1)
  • RateLimitPriorityBackground (66-66)
internal/services/crossseed/crossseed_test.go (4)
web/src/types/index.ts (3)
  • Torrent (204-261)
  • CrossSeedAutomationSettings (1327-1348)
  • Category (342-345)
pkg/releases/parser.go (1)
  • NewDefaultParser (29-31)
internal/services/crossseed/models.go (1)
  • CrossSeedRequest (15-38)
internal/services/crossseed/release_cache.go (1)
  • NewReleaseCache (13-15)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (19)
internal/services/crossseed/models.go (1)

194-202: JobID on TorrentSearchResponse aligns with async scheduler/outcome tracking.

Exposing ``JobID uint64 `json:"jobId,omitempty"` `` here is consistent with the Torznab scheduler design and the cross-seed outcome tracking model, and matches the pattern where async cross-seed flows receive real job IDs while synchronous/test paths use `0`. Based on learnings, this looks correct and non-breaking.

README.md (1)

627-659: Cross‑Seed apply/tagging and global settings docs look consistent and clear.

The new examples for instanceIds, the description of the /api/cross-seed/apply pipeline, and the “Tagging” / “Global Cross-Seed Settings → Source Tagging” sections align well with the updated models (tags, per-source tag groups, inherit‑source‑tags). This should give autobrr users and UI users an accurate mental model of how scoping and tagging work.

internal/services/crossseed/service.go (10)

75-79: Dedup cache structure and read‑only contract look good

The dedupCacheEntry type, dedupCache field wiring, dedupCacheKey generation, and the cache-hit/insert paths in deduplicateSourceTorrents all line up: you avoid extra allocations on cache hits and clearly document that returned slices/maps are read-only. Current callers (refreshSearchQueue and propagateDuplicateSearchHistory) respect that contract, so this is a nice perf win without compromising correctness.

Also applies to: 262-265, 313-314, 333-334, 4454-4474, 4491-4513, 4706-4712


2371-2457: Cross‑seed add strategy (TMM, contentLayout, recheck/resume) is coherent

The new processCrossSeedCandidate flow combines:

  • skip_checking=true only when we’re confident all data exists (no extra source files),
  • contentLayout inference from sourceRoot/candidateRoot with special handling for episode‑in‑pack cases,
  • TMM enabled only when a category is used and not episode‑in‑pack,
  • explicit recheck + queueRecheckResume when alignment or extra files require verification, and immediate resume only for safe “perfect match” cases.

This significantly simplifies previous save‑path logic while still covering the tricky episode/season‑pack scenarios and avoids rechecks when they’re unnecessary. Nice refactor.

Also applies to: 2482-2492, 2495-2522, 2535-2550


2558-2589: Recheck‑resume worker and queue design match stated invariants

queueRecheckResume computes a per‑torrent resume threshold from the size‑tolerance setting and feeds a bounded recheckResumeChan; recheckResumeWorker then:

  • batches hashes per instance,
  • uses a single ticker with recheckPollInterval and recheckAPITimeout,
  • resumes only when progress ≥ threshold and no longer checking, and
  • evicts entries on absolute timeout or non‑improving completion.

Given the learning that this worker is intentionally process‑lifetime and keyed by hash only under a single‑instance background search constraint, this implementation looks aligned with the design.

Based on learnings, this hash-only keying and long-lived worker are acceptable under current constraints.

Also applies to: 2591-2727


2823-2836: Largest‑file TV override in content detection is a good edge‑case fix

The special case in selectContentDetectionRelease that trusts an episode‑annotated largest file (when titles align and torrent metadata says “movie”) should correct misclassified season packs where filenames clearly indicate TV episodes. This is a targeted change with low risk to other content types.


3834-3839: Jackett job ID propagation and outcome reporting are well‑plumbed

Plumbing searchResp.JobID through TorrentSearchResponse.JobID and then feeding it into reportIndexerOutcomes lets you attribute adds/failures per indexer back to the scheduler job, while the jobID == 0 guard avoids touching history for synchronous/test flows.

The add/fail counting logic in processSearchCandidate looks correct and ensures each indexer gets a single summarized outcome per job.

Based on learnings, skipping outcome reporting for jobID=0 is consistent with how synchronous paths are modeled.

Also applies to: 4055-4055, 4110-4110, 5058-5088, 5119-5147


4415-4444: Search run finalization retry helper is robust

Using updateSearchRunWithRetry(context.Background(), state.run) in finalizeSearchRun ensures you persist terminal state even if the original run context was canceled, with bounded retries and reasonable logging on failure. This is a good reliability improvement over single‑shot updates.


5487-5554: Content‑based indexer filtering and telemetry look solid

filterIndexersByExistingContent now:

  • builds a richer contentMatches list for UI/telemetry,
  • records explicit excludedIndexers reasons, and
  • logs a concise summary when any indexers are filtered out.

That should make it much easier to reason about why certain indexers were skipped while keeping the default path (no indexer info or cache miss) safe by falling back to the original list.

Also applies to: 5601-5607


5948-5959: Recent search results now show only successful additions

Updating appendSearchResult so recentResults tracks only result.Added == true gives the UI a much clearer sliding window of actual additions instead of being diluted by skips/failures. The cap of 10 entries is a sensible default.


6217-6251: buildCrossSeedTags implementation matches the documented semantics

The helper correctly:

  • dedups tags via a set,
  • preserves sourceTags ordering,
  • and only pulls in matched‑torrent tags when inheritSourceTags is true.

This makes the various call sites easier to reason about and avoids subtle duplication bugs when combining automation/webhook/seeded search tags with existing qBittorrent tags.


2554-2556: Verify hash normalization consistency between the crossseed service and the qbittorrent sync manager

A codebase search was run to confirm both packages follow the same hash-casing convention:

#!/bin/bash
# Search for normalizeHash implementations and hash normalization in both packages

echo "=== normalizeHash in crossseed package ==="
rg -n "normalizeHash" internal/services/crossseed -C3

echo ""
echo "=== Hash normalization in qbittorrent sync manager ==="
rg -n "GetTorrentFilesBatch|GetTorrents" internal/qbittorrent -C5

echo ""
echo "=== ToUpper/ToLower usage in qbittorrent package ==="
rg -n "ToUpper|ToLower" internal/qbittorrent -C2

echo ""
echo "=== Check for normalizeHash in qbittorrent ==="
rg -n "normalizeHash" internal/qbittorrent -C3

echo ""
echo "=== Hashes field/parameter usage in qbittorrent ==="
rg -n "Hashes\s*:" internal/qbittorrent -C5

internal/services/crossseed/crossseed_test.go (7)

25-25: LGTM! Import addition is appropriate.

The releases package import is correctly added and used by tests that construct ReleaseCache for parsing release names.


232-1185: Excellent test coverage expansion.

The massively expanded TestDetermineSavePath now covers a comprehensive matrix of scenarios including:

  • TV shows: season packs, individual episodes, different naming conventions
  • Movies: single files, folders, collections
  • Edge cases: unicode characters, long paths, mixed separators, empty paths, deep nesting prevention

The test cases are well-documented with clear descriptions. The addition of contentLayout, matchType, and file lists aligns with the broader content layout support changes in the PR.


1187-1284: LGTM! Well-structured integration tests.

Both TestPartialInPackIntegration and TestPartialInPackMovieCollectionIntegration effectively verify the full chain from matching to save path determination for partial-in-pack scenarios. The tests:

  • Use t.Parallel() for efficient execution
  • Clearly document the expected behavior in assertions
  • Verify specific behavior changes introduced in this PR (TV episodes into season packs, movies into collections)

1339-1455: LGTM! Tag inheritance test coverage is comprehensive.

The updated TestCrossSeed_CategoryAndTagPreservation properly covers the new inheritSourceTags functionality with test cases for:

  • Tag inheritance enabled with source tags
  • Tag inheritance disabled
  • Empty source tags
  • Indexer category usage

The test structure is clear and all scenarios are properly validated.


2443-2445: LGTM! Appropriate test double implementation.

The GetAppPreferences method addition to fakeSyncManager provides the necessary interface implementation for tests. The default "Original" content layout is a reasonable test value.


2587-2862: Excellent comprehensive content layout testing.

TestDetermineSavePathContentLayoutScenarios provides thorough coverage of save path determination across different qBittorrent content layout settings (Original, Subfolder, NoSubfolder). The test correctly verifies that:

  • Save path logic remains consistent across all layout types
  • Single-file TV episodes use ContentPath (not SavePath + Subfolder)
  • Partial-in-pack scenarios work correctly
  • Collection and same-content-type scenarios are handled properly

This comprehensive testing significantly reduces the risk of layout-specific regressions.


3028-3226: LGTM! Comprehensive recovery test coverage.

The TestRecoverErroredTorrents_* test suite provides excellent coverage of the torrent recovery functionality:

  • No-op case (no errored torrents)
  • Single and batch recovery scenarios
  • Different error states (Error, MissingFiles)
  • Context cancellation handling
  • Mixed states (selective recovery)
  • Empty input handling

The tests properly verify batching behavior, selective processing, and that non-errored torrents are not affected. Test structure is clear and assertions are appropriate.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
web/src/types/index.ts (1)

1036-1064: Search history types align with backend and add outcome tracking

SearchHistoryEntry and SearchHistoryResponse line up with the backend definitions (IDs as numbers, time fields as strings, status union covering success|error|skipped|rate_limited) while adding outcome/addedCount for cross-seed outcome tracking. As a minor enhancement, you could narrow source to a string union (e.g. "memory" | "database") once the backend values are stable.

internal/api/handlers/crossseed.go (1)

168-233: applyAutomationSettingsPatch correctly applies new tag fields; consider PUT behaviour

The patch application logic for the new tag fields (RSSAutomationTags, SeededSearchTags, CompletionSearchTags, WebhookTags, InheritSourceTags) is correct: nil leaves existing values untouched, while provided slices (including empty) and booleans overwrite settings as expected.

One design consideration: automationSettingsRequest / UpdateAutomationSettings (PUT) still don’t expose these tagging fields, so tags can only be managed via PATCH. If that’s not intentional, consider extending the PUT request and constructor for CrossSeedAutomationSettings to include the same fields so a full replacement can also control tags; otherwise, a brief comment noting “tags are PATCH-only” might prevent confusion later.

internal/services/jackett/service.go (2)

2399-2428: Escalating backoff on rate limits is coherent, but duration plumbing is now dead code

handleRateLimit now ignores the provided cooldown duration and instead:

  • Calls s.rateLimiter.RecordFailure(idx.ID) to obtain an escalating cooldown.
  • Persists and logs that cooldown, and marks it in the rate limiter for future dispatch-time skips.

This matches the new backoff design, but it also means:

  • The time.Duration parameter on handleRateLimit is unused (_ time.Duration), and
  • detectRateLimit’s first return value (including extractRetryAfter parsing of Retry-After) is no longer used anywhere.

You might want to either:

  • Simplify detectRateLimit to just return a bool (or rename/remove extractRetryAfter and the unused duration), or
  • Feed Retry-After into RecordFailure (e.g., as a floor/ceiling) if you intend to respect server hints.

Right now this is just dead/unused data flow, not a functional bug.


3706-3763: ActivityStatus surface is useful; consider stabilizing cooldown ordering

IndexerCooldownStatus and ActivityStatus, plus GetActivityStatus, provide a helpful summary of:

  • Current scheduler state (when present), and
  • Per-indexer cooldowns derived from rateLimiter.GetCooldownIndexers(), with indexer names resolved via a single indexerStore.List call.

The implementation is best-effort (it tolerates List errors), and it correctly calls ensureRateLimiterState() to restore persisted cooldowns before reading.

One small optional enhancement: status.CooldownIndexers is built from a map iteration, so its order is nondeterministic. If the UI benefits from stable ordering, you might sort this slice by CooldownEnd or IndexerName before returning it.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ca7ae1b and d7806f0.

📒 Files selected for processing (9)
  • README.md (1 hunks)
  • cmd/qui/main.go (1 hunks)
  • go.mod (1 hunks)
  • internal/api/handlers/crossseed.go (3 hunks)
  • internal/services/jackett/models.go (2 hunks)
  • internal/services/jackett/service.go (21 hunks)
  • internal/web/swagger/openapi.yaml (0 hunks)
  • web/src/lib/api.ts (3 hunks)
  • web/src/types/index.ts (4 hunks)
💤 Files with no reviewable changes (1)
  • internal/web/swagger/openapi.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
  • cmd/qui/main.go
  • go.mod
  • web/src/lib/api.ts
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
📚 Learning: 2025-11-27T15:33:39.007Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.

Applied to files:

  • internal/services/jackett/models.go
  • internal/services/jackett/service.go
📚 Learning: 2025-11-28T20:32:30.126Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.

Applied to files:

  • internal/services/jackett/service.go
📚 Learning: 2025-11-25T22:46:03.762Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.

Applied to files:

  • internal/services/jackett/service.go
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.

Applied to files:

  • internal/services/jackett/service.go
🧬 Code graph analysis (2)
web/src/types/index.ts (3)
internal/services/jackett/scheduler.go (3)
  • SchedulerTaskStatus (945-953)
  • SchedulerJobStatus (956-960)
  • SchedulerStatus (963-970)
internal/services/jackett/service.go (1)
  • IndexerCooldownStatus (3707-3712)
internal/services/jackett/search_history.go (2)
  • SearchHistoryEntry (14-43)
  • SearchHistoryResponse (46-50)
internal/services/jackett/service.go (5)
internal/services/jackett/search_history.go (8)
  • SearchHistoryBuffer (53-60)
  • IndexerOutcomeStore (233-240)
  • NewSearchHistoryBuffer (63-72)
  • NewHistoryRecorder (199-203)
  • NewIndexerOutcomeStore (243-252)
  • SearchHistoryResponseWithOutcome (315-319)
  • SearchHistoryEntryWithOutcome (308-312)
  • SearchHistoryEntry (14-43)
internal/services/filesmanager/service.go (1)
  • Service (20-25)
internal/models/torznab_indexer.go (1)
  • TorznabIndexer (64-81)
internal/services/jackett/scheduler.go (3)
  • SubmitRequest (340-346)
  • JobCallbacks (330-337)
  • SchedulerStatus (963-970)
internal/buildinfo/buildinfo.go (1)
  • String (29-31)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests
🔇 Additional comments (17)
README.md (2)

495-508: Cross-seed apply payload + tagging docs look consistent

The instanceIds scoping example and the Tagging paragraph correctly describe the /api/cross-seed/apply behaviour (global vs scoped instances, webhook tags override, inherit-source-tags option). No issues from an API/UX perspective.


516-527: Global rules / Source Tagging section matches new configuration model

The “Global Cross-Seed Settings” and “Source Tagging” bullets line up with the new per-source tag fields (RSS, Seeded Search, Completion Search, Webhook, inherit source tags) and clearly explain how ignore patterns and tagging interact. Wording is clear and technically accurate.

web/src/types/index.ts (3)

998-1034: Scheduler and indexer activity types correctly mirror backend structs

SchedulerTaskStatus, SchedulerJobStatus, SchedulerStatus, IndexerCooldownStatus, and IndexerActivityStatus fields, names, and shapes match the Go structs and JSON tags (including createdAt/cooldownEnd as strings and scheduler being optional). This will serialize/deserialize cleanly for the activity UI.


1338-1367: Cross-seed automation per-source tagging types are consistent

The added rssAutomationTags, seededSearchTags, completionSearchTags, webhookTags, and inheritSourceTags fields (and their optional counterparts in the patch type) match the updated backend CrossSeedAutomationSettings model and README wording. This gives the UI full control over source-specific tagging without breaking existing settings.


1439-1446: CrossSeedSearchStatus shape remains coherent after change

CrossSeedSearchStatus (including recentResults and nextRunAt) is still consistent with the search run/status handlers; the touched line doesn’t introduce any type-level issues.

internal/api/handlers/crossseed.go (2)

53-73: Patch request struct correctly models optional source-specific tags

Adding RSSAutomationTags, SeededSearchTags, CompletionSearchTags, WebhookTags as *[]string and InheritSourceTags as *bool matches the existing optional-field pattern (optionalString, optionalInt) and cleanly distinguishes “field absent” from “set to empty/false”.


139-158: isEmpty now properly accounts for completion and tag fields

The updated automationSettingsPatchRequest.isEmpty check ensures that patches which only change completion settings or any of the tag-related fields are treated as non-empty, avoiding spurious 400 responses when only those sections are updated.

internal/services/jackett/models.go (1)

14-15: Release name and job ID metadata wiring looks consistent

The new ReleaseName on TorznabSearchRequest and JobID on SearchResponse have sane JSON tags (release_name, jobId) and align with the history/outcome types (SearchHistoryEntry.ReleaseName, SearchHistoryEntry.JobID, SearchHistoryEntryWithOutcome). This should make correlating requests, history, and outcomes straightforward without impacting existing search semantics.

Also applies to: 50-51

internal/services/jackett/service.go (9)

84-89: Search history and indexer outcome tracking are cleanly integrated and nil-safe

The additions of searchHistory and indexerOutcomes on Service, the WithSearchHistory/WithIndexerOutcomes options, and the GetSearchHistory/GetSearchHistoryStats/ReportIndexerOutcome methods are well factored:

  • Options cleanly gate the features; all call sites check for nil before use, so disabling them is safe.
  • Wiring the HistoryRecorder into searchScheduler only when history is configured avoids unnecessary overhead.
  • GetSearchHistory gracefully returns an empty response when history is disabled and correctly merges outcomes by (jobID, indexerID) when present, matching the SearchHistoryEntry/IndexerOutcomeStore shapes in search_history.go. Based on learnings, this matches the intended outcome-tracking path for async cross-seed flows.

No functional issues spotted here.

Also applies to: 441-459, 461-512
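The nil-safe gating described above follows the familiar functional-options shape; a minimal sketch (names echo the review, but the bodies are illustrative, not the service's actual code):

```go
package main

import "fmt"

// historyBuffer stands in for the real SearchHistoryBuffer.
type historyBuffer struct{ entries []string }

type Service struct {
	searchHistory *historyBuffer // nil when the feature is disabled
}

type Option func(*Service)

// WithSearchHistory wires in the optional subsystem; omitting the option
// leaves the field nil.
func WithSearchHistory(h *historyBuffer) Option {
	return func(s *Service) { s.searchHistory = h }
}

func NewService(opts ...Option) *Service {
	s := &Service{}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

// record nil-checks before use, so a disabled feature is simply a no-op.
func (s *Service) record(entry string) {
	if s.searchHistory == nil {
		return
	}
	s.searchHistory.entries = append(s.searchHistory.entries, entry)
}

func main() {
	off := NewService()
	off.record("ignored") // safe no-op: history disabled

	on := NewService(WithSearchHistory(&historyBuffer{}))
	on.record("kept")
	fmt.Println(len(on.searchHistory.entries)) // 1
}
```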


149-157: Release name propagation into searchContext is straightforward and consistent

Threading req.ReleaseName into searchContext.releaseName at construction time gives the scheduler/history recorder access to the original full release name for debugging/history without affecting query semantics. The field is optional and read-only metadata, so this change is low risk and aligns with SearchHistoryEntry.ReleaseName.

Also applies to: 590-595


280-288: Scheduler construction wired to shared rate limiter looks correct

NewService now constructs searchScheduler via newSearchScheduler(rl, defaultMaxWorkers), sharing the same RateLimiter instance used elsewhere in the service. This centralizes rate-limit state between direct searches and scheduled jobs, which is necessary for the new dispatch-time rate limiting and escalating backoff to behave consistently.


309-323: Scheduler bridging and jobID-aware result aggregation look sound

The new executeQueuedSearch + searchIndexersWithScheduler flow appears well-structured:

  • Sync/test/fallback paths still execute immediately and invoke resultCallback with jobID=0, which is acceptable per the documented design that outcome tracking is only used for async cross-seed/scheduler-backed flows. Based on learnings.
  • When searchScheduler is present, Submit is called with an ExecFn that runs a single-indexer search, and JobCallbacks bridge the scheduler’s per-indexer callbacks into the existing aggregate resultCallback.
  • The aggregation code is concurrency-safe (mu guarding allResults, coverage, failures, lastErr), skips RateLimitWaitError as a non-failure, and only surfaces an error when all indexers fail with non-rate-limit errors.
  • Coverage is normalized via coverageSetToSlice, so callers get deterministic, sorted indexer-ID coverage.

This is a solid adaptation of legacy semantics onto the new job-based scheduler.

Also applies to: 325-416


654-747: JobID propagation through performSearch response paths is coherent

The updated resultCallback in performSearch:

  • Propagates jobID into both the paginated SearchResponse and the full-response copy used for cache persistence, so job-aware consumers (e.g., cross-seed outcome tracking/history views) can correlate responses with scheduler jobs.
  • Preserves existing semantics for cache-only and error paths, with Partial still driven by deadline handling and cacheSig logic unchanged.
  • Correctly uses jobID from the scheduler-backed path and 0 from sync/fallback paths, matching the intended separation between async cross-seed tracking and interactive/test flows. Based on learnings.

No issues found with the updated error handling or caching integration.


805-829: Recent() now benefits from scheduler and job IDs without changing behavior

The Recent method’s new resultCallback(jobID, ...):

  • Reuses the scheduler pipeline via executeQueuedSearch, aligning behavior with normal searches.
  • Surfaces jobID on SearchResponse, enabling activity/history correlation if desired.
  • Maintains prior semantics for partial results (driven by deadline or errors) and logging.

This is a clean, non-breaking enhancement.


1708-1710: Rate-limiting responsibilities are clearly separated and escalation reset is correct

The new comments and behavior in executeIndexerSearch:

  • Clarify that dispatch-time rate limiting is now fully handled by the scheduler (via NextWait() and MaxWait), with BeforeRequest removed.
  • Use s.rateLimiter.RecordSuccess(idx.ID) on successful searches to reset the escalating backoff state, which is essential for the new failure-based cooldown strategy described in the PR objectives.

This is consistent with the new scheduler/backoff design; no functional problems spotted.

Also applies to: 1761-1763


1908-1919: runIndexerSearch correctly adapts to the new executeIndexerSearch signature

Updating runIndexerSearch to call:

result := s.executeIndexerSearch(ctx, idx, params, meta, indexerExecOptions{})

preserves previous behavior (no per-indexer activity logging in this path) while matching the new indexerExecOptions parameter. Coverage and error handling logic downstream are unchanged.


3128-3157: contentType.String provides a clear canonical representation

The new contentType.String() covers all defined enum values and returns sensible, stable string identifiers ("movie", "tv", "music", "book", etc., with "unknown" as a default). This is useful for logging, history, and any APIs that want human-readable content types and matches the existing contentType cases in searchModeForContentType / category mapping.
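The shape of such a method is the standard String() over an int-backed enum; a hedged sketch, with the value set assumed from the examples quoted above:

```go
package main

import "fmt"

// contentType mirrors the style of the reviewed enum; the constants here are
// an assumption based on the values named in the review comment.
type contentType int

const (
	contentUnknown contentType = iota
	contentMovie
	contentTV
	contentMusic
	contentBook
)

// String returns a stable, human-readable identifier, defaulting to
// "unknown" for any unrecognized value.
func (c contentType) String() string {
	switch c {
	case contentMovie:
		return "movie"
	case contentTV:
		return "tv"
	case contentMusic:
		return "music"
	case contentBook:
		return "book"
	default:
		return "unknown"
	}
}

func main() {
	// fmt uses the Stringer interface automatically.
	fmt.Println(contentMovie, contentType(99)) // movie unknown
}
```

Because fmt's verbs pick up the Stringer implementation implicitly, log lines and history entries get the canonical names for free.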

@s0up4200 s0up4200 changed the title refactor(jackett): replace blocking rate limiter with async job-based scheduler refactor(cross-seed): replace blocking rate limiter with async job-based scheduler Nov 29, 2025
@s0up4200 s0up4200 merged commit b8847db into main Nov 29, 2025
12 checks passed
@s0up4200 s0up4200 deleted the fix/rss-seeded-search-starvation-and-memory branch November 29, 2025 14:31