Walkthrough

Centralized Torznab scheduler, in-memory search history/outcomes, activity APIs/UI, and source-specific cross-seed tagging added; the legacy per-indexer rate limiter was removed. The qBittorrent `FilesManager` batch API was extended with shared read-only slice semantics. Cross-seed gained dedup, recheck/resume scaffolding, expanded `determineSavePath` logic, and many tests/UI updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant API as Torznab API
    participant Service as Jackett Service
    participant Scheduler
    participant Indexer
    participant History
    Client->>API: POST /torznab/search (params, optional releaseName)
    API->>Service: performSearch(..., releaseName, callbacks)
    Service->>Scheduler: SubmitRequest(ExecFn, Meta, JobCallbacks)
    Scheduler->>Scheduler: compute per-indexer NextWait & priority
    alt wait exceeds max for priority
        Scheduler->>History: Record(entry status=rate_limited)
        Scheduler->>Service: JobCallbacks.OnComplete(jobID, indexer, err=rate_limited)
    else allowed to run
        Scheduler->>Indexer: ExecFn(ctx, indexers, params, meta)
        Indexer-->>Scheduler: results / error
        Scheduler->>History: Record(SearchHistoryEntry)
        Scheduler->>Service: JobCallbacks.OnComplete(jobID, indexer, results, coverage, nil)
        Scheduler->>Service: JobCallbacks.OnJobDone(jobID)
    end
    Service->>Service: persist run/outcome (with retry helpers)
    API-->>Client: HTTP response (JobID/status)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~90+ minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning) · ✅ Passed checks (2 passed)
Actionable comments posted: 0
🧹 Nitpick comments (5)
internal/services/jackett/ratelimiter.go (1)
111-124: Cooldown cap behavior vs `RateLimitWaitError.Wait` semantics

Capping the stored cooldown to `cfg.MaxWait` here is a nice way to avoid persisting very long blocks that could cascade across priorities. One subtlety: `RateLimitWaitError.Wait` still reports the full computed `wait`, so any caller that uses `err.Wait` to schedule future retries/cooldowns will believe the indexer is blocked for longer than the limiter actually enforces.

If `RateLimitWaitError` is only used for logging/metrics, this is fine. If it feeds external scheduling (e.g., RSS job backoff), consider either returning `min(wait, cfg.MaxWait)` in `Wait` or documenting that `Wait` is the theoretical requirement while the internal cooldown is capped at `MaxWait`.

internal/qbittorrent/sync_manager.go (2)
35-45: Hash normalization doc still mentions lowercase hex

The `FilesManager` comment still suggests callers pass hashes normalized as "e.g. lowercase hex", but elsewhere (e.g., crossseed's `normalizeHash`) the canonical form is uppercase, and prior discussion pointed to standardizing on uppercase for inter-service keys.

Given `canonicalizeHash` currently lowercases internally, behavior is correct either way, but it would be good to align the documentation and cross-service conventions on a single canonical form (likely uppercase) to avoid confusion for implementers of `FilesManager`. For example:

```diff
- // Callers must pass hashes already trimmed/normalized (e.g. lowercase hex)
+ // Callers must pass hashes already trimmed/normalized (e.g. uppercase hex)
```

Optional, but helps keep the interface contract consistent across packages. Based on learnings, this was previously called out as a desired change.
1181-1337: Clarify GetTorrentFilesBatch comments and aliasing contract

The new batching logic generally looks solid and the "one clone per network fetch" optimization is nice. A couple of nits around comments vs actual behavior and the aliasing trade-off:

- Lines 1275–1277 say "The original API response will be used for caching… passing the original to cache before returning to caller," but the code clones into `callerCopy` and never stores `*files` anywhere. The cache later receives the entries from `filesByHash`, i.e. the same `callerCopy` slices.
- Lines 1292–1294 then say "We pass filesByHash entries directly – callers receive the same slice reference but are expected not to mutate," which matches the code for fresh fetches, but contradicts the earlier comment and is only true for the initial miss path (cache hits are cloned in the earlier `GetCachedFilesBatch` block).

So the effective behavior is:

- Fresh fetch: one clone per torrent; that clone is shared between the result map and `FilesManager`'s `CacheFilesBatch`.
- Subsequent hits: `GetCachedFilesBatch` data is cloned before being put into `filesByHash`, so callers get isolated slices.

This is a reasonable allocation/caching trade-off, but the comments should be updated to describe the current semantics clearly (and probably drop the "original API response" phrasing). If you ever decide you want full cache isolation as well, a small follow-up would be to keep a second map keyed by hash with the original `*files` for `CacheFilesBatch`, while still returning `callerCopy` to callers.

As-is, I'd suggest tightening the comments to: (a) explicitly state that fresh fetches share slices between caller and cache and (b) remind callers that `qbt.TorrentFiles` must be treated as read-only.

internal/services/crossseed/service.go (2)
3991-4002: Cache hit returns shared slice references

The cached `entry.deduplicated` and `entry.duplicateHashes` are returned directly without copying. If any caller modifies these slices, the cached data would be corrupted.

Currently, callers (`refreshSearchQueue`) appear to use these in a read-only manner, so this is likely safe. However, for defensive programming, consider returning copies as done elsewhere in this codebase (e.g., `cloneTorrentFiles`).

```diff
 if entry, ok := s.dedupCache.Get(cacheKey); ok && entry != nil {
 	log.Trace().
 		Int("instanceID", instanceID).
 		Int("cachedCount", len(entry.deduplicated)).
 		Msg("[CROSSSEED-DEDUP] Using cached deduplication result")
-	return entry.deduplicated, entry.duplicateHashes
+	// Return copies to prevent cache corruption if callers modify slices
+	dedupCopy := append([]qbt.Torrent(nil), entry.deduplicated...)
+	hashCopy := make(map[string][]string, len(entry.duplicateHashes))
+	for k, v := range entry.duplicateHashes {
+		hashCopy[k] = append([]string(nil), v...)
+	}
+	return dedupCopy, hashCopy
 }
```
3948-3969: Sampling cache strategy is vulnerable to torrent reordering, causing potential cache misses

The sampling approach (first, middle, last hashes) will generate different cache keys if the torrent list reorders between calls, even when the underlying data is identical. Since `GetTorrents()` returns torrents from the underlying qbittorrent library without sorting, and there's no explicit ordering guarantee documented, subsequent calls could return the same torrents in different orders.

Impact: This causes cache misses and performance degradation when torrents reorder, though data correctness is unaffected. False cache hits are unlikely (they would require identical sampled hashes with different torrents), but remain possible if the torrent ordering is non-deterministic.

Recommendation: Sort the torrents slice before passing to `dedupCacheKey()` to ensure consistent cache keys, or consider a more robust cache key strategy (e.g., hashing all torrent hashes or using a set-based approach).
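A minimal sketch of the sort-based recommendation. Names here (`dedupKey`) are illustrative, not the real `dedupCacheKey()`, and FNV-1a stands in for whatever fast non-cryptographic hash the service actually uses:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
	"strings"
)

// dedupKey hashes a sorted, case-normalized copy of the torrent hashes so
// the resulting key is stable under reordering and casing differences.
func dedupKey(hashes []string) uint64 {
	sorted := make([]string, len(hashes))
	for i, h := range hashes {
		sorted[i] = strings.ToUpper(h)
	}
	sort.Strings(sorted)
	f := fnv.New64a()
	for _, h := range sorted {
		f.Write([]byte(h))
		f.Write([]byte{0}) // separator so concatenations can't collide
	}
	return f.Sum64()
}

func main() {
	a := dedupKey([]string{"aaa", "bbb", "ccc"})
	b := dedupKey([]string{"CCC", "AAA", "BBB"}) // reordered, different case
	fmt.Println(a == b)                          // true: key is order-independent
}
```

The sort costs O(n log n) per key computation, which is usually negligible next to the file fetches the cache is protecting.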
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- internal/qbittorrent/sync_manager.go (2 hunks)
- internal/services/crossseed/service.go (9 hunks)
- internal/services/jackett/ratelimiter.go (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
- internal/services/crossseed/service.go
- internal/qbittorrent/sync_manager.go
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/qbittorrent/sync_manager.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (3)
web/src/types/index.ts (1)

- Torrent (204-261)

internal/services/jackett/service.go (1)

- WithSearchPriority (268-273)

internal/services/jackett/ratelimiter.go (1)

- RateLimitPriorityBackground (38-38)
🔇 Additional comments (5)
internal/services/crossseed/service.go (5)
72-76: LGTM - Cache entry struct is well-defined.

The struct appropriately captures both the deduplicated torrent list and the mapping of representative hashes to their duplicates for propagating cooldowns.
256-259: Cache field declaration looks good.

The 5-minute TTL aligns with the existing `torrentFilesCacheTTL` and is appropriate for deduplication results that may change as torrents are added/removed.
4194-4200: Cache storage after deduplication is correct.

The newly computed `deduplicated` slice and `duplicateMap` are owned by this function at this point, so storing them in the cache is safe.
4459-4473: RSS wait loop implementation addresses starvation concern.

The 5-minute cap with 500ms polling prevents indefinite blocking while allowing RSS automation to complete. The labeled break and context handling are correct.
One consideration: if RSS automation runs are very frequent, seeded searches could still wait up to 5 minutes each iteration. This is an acceptable trade-off given RSS has higher priority, and the timeout prevents complete starvation.
4484-4484: Background priority correctly prioritizes RSS over seeded searches.

Using `RateLimitPriorityBackground` for seeded torrent searches ensures they yield to higher-priority RSS automation requests in the rate limiter, complementing the wait loop above.
Actionable comments posted: 0
🧹 Nitpick comments (3)
internal/qbittorrent/sync_manager.go (1)
1275-1282: Single-clone sharing between caller and cache is correct but relies on immutability

The new pattern (clone once into `callerCopy` and then store that same slice both in `filesByHash` and in `fetchedFiles` → `CacheFilesBatch`) achieves the desired allocation reduction and is concurrency-safe given the `errgroup` + mutex usage.

The trade-off is that the first caller's `qbt.TorrentFiles` slice and the cache now share the same backing array. That's fine as long as:

- Callers never mutate the returned `TorrentFiles` (treat as read-only), and
- `FilesManager` implementations also treat cached slices as immutable (no in-place edits, appends, or re-slicing).

You've documented this locally in the comments; consider also spelling out the "read-only `qbt.TorrentFiles`" contract in the `FilesManager` interface or `GetTorrentFilesBatch` doc so future implementers/callers don't accidentally violate it.

Also applies to: 1291-1299
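A small demonstration of the aliasing hazard behind this contract. Types and names are simplified stand-ins, not the real qui API:

```go
package main

import "fmt"

type TorrentFile struct{ Name string }
type TorrentFiles []TorrentFile

// batchFetch clones each API response once and stores that same clone in
// both the caller's result map and the cache map, mirroring the
// clone-once pattern: both sides must treat the slice as read-only.
func batchFetch(apiResp map[string]TorrentFiles) (result, cache map[string]TorrentFiles) {
	result = make(map[string]TorrentFiles)
	cache = make(map[string]TorrentFiles)
	for hash, files := range apiResp {
		callerCopy := append(TorrentFiles(nil), files...) // one clone per fetch
		result[hash] = callerCopy
		cache[hash] = callerCopy // same backing array as result[hash]
	}
	return result, cache
}

func main() {
	res, cached := batchFetch(map[string]TorrentFiles{"ABC": {{Name: "a.mkv"}}})
	// Mutating the caller's slice is visible in the cache: the exact hazard
	// the read-only convention is guarding against.
	res["ABC"][0].Name = "mutated"
	fmt.Println(cached["ABC"][0].Name) // mutated
}
```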
internal/services/crossseed/service.go (2)
3949-3966: XOR-based signature has theoretical collision properties

Using XOR for the signature is elegant for order-independence, but note that:

- Duplicate hashes cancel out (`a ^ a = 0`), though torrents should have unique hashes.
- Different torrent sets can produce identical signatures if their XOR values collide.

Given the short 5-minute TTL and the count being part of the key, the practical collision risk is low. If you observe unexpected cache hits in the future, this would be a place to revisit.
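Both properties are easy to see in a toy version of the signature. FNV-1a stands in for xxhash here so the sketch stays stdlib-only; the XOR behavior is identical:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// sig computes an order-independent signature by XOR-ing per-hash digests,
// mirroring the dedupCacheKey approach discussed above.
func sig(hashes []string) uint64 {
	var s uint64
	for _, h := range hashes {
		f := fnv.New64a()
		f.Write([]byte(h))
		s ^= f.Sum64()
	}
	return s
}

func main() {
	// XOR is commutative and associative, so order doesn't matter.
	fmt.Println(sig([]string{"AAA", "BBB"}) == sig([]string{"BBB", "AAA"})) // true
	// Hazard: duplicate entries cancel out (a ^ a = 0).
	fmt.Println(sig([]string{"AAA", "AAA"}) == 0) // true
}
```

Including the element count in the cache key, as the implementation does, is what keeps the duplicate-cancellation case from aliasing a smaller set.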
3991-3998: Cache aliasing: callers must not modify returned slices

The comment correctly warns that returned slices are cache-backed, but this creates a fragile contract. If any caller (current or future) modifies `entry.deduplicated` or `entry.duplicateHashes`, it will corrupt the cache.

Consider returning defensive copies for safety, or document this prominently in the function's doc comment. The current approach is acceptable for performance if callers are carefully audited, but a defensive clone would prevent subtle bugs:

```diff
 log.Trace().
 	Int("instanceID", instanceID).
 	Int("cachedCount", len(entry.deduplicated)).
 	Msg("[CROSSSEED-DEDUP] Using cached deduplication result")
-// IMPORTANT: Returned slices are cache-backed. Do not modify.
-return entry.deduplicated, entry.duplicateHashes
+// Return defensive copies to prevent cache corruption
+dedupCopy := make([]qbt.Torrent, len(entry.deduplicated))
+copy(dedupCopy, entry.deduplicated)
+hashCopy := make(map[string][]string, len(entry.duplicateHashes))
+for k, v := range entry.duplicateHashes {
+	hashCopy[k] = append([]string(nil), v...)
+}
+return dedupCopy, hashCopy
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- internal/qbittorrent/sync_manager.go (3 hunks)
- internal/services/crossseed/service.go (10 hunks)
- internal/services/jackett/ratelimiter.go (1 hunks)
- internal/services/jackett/ratelimiter_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/services/jackett/ratelimiter.go
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/qbittorrent/sync_manager.go
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
- internal/qbittorrent/sync_manager.go
- internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
internal/services/jackett/service.go (1)
- WithSearchPriority (268-273)

internal/services/jackett/ratelimiter.go (1)

- RateLimitPriorityBackground (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (7)
internal/services/jackett/ratelimiter_test.go (1)
92-95: LGTM! Stricter assertion validates the capping behavior.

The change from `waitErr.Wait <= waitErr.MaxWait` to `waitErr.Wait == waitErr.MaxWait` ensures the implementation precisely caps the wait duration when it would exceed the budget. Given the test setup (required wait ~150ms vs MaxWait 5ms), this stricter equality check correctly validates that the capping logic in `ratelimiter.go` is functioning as intended.

internal/qbittorrent/sync_manager.go (1)
39-40: Hash normalization doc tweak looks good

Switching the example to "uppercase hex" in the `GetCachedFilesBatch` comment matches the normalization guidance used elsewhere, while still allowing callers that normalize differently (the "e.g." wording keeps it flexible).

Based on learnings, this aligns the interface docs with the intended hash casing.
internal/services/crossseed/service.go (5)
73-77: LGTM on the cache entry structure.

The `dedupCacheEntry` struct appropriately encapsulates both the deduplicated torrent list and the duplicate hash mapping needed to propagate cooldowns.

257-260: Cache field and TTL are appropriate.

The 5-minute TTL aligns with the intended deduplication caching semantics and matches the PR objective of avoiding repeated expensive file fetches during search cycles.
4456-4470: RSS coordination loop is well-structured.

The implementation correctly:
- Creates the timeout channel once before the loop
- Handles context cancellation
- Uses a labeled break for clean exit
- Logs when the timeout is exceeded
This effectively yields to RSS automation while preventing indefinite blocking.
4481-4481: Core fix: seeded search now uses Background priority.

This change, combined with the RSS wait loop above, correctly addresses the starvation issue by:

- Giving seeded search a separate, lower priority (`RateLimitPriorityBackground`)
- Yielding to active RSS automation before starting searches
This should eliminate the "wait budget exceeded" warnings mentioned in the PR objectives.
4192-4197: Cache storage is appropriate.

The `deduplicated` slice and `duplicateMap` are freshly allocated within this function, so storing them directly in the cache is safe. The concern is on the read path (returning cache-backed slices to callers), which was addressed in the earlier comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/qbittorrent/sync_manager.go (1)
1198-1205: The shared slice reference between fresh fetch results and cache input creates a maintenance burden and potential corruption risk

The code at lines 1283–1305 clones the API response once (`callerCopy`) and reuses that single clone for both:

- The caller's returned map (`filesByHash[ch] = callerCopy`)
- The cache input (`fetchedFiles[canonicalHash] = files` → `CacheFilesBatch`)

While the current `FilesManager` implementation converts slices to database rows before storage (mitigating immediate risk), this design:

- Relies entirely on a read-only convention (documented at lines 36 and 1296 but not enforced). Any future caller mutation, or an alternative `FilesManager` implementation that stores slices directly (as shown in the `aliasingFilesManager` test), would corrupt cached data for all subsequent callers.
- Violates the isolation pattern used for cache hits (lines 1212–1216), where cached entries are explicitly cloned. Fresh fetches get no such protection.
- Creates an asymmetry that makes the code harder to reason about: cache hits return independent clones; fresh fetches share a reference between caller and cache.

I recommend either:

- Restore a dedicated clone for cache input, or
- Have `CacheFilesBatch` clone internally before storing, or
- At minimum, strengthen the read-only convention (e.g., with linting rules or interface-level enforcement)
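A sketch of the second option: the cache clones every slice before storing it, so callers may freely reuse the slices they passed in. The type and method names here are illustrative stand-ins, not the real `FilesManager` interface:

```go
package main

import "fmt"

type TorrentFile struct{ Name string }
type TorrentFiles []TorrentFile

// memoryFilesCache stores a defensive clone of each slice, so later caller
// mutations cannot reach the cached copy.
type memoryFilesCache struct {
	store map[string]TorrentFiles
}

func (c *memoryFilesCache) CacheFilesBatch(batch map[string]TorrentFiles) {
	for hash, files := range batch {
		c.store[hash] = append(TorrentFiles(nil), files...) // defensive clone
	}
}

func main() {
	cache := &memoryFilesCache{store: map[string]TorrentFiles{}}
	files := TorrentFiles{{Name: "a.mkv"}}
	cache.CacheFilesBatch(map[string]TorrentFiles{"ABC": files})
	files[0].Name = "mutated" // caller mutation no longer reaches the cache
	fmt.Println(cache.store["ABC"][0].Name) // a.mkv
}
```

This trades one extra allocation per torrent for the same isolation guarantee the cache-hit path already provides.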
🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)
3963-3966: Normalize torrent hash before hashing for cache key consistency.

The cache key uses `torrents[i].Hash` directly without normalization. If hash strings have inconsistent casing (even though qBittorrent typically returns lowercase), the xxhash digest will differ, causing cache misses for the same torrent set.

Apply this diff to ensure consistent cache key generation:

```diff
 // XOR all individual hash digests for order-independent signature.
 // XOR is commutative and associative, so order doesn't matter.
 var sig uint64
 for i := range torrents {
-	sig ^= xxhash.Sum64String(torrents[i].Hash)
+	sig ^= xxhash.Sum64String(normalizeHash(torrents[i].Hash))
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- internal/qbittorrent/sync_manager.go (3 hunks)
- internal/services/crossseed/service.go (10 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/qbittorrent/sync_manager.go
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
- internal/qbittorrent/sync_manager.go
- internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
internal/services/jackett/service.go (1)
- WithSearchPriority (268-273)

internal/services/jackett/ratelimiter.go (1)

- RateLimitPriorityBackground (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (6)
internal/qbittorrent/sync_manager.go (1)
35-47: FilesManager docs and CacheFilesBatch addition look consistent with normalization strategy

The read-only contract on returned `qbt.TorrentFiles` and the updated note about callers passing pre-normalized hashes (with the uppercase-hex example) align well with how hashes are handled elsewhere (e.g., cross-seed flows using `strings.ToUpper`). The `CacheFilesBatch` addition also matches the new batched caching usage in this file; Go will enforce that all implementations are updated at compile time.

Based on learnings, this now matches the intended "uppercase hex" guidance for batch cache keys.
internal/services/crossseed/service.go (5)
3991-4005: Intentional cache-backed return pattern is documented but fragile.

The decision to return cache-backed slices without defensive copies is documented and intentional for memory efficiency. Current callers are read-only, which is correct. However, this pattern is fragile for future maintenance.

Consider adding a compile-time or runtime assertion mechanism if feasible, or at minimum ensure code review vigilance for any future modifications to the returned slices.
4462-4488: LGTM! RSS coordination and priority separation look correct.

The RSS wait loop correctly:

- Yields to RSS automation when active (`runActive.Load()`)
- Has a 5-minute timeout to prevent indefinite blocking
- Respects context cancellation
- Uses labeled break to exit cleanly
The priority change to `RateLimitPriorityBackground` properly separates seeded search from RSS automation, which should resolve the mutual starvation issue described in the PR.
4197-4203: Cache storage implementation is correct.

The cache stores freshly allocated local variables (`deduplicated` and `duplicateMap`), so there's no risk of external mutation affecting cached data. The 5-minute TTL aligns with the PR's stated goal.

295-296: Cache initialization is consistent with existing patterns.

The 5-minute TTL for the dedup cache aligns with the stated goal of avoiding repeated expensive file fetches while allowing periodic refresh.

257-260: Clear documentation for the dedup cache field.

The comment explains the cache key format and purpose well, which will help future maintainers understand the hashing strategy.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/services/crossseed/service.go (1)
2239-2284: AutoTMM/contentLayout handling is solid, but logging/messages use the wrong path

The new logic in `processCrossSeedCandidate` that:

- Disables `autoTMM` when `matchType == "partial-in-pack"` or when the computed `savePath` differs from `props.SavePath`, and
- Sets `contentLayout = "NoSubfolder"` whenever root folders differ or one side is rootless,

properly enforces the new path semantics and avoids qBittorrent silently overriding your explicit layout.

However, the user-facing strings still use `props.SavePath`:

```go
result.Message = fmt.Sprintf("Added torrent paused to %s (match: %s)", props.SavePath, matchType)
// ...
result.Message = fmt.Sprintf("Added torrent with recheck to %s (match: %s)", props.SavePath, matchType)
// ...
log.Debug().Str("savePath", props.SavePath)...
```

When `savePath != props.SavePath` (e.g., partial-in-pack into a collection/season folder, or different roots), these messages/logs will report the wrong path.

Consider switching these to use the actual `savePath` you pass in `options["savepath"]` so logs and messages reflect where the torrent really lands.

Also applies to: 2296-2306, 2331-2338
🧹 Nitpick comments (6)
internal/services/crossseed/crossseed_test.go (2)
236-699: Expanded determineSavePath coverage is strong but getting dense

The expanded `TestDetermineSavePath` matrix does a good job pinning all the new path rules (partial-in-pack, ContentPath vs SavePath, single-file vs rooted, collections, edge chars/long names). The only concern is maintainability: this single table is now quite long and mixed (movies, TV, collections, edge cases). Consider, at some point, splitting it into smaller focused subtests (e.g., movies/TV/collections/edges) or helper builders for the common `qbt.TorrentFiles` patterns to keep it readable.
25-27: Service construction in integration tests depends on ReleaseCache staying an alias

The integration tests build `Service` manually using:

```go
svc := &Service{
	releaseCache:     releases.NewDefaultParser(),
	stringNormalizer: stringutils.NewDefaultNormalizer(),
}
```

This works as long as `Service.releaseCache` remains compatible (e.g., a type alias or wrapper exposing `Parse`). If `ReleaseCache` ever stops being a thin alias around `releases.Parser` (e.g., gains extra behavior/metrics), these tests could silently diverge from the production configuration.

Consider switching these tests to use `NewReleaseCache()` (or a small helper that mirrors it) so they stay aligned with whatever `ReleaseCache` actually is going forward.

Also applies to: 721-816
internal/services/crossseed/service.go (4)
74-79: Dedup cache wiring is coherent and matches the perf goal

The introduction of `dedupCacheEntry`, the `Service.dedupCache` field, and its initialization with a 5-minute TTL cleanly encapsulate deduplication results and avoid redoing the expensive `GetTorrentFilesBatch` + grouping on every refresh. Using a per-service cache is fine given the single-process nature of the scheduler, and the TTL bounds stale-data risk.

The in-code comment that cached slices must be treated as read-only is important; current callers (`refreshSearchQueue` / search run) respect that by only iterating and copying. Just keep that contract in mind if future code starts mutating `deduplicated` or `duplicateHashes`.

Also applies to: 255-262, 287-316
4015-4035: dedupCacheKey XOR signature is reasonable; consider normalizing hashes

Using an XOR of `xxhash.Sum64String` over the torrent hashes plus the count to get an order-independent signature is a pragmatic choice, and the 5-minute TTL + count in the key keep accidental collisions low.

If you want to reduce tiny edge risks and make the key robust against case differences in hashes, you could feed `normalizeHash(torrents[i].Hash)` into `Sum64String` instead of the raw `Hash`. This would keep behavior consistent with the rest of the code, which treats normalized hashes as canonical.
4528-4554: RSS coordination and background priority for seeded search look correct

The new RSS coordination in `processSearchCandidate`:

- Spins while `s.runActive.Load()` is true, yielding with a 500ms sleep and aborting on context cancellation or after 5 minutes, and
- Then sets `jackett.WithSearchPriority(..., RateLimitPriorityBackground)` for seeded searches,

nicely prevents seeded search from competing with higher-priority RSS automation and completion flows. The wait loop respects `ctx` and has a bounded timeout, so it won't wedge the run.

If you ever touch this again, you could replace `time.After(5 * time.Minute)` with an explicit `time.NewTimer` that you `Stop()` when done to avoid leaking a timer goroutine, but it's a minor optimization given the long interval.
5031-5051: Leftover test flag in filterIndexersByExistingContent

`filterIndexersByExistingContent` still contains a `testFilteringEnabled := false` section that, when flipped, short-circuits all indexers:

```go
// TEMPORARY TEST: Remove all indexers to see if the filtering is working
testFilteringEnabled := false // Set to true to test if filtering is working
if testFilteringEnabled {
	log.Debug().Msg("[CROSSSEED-FILTER] *** TEST MODE: FILTERING OUT ALL INDEXERS ***")
	return []int{}, nil, nil, nil
}
```

Since this is dead in production (hardcoded `false`) but would be very surprising if someone ever toggled it, it's better to remove this block or hide it behind a proper debug feature flag/config rather than a local boolean.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `go.mod` (1 hunks)
- `internal/services/crossseed/crossseed_test.go` (3 hunks)
- `internal/services/crossseed/service.go` (15 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
- `go.mod`
- `internal/services/crossseed/service.go`
- `internal/services/crossseed/crossseed_test.go`
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
- `internal/services/jackett/service.go` (1)
  - `WithSearchPriority` (268-273)
- `internal/services/jackett/ratelimiter.go` (1)
  - `RateLimitPriorityBackground` (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (2)
go.mod (1)
14-14: Direct xxhash dependency looks appropriate

Declaring `github.com/cespare/xxhash/v2 v2.3.0` as a direct dependency matches the new usage in `crossseed` (dedup cache key) and avoids relying on it being pulled in transitively. Just ensure `go.sum` and any vendored modules are updated via `go mod tidy` / CI before merge.

internal/services/crossseed/service.go (1)
2684-2787: determineSavePath logic aligns with new partial-in-pack and root-mismatch tests

The updated `determineSavePath` correctly:

- Uses `detectCommonRoot` on `sourceFiles` / `candidateFiles` to distinguish single-file vs rooted torrents.
- For `matchType == "partial-in-pack"`:
  - Places loose files into the candidate's root under `SavePath` when the source has no root but the candidate does.
  - Otherwise prefers `matchedTorrent.ContentPath` when available, which matches the "season pack / collection folder" expectations.
- For differing non-empty roots with `matchType != "partial-in-pack"`, builds `SavePath + candidateRoot`, matching the "spaces vs dots" and Plex-style folder cases.
- Falls back to the matched torrent's `SavePath` for same-root or non-episodic cases.

This matches the expanded `TestDetermineSavePath` cases and the new integration tests for episode-vs-pack and movie-collection scenarios.
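The root-detection and candidate-folder placement described above can be sketched as follows. `detectCommonRoot` here is a simplified illustration that assumes forward-slash paths, not the project's actual helper:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// detectCommonRoot returns the shared top-level folder of a file list,
// or "" when any file is loose or the roots disagree.
func detectCommonRoot(files []string) string {
	root := ""
	for _, f := range files {
		parts := strings.SplitN(filepath.ToSlash(f), "/", 2)
		if len(parts) < 2 {
			return "" // loose file, no root folder
		}
		if root == "" {
			root = parts[0]
		} else if root != parts[0] {
			return ""
		}
	}
	return root
}

func main() {
	src := detectCommonRoot([]string{"Show.S01.1080p/E01.mkv", "Show.S01.1080p/E02.mkv"})
	cand := detectCommonRoot([]string{"Show S01 1080p/E01.mkv", "Show S01 1080p/E02.mkv"})
	savePath := "/downloads"
	if src != "" && cand != "" && src != cand {
		// differing roots ("dots vs spaces"): build SavePath + candidateRoot
		fmt.Println(filepath.Join(savePath, cand))
	}
}
```

This mirrors the "spaces vs dots" case: both torrents are rooted, the roots differ, so the save path is built from the candidate's folder name.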
Actionable comments posted: 0
🧹 Nitpick comments (4)
internal/services/crossseed/service.go (3)
74-78: Dedup cache design and keying look solid; consider minor guard rails

The `dedupCache` wiring and `dedupCacheKey` design (instanceID + count + order-independent xxhash XOR) are coherent and should dramatically reduce repeated file-metadata scans while remaining robust to small queue changes. The cache-hit path in `deduplicateSourceTorrents` is also careful to treat returned slices as read-only, which fits how callers use the results.

Two small follow-ups you might consider:

- Add a brief note to the `deduplicateSourceTorrents` doc comment that its return slices must not be mutated, so future refactors don't accidentally append/sort in place (you already call this out at the cache hit, but surfacing it at the function boundary would help).
- In `dedupCacheKey`, you could optionally normalize hashes (e.g., via `normalizeHash`) before feeding them to `xxhash` so the signature is invariant to upstream casing differences, even though current callers appear consistent.

Neither is blocking, but they'd make the contract and key stability a bit more future-proof.

Please double-check that `github.com/cespare/xxhash/v2` is correctly added/pinned in go.mod and that its use aligns with your repo's dependency policy.

Also applies to: 258-262, 296-297, 314-315, 4015-4035, 4057-4071, 4263-4269
2239-2284: Cross-seed save-path, AutoTMM, and contentLayout logic align with goals; minor logging nit

The expanded `determineSavePath` plus the new handling in `processCrossSeedCandidate` for:

- `matchType == "partial-in-pack"` (using candidate root or `ContentPath`),
- differing `sourceRoot` / `candidateRoot` (favoring the candidate folder),
- deriving `useAutoTMM` from `matchedTorrent.AutoManaged` and disabling it only when you must enforce a specific `savepath`, and
- setting `contentLayout = "NoSubfolder"` when roots differ or are missing,

all look consistent with the intent to avoid double subfolders and to better respect the source layout. The control-flow around partial-in-pack vs root-diff vs default path is also sound and won't regress non-episodic cases.

One small nit: the success/recheck log messages and `result.Message` still interpolate `props.SavePath`, which may now differ from the effective `savePath` you computed (e.g., partial-in-pack or root-diff cases). If you want fully accurate diagnostics, consider logging the final path you passed to qBittorrent instead of the original `props.SavePath`.

Also applies to: 2296-2305, 2689-2787
4528-4553: RSS wait coordination and background priority are reasonable; watch timer churn

The new RSS coordination in `processSearchCandidate`, yielding while `runActive` is true (up to 5 minutes) and then running searches with `RateLimitPriorityBackground`, is a sensible way to avoid seeded search starving RSS automation or vice versa.

Two minor points to keep in mind:

- `runActive` covers both scheduled and manual automation runs, so seeded search will also yield to long manual runs; this is probably acceptable but worth being aware of.
- The inner loop uses a fresh `time.After(500 * time.Millisecond)` each iteration; if you ever see many concurrent seeded runs waiting on a long RSS run, swapping this for a `time.Ticker` would slightly reduce timer churn, though that's an optimization rather than a correctness issue.

Overall, the coordination logic and priority choice look sound.
web/src/pages/CrossSeedPage.tsx (1)
233-233: Consider aligning fetch limit with display limit.

The query fetches 20 runs but the display logic limits each group (scheduled, manual, other) to 5, showing a maximum of 15 runs total. While the current approach provides a buffer for uneven distribution, you could optimize by fetching exactly what you display or adjusting the grouping logic to use all 20 fetched runs.
Also applies to: 829-834
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `internal/services/crossseed/service.go` (16 hunks)
- `web/src/pages/CrossSeedPage.tsx` (6 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (1)
web/src/pages/CrossSeedPage.tsx (3)
- `web/src/lib/api.ts` (1)
  - `api` (1613-1613)
- `web/src/components/ui/badge.tsx` (1)
  - `Badge` (51-51)
- `web/src/components/ui/collapsible.tsx` (2)
  - `CollapsibleTrigger` (14-14)
  - `CollapsibleContent` (14-14)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (6)
internal/services/crossseed/service.go (1)
5533-5541: recentResults now tracks only successful additions (good for UI signal-to-noise)

Limiting `state.recentResults` to `result.Added == true` gives the UI a much clearer "last N successful injections" view instead of being dominated by skips/failures. The cap at 10 and slice-tail trimming are straightforward and safe.

web/src/pages/CrossSeedPage.tsx (5)
1134-1141: LGTM!

The conditional count badges provide a helpful summary of run distribution before expanding the collapsible section.

1150-1156: LGTM!

The compact group headers with inline counts improve readability and provide clear context for the displayed runs.

1161-1163: LGTM!

Displaying the run status in the badge is more valuable than the trigger type, especially since the trigger information is already conveyed by the group headers ("Scheduled" vs "Manual"). This improves the information hierarchy and makes it easier to spot failed or running operations at a glance.
1176-1176: The review comment is incorrect: `totalFeedItems` is guaranteed to be defined.

The `CrossSeedRun` interface (web/src/types/index.ts:1234) declares `totalFeedItems: number` as a required property without the optional `?` modifier. TypeScript guarantees this property is always present, making the direct access at line 1176 safe and appropriate. No fallback handling is needed.
1533-1533: Verification results

I traced the flow from the UI to the backend:

Backend enforcement of "last 10 additions": in `internal/services/crossseed/service.go`, when results are added:

- `recentResults` is initialized with capacity 10
- When a result is added: `if len(state.recentResults) > 10 { state.recentResults = state.recentResults[len(state.recentResults)-10:] }`
- This maintains a sliding window of exactly the last 10 added results

"List clears when the run stops": in `GetSearchRunStatus()`:

- Returns an empty `RecentResults` array by default
- Only populates it if `state != nil` (search is running)
- When a search completes, `s.searchState = nil`, so subsequent calls return empty results

The UI text "Shows the last 10 additions during this run. List clears when the run stops." is accurate. Both behaviors are correctly implemented in the backend: the service maintains a sliding window of exactly 10 added results during a search run, and clears them when the search state is reset. No changes needed.
Actionable comments posted: 0
🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)
4532-4547: Add debug logging for RSS coordination to aid diagnostics.

The RSS coordination logic correctly prevents rate-limiter contention with a 5-minute timeout. However, logging only occurs on timeout (line 4542), not when the wait starts or completes successfully. Consider adding debug logging to help diagnose when seeded search is yielding to RSS automation:

```diff
+	if s.runActive.Load() {
+		log.Debug().Msg("[CROSSSEED-SEARCH] Waiting for RSS automation to complete before searching")
+	}
 	rssWaitDeadline := time.After(5 * time.Minute)
 rssWaitLoop:
 	for s.runActive.Load() {
 		select {
 		case <-ctx.Done():
 			return ctx.Err()
 		case <-rssWaitDeadline:
 			log.Warn().Msg("[CROSSSEED-SEARCH] RSS wait timeout exceeded, proceeding with search")
 			break rssWaitLoop
 		case <-time.After(500 * time.Millisecond):
 		}
 	}
+	if !s.runActive.Load() {
+		log.Debug().Msg("[CROSSSEED-SEARCH] RSS automation completed, proceeding with search")
+	}
```

This would provide visibility into coordination behavior without adding significant overhead.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `internal/services/crossseed/service.go` (18 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (1)
internal/services/jackett/ratelimiter.go (1)
- `RateLimitPriorityBackground` (38-38)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (3)
internal/services/crossseed/service.go (3)
4015-4035: XOR-based cache key is acceptable for the use case.

The order-independent hashing using XOR of individual hash digests is a pragmatic choice. While XOR has theoretical collision risk (different torrent sets could produce identical signatures), the 5-minute TTL, instance ID scoping, and torrent count make practical collisions extremely unlikely.
Importantly, false cache hits would only cause suboptimal deduplication (processing some torrents twice) rather than correctness issues, which is an acceptable tradeoff given the memory pressure reduction achieved by avoiding repeated expensive file fetches.
5537-5545: Good filtering of recent results for stable UI display.

The distinction between `state.run.Results` (all attempts) and `state.recentResults` (only successful additions) is well-designed. Filtering by `result.Added` ensures the UI shows a meaningful sliding window of actual cross-seed additions rather than being diluted by skipped or failed attempts.
74-78: Well-implemented dedup caching and RSS coordination.

The changes successfully address the PR objectives:

Deduplication cache (lines 74-78, 296-297, 314, 4015-4035, 4056-4276): The 5-minute TTL cache with order-independent XOR-based keys effectively reduces memory pressure from repeated expensive file fetches. The read-only contract for cached slices is properly documented and respected by callers.

Rate-limit priority separation (line 4557): Changing seeded search to `RateLimitPriorityBackground` prevents mutual starvation with RSS automation as intended.

RSS coordination (lines 4532-4547): The wait loop with 5-minute timeout ensures seeded search yields to higher-priority RSS automation while preventing indefinite blocking.

The implementation is sound and aligns with the memory optimization and starvation prevention goals outlined in the PR objectives.

Also applies to: 258-261, 296-297, 314-314, 4015-4035, 4056-4276, 4532-4547, 4557-4557
Actionable comments posted: 0
🧹 Nitpick comments (2)
internal/services/jackett/SCHEDULER.md (1)
17-66: LGTM: Comprehensive scheduler documentation.

The architecture documentation is thorough and well-structured, clearly explaining the async, job-based scheduler model and dispatch-time rate limiting.

Minor improvement: Add a language specifier to the code block (line 17) to address the linter warning:

````diff
-```
+```text
````

internal/services/crossseed/service.go (1)
2690-2787: Expanded logic correctly handles partial-in-pack and root-folder scenarios with comprehensive test coverage

The new `matchType`, `sourceFiles`, and `candidateFiles` parameters enable handling of partial-in-pack and differing root-folder scenarios. The implementation is sound:

- Partial-in-pack (lines 2694-2721): When the source lacks a root folder but the candidate has one, the file is correctly placed inside the candidate's folder. When `ContentPath` is empty, the logic properly falls through to the root-diff check (lines 2723-2736).
- Root-folder handling (lines 2723-2736): Correctly detects when root folders differ and uses the candidate's folder path, handling cases like scene releases vs Plex-named folders.
- Test coverage: 45+ test cases confirm correct behavior across movies, TV shows, collections, and edge cases, including explicit coverage of empty `ContentPath` fallback behavior.

Minor improvement opportunity: the function has grown to ~100 lines with distinct logical blocks. Consider extracting the partial-in-pack logic (lines 2694-2721) and root-diff logic (lines 2723-2736) into separate helper methods to improve readability and testability, though the current implementation is functionally correct.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- `internal/services/crossseed/service.go` (18 hunks)
- `internal/services/jackett/SCHEDULER.md` (1 hunks)
- `internal/services/jackett/models.go` (0 hunks)
- `internal/services/jackett/ratelimiter.go` (0 hunks)
- `internal/services/jackett/ratelimiter_test.go` (0 hunks)
- `internal/services/jackett/scheduler.go` (9 hunks)
- `internal/services/jackett/scheduler_test.go` (9 hunks)
- `internal/services/jackett/service.go` (7 hunks)
💤 Files with no reviewable changes (3)
- internal/services/jackett/models.go
- internal/services/jackett/ratelimiter_test.go
- internal/services/jackett/ratelimiter.go
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
- `internal/services/crossseed/service.go`
- `internal/services/jackett/service.go`
- `internal/services/jackett/SCHEDULER.md`
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (2)
internal/services/jackett/scheduler.go (3)

- `internal/models/torznab_indexer.go` (1)
  - `TorznabIndexer` (64-83)
- `internal/services/jackett/client.go` (1)
  - `Result` (87-104)
- `pkg/gojackett/domain.go` (1)
  - `Indexers` (16-93)

internal/services/crossseed/service.go (1)

- `internal/services/jackett/scheduler.go` (1)
  - `RateLimitPriorityBackground` (42-42)
🪛 LanguageTool
internal/services/jackett/SCHEDULER.md
[style] ~169-~169: Try moving the adverb to make the sentence clearer.
Context: ...king cleared 5. Chain: completeCh signaled to potentially unblock more tasks ### 4. Retry Timer If tasks are block...
(SPLIT_INFINITIVE)
🪛 markdownlint-cli2 (0.18.1)
internal/services/jackett/SCHEDULER.md
17-17: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (18)
internal/services/jackett/scheduler_test.go (3)
18-113: LGTM: Test coverage for new scheduler API is comprehensive.

The tests properly exercise the new `SubmitRequest` and callback-based API, including priority ordering, worker pool limits, and rate limiting integration.
587-722: Well-designed test coverage for MaxWait behavior.

The tests for explicit MaxWait (lines 587-640) and priority-based default MaxWait (lines 642-722) properly verify that rate-limited indexers are skipped when wait time exceeds budget. This directly addresses the starvation issue mentioned in the PR objectives.
724-878: Comprehensive rate limiter unit tests.

The direct RateLimiter tests provide good coverage of cooldown tracking, interval enforcement, priority multipliers, and state management. These low-level tests complement the integration tests above.
internal/services/jackett/service.go (3)
326-417: Solid implementation of result aggregation for async scheduler.

The refactored `searchIndexersWithScheduler` correctly bridges the new callback-based scheduler API with the legacy synchronous result aggregation:

- Per-indexer `OnComplete` callback aggregates results under mutex
- `RateLimitWaitError` treated as skip (not failure), correct per PR objectives
- Final `OnJobDone` callback delivers aggregated results
- Coverage tracking properly merges indexer IDs
1631-1633: Verification of rate limiting delegation to the scheduler was inconclusive

The automated verification attempted to confirm that `RecordRequest` is properly called before execution and that the legacy `BeforeRequest` path was removed, but it did not reach a conclusion. Manual review of the scheduler dispatch path is recommended.
304-324: The review comment's premise is incorrect: the `onComplete` parameter has NOT been removed from the signature.

The function signature at line 304 still includes the `onComplete` callback parameter:

```go
func (s *Service) executeQueuedSearch(ctx context.Context, indexers []*models.TorznabIndexer, params url.Values, meta *searchContext, onComplete func(jobID uint64, indexerID int, err error), resultCallback func([]Result, []int, error)) error
```

Both call sites correctly pass this parameter:

- Line 671: `req.OnComplete`
- Line 752: `nil`

The review comment's claim that "`executeQueuedSearch` no longer takes onReady callback" contradicts the actual code. Either the review comment was written based on an outdated or different version of the code, or the refactoring described in the scratchpad was not actually implemented as stated.

Likely an incorrect or invalid review comment.
internal/services/jackett/scheduler.go (6)
79-97: Well-designed non-blocking rate limiter.

The RateLimiter correctly decouples rate limit checking from execution. The scheduler queries `NextWait()` at dispatch time to make scheduling decisions without blocking. This is a key design improvement over the previous model.
277-315: Priority multipliers effectively prevent starvation.

The `resolveOptions` method applies priority multipliers to reduce min intervals for higher-priority requests:
- Interactive: 0.1x (6s instead of 60s default)
- RSS: 0.5x (30s)
- Completion: 0.7x (42s)
- Background: 1.0x (60s)
This addresses the starvation issue mentioned in the PR objectives by allowing RSS automation to bypass longer waits.
579-663: Core dispatch logic correctly implements MaxWait budgets.

The `dispatchTasks` method properly implements the starvation mitigation strategy:

- Checks context cancellation (line 597)
- Prevents concurrent requests to same indexer (line 606)
- Calls `NextWait` to compute required wait (line 616)
- Skips with `RateLimitWaitError` if wait exceeds `MaxWait` budget (lines 619-637)
- Re-queues if wait > 0 but within budget (lines 639-643)
- Acquires worker slot and dispatches if ready (lines 646-653)
This ensures RSS and seeded searches won't wait indefinitely for rate-limited indexers.
665-708: RecordRequest timing is correct.

Line 693 calls RecordRequest BEFORE executing the search, which properly reserves the rate limit slot at dispatch time. This prevents race conditions where multiple tasks might dispatch for the same indexer before the first request is recorded.
734-755: Confirm MaxWait=0 for Interactive priority is intentional.

The getMaxWait function returns 0 for RateLimitPriorityInteractive (line 750), which means "no limit" per the SCHEDULER.md documentation. This is correctly handled in the dispatch logic (line 619 checks maxWait > 0), allowing interactive searches to wait as long as needed.

This design choice should be acceptable for user-initiated searches, but verify that long waits won't cause UI timeouts.
Consider whether interactive searches need a timeout at a higher level (e.g., HTTP request timeout) to prevent indefinite hangs from the user's perspective. The scheduler itself won't limit interactive waits.
757-789: Retry timer optimization reduces dispatch latency.

The scheduleRetryTimerLocked method computes the minimum wait among blocked tasks and schedules a retry accordingly (lines 762-787). This ensures blocked tasks are re-dispatched as soon as they become eligible, minimizing unnecessary delays.

internal/services/crossseed/service.go (6)
74-78: LGTM: Deduplication cache infrastructure

The cache structure and initialization look good. The 5-minute TTL aligns with other caches and the PR's memory pressure reduction goal.
Also applies to: 258-261, 296-297, 314-314
4015-4035: Reasonable tradeoff: XOR-based cache key

The XOR approach has theoretical collision risk (different torrent sets could produce the same signature), but the mitigations are sound:
- 64-bit hash space makes collisions unlikely
- 5-minute TTL limits exposure window
- Instance ID and count provide additional uniqueness
The performance benefit (O(n) vs O(n log n) for sorting) justifies the negligible collision risk for this use case.
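The XOR construction can be sketched in isolation. This is a simplified stand-in: it takes pre-extracted hash strings instead of torrent structs, and stdlib FNV-1a stands in for the xxhash digest used in the service.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// hash64 digests a string; stdlib FNV-1a stands in for xxhash here.
func hash64(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

// dedupCacheKey builds an order-independent signature. XOR is commutative
// and associative, so any permutation of the same hash set yields the same
// signature without sorting the inputs first.
func dedupCacheKey(instanceID int, hashes []string) string {
	var sig uint64
	for _, h := range hashes {
		sig ^= hash64(h)
	}
	return fmt.Sprintf("dedup:%d:%d:%x", instanceID, len(hashes), sig)
}

func main() {
	a := dedupCacheKey(1, []string{"AAA", "BBB", "CCC"})
	b := dedupCacheKey(1, []string{"CCC", "AAA", "BBB"})
	fmt.Println(a == b) // true: order-independent
}
```

Because no sort is needed, building the key is a single O(n) pass over the hashes, which is the performance advantage noted above.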
4564-4564: LGTM: Search priority change prevents rate-limiter starvation

Changing from RateLimitPriorityRSS to RateLimitPriorityBackground implements the core PR objective. This gives seeded search a separate priority tier (60s interval, 45s max wait per the PR description) to prevent mutual blocking with RSS automation.
2265-2307: LGTM: autoTMM and contentLayout logic correctly handles folder structure

The logic correctly coordinates autoTMM and contentLayout based on matchType and root folder comparison:

autoTMM (lines 2268-2274): Properly disabled for partial-in-pack or when savePath differs, ensuring qBittorrent uses the calculated path instead of category defaults.

contentLayout (lines 2299-2306): Correctly sets NoSubfolder when root folders differ to prevent creating incorrect subfolders, while allowing default behavior when roots match.
5544-5547: LGTM: Filtering recentResults improves UI clarity

Only tracking successfully added torrents in recentResults provides a cleaner UI experience by showing a stable sliding window of actual additions rather than being diluted by skipped/failed results. The full result set is still available in state.run.Results for complete tracking.
4532-4553: RSS wait timeout design is appropriate and starvation is not possible

The 5-minute timeout is well-justified by the RSS automation architecture:

Timeout appropriateness: RSS automation enforces a minimum 30-minute gap between runs (line 1349 in computeNextRunDelay). RSS runs should complete well within 5 minutes; if they exceed this, it indicates a stuck process that should fail fast rather than block seeded search indefinitely.

No starvation risk: The automationLoop enforces minimum 30-minute intervals between RSS runs regardless of how long individual runs take. Even if seeded search times out after 5 minutes, the next RSS run won't start for 30+ minutes, preventing perpetual blocking.

User experience: Debug logs track wait initiation and completion (lines 4536-4537, 4551-4552), and the 500ms poll interval is reasonable for this non-critical coordination.
The implementation correctly prioritizes RSS automation while preventing indefinite blocking.
Actionable comments posted: 0
🧹 Nitpick comments (5)
internal/services/jackett/scheduler_test.go (5)
18-50: Basic async Submit + callbacks test looks correct; consider guarding against hangs

The basic end-to-end test for Submit + JobCallbacks is clear and correctly asserts both execution and result propagation. As a minor robustness tweak (which applies to other tests using a bare <-done), you might wrap the wait in a select with time.After so a scheduler regression doesn't hang the entire go test run.
311-332: Empty and nil indexer submissions are covered appropriately

The tests for empty indexer slices and nil indexers ensure Submit short-circuits safely and still invokes OnJobDone. If you ever need to tighten behavior, you could also assert that exec is not called in these cases, but current coverage is reasonable.

Also applies to: 334-355
548-585: Relax timing thresholds in dispatch-time rate limiting test to reduce flakiness

The logic of "first call unthrottled, second call delayed" is right, but the hard elapsed1 < 50ms bound can be brittle on slow or noisy CI environments. You can keep the behavioral guarantee while giving more slack and making the second-call assertion relative:

```diff
-	elapsed1 := time.Since(start1)
-	assert.Less(t, elapsed1, 50*time.Millisecond)
+	elapsed1 := time.Since(start1)
+	// First dispatch should not be significantly delayed by rate limiting.
+	assert.Less(t, elapsed1, 500*time.Millisecond)
@@
-	elapsed2 := time.Since(start2)
-	// Should have waited for rate limit
-	assert.Greater(t, elapsed2, 50*time.Millisecond)
+	elapsed2 := time.Since(start2)
+	// Second dispatch should incur additional wait due to rate limiting.
+	assert.Greater(t, elapsed2, elapsed1+50*time.Millisecond)
```

This still fails if rate limiting is applied to the first request or not applied to the second, but is much less susceptible to timing noise.
587-640: MaxWait skip behavior test is good; consider asserting more details

The test correctly forces a long wait and verifies that a short MaxWait results in a RateLimitWaitError. If you want stronger guarantees, you could also assert waitErr.IndexerID/IndexerName and that waitErr.Wait > waitErr.MaxWait to fully validate the error contents.
642-722: Default MaxWait-by-priority behavior is well covered; one non-skip case could help

This table-driven test nicely asserts that RSS, completion, and background priorities pick up the expected default MaxWait and result in skips when NextWait exceeds those values. You might consider adding a case where NextWait is below the default (e.g., shorter limiter interval or longer explicit MaxWait) to exercise the non-skip branch too, but what's here already covers the primary contract.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- internal/services/jackett/scheduler_test.go (9 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
internal/services/jackett/scheduler_test.go
🧬 Code graph analysis (1)
internal/services/jackett/scheduler_test.go (2)
internal/models/torznab_indexer.go (1)
TorznabIndexer (64-83)

internal/services/jackett/scheduler.go (10)

SubmitRequest (353-359), JobCallbacks (343-350), NewRateLimiter (88-97), RateLimitPriority (36-36), RateLimitOptions (45-49), RateLimitPriorityBackground (42-42), RateLimitPriorityInteractive (39-39), RateLimitPriorityRSS (40-40), RateLimitWaitError (51-57), RateLimitPriorityCompletion (41-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build distribution binaries
🔇 Additional comments (11)
internal/services/jackett/scheduler_test.go (11)
52-113: Priority ordering test is well-constructed and race-safe

Using a single worker plus execMu to serialize access to executedTasks gives a clean signal that interactive work preempts background, and the completed counter avoids double-closing done. This is a solid, deterministic priority test.
115-163: Worker pool concurrency limit test is correct and robust

The use of currentConcurrent/maxConcurrent with atomics to assert <= 2 concurrent execs gives a good safety net for future changes to the worker pool. The CAS loop is a bit heavy for test code but totally fine here.
165-208: Context cancellation behavior is validated correctly

The test reliably forces the exec function to observe ctx.Done() and checks context.Canceled propagation via OnComplete. The spin-wait on started avoids a race where you cancel before the worker actually enters the select.
210-261: Panic recovery test covers the critical behavior well

Exercising a panic on the first indexer and then verifying a clean run on the second is a good pattern to ensure workers are recovered and the scheduler remains usable. The error message assertion for "scheduler worker panic" is a nice regression guard.
263-309: RSS deduplication test matches the intended behavior

The test correctly verifies that two overlapping RSS submissions to the same indexer result in only one exec invocation. The atomic.Int32 counter plus a slow exec make the dedup signal clear without introducing unnecessary complexity.
357-401: Concurrent submissions test gives good load and race coverage

Spawning multiple goroutines, each submitting several jobs, and counting completions with atomics is a solid stress test of the scheduler's concurrency. The pattern of wg.Wait() for submissions plus a separate done channel for completions is appropriate here.
403-449: Multi-indexer submission behavior is clearly verified

Asserting that exec is called once per indexer and that OnComplete fires three times (with a WaitGroup) is a good way to lock in the "fan-out per indexer" contract for a single Submit call.
451-502: Heap ordering and scheduler priority mapping tests are tight

The taskHeap ordering test cleanly asserts priority first, then FIFO by created time; the separate mapping test ensures RateLimitPriority* values are correctly translated to internal search priorities and have sane defaults. Both look correct and low-maintenance.
504-517: Job and task ID generation test is straightforward and sufficient

Verifying monotonically increasing IDs starting from 1 for both jobs and tasks is exactly what you need here; no issues.
519-546: Error propagation through OnComplete is correctly asserted

The test nicely checks that an ExecFn error surfaces as-is to OnComplete via the completeCh. This will catch regressions where errors are swallowed or wrapped incorrectly.
724-878: Rate limiter unit tests are comprehensive and realistic

The rate limiter tests collectively cover cooldowns, min interval behavior, "ready" cases, active cooldown enumeration, expiration, priority multipliers, recording requests (including zero-time handling), clearing cooldowns, and bulk loading. The chosen sleep intervals and thresholds look reasonable and unlikely to be flaky, given the margins you've used.
Actionable comments posted: 0
🧹 Nitpick comments (4)
internal/services/crossseed/service.go (4)
2714-2843: Suggest refactoring for reduced complexity.

The determineSavePath function has grown significantly with extensive branching logic for partial-in-pack, root-diff, and season pack scenarios. While the logic appears correct and logging is comprehensive, the increased cyclomatic complexity (10+ decision points) makes maintenance challenging. Consider:

- Extracting helper functions for distinct scenarios: handlePartialInPack(sourceRoot, candidateRoot, props, matchedTorrent) string, handleRootFolderDifference(sourceRoot, candidateRoot, props, matchedTorrent) string, handleSeasonPackPlacement(newRelease, matchedRelease, props) string
- Using early returns consistently to flatten nesting
- Adding comprehensive test coverage for each scenario
This would improve readability and reduce the risk of regression when adding new layout scenarios.
4071-4091: Consider collision-resistant alternative to XOR-based signature.

The XOR-based cache key generation has a theoretical collision risk: different torrent sets can produce identical signatures if their XOR results match (e.g., {A XOR B} could equal {C XOR D} for different torrents). While the 5-minute TTL and torrent count in the key reduce the practical impact, a collision would cause deduplicateSourceTorrents to return incorrect results, potentially processing duplicates or skipping valid torrents.

Consider a collision-resistant alternative:

```diff
 func dedupCacheKey(instanceID int, torrents []qbt.Torrent) string {
 	n := len(torrents)
 	if n == 0 {
 		return fmt.Sprintf("dedup:%d:0:0", instanceID)
 	}
-	// XOR all individual hash digests for order-independent signature.
-	// XOR is commutative and associative, so order doesn't matter.
-	var sig uint64
+	// Sort hashes and hash the concatenation for an order-independent,
+	// collision-resistant signature.
+	hashes := make([]string, n)
 	for i := range torrents {
-		sig ^= xxhash.Sum64String(torrents[i].Hash)
+		hashes[i] = torrents[i].Hash
 	}
-
-	return fmt.Sprintf("dedup:%d:%d:%x", instanceID, n, sig)
+	sort.Strings(hashes)
+	sig := xxhash.Sum64String(strings.Join(hashes, "|"))
+	return fmt.Sprintf("dedup:%d:%d:%x", instanceID, n, sig)
 }
```

This approach sorts the hashes before hashing their concatenation, providing order-independence without XOR's collision risk. The performance impact is minimal (sorting ~100s of strings every 5 minutes).
4117-4131: Cache mutability contract requires vigilance.

The cache-hit path returns slices backed directly by cached data (lines 4125-4129) to reduce memory pressure, enforcing a "read-only" contract through documentation rather than type safety. While this aligns with the performance goal and was addressed in commit 23083bd (per past review comments), the lack of compile-time enforcement creates ongoing maintenance risk.

Consider adding runtime assertions in development/test builds to detect violations:

```diff
+// devMode can be set via build tags or environment variable
+var devMode = os.Getenv("QUI_DEV_MODE") == "true"
+
 func (s *Service) deduplicateSourceTorrents(ctx context.Context, instanceID int, torrents []qbt.Torrent) ([]qbt.Torrent, map[string][]string) {
 	if len(torrents) <= 1 {
 		return torrents, map[string][]string{}
 	}
 	cacheKey := dedupCacheKey(instanceID, torrents)
 	if s.dedupCache != nil {
 		if entry, ok := s.dedupCache.Get(cacheKey); ok && entry != nil {
+			if devMode {
+				// In dev mode, clone to detect mutation bugs
+				cloned := append([]qbt.Torrent(nil), entry.deduplicated...)
+				return cloned, entry.duplicateHashes
+			}
 			// IMPORTANT: Returned slices are cache-backed. Do not modify.
 			return entry.deduplicated, entry.duplicateHashes
 		}
 	}
```

This allows mutation detection during development while preserving production performance.
4588-4610: RSS wait timeout balances coordination with responsiveness.

The RSS wait loop (lines 4588-4610) blocks seeded search processing for up to 5 minutes per torrent while RSS automation runs, using 500ms polling. This coordination prevents rate limiter contention between RSS (higher priority) and background search (lower priority), aligning with the PR's goal to fix mutual starvation.
However, the blocking approach has trade-offs:
- Per-torrent impact: Each torrent in the queue waits up to 5 minutes, potentially delaying the entire search run
- Polling overhead: 500ms polling is simple but not event-driven
- Timeout rationale: The 5-minute timeout exceeds typical RSS run durations but prevents indefinite blocking if RSS gets stuck
Consider an event-driven alternative using a channel to signal RSS completion:
```diff
 type Service struct {
+	rssCompleteChan chan struct{} // Closed when RSS automation completes
 	// ... other fields
 }

-	wasWaitingForRSS := s.runActive.Load()
-	if wasWaitingForRSS {
-		log.Debug().Msg("[CROSSSEED-SEARCH] Waiting for RSS automation to complete before searching")
-	}
-	rssWaitDeadline := time.After(5 * time.Minute)
-rssWaitLoop:
-	for s.runActive.Load() {
+	select {
+	case <-ctx.Done():
+		return ctx.Err()
+	case <-s.rssCompleteChan:
+		// RSS completed, proceed immediately
+	case <-time.After(5 * time.Minute):
+		log.Warn().Msg("[CROSSSEED-SEARCH] RSS wait timeout exceeded, proceeding with search")
+	}

-		select {
-		case <-ctx.Done():
-			return ctx.Err()
-		case <-rssWaitDeadline:
-			log.Warn().Msg("[CROSSSEED-SEARCH] RSS wait timeout exceeded, proceeding with search")
-			break rssWaitLoop
-		case <-time.After(500 * time.Millisecond):
-		}
-	}
```

This eliminates polling overhead and provides immediate wake-up when RSS completes, improving responsiveness for large search queues.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- internal/services/crossseed/crossseed_test.go (4 hunks)
- internal/services/crossseed/find_individual_episodes_test.go (1 hunks)
- internal/services/crossseed/matching_layout_test.go (1 hunks)
- internal/services/crossseed/release_cache_test.go (1 hunks)
- internal/services/crossseed/service.go (22 hunks)
- internal/services/crossseed/service_example_test.go (1 hunks)
- internal/services/crossseed/service_search_test.go (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- internal/services/crossseed/service_example_test.go
- internal/services/crossseed/release_cache_test.go
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/services/crossseed/matching_layout_test.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/crossseed_test.go
internal/services/crossseed/service_search_test.go
internal/services/crossseed/find_individual_episodes_test.go
internal/services/crossseed/service.go
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
internal/services/crossseed/find_individual_episodes_test.go
internal/services/crossseed/service.go
🧬 Code graph analysis (3)
internal/services/crossseed/service_search_test.go (1)
web/src/types/index.ts (1)
AppPreferences(587-824)
internal/services/crossseed/find_individual_episodes_test.go (1)
web/src/types/index.ts (1)
AppPreferences(587-824)
internal/services/crossseed/service.go (1)
internal/services/jackett/scheduler.go (1)
RateLimitPriorityBackground(42-42)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (14)
internal/services/crossseed/service_search_test.go (1)
244-246: LGTM! Test mock properly extended.

The GetAppPreferences method correctly returns a valid content layout for test scenarios, aligning with the PR's goal of adding content-layout awareness to cross-seed functionality.

internal/services/crossseed/matching_layout_test.go (1)
139-141: LGTM! Consistent test mock implementation.

The GetAppPreferences method follows the same pattern as other test mocks in this PR, correctly supporting content-layout-aware test scenarios.

internal/services/crossseed/find_individual_episodes_test.go (1)
247-249: LGTM! Test mock properly extended.

Consistent with other test helpers in this PR, correctly supporting content-layout-aware cross-seed tests.

internal/services/crossseed/crossseed_test.go (5)
internal/services/crossseed/crossseed_test.go (5)
25-25: LGTM! Import supports expanded test coverage.The
releasespackage import is necessary for the new content-layout-aware test scenarios that usereleases.NewDefaultParser().
236-1027: Excellent test coverage for content-layout-aware save path logic.The expanded test suite comprehensively covers:
- Season pack ↔ episode scenarios
- Movie collections and partial-in-pack matches
- Edge cases (deep nesting, unicode, recursion prevention, path separator variations)
- Different content layouts (Original, Subfolder, NoSubfolder)
The test organization with clear sections and descriptive comments makes it easy to understand expected behavior for each scenario. This aligns well with the PR's goal of fixing ContentPath vs SavePath handling in cross-seed operations.
1029-1128: Excellent integration tests for partial-in-pack behavior.These end-to-end tests verify the critical fix for ContentPath vs SavePath handling in partial-in-pack scenarios:
- Season pack → episode matching correctly produces
"partial-in-pack"matchTypedetermineSavePathuses ContentPath (not SavePath) for proper file placement- Explicit negative assertions confirm SavePath is NOT used
This directly addresses the PR objective of fixing save path determination for cross-seed operations.
2284-2286: LGTM! Test mock properly extended.

Consistent with other test helpers, correctly supporting content-layout-aware test scenarios.
2428-2661: Excellent validation of layout-agnostic behavior.

This test comprehensively verifies that cross-seed save path determination works correctly across all qBittorrent content layout settings (Original, Subfolder, NoSubfolder). The structured approach of testing each key scenario with all three layouts ensures that users' layout preferences don't break cross-seed functionality.
The clear test organization and descriptive assertions make it easy to understand expected behavior for each layout mode.
internal/services/crossseed/service.go (6)
63-63: LGTM! Interface extension for content layout support.

The GetAppPreferences method addition enables content-layout-aware save path decisions, which aligns with the PR objectives for fixing cross-seed placement logic.
75-79: LGTM! Clean cache entry structure.

The dedupCacheEntry structure appropriately captures both the deduplicated torrent list and the duplicate hash mappings needed for cooldown propagation.
259-262: LGTM! Consistent cache initialization.

The dedupCache initialization follows the same pattern as other service caches with a 5-minute TTL, which is appropriate for avoiding repeated expensive deduplication operations during automation runs.

Also applies to: 297-298, 315-315
4620-4620: LGTM! Priority change aligns with coordination goals.

The switch to RateLimitPriorityBackground for seeded search (line 4620) ensures that background searches yield to RSS automation, addressing the mutual starvation issue described in the PR objectives. Combined with the RSS wait logic (lines 4588-4610), this establishes clear priority ordering: Interactive > RSS > Completion > Background.
5597-5610: LGTM! Focused recentResults improves UI clarity.

The change to track only successfully added torrents in recentResults (line 5603) provides a cleaner UI experience by showing a stable sliding window of actual progress rather than being diluted by skipped/failed attempts. This aligns well with user expectations for a "recent successes" view.

Note: This filters out skipped/failed results from the recent list, though they remain in the full run.Results. Consider adding metrics or separate UI elements if visibility into recent failures is desired for troubleshooting.
2239-2244: Based on my verification, I can now provide the rewritten review comment:

The "Original" fallback is safe and intentionally designed for cross-seed placement logic.
The code explicitly documents (line 2323) that "Original" is the default qBittorrent behavior that preserves folder structure. The fallback is well-integrated into the placement logic: when folder structures differ between source and candidate torrents, the code intelligently overrides to "NoSubfolder" (lines 2316-2318, 2324-2326) to avoid creating incorrect subfolders. When structures match, it uses "Original" to maintain consistency. All test mocks validate this approach. The WARN-level logging (line 2242) adequately alerts operators.

Rather than adding retry logic (which would increase latency for transient failures), consider documenting the fallback behavior in the function comment to clarify that:
"Original"is the qBittorrent default preserving folder structure- Folder structure mismatches trigger automatic
"NoSubfolder"override- If persistent preference fetch failures occur, investigate the sync manager connection
Actionable comments posted: 0
🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)
945-961: Consider using retry helper for consistency.

The stuck-run recovery logic correctly marks runs that have been in "running" status for >5 minutes as failed. However, it uses the direct UpdateRun call rather than updateAutomationRunWithRetry, which could leave the database inconsistent if the update fails.

Apply this change for consistency:
```diff
-	if updated, updateErr := s.automationStore.UpdateRun(ctx, lastRun); updateErr != nil {
+	if updated, updateErr := s.updateAutomationRunWithRetry(ctx, lastRun); updateErr != nil {
 		log.Warn().Err(updateErr).Int64("runID", lastRun.ID).Msg("failed to update stuck automation run")
 	} else {
 		lastRun = updated
 	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- internal/services/crossseed/service.go (30 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-25T22:46:03.733Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.733Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (3)
web/src/types/index.ts (3): `AppPreferences` (587-824), `Torrent` (204-261), `CrossSeedRun` (1227-1243)
internal/models/crossseed.go (2): `CrossSeedRun` (144-160), `CrossSeedRunStatusRunning` (118-118)
internal/services/jackett/scheduler.go (1): `RateLimitPriorityBackground` (42-42)
🔇 Additional comments (11)
internal/services/crossseed/service.go (11)
872-926: Retry helpers look solid

The exponential-like backoff (100ms × attempt) and a maximum of 3 retries are reasonable for database update conflicts. Context cancellation is properly checked between attempts.
4145-4165: Order-independent cache key design is sound

The XOR-based signature ensures the same set of torrents produces the same cache key regardless of order. While XOR collisions are theoretically possible, the combination of torrent count and the 5-minute TTL makes the practical impact negligible, as noted in the comments.
4191-4205: Cache-hit early return avoids recomputation as intended

The function now returns cached slices directly without defensive copies to reduce memory pressure. The read-only contract is clearly documented in both the function doc comment and inline comments. This addresses the dedup cache behavior noted in previous reviews.
Based on past review comments, this concern was already addressed in commit 23083bd.
2788-2866: Complex save-path logic for partial-in-pack scenarios

The new branching handles partial-in-pack and differing root folders with appropriate path construction. The defensive checks to avoid duplicating folder names in the path (e.g., checking whether `SavePath` already ends with `candidateRoot`) are important safeguards.

Verify the path logic works correctly across different content layouts and operating systems during testing, particularly the folder duplication checks.
2313-2318: Graceful fallback for app preferences

The error handling correctly falls back to the "Original" content layout when preferences cannot be fetched, allowing the cross-seed operation to continue with a safe default.
2378-2404: ContentLayout decision logic aligns with qBittorrent behavior

The branching correctly adapts the content layout based on the instance's `TorrentContentLayout` preference and whether the source and candidate root folders match. Forcing `NoSubfolder` when roots differ prevents qBittorrent from creating mismatched folder structures.
2347-2366: AutoTMM disabling for precise path control is correct

The logic correctly disables Automatic Torrent Management when the calculated save path differs from the matched torrent's path, or for partial-in-pack scenarios. This ensures the cross-seed uses the specific computed location rather than qBittorrent's category-based path.
4671-4671: Background priority for seeded search aligns with PR objectives

Setting the seeded torrent search to `RateLimitPriorityBackground` ensures it yields to RSS automation, preventing the mutual starvation described in the PR objectives. This allows RSS (higher priority) to run without being blocked by long-running seeded searches.

Based on the PR objectives, this is an essential part of the scheduler coordination strategy.
5648-5660: Recent results filtering improves UI signal-to-noise

Filtering `recentResults` to include only successfully added torrents (`result.Added == true`) provides a cleaner UI experience by showing a focused view of actual work completed, while `state.run.Results` maintains the complete history of all attempts.
1220-1220: CompletedAt check prevents stale "running" status

Adding `state.run.CompletedAt == nil` to the condition ensures the status correctly reflects when a run has actually finished, preventing the UI from showing "running" for completed runs.
4121-4123: Clearing current candidate on finalization prevents stale UI state

Resetting `currentCandidate` to `nil` before marking the run as completed ensures the UI doesn't display outdated "currently processing" information after the search run finishes.
Actionable comments posted: 0
🧹 Nitpick comments (3)
internal/services/crossseed/crossseed_test.go (3)
236-1185: determineSavePath test matrix is thorough and encodes nuanced path invariants well

The expanded table for `TestDetermineSavePath` covers a ton of realistic movie/TV/collection and edge scenarios (deep nesting, unicode, mixed separators, empty paths, chained cross-seeds, etc.) and clearly documents the intended behavior via `description`. This is a solid executable spec for the path logic.

If this grows further, consider extracting a small helper to reduce the repetition around building `qbt.Torrent`/`TorrentProperties` inputs, but I wouldn't change anything for this PR.
2874-3236: mockRecoverSyncManager and recoverErroredTorrents tests give strong coverage; consider a few failure-path cases

The new `mockRecoverSyncManager` plus the `recoverErroredTorrents` tests cover the happy paths and mixed-state batching well: no-errored, single/multiple errored, missing-files, mixed states, empty list, and context-cancellation.

Since the mock already has switches like `bulkActionFails`, `keepInCheckingState`, `failGetTorrentsAfterRecheck`, `setProgressToThreshold`, and `secondRecheckCompletes`, it might be worth adding one or two focused tests that flip these to verify:

- how `recoverErroredTorrents` behaves when a bulk action fails (error surfacing / retry or abort behavior), and
- how it reacts when progress lands exactly on the "good enough" threshold vs. slightly below, or when torrents disappear mid-recheck.
Not required for this PR, but you’ve already done most of the setup to make those edge behaviors easy to lock in.
2453-2455: Review comment is accurate; the optional refactoring suggestion is well-justified

The hard-coded "Original" layout in `fakeSyncManager.GetAppPreferences` is indeed fine for deterministic testing. Verification confirms that all three layout types (`Original`, `Subfolder`, `NoSubfolder`) are comprehensively tested through `TestDetermineSavePathContentLayoutScenarios` by directly passing `contentLayout` to `determineSavePath`. The review correctly identifies this pattern and appropriately offers the parameterization suggestion as optional future work, in case tests later need to exercise layouts through the `GetAppPreferences` path rather than direct parameter passing.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`internal/services/crossseed/crossseed_test.go` (7 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.
Applied to files:
internal/services/crossseed/crossseed_test.go
🧬 Code graph analysis (1)
internal/services/crossseed/crossseed_test.go (2)
pkg/releases/parser.go (1): `NewDefaultParser` (29-31)
internal/services/crossseed/release_cache.go (1): `NewReleaseCache` (13-15)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (3)
internal/services/crossseed/crossseed_test.go (3)
1187-1284: Partial‑in‑pack integration tests nicely validate matchType ↔ save‑path interplay
`TestPartialInPackIntegration` and `TestPartialInPackMovieCollectionIntegration` do a good job of exercising the full flow (`releaseCache.Parse` → `getMatchType` → `determineSavePath`) for key partial-in-pack scenarios (TV episode into season pack, movie into collection). The expectations align with the large table above (episodes use the season pack `ContentPath`, single movies in collections use the collection path), which should catch regressions in either the matcher or the path logic.
1343-1465: Category/tag preservation tests and the boolPtr helper look correct and comprehensive

The updated `TestCrossSeed_CategoryAndTagPreservation` cases exercise the important combinations: inheritance on/off, explicit indexer category, and explicit disabling of the `cross-seed` tag using `addCrossSeedTag = boolPtr(false)`. That matches how I'd expect `buildCrossSeedTags` and `determineCrossSeedCategory` to behave.

`boolPtr` is a simple, local helper and keeps the table readable; keeping it in this file near the tests is fine.
2597-2872: Content‑layout scenario tests are a good cross‑check that determineSavePath is layout‑agnostic
`TestDetermineSavePathContentLayoutScenarios` clearly verifies that for the key scenarios (season packs, partial-in-pack episodes/movies, single-file torrents), `determineSavePath` returns the same logical path regardless of the "Original", "Subfolder", or "NoSubfolder" content layout, with behavior differences pushed into qBittorrent itself. This is a nice safeguard against regressions when wiring in `TorrentContentLayout`.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/services/crossseed/service.go (1)
1649-1659: Tagging & cross-seed tag backward-compat: verify defaults cover legacy callers

You've moved most tagging to source-specific arrays:

- RSS: `RSSAutomationTags`.
- Seeded search: `SeededSearchTags`.
- Webhook: `WebhookTags`.
- Manual apply: either the user-supplied `TagName` or `SeededSearchTags`.

`buildCrossSeedTags` now:

- Merges these "source" tags plus optional matched-torrent tags when `inheritSourceTags` is true.
- Treats `AddCrossSeedTag` purely as a removal signal: if explicitly `false`, it strips `"cross-seed"` from the final list; otherwise it doesn't touch it.

That's a nice cleanup, but it does mean that legacy callers who previously relied on `AddCrossSeedTag=true` without configuring any tags will no longer automatically get a `"cross-seed"` tag unless the defaults for `RSSAutomationTags`/`SeededSearchTags`/`WebhookTags` now include it.

Please double-check:

- Default values for those tag arrays do include `"cross-seed"` where you expect the tag.
- Older webhook/autobrr flows that only toggle `AddCrossSeedTag` (and never send `Tags`) still behave as intended.

If you haven't already, a short note in the settings docs that `"cross-seed"` now comes from the per-source tag lists (and `AddCrossSeedTag` only disables it) would help avoid surprises.

Also applies to: 2222-2245, 4090-4097, 4206-4237, 5303-5315, 6206-6253
♻️ Duplicate comments (3)
internal/services/crossseed/service.go (3)
414-430: Search run options & status: confirm completion‑run visibility choice
- Adding `InheritSourceTags` to `SearchRunOptions` and wiring it from automation settings into `StartSearchRun`/`executeCrossSeedSearchAttempt` is coherent; seeded search now shares the same tag-inheritance behavior as RSS/completion.
- `GetSearchRunStatus` now suppresses runs where `state.opts.RequestedBy == "completion"`. This matches the intent to hide completion-triggered runs from the generic seeded-search status endpoint, but it will also hide any user-initiated runs that set `RequestedBy="completion"`.

If the API is supposed to expose all user-visible runs regardless of source, consider adding an explicit discriminator (e.g. `SystemInitiated` or a dedicated mode) rather than keying purely on `RequestedBy`. If hiding all completion runs is intentional, a brief comment on `SearchRunOptions.RequestedBy` or in `GetSearchRunStatus` would help future readers.

Also applies to: 1155-1178, 1233-1252, 5937-5947
208-211: Errored-torrent recovery: good batching; minor timeout-intent mismatch

The new `recoverErroredTorrents` flow is a substantial improvement:

- Batches errored torrents, pauses them once, and rechecks via a single `BulkAction` call.
- Polls every 2s instead of 10–100ms and uses `GetTorrents` with a `Hashes` filter, which is much gentler on qBittorrent.
- Uses per-torrent state with:
  - capped error retries (`maxErrorRetries`),
  - a 25-minute limit while actively checking,
  - a second recheck after `DiskCacheTTL`.
- Resumes only torrents that reach the computed `minCompletionProgress`, logs failures, and respects the caller's `ctx`.

One nit: the 100ms `settingsCtx` in this function doesn't actually constrain `GetAutomationSettings`, since that helper replaces the incoming context with `context.WithoutCancel` and then applies its own 5s timeout. If you no longer want a separate short bound here, you can drop the 100ms wrapper to avoid implying a tighter SLA. If you do want a genuinely short call, it has to be implemented inside `GetAutomationSettings` or via a lighter helper.

Also applies to: 6698-6995
968-985: Stuck automation-run recovery no longer mislabels long runs

Gating the "mark last run as timed-out" logic on `!s.runActive.Load()` fixes the earlier problem where a legitimately long-running in-memory automation could be briefly reported as `Failed`. The 5-minute window is now applied only when nothing is currently active in memory, which matches the intended semantics.
🧹 Nitpick comments (3)
internal/services/crossseed/service.go (3)
200-214: Recheck-resume worker: design matches stated invariants

The new `recheckPollInterval`/`recheckAbsoluteTimeout`/`recheckAPITimeout` constants, the `recheckResumeChan` fields, `queueRecheckResume`, and `recheckResumeWorker` give you a single, batched worker that:

- Groups pending hashes per instance and calls `GetTorrents` once per tick.
- Resumes when `Progress >= threshold` and the torrent is not checking, or drops entries on absolute timeout / below-threshold completion.
- Uses a background context so it survives per-request cancellation.

Keying `pending` by hash only and running the worker for the process lifetime are consistent with the documented constraint that background seeded-search runs operate on a single instance, so I wouldn't change that here. If/when you move to multi-instance background runs, swapping the map key to `instanceID|normalizedHash` will be the right follow-up.

Based on learnings, this is acceptable as-is.

Also applies to: 281-285, 316-341, 2560-2591, 2593-2729
2290-2552: Cross-seed add path: alignment, contentLayout, and recheck/resume

This refactor of `processCrossSeedCandidate` is quite a bit clearer and looks correct:

- `requiresAlignment`/`hasExtraFiles` split rename-alignment from "extra file" scenarios.
- `skip_checking` is only set when there are no extra source files, so straight "all files already on disk" matches can skip verification.
- `sourceRoot`/`candidateRoot` + `isEpisodeInPack` drive:
  - `contentLayout="Subfolder"` for single-file → folder (non episode-in-pack),
  - `contentLayout="NoSubfolder"` for folder → single-file,
  - the default ("Original") otherwise.
- For episode-in-season-pack matches, you disable TMM and use `matchedTorrent.ContentPath` as `savepath`, avoiding category default paths.
- TMM is enabled only when a category is set and the match is not an episode-in-pack, which keeps season-pack paths intact.
- Recheck/resume behavior:
  - `needsRecheckAndResume := requiresAlignment || hasExtraFiles` triggers a manual `recheck` followed by `queueRecheckResume`.
  - Perfect matches with `startPaused` resume immediately; `startPaused=false` relies on qBittorrent starting the torrent from the initial add.

This matches the comments and should handle the previously tricky partial-in-pack and season-pack scenarios more robustly.
4141-4279: ApplyTorrentSearchResults: concurrency & tagging semantics

The new implementation that processes selections concurrently via goroutines and a buffered `resultChan` is straightforward and safe:

- Each goroutine is independent, indexes its own slot, and writes a single `selectionResult` back.
- The collector closes `resultChan` after `wg.Wait()` and reconstructs `results` in the original order.
- Errors per selection are captured and surfaced per-entry without failing the whole call.

Behavioral bits to note:

- Manual apply tags now prefer an explicit `TagName`, then fall back to `SeededSearchTags` when `useTag` is true.
- `InheritSourceTags` is pulled from automation settings so manual apply mirrors seeded-search behavior.

Overall this is a nice win for responsiveness on multiple selections.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`internal/services/crossseed/service.go` (55 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
internal/models/crossseed.go (1): `CrossSeedRun` (156-172)
internal/services/crossseed/models.go (1): `CrossSeedRequest` (15-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (10)
internal/services/crossseed/service.go (10)
55-73: qbittorrentSync interface extension: ensure all implementations are updated

Adding `GetAppPreferences(ctx context.Context, instanceID int) (qbt.AppPreferences, error)` is consistent with its use in `recoverErroredTorrents`, but it's a breaking interface change. Please double-check that `SyncManager` and any test mocks implement this method with the same signature to avoid compile-time or runtime DI issues.
75-79: Dedup cache & keying: behavior is sound; collisions are a documented tradeoff

The `dedupCache` + `dedupCacheKey` wiring looks correct: a 5-minute TTL, a key that includes the instanceID, torrent count, and an order-independent hash signature, and `deduplicateSourceTorrents` returning cache-backed slices with a clear read-only contract. Given the explicit comment about XOR's theoretical collision risk and the short TTL, the tradeoff is acceptable and well-documented.

Also applies to: 262-266, 313-315, 4443-4463, 4489-4503, 4695-4701
883-1139: Completion search refactor: tagging & inheritance path looks consistent
`HandleTorrentCompletion` now delegates to `executeCompletionSearch`, which:

- Filters indexers asynchronously, then runs `SearchTorrentMatches` with `RateLimitPriorityCompletion`.
- Builds a `searchRunState` used purely to pass options (category override, `CompletionSearchTags`, `InheritSourceTags`, ignore patterns, `StartPaused`) into `executeCrossSeedSearchAttempt`.
- Does not persist a search run, which matches the "fire-and-forget" completion behavior.

This keeps completion tagging (`CompletionSearchTags` + inheritance) aligned with seeded search and RSS automation, without polluting the seeded-search run history. The flow looks sound.
894-948: Retry helpers for run persistence are well‑factored
`updateAutomationRunWithRetry` and `updateSearchRunWithRetry` centralize transient-error handling around store updates, and the callers (`executeAutomationRun`, the `GetAutomationStatus` timeout update, `finalizeSearchRun`) now use them appropriately. Given the low retry count and capped backoff, this should improve resilience against brief DB hiccups without materially extending request latency. No issues from my side.
Also applies to: 1459-1487, 1570-1574, 4405-4441, 5951-5962
1155-1177: Dedup + cooldown propagation in seeded search: behavior aligns with goals

The seeded-search pipeline now:

- Normalizes seeded-search tags from settings when none are provided.
- Deduplicates queue entries per content group via `deduplicateSourceTorrents`.
- Propagates cooldown history to duplicate hashes in `propagateDuplicateSearchHistory`.

Combined, this should prevent repeatedly searching identical cross-seeds while still respecting per-torrent cooldown semantics across duplicates. The wiring to `searchRunState.duplicateHashes` looks consistent.

Also applies to: 1787-1815, 4723-4740
3015-3174: determineSavePath: the expanded partial-in-pack logic is consistent with the new add path

`determineSavePath` now:

- Normalizes `SavePath`/`ContentPath` separators.
- Handles `partial-in-pack` with nuanced behavior:
  - Single-file candidate → `SavePath`.
  - TV episode into season pack → `ContentPath`.
  - Non-TV single-file-into-folder → `SavePath` with the "Subfolder" layout at the add call site.
- Adjusts when roots differ (`sourceRoot != candidateRoot`) by constructing a folder path under `SavePath` when appropriate.
- Falls back to season-pack/episode/same-type scenarios based on rls metadata.

Given that `processCrossSeedCandidate` now largely relies on TMM + contentLayout and `savepath` directly, this helper is mainly for other flows/tests. The branches are logically consistent with the comments and don't introduce obvious path bugs.

Based on learnings, I'm intentionally not flagging the unused `contentLayout` parameter here.
3821-3826: Jackett scheduler integration: JobID plumbing and outcome reportingThe changes to
SearchTorrentMatchesandprocessSearchCandidatehook nicely into the new async Jackett scheduler:
- You now propagate
searchResp.JobIDthroughTorrentSearchResponse.JobID.- Automation search paths aggregate indexer adds/fails in
indexerAdds/indexerFailsand callreportIndexerOutcomes(jobID, ...).reportIndexerOutcomescorrectly:
- skips reporting when
jobID == 0(interactive/test paths),- merges indexer IDs across both maps and reports “added” vs “failed” per indexer.
That should give the scheduler enough signal to drive its escalating backoff without affecting interactive completion/manual flows (which use
jobID=0). Looks good.Also applies to: 4041-4042, 4095-4097, 5051-5078, 5108-5136
4965-4975: Background seeded-search priority wiring

Using `jackett.WithSearchPriority(..., jackett.RateLimitPriorityBackground)` for library seeded-search automation correctly sets these runs to the lowest scheduler priority (below RSS and completion). That matches the PR's stated priority scheme and should help avoid starvation for interactive flows.
5493-5498: Content-based indexer filtering: logging & matches exposure

The tweaks in `filterIndexersByExistingContent`:

- Add a clear "filtering indexers by existing content" debug log with counts.
- Collect human-readable `contentMatches` ("Name (InstanceName)") that bubble up into async filtering state for UI/diagnostics.
- Log excluded indexers with reasons when any are filtered.
This makes it much easier to understand why certain indexers were removed while keeping the core behavior unchanged.
Also applies to: 5525-5528, 5590-5597
5937-5947: Search recentResults now only track successful additions

Restricting `recentResults` to `result.Added == true` while still appending all results to `state.run.Results` gives the UI a stable rolling window of actual additions without being spammed by skips/failures. That's a sensible UX choice, and the guard `if s.searchState == state` avoids races with stale states.
Actionable comments posted: 0
♻️ Duplicate comments (2)
internal/services/crossseed/service.go (2)
1233-1253: Search run visibility still hides all "completion" runs

`GetSearchRunStatus` continues to suppress any active state where `state.opts.RequestedBy == "completion"`. That keeps internal completion-triggered runs out of the status API, but it also hides any future user-initiated runs that might reuse `"completion"` as a discriminator.

If the intent is "hide only system/internal completion runs", consider either:
- Using a more explicit flag (e.g. `SystemInitiated bool`) in `SearchRunOptions`, or
- Adding a comment here clarifying that completion-tagged runs are intentionally excluded from the user-visible status API so future callers don't rely on them appearing.
6713-6762: Errored-torrent recovery: good batching, but settings timeout wrapper is misleading

The new `recoverErroredTorrents` dramatically improves on the earlier per-torrent recheck logic: it batches pause/recheck operations, polls every 2 seconds, bounds "active checking" time per torrent (25 minutes), and performs a second pass after `DiskCacheTTL`. That keeps both API load and total wall time more predictable.

Two minor points to consider:
1. **100ms settings context is ineffective**

`GetAutomationSettings` immediately wraps the incoming context with `context.WithoutCancel` and applies its own 5s timeout. The 100ms `settingsCtx` here doesn't actually limit the call, but it suggests it does.
- settingsCtx, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
- settings, err := s.GetAutomationSettings(settingsCtx)
- cancel()
- settings, err := s.GetAutomationSettings(ctx)
or, if you genuinely need a short bound, refactor `GetAutomationSettings` (or add a lighter helper) to respect upstream deadlines. 2. **Worst-case blocking for manual flows** Because this function blocks until both phases complete (or `ctx` is canceled), manual `FindCandidates` / `CrossSeed` calls can, in pathological cases with many errored torrents, run for a long time. If that becomes an issue in practice, you might cap the number of errored torrents recovered per invocation or gate this recovery to automation/search paths instead of all callers. Also applies to: 6751-6762, 6820-6878, 6927-7008 </blockquote></details> </blockquote></details> <details> <summary>🧹 Nitpick comments (1)</summary><blockquote> <details> <summary>internal/services/crossseed/service.go (1)</summary><blockquote> `281-285`: **Recheck-resume worker design is sound; one minor guard improvement** The single recheck-resume worker, per-instance batching, and use of a dedicated background context avoid goroutine leaks and keep qBittorrent polling under control. The `queueRecheckResume` threshold calculation from automation settings is also reasonable. One small robustness tweak: in `processCrossSeedCandidate`, `queueRecheckResume` is called even if the explicit `recheck` `BulkAction` fails. That leaves an entry in the worker queue for a torrent that was never rechecked and will just sit until absolute timeout. Consider only enqueueing when the initial `recheck` call succeeds: ```diff - if err := s.syncManager.BulkAction(ctx, candidate.InstanceID, []string{torrentHash}, "recheck"); err != nil { - log.Warn(). - Err(err). - Int("instanceID", candidate.InstanceID). - Str("torrentHash", torrentHash). - Msg("Failed to trigger recheck after add") - } - // Queue for background resume when recheck completes - s.queueRecheckResume(ctx, candidate.InstanceID, torrentHash) + if err := s.syncManager.BulkAction(ctx, candidate.InstanceID, []string{torrentHash}, "recheck"); err != nil { + log.Warn(). 
+ Err(err). + Int("instanceID", candidate.InstanceID). + Str("torrentHash", torrentHash). + Msg("Failed to trigger recheck after add; not queueing for resume") + return result + } + // Queue for background resume when recheck completes + s.queueRecheckResume(ctx, candidate.InstanceID, torrentHash)Also applies to: 316-341, 2473-2531, 2560-2591, 2593-2729
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- internal/services/crossseed/content_detection_test.go (1 hunks)
- internal/services/crossseed/service.go (56 hunks)
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
📚 Learning: 2025-11-28T20:32:30.126Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-25T22:46:03.762Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-27T15:33:39.007Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (2)
internal/services/crossseed/content_detection_test.go (1)
internal/services/crossseed/models.go (1)
TorrentInfo(74-97)
internal/services/crossseed/service.go (2)
internal/models/crossseed.go (2)
- CrossSeedRun (156-172)
- CrossSeedAutomationSettings (20-48)

internal/services/crossseed/models.go (1)

- CrossSeedRequest (15-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (12)
internal/services/crossseed/content_detection_test.go (2)
90-134: LGTM! Well-structured test for miniseries edge case.

The test correctly validates that episode markers in file names (E01, E02) are trusted over the torrent name when detecting TV content, even when the torrent name contains a year that might suggest a movie. The test data is realistic and the assertions verify the expected behavior.

136-184: LGTM! Well-structured test for anime episode detection.

The test correctly validates that anime-style episode markers (" - 01 ") in file names are trusted for TV content detection, even when the torrent name lacks episode indicators. The test data accurately reflects common anime naming conventions and the assertions verify the expected behavior.
internal/services/crossseed/service.go (10)
262-266: Dedup cache wiring and read-only contract look solid

The `dedupCache` field, its initialization with a 5-minute TTL, and the `dedupCacheKey`/`deduplicateSourceTorrents` integration are consistent and well-documented. The explicit "results are read-only" contract plus cache-backed slices avoid unnecessary allocations without sacrificing correctness for current call sites.

Also applies to: 313-315, 4458-4718
894-920: Retry helpers for run persistence improve resilience

The new `updateAutomationRunWithRetry` and `updateSearchRunWithRetry` helpers, plus the updated call sites in automation/search finalization, are a good durability improvement: transient store errors no longer immediately lose run state. Logging at WARN and then ERROR on exhaustion is appropriate, and the use of the original `ctx` (or `context.Background()` at finalize) makes sense to persist status even when the run's work context has been canceled.

Also applies to: 922-948, 1459-1497, 1570-1574, 4439-4448
968-985: Stuck-run auto-recovery now correctly gated on in-memory activity

Gating the "mark stuck run as failed after 5 minutes" logic on `!s.runActive.Load()` fixes the earlier misclassification risk for legitimately long-running automations. This should prevent transient "failed" statuses when a run is still executing and only clean up genuinely orphaned runs.
414-430: Tag inheritance plumbing is coherent across all entry points

`SearchRunOptions.InheritSourceTags`, the various call sites that populate it from `CrossSeedAutomationSettings.InheritSourceTags`, and the centralized `buildCrossSeedTags` helper give a consistent tagging story for RSS, seeded search, completion, and webhook/apply flows. Respecting `AddCrossSeedTag == false` for backwards compatibility while letting settings drive the default tag sets is a nice touch.

Also applies to: 1094-1103, 1163-1168, 2217-2245, 5321-5326, 6221-6268
2949-2952: Cross-seed matching and save-path/content-layout handling look consistent

The swap in `getMatchType` argument order to treat existing files as the candidate set, along with the expanded `determineSavePath` and the new `processCrossSeedCandidate` logic (skip_checking heuristics, `contentLayout` decisions, TMM vs explicit `savepath`, and the `isEpisodeInPack` handling), forms a coherent strategy for aligning cross-seeds with existing layouts, including tricky partial-in-pack and season-pack cases. The added comments make the intent clear and should greatly reduce path/layout regressions.

Also applies to: 3025-3189, 2373-2525
4156-4290: Parallelizing ApplyTorrentSearchResults is safe and improves throughput

The refactor to process selections in parallel (buffered `resultChan`, `WaitGroup`, and ordered collation by index) is correct and preserves deterministic result ordering. Each goroutine produces exactly one `selectionResult`, and all shared configuration (`settings`) is read-only, so there's no new data race introduced here.
5067-5093: Indexer outcome reporting is wired correctly to scheduler JobID

Collecting per-indexer add/fail counts in `processSearchCandidate` and reporting them via `reportIndexerOutcomes` using `searchResp.JobID` lets Jackett's scheduler track which indexers actually produced successful adds. The explicit `jobID == 0` guard avoids spurious reporting for synchronous/non-scheduler searches, and aggregating counts before reporting keeps the number of calls bounded.

Also applies to: 5123-5151, 4050-4057, 4111-4112
5540-5543: Content-based indexer filtering adds useful signal with acceptable logging

The `filterIndexersByExistingContent` enrichments (tracking `contentMatches` and logging `excludedIndexers` when any are dropped) provide good observability for why indexers are being filtered without changing behavior for the non-match case. This should make it much easier to debug "why didn't we search X?" questions.

Also applies to: 5605-5612
5954-5958: Recent-results window now focuses on successful additions

Restricting `state.recentResults` to only `result.Added == true` keeps the UI's "recent" view focused on actual cross-seed injections instead of being flooded by skips/failures. The sliding window of 10 entries is small and stable enough to be cheap to maintain.
2825-2838: TV episode special-case in content detection is a nice correctness tweak

The new branch in `selectContentDetectionRelease` that trusts largest-file metadata when it clearly indicates TV episodes (episode/series set), despite the torrent-level metadata saying "movie", fixes the common "season pack named like a movie year" misclassification. The additional title-match guard prevents over-eager reclassification on unrelated content.
Actionable comments posted: 0
♻️ Duplicate comments (3)
internal/services/crossseed/service.go (3)
1233-1252: Search run status hides completion‑triggered runs — document or refine the intent
`GetSearchRunStatus` now suppresses active runs where `state.opts.RequestedBy == "completion"`. That matches the goal of hiding internal completion flows but will also hide any future user-initiated runs that happen to reuse `"completion"`.

Either:
- Document this behavior on the API (e.g., “completion runs are intentionally omitted from status”), or
- Use a more explicit discriminator (e.g., an internal/system flag) if you eventually need user‑visible completion runs.
4454-4474: dedupCacheKey XOR signature tradeoff is acceptable and well-documented

The XOR-of-xxhash approach in `dedupCacheKey` gives you an order-independent signature with minimal overhead, and the comments explicitly call out the theoretical collision risk and why it's acceptable given the 5-minute TTL and the torrent count baked into the key. Combined with the cached `duplicateMap`, this is a reasonable engineering tradeoff for a best-effort performance cache.

Also applies to: 4697-4712
6696-6994: Confirmed: settings timeout is ignored due to `context.WithoutCancel`, and perpetually queued torrents lack a wall-clock bound

The `recoverErroredTorrents` batched recovery is sound, but two design issues remain:

1. **Ineffective 100ms settings timeout**: The `settingsCtx` created at line 6718 with a 100ms deadline is immediately stripped by `GetAutomationSettings` (line 505), which calls `context.WithoutCancel` before applying its own 5-second timeout. The 100ms wrapper reads like an intent to perform a cheap, fast read but is completely ignored. Either drop the 100ms wrapper here or modify `GetAutomationSettings` to honor upstream deadlines.
2. **No absolute timeout for perpetually queued torrents**: Torrents that remain queued without ever entering a `checking*` state (rare but possible) will loop indefinitely. The `maxCheckingTime` of 25 minutes only applies to torrents actively checking. Consider adding an absolute "pending in recovery workflow" wall-clock timeout (e.g., reuse `maxCheckingTime` or define a separate constant) and log-and-skip hashes that exceed it.
🧹 Nitpick comments (6)
internal/services/crossseed/models.go (2)
23-32: Clarify `InheritSourceTags` override semantics (`bool` vs `*bool`).

Struct-wise this looks fine, but using `bool` here means you cannot distinguish "field omitted" from "explicitly set to false" in JSON. If the intent is strictly to drive behavior from global settings (and this field is only populated internally), that's OK; if per-request overrides are expected later, consider switching to `*bool` to support a tri-state (inherit default / force on / force off).
326-336: `AutobrrApplyRequest.SkipIfExists` shape is good; consider documenting it.

Adding `SkipIfExists *bool` with the struct tag `json:"skipIfExists,omitempty"` mirrors the existing `CrossSeedRequest.SkipIfExists` and gives autobrr a clean way to avoid duplicate adds. From a model perspective this is solid.

I'd suggest also wiring this into the README/autobrr examples so users know the flag exists and what the default behavior is when it's omitted.
internal/services/crossseed/service.go (3)
962-986: Stuck automation run handling now correctly guards on in-memory activity

The new timeout block in `GetAutomationStatus` only marks a `running` DB record as failed when no in-memory run is active, which avoids misclassifying long but legitimate runs. The 5-minute threshold is reasonable; if you want this more discoverable, consider extracting it into a named constant (purely cosmetic).
1094-1103: InheritSourceTags and source-specific tags are wired consistently across flows

Completion search, RSS automation, AutobrrApply, seeded search runs, and manual apply all now feed tags through `CrossSeedRequest.Tags` plus `InheritSourceTags`, with `buildCrossSeedTags` centralizing the merge and dedup. This keeps tag behavior consistent no matter how cross-seeding is initiated.

One optional thought: if tags should be case-insensitive in your UX, you could normalize (e.g., `strings.ToLower`) inside `buildCrossSeedTags` before inserting into the set; otherwise the current behavior (case-sensitive distinct tags) is fine.

Also applies to: 1653-1655, 2222-2243, 5293-5322, 6217-6251
4142-4286: Parallelizing ApplyTorrentSearchResults is a nice usability/perf improvement

The new goroutine-per-selection pattern with a buffered results channel and index tagging:

- keeps the external API order-stable,
- limits shared state to a read-only cached slice and immutable `settings`,
- and lets long-running downloads or cross-seed attempts overlap.
Given typical selection counts, this is a pragmatic concurrency model; if you ever see very large selection batches, a small worker pool would be an easy follow‑up.
internal/services/crossseed/crossseed_test.go (1)
2864-3027: Well-designed mock for recovery testing.

The `mockRecoverSyncManager` provides flexible behavior control for testing various torrent recovery scenarios. The implementation:
- Tracks method calls for verification
- Supports multiple failure modes and edge cases
- Simulates state transitions correctly
Consider making the progress threshold (line 2958) a configurable parameter rather than hardcoding 0.95, to improve test maintainability if the threshold changes.
Apply this diff to make the threshold configurable:
```diff
 type mockRecoverSyncManager struct {
 	torrents                    map[string]*qbt.Torrent
 	calls                       []string
 	recheckCompletes            bool
 	disappearAfterRecheck       bool
 	bulkActionFails             bool
 	keepInCheckingState         bool
 	failGetTorrentsAfterRecheck bool
 	setProgressToThreshold      bool
+	progressThreshold           float64
 	hasRechecked                bool
 	secondRecheckCompletes      bool
 	recheckCount                int
 }

 func newMockRecoverSyncManager(initialTorrents []qbt.Torrent) *mockRecoverSyncManager {
 	torrents := make(map[string]*qbt.Torrent)
 	for _, t := range initialTorrents {
 		torrent := t
 		torrents[t.Hash] = &torrent
 	}
 	return &mockRecoverSyncManager{
 		torrents:                    torrents,
 		calls:                       []string{},
 		recheckCompletes:            true,
 		disappearAfterRecheck:       false,
 		bulkActionFails:             false,
 		keepInCheckingState:         false,
 		failGetTorrentsAfterRecheck: false,
 		setProgressToThreshold:      false,
+		progressThreshold:           0.95,
 		hasRechecked:                false,
 		secondRecheckCompletes:      false,
 		recheckCount:                0,
 	}
 }
```

Then use `torrent.Progress = m.progressThreshold` at line 2958 instead of hardcoding 0.95.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- README.md (1 hunks)
- internal/services/crossseed/crossseed_test.go (6 hunks)
- internal/services/crossseed/models.go (3 hunks)
- internal/services/crossseed/service.go (56 hunks)
- internal/web/swagger/openapi.yaml (0 hunks)
💤 Files with no reviewable changes (1)
- internal/web/swagger/openapi.yaml
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
📚 Learning: 2025-11-28T20:32:30.126Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Applied to files:
- internal/services/crossseed/service.go
- internal/services/crossseed/crossseed_test.go
- internal/services/crossseed/models.go
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.
Applied to files:
- internal/services/crossseed/service.go
- internal/services/crossseed/crossseed_test.go
- internal/services/crossseed/models.go
📚 Learning: 2025-11-25T22:46:03.762Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
- internal/services/crossseed/service.go
- internal/services/crossseed/crossseed_test.go
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-27T15:33:39.007Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Applied to files:
- internal/services/crossseed/service.go
- internal/services/crossseed/models.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
- internal/services/crossseed/service.go
- internal/services/crossseed/crossseed_test.go
🧬 Code graph analysis (2)
internal/services/crossseed/service.go (3)
internal/models/crossseed.go (3)
- CrossSeedRun (156-172)
- CrossSeedRunStatusRunning (130-130)
- CrossSeedRunStatusFailed (133-133)

internal/services/crossseed/models.go (1)

- CrossSeedRequest (15-38)

internal/services/jackett/scheduler.go (1)

- RateLimitPriorityBackground (66-66)
internal/services/crossseed/crossseed_test.go (4)
web/src/types/index.ts (3)
- Torrent (204-261)
- CrossSeedAutomationSettings (1327-1348)
- Category (342-345)

pkg/releases/parser.go (1)

- NewDefaultParser (29-31)

internal/services/crossseed/models.go (1)

- CrossSeedRequest (15-38)

internal/services/crossseed/release_cache.go (1)

- NewReleaseCache (13-15)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (19)
internal/services/crossseed/models.go (1)
194-202: `JobID` on `TorrentSearchResponse` aligns with async scheduler/outcome tracking.

Exposing `JobID uint64` with the struct tag `json:"jobId,omitempty"` here is consistent with the Torznab scheduler design and the cross-seed outcome tracking model, and matches the pattern where async cross-seed flows receive real job IDs while synchronous/test paths use `0`. Based on learnings, this looks correct and non-breaking.

README.md (1)
627-659: Cross-Seed apply/tagging and global settings docs look consistent and clear.

The new examples for `instanceIds`, the description of the `/api/cross-seed/apply` pipeline, and the "Tagging" / "Global Cross-Seed Settings → Source Tagging" sections align well with the updated models (tags, per-source tag groups, inherit-source-tags). This should give autobrr users and UI users an accurate mental model of how scoping and tagging work.

internal/services/crossseed/service.go (10)
75-79: Dedup cache structure and read‑only contract look good

The `dedupCacheEntry` type, `dedupCache` field wiring, `dedupCacheKey` generation, and the cache-hit/insert paths in `deduplicateSourceTorrents` all line up: you avoid extra allocations on cache hits and clearly document that returned slices/maps are read-only. Current callers (`refreshSearchQueue` and `propagateDuplicateSearchHistory`) respect that contract, so this is a nice perf win without compromising correctness.

Also applies to: 262-265, 313-314, 333-334, 4454-4474, 4491-4513, 4706-4712
2371-2457: Cross‑seed add strategy (TMM, contentLayout, recheck/resume) is coherent

The new `processCrossSeedCandidate` flow combines:

- `skip_checking=true` only when we're confident all data exists (no extra source files),
- contentLayout inference from `sourceRoot`/`candidateRoot` with special handling for episode‑in‑pack cases,
- TMM enabled only when a category is used and not episode‑in‑pack,
- explicit recheck + `queueRecheckResume` when alignment or extra files require verification, and immediate resume only for safe "perfect match" cases.

This significantly simplifies previous save‑path logic while still covering the tricky episode/season‑pack scenarios and avoids rechecks when they're unnecessary. Nice refactor.
Also applies to: 2482-2492, 2495-2522, 2535-2550
2558-2589: Recheck‑resume worker and queue design match stated invariants
`queueRecheckResume` computes a per‑torrent resume threshold from the size‑tolerance setting and feeds a bounded `recheckResumeChan`; `recheckResumeWorker` then:

- batches hashes per instance,
- uses a single ticker with `recheckPollInterval` and `recheckAPITimeout`,
- resumes only when progress ≥ threshold and no longer checking, and
- evicts entries on absolute timeout or non‑improving completion.
Given the learning that this worker is intentionally process‑lifetime and keyed by hash only under a single‑instance background search constraint, this implementation looks aligned with the design.
Based on learnings, this hash-only keying and long-lived worker are acceptable under current constraints.
Also applies to: 2591-2727
2823-2836: Largest‑file TV override in content detection is a good edge‑case fix

The special case in `selectContentDetectionRelease` that trusts an episode‑annotated largest file (when titles align and torrent metadata says "movie") should correct misclassified season packs where filenames clearly indicate TV episodes. This is a targeted change with low risk to other content types.
3834-3839: Jackett job ID propagation and outcome reporting are well‑plumbed

Plumbing `searchResp.JobID` through `TorrentSearchResponse.JobID` and then feeding it into `reportIndexerOutcomes` lets you attribute adds/failures per indexer back to the scheduler job, while the `jobID == 0` guard avoids touching history for synchronous/test flows.

The add/fail counting logic in `processSearchCandidate` looks correct and ensures each indexer gets a single summarized outcome per job.

Based on learnings, skipping outcome reporting for jobID=0 is consistent with how synchronous paths are modeled.
Also applies to: 4055-4055, 4110-4110, 5058-5088, 5119-5147
4415-4444: Search run finalization retry helper is robust

Using `updateSearchRunWithRetry(context.Background(), state.run)` in `finalizeSearchRun` ensures you persist terminal state even if the original run context was canceled, with bounded retries and reasonable logging on failure. This is a good reliability improvement over single‑shot updates.
5487-5554: Content‑based indexer filtering and telemetry look solid
`filterIndexersByExistingContent` now:

- builds a richer `contentMatches` list for UI/telemetry,
- records explicit `excludedIndexers` reasons, and
- logs a concise summary when any indexers are filtered out.
That should make it much easier to reason about why certain indexers were skipped while keeping the default path (no indexer info or cache miss) safe by falling back to the original list.
Also applies to: 5601-5607
5948-5959: Recent search results now show only successful additions

Updating `appendSearchResult` so `recentResults` tracks only `result.Added == true` gives the UI a much clearer sliding window of actual additions instead of being diluted by skips/failures. The cap of 10 entries is a sensible default.
6217-6251: `buildCrossSeedTags` implementation matches the documented semantics

The helper correctly:

- dedups tags via a set,
- preserves `sourceTags` ordering,
- and only pulls in matched‑torrent tags when `inheritSourceTags` is true.

This makes the various call sites easier to reason about and avoids subtle duplication bugs when combining automation/webhook/seeded search tags with existing qBittorrent tags.
2554-2556: I'll help you verify the hash normalization convention consistency between the crossseed service and the qbittorrent sync manager. Let me search the codebase for the relevant implementations.

```shell
#!/bin/bash
# Search for normalizeHash implementations and hash normalization in both packages
echo "=== normalizeHash in crossseed package ==="
rg -n "normalizeHash" internal/services/crossseed -C3

echo ""
echo "=== Hash normalization in qbittorrent sync manager ==="
rg -n "GetTorrentFilesBatch|GetTorrents" internal/qbittorrent -C5

echo ""
echo "=== ToUpper/ToLower usage in qbittorrent package ==="
rg -n "ToUpper|ToLower" internal/qbittorrent -C2

echo ""
echo "=== Check for normalizeHash in qbittorrent ==="
rg -n "normalizeHash" internal/qbittorrent -C3

echo ""
echo "=== Hashes field/parameter usage in qbittorrent ==="
rg -n "Hashes\s*:" internal/qbittorrent -C5
```

internal/services/crossseed/crossseed_test.go (7)
25-25: LGTM! Import addition is appropriate.

The `releases` package import is correctly added and used by tests that construct `ReleaseCache` for parsing release names.
232-1185: Excellent test coverage expansion.

The massively expanded `TestDetermineSavePath` now covers a comprehensive matrix of scenarios including:
- TV shows: season packs, individual episodes, different naming conventions
- Movies: single files, folders, collections
- Edge cases: unicode characters, long paths, mixed separators, empty paths, deep nesting prevention
The test cases are well-documented with clear descriptions. The addition of `contentLayout`, `matchType`, and file lists aligns with the broader content layout support changes in the PR.
1187-1284: LGTM! Well-structured integration tests.

Both `TestPartialInPackIntegration` and `TestPartialInPackMovieCollectionIntegration` effectively verify the full chain from matching to save path determination for partial-in-pack scenarios. The tests:

- Use `t.Parallel()` for efficient execution
- Clearly document the expected behavior in assertions
- Verify specific behavior changes introduced in this PR (TV episodes into season packs, movies into collections)
1339-1455: LGTM! Tag inheritance test coverage is comprehensive.

The updated `TestCrossSeed_CategoryAndTagPreservation` properly covers the new `inheritSourceTags` functionality with test cases for:
- Tag inheritance enabled with source tags
- Tag inheritance disabled
- Empty source tags
- Indexer category usage
The test structure is clear and all scenarios are properly validated.
2443-2445: LGTM! Appropriate test double implementation.

The `GetAppPreferences` method addition to `fakeSyncManager` provides the necessary interface implementation for tests. The default "Original" content layout is a reasonable test value.
2587-2862: Excellent comprehensive content layout testing.
`TestDetermineSavePathContentLayoutScenarios` provides thorough coverage of save path determination across different qBittorrent content layout settings (Original, Subfolder, NoSubfolder). The test correctly verifies that:
- Save path logic remains consistent across all layout types
- Single-file TV episodes use ContentPath (not SavePath + Subfolder)
- Partial-in-pack scenarios work correctly
- Collection and same-content-type scenarios are handled properly
This comprehensive testing significantly reduces the risk of layout-specific regressions.
3028-3226: LGTM! Comprehensive recovery test coverage.

The `TestRecoverErroredTorrents_*` test suite provides excellent coverage of the torrent recovery functionality:
- No-op case (no errored torrents)
- Single and batch recovery scenarios
- Different error states (Error, MissingFiles)
- Context cancellation handling
- Mixed states (selective recovery)
- Empty input handling
The tests properly verify batching behavior, selective processing, and that non-errored torrents are not affected. Test structure is clear and assertions are appropriate.
Actionable comments posted: 0
🧹 Nitpick comments (4)
web/src/types/index.ts (1)
1036-1064: Search history types align with backend and add outcome tracking

`SearchHistoryEntry` and `SearchHistoryResponse` line up with the backend definitions (IDs as numbers, time fields as strings, status union covering `success | error | skipped | rate_limited`) while adding `outcome`/`addedCount` for cross-seed outcome tracking. As a minor enhancement, you could narrow `source` to a string union (e.g. `"memory" | "database"`) once the backend values are stable.

internal/api/handlers/crossseed.go (1)
168-233: `applyAutomationSettingsPatch` correctly applies new tag fields; consider PUT behaviour

The patch application logic for the new tag fields (`RSSAutomationTags`, `SeededSearchTags`, `CompletionSearchTags`, `WebhookTags`, `InheritSourceTags`) is correct: nil leaves existing values untouched, while provided slices (including empty) and booleans overwrite settings as expected.

One design consideration: `automationSettingsRequest`/`UpdateAutomationSettings` (PUT) still don't expose these tagging fields, so tags can only be managed via PATCH. If that's not intentional, consider extending the PUT request and constructor for `CrossSeedAutomationSettings` to include the same fields so a full replacement can also control tags; otherwise, a brief comment noting "tags are PATCH-only" might prevent confusion later.

internal/services/jackett/service.go (2)
2399-2428: Escalating backoff on rate limits is coherent, but duration plumbing is now dead code
`handleRateLimit` now ignores the provided `cooldown` duration and instead:

- Calls `s.rateLimiter.RecordFailure(idx.ID)` to obtain an escalating cooldown.
- Persists and logs that cooldown, and marks it in the rate limiter for future dispatch-time skips.

This matches the new backoff design, but it also means:

- The `time.Duration` parameter on `handleRateLimit` is unused (`_ time.Duration`), and `detectRateLimit`'s first return value (including `extractRetryAfter` parsing of `Retry-After`) is no longer used anywhere.

You might want to either:

- Simplify `detectRateLimit` to just return a `bool` (or rename/remove `extractRetryAfter` and the unused duration), or
- Feed `Retry-After` into `RecordFailure` (e.g., as a floor/ceiling) if you intend to respect server hints.

Right now this is just dead/unused data flow, not a functional bug.
3706-3763: ActivityStatus surface is useful; consider stabilizing cooldown ordering
`IndexerCooldownStatus` and `ActivityStatus`, plus `GetActivityStatus`, provide a helpful summary of:

- Current scheduler state (when present), and
- Per-indexer cooldowns derived from `rateLimiter.GetCooldownIndexers()`, with indexer names resolved via a single `indexerStore.List` call.

The implementation is best-effort (it tolerates `List` errors), and it correctly calls `ensureRateLimiterState()` to restore persisted cooldowns before reading.

One small optional enhancement: `status.CooldownIndexers` is built from a map iteration, so its order is nondeterministic. If the UI benefits from stable ordering, you might sort this slice by `CooldownEnd` or `IndexerName` before returning it.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- README.md (1 hunks)
- cmd/qui/main.go (1 hunks)
- go.mod (1 hunks)
- internal/api/handlers/crossseed.go (3 hunks)
- internal/services/jackett/models.go (2 hunks)
- internal/services/jackett/service.go (21 hunks)
- internal/web/swagger/openapi.yaml (0 hunks)
- web/src/lib/api.ts (3 hunks)
- web/src/types/index.ts (4 hunks)
💤 Files with no reviewable changes (1)
- internal/web/swagger/openapi.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
- cmd/qui/main.go
- go.mod
- web/src/lib/api.ts
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Learnt from: s0up4200
Repo: autobrr/qui PR: 553
File: internal/database/migrations/014_add_torznab_indexers.sql:294-310
Timestamp: 2025-11-15T21:35:06.686Z
Learning: In the qui codebase, storing binary BLOB data (e.g., torrent_data, response_data) directly in SQLite tables for cache purposes (such as torznab_torrent_cache and torznab_search_cache) is intentional and acceptable. This design choice prioritizes system simplicity, self-containment, and backup ease over external file/object storage, as the data is short-lived cache and no performance issues have been observed. External storage would introduce unnecessary complexity for cleanup and migrations.
📚 Learning: 2025-11-27T15:33:39.007Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/jackett/service.go:309-330
Timestamp: 2025-11-27T15:33:39.007Z
Learning: In internal/services/jackett/service.go, synchronous execution paths (test executors, interactive searches with explicit indexer selection, and scheduler-unavailable fallback) intentionally pass jobID=0 to result callbacks. This is acceptable because outcome tracking via ReportIndexerOutcome is only used for async cross-seed operations that always use the scheduler and receive unique jobIDs. Interactive searches and tests do not use outcome tracking.
Applied to files:
- internal/services/jackett/models.go
- internal/services/jackett/service.go
📚 Learning: 2025-11-28T20:32:30.126Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:209-212
Timestamp: 2025-11-28T20:32:30.126Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The cross-seed recheck-resume worker intentionally runs for the process lifetime and keys pending entries by hash only. This is acceptable under the current constraint that background seeded-search runs operate on a single instance at a time; graceful shutdown and instanceID|hash keying are deferred by design.
Applied to files:
internal/services/jackett/service.go
📚 Learning: 2025-11-25T22:46:03.762Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 632
File: internal/backups/service.go:1401-1404
Timestamp: 2025-11-25T22:46:03.762Z
Learning: In qui's backup service (internal/backups/service.go), background torrent downloads initiated during manifest import intentionally use a fire-and-forget pattern with the shared service context (s.ctx). Per-run cancellation is not needed, as orphaned downloads completing after run deletion are considered harmless and acceptable. This design prioritizes simplicity over per-run lifecycle management for background downloads.
Applied to files:
internal/services/jackett/service.go
📚 Learning: 2025-11-28T22:21:20.730Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go:2415-2457
Timestamp: 2025-11-28T22:21:20.730Z
Learning: Repo: autobrr/qui PR: 641
File: internal/services/crossseed/service.go
Learning: The determineSavePath function intentionally includes a contentLayout string parameter for future/content-layout branching and API consistency. Its presence is by design even if unused in the current body; do not flag as an issue in reviews.
Applied to files:
internal/services/jackett/service.go
🧬 Code graph analysis (2)
web/src/types/index.ts (3)
internal/services/jackett/scheduler.go (3)
- `SchedulerTaskStatus` (945-953)
- `SchedulerJobStatus` (956-960)
- `SchedulerStatus` (963-970)

internal/services/jackett/service.go (1)
- `IndexerCooldownStatus` (3707-3712)

internal/services/jackett/search_history.go (2)
- `SearchHistoryEntry` (14-43)
- `SearchHistoryResponse` (46-50)
internal/services/jackett/service.go (5)
internal/services/jackett/search_history.go (8)
- `SearchHistoryBuffer` (53-60)
- `IndexerOutcomeStore` (233-240)
- `NewSearchHistoryBuffer` (63-72)
- `NewHistoryRecorder` (199-203)
- `NewIndexerOutcomeStore` (243-252)
- `SearchHistoryResponseWithOutcome` (315-319)
- `SearchHistoryEntryWithOutcome` (308-312)
- `SearchHistoryEntry` (14-43)

internal/services/filesmanager/service.go (1)
- `Service` (20-25)

internal/models/torznab_indexer.go (1)
- `TorznabIndexer` (64-81)

internal/services/jackett/scheduler.go (3)
- `SubmitRequest` (340-346)
- `JobCallbacks` (330-337)
- `SchedulerStatus` (963-970)

internal/buildinfo/buildinfo.go (1)
- `String` (29-31)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (17)
README.md (2)
495-508: Cross-seed apply payload + tagging docs look consistent

The `instanceIds` scoping example and the Tagging paragraph correctly describe the `/api/cross-seed/apply` behaviour (global vs scoped instances, webhook tags override, inherit-source-tags option). No issues from an API/UX perspective.
516-527: Global rules / Source Tagging section matches new configuration model

The "Global Cross-Seed Settings" and "Source Tagging" bullets line up with the new per-source tag fields (RSS, Seeded Search, Completion Search, Webhook, inherit source tags) and clearly explain how ignore patterns and tagging interact. Wording is clear and technically accurate.
web/src/types/index.ts (3)
998-1034: Scheduler and indexer activity types correctly mirror backend structs
`SchedulerTaskStatus`, `SchedulerJobStatus`, `SchedulerStatus`, `IndexerCooldownStatus`, and `IndexerActivityStatus` fields, names, and shapes match the Go structs and JSON tags (including `createdAt`/`cooldownEnd` as strings and `scheduler` being optional). This will serialize/deserialize cleanly for the activity UI.
1338-1367: Cross-seed automation per-source tagging types are consistent

The added `rssAutomationTags`, `seededSearchTags`, `completionSearchTags`, `webhookTags`, and `inheritSourceTags` fields (and their optional counterparts in the patch type) match the updated backend `CrossSeedAutomationSettings` model and README wording. This gives the UI full control over source-specific tagging without breaking existing settings.
1439-1446: CrossSeedSearchStatus shape remains coherent after change
`CrossSeedSearchStatus` (including `recentResults` and `nextRunAt`) is still consistent with the search run/status handlers; the touched line doesn't introduce any type-level issues.

internal/api/handlers/crossseed.go (2)
53-73: Patch request struct correctly models optional source-specific tags

Adding `RSSAutomationTags`, `SeededSearchTags`, `CompletionSearchTags`, `WebhookTags` as `*[]string` and `InheritSourceTags` as `*bool` matches the existing optional-field pattern (`optionalString`, `optionalInt`) and cleanly distinguishes "field absent" from "set to empty/false".
139-158: `isEmpty` now properly accounts for completion and tag fields

The updated `automationSettingsPatchRequest.isEmpty` check ensures that patches which only change completion settings or any of the tag-related fields are treated as non-empty, avoiding spurious `400` responses when only those sections are updated.
14-15: Release name and job ID metadata wiring looks consistent

The new `ReleaseName` on `TorznabSearchRequest` and `JobID` on `SearchResponse` have sane JSON tags (`release_name`, `jobId`) and align with the history/outcome types (`SearchHistoryEntry.ReleaseName`, `SearchHistoryEntry.JobID`, `SearchHistoryEntryWithOutcome`). This should make correlating requests, history, and outcomes straightforward without impacting existing search semantics.

Also applies to: 50-51
internal/services/jackett/service.go (9)
84-89: Search history and indexer outcome tracking are cleanly integrated and nil-safe

The additions of `searchHistory` and `indexerOutcomes` on `Service`, the `WithSearchHistory`/`WithIndexerOutcomes` options, and the `GetSearchHistory`/`GetSearchHistoryStats`/`ReportIndexerOutcome` methods are well factored:

- Options cleanly gate the features; all call sites check for `nil` before use, so disabling them is safe.
- Wiring the `HistoryRecorder` into `searchScheduler` only when history is configured avoids unnecessary overhead.
- `GetSearchHistory` gracefully returns an empty response when history is disabled and correctly merges outcomes by `(jobID, indexerID)` when present, matching the `SearchHistoryEntry`/`IndexerOutcomeStore` shapes in `search_history.go`.

Based on learnings, this matches the intended outcome-tracking path for async cross-seed flows. No functional issues spotted here.
Also applies to: 441-459, 461-512
149-157: Release name propagation into `searchContext` is straightforward and consistent

Threading `req.ReleaseName` into `searchContext.releaseName` at construction time gives the scheduler/history recorder access to the original full release name for debugging/history without affecting query semantics. The field is optional and read-only metadata, so this change is low risk and aligns with `SearchHistoryEntry.ReleaseName`.

Also applies to: 590-595
280-288: Scheduler construction wired to shared rate limiter looks correct
`NewService` now constructs `searchScheduler` via `newSearchScheduler(rl, defaultMaxWorkers)`, sharing the same `RateLimiter` instance used elsewhere in the service. This centralizes rate-limit state between direct searches and scheduled jobs, which is necessary for the new dispatch-time rate limiting and escalating backoff to behave consistently.
309-323: Scheduler bridging and jobID-aware result aggregation look sound

The new `executeQueuedSearch` + `searchIndexersWithScheduler` flow appears well-structured:

- Sync/test/fallback paths still execute immediately and invoke `resultCallback` with `jobID=0`, which is acceptable per the documented design that outcome tracking is only used for async cross-seed/scheduler-backed flows. Based on learnings.
- When `searchScheduler` is present, `Submit` is called with an `ExecFn` that runs a single-indexer search, and `JobCallbacks` bridge the scheduler's per-indexer callbacks into the existing aggregate `resultCallback`.
- The aggregation code is concurrency-safe (`mu` guarding `allResults`, `coverage`, `failures`, `lastErr`), skips `RateLimitWaitError` as a non-failure, and only surfaces an error when all indexers fail with non-rate-limit errors.
- Coverage is normalized via `coverageSetToSlice`, so callers get deterministic, sorted indexer-ID coverage.

This is a solid adaptation of legacy semantics onto the new job-based scheduler.
Also applies to: 325-416
654-747: JobID propagation through `performSearch` response paths is coherent

The updated `resultCallback` in `performSearch`:

- Propagates `jobID` into both the paginated `SearchResponse` and the full-response copy used for cache persistence, so job-aware consumers (e.g., cross-seed outcome tracking/history views) can correlate responses with scheduler jobs.
- Preserves existing semantics for cache-only and error paths, with `Partial` still driven by deadline handling and `cacheSig` logic unchanged.
- Correctly uses `jobID` from the scheduler-backed path and `0` from sync/fallback paths, matching the intended separation between async cross-seed tracking and interactive/test flows. Based on learnings.

No issues found with the updated error handling or caching integration.
805-829: `Recent()` now benefits from scheduler and job IDs without changing behavior

The `Recent` method's new `resultCallback(jobID, ...)`:

- Reuses the scheduler pipeline via `executeQueuedSearch`, aligning behavior with normal searches.
- Surfaces `jobID` on `SearchResponse`, enabling activity/history correlation if desired.
- Maintains prior semantics for partial results (driven by deadline or errors) and logging.
This is a clean, non-breaking enhancement.
1708-1710: Rate-limiting responsibilities are clearly separated and escalation reset is correct

The new comments and behavior in `executeIndexerSearch`:

- Clarify that dispatch-time rate limiting is now fully handled by the scheduler (via `NextWait()` and MaxWait), with `BeforeRequest` removed.
- Use `s.rateLimiter.RecordSuccess(idx.ID)` on successful searches to reset the escalating backoff state, which is essential for the new failure-based cooldown strategy described in the PR objectives.

This is consistent with the new scheduler/backoff design; no functional problems spotted.
Also applies to: 1761-1763
1908-1919: `runIndexerSearch` correctly adapts to the new `executeIndexerSearch` signature

Updating `runIndexerSearch` to call `result := s.executeIndexerSearch(ctx, idx, params, meta, indexerExecOptions{})` preserves previous behavior (no per-indexer activity logging in this path) while matching the new `indexerExecOptions` parameter. Coverage and error handling logic downstream are unchanged.
3128-3157: `contentType.String` provides a clear canonical representation

The new `contentType.String()` covers all defined enum values and returns sensible, stable string identifiers (`"movie"`, `"tv"`, `"music"`, `"book"`, etc., with `"unknown"` as a default). This is useful for logging, history, and any APIs that want human-readable content types, and matches the existing contentType cases in `searchModeForContentType`/category mapping.
Summary
- Replace blocking `BeforeRequest()` with async job-based scheduling
- Compute `NextWait()` at dispatch, execute ready tasks immediately
- `Submit()` returns immediately, results delivered via callbacks
- Add `SCHEDULER.md` explaining the architecture

Problem
The blocking rate limiter model caused several issues:
- Callers blocked inside `BeforeRequest()`, which slept until rate limits cleared (up to 10 goroutines sitting blocked)
- Unused `DailyRequestLimit` from request count

Solution
Complete scheduler redesign with "job in, scheduled, job done" model:

- `Submit()` returns immediately with a job ID
- The scheduler computes `NextWait()` and decides to execute, defer, or skip

Changes
- `jackett/scheduler.go`, `jackett/scheduler_test.go`
- `jackett/ratelimiter.go`, `jackett/ratelimiter_test.go`
- `jackett/service.go`: remove `BeforeRequest()` calls, remove unused `onReady` parameter
- `jackett/models.go`: remove `OnReady` callback field, remove unused `context` import
- `models/torznab_indexer.go`: remove `HourlyRequestLimit`, `DailyRequestLimit` fields and related methods
- `migrations/014_*.sql`: remove `hourly_request_limit`, `daily_request_limit` columns
- `jackett/SCHEDULER.md`
- `crossseed/service.go`: `RateLimitPriorityBackground` (60s interval, 45s max)
- `qbittorrent/sync_manager.go`: `GetTorrentFilesBatch`

Architecture
See `internal/services/jackett/SCHEDULER.md` for detailed documentation.

Test plan
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests
Chores
✏️ Tip: You can customize this high-level summary in your review settings.