Walkthrough

Migrate many cross-seed subsystems to pointer-based release types, add a generic TTL-backed string Normalizer, replace single-item qBittorrent calls with filtered/batch/canonicalized APIs, implement batch DB/file caching and debounced sync, and add a prioritized Jackett scheduler with rate-limiting and extensive tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant API
    participant Service
    participant SyncMgr as qBittorrent
    participant FilesMgr
    participant Normalizer
    rect rgb(240,255,240)
        Client->>API: HTTP request
        API->>Service: ctx (WithSearchPriority / cancel-safe)
    end
    rect rgb(235,245,255)
        Service->>SyncMgr: GetTorrents(ctx, instanceID, filter)
        SyncMgr-->>Service: []qbt.Torrent
        Service->>SyncMgr: GetTorrentFilesBatch(ctx, instanceID, [hashes])
        SyncMgr-->>Service: map[canonicalHash]TorrentFiles
        Service->>FilesMgr: GetCachedFilesBatch(ctx, instanceID, [hashes])
        FilesMgr-->>Service: cached map + missing
    end
    rect rgb(255,250,230)
        Service->>Normalizer: Normalize(title/group/hash/patterns)
        Normalizer-->>Service: normalized values
        Service->>Service: pointer-based matching/enrich (*rls.Release)
        Service-->>Client: response
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 0
🧹 Nitpick comments (15)
internal/services/crossseed/layout_test.go (1)
9-10: Layout tests now correctly pass a normalizer

Passing `stringutils.NewDefaultNormalizer()` into `classifyTorrentLayout` aligns the test with the updated function signature and normalization strategy; behavior remains unchanged. If you ever care about tiny test overhead, you could instantiate a single normalizer before the loop and reuse it, but that's strictly optional.

Also applies to: 56-60
internal/services/crossseed/service_search_test.go (1)
172-179: Unify skipCache key normalization in tests

Here `skipCache` is asserted with `stringutils.DefaultNormalizer.Normalize("recent-hash")`, while `TestPropagateDuplicateSearchHistory` still checks `state.skipCache[strings.ToLower(hash)]`. To keep tests resilient to future changes in the default normalization transform, consider using the same normalizer path (e.g., `Service.stringNormalizer` or `stringutils.DefaultNormalizer`) in all skipCache-related assertions instead of mixing in `strings.ToLower`.

Based on learnings
Also applies to: 210-219
internal/services/crossseed/dedup_test.go (1)
10-17: Dedup tests now inject required parsing/normalization dependencies

Populating `releaseCache: NewReleaseCache()` and `stringNormalizer: stringutils.NewDefaultNormalizer()` (and `syncManager` in the second test) brings the test `Service` instances in line with the production Service. This ensures dedup decisions see the same parsed/normalized view as real runs. You might eventually factor a small helper to build a test Service to avoid repeating this wiring, but it's not urgent.

Also applies to: 72-76
internal/services/crossseed/matching_layout_test.go (1)
102-110: Be mindful of reusing the same parsed release pointer across match calls

In `TestFindBestCandidateMatch_PrefersTopLevelFolderOnTie`, `singleRelease := svc.releaseCache.Parse("Minimal.Payload")` is passed by pointer into two separate `getMatchType` calls with different candidate file layouts. If `getMatchType` ever mutates the `candidateRelease` (e.g., adjusts type or fields based on layout), the second call will see a mutated release, which may diverge from how production code parses candidates independently. If `getMatchType` is intended to be pure with respect to its `*rls.Release` arguments, nothing needs changing; otherwise, consider cloning or re-parsing for the second call.

internal/services/crossseed/search_query.go (1)
27-27: Consider adding nil check for defensive programming.

The function now accepts a pointer parameter but doesn't check for nil before dereferencing (lines 30, 34, 46). While the current call sites appear to always pass valid pointers, adding a defensive nil check could prevent potential panics.

Apply this diff to add a nil guard:

```diff
 func buildSafeSearchQuery(name string, release *rls.Release, baseQuery string) SearchQuery {
+	if release == nil {
+		return SearchQuery{Query: strings.TrimSpace(name)}
+	}
 	// If rls already gave us structured series/episode info, keep it.
 	var seasonPtr, episodePtr *int
```

internal/services/crossseed/align.go (2)
386-393: Consider nil checks for robustness.

The function now accepts pointer parameters but dereferences them without nil checks (lines 388-389). While current usage appears safe, adding nil guards would improve defensive programming.

Apply this diff:

```diff
 func shouldRenameTorrentDisplay(newRelease, matchedRelease *rls.Release) bool {
+	if newRelease == nil || matchedRelease == nil {
+		return true
+	}
 	// Keep episode torrents named after the episode even when pointing at season pack files
 	if newRelease.Series > 0 && newRelease.Episode > 0 &&
```
395-401: Consider nil checks for robustness.

Similar to `shouldRenameTorrentDisplay`, this function dereferences pointer parameters without nil checks (lines 396-397).

Apply this diff:

```diff
 func shouldAlignFilesWithCandidate(newRelease, matchedRelease *rls.Release) bool {
+	if newRelease == nil || matchedRelease == nil {
+		return true
+	}
 	if newRelease.Series > 0 && newRelease.Episode > 0 &&
```

internal/services/crossseed/variant_overrides.go (1)
55-93: Consider nil check for robustness.

The method dereferences the Release pointer without checking for nil (lines 66, 88-90). While current usage appears safe, a defensive nil check would prevent potential panics.

Apply this diff:

```diff
 func (o variantOverrides) releaseVariants(r *rls.Release) map[string]struct{} {
+	if r == nil {
+		return make(map[string]struct{})
+	}
 	variants := make(map[string]struct{})
```

pkg/stringutils/normalize.go (1)
13-37: Normalizer implementation is clean; consider defensive nil handling

The generic `Normalizer` wrapper over `ttlcache` looks good and keeps the transform/caching concerns nicely encapsulated. Two optional hardening ideas:

- Guard against a nil receiver/transform in `Normalize` to avoid panics if a caller ever forgets to initialize the normalizer.
- Document that `DefaultNormalizer` is global and shared across concerns (hashes, indexer names, etc.), since the key space is shared even though the transform is benign.

These aren't blockers but would make the utility more robust to misuse.
Also applies to: 40-42, 45-58, 60-61
internal/services/crossseed/parsing.go (1)
28-61: Pointer-based parsing helpers look correct; rely on non-nil releases

The migration of parsing helpers to `*rls.Release` (including `isAdultContent`, `DetermineContentType`, `normalizeReleaseTypeForContent`, `looksLikeVideoRelease`, and `ParseMusicReleaseFromTorrentName`) maintains the previous behavior while avoiding repeated value copies. The copy-on-write pattern (`normalized := *release`, `musicRelease := *baseRelease`) is sound.

This design does assume callers never pass a nil `*rls.Release`; given `ReleaseCache.Parse` is providing the inputs, that's fine, but it's worth keeping in mind if these helpers are ever exposed more broadly.

Also applies to: 142-161, 305-321, 323-351, 435-484
internal/services/crossseed/matching.go (2)
88-124: Title normalization in releasesMatch depends on non-nil stringNormalizer

Switching `releasesMatch` to use `s.stringNormalizer.Normalize` for titles is a nice centralization of normalization and should help performance once the cache is warm.

One caveat: this now assumes `s.stringNormalizer` is non-nil. `NewService` and updated tests set it correctly, but a zero-valued `Service` (e.g. constructed via `&Service{}` elsewhere in the package) would now panic. If you want extra safety, you could:

- Initialize `stringNormalizer` lazily when nil (falling back to `stringutils.DefaultNormalizer`), or
- Add a small helper on `Service` that always returns a valid normalizer.

Not urgent, but it would make the API harder to misuse.
Also applies to: 166-201, 253-262
278-283: File-ignore logic with Normalizer is solid; consider pattern pre-normalization

Using `shouldIgnoreFile(filename, patterns, normalizer)` with a shared normalizer gives you case-insensitive suffix and glob checks and amortizes transform cost via the TTL cache.

If you ever find `patterns` large or very hot, a minor optimization would be to pre-normalize and cache the patterns once per call site, so `Normalize` on patterns is not repeated on every filename scan. This is purely an optimization consideration; the current design is correct and reasonable.

Also applies to: 592-619
internal/services/crossseed/service.go (3)
1785-1794: Pointer-based release flows and music-specific parsing in search are sound

The changes to:

- Pass `*rls.Release` through `processCrossSeedCandidate`, `findBestCandidateMatch`, and `getMatchType`, and
- Use a pointer-based `contentDetectionRelease` plus `ParseMusicReleaseFromTorrentName` in `SearchTorrentMatches`

are consistent with the broader pointer migration. The logic for:

- Preferring the largest file's enriched release where it yields a concrete `Type`,
- Building a music-aware query (`Artist` + `Title`, album mapping), and
- Reusing `ParseMusicReleaseFromTorrentName` both for the search query and for populating `jackett.TorznabSearchRequest` music fields

looks correct and should improve search quality without altering behavior for non-music content.

Just keep relying on `ReleaseCache.Parse` never returning nil pointers; if that ever changes, these paths would need a nil guard.

Also applies to: 2091-2162, 2698-2990
2004-2051: Hash normalization now mixes normalizeHash and DefaultNormalizer; behavior is OK but could be unified

You now have two normalization paths for hashes:

- `normalizeHash` (upper + trim) used for qBittorrent RPC and file lookups (`batchLoadCandidateFiles`, etc.).
- `stringutils.DefaultNormalizer.Normalize` (lower + trim) used for:
  - `searchResultCacheKey` / `asyncFilteringCacheKey`,
  - `getTorrentByHash` comparisons,
  - `torrentHasTopLevelFolderCached`,
  - `propagateDuplicateSearchHistory` skipCache keys, and
  - `shouldSkipCandidate` skipCache keys.

Because both sides of each comparison go through the same transform, equality remains correct and you effectively gain case-insensitive behavior. However, there is now some conceptual duplication around "how we normalize hashes".

If you want to simplify mental overhead, you could:

- Either switch the cache key helpers to call `normalizeHash` directly, or
- Keep using the DefaultNormalizer but add a short comment near `normalizeHash` and the cache key helpers to clarify that one is for qBittorrent RPC and the other is for case-insensitive in-process keys.

Functionally this is fine; this is mainly about long-term clarity.
Also applies to: 3367-3381, 3390-3396, 3640-3650, 3689-3690, 3761-3811
5038-5080: evaluateReleaseMatch pointer migration integrates well with search and webhook scoring

Converting `evaluateReleaseMatch` to accept `*rls.Release` and using it in both:

- `SearchTorrentMatches` (to score candidate search results), and
- `CheckWebhook` (to annotate webhook matches with score and reasons)

is consistent with the rest of the pointer-based changes. The scoring heuristic (group/resolution/source/season/episode/year/codec/audio) is unchanged and the "title match" fallback keeps a non-empty reason string.

As with other helpers, this assumes non-nil releases, but all current call sites are backed by `ReleaseCache.Parse`, so that's acceptable.

Also applies to: 5464-5473
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
- internal/services/crossseed/align.go (2 hunks)
- internal/services/crossseed/align_test.go (2 hunks)
- internal/services/crossseed/crossseed_test.go (5 hunks)
- internal/services/crossseed/dedup_test.go (2 hunks)
- internal/services/crossseed/find_individual_episodes_test.go (2 hunks)
- internal/services/crossseed/layout.go (2 hunks)
- internal/services/crossseed/layout_test.go (2 hunks)
- internal/services/crossseed/matching.go (11 hunks)
- internal/services/crossseed/matching_layout_test.go (9 hunks)
- internal/services/crossseed/matching_title_test.go (3 hunks)
- internal/services/crossseed/parsing.go (8 hunks)
- internal/services/crossseed/parsing_test.go (1 hunk)
- internal/services/crossseed/search_query.go (1 hunk)
- internal/services/crossseed/search_query_test.go (3 hunks)
- internal/services/crossseed/service.go (17 hunks)
- internal/services/crossseed/service_example_test.go (1 hunk)
- internal/services/crossseed/service_search_test.go (3 hunks)
- internal/services/crossseed/variant_overrides.go (2 hunks)
- internal/services/crossseed/variant_overrides_test.go (6 hunks)
- pkg/releases/parser.go (2 hunks)
- pkg/stringutils/normalize.go (1 hunk)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-11-21T21:11:50.599Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.599Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
- internal/services/crossseed/matching_layout_test.go
- internal/services/crossseed/service_search_test.go
- internal/services/crossseed/matching.go
- internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
- internal/services/crossseed/service_search_test.go
- internal/services/crossseed/layout.go
- internal/services/crossseed/matching.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (14)
internal/services/crossseed/find_individual_episodes_test.go (1)
- pkg/stringutils/normalize.go (1)
  - `NewDefaultNormalizer` (35-37)

internal/services/crossseed/parsing_test.go (1)
- internal/services/crossseed/parsing.go (1)
  - `DetermineContentType` (143-300)

internal/services/crossseed/layout_test.go (1)
- pkg/stringutils/normalize.go (1)
  - `NewDefaultNormalizer` (35-37)

internal/services/crossseed/dedup_test.go (3)
- internal/services/crossseed/service.go (1)
  - `Service` (72-108)
- internal/services/crossseed/release_cache.go (1)
  - `NewReleaseCache` (13-15)
- pkg/stringutils/normalize.go (1)
  - `NewDefaultNormalizer` (35-37)

internal/services/crossseed/matching_layout_test.go (3)
- internal/services/crossseed/service.go (1)
  - `Service` (72-108)
- pkg/releases/parser.go (1)
  - `NewDefaultParser` (29-31)
- pkg/stringutils/normalize.go (1)
  - `NewDefaultNormalizer` (35-37)

internal/services/crossseed/service_search_test.go (2)
- internal/services/crossseed/release_cache.go (1)
  - `NewReleaseCache` (13-15)
- pkg/stringutils/normalize.go (2)
  - `NewDefaultNormalizer` (35-37)
  - `DefaultNormalizer` (61-61)

internal/services/crossseed/variant_overrides_test.go (2)
- internal/services/crossseed/service.go (1)
  - `Service` (72-108)
- pkg/stringutils/normalize.go (1)
  - `NewDefaultNormalizer` (35-37)

internal/services/crossseed/matching_title_test.go (2)
- internal/services/crossseed/service.go (1)
  - `Service` (72-108)
- pkg/stringutils/normalize.go (1)
  - `NewDefaultNormalizer` (35-37)

internal/services/crossseed/variant_overrides.go (1)
- pkg/version/version.go (1)
  - `Release` (19-34)

internal/services/crossseed/layout.go (1)
- pkg/stringutils/normalize.go (1)
  - `Normalizer` (19-22)

pkg/releases/parser.go (2)
- pkg/version/version.go (1)
  - `Release` (19-34)
- internal/database/db.go (1)
  - `New` (348-458)

internal/services/crossseed/crossseed_test.go (2)
- internal/services/crossseed/release_cache.go (1)
  - `NewReleaseCache` (13-15)
- pkg/stringutils/normalize.go (1)
  - `NewDefaultNormalizer` (35-37)

internal/services/crossseed/matching.go (2)
- internal/services/crossseed/service.go (1)
  - `Service` (72-108)
- pkg/stringutils/normalize.go (1)
  - `Normalizer` (19-22)

internal/services/crossseed/service.go (2)
- pkg/stringutils/normalize.go (3)
  - `Normalizer` (19-22)
  - `NewDefaultNormalizer` (35-37)
  - `DefaultNormalizer` (61-61)
- internal/services/crossseed/parsing.go (1)
  - `ParseMusicReleaseFromTorrentName` (435-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (22)
pkg/releases/parser.go (2)
18-26: Pointer-based cache wiring looks correct

Switching `Parser.cache` to `*ttlcache.Cache[string, *rls.Release]` and constructing it via `ttlcache.Options[string, *rls.Release]{}` is consistent and avoids copying `rls.Release` values on cache hits. No issues from a correctness or perf perspective.
34-50: Parse pointer semantics and early returns are safe

The new `Parse` implementation (nil receiver/empty key guards, caching `&release`, and returning pointers) is sound. Taking the address of the local `release` value is safe in Go and shares the same heap-allocated struct with the cache entry. Just be aware that callers now share a mutable `*rls.Release` across all users of a given key, which is usually desired but means it should be treated as read-only.

internal/services/crossseed/find_individual_episodes_test.go (1)
17-18: Service test setup correctly wires string normalizer

Injecting `stringNormalizer: stringutils.NewDefaultNormalizer()` into the test `Service` instance matches the production struct shape and prevents nil access when normalization is used during processing. Looks good.

Also applies to: 43-56
internal/services/crossseed/service_search_test.go (1)
16-22: Search queue test Service wiring matches production dependencies

Initializing the `Service` in `TestRefreshSearchQueueCountsCooldownEligibleTorrents` with `releaseCache: NewReleaseCache()` and `stringNormalizer: stringutils.NewDefaultNormalizer()` is consistent with the real struct and avoids nil dereferences during parsing/normalization. No issues here.

Also applies to: 130-146
internal/services/crossseed/service_example_test.go (1)
149-169: enrichReleaseFromTorrent test updated correctly for pointer API

Switching the call to `enrichReleaseFromTorrent(&fileRelease, &torrentRelease)` matches the new pointer-based signature and keeps the test logic intact. No further changes needed.

internal/services/crossseed/matching_layout_test.go (3)
15-16: Matching/layout tests updated consistently for parser + normalizer

Across these tests, constructing `Service` with `releaseCache: releases.NewDefaultParser()` and `stringNormalizer: stringutils.NewDefaultNormalizer()` plus switching all match helpers to accept `*rls.Release` keeps the tests aligned with the refactored APIs and avoids unnecessary value copying. This looks coherent and maintains the intended behaviors.

Also applies to: 21-24, 44-47, 77-80, 188-192, 209-212, 239-242, 267-270, 267-272
139-147: Test batch GetTorrentFilesBatch implementation matches normalizeHash usage

The test `candidateSelectionSyncManager.GetTorrentFilesBatch` normalizes keys with `normalizeHash(h)` after delegating to `GetTorrentFiles`, which lowercases the lookup key. This mirrors the production pattern of using normalized (e.g., canonicalized) hashes as map keys while still allowing flexible inputs. The behavior is consistent for the tests' purposes.

Based on learnings
185-204: Title-based match tests are correctly migrated to pointer-based releases

All `getMatchTypeFromTitle` and `getMatchType` invocations now pass `*rls.Release` for target/candidate releases while the Service is initialized with the new parser and normalizer. This matches the updated matching/parsing code paths and keeps the title-based tests meaningful without changing their expectations.

Also applies to: 206-234, 236-262, 264-283
internal/services/crossseed/parsing_test.go (1)
279-288: DetermineContentType test correctly updated for pointer parameter

Passing `&tt.release` into `DetermineContentType` matches the new pointer-based signature. Because `tt` is the per-iteration copy from the range loop, any internal mutation of the release during classification won't leak across test cases. The assertions remain valid.

internal/services/crossseed/crossseed_test.go (2)
25-25: LGTM!

The addition of the stringutils package import aligns with the introduction of centralized string normalization across the crossseed service.
1093-1098: LGTM!

The Service initialization correctly includes the new `stringNormalizer` field, ensuring consistent string normalization behavior throughout the crossseed tests.

internal/services/crossseed/search_query_test.go (1)
14-14: LGTM!

Test properly updated to pass a pointer to Release, aligning with the function signature change in `buildSafeSearchQuery`.

internal/services/crossseed/matching_title_test.go (2)
9-13: LGTM!

Tests properly initialize the Service with the string normalizer, aligning with the broader refactor to centralize string normalization.
30-31: LGTM!

Test calls correctly updated to pass pointer references to Release structs, matching the updated `releasesMatch` signature.

internal/services/crossseed/align_test.go (2)
142-146: LGTM!

Test calls correctly updated to pass pointer references, aligning with the updated function signatures in `align.go`.

155-159: LGTM!

Test calls properly updated to use pointer parameters, consistent with the signature changes.
internal/services/crossseed/layout.go (1)
51-56: Nil-handling verification complete: normalizer is guaranteed non-nil by design.

Verification confirms the original concern: `shouldIgnoreFile` (matching.go:592) directly calls `normalizer.Normalize()` without nil checks on lines 593 and 596, which would panic if nil. However, the codebase analysis shows this is not a practical issue:

- The Service constructor (service.go:142-167) always initializes `stringNormalizer` to `stringutils.NewDefaultNormalizer()`, guaranteeing it is never nil.
- All production call sites pass `s.stringNormalizer`, which is guaranteed non-nil from Service initialization.
- No instances exist where normalizer is set to nil.

Recommendation: The code is safe because the design pattern enforces non-nil through Service initialization. To improve defensive programming and clarity, consider adding a comment to `shouldIgnoreFile` documenting that the normalizer parameter must be non-nil, or add an explicit nil check early in the function.

internal/services/crossseed/variant_overrides_test.go (1)
11-12: Tests correctly updated for pointer-based API and normalizer dependency

Using `&release` everywhere and initializing `Service{stringNormalizer: stringutils.NewDefaultNormalizer()}` keeps the tests aligned with the new pointer + normalization behavior. This is the right pattern to avoid nil panics in `releasesMatch` and matches the runtime wiring in `NewService`.

Also applies to: 21-22, 31-32, 41-42, 49-50, 55-56, 60-87
internal/services/crossseed/matching.go (1)
35-52: Release-key creation and safe parsing align with pointer-based cache
`makeReleaseKey(*rls.Release)` and `Service.parseReleaseName` now line up with the pointer-returning release cache, and the nil/cache guard in `parseReleaseName` (`&rls.Release{}`) is a sensible defensive fallback for tests or partially-initialized services, avoiding panics in hot paths like matching/layout.

Also applies to: 73-79
internal/services/crossseed/service.go (3)
37-43: Service wiring for string normalization is coherent

Importing `stringutils` and adding `stringNormalizer *stringutils.Normalizer[string, string]` to `Service`, then initializing it in `NewService` with `stringutils.NewDefaultNormalizer()`, cleanly centralizes normalization for matching and ignore logic. This also keeps unit tests in the same package free to override or inject a different normalizer when needed.

Given how widely `s.stringNormalizer` is now used, treating it as a required dependency via `NewService` is appropriate.

Also applies to: 71-84, 141-175
3383-3406: Hash-based lookups and dedup/skip caches using DefaultNormalizer are consistent

Using `stringutils.DefaultNormalizer.Normalize` for:

- `needle` and candidate hashes in `getTorrentByHash`,
- `torrentWithRelease.release *rls.Release` in dedup flows (paired with pointer-based parsing),
- hash keys in `torrentHasTopLevelFolderCached`,
- normalized duplicate hashes in `propagateDuplicateSearchHistory`, and
- `state.skipCache` keys in `shouldSkipCandidate`

provides consistent, case-insensitive treatment of hashes across these in-memory structures. This pairs cleanly with the batch/individual file fetches that still use `normalizeHash` for qBittorrent calls.

Overall, the dedup, skip, and lookup behavior should remain unchanged while being slightly more robust to hash case/whitespace variations.
Also applies to: 3515-3527, 3636-3651, 3694-3715, 3756-3811
4801-4822: Indexer/domain normalization via DefaultNormalizer is thorough and consistent
`normalizeIndexerName` now feeds indexer names through `stringutils.DefaultNormalizer.Normalize` and then `normalizeDomainName`, and `trackerDomainsMatchIndexer` uses the same normalization pattern (plus TLD stripping and hardcoded mappings) to compare tracker domains against indexer names/domains.

Given the amount of fallback logic here (direct match, specific-domain match, partial + normalized matches), unifying everything through the same lower + trim Normalizer is a good move and should reduce subtle case/spacing issues when correlating trackers and indexers.
Also applies to: 4537-4773
Actionable comments posted: 3
🧹 Nitpick comments (4)
internal/services/crossseed/dedup_test.go (1)
62-86: Test name may not reflect current behavior.

The test name `TestService_deduplicateSourceTorrents_PrefersRootFolders` suggests validation of a preference for root folders, but Line 85 now expects `hash-flat` (the non-root variant) to be selected with the comment "prefer oldest torrent". Since `hash-flat` has `AddedOn: 1` (older) and `hash-root` has `AddedOn: 2` (newer), the test is validating age-based selection rather than root-folder preference.

Consider renaming the test to better reflect the actual deduplication logic being validated, for example `TestService_deduplicateSourceTorrents_PrefersOldestTorrent`, or updating the test to explicitly validate both the age-based and structure-based logic.

internal/services/crossseed/service.go (3)
82-82: Consider making stringNormalizer private.

The `stringNormalizer` field is exported (public), but based on usage patterns in the file it appears to be an internal implementation detail. Consider renaming to `stringNormalizer` (lowercase) to prevent external access unless there's a specific need for tests or other packages to access it directly.

```diff
-	stringNormalizer *stringutils.Normalizer[string, string]
+	stringNormalizer *stringutils.Normalizer[string, string]
```
2913-2916: Verify nil safety for sourceRelease dereferencing.

Lines 2913 and 2916 dereference `sourceRelease` and the result of `ParseMusicReleaseFromTorrentName` without nil checks. While the context suggests these should be non-nil (sourceRelease is derived from parsing on line 2653), adding defensive nil checks would improve robustness:

```diff
 	// Add music-specific parameters if we have them
 	if contentInfo.IsMusic {
 		// Parse music information from the source release or torrent name
 		var musicRelease rls.Release
-		if sourceRelease.Type == rls.Music && sourceRelease.Artist != "" {
+		if sourceRelease != nil && sourceRelease.Type == rls.Music && sourceRelease.Artist != "" {
 			musicRelease = *sourceRelease
 		} else {
 			// Try to parse music info from torrent name
-			musicRelease = *ParseMusicReleaseFromTorrentName(sourceRelease, sourceTorrent.Name)
+			musicReleasePtr := ParseMusicReleaseFromTorrentName(sourceRelease, sourceTorrent.Name)
+			if musicReleasePtr != nil {
+				musicRelease = *musicReleasePtr
+			}
 		}
```
2964-2969: Add nil checks before dereferencing for logging.

Lines 2966 and 2968 dereference pointers without nil checks. While this is used only for logging, adding nil checks prevents potential panics:

```diff
 	// Use the appropriate release object for logging based on content type
 	var logRelease rls.Release
 	if contentInfo.IsMusic && contentDetectionRelease.Type == rls.Music {
 		// For music, create a proper music release object by parsing the torrent name as music
-		logRelease = *ParseMusicReleaseFromTorrentName(sourceRelease, sourceTorrent.Name)
+		musicReleasePtr := ParseMusicReleaseFromTorrentName(sourceRelease, sourceTorrent.Name)
+		if musicReleasePtr != nil {
+			logRelease = *musicReleasePtr
+		}
 	} else {
-		logRelease = *sourceRelease
+		if sourceRelease != nil {
+			logRelease = *sourceRelease
+		}
 	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- internal/qbittorrent/sync_manager.go (1 hunk)
- internal/services/crossseed/align.go (4 hunks)
- internal/services/crossseed/crossseed_test.go (6 hunks)
- internal/services/crossseed/dedup_test.go (3 hunks)
- internal/services/crossseed/find_individual_episodes_test.go (4 hunks)
- internal/services/crossseed/matching_layout_test.go (10 hunks)
- internal/services/crossseed/service.go (25 hunks)
- internal/services/crossseed/service_search_test.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/services/crossseed/matching_layout_test.go
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
- internal/services/crossseed/align.go
- internal/services/crossseed/service_search_test.go
- internal/services/crossseed/crossseed_test.go
- internal/services/crossseed/find_individual_episodes_test.go
- internal/services/crossseed/service.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Applied to files:
- internal/services/crossseed/service_search_test.go
- internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/service_search_test.go
🧬 Code graph analysis (6)
internal/qbittorrent/sync_manager.go (1)
- web/src/types/index.ts (1): `Torrent` (172-229)

internal/services/crossseed/dedup_test.go (3)
- internal/services/crossseed/service.go (1): `Service` (71-107)
- internal/services/crossseed/release_cache.go (1): `NewReleaseCache` (13-15)
- pkg/stringutils/normalize.go (1): `NewDefaultNormalizer` (35-37)

internal/services/crossseed/service_search_test.go (2)
- internal/services/crossseed/release_cache.go (1): `NewReleaseCache` (13-15)
- pkg/stringutils/normalize.go (2): `NewDefaultNormalizer` (35-37), `DefaultNormalizer` (61-61)

internal/services/crossseed/crossseed_test.go (2)
- internal/services/crossseed/release_cache.go (1): `NewReleaseCache` (13-15)
- pkg/stringutils/normalize.go (1): `NewDefaultNormalizer` (35-37)

internal/services/crossseed/find_individual_episodes_test.go (1)
- pkg/stringutils/normalize.go (1): `NewDefaultNormalizer` (35-37)

internal/services/crossseed/service.go (2)
- pkg/stringutils/normalize.go (3): `Normalizer` (19-22), `NewDefaultNormalizer` (35-37), `DefaultNormalizer` (61-61)
- internal/services/crossseed/parsing.go (2): `FindLargestFile` (603-625), `ParseMusicReleaseFromTorrentName` (435-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (4)
internal/services/crossseed/service.go (4)
1487-1487: LGTM: GetTorrents API updated consistently.

The `GetTorrents` method calls have been consistently updated to use the new `qbt.TorrentFilterOptions` parameter. All usages correctly pass `Filter: qbt.TorrentFilterAll`, which is appropriate for the contexts where they're used (searching across all torrents).

Also applies to: 1802-1802, 3365-3365, 3679-3679, 5124-5124
3347-3347: LGTM: Consistent hash and string normalization.

The code consistently uses `stringutils.DefaultNormalizer.Normalize()` for hash and string normalization throughout. This centralized approach ensures:
- Consistent uppercase normalization for hash comparisons
- Proper handling of whitespace and case variations
- Shared normalization logic with caching benefits

The use of the global `DefaultNormalizer` instead of the service's `stringNormalizer` field is appropriate, as it provides a singleton instance with a shared cache.

Also applies to: 3356-3356, 3370-3375, 3620-3620, 3673-3673, 3745-3745, 4785-4785
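As a hedged sketch of the normalization behavior described above (trim whitespace, uppercase) — the function name is illustrative, and the real `stringutils.Normalizer` additionally memoizes results in a TTL-backed cache:

```go
package main

import (
	"fmt"
	"strings"
)

// normalize is a minimal stand-in for the uppercase/trim normalization the
// review describes; the project's Normalizer also caches results with a TTL.
func normalize(s string) string {
	return strings.ToUpper(strings.TrimSpace(s))
}

func main() {
	fmt.Println(normalize("  deadbeef \n")) // DEADBEEF
}
```

The key property is idempotence: normalizing an already-normalized key returns the same value, so cache lookups never skew on casing or stray whitespace.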
2024-2061: LGTM: Proper batch loading with error handling.

The `batchLoadCandidateFiles` function correctly:
- Filters torrents to only 100% complete ones
- Normalizes hashes before the batch API call
- Returns nil on errors (appropriate for a map return type)
- Logs warnings on batch failures

Callers properly check for nil/empty results before using the map.
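The flow this comment describes can be sketched as follows; the types and the `fetchFilesBatch` helper are hypothetical stand-ins for `qbt.Torrent`, `qbt.TorrentFiles`, and the sync manager's `GetTorrentFilesBatch`:

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical stand-ins for qbt.Torrent and qbt.TorrentFiles.
type torrent struct {
	Hash     string
	Progress float64
}
type torrentFiles []string

// fetchFilesBatch stands in for syncManager.GetTorrentFilesBatch; it keys
// results by the (already normalized) uppercase hashes it was given.
func fetchFilesBatch(hashes []string) (map[string]torrentFiles, error) {
	out := make(map[string]torrentFiles, len(hashes))
	for _, h := range hashes {
		out[h] = torrentFiles{h + "/file.mkv"}
	}
	return out, nil
}

// batchLoadCandidateFiles sketches the steps above: keep only 100% complete
// torrents, normalize their hashes, then issue a single batch call.
// Errors return nil, matching the behavior the review describes.
func batchLoadCandidateFiles(torrents []torrent) map[string]torrentFiles {
	hashes := make([]string, 0, len(torrents))
	for _, t := range torrents {
		if t.Progress < 1.0 {
			continue // skip incomplete torrents
		}
		hashes = append(hashes, strings.ToUpper(strings.TrimSpace(t.Hash)))
	}
	if len(hashes) == 0 {
		return nil
	}
	files, err := fetchFilesBatch(hashes)
	if err != nil {
		return nil // callers check for nil/empty before use
	}
	return files
}

func main() {
	files := batchLoadCandidateFiles([]torrent{
		{Hash: "abc", Progress: 1.0},
		{Hash: "def", Progress: 0.5},
	})
	fmt.Println(len(files)) // 1: only the complete torrent is loaded
}
```

Collapsing N per-torrent `/files` calls into one batch call is the main win here; the normalization step guarantees the returned map's keys line up with every other hash-keyed structure in the service.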
2280-2289: LGTM: Proper nil/empty checks for sourceFiles.

The code properly checks `len(sourceFiles) > 0` before accessing file data at multiple locations (lines 2280, 2657, 3098). This defensive programming prevents panics after the batch API changes and handles cases where file retrieval might fail or return empty results.

Also applies to: 2656-2676, 3098-3101
Actionable comments posted: 0
♻️ Duplicate comments (1)
internal/services/crossseed/service.go (1)
2272-2280: Normalize hashes before `GetTorrentFilesBatch` to align with the batch API and internal conventions.

All three call sites currently pass and index with the raw `hash`/`torrent.Hash`, but the batch APIs expect callers to normalize hashes (uppercase/trimmed) for consistent keying, and other callers in this file already use `normalizeHash` for that purpose. This can lead to subtle mismatches if different hash casings are used.

Reusing `normalizeHash` here keeps behavior consistent with `batchLoadCandidateFiles` and the documented expectations in the sync manager. Suggested adjustments:

```diff
@@ func (s *Service) AnalyzeTorrentForSearchAsync(ctx context.Context, instanceID int, hash string, enableContentFiltering bool) (*AsyncTorrentAnalysis, error) {
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+	normalizedHash := normalizeHash(hash)
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 	if err != nil {
 		return nil, fmt.Errorf("failed to get torrent files: %w", err)
 	}
-	sourceFiles, ok := filesMap[hash]
+	sourceFiles, ok := filesMap[normalizedHash]
 	if !ok {
-		return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+		return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 	}
@@ func (s *Service) SearchTorrentMatches(ctx context.Context, instanceID int, hash string, opts TorrentSearchOptions) (*TorrentSearchResponse, error) {
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+	normalizedHash := normalizeHash(hash)
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 	if err != nil {
 		return nil, fmt.Errorf("failed to get torrent files: %w", err)
 	}
-	sourceFiles, ok := filesMap[hash]
+	sourceFiles, ok := filesMap[normalizedHash]
 	if !ok {
-		return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+		return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 	}
@@ func (s *Service) torrentHasTopLevelFolder(ctx context.Context, instanceID int, torrent *qbt.Torrent) bool {
 	if s == nil || s.syncManager == nil || torrent == nil {
 		return false
 	}
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{torrent.Hash})
+	hash := normalizeHash(torrent.Hash)
+	if hash == "" {
+		return false
+	}
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
 	if err != nil {
 		return false
 	}
-	filesPtr, ok := filesMap[torrent.Hash]
+	filesPtr, ok := filesMap[hash]
 	if !ok || len(filesPtr) == 0 {
 		return false
 	}
 	return detectCommonRoot(filesPtr) != ""
 }
```

This keeps all batch file lookups on the same normalized hash convention as the rest of the cross-seed service.
Based on learnings
Also applies to: 2655-2662, 3629-3637
🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)
82-83: `Service.stringNormalizer` is currently unused; consider either wiring it through or removing it.

You've added `stringNormalizer *stringutils.Normalizer[string, string]` and initialize it in `NewService`, but all normalization in this file still goes through the global `stringutils.DefaultNormalizer`. That makes the field dead weight and slightly confusing.

Either drop the field/initializer for now, or (if you plan to support per-service normalizer state) route the key helpers to use it instead. For example, a minimal cleanup could be:

```diff
 type Service struct {
-	indexerDomainCache *ttlcache.Cache[string, string]
-	stringNormalizer   *stringutils.Normalizer[string, string]
+	indexerDomainCache *ttlcache.Cache[string, string]
@@ func NewService( ... ) *Service {
-	indexerDomainCache: indexerDomainCache,
-	stringNormalizer:   stringutils.NewDefaultNormalizer(),
+	indexerDomainCache: indexerDomainCache,
```

Also applies to: 166-167
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/services/crossseed/service.go (27 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
- pkg/stringutils/normalize.go (3): `Normalizer` (19-22), `NewDefaultNormalizer` (35-37), `DefaultNormalizer` (61-61)
- internal/services/crossseed/parsing.go (2): `FindLargestFile` (603-625), `ParseMusicReleaseFromTorrentName` (435-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (2)
internal/services/crossseed/service.go (2)
3365-3379: DefaultNormalizer-based key normalization for hashes and indexer names looks consistent.

Using `stringutils.DefaultNormalizer.Normalize` for:
- `searchResultCacheKey`/`asyncFilteringCacheKey` (lines 3365-3379),
- cached folder detection (`torrentHasTopLevelFolderCached`, line 3612),
- `searchRunState.skipCache` keys (line 3665),
- candidate skip decisions (lines 3737-3787),
- and `normalizeIndexerName` (lines 4783-4803),

provides a single normalization path for hashes and indexer identifiers, which should reduce cache key skew and make lookups more robust to caller input variations. No issues spotted with these usages given the existing fallback behavior when normalization returns an empty string.
Also applies to: 3612-3621, 3665-3665, 3737-3787, 4783-4803
5020-5062: The `evaluateReleaseMatch` helper is a clear, extensible scoring function.

The new pointer-based `evaluateReleaseMatch` cleanly encapsulates release scoring, covers the key axes (group, resolution, source, season/episode, year, codec, audio), and returns a useful reason string for logging/UI without introducing nil-safety issues at current call sites. This is a solid improvement over ad-hoc scoring.
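A hedged sketch of this kind of pointer-based scorer, with the nil guards suggested elsewhere in this review; the `release` type, field set, and weights are illustrative stand-ins for `*rls.Release` and the real axes:

```go
package main

import "fmt"

// release is a hypothetical stand-in for *rls.Release carrying a few of the
// axes the scorer described above compares.
type release struct {
	Group      string
	Resolution string
	Source     string
	Year       int
}

// evaluateMatch sketches the pointer-based scoring pattern: each matching
// axis adds to the score, and the reason string explains the result.
// The nil guard prevents panics if a caller ever passes a nil release.
func evaluateMatch(source, candidate *release) (int, string) {
	if source == nil || candidate == nil {
		return 0, "no metadata"
	}
	score := 0
	reason := "matched:"
	if source.Group != "" && source.Group == candidate.Group {
		score += 2 // group match weighted higher, as an illustration
		reason += " group"
	}
	if source.Resolution == candidate.Resolution {
		score++
		reason += " resolution"
	}
	if source.Source == candidate.Source {
		score++
		reason += " source"
	}
	if source.Year == candidate.Year {
		score++
		reason += " year"
	}
	return score, reason
}

func main() {
	a := &release{Group: "GRP", Resolution: "1080p", Source: "BluRay", Year: 2024}
	b := &release{Group: "GRP", Resolution: "1080p", Source: "BluRay", Year: 2024}
	score, reason := evaluateMatch(a, b)
	fmt.Println(score, reason) // 5 matched: group resolution source year
}
```

Returning the reason alongside the score keeps logging and UI cheap: callers get a human-readable explanation without re-deriving which axes matched.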
Actionable comments posted: 2
♻️ Duplicate comments (4)
internal/services/crossseed/service.go (4)
2273-2280: Normalize hash before batch API call.

Based on the retrieved learning, `GetTorrentFilesBatch` expects uppercase hex hashes for keying. The code passes `hash` directly without normalization, but then uses the same `hash` as a map key.

Ensure the `hash` parameter is normalized to uppercase before the batch call:

```diff
+	normalizedHash := stringutils.DefaultNormalizer.Normalize(hash)
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 	if err != nil {
 		return nil, fmt.Errorf("failed to get torrent files: %w", err)
 	}
-	sourceFiles, ok := filesMap[hash]
+	sourceFiles, ok := filesMap[normalizedHash]
 	if !ok {
-		return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+		return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 	}
```

Based on learnings
2655-2662: Normalize hash before batch API call.

Same issue as in AnalyzeTorrentForSearchAsync. The `hash` parameter should be normalized to uppercase before calling `GetTorrentFilesBatch` and using it as a map key:

```diff
+	normalizedHash := stringutils.DefaultNormalizer.Normalize(hash)
-	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
+	filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
 	if err != nil {
 		return nil, fmt.Errorf("failed to get torrent files: %w", err)
 	}
-	sourceFiles, ok := filesMap[hash]
+	sourceFiles, ok := filesMap[normalizedHash]
 	if !ok {
-		return nil, fmt.Errorf("torrent files not found for hash %s", hash)
+		return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
 	}
```

Based on learnings
3174-3182: Normalize hash before GetTorrents filter.

The `hash` parameter should be normalized before being passed to `GetTorrents`:

```diff
+	normalizedHash := stringutils.DefaultNormalizer.Normalize(hash)
 	torrents, err := s.syncManager.GetTorrents(ctx, instanceID, qbt.TorrentFilterOptions{
-		Hashes: []string{hash},
+		Hashes: []string{normalizedHash},
 	})
 	if err != nil {
 		return nil, err
 	}
 	if len(torrents) == 0 {
-		return nil, fmt.Errorf("%w: torrent %s not found in instance %d", ErrTorrentNotFound, hash, instanceID)
+		return nil, fmt.Errorf("%w: torrent %s not found in instance %d", ErrTorrentNotFound, normalizedHash, instanceID)
 	}
```
4359-4368: Normalize hash before GetTorrents filter.

The `hash` parameter should be normalized before being passed to `GetTorrents`:

```diff
+	normalizedHash := stringutils.DefaultNormalizer.Normalize(hash)
 	torrents, err := s.syncManager.GetTorrents(ctx, instanceID, qbt.TorrentFilterOptions{
-		Hashes: []string{hash},
+		Hashes: []string{normalizedHash},
 	})
 	if err != nil {
 		return indexerIDs, nil, nil, err
 	}
 	if len(torrents) == 0 {
-		return indexerIDs, nil, nil, fmt.Errorf("%w: torrent %s not found in instance %d", ErrTorrentNotFound, hash, instanceID)
+		return indexerIDs, nil, nil, fmt.Errorf("%w: torrent %s not found in instance %d", ErrTorrentNotFound, normalizedHash, instanceID)
 	}
```
🧹 Nitpick comments (1)
internal/services/crossseed/service.go (1)
2024-2061: Consider returning an empty map instead of nil on error.

When `GetTorrentFilesBatch` fails, the function returns `nil` instead of an empty map. While the callers appear to handle this correctly (checking `len(filesByHash) == 0`), returning an empty map would be more defensive and clearer:

```diff
 filesByHash, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, hashes)
 if err != nil {
 	log.Warn().
 		Int("instanceID", instanceID).
 		Int("hashCount", len(hashes)).
 		Err(err).
 		Msg("Failed to batch load torrent files for candidate selection")
-	return nil
+	return make(map[string]qbt.TorrentFiles)
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/services/crossseed/service.go (27 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (1)
internal/services/crossseed/service.go (2)
- pkg/stringutils/normalize.go (3): `Normalizer` (19-22), `NewDefaultNormalizer` (35-37), `DefaultNormalizer` (61-61)
- internal/services/crossseed/parsing.go (2): `FindLargestFile` (603-625), `ParseMusicReleaseFromTorrentName` (435-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (6)
internal/services/crossseed/service.go (6)
42-42: LGTM! String normalizer integration looks good.

The addition of the `stringNormalizer` field using the new `stringutils.Normalizer` and its initialization with `NewDefaultNormalizer()` provides centralized string normalization. This aligns well with the batch operations and hash normalization requirements.

Also applies to: 82-82, 166-166
3364-3379: LGTM! Correct hash normalization for cache keys.

Both `searchResultCacheKey` and `asyncFilteringCacheKey` properly normalize the hash using `stringutils.DefaultNormalizer.Normalize(hash)` before using it as a cache key. This is the correct pattern that ensures consistent key usage.
3516-3534: LGTM! Proper batch fetching and hash normalization.

The deduplication logic correctly:
- Batch fetches torrent files for all hashes upfront
- Normalizes hashes before using them as map keys (`normalizeHash` on line 3525)
- Uses the normalized hashes consistently for map lookups

This is the correct pattern for batch operations.
3660-3676: LGTM! Correct use of GetTorrents with filter.

The function properly uses `GetTorrents` with `TorrentFilterAll` to fetch all torrents from the instance. The subsequent deduplication handles any necessary hash normalization.
5010-5010: LGTM! Pointer-based release signature.

The function signature correctly uses pointer-based releases (`*rls.Release`), which aligns with the PR objectives and improves memory efficiency for large release structures.
4772-4794: LGTM! Proper use of string normalizer.

The `normalizeIndexerName` function correctly uses `stringutils.DefaultNormalizer.Normalize()` to normalize indexer names before applying additional domain-specific transformations. This ensures consistent string comparison throughout the codebase.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
internal/services/reannounce/service.go (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/reannounce/service.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
- Canonicalize torrent hashes in SyncManager.GetTorrentFilesBatch and align cross-seed normalizeHash so file caches and progress lookups share a single key space.
- Add HasTorrentByAnyHash to avoid full-library scans when checking for existing torrents and wire it into processCrossSeedCandidate.
- Build per-run automation snapshots of instance torrents and reuse them in FindCandidates via an automation context to avoid repeated GetTorrents(FilterAll) calls.
- Introduce a per-instance semaphore and improved batching for GetTorrentFilesBatch, plus a torrent-files cache in the cross-seed service to reduce repeated /files calls across analyze/search flows.
- Debounce syncAfterModification per instance to coalesce bursts of mutations into fewer syncs.
- Add lightweight timing/logging around candidate search and batch file fetches to surface slow paths for future tuning.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/services/crossseed/service.go (1)
5226-5268: Add nil guards to `evaluateReleaseMatch`

`evaluateReleaseMatch` now takes `*rls.Release` but dereferences both `source` and `candidate` unconditionally. A defensive check like `if source == nil || candidate == nil { return 0, "no metadata" }` (or similar) would prevent panics if any caller ever hands in a nil pointer, especially as more paths migrate to pointer-based releases.
♻️ Duplicate comments (3)
internal/services/crossseed/service.go (3)
2460-2488: Normalize hash consistently before the `GetTorrents` filter

Here, `AnalyzeTorrentForSearchAsync` calls `GetTorrents` with `Hashes: []string{hash}`, but all downstream file and cache operations rely on a normalized form (via `getTorrentFilesCached` → `cacheKeyForTorrentFiles` → `normalizeHash`). To avoid edge cases where qBittorrent's filter expects a canonicalized hash (case/whitespace) and to keep behavior consistent with your own helpers, consider normalizing once (e.g. `normalizedHash := normalizeHash(hash)`) and using that for both `Hashes` and subsequent lookups. The same pattern appears in other `GetTorrents`-with-`Hashes` call sites (e.g., `SearchTorrentMatches`, `ApplyTorrentSearchResults`, `filterIndexersByExistingContent`).

Based on learnings
2842-2874: Align `SearchTorrentMatches` hash usage with normalized keys

`normalizedHash := normalizeHash(hash)` is only used for the `filesMap` lookup from `GetTorrentFilesBatch`, while `GetTorrents` and the batch call both still receive the raw `hash`. For consistency and to avoid subtle mismatches if external code expects canonical hashes, it would be safer to pass `normalizedHash` into `Hashes` and `GetTorrentFilesBatch`, and keep all subsequent keys in that form; you already treat it as the canonical key when indexing `filesMap`.

Based on learnings
Also applies to: 2881-2901, 3322-3325
3386-3394: Reuse normalized hash for manual-apply lookup and caching

`ApplyTorrentSearchResults` now fetches the source torrent via `GetTorrents` with `Hashes: []string{hash}` and then uses `hash` again for `searchResultCacheKey`. Since `searchResultCacheKey` already uses `stringutils.DefaultNormalizer.Normalize`, you might want to normalize once per call and use that for both `Hashes` and cache key derivation, mirroring the helper behavior and further reducing the chance of cache misses due to hash formatting.

Based on learnings
Also applies to: 3520-3544
🧹 Nitpick comments (6)
internal/services/crossseed/matching_layout_test.go (1)
120-147: Fake sync manager correctly updated to new qBittorrent interface (with normalized hash keys)

`candidateSelectionSyncManager` now implements `GetTorrents`, the batch `GetTorrentFilesBatch`, and `HasTorrentByAnyHash` in a way that matches the new sync manager API and the cross-seed hash-normalization story (files keyed by `normalizeHash(h)`). This keeps the layout/matching tests representative without over-implementing unused methods.

If you ever need to assert on which hashes were requested in these tests, consider capturing the incoming `hashes` slice in `GetTorrentFilesBatch` for extra visibility.

Also applies to: 149-151
internal/services/crossseed/service_search_test.go (1)
16-16: Search service tests now exercise the real parser/normalizer and normalized skipCache keys

Injecting `NewReleaseCache()` and `stringutils.NewDefaultNormalizer()` into `Service` in `TestRefreshSearchQueueCountsCooldownEligibleTorrents` is aligned with runtime wiring, and asserting `skipCache` via `stringutils.DefaultNormalizer.Normalize(...)` matches the new keying strategy in the implementation. Looks good.

For consistency with this test and the production normalization, consider updating any remaining uses of `strings.ToLower` for `skipCache` keys (e.g., in `TestPropagateDuplicateSearchHistory`) if the underlying code has been switched fully to the normalizer.

Also applies to: 135-146, 176-179
internal/qbittorrent/sync_manager.go (2)
46-54: New torrent client/lookup abstractions and SyncManager fields are well-scoped

Introducing `torrentFilesClient`/`torrentLookup` plus the corresponding provider fields on `SyncManager`, along with the debounce and file-fetch semaphore state, cleanly separates concerns and allows tests to inject stubs. `NewSyncManager` initializing timers, debounce delays, and per-instance semaphores gives a sensible default while still allowing zero-value `SyncManager` usage in tests.

If you foresee tuning debounce or file-fetch limits per-instance or per-call, consider exposing small setters (used only in tests/advanced config) instead of relying solely on struct field overrides.
Also applies to: 144-162, 181-192
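A hedged sketch of the per-instance file-fetch semaphore idea: each instance gets a buffered channel capping concurrent `/files` requests. All names here are illustrative, not the real `SyncManager` API:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchLimiter caps concurrent file fetches per instance using one buffered
// channel per instance ID. A real implementation would live on SyncManager.
type fetchLimiter struct {
	mu       sync.Mutex
	sems     map[int]chan struct{}
	capacity int
}

func newFetchLimiter(capacity int) *fetchLimiter {
	return &fetchLimiter{sems: make(map[int]chan struct{}), capacity: capacity}
}

// acquire blocks while the instance is at capacity and returns a release func.
func (l *fetchLimiter) acquire(instanceID int) func() {
	l.mu.Lock()
	sem, ok := l.sems[instanceID]
	if !ok {
		sem = make(chan struct{}, l.capacity)
		l.sems[instanceID] = sem
	}
	l.mu.Unlock()
	sem <- struct{}{} // blocks when the instance is at capacity
	return func() { <-sem }
}

func main() {
	l := newFetchLimiter(2)
	release1 := l.acquire(1)
	release2 := l.acquire(1)
	// A third acquire on instance 1 would block until a slot frees;
	// instance 2 has its own independent capacity.
	release3 := l.acquire(2)
	release1()
	release2()
	release3()
	fmt.Println("done")
}
```

Keying the semaphores per instance means one slow qBittorrent instance can't starve file fetches destined for the others.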
2060-2135: Debounced sync and per-instance file-fetch semaphore are reasonable; minor timer nit only

The new `syncAfterModification`/`runDebouncedSync` logic debounces syncs per instance using `debouncedSyncTimers`, applies a small jitter before `Sync`, and falls back to a sane default delay/minJitter when fields are zero. Combined with the `fileFetchSem` semaphore (and `acquireFileFetchSlot`/`getFileFetchSemaphore`), this should noticeably reduce sync and file-fetch pressure on large instances while remaining safe even for a zero-value `SyncManager` used in tests.

One small nit: the debounce logic uses `existing.Stop()` and then `existing.Reset(delay)` without draining the timer or checking whether the callback has already fired. That's usually fine for `time.AfterFunc`, but if you ever run into surprising double-firings or missed cleanups under heavy load, the timer lifecycle might be the first place to investigate.

If you want to make the timer handling more conservative, consider the pattern:

```go
if existing, ok := sm.debouncedSyncTimers[instanceID]; ok {
	if existing.Stop() {
		existing.Reset(delay)
	}
	sm.syncDebounceMu.Unlock()
	return
}
```

and ensure `runDebouncedSync` doesn't rely on timers that may have already been reused.

Also applies to: 2136-2173
internal/services/crossseed/crossseed_test.go (1)
1406-1433: fakeSyncManager updated to the new sync interface with uppercased hash normalization

The `fakeSyncManager` now:
- Implements `GetTorrents` with the new `qbt.TorrentFilterOptions` parameter, returning per-instance torrents.
- Implements `GetTorrentFilesBatch` by normalizing requested hashes with `normalizeHash(h)` (likely uppercase) and copying from its pre-normalized `files` map, while still tolerating lowercased/raw keys as a fallback.
- Implements `HasTorrentByAnyHash` by normalizing both the requested hashes and the torrent `Hash`/`InfohashV1`/`InfohashV2` via `normalizeHash`, mirroring the production "match any hash variant" semantics.

This keeps cross-seed tests using the same hash-normalization convention as the service code while satisfying the expanded sync manager interface. Behavior here looks consistent and test-focused.

If you ever need to assert exactly which hashes were requested from `GetTorrentFilesBatch` in these tests, consider recording the incoming `hashes` slice on `fakeSyncManager`, similar to the `stubTorrentFilesClient` pattern in sync_manager_test.go.

Also applies to: 1435-1456
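The "match any hash variant" semantics described above can be sketched as follows; the `torrent` type and function names are hypothetical stand-ins for the real `qbt.Torrent` fields and `HasTorrentByAnyHash`:

```go
package main

import (
	"fmt"
	"strings"
)

// torrent carries the three hash variants the review mentions.
type torrent struct {
	Hash       string
	InfohashV1 string
	InfohashV2 string
}

func normalizeHash(h string) string {
	return strings.ToUpper(strings.TrimSpace(h))
}

// hasTorrentByAnyHash reports whether any requested hash equals any of a
// torrent's hash fields after normalization, so casing never matters.
func hasTorrentByAnyHash(torrents []torrent, hashes []string) bool {
	want := make(map[string]struct{}, len(hashes))
	for _, h := range hashes {
		if n := normalizeHash(h); n != "" {
			want[n] = struct{}{}
		}
	}
	for _, t := range torrents {
		for _, h := range []string{t.Hash, t.InfohashV1, t.InfohashV2} {
			if h == "" {
				continue
			}
			if _, ok := want[normalizeHash(h)]; ok {
				return true
			}
		}
	}
	return false
}

func main() {
	ts := []torrent{{Hash: "AABB", InfohashV2: "ccdd"}}
	fmt.Println(hasTorrentByAnyHash(ts, []string{"ccDD"})) // true
}
```

Checking all three variants is what lets the production code avoid full-library scans while still catching hybrid v1/v2 torrents added under either infohash.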
internal/services/crossseed/align.go (1)
391-406: Defensive nil checks for `*rls.Release` helpers

Both `shouldRenameTorrentDisplay` and `shouldAlignFilesWithCandidate` dereference `newRelease` and `matchedRelease` without guarding against `nil`. If `releaseCache.Parse` or a future caller ever returns/passes `nil`, this will panic; consider early-returning `true` (i.e., the "safe default") when either argument is `nil`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- internal/qbittorrent/sync_manager.go (11 hunks)
- internal/qbittorrent/sync_manager_test.go (1 hunks)
- internal/services/crossseed/align.go (5 hunks)
- internal/services/crossseed/crossseed_test.go (6 hunks)
- internal/services/crossseed/dedup_test.go (3 hunks)
- internal/services/crossseed/find_individual_episodes_test.go (5 hunks)
- internal/services/crossseed/matching_layout_test.go (11 hunks)
- internal/services/crossseed/service.go (40 hunks)
- internal/services/crossseed/service_search_test.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/services/crossseed/dedup_test.go
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
- internal/qbittorrent/sync_manager_test.go
- internal/services/crossseed/find_individual_episodes_test.go
- internal/services/crossseed/matching_layout_test.go
- internal/services/crossseed/align.go
- internal/services/crossseed/service.go
- internal/services/crossseed/crossseed_test.go
- internal/services/crossseed/service_search_test.go
- internal/qbittorrent/sync_manager.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/matching_layout_test.go, internal/services/crossseed/service.go, internal/services/crossseed/service_search_test.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (7)
internal/qbittorrent/sync_manager_test.go (1)
internal/qbittorrent/sync_manager.go (1)
SyncManager(145-162)
internal/services/crossseed/find_individual_episodes_test.go (2)
pkg/stringutils/normalize.go (1)
NewDefaultNormalizer(35-37)
internal/services/jackett/service.go (1)
EnabledIndexerInfo(3038-3042)
internal/services/crossseed/matching_layout_test.go (3)
internal/services/crossseed/service.go (1)
Service(71-110)
pkg/releases/parser.go (1)
NewDefaultParser(29-31)
pkg/stringutils/normalize.go (1)
NewDefaultNormalizer(35-37)
internal/services/crossseed/service.go (3)
pkg/stringutils/normalize.go (3)
Normalizer(19-22), NewDefaultNormalizer(35-37), DefaultNormalizer(61-61)
internal/services/crossseed/models.go (3)
FindCandidatesResponse(128-131), FindCandidatesRequest(106-119), MatchedTorrent(64-69)
internal/services/crossseed/parsing.go (2)
FindLargestFile(603-625), ParseMusicReleaseFromTorrentName(435-484)
internal/services/crossseed/crossseed_test.go (2)
internal/services/crossseed/release_cache.go (1)
NewReleaseCache(13-15)
pkg/stringutils/normalize.go (1)
NewDefaultNormalizer(35-37)
internal/services/crossseed/service_search_test.go (2)
internal/services/crossseed/release_cache.go (1)
NewReleaseCache(13-15)
pkg/stringutils/normalize.go (2)
NewDefaultNormalizer(35-37), DefaultNormalizer(61-61)
internal/qbittorrent/sync_manager.go (1)
internal/qbittorrent/client.go (1)
Client(32-60)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (17)
internal/services/crossseed/matching_layout_test.go (1)
15-38: Service test wiring and pointer-based releases look consistent with production paths

Using `releases.NewDefaultParser()` and `stringutils.NewDefaultNormalizer()` on the test `Service` instances, and switching `getMatchType`/`getMatchTypeFromTitle` calls to `*rls.Release`, keeps tests exercising the same parser/normalizer and pointer-based signatures as the real service. No issues spotted here.

Also applies to: 44-72, 91-113, 192-208, 213-238, 243-266, 271-287
internal/services/crossseed/service_search_test.go (1)
226-238: queueTestSyncManager stub matches new sync manager surface

The `queueTestSyncManager` now exposes `GetTorrents`, a no-op `GetTorrentFilesBatch`, and `HasTorrentByAnyHash` to satisfy the expanded sync manager interface while keeping these tests focused on queue-refresh behavior. This keeps compilation and behavior correct without introducing unnecessary complexity.

internal/qbittorrent/sync_manager.go (2)
209-239: Provider-based accessors and GetTorrents wrapper improve testability without changing semantics
`getTorrentFilesClient`/`getTorrentLookup` guard against `nil` receivers and fall back to the live `getClientAndSyncManager` when no providers are set, while tests can inject in-memory implementations. The new `GetTorrents(ctx, instanceID, filter)` simply delegates to the underlying qBittorrent `SyncManager.GetTorrents`, so callers can move off direct sync-manager usage without behavior change.

Also applies to: 263-274
1178-1314: Batch torrent-file fetching, caching, and hash normalization — verify hash-case alignment across components

The `GetTorrentFilesBatch` pipeline is well-structured:

- Normalizes input hashes once via `normalizeHashes`, deduping canonical forms and building a lookup set.
- Uses `torrentFilesClient.getTorrentsByHashes(normalized.lookup)` to populate `progressByHash` keyed by canonical (lowercase) hash.
- Asks the `FilesManager` for cached files with `GetCachedFilesBatch` and only fetches missing entries concurrently, using an `errgroup` plus `acquireFileFetchSlot` for per-instance throttling.
- Caches fresh results back via `CacheFiles`, keyed by canonical (lowercase) hash with the right progress, and returns a map keyed consistently by lowercase-trimmed hash.
- Returns partial results on per-hash failures while aggregating them into `fetchErrors` and logging at debug level for visibility.

`GetTorrentFiles` correctly canonicalizes the single hash and then delegates to the batch path, and `HasTorrentByAnyHash` + `matchesAnyHash` use the same `normalizeHashes`/`canonicalizeHash` machinery to match across `Hash`, `InfohashV1`, and `InfohashV2`.

One thing worth validating is the hash case convention across components: here, `canonicalizeHash` converts to lowercase, and all returned map keys are lowercase hex. The crossseed service has a separate `normalizeHash` function (line 2173) that historically used uppercase normalization. Ensure:

- Callers in crossseed (align.go:81, service.go:2205, 2264, 2867, 3736) pass raw input hashes (not pre-uppercased keys) to `GetTorrentFilesBatch`, and
- Any code consuming the returned `map[string]qbt.TorrentFiles` correctly handles the lowercase canonical keys (does not assume uppercase or re-normalize).
12-118: New sync manager tests provide solid coverage for hash normalization, caching, and lookup behavior

These tests accurately pin down the intended behavior of `normalizeHashes`, `GetTorrentFilesBatch`, and `HasTorrentByAnyHash`:

- `TestNormalizeHashes` enforces canonical lowercase keys, preferred original mapping, and the exact lookup ordering across trimmed/lower/upper variants.
- `TestGetTorrentFilesBatch_NormalizesAndCaches` validates cache hits vs. misses, progress propagation into `CacheFiles`, and that `getTorrentsByHashes` is called with the expanded lookup set.
- `TestHasTorrentByAnyHash` confirms matching against both `Hash` and `InfohashV2`.

The stub types (`stubTorrentFilesClient`, `stubFilesManager`, `stubTorrentLookup`) are minimal but precise, making these tests easy to reason about and extend.

Also applies to: 120-195
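The normalization behavior these tests pin down can be sketched roughly as follows. The function names, the canonical-lowercase convention, and the trimmed/lower/upper lookup expansion are assumptions drawn from the review text, not the project's exact implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalizeHash trims and lowercases a torrent hash; a hypothetical
// stand-in for the sync manager's internal helper described above.
func canonicalizeHash(hash string) string {
	return strings.ToLower(strings.TrimSpace(hash))
}

// normalizeHashes dedupes hashes by canonical form and builds the
// expanded lookup list (trimmed, lowercase, uppercase variants) used
// when querying qBittorrent for several hashes at once.
func normalizeHashes(hashes []string) (canonical []string, lookup []string) {
	seen := make(map[string]struct{})
	lookupSeen := make(map[string]struct{})
	for _, h := range hashes {
		trimmed := strings.TrimSpace(h)
		if trimmed == "" {
			continue
		}
		canon := canonicalizeHash(trimmed)
		if _, ok := seen[canon]; ok {
			continue // already deduped by canonical form
		}
		seen[canon] = struct{}{}
		canonical = append(canonical, canon)
		for _, v := range []string{trimmed, canon, strings.ToUpper(trimmed)} {
			if _, ok := lookupSeen[v]; !ok {
				lookupSeen[v] = struct{}{}
				lookup = append(lookup, v)
			}
		}
	}
	return canonical, lookup
}

func main() {
	canonical, lookup := normalizeHashes([]string{" ABC123 ", "abc123"})
	fmt.Println(canonical) // [abc123]
	fmt.Println(lookup)    // [ABC123 abc123]
}
```

The key property the tests assert is that map keys are always the canonical (lowercase, trimmed) form, while the lookup set still carries the original casing variants so the client call matches regardless of how qBittorrent reports the hash.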
internal/services/crossseed/crossseed_test.go (1)
25-25: Cross-seed tests now inject string normalizer alongside release cache

Adding `stringutils.NewDefaultNormalizer()` to the `Service` initializations in webhook and candidate tests (alongside `NewReleaseCache()`) keeps the test wiring aligned with the production `Service` struct. This ensures that matching, layout decisions, and candidate selection in tests go through the same normalization layer as in real runs.

Also applies to: 1094-1098, 1167-1170, 1210-1214, 1327-1332
internal/services/crossseed/align.go (1)
50-86: Batch file lookup with canonical hash looks consistent

Using `canonicalHash := normalizeHash(torrentHash)` and preferring the live `GetTorrentFilesBatch` result over `expectedSourceFiles` is a good improvement and matches the batch-loading pattern used elsewhere, while safely falling back on error or missing entries.

internal/services/crossseed/find_individual_episodes_test.go (1)
17-18: Test wiring for new normalizer and batch APIs looks correct

The tests correctly initialize `stringNormalizer` on `Service` and extend `episodeSyncManager` to satisfy `GetTorrents`, `GetTorrentFilesBatch`, and `HasTorrentByAnyHash`. The batch implementation using `strings.ToLower(h)` with `normalizeHash(h)` for map keys matches the production normalization strategy and should keep the tests representative without overcomplicating the mock.

Also applies to: 43-56, 209-235
internal/services/crossseed/service.go (9)
42-43: New normalizer and torrent-file cache integration looks good

Introducing `stringNormalizer` and `torrentFilesCache` in `Service`, wiring them via `NewService`, and documenting `GetTorrentFilesBatch` on `qbittorrentSync` provides a clear, central normalization/caching story for hashes and file metadata. The TTL-backed caches and explicit comments keep the API surface understandable.

Also applies to: 70-83, 167-193
112-125: Automation snapshots and candidate caching are well-structured
`automationInstanceSnapshot`/`automationSnapshots` plus `automationContext` and `automationCacheKey` give a clean per-run snapshot and de-dup layer for `FindCandidates`, and `findCandidatesWithAutomationContext` falls back gracefully when no automation context is provided. This should significantly reduce repeated parsing and qB calls during automation runs without changing semantics.

Also applies to: 176-193, 1204-1207, 1545-1570
1620-1765: FindCandidates snapshot fallback and latency logging look solid

The updated `findCandidates` correctly prefers prebuilt `snapshots` when present, falls back to `instanceStore.List` otherwise, and limits qB calls via a `torrentByHash` set plus `batchLoadCandidateFiles`. The added elapsed-time check and debug logging for slow runs will help validate the new batching behavior in production.
2173-2220: Confirm sync-manager hash normalization matches normalizeHash

`normalizeHash` now lowercases hashes and is used both in `cacheKeyForTorrentFiles` and when indexing `filesMap[normalizeHash(hash)]` after `GetTorrentFilesBatch`. This assumes the sync manager normalizes batch keys with the same transform; if it still uses a different canonical form (e.g., uppercase from earlier revisions), lookups will quietly miss. Please verify `GetTorrentFilesBatch`'s key normalization matches `normalizeHash`, and if not, align one side or normalize the input slice before the batch call.

Based on learnings
2238-2275: Batch candidate file loading and best-match selection are coherent
`batchLoadCandidateFiles` now dedupes hashes via `normalizeHash` and calls `GetTorrentFilesBatch` once per instance, and `findBestCandidateMatch` operates on those normalized keys while also preferring candidates with a root folder and more files at equal score. This is a nice perf and UX win for cross-seed selection logic.

Also applies to: 2298-2348
3699-3750: Dedup root-folder probing via batch files is a good optimization

The new prefetch of all torrent files in `deduplicateSourceTorrents` using `GetTorrentFilesBatch` and `hasRootCache[normalizeHash(hash)]` nicely avoids per-torrent file calls and lets you prefer representatives with a root folder. This should substantially cut I/O for large queues without sacrificing correctness.
3943-3993: Skip-cache normalization is consistent with search history propagation

Using `stringutils.DefaultNormalizer.Normalize(torrent.Hash)` as the `skipCache` key, and the same normalizer when propagating duplicate search history, ensures you don't end up with multiple entries for the same hash differing only in case/format. This keeps seeded-search cooldown logic predictable across deduplicated torrents.

Also applies to: 3847-3873
3576-3591: Consistent hash normalization for cache keys

Both `searchResultCacheKey` and `asyncFilteringCacheKey` now use `stringutils.DefaultNormalizer.Normalize(hash)`, which should make cache lookups resilient to hash formatting differences. Just ensure `DefaultNormalizer`'s transform matches your intended canonical hash format (and is consistent with `normalizeHash`) so you don't accidentally diverge between content caches and search/async caches.

Based on learnings
Also applies to: 3584-3591
4989-5010: Indexer name normalization pipeline is sensible
`normalizeIndexerName` running the indexer name through `stringutils.DefaultNormalizer.Normalize`, stripping tool suffixes like "(prowlarr)"/"(jackett)", then feeding into `normalizeDomainName` gives a clean, reusable canonical indexer identifier for domain matching. This should make `trackerDomainsMatchIndexer` behavior more stable across different naming conventions.
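A minimal sketch of the pipeline described above, with a simplified stand-in for the `stringutils.DefaultNormalizer` step and the suffix stripping (the helper body is an assumption for illustration, not the project's code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeIndexerName sketches the canonicalization pipeline: generic
// string normalization (here approximated by trim + lowercase), then
// stripping backend suffixes such as "(prowlarr)" or "(jackett)".
func normalizeIndexerName(name string) string {
	n := strings.ToLower(strings.TrimSpace(name))
	for _, suffix := range []string{"(prowlarr)", "(jackett)"} {
		n = strings.TrimSpace(strings.TrimSuffix(n, suffix))
	}
	return n
}

func main() {
	// Different naming conventions collapse to one canonical identifier,
	// which can then be fed into domain matching.
	fmt.Println(normalizeIndexerName("TorrentLeech (Prowlarr)")) // torrentleech
	fmt.Println(normalizeIndexerName("  torrentleech "))         // torrentleech
}
```

The value of the pipeline is exactly this collapsing step: indexer names that differ only by case, whitespace, or a backend suffix resolve to the same key before domain comparison.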
Actionable comments posted: 0
🧹 Nitpick comments (6)
internal/services/jackett/ratelimiter_test.go (1)
3-8: MaxWait test correctly exercises new RateLimiter options
- Importing `errors` and updating `BeforeRequest` to `BeforeRequest(ctx, indexer, nil)` keeps existing tests aligned with the new API without changing semantics.
- `TestRateLimiterMaxWaitBudget` is a good focused check that:
  - Seeds a prior request via `RecordRequest`.
  - Uses `MinInterval: 50ms` and `MaxWait: 10ms` so the computed wait is guaranteed to exceed `MaxWait` under normal execution.
  - Asserts the returned error is a `*RateLimitWaitError` and that `Wait > MaxWait`, which directly validates the new budget-enforcement behavior.

If you ever see flakes here due to extreme clock skew in CI, consider capturing `now := time.Now()` once and reusing it for `RecordRequest` and assertions, but current values are conservative enough that this should be reliable.

Also applies to: 20-29, 70-95
internal/services/jackett/service.go (4)
94-96: Search timeouts now account for per-request rate-limit budgets

The introduction of `interactiveSearchMinInterval`/`interactiveSearchMaxWait`, the `rateLimit` field on `searchContext`, and `computeSearchTimeout` ties search timeouts to the rate-limit configuration:

- `Search` and `SearchGeneric` populate `meta.rateLimit` with interactive options, and `computeSearchTimeout(meta, indexerCount)` adds a single `waitBudget` (max of `MinInterval` and `MaxWait`) on top of `timeouts.AdaptiveSearchTimeout`. This ensures the outer search deadline has headroom for one rate-limit-induced wait.
- `Recent` passes `meta=nil`, so `computeSearchTimeout` falls back to `defaultMinRequestInterval` as the wait budget.
- The additional logging in `SearchGeneric` gives good observability into `search_timeout` and `rate_limit_wait_budget`.

One nuance: because `computeSearchTimeout` adds `waitBudget` after `AdaptiveSearchTimeout` has already clamped to `MaxSearchTimeout`, the effective deadline can exceed `MaxSearchTimeout` by up to `waitBudget`. If you expect `MaxSearchTimeout` to be a hard global cap, you may want to clamp again after adding the budget.

Also applies to: 142-148, 304-320, 358-370, 463-474, 547-560, 668-669, 1380-1393
993-999: Cache persistence now robust to nil and overly long contexts

Switching `persistSearchCacheEntry` to derive `storeCtx` via:

```go
parent := ctx
if parent == nil {
    parent = context.Background()
}
storeCtx, cancel := context.WithTimeout(parent, 3*time.Second)
```

removes the risk of panicking on a nil context and bounds cache writes to 3 seconds, while still respecting upstream cancellation when `ctx` is non-nil. This is a solid hardening change for store operations.
1364-1378: Rate-limit option plumbing and error handling are consistent, but RateLimitWaitError semantics may be harsh
- `cloneRateLimitOptions` gives each indexer goroutine its own copy of the options, avoiding shared mutable state.
- `asRateLimitWaitError` cleanly wraps the `errors.As` pattern and is correctly used in `searchMultipleIndexers`.
- `computeSearchTimeout` and `resolveOptions` both base their budgets on the same `RateLimitOptions`, so the timeout math is consistent with what `BeforeRequest` will actually wait.
- In
`searchMultipleIndexers`, `BeforeRequest(ctx, idx, rateLimitOpts)` now short-circuits with a `RateLimitWaitError` when the required wait exceeds `MaxWait`, and you:
- Log a detailed warning with required vs. max wait and priority.
- Push
`indexerResult{id: 0, err: waitErr}` into the channel so coverage does not count that indexer as completed.
However, because
RateLimitWaitErroris treated as a non-timeout failure, if every non-timeout indexer hits the MaxWait budget,searchMultipleIndexerswill returnall %d indexers failed (last error: %w), whichSearch/SearchGenerictreat as a hard error rather than a “gracefully skipped due to rate limits” case.If the intent is to degrade gracefully when all indexers exceed the wait budget (especially for interactive searches), consider either:
- Treating
RateLimitWaitErroras a distinct “skipped” category that does not trigger the “all failed” aggregate error, or- Returning a more explicit top-level error indicating that the search did not run because of per-request wait budgets, not backend failures.
Also applies to: 1395-1403, 1558-1569, 1663-1668, 1683-1688
1850-1854: Prowlarr year workaround is simple and correct, but diverges from capability-aware behavior
applyProwlarrWorkaroundnow:
- Accepts
(*models.TorznabIndexer, map[string]string)and only runs forTorznabBackendProwlarr.- If
"year"is non-empty, moves it into"q"(appending whenqis non-empty) and deletes"year".- Logs a detailed debug message about the transformation.
This matches
TestProwlarrYearParameterWorkaroundexpectations and fixes the previous URL-values-based signature.Note though that
TestProwlarrCapabilityAwareYearWorkaroundmodels a more advanced behavior (keeping"year"when the indexer advertises"movie-search-year"). Right now, that logic is duplicated in the test but not wired intoapplyProwlarrWorkaroundorapplyCapabilitySpecificParams, so production always moves"year"into the query for Prowlarr regardless of capabilities.To avoid this drift and ensure capability-aware handling is actually used at runtime, consider:
- Refactoring the capability-aware logic from the test into
applyProwlarrWorkaround(or into a shared helper), and- Updating
applyProwlarrWorkaroundto consulthasCapabilitybefore deciding whether to strip"year".Also applies to: 1856-1884
internal/services/jackett/ratelimiter.go (1)
12-14: Rate limiter options and MaxWait enforcement are implemented cleanly

Key points in the updated rate limiter:
- Raising
`defaultMinRequestInterval` to 60s is a behavioral change for callers that rely on the default; background-style flows will now be much more conservative unless they override `MinInterval` via `RateLimitOptions`.
- `RateLimitPriority` and `RateLimitOptions` give you a flexible surface to distinguish interactive vs background work and tweak `MinInterval`/`MaxWait` per request.
- `RateLimitWaitError`:
- Carries indexer ID, name, required wait, max wait, and priority.
- Implements
`Error()` with a descriptive message.
Issoerrors.Is(err, &RateLimitWaitError{})works as expected.In
BeforeRequest:
resolveOptionsmerges caller-provided options with defaults, ensuringMinIntervalis always > 0 (falling back todefaultMinRequestIntervalorr.minInterval).- The main loop:
- Computes
waitviacomputeWaitLocked(indexer, now, cfg.MinInterval).- Immediately returns a
RateLimitWaitErrorifcfg.MaxWait > 0andwait > cfg.MaxWait, before allocating a timer.- Otherwise, waits with a
time.Timerand respectsctx.Done()correctly, reacquiring the mutex after the wait.In
computeWaitLocked:
- Accepting
minIntervalas a parameter lets per-request configuration override the globalr.minIntervalcleanly.- The function combines cooldown, per-request min interval, and hourly/daily windows into a single max-wait duration, which is exactly what
BeforeRequestneeds to decide between waiting or failing fast withRateLimitWaitError.Overall this is a solid extension of the existing limiter. Just be aware of the new 60s default and audit any call sites that relied on the old 2s default but do not yet pass options.
Also applies to: 16-21, 23-49, 73-114, 190-240, 286-319
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
internal/services/jackett/ratelimiter.go (5 hunks), internal/services/jackett/ratelimiter_test.go (3 hunks), internal/services/jackett/service.go (16 hunks), internal/services/jackett/service_test.go (1 hunk)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (3)
internal/services/jackett/ratelimiter_test.go (2)
internal/services/jackett/ratelimiter.go (4)
NewRateLimiter(63-71), RateLimitOptions(23-27), RateLimitPriorityInteractive(20-20), RateLimitWaitError(29-35)
internal/models/torznab_indexer.go (1)
TorznabIndexer(64-83)
internal/services/jackett/ratelimiter.go (1)
internal/models/torznab_indexer.go (1)
TorznabIndexer(64-83)
internal/services/jackett/service.go (2)
internal/services/jackett/ratelimiter.go (3)
RateLimitOptions(23-27), RateLimitPriorityInteractive(20-20), RateLimitWaitError(29-35)
internal/pkg/timeouts/timeouts.go (2)
WithSearchTimeout(35-46), AdaptiveSearchTimeout(19-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (3)
internal/services/jackett/service_test.go (1)
1435-1440: Updated workaround call matches new API and semanticsThe test now calls
service.applyProwlarrWorkaround(indexer, inputParams)with amap[string]string, which aligns with the updated method signature and the production call site insearchMultipleIndexers. This keeps the test exercising the actual in-place mutation behavior.internal/services/jackett/service.go (2)
1480-1482: Search client invocations correctly switched to map-based paramsThe three search call sites now invoke the client with
map[string]stringparams:
- Native:
client.SearchDirect(ctx, paramsMap)- Prowlarr:
client.Search(ctx, indexerID, paramsMap)- Aggregator:
client.Search(ctx, indexerID, paramsMap)This aligns with the earlier conversion from
url.ValuestoparamsMapand keeps the Prowlarr workaround and capability-based parameter logic working on a simple, mutable map.Also applies to: 1497-1499, 1523-1525
190-240: RateLimiter integration points look consistent with new options
Search/SearchGenericsetrateLimitonsearchContextand pass that meta down tosearchMultipleIndexers.computeWaitLockeduses the same min interval as resolved inresolveOptions, so the wait computation matches whatSearchassumed when constructing its timeout.- The new
RateLimitWaitErrorflow inBeforeRequestis accurately tested inratelimiter_test.go.Overall, the wiring from search-level intent (interactive vs background) down to per-request rate-limit behavior appears coherent.
Also applies to: 296-319
Actionable comments posted: 1
🧹 Nitpick comments (3)
internal/services/jackett/ratelimiter.go (1)
13-14: Per-request option flow is solid; consider having NextWait respect MaxWait too

The switch to per-request options in `BeforeRequest` (with `resolveOptions`) and the early `RateLimitWaitError` when `wait > MaxWait` look correct and give callers a clean way to bound blocking time. `computeWaitLocked`'s `minInterval` parameter plus a higher `defaultMinRequestInterval` also integrates cleanly with the new interactive vs background priorities.

One behavioral gap: `NextWait` currently ignores `MaxWait` and always returns the full computed wait, even though `BeforeRequest` will immediately fail when that wait exceeds `MaxWait`. This means the scheduler may sleep based on a long raw wait that can never actually be honored for interactive jobs. Consider clamping `NextWait`'s return value by `MaxWait` when it's set, so scheduling decisions reflect the same wait budget as execution:

```diff
 func (r *RateLimiter) NextWait(indexer *models.TorznabIndexer, opts *RateLimitOptions) time.Duration {
 	if indexer == nil {
 		return 0
 	}
 	cfg := r.resolveOptions(opts)
 	r.mu.Lock()
 	defer r.mu.Unlock()
 	now := time.Now()
-	return r.computeWaitLocked(indexer, now, cfg.MinInterval)
+	wait := r.computeWaitLocked(indexer, now, cfg.MinInterval)
+	if cfg.MaxWait > 0 && wait > cfg.MaxWait {
+		wait = cfg.MaxWait
+	}
+	return wait
 }
```

Also applies to: 73-99, 190-205, 286-333
internal/services/jackett/service.go (2)
561-575: Minor duplication of wait-budget computation between SearchGeneric and computeSearchTimeout

`SearchGeneric` recomputes `waitBudget` for logging using essentially the same `MinInterval`/`MaxWait` max logic embodied in `computeSearchTimeout`. This is harmless today but creates a second place to keep in sync if the budgeting logic changes.

Consider factoring the wait-budget calculation into a small helper (e.g., `searchWaitBudget(meta *searchContext) time.Duration`) and having both `computeSearchTimeout` and the log line call it, to avoid drift.

Also applies to: 1394-1405
1865-1868: Capability‑specific params hook is a stub; behavior remains unchanged
`applyCapabilitySpecificParams` currently no-ops unless you later add capability-driven parameter tweaks, so it doesn't alter existing behavior. The Prowlarr year workaround is nicely isolated and logged, and being a pure transformation on `params` keeps it easy to extend if you want similar backend-specific adjustments later.

Also applies to: 1872-1898
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
internal/services/jackett/ratelimiter.go (6 hunks), internal/services/jackett/scheduler.go (1 hunk), internal/services/jackett/service.go (20 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (3)
internal/services/jackett/service.go (2)
internal/services/jackett/ratelimiter.go (3)
RateLimitOptions(23-27), RateLimitPriorityInteractive(20-20), RateLimitWaitError(29-35)
internal/pkg/timeouts/timeouts.go (2)
WithSearchTimeout(35-46)AdaptiveSearchTimeout(19-32)
internal/services/jackett/ratelimiter.go (1)
internal/models/torznab_indexer.go (1)
TorznabIndexer(64-83)
internal/services/jackett/scheduler.go (2)
internal/services/jackett/client.go (1)
Result(84-101)
internal/services/jackett/ratelimiter.go (2)
RateLimiter(57-61), RateLimitPriorityInteractive(20-20)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (9)
internal/services/jackett/ratelimiter.go (1)
16-35: New rate‑limit types and error wiring look coherent
`RateLimitPriority`, `RateLimitOptions`, and `RateLimitWaitError` are well factored and line up with how the scheduler/service consume them. The `Error` message is clear, and `Is` matching by type only is appropriate for `errors.Is` checks.

Also applies to: 37-48
internal/services/jackett/scheduler.go (1)
62-87: Queue ordering, cancellation, and runnable selection look correct

The scheduler's FIFO-within-priority ordering, busy-indexer checks, and `nextRunnable` logic (dropping cancelled jobs, computing a minimal `nextWait`, and returning immediately runnable jobs) are consistent and thread-safe with `s.mu`. Using a per-job `respCh` and feeding completions back through `completeCh` cleanly decouples job producers from the execution loop.

Also applies to: 150-188
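The FIFO-within-priority selection can be sketched as follows; the types and field names are illustrative, not the scheduler's actual implementation:

```go
package main

import "fmt"

// job sketches a queued search job: lower priority value wins, and
// seq (insertion order) breaks ties so jobs of equal priority run FIFO.
type job struct {
	id       int
	priority int
	seq      int
}

// nextRunnable returns the index of the highest-priority job, breaking
// ties by insertion order, or -1 when the queue is empty.
func nextRunnable(queue []job) int {
	best := -1
	for i, j := range queue {
		if best == -1 ||
			j.priority < queue[best].priority ||
			(j.priority == queue[best].priority && j.seq < queue[best].seq) {
			best = i
		}
	}
	return best
}

func main() {
	queue := []job{
		{id: 1, priority: 1, seq: 0}, // background, arrived first
		{id: 2, priority: 0, seq: 1}, // interactive
		{id: 3, priority: 0, seq: 2}, // interactive, arrived later
	}
	i := nextRunnable(queue)
	// Interactive beats background; among interactive jobs, FIFO wins.
	fmt.Println(queue[i].id) // 2
}
```

In the real scheduler this selection is additionally filtered by busy-indexer checks and the rate limiter's next-wait estimate, but the ordering invariant is the same.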
internal/services/jackett/service.go (7)
64-83: Scheduler integration into Service is straightforward

Adding `searchScheduler *searchScheduler` to `Service`, constructing it in `NewService`, and wiring it to the shared `RateLimiter` keeps search orchestration encapsulated and maintains the existing `executeSearch` override hook. Apart from the initialization ordering concern already noted in scheduler.go, the overall service wiring and field layout look good.

Also applies to: 198-217
88-107: Interactive search rate-limit metadata is well scoped

The introduction of `interactiveSearchMinInterval`/`interactiveSearchMaxWait` and the new `searchContext.rateLimit` field, populated in `Search` and `SearchGeneric`, cleanly separate interactive flows from background ones. Using `RateLimitPriorityInteractive` here lines up with `jobPriority` in the scheduler and with the per-request options passed into the rate limiter.

Also applies to: 143-149, 324-333, 477-488
227-235: executeQueuedSearch and computeSearchTimeout give consistent budgeting across flows

Routing all networked searches (`Search`, `SearchGeneric`, `Recent`) through `executeQueuedSearch` when a scheduler is available, and falling back to `executeSearch` otherwise, preserves the existing behavior while enabling queueing over cooldown-blocked indexers.

`computeSearchTimeout`'s `AdaptiveSearchTimeout + waitBudget` model, where `waitBudget` is derived from `meta.rateLimit` (or falls back to `defaultMinRequestInterval`), ensures there's room in the overall deadline for both rate-limit waiting and the actual HTTP calls. The subsequent timeout handling (`DeadlineExceeded` treated as partial results rather than hard failure) is consistent across the three entry points.

Also applies to: 372-385, 561-575, 682-687, 1394-1407
1007-1012: Defensive context handling in persistSearchCacheEntry is a nice robustness fix

Normalizing a nil `ctx` to `context.Background()` before `WithTimeout` avoids a potential panic path when persisting cache entries from callers that don't pass a context. This change is backwards-compatible and keeps the 3s timeout behavior intact.
1378-1407: Rate‑limit option cloning and reuse are safe and avoid shared mutation
`cloneRateLimitOptions` correctly deep-copies the pointed-to struct so each call site (scheduler and `searchMultipleIndexers`) can safely re-use a shared options template without risk of concurrent mutation. `asRateLimitWaitError` centralizes error type extraction, and `computeSearchTimeout` reuses the same wait-budget logic as the logs in `SearchGeneric`. Using a single cloned `rateLimitOpts` instance per search and passing it into `BeforeRequest` across goroutines is safe because the limiter never mutates the passed-in struct.

Also applies to: 1416-1417
1496-1513: Prowlarr workaround and rate-limit enforcement in searchMultipleIndexers look correct

The refactored `applyProwlarrWorkaround` (now backend-aware and side-effecting only the params map) plus the updated `SearchDirect`/`Search` calls that pass `ctx` align with the client API and keep the year-parameter workaround scoped to Prowlarr.

Hooking `s.rateLimiter.BeforeRequest(ctx, idx, rateLimitOpts)` inside each indexer goroutine, and treating `RateLimitWaitError` specially for logging while skipping coverage (`id: 0`), integrates nicely with the new per-search `RateLimitOptions`. The aggregate error logic still only fails the entire search when all non-timeout indexers fail, so a few rate-limit-skipped indexers won't poison successful results.

Also applies to: 1538-1539, 1568-1569, 1572-1585
1677-1683: Additional search metrics logging is helpful and low-risk

The new debug and warn logs around `indexers_requested`/`failed`/`successful`/`timed_out` and the completion summary provide useful observability for tuning the new scheduler and rate-limit behavior without affecting control flow.

Also applies to: 1697-1703
Actionable comments posted: 2
🧹 Nitpick comments (2)
web/src/components/indexers/IndexersPage.tsx (2)
121-121: Standardize error message fallback for consistency.

Line 121 uses `"Connection test failed"` as fallback while Line 144 uses `String(error)`. Consider using the same approach for consistency across both handlers.

```diff
- const errorMsg = error instanceof Error ? error.message : String(error)
+ const errorMsg = error instanceof Error ? error.message : "Connection test failed"
```

Also applies to: 144-144
114-162: Consider adding cleanup for async operations on unmount.

Both `handleTest` and `handleTestAll` perform async operations that update state after completion. If the component unmounts while tests are running, these state updates could attempt to execute on an unmounted component. While React 19 handles this better than previous versions, consider using AbortController for proper cleanup.

Example pattern:

```tsx
const handleTest = async (id: number) => {
  const abortController = new AbortController()
  updateIndexerTestState(id, "testing", undefined)
  try {
    await api.testTorznabIndexer(id, { signal: abortController.signal })
    updateIndexerTestState(id, "ok", undefined)
    toast.success("Connection test successful")
  } catch (error) {
    // In strict TypeScript the caught value is `unknown`, so narrow before use
    if (error instanceof DOMException && error.name === "AbortError") return
    const errorMsg = error instanceof Error ? error.message : "Connection test failed"
    updateIndexerTestState(id, "error", errorMsg)
    toast.error(errorMsg)
  }
}

// Store abort controllers in a ref and clean up in useEffect
useEffect(() => {
  return () => {
    // Cleanup logic to abort ongoing operations
  }
}, [])
```

Note: This assumes the API client supports AbortSignal.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`web/src/components/indexers/IndexersPage.tsx` (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (1)
web/src/components/indexers/IndexersPage.tsx (1)
web/src/lib/api.ts (1)
`api` (1574-1574)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (1)
web/src/components/indexers/IndexersPage.tsx (1)
164-179: Backend test handler persists results to database; review comment based on incomplete analysis.

The original concern is incorrect. Test results ARE persisted:

- Frontend calls `api.testTorznabIndexer(id)` → backend `TestIndexer` handler (jackett.go:579)
- Handler performs actual test and calls `updateTestStatusWithTimeout()` → `UpdateTestStatus()` → database UPDATE (jackett.go lines 618, 625)
- Frontend's `updateIndexerTestState()` is a subsequent UI-only state update, not the persistence mechanism
- Page refresh fetches from backend via `loadIndexers()` → `api.listTorznabIndexers()`, restoring persisted test results

The local state update complements backend persistence for immediate UI feedback; it does not replace it. Test results persist across page refreshes.

Likely an incorrect or invalid review comment.
Actionable comments posted: 1
♻️ Duplicate comments (2)
internal/services/jackett/scheduler.go (1)
39-51: Still a potential data race onsearchScheduler.rateLimiterinitialization
newSearchSchedulerstarts the loop withrateLimiterset tonil, and the field is expected to be populated later from outside the scheduler. SincenextRunnable→maxWaitreadss.rateLimiterconcurrently with that write, this is still a data race under the Go memory model.Consider wiring the
*RateLimiterin before starting the loop, e.g. by:
- Changing
newSearchSchedulerto accept a*RateLimiterand store it beforego s.loop(), or- Providing a constructor or factory in the Jackett service that creates both the rate limiter and scheduler and never mutates
rateLimiterafterward.This will keep the scheduler’s internal state write‑once and race‑free.
You can confirm the race by running
go test ./... -racefocusing on Jackett service tests that exercise concurrent searches.Also applies to: 91-99, 192-205
internal/services/crossseed/service.go (1)
2176-2185: Unify hash normalization with qBittorrent expectations across `GetTorrents`/`GetTorrentFilesBatch`/`HasTorrentByAnyHash`

There are several places where hash normalization and qBittorrent integration don't line up cleanly:

- `normalizeHash` now lowercases hashes:

  ```go
  func normalizeHash(hash string) string {
      return strings.ToLower(strings.TrimSpace(hash))
  }
  ```

- Calls into qBittorrent use raw `hash` values:
  - `GetTorrents(..., TorrentFilterOptions{Hashes: []string{hash}})` in `AnalyzeTorrentForSearchAsync` (Lines 2463–2465), `SearchTorrentMatches` (Lines 2855–2857), `ApplyTorrentSearchResults` (Lines 3389–3397), `filterIndexersByExistingContent` (Lines 4578–4588), plus other non‑hash‑filtered calls that are fine.
  - `GetTorrentFilesBatch(ctx, instanceID, []string{hash})` in `getTorrentFilesCached` (Lines 2208–2211) and `SearchTorrentMatches` (Lines 2870–2873).
- Map lookups and caches use normalized hashes:
  - `filesMap[normalizeHash(hash)]` in `getTorrentFilesCached`.
  - `filesByHash[normalizeHash(torrent.Hash)]` in `batchLoadCandidateFiles`.
  - `filesMap[hash]` where `hash` is `normalizeHash(t.Hash)` in `deduplicateSourceTorrents`.
  - `normalizedHash := normalizeHash(hash)` and `filesMap[normalizedHash]` in `SearchTorrentMatches`.
- Caches use `stringutils.DefaultNormalizer.Normalize(hash)` in `searchResultCacheKey`, `asyncFilteringCacheKey`, and `skipCache`.

Based on the retrieved learning, the qBittorrent sync layer expects uppercase hex for batch/hash‑keyed APIs (e.g. `GetCachedFilesBatch`/`GetTorrentFilesBatch`), and other parts of this repo historically normalized infohashes to uppercase via a shared normalizer. Mixing:

- lowercased `normalizeHash`,
- raw incoming `hash` for qBittorrent filters, and
- `stringutils.DefaultNormalizer` elsewhere

creates a high risk that `GetTorrents(...Hashes: []string{hash})` silently fails to match when `hash` casing differs from qBittorrent's internal normalization, and that `GetTorrentFilesBatch` returns a map keyed by one canonical form (e.g., uppercase) while lookups use another (lowercase), causing false "torrent files not found" errors and breaking cross‑seed searches, Analyze, deduplication, and file‑level matching.

To make this robust and consistent, I recommend:

1. Pick a single canonical hash normalizer (preferably the same one used by qBittorrent sync). If `stringutils.DefaultNormalizer` is already aligned with qBittorrent's expectation (uppercase hex per the previous learning), use it everywhere for infohashes and drop the separate `normalizeHash` helper, or re‑implement `normalizeHash` in terms of it:

   ```diff
   - func normalizeHash(hash string) string {
   -     return strings.ToLower(strings.TrimSpace(hash))
   - }
   + func normalizeHash(hash string) string {
   +     return stringutils.DefaultNormalizer.Normalize(strings.TrimSpace(hash))
   + }
   ```

2. Always normalize hashes before calling qBittorrent hash‑filtered APIs and reuse that same value for map keys and caches. Example for `SearchTorrentMatches` (the pattern should be applied similarly in `AnalyzeTorrentForSearchAsync`, `ApplyTorrentSearchResults`, `getTorrentFilesCached`, `batchLoadCandidateFiles`, `deduplicateSourceTorrents`, and `filterIndexersByExistingContent`):

   ```diff
   - if strings.TrimSpace(hash) == "" {
   -     return nil, fmt.Errorf("%w: torrent hash is required", ErrInvalidRequest)
   - }
   + normalizedHash := normalizeHash(hash)
   + if normalizedHash == "" {
   +     return nil, fmt.Errorf("%w: torrent hash is required", ErrInvalidRequest)
   + }
     ...
   - torrents, err := s.syncManager.GetTorrents(ctx, instanceID, qbt.TorrentFilterOptions{
   -     Hashes: []string{hash},
   - })
   + torrents, err := s.syncManager.GetTorrents(ctx, instanceID, qbt.TorrentFilterOptions{
   +     Hashes: []string{normalizedHash},
   + })
     ...
   - filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
   + filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normalizedHash})
     ...
     sourceFiles, ok := filesMap[normalizedHash]
     if !ok {
   -     return nil, fmt.Errorf("torrent files not found for hash %s", hash)
   +     return nil, fmt.Errorf("torrent files not found for hash %s", normalizedHash)
     }
   ```

   For `getTorrentFilesCached`:

   ```diff
   - key := cacheKeyForTorrentFiles(instanceID, hash)
   + normHash := normalizeHash(hash)
   + key := cacheKeyForTorrentFiles(instanceID, normHash)
     ...
   - filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{hash})
   + filesMap, err := s.syncManager.GetTorrentFilesBatch(ctx, instanceID, []string{normHash})
     ...
   - files, ok := filesMap[normalizeHash(hash)]
   + files, ok := filesMap[normHash]
   ```

3. Ensure `HasTorrentByAnyHash` callers also feed normalized hashes. In `processCrossSeedCandidate` you currently do:

   ```go
   existingTorrent, exists, err := s.syncManager.HasTorrentByAnyHash(ctx, candidate.InstanceID, []string{torrentHash})
   ```

   Normalize `torrentHash` once up front and reuse it there and in any subsequent comparisons.

This unification will keep:

- qBittorrent filters (`GetTorrents`, `HasTorrentByAnyHash`) and batch file lookups,
- internal maps (`filesByHash`, `torrentByHash`, deduplication), and
- caches (`searchResultCacheKey`, `asyncFilteringCacheKey`, `skipCache`)

all speaking the same canonical infohash format, avoiding subtle "not found" and duplicate‑detection failures.

Based on learnings

Once updated, it's worth:

- Adding a small unit/integration test that calls `GetTorrentFilesBatch` and `GetTorrents` with mixed‑case hashes and asserts you can still find files and torrents, and
- Verifying the underlying qBittorrent sync manager's normalization (uppercase vs lowercase) so `normalizeHash`/`DefaultNormalizer` match it exactly.

Also applies to: 2196-2223, 2241-2268, 2463-2465, 2489-2493, 2845-2877, 2883-2899, 3389-3397, 3731-3753, 3880-3881, 3946-3950, 4005-4014, 4578-4588, 5333-5347, 3523-3542, 3580-3585, 3587-3594
🧹 Nitpick comments (7)
internal/services/crossseed/service.go (1)
4527-4720: Temporary `testFilteringEnabled` flag in `filterIndexersByExistingContent` should not live in production code

The content‑based indexer filtering logic is quite detailed and seems well‑instrumented, but there is a leftover test hook:

```go
// TEMPORARY TEST: Remove all indexers to see if the filtering is working
testFilteringEnabled := false // Set to true to test if filtering is working
if testFilteringEnabled {
    log.Debug().Msg("[CROSSSEED-FILTER] *** TEST MODE: FILTERING OUT ALL INDEXERS ***")
    return []int{}, nil, nil, nil
}
```

Even though it's currently `false`, shipping a "flip this to kill all indexers" switch in core logic is risky (accidental toggle, merge conflict, or debug change).

Recommend removing this block or replacing it with:
- A dedicated unit test that forces the same conditions, or
- A build‑tagged or environment‑driven debug path that can’t be toggled accidentally in normal builds.
If you keep any debug path, consider guarding it with an env var and logging prominently when it’s active so production behavior is obvious.
internal/services/jackett/service.go (6)
143-212: Search context and rate-limit priority handling look consistent; be aware of overwrite semantics

The `searchContext` + `finalizeSearchContext`/`resolveSearchPriority`/`rateLimitOptionsForPriority` + `WithSearchPriority` stack is clean: you normalize priority from (a) explicit options, (b) context, and (c) a fallback, and always end up with non‑nil `RateLimitOptions`. The `searchPriorityKey{}` context key is also safely namespaced.

One subtle behavior: `finalizeSearchContext` always replaces any pre‑existing `meta.rateLimit` struct with the canned options for the resolved priority. If there is (now or in the future) any call-path that populates custom `RateLimitOptions` (e.g., tuned `MinInterval`/`MaxWait`) on `meta` before calling `finalizeSearchContext`, those custom durations will be discarded. Given `searchContext` is internal and you currently only supply the priority via context/fallback, this is fine, but it's worth keeping in mind if you later add more granular per‑search tuning.

Also applies to: 261-267
291-307: Queued search flow + timeout/partial handling are coherent with rate-limit metadata

The combination of:

- Building `meta` via `finalizeSearchContext` (interactive priority for user‑initiated searches),
- Computing `searchTimeout := computeSearchTimeout(meta, len(indexersToSearch))` and wrapping via `timeouts.WithSearchTimeout`, and
- Routing through `executeQueuedSearch` (which safely re‑finalizes `meta` in an idempotent way)

yields a sensible, rate‑limit‑aware search envelope. The new `deadlineErr` handling and `Partial` flag computation in `Search` also look correct and avoid treating deadline timeouts as hard failures when some results were gathered.

The only very minor nit is that `executeQueuedSearch` always calls `finalizeSearchContext` again even when `meta` was already finalized; you could micro‑optimize by skipping if `meta != nil && meta.rateLimit != nil`, but behavior today is correct.

Also applies to: 396-401, 439-451
544-551: `SearchGeneric` timeout and wait-budget logging are good; consider de-duplicating wait-budget logic

Using `computeSearchTimeout(meta, len(indexersToSearch))` and logging both `search_timeout` and a derived `rate_limit_wait_budget` gives nice visibility into how much of the overall window is being reserved for rate‑limit waiting.

The `waitBudget` calculation here duplicates the logic already embedded in `computeSearchTimeout` (both derive it from `meta.rateLimit.MinInterval`/`MaxWait` with the same rules). That's fine as‑is, but you could consider extracting a small helper (e.g., `waitBudgetForMeta(meta *searchContext) time.Duration`) used by both `computeSearchTimeout` and this log block to keep the behavior from ever drifting.

Also applies to: 623-640
1071-1076: Cache persistence now handles nil contexts safely; consider whether writes should outlive caller ctx

Switching to:

```go
parent := ctx
if parent == nil {
    parent = context.Background()
}
storeCtx, cancel := context.WithTimeout(parent, 3*time.Second)
```

eliminates the risk of panicking when `ctx` is nil and keeps cache writes bounded by both a small timeout and the caller's cancellation.

If, in future, you decide cache persistence should still succeed when the request has already timed out (e.g., to maximize reuse of expensive searches), you might consider always using `context.Background()` as the parent here instead of `ctx`, but the current behavior is internally consistent and safe.
1442-1471: Rate-limit option cloning and timeout computation look correct; shared options assumed read-only

`cloneRateLimitOptions`:

- Produces a fresh copy of `meta.rateLimit` when present (so goroutines don't share the original), and
- Falls back to a "background" `RateLimitOptions` when `meta`/`meta.rateLimit` is nil.

`computeSearchTimeout` then cleanly layers a single "wait budget" (max of `MinInterval`/`MaxWait`, or `defaultMinRequestInterval`) on top of `AdaptiveSearchTimeout`, which matches the intuitive "network time + expected rate‑limit delay" model.

One nuance: in `searchMultipleIndexers` the single cloned `rateLimitOpts` pointer is shared among all goroutines. That's perfectly fine as long as `RateLimiter.BeforeRequest` treats `opts` as immutable (which is the natural contract for an options struct). If you ever need to mutate `opts` inside `BeforeRequest`, consider cloning per goroutine instead of once at the top of the function to avoid a potential data race.
cloneRateLimitOptions(meta)ensures there is always a usableRateLimitOptions.- Calling
BeforeRequestper indexer is the right hook point.- The
RateLimitWaitErrorbranch logs with detailed context and emitsindexerResult{id: 0, err: waitErr}, so skipped‑due‑to‑budget indexers do not incorrectly contribute to coverage.A couple of subtle behaviors to be aware of:
Classification of wait-budget skips as “failures”:
In the aggregation loop, any non‑timeout error (includingRateLimitWaitError) incrementsfailures. If all non‑timeout indexers end up in this bucket, you return a hard error ("all %d indexers failed (last error: %w)"), and the higher-levelSearchfunctions treat that as fatal (they only softencontext.DeadlineExceeded). That means a search where every indexer exceeds its wait budget will fail the entire request, not return an empty/partial result. If the intent is instead “no indexers could run within the allowed budget, but the query itself is ‘successful’ with zero fresh results,” you may want to treatRateLimitWaitErrormore like a timeout (e.g., incrementtimeoutsinstead offailures, or exclude them fromnonTimeoutIndexers).Coverage and caching semantics:
Usingid: 0forRateLimitWaitErrorresults means those indexers never enter the coverage set, which is desirable for cache coverage accounting—cached entries won’t be marked as covering an indexer that couldn’t be queried at all.If this “all wait‑budget‑skipped indexers ⇒ hard error” behavior is intentional, the current implementation is fine. Otherwise, a small tweak to how
RateLimitWaitErrorcontributes tofailures/nonTimeoutIndexerswould get you to a softer “partial/empty but OK” outcome.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
`internal/api/handlers/crossseed.go` (2 hunks)
`internal/services/crossseed/service.go` (43 hunks)
`internal/services/jackett/ratelimiter.go` (6 hunks)
`internal/services/jackett/scheduler.go` (1 hunks)
`internal/services/jackett/service.go` (20 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-06T11:59:21.390Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: web/src/components/torrents/TorrentTableOptimized.tsx:1510-1515
Timestamp: 2025-11-06T11:59:21.390Z
Learning: In the qui project, the API layer in web/src/lib/api.ts normalizes backend snake_case responses to camelCase for frontend consumption. For CrossSeed search results, the backend's download_url field is transformed to downloadUrl in the searchCrossSeedTorrent method, so frontend code should always use the camelCase variant (result.downloadUrl).
Applied to files:
internal/services/crossseed/service.go
📚 Learning: 2025-11-06T12:11:04.963Z
Learnt from: Audionut
Repo: autobrr/qui PR: 553
File: internal/services/crossseed/service.go:1045-1082
Timestamp: 2025-11-06T12:11:04.963Z
Learning: The autobrr/qui project uses a custom go-qbittorrent client library (github.com/autobrr/go-qbittorrent) that supports both "paused" and "stopped" parameters when adding torrents via the options map. Both parameters should be set together when controlling torrent start state, as seen in internal/services/crossseed/service.go and throughout the codebase.
Applied to files:
internal/services/crossseed/service.go
🧬 Code graph analysis (5)
internal/api/handlers/crossseed.go (2)
internal/services/jackett/service.go (1)
WithSearchPriority(262-267)internal/services/jackett/ratelimiter.go (1)
RateLimitPriorityInteractive(19-19)
internal/services/jackett/scheduler.go (2)
internal/services/jackett/client.go (1)
Result(84-101)internal/services/jackett/ratelimiter.go (4)
RateLimiter(59-63)RateLimitPriorityInteractive(19-19)RateLimitPriorityRSS(20-20)RateLimitPriorityCompletion(21-21)
internal/services/jackett/ratelimiter.go (1)
internal/models/torznab_indexer.go (1)
TorznabIndexer(64-83)
internal/services/crossseed/service.go (4)
pkg/stringutils/normalize.go (3)
Normalizer(19-22)NewDefaultNormalizer(35-37)DefaultNormalizer(61-61)internal/services/crossseed/models.go (2)
FindCandidatesResponse(128-131)FindCandidatesRequest(106-119)internal/services/jackett/service.go (1)
WithSearchPriority(262-267)internal/services/crossseed/parsing.go (1)
FindLargestFile(603-625)
internal/services/jackett/service.go (3)
internal/services/jackett/ratelimiter.go (8)
RateLimitOptions(25-29)RateLimitPriority(16-16)RateLimitPriorityBackground(22-22)RateLimitPriorityInteractive(19-19)RateLimitPriorityRSS(20-20)RateLimitPriorityCompletion(21-21)NewRateLimiter(65-73)RateLimitWaitError(31-37)internal/models/torznab_indexer.go (1)
TorznabIndexer(64-83)internal/pkg/timeouts/timeouts.go (2)
WithSearchTimeout(35-46)AdaptiveSearchTimeout(19-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (12)
internal/api/handlers/crossseed.go (1)
20-21: Correct use of Jackett search priority for interactive API calls

Wrapping `r.Context()` with `jackett.WithSearchPriority(...RateLimitPriorityInteractive)` before calling `SearchTorrentMatches` cleanly tags interactive searches for the rate limiter without affecting cancellation or deadlines. This fits the new priority model and looks good.

If you want to double‑check behavior, run a quick manual search while an RSS run is active and confirm interactive searches aren't starved behind background load.
Also applies to: 410-411
internal/services/jackett/scheduler.go (1)
64-89: Scheduler core loop and job lifecycle look sound

The queue ordering by `(priority, createdAt)`, cancellation handling in `nextRunnable`, and completion path (`startJob` → `completeCh` → `respCh`) are logically consistent and avoid goroutine leaks (buffered `respCh` means callers that time out don't block workers).

No changes needed here from a correctness perspective.
Also applies to: 123-151, 152-190
internal/services/jackett/ratelimiter.go (1)
16-37: Rate limiter option surface and error type are well structured

The introduction of `RateLimitPriority`, `RateLimitOptions`, and `RateLimitWaitError`, plus routing both `BeforeRequest` and `NextWait` through `resolveOptions`/`computeWaitLocked`, gives a clean, testable API for per‑request tuning. The `Is` method on `RateLimitWaitError` correctly supports `errors.Is(err, &RateLimitWaitError{})` for caller matching.

Internals (cooldown, hourly/daily limits, and minInterval handling) remain consistent with prior behavior.

It's worth adding/expanding tests to cover:

- Per‑request `MinInterval` overrides,
- `MaxWait` just below/above the threshold, and
- Matching `RateLimitWaitError` via `errors.Is`.
internal/services/crossseed/service.go (6)
37-43: Service wiring for new caches and string normalizer looks consistentAdding
stringNormalizerandtorrentFilesCachetoService, and initializing them viastringutils.NewDefaultNormalizer()and a 5‑minute TTL cache, aligns with the broader normalization and caching strategy (search results, async filtering, domain cache). The updatedqbittorrentSyncinterface withGetTorrents,GetTorrentFilesBatch, andHasTorrentByAnyHashmatches how the rest of this file now consumes qBittorrent.No issues spotted in this wiring.
Also applies to: 51-68, 70-83, 167-193
751-805: Jackett rate‑limit priorities are applied consistently across automation and completionThree key paths now correctly tag searches with appropriate Jackett rate‑limit priorities:
- Completion search (
executeCompletionSearch):RateLimitPriorityCompletion.- RSS automation (
executeAutomationRun):RateLimitPriorityRSSused forRecent.- Seeded Torrent Search automation (
processSearchCandidate):RateLimitPriorityRSSfor per‑candidate searches.This cleanly separates interactive, RSS, and completion traffic for the new rate limiter and scheduler, without altering higher‑level error handling or timeouts. Looks good.
It’s worth running a quick load test (or prod observation) to confirm that:
- RSS runs don’t starve completion searches, and
- Interactive API searches remain responsive under background load.
Also applies to: 1185-1187, 4005-4083
1960-2048: Use of `HasTorrentByAnyHash` and add‑options in `processCrossSeedCandidate` looks correct

The new flow in `processCrossSeedCandidate`:

- Uses `HasTorrentByAnyHash` to short‑circuit when the torrent already exists on a target instance, and
- Correctly sets `paused` and `stopped` together for start state, with `skip_checking=true` on the first attempt and a retry without `skip_checking` on failure,

and is in line with the project's go‑qbittorrent usage and the prior learning about pairing the `paused` and `stopped` flags.

No functional issues spotted here; just ensure the hash passed into `HasTorrentByAnyHash` is normalized consistently per the earlier comment.

You may want a small test to cover:

- "exists" short‑circuit behavior with `HasTorrentByAnyHash`,
- fallback from `skip_checking=true` to a recheck when AddTorrent fails.

Also applies to: 2071-2113, 2113-2161
3698-3753: Source torrent deduplication & search history propagation look well designed

The deduplication pipeline:

- Parses all candidate torrents into `torrentWithRelease` and groups them via `releasesMatch`.
- Prefers representatives that:
  - Have a top‑level folder (via batched `GetTorrentFilesBatch` + `detectCommonRoot`), then
  - Have the earliest `AddedOn`.
- Tracks duplicates per representative and logs grouping decisions.
- Propagates search history updates from each representative to its duplicates and marks them in `state.skipCache`.

This is a nice performance/UX win for Seeded Torrent Search, especially in instances with many cross‑seeds of the same content, and the grouping logic is careful about layout preferences.

Aside from the hash normalization concerns already called out (ensuring batch file keys match), the overall approach looks solid.

After fixing hash normalization, it's worth adding tests that:

- Verify deduplication prefers "with root folder" over "flat" torrents, and
- Confirm `duplicateHashes` + `propagateDuplicateSearchHistory` actually prevent re‑processing duplicates within the cooldown window.

Also applies to: 3813-3847, 3879-3899, 3941-3997
5230-5272: New `evaluateReleaseMatch` scoring and webhook reuse look correct

`evaluateReleaseMatch` now provides a structured score and reason string based on group, resolution, source, season/episode, year, codec, and audio.

That score is:

- Used to bias Jackett search result ordering in `SearchTorrentMatches`, and
- Logged for webhook checks to explain why a torrent was considered a match.

A baseline score of `1.0` plus small increments per matching attribute is reasonable, and the "title match" fallback reason ensures logs are always interpretable.

The webhook integration correctly:

- Uses `releasesMatch` for coarse filtering,
- Applies `isSizeWithinTolerance` based on automation settings, and
- Then uses `evaluateReleaseMatch` only for scoring/logging, not gating.

This all looks consistent and safe.

Adding tests around `evaluateReleaseMatch` (e.g., asserting relative ordering of otherwise similar releases) would help keep future tweaks from surprising search/webhook behavior.

Also applies to: 5395-5418, 5547-5745
3523-3545: Consistent use of `stringutils.DefaultNormalizer` for cache keys and skipCache is good

Using `stringutils.DefaultNormalizer.Normalize(hash)` in:

- `searchResultCacheKey`,
- `asyncFilteringCacheKey`, and
- `shouldSkipCandidate`/`propagateDuplicateSearchHistory` (via `skipCache`),

ensures cross‑seed search results, async filtering state, and automation cooldown checks are keyed in a consistent, case‑insensitive manner across different API entrypoints (web UI hash, webhook hash, internal torrent hash).

Once the qBittorrent‑facing hash normalization is unified per earlier comments, this provides a solid, canonical layering between external input and internal caches.

After unifying hash normalization, it's worth verifying that:

- Running a search and then applying results with differently cased hashes still hits the same cache entries, and
- Seeded search automation's skipCache behaves identically regardless of hash casing from qBittorrent.

Also applies to: 3580-3594, 3946-3950
internal/services/jackett/service.go (3)
269-288: Scheduler wiring in `NewService` correctly respects options

Initializing `searchScheduler` by default and then reassigning its `rateLimiter` after applying `ServiceOptions` means:

- Options can override either `s.rateLimiter` or `s.searchScheduler`.
- The final scheduler always points at the final `s.rateLimiter` value, if non‑nil.

That ordering avoids the usual "scheduler captured old dependency before options ran" bug and makes testing with injected schedulers/rate‑limiters straightforward.
744-751: `Recent` now uses rate-limit-aware timeouts, which matches the other search paths

Using `finalizeSearchContext` with a background priority and then `computeSearchTimeout(meta, len(indexersToSearch))` for `Recent` keeps it consistent with `Search`/`SearchGeneric` in terms of honoring rate‑limit wait budgets in the overall timeout. The subsequent partial‑result and logging behavior lines up with the other methods.
1935-1962: Context-free Prowlarr year workaround is straightforward and safe

Refactoring `applyProwlarrWorkaround` to be context‑free and take only `idx` + `params` simplifies the call sites and keeps the behavior focused:

- It no‑ops for non‑Prowlarr backends.
- For Prowlarr, it moves `year` into `q` and deletes the `year` param, with clear debug logging.

This aligns with the comment that the workaround should "always move the year parameter into the search query for Prowlarr indexers" and avoids any extra context plumbing.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/services/jackett/service.go (1)
270-289: Scheduler integration and fan‑out are solid; mind coverage semantics for skipped indexers

The default construction/injection of `searchScheduler` and later wiring of `rateLimiter` are correct, and `executeQueuedSearch` falls back cleanly to the old path. The fan‑out in `searchIndexersWithScheduler` with a buffered `resultsCh` sized to `len(indexers)` plus `cloneValues` per job avoids data races and goroutine leaks on context cancellation.

One behavioural difference to be aware of: in the scheduler path, coverage is whatever `runIndexerSearch` returns. When `applyIndexerRestrictions` short‑circuits an indexer (caps/categories mismatch), `runIndexerSearch` currently returns `nil, nil, nil`, so that indexer is treated as not covered. In the legacy `searchMultipleIndexers` path, the goroutine sends `indexerResult{id: idx.ID}` on such a skip, which causes that indexer to be counted in coverage even though no network call was made. If cache/partial‑result semantics should remain identical between the two execution strategies, consider returning `[]int{idx.ID}` as coverage from `runIndexerSearch` when restrictions cause a skip:

```diff
- if s.applyIndexerRestrictions(ctx, client, idx, "", meta, paramsMap) {
-     return nil, nil, nil
- }
+ if s.applyIndexerRestrictions(ctx, client, idx, "", meta, paramsMap) {
+     // Treated as "handled" for coverage, even though no request was sent.
+     return nil, []int{idx.ID}, nil
+ }
```

(and similarly for the other backend branches where a restriction returns early)
Also applies to: 299-307, 309-375, 2289-2298
🧹 Nitpick comments (2)
internal/services/jackett/service.go (2)
1142-1147: Nil‑safe cache persistence and timeout helpers are well‑factored; consider clarifying RateLimitOptions mutability

The `persistSearchCacheEntry` change to guard `ctx` and fall back to `context.Background()` avoids a panic when callers pass `nil`, without changing behaviour when a real context is supplied.

`cloneRateLimitOptions`, `asRateLimitWaitError`, and `computeSearchTimeout` are clean and line up with how search timeouts and wait budgets are computed and logged.

One minor point: in `searchMultipleIndexers` you compute `rateLimitOpts := cloneRateLimitOptions(meta)` once and then pass the same pointer concurrently to `BeforeRequest` for multiple indexers. This is perfectly fine as long as `BeforeRequest` treats the options as read‑only (which appears to be the intent). If there's any chance it might mutate the struct in the future, consider cloning inside the goroutine instead to avoid a potential data race:

```diff
- rateLimitOpts := cloneRateLimitOptions(meta)
  ...
- if err := s.rateLimiter.BeforeRequest(ctx, idx, rateLimitOpts); err != nil {
+ if err := s.rateLimiter.BeforeRequest(ctx, idx, cloneRateLimitOptions(meta)); err != nil {
```

Also applies to: 1513-1542
1842-1968: Shared per‑indexer search logic is correct; align coverage semantics for restriction skips
`runIndexerSearch` cleanly encapsulates the single‑indexer flow: rate‑limiter state initialization, API key decryption, client construction, Prowlarr year workaround, capability/category restriction checks, `BeforeRequest` with cloned options, latency recording, rate‑limit detection/handling, and tracker/indexer ID normalization. The 429/rate‑limit error enrichment matches the behaviour in `searchMultipleIndexers`, which keeps caller expectations consistent. The added `meta == nil` guard in `applyCapabilitySpecificParams` is also a good defensive fix.

As noted earlier, the only behavioural gap is that when `applyIndexerRestrictions` returns true, this helper currently returns `nil, nil, nil`, so the scheduler path will not treat that indexer as covered. If you want parity with the non‑scheduler path (where such indexers are effectively "handled" but skipped), having `runIndexerSearch` return `[]int{idx.ID}` in those early‑return cases would close that gap without affecting error handling.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- internal/services/jackett/service.go (21 hunks)
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (1)
internal/services/jackett/service.go (4)
- internal/services/jackett/ratelimiter.go (8)
  - `RateLimitOptions` (25-29)
  - `RateLimitPriority` (16-16)
  - `RateLimitPriorityBackground` (22-22)
  - `RateLimitPriorityInteractive` (19-19)
  - `RateLimitPriorityRSS` (20-20)
  - `RateLimitPriorityCompletion` (21-21)
  - `NewRateLimiter` (65-73)
  - `RateLimitWaitError` (31-37)
- internal/models/torznab_indexer.go (3)
  - `TorznabIndexer` (64-83)
  - `TorznabBackendNative` (37-37)
  - `TorznabBackendProwlarr` (35-35)
- internal/services/jackett/client.go (2)
  - `Result` (84-101)
  - `NewClient` (36-81)
- internal/pkg/timeouts/timeouts.go (2)
  - `WithSearchTimeout` (35-46)
  - `AdaptiveSearchTimeout` (19-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (3)
internal/services/jackett/service.go (3)
69-69: Search context and priority plumbing looks consistent
`searchContext`, the priority helpers (`searchPriorityKey`, `finalizeSearchContext`, `resolveSearchPriority`, `rateLimitOptionsForPriority`), and `WithSearchPriority` form a coherent pipeline: priority resolution is idempotent, respects explicit options, then context hints, and defaults to background. The interactive/RSS/completion intervals and waits align with how `computeSearchTimeout` and `RateLimiter.BeforeRequest` are later used, so this wiring looks sound.

Also applies to: 95-97, 145-150, 152-212, 262-268
465-471: Timeout and queued‑search integration looks correct and consistent

The three entrypoints now consistently:

- Build `meta` via `finalizeSearchContext` (with interactive vs background fallback),
- Compute a timeout using `computeSearchTimeout(meta, len(indexersToSearch))`,
- Wrap the call with `timeouts.WithSearchTimeout`,
- Route through `executeQueuedSearch` and treat `context.DeadlineExceeded` as a partial‑results condition rather than a hard failure.

The extra debug log in `SearchGeneric` showing `rate_limit_wait_budget` plus the timeout should be very helpful when tuning priorities and intervals. This all looks coherent.

Also applies to: 509-521, 614-621, 694-707, 711-751, 815-821
1618-1631: Prowlarr year workaround and rate‑limit skip handling look sane

Applying `applyProwlarrWorkaround` to the per‑indexer `paramsMap` before capability/category restrictions and backend‑specific search calls is safe and keeps the original `url.Values` immutable. Handling `RateLimitWaitError` explicitly in `searchMultipleIndexers` (logging the wait/maxWait/priority and sending a result with `id: 0`) ensures these "budget‑skip" cases don't incorrectly contribute to coverage, while still being counted toward overall failure stats.

The added debug logs for failure/timeout counters and the final completion summary give a clear view of how each batch behaved without changing error semantics.
Also applies to: 1647-1648, 1673-1674, 1703-1704, 1707-1717, 1812-1817, 1832-1837
There was a problem hiding this comment.
Actionable comments posted: 3
♻️ Duplicate comments (2)
internal/services/jackett/service.go (1)
276-288: Scheduler initialization race condition (see scheduler.go review).

Lines 286-288 set `s.searchScheduler.rateLimiter` after the scheduler's loop has already started (in `newSearchScheduler`). This race condition is flagged in detail in the scheduler.go review.

internal/services/jackett/scheduler.go (1)

95-105: Race condition on searchScheduler.rateLimiter initialization

The `rateLimiter` field is `nil` when `go s.loop()` starts (line 103), and `NewService` assigns it later (line 287) without synchronization. Since `jobPriority` (line 273) and worker operations read `s.rateLimiter` concurrently, this creates a data race. The suggested fix in the existing review comment (passing the rate limiter to `newSearchScheduler`) should be implemented.
🧹 Nitpick comments (7)
internal/services/jackett/scheduler.go (3)
168-190: Dispatch loop may spin rapidly under certain conditions.

The loop calls `dispatchTasks()` (line 170) before every select, and the `default` case (lines 179-188) allows the loop to continue immediately if no channels are ready. If `taskQueue` has items but all worker queues are full, `dispatchTasks()` will return quickly (lines 221-225), and the loop will spin without blocking.

Consider:

- Removing the `default` case to always block when no work is ready
- Adding a small sleep in the default case if no progress is made
- Using a condition variable to signal when dispatch should retry
107-166: Minor: Error condition could be more precise.

Line 161 checks `len(results) == 0`, but results could be legitimately empty even when some indexers succeeded (they just returned no matches). Consider checking `len(coverage) == 0` instead, since coverage tracks indexers that completed successfully.

Apply this diff:

```diff
-	if failures == len(tasks) && lastErr != nil && len(results) == 0 {
+	if failures == len(tasks) && lastErr != nil && len(coverage) == 0 {
```
288-300: Consider using atomic operations for ID generation.

Lines 288-300 use mutex-protected counters for job and task IDs. Since these are simple integer increments without additional state changes, `sync/atomic` operations (`atomic.AddUint64`) would be more efficient and reduce lock contention.

Example refactor:

```go
func (s *searchScheduler) nextJobID() uint64 {
	return atomic.AddUint64(&s.jobSeq, 1)
}

func (s *searchScheduler) nextTaskID() uint64 {
	return atomic.AddUint64(&s.taskSeq, 1)
}
```

internal/services/jackett/service.go (4)
322-388: Consider simplifying the goroutine architecture.

Lines 336-351 spawn one goroutine per indexer, and each calls `s.searchScheduler.Submit` (which internally creates per-indexer workers). This double-layered goroutine pattern seems redundant.

Current flow:

- `searchIndexersWithScheduler` creates N goroutines
- Each goroutine calls `Submit` with 1 indexer
- `Submit` enqueues task for worker
- Scheduler's `getWorker` creates/reuses worker goroutine

Simpler alternatives:

- Call `Submit` once with all indexers (it already handles multiple indexers)
- Or bypass `Submit` and directly dispatch to scheduler's task queue

The current pattern adds complexity without clear concurrency benefits, since the scheduler already manages per-indexer workers.
Minor style note: Line 2317's
append([]string(nil), v...)is idiomatic but could be slightly more explicit as:clone := make([]string, len(v)) copy(clone, v) out[k] = clone
523-523: Minor inconsistency in logging.
`SearchGeneric` (lines 709-720) logs the rate limit wait budget, but `Search` and `Recent` don't. Consider adding similar logging to other entry points for consistency, or document why `SearchGeneric` needs extra verbosity.

Also applies to: 708-720, 830-830
1155-1160: Minor: Context handling can be simplified.

Lines 1155-1158 create an intermediate `parent` variable that can be eliminated:

```diff
-	parent := ctx
-	if parent == nil {
-		parent = context.Background()
-	}
-	storeCtx, cancel := context.WithTimeout(parent, 3*time.Second)
+	if ctx == nil {
+		ctx = context.Background()
+	}
+	storeCtx, cancel := context.WithTimeout(ctx, 3*time.Second)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- internal/services/jackett/scheduler.go (1 hunks)
- internal/services/jackett/service.go (21 hunks)
- internal/services/jackett/service_test.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- internal/services/jackett/service_test.go
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
🧬 Code graph analysis (2)
internal/services/jackett/scheduler.go (2)
- internal/services/jackett/client.go (1)
  - `Result` (84-101)
- internal/services/jackett/ratelimiter.go (4)
  - `RateLimiter` (59-63)
  - `RateLimitPriorityInteractive` (19-19)
  - `RateLimitPriorityRSS` (20-20)
  - `RateLimitPriorityCompletion` (21-21)
internal/services/jackett/service.go (4)
- internal/services/jackett/ratelimiter.go (8)
  - `RateLimitOptions` (25-29)
  - `RateLimitPriority` (16-16)
  - `RateLimitPriorityBackground` (22-22)
  - `RateLimitPriorityInteractive` (19-19)
  - `RateLimitPriorityRSS` (20-20)
  - `RateLimitPriorityCompletion` (21-21)
  - `NewRateLimiter` (65-73)
  - `RateLimitWaitError` (31-37)
- internal/models/torznab_indexer.go (3)
  - `TorznabIndexer` (64-83)
  - `TorznabBackendNative` (37-37)
  - `TorznabBackendProwlarr` (35-35)
- internal/services/jackett/client.go (2)
  - `Result` (84-101)
  - `NewClient` (36-81)
- internal/pkg/timeouts/timeouts.go (2)
  - `WithSearchTimeout` (35-46)
  - `AdaptiveSearchTimeout` (19-32)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (9)
internal/services/jackett/scheduler.go (3)
1-11: LGTM! The imports are appropriate for the scheduler implementation.

13-19: LGTM! Priority constants are well-defined, with lower numeric values representing higher priority, consistent with the heap implementation.

272-286: jobPriority could panic if meta.rateLimit is accessed during initialization race.

Line 273 checks `meta != nil && meta.rateLimit != nil`, but if the race condition (flagged earlier) occurs, `meta.rateLimit` could reference a `RateLimiter` with uninitialized state. While the nil check protects against direct panics, the underlying race condition should be fixed first.

This concern is directly related to the initialization race flagged in the earlier comment. Fixing the scheduler initialization will resolve this.
internal/services/jackett/service.go (6)
152-213: LGTM! Well-structured priority resolution.

The priority resolution logic follows a clear precedence: context value → explicit options → fallback → background default. The `rateLimitOptionsForPriority` function correctly creates appropriate intervals and max-wait times for each priority level.

262-268: LGTM! Public API for setting search priority via context follows Go conventions correctly.

299-314: LGTM! Sensible routing logic.

The fallback chain handles test executors, explicit indexer requests, missing scheduler, and normal scheduled execution correctly.

1526-1555: LGTM! Helper functions are well-implemented:

- `cloneRateLimitOptions` safely copies by value
- `asRateLimitWaitError` uses standard error unwrapping
- `computeSearchTimeout` correctly adds wait budget to adaptive timeout

478-483: LGTM! Appropriate priority assignments.

Search entry points correctly use `Interactive` priority for user-initiated searches (Search, SearchGeneric) and `Background` priority for automated queries (Recent).

Also applies to: 629-634, 828-828

2157-2183: All call sites have been successfully updated.

Verification confirms:

- Function definition (line 2157) shows the new signature with `ctx` removed
- All 3 call sites in the codebase use the new signature: lines 1643, 1894 in service.go and line 1448 in service_test.go
- No calls with the old signature exist
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/qbittorrent/sync_manager.go (1)
1275-1311: Redundant cloning in batch caching reduces performance.

The code clones file slices multiple times:

- Line 1277-1278: Clone after fetch
- Line 1281: Store clone in `filesByHash`
- Line 1297-1299: Clone again before caching

The second clone (lines 1297-1299) is unnecessary because `filesByHash[canonicalHash]` already contains a copy from line 1277. This adds allocation overhead for every fetched file.

Consider this optimization:

```diff
 	mu.Lock()
-	filesByHash[ch] = copied
+	filesByHash[ch] = copied // Already a copy, safe to cache as-is
 	mu.Unlock()
```

Then update the caching section:
♻️ Duplicate comments (1)
internal/qbittorrent/sync_manager.go (1)
1407-1447: Hash normalization inconsistency remains unresolved.

Per the retrieved learning and past review comments, there's a critical inconsistency:

- `canonicalizeHash` (line 1408) uses `strings.ToLower`
- Documentation (line 39) mentions "lowercase hex"
- The learning states that `normalizeHash` in crossseed uses `strings.ToUpper`
- `normalizeHashes` compensates by creating multiple variants (lines 1434-1442)

While the multi-variant approach in `normalizeHashes` masks the issue during lookups, it's inefficient and confusing. Standardize to uppercase hex throughout the codebase as recommended in the past review.

Apply this diff to use uppercase as the canonical format:

```diff
 func canonicalizeHash(hash string) string {
-	return strings.ToLower(strings.TrimSpace(hash))
+	return strings.ToUpper(strings.TrimSpace(hash))
 }
```

Also update the documentation at line 39:

```diff
-	// Callers must pass hashes already trimmed/normalized (e.g. lowercase hex)
+	// Callers must pass hashes already trimmed/normalized (e.g. uppercase hex)
```

Based on learnings
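A minimal sketch of the single-canonical-form approach the review recommends. Only `canonicalizeHash` mirrors the diff above; the `lookup` helper and map shape are hypothetical, added to show that one canonical form makes multi-variant probing unnecessary:

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalizeHash normalizes an infohash to the uppercase-hex form
// recommended as the single canonical representation.
func canonicalizeHash(hash string) string {
	return strings.ToUpper(strings.TrimSpace(hash))
}

// lookup shows that with one canonical form, a single map access replaces
// probing multiple case variants of the same hash.
func lookup(files map[string][]string, hash string) ([]string, bool) {
	v, ok := files[canonicalizeHash(hash)]
	return v, ok
}

func main() {
	files := map[string][]string{
		canonicalizeHash("deadbeef"): {"a.mkv"},
	}
	_, ok := lookup(files, "  DEADBEEF ")
	fmt.Println(ok) // true: case and surrounding whitespace no longer matter
}
```

The invariant to enforce is that normalization happens exactly once, at every map write and every map read; once that holds, the variant-expansion logic can be deleted.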
🧹 Nitpick comments (1)
internal/qbittorrent/sync_manager.go (1)
47-54: Consider pointer-based slice returns for efficiency.

The `torrentFilesClient.getTorrentsByHashes` method returns `[]qbt.Torrent` (slice of values). If `qbt.Torrent` is a large struct, consider using `[]*qbt.Torrent` to avoid copying overhead, especially when dealing with batch operations.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- internal/qbittorrent/sync_manager.go (14 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/qbittorrent/sync_manager.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (4)
internal/qbittorrent/sync_manager.go (4)
2103-2112: Timer handling correctly avoids race conditions.

The implementation properly addresses timer safety by:

- Calling `Stop()` on the existing timer (best-effort)
- Always creating a new timer instead of calling `Reset()`

This pattern avoids the documented race condition where `Reset()` must only be called on stopped/drained timers. Good defensive coding.
2157-2194: File fetch concurrency control is well-implemented.

The per-instance semaphore pattern effectively limits concurrent file fetches:
- Lazy initialization avoids overhead for unused instances
- Context cancellation is properly handled
- Release closure is safe with defer usage in callers
1355-1380: HasTorrentByAnyHash is correctly implemented.

The method properly:
- Normalizes input hashes to handle case variations
- Iterates through lookup variants efficiently
- Verifies matches against canonical hash set
- Handles edge cases (empty input, not found)
264-274: GetTorrents wrapper is correctly implemented.

Clean delegation to the underlying sync manager with proper error handling.
There was a problem hiding this comment.
Actionable comments posted: 0
🧹 Nitpick comments (1)
internal/services/crossseed/align.go (1)
328-339: Consider expanding the video extension list.

The sidecar file handling logic is well-implemented. However, the video extension list might be missing some common formats that could benefit from this normalization logic.

Consider adding these commonly used video extensions:

```diff
 if dot := strings.LastIndex(base, "."); dot >= 0 && dot < len(base)-1 {
 	videoExt := strings.ToLower(base[dot+1:])
 	switch videoExt {
-	case "mkv", "mp4", "avi", "ts", "m2ts", "mov", "mpg", "mpeg":
+	case "mkv", "mp4", "avi", "ts", "m2ts", "mov", "mpg", "mpeg", "flv", "webm", "wmv", "vob":
 		base = base[:dot]
 	}
 }
```

This ensures broader coverage for sidecar files with intermediate video extensions.
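A runnable sketch of the sidecar base-name normalization with the suggested extended extension list; `sidecarBase` is a hypothetical helper for illustration, not the actual function in `align.go`:

```go
package main

import (
	"fmt"
	"strings"
)

// videoExts mirrors the switch in the review, including the suggested
// additions (flv, webm, wmv, vob).
var videoExts = map[string]bool{
	"mkv": true, "mp4": true, "avi": true, "ts": true, "m2ts": true,
	"mov": true, "mpg": true, "mpeg": true, "flv": true, "webm": true,
	"wmv": true, "vob": true,
}

// sidecarBase strips one trailing intermediate video extension so that a
// sidecar like "Name.mkv.nfo" (base "Name.mkv" after dropping ".nfo") and
// the video "Name.mkv" normalize to the same base for matching.
func sidecarBase(base string) string {
	if dot := strings.LastIndex(base, "."); dot >= 0 && dot < len(base)-1 {
		if videoExts[strings.ToLower(base[dot+1:])] {
			return base[:dot]
		}
	}
	return base
}

func main() {
	fmt.Println(sidecarBase("Show.S01E01.mkv")) // Show.S01E01
	fmt.Println(sidecarBase("Show.S01E01"))     // unchanged
}
```

Using a map instead of a switch keeps adding extensions to a one-line change, at the cost of a package-level variable.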
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
internal/services/crossseed/align.go(8 hunks)internal/services/crossseed/align_test.go(3 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-21T21:11:50.633Z
Learnt from: s0up4200
Repo: autobrr/qui PR: 625
File: internal/qbittorrent/sync_manager.go:1112-1123
Timestamp: 2025-11-21T21:11:50.633Z
Learning: In internal/qbittorrent/sync_manager.go, the GetCachedFilesBatch interface documentation (around line 39-40) should specify "uppercase hex" instead of "lowercase hex" to match the actual normalization practice used throughout the codebase (e.g., normalizeHash in internal/services/crossseed/service.go uses strings.ToUpper, and hash filtering uses uppercase).
Applied to files:
internal/services/crossseed/align.go
🧬 Code graph analysis (1)
internal/services/crossseed/align.go (1)
internal/qbittorrent/sync_manager.go (1)
- `WithForceFilesRefresh` (65-67)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests
🔇 Additional comments (10)
internal/services/crossseed/align_test.go (3)
53-95: LGTM! Comprehensive test for sidecar file handling.

The new test effectively validates the sidecar file normalization logic where intermediate video extensions (e.g., `.mkv` in `name.mkv.nfo`) are stripped for matching purposes. The test covers both the video file and its associated sidecar file, ensuring both are correctly mapped.

186-189: LGTM! Pointer conversions align with API changes.

The function calls correctly pass pointer arguments to match the updated `shouldRenameTorrentDisplay` signature.

199-202: LGTM! Pointer conversions align with API changes.

The function calls correctly pass pointer arguments to match the updated `shouldAlignFilesWithCandidate` signature.

internal/services/crossseed/align.go (7)

13-14: LGTM! Import addition for batch file retrieval.

The import is necessary to access `WithForceFilesRefresh` for forced refresh context.

83-89: LGTM! Graceful batch retrieval with forced refresh.

The implementation correctly uses forced refresh context and gracefully falls back to the expected source files if the batch call fails or returns no results. The error handling ensures robustness.

94-97: LGTM! Logical root rename condition.

The updated condition correctly restricts root folder renaming to cases where no file-level rename plan exists. This prevents conflicts between root and individual file renames, falling back to a root rename only when file-level mappings cannot be established.

130-143: LGTM! Simplified file rename logic.

The direct use of `instr.oldPath` and `instr.newPath` from the plan is correct, as the plan now contains the final paths without requiring runtime adjustment for root renames.

188-196: LGTM! Efficient filtered torrent availability check.

The updated implementation correctly uses `GetTorrents` with a hash filter instead of retrieving all torrents, improving efficiency. The error handling and availability check are appropriate.

399-414: LGTM! Function signatures updated for pointer-based API.

The function signatures correctly updated to accept pointer parameters (`*rls.Release`), aligning with the broader refactor to pointer-based releases throughout the codebase.

52-52: Hash normalization is consistent, no changes needed.

Verification confirms that both `normalizeHash` (used at line 52) and `GetTorrentFilesBatch` use lowercase hex normalization via `strings.ToLower`, ensuring the lookup at line 86 will correctly match map keys returned by the batch API.
Summary by CodeRabbit
Refactor
Behavior
Search / Indexers
Rate limiting
Health & Metrics
Tests
✏️ Tip: You can customize this high-level summary in your review settings.