
fix(reannounce): tracker health focus/ignore + targeted reannounce #1473

Open
s0up4200 wants to merge 6 commits into develop from codex/reannounce-tracker-health-controls

Conversation

@s0up4200
Collaborator

@s0up4200 s0up4200 commented Feb 17, 2026

Adds healthFocusTrackers and healthIgnoreTrackers (default ignore: sptracker.cc) to control which trackers drive reannounce decisions. This fixes multi-tracker cases where a torrent was skipped because a single tracker reported OK, and it targets reannounce requests at the specific unhealthy tracker URLs when supported, with a safe fallback and a per-domain give-up once maxRetries is reached.
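The intended focus/ignore semantics can be sketched as follows — trackerInScope is a hypothetical helper for illustration, not the PR's actual code, and assumes case-insensitive domain matching:

```go
package main

import (
	"fmt"
	"strings"
)

// trackerInScope decides whether a tracker domain should drive reannounce
// decisions: ignore-list entries are always excluded, and when a focus list
// is set, only its entries are considered. (Illustrative sketch only.)
func trackerInScope(domain string, focus, ignore []string) bool {
	for _, ig := range ignore {
		if strings.EqualFold(domain, ig) {
			return false
		}
	}
	if len(focus) == 0 {
		return true // no focus list: every non-ignored tracker counts
	}
	for _, f := range focus {
		if strings.EqualFold(domain, f) {
			return true
		}
	}
	return false
}

func main() {
	ignore := []string{"sptracker.cc"} // the PR's default ignore entry
	fmt.Println(trackerInScope("sptracker.cc", nil, ignore))        // false
	fmt.Println(trackerInScope("tracker.example.org", nil, ignore)) // true
}
```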

Summary by CodeRabbit

  • New Features

    • Per-instance tracker health: focus & ignore lists for health checks.
    • Per-instance reannounce: targeted reannounce to selected trackers/domains.
  • Enhancements

    • Improved tracker classification (relevant/healthy/updating/unhealthy) with per-domain retry limits and dynamic timeouts.
    • Database migration adds persisted health focus/ignore lists and seeded default ignore entry.
    • UI: new "Tracker Health" section with multi-selects (create-option + tooltips).
  • Documentation

    • API docs expose reannounce settings including health tracker fields.
  • Tests

    • Added tests for classification and focus/ignore behavior.

@coderabbitai
Contributor

coderabbitai bot commented Feb 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Adds per-tracker health monitoring and domain-aware reannounce: new focus/ignore lists are persisted and exposed via API/UI, qBittorrent client gains per-tracker reannounce API call, and the reannounce service classifies trackers and enforces per-domain retry/give-up logic.
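The per-domain retry/give-up bookkeeping described here can be sketched minimally; attemptTracker is an illustrative stand-in, not the service's real job state:

```go
package main

import "fmt"

// attemptTracker records reannounce attempts per tracker domain and reports
// when a domain has exhausted its retry budget. Illustrative sketch only.
type attemptTracker struct {
	attempts   map[string]int
	maxRetries int
}

func newAttemptTracker(maxRetries int) *attemptTracker {
	if maxRetries <= 0 {
		maxRetries = 1 // mirror the normalization of non-positive values
	}
	return &attemptTracker{attempts: map[string]int{}, maxRetries: maxRetries}
}

func (a *attemptTracker) gaveUp(domain string) bool {
	return a.attempts[domain] >= a.maxRetries
}

func (a *attemptTracker) record(domain string) {
	if !a.gaveUp(domain) {
		a.attempts[domain]++
	}
}

func main() {
	t := newAttemptTracker(2)
	for i := 0; i < 3; i++ {
		t.record("tracker.example.org") // third attempt is capped
	}
	fmt.Println(t.attempts["tracker.example.org"], t.gaveUp("tracker.example.org")) // 2 true
}
```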

Changes

Cohort / File(s) Summary
API Handlers & OpenAPI
internal/api/handlers/instances.go, internal/web/swagger/openapi.yaml
Adds HealthFocusTrackers and HealthIgnoreTrackers to the InstanceReannounceSettings payload/schema and wires them into Instance API request/response surfaces.
Database Migration & Model
internal/database/migrations/062_add_reannounce_tracker_health.sql, internal/models/instance_reannounce.go
Adds JSON columns health_focus_trackers_json and health_ignore_trackers_json; model scanning, sanitization, defaults, Upsert/SELECT/INSERT/UPDATE logic extended to persist and return these fields.
qBittorrent Client
internal/qbittorrent/client.go
Adds ReannounceTrackersCtx method and per-client host, basicUsername, basicPassword fields to support per-instance, per-tracker reannounce via qBittorrent HTTP API.
Reannounce Service
internal/services/reannounce/service.go, internal/services/reannounce/settings_cache.go
Introduces trackerScope/classification (relevant/healthy/updating/unhealthy), per-domain attempt/give-up state on jobs, dynamic timeouts, domain-aware reannounce calls, and integrates focus/ignore settings; settings cache clone deep-copies new slices.
Service Tests
internal/services/reannounce/service_test.go
Refactors tests to use classifyTrackers, adds cases for focus/ignore, mixed tracker states, and updating/not-contacted classification.
Frontend & Types
web/src/components/instances/preferences/TrackerReannounceForm.tsx, web/src/lib/instance-validation.ts, web/src/types/index.ts
Adds UI controls, normalization, and defaults for healthFocusTrackers and healthIgnoreTrackers (default ignore includes sptracker.cc); updates TS types and default settings.

Sequence Diagram

sequenceDiagram
    autonumber
    participant User as User
    participant API as API Handler
    participant DB as Database
    participant Service as Reannounce Service
    participant qBT as qBittorrent Client

    User->>API: Update Instance reannounce settings (healthFocusTrackers, healthIgnoreTrackers)
    API->>DB: Persist reannounce settings (including health lists)
    DB-->>API: Ack saved
    API-->>User: Confirmed

    Note over Service,DB: Periodic or requested reannounce job
    Service->>DB: Fetch instance + reannounce settings
    DB-->>Service: Return settings (focus/ignore lists)
    Service->>Service: Classify trackers (relevant, healthy, updating, unhealthy)
    alt Unhealthy trackers exist
        Service->>qBT: ReannounceTrackersCtx(hashes, filtered tracker URLs)
        qBT->>qBT: POST /api/v2/torrents/reannounce
        qBT-->>Service: Success/Failure
        Service->>Service: Update per-domain attempts/give-up state
    else All trackers healthy or updating
        Service->>Service: Skip reannounce
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

reannounce, web, area/backend, bugfix

Suggested reviewers

  • buroa
  • Barcode-eng

Poem

🐇 I hop through lists of trackers late at night,
I note the quiet ones and those that give a fright.
I nudge the client per-domain with careful art,
Count attempts, then rest — a gentle reannounce heart.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 6.25%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Title check — ✅ Passed: the title accurately describes the main changes: adding tracker health focus/ignore controls and implementing targeted reannounce functionality to specific tracker URLs.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch codex/reannounce-tracker-health-controls

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
internal/services/reannounce/service_test.go (1)

320-371: Consider table‑driving the basic classifier cases.

It would reduce repetition and align with the preferred test style.

As per coding guidelines: Prefer table-driven test cases and reuse integration fixtures in internal/qbittorrent/.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/services/reannounce/service_test.go` around lines 320 - 371,
Refactor TestClassifyTrackers_BasicCases into a table-driven test: create a
slice of structs (name, input []qbt.TorrentTracker, settings
*models.InstanceReannounceSettings, expected counts for
relevant/healthy/unhealthy/updating) and iterate subtests calling
service.classifyTrackers for each case; replace the repeated individual
assertions with table entries for the cases currently coded (nil, okTrackers,
downTrackers, unregistered, disabledOnly, updatingTrackers,
notContactedTrackers, mixed) and assert lengths against the expected values,
using t.Run(entry.name) for better failure isolation and reusing any existing
fixtures from internal/qbittorrent if needed.
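A table-driven shape along those lines might look like the sketch below; classify is a toy stand-in run from main so the example is self-contained, whereas the real test would call service.classifyTrackers inside t.Run subtests:

```go
package main

import "fmt"

type tracker struct {
	status string // "ok", "updating", or anything else (treated as down)
}

// classify is a toy stand-in for the service's classifier, used only to show
// the table-driven test shape; it is not the real classifyTrackers.
func classify(ts []tracker) (healthy, updating, unhealthy int) {
	for _, t := range ts {
		switch t.status {
		case "ok":
			healthy++
		case "updating":
			updating++
		default:
			unhealthy++
		}
	}
	return
}

func main() {
	cases := []struct {
		name                         string
		in                           []tracker
		healthy, updating, unhealthy int
	}{
		{"nil", nil, 0, 0, 0},
		{"all ok", []tracker{{"ok"}, {"ok"}}, 2, 0, 0},
		{"mixed", []tracker{{"ok"}, {"updating"}, {"down"}}, 1, 1, 1},
	}
	// In a real test this loop body runs under t.Run(tc.name, ...) for
	// per-case failure isolation.
	for _, tc := range cases {
		h, u, d := classify(tc.in)
		if h != tc.healthy || u != tc.updating || d != tc.unhealthy {
			panic(tc.name + ": unexpected classification")
		}
		fmt.Println(tc.name, "ok")
	}
}
```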
internal/services/reannounce/service.go (1)

867-885: Optional: preallocate the domains slice.

Small perf nit; aligns with the prealloc lint hint.

♻️ Optional tweak
-	var domains []string
+	domains := make([]string, 0, len(trackers))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/services/reannounce/service.go` around lines 867 - 885, The domains
slice in domainsFromTrackers is not preallocated; to reduce allocations,
initialize domains with a capacity hint (e.g. domains := make([]string, 0,
len(trackers))) before the loop, keep the rest of the logic unchanged (still use
append), and continue using extractTrackerDomain(tracker.Url) and the seen map
as-is to preserve de-duplication and order.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/services/reannounce/service.go`:
- Around line 472-484: The loop building targetURLs doesn't preallocate the
slice which triggers the linter; change the initialization of targetURLs to
preallocate with capacity len(cls.unhealthy) (e.g., targetURLs := make([]string,
0, len(cls.unhealthy))) before the for-range over cls.unhealthy, leaving the
rest of the loop (using tracker.Url, strings.TrimSpace, extractTrackerDomain and
targetDomains map) unchanged so append operations reuse the reserved capacity.



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 79221d6583

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Contributor

@coderabbitai coderabbitai bot left a comment


🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@internal/services/reannounce/service.go`:
- Around line 472-484: Preallocation warning fixed: initialize targetURLs with
capacity len(cls.unhealthy) and keep appending trimmed tracker URLs only after
skipping empty ones; specifically ensure the code uses targetURLs :=
make([]string, 0, len(cls.unhealthy)), iterates over cls.unhealthy, trims
tracker.Url with strings.TrimSpace, continues on empty u, extracts domain via
s.extractTrackerDomain(u) and records non-empty domains into targetDomains
before appending u to targetURLs.

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
internal/services/reannounce/service.go (1)

583-592: Consider adding a clarifying comment about targetDomains scope.

The increment loop iterates over targetDomains, which is built from all unhealthy trackers before the retry-cap filtering. The check at line 586 correctly prevents over-incrementing domains that were filtered out of targetURLs, but this relationship isn't immediately obvious to readers.

A brief comment would help future maintainers understand why both the early filter (lines 548-553) and this check are needed.

📝 Suggested comment
 		if job != nil {
 			s.jobsMu.Lock()
+			// targetDomains includes all unhealthy domains; skip those already filtered
+			// from targetURLs due to reaching max retries.
 			for domain := range targetDomains {
 				if job.attempts[domain] >= settings.MaxRetries {
 					continue
 				}
 				job.attempts[domain]++
 			}
 			s.jobsMu.Unlock()
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/services/reannounce/service.go` around lines 583 - 592, Add a brief
clarifying comment above the loop that increments job.attempts for each domain
in targetDomains to explain that targetDomains is constructed from all unhealthy
trackers (before retry-cap filtering) while targetURLs was already filtered for
domains that reached MaxRetries; note that the subsequent if check
(job.attempts[domain] >= settings.MaxRetries) prevents incrementing attempts for
domains that were excluded from targetURLs, so both the earlier filter and this
guard are required. Reference symbols: targetDomains, targetURLs, job.attempts,
settings.MaxRetries, s.jobsMu.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 075428ff92

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2


Comment on lines +481 to +593

    interval := time.Duration(settings.ReannounceIntervalSeconds) * time.Second
    maxRetries := settings.MaxRetries
    if maxRetries <= 0 {
        maxRetries = 1
    }
    if err := client.ReannounceTorrentWithRetry(ctx, hash, opts); err != nil {
        log.Debug().Err(err).Int("instanceID", instanceID).Str("hash", hash).Msg("reannounce: retry failed")
        s.recordActivity(instanceID, hash, torrentName, freshTrackers, ActivityOutcomeFailed, fmt.Sprintf("reannounce failed: %v", err))
        return
    }

    var (
        lastProblemDomains string
        requestLogged      bool
    )

retryLoop:
    for attempt := 0; attempt < maxRetries; attempt++ {
        trackerList, err := client.GetTorrentTrackersCtx(ctx, hash)
        if err != nil {
            log.Debug().Err(err).Int("instanceID", instanceID).Str("hash", hash).Msg("reannounce: failed to load trackers")
            s.recordActivity(instanceID, hash, torrentName, initialTrackers, ActivityOutcomeFailed, fmt.Sprintf("failed to load trackers: %v", err))
            return
        }

        cls := s.classifyTrackers(trackerList, settings)
        if len(cls.relevant) == 0 {
            s.recordActivity(instanceID, hash, torrentName, initialTrackers, ActivityOutcomeSkipped, "no matching trackers in scope")
            return
        }
        if len(cls.unhealthy) == 0 {
            if len(cls.updating) > 0 {
                updatingDomains := s.domainsFromTrackers(cls.updating)
                s.recordActivity(instanceID, hash, torrentName, updatingDomains, ActivityOutcomeSkipped, "trackers updating")
                return
            }

            healthyTrackers := s.getHealthyTrackers(trackerList, settings)
            outcome := ActivityOutcomeSkipped
            reason := "tracker healthy"
            if requestLogged {
                outcome = ActivityOutcomeSucceeded
                reason = "tracker healthy after reannounce"
            }

            s.jobsMu.Lock()
            if job != nil {
                job.attempts = nil
            }
            s.jobsMu.Unlock()

            s.recordActivity(instanceID, hash, torrentName, healthyTrackers, outcome, reason)
            return
        }

        targetDomains := make(map[string]struct{})
        targetURLs := make([]string, 0, len(cls.unhealthy))
        for _, tracker := range cls.unhealthy {
            u := strings.TrimSpace(tracker.Url)
            if u == "" {
                continue
            }
            if d := domainForURL(u); d != "" {
                targetDomains[d] = struct{}{}
            }
            targetURLs = append(targetURLs, u)
        }

        // Filter out domains that have reached the retry cap.
        if job != nil && len(targetURLs) > 0 {
            s.jobsMu.Lock()
            filtered := targetURLs[:0]
            for _, u := range targetURLs {
                d := domainForURL(u)
                if d != "" && job.attempts[d] >= settings.MaxRetries {
                    continue
                }
                filtered = append(filtered, u)
            }
            targetURLs = filtered
            s.jobsMu.Unlock()
        }

        lastProblemDomains = s.domainsFromTrackers(cls.unhealthy)
        if lastProblemDomains == "" {
            lastProblemDomains = initialTrackers
        }
        if len(targetURLs) == 0 {
            outcome := ActivityOutcomeSkipped
            if requestLogged {
                outcome = ActivityOutcomeFailed
            }
            s.recordActivity(instanceID, hash, torrentName, lastProblemDomains, outcome, "max retries reached for target trackers")
            return
        }

        if err := client.ReannounceTrackersCtx(ctx, []string{hash}, targetURLs); err != nil {
            log.Debug().Err(err).Int("instanceID", instanceID).Str("hash", hash).Msg("reannounce: request failed")
            s.recordActivity(instanceID, hash, torrentName, lastProblemDomains, ActivityOutcomeFailed, fmt.Sprintf("reannounce failed: %v", err))
            return
        }

        if !requestLogged {
            s.recordActivity(instanceID, hash, torrentName, lastProblemDomains, ActivityOutcomeSucceeded, "reannounce requested")
            requestLogged = true
        }

        if job != nil {
            s.jobsMu.Lock()
            // targetDomains is built before the max-retries filtering of targetURLs, so keep the guard
            // to avoid incrementing domains that were effectively skipped this round.
            for domain := range targetDomains {
                if job.attempts[domain] >= settings.MaxRetries {
                    continue
                }
                job.attempts[domain]++
            }
            s.jobsMu.Unlock()
        }
Contributor


⚠️ Potential issue | 🟠 Major

Use normalized maxRetries when filtering/incrementing attempts.

You normalize maxRetries to at least 1, but the filtering/increment logic still uses settings.MaxRetries. If it’s <= 0, every target domain is immediately filtered out and never incremented.

🛠️ Suggested fix
-				if d != "" && job.attempts[d] >= settings.MaxRetries {
+				if d != "" && job.attempts[d] >= maxRetries {
 					continue
 				}
-				if job.attempts[domain] >= settings.MaxRetries {
+				if job.attempts[domain] >= maxRetries {
 					continue
 				}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/services/reannounce/service.go` around lines 481 - 593, The code
normalizes maxRetries into the local variable maxRetries but still uses
settings.MaxRetries when filtering and incrementing job.attempts, causing
incorrect behavior when settings.MaxRetries <= 0; update the filtering loop and
the increment loop to use the normalized maxRetries variable (replace references
to settings.MaxRetries with maxRetries) so both the "filtered := ..." domain-cap
check and the job.attempts increment guard use the same effective retry cap;
ensure you keep the comment about targetDomains being built before filtering and
continue to lock s.jobsMu around both operations (methods: the filtering block
that builds filtered from targetURLs, and the block that increments
job.attempts).

Comment on lines +886 to +932

func (s *Service) trackerIsHealthy(tracker qbt.TorrentTracker) bool {
    if tracker.Status != qbt.TrackerStatusOK {
        return false
    }
    // Treat "OK but unregistered" as unhealthy.
    return !qbittorrent.TrackerMessageMatchesUnregistered(tracker.Message)
}

func (s *Service) trackerIsUpdating(tracker qbt.TorrentTracker) bool {
    switch tracker.Status {
    case qbt.TrackerStatusUpdating, qbt.TrackerStatusNotContacted:
        return true
    case qbt.TrackerStatusDisabled, qbt.TrackerStatusOK, qbt.TrackerStatusNotWorking:
        return false
    }
    return false
}

func (s *Service) classifyTrackers(trackers []qbt.TorrentTracker, settings *models.InstanceReannounceSettings) trackerClassification {
    var out trackerClassification
    if len(trackers) == 0 {
        return out
    }

    scope := s.buildTrackerScope(settings)

    for _, tracker := range trackers {
        if tracker.Status == qbt.TrackerStatusDisabled {
            continue
        }
        domain := s.extractTrackerDomain(tracker.Url)
        if !s.trackerIncluded(domain, scope) {
            continue
        }

        out.relevant = append(out.relevant, tracker)

        if s.trackerIsHealthy(tracker) {
            out.healthy = append(out.healthy, tracker)
            continue
        }
        if s.trackerIsUpdating(tracker) {
            out.updating = append(out.updating, tracker)
            continue
        }
        out.unhealthy = append(out.unhealthy, tracker)
    }
    return out
}
Contributor


⚠️ Potential issue | 🟠 Major

Skip unregistered trackers instead of marking them unhealthy.

Unregistered trackers should be ignored (neither healthy nor unhealthy). Right now they fall into the unhealthy bucket, which can trigger futile reannounce attempts and noisy logs.

🛠️ Suggested fix
 func (s *Service) trackerIsHealthy(tracker qbt.TorrentTracker) bool {
 	if tracker.Status != qbt.TrackerStatusOK {
 		return false
 	}
 	// Treat "OK but unregistered" as unhealthy.
 	return !qbittorrent.TrackerMessageMatchesUnregistered(tracker.Message)
 }
+
+func (s *Service) trackerIsUnregistered(tracker qbt.TorrentTracker) bool {
+	return tracker.Status == qbt.TrackerStatusOK && qbittorrent.TrackerMessageMatchesUnregistered(tracker.Message)
+}
 	for _, tracker := range trackers {
 		if tracker.Status == qbt.TrackerStatusDisabled {
 			continue
 		}
 		domain := s.extractTrackerDomain(tracker.Url)
 		if !s.trackerIncluded(domain, scope) {
 			continue
 		}
+		if s.trackerIsUnregistered(tracker) {
+			continue
+		}
 	for _, tracker := range trackers {
 		if tracker.Status == qbt.TrackerStatusDisabled {
 			continue
 		}
 		domain := s.extractTrackerDomain(tracker.Url)
 		if !s.trackerIncluded(domain, scope) {
 			continue
 		}
+		if s.trackerIsUnregistered(tracker) {
+			continue
+		}
 		if s.trackerIsHealthy(tracker) {
 			continue
 		}
 	for _, tracker := range trackers {
 		if tracker.Status == qbt.TrackerStatusDisabled {
 			continue
 		}
 		domain := s.extractTrackerDomain(tracker.Url)
 		if !s.trackerIncluded(domain, scope) {
 			continue
 		}
+		if s.trackerIsUnregistered(tracker) {
+			continue
+		}
 		if !s.trackerIsHealthy(tracker) {
 			continue
 		}

Based on learnings: “skip unregistered trackers and check if any other tracker is healthy.”

Also applies to: 961-989, 995-1023

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/services/reannounce/service.go` around lines 886 - 932, The
classifyTrackers logic currently treats "OK but unregistered" as unhealthy;
change it to skip unregistered trackers entirely (do not append them to healthy,
unhealthy or updating) by using
qbittorrent.TrackerMessageMatchesUnregistered(tracker.Message) to detect and
continue early; update trackerIsHealthy/trackerIsUpdating usage so
trackerIsHealthy returns false for non-OK and trackerIsUpdating remains the same
but classification flow in classifyTrackers should: skip disabled,
buildTrackerScope/extractTrackerDomain/trackerIncluded as before, append to
out.relevant, then if TrackerMessageMatchesUnregistered(...) continue (skip),
else if trackerIsHealthy append to out.healthy else if trackerIsUpdating append
to out.updating else append to out.unhealthy; apply the same
skip-for-unregistered change to the equivalent classification blocks referenced
(the other ranges around 961-989 and 995-1023) so any later "any healthy" checks
operate only on truly healthy trackers.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 10bdca122e

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

        req.SetBasicAuth(c.basicUsername, c.basicPassword)
    }

    resp, err := c.GetHTTPClient().Do(req)


P2 Badge Reuse qBittorrent retry/auth flow for tracker reannounce

This sends the request with a raw GetHTTPClient().Do(req) call, so targeted reannounce no longer benefits from the qBittorrent client's normal relogin-on-403 and retry behavior used by the rest of the API calls. In environments with expiring WebUI sessions or transient connection resets, this path can fail immediately and mark the reannounce job as failed even though a retried/re-authenticated request would have succeeded.

Useful? React with 👍 / 👎.
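The relogin-on-403 pattern this comment refers to can be sketched generically; doWithRelogin and its callbacks are hypothetical stand-ins for illustration, not the client's real API:

```go
package main

import (
	"errors"
	"fmt"
)

// doWithRelogin retries a request once after re-authenticating when the first
// attempt comes back with a 403-style auth failure. Hypothetical sketch of
// the relogin-and-retry flow, not the qBittorrent client's actual code.
func doWithRelogin(do func() (int, error), relogin func() error) (int, error) {
	status, err := do()
	if err == nil && status != 403 {
		return status, nil
	}
	if status == 403 {
		if lerr := relogin(); lerr != nil {
			return 0, fmt.Errorf("relogin failed: %w", lerr)
		}
		return do() // one retry with a fresh session
	}
	return status, err
}

func main() {
	calls := 0
	do := func() (int, error) {
		calls++
		if calls == 1 {
			return 403, nil // simulate an expired WebUI session on first try
		}
		return 200, nil
	}
	status, err := doWithRelogin(do, func() error { return nil })
	if err != nil || status != 200 {
		panic(errors.New("expected success after relogin"))
	}
	fmt.Println("status:", status, "calls:", calls) // status: 200 calls: 2
}
```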

Comment on lines +235 to +236
    if len(clone.HealthIgnoreTrackers) == 0 {
        clone.HealthIgnoreTrackers = append([]string{}, defaultHealthIgnoreTrackers...)


P2 Badge Permit empty health ignore list in reannounce settings

This forces healthIgnoreTrackers back to sptracker.cc whenever the submitted list is empty, which makes it impossible for users to intentionally clear the ignore list from the UI/API. The feature is presented as configurable, so persisting an empty list should be allowed; otherwise saved settings are silently rewritten and one tracker remains permanently excluded.

Useful? React with 👍 / 👎.
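One common way to address this is to distinguish an omitted field (a nil slice after JSON decoding) from an explicitly submitted empty list; normalizeIgnore below is an illustrative sketch of that idea, not the PR's actual code:

```go
package main

import "fmt"

var defaultIgnore = []string{"sptracker.cc"}

// normalizeIgnore applies the default only when the field was absent (nil)
// and preserves a deliberately empty list, so users can clear the ignore
// list from the UI/API. Illustrative sketch only.
func normalizeIgnore(submitted []string) []string {
	if submitted == nil {
		return append([]string{}, defaultIgnore...) // field omitted: seed default
	}
	return submitted // includes the explicitly-empty []string{} case
}

func main() {
	fmt.Println(normalizeIgnore(nil))        // [sptracker.cc]
	fmt.Println(normalizeIgnore([]string{})) // []
}
```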

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
internal/web/swagger/openapi.yaml (1)

5338-5350: Clarify time-based fields in InstanceReannounceSettings.

The new fields read like seconds-based values, but the schema doesn’t explicitly state units. Adding short descriptions will reduce client ambiguity.

♻️ Suggested documentation tweaks
     InstanceReannounceSettings:
       type: object
       properties:
         enabled:
           type: boolean
         initialWaitSeconds:
           type: integer
+          description: Initial wait before first reannounce attempt (seconds).
         reannounceIntervalSeconds:
           type: integer
+          description: Minimum interval between reannounce attempts (seconds).
         maxAgeSeconds:
           type: integer
+          description: Maximum torrent age to consider for reannounce (seconds).
         maxRetries:
           type: integer
+          description: Maximum reannounce retries per tracker domain.

Also, please confirm make test-openapi passes for this API contract update. As per coding guidelines: internal/web/swagger/**/*.{yaml,yml,json}: API contract changes must update OpenAPI content under internal/web/swagger and pass make test-openapi.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/web/swagger/openapi.yaml` around lines 5338 - 5350, Update the
InstanceReannounceSettings schema to add short description strings for each
time-based property (initialWaitSeconds, reannounceIntervalSeconds,
maxAgeSeconds) explicitly stating the unit is seconds (and clarify maxRetries
and enabled as needed); modify the OpenAPI YAML entry for
InstanceReannounceSettings to include these description fields next to the
properties so clients aren’t ambiguous, then run and ensure make test-openapi
passes to validate the API contract change.

Comment on lines +214 to +218
    if len(escaped) > 0 {
        body.WriteString("&urls=")
        // Do not escape the pipe separator; the API splits on '|'.
        body.WriteString(strings.Join(escaped, "|"))
    }
Contributor


⚠️ Potential issue | 🟡 Minor

Edge case: tracker URLs containing pipe characters.

Tracker URLs are escaped individually, then joined with a literal |. If a tracker URL contains a | character (rare but valid in query strings), it would be escaped to %7C in that URL. After server-side URL decoding, this %7C becomes |, potentially causing incorrect splitting if the server splits before decoding.

This is unlikely to occur with typical tracker URLs, but worth noting for completeness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/qbittorrent/client.go` around lines 214 - 218, Tracker URLs joined
with a literal '|' can collide with '|' characters originating inside an
individual tracker URL; instead of joining escaped URLs with the raw '|'
separator, join them with the percent-encoded form "%7C" so the separator
remains encoded in the final query string (i.e., replace strings.Join(escaped,
"|") with strings.Join(escaped, "%7C") and keep the per-URL escaping logic
intact), and update the surrounding comment near escaped and body.WriteString to
reflect this change.
