
feat: sse #551

Open

s0up4200 wants to merge 78 commits into develop from feat/sse

Conversation


@s0up4200 (Collaborator) commented on Nov 2, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Real-time torrent streaming with automatic reconnection and fallback polling
    • Notifications system with customizable targets and event subscriptions
    • External program execution triggered by automation rules
    • HTTP Basic Auth support for indexers and ARR instances
    • Cross-seed blocklists and Gazelle OPS/RED integration
    • Automation dry-run mode for testing rules before activation
    • Optional authentication disable mode with IP allowlisting
  • Improvements

    • Enhanced instance metadata caching with configurable fallback delays
    • Improved backup reliability with failure cooldown and export throttling
    • Better streaming status indicators and connection health visibility in UI
    • Expanded cross-seed configuration with additional indexer options

@s0up4200 added the enhancement (New feature or request) and sse labels on Nov 2, 2025

coderabbitai bot commented Nov 2, 2025

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

This PR introduces a comprehensive real-time streaming subsystem via Server-Sent Events (SSE), adds a notifications feature with multiple target support, enables authentication-disabled mode with IP allowlist controls, extends cross-seed functionality with per-instance blocklists and Gazelle OPS/RED API integration, refactors external program execution into a shared service, and integrates new API endpoints for instance transfer info, torrent field extraction, and notification management.

Changes

Cohort / File(s) Summary
SSE Streaming Infrastructure
go.mod, internal/api/sse/manager.go, internal/api/sse/manager_test.go, web/src/contexts/SyncStreamContext.tsx, web/src/hooks/useTorrentsList.ts, web/src/lib/api.ts
Introduces SSE-based per-instance streaming with StreamManager, exponential backoff retry logic, and frontend hooks (useSyncStream) to track streaming state, connection status, and metadata updates.
Notification System
internal/services/notifications/..., internal/models/notifications.go, internal/api/handlers/notifications.go, internal/database/migrations/062_add_notifications.sql
Adds notification targets (Notifiarr/Shoutrrr), event-driven handlers, notification service with support for event publishing, and full CRUD API for managing targets.
Authentication & Security
internal/domain/config.go, internal/config/config.go, internal/api/middleware/auth.go, internal/api/middleware/auth_disabled_ip_allowlist.go, internal/api/ctxkeys/ctxkeys.go
Implements auth-disabled mode with explicit acknowledgment, IP CIDR allowlist validation, synthetic admin user in disabled mode, and typed context keys for username propagation.
Cross-Seed Enhancements
internal/models/crossseed.go, internal/models/crossseed_blocklist.go, internal/database/migrations/061_add_cross_seed_gazelle_settings.sql, internal/database/migrations/059_add_cross_seed_blocklist.sql, internal/api/handlers/crossseed.go
Adds per-instance blocklist management, Gazelle OPS/RED API key storage (encrypted), and automation settings extensions for blocklist and Gazelle integration.
External Programs Service
internal/services/externalprograms/..., internal/api/handlers/external_programs.go, internal/models/...ExternalProgram*
Refactors external program execution into a dedicated service with path allowlisting, automation integration via ExternalProgramAction, and service-based execution logic.
Automations Extensions
internal/models/automation.go, internal/api/handlers/automations.go, internal/database/migrations/058_add_automation_dry_run.sql
Adds DryRun flag for non-persistent simulation, ExternalProgramAction/ResumeAction support, activity tracking enhancements, and validation for referenced external programs.
API Server & Routing
internal/api/server.go, internal/api/server_test.go, internal/api/handlers/arr.go, internal/api/handlers/torrents.go, internal/api/handlers/instances.go, internal/api/middleware/apikey_query.go
Updates server dependencies to wire streaming/notification/external-program services, adds /stream endpoint, implements HTTP Basic Auth for ARR/Torznab, adds GetTransferInfo and GetTorrentField endpoints, and introduces APIKeyFromQuery middleware.
qBittorrent Integration
internal/qbittorrent/client.go, internal/qbittorrent/pool.go, internal/qbittorrent/sync_events.go, internal/qbittorrent/app_info.go, internal/qbittorrent/preferences.go, internal/qbittorrent/sync_manager.go
Adds SyncEventSink interface for event propagation, SetSyncEventSink/SetTorrentAddedHandler on Client/Pool, cached app info and preferences retrieval, pointer-based TorrentView for efficiency, and InstanceMeta/InstanceError types for SSE health metadata.
Frontend Streaming Integration
web/src/App.tsx, web/src/components/torrents/TorrentTableOptimized.tsx, web/src/hooks/useInstanceMetadata.ts, web/src/hooks/useInstancePreferences.ts, web/src/hooks/useQBittorrentAppInfo.ts
Integrates SyncStreamProvider context, adds streaming state (connected, error, retry) to torrent list display, implements cache-driven metadata/preferences fetching with optional fallback delays, and supports conditional data source selection (streaming vs. polling).
Database Models & Persistence
internal/models/arr_instance.go, internal/models/torznab_indexer.go, internal/database/migrations/060_add_basic_auth_to_arr_and_torznab.sql, internal/database/migrations/057_add_license_provider_dodo.sql, internal/database/migrations/056_add_completion_indexer_ids.sql
Adds HTTP Basic Auth fields to ARR/Torznab models with encryption, license provider tracking (Dodo/Polar), completion settings IndexerIDs, and normalized string handling via normalizeLowerTrim helper.
License & Dodo Integration
internal/dodo/client.go, internal/database/license_repo.go, internal/models/license.go, cmd/qui/main.go
Introduces Dodo Payments API client for license activation/validation/deactivation, adds provider field to licenses, updates license service wiring with Dodo support, and adds environment-based configuration.
Backup Service Enhancements
internal/backups/service.go, internal/backups/restore_plan.go, internal/backups/restore_execute.go
Integrates notification service for backup success/failure events, adds failure cooldown and export throttle configuration, implements metadata unavailability skip logic, and centralizes string normalization via normalizeLowerTrim.
Metrics & Monitoring
internal/metrics/collector/torrent.go, internal/metrics/metrics.go
Adds per-tracker Prometheus metrics (count, uploaded, downloaded, size) with tracker customization support and groupTrackerTransfersForMetrics aggregation logic.
Configuration & Persistence
internal/config/config.go, internal/config/persist.go
Extends config with auth-disabled settings and CIDR allowlist, implements active-key tracking for log settings updates, adds canonical key normalization, and improves TOML persistence for log configuration.
Dodo Payments & Build
go.mod, .github/workflows/release.yml, .github/workflows/format.yml, .github/workflows/license-update.yml, .github/workflows/claude.yml.disabled
Updates dependencies (autobrr v1.72.1, go-sse, shoutrrr, mapstructure, locafero), migrates CI runners from Blacksmith to ubuntu-latest, adds Buildx caching, and updates Docker build steps.
Documentation & Guides
documentation/docs/..., README.md, .github/CONTRIBUTING.md, .github/ISSUE_TEMPLATE/config.yml, .github/DISCUSSION_TEMPLATE/issue-triage.yml, AGENTS.md
Adds Configuration Reference, SSO/CORS proxy guide, external programs automation docs, instance settings guide, notifications feature doc, Gazelle OPS/RED integration guide, and updates sidebar ordering; restricts PR creation to collaborators.
Miscellaneous
.github/FUNDING.yml, .gitignore
Adds Patreon funding source, updates .envrc and local reference ignore entries.

Sequence Diagrams

sequenceDiagram
    participant Client as Web Client
    participant Provider as SyncStreamProvider
    participant Manager as SSE Manager
    participant Pool as Client Pool
    participant Sink as Sync Event Sink
    
    Client->>Provider: useSyncStream(params)
    Provider->>Manager: openConnection(key, params)
    Manager->>Manager: ensureStream(key)
    Manager->>Pool: register listener for events
    
    loop Sync Updates
        Pool->>Sink: dispatchMainData(instanceID, data)
        Sink->>Manager: HandleMainData(instanceID, data)
        Manager->>Manager: updateStreamState(key)
        Manager->>Manager: enqueueGroup(key)
        Manager->>Client: broadcast SSE event
        Client->>Provider: handleStreamPayload
        Provider->>Client: update state (connected, meta)
    end
    
    loop Sync Errors
        Pool->>Sink: dispatchSyncError(instanceID, err)
        Sink->>Manager: HandleSyncError(instanceID, err)
        Manager->>Manager: scheduleRetry(exponentialBackoff)
        Manager->>Client: SSE error event
        Client->>Provider: handleError(retry schedule)
    end
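
The broadcast step in the diagram above can be sketched with a minimal SSE endpoint built on the Go standard library. This is an illustrative reconstruction, not the PR's implementation (which uses the go-sse library); the route, event name, and the formatSSE/streamHandler helpers are hypothetical:

```go
package main

import (
	"fmt"
	"net/http"
)

// formatSSE renders one Server-Sent Events frame in the wire format that
// the browser's EventSource understands.
func formatSSE(event, data string) string {
	return fmt.Sprintf("event: %s\ndata: %s\n\n", event, data)
}

// streamHandler subscribes the client to a channel of pre-serialized
// payloads and flushes each one as it arrives, until the client disconnects.
func streamHandler(events <-chan string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		w.Header().Set("Cache-Control", "no-cache")
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		for {
			select {
			case <-r.Context().Done(): // client went away
				return
			case payload := <-events:
				fmt.Fprint(w, formatSSE("torrents", payload))
				flusher.Flush()
			}
		}
	}
}

func main() {
	events := make(chan string, 16)
	http.Handle("/api/instances/1/stream", streamHandler(events)) // route name is illustrative
	// http.ListenAndServe would go here in a real server.
}
```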
sequenceDiagram
    participant Automation as Automation Processor
    participant ExternalProg as External Program Service
    participant Notifier as Notification Service
    participant Target as Notification Target
    
    Automation->>Automation: evaluate conditions
    alt Action: ExternalProgram
        Automation->>ExternalProg: Execute(torrent, programID)
        ExternalProg->>ExternalProg: validatePathAllowed(path)
        ExternalProg->>ExternalProg: executeProgram(args, paths)
        ExternalProg->>Automation: return results
    end
    
    Automation->>Automation: determine action outcomes
    Automation->>Notifier: Notify(event, context)
    Notifier->>Notifier: build payload (title, description)
    Notifier->>Target: POST event (Notifiarr/Shoutrrr)
    Target->>Target: format/send notification
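
The validatePathAllowed step above can be sketched as a path-containment check. This is a hedged reconstruction under the assumption that allowlisting means "the target resolves inside one of the configured root directories"; pathAllowed and its signature are hypothetical, not the service's actual API:

```go
package main

import (
	"path/filepath"
	"strings"
)

// pathAllowed reports whether target sits inside one of the allowlisted
// root directories, rejecting ../ traversal after cleaning the path.
func pathAllowed(target string, roots []string) bool {
	abs, err := filepath.Abs(filepath.Clean(target))
	if err != nil {
		return false
	}
	for _, root := range roots {
		rootAbs, err := filepath.Abs(filepath.Clean(root))
		if err != nil {
			continue
		}
		rel, err := filepath.Rel(rootAbs, abs)
		if err != nil {
			continue
		}
		// Inside the root iff the relative path does not escape upward.
		if rel == "." || (!strings.HasPrefix(rel, "..") && !filepath.IsAbs(rel)) {
			return true
		}
	}
	return false
}

func main() {}
```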
sequenceDiagram
    participant Client as HTTP Client
    participant Middleware as Auth Middleware
    participant Config as Domain Config
    participant Handler as API Handler
    
    Client->>Middleware: GET /api/torrents
    Middleware->>Config: IsAuthDisabled()
    alt Auth Disabled
        Config->>Middleware: true (if acknowledged)
        Middleware->>Middleware: inject synthetic admin user
        Middleware->>Handler: forward request
    else Auth Enabled
        Config->>Middleware: false
        Middleware->>Middleware: validate session/token
        alt Valid
            Middleware->>Handler: forward request
        else Invalid
            Middleware->>Client: 401 Unauthorized
        end
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 Streaming torrents flow so free,

Notifications hop with glee,

Blocklists block, and Gazelles align,

Auth-disabled (with care so fine),

External programs do their part,

Real-time updates touch the heart! ✨

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 14.43%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title 'feat(sse): torrent streaming' is concise and clearly describes the main feature being added, Server-Sent Events (SSE) for real-time torrent streaming, and directly relates to the substantial SSE changes throughout the codebase.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@s0up4200 added this to the v1.8.0 milestone on Nov 2, 2025
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🧹 Nitpick comments (1)
web/src/components/torrents/TorrentTableOptimized.tsx (1)

997-1004: Align polling message with actual interval

The copy still says “Using 3s polling…” but the useQueries fallback polls every 5000 ms. Tweaking the text (or reading the interval value) will keep the status panel accurate.

-      message: "Using 3s polling until the SSE connection is ready.",
-      secondary: "Polling every 3s",
+      message: "Using 5s polling until the SSE connection is ready.",
+      secondary: "Polling every 5s",
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8b3eb17 and 4c8f722.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (16)
  • go.mod (1 hunks)
  • internal/api/server.go (5 hunks)
  • internal/api/server_test.go (4 hunks)
  • internal/api/sse/manager.go (1 hunks)
  • internal/api/sse/manager_test.go (1 hunks)
  • internal/qbittorrent/client.go (4 hunks)
  • internal/qbittorrent/pool.go (3 hunks)
  • internal/qbittorrent/sync_events.go (1 hunks)
  • web/src/App.tsx (2 hunks)
  • web/src/components/torrents/TorrentCardsMobile.tsx (3 hunks)
  • web/src/components/torrents/TorrentTableOptimized.tsx (4 hunks)
  • web/src/contexts/SyncStreamContext.tsx (1 hunks)
  • web/src/hooks/useTorrentsList.ts (4 hunks)
  • web/src/lib/api.ts (1 hunks)
  • web/src/pages/Dashboard.tsx (4 hunks)
  • web/src/types/index.ts (1 hunks)

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
internal/api/sse/manager.go (1)

178-229: Consider removing redundant assignment.

The subscription registration logic is solid, but line 209 appears redundant:

} else {
    group.options = opts
}

Since streamOptionsKey generates identical keys for identical options, all subscriptions in a group should have the same options by construction. This assignment does no harm but serves no purpose.

Apply this diff to remove the redundant else branch:

     m.groups[state.groupKey] = group
     if _, exists := m.instanceGroups[opts.InstanceID]; !exists {
         m.instanceGroups[opts.InstanceID] = make(map[string]*subscriptionGroup)
     }
     m.instanceGroups[opts.InstanceID][state.groupKey] = group
-} else {
-    group.options = opts
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4c8f722 and 25bff12.

📒 Files selected for processing (6)
  • internal/api/server.go (6 hunks)
  • internal/api/sse/manager.go (1 hunks)
  • web/src/components/torrents/TorrentCardsMobile.tsx (4 hunks)
  • web/src/components/torrents/TorrentTableOptimized.tsx (5 hunks)
  • web/src/contexts/SyncStreamContext.tsx (1 hunks)
  • web/src/hooks/useTorrentsList.ts (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
internal/api/server.go (4)
internal/api/sse/manager.go (2)
  • StreamManager (71-90)
  • NewStreamManager (141-170)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (102-106)
internal/models/instance.go (1)
  • InstanceStore (115-118)
web/src/contexts/SyncStreamContext.tsx (2)
web/src/types/index.ts (3)
  • TorrentFilters (225-237)
  • TorrentStreamPayload (261-266)
  • TorrentStreamMeta (253-259)
web/src/lib/api.ts (1)
  • api (831-831)
web/src/hooks/useTorrentsList.ts (3)
web/src/types/index.ts (2)
  • TorrentResponse (239-251)
  • TorrentStreamPayload (261-266)
internal/qbittorrent/sync_manager.go (1)
  • TorrentResponse (62-75)
web/src/contexts/SyncStreamContext.tsx (1)
  • useSyncStream (401-455)
web/src/components/torrents/TorrentCardsMobile.tsx (1)
web/src/hooks/useTorrentsList.ts (1)
  • TORRENT_STREAM_POLL_INTERVAL_SECONDS (14-17)
web/src/components/torrents/TorrentTableOptimized.tsx (3)
web/src/hooks/useDateTimeFormatters.ts (1)
  • useDateTimeFormatters (13-47)
web/src/lib/utils.ts (2)
  • formatDate (56-61)
  • cn (9-11)
web/src/hooks/useTorrentsList.ts (1)
  • TORRENT_STREAM_POLL_INTERVAL_SECONDS (14-17)
internal/api/sse/manager.go (5)
internal/qbittorrent/filter_types.go (1)
  • FilterOptions (7-18)
internal/api/server.go (1)
  • Server (39-56)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (102-106)
internal/models/instance.go (1)
  • InstanceStore (115-118)
🔇 Additional comments (10)
internal/api/sse/manager.go (10)

26-138: LGTM! Well-structured types and constants.

The type definitions are well-designed with appropriate concurrency primitives:

  • Atomic types for closing and counter flags
  • Multiple indexes for efficient lookups by subscription ID, instance ID, and group key
  • Fine-grained locking in subscriptionGroup (separate mutexes for pending state vs. subscribers)

The constants are sensible (300 default limit, 2s sync interval with 30s max backoff).


141-170: LGTM! Robust constructor with proper fallback.

The constructor properly handles the unlikely replayer creation error by logging and continuing with nil, which is safe since the SSE server can operate without replay buffering.


232-275: LGTM! Thorough cleanup with proper cascading.

The unregister logic correctly:

  • Removes from all indexes (subscriptions, groups, instance index)
  • Cleans up empty groups and group indexes
  • Stops the sync loop when the last subscriber for an instance disconnects
  • Clears backoff state when stopping sync

337-406: LGTM! Well-structured HTTP handlers with proper validation.

Both handlers demonstrate good practices:

  • ServeInstance: Validates input, checks instance existence, uses defer for cleanup
  • onSession: Validates subscription state before accepting connection, sends initial snapshot via init event

The defer pattern at line 366 ensures cleanup even if ServeHTTP panics.


452-509: LGTM! Clever coalescing pattern prevents redundant updates.

The enqueueGroup → processGroup pattern implements a "latest-wins" coalescing strategy:

  1. If a group is already sending, new events just update pendingMeta/pendingType
  2. Only one worker goroutine processes each group at a time
  3. The worker drains all pending updates in a loop

This efficiently handles bursts of updates without spawning excessive goroutines or queuing stale data.
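
A stripped-down version of that latest-wins coalescing pattern might look like the following; coalescer and its fields are illustrative stand-ins for subscriptionGroup, not the manager's actual types:

```go
package main

import "sync"

// coalescer keeps a single "latest pending payload" slot. Rapid enqueues
// overwrite the slot; at most one worker goroutine drains it, so bursts
// never queue stale data or spawn extra goroutines.
type coalescer struct {
	mu      sync.Mutex
	pending *string // latest payload, nil when nothing is queued
	sending bool    // true while a worker goroutine is active
	sent    []string
	wg      sync.WaitGroup
}

func (c *coalescer) enqueue(payload string) {
	c.mu.Lock()
	c.pending = &payload // latest wins: overwrite any unprocessed payload
	if c.sending {
		c.mu.Unlock()
		return // the active worker will pick this up on its next loop
	}
	c.sending = true
	c.wg.Add(1)
	c.mu.Unlock()
	go c.process()
}

func (c *coalescer) process() {
	defer c.wg.Done()
	for {
		c.mu.Lock()
		p := c.pending
		c.pending = nil
		if p == nil {
			c.sending = false // drained everything; exit so no goroutine idles
			c.mu.Unlock()
			return
		}
		c.mu.Unlock()
		c.sent = append(c.sent, *p) // stand-in for the actual SSE broadcast
	}
}

func main() {}
```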


626-664: LGTM! Robust shutdown with proper cleanup.

The shutdown sequence is well-ordered:

  1. Atomically set closing flag (idempotent via CAS)
  2. Cancel context to signal all goroutines
  3. Stop all sync loops
  4. Shut down SSE server
  5. Filter out expected errors during shutdown

666-721: LGTM! Well-implemented exponential backoff with caps.

The backoff logic is properly designed:

  • Exponential backoff with 2^attempt multiplier capped at 2^4 (16x)
  • Interval bounded between 2s and 30s
  • Optimization to skip loop restart if interval unchanged (lines 733-735, 702-705)
  • Defensive validation in ensureBackoffStateLocked (lines 710-712)

741-786: LGTM! Ticker-based sync loop with proper cleanup.

The sync loop implementation is solid:

  • Line 752 captures interval as a parameter to avoid closure issues
  • Ticker cleanup via defer (line 754)
  • Context cancellation for graceful shutdown (lines 760-761)
  • Debug-level logging in forceSync is appropriate for transient sync failures

793-842: LGTM! Thorough query parameter validation.

The parseStreamOptions function properly validates all inputs:

  • Limit bounded between 1 and 2000 (line 798)
  • Page validated as non-negative (line 807)
  • Order defaults to "desc" if not "asc" or "desc" (lines 820-822)
  • Sort defaults to "addedOn" if empty (line 816)
  • Filters parsed from JSON with error handling (lines 828-830)

408-624: LGTM! Thread-safe helpers with proper locking.

All getter and utility methods follow best practices:

  • Use appropriate locks (RLock for reads, Lock for writes)
  • Take snapshots/copies to release locks before slow operations
  • Defensive nil checks (e.g., lines 618-621 in cloneMeta)
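
The two habits praised here, defensive nil checks in clone helpers and snapshot-under-RLock before slow operations, can be illustrated in isolation. instanceMeta, cloneMeta, and registry below are simplified stand-ins, not the manager's actual definitions:

```go
package main

import "sync"

// instanceMeta mimics a small SSE health-metadata payload.
type instanceMeta struct {
	Connected bool
	Error     string
}

// cloneMeta returns an independent copy, with the defensive nil check the
// review calls out, so callers can never mutate shared state.
func cloneMeta(m *instanceMeta) *instanceMeta {
	if m == nil {
		return nil
	}
	c := *m
	return &c
}

// registry demonstrates the snapshot pattern: copy the keys under RLock,
// release the lock, then do slow work (e.g. publishing) on the copy.
type registry struct {
	mu   sync.RWMutex
	subs map[string]*instanceMeta
}

func (r *registry) snapshotIDs() []string {
	r.mu.RLock()
	ids := make([]string, 0, len(r.subs))
	for id := range r.subs {
		ids = append(ids, id)
	}
	r.mu.RUnlock()
	return ids // safe to iterate without holding the lock
}

func main() {}
```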

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (4)
web/src/hooks/useTorrentsList.ts (4)

115-115: Consider removing currentPage from dependencies.

The handleStreamPayload callback includes currentPage in its dependency array but only uses it once (line 111) to conditionally update hasLoadedAll. This causes the callback to be recreated on every page change, which may be unnecessary. If the hasLoadedAll logic can rely on currentPage at call time rather than closure time, you can remove it from the deps to avoid re-subscribing to the stream on every pagination.

-  [currentPage, queryClient, streamQueryKey]
+  [queryClient, streamQueryKey]

If hasLoadedAll updates are critical only when currentPage === 0, you can check currentPage at call time or accept that it might update slightly out of sync.


123-137: LGTM: Fallback and polling control logic.

The httpFallbackAllowed state correctly starts as true, enabling the initial page-0 fetch while the stream handshake is in flight. Once the stream connects, polling is disabled (shouldDisablePolling), and queryEnabled ensures page-1+ queries still run. The fallback re-enables on stream error. This addresses the past concern about suppressing the initial fetch.

The effect on lines 129–131 optimizes re-renders by only calling setState when the value changes. If you prefer brevity:

-setHttpFallbackAllowed(prev =>
-  prev === shouldAllowFallback ? prev : shouldAllowFallback
-)
+setHttpFallbackAllowed(shouldAllowFallback)

React already bails out if the new state equals the old state (using Object.is), so the optimization may be redundant unless you've measured a performance benefit.


172-173: Consider renaming the polling constant for clarity.

TORRENT_STREAM_POLL_INTERVAL_MS is used when streaming is inactive (line 173), which is counterintuitive. A name like TORRENT_POLL_INTERVAL_MS or TORRENT_FALLBACK_POLL_INTERVAL_MS would clarify that this is the HTTP polling interval when SSE is unavailable.

-export const TORRENT_STREAM_POLL_INTERVAL_MS = 3000
+export const TORRENT_POLL_INTERVAL_MS = 3000
 export const TORRENT_STREAM_POLL_INTERVAL_SECONDS = Math.max(
   1,
-  Math.round(TORRENT_STREAM_POLL_INTERVAL_MS / 1000)
+  Math.round(TORRENT_POLL_INTERVAL_MS / 1000)
 )

180-190: Add comments to clarify activeData selection.

The activeData logic prioritizes the stream snapshot when streaming, then conditionally falls back to data vs. lastStreamSnapshot depending on currentPage. The intent—using page-0 metadata even when paginated—is sound but not immediately obvious.

Consider adding inline comments:

 const activeData = useMemo(() => {
+  // When streaming is active, always use the latest stream snapshot (page 0)
   if (shouldDisablePolling && lastStreamSnapshot) {
     return lastStreamSnapshot
   }

+  // On page 0 without streaming, prefer current query data
   if (currentPage === 0) {
     return data ?? lastStreamSnapshot ?? null
   }

+  // On page 1+ without streaming, prefer last stream snapshot (page 0 metadata)
+  // because stats/counts/categories are only accurate from the first page
   return lastStreamSnapshot ?? data ?? null
 }, [currentPage, data, lastStreamSnapshot, shouldDisablePolling])
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 25bff12 and 9efaa56.

📒 Files selected for processing (2)
  • internal/api/sse/manager.go (1 hunks)
  • web/src/hooks/useTorrentsList.ts (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
web/src/hooks/useTorrentsList.ts (2)
web/src/types/index.ts (2)
  • TorrentResponse (239-251)
  • TorrentStreamPayload (261-266)
web/src/contexts/SyncStreamContext.tsx (1)
  • useSyncStream (401-455)
internal/api/sse/manager.go (4)
internal/qbittorrent/filter_types.go (1)
  • FilterOptions (7-18)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (102-106)
internal/models/instance.go (1)
  • InstanceStore (115-118)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Build Docker images (linux/arm64)
  • GitHub Check: Build Docker images (linux/arm/v6)
  • GitHub Check: Build Docker images (linux/amd64)
  • GitHub Check: Build Docker images (linux/arm/v7)
  • GitHub Check: Build Docker images (linux/amd64/v2)
  • GitHub Check: Build Docker images (linux/amd64/v3)
  • GitHub Check: Build distribution binaries
🔇 Additional comments (4)
web/src/hooks/useTorrentsList.ts (4)

8-17: LGTM: Imports and polling constants.

The streaming imports and polling interval constants are well-defined. The safeguard Math.max(1, ...) prevents zero-second intervals.


42-65: LGTM: Stream initialization.

The streamQueryKey and streamParams correctly target page 0 for streaming, and memoization dependencies are accurate.


144-151: LGTM: State reset includes stream snapshot.

The reset effect on filter/search/sort/instance changes now correctly clears lastStreamSnapshot, ensuring stale stream data doesn't linger after a query context change.


288-364: LGTM: Metadata and streaming state integrated correctly.

The return object correctly uses activeData for metadata (stats, counts, categories, tags, serverState) and exposes the new streaming state fields (isStreaming, streamConnected, streamError, etc.) without breaking the existing API surface. Backward compatibility is preserved.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
internal/api/sse/manager.go (1)

308-311: Simplify the fallback retry calculation.

Line 308 correctly uses backoff.Seconds(), but the fallback at line 310 still uses the awkward pattern flagged in past reviews:

retrySeconds = int(defaultSyncInterval.Round(time.Second) / time.Second)

Apply this diff to simplify:

 retrySeconds := int(backoff.Seconds())
 if retrySeconds <= 0 {
-    retrySeconds = int(defaultSyncInterval.Round(time.Second) / time.Second)
+    retrySeconds = int(defaultSyncInterval.Seconds())
 }

Note: Both approaches are functionally equivalent since defaultSyncInterval is always an exact number of seconds.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9efaa56 and 130400e.

📒 Files selected for processing (3)
  • internal/api/sse/manager.go (1 hunks)
  • web/src/hooks/useTorrentsList.ts (6 hunks)
  • web/src/types/index.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
web/src/types/index.ts (1)
internal/qbittorrent/sync_manager.go (1)
  • TorrentResponse (62-75)
web/src/hooks/useTorrentsList.ts (2)
web/src/types/index.ts (2)
  • TorrentResponse (239-251)
  • TorrentStreamPayload (262-267)
web/src/contexts/SyncStreamContext.tsx (1)
  • useSyncStream (401-455)
internal/api/sse/manager.go (6)
internal/qbittorrent/filter_types.go (1)
  • FilterOptions (7-18)
internal/api/server.go (1)
  • Server (39-56)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (102-106)
internal/models/instance.go (1)
  • InstanceStore (115-118)
web/src/types/index.ts (2)
  • TorrentResponse (239-251)
  • MainData (270-274)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Build Docker images (linux/arm/v7)
  • GitHub Check: Build Docker images (linux/arm/v6)
  • GitHub Check: Build Docker images (linux/arm64)
  • GitHub Check: Build Docker images (linux/amd64)
  • GitHub Check: Build Docker images (linux/amd64/v2)
  • GitHub Check: Build Docker images (linux/amd64/v3)
  • GitHub Check: Build distribution binaries
🔇 Additional comments (24)
internal/api/sse/manager.go (13)

51-68: LGTM: Composite key generation is sound.

The streamOptionsKey function correctly creates a unique identifier by combining all relevant filtering/pagination parameters. The JSON marshaling fallback for filters is appropriate.
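
The composite-key idea can be sketched as follows; streamKey and its field order are hypothetical. The point is only that identical options deterministically map to the same group key (Go's json.Marshal sorts map keys, so the filter encoding is stable):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// streamKey concatenates the pagination/sort fields with the JSON-encoded
// filters, so subscriptions with identical options share one group.
func streamKey(instanceID, limit, page int, sort, order string, filters map[string]any) string {
	fb, err := json.Marshal(filters)
	if err != nil {
		fb = []byte("{}") // fallback when filters cannot be marshaled
	}
	return fmt.Sprintf("%d|%d|%d|%s|%s|%s", instanceID, limit, page, sort, order, fb)
}

func main() {}
```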


141-170: LGTM: Constructor properly initializes all state.

The constructor correctly:

  • Handles replayer creation errors with fallback
  • Initializes all maps to prevent nil panics
  • Sets up context for lifecycle management
  • Wires OnSession callback

229-273: LGTM: Proper cleanup and resource management.

The unregister logic correctly:

  • Removes subscriptions from all indexes
  • Cleans up empty groups and maps
  • Cancels sync loops when the last subscriber disconnects
  • Holds the mutex throughout to ensure consistency

275-295: LGTM: Proper event handling with async publishing.

The MainData handler correctly:

  • Validates input and checks shutdown state
  • Resets backoff on successful sync
  • Constructs metadata from the event
  • Uses a goroutine to avoid blocking the caller

334-370: LGTM: Well-structured HTTP handler with proper lifecycle management.

The handler correctly:

  • Validates shutdown state and instance existence
  • Parses and validates request parameters
  • Registers the subscription with deferred cleanup
  • Blocks until the client disconnects

372-404: LGTM: Proper session initialization with initial snapshot.

The session callback correctly:

  • Validates all preconditions
  • Sends an initial "init" event with full data
  • Returns the subscription ID as the SSE topic for message routing

450-507: LGTM: Proper coalescing pattern for event batching.

The enqueue/process pattern correctly:

  • Coalesces rapid updates by marking pending without spawning multiple goroutines
  • Processes all pending work in a single goroutine
  • Uses separate mutexes for pending state vs. subscriber list
  • Exits gracefully when work is complete

509-552: LGTM: Proper payload construction with timeout.

The payload builder correctly:

  • Uses a timeout context to prevent hanging
  • Constructs error payloads for failures
  • Clones metadata to prevent sharing

This reads group.options without a lock (line 523), which is safe given that options are immutable after group creation.


576-608: LGTM: Proper message publishing with error handling.

The publish methods correctly:

  • Hold appropriate locks when accessing shared state
  • Handle marshaling errors gracefully
  • Ignore expected errors during shutdown

624-662: LGTM: Graceful shutdown with proper cleanup.

The shutdown logic correctly:

  • Prevents concurrent shutdown with atomic CAS
  • Cancels all background work via context
  • Cleans up sync loops under lock
  • Filters expected shutdown errors

664-719: LGTM: Proper exponential backoff with reasonable bounds.

The backoff logic correctly:

  • Implements capped exponential backoff (2, 4, 8, 16, 32 seconds)
  • Clamps to [2s, 30s] range
  • Resets on success
  • Restarts sync loops at new intervals

721-784: LGTM: Proper ticker-based sync loop with graceful cancellation.

The sync loop implementation correctly:

  • Only restarts when interval changes
  • Uses ticker for periodic sync
  • Respects context cancellation
  • Logs errors at appropriate levels

786-848: LGTM: Well-implemented utility functions with proper validation.

The helper functions correctly:

  • Validate and clamp pagination parameters
  • Provide sensible defaults
  • Handle JSON parsing errors
  • Use appropriate locking in accessors
web/src/types/index.ts (1)

253-267: LGTM: Clean type definitions matching backend contract.

The new streaming interfaces correctly:

  • Define metadata fields that align with the Go backend
  • Use discriminated unions for type safety
  • Mark fields optional where appropriate
  • Follow TypeScript naming conventions
web/src/hooks/useTorrentsList.ts (10)

13-17: LGTM: Reasonable polling interval with safeguards.

The 3-second interval is appropriate for torrent status updates. The Math.max(1, ...) ensures a minimum 1-second interval even if the constant is changed.


46-65: LGTM: Stream parameters correctly target page 0.

The stream setup correctly:

  • Always subscribes to page 0 regardless of current pagination state
  • Memoizes with appropriate dependencies
  • Disables when hook is disabled

67-124: LGTM: Page-based merge correctly preserves sort order.

The payload merge logic correctly addresses the sort order issue flagged in past reviews by:

  • Using page-based slicing instead of pure hash-based deduplication
  • Replacing the target page while preserving items before and after
  • Deduplicating against the new page to remove items that moved between pages
  • Truncating to the backend-provided total

This ensures the global sort order is maintained even when multiple pages are loaded.


126-129: LGTM: Proper stream subscription.

The hook correctly subscribes to the stream with appropriate enabled flag and message handler.


131-145: LGTM: HTTP fallback correctly allows initial fetch.

The fallback logic addresses the past issue where initial fetch was suppressed. Now:

  • httpFallbackAllowed starts true
  • Initial query runs immediately (page 0, not connected yet, fallback allowed)
  • Once stream connects, polling is disabled
  • If stream errors, HTTP fallback re-enables

This prevents the blank table during connection establishment.


180-183: LGTM: Proper polling control that defers to streaming.

The query correctly:

  • Disables polling for page 0 when streaming is active
  • Falls back to 3s polling when streaming is unavailable
  • Never polls pagination pages (only page 0 needs polling)

188-198: LGTM: Smart data source selection.

The activeData logic correctly:

  • Prioritizes stream data when streaming is active
  • Falls back to HTTP data when streaming is unavailable
  • Provides consistent data across pages

296-355: LGTM: Consistent use of activeData for derived state.

All computed values correctly prioritize streaming data when available, ensuring the UI reflects real-time updates across stats, categories, tags, and server state.


366-372: LGTM: Clean API extension with streaming metadata.

The return interface correctly exposes streaming state, allowing consumers to:

  • Detect active streaming (isStreaming)
  • Show connection status (streamConnected, streamError)
  • Display retry information (streamRetrying, streamNextRetryAt, streamRetryAttempt)
  • Access metadata from stream events (streamMeta)

152-159: LGTM: Proper state reset including stream snapshot.

The effect correctly clears all state, including the stream snapshot, when query parameters change. This prevents stale stream data from being mixed with new filter criteria.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
internal/api/sse/manager.go (1)

298-332: Simplify fallback retry seconds calculation.

Line 310 uses an unnecessarily complex pattern to convert duration to seconds:

retrySeconds = int(defaultSyncInterval.Round(time.Second) / time.Second)

Since defaultSyncInterval is defined as 2 * time.Second (a whole number of seconds), this can be simplified.

Apply this diff:

-		retrySeconds = int(defaultSyncInterval.Round(time.Second) / time.Second)
+		retrySeconds = int(defaultSyncInterval.Seconds())

Note: The main calculation at line 308 already uses the simpler backoff.Seconds() approach. This change just makes the fallback consistent.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 130400e and 1ab5f33.

📒 Files selected for processing (1)
  • internal/api/sse/manager.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/api/sse/manager.go (6)
internal/qbittorrent/filter_types.go (1)
  • FilterOptions (7-18)
internal/api/server.go (1)
  • Server (39-56)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (102-106)
internal/models/instance.go (1)
  • InstanceStore (115-118)
web/src/types/index.ts (2)
  • TorrentResponse (239-251)
  • MainData (270-274)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Build Docker images (linux/arm/v6)
  • GitHub Check: Build Docker images (linux/amd64/v3)
  • GitHub Check: Build Docker images (linux/amd64/v2)
  • GitHub Check: Build Docker images (linux/arm64)
  • GitHub Check: Build Docker images (linux/arm/v7)
  • GitHub Check: Build Docker images (linux/amd64)
  • GitHub Check: Build distribution binaries
🔇 Additional comments (14)
internal/api/sse/manager.go (14)

1-34: LGTM! Package structure and constants are well-defined.

The package setup, imports, and constant definitions are appropriate for an SSE streaming manager. The sync interval bounds (2s-30s) and pagination limits (300 default, 2000 max) are reasonable choices.


36-138: LGTM! Type definitions are well-structured.

The type hierarchy properly separates concerns:

  • StreamOptions encodes the subscription key
  • subscriptionGroup batches subscribers with identical view parameters
  • Separate mutexes (mu, subsMu) allow fine-grained locking for pending updates vs. subscriber management

140-170: LGTM! Constructor initialization is complete and defensive.

The constructor properly initializes all state and gracefully degrades if the replayer can't be created. The replay buffer of 4 messages provides a reasonable catch-up window for reconnecting clients.


334-370: LGTM! HTTP handler follows SSE best practices.

The handler properly:

  • Validates input and returns appropriate HTTP status codes
  • Ensures subscription cleanup via defer m.Unregister(subscriptionID) (line 364)
  • Delegates to the SSE server's blocking ServeHTTP call

372-404: LGTM! Session callback properly initializes SSE connections.

The callback correctly validates the subscription context and sends an initial snapshot to new subscribers, ensuring they start with a consistent view.


450-469: LGTM! Coalescing pattern correctly prevents concurrent sends.

The enqueueGroup implementation properly uses the sending flag to ensure only one goroutine processes updates for a group at a time, while coalescing multiple pending updates into a single send.


471-508: LGTM! Group processing loop handles updates efficiently.

The loop correctly:

  • Checks for pending updates under lock
  • Captures state and releases lock before expensive operations
  • Continues looping if more updates arrived during processing
  • Uses snapshotSubscribers() to safely iterate subscribers

510-552: LGTM! Payload building has appropriate timeout and error handling.

The 10-second timeout (line 521) provides reasonable bounds for torrent data fetching, and errors are gracefully converted to error payloads rather than dropped.


554-622: LGTM! Helper methods use proper synchronization.

All helper functions correctly use read locks when accessing shared state, and error handling in publish (line 605) appropriately filters expected shutdown errors.


229-273: LGTM! Unregister properly cleans up all references.

The cleanup logic correctly:

  • Removes the subscription from all indexes under m.mu
  • Cleans up empty groups and instance mappings
  • Cancels the sync loop when the last subscriber for an instance disconnects

624-662: LGTM! Shutdown is idempotent and comprehensive.

The shutdown sequence properly:

  • Uses CompareAndSwap (line 629) to ensure single execution
  • Cancels all sync loops before shutting down the SSE server
  • Filters expected context and provider-closed errors (lines 654-658)

664-704: LGTM! Backoff implementation follows best practices.

The exponential backoff is correctly capped (2^4 = 16x max, 30s ceiling) and resets on success, providing good resilience to transient qBittorrent failures.


739-784: LGTM! Sync loop properly manages resources.

The ticker-based loop correctly:

  • Defers ticker.Stop() to prevent leaks (line 752)
  • Respects the cancellation context (line 758)
  • Uses appropriate timeouts for sync operations (line 772)

786-848: LGTM! Utility functions provide proper validation.

The query parameter parsing correctly:

  • Validates pagination bounds (limit ≤ 2000, page ≥ 0)
  • Provides sensible defaults (sort="addedOn", order="desc")
  • Returns descriptive errors for malformed input

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1ab5f33 and 9401f9e.

📒 Files selected for processing (1)
  • internal/api/sse/manager.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/api/sse/manager.go (5)
internal/qbittorrent/filter_types.go (1)
  • FilterOptions (7-18)
internal/api/server.go (1)
  • Server (39-56)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (102-106)
internal/models/instance.go (1)
  • InstanceStore (115-118)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Build Docker images (linux/arm/v7)
  • GitHub Check: Build Docker images (linux/arm64)
  • GitHub Check: Build Docker images (linux/arm/v6)
  • GitHub Check: Build Docker images (linux/amd64)
  • GitHub Check: Build Docker images (linux/amd64/v3)
  • GitHub Check: Build Docker images (linux/amd64/v2)
  • GitHub Check: Build distribution binaries

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9401f9e and bc819df.

📒 Files selected for processing (1)
  • internal/api/sse/manager.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/api/sse/manager.go (6)
internal/qbittorrent/filter_types.go (1)
  • FilterOptions (7-18)
internal/api/server.go (1)
  • Server (39-56)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (102-106)
internal/models/instance.go (1)
  • InstanceStore (115-118)
web/src/types/index.ts (2)
  • TorrentResponse (239-251)
  • MainData (270-274)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Build Docker images (linux/amd64/v2)
  • GitHub Check: Build Docker images (linux/amd64)
  • GitHub Check: Build Docker images (linux/arm/v7)
  • GitHub Check: Build Docker images (linux/amd64/v3)
  • GitHub Check: Build Docker images (linux/arm/v6)
  • GitHub Check: Build Docker images (linux/arm64)
  • GitHub Check: Build distribution binaries

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bc819df and 662d991.

📒 Files selected for processing (3)
  • internal/api/sse/manager.go (1 hunks)
  • web/src/contexts/SyncStreamContext.tsx (1 hunks)
  • web/src/types/index.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
web/src/contexts/SyncStreamContext.tsx (2)
web/src/types/index.ts (3)
  • TorrentFilters (225-237)
  • TorrentStreamPayload (262-267)
  • TorrentStreamMeta (253-260)
web/src/lib/api.ts (1)
  • api (831-831)
web/src/types/index.ts (1)
internal/qbittorrent/sync_manager.go (1)
  • TorrentResponse (62-75)
internal/api/sse/manager.go (5)
internal/qbittorrent/filter_types.go (1)
  • FilterOptions (7-18)
internal/api/server.go (1)
  • Server (39-56)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (1)
  • SyncManager (102-106)
internal/models/instance.go (1)
  • InstanceStore (115-118)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Build Docker images (linux/arm64)
  • GitHub Check: Build Docker images (linux/amd64/v2)
  • GitHub Check: Build Docker images (linux/amd64)
  • GitHub Check: Build Docker images (linux/arm/v6)
  • GitHub Check: Build Docker images (linux/amd64/v3)
  • GitHub Check: Build Docker images (linux/arm/v7)
  • GitHub Check: Build distribution binaries
🔇 Additional comments (15)
internal/api/sse/manager.go (8)

51-67: LGTM! Key collision issue resolved.

The past review correctly identified ambiguous concatenation with plain | delimiters. Using strconv.Quote on all string fields (Sort, Order, Search, and filtersKey) ensures the key is injective and prevents collisions when these fields contain | characters.


194-207: Verify: Options synchronization appears correct.

The past review flagged a data race on group.options. In the current code:

  • Line 199 writes group.options under m.mu only when creating a new group (if !ok).
  • Line 489 reads opts := group.options under group.mu in processGroup.

While these use different mutexes, the group is created and fully initialized (including options) before being published to m.groups. After publication, group.options is never modified (immutable), so all subsequent reads access a stable value. The happens-before relationship is established by the map publication under m.mu.


296-331: LGTM! Retry calculation simplified as suggested.

The past review recommended simplifying the retry seconds calculation. The current implementation at line 307 uses int(backoff.Seconds()) directly, which is cleaner than the previous round-and-divide pattern.


449-468: LGTM! Group enqueueing logic is sound.

The lock-based state machine (lines 456-465) correctly prevents multiple concurrent processGroup goroutines and ensures pendingMeta/pendingType/hasPending are updated atomically. The sending flag is properly managed to avoid launching duplicate processors.


470-507: LGTM! Processing loop avoids lock contention.

The loop correctly snapshots state (lines 481-491) and subscribers (line 493) under their respective locks, then releases them before performing potentially blocking operations (buildGroupPayload, publish). This prevents lock contention during I/O.


575-596: LGTM! Lock released before publishing as recommended.

The past review recommended releasing the manager lock before calling m.publish to prevent blocking. The current implementation (lines 587-591) correctly copies subscription IDs under the read lock, releases it, then publishes to each ID without holding any lock.


634-672: LGTM! Shutdown is properly orchestrated.

The shutdown sequence is well-structured:

  1. Atomic flag (line 639) prevents double-shutdown.
  2. Context cancellation (line 643) stops new work.
  3. Sync loops are canceled (lines 654-658) before server shutdown.
  4. Graceful SSE server shutdown (lines 664-669) with error filtering.

749-801: LGTM! Sync loop is correctly implemented.

The timer-based loop (lines 760-779) correctly spaces out syncs and checks for cancellation both in the select (line 766) and after each sync (line 772). Using timer.Reset after each iteration ensures consistent spacing regardless of sync duration.

web/src/types/index.ts (1)

253-267: LGTM! Type definitions align with backend.

The new TorrentStreamMeta and TorrentStreamPayload interfaces correctly match the Go structs (StreamMeta and StreamPayload) from internal/api/sse/manager.go. Field names and types are consistent, enabling type-safe streaming envelope handling.

web/src/contexts/SyncStreamContext.tsx (6)

68-131: LGTM! Subscription model prevents whole-app re-renders.

The past review flagged forceUpdate() causing entire app re-renders. The current implementation correctly uses a subscription model:

  • Stream state stored in refs (line 69-70), not provider state.
  • Per-key subscribers notified individually (lines 91-109).
  • Only components subscribed to changed keys re-render.

This architecture avoids performance bottlenecks from frequent stream updates.


147-166: LGTM! EventSource cleanup is thorough.

The closeStream function properly cleans up resources:

  • Removes event listeners (lines 154-156).
  • Clears callbacks to prevent leaks (lines 159-160).
  • Closes the connection (line 161).
  • Clears entry references (lines 163-165).

199-235: Verify: Retry continues indefinitely after max attempts.

The exponential backoff correctly calculates delays (4s → 8s → 16s → 30s max), but MAX_RETRY_ATTEMPTS (line 213) only caps the exponent growth, not the total retries. The stream will retry indefinitely at 30s intervals as long as listeners exist. If this is intentional for long-lived connections, consider documenting it; otherwise, you may want to stop after a maximum number of attempts.


168-321: LGTM! Stale-closure checks prevent cleanup races.

The defensive checks (lines 203-205, 227-229, 309-312) correctly validate that the entry hasn't been replaced or removed before retrying. This prevents stale timeout callbacks from affecting the wrong stream. The pattern is appropriate for JavaScript's single-threaded event loop.


350-367: LGTM! Proper cleanup prevents memory leaks.

The connect function correctly returns a cleanup function (lines 356-364) that:

  • Removes the listener.
  • Closes the stream if no listeners remain.
  • Deletes the entry from the ref.

This integrates properly with React's useEffect cleanup, preventing listener accumulation and EventSource leaks.


471-486: LGTM! Parameter comparison is logically sound.

The isSameParams function correctly compares all fields (lines 472-478) and uses JSON.stringify for filters (lines 483-485). This aligns with the intent of createStreamKey to identify equivalent streams. After fixing createStreamKey to use proper escaping, this function will remain consistent.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
web/src/contexts/SyncStreamContext.tsx (2)

133-145: Simplify clearTimeout call.

The typeof window !== "undefined" check is unnecessary in a React hooks context. React hooks only execute on the client (after hydration), so window is always defined. Additionally, clearTimeout is globally available in browsers and is equivalent to window.clearTimeout.

Apply this diff to simplify:

 const clearRetryState = useCallback((entry: StreamEntry) => {
   if (entry.retryTimer !== undefined) {
-    if (typeof window !== "undefined") {
-      window.clearTimeout(entry.retryTimer)
-    } else {
-      clearTimeout(entry.retryTimer)
-    }
+    clearTimeout(entry.retryTimer)
     entry.retryTimer = undefined
   }

223-223: Consider using setTimeout without the window prefix.

For consistency with the simplified clearTimeout approach (see earlier comment), consider using setTimeout directly instead of window.setTimeout. Both are equivalent in browser environments.

For example:

-entry.retryTimer = window.setTimeout(() => {
+entry.retryTimer = setTimeout(() => {

Also applies to: 278-278, 305-305

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 662d991 and ddd36ec.

📒 Files selected for processing (1)
  • web/src/contexts/SyncStreamContext.tsx (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
web/src/contexts/SyncStreamContext.tsx (1)
web/src/types/index.ts (3)
  • TorrentFilters (225-237)
  • TorrentStreamPayload (262-267)
  • TorrentStreamMeta (253-260)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Build Docker images (linux/arm/v6)
  • GitHub Check: Build Docker images (linux/arm64)
  • GitHub Check: Build Docker images (linux/amd64/v3)
  • GitHub Check: Build Docker images (linux/arm/v7)
  • GitHub Check: Build Docker images (linux/amd64/v2)
  • GitHub Check: Build Docker images (linux/amd64)
  • GitHub Check: Build distribution binaries
🔇 Additional comments (6)
web/src/contexts/SyncStreamContext.tsx (6)

14-56: LGTM! Well-structured type definitions.

The interface definitions clearly capture the streaming subsystem's API surface and internal state. The separation between public interfaces (StreamParams, StreamState, SyncStreamContextValue) and internal structures (StreamEntry) is clean.


199-235: Retry logic correctly implements exponential backoff with indefinite reconnection.

The exponential backoff (4s → 8s → 16s → 30s cap) is correctly implemented. Note that MAX_RETRY_ATTEMPTS caps the delay tier but doesn't stop reconnection—streams will retry indefinitely at 30-second intervals once they reach attempt 6. This persistent reconnection behavior is appropriate for SSE streams but may benefit from inline documentation.


323-367: LGTM! Stream lifecycle management is correct.

The ensureStream function properly reuses existing streams and reopens when parameters change. The connect cleanup is thorough: it removes the listener, closes the stream when no subscribers remain, clears retry state, and removes the entry from the registry.


401-455: LGTM! Hook implementation follows React best practices.

The ref pattern for onMessage and params correctly avoids recreating the listener on every callback change while maintaining fresh references. The state subscription pattern ensures components re-render only when their specific stream's state changes, not on every SSE event.


465-475: LGTM! Key collision issue from previous review is resolved.

Using JSON.stringify on a structured object ensures deterministic, collision-free keys. This properly addresses the earlier concern about delimiter-based collisions. While slightly less performant than string concatenation, the tradeoff for correctness and simplicity is appropriate here.


477-492: LGTM! Parameter comparison logic is correct.

The function properly compares all StreamParams fields, including deep comparison of the filters object via JSON.stringify. Search field normalization correctly handles null/undefined/empty string cases.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ddd36ec and d71aaff.

📒 Files selected for processing (3)
  • internal/api/sse/manager.go (1 hunks)
  • internal/qbittorrent/context.go (2 hunks)
  • internal/qbittorrent/sync_manager.go (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
internal/api/sse/manager.go (6)
internal/qbittorrent/filter_types.go (1)
  • FilterOptions (7-18)
internal/api/server.go (1)
  • Server (39-56)
internal/qbittorrent/pool.go (1)
  • ClientPool (52-66)
internal/qbittorrent/sync_manager.go (2)
  • SyncManager (102-106)
  • TorrentResponse (62-75)
internal/models/instance.go (1)
  • InstanceStore (115-118)
internal/qbittorrent/context.go (1)
  • WithSkipFreshData (30-35)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Run tests

@s0up4200 s0up4200 changed the base branch from main to develop February 21, 2026 14:17
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 56acd1e8c5

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +203 to +206
const dedupedDisplaced = displacedSlice.filter(torrent => !seen.has(torrent.hash))
const dedupedTrailing = trailing.filter(torrent => !seen.has(torrent.hash))

const merged = [...dedupedLeading, ...nextTorrents, ...dedupedDisplaced, ...dedupedTrailing]


P1 Badge Remove stale displaced rows when first page shrinks

In handleStreamPayload, merging nextTorrents with dedupedDisplaced keeps rows that just fell out of page 0 (for example after a delete/completion transition) when users have loaded later pages. The subsequent total truncation then removes rows from the end instead, so allTorrents can contain stale entries and miss valid ones until a full refetch. This causes incorrect torrent data/actions in the paginated view while streaming is active.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 783cc871f3


Comment on lines +49 to +50
queryFn: async () => getSnapshot() ?? emptyMetadataRef.current,
initialData: () => getSnapshot() ?? emptyMetadataRef.current,


P1 Badge Restore metadata fetch when cache is empty

useInstanceMetadata no longer fetches categories/tags from the API and now returns only getSnapshot() data for instance-metadata; the only network fallback left is preferences. This means views that rely on this hook without first opening the torrents list (for example RSSPage and DirScanTab) will keep receiving empty category/tag sets indefinitely, so their selectors lose real options. Please keep a fetch path for categories/tags when the cache is missing.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 86c5867d40


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9a2ebd9dd7


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f995f5051b


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c587c5ea9b

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b8d7c680a7

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7e9a64608b

@s0up4200 s0up4200 changed the title feat(sse): torrent streaming feat: sse Feb 21, 2026
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 457b7d0ed2

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fd1a60631b

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d1c1c11a99

Labels

enhancement New feature or request sse
