Separation of backend services#933
Conversation
- Updated SYNC_MICROSERVICE_URL in settings.py files to remove redundant API versioning.
- Removed microservice utility functions from main.py and adjusted router configurations.
- Enhanced setup scripts for sync-microservice with Python 3.12 compatibility.
- Improved documentation for sync-microservice setup and troubleshooting.
- Cleaned up server utility functions to streamline service start processes.
- Adjusted API routes in sync-microservice for better organization and clarity.
📝 Walkthrough

Refactors microservice integration: removes the backend microservice launcher, updates sync-microservice routing and retry logic, centralizes service startup in the frontend (Tauri) to orchestrate the backend and sync services, and updates SYNC_MICROSERVICE_URL and multi-platform setup docs/scripts.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant Frontend as Frontend (Tauri)
    participant Backend as Backend Service
    participant Sync as Sync Service
    participant DB as Database
    User->>Frontend: Launch app
    Frontend->>Frontend: startServer()
    note right of Frontend: resolve resources folder via tauri command
    rect rgb(230, 245, 255)
        Frontend->>Backend: StartBackendUnix/Windows (if not running)
        Backend-->>Frontend: Started
        Frontend->>Backend: Health check (BACKEND_URL)
        Backend-->>Frontend: Healthy
    end
    rect rgb(240, 255, 230)
        Frontend->>Sync: StartSyncServiceUnix/Windows (if not running)
        Sync-->>Frontend: Started
        Frontend->>Sync: Health check (SYNC_MICROSERVICE_URL)
        Sync->>DB: Connect (retry loop: 5s interval, 60s timeout)
        DB-->>Sync: Connected
        Sync-->>Frontend: Healthy
    end
    Frontend->>User: App ready
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
frontend/src/pages/SettingsPage/components/ApplicationControlsCard.tsx (1)
123-123: Update description to match current functionality.

The description mentions "server operations" but the Restart Server button has been removed, leaving no server-related controls in this component.
🔎 Proposed fix
```diff
- description="Manage updates, server operations, and face clustering"
+ description="Manage updates and face clustering"
```

docs/backend/backend_rust/api.md (1)
13-20: Fix variable name inconsistency in example code.

Line 18 declares `resourcesFolderPath` but line 19 logs `serverPath`, which is undefined. This will cause a runtime error if copied.

🔎 Proposed fix

```diff
 // Example: Get server path
 const resourcesFolderPath = await invoke("get_resources_folder_path");
-console.log("Resources folder path:", serverPath);
+console.log("Resources folder path:", resourcesFolderPath);
```
🧹 Nitpick comments (12)
docs/Manual_Setup_Guide.md (2)
108-167: Add language identifiers to code blocks and correct macOS terminology.

The new Sync-Microservice section is comprehensive and well-structured. However, it has formatting and terminology issues:
- Code block language identifiers: Lines 116, 124, 130, 138, 144, 154, and 162 lack language identifiers (e.g., `bash`, `powershell`).
- macOS terminology: Lines 122 and 136 use "MacOS" instead of the correct "macOS" branding.
🔎 Proposed fixes for code block language specs and macOS terminology
Tag the Linux/macOS fenced blocks with `bash` (the `cd sync-microservice`, `python3 -m venv .sync-env`, `source .sync-env/bin/activate`, `pip install -r requirements.txt`, and `uvicorn main:app --port 8001 --reload` blocks) and the Windows fenced blocks with `powershell` (the `python -m venv .sync-env` and `.sync-env\Scripts\Activate.ps1` blocks).

Additionally, replace all instances of "MacOS" with "macOS" in lines 122 and 136.
168-191: Add language identifiers to troubleshooting code blocks.

The new troubleshooting section is helpful and addresses common setup issues. However, the code blocks at lines 177–180 and 188–191 lack language identifiers.
🔎 Proposed fixes for code block language specs
**Debian/Ubuntu:** tag both Bash blocks with `bash`:

```bash
sudo apt update
sudo apt install -y libglib2.0-dev libgl1-mesa-glx
```

```bash
sudo apt install -y libglib2.0-dev pkg-config
```

frontend/src/pages/SettingsPage/components/ApplicationControlsCard.tsx (1)
125-148: Adjust grid columns to match button count.

The grid is configured for 3 columns on medium+ screens (`md:grid-cols-3`), but only 2 buttons remain after removing the Restart Server button. This creates an unbalanced layout with unused space.

🔎 Proposed fix

```diff
- <div className="grid grid-cols-1 gap-4 md:grid-cols-3">
+ <div className="grid grid-cols-1 gap-4 md:grid-cols-2">
   <Button onClick={onCheckUpdatesClick} variant="outline"
```

backend/main.py (3)
72-72: LGTM: Formatting consolidated to single line.

The servers configuration is now more concise without changing functionality.
93-93: LGTM: Path construction simplified.

The OpenAPI path concatenation is now more readable on a single line.
123-124: LGTM: Router inclusions consolidated.

The router inclusions are now more concise without changing functionality.
sync-microservice/app/routes/folders.py (1)
15-15: LGTM: Response model formatting simplified.

The Union type annotation is now more concise on a single line without changing functionality.
docs/backend/backend_rust/api.md (1)
7-7: Consider updating the description to reflect the broader scope.

The description mentions "server resources directory", which might be misleading since the command now returns the general resources folder path (not specifically "resources/backend"). Consider updating to: "Retrieves the path to the application's resources directory."
scripts/setup.sh (1)
102-110: Add error handling for directory navigation.

The `cd sync-microservice` command (line 104) lacks error handling. If the directory doesn't exist, the script will continue execution in the wrong working directory, potentially causing installation failures or installing packages in the wrong location.

🔎 Proposed fix to add error handling
```diff
 # ---- Set up the sync-microservice ----
 echo -e "${YELLOW}Setting up sync-microservice...${NC}"
-cd sync-microservice
+cd sync-microservice || { echo -e "${RED}sync-microservice directory not found${NC}"; exit 1; }
 python -m venv .sync-env
 source .sync-env/bin/activate
 pip install --upgrade pip
 pip install -r requirements.txt
 deactivate
 cd ..
```

This follows the same pattern used for the frontend setup at line 116.
frontend/src-tauri/capabilities/migrated.json (1)
8-96: Consider removing duplicate permission entries.

The permissions array contains several duplicates that could be cleaned up:
- `core:path:default` (lines 9, 17)
- `core:event:default` (lines 10, 18)
- `core:window:default` (lines 11, 19)
- `core:app:default` (lines 12, 21)
- `core:resources:default` (lines 13, 22)
- `core:menu:default` (lines 14, 23)
- `core:tray:default` (lines 15, 24)
- `dialog:default` (lines 95, 96)

While duplicates don't break functionality, they add noise and could cause confusion during maintenance.
frontend/src/utils/serverUtils.ts (2)
36-75: Consider adding startup verification with retry logic.

The function spawns both services but returns immediately without confirming they're actually ready to accept connections. This could cause race conditions where callers assume services are available when they're still initializing.
Consider polling the health endpoints with a timeout after spawning:
🔎 Proposed enhancement with startup verification
```diff
+const waitForService = async (
+  healthCheck: () => Promise<boolean>,
+  serviceName: string,
+  maxAttempts = 30,
+  delayMs = 500
+): Promise<boolean> => {
+  for (let i = 0; i < maxAttempts; i++) {
+    if (await healthCheck()) {
+      return true;
+    }
+    await new Promise((resolve) => setTimeout(resolve, delayMs));
+  }
+  console.error(`${serviceName} failed to start within timeout`);
+  return false;
+};

 export const startServer = async () => {
   try {
     console.log('Starting services!');
     const resourcesFolderPath: string = await invoke(
       'get_resources_folder_path',
     );

     // Start backend server
     if (!(await isServerRunning())) {
       const backendCommand = Command.create(
         isWindows() ? 'StartBackendWindows' : 'StartBackendUnix',
         '',
         { cwd: resourcesFolderPath + '/backend' },
       );
       const backendChild = await backendCommand.spawn();
       backendCommand.stderr.on('data', (line) =>
         console.error('Backend Error:', line),
       );
       console.log('Backend server started with PID:', backendChild.pid);
+      await waitForService(isServerRunning, 'Backend');
     }

     // Start sync service
     if (!(await isSyncServiceRunning())) {
       const syncCommand = Command.create(
         isWindows() ? 'StartSyncServiceWindows' : 'StartSyncServiceUnix',
         '',
         { cwd: resourcesFolderPath + '/sync-microservice' },
       );
       const syncChild = await syncCommand.spawn();
       syncCommand.stderr.on('data', (line) =>
         console.error('Sync Service Error:', line),
       );
       console.log('Sync service started with PID:', syncChild.pid);
+      await waitForService(isSyncServiceRunning, 'Sync Service');
     }
   } catch (error) {
     console.error('Error starting services:', error);
   }
 };
```
45-72: Consider independent error handling for each service.

If the backend spawn fails, the sync service startup is skipped entirely. If these services can operate independently or you want better diagnostics, consider wrapping each service startup in its own try-catch:
🔎 Proposed independent error handling
```diff
   // Start backend server
+  try {
     if (!(await isServerRunning())) {
       const backendCommand = Command.create(
         isWindows() ? 'StartBackendWindows' : 'StartBackendUnix',
         '',
         { cwd: resourcesFolderPath + '/backend' },
       );
       const backendChild = await backendCommand.spawn();
       backendCommand.stderr.on('data', (line) =>
         console.error('Backend Error:', line),
       );
       console.log('Backend server started with PID:', backendChild.pid);
     }
+  } catch (error) {
+    console.error('Error starting backend:', error);
+  }

   // Start sync service
+  try {
     if (!(await isSyncServiceRunning())) {
       const syncCommand = Command.create(
         isWindows() ? 'StartSyncServiceWindows' : 'StartSyncServiceUnix',
         '',
         { cwd: resourcesFolderPath + '/sync-microservice' },
       );
       const syncChild = await syncCommand.spawn();
       syncCommand.stderr.on('data', (line) =>
         console.error('Sync Service Error:', line),
       );
       console.log('Sync service started with PID:', syncChild.pid);
     }
+  } catch (error) {
+    console.error('Error starting sync service:', error);
+  }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- backend/app/config/settings.py
- backend/app/utils/microservice.py
- backend/main.py
- docs/Manual_Setup_Guide.md
- docs/Script_Setup_Guide.md
- docs/backend/backend_rust/api.md
- frontend/src-tauri/capabilities/migrated.json
- frontend/src-tauri/src/main.rs
- frontend/src-tauri/src/services/mod.rs
- frontend/src/config/Backend.ts
- frontend/src/main.tsx
- frontend/src/pages/SettingsPage/components/ApplicationControlsCard.tsx
- frontend/src/utils/serverUtils.ts
- scripts/setup.js
- scripts/setup.ps1
- scripts/setup.sh
- sync-microservice/app/config/settings.py
- sync-microservice/app/core/lifespan.py
- sync-microservice/app/routes/folders.py
- sync-microservice/app/routes/health.py
- sync-microservice/app/routes/watcher.py
- sync-microservice/main.py
💤 Files with no reviewable changes (1)
- backend/app/utils/microservice.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-31T17:00:50.132Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 570
File: backend/app/database/connection.py:16-24
Timestamp: 2025-10-31T17:00:50.132Z
Learning: In PictoPy backend, the user prefers not to use database connection retry logic or extended busy timeouts in the centralized get_db_connection() context manager, even though the app has concurrent access patterns via ProcessPoolExecutor and FastAPI.
Applied to files:
sync-microservice/app/core/lifespan.py
📚 Learning: 2025-10-15T07:13:34.502Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 548
File: backend/app/database/images.py:115-115
Timestamp: 2025-10-15T07:13:34.502Z
Learning: In the PictoPy backend and sync-microservice, exception logging using `logger.error(f"...")` without stack traces (i.e., without `exc_info=True` or `logger.exception()`) is acceptable for console logging, as this is a deliberate design decision made with the maintainer for console-only logs.
Applied to files:
sync-microservice/app/core/lifespan.py
🧬 Code graph analysis (7)
sync-microservice/app/config/settings.py (1)
frontend/src/config/Backend.ts (1)
`SYNC_MICROSERVICE_URL` (2-2)
sync-microservice/app/routes/folders.py (1)
sync-microservice/app/schemas/folders.py (2)
`FolderTaggingStatusSuccessResponse` (18-23), `FolderTaggingStatusErrorResponse` (26-36)
frontend/src/utils/serverUtils.ts (1)
frontend/src/config/Backend.ts (2)
`BACKEND_URL` (1-1), `SYNC_MICROSERVICE_URL` (2-2)
frontend/src-tauri/src/main.rs (1)
frontend/src-tauri/src/services/mod.rs (1)
`get_resources_folder_path` (5-11)
backend/app/config/settings.py (1)
frontend/src/config/Backend.ts (1)
`SYNC_MICROSERVICE_URL` (2-2)
sync-microservice/app/core/lifespan.py (1)
sync-microservice/app/database/folders.py (1)
`db_check_database_connection` (47-72)
sync-microservice/main.py (1)
sync-microservice/app/utils/logger_writer.py (1)
`redirect_stdout_stderr` (71-99)
🪛 LanguageTool
docs/Manual_Setup_Guide.md
[uncategorized] ~122-~122: The operating system from Apple is written “macOS”.
Context: ...cies. Create one using: Bash(Linux/MacOS) ``` python3 -m venv .sync-env...
(MAC_OS)
[uncategorized] ~136-~136: The operating system from Apple is written “macOS”.
Context: ... Virtual Environment:** Bash(Linux/MacOS) ``` source .sync-env/bin/acti...
(MAC_OS)
🪛 markdownlint-cli2 (0.18.1)
docs/Script_Setup_Guide.md
67-67: Multiple headings with the same content
(MD024, no-duplicate-heading)
75-75: Multiple headings with the same content
(MD024, no-duplicate-heading)
docs/Manual_Setup_Guide.md
116-116: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
124-124: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
130-130: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
138-138: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
144-144: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
154-154: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
162-162: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 Shellcheck (0.11.0)
scripts/setup.sh
[warning] 104-104: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.
(SC2164)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Tauri Build Check (ubuntu-22.04)
- GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
- GitHub Check: Backend Tests
🔇 Additional comments (18)
docs/Manual_Setup_Guide.md (1)
81-81: Correct PowerShell activation script casing.

The case correction from `.activate.ps1` to `.Activate.ps1` is accurate and necessary for Windows PowerShell execution.

frontend/src/pages/SettingsPage/components/ApplicationControlsCard.tsx (1)
2-2: LGTM! Clean import removal.

The Server icon has been correctly removed from imports, aligning with the removal of the Restart Server functionality.
sync-microservice/app/routes/folders.py (1)
10-10: Router is correctly mounted with prefix and tags in main.py.

The folders router is properly included at line 42 of sync-microservice/main.py with both `prefix="/folders"` and `tags=["Folders"]` specified, confirming the design pattern where prefix and tags are applied at the mounting point rather than at the router definition.
4-4: LGTM: URL change consistent across services.

The SYNC_MICROSERVICE_URL value of `http://localhost:8001` (without `/api/v1`) is consistent across all three locations: sync-microservice/app/config/settings.py, backend/app/config/settings.py, and frontend/src/config/Backend.ts. All usages in the codebase correctly append specific endpoints to this base URL.
5-5: The path handling in `SYNC_MICROSERVICE_URL` is correct.

The only usage in backend/app/utils/API.py correctly constructs the full endpoint path by appending `/watcher/restart` to the base URL, which matches the sync-microservice route definition. There is no evidence that `/api/v1` was ever part of the URL structure; the sync-microservice has no `/api` versioning or root prefix configuration.

Likely an incorrect or invalid review comment.
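To illustrate the composition described above, here is a minimal sketch of how the base URL and endpoint path combine. The helper name `watcher_restart_endpoint` is hypothetical and not part of the PictoPy codebase; the real logic lives in backend/app/utils/API.py.

```python
# Base URL as configured across all three settings files (no /api/v1 segment)
SYNC_MICROSERVICE_URL = "http://localhost:8001"

def watcher_restart_endpoint(base_url: str) -> str:
    """Build the full watcher-restart URL from the configured base URL.

    Strips a trailing slash so the result is stable regardless of how
    the base URL was written in settings.
    """
    return base_url.rstrip("/") + "/watcher/restart"
```

Usage: `watcher_restart_endpoint(SYNC_MICROSERVICE_URL)` yields `http://localhost:8001/watcher/restart`, matching the route mounted in sync-microservice/main.py.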
backend/main.py (1)
42-61: Verify sync microservice startup mechanism and add health checks before calling the sync service API.

The removal of `microservice_util_start_sync_service()` means the sync microservice must now be started manually in a separate terminal (`uvicorn main:app --port 8001 --reload`), and the backend does not wait for it to be ready. This creates potential race conditions:

- Manual startup required: The sync microservice is no longer automatically started; no Tauri commands were found for automated startup as mentioned in the PR objectives.
- Race condition exists: If a user processes images before the sync microservice is running, `API_util_restart_sync_microservice_watcher()` calls will fail. While error handling exists (logging and returning `False`), there is no retry logic or pre-flight health checks.
- Error handling is minimal: The API utility catches exceptions and logs errors but provides no mechanism to wait for service availability or retry failed requests.

Consider adding a health check call in the backend's lifespan function to verify the sync microservice is ready before accepting image processing requests, or implement retry logic with exponential backoff in `API_util_restart_sync_microservice_watcher()`.
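The exponential-backoff suggestion could look something like the generic helper below. This is a sketch only: `call_with_backoff` and its parameters are hypothetical names, not part of the PictoPy codebase, and the actual health-check callable would wrap the real HTTP request.

```python
import logging
import time

logger = logging.getLogger(__name__)

def call_with_backoff(func, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Call func() until it returns True, doubling the delay between attempts.

    Returns True on success, False once max_attempts is exhausted.
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if func():
            logger.info("Succeeded on attempt %d", attempt)
            return True
        if attempt < max_attempts:
            logger.warning("Attempt %d failed; retrying in %.1fs", attempt, delay)
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # cap the backoff
    return False
```

A watcher-restart call would then be wrapped as `call_with_backoff(lambda: API_util_restart_sync_microservice_watcher())`, giving the sync service time to finish starting.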
5-11: All callers properly updated; refactoring is complete and intentional.

The function rename from `get_server_path` to `get_resources_folder_path` has no remaining references in the codebase. Callers in frontend/src/utils/serverUtils.ts correctly append their subdirectories: `resourcesFolderPath + '/backend'` (line 49) and `resourcesFolderPath + '/sync-microservice'` (line 64). The broader path scope, returning "resources" instead of "resources/backend", is intentional and provides a cleaner abstraction where callers specify their required subdirectory.
19-44: Platform detection logic looks solid.

The Debian-based Linux detection using `/etc/debian_version` is a standard approach, and the error handling for unsupported platforms is clear and helpful. The early exit prevents attempting to spawn processes on unsupported systems.
6-6: Tag removal is correctly applied during route mounting.

Tags have been properly configured at the mount point in main.py (line 40: `app.include_router(health.router, tags=["Health"])`). Removing inline tags from the router definition while applying them centrally during mounting is a valid and clean refactoring pattern. The health check logic remains intact.
25-27: All frontend callers have been properly updated to use the new command name.

The migration from `get_server_path` to `get_resources_folder_path` is complete. Frontend code at frontend/src/utils/serverUtils.ts:41 correctly invokes the new command, and no references to the old command name remain in the frontend codebase.
18-24: Path difference between setup and command is intentional; no action required.

The setup block resolves "resources/backend" directly (line 21), while `get_resources_folder_path` returns "resources". This is intentional: the command provides the parent resources folder, and the frontend appends "/backend" when starting the backend service (serverUtils.ts, line 49). Both ultimately reference the same directory, with the setup validating backend path availability.
2-2: The concern about the `/api/v1` prefix removal is incorrect.

The sync-microservice never had an `/api/v1` prefix in its route structure. Routes are mounted directly (e.g., `/health`, `/watcher`, `/folders`) and the frontend is already correctly configured to access them at `http://localhost:8001` without any API version prefix. The current configuration is working as intended.

Likely an incorrect or invalid review comment.
sync-microservice/app/routes/watcher.py (1)
14-14: LGTM! Router configuration moved to the application level.

The router initialization has been updated to remove the prefix and tags, which are now applied at the mount point in sync-microservice/main.py (line 41). This centralizes route configuration and maintains the same effective API paths.
40-42: LGTM! Centralized route configuration.

The explicit router mounting with prefixes and tags aligns with the router definition changes across route modules. This centralizes route configuration at the application level and makes the API structure more explicit.
scripts/setup.ps1 (1)
153-173: LGTM! Sync-microservice setup follows established patterns.

The new setup block correctly mirrors the backend setup structure with proper virtual environment creation, dependency installation, and error handling.
docs/Script_Setup_Guide.md (2)
63-81: Documentation correctly covers sync-microservice startup.

The new step properly documents Windows and Linux commands with the correct port (8001) matching the `SYNC_MICROSERVICE_URL` configuration. The duplicate `#### Windows` and `#### Linux` headings flagged by markdownlint are acceptable here since they exist under different parent sections and provide clear platform-specific instructions.
78-80: The uvicorn command `main:app --port 8001 --reload` is correct. The sync-microservice has a `main.py` file at its root with a properly instantiated FastAPI `app` instance.
124-150: LGTM! Sync service spawn commands are well-structured.

The new `StartSyncServiceUnix` and `StartSyncServiceWindows` commands follow the established pattern from the backend commands with appropriate shell invocations and `sidecar: false` configuration.
```diff
 import BrowserWarning from './components/BrowserWarning';
 import { isProd } from './utils/isProd';
-import { stopServer, startServer } from './utils/serverUtils';
+import { startServer } from './utils/serverUtils';
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's look at the main.tsx file to understand the context
cat -n frontend/src/main.tsx
```

🏁 Script executed:

```shell
# Check the serverUtils file to understand service management
find . -name "serverUtils*" -type f
```

🏁 Script executed:

```shell
# Search for cleanup/shutdown logic related to services
rg -n "stop.*[Ss]erver|cleanup|terminate|shutdown" --type ts --type tsx -C 2
```

🏁 Script executed:

```shell
# Search for Tauri lifecycle hooks and window event handlers
rg -n "onWindowEvent|on_window_event|window\.onCloseRequested|beforeWindowClose|on_close" --type ts --type tsx --type rust -C 2
```

🏁 Script executed:

```shell
# Check for any service stop/shutdown logic in the Tauri backend
fd -e rs . src-tauri | head -20
```

🏁 Script executed:

```shell
cat -n frontend/src/utils/serverUtils.ts
```

🏁 Script executed:

```shell
# Search for stopServer references in all files
rg -n "stopServer" --type ts
```

🏁 Script executed:

```shell
# Look for service cleanup in other locations
rg -n "export.*function\|export.*const" frontend/src/utils/serverUtils.ts
```

🏁 Script executed:

```shell
# Check for Tauri app lifecycle setup (usually in src-tauri or main.tsx)
rg -n "appWindow\|invoke\|command" --type ts -C 2 frontend/src/
```

🏁 Script executed:

```shell
# Find the Tauri backend files
find . -name "main.rs" -o -name "lib.rs" | grep -E "src-tauri|tauri" | head -5
```

🏁 Script executed:

```shell
cat -n frontend/src-tauri/src/main.rs | head -80
```

🏁 Script executed:

```shell
cat -n frontend/src-tauri/src/lib.rs | head -100
```

🏁 Script executed:

```shell
# Search for any process cleanup or command setup in Tauri backend
rg -n "setup\|cleanup\|exit\|shutdown" frontend/src-tauri/src/ -C 2
```

🏁 Script executed:

```shell
# Look for the Tauri commands that start the services
rg -n "StartBackend\|StartSync" frontend/src-tauri/src/ -C 3
```

🏁 Script executed:

```shell
# Find all Rust files in the services module
find frontend/src-tauri/src/services -name "*.rs" -type f 2>/dev/null | head -20
```

🏁 Script executed:

```shell
# Alternative: list directory structure
find frontend/src-tauri/src -type f | head -30
```

🏁 Script executed:

```shell
# Search for command definitions containing StartBackend or StartSync
rg -n "StartBackend|StartSync|get_resources_folder_path" frontend/src-tauri/ -C 3
```

🏁 Script executed:

```shell
# Check if there's a git history showing what stopServer did before
git log -p --all -S "stopServer" -- frontend/src/ 2>/dev/null | head -100
```

🏁 Script executed:

```shell
# Verify the services module completely
cat -n frontend/src-tauri/src/services/mod.rs
```
Implement service cleanup on application exit.
The stopServer() call was removed from the window close handler without a replacement mechanism. Backend services are spawned with Command.spawn() in startServer(), but process references are not retained, making them impossible to terminate when the application closes.
Child processes will remain orphaned after exit unless explicitly managed. Implement cleanup by either:
- Storing process references (PIDs) globally or in state for later termination on window close
- Adding Tauri lifecycle hooks (e.g., `on_window_event` with `WindowEvent::CloseRequested`) to handle termination
- Defining kill/stop shell commands in Tauri capabilities alongside the existing spawn commands
The current empty handler at lines 20–21 of main.tsx must be populated with actual cleanup logic.
🤖 Prompt for AI Agents
In frontend/src/main.tsx around lines 6 and 20–21, the window close handler was
left empty so backend processes spawned in startServer() are never terminated;
retain or expose process references (PIDs or handles) when calling
startServer()—store them in a module-level variable, React state, or
Tauri-managed state—and then implement cleanup in the existing window close
handler to terminate those processes (send SIGTERM/SIGKILL or call stop/kill
shell commands exposed via Tauri). Alternatively, register a Tauri lifecycle
hook (e.g., on_window_event -> WindowEvent::CloseRequested) that calls the same
termination routine, or add explicit stop/kill commands to tauri.conf and invoke
them from the handler; ensure the handler awaits termination and logs errors if
cleanup fails.
```diff
 };

-export const stopServer = async () => {
+export const startServer = async () => {
```
🧩 Analysis chain
🏁 Script executed:

```shell
rg -n "stopServer|restartServer" --type=ts --type=tsx -g '!node_modules'
```

🏁 Script executed:

```shell
# Also check the current state of serverUtils.ts to see what functions are exported
fd "serverUtils.ts" --type=f
```

🏁 Script executed:

```shell
# Read the serverUtils.ts file to understand the current implementation
cat -n frontend/src/utils/serverUtils.ts | head -100
```

🏁 Script executed:

```shell
# Retry the search with correct flags
rg -n "stopServer|restartServer" --type ts -g '!node_modules'
```

🏁 Script executed:

```shell
# Also search for any imports from serverUtils to see what's being used
rg -n "from.*serverUtils|import.*startServer" --type ts -g '!node_modules'
```

🏁 Script executed:

```shell
# Check if there are any shutdown or cleanup handlers in the app
rg -n "shutdown|cleanUp|exit|process.exit" --type ts -g '!node_modules' | head -30
```

🏁 Script executed:

```shell
# Check main.tsx to see how startServer is used and the app lifecycle
cat -n frontend/src/main.tsx
```

🏁 Script executed:

```shell
# Look for any Tauri app event handlers or lifecycle
rg -n "window.addEventListener|addEventListener.*unload|beforeunload|App.tsx" --type ts -A 3 -g '!node_modules' | head -50
```

🏁 Script executed:

```shell
# Check if there's an App.tsx or any root component with lifecycle handling
fd "App.tsx" --type f
```
Implement shutdown logic in the window close listener.
The `onCloseListener` in main.tsx (lines 11-26) registers for window close events, but the shutdown code is missing (line 21 contains only a comment). Since `startServer` spawns backend and sync service processes, these processes will remain orphaned when the app exits. Either restore the `stopServer` functionality or implement shutdown code in the `onCloseRequested` callback to gracefully terminate the spawned processes.
```python
logger.info("Checking database connection...")
connection_timeout = 60
retry_interval = 5
start_time = time.time()
attempt = 0

while True:
    attempt += 1
    elapsed_time = time.time() - start_time

    if db_check_database_connection():
        logger.info(f"Database connection successful on attempt {attempt}")
        break

    if elapsed_time >= connection_timeout:
        logger.error(f"Failed to connect to PictoPy database after {attempt} attempts over {elapsed_time:.1f} seconds")
        logger.error("Make sure the main PictoPy backend is set up and the database exists")
        raise RuntimeError("Database connection failed after multiple attempts")

    logger.warning(f"Database connection attempt {attempt} failed. Retrying in {retry_interval} seconds... ({elapsed_time:.1f}s elapsed)")
    time.sleep(retry_interval)
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's verify the file and its context
fd lifespan.py | head -20
```

🏁 Script executed:

```shell
# If found, let's examine the file structure around the flagged lines
cat -n sync-microservice/app/core/lifespan.py | sed -n '1,60p'
```

🏁 Script executed:

```shell
# Check if there are other database connection implementations for comparison
rg -A 5 "get_db_connection" --type py | head -40
```

🏁 Script executed:

```shell
# Check if asyncio.sleep is used elsewhere in the codebase
rg "asyncio\.sleep" --type py | head -20
```

🏁 Script executed:

```shell
# Let's check the db_check_database_connection function to understand what it does
rg -B 3 -A 10 "def db_check_database_connection" --type py
```

🏁 Script executed:

```shell
# Let's check if there are similar patterns in other async code in the project
rg -B 3 -A 3 "time\.sleep" --type py sync-microservice/
```

🏁 Script executed:

```shell
# Let's see the full implementation of get_db_connection to understand the learning better
cat -n backend/app/database/connection.py | head -50
```

🏁 Script executed:

```shell
# Check if there are any other async database checks during startup in the codebase
rg -B 5 -A 5 "asynccontextmanager" --type py sync-microservice/ | head -60
```
Use `asyncio.sleep()` instead of `time.sleep()` in the async lifespan context.

The blocking `time.sleep()` call on line 51 blocks the event loop in an async context. While this occurs only once during startup initialization and may be acceptable, it should use `asyncio.sleep()` for proper async compatibility and best practices.
🤖 Prompt for AI Agents
In sync-microservice/app/core/lifespan.py around lines 31 to 51, the code uses
blocking time.sleep() inside an async lifespan context which blocks the event
loop; replace time.sleep(retry_interval) with await
asyncio.sleep(retry_interval), import asyncio at the top, and ensure the
surrounding lifespan function is async (or running in an async context) so await
is valid; no other behavior changes are needed — keep the retry loop logic and
logging the same.
Actionable comments posted: 2
♻️ Duplicate comments (1)
sync-microservice/app/core/lifespan.py (1)
31-57: Verify that startup retry logic aligns with project preferences.

Based on learnings, the maintainer previously preferred not to use database connection retry logic in the centralized `get_db_connection()` context manager. While this retry logic is in the startup/lifespan context (not the connection manager), please confirm that adding retry logic here aligns with the project's architectural preferences and doesn't conflict with the earlier decision.

Additionally, the existing past review comment correctly flags that line 57 uses blocking `time.sleep()` instead of `await asyncio.sleep()` in this async context, which should be addressed.

Based on learnings, which indicate a preference against retry logic in database connections.
🧹 Nitpick comments (1)
sync-microservice/app/core/lifespan.py (1)
32-33: Consider making timeout and retry interval configurable.

The 60-second timeout and 5-second retry interval are hardcoded. Consider moving these to configuration (e.g., environment variables or `app/config/settings.py`) for easier tuning across environments.

🔎 Example configuration approach

In `app/config/settings.py`, add:

```python
DB_CONNECTION_TIMEOUT: int = 60
DB_RETRY_INTERVAL: int = 5
```

Then reference in this file:

```diff
+from app.config.settings import DB_CONNECTION_TIMEOUT, DB_RETRY_INTERVAL

 # Check database connection
 logger.info("Checking database connection...")
-connection_timeout = 60
-retry_interval = 5
+connection_timeout = DB_CONNECTION_TIMEOUT
+retry_interval = DB_RETRY_INTERVAL
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- `backend/main.py`
- `frontend/src-tauri/capabilities/migrated.json`
- `sync-microservice/app/core/lifespan.py`
- `sync-microservice/app/routes/folders.py`
- `sync-microservice/app/routes/watcher.py`
- `sync-microservice/main.py`
💤 Files with no reviewable changes (1)
- backend/main.py
🚧 Files skipped from review as they are similar to previous changes (1)
- sync-microservice/app/routes/folders.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-31T17:00:50.132Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 570
File: backend/app/database/connection.py:16-24
Timestamp: 2025-10-31T17:00:50.132Z
Learning: In PictoPy backend, the user prefers not to use database connection retry logic or extended busy timeouts in the centralized get_db_connection() context manager, even though the app has concurrent access patterns via ProcessPoolExecutor and FastAPI.
Applied to files:
sync-microservice/app/core/lifespan.py
📚 Learning: 2025-10-15T07:13:34.502Z
Learnt from: Hemil36
Repo: AOSSIE-Org/PictoPy PR: 548
File: backend/app/database/images.py:115-115
Timestamp: 2025-10-15T07:13:34.502Z
Learning: In the PictoPy backend and sync-microservice, exception logging using `logger.error(f"...")` without stack traces (i.e., without `exc_info=True` or `logger.exception()`) is acceptable for console logging, as this is a deliberate design decision made with the maintainer for console-only logs.
Applied to files:
sync-microservice/app/core/lifespan.py
🧬 Code graph analysis (1)
sync-microservice/app/core/lifespan.py (1)
sync-microservice/app/database/folders.py (1)
db_check_database_connection(47-72)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Tauri Build Check (windows-latest)
- GitHub Check: Tauri Build Check (macos-latest, --target aarch64-apple-darwin)
- GitHub Check: Tauri Build Check (ubuntu-22.04)
🔇 Additional comments (5)
sync-microservice/app/routes/watcher.py (1)
14-14: LGTM! Clean refactoring that centralizes route configuration.

The router initialization change removes the prefix and tags from the route definition, moving them to the mount point in `main.py`. This creates a cleaner separation of concerns where route definitions focus on endpoint logic while the main application file handles routing configuration.

sync-microservice/main.py (1)
40-42: The review comment is incorrect. The current codebase shows no `/api/v1` prefixes in the sync-microservice, and all clients have already been updated to use the new endpoint structure:

- `sync-microservice/main.py` currently mounts routers at `/health`, `/watcher`, and `/folders` (without the `/api/v1` prefix)
- `frontend/src/api/apiEndpoints.ts` already uses the `/folders/*` and `/health` endpoints
- `frontend/src/utils/serverUtils.ts` calls `SYNC_MICROSERVICE_URL + '/health'` directly
- `backend/app/utils/API.py` already uses `SYNC_MICROSERVICE_URL + "/watcher/restart"`
- Tests (`backend/tests/test_folders.py`) already use the `/folders` prefix

All integration points are aligned with the new endpoint structure. No breaking changes remain unresolved: either this change was made previously and all clients were already updated, or the review is based on an incorrect assumption about what the code changes actually represent.
Likely an incorrect or invalid review comment.
frontend/src-tauri/capabilities/migrated.json (3)
124-134: Windows configuration mirrors Unix changes consistently.

The Windows sync service permission block follows the same pattern as the Unix configuration, maintaining cross-platform consistency. The process lifecycle concerns raised for the Unix configuration (lines 113-123) apply equally here.

92-112: Backend service permission renaming is correct.

Verification confirms that all references to the old permission names (`StartServerUnix` and `StartServerWindows`) have been successfully updated to `StartBackendUnix` and `StartBackendWindows`. No remaining references to the old names exist in the codebase, and the new names are properly used in `frontend/src/utils/serverUtils.ts` for conditional permission invocation based on the operating system.

119-119: The working directory for both the backend and sync service commands is already properly configured via the `cwd` option in `Command.create()` calls (in `serverUtils.ts` at lines 49 and 64). The relative paths `./PictoPy_Sync` and `./PictoPy_Server` will correctly resolve within their respective configured directories (`resourcesFolderPath + '/sync-microservice'` and `resourcesFolderPath + '/backend'`). No action needed.

The change from `shell:allow-execute` to `shell:allow-spawn` is indeed a security improvement, as `allow-spawn` is more restrictive and safer.
```diff
 {
-  "identifier": "shell:allow-execute",
+  "identifier": "shell:allow-spawn",
   "allow": [
     {
-      "name": "killProcessUnix",
+      "name": "StartSyncServiceUnix",
       "cmd": "bash",
-      "args": ["-c", "pkill -f PictoPy_Server"],
+      "args": ["-c", "./PictoPy_Sync"],
       "sidecar": false
     }
   ]
 },
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Search for process lifecycle management and cleanup code
echo "=== Searching for process termination/cleanup logic ==="
rg -n -C3 'kill|terminate|cleanup|shutdown|drop|Child|std::process' --type=rust --type=ts --type=js | head -100
echo ""
echo "=== Searching for Tauri app exit/window event handlers ==="
rg -n -C3 'on_window_event|on_app_exit|AppHandle|close_requested' --type=rust | head -100
echo ""
echo "=== Searching for process spawning code ==="
rg -n -C3 'spawn|Command|PictoPy' --type=rust --type=ts --type=js | head -100
echo ""
echo "=== Checking for shell:allow-spawn related code ==="
rg -n -C5 'shell:allow|StartSyncServiceUnix|PictoPy_Sync' | head -100
echo ""
echo "=== Looking for Tauri command handlers ==="
rg -n -C3 'tauri::command|invoke|listen' --type=rust | head -100
```

Repository: AOSSIE-Org/PictoPy
Length of output: 50375

🏁 Script executed:

```shell
#!/bin/bash
echo "=== Searching serverUtils.ts specifically ==="
wc -l frontend/src/utils/serverUtils.ts
echo ""
echo "=== Full serverUtils.ts content ==="
cat -n frontend/src/utils/serverUtils.ts
echo ""
echo "=== Searching for kill/terminate/cleanup patterns in TypeScript ==="
rg -n 'kill|terminate|stopServices|cleanup|exit|close' frontend/src --type=ts --type=tsx -C2
echo ""
echo "=== Checking if processes are stored for later termination ==="
rg -n 'backendChild|syncChild' frontend/src --type=ts --type=tsx -C3
```

Repository: AOSSIE-Org/PictoPy
Length of output: 3141

🏁 Script executed:

```shell
#!/bin/bash
echo "=== Searching for Rust files with lifecycle handlers ==="
fd -e rs | grep -E "(main|lib|tauri)" | head -20
echo ""
echo "=== Looking for Tauri setup/exit handlers ==="
rg -n 'setup|on_window_event|on_app_exit|AppHandle' --type=rust -C3
echo ""
echo "=== Searching for process spawning in Rust ==="
rg -n 'Command|spawn|child' --type=rust -C2 | head -100
```

Repository: AOSSIE-Org/PictoPy
Length of output: 1385

🏁 Script executed:

```shell
#!/bin/bash
echo "=== Checking main.rs size and content ==="
wc -l frontend/src-tauri/src/main.rs
echo ""
cat -n frontend/src-tauri/src/main.rs
echo ""
echo "=== Checking lib.rs ==="
wc -l frontend/src-tauri/src/lib.rs
echo ""
cat -n frontend/src-tauri/src/lib.rs | head -50
echo ""
echo "=== Looking for any state management or cleanup ==="
rg -n 'State|Mutex|Arc|kill|terminate|drop' frontend/src-tauri/src --type=rust -C2
```

Repository: AOSSIE-Org/PictoPy
Length of output: 1539
Implement process lifecycle management for spawned services.
The spawned processes are created as local variables without any tracking or cleanup mechanism. When the app exits, these processes continue running orphaned and cannot be properly terminated. Additionally, repeated app launches will spawn duplicate instances.
Verify that the process handles are:

- Stored globally or in app state for later access
- Properly terminated when the app window closes
- Managed through Tauri's `on_window_event` or similar lifecycle hooks
Consider implementing a service manager that tracks process references and cleans them up on app shutdown.
🤖 Prompt for AI Agents
In frontend/src-tauri/capabilities/migrated.json around lines 113 to 123, the
StartSyncServiceUnix entry spawns a process as a local variable with no
lifecycle management; implement a service manager to track spawned process
handles (store them in a module-level or app-state map keyed by identifier),
ensure spawn returns and stores the child handle, register a tauri::Window
on_window_event (or app shutdown hook) to iterate tracked handles and gracefully
terminate/kill and await their exit, and ensure repeated launches check the map
to avoid duplicate instances (reuse or restart existing handle) and remove
entries after termination.
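The pattern the prompt describes (track child handles by name, avoid duplicate launches, terminate everything on shutdown) is language-agnostic. A minimal Python sketch of it, using `subprocess` as a stand-in for Tauri's shell plugin; names and structure are illustrative, not PictoPy code:

```python
import atexit
import subprocess
import sys


class ServiceManager:
    """Tracks spawned child processes by name and cleans them up on shutdown."""

    def __init__(self) -> None:
        self._children: dict[str, subprocess.Popen] = {}
        # App-exit hook, analogous to Tauri's on_window_event / shutdown hook.
        atexit.register(self.shutdown)

    def start(self, name: str, cmd: list[str]) -> subprocess.Popen:
        # Reuse a still-running instance instead of spawning a duplicate.
        existing = self._children.get(name)
        if existing is not None and existing.poll() is None:
            return existing
        child = subprocess.Popen(cmd)
        self._children[name] = child
        return child

    def shutdown(self, timeout: float = 5.0) -> None:
        for name, child in list(self._children.items()):
            if child.poll() is None:  # still running
                child.terminate()
                try:
                    child.wait(timeout=timeout)
                except subprocess.TimeoutExpired:
                    child.kill()
                    child.wait()
            del self._children[name]


if __name__ == "__main__":
    mgr = ServiceManager()
    # A long-running stand-in for the sync service binary.
    first = mgr.start("sync", [sys.executable, "-c", "import time; time.sleep(60)"])
    again = mgr.start("sync", [sys.executable, "-c", "import time; time.sleep(60)"])
    assert first is again  # duplicate launch reuses the tracked handle
    mgr.shutdown()
    assert first.poll() is not None  # child has been terminated
```

In the Tauri frontend the same shape applies: keep the `Child` handles returned by `Command.create(...).spawn()` in module-level state and kill them from the `onCloseRequested` callback rather than letting them drop.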
```diff
@@ -1,5 +1,6 @@
 from contextlib import asynccontextmanager
 from fastapi import FastAPI
+import time
```
🛠️ Refactor suggestion | 🟠 Major
Import asyncio for async-compatible sleep.
If adopting `asyncio.sleep()` (as suggested in the previous review for line 57), you'll need to import `asyncio` here.

🔎 Add asyncio import

```diff
 from contextlib import asynccontextmanager
 from fastapi import FastAPI
+import asyncio
 import time
```

Note: If switching to `asyncio.sleep()`, the `time` import can remain for `time.time()` usage.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 from contextlib import asynccontextmanager
 from fastapi import FastAPI
+import asyncio
 import time
```
🤖 Prompt for AI Agents
In sync-microservice/app/core/lifespan.py around line 3, asyncio is not imported
but will be required if you switch to using asyncio.sleep() (per prior review).
Add "import asyncio" alongside the existing "import time" at the top of the file
so async-compatible sleep calls work; you may keep the existing time import for
time.time() usage.
* Refactor sync-microservice and backend service configurations

  - Updated SYNC_MICROSERVICE_URL in settings.py files to remove redundant API versioning.
  - Removed microservice utility functions from main.py and adjusted router configurations.
  - Enhanced setup scripts for sync-microservice with Python 3.12 compatibility.
  - Improved documentation for sync-microservice setup and troubleshooting.
  - Cleaned up server utility functions to streamline service start processes.
  - Adjusted API routes in sync-microservice for better organization and clarity.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores