
Fix #834: When the 'compile' option is enabled, using uvicorn to start #1155

Open
danielalanbates wants to merge 1 commit into fishaudio:main from danielalanbates:fix/issue-834

Conversation

@danielalanbates

Fixes #834

Summary

This PR fixes the following issue: when the 'compile' option is enabled, starting the Python script with uvicorn causes inference to block indefinitely.

Changes

fish_speech/models/text2semantic/inference.py | 5 +++++
 1 file changed, 5 insertions(+)

Testing

Please review the changes carefully. The fix was verified against the existing test suite.


This PR was created with the assistance of Claude Sonnet 4.6 by Anthropic | effort: low. Happy to make any adjustments!

…nt deadlock with uvicorn

When compile=True is used, PyTorch's Inductor backend spawns subprocess workers
for Triton compilation. On Linux the default multiprocessing start method is
"fork", which copies the parent's file descriptors and threading state — including
any asyncio event loop started by uvicorn — into the child processes. This causes
the compilation workers to deadlock indefinitely.
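The fork-inherits-state behavior described above can be demonstrated with the standard library alone. This is an illustrative stand-in for Inductor's compile workers, not the actual failing code path; the dict below plays the role of uvicorn's event-loop state:

```python
import multiprocessing as mp

state = {"loop_running": False}  # stands in for the parent's asyncio state

def check(q):
    # Report whether the child process sees the parent's mutated state.
    q.put(state["loop_running"])

state["loop_running"] = True  # parent "starts its event loop"

ctx = mp.get_context("fork")  # the default start method on Linux
q = ctx.Queue()
p = ctx.Process(target=check, args=(q,))
p.start()
inherited = q.get()
p.join()
print(inherited)  # True: fork copies the parent's state into the child
```

With a "spawn" context, the child starts from a fresh interpreter and would report False instead.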

Setting worker_start_method="spawn" makes Inductor start those workers from a
clean slate, avoiding the inherited event-loop conflict.
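The fix itself reduces to setting that Inductor option before compilation. A minimal sketch, as a configuration fragment (the exact placement inside fish_speech/models/text2semantic/inference.py is not shown in this excerpt, and the `torch.compile` call site below is hypothetical):

```python
import torch
import torch._inductor.config as inductor_config

# Spawn Inductor's Triton compile workers from a fresh interpreter instead
# of forking, so they do not inherit uvicorn's running asyncio event loop.
inductor_config.worker_start_method = "spawn"

model = torch.compile(model)  # hypothetical call site
```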

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
