Description
Extend GPU support beyond NVIDIA to include AMD GPUs and Apple's Neural Engine for faster model inference. GPU acceleration will be enabled by default but can be disabled by users from the settings page.
Tasks
- Create a provider manager for dynamic hardware selection:
  - NVIDIA CUDA support (for Linux)
  - DirectML support (for Windows; this can run models on any GPU, including integrated ones)
  - Apple CoreML support
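The provider manager above could look roughly like the sketch below, assuming the project uses ONNX Runtime (which these provider names suggest). The function name `select_providers` and the preference table are illustrative, not taken from the repository; in practice `available` would come from `onnxruntime.get_available_providers()`.

```python
import platform

# Hypothetical OS-to-provider preference table (illustrative names only).
PREFERRED = {
    "Linux": ["CUDAExecutionProvider"],     # NVIDIA CUDA
    "Windows": ["DmlExecutionProvider"],    # DirectML (any GPU)
    "Darwin": ["CoreMLExecutionProvider"],  # Apple CoreML / Neural Engine
}

def select_providers(available, system=None):
    """Return preferred providers that are actually available, CPU last."""
    system = system or platform.system()
    wanted = PREFERRED.get(system, [])
    chosen = [p for p in wanted if p in available]
    chosen.append("CPUExecutionProvider")  # CPU is always kept as a fallback
    return chosen
```

Keeping `CPUExecutionProvider` at the end of the list means ONNX Runtime can still build a working session even when no GPU provider is usable.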
- Refactor model initialization in app/facenet/facenet.py
- Add automatic fallback to CPU if GPU providers fail
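The CPU fallback could be a small wrapper around session creation, sketched below under the same ONNX Runtime assumption. `with_cpu_fallback` is a hypothetical helper; the `create` callable would typically be something like `lambda p: onnxruntime.InferenceSession(model_path, providers=p)`.

```python
def with_cpu_fallback(create, providers):
    """Try to build a session with the given providers; retry CPU-only on failure.

    `create` is any callable that takes a provider list and returns a session
    (or raises if a GPU provider fails to initialize).
    """
    try:
        return create(providers)
    except Exception:
        # A GPU provider failed to load (missing driver, unsupported hardware,
        # etc.); fall back to a CPU-only session so inference still works.
        return create(["CPUExecutionProvider"])
```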
Expected Outcome
Models run on available GPU hardware (NVIDIA, AMD, Apple) with significant performance improvements over CPU-only inference.