Implement Multi-GPU Support for Accelerated Inference #469

@rahulharpal1603

Description

Extend GPU support beyond NVIDIA to include AMD GPUs and Apple's Neural Engine for faster model inference. This will be enabled by default but can be disabled by users from the settings page.

Tasks

  • Create a provider manager for dynamic hardware selection:
    • NVIDIA CUDA support (for Linux)
    • DirectML (for Windows; this can run models on any GPU, even if it's not a dedicated one)
    • Apple CoreML support
  • Refactor model initialization in app/facenet/facenet.py
  • Add automatic fallback to CPU if GPU providers fail
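Since the issue names CUDA, DirectML, and CoreML, the provider manager is presumably built on ONNX Runtime execution providers. Below is a minimal sketch of the dynamic hardware selection described above, assuming ONNX Runtime is the inference backend; the function name `select_providers` and the per-OS preference table are illustrative, not part of the existing codebase.

```python
import platform

# Preferred GPU execution providers per OS, per the task list above.
# The provider identifiers are the real ONNX Runtime names.
PREFERRED_GPU_PROVIDERS = {
    "Linux": ["CUDAExecutionProvider"],      # NVIDIA CUDA
    "Windows": ["DmlExecutionProvider"],     # DirectML (any GPU)
    "Darwin": ["CoreMLExecutionProvider"],   # Apple CoreML / Neural Engine
}


def select_providers(available, system=None, gpu_enabled=True):
    """Return an ordered provider list for an ONNX Runtime session.

    `available` is the list reported by the runtime (e.g.
    onnxruntime.get_available_providers()). GPU providers preferred for
    the current OS come first; CPUExecutionProvider is always appended
    so inference automatically falls back to CPU if GPU init fails or
    the user disables GPU in settings (gpu_enabled=False).
    """
    system = system or platform.system()
    chosen = []
    if gpu_enabled:
        chosen = [p for p in PREFERRED_GPU_PROVIDERS.get(system, [])
                  if p in available]
    return chosen + ["CPUExecutionProvider"]
```

The model initialization in `app/facenet/facenet.py` could then pass this list when creating the session, e.g. `ort.InferenceSession(model_path, providers=select_providers(ort.get_available_providers()))`; ONNX Runtime tries providers in order, which gives the CPU fallback for free.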

Expected Outcome

Models run on available GPU hardware (NVIDIA, AMD, Apple) with significant performance improvements over CPU-only inference.
