feat: add local inference provider with llama.cpp backend and HuggingFace model management #17232
Triggered via: pull_request, February 16, 2026 21:37
Status: Failure
Total duration: 10m 32s
Artifacts: –
ci.yml (on: pull_request)

Jobs:
- changes: 16s
- Check Rust Code Format: 55s
- Build and Test Rust Project: 10m 11s
- Lint Rust Code: 7m 49s
- Check OpenAPI Schema is Up-to-Date: 9m 36s
- Test and Lint Electron Desktop App: 49s
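
For context, the job names above imply a workflow roughly like the following. This is a hypothetical sketch, not the repository's actual ci.yml: the job ids, runner images, action versions, and individual steps are all assumptions inferred from the job display names.

```yaml
# Hypothetical reconstruction of ci.yml; ids and steps are guesses.
name: ci
on: pull_request

jobs:
  changes:
    # Likely a path-filter job that gates the downstream jobs.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

  rustfmt:
    name: Check Rust Code Format
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --all -- --check

  build-test:
    name: Build and Test Rust Project
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo build --workspace
      - run: cargo test --workspace   # this run failed with exit code 101

  clippy:
    name: Lint Rust Code
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo clippy --workspace -- -D warnings

  openapi:
    name: Check OpenAPI Schema is Up-to-Date
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Regenerate the schema and fail if the committed copy differs.
      - run: git diff --exit-code

  electron:
    name: Test and Lint Electron Desktop App
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test            # this run failed with exit code 2
```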
Annotations: 2 errors

Test and Lint Electron Desktop App
Process completed with exit code 2.

Build and Test Rust Project
Process completed with exit code 101.
(Exit code 101 is the standard failure exit code emitted by cargo when a build or test run fails; exit code 2 is the generic failure status of the Electron app's test/lint tooling.)