
feat: add local inference provider with llama.cpp backend and HuggingFace model management #17232
Triggered via: pull request, February 16, 2026 21:37
Status: Failure
Total duration: 10m 32s
Artifacts: none

Workflow: ci.yml (on: pull_request)
Check Rust Code Format: 55s
Build and Test Rust Project: 10m 11s
Lint Rust Code: 7m 49s
Check OpenAPI Schema is Up-to-Date: 9m 36s
Test and Lint Electron Desktop App: 49s
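The job list above suggests a workflow shaped roughly like the following. This is a hypothetical sketch: only the five job names and the `pull_request` trigger come from the run summary; the runner image, checkout steps, commands, and directory layout are all assumptions.

```yaml
# Hypothetical reconstruction of ci.yml; steps and commands are assumed.
name: ci
on: pull_request

jobs:
  fmt:
    name: Check Rust Code Format
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo fmt --all -- --check

  build-test:
    name: Build and Test Rust Project
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo test --workspace

  lint:
    name: Lint Rust Code
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo clippy --workspace --all-targets -- -D warnings

  openapi:
    name: Check OpenAPI Schema is Up-to-Date
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumed pattern: regenerate the schema, then fail if the
      # committed copy differs from the freshly generated one.
      - run: cargo run --bin generate-openapi && git diff --exit-code

  electron:
    name: Test and Lint Electron Desktop App
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # "desktop" directory and npm scripts are assumptions.
      - run: npm ci && npm run lint && npm test
        working-directory: desktop
```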

Annotations

2 errors
Test and Lint Electron Desktop App: Process completed with exit code 2.
Build and Test Rust Project: Process completed with exit code 101.
(Exit code 101 is the standard exit status cargo reports when a build fails or a test panics.)