Closed as not planned
Prerequisites
- I am running the latest code. Mention the version if possible as well.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
It would be great if llama-server, when run in router mode and/or with a config.ini, could unload models that have been idle for a given amount of time.
For example, the setting could default to 300 (seconds), so models that haven't performed any inference in 5 minutes are unloaded to free up resources.
Motivation
Models currently stay loaded until the maximum number of loaded models is reached, consuming system resources in the meantime.
llama-swap provides this as a ttl setting.
Somewhat related:
- Feature Request: Free up VRAM when llama-server not in use #11703
- Power save mode for server --unload-timeout 120 #4598
Possible Implementation
An example config.ini with a global default and a per-model override:

```ini
unload-idle-seconds = 300

[some-custom-important-model]
m = /models/custom.gguf
unload-idle-seconds = 3600
```

Or as a command-line flag:

```
llama-server --unload-idle-seconds 300 --models-dir /models
```