One-command installer for llama.cpp with automatic GPU detection and updates.
- llama-cli - Run language models from command line
- llama-server - HTTP API server for your models
- GPU optimized - Automatically picks CUDA/Metal/Vulkan build for your system
- Auto-updates - Keeps llama.cpp fresh without manual work
Install and run in under 30 seconds:
# One-line install
curl -fsSL https://raw.githubusercontent.com/Rybens92/llama-installer/630ec7c/llama-installer.sh | bash
# Or install the script globally first, then use anywhere
curl -fsSL https://raw.githubusercontent.com/Rybens92/llama-installer/630ec7c/llama-installer.sh | bash -s -- --install
llama-installer # Use from anywhere after this

If you want to run llama.cpp (the popular LLM inference engine), you usually have to:
- Find the right binary for your system
- Manually download it from GitHub releases
- Figure out if you need CUDA/Metal/Vulkan version
- Update it yourself when new versions come out
This script does all of that automatically: it detects your system, picks the best GPU build, then downloads and installs it. That's it.
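The detection step can be sketched roughly like this. This is a hypothetical illustration, not the installer's actual code: `detect_platform` and the tag names are made up, and the real script's probing logic may differ.

```shell
# Sketch of OS/architecture/GPU detection; illustrative only,
# not taken from llama-installer itself.
detect_platform() {
  os=$(uname -s)
  arch=$(uname -m)
  case "$os" in
    Linux)        os_tag=linux ;;
    Darwin)       os_tag=macos ;;
    MINGW*|MSYS*) os_tag=windows ;;
    *)            os_tag=unknown ;;
  esac
  # Prefer a GPU build when a vendor tool is visible on PATH.
  if command -v nvidia-smi >/dev/null 2>&1; then
    gpu=cuda
  elif [ "$os_tag" = macos ]; then
    gpu=metal
  elif command -v vulkaninfo >/dev/null 2>&1; then
    gpu=vulkan
  else
    gpu=cpu
  fi
  printf '%s-%s-%s\n' "$os_tag" "$arch" "$gpu"
}
```

A triple like `linux-x86_64-cuda` is then enough to pick a matching release asset.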
Install the script to your system so you can use it anytime:
curl -fsSL https://raw.githubusercontent.com/Rybens92/llama-installer/630ec7c/llama-installer.sh | bash -s -- --install

Then use from anywhere:
llama-installer # Install latest llama.cpp
llama-installer --help # Show all options
llama-installer -n # Preview what will be installed

Skip the global install and run directly:
curl -fsSL https://raw.githubusercontent.com/Rybens92/llama-installer/630ec7c/llama-installer.sh | bash
# With options:
curl -fsSL https://raw.githubusercontent.com/Rybens92/llama-installer/630ec7c/llama-installer.sh | bash -s -- -n # Preview
curl -fsSL https://raw.githubusercontent.com/Rybens92/llama-installer/630ec7c/llama-installer.sh | bash -s -- -v b7411 # Specific version
curl -fsSL https://raw.githubusercontent.com/Rybens92/llama-installer/630ec7c/llama-installer.sh | bash -s -- -d /opt/bin # Custom directory

Basic commands:
llama-installer # Install latest version
llama-installer -n # Preview (safe to run)
llama-installer -v b7411 # Install specific version
llama-installer -u # Update existing installation
llama-installer -d /custom/path # Install elsewhere

After installation:
llama-cli --help # Command line interface
llama-server --help # HTTP server

Set it and forget it - the script can automatically check for and install new llama.cpp versions.
Setup:
# Check every hour
llama-installer --auto-update hourly
# Check once per day
llama-installer --auto-update daily

Manage:
llama-installer --auto-update-status # See current status
llama-installer --auto-update-logs # View recent activity
llama-installer --auto-update-disable # Pause updates
llama-installer --auto-update-remove # Remove completely

Note: Auto-update only works with the globally installed script.
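Since the changelog mentions systemd/cron support, the daily mode plausibly boils down to a scheduled entry like the one below. This is illustrative only; the entry the script actually installs (systemd timer or crontab line), and its paths, may differ.

```shell
# Illustrative cron entry roughly equivalent to `--auto-update daily`:
# run an update check at 03:00 every day and append output to a log.
# (Hypothetical log path; not taken from the installer.)
0 3 * * * "$HOME/.local/bin/llama-installer" -u >> "$HOME/.llama-installer-update.log" 2>&1
```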
- OS: Linux, macOS, Windows (with WSL/Git Bash)
- Architecture: x64, ARM64, s390x
- GPU: NVIDIA (CUDA), AMD (ROCm), Apple (Metal), Intel (Vulkan), or CPU-only
- Dependencies: curl, tar, sha256sum (usually pre-installed; macOS ships `shasum` instead of `sha256sum`)
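You can verify the dependencies above are on your PATH before piping anything to bash. `check_deps` below is a hypothetical helper for illustration, not part of llama-installer:

```shell
# Sketch: confirm required tools exist before running the installer.
# check_deps is a made-up helper name, not from llama-installer.
check_deps() {
  missing=0
  for dep in "$@"; do
    if ! command -v "$dep" >/dev/null 2>&1; then
      echo "missing dependency: $dep" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_deps curl tar sha256sum && echo "all dependencies present"
```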
"llama-installer: command not found" Add to your PATH:
export PATH="$HOME/.local/bin:$PATH"
# Make permanent:
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc

"bash: --install: No such file or directory" Use this syntax instead:
curl -fsSL URL | bash -s -- --install

"File already exists" warning: normal behavior - the installer overwrites the file to ensure you get the latest version.
- Detects your OS, architecture, and GPU type
- Queries GitHub API for available llama.cpp releases
- Picks the best matching binary (GPU-optimized when possible)
- Downloads and verifies the archive
- Extracts to ~/.local/bin (or your chosen directory)
- Optionally sets up auto-update service
The script is smart about versions - it won't re-download if you already have the latest.
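The release lookup in the steps above comes down to a single GitHub API call. Here is a minimal sketch; `parse_tag` and `latest_tag` are illustrative names, the installer's own query may differ, and the upstream repository path (`ggml-org/llama.cpp`) is an assumption:

```shell
# Sketch of the release lookup; not the installer's actual code.
# Extract the "tag_name" field from a GitHub release JSON payload.
parse_tag() {
  grep -m1 '"tag_name"' | sed 's/.*"tag_name": *"\([^"]*\)".*/\1/'
}

# Ask the GitHub API for the newest llama.cpp release tag.
latest_tag() {
  curl -fsSL "https://api.github.com/repos/ggml-org/llama.cpp/releases/latest" | parse_tag
}
```

Comparing that tag against the locally recorded version is what lets the script skip redundant downloads.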
v1.3.0
- Auto-update system with systemd/cron support
- Smart version checking
- Better GPU detection
v1.2.0
- Version comparison - no redundant downloads
- Improved error handling
v1.1.0
- Global installer option
- Better documentation
v1.0.0
- Initial release - basic installation support