
docs: expand GPU acceleration guide with L4T, multi-GPU, monitoring, and troubleshooting#8858

Merged
mudler merged 1 commit into mudler:master from localai-bot:docs/gpu-acceleration-h5
Mar 8, 2026

Conversation

@localai-bot
Contributor

Summary

  • Expand multi-GPU section to cover llama.cpp (CUDA_VISIBLE_DEVICES, HIP_VISIBLE_DEVICES) in addition to diffusers
  • Add NVIDIA L4T/Jetson section with quick start commands and cross-reference to the dedicated ARM64 page
  • Add GPU monitoring section with vendor-specific tools (nvidia-smi, rocm-smi, intel_gpu_top)
  • Add troubleshooting section covering common issues: GPU not detected, CPU fallback, OOM errors, unsupported ROCm targets, SYCL mmap hang
  • Replace "under construction" warning with useful cross-references to related docs
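
The device-selection variables mentioned above can be sketched as container invocations. This is a minimal illustration, not taken verbatim from the guide; the image tags shown are illustrative and should be checked against the container images page:

```shell
# NVIDIA: expose only GPUs 0 and 1 to llama.cpp inside the container
# (CUDA_VISIBLE_DEVICES indices are relative to the GPUs the runtime exposes)
docker run --gpus all -e CUDA_VISIBLE_DEVICES=0,1 \
  -p 8080:8080 localai/localai:latest-gpu-nvidia-cuda-12

# AMD ROCm: the equivalent selection uses HIP_VISIBLE_DEVICES
docker run --device=/dev/kfd --device=/dev/dri -e HIP_VISIBLE_DEVICES=0 \
  -p 8080:8080 localai/localai:latest-gpu-hipblas
```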
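
For the monitoring section, the vendor tools listed above are typically run alongside the server; a sketch (assumes the respective vendor utilities are installed on the host):

```shell
# NVIDIA: refresh utilization, VRAM, and per-process usage every second
watch -n 1 nvidia-smi

# AMD ROCm
watch -n 1 rocm-smi

# Intel (from the intel-gpu-tools package; refreshes on its own)
intel_gpu_top
```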

Replaces #8854 (was based on a stale commit).

Resolves UX Review Issue H5: Complete GPU Acceleration Documentation

Test plan

  • Verify markdown renders correctly in the docs site
  • Verify all internal cross-references (relref) resolve correctly
  • Check that new sections appear in the table of contents

🤖 Generated with Claude Code

…and troubleshooting

- Expand multi-GPU section to cover llama.cpp (CUDA_VISIBLE_DEVICES,
  HIP_VISIBLE_DEVICES) in addition to diffusers
- Add NVIDIA L4T/Jetson section with quick start commands and cross-reference
  to the dedicated ARM64 page
- Add GPU monitoring section with vendor-specific tools (nvidia-smi, rocm-smi,
  intel_gpu_top)
- Add troubleshooting section covering common issues: GPU not detected, CPU
  fallback, OOM errors, unsupported ROCm targets, SYCL mmap hang
- Replace "under construction" warning with useful cross-references to related
  docs (container images, VRAM management)

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@netlify

netlify bot commented Mar 8, 2026

Deploy Preview for localai ready!

🔨 Latest commit: 694ab12
🔍 Latest deploy log: https://app.netlify.com/projects/localai/deploys/69ad4c51d38c710008cd8045
😎 Deploy Preview: https://deploy-preview-8858--localai.netlify.app

@mudler mudler merged commit 9297074 into mudler:master Mar 8, 2026
32 of 33 checks passed
localai-bot added a commit to localai-bot/LocalAI that referenced this pull request Mar 25, 2026
…and troubleshooting (mudler#8858)

- Expand multi-GPU section to cover llama.cpp (CUDA_VISIBLE_DEVICES,
  HIP_VISIBLE_DEVICES) in addition to diffusers
- Add NVIDIA L4T/Jetson section with quick start commands and cross-reference
  to the dedicated ARM64 page
- Add GPU monitoring section with vendor-specific tools (nvidia-smi, rocm-smi,
  intel_gpu_top)
- Add troubleshooting section covering common issues: GPU not detected, CPU
  fallback, OOM errors, unsupported ROCm targets, SYCL mmap hang
- Replace "under construction" warning with useful cross-references to related
  docs (container images, VRAM management)

Signed-off-by: localai-bot <localai-bot@users.noreply.github.com>
Co-authored-by: localai-bot <localai-bot@noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
