feat: support for memory-mapping model weights #1414

Open
wbruna wants to merge 7 commits into leejet:master from wbruna:sd_mmap_weights

Conversation


@wbruna wbruna commented Apr 13, 2026

A follow-up to #1059, this adds support for pointing tensor storage buffers directly into memory-mapped model files.

Apart from the expected limitations (e.g. weight types need to match), for now a lot of stars need to be properly aligned:

  • only enabled for 100% CPU backends, to avoid the complexity of tracking backend information per tensor; so e.g. --clip-on-cpu won't benefit from it. On the other hand, it does work with --offload-to-cpu
  • only enabled if LoRA apply mode is at_runtime (even if no LoRAs are loaded). I've reused the I/O mmap support, which is read-only, so it needs to avoid trying to modify the mapped weights in place.

Edit: added device compatibility detection in the same way as llama.cpp, and per-tensor tracking; so all compatible devices should be supported, including with --clip-on-cpu and --vae-on-cpu.

Edit 2: for LoRA apply mode immediately, the mapping is made writable. With certain LoRAs the weight patching may cancel most of the mmap savings, but it still helps for the tensors that remain unchanged (note: working fine on Linux, but I couldn't test it on Windows).

The existing mmap support on the I/O path isn't affected.

[INFO ] model.cpp:1469 - memory-mapped 606 tensors in 3 files (8356.31 MB), taking 0.00s
[DEBUG] ggml_extend.hpp:2046 - qwen3 params backend buffer size = 1483.75 MB(RAM) (398 tensors)
[DEBUG] ggml_extend.hpp:2046 - z_image params backend buffer size = 6.93 MB(RAM) (453 tensors)
[DEBUG] ggml_extend.hpp:2046 - vae params backend buffer size = 92.57 MB(RAM) (138 tensors)
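
Conceptually, the approach boils down to mapping the model file read-only and pointing each weight tensor's data field at the matching offset inside the mapping, instead of copying the data into a separately allocated params buffer; device compatibility is detected the way llama.cpp does it, via the backend device's buffer_from_host_ptr capability. A rough sketch of that idea (POSIX-only, with made-up helper names; the branch itself goes through sd.cpp's existing mmap wrapper and model loader rather than raw mmap calls like these):

// Illustrative sketch only; the names below are not from the PR.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include "ggml.h"
#include "ggml-backend.h"

struct mapped_file {
    void*  addr = nullptr;
    size_t size = 0;
};

// Map the whole model file read-only. The mapping stays valid after close();
// it is released with munmap once the weights are no longer needed.
static bool map_model_file(const char* path, mapped_file& out) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return false;
    struct stat st{};
    if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return false; }
    void* addr = mmap(nullptr, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (addr == MAP_FAILED) return false;
    out = {addr, (size_t)st.st_size};
    return true;
}

// llama.cpp-style compatibility check: only devices that can wrap a host
// pointer in a backend buffer can point tensors at the mapping directly.
static bool device_can_use_mmap(ggml_backend_dev_t dev) {
    ggml_backend_dev_props props{};
    ggml_backend_dev_get_props(dev, &props);
    return props.caps.buffer_from_host_ptr;
}

// Instead of copying into a freshly allocated params buffer, point the
// tensor straight at the mapped bytes (weight type and layout must match).
static void point_tensor_at_mapping(ggml_tensor* t, const mapped_file& m, size_t offset) {
    t->data = (char*)m.addr + offset;
}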

@wbruna wbruna changed the title from "feat: initial support for memory-mapping model weights" to "feat: support for memory-mapping model weights" on Apr 14, 2026
@wbruna wbruna force-pushed the sd_mmap_weights branch 2 times, most recently from 97190f6 to 776fea2 on April 19, 2026 19:32
@wbruna wbruna force-pushed the sd_mmap_weights branch from 12d6f98 to ec8de10 on May 6, 2026 23:34
pwilkin and others added 2 commits May 10, 2026 21:17
Without an explicit posix_fadvise(POSIX_FADV_DONTNEED), the Linux
kernel keeps a model file's pages cached as buff/cache long after
we're done with it, so loading the LLM (13.7 GB) followed by the
DiT (17 GB) piles up to 30+ GB of cached pages on a 32 GB box and
triggers the OOM-killer.

- Keep the file descriptor alive in MmapWrapperImpl so we can
  posix_fadvise(POSIX_FADV_DONTNEED) on it before munmap. madvise
  alone only drops the mapped pages; it does not evict the page cache.
- Add POSIX_FADV_SEQUENTIAL on open: nudges the kernel toward a
  smaller working set during the read.
- Make the "using mmap" log line INFO instead of DEBUG so the user
  can confirm at a glance.
- Bound the lazy-load worker count to 2: the per-thread staging
  buffers grow to the largest tensor seen, so n_threads=8 doubles
  RAM peak for no measurable read-throughput gain.

Result on 32 GB box: peak RSS ~6 GB, peak buff/cache ~12 GB during
LLM lazy load — comfortably within budget.
- drop superfluous validity tests from the mmap handler destructor,
  since by design they are always valid on the manager object
- check against zero-sized files
- control read-ahead and discard hints through an environment
  variable: on my own system, with a warm cache, all these flags
  actually hurt performance for common sd-cli runs (~10-20% worse
  loading times), so they should probably be enabled on a
  case-by-case basis
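
For reference, the read-ahead and discard hints in these commits come down to a pair of posix_fadvise calls around the mapping's lifetime, now gated behind an environment variable. A rough sketch of that shape, Linux/POSIX only; SD_MMAP_HINTS is only a placeholder name, not necessarily the variable the branch actually reads:

// Sketch of the opt-in page-cache hints; SD_MMAP_HINTS is a placeholder.
#include <cstdlib>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

static bool mmap_hints_enabled() {
    const char* v = std::getenv("SD_MMAP_HINTS");  // placeholder name
    return v != nullptr && v[0] != '\0' && v[0] != '0';
}

// On open: hint a sequential scan so the kernel keeps a smaller
// read-ahead window while the weights are streamed in.
static void hint_sequential_read(int fd) {
    if (mmap_hints_enabled()) {
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    }
}

// On teardown: munmap only drops the address range; to actually evict the
// cached pages we also issue POSIX_FADV_DONTNEED on the still-open descriptor.
static void release_mapping(int fd, void* addr, size_t size) {
    if (mmap_hints_enabled()) {
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    }
    munmap(addr, size);
    close(fd);
}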

wbruna commented May 11, 2026

@pwilkin , I've cherry-picked b8d1c99 here to make it easier to test mmap behavior.

I'm not sure why, but the performance flags made loading times consistently worse for me, so I've made them opt-in through an env var. For consistency, and because consecutive sd-cli runs would also benefit from a cached model, I've made the cache eviction opt-in too; but I don't feel strongly about it.


junmo-kim commented May 11, 2026

Hi @wbruna, thanks for this PR — I've been running a merged build (master + this branch) for image generation/edit workloads. Hit a consistent failure with Qwen-Image GGUF models + --offload-to-cpu + --mmap on AMD Radeon 780M / Vulkan / Windows 11 / 64 GB RAM:

[ERROR] ggml_extend.hpp:2063 - qwen2.5vl alloc params backend buffer failed, num_tensors = 857
[ERROR] ggml_extend.hpp:2063 - qwen_image alloc params backend buffer failed, num_tensors = 1933
[INFO ] wan_vae params backend buffer size = 242.10 MB(RAM) (194 tensors)
[INFO ] main.cpp:148  - listening on: 127.0.0.1:7860

sd-server enters listen state but with diffusion_model 0.00MB — any inference request returns blank/noise.

Root cause

When all tensors in params_ctx are already memory-mapped (i.e. t->data != NULL), ggml_backend_alloc_ctx_tensors_from_buft_impl in ggml-alloc.c correctly returns NULL with n_buffers == 0:

// ggml/src/ggml-alloc.c L1210-1215
if (n_buffers == 0) {
#ifndef NDEBUG
    GGML_LOG_DEBUG("%s: all tensors in the context are already allocated\n", __func__);
#endif
    GGML_ASSERT(!buffers);
    return NULL;
}

But GGMLRunner::alloc_params_buffer() treats any NULL return as a hard failure. For GGUF models with mmap enabled, every tensor has a valid t->data pointer → n_buffers == 0 → spurious LOG_ERROR. VAE (loaded from safetensors here) allocates fine because its tensors aren't in the mmap region — only the GGUF diffusion model and GGUF text encoder fail.

This is consistent with the failing components in the log above: qwen2.5vl and qwen_image are both GGUF, wan_vae (safetensors) is fine.

Proposed fix

Add a check before the failure path: if all tensors in params_ctx already have t->data (or are views), treat it as "no separate buffer needed":

bool alloc_params_buffer() {
    size_t num_tensors = ggml_tensor_num(params_ctx);
    params_buffer = ggml_backend_alloc_ctx_tensors(params_ctx, params_backend);
    // mmap-aware path: ggml returns NULL when all tensors are already allocated
    // (typical for memory-mapped weights). See ggml-alloc.c n_buffers==0 branch.
    if (params_buffer == nullptr && num_tensors > 0) {
        bool all_have_data = true;
        for (ggml_tensor * t = ggml_get_first_tensor(params_ctx); t != nullptr; t = ggml_get_next_tensor(params_ctx, t)) {
            if (t->data == nullptr && t->view_src == nullptr) {
                all_have_data = false;
                break;
            }
        }
        if (all_have_data) {
            LOG_DEBUG("%s all params already mmap-allocated (no separate buffer needed)", get_desc().c_str());
            rebuild_params_tensor_set();
            return true;
        }
    }
    if (params_buffer == nullptr) {
        LOG_ERROR("%s alloc params backend buffer failed, num_tensors = %i",
                  get_desc().c_str(), num_tensors);
        return false;
    }
    rebuild_params_tensor_set();
    ggml_backend_buffer_set_usage(params_buffer, GGML_BACKEND_BUFFER_USAGE_WEIGHTS);
    // ... rest unchanged
}

free_params_buffer and get_params_buffer_size already have null guards, and grepping the source I haven't found other paths that dereference params_buffer without checking — so leaving it nullptr in the mmap case appears safe.

Verification

  • Built on Windows 11 + Vulkan SDK 1.4.341 + MSVC, 64 GB RAM, branch merged with current master (so the failure is not RAM exhaustion — peak working set stays well under available memory)
  • Tested with Qwen-Image-Q8 (21.8 GB GGUF) + Qwen 2.5 VL text encoder (safetensors) + qwen_image_vae (safetensors)
  • Tested with Qwen-Image-Edit-2511-Q8 (21.8 GB GGUF) + Qwen 2.5 VL text encoder (Q8 GGUF) + qwen_image_vae (safetensors) — both GGUF diffusion and GGUF TE trigger the n_buffers==0 path, both handled by the fix
  • 4-step 1024² t2i: wall 421s, valid 638 KB PNG, no errors/warnings
  • Confirmed safetensors-only paths are unaffected (VAE always loads correctly in both setups)

Happy to open a separate PR if you'd prefer, or you can incorporate it directly. The underlying ggml-alloc behavior is backend-agnostic, so I expect this generalizes to CUDA/Metal as well — confirmation from users on those backends would be welcome.
