English | 🀄 中文说明
This repo is for Docker images that run ComfyUI - an AIGC GUI with a powerful node-based workflow.
```shell
mkdir -p \
  storage-cache/dot-cache \
  storage-cache/dot-config \
  storage-nodes/dot-local \
  storage-nodes/custom_nodes \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/user-profile \
  storage-user/user-scripts
```
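If you are scripting the setup, the same layout can also be created with a loop; `mkdir -p` is idempotent, so re-running it is safe:

```shell
# Same directory layout as above, created in a loop (POSIX sh).
# mkdir -p is idempotent: re-running never clobbers existing data.
for d in \
  storage-cache/dot-cache storage-cache/dot-config \
  storage-nodes/dot-local storage-nodes/custom_nodes \
  storage-models/models storage-models/hf-hub storage-models/torch-hub \
  storage-user/input storage-user/output \
  storage-user/user-profile storage-user/user-scripts
do
  mkdir -p "$d"
done
```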
```shell
# Add sudo if needed
docker run -it --rm \
  --name comfyui-cu130 \
  --pull=always \
  --runtime=nvidia \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/storage-cache/dot-cache:/root/.cache \
  -v "$(pwd)"/storage-cache/dot-config:/root/.config \
  -v "$(pwd)"/storage-nodes/dot-local:/root/.local \
  -v "$(pwd)"/storage-nodes/custom_nodes:/root/ComfyUI/custom_nodes \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/user-profile:/root/ComfyUI/user \
  -v "$(pwd)"/storage-user/user-scripts:/root/user-scripts \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:cu130-slim-v2
```

The supported CUDA versions for each GPU architecture are shown in the table below:
| GPU Architecture | Blackwell | Hopper | Ada Lovelace | Ampere | Turing | Volta | Pascal | Maxwell |
|---|---|---|---|---|---|---|---|---|
| Example GPU | RTX 5090 | H100 | RTX 4090 | RTX 3090 | RTX 2080 | TITAN V | GTX 1080 | GTX 980 |
| cu130 ⭐ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ❌ |
| cu128 | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ❌ | ❌ | ❌ |
| cu126 | ❌ | ❌ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
- CUDA 13.0 images are currently recommended.
- ComfyUI's performance library is currently developed against CUDA 13.0, which is also a CUDA version supported by stable PyTorch releases.
- If you are unsure about your NVIDIA GPU architecture, see this article.
> [!NOTE]
> These CUDA compatibility limitations are due to the PyTorch toolchain, not the NVIDIA CUDA Toolkit. For more information, refer to the PyTorch build script.
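For setup scripts, the compatibility matrix above can be encoded as a small helper function. The function name and mapping below are illustrative only; check the table for your exact GPU:

```shell
# Illustrative helper: map a GPU architecture name (as in the table above)
# to the newest image CUDA variant that supports it.
pick_cuda_variant() {
  case "$1" in
    Blackwell|Hopper|"Ada Lovelace"|Ampere|Turing) echo "cu130" ;;
    Volta|Pascal|Maxwell) echo "cu126" ;;
    *) echo "unknown" ;;
  esac
}

pick_cuda_variant Turing   # cu130 is the newest variant supporting Turing
pick_cuda_variant Pascal   # Pascal is only supported up to cu126
```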
The slim images start with only ComfyUI and ComfyUI-Manager, yet include many dependencies to make future Custom Node installation easier. Recommended for beginners.

- CUDA 13.0, Python 3.13 (with GIL), no xFormers
- CUDA 12.8, Python 3.12
- CUDA 12.6, Python 3.12
The megapak images are all-in-one bundles, including development kits and dozens of Custom Nodes for ComfyUI.

- CUDA 13.0, Python 3.13 (with GIL), GCC 14, PyTorch 2.11.0
- CUDA 12.8, Python 3.12, GCC 14, PyTorch 2.11.0
- CUDA 12.8, Python 3.12, GCC 11, PyTorch 2.9.1
- CUDA 12.8, Python 3.12, GCC 11, PyTorch 2.8.0
- CUDA 12.6, Python 3.12, GCC 13, PyTorch 2.9.1
- Uses a development build of PyTorch, for testing the latest features.
- For Intel GPUs with XPU.
This open-source license is written in, and valid in, both Chinese and English. How good is that!