
Run ComfyUI with ROCm 6 on AMD GPU

Prepare

  • Make sure AMD GPU driver (kernel module amdgpu) is installed on your host system.

  • Check whether your GPU/CPU is supported by ROCm 6.

Build & Run

You may need to add the following configuration (especially for APUs) to the docker run / podman run commands below. (Credit to nhtua)

  • For RDNA 2 cards:

    • -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \

  • For RDNA 3 cards:

    • -e HSA_OVERRIDE_GFX_VERSION=11.0.0 \

    • Check the AMD doc to see if your GPU can use 11.0.1.

  • For integrated graphics on CPU:

    • -e HIP_VISIBLE_DEVICES=0 \
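The mapping above can be sketched as a small shell helper. This is just a sketch: the `pick_override` name and the gfx-prefix matching are assumptions — confirm your GPU's actual target with `rocminfo` and AMD's documentation before relying on it.

```shell
# Sketch: map the gfx target reported by `rocminfo` to an override value.
# The prefixes below cover common RDNA 2 (gfx103x) and RDNA 3 (gfx110x)
# cards; other targets may not need an override at all.
pick_override() {
  case "$1" in
    gfx103*) echo "10.3.0" ;;   # RDNA 2
    gfx110*) echo "11.0.0" ;;   # RDNA 3 (some cards can use 11.0.1)
    *)       echo "" ;;         # unknown target / no override
  esac
}

# Example: feed it your actual target, e.g.
#   pick_override "$(rocminfo | grep -m1 -o 'gfx[0-9a-f]*')"
```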

You may also want to add more environment variables:

  • Enable tunable operations (slower first run, but faster subsequent runs; see Doc1, Doc2). (Thanks to SergeyFilippov)

    • -e PYTORCH_TUNABLEOP_ENABLED=1 \
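If you enable TunableOp, you can also control whether tuning runs and where the results file is written. A config fragment to splice into the docker run / podman run commands below (the variable names come from PyTorch's TunableOp docs; the file path is illustrative):

```shell
  -e PYTORCH_TUNABLEOP_ENABLED=1 \
  -e PYTORCH_TUNABLEOP_TUNING=1 \
  -e PYTORCH_TUNABLEOP_FILENAME=/root/tunableop_results.csv \
```

Writing the results file under /root keeps it inside the mounted storage directory, so tuning results survive container restarts.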

With Docker
mkdir -p \
  storage \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/workflows

docker run -it --rm \
  --name comfyui-rocm6 \
  --device=/dev/kfd --device=/dev/dri \
  --group-add=video --ipc=host --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/workflows:/root/ComfyUI/user/default/workflows \
  -e HSA_OVERRIDE_GFX_VERSION="" \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:rocm6
With Podman
mkdir -p \
  storage \
  storage-models/models \
  storage-models/hf-hub \
  storage-models/torch-hub \
  storage-user/input \
  storage-user/output \
  storage-user/workflows

podman run -it --rm \
  --name comfyui-rocm6 \
  --device=/dev/kfd --device=/dev/dri \
  --group-add=video --ipc=host --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --security-opt label=disable \
  -p 8188:8188 \
  -v "$(pwd)"/storage:/root \
  -v "$(pwd)"/storage-models/models:/root/ComfyUI/models \
  -v "$(pwd)"/storage-models/hf-hub:/root/.cache/huggingface/hub \
  -v "$(pwd)"/storage-models/torch-hub:/root/.cache/torch/hub \
  -v "$(pwd)"/storage-user/input:/root/ComfyUI/input \
  -v "$(pwd)"/storage-user/output:/root/ComfyUI/output \
  -v "$(pwd)"/storage-user/workflows:/root/ComfyUI/user/default/workflows \
  -e HSA_OVERRIDE_GFX_VERSION="" \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:rocm6

Once the app is loaded, visit http://localhost:8188/
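If you script the startup, a small polling helper can tell you when the UI is ready. A sketch: `wait_for_ui` is a hypothetical name, and it assumes `curl` is installed on the host.

```shell
# Poll the web UI until it responds, up to a timeout (in seconds).
wait_for_ui() {
  port="$1"; timeout="${2:-60}"; i=0
  while [ "$i" -lt "$timeout" ]; do
    if curl -sf "http://localhost:$port/" >/dev/null; then
      return 0   # server answered
    fi
    i=$((i + 1)); sleep 1
  done
  return 1       # gave up waiting
}

# Usage:
#   wait_for_ui 8188 && echo "ComfyUI is up at http://localhost:8188/"
```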

ROCm: If you want to dive in…

(Just side notes. Nothing to do with this Docker image)

The commands below use AMD's prebuilt ROCm PyTorch image.

The image is large, but it can help if you have a hard time running the container above: it takes care of PyTorch, the most important part, so you only need to install a few more Python packages to run ComfyUI.

docker pull rocm/pytorch:rocm6.4.4_ubuntu24.04_py3.12_pytorch_release_2.7.1

mkdir -p storage

docker run -it --rm \
  --name comfyui-rocm6 \
  --device=/dev/kfd --device=/dev/dri \
  --group-add=video --ipc=host --cap-add=SYS_PTRACE \
  --security-opt seccomp=unconfined \
  --security-opt label=disable \
  -p 8188:8188 \
  --user root \
  --workdir /root/workdir \
  -v "$(pwd)"/storage:/root/workdir \
  rocm/pytorch:rocm6.4.4_ubuntu24.04_py3.12_pytorch_release_2.7.1 \
  /bin/bash

git clone https://github.com/Comfy-Org/ComfyUI.git

pip install -r ComfyUI/requirements.txt
# Or:
# conda install --yes --file ComfyUI/requirements.txt

python ComfyUI/main.py --listen --port 8188
# Or:
# python3 ComfyUI/main.py --listen --port 8188

Additional notes for Windows users

(Just side notes. Nothing to do with this Docker image)

WSL2 supports ROCm and DirectML:

  • ROCm

  • DirectML

  • ZLUDA

    • This does not use WSL2; it runs natively on Windows. ZLUDA can "translate" CUDA code to run on AMD GPUs. As a first step, I recommend trying SD-WebUI with ZLUDA, as it's easier to start with.