This page describes how to install and use our release artifacts for ROCm and external builds like PyTorch and JAX. We produce build artifacts as part of our Continuous Integration (CI) build/test workflows as well as release artifacts as part of Continuous Delivery (CD) nightly releases.
For the development status of GPU architecture support in TheRock, please see SUPPORTED_GPUS.md which tracks release readiness for each AMD GPU architecture.
Important
These instructions assume familiarity with how to use ROCm. Please see https://rocm.docs.amd.com/ for general information about the ROCm software platform.
Prerequisites:
- We recommend installing the latest AMDGPU driver on Linux and Adrenalin driver on Windows
- Linux users should review "Configuring permissions for GPU access", which is needed for ROCm
Table of contents:
- Multi-arch releases
- Per-family releases
- Verifying your installation
Important
We are introducing multi-arch releases with #3323. Rather than build ROCm for GPU family subsets like the per-family releases, these multi-arch releases build all GPU architectures together and split GPU-specific code (kernel packs) from architecture-neutral host code as a packaging step.
This new setup will streamline package installation, so please note the differences in the install instructions.
Key differences from per-family releases:
- One index URL for all GPUs: select your target with a pip extra like `[device-gfx942]` instead of finding a per-family index URL
- Broader GPU support: adding support for a new GPU target is just one more device package, so more GPUs can be supported without impacting build times or download sizes for other targets
- Smaller downloads: kernel downloads can be scoped to a single GPU instead of always being scoped to a family or "all"
Package availability:
| Package type | Linux | Windows |
|---|---|---|
| ROCm Python packages | ✅ Available | ✅ Available |
| PyTorch Python packages | ✅ Available | ✅ Available |
| JAX Python packages | 🟠 Planned | - |
| ROCm tarballs | ✅ Available | ✅ Available |
| Native Linux packages | ✅ Available | 🟠 Planned (#1987) |
Nightly releases of ROCm and related Python packages are published to a unified index at https://rocm.nightlies.amd.com/whl-multi-arch/.
Tip
We highly recommend working within a Python virtual environment:
```shell
python -m venv .venv
source .venv/bin/activate
```

Multiple virtual environments can be present on a system at a time, allowing you to switch between them at will.
Warning
If you really want a system-wide install, you can pass --break-system-packages to pip outside a virtual environment.
In this case, command-line interface shims for executables are installed to /usr/local/bin, which normally takes precedence over /usr/bin and might therefore conflict with a previous installation of ROCm.
We provide several Python packages which together form the complete ROCm SDK.
In multi-arch releases, GPU-specific device code is split into separate `rocm-sdk-device-{target}` packages.

- See ROCm Python Packaging via TheRock for information about each package.
- The packages are defined in the `build_tools/packaging/python/templates/` directory.
| Package name | Description |
|---|---|
| `rocm` | Primary sdist meta package that dynamically determines other deps |
| `rocm-sdk-core` | OS-specific core of the ROCm SDK (e.g. compiler and utility tools) |
| `rocm-sdk-libraries` | OS-specific libraries (architecture-neutral host code) |
| `rocm-sdk-device-{target}` | GPU-specific device code (e.g. `rocm-sdk-device-gfx942`) |
| `rocm-sdk-devel` | OS-specific development tools |
Install ROCm with device support for your GPU using the unified index:
```shell
# Replace device-gfx942 with your GPU, see the section below for details
pip install --index-url https://rocm.nightlies.amd.com/whl-multi-arch/ "rocm[libraries,device-gfx942]"
```

After installing, verify your installation:
```shell
rocm-sdk test
```

For packages which include device-specific code (such as rocm, torch, and
torchvision), support for individual devices can be installed using the
corresponding device-* extra from the table below. See also the
GPU architecture specs
for a full list of supported AMD GPUs.
| Product Name | GFX Target | Device Extra |
|---|---|---|
| AMD Instinct MI355X / MI350X | gfx950 | device-gfx950 |
| AMD Instinct MI325X / MI300X / MI300A | gfx942 | device-gfx942 |
| AMD Instinct MI250X / MI250 / MI210 | gfx90a | device-gfx90a |
| AMD Instinct MI100 | gfx908 | device-gfx908 |
| AMD Instinct MI60 / MI50, Radeon Pro VII, Radeon VII | gfx906 | device-gfx906 |
| AMD Instinct MI25 | gfx900 | device-gfx900 |
| AMD Radeon RX 9070 / XT, AI PRO R9700 / R9600D | gfx1201 | device-gfx1201 |
| AMD Radeon RX 9060 / XT | gfx1200 | device-gfx1200 |
| AMD Radeon 820M iGPU | gfx1153 | device-gfx1153 |
| AMD Ryzen AI 7 350 | gfx1152 | device-gfx1152 |
| AMD Ryzen AI Max+ PRO 395 | gfx1151 | device-gfx1151 |
| AMD Ryzen AI 9 HX 375 | gfx1150 | device-gfx1150 |
| AMD Ryzen 7 7840U / Ryzen 9 270 | gfx1103 | device-gfx1103 |
| AMD Radeon RX 7600 | gfx1102 | device-gfx1102 |
| AMD Radeon RX 7800 XT / 7700 XT, PRO V710 / W7700 | gfx1101 | device-gfx1101 |
| AMD Radeon RX 7900 XTX / 7900 XT, PRO W7900 / W7800 | gfx1100 | device-gfx1100 |
| AMD Radeon RX 6900 XT / 6800 XT, PRO W6800 / V620 | gfx1030 | device-gfx1030 |
| AMD Radeon RX 6750 XT / 6700 XT | gfx1031 | device-gfx1031 |
| AMD Radeon RX 6600 XT / 6600, PRO W6600 | gfx1032 | device-gfx1032 |
| AMD Van Gogh iGPU | gfx1033 | device-gfx1033 |
| AMD Radeon RX 6500 XT | gfx1034 | device-gfx1034 |
| AMD Radeon 680M iGPU | gfx1035 | device-gfx1035 |
| AMD Raphael iGPU | gfx1036 | device-gfx1036 |
| AMD Radeon RX 5700 / XT | gfx1010 | device-gfx1010 |
| AMD Radeon Pro V520 | gfx1011 | device-gfx1011 |
| AMD Radeon Pro W5500 | gfx1012 | device-gfx1012 |
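The mapping from table rows to install commands is mechanical: the device extra is just `device-` plus the gfx target. A minimal sketch (a hypothetical helper, with only a subset of the table hard-coded):

```python
# Map a few products from the table above to their gfx targets (subset only,
# for illustration; see the full table for all supported GPUs).
GFX_TARGETS = {
    "AMD Instinct MI300X": "gfx942",
    "AMD Radeon RX 7900 XTX": "gfx1100",
    "AMD Ryzen AI Max+ PRO 395": "gfx1151",
}

def rocm_requirement(product: str) -> str:
    """Build the pip requirement string for a product's device extra."""
    target = GFX_TARGETS[product]
    return f"rocm[libraries,device-{target}]"

print(rocm_requirement("AMD Radeon RX 7900 XTX"))
# rocm[libraries,device-gfx1100]
```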
A [device-all] extra is also provided which installs device code for all GPUs.
Warning
The [device-all] extra may not work consistently for nightly releases because
packages are promoted per-target as they pass tests. If tests are still
running or if they failed for an individual target, this extra will not be
able to find all required packages.
We also publish untested packages to the nightly "whl-staging-multi-arch" index which is not affected by this limitation.
| Package index | Safe to use [device-all]? |
|---|---|
| https://rocm.nightlies.amd.com/whl-multi-arch/ | ❌ No (some packages may not be available) |
| https://rocm.nightlies.amd.com/whl-staging-multi-arch/ | ✅ Yes (index includes all packages, even if tests fail) |
Install PyTorch with ROCm support using the same unified index:
```shell
# Replace device-gfx942 with your GPU, see the section above for details
# Note: we'll recommend 'whl-multi-arch' instead of 'whl-staging-multi-arch'
# as soon as we run automated tests on these packages
pip install --index-url https://rocm.nightlies.amd.com/whl-staging-multi-arch/ \
    "torch[device-gfx942]" "torchvision[device-gfx942]" torchaudio
# Optional additional packages on Linux:
# apex
```

Tip
The device extras install GPU-specific packages like amd-torch-device-gfx1100
which contain GPU-specific kernels and depend on rocm-sdk-device-gfx1100.
The compatible ROCm packages are installed automatically; you do not need to
install ROCm separately:
```shell
pip install --index-url https://rocm.nightlies.amd.com/whl-staging-multi-arch/ \
    "torch[device-gfx1100]"
pip freeze  # with approximate download sizes:
# rocm-sdk-core==7.13.0a...             ~700 MB
# rocm-sdk-libraries==7.13.0a...        ~100 MB (host code, shared across GPUs)
# rocm-sdk-device-gfx1100==7.13.0a...    ~50 MB (only gfx1100 device code)
# torch==2.11.0+rocm...                 ~100 MB (host code, shared across GPUs)
# amd-torch-device-gfx1100==2.11.0+...   ~50 MB (only gfx1100 device code)
# Total: ~1.1 GB
#
# For comparison, a similar per-family (non-multi-arch) torch wheel for
# gfx110X-all [gfx1100, gfx1101, gfx1102, gfx1103] is ~600 MB.
```

After installing, verify PyTorch can see your GPU:
```python
import torch
print(torch.cuda.is_available())
# True
print(torch.cuda.get_device_name(0))
# e.g. AMD Radeon Pro W7900 Dual Slot
```

See external-builds/pytorch/README.md for more details on supported PyTorch versions and building from source.
Standalone "ROCm SDK tarballs" are a flattened view of ROCm
artifacts matching the familiar folder
structure seen with system installs on Linux to /opt/rocm/ or on Windows via
the HIP SDK:
```
install/
  .kpack/   # GPU-specific kernel packs (multi-arch only)
  bin/
  clients/
  include/
  lib/
  libexec/
  share/
```

Tarballs are just these raw files. They do not come with "install" steps such as setting environment variables.
Multi-arch tarballs separate GPU-specific kernel code into a .kpack/
directory. Two variants are available:
- Per-family tarballs (e.g. `therock-dist-linux-gfx110X-all-7.13.0a20260430.tar.gz`) that include `.kpack` files only for one family.
- Multi-arch tarballs (e.g. `therock-dist-linux-multiarch-7.13.0a20260430.tar.gz`) that include `.kpack` files for all supported targets.
Browse and download tarballs from https://rocm.nightlies.amd.com/tarball-multi-arch/.
To download and extract:
```shell
mkdir therock-tarball && cd therock-tarball
# Per-family (smaller, one GPU family):
wget https://rocm.nightlies.amd.com/tarball-multi-arch/therock-dist-linux-gfx110X-all-7.13.0a20260430.tar.gz
# Or multiarch (all GPUs):
wget https://rocm.nightlies.amd.com/tarball-multi-arch/therock-dist-linux-multiarch-7.13.0a20260430.tar.gz
mkdir install && tar -xf *.tar.gz -C install
```

After extraction, test the install:

```shell
./install/bin/rocminfo
ls install/.kpack/
# blas_lib_gfx1100.kpack  fft_lib_gfx1100.kpack  rand_lib_gfx1100.kpack ...
```

Tip
You may also want to add parts of the install directory to your PATH or set
other environment variables like ROCM_HOME.
See also this issue discussing relevant environment variables.
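For example, assuming the tarball was extracted to `./install` as above, a minimal setup might look like this (whether individual tools honor `ROCM_HOME` varies, so treat it as a starting point):

```shell
# Add the extracted tools to PATH and point ROCM_HOME at the install root.
# Adjust the path if you extracted elsewhere.
export ROCM_HOME="$PWD/install"
export PATH="$ROCM_HOME/bin:$PATH"
```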
In addition to Python wheels and tarballs, ROCm native Linux packages are published for Debian-based and RPM-based distributions via the multi-arch pipeline.
Warning
These builds are primarily intended for development and testing and are currently unsigned.
Multi-arch native packages use a simplified package model compared to the per-family native packages:
| Package name | Description |
|---|---|
amdrocm |
Installs all base ROCm libraries and runtime support for all supported GPU architectures |
amdrocm-core-sdk |
Installs the full ROCm SDK including runtime, development tools, and headers for all supported GPU architectures |
Tip
To find the latest available release, browse the index pages:
- Debian packages: https://rocm.nightlies.amd.com/packages-multi-arch/deb/
- RPM packages: https://rocm.nightlies.amd.com/packages-multi-arch/rpm/
Look for directories in the format YYYYMMDD-<action-run-id>
(e.g., 20260501-25200531110) and use the latest in the commands below.
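Because the directory names are fixed-width `YYYYMMDD-<run-id>` strings, picking the newest one can be scripted; a sketch with made-up directory names:

```python
import re

def latest_release(dir_names):
    """Return the newest YYYYMMDD-<run-id> directory name, or None."""
    pattern = re.compile(r"^\d{8}-\d+$")
    # Lexicographic sort works because the date prefix is fixed-width.
    candidates = sorted(name for name in dir_names if pattern.match(name))
    return candidates[-1] if candidates else None

# Hypothetical directory listing:
print(latest_release(["20260430-25100000000", "20260501-25200531110", "index.html"]))
# 20260501-25200531110
```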
```shell
# Step 1: Find the latest release from
# https://rocm.nightlies.amd.com/packages-multi-arch/deb/
# Look for directories like "20260501-25200531110"
# Step 2: Set the variable below
export RELEASE_ID=20260501-25200531110  # Replace with the latest date-runid
# Step 3: Add repository and install
sudo apt update
sudo apt install -y ca-certificates
echo "deb [trusted=yes] https://rocm.nightlies.amd.com/packages-multi-arch/deb/${RELEASE_ID} stable main" \
  | sudo tee /etc/apt/sources.list.d/rocm-multiarch-nightly.list
sudo apt update
# Install base runtime for all supported GPU architectures:
sudo apt install amdrocm
# Or install the full SDK (runtime + dev tools + headers) for all supported GPU architectures:
sudo apt install amdrocm-core-sdk
```

```shell
# Step 1: Find the latest release from
# https://rocm.nightlies.amd.com/packages-multi-arch/rpm/
# Look for directories like "20260501-25200531110"
# Step 2: Set the variable below
export RELEASE_ID=20260501-25200531110  # Replace with the latest date-runid
# Step 3: Add repository and install
sudo dnf install -y ca-certificates
sudo tee /etc/yum.repos.d/rocm-multiarch-nightly.repo <<EOF
[rocm-multiarch-nightly]
name=ROCm Multi-Arch Nightly Repository
baseurl=https://rocm.nightlies.amd.com/packages-multi-arch/rpm/${RELEASE_ID}/x86_64
enabled=1
gpgcheck=0
priority=50
EOF
sudo dnf clean all
# Install base runtime for all supported GPU architectures:
sudo dnf install amdrocm
# Or install the full SDK (runtime + dev tools + headers) for all supported GPU architectures:
sudo dnf install amdrocm-core-sdk
```

Note
To install support for a specific GPU architecture only, you can use the
per-arch package variant (e.g., apt install amdrocm-gfx942 or dnf install amdrocm-gfx942). For a full list of
supported GPU targets and their identifiers, see
Supported Python [device-*] install extras.
Per-family releases use GPU-family-specific index URLs — you choose the index URL that matches your GPU family, and all packages for that family are served from that URL.
Note
Multi-arch releases (above) are the newer approach and will soon replace per-family releases. Both are available during the transition.
We recommend installing ROCm and projects like PyTorch and JAX via pip, the
Python package installer.
We currently support Python 3.10, 3.11, 3.12, 3.13, and 3.14 (PyTorch 2.9+ only).
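A quick sanity check that your interpreter falls in that range (the version list mirrors the sentence above; remember that Python 3.14 requires PyTorch 2.9+):

```python
import sys

# Supported major.minor versions, per the text above.
SUPPORTED = {(3, 10), (3, 11), (3, 12), (3, 13), (3, 14)}

def python_is_supported(version_info=sys.version_info):
    """True if the interpreter's major.minor is in the supported set."""
    return tuple(version_info[:2]) in SUPPORTED

print(python_is_supported((3, 12, 0)))
# True
```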
Tip
We highly recommend working within a Python virtual environment:
```shell
python -m venv .venv
source .venv/bin/activate
```

Multiple virtual environments can be present on a system at a time, allowing you to switch between them at will.
Warning
If you really want a system-wide install, you can pass --break-system-packages to pip outside a virtual environment.
In this case, command-line interface shims for executables are installed to /usr/local/bin, which normally takes precedence over /usr/bin and might therefore conflict with a previous installation of ROCm.
Important
Known issues with the Python wheels are tracked at #808.
| Platform | ROCm Python packages | PyTorch Python packages | JAX Python packages |
|---|---|---|---|
| Linux | |||
| Windows | — |
For now, rocm, torch, and jax packages are published to GPU-architecture-specific index
pages and must be installed using an appropriate --find-links argument to pip.
They may later be pushed to the
Python Package Index (PyPI) or other channels using a process
like https://wheelnext.dev/. Please check back regularly
as these instructions will change as we migrate to official indexes and adjust
project layouts.
| Product Name | GFX Target | GFX Family | Install instructions |
|---|---|---|---|
| MI300A/MI300X | gfx942 | gfx94X-dcgpu | rocm // torch // jax |
| MI350X/MI355X | gfx950 | gfx950-dcgpu | rocm // torch // jax |
| AMD RX 7900 XTX | gfx1100 | gfx110X-all | rocm // torch // jax |
| AMD RX 7800 XT | gfx1101 | gfx110X-all | rocm // torch // jax |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 | gfx110X-all | rocm // torch // jax |
| AMD Radeon 780M Laptop iGPU | gfx1103 | gfx110X-all | rocm // torch // jax |
| AMD Strix Halo iGPU | gfx1151 | gfx1151 | rocm // torch // jax |
| AMD RX 9060 / XT | gfx1200 | gfx120X-all | rocm // torch // jax |
| AMD RX 9070 / XT | gfx1201 | gfx120X-all | rocm // torch // jax |
We provide several Python packages which together form the complete ROCm SDK.
- See ROCm Python Packaging via TheRock for information about each package.
- The packages are defined in the `build_tools/packaging/python/templates/` directory.
| Package name | Description |
|---|---|
| `rocm` | Primary sdist meta package that dynamically determines other deps |
| `rocm-sdk-core` | OS-specific core of the ROCm SDK (e.g. compiler and utility tools) |
| `rocm-sdk-libraries` | OS-specific libraries |
| `rocm-sdk-devel` | OS-specific development tools |
A new optional package rocm-profiler is available, providing ROCm profiling tools:
- ROCm Systems Profiler (rocprofiler-systems)
- ROCm Compute Profiler (rocprofiler-compute)
Install profiling tools via the meta package:
```shell
pip install "rocm[profiler]"
```

This will install:

- `rocm-sdk-core` (required runtime + SDK)
- `rocm-profiler` (profiling tools)
Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI300A/MI300X | gfx942 |
Install instructions:
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/ "rocm[libraries,devel]"
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI350X/MI355X | gfx950 |
Install instructions:
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx950-dcgpu/ "rocm[libraries,devel]"
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 7900 XTX | gfx1100 |
| AMD RX 7800 XT | gfx1101 |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 |
| AMD Radeon 780M Laptop iGPU | gfx1103 |
Install instructions:
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ "rocm[libraries,devel]"
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD Strix Halo iGPU | gfx1151 |
Install instructions:
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ "rocm[libraries,devel]"
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 9060 / XT | gfx1200 |
| AMD RX 9070 / XT | gfx1201 |
Install instructions:
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ "rocm[libraries,devel]"
```

After installing the ROCm Python packages, you should see them in your environment:
```shell
pip freeze | grep rocm
# rocm==6.5.0rc20250610
# rocm-sdk-core==6.5.0rc20250610
# rocm-sdk-devel==6.5.0rc20250610
# rocm-sdk-libraries-gfx110X-all==6.5.0rc20250610
```

You should also see various tools on your PATH and in the bin directory:
```shell
which rocm-sdk
# .../.venv/bin/rocm-sdk
ls .venv/bin
# activate       amdclang++    hipcc      python                 rocm-sdk
# activate.csh   amdclang-cl   hipconfig  python3                rocm-smi
# activate.fish  amdclang-cpp  pip        python3.12             roc-obj
# Activate.ps1   amdflang      pip3       rocm_agent_enumerator  roc-obj-extract
# amdclang       amdlld        pip3.12    rocminfo               roc-obj-ls
```

The rocm-sdk tool can be used to inspect and test the installation:
```shell
$ rocm-sdk --help
usage: rocm-sdk {command} ...

ROCm SDK Python CLI

positional arguments:
  {path,test,version,targets,init}
    path     Print various paths to ROCm installation
    test     Run installation tests to verify integrity
    version  Print version information
    targets  Print information about the GPU targets that are supported
    init     Expand devel contents to initialize rocm[devel]

$ rocm-sdk test
...
Ran 22 tests in 8.284s
OK

$ rocm-sdk targets
gfx1100;gfx1101;gfx1102
```

To initialize the rocm[devel] package, use the rocm-sdk tool to eagerly expand development contents:

```shell
$ rocm-sdk init
Devel contents expanded to '.venv/lib/python3.12/site-packages/_rocm_sdk_devel'
```

These contents are useful when using the package outside of Python; from Python, they are lazily expanded on first use.
Once you have verified your installation, you can continue to use it for standard ROCm development or install PyTorch, JAX, or another supported Python ML framework.
Using the index pages listed above, you can
also install torch, torchaudio, torchvision, and apex.
Note
By default, pip will install the latest stable versions of each package.
- If you want to allow installing prerelease versions, use the `--pre` flag
- If you want to install other versions, take note of the compatibility matrix:

  | torch version | torchaudio version | torchvision version | apex version |
  |---|---|---|---|
  | 2.10 | 2.10 | 0.25 | 1.10.0 |
  | 2.9 | 2.9 | 0.24 | 1.9.0 |
  | 2.8 | 2.8 | 0.23 | 1.8.0 |

  For example, torch 2.8 and compatible wheels can be installed by specifying `torch==2.8 torchaudio==2.8 torchvision==0.23 apex==1.8.0`

See also
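Pinning a consistent set of versions can also be scripted; a minimal sketch with the compatibility matrix hard-coded (the helper name is hypothetical):

```python
# Compatibility matrix from the table above:
# torch -> (torchaudio, torchvision, apex)
COMPAT = {
    "2.10": ("2.10", "0.25", "1.10.0"),
    "2.9": ("2.9", "0.24", "1.9.0"),
    "2.8": ("2.8", "0.23", "1.8.0"),
}

def pinned_specs(torch_version: str) -> list[str]:
    """Return pip requirement specifiers for a consistent torch stack."""
    audio, vision, apex = COMPAT[torch_version]
    return [
        f"torch=={torch_version}",
        f"torchaudio=={audio}",
        f"torchvision=={vision}",
        f"apex=={apex}",
    ]

print(" ".join(pinned_specs("2.8")))
# torch==2.8 torchaudio==2.8 torchvision==0.23 apex==1.8.0
```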
Warning
The torch packages depend on rocm[libraries], so the compatible ROCm packages
are installed automatically for you; you do not need to install ROCm
explicitly first. If ROCm is already installed, this may result in a downgrade
when the torch wheel to be installed requires a different version.
Tip
If you previously installed PyTorch with the pytorch-triton-rocm package,
please uninstall it before installing the new packages:
```shell
pip uninstall pytorch-triton-rocm
```

The Triton package is now named just triton.
Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI300A/MI300X | gfx942 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/ torch torchaudio torchvision
# Optional additional packages on Linux:
# apex
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI350X/MI355X | gfx950 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx950-dcgpu/ torch torchaudio torchvision
# Optional additional packages on Linux:
# apex
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 7900 XTX | gfx1100 |
| AMD RX 7800 XT | gfx1101 |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 |
| AMD Radeon 780M Laptop iGPU | gfx1103 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ torch torchaudio torchvision
# Optional additional packages on Linux:
# apex
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD Strix Halo iGPU | gfx1151 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ torch torchaudio torchvision
# Optional additional packages on Linux:
# apex
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 9060 / XT | gfx1200 |
| AMD RX 9070 / XT | gfx1201 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ torch torchaudio torchvision
# Optional additional packages on Linux:
# apex
```

After installing the torch package with ROCm support, PyTorch can be used normally:
```python
import torch
print(torch.cuda.is_available())
# True
print(torch.cuda.get_device_name(0))
# e.g. AMD Radeon Pro W7900 Dual Slot
```

See also the Testing the PyTorch installation instructions in the AMD ROCm documentation.
Using the index pages listed above, you can
also install jaxlib, jax_rocm7_plugin, and jax_rocm7_pjrt.
Note
By default, pip will install the latest stable versions of each package.
- If you want to install other versions, the currently supported versions are:

  | jax version | jaxlib version |
  |---|---|
  | 0.9.1 | 0.9.1 (upstream) |
  | 0.8.2 | 0.8.2 |
  | 0.8.0 | 0.8.0 |

See also
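As with PyTorch, matching jax and jaxlib versions can be checked programmatically; a sketch with the pairs from the table above hard-coded (the "(upstream)" note is dropped here for simplicity):

```python
# jax -> jaxlib version pairs from the compatibility table above.
JAX_COMPAT = {"0.9.1": "0.9.1", "0.8.2": "0.8.2", "0.8.0": "0.8.0"}

def versions_match(jax_version: str, jaxlib_version: str) -> bool:
    """True if the jax/jaxlib pairing is a supported combination."""
    return JAX_COMPAT.get(jax_version) == jaxlib_version

print(versions_match("0.8.2", "0.8.2"))
# True
```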
Warning
Unlike PyTorch, the JAX wheels do not automatically install rocm[libraries]
as a dependency. You must have ROCm installed separately via a
tarball installation.
Important
The jax package itself is not published to the TheRock index.
After installing jaxlib, jax_rocm7_plugin, and jax_rocm7_pjrt from the
GPU-family index, install jax from PyPI:
```shell
pip install jax
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI300A/MI300X | gfx942 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx94X-dcgpu/ jaxlib jax_rocm7_plugin jax_rocm7_pjrt
# Install jax from PyPI
pip install jax
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| MI350X/MI355X | gfx950 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx950-dcgpu/ jaxlib jax_rocm7_plugin jax_rocm7_pjrt
# Install jax from PyPI
pip install jax
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 7900 XTX | gfx1100 |
| AMD RX 7800 XT | gfx1101 |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 |
| AMD Radeon 780M Laptop iGPU | gfx1103 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/ jaxlib jax_rocm7_plugin jax_rocm7_pjrt
# Install jax from PyPI
pip install jax
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD Strix Halo iGPU | gfx1151 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx1151/ jaxlib jax_rocm7_plugin jax_rocm7_pjrt
# Install jax from PyPI
pip install jax
```

Supported devices in this family:
| Product Name | GFX Target |
|---|---|
| AMD RX 9060 / XT | gfx1200 |
| AMD RX 9070 / XT | gfx1201 |
```shell
pip install --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/ jaxlib jax_rocm7_plugin jax_rocm7_pjrt
# Install jax from PyPI
pip install jax
```

After installing the JAX packages with ROCm support, JAX can be used normally:
```python
import jax
print(jax.devices())
# [RocmDevice(id=0)]
```

For building JAX from source or running the full JAX test suite, see the external-builds/jax README.
Standalone "ROCm SDK tarballs" are a flattened view of ROCm
artifacts matching the familiar folder
structure seen with system installs on Linux to /opt/rocm/ or on Windows via
the HIP SDK:
```
install/   # Extracted tarball location, file path of your choosing
  .info/
  bin/
  clients/
  include/
  lib/
  libexec/
  share/
```

Tarballs are just these raw files. They do not come with "install" steps such as setting environment variables.
Warning
Tarballs and per-commit CI artifacts are primarily intended for developers and CI workflows.
For most users, we recommend installing via package managers.
Release tarballs are uploaded to the following locations:
| Tarball index | S3 bucket | Description |
|---|---|---|
| https://repo.amd.com/rocm/tarball/ | (not publicly accessible) | Stable releases |
| https://rocm.nightlies.amd.com/tarball/ | `therock-nightly-tarball` | Nightly builds from the default development branch |
| https://rocm.prereleases.amd.com/tarball/ | (not publicly accessible) | |
| https://rocm.devreleases.amd.com/tarball/ | `therock-dev-tarball` | |
To download a tarball and extract it into place manually:
```shell
mkdir therock-tarball && cd therock-tarball
# For example...
wget https://rocm.nightlies.amd.com/tarball/therock-dist-linux-gfx110X-all-7.12.0a20260202.tar.gz
mkdir install && tar -xf *.tar.gz -C install
```

For more control over artifact installation (including per-commit CI builds, specific release versions, the latest nightly release, and component selection), see the
Installing Artifacts developer
documentation. The
install_rocm_from_artifacts.py
script can be used to install artifacts from a variety of sources.
After installing (downloading and extracting) a tarball, you can test it by
running programs from the bin/ directory:
```shell
ls install
# bin  include  lib  libexec  llvm  share
# Now test some of the installed tools:
./install/bin/rocminfo
./install/bin/test_hip_api
```

Tip
You may also want to add parts of the install directory to your PATH or set
other environment variables like ROCM_HOME.
See also this issue discussing relevant environment variables.
Tip
After extracting a tarball, metadata about which commits were used to build
TheRock can be found in the share/therock/therock_manifest.json file:
```shell
cat install/share/therock/therock_manifest.json
# {
#   "the_rock_commit": "567dd890a3bc3261ffb26ae38b582378df298374",
#   "submodules": [
#     {
#       "submodule_name": "half",
#       "submodule_path": "base/half",
#       "submodule_url": "https://github.com/ROCm/half.git",
#       "pin_sha": "207ee58595a64b5c4a70df221f1e6e704b807811",
#       "patches": []
#     },
#     ...
```

In addition to Python wheels and tarballs, ROCm native Linux packages are published for Debian-based and RPM-based distributions.
Warning
These builds are primarily intended for development and testing and are currently unsigned.
| Platform | Native packages |
|---|---|
| Linux | |
| Windows | (Coming soon) |
| Product Name | GFX Target | GFX Family | Runtime Package | Development Package |
|---|---|---|---|---|
| MI300A/MI300X | gfx942 | gfx94X | amdrocm-gfx94x | amdrocm-core-sdk-gfx94x |
| MI350X/MI355X | gfx950 | gfx950 | amdrocm-gfx950 | amdrocm-core-sdk-gfx950 |
| AMD RX 7900 XTX | gfx1100 | gfx110x | amdrocm-gfx110x | amdrocm-core-sdk-gfx110x |
| AMD RX 7800 XT | gfx1101 | gfx110x | amdrocm-gfx110x | amdrocm-core-sdk-gfx110x |
| AMD RX 7700S / Framework Laptop 16 | gfx1102 | gfx110x | amdrocm-gfx110x | amdrocm-core-sdk-gfx110x |
| AMD Radeon 780M Laptop iGPU | gfx1103 | gfx110x | amdrocm-gfx110x | amdrocm-core-sdk-gfx110x |
| AMD Strix Point iGPU | gfx1150 | gfx1150 | amdrocm-gfx1150 | amdrocm-core-sdk-gfx1150 |
| AMD Strix Halo iGPU | gfx1151 | gfx1151 | amdrocm-gfx1151 | amdrocm-core-sdk-gfx1151 |
| AMD Fire Range iGPU | gfx1152 | gfx1152 | amdrocm-gfx1152 | amdrocm-core-sdk-gfx1152 |
| AMD Strix Halo XT | gfx1153 | gfx1153 | amdrocm-gfx1153 | amdrocm-core-sdk-gfx1153 |
| AMD RX 9060 / XT | gfx1200 | gfx120X | amdrocm-gfx120x | amdrocm-core-sdk-gfx120x |
| AMD RX 9070 / XT | gfx1201 | gfx120X | amdrocm-gfx120x | amdrocm-core-sdk-gfx120x |
| Radeon VII | gfx906 | gfx906 | amdrocm-gfx906 | amdrocm-core-sdk-gfx906 |
| MI100 | gfx908 | gfx908 | amdrocm-gfx908 | amdrocm-core-sdk-gfx908 |
| MI200 series | gfx90a | gfx90a | amdrocm-gfx90a | amdrocm-core-sdk-gfx90a |
| AMD RX 5700 XT | gfx1010 | gfx101x | amdrocm-gfx101x | amdrocm-core-sdk-gfx101x |
| AMD RX 6900 XT | gfx1030 | gfx103x | amdrocm-gfx103x | amdrocm-core-sdk-gfx103x |
| AMD RX 6800 XT | gfx1031 | gfx103x | amdrocm-gfx103x | amdrocm-core-sdk-gfx103x |
Tip
To find the latest available release:
- Step 1: Browse the index pages:
- Debian packages: https://rocm.nightlies.amd.com/deb/
- RPM packages: https://rocm.nightlies.amd.com/rpm/
- Step 2: Look for directories in the format
YYYYMMDD-<action-run-id>(e.g.,20260310-12345678) - Step 3: Use the latest date in the installation commands below
```shell
# Step 1: Find the latest release from https://rocm.nightlies.amd.com/deb/
# Look for directories like "20260310-12345678"
# Step 2: Look at the "GPU family and package mapping" table above to find
# the GFX Family for your GPU (e.g., gfx94x, gfx110x, gfx1151)
# Step 3: Set the variables below
export RELEASE_ID=20260310-12345678  # Replace with actual date-runid
export GFX_ARCH=gfx110x              # Replace with GFX Family from the mapping table
# Step 4: Add repository and install
sudo apt update
sudo apt install -y ca-certificates
echo "deb [trusted=yes] https://rocm.nightlies.amd.com/deb/${RELEASE_ID} stable main" \
  | sudo tee /etc/apt/sources.list.d/rocm-nightly.list
sudo apt update
sudo apt install amdrocm-core-sdk-${GFX_ARCH}
# If only runtime is needed, install amdrocm-${GFX_ARCH} instead
```

Note
The following instructions are for RHEL-based operating systems.
```shell
# Step 1: Find the latest release from https://rocm.nightlies.amd.com/rpm/
# Look for directories like "20260310-12345678"
# Step 2: Look at the "GPU family and package mapping" table above to find
# the GFX Family for your GPU (e.g., gfx94x, gfx110x, gfx1151)
# Step 3: Set the variables below
export RELEASE_ID=20260310-12345678  # Replace with actual date-runid
export GFX_ARCH=gfx110x              # Replace with GFX Family from the mapping table
# Step 4: Add repository and install
sudo dnf install -y ca-certificates
sudo tee /etc/yum.repos.d/rocm-nightly.repo <<EOF
[rocm-nightly]
name=ROCm Nightly Repository
baseurl=https://rocm.nightlies.amd.com/rpm/${RELEASE_ID}/x86_64
enabled=1
gpgcheck=0
priority=50
EOF
sudo dnf clean all
sudo dnf install amdrocm-core-sdk-${GFX_ARCH}
# If only runtime is needed, install amdrocm-${GFX_ARCH} instead
```

After installing ROCm via any of the methods above, you can verify that your GPU is properly recognized.
GPU status on Linux can be checked via either:
```shell
rocminfo
# or
amd-smi
```

GPU status on Windows can be checked via:

```shell
hipInfo.exe
```

If your GPU is not recognized or you encounter issues:
- Linux users: Check system logs using `dmesg | grep amdgpu` for specific error messages
dmesg | grep amdgpufor specific error messages - Review memory allocation settings (see the FAQ for GTT configuration on unified memory systems)
- Ensure you have the latest AMDGPU driver on Linux or Adrenalin driver on Windows
- For platform-specific troubleshooting when using PyTorch or JAX, see: