Pumba can stress-test container resources (CPU, memory, I/O) by running a sidecar container with stress-ng and placing it in the target container's cgroup hierarchy. For container chaos commands, see the User Guide. For network chaos, see Network Chaos.
Pumba offers two modes for cgroup placement:

- Default mode uses Docker's `--cgroup-parent` to place the stress-ng sidecar relative to the target container. With the cgroupfs driver, this creates a child cgroup under the target, so stress-ng shares the target's resource limits. With the systemd driver, Docker cannot nest under another container's scope, so stress-ng is placed as a sibling in `system.slice` and does not inherit the target's resource limits.
- Inject-cgroup mode (`--inject-cgroup`) uses the `cg-inject` binary to write the stress-ng PID directly into the target's `cgroup.procs`, placing stress-ng in the exact same cgroup regardless of driver. See Same-Cgroup Injection Mode.
```mermaid
graph LR
    P[Pumba] -->|starts| S[stress-ng sidecar]
    S -.->|--cgroup-parent| T[target container cgroup]
    T -->|stress-ng processes| R[CPU / Memory / I/O]
```
This approach works with both cgroups v1 and cgroups v2, and supports all Docker cgroup drivers (cgroupfs, systemd).
Note: For target-scoped stress on systemd-based Docker hosts, use `--inject-cgroup` mode. The default mode with the systemd driver places the sidecar as a sibling cgroup, not under the target's limits.
```
pumba stress [options] CONTAINERS
```

Run `pumba stress --help` for the full list of options.
| Flag | Default | Description |
|---|---|---|
| `--duration, -d` | (required) | Stress duration; use unit suffix: ms/s/m/h |
| `--stress-image` | `ghcr.io/alexei-led/stress-ng:latest` | Docker image with stress-ng tool |
| `--pull-image` | `true` | Pull the stress image from the registry before use |
| `--stressors` | `--cpu 4 --timeout 60s` | stress-ng stressors (see stress-ng docs) |
| `--inject-cgroup` | `false` | Inject stress-ng into target container's cgroup (shared resource accounting). See Same-Cgroup Injection Mode |
Note: The `--stressors` flag requires an `=` sign when passing values, e.g. `--stressors="--cpu 4 --timeout 60s"`.
Stress 4 CPU workers for 60 seconds on a container named myapp:
```
pumba stress --duration 60s \
  --stressors="--cpu 4 --timeout 60s" \
  myapp
```

Stress 2 memory workers, each allocating 256MB, for 2 minutes:
```
pumba stress --duration 2m \
  --stressors="--vm 2 --vm-bytes 256M --timeout 120s" \
  myapp
```

Stress 4 I/O workers for 30 seconds:
```
pumba stress --duration 30s \
  --stressors="--io 4 --timeout 30s" \
  myapp
```

Stress CPU and memory simultaneously for 5 minutes:
```
pumba stress --duration 5m \
  --stressors="--cpu 2 --vm 1 --vm-bytes 128M --timeout 300s" \
  myapp
```

Run stress tests every 10 minutes against a random container matching a regex:
```
pumba --interval 10m --random stress --duration 60s \
  --stressors="--cpu 2 --timeout 60s" \
  "re2:^api"
```

Stress all containers with names starting with `worker`:
```
pumba stress --duration 30s \
  --stressors="--cpu 2 --timeout 30s" \
  "re2:^worker"
```

Pumba uses `ghcr.io/alexei-led/stress-ng:latest` by default. This is a minimal `scratch` image containing both the statically linked `stress-ng` binary and the `cg-inject` binary (required for `--inject-cgroup` mode). The image is built and maintained in alexei-led/stress-ng.
If you provide a custom image with `--stress-image`, it must have the stress-ng binary at `/stress-ng` (absolute path). For `--inject-cgroup` mode, it must also include `/cg-inject`. No shell, Docker CLI, or cgroup tools are required.
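Given those requirements, a custom image can be as small as a `scratch` base with the two static binaries copied in. The following is a hypothetical sketch, not the actual upstream Dockerfile; the `COPY` source paths assume you have already built static binaries locally:

```dockerfile
# Hypothetical minimal stress image; binary destinations match the required absolute paths
FROM scratch
COPY stress-ng /stress-ng   # statically linked stress-ng at /stress-ng
COPY cg-inject /cg-inject   # only needed for --inject-cgroup mode
ENTRYPOINT ["/stress-ng"]
```

A `scratch` base satisfies the "no shell, Docker CLI, or cgroup tools" note above, since only the two binaries ship in the image.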
By default, Pumba places the stress-ng sidecar using Docker's `--cgroup-parent`. With the cgroupfs driver, this creates a child cgroup under the target; if stress-ng triggers an OOM kill, only the sidecar is terminated. With the systemd driver, the sidecar is placed as a sibling in `system.slice` and does not share the target's resource limits (see note above).
The `--inject-cgroup` flag enables same-cgroup injection, which places stress-ng processes directly into the target container's cgroup. This creates more realistic chaos because stress-ng shares the exact same resource accounting and OOM scope as the target.
```mermaid
graph LR
    P[Pumba] -->|starts| S[stress-ng sidecar]
    S -->|cg-inject writes PID| T[target container cgroup]
    T -->|shared OOM scope| R[CPU / Memory / I/O]
```
- Realistic resource contention: stress-ng competes for the exact same CPU/memory limits as the target
- OOM testing: trigger OOM kills that affect both the target and stress-ng (shared OOM scope)
- Accurate cgroup accounting: stress-ng resource usage is attributed to the target container in monitoring tools
```
pumba stress --inject-cgroup --duration 60s \
  --stressors="--cpu 4 --timeout 60s" \
  --stress-image myregistry/pumba-stress:latest \
  mycontainer
```

The `--inject-cgroup` mode requires a stress image containing both:
- `/cg-inject`: a minimal binary that writes its PID into the target's cgroup
- `/stress-ng`: the stress-ng binary
The default image `ghcr.io/alexei-led/stress-ng:latest` includes both binaries. The image is built and maintained in alexei-led/stress-ng.
- No privileged mode required
- No Linux capabilities needed (`--cap-drop=ALL` works)
- The sidecar runs with `--cgroupns=host` to access the host cgroup hierarchy
- `/sys/fs/cgroup` is mounted read-write into the sidecar
| Aspect | Child Cgroup (default) | Same-Cgroup (`--inject-cgroup`) |
|---|---|---|
| Flag | (default) | `--inject-cgroup` |
| Cgroup placement | Child of target (cgroupfs) or sibling in `system.slice` (systemd) | Same cgroup as target |
| OOM behavior | OOM kills stress-ng only (cgroupfs); not target-scoped (systemd) | Shared OOM risk: target may be killed |
| Resource accounting | Shared with target (cgroupfs); independent (systemd) | Combined with target |
| Use case | Safe stress testing, CI/CD | Realistic chaos, OOM testing |
| Security | No caps, no special mounts | No caps, needs `--cgroupns=host` + cgroup mount |
Both modes work with cgroups v1 and cgroups v2, and with both cgroupfs and systemd cgroup drivers. Pumba auto-detects the cgroup version and driver from the Docker daemon.
On Kubernetes, containers are placed in cgroup hierarchies like `/kubepods/burstable/pod<uid>/<containerID>` rather than the standalone Docker paths (`/docker/<id>`). Pumba automatically resolves the correct cgroup path by inspecting the target container via the Docker API (`ContainerInspect`).
- Default mode: Pumba reads the target's `HostConfig.CgroupParent` and constructs the correct `--cgroup-parent` for the stress-ng sidecar, so it is placed under the same Kubernetes pod cgroup hierarchy.
- Inject-cgroup mode: Pumba passes the resolved cgroup path directly to `cg-inject` via its `--cgroup-path` flag, bypassing driver-based path construction entirely.
No manual configuration is needed — Pumba detects Kubernetes cgroup paths and standalone Docker paths transparently.
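The resolution logic described above can be sketched as a small shell function. This is an illustrative sketch only: the function name is ours, and the `/docker` fallback assumes the standalone Docker default hierarchy mentioned earlier.

```shell
#!/bin/sh
# Sketch of cgroup-parent resolution (hypothetical; not Pumba's actual code).
resolve_cgroup_parent() {
  parent="$1"  # value of HostConfig.CgroupParent from ContainerInspect
  if [ -n "$parent" ]; then
    # Kubernetes (or any explicit parent): e.g. /kubepods/burstable/pod<uid>
    echo "$parent"
  else
    # Standalone Docker: containers live under /docker/<id>
    echo "/docker"
  fi
}

resolve_cgroup_parent "/kubepods/burstable/pod1234"  # prints the pod hierarchy unchanged
```

The key point is that a non-empty `HostConfig.CgroupParent` is taken verbatim, which is what makes Kubernetes pod hierarchies work without configuration.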
For `--inject-cgroup` mode, the `cg-inject` binary detects the cgroup version by checking for `/sys/fs/cgroup/cgroup.controllers` (present on v2, absent on v1) and constructs the appropriate cgroup path:
| Cgroup version | Driver | Path format |
|---|---|---|
| v2 | cgroupfs | `/sys/fs/cgroup/docker/<id>/cgroup.procs` |
| v2 | systemd | `/sys/fs/cgroup/system.slice/docker-<id>.scope/cgroup.procs` |
| v1 | cgroupfs | `/sys/fs/cgroup/cpu/docker/<id>/cgroup.procs` |
| v1 | systemd | `/sys/fs/cgroup/cpu/system.slice/docker-<id>.scope/cgroup.procs` |
When Pumba passes `--cgroup-path` (as it does on Kubernetes), these driver-based paths are not used; `cg-inject` writes directly to the provided path.
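The path formats in the table above can be expressed as a small shell function. This is an illustrative sketch of the same mapping; the function and its arguments are ours, not part of cg-inject's interface (the real binary detects the version itself via `cgroup.controllers`).

```shell
#!/bin/sh
# Sketch of cg-inject's driver-based path construction (hypothetical helper).
cgroup_procs_path() {
  cid="$1"      # target container ID
  driver="$2"   # cgroupfs | systemd
  version="$3"  # v1 | v2
  if [ "$version" = "v2" ]; then
    root="/sys/fs/cgroup"       # v2: unified hierarchy
  else
    root="/sys/fs/cgroup/cpu"   # v1: per-controller hierarchy (cpu shown)
  fi
  if [ "$driver" = "systemd" ]; then
    echo "$root/system.slice/docker-$cid.scope/cgroup.procs"
  else
    echo "$root/docker/$cid/cgroup.procs"
  fi
}

cgroup_procs_path abc123 cgroupfs v2  # prints /sys/fs/cgroup/docker/abc123/cgroup.procs
```

Writing a PID into the resulting `cgroup.procs` file is what moves a process into the target's cgroup.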
- The `--timeout` value in `--stressors` should match or be less than `--duration`
- Use `--dry-run` to verify which containers would be targeted without actually running stress tests
- Combine with `--label` to target containers by Kubernetes labels (e.g., `--label io.kubernetes.pod.namespace=staging`)
- stress-ng supports many stressor types beyond CPU, memory, and I/O; see the full stress-ng documentation for all options
- User Guide - Container chaos commands and general usage
- Network Chaos - netem and iptables commands
- Deployment - Docker, Kubernetes, and OpenShift
- Contributing - Building and contributing to Pumba