feat: Cluster-wide LimitRange module for ephemeral-storage defaults#335

Closed
sanmesh-kakade wants to merge 1 commit into main from feat/limit-range-manager

Conversation


@sanmesh-kakade sanmesh-kakade commented Apr 16, 2026

Summary

Adds limit_range/default/1.0 — a Facets module that creates Kubernetes LimitRange resources cluster-wide or in specific namespaces, enforcing default resource requests and limits on containers that don't set their own.

Features

  • Two modes:

    • cluster_wide: true — applies LimitRange to all namespaces (uses data "kubernetes_all_namespaces"), with configurable exclude_namespaces to skip specific ones
    • cluster_wide: false — applies only to explicitly listed target_namespaces
  • Full LimitRange spec — configures all fields supported by kubernetes_limit_range_v1:

    • default — default limits injected if a container doesn't set its own
    • default_request — default requests injected if a container doesn't set its own
    • min / max — hard floor/ceiling; pods outside the range are rejected at admission
    • max_limit_request_ratio — rejects pods whose limit/request ratio exceeds the threshold
    • All fields support cpu, memory, and ephemeral-storage
  • Namespace overrides — per-namespace overrides merged on top of the base spec. Allows different limits for different namespaces (e.g., tighter limits for CI/CD namespaces, looser for workload namespaces)
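
The cluster-wide mode can be sketched in Terraform roughly as follows. This is illustrative only — the variable names, resource name, and merge logic are assumptions, not the module's actual interface:

```hcl
# Sketch only: var.* names and the override handling are hypothetical.
data "kubernetes_all_namespaces" "all" {}

locals {
  target_namespaces = var.cluster_wide ? setsubtract(
    toset(data.kubernetes_all_namespaces.all.namespaces),
    toset(var.exclude_namespaces)
  ) : toset(var.target_namespaces)
}

resource "kubernetes_limit_range_v1" "this" {
  for_each = local.target_namespaces

  metadata {
    name      = "default-limits"
    namespace = each.value
  }

  spec {
    limit {
      type = "Container"
      # Per-namespace overrides win; base values fill the gaps.
      default         = merge(var.limits.default, try(var.namespace_overrides[each.value].default, {}))
      default_request = merge(var.limits.default_request, try(var.namespace_overrides[each.value].default_request, {}))
    }
  }
}
```

One consequence of the for_each over namespaces is that each LimitRange is an independent resource in state, so adding or removing a namespace from the set only touches that namespace's object.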

Usage example

kind: limit_range
flavor: default
version: '1.0'
spec:
  cluster_wide: true
  exclude_namespaces:
  - kube-node-lease
  - kube-public
  limits:
    type: Container
    default_request:
      ephemeral-storage: "512Mi"
      cpu: "100m"
      memory: "128Mi"
    default:
      ephemeral-storage: "5Gi"
      cpu: "1"
      memory: "1Gi"
  namespace_overrides:
    default:
      default_request:
        ephemeral-storage: "1Gi"
      default:
        ephemeral-storage: "10Gi"
    tekton-pipelines:
      default:
        ephemeral-storage: "2Gi"
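
The namespace_overrides merge semantics implied above (override keys win, unspecified keys fall through to the base spec) can be sketched in Python — merge_spec is a hypothetical helper for illustration, not part of the module:

```python
import copy

def merge_spec(base: dict, override: dict) -> dict:
    """Deep-merge an override dict on top of a base spec; override wins."""
    merged = copy.deepcopy(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_spec(merged[key], value)
        else:
            merged[key] = value
    return merged

# Base spec and the "default" namespace override from the usage example above.
base = {
    "default_request": {"ephemeral-storage": "512Mi", "cpu": "100m", "memory": "128Mi"},
    "default": {"ephemeral-storage": "5Gi", "cpu": "1", "memory": "1Gi"},
}
override = {
    "default_request": {"ephemeral-storage": "1Gi"},
    "default": {"ephemeral-storage": "10Gi"},
}

merged = merge_spec(base, override)
# ephemeral-storage is overridden; cpu and memory fall through from the base.
print(merged["default_request"])
print(merged["default"])
```

So the `default` namespace ends up with a 1Gi ephemeral-storage request but keeps the base cpu/memory defaults.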

Caveats

  • data "kubernetes_all_namespaces" reads at plan time — namespaces created between applies won't get a LimitRange until the next release cycle
  • Only affects new pods — existing running pods are not modified
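
To illustrate the defaulting behavior on new pods: a container created with no resources block in a covered namespace is admitted as if it had the namespace's defaults set explicitly (values below assume the base spec from the usage example; the actual values depend on which LimitRange lands in that namespace):

```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
    ephemeral-storage: "512Mi"
  limits:
    cpu: "1"
    memory: "1Gi"
    ephemeral-storage: "5Gi"
```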

Test plan

  • terraform init && terraform validate passes
  • Deploy in cluster_wide: false mode with 1-2 target_namespaces
  • Verify LimitRange created: kubectl get limitrange -A
  • Deploy pod without resource specs, verify defaults injected
  • Test cluster_wide mode with exclude_namespaces
  • Test namespace_overrides produces different limits per namespace

🤖 Generated with Claude Code

Add a Facets module that creates Kubernetes LimitRange resources across
all namespaces (cluster-wide) or a specific set, with per-namespace
override support. Enforces default cpu, memory, and ephemeral-storage
requests/limits on containers that don't set their own.

Motivated by fleet-wide investigation showing 100% of pods across 18 CP
clusters have zero ephemeral-storage requests/limits set, contributing
to uncontrolled disk pressure from containerd image sprawl.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
