Several architectural decisions were made during the Kyma architecture meeting and the implementation phase. These decisions were primarily driven by technical constraints and the need for timely solutions.
Components:
- KIM (Kyma Infrastructure Manager): Deploys the webhook and shared resources to Kyma runtimes.
- API Server: The Kubernetes API server calls the manipulation webhook to intercept the Pod manifest before it gets applied.
- RT Bootstrapper: Modifies Pod manifests and applies landscape-specific adjustments (e.g., adding a pull secret or rewriting image-registry hostnames).
- Workload: The manipulated workload is adjusted to the landscape-specific setup.
- (Optional) The workload can use shared resources (e.g., pull secrets or cluster trust bundles).
The webhook manipulates only Pod resources. Other resources, such as StatefulSets, DaemonSets, and Deployments, are ignored. This is required to avoid conflicts between Kyma Lifecycle Manager (KLM) and Kyma Infrastructure Manager (KIM). KLM regularly reconciles the resources it deployed (for example, Deployments of operators). If the webhook modified these Deployments, KLM would regularly revert the modifications, and both processes would "fight" against each other. To avoid this situation, we agreed that KLM never deploys Pods directly, but only higher-level resources such as Deployments, DaemonSets, and StatefulSets. The drawback of this decision is that a deployed Pod can contain different values than its definition within the owning Deployment, StatefulSet, or DaemonSet, which may confuse engineers or developers who review a Pod definition in Kubernetes and are unaware of the webhook's existence and its adjustments.
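The kind filter at the core of this decision can be sketched with plain Go and stdlib types. This is an illustrative sketch, not the actual implementation: the admission types are a minimal hand-written subset of `k8s.io/api/admission/v1`, and the pull-secret name in the patch is assumed.

```go
package main

import "encoding/json"

// Minimal subset of the admission types, sketched with the stdlib only;
// a real webhook would use k8s.io/api/admission/v1.
type GroupVersionKind struct {
	Kind string `json:"kind"`
}

type AdmissionRequest struct {
	UID    string           `json:"uid"`
	Kind   GroupVersionKind `json:"kind"`
	Object json.RawMessage  `json:"object"`
}

type AdmissionResponse struct {
	UID       string  `json:"uid"`
	Allowed   bool    `json:"allowed"`
	Patch     []byte  `json:"patch,omitempty"`
	PatchType *string `json:"patchType,omitempty"`
}

// mutate patches only Pod resources; every other kind is admitted
// unchanged, so KLM-managed Deployments, StatefulSets, and DaemonSets
// are never touched and KLM has nothing to revert.
func mutate(req AdmissionRequest) AdmissionResponse {
	resp := AdmissionResponse{UID: req.UID, Allowed: true}
	if req.Kind.Kind != "Pod" {
		return resp // not a Pod: admit without modification
	}
	// Illustrative manipulation (secret name assumed): add a pull secret.
	patch := `[{"op":"add","path":"/spec/imagePullSecrets","value":[{"name":"landscape-pull-secret"}]}]`
	pt := "JSONPatch"
	resp.Patch = []byte(patch)
	resp.PatchType = &pt
	return resp
}
```

Because the patch is applied at Pod creation, the owning Deployment's template stays untouched, which is exactly the divergence described above.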
The admission webhook must be configured as a non-blocking processing step for API-server requests: the API server continues processing a request even when the webhook cannot be invoked, for example, because it is temporarily unavailable. This decision keeps the API server responsive but introduces the risk that Pods get scheduled without being manipulated.
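In Kubernetes terms, non-blocking behavior corresponds to `failurePolicy: Ignore` on the `MutatingWebhookConfiguration`. The following fragment is a sketch; the webhook, service, and path names are assumptions, only the field semantics are standard Kubernetes:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: runtime-bootstrapper          # name assumed for illustration
webhooks:
  - name: pods.bootstrapper.kyma-project.io   # name assumed
    failurePolicy: Ignore   # non-blocking: the API server proceeds if the webhook is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
    clientConfig:
      service:
        name: runtime-bootstrapper    # service name assumed
        namespace: kyma-system
        path: /mutate
```

With `failurePolicy: Fail` instead, an unavailable webhook would block all Pod creation in the matched namespaces, which is the outage mode this decision deliberately avoids.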
We agreed that the webhook is exclusively responsible for manipulating the manifest of Pods during their creation phase. If a Pod gets scheduled without being processed by the webhook (for example, when the webhook is temporarily down), the Pod might miss critical adjustments and, in the worst case, may not start up properly. To address this issue, a housekeeping process implemented outside of the webhook regularly scans all Pods for any missing manipulations. If such Pods are identified, the housekeeping process restarts them (during the re-creation, the webhook is invoked, and the manipulations are applied).
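The housekeeping selection step can be sketched as follows. The annotation key and Pod shape are assumptions for illustration; the sketch only shows the decision "which Pods missed the webhook and must be restarted":

```go
package main

// Marker annotation the webhook would set on processed Pods.
// The key is an assumption for illustration, not the actual Kyma detail.
const processedAnnotation = "bootstrapper.kyma-project.io/processed"

type Pod struct {
	Name        string
	Namespace   string
	Annotations map[string]string
}

// podsToRestart selects Pods that were scheduled without passing through
// the webhook. Deleting them lets their controller re-create them, and
// the webhook applies the missing manipulations during re-creation.
func podsToRestart(pods []Pod) []Pod {
	var missed []Pod
	for _, p := range pods {
		if p.Annotations[processedAnnotation] != "true" {
			missed = append(missed, p)
		}
	}
	return missed
}
```

A marker annotation (or an equivalent observable side effect of the manipulation) is what makes "missing manipulations" detectable at all: without it, the housekeeping process could not distinguish a processed Pod from a missed one.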
We agreed that Pods are processed by the webhook only if one of the following conditions is fulfilled:
- The webhook configuration defines a list of mandatory manipulations for the namespace. This ensures that every Pod in Kyma-managed namespaces is processed.
- The namespace is annotated to receive particular manipulations.
- The Pod itself is annotated to receive manipulations.
This also enables customers to opt into this modification mechanism by annotating either their own namespace or the Pod manifests accordingly.
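The three conditions above can be expressed as a short predicate. The annotation key and configuration shape are assumptions for illustration:

```go
package main

// Opt-in annotation key; assumed for illustration.
const optInAnnotation = "bootstrapper.kyma-project.io/manipulations"

type Config struct {
	// Namespaces with mandatory manipulations (e.g., kyma-system),
	// as defined by the webhook's default configuration.
	MandatoryNamespaces map[string][]string
}

// shouldMutate applies the three agreed conditions in order.
func shouldMutate(cfg Config, namespace string, nsAnnotations, podAnnotations map[string]string) bool {
	if len(cfg.MandatoryNamespaces[namespace]) > 0 {
		return true // 1: the webhook config defines mandatory manipulations for the namespace
	}
	if _, ok := nsAnnotations[optInAnnotation]; ok {
		return true // 2: the namespace opted in via annotation
	}
	_, ok := podAnnotations[optInAnnotation]
	return ok // 3: the Pod itself is annotated
}
```

Conditions 2 and 3 are the customer opt-in path; condition 1 is controlled exclusively by the KIM-managed configuration.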
The webhook retrieves a default configuration that specifies the set of manipulations to apply to all Pods in particular namespaces. Customers and other workloads cannot modify this configuration.
By default, the configuration covers only Kyma-managed namespaces (e.g., kyma-system and istio-system) to avoid conflicts with customer-owned namespaces.
The webhook supports multiple manipulations. The default configuration, managed by KIM, determines which manipulation is used.
To adjust the workloads to landscape-specific setups, several resources must be published in the Kyma runtime:
- Pull secrets to authenticate at private container registries.
- A ClusterTrustBundle used to store certificate chains (needed for secured backend communication).
- The configuration of the webhook itself.
The Kyma backend ensures that such resources are synchronized from Kyma Control Plane (KCP) to the Kyma runtime kyma-system namespace. For more information on this mechanism, see Runtime Configuration Synchronization Using Controller Loop.
Some resources are namespace-scoped and must be replicated to all other namespaces in the cluster (e.g., pull secrets). The Runtime Bootstrapper webhook includes a dedicated controller that synchronizes such resources into all Kyma runtime namespaces.
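The reconciliation decision of that replication controller can be sketched as a pure function. The types are simplified assumptions; a real controller would watch Secrets via the Kubernetes API:

```go
package main

// Simplified stand-in for a namespace-scoped resource such as a pull secret.
type Secret struct {
	Name      string
	Namespace string
}

// namespacesMissingSecret returns every target namespace that does not
// yet hold a copy of the source secret, i.e., where the controller
// still has to replicate it. The source namespace itself is skipped.
func namespacesMissingSecret(source Secret, namespaces []string, existing []Secret) []string {
	have := map[string]bool{}
	for _, s := range existing {
		if s.Name == source.Name {
			have[s.Namespace] = true
		}
	}
	var missing []string
	for _, ns := range namespaces {
		if ns != source.Namespace && !have[ns] {
			missing = append(missing, ns)
		}
	}
	return missing
}
```

Running this on every namespace and secret event keeps the replicas converged even as namespaces are created or secrets are rotated in kyma-system.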