43 changes: 41 additions & 2 deletions docs/content/en/docs/getting-started/vsphere/vsphere-prereq.md
Set up an Administrative machine as described in [Install EKS Anywhere]({{< rel
To prepare a VMware vSphere environment to run EKS Anywhere, you need the following:
* A vSphere 7 or 8 environment running vCenter.
* Capacity to deploy 6-10 VMs.
* **IP assignment for cluster nodes** - Choose one of the following:
* **DHCP** (default): DHCP service running in vSphere environment in the primary VM network for your workload cluster.
* [Prepare DHCP IP addresses pool]({{< relref "../../clustermgmt/cluster-upgrades/vsphere-and-cloudstack-upgrades.md/#prepare-dhcp-ip-addresses-pool" >}})
* **Static IP**: Configure an [IP pool]({{< relref "./vsphere-spec/#ippool-optional" >}}) in VSphereDatacenterConfig to assign static IPs to cluster nodes via CAPI IPAM.
* See [Static IP Planning](#static-ip-planning) below for requirements.
* One network in vSphere to use for the cluster. EKS Anywhere clusters need access to vCenter through the network to enable self-managing and storage capabilities.
* An [OVA]({{< relref "customize/vsphere-ovas/" >}}) imported into vSphere and converted into a template for the workload VMs.
* It's critical that you set up your [vSphere user credentials properly.]({{< relref "./vsphere-preparation#configuring-vsphere-user-group-and-roles" >}})
The administrative machine and the target workload environment will need network
{{% content "./domains.md" %}}


## Static IP Planning

If you choose to use static IP assignment instead of DHCP, you need to plan your IP pool carefully:

### IP Pool Size Requirements

Your IP pool must have enough addresses for:
* **Control plane nodes**: Number specified in `controlPlaneConfiguration.count`
* **Worker nodes**: Total count across all worker node groups. For autoscaling, use the `maxCount` value.
* **External etcd nodes** (if configured): Number specified in `externalEtcdConfiguration.count`
* **Rolling upgrade buffer**: 1 additional IP for node replacements during upgrades

**Formula**: `Required IPs = CP nodes + Worker nodes + Etcd nodes + 1`

**Example**: A cluster with 3 control plane nodes, 5 workers, and 3 external etcd nodes needs at least **12 IP addresses** (3 + 5 + 3 + 1).
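The arithmetic above can be sketched as a quick shell check. This is a minimal sketch; the node counts below are the example values from this section, not defaults:

```shell
# Example sizing check for the static IP pool (values from the example above).
cp_nodes=3        # controlPlaneConfiguration.count
worker_nodes=5    # total across worker node groups (use maxCount when autoscaling)
etcd_nodes=3      # externalEtcdConfiguration.count, or 0 for stacked etcd
buffer=1          # rolling upgrade buffer
echo "Required IPs: $((cp_nodes + worker_nodes + etcd_nodes + buffer))"
```

Swap in your own counts before sizing the pool; undersizing it will block node rollouts during upgrades.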

### Static IP Configuration

Configure the following in your `VSphereDatacenterConfig`:

```yaml
spec:
  ipPool:
    name: "my-cluster-pool"
    addresses:
    - "192.168.1.100-192.168.1.120" # 21 IPs
    prefix: 24
    gateway: "192.168.1.1"
    nameservers:
    - "8.8.8.8"
```

For detailed field descriptions, see [ipPool configuration]({{< relref "./vsphere-spec/#ippool-optional" >}}).

>**_NOTE:_** The control plane endpoint IP (`controlPlaneConfiguration.endpoint.host`) is separate from the IP pool and must still be a static IP excluded from both DHCP and your IP pool.

## vSphere information needed before creating the cluster
You need to get the following information before creating the cluster:

49 changes: 49 additions & 0 deletions docs/content/en/docs/getting-started/vsphere/vsphere-spec.md
spec:
    datastore: "dataStore2"
    folder: "folder2"
    network: "network2"
  ipPool: <a href="#ippool-optional"># Static IP configuration (optional, alternative to DHCP) </a>
    name: <span>"my-cluster-pool"</span> <a href="#ippoolname-required"># Name for the InClusterIPPool resource </a>
    addresses: <a href="#ippooladdresses-required"># IP addresses (ranges, CIDRs, or individual IPs) </a>
    - <span>"192.168.1.100-192.168.1.150"</span>
    prefix: <span style="color:green">24</span> <a href="#ippoolprefix-required"># Subnet prefix length </a>
    gateway: <span>"192.168.1.1"</span> <a href="#ippoolgateway-required"># Default gateway </a>
    nameservers: <a href="#ippoolnameservers-optional"># DNS servers (optional) </a>
    - <span>"8.8.8.8"</span>
    - <span>"8.8.4.4"</span>

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
Folder is the name or inventory path of the folder in which the VM is created.
#### failureDomains[0].network
Network is the name or inventory path of the network which will be added to the VM.

### ipPool (optional)
The IP pool configuration for static IP assignment to cluster nodes. When specified, nodes will be assigned static IPs from this pool instead of using DHCP. The CLI creates an `InClusterIPPool` resource from this configuration using the [CAPI IPAM Provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster).

This is an alternative to DHCP-based IP assignment. You can switch between DHCP and static IP modes at any time by adding or removing the `ipPool` configuration, which will trigger a rolling update of all cluster nodes.

>**_NOTE:_** When using static IP, ensure your IP pool has enough addresses for all nodes plus one additional address for rolling upgrades. The required count is: control plane nodes + worker nodes + etcd nodes (if external) + 1.

#### ipPool.name (required)
Name for the generated `InClusterIPPool` Kubernetes resource. This name will be referenced by the VSphereMachineTemplates.

#### ipPool.addresses (required)
List of IP addresses to include in the pool. Supports three formats:
- **Ranges**: `"192.168.1.100-192.168.1.150"` (51 addresses)
- **CIDR blocks**: `"192.168.1.0/28"` (16 addresses)
- **Individual IPs**: `"192.168.1.100"`

Example with multiple entries:
```yaml
addresses:
- "192.168.1.100-192.168.1.120" # 21 addresses
- "192.168.1.200" # 1 address
- "192.168.2.0/28" # 16 addresses
```
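The per-entry counts in the comments above can be sanity-checked with shell arithmetic. A rough sketch, assuming IPv4 and a range whose endpoints differ only in the last octet:

```shell
# Tally the example pool entries above.
range_ips=$((120 - 100 + 1))   # "192.168.1.100-192.168.1.120" -> 21 (inclusive)
single_ips=1                   # "192.168.1.200" -> 1
cidr_ips=$((1 << (32 - 28)))   # "192.168.2.0/28" -> 2^4 = 16
echo "Total pool size: $((range_ips + single_ips + cidr_ips))"
```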

#### ipPool.prefix (required)
The subnet prefix length (CIDR notation). For example, `24` for a `/24` subnet (255.255.255.0 netmask). Valid range is 1-32.
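The relationship between a prefix length and the size of its subnet can be checked with shell arithmetic (IPv4 only):

```shell
# A /24 leaves 32 - 24 = 8 host bits -> 2^8 = 256 addresses (255.255.255.0 netmask).
prefix=24
echo "Host bits: $((32 - prefix)), addresses: $((1 << (32 - prefix)))"
```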

#### ipPool.gateway (required)
The default gateway IP address for the subnet. This will be configured on all cluster nodes.

#### ipPool.nameservers (optional)
List of DNS server IP addresses. These will be configured on all cluster nodes for DNS resolution.

Example:
```yaml
nameservers:
- "8.8.8.8"
- "8.8.4.4"
```

## VSphereMachineConfig Fields

### memoryMiB (optional)
38 changes: 38 additions & 0 deletions docs/content/en/docs/troubleshooting/troubleshooting.md
If there are no IPv4 IPs assigned to VMs, this is most likely because you don't

To confirm this is a DHCP issue, you could create a new VM in the same network to validate if an IPv4 IP is assigned correctly.

#### Static IP: IP pool has insufficient addresses

If you are using static IP assignment (via `ipPool` in VSphereDatacenterConfig) and cluster creation fails, you may see an error like:

```
ipPool 'my-pool' has 10 addresses but cluster requires at least 12 (control plane: 3, workers: 5, etcd: 3, rolling upgrade buffer: 1)
```

**Resolution**: Increase the number of IP addresses in your `ipPool.addresses` configuration. The pool must have enough addresses for all nodes plus one additional address for rolling upgrades.

**Calculating required IPs**:
- Control plane nodes + Worker nodes + External etcd nodes (if any) + 1

For clusters with autoscaling, use `maxCount` instead of `count` for worker nodes to ensure enough IPs are available when scaling up.
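With autoscaling in the picture, the worker term becomes the sum of `maxCount` across the autoscaled worker node groups. A hedged sketch with illustrative counts (not values from this document):

```shell
# Sizing with autoscaling headroom; worker_max is an assumed illustrative value.
cp_nodes=3
worker_max=8     # sum of maxCount across all worker node groups (assumed)
etcd_nodes=3
echo "Required IPs with autoscaling headroom: $((cp_nodes + worker_max + etcd_nodes + 1))"
```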

#### Static IP: InClusterIPPool resource not found

If VMs are created but don't receive IP addresses when using static IP configuration, verify the IPAM resources exist:

```bash
kubectl get inclusterippool -n eksa-system --kubeconfig <cluster-kubeconfig>
```

If the `InClusterIPPool` resource is missing, check the `capv-controller-manager` logs for errors related to IPAM:

```bash
kubectl logs -n capv-system -l control-plane=controller-manager --kubeconfig <cluster-kubeconfig>
```

#### Static IP: VMs have no network connectivity after IP assignment

If VMs receive static IPs but have no network connectivity:

1. Verify the `gateway` in your `ipPool` configuration is correct and reachable from the VM network.
2. Verify the `prefix` (subnet mask) is correct for your network.
3. Check that the assigned IP addresses are in the correct subnet for the configured gateway.
4. If using `nameservers`, verify the DNS servers are reachable from the VM network.

#### Control Plane IP in clusterconfig is not present on any Control Plane VM

If there are any IPv4 IPs assigned, check whether one of the VMs has the control plane IP specified in `Cluster.spec.controlPlaneConfiguration.endpoint.host` in the clusterconfig yaml.