Administering Clusters on vSphere
You can create, upgrade, modify, or delete vSphere on-premises Kubernetes clusters using the CCP web interface. CCP supports v2 and v3 clusters on vSphere. A v2 cluster uses a single master node for its control plane, whereas a v3 cluster can use one or three master nodes. The three-master option makes v3 the preferred cluster type because it keeps the control plane highly available. The following steps show you how to administer clusters on vSphere:
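After a cluster is created, you can verify the control plane from the command line. For example, assuming you have downloaded the cluster's kubeconfig file from the CCP web interface, running kubectl get nodes lists the cluster nodes; on a multimaster v3 cluster, three nodes report the master role.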
Step 1. In the left pane, click Clusters and then click the vSphere tab.
Step 2. Click NEW CLUSTER.
Step 3. In the BASIC INFORMATION screen:
a. From the INFRASTRUCTURE PROVIDER drop-down list, choose the provider related to your Kubernetes cluster.
For more information, see Adding vSphere Provider Profile.
b. In the KUBERNETES CLUSTER NAME field, enter a name for your Kubernetes tenant cluster.
c. In the DESCRIPTION field, enter a description for your cluster.
d. In the KUBERNETES VERSION drop-down list, choose the version of Kubernetes that you want to use for creating the cluster.
e. If you are using ACI, specify the ACI profile.
For more information, see Adding ACI Profile.
f. Click NEXT.
Step 4. In the PROVIDER SETTINGS screen:
a. From the DATA CENTER drop-down list, choose the data center that you want to use.
b. From the CLUSTERS drop-down list, choose a cluster.
Note
Ensure that DRS and HA are enabled on the cluster that you choose. For more information on enabling DRS and HA on clusters, see Cisco Container Platform Installation Guide.
c. From the DATASTORE drop-down list, choose a datastore.
Note
Ensure that the datastore is accessible to the hosts in the cluster.
d. From the VM TEMPLATE drop-down list, choose a VM template.
e. From the NETWORK drop-down list, choose a network.
Note
Ensure that you select a subnet with an adequate number of free IP addresses. For more information, see Managing Networks. The selected network must have access to vCenter.
For v2 clusters that use HyperFlex systems:
■ The selected network must have access to the HyperFlex Connect server to support HyperFlex Storage Provisioners.
■ For HyperFlex Local Network, select k8-priv-iscsivm-network to enable HyperFlex Storage Provisioners.
f. From the RESOURCE POOL drop-down list, choose a resource pool.
g. Click NEXT.
Step 5. In the NODE CONFIGURATION screen:
a. From the GPU TYPE drop-down list, choose a GPU type.
Note
GPU configuration applies only if you have GPUs in your HyperFlex cluster.
b. For v3 clusters, under MASTER, choose the number of master nodes as well as their VCPU and memory configurations.
Note
You may skip this step for v2 clusters. You can configure the number of master nodes only for v3 clusters.
c. Under WORKER, choose the number of worker nodes as well as their VCPU and memory configurations.
d. In the SSH USER field, enter the SSH username.
e. In the SSH KEY field, enter the SSH public key that you want to use for creating the cluster.
Note
Ensure that you use the Ed25519 or ECDSA format for the public key. Because RSA and DSA are less secure, Cisco prevents the use of these formats.
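For example, you can generate a compliant key pair with OpenSSH by running ssh-keygen -t ed25519 -f ~/.ssh/ccp-cluster (the file path is only illustrative) and then paste the contents of the resulting ~/.ssh/ccp-cluster.pub file into the SSH KEY field.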
f. In the ROUTABLE CIDR field, enter the IP addresses for the routable subnet in CIDR notation.
g. From the SUBNET drop-down list, choose the subnet that you want to use for this cluster.
h. In the POD CIDR field, enter the IP addresses for the pod subnet in CIDR notation.
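For example, a pod subnet such as 192.168.0.0/16 is a common choice. Whichever range you enter, ensure that it does not overlap with the node subnet or the routable CIDR.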
i. In the DOCKER HTTP PROXY field, enter an HTTP proxy for Docker.
j. In the DOCKER HTTPS PROXY field, enter an HTTPS proxy for Docker.
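For example, a value such as http://proxy.example.com:3128 (illustrative only) can be entered; the same proxy address is often used for both fields. Proxies are required only if the cluster nodes cannot reach external registries directly.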
k. In the DOCKER BRIDGE IP field, enter a valid CIDR to override the default Docker bridge.
Note
If you want to install the HX-CSI add-on, ensure that you set the CIDR network prefix of the DOCKER BRIDGE IP field to /24.
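For example, a value such as 172.31.0.1/24 (illustrative only) satisfies the /24 requirement; choose a range that does not collide with your node, pod, or routable subnets.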
l. Under DOCKER NO PROXY, click ADD NO PROXY and then specify a comma-separated list of hosts that you want to exclude from proxying.
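For example, a list such as localhost,127.0.0.1,.example.internal (illustrative only) bypasses the proxy for the loopback interface and an internal domain; include any internal registries, vCenter, or other hosts that the nodes must reach directly.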
m. In the VM USERNAME field, enter the VM username that you want to use as the login for the VM.
n. Under NTP POOLS, click ADD POOL to add a pool.
o. Under NTP SERVERS, click ADD SERVER to add an NTP server.
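For example, you can add a public pool such as pool.ntp.org or the NTP servers that your data center already uses. Consistent time across nodes is important because certificate validation and etcd are sensitive to clock skew.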
p. Under ROOT CA REGISTRIES, click ADD REGISTRY to add a root CA certificate to allow tenant clusters to securely connect to additional services.
q. Under INSECURE REGISTRIES, click ADD REGISTRY to add Docker registries created with unsigned certificates.
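For example, an internal registry reachable at registry.example.com:5000 (illustrative only) that presents a self-signed or unsigned certificate belongs in this list; otherwise, Docker on the nodes refuses to pull images from it.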
r. For v2 clusters, under ISTIO, use the toggle button to enable or disable Istio.
s. Click NEXT.
Step 6. For v2 clusters, to integrate Harbor with CCP:
Note
Harbor is currently not available for v3 clusters.
a. In the Harbor Registry screen, click the toggle button to enable Harbor.
b. In the PASSWORD field, enter a password for the Harbor server administrator.
c. In the REGISTRY field, enter the size of the registry in gigabytes.
d. Click NEXT.
Step 7. In the Summary screen, verify the configuration and then click FINISH.