Cisco Container Platform Architecture Overview

At the bottom of the stack is Level 1, the Networking layer, which can consist of Nexus switches, Application Policy Infrastructure Controllers (APICs), and Fabric Interconnects (FIs).

Level 2 is the Compute layer, which consists of HyperFlex, UCS, or third-party servers that provide virtualized compute resources (through VMware) as well as distributed storage resources.

Level 3 is the Hypervisor layer, which is implemented using HyperFlex or VMware.

Level 4 consists of the CCP control plane and data plane (or tenant clusters). Figure 5-9 provides an overview of the CCP architecture: the left side shows the CCP control plane, which runs on four control-plane VMs, and the right side shows the tenant clusters. These tenant clusters are preconfigured to support persistent volumes using the vSphere Cloud Provider and the Container Storage Interface (CSI) plug-in.
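As an illustration of what this preconfiguration enables, the following minimal sketch uses the Kubernetes Python client to request a persistent volume from a tenant cluster. The StorageClass name ccp-vsphere and the claim size are placeholders for this example, not necessarily the names or defaults that CCP provisions.

# Minimal sketch: request a persistent volume on a CCP tenant cluster.
# Assumes kubeconfig access to the tenant cluster and a vSphere-backed
# StorageClass; "ccp-vsphere" is a placeholder name.
from kubernetes import client, config

config.load_kube_config()  # uses the tenant cluster's kubeconfig

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ccp-vsphere",  # placeholder StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)

Once the claim is bound, the vSphere CSI driver backs it with a virtual disk on the underlying datastore, so pods can mount it like any other persistent volume.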


Figure 5-9 Container Platform Architecture Overview

Components of Cisco Container Platform

Table 5-2 lists the components of CCP.

Table 5-2 Components of CCP

Sample Deployment Topology

This section describes a sample deployment topology of the CCP and illustrates the network topology requirements at a conceptual level.

In this case, it is expected that the vSphere-based cluster is set up, provisioned, and fully functional for virtualization and virtual machine (VM) provisioning before CCP is installed. You can refer to the standard VMware documentation for details on vSphere installation. Figure 5-10 shows an example of a vSphere cluster on which CCP is to be deployed.


Figure 5-10 vSphere cluster on which CCP is to be deployed

Once the vSphere cluster is ready to provision VMs, the administrator provisions one or more VMware port groups (for example, PG10, PG20, and PG30 in the figure) on which virtual machines will subsequently be provisioned as container cluster nodes. Basic L2 switching with VMware vSwitch functionality can be used to implement these port groups. IP subnets should be set aside for use on these port groups, and the VLANs used to implement these port groups should be terminated on an external L3 gateway (such as the ASR1K shown in the figure). The control-plane cluster and tenant-plane Kubernetes clusters of CCP can then be provisioned on these port groups.
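For reference, a standard vSwitch port group such as PG10 can also be created programmatically. The following is a minimal sketch using pyVmomi; the vCenter host name, credentials, vSwitch name, and VLAN ID are placeholders, and a production script would validate certificates rather than disable verification.

# Minimal sketch: create a standard vSwitch port group (e.g., PG10) with pyVmomi.
# Host name, credentials, vSwitch name, and VLAN ID are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                [vim.HostSystem], True)
for host in hosts.view:
    spec = vim.host.PortGroup.Specification(
        name="PG10",
        vlanId=10,                 # VLAN terminated on the external L3 gateway
        vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy(),
    )
    host.configManager.networkSystem.AddPortGroup(portgrp=spec)

Disconnect(si)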

All provisioned Kubernetes clusters may share a single port group, or separate port groups may be provisioned (one per Kubernetes cluster), depending on the isolation needs of the deployment. Layer 3 network isolation may be used between these port groups as long as the following conditions are met:

• There is L3 IP connectivity between the port group used for the control-plane cluster and the tenant cluster port groups

• The IP address of the vCenter server is accessible from the control-plane cluster (a simple reachability check is sketched after this list)

• A DHCP server is provisioned for assigning IP addresses to the installer and upgrade VMs, and it must be accessible from the control-plane cluster port group
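The vCenter reachability requirement in particular is easy to verify from a node on the control-plane port group. The following is a minimal sketch that simply opens a TCP connection to the vCenter HTTPS port; the host name is a placeholder, and 443 is vCenter's default HTTPS port.

# Minimal sketch: verify that vCenter is reachable from the control-plane subnet.
import socket

def reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(reachable("vcenter.example.com"))  # placeholder vCenter host name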

The simplest functional topology would be to use a single shared port group for all clusters, with a single IP subnet used to assign IP addresses for all container cluster VMs. This IP subnet can be used to assign one IP address per cluster VM and up to four virtual IP addresses per Kubernetes cluster, but it would not be used to assign individual Kubernetes pod IP addresses. Hence, a reasonable capacity planning estimate for the size of this IP subnet is as follows:

(The expected total number of container cluster VMs across all clusters) + 3 × (the total number of expected Kubernetes clusters)
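To make the arithmetic concrete, the following sketch applies this estimate and picks the smallest IPv4 prefix that can hold the result. The VM and cluster counts in the example call are illustrative only.

# Minimal sketch: apply the capacity planning estimate above and choose a subnet size.
import math

def plan_subnet(total_cluster_vms: int, kubernetes_clusters: int) -> tuple[int, int]:
    required = total_cluster_vms + 3 * kubernetes_clusters  # estimate from the text
    addresses_needed = required + 2                         # network + broadcast addresses
    prefix = 32 - math.ceil(math.log2(addresses_needed))
    return required, prefix

# Example: 12 cluster VMs across 3 Kubernetes clusters -> 21 addresses, a /27 fits.
print(plan_subnet(total_cluster_vms=12, kubernetes_clusters=3))  # (21, 27)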
