
vSphere with Tanzu Architecture

In one of the previous articles, we discussed the different Tanzu implementations. In this post, let's understand the basic architecture of vSphere with Tanzu.


Image Courtesy - VMware

The above diagram shows the Supervisor Cluster general architecture. When vSphere with Tanzu is enabled on a vSphere cluster, it creates a Kubernetes control plane inside the hypervisor layer. This layer contains specific objects that enable the capability to run Kubernetes workloads within ESXi, and it is called the Supervisor Cluster. It runs on top of an SDDC layer that consists of compute (ESXi), networking (NSX-T Data Center or vSphere networking), and storage (vSAN or another shared storage solution). Shared storage is required for persistent volumes for vSphere Pods, VMs running inside the Supervisor Cluster, and pods in a Tanzu Kubernetes cluster.


After a Supervisor Cluster is created, a vSphere administrator creates namespaces within the Supervisor Cluster, called vSphere Namespaces. A developer can then run workloads consisting of containers running inside vSphere Pods, and can create Tanzu Kubernetes clusters.
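To make this concrete, here is a minimal sketch of what a developer-submitted Tanzu Kubernetes cluster definition could look like, built as a Python dict for clarity. This is an assumption-laden illustration: the field names follow the `run.tanzu.vmware.com/v1alpha1` `TanzuKubernetesCluster` API as commonly documented, and all names, classes, and counts are hypothetical; verify the exact schema against your vSphere release.

```python
import json

# Illustrative sketch only: a minimal TanzuKubernetesCluster manifest as a
# Python dict. All metadata names, VM classes, and storage class names below
# are hypothetical placeholders, not values from this article.
manifest = {
    "apiVersion": "run.tanzu.vmware.com/v1alpha1",
    "kind": "TanzuKubernetesCluster",
    "metadata": {"name": "dev-cluster", "namespace": "team-a"},
    "spec": {
        "topology": {
            # Three control plane nodes mirrors the Supervisor Cluster's own
            # three-node control plane pattern described above.
            "controlPlane": {"count": 3, "class": "best-effort-small",
                             "storageClass": "vsan-default"},
            "workers": {"count": 3, "class": "best-effort-small",
                        "storageClass": "vsan-default"},
        },
    },
}

# Render it so it could be saved and applied with kubectl (as YAML/JSON).
print(json.dumps(manifest, indent=2))
```

The developer would apply a definition like this with kubectl against the vSphere Namespace, and the Tanzu Kubernetes Grid Service on the Supervisor Cluster would provision the cluster.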


Image Courtesy - VMware

Let's take a look at the Supervisor Cluster architecture now. It consists of the components discussed below.

Kubernetes control plane VM

Three Kubernetes control plane VMs in total are created on the hosts that are part of the Supervisor Cluster. Each of the three control plane VMs has its own IP address, and the three are load balanced. Additionally, a floating IP address is assigned to one of the VMs.


Spherelet

An additional process called Spherelet is created on each host. It is a kubelet that is ported natively to ESXi and allows the ESXi host to become part of the Kubernetes cluster.


Container Runtime Executive (CRX)

CRX is similar to a VM from the perspective of Hostd and vCenter Server. CRX includes a paravirtualized Linux kernel that works together with the hypervisor. CRX uses the same hardware virtualization techniques as VMs and it has a VM boundary around it. A direct boot technique is used, which allows the Linux guest of CRX to initiate the main init process without passing through kernel initialization. This allows vSphere Pods to boot nearly as fast as containers.


The Cluster API and VMware Tanzu Kubernetes Grid Service are modules that run on the Supervisor Cluster and enable the provisioning and management of Tanzu Kubernetes clusters. The Virtual Machine Service module is responsible for deploying and running stand-alone VMs and VMs that make up Tanzu Kubernetes clusters.


How are VMs in the Supervisor Cluster placed?

vSphere DRS determines the exact placement of the control plane VMs on the ESXi hosts and migrates them when needed. vSphere DRS is also integrated with the Kubernetes Scheduler on the control plane VMs, so DRS determines the placement of vSphere Pods. When you, as a DevOps engineer, schedule a vSphere Pod, the request goes through the regular Kubernetes workflow and then to DRS, which makes the final placement decision.
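The handoff above can be pictured with a toy placement function. This is a hedged illustration only, not actual DRS logic: real DRS considers far richer signals (affinity rules, resource contention, migration cost, and more), while this sketch simply picks the candidate host with the most free headroom.

```python
# Toy sketch of a placement decision in the spirit of the Kubernetes
# Scheduler -> DRS handoff. NOT the real DRS algorithm; host names and
# capacities below are hypothetical.
def place_pod(pod_cpu, pod_mem, hosts):
    """Return the name of the host that fits the pod with the most headroom."""
    # Filter to hosts that can satisfy the pod's resource request at all.
    candidates = [h for h in hosts
                  if h["free_cpu"] >= pod_cpu and h["free_mem"] >= pod_mem]
    if not candidates:
        return None  # no host can satisfy the request
    # Simple heuristic: favor the least-loaded host.
    best = max(candidates, key=lambda h: (h["free_cpu"], h["free_mem"]))
    return best["name"]

hosts = [
    {"name": "esxi-01", "free_cpu": 4, "free_mem": 16},
    {"name": "esxi-02", "free_cpu": 8, "free_mem": 32},
    {"name": "esxi-03", "free_cpu": 2, "free_mem": 8},
]
print(place_pod(pod_cpu=2, pod_mem=4, hosts=hosts))  # esxi-02 (most headroom)
```

The point of the sketch is the division of labor: the Kubernetes workflow validates and queues the request, and a cluster-wide resource view makes the final host choice.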


If you're new to this technology, you might not have heard of some of the terms mentioned above, such as namespaces and the Supervisor Cluster. Let's understand what these terms mean.


vSphere Namespaces

In a large Kubernetes cluster with many projects, teams, or customers, there may be a need to carve out a piece of the cluster to ensure fair allocation of resources and permissions. A Namespace provides the means for sharing a Kubernetes cluster's resources in this way. You can also attach policies and authorizations to it. Within a cluster, a Namespace also provides a scope for Pods, Services, and Deployments.
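As a sketch of how a Namespace carves out resources, the following builds a Kubernetes Namespace plus a ResourceQuota as plain Python dicts. The object shapes follow the core/v1 Kubernetes API; the team name and limits are hypothetical placeholders.

```python
# Sketch: a Namespace and a ResourceQuota that gives one team a bounded,
# fair share of the cluster. Names and limits are hypothetical.
namespace = {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {"name": "team-a"},
}

quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "team-a-quota", "namespace": "team-a"},
    "spec": {
        # Hard caps enforced across everything created in this namespace.
        "hard": {
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "pods": "20",
        },
    },
}

print(namespace["metadata"]["name"], quota["spec"]["hard"]["pods"])
```

Applied together, any workload created in `team-a` is counted against these limits, which is exactly the "fair allocation" role described above.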

Image Courtesy - VMware

A vSphere Namespace is a logical object created on the vSphere Supervisor Cluster. This object keeps track of resource assignments (compute, memory, storage, and network), as well as access control for Kubernetes resources like containers and virtual machines. These vSphere Namespaces have no relation to the Kubernetes namespaces that are created inside a TKG cluster provisioned by the Supervisor Cluster.


Supervisor Cluster

The Supervisor Cluster is a privileged Kubernetes cluster managed by vSphere that greatly enhances the vSphere cluster's capabilities. When workload management is enabled, a three-node virtual machine Supervisor Cluster is deployed, which acts as the control plane. The Kubernetes worker agents, Spherelets, are integrated directly into the ESXi hypervisor to form the Supervisor Cluster. This cluster makes use of the vSphere Pod Service to execute container workloads natively on the vSphere host, leveraging the ESXi hypervisor's security, availability, and performance.


Workload Management

Workload Management is the vSphere with Kubernetes feature that enables you to manage namespaces. By using Workload Management, you can leverage both Kubernetes and vSphere functionality. Once a vSphere cluster is configured for workload management, namespaces can be created, which provide compute, networking, and storage resources for Kubernetes applications.


With this, I'll conclude this post.


I hope this introductory blog is helpful for those of you who are new to this technology.


Thank you for reading!


*** Explore | Share | Grow ***
