In our previous blog post, we explored the evolution of network virtualization through VMware's NSX platform, discussing NSX-V and NSX-T. In this article, we will take a deep dive into NSX-T architecture, understanding its key components and how they work together to enable advanced network virtualization and security.
VMware NSX-T is built on a scalable and distributed architecture that decouples networking and security services from the underlying physical infrastructure. This enables organizations to achieve greater flexibility, scalability, and agility in their virtualized environments. The architecture of NSX-T comprises various components that work in harmony to provide a comprehensive networking and security solution.
The architectural diagram below applies to older versions of NSX (NSX-T v2.3 and earlier).
1. Consumption Platform -
The consumption of NSX Data Center for vSphere can be driven directly through the NSX Manager user interface, which is available in the vSphere Web Client. Typically, end users tie network virtualization to their cloud management platform (CMP) for deploying applications. NSX Data Center for vSphere provides rich integration with virtually any CMP through REST APIs. Out-of-the-box integration is also available through VMware vRealize Automation Center, vCloud Director, and OpenStack with the Neutron plug-in.
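The snippet below is a minimal sketch of that API-driven consumption model using Python and the requests library. It calls the NSX-T management-plane endpoint /api/v1/logical-switches; the Manager hostname and credentials are placeholders for illustration, not values from a real environment.

```python
# Minimal sketch: query the NSX-T management-plane REST API with Python's requests library.
# The hostname and credentials below are illustrative placeholders.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")                       # replace with real credentials

# List logical switches known to the Manager.
resp = requests.get(
    f"{NSX_MANAGER}/api/v1/logical-switches",
    auth=AUTH,
    verify=False,  # lab-only: tolerate a self-signed Manager certificate
)
resp.raise_for_status()

for ls in resp.json().get("results", []):
    print(ls["display_name"], ls.get("vni"))
```

A CMP integration ultimately does the same thing at scale: every action available in the UI is also available as a REST call.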
2. NSX-T Management Plane -
The management plane in NSX-T serves as the central control point for managing and configuring the entire NSX-T environment. It consists of the NSX Manager, which provides a single pane of glass for administrators to manage their virtual networks, security policies, and other NSX-T features.
The NSX Manager is installed as a virtual appliance on an ESXi host in your vCenter Server environment. In NSX Data Center for vSphere, NSX Manager and vCenter have a one-to-one relationship: for every instance of NSX Manager, there is one vCenter Server.
The NSX Manager communicates with other NSX-T components such as NSX Controllers, Edge Nodes, and Transport Nodes to orchestrate and enforce network and security policies across the infrastructure. It provides a user-friendly graphical interface and API endpoints for administrators to interact with the NSX-T environment.
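As a small illustration of the Manager acting as the single API entry point for the whole environment, the hedged sketch below asks it which transport nodes are currently registered in the fabric. Again, the hostname and credentials are placeholders.

```python
# Sketch: ask the NSX Manager which transport nodes it currently manages.
# Hostname and credentials are illustrative placeholders.
import requests

session = requests.Session()
session.auth = ("admin", "password")
session.verify = False  # lab-only: tolerate the Manager's self-signed certificate

resp = session.get("https://nsx-manager.example.com/api/v1/transport-nodes")
resp.raise_for_status()

for node in resp.json().get("results", []):
    # Each entry describes a hypervisor or Edge node participating in the NSX-T fabric.
    print(node["id"], node.get("display_name"))
```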
3. NSX-T Control Plane -
The control plane in NSX-T is responsible for establishing and maintaining the logical network and security overlays. It includes the NSX Controller cluster, a set of distributed control-plane nodes that control forwarding behavior and enforce policies in the data plane.
It is the central control point for all logical switches within a network and maintains information about all hosts, logical switches (VXLANs), and distributed logical routers. The NSX Controller cluster is responsible for managing the distributed switching and routing modules in the hypervisors. The controller does not have any data-plane traffic passing through it. Controller nodes are deployed in a cluster of three members to enable high availability and scale, and a failure of a controller node does not impact data-plane traffic.
A controller cluster has several roles, including:
API provider
Persistence server
Switch manager
Logical manager
Directory server
Each role has a master controller node. If a master controller node for a role fails, the cluster elects a new master for that role from the available NSX Controller nodes. The new master NSX Controller node for that role reallocates the lost portions of work among the remaining NSX Controller nodes.
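The sketch below is purely illustrative (it is not VMware's actual election or sharding logic), but it captures the idea described above: each role has one master node, and when that node fails the surviving cluster members elect a new master for the orphaned role and take over its share of work.

```python
# Illustrative sketch only -- not VMware's actual election or sharding logic.
# It models the idea from the text: each controller role has one master, and when
# that node fails the surviving members elect a new master for the role.

ROLES = ["api_provider", "persistence_server", "switch_manager",
         "logical_manager", "directory_server"]

class ControllerCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)                       # e.g. three controller nodes
        # Spread role mastership across the cluster members round-robin.
        self.masters = {role: self.nodes[i % len(self.nodes)]
                        for i, role in enumerate(ROLES)}

    def handle_node_failure(self, failed):
        self.nodes.remove(failed)
        for role, master in self.masters.items():
            if master == failed:
                # Elect a new master for the orphaned role from the survivors
                # (here simply the node currently holding the fewest roles).
                new_master = min(self.nodes,
                                 key=lambda n: list(self.masters.values()).count(n))
                self.masters[role] = new_master

cluster = ControllerCluster(["controller-1", "controller-2", "controller-3"])
cluster.handle_node_failure("controller-1")
print(cluster.masters)   # the remaining two nodes now share all five roles
```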
NSX Data Center for vSphere supports three logical switch control plane modes: multicast, unicast, and hybrid. Using a controller cluster to manage VXLAN-based logical switches eliminates the need for multicast support from the physical network infrastructure. You don’t have to provision multicast group IP addresses, and you also don’t need to enable PIM routing or IGMP snooping features on physical switches or routers. Thus, the unicast and hybrid modes decouple NSX from the physical network.

VXLANs in unicast control-plane mode do not require the physical network to support multicast in order to handle the broadcast, unknown unicast, and multicast (BUM) traffic within a logical switch. Unicast mode replicates all the BUM traffic locally on the host and requires no physical network configuration. In hybrid mode, some of the BUM traffic replication is offloaded to the first-hop physical switch to achieve better performance. Hybrid mode requires IGMP snooping on the first-hop switch and access to an IGMP querier in each VTEP subnet.
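To make unicast mode concrete, here is a conceptual sketch of head-end replication: the source VTEP sends one unicast copy of a BUM frame to every other VTEP that has joined the same logical switch, so the physical fabric never has to carry multicast. This is a model of the behaviour, not NSX data-plane code; the VNIs and addresses are made up.

```python
# Conceptual model of unicast-mode BUM handling (head-end replication), not NSX code:
# the source host sends one unicast copy of the frame to every other VTEP that
# participates in the same logical switch, so the physical fabric needs no multicast.

def replicate_bum_frame(frame, vni, vtep_table, source_vtep):
    """Return the list of (destination VTEP, frame) unicast copies to send."""
    remote_vteps = [v for v in vtep_table.get(vni, []) if v != source_vtep]
    return [(vtep, frame) for vtep in remote_vteps]

# VTEPs that have joined logical switch (VNI) 5001, as reported by the control plane.
vtep_table = {5001: ["10.0.1.11", "10.0.1.12", "10.0.2.21"]}

copies = replicate_bum_frame(b"arp-request", 5001, vtep_table, source_vtep="10.0.1.11")
print(copies)   # two unicast copies: one per remote VTEP
```

In hybrid mode, part of this replication fan-out is handed off to the first-hop physical switch instead of being done entirely on the source host.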
Dynamic routing between the NSX environment and the physical network is handled at the edge: NSX Edge nodes (and, in NSX Data Center for vSphere, the distributed logical router control VM) peer with upstream routers using protocols such as Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF). The NSX Controllers themselves do not run these routing protocols; they take the resulting routing information and distribute the forwarding state to the transport nodes. They also communicate with the NSX Manager to exchange configuration and policy information.
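As a hedged example of how that edge routing is driven through the management plane, the sketch below creates a BGP neighbor on a Tier-0 gateway via the NSX-T Policy API. The URL layout, object IDs, and field names are assumptions based on the /policy/api/v1/infra resource tree and may differ between NSX-T versions; the hostname, gateway and neighbor IDs, and addresses are placeholders.

```python
# Hedged sketch: configure a BGP neighbor on a Tier-0 gateway through the NSX-T
# Policy API. The URL layout and field names are assumptions based on the
# /policy/api/v1/infra resource tree and may differ between NSX-T versions;
# hostname, gateway/locale-service IDs, and addresses are placeholders.
import requests

session = requests.Session()
session.auth = ("admin", "password")
session.verify = False  # lab-only

neighbor = {
    "neighbor_address": "192.0.2.1",   # upstream physical router (documentation prefix)
    "remote_as_num": "65001",
}

resp = session.put(
    "https://nsx-manager.example.com/policy/api/v1/infra/tier-0s/t0-gw/"
    "locale-services/default/bgp/neighbors/upstream-router-1",
    json=neighbor,
)
resp.raise_for_status()
print(resp.json())
```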
4. NSX-T Data Plane -
The data plane in NSX-T handles the actual forwarding and processing of network traffic. It consists of Transport Nodes, which can be hypervisors, bare-metal servers, or even virtual machines. Transport Nodes run the NSX-T Data Plane agent, which provides the connectivity and forwarding capabilities required for network virtualization and micro-segmentation.
The NSX-T Data Plane agent, running on each Transport Node, intercepts and processes traffic based on the policies defined in the NSX-T environment. It ensures that network traffic flows through the appropriate logical switches, distributed routers, and security services, enabling granular control over communication between workloads.
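To make the idea of distributed, per-workload enforcement concrete, the following conceptual sketch (not actual NSX-T data-plane code) shows how a transport node might evaluate a flow against an ordered set of micro-segmentation rules before forwarding it. The tier names and ports are invented for illustration.

```python
# Conceptual sketch of distributed firewall evaluation on a transport node --
# an illustration of micro-segmentation, not the actual NSX-T data-plane code.
# Rules are evaluated in order for every flow; the first match wins.

RULES = [
    # (source group, destination group, destination port, action)
    ("web-tier", "app-tier", 8443, "ALLOW"),
    ("app-tier", "db-tier", 3306, "ALLOW"),
    ("any",      "any",     None, "DROP"),   # default deny between workloads
]

def evaluate_flow(src_group, dst_group, dst_port):
    """Return the action of the first rule matching this flow."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src in (src_group, "any") \
                and rule_dst in (dst_group, "any") \
                and rule_port in (dst_port, None):
            return action
    return "DROP"

print(evaluate_flow("web-tier", "app-tier", 8443))   # ALLOW
print(evaluate_flow("web-tier", "db-tier", 3306))    # DROP: no direct web-to-db path
```

Because every transport node applies the same policy locally, traffic between two workloads is filtered at the source and destination hosts rather than being hairpinned through a central firewall.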
With its management, control, and data plane components, NSX-T provides a comprehensive platform for managing and securing multi-cloud and containerized environments. By leveraging its advanced services and distributed architecture, organizations can achieve greater control, scalability, and security in their virtualized networking infrastructure.
Thank you for reading!
*** Explore | Share | Grow ***