markjramos

NSX-v & NSX-T

Updated: Mar 18, 2023


VMware NSX is a virtual networking and security software offering created after VMware acquired Nicira in 2012. NSX allows an admin to virtualize network components, enabling them to develop, deploy and configure virtual networks and switches through software rather than hardware. A software layer sits on top of the hypervisor, allowing an administrator to divide a physical network into multiple virtual networks.


With the latest release of the product, NSX-T Data Center, network virtualization can be added to both ESXi and KVM hypervisors, as well as to bare-metal servers. Containerized workloads in a Kubernetes cluster can also be virtualized and protected. NSX-T Data Center also offers Network Function Virtualization, with which functions such as a firewall, load balancer and VPN can be run in the virtualization software stack.

VMware vRealize Network Insight is a network operations management tool that enables an admin to plan microsegmentation and check the health of VMware NSX. vRealize Network Insight relies on technology from VMware's 2016 acquisition of Arkin. It collects information from NSX Manager and displays errors in its user interface, which helps when troubleshooting an NSX environment.


NSX-v & NSX-T

NSX for vSphere (NSX-v) is tightly integrated with VMware vSphere and requires deployment of VMware vCenter Server. VMware NSX-v is specific to vSphere hypervisor environments and was developed before NSX-T.

NSX-T (NSX-Transformers) was designed for different virtualization platforms and multi-hypervisor environments and can also be used in cases where NSX-v is not applicable. While NSX-v supports SDN only for VMware vSphere, NSX-T also supports the network virtualization stack for KVM, Docker, Kubernetes, and OpenStack, as well as AWS native workloads. VMware NSX-T can be deployed without a vCenter Server and is well suited to heterogeneous compute environments.




The main components of VMware NSX are NSX Manager, NSX controllers, and NSX Edge gateways.

NSX Manager is the centralized component of NSX used for network management. NSX Manager can be deployed as a VM on one of the ESXi servers managed by vCenter (from an OVA template). With NSX-v, NSX Manager can work with only one vCenter Server, whereas NSX Manager for NSX-T can be deployed as an ESXi VM or a KVM VM and can work with multiple vCenter Servers at once.

  • NSX Manager for vSphere is based on Photon OS (similar to the vCenter Server Appliance), while NSX-T Manager runs on the Ubuntu operating system.

  • NSX controllers. The NSX Controller is a distributed state management system used for the overlay transport tunnels and for controlling virtual networks; it can be deployed as a VM on ESXi or KVM hypervisors. The NSX Controller manages all logical switches within the network and handles information about VMs, hosts, switches and VXLANs. Deploying three controller nodes ensures data redundancy in case one NSX Controller node fails.

  • NSX Edge is a gateway service that provides access to physical and virtual networks for VMs. NSX Edge can be installed as a distributed virtual router or as a services gateway. It can provide the following services: dynamic routing, firewall, Network Address Translation (NAT), Dynamic Host Configuration Protocol (DHCP), Virtual Private Network (VPN), load balancing, and high availability.


Deployment Options

The concept of deployment is quite similar for both NSX-v and NSX-T. You should perform the following steps to deploy NSX:

  • Deploy NSX Manager as a VM on an ESXi host using a virtual appliance. Be sure to register NSX Manager with vCenter Server (for NSX-v). If you are using NSX-T, NSX Manager can also be deployed as a virtual appliance on a KVM host, and VMware NSX-T allows you to create a cluster of NSX Managers.

  • Deploy three NSX controllers and create an NSX controller cluster.

  • Install VIBs (kernel modules) on ESXi hosts to enable the distributed firewall, distributed routing and VXLAN if you are using NSX-v. If you are using NSX-T, kernel modules must also be installed on the KVM hypervisors.

  • Install NSX Edge as a VM on ESXi (for NSX-v and NSX-T). If you are using NSX-T and it is not possible to install Edge as a virtual machine on ESXi, Edge can be deployed on a physical server. Installing Edge as a VM on KVM hypervisors is not supported at this time (as of NSX-T 2.3). If you need to deploy Edge on a physical server, check the hardware compatibility list (important for CPUs and NICs) before doing so. A short verification sketch follows this list.
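
Once NSX Manager is deployed and reachable, a quick programmatic sanity check can confirm that the management plane is up and that hosts have been prepared as transport nodes. The following Python sketch is only an illustration: it assumes the NSX-T management-plane REST API (paths such as /api/v1/cluster/status and /api/v1/transport-nodes, which vary by NSX-T version) and uses placeholder addresses and credentials.

# Minimal sketch: verify NSX-T Manager is reachable and list transport nodes.
# Assumes the NSX-T management-plane REST API; adjust host, credentials,
# and certificate verification for your environment.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical address
AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

session = requests.Session()
session.auth = AUTH
session.verify = False   # lab only; use a trusted CA certificate in production

# Overall state of the management/control cluster
status = session.get(f"{NSX_MANAGER}/api/v1/cluster/status").json()
print("Cluster status:", status.get("mgmt_cluster_status", {}).get("status"))

# Hosts and Edge nodes that have been prepared as transport nodes
nodes = session.get(f"{NSX_MANAGER}/api/v1/transport-nodes").json()
for node in nodes.get("results", []):
    print(node.get("display_name"), node.get("id"))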


VXLAN vs GENEVE

NSX-v uses the VXLAN encapsulation protocol, while NSX-T uses GENEVE, a more modern protocol.


VXLAN

VXLAN uses MAC-over-IP encapsulation, and its approach to network isolation differs from the VLAN technique. A traditional VLAN deployment is limited to 4094 networks according to the 802.1Q standard, and network isolation is performed at layer 2 of the physical network by adding 4 bytes to the Ethernet frame header. The maximum number of virtual networks for VXLAN is 2^24 (over 16 million).

The VXLAN Network Identifier is used to mark each virtual network in this case. The layer-2 frames of the overlay network are encapsulated within UDP datagrams transmitted over the physical network. The UDP destination port number is 4789.

The VXLAN header consists of the following parts:


  • 8 bits are used for flags. The I flag must be set to 1 for the VXLAN Network ID (VNI) to be valid. The other 7 bits are R fields that are reserved and must be set to zero on transmission; they are ignored on receipt.

  • The VXLAN Network Identifier (VNI), also known as the VXLAN Segment ID, is a 24-bit value used to identify the individual overlay network over which VMs communicate with each other.

  • Reserved fields (24-bit and 8-bit) must be set to zero and ignored on receipt.

  • The size of the VXLAN header is fixed at 8 bytes. Using jumbo frames with an MTU of 1600 bytes or more is recommended for VXLAN (a short packing sketch follows this list).
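
To make the layout concrete, here is a minimal Python sketch (standard library only) that packs the 8-byte VXLAN header described above. It is an illustration of the header format, not production code.

# Minimal sketch of the 8-byte VXLAN header (RFC 7348 layout):
# 8 bits of flags (I flag = 0x08), 24 reserved bits, 24-bit VNI, 8 reserved bits.
import struct

VXLAN_UDP_PORT = 4789   # standard destination port for VXLAN

def pack_vxlan_header(vni: int) -> bytes:
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                  # I flag set -> VNI is valid
    first_word = flags << 24      # flags followed by 24 reserved (zero) bits
    second_word = vni << 8        # 24-bit VNI followed by 8 reserved (zero) bits
    return struct.pack("!II", first_word, second_word)

header = pack_vxlan_header(vni=5001)
print(len(header), header.hex())  # 8 bytes, e.g. 0800000000138900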

GENEVE

The GENEVE header looks a lot like VXLAN and has the following structure:

  • A compact tunnel header encapsulated in UDP over IP.

  • A small fixed tunnel header used to provide control information, as well as a base level of functionality and interoperability.

  • Variable-length options that make it possible to implement future innovations.

NSX-T uses GENEVE (GEneric NEtwork Virtualization Encapsulation) as a tunneling protocol that preserves traditional offload capabilities available on NICs (Network Interface Controllers) for the best performance.


Additional metadata can be added to the overlay headers, which improves context handling for processing information such as end-to-end telemetry, data tracking, encryption, and security on the data-transfer layer. The additional information in the metadata is carried as TLVs (Type, Length, Value). GENEVE was developed by VMware, Intel, Red Hat and Microsoft, and is based on the best concepts of the VXLAN, STT and NVGRE encapsulation protocols.


The MTU value for jumbo frames must be at least 1700 bytes when using GENEVE encapsulation, because of the additional variable-length metadata field in GENEVE headers (recall that an MTU of 1600 or higher is used for VXLAN).
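
For comparison with the VXLAN sketch above, the following Python sketch packs the 8-byte GENEVE base header (without options) and works through a rough outer-header budget that shows where the 1600-byte versus 1700-byte MTU guidance comes from. The option sizes and overhead numbers are illustrative assumptions, not values taken from NSX.

# Minimal sketch of the 8-byte GENEVE base header (RFC 8926 layout), plus a
# rough MTU budget comparing VXLAN and GENEVE overhead.
import struct

GENEVE_UDP_PORT = 6081    # standard destination port for GENEVE
ETH_BRIDGING = 0x6558     # protocol type for encapsulated Ethernet frames

def pack_geneve_base_header(vni: int, opt_len_words: int = 0) -> bytes:
    # Ver(2) + Opt Len(6) | O(1) C(1) Rsvd(6) | Protocol Type(16) | VNI(24) + Rsvd(8)
    first = (0 << 6) | (opt_len_words & 0x3F)
    flags = 0
    return struct.pack("!BBHI", first, flags, ETH_BRIDGING, vni << 8)

print(pack_geneve_base_header(vni=5001).hex())   # 8 bytes, options not included

# Rough outer-header budget for a 1500-byte inner payload (illustrative numbers):
inner_frame = 1500 + 14                # payload + inner Ethernet header
outer = 20 + 8                         # outer IPv4 + UDP headers
vxlan_total  = inner_frame + outer + 8         # fixed 8-byte VXLAN header
geneve_total = inner_frame + outer + 8 + 64    # base header + assumed option space
print(vxlan_total, geneve_total)       # ~1550 and ~1614 -> 1600 / 1700 leave headroom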

NSX-v and NSX-T are not compatible due to the overlay encapsulation difference explained in this section.


Layer-2 Networking

Now that you know how virtual layer-2 Ethernet frames are encapsulated over IP networks, it's time to explore the implementation of virtual layer-2 networks in NSX-v and NSX-T.


Transport nodes and virtual switches

Transport nodes and virtual switches represent NSX data transferring components.

A transport node (TN) is an NSX-compatible device that participates in traffic transmission and the NSX network overlay. A node must contain a hostswitch to be able to serve as a transport node.


NSX-v requires the vSphere Distributed Switch (VDS), as is usual in vSphere. Standard virtual switches cannot be used with NSX-v.


NSX-T requires you to deploy an NSX-T distributed virtual switch (N-VDS). Open vSwitch (OVS) is used on KVM hosts and VMware vSwitches are used on ESXi hosts for this purpose.


The N-VDS (a virtual distributed switch, previously known as a hostswitch) is the software NSX component on the transport node that performs traffic transmission. The N-VDS is the primary component of the transport node's data plane; it forwards traffic and owns at least one physical network interface controller (NIC). The NSX switches (N-VDS) of different transport nodes are independent, but they can be grouped by assigning them the same name for centralized management.


On ESXi hypervisors, the N-VDS is implemented using the VMware vSphere Distributed Switch through the NSX-vSwitch module loaded into the hypervisor kernel. On KVM hypervisors, the hostswitch is implemented by the Open vSwitch (OVS) module.

Transport zones are available for both NSX-v and NSX-T. A transport zone defines the limits of logical network distribution. Each transport zone is linked to its NSX switch (N-VDS). Transport zones in NSX-T are not linked to clusters.


There are two types of transport zones in VMware NSX-T due to GENEVE encapsulation: Overlay and VLAN. In VMware NSX-v, a transport zone defines the distribution limits of VXLAN only.
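
The transport zone type is easy to confirm programmatically. The sketch below assumes the NSX-T management-plane REST API (the /api/v1/transport-zones path and the transport_type field may differ between versions) and placeholder credentials.

# Sketch: list NSX-T transport zones and whether they are OVERLAY or VLAN zones.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical address
AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials

resp = requests.get(f"{NSX_MANAGER}/api/v1/transport-zones",
                    auth=AUTH, verify=False)    # lab only; verify certs in production
resp.raise_for_status()

for tz in resp.json().get("results", []):
    print(tz.get("display_name"), tz.get("transport_type"))   # e.g. OVERLAY or VLAN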


Logical switch replication modes

When two virtual machines residing on different hosts communicate directly, unicast traffic is exchanged in encapsulated form between the two endpoint IP addresses assigned to the hypervisors, with no need for flooding. Sometimes, layer-2 traffic originated by a VM must be flooded in the same way as layer-2 traffic in traditional physical networks, for example when the sender does not know the MAC address of the destination network interface. This means that the same traffic (broadcast, unknown unicast, multicast) must be sent to all VMs connected to the same logical switch. If those VMs reside on different hosts, the traffic must be replicated to those hosts. Broadcast, unknown unicast and multicast traffic is also known as BUM traffic.

NSX-v supports Unicast mode, Multicast mode and Hybrid mode:


- Unicast: all BUM replication is performed by the hosts themselves over unicast tunnels, so no multicast configuration is required on the physical network.


- Multicast: replication of BUM traffic is offloaded to the physical network, which must support IP multicast (IGMP/PIM).


- Hybrid: replication local to a physical segment is offloaded to the physical network using layer-2 multicast (IGMP snooping), while unicast is used between segments.



NSX-T supports Unicast mode with two options (a small comparison sketch follows this list):

o Hierarchical two-tier replication (optimized, the same as for NSX-v)

o Head replication (not optimized)
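
To make the difference between the two options concrete, here is a small self-contained Python sketch that counts how many copies the source host sends for a single BUM frame under head replication versus hierarchical two-tier replication. The tunnel endpoints and their grouping into physical segments are invented sample data.

# Conceptual sketch: copies sent by the source host for one BUM frame
# under head replication vs. hierarchical two-tier replication.
from collections import defaultdict

# Hypothetical tunnel endpoints (TEPs) grouped by physical L3 segment.
teps = {
    "10.0.1.11": "segment-A", "10.0.1.12": "segment-A",
    "10.0.2.21": "segment-B", "10.0.2.22": "segment-B", "10.0.2.23": "segment-B",
    "10.0.3.31": "segment-C",
}

def head_replication(source: str) -> int:
    # The source host sends one unicast copy to every other TEP.
    return len([t for t in teps if t != source])

def two_tier_replication(source: str) -> int:
    # The source sends one copy per local TEP, plus one copy to a proxy TEP
    # in each remote segment; the proxy then replicates to its local peers.
    by_segment = defaultdict(list)
    for tep, segment in teps.items():
        by_segment[segment].append(tep)
    local = teps[source]
    local_copies = len(by_segment[local]) - 1
    remote_copies = len([s for s in by_segment if s != local])  # one proxy per remote segment
    return local_copies + remote_copies

print("head:", head_replication("10.0.1.11"))        # 5 copies from the source
print("two-tier:", two_tier_replication("10.0.1.11"))  # 1 local + 2 proxies = 3 copies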

ARP suppression reduces the amount of ARP broadcast traffic sent over the network and is available for Unicast and Hybrid traffic replication modes. Thus ARP suppression is available for both NSX-v and NSX-T.


When VM1 sends an ARP request to learn the MAC address of VM2, the ARP request is intercepted by the logical switch. If the switch already has the ARP entry for the target network interface of VM2, the switch sends the ARP response to VM1. Otherwise, the switch sends the ARP request to an NSX Controller. If the NSX Controller has the IP-to-MAC binding for the VM, the controller replies with that binding and the logical switch then sends the ARP response to VM1. If there is no ARP entry on the NSX Controller, the ARP request is re-broadcast on the logical switch.
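
The lookup chain described above can be summarized in a few lines of Python. This is a conceptual model only; the class and table names are invented and do not correspond to NSX code.

# Conceptual model of ARP suppression: switch table -> controller table -> re-broadcast.
from typing import Optional

class LogicalSwitch:
    def __init__(self, local_arp: dict, controller_arp: dict):
        self.local_arp = local_arp            # IP -> MAC entries cached on the switch
        self.controller_arp = controller_arp  # IP -> MAC bindings known to the controller

    def resolve(self, target_ip: str) -> Optional[str]:
        if target_ip in self.local_arp:
            return self.local_arp[target_ip]      # answered directly by the switch
        if target_ip in self.controller_arp:
            mac = self.controller_arp[target_ip]
            self.local_arp[target_ip] = mac       # cache the binding and answer
            return mac
        return None                               # fall back to re-broadcasting the request

switch = LogicalSwitch(local_arp={}, controller_arp={"192.168.10.20": "00:50:56:aa:bb:cc"})
print(switch.resolve("192.168.10.20"))   # suppressed: answered without flooding
print(switch.resolve("192.168.10.99"))   # None -> ARP request is re-broadcast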


NSX layer 2 bridging



Layer 2 bridging is useful for migrating workloads from overlay networks to VLANs, or for splitting subnets across physical and virtual workloads.


NSX-v: This feature works on the kernel level of a hypervisor on which a control VM is running.


NSX-T: A separate NSX-bridge node is created for this purpose. NSX bridge nodes can be assembled into clusters to improve fault tolerance of the entire solution.

In the NSX-v control VM, redundancy is implemented using a High Availability (HA) scheme: one copy of the VM is active while a second copy is on standby. If the active VM fails, it can take some time to fail over and bring the standby VM up as the active one. NSX-T does not have this disadvantage, since a fault-tolerant cluster is used instead of the active/standby scheme for HA.


The Routing Model

In cases where you are using VMware NSX, the following terms are used:


o East-west traffic refers to data transferred over the network within the datacenter. The name comes from the fact that horizontal lines on diagrams typically indicate local area network (LAN) traffic.

o North-south traffic refers to client-server traffic or traffic that moves between a datacenter and a location outside the datacenter (external networks). Vertical lines on the diagrams usually describe this type of network traffic.

o Distributed logical router (DLR) is a virtual router which can use static routes and dynamic routing protocols such as OSPF, IS-IS or BGP.

o Tenant refers to a customer or an organization that gets access to an isolated secure environment provided by a managed service provider (MSP). A large organization can use multi-tenant architecture by regarding each department as a single tenant. VMware NSX can be particularly useful for providing Infrastructure as a Service (IaaS).


Routing in NSX-v

NSX for vSphere uses the DLR (distributed logical router) and centralized routing. A routing kernel module on each hypervisor performs routing between logical interfaces (LIFs) on the distributed router.


Let’s consider, for example, the typical routing scheme for NSX-v, when you have a set of three segments: VMs running databases, VMs running application servers and VMs running web servers. VMs of these segments (sky-blue, green and deep-blue) are connected to a distributed logical router (DLR) which is in turn connected to external networks via edge gateways (NSX Edge).


If you are working with multiple tenants, you can use a multi-tier NSX Edge construction, or each tenant can have its own dedicated DLR and controller VM, the latter of which resides on the edge cluster. The NSX Edge gateway connects isolated, stub networks to shared (uplink) networks by providing common gateway services such as DHCP, VPN, NAT, dynamic routing, and Load Balancing. Common deployments of NSX Edge include in the DMZ, VPN Extranets, and multi-tenant Cloud environments where the NSX Edge creates virtual boundaries for each tenant.


If you need to transmit traffic from a VM located in segment A (blue) of the first tenant to segment A of the second tenant, traffic must pass through the NSX Edge gateway. In this case, there is no distributed routing, as traffic must pass the single point that is the designated NSX Edge gateway.



Consider also a deployment in which components are divided into clusters: a Management cluster, an Edge cluster, and a Compute cluster, each using two ESXi hosts in this example. If two VMs are running on the same ESXi host but belong to different network segments, traffic passes through the NSX Edge gateway located on another ESXi host in the Edge cluster. After routing, this traffic must be transmitted back to the ESXi host on which the source and destination VMs are running.


The route the traffic takes is not optimal in this case. The advantages of distributed routing cannot be used in the multi-tenant model with Edge gateways, resulting in greater latency for your network traffic.


Routing in NSX-T

NSX-T uses a two-tier distributed routing model to resolve the issues explained above. Both Tier-0 and Tier-1 routers are created on the transport nodes; the Tier-1 tier is not mandatory, but it is intended to improve scalability.


Traffic is transmitted along the most optimal path, since routing is performed on the ESXi or KVM hypervisor on which the VMs are running. The only case in which a fixed routing point must be used is when connecting to external networks; separate Edge nodes are deployed for this purpose on servers running hypervisors.


Additional services such as BGP, NAT, and Edge firewall can be enabled on Edge nodes, which can in turn be combined into a cluster to improve availability. NSX-T also provides faster failure detection. In simple terms, the best way to distribute routing is to route inside the virtualized infrastructure.
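
To give a sense of how the two tiers are expressed through the API, here is a hedged sketch against the NSX-T management-plane REST API. The /api/v1/logical-routers endpoint, the router_type values and the edge cluster reference are assumptions that vary across NSX-T versions; all credentials and IDs are placeholders.

# Sketch: create a Tier-0 and a Tier-1 logical router via the NSX-T MP API.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical address
AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials
EDGE_CLUSTER_ID = "EDGE-CLUSTER-UUID"           # placeholder Edge cluster reference

def create_router(name: str, router_type: str) -> dict:
    body = {"display_name": name, "router_type": router_type}
    if router_type == "TIER0":
        body["edge_cluster_id"] = EDGE_CLUSTER_ID   # Tier-0 needs Edge nodes for north-south
    resp = requests.post(f"{NSX_MANAGER}/api/v1/logical-routers",
                         json=body, auth=AUTH, verify=False)   # lab only
    resp.raise_for_status()
    return resp.json()

tier0 = create_router("t0-provider", "TIER0")   # handles north-south traffic
tier1 = create_router("t1-tenant-a", "TIER1")   # per-tenant distributed routing
print(tier0["id"], tier1["id"])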


IP addressing for virtual networks

When you configure NSX-v, you need to compose a plan of IP addressing inside NSX segments. Transit logical switches that link DLRs and Edge gateways must also be added in this case. If you are using a high number of Edge gateways, you should compose the IP addressing scheme for segments which are linked by these Edge gateways.


NSX-T, however, does not require these operations. All network segments between Tier-0 and Tier-1 obtain IP addresses automatically. No dynamic routing protocols are used; instead, static routes are used and the system connects the components automatically, which makes configuration easier: you don't need to spend a lot of time planning IP addressing for the service (transit) network components.


Integration for Traffic Inspection

NSX-v offers integration with third-party services such as agentless antivirus, advanced firewalling (next-generation firewalls), IDS (Intrusion Detection Systems), IPS (Intrusion Prevention Systems), and other types of traffic inspection services. Integration with these types of traffic inspection is performed at the hypervisor kernel layer using the protected VMCI (Virtual Machine Communication Interface) bus.

NSX-T does not provide these capabilities at this time.


Security

Kernel-level distributed firewalls can be configured for NSX-v and NSX-T, working on a VM virtual adapter level. Switch security options are available for both NSX types, but the “Rate-limit Broadcast & Multicast traffic” option is available only for NSX-T.


NSX-T allows you to apply rules in a more granular fashion, so transport node resources are used more efficiently. For example, you can apply rules based on the following objects: logical switch, logical port, or NSGroup. This can be used to reduce the rule-set configuration on logical switch, logical port or NSGroup instances to achieve higher levels of efficiency and optimization. You can also save scale space and rule lookup cycles, host multi-tenant deployments, and apply tenant-specific rules (rules that are applied to the workloads of the appropriate tenant).


The process of creating and applying rules is quite similar for NSX-v and NSX-T. The difference is that policies created for NSX-T are sent to all Controllers, where rules are converted to IP addresses, while in NSX-v, policies are transferred directly to the vShield Firewall Daemon (VSFWD).
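
For a concrete feel of the granular rule scoping, here is a hedged sketch that adds a distributed firewall rule applied to a single logical switch via the NSX-T management-plane firewall API. The /api/v1/firewall/sections path, the rule fields and all IDs are assumptions for illustration and differ between NSX-T versions.

# Sketch: add a distributed firewall rule applied only to one logical switch.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # hypothetical address
AUTH = ("admin", "VMware1!VMware1!")            # placeholder credentials
SECTION_ID = "FIREWALL-SECTION-UUID"            # placeholder DFW section
WEB_LS_ID = "WEB-LOGICAL-SWITCH-UUID"           # placeholder logical switch

rule = {
    "display_name": "allow-web-to-app",
    "action": "ALLOW",
    "direction": "IN_OUT",
    # Scope the rule to a single logical switch instead of the whole environment.
    "applied_tos": [{"target_id": WEB_LS_ID, "target_type": "LogicalSwitch"}],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/firewall/sections/{SECTION_ID}/rules",
    json=rule, auth=AUTH, verify=False)         # lab only; verify certs in production
resp.raise_for_status()
print(resp.json().get("id"))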
