VCAP6-NV – NSX Study Guide – Section 1 – Objective 1.2

Section 1 – Prepare VMware NSX Infrastructure

Objective 1.2 – Prepare Host Clusters for Network Virtualization

  • Prepare vSphere Distributed Switching for NSX
  • Prepare a Cluster for NSX
  • Add/Remove Hosts from Cluster
  • Configure the appropriate teaming policy for a given implementation
  • Configure VXLAN Transport parameters according to a deployment plan

VMware recommends that, prior to any NSX deployment, the engineer/administrator/consultant plans out the VDS configuration for the environment. A single host can be connected to multiple VDSs, and a single VDS can span multiple hosts across multiple clusters.

A common configuration I have worked with is a VDS per host type, i.e. Compute, Management and Edge, though this could potentially be scaled down. If you share Management and Edge then you could use just two VDSs, and if all components sit in a single cluster (ROBOs are a good example of this) then you could use a single VDS.

The overall recommendation is that careful planning is required, but for the three NSX ‘sizes’ one to three VDSs would be suitable. The design of the host VDS switching is also dependent on the physical network topology and potentially your pod design. As always, research and testing are king; there isn’t a single right answer for every environment.

When creating a large-scale NSX deployment it’s also important to think about external connectivity. Below is an example host/cluster and VDS layout for such an environment. It has 8 hosts in each rack (very conservative) and 4 clusters – 2 x Compute (with varying CPU and memory resources), 1 x Management and 1 x Edge – each split across racks for the least amount of disruption upon a rack failure whilst still allowing for the most convenient traffic path.

Multi-Rack and VDS

Now on to preparing one of these clusters for NSX –

First, let’s look at the prerequisites for preparing the hosts for NSX…

  • Register vCenter with NSX Manager and deploy the NSX Controllers
  • Verify that DNS forward and reverse lookups work for NSX Manager
  • Verify that DNS forward and reverse lookups work for vCenter Server
  • Verify that hosts can connect to the vCenter Server on port 80
  • Verify that the time on vCenter Server, the ESXi hosts and NSX Manager is synchronized
  • For each host cluster that will participate, verify that the hosts within the cluster are attached to a common VDS
  • vSphere Update Manager (VUM) must be disabled
  • The cluster must be in a resolved state (this just means that the Resolve option isn’t available under the Actions menu for the cluster)
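Two of these checks – DNS resolution and vCenter reachability on port 80 – can be scripted from any Linux management box. A minimal sketch, assuming hypothetical FQDNs (`nsxmanager.lab.local`, `vcsa.lab.local`) that you would substitute with your own:

```shell
#!/usr/bin/env bash
# Hedged sketch of scripted prerequisite checks; the FQDNs below are
# hypothetical placeholders - substitute your NSX Manager and vCenter.
for fqdn in nsxmanager.lab.local vcsa.lab.local; do
  if getent hosts "$fqdn" >/dev/null; then
    echo "DNS OK:   $fqdn"
  else
    echo "DNS FAIL: $fqdn"
  fi
done
# Hosts must reach vCenter on TCP 80 (bash /dev/tcp probe, 2s timeout)
if timeout 2 bash -c '</dev/tcp/vcsa.lab.local/80' 2>/dev/null; then
  echo "vCenter port 80 OK"
else
  echo "vCenter port 80 FAIL"
fi
```

Reverse lookups and NTP sync are easiest to verify from the NSX Manager and host consoles themselves.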

Now, to start the host preparation, we need to navigate to Home > Networking and Security > Installation and select the Host Preparation tab.

Then, for any clusters on which you want to use NSX logical switching, routing or firewalling, select the Actions drop-down and select Install. This begins the installation of the VIBs NSX needs to work: esx-vsip and esx-vxlan.

Depending on your environment and its topology you may need to complete this on multiple clusters. VMware’s recommendation is, where possible, to dedicate a cluster to each role (Management, Edge and Compute).

Once the install is complete the status column should show a green check mark. You can then SSH to the host and confirm that the VIBs have been installed by running:

esxcli software vib list | grep esx

This will show the two VIBs (esx-vsip and esx-vxlan), their versions, the publisher and the date they were created, and that’s it. No reboot is required for host preparation, but one will be needed when the VIBs are removed, such as when removing a host from an NSX-prepared cluster or unpreparing all hosts in a cluster.
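The grep filter works because esxcli prints one VIB per line. As a hedged illustration (the sample lines and version strings below are invented, formatted like esxcli output), a slightly tighter pattern than plain `esx` isolates just the two NSX VIBs without matching things like esx-base:

```shell
# Hypothetical capture of 'esxcli software vib list' (versions invented);
# grepping for the specific VIB names avoids matching esx-base etc.
sample='esx-base   6.0.0-2.34.3620759  VMware  VMwareCertified  2016-03-01
esx-vsip   6.2.4-0.0.4292526   VMware  VMwareCertified  2016-08-08
esx-vxlan  6.2.4-0.0.4292526   VMware  VMwareCertified  2016-08-08'
printf '%s\n' "$sample" | grep -E 'esx-(vsip|vxlan)'
```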

Adding and removing hosts from an NSX-prepared cluster is the same as for any other VMware cluster, but as mentioned above the removal of the NSX VIBs requires a reboot of the host, so plan your removal of hosts carefully.

For NSX to connect to other hosts and complete its VXLAN magic we need to have some VTEPs. VTEP stands for VXLAN Tunnel EndPoint. I’m not going to go into much more detail about VTEPs in this section; just know that they are the parts of NSX that allow hosts to communicate, make VXLAN work and bring all the benefits that VXLAN offers. The number of VTEPs you have in an environment is heavily dependent on the design as a whole and on the NSX vSwitch uplink teaming policy.

Teaming policies for the NSX vSwitch are the same as for a VDS; however, below is a little table showing the differences between the teaming types. Some aren’t supported by NSX, so it’s worth knowing what your options are and where to use each one.

NSX vSwitch Teaming Policy

Originating Port ID – This is the default option for teaming uplinks on a vSwitch and, with NSX, personally I think one of the best. It supports multiple VTEPs, with each VTEP pinned to a particular vmnic; this means VXLAN traffic crossing the network can be balanced across any VTEPs that the controllers know about. It is also one of the simplest to scale out and allows for all three VXLAN replication modes.

Source MAC Hash – Another option that requires little in the way of configuration. It allows for multiple VTEPs, works with all three VXLAN modes and performs a calculation on each packet going through it, so it provides more even balancing of traffic than Originating Port ID.

LACP – If you know anything about LACP on a VDS you will know that, provided the upstream switch(es) support LACP, it allows for the best balancing of traffic across the NICs in the team. Unfortunately it does require that the networking kit support that connectivity option (for example, Cisco’s UCS platform doesn’t support LACP from the host to the Fabric Interconnect; if you have ever worked with UCS I’m sure you already know the pain). It also only supports the Hybrid and Multicast modes for VXLAN, but bearing in mind you are already making changes to the physical network it shouldn’t be too much of a stretch to at least configure it for Hybrid mode.

IP Hash – IP hashing takes the source and destination IP addresses and puts them through a mathematical equation for each and every packet to determine which uplink is used. It does mean that a single VM talking to multiple VMs can use more bandwidth than a single physical NIC provides, but it requires that the physical switch ports be combined into a port channel/aggregation bond/trunk and configured for the same type of hashing algorithm.
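To make the hash-based policies above concrete, here is a toy sketch – not VMware’s actual algorithm – that picks an uplink by hashing the source/destination IP pair (via `cksum`, standing in for the real hash) modulo the number of uplinks:

```shell
#!/usr/bin/env bash
# Toy illustration of IP-hash uplink selection; cksum stands in for
# the real hashing algorithm, which this does NOT reproduce.
uplinks=2
pick_uplink() {
  local h
  h=$(printf '%s-%s' "$1" "$2" | cksum | cut -d' ' -f1)
  echo $((h % uplinks))
}
# The same src/dst pair always lands on the same uplink...
pick_uplink 10.0.0.1 10.0.0.2
# ...while different destination pairs may hash to different uplinks,
# which is how one VM talking to many peers can use both NICs.
pick_uplink 10.0.0.1 10.0.0.3
```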

Explicit Failover – This does exactly what it says on the tin: you have a single active link and a standby link, and if the primary fails traffic begins to flow on the secondary. There is no support for multiple VTEPs, for obvious reasons, but it can be useful if high-speed ports are at a premium on the host, e.g. 1 x 40GbE (primary) and 1 x 10GbE (secondary).

Physical NIC Load – Quick and to the point: NSX does not support load balancing based on physical NIC load, and I wouldn’t hold out hope for that changing any time soon. The other teaming policies meet every other requirement you could have for the environment without needing the physical NIC load option.

Finally, we are going to quickly go over how to configure the VXLAN transport parameters to cover off the final part of this objective.

In vCenter, navigate to Home > Networking and Security > Installation and select the Host Preparation tab.

NSX VXLAN Configuration 1

Click Not Configured in the VXLAN column for the cluster, or select the cluster and use the Actions drop-down.

NSX VXLAN Configuration 2

Set up the logical network. This involves selecting a VDS, a VLAN ID, an MTU size, an IP addressing mechanism and a NIC teaming policy.

NSX VXLAN Configuration 3

Click OK, then wait for the VTEPs to be deployed on the hosts in that cluster. Once complete (this should only take a few seconds) it will show as Configured.

NSX VXLAN Configuration 4
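The MTU value in the wizard matters because VXLAN encapsulation adds roughly 50 bytes of outer Ethernet/IP/UDP/VXLAN headers to each frame, which is why NSX defaults the transport MTU to 1600. The arithmetic, sketched with typical values:

```shell
#!/usr/bin/env bash
# VXLAN adds ~50 bytes of encapsulation headers to each frame, so the
# transport network MTU must cover payload + overhead. Typical values:
payload_mtu=1500      # guest/VM frame size
vxlan_overhead=50     # approximate encapsulation overhead
transport_mtu=1600    # NSX default for the VTEP port group
if [ $((payload_mtu + vxlan_overhead)) -le "$transport_mtu" ]; then
  echo "MTU OK"
else
  echo "MTU too small - increase the transport MTU"
fi
```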

Links

https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.networking.doc/GUID-959E1CFE-2AE4-4A67-B4D4-2D2E13765715.html

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.2/nsx_62_install.pdf

 

 
