VXLAN, NSX Controllers, and NSX Preparation

Date: Nov 18, 2016


Elver Sena Sosa introduces VXLAN, the NSX Controllers (one of the control planes of NSX), and how to prepare the vSphere environment for NSX.

Deploying NSX Manager and attaching it to vCenter are the first steps in allowing you to deploy your software-defined network. Your goal is to deploy logical switches and distributed logical routers, and to create and enforce security policies with the distributed firewall and Service Composer.

Before you can reach that goal, you need to deploy your NSX Controllers and tell NSX Manager which ESXi hosts will be part of the NSX domain.

This chapter covers all the steps needed to prepare your NSX domain. The chapter begins with a proper introduction of what VXLAN is.

Do I Know This Already?

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter or simply jump to the “Exam Preparation Tasks” section for review. If you are in doubt, read the entire chapter. Table 4-1 outlines the major headings in this chapter and the corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes.”

Table 4-1 Headings and Questions

Foundation Topic Section  |  Questions Covered in This Section
VXLAN  |  1-2
NSX Controllers  |  3-4
IP Pools  |  5
Host Preparation  |  6-7
Host Configuration  |  8-9
VNI Pools, Multicast Pools, and Transport Zones  |  10

  1. What is the source Layer 4 port number of a VXLAN frame?

    a. It is statically configured to TCP 8472.

    b. It is statically configured to UDP 8472.

    c. It is randomly generated by the VTEP.

    d. It is derived from the encapsulated frame.

  2. At least how many bytes does the VXLAN encapsulation add to the encapsulated frame?

    a. 50

    b. 100

    c. 1500

    d. 9000

  3. How many NSX universal controllers are required to be deployed in a production NSX environment?

    a. 1

    b. 2

    c. 3

    d. 4

  4. What NSX entity is responsible for slicing the distributed logical router?

    a. The NSX Manager

    b. The distributed router control virtual machine

    c. The API provider NSX Controller Master

    d. The Layer 3 NSX Controller Master

  5. What are two use cases of IP pools by NSX Manager? (Choose two.)

    a. To assign IPs to virtual machines in the virtual network.

    b. To assign the default gateway for VTEPs.

    c. To assign IPs to NSX Manager.

    d. To assign IPs to NSX Controllers.

  6. Which of the following is an action that takes place during host preparation?

    a. The NSX Manager tells vCenter to add the selected hosts in the NSX host clusters.

    b. The NSX Manager installs NSX modules on the ESXi hosts.

    c. vCenter adds the VXLAN VMkernel port to the ESXi hosts.

    d. The NSX Controller Master uploads the NSX configuration data to the ESXi hosts.

  7. Which NSX feature does not require logical networking preparation to be completed before it can be used?

    a. VXLAN

    b. Logical switches

    c. Distributed firewall

    d. Distributed logical routers

  8. How many vDS switches does NSX Manager support in a single host cluster?

    a. 1

    b. 2

    c. 32

    d. 128

  9. During host configuration you select a VMKNic teaming policy of enhanced LACP. How many VTEPs does NSX Manager create per ESXi host?

    a. 1

    b. 2

    c. As many dvUplinks as are configured on the vDS

    d. As many VMNICs as are installed on the ESXi hosts

  10. How many universal transport zones are supported in a cross vCenter NSX domain?

    a. 1

    b. 1 per NSX Manager in the cross vCenter NSX domain

    c. Up to the number of VNIs in the segment ID pool

    d. 1 per NSX universal controller

Foundation Topics

VXLAN Introduction

Multitier applications have long been designed to use separate Ethernet broadcast domains or virtual local area networks (VLANs) to separate tiers within the application. In a vSphere environment, the number of multitier applications can be quite large, which eats up the number of available VLANs and makes it challenging to scale the virtual environment. For example, if a client has 100 four-tier applications, the client may need 400 separate Ethernet broadcast domains or VLANs to support these applications. Now multiply that by 10 clients. You are basically hitting the limit on how many Ethernet broadcast domains you can support using VLANs. As the virtual machines (VMs) for these applications are distributed among multiple vSphere clusters or even different data centers, the Ethernet broadcast domains must be spanned across the physical network, necessitating the configuration of Spanning Tree Protocol to prevent Ethernet loops.

Virtual Extensible LAN (VXLAN) addresses the Layer 2 scaling challenges in today’s data centers by natively allowing for the transparent spanning of millions of distinct Ethernet broadcast domains over any IP physical network or IP transport, reducing VLAN sprawl and thus eliminating the need to enable Ethernet loop-preventing solutions such as Spanning Tree.

VXLAN

VXLAN is an open standard supported by many of the key data center technology companies, such as VMware. VXLAN is a Layer 2 encapsulation technology that substitutes VXLAN numbers for the VLAN numbers traditionally used to label Ethernet broadcast domains. A traditional Ethernet switch can support up to 2^12 (4096) Ethernet broadcast domains, or VLAN numbers. VXLAN supports 2^24 Ethernet broadcast domains, or VXLAN numbers; that is 16,777,216 Ethernet broadcast domains. A VXLAN number ID is referred to as a VNI. There is a one-to-one relationship between an Ethernet broadcast domain and a VNI: a single Ethernet broadcast domain can’t have more than one VNI, and two distinct Ethernet broadcast domains can’t have the same VNI.
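If seeing the math helps, this quick Python sketch (purely illustrative) contrasts the VLAN and VNI ID spaces:

    # Illustrative only: compare the VLAN ID space with the VXLAN VNI space.
    vlan_ids = 2 ** 12          # 4096 possible VLAN numbers on a traditional switch
    vni_ids = 2 ** 24           # 16,777,216 possible VXLAN numbers (VNIs)
    print(f"VLANs: {vlan_ids:,}  VNIs: {vni_ids:,}  ratio: {vni_ids // vlan_ids:,}x")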

Figure 4-1 shows a traditional design with two ESXi hosts in different racks, each one with a powered on VM. If both VMs need to be in the same Ethernet broadcast domain, the broadcast domain must be spanned, or extended, across all the Ethernet switches shown in the diagram. This makes it necessary for either the Spanning Tree Protocol to be configured in all the Ethernet switches or a more expensive loop-preventing solution such as Transparent Interconnection of Lots of Links (TRILL) to be deployed. With VXLAN deployed, the ESXi hosts can encapsulate the VM traffic in a VXLAN frame and send it over the physical network, which can be IP-based rather than Ethernet-based, thus removing the need to configure Spanning Tree or deploy solutions such as TRILL.

Figure 4-1 Spanning broadcast domain across multiple ESXi racks

Traditionally, any network technology that encapsulates traffic the way VXLAN does is called a tunnel. A tunnel hides the original frame’s network information from the IP physical network. A good example of a tunnel is Generic Routing Encapsulation (GRE), which hides Layer 3 and Layer 4 information from IP network devices, although GRE can also be set up to hide Layer 2 information. VXLAN tunnels hide Layer 2, Layer 3, and Layer 4 information. It is possible to deploy a new IP network topology just by using tunnels, without having to do major reconfiguration of the IP physical network. Such a network topology is called an overlay, whereas the IP physical network that switches and routes the tunnels that make up the overlay is called the underlay.

Just as GRE requires two devices to create and terminate the tunnel, VXLAN requires two devices to create and terminate VXLAN tunnels. A device that can create or terminate the VXLAN tunnel is called a VXLAN Tunnel Endpoint (VTEP). NSX enables ESXi hosts to have VTEPs. A VTEP performs these two roles:

  1. Encapsulating Ethernet frames from local VMs inside VXLAN frames and sending them over the IP underlay toward the destination VTEP

  2. Receiving VXLAN frames from the underlay, removing the VXLAN encapsulation, and delivering the original Ethernet frames to the destination VMs

Figure 4-2 shows an Ethernet frame from a VM encapsulated in a VXLAN frame. The source VTEP of the VXLAN frame is a VMkernel port in the ESXi host. You can see the encapsulated Ethernet frame, or original frame, and the new header, thus creating the VXLAN overlay.

Figure 4-2 VXLAN encapsulation

The VXLAN frame contains the following components:

  1. An outer Ethernet header, used to forward the frame in the underlay (including the VXLAN VLAN tag, if one is configured)

  2. An outer IP header, with the source and destination VTEP IP addresses

  3. An outer UDP header, whose source port is derived from the encapsulated frame

  4. A VXLAN header carrying the 24-bit VNI

  5. The original Ethernet frame from the VM

To aggregate a few things stated in the preceding content about VXLAN: Any QoS markings, such as DSCP and CoS, from the VM Ethernet frame being encapsulated are copied to the VXLAN frame, and the source UDP port of the VXLAN frame is derived from the header information of the encapsulated frame. For this to work, VXLAN has to support Virtual Guest Tagging (VGT). Without VGT support, the VM’s guest OS couldn’t do QoS markings. If the encapsulated frame does not have any QoS markings, none are copied to the VXLAN frame; however, there is nothing stopping you from adding QoS markings directly to the VXLAN frame.

Then there is the part where the VXLAN frame traverses the physical network, called the VXLAN underlay or simply the underlay. The underlay uses VLANs. It is almost certain that the VXLAN underlay will place the VXLAN frames in their own Ethernet broadcast domain, thus requiring their own VLAN. The VLAN used by the underlay for VXLAN frames is referred to as the VXLAN VLAN. If the ESXi host with the source VTEP is connected to a physical switch via a trunk port, the ESXi host can be configured to add a VLAN tag (802.1Q) to the VXLAN frame or to send the VXLAN frame without a VLAN tag, in which case the physical switch’s trunk needs to be configured with a native VLAN.

All this means that VXLAN encapsulation adds 50+ bytes to the original frame from the VM. The 50+ bytes come from the following additions:

  1. Outer Ethernet header: 14 bytes (plus 4 bytes if the VXLAN VLAN tag is added)

  2. Outer IP header: 20 bytes

  3. Outer UDP header: 8 bytes

  4. VXLAN header: 8 bytes

VMware recommends that the underlay for VXLAN support jumbo frames with an MTU of at least 1600 bytes to accommodate VMs sending frames with the standard 1500-byte MTU. This includes any routers that are part of the underlay; otherwise, the routers will discard the VXLAN frames when they realize they can’t fragment VXLAN frames carrying more than 1500 bytes of payload. ESXi hosts with VTEPs also set the Do Not Fragment (DF) bit to 1 in the IP header of the VXLAN overlay.
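As a quick sanity check on the numbers above, here is a small Python sketch (illustrative only) that adds up the VXLAN overhead and the resulting underlay MTU requirement:

    # Illustrative only: add up the VXLAN encapsulation overhead from the header sizes above.
    OUTER_ETHERNET = 14   # outer Ethernet header (add 4 more if the VXLAN VLAN tag is present)
    OUTER_IP = 20         # outer IPv4 header
    OUTER_UDP = 8         # outer UDP header
    VXLAN_HEADER = 8      # VXLAN header carrying the 24-bit VNI

    overhead = OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER   # 50 bytes
    vm_mtu = 1500                                                     # standard VM frame payload
    print(f"Overhead: {overhead} bytes; underlay must carry {vm_mtu + overhead} bytes")
    # VMware's 1600-byte underlay MTU recommendation leaves headroom above the 1550-byte minimum.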

Figure 4-3 shows two VMs on the same Ethernet broadcast domain communicating with each other. The two VMs are connected to the same VNI, and the two ESXi hosts have the VTEPs. This diagram does not show the nuances of how the VTEPs know about each other’s existence or how they determine where to forward the VXLAN frame. Chapter 5 covers these details in more depth.

Figure 4-3 Virtual machine communication via VXLAN

NSX Controllers

The NSX Controllers are responsible for most of the control plane. The NSX Controllers handle the Layer 2 control plane for the logical switches, and together with the distributed logical router control virtual machine, the NSX Controllers handle the Layer 3 control plane. We review the role of the Layer 3 control plane and the distributed logical router control virtual machine in Chapter 7, “Logical Router.”

For Layer 2, the NSX Controllers have the principal copy of three tables per logical switch, which are used to facilitate control plane decisions by the ESXi hosts. The three tables are

  1. The VTEP table

  2. The MAC table

  3. The ARP table

For Layer 3, the NSX Controllers have the routing table for each distributed logical router as well as the list of all hosts running a copy of each distributed logical router.
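To make the division of state concrete, here is a minimal Python sketch of the per-logical-switch and per-router state the controllers keep. The data layout is assumed for illustration; it is not NSX’s internal format:

    # Illustrative only: an assumed in-memory layout for the controllers' principal tables.
    controller_state = {
        "logical_switches": {
            5001: {                       # keyed by VNI
                "vtep_table": {},         # VTEP IP -> ESXi host that reported it
                "mac_table": {},          # VM MAC -> VTEP IP behind which it lives
                "arp_table": {},          # VM IP  -> VM MAC
            },
        },
        "distributed_logical_routers": {
            "dlr-1": {
                "routing_table": [],      # prefixes learned via the DLR control VM
                "hosts": set(),           # ESXi hosts running a copy of this DLR
            },
        },
    }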

NSX Controllers do not play any role in security, such as the distributed firewall, nor do they provide control plane services to the NSX Edge Service Gateway.

Deploying NSX Controllers

The NSX Controllers are virtual appliances deployed by the NSX Manager. The NSX Controllers must be deployed in the same vCenter associated with NSX Manager. In our examples from the figures, that would be vCenter-A if the NSX Controller is from NSXMGR-A. At least one NSX Controller must be deployed before logical switches and distributed logical routers can be deployed in an NSX Manager with a Standalone role.

Deploying NSX Controllers might be the most infuriating thing about setting up an NSX environment. I restate some of this in context a little later, but in short, if NSX Manager can’t establish communication with an NSX Controller after it is deployed, it has the NSX Controller appliance deleted. The process of deploying an NSX Controller can take a few minutes or more, depending on the available resources in the ESXi host and the datastore where you deploy it. If the NSX Controller deployment fails for whatever reason, NSX Manager doesn’t attempt to deploy a new one. You can view NSX Manager’s log to find the reason why the deployment failed and then try again. But you won’t be doing much networking with NSX until you get at least one NSX Controller deployed.

Let’s now cover the steps to deploying the NSX Controllers, but I wanted to point out this little annoyance first. A single NSX Controller is all that is needed to deploy logical switches and distributed logical routers; however, for redundancy and failover capability, VMware supports only production environments with three NSX Controllers per standalone NSX Manager. The NSX Controllers can be deployed in separate ESXi clusters as long as

The following steps guide you in how to deploy NSX Controllers via the vSphere Web Client. You can also deploy NSX Controllers using the NSX APIs.

You must be an NSX administrator or enterprise administrator to be allowed to deploy NSX Controllers. We cover Role Based Access Control (RBAC) in Chapter 17, “Additional NSX Features.”

When NSX Controllers get deployed, they automatically form a cluster among themselves. The first NSX Controller needs to be deployed and have joined the NSX Controller cluster by itself before the other NSX Controllers can be deployed. If you try to deploy a second NSX Controller before the first one is deployed, you get an error message.

When NSX Manager receives the request to deploy an NSX Controller from vCenter, which received it from the vSphere Web Client, or when NSX Manager receives the request directly via the NSX APIs, the following workflow takes place:

If NSX Manager cannot establish an IP connection to the NSX Controller to complete its configuration, the NSX Manager has vCenter power off the NSX Controller and delete it.
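The essence of that workflow, including the cleanup behavior on failure, can be summarized in a toy Python model. This is only a simplified sketch of the behavior described above; the function and variable names are hypothetical and are not NSX APIs:

    # Illustrative only: a toy model of the controller deployment workflow described above.
    class ControllerDeploymentError(Exception):
        pass

    def deploy_controller(name, ip_pool, manager_can_reach):
        """Model NSX Manager's behavior: deploy, then delete the appliance if unreachable."""
        ip = ip_pool.pop(0)                      # NSX Manager takes the next free IP from the pool
        controller = {"name": name, "ip": ip, "state": "powered on"}
        if not manager_can_reach(ip):            # if IP connectivity can't be established...
            ip_pool.insert(0, ip)                # ...the IP goes back to the pool,
            controller["state"] = "deleted"      # ...vCenter powers off and deletes the VM,
            raise ControllerDeploymentError(     # ...and NSX Manager does not retry on its own
                f"{name} unreachable; check NSX Manager logs, then redeploy")
        return controller

    # Example: the first controller deploys fine, so a second one may then be deployed.
    pool = ["10.154.8.101", "10.154.8.102", "10.154.8.103"]   # hypothetical IP pool
    ctrl1 = deploy_controller("controller-1", pool, manager_can_reach=lambda ip: True)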

Verifying NSX Controllers

You can verify the status of the NSX Controller installation by selecting the Installation view from the Networking and Security page, as shown in Figure 4-6.

Figure 4-6 An NSX Controller successfully deployed

In this view you can verify the following:

If you assign the Primary role to an NSX Manager, that NSX Manager’s three NSX Controllers become NSX universal controllers. NSX universal controllers can communicate with Secondary NSX Managers in the same cross vCenter NSX domain, as well as with the Secondary NSX Managers’ participating entities, such as ESXi hosts. Before you add Secondary NSX Managers, their existing NSX Controllers, if any, must be deleted.

You can also verify the deployment of the NSX Controllers by viewing the NSX Controller virtual machines in the Hosts and Clusters or VMs and Templates view. Each NSX Controller is deployed with the name NSX_Controller_ followed by the NSX Controller’s UUID. Figure 4-8 shows the first NSX Controller in the Hosts and Clusters view. Notice in Figure 4-8 the number of vCPUs, the memory, the memory reservation, and the HDD configured for the NSX Controller.

Figure 4-8 NSX Controller’s virtual machine Summary view

Each NSX Controller gets deployed with these settings:

VMware does not support changing the hardware settings of the NSX Controllers.

If the NSX Manager is participating in a Secondary role in cross vCenter NSX, the NSX Manager will not have any NSX Controllers of its own. Instead the Secondary NSX Managers create a logical connection to the existing NSX universal controllers from the Primary NSX Manager in the same cross vCenter NSX domain.

Creating an NSX Controller Cluster

When more than one NSX Controller is deployed, the NSX Controllers automatically form a cluster. They know how to find each other because NSX Manager makes them aware of each other’s presence. To verify that an NSX Controller has joined the cluster successfully, connect to the NSX Controller via SSH or console using the username admin and the password you configured during the first NSX Controller deployment. Once logged in to the NSX Controller, issue the CLI command show control-cluster status to view the NSX Controller’s cluster status. You need to do this for each NSX Controller to verify its cluster status. Figure 4-9 shows the output of the command for an NSX Controller that has joined the cluster successfully.

Figure 4-9 Output of show control-cluster status

Figure 4-9 depicts the following cluster messages:

The clustering algorithm used by the NSX Controllers depends on each NSX Controller having IP communication with a majority of the NSX Controllers, counting itself. If an NSX Controller does not belong to the majority, or quorum, it removes itself from control plane participation. To avoid a split-brain situation, where no NSX Controller is connected to the cluster majority and potentially each one removes itself from control plane participation, VMware requires that three NSX Controllers be deployed in production environments.
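A short Python sketch (illustrative only) of the majority rule shows why three controllers is the supported number:

    # Illustrative only: the majority (quorum) rule described above.
    def has_quorum(reachable_peers, cluster_size):
        """A controller keeps participating only if it can see a majority of the cluster,
        counting itself."""
        return (reachable_peers + 1) > cluster_size / 2

    # With 3 controllers, one can fail and each survivor still sees a majority (2 of 3).
    print(has_quorum(reachable_peers=1, cluster_size=3))   # True  -> keeps participating
    # With only 2 controllers, a single failure leaves no majority, so the survivor steps down.
    print(has_quorum(reachable_peers=0, cluster_size=2))   # False -> removes itself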

Figure 4-10 shows the output of the command show control-cluster startup-nodes, which shows the NSX Controllers that are known to be cluster members. All NSX Controllers should provide the same output. You can also issue the NSX Manager basic mode command show controller list all to list all the NSX Controllers the NSX Manager is communicating with plus their running status.

Figure 4-10 Output of show control-cluster startup-nodes

Additional CLI commands that can be used in the NSX Controllers to verify cluster functionality and availability are as follows:

We review additional CLI commands in NSX Manager and NSX Controllers related to logical switches and distributed logical routers in Chapter 5 and Chapter 7.

NSX Controller Master and Recovery

When deploying multiple NSX Controllers, the control plane responsibilities for Layer 2 and Layer 3 are shared among all the controllers. To determine which portions each NSX Controller handles, the NSX Controller cluster elects an API provider master, a Layer 2 NSX Controller Master, and a Layer 3 NSX Controller Master. The masters are elected after the cluster is formed. The API provider master receives internal NSX API calls from NSX Manager. The Layer 2 NSX Controller Master assigns Layer 2 control plane responsibility, on a per logical switch basis, to each NSX Controller in the cluster, including itself. The Layer 3 NSX Controller Master assigns the Layer 3 forwarding table, on a per distributed logical router basis, to each NSX Controller in the cluster, including itself.

The process of assigning logical switches to different NSX Controllers and distributed logical routers to different NSX Controllers is called slicing. By doing slicing, the NSX Controller Master for Layer 2 and Layer 3 distributes the load of managing the control plane for logical switches and distributed routers among all the NSX Controllers. No two NSX Controllers share the Layer 2 control plane for a logical switch nor share the Layer 3 control plane for a distributed logical router. Slicing also makes the NSX Layer 2 and Layer 3 control planes more robust and tolerant of NSX Controller failures.
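The slicing idea lends itself to a simple round-robin illustration. The Python sketch below is purely illustrative; NSX’s actual assignment algorithm is not documented here:

    # Illustrative only: slicing assigns each object to exactly one controller.
    controllers = ["controller-1", "controller-2", "controller-3"]
    logical_switches = [5001, 5002, 5003, 5004, 5005]      # VNIs
    distributed_routers = ["dlr-1"]

    def slice_objects(objects, controllers):
        """Round-robin example: no two controllers own the same object."""
        return {obj: controllers[i % len(controllers)] for i, obj in enumerate(objects)}

    l2_assignments = slice_objects(logical_switches, controllers)
    l3_assignments = slice_objects(distributed_routers, controllers)
    print(l2_assignments)   # every logical switch has exactly one owning controller
    print(l3_assignments)   # a single DLR is owned by one controller; the others stand by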

Once the master has assigned Layer 2 and Layer 3 control plane responsibilities, it tells all NSX Controllers about it so all NSX Controllers know what each NSX Controller is responsible for. This information is also used by the NSX Controllers in case the NSX Controller Master becomes unresponsive or fails.

If your NSX environment has only a single distributed logical router and three NSX Controllers, only one of the NSX Controllers would be responsible for the distributed logical router while the other two would serve as backups. No two NSX Controllers are responsible for the Layer 2 control plane of the same logical switch. No two NSX Controllers are responsible for the Layer 3 forwarding table of the same logical router.

When an NSX Controller goes down or becomes unresponsive, the data plane continues to operate; however, the Layer 2 NSX Controller Master splits the Layer 2 control plane responsibilities for all the impacted logical switches among the surviving NSX Controllers. The Layer 3 NSX Controller Master splits the Layer 3 control plane responsibilities for all the affected distributed logical routers among the surviving NSX Controllers.

What if the NSX Controller that failed was a master? In this case, the surviving NSX Controllers elect a new master, and the new master then proceeds to recover the control plane of the affected logical switches and/or distributed logical routers. How does the new master determine which logical switches and/or distributed logical routers were affected and need to have their control plane responsibilities reassigned? The new master uses the assignment information distributed to the cluster by the old master.

For the Layer 2 control plane, the newly responsible NSX Controller queries the hosts in the transport zone so it can repopulate the logical switch’s control plane information. We learn about transport zones later in this chapter. For Layer 3, the newly responsible NSX Controller queries the logical router control virtual machine. We learn about the logical router control virtual machine in Chapter 7.

IP Pools

IP pools are the only means to provide an IP address to the NSX Controllers. IP pools may also be used to provide an IP address to the ESXi hosts during NSX host preparation. We review NSX host preparation later in this chapter in the section “Host Preparation.” IP pools are created by an NSX administrator and are managed by NSX Manager. Each NSX Manager manages its own set of IP pools. NSX Manager selects an IP from the IP pool whenever it needs one, such as when deploying an NSX Controller. If the entity using the IP from the IP pool is removed or deleted, NSX Manager places the IP back into the pool. The IPs in the IP pool should be unique in the entire IP network (both physical and virtual).

There are two ways to start the creation of an IP pool. We mentioned the first method during the deployment of the NSX Controllers; this option to create an IP pool is also available during NSX host preparation, which we discuss later in this chapter.

The second method involves the following steps:

Figure 4-11 Create an IP pool

Regardless of how you choose to create an IP pool, the same IP Pool Wizard comes up, as shown in Figure 4-12.

Figure 4-12 IP Pool Wizard

In the IP Pool Wizard, populate the following information:

Once an IP pool is created, you can modify or delete it. To make changes to an IP pool, follow these steps:

The IP pool’s IP range can’t be shrunk if at least one IP has already been assigned. An IP pool can’t be deleted if at least one IP has already been assigned.
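The behavior described in this section (allocate on demand, return on delete, refuse to delete while addresses are in use) can be captured in a small illustrative Python class. This is only a conceptual model, not NSX Manager’s implementation:

    # Illustrative only: a toy model of how NSX Manager treats an IP pool.
    import ipaddress

    class IPPool:
        def __init__(self, first_ip, last_ip):
            first = int(ipaddress.ip_address(first_ip))
            last = int(ipaddress.ip_address(last_ip))
            self.free = [str(ipaddress.ip_address(i)) for i in range(first, last + 1)]
            self.assigned = set()

        def allocate(self):                        # e.g., when deploying an NSX Controller
            ip = self.free.pop(0)
            self.assigned.add(ip)
            return ip

        def release(self, ip):                     # when the consuming entity is deleted
            self.assigned.discard(ip)
            self.free.append(ip)

        def delete_pool(self):
            if self.assigned:                      # a pool with assigned IPs can't be deleted
                raise ValueError("IP pool still has assigned addresses")

    pool = IPPool("10.154.8.101", "10.154.8.103")  # hypothetical range
    controller_ip = pool.allocate()                # NSX Manager picks the next free IP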

Host Preparation

Now that you have deployed your NSX Controllers, it’s time to focus on the next steps that must take place before you can start deploying your virtual network and security services. The NSX Controllers can also be deployed after host preparation.

The next step is to install the NSX vSphere Infrastructure Bundles (VIBs) in the ESXi hosts that will be in the NSX domain. The VIBs give the ESXi hosts the capability to participate in NSX’s data plane and in-kernel security. We do this by selecting the Host Preparation tab from the Installation view in the Networking and Security page, as shown in Figure 4-13. An alternative would be to use vSphere ESXi Image Builder to create an image with the NSX VIBs installed.

Figure 4-13 Host Preparation tab

In the Host Preparation tab you see a list of all the ESXi host clusters configured in vCenter. Under the Installation Status column, hover toward the right until the mouse is over the cog icon, click it, and select Install. That’s it. NSX Manager pushes the VIBs to each ESXi host in the cluster. Successfully adding the VIBs is nondisruptive, and there is no need to place the ESXi hosts in maintenance mode. Yes, I wrote “successfully” because if the VIB installation fails you might need to reboot the ESXi host(s) to complete it, as shown in Figure 4-14. The good thing is that NSX Manager tries to reboot the ESXi host for you, first putting it in maintenance mode. The moral of this: Don’t execute any type of infrastructure change or upgrade outside of a maintenance window. You would also need to reboot the ESXi host if you wanted to remove the NSX VIBs.

Figure 4-14 Incomplete NSX VIB installation

So what superpowers exactly are these VIBs giving the ESXi hosts? The modules and the over-and-above human capabilities they give the ESXi hosts are as follows:

Any other superpowers? Well, maybe this can be considered one: If you add an ESXi host to a cluster that has already been prepared, the ESXi host gets the NSX VIBs automatically. How about that for cool?! And before I forget, installing the VIBs takes minimal time. Even in my lab of nested ESXi hosts, with scant available CPU and memory and an NFS share that is slower at delivering I/O than a delivery pigeon, the VIBs install quickly.

Figure 4-15 shows the ESXi host clusters that have been prepared with version 6.2.0 of the NSX VIBs by NSXMGR-A, 10.154.8.32. Have a look at the two columns to the right: the Firewall and VXLAN columns. The Firewall module has its own column because it can be installed independently from the other modules. The VIB that contains the Firewall module is called VSFWD. If the Firewall status reads Enabled, with a green check mark, you can go over to the Firewall view of Networking and Security, where the distributed firewall policies get created and applied, or the Service Composer view of Networking and Security, where service chaining is configured, to start creating and applying security rules for VMs. The distributed firewall VIB for NSX 6.0 can be installed on ESXi hosts running version 5.1 or higher. For NSX 6.1 and higher, the ESXi hosts must run 5.5 or higher.

Figure 4-15 Host Preparation tab after NSX modules have been installed

The VXLAN column confirms the installation of the VXLAN VIB. The VXLAN VIB has the VXLAN module, the Security module, and the Routing module. If the column reads Not Configured with a hyperlink, the VXLAN VIB is installed. The VXLAN VIB can be installed on ESXi hosts running version 5.1 or higher; however, with version 5.1 ESXi hosts, logical switches can only be deployed in Multicast Replication Mode. We cover Replication Mode in Chapter 5. For NSX 6.1 and higher, the ESXi hosts must run 5.5 or higher. The Routing module only works on ESXi hosts running vSphere 5.5 or higher. Table 4-2 shows the vSphere versions supported by each module.

Table 4-2 vSphere Versions Supported by the NSX Modules

NSX Module  |  vSphere Version
Security  |  5.1 or later
VXLAN  |  5.1 (only for Multicast Replication Mode) or later
Routing  |  5.5 or later

Host Configuration

If you want to deploy logical switches, you must complete the Logical Network Preparation tab in the Installation view. In this section you set up an NSX domain with the variables needed to create VXLAN overlays. Three sections need to be configured. If you skip any of them, you are not going to be deploying logical switches.

First, you need to tell NSX Manager how to configure the ESXi hosts. Oddly enough, you don’t start the logical network configuration from the Logical Network Preparation tab. Rather, click the Configure hyperlink in the VXLAN column in the Host Preparation tab to open the Configure VXLAN Networking Wizard. Optionally, hover toward the right and click on the cog to see a menu list and choose Configure VXLAN, as shown in Figure 4-16.

Figure 4-16 VXLAN host configuration

Figure 4-17 shows the Configure VXLAN Networking window. Here we can configure the following:

Figure 4-17 Configure VXLAN Networking Wizard

All ESXi hosts in a host cluster must be attached to the same vDS that will be used by NSX for host configuration. NSX can work with different clusters having different vDSes; this has zero impact on the performance of VMs in the NSX domain. However, if you are running a vSphere version before 6.0, not using the same vDS across multiple clusters may impact the capability to vMotion virtual machines connected to logical switches. We touch on this topic in Chapter 5.

The VLAN in Figure 4-17 is the VXLAN VLAN. The vDS switch selected in Figure 4-17 will be used by NSX Manager to create a portgroup for the VXLAN VMkernel port and portgroups to back the logical switches, which we cover in Chapter 5. All these portgroups will be configured by NSX Manager with the VXLAN VLAN. If the MTU configured is larger than the MTU already configured in the vDS, the vDS’s MTU will be updated. The vDS that gets assigned to the cluster for VXLAN may also continue to be used for other non-NSX connectivity, such as a portgroup for vMotion.

You can assign an IP address to the VXLAN VMkernel port by using DHCP or an IP pool. In both cases, the VXLAN VMkernel port gets a default gateway. This would typically present a problem for the ESXi host because it already has a default gateway, most likely pointing out of the management VMkernel port. Luckily for NSX, vSphere has supported multiple TCP/IP stacks since version 5.1. In other words, the ESXi host can now have multiple default gateways. The original default gateway, oddly enough referred to as default, still points out of the management VMkernel port, or wherever you originally configured it. The new default gateway, which you probably correctly guessed is referred to as VXLAN, points out of the VXLAN VMkernel port. The VXLAN TCP/IP stack default gateway and the VXLAN VMkernel port are used only for the creation and termination of VXLAN overlays. Figure 4-18 shows the VMkernel ports of an ESXi host, with only the VXLAN VMkernel port using the VXLAN TCP/IP stack.

Figure 4-18 VXLAN VMkernel port with VXLAN TCP/IP stack
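To picture why separate TCP/IP stacks solve the two-default-gateway problem, here is a tiny Python model. It is illustrative only; the VMkernel port names and gateway addresses are made up, and this is not how ESXi stores its routing tables:

    # Illustrative only: each TCP/IP stack carries its own routing table and default gateway.
    netstacks = {
        "default": {"vmk": "vmk0", "default_gateway": "10.154.8.1"},    # management traffic
        "vxlan":   {"vmk": "vmk3", "default_gateway": "10.154.12.1"},   # VXLAN overlay traffic
    }

    def next_hop(stack_name):
        """Traffic sourced from a VMkernel port uses the gateway of that port's stack."""
        return netstacks[stack_name]["default_gateway"]

    print(next_hop("default"))   # the management VMkernel port keeps the original gateway
    print(next_hop("vxlan"))     # the VXLAN VMkernel port uses the VXLAN stack's gateway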

One final thing you can configure here is the VMKNic Teaming Policy, a name I’m not too fond of. Why couldn’t they name it VXLAN Load Share Policy? After all, this is how the vDS load shares egress traffic from the VXLAN VMkernel port. Anyhow, the selection you make here has great implications for the behavior of your VXLAN overlays. For one, the policy must match the configuration of the physical switches to which the vDS uplinks connect, which means the vDS must also be configured to match the selected policy, such as enhanced LACP.

These are the VMKNic Teaming Policy options available (summarized later in Table 4-3):

  1. Fail Over

  2. Static EtherChannel

  3. Enhanced LACP

  4. Load Balance - SRCID

  5. Load Balance - SRCMAC

Go back and have a look at Figure 4-17. Do you see the VTEP field at the bottom? It says 1, meaning 1 VXLAN VMkernel port is created for each ESXi host in the cluster being configured. Where did the 1 come from? NSX Manager put it there. Notice the text box for the 1 is grayed out, which means you can’t edit it. And how did NSX Manager know to put a 1 in there? Go back to the VMKNic Teaming Policy selection. If you choose anything other than Load Balance – SRCID or Load Balance – SRCMAC, NSX Manager puts a 1 in the VTEP text box.

If, on the other hand, you choose VMKNic Teaming Policy of Load Balance – SRCID or Load Balance – SRCMAC, NSX Manager creates multiple VXLAN VMkernel ports, one per dvUplink in the vDS. Now that the ESXi hosts have multiple VXLAN VMkernel ports, load sharing can be achieved on a per VM basis by pinning each VM to a different VXLAN VMkernel port and mapping each VXLAN VMkernel port to a single dvUplink in the vDS. Figure 4-19 shows the configured ESXi hosts with multiple VXLAN VMkernel ports.

Figure 4-19 ESXi hosts with multiple VTEPs

Figure 4-20 shows the logical/physical view of two ESXi hosts, each with two dvUplinks, two VMs, and two VTEPs. The VMs are connected to logical switches.

Figure 4-20 Logical/physical view of ESXi hosts with two VTEPs

Table 4-3 shows the VMKNic Teaming Policy options, the multi-VTEP support, how they match to the vDS Teaming modes, and the minimum vDS version number that supports the teaming policy.

Table 4-3 VMKNic Teaming Policies

VMKNic Teaming Policy  |  Multi-VTEP Support  |  vDS Teaming Mode  |  vDS Version
Fail Over  |  No  |  Failover  |  5.1 or later
Static EtherChannel  |  No  |  EtherChannel  |  5.1 or later
Enhanced LACP  |  No  |  LACPv2  |  5.5 or later
Load Balance - SRCID  |  Yes  |  Source Port  |  5.5 or later
Load Balance - SRCMAC  |  Yes  |  Source MAC (MAC Hash)  |  5.5 or later

Now why would NSX Manager allow the option of multiple VTEPs in the same ESXi host? It allows the option because there is no other good way to load share (yes, load share) egress traffic sourced from an ESXi host if the load sharing hash uses the source interface (SRCID) or the source MAC (SRCMAC). I won’t spend too long explaining why NSX Manager achieves the load sharing the way it does. I’ll just say: think of how the physical network would react if the source MAC in egress frames from the ESXi host were seen on more than one discrete dvUplink from the same ESXi host.
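To see how per-VM pinning spreads the load, here is a small illustrative Python sketch. The hashing and pinning details are simplified assumptions, not the actual vDS algorithm:

    # Illustrative only: with multiple VTEPs, each VM is pinned to one VTEP/dvUplink pair.
    vteps = ["vmk3 (dvUplink1)", "vmk4 (dvUplink2)"]
    vms = ["web-01", "web-02", "app-01", "db-01"]

    def pin_vm(vm_name, vteps):
        """Simplified stand-in for a source-based hash (SRCID/SRCMAC style)."""
        return vteps[hash(vm_name) % len(vteps)]

    for vm in vms:
        print(f"{vm} -> {pin_vm(vm, vteps)}")
    # Each VM's VXLAN traffic leaves through a single VTEP, so a given source MAC is only
    # ever seen on one dvUplink, which keeps the physical switches' MAC tables stable.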

After you finish the Configure VXLAN Networking Wizard, you can go over to the Logical Network Preparation tab to verify the configuration. Figure 4-21 shows the VXLAN Transport section listing the ESXi hosts that have been configured and the details of their configuration.

Figure 4-21 ESXi host clusters that have been configured for VXLAN

In the Networking view of vCenter, you can verify that the portgroup was created for connecting the VXLAN VMkernel port. Figure 4-22 shows that the VXLAN VLAN for the EDG-A1 host cluster, VLAN 13, is configured in the portgroup. Notice that there are other portgroups in the same vDS. If you were to look at the vDS configuration, you would see that the MTU is set to at least the size you configured in the Configure VXLAN Networking Wizard.

Figure 4-22 VXLAN vDS

VNI Pools, Multicast Pools, and Transport Zones

You need to undertake two more preparations for the NSX networks.

The first thing you should do is provide the range, or pool, of VNIs and multicast groups that NSX Manager will use locally, and do the same for cross vCenter NSX use. Local VNI pools and universal VNI pools shouldn’t overlap, and local multicast groups and universal multicast groups shouldn’t overlap either. The VNI pool can start at 5000. To create the VNI pools, go to the Segment ID section of the Logical Network Preparation tab and select the Primary NSX Manager. If you require multicast support, you can enter the multicast group pools for NSX Manager to use in the same place. We discuss multicast in the “Replication Mode” section of Chapter 5. Secondary NSX Managers can only configure local VNI and multicast group pools.

The second thing you should do is create global transport zones, at least one per NSX Manager, and a universal transport zone. When a logical switch is created, NSX Manager needs to know which ESXi hosts in the NSX domain have to be informed about the logical switch. A global transport zone is a group of ESXi host clusters under the same NSX domain that will be told about the creation of logical switches. A global transport zone includes only ESXi host clusters local to a vCenter. The universal transport zone is a group of ESXi host clusters under the same cross vCenter NSX domain that will be told about the creation of universal logical switches. Universal transport zones may include ESXi host clusters in all vCenters in the same cross vCenter NSX domain. A logical switch’s global transport zone assignment and a universal logical switch’s universal transport zone assignment are done during the creation of the switches.

A transport zone can contain as many clusters as you want. An ESXi host cluster can be in as many transport zones as you want, and it can belong to both types of transport zones at the same time. And yes, you can have as many global transport zones as your heart desires, although you typically don’t deploy more than one or two per NSX Manager. However, you can only have a single universal transport zone. More importantly, both types of transport zones can have ESXi host clusters each with a different vDS selected during Configure VXLAN networking. Again, transport zones matter only for the purpose of letting the NSX Manager know which ESXi hosts should be told about a particular logical switch or universal logical switch.
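As a conceptual illustration of what a transport zone means to NSX Manager, the following Python sketch works out which hosts must learn about a new logical switch. The cluster and host names are made up, and the structure is assumed purely for illustration:

    # Illustrative only: a transport zone is just a grouping of host clusters that NSX Manager
    # consults when deciding which ESXi hosts to tell about a new logical switch.
    clusters = {
        "Compute-A": ["esxi-01", "esxi-02"],
        "Compute-B": ["esxi-03", "esxi-04"],
        "Edge":      ["esxi-05"],
    }
    transport_zones = {
        "TZ-Global-1": ["Compute-A", "Compute-B"],   # the Edge cluster is not a member
    }

    def hosts_to_notify(tz_name):
        """Return the ESXi hosts that must be informed about a logical switch in this zone."""
        return [host for cluster in transport_zones[tz_name] for host in clusters[cluster]]

    print(hosts_to_notify("TZ-Global-1"))   # ['esxi-01', 'esxi-02', 'esxi-03', 'esxi-04']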

To create a transport zone, head over to the Logical Network Preparation tab, select the NSX Manager that will own the transport zone, and go to the Transport Zones section. Click the green + icon. There you can assign the transport zone a name, select its Replication Mode, and choose the ESXi host clusters that will be part of the transport zone. If the NSX Manager is the Primary NSX Manager, you have a check box to turn this transport zone into a universal transport zone, as shown in Figure 4-23.

Figure 4-23 Creating a transport zone

As mentioned, Chapter 5 discusses what Replication Mode is. For now, you should know that if you select Multicast or Hybrid, you need to create a multicast group pool in the Segment ID section mentioned previously. Finally, after a transport zone is created, you can’t change the transport zone type. You can, however, modify it by adding or removing ESXi host clusters from the NSX Manager that owns the association to the vCenter that owns those clusters. If an NSX switch (a logical switch or a universal logical switch) has already been created before the ESXi host cluster is added to the transport zone, NSX Manager automatically updates the newly added ESXi hosts in the ESXi host cluster with the NSX switch information.

To add an ESXi host cluster to a transport zone, return to the Transport Zones section of the Logical Network Preparation tab and select the NSX Manager that prepared the ESXi host cluster to be added. Select the transport zone and click the Connect Clusters icon. Select the ESXi host clusters you want to add and click OK.

To remove an ESXi host cluster from a transport zone, select the transport zone in the Transport Zones section and click the Disconnect Clusters icon. Select the ESXi host clusters you want to remove and click OK. For the operation to succeed, all VMs (powered on or not) in the ESXi host cluster you want to remove must be disconnected from all logical switches that belong to the transport zone. We cover how to disconnect a VM from a logical switch in Chapter 5.

A transport zone that has any logical switches can’t be deleted. The logical switches must be deleted first. We cover how to delete logical switches in Chapter 5. To delete a transport zone, select the transport zone, then select Actions, All NSX User Interface Plugin Actions, and then select Remove.

One more note on this section. It should be clear by now that NSX Manager loves ESXi host clusters. If you add an ESXi host to an already prepared and configured ESXi host cluster, NSX Manager makes sure that the ESXi host gets the NSX VIBs, that the VXLAN VMkernel ports get created with the right IPs and subnets, and that the new ESXi host is made aware of any logical switches, and so forth. Conversely, if you remove an ESXi host from an already prepared and configured ESXi host cluster, the ESXi host loses its VXLAN VMkernel ports and IPs, and loses knowledge of any logical switches.

That wraps up all the prep work that needs to be done to get your NSX network and security going. The next chapter begins the coverage of the process of actually building stuff that you can put virtual machines on.

Exam Preparation Tasks

Review All the Key Topics

Review the most important topics from inside the chapter, noted with the Key Topic icon in the outer margin of the page. Table 4-4 lists these key topics and the page numbers where each is found.

Table 4-4 Key Topics for Chapter 4

Key Topic Element  |  Description  |  Page Number
Paragraph  |  Define what VXLAN is and the scaling capabilities native to the protocol.  |  90
List  |  The VXLAN frame inherits Layer 2 and Layer 3 QoS from the encapsulated frame. The source UDP port is derived from the encapsulated frame.  |  93
Paragraph  |  VXLAN requires jumbo frame support from the underlay.  |  95
Paragraph  |  The NSX Controllers maintain the principal copies of the VTEP, MAC, and ARP tables.  |  96
Paragraph  |  NSX Controllers have no role in network security.  |  97
Paragraph  |  The user must have the correct administrator account to deploy NSX Controllers.  |  98
Paragraph  |  Changing the NSX Controllers' hardware settings is not supported by VMware.  |  103
Paragraph  |  Secondary NSX Managers do not deploy NSX Controllers.  |  104
Paragraph  |  VMware requires three NSX Controllers in a production deployment of NSX.  |  105
Paragraph  |  The NSX Controller taking over for a failed one queries the ESXi hosts in the VTEP table.  |  105
Table 4-2  |  The versions of vSphere supported by the NSX modules.  |  113
Paragraph  |  All members of the ESXi host cluster must belong to the same vDS for NSX host preparation.  |  114
Paragraph  |  NSX supports multiple VTEPs per ESXi host.  |  116
Paragraph  |  Each NSX Manager in the cross vCenter NSX domain is responsible for adding clusters to the universal transport zone.  |  122
Paragraph  |  NSX Manager only interacts with ESXi hosts that are members of clusters.  |  123

Complete Tables and Lists from Memory

Download and print a copy of Appendix C, “Memory Tables” (found on the book’s website), or at least the section for this chapter, and complete the tables and lists from memory. Appendix D, “Memory Tables Answer Key,” also on the website, includes the completed tables and lists so you can check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the Glossary:

