One-Arm VLAN-Backed Load Balancer in NSX-T

There are two main deployment methods for Load Balancers in NSX-T. Both offer the same features, so the differences are purely in connectivity:

  • Inline/Transparent – The load balancer sits in the middle of the network between the clients and servers, so one interface receives the client traffic and another forwards it to the servers. The source IP remains the client’s original IP, so the back-end servers must have a route back, which should go via the load balancer. See the implementation here
  • One-Arm – Uses the same interface to receive the client traffic as it does to reach the back-end servers. This method requires the use of SNAT to modify the client IP to that of the load balancer. The back-end servers only need to know how to get back to the load balancer, not the original clients.

Using a VLAN-backed Segment will allow the load balancer to direct traffic to both VM and bare metal workloads that may live on the same broadcast domain.

There are two supported connectivity methods for the ‘arm’ in a One-Arm Load Balancer in NSX-T. One uses a T1 Uplink Interface to connect to a T0 but, unlike Inline, uses the same interface to reach the back-end servers. The other, which is the focus of this post, uses a T1 Service Interface to provide connectivity:

Set Up the Network

First create a VLAN Segment, which will be used to connect the VMs to be load balanced. It should be connected to a Transport Zone that is applied to the hosts that will run the VMs and also to the Edge Cluster that will host the T1 Gateway created next:

Then create a new T1 Gateway. It should be assigned to an Edge Cluster in order to run the centralised load balancer service, but doesn’t need to be connected to a T0:

Then create a Service Interface on the T1 that connects to the VLAN Segment created at the start:

Then create a static route on the T1 to reach the external networks. The next hop could be another T1 (that connects to a T0) or the physical network, depending on your setup:

Configure the Load Balancer

Create a Load Balancer in Load Balancing > Load Balancers > Add Load Balancer and attach it to the T1 Gateway:

Create a Server Pool to specify which back-end servers are to be load balanced. Note that the SNAT Translation Mode is set to Automap, which means the source IP will be that of the T1 rather than the original client IP:

The Members/Group contains the criteria to match the back-end servers; this could use Tags, VM Names, or, for non-NSX-aware workloads such as bare metal, IP addresses:

This is also where you can set a back-end port, if it differs from the front end. E.g. if the app runs on port 8000 but clients should connect on port 80, set 8000 for the members here and set 80 later in the Virtual Server:

Then create a Virtual Server, which acts as the entry point to the Load Balancer, setting the virtual IP/port/protocol. It needs to reference the Pool that’s just been created and is attached to the Load Balancer:

At this point the topology from NSX looks like this:

Checking the Traffic

Now the Load Balancer should be distributing requests between the two pool members, using the Round Robin method:
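A quick way to see the round robin from a client’s perspective is to repeat a few requests against the Virtual Server; the VIP address below is just a placeholder for whatever was configured above:

# Hypothetical VIP address - repeat the request and both members should answer in turn
for i in 1 2 3 4; do curl -s http://192.168.80.10/; done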

Checking the statistics shows traffic going to both the VM workloads and the bare metal, which was specified by IP:

Checking the web server logs shows that the client source address is the T1 Service Interface address:

The return traffic then goes via the T1 before returning to the client.

Geneve Inside Geneve

Disclaimer: This is by no means an optimal solution and is only being deployed to see Geneve in action!

Just like VXLAN before it, Geneve is the current flavour of overlay network encapsulation, helping to improve networking in the virtual world without worrying about the physical routers and switches that now sit in the boring underlay. It is probably best known for its use in VMware’s NSX-T; however, it isn’t some secret proprietary protocol. It was in fact created by VMware and Intel, among others, as a proposed RFC standard, so anyone can implement it.

Another product that makes use of Geneve to create overlay networks is Cilium, an eBPF-based Kubernetes CNI. It uses Geneve tunnels between K8s nodes to enable pod reachability; this includes both inter-pod networking and connectivity from Services to pods.

So what better way to see Geneve in action than to run Cilium inside NSX-T, running Geneve inside Geneve? In practice this means running K8s on VMs that are connected to NSX-T Segments, with Cilium as the CNI.

App Topology

Here’s an overview of the micro-services app that’s being used:

Clients connect to the UI pods to view the web front-end, which in the background talks to the API pods to retrieve data from the database. The traffic from the UI to the API will need to go through two Geneve tunnels, as they will be hosted on different L3-separated hosts.

Ensuring the use of Overlay

First of all, Geneve is only used to tunnel traffic between hosts/nodes. So to make sure it’s used we need to ensure the VMs and pods are placed on different hosts and nodes respectively. This can be achieved by disabling DRS or creating anti-affinity rules in vCenter and by using Node Selectors in K8s.

Here’s the physical view of the app components in the infrastructure:

  • GENEVE-H01 & H02 are ESXi hosts
  • GENEVE-UK8N1 and N2 are Ubuntu VM K8s nodes
  • GENEVE-K8S is an NSX-T Overlay Segment
  • Each K8s node has its own Cilium pod network
  • LoadBalancer type Services are provided by MetalLB in L2 mode

Kubectl confirms this diagram by showing which pod lives on which node:
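For reference, the placement check is just a case of asking for the wide output, which adds the NODE column (the namespace is omitted here, assuming the app runs in the current context’s default):

# Shows which node each UI/API/DB pod has been scheduled on
kubectl get pods -o wide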

And the Cilium Geneve tunnels can be listed by issuing a simple command in each of the pods:
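As a sketch, assuming a default Cilium install where the agents run in kube-system with the k8s-app=cilium label, the tunnel map can be dumped from an agent pod like this:

# Find the agent pod running on each node, then list its Geneve tunnel endpoints
kubectl -n kube-system get pods -l k8s-app=cilium -o wide
kubectl -n kube-system exec <cilium-pod-on-node1> -- cilium bpf tunnel list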

So traffic from the UI pods needs to go to another node, via another host, to reach the API pods; whereas all the underlay network needs to know is how to get from the TEP of Host1 to the TEP of Host2.

Seeing Geneve²

Traffic is generated by a client browsing the UI, which in turn talks to the API; this traffic can then be captured on Host1 in the outbound direction (after NSX-T Geneve encapsulation):

nsxcli -c start capture interface vmnic3 direction output file geneve.pcap
  • The outermost conversation is a Geneve flow from Host1 TEP to Host2 TEP (NSX-T)
  • The next Geneve flow is from Node1 TEP to Node2 TEP (Cilium)
  • And finally the actual application HTTP traffic from the UI Pod to the API Pod
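To dig into those layers offline, the capture can be copied off the host and read with tshark (or opened in Wireshark), which will dissect both Geneve headers; a sketch, using the filename from the capture command above:

# Print the first few packets with the Geneve layers expanded
tshark -r geneve.pcap -O geneve -c 5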

In the real world the workload placement would probably mean no traffic would even need to go on to the wire, but it’s always good to see what would happen when it does!

Configuring OSPF in NSX-T

The release of NSX-T 3.1.1 brings with it, among other things, OSPFv2 routing! This may be a welcome return for the many enterprises that haven’t adopted BGP in the datacentre and allows for a smoother transition from NSX-V, which has always had OSPF support.

Unlike with NSX-V, where routing was used within the virtual domain (between ESGs and DLRs), NSX-T only uses dynamic routing protocols for external physical-to-virtual connectivity. So OSPF is enabled on T0 Gateways, connecting to the physical world.

Once you have a T0 Gateway deployed, enabling OSPF is now just a few clicks away…

Here’s the logical topology that will be configured in the rest of this post:

Configuring OSPF

1. Assign each interface for each VLAN to the relevant Edge Node:

2. Disable the BGP toggle and enable the OSPF toggle:

3. Create an Area definition (only a single Area can be created). The type can be either Normal or NSSA and the authentication method can be one of the usual OSPF options: Type 0 (None), Type 1 (Plaintext) or Type 2 (MD5):

4. Configure each interface to be used in the OSPF process. Each interface and area can be handily selected from the dropdown, then set the Enabled toggle. Interfaces can be of type broadcast (DR/BDR) or P2P:

5. To advertise overlay segments out to the physical world, create a route redistribution policy, selecting the types of routes to advertise, e.g. T1 Connected, and select OSPF as the Destination Protocol:

6. Then don’t forget to enable the newly created OSPF route redistribution:

As long as the physical network has also been set up, there should now be a few neighbours forming, which can be viewed directly in the NSX UI:

A neighbourship will also form between the two T0 SR instances, but it will stay in the 2Way/DROther state, as the DR and BDR roles are taken by the physical routers; all other adjacencies are to the physical network.

And a similar view from the physical world:

Finally, here are some routes for Segments that are connected to a T1, “advertised” to the T0, then redistributed out to the physical network using ECMP OSPF from the T0 SRs:
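For completeness, the equivalent checks on a typical Cisco-style top-of-rack switch would be along these lines (exact syntax varies by vendor):

show ip ospf neighbor
show ip route ospf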

NSX-T Policy API Single JSON PATCH

The NSX-T Policy API is a powerful concept that was introduced in 2.4 and powers the new Simplified UI. It provides a declarative method of describing, implementing and deleting your entire virtual network and security estate.

In a single API call you can deploy a complete logical topology utilising all the features NSX-T provides, including T1 Gateways (with NAT/Load Balancer services) for distributed routing, Segments for stretched broadcast domains and DFW rules to enforce microsegmentation.

This example performs the following:

  • Creates a T1 Gateway and connects it to the existing T0
  • Creates three Segments and attaches them to the new T1
  • Creates intra-app distributed firewall rules to only allow the necessary communication between tiers
  • Creates Gateway Firewall rules to allow external access directly to the web tier
  • Creates a Load Balancer for the web tier with TLS-offloading using a valid certificate

And once deployed the topology will look like this:

Currently, on the networking side there is only a T0 Gateway and a single Segment (a VLAN-backed transit Segment to the physical network), with no T1s or Load Balancers:

And on the security side there are no DFW policies:

Once the JSON body (see my example here) is created, with the relevant T0, Edge Cluster and Transport Zone IDs inserted, the REST call can be constructed. Using your favourite REST API client, e.g. Curl, Postman or Requests (Python), the request should look like this:

URL: https://NSX-T_MANAGER/policy/api/v1/infra/
Method: PATCH
Header: Content-Type: application/json
Auth: Basic (NSX-T Admin User/Password)
Body: The provided JSON
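As a minimal curl sketch of the same request (the manager address, credentials and JSON filename are placeholders):

# -k skips certificate verification, which is fine for a lab
curl -k -u admin:'PASSWORD' \
  -X PATCH 'https://NSX-T_MANAGER/policy/api/v1/infra/' \
  -H 'Content-Type: application/json' \
  -d @topology.json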

Once you send a successful request you’ll notice you receive a status 200 almost instantly, but don’t be fooled into thinking that your entire topology has now been created!

In reality this is just the policy engine acknowledging your declarative intent. It then works to convert, or ‘realise’, that intent into imperative tasks that create all of the required logical objects.

Once this has all been created you’ll see your network and security components in the GUI:

Now the magic of using this API means that you can also delete your entire topology with the same call, just by changing marked_for_delete to true for each section.

Example code here: https://github.com/certanet/nsx-t-policy-api

Configuring NSX-T 2.5 L2VPN Part 2 – Autonomous Edge Client

Continuing on from the server configuration in Part 1, this is the NSX-T L2VPN Client setup.

There are a few options to terminate an L2VPN in NSX-T, but all of them are VMware proprietary, so there’s no vendor inter-op here. This article uses the ‘Autonomous Edge’ option, previously called the Standalone Edge in NSX-V, which is essentially a stripped-down Edge VM that can be deployed to a non-NSX-prepped host. Confusingly, this isn’t the new type of Edge Node used in NSX-T, but is instead the same ESG that was used in V.

The Topology

The Overlay Segments on the left are in the NSX site that’s hosting the L2VPN server. On the right is a single host in a non-NSX environment that will use the Autonomous Edge to connect the VLAN-backed Client-User VM to the same subnet as SEG-WEB.

OVF Deployment

First the Autonomous Edge OVF needs to be deployed in vCenter:

The first options are to set the Trunk and Public (and optionally HA) networks. The Trunk interface should connect to a (shock) trunked portgroup (VLAN 4095 in vSphere) and the Public interface should connect to the network that will be L3 reachable by the L2VPN server (specifically the IPSec Local Endpoint IP). Also, as this is an L2VPN, security settings will need to be loosened to allow MACs that are unknown to vCenter. If the portgroups are on a VDS then a sink port should be used for the Trunk; alternatively, on a standard portgroup, Forged Transmits and Promiscuous Mode are required.
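For example, on a standard portgroup the overrides can be applied from the ESXi shell along these lines (the portgroup name is a placeholder):

esxcli network vswitch standard portgroup policy security set \
  --portgroup-name "L2VPN-Trunk" \
  --allow-promiscuous true --allow-forged-transmits true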

Next, set up the passwords and Public interface network settings. Then in the L2T section set the Peer Address to the L2VPN Local Endpoint IP and copy the Peer Code from the server setup into the Peer Code field:

The last step in the OVF is to set the sub-interface, which maps a VLAN (on the local host) to the Tunnel ID (that was set on the Segment in the server setup). Here VLAN 80 maps to Tunnel ID 80, which was mapped to SEG-WEB:

The Tunnel

Once the OVF is deployed and powered on, either the L2VPN Client or Server can initiate an IKE connection to the other to set up the IPSec tunnel. Once this is established, a GRE tunnel is set up and the L2 traffic is tunnelled inside ESP on the wire. There are a few options to view the status of the VPN:

Client tunnel status:

Server tunnel status:

GUI tunnel stats:

Connecting a Remote VM

Now that the VPN is up a VM can be placed on VLAN 80 at the remote site and be part of the same broadcast domain as the SEG-WEB Overlay Segment. Here the NSXT25-L2VPN-Client-User VM is placed in a VLAN80 portgroup, which matches what was set in the OVF deployment. NOTE there isn’t even a physical uplink in the networking here (although this isn’t a requirement) so traffic is clearly going via the L2VPN Client:

Now set an IP on the new remote VM in the same subnet as the SEG-WEB:
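On a Linux guest this is just a case of adding an address from the SEG-WEB subnet, for example (the address, prefix and interface name here are made up):

sudo ip addr add 172.16.10.50/24 dev ens192
sudo ip link set ens192 up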

And load up a website hosted on the Overlay and voila!

NSX-T 2.5 Inline Load Balancer

See here for configuring a One-Arm Load Balancer

Load balancing in NSX-T isn’t functionally much different to NSX-V and the terminology is all the same too. So just another new UI and API to tackle…

As load balancing is a stateful service, it will require an SR within an Edge Node to provide the centralised service. It’s ideal to keep the T0 gateway connecting to the physical infrastructure as an Active-Active ECMP path, so this LB will be deployed to a T1 router.

The Objective

The plan is to implement a load balancer to provide both highly available web and app tiers. TLS Offloading will also be used to save processing on the web servers and provide an easy single point of certificate management.

  1. User browses to NWATCH-WEB-VIP address
  2. The virtual server NWATCH-WWW-VIP is invoked and the request is load balanced to a NWATCH-WEB-POOL member
  3. The selected web server needs access to the app-layer servers, so references the IP of NWATCH-APP-VIP
  4. The NWATCH-APP-VIP virtual server forwards the request onto a pool member in NWATCH-APP-POOL
  5. The app server then contacts the PostgreSQL instance on the NWATCH-DB01 server and the user has a working app!

Configuration

First the WEB and APP servers are added to individual groups that can be referenced in a pool. Using a group with dynamic selection criteria allows for automated scaling of the pool by adding/removing VMs that match the criteria:

Each group is then used to specify the members in the relevant pool to balance traffic between:

A pool then needs to be attached to a Virtual Server, which defines the IP/Port of the service and also the SSL (TLS) configuration. Here a Virtual Server is created for each service (WEB and APP):

The final step is to ensure that the new LB IPs are advertised into the physical network. As the LB is attached to a T1 gateway it must first redistribute the routes to the T0, which is done with the All LB VIP Routes toggle:

Next, advertise the LB addresses from the T0 into the physical network, which is done by checking LB VIP under the T0 Route Re-distribution settings:

Here’s confirmation on the physical network that we can see the /32 VIP routes coming from two ECMP BGP paths (both T0 SRs), as well as the direct Overlay subnets:

Traffic Flow

There are now a lot of two-letter acronyms in the path from the physical network to the back-end servers (T0, T1, DR, SR, LB), so what does the traffic flow actually look like?

The first route into the NSX-T realm is via a T0 SR, so check how it knows about the VIPs: 
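One way to do that is from the Edge node CLI, dropping into the T0 SR’s VRF and checking the routing table (a sketch; the VRF ID will differ per environment):

get logical-routers      # note the VRF ID of the Tier0 SR
vrf 1
get route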

It can see the VIP addresses coming from a 100.64.x.x address, which in NSX-T is a subnet that’s automatically assigned to enable inter-tier routing. In this case the interface is connected from the T0 DR to the T1 SR:

So the next stop should be the T1 gateway. From the T1 SR the VIP addresses are present under the loopback interface:

So the traffic flow for this Inline Load Balancer looks like the below:


The Final Product

Testing from a browser with a few refreshes confirms the (originally HTTP-delivered) WEB and APP servers are being round-robin balanced and TLS protected:

And the stats show a perfect 50/50 balance of all servers involved:

Configuring NSX-T 2.5 L2VPN Part 1 – Server

NSX-T 2.5 continues VMware’s approach of moving all stateful services to T1 gateways, meaning you can keep your T0 ECMP! This version brought the ability to deploy IPSec VPNs on a T1, however L2VPN still requires deployment to a T0. I’m sure it’ll be moved in a later version, but for now here are the install steps…

First, ensure your T0 gateway is configured as Active-Standby, which rules out ECMP but allows stateful services. NOTE: this mode cannot be changed after deployment, so make sure it’s a new T0:

To enable an L2VPN you must first enable an IPSec VPN service. Create both and attach them to your T0 gateway as below:

Next create a Local Endpoint, which attaches to the IPSec service just created and will terminate the VPN sessions. The IP for the LE must be different to the uplink address of the Edge Node it runs on, and is then advertised out over the uplink as a /32.

To ensure the LE address is advertised into the physical network enable IPSec Local IP redistribution in the T0 settings:

And here’s the route on the TOR:

Now it’s time to create the VPN session to enable the peer to connect. Select the Local Endpoint created above and enter the peer IP, PSK and tunnel IP:

You can then add segments to the session from here, or directly from the Segments menu:


There’s one last thing to wrap up the server side config and that’s retrieving the peer code. Go to VPN > L2VPN Sessions > Select the session > Download Config, then copy the peer code from within the config, which will be used in the next part configuring the client…

Don’t Forget Overlay MTU Requirements!

There are several benefits to overlay networking, such as a much larger number of segments to consume (over 16 million for both VXLAN and Geneve, which each provide 24 bits for the VNI) and removing the reliance on physical network config to spin up a new subnet.

To make use of these benefits and more, the overlay doesn’t ask for much of the underlay, just a few basic things:

  • IP connectivity between tunnel endpoints
  • Any firewalls in the path allowing UDP 6081 for Geneve or UDP 4789 for VXLAN
  • Jumbo frames with an MTU size of at least 1600 bytes
  • Optionally, multicast if you wish to optimise flooding

This article is about what happens when you don’t obey rule #3 above with NSX-T…

Topology

A basic 3-tier app has been deployed as above, with each tier on its own segment and connected to a T1 gateway. Routing between segments will be completely distributed in the kernel of each hypervisor that the workloads are on.

Problem

When connected to one of the WEB servers and attempting to access the APP, the connection was getting reset:

As the WEB and APP workloads were on separate hosts the traffic was being encapsulated into a Geneve packet (which cannot be fragmented) and sent over the transport network from TEP to TEP:

A ping test confirmed that connectivity was fine up to 1414 bytes of data; anything larger was being dropped:

root@NWATCH-WEB01 [ ~ ]# ping 10.250.70.1 -s 1414
PING 10.250.70.1 (10.250.70.1) 1414(1442) bytes of data.
1422 bytes from 10.250.70.1: icmp_seq=1 ttl=61 time=1.33 ms
^C
--- 10.250.70.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.326/1.326/1.326/0.000 ms

root@NWATCH-WEB01 [ ~ ]# ping 10.250.70.1 -s 1415
PING 10.250.70.1 (10.250.70.1) 1415(1443) bytes of data.
^C
--- 10.250.70.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

This makes sense against the default 1500-byte underlay MTU: a 1442-byte inner IP packet plus the inner Ethernet, Geneve, UDP and outer IP headers takes the encapsulated packet right up to the limit, so one byte more and it no longer fits. To prove the app layer itself was ok, a small page returning just one word was tested and worked fine:

Solved

As soon as the MTU size was increased on the underlay transport network to 1600 bytes the full webpage loaded fine:
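A handy way to confirm the transport network itself is now passing the bigger frames is to ping between TEPs from the ESXi hosts with fragmentation disabled (a sketch; the remote TEP address is a placeholder, and the NSX TEP stack on ESXi is still named ‘vxlan’ even though the encapsulation is Geneve):

# 1572 bytes of data + 28 bytes of ICMP/IP headers = a full 1600-byte packet
vmkping ++netstack=vxlan -d -s 1572 <remote-TEP-IP>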

And a ping for good measure:

root@NWATCH-WEB01 [ ~ ]# ping 10.250.70.1 -s 1415
PING 10.250.70.1 (10.250.70.1) 1415(1443) bytes of data.
1423 bytes from 10.250.70.1: icmp_seq=1 ttl=61 time=1.76 ms
1423 bytes from 10.250.70.1: icmp_seq=2 ttl=61 time=1.69 ms
^C
--- 10.250.70.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 1.686/1.723/1.760/0.037 ms

Deploying NSX-T 2.5 with VMware’s Ansible Examples

I try to use IaC to build my labs, so that when I inevitably break something, I can always re-roll.

However, when I tried to build an NSX-T 2.5 lab, using Ansible playbooks based on the VMware examples that worked on 2.4, I received an error about the nsx role when deploying the manager OVA:

Error:\n - Invalid value 'nsx-manager nsx-controller' specified for property nsx_role.

After poking around the Manager OVF file I noticed that the allowed values had changed from manager/controller/cloud in previous versions to just manager/cloud.
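For reference, the OVA is just a tar archive, so the allowed values can be inspected without deploying anything; a sketch, with the OVA filename as an example only:

# Extract the embedded OVF descriptor and look at the nsx_role property definition
tar -xf nsx-unified-appliance-2.5.0.ova --wildcards '*.ovf'
grep -A5 'nsx_role' *.ovf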

Here’s the values in NSX-T Manager 2.4:

And here’s the values in NSX-T Manager 2.5:

So a simple change of the nsx_role to “NSX Manager” in my playbook and everything deployed successfully!

I’ve raised an issue on the ansible-for-nsxt GitHub page for the example provided and I’ll hopefully submit a PR to resolve it by adding a version variable to make it compatible across versions.

Hands On With NSX-T 2.4

In order to wrap my head around the changes from NSX-V (ok, NSX Data Center for vSphere) to NSX-T (Data Center), I’ve created a few labs with previous versions. With the latest 2.4 release though, a lot of simplification has been done in terms of deployment and manageability, which you can tell straight away from the UI.

After a few hours with NSX-T 2.4 I’d setup the following deployment:


Diagramming it all out helps me to understand how the pieces fit together (and there are a lot of pieces to NSX-T).

A few things I’ve noticed so far…

Logical Switches in vCenter

A nice feature in NSX-T is the way that Logical Switches are presented in vCenter. Compared to the ugly ‘virtualwires’ from NSX-V, you now get full integration of the N-VDS (NSX-T Virtual Distributed Switch), so they look just like the old-school VDSs. Here’s the view from H01:


And again from H03:


New Workflows

Creating segments, routers and other networking constructs had been a little complicated in previous releases, but the new wizards make these tasks easy. Once created, they show up in the new dashboards, adding some much-needed visibility into what’s been created:


For my first deployment I went straight in with the Advanced Networking & Security screen, not knowing that none of these objects are shown in the fancy new dashboards… so I recreated them. Objects created through the new workflows do show up in the Advanced tab though and can be identified with the ‘Protected Object’ icon as below:
