Integrating NSX-V and NSX-T Backed Workloads Using the VeloCloud SD-WAN Solution

In my previous blog, I talked about the fundamentals of the VMware SD-WAN by VeloCloud technology, focusing on its architecture and use cases.
In this blog, I will discuss the design I used to connect NSX-backed workloads across different sites, branches, and clouds using VMware VeloCloud SD-WAN technology.

My setup consists of the following:

1. An NSX-V lab living in Site A. It is backed by a vCenter plus management and compute ESXi clusters. Workloads are virtual machines.
2. An NSX-T lab, backed by management and compute ESXi clusters in addition to a KVM cluster. Workloads are VMs along with Kubernetes (K8s) containers.
3. A VMware Cloud on AWS SDDC instance with vCenter/ESXi/VMs.
Because SD-WAN technology is agnostic of the technology running in the data centers, you can add any cloud/data center/branch to the list above, such as native AWS EC2 instances, Azure, GCP, Alibaba, or simply any private/public cloud workloads.
Also note that NSX Data Center is not a prerequisite for connecting sites/branches using SD-WAN by VeloCloud.

The Architectural Design of the End Solution:

[Image 1: velo_design, the end-to-end SD-WAN design]

 

The above design showcases a small portion of the full picture, where I can connect "n" number of sites/branches/clouds using SD-WAN technology.
I started by connecting my Site 1 in San Jose, which happens to be backed by NSX-V, with my other DC located in San Francisco, backed by NSX-T.
Traffic egressing to the Internet/non-SD-WAN traffic (green) will go via the NSX ESG in the case of the NSX-V site and via the Tier-0 in the case of the NSX-T site.
Branch-to-branch traffic (in purple) will ingress/egress via the VeloCloud Edge (VCE) on each site.

NSX-V Site:

In the San Jose site, I peered the NSX-V ESG with the VCE using eBGP. I also already had an iBGP neighborship between the NSX DLR and the ESG. The transit boundary in blue that you see in the image below is established by deploying an NSX logical switch attached to the NSX-V ESG, the NSX-V DLR, and the VCE.
I redistributed the routes learned via iBGP on the ESG to the VCE router over eBGP.
Now VCE_SanJose knows about the subnets/workloads that reside south of the DLR.
I filtered the default route (default-originate) that the ESG learned from its upstream so that it is not distributed to VCE_SanJose, as I don't want to advertise my default route to other branches/sites/clouds.
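For readers who script their labs, here is a minimal sketch of what this peering and filtering could look like when pushed through the NSX-V routing API (PUT /api/4.0/edges/{edgeId}/routing/config/bgp). The Manager URL, edge ID, addresses, and AS numbers are illustrative lab placeholders, not defaults, and the XML is trimmed to the relevant fields.

# Sketch only: enable eBGP on the ESG toward the VCE, redistribute the
# iBGP-learned DLR routes, and deny 0.0.0.0/0 outbound so the default
# route stays local. All values are illustrative lab placeholders.
import requests

NSX_MGR = "https://nsx-manager.sanjose.lab"   # placeholder NSX-V Manager
EDGE_ID = "edge-1"                            # placeholder ESG ID

bgp_config = """
<bgp>
  <enabled>true</enabled>
  <localAS>65001</localAS>
  <bgpNeighbours>
    <bgpNeighbour>
      <ipAddress>172.16.10.2</ipAddress>  <!-- VCE leg on the transit LS -->
      <remoteAS>65002</remoteAS>
      <bgpFilters>
        <bgpFilter>
          <direction>out</direction>      <!-- filter what we advertise -->
          <action>deny</action>
          <network>0.0.0.0/0</network>    <!-- keep the default route local -->
        </bgpFilter>
      </bgpFilters>
    </bgpNeighbour>
  </bgpNeighbours>
  <redistribution>
    <enabled>true</enabled>               <!-- leak DLR-learned routes -->
  </redistribution>
</bgp>
"""

resp = requests.put(
    f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/routing/config/bgp",
    data=bgp_config,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),           # placeholder credentials
    verify=False,                         # lab-only: self-signed certs
)
resp.raise_for_status()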

Low-level design of the NSX-V site:

 

[Image: velo_design_2 (low-level design of the NSX-V site)]

Based on the above:

Internet/non-SD-WAN traffic path:

VM1 –> DLR –> ESG –> Internet/VLAN

SD-WAN traffic path:

VM1 –> DLR –> VCE –> Internet. (Note that the VCE could have multiple ISP or MPLS links, which leverage Dynamic Multipath Optimization, known as DMPO.)

The VCE will build tunnels to the VMware-hosted VeloCloud Gateways (VCGs) and to the Orchestrator (VCO). The VeloCloud Gateways act as the VCEs' distributed control plane; hence, VCEs learn the routes of all other branches via the updates those VCGs send over (refer to Image 1 to help you visualize the path).

 

Now that we are done configuring the San Jose site, let's go and configure the San Francisco NSX-T data center.

 

NSX-T Site:

A new Tier-0 uplink will be connected to an NSX-T Geneve logical switch. This transit logical switch will also be connected to one of the VCE's interfaces as a downlink.

On the NSX-T Tier-0 and the VCE, we will build an eBGP neighborship via the transit logical switch we created. The VCE will hence learn the routes being advertised from the Tier-0.

Note that in NSX-T, Tier-1 routers auto-plumb all their routes towards the Tier-0.

Now that the VCE knows about the San Francisco routes, it will advertise them to the VCG, which again is hosted somewhere on the Internet by VMware VeloCloud.
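As a rough code equivalent, the sketch below shows the same neighborship expressed in the shape of NSX-T's declarative (policy) API; on the earlier NSX-T 2.x releases, the equivalent call lives under the /api/v1/logical-routers management-plane API instead. The Manager URL, IDs, addresses, and AS numbers are illustrative.

# Sketch only: add the VCE as an eBGP neighbor on the Tier-0 over the
# transit Geneve logical switch. Path follows the NSX-T policy API shape;
# all names and numbers are illustrative lab placeholders.
import requests

NSX_MGR = "https://nsx-manager.sanfran.lab"   # placeholder NSX-T Manager
T0_ID = "tier0-sanfran"                       # placeholder Tier-0 ID
LS_ID = "default"                             # placeholder locale-services ID

neighbor = {
    "neighbor_address": "172.16.20.2",        # VCE downlink on the transit LS
    "remote_as_num": "65002",                 # VCE's AS
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/tier-0s/{T0_ID}"
    f"/locale-services/{LS_ID}/bgp/neighbors/vce-sanfran",
    json=neighbor,
    auth=("admin", "password"),               # placeholder credentials
    verify=False,                             # lab-only: self-signed certs
)
resp.raise_for_status()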

Low-level design of the NSX-T site:

 

[Image: velo_design_3 (low-level design of the NSX-T site)]

 

Internet/non-SD-WAN traffic path:

VM1 –> Tier-1 –> Tier-0 –> Internet/VLAN

SD-WAN traffic path:

VM1 –> Tier-1 –> Tier-0 –> VCE –> Internet.

Note that the VCE could have multiple ISP or MPLS links, which leverage Dynamic Multipath Optimization (DMPO).

 

The VCE will build tunnels to the VMware-hosted VeloCloud Gateways (VCGs) and to the Orchestrator (VCO). The Gateways act as the VCEs' control plane; hence, VCEs learn the routes of all other branches via the VCGs.

 

Now the San Jose and San Francisco workloads know how to reach each other via SD-WAN.

 

Summary

 

The magic of SD-WAN is that we can add "n" number of sites, with or without NSX, and connect them via L3 seamlessly. For instance, I can connect 50 branches to those two DCs by deploying a VCE in each branch.

We can also use DMPO technology to improve the quality of service of traffic destined for branches. Business policies can also be enforced using the VCE.

 


VMware SD-WAN by VeloCloud 101

VMware recently acquired VeloCloud, a company specializing in SD-WAN technology, and added it under its NSX umbrella offering.

In this introductory blog, I will talk about the fundamentals of VMware SD-WAN by VeloCloud, focusing on architecture and use cases.

What is SD-WAN?

 

SD-WAN enables enterprises to support application growth, network agility, and simplified branch implementations while delivering optimized access to cloud services, private data centers, and enterprise applications simultaneously over both ordinary broadband Internet and private links. Think of SD-WAN as the onion routing that connects the enterprise DC to branches and private/public clouds.

[Image: velo_1 (SD-WAN overview)]


Use Cases:

1. Managing and Scaling Connectivity of Workloads on Any Cloud

Just like any SD-WAN technology, the main use case is connecting workloads across any clouds, whether private or public, with the ability to scale and connect in minutes. The Orchestrator also offers recognition and classification of 2,500+ applications and sub-applications without the need to deploy separate hardware or software probes within each branch location. The solution intelligently learns applications as they are seen on the network and adds them to the VMware SD-WAN cloud-based application database. Services such as firewall, intelligent multipath, and Smart QoS may be controlled through the solution's application-aware business policy control.

 

2. Zero-Touch Provisioning

A huge differentiator for VeloCloud over other competitors is its zero-touch branch deployments. VMware SD-WAN Edge appliances automatically authenticate, connect, and receive configuration instructions once they are connected to the Internet in a zero-touch deployment.
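Conceptually, the activation handshake boils down to something like the sketch below. The endpoint and payload are hypothetical stand-ins to show the flow, not the documented VCO activation API.

# Hypothetical sketch of zero-touch activation: a freshly cabled edge
# reaches the orchestrator with only a pre-shared activation key and
# pulls its full configuration. Endpoint and fields are made up.
import requests

VCO = "https://vco.example.com"               # placeholder orchestrator URL
ACTIVATION_KEY = "XXXX-XXXX-XXXX-XXXX"        # generated per edge in the VCO

def activate_edge() -> dict:
    # The edge authenticates with nothing but its activation key...
    resp = requests.post(f"{VCO}/activate",   # hypothetical endpoint
                         json={"activationKey": ACTIVATION_KEY})
    resp.raise_for_status()
    # ...and receives its centrally defined configuration to apply.
    return resp.json()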


3. Dynamic Path Selection

Dynamic Multipath Optimization (DMPO) comprises automatic link monitoring, auto-detection of the Internet service provider, and auto-configuration of link characteristics, routing, and QoS settings.

This means that you could steer traffic of a certain high-priority application via one available ISP, while traffic of low-priority applications is routed to ISP2. Think of a case where you have two links from two different ISPs, one of which has higher bandwidth.
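To make the idea concrete, here is a toy sketch of per-application path selection. It is not VeloCloud's actual DMPO logic, just an illustration of scoring links on measured metrics and pinning low-priority traffic to the secondary ISP; the weights and values are made up.

# Toy illustration of application-aware path selection (not DMPO itself):
# score each link on measured loss/latency/jitter and steer accordingly.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float
    loss_pct: float
    jitter_ms: float

def link_score(link: Link) -> float:
    # Lower is better: weight loss heavily, then jitter, then latency.
    return link.loss_pct * 100 + link.jitter_ms * 2 + link.latency_ms

def pick_link(app_priority: str, links: list) -> Link:
    if app_priority == "high":
        return min(links, key=link_score)   # best-performing link right now
    return links[-1]                        # e.g. the cheaper/secondary ISP2

links = [Link("ISP1", 12.0, 0.1, 2.0), Link("ISP2", 35.0, 0.5, 8.0)]
print(pick_link("high", links).name)        # -> ISP1 under these metrics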

 

4. Link Steering and Remediation

This is another VeloCloud differentiator: an admin can perform on-demand, per-packet link steering based on measured performance metrics, intelligent application learning, the business priority of the application, and link cost to improve application availability. It remediates link degradation through forward error correction, activating jitter buffering and synthetic packet production.

This is extremely beneficial for very sensitive applications (say, video conferencing), where the same packet is duplicated across all available ISP/MPLS links on the sending site, while the destination site simply reassembles the packet flow from all available circuits, improving performance irrespective of jitter and drops on any given ISP link.
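A stripped-down sketch of that duplicate-and-deduplicate behavior, purely for illustration (this is not the real DMPO data path):

# Toy illustration: the sender duplicates each packet on every circuit;
# the receiver keeps the first copy of a sequence number and drops later
# arrivals from slower or lossier links.
class Circuit:
    def __init__(self, name: str):
        self.name = name

    def transmit(self, seq: int, packet: bytes) -> None:
        print(f"{self.name}: sent seq={seq}")   # stand-in for the real link

def send(packet: bytes, seq: int, circuits: list) -> None:
    for circuit in circuits:                    # ISP1, ISP2, MPLS, ...
        circuit.transmit(seq, packet)

class Receiver:
    def __init__(self):
        self.seen = set()

    def on_packet(self, seq: int, packet: bytes):
        if seq in self.seen:                    # duplicate: drop silently
            return None
        self.seen.add(seq)                      # first arrival wins
        return packet

send(b"frame", 1, [Circuit("ISP1"), Circuit("ISP2")])
rx = Receiver()
print(rx.on_packet(1, b"frame"), rx.on_packet(1, b"frame"))  # b'frame' None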

 

[Image: velo_linksteering (link steering and remediation)]

 

5. Cloud VPN (VeloCloud-site to non-VeloCloud-site connectivity)

One-click, site-to-site cloud VPN is a VPNC-compliant IPsec VPN that connects VMware SD-WAN and non-VMware SD-WAN sites while delivering real-time status and health of VPN sites. It establishes dynamic edge-to-edge communication for all types of branches based on service-level objectives and application performance.

6. Security

A stateful and context-aware (application, user, device) integrated next-generation firewall delivers granular control of micro-applications and support for protocol-hopping applications such as Skype and other peer-to-peer applications (e.g., disable Skype video and chat, but allow Skype audio). The secure firewall service is user- and device-OS-aware, with the ability to segregate voice, video, data, and compliance traffic. Policies for BYOD devices (Apple iOS, Android, Windows, Mac OS, etc.) on the corporate network are easily controlled.

7. Deep Packet Inspection

Granular classification of 2,500+ applications enables smart control. Out-of-the-box defaults set Quality of Service (QoS) policies for common business objectives, with IT required only to establish traffic priority. Knowledge of the application profile enables automation of QoS configurations and bandwidth allocations.
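As a purely illustrative sketch of what application-aware QoS defaults amount to (the application names, classes, and shares below are made up, not VeloCloud's database):

# Made-up illustration: map a classified application to a traffic class and
# a bandwidth share, with IT only stating the business priority.
APP_PROFILE = {
    "video-conferencing": {"class": "realtime",      "priority": "high"},
    "crm":                {"class": "transactional", "priority": "normal"},
    "file-sync":          {"class": "bulk",          "priority": "low"},
}

BANDWIDTH_SHARE = {"realtime": 0.5, "transactional": 0.3, "bulk": 0.2}

def qos_for(app: str) -> dict:
    # Unknown applications fall back to the lowest class by default.
    profile = APP_PROFILE.get(app, {"class": "bulk", "priority": "low"})
    return {**profile, "share": BANDWIDTH_SHARE[profile["class"]]}

print(qos_for("video-conferencing"))   # -> realtime, high, 50% share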


What are the layers of VeloCloud SD-WAN?

 

VeloCloud technology is based on three core layers: a management layer for orchestration, a control plane of distributed gateways, and a data plane of on-premises edges.

[Image: layers (the three layers of VeloCloud SD-WAN)]

VeloCloud Orchestrator (VCO):

A multi-tenant orchestrator that provides centralized, enterprise-wide installation, configuration, and real-time monitoring, in addition to orchestrating the data flow through the cloud network. The VCO enables one-click provisioning of virtual services in the branch, the cloud, or the enterprise data center.

 

VeloCloud Gateways (VCG): 

This layer consists of a distributed network of service gateways deployed in top-tier cloud data centers around the world, providing scalability, redundancy, and on-demand flexibility.

VCGs provide optimized data paths to all applications, branches and datacenters along with the ability to deliver network services from the cloud.

They are typically considered a distributed control plane that can optionally participate in the data plane.

 

VeloCloud Edge (VCE):

Zero-touch, enterprise-class appliances that provide secure, optimized connectivity to private, public, and hybrid applications, compute, and virtualized services. VCEs perform deep application recognition, application and packet steering, performance metrics, and end-to-end quality of service, in addition to hosting virtual network function (VNF) services.

They can be deployed as a hardware appliance or as a virtual appliance delivered as an OVA.

 

Conclusion:

 

SD-WAN is the next-generation MPLS networking for enterprises and cloud providers. It embodies the vision of connecting any cloud and any workload anywhere with minimal configuration, making scaling branches a smooth and flawless process.

The technology allows link remediation and packet steering to achieve the highest quality of service.

 

Fixed: vCloud Director 8.20 IPsec tunnel issues after upgrading to NSX 6.3.x from prior NSX versions

[Image: Blog2_3]

Some service providers (SPs) reported losing IPsec tunnels on their vCD edges after upgrading from NSX 6.2.x to NSX 6.3.x+ with the below error:

“Sending notification NO_PROPOSAL_CHOSEN to “IP address” 500, Oakley Transform [OAKLEY AES CBC (256), OAKLEY SHA1, OAKLEY GROUP MODP1024] refused due to strict flag, no acceptable Oakley Transform, responding to Main Mode.”

 

After investigating this internally with Engineering, we found out that the reason is that NSX changed the default Diffie-Hellman group from DH2 to DH14.

Diffie-Hellman is a key-exchange protocol used as part of the negotiation of a secure connection.

The change made by the NSX team was obviously for security reasons. However, it broke the IPsec tunnels on the vCD edges, which were not aware of the change because vCD 8.20 has its own database.
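The failure mode is easy to see in a toy negotiation: with strict matching, the peers have no proposal in common once one side moves to DH14. This is a simplified model, not the actual IKE implementation.

# Toy model of IKE proposal matching: the vCD-managed edge still offers DH2,
# the upgraded NSX 6.3.x side accepts only DH14, so negotiation fails.
def negotiate(initiator_proposals, responder_accepts):
    for proposal in initiator_proposals:
        if proposal in responder_accepts:       # strict match required
            return proposal
    return "NO_PROPOSAL_CHOSEN"                 # the error the SPs reported

vcd_edge_offer = [("AES-256-CBC", "SHA1", "DH2")]    # pre-upgrade default
nsx_63_accepts = [("AES-256-CBC", "SHA1", "DH14")]   # new 6.3.x default

print(negotiate(vcd_edge_offer, nsx_63_accepts))     # -> NO_PROPOSAL_CHOSEN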

 

The temporary workaround?

To work around the issue, the admin has to change the DH group manually from DH14 to DH2 from either the NSX UI or the vCD UI. Note that each time you redeploy the vCD edge, you will have to change the DH group back to 2, as the configuration will be overridden by the service configuration stored in the vCD database. A scripted version of this workaround is sketched below the screenshots.

[Images: blog2_2, Blog2_1 (changing the DH group in the NSX and vCD UIs)]
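If you would rather script the change than click through the UI after every redeploy, a minimal sketch against the NSX-V IPsec endpoint (GET/PUT /api/4.0/edges/{edgeId}/ipsec/config) could look like the following. The Manager URL, edge ID, and credentials are placeholders, and the <dhGroup> tag is my recollection of the NSX-V XML schema, so verify against your API guide.

# Sketch only: pull the edge's IPsec config and flip DH14 back to DH2.
# Endpoint per the NSX-V IPsec API; all values are placeholders.
import re
import requests

NSX_MGR = "https://nsx-manager.example.com"   # placeholder NSX Manager
EDGE_ID = "edge-42"                           # the redeployed vCD edge
AUTH = ("admin", "password")                  # placeholder credentials

url = f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/ipsec/config"
config = requests.get(url, auth=AUTH, verify=False).text

# Crude but effective for a lab: rewrite every site's DH group in the XML.
patched = re.sub(r"<dhGroup>dh14</dhGroup>", "<dhGroup>dh2</dhGroup>", config)

resp = requests.put(url, data=patched,
                    headers={"Content-Type": "application/xml"},
                    auth=AUTH, verify=False)
resp.raise_for_status()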


The Permanent Fix:

The permanent fix is in vCD 9.0, as NSX becomes the source of truth in 9.0, even for non-advanced edges. With 9.0, we no longer use the service configuration, and doing a redeploy will maintain the state the edge had in NSX.

If you can't upgrade to vCD 9.0, you can request a hot patch from GSS for 8.20 that will basically set the vCD edges to DH2.

 

Important to note if you haven’t upgraded NSX yet:

 

If you are ready to upgrade NSX to 6.3+ and you are still on vCD 8.2, requesting and applying this vCD hot patch prior to the NSX upgrade will reduce downtime and manual work.

 

Update From Engineering and GSS:

vCD version 8.20.0.3 will include the hot patch described above. The release is still being tested by QA and will be launched after it gets the green light.