With the release of VMware vCloud Director 9.5, which is packed with a lot of great new features, one of the significant additions is the introduction of Cross-VDC networking.
In prior vCD releases, Cloud Providers could not use the universal constructs that NSX introduced with its Cross-VC architecture, and therefore could not benefit from the use cases that Cross-VC NSX targets and solves.
Starting with vCD 9.5, compatibility with Cross-VC NSX is finally here: vCD now supports the universal constructs that NSX creates. This is great news for Cloud Providers who are looking to target those use cases.
In this blog I will address the use cases for leveraging Cross-VC NSX inside a vCD Virtual Data Center (VDC) and the design we are proposing to integrate with NSX.
This blog was a joint effort with my SE peer Daniel Paluszek and Abhinav Mishra from vCD engineering.
What are the use cases for stretching the network across vCenters/sites?
- Resource Pooling: Logical networking and security across multiple vCenters make it possible to access and pool resources from multiple vCenter domains. Resources are no longer isolated by vCenter and/or vCD boundaries, which achieves better utilization and fewer idle hosts.
- Workload Mobility
Since logical networking can span multiple vCenter domains and multiple sites:
- Cross-VC NSX allows for enhanced workload mobility across Active-Active data centers
- Workloads can now be moved between vCenter domains/sites/Org VDCs on demand; a practical example is a data center migration or upgrade activity.
- Disaster Recovery
Cross-VDC networking will help tenants and providers continue operations in the case of a partial or complete network failure. Workloads on Site-A can leverage the Tenant-X-Org-VDC edge on Site-B if the Tenant-X-Org-VDC Edge on Site-A fails.
Moreover, during an Internet outage on Site-A, all tenant workloads on Site-A will use the Provider Edges on Site-B to reach the Internet via Site-B.
High-Level Design Architecture for vCD 9.5+ and NSX
The goal of this high-level design is to provide optimal availability of network services from the Provider and Tenant layers. We must adhere to Cross-vCenter NSX best practices, so do note that we are presuming you are familiar with the guidance stated here: NSX Cross VC Design Guide
In this suggested design, we have two layers of NSX:
- Tenant layer within vCloud Director (auto-provisioned by vCD)
- Provider-managed layer (provisioned natively in NSX)
The goal is to provide high availability between the two sites while meeting the stated requirements of Cross-VDC networking.
First NSX layer (Tenant Layer): This layer is the one that is controlled and provisioned by vCD. vCD will extend the tenant networks across sites by stretching their respective logical switches (Universal Logical Switches). The Tenant Universal Distributed Logical Router (UDLR) will be auto-provisioned by vCD and will do the required routing for the tenant's workloads residing on different L2 domains. The tenant's Active Edge Services Gateway (ESG), or Tenant-<X>-OrgVDC-Site-A, will terminate all tenant services such as NAT/FW/DHCP/VPN/LB and will essentially be the North/South entry/exit point for workloads residing in the tenant's respective OrgVDC on each site.
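vCD drives all of this automatically, but for the curious, here is a minimal sketch of what creating a Universal Logical Switch looks like at the NSX-v API level; the manager address, credentials, and names below are hypothetical placeholders:

```python
import requests

NSX_MGR = "https://nsx-mgr-site-a.example.com"  # hypothetical primary NSX Manager
AUTH = ("admin", "password")                    # placeholder credentials

# Universal Logical Switches are created in the universal transport zone
# (scope "universalvdnscope") on the primary NSX Manager.
uls_spec = """
<virtualWireCreateSpec>
    <name>Tenant-A-Stretched-LS</name>
    <tenantId>tenant-a</tenantId>
    <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/scopes/universalvdnscope/virtualwires",
    auth=AUTH,
    data=uls_spec,
    headers={"Content-Type": "application/xml"},
    verify=False,  # lab only; use a trusted CA certificate in production
)
resp.raise_for_status()
print("Created ULS:", resp.text)  # response body is the new universal wire ID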
We are suggesting that we deploy the Tenant UDLR in Active/Standby (passive) mode, where all Tenant-A workload traffic, whether on Site-A or Site-B, will egress from Site-A.
The rationale behind Active/Standby mode is to maintain the stateful services running on the tenant's ESG and to keep explicit control of the ingress traffic, which will also help in any failure scenario. (More details on fail-over scenarios in my next blog.)
Tenant-B will have a flipped Active/Standby design: Site-A will be the standby while Site-B will be active for Tenant-B workloads.
Tenant-B workload traffic, whether on Site-A or Site-B, will egress from Site-B.
Making different tenants active on different sites will help us distribute network traffic across sites, and thus benefit from resource pooling and better utilization of the available data centers.
Second NSX layer (Provider Layer): This layer is the provider-controlled NSX layer and will be configured/managed natively in NSX, outside/north of vCD.
Each tenant's ESG (Tenant-X-Org-VDC-Edge) will connect externally over a ULS to the pre-provisioned Provider UDLR. This transit interface will be on VXLAN; in other words, nothing but another pre-provisioned Universal Logical Switch per tenant. That way we can scale up to 1,000 tenants, as the UDLR supports up to 1,000 logical interfaces (LIFs).
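For illustration, here is a minimal sketch of pre-provisioning one such per-tenant transit LIF on the Provider UDLR through the NSX-v API; the edge ID, universal wire ID, and addressing are hypothetical:

```python
import requests

NSX_MGR = "https://nsx-mgr-site-a.example.com"  # hypothetical primary NSX Manager
AUTH = ("admin", "password")                    # placeholder credentials
UDLR_ID = "edge-1"                              # hypothetical Provider UDLR edge ID

# One internal LIF per tenant on a dedicated transit ULS; the tenant ESG's
# uplink sits on the same universal wire, so no routing protocol is needed there.
lif_spec = """
<interfaces>
  <interface>
    <name>Transit-Tenant-A</name>
    <type>internal</type>
    <connectedToId>universalwire-10</connectedToId>
    <addressGroups>
      <addressGroup>
        <primaryAddress>192.168.10.1</primaryAddress>
        <subnetPrefixLength>29</subnetPrefixLength>
      </addressGroup>
    </addressGroups>
    <isConnected>true</isConnected>
  </interface>
</interfaces>
"""

resp = requests.post(
    f"{NSX_MGR}/api/4.0/edges/{UDLR_ID}/interfaces/?action=patch",
    auth=AUTH,
    data=lif_spec,
    headers={"Content-Type": "application/xml"},
    verify=False,  # lab only
)
resp.raise_for_status()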
In this high-level design, we will be utilizing an Active/Active state with local egress mode at the Provider Layer (Provider UDLR). Therefore, local traffic will egress at its respective local site. With this configuration, a UDLR Control VM will be deployed on each site.
We are also suggesting enabling ECMP on the Provider UDLR and peering with up to eight ESGs spread equally across the sites.
The Site-A Provider Primary Control VM will peer with ESGs 1-4 (Green) on Site-A with a higher BGP weight, and with ESGs 5-8 (Green) on Site-B with lower BGP weights. This is achievable because E1-E8 (Green) connect to the same stretched Universal Logical Switch.
Similarly, the Provider Secondary Control VM on Site-B will peer with up to eight ESGs: ESGs 1-4 (Blue) on Site-B will have a higher BGP weight when peering with the Secondary Control VM, while ESGs 5-8 (Blue) on Site-A will peer with a lower BGP weight.
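As a rough illustration of this weighting scheme, here is a sketch against the NSX-v edge routing API; the manager address, edge ID, AS numbers, addresses, and weight values are all hypothetical placeholders, and the forwarding/protocol addresses would match your own UDLR transit design:

```python
import requests

NSX_MGR = "https://nsx-mgr-site-a.example.com"  # hypothetical
AUTH = ("admin", "password")                    # placeholder credentials
UDLR_ID = "edge-1"                              # hypothetical Provider UDLR edge ID

# Illustrative peers: ESGs 1-4 on the local site get a higher weight (preferred),
# ESGs 5-8 on the remote site keep a lower weight as backup paths.
peers = [(f"192.168.100.{i}", 200 if i <= 4 else 60) for i in range(1, 9)]

neighbours = "".join(
    f"<bgpNeighbour>"
    f"<ipAddress>{ip}</ipAddress>"
    f"<remoteAS>65000</remoteAS>"
    f"<weight>{weight}</weight>"
    # On a UDLR, each neighbour also carries the data-plane (forwarding) and
    # control VM (protocol) addresses on the shared transit wire.
    f"<forwardingAddress>192.168.100.100</forwardingAddress>"
    f"<protocolAddress>192.168.100.101</protocolAddress>"
    f"</bgpNeighbour>"
    for ip, weight in peers
)

headers = {"Content-Type": "application/xml"}

# Enable ECMP in the UDLR's global routing config so equal-cost BGP paths are used.
requests.put(
    f"{NSX_MGR}/api/4.0/edges/{UDLR_ID}/routing/config/global",
    auth=AUTH, headers=headers, verify=False,
    data="<routingGlobalConfig><routerId>192.168.100.101</routerId>"
         "<ecmp>true</ecmp></routingGlobalConfig>",
).raise_for_status()

# Push the eight weighted BGP neighbours.
requests.put(
    f"{NSX_MGR}/api/4.0/edges/{UDLR_ID}/routing/config/bgp",
    auth=AUTH, headers=headers, verify=False,
    data=f"<bgp><enabled>true</enabled><localAS>65001</localAS>"
         f"<bgpNeighbours>{neighbours}</bgpNeighbours></bgp>",
).raise_for_status()
```

The higher weight makes the four local-site paths preferred while the remote-site ESGs remain available as backups; with ECMP enabled, the preferred equal-cost paths are used simultaneously.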
The Provider UDLR will reach the tenant ESGs' uplinks via directly connected routes. This is where the public IPs will be floating. There is no need for any static/dynamic routes between the Provider UDLR and the tenant ESGs: the Provider UDLR will advertise its directly connected routes upstream to the Provider Edges via the BGP adjacency that has already been formed, while the tenant ESG will simply NAT the public IPs to the workloads that need to be published.
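Tenant NAT rules are normally created through the vCD tenant portal; purely for illustration, here is a minimal sketch of the equivalent NSX-v API call publishing a floating public IP on the tenant ESG, with a hypothetical edge ID and addressing:

```python
import requests

NSX_MGR = "https://nsx-mgr-site-a.example.com"  # hypothetical
AUTH = ("admin", "password")                    # placeholder credentials
ESG_ID = "edge-42"                              # hypothetical Tenant-A OrgVDC edge

# DNAT a floating public IP on the transit-facing uplink to an internal workload.
nat_rule = """
<natRules>
  <natRule>
    <action>dnat</action>
    <vnic>0</vnic>
    <originalAddress>203.0.113.10</originalAddress>
    <translatedAddress>172.16.10.10</translatedAddress>
    <protocol>tcp</protocol>
    <originalPort>443</originalPort>
    <translatedPort>443</translatedPort>
    <enabled>true</enabled>
  </natRule>
</natRules>
"""

resp = requests.post(
    f"{NSX_MGR}/api/4.0/edges/{ESG_ID}/nat/config/rules",
    auth=AUTH,
    data=nat_rule,
    headers={"Content-Type": "application/xml"},
    verify=False,  # lab only
)
resp.raise_for_status()
```

Since the public IP lives on a subnet directly connected to the Provider UDLR, the BGP advertisement of connected routes is all that is needed to make it reachable from upstream.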
Note: For high availability, a default route (default-originate) would be advertised to the Provider ESGs from the upstream physical network. This will help fail-over to the secondary site when the upstream Internet switches are down.
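To sanity-check that the default route is actually being learned on a Provider ESG, one option is the NSX-v central CLI exposed through the API; a minimal sketch, with a hypothetical manager address and edge ID:

```python
import requests

NSX_MGR = "https://nsx-mgr-site-a.example.com"  # hypothetical
AUTH = ("admin", "password")                    # placeholder credentials

# Run a central CLI command via the API and dump the ESG routing table;
# look for a BGP-learned 0.0.0.0/0 entry in the output.
cmd = "<nsxcli><command>show edge edge-4 ip route</command></nsxcli>"  # edge-4 is hypothetical
resp = requests.post(
    f"{NSX_MGR}/api/1.0/nsx/cli?action=execute",
    auth=AUTH,
    data=cmd,
    headers={"Content-Type": "application/xml", "Accept": "text/plain"},
    verify=False,  # lab only
)
resp.raise_for_status()
print(resp.text)
```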
Big thank you to my peer Yannick Meillier, who inspired the idea of peering the Provider UDLR control VMs with a set of Provider ESGs spread across sites to achieve high availability in case of an upstream failure at any given site.
In my next blog, I will discuss in depth the packet life of the above design along with failure and fail-over scenarios.
4 thoughts on “vCloud Director Cross-VDC Design with Cross VC NSX”
Thank you for the article.
We are testing Cross-VDC; it works (two NSX/vCenter pairs and one vCD instance), but I don't understand the goal of the Control VM. We don't see it in vCD or vSphere; are we missing something?
The goal of the Control VM is to carry your control-plane information. The Control VM is nothing but the UDLR control VM itself. In the case of an active/standby deployment, AKA a common egress point, vCD will deploy the UDLR control VM on the primary site, while in the case of active/active deployments, AKA an egress point per fault domain, vCD will deploy a UDLR control VM on each site. The VM will have a name something like edge-xxx1234 … Here is a video Daniel and I made that shows what goes on behind the scenes. https://www.youtube.com/watch?v=CH0zN0IjkmY&t=2s&list=PLunwH0gjkUBjCY72S3qXA_zBJxl7f_68m&index=3
Thank you. So the Control VM is the UDLR control VM that is deployed when configuring the data center group in the vCD GUI.
Let's say we are in an active/standby deployment. If we lose the primary site (NSX primary and active), we no longer have the universal NSX Controllers and the UDLR control VM. What is the impact, and how can we secure this or make it more reliable?