VXLAN Taking a More Strategic Role in Cloud Networks with Support across Cisco Fabrics and Switching Platforms

February 4, 2014

There's been a lot of news and momentum surrounding VXLAN technology in the last several months, and there is no doubt that VXLAN is becoming a more strategic and pervasive technology across cloud networks as a result. When we rolled out VXLAN about two years ago with the first commercial implementation as part of our Nexus 1000V virtual switch, VXLAN was solely a virtual networking construct and had real constraints in how it could be extended to physical networks and devices. It was also restricted to overlay networks using our Nexus 1000V switch (or other virtual switches supporting the VXLAN overlay protocol).

Now, however, VXLAN is supported broadly across Cisco networking platforms and devices and across multiple Cisco fabric architectures, and we are even seeing broader support from other vendor ecosystems and non-Cisco switching platforms. Cisco is continuing to expand its support for VXLAN to the new Nexus 5600 Series switches, as well as the Nexus 7700 Series using the F3 line card.

For those of you not fully up to speed, VXLAN stands for Virtual eXtensible Local Area Network, and it started out as a vastly more scalable Layer 2 LAN and tenant isolation construct for data center and cloud networks. Where cloud networks were running out of the roughly 4,000 VLAN IDs available to segment application networks, VXLAN gave them over 16 million logical network segments.
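
The arithmetic behind those numbers is simply the width of the identifier field: a VLAN ID is 12 bits, while the VXLAN Network Identifier (VNI) is 24 bits. A quick Python check, purely for illustration:

    vlan_ids = 2 ** 12    # 4,096 possible VLAN IDs (a few are reserved in practice)
    vxlan_vnis = 2 ** 24  # 16,777,216 possible VXLAN segments
    print(f"VLANs: {vlan_ids:,}   VXLAN segments: {vxlan_vnis:,}")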

But VXLAN is also an overlay technology, providing Layer 2 adjacency for VM connectivity and VM migration across Layer 3 network boundaries. Suddenly data centers had a great deal more flexibility in where they could place workloads, how those workloads connected to virtual network services, and how easily they could migrate multi-tier applications to the cloud (and even set up hybrid cloud deployments). For the nitty-gritty on how VXLAN encapsulates Ethernet frames in UDP (not required to get through the rest of this post, and there will be no test later), see below:
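
The original diagram isn't reproduced here, but the layering works like this: the inner Ethernet frame is prepended with an 8-byte VXLAN header carrying the 24-bit VNI, and that in turn becomes the payload of a UDP datagram (IANA-assigned destination port 4789) exchanged between VTEPs over outer IP and Ethernet headers. Below is a minimal Python sketch of just the VXLAN header portion; it is illustrative only, not a production encapsulator:

    import struct

    VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

    def vxlan_header(vni: int) -> bytes:
        """Build the 8-byte VXLAN header: I flag set, 24-bit VNI, reserved bits zero."""
        flags = 0x08 << 24                 # I flag in the first byte marks a valid VNI
        vni_field = (vni & 0xFFFFFF) << 8  # VNI occupies the upper 24 bits of word 2
        return struct.pack("!II", flags, vni_field)

    inner_frame = b"\x00" * 64                 # placeholder Ethernet frame
    packet = vxlan_header(5500) + inner_frame  # this becomes the UDP payload
    print(packet[:8].hex(), "-> carried in UDP to port", VXLAN_UDP_PORT)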

VXLAN frequently gets compared to other tunneling protocols (like NVGRE) and Layer-2-over-Layer-3 extensions (like OTV), and in fact there are strong similarities. There is a great blog here comparing some of the differences between VXLAN and OTV, LISP and GRE for various use case scenarios. And whereas NVGRE is really only getting traction in Microsoft server environments, VXLAN is proving to be much more viable in heterogeneous environments and across multi-vendor ecosystems. Clearly, the support that VMware, Red Hat, Citrix and others, along with their ecosystem partners (including many of Cisco's direct switching competitors), have added over the last year or so is increasing the adoption of VXLAN dramatically.

When we announced our Application Centric Infrastructure (ACI) technology in November, we said that VXLAN would be a fundamental component of this new application-aware fabric, one optimized for both physical and virtual workloads. So whether you are using traditional Nexus 1000V virtual network overlays or the ACI fabric, VXLAN becomes an important common denominator. ACI, in fact, provides VXLAN connectivity to virtual workloads through the Nexus 1000V for ACI, or Application Virtual Switch (AVS). Connecting physical workloads to VXLAN segments can be done through VXLAN gateway devices (usually switches), as described below.

But first, while on the topic of ACI, here is ACI TME Joe Onisick providing a very nice, brief background tutorial on VXLAN technology (along with this companion white paper on VXLAN for Nexus 9000 Series switches):
https://www.youtube.com/watch?v=ZvITtE-gQYg

One of the initial challenges of VXLAN virtual networks was how to connect them to physical workloads, networks and devices that didn't understand the VXLAN tunnel encapsulation, so VXLAN connectivity was really limited to VMs sitting behind virtual switches. VXLAN gateways were the solution to this problem, creating VXLAN Tunnel End-Points (VTEPs) that could terminate the tunnel by removing the Layer 3 encapsulation and mapping the VXLAN segment to a known VLAN for processing by a traditional physical network or device. Cisco originally came out with a software gateway that ran as a virtual machine, but last year also announced support for VXLAN VTEPs in the newer Nexus 3100 Series switches, as well as the new Nexus 9000 Series switches (the ones that support ACI, though they don't need to run in ACI fabric mode to support VXLAN).
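
As a mental model (not any particular switch's implementation), the heart of a VXLAN gateway is a bidirectional VNI-to-VLAN mapping. Here is a toy Python sketch with made-up segment and VLAN IDs:

    # Illustrative VNI <-> VLAN mapping table on a VXLAN gateway (VTEP).
    VNI_TO_VLAN = {5500: 100, 5501: 101}
    VLAN_TO_VNI = {vlan: vni for vni, vlan in VNI_TO_VLAN.items()}

    def from_overlay(vni: int, inner_frame: bytes):
        """Terminate the tunnel: strip VXLAN/UDP/IP, bridge onto the mapped VLAN."""
        return VNI_TO_VLAN[vni], inner_frame

    def to_overlay(vlan: int, frame: bytes):
        """Encapsulate traffic from a physical VLAN onto its VXLAN segment."""
        return VLAN_TO_VNI[vlan], frame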

The Nexus 3000 Series is a low-latency, top-of-rack switch, where it makes a lot of sense to terminate VXLAN tunnels before reaching the destination servers and associated workloads. Similarly, the Nexus 9000 switches act as VTEPs at the leaf nodes of the Nexus 9000 spine-leaf architecture.

We recently announced expanded support, as mentioned above, for VXLAN on the Nexus 5600 and the Nexus 7700 with the F3 line card. This expanded support increases the relevance of VXLAN, making it easier for scalable virtual networks to integrate with existing physical infrastructures, campus WANs, and more. It will also extend our ability to support VXLAN across other fabric and SDN architectures in the future. Increasingly, we are seeing VXLAN tunnel endpoints (VTEPs) migrating to physical devices to support the scale and performance of these overlay networks.

Another emerging VXLAN capability that we are beginning to hear about is "VXLAN routing". A given VXLAN gateway may or may not support VXLAN routing per se, and most people (including me) find the term a bit confusing at first. Remember that all VXLAN packets are UDP/IP packets and thus can already be routed over a Layer 3 network; that is not what the term means. VXLAN routing refers to the ability to take a packet off of one VXLAN segment ID and put it onto another, i.e. mapping from one VXLAN segment to another within the network device. Typically, two workloads that need to communicate are placed on the same VXLAN segment ID, e.g. VXLAN 5500, and forwarding within a single segment is called "VXLAN bridging" (even if the VXLAN spans an L3 network). If you think about how two different Layer 2 VLANs are routed to each other, the term "VXLAN routing" becomes a little more intuitive. The ACI fabric makes frequent use of connecting VXLAN segments together in this way, and as such, several of the Nexus 9300/9500 line cards will support this VXLAN routing capability, as does the new Nexus 5600 Series switch that we are now introducing.
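
Here is a toy illustration of the distinction, with made-up workload names and segment IDs; a real switch makes this decision in hardware based on destination MAC and IP lookups, but the bridging-versus-routing split is the same:

    # Same segment on both ends -> VXLAN bridging; different segments -> VXLAN routing.
    SEGMENT_OF = {"web-vm": 5500, "app-vm": 5500, "db-vm": 5600}  # invented VNIs

    def forward(src: str, dst: str) -> str:
        src_vni, dst_vni = SEGMENT_OF[src], SEGMENT_OF[dst]
        if src_vni == dst_vni:
            return f"VXLAN bridging: stay on VNI {src_vni}"
        return f"VXLAN routing: re-encapsulate from VNI {src_vni} to VNI {dst_vni}"

    print(forward("web-vm", "app-vm"))  # bridging within VNI 5500
    print(forward("app-vm", "db-vm"))   # routing between VNI 5500 and VNI 5600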

Another exciting aspect of the VXLAN evolution is how VXLAN support is appearing in cloud orchestration and network automation solutions. Manually configuring thousands and thousands of VXLANs wouldn't be practical no matter how many networking platforms supported the technology or how pervasive it was. For example, users can now use Cisco UCS Director to automate the provisioning and deployment of VXLAN tenant networks when setting up a new application or tenant: it configures the virtual switch or VTEP endpoints depending on where the workload will be deployed, retrieves an available VXLAN segment ID from the database, and connects all the workloads that will be sharing the VXLAN segment (a sketch of that workflow follows below). For an idea of how this can be done in OpenStack, see here. Essentially, wherever the automation and provisioning of new applications, workloads, tenants or services is happening, the VXLAN infrastructure can be automated as well. And it's really this last aspect that's moving VXLAN out of the prototyping and lab stage and into highly scalable cloud network deployments.
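
To be clear, what follows is not UCS Director's or OpenStack's actual API; it is a hypothetical Python sketch of the workflow such tools automate, with invented names throughout:

    from itertools import count

    class Vtep:
        """Stand-in for a virtual switch or physical VTEP that the orchestrator configures."""
        def __init__(self, name: str):
            self.name, self.segments, self.ports = name, set(), {}
        def add_segment(self, vni: int):
            self.segments.add(vni)      # make sure the VTEP carries this segment
        def attach(self, workload: str, vni: int):
            self.ports[workload] = vni  # put the workload's port on the segment

    VNI_POOL = count(10000)                                         # stand-in for the segment-ID database
    VTEP_FOR = {"web-vm": Vtep("leaf-1"), "db-vm": Vtep("leaf-2")}  # where each workload lands

    def provision_tenant_network(tenant: str, workloads: list) -> int:
        vni = next(VNI_POOL)            # 1. reserve a free VXLAN segment ID
        for wl in workloads:
            vtep = VTEP_FOR[wl]         # 2. find the VTEP serving that workload
            vtep.add_segment(vni)
            vtep.attach(wl, vni)        # 3. connect the workload to the shared segment
        print(f"Tenant {tenant}: {workloads} share VNI {vni}")
        return vni

    provision_tenant_network("acme", ["web-vm", "db-vm"])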

So, if you've been holding back on moving those old VLAN segments in your data center up to VXLAN, you may have a few new reasons to think about revamping your application subnets. Beyond getting some headroom to expand your scale, you'll gain a great deal more flexibility in workload placement no matter which fabric you use, greater connectivity to networks outside the virtual world, and the possibility of greater automation no matter which cloud orchestration tool you use.

Related VXLAN Resources:

Cisco VXLAN Innovations Overcoming IP Multicast Challenges
Digging Deeper into VXLAN, Part 1
VXLAN Gaining More Traction for Scalable Cloud Networks
Packet Pushers Podcast on Scalable Cloud Networks with VXLAN
Integrating VXLAN In OpenStack Quantum
TechWise TV's Fundamentals of VXLAN video (featuring VXLAN-man!)


Tags: Cisco ACI, Cisco Application Centric Infrastructure (ACI), Nexus 1000V, Nexus 3000, AVS (Application Virtual Switch), Nexus 5600, Nexus 7700
