
Understanding VLAN Port Count In A Cisco UCS Environment – Part 3 (Final)

I’ve posted two previous blog entries covering VLAN port count in a cloud environment.  In a Cisco UCS environment (specifically a Vblock in our case), you have to be concerned about VLAN port count on both the UCS Fabric Interconnects and the upstream Nexus 5548s (or whatever upstream Nexus you’re using).  As a side note, the upstream Nexus switches are now included in a Vblock; when we first started looking at the Vblock model, they were not part of the configuration.  I covered everything you need to consider to calculate the VLAN port count on the upstream Nexus 5Ks in a previous post.  For this post, I’ll focus on the downstream Cisco UCS 6100 series Fabric Interconnects.

First, understand that in a cloud environment utilizing Cisco UCS and upstream Nexus switches, the VLAN port count limit on the Fabric Interconnects is the lowest common denominator.  Our Cisco 6140 Fabric Interconnects currently support a limit of 6000 VLAN port instances.  This means most of the focus right now should be on the Fabric Interconnects, unless your design for some reason puts far more VLAN port instances on the upstream Nexus switches than on the 6140s.  I understand this 6000 limit will increase to 14K+ in a future release, matching the current limit on the Nexus 5548s.

So now we know our limit in UCS is 6000 VLAN port instances.  That number is made up of a combination of border (uplink) VLAN ports and access VLAN ports.  See the screenshot below from UCSM.  You can find this by going to the Equipment tab in UCSM, selecting the fabric interconnect, and expanding the VLAN Port Count drop-down on the right (located just under High Availability Details).  You can also get this from the CLI of the fabric interconnect by typing “show vlan-port-count”.  Just make sure you use “scope fabric-interconnect” first to get to the right level; if you try from the top level, the show vlan-port-count command will not be available.

The border VLAN ports are your uplink ports plus the VLANs carried over those ports, so they are pretty easy to understand.  A port channel or virtual port channel (vPC) counts as a single uplink port.  In our design, we have three 10Gbps links going from each 6140 fabric interconnect to each upstream Nexus 5548, giving us a vPC consisting of six 10Gbps links.  We have pre-staged some customer VLANs, and of course we have all of our management, vMotion, replication, and other VLANs, so our current VLAN count in the environment is 110.  UCSM shows 111 for our Border VLAN Port Count.  My first guess was that it adds a count for the trunk itself on top of the 110 VLANs, but it could also be a VLAN configured somewhere that isn’t showing up in UCSM, or, more likely the more I think about it, the Fabric-A VSAN carried on the FC uplink ports.  Maybe someone reading this can confirm where that one extra instance comes from.  We did verify how the count tracks VLANs by removing a configured VLAN from the LAN => VLANs tab in UCSM; when we did, the total dropped to 110, which means that VLAN was no longer being carried across the uplink vPC.
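For what it’s worth, here is the accounting that would explain the 111, written out as a tiny Python sketch.  The breakdown (and especially the VSAN contribution) is my assumption, not confirmed UCSM behavior.

vlans_trunked_on_vpc = 110          # VLANs defined in UCSM and carried across the uplink vPC
fabric_a_vsans_on_fc_uplinks = 1    # assumed source of the extra instance (unconfirmed)

border_vlan_port_count = vlans_trunked_on_vpc + fabric_a_vsans_on_fc_uplinks
print(border_vlan_port_count)       # 111, matching what UCSM reports for Fabric Interconnect A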

The next step is to figure out your Access VLAN Port Count.  As you can see above, our count is currently 184.  I can’t actually use our numbers for this example because we are still testing with this Vblock cloud pod before it goes into production, so some VLANs are mapped down to some vNICs and some are not (test networks, etc.).  The numbers don’t match up across the board, and I don’t want to confuse anyone.

This number is incremented by one for each VLAN carried on each defined vNIC, plus one for each vHBA.  So let’s assume you have 8 ESX hosts in your cluster.  Each ESX host has two vNICs and two vHBAs.  You have 100 VLANs configured in UCSM, and all 100 of those VLANs are enabled on each defined vNIC.  The two vNICs are configured as vNIC0 associated with fabric interconnect A and vNIC1 associated with fabric interconnect B.

We will use Fabric Interconnect A to work through the number.  FI-A has 8 ESX hosts, each presenting one vNIC and one vHBA to it.  Since we are carrying 100 VLANs to every vNIC, our count would be 808: 100 VLANs on each of the 8 vNICs (800), plus the 8 vHBAs (one from each ESX host).  Hopefully that makes sense.
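To make the arithmetic explicit, here is a minimal Python sketch of that counting rule, using the hypothetical numbers above (8 hosts, one vNIC and one vHBA per host pinned to FI-A, 100 VLANs on every vNIC).  The function is just my illustration of the rule as I understand it, not anything Cisco publishes.

def access_vlan_port_count(hosts, vnics_per_host, vlans_per_vnic, vhbas_per_host):
    # Each VLAN enabled on each vNIC adds one instance, and each vHBA adds one more.
    return hosts * vnics_per_host * vlans_per_vnic + hosts * vhbas_per_host

# Fabric Interconnect A in the example: 8 hosts, each presenting one vNIC and one vHBA.
print(access_vlan_port_count(hosts=8, vnics_per_host=1, vlans_per_vnic=100, vhbas_per_host=1))  # 808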

Assuming your environment is configured consistently and you don’t have strange reasons to carry certain VLANs on one side but not the other, the number on fabric interconnect B should look the same.

So in this example we have an Access VLAN Port Count of 808 and a Border VLAN Port Count of 112, for a total VLAN port count of 920.  That leaves 5080 VLAN port instances to go before we hit the limit.
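Continuing with the same example numbers, the headroom calculation is simply:

access_count = 808   # Access VLAN Port Count from the example above
border_count = 112   # Border VLAN Port Count
fi_limit = 6000      # current VLAN port instance limit on the 6140

total = access_count + border_count   # 920
remaining = fi_limit - total          # 5080 instances of headroom
print(total, remaining)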

I’ll tell you that I’ve seen more than one Cisco TAC response to forum questions saying Cisco has lots of cloud service providers that never come close to this number; they’re basically telling people they shouldn’t have to worry about it.  I can’t speak for other cloud service providers.  I know we sell a lot of cloud services to enterprise customers (in other words, not just dev shops).  In our public cloud environment, each customer consumes at least one VLAN, and we average 1.5 VLANs per customer across our entire cloud customer base.  Let’s use that average for the example below.

Let’s say I have a Vblock with two clusters spread across 32 ESX hosts (16 nodes per cluster).  For one reason or another (or maybe just to future-proof), we may need to extend some customer environments across both clusters, so we extend all customer VLANs across both clusters.  I’ll assume I have 100 customers and that my VLANs per customer match our actual average (1.5 per customer).  That gives me 150 VLANs extended across 32 ESX hosts, plus 32 vHBAs, plus my 10 management/control/packet VLANs trunked to each host.  Finally, I’m running one vPC to the upstream Nexus cluster, and I have all 160 VLANs (customer plus management) extended across that vPC trunk.

(150 customer VLANs x 32 hosts) + (10 management VLANs x 32 hosts) + 32 vHBAs (per side) + 160 border VLAN ports = 5312
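Here is that same service-provider math as a short Python sketch, using the assumptions above (100 customers averaging 1.5 VLANs each, 32 hosts all carrying every customer VLAN, 10 management VLANs per host, one vHBA per host on this fabric, and a single vPC trunking all 160 VLANs):

customers = 100
avg_vlans_per_customer = 1.5
hosts = 32                    # two 16-node clusters, every customer VLAN extended to all of them
mgmt_vlans = 10               # management/control/packet VLANs trunked to each host
vhbas = 32                    # one vHBA per host on this fabric interconnect

customer_vlans = int(customers * avg_vlans_per_customer)        # 150
access = (customer_vlans + mgmt_vlans) * hosts + vhbas          # 5152
border = customer_vlans + mgmt_vlans                            # 160 VLANs on the single vPC
print(access + border)                                          # 5312

Each additional customer then adds roughly 1.5 VLANs x 32 hosts plus 1.5 border instances, or about 50 more VLAN port instances, which is how quickly the remaining headroom gets eaten.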

As you can see, I’m right under my 6000 limit and I’ve only provisioned 100 of my typical customers in this multi-tenant, service-provider-class cloud environment.  If I provision 15 more customers, I’ve hit the limit and can no longer provision.  Also, you’ll note that I’m well below the actual VLAN limit in a UCS environment.  The VLAN port count limit is the number cloud service providers need to focus on, not the total VLANs supported, at least if their model is similar to ours.

Obviously it’s important to prune as many VLANs as possible from each vNIC and trunk.  But, as I explained above, there are plenty of business reasons why that may not be possible depending on what you’re trying to do.  The work we are doing with vCD puts even more requirements and restrictions on how we can set things up, so we have to be very cognizant of this limit.

I understand some things are going to be announced at VMworld next week that could help with this scaling issue, and we’re certainly looking forward to it.  I also know that VMware can do some things with MAC-in-MAC encapsulation, but as a guy who has been doing network engineering for over 15 years now, I tend to shy away from large L2 bridging environments.  We learned our lessons there back in the 90s.


