Thursday, April 26, 2012

vSphere 5 Host Network Design - 10GbE vSS Design



Going forward, my 10GbE designs will do more to answer questions around physical network setup and configuration. You will therefore see more detail in the diagram than I normally give, especially with regard to how the physical switches are uplinked and interconnected. I had network design input from Cisco engineers on this to ensure that redundancy and throughput are not compromised once vSphere traffic gets onto the physical switches.

There were many discussions around the use of Load Based Teaming and EtherChannel, neither of which is used in the following design. LBT is not used because the licensing level does not allow for it. For more information on LBT please check out this link.
LACP is not used as it would not be good design practice; there are very few implementations where LACP/EtherChannel would be a valid choice. For a comprehensive write-up on the reasons why, please check out this link.

The following design is based on a segmented 10GbE networking infrastructure where multiple physical switches are interconnected using high-speed links. All traffic is separated logically with VLAN tagging.

It is assumed that each host has four 10GbE NICs provided by 2 x dual-port PCI-X expansion cards. All NICs are assigned to a single virtual standard switch, and traffic segregation is performed by pinning each VMkernel port to a specific uplink. This is where 10GbE design diverges from standard 1Gb design: a typical 1Gb setup needs at least two virtual switches, or three when using iSCSI storage; switch1 for Management and vMotion, switch2 for VMs and switch3 for storage.
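As a rough sketch, this is what the single-switch layout could look like from the ESXi shell using esxcli. The vSwitch name, port group names and the NIC-to-traffic mapping below are illustrative assumptions rather than a readout of the diagram, so adjust them to your own design and double-check the options against your build:

  # vSwitch0 exists by default on a fresh install; the add is shown for completeness
  esxcli network vswitch standard add --vswitch-name=vSwitch0

  # Attach all four 10GbE uplinks to the one standard switch
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic2
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic3

  # One port group per traffic type (FT and VM port groups follow the same pattern)
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=iSCSI

  # Pin each port group to one active uplink and leave the rest as standby for NIC failure
  esxcli network vswitch standard portgroup policy failover set --portgroup-name=Management --active-uplinks=vmnic0 --standby-uplinks=vmnic1,vmnic2,vmnic3
  esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic1 --standby-uplinks=vmnic0,vmnic2,vmnic3
  esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI --active-uplinks=vmnic2 --standby-uplinks=vmnic0,vmnic1,vmnic3

The same per-port-group pinning can also be done in the vSphere Client by overriding the vSwitch failover order on the NIC Teaming tab of each port group.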

In order to gain the performance benefit of Jumbo Frames at the storage layer, all networking components need Jumbo Frames enabled end-to-end, from the hosts through the network to the storage arrays. There is definitely a performance increase from incorporating Jumbo Frames, and this is outlined in the following link. It is important to note that enabling Jumbo Frames on the single virtual switch allows all traffic to transmit at an MTU of 9000. This means that Management, vMotion, FT and Storage will all use Jumbo Frames. VMs will not use Jumbo Frames unless the feature is also enabled on the network adapter inside the guest OS of the VM.
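On the host side that comes down to something like the following, again only a sketch; vmk2, the iSCSI port group and the addressing are assumed names carried over from the sketch above, with 192.168.30.200 standing in for a storage array target:

  # Jumbo Frames on the virtual switch itself
  esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

  # Storage VMkernel interface created with a 9000 MTU and a static address
  esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI --mtu=9000
  esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.30.11 --netmask=255.255.255.0 --type=static

  # Verify the jumbo path end-to-end with do-not-fragment pings (8972 bytes = 9000 minus IP/ICMP headers)
  vmkping -d -s 8972 192.168.30.200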

Trunking needs to be configured on all uplinks, with the physical switch ports allowing all of the required VLANs through. Trunking at the physical switch is what enables multiple VLANs to be defined at the virtual switch layer. It is important to note that the colours used in the diagram show how traffic flows under normal circumstances; however, every VLAN needs to be able to use every uplink in the event of a NIC failure.
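On the ESXi side this is just Virtual Switch Tagging, i.e. giving each port group its VLAN ID; the IDs below are illustrative assumptions only:

  # Tag each port group; the physical trunk ports must allow all of these VLANs on every uplink
  esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10
  esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20
  esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=30

  # Confirm what ended up where
  esxcli network vswitch standard portgroup list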

If you are running Cisco equipment then you may be able to use the Rapid Spanning Tree Protocol (802.1w) standard. This means you do not need to configure the trunk ports with PortFast, as the physical switches will identify these ports correctly on their own. If you are running any other type of equipment, the safest option is probably to enable PortFast (or otherwise disable STP) on each host-facing trunk port, but please refer to your manufacturer's documentation.

Running vCenter and the vCenter database on the same hosts that it manages is not a problem in this design, so you do not need to run a separate Management cluster, although I would say it is usually a good design decision to have one. If you built this solution and later upgraded to Enterprise Plus and started using vSphere Distributed Switches, then a Management cluster would be required.

This design is based around scenarios where Enterprise Plus licensing, and therefore network-based bandwidth limiting and control, is not available. Because SIOC and NIOC are not available in this design, there is no way to guarantee bandwidth for particular traffic types. vMotion in vSphere 5 will quite happily consume 8Gbps of an uplink, and in situations where other traffic is running on that uplink, that traffic would be constricted by vMotion.


*** Updates ***

05/05/2012 - Minor update to Jumbo Frames paragraph. Thanks to Eric Singer for his observations.


9 comments:

  1. Hi,

    Cool post.
    Just a warning for others: we have been running this design for about 18 months now, and we frequently have difficulties with VMware engineers going on about it being unsupported.
    They really don't like us mixing iSCSI with other traffic on the same vSwitch. Do you have any reference to this being an OK configuration according to VMware?

    Regards
    Carl.

    Replies
    1. As you can see from the following link, VMware actually recommends running all traffic through the same physical and virtual switch, so if that were not supported then VMware would have a big problem. What they might not like is using Jumbo Frames across the entire network, as this is one of the main historical reasons for keeping the storage vSwitch separate from the rest.

      http://blogs.vmware.com/networking/2011/12/vds-best-practices-rack-server-deployment-with-two-10-gigabit-adapters.html

  2. Hi Paul,

    Is it necessary to have 4 uplinks for the 2 switches? If so, why is that?

    Do the switches have to be managed? The switches I have are unmanaged.

    Thanks,

    Erich

    Replies
    1. It is not completely necessary to have 4 uplinks. This design shows an optimal setup where one type of traffic will absolutely not impact any other. You can do a similar design using 2 x 10GbE uplinks; if you search Google for the VMware Networking Blog you will find information on this. If you are using unmanaged switches then they will not be capable of VLAN segmentation. It will work, but it will not be anywhere near as secure as I personally like it to be.

  3. This may be a dumb question, but how are the links between the two switches configured? Are they etherchannel?

    Replies
    1. In this situation, and with this equipment, the best option would be EtherChannel, but it is completely dependent on the equipment you have available. The best way to run this would actually be to use stacking cables in the back so that the switches work as a single unit, but I'm not sure if stack ports are available on the Nexus 5000 rack switches.

  4. Looking at the design, it looks like if I add a second iSCSI vmk port group and set vmnic1 active with vmnic0, 2 and 3 standby, that would enable multipath I/O? Assuming storage support. Nexus 5010/5020 don't have stack ports; instead you use virtual PortChannel (vPC) to link the two switches together, if memory serves.

    Replies
    1. If you put an iSCSI port group on vmnic1 then that traffic will be competing with vMotion traffic. If you really wanted a redundant port it would be better to use vmnic0.
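
      For anyone trying this, a rough sketch of what the second pinned iSCSI VMkernel port and the port binding could look like. The vmk/port group names, the addressing and the vmhba33 software iSCSI adapter name are assumptions for illustration only, and note that for port binding each iSCSI port group should have exactly one active uplink with the remaining NICs left unused rather than standby:

        # Second iSCSI port group pinned to a single active uplink (vmnic0 as suggested above);
        # uplinks not listed as active or standby are treated as unused
        esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=iSCSI-B
        esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-B --vlan-id=30
        esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic0

        # Second storage VMkernel interface, also at 9000 MTU
        esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI-B --mtu=9000
        esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=192.168.30.12 --netmask=255.255.255.0 --type=static

        # Bind both iSCSI VMkernel ports to the software iSCSI adapter
        esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
        esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3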
