Sunday, October 2, 2011

vSphere 5 Host Network Design - 6 NICs Isolated Storage & No FT

The following diagram outlines a design that uses six NICs, makes use of physically isolated networking for storage, and does not include Fault Tolerance.

Uplink ports should all be configured to use PortFast.
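On Cisco switches, for example, PortFast on a trunked ESXi uplink is enabled per interface. This is only a sketch; the interface name is illustrative and the exact syntax varies by switch model and IOS version.

```shell
! Cisco IOS sketch - interface name is an assumption
interface GigabitEthernet0/1
 description ESXi host uplink
 spanning-tree portfast trunk
```

Note that `portfast trunk` is required rather than plain `portfast` because the uplinks in this design carry trunked VLANs.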

Both the storage uplink ports and the virtual switch used for storage should be configured to use Jumbo Frames by specifying an MTU of 9000.
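If the storage switch is replaced with a standard vSwitch (see note 1 below), the MTU can be set from the ESXi shell. This is a hedged sketch: the vSwitch name (vSwitch2) and VMkernel interface name (vmk1) are assumptions for this design; for a dvSwitch the MTU is set in the vSphere Client instead.

```shell
# Set MTU 9000 on a standard vSwitch used for storage (name assumed)
esxcli network vswitch standard set --vswitch-name=vSwitch2 --mtu=9000

# Set MTU 9000 on the storage VMkernel interface (name assumed)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
```

Remember that jumbo frames must be enabled end to end: the physical switch ports and the storage array interfaces need an MTU of 9000 as well.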

In this design, trunking should be configured on all uplinks. Trunking at the physical switch enables multiple allowed VLANs to be defined at the virtual switch layer.
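As a sketch, a trunked uplink port on a Cisco switch would look something like the following. The interface name and VLAN IDs are illustrative only; substitute the VLANs actually used in your design.

```shell
! Cisco IOS sketch - interface and VLAN IDs are assumptions
interface GigabitEthernet0/2
 description ESXi host uplink
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
```

Restricting the allowed VLAN list on the trunk, rather than permitting all VLANs, limits unnecessary broadcast traffic reaching the host.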

It is assumed that each host has a dual-port onboard NIC and a quad-port expansion card, so vmnic0 and vmnic1 are inbuilt and vmnic2 through vmnic5 are on the PCI-X card.


* 1 - Distributed virtual switches require Enterprise Plus licensing. If you do not have Ent+, replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy available only on a dvSwitch. If you do not have Ent+, use the default policy instead.

* 3 - VLAN tagging on the storage network is not strictly required, since the network is physically isolated, but it is still a good idea: it keeps the networking configuration consistent and is easy to implement.

* 4 - VLAN 30 is reserved in the event of Fault Tolerance being added to the design at a later stage. Always reserve VLANs that may be used in the future, as this will save you time and effort later on.

* 5 - The datastore path selection policy is typically best set to Round Robin, but always consult the storage vendor's documentation to confirm the recommended policy.
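The path selection policy can be checked and changed from the ESXi shell. The sketch below uses placeholder identifiers: the `naa.` device ID must be replaced with your actual LUN identifier, and `VMW_SATP_DEFAULT_AA` is only an example SATP; use the one your array actually claims.

```shell
# List devices and their current path selection policies
esxcli storage nmp device list

# Set Round Robin on a specific device (device ID is a placeholder)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Or make Round Robin the default for a given SATP (SATP name is an example)
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR
```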

[Diagram: vSphere 5 - 6 NIC IsolatedStorage & NoFT Design v1.0.jpg]

I welcome any questions or feedback. If you would like the original Visio document, you can contact me and I can email it to you.


  1. For dvSwitch2 and the two iSCSI port groups, why configure the NICs as active/unused rather than active/standby?


  2. Because we do not want an iSCSI vmkernel port to start using a vmnic that has been assigned to a different iSCSI vmkernel port. If an uplink fails, the traffic on that path should fail; it should not migrate and try to use bandwidth or paths belonging to a different vmnic.
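The 1:1 vmkernel-to-vmnic binding described above is completed by binding each iSCSI vmkernel port to the software iSCSI adapter. This is a sketch only: the adapter name (vmhba33) and vmkernel names (vmk1, vmk2) are assumptions, and each iSCSI port group must already have exactly one active uplink with the other set to unused before the binding is accepted.

```shell
# Bind each iSCSI vmkernel port to the software iSCSI adapter
# (adapter and vmkernel names are assumptions for this design)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```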

  3. Paul, this is really simple and helpful. Why can't we separate storage using VLANs on the same physical switches? With the above design, are we not wasting two additional physical switches?

    Could you help me with the original Visio document?


    1. Yes. But it is often a requirement of the business or the storage team to run on a separate physical layer from other types of traffic. Please check out the post for "vSphere 5 Host Network Design - 6 NICs Segmented Networks & No FT" for a design that uses VLANs to separate traffic instead. Please note that the dvSwitches in these designs are interchangeable with standard virtual switches.

  4. Quick query as to the connections to the actual storage device. Assuming two controllers with two NICs each (EqualLogic), would you connect each NIC on a controller to a separate switch, or connect all the NICs on the same controller to the same switch?

    1. If each controller has two uplinks then yes, each uplink on a controller should go to a separate physical switch.

    2. Thanks Paul, that's about what I thought as well, but it never hurts to check with someone who knows heaps more than me.