Tuesday, October 4, 2011

vSphere 5 Host Network Design - 8 NICs Isolated Storage Including FT

This article presents two designs incorporating isolated storage networking.

For both designs, the physical switch uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.
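
As a minimal sketch, assuming the storage networking uses a standard vSwitch named vSwitch1 with a VMkernel port vmk1 (both names hypothetical; the MTU of a dvSwitch is set through vCenter instead), the host-side settings could be applied from the ESXi shell:

    # Set the MTU on the storage virtual switch (hypothetical name vSwitch1)
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

    # Set the MTU on the storage VMkernel interface (hypothetical name vmk1)
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000

Remember that Jumbo Frames must be enabled end to end: the physical switch ports on the storage network need an MTU of 9000 (or higher) as well.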

In both designs, trunking should be configured on all uplinks. Trunking at the physical switch enables multiple allowed VLANs to be defined at the virtual switch layer, as shown in the sketch below.
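
As an example, on a Cisco IOS switch (vendor assumed for illustration; other switches have equivalents) each ESXi uplink port could be configured roughly as follows, with hypothetical interface and VLAN numbers:

    interface GigabitEthernet0/1
     description ESXi host uplink
     switchport mode trunk
     switchport trunk allowed vlan 10,20,30,40
     ! On a trunk port, PortFast needs the trunk keyword
     spanning-tree portfast trunk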


Simple Design
The first design is slightly simpler to implement and maintain; it also provides greater throughput for Fault Tolerance traffic.

If your organization is comfortable with VLAN segmentation of DMZ traffic, then you can add this functionality by simply adding the DMZ VLANs to dvSwitch1. However, in this instance I would not recommend doing so unless you are using distributed virtual switches.

If the DMZ implementation requires physically isolated traffic, then a 10 NIC design would be needed, with the extra two NICs assigned to another dvSwitch whose uplinks connect to the physical DMZ switches.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+, replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a teaming policy only available on a dvSwitch. If you do not have Ent+, use the default policy (route based on originating virtual port ID) instead.

* 3 - VLAN tagging for the storage network is not required since the network is isolated, but it is still a good idea: it keeps the networking configuration consistent and is easy to implement (see the sketch after these notes).

* 4 - The datastore path selection policy is typically best set to Round Robin, but always consult the storage vendor's documentation to confirm the correct policy to use (a sketch follows these notes).
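
For notes 3 and 4, a minimal sketch from the ESXi shell, assuming a standard vSwitch port group named Storage, VLAN ID 40, and a storage device naa.xxx (all placeholders):

    # Tag the storage port group with its VLAN (hypothetical port group and VLAN ID)
    esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=40

    # List the storage devices to find the device identifier
    esxcli storage nmp device list

    # Set the path selection policy to Round Robin (naa.xxx is a placeholder)
    esxcli storage nmp device set --device=naa.xxx --psp=VMW_PSP_RR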


[Image: vSphere 5 - 8 NICs IsolatedStorage Simple Design v1.0.jpg]



Complex Design
The second design is much more complex and requires the use of distributed virtual switching for the management network. This is not recommended if vCenter runs on a dvSwitch that it manages: that arrangement creates a circular dependency, is dangerous under certain circumstances, and should be avoided in any design.

If, however, you have a management cluster or run vCenter on a physical machine, then this design may be a good solution depending on your requirements. Specifically, if you use very large nodes with a high consolidation ratio, this design enables you to migrate VMs off a host extremely quickly, as throughput for vMotion events is a primary focus of the design.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses; this design is not for you if you do not have that level of licensing. The dvSwitch is required because the dvSwitch0 configuration is too complex to be rebuilt as standard virtual switches that would then have to be reproduced and maintained across multiple hosts. Management and FT traffic are configured not to use each other's primary uplink ports, which ensures true separation of traffic types that would otherwise adversely affect each other (see the sketch after these notes).



* 2 - VLAN tagging for the storage network is not required since the network is isolated, but it is still a good idea: it keeps the networking configuration consistent and is easy to implement.

* 3 - The datastore path selection policy is typically best set to Round Robin, but always consult the storage vendor's documentation to confirm the correct policy to use.
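
To illustrate the uplink separation from note 1: on a dvSwitch the failover order is set per port group in vCenter, but the equivalent pattern could be sketched on a standard vSwitch from the ESXi shell as follows (port group and vmnic names are hypothetical):

    # Management uses vmnic0 as active and vmnic1 as standby
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1

    # FT uses the opposite order, so each traffic type avoids the other's primary uplink
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=FT --active-uplinks=vmnic1 --standby-uplinks=vmnic0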


[Image: vSphere 5 - 8 NICs IsolatedStorage Complex Design v1.0.jpg]



Comments and feedback welcome.

2 comments:

  1. Hi, regarding your distributed switch design, wouldn't it be better to have the management network on a dedicated vSwitch? What about teaming policies? Traffic shaping? I was just wondering what you recommend. Good article anyway.
    cheers
    Marek

    1. Usually I would put management for an ESXi host on a standard virtual switch, but this design allows you to do either.

      Teaming policies, unless otherwise stated, will use the defaults. Failover order is specified in the design. There is no traffic shaping in this design.
