This design is highly resilient to most forms of network failure, and overall throughput should also be very high.
I had a lot of fun building this diagram and it is probably one of my favourites. Keep in mind that once you get to this level of implementation you should be seriously considering a move to 10Gb infrastructure in the not too distant future.
The following design is based around a converged 1Gb networking infrastructure in which multiple physical switches are interconnected using high-speed links. All traffic is segmented with VLAN tagging for logical network separation.
It is assumed that each host has four inbuilt NICs and 2 x quad-port PCI-X cards, giving 12 NICs in total.
All physical switch ports should be configured to use PortFast.
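As a rough illustration only, the switch-side PortFast configuration on a Cisco IOS switch might look like the following; the interface range is a placeholder and the exact syntax varies by vendor and model. Because the host-facing ports are trunked (see below), the trunk variant of PortFast applies:

    ! Hypothetical sketch - interface range and platform syntax will vary.
    ! PortFast lets host-facing ports skip the spanning-tree
    ! listening/learning delay when a link comes up.
    interface range GigabitEthernet1/0/1 - 44
     spanning-tree portfast trunk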
Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.
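A minimal host-side sketch using ESXi 5 esxcli follows; the vSwitch and vmkernel names (vSwitch2, vmk2) are invented for illustration. Note that for a distributed switch such as dvSwitch2 the MTU is set once in the dvSwitch properties via the vSphere Client, while the per-host vmkernel interface MTU is still set as shown:

    # Hypothetical names - substitute your own vSwitch and vmkernel interface.
    # Raise the MTU on a standard vSwitch to enable Jumbo Frames:
    esxcli network vswitch standard set --vswitch-name=vSwitch2 --mtu=9000
    # Raise the MTU on the storage vmkernel interface to match:
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000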
Trunking needs to be configured on all uplinks. Trunking at the physical switch enables the definition of multiple allowed VLANs at the virtual switch layer. Management, vMotion and FT all reside on vSwitch0, multiple VM Networking VLANs reside on dvSwitch1 and all storage networks reside on dvSwitch2.
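Again purely as a sketch for a Cisco IOS switch, a host-facing trunk port might be configured along these lines; the VLAN IDs are invented, and some platforms also require "switchport trunk encapsulation dot1q" before the mode command:

    ! Hypothetical VLAN numbering - adjust to your own scheme.
    interface GigabitEthernet1/0/1
     switchport mode trunk
     ! e.g. management, vMotion, FT, VM network and storage VLANs:
     switchport trunk allowed vlan 10,20,30,40-49,100,101
     spanning-tree portfast trunk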
This configuration will allow up to 14 vSphere hosts and 4 storage arrays across 4 x 48 port stacked physical switches (14 hosts x 12 NICs = 168 ports, leaving 24 ports for the storage arrays). The switch interconnects would need to be dedicated high-speed stacking uplinks rather than standard Ethernet ports.
If instead an isolated storage network was utilized, it would be possible to have up to 24 hosts connected across the same switches, as each host would then need only its LAN-facing NICs on the stack (for example, 24 hosts x 8 NICs = 192 ports); this is a substantial increase in server density.
It is assumed that you will be using a separate management cluster for vCenter and its associated database, or that your vCenter server and database are located on physical servers.
Notes:
* 1 - Distributed virtual switches require Enterprise Plus licensing. This design really calls for Ent+ licensing because the failover policies are quite complex, and manually configuring them across multiple hosts is almost guaranteed to result in misconfiguration. I would not recommend this design for organizations that do not have access to distributed virtual switches.
* 2 - This vMotion port can be used as shown in the design, or if you need a greater number of FT-protected VMs then simply change this port to an FT port. Make sure to configure load balancing policies so that FT traffic does not interfere with the Management network (see the sketch after these notes).
* 3 - The datastore path selection policy is typically best set to Round Robin, but always consult the storage vendor documentation to confirm the correct policy to use (also shown in the sketch below).
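For notes 2 and 3, a minimal host-side sketch using ESXi 5 esxcli is given below; the portgroup name, vmnic uplinks and naa device identifier are all placeholders:

    # Hypothetical names - the FT portgroup, vmnic numbers and naa.* device
    # identifier are examples only.
    # Pin FT traffic to its own active uplink so it cannot contend with the
    # Management network (explicit failover order on vSwitch0):
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name=FT --active-uplinks=vmnic1 --standby-uplinks=vmnic0
    # Set the Round Robin path selection policy on a datastore device:
    esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR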
[Image: vSphere 5 - 12 NIC SegmentedNetworks Highly Resilient Design v1.1.jpg]
Comments, suggestions and feedback all welcome.
I can be contacted via email for the original Visio document;
logiboy123 at gmail dot com
Nice design. Is there any way you can make the Visio file available? I'd like to base my next design off of this.
Thanks!
Feel free to email me;
logiboy123 at gmail dot com