
Sunday, April 29, 2012

vSphere 5 Host Network Design - 10GbE vDS Design



This design represents the highest performance, most redundant and also most costly option for a vSphere 5 environment. It is entirely feasible to lose three out of the four uplink paths and still be running without interruption, and most likely with no performance impact either. When looking for a bulletproof and highly scalable configuration within the data centre, this would be a great way to go.

The physical switch configuration might be slightly confusing to look at without explanation. Essentially what we have here are four Nexus 2000 series switches that are uplinked into two Nexus 5000 series switches. The green uplink ports in the design show that each 2K expansion switch has 40GbE of uplink capacity to the 5Ks. Layer 3 routing daughter cards are installed within the Nexus 5Ks, so traffic can be routed within the switched environment instead of going out to an external router. In other words, traffic from a host will travel up through a 2K, hit a 5K and then come back down where required. It isn't apparent from the design picture, but Keep-Alive traffic is run between the console ports of the two 5K switches.

It is assumed that each host has four 10GbE NICs provided by 2 x PCI-x dual port expansion cards. All NICs are assigned to a single distributed virtual switch and bandwidth control is performed using Load Based Teaming (LBT) in conjunction with Network IO Control (NIOC) and Storage IO Control (SIOC). A good writeup on how to configure NIOC shares can be found on the VMware Networking Blog, and whilst that information is specific to 2 x 10GbE uplinks it also holds true when using four 10GbE connections. LBT is a teaming policy only available when using a virtual Distributed Switch (vDS).
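To illustrate how this could be scripted, below is a minimal pyvmomi sketch that enables NIOC on the distributed switch, adjusts the shares of one network resource pool and applies the LBT teaming policy to a port group. It is a sketch only: the vCenter address, credentials, switch and port group names and the 'vmotion' resource pool key are assumptions, and the share values should come from your own NIOC design.

```python
# Minimal pyvmomi sketch: enable NIOC and apply "Route based on physical NIC
# load" (LBT) to a dvPortgroup. All names and credentials below are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

dvs = find_by_name(vim.dvs.VmwareDistributedVirtualSwitch, 'dvSwitch0')
pg = find_by_name(vim.dvs.DistributedVirtualPortgroup, 'VM Network')

# Enable Network IO Control on the distributed switch.
dvs.EnableNetworkResourceManagement(enable=True)

# Adjust shares on one system network resource pool; inspect
# dvs.networkResourcePool for the keys that actually exist on your switch.
for pool in dvs.networkResourcePool:
    if pool.key == 'vmotion':
        alloc = pool.allocationInfo
        alloc.shares = vim.SharesInfo(level='custom', shares=50)
        pool_spec = vim.DVSNetworkResourcePoolConfigSpec(
            key=pool.key, configVersion=pool.configVersion, allocationInfo=alloc)
        dvs.UpdateNetworkResourcePool(configSpec=[pool_spec])

# Apply the LBT teaming policy to the port group.
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.inherited = False
teaming.policy = vim.StringPolicy(inherited=False, value='loadbalance_loadbased')
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.uplinkTeamingPolicy = teaming

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.configVersion = pg.config.configVersion
pg_spec.defaultPortConfig = port_cfg
pg.ReconfigureDVPortgroup_Task(spec=pg_spec)

Disconnect(si)
```

The same ReconfigureDVPortgroup_Task pattern can be repeated for each port group on the switch.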

LACP is not used as it wouldn't be a good design choice for this configuration. There are very few implementations where LACP/Etherchannel would be valid. For a comprehensive writeup on the reasons why please check out this blog post. A valid use case for LACP could be made when using the Nexus 1000V as LBT is not available for this type of switch.

In order to gain the performance increase of Jumbo Frames for the storage layer, all networking components will need to have Jumbo Frames enabled. This requires end-to-end configuration from the hosts through the network and to the storage arrays. There is definitely a performance increase from incorporating Jumbo Frames, and this is outlined in the following blog post. It is important to note that enabling Jumbo Frames on the single virtual switch will allow all traffic to transmit at an MTU of 9000. This means that Management, vMotion, FT and Storage will all use Jumbo Frames. VMs will not use Jumbo Frames unless this feature is enabled on the network adapter inside the OS of the VM.
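The virtual switch side of that configuration can also be scripted; the short pyvmomi sketch below raises the MTU on the distributed switch itself. The vCenter address and switch name are assumptions, and the VMkernel adapters plus every physical hop still need a matching MTU (a host-level example is included with the vSS article further down).

```python
# Minimal pyvmomi sketch: set an MTU of 9000 on the distributed switch.
# The vCenter address, credentials and switch name 'dvSwitch0' are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == 'dvSwitch0')

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion  # required for reconfiguration
spec.maxMtu = 9000                             # enables Jumbo Frames on the vDS
dvs.ReconfigureDvs_Task(spec=spec)

Disconnect(si)
```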

Trunking needs to be configured on all physical switch to ESXi host uplinks to allow all VLAN traffic, including Management, vMotion, FT, VM Networking and Storage. Trunking at the physical switch enables the definition of multiple allowable VLANs at the virtual switch layer. All VLANs used must be able to traverse all uplinks simultaneously.
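At the virtual switch layer the allowed VLAN is defined per port group; the hedged pyvmomi sketch below tags one port group with VLAN 20. The port group name and VLAN ID are placeholders for the values in your own VLAN plan.

```python
# Minimal pyvmomi sketch: tag a dvPortgroup with a VLAN ID (virtual switch tagging).
# The port group name 'vMotion' and VLAN ID 20 are assumptions for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == 'vMotion')

vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec()
vlan.inherited = False
vlan.vlanId = 20          # VLAN carried on the physical trunk for this traffic type

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.vlan = vlan

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
spec.defaultPortConfig = port_cfg
pg.ReconfigureDVPortgroup_Task(spec=spec)

Disconnect(si)
```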

When running Cisco equipment there is the potential to use the Rapid Spanning Tree Protocol (802.1w) standard. This means there is no requirement to configure trunk ports with Portfast or to disable STP, as the physical switches will identify these functions correctly automatically. If running any other type of equipment the safest option would probably be to disable STP and enable Portfast on each trunk port, but please refer to the switch manufacturer's manual for confirmation.

Running vCenter and the vCenter database on the same clusters that they manage creates a dangerous circular dependency. It is therefore strongly recommended that the environment has a management cluster dedicated to vCenter and other high-level VMs, where the management cluster uses virtual standard switches (vSS). One alternative to a dedicated management cluster would be to run vCenter and its database as physical servers outside of vSphere.


*** Updates ***

05/05/2012 - Minor update to Jumbo Frames paragraph. Thanks to Eric Singer for his observations.

07/05/2012 - Moved diagram to top of article so that visitors wanting to reference design do not need to scroll down the article to view the diagram. Fixed IP address and VMkernel typos.


Thursday, April 26, 2012

vSphere 5 Host Network Design - 10GbE vSS Design



Going forward my 10GbE designs will be doing more to answer questions around physical network setup and configuration. Therefore you will see more detail in the diagram than I normally give, especially in regard to how the physical switches are uplinked and interconnected. I had some network design input from Cisco engineers on this to ensure that redundancy and throughput are not compromised once vSphere traffic gets onto the physical switches.

There were many discussions around the use of Load Based Teaming and Etherchannel, neither of which is used in the following design. LBT is not used because licensing does not allow for it. For more information on LBT please check out this link.
LACP is not used as it would not be good design practice. There are very few implementations where LACP/Etherchannel would be valid. For a comprehensive writeup on the reasons why please check out this link.

The following design is based around a segmented 10GbE networking infrastructure where multiple physical switches are interconnected using high speed links. All traffic is segmented with VLAN tagging for logical network separation.

It is assumed that each host has four 10GbE NICs provided by 2 x PCI-x dual port expansion cards. All NICs are assigned to a single virtual standard switch and traffic segregation is performed by pinning each VMkernel to a specific uplink. This is where 10GbE design diverges from standard 1GB configurations. Typically for a 1GB setup you would need at least two virtual switches, or three when using iSCSI storage: switch1 for Management and vMotion, switch2 for VMs and switch3 for storage.
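A hedged pyvmomi sketch of this pinning is shown below; it gives one VMkernel port group an explicit active/standby NIC order on the standard switch. The host name, port group name and vmnic ordering are assumptions, and the same pattern would be repeated for each VMkernel port group with a different active uplink.

```python
# Minimal pyvmomi sketch: pin a VMkernel port group to a specific uplink by
# setting an explicit active/standby NIC order on the standard switch.
# Host name, port group name and vmnic ordering are assumptions for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.example.local')
netsys = host.configManager.networkSystem

# Reuse the existing port group specification and override its teaming order.
pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == 'vMotion')
spec = pg.spec
if spec.policy is None:
    spec.policy = vim.host.NetworkPolicy()
spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
spec.policy.nicTeaming.policy = 'failover_explicit'   # use explicit failover order
spec.policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
    activeNic=['vmnic1'],                       # vMotion pinned to this uplink
    standbyNic=['vmnic0', 'vmnic2', 'vmnic3'])  # remaining uplinks take over on failure
netsys.UpdatePortGroup(pgName='vMotion', portgrp=spec)

Disconnect(si)
```

In a real environment this would usually be wrapped in a loop over all hosts so the port group policies stay identical across the cluster.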

In order to gain the performance increase of Jumbo Frames for the storage layer, all networking components will need to have Jumbo Frames enabled end-to-end from the hosts through the network and to the storage arrays. There is definitely a performance increase from incorporating Jumbo Frames, and this is outlined in the following link. It is important to note that enabling Jumbo Frames on the single virtual switch will allow all traffic to transmit at an MTU of 9000. This means that Management, vMotion, FT and Storage will all use Jumbo Frames. VMs will not use Jumbo Frames unless this feature is enabled on the network adapter inside the OS of the VM.
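The host side of the Jumbo Frames configuration can be scripted as in the following pyvmomi sketch, which raises the MTU on the standard switch and on a storage VMkernel adapter. The host, switch and vmk names are assumptions, and the physical switches and the storage array still need Jumbo Frames enabled separately.

```python
# Minimal pyvmomi sketch: enable Jumbo Frames on a standard vSwitch and on a
# storage VMkernel adapter. Host, vSwitch and vmk names are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.example.local')
netsys = host.configManager.networkSystem

# Raise the MTU on the single standard switch that carries all traffic types.
vswitch = next(v for v in netsys.networkInfo.vswitch if v.name == 'vSwitch0')
vs_spec = vswitch.spec
vs_spec.mtu = 9000
netsys.UpdateVirtualSwitch(vswitchName='vSwitch0', spec=vs_spec)

# Raise the MTU on the VMkernel adapter used for iSCSI storage traffic.
vnic = next(n for n in netsys.networkInfo.vnic if n.device == 'vmk2')
vnic_spec = vnic.spec
vnic_spec.mtu = 9000
netsys.UpdateVirtualNic(device='vmk2', nic=vnic_spec)

Disconnect(si)
```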

Trunking needs to be configured on all uplinks so that all ports on the physical switch allow all VLANs through. Trunking at the physical switch enables the definition of multiple allowable VLANs at the virtual switch layer. It is important to note that the colours used in the diagram show how traffic flows under normal circumstances; however, all VLANs need to be able to use all uplinks in the event of a NIC failure.

If you are running Cisco equipment then you might be able to use the Rapid Spanning Tree Protocol (802.1w) standard. This means that you do not need to configure trunk ports with Portfast, as the physical switches will identify these ports correctly automatically. If running any other type of equipment the safest option would probably be to disable STP and enable Portfast on each trunk port, but please refer to your manufacturer's manual.

Running vCenter and the vCenter database on the same hosts that it manages is not a problem in this design, so you do not need to run a Management cluster, although I would say that it is usually a good design decision to do so. If you built this solution and then upgraded to Ent+ and started using virtual distributed switches, then a Management cluster would be required.

This design is based around scenarios where Enterprise Plus licensing and network based bandwidth limiting/control is not available. Because SIOC and NIOC are not available in this design, there is no way to guarantee bandwidth for particular traffic types. vMotion in vSphere 5 would be quite happy to consume 8Gb of an uplink, and any other traffic running on that uplink would be constrained by vMotion.


*** Updates ***

05/05/2012 - Minor update to Jumbo Frames paragraph. Thanks to Eric Singer for his observations.


Friday, October 7, 2011

vSphere 5 Host Network Design - 12 NICs Segmented Networks & Highly Resilient

This design is highly resilient to most forms of failure. Performance throughput will also be very high.

I had a lot of fun building this diagram and it is probably one of my favourites. Keep in mind that once you get to this level of implementation you should probably be seriously considering a move to 10GbE infrastructure in the not too distant future.

The following design is based around a converged 1GB networking infrastructure where multiple physical switches are interconnected using high speed links. All traffic is segmented with VLAN tagging for logical network separation.

It is assumed that each host has four inbuilt NICs and 2 x quad port PCI-X cards.

All physical switch ports should be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

Trunking needs to be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer. Management, vMotion and FT all reside on vSwitch0, multiple VM Networking VLANs reside on dvSwitch1 and all storage networks reside on dvSwitch2.


This configuration will allow up to 14 vSphere hosts and 4 storage arrays across 4 x 48 port stacked physical switches. The switch interconnects would need to be high speed uplinks that do not consume Ethernet ports.

If instead an isolated storage network was utilized then it would be possible to have up to 24 hosts connected across the switches, which is a substantial increase in server density.

It is assumed that you will be using a separate management cluster for vCenter and associated database or that your vCenter server and database are located on physical servers.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design really calls for Ent+ licensing as the failover policies are quite complex, and manually configuring them across multiple hosts is almost guaranteed to result in misconfiguration. I would not recommend this design for organizations that do not have access to distributed virtual switches.

* 2 - This vMotion port can be used as shown in the design, or if you need a greater number of FT protected VMs then simply change this port to an FT port. Make sure to configure load balancing policies so that FT traffic does not interfere with the Management network.

* 3 - Datastore path selection policy is typically best set to Round Robin, but always consult the storage vendor documentation to confirm the best type to use; a scripted example follows these notes.
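As referenced in note 3, a minimal pyvmomi sketch for setting the Round Robin policy is shown below. The host name is an assumption, and the script simply applies VMW_PSP_RR to every multipathed LUN on that host, which should only be done where the storage vendor supports Round Robin.

```python
# Minimal pyvmomi sketch: set the Round Robin path selection policy (VMW_PSP_RR)
# on every multipathed LUN of one host. The host name is an assumption; apply
# only where the storage vendor documentation recommends Round Robin.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi01.example.local')
storsys = host.configManager.storageSystem

rr_policy = vim.host.MultipathInfo.LogicalUnitPolicy(policy='VMW_PSP_RR')
for lun in storsys.storageDeviceInfo.multipathInfo.lun:
    storsys.SetMultipathLunPolicy(lunId=lun.id, policy=rr_policy)

Disconnect(si)
```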


[Diagram: vSphere 5 - 12 NIC Segmented Networks Highly Resilient Design v1.1]



Comments, suggestions and feedback all welcome.

I can be contacted via email for the original Visio document:
logiboy123 at gmail dot com

Thursday, October 6, 2011

vSphere 5 Host Network Design - 10 NICs Segmented Networks

The following design is based around a converged 1GB networking infrastructure where multiple physical switches are interconnected using high speed links. All traffic is segmented with VLAN tagging for logical network separation.

All physical switch ports should be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

Trunking needs to be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer. Management, vMotion and FT all reside on vSwitch0, multiple VM Networking VLANs reside on dvSwitch1 and all storage networks reside on dvSwitch2.

This configuration will allow up to 8 vSphere hosts and a single storage array across 2 x 48 port stacked physical switches as long as the switch interconnects are not using Ethernet ports.

An isolated storage network is still my preferred option in almost every environment. Isolating storage to a different physical network would allow up to 12 hosts to be connected across 2 x 48 port switches.

It is assumed that you will be using a separate management cluster for vCenter and associated database or that your vCenter server and database are located on physical servers.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design really calls for Ent+ licensing as the failover policies are quite complex, and manually configuring them across multiple hosts is almost guaranteed to result in misconfiguration. I would not recommend this design for organizations that do not have access to distributed virtual switches.

* 2 - This vMotion port can be used as shown in the design, or if you need a greater number of FT protected VMs then simply change this port to an FT port. Make sure to configure load balancing policies so that FT traffic does not interfere with the Management network.

* 3 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 


[Diagram: vSphere 5 - 10 NIC Segmented Networks Design v1.0]


Comments, feedback and suggestions all welcome.

Wednesday, October 5, 2011

vSphere 5 Host Network Design - 10 NICs Isolated Storage & Isolated DMZ Including FT

This design should be reasonably simple to implement and maintain, provides isolation for storage and the DMZ, and offers excellent throughput for Fault Tolerance traffic.

For this design the physical switch uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 


[Diagram: vSphere 5 - 10 NICs Isolated Storage and Isolated DMZ Design v1.0]



Comments, feedback and suggestions welcome.

Tuesday, October 4, 2011

vSphere 5 Host Network Design - 8 NICs Isolated Storage Including FT

The following article contains two designs incorporating isolated storage networking.

For both designs the physical switch uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be set to use Jumbo Frames by specifying an MTU of 9000.

In both designs trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.


Simple Design
The first design is slightly simpler to implement and maintain; it also has greater throughput for Fault Tolerance traffic.

If your organization is comfortable with segmentation of traffic for a DMZ then you can add this functionality by simply adding the DMZ VLANs to dvSwitch1. However in this instance I would not recommend this unless you are utilizing distributed virtual switches.

If DMZ implementation requires isolated traffic then a 10 NIC design would be required, where the extra two NICs are assigned to another dvSwitch with uplinks going to the physical DMZ switches.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 4 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 


[Diagram: vSphere 5 - 8 NICs Isolated Storage Simple Design v1.0]



Complex Design
The second design is much more complex and requires the use of distributed virtual switching for the management network. This is not recommended if you have vCenter running on a dvSwitch that it manages: doing so creates a circular dependency, is dangerous under certain circumstances, and is not the recommended approach for any design.

If however you have a management cluster or vCenter as a physical machine then this design is possibly a good solution for you depending on your requirements. Specifically if you use very large nodes with a high consolidation ratio then this design enables you to migrate VMs off a host extremely quickly as throughput for vMotion events is a primary focus of this design.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. This design is not for you if you do not have this level of licensing. The dvSwitch is required because the dvSwitch0 configuration is too complex to be built using standard virtual switches that must then be reproduced and maintained across multiple hosts. Management and FT traffic are configured not to use each other's primary uplink ports. This ensures true separation of traffic types that would have adverse effects on each other; a scripted sketch of this uplink ordering follows these notes.



* 2 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 3 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use. 
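As referenced in note 1, the following pyvmomi sketch shows one way to script the explicit uplink ordering, giving the Management and FT port groups opposite active uplinks so they never share a physical NIC under normal conditions. The port group names and uplink names are assumptions based on typical defaults; substitute the names used on your dvSwitch0.

```python
# Minimal pyvmomi sketch: give the Management and FT port groups on dvSwitch0
# opposite explicit uplink orders so each has its own primary physical NIC.
# Port group names and uplink names ('dvUplink1'..'dvUplink4') are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
portgroups = {p.name: p for p in view.view}

def set_uplink_order(pg, active, standby):
    """Apply an explicit active/standby uplink order to one dvPortgroup."""
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.inherited = False
    teaming.policy = vim.StringPolicy(inherited=False, value='failover_explicit')
    teaming.uplinkPortOrder = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        inherited=False, activeUplinkPort=active, standbyUplinkPort=standby)
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_cfg.uplinkTeamingPolicy = teaming
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = pg.config.configVersion
    spec.defaultPortConfig = port_cfg
    pg.ReconfigureDVPortgroup_Task(spec=spec)

# Management prefers dvUplink1, FT prefers dvUplink2; neither falls back to the
# other's primary uplink, preserving the separation described in note 1.
set_uplink_order(portgroups['Management'], ['dvUplink1'], ['dvUplink3', 'dvUplink4'])
set_uplink_order(portgroups['FT'], ['dvUplink2'], ['dvUplink3', 'dvUplink4'])

Disconnect(si)
```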


[Diagram: vSphere 5 - 8 NICs Isolated Storage Complex Design v1.0]



Comments and feedback welcome.

Monday, October 3, 2011

vSphere 5 Host Network Design - 6 NICs Segmented Networks & No FT

The following diagram outlines a design that uses 6 NICs, makes use of logically isolated networking and does not include Fault Tolerance.

Uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be configured to use Jumbo Frames by specifying an MTU of 9000.

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.

The host design presumes that vmnic0 and vmnic1 are inbuilt and vmnic2 through vmnic5 are provided by an expansion card. The Management and Storage switches have an uplink from both the internal and the expansion slot based card. In the event of the internal card failing, the host and VMs will continue to function; in the event of the expansion card failing, all VMs will lose network connectivity but the host will retain management and storage connectivity.


Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging is absolutely required in this design in order to apply a secure segmentation of storage traffic from the rest of the environment.

* 4 - VLAN 30 is reserved in case Fault Tolerance is ever required in the design, hence VLAN 40 is used for storage.

* 5 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use.

[Diagram: vSphere 5 - 6 NIC Segmented Storage and No FT Design v1.1]



I welcome any questions or feedback. If you would like the original Visio document please contact me and I will email these to you.

Sunday, October 2, 2011

vSphere 5 Host Network Design - 6 NICs Isolated Storage & No FT

The following diagram outlines a design that uses 6 NICs, makes use of physically isolated networking for storage and does not include Fault Tolerance.

Uplink ports should all be configured to use PortFast.

Both the storage uplink ports and the virtual switch used for storage should be configured to use Jumbo Frames by specifying an MTU of 9000.

In this design trunking should be configured on all uplinks. Trunking at the physical switch will enable the definition of multiple allowable VLANs at the virtual switch layer.

It is assumed that each host has a dual port internal NIC and quad port expansion card. So vmnic0 and vmnic1 are inbuilt and vmnic2 through vmnic5 are PCI-X.

Notes:

* 1 - Distributed virtual switches require Enterprise Plus licenses. If you do not have Ent+ then replace these switches with standard virtual switches.

* 2 - Route based on physical NIC load is a policy only available in a dvSwitch. If you do not have Ent+ then use the default policy instead.

* 3 - VLAN tagging for the storage network is not required as it is isolated but is still a good idea as it keeps networking configuration consistent and is not very hard to implement.

* 4 - VLAN 30 is reserved in the event of Fault Tolerance being added to the design at a later stage. Always reserve VLANs that may be used in the future as this will save you time and effort later on.

* 5 - Datastore path selection policy is typically best set at Round Robin, but always consult the storage vendor documentation to confirm the best type to use.


[Diagram: vSphere 5 - 6 NIC Isolated Storage & No FT Design v1.0]


I welcome any questions or feedback. If you would like the original Visio document then you can contact me and I can email these to you.

Saturday, October 1, 2011

VMware vSphere 5 Host Network Designs

*** Updated 30/04/2012 ***

I have now uploaded two 10GbE designs. I might create some more 10GbE designs if they are requested. Certainly I can see myself creating some 2 x NIC 10GbE designs, but essentially they would be the same as the two already uploaded. If you have some unique design constraints please contact me on logiboy123 at gmail dot com.

The following is an anchor page for my vSphere 5 host networking diagrams. The included diagrams are based on 1GB and 10GbE network infrastructure.

Each of the following links will represent slight variations on the same type of design where the goals are:
  • Manageability - Easy to deploy, administer, maintain and upgrade.
  • Usability - Highly available, scalable and built for performance.
  • Security - Minimizes risk and is easy to secure.
  • Cost - Solutions are good enough to meet requirements and fit within budgets.

The following base designs should be considered a starting point for your particular design requirements. Feel free to use, modify, edit and copy any of the designs. If you have a particular scenario you would like created please contact me and I will see if I can help you with it.

All designs are based on iSCSI storage networking. For fibre networks simply replace the Ethernet switches with the relevant fibre switches; the segmentation designs will not apply, so use the isolated designs as your base.


1GB NIC Designs

6 NICs isolated storage & no Fault Tolerance

6 NICs segmented storage & no Fault Tolerance

8 NICs isolated storage including Fault Tolerance

10 NICs isolated storage & isolated DMZ including Fault Tolerance

10 NICs segmented networks including DMZ & Fault Tolerance

12 NICs segmented networks including DMZ & Fault Tolerance - Highly Resilient Design


10GbE NIC Designs

4 NICs 10GbE segmented networking vSS Design

4 NICs 10GbE segmented networking vDS Design