After having been through the pitfalls of the new Hyper-V R2 technologies, I will be posting a couple of articles covering how-tos, best practices, and considerations.
Considerations
Active Directory Access – Hyper-V clustering relies on Windows Failover Clustering, which in turn relies on Active Directory for cluster functions. If you set up your environment with only virtual domain controllers and then bring the entire environment down, you will find yourself locked out of your whole Hyper-V environment when you bring it back online: the cluster cannot start because there is no domain controller available for it to authenticate against.
The preferred method is to ALWAYS have a physical domain controller, but in the real world that is not always possible (and no, you should not make a Hyper-V host a domain controller). An effective workaround that I have tested and used is:
1.) Create a virtual DC on Hyper-V host 1 and another on Hyper-V host 2, using each host's local storage
2.) Configure the VMs to start with the host
3.) Confirm the VMs are not added as cluster resources
4.) Change the "Cluster Service" on each host to Manual start
5.) Create a batch file that runs net start "cluster service" (a sample follows below)
6.) Create a scheduled task to run the batch file 5 minutes after boot
What this does in the event of a complete shutdown: when the hosts come back up, the DC VMs start automatically with them. Five minutes later the scheduled task starts the cluster service, by which time the DCs will hopefully have booted and be able to authenticate the cluster. Once the cluster is up, HA will start turning on the VMs that are shut off.
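Here is a minimal sketch of steps 4 through 6. The C:\Scripts path and the task name are my own placeholders; the 5-minute delay uses the ONSTART trigger with /delay that schtasks supports on Server 2008 R2:

    rem Step 4: set the cluster service (service name ClusSvc) to manual start
    sc config clussvc start= demand

    rem Step 5: contents of C:\Scripts\start-cluster.bat (hypothetical path)
    @echo off
    net start "cluster service"

    rem Step 6: run the batch file 5 minutes after every boot, as SYSTEM
    schtasks /create /tn "DelayedClusterStart" /tr "C:\Scripts\start-cluster.bat" /sc onstart /delay 0005:00 /ru SYSTEM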
Cluster Shared Volumes – With Hyper-V R1 HA, when you failed over a VM you weren't really failing over the VM; you were failing over the LUN. This became a problem if you had multiple VMs on a single LUN, since a failover moved all of them at once. So the typical design was a bunch of smaller LUNs housing only one VM each, which usually meant a lot of wasted SAN space.
In comes Cluster Shared Volumes (CSV) to save the day… Well, kinda, but only if you have free SAN space, or are setting up the SAN and Hyper-V fresh rather than upgrading an existing environment… CSV allows multiple hosts to share, connect to, and modify data on the same LUN at the same time. The only major gotcha is that when you configure a LUN for CSV, it changes from a drive letter to a folder under C:\ … This means you can convert a LUN to CSV if and only if it is empty; otherwise the paths to the VHDs will be wrong after the conversion and your VMs no boot :-(
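A minimal sketch of adding an empty clustered disk as a CSV, using the FailoverClusters PowerShell module that ships with 2008 R2; "Cluster Disk 1" is a placeholder for whatever your cluster named the disk resource:

    Import-Module FailoverClusters
    # One-time: enable CSV on the cluster (R2 makes you accept its usage terms)
    (Get-Cluster).EnableSharedVolumes = "Enabled"
    # Add the (empty!) clustered disk as a CSV
    Add-ClusterSharedVolume -Name "Cluster Disk 1"
    # Every node now sees the volume as a folder, e.g. C:\ClusterStorage\Volume1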
Live Migration – Live Migration is Microsoft's answer to VMotion and works just as well. Turning it on, though, is a treasure hunt all in itself.
Microsoft MPIO and iSCSI – Microsoft does not support bonded/teamed NICs for iSCSI, which means you have to manually configure MPIO for each NIC, for each LUN, on each host, every time you add a new LUN. Hurray for efficiency.
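Part of the grunt work can at least be scripted. A sketch assuming the built-in mpclaim.exe on 2008 R2 and the stock Microsoft iSCSI initiator; the per-NIC session setup for each LUN still has to be done per target (iscsicpl.exe or iscsicli):

    rem Claim all iSCSI-attached storage for Microsoft MPIO (forces a reboot)
    rem The quoted string is the hardware ID Microsoft documents for iSCSI devices
    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"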
NICs – Depending on what you read, setting up a Microsoft-preferred Hyper-V cluster involves 12 network cards: 2 for management, 2 for production, 2 for iSCSI, 2 for cluster, 2 for Live Migration, 2 for CSV… 12… I have successfully set up and run a cluster with 6, which works just fine in a small environment: 1 for management, 2 for production, 2 for iSCSI, and 1 shared for CSV/Live Migration/cluster. Eight, in my mind, would be the preferred number for most installs.
Friday, March 12, 2010
Link Aggregation and VLAN Trunking with ESX and HP Switches
Below is an article from blog.scottlowe.org where Scott discusses how to configure ESX to work with an HP ProCurve switch for link aggregation and VLAN trunking. Pretty simple stuff, but handy when you have Cisco on your mind and need to work with HP.
Using Link Aggregation
There’s not a whole lot to this part. In the ProCurve configuration, users will mark the ports that should participate in link aggregation as part of a trunk (say, Trk1) and then set the trunk type. Here’s the only real gotcha: the trunk must be configured as type “Trunk” and not type “LACP”.
In this context, LACP refers to dynamic LACP, which allows the switch and the server to dynamically negotiate the number of links in the bundle. VMware ESX doesn’t support dynamic LACP, only static LACP. To do static LACP, users will need to set the trunk type to Trunk.
Then, as has been discussed elsewhere in great depth, configure the VMware ESX vSwitch’s load balancing policy to “Route based on ip hash”. Once that’s done, everything should work as expected. This blog entry gives the CLI command to set the vSwitch load balancing policy, which would be necessary if configuring vSwitch0. For all other vSwitches, the changes can be made via VirtualCenter.
The rest of the article can be viewed here: http://blog.scottlowe.org/2008/09/05/vmware-esx-nic-teaming-and-vlan-trunking-with-hp-procurve/
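For reference, a sketch of the ProCurve side Scott describes, using ports 21-22, Trk1, and VLAN 10 purely as placeholders:

    ProCurve(config)# trunk 21-22 trk1 trunk
    ProCurve(config)# vlan 10
    ProCurve(vlan-10)# tagged trk1

The type at the end of the trunk command is the static "Trunk" mode he mentions, not "LACP"; the ESX side is then just the "Route based on ip hash" policy set in VirtualCenter.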
Wednesday, March 10, 2010
HP LeftHand Virtual Storage Appliance, Making Local Storage Shared
Good article from VMware on deploying virtual desktops to a small branch office without shared storage, while still offering failover and redundancy.
http://www.vmware.com/files/pdf/view_local_disk.pdf
Good How-to Video
http://h30423.www3.hp.com/index.jsp?fr_story=0933cdb3c1a887b43416cc6c6c23e3f46e6a7547&rf=bm