The full document can be found on Cisco.com.
Switch Configuration
In order to configure the switch, complete these steps.
1. Per the Network Diagram, choose the ports to be grouped:
   o Gi2/0/23
   o Gi2/0/24
2. For each of the listed ports, complete these steps:
a. Configure the port as a Layer 2 switchport.
Note: This step is required only for switches that support both Layer 2 switchports and Layer 3 interfaces.
Switch#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Switch(config)#int Gi2/0/23
Switch(config-if)#switchport
Switch(config-if)#
b. Configure the port as an access port and assign the appropriate VLAN.
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 100

Switch(config-if)#
c. Configure the port for spanning tree PortFast.
Switch(config-if)#spanning-tree portfast
%Warning: portfast should only be enabled on ports connected to a single
 host. Connecting hubs, concentrators, switches, bridges, etc... to this
 interface when portfast is enabled, can cause temporary bridging loops.
 Use with CAUTION

%Portfast has been configured on GigabitEthernet2/0/23 but will only
 have effect when the interface is in a non-trunking mode.
Switch(config-if)#
d. Configure the port for EtherChannel with the appropriate mode.
Switch(config-if)#channel-group 1 mode active

Creating a port-channel interface Port-channel 1

Switch(config-if)#
3. Configure the EtherChannel load balancing. This configuration is applicable for all EtherChannels configured on this switch.
Switch(config)#port-channel load-balance ?
  dst-ip       Dst IP Addr
  dst-mac      Dst Mac Addr
  src-dst-ip   Src XOR Dst IP Addr
  src-dst-mac  Src XOR Dst Mac Addr
  src-ip       Src IP Addr
  src-mac      Src Mac Addr

Switch(config)#port-channel load-balance src-mac

Switch(config)#
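For reference, the same per-port commands can be applied to both ports at once with the interface range command. This is a hedged sketch, not part of the original document, and it assumes the same VLAN 100 and channel-group 1 used above:

Switch(config)#interface range Gi2/0/23 - 24
Switch(config-if-range)#switchport
Switch(config-if-range)#switchport mode access
Switch(config-if-range)#switchport access vlan 100
Switch(config-if-range)#spanning-tree portfast
Switch(config-if-range)#channel-group 1 mode active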
Server Configuration
In order to configure the server, complete these steps:
1. Start the NIC configuration utility.
Note: This example uses the HP Network Configuration Utility 7. In order to use the HP Network Configuration Utility, locate the icon in the Windows 2000 system tray or click Start > Settings > Control Panel > HP Network.
2. Highlight both NICs, and then click Team.
The NIC team is created.
3. Click Properties.
4. In the Team Properties window, choose the appropriate Team Type Selection.
Note: Since this example configured the switch with LACP, choose the option with IEEE 802.3ad.
5. Choose the required method from the Transmit Load Balancing Method drop-down list, and click OK.
6. In the Team Properties window, click OK, and when the confirmation window appears, click Yes to continue.
A dialog box appears that displays the status of the process.
7. When you are prompted to reboot the server, click Yes.
8. Once the server is rebooted, open the network configuration utility in order to verify the teaming status.
9. Right-click My Network Places. An additional network connection, Local Area Connection 3, appears in the window.
10. Once the NIC adapters are teamed and a new connection is formed, the individual NIC adapters are disabled and are not accessible through the old IP address. Configure the new connection with a static IP address, default gateway, and DNS/WINS settings, or configure it for dynamic addressing.
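As a hedged example that is not part of the original document, the new teamed connection can also be given its addressing from the command line with netsh; the connection name, addresses, and DNS server below are placeholders:

netsh interface ip set address name="Local Area Connection 3" static 192.168.100.10 255.255.255.0 192.168.100.1
netsh interface ip set dns name="Local Area Connection 3" static 192.168.100.5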
Verify
Use this section to confirm that your configuration works properly.
The Output Interpreter Tool (registered customers only) (OIT) supports certain show commands. Use the OIT to view an analysis of show command output.
• show etherchannel summary—Displays a one-line summary per channel group.
Switch#show etherchannel 1 summary
Flags:  D - down        P - in port-channel
        I - stand-alone s - suspended
        H - Hot-standby (LACP only)
        R - Layer3      S - Layer2
        U - in use      f - failed to allocate aggregator
        u - unsuitable for bundling
        w - waiting to be aggregated
        d - default port


Number of channel-groups in use: 1
Number of aggregators:           1

Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------
1      Po1(SU)         LACP      Gi2/0/23(P)  Gi2/0/24(P)

Switch#
• show spanning-tree interface—Displays spanning tree information for the specified interface.
Switch#show spanning-tree interface port-channel 1


Vlan             Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ---------
VLAN0100         Desg FWD 3         128.616  P2p
Switch#
• show etherchannel load-balance—Displays the load-balance or frame-distribution scheme among ports in the port channel.
Switch#show etherchannel load-balance
EtherChannel Load-Balancing Operational State (src-mac):
Non-IP: Source MAC address
  IPv4: Source MAC address
  IPv6: Source IP address
Switch#
Wednesday, April 7, 2010
Load Balancing - What and When
With businesses needing to provide redundant, highly available network services, the demand for load balancing is growing. This is a look at some of the more common technologies out there from a Microsoft perspective and when to use them in your network.
Microsoft Network Load Balancing (WNLB):
What – WNLB is a load balancing technology available in all Windows Server OSs since Windows 2000. WNLB operates by creating either a unicast or multicast cluster of up to 32 servers (a maximum of 8 is recommended) that provides load-balanced services over TCP or UDP ports. The WNLB cluster presents a Virtual IP Address (VIP) that clients connect to; NLB then decides which host to direct the traffic to.
When – WNLB is best utilized in small implementations where only a small number of services need to be load balanced. Since WNLB is included with Windows, it provides a low-cost entry point for high availability and load balancing.
Limitations – WNLB is a best-guess load balancing solution; it is common to see "balanced" traffic at 80/20 utilization. Since NLB only load balances by source IP, it may be more appropriately described as a failover solution first, with some load balancing capabilities. While NLB clusters can be virtualized, additional considerations are needed to implement them in a virtual environment. NLB is not service aware, which means that if you are load balancing port 80 and IIS stops but the host stays up, NLB will assume the server is healthy and continue to send traffic to it. Lastly, NLB cannot be located on a host that is also using Windows Clustering Services.
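As a rough sketch only (not from the original post), a small WNLB cluster can be stood up with the NetworkLoadBalancingClusters PowerShell module that ships with Windows Server 2008 R2; the host name, interface name, VIP, and port rule below are placeholder assumptions:

Import-Module NetworkLoadBalancingClusters
# Create the cluster on the first node and assign the VIP that clients will connect to
New-NlbCluster -InterfaceName "LAN" -ClusterName "web-nlb" -ClusterPrimaryIP 192.168.10.50 -SubnetMask 255.255.255.0 -OperationMode Multicast
# Join a second node to the cluster
Add-NlbClusterNode -NewNodeName "web02" -NewNodeInterface "LAN"
# Only balance TCP 80 across the cluster
Add-NlbClusterPortRule -StartPort 80 -EndPort 80 -Protocol TCP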
Threat Management Gateway (TMG) / Unified Access Gateway (UAG) / Reverse Proxy
What – TMG/UAG are reverse proxy solutions for load balancing. Reverse proxies load balance by directing client traffic to the proxy, which then determines the best host behind it to direct the traffic to. Because the proxy terminates external connections and opens its own connection to the host, additional security scanning can be performed at the perimeter of the network.
When – Load balancing with reverse proxies is best utilized when you already have a proxy in place and need the added security scanning. Also, multiple Windows web services can be added to a single proxy, compared with only one per NLB cluster.
Limitations – TMG/UAG deployments can only load balance web services (TCP 80 and 443). This means they cannot load balance internal RPC traffic or SMTP connections. They are also not service aware and suffer the same problems as NLB in the event of a service failure without a host failure. TMG/UAG only provide source IP and an LB-created cookie as load balancing methods.
Hardware Load Balancer
What – Hardware load balancer usually refers to a physical device placed in your network that provides load balancing services. With many vendors now supplying these devices as virtual appliances, a more correct term may be load balancing appliance (LBA), as the balancer may be physical or virtual. In either case, the purpose of these devices is to provide load-balanced services for multiple servers and ports with true load balancing. Many of these devices also provide advanced features such as SSL offloading, dynamic and static compression, service-aware monitoring, and global server load balancing.
When – Load balancing appliances are best utilized when availability, monitoring, and performance are the main concerns. LBAs can load balance across ports and protocols and support multiple separate servers and applications. Because LBAs are service aware, fast redirection to working hosts can be achieved, and many can also redirect requests across data centers in the event of a total outage. LBAs provide the largest range of load balancing methods, which can be chosen per load-balanced service. Many LBAs provide SSL offloading, which in turn lowers the CPU utilization needed for internal SSL connections.
Limitations – The only limitations with LBAs are the initial cost and continued maintenance of the product. Many vendors offer LBAs that are cost comparable to TMG/UAG implementations.
Monday, March 22, 2010
Hyper-V R2 HA, CSV, Live Migration Part 1
After having been through the pitfalls with the new Hyper-V R2 technologies, I will be posting a couple of articles about how-tos, best practices, and considerations.
Considerations
Active Directory Access – Hyper-V clustering relies on Windows Failover Clustering, which in turn relies on Active Directory for cluster functions. If you set up your environment with only virtual domain controllers and you bring down your entire environment, when you bring it back online you will find you are locked out of your entire Hyper-V environment, because the cluster cannot start with no domain available to authenticate against.
A preferred method is to ALWAYS have a physical domain controller, but in the real world that is not always possible (and no, you should not make a Hyper-V host a domain controller). An effective workaround that I have tested and used is:
1.) Create a virtual DC on Hyper-V host 1 and another on Hyper-V host 2, using each host's local storage
2.) Configure the VMs to start with the host
3.) Confirm the VMs are not added as cluster resources
4.) Change the "Cluster Service" on each host to Manual start
5.) Create a batch file that runs net start "cluster service"
6.) Create a scheduled task to run the batch file 5 minutes after boot
What this does in the event of a complete shutdown: when the hosts come back up, the DC VMs start automatically with the host. After 5 minutes the hosts start the cluster service, by which time the DCs will hopefully have booted and be able to authenticate the cluster. Once the cluster is up, HA will start turning on the VMs that are shut off. A sketch of steps 4 through 6 follows.
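This is a minimal sketch of steps 4 through 6, not from the original post; the script path and task name are placeholders, and the schtasks delay syntax assumes Server 2008/R2:

REM Step 4: set the cluster service to manual start on each host
sc config clussvc start= demand

REM Step 5: contents of C:\Scripts\StartCluster.bat
net start "Cluster Service"

REM Step 6: scheduled task that runs the batch file 5 minutes after boot
schtasks /create /tn "Delayed Cluster Start" /tr "C:\Scripts\StartCluster.bat" /sc onstart /delay 0005:00 /ru SYSTEM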
Cluster Shared Volumes – In Hyper-V R1 HA, when you failed over a VM you didn't really fail over the VM; you failed over the LUN. This became a problem if you had multiple VMs on a single LUN, as it would fail all of them over. So a typical design was a bunch of smaller LUNs housing only one VM each, which usually meant a lot of wasted SAN space.
In comes Cluster Shared Volumes (CSV) to save the day… well, kinda, but only if you have free SAN space, or are setting up the SAN and Hyper-V first rather than upgrading an existing environment. CSV allows multiple hosts to share, connect to, and modify data on the same LUN at the same time. The only major gotcha is that when you configure a LUN for CSV it changes from a drive mapping to a folder under C:\, which means you can only convert a LUN to CSV if and only if it is empty; otherwise, when you convert the drive the paths to the VHDs will be wrong and your VMs won't boot :-( (a minimal conversion sketch follows).
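For reference (an assumption on my part, not from the original post), an empty cluster disk can be converted to a CSV on Server 2008 R2 with the FailoverClusters PowerShell module; the disk name is a placeholder:

Import-Module FailoverClusters
# Converts the named cluster disk into a Cluster Shared Volume under C:\ClusterStorage
Add-ClusterSharedVolume -Name "Cluster Disk 1"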
Live Migration – Live Migration is MS's answer to VMotion and works just as well. Turning it on is a treasure hunt all in itself, though.
Microsoft MPIO and iSCSI – Microsoft does not support bonded/teamed NICs for iSCSI, which means you have to manually configure MPIO for each NIC, for each LUN, on each host, every time you add a new LUN. Hurray for efficiency.
NICs – Depending on what you read, setting up a Microsoft-preferred Hyper-V cluster will involve 12 network cards: 2 for management, 2 for production, 2 for iSCSI, 2 for cluster, 2 for Live Migration, 2 for CSV. I have successfully set it up and used it with 6, which works just fine in a small environment: 1 for management, 2 for production, 2 for iSCSI, and 1 for CSV/Live Migration/cluster. Eight, in my mind, would be a preferred number for most installs.
Friday, March 12, 2010
Link Aggregation and VLAN Trunking with ESX and HP Switches
Below is an article from blog.scottlowe.org where Scott discusses how to configure ESX to work with an HP ProCurve for link aggregation and VLAN trunking. Pretty simple stuff, but handy when you have Cisco on your mind and need to work with HP.
Using Link Aggregation
There’s not a whole lot to this part. In the ProCurve configuration, users will mark the ports that should participate in link aggregation as part of a trunk (say, Trk1) and then set the trunk type. Here’s the only real gotcha: the trunk must be configured as type “Trunk” and not type “LACP”.
In this context, LACP refers to dynamic LACP, which allows the switch and the server to dynamically negotiate the number of links in the bundle. VMware ESX doesn’t support dynamic LACP, only static LACP. To do static LACP, users will need to set the trunk type to Trunk.
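A minimal ProCurve-side sketch of that static trunk (the port numbers and VLAN are placeholders, not taken from Scott's article):

ProCurve(config)# trunk 23-24 trk1 trunk
ProCurve(config)# vlan 100 tagged trk1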
Then, as has been discussed elsewhere in great depth, configure the VMware ESX vSwitch’s load balancing policy to “Route based on ip hash”. Once that’s done, everything should work as expected. This blog entry gives the CLI command to set the vSwitch load balancing policy, which would be necessary if configuring vSwitch0. For all other vSwitches, the changes can be made via VirtualCenter.
The rest of the article can be viewed here: http://blog.scottlowe.org/2008/09/05/vmware-esx-nic-teaming-and-vlan-trunking-with-hp-procurve/
Wednesday, March 10, 2010
HP LeftHand Virtual Storage Appliance: Making Local Storage Shared
A good article from VMware on deploying virtual desktops to a small branch office without shared storage, while still offering failover and redundancy.
http://www.vmware.com/files/pdf/view_local_disk.pdf
Good How-to Video
http://h30423.www3.hp.com/index.jsp?fr_story=0933cdb3c1a887b43416cc6c6c23e3f46e6a7547&rf=bm
Saturday, February 27, 2010
Turn Windows 7 into a Wireless Hotspot
Situation: four techs walk into a conference room with only one Ethernet jack. No switch? No problem, if someone has Windows 7. With Connectify you can turn one person's machine into a virtual wireless router and set up a WPA2 hotspot for access. This gives anyone who connects a fully routed Internet connection and network access.
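For what it's worth, Windows 7 also exposes the underlying wireless hosted network feature natively through netsh; a minimal sketch with a placeholder SSID and key (you then share the wired connection to it via Internet Connection Sharing):

netsh wlan set hostednetwork mode=allow ssid=ConfRoom key=Str0ngPassphrase
netsh wlan start hostednetwork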
Friday, February 26, 2010
Bits Request Filtering in IIS 7.0
I had an issue today deploying applications for a client from SCCM. The packages had a number of files using file extensions and directories that are blocked in IIS 7.0 by default. It took a while, but I pieced together how to fix this from a bunch of sites. In the IIS log files under C:\inetpub\logs\LogFiles\W3SVC1, look for 404 7, 404 8, or 404 11 status codes (search for 404). If you see these, you are being "request filtered".
This is a post describing what I was seeing
http://social.technet.microsoft.com/Forums/en-US/configmgrswdist/thread/e3c06b14-d0b8-4b4c-9a52-7f920de06f8e
Following various links, I fixed some of the errors by editing the applicationHost.config file, but it was painful. Then I found this link, which shows how to run appcmd statements to do it automatically:
http://technet.microsoft.com/en-us/library/cc754791(WS.10).aspx
The commands are
appcmd set config /section:requestfiltering /fileExtensions.applyToWebDAV:false
appcmd set config /section:requestfiltering /allowdoubleescaping:true
appcmd set config /section:requestfiltering /hiddensegments.applyToWebDAV:false
Then run iisreset and you should be good. I also had issues getting the packages pushed back out to machines once the bit.tmp files showed up, so instead of troubleshooting I just reimaged.
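As an alternative sketch that is not from the original post, a single blocked extension can also be allowed explicitly instead of relaxing filtering globally; the .tmp extension here is just an example:

%windir%\system32\inetsrv\appcmd set config /section:requestFiltering /+"fileExtensions.[fileExtension='.tmp',allowed='true']"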
Friday, February 19, 2010
VMware View 4.0.1 Released.
VMware has released v4.0.1 of View. This maintenance release brings Virtual Printing support to PCoIP.
Click here to view the release notes.
Thursday, February 18, 2010
EMC Simulator for Navisphere
For anyone who has a current Powerlink ID, the direct link to the downloads can be found here:
http://education.emc.com/main/internal/resources/res_int_prod_sim.htm
It's very basic in function, but it is very nice to study with and gives you some basic "hands-on" experience.
Bill
Monday, February 15, 2010
VMware View 4 PCoIP Resizing issues
View 4 has given me trouble in the past where it will not let me resize the desktop if I am connected via PCoIP. After doing some digging on Google, I found a blog post from thatsmyview.net which has a pretty slick walkthrough for making PCoIP work well. Click here to read the thatsmyview.net post.
Additionally, I found that I had to change the video memory of the base VM in order to make this work. Posts from the VMware Communities suggested changing it to 128 MB, but I think that can be lowered with experimentation.
Thursday, January 28, 2010
High CPU Utilization during LDAP Query
I had a customer recently whose domain controller was pegged at 100% CPU utilization by the process lsass.exe. We determined the problem was their spam appliance constantly performing LDAP lookups.
After digging around a little bit, I found the option to use global catalog lookups for the LDAP query instead of straight LDAP lookups. The big secret was to change the appliance from doing its lookups on port 389 to port 3268; of course, this means you also need to make sure the domain controller you do the lookup against is a global catalog.
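A quick sanity check of the change (not from the original post; the DC name is a placeholder) is to confirm the global catalog is listening on 3268 and, if the Active Directory PowerShell module is available, to run a small query against that port directly:

portqry -n dc01.example.com -e 3268
Get-ADUser -Filter * -Server "dc01.example.com:3268" -ResultSetSize 5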
After doing this, CPU utilization went from 100% to 1-2% with no repercussions found.
Here's a link describing the differences between port 389 and 3268 lookups
Lucas
Monday, January 4, 2010
Vmware Tools Upgrade Failures
Recently, a customer upgraded from ESX 3.5 to vSphere and subsequently had a number of VMs whose automatic VMware Tools upgrade failed. The failure would leave the old version of VMware Tools partially uninstalled and unable to be removed via Add/Remove Programs. The solution is to run regedit and manually delete the keys associated with the VMware Tools install. The following KB article from VMware lists the keys that need to be removed.
VMWare KB Article
In my experience, the upgrade removed some of the keys listed in the article, so all of them may or may not be present.
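A hedged sketch of the manual cleanup from the command line (the VMware Tools key name below is the common one, but the GUID-named uninstall key varies per install, so query for it rather than assuming it; back up the registry first):

REM Find the GUID-named uninstall key that references VMware Tools
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" /s /f "VMware Tools"
REM Remove the VMware Tools product key once the leftovers are identified
reg delete "HKLM\SOFTWARE\VMware, Inc.\VMware Tools" /f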