Potential issue if you are using VMware Changed Block Tracking and NFS storage.
Here is a very useful KB article on migrating an existing vCenter Server database to 4.1 using the Data Migration Tool:
Also, here’s a great blog post on VMware’s blog showing how to upgrade vCenter 4.0 32 bit to vCenter 4.1 64 bit by using the VMware Data Migration Tool.
Our first video covering vSphere 4.1 is now live. The video complements KB article 1022137, vSphere 4.1 upgrade pre-installation requirements and considerations, and describes the process for using our Data Migration tool to upgrade to VMware vCenter Server 4.1.
VMware vCenter 4.1 is part of the VMware vSphere 4.1 product suite, and the Data Migration tool allows you to migrate your vCenter Server 4.0 configuration, since VMware has now moved entirely to a 64-bit platform. 64-bit brings significant performance benefits, but it also introduces some challenges that need to be considered before upgrading.
Sit back, grab that cup of java and take in a little KBTV.
The Data Migration tool is provided with your vCenter Server 4.1 installation media.
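If you want a feel for the workflow before watching the video, the basic flow of the tool looks something like this. This is a sketch from memory, so double-check the exact folder and script names against your 4.1 media and the KB article:

```shell
# On the old 32-bit vCenter Server 4.0 machine:
# 1. Extract datamigration.zip from the vCenter 4.1 installation media
#    to a local folder (C:\datamigration here is just an example path).
# 2. From a command prompt in that folder, back up the vCenter
#    configuration (and the bundled SQL Server Express DB, if used):
backup.bat

# 3. Copy the entire datamigration folder, which now contains the
#    backed-up data, over to the new 64-bit Windows server.

# On the new 64-bit server, with the vCenter 4.1 media available:
# 4. Restore the configuration and kick off the vCenter 4.1 install:
install.bat
```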
Just spent my first few hours on the Motorola Xoom that we have acquired for our Desktop Virtualization showroom!
The list of hardware and software continues to climb. Any vendors want to get on the list?
- Citrix XenApp/XenDesktop
- VMware View
- VMware vSphere
- Wyse Xenith
- HP Thin Clients (various)
- HP 8440p laptop (Citrix XenClient capable)
- Streamed VHD delivery to HP All-in-one PC
- Motorola Xoom
- iPad 2
- Dell Equallogic storage
- HP P4000 (older Lefthand units running latest SANiQ)
- Dell m600 and m610 blades (thank you VERY much Dell for the additional memory!!!!)
Soon to be added… Fusion-io card…wurd!
Chethan Kumar has recently updated the Performance Troubleshooting for vSphere 4.1 guide. This is a great asset I use regularly for any client or partner that asks about vSphere performance – especially those working with Tier 1 applications. It is very educational and addresses the most common scenarios clients experience.
“The hugely popular Performance Troubleshooting for VMware vSphere 4 guide is now updated for vSphere 4.1. This document provides a step-by-step approach for troubleshooting the most common performance problems in vSphere-based virtual environments. The steps discussed in the document use performance data and charts readily available in the vSphere Client and esxtop to aid the troubleshooting flows. Each performance troubleshooting flow has two parts:
1. How to identify the problem using specific performance counters.
2. Possible causes of the problem and solutions to solve it.”
It is located here: http://communities.vmware.com/docs/DOC-14905
NPIV, or N_Port ID Virtualization, is a method of using a single Fibre Channel port to serve multiple physical or virtual servers. NPIV allows a single SAN device to service multiple WWNs without additional switching infrastructure. NPIV is the technique used by blade system hardware to reduce the complexity of SAN-connected blades, allowing SAN connectivity without requiring Fibre Channel switches to be installed within the blade chassis. VMware also uses NPIV within the Raw Device Mapping (RDM) infrastructure.
Due to a statement in VMware documentation, confusion has arisen over support of Microsoft Cluster Server (MSCS) in a VMware environment where NPIV is utilized. In short, NPIV is supported with VMware and MSCS where a hardware device such as HP Virtual Connect or Cisco UCS provides the NPIV functionality, but not where VMware itself is providing the NPIV (that is, enabling the NPIV option in a VM's guest configuration).
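If you're not sure which camp a given VM falls into, the NPIV WWN assignments VMware makes are recorded as wwn entries in the VM's .vmx file, so a quick check from the ESX console is possible. This is just a sketch; the datastore and VM paths below are placeholders for your own environment:

```shell
# Look for VMware-assigned NPIV WWN entries in a VM's configuration
# file. If any wwn entries show up, the VM is using VMware-provided
# NPIV, which is the combination NOT supported with MSCS.
grep -i "wwn" /vmfs/volumes/datastore1/myvm/myvm.vmx
```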
I ran across this awesome link that discusses the arrays that support VAAI and how to double-check if it’s configured.
Note: For more detailed information about VAAI and when it is used, see Storage Hardware Acceleration in the ESX Configuration Guide.
Apparently I’m being punished because Microsoft ever used NetBIOS in the first place and I’m too lazy to type a FQDN.
If you are a VMware vSphere/ESX customer, hopefully esxtop is something you are familiar with…if not, please get familiar. Esxtop is top for ESX…get it? Wondering how loaded your ESX host is? Curious if there are performance issues on your ESX host or underlying infrastructure? Esxtop is your friend…and now we have a document from VMware to help interpret the wealth of information that esxtop gives us. http://communities.vmware.com/docs/DOC-11812
Now all you need are some thresholds to compare those statistics against.
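If staring at the interactive screen isn't enough, esxtop also has a batch mode that is handy for exactly this kind of threshold comparison: capture a CSV and graph it later. The flags below are standard esxtop options; adjust the delay and sample count to taste:

```shell
# Capture esxtop statistics in batch mode for later analysis:
#   -b  batch mode (CSV output)
#   -d  delay between samples, in seconds
#   -n  number of samples to collect
# 5-second samples for 10 minutes = 120 iterations:
esxtop -b -d 5 -n 120 > esxtop-capture.csv
```

The resulting CSV can then be opened in perfmon or a spreadsheet and compared against your thresholds.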
An excellent, very informative post on VMware Raw Device Mapping (RDM) volumes that explains what they are, the difference between physical and virtual compatibility modes, when to use which, and when not to use RDMs at all:
So it’s basically a great post on everything you’ve ever wondered about RDM volumes.
I really appreciated this tidbit:
- The maximum size of an RDM is 2TB minus 512 bytes.
- Use the Microsoft iSCSI initiator and MPIO inside the VM when you need more than 2TB on a disk.
This is how I came across the above article. I was trying to figure out which method was better for presenting a large disk to a virtual machine – RDM vs iSCSI inside the VM. The above answered my question as it was a 3TB volume!
Also, here’s a great whitepaper from VMware discussing the performance differences between VMFS, RDMs, etc., especially since this has changed quite a bit from ESX 3.5 to now (vSphere 4.x). The whitepaper is entitled “Comparison of Storage Protocol Performance in VMware vSphere 4”: