Chethan Kumar has recently updated the Performance Troubleshooting for vSphere 4.1 guide. This is a great asset that I use regularly with any client or partner who asks about vSphere performance – especially those working with Tier 1 applications. It is very educational and addresses the most common scenarios clients experience.
“The hugely popular Performance Troubleshooting for VMware vSphere 4 guide is now updated for vSphere 4.1. This document provides a step-by-step approach for troubleshooting the most common performance problems in vSphere-based virtual environments. The steps discussed in the document use performance data and charts readily available in the vSphere Client and esxtop to aid the troubleshooting flows. Each performance troubleshooting flow has two parts:
1. How to identify the problem using specific performance counters.
2. Possible causes of the problem and solutions to solve it.”
NPIV, or N_Port ID Virtualization, is a method of utilizing a single Fibre Channel port to serve multiple physical or virtual servers. NPIV allows a single SAN device to service multiple WWNs without additional switching infrastructure. NPIV is the technique used by blade system hardware to reduce the complexity of SAN-connected blades, allowing SAN connectivity without requiring Fibre Channel switches to be installed within the blade chassis. VMware also uses NPIV within its Raw Device Mapping (RDM) infrastructure.
Due to a statement in the VMware documentation, confusion has arisen over support for Microsoft Cluster Server (MSCS) in a VMware environment where NPIV is utilized. In short, NPIV is supported with VMware and MSCS when a hardware device such as HP Virtual Connect or Cisco UCS provides the NPIV functionality, but not when VMware itself provides the NPIV (i.e., checking the NPIV box in a VM’s guest configuration).
This article provides answers to frequently asked questions about vStorage APIs for Array Integration (VAAI). Note: For more detailed information about VAAI and when it is used, see Storage Hardware Acceleration in the ESX Configuration Guide.
If you are a VMware vSphere/ESX customer, hopefully esxtop is something you are familiar with…if not, please get familiar. Esxtop is top for ESX…get it? Wondering how loaded your ESX host is? Curious whether there are performance issues on your ESX host or the underlying infrastructure? Esxtop is your friend…and now we have a document from VMware to help interpret the wealth of information that esxtop gives us. http://communities.vmware.com/docs/DOC-11812
Now all you need are some thresholds to compare those statistics against.
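As a rough illustration of comparing esxtop statistics against thresholds, here is a minimal sketch in Python that scans the CSV produced by esxtop’s batch mode (e.g. `esxtop -b -d 5 -n 3 > stats.csv`) for virtual machines averaging high CPU ready time. The column names in the sample and the 10% warning level are illustrative assumptions, not esxtop’s exact counter paths.

```python
import csv
import io

# Synthetic sample mimicking esxtop batch output; real batch-mode column
# headers are longer counter paths, so treat these names as placeholders.
sample = """\
"Time","VM:web01 %RDY","VM:db01 %RDY"
"10:00:00","2.1","14.7"
"10:00:05","1.8","12.3"
"""

RDY_WARN = 10.0  # assumed warning threshold for average CPU ready time (%)

def flag_high_ready(csv_text, threshold=RDY_WARN):
    """Return the %RDY columns whose average exceeds the threshold."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    offenders = {}
    for col in rows[0]:
        if "%RDY" not in col:
            continue
        avg = sum(float(r[col]) for r in rows) / len(rows)
        if avg > threshold:
            offenders[col] = round(avg, 1)
    return offenders

print(flag_high_ready(sample))  # → {'VM:db01 %RDY': 13.5}
```

The same pattern extends to any counter esxtop exports in batch mode – swap the column filter and threshold for %USED, memory ballooning, device latency, and so on.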
An excellent, very informative post on VMware Raw Device Mapping (RDM) volumes that explains what they are, the difference between physical and virtual compatibility modes, when to use which, and when not to use RDMs at all:
So it’s basically a great post on everything you’ve ever wondered about RDM volumes.
I really appreciated this tidbit:
The maximum size of an RDM is 2TB minus 512 bytes.
Use the Microsoft iSCSI initiator and MPIO inside the VM when you need more than 2TB on a disk.
This is how I came across the above article. I was trying to figure out which method was better for presenting a large disk to a virtual machine – RDM vs iSCSI inside the VM. The above answered my question as it was a 3TB volume!
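To make the arithmetic behind that advice concrete, here is a quick sanity check against the 2TB-minus-512-bytes limit quoted above. This is just a sketch; the helper name is made up, and 2TB is taken as 2×1024⁴ bytes.

```python
# Maximum RDM size quoted above for this vSphere 4.x era: 2TB minus 512 bytes.
RDM_MAX_BYTES = 2 * 1024**4 - 512

def fits_in_rdm(requested_bytes):
    """True if a disk of this size can be presented as a single RDM."""
    return requested_bytes <= RDM_MAX_BYTES

print(fits_in_rdm(2 * 1024**4 - 512))  # True: exactly at the limit
print(fits_in_rdm(3 * 1024**4))        # False: a 3TB volume needs in-guest iSCSI
```

A 3TB volume like the one in my case fails the check, which is exactly why the in-guest Microsoft iSCSI initiator was the right answer there.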
Also, here’s a great whitepaper from VMware discussing the performance differences between VMFS, RDMs, etc., especially since this has changed quite a bit from ESX 3.5 to now (vSphere 4.x). The whitepaper is entitled “Comparison of Storage Protocol Performance in VMware vSphere 4”: http://www.vmware.com/files/pdf/perf_vsphere_storage_protocols.pdf
I was at a customer site showing the customer how to expand a VMFS datastore on vSphere 4. The problem, however, was that it would not increase. When you click the “Increase” button in the VMFS datastore’s properties, the next screen is supposed to show the “free” space remaining on that disk so you can use it to increase the VMFS size. In this case, it was coming up blank.
I found a post that said to try connecting directly to the ESX/vSphere server and attempting the VMFS increase that way, as the poster was having the same problem. There are several posts from people (e.g. http://communities.vmware.com/thread/220476?start=15&tstart=0) for whom that method worked, even when it wouldn’t work when connecting to vCenter, as I experienced. So I had the customer try that method and it worked!