vSphere 6.x and VAAI performance boost on VMAX

***Update – as more customers move to vSphere 6.x on VMAX, I expect this topic to become more pertinent, so I want to point out something about the changes detailed below.  The new changes for vSphere 6.x and XCOPY are ONLY for VMAX and no other arrays – not VNX, not XtremIO, not VPLEX, etc. – and, so far as I know, no 3rd-party arrays.  You will also not find these changes fully documented by VMware; that is fairly typical of their documentation.  This will be repeated at the bottom too, just for good measure.***

I thought I would tease a bit from my VAAI whitepaper and talk about VAAI performance with vSphere 6.x and VMAX.  In particular I'm going to focus on XCOPY, which is the primitive called during copy operations, e.g. cloning and Storage vMotion (SvMotion).

Let's start with a quick reminder of this particular VAAI primitive.  XCOPY, or Full Copy, delivers hardware-accelerated copying of data by performing all duplication and migration operations on the array.  Customers can achieve considerably faster data movement via VMware Storage vMotion®, as well as faster virtual machine creation, deployment from templates, and cloning.  Basically what this means is that VMware passes off the job of copying data to the VMAX, which can do it much faster than the host.  We've had this functionality now for quite a while – going all the way back to Enginuity 5875 and vSphere 4.1.

One of the limitations of the copying, however, has always been the extent size.  VMware defaults to sending us 4 MB extents for thick vmdks (ZT and EZT, i.e. zeroedthick and eagerzeroedthick) and 1 MB for thin vmdks.  On the VMAX (and VNX) we were able to adjust the 4 MB up to 16 MB, but thin vmdks remained at 1 MB.  For very large VMs, even with this 4x improvement the operation could take time (granted, far faster than a host copy), and for thin vmdks we were still stuck with 1 MB.  The nature of the XCOPY implementation on the VMAX (all but rectified on the VMAX3 and VMAX All Flash) also meant there was a finite amount of resources available for the copy job, so large VMs could switch over to host copy for a time.

In any case EMC wanted more, so we went to VMware development and said: look, our array can copy very large extents efficiently – any chance we can increase that 16 MB number?  Fast forward to vSphere 6.x, where VMware has implemented that change for us.  The really nice thing about the new capability is not just the size, but that it is implemented through the VAAI plug-in we already use, and it applies only to VMAX devices.  In the current vSphere 5 implementation, customers who run multiple arrays (non-VMAX/VNX) off a single host are not permitted to change the 4 MB default size, as it can cause issues with the other arrays.  In vSphere 6.x, however, the default size of 4 MB does not need to be altered, because VMware will use the size specified in the VAAI claim rule for Symmetrix (VMAX) devices.  Here's how this works.

In vSphere 5, you change the following parameter in the CLI to adjust the extent size (the value is specified in KB, so 16384 equals the 16 MB maximum):

esxcfg-advcfg -s xxx /DataMover/MaxHWTransferSize

e.g.:

esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize
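
If you want to confirm the current value at any point, the same utility can read the parameter back:

esxcfg-advcfg -g /DataMover/MaxHWTransferSize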

This is a dynamic parameter, taking effect whenever it is changed.  In vSphere 6 this mechanism will still work; however, as I mentioned, we can now change the claim rules associated with the VAAI plug-in/filter instead, and those will take precedence.  Let's start by querying the available plug-ins on the ESXi host (vSphere 5 or 6) along with the filters.  This is the same for vSphere 5 and 6, as we need both the VAAI plug-in and the filter to use VAAI:
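
For reference, the listing in the screenshot below comes from the standard esxcli plug-in commands, along these lines:

esxcli storage core plugin list --plugin-class=Filter

esxcli storage core plugin list --plugin-class=VAAI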

[Screenshot: VAAI plug-in and filter listing]

Now we can look specifically at our device to see if it is attached to the VAAI filter and supports VAAI.  This one happens to be on the VMAX3:
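
The check in the screenshot below uses the standard esxcli VAAI status command; substitute your own device identifier for the placeholder:

esxcli storage core device vaai status get -d <naa_device_id>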

[Screenshot: vaai_support – VAAI status output for the device]

Each block storage device managed by a VAAI plug-in needs two claim rules: one that specifies the hardware acceleration filter and another that specifies the hardware acceleration plug-in for the device.  These claim rules are where the change has been made (in addition to the code, obviously).  Let's look at the claim rules in vSphere 5 so the difference is apparent.  Here you can see there are no knobs to adjust:
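
In both vSphere 5 and 6 the claim rules can be listed with the same command, which is what the next screenshots show:

esxcli storage core claimrule list --claimrule-class=VAAI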

[Screenshot: vaai_claim_rules – vSphere 5 VAAI claim rules]

But in vSphere 6.x, we have these new columns related to XCOPY:

[Screenshot: vaai_filter_vsphere6_nochange – vSphere 6 VAAI claim rules with default XCOPY columns]

So now we can adjust the values to allow for a maximum extent size of 240 MB – 15x the current 16 MB maximum.  As with all claim rules, changing them will not be dynamic for existing devices.  Once a device is claimed, it is going to use the values that were set at that time.  While there are online methods to make changes to existing devices, I have not found them to be bullet-proof (and they are disruptive in any case), so I would make the following changes and reboot the ESXi host after the change.
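
For completeness, the online route alluded to above goes through the claiming namespace – for example, reclaiming a device so it picks up the new rules.  Again, I would favor the reboot instead; the device identifier here is just a placeholder:

esxcli storage core claiming reclaim -d <naa_device_id>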

To change the values, we need only adjust the VAAI claim rule for the Symmetrix plug-in.  Because of the interaction of the three settings the columns represent (and the VMware rules), we need to set the first two – use array reported values and use multiple segments – to true in addition to changing the max transfer size.  The commands to run on the ESXi host are (they will not return anything when run):

esxcli storage core claimrule remove --rule 65430 --claimrule-class=VAAI

esxcli storage core claimrule add -r 65430 -t vendor -V EMC -M SYMMETRIX -P VMW_VAAIP_SYMM -c VAAI -a -s -m 240

esxcli storage core claimrule load --claimrule-class=VAAI

Following this, reboot the host.  You can check the parameters afterwards just as before, and the new values will show:

[Screenshot: vaai_filter_vsphere6_change – vSphere 6 VAAI claim rules with updated XCOPY values]

And that's it.  Whenever XCOPY is issued, it will attempt to use the new extent size.  All VAAI functionality remains as before, such as:

  • It will revert to software when it cannot use XCOPY
  • It can still be disabled through the GUI or CLI (see the example after this list)
  • It will use the DataMover/MaxHWTransferSize when the claim rules are not changed or a different array is being used
  • The maximum value is just that, a maximum.  That is the largest extent VMware can send, but does not guarantee all extents will be that size – they won’t.
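
As a reference for the CLI route mentioned in the list above, XCOPY (Full Copy) is governed by the HardwareAcceleratedMove advanced setting – 0 disables the primitive, 1 re-enables it:

esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove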

What about the thin vmdk 1 MB issue, you ask?  Well, VMware has also made some changes there per our request.  VMware now makes a best effort to consolidate extents, so they won't just send 1 MB.  This has greatly improved performance.  In addition, the resulting cloned virtual machine benefits from this consolidation: if you then clone that VM, the new copy will perform much like a thick vmdk VM.  Therefore, if you are going to create templates from thin vmdk VMs, it is best to clone to template to take advantage of this change, rather than simply converting the VM to template.  By the way, whether you do the initial cloning with XCOPY or without, the resulting VM will benefit from the consolidation.

The results speak for themselves.  This graph represents the cloning of a fully populated 250 GB VM, for both thick and thin devices, on the VMAX3 using the 240 MB transfer size:

[Graph: results – clone times for a 250 GB VM, thick and thin vmdks, at the 240 MB transfer size]

I’ll have more detail in my whitepaper – just didn’t want to get too into the weeds here.  Here is the current paper if you need a refresher:

http://www.emc.com/collateral/hardware/white-papers/h8115-vmware-vstorage-vmax-wp.pdf

****************************************************************

 Note that as this is a feature of vSphere 6, this works for both VMAX and VMAX3/VMAX All Flash arrays BUT does not work for any other arrays, EMC or otherwise.

 

****************************************************************


9 thoughts on “vSphere 6.x and VAAI performance boost on VMAX”

    • The array already reports a value but it is higher than the max allowed so by setting the max value to 240 MB, we ensure VMware uses that.

    • Thanks. Well right now claim rules for VAAI only apply to VMAX2 and VMAX3 and the max is 240 MB which will give the best performance. All other storage arrays still use the max transfer size setting (4 MB default). To be honest I doubt other array companies would be interested in increasing the transfer size beyond the 16 MB max because they use a synchronous mechanism for copying data while we use an asynchronous one. This gives us the advantage of queuing the large extents and doing a background copy while returning functionality to the user. Other arrays have to wait until the copy completes before returning control and much larger extent sizes can slow things down considerably. I hope that answers what you were asking. If not ask again 🙂

  1. Just checking in to ensure that the VAAI plugin will also support vSphere 6.0 on a VNX5200. Everything I am reading indicates that it should, but the download I have access to (2.0) only mentions 5.5 compatibility. Second, should I let the API specify the default transfer size or should I set it to the max 240MB? Any additional tips or suggestions? The environment is six new VMware 6.0 hosts with 8Gb FC directly connected to a VNX5200 array with FAST Cache enabled. This is a new build to replace an old VNXe3300 and will consolidate the server VMware infrastructure along with the VMware VDI infrastructure that uses local disks.

    • The functionality I talk about in this blog post is only available for the VMAX and no other arrays. In general the VNX5200 will support VAAI commands on vSphere 6, so you should be fine there (if you are using NFS too, you'll need the plugin mentioned in an earlier blog post here on eNAS). You can change the transfer size to 16 MB from the 4 MB default, but there is no ability to use claim rules and get to 240 MB.

  2. I am running vSphere 6 with VAAI enabled and have a VMAX 20K; however, I am unable to remove the default rule 65430 and add the new rule. When I run a claimrule list I can see the rule, but I get an error when trying to remove it. The host was recently rebooted and is in maintenance mode.

    [root@dorpnvmwesxi004:~] esxcli storage core claimrule remove –rule 65430 –claimrule-class=VAAI
    Error: Unknown command or namespace storage core claimrule remove –rule 65430 –claimrule-class=VAAI

    • Hi Ryan,

      It looks like you are missing the double dash (--) before "rule" and "claimrule":

      esxcli storage core claimrule remove --rule 65430 --claimrule-class=VAAI

      Try that and let me know if it solves it.

      • Thanks Drew.

        For some reason it kept erroring out on the –claimrule. Switched to the short flags: esxcli storage core claimrule remove -r 65430 -c=VAAI and it worked. It may have been a copy/paste issue with the double dashes.
