vSphere 6.5 – UNMAP on VMAX

As I mentioned in my previous post on vSphere 6.5, VMware now offers automated UNMAP capability for VMFS datastores as well as Linux Guest OS support for UNMAP. I’ll cover VMFS first, then discuss Linux support.

********************** Update 03-15-17

VMware has released vSphere 6.5 p01, which fixes an issue related to automated UNMAP in the Guest OS. Depending on your configuration, you may hit the problem. For more detail see the following post:  http://cormachogan.com/2017/03/15/vsphere-6-5-p01-important-patch-users-automated-unmap/.

**********************

VMFS Auto Reclaim (UNMAP)

There is a long history with VMware and UNMAP. When VMware first introduced UNMAP for VMFS it was automated, but it required storage vendors to synchronously unmap the storage. This led to all sorts of problems, and the feature was quickly revoked and converted to a manual UNMAP capability. This manual mode, for the most part, has served customers well. Yes, manual is more work, but it gives the customer complete control over the process. You still get your storage back, but on your terms. Fortunately, that capability is still in 6.5 if the idea of automation still concerns you (due to a bug, it currently only works on thin devices, not zeroedthick or eagerzeroedthick). I’ll show below how you can turn automation off and stick with manual mode (or automate manual mode through VSI).

As automation is new, it requires vSphere 6.5 – both ESXi and vCenter. In addition, the feature is only available on the new VMFS format, VMFS 6; it will not work on VMFS 5. And finally, you need a VMAX on the backend (or another Dell EMC array that supports UNMAP).

One quirk of automated UNMAP that you must be aware of is that it will not work unless there is an active VM on the datastore. If your datastore only has powered-off VMs, or if there are no VMs on the datastore at all, the storage will never be reclaimed through automation. You will have to use manual UNMAP or, failing that, another procedure you can find here.
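
For reference, here is a minimal sketch of the manual reclaim on such a datastore, assuming it is labeled DS01 (substitute your own datastore name; the reclaim unit is the number of VMFS blocks unmapped per iteration and is optional):

# Manually reclaim free space on the datastore, 200 blocks per iteration
esxcli storage vmfs unmap --volume-label=DS01 --reclaim-unit=200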

********************** Update 12-15-16 ************************************************

If you want more detail on UNMAP in general, here is my updated white paper on it.

*******************************************************************************************

So how do I know my datastore is using automatic UNMAP? The good news is that it is on by default. Let’s start with the new datastore wizard, where a couple of the steps demonstrate the functionality. The first mention is in the selection of the VMFS version. As you can see here, VMFS 6 is the only option that supports automated UNMAP:

[Screenshot: VMFS version selection in the new datastore wizard]

In the next step, you have the option of changing the priority of the UNMAP process. There are two choices here despite what appears to be a sliding scale: None (off) or Low. There is some chatter that in future versions VMware will provide more options on how aggressive you want UNMAP to be.

[Screenshot: space reclamation priority in the new datastore wizard]

As I mentioned at the beginning, when VMware first introduced automatic UNMAP it was synchronous, and that ended badly. In this implementation it is an asynchronous process, and very asynchronous at that. The process runs in the background and slowly unmaps storage that has been marked by VMware on the VMFS. The expectation is that over 12 hours or so the storage should be reclaimed. My experience was a little longer than that, but by 24 hours it was complete. Note too that in my testing – and I have not seen VMware say this yet – if my datastore was idle with no VM activity, VMware did not unmap the storage. Granted, most customers do not have idle datastores, but if you do, run the manual reclaim (esxcli storage vmfs unmap), which as I said is still supported. By the way, if you want to turn auto reclaim on or off after datastore creation, that setting is available on the datastore’s Configure page.

[Screenshot: space reclamation settings on the datastore’s Configure page]

If you want to see whether any storage has been unmapped, the best way is to look at the device on the array. There are many ways to view the storage in Unisphere for VMAX. Here is a simple graph you can generate to see if the allocated capacity has increased or decreased over time, as in this example.

[Screenshot: allocated capacity graph in Unisphere for VMAX]
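
If you prefer the CLI, the automatic reclaim setting can also be checked or changed per datastore with esxcli on the ESXi host. A minimal sketch, again assuming a VMFS 6 datastore labeled DS01:

# Show the current space reclamation (UNMAP) priority and granularity
esxcli storage vmfs reclaim config get --volume-label=DS01

# Turn automatic reclamation off; set the priority back to low to re-enable it
esxcli storage vmfs reclaim config set --volume-label=DS01 --reclaim-priority=none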

Linux Guest OS UNMAP

Since vSphere 6.0, VMware has had support for Guest OS UNMAP. However, because virtual disks only presented SPC-2 in 6.0, Windows 2012 R2 was the only supported OS, as Linux requires SPC-4. In vSphere 6.5, VMware introduces SPC-4 support for thin vmdks, which means Linux guests can now use Guest OS UNMAP. The big difference between this version of automatic reclaim and the one just explained for VMFS is that Guest OS UNMAP is essentially synchronous. In a supported environment, if you delete a file the UNMAP commands are issued right away and the array clears the storage. Now a quick example of how this works. First, the prerequisites:

  • vSphere 6.5/ESXi 6.5
  • A Linux Guest VM that supports UNMAP (hardware version 13) along with a supported file system
  • Thin provisioned vmdk
  • If using VMFS 5 you must set EnableBlockDelete to 1 (esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete); see the sketch after this list. If using VMFS 6 with automated UNMAP enabled on the datastore (the default), you don’t need this parameter.
    • This setting is not needed for VVols (which is what I used in my testing)
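
If you do need EnableBlockDelete on VMFS 5, here is a quick sketch of checking and setting it from the ESXi host:

# Check the current value (0 = disabled, 1 = enabled)
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete

# Enable it so Guest OS UNMAPs are passed through on VMFS 5
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete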

Although it is possible to manually issue UNMAP commands in the Guest OS by, say, passing a range of LBAs, that is not the best way to use this feature. If you simply create the file system and then mount it with the discard option, UNMAP will be issued any time you remove files from the mount. That is the method I used. My test environment consisted of a virtual machine running 64-bit Ubuntu 12.04.3 with a 16 GB thin vmdk for the OS and a 5 GB thin vmdk for the test. For storage, instead of VMFS I used a VVol datastore. Yes, this UNMAP feature works with VVols. The reason I used VVols is the nice 1:1 mapping of thin vmdk (VVol) to storage device, which makes the example clearer. The first thing I did was put an ext4 file system on the 5 GB device and then mount it with the discard option, which ensures UNMAP gets issued:

mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /u02 -o discard
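
To confirm the discard option took effect, and that the virtual disk actually advertises UNMAP support, here is a quick check from inside the guest (a sketch assuming the same /dev/sdb and /u02 as above):

# The mount entry should show the discard option
mount | grep /u02

# A non-zero value means the virtual disk accepts discard/UNMAP
cat /sys/block/sdb/queue/discard_max_bytes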

At initial creation, the vmdk is 3 MB in the vSphere Client, which results in the VMAX allocating 24 tracks. Note I’m using compression on the VMAX All Flash, which uses many pools, but the total allocated tracks will be the same with or without compression.

[Screenshots: initial vmdk size in the vSphere Client and initial allocation from symcfg]

And finally, so we know where we started, we see that 906 UNMAP commands have already been issued to the Protocol Endpoint (how VVols are mounted – see other blog posts for detail if needed). Now I put a ~3 GB file on the mount point /u02. Both the vmdk and the array storage grow appropriately:

[Screenshots: vmdk size and symcfg allocation after copying the file]

If I now remove the file, the automatic process begins. Upon removing it, VMware automatically shrinks the vmdk back down as far as it can – in this case 4 MB.

[Screenshot: vmdk size after the file deletion]

The array similarly reclaims its storage. Here we see that many UNMAP commands are issued beyond the initial 906 and that the VVol is now only using 36 tracks of allocated space.

[Screenshots: esxtop UNMAP counters and symcfg allocation after the file deletion]
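
For reference, the data side of the test amounts to a couple of commands inside the guest; a sketch, assuming the /u02 mount above (the file name is hypothetical, just for illustration):

# Write roughly 3 GB of data to the discard-mounted file system
dd if=/dev/urandom of=/u02/testfile bs=1M count=3072

# Remove it; with the discard mount option the guest issues UNMAPs immediately
rm /u02/testfile
sync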

You’ll notice that 100% of the storage did not get reclaimed. The vmdk came up about 1 MB short, which meant about 1 MB worth of UNMAP commands never reached the array. A colleague is looking into this, but 1 MB out of 3 GB isn’t bad.

Pretty cool, huh? No work involved in getting back my storage, and now I know I’ll never leave space stranded on my array again. I’ll end with a cautionary tale from my testing. If you are moving VMs to vSphere 6.5, or cloning them, and then testing this feature, be sure you upgrade the hardware version to 13. At one point in testing my various VMs, UNMAP didn’t seem to be working. I scratched my head for a bit since I hadn’t had a problem on my other VMs, but then remembered I had moved this VM from a vSphere 6.0 environment and its hardware version was still 11. Once I upgraded to 13, everything worked. We all have our moments…I hope mine saves you from one.
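
If you hit the same thing, a quick way to spot an old hardware version from the ESXi host is below (a sketch; the guest-side discard check shown earlier works too). The upgrade itself is done in the vSphere Client by upgrading the VM’s compatibility while it is powered off.

# List registered VMs with their hardware version (look for vmx-13)
vim-cmd vmsvc/getallvms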

7 thoughts on “vSphere 6.5 – UNMAP on VMAX”

  1. Thanks for the detailed tests, this is useful! So Guest OS UNMAP requires HWv13 for Linux VMs, to present SPC-4. For Windows 2012 R2, HWv11 should be “enough”?
    Last question: did UNMAP work because you had VAAI acceleration activated, or is it expected to work even without the VAAI plugin?

    1. Yes, version 11 should be fine for Windows 2012 R2; however, if you use vSphere 6.0 you’ll have to enable Guest OS UNMAP (EnableBlockDelete). On the VMAX, VAAI is enabled by default, but you don’t need the VAAI plugin for UNMAP – just an array that supports the UNMAP command.
