vSphere 6.7 automated UNMAP enhancements

Continuing from vSphere 6.7 post…

In vSphere 6.5, VMware introduced automated UNMAP for datastores, offering users two options: on (the default) or off. Rather than a checkbox, however, VMware presented what appeared to be a scale that would let the user decide how aggressive UNMAP should be.

The scale didn’t work, of course, but it seemed to indicate that a future version would permit such control. And now it does, albeit not with that scale. Beginning with vSphere 6.7, VMware offers the ability not only to change the reclamation priority used for UNMAP, but also to change the method from priority to a fixed amount of bandwidth if desired. As explained, prior to vSphere 6.7 automated UNMAP was either on, which meant low priority, or off. Low priority ensured that the VM environment would not be impacted, but it could take 24 hours to reclaim storage. Now the priority can be changed to medium or high, each with its own bandwidth: medium will reclaim at up to 75 MB/s, while high will reclaim at up to 256 MB/s. To change the priority, however, only the CLI is available.
By default, the “Low” priority translates into reclaiming at a rate of 25 MB/s. To change it to medium priority, issue the following command:

 esxcli storage vmfs reclaim config set --volume-label RECLAIM_TEST_6A --reclaim-priority medium

Note that when using the CLI for priority adjustments, the reported reclaim bandwidth will not always reflect the expected amount; in my testing the bandwidth showed 0 MB/s when medium should be 75 MB/s.

To take advantage of the power of all-flash storage arrays, VMware also offers the ability to set a fixed rate of bandwidth instead of setting a priority. The user can set the value anywhere from 100 MB/s to a maximum of 2000 MB/s, in multiples of 100. This capability, unlike priority, is offered in the GUI, but only in the HTML5 client. In the following example the reclaim method is changed to “fixed” with a bandwidth of 2000 MB/s. Note that, as previously mentioned, the priority cannot be changed in this interface.
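Before issuing the command, it can be handy to sanity-check the requested bandwidth against the allowed range. This is a small sketch of my own (the `valid_bandwidth` helper is not part of esxcli) that enforces the 100–2000 MB/s range in multiples of 100:

```shell
# Hypothetical helper (not part of esxcli): check that a requested fixed
# reclaim bandwidth is within the 100-2000 MB/s range, in multiples of 100.
valid_bandwidth() {
  local bw="$1"
  if [ "$bw" -ge 100 ] && [ "$bw" -le 2000 ] && [ $((bw % 100)) -eq 0 ]; then
    echo "ok"
  else
    echo "invalid"
  fi
}

valid_bandwidth 200    # ok
valid_bandwidth 2000   # ok
valid_bandwidth 1550   # invalid (not a multiple of 100)
valid_bandwidth 50     # invalid (below the minimum)
```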

The CLI can also be used for the fixed method. For example, to change the reclaim rate to 200 MB/s on the datastore “RECLAIM_TEST_6A” issue the following:

esxcli storage vmfs reclaim config set --volume-label RECLAIM_TEST_6A --reclaim-method fixed -b 200

Here is the actual output, along with the “config get” command, which retrieves the current settings.

I asked VMware about changing the reclaim granularity, and doing so is not recommended.

Remember in the earlier screenshot where I mentioned the reclaim bandwidth shows as 0 MB/s, despite knowing the actual value is 75 MB/s? VMware will not show the correct value unless the reclaim method is first set to fixed, then changed back to priority. Here is what that looks like:
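The workaround above boils down to a short command sequence: toggle the method to fixed, then back to priority, then confirm with “config get”. The sketch below is a dry run that only prints the commands rather than issuing them (the datastore name is the one from this post; the bandwidth and priority values are just examples):

```shell
# Dry run: build and print the command sequence that forces the reclaim
# bandwidth to display correctly (set fixed, then back to priority).
datastore="RECLAIM_TEST_6A"
cmds="esxcli storage vmfs reclaim config set --volume-label $datastore --reclaim-method fixed -b 100
esxcli storage vmfs reclaim config set --volume-label $datastore --reclaim-method priority --reclaim-priority medium
esxcli storage vmfs reclaim config get --volume-label $datastore"
echo "$cmds"
```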


I conducted numerous tests of the new UNMAP capabilities in vSphere 6.7 and found distinct differences depending on whether the vmdks removed from the datastore were thin or thick. For example, in the graph below there are two reclaim tasks shown. Each one unmaps 100 GB of storage (deleted VMs with lazy zeroed thick vmdks) from the datastore RECLAIM_TEST_6A. The first line uses the fixed method at 1000 MB/s. The second uses the priority method at medium (75 MB/s). Each reclaim took the same amount of time and VMware issued the same UNMAP commands. In fact, the reclaim ran at 1000 MB/s for both, no matter what setting I used.

When thin vmdks are deleted, however, VMware generally behaves correctly. If you use priority at the default, you’ll get 25 MB/s. If you use fixed at 500 MB/s, you’ll get that. If, however, you use fixed at, say, 1500 MB/s, you’ll only get 1000 MB/s. So thin is similar to thick in that there appears to be a 1000 MB/s ceiling.
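To put these rates in perspective, here is a back-of-the-envelope calculation of how long reclaiming 100 GB would take at the rates discussed above. This is simple division, not a measurement; real reclaim times depend on the array and the workload:

```shell
# Estimate reclaim time for 100 GB at several rates (MB/s).
# Simple arithmetic only - actual reclaims vary with array and load.
size_mb=$((100 * 1024))           # 100 GB expressed in MB
for rate in 25 75 500 1000; do
  secs=$((size_mb / rate))
  echo "${rate} MB/s -> ${secs} s (~$((secs / 60)) min)"
done
```

At the default 25 MB/s the 100 GB takes over an hour, while at the 1000 MB/s ceiling it finishes in under two minutes, which matches the gap I saw in testing.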

So clearly there is something amiss with the feature. As far as what I recommend using, I’m of two minds here. The first is that because there might be a bug, it seems prudent to use the defaults. If you use thick, you’ll still get 1000 MB/s. If you use thin, you’ll get 25 MB/s. The second is that if you use only thin, it might be better to use fixed at 1000 MB/s to match the thick performance. Taking this all into account, however, I’m at a loss to think of a situation where I would need storage reclaimed faster than the default method provides. I therefore stick with my first instinct and continue to recommend the default priority method. Remember, if you need the storage back quicker on thin for some reason, you can always do a manual UNMAP. And Dell EMC, of course, supports whatever you want to do regardless of the recommendation.

***************** Update 5-7-18 *****************

I was at Dell Tech World last week delivering a session on integration and covered this topic in detail. One of my customers asked if there was any circumstance under which I would recommend disabling automatic UNMAP. After considering this, my advice was: if your environment uses Storage DRS for space management (the only type you should use with VMAX/PowerMax), and VMs move very frequently, I would not use automatic UNMAP on those datastores. As I’ve noted more than once, VMware is very good at reusing space. Just because you move a VM doesn’t mean the freed space in the datastore is lost; VMware is going to reuse it when it moves a VM back. If auto UNMAP is on, VMware is going to be issuing commands all the time for datastores in that cluster, which could compete with regular IO from another SvMotion going to them. If you still wish to free up space on the array, simply schedule a manual UNMAP during a maintenance window. VSI can do this for you if you don’t want to write your own scripts.
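If you do script it yourself, a manual UNMAP is a single esxcli command per datastore. A minimal dry-run sketch, which only prints the commands rather than running them (the datastore names here are examples, not from any real environment):

```shell
# Dry run: print the manual UNMAP command for each datastore in a list.
# Datastore names are examples; substitute your own.
datastores="RECLAIM_TEST_6A RECLAIM_TEST_6B"
for ds in $datastores; do
  echo "esxcli storage vmfs unmap --volume-label $ds"
done
```

Dropping the `echo` turns this into a script you could schedule on the host for a maintenance window.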


Automatic UNMAP support for SESparse in vSphere 6.7

In vSphere 6.7, on VMs with an SESparse snapshot (SESparse is the default snapshot format on VMFS 6), space reclamation is now automatic. Note that automatic UNMAP for SESparse will not be triggered until 2 GB of freed space has accumulated.



5 thoughts on “vSphere 6.7 automated UNMAP enhancements”

  1. Nice posting! It seems the unmap priority and bandwidth settings didn’t work. I have tried this in my lab environment; no matter the priority or bandwidth I set, the ESXi host still throws UNMAP to the array at its own pace (in my case, 800 MB/s, although I had set the bandwidth to 100 MB/s). Any thoughts?

    1. Hi Jerry,
      Yes, as my graph shows I had the same experience – VMware is doing what it wants regardless of the setting. I spoke with my friends at Pure yesterday and they, too, have similar findings. We’re engaging VMware development to see what’s going on with the feature and I’ll add any updates here. Thanks for reading.
