For the past few weeks I’ve been testing some upcoming features in a vSphere beta. In doing so, I was seeing some odd UNMAP behavior: VMware did not appear to be honoring the different UNMAP values I set. So I opened a bug and asked VMware development about it. After a couple of weeks I was told that there had been a recent change to the vSphere 7 documentation which included a new step for UNMAP. Funnily enough, this new requirement was added a week after I opened my bug, which, well, seems a bit more than coincidental. Anyway, let’s look at the vSphere 6.7 vs. 7.0 documentation around UNMAP.
6.7 vs 7.0
Here are the instructions for changing the UNMAP setting in the vSphere 6.7 GUI:
And now the updated guidance (8/16/2022) for vSphere 7.0:
I assume you see the difference. Step 5 for vSphere 7 now requires unmounting and remounting the datastore for the change to take effect, and this must be done on every ESXi host that uses that datastore. This is no small change: unmounting a datastore requires a whole bunch of conditions to be met (no registered VMs, not used for vSphere HA heartbeating, and so on), and frankly that is unrealistic in most production environments (though read below for options). In addition, since the only automatic UNMAP choice you can make during VMFS creation is to turn it on or off, if you want a different setting on a new datastore you still have to unmount/remount after creation.
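For reference, the unmount/remount itself can also be done from the ESXi CLI (more on the CLI below). This is just a sketch, using the datastore label from my lab (UNMAP_VMFS) and assuming the datastore has no registered VMs or other dependencies on that host, otherwise the unmount will fail; it would have to be repeated on every ESXi host that mounts the datastore:

[root@dsib1187:~] esxcli storage filesystem unmount -l UNMAP_VMFS
[root@dsib1187:~] esxcli storage filesystem mount -l UNMAP_VMFS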
GUI vs CLI
The instructions above refer to changing UNMAP in the vSphere Client GUI. You can also change UNMAP settings through the CLI. The CLI is treated a little differently from the GUI. If you change the value on an ESXi host using the CLI, it will take effect on that host immediately. But the change will not be honored on other ESXi hosts attached to the same datastore without the unmount/remount procedure. Now it is possible to run the same command to change UNMAP on all ESXi hosts using that datastore and thereby avoid the unmount/remount (a sketch of that follows the examples below), but VMware will indicate (they are updating the docs) that the supported way is a change on a single ESXi host followed by the unmount/remount. Here is a reminder of the CLI:
To get the current value on a datastore:
[root@dsib1187:~] esxcli storage vmfs reclaim config get -l UNMAP_VMFS
   Reclaim Granularity: 1048576 Bytes
   Reclaim Priority: low
   Reclaim Method: priority
   Reclaim Bandwidth: 26 MB/s
To change to low priority (default):
[root@dsib1187:~] esxcli storage vmfs reclaim config set -l UNMAP_VMFS --reclaim-method priority --reclaim-priority low
To change to a fixed priority and bandwidth of 100 MB/s:
[root@dsib1187:~] esxcli storage vmfs reclaim config set -l UNMAP_VMFS --reclaim-method fixed --reclaim-bandwidth 100
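And if you do go the route of changing every host through the CLI rather than unmount/remount, it can be scripted. The following is only a sketch, assuming SSH is enabled on the hosts and using made-up host names (esx01, esx02, esx03); substitute your own hosts, datastore label, and values:

for host in esx01 esx02 esx03; do
  ssh root@$host "esxcli storage vmfs reclaim config set -l UNMAP_VMFS --reclaim-method priority --reclaim-priority low"
done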
Does this impact me?
The good news is that our best practice is to use the default setting for automatic UNMAP, which is the low priority method (~26 MB/s). So this post is really more informative than anything, but it is best to be aware, because we do have customers who sometimes want to disable UNMAP to reduce load on the system, and if you don’t make the change via the CLI on every ESXi host seeing the datastore, you leave yourself open to one or more of them still issuing UNMAP. Caveat emptor (talking to myself mostly).
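For completeness, disabling automatic UNMAP on a datastore is done by setting the reclaim priority to none, along the lines of the example below (again using my lab’s datastore label, and with the same caveat: run it on every host seeing the datastore, or follow the new unmount/remount guidance):

[root@dsib1187:~] esxcli storage vmfs reclaim config set -l UNMAP_VMFS --reclaim-priority none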
Disabling UNMAP
Just a reminder that disabling automated UNMAP at the datastore level does not disable GuestOS UNMAP. My general recommendation, however, is that if you are disabling UNMAP for performance reasons, it is unnecessary to also disable GuestOS UNMAP (which is more work). I have seen corner cases where customers run some massive update or defrag across their Windows farms during the day (which of course they should not), which can generate a lot of UNMAPs, but for the most part OS-level UNMAP activity is minimal.
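If you do want to check or control GuestOS UNMAP, that is done inside the guest, not in vSphere. A couple of common examples (hedging here, as Windows versions and Linux distributions vary): on Windows, fsutil reports and sets delete notifications, where a value of 1 means in-guest TRIM/UNMAP is disabled; on many Linux distributions, periodic TRIM is driven by the fstrim.timer systemd unit, and the discard mount option issues UNMAP inline:

C:\> fsutil behavior query DisableDeleteNotify
C:\> fsutil behavior set DisableDeleteNotify 1

[root@linuxvm ~]# systemctl is-enabled fstrim.timer
[root@linuxvm ~]# mount | grep discard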
Array level
You can (or support can) disable UNMAP on the array itself, but doing so means no UNMAP can be run at all, including manual. If you are disabling it on the array because of SRDF performance, be sure to disable UNMAP on both arrays. This is particularly important for SRDF/Metro since you can access the datastore from both arrays.