UNMAP behavior in ESXi 5.5 P3/ESXi 6.0

I’ve had a few questions about how the VAAI UNMAP primitive changed in ESXi 5.5 P3 and ESXi 6 since I re-tweeted a colleague’s blog post on it. The changes will be documented in my VAAI whitepaper, but as I don’t yet have a publication plan, I wanted to cover them here for now.

As with many of VMware’s code changes, this one is not documented and was discovered by happenstance when my friend Cody was testing something for a customer. His excellent write-up on these changes can be found here: http://www.codyhosterman.com/2015/07/unmap-block-count-behavior-change-in-esxi-5-5-p3/, so I will only provide a high-level description along with VMAX results. You’ll note Cody works for one of those other storage companies, but these changes are VMware-centric and not directly related to the underlying storage.

Anyway, here is the crux of the change. In ESXi 5.5, VMware changed how UNMAP is executed on the ESXi host: it went from a vmkfstools command, where the user passed a percentage of storage to reclaim synchronously, to an esxcli storage command that asynchronously unmaps a number of blocks in each pass (200 by default). There were a number of issues with the vmkfstools implementation, which is why VMware made the change; if you want more detail, check out the VAAI whitepaper. Starting with ESXi 5.5 Patch 3 (build 2143827), VMware changed the UNMAP behavior again in response to a bug. When issuing an UNMAP, any time a non-default block count is specified that amounts to more than 1% of the free space in the datastore, ESXi reverts to the default count (200). As Cody notes, if the datastore is at least 75% full, it also reverts to the default. This obviously leaves only a small range of non-default values that will not fall back to 200.
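For reference, the two interfaces look like this (the datastore name and the values below are illustrative, not recommendations):

    # Pre-ESXi 5.5: synchronous reclaim with vmkfstools, run from inside the
    # datastore directory; the argument is the percentage of free space to
    # reclaim (60 here is just an example)
    cd /vmfs/volumes/MyDatastore
    vmkfstools -y 60

    # ESXi 5.5 and later: asynchronous reclaim with esxcli; -n is the number
    # of VMFS blocks unmapped per pass and defaults to 200 if omitted
    esxcli storage vmfs unmap -l MyDatastore -n 200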

With ESXi 5.5+, our recommendation on the VMAX2/VMAX3 has always been to take the default block count, which most often works out to 200 MB (200 blocks at the 1 MB default VMFS block size). Our testing never really showed a consistent benefit from increasing the block count (on ESXi 6), so it did not make sense to advocate anything other than the default. It is now clear why that consistency was not to be had: unless I stayed under that 1% of free space, all I was doing was basically re-issuing the same default command. When Cody asked me to run some tests as a second validation of the new formula (datastore free space in MB * .01, rounded DOWN), I confirmed the benefit he was seeing. The benefit will of course be influenced by the array, but fortunately the VMAX3 is very fast to UNMAP.
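If you want to script the formula, a minimal sketch looks like the following (the free-space figure and datastore name are illustrative; on a real host you would pull the free space from the vSphere client or from esxcli storage filesystem list):

    # Largest block count that stays under the 1% cap, assuming the
    # default 1 MB VMFS block size (so 1 block = 1 MB)
    FREE_MB=1048576               # example: 1 TB of free space on the datastore
    BLOCKS=$((FREE_MB / 100))     # free space * .01, rounded down by integer division
    echo "Reclaim unit: $BLOCKS blocks"

    # Issue the UNMAP with the computed count ("MyDatastore" is a placeholder)
    esxcli storage vmfs unmap -l MyDatastore -n $BLOCKS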

Using this methodology is most beneficial when there is a large amount of free space to reclaim. In my testing, for example, I reclaimed 500, 1000, and 1500 GB:

[Results chart: UNMAP times for the 500, 1000, and 1500 GB reclaims]

Note that using 1% scales the same way as the default: reclaiming double the storage takes about double the time.

The last thing I’ll say about the change is that on the VMAX3, UNMAP is very fast even at the default. So although using 1% in these cases is obviously quicker, if you don’t want to do the math it’s not a huge deal to keep using the default value. And as with all VAAI primitives, don’t worry: you can’t break anything.
