Thick or thin VMDKs on VMAX/PowerMax

Thick or thin? This is a question I get in almost all best practice discussions with customers, and while I cover it in the TechBook and elsewhere, I thought I’d do a quick post on it without all the extras.

VMware offers two provisioning types – thick or thin – and three disk formats – thin, zeroedthick (lazy zeroed), and eagerzeroedthick. On the VMAX or PowerMax, zeroedthick and eagerzeroedthick are treated the same, so we’re just going to discuss thin and zeroedthick (the default vmdk format). (If you want to know why they are the same, the details are in the TechBook.) Eagerzeroedthick (EZT) still has its place, for instance when using VMware Fault Tolerance or for Oracle databases. In other words, if your application calls for it, use EZT, but the array is agnostic when it comes to actual data allocation. There are also corner cases for EZT’s use. For example, if you are importing data into large, empty vmdks and it is a particularly time-sensitive operation, EZT provides some benefit for the first run. If you re-use the same vmdks, subsequent runs obviously wouldn’t matter, but if you re-create the vmdks each time, EZT would help. In general, any operation or task that is going to fill an empty vmdk quickly is a candidate for EZT if time is critical.

But back to our comparison: zeroedthick or thin.

Now, the VMAX (VMAX3, VMAX All Flash) and PowerMax are all-thin arrays. This means the array will not allocate any storage until you write data to a TDEV (the host-accessible device backing your datastore). This is good news of course – we don’t waste space. But how do zeroedthick and thin impact that writing? Exactly the same way. Both vmdk types only allocate space on the array when they are written to. Some of you might be confused at this point, since the last time you created a zeroedthick disk it took up whatever space in the datastore you told it to, but thin did not. Correct again – but that is datastore accounting, not array allocation. OK, time for a quick example.
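If it helps to see that allocate-on-write behavior written down, here is a toy model in Python. The class, names, and numbers are my own illustration, not any Symmetrix API; I’m only assuming the 128 KB track size of these arrays:

```python
TRACK_KB = 128  # VMAX3/PowerMax track size

class ThinDevice:
    """Toy model of a TDEV: tracks are only allocated when written."""
    def __init__(self, size_gb):
        self.size_tracks = size_gb * 1024 * 1024 // TRACK_KB
        self.allocated = set()  # track numbers that have been written

    def write(self, start_track, num_tracks):
        # Writing a track allocates it; rewriting it allocates nothing new.
        for t in range(start_track, start_track + num_tracks):
            self.allocated.add(t)

    def allocated_tracks(self):
        return len(self.allocated)

# An 18 GB TDEV consumes nothing on the array until the host writes.
tdev = ThinDevice(18)
print(tdev.allocated_tracks())   # 0 -- creation allocates no tracks
tdev.write(0, 271)               # e.g. VMFS metadata writes
print(tdev.allocated_tracks())   # 271
```

The point of the model is that the array never asks which vmdk format generated the write – allocation is driven purely by writes arriving at the TDEV.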

I created two 18 GB devices and presented them to the same environment. I then created a datastore on each disk. Here they are:

So at this point we have our two datastores. VMware allocates space for the metadata, and since creating a datastore generates a few writes (VMFS allocates more in the datastore than it writes immediately), we can expect the array to show some allocation too. And it does, to the tune of 271 tracks on both (note I have disabled compression just to keep things simple). Thick is device 90 in red, thin is device 93 in green.

So far so good. Allocation is the same, as it should be. In the next step I create a 5 GB vmdk in each datastore, one zeroedthick, one thin.

Since I am just creating a vmdk, no space is actually allocated on the array. VMware, however, allocates the full 5 GB in the THICK-VMDK datastore. On the THIN-VMDK datastore, on the other hand, VMware does not consume any space, because thin vmdks only take space in the datastore when data is written (much like the array).
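The datastore-side bookkeeping is where the two formats diverge, and it can be sketched in a few lines of Python. This is a toy model with names of my own choosing, not VMware’s accounting code:

```python
class Datastore:
    """Toy model of VMFS free-space accounting for vmdk creation."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.reserved_gb = 0.0

    def create_vmdk(self, size_gb, thin):
        # A thin vmdk reserves nothing up front; a (zeroed)thick vmdk
        # reserves its full size in the datastore immediately.
        reserve = 0.0 if thin else size_gb
        if self.reserved_gb + reserve > self.capacity_gb:
            raise IOError("not enough space on datastore")
        self.reserved_gb += reserve

    def free_gb(self):
        return self.capacity_gb - self.reserved_gb

thick_ds = Datastore(18)
thick_ds.create_vmdk(5, thin=False)
print(thick_ds.free_gb())   # 13.0 -- the full 5 GB is reserved

thin_ds = Datastore(18)
thin_ds.create_vmdk(5, thin=True)
print(thin_ds.free_gb())    # 18.0 -- nothing reserved yet
```

Note that in this sketch the array would see zero new allocation in both cases – only the datastore’s view differs.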

Now I add each of these vmdks to a Windows VM and do a simple format.

If we then go back to the array and check the VMAX storage allocation, we see the same amount of storage used for that format. Again, as we would guess.

I then copy a 1.8 GB file to both the new thick and thin Windows drives. vCenter reflects the new size for the thin vmdk, but note that since the thick vmdk already allocated 5 GB in the datastore, its size stays the same. The VMAX again allocates the same amount of storage for both devices (not shown).

So here is the big difference between thick and thin. With thin, both on the VMAX and in VMware, space is allocated on demand, whereas thick ties up space in the datastore even if it does not on the array. But there is an important caveat to remember. See what happens when I try to add a 50 GB vmdk to each datastore.

The thin vmdk creates with no issue; the thick, however, errors out due to lack of space. So does this mean I could create terabytes worth of vmdks on that 15 GB of free space on the THIN-VMDK datastore? Yep. And therein lies the concern with thin. As the users write data to those terabytes of vmdks, eventually that datastore will run out of space and the VMs will suffer the consequences (yes, in some cases it isn’t pretty). You will never have that problem with the THICK-VMDK datastore, however, because VMware won’t let you allocate more space in the datastore than you actually have.
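That failure mode is easy to model too. In this Python sketch (again my own simplification, not ESXi code), creating thin vmdks always succeeds no matter how overcommitted the datastore is, and the error only surfaces when guest writes finally exhaust the free space:

```python
class ThinDatastore:
    """Toy model of a datastore holding only thin vmdks."""
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0
        self.vmdks = []

    def create_thin_vmdk(self, size_gb):
        # Creation always succeeds: nothing is reserved up front, so you
        # can "provision" far more than the datastore can actually hold.
        self.vmdks.append({"size_gb": size_gb, "written_gb": 0.0})
        return len(self.vmdks) - 1

    def guest_write(self, vmdk_id, gb):
        # Thin vmdks grow as the guest writes; the problem only shows up
        # when the datastore itself fills -- long after provisioning.
        if self.used_gb + gb > self.capacity_gb:
            raise IOError("datastore out of space")
        self.vmdks[vmdk_id]["written_gb"] += gb
        self.used_gb += gb

ds = ThinDatastore(18)
ids = [ds.create_thin_vmdk(50) for _ in range(20)]  # 1 TB provisioned, no error
ds.guest_write(ids[0], 10)        # fine: 10 of 18 GB used
try:
    ds.guest_write(ids[1], 10)    # 10 + 10 > 18: this write fails
except IOError as e:
    print(e)                      # datastore out of space
```

The uncomfortable part is the timing: the mistake happens at provisioning, but the failure lands on whichever VM happens to write last.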

OK, with the truth laid bare, which is best, thick or thin? The good news is that because of VAAI and the VMAX itself, performance differences between the two vmdk types are negligible and no longer a factor. The other good news is that, as you have seen, the VMAX treats them the same in terms of storage usage; there is no advantage to one or the other on the array. OK, so which one already!? The truth is it is your choice. Thin has advantages: overprovisioning at the datastore level and the ability to use Guest OS UNMAP. Thick has advantages: guaranteed space and faster cloning/moving (see the VAAI white paper). And disadvantages? Thin wins (or rather loses) this one: space management. If you want to use thin vmdks, you must have good monitoring in place for both the array (Unisphere for VMAX alerts) and the vCenter (VASA integration is good here). I’ve seen a single VM blow up a thin datastore very quickly. But some customers absolutely need the overprovisioning at the datastore level, so thin makes sense for them. I would say most, however, are more comfortable with the guaranteed space thick offers.
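Whatever tooling you use for that monitoring, conceptually it boils down to alerting on utilization thresholds. A sketch of the idea (the function and the 75%/90% thresholds are mine, not a Unisphere or VASA API):

```python
def check_utilization(used_gb, capacity_gb, warn_at=0.75, critical_at=0.9):
    """Return an alert level for a thin datastore (or SRP), mimicking the
    kind of threshold alerts you would configure in Unisphere or vCenter."""
    pct = used_gb / capacity_gb
    if pct >= critical_at:
        return "CRITICAL"
    if pct >= warn_at:
        return "WARNING"
    return "OK"

print(check_utilization(10, 18))  # OK       (~56% used)
print(check_utilization(14, 18))  # WARNING  (~78% used)
print(check_utilization(17, 18))  # CRITICAL (~94% used)
```

The specific thresholds matter less than having them wired to alerts on both sides – the array and the vCenter – before the first thin datastore goes into production.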

Whichever you do, happy provisioning!

