VMAX TB and best practices

I’ve just completed the latest update to the VMAX/VMware TechBook, a tome (I can be derogatory since it’s mine) of some 400-plus pages that explains how best to use VMware on the various VMAX/VMAX3/VMAX AFA arrays. You can find it here.

When customers ask me about best practices, a link to this doc is invariably somewhere in the response; however, I know the prospect of wading through this much material can be both daunting and time-consuming. In many instances a customer simply wants some general guidelines to follow, using the TB for detail as needed. Fair enough. So here is my list of guidelines. I’ll begin with the cop-out “it depends,” because for some of these it does, but you already knew that. All of these guidelines are in the TB somewhere – perhaps with different wording, but easily searchable. There is no weight to the order – I put them down as I thought of them. They include some older ones, too, so if you have a VMAX3 or an all-flash array, the term “metavolumes” may not mean a thing to you. Not to worry.

  • As a general rule, use 2 HBAs and a minimum of 4 ports on the VMAX, spread across directors. That gives you 8 paths.
  • Use striped metavolumes over concatenated. Yes, concatenated metavolumes are easier to expand, but they don’t compare in performance unless you are doing only sequential reads, as in a data warehouse.
  • When it comes to VM density, there is no right size for datastores. They can be big or small; it’s all about the performance you need. The VMAX will spread your data out no matter the size. Just remember that a single datastore/device has a single queue, so the more IO you send to that single device, the more you may have to adjust queue depths to achieve your desired performance. This can drive customers toward smaller datastores and lower VM density, but do what makes sense for you.
  • Use VAAI. Mostly this means you don’t have to do anything, since it is on by default, but there are two exceptions. First, if you run vSphere 6.x, create claim rules for XCOPY to increase your copy size to 240 MB. Second, if you want to use Guest OS UNMAP, you’ll have to enable the block delete parameter and use thin vmdks.
  • For disk type (e.g. zeroedthick (ZT), eagerzeroedthick (EZT), thin), thin is a perfectly acceptable option. Lots of customers think you can’t do thin on thin, but really the only concern is space management, since you can create vmdks well beyond the size of the datastore. There are performance implications for XCOPY, but they can be avoided.
  • If you need to share vmdks between VMs for an application, e.g. Oracle RAC, you will have to set the multi-writer flag.
  • Use different datastores for different database components – e.g. for Oracle, separate out REDO, DATA, etc.
  • VMware Storage IO Control (SIOC) is supported and can be used with Host IO Limits. Storage DRS (datastore clusters) is also supported when the IO metric is disabled (this prevents thrashing when both FAST and SDRS try to move data).
  • PowerPath/VE is the preferred multipathing software, but if you use NMP, be sure to change the Round Robin IOPS parameter (how often VMware switches paths) from 1000 to 1.
  • In general, do not change any of VMware’s default parameter values. All VMAX arrays (yes including flash) work well without adjusting them and in fact changing them can result in performance issues. Even queues, for the most part, do not have to be altered. If you want to change them, test, test and test.
  • If running vSphere 6.5, migrate to VMFS 6, as it is a superior file system that supports automated UNMAP.
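For the shared-vmdk point, the multi-writer flag is a per-disk setting in the VM’s .vmx file (in the vSphere Web Client it appears as the disk’s Sharing option). A one-line config fragment, where `scsi1:0` is an example controller:device address for the shared disk:

```
scsi1:0.sharing = "multi-writer"
```

Each VM sharing the disk needs the flag on its copy of that disk entry, and the shared vmdk itself should be eagerzeroedthick.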
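To make the two VAAI exceptions above concrete, here is a sketch of the ESXi shell commands involved. Treat the rule number (914) and the exact flag spellings as examples to verify against the TechBook for your vSphere 6.x build before running anything:

```shell
# Add a VAAI claim rule for VMAX devices (vendor EMC, model SYMMETRIX)
# that raises the XCOPY transfer size to 240 MB. Rule ID 914 is just an
# example; use any unused rule number on your host.
esxcli storage core claimrule add --rule 914 --type vendor \
  --vendor EMC --model SYMMETRIX --plugin VMW_VAAIP_SYMM \
  --claimrule-class VAAI \
  --xcopy-use-array-values --xcopy-use-multiple-segments \
  --xcopy-max-transfer-size 240

# Load the new rule so it takes effect.
esxcli storage core claimrule load --claimrule-class VAAI

# Enable the block delete parameter for Guest OS UNMAP. Remember the
# guest's vmdks must also be thin for its UNMAPs to reach the array.
esxcli system settings advanced set --option /VMFS3/EnableBlockDelete --int-value 1
```

These are host configuration commands, so they need to be repeated on each ESXi host in the cluster (host profiles or a script can help there).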
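For the NMP Round Robin point, the IOPS parameter can be set per device, or once for all VMAX devices via an SATP rule. A sketch, where `naa.xxx` is a placeholder for your device’s actual NAA identifier:

```shell
# Per device: switch paths after every IO instead of every 1000.
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.xxx --type iops --iops 1

# Or add an SATP rule so EMC SYMMETRIX devices claimed by Round Robin
# default to iops=1. Only devices claimed after the rule exists pick it
# up, so existing devices still need the per-device command (or a reboot).
esxcli storage nmp satp rule add --satp VMW_SATP_SYMM \
  --vendor EMC --model SYMMETRIX --psp VMW_PSP_RR --psp-option "iops=1"
```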

Those are the main points that come to mind. If there is an obvious one I missed or something you would like addressed, let me know and I’ll add/comment on it.


7 thoughts on “VMAX TB and best practices”


  1. Thank you, Drew!

    Very informative article about integrating VMAX with VMware vSphere.

    Any suggestions for those who use VNX in their environments? The official documentation looks a bit outdated (https://www.emc.com/collateral/hardware/technical-documentation/h8229-vnx-vmware-tb.pdf). And also, it’s not clear when EMC is going to release support for vSphere 6.5 in their implementation of the VASA provider.

    I’d appreciate it if you could point me in the right direction with my questions.


    1. Though I don’t work on VNX, I’m afraid the VNX TechBook will not be updated. I don’t believe VNX will update the VASA Provider for vSphere 6.5, but I’ll double-check, and if I find out differently I’ll update this comment.

      1. Thank you for letting me know, Drew. I hope VNX2 VASA provider will be updated to support vSphere 6.5 at some point.

      2. Can you tell me what version of the VASA Provider you are using? I don’t even see that VNX2 supports vSphere 6.0 with VASA so I’m also curious what version of vSphere you are using.

    1. Well, I think I know why. EMC made a decision a while ago to move forward with the Unity platform as its midrange Virtual Volume (VVol) array, which requires a VASA 2 Provider and thus supports vSphere 6.5. There will therefore not be a VNX2 VASA 2 Provider, and so no vSphere 6.5 support, as VMware does not support the VASA 1.x Provider with vSphere 6.5. As for vSphere 6.0, I have not seen any official support statement concerning the VNX2 VASA 1.x Provider; however, because of VMware’s implementation it should work, as you have discovered. You certainly can raise an SR and ask about vSphere 6.5 support with a VASA 2 Provider on VNX2, though I suspect you will receive a similar answer to the one I have provided. Sorry.
