I’ve just completed the latest update to the VMAX/VMware TechBook, a tome (I can be derogatory since it’s mine) of some 400-plus pages that explains how best to use VMware with the various VMAX/VMAX3/VMAX AFA arrays. You can find it here.
When customers ask me about best practices, a link to this doc is invariably somewhere in the response; however, I know the prospect of wading through this much material can be both daunting and time-consuming. In many instances a customer simply wants some general guidelines to follow, using the TechBook for detail as needed. Fair enough. So here is my list of guidelines. I’ll begin with the cop-out “it depends,” because for some of these it does, but you already knew that. All of these guidelines are in the TechBook somewhere – perhaps with different wording, but easily searchable. There is no weight to the order – I put them down as I thought of them. They include some older ones, too, so if you have a VMAX3 or an all flash array, the term “metavolumes” may not mean a thing to you. Not to worry.
- As a general rule, use 2 HBAs and a minimum of 4 ports on the VMAX, spread across directors. That will give you 4 paths per device.
- Use striped metavolumes over concatenated. Yes, concatenated metavolumes are easier to expand, but they don’t compare in performance unless you are doing only sequential reads, as in a data warehouse.
- When it comes to VM density, there is no right size for datastores. They can be big or small; it’s all about the performance you need, and the VMAX will spread your data out no matter the size. Just remember that a single datastore/device has a single queue, so the more IO you drive to that one device, the more likely you are to have to adjust queue depths to achieve your desired performance. That fact drives some customers toward smaller datastores and lower VM density, but do what makes sense for you.
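For reference, the per-device outstanding I/O setting can be viewed and adjusted with esxcli. This is only a sketch – the naa device ID and the value 64 are placeholders, and per my note below about defaults, only change this after testing:

```shell
# Show the current settings for a device (naa ID below is a placeholder)
esxcli storage core device list -d naa.60000970000196700531533030334545

# Raise the number of outstanding I/Os ESXi allows to that single device
# (64 is an example value -- test before deviating from the default)
esxcli storage core device set -d naa.60000970000196700531533030334545 \
  --sched-num-req-outstanding 64
```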
- Use VAAI. Mostly this means you don’t have to do anything, since it is on by default, but there are two exceptions. First, if you run vSphere 6.x, create claim rules for XCOPY to increase the copy size to 240 MB. Second, if you want to use Guest OS UNMAP, you’ll have to enable the block delete parameter and use thin vmdks.
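As a sketch, the two exceptions above map to host commands like the following. The claim rule number 914 is just an example ID, and you should confirm the exact rule values for your environment against the TechBook:

```shell
# vSphere 6.x: add a VAAI claim rule so XCOPY to VMAX (SYMMETRIX) devices
# uses array-reported values, multiple segments, and a 240 MB transfer size
esxcli storage core claimrule add -r 914 -t vendor -V EMC -M SYMMETRIX \
  -P VMW_VAAIP_SYMM -c VAAI -a -s -m 240
esxcli storage core claimrule load -c VAAI

# Guest OS UNMAP: enable the block delete parameter on the host
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
```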
- For disk type (e.g. zeroedthick (ZT), eagerzeroedthick (EZT), thin), thin is a perfectly acceptable option. Lots of customers think you can’t do thin on thin, but really the only concern is space management, since you can create vmdks well beyond the size of the datastore. There are performance implications for XCOPY, but they can be avoided.
- If you need to share vmdks between VMs for an application, e.g. Oracle RAC, you will have to set the multi-writer flag.
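For instance, the multi-writer flag can be set through .vmx entries like these – the SCSI slot and file name are purely illustrative, and in newer vSphere releases the same setting is available per disk in the client:

```
scsi1:0.fileName = "shared-oradata.vmdk"
scsi1:0.sharing = "multi-writer"
```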
- Use different datastores for different database components – e.g. for Oracle, separate out REDO, DATA, etc.
- VMware Storage IO Control (SIOC) is supported and can be used with Host IO Limits. Storage DRS (datastore clusters) is also supported when I/O metrics are disabled (this prevents thrashing if both FAST and SDRS are moving data around).
- PowerPath/VE is the preferred pathing software, but if you use NMP, be sure to change the Round Robin IOPS parameter (how often VMware switches paths) from the default of 1000 to 1.
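A sketch of the NMP change – the device ID is a placeholder, and the second command is the rule-based alternative so that newly presented VMAX devices pick up the setting automatically:

```shell
# Set Round Robin to switch paths after every I/O for an existing device
esxcli storage nmp psp roundrobin deviceconfig set \
  --type=iops --iops=1 --device=naa.60000970000196700531533030334545

# Or add a SATP rule so new SYMMETRIX devices default to RR with iops=1
esxcli storage nmp satp rule add -s VMW_SATP_SYMM -V EMC -M SYMMETRIX \
  -P VMW_PSP_RR -O "iops=1"
```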
- In general, do not change any of VMware’s default parameter values. All VMAX arrays (yes, including all flash) work well without adjusting them, and in fact changing them can result in performance issues. Even queues, for the most part, do not have to be altered. If you do want to change them, test, test, and test.
- If running vSphere 6.5, migrate to VMFS 6, as it is a superior file system that supports automated UNMAP.
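To verify that automated UNMAP is active on a VMFS 6 datastore, something like the following can be used (the datastore label is a placeholder):

```shell
# Show the automatic space reclamation (UNMAP) settings for a VMFS 6 datastore
esxcli storage vmfs reclaim config get --volume-label MyDatastore
```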
Those are the main points that come to mind. If there is an obvious one I missed or something you would like addressed, let me know and I’ll add/comment on it.