I wrote a Dell EMC KB article (000533648) last week about VAAI and SRDF performance considerations to assist customers in certain corner cases, but my support colleagues tell me they’ve had more than a few requests for the information, so I’m going to put most of it here for reference. The intent of the article is to assist SRDF customers who run their systems close to peak, or who have multiple arrays in a multi-consistency setup, and use VMware in one or more of those environments. You may, however, find the information useful if only to understand how VAAI works with SRDF.
Issue
vStorage APIs for Array Integration (VAAI) permit certain functions to be delegated to the storage array, thus greatly enhancing the performance of those functions. This array offload capability supports four primitives, identified by their SCSI T10 commands: hardware-accelerated Full Copy (XCOPY), hardware-accelerated Block Zero (WRITE SAME), hardware-assisted locking (ATS), and storage reclamation (UNMAP). The VAAI white paper provides full detail on the implementation and function of these commands:
http://www.emc.com/collateral/hardware/white-papers/h8115-vmware-vstorage-vmax-wp.pdf
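If you want to confirm which of the primitives a given device actually supports before reading on, a quick check from the ESXi host will do it. This is a standard esxcli command; the naa identifier below is just a placeholder:

# Show VAAI support (ATS, Clone/XCOPY, Zero/WRITE SAME, Delete/UNMAP) for one device
esxcli storage core device vaai status get -d naa.60000970000197900000533030334542
# Drop the -d flag to list every device presented to the host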
Normally the VAAI commands only benefit the VMware environment, but heavily utilized SRDF arrays can be susceptible to VAAI impact.
SRDF
The two VAAI commands that can cause the most performance impact to SRDF are XCOPY and UNMAP. ATS and WRITE SAME by themselves are not usually an issue, though WRITE SAME can impact Host I/O Limits, and UNMAP also includes WRITE SAME as part of the reclamation process. For both XCOPY and UNMAP, VMware sends commands in bulk, e.g. UNMAP this LBA range, copy this 240 MB extent. This works perfectly on the local array; in an SRDF environment, however, the command cannot simply be forwarded to the remote array. Each UNMAP (and WRITE SAME) or XCOPY command must instead be broken down to the track level, so a single command coming into the local array turns into thousands of IOs to the remote array. For SRDF environments that use MSC or have heavily active SRDF directors, a burst of IO like this can cause spillover to DSE and perhaps drop the SRDF group.
XCOPY is invoked when cloning VMs, deploying new ones, or running Storage vMotion. UNMAP can be issued three different ways, depending on the vSphere and VMFS versions. With VMFS 5, UNMAP can only be executed against a datastore manually. With VMFS 6, UNMAP can be issued manually or automatically (the default). Beginning with vSphere 6.0, and when using thin vmdks, UNMAP can also be issued automatically from the Guest OS (if supported). Generally, both Guest OS UNMAP and automatic datastore UNMAP are less problematic because the commands are not issued in bulk; they are sent whenever a file or VM is deleted, or a VM is moved off a datastore. The commands therefore occur more frequently, but with less intensity. Manual UNMAP, on the other hand, is issued for the entire datastore. The larger the datastore, the longer the process takes and the greater the potential for impact.
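As a quick illustration of the automatic paths, and with placeholder datastore and mount names, this is how they can be inspected or triggered:

# VMFS 6 (vSphere 6.5+): show whether automatic UNMAP is enabled on a datastore and at what priority
esxcli storage vmfs reclaim config get -l SRDF_DS01

# Guest OS UNMAP from a supported Linux guest running on a thin vmdk
fstrim -v /mnt/data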
Susceptible Environments
Customers who utilize VMware Storage DRS (SDRS) with VMFS 6 generate both XCOPY and UNMAP commands frequently, since SDRS moves VMs between datastores to maintain capacity balance (a reminder that the performance metrics option should not be used). However, unless there is significant movement, which is not the default behavior, it is unlikely SDRS by itself would cause issues. What is more common is customers running VMFS 5 and issuing multiple manual UNMAPs against large datastores. This is likely to strain an SRDF environment that is already running at or near peak.
SRDF/Metro and SRDF/S require special consideration because XCOPY works synchronously in these environments to maintain consistency. Unlike asynchronous XCOPY (for non-SRDF devices or SRDF/A), which runs in the background and has little impact on host IO, synchronous XCOPY must be serviced immediately and can impact latency on a taxed system. The following section details recommendations for such environments.
Workarounds/Resolutions
These workarounds/resolutions are provided to address the issues mentioned herein. Dell EMC does not recommend disabling the VAAI primitives as a general practice.
SRDF/Metro and SRDF/S
XCOPY
Per the VAAI paper (http://www.emc.com/collateral/hardware/white-papers/h8115-vmware-vstorage-vmax-wp.pdf) and posts you can find here, Dell EMC recommends adjusting the XCOPY size to 240 MB through a claim rule, as the default is 4 MB and can otherwise only be raised to 16 MB. A larger XCOPY size in SRDF environments means a lot of IO is generated to the remote array(s); however, it also means the Storage vMotion or clone takes less time to complete. Testing has shown that reducing the XCOPY extent size reduces the impact to host latency, though at the cost of lengthening the copy jobs. Setting the value to 1 MB is the least impactful to latency, though the most impactful to copying data. For customers running only Metro or Synchronous SRDF, the best solution is to disable XCOPY for SRDF on the array (performed by a Dell EMC resource), since VMware’s host copy operates at comparable speeds to synchronous XCOPY. Some customers will not wish to disable XCOPY, and for them the best course is to set the copy size to 1 MB.
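For reference, the claim rule looks roughly like the following. The rule numbers are placeholders, and the full procedure (including handling of existing rules and reclaiming devices) is in the white paper above, so treat this as a sketch rather than a copy/paste recipe:

# Placeholder rule numbers; --xcopy-max-transfer-size is in MB (240 here, or 1 for the SRDF/Metro and SRDF/S case above)
esxcli storage core claimrule add --rule 250 --type vendor --vendor EMC --model SYMMETRIX --plugin VAAI_FILTER --claimrule-class Filter
esxcli storage core claimrule add --rule 251 --type vendor --vendor EMC --model SYMMETRIX --plugin VMW_VAAIP_SYMM --claimrule-class VAAI --xcopy-use-array-reported-values --xcopy-use-multiple-segments --xcopy-max-transfer-size 240
esxcli storage core claimrule load --claimrule-class Filter
esxcli storage core claimrule load --claimrule-class VAAI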
If the impact of XCOPY is proving too problematic for the customer for both SRDF and non-SRDF devices, they may choose to disable it for all operations. XCOPY can be disabled on the array, or on an individual ESXi host. If XCOPY is disabled on the array, the ESXi host setting DataMover.HardwareAcceleratedMove has no bearing on functionality.
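At the host level that setting is just an advanced option, for example:

# Disable hardware-accelerated XCOPY on this ESXi host (set back to 1 to re-enable)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
# Verify the current value
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove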
UNMAP
UNMAP can also be disabled at the array level, but as it is easily controlled from within vSphere, that is the preferred method. As mentioned, there are datastore and Guest OS UNMAP. Guest OS UNMAP is infrequently used in customer environments and, even when deployed, produces little impact due to the limited number of LBAs unmapped at once, e.g. deleting a file. At the datastore level, VMFS 5 and VMFS 6 behave differently: VMFS 5 only supports manual UNMAP, while VMFS 6 supports both manual and automatic UNMAP. By default, VMware issues UNMAP whenever a VM/vmdk is moved or deleted from a VMFS 6 datastore (Dell EMC recommends keeping the default of low priority when using automated UNMAP). On VMFS 5 the UNMAP command is issued manually. Customers are encouraged to move to VMFS 6 to avoid manual UNMAP; however, since VMware re-uses space in the datastore, issuing manual UNMAP on VMFS 5 is not necessary unless business requirements mandate it, e.g. to get accurate storage usage on the array for billing. If manual UNMAP cannot be avoided, customers should only issue it against a single datastore at a time, during a maintenance window or period of lowest array activity, and should wait for the process to complete before issuing another. This will minimize the impact. To that end, I do not advise using the VSI 7.x manual UNMAP scheduling capability, as it cannot be controlled or throttled in any way.
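By way of example, with a placeholder datastore name, keeping the VMFS 6 default and running a controlled manual UNMAP look like this:

# VMFS 6: confirm/keep the default low-priority automatic reclamation (use -p none to turn it off)
esxcli storage vmfs reclaim config set -l SRDF_DS01 -p low

# VMFS 5: manual UNMAP, one datastore at a time during a quiet window;
# -n is the number of VMFS blocks reclaimed per iteration (default 200)
esxcli storage vmfs unmap -l SRDF_DS01 -n 200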
If a customer is vacating a datastore and wishes to reclaim the storage afterward, it is best either to delete the device (freeing the storage is part of that process) and create a new one, or, if cache is at a premium, to free the space from Unisphere or Solutions Enabler, which avoids UNMAP altogether. This requires either unmapping the device from VMware first, or setting the device to NR (Not Ready) and then running the free command, after which the device can be returned to VMware for operation.
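For the Solutions Enabler path, the sequence is along these lines. The Symmetrix ID and device number are placeholders, and the exact symdev syntax can vary by Solutions Enabler release, so verify it against the documentation before running it:

# Make the device Not Ready so its allocations can be freed (placeholder SID/device)
symdev -sid 1234 not_ready 0ABC
# Free (deallocate) all tracks on the thin device, avoiding host-issued UNMAP
symdev -sid 1234 free -all -devs 0ABC
# Return the device to Ready and hand it back to VMware
symdev -sid 1234 ready 0ABC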
Conclusion
Again, these recommendations are made for specific situations, but if you find yourself in one they should prove useful.