VSI 8.1 GA

The latest version of Virtual Storage Integrator, aka VSI 8.1, is now available for download. Before getting into the VMAX All Flash/PowerMax enhancements, I thought I’d start with a reminder of the VSI plug-in structure since it has changed over its life. The latest HTML 5 plug-in, 8.x, is still a virtual appliance (vApp), but it is based on Docker containers running on VMware Photon OS. There are two containers: one for the Redis database and one for IAPI, which is the heart of VSI. After you deploy the vApp and power it on, VSI registers the plug-in in the configured vCenter. If you log in to the vApp (be sure to enable SSH during deployment), you can list the Docker containers. Just a reminder: the default password for root is root, and you will have to change it the first time you log in.
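For instance, once you SSH in as root, a standard Docker command will show the two containers (the exact names and columns in your deployment may differ):

```shell
# List the running VSI containers -- expect one for Redis and one for IAPI.
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
```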


You might ask why we are using containers. Besides being incredibly efficient and allowing us to run multiple applications on a single OS without conflict, containers make upgrades relatively easy. No longer will you have to deploy a new vApp. There are two ways to upgrade from VSI 8.0 (if you are at 7.x you will have to deploy a new vApp since the vSphere client has changed): through DockerHub directly from the vApp, or by downloading the package from the Support Site or DockerHub and then applying it within the vApp. Which you choose will depend on whether the vApp can reach DockerHub. Here is our DockerHub page:
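As a sketch of the online path, the upgrade amounts to pulling newer images from DockerHub on the appliance itself (the repository and tag names below are hypothetical; use the ones listed on our DockerHub page):

```shell
# Pull updated container images directly from DockerHub. This requires
# the vApp to have outbound network access; repository/tag names here
# are hypothetical placeholders.
docker pull dellemc/vsi-iapi:8.1
docker pull dellemc/vsi-redis:8.1
```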

I started an upgrade just to provide a flavor of how it looks if you use DockerHub directly instead of the package:

The Product Guide contains the steps for both methods. BTW you can also get VSI from the VMware marketplace here.

Now on to the features…

VMAX All Flash/PowerMax

VSI 8.1 is an important release for VMAX All Flash/PowerMax (arrays running PowerMaxOS) customers who use the provisioning functionality of VSI because it fixes some unfortunate design decisions in VSI 8.0. VSI 8.0 was the first release to support the vSphere HTML 5 client, and thus removed support for the vSphere Web Client (Flex). The provisioning model of that release only presented customers with initiator groups, not the storage groups masked to the hosts. VSI would then select the first storage group (alphabetically) masked to the host, without taking into account whether deduplication/compression was in use or what service level was assigned to the storage group. The implementation, therefore, proved mostly useless for our customers. VSI 8.1 re-implements the VSI 7.x model, which presents customers a table of storage groups from which to choose. Below is an image of the wizard when adding a new array. Note the Service Level, Compression, and Deduplication columns. We show both compression and deduplication because a VMAX All Flash running PowerMaxOS does not support deduplication, so deduplication can show disabled while compression shows enabled. For PowerMax arrays the compression and deduplication columns will always show the same value.

A reminder that we do not support child storage groups. We will only show the parent group in the wizard and use that for device masking. The storage group vsi_sl_sg in the image above is one such group. It has neither a service level nor compression/deduplication enabled, as those services are reserved for the child storage groups. We will add this functionality in a future release.


Along with the change to how we provision in VSI 8.1, we have enabled RDM creation and the ability to show the properties of the individual RDMs the same way we show datastore information. This creation must be initiated from within the Configure tab of the VM. Any RDMs will be created in the Home datastore of the VM.

There will be two enhancements coming in the future to address gaps in this implementation:

  • Create an RDM off the right-click menu of a VM
  • Change the datastore of the RDM during creation

Here is a recording of the RDM addition:

Best Practices

The other feature in VSI 8.1 for PowerMaxOS is best practices. These fall into two categories: XCOPY and NMP.


The first best practice is for VAAI XCOPY. As I’ve noted in numerous posts and best practice docs, the default copy size for XCOPY is 4 MB and can be increased to 16 MB; however, we also support raising the value to 240 MB using a claim rule. The claim rule can be added at any time, but it does require rebooting the host to take effect. The reason for this is that as devices are presented to ESXi hosts, they are “claimed” by the VAAI plugin (part of the ESXi software). Once claimed, changes to the XCOPY values will not be applied until a device is unclaimed and reclaimed. You can do this without a reboot, but it means detaching the devices, and thus VMFS, from the host. Most customers would not want to do that, so a reboot is preferable. To enable this best practice, VSI adds the capability to the right-click menu.

In the menu select Apply Host Best Practices. Use the drop-down to select the array type as PowerMax/VMAX. Here you will see Create VAAI rule for XCOPY extents to copy at 240 MB. Click the check box and then hit Save.
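Under the covers, the effect is equivalent to adding a VAAI claim rule by hand with esxcli, something like the following (the rule number here is just an example, not necessarily the one VSI uses):

```shell
# Add a VAAI claim rule raising the XCOPY transfer size to 240 MB for
# EMC SYMMETRIX devices (rule number 914 is an example).
esxcli storage core claimrule add --rule 914 --type vendor \
  --vendor EMC --model SYMMETRIX --plugin VMW_VAAIP_SYMM \
  --claimrule-class VAAI \
  --xcopy-use-array-reported-values --xcopy-use-multiple-segments \
  --xcopy-max-transfer-size 240

# Load the rule set; it only applies to a device once that device is
# unclaimed and reclaimed -- in practice, after a host reboot.
esxcli storage core claimrule load --claimrule-class VAAI
```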

A yellow box will indicate that you must reboot the host; however, this does not have to be done immediately. The rule will remain and take effect upon reboot, whenever that occurs. Remember, you must set the rule on each ESXi host because each host claims devices independently.


The second option available is to set the best practice for Native Multipathing, or NMP (when not using PowerPath/VE, which is the general best practice). By default, NMP switches paths every 1,000 IOs. Both Dell EMC and VMware recommend switching paths after every IO (iops=1). This helps detect path issues more quickly and offers some limited performance benefit. There are two options in the best practice screen, highlighted in the image below; unlike the XCOPY rule, changing NMP settings is dynamic and does not require a reboot. On the bottom half of the screen is the per-LUN setting: checking this box and hitting Save sets iops=1 on devices already presented to ESXi. On the top half is the ability to create an SATP rule for iops=1, which covers all new devices presented to the ESXi host.
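For reference, the two options map to esxcli operations along these lines (the device identifier is a placeholder):

```shell
# Per-LUN: set Round Robin to switch paths every IO on a device that is
# already presented (replace <device_id> with the actual naa identifier).
esxcli storage nmp psp roundrobin deviceconfig set \
  --type=iops --iops=1 --device=<device_id>

# SATP rule: make iops=1 the default for newly presented SYMMETRIX devices.
esxcli storage nmp satp rule add --satp=VMW_SATP_SYMM \
  --vendor=EMC --model=SYMMETRIX --psp=VMW_PSP_RR --psp-option="iops=1" \
  --description="Round Robin iops=1 for Dell EMC"
```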

We’ll be adding a couple of new capabilities in the best practices area in the future. The first will be an NMP setting for latency. Latency-based path selection was introduced in vSphere 6.7 U1, and currently we recommend it for SRDF/Metro cross-connect environments that do not use PowerPath/VE. Although latency can be used for non-Metro environments, Round Robin with iops=1 is still the best practice there. The other change will be to the VAAI plugin used in the XCOPY rule. We have a custom plugin, VMW_VAAIP_SYMM, that VMware distributes in the ESXi software for VAAI, but VMware is encouraging vendors to use its generic plugin, VMW_VAAIP_T10. The plugins are the same from a code perspective, so we support both. There is a more detailed discussion on this here. In the XCOPY rule, therefore, we will change the plugin type. This will be transparent to the user but will save VMware from having to make the change in a future upgrade of vSphere.
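The latency policy, for those who want it today, can already be set per device with esxcli on vSphere 6.7 U1 and later (the device identifier is a placeholder):

```shell
# Switch an existing device's Round Robin policy to latency-based
# path selection (available from vSphere 6.7 U1).
esxcli storage nmp psp roundrobin deviceconfig set \
  --type=latency --device=<device_id>
```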

Here is a video of adding a datastore and setting the best practices in VSI 8.1.

Storage Limits

One area we were not able to address in this release was storage limits. This would include limiting the size and number of devices, as well as the total amount of storage that can be provisioned per user per array. This is slated for the next major release, subject to change of course given the number of platforms we cover.


You’ll also notice, if you’ve used VSI for years, that we are still missing a number of features, like extending and deleting datastores, as well as ones not yet carried forward, like the datastore and LUN PowerMax views. Rest assured, they are all on the roadmap, and I’m hoping it won’t take too many future releases to get them all in.

