Dell VSI 10.0

One of two plugin releases today, along with ESA, the Dell (née EMC) VSI 10.0 plugin supporting the latest Unisphere REST 10 is available for download. This release supports our latest PowerMaxOS 10 and PowerMax 2500/8500 arrays. Beyond that general support, there are two main features added in VSI 10 for the PowerMax array: NVMe/TCP and PowerMax File (NFS). Because REST is backward compatible, though, feel free to upgrade VSI even with your current arrays.

Dashboard

I’ll start with the VSI 10 dashboard since it has changed a little with the new additions.

PowerMax File

There is now support for PowerMax File in the form of NFS datastore creation and in-context management of existing datastores. The support is specifically for the new PowerMax 2500/8500 platforms; there is no eNAS support for the previous platform. Rather than include an excessive number of screenshots, I’ve run through the datastore creation wizard in a narrated demo that covers all the salient parts.

One particular item of note: PowerMax File is only available on embedded Unisphere, so when you add your storage array in VSI you must use the embedded Unisphere if you want the NAS servers discovered. If you use an external Unisphere, it will not discover any NAS servers.
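Once the wizard completes, a quick way to sanity-check the result is to look at the NFS mounts directly on an ESXi host. A minimal sketch, assuming the datastore was mounted as NFS 3 (use the nfs41 namespace instead if it was mounted as NFS 4.1):

# List all NFS 3 mounts; output includes the volume name, NAS host, share, and mount state
esxcli storage nfs list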

NFS datastore demo

Datastore removal

You can also remove the datastore and its underlying objects (device, NFS export) using the VSI right-click menu. The two steps are below. Just be sure there are no VMs registered in the datastore, or the operation will fail; VSI does not check for registered VMs before it starts.
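Since VSI won’t check for you, it’s worth confirming the datastore is empty before removal. A sketch from the ESXi shell, with a hypothetical datastore name:

# List all registered VMs and filter for any that live on the datastore;
# no output means nothing is registered there
vim-cmd vmsvc/getallvms | grep '\[NFS-DS-01\]'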

Replication

There is no ability to set up replication for NAS servers from the VSI interface; however, because replication is configured at the NAS level, an NFS datastore created on a replicated NAS server is replicated. Unfortunately, there is a bug in the provisioning wizard where the replication status shown may be inaccurate, so if replication is in use and important for your NFS datastore, be sure to check with your storage admin (see below for more detail).

If your NAS server is not replicated you (or the storage admin) can always add replication after the creation of the filesystem by following this post.

NVMe/TCP

Support for NVMe/TCP is the second major feature added. Unlike NFS support, which introduces different steps in datastore creation, NVMe/TCP datastore provisioning is no different from FC or iSCSI. You do need to know whether your storage group is presented via TCP, which is why I’ve named mine accordingly:

If you know NVMe/TCP, you’ll recall that devices are immediately recognized by the TCP software adapter, so there is no concept of rescanning HBAs. The VSI datastore creation process is the same as for FC and iSCSI, however, so you will see that VSI still forces an HBA rescan. There’s no harm in this, though it is unnecessary.
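If you want to confirm what the host actually sees over the software adapter, the standard NVMe namespace of esxcli applies; a quick sketch:

# List the NVMe controllers the host is connected to (one per target path)
esxcli nvme controller list

# List the namespaces (devices) presented through those controllers
esxcli nvme namespace list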

When you view the details of an NVMe/TCP datastore, you can identify it as such by the inclusion of the NGUID. For FC or iSCSI the NGUID will be set to “NA”.

The NGUID is the namespace identifier shown in vSphere; unlike FC, the WWN is not used.
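Because of this, you can cross-check the NGUID VSI reports against the host, since in my experience the ESXi device identifier for an NVMe namespace is the eui.* string built from the NGUID. A sketch (the NGUID below is a hypothetical placeholder):

# NVMeoF devices surface as eui.<NGUID> rather than naa.<WWN>
esxcli storage core device list -d eui.0000000000000001000009700bbb6425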

RDMs

One final thing about NVMe/TCP (and all NVMeoF for that matter): VMware does not support RDMs. If you accidentally choose a TCP storage group, you’ll see this error:

Don’t blink, though, because this screen will disappear automatically in a few seconds. Not only that, but there will be no record in events or tasks, only in the VSI logs.

Best practices menu

Pathing

Unfortunately, our best practices pathing functions haven’t been updated for NVMe/TCP devices yet. As NVMe/TCP is still nascent, I don’t find this a huge concern, but if you want to follow best practices, I’ve included the pathing commands below. NVMeoF devices use the HPP plugin, not NMP. HPP offers a number of different policies, similar to NMP, and its default is the same as NMP’s: Round Robin with iops=1000. For NMP we recommend iops=1, but for HPP we recommend the LB-Latency policy as it has the intelligence to handle more path issues. I’ve included the command to change the policy for a single device as well as the claim rules you want to create to handle future devices. If you only have one array model type (2500 or 8500), you only need one rule (use rule 914 in that case, no matter which array).
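Here is a minimal sketch of those commands. The device identifier and model string are hypothetical placeholders; confirm the actual values on your host and array before creating claim rules:

# Set the LB-Latency path selection scheme on an existing NVMe device
esxcli storage hpp device set --device=eui.0000000000000001000009700bbb6425 --pss=LB-Latency

# Claim rule so future devices are claimed by HPP with LB-Latency.
# "PowerMax_2500" is a placeholder model string; substitute your array's model,
# and add a second rule (e.g. 915) if you have both model types.
esxcli storage core claimrule add --rule 914 --type vendor --vendor NVMe --model "PowerMax_2500" --plugin HPP --config-string "pss=LB-Latency"
esxcli storage core claimrule load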

XCOPY rule

As there is no support for XCOPY with NVMeoF, creating the rule with the best practices menu has no impact on these datastores.
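You can see this on the host by checking the VAAI status of an NVMeoF device; the Clone Status, which reflects XCOPY, should report unsupported. A sketch with a hypothetical device ID:

# Display VAAI primitive support for the device (ATS, Clone/XCOPY, Zero, Delete)
esxcli storage core device vaai status get -d eui.0000000000000001000009700bbb6425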

Manual space reclamation

Space reclamation for NVMeoF datastores works fine since the UNMAP command is translated to the equivalent command in the NVMe command set.
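Manual reclamation is therefore the same as on any VMFS 6 datastore. A minimal sketch, with a hypothetical datastore name:

# Manually reclaim free space on the datastore (a reclaim-unit count is optional)
esxcli storage vmfs unmap --volume-label=NVMe-TCP-DS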

Bug “features”

Now I’m not trying to pick on VSI here, as all software has its share of issues. The reason I include these sections is to save customers time, not to needle the developers. Hitting one of these bugs in a vacuum may prompt a customer to open an SR, and it may take many days to finally get an answer that the issue is not a concern. I’d rather tell you about them here, so that if you do experience one, you know you can skip it and move on.

Pink error

As bugs go, this one is more of an annoyance than anything. After deployment, when the screen returns, you may see this pink bar:

It’s innocuous and will go away after a few seconds. You may also see it on other screens in the management interface, where it likewise disappears on its own. The issue has already been fixed, but the fix did not make it into the GA build.

Datastore expansion refresh

If you expand a datastore using the Dell VSI right-click menu, the new datastore size will not show when the operation completes unless you run a vSphere Client refresh.

If you EDIT the capacity in the VSI interface, however, you will not need to refresh to see the new size.

Apparently it behaves this way in version 9 as well, so perhaps this is old news.

NAS replication column

When you add a new NAS server, the Replication column shows “Disabled”, but when you provision, the column shows “Enabled”. Which is correct? I’ve seen both states whether or not I am actually using replication, so I’m not sure the column matters. If you need to know, you’ll have to check with the storage admin, who can tell you definitively. I’ve asked the developers to fix this.

Number of NAS servers

This minor one refers to the listing of the NAS servers. When you provision an NFS datastore and select the storage system, only one NAS server will show no matter how many you have; however, on the next screen where you supply the details, the drop-down box will show them all. If the Replication column from the previous section were accurate, this would be more of a problem, but as it isn’t, it’s not really a big deal. Well, at least we are consistent 🙂

Limitations

Mostly some clarifications below.

VMFS 6

VSI 10 only supports VMFS 6. The drop-down box is simply an unintended carry-over; there is no missing option for VMFS 5.

SRM

This is not exactly a limitation, since it is by design, but the inclusion of the SRM Servers screen in Plugin Management is confusing. VSI automatically discovers any SRM servers attached to your vCenter, like mine here:

Having discovered it, you might assume there is some SRM integration with VSI, and you’d be right, just not for PowerMax. There is no integration between the SRDF SRA and VSI, so unless you have RecoverPoint or PowerStore in the environment, this screen is simply informational.

SRDF storage groups

VSI will not permit provisioning to storage groups that are involved in an SRDF relationship. In the past, VSI did not prevent this action, but performing it would break the ability to manage SRDF at the storage group level, and customers would get around that by adding replication later. With REST 10, you will get the following error, which completely blocks the action:

The error mirrors the warning you would get in Unisphere when trying to add a new device to an existing SRDF-managed storage group. In Unisphere, however, you have the option of overriding the warning and continuing; no such luck in VSI. Since VSI does not check for the SRDF condition when presenting storage groups, I have requested an enhancement to filter them out and avoid this error in the future. There is no current plan to support these types of storage groups.


And I think that about does it. You can grab the new build here.
