PowerMax 5978.444.444

Today is general availability for the new PowerMaxOS release. As I’ve done for previous releases, I want to cover the features/improvements that have some relationship to VMware environments. I will be adding this information into my whitepapers and TechBooks over the course of perhaps the next month, but for now it will be available here for reference. I’ve been asked before why I can’t have the updated documents at release, and it just comes down to resources and time. I am the only resource and there isn’t enough time! For this new release I’ll cover the highlights here, but I’ve also done more detailed posts on specific features, which are linked below:

Online SRDF/Metro Device Expansion

VAAI Statistics in Unisphere for PowerMax

SRDF SRA 9.1

And now the PowerMax release feature summary.

NVMeoF

The biggest feature introduced with this release is NVMeoF, or NVMe over Fabrics. The PowerMax platform has been NVMe ready since inception, but this is the first release that supports running it from the back end all the way through to the host. The first flavor of NVMeoF we are releasing is FC-NVMe, or NVMe over Fibre Channel. As most of our customers run FC, it was natural to start there. FC-NVMe works with your current FC infrastructure, enabling the use of both your existing FC connections and new FC-NVMe connections. There are of course many flavors of NVMeoF, and I’ve discussed a number of them in a previous post if it is of interest. I should note right from the start that VMware does not support NVMeoF yet. So while I do have a tech preview demo in that post, our support of NVMeoF has no bearing on VMware’s support. VMware will release support in the future; until they do, only certain operating systems on physical servers will support our release of NVMeoF, e.g., SUSE, Red Hat, etc. You can review the ESSM to see all the supported combinations.

NVMeoF is, as you might guess, a hardware solution first. This means only the PowerMax 2000/8000 will support it, and existing systems require modification: a new 32 Gb module (SLIC in PowerMax parlance). New systems will ship with the 32 Gb FC module. Again, you can’t install these into a VMAX All Flash, no matter what PowerMaxOS you are running. The new module supports both regular FC and FC-NVMe. FC-NVMe is enabled by a new emulation, FN, so I can have this new 32 Gb card running both FA and FN emulations on it. Here is an example of my Unisphere for PowerMax screen showing both emulations (and iSCSI to boot).

So if you have a 32 Gb SLIC on the PowerMax, you will need a corresponding 32 Gb fabric (or 16 Gb, since the SLIC negotiates down) and an HBA that is also 32 Gb AND supports NVMeoF. Most of the newer HBAs do, so it’s just a matter of whether the OS supports it. Again, check the ESSM.
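If you would rather script the emulation check than click through Unisphere, here is a minimal sketch against the REST API. To be clear, this is my illustration, not a documented recipe: the host, credentials, and serial number are placeholders, and the assumption that director names carry the emulation prefix (FA, FN, SE) should be verified against the 9.1 REST API documentation.

```python
# Minimal sketch: list directors on the array and infer their emulations
# from the director-name prefix. The FA/FN/SE prefix convention is my
# assumption -- verify against your array and the 9.1 REST docs.
import requests

BASE = "https://unisphere.example.com:8443/univmax/restapi/91"  # placeholder host
AUTH = ("smc", "smc")        # placeholder credentials
SID = "000197600123"         # placeholder array serial number

resp = requests.get(f"{BASE}/system/symmetrix/{SID}/director",
                    auth=AUTH, verify=False)  # lab arrays often use self-signed certs
resp.raise_for_status()

for director in resp.json().get("directorId", []):
    # e.g. FA-1D (FC), FN-1D (FC-NVMe), SE-1E (iSCSI) -- assumed naming
    if director.split("-")[0] in ("FA", "FN", "SE"):
        print(director)
```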

SRDF

In this first FC-NVMe release, we are not supporting SRDF with the new protocol. SRDF cannot be used with the new 32 Gb modules; rather, you must use the current 16 Gb SLICs. Therefore, if you want to use both FC-NVMe and SRDF, you will need both 32 and 16 Gb SLICs.

SCM

Along with the 32 Gb module, we are now offering support for Storage Class Memory (SCM) drives in 750 GB and 1.5 TB sizes. SCM is the next level of high-performance drive, a step up from the NVMe flash drives, offering even lower latency. SCM drives, as you might guess, are a good deal more expensive than NVMe flash drives, so most customers will not have a box with all SCM. It’s much like the transition from mechanical disks to flash years ago. If your array has both SCM and NVMe flash, you will have two tiers of storage, and automated data placement is activated. Automated data placement takes advantage of the performance difference between the tiers to optimize access to frequently accessed data. The feature can also help to optimize access to storage groups that have higher-priority service levels. In fact, both the Diamond and Platinum service level response time targets drop by 0.2 ms in boxes with SCM. The array will use machine learning to place the most heavily accessed data on SCM in an uncompressed state (though deduplication is still performed). You can see my two disk groups, EFD (NVMe) and SCM, in the lab box:

For more information see the Family Product Guide here.
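As an aside, if you want to check the service level targets on your own box, the REST API exposes the defined service levels. The sketch below is only illustrative: the host and serial number are placeholders, and the exact response attribute for the expected response time is my assumption, so confirm it against the 9.1 REST API documentation.

```python
# Hedged sketch: list the service levels defined on an array. The
# "average_expected_response_time_ms" attribute name is an assumption;
# check the 9.1 REST docs for the real field.
import requests

BASE = "https://unisphere.example.com:8443/univmax/restapi/91"  # placeholder
AUTH = ("smc", "smc")  # placeholder credentials
SID = "000197600123"   # placeholder serial number

slo_ids = requests.get(f"{BASE}/sloprovisioning/symmetrix/{SID}/slo",
                       auth=AUTH, verify=False).json().get("sloId", [])
for slo_id in slo_ids:
    detail = requests.get(f"{BASE}/sloprovisioning/symmetrix/{SID}/slo/{slo_id}",
                          auth=AUTH, verify=False).json()
    print(slo_id, detail.get("average_expected_response_time_ms"))
```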

TDEV rapid delete

This new feature/improvement has been a long time coming. One of the most painful tasks on the array is deleting a device (TDEV), because we only allow you to remove a device if it no longer has any storage associated with it. This means that even if you unmask the device and remove it from the storage group, you can’t just tell the array to delete it. If you try, you will get an error telling you that the device still has extents allocated. So then you have to run a “free” job to deallocate the extents, after which you can remove the device. As I say, painful. Well, no more. With this release a single delete command will do the whole job. Notice here that my device 24 is 3% allocated, but when I ask Unisphere to delete it, as long as the device is not masked/mapped or in a storage group, I can delete it.

Certainly a welcome improvement.
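For those scripting this rather than using Unisphere, the one-step behavior is easy to sketch through the REST API. Host, credentials, and IDs below are placeholders; the point is that on 5978.444.444 the single DELETE now succeeds on an allocated device, where previously you had to run the free job first.

```python
# Minimal sketch: delete an allocated TDEV in a single call on 5978.444.444.
# Previously this DELETE would error ("device still has extents allocated")
# until a separate free/deallocate job had completed. Host, credentials,
# and IDs are placeholders.
import requests

BASE = "https://unisphere.example.com:8443/univmax/restapi/91"  # placeholder
AUTH = ("smc", "smc")  # placeholder credentials
SID = "000197600123"   # placeholder serial number
VOLUME = "00024"       # the 3%-allocated device from the example above

resp = requests.delete(f"{BASE}/sloprovisioning/symmetrix/{SID}/volume/{VOLUME}",
                       auth=AUTH, verify=False)
resp.raise_for_status()  # still fails if the device is masked/mapped or in an SG
print(f"Device {VOLUME} deleted")
```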

Unisphere for PowerMax/REST API 9.1

Unisphere for PowerMax and REST API version 9.1 are the accompanying management tools for the new PowerMaxOS release 5978.444.444. They include a number of improvements, including the ability to collect real-time system and storage group performance information. One change in 9.1 is the removal of un-versioned active management endpoints in the REST API. Though it seems like a small change, we have found that both customers and our own internal engineering groups are severely impacted by it, as the un-versioned endpoints were used for some of the initial checking when running a REST command. We are revising our code to remove them from our plugins, but please audit your REST scripts/plugins against 9.1 to be sure you do not have this issue. If it is too difficult to make these changes, however, you can open an SR and ask for a hot fix that restores these endpoints.
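To make the audit concrete, here is the kind of change to look for in your scripts. This is only an illustration with placeholder values; the point is simply that every path should carry an explicit version segment (84, 90, or 91) when talking to a 9.1 server.

```python
# Illustrative before/after for the removal of un-versioned endpoints in 9.1.
import requests

HOST = "https://unisphere.example.com:8443"  # placeholder Unisphere host
AUTH = ("smc", "smc")  # placeholder credentials
SID = "000197600123"   # placeholder serial number

# Pre-9.1 style: no version segment in the path. Gone in 9.1.
old_url = f"{HOST}/univmax/restapi/sloprovisioning/symmetrix/{SID}"

# 9.1 style: version pinned explicitly in the path.
new_url = f"{HOST}/univmax/restapi/91/sloprovisioning/symmetrix/{SID}"

resp = requests.get(new_url, auth=AUTH, verify=False)
resp.raise_for_status()
```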

Unisphere for PowerMax version 9.1 is also the first version to drop older REST endpoint versions from support. With 9.1, therefore, endpoint versions 81, 82, and 83 are no longer supported; the latest three versions, 84, 90, and 91, are. This will be the practice for REST call support in any future release. For instance, if 9.2 is the next release, 84 will be dropped.
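A related sanity check you might build into scripts is confirming the Unisphere version before choosing an endpoint version. The /version resource below reflects my understanding of the API, and the values are placeholders, so verify against the 9.1 documentation.

```python
# Hedged sketch: query the running Unisphere version, then pick from the
# supported endpoint versions (84, 90, or 91 on a 9.1 server).
import requests

HOST = "https://unisphere.example.com:8443"  # placeholder
AUTH = ("smc", "smc")  # placeholder credentials

info = requests.get(f"{HOST}/univmax/restapi/version",
                    auth=AUTH, verify=False).json()
print("Unisphere version:", info.get("version"))
```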

VASA Provider 9.1

With the new release, a new version of the VASA Provider is available, 9.1. There is no new functionality in this release per se, but it does contain the snapshot performance improvement I wrote about here. In any case, if you upgrade your PowerMaxOS you should also upgrade your VP to 9.1. As a reminder, we still only support vVols 1.0/VASA 2.0, for now.

There are many more features/improvements/fixes you can read about in the Release Notes.

 
