PowerMax 2021 Q1 Service Release

This is the Service Release (SR) for the GA PowerMaxOS release that came out in September 2020. Following a major release with an SR is our standard practice: it lets us both fix any issues that have come up (though I should note we have a recent service pack, too) and include some new features that just couldn’t make it into the GA release. Generally, don’t think of it as a patch that must be applied as soon as possible, à la iPhone updates. For the most part the SR isn’t required unless you are waiting on a specific fix or feature it contains; however, one of those features might be of great interest to customers looking at VMware Virtual Volumes (vVols) with SRDF – the ability to implement eVASA on existing PowerMax arrays.

The two feature improvement areas covered below are:

  • Unisphere
  • eVASA

Unisphere

New features in the PowerMaxOS 2021 Q1 SR in Unisphere for PowerMax UI (version 9.2.1.2) are shown in the image below.

I’ve put a checkmark next to three of the items I find of particular interest based on my customer interactions, though the first two are essentially the same change. The first concerns compression/deduplication. We’ve made some changes to how the data reduction ratio (DRR) is calculated. This ratio has always been a bit confusing, to be honest, and I don’t know that I can do a better job explaining it; however, within the Unisphere help there is a detailed write-up on how it is calculated, along with the formulas:

 https://Unisphere_Host_or_IP:8443/univmax/help/en_US/esd_c_mt_unisph_ng_nav_understanding_data_reduction.html 

As part of these changes to the ratio, the Unisphere UI now shows additional detail in various areas, one of which is illustrated in the image below: the main CAPACITY screen of my array. If I hover my mouse over the Data Reduction section in the bottom right, I get the displayed pop-up, which lays out the total data in each category. Note how my overall ratio is 1.9:1, but on the data that can actually be reduced I get 4.9:1. That’s because the 1.9:1 includes the unreducible data. Unreducible data has gone through the reduction algorithm but cannot be deduped or compressed further – think software-compressed databases, host-encrypted data, and already-compressed audio and video formats. The Storage Group Demand Report below illustrates this further.
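If the math helps, here’s a quick sketch in Python. The capacity numbers are made up to reproduce my array’s ratios (they are not the real pool figures), and the authoritative formula is in the Unisphere help link above – this just shows why including unreducible data dilutes the overall number.

```python
# Invented capacity figures that happen to reproduce my array's ratios;
# the point is the arithmetic, not the numbers themselves.
reducible_before = 4.9   # TB written that the algorithm can reduce
reducible_after = 1.0    # TB that data occupies after dedupe/compression
unreducible = 3.3        # TB that went through the algorithm unreduced

# Ratio on the reducible data alone
reducible_drr = reducible_before / reducible_after
# The overall ratio counts unreducible data on both sides, diluting it
overall_drr = (reducible_before + unreducible) / (reducible_after + unreducible)

print(f"Reducible-only DRR: {reducible_drr:.1f}:1")  # 4.9:1
print(f"Overall DRR:        {overall_drr:.1f}:1")    # 1.9:1
```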

DRR in the Storage Group Demand Report

If we drill down to the storage group level, we find two new columns in the Storage Group Demand Report for unreducible data. I’ve covered this report before in relation to getting vVol information, in particular about snapshots. I’ve included both vVol (e.g. _VVOLxxx) and regular storage groups in this image.
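If you’d rather pull these numbers with a script than click through the UI, Dell’s PyU4V Python library wraps the Unisphere REST API. Below is a minimal sketch; the connection details are placeholders, and the unreducible-data key name is my assumption – dump the payload once to confirm what your Unisphere 9.2.1 actually returns.

```python
# Sketch using Dell's PyU4V library (pip install PyU4V).
# Host, credentials, and array ID are placeholders.
import PyU4V

conn = PyU4V.U4VConn(
    server_ip='unisphere.example.com', port=8443,
    username='smc', password='smc',
    verify=False, array_id='000197900123')

for sg_name in conn.provisioning.get_storage_group_list():
    sg = conn.provisioning.get_storage_group(sg_name)
    # 'unreducible_data_gb' mirrors the new Demand Report column, but the
    # key name is a guess - print the full dict once to verify it.
    print(sg_name, sg.get('cap_gb'), sg.get('unreducible_data_gb'))

conn.close_session()
```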

You’ll notice I have a significant amount of unreducible data in the last group there, infrastructure_sg – 1.2 TB to be exact. That storage group represents the bulk of my lab environment. In addition to all my VMs, I have lots of ISOs and downloaded software stored as compressed zip or tar files. These cannot be reduced further, so they are classified as unreducible after going through the algorithm. Well, now that we are all thoroughly confused, on to the alert!

Metro Alert

The other addition in Unisphere I wanted to point out is a new alert for Metro. If a failure of a Metro group (e.g. a network outage) causes the pairs to suspend, an alert can be generated. I say “can” because it is disabled by default. In my environment below I am using SYSLOG for notifications (with VMware vRealize Log Insight as the syslog target). I check the box and then select SYSLOG. Note that even if notifications are not in use, any alerts are still generated in Unisphere and can be viewed there.
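If you want a belt-and-suspenders check alongside the new alert, you can also poll the Metro state yourself and forward anything suspicious to the same syslog target. A rough sketch follows; the PyU4V replication calls, response keys, and state strings are my assumptions, so verify them against your PyU4V version before relying on this.

```python
# Sketch: poll a Metro-protected storage group's SRDF state and send a
# syslog message if it is not active. Method names, response keys, and
# state strings are assumptions - check the PyU4V replication docs.
import logging
import logging.handlers
import PyU4V

log = logging.getLogger('metro-watch')
log.setLevel(logging.INFO)
# Point this at your syslog target (Log Insight in my case)
log.addHandler(logging.handlers.SysLogHandler(
    address=('loginsight.example.com', 514)))

conn = PyU4V.U4VConn(server_ip='unisphere.example.com', port=8443,
                     username='smc', password='smc',
                     verify=False, array_id='000197900123')

sg_id = 'metro_sg'  # hypothetical Metro storage group
for rdfg in conn.replication.get_storage_group_srdf_group_list(sg_id):
    details = conn.replication.get_storage_group_srdf_details(sg_id, rdfg)
    states = details.get('states', [])
    if any(s not in ('ActiveActive', 'ActiveBias') for s in states):
        log.warning('SRDF/Metro %s (RDFG %s) not active: %s',
                    sg_id, rdfg, states)

conn.close_session()
```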

eVASA Add

Embedded VASA, or eVASA for short, is now available for post-install implementation on PowerMax arrays (only). At GA in September last year it was only available when you ordered a new PowerMax. Now, with the assistance of our Professional Services (PS) team, it can be added as a new GuestOS on the array, all done online. Prior to adding it, Dell EMC will ensure the array has the available resources (cache, memory, cores) for eVASA – and for vVols in general, if the box was not originally sized with vVols in mind. eVASA supports VASA 3.0 and thus replication of vVols via SRDF. The customer provides two IP addresses (just like for embedded Unisphere – eMGMT), and once PS completes adding eVASA, you take those IPs and use them to register the VASA Provider in vCenter. Remember, the GuestOS (or container, if you like) is controlled by the array; unlike VASA 2 in the past, this is not a vApp that you configure. While you can use the eVASA management interface (port 5480) to configure, say, log levels, you don’t have to do anything else to use it once it is configured. This is all covered in my post on eVASA and the VASA 3 paper.
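One small tip: before registering the provider in vCenter, it’s worth confirming the eVASA GuestOS answers on both IPs. Here’s a trivial sketch; the IPs are placeholders and the /vasa-providers.xml path is carried over from the older VASA Provider convention, so it’s an assumption – confirm the actual registration URL in the eVASA documentation.

```python
# Quick reachability check for the two eVASA IPs before registering the
# VASA Provider in vCenter. IPs are placeholders; the URL path is an
# assumption borrowed from the older VASA Provider convention.
import requests

EVASA_IPS = ['10.0.0.51', '10.0.0.52']  # the two IPs you gave to PS

for ip in EVASA_IPS:
    url = f'https://{ip}:5989/vasa-providers.xml'
    try:
        resp = requests.get(url, verify=False, timeout=5)
        print(f'{url} -> HTTP {resp.status_code}')
    except requests.RequestException as exc:
        print(f'{url} unreachable: {exc}')
```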

For all the detail of the release, please see the Release Notes (sorry, no link yet – I’ll update when available).


10 thoughts on “PowerMax 2021 Q1 Service Release”


  1. Thanks Drew!
    What do you advise to prevent SRDF/A suspending due to write-cache usage from a VMware write storm during snapshot deletion?
    More cache in the PowerMax?
    Or maybe VMware settings to slow down snapshot deletion?

    1. I can’t say I’m familiar with your particular issue in relation to snapshots. I assume you are talking about VMware snapshots, in which case mass deletion would trigger VMware to issue UNMAP commands to reclaim space in the datastore. These, along with the companion command WRITE SAME, can impact SRDF significantly (you can find a post here on it). Although you can always buy more cache, that’s obviously not the first choice of most customers. A couple of options to try: 1. Reduce the number of bulk snapshot deletions, i.e., space them out. 2. Disable automatic UNMAP on the datastore where the snapshots are stored before deleting them, then re-enable it during slow hours (say overnight, or any time there is reduced activity). Option 2 is likely to be more impactful, though remember that when you re-enable it, VMware is going to run the reclaim.
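    If you want to script option 2, here’s a rough sketch with pyVmomi. The host and datastore names are placeholders, UpdateVmfsUnmapPriority is my reading of the vSphere 6.5+ SDK (VMFS 6 only), and as always, test it outside production first.

```python
# Sketch of option 2 with pyVmomi: turn automatic UNMAP off on a VMFS 6
# datastore before the snapshot deletions, then set it back to 'low'.
# Host/datastore names are placeholders; UpdateVmfsUnmapPriority needs
# vSphere 6.5+, and this is untested outside my reading of the SDK docs.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host='esxi01.example.com', user='root',
                  pwd='password', sslContext=ctx)
try:
    # Direct-to-host connection: rootFolder -> ha-datacenter -> host
    host = (si.content.rootFolder.childEntity[0]
            .hostFolder.childEntity[0].host[0])
    ds = next(d for d in host.datastore if d.name == 'snapshot_ds')
    storage = host.configManager.storageSystem
    # 'none' disables automatic space reclamation; use 'low' to re-enable
    storage.UpdateVmfsUnmapPriority(ds.info.vmfs.uuid, 'none')
finally:
    Disconnect(si)
```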

      1. Thanks Drew!
        I don’t think it’s the UNMAP commands, since those don’t use the write cache…
        Deleting a VMware snapshot causes massive write I/O as VMware applies the snapshot changes back to the base disks, and all of that I/O goes through the PowerMax write cache, pushing usage past the threshold, triggering an alert, and suspending any SRDF/A sessions running at the same time.
        It is collateral damage…
