VMAX3 HYPERMAX 5977 2015 Q3 SR

Today we add more features and functionality to our flagship array, the VMAX3.  The official HYPERMAX release is 5977.691.684 and includes the new code for the VMAX3 as well as Solutions Enabler/Unisphere for VMAX 8.1.  Here, in a quick bullet list, are some of the new and updated features, along with a couple of links:

  • Device expansion and increase in device size to 64TB
  • SRDF Metro – Active/Active
  • FAST.X improvements (more supported arrays, eNAS support)
  • eNAS with SRDF/S
  • FAST Hinting
  • Embedded Management
  • Hardware upgrades
  • FCoE and iSCSI support
  • Support for VAAI primitive Full Copy or XCOPY on SRDF
  • Release Notes
  • Solutions Enabler 8.1 Docs
  • Unisphere for VMAX 8.1 Docs

Some of these are self-explanatory (device expansion, about time!) but I’ll expand on a couple here and then go into the two most VMware-specific ones – SRDF Metro and XCOPY on SRDF devices.

FAST hinting is one of those features we’ve been working on for quite a while now.  Initially it supports only Oracle databases.  The idea behind FAST hinting is to move portions of the database from a lower SLO to a higher SLO, e.g. Silver to Diamond, at a particular point in time to improve performance.  A classic example is a report that runs every Friday at 5 PM and needs to complete as quickly as possible.  Using FAST hinting, I would set up a hint telling the VMAX3 to move the tables and indexes, or perhaps partitions, associated with that report to a higher SLO in time for the report to run.  The hint has a lifetime, so you would set it to expire at a point when you know the report will be complete (obviously you may tweak that week to week).

Embedded management is a new GuestOS that runs in the HYPERMAX OS hypervisor.  It is an embedded Solutions Enabler/Unisphere for VMAX/SMI-S installation.  For those customers who have a single VMAX3, it removes the requirement to set up an external management host.  It is assigned an IP which you can access externally, and as with other embedded GuestOSs it is highly available.  One thing it cannot do is manage other arrays as local (remote array management is possible if SRDF is configured, though there are limits to what you can do).  You can of course still use an external management server even if you have embedded management.

SRDF Metro is also now available (a long time coming, in my opinion).  For those who don’t know, an SRDF Metro configuration offers an active/active cluster much like VPLEX Metro.  It has limited distance (~100 km) due to latency requirements and can be used with bias mode (one side set as the winner from the start) or with a witness (requires a third VMAX/VMAX3 array).  The R1 and R2 devices share the same device identity and, from a VMware perspective, the same naa ID.  This permits quick vMotion across hosts in physically different locations, since all the hosts see the exact same datastore presented from two different arrays.  For a good introduction, check out this WP: Intro to SRDF Metro.  SRDF Metro is releasing with a limited list of operating system support at GA.  Here is a partial table from ESSM:

[Image: OS_supported – partial OS support table from ESSM]

If you see VMware in that list you’ll notice a footnote.  What it says is that you’ll need an RPQ to use Metro with VMware.  Rest assured that requirement will go away in the not-too-distant future, but right now the GA SR will not work with VMware without one.  I have an Oracle RAC/vROps environment running on a VMFS stretched cluster, so I can assure you it is obtainable via RPQ if you don’t want to wait.  It won’t be a long wait though, and I’ll write another post at that time, so I don’t want to spend too much time on it here.
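As a quick aside, since the R1 and R2 devices present the same naa ID, you can confirm what an ESXi host actually sees from the command line.  A minimal sketch, assuming SSH access to the host; the naa value below is a made-up placeholder, not a real device:

```shell
# Show device details for a given LUN; in an SRDF Metro configuration the
# same naa ID is reported whether the path terminates at the R1 or R2 array.
# naa.60000970000123456789533030303233 is an illustrative placeholder only.
esxcli storage core device list -d naa.60000970000123456789533030303233
```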

This release does have a specific VMware feature in it, and that is support for XCOPY on SRDF.  As VAAI has been out a long time (since vSphere 4.1 and 5875 on the VMAX), let’s do a quick recap.  XCOPY was supported on SRDF devices from the start and straight on through the VMAX line.  What we discovered, however, was that because of our asynchronous implementation, it was possible for consistency groups to enter a “sync-in-progress” state, which was undesirable for some customers.  Therefore, in a 5876 Enginuity code release on the VMAX, we offered the ability to disable it, which meant VMware would do the copying.  This setting was made permanent on the VMAX3, so no user intervention was necessary.

Since disabling the functionality we knew we wanted to find a better solution that used XCOPY so we could offload the work to the array.  Fast forward to this service release, where we have re-enabled XCOPY on SRDF devices.  Well, we didn’t just re-enable it; we re-architected it first, then enabled the change.  The way it works is that existing XCOPY functionality for regular non-SRDF devices doesn’t change – it works in an asynchronous manner and frankly, depending on your ESXi version, toasts software copy.  When using XCOPY with SRDF devices, however, we now use a synchronous copy.  By its nature, this just can’t be as fast as asynchronous copy, and honestly that was never the goal.  We wanted to:

  1. Guarantee no consistency problems
  2. Offload the copying to free-up CPU, memory, and bandwidth

We accomplished both.  So you can expect performance to almost always be faster than VMware’s host copy, but closer to host-copy speed than to the time it takes to copy to a non-SRDF device.  Remember to use best practices too – for instance, on vSphere 6, if using thin vmdks, be sure to clone to a template first rather than converting to a template, to ensure better performance.  You can check out that stuff in the VAAI WP.
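If you want to confirm that XCOPY (hardware-accelerated move) is in play on a host, esxcli exposes both the host-wide setting and per-device VAAI status.  A quick sketch; the naa ID is an illustrative placeholder:

```shell
# Host-wide switch for the XCOPY primitive (Int Value: 1 = enabled, 0 = disabled)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

# Per-device VAAI status, including the Clone (XCOPY) primitive
# (naa.6000... is a made-up placeholder -- substitute your device's naa ID)
esxcli storage core device vaai status get -d naa.60000970000123456789533030303233
```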

And speaking of the WP, I have published an update with this information in it.  You’ll also find some basic SRDF Metro info in there in case you do RPQ it.  There are a few other updates in it – the UNMAP 5.5 P3 stuff I blogged about (UNMAP and the 1%), an array table code update to include this SR, and a new section on the ATS heartbeat, which I’ll summarize now.

In ESXi 5.5 U2, VMware made a change to the way they do heartbeating – the process that handles host coordination and liveness.  Before this ESXi version, VMware used SCSI calls for the heartbeat – basically pre-VAAI functionality.  Now in 5.5 U2+ they use ATS.  For the most part the change went unnoticed by customers; however, many array vendors, including us, have had a few customers experience errors where they lose datastore connectivity because of ATS.  It goes without saying this is bad.  VMware even wrote a KB on it (2113956).  For those customers that experience this, our recommendation, and VMware’s, is to disable the ATS heartbeat functionality.  Disabling the ATS heartbeat does not impact any other VAAI functionality, including ATS in general.  In fact, VMware says there is no performance impact in doing this (makes you wonder why they changed it in the first place).  There is more detail in the paper if you want to read about it.
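For reference, the setting in question is a per-host advanced option, toggled via esxcli as described in KB 2113956 (no reboot required):

```shell
# Check the current ATS heartbeat setting (Int Value: 1 = ATS heartbeat in use)
esxcli system settings advanced list -o /VMFS3/UseATSForHBOnVMFS5

# Disable the ATS heartbeat, reverting to SCSI-based heartbeating
# (per VMware KB 2113956; other VAAI/ATS functionality is unaffected)
esxcli system settings advanced set -i 0 -s /VMFS3/UseATSForHBOnVMFS5
```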

And lastly the elephant in the room (aka what isn’t in this SR)…

For those looking for VVol support…not yet.  If you read my blog you are aware we have it, and have announced its impending availability, but it is not in this code release.  The other software duo, not releasing for about another two weeks, is the SRA for SRDF and the SRDF-U (formerly VSI SRA-U) that support this SR.  These both tend to lag our major VMAX releases a little, for a number of reasons.  Note this SRA will not support SRDF Metro (stretched storage) at GA.  I’ll also blog on those when they GA; the SRDF-U in particular has some great stuff, having moved from the thick to the thin client.

