5977 2016 Q3 SR – Compression, SRDF/Metro et al

Today marks the posting of our Q3 5977 (5977.945.890) release for the VMAX3/VMAX All Flash platforms. Technically most of the host software was available last Friday (e.g. Solutions Enabler 8.3, Unisphere for VMAX 8.3), but the HYPERMAX OS was made available today. The feature that has received, and will continue to receive, the most attention is compression on VMAX All Flash (VMAX3 does not support compression). Now for those of you who remember compression on the VMAX – or thin device compression as it was known – well, this isn’t that. This is true inline compression driven by the hardware SLICs (with software taking over if the hardware fails), meaning data is compressed as it is written to the flash drives. Unlike the VMAX implementation, individual devices cannot be compressed; rather, compression is enabled at the storage group level for all devices within the group. If compression is enabled on a storage group that already has data, that data is compressed by a background process. If compression is disabled on a storage group, newly written data is not compressed and the existing data is slowly decompressed in the background.
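To make those storage-group semantics concrete, here is a toy Python model – purely illustrative, not how HYPERMAX OS is implemented. New writes follow the group's current compression setting, while a background pass slowly converges existing data to match it:

```python
class StorageGroup:
    """Toy model of storage-group-level compression semantics."""

    def __init__(self, name, compression=True):
        self.name = name
        self.compression = compression  # on by default, as on the array
        self.extents = []  # each extent: {"data": ..., "compressed": bool}

    def write(self, data):
        # New writes are compressed inline only if the group has compression on.
        self.extents.append({"data": data, "compressed": self.compression})

    def set_compression(self, enabled):
        # Changing the setting affects new writes immediately;
        # existing extents are converted later by the background pass.
        self.compression = enabled

    def background_pass(self):
        # The background task converges existing extents to the group setting
        # (compressing after enable, decompressing after disable).
        for ext in self.extents:
            ext["compressed"] = self.compression
```

The point of the model is simply that the policy lives on the group, not on individual devices, and that existing data catches up in the background rather than instantly.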

Here’s a simplistic example which combines virtual provisioning with compression, showing the two levels of benefit. In VMware environments (though not exclusively, of course) it is common practice to provision a device from the VMAX All Flash that is larger than current growth requires. This is what we call over-provisioning. In this case I probably need 1 TB over the next 6 months, but I ask for 2 TB to cover future growth. There’s no penalty since I don’t actually allocate the storage on the array. That device is placed in a storage group which has compression enabled. On the array I start with 1 TB of physical storage since I know that is all I need for the 6 months. Fast forward a year and I have written all 1 TB of my data; however, since my data is compressed inline as it is written to the flash drives, my actual usage on the backend is only 500 GB. I’ve saved 500 GB which I can now use over the next year for the remaining 1 TB of that VMware datastore.
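The arithmetic behind that example is simple enough to sketch. This hypothetical Python snippet just restates the numbers above, assuming the 2:1 compression ratio from the example (your ratio will vary):

```python
def backend_usage_gb(written_gb, compression_ratio):
    """Physical flash consumed for a given amount of host-written data."""
    return written_gb / compression_ratio

provisioned_gb = 2000  # thin 2 TB device presented to the host (no cost up front)
written_gb = 1000      # host has written 1 TB after a year
used_gb = backend_usage_gb(written_gb, 2.0)  # assumed 2:1 inline compression
saved_gb = written_gb - used_gb

print(used_gb, saved_gb)  # 500 GB consumed on the backend, 500 GB saved
```

The two levels of benefit are visible here: thin provisioning means the 2 TB device costs nothing until written, and compression means even the written 1 TB only consumes 500 GB of flash.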

[Figure: virtual provisioning with compression example]

This compression example is fairly common – i.e. achieving 2:1. In my environment I actually got 3:1, but each application will be different. There are metrics you’ll be able to look at to determine the benefit; I point some of these out in the updated TechBook. I’ll anticipate the next question you might ask…the performance price to pay. You can’t get something for nothing, and so on. Since we use hardware, our compression is certainly efficient and we do as much as we can in the background, but the real key is that we keep a percentage of your most accessed, heavily used data uncompressed. We call this activity-based compression (ABC). If you are familiar with the concept of FAST, where on a VMAX3 we move the most heavily used data to flash drives, this is kind of like that. This serves to limit any performance overhead from compression and hopefully results in no noticeable performance difference. If, however, there is an application or some data you just don’t want compressed, that’s easy enough to deal with. Since compression is not array-wide but set at the storage group level, you can simply uncheck the compression box (it’s on by default) or disable it after the fact, and over time we’ll get it decompressed.
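The core idea of ABC – leave the busiest data uncompressed – can be sketched as a toy policy. The `skip_fraction` parameter and the per-extent access counts here are my own illustrative assumptions; the array's actual heuristics are internal:

```python
def plan_uncompressed(extents, skip_fraction=0.2):
    """Toy activity-based compression policy.

    extents: dict mapping extent id -> access count.
    Returns the set of extent ids to leave uncompressed: the busiest
    skip_fraction of the extents, so hot data avoids any compression cost.
    """
    n_skip = int(len(extents) * skip_fraction)
    busiest = sorted(extents, key=extents.get, reverse=True)[:n_skip]
    return set(busiest)
```

Everything outside the returned set would be compressed as usual; the hot set rides uncompressed, which is what limits the performance overhead.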

Now obviously a business is not going to run storage as tight as my image above demonstrates. My example is simply to show the space savings available with compression, which you’ll get across the box no matter how much storage you have. Do more with less, and over time it means you buy less disk. What?! Dell EMC wants me to buy less disk? Well, in truth what compression does is make the VMAX All Flash array an all the more compelling buy by reducing the overall cost: get the performance benefit of all-flash disk at a cost more comparable to hybrid arrays. By the way, compression is supported with all our data services and other features like VAAI. Alright, enough on compression. Surely this release has more to offer that might be of interest to us VMware people – and right you are.

This release rounds out the VAAI support for SRDF/Metro devices with Full Copy, or XCOPY. The implementation for XCOPY is the same as for SRDF/S – synchronous XCOPY. Remember that with XCOPY we offload the copy process to the array. While this is generally faster than a host software copy, the real reason we do it is to avoid memory/CPU consumption on the host. So the whole array is now VAAI-enabled.

We also have a couple of other new features for SRDF/Metro. The first is support for asynchronous 3-site, non-Star configurations. This means you can run cascaded or concurrent environments off of the SRDF/Metro setup and achieve true DR (as I have said, SRDF/Metro is really HA given the limited distance). We support sites off both sides, too, which is great because no matter which SRDF/Metro side fails you have a backup.

The second feature is support for a virtual witness. As you may recall, SRDF/Metro defaults to a bias configuration, meaning in the event of a failure one side is always declared the winner. That doesn’t help much if the side that fails is the bias side. So we also offer a physical witness to pick the winner. This is better, but it requires a third VMAX array, and not everyone has one of those lying around. So in this new release we now have a virtual witness. The vWitness is a vApp (it can be an 8.3 Solutions Enabler or Unisphere for VMAX vApp) which runs a special daemon (storvwlsd) called the Witness Lock Service. This daemon communicates with the Witness Manager daemon (storvwmd), which runs on the array within the eManagement GuestOS. Yes, there is a catch with the virtual witness: you must have eManagement configured on the array. The good news on that front is that if you don’t have it already, this HYPERMAX OS release supports adding it on-the-fly. It’s a quick, easy process for Dell EMC to put it on your box.
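The difference between bias and witness behavior is easiest to see as a decision function. This is a deliberately simplified Python sketch of the concept described above (side numbering and return values are my own illustration, not the array's logic):

```python
def surviving_side(failed_side, bias_side=1, witness_available=False):
    """Which SRDF/Metro side keeps servicing I/O after one side fails.

    failed_side: 1 or 2, the side that went down.
    Without a witness, the pre-declared bias side always "wins" -- which
    is no help at all if the bias side is the one that failed.
    With a witness (physical or virtual), the healthy side is declared
    the winner regardless of bias.
    """
    healthy = 2 if failed_side == 1 else 1
    if witness_available:
        return healthy        # witness arbitrates: the survivor wins
    if bias_side == failed_side:
        return None           # bias side died: no winner, access is lost
    return bias_side
```

The `None` case is exactly the gap the witness closes: with bias alone, losing the bias side loses everything.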

The vWitness can be created through the CLI or Unisphere for VMAX – here’s a walkthrough using the 8.3 GUI:

[Figure: vWitness creation walkthrough in Unisphere for VMAX 8.3]

You can use multiple virtual and physical witnesses for redundancy. Physical witnesses will be used first, as by design they are more resilient – after all, how often does an array go down?
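That preference ordering amounts to a simple selection rule. A hypothetical sketch (the tuple shape is my own; the real arrays track witnesses internally):

```python
def choose_witness(witnesses):
    """Pick the witness to use from a redundant pool.

    witnesses: list of (name, kind) tuples, kind is "physical" or "virtual".
    Physical witnesses are preferred because an array is more resilient
    than a vApp; virtual witnesses serve as fallback.
    """
    ordered = sorted(witnesses, key=lambda w: 0 if w[1] == "physical" else 1)
    return ordered[0][0] if ordered else None
```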

There are lots of other features in the Q3 SR which you can check out in the Release Notes including:

  • A new all flash platform, the 250F/FX
  • Adding eNAS online
  • The ability to do non-disruptive migration (NDM) from VMAX to VMAX3/VMAX All Flash.

I’ll end here with a couple other items.

  • I’ve updated three documents so far for the release – the VMAX/VMware TechBook, the VAAI whitepaper, and the VVol whitepaper. For VVols, it was a fairly minor update. In addition to version changes for SE, Unisphere for VMAX, and the VASA Provider to 8.3, I added some more troubleshooting steps in the Appendix. Also, on the VMAX All Flash you can set your storage resources to use compression (the default). Note that once you enable compression you cannot turn it off; you would have to either delete the storage resource, or create another one with compression off and migrate your VVols there.
    • There is a bug that popped up with the new Q3 SR release that impacts VVols: extending a VVol may result in a PDL (permanent device loss). Recall that VMware cannot handle PDL for VVols, so if this happens you will probably have to reboot the host. The fix for this is 91651, which must be requested via an SR.
  • The newest SRDF SRA for SRM which supports this Q3 SR is now available. Read about it here.

 

