It’s been a while since my last post – I’ve been busy with Virtual Volume (VVol) testing and writing a new SRDF/Metro paper which, coincidentally (or perhaps obviously), is the subject of this post. The paper covers using SRDF/Metro with VMware Metro Storage Cluster (vMSC) and was just posted. You can find it here:
Let’s do a short rundown of the technology in the paper for anyone not familiar with the concepts.
* Note a special caveat to this post. If you have configured an SRDF/Metro uniform (cross-connect) configuration and are using the NMP Round Robin PSP (as opposed to PowerPath/VE), the IOPS setting should remain at the default of 1000 unless the two arrays are co-located (e.g. campus or closer). Normally we always recommend an IOPS setting of 1 for VMAX, but in a cross-connect scenario where the arrays are more than a stone’s throw apart, it can cause significant response time delays, depending on the distance between arrays. Again, if the arrays are quite close, then an IOPS setting of 1 is acceptable.
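For reference, the Round Robin IOPS limit is checked and set per device with esxcli. This is just a sketch of the standard NMP commands – the naa identifier below is a placeholder for your own SRDF/Metro device WWN:

```shell
# Check the current path selection policy and Round Robin settings for a device
# (the naa.6000097... identifier is a placeholder - substitute your device)
esxcli storage nmp device list -d naa.60000970000196700638533030334538

# Co-located arrays (campus or closer): IOPS of 1 is acceptable
esxcli storage nmp psp roundrobin deviceconfig set \
  --type=iops --iops=1 -d naa.60000970000196700638533030334538

# Cross-connect with real distance between arrays: leave the default of 1000
esxcli storage nmp psp roundrobin deviceconfig set \
  --type=iops --iops=1000 -d naa.60000970000196700638533030334538
```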
SRDF/Metro is a new feature available on the VMAX3 (Q3 2015 SR) which provides active/active access to the R1 and R2 of an SRDF configuration. In traditional SRDF, R1 devices are Read/Write accessible while R2 devices are Read Only/Write Disabled. In SRDF/Metro configurations both the R1 and R2 are Read/Write accessible. This is accomplished by having the R2 take on the personality of the R1 in terms of geometry and, most importantly, the WWN. By sharing a WWN, the R1 and R2 appear as a single shared virtual device across the two VMAX3 arrays for host presentation. Here is a simple diagram of the feature:
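For the curious, creating a Metro pair looks much like any other SRDF createpair operation, just with the -metro flag. This is only a sketch assuming Solutions Enabler 8.x syntax – the SID, RDF group, and device pairs file are placeholders for your own environment:

```shell
# pairs.txt maps R1 devices to R2 devices, one pair per line, e.g.:
#   00F1 00A1
#   00F2 00A2

# Create and establish the Metro pairs; once synchronized, the pair
# state reports ActiveActive (Witness) or ActiveBias (bias)
symrdf -sid 0123 -rdfg 10 -f pairs.txt createpair -type R1 -metro -establish

# Verify the pair state
symrdf -sid 0123 -rdfg 10 -f pairs.txt query
```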
Bias and Witness
SRDF/Metro maintains consistency between the R1 and R2 during normal operation. If, however, a device or devices go not ready (NR) or connectivity is lost between the arrays, SRDF/Metro selects one side of the environment as the “winner” and makes the other side inaccessible to the host(s). There are two ways that SRDF/Metro can determine a winner: bias or SRDF/Metro Witness (Witness). The bias or Witness prevents any data inconsistencies which might result from the two arrays being unable to communicate. In the initial release of SRDF/Metro, a Witness requires the use of a third VMAX or VMAX3 array with the proper code.
In my configuration I used SRDF/Metro with a Witness rather than relying solely on bias. A Witness ensures that if the R1 fails, the R2 will be declared the winner. In a strictly bias configuration, if the R1 fails, manual intervention is required since the R1 is always the winner. This is similar functionality to VPLEX Metro without a witness.
A VMware Metro Cluster
A VMware vSphere Metro Storage Cluster configuration is a VMware vSphere 5 or 6 certified solution that combines synchronous replication with array-based clustering. These solutions are typically deployed in environments where the distance between datacenters is limited, often metropolitan or campus environments. EMC SRDF/Metro represents one of those certified solutions.
A VMware vMSC requires what is in effect a single storage subsystem that spans both sites. In this design, a given datastore must be accessible (able to be read and written to) simultaneously from both sites. Furthermore, when problems occur, the ESXi hosts must be able to continue to access datastores from either array, transparently and without impact to ongoing storage operations.
There are two types of configurations available for vMSC: uniform and nonuniform. The simplest way to explain the difference is that in a uniform configuration all hosts have paths to both arrays, while in a nonuniform configuration they do not. My SRDF/Metro implementation is nonuniform and looks like this, but you can do uniform with an RPQ:
In order to demonstrate the functionality of both these technologies, I built an application environment consisting of Oracle Applications release 12 running on an Oracle Extended RAC 12c database. Oracle Apps is a great fit for Metro (I’ll use this term to mean SRDF/Metro with vMSC) because it is capable of a multi-tier configuration. You can set up multiple VM application servers and multiple VM database servers (RAC) and separate them across multiple ESXi hosts attached to two different VMAX3 arrays in a Metro cluster. The VMs can be moved seamlessly between hosts and across physical sites with vMotion without ever changing the datastores.
As for the general outline of the paper, I cover how to set up SRDF/Metro and then configure Oracle Apps/RAC. This is followed by the best practices for running an SRDF/Metro vMSC. I also included an Appendix with a discussion of VM mobility and SRDF/Metro maintenance procedures.
In order to use SRDF/Metro with VMware you are going to need an ePack on top of the 5977 Q3 2015 SR, so don’t try to use it “out of the box”. I believe at this time it still requires an RPQ, though from what I understand, sometime this quarter the ePack will become officially available to anyone who wants it. I do have one pre-GA product in the paper, and that is VSI 6.7. As it is a free plug-in I took some liberty in including it, mostly because the upcoming version finally has the host view of datastores I have been asking for since the Web Client came out. It is being released this quarter, so it is not too pre-GA. You can expect a more detailed post when it is released.
Finally, I’ll point out that the initial release of SRDF/Metro is meant to provide a highly available solution only. What I mean by this is that a Metro configuration cannot be combined with any other SRDF mode – e.g. cascaded, concurrent, Star, etc. Only two-site support is available – in essence a synchronous relationship that is now active/active. There is also no support for VMware SRM, because the current SRDF SRA 6.0 supports neither Metro nor SRM 6.1 (stretched storage). Fear not though, as that is coming soon, and I intend to expand this paper to include SRM with stretched storage at that time.
If you have any feedback, I am always interested in what customers find most/least useful and what else they would like to see included in the paper.