SRDF SRA 9.2 and the SRM appliance

IMPORTANT: This post covers the SRDF SRA GA version 9.2. Since its release there has been a hot fix for the Docker release only, which resolves some of the known issues I cover here. You can read about that here.

The SRDF Storage Replication Adapter (SRDF SRA) version 9.2 is now available for download. This release, as you might guess from the version, is associated with our PowerMaxOS and Solutions Enabler releases. In addition to the requisite support for the new PowerMax Q3 2020 (5978.669.669) release, the SRA has a couple of new features as well as new support.

But first an announcement… I’m going to put this in a special formatting and right up top since it is guaranteed to be the primary question I get asked.

The Dell EMC SRDF SRA 9.2 DOES NOT SUPPORT SRDF/Metro Smart DR (MetroDR). I do not have any roadmap information other than that support is not imminent, so using the MetroDR feature with SRM will not be possible with SRDF.

And moving on…

Let’s start with the new features of the SRA and a few limitations I want to call out (though please see the Release Notes for the big list).

SRDF SRA 9.2 New Features

New features and changes for 9.2 are as follows:

  • Support for the SRM Appliance – Photon OS
  • Support for freeing tracks for Automatic Target Devices.

SRDF SRA 9.2 Limitations

  • No support for a local Solutions Enabler implementation on the SRM Appliance; it must be a client/server setup. The Windows requirement does not change, so Windows can remain a consolidated implementation.
  • No support for FC-NVMe with the SRA as VMware does not support SRM with NVMeoF currently. As far as I know this is not on their roadmap yet.
  • No support for SRDF/Metro SmartDR (MetroDR).
  • No support for 3-site SRDF/Metro cascaded configurations – this limitation existed in 9.1 also and has to do with how the PowerMaxOS code chooses the R1. If you absolutely require a cascaded configuration there are limited options (meaning ways to configure it), but you would need an RPQ as we would have to qualify it and provide guardrails. Best to just go with concurrent.
  • If you are using authorizations in conjunction with the parameters FilterNonVmwareDevices or CheckForVirtualDisks, the authorizations will not persist through reloads of the SRA adapter in the SRM Appliance Management interface. In other words, if you add the authorizations and then make changes to the SRA files requiring an upload/reload, you will have to re-add the authorizations. There is currently no workaround to this and development is aware of the problem.

I’ll start by covering the appliance, then end with the new parameter for freeing tracks.

SRM 8.2 and 8.3

With this release we now support both the SRM 8.2 and 8.3 Appliance as well as Windows. As neither the Windows installation nor the upgrade process with the provided EXE file has changed, I will forgo covering those and focus on the new Appliance installation; however, be aware that VMware has said this is the terminal release for SRM on Windows. I know a lot of customers have been waiting for our support of the Appliance, but the development was challenging. We’ve had to restrict the Solutions Enabler implementation to a client/server model because we cannot run Solutions Enabler inside of the Docker container. With Windows you can install Solutions Enabler on the same host as SRM, which is generally the easiest configuration. With the Appliance you will have to use an external SE environment, be that on a supported OS or the SE vApp. You may do that already, so no big deal, but if you are using the consolidated model you’ll have to move away from it.

SRM Appliance

The SRM appliance runs on Photon OS, VMware’s own operating system. It’s a light Linux build that VMware uses for many of its software appliances. You may also recall that we use Photon with VSI now that we have moved to a container model for IAPI and the RedisDB. SRM follows that same formula, with VMware deploying the SRM container and then allowing storage vendors to write their own Docker containers for the SRA. And so in the SRA 9.2 we have a dockerized container model that allows us to run within the SRM Appliance. Unlike the Windows installation, the Appliance does not use an EXE or binary; rather, it accepts a tar file which contains our dockerized version of the SRA. The container model, like the Windows EXE model, means that you can deploy different types of SRAs in the same environment. I’ve had cause to deploy SRDF, VPLEX, and RecoverPoint SRAs all on the same Windows box. I could now do the same with the SRM Appliance (if the other arrays support it).

SRDF SRA – Appliance vs Windows

The first thing I want to do is dispel any idea you might have that the SRM Appliance brings about a significant change in how the SRDF SRA works. Porting over to the Appliance was much more about Solutions Enabler and its libraries, along with the SRA binary, than about the operation of the SRA. The XML files are still there, as they are how the SRA is able to support all those great features we have. You may find working with them in the Appliance easier or more difficult, depending on your point of view. In any case, let’s get into the details.

SRDF SRA Installation

For the purposes of this post, I’m assuming you have SRM installed and configured, save for the SRDF SRA and the array managers. If you are upgrading an existing environment on Windows, the VMware documentation provides the steps, which require exporting the configuration from the Windows environment and importing it into the new install. Note that while you can upgrade SRM, you can’t upgrade the SRDF SRA from Windows to the Appliance. After upgrading SRM you will then need to install the SRDF SRA as if it were a new install. Below you’ll see how you can update the XML files to reflect your previous Windows environment files.

Start by downloading the SRDF SRA. You want the SRDF Docker Adapter 9.2.0. Even though it says it is for the Windows OS, it isn’t, so fear not. It is distributed as a zip file.

The zip contains two files. The first is a tar file, which is the SRA itself. The second is a shell script, which needs to be run on the SRM Appliance prior to the SRA installation, though after SRM has been configured (i.e. the SRM server must be running). Let’s talk about that first.

Preliminary step BEFORE installing the SRA – IMPORTANT

In order for Solutions Enabler (SE) to have the proper hostname when the certificate is created during the SRA installation, a prerequisite script needs to be run. Docker containers have their own hostname, not that of the server on which they are deployed. If the SRA uses that generated hostname, the client SE will not be able to talk to the remote SE. We get around it with, well, a workaround.

After downloading the SRA zip file, extract the .tar file and the script that is there. The .tar file is the actual SRA, but the script is what we need first. Copy the shell script to the SRM server – /tmp is fine, or any other directory – and execute it as root (or with sudo). Note you cannot log in as root directly; log in as admin and then change user to root if not using sudo. Or you can do what I do and just copy the contents and paste them into a new file on the SRM server. Whatever is easier for you.

The script generates a text file that has the hostname of the SRM server in it and places this file in two directories, though in a brand new install the folder /tmp/vmware-root will not exist. The script hasn’t been updated to create it (that is in process), so before you run the script, check that the directory exists and, if it does not, create it. Then, when you install the SRA, SE takes the hostname from that text file and generates the proper certificate. Here is my example:
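As a minimal sketch of the pre-install steps (the hostname file name below is illustrative only – the shipped script decides the real file name and the second directory, so always run the actual script from the zip):

```shell
# Run on the SRM Appliance as root (or via sudo) BEFORE installing the SRA.
# A brand new install lacks /tmp/vmware-root and the script does not create it:
mkdir -p /tmp/vmware-root
# The vendor script essentially records the appliance's hostname in a text
# file for SE to read when generating the certificate (file name assumed):
hostname > /tmp/vmware-root/hostname.txt
cat /tmp/vmware-root/hostname.txt
```

The point of the sketch is simply why the directory must exist before the script runs, not a replacement for the script itself.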

If you fail to execute the script before installing the SRA, it will cause issues during communication between SE in the container and the remote SE. You’ll get errors like these in the SE logs when you try to connect to the array managers:

<Error> [6488 SESS 0003] Jan-27 23:46:13.923
: ANR0151E Common Name in client certificate not valid:
expected "", received "storsrvd"
<Error> [6488 SESS 0003] Jan-27 23:46:13.923
: ANR0155E Subject Alternative Names in the client
certificate not valid: expected "",
received "9bdc6c1454c2"

The easiest resolution is an uninstall of the SRA, run the script, then reinstall.

After reboot

In addition to running the script before the installation of the SRA, it must also be run after each reboot of the SRM appliance. Unfortunately this was not included in the Release Notes. It was only written into the comment section of the script itself where it says:

#This script will need to be re-run post every system reboot.

I’ve since updated the TechBook but the Release Notes still do not include the information (unfortunately I don’t control that document).

Obviously the prospect of having to remember to re-run the script after a reboot, as infrequently as that might occur, is something most customers would prefer to avoid. Development is working on a solution, but I have no insight into how long it might take. As the SRM appliance has an accessible operating system, the script can be automated to run upon reboot. If that is of interest, I wrote up one way of doing it here.
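For instance, since Photon OS uses systemd, one option is a small oneshot unit. Treat the unit name and the script path below as assumptions – the script lives wherever you copied it, and it should not be /tmp, which is cleared at boot:

```ini
[Unit]
Description=Re-run the SRDF SRA hostname script after each boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/root/sra-hostname-script.sh

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/sra-hostname.service and enable it with systemctl enable sra-hostname.service. Again, this is only a sketch of one approach, not an official mechanism.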


After the script is complete, install the SRDF SRA. Navigate to <SRM_Appliance_FQDN>:5480 and login as admin. Then select Storage Replication Adapters and NEW ADAPTER.

Select UPLOAD and choose the SRDF SRA adapter tar file previously downloaded.

When complete, it will look like this.

Here is a video of the process.

Modifying SRDF SRA files

I’m going to cover how to modify the SRDF SRA files, whether they are Solutions Enabler files or the XML files. I’m using two example files, one of which is preconfigured for you (daemon_users) and the other of which is optional (netcnfg).

Daemon_users and netcnfg files

These two files require changes for the Appliance. One is done for you, the other you must handle.

VMware SRM executes commands as the “srm” user on the Appliance. Because of this, it is necessary to give that user privileges on the Solutions Enabler daemons. This is controlled by the daemon_users file. Therefore this file has a default entry in it which should not be changed:

srm <all>

Windows doesn’t need this since the SRM service owner is the same as the one executing the commands.

The netcnfg file is where you tell the client Solutions Enabler (SE) the location of your server SE if you plan on running SE commands from within the Docker container. If so, this file should be changed on both the protection and recovery SRM with their respective server SEs. Note that if you plan on using multiple array managers with the SRA (e.g. say you have 4 arrays using the same SRM environment), you’d only be able to point to one of those SEs at a time. Generally, most customers are not going to need to modify the netcnfg.

I’m going to walk through how to update the file which will also serve as instruction on how to modify any file (think XML) that is part of the SRDF SRA.

Start by downloading the configuration archive from either the protection or recovery site. You do that by going back to the Storage Replication Adapters screen where you installed the SRA. Then select the three dots in the right-hand corner and select Download configuration archive.

Extract the archive file keeping the existing directory structure. Here is what the downloaded tar file looks like.

As the Appliance is Linux-based, it is best to extract and edit the files on a similar OS. If Windows is used, be sure there are no extra characters in the file when saving it. There is no syntax checking of the files when the archive is uploaded, and any incorrect information may cause the software to operate incorrectly or not at all.

Using an editor like VI open the netcnfg file which is located in the /symapi/config directory. Add a single line to the bottom which references the Solutions Enabler server for that site:
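The line follows the standard Solutions Enabler netcnfg syntax of service name, protocol, node name, IP, port, and security level. The host name and IP below are illustrative (2707 is the default storsrvd port; substitute your own SE server):

```
SYMAPI_SERVER - TCPIP se-server.example.com 10.0.0.50 2707 SECURE
```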


Be sure to do the same for the opposite site. When complete, tar and compress all the configuration files, maintaining the directory structure. Upload the configuration archive using the SRM Appliance for both the protection and recovery sites. Note that the archive configuration file need not have the same name as when downloaded.
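Here is a self-contained sketch of the re-archive step with a mocked-up layout (in practice the symapi/ tree comes from the downloaded archive, and the file names are yours):

```shell
# Mock up the extracted layout purely for demonstration:
mkdir -p demo/symapi/config
printf 'SYMAPI_SERVER - TCPIP se-server 10.0.0.50 2707 SECURE\n' > demo/symapi/config/netcnfg
cd demo
# Re-archive from the top level so the directory structure is preserved:
tar -czf ../sra-config-new.tar.gz symapi/
cd ..
# Confirm the archive keeps the expected relative paths:
tar -tzf sra-config-new.tar.gz
```

The listing should show symapi/config/netcnfg with no extra leading directories; anything else means the archive was built from the wrong level.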

Whenever SE files are changed (this does not apply to the XML files), the SRDF SRA must be reloaded so that the changes can be incorporated. Using the same SRM Management interface, run a Reload after the upload.

During a reload, the SE daemons will be restarted and use the new settings.

XML files

As you can see, to modify the XML files you follow the same process as above. Most customers will only need to modify the Global options file to use the auto device creation feature for testing, after which you will be good to go. If you are upgrading from Windows, you need to modify the Appliance XML files to match those from Windows. DO NOT copy the files directly from Windows, as the difference in operating systems is going to produce mangled syntax on the Appliance. I produced a video for this one.
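If you want to sanity-check hand-copied settings, one quick way (sketched here with throwaway file names and an illustrative option) is to diff the two copies after stripping the Windows carriage returns:

```shell
# Throwaway stand-ins for the Windows and Appliance copies of an options file:
printf '<TestFailoverForce>No</TestFailoverForce>\r\n' > windows_options.xml
printf '<TestFailoverForce>No</TestFailoverForce>\n'   > appliance_options.xml
# Strip the CR characters from the Windows copy before comparing:
diff <(sed 's/\r$//' windows_options.xml) appliance_options.xml && echo "settings match"
```

If the settings really are identical, diff is silent and you see "settings match"; any output above that line is a setting you missed.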

Access Docker Container

For those of you familiar with containers, it is possible to access the SRA container and modify the symapi and XML files directly, though we don’t recommend it since, among other issues, changes to the wrong files will not persist through a reboot. I do have instructions in the TechBook so I’m not going to cover it here. Fortunately, most customers use the automatic device creation for testing, so once the Global options file is edited, it usually does not have to be touched again. If you do go the manual route, be sure you modify the files in the /srm/sra directory and reload the adapter. If you modify the SE files directly in /opt/emc, they will not persist through a reboot.

And two more warnings…

Just going to say this again: you’ll need to be careful when exporting the configuration from the SRA and modifying the files if you are downloading the .tar.gz to a Windows box and plan on editing the files there. It is not recommended because most editors are going to put characters in the files that you do not want, and when you upload them back into SRM, that will break functionality. Best to use a Linux box and download directly to it, as I did in the video.

And when you re-tar/compress the files after modification, be sure you do so as I demonstrated in the video. Do not add any additional directories, files, etc., or when you try to upload the compressed tar you will see the following:

Install and Config Round-up

So that pretty much covers the important tasks when using the SRDF SRA Docker version. After you complete them, the SRM configuration is going to be the same as if you were using Windows. So now let’s talk about the other new feature.


Freeing tracks for Automatic Target Devices

This flag is used to free the allocated tracks of the Automatic Target Device(s) during an SRM cleanup operation. The flag is effective only when AutoTargetDeviceReuse is enabled, and the value options are Yes and No. By default, the value is set to No.
In the Global options file:
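The entries sit in EmcSrdfSraGlobalOptions.xml alongside the other global options. The new flag’s exact tag name isn’t given above, so the name in this sketch is a placeholder – confirm the real one against the Release Notes:

```xml
<!-- AutoTargetDeviceReuse must be enabled for the new flag to take effect.
     FreeAutoTargetDeviceTracks is a PLACEHOLDER name for illustration only. -->
<AutoTargetDeviceReuse>Yes</AutoTargetDeviceReuse>
<FreeAutoTargetDeviceTracks>Yes</FreeAutoTargetDeviceTracks>
```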


This feature was added per customer request. It is most useful when the following criteria are met:

  1. The test will add or change a significant amount of data to the VMs
  2. Multiple tests will be run

Although by default SnapVX is set to NOCOPY for these auto targets, once you start changing or adding data to the VMs, the allocation is going to grow. So if storage is at a premium, and let’s be honest, when isn’t it, using this parameter will return the tracks to the array in between tests. This is also where that other array feature, rapid TDEV deallocation, makes the process much quicker than it would normally be.

Let’s look at a quick example. I have a simple setup with one Windows VM, single disk, replicating on one datastore (FREE_TRACK_SRM) with SRDF/S.

I’ve set up my EmcSrdfSraGlobalOptions.xml file to enable the parameter:

I then run the test failover. Once complete, here is the current extent allocation of FC, which is the R2, and 10B which is the linked snap target. Note the current allocation of 10B at 81813 tracks.

I then went into the VM console and added a bunch of files to grow the extents to 84321 tracks.

Now I run the cleanup, which, due to the new parameter, will free the extents in 10B but not delete the device. In order to do this, the auto target devices are first removed from the storage group. If I check on the extents during the cleanup, I can watch them go down.

In addition, the SRA log file will include entries which show the deallocation.

Note that because the device is tied to the R2 (the link is preserved), you cannot deallocate all the tracks. Once all tracks that can be deallocated are returned to the SRP, the devices will be placed back in their storage group before completing the Cleanup.

Final Thoughts

So I’ll finish this off with some thoughts.

  • Use the TechBook as I obviously can’t include all the detail in a blog post. Chapter 2 is the most important and the one to read, so don’t be daunted by the size of the book. Most stuff in there you don’t need.
  • VMware is going to drop SRM Windows so best to upgrade as soon as you can now that the SRA is available. Yes, there is a learning curve there, particularly if you are not familiar with Docker or containers; however you can accomplish what you need to using the GUI interface and text editors so I think you can handle it (if you needed a push).
  • You may have heard we now support vVols with SRM. vVols with SRM does not require this SRDF SRA as it uses VASA instead. There is absolutely no issue with vVols and VMFS/RDMs co-existing in SRM, but if you use vVols and VMFS, you’ll need the SRDF SRA. VASA has no association with VMFS/RDMs. For some reason even our internal guys are confused about this but our customers are more savvy of course so you get it.
  • I continue to try to get our management team to buy into a GUI interface to modify the SRA files. It would be particularly helpful for the Appliance interface. I’ll keep you informed.
  • Thanks for your patience. I know the Appliance support took longer than you (and we) wanted.

21 thoughts on “SRDF SRA 9.2 and the SRM appliance”


  1. I thought you had said in a previous post that you were moving toward removing the dependencies on the XML files? Based on this post, it sounds to me like you took something that was already complicated (XML requirements on SRM for Windows) and made it even more complicated (XML requirements on Linux). I can’t tell you how disappointed I am to hear the XML files are still required.

    1. I think the most I’ve written is that certain features like auto device creation gets us closer to less reliance on the XML files, which it does since you don’t have to modify the test failover file, and once you set the global option you don’t have to make further changes. And hopefully future capabilities will drive more functionality out of the XML files, but eradication of them completely (or files like them) is unlikely given all the capabilities we must support with SRDF. My push with development for years now has been to provide a front-end interface to modify the parameters so that customers would never have to manipulate files manually on the OS (and be prevented from doing so). Then whether we use XML or not is irrelevant, and for all intents and purposes the XML files are gone. But yes, supporting the Appliance was the goal of development for this release, not changing the underlying foundation of the SRA.

  2. Is there any way to completely delete the snapshot when you run the cleanup? It will continue to grow in space as long as the R2 devices are changing. I’m worried about the pool capacity when the snapshots grow big.

  3. Hi Drew and thank you for your great job on this blog! Just a couple of questions: is this sentence still true even in the latest releases? “#This script will need to be re-run post every system reboot”. And are there some GUI improvements in the latest releases?
    Thank you again!

  4. Drew, I’m configuring the Solutions Enabler Virtual Appliance. Gatekeeper RDM disks are mapped on the vSphere side, and on the vApp Manager Gatekeeper page. Unfortunately, on that page, I can only configure a single ESXi host. If my appliance moves away from that host through vMotion/HA, it loses the GK devices attached in the list. Is this the expected behavior? Does it work anyway? Or do I need to create an affinity rule? I’ve not found an answer in your TechBook.

    1. It should not be an issue if it vMotions to another ESXi host – it would not be possible if the RDMs were not presented to the other hosts. If you wanted to use the GK page in the appliance you would now have to add the new ESXi host instead of the original, but generally once you’ve added the RDMs the first time, you never go back to the page which is probably why I never mentioned it.

      1. I want to give an important update – I think it should be mentioned in the official documentation and maybe here. The script copies the “hostname” file into the first Docker volume only, so if you have more than one SRA installed, it doesn’t do what is needed and you have to manually move the file into the right Docker volume. In my case the SRDF SRA was the second one. I spent a lot of time finding the problem.

  5. Great article especially for me switching from the Windows based SRM server to the Photon OS based appliance. Just curious if you have any news on the new EMC SRDF Version 10 SRA? I’m just in the process of setting up SRM 8.5 and I see the V10 SRA is now supported as well as the 9.2 SRA.
    Thoughts on what version to run with? Also since I now need a Solutions Enabler install outside of SRM will there be SE vApp Version 10 available? Thank you.

    1. Hi James,

      If you are running the current PowerMax 2000/8000 or older arrays, I’d stick with 9.2 (the bug fix version is what you want). I have a post on SRA 10 here if you want to get an overview. It mentions there is no vApp for Solutions Enabler in version 10 as it has been deprecated. So if you were using a consolidated install with Windows (client/server on the same host) and wanted to use SRA 10, you would need to set up a separate SE physical or virtual server, or you could use that Windows box as the SE server (see here). But as I said, I’d stick with 9.2.

  6. That is what I needed, Drew. I was thinking about still using the SE that is running on the Windows SRM server, but it is running the Windows 2012 OS and I want to retire it. So even though the new SRM 8.5 appliances are running Photon OS, I can point them at a Windows server that has SE installed on it? Thanks for your help!!
