Dell EMC CSM with replication

Dell EMC has created a GitHub repository for Container Storage Modules, or CSM. Dell EMC writes that “the Dell Container Storage Modules (CSM) enables [sic] simple and consistent integration and automation experiences, extending enterprise storage capabilities to Kubernetes for cloud-native stateful applications.” Quite a mouthful. To de-marketize the statement, CSM sits on top of other integrations like the PowerMax CSI driver and orchestrates higher functions like replication. The idea is to reduce complexity for developers by providing a common interface across platforms. The commands you run with CSM, therefore, are the same regardless of the underlying storage. Not everything can be translated, however, so yaml files and their parameters will be unique to the array; but a replication failover using the repctl binary (discussed below) will look the same on the PowerMax or the PowerStore. The initial CSM release supports four functions: Observability, Authorization, Resiliency, and Replication. Below is a representation of the CSM/CSI relationship.

The PowerMax supports two of the capabilities: authorization and replication. As an example, I’m going to show replication, as the authorization module is not my bailiwick.

A quick sidebar: if you already have the CSI driver installed, you’ll need to upgrade from v1.7 to v2.0 to use CSM. The CSM installation will walk you through the CSI install as part of the process, but there is no issue if the driver is already in place. I upgraded mine to v2.0 before I even began the CSM installation.

Installation

To install CSM, first add the Helm chart repository. Be sure you are using Helm 3, not Helm 2.

helm repo add dell https://dell.github.io/helm-charts
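If you had already added the Dell repo at some point, refresh the local chart cache so the latest chart versions are available:

helm repo update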

If you want to secure the service and database, follow the necessary steps in the documentation at the link above. In my test environment I don’t have a need for that, so we press on. The next step is to create a values.yaml file which contains the following information:

# string of any length 
jwtKey: 

# string of exactly 32 characters
cipherKey: "" 

# Admin username of CSM Installer
adminUserName: 

# Admin password of CSM Installer
adminPassword:

The documentation is a bit lacking here, and perhaps there is some Kubernetes knowledge I just don’t possess, but I was a little confused at first. The descriptions of what you need to supply are accurate enough. For jwtKey, put in any word, really. For cipherKey, put exactly 32 characters (letters, numbers, symbols) between the quotes; again, anything. For the username and password, whatever you want also, with no quotes. So my file is:

# string of any length
jwtKey: key

# string of exactly 32 characters
cipherKey: "aasdfgafhgshsffadgshsdffgsdggggg"

# Admin username of CSM Installer
adminUserName: admin

# Admin password of CSM Installer
adminPassword: admin
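As an aside, rather than mashing the keyboard for 32 characters, you can generate the cipherKey value with openssl. This is just a convenience of mine, not something the Dell docs call for:

# 16 random bytes rendered as exactly 32 hex characters
openssl rand -hex 16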

There are other configuration options you can change, but generally you shouldn’t need to. The install command, however, does pass two of these parameters (scheme, dbSSLEnabled) which could easily have been added to the values.yaml but weren’t. I’m not sure why, but I followed the directions and ran:

helm install -n csm-installer --create-namespace \
--set-string scheme=http \
--set-string dbSSLEnabled="false" \
-f values.yaml \
csm-installer dell/csm-installer 

You should see a response like:

NAME: csm-installer
LAST DEPLOYED: Tue Nov 16 10:15:21 2021
NAMESPACE: csm-installer
STATUS: deployed
REVISION: 1
TEST SUITE: None

And if you look at the csm-installer namespace, you should see the deployed pods.
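A quick way to look is with kubectl (the pod names will vary by release):

kubectl get pods -n csm-installer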

From this point you could authenticate using the CSM CLI and then deploy the CSI driver if you have not already. But as mine is installed, and I want to show you replication, I’m going to discuss that module next.

Replication

With PowerMax the CSM module initiates SRDF commands, but remember the commands are universal across all supported platforms. For example, both PowerMax and PowerStore are supported, so while the command coming from the module is the same, each array runs its own native equivalent, as shown in the table below. The first three rows cover failover workflows, the bottom three maintenance activities. CSM supports ASYNC, SYNC, and METRO modes.

For more complex actions (e.g. split) you would have to use the traditional interfaces like Unisphere or Solutions Enabler.

PowerMax arrays in values.yaml

Before beginning the installation, if you have not done so already, be sure your remote array is included in the values.yaml file provided during the CSI driver installation. If only the R1 is available, the RDF state for the R2 will be UNKNOWN and you will have problems deleting replication groups. The relevant section of the yaml is below, where you can see my R1 and R2. If you do not have the remote array listed, add it and then run an upgrade; I’ve included that command.

##########################
# PLATFORM ATTRIBUTES
##########################
# Serial ID of the arrays that will be used for provisioning
# Default value: None
# Examples: "000000000001", "000000000002"
storageArrays:
- storageArrayId: "000197601879"
- storageArrayId: "000197601883"

./csi-install.sh --namespace powermax --values values.yaml --upgrade

Installation

There are two ways to install the CSM Replication Controller (dell-replication-controller), which is the underlying container that drives the capability. The first is a new binary developed for the CSM module called repctl; this is the recommended method. The second is an installation script. I’ve done it both ways, but the reality is that you will want repctl to run commands even after the installation, so it’s easiest to use it from the start. The installation instructions are easy enough.

1. Clone the GitHub repository:
   git clone https://github.com/dell/csm-replication
2. Download the repctl binary from https://github.com/dell/csm-replication/releases and place it in your path.
3. Add your clusters using repctl. The syntax for multiple clusters is like so:
   repctl cluster add -f "/root/.kube/config-1","/root/.kube/config-2" -n "cluster-1","cluster-2"
   For my environment, I only use a single cluster (yes, you can still use replication), so my syntax was:
   repctl cluster add -f "/root/.kube/config" -n "cluster-1"
   Note the cluster name can be anything, as can your config file name.
4. Install the replication controller and CRDs. In the example here I cloned the repository in step 1 directly into /; your path may be different.
   repctl create -f /csm-replication/deploy/replicationcrds.all.yaml
   repctl create -f /csm-replication/deploy/controller.yaml
5. Inject either the service accounts’ config or the admin config into the clusters. Run only one, depending on the level of security you want (the first being the more secure):
   repctl cluster inject --use-sa
   repctl cluster inject
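Before moving on, it doesn’t hurt to verify the install took. The CRDs are standard Kubernetes objects, so kubectl can check them anywhere (the grep pattern is my assumption of what the CRD names contain, so adjust as needed):

kubectl get crds | grep replication.storage.dell.com

# the namespace name here follows the controller name; confirm against controller.yaml
kubectl get pods -n dell-replication-controller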

At this point you’re ready to create storage classes that will replicate PVCs for you.

Storage Classes

The format of the yaml file for storage classes will be familiar to those already using the PowerMax CSI driver. I’ve included the example provided in the csm-replication/repctl/examples directory:

targetClusterID: "target"
sourceClusterID: "source"
name: "powermax-replication"
driver: "powermax"
reclaimPolicy: "Retain"
replicationPrefix: "replication.storage.dell.com"
parameters:
  rdfMode: "ASYNC"
  srp:
    source: "SRP_1"
    target: "SRP_1"
  symID:
    source: "000000000001"
    target: "000000000002"
  serviceLevel:
    source: "Bronze"
    target: "Bronze"
  rdfGroup:
    source: "5"
    target: "5"

Since we are using the REST API you might expect that we would have access to the more advanced workflows for creating RDF pairs, but alas, no. As you can see at the bottom of the file, we must provide the RDF group, so you have some preliminary work to do first, either with Solutions Enabler or Unisphere. If you are using ASYNC or METRO, remember that all devices in the RDF group act as a unit, whereas with SYNC you could use a single RDF group for all applications (though I would not recommend it). Once you know the groups on each array, fill in the detail. My file is below. Recall that I am using a single cluster, so my source and target are the same. You can also simply list the source and target as “self”.

targetClusterID: "cluster-1"
sourceClusterID: "cluster-1"
name: "powermax-replication"
driver: "powermax"
reclaimPolicy: "Delete"
replicationPrefix: "replication.storage.dell.com"
parameters:
  rdfMode: "ASYNC"
  srp:
    source: "SRP_1"
    target: "SRP_1"
  symID:
    source: "000197601879"
    target: "000197601883"
  serviceLevel:
    source: "Bronze"
    target: "Bronze"
  rdfGroup:
    source: "12"
    target: "12"

Run the creation of the storage classes (source and target). Helpfully, repctl will echo back the parameters in a format you could use in a yaml with kubectl. Note how the cluster name is automatically converted to “self”.
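For the record, the create call I used was along these lines; the --from-config flag is from the repctl examples as I remember them, so verify with repctl create -h on your release (the file name is simply what I saved my config as):

repctl create sc --from-config powermax-replication.yaml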

We can list the classes now.
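Since the storage classes are ordinary Kubernetes objects at this point, plain kubectl shows them:

kubectl get storageclass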

Volumes and Replication Group

With classes in place, I created the yaml file below with the volume definition. I’m requesting a 10GB device using the storage class powermax-replication from above. Using the REST API, this will create the replicated pair between arrays 879 and 883 in RDFG 12.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rep-rdfg12
  namespace: powermax
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: powermax-replication

So using kubectl create, we pass the file, and the module and driver do the rest. I’ve included the describe command, which shows the remote device, 8E.
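Concretely, assuming the claim above was saved as rep-rdfg12.yaml (the file name is my choice, not prescribed):

# create the claim; the module and driver build the RDF pair behind the scenes
kubectl create -f rep-rdfg12.yaml

# describe the claim to see the replication details, including the remote volume
kubectl describe pvc rep-rdfg12 -n powermax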

You can see the device pair in Unisphere.

Now list the volumes and replication groups (local and remote) with repctl. The pair shows a SYNCHRONIZED state, which for the PowerMax running SRDF/A corresponds to Consistent.
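I’m citing the repctl listing syntax from memory here, so treat it as an assumption and check repctl get -h on your release; the general shape is repctl’s get verb against the replication-group and volume resources:

repctl get rg
repctl get pv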

Actions

Finally, let’s initiate one of the replication module actions. I’ll issue a suspend, being the least disruptive, followed by a resume. Although there are ways to do this with kubectl, it is far easier to execute these commands with repctl. Note that during the suspend, you can see the state change to SUSPEND_IN_PROGRESS. The state does not update, however, when resuming.
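Again from memory rather than the docs in front of me, the action invocations look something like the following; the exec verb, the -a flag, and the action names are assumptions to verify against the help output before relying on them:

# suspend replication for the group (action name is my assumption)
repctl --rg <rg-name> exec -a suspend

# resume replication
repctl --rg <rg-name> exec -a resume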

I think we’ll wrap it up for now. Lots more to test.
