PowerMax CSI v2.4 is now available on GitHub and can be found here. This CSI driver, developed by Dell, enables provisioning of PowerMax volumes in kubernetes environments. While it is typically used in physical environments where both FC and iSCSI are common, it can also be used in VMware environments when the VM has direct access to the array, either via iSCSI (common) or pass-through FC (uncommon).
PowerMax CSI vs VMware CSI (aka CNS)
In speaking with customers, our CSI is frequently confused with what VMware offers in this space, whether through vanilla kubernetes or Tanzu. I’ll see if I can boil it down and make it easy to understand the difference before getting into our new CSI version.
VMware’s CSI, which is now called the vSphere Container Storage Plug-in, used to be Cloud Native Storage (CNS). I wrote a post on it a couple of years ago here. The easiest way to think about the difference between our CSI and VMware’s is that VMware’s CSI uses Storage Policies (SPBM) to create vmdks (VMFS, NFS, vVols) for kubernetes. This is the same type of vmdk you would create with a new VM or by editing a VM to add a hard disk; but vmdks created by VMware’s CSI are not tied to a particular VM (though they can be). VMware, therefore, calls them First Class Disks (FCDs). The PowerMax CSI, on the other hand, creates devices directly on the PowerMax. It is not a vmdk, just a regular device, which can then be consumed by kubernetes. It has nothing to do with VMware. I’ve included a screenshot of my kubernetes environment, which is running on a VM with a direct iSCSI connection to the PowerMax array. In the green box are the PowerMax CSI pods, while in the blue box is the VMware CSI. Yes, there’s no issue running them at the same time as they do different things.
I think the other important distinction to make is that when you read about any of VMware’s forays into kubernetes – whether they speak of Tanzu, kubernetes with vSphere, microservices (a fancy name for containers), VMware Cloud, etc., it is always in association with the vSphere Container Storage Plug-in, and not other CSIs. The integration of VMware components, particularly in Lifecycle Management (vLCM), requires the use of FCDs. Storage vendor CSIs like ours are outside VMware’s control and thus do not fit well in their model. Why use ours with VMware then, you might ask? Because you can take advantage of array features like replication and snapshots at the device level. With FCDs, the vmdks are in datastores, so any replication or snapshotting is done at the datastore level, which could contain many FCDs serving different purposes. Therefore you may be replicating or snapshotting a lot more disks (vmdks) than you really want to. But I want to be clear that you cannot substitute our CSI for VMware’s in their solutions like Tanzu.
OK, with that behind us, let’s look at the new PowerMax CSI release.
PowerMax CSI v2.4
New Features
The following are the new features for version 2.4. I’ll cover the first three, but as I don’t see PowerPath used on VMs much, I’ll just note that it is now available with the CSI if you want to use it.
- Online volume expansion for replicated volumes.
- Added 10.0 Unisphere REST endpoints support & removed 9.x support.
- Automatic SRDF group creation for PowerMax arrays (PowerMaxOS 10 and above).
- Added PowerPath support.
The most important item above concerns REST support. Version 2.4 only works with REST 10; all previous endpoints, i.e. the 9.x ones, have been removed. REST 10 is part of the latest Unisphere for PowerMax release, 10.0. Therefore, if you upgrade your PowerMax CSI, you will need to either upgrade your Unisphere or point the values.yaml to a new server. When you install or upgrade you will be warned about this, though of course it will not prevent you from installing.
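As an illustration of that second option, the Unisphere server the driver talks to is defined in the values.yaml used for the Helm install. The snippet below is only a sketch: the managementServers/endpoint layout and key names are assumptions based on the 2.x chart, so verify them against the values.yaml that ships with v2.4.

# Sketch: pointing the driver at a Unisphere for PowerMax 10.0 server
# (key names are assumptions - check the v2.4 values.yaml for the exact layout)
global:
  managementServers:
    - endpoint: "https://new-unisphere.example.com:8443"   # hypothetical Unisphere 10.0 server
      skipCertificateValidation: true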
SRDF
For SRDF there are two new features, though one is dependent on the type of array.
Automated SRDF group
The first is automated SRDF group creation. To use this feature, however, you need the newest of our arrays, the PowerMax 2500 or 8500. Normally, when you create a storage class for replicated volumes with repctl, the config file looks like this:
targetClusterID: "target"
sourceClusterID: "source"
name: "powermax-replication"
driver: "powermax"
reclaimPolicy: "Retain"
replicationPrefix: "replication.storage.dell.com"
remoteRetentionPolicy:
  RG: "Retain"
  PV: "Retain"
parameters:
  rdfMode: "ASYNC"
  srp:
    source: "SRP_1"
    target: "SRP_1"
  symID:
    source: "000000000001"
    target: "000000000002"
  serviceLevel:
    source: "Bronze"
    target: "Bronze"
  rdfGroup:
    source: "5"
    target: "5"
With v2.4, if you’re running on PowerMaxOS 10 (i.e., the new arrays), you can remove the section referring to the rdfGroup. Here is an example of created storage classes – the red box shows one with the group defined, while the blue box shows one with it left undefined, which is only possible in v2.4.
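For reference, the same repctl config as above with the rdfGroup section omitted would look like this (again, only valid against PowerMaxOS 10 arrays):

targetClusterID: "target"
sourceClusterID: "source"
name: "powermax-replication"
driver: "powermax"
reclaimPolicy: "Retain"
replicationPrefix: "replication.storage.dell.com"
remoteRetentionPolicy:
  RG: "Retain"
  PV: "Retain"
parameters:
  rdfMode: "ASYNC"
  srp:
    source: "SRP_1"
    target: "SRP_1"
  symID:
    source: "000000000001"
    target: "000000000002"
  serviceLevel:
    source: "Bronze"
    target: "Bronze"
  # rdfGroup omitted - the driver creates the SRDF group automatically on PowerMaxOS 10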
If we create a replicated volume using this storage class, the CSI driver will create the SRDF group for us and then the SRDF pair. Here is the CLI output showing the new group:
And Unisphere 10:
Note that with ASYNC there is a restriction: if you leave the RDF group empty, the namespace cannot be more than 7 characters. If it is longer, you must define the RDF group in the storage class.
SRDF volume expansion
At this point, we can move on to the second SRDF feature, expansion of a replicated device. In order to use this feature, you must have set the resizer to enabled (“true”) in the values.yaml (if you didn’t, you can set it and run an upgrade), and the storage class must have the flag allowVolumeExpansion set to true. The other important item of note about volume expansion with our CSI driver is that the repctl utility does not support the parameter allowVolumeExpansion. Unfortunately, repctl ignores the parameter if it is in the file, so it looks like it works. But if you check (kubectl get sc), the column will be false:
So instead use kubectl create:
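To give a rough idea of what that looks like, here is a minimal sketch of a storage class with allowVolumeExpansion set, which you would feed to kubectl create -f. The provisioner name and the SRP/SYMID/ServiceLevel parameter keys follow the driver’s storage class samples, but the values are placeholders, and the replication parameters that repctl would normally generate are omitted for brevity:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: srdf-async-src
provisioner: csi-powermax.dellemc.com
reclaimPolicy: Retain
allowVolumeExpansion: true        # the flag repctl cannot set for you
parameters:
  SRP: "SRP_1"                    # placeholder values
  SYMID: "000000000001"
  ServiceLevel: "Bronze"
  # replication.storage.dell.com/... parameters omitted for brevity

After creating it, kubectl get sc should show the expansion column as true.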
Now I create a 1GB device in the replicated storage class srdf-async-src. The local device is 25B on array 341.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmax-expand
  namespace: async
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: srdf-async-src
To expand the volume, I edit the same PVC and change the storage request from 1Gi to 2Gi, then apply it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmax-expand
  namespace: async
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: srdf-async-src
You can see the resize message below. You may note that it mentions that a resize of the file system is necessary. Since my PVC is not in use, it will remain at 1GB until I attach it to a pod. I’ll show that next.
And here is the newly resized volume:
Now, as I mentioned, the PVC is still 1GB in the red box below. But if I create a new pod that uses the PVC, it will resize to 2GB as part of the process of attaching it (blue box).
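For completeness, a minimal pod that mounts the expanded PVC might look like the following; the pod name and image are just placeholders, and only the claimName matters here:

apiVersion: v1
kind: Pod
metadata:
  name: pmax-expand-pod          # placeholder name
  namespace: async
spec:
  containers:
    - name: app
      image: busybox             # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: pmax-vol
          mountPath: /data
  volumes:
    - name: pmax-vol
      persistentVolumeClaim:
        claimName: pmax-expand   # the PVC expanded above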
And that’s volume expansion with SRDF and the new CSI v2.4.