For the past two posts (1, 2) I’ve covered using PowerPath Migration Enabler (PPME) to move data between Dell storage arrays. PPME, however, can also be used to migrate devices within a single array, in this case PowerMax. One of the best use cases for this is when you are ready to make the leap to NVMe over Fabrics (NVMeoF). PPME facilitates the online migration of data between the two protocols. I’m highlighting the online aspect of this migration because an offline migration that actually copies the data is really not necessary (*see the VMware exception at the bottom). PowerMax supports presenting a device over either SCSI or NVMeoF with no additional configuration: you simply remove the masking view that uses the SCSI initiators and create a new masking view that uses the NVMe initiators. You cannot, however, present a single device over two different protocols at the same time.
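As a rough sketch of that masking-view swap, the Solutions Enabler symaccess commands look something like the following. The SID, view, and group names here are hypothetical placeholders, not values from the demo:

```shell
# Remove the masking view that presents the device through the SCSI (FC)
# initiators. "rh_scsi_mv" and SID 000197900123 are example names.
symaccess -sid 000197900123 delete view -name rh_scsi_mv

# Recreate the view with the same storage group, but using the NVMe
# initiator group and an NVMeoF-capable port group.
symaccess -sid 000197900123 create view -name rh_nvme_mv \
  -sg rh_sg -pg rh_nvme_pg -ig rh_nvme_ig
```

The key point is that the storage group (and therefore the device) is unchanged; only the initiator and port groups differ between the two views.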
Undoubtedly, if you move to NVMeoF, some of your devices can be moved offline while others will need to be migrated online (unless a maintenance window is available). This example covers the latter: SCSI to NVMe/TCP on a physical Red Hat 8.4 host. As I’ve already written at length about PPME and migrations, I’m just going to introduce the demo for this one.
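For context on the host side (device presentation is not covered in the demo), connecting a RHEL 8 host to an NVMe/TCP target is typically done with nvme-cli. A minimal sketch, with a hypothetical portal address and subsystem NQN:

```shell
# Load the NVMe/TCP transport module (shipped in-box with RHEL 8.4).
modprobe nvme_tcp

# Discover subsystems on the array's NVMe/TCP portal (address is an example).
nvme discover -t tcp -a 192.168.10.50 -s 4420

# Connect to a discovered subsystem (the NQN below is a placeholder).
nvme connect -t tcp -n nqn.1988-11.com.dell:powermax:example \
  -a 192.168.10.50 -s 4420

# Verify the new namespaces are visible to the host.
nvme list
```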
SCSI to NVMe/TCP Demo
The video will walk you through the migration of data from a 50 GB SCSI device to a 50 GB NVMe/TCP device using PPME. You will see the process is no different from my previous demos. In this example, rather than Oracle, I am running an IO generator, IOMETER, against an ext4 file system on the SCSI device. The application is useful because it clearly shows the impact of copying the data between the devices. Although PPME, as I explained in the intro post, allows the user to throttle the session to avoid impacting the application (at the cost of time, of course), I allowed the host to use all its resources, and the result is as expected. I have not included device presentation to the Red Hat environment. My host has an HBA for FC and a QLogic NIC that supports NVMeoF. The other hardware details aren’t particularly relevant. The PowerPath version is 7.5. If something is unclear, though, please leave a comment.
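For reference, the PPME session in the video follows the same powermig flow as the earlier posts. A hedged outline, using example pseudo-device names and assuming the session handle returned by setup is 1:

```shell
# Create the migration session; emcpowera/emcpowerb are example pseudo names.
# setup returns a handle that the remaining commands reference.
powermig setup -techType hostcopy -src emcpowera -tgt emcpowerb

# Start the bulk copy and check progress.
powermig syncStart -handle 1
powermig query -handle 1

# Optionally throttle the copy to reduce host impact (at the cost of time).
powermig throttle -throttleValue 5 -handle 1

# Once the copy completes, cut over to the target and finish the session.
powermig selectTarget -handle 1
powermig commit -handle 1
powermig cleanup -handle 1
```

In the demo no throttle is applied, which is why the IOMETER numbers drop noticeably while the copy runs.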
Unfortunately, offline migrations are not as straightforward with VMware as with other operating systems. The reason is the signature that VMware writes to all of its datastores, which contains the WWN of the device. When you present a device over NVMeoF, however, VMware identifies it by its NGUID instead. The NGUID is not the same as the WWN, so VMware will recognize that the device does not match the signature and prompt you to resignature the datastore. As of today, the exact process to complete an offline datastore migration to NVMeoF is still being tested at VMware. I am hopeful, based on my testing, that VMware will soon validate it and, I would guess, publish a KB article. Until then, Storage vMotion (online or offline) is the way to move to NVMeoF.