As so many of our customers move from one of our platforms to another, I thought it would be useful to write about the built-in migration capability of our multipathing software, PowerPath, known as PowerPath Migration Enabler. I plan on doing this over a few posts, each covering different use cases, rather than cramming them all together which might get confusing… and long. This post introduces the technology before we get to the use cases so they will be easier to understand.
PowerPath Migration Enabler (PPME)
PowerPath Migration Enabler (PPME) is a migration tool that enables non-disruptive or minimally disruptive data migration between storage platforms, or even between logical units within a single storage platform. Migration Enabler is part of PowerPath and therefore resides on the host, though it does work independently of PowerPath. Essentially, PPME moves data between PowerPath pseudo devices (e.g., /dev/emcpowerx) or native devices (e.g., /dev/sdx), all while applications remain online (pseudo only), and then when finished swaps the device names. So, for example, if /dev/emcpowera initially points to a PowerMax device and /dev/emcpowerb to a PowerFlex device, after the migration /dev/emcpowera would point to the PowerFlex device and /dev/emcpowerb would point to the PowerMax device. This is accomplished without disturbing the underlying data, file system, or application.
PPME does not support migration of boot or swap devices
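If you want to verify which array a given pseudo name maps to, before or after a migration, powermt can show the mapping. A quick sketch using the example pseudo names above; the exact output varies by PowerPath release and array type:

```
# Show the array and logical device behind a specific pseudo name
powermt display dev=emcpowera

# Or list every PowerPath-managed device and its pseudo name
powermt display dev=all
```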
PPME supports two different technologies for migration: the default HostCopy capability and Open Replicator. HostCopy is just what it sounds like, host-based copying, and as such is agnostic to the underlying storage. Open Replicator, or Open Replicator for EMC Symmetrix, on the other hand, is for migrating data to the Symmetrix family of arrays (e.g., VMAX, PowerMax) through the use of Solutions Enabler. Since I focus on PowerMax and PowerFlex, all migration examples I demonstrate in subsequent posts will use HostCopy and pseudo device names.
Supported scenarios
There are two useful charts from the Dell EMC PowerPath Migration Enabler User Guide I want to include here. The first, below, shows whether you can use the native device name or the pseudo device name for the migration, depending on the operating system. Only two platforms permit the use of native device names, HP-UX and Solaris. Note, however, that all native device name migrations are disruptive.
The second chart defines which types of device, by operating system, can be the source and target of the migration: thick, thin, SCSI, and NVMe.
In addition to traditional single node migrations, PPME does support some cluster and geo migrations as well as volume managers like VxVM. These more advanced topics are covered in the previously mentioned user guide.
Prerequisites
I suppose this topic is the one that causes the most confusion with PPME. There are lots of migration tools out there, some host-based, some array-based, and it can be overwhelming when trying to figure out what makes sense for your particular scenario. In order to make an informed decision, it is essential to understand the prerequisites of the technology. PPME will not be the right solution for everyone, and much of that depends on what is required to use it. So let’s go over the necessities.
PowerPath
Well it’s in the name, so best to begin with PowerPath. As PPME is part of PowerPath, I would consider the largest prerequisite to be that you are already using PowerPath on the hosts where the migrations are required. If your goal is to move data non-disruptively, the source devices must already have PowerPath pseudo device names or you will incur downtime. It is perfectly possible to add PowerPath to an environment and then use PPME; however, there is no way to do so without some application/file system downtime, because the source devices will need to be acquired by PowerPath. Fortunately, recent releases of PowerPath can be installed non-disruptively, so you will not need to reboot, but the change in device names will require reconfiguring things like the mount paths (a sketch of that change follows below). Furthermore, if you are not using PowerPath now and only want to use PPME and then remove PowerPath afterwards, more downtime is necessary. This is not meant to discourage you, but if you are willing to take downtime it does open up many more migration options which should be considered.
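To make that concrete, here is a hypothetical /etc/fstab change once a file system previously mounted on its native device name is claimed by PowerPath. The device names and mount point are made up for the example; in practice you would use whatever pseudo name PowerPath assigns:

```
# Before: file system mounted on the native device name
/dev/sdb1          /u01    xfs    defaults    0 0

# After: the same file system mounted on the PowerPath pseudo device name
/dev/emcpowera1    /u01    xfs    defaults    0 0
```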
BTW, downtime would also be incurred if you currently use PowerPath but do not wish to do so after the PPME migration. For example, while PowerPath offers much functionality with PowerMax, the same is not true for PowerFlex. After migrating between these arrays, if you want to reconfigure your mounts to use the native device names and remove PowerPath, it will be disruptive. Fortunately, it does not have to be done immediately after the migration; rather, it can be scheduled to coincide with existing maintenance windows.
Licensing
PPME does not require separate licensing from PowerPath. If PowerPath is licensed, PPME is available for use.
Devices
In order to migrate between devices on different platforms, the devices must be presented to the host and claimed by PowerPath; the source and target should each have a pseudo name of the form /dev/emcpowerx. Presumably, as mentioned, the source device would already be running as a PowerPath device, and any newly presented devices from the target array would receive a pseudo name. The target device must be of equal or larger size than the source device. Since the migration occurs between different arrays, with potentially different geometries, the sizes may not line up exactly, and because larger targets are supported it is always safer to go a little bigger. You cannot use a smaller device for the target, regardless of the amount of data to be migrated. Here is the error you will receive if you try:
Setup migration? [yes]/no: yes
Source = /dev/emcpowera, Target = /dev/emcpoweri
PPME error(51): Target must be at least as large as the source
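A simple way to sanity-check the sizes on Linux before running setup (the pseudo device names are just the ones from the example above):

```
# Compare source and target sizes in bytes; the target must be >= the source
lsblk -b -o NAME,SIZE /dev/emcpowera /dev/emcpoweri
```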
Physical or Virtual
PPME supports either physical or virtual environments, though I think it is safe to say physical is the sweet spot. Why? Virtual environments, and for all intents and purposes we’re talking mostly VMware, use PowerPath/VE at the server (ESXi) level, where PPME is not supported. Therefore you need to utilize PPME at the VM level. Can you do that? Absolutely. Does it scale or will it work for most VMs? Not really. The reason is that PowerPath does not manage VMware virtual disks; it will only manage devices presented directly to the VM. In the PowerMax world this would be via RDM, in-Guest iSCSI, or passthrough FC (DirectPath I/O). Sorry, no vVols though, because remember the ESXi host only sees the PE, not the vVols. In the PowerFlex world there are also RDMs, but you have the flexibility of installing the SDC directly in the VM and presenting storage that way. BTW this makes migration to PowerFlex very easy in the virtual space.
So if you have VMs that use these storage presentation models, and the VMs employ PowerPath, PPME might work well for them. Most customers, however, have moved away from RDMs if they previously used them (or use them only in a small subset of VMs), and very few use in-Guest iSCSI unless, say, they are using the CSI driver with PowerMax. And I’ve seen maybe one or two customers use DirectPath I/O, which requires you to assign an HBA from your ESXi host directly to a VM, meaning PPME could only be used with that single VM.
For these reasons, PPME is generally neither an efficient nor a scalable way to migrate between arrays in virtual environments. Ultimately, most migrations I see with VMware are done with Storage vMotion, because you can present both storage arrays to the same ESXi host and let VMware move the entire VM without downtime. It’s a far simpler solution.
Physical environments obviously avoid all these restrictions, so they present a better all-around use case for PPME.
Migration Flow
Let’s go over the basic workflow of a PPME migration when using a pseudo device name. The migration binary is named powermig, and the migration transitions through a number of states as the user executes the associated powermig commands. While you can run individual device migrations, you can also supply a pairs file to migrate multiple devices at once. A sketch of the command sequence follows the list below.
- Setup – Essentially this begins the migration session. If it completes successfully it indicates the prerequisites are met.
- Syncing – When issued, PPME starts bulk-copying data from the source device to the target. During the sync, both reads and writes are serviced by the source device, and each write is also cloned to the target device. By default the host spends all its time copying the data, but there is a throttle parameter (0-9, where 0 dedicates 100% of the time to copying and 9 only 1%) to reduce the impact on the host.
- SourceSelected – The migration enters this state after the bulk copy completes. The data is synchronized at that point, and reads and writes are still serviced by the source, with writes being copied over to the target.
- TargetSelected – When this command is issued, read requests transition to the target and are serviced by that device, while the write workflow continues as before.
- Committed – When this final command is executed, PPME swaps the underlying pseudo names and all I/O is redirected to the new device under the original source path. The devices are no longer kept synchronized.
- Cleanup – After the migration enters the committed state, the user can issue a cleanup to remove the migration session. The cleanup removes some data from the source device to prevent the existence of two identical logical units, just to be sure the OS does not get confused (not that it should).
Note that once a migration is started, the target device is inaccessible until the committed state.
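Here is a sketch of what that sequence looks like at the command line for a single HostCopy pair, using the example pseudo names from earlier. The handle value is returned by setup, and exact flag syntax can vary by PowerPath release, so treat this as illustrative and check the user guide (a pairs file can also be supplied to setup for bulk migrations):

```
# 1. Setup - validate prerequisites and create the migration session
powermig setup -src /dev/emcpowera -tgt /dev/emcpowerb -techType hostcopy

# 2. Sync - start the bulk copy (substitute the handle returned by setup)
powermig sync -handle 1

# Optional: throttle the copy to reduce host impact (0 = fastest, 9 = slowest)
powermig throttle -handle 1 -throttleValue 5

# Check progress; the state moves to sourceSelected once the copy completes
powermig query -handle 1

# 3. TargetSelected - begin servicing reads from the target device
powermig selectTarget -handle 1

# 4. Commit - swap the pseudo names; writes are no longer mirrored to the source
powermig commit -handle 1

# 5. Cleanup - remove the migration session
powermig cleanup -handle 1
```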
Use Cases
I’ve been considering a few types of use cases to demonstrate the PPME functionality, some more complicated than others, but for now I’ll start with two basic ones. These migrations will be from PowerMax to PowerFlex 3.x, so I’ll be using the SDC, though in the future NVMe on PowerFlex 4.x is an interesting use case since PPME will soon support SCSI to NVMe for PowerFlex (PowerMax is already supported). But for now:
- Physical – This will be a Red Hat 8.3 operating system with Oracle 21c installed. The Oracle software is located on the boot drive, which is a local RAID 1 device. I built an Oracle single instance database across three pseudo devices on PowerMax, presented via iSCSI, which I will migrate to similarly sized devices on PowerFlex (a pairs file sketch for this follows the list).
- Virtual – I think the best virtual environment use case is moving RDMs within a VM. I’ll do the migration in two parts since the goal is to get the VM off PowerMax storage. First, I will use PPME to migrate the RDMs to PowerFlex. Second, I will remove the PowerMax RDMs and then use Storage vMotion to move the boot vmdk to a PowerFlex datastore. Remember, I can’t SvMotion the RDMs without converting them to vmdks, which I do not wish to do, so PPME works well here.
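For the physical use case with its three Oracle devices, a pairs file is the natural fit. My understanding from the user guide is that the file lists source and target device pairs, one pair per line; the pseudo names below are hypothetical placeholders for my actual devices:

```
/dev/emcpowera   /dev/emcpowerd
/dev/emcpowerb   /dev/emcpowere
/dev/emcpowerc   /dev/emcpowerf
```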
I hope to get both posts completed this week and then look to perhaps a corner case which is going to require a bit more testing.