This post continues from the last one, which introduced PowerPath Migration Enabler (PPME). Here we'll look at migrating devices from PowerMax to PowerFlex using PPME on a physical box. To demonstrate that the process is accomplished online, an Oracle database will remain running on the PowerMax source devices throughout the migration. I'm not going to spend time on how to present storage from either PowerMax or PowerFlex; if you've come this far, I think you can handle that. Let's begin.
Environment
Below are the details of the setup I am using for the migration, both source and target. Screenshots and the demo follow, but I'm putting the details here for reference. Some are included purely for completeness, e.g., the RHEL kernel.
Source
- Dell PowerEdge R740 with a local RAID 1 ~300 GB disk (hostname dsib2026.drm.lab.emc.com)
- Red Hat Enterprise Linux 8.3 (kernel 4.18.0-240.el8.x86_64)
- iSCSI connectivity to the PowerMax 8500 (FC works just as well; NVMe/TCP, however, is a different use case)
- A masking view with three devices to the physical host as follows:
- (1) 100 GB device for datafiles – child SG 1
- (1) 100 GB device for recovery area – child SG 1
- (1) 50 GB device for redo logs – child SG 2
- Oracle 21c installed on the local device
- Oracle database “orcl” created on the three PowerMax devices
- PowerPath version 7.5 (build 95)
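If you want to confirm the same details from the command line, a quick check looks something like this (standard RHEL and PowerPath commands, nothing exotic):

```
# OS release and kernel
cat /etc/redhat-release
uname -r

# Installed PowerPath version (should report 7.5)
powermt version

# Active iSCSI sessions to the PowerMax
iscsiadm -m session
```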
Target
- PowerFlex version 3.6.600
- Multi-node, custom configuration Linux/ESXi
- EMC-ScaleIO-sdc-3.6-600.113.el8.x86_64.rpm installed and configured on dsib2026
- Three devices mapped to dsib2026. Recall that PowerFlex creates volumes in multiples of 8 GB, so we want them equal to or larger than the PowerMax devices:
- (2) 104 GB devices
- (1) 56 GB device
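To sanity-check the PowerFlex side of the host, something along these lines works. The package name comes from the rpm above, while the scini service and device names are the usual SDC defaults; verify them against your own PowerFlex version, as they can differ:

```
# SDC package installed?
rpm -q EMC-ScaleIO-sdc

# The SDC driver typically runs as the scini service
systemctl status scini

# PowerFlex volumes usually surface as /dev/scini* block devices;
# here I expect two 104 GB volumes and one 56 GB volume
lsblk -o NAME,SIZE,TYPE | grep -i scini
```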
Oracle database
As noted, the Oracle database runs across the three presented devices, each of which has been claimed by PowerPath. I am using multipathing on the host (you must specify a parameter for this when you install PowerPath), so each PowerMax device is seen as two native devices but represented by a single pseudo device (emcpowerc, emcpowerd, emcpowere). Here are the PowerMax pseudo devices:
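If you prefer the CLI to a screenshot, powermt shows the same information. The device names below are the ones from my host:

```
# Everything PowerPath has claimed; the three PowerMax pseudo devices
# here are emcpowerc, emcpowerd and emcpowere
powermt display dev=all

# Drill into one pseudo device to see its two native iSCSI paths
powermt display dev=emcpowerc
```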
And here are the database/redo files with the mounts shown:
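The same layout can be pulled straight from the database. The mount points in the df line are just my layout, so substitute your own:

```
# Ask the database where its datafiles and redo logs live
sqlplus -s / as sysdba <<'EOF'
set pagesize 100
set linesize 200
select name from v$datafile;
select member from v$logfile;
EOF

# Show the filesystems those paths sit on (mount points are mine;
# yours will differ)
df -h /oradata /fra /redo
```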
Lastly, just to demonstrate the online nature of the DB, here is Enterprise Manager (EM) showing how long the instance has been running. I’ll use EM in the demo.
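If you don't have EM handy, the same uptime information is visible from v$instance:

```
# Instance name, startup time and status straight from the database
sqlplus -s / as sysdba <<'EOF'
column instance_name format a16
select instance_name,
       to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as started,
       status
from   v$instance;
EOF
```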
PowerFlex storage
I then present the three devices from PowerFlex to the dsib2026 host. PowerFlex handles multipathing on its side, so only a single path is displayed in RHEL. The PowerPath pseudo devices are emcpowerg, emcpowerh, and emcpoweri.
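Claiming the new volumes is the usual powermt exercise; something like this is all it takes:

```
# Have PowerPath scan for and claim the newly mapped PowerFlex volumes
powermt config

# Confirm the new pseudo devices (emcpowerg, emcpowerh, emcpoweri);
# each shows a single native path because the SDC handles pathing itself
powermt display dev=all
```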
Migration
With the PowerFlex devices claimed by PowerPath, we can begin the migration. As I wrote in the first post, migrations can be done device by device or with a device-pairs file. Since this is a single Oracle database, I am going to migrate all three devices together. How long the copy takes depends in part on whether you throttle it to avoid impacting application performance. I'm moving 250 GB of data, which is not insignificant, and it doesn't help that this is an older host with a single network shared by both the PowerMax and the PowerFlex. That is obviously a poor setup, and my copy time shows it. Production environments will have separate networks (FC and/or IP) for the storage systems, as well as multiple network links, so my goal here is to demonstrate functionality, not throughput. Fortunately, I can edit my demo so you don't have to wait like I did. On to the demo!
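For reference, here is a sketch of the PPME command sequence the demo walks through. Because I'm migrating all three devices together, I use a device-pairs file. The file name and handle number are illustrative, and option spellings can vary by PowerPath release, so treat this as a guide and check powermig help on your host:

```
# Device-pairs file: one "source target" pair per line (names from my host)
cat > /tmp/orcl_pairs.txt <<'EOF'
emcpowerc emcpowerg
emcpowerd emcpowerh
emcpowere emcpoweri
EOF

# Set up the migrations using host copy as the migration technology
powermig setup -techType hostcopy -file /tmp/orcl_pairs.txt

# Start the bulk copy for each handle returned by setup (handle 1 here)
powermig sync -handle 1

# Optionally throttle the copy to limit the impact on the running
# database; higher values slow the copy (see the PPME docs for the range)
powermig throttle -throttleValue 5 -handle 1

# Watch progress until the copy completes
powermig query -handle 1

# Cut I/O over to the PowerFlex target, then commit and clean up
powermig selectTarget -handle 1
powermig commit -handle 1
powermig cleanup -handle 1
```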
Demo
Next Up
With the physical use case complete, next I'll cover using PPME in a virtual environment when RDMs are in use. Originally I wanted to use Windows, but PowerPath doesn't yet support it with PowerFlex, so I'm afraid I'm back to RHEL.
Why would anyone want to move from PowerMax to PowerFlex ☺
Ah, to each his own my good man.