Non-Disruptive Migration (NDM) with VMware

Non-Disruptive Migration, or NDM, is a capability offered on our arrays that allows you to move from an older array to a newer array with no downtime (hence non-disruptive). NDM has been around for years at this point, but I see it infrequently in VMware environments because VMware offers its own non-disruptive option, Storage vMotion. With SvMotion, most customers will create new datastores on the new array, present them to their ESXi hosts, then move the VMs from the old datastores on the old array to the new datastores on the new array. Once migration is complete, they either delete the old datastores or simply remove the devices from the ESXi hosts. There are situations, however, where NDM might be more appropriate, particularly very large environments where SvMotion could be overly time-consuming, if not resource-intensive. Therefore I am going to run through a simple example of using NDM for a single VM with two vmdks on a single datastore and one RDM. The complexity is not important, as the process is the same no matter how many devices you need to migrate.

NDM is integrated into Unisphere, so I recommend using the wizards available to you rather than the command line. Unisphere will run through the appropriate pre-checks, which gives you a better chance of success. There are a number of prerequisites for running an NDM session, both hardware and software, but at a high level:

  • Code on each array that is qualified for NDM
  • Arrays that are configured for RDF and that are zoned to each other
  • ESXi host(s) must be zoned to both arrays (a quick check from the ESXi shell is sketched below)
  • Supported versions of Solutions Enabler/Unisphere
  • Supported multi-pathing software

You can view the full requirements at the E-Lab site: https://elabnavigator.emc.com/eln/elnhome
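
To sanity-check the zoning prerequisite from the ESXi side, you can look at which front-end target ports the host currently has paths to. This is just a quick sketch using standard esxcli/BusyBox commands from the ESXi shell; once the host is zoned to the target array you should see its front-end ports show up as well.

    # List every target port this host currently has paths to.
    # Before zoning you will only see the source array's front-end ports;
    # after zoning to the target array its ports should appear here too.
    esxcli storage core path list | grep "Target Identifier" | sort -u

    # The host's own FC adapters/WWNs, for cross-checking the zoning:
    esxcli storage san fc list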

Here is a diagram of the NDM topology:

Putting aside the requirements and diagram and boiling this down, the engine behind NDM is SRDF/Metro. In a VMware environment, what we are essentially doing with NDM is creating a uniform vSphere Metro Storage Cluster (vMSC) configuration: we add paths to a second array and second device (the R2) and then, at a time of our choosing, remove the paths to the original device (the R1). Our VMs are none the wiser that we have moved to a new array online, and in doing so we have avoided the time and resources that SvMotion would cost us.
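
If you want to watch this happen from the host, a simple way is to keep an eye on the path list for one of the devices being migrated. The naa. value below is just a placeholder for one of your own device identifiers (pull it from the vSphere client or from esxcli), so treat this as a rough sketch rather than copy-paste:

    # Placeholder: substitute the NAA identifier of a device being migrated
    DEV=naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # Before NDM starts, only the source array's targets are listed. While the
    # SRDF/Metro pairing is active you should see additional paths to the
    # target array for the same device identity, and after the migration is
    # committed only the target array's paths remain.
    esxcli storage core path list -d $DEV

    # Same view through NMP, which also shows the PSP in use for the device:
    esxcli storage nmp device list -d $DEV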

Setup

On to the example. In my environment I am cheating a bit because I am migrating within the same base code level, which technically is not supported, but I just don't have older code around, so this is the best I could do. This doesn't change anything about the process, so for my purposes it works fine. Before I show the demo, here are some items to keep in mind about the environment. You will see all of this in the demo, but I think it helps to know up front what you are going to see.

  • I am going to be migrating the storage group NDM_test_sg from array 000197600357 to 000197600358.
  • There are two 750 GB devices in this storage group, 000FF and 00100. One of these devices is used as a datastore, NDM_357_1, and the other is used as a physical RDM (the sketch after this list shows how to map them to their naa. identifiers on the ESXi host).
  • There is one VM, NDM_Win_1. It has 2 vmdks and 1 RDM. I am running IOMETER on this VM just to show that everything is being done online.
  • There are two paths configured to 000197600357 for the datastore and RDM. I am using NMP, though PP/VE is also supported if you prefer. We of course recommend at least 4 paths.
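
Before kicking things off it helps to map the VMAX device numbers to the naa. identifiers the ESXi host sees, so you can track the datastore and the RDM through the migration. A quick sketch (the datastore name is from my environment; yours will differ):

    # Which naa. device backs the VMFS datastore NDM_357_1:
    esxcli storage vmfs extent list

    # Full device inventory, useful for spotting the RDM device by size/path:
    esxcli storage core device list | grep -E "Display Name|Size|Devfs Path"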

I think that will do it. I have some callouts in the demo which should help (there is no audio), though I suspect you will not have any trouble following. There is one initial step in Unisphere you must complete before migrating a storage group, and that is to create the migration environment (included in the demo). Think of this as a sanity check to ensure a migration will work. I don't have a good deal of dead time in the demo, so you may have to pause it if there are particular tasks of interest. And we're off…
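
For those who do prefer the command line, the same flow can be driven with the symdm command in Solutions Enabler. Treat the options below purely as a sketch: the exact flag names and ordering vary by Solutions Enabler release, so confirm them against symdm's help or the NDM documentation for your code level before relying on them.

    # 1. One-time setup (and sanity check) of the migration environment between
    #    the two arrays -- this is the Unisphere step mentioned above.
    #    Flag spellings here are assumptions; verify against your SE release.
    symdm environment -src_sid 357 -tgt_sid 358 -setup

    # 2. Create the migration session for the storage group. This is where the
    #    target devices are paired up and the extra paths are presented.
    symdm -src_sid 357 -tgt_sid 358 -sg NDM_test_sg create

    # 3. Watch progress, then commit when you are satisfied. Commit removes the
    #    paths back to the source array and finishes the migration.
    symdm list
    symdm -src_sid 357 -tgt_sid 358 -sg NDM_test_sg commit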

Demo

Roundup

So now that you’ve seen the migration, I want to emphasize a couple of things I pointed out in the video.

  • The first is that the target array devices take on the external WWN of the source array devices. This is in keeping with how SRDF/Metro works. Even after the migration completes, this external WWN remains, so it will look like you are still pointing to the other array.
  • The second is that the source devices, which are now defunct, have their external WWN changed to the internal WWN of the target devices. They now look like they come from the target array; however, once the migration completes they are no longer masked to any host, and if they are reused as is they will not conflict with the target devices. In other words, the devices essentially swap external WWNs (the quick check sketched below shows what this looks like from the ESXi host).
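
A minimal way to see this from the host, using the same placeholder naa. identifier as earlier, is below; the point is that the device identity the VMs and VMFS rely on never changes, only the array ports behind it do:

    # The naa. identity of the datastore/RDM device is unchanged after the
    # migration, because the target devices assumed the source devices'
    # external WWNs:
    esxcli storage core device list -d $DEV

    # ...but the paths behind that identity should now land only on the
    # target array's front-end ports:
    esxcli storage core path list -d $DEV | grep "Target Identifier"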

Hope that makes sense.
