I’ve had some questions from customers running Oracle RAC on VMFS who would like to use TimeFinder/SnapVX for testing, rather than Oracle tools like RMAN. This is a sensible course given that for large databases RMAN takes a long time to restore, while SnapVX is essentially instantaneous. But why use either solution when you can just take a VMware snapshot of the VM? Well, the rub with Oracle RAC is that the shared vmdks use the multi-writer flag, and multi-writer doesn’t support snapshots. So you need a non-VMware solution. This is not to say that snapshots can’t play any role – they can, for any of the vmdks which aren’t multi-writer (i.e., not the database files). So normally what we might see is the OS and software on one or more non-shared vmdks for each RAC node, and then the rest of the vmdks for the database with multi-writer.

To use this method, however, there is one key parameter required for the multi-writer vmdks in order to take a snapshot of the VM at all: the disk mode of those vmdks must be set to Independent – Persistent. This tells VMware to ignore these disks when taking a snapshot; otherwise you’d get an error. By default they are Dependent, so if you have no plans to use snapshots you can just keep the default setting. Also be aware that you cannot include the memory in the snapshot (the default checkbox) when you have independent disks configured.
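To make the disk settings concrete, here is a hedged sketch of what the relevant entries look like in a node’s .vmx file. The datastore path, VM name, and SCSI ID (scsi1:0) are hypothetical placeholders for my lab; yours will differ.

```shell
# Illustrative only -- path, VM name, and SCSI ID are placeholders.
# Check the disk settings for a shared ASM vmdk directly in the .vmx:
grep -E 'scsi1:0\.(mode|sharing)' /vmfs/volumes/ORACLE_RAC/dsib0242/dsib0242.vmx

# A shared vmdk configured for this method carries entries like:
#   scsi1:0.sharing = "multiWriter"
#   scsi1:0.mode = "independent-persistent"
```

You can set both values in the vSphere Client when editing the VM (Sharing and Disk Mode dropdowns), which is what I show in the video below.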
I’m going to do a quick example of how you can use SnapVX for restoring in between test runs. I am concerned here about showing you the process, rather than how long it takes, but whether my database was 2 TB full or empty, SnapVX is going to take the same amount of time, i.e. none at all. While I’ll demonstrate taking a VMware snapshot of the VM (video below), I’m not going to restore it because my test does not change any files on the OS or software mounts that make a difference in between the runs.
- 4-host ESXi 7.0 U2 cluster, vCenter 7.0 U2b (most recent release with the security patch)
- Oracle RAC 19c, 19.3.0, single database orcl
- 2 nodes, OEL 8 U3, dsib0242 and dsib0243
- A single storage group on the array for the Oracle database with a single device. The OS/software is in a separate storage group.
- ASM single disk group +DATA with 8 vmdks all in a single datastore ORACLE_RAC
We don’t recommend using a single ASM disk group in a production environment, nor do we recommend using a single device, but as this is just a test to show the restore flow, I think it makes the demonstration clearer. Normally you’d want at least 2 disk groups (DATA, REDO), and you’d want to use 8+ separate devices to allow for effective striping on the backend. You would then assign one vmdk per datastore. I’ve done a few papers on Oracle that use a production layout, which you can find in my documentation library. BTW, for Oracle best practices, see this whitepaper. Note that while the paper is based on a physical environment, the best practices hold just as true for virtual.
I’m going to make some assumptions about the reader’s knowledge so I don’t have to go too crazy with screenshots (OK that’s debatable). I’ll take it on faith that unregistering and registering VMs is common knowledge, along with some other tasks which I do include like unmount/mount datastores and detach/attach devices. And if you couldn’t tell already, I do expect some Oracle experience, though I don’t go too deep into the HA stuff. Basically if you know about ASM and how to query a table in the database, you should be fine.
Snapshot with multi-writer
First, let me show you a video of the setup of the VM and then how a snapshot will work. In the animated gif I edit one of the Oracle nodes and show the OS vmdk (no multi-writer, Dependent), and then the Oracle vmdk (multi-writer, Independent-Persistent).
For purposes of the demo, I created a user, rac_test, and a single table, test, with a single column (dummy). It has 3 rows:
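For reference, the demo objects can be recreated with something like the following. This is a hypothetical sketch (run as SYSDBA on one node); the user, table, and column names match the post, but the password and grants are my own placeholders.

```shell
# Hedged sketch: recreate the demo schema used in this post.
# Password and privileges are illustrative assumptions.
sqlplus / as sysdba <<'EOF'
create user rac_test identified by rac_test quota unlimited on users;
grant create session, create table to rac_test;
create table rac_test.test (dummy number);
insert into rac_test.test values (1);
insert into rac_test.test values (2);
insert into rac_test.test values (3);
commit;
EOF
```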
So first, I’ll take a SnapVX targetless snapshot on the array to record the initial state I want to use for testing. All SnapVX snapshots are consistent, so I’m going to take it while the database is up. Oracle does allow roll-forward of a crash-consistent database, but I’m only concerned with instance recovery as I am not using archive log mode.
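I’ll use Unisphere for the screenshots below, but the same snapshot can be taken with the Solutions Enabler CLI. The SID, storage group name, and snapshot name here are placeholders for my environment, and note that on the CLI the time-to-live delta is expressed in days rather than the hours Unisphere offers (check the symsnapvx documentation for your release).

```shell
# Hedged CLI equivalent of the Unisphere steps -- SID/SG/snapshot names are placeholders.
# Targetless snapshot of the database storage group, expiring in 1 day:
symsnapvx -sid 0123 -sg oracle_data -name baseline establish -ttl -delta 1

# Confirm the snapshot exists:
symsnapvx -sid 0123 -sg oracle_data list
```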
Here are the 4 steps for taking a snapshot. First, I checkbox the storage group, then select Protect.
Next, choose the option to create a snapshot.
Now, in step 3 give the snapshot a name, and if desired, an expiration. I’ve set it to 5 hours since I only need it for this example.
Finally, run the job and view the summary. The job took a matter of seconds to complete.
Next, let’s change our data for the rac_test user by removing a row. This will show us if the restore works. Here I remove the row with a value of 2.
Now, let’s prepare VMware to restore the snapshot back to the baseline, which will include the row I just removed. First, shut down the database.
Then on each node, dismount the +DATA disk group. Because there are dependencies, you will need to use force.
Since I’m going to shutdown the VMs anyway, I typically use VMware to perform a Guest OS clean shutdown once the Oracle database is down. But use whatever method you prefer. You can also bring down one of the ASM instances, though if you try to bring down both, Oracle will complain about dependencies.
With both VMs down, unregister the VMs from the vCenter so that the Oracle datastore can be unmounted. You don’t want to restore the snapshot with the filesystem mounted. Be sure you note first where the home files are (vmx) so you know where to navigate to re-register. Once the VMs are unregistered, there is no longer a tie to the Oracle datastore so it can be unmounted from the ESXi hosts. There are lots of places to do this, below I am running it from the datastore menu. Select Unmount Datastore… in step 1.
You will be presented with all the ESXi hosts where the datastore is mounted. Select them all in step 2. If any of the ESXi hosts still have VMs with vmdks in the datastore, the unmount will fail on that host.
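If you prefer doing this per ESXi host from the shell rather than the vSphere Client, a hedged sketch of the unregister/unmount sequence looks like this. The VM IDs come from your own inventory; ORACLE_RAC is my datastore label.

```shell
# List registered VMs on this host -- note the Vmid and .vmx path for each RAC node:
vim-cmd vmsvc/getallvms

# Unregister each node (substitute the Vmid values from the listing above):
vim-cmd vmsvc/unregister <Vmid>

# With no VMs tied to the datastore, unmount it by label:
esxcli storage filesystem unmount -l ORACLE_RAC
```

Remember the unmount has to be run on every host where the datastore is mounted.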
Once unmounted, the datastore will appear as inaccessible. Be sure all hosts show the datastore as inaccessible.
There is one extra step you can take to ensure that no one is able to manipulate the underlying devices while the restore is conducted, and that is to detach them from the hosts. This is done on a per-host basis. Navigate to Storage Devices under the Configure tab for an ESXi host and use the check box. Then select DETACH.
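The detach can also be done per host from the shell. The naa identifier below is truncated and illustrative; pull the real one from the extent listing for your datastore.

```shell
# Find the device backing the datastore (shows its naa identifier):
esxcli storage vmfs extent list | grep ORACLE_RAC

# Detach that device from this host (identifier is a placeholder):
esxcli storage core device set -d naa.600009700001... --state=off
```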
You will get a warning which essentially states the very reason you might do this: to ensure no one can use the device while you restore.
The restore will be fine without this extra step, and in my lab it is unnecessary, but in a production environment you may wish to go the extra step. Remember you have to detach the device on each ESXi host.
Return to Unisphere for PowerMax to restore the snapshot. The navigation is similar – drill-down into your storage group then select the DATA PROTECTION tab. There any and all your snapshots will be shown. I have the singular one so I will select it and then hit Restore.
In step 2, just run the job.
Again, this will complete in seconds because SnapVX copies the data in the background rather than making you wait for it. Once you see Succeeded, the device is ready.
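As with the establish, the restore has a CLI equivalent in Solutions Enabler. Names are the same placeholders as before; syntax may vary slightly by release, so treat this as a sketch.

```shell
# Restore the storage group from the targetless snapshot:
symsnapvx -sid 0123 -sg oracle_data -snapshot_name baseline restore

# Check that the data has been fully copied back to the source devices:
symsnapvx -sid 0123 -sg oracle_data -snapshot_name baseline verify -restored
```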
Return to the vCenter. First, if you have detached the device, re-attach it using the same screen as above for detach (there is an attach button). Next, mount the datastore back to the ESXi hosts. This time I’m showing it at the ESXI host level.
But in step 2 you’ll still get to choose the hosts.
With the datastore mounted, re-register the VMs. Just as an example here is mine.
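The per-host CLI version of the re-attach, mount, and re-register sequence is sketched below. The naa identifier and .vmx path are placeholders from my lab.

```shell
# Re-attach the device (only needed if you detached it earlier):
esxcli storage core device set -d naa.600009700001... --state=on

# Rescan and remount the datastore by label:
esxcli storage core adapter rescan --all
esxcli storage filesystem mount -l ORACLE_RAC

# Re-register each node from the .vmx path you noted before unregistering:
vim-cmd solo/registervm /vmfs/volumes/ORACLE_RAC/dsib0242/dsib0242.vmx
```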
After they are re-registered, power them on and you should be back in business. My database comes up automatically, so I’m going to check on the test table and see if we got back our row.
Sure enough, there it is. A successful restore.
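The verification itself is just a query against the demo table, along these lines:

```shell
# Confirm the deleted row (value 2) is back after the restore:
sqlplus / as sysdba <<'EOF'
select * from rac_test.test order by dummy;
EOF
```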
Assuming this is a test environment, you’ll probably need to refresh again at some point. Since the restore process is actually a running session, you must terminate it first before restoring again from the original snapshot. This is also done in Unisphere for PowerMax. Navigate as you would to run the restore, and instead use the 3 buttons to the right of Link. Then select Terminate. Be sure you see the checkmark below the Restored column, indicating all the data has been copied to the source volume.
The terminate will finish quickly.
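For completeness, the terminate also has a CLI form. The restored session must be terminated before the snapshot can be reused (or removed); again the names are placeholders and the syntax is a sketch.

```shell
# End the restore session first:
symsnapvx -sid 0123 -sg oracle_data -snapshot_name baseline terminate -restored

# Only if you want to remove the snapshot entirely rather than restore from it again:
symsnapvx -sid 0123 -sg oracle_data -snapshot_name baseline terminate
```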
And now you are ready to repeat the test. Be aware that even if you set a time to live for the snapshot (as I did to 5 hours above), if there is an active restore session that has not been terminated, the snapshot is still going to be there in Unisphere.
This test was pretty basic as Oracle RAC environments go, but I didn’t want to overcomplicate the process. You can use this to build upon if you have some more complexity. For example, suppose I had an Oracle RAC with 3 disk groups – DATA, REDO, and FRA. I would still only need 2 of the disk groups, DATA and REDO; likely I don’t care about the FRA as it is my archive location. So I would put all 3 disk groups in separate storage groups (parent/child is easiest as it gives flexibility in Unisphere and avoids CLI). Then when I took my baseline snapshot, I would snap the two storage groups holding my DATA and REDO devices as one by selecting the parent, leaving the FRA aside. My colleague Yaron Dar has a nice demo and some more info on Oracle snapshotting if you want to delve deeper. Be aware he works with physical environments, so for the sake of comparison assume in a VMware environment we would substitute RDMs in his example. That would avoid the unmounting of the datastore covered in my example.
Other refresh options
There are a number of ways to do the refresh within VMware. Since you can’t unmount a datastore with registered VMs, the methodology I covered here is a fairly simple, straightforward path (assuming no other VMs are in the Oracle datastores). Another option would be to script (VMware PowerCLI) the removal of the vmdk files from the VMs. You would then still unmount the datastores as no VMs would be associated. A riskier method would be to pull the storage out from under the VMs: delete the masking view, restore the snapshot, then create a new masking view. The VM database vmdks would show as 0 MB (inaccessible) after the masking view deletion, but upon rediscovering the datastore(s) they would recover. This avoids both re-registration and disk add/remove steps. The problem with this method, however, is that unmapping/mapping devices can sometimes confuse VMware to the point of having to reboot the ESXi hosts to recover. It will cause PDL (permanent device loss) events, and those don’t always end well. It’s happened to me many times, and though removing the masking view is alluring since it seemingly makes life easier, you may just be making it more difficult. But I leave it to your discretion and testing.
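If you want to explore the disk-removal option without PowerCLI, vim-cmd offers a per-host equivalent. This is a heavily hedged sketch: the Vmid, controller, and unit numbers are hypothetical, and you should confirm the argument order with `vim-cmd vmsvc/device.diskremove` help on your ESXi build before relying on it. The final argument controls whether the backing file is deleted; here it is left on disk.

```shell
# Hedged sketch: detach (not delete) a shared vmdk from a powered-off VM,
# freeing the datastore for unmount without unregistering the VM.
# Arguments: <Vmid> <controller#> <unit#> <delete_file: 0=keep backing file>
vim-cmd vmsvc/device.diskremove <Vmid> 1 0 0
```

After the restore you would add the disks back, pointing at the same vmdk paths.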