VMware Cloud Foundation with FC, vVols, and SRDF Metro

I’ve been meaning to talk about this topic for a while, but a lack of lab resources and other pressing projects kept pushing it to the back burner. I’ve bundled a few things together in this post since they all center on VMware Cloud Foundation (VCF), and as I’ve used the same environment for everything, creating multiple posts would lead to a good deal of repetition. I am using VCF 3.9.1 in my configuration, so I won’t be covering VMware’s new Kubernetes integration available in 4.0, though you can read about non-VCF K8s integration here. So on we go.

VCF

Rather than attempt to explain VCF in my own words, I am going to quote VMware’s documentation: “VMware Cloud Foundation is an integrated software stack that bundles compute virtualization (VMware vSphere), storage virtualization (VMware vSAN), network virtualization (VMware NSX for vSphere and NSX-T), and cloud management (VMware vRealize Suite) into a single platform that can be deployed on premises as a private cloud or run as a service within a public cloud.” The heart of VCF is the SDDC Manager, a single pane of glass that allows the VMware administrator not only to provision and configure the components, but also to control the entire lifecycle of each part of the environment. It’s an all-in-one solution, as this VMware image shows:

VCF is deployed either as a ready-built solution, like VxRail, or manually with vSAN Ready Nodes. From personal experience, I can unequivocally say that a ready-built solution is the way to go. I was unable to get a VxRail (which is obviously Dell’s solution) and had to build the environment from scratch, which, frankly, was very painful. I hit a number of issues, some known, some unknown, and even had to get VMware development involved. So do yourself a favor and get the HCI solution. Even if you don’t use ours, you can still use the PowerMax as part of the solution as described herein.


I’ll pause here: if VCF is completely new to you, I encourage you to follow the link I provided to VMware’s introduction so that you can learn a bit more about the 3.9 version (and VCF in general). I have to assume a basic level of understanding of VCF and its parts going forward, as I am going to discuss specific components and their integration with the Dell EMC PowerMax platform.


Workload domains and storage

First, let’s start with the two different types of Workload Domains in VCF: Management and Virtual Infrastructure (VI). The management domain is where all the initial components of VCF are deployed, and therefore there is only one. The management domain must be installed on VMware vSAN storage; you cannot use any other type. Being the heart of the installation, VMware needs to control all aspects of the management domain, including the storage, and since VMware owns vSAN, it was the logical choice. VMware knows nothing about external arrays presenting, say, NFS, iSCSI, or FC storage, so it can’t own their entire lifecycle. Here is my management domain post-installation from within the SDDC Manager.

The other type of workload domain is virtual infrastructure. The VI domain is where you would tend to run your workloads. You can of course run them in the management domain, and if you have a collapsed installation and limited servers, that may be your only option (new VI domains require a minimum of 3 new servers); however, most customer environments will utilize one or more VI domains. The VI domain is unique in that it does not require vSAN. So what are the other storage options?

At a conceptual level, there are two types of storage that VCF uses: principal and supplemental. Principal storage is where you can deploy new workload domains, while supplemental storage is presented to existing workload domains (management or VI). The three principal storage options are vSAN, NFS, and VMFS on FC (the last requiring a minimum of VCF 3.9). Supplemental storage can be any of those types, iSCSI, or even vVols. We’ll get back to supplemental later in the post when discussing vVols, so I’ll leave it at that for now.

Deploying VI domain on PowerMax

So once I have VCF deployed and thus my management domain running, I can add any number of VI domains. On PowerMax I could use either NFS (if I have eNAS) or FC. In my environment I went with VMFS on FC. Since VMware is not able to control the storage as it can with vSAN, you must pre-create the VMFS datastore before beginning the wizard. We can do that any number of ways; a scripted sketch of one approach follows. Then we’ll add another twist to our array storage.
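
As a concrete illustration of that pre-creation step, here is a minimal sketch using Dell EMC’s open-source PyU4V Python library against the Unisphere REST API. The storage group, port group, host group, credentials, and volume size are all hypothetical placeholders for whatever exists in your environment:

```python
# A minimal sketch using Dell EMC's PyU4V library (pip install PyU4V)
# to pre-create the device that will back the VI domain datastore.
# All names, sizes, credentials, and IDs below are hypothetical.
import PyU4V

conn = PyU4V.U4VConn(
    server_ip='unisphere.example.com', port=8443,
    username='smc', password='smc', verify=False,
    array_id='000197600450')

# One 2 TB volume in a new storage group for the VCF datastore.
conn.provisioning.create_storage_group(
    srp_id='SRP_1', sg_id='VCF_WLD_SG', slo='Diamond',
    num_vols=1, vol_size=2, cap_unit='TB')

# Mask it to the pre-created host group and port group holding the
# VI domain ESXi hosts and the array front-end ports.
conn.provisioning.create_masking_view_existing_components(
    port_group_name='VCF_PG', masking_view_name='VCF_WLD_MV',
    storage_group_name='VCF_WLD_SG', host_group_name='VCF_WLD_HG')

conn.close_session()
```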

SRDF/Metro

Let’s suppose that in addition to using FC, I also want to take advantage of VMware’s vSphere Metro Storage Cluster (vMSC) via SRDF/Metro on PowerMax with that datastore, i.e. stretched clustering. Sure, why not? VMware does provide a way to stretch the management domain running on vSAN, and if you are going to use SRDF/Metro for your VI domain, it makes sense to also stretch your management domain for redundancy; however, as I said, VMware doesn’t understand array storage. So if you decide to stretch your non-vSAN VI domain, your network configuration may require you to customize the NSX component so it can reach the other availability zone. Fortunately, in my case the 4 hosts I am using for my VI domain are co-located in the same building as my 2 arrays, so I really don’t need to change anything. This is actually not an uncommon configuration for our customers. My network is fine the way it is; I just need to decide whether to use a non-uniform or uniform configuration for vMSC. We do recommend non-uniform, but in this case I am going to take advantage of the algorithms of PowerPath/VE and the proximity of my arrays and present each ESXi host with 6 paths from the R1 and 6 paths from the R2 (no rhyme or reason for 12 paths, but Dell EMC recommends at least 4 ports from each array for performance and redundancy).

Here is my volume for the VCF datastore in Unisphere, already running in an SRDF/Metro configuration with a witness (ActiveActive). The WWN of the device (the R1 WWN, which is also the external WWN of the R2) is 60000970000197600450533030303835.
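
If you want to verify that Metro state programmatically rather than in Unisphere, PyU4V can report it as well. Treat this as a sketch: the SRDF-related method names have shifted a bit between PyU4V releases (these follow the 9.1-era docs), so check the documentation for your version:

```python
# Sketch: confirm the storage group is in an ActiveActive (Metro) state.
# Method names differ slightly between PyU4V releases; names, IDs, and
# credentials below are hypothetical.
import PyU4V

conn = PyU4V.U4VConn(
    server_ip='unisphere.example.com', port=8443,
    username='smc', password='smc', verify=False,
    array_id='000197600450')

# Find the RDF group(s) the storage group belongs to, then report the
# pair state; 'ActiveActive' indicates SRDF/Metro with a witness.
for rdfg in conn.replication.get_storage_group_srdf_group_list('VCF_WLD_SG'):
    details = conn.replication.get_storage_group_srdf_details(
        'VCF_WLD_SG', rdfg)
    print(rdfg, details.get('states'))

conn.close_session()
```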

To show the ESXi host perspective I am using the PowerPath Management Appliance which allows you to view paths for each device at the host level. What is really nice about the appliance is that it is able to recognize SRDF/Metro devices. Note in the screenshot the highlighted array names, 450 and 355. And you can see I have a total of 12 paths, 6 from array 355 and 6 from array 450, all active in a uniform configuration. 
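
If you don’t have the PowerPath appliance available, you can approximate the same path count check against vCenter with pyVmomi. A rough sketch, with a hypothetical vCenter hostname and credentials (the NAA is simply the device WWN from above with the standard “naa.” prefix):

```python
# Sketch: count the paths each ESXi host sees to the Metro device.
# vCenter name/credentials are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)

device = 'naa.60000970000197600450533030303835'
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    sd = host.configManager.storageSystem.storageDeviceInfo
    # Map each SCSI LUN key to its canonical (naa.) name.
    lun_names = {lun.key: lun.canonicalName for lun in sd.scsiLun}
    for mp_lun in sd.multipathInfo.lun:
        if lun_names.get(mp_lun.lun) == device:
            active = [p for p in mp_lun.path if p.pathState == 'active']
            print(f'{host.name}: {len(mp_lun.path)} paths, '
                  f'{len(active)} active')

view.DestroyView()
Disconnect(si)
```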

So now we’re ready to run through the add workload domain wizard.

Workload domain wizard

As I parenthetically noted above, you have to start with the prerequisite of 3 new servers; I used 4 for my vMSC cluster, 2 for each array. Those servers must have ESXi already installed on them (there are other VCF prerequisites you must follow as well). The easiest way to create the datastore, therefore, is to present storage from the PowerMax through Unisphere and then use the vSphere Client to create the datastore. Be sure you remember to rescan the HBAs on the other hosts so they all recognize the new datastore.
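
If you would rather script that rescan than click through each host in the vSphere Client, a short pyVmomi loop handles it. Again a sketch, with hypothetical vCenter details:

```python
# Sketch: rescan all HBAs and VMFS volumes on every host so each one
# sees the newly presented PowerMax device. vCenter details hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    ss = host.configManager.storageSystem
    ss.RescanAllHba()   # discover the new device on every HBA
    ss.RescanVmfs()     # then pick up any new VMFS volumes

view.DestroyView()
Disconnect(si)
```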

Once the datastore is available, run the wizard in SDDC Manager to create the new VI domain, choosing the VMFS on FC option. 

You will be guided through the various steps until you are prompted for the datastore name. You’ll notice in the screenshot below that there is no drop-down, radio button, etc. You must type in the datastore name, and you must do it correctly. VMware did discuss the possibility of using some sort of selection capability, but it was fraught with potential issues, so it was decided to use this manual method instead.

And that’s it. VCF will deploy a new vCenter appliance on that datastore, along with a number of NSX appliances. The vCenter will use the same PSC as the management domain vCenter, so you will see both vCenters when you log into either, as shown here.

VCF will not duplicate the deployments in the management domain such as vROps or Log Insight. Once complete, your new VI domain (e.g. here FC-WLD) will show in the SDDC Manager.

Note that VMware automatically configures HA and VMCP for your cluster, though some values are a bit different from what I normally suggest in the vMSC best practices paper. You might choose to adjust them if desired.
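
For illustration, here is roughly how such an adjustment could be scripted with pyVmomi. The cluster name is hypothetical, and the VMCP values shown are placeholders only, not a restatement of the paper’s recommendations; consult the paper for the actual values:

```python
# Sketch: tune VMCP on the new VI domain cluster with pyVmomi. Cluster
# name and the VMCP values are hypothetical examples; consult the vMSC
# best practices paper for real recommendations.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'FC-WLD-Cluster')

vmcp = vim.cluster.VmComponentProtectionSettings(
    vmStorageProtectionForPDL='restartAggressive',    # example value
    vmStorageProtectionForAPD='restartConservative',  # example value
    vmTerminateDelayForAPDSec=180)                    # example value

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        vmComponentProtecting='enabled',
        defaultVmSettings=vim.cluster.DasVmSettings(
            vmComponentProtectionSettings=vmcp)))

# Apply the change; modify=True merges with the existing cluster config.
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)

view.DestroyView()
Disconnect(si)
```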

Supplemental storage

The final topic I want to discuss is supplemental storage. Now that I have my VI domain configured, I’m ready to run my production VMs on it. Chances are that the single datastore I used for my deployment will be insufficient to run my workloads, so I’ll want to add more datastores. Unfortunately, SDDC Manager and VCF as a whole can’t help me here when dealing with my external array. There are no VMware APIs that are able to call the REST API we have, so we have to present that storage another way. When we make storage available to the VCF environment in this manner, it is considered supplemental storage. Fortunately, as I explained at the start, this storage can be any of the vSphere supported types – FC, iSCSI, NFS, vSAN, and even vVols. Unisphere for PowerMax is the easiest way to provision storage, but you can use the CLI, the REST API, or even Virtual Storage Integrator (VSI). Let’s take a quick look at vVols as supplemental storage and then how to address SRDF/Metro expansion.

vVols

If I want to use vVols in my VI domain, the process is no different than if I were using them in any vCenter. First, I register my VASA Provider.

Then I create my vVol datastore:

Then I’m good to go. I can deploy VMs into this new datastore and it is part of my VCF environment.
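
Both steps are just a few clicks in the vSphere Client, but they can be scripted as well. As a rough sketch, once the VASA Provider is registered, mounting the vVol datastore on each host with pyVmomi might look like this (the storage container ID and datastore name are hypothetical placeholders for what your array’s VASA Provider presents):

```python
# Sketch: once the VASA Provider is registered, mount the vVol storage
# container as a datastore on each host. The container ID and datastore
# name below are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)

spec = vim.host.DatastoreSystem.VvolDatastoreSpec(
    name='PowerMax_vVol_DS',                    # hypothetical name
    scId='vvol:60000970000197600450-aaaabbbb')  # hypothetical container ID

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    host.configManager.datastoreSystem.CreateVvolDatastore(spec)

view.DestroyView()
Disconnect(si)
```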

SRDF/Metro expansion

If you wish to use supplemental storage that is also backed by SRDF/Metro replication, there are a couple of good options. The first is to use Unisphere, as I did in this example; the wizards are quite straightforward. As a VMware administrator, however, you may want to stay as closely tied to VCF as possible. Fortunately, you can. One of the optional components of VCF is vRealize Automation, which includes vRealize Orchestrator. Once you deploy that through VCF, you can install our vRO Plug-in for Dell EMC PowerMax, which includes dozens of workflows that mimic the Unisphere functionality. It offers the ability to provision single devices, all the way up to provisioning VMFS datastores to your VI domain. For SRDF/Metro, let’s say you wanted to create a new storage group for a new application in the VI domain. You would run the following workflows either in vRO or through vRA-created catalog items (a scripted approximation follows the list):

  • Provision VMFS datastore to ESXi Cluster (Create new storage group, mask it to the VI domain cluster, and create the datastore(s))
  • Create Storage Group SRDF Protection (Add SRDF/Metro to the existing storage group, and create a new storage group on the remote array)
  • Create Masking View (Create masking view on the remote array to present the SRDF/Metro devices to the VI domain cluster)
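
If you aren’t running vRA/vRO, the same three-step sequence can be approximated directly against Unisphere with PyU4V. All names and IDs below are hypothetical, and the SRDF method names vary slightly across PyU4V releases, so check your version’s docs:

```python
# Sketch of the three-workflow sequence with PyU4V instead of vRO.
# All object names, IDs, and credentials are hypothetical.
import PyU4V

conn = PyU4V.U4VConn(
    server_ip='unisphere.example.com', port=8443,
    username='smc', password='smc', verify=False,
    array_id='000197600450')

# 1. New storage group with a volume, masked to the VI domain cluster.
conn.provisioning.create_storage_group(
    srp_id='SRP_1', sg_id='App1_SG', slo='Diamond',
    num_vols=1, vol_size=500, cap_unit='GB')
conn.provisioning.create_masking_view_existing_components(
    port_group_name='VCF_PG', masking_view_name='App1_MV',
    storage_group_name='App1_SG', host_group_name='VCF_WLD_HG')

# 2. Protect the group with SRDF/Metro ('Active' mode); this also
#    creates the paired storage group on the remote array.
conn.replication.create_storage_group_srdf_pairings(
    storage_group_id='App1_SG', remote_sym='000197600355',
    srdf_mode='Active', establish=True)

# 3. Masking view on the remote array to present the Metro devices
#    (remote host/port groups assumed pre-created).
conn.set_array_id('000197600355')
conn.provisioning.create_masking_view_existing_components(
    port_group_name='VCF_PG_R2', masking_view_name='App1_MV_R2',
    storage_group_name='App1_SG', host_group_name='VCF_WLD_HG_R2')

conn.close_session()
```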

I have a demo of protecting a storage group in this post if you’d like to see how it goes. 

Wrap up

So we’ve managed to cover VMFS on FC as principal storage, specifically running on PowerMax in an SRDF/Metro configuration, and vVols as supplemental storage to either the management or workload domain (or both). This is certainly not the end of the VCF story with PowerMax. Given VMware’s greater focus on vVols, particularly in the vSphere 7 release with K8s and SRM, I suspect that one day we might get vVols as principal storage, because then VMware would have more control, akin to vSAN, since all calls are made through the VASA Provider. My post is only meant as a high-level discussion of these topics and what is possible. I’ve also been involved in an internal proof of concept for a customer specifically around SRDF/Metro with VCF, and I believe at some point that will result in documentation from our e-Lab organization that will touch on much of what I have here, only in greater detail. That environment, by the way, was on VxRail, thank goodness.
