This post is a continuation of what I’ll call my iSCSI series (if 3 is a series). A question I occasionally hear from customers is whether our management software – e.g. Solutions Enabler, Unisphere – can run on a VM using iSCSI without having to assign physical RDMs (be that with a Guest OS or vApp). The answer is yes. We do need Gatekeepers of course, since that is how commands are passed to the array, but if you use the iSCSI protocol there is a way to present those devices directly to the VM rather than adding them as pRDMs.
I’m using the iSCSI environment I previously documented so I’m not going to cover any of the iSCSI setup on the array here, only the changes needed in VMware.
I have installed a Windows 2016 VM for this because that is the more common OS I see with this request. The iSCSI tools differ between Windows and Linux, so if you go the Linux route the process won’t be exactly the same; there you would use open-iscsi instead.
I started by installing Solutions Enabler on the VM. You can see below that there are no devices and therefore no arrays recognized as I have no iSCSI devices or pRDMs presented to the VM.
The first thing we need to do for the setup is add a VM network that will be used exclusively for iSCSI. I need to create it on the switch that is dedicated to iSCSI so that I am on the same IP network. If you use a VLAN ID, be sure to set it as I have below.
For our VM therefore we are going to have two NICs – one for the public network and one for the iSCSI network.
After you have the new NIC added, assign the adapter an IP address on the iSCSI network – for me that’s something on 192.168.1.x, e.g. 192.168.1.50. Once you do this it is useful to ping one of the iSCSI IP interfaces on the array from the CLI on the VM, just to be sure you have connectivity.
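If you prefer to do this from PowerShell rather than the network control panel, a sketch like the following works. The adapter alias and both IP addresses here are assumptions for illustration; substitute your own values.

```powershell
# Assign a static IP on the dedicated iSCSI network
# (adapter alias "Ethernet1" and the addresses are examples -- adjust for your environment)
New-NetIPAddress -InterfaceAlias "Ethernet1" -IPAddress 192.168.1.50 -PrefixLength 24

# Verify connectivity to one of the array's iSCSI IP interfaces
Test-NetConnection -ComputerName 192.168.1.10
```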
So far so good. Now we need to use the Windows iSCSI initiator to add our iSCSI targets on the array. Think of this like the VMware iSCSI software adapter. Windows will give us an initiator name to use for our host and then we’ll be able to present GKs.
Windows iSCSI Initiator
On the VM GuestOS, open the Control Panel and the iSCSI Initiator.
First thing you want to check is the initiator name that Windows generated. This is under the Configuration tab. If you want to change it, simply select the Change option. I leave mine as is – it has the hostname already in it and is unique so good enough for me. I would not recommend renaming the initiator if you have ever used iSCSI on this VM with the PowerMax as you may have entries leftover on the array and you will get authorization errors when discovering the targets.
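As an alternative to clicking through the Configuration tab, you can pull the generated initiator IQN from PowerShell; this is the same value Windows shows in the GUI.

```powershell
# Show the iSCSI initiator name (IQN) Windows generated for this host
Get-InitiatorPort | Select-Object NodeAddress
```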
Now I’m going to use the Discovery tab to add the iSCSI IPs on the array. Select Discover Portal and add each IP in succession.
Here are all four added:
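The same discovery can be scripted instead of adding each portal by hand in the Discovery tab. A minimal sketch, assuming four array iSCSI IPs (the addresses are examples):

```powershell
# Add each array iSCSI target portal (IP addresses are examples)
"192.168.1.10","192.168.1.11","192.168.1.12","192.168.1.13" | ForEach-Object {
    New-IscsiTargetPortal -TargetPortalAddress $_
}
```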
With our targets discovered, we can provision some GKs to the iSCSI initiator. Before doing so, I recommend you enable MPIO if you have multiple targets as I do, otherwise you will see multiple entries in Disk Management for one device. To do so you have to add MPIO as a feature in Windows and then configure MPIO for iSCSI devices. A couple of reboots will be needed, but it’s straightforward. I skipped it here for this example.
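For reference, the MPIO setup I skipped can be sketched in two PowerShell steps on Windows Server (each step may require a reboot, as noted above):

```powershell
# Step 1: install the MPIO feature (reboot required)
Install-WindowsFeature -Name Multipath-IO -Restart

# Step 2 (after reboot): have MPIO automatically claim iSCSI-attached devices
# (a second reboot may be needed before multipathing takes effect)
Enable-MSDSMAutomaticClaim -BusType iSCSI
```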
Create Host and Provision Gatekeepers
Using the host wizard in Unisphere for PowerMax, I am going to manually add the iSCSI initiator name from above.
Now provision 6 GKs to the host:
Once provisioned, if we go back into the iSCSI initiator software on the VM and hit Refresh on the Targets tab, our iSCSI targets should appear.
They will have a Status of Inactive, so highlight each one and select Connect.
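Rather than highlighting and connecting each target in the GUI, you can connect all inactive targets in one pass from PowerShell, making the sessions persistent so they survive a reboot:

```powershell
# Connect every discovered target that is not yet connected,
# and keep the session across reboots
Get-IscsiTarget | Where-Object { -not $_.IsConnected } | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true
}
```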
After connecting each one, the GKs you presented will be available in Disk Management. Run a Rescan Disks and they will show up.
Once the disks appear, bring them online if they are offline. Then you can run a symcfg discover and list. SE will find the GKs and associated arrays, all without presenting a single RDM.
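The final rescan-online-discover sequence can also be done from the command line; a sketch (the Set-Disk step simply onlines any GK disks that came in offline):

```powershell
# Rescan for the newly presented GKs (equivalent of Rescan Disks)
Update-HostStorageCache

# Bring any offline disks online
Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false

# Then, from the Solutions Enabler install, discover and list the arrays
symcfg discover
symcfg list
```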