VMware and iSCSI

The topic for this post is not one I’ve spent much time on, either in testing or documentation. It’s not that we don’t have customers that use iSCSI, but they are few and far between, and the subset of those who use VMware is even smaller. We tend to see iSCSI (and NFS for that matter) in the mid-tier products while FC is dominant in enterprise-class arrays like PowerMax. But as the new embedded VASA (eVASA) 3.0 implementation for vVols supports iSCSI, I thought it worthy of a discussion. I’ll do two posts, the first being an overview of an iSCSI implementation on the PowerMax with vSphere before moving on to using it in a vVol environment. In particular, I thought I’d demonstrate how you might use both FC and iSCSI protocols in a single vCenter environment, whether by design or as a migration strategy from one to the other. As is typical in my experience, this was borne of a customer inquiry, and if one customer is asking about it, more will follow. And what better way to codify my answer than a blog post I can point my colleagues to. But let’s not get ahead of ourselves. On to iSCSI.

Pre-requisites

Let’s start with the pre-requisites. You’ll need a director on your PowerMax that supports iSCSI. My box started with a 10Gb director (known as a Rainfall SLIC), but we support 25Gb now so it was upgraded to that (known as a Clearsky SLIC). I mention the change in speed as it will play a role in our configuration. The emulation for iSCSI is “SE”, so on my box I have 4 ports configured on each director, though I only cabled 2 ports on each as you can see below. I am using a Dell EMC PowerSwitch S Series 25Gb switch. My ports on the switch are set to 25Gb, I do have a VLAN set (625), and I am using jumbo frames. Most importantly, I have trunked the ports. Here is the Unisphere view:

With the enabled ports, I can now configure some IP addresses. My switch is exclusively for iSCSI, so I will be using private IP addresses on both the array and the ESXi hosts. Configuring iSCSI is really easy on the PowerMax as there’s a wizard, like so many other functions in the application. I’m going to run through a quick example of how to do this, but my friend Jim has a detailed whitepaper which is worth a look if you haven’t done iSCSI before: https://www.delltechnologies.com/en-us/collaterals/unauth/white-papers/products/storage/h14531-dell-emc-powermax-iscsi-implementation.pdf.

iSCSI Wizard

The wizard can be accessed from the iSCSI Dashboard through the menu on the left System -> iSCSI. Although I have pre-configured my environment (you’ll see I have 4 Targets and 4 Interfaces already below), I’ll run through an example. When I did this initially, I ran through it 4 times, for my 4 ports. But for the example, start the wizard at step 1.

After the wizard starts, we need to create a target. Select the director (in my case it’s either 1 or 2, though you may have more). I assign a Network ID of 60. This number needs to be unique, but it is otherwise arbitrary, so feel free to use a numbering that makes sense for you. I am using the default TCP port. Importantly, I am going to let the wizard assign the Target Name. You can check the box and create a custom name; however, be warned that if you are using vSphere 7, it is more unforgiving than vSphere 6 when it comes to the naming convention.

I’m getting ahead of myself, but in vSphere when you add the static target with a wrongly formatted custom name, you’ll get this error:

Operation failed, diagnostics report: iScsiException: status(c0000000): Invalid parameter; Message= IMA_AddStaticDiscoveryTarget

You can attempt to fix the custom name, but I found it much easier to let PowerMax generate it.

If you are using the PowerMax CSI DO NOT rename the Target Name. CSI is even more unforgiving than vSphere 7.

Here is the step to add the Target:

In step 3 we will add an IP Interface. Here I provide:

  • The director
  • An internal IP (remember the switch is private)
  • A Prefix (subnet)
  • A Network ID (filled in automatically from my last step)
  • A VLAN ID (in this example I just used none)
  • And my MTU which is 9000 for jumbo frames

The final screen is a summary. By default, the iSCSI target will be enabled when it is attached to the IP interface (which you want).

I have provided the detail below post-creation so you can see the target name which PowerMax generated. This is the name we will use when we add the static iSCSI targets in vCenter, and the name that will not cause us any grief.

Having shown the example, I removed this interface and target. Here are my actual targets and interfaces with the names and IPs we need for VMware. Note my actual VLAN is not 0 but 625.

So, let’s move on to the server side now that our array setup is good.

ESXi Servers

Let’s start by harkening back to my comment about the ethernet speeds and the switch. I need to cable up my ESXi hosts to that same switch (again I have a simple config), but my NICs are only 10Gb. It would be best to have comparable speeds, but I have to work with what I’ve got. Fortunately, on the Dell switch (as is the case with most switches) I have the ability to change the port speed down when I cable up my NICs. So, like the array ports, I set the proper VLAN, set the speed (10Gb in this case), and trunk the ports. Now even though I have different speeds, the switch is going to negotiate down or up (depending on the direction) for me automatically. Yes, this would be an issue in a production environment since I am throttling the array ports, but in my lab I just want it to work, and it does. As an aside, I can’t cable my 25Gb directors directly to a 10Gb switch, since those ports will not negotiate down to 10Gb.

Flow Control on the NIC

The PowerMaxOS iSCSI implementation does not support Priority Flow Control (PFC); however, there is no issue if it is enabled on the NIC. When I say it does not support it, I mean there is no code to take advantage of it. So if it is enabled on the NIC (the default for many), the code essentially ignores it. In other words, fear not, you can’t break anything. I believe some of our host configuration guides say to disable it on the NIC, which is fine; I just tend to leave things at their defaults unless I have to change them.

The PowerMax does support Pause Frame Flow Control if you have a need of that. 
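If you want to see what flow control is currently set to on your NICs, you can check from the ESXi shell. This is just an informational sketch; `vmnic5` matches the NIC I use later in this post, so substitute your own.

```shell
# Show the pause (flow control) parameters for all NICs on the host.
esxcli network nic pauseParams list

# Or inspect a single NIC in detail; the output includes the
# Pause RX / Pause TX settings along with speed and driver info.
esxcli network nic get -n vmnic5
```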

Virtual Switch

So, the first thing I’ll do on my ESXi host is to create a new virtual switch and associate it with my 10Gb NIC. Recall my VLAN is 625 so I need to set that on the VMkernel port. I am using the same IP address range and ignoring the gateway and DNS (though you’ll still see them). For my server, therefore, vSwitch1 is associated with vmnic5 (my 10Gb as you can see below) with an IP of 192.168.1.10 and a VLAN of 625. The only adjustment I made on the vSwitch itself from the defaults was to increase the MTU to 9000 (jumbo frames) to match the array (and switch). I didn’t bother including the wizard below as I’m confident you all know how to create a virtual switch in vCenter.
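For those who prefer the command line to the vCenter wizard, the same vSwitch setup can be sketched with esxcli from the ESXi shell. The port group name (`iSCSI-PG`) and VMkernel interface (`vmk1`) are my placeholders; the NIC, IP, VLAN, and MTU values match the configuration described above.

```shell
# Create the standard vSwitch and attach the 10Gb uplink.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic5

# Raise the vSwitch MTU to 9000 for jumbo frames, matching the array and switch.
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Create a port group on VLAN 625 (iSCSI-PG is a placeholder name).
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-PG
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-PG --vlan-id=625

# Add a VMkernel port with a 9000 MTU and a static IP in the private range.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-PG --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.1.10 \
    --netmask=255.255.255.0 --type=static
```

Either way you do it, the end result is the same VMkernel port the iSCSI adapter will bind to.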

Ping

After you have the vSwitch, it’s a good time to test connectivity between the array and the servers. You can do this by a function in Unisphere that allows you to ping a remote IP from one of the IP interfaces. Below I’m highlighting the 192.168.1.100 interface, then selecting “Ping Remote IP”. I put in the IP of the vmnic5 above I just configured and you see it can successfully ping it.

If you get a timeout, there can be many issues. Just a few things to check:

  • Are the ports trunked on the network switch?
  • Is the speed properly set on the switch ports?
  • Did you use the correct netmask on the VMkernel port?
  • Did you use the correct VLAN on the vSwitch? A special note about this: I have seen situations where, if there is only a single VLAN set on the network switch, you should leave the VLAN ID empty on the vSwitch in VMware rather than setting it to the switch’s VLAN ID. I have 2 VLANs on my switch, so I had to set it.
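You can also run the test in the other direction, from the ESXi host toward the array, with vmkping. The `-d`/`-s` combination is a handy way to verify jumbo frames end to end; `vmk1` is whatever VMkernel port you created for iSCSI.

```shell
# Ping an array IP interface from the iSCSI VMkernel port.
# -I selects the VMkernel interface, -d sets "don't fragment",
# and -s 8972 is the largest ICMP payload that fits a 9000-byte MTU
# (9000 minus 28 bytes of IP and ICMP headers).
vmkping -I vmk1 -d -s 8972 192.168.1.100
```

If the plain ping works but the jumbo-sized one fails, the MTU is mismatched somewhere along the path (vSwitch, VMkernel port, network switch, or array interface).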

Software iSCSI Adapter

The next step is to add the software iSCSI adapter if it does not already exist (it doesn’t by default). This is done from the Storage Adapters screen. Just select “+ Add Software Adapter”, hit OK, and you’re good.

It may take a few minutes to create, but then you’ll see it added as an adapter which you can view. You’ll show 0 Targets, unlike mine which shows the 4 I already configured.
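The equivalent from the ESXi shell, if you prefer, is a one-liner to enable the software initiator, followed by a couple of commands to confirm it and find the adapter name you’ll need later:

```shell
# Enable the software iSCSI initiator (same effect as "+ Add Software Adapter").
esxcli iscsi software set --enabled=true

# Confirm it is enabled.
esxcli iscsi software get

# List adapters to find the new vmhba name (often vmhba64 or similar).
esxcli iscsi adapter list
```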

Our final step in the configuration is going to be to add the iSCSI array targets in vSphere. You can do this one of two ways: Dynamic or Static Discovery. I’ve had much better luck with static discovery so I’m going to show that. Start by getting the IPs of the interfaces from the array along with the iSCSI target name. Pull this from the IP Interfaces screen:

Back in the vCenter, highlight the iSCSI Software Adapter, navigate to the Static Discovery tab, and select “+ Add”.

Now enter the IP address and target name, leaving the port as default.

Continue this for each target. Here is the final list of all 4 of mine:
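The same static targets can be added with esxcli if you are scripting across hosts. The adapter name (`vmhba64`) and the IQN below are placeholders only; use the adapter name from `esxcli iscsi adapter list` and the exact target names the array generated (shown on the IP Interfaces screen).

```shell
# Add one static target per array IP interface; repeat for all four.
# Both vmhba64 and the IQN here are placeholders -- substitute your own.
esxcli iscsi adapter discovery statictarget add \
    --adapter=vmhba64 \
    --address=192.168.1.100:3260 \
    --name=iqn.1992-04.com.emc:target-name-from-unisphere

# Verify the configured static targets.
esxcli iscsi adapter discovery statictarget list --adapter=vmhba64
```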

Provisioning

And that should be it. To bring things full circle, let’s provision a device using our new iSCSI interface. First thing we need is to create an initiator group for our ESXi server. With the iSCSI software adapter, we just have the one initiator which you can find as part of the iSCSI Software Adapter. You can see I’ve boxed it in red below.

Using the create host wizard in Unisphere, add a new initiator group:

I am going to provision a single 1 GB LUN to the host through the 4 ports. The provisioning wizard is no different for iSCSI so I am not including it here in full; however, one thing you may find is that the first time you provision to the host, the ports will not be visible so you’ll need to check the box below to show the non-visible ones.

After I provision, I do a rescan of the adapter and my device shows up on each path.
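The rescan and path check can also be done from the shell, which is handy when you have several hosts. Again, `vmhba64` is a placeholder for your software iSCSI adapter name.

```shell
# Rescan only the software iSCSI adapter for new devices.
esxcli storage core adapter rescan --adapter=vmhba64

# List the paths; with 4 targets you should see 4 paths to the new device.
esxcli storage core path list

# Or check the multipathing summary per device.
esxcli storage nmp device list
```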

To finish things off, I expanded the 1GB to 50GB to make it viable for a datastore. Here is what the datastore wizard looks like selecting that device.

Now that iSCSI is ready, in an upcoming post I’ll talk about using iSCSI with vVols.
