File on PowerMax 2500/8500

File

The new PowerMax has upgraded its NAS capabilities from the previous eNAS implementation and is now called PowerMax File. Because this release comes with new hardware, if you are running eNAS on a PowerMax 2000 or 8000, PowerMax File will be a new implementation. There will be a number of ways to bring your data over, including NDM, now renamed Data Mobility. PowerMax File is integrated directly into the array, whereas eNAS was essentially VNX code running on the PowerMax. With PowerMaxOS 10.0, the new operating system, File can be set up and accessed directly in the embedded Unisphere for PowerMax (external Unisphere does not support File). You can find File Configuration under the System menu as shown below.

Restrictions

Before proceeding I wanted to call out a couple of restrictions in this first release. Admittedly these aren't great, but they will be lifted in the future.

  • VLANs are not supported.
  • Jumbo Frames are not supported. MTU 1500 is the limit.

Most customers use VLANs, so that first one is a bit of a pain. For the second, the good news is that vSwitches in VMware default to an MTU of 1500, so at least you don't have to change anything 🙂 Don't worry about the switch, however. It's fine if the switch ports still use a larger MTU (which is generally the default, though there can be a small performance hit), since the array and the host each use 1500 to communicate.

NFS in VMware

Running NFS instead of VMFS is not a choice I commonly see enterprise customers make for their production environments; however, that does not mean it can't be a useful option for libraries or even test/dev workloads. NFS mounts can be made readily available across many hosts without any concern about FC zoning to the array. Personally, I use it for ISO repositories (content libraries in vSphere) and to move VMs between vCenters. Yes, there is cross-vCenter Storage vMotion now, but honestly I've never had much luck with it, as my migrations always seem to die in the middle after taking too long, so I've given up on it.

Configuring File

Configuring File is aided by multiple wizards in the top panel where it says Initial Configuration. Depending on your configuration you may have more nodes than the two present in my setup. In my example here I am going to use two different PowerMax arrays – 302 and 598. I will do the primary configuration on array 302 and use 598 as the remote failover array.

My network setup is a very simple one given the limitations of the lab where my systems reside. In addition to my two arrays, I have two vSphere clusters in separate vCenters, each cluster with four ESXi hosts. All of the arrays and hosts are connected to the same switch. The hosts have 10 Gb connections, while the arrays have 25 Gb; the switch handles the disparity in speed. I left the switch ports at an MTU of 9216 as I mentioned earlier, but there are no Jumbo Frames on the array network devices or the hosts. The array MTU can't be modified, so best not to spin your wheels looking for a secret setting. Here you can see the configuration.

I then created a separate vSwitch (for the cabled vmnic) on each ESXi host with a VMkernel portgroup, setting the MTU to 1500. All my arrays and hosts use the network 192.168.2.0/24 since the switch is private. In order to use replication my arrays must be able to communicate, so they must be on the same network. Routing is possible, but that is an advanced configuration beyond this blog post. A sketch of the host-side setup follows.
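If you prefer to script that host-side setup, here is a minimal PowerCLI sketch. The vCenter name, host name, vmnic, portgroup name, and IP address are examples standing in for my lab values, so substitute your own.

```powershell
# Minimal PowerCLI sketch; vCenter, host, vmnic, portgroup, and IP are example values
Connect-VIServer -Server "vcenter01.lab.local"

$vmhost = Get-VMHost -Name "esxi01.lab.local"

# Standard vSwitch on the cabled NIC; MTU stays at the default of 1500 (the array does not support Jumbo Frames)
$vs = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-NFS" -Nic "vmnic2" -Mtu 1500

# VMkernel portgroup for NFS traffic on the private 192.168.2.0/24 network
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs -PortGroup "NFS-192" `
    -IP "192.168.2.101" -SubnetMask "255.255.255.0" -Mtu 1500
```

Repeat for each host, changing the VMkernel IP as appropriate.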

I harp on the MTU because this isn't some "nice to have" configuration. If you fail to match the MTU from host through array (the switch aside), you'll have no issue mounting NFS exports or creating NFS datastores, but you will have performance issues.
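A quick PowerCLI audit (the cluster name is an example) can confirm that every host's VMkernel ports really are at 1500:

```powershell
# List every VMkernel port and its MTU for all hosts in the cluster
Get-Cluster -Name "Cluster-A" | Get-VMHost | Get-VMHostNetworkAdapter -VMKernel |
    Select-Object VMHost, Name, IP, Mtu | Format-Table -AutoSize
```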

Wizards

There are a lot of steps in the wizard, which means lots of screenshots. To keep the size down I am going to make the images small, so you'll have to click on them if you want to see the detail; otherwise you'd be scrolling constantly. I will go back to large images for the VMware ones as they will be of more interest.

I’m going to run through a local setup of an NFS export and subsequent datastore creation in VMware. I’ll address replication in a separate post.

Here are my two nodes. As I said you may have more but the process will be the same.

Start by configuring the subnets. In step 1 select the Subnet Configuration button.

Modify the values per your network as I have done below. If you have four nodes you can use more than one subnet, since each subnet must be used on at least two nodes.

Next, create a NAS server. Select Create in step 2.

Enter the details for the NAS server – provide a name, primary and backup nodes, and an SRP (usually you’ll have just the one). Select Next.

First, use the drop-down to select the proper subnet, then use the radio buttons to pick the primary and backup nodes. Again, I only have two nodes so that makes it easy. Note below where the MTU is displayed again. Finally, provide an IP address and gateway; this IP is the one that will be used in the exports. Select Next.

Select the protocols you want to support with this NAS server. I will only use it for NFS so that’s what I chose. Although my vCenters are 7.x and I’m unlikely to need NFSv3, I selected it anyway just in case. Select Next.

Enable DNS if so desired and select Next.

Review your inputs and select Run in the Background.

"Run in the Background" is a new Unisphere option (rather than just "Run") for longer-running jobs. It allows you to navigate away from the screen rather than wait for the dialog to complete.

The NAS server will appear below when done.

Now select the FILE SYSTEMS tab and select Create in step 3.

Choose the type of FS. I’ll assume you’ll start like me and select the VMware one. Select Next.

Use the radio button next to the NAS server and select Next.

In the following screen put in a name and size at a minimum (the required fields), but you can also specify a description and service level, and choose whether to disable data reduction and enable thresholds (alerts). Generally Dell does not advise changing the default IO size of 8 KB; however, if you know you will be doing larger IOs like 16 or 32 KB, feel free to modify it. Select Next.

In the next screen choose whether to configure the export. It's easiest to do this now rather than later, as I assume you want to create an NFS datastore after the configuration. Fill in the export name and description if desired and hit Next.

The Configure Access screen is really not optional if you want to use the NFS mount. Yes, you could leave the defaults and move on, and then spend hours trying to figure out why you can't create an NFS datastore. Assuming for the moment you aren't setting up something advanced like Kerberos and just want VMware to be able to use the export, change the Default Access to "Read/Write, allow Root" as below. The "allow Root" part is critical because that is how VMware mounts the NFS datastore: as root. You can also specify access by individual ESXi host if you wish to limit availability. That's it here, click Next.

Review and run the job in the background again.

The new FS and export are ready for use.

Let's create an NFS datastore on this FS. Go through the normal create datastore wizard in vCenter: change the radio button from VMFS to NFS in step 1 and select NEXT.

In step 2, select the NFS version. Here I am going to select NFS 4.1 since I don’t have any older ESXi hosts. Select NEXT.

In the NFS share details, input a datastore name (it can be anything), the folder name from the export created in the earlier wizard, and the IP of the NAS server, then hit ADD and then NEXT.

Since I am using NFS 4.1 I am asked about Kerberos authentication. As you saw in the other wizard, you could have configured that, but since I did not, I move forward here.

In Step 5 choose which hosts in the cluster should mount the share. I’ve selected all of mine.

Finally review the summary and hit FINISH.

The result below is that all my hosts have the new 1 TB NFS datastore mounted.
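As an aside, the mount can also be scripted with PowerCLI rather than clicking through the wizard. This is just a sketch; the cluster name, datastore name, NAS server IP, and export path below are placeholders for the values created in the wizards above.

```powershell
# Mount the same NFS 4.1 export on every host in the cluster
# Cluster, datastore name, NAS server IP, and export path are placeholder values
Get-Cluster -Name "Cluster-A" | Get-VMHost | ForEach-Object {
    New-Datastore -Nfs -VMHost $_ -Name "PMAX-302-NFS01" `
        -NfsHost "192.168.2.200" -Path "/nfs-export-01" -FileSystemVersion "4.1"
}
```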

And there you have it, the new local File functionality with VMware. One more thing – VAAI.

VAAI

VAAI with NFS doesn't offer all the same functions as with VMFS, but the concept is the same – push tasks to the array. Unlike block storage, VAAI for file is not supported out of the box; all NFS file systems, regardless of vendor, require a NAS plug-in installed on the ESXi host. Fortunately, the plug-in code is generic, so the same plug-in used for the Dell Unity platform can also be used for PowerMax. Note that the plug-in works with either NFS version 3 or 4.1.

Features on NAS include NFS Clone Offload, extended stats, space reservations, and snap of a snap. In essence the NFS clone offload works much the same way as XCOPY as it offloads ESXi clone operations to the array.

There are three different versions of the plug-in depending on which ESXi version is running: 4.0.1 for 7.0.1+, 3.0.2 for 7.0+, and 3.0.1 for 6.7+. Version 4.0.1 is the only one that does not require a reboot after installation. To install the plug-in, download it from Dell support; it is delivered as a VMware Installation Bundle (vib). The plug-in can be installed through VMware vCenter Update Manager or through the CLI. Be sure to check for any existing NAS plug-in, as it must be removed before installing the new one. If installing version 4.0.1, stop and start the vaai-nasd service as shown below; otherwise reboot for the other versions.
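For the CLI route, here is a sketch using esxcli through PowerCLI (Get-EsxCli). The host name and depot path are examples; point the depot at wherever you uploaded the bundle downloaded from Dell support.

```powershell
# esxcli via PowerCLI; host name and depot path are example values
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.lab.local") -V2

# Check for a previously installed NAS plug-in; it must be removed before installing the new one
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -match "nas" }

# Install the offline bundle (path is hypothetical)
$esxcli.software.vib.install.Invoke(@{ depot = "/vmfs/volumes/datastore1/DellEMCNasPlugin.zip" })
```

Then stop and start the vaai-nasd service for version 4.0.1, or reboot the host for the older versions, as noted above.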

Windows

Some customers won't use NFS with ESXi directly; instead, they will mount the export from a Windows VM. In that case here is some important information.

When creating a general FS in File (meaning you choose General instead of VMware), you should still set permissions the same way, i.e., allow root (again assuming you don't need a more expansive security setup). However, if you want to read and write files on the FS from a Windows box, be it physical or virtual, you'll need to add a couple of entries to the registry. Much like a VMware NFS mount with the wrong permissions, you will be able to mount the NFS export from Windows, but you will get access denied errors when trying to do anything in it. The reason is that by default Windows presents a UID and GID of '-2', while root is identified as '0' for both values. So if the permissions are minimal (as I have configured them), you can always mount the export, you just can't use it. We need to tell Windows to send '0' instead of '-2' by setting the registry. The two values to set in:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default

are:

  • DWORD: AnonymousGid = 0
  • DWORD: AnonymousUid = 0

I’ve shown these values below:

Microsoft says restarting the NFS client will be enough for the change to take effect. You can do this by running these commands:

  • nfsadmin client stop
  • nfsadmin client start

In my experience, however, I had to reboot the operating system. After the reboot I could read and write files on the NFS export.
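For completeness, here is the same change as a small PowerShell sketch, run elevated on the Windows machine that mounts the export:

```powershell
# Set the anonymous UID/GID the Windows NFS client presents to 0 (root)
$key = "HKLM:\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default"
New-ItemProperty -Path $key -Name "AnonymousUid" -PropertyType DWord -Value 0 -Force
New-ItemProperty -Path $key -Name "AnonymousGid" -PropertyType DWord -Value 0 -Force

# Restart the NFS client; a full OS reboot was required in my testing
nfsadmin client stop
nfsadmin client start
```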

Replication

I’ve covered replication separately here if you want to keep going.
