vSphere Integrated Containers with VMAX

Containers have been around for years at this point, but honestly it wasn't until Dell EMC World this year that someone asked us if we were doing anything with them on VMAX. Cue the new project, or at least the investigation. I've probably mentioned I tend to have my hands full with the normal VMAX/VMware work, but when I had a chance to look at containers in the context of VMware it made the case for digging in much easier. I had never looked at containers until this point, so I'm quite new at it. There are two different avenues a container investigation can follow. The first is to look at a very traditional implementation on Linux or even Windows physical boxes. The other is to see how VMware approaches containers within an existing vSphere infrastructure. Some of my colleagues took the former, so I took the latter.

I'm going to approach this post from a beginner's perspective since, frankly, that's what I am. I've done a little polling, figuring I was behind in looking at containers, but it seems that in the space I work in many of my colleagues haven't had a chance to look at them either. I've found the ones who know containers best are the developers, and as we talk about them you'll see why.

Taking it from the start then, what are containers? At a high level a container is a virtualization technology, just like VMware. More specifically, a container is a running instance of an image, an executable package of software. The image has everything the software needs to run: libraries, tools, code, runtime, etc. It is a self-contained, functioning package. Think of it as a program running on your Linux or Windows host. Since most of us know something about VMware, let's use that to help explain what containers are and what they are not. A container is, after all, a lot like a VM. A container, though, doesn't need a hypervisor, nor a Guest OS. Each one of these images can be run on any Linux/Windows OS, and many can run simultaneously, yet independently, on the same box. Here is my basic representation of the differences between VMs (left) and containers (right).

So what VMware does is take a physical host and enable it to become many hosts, or VMs. The VMs are just like real hosts in the sense that they have an OS, CPU, memory, and disk. They can have many GBs of storage assigned and potentially use lots of resources. Containers, on the other hand, run directly on the same OS kernel as any other program, yet like a VM each one is independent. They are lightweight, use very little storage (think MB vs GB/TB), and start up almost instantaneously. What if I want to run the container image on a VM – virtual on virtual so to speak? Sure, you can do that too. A couple of important points really set a container apart from a VM. A container doesn't boot up like a VM since it is simply a process that runs on the host: when the process starts, the container is running; when the process ends, the container stops. If a container writes data, it is non-persistent, meaning when the container is deleted you lose the data. Now it is possible to attach a volume to a container (hint: on VMAX of course) and then the state can persist since it is independent of the container. All the things a container uses – the binaries, tools, etc. – are part of an image, which itself cannot be changed (though nothing stops you from building new images).
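To make the persistence point concrete, here is a minimal sketch using plain Docker (outside of VIC), assuming a Linux host with Docker installed; the volume name mydata is just an example:

docker volume create mydata
docker run -it -v mydata:/data ubuntu bash

Anything written under /data lands in the named volume and survives the container being removed; anything written elsewhere in the container's filesystem is gone when the container is deleted.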

Now as I mentioned in the beginning, developers are one of the groups that can benefit greatly from containers. A single physical server can host many containers, and each one might be a different environment of the same program – e.g. test, dev, sandbox, etc. Building them up and tearing them down is a quick process, which is ideal during development when code changes so frequently. And remember, each container knows nothing about the other containers on the box (though you could have many containers working together), providing pristine environments in which to work. Some might say they can operate VMs in just the same way, and you'll get no argument from me if that makes sense for your business.
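Just to illustrate how quick that build-up/tear-down cycle is with plain Docker (the container name here is only an example):

docker run -d --name test-web nginx
docker rm -f test-web

The first command pulls the nginx image if it isn't already local and starts the container in seconds; the second tears it down just as fast.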

I want to provide a practical example of using containers with VMware, all running on a VMAX array. VMware’s take on containers is called vSphere Integrated Containers.

vSphere Integrated Containers

VMware wanted to make it easy for existing VMware customers to deploy containers, so they created an open source project on GitHub called vSphere Integrated Containers, or VIC for short. VIC uses the well-known Docker commands. Now VIC is a bit different from what I was talking about above: it's not a pure container play, nor is it a container on a VM. VIC is part of vSphere (6.5) and comprises 3 components, with the official names in parentheses:

  • VIC Engine (VMware vSphere Integrated Containers Engine) – Exposes vSphere objects as container primitives. VMware calls the engine a “Docker façade”. You use a vSphere tool called vic-machine to deploy a virtual container host (VCH) which is a vApp. Inside the vApp there is a small VM that is the Docker endpoint (a Linux VM running Docker basically). Using that IP, Docker images are pulled from the repository and a small VM is created in the vApp.
  • Harbor (VMware vSphere Integrated Containers Registry) – An enterprise Docker registry. It is delivered as an OVA file. Harbor can store the Docker images locally for enterprise customers. Each component of Harbor runs as a container (big surprise). If you want more detail, VMware has an engineering blog post here.
  • Admiral (VMware vSphere Integrated Containers Management Portal) – This component is an extension of vRealize Automation starting in 7.2. Therefore you can deploy containers from within vRA. You can run Admiral standalone, too.

The VIC Engine is the heart of the deployment. Not to get ahead of myself too much, but when you initially deploy the virtual container host you will get a vApp with the VCH within it:

From that point, all deployed containers will appear as small VMs under that same vApp, vch10 (my name), each booting from an ISO that contains VMware's Photon OS, a very lightweight Linux OS. These VMs are your containers, so you can think of vSphere as your container host. (Yes, I just contradicted myself when I told you above that containers were not VMs – so amend it to "outside of VIC".) You can create many different vApps if you wish, each with their own containers. So you can see VMware's model is designed specifically to help existing customers deploy containers in a manner that already makes sense to them.
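In practice you talk to a VCH just like any other Docker endpoint. A hedged example, with the VCH address as a placeholder (the exact TLS flags depend on the certificate options chosen when the VCH is created):

docker -H <vch_address>:2376 --tls run -d nginx

Run that and a new small VM appears under the vApp in vCenter; that VM is the container.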

We could go over the details all day (because there are a lot), but I think it is easier to walk you through what I did so you can set it up yourself if you want and explore it. I used the latest VIC, which is version 1.1.1, on vSphere 6.5 (VIC is included with the vSphere Enterprise Plus license, or vSphere with Operations Management) in conjunction with vRealize Automation 7.3. I re-used my VVol environment for this, though as an aside VVols are not supported storage for VIC. I'll try to keep it simple because the VMware documentation is good and will cover all the details. I just want to give you a bit of a head start.

Install

I’m going to follow VMware’s general steps. Start by downloading the OVA file here. Deploy it in a vSphere 6.x environment providing the necessary information – basically if you are OK with the default ports, you just need to supply an IP and passwords for some users. The deployment of the appliance will provide the following:

  • Runs vSphere Integrated Containers Registry
  • Runs vSphere Integrated Containers Management Portal
  • Makes the vSphere Integrated Containers Engine binaries available for download
  • Hosts the vSphere Client plug-in packages for vCenter Server

Once the installation is complete, if you plan on running the binaries that are used to create the virtual container host (VCH) from Windows (or a Mac), navigate to the new host at https://vic_appliance_address:9443 and pull down the binaries bundle. There will be 3 files listed, but only the bundle (vic_1.1.1.tar.gz) is needed, as the plug-in files are included in its directory structure. The tarball can be uncompressed and extracted and placed on the Windows host.

If you are using Linux, it is easier to use curl to pull down the tarball, as I did on my Ubuntu VM host. The command is:

curl -k https://vic_appliance_address:9443/vic_1.1.1.tar.gz -o vic_1.1.1.tar.gz

and to extract:

tar -zxf vic_1.1.1.tar.gz

These files are used to create the virtual container host (VCH). You can see in my shot below that I list the binaries and even show the files we will use to install the Web Client plug-in.

The steps to install the client plug-in (VCSA or Windows) using the files above can be found here. The plug-in is compatible with both the HTML5 client and the Flex-based Web Client. Feel free to install it in both as I did; if you do, you'll see 2 entries in the plug-in administration screen.

The plug-in displays as an icon in the Home page of both clients. I’ll show how it works after we deploy a VCH appliance.

With the plug-in installed, we can now deploy a VCH appliance. There are some prerequisites to complete before creating the appliance. Most are just checks, but there are two you probably need to complete. The first is to create a port group on a distributed switch. Each VCH needs one. You can also create a port group for the containers; I did that, assigning different VLANs to each. The second is to open a firewall port on the ESXi hosts. Fortunately you can use the vic-machine command from the binaries we downloaded to do this. As I am using Linux, it is: vic-machine-linux update firewall --allow (a sketch of the full command follows below).
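Besides --allow, vic-machine update firewall also needs to know which vCenter and cluster to act on. Here is a sketch, with the target, user, and compute resource as placeholders mirroring the create command coming up:

vic-machine-linux update firewall --target 10.xxx.xxx.26 --user Administrator@vsphere.local --compute-resource HA --allow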

Now we can deploy the VCH. You use the same vic-machine-linux (or -windows, -darwin) command to do this. I am going to stray a little from the documentation because VMware does not include the creation of a persistent volume store in its example syntax. I want that volume on my VMAX AFA so I can preserve data, so I add another switch, --volume-store, and supply my VMFS6 datastore (you can see it in the command below). In the command my vch10 name is arbitrary – use whatever makes sense to you.

vic-machine-linux create --target 'Administrator@vsphere.local:password'@10.xxx.xxx.26 --compute-resource HA --image-store 497_INFRA_VMFS6 --bridge-network 'vic-bridge' --public-network 'VM Network' --public-network-ip 10.xxx.xxx.228/22 --public-network-gateway 10.xxx.xxx.1 --management-network 'vic-bridge2' --client-network 'vic-bridge2' --client-network-ip 10.xxx.xxx.229/22 --dns-server 10.xxx.xxx.23 --volume-store=497_INFRA_VMFS6:default --force --name vch10

This will create the vApp and VM I showed earlier.
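When the create finishes, vic-machine prints the Docker endpoint details for the new VCH. As a quick sanity check you can query that endpoint with a standard Docker client, something like the following (using the client network IP from the create command; the exact TLS flags depend on the certificate choices made at create time):

docker -H 10.xxx.xxx.229:2376 --tls info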

Returning to the plug-in now that we have a VCH, in the HTML5 client you can see it shows us the VCH and the number of containers, if any.

There is also a Virtual Container Host portlet in the Summary tab for the VCH. It has a link to the Admin page and Docker endpoint.

So now we are ready to create some containers. There are a number of ways you can do this. The most straightforward is to use the management interface (Admiral) directly available through the OVA we deployed. Navigate to the IP of your deployed appliance and select the Management tab.

 

Instead of using that, however, let's bring in the final integration piece, vRealize Automation. The nice thing about using vRA is that if you already use it for your provisioning needs, it makes adding containers to your environment easier. Here I've logged in as the default tenant configurationadmin and navigated to the "Containers" tab and then the Templates menu. Now this tab is not a plug-in; it is in the product as an extension (7.2, 7.3), so you don't need to do anything to enable it. Whether or not you use containers, it is there. Be aware that although the interface is the same as the standalone version in the previous screenshot, they do not share the same information, except for the templates. In other words, if you add a host in vRA it will not show up in the standalone version, or vice versa.

We do have a couple configuration steps to complete, however, before we deploy our container. First, you need to configure your VCH host under the Resources/Hosts menu on the left. This is just a three-step process. Select ADD A HOST and then supply the correct information (easiest to use certificates for credentials), verify, save, and now you have a VCH host you can use to deploy containers.

Second, a placement policy must be configured so that when you deploy a container, it knows where to go. This one is just drop-down boxes and a name.

And that’s it. We’re ready to deploy the containers. On the templates page, select which one you wish to provision. I’m going to deploy Ubuntu in my example. I select Provision, provide the correct business group, and hit Provision. Wait a few minutes (or less) and it will be ready.

Pretty cool, huh? Now depending on the template, you may have VMDKs created in your VMFS datastore on the VMAX array. The boot is still going to be the image file, but the container may use other storage. Remember that we specified a volume store when creating the VCH, so it will be available when a template requires persistent data. The files will be located in: <datastore>\VIC\volumes\<id>. Here is an example of my Redis template deployment. It has a single disk.
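If you want to drive that persistent storage yourself rather than letting the template do it, the VCH exposes the volume store to the standard Docker volume commands. A hedged sketch, assuming the "default" volume store label from my create command and a volume name of my own invention:

docker -H 10.xxx.xxx.229:2376 --tls volume create --opt VolumeStore=default --name redis-data
docker -H 10.xxx.xxx.229:2376 --tls run -d -v redis-data:/data redis

The resulting VMDK lands under the VIC\volumes folder on the VMAX-backed datastore and is independent of any one container.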

I think I'll leave it here for now. One important thing to know is that the container environment we created in vSphere cannot be managed by vSphere for power off/on or delete operations. The VCH requires that you use the vic-machine binary for any operations related to it, while the containers themselves are managed with standard Docker commands. There are some things you can do in vSphere, however, like setting up HA or using vMotion. The documentation includes all the support statements so you know what you can and cannot do.
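For example, removing a VCH (and the containers under it) is done with vic-machine rather than by deleting the vApp in vCenter. A sketch using the same target and name as my create command:

vic-machine-linux delete --target 'Administrator@vsphere.local:password'@10.xxx.xxx.26 --compute-resource HA --name vch10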

I know I didn’t get into the detail of the Docker commands or lifecycle management of containers but I leave that to you to explore as this blog could go on and on. As I say, I’m quite new to this so I have plenty of investigating left to do. Good luck.
