Proxmox

As I wrote in my KVM post, the changes Broadcom is making at VMware have many of their customers considering other virtualization options. One of these options is Proxmox, and as I’ve heard it bandied about a bit more often by customers, I thought it best to install it and get a feel for what it is and how it works.

Proxmox VE

Proxmox Virtual Environment, or Proxmox VE, is an open-source virtualization management platform. It acts similarly to oVirt or, say, Oracle Virtualization Manager, in that it manages the underlying virtualization technology of the operating system. Unlike oVirt, Proxmox supports both KVM and containers, specifically LXC. This gives the user access to VMs with dedicated resources as well as containers, which have reduced resource requirements. Proxmox is free to use, though there is a subscription model for the enterprise repository and support, which I suspect would be desirable in any production implementation.

Install

Proxmox is built on a customized Debian 12 OS. You install it directly on bare metal from an ISO. Unlike the other management solutions, there is no operating system flexibility. The install itself is an easy step-through process: you accept a license, then answer questions about storage, time zone, administrator password, and network. Proxmox uses a bridged network, just like oVirt.

Once the install completes, you are presented with a terminal login, along with the UI URL, which is the hostname on port 8006 (e.g., https://<hostname>:8006). Upon logging in as root, you land on the Summary screen.

Walkthrough

Rather than take a screenshot sampling of the UI, I did an ad-hoc walkthrough (unedited). I make some references in the video to PowerFlex block storage and the SDC not being supported (block storage limits for PowerFlex are noted below), but otherwise I think it is useful to start here.

SAN storage

Although Proxmox fronts KVM, the consumption of storage isn’t the same in all respects as what you might have experienced with oVirt or other management tools on KVM. With oVirt you create storage domains in the UI on FC, iSCSI, NFS, etc. and then use those domains just like you would a datastore in vSphere. For block storage in Proxmox, however, whether FC, iSCSI, or NVMe/TCP, you need an LVM volume group. Furthermore, for FC and NVMe/TCP there is no UI support, so the preparation of both the storage and the volume group must be done in the CLI, with the ingestion of the LVM completed in the UI. NFS is the only protocol that acts like oVirt in that you simply mount it in the UI because a file system already exists. I’m going to run through NFS, iSCSI, and FC in Proxmox to give you a feel for how it differs. NVMe/TCP follows the same pattern as FC, so I’ll give an abbreviated explanation.
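Whichever protocol you use, every storage definition you create ends up in /etc/pve/storage.cfg on the Proxmox host. Just to illustrate where we are headed, here is roughly what an NFS mount, an iSCSI target, and an LVM volume group layered on that target look like in that file; the IDs, addresses, IQN, and device identifier below are made up for the example:

nfs: pflex-nfs
        server 172.16.100.50
        export /proxmox_fs
        path /mnt/pve/pflex-nfs
        content images,rootdir

iscsi: pmax-iscsi
        portal 172.16.100.60
        target iqn.1992-04.com.emc:0123456789
        content none

lvm: pmax-lvm
        vgname vg_pmax
        base pmax-iscsi:0.0.0.scsi-360009700example
        shared 1
        content images,rootdir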

PowerFlex and PowerMax

You can use both file and block on PowerFlex and PowerMax with Proxmox. For each array:

PowerFlex: NFS, NVMe/TCP* (*NVMe/TCP support with Proxmox will require an RPQ from Dell since it is not on the certification matrix as of this posting)

PowerMax: NFS, iSCSI, FC, NVMe/TCP, NVMe-FC

You’ll notice I did not include the SDC for PowerFlex, which works perfectly fine with oVirt. The problem with Proxmox is the operating system: PowerFlex 4.x does not support the SDC on Debian. You can, of course, still use the SDC on VMs within Proxmox, just not on the VE itself.

Navigation

For all the UI examples, the initial navigation is the same. Highlight the Datacenter in the left-hand panel of the UI and then select Storage from the menu in the right-hand panel.

For FC and iSCSI on the PowerMax, I take for granted that devices are presented to the Proxmox host for consumption. For NFS on either platform I also assume file systems exist.

NFS

We’ll start with NFS since, as I wrote, it behaves the same in Proxmox as in other solutions like oVirt. In this example I am using file storage from a PowerFlex array. In the Storage menu, select NFS.

In the dialog, enter an ID (name), then the NFS server IP. Use the arrow in the drop-down box of the Export field; Proxmox will query the server and return all available file systems. Select the desired one, then in the second dialog box hit Add.

The NFS file system will be added as shared and enabled.
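The same add can be done from the shell with pvesm, the Proxmox storage manager CLI. A minimal sketch, using placeholder ID, server, and export values:

pvesm scan nfs 172.16.100.50                # list the exports the server offers, like the Export drop-down
pvesm add nfs pflex-nfs --server 172.16.100.50 --export /proxmox_fs --content images,rootdir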

With iSCSI and FC, however, Proxmox requires a two-step process. Let’s start with iSCSI.

iSCSI

From the Storage options, select iSCSI.

Enter an ID (name) and the IP address of the iSCSI interface on the PowerMax array. Then use the drop-down for the Target field, which will populate automatically. Be sure to uncheck the Use LUNs directly box, which is designed for RDM-like functionality. Click Add.
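The CLI equivalent is again pvesm; the portal address and target IQN below are placeholders, and the scan command returns the real IQN just as the Target drop-down does in the UI:

pvesm scan iscsi 172.16.100.60              # show the targets the portal advertises
pvesm add iscsi pmax-iscsi --portal 172.16.100.60 --target iqn.1992-04.com.emc:0123456789 --content none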

The iSCSI device is now added. Note that the Content field is none. The storage cannot be used like this; we need to create an LVM on it.

Return to the Storage menu and select LVM.

In the dialog, enter an ID (name) and from the drop-down next to Base storage select the ID you created in the previous step. Using the drop-down next to Base volume, you will be presented with all of the iSCSI devices you put in the masking view to the Proxmox host. Select one. Then provide a Volume group name and check the box for Shared and, if desired, the box for Wipe Removed Volumes.

The iSCSI storage is now available for use with VMs and containers. Note the Content column indicates Disk image (the rough equivalent of a vmdk) and Container.
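For completeness, the UI step maps to roughly a single pvesm add (an untested sketch; the volume group name and base volume identifier are placeholders, and pvesm list pmax-iscsi shows the real volume names):

pvesm add lvm pmax-lvm --vgname vg_pmax --base pmax-iscsi:0.0.0.scsi-360009700example --shared 1 --content images,rootdir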

FC

For FC, I decided to demonstrate the process in a video. Essentially the steps are similar to iSCSI, but you have to prep the storage in the CLI before you can create the usable LVM in the UI.
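For reference, the CLI prep in the video boils down to standard multipath and LVM commands. A sketch, assuming multipath is already configured and the PowerMax device appears as /dev/mapper/mpatha (both the alias and the volume group name are placeholders):

multipath -ll                        # confirm the FC device and its paths
pvcreate /dev/mapper/mpatha          # initialize the multipath device for LVM
vgcreate vg_fc /dev/mapper/mpatha    # create the volume group to ingest in the UI

Once the volume group exists, it will show up in the LVM dialog in the UI; check Shared just as with iSCSI.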

NVMe/TCP

If you want to use NVMe/TCP storage with Proxmox, you will follow a similar set of steps to FC, so I’m going to assume you watched the video. I’m using PowerFlex here.

First you’ll need to install the packages for NVMe on the Proxmox OS:

# install the NVMe management utilities
apt -y install nvme-cli
# load the NVMe/TCP module now and make it persistent across reboots
modprobe nvme_tcp && echo "nvme_tcp" > /etc/modules-load.d/nvme_tcp.conf

Next, create the NVMe/TCP host on the PowerFlex, using the host NQN found in /etc/nvme/hostnqn as the name, and map a volume to the host. Then you’ll run a discover and connect to your storage system:

nvme discover -t tcp -a 172.16.100.178 -s 4420

nvme connect -t tcp -a 172.16.100.178 -s 4420 -n nqn.1988-11.com.dell:powerflex:00:08313670c788840f

At this point you should be able to see your mapped device. Just like FC, we need a volume group.
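Roughly, that looks like the following, assuming the PowerFlex namespace shows up as /dev/nvme0n1 and using a placeholder volume group name:

nvme list                        # confirm the connected namespace and its device name
pvcreate /dev/nvme0n1            # initialize the namespace for LVM
vgcreate vg_nvme /dev/nvme0n1    # create the volume group to ingest in the UI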

Now you can use the UI to create the LVM. Again, be sure to click the Shared checkbox.
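If you want to confirm from the shell that the new storage is active, pvesm can report on everything that is configured; the ID in the second command is whatever you named the LVM in the UI (a placeholder here):

pvesm status                     # all configured storage, its type, and whether it is active
pvesm list nvme-lvm              # any disk images or container volumes on the new storage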

Final thoughts

The allure of these non-VMware virtualization solutions is undeniable. Proxmox and others in the same vein can provide many of the same capabilities as a VMware solution at a lower licensing cost. But, and yes there is a big but, moving from VMware to KVM or Proxmox is not really a one-to-one proposition. Assuming the migration itself can be worked out, the administration of the new virtualization technology will require different skill sets than VMware. VMware is UI-driven software in which a customer rarely has to resort to the CLI for any task. KVM, Proxmox, and others like them need Linux system administration skills along with a developer’s touch. A company may or may not have such personnel readily available to take on these new tasks and therefore may have to hire individuals specifically for this role. The cost of employing new administrators may pale in comparison to the licensing savings, but nevertheless it is an important aspect to consider when changing virtualization technologies.

In addition, Proxmox and KVM (which is also the underlying virtualization for Proxmox) do not offer all the same capabilities and features as VMware. The more complex your VMware environment, the more difficult it will be to migrate. The type and depth of support is also critical to understand, unless you plan on dealing with issues in-house with the aforementioned development team. Finally, a customer must weigh existing storage investments and how they fit into these open-source platforms. As I’ve written above, for example, if you own PowerFlex storage you cannot use the SDC with Proxmox because of the Debian OS it uses. There are other concerns, too, but you would come upon them in due course of testing these platforms.

As these solutions are easy to install, the most sensible thing you can do is test. I’ve run all of these solutions within my VMware infrastructure as well as on bare metal, so whatever you have at hand will work. Testing is going to be the best way to figure out whether software like Proxmox can meet all the requirements of your business. You don’t want to make the monumental decision of switching virtualization platforms based on documentation alone.

Comments

    1. You can use snapshots, sure. I’ve created them from one server and mounted them to another server. I went over the process in the KVM paper; long and short, there is no resignaturing like VMware, so use a different host for the snapshot.

       SRDF/Metro with Proxmox is a feature that most likely needs certification from Dell through eLab. We certify SRDF/Metro on ESXi, and though Proxmox isn’t exactly a hypervisor, it is a management system on a bare-metal, customized Debian OS, and I think it will have to undergo some process which would be triggered by high customer interest and RPQs. We haven’t reached that level yet.
