Red Hat Woos VMware Shops with OpenShift Virtualization Engine
74 comments · January 16, 2025 · stego-tech
mogwire
This is a VM-only version of OCP. No containers or microservices.
So it's for shops that haven't migrated into that state but still want a hypervisor that isn't VMware and comes with enterprise support.
woleium
or proxmox.. i was going to say for small shops, but i have heard of some larger deployments recently.
whalesalad
If I were building a datacenter today I would go with proxmox. It's "just debian" under the hood and can be customized and controlled in a multitude of ways (UI, CLI on the box, Terraform, API, etc.)
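(For a concrete sense of the API angle mentioned above, here is a minimal sketch using the community proxmoxer Python client; the hostname, token, node name, and VM ID are hypothetical placeholders, not anything from this thread.)

```python
# Minimal sketch: driving the Proxmox VE REST API from Python with the
# community "proxmoxer" client. Hostname, token, node name, and VM ID
# are hypothetical placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "pve.example.com",            # hypothetical Proxmox host
    user="root@pam",
    token_name="automation",      # API-token auth instead of a password
    token_value="xxxxxxxx-xxxx",
    verify_ssl=True,
)

# List every VM the cluster knows about.
for vm in proxmox.cluster.resources.get(type="vm"):
    print(vm["vmid"], vm.get("name"), vm["status"])

# Start one specific VM on one specific node.
proxmox.nodes("pve1").qemu(100).status.start.post()
```

The Terraform providers people use with Proxmox wrap this same REST API, so scripting against it directly or going through Terraform ends up in the same place.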
INTPenis
I'm using it now, even paying for it, and it works very well, but I only have one wish: that they could distribute an image-based distro, or a slimmed-down appliance ISO. It just seems unnecessary to have an actual OS on each host, especially since I've been impressed by Talos and OpenShift for a while.
whalesalad
INTPenis
Not what I meant. That's just how to run LXC containers, which I never use because I find OCI much more convenient.
I meant that the hypervisor host OS should be image-based and atomic/immutable.
zozbot234
Doesn't Proxmox use a separate kernel package compared to Debian? That's kinda annoying because it ends up making the distro a 'Frankendebian' at best. Even using an up-to-date kernel from the stable backports repositories is a lot better than that.
brirec
They use a slightly modified Ubuntu kernel (https://github.com/proxmox/pve-kernel), with things like ZFS added. They're also really good about using proper Debian tooling, so their kernel doesn't cause any weird dependency issues.
Right now they install proxmox-kernel-6.8.12-6 by default (via pseudo-packages called proxmox-default-kernel and proxmox-kernel-6.8 pointing at it), and offer proxmox-kernel-6.11.0-2 as an opt-in package (by installing proxmox-kernel-6.11).
I’ve been using the latest opt-in kernels on all of my Proxmox nodes for a few years now, and I’ve never had any issues at all with that myself.
zozbot234
> things like ZFS added
That's a big gotcha - ZFS is non-free so of course it cannot be part of Debian proper. Hopefully we'll get feature parity via Btrfs or Bcachefs at some point in the future.
Maskawanian
It certainly has an optimized kernel for its use case. I believe it also includes ZFS by default. I wouldn't be surprised if the Proxmox developers would prefer to upstream these defaults, but they likely would introduce regressions for the common use case that Debian optimizes for.
Ultimately, I use Proxmox as a hardware hypervisor only, so I don't mind that it uses its own kernel. Everything I run is in its own VM, with its own kernel that is set up the way I want.
throw0101c
> If I were building a datacenter today I would go with proxmox.
I use Proxmox as well in a small-ish deployment, but have also heard good things about XCP-ng.
At a previous job we used OpenStack.
samcat116
I'd be worried how proxmox would scale past a few racks. The bones are all good, but I'm not sure how much scale testing their API layer has had.
polski-g
Is there a VDI solution that runs on proxmox? I've only found UDS Enterprise.
NexRebular
Or if you'd like to help prevent Linux monoculturalization of datacenters, MNX Triton or vanilla SmartOS are very good options too.
whalesalad
The cool thing about proxmox is that it is - again - "just debian" so there is really no vendor lock-in. Yes they do have commercial support/update subscriptions but the community offering is open (https://github.com/proxmox). So I do not worry too much about lock-in or monoculturalization. At the end of the day it is a wrapper around fundamental components of Linux. They do not have any proprietary secret sauce that would F you down the road.
Correction: I see now that the projects you reference are Solaris-based. I am down with that cause too, but if you are a BSD/Solaris shop, expect to do a lot of things on your own. The Linux virtualization space is substantially larger (not necessarily suggesting it is better...)
jclulow
As an aside: Solaris is something you can buy from Oracle, which they forked from OpenSolaris 15 years ago. SmartOS is a distribution of illumos, which also forked from the same code 15 years ago. They have since diverged, in some areas dramatically, so we (the illumos community) don't bill ourselves as being Solaris based.
yjftsjthsd-h
My impression, strengthened by https://wiki.smartos.org/managing-instances-with-vmamd/#usin... , is that SmartOS preferentially operates at the per-host scale, which is probably a disadvantage in a datacenter setting. (I don't know enough about MNX Triton to comment)
antithesis-nl
Well, the Enterprise-hypervisor market is certainly wide open right now, due to Broadcom turning the financial screws on VMware customers hard. There are, broadly, two categories of potentially-profitable VMware customers up for grabs:
-Large enterprises that previously purchased hardware-with-accompanying-VMware-licenses from OEMs like Dell-EMC: Broadcom refused to even honor pre-acquisition license keys from these sources, leaving many private data centers in the lurch, unless they paid a huge premium for a new Broadcom-originated annual subscription (whereas the original key was one-off)
-Service providers with an ongoing "small-percentage-of revenue per year, payable in arrears" agreement, that were suddenly forced into a "hard vCPU and vRAM limit" subscription, payable for at least 2 years upfront.
However, the magic word for both customer segments is "vMotion", i.e. live-migration of VMs across disparate storage. No OSS and/or commercial (including Hyper-V) solution is able to truly match what VMware could (and can, at the right price) do in that space...
lenerdenator
> However, the magic word for both customer segments is "vMotion", i.e. live-migration of VMs across disparate storage. No OSS and/or commercial (including Hyper-V) solution is able to truly match what VMware could (and can, at the right price) do in that space...
Someone's gonna start working on that soon. Necessity is the mother of invention.
To me, this will be the UNIX wars moment for virtualization.
Originally, UNIX was something AT&T/Bell Labs mainly used for their own purposes. Then people wanted to use it for themselves. AT&T cooked up some insane price (like $20k in 1980s money) for the license for System V. That competed with the BSDs for a while. Then, some nerd in a college office in Finland contributed his kernel to the GNU project. The rest is history.
UNIX itself is somewhat of a niche today, with the vast majority of former use cases absorbed by GNU/Linux.
This feels like an effort by Broadcom to suck up all of the money in the VMWare customer base, thinking it's too much of a pain in the ass to migrate off of their wares. In some circumstances, they're not wrong, but there's going to be teams at companies talking about how to show VMWare the door permanently as a result of this.
Whether Broadcom is right that they can turn a profit on the acquisition with the remaining install base remains to be seen.
azurelake
You have to understand that Broadcom isn't actually Broadcom the chipmaker. It's a private equity firm that used to be named Avago Technologies before it bought Broadcom. So squeezing until there's nothing left is the plan.
https://digitstodollars.com/2022/06/15/what-has-broadcom-bec...
lenerdenator
> The truth is Broadcom is not a semiconductor company. Nor is it a software company. It is a private equity fund, maximizing cash flow from an endless series of acquisitions. This is disheartening to many in the semis industry and probably confusing to those in software.
I hate when finance people talk like this.
No, it's not confusing to people in software. We're well aware of your (finance) industry's reputation of sucking capital out of necessary, competitive companies for your own personal gain. If we thought we could get away with it, we'd do something about it.
pjmlp
I consider UNIX/POSIX, including GNU/Linux, to be becoming niche for the so-called cloud-native workloads.
The large majority of managed languages being used in such scenarios, compiled to native or VM based, have rich ecosystems that abstract the underlying platform.
More so if going deep into serverless, chiseled containers, unikernel-style approaches, or similar technologies.
Naturally there is still plenty of room for traditional UNIX style workloads.
noja
What can VMware do in this vmotion space?
The docs say open source can do live migration; see https://www.linux-kvm.org/page/Migration and https://docs.redhat.com/en/documentation/red_hat_enterprise_...
b5n
The majority of vmware customers could get by with qemu/kvm + pacemaker/corosync, but that requires hiring people who can read a manpage.
mjevans
Vaguely like MS Active Directory vs. Kerberos. The really big thing vMotion provides is "things just work", versus flexibility of options but more effort required.
tart-lemonade
Third is academia. Even with a marginal academic discount, it doesn't come close to offsetting the price hikes. When the choice is between slashing personnel budget and minimizing the usage of/completely migrating away from VMware before the next renewal, there's really no choice in the matter, especially with the difficulty of firing employees.
kazen44
Another magic word that VMware has, and that no other vendor matches, is NSX (network virtualization).
Proxmox is light-years behind on this use case, and so are most other vendors, especially if you are building private/public clouds with multi-tenancy in mind.
NSX is really well designed and scales nicely (it even has MPLS/EVPN support for telecom service provider integration).
Most open source and other commercial offerings have solved both the compute and storage aspects quite well. But on the networking front, they are really not comparable.
Proxmox, for instance, only supports VXLAN encapsulation or VLANs, without support for a proper control plane like EVPN. Heck, route injection via BGP is only doable by DIY'ing it on top of Proxmox.
"Just using VLANs" is not going to cut it if you want to really scale across datacenters and with multiple tenants. NSX does all of this really nicely, without having to touch the network itself at all, thanks to encapsulation and EVPN route discovery.
SteveNuts
HashiCorp has some Nomad drivers for QEMU and a beta version that uses libvirt. However, it's fairly immature and lacks a lot of features they would need to be competitive there.
Since IBM already has OpenShift I'm not sure how much time and effort they want to put into Nomad virtualization, but I'd love it as an alternative to Kubernetes.
trebligdivad
libvirt+qemu can do live migration across disparate storage; I know quite a lot of people want to use it. I'm curious in which ways you find the vMotion stuff does it better.
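(A rough sketch of what that looks like with the libvirt Python bindings, for reference; the host URIs and domain name are made up, and the non-shared-disk flag assumes the destination already has disk images of matching size in place.)

```python
# Rough sketch: live-migrating a running KVM guest between two hosts that
# do NOT share storage, using the libvirt Python bindings. Host URIs and
# the domain name are hypothetical placeholders.
import libvirt

src = libvirt.open("qemu+ssh://src-host.example.com/system")
dst = libvirt.open("qemu+ssh://dst-host.example.com/system")

dom = src.lookupByName("demo-vm")

flags = (
    libvirt.VIR_MIGRATE_LIVE               # keep the guest running while it moves
    | libvirt.VIR_MIGRATE_PERSIST_DEST     # define the domain on the destination
    | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE  # drop the definition from the source
    | libvirt.VIR_MIGRATE_NON_SHARED_DISK  # copy the full disk images as part of the move
)

# Signature: migrate(dconn, flags, dname, uri, bandwidth)
new_dom = dom.migrate(dst, flags, None, None, 0)
print("now running on destination:", new_dom.name())
```

As other comments in this thread suggest, the gap is less in the migration primitive itself than in the "things just work" orchestration VMware wraps around it.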
mvdwoord
I would think another issue for a lot of existing large VMware customers is NSX (incl. dynamic firewalling), orchestration (vRA/vRO), vROPS, and the gazillion integrations that have been built with tight coupling at both ends. Swapping that out is not an easy task, and the "digital transformations" some of the available products are banking on take a lot of time in real life.
oneplane
Xen (and XenServer and XCP and the Citrix-branded stuff) have been able to live migrate VMs for a really long time. Definitely not something special to VMware.
Realistically, all the legacy workloads (those that are singletons and can't be load-balanced, need an active GUI session, etc.) are going to be problems forever, even if you keep VMware around.
breakingcups
Red Hat just hit us with 300% price increases for OpenShift across the board right after we went live in production after a little over a year of implementation. The entire org is very, very unhappy about it.
mogwire
300% and you added no additional compute? Prices went up but not 300% unless you got a major discount and added compute.
breakingcups
No additional compute. I don't think we had a major discount before.
woleium
An IBM by any other name would smell the same?
lenerdenator
What're the FLOSS technologies underlying OpenShift? Sounds like KVM?
natebc
OpenShift is Kubernetes+.
OpenShift Virtualization Engine is KubeVirt (aka KVM/libvirt).
lyarwood
https://github.com/kubevirt specifically for OpenShift Virtualization
whalesalad
everything ultimately boils down to kvm or qemu, usually
samcat116
Probably kubevirt if I had to guess (which would then use KVM under the hood)
houseofzeus
Yes, you are correct: it is KubeVirt, leveraging KVM as the hypervisor coupled with the QEMU/libvirt userspace pieces.
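(To make that concrete, a rough sketch of the kind of VirtualMachine object KubeVirt manages, created here via the Kubernetes Python client's dynamic API; the namespace, VM name, and container-disk image are illustrative and assume a cluster that already has KubeVirt installed.)

```python
# Rough sketch: creating a minimal KubeVirt VirtualMachine through the
# Kubernetes Python client's dynamic API. Assumes KubeVirt is installed in
# the cluster; namespace, VM name, and disk image are illustrative only.
from kubernetes import config, dynamic
from kubernetes.client import api_client

config.load_kube_config()
client = dynamic.DynamicClient(api_client.ApiClient())
vm_api = client.resources.get(api_version="kubevirt.io/v1", kind="VirtualMachine")

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "demo"},
    "spec": {
        "runStrategy": "Always",  # keep the VM running
        "template": {
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]
                    },
                    "resources": {"requests": {"memory": "1Gi"}},
                },
                "volumes": [{
                    "name": "rootdisk",
                    # Demo image published by the KubeVirt project.
                    "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                }],
            }
        },
    },
}

vm_api.create(body=vm, namespace="demo")
```

Under the hood, the virt-launcher pod scheduled for this object runs the libvirt/QEMU-on-KVM chain described above.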
jaitaiwan
OKD for the open source Kubernetes distribution, KubeVirt for the virtualisation part.
more_corn
Moving off VMware (yay!) onto OpenShift (crap).
They've been hammering us pretty hard, especially some folks in the leadership chain who worked with them before. While I have no direct beef with the product, the reality is that our enterprise workloads (and in fact, most enterprise IT workloads in my experience) are VM-first, not container-first.
My research conclusion at the time was that, while OpenShift is a great product worthy of consideration, it really only shines in organizations that are heavily invested in microservices or Kubernetes. If you (or more specifically, your vendors) haven’t migrated into that state, it’s not worth it compared to a RHEL server license and their KVM+Cockpit solution for bog standard VMs.