Posted: 4/18/2024 10:48:08 AM EDT
I'm not liking Hyper-V very much. I was using Hyper-V on 2012 R2, but that OS is EOL. I upgraded all of my machines to 2022 DC and ran that for a few weeks, but every so often the server would crash and take my VMs offline, so I downgraded to 2019 DC; the servers aren't crashing as often, but they still do. I have run VMware in the past, but I can't upgrade past 6.7, so I'm looking for a Linux hypervisor that can run various OSes ranging from Windows to Linux, has low overhead, and also has a GUI (not a desktop, just a web page that gives me visuals). So what is everyone using?

My basic requirements: configure VMs with a GUI, move VMs between hosts either manually or automatically, manage all of the hosts and VMs from a single place, and migrate VMs off of Hyper-V into a format the new Linux hypervisor can use. Does such a creature exist? I'm not looking for step-by-step instructions on how to configure this, just a point in the right direction.
Link Posted: 4/18/2024 10:56:23 AM EDT
[#1]
I wouldn't say I love it, but from a similarity and ease-of-use standpoint, I do have a number of Linux hosts running VirtualBox.

It's not totally great, but it's free and can at least moderately be run via the CLI in lieu of the GUI.
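
A minimal headless VM spun up purely from the CLI looks roughly like this (the VM name, OS type, and sizes are just placeholder values):

VBoxManage createvm --name lab-vm --ostype Ubuntu_64 --register
VBoxManage modifyvm lab-vm --memory 4096 --cpus 2 --nic1 nat
VBoxManage createmedium disk --filename lab-vm.vdi --size 40960   # size in MB
VBoxManage storagectl lab-vm --name SATA --add sata
VBoxManage storageattach lab-vm --storagectl SATA --port 0 --device 0 --type hdd --medium lab-vm.vdi
VBoxManage startvm lab-vm --type headless                         # run with no local window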
Link Posted: 4/18/2024 11:07:27 AM EDT
[#2]
Originally Posted By zeus2you:
I wouldn't say I love it, but from a similarity and ease-of-use standpoint, I do have a number of Linux hosts running VirtualBox.

It's not totally great, but it's free and can at least moderately be run via the CLI in lieu of the GUI.
View Quote



I want to get away from Windows as the base of the hypervisor, and from what I remember about VirtualBox, I can do live migrations from one host to another under it.
Link Posted: 4/18/2024 11:29:37 AM EDT
[Last Edit: Firestarter123] [#3]
I've become a huge fan of Proxmox and have a 2-node HA Cluster running at home (last ~2 years) as well as at work now.

ETA: You can import the disks from Hyper-V into Proxmox.
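
Roughly, the import looks like this from the Proxmox shell (the VM ID, VHDX path, and local-lvm storage name are placeholders):

# create an empty VM shell first, then attach the copied Hyper-V disk to it
qm create 100 --name migrated-vm --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 100 /mnt/transfer/old-vm.vhdx local-lvm          # converts and imports the VHDX
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0  # disk name may vary; check the importdisk output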
Link Posted: 5/7/2024 2:22:38 PM EDT
[#4]
I was going to suggest Proxmox as well.  It's what my data center is recommending now that VMware is going subscription-only and killing the free ESXi license.
Link Posted: 5/7/2024 2:29:20 PM EDT
[#5]
Look into KVM/QEMU. It runs on most Linux distros but I recommend something Red Hat-based (such as Rocky Linux) for anything server-ish.
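
A rough sketch of the minimal setup on a RHEL-family host (the VM name, ISO path, and sizes are placeholders):

sudo dnf install -y qemu-kvm libvirt virt-install
sudo systemctl enable --now libvirtd
# create a VM from an installer ISO; the disk is auto-created in the default storage pool
virt-install --name lab-vm --memory 4096 --vcpus 2 \
  --disk size=40 --cdrom /path/to/installer.iso \
  --os-variant generic --network network=default

If you want the web-page style GUI the OP asked about, Cockpit with the cockpit-machines package gives you a browser UI on top of libvirt.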
Link Posted: 5/8/2024 9:58:18 AM EDT
[#6]
XCP-ng
Link Posted: 5/8/2024 12:59:20 PM EDT
[#7]
Originally Posted By 2ANut:
Look into KVM/QEMU. It runs on most Linux distros but I recommend something Red Hat-based (such as Rocky Linux) for anything server-ish.
View Quote

As a note here, QEMU/KVM isn't a standalone bare-metal product the way ESXi is: KVM is a kernel module that turns the existing Linux install into the hypervisor using the CPU's hardware virtualization extensions, and QEMU provides the device emulation on top of it (so it's full virtualization, not a container or chroot). The packaging and management just feel different compared to ESXi, Hyper-V, or VirtualBox.

Not saying it's a bad thing, just something to be aware of when considering your migrations.
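
A couple of quick checks will confirm a host is doing hardware-assisted virtualization:

egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero means the CPU exposes Intel VT-x / AMD-V
lsmod | grep kvm                     # kvm_intel or kvm_amd should be loaded
virt-host-validate                   # libvirt's own sanity check, if libvirt is installed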
Link Posted: 5/9/2024 2:11:46 PM EDT
[#8]
I use Proxmox, and it has proven to be reliable; I believe it does what you need it to do.

AB
Link Posted: 5/9/2024 10:18:21 PM EDT
[#9]
Proxmox is definitely the most capable of all the free options, but there is a bit more of a learning curve.  If you're OK with that, you can do some crazy stuff.
Link Posted: 5/10/2024 12:07:45 AM EDT
[#10]
Originally Posted By GTwannabe:
XCP-ng
View Quote


Salvation lies here.  Take the XCP-ng path and never look back.

Link Posted: 5/10/2024 12:27:05 AM EDT
[#11]
VirtualBox if I'm feeling lazy. OpenStack when I need to lab things out.
Link Posted: 5/10/2024 12:37:40 AM EDT
[#12]
VirtualBox for a desktop system. Proxmox for a small lab. OpenStack if you need the big leagues.
Link Posted: 5/10/2024 2:49:54 AM EDT
[#13]
I'd say Proxmox or XCP-ng.

I have about 100 or so VMs running on XCP at work.  Some of the other sysadmins use Proxmox and are happy with it.
Link Posted: 5/11/2024 5:52:19 PM EDT
[Last Edit: MMcCall] [#14]
Originally Posted By The_Fly:

I have about 100 or so VMs running on XCP at work.  Some of the other sysadmins use Proxmox and are happy with it.
View Quote


8.2 or 8.3?
Link Posted: 5/12/2024 7:31:44 PM EDT
[#15]
Originally Posted By MMcCall:


8.2 or 8.3?
View Quote


8.2.1.  8.3 is still in beta I believe.
Link Posted: 5/12/2024 8:24:40 PM EDT
[#16]
Originally Posted By The_Fly:


8.2.1.  8.3 is still in beta I believe.
View Quote


It is; I was just looking for some real-world impressions of the differences.
Link Posted: 5/12/2024 11:52:55 PM EDT
[#17]
Originally Posted By MMcCall:


It is; I was just looking for some real-world impressions of the differences.
View Quote


I got curious and tossed 8.3 on one of my non-production servers.  It definitely seems a bit faster, especially when importing templates and VMs.

XO Lite isn't really usable at this point; I'm having to use a beta version of XenCenter to manage it.
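
(If XO Lite is in the way, imports can also be scripted from the host's xe CLI; the path here is just an example:)

xe vm-import filename=/mnt/backup/exported-vm.xva   # import an exported VM or template
xe vm-list                                          # confirm the new VM shows up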



Link Posted: 5/13/2024 12:21:17 AM EDT
[#18]
Originally Posted By The_Fly:

I got curious and tossed 8.3 on one of my non-production servers.  It definitely seems a bit faster, especially when importing templates and VMs.

XO Lite isn't really usable at this point; I'm having to use a beta version of XenCenter to manage it.

View Quote


I appreciate the info. I'm getting a little sick of the limitations and overhead of Hyper-V, and I may just go back to ESXi, even though it's EOL.

The other options seem good, but each is a little half-baked in its own way. Management UI/UX seems to be the Achilles' heel of most.
Link Posted: 5/13/2024 12:31:42 AM EDT
[#19]
Originally Posted By MMcCall:


I appreciate the info. I'm getting a little sick of the limitations and overhead of Hyper-V, and I may just go back to ESXi, even though it's EOL.

The other options seem good, but each is a little half-baked in its own way. Management UI/UX seems to be the Achilles' heel of most.
View Quote


IMHO, VMware is dead.  Broadcom killed it with massive price hikes.  I'm willing to bet that in 5 years you'll only see it in really huge, well-funded shops.

If I were starting from scratch, I'd give Proxmox a hard look.  It's free, highly functional, and the management interface is baked right into the hypervisor (unlike XCP).  Our web dev guys at work use it and have been quite happy.

Link Posted: 5/13/2024 12:57:35 AM EDT
[#20]
Originally Posted By The_Fly:


IMHO, VMware is dead.  

View Quote


For sure. I've worked with Prox, XCP, Nutanix, and some Docker-based systems. If this were for work, I'd probably go with whoever has the best intersection of support and feature set, but this is for my home lab and I just prefer ESXi. I don't care about support or lifecycle.
Link Posted: 5/13/2024 10:24:31 AM EDT
[#21]
Originally Posted By Foxxz:
VirtualBox for a desktop system. Proxmox for a small lab. OpenStack if you need the big leagues.
View Quote
OpenStack doesn't need to be a big thing. I've got a beefy single all-in-one (AIO) in VirtualBox to test against customer setups. If you go multi-node, keep the AIO for the control plane and just add bare metal for more compute.

I'm biased, though. My job has me supporting countless customers' private clouds, ranging from a handful of servers to thousands. I can OpenStack in my sleep.
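
For anyone who wants to try a throwaway all-in-one like that, DevStack is one common way to stand one up inside a single VM; a rough sketch (passwords are placeholders, and stack.sh should be run as a regular non-root user):

git clone https://opendev.org/openstack/devstack
cd devstack
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=changeme
DATABASE_PASSWORD=changeme
RABBIT_PASSWORD=changeme
SERVICE_PASSWORD=changeme
EOF
./stack.sh   # takes a while; gives you Horizon, Nova, Neutron, etc. on one box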
Link Posted: 5/13/2024 3:00:49 PM EDT
[#22]
Originally Posted By packingXDs:
OpenStack doesn't need to be a big thing. I've got a beefy single all-in-one (AIO) in VirtualBox to test against customer setups. If you go multi-node, keep the AIO for the control plane and just add bare metal for more compute.

I'm biased, though. My job has me supporting countless customers' private clouds, ranging from a handful of servers to thousands. I can OpenStack in my sleep.
View Quote


Oh, I didn't mean to insinuate you need heavy iron to run OpenStack, just that if you need a system to run a lot of VMs, OpenStack does it well.