
BLU Discuss list archive



[Discuss] choice of hypervisor



Tom Metro wrote:
> A bare metal hypervisor may not be layered on top of a kernel, but
> it is still acting as a kernel and doing most of the things a kernel 
> does.

No, it doesn't. At its core, a bare metal hypervisor (or virtual machine
monitor -- VMM) mostly just traps privileged operations from the guests
and passes them through to the hardware. The VMM itself does very little
actual processing.

The "like a kernel" stuff happens when the VMM also has to emulate
hardware. Here:

http://lists.blu.org/pipermail/discuss/2011-October/040358.html
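
If a sketch helps, here's a toy of the idea in plain C. It's not any
real hypervisor's code, just the general shape of a VM-exit handler,
to show where the actual work lands:

#include <stdio.h>

/* Toy exit reasons; a real VMM has many more. */
enum vmexit { EXIT_CPUID, EXIT_MSR, EXIT_PIO, EXIT_MMIO };

static void handle_exit(enum vmexit why)
{
    switch (why) {
    case EXIT_CPUID:
    case EXIT_MSR:
        /* Trap and pass through: tweak the value, resume the guest. */
        puts("pass through, resume guest");
        break;
    case EXIT_PIO:
    case EXIT_MMIO:
        /* Only here does the VMM act "like a kernel": it has to model
         * a device (disk, NIC, timer) and do real work for the guest. */
        puts("emulate a device");
        break;
    }
}

int main(void)
{
    enum vmexit trace[] = { EXIT_CPUID, EXIT_PIO, EXIT_MSR };
    for (int i = 0; i < 3; i++)
        handle_exit(trace[i]);
    return 0;
}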


> Presumably a bare metal hypervisor will be a bit more performance 
> optimized to run VMs. What do the benchmarks show for comparison?

Not just "a bit". Last time I checked (and no, I'm afraid I don't have
anything handy), the overhead of a bare metal hypervisor is on the order
of 3%. I seem to recall discussing this here maybe a year ago? Should be
in the archives somewhere. Yep:

http://www.blu.org/pipermail/discuss/2012-August/042937.html

> I imagine a bare metal hypervisor has far narrower hardware support 
> than the Linux kernel. I'm also guessing the Xen bare metal 
> hypervisor borrows a lot from the Linux kernel.

Yes and no. Xen actually borrows little from Linux per se. It's a
moderately portable VMM, capable of running underneath *BSD and Solaris
as well as Linux. When a Xen system boots you aren't booting straight
into Linux: the boot loader loads the Xen hypervisor, which in turn
launches the Linux kernel inside a privileged VM, dom0.
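
You can see that layering right in the boot configuration. A GRUB 2
entry for Xen looks roughly like this (the paths, kernel version, and
dom0_mem option here are just examples):

  menuentry 'Linux with Xen hypervisor' {
      multiboot /boot/xen.gz dom0_mem=1024M
      module    /boot/vmlinuz-3.2.0-amd64 root=/dev/sda1 ro
      module    /boot/initrd.img-3.2.0-amd64
  }

GRUB hands control to xen.gz; the Linux kernel and initrd are loaded as
multiboot modules, and Xen then starts them as dom0.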

> If the best case you can make for using a bare metal hypervisor is 
> that it prevents the system administrator from being tempted to run 
> other processes on the host, then that doesn't sound so compelling.

One of the benefits of Xen over KVM is that you can split hardware
devices out into their own service domains (driver domains). If a
driver wedges, you can restart the relevant service domain without
affecting dom0. User domains will notice the disconnection and
reconnect automatically after the service domain restarts.
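
For example, with the xl toolstack, assuming a (hypothetical) driver
domain called "netdom" that owns the NIC via PCI passthrough (pci =
['0000:02:00.0'] in its config file):

  xl list                        # shows dom0, netdom, and the guests
  xl destroy netdom              # NIC driver wedged? kill just that domain
  xl create /etc/xen/netdom.cfg  # bring it back; guests re-plug their vifs

dom0 and the guests keep running the whole time.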

Can't do this with KVM. If a driver crashes there, you have to restart
everything: the host and every guest on it.


>> It's also not so good for workstation virtualization since you can 
>> only run KVM guests on Linux hosts thus negating the easy 
>> portability that makes workstation virtualization so useful.
> 
> So? Doesn't that same comment apply to Xen.

Not really. Xen makes no effort to be a desktop virtualization system.
It's a server/enterprise virtualization system.

> Are you trying to say KVM is useless because it is no good at the 
> things Xen is good at and it's no good at the things VirtualBox is 
> good at?

KVM is inferior to Xen and vSphere for enterprise virtualization. It's
inferior to VirtualBox and VMware Workstation/Fusion for desktop
virtualization. It's inferior to LXC for lightweight container-style
virtualization. KVM is not necessarily useless. I simply don't see a
practical use for it when superior tools are available.


> Why do you think that is? Inertia?

Because migrating to KVM is a nightmare regardless of what you're
already using. I've migrated VMs from various VMM systems to various
others. Migrating to KVM is by far the most difficult.

-- 
Rich P.





