
BLU Discuss list archive



[Discuss] choice of hypervisor



On 06/05/2013 07:39 PM, Richard Pieri wrote:
> Tom Metro wrote:
>> A bare metal hypervisor may not be layered on top of a kernel, but
>> it is still acting as a kernel and doing most of the things a kernel
>> does.
> No, it doesn't. At its core, a bare metal hypervisor (or virtual machine
> monitor -- VMM) is a lot of trap and pass-through for the guests. The
> VMM does very little actual processing.
>
> The "like a kernel" stuff happens when the VMM also has to emulate
> hardware. Here:
>
> http://lists.blu.org/pipermail/discuss/2011-October/040358.html
>
>
>> Presumably a bare metal hypervisor will be a bit more performance
>> optimized to run VMs. What do the benchmarks show for comparison?
> Not just "a bit". Last time I checked (and no, I'm afraid I don't have
> anything handy), the overhead of a bare metal hypervisor is on the order
> of 3%. I seem to recall discussing this here maybe a year ago? Should be
> in the archives somewhere. Yep:
>
> http://www.blu.org/pipermail/discuss/2012-August/042937.html
>
>> I imagine a bare metal hypervisor has far narrower hardware support
>> than the Linux kernel. I'm also guessing the Xen bare metal
>> hypervisor borrows a lot from the Linux kernel.
> Yes and no. Xen actually borrows little from Linux per se. It's a
> moderately portable VMM, capable of running underneath *BSD and Solaris
> as well as Linux. When you boot the dom0 kernel you don't actually boot
> Linux. You boot a loader that loads the Xen components, which in turn
> load the Linux kernel in a VM.
>
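
For reference, the boot chain described above is typically expressed as
a GRUB 2 multiboot entry; a minimal sketch, with hypothetical paths and
version strings:

    menuentry 'Fedora, with Xen hypervisor' {
        # GRUB loads the hypervisor first...
        multiboot  /boot/xen.gz dom0_mem=1024M
        # ...then Xen starts this kernel and initramfs as dom0
        module     /boot/vmlinuz-3.9.4 root=/dev/mapper/vg-root ro
        module     /boot/initramfs-3.9.4.img
    }
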
>> If the best case you can make for using a bare metal hypervisor is
>> that it prevents the system administrator from being tempted to run
>> other processes on the host, then that doesn't sound so compelling.
> One of the benefits of Xen over KVM is that you can separate various
> hardware devices into their own service domains. If a device driver
> crashes then you can restart the relevant service domain without
> affecting dom0. User domains will note the disconnection and
> reconnect automatically after the service domain restarts.
>
> You can't do this with KVM. If a device driver crashes then you have
> to restart everything.
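
With the xl toolstack, that restart looks something like the following;
a minimal sketch, assuming a network driver domain named net-domU
defined in /etc/xen/net-domU.cfg (both names hypothetical):

    # find the driver domain among the running domains
    xl list
    # tear it down and bring it back up; guests' frontend devices
    # disconnect, then reconnect to the restarted backend
    xl destroy net-domU
    xl create /etc/xen/net-domU.cfg
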
>
>
>>> It's also not so good for workstation virtualization since you can
>>> only run KVM guests on Linux hosts thus negating the easy
>>> portability that makes workstation virtualization so useful.
>> So? Doesn't that same comment apply to Xen?
> Not really. Xen makes no effort to be a desktop virtualization system.
> It's a server/enterprise virtualization system.
>
>> Are you trying to say KVM is useless because it is no good at the
>> things Xen is good at and it's no good at the things VirtualBox is
>> good at?
> KVM is inferior to Xen and vSphere for enterprise virtualization. It's
> inferior to VirtualBox and VMware Workstation/Fusion for desktop
> virtualization. It's inferior to LXC for light-weight container-style
> virtualization. KVM is not necessarily useless. I simply don't see a
> practical use for it when superior tools are available.
>
>
>> Why do you think that is? Inertia?
> Because migrating to KVM is a nightmare regardless of what you're
> already using. I've migrated VMs from various VMM systems to various
> others. Migrating to KVM is by far the most difficult.
>
One point you raise is "enterprise/server virtualization". It is
important to distinguish between server virtualization and personal or
desktop virtualization. I have migrated VMs to both VirtualBox and KVM
with no trouble (see the sketch just below), but I much prefer the user
interfaces of VirtualBox and VMware Workstation. I use KVM/QEMU
primarily as a tool for comparison.
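
For the record, the disk-image conversion is the mechanical part of
such a migration; a minimal sketch using qemu-img, with hypothetical
file names:

    # convert a VMware/VirtualBox disk to qcow2 for use with KVM
    qemu-img convert -f vmdk -O qcow2 client-vm.vmdk client-vm.qcow2

The fiddly part is usually redoing the guest's device and network
configuration rather than the image itself.
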
I think it is important to decide why you want to virtualize; once you
have made that decision, the choice of VMM follows. One advantage of
KVM is that its driver survives kernel upgrades, but it takes only a
couple of minutes to rebuild the VirtualBox kernel module after an
upgrade.

In my case, I also look at the different networking options. One is a
standalone VM that can be used at a client site. Another might be a VM
using NAT, and a third might be bridged, especially if I want to use it
inside a corporate network. Setting this up in VirtualBox and VMware
Workstation is relatively straightforward, as shown below. Usually I
add the VirtualBox repo on Fedora so I get all the updates easily via
yum.
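
For example, switching a VirtualBox VM between those modes is one
command per NIC; a minimal sketch, assuming a VM named "client-vm" and
a host interface eth0 (both hypothetical):

    # self-contained VM for a client site: host-only networking
    VBoxManage modifyvm "client-vm" --nic1 hostonly --hostonlyadapter1 vboxnet0
    # NAT through the host's connection
    VBoxManage modifyvm "client-vm" --nic1 nat
    # bridged onto the corporate LAN
    VBoxManage modifyvm "client-vm" --nic1 bridged --bridgeadapter1 eth0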
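
And the Fedora repo setup is just a couple of commands (the package
version is hypothetical; pick whatever is current):

    # as root
    cd /etc/yum.repos.d
    wget http://download.virtualbox.org/virtualbox/rpm/fedora/virtualbox.repo
    yum install VirtualBox-4.2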

-- 
Jerry Feldman <gaf at blu.org>
Boston Linux and Unix
PGP key id:3BC1EB90
PGP Key fingerprint: 49E2 C52A FC5A A31F 8D66  C0AF 7CEA 30FC 3BC1 EB90



