On Sat, 4 Aug 2012 18:22:34 -0700
"Rich Braun" <richb at pioneer.ci.net> wrote:

> Performance is equal to or possibly better than the bare-metal,

This is not possible. Even with raw disk I/O the guest does not talk directly to the disk controllers; it talks to the emulated controllers exposed by the host. That emulation incurs a small processing overhead, so virtualized I/O can never be as fast as or faster than bare metal.

That said, the guest also has a disk I/O cache of its own, and certain kinds of disk I/O never leave that cache. This could be the cause of your perceived faster-than-bare-metal performance. Try mounting the guest file systems sync and see what happens (a quick illustration of the difference is sketched after this message).

> setting it up this way. (How could it be "better"? The big host OS
> cache, which should be turned off if you're concerned about data
> integrity during a power outage.)

I've tried a configuration like this: a Debian 6 64-bit host with VirtualBox from Squeeze backports, an ext3 file system for the guest containers mounted sync specifically to bypass host I/O caching, and guest containers pre-allocated to full size. I ran into two severe problems. The first is that this configuration was noticeably slower than with the host file system mounted async. The second is that the guest VMs kept crashing under load with the file system mounted sync; the crashes disappeared when I reverted to async mounts.

Seriously? If you're concerned about I/O performance and file durability then you shouldn't be running critical storage inside user-mode hypervisors. Move the storage to bare metal even if you have to run it directly on the host. You'll be better off for it.

--
Rich P.
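
(For reference: a minimal Python 3 sketch of the cached-vs-synchronous write comparison suggested above. It is not a real benchmark, and the /tmp scratch path is only an assumption; point TARGET_DIR at the file system that actually holds the VM containers.)

import os
import time

TARGET_DIR = "/tmp"      # assumption: change to the file system under test
CHUNK = b"x" * 4096      # one 4 KiB block per write
NUM_WRITES = 2000        # about 8 MiB in total

def timed_writes(extra_flags, label):
    # Write NUM_WRITES blocks to a scratch file and report the elapsed time.
    path = os.path.join(TARGET_DIR, "sync-vs-async.tmp")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags, 0o600)
    start = time.time()
    try:
        for _ in range(NUM_WRITES):
            os.write(fd, CHUNK)
    finally:
        os.close(fd)
        os.unlink(path)
    elapsed = time.time() - start
    mb_per_s = NUM_WRITES * len(CHUNK) / elapsed / 1e6
    print("%10s: %.3fs (%.1f MB/s)" % (label, elapsed, mb_per_s))

if __name__ == "__main__":
    # Cached writes: the kernel can acknowledge them from RAM.
    timed_writes(0, "async")
    # O_SYNC writes: each write must reach stable storage before returning.
    timed_writes(os.O_SYNC, "sync")

(Run it once on a file system mounted async and once on the same file system remounted sync. Inside a guest, the async run can look suspiciously fast because the writes may never leave the guest's own cache.)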