Rich Pieri <richard.pieri at gmail.com> writes:

> On Sun, 5 Aug 2012 07:47:49 -0700
> Rich Braun <richb at pioneer.ci.net> wrote:
>> As for how it could be possible: CPU performance far exceeds that of
>> any current I/O. So emulation overhead drops way below the roughly 3%
>> CPU overhead that I recall measuring. Throw a big RAM cache
>> underneath your VM, and you can get blazing fast numbers.
>
> Ah-hah! Yes. If you cache I/O then you bypass the trap-and-emulate
> latency. You will see similar performance on bare metal. But you did
> stipulate not caching I/O so as to avoid data loss in case of a power
> failure. If you take the cache away then performance drops to that of
> the disk and controller, at which point you should see a small but
> measurable performance hit on the emulated controller. If you don't,
> then there's probably still some caching going on somewhere.

It also depends on whether you are trying to optimize for read
performance or write stability. Obviously you'd be best off with both,
but having a large write-through cache for improved read performance
might be a good compromise. You get cached speeds for reads, while
writes go through the cache to the backing disk (so they are slower,
but safer). I have no idea how one would implement this.

-derek

--
       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL: http://web.mit.edu/warlord/    PP-ASEL-IA     N1NWH
       warlord at MIT.EDU                        PGP key available
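
[One way to picture the write-through compromise Derek describes is a
tiny block-cache sketch. This is only an illustration under assumed
names (the class, block size, and backing-file layout are hypothetical,
not from the thread or any real hypervisor): reads are served from an
in-memory cache when possible, while every write is pushed to the
backing file and fsync'd before the cached copy is updated, so an
acknowledged write is never lost to a power failure.]

    import os

    BLOCK_SIZE = 4096  # hypothetical block size

    class WriteThroughBlockCache:
        def __init__(self, path):
            # Open an existing backing file; fsync() below is what
            # provides the durability guarantee for writes.
            self.fd = os.open(path, os.O_RDWR)
            self.cache = {}  # block number -> bytes

        def read_block(self, blockno):
            # Fast path: serve previously seen blocks from RAM.
            if blockno in self.cache:
                return self.cache[blockno]
            os.lseek(self.fd, blockno * BLOCK_SIZE, os.SEEK_SET)
            data = os.read(self.fd, BLOCK_SIZE)
            self.cache[blockno] = data
            return data

        def write_block(self, blockno, data):
            # Slow path: write through to stable storage before
            # caching, so the cache never holds data the disk lacks.
            os.lseek(self.fd, blockno * BLOCK_SIZE, os.SEEK_SET)
            os.write(self.fd, data)
            os.fsync(self.fd)
            self.cache[blockno] = data

[Reads that hit the cache cost a dictionary lookup; writes always pay
the full disk-plus-fsync latency, which is exactly the "slower, but
safer" trade-off described above.]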