Edward Ned Harvey <blu at nedharvey.com> writes:

>> From: Derek Atkins [mailto:warlord at MIT.EDU]
>> Sent: Friday, August 26, 2011 11:19 AM
>>
>>> Simply write a file.  Eliminate the possibility of external drive
>>> slowdown.
>>> time dd if=/dev/zero of=10Gfile bs=1024k count=10240
>>
>> I did this a few times with various count sizes and noticed that the
>> speed declined significantly once I started writing more than my RAM
>> cache size data:
>>
>> [warlord at mocana mocana]$ time dd if=/dev/zero
>> of=/home/warlord/TestDataWrite bs=1k count=20000
>> 20000+0 records in
>> 20000+0 records out
>> 20480000 bytes (20 MB) copied, 0.0662049 s, 309 MB/s
>> 0.002u 0.063s 0:00.10 60.0% 0+0k 128+40000io 2pf+0w
>
> hehehe, yes, of course.  :-)  The number I suggested above was around 10G.
> That was not based on anything, and it may need to be bigger on your system,
> depending on your system specs.  Really this test should be as large as you
> can bear to let it be.  But don't go over approx 50% of the drive, or else
> you might start getting hurt by fragmentation etc.
>
> Hint: Any benchmark you complete in 0.06 seconds isn't going to be very
> useful.  ;-)  Perhaps try something that runs at least 5-10 minutes,
> minimally.

Did you miss attempts 2 and 3, which were 200MB and 2GB tests?  Yes, I
know that a 0.06s test is irrelevant; I included it for completeness.
That's why I also ran two more tests with increasing dataset sizes, to
measure my disk write speed more accurately.

>> Still, 50MB/s is a SIGNIFICANT reduction in I/O throughput from what I
>> think I should be seeing w/o encryption.
>
> You're also using a 1k blocksize.  Try increasing that, at least 128k.  I
> usually say 1024k.  Given that "dd" is actually topping your cpu charts,
> you're probably only generating your data at 50 MB/s.

I don't think it's the blocksize.
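[Editor's note: a sketch of the write test under discussion, with the page-cache
distortion reduced.  The path and sizes below are placeholders, not the ones
from the thread; conv=fsync is a GNU dd option that forces a flush before dd
exits, so the reported time reflects the disk rather than RAM.]

```shell
# Hedged sketch of the write benchmark: conv=fsync makes dd call fsync()
# before exiting, so the elapsed time includes flushing data to disk,
# not just filling the page cache.  Placeholder path and a small count;
# scale count well past your RAM size for a real measurement.
OUT=/tmp/TestDataWrite
time dd if=/dev/zero of="$OUT" bs=1024k count=64 conv=fsync
rm -f "$OUT"
```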
Just to make you happy, here's the same 2GB test with a 1MB blocksize:

time dd if=/dev/zero of=/home/warlord/TestDataWrite bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 49.2395 s, 43.6 MB/s
0.000u 2.064s 0:49.57 4.1% 0+0k 1376+4194304io 15pf+0w

See?  There's my 40-50MB/s again.

> Try running dd directly from /dev/zero into /dev/null, and see how your
> blocksizes affect it.  That way you can ensure you're at least running dd
> efficiently...  And then you can write something to disk.  Are you familiar
> with pv?  It's useful to stick into your pipeline, so you can see what's
> going on.

And verified, it's not a generation problem:

time dd if=/dev/zero of=/dev/null bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 0.127949 s, 16.8 GB/s
0.000u 0.127s 0:00.30 40.0% 0+0k 256+0io 4pf+0w

> I agree, 50 MB/sec is not stellar.  Any typical 7200rpm sata drive should
> sustain 1Gbit/sec.  SSD's should sustain about the same throughput, but much
> faster IOPS.

This is a spinning disk, not an SSD, but as you say it should be able to
sustain 1Gb/s.  It's not; I'm only getting about 400Mb/s to the disk
through dm-crypt.  Unfortunately I don't have any non-encrypted space
available on the disk, at least nothing large enough to get a good
sample, so I can't tell whether it's the disk or dm-crypt.  The disk in
this machine is the same model as the disk in the other machine where I
was seeing full-speed writes without dm-crypt.  Alas, I changed the
hardware and added dm-crypt at the same time, so I don't know whether
it's the ThinkPad vs. Dell or no-encryption vs. dm-crypt that's slowing
down my disk I/O.

-derek

--
       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL: http://web.mit.edu/warlord/    PP-ASEL-IA     N1NWH
       warlord at MIT.EDU                        PGP key available
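[Editor's note: the blocksize sweep Harvey suggests can be scripted directly.
This is a sketch using only dd and /dev/null, nothing from the thread beyond
that suggestion; the sizes chosen here are arbitrary.]

```shell
# Sweep several blocksizes through a null pipeline, per the suggestion
# above, to see how much overhead dd itself adds at each size before any
# disk is involved.  Each pass copies the same 64 MB; only bs changes.
for bs in 1024 4096 65536 1048576; do
    count=$((67108864 / bs))
    echo "bs=$bs count=$count"
    dd if=/dev/zero of=/dev/null bs=$bs count=$count
done
```

If throughput climbs sharply between 1k and 64k here, the 1k-blocksize disk
numbers earlier in the thread understate what the drive can do.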