
BLU Discuss list archive


[Discuss] Dev Ops - architecture (local not cloud)

On 12/07/2013 09:47 PM, Edward Ned Harvey (blu) wrote:
>> From: at [mailto:discuss-
>> at] On Behalf Of Greg Rundlett
>> (freephile)
>>   I think it's pretty obvious why it's not performing: user home directories
>> (where developers compile) should not be NFS mounted. [1]  The source
>> repositories themselves should also not be stored on a NAS.
> For high-performance hybrid distributed/monolithic environments, at a few companies I've used systems that were generally interchangeable clones of each other, but each one had a local /scratch (or /no-backup) directory, and each could reach the others via NFS automount at /scratches/machine1/ (or /no-backup-automount/machine1/).
> If I had it to do over now, I would look at ceph or gluster to provide a unified namespace while leaving the underlying storage distributed.  But it would take quite a bit of experimentation and configuration to get the desired performance characteristics.  Autofs has the advantage of being simple to configure.
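For anyone unfamiliar with that layout, here is a minimal autofs sketch of the scheme Ned describes. The hostnames, map filename, and mount options are illustrative, not taken from his setup:

```
# /etc/auto.master -- mount each machine's local /scratch under /scratches/<host>
/scratches  /etc/auto.scratches

# /etc/auto.scratches -- one static entry per machine
machine1  -fstype=nfs,rw  machine1:/scratch
machine2  -fstype=nfs,rw  machine2:/scratch
```

With this in place, a process on machine2 that opens /scratches/machine1/build triggers an on-demand NFS mount of machine1:/scratch, while machine1's own jobs hit the same data as fast local disk.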

I highly recommend moosefs instead of ceph or gluster.

> A problem I've seen IT folks (including myself, until I learned better) make over and over: they use raid5, raid6, or raid-DP believing they get redundancy plus performance, but when you benchmark different configurations you find they only perform well for large sequential operations.  They perform like a single disk (sometimes worse) for small random IO, which is unfortunately the norm.  I strongly recommend building your storage out of something closer to Raid-10, which performs much, much better for random IO, the typical case.
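The usual back-of-the-envelope explanation for that benchmark result is the small-write penalty: every small random write to a parity array costs extra disk operations. A rough sketch, with illustrative per-disk numbers rather than anything measured:

```python
# Approximate random-write IOPS for an array, given the classic write
# penalties: RAID-10 = 2 (write both mirrors), RAID-5 = 4 (read data,
# read parity, write data, write parity), RAID-6 = 6 (two parity blocks).
def random_write_iops(n_disks, iops_per_disk, write_penalty):
    return n_disks * iops_per_disk / write_penalty

disk_iops = 150  # ballpark for a 7200 RPM spindle

print(random_write_iops(8, disk_iops, 2))  # RAID-10: 600.0
print(random_write_iops(8, disk_iops, 4))  # RAID-5:  300.0
print(random_write_iops(8, disk_iops, 6))  # RAID-6:  200.0
```

Sequential writes hide the penalty because full-stripe writes let the controller compute parity without the read-modify-write cycle, which is why parity RAID benchmarks well on streaming workloads and poorly on random ones.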
> Also:
> A single disk delivers roughly 1 Gbit/s, so your storage network needs to be much faster than that.  The next logical step up is 10Gb ether, but in terms of bang for the buck you get a LOT more by going to Infiniband or Fibrechannel instead.
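The arithmetic behind that recommendation, sketched with nominal line rates (signaling rates, ignoring protocol overhead):

```python
# How many ~1 Gbit/s disks it takes to saturate a given link.
def disks_to_saturate(link_gbit, disk_gbit=1.0):
    return link_gbit / disk_gbit

print(disks_to_saturate(1.0))   # 1 GbE: a single disk fills the pipe -> 1.0
print(disks_to_saturate(10.0))  # 10 GbE: about ten disks -> 10.0
print(disks_to_saturate(40.0))  # e.g. 4x QDR InfiniBand at 40 Gbit/s signaling -> 40.0
```

The point is that a multi-disk array behind a 1 GbE link is network-bound almost immediately, and even 10 GbE is easy to saturate with a modest spindle count.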

BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.

