It's hard to quantify what's going on here. Yes, it is slow, and we can make guesses as to why, but without a whole-system diagnostic it is hard to know.

NFS server:
- Network connectivity: 100M, 1G, 10G?
- Sync or async exports?
- OS (Solaris, FreeBSD, any BSD, Linux, etc.)?
- File system?
- NFS server daemon?
Describe the NFS server in detail: OS, NFS daemon, storage, etc.

Client:
- Network connectivity: 100M, 1G, 10G?

Infrastructure:
- How many hops? Routers/firewalls in between?

NFS is not as fast as a local disk, but it should not be that slow.

> Performance comparison:
>
> svn checkout single repository on old infrastructure
> real  5m44.100s
> user  0m36.957s
> sys   0m14.757s
>
> svn checkout single repository on new infrastructure, but only using NFS
> for "read" (local working copy stored on local disk)
> real  3m15.057s
> user  1m18.195s
> sys   0m53.796s
>
> svn checkout same repository on new infrastructure, with writes stored on
> NFS volume
> real  28m53.220s
> user  1m45.713s
> sys   3m26.948s
>
> Greg Rundlett
>
> On Fri, Dec 6, 2013 at 8:35 AM, Greg Rundlett (freephile)
> <greg at freephile.com> wrote:
>
>> We are replacing a monolithic software development IT infrastructure
>> where source code control, development, and compiling all take place on a
>> single machine with something more manageable, scalable, and redundant.
>> The goal is to provide more enterprise features like manageability and
>> scalability, with failover and disaster recovery.
>>
>> Let's call these architectures System A and System B. System A is
>> "monolithic" because everything is literally housed and managed on a
>> single hardware platform. System B is modular and virtualized, but still
>> running in a traditional IT environment (i.e. not in the cloud). The
>> problem is that the new system does not come close to the old system in
>> performance. I think it's pretty obvious why it's not performing: user
>> home directories (where developers compile) should not be NFS mounted.
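The checklist at the top of this reply can largely be answered from the client with standard tools. A minimal sketch, assuming a Linux client; the test directory here is a placeholder, so point it at the NFS mount you want to measure:

```shell
# Show negotiated NFS mount options (vers, proto, rsize/wsize, actimeo).
mount -t nfs,nfs4

# Per-operation NFS client counters (GETATTR/LOOKUP/WRITE mix), if installed.
command -v nfsstat >/dev/null && nfsstat -c

# Rough small-file write test: creating many small files exercises the
# synchronous metadata operations that dominate an svn checkout far better
# than one large sequential dd write would.
# TESTDIR is a placeholder -- set it to a directory on the NFS mount.
TESTDIR=$(mktemp -d)
time sh -c "cd $TESTDIR && for i in \$(seq 1 1000); do echo data > f\$i; done"
ls "$TESTDIR" | wc -l
```

Comparing the small-file timing on the NFS mount against a local disk gives a quick per-file-operation cost estimate without involving svn at all.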
>> [1] The source repositories themselves should also not be stored on a NAS.
>>
>> What does your (software development) IT infrastructure look like?
>>
>> One of the specific problems that prompted this re-architecture was disk
>> space. Not the repository per se, but with 100+ developers each having
>> one or more checkouts of the repos (home directories), we have maxed out
>> a 4.5TB volume.
>>
>> More specifically, here is what we have:
>>
>> System A (old system):
>> - single host
>> - standard Unix user accounts
>> - svn server using the file:/// RA protocol
>> - 4.5TB local disk storage (maxed out)
>> - NFS-mounted NAS for "tools", e.g. Wind River Linux for compiling our OS
>>
>> System B (new system):
>> - series of hosts managed by VMware ESX 5.1 (version control host + build
>>   servers), connected via a 10Gb link to an EMC VNXe NAS for home
>>   directories, tools, and source repos
>> - standard Unix user accounts controlled by an NIS server (adds
>>   manageability across the domain)
>> - svn server using the http:// RA protocol (adds repository access
>>   control and management)
>> - NFS-mounted NAS for the tools, the repositories, and the home
>>   directories
>>
>> Notes:
>> - The repos we're dealing with are multiple "large" repositories, e.g.
>>   2GB, 43,203 files, 2,066 directories.
>> - We're dealing with 100+ users.
>>
>> [1]
>> http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/misuses_nfs_perf.htm
>>
>> Greg Rundlett
>
> _______________________________________________
> Discuss mailing list
> Discuss at blu.org
> http://lists.blu.org/mailman/listinfo/discuss
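One plausible explanation for the numbers above (NFS writes roughly 9x slower than NFS reads with a local working copy: 28m53s vs. 3m15s) is synchronous write and attribute-update behavior: an svn checkout of 43,000+ files generates enormous numbers of small writes and metadata operations, each of which can require a round trip to the NAS. A hypothetical client-side /etc/fstab sketch; the server name, export path, and option values are assumptions to benchmark, not tested recommendations:

```
# Hypothetical fstab entry for the NFS home-directory mount.
# rsize/wsize raise transfer sizes; actimeo=60 caches attributes for 60s,
# cutting GETATTR round trips; noatime avoids write-back on every read.
nas:/export/home  /home  nfs  rw,hard,nfsvers=3,rsize=65536,wsize=65536,noatime,actimeo=60  0  0
```

Note that sync vs. async is a server-side export setting, not a client mount option, so it has to be checked on the VNXe side; and attribute caching trades coherence for speed, which may matter with 100+ users sharing the mount.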