BLU Discuss list archive



make -j optimization



David Kramer comments:

| In GNU make, you can specify a -j <n> option, to run <n> commands
| simultaneously.  This is good.
|
| I'm having an argument with a coworker over whether specifying a value for
| <n> greater than the number of processors in the system actually does
| any good or not.  I don't see how it can.  In fact, specifying one fewer
| than the number of processors should have almost the same performance as
| specifying the number of processors.
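
For anyone who hasn't used it, the invocation is simply something
like this (the job count here is arbitrary, just for illustration):

    make -j 4    # run up to 4 recipe commands at once
    make -j      # no number: GNU make puts no limit on the job count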

Well, if this were true, then on a single-processor machine, there
should be no advantage in running several makes in parallel.  But I
know from lots of experience that this is far from true.  I have a
number of makefiles for C programs that have nearly every function in
a different file, so there are lots of .o files to make.  So I've done
parallelism the brute-force way, by firing up make subprocesses with
'&' on the end of the line.  This should be a lot less efficient than
a make that can do the job automagically.  I've found that on most
machines, this can speed up a build by a factor of 3 or 4.
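
By "brute force" I mean roughly the following (the file names here
are invented for illustration):

    make foo.o bar.o &
    make baz.o quux.o &
    make zot.o &
    wait            # let the background makes finish
    make myprog     # the final link, once all the .o files exist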

So I'd predict that with N processors, compiling 4*N files in
parallel should give an improvement, but past that point you may not
find that it's any faster.
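
A crude way to collect such numbers would be a loop along these lines
(assuming a tree that builds cleanly from the top):

    for n in 1 2 4 8 16; do
        make clean > /dev/null
        echo "== -j $n =="
        time make -j $n > /dev/null
    done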

Anyone have numbers for any particular machines?





