
BLU Discuss list archive



Was Moore's law, now something else, parallelism



OK, so I think we all more or less agree that the practical benefits
realized from Moore's law are at an end. We may be packing in more
transistors per square centimeter (per the technical definition of
Moore's law), but we aren't getting any faster.

Disks aren't getting much faster; sure, a millisecond here and there,
and SATA is an improvement, but nothing earth-shattering. Solid-state
disks will probably fix this, but it's a few years out before they'll be
cost-effective in a practical sense. Meanwhile, a 1 TB disk is less than
$90. That's an astounding amount of storage.

Networking isn't getting much faster without switching away from copper,
and even fiber is only incrementally faster.

Computers themselves aren't any faster; they only have more CPU cores.
This is all well and good for increasing processing "capacity," but it
does nothing for individual processes.
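That ceiling on what extra cores buy a single process is the classic
Amdahl's law argument; a quick sketch (the 90% parallel fraction below
is an illustrative assumption, not a measurement):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a task parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even if 90% of a task parallelizes, 8 cores give well under 8x,
# and infinite cores can never beat 1/0.1 = 10x:
print(round(amdahl_speedup(0.9, 8), 2))     # 4.71
print(round(amdahl_speedup(0.9, 1000), 2))  # 9.91
```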

So an interesting time is upon us. Sure, in theory, most tasks can be
broken into many simultaneous actions, but it isn't always easy and it
doesn't always make things "faster." Synchronization alone can reduce
logically single-threaded work to false parallelism: threads that look
concurrent but mostly wait on one another.
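A minimal Python sketch of that false parallelism (the counter and
iteration counts are arbitrary): four threads run "in parallel," but
because every step needs the same lock, they effectively take turns.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        # Every iteration takes the same lock, so the threads execute
        # one at a time: parallel in form, serial in fact.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- correct, but no faster than one thread
```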

Multiple processors are great for tasks that can truly be done in
parallel, like a RAID process in a desktop machine. If you can offload
that processing to a different core, it's like having a dedicated RAID
controller. Image rendering makes sense too: multiple cores working on
the various sections of an image. If you are really, really good, you
can track cell dependencies in a spreadsheet and make recalculation
parallel. Compression makes sense.
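The image-sections idea can be sketched with a process pool;
`render_rows` below is a hypothetical stand-in for real per-section
rendering work:

```python
from multiprocessing import Pool

def render_rows(rows):
    # Stand-in for real rendering work on one horizontal section.
    return [[pixel * 2 for pixel in row] for row in rows]

def render_image(image, workers=4):
    # Split the image into sections; each core renders its section
    # independently, then the results are reassembled in order.
    chunk = max(1, len(image) // workers)
    sections = [image[i:i + chunk] for i in range(0, len(image), chunk)]
    with Pool(workers) as pool:
        rendered = pool.map(render_rows, sections)
    return [row for section in rendered for row in section]

if __name__ == "__main__":
    image = [[x + y for x in range(8)] for y in range(8)]
    print(render_image(image) == [[2 * (x + y) for x in range(8)] for y in range(8)])
```

This only pays off when the per-section work dwarfs the cost of shipping
the pixels between processes, which is exactly the synchronization
caveat above.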

A number of processes make sense for parallel processing, but it's hard
to do. People in the industry are complaining that it is a "language
issue": we don't have the languages to express parallel processing well.
Maybe.

Maybe it's an OS issue? Like the RAID process described above, operating
systems have to become far more modular and parallel to benefit. That
whole user-space microkernel approach doesn't sound so useless now.
Monolithic/modular kernels ruled while CPU cores were scarce; with many
CPUs, there is actually REAL benefit to be taken from it. Also, old
truisms may now be becoming wrong: a user-space process for handling
services should now be effectively more efficient (in operation) than a
kernel-based one, as long as resource access and contention are managed
well.

A last problem, somewhat unrelated: disk size vs. electronic
communication. How long does it take to copy the data we are capable of
generating? I have a digital video camera with a 4 GB SD card, and it is
not a quick operation to copy that data. On a practical basis, it is a
very long operation to copy data from one device to another.
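The arithmetic makes the point; the sustained transfer rates below are
assumed for illustration, not measured:

```python
def copy_minutes(size_gb, mb_per_sec):
    """Minutes to move size_gb of data at a sustained rate of mb_per_sec."""
    return size_gb * 1024 / mb_per_sec / 60

# Assumed rates: a slow card reader at 10 MB/s, a disk at 60 MB/s.
print(round(copy_minutes(4, 10), 1))      # 6.8 -- the 4 GB SD card
print(round(copy_minutes(1024, 60), 1))   # 291.3 -- a full 1 TB disk, ~5 hours
```

Storage capacity has grown far faster than the links we use to move the
data off it, which is why "copy the whole disk" now feels so slow.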

Any thoughts?









BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.




Boston Linux & Unix / webmaster@blu.org