On Tue, Jul 13, 2010 at 12:52 PM, Richard Pieri <richard.pieri-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:

> On Jul 13, 2010, at 9:48 AM, Mark Woodward wrote:
>>
>> Any thoughts?
>
> I think that you're stuck on clock speeds. The original Core
> architecture breezed past Pentium 4 at lower clock speeds and
> significantly lower power consumption. Each successive iteration of
> Core has been faster out of proportion to the clock speed increases
> that you've been conditioned to expect. There have been, and are,
> comparable developments with POWER and ARM architectures, and that's
> not even beginning to touch on what's happening with GPUs.

Faster computing and "2x faster computing every 18 months" are not the
same thing. Of course, it was officially never 2x faster computing
anyway; it was about the number of transistors.

I don't know exactly how much faster i3/i5/i7 CPUs are on
single-threaded code vs. the original Core 2 CPUs. However, everything
I've read in the last ten years supports the thesis that single-core
performance increases are not occurring at the same rate as in the
past. Whether it's because increasing the clock rate would fry the
CPU, or because the designers can't figure out a way to use more
transistors to increase the speed of that multiply instruction, isn't
really important. The result is that programmers can no longer assume
that their code will speed up due to hardware improvements without any
changes to the program itself.

Bill Bogstad
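Bill's closing point is the classic "free lunch is over" argument: once
single-core gains stall, extra transistors show up as extra cores, and
untouched serial code gets nothing. A minimal Python sketch of Amdahl's
law makes that concrete (not from the original thread; the
amdahl_speedup helper name is just illustrative):

def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup when `parallel_fraction` of the work can run
    concurrently on `cores` cores; the serial remainder caps the gain."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

if __name__ == "__main__":
    # A fully serial program (parallel fraction 0%) stays at 1.00x no
    # matter how many cores ship; only restructured code benefits.
    for p in (0.0, 0.5, 0.9):
        for n in (2, 4, 8):
            print(f"parallel={p:.0%} cores={n}: "
                  f"{amdahl_speedup(p, n):.2f}x")

Running it shows the first row pinned at 1.00x across 2, 4, and 8
cores, while a 90%-parallel program reaches about 4.71x on 8 cores.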