On Thu, Mar 10, 2011 at 02:31:09PM -0500, Bill Bogstad wrote:
> On Thu, Mar 10, 2011 at 1:56 PM, Dan Ritter <dsr-mzpnVDyJpH4k7aNtvndDlA at public.gmane.org> wrote:
> > On Thu, Mar 10, 2011 at 01:39:01PM -0500, Bill Bogstad wrote:
> >> [question about CPU caches]
> >
> > Up to 12 MB, but you have to look at how it's split between cores, and
> > how it's split between code and data. If you have to have all the data
> > at one core, you may not be able to fit it all, and you very likely
> > won't be able to control it.
>
> Almost all data. I was under the impression that L3 caches were
> uniformly shared across all cores in current implementations. Is this
> incorrect? I can probably get my job done with a single core as long
> as there is enough low-latency cache.

I was thinking of L2 for some reason. Yes, an AMD Opteron 6000 series
CPU has 12 MB of L3, organized as 2 groups of 6 MB, plus 512 KB of L2
per core, with up to 12 cores per CPU and up to 4 CPUs per motherboard.

> > Is this something amenable to parallel-processing via graphics
> > coprocessors?
>
> Yes, but I want to stay away from specialized hardware as much as
> possible. The cost curve is rarely good for that kind of thing.
> Admittedly, gaming has made graphics coprocessors a sufficiently large
> market that performance per dollar has kept up with general CPUs for
> quite a while now. (Actually better for parallelizable problems.) I
> also don't want to deal with those kinds of programming environments
> if I don't have to do so.

It may be more cost-effective to say "add this $800 video card to each
existing machine" rather than "buy machines that support these $1500
CPUs". Or not.

-- 
http://tao.merseine.nu/~dsr/eula.html is hereby incorporated by reference.
You can't defend freedom by getting rid of it.
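
PS: if you want to sanity-check what cache sizes a given box actually
has before committing to a core count, here's a minimal sketch, assuming
Linux with glibc (these _SC_ names are glibc extensions, not POSIX, and
sysconf() returns 0 for any level the kernel doesn't report):

/* cache-sizes.c -- print the cache sizes glibc detected.
 * Build: gcc -o cache-sizes cache-sizes.c
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long l1d = sysconf(_SC_LEVEL1_DCACHE_SIZE);  /* per-core data cache  */
    long l2  = sysconf(_SC_LEVEL2_CACHE_SIZE);   /* per-core on Opteron  */
    long l3  = sysconf(_SC_LEVEL3_CACHE_SIZE);   /* shared across a group */

    printf("L1d: %ld KB\n", l1d / 1024);
    printf("L2:  %ld KB\n", l2  / 1024);
    printf("L3:  %ld KB\n", l3  / 1024);
    return 0;
}

To see which cores actually share each level (e.g. to confirm the
2x6MB L3 split on a 12-core Opteron), recent kernels expose it under
/sys/devices/system/cpu/cpu0/cache/index*/shared_cpu_list.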