On Thu, Mar 10, 2011 at 1:56 PM, Dan Ritter <dsr-mzpnVDyJpH4k7aNtvndDlA at public.gmane.org> wrote:
> On Thu, Mar 10, 2011 at 01:39:01PM -0500, Bill Bogstad wrote:
>> [question about CPU caches]
>
> Up to 12 MB, but you have to look at how it's split between cores, and
> how it's split between code and data. If you have to have all the data
> at one core, you may not be able to fit it all, and you very likely
> won't be able to control it.

Almost all data. I was under the impression that L3 caches were uniformly shared across all cores in current implementations. Is this incorrect? I can probably get my job done with a single core as long as there is enough low-latency cache.

> How are you getting the data in and out? Variations in processor
> microcode and exact implementations of instruction sets are
> going to be interesting.

The data I need to access quickly is (relatively) static. I don't think I care precisely how long it takes for the cache implementation to migrate all of my 'hot' data into the L3 cache.

> Is this something amenable to parallel-processing via graphics
> coprocessors?

Yes, but I want to stay away from specialized hardware as much as possible; the cost curve is rarely good for that kind of thing. Admittedly, gaming has made graphics coprocessors a large enough market that their performance per dollar has kept up with general-purpose CPUs for quite a while now (actually better, for parallelizable problems). I also don't want to deal with those kinds of programming environments if I don't have to.

Bill Bogstad
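P.S. For anyone who wants to check how the caches on their own box are shared, the kernel exports the topology under sysfs. A quick sketch, assuming the standard /sys/devices/system/cpu/cpuN/cache layout (exact index entries and attributes can vary by kernel version):

    #!/usr/bin/env python
    # Print each cache level visible to cpu0 and which CPUs share it,
    # from the topology the kernel exports under sysfs.
    import glob, os

    def read(d, name):
        with open(os.path.join(d, name)) as f:
            return f.read().strip()

    for d in sorted(glob.glob('/sys/devices/system/cpu/cpu0/cache/index*')):
        print('L%s %-8s %8s  shared_cpu_list=%s' % (
            read(d, 'level'), read(d, 'type'), read(d, 'size'),
            read(d, 'shared_cpu_list')))

On the Intel parts being discussed here, the level-3 entry typically lists every core in shared_cpu_list, while the L1/L2 entries list only a single core (plus its hyperthread sibling, if any), which is one way to answer the "uniformly shared" question empirically.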