On Thu, Mar 10, 2011 at 01:39:01PM -0500, Bill Bogstad wrote:
> This isn't really a Linux/Unix specific question, but I'm hoping
> people here will have some insight into it.
>
> I'm looking at problems where I could really use low-latency random
> access to relatively small (order of megabytes) amounts of data. DRAM
> access times are too slow. So far my tests show that on-chip L2/L3
> speeds might be good enough. It looks like current desktop CPUs have
> as much as 6 MB of L3, while standard server CPUs can be found with as
> much as 8 MB. Is this an accurate portrayal of the current landscape?
> Any idea what the future plans are for the Intel/AMD CPU duopoly in
> this area?

Up to 12 MB, but you have to look at how it's split between cores, and
how it's split between code and data. If you need all the data at one
core, you may not be able to fit it, and you very likely won't be able
to control where it lives.

How are you getting the data in and out?

Variations in processor microcode and exact implementations of
instruction sets are going to be interesting.

Is this something amenable to parallel processing on a graphics
coprocessor?

--
http://tao.merseine.nu/~dsr/eula.html is hereby incorporated by reference.

You can't defend freedom by getting rid of it.
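As an aside on the "how is it split between cores and between code and
data" question: a minimal sketch, assuming a recent Linux kernel that
exposes the usual /sys/devices/system/cpu/cpuN/cache/indexM/ entries,
which prints each cache's level, type, size, and which logical CPUs
share it. Paths and output format here are illustrative, not from the
original thread.

    #!/usr/bin/env python
    # Enumerate cpu0's caches via Linux sysfs and show how each level is
    # shared.  Assumes /sys/devices/system/cpu/cpu0/cache/indexM/ exists,
    # as it does on most recent kernels; adjust if yours differs.
    import glob
    import os

    def read(path):
        with open(path) as f:
            return f.read().strip()

    for index in sorted(glob.glob('/sys/devices/system/cpu/cpu0/cache/index*')):
        level  = read(os.path.join(index, 'level'))            # 1, 2, 3, ...
        ctype  = read(os.path.join(index, 'type'))             # Data / Instruction / Unified
        size   = read(os.path.join(index, 'size'))             # e.g. "8192K"
        shared = read(os.path.join(index, 'shared_cpu_list'))  # e.g. "0-7"
        print('L%s %-11s %-7s shared by CPUs %s' % (level, ctype, size, shared))

On a typical multi-core part this will show per-core L1 data and
instruction caches, a per-core unified L2, and a unified L3 shared
across all cores, which is the figure that matters for the working-set
question above.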