
BLU Discuss list archive



hardware RAM disk



On 05/12/2009 03:33 PM, Richard Pieri wrote:
> On May 12, 2009, at 2:51 PM, Jerry Feldman wrote:
>
>> Could you elaborate. Right now I think the jury is still out on
>> these devices. Personally, back in the 1970s I was under the
>> impression that
>
> Say that you are running some form of real-time (or near-real-time),
> high-volume system.  The one I have experience with is institutional
> equities trading.  A potential bottleneck is writing out the
> transaction logs.  Say that you need a minimum of 15,000 IOPS to keep
> up with your heaviest volume (I exaggerate by a factor of 10 for the
> sake of example).  What are your options?  The ones I had looked at are:
>
> * Lots of rotating media striped together.  Given disks that can
> handle bursts of 300 IOPS, I would need at least 50 such disks to
> handle the load.  Least expensive up front, but it requires the most
> power and cooling over the long term, especially if the application
> grows.
>
> * Rotating media with lots of battery-backed cache.  This is cheaper
> on disk, but in the case of EMC it actually costs a lot more up front
> than the disks.  Does not scale at all.  DMX frames have a fixed,
> finite, relatively small (IIRC, 128GB on DMX3) cache capacity.
>
> * Flash-based SSDs.  These can handle the load but are quite
> expensive, though not as expensive as EMC cache.  Flash-based SSDs
> have the same wear problems as every other form of flash-based media.
> Much of the expense is in redundant cells to offer MTBF comparable to
> rotating media.  Immune to power loss.
>
> * SRAM- or DRAM-based SSDs.  Fastest media, less expensive than flash
> and cache, although more expensive than rotating media.  Longer MTBF
> than flash; no wear leveling needed.  More scalable than cache.
> Susceptible to power failure if the batteries fail.
>
> For high-performance needs, where cache and rotating media are
> insufficient, there is a reasonable need for either flash or RAM
> SSDs.  Which is appropriate depends on the application.  For my
> example, based on a real project I worked on, I would use flash-based
> SSDs because I want the power fault tolerance.  For an application
> that requires the performance but not necessarily the fault tolerance,
> I would consider RAM-based SSDs.
>
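To put numbers on the disk-count arithmetic in the first bullet above, here is a
minimal sketch (Python). The 15,000 IOPS target and 300 IOPS per disk are simply
the figures from the quoted example, not measurements:

    import math

    # Figures taken from the example above: the target transaction-log
    # write rate, and what one rotating disk can sustain in bursts.
    target_iops = 15000     # log writes per second at peak
    per_disk_iops = 300     # burst rate of a single rotating disk

    # Round up, since you cannot stripe across a fraction of a disk.
    disks_needed = math.ceil(target_iops / per_disk_iops)
    print("disks needed in the stripe: %d" % disks_needed)  # -> 50
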
Certainly, specific applications tend to dictate the type of media, and
equity trading clearly has high-speed as well as persistence
requirements. In the context of today's data centers, with redundant
power sources and UPS systems, I would think that the SRAM/DRAM-based
SSDs would be a good fit, but as you pointed out, they need batteries to
survive even a momentary power loss.  I used to love Digital's DECtape
systems, which were random-access magnetic tape systems, but a few orders
of magnitude slower than the systems we have today. Unfortunately I
don't get into the data centers too often, being a Software Engineer
rather than a System Administrator.
On the other hand, flash-based SSDs are starting to become useful at the
netbook and notebook level these days.
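
For anyone who wants a rough ballpark of what a single device actually
delivers before sizing a stripe, here is a crude synchronous random-write
probe (Python). It is a sketch only: the file name, block size, and duration
are arbitrary, and a real benchmark tool such as fio will give far more
trustworthy numbers:

    import os, random, time

    # Crude random-write IOPS probe.  O_SYNC forces each 4 KiB write to
    # reach stable storage before the next one is issued, which is roughly
    # the pattern of a transaction log.  Path and sizes are made up.
    path = "iops-test.bin"
    block = 4096
    file_size = 64 * 1024 * 1024
    duration = 10  # seconds

    fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_SYNC)
    os.ftruncate(fd, file_size)
    buf = b"\0" * block

    ops = 0
    start = time.time()
    while time.time() - start < duration:
        # Pick a random block-aligned offset within the file and write it.
        os.lseek(fd, random.randrange(file_size // block) * block, os.SEEK_SET)
        os.write(fd, buf)
        ops += 1
    os.close(fd)

    print("approx. %d synchronous 4 KiB write IOPS" % (ops / duration))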

-- 
Jerry Feldman <gaf-mNDKBlG2WHs at public.gmane.org>
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846







