[Discuss] Gluster startup, small-files performance




On 05/14/2014 11:58 AM, Richard Pieri wrote:
> F. O. Ozbek wrote:
>> That is the whole point: "doesn't flush its write buffers when
>> instructed to do so". You don't need to instruct; the data gets written
>> all the time. When we ran the tests, we did tens of
>> thousands of writes (basically checksummed test files) and the
>> read tests succeeded every time.
>
> But what it seems you haven't done is kill the power while writing to
> see how messy the damage is and how difficult the cleanup will be.
>

If you lose power to your entire storage cluster, you will lose some
data; this is true of almost all filesystems (including MooseFS and
GlusterFS).
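If the filesystem does honor flush requests, an application can bound
that loss window itself. Here is a minimal, untested Python sketch of
what "flushing write buffers when instructed" means at the syscall
level; the path and payload are placeholders:

import os

def durable_write(path, data):
    with open(path, "wb") as f:
        f.write(data)          # write into Python's buffer
        f.flush()              # push the buffer down to the OS
        os.fsync(f.fileno())   # the "instruction": ask for stable storage;
                               # some network filesystems silently ignore it
    # Sync the parent directory so the new entry survives a crash too.
    dfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)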

>
>> The fact that it doesn't support forcing the buffer to the disk
>> is not the problem in this case. Glusterfs will start giving
>> you random I/O errors under heavy load. How is that any good?
>
> It's not. It's also not relevant to lack of atomic writes on MooseFS.
>
> Yes, I understand the desire for throughput. I don't like the idea of
> sacrificing reliability to get it.

It is possible to set up a MooseFS cluster with redundant metadata
servers in separate locations (and chunk servers in separate locations).
This will save you from power outages as long as you don't lose power
at all of the locations at the same time. Keep in mind these servers
are in racks with UPS units and generator backups.
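As a rough illustration, something like this could watch that
redundancy from a client box. It's an untested sketch: the hostnames
are made up, and 9421 is assumed to be the mfsmaster client port.

import socket

SITES = {"site-a": "mfsmaster-a.example.com",   # hypothetical hosts
         "site-b": "mfsmaster-b.example.com"}

def reachable(host, port=9421, timeout=2.0):
    # True if we can open a TCP connection to the metadata server.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

up = [site for site, host in SITES.items() if reachable(host)]
if len(up) < len(SITES):
    print("WARNING: metadata redundancy degraded; reachable sites:", up)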

Basically, this fsync stuff is the same argument you hear over and
over again from the GlusterFS folks. It is pretty much
irrelevant unless you are in a financial-industry transactional
environment. Even in those environments, you can provide
transactional integrity at the application level using multiple
separate MooseFS clusters.
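To illustrate the idea, here is a minimal sketch, assuming two
independent clusters mounted at hypothetical paths: write and fsync
the record to each cluster, then re-read and verify checksums, and
acknowledge the write only if every replica agrees.

import hashlib
import os

MOUNTS = ["/mnt/mfs-a", "/mnt/mfs-b"]   # hypothetical mount points

def write_replica(mount, name, data):
    path = os.path.join(mount, name)
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # best effort; a no-op where fsync is ignored

def commit(name, data):
    # Acknowledge only after every cluster returns the same bytes.
    digest = hashlib.sha256(data).hexdigest()
    for mount in MOUNTS:
        write_replica(mount, name, data)
    for mount in MOUNTS:
        with open(os.path.join(mount, name), "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                return False
    return True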
>
>
>> I don't know what you are referring to in Cambridge, but
>> we are not Cambridge.
>
> http://www.boston.com/metrodesk/2012/11/29/cambridge-power-outage-thousands-hit-blackout/0r93dJVZglkOagAFw8w9bK/story.html
>

Like I said, we are not Cambridge.





