BLU Discuss list archive
[Discuss] Gluster startup, small-files performance
- Subject: [Discuss] Gluster startup, small-files performance
- From: ozbek at gmx.com (F. O. Ozbek)
- Date: Wed, 14 May 2014 12:13:04 -0400
- In-reply-to: <53739294.2060903@gmail.com>
- References: <e2d144397125b9340bda1ef334a92ba0.squirrel@webmail.ci.net> <537368BC.9040801@gmx.com> <5373789E.1060700@gmail.com> <53737BC5.4020700@gmx.com> <53738013.8000500@gmail.com> <53738216.6060508@gmx.com> <53738816.6020505@gmail.com> <53738ADA.2040006@gmx.com> <53739294.2060903@gmail.com>
On 05/14/2014 11:58 AM, Richard Pieri wrote:
> F. O. Ozbek wrote:
>> That is the whole point, "doesn't flush its write buffers when
>> instructed to do so". You don't need to instruct. The data gets
>> written all the time. When we ran the tests, we did tens of
>> thousands of writes (basically checksummed test files), and the
>> read tests succeeded every time.
>
> But what it seems you haven't done is kill the power while writing to
> see how messy the damage is and how difficult the cleanup will be.

If you lose power to your entire storage cluster, you will lose some
data; this is true on almost all filesystems, including MooseFS and
GlusterFS.

>> The fact that it doesn't support forcing the buffer to the disk
>> is not the problem in this case. GlusterFS will start giving
>> you random I/O errors under heavy load. How is that any good?
>
> It's not. It's also not relevant to the lack of atomic writes on
> MooseFS.
>
> Yes, I understand the desire for throughput. I don't like the idea of
> sacrificing reliability to get it.

It is possible to set up a MooseFS cluster with redundant metadata
servers in separate locations (and chunk servers in separate
locations). This will save you from power outages as long as you don't
lose power in all the locations at the same time. Keep in mind these
servers are in racks with UPS units and generator backups.

Basically, this fsync argument is the same one you hear over and over
again from the GlusterFS folks. It is largely irrelevant unless you
are in a financial-industry transactional environment. Even in those
environments, you can provide transactional integrity at the
application level using multiple separate MooseFS clusters.

>> I don't know what you are referring to in Cambridge, but
>> we are not Cambridge.
>
> http://www.boston.com/metrodesk/2012/11/29/cambridge-power-outage-thousands-hit-blackout/0r93dJVZglkOagAFw8w9bK/story.html

Like I said, we are not Cambridge.
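For readers curious what "tens of thousands of writes (basically checksummed test files)" might look like in practice, here is a minimal sketch of that kind of write-then-verify test. The function names, file count, and file size are my own assumptions for illustration; the original tests were presumably run against a MooseFS mount rather than a temporary directory, and the `os.fsync` call shows the "forcing the buffer to the disk" step being debated in the thread.

```python
# Sketch of a checksummed write/read verification test (illustrative only).
import hashlib
import os
import tempfile

def write_checksummed_files(directory, count=100, size=4096):
    """Write `count` files of random data; return {path: sha256 hex digest}."""
    checksums = {}
    for i in range(count):
        path = os.path.join(directory, "testfile_%05d" % i)
        data = os.urandom(size)
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            # Ask the OS to push the data to stable storage -- the fsync
            # behavior under discussion in this thread.
            os.fsync(f.fileno())
        checksums[path] = hashlib.sha256(data).hexdigest()
    return checksums

def verify_checksummed_files(checksums):
    """Re-read each file and return the paths whose checksums no longer match."""
    failures = []
    for path, expected in checksums.items():
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected:
            failures.append(path)
    return failures

if __name__ == "__main__":
    # On a distributed filesystem you would point `directory` at the mount.
    with tempfile.TemporaryDirectory() as d:
        sums = write_checksummed_files(d, count=10)
        bad = verify_checksummed_files(sums)
        print("corrupted files:", bad)
```

Run at scale against the cluster mount, an empty failure list on every pass is the result the poster describes; it says nothing, of course, about what happens if power is cut mid-write, which is the other side of the argument.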