
BLU Discuss list archive



[Discuss] Gluster startup, small-files performance




On 05/10/2014 04:00 PM, Rich Braun wrote:
> Greetings... After concluding that cephfs and ocfs2 are beyond-the-pale
> complicated to get working, and that xtreemfs lacks automatic healing, I'm
> back to glusterfs.
>
> None of the docs seem to address how to get the systemd (or sysvinit)
> scripts working properly, so I've had to cobble together my own.
> Basically, the boot sequence has to be this:
>
>    Mount local volumes (including the "bricks" which are xfs)
>    Start up networking
>    Initialize glusterd
>    Wait for glusterd to come up
>    Mount gluster volumes
>    Launch LXC and/or VirtualBox instances
>
> It's that 4th step which isn't obvious: if I don't insert some sort of wait,
> the mount(s) will fail because glusterd isn't fully up and running. Also, in
> step 5, if I don't wait for all the volumes to mount, one or more instances
> come up without their dependent volumes.
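
One possible way to express steps 4 and 5 without a hand-rolled wait is to let
systemd's fstab generator do the ordering: mark the gluster mount as a network
filesystem with _netdev and, on newer systemd releases, tie it to glusterd with
x-systemd.requires. A sketch, with "server1" and "gv0" as placeholder names:

    # /etc/fstab -- "server1" and "gv0" are placeholders
    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service  0 0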
>
> I'll include my cobbled-together systemd scripts below. Systemd is new enough
> that I'm not completely familiar with it; for example, it's easy to create a
> circular dependency, and so far the only way I've found to debug one is
> trial and error, rebooting each time.
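
systemd can show its dependency graph without the reboot-and-retry loop; for
example, the commands below (using gluster-vols.service from the units further
down) list what a unit pulls in and dump the ordering graph to a Graphviz file:

    systemctl list-dependencies gluster-vols.service
    systemctl list-dependencies --after gluster-vols.service
    systemd-analyze dot 'gluster*' > gluster-deps.dot
    journalctl -b | grep -i 'ordering cycle'   # the cycle systemd broke at boot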
>
> Googling for performance tuning on glusterFS makes me shake my head;
> ultimately the technology is half-baked if it takes 500ms or more just to
> transfer an inode from a source path into a glusterFS volume. I should be
> able to "cp -a" or rsync any directory on my system, whether it's got 100
> files or 1 million files, into a gluster volume without waiting for hours or
> days. It falls apart completely after about 10,000 files, which means I can
> only use the technology for a small portion of my deployments. That said, if
> any of y'all are *actually using* glusterfs for *anything*, I'd love to share
> stories.
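
Small-file copies over the FUSE mount pay one or more network round trips per
file for lookup and create, which is where most of that per-inode latency goes.
The volume options below sometimes help; the option names are taken from
"gluster volume set help", "gv0" is a placeholder volume name, and useful
values depend on the workload:

    gluster volume set gv0 performance.cache-size 256MB
    gluster volume set gv0 performance.io-thread-count 32
    gluster volume set gv0 performance.write-behind-window-size 1MB
    gluster volume info gv0    # confirm the options were applied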
>
> -rich

Rich,

We have been testing ceph, glusterfs, and moosefs for the last 18 months and
decided to use moosefs; we are about to put it into production this summer.

If you are interested in more details, you can contact me off-list and I will
explain what we have tested and where we are. There is no point in going into
the details of why moosefs is better than glusterfs on this list, because the
glusterfs die-hards start attacking immediately. We are located in Boston's
Longwood Medical Area (Harvard Medical School). If you are serious about
looking for alternatives to glusterfs, let me know and I will give you all
the details.

Thanks
Fevzi


>
> -----------------------glusterd.service-----------------------
> [Unit]
> Description=Gluster elastic volume management daemon
>
> [Service]
> ExecStart=/usr/sbin/glusterd -N
>
> [Install]
> WantedBy=multi-user.target
>
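
As written, glusterd.service carries no ordering at all, so systemd is free to
start it before the network or the brick mounts are up. One possible [Unit]
section, using the standard network-online.target and local-fs.target targets:

    [Unit]
    Description=Gluster elastic volume management daemon
    Wants=network-online.target
    After=network-online.target local-fs.target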
>
> -----------------------gluster-vols.service-------------------
> [Unit]
> Description=Mount glusterFS volumes
> Conflicts=shutdown.target
> After=glusterd.service
> ConditionPathExists=/etc/fstab
>
> [Service]
> Type=oneshot
> RemainAfterExit=yes
> ExecStart=/bin/bash -c "sleep 10;/usr/bin/mount -a -O _netdev"
>
> [Install]
> WantedBy=multi-user.target
>
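
The fixed "sleep 10" can be replaced with a poll, so the unit waits only as
long as glusterd actually needs and fails cleanly if it never answers. A
sketch of the [Service] section:

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # wait up to 30 seconds for glusterd to respond before mounting
    ExecStartPre=/bin/bash -c 'for i in $(seq 1 30); do gluster volume status >/dev/null 2>&1 && exit 0; sleep 1; done; exit 1'
    ExecStart=/usr/bin/mount -a -O _netdev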
>
> -----------------------lxc@.service----------------------------
> [Unit]
> Description=Linux Container
> After=network.target gluster-vols.service
>
> [Service]
> Type=forking
> ExecStart=/usr/bin/lxc-start -dn %I
> ExecStop=/usr/bin/lxc-stop -n %I
>
> [Install]
> WantedBy=multi-user.target
>
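
With the template unit in place, each container is an instance of it; "web"
below is just a placeholder container name:

    systemctl enable lxc@web.service    # start the container at boot
    systemctl start lxc@web.service
    systemctl status lxc@web.service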
>
> _______________________________________________
> Discuss mailing list
> Discuss at blu.org
> http://lists.blu.org/mailman/listinfo/discuss
>


