
BLU Discuss list archive



[Discuss] Backing up LVM partitions using snapshots



>> From: discuss-bounces+blu=nedharvey.com at blu.org
>> [mailto:discuss-bounces+blu=nedharvey.com at blu.org] On Behalf Of Mark Woodward
>>
>> example:
>> PREVTIME=13234915
>> CURRTIME=$(date +%s)   # epoch seconds
>> lvrename /dev/lvm/mysnap /dev/lvm/mysnap.old
>> lvcreate -s -c 64 -L1G -nmysnap /dev/lvm/myvol
>> bcebackup -C /dev/mapper/lvm-mysnap-old.cow -P $PREVTIME -t $CURRTIME /dev/lvm/mysnap
>> lvremove /dev/lvm/mysnap.old
>
> This is a limitation people should be aware of - and if they're able to
> find numbers that work for them and their dataset, then great.  ;-)  In
> the above example, you've reserved 1G for the snapshot.  If the amount
> of volatile data in the mounted filesystem exceeds 1G, then the snapshot
> disappears, so the next time you won't be able to do any more
> incrementals; you'll have to start fresh again with a new full block
> level image.  If you increase to a huge number, say 1T, then you'll be
> wasting 1T of disk space reserved "just in case" the snapshot grows that
> large.

You can monitor the space usage and expand as necessary. No one is
suggesting that LVM2 is perfect in every way, but come on now, people do
use it and it does work in an enterprise environment. This argument only
holds up as a strawman, because the effective solution here, as in most
environments, is to monitor resource utilization.
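
For example, something along these lines (a rough sketch, not a tested
script; the volume name follows the example above, and the 80% threshold
and 512M increment are arbitrary) keeps the snapshot from overflowing:

  SNAP=/dev/lvm/mysnap
  # data_percent reports COW usage on current lvm2; older releases
  # expose the same information as snap_percent.
  USED=$(lvs --noheadings -o data_percent "$SNAP" | tr -d ' ' | cut -d. -f1)
  if [ "${USED:-0}" -ge 80 ]; then
      # grow the snapshot before it fills up and gets invalidated
      lvextend -L +512M "$SNAP"
  fi

Recent LVM2 releases can also do this automatically via the
snapshot_autoextend_threshold and snapshot_autoextend_percent settings
in lvm.conf, if yours is new enough to have them.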


>
> If the snapshot disappears in the middle of an incremental send - well
> then, the receiving side will have received a corrupt incremental.
> Hopefully you're able to roll back the receiving side as if no
> incremental had ever started.  Lest the receiving side be entirely
> destroyed.

That may be true in some implementations, but not in mine. I'll explain later.
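
Independent of the backup tool, you can at least detect that a snapshot
has overflowed and been invalidated before trusting an incremental built
from it. Roughly (sketch only, reusing the volume name from the example;
the fifth lv_attr character is 'I' for an invalidated snapshot):

  ATTR=$(lvs --noheadings -o lv_attr /dev/lvm/mysnap | tr -d ' ')
  case "$ATTR" in
      ????I*)
          # snapshot overflowed; this incremental can't be trusted
          echo "snapshot invalidated, discarding incremental" >&2
          exit 1
          ;;
  esac
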
>
> Also, because your snapshots are taking place at the block level
> (filesystem unaware), all block level changes are counted.  If somebody
> creates a random file and then removes it, or if they delete, change, or
> restore a file, move or copy, those blocks are still marked as "changed"
> even though they're no longer used or referenced in the filesystem, or
> they've been restored back to their original values.  During your
> incremental, those blocks will be sent.  This means, depending on the
> end user's system usage patterns, sometimes this snapshot backup
> mechanism will greatly outperform something like "rsync," and sometimes
> something like "rsync" would greatly outperform the snapshot backup
> mechanism.  It's important to understand your individual usage
> characteristics, and make sure something like this is appropriate for
> your usage patterns.

Well, having worked on backup in a couple of jobs, there are two truths:
(1) There is no one backup strategy that is "best" for a wide range of
applications; there are only degrees of "good." (2) There is no generally
accepted backup strategy that cannot be made to look "ineffective" or
"inappropriate" by a carefully selected set of arcane operations.

If a user "moves" a file on a disk, it may actually be better from a file
system point of view. Rsync will backup a new file based on its path,
where as you'll only get the metadata change in the file system.
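
You can see this with a throwaway test (illustrative only; the paths and
file names here are made up):

  mkdir -p /tmp/src /tmp/dst
  dd if=/dev/urandom of=/tmp/src/big.dat bs=1M count=100 2>/dev/null
  rsync -a /tmp/src/ /tmp/dst/              # initial copy
  mv /tmp/src/big.dat /tmp/src/big2.dat     # "move" the file
  rsync -an --itemize-changes /tmp/src/ /tmp/dst/
  # big2.dat is listed as a full new transfer even though none of its
  # data changed; a block-level snapshot would only see the directory
  # and inode metadata updates.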

I will argue that rsync will NEVER be more effective unless you actively
wipe the blocks where a file once existed. Even so, backing up the file
system at the block level is, more often than not, a win.
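
If you want to see which way it goes for your own usage patterns (per the
point quoted above), you can compare the two deltas directly. A sketch,
with placeholder volume and path names:

  # Block-level delta: the snapshot's COW usage approximates how much a
  # block-level incremental would have to send.
  lvs -o lv_name,lv_size,data_percent /dev/lvm/mysnap
  # File-level delta: rsync's dry-run stats report the total file size
  # that would be transferred, without sending anything.
  rsync -an --stats /mnt/myvol/ backuphost:/backups/myvol/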