
BLU Discuss list archive


[Discuss] Backing up LVM partitions using snapshots



On Tue, Dec 13, 2011 at 10:15:51AM -0500, Richard Pieri wrote:
> * Disaster strikes!  The server explodes in a ball of flame, or gets
> eaten by Gojira or something.  You replace the hardware and prepare
> for recovery.
> * Restore full dump to new server.  You now have file #1 complete.
> * Attempt to restore incremental dump #1 and find that it is
> unusable. If you stop at this point then there are 2 files missing.
> * Restore incremental dump #2.  At this point you have all of file
> #1 and file #3 along with their directory metadata.  You only have
> file #2's directory metadata.  Specifically, an ls on that directory
> will show that file #2 is there, but the inode list isn't accurate
> as there is no file data on that volume.
> 
> The file system is in an inconsistent state at this point.  How does
> your backup system recover from this?

You run fsck.  But aside from that detail, I don't see how this is
different from any other incremental back-up scheme.  If one back-up
in the chain fails, and you don't notice it's unusable, you lose the
changes it captured.  For that matter, doing full back-ups isn't
necessarily a much better solution in that regard: if whatever scheme
you're using failed silently on Tuesday, it's not so unlikely it will
fail again on Thursday, and daily fulls are generally much less cost
effective.
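
To make that concrete, here's roughly what such a cycle looks like
with dump(8)/restore(8) run against an LVM snapshot, which I gather is
the scheme under discussion.  Device names, sizes, and paths are
invented for illustration, and this assumes an ext2/ext3 volume, since
that's where dump applies:

    # Weekly full: snapshot the LV so the dump source is quiescent,
    # take a level 0 dump of the snapshot, then drop the snapshot.
    lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home
    dump -0uf /backup/home.0.dump /dev/vg0/home-snap
    lvremove -f /dev/vg0/home-snap

    # Dailies: same dance, bumping the dump level each day so each
    # dump holds only the changes since the previous one.
    lvcreate --snapshot --size 1G --name home-snap /dev/vg0/home
    dump -1uf /backup/home.1.dump /dev/vg0/home-snap   # Tuesday
    lvremove -f /dev/vg0/home-snap
    #   ... dump -2uf /backup/home.2.dump ...          # Thursday

    # Recovery on the replacement server: rebuild the file system,
    # restore the full, then each readable incremental in order.
    mkfs.ext3 /dev/vg0/home
    mount /dev/vg0/home /mnt/home
    cd /mnt/home
    restore -rf /backup/home.0.dump
    restore -rf /backup/home.1.dump   # the one that turns out unusable
    restore -rf /backup/home.2.dump
    cd / && umount /mnt/home
    fsck.ext3 -f /dev/vg0/home        # clean up whatever is left dangling

If the Tuesday dump is the bad one, you land in roughly the state
Richard describes, and the forced fsck at the end is what's supposed
to clean up the directory entries left pointing at nothing.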

In fairness, I missed the beginning of the thread, so I'm not quite
sure what's being argued.  But I do know that tapes are still
surprisingly expensive (around a buck a GB), and doing full back-ups
every day is not cost effective unless the data set you're archiving
is very small.  Redoing even a couple of days of work when catastrophe
strikes is very likely cheaper than the cost of doing full back-ups of
all your data every single day.  And some types of changes (say, test
data accumulated via a repeatable, automated process) can be lost at
very little practical cost.

Data assurance is a risk-mitigation endeavor, and risk in business
translates directly to dollars.  You have to compare the expected cost
of a loss (the cost of the loss times the probability it happens) to
the cost of preventing it (ideally the prevention works with
probability 1; if not, do the same multiplication there).  Whichever
is greater loses.  What are the odds of these back-ups actually
failing, and what are the odds that they'll fail when you NEED them?
The trick, of course, is figuring out what those probabilities are and
what the REAL cost of losing your data is.  That requires more thought
than most people are inclined to give it, and typically you're only
guessing until catastrophe actually occurs. :)
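
To put invented numbers on it: suppose losing the data between
back-ups would cost $50,000 in redone work, you judge the odds of
needing a restore and having the incremental chain fail in the same
year at 1%, and daily fulls would run about $30/day in tape.  Then:

    expected annual loss, incremental scheme:  0.01 * $50,000 = $500
    annual cost of daily fulls:                365 * $30      = $10,950

By that arithmetic the daily fulls lose by a wide margin; with a
different probability or a bigger loss, the answer can flip.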

This argument strikes me as a lot like the mbox vs. maildir debate:
the big benefit of maildir is that it's less likely to corrupt your
mailboxes (that, and it avoids the locking problem)...  But in 20
years or so of using mbox, I've never experienced that problem. :)

-- 
Derek D. Martin    http://www.pizzashack.org/   GPG Key ID: 0xDFBEAD02
-=-=-=-=-
This message is posted from an invalid address.  Replying to it will result in
undeliverable mail due to spam prevention.  Sorry for the inconvenience.



