> From: discuss-bounces+blu=nedharvey.com at blu.org [mailto:discuss-
> bounces+blu=nedharvey.com at blu.org] On Behalf Of Jerry Feldman
>
> I have always preferred a file-oriented backup approach, but I have also
> been burned. I used to build tarballs, until one backup of my home directory
> placed a VM in the tarball on a 32-bit Linux and the drive where my home
> directory was crashed. I was able to restore everything up to the VM, which
> was larger than 3GB. Eventually, I paid to extract the data from the hard
> drive because I lost my email archives and checkbook.
>
> With today's larger HDs and/or inexpensive NAS systems, like the WD
> MyBook, you can use rsync's --link-dest so you can have the equivalent of

In my situation, I have millions of files, most of which are static, so simply walking the filesystem to find which files changed is the big time consumer. We were formerly backing this up via rsync, and it ran 10-12 hours per night. Then we switched to ZFS, and now the incremental typically takes 7 minutes. Mark's LVM snapshots should be able to achieve similar results on this particular dataset.
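For anyone curious about the two approaches being compared, here is a rough sketch of each. Paths, dataset names, and the remote host are just placeholders for illustration, not our actual setup.

The rsync --link-dest style Jerry describes: each night's run gets its own directory, and files unchanged since the previous run are hard-linked rather than copied, so every directory looks like a full backup but only changed files consume new space. The catch is that rsync still has to stat every file to decide what changed, which is where our 10-12 hours went.

    #!/bin/sh
    TODAY=$(date +%F)
    YESTERDAY=$(date -d yesterday +%F)
    # Unchanged files are hard-linked against yesterday's tree;
    # only new or modified files are actually transferred.
    rsync -a --delete \
        --link-dest=/backup/$YESTERDAY/ \
        /home/ /backup/$TODAY/

The ZFS equivalent never walks the filesystem at all: a snapshot is constant-time, and an incremental send ships only the blocks that changed between two snapshots, which is why the nightly run dropped to minutes.

    #!/bin/sh
    TODAY=$(date +%F)
    YESTERDAY=$(date -d yesterday +%F)
    # Snapshot today's state, then send only the delta since yesterday's
    # snapshot to the backup host (pool/dataset names are made up here).
    zfs snapshot tank/home@$TODAY
    zfs send -i tank/home@$YESTERDAY tank/home@$TODAY | \
        ssh backuphost zfs receive backup/home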