On Thu, 21 Feb 2013 11:46:45 -0800
"Rich Braun" <richb at pioneer.ci.net> wrote:

> I think there are a couple of advantages to keeping backup metadata
> in a database table: it's reachable from everywhere, which makes it
> easier to write integrity-checker scripts (especially against offline
> backups), and you can optimize the checksum process more easily (only
> generate checksums for new files that aren't yet in the metadata
> storage).

Yes, yes, yes. When the only tool you have is a database, every problem
looks like a table. :)

So tell me, how do you propose to keep this database in sync with the
actual file system? If the database does not match the file system, its
value is diminished. That is why a checksumming file system is vastly
superior to any form of disconnected checksum storage, and why keeping
the checksums near the actual files (perhaps in extended attributes?)
beats a disconnected database.

> the backups themselves. I can then create scripts which block me
> from such accidental rollovers.

Or, you know, a ZFS or Btrfs snapshot.

> For what it's worth: creating the database schema and insertion
> script was about 3 hours of work, which I've already done. I'm

Took me zero seconds to make my backup system do it all automatically,
plus about a minute to put together the little script that does a scrub
from cron every week. The scrub automatically heals corrupted data,
because the data and metadata are mirrored.

--
Rich P.
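
A minimal sketch of the extended-attribute idea mentioned above, assuming
Python 3.8+ on a Linux file system that supports user xattrs; the attribute
name user.sha256 and the stamp/verify helpers are illustrative choices, not
the API of any particular backup tool:

    import hashlib
    import os
    import sys

    XATTR_NAME = "user.sha256"   # hypothetical attribute name for the stored checksum

    def sha256_of(path, bufsize=1 << 20):
        # Hash the file contents in chunks so large files don't fill memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def stamp(path):
        # Store the checksum right next to the data, in an extended attribute.
        os.setxattr(path, XATTR_NAME, sha256_of(path).encode())

    def verify(path):
        # Return True/False if a checksum is stored, or None if never stamped.
        try:
            stored = os.getxattr(path, XATTR_NAME).decode()
        except OSError:
            return None
        return stored == sha256_of(path)

    if __name__ == "__main__":
        mode, files = sys.argv[1], sys.argv[2:]
        for p in files:
            if mode == "stamp":
                stamp(p)
            else:
                print(p, verify(p))

A weekly cron job could then walk the backup tree and call verify() on each
file, flagging anything whose contents no longer match the checksum stored
alongside it; the checksum travels with the file, so nothing external has to
be kept in sync.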