> On 12/12/2011 1:33 PM, Mark Woodward wrote:
>> Why would they be missing? Sure, any storage system can be corrupted
>> and you would lose data. Loss of data is a risk in every system. Even
>> if you did a full backup every night, there is no guarantee you won't
>> lose data. The system I describe reduces the amount of data that must
>> be saved. This actually increases reliability. I'm not saying anything
>> new at all. A lot of vendors are doing this.
>
> I create a 12K file on your 8K block volume. This marks a number of
> blocks in the block map as changed:
>
> * Two blocks for the file data.
> * One or more blocks for the inode list stored in the directory data.
> * Possibly the file system superblocks.
>
> You do your first delta dump. I then create another file in the same
> directory. This marks the directory block(s) as changed and will be
> dumped on the next run.
>
> You then lose the intermediary dump containing my first file's data.

Why would that happen? That's what I don't understand. If you have a
mission-critical failure of this sort, you fall back to a previous
backup as your baseline and do a full backup at that point. You don't
ignore the failure and proceed with business as usual.

Under normal circumstances, everything is fine. If there is a failure,
then you correct the failure. Since this is a "backed up" system, your
main volumes have not been compromised. Like all data integrity systems,
there needs to be a process by which data is protected.

As I said a few emails back, sure, anyone can create a scenario in which
any system will fail. Losing data is a failure, and there should be
workable procedures for correcting it, either by data forensics or by
re-acquisition.
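For concreteness, here is a rough Python sketch of the kind of
changed-block-map delta dump being debated above. The 8K block size,
the Volume class, and the delta_dump/restore helpers are illustrative
assumptions only, not any particular vendor's or poster's implementation.

# Illustrative sketch only: a toy changed-block-map delta dump.
# BLOCK_SIZE, Volume, delta_dump, and restore are made-up names for
# illustration; they are not taken from the discussion above.

BLOCK_SIZE = 8 * 1024  # 8K blocks, as in the 12K-file example

class Volume:
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE) for _ in range(num_blocks)]
        self.changed = set()  # the "block map": blocks dirtied since the last dump

    def write_block(self, idx, data):
        # Writing a 12K file dirties two data blocks plus the directory
        # block(s) holding its inode entry, and possibly superblocks.
        self.blocks[idx] = data[:BLOCK_SIZE].ljust(BLOCK_SIZE, b"\0")
        self.changed.add(idx)

def delta_dump(volume):
    # Save only the blocks marked changed, then clear the map.
    # Each delta therefore depends on every earlier dump in the chain.
    dump = {idx: volume.blocks[idx] for idx in sorted(volume.changed)}
    volume.changed.clear()
    return dump

def restore(full_backup, deltas):
    # Replay deltas on top of the full backup. A missing intermediary
    # delta breaks the chain; the recovery described above is to fall
    # back to the last good baseline and take a fresh full backup.
    blocks = list(full_backup)
    for delta in deltas:
        if delta is None:  # lost or corrupted intermediary dump
            raise RuntimeError("broken delta chain: fall back and take a new full backup")
        for idx, data in delta.items():
            blocks[idx] = data
    return blocks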