In your example, a duplicate-reducing backup would ignore most of the changes.

Edward Ned Harvey <blu at nedharvey.com> wrote:
>> From: markw at mohawksoft.com [mailto:markw at mohawksoft.com]
>> Sent: Sunday, December 11, 2011 2:48 PM
>>
>> I will argue that an rsync will NEVER be more effective unless you
>> actively wipe the blocks where a file once existed.
>
> for (( i=0 ; i<200 ; i++ )) ; do
>     mkdir temp
>     cp datafile temp
>     run_test $i >> testresults.txt
>     rm -rf temp
> done
>
> In this case, rsync is what you want, because it ignores files that don't
> exist. But a block-level backup will back up all the blocks that were ever
> contained in any of the (now removed) copies of the datafile.
>
> I don't know what users you support, but I support engineers who run this
> type of test all the time. They create test work dirs, perform volatile
> work in there, store the results of the test, and remove their scratch
> dir.
>
> The block-level backup you're talking about is great under the assumption
> that you basically just add data to a filesystem. It's terrible when you
> add and remove data from the filesystem. I stand by my claim: it's
> important to know whether it's suitable for your purposes, whoever you
> are, the consumer who might consider using this.
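[A minimal sketch, not from the thread, of one common way the rsync approach Harvey describes is put into practice: snapshot-style backups with --link-dest. The paths /data and /backups and the snapshot naming are assumptions for illustration.]

#!/bin/bash
SRC=/data                            # hypothetical source tree
DEST=/backups/$(date +%Y%m%d%H%M)    # one directory per snapshot (assumed layout)
PREV=$(ls -d /backups/*/ 2>/dev/null | tail -n 1)

# rsync only sees what exists at backup time, so scratch dirs already
# removed by the test loop (the temp/ dirs above) are never transferred.
# --link-dest hard-links files that are unchanged since the previous
# snapshot instead of storing another copy of them.
rsync -a ${PREV:+--link-dest="$PREV"} "$SRC/" "$DEST/"

[By contrast, as the thread notes, a block-level image of the same filesystem taken after the loop would still carry the blocks once occupied by every deleted copy of datafile, unless those blocks were actively wiped.]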