Hi, Jay. That's pretty much what I assumed the process would be. The description doesn't address my two concerns, though:

1. By mounting S3 as a filesystem and then running rsync on top of it, rsync sees the S3 filesystem as "local". As part of checking whether a file needs to be updated, it therefore copies the entire file from S3 to generate its hash for comparison. Rsync to a remote system invokes rsync on the remote end to compute the hash, and avoids the bandwidth usage that the "local" rsync incurs.

2. The rsync snapshots process uses hard links to make each daily backup directory look like a complete filesystem -- daily.0, daily.1, daily.2, etc. are all complete filesystems from different days, but files that are identical across them are hard-linked to a single instance, so it doesn't waste storage space on multiple copies of the same file. Is it possible to do the same with an S3-based solution?

On Fri, Jan 2, 2009 at 12:12 PM, James Kramer <kramerjm-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
> John,
>
> I am using s3fs and rsync to sync files to Amazon.
>
> See the link below:
>
> http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/
>
> It works pretty well; only every now and then it tries to rewrite files
> due to archive dates.
>
> Jay

--
John Abreau / Executive Director, Boston Linux & Unix
AIM abreauj / JABBER jabr-iMZfmuK6BGBxLiRVyXs8+g at public.gmane.org / YAHOO abreauj / SKYPE zusa_it_mgr
Email jabr-mNDKBlG2WHs at public.gmane.org / WWW http://www.abreau.net / PGP-Key-ID 0xD5C7B5D9
PGP-Key-Fingerprint 72 FB 39 4F 3C 3B D6 5B E0 C8 5A 6E F1 2C BE 99