
BLU Discuss list archive



Amazon S3 and rsync snapshots?



Below is a copy I made to my documentation folder. ... Yea, belts and
suspenders. ... If the information is 'so good', I tend to make a copy
of the ones I really like. So much of the internet is temporal and goes
away into the ethers before too long.

Bill Horne wrote:
> James Kramer wrote:
>   
>> John,
>>
>> I am using s3fs and rsync to sync files to amazon.
>>
>> See the link below:
>>
>> http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/
>>
>>
>> It works pretty well, though every now and then it tries to rewrite
>> files due to archive dates.
>>
>>   
>>     
> Jay,
>
> That link gave me a "500 Internal Server Error"
>
> Bill Horne
>
>   

http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/ 


*How I automated my backups to Amazon S3 using rsync and s3fs.*
October 27th, 2008 | Tags: amazon, aws, backups, s3

The following is how I automated my backups to Amazon S3 
<http://aws.amazon.com/s3/> in about 5 minutes.

A lot has changed since my original post on automating my backups to S3
using s3sync
<http://blog.eberly.org/2006/10/09/how-automate-your-backup-to-amazon-s3-using-s3sync/>.
There are more mature and easier-to-use solutions now. I am switching
because s3fs gives you many more options for using S3, it is easier to
set up, and it is faster.

I now use a combination of s3fs <http://s3fs.googlecode.com/> to mount an
S3 bucket to a local directory and then use rsync to keep my files up to
date. The following directions are geared towards Ubuntu Linux, but could
be adapted for any Linux distribution and Mac OS X
<http://www.rsaccon.com/2007/10/mount-amazon-s3-on-your-mac.html>.


*STEP 1: Install s3fs*

The first step is to install s3fs dependencies. (Assuming Ubuntu)

sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev

Next, install the most recent version of s3fs
<http://code.google.com/p/s3fs/>. As of this writing the most recent is
r177, but a quick check of the s3fs downloads page
<http://code.google.com/p/s3fs/downloads/list> will show the latest
release.

wget http://s3fs.googlecode.com/files/s3fs-r177-source.tar.gz
tar -xzf s3fs*
cd s3fs
make
sudo make install
sudo mkdir /mnt/s3
sudo chown yourusername:yourusername /mnt/s3
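
Optionally, confirm that the binary landed where the mount script in
Step 2 expects it (the script below assumes /usr/bin/s3fs):

# sanity check: print the full path of the installed s3fs binary
which s3fs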

*STEP 2: Create a script to mount your Amazon S3 bucket using s3fs and
sync your files.*

The following assumes you already have a bucket created on Amazon S3. If
this is not the case, you can use a tool like S3Fox
<https://addons.mozilla.org/en-US/firefox/addon/3247> to create one.
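
If you prefer the command line to a Firefox extension, a tool such as
s3cmd can also create the bucket (s3cmd is not needed for the rest of
this tutorial, and 'yourbucket' is just a placeholder):

# one-time setup: prompts for your AWS access key and secret key
s3cmd --configure
# make the bucket that the script below will mount
s3cmd mb s3://yourbucket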

Using the text editor of your choice, make a shell script that mounts
your bucket, performs the rsync, and then unmounts. It is not necessary
to unmount your S3 directory after each rsync, but I prefer to be safe:
one mistake like an 'rm' on your root directory could wipe all of the
files on your machine and on your S3 mount. You should probably start
with a test directory to be safe.

Make the file s3fs.sh

#!/bin/bash
/usr/bin/s3fs yourbucket -o accessKeyId=yourS3key -o secretAccessKey=yourS3secretkey /mnt/s3
/usr/bin/rsync -avz --delete /home/username/dir/you/want/to/backup /mnt/s3
/bin/umount /mnt/s3

Note the --delete option: it will delete from the S3 mount any files
that have been removed on the source.

Change permissions to make the script executable:

chmod 700 s3fs.sh
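
If you want the script to be a little more defensive, one possible
variant (a sketch along the same lines, not part of the original post)
skips the rsync unless the mount actually succeeded, so a failed mount
cannot cause rsync to write into an empty local /mnt/s3 directory:

#!/bin/bash
# same bucket, keys, and mount point as the script above
/usr/bin/s3fs yourbucket -o accessKeyId=yourS3key -o secretAccessKey=yourS3secretkey /mnt/s3
# only sync if /mnt/s3 is really a mounted filesystem
if mountpoint -q /mnt/s3; then
    /usr/bin/rsync -avz --delete /home/username/dir/you/want/to/backup /mnt/s3
    /bin/umount /mnt/s3
else
    echo "s3fs mount failed; skipping rsync" >&2
    exit 1
fi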

Before you run the entire script, you might want to run each line
separately to make sure everything is working properly. The paths to
rsync and umount might be different on your system (use 'which rsync' to
check). Just for fun, I did a 'df -h', which showed I now have 256
terabytes available on the S3 mount!

Next, run the script and let it do its work. This could take a long time 
depending on how much data you are uploading initially. Your internet 
upload speed will be the bottleneck.

sudo ./s3fs.sh

That's it! You are backing up to Amazon S3. You probably want to
automate this using cron once you are sure everything is running OK. For
the simplicity of this tutorial, let's assume you are setting up the
cron job as root so we don't need to worry about permissions for
mounting and unmounting the directory.

*STEP 3: Automate it with cron*

sudo su
crontab -e
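
For example, an entry like the following runs the backup nightly at
3 a.m. (the /root/s3fs.sh path is only an assumption; point it at
wherever you saved the script):

# m  h  dom mon dow  command
0  3  *   *   *   /root/s3fs.sh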





