Richard Pieri <richard.pieri-Re5JQEeQqe8AvxtiuMwx3w at public.gmane.org> wrote:
> SpiderOak does have one specific, tangible benefit ...
> The encryption keys are stored entirely on the clients without any escrow.

AFAIK the keys aren't sent to CrashPlan either. I can't prove it, because I use their front-end software to set up the backups and haven't looked at the guts of it, but their legal agreement says this:

"You may elect to secure your key with a private password or use your own encryption key. If you elect to use a private key password or your own key, they will be required before decrypting backup data. IF YOU ELECT NOT TO HAVE CODE 42 STORE YOUR PRIVATE KEY AND YOU LOSE YOUR KEY OR PASSWORD, YOUR ENCRYPTED DATA WILL NOT BE RECOVERABLE."

I've been really happy with the service, and it's $50/year for as much data as I can cram through the Comcast pipe (about 120G so far; eventually I'll push about 300G -- beyond that is impractical given the number of days a restore would take). CrashPlan snapshots and saves your data every 15 minutes (by default), so you don't have to worry about losing today's work.

Also, unlike other vendors, the same software app provides peer-to-peer capability, so I have also set it up to back up to a second computer at my house. That one provides much faster restores than the Comcast pipe, and of course I can put a lot more data on it.

My recommendation for any of y'all thinking about setting up backups is to keep three copies of your data, synced at least daily:

1) A second computer or tape drive on site
2) A computer at a second location that you control
3) An offsite service such as SpiderOak or CrashPlan

Five years ago the gold standard for this kind of backup was Iron Mountain, with its LiveVault service. I heard yesterday that they're going bust. Not surprising, given how they never bothered updating their software or pricing to deal with larger data volumes. (Three years ago I gave up on them after they couldn't figure out why their Linux driver kept timing out on data sets larger than about 5G, and completely broke down past about 20G...)

The newer companies are much better today, though I wonder how they'll make money.

-rich
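[For copies 1 and 2 of the three-copy scheme above, a plain rsync job is one way to do the daily sync. This is only a minimal sketch: the function names, paths, and remote host are placeholder examples, not anything from CrashPlan's or SpiderOak's software, and it assumes rsync and ssh are installed.]

```shell
# Copy 1: mirror a directory to a second disk or computer on site.
# -a preserves permissions/times; --delete keeps the mirror exact.
sync_onsite() {
    rsync -a --delete "$1/" "$2/"
}

# Copy 2: mirror to a machine at a second location you control, over ssh.
sync_offsite() {
    rsync -a --delete -e ssh "$1/" "$2"
}

# Copy 3 (an offsite service like SpiderOak or CrashPlan) is handled by
# the vendor's own client, so there is nothing to script here.

# Hypothetical usage, run daily:
#   sync_onsite  "$HOME/data" /mnt/backup2/data
#   sync_offsite "$HOME/data" user@othersite.example:/backup/data
```

[To get the at-least-daily sync, a crontab entry such as `0 2 * * * /usr/local/bin/backup.sh` would run a script calling these functions every night at 2am.]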