[Discuss] Redundant array of inexpensive servers: clustering?

markw at mohawksoft.com
Mon Mar 31 12:15:50 EDT 2014


> markw at mohawksoft.com wrote:
>> I currently work at a fairly high-end deduplicated backup/recovery
>> system company. In a deduplicated system, a "new" backup should
>> never be able to trash an old backup. Period. Only "new" data is
>> added to a deduplicated pool, and old references are untouched. Old
>> data is not over-written. You can see this behavior in almost any
>> deduplication strategy, including Windows NTFS and ZFS.
>
> You're missing the point.
>
> Say you have disk A and disk B. Every block written to A is
> replicated to B.
>
> Data in some blocks on A is damaged.
>
> Damaged data blocks on A are replicated to B.
>
> B is now a 1:1 replica of the trashed data on A.

OK, that's a pretty stupid thing to do. Who would do that? It's the
worst of both worlds: not only are you backing up EVERY block, you
aren't even preserving old data. Hell, you aren't even excluding
uninitialized disk blocks, so even if you've only used 500GB of a 2TB
drive, you still have to copy the full 2TB each time.
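
A toy sketch of that failure mode, with invented disk contents (this
isn't any particular product's replication code, just the logic of a
naive 1:1 mirror):

    # Naive block mirror: replicates damaged blocks and destroys the
    # only other copy of the good data.  (Disk contents are made up.)
    DISK_A = [b"good-0", b"good-1", b"good-2"]
    DISK_B = list(DISK_A)       # B starts as a faithful mirror of A

    DISK_A[1] = b"GARBAGE"      # a block on A gets corrupted

    # The mirror has no notion of old vs. new data; it copies every
    # block, so the corruption is dutifully propagated to B.
    for i, block in enumerate(DISK_A):
        DISK_B[i] = block

    assert DISK_B[1] == b"GARBAGE"   # the last good copy is gone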

I agree, just dumb.
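
For contrast, here's a minimal sketch of the append-only,
content-addressed pool I described above. The class name, SHA-256
choice, and API are invented for illustration, not our actual
implementation:

    import hashlib

    class DedupPool:
        """Append-only pool: blocks are keyed by content hash and
        never overwritten, so a new backup can only add data."""
        def __init__(self):
            self.blocks = {}    # hash -> immutable block data

        def put(self, data):
            key = hashlib.sha256(data).hexdigest()
            # setdefault leaves an existing block untouched; old
            # backups keep referencing the data they were written with
            self.blocks.setdefault(key, data)
            return key

        def get(self, key):
            return self.blocks[key]

    # A backup is just a list of block references (hashes).
    pool = DedupPool()
    backup_1 = [pool.put(b"block-A"), pool.put(b"block-B")]
    backup_2 = [pool.put(b"block-A"), pool.put(b"JUNK")]  # damaged run

    # backup_1 still restores cleanly; backup_2's junk was merely added.
    assert [pool.get(k) for k in backup_1] == [b"block-A", b"block-B"]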

>
> --
> Rich P.




