
BLU Discuss list archive



ZFS and block deduplication



On 04/25/2011 09:32 AM, Daniel Feenberg wrote:
>
>
> On Mon, 25 Apr 2011, Mark Woodward wrote:
>
>> This is one of those things that make my brain hurt. If I am
>> representing more data with a fixed-size number, e.g. a 4K block vs. a
>> 16K block, that does, in fact, increase the probability of collision 4X,
>
> Only for very small blocks. Once the block is larger than the hash, 
> the probability of a collision is independent of the block size.

I think that statement sums up the conceptual gulf between the two 
sides. It's kind of like the old-school "God does not play dice" 
physicists versus the quantum-mechanical physicists.
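
For what it's worth, here is the back-of-the-envelope math as a short
Python sketch. It assumes a 256-bit dedup checksum (ZFS dedup uses
SHA-256), and the block counts are hypothetical, picked only to show
the scale. The point is that block size never appears in the formula;
once a block is wider than the hash, only the number of blocks and the
hash width matter.

    #!/usr/bin/env python3
    # Birthday-bound estimate for dedup hash collisions.
    # Assumes a 256-bit checksum (ZFS dedup uses SHA-256); the
    # block counts below are hypothetical, chosen for scale.
    from math import log2

    HASH_BITS = 256

    def collision_probability(n_blocks):
        # Birthday approximation: p ~= n^2 / 2^(b+1), valid while
        # n << 2^(b/2).  Note that block size appears nowhere --
        # only the number of blocks and the hash width matter.
        return n_blocks * n_blocks / 2.0 ** (HASH_BITS + 1)

    # 2.5e17 unique 4K blocks is roughly a zettabyte of data.
    for n in (10**12, 25 * 10**16):
        p = collision_probability(n)
        print("%.1e blocks: P(collision) ~= 2^%.0f" % (n, log2(p)))

Running it: even a zettabyte of unique 4K blocks gives a collision
probability around 2^-141, which is many orders of magnitude below
ordinary undetected hardware error rates. That is the quantitative
side of Daniel's point.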




