ZFS and block deduplication

Tom Metro tmetro-blu-5a1Jt6qxUNc at public.gmane.org
Mon Apr 25 16:14:04 EDT 2011


Edward Ned Harvey wrote:
> (2) We're assuming the data in question is not being maliciously formed for
> the purposes of causing a hash collision.  I think this is a safe
> assumption, because in the event of a collision, you would have two
> different pieces of data that are assumed to be identical and therefore one
> of them is thrown away...  And personally I can accept the consequence of
> discarding data if someone's intentionally trying to break my filesystem
> maliciously.

I think the attack vector would be along the lines of an attacker
identifying one or more blocks of a privileged executable, then
crafting replacement blocks that contain malicious code yet hash to
the same value. They write those blocks to disk first; when the
executable's blocks are later written, dedup discards the legitimate
data in favor of the attacker's, and after the executable restarts,
they have control.

Far-fetched, but if they could pull it off, the consequences would be
more than just some corrupted or discarded data.
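The mechanics of that scenario can be sketched with a toy dedup table. This is an illustrative model only, not ZFS's actual DDT code: the checksum function is injectable so a deliberately weak stand-in hash can simulate a collision (ZFS uses SHA-256 for dedup, where no collision is known), and the `verify` flag models the dedup=verify byte-comparison.

```python
import hashlib

class DedupTable:
    """Toy model of a block-dedup table keyed by checksum.
    An illustrative sketch only -- not ZFS's on-disk DDT."""

    def __init__(self, checksum=None, verify=False):
        # checksum is injectable so a deliberately weak function can
        # simulate a collision for demonstration purposes.
        self.checksum = checksum or (lambda d: hashlib.sha256(d).hexdigest())
        self.verify = verify
        self.blocks = {}  # checksum -> first block stored under it

    def write(self, data):
        h = self.checksum(data)
        if h in self.blocks:
            if self.verify and self.blocks[h] != data:
                # verify on: a checksum match triggers a byte
                # comparison, so the collision is caught instead
                # of being silently deduplicated.
                raise RuntimeError("collision caught by verify")
            # Dedup hit with verify off: the incoming data is
            # discarded and the write references the stored block,
            # so whichever colliding block was written first "wins".
            return h
        self.blocks[h] = data
        return h

    def read(self, h):
        return self.blocks[h]

# Simulated collision: a weak checksum makes two distinct blocks collide.
weak = lambda d: len(d)  # hypothetical stand-in for a broken hash
t = DedupTable(checksum=weak, verify=False)
t.write(b"MALICIOUS!")            # attacker seeds a colliding block first
ref = t.write(b"legit code")      # same length => same weak "checksum"
print(t.read(ref))                # the later, legitimate write now
                                  # resolves to the attacker's block
```

With `verify=True`, the second write raises instead of deduplicating, which is why the verify option closes this particular hole at the cost of an extra read and compare on every dedup hit.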

(Doesn't ZFS also employ overall file hashing to ensure the integrity
of a file? Or is that the verification option you referred to? If so,
that would likely thwart this attack vector.)

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
