
BLU Discuss list archive



best practices using LVM and e2fsck



Hello,
    As stated:

    Do other admins schedule downtime (unmounting the LVM logical volumes)
to run fsck?  I was looking for a reasonable time period.  The filesystems
here are currently set to be checked at reboot after 180 days (about six
months).
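
(For reference, that schedule lives in the ext superblock and can be
inspected or changed with tune2fs.  The device path below is only an
example; substitute your own logical volume.)

    # show the current mount-count and time-based check settings
    dumpe2fs -h /dev/vg0/lv0 | grep -i 'mount count\|check'

    # example: switch to a 90-day check interval
    tune2fs -i 90d /dev/vg0/lv0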

    Thanks,
Stephen




> Allow me to rephrase:
> In ext3/4, you have no choice.  You need to fsck occasionally, and you
> must do it while the filesystem is dismounted.  Often, you don't even
> have a choice about *which* dismount will cause it to fsck.
>

----- Original Message ----- 
From: "Edward Ned Harvey" <blu-Z8efaSeK1ezqlBn2x/YWAg at public.gmane.org>
To: <discuss-mNDKBlG2WHs at public.gmane.org>
Cc: "'Stephen Goldman'" <sgoldman-3s7WtUTddSA at public.gmane.org>
Sent: Wednesday, June 30, 2010 12:35 PM
Subject: RE: best practices using LVM and e2fsck


>> From: Derek Martin [mailto:invalid-yPs96gJSFQo51KKgMmcfiw at public.gmane.org]
>>
>> On Mon, Jun 28, 2010 at 09:35:27PM -0400, Edward Ned Harvey wrote:
>> > I do all of this on solaris/opensolaris with ZFS.  The ability to
>> > scrub a filesystem while it's mounted and in use seems like such a
>> > brainless obvious feature.
>>
>> Not to me, it doesn't...  From what I've read, it generates a lot of
>> I/O (as one would expect), and if so I doubt I'd ever want to be using
>> a filesystem where a scrub is going on.  I'd put this functionality in
>
> Allow me to rephrase:
> In ext3/4, you have no choice.  You need to fsck occasionally, and you
> must do it while the filesystem is dismounted.  Often, you don't even
> have a choice about *which* dismount will cause it to fsck.
>
> In ZFS, you have a choice.  You can scrub while the system is not in use
> if you want, or you can scrub while it's in use, and acknowledge that
> performance will be lower than usual during that time.  If you wanted,
> you could simply never scrub.  You have that option.
>
>
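
(For anyone following along: with a hypothetical pool named "tank", an
online scrub is started and watched with the standard zpool commands.)

    zpool scrub tank     # runs while the pool stays mounted and in use
    zpool status tank    # reports scrub progress and any errors found
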
>> It also is prone to fragmentation, which in the long run may degrade
>> performance, and it has no defrag utility.
>
> If you keep snapshots around, this is true.  If you don't do snapshots,
> it's not true.
>
> Fragmentation is inherently a characteristic that goes hand-in-hand with
> copy-on-write, which is the key technology that enables snapshots.  Most
> people who go to ZFS are doing it because we're really really happy to
> have snapshots, and we're perfectly happy to pay for it in terms of
> fragmentation.  I am sure you can measure and detect the fragmentation
> if you want.  But neither I, nor any of my users have ever noticed it.
>
> BTW, snapshots & copy-on-write are the key technologies that enable
> instant block-level diff incremental backups.  ;-)  Thanks to this, my
> nightly backups which formerly required 10 hrs per night for rsync to
> scan the tree for changed items ... Now require an average 7 mins per
> night, because no scan is required to search for things that changed.
> Given this, versus fragmentation which we haven't even noticed, *plus*
> the option of scrubbing online if you want to, and the ability to
> restore things from snapshot instead of needing the backup media ...
> Means I am very *thoroughly* happy we made this move to ZFS.
>
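
(A sketch of the mechanism, with hypothetical pool, dataset, and host
names.  "zfs send -i" emits only the blocks that changed between the two
snapshots, so no tree walk is needed:)

    zfs snapshot tank/home@tonight
    zfs send -i tank/home@lastnight tank/home@tonight | \
        ssh backuphost zfs receive backup/home
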
> One more thing.  Unrelated except that it goes along with the "compare
> ZFS to ExtN" subject:
> Because ZFS handles the posix (filesystem) layer, integrated with the
> block-level (raid) layer, it's able to gain performance that would
> simply be impossible with traditional raid.  With traditional raid, even
> if you have hardware NVRAM BBU HBA acceleration etc, if the system
> issues a lot of random small writes, the disks will have to seek each of
> those sectors.  But since ZFS has intimate knowledge of all of these
> layers, it's able to take a bunch of small random write requests,
> aggregate them, and choose to write them on sequential physical blocks.
> In my benchmarks, this meant a 30x improvement in performance.
>
>
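
(This behavior is observable from the outside.  With a hypothetical pool
named "tank", zpool iostat shows the operations actually issued to each
disk; under a small-random-write load the per-disk op counts there can be
far lower than what the application submitted:)

    zpool iostat -v tank 5    # per-vdev ops and bandwidth every 5 seconds
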
>> I haven't used ZFS.  It seems pretty nice, but it's apparently not
>> without its own set of limitations.  Everything in life comes with
>> tradeoffs.
>
> Always true.   ;-)
>
> 