best practices using LVM and e2fsck
Tom Metro
tmetro-blu-5a1Jt6qxUNc at public.gmane.org
Wed Jun 30 18:14:56 EDT 2010
Stephen Goldman wrote:
> I took the defaults, and "e2fsck" will kick off at reboot if the
> mounts have been up beyond 180 days.
>
> I am afraid that if I "need" to reboot - which I don't foresee - and
> e2fsck kicks off, it will take a long time to bring up the server.
I don't recall the ext version being mentioned in this thread. Your
concern seems to imply ext2. Would switching to ext3 or ext4 be an
option? That should eliminate the possibility of a long fsck run.
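If the filesystem really is ext2, it can be converted to ext3 in place
by adding a journal with tune2fs. A sketch (the device name is a
placeholder; substitute your own partition):

```shell
# Add a journal to an existing ext2 filesystem, converting it to ext3.
# The journal is created as a hidden inode; existing data is untouched.
tune2fs -j /dev/sdXN

# From then on, mount it as ext3 (and update /etc/fstab to match):
mount -t ext3 /dev/sdXN /mnt
```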
The man page says:
For ext3 and ext4 filesystems that use a journal, if the system has
been shut down uncleanly without any errors, normally, after replaying
the committed transactions in the journal, the file system should be
marked as clean. Hence, for filesystems that use journalling, e2fsck
will normally replay the journal and exit, unless its superblock
indicates that further checking is required.
Perhaps that means the scheduled checks (which are run per settings in
the superblock) are still lengthy.
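Those superblock settings can be inspected and changed with tune2fs. A
sketch, again with a placeholder device name:

```shell
# Show the mount-count and time-based check thresholds currently
# stored in the superblock:
tune2fs -l /dev/sdXN | grep -Ei 'mount count|check'

# Disable both forced-check triggers (-c 0 = no mount-count limit,
# -i 0 = no time interval), if you'd rather schedule checks yourself:
tune2fs -c 0 -i 0 /dev/sdXN
```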
Ben Eisenbraun wrote:
> Are you worried that flaky hardware might be silently corrupting your
> filesystem?
Could also be flaky software.
> Running fsck periodically isn't going to benefit you much as far as I
> can see.
A time-based trigger serves more as an early warning mechanism, much
as you might run smartd to monitor your drives.
Given this, shouldn't it be possible to run fsck in read-only mode on a
mounted file system, and then schedule downtime for an unmounted
repair only if needed?
Looks like you can run:
# e2fsck -n /dev/sda1
e2fsck 1.41.9 (22-Aug-2009)
Warning! /dev/sda1 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem
check.
but the man page notes, "...the results printed by e2fsck are not valid
if the filesystem is mounted," which probably means that e2fsck isn't
equipped to ignore the normal, expected inconsistencies found on an
actively used filesystem.
My run above continued with:
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found. Fix? no
Inode 1286265 was part of the orphaned inode list. IGNORED.
Inode 1286280 was part of the orphaned inode list. IGNORED.
Inode 1286642 was part of the orphaned inode list. IGNORED.
Inode 1286693 was part of the orphaned inode list. IGNORED.
Inode 1286725 was part of the orphaned inode list. IGNORED.
Inode 1286780 was part of the orphaned inode list. IGNORED.
Inode 1286840 was part of the orphaned inode list. IGNORED.
Inode 1286899 was part of the orphaned inode list. IGNORED.
Inode 1286901 was part of the orphaned inode list. IGNORED.
Deleted inode 1288961 has zero dtime. Fix? no
Inode 1769907 was part of the orphaned inode list. IGNORED.
Inode 6685007 was part of the orphaned inode list. IGNORED.
[...]
Pass 2: Checking directory structure
Entry 'Local State' in /home/tmetro/.config/google-chrome (13902072) has
deleted/unused inode 13902407. Clear? no
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Unattached zero-length inode 13806377. Clear? no
Unattached inode 13806377
Connect to /lost+found? no
Unattached zero-length inode 13902410. Clear? no
Unattached inode 13902410
Connect to /lost+found? no
Pass 5: Checking group summary information
Block bitmap differences: -5146629 -(5161650--5161651) -5163636
-5164025 -5164035 -5164043 -(5211454--5211456) -(521956
[...]
Free blocks count wrong for group #157 (7964, counted=7962).
Fix? no
Free blocks count wrong for group #159 (3682, counted=3677).
Fix? no
[...]
/dev/sda1: ********** WARNING: Filesystem still has errors **********
(It automatically answered "no" to the prompts.)
That went on for a few screens. That's on a drive that came up clean on
its last reboot and hasn't shown any signs of problems.
So it appears e2fsck on a mounted drive isn't useful.
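Since the subject of this thread is LVM, one workaround is worth
noting: if the filesystem lives on a logical volume, you can fsck a
read-only snapshot of it instead of the live device. A sketch, assuming
a volume group "vg0" and a logical volume "home" (both names are
placeholders, as is the snapshot size):

```shell
# Create a small copy-on-write snapshot of the live volume. The
# snapshot is a crash-consistent, frozen image of the filesystem.
lvcreate -s -L 1G -n home-snap /dev/vg0/home

# Run a full, non-destructive check on the snapshot (-f forces the
# check even if the fs looks clean, -n answers "no" to all prompts).
e2fsck -fn /dev/vg0/home-snap

# Discard the snapshot when done:
lvremove -f /dev/vg0/home-snap
```

The live filesystem keeps running the whole time, and because the
snapshot doesn't change underneath e2fsck, its results should be
meaningful, unlike a check of the mounted device.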
-Tom
--
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/