
BLU Discuss list archive



LVM + RAID follow up



Derek Atkins <warlord at MIT.EDU> wrote:
> With 8 400GB drives using
> RAID-5 + 1 hot spare you can combined them into 2.4TB of storage.

With the drives getting pretty cheap, there are two other cost-related aspects
to consider when planning a server:  power consumption and backups.  New
England has the priciest electricity in the free world--20 cents a kWh, give
or take, depending on which town you're in (up from about 8 cents only a
couple of years ago)--so a 3.5-inch hard drive running 24/7 will cost about
$10/year in electricity (not including the A/C to keep it cool, a consideration
when planning a big data center with hundreds to thousands of drives).
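
Back of the envelope, assuming a drive that idles around 6 watts (your
drive's actual draw will vary):

    6 W x 24 h x 365 days = ~53 kWh/year
    53 kWh x $0.20/kWh    = ~$10.50/year per drive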

As for backups, once you get beyond a couple hundred gigabytes, there really
is no inexpensive way to cope.  The least expensive option is to buy a second
batch of hard drives and make periodic copies, but you still have to figure
out how to do an offsite rotation to guard against fire, theft, etc.  And you
have to figure out how to maintain older archives, since the loss of a
less-frequently-used file might not become evident for a few months (typical
rotations overwrite backups within a short time).

I've heard rumors that some online services will sell you hundreds of
gigabytes of backup storage for a low cost.  But I'm skeptical that such a
service can be provided in a way that allows quick, reliable disaster
recovery.  (My skepticism revolves around the bankruptcy of companies selling
good stuff below cost...;-)

> I'd like it all in a single file system instead of multiple partitions
> because I don't know a priori how much space I want to the different
> uses.

That's fine; LVM is my way to address the above concern.  Why do I not like
having a single file system?  Because of the runaway app problem.  For
example, I've been using Audacity to capture audio files.  It'll chew through
a couple gigs an hour; left unattended overnight, voila, you're out of space
on whatever volume it's pointed to.  A related problem is the human tendency
to fill up all available space.  If you limit the space available to a given
application, users have an incentive to get rid of unwanted clutter.
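
For example (the volume group name, size, and mount point here are made up),
you can give a capture application its own fixed-size logical volume:

    lvcreate -L 50G -n audio vg0      # dedicated 50 GB volume for captures
    mkfs.ext3 /dev/vg0/audio
    mount /dev/vg0/audio /srv/audio   # point Audacity's output directory here

If it runs away overnight, it fills those 50 GB and nothing else.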

Companies that insist they need 100 TB of storage are probably failing to
deal with the unwanted-clutter problem, to the detriment of their customers'
privacy and their employees' productivity.  The same argument applies to most
home computers.

> I use RAID-1 ... I don't use LVM on those servers;
> I just don't see the point.  It seems to add complexity to what I view
> as little gain.

To me it doesn't seem complex.  It's a proven, reliable technology dating
back at least to the AIX O/S I was using in '91.  The point of LVM, when
combined with RAID, is that you can hot-swap new hardware in place, sync up
the new drive, and then assign additional storage to your existing
filesystems.  It provides non-stop computing in a commercial environment, and
easier upgrades in a personal environment.
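
As a sketch of what the swap looks like with Linux md (device names here are
hypothetical):

    mdadm /dev/md1 --fail /dev/sdc2 --remove /dev/sdc2   # retire the old member
    # physically swap the drive and partition it, then:
    mdadm /dev/md1 --add /dev/sdc2                       # add the new member
    cat /proc/mdstat                                     # watch the resync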

> I guess this was part of my question (and confusion)..  Do I want LVM
> over RAID or RAID over LVM?  Or do I want LVM over RAID over LVM?

RAID is at the bottom of the food chain.  LVM lives on top of it.  I suppose
you could do it differently a la the AFS (Andrew) technology of the late
1980s, but I don't see a benefit.
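
In Linux terms, assuming an md array /dev/md1 already exists (all names here
are just placeholders), the stack builds upward like this:

    pvcreate /dev/md1               # the RAID device becomes an LVM physical volume
    vgcreate vg0 /dev/md1           # physical volumes are pooled into a volume group
    lvcreate -L 500G -n data vg0    # logical volumes are carved out of the group
    mkfs.ext3 /dev/vg0/data         # filesystems live on the logical volumes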

> Also, if I want to do a RAID5 for / but RAID-1 for /boot, how do I
> want to lay that out?  With RAID-5 do all my drives/partitions need to
> be the same size like they do with RAID-1?

You would create /boot the same way you do now: a small RAID-1 mirror built
from two same-sized partitions on two of your drives.  Each member partition
of a RAID5 array should likewise be the same size.  If your drives are of
different sizes, then you should create multiple RAID5 devices (/dev/md2,
etc.) so as to take advantage of all your available physical storage.
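
A sketch of that layout, assuming eight drives partitioned identically
(device names and counts are illustrative):

    # small RAID-1 mirror for /boot on the first partitions of two drives
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # big RAID5 across the second partitions, with one hot spare
    mdadm --create /dev/md1 --level=5 --raid-devices=7 --spare-devices=1 /dev/sd[a-h]2

/dev/md1 (and any further md devices) then become LVM physical volumes, as
above.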

> And then what's my upgrade path if I decide to swap out to larger
> drives?  Let's say that 3-5 years from now I decide to swap out my
> 400G drives with 1TB drives -- what would be the right process to do
> that such that my main raid partition size increases?  (Then I can
> resize my ext3 FS).

Well one possibility is that you start out with eight 400 GB units and at your
first upgrade you decide you want to buy four 1 TB units, leaving four of the
old ones in place.  Ignoring the bit of space given to /boot, let's say you
set aside one of the 400 GB units as a spare and configure the first RAID5
array as 7 partitions of 400 GB.  You'd still have 2.4 TB available in the
first array.  You'd have four 600 GB partitions to configure in the second
array, which provides 1.8 TB of storage (without setting aside a spare). 
Total 4.2 TB.  Then a year later, with prices still coming down, you swap out
the 400 GB drives for a set of four 1.8 TB drives.  Applying the same logic,
you get available storage of 6 times 1 TB plus 3 times 0.8 TB, which comes to
8.4 TB.

If you do this without LVM, you have to save/restore your data after
re-creating the new filesystems.  With LVM, you just make the partitions and
extend the filesystems into the new space.
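
With a layout like the above, the LVM side of that upgrade is roughly (again,
names are illustrative, and the new array is assumed to show up as /dev/md2):

    pvcreate /dev/md2                  # new array becomes a physical volume
    vgextend vg0 /dev/md2              # add it to the existing volume group
    lvextend -L +600G /dev/vg0/data    # grow the logical volume
    resize2fs /dev/vg0/data            # grow the ext3 filesystem into the new space

Depending on your kernel and e2fsprogs versions, the resize2fs step may
require the filesystem to be unmounted first.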

> I don't trust reiserfs for a server -- it's not designed to handle
> catastrophic failures like a power outage.  Yeah, you could setup a
> UPS with auto-shutdown, but why take the extra chance with a fragile
> filesystem?

Haven't heard Reiser described as "fragile", especially compared to the
previous-generation Linux filesystems that I used before it came out (and
compared to NTFS on my XP boxes), but your observation leads me to ask:  what
do you use instead?  Can you point to some reading on the relative reliability
of Reiser vs. alternatives?  Thanks!

-rich









