
BLU Discuss list archive



LVM + RAID follow up



Quoting John Abreau <john.abreau at zuken.com>:

> On Wed, 2006-11-08 at 13:17 -0500, Derek Atkins wrote:
>
>>
>> Ok, you've confused me again.  Doing it this way, to swap out
>> 4 drives you need to have space for 8 drives!  Or are you saying
>> that you swap out one drive at a time?  The way I would think about
>> it:
> [snip]
>> Now, I can see how in the next swap-out, when I add a p4 (built into
>> an md3), if p4 is greater than p2+p3 then I can see how I could move
>> everything from md1 and md2 onto md3, and then combine p2 and p3 and
>> build a new array from the newly combined p2+p3.  But this only works
>> if I keep getting geometrically-bigger hard drives (well, it needs to
>> be Fibonacci-style increasing sizes).
>>
>> Am I missing something?
>
> It sounds like you're trying to mix together several different layers.
> You mention combining partitions, for instance, but that's at the
> layer below RAID, whereas LVM is the layer above RAID. And the ext3
> layer sits above LVM.

Sorry, I'm a big-picture kind of guy, so I like to think about how it
all works together.  I'm mostly ignoring LVM and ext3 in this description
because I think I understand how those layers work.  What I'm trying to
grasp is the method for swapping out the physical drives, which means
changing out the physical partitions and rebuilding the RAID devices out
from under LVM.

> I've only done this with RAID-1, which is simpler than trying to juggle
> drives in a RAID-5 metadevice. I only had to swap 2 drives at a time.
> The HOWTOs I read on it described an alternate way to manage it that
> involved partitioning large drives into several partitions and building
> RAID volumes from the partitions. The scheme used multiple md's, and
> combined them into one volume at the LVM layer.

Okay, so let me restate my question in terms of 2 drives and RAID1
to make it simpler.  Let's assume I have a system where I've got two
80G drives partitioned like the following:

1: 200M <raid>
2: 1G   <swap>
3: 78G  <raid>

I have two RAID-1 MDs combining hda1/hdc1 (md0) and hda3/hdc3 (md1).
Now, assume I've got one Volume Group containing md1, and a filesystem
sitting in a Logical Volume inside that Volume Group.  For the sake of
discussion, let's ignore the VG and LV, because I think I understand
completely how that works.  I've also got grub installed on these two
drives (they are the only two drives in the system).
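Concretely, I picture that setup having been built with something like
this (just a sketch using mdadm and the LVM2 tools; the "vg0"/"data"
names are made up for illustration):

```shell
# Mirror the small boot partition and the big data partition
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3

# Put LVM on top of the large mirror only
pvcreate /dev/md1
vgcreate vg0 /dev/md1          # "vg0" is just an example name
lvcreate -L 70G -n data vg0    # one LV inside the VG
mkfs.ext3 /dev/vg0/data
```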

Now, I want to swap out these 80G drives for 200G drives...  How
would you do it?  I don't understand what you mean by "swap two drives
at a time" -- in all cases I don't see how you can swap more than
a single drive at a time, let the RAID rebuild, and then move on to
the next one.  Could you walk me through your procedure?

I guess that now that pvresize is implemented it might be easier,
but that's only been implemented since FC5.
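To make my mental model concrete, here's the one-drive-at-a-time
procedure I'm imagining, as a sketch (assumes mdadm, the device names
above, and a pvresize-capable LVM; not something I've actually run):

```shell
# For each of the two drives, one at a time:
mdadm /dev/md0 --fail /dev/hdc1 --remove /dev/hdc1
mdadm /dev/md1 --fail /dev/hdc3 --remove /dev/hdc3
# ...power down, physically replace hdc with the 200G drive,
# partition it with the same layout (but a larger partition 3)...
mdadm /dev/md0 --add /dev/hdc1
mdadm /dev/md1 --add /dev/hdc3
# wait for the resync to finish before touching the other drive:
cat /proc/mdstat

# After BOTH drives are replaced, grow the array to the new size,
# then tell LVM about the extra space:
mdadm --grow /dev/md1 --size=max
pvresize /dev/md1
# (and reinstall grub on the new drives so they stay bootable)
```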

> With this scheme, you can remove md's when removing drives, so you
> don't have md's proliferating infinitely, but you'd still have more
> than one md.  At the LVM layer, you can combine the multiple md's
> into a single virtual volume.

I understand how to combine the multiple MDs into a single virtual volume
but I don't follow the rest.  Sorry for being so dense; I suspect there's
just some fundamental thing I'm not understanding (or not properly
communicating).
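If I follow the multiple-md scheme at all, I'd guess it looks roughly
like this (a sketch; the md numbers and "vg0" name are illustrative):

```shell
# The big new drives carry an extra RAID partition -> an extra mirror:
#   md1 = the old-size portion, md2 = the additional space
pvcreate /dev/md2
vgextend vg0 /dev/md2    # the VG now spans both mirrors

# Later, to retire md1 when its underlying drives go away:
pvmove /dev/md1          # migrate its extents onto md2
vgreduce vg0 /dev/md1
pvremove /dev/md1
```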

Thanks!

-derek

-- 
       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL: http://web.mit.edu/warlord/    PP-ASEL-IA     N1NWH
       warlord at MIT.EDU                        PGP key available






