
BLU Discuss list archive



LVM + RAID follow up



On Wed, 2006-11-08 at 15:29 -0500, Derek Atkins wrote:

> Okay, so let me restate my question in terms of 2 drives and RAID1
> to make it simpler.  Let's assume I have a system where I've got two
> 80G drives partitioned like the following:
> 
> 1: 200M <raid>
> 2: 1G   <swap>
> 3: 78G  <raid>
> 
> I have two RAID-1 MDs combining hda1/hdc1 (md0) and hda3/hdc3 (md1).
> Now, assume I've got one Volume Group which contains md1 and a filesystem
> sitting in a Logical Volume inside that Volume Group.  But for the sake of
> discussion let's ignore the VG and LV, because I think I understand completely
> how that works.  I've also got grub installed on these two drives (they are
> the only two drives in the system).
> 
> Now, I want to swap out these 80G drives with 200G drives...  How
> would you do it?  I don't understand what you mean by "swap two drives
> at a time" -- in all cases I don't see how you can swap more than
> a single drive at a time, let the raid rebuild, and then move on to
> the next one.  Could you walk me through your procedure?

Maybe I shouldn't have used the word "swap" there. What I meant by 
"swap two drives at a time" is to add the two 200G drives, then migrate 
the data off the two 80G drives and onto the two 200G drives, then 
remove the two 80G drives. 
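In mdadm/LVM terms, that migration might look roughly like this. This is only a sketch: the VG name vg0, the new-drive device names /dev/hde and /dev/hdg, and the new array name /dev/md2 are assumptions, not names from your actual system.

```shell
# Build a RAID-1 mirror from partitions on the two new 200G drives
# (hypothetical device names):
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1

# Turn the new mirror into an LVM physical volume and add it to the VG:
pvcreate /dev/md2
vgextend vg0 /dev/md2

# Migrate every allocated extent off the old 80G mirror, then drop it:
pvmove /dev/md1
vgreduce vg0 /dev/md1

# The old array is now unused; stop it and pull the 80G drives.
mdadm --stop /dev/md1
```

A nice property of this approach is that pvmove runs online, so the filesystems in the VG stay mounted throughout.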

Two other things: I like to include swap within LVM, rather than as 
a separate partition. Also, since md0 is small, I assume that's 
meant for /boot, and /boot cannot be within LVM. 
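If swap lives inside the VG as I describe, it's just another logical volume; a minimal sketch, again assuming a VG named vg0:

```shell
lvcreate -L 1G -n swap vg0   # carve out a 1G LV for swap
mkswap /dev/vg0/swap         # format it as swap space
swapon /dev/vg0/swap         # activate it (and list it in /etc/fstab)
```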


> I guess that now that pvresize is implemented it might be easier,
> but that's only been implemented since FC5.

The HOWTO did mention that there are two versions of LVM, and that 
volumes created with lvm1 can be incompatible with lvm2.  I only 
saw it mentioned in passing when reading the section on LVM snapshots, 
so I'm not sure what all the differences are between the two versions. 
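For what it's worth, the pvresize route you mention would look something like this once both halves of the mirror have been replaced with larger partitions (a sketch using the md1 from your example; I haven't tested it):

```shell
mdadm --grow /dev/md1 --size=max   # let the RAID-1 grow into the larger partitions
pvresize /dev/md1                  # have LVM pick up the bigger physical volume
```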

> > With this scheme, you can remove md's when removing drives, so you
> > don't have md's proliferating infinitely, but you'd still have more
> > than one md.  At the LVM layer, you can combine the multiple md's
> > into a single virtual volume.
> 
> I understand how to combine the multiple MDs into a single virtual volume
> but I don't follow the rest.  Sorry for being so dense; I suspect there's
> just some fundamental thing I'm not understanding (or not properly
> communicating).

If you choose a RAID component size of, let's say, 40G, then your 80G 
drives would each have room for 2 RAID components, and your 200G drives 
would each have room for 5 RAID components. We'll forget about /boot 
for now, to simplify the picture. 

You could add in the 200G drives one at a time, each of which gives 
you five of your 40G partitions. Then raid-fail an 80G drive, and 
replace its two partitions with two of the five partitions on the 
new 200G drive. 
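That per-drive swap might look like the following, with hypothetical device names (hda is an old 80G drive holding two 40G components, hde a new 200G drive):

```shell
# Fail and remove the old drive's two 40G components:
mdadm /dev/md0 --fail /dev/hda1 --remove /dev/hda1
mdadm /dev/md1 --fail /dev/hda2 --remove /dev/hda2

# Add two of the new drive's five 40G partitions in their place:
mdadm /dev/md0 --add /dev/hde1
mdadm /dev/md1 --add /dev/hde2

# Let each rebuild finish before failing the next drive:
cat /proc/mdstat
```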

So what we have in this scenario is two existing RAID-5 metadevices, 
and with the additional three 40G partitions on the new 200G drives, 
we create three more RAID-5 metadevices. Then we add the new 
metadevices to the LVM VG. 
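Creating the extra metadevices and folding them into the VG might look like this (hypothetical array, partition, and VG names; note that with only two member partitions a RAID-5 is a degenerate mirror, so in practice more drives would contribute members to each array):

```shell
# Three new arrays from the leftover 40G partitions on the new drives:
mdadm --create /dev/md3 --level=5 --raid-devices=2 /dev/hde3 /dev/hdg3
mdadm --create /dev/md4 --level=5 --raid-devices=2 /dev/hde4 /dev/hdg4
mdadm --create /dev/md5 --level=5 --raid-devices=2 /dev/hde5 /dev/hdg5

# Add all three new metadevices to the volume group:
vgextend vg0 /dev/md3 /dev/md4 /dev/md5
```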

Alternately, you could just stick with an 80G partition and a 120G 
partition, but using partitions all of the same size will make 
the upgrade procedure less complicated, particularly the next time 
you want to upgrade the disk sizes. 


-- 
John Abreau / Executive Director, Boston Linux & Unix
IM: jabr at jabber.blu.org / abreauj at AIM / abreauj at Yahoo / zusa_it_mgr at Skype
Email jabr at blu.org / WWW http://www.abreau.net / PGP-Key-ID 0xD5C7B5D9
PGP-Key-Fingerprint 72 FB 39 4F 3C 3B D6 5B E0 C8 5A 6E F1 2C BE 99



BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.



