
BLU Discuss list archive



Re: LVM questions



 Thanks, Jarod; this is how we'll move forward. 

On Wed, 09 Jul 2008 11:03:53 -0400 
Jarod Wilson <[hidden email]> wrote: 

> On Wed, 2008-07-09 at 10:38 -0400, Jerry Feldman wrote: 
> > I currently have a system where the root file system is part of a 
> > volume group. What I would like to do is create the root, boot, and 
> > swap file systems on a separate physical volume. /boot would normally 
> > be ext3.   
> > 
> > Here is what I would like to do. I currently have a system where root 
> > is part of the LVM. I would like to build RHEL 4 U6 on a separate 
> > physical volume containing boot and root. I would then like to be able 
> > to attach the existing LVM volumes to it. 
> > 
> > Example: 
> > 1. One unused 73GB drive - install the new OS here. 
> > 2. Three 73GB drives currently containing /boot (ext3), Logical Volume 
> > 01 for root, and Logical Volume 02 for swap. 
> > 
> > Currently this particular system is used for backups. The questions are: 
> > 1. When I do a clean install of RHEL 4 U6 on the new volume, can I 
> > access the existing LVM volumes? 
> 
> Yes. You'll want to do custom partitioning, create a new LVM layout on 
> the new drive, and name the new volume group something other than the 
> original volume group's name so there are no conflicts between the two. 
> Then the existing logical volumes in the old volume group can be 
> mounted on top of the new install wherever you choose. 
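
For reference, a rough command sequence for that step, assuming the old 
volume group is named VolGroup00 and its root logical volume is LogVol00 
(these names are illustrative, not necessarily what the installer used): 

    # From the newly installed system, detect and activate the old volume group 
    vgscan 
    vgchange -ay VolGroup00 

    # Mount the old root logical volume under the new install 
    mkdir -p /mnt/oldroot 
    mount /dev/VolGroup00/LogVol00 /mnt/oldroot 

    # To make the mount permanent, add a line like this to /etc/fstab 
    /dev/VolGroup00/LogVol00  /mnt/oldroot  ext3  defaults  1 2 

If the new install's group did end up with the same name as the old one, 
vgrename can rename one of them (by UUID if necessary) before both are 
activated. 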
> 
> 
> > Since this is a backup server, I certainly can blow everything 
> > away if I need to. 
> 
> Definitely shouldn't need to. 
> 
> > 2. On another system I have an LVM setup with two volume groups. One of 
> > the things I have to do immediately is take one physical volume offline 
> > (the system is reporting it as a future failure) and replace it with a 
> > new volume. That is not really a problem, as I know how to do it. But 
> > what I would like to do is something similar to (1) above, because this 
> > system is currently our primary server and I have six 73GB drives with 
> > root as a separate logical volume, but the two volume groups consist of 
> > five drives and one drive, respectively. This is our primary NFS 
> > server. We certainly back up some of its directories nightly, such as 
> > home and cvs, but other directories can easily be copied from our home 
> > office. So, in this particular case, it's probably more important that 
> > we have root and boot on a separate volume. What I would like to be 
> > able to do is clone the build from step 1 (changing the host name and a 
> > few other parameters) so I can minimize the impact on work. 
> > (Note: I can only do this during normal business hours, since we are in 
> > a Regus business center and the server room is locked after hours.) 
> 
> Should also be perfectly doable. 
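
For the failing-drive swap, the usual LVM sequence is roughly the 
following, assuming the failing disk is /dev/sdf, its replacement is 
/dev/sdg, and the affected volume group is called vg_data (device and 
group names are illustrative): 

    # Add the replacement disk to the volume group 
    pvcreate /dev/sdg 
    vgextend vg_data /dev/sdg 

    # Migrate all extents off the failing disk, then drop it from the group 
    pvmove /dev/sdf 
    vgreduce vg_data /dev/sdf 
    pvremove /dev/sdf 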
> 
> 
> 
> -- 
> Jarod Wilson 
> [hidden email] 
> 
> 

