On 5/9/07, Tom Metro <blu at vl.com> wrote:
> John Abreau wrote:
> > Yes, that's the procedure I'd follow.
>
> One step you left out is optionally creating an LVM volume on the
> RAID-1 array, unless the expectation is that the dd operation will
> move the existing one over. LVM can introduce some complications in
> this area... you have to be careful that you don't end up with two
> volumes with the same name. There may also be dependencies on device
> name or UUID.

The existing server uses LVM already; sda1 is /boot and sda2 is LVM. I
was thinking of just dd'ing the LVM and then creating an additional LVM
volume in the extra space on the new hard drive, but it would be cleaner
to make one big LVM on sdb2 and then dd the individual LVs from within
the old VG (a rough command sketch follows below).

Ultimately the best choice will be whichever minimizes downtime. If I'm
lucky, I'll be able to add the new drive without rebooting, and downtime
will be limited to the one reboot needed to remove the old drive and
test the new degraded array. If that works out, I can do everything else
while the server is live.

--
John Abreau / Executive Director, Boston Linux & Unix
GnuPG KeyID: 0xD5C7B5D9 / Email: abreauj at gmail.com
GnuPG FP: 72 FB 39 4F 3C 3B D6 5B E0 C8 5A 6E F1 2C BE 99
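For concreteness, here is a rough sketch of the commands that procedure
implies. The device and volume names are hypothetical (the new drive is
assumed to be /dev/sdb, the existing volume group vg_old, and the new
one vg_new, with a placeholder LV size); creating the new VG under a
different name sidesteps the duplicate-name problem Tom mentions:

    # Build a degraded RAID-1 array on the new drive's second partition,
    # with the second member marked "missing" for now
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 missing

    # Put LVM on the array: one PV spanning it, and a new volume group
    pvcreate /dev/md0
    vgcreate vg_new /dev/md0

    # Recreate each logical volume and copy it block-for-block
    # (repeat per LV; sizes must match or exceed the originals)
    lvcreate -L 20G -n root vg_new
    dd if=/dev/vg_old/root of=/dev/vg_new/root bs=1M

    # After the old drive is wiped and re-added, complete the mirror
    mdadm --add /dev/md0 /dev/sda2

Once the old VG has been deactivated or removed, vgrename could restore
the original name so device-path references like /dev/vg_old/root in
fstab keep working; dd preserves filesystem UUIDs, so UUID-based
references should survive the copy, per Tom's caveat about device names
and UUIDs.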