John Abreau wrote:
> Is it possible to do this cleanly without reinstalling the
> whole system?
>
> I'm thinking something like the following might work:
>
> * Set up the new disk as a set of failed RAID-1 volumes
> * dd the partitions from the old disk to the corresponding
>   failed RAID-1 partitions on the new disk
> * Remove the old disk and reboot, to test that the system
>   can boot and run with just the new disk
> * Add a second new disk and "rebuild" the failed RAID-1
>   volumes
>
> Does this make sense?

Yes, that's the procedure I'd follow. One step you left out is
optionally creating an LVM volume on the RAID-1 array, unless the
expectation is that the dd operation will move the existing one over.

LVM can introduce some complications in this area: you have to be
careful that you don't end up with two volume groups with the same
name, and there may also be dependencies on device names or UUIDs. So
an alternative to consider is setting up a new LVM volume on the
RAID-1 array, and then migrating the data. You might be able to do
this using dd and the LVM devices. I think there is also an LVM
command for moving data among disks.

> Has anyone else done this successfully?

Not specifically - not with a bootable drive - but I have created RAID
arrays in a degraded state, migrated data, then completed the array
and let it rebuild. The rebuild can be slow and CPU-intensive, but it
works.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/
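
A minimal sketch of the degraded-array procedure described above,
assuming the old disk is /dev/sda, the first new disk is /dev/sdb, and
the second new disk is /dev/sdc (all device names hypothetical), with
matching partitions already created on the new disks:

    # Create each array with one member listed as "missing",
    # so it comes up degraded:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

    # Copy the old partition's contents into the degraded array:
    dd if=/dev/sda1 of=/dev/md0 bs=1M

    # After verifying the system boots from the new disk alone,
    # attach the second disk and let the array resync:
    mdadm --add /dev/md0 /dev/sdc1
    cat /proc/mdstat    # watch the rebuild progress

Note that the md superblock takes a little space at the end of each
member partition, so /dev/md0 is slightly smaller than /dev/sdb1; make
the new partitions somewhat larger than the old ones or the dd will run
out of room. You would also need to update the bootloader, /etc/fstab,
and mdadm.conf to reference the md devices.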
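One wrinkle with the dd approach: the copy carries over the original
volume group's name and UUIDs, so if both disks are visible at once,
LVM sees duplicates. Newer lvm2 releases, where available, ship
vgimportclone for exactly this case. A sketch, assuming the clone sits
on /dev/md1 and "vg_new" is a hypothetical name:

    # Rename the cloned VG and regenerate its UUIDs so it can
    # coexist with the original:
    vgimportclone --basevgname vg_new /dev/md1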
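The LVM command for moving data among disks is pvmove, which migrates
allocated extents between physical volumes while the logical volumes
stay online. A sketch of that alternative migration path, assuming an
existing volume group vg0 (hypothetical) with the old data on /dev/sda2
and the new array at /dev/md1:

    pvcreate /dev/md1           # label the array as an LVM physical volume
    vgextend vg0 /dev/md1       # add it to the volume group
    pvmove /dev/sda2 /dev/md1   # migrate all extents off the old disk
    vgreduce vg0 /dev/sda2      # remove the old disk from the group
    pvremove /dev/sda2          # clear its LVM label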