How to remotely convert live 1xHDD/LVM Linux server to 2xHDD RAID1/LVM (GRUB2, GPT)
17th May 2011
Assumptions:
- current HDD is /dev/sda; it has a GPT (with /dev/sda1 as the bios_grub partition), a separate /boot partition (/dev/sda2), and an LVM physical volume (/dev/sda3) that holds all the remaining filesystems (root, /home, /srv, …); LVM is properly configured, and the system reboots with no problems
- your new drive is /dev/sdb, it is identical to /dev/sda, and it comes empty from the manufacturer (this is important! wipe the drive if it is not empty, especially if it used to be part of another RAID; see the sketch right after this list)
- your system is Debian or Debian-based; in this particular example I was using Ubuntu Server 10.04
- your LVM volume group is named vg0
- make sure you understand what each command does before executing it
- you do have an external backup of all your important data, and you do understand that the following operations are potentially dangerous to your data integrity
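A minimal wipe sketch for a previously-used drive (not part of the original guide; adjust the device names, and be absolutely sure you are pointing at the new, empty drive and not your live one):
mdadm --zero-superblock /dev/sdb1 /dev/sdb2 /dev/sdb3 (only needed if the drive still carries old RAID member partitions)
dd if=/dev/zero of=/dev/sdb bs=1M count=10 (clears the old partition table and any leftover boot code at the start of the disk)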
Inspired by: Debian Etch RAID guide, serverfault question.
- Create the GPT on the new drive:
parted /dev/sdb mklabel gpt
- Get the list of partitions on /dev/sda:
parted -m /dev/sda print
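The machine-readable output looks roughly like this (the sizes, model string and filesystem column below are purely illustrative; read the start/end offsets from your own output):
BYT;
/dev/sda:2000GB:scsi:512:512:gpt:Example Disk Model;
1:1049kB:2097kB:1049kB::bios_grub:bios_grub;
2:2097kB:258MB:256MB:ext4:boot:;
3:258MB:2000GB:2000GB::lvm:lvm;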
- Create /dev/sdb partitions similarly to what you have on /dev/sda (my example numbers follow, use your numbers here):
parted /dev/sdb mkpart bios_grub 1049kB 2097kB
parted /dev/sdb mkpart boot 2097kB 258MB
parted /dev/sdb mkpart lvm 258MB 2000GB
- Set proper flags on partitions:
parted /dev/sdb set 1 bios_grub on (a BIOS/GPT setup has no post-MBR gap to embed GRUB 2's core image in, so you create a small 1 MB bios_grub partition to hold it instead)
(possibly optional) parted /dev/sdb set 2 raid on
(possibly optional) parted /dev/sdb set 3 raid on
- (possibly optional) To make sure /dev/sdb1 (the bios_grub partition) indeed contains GRUB's boot code, I did dd if=/dev/sda1 of=/dev/sdb1
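Before moving on, it doesn't hurt to compare the two partition tables side by side (a sanity check, not a step from the original guide):
parted -m /dev/sda print
parted -m /dev/sdb print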
- apt-get install mdadm
- Note: at this point, older tutorials suggest adding a bunch of raid* kernel modules to /etc/modules and to GRUB's list of modules to load. I'm not sure this is really necessary, but do see the tutorials mentioned at the top for more information. If you do modify the lists of modules, don't forget to run update-initramfs -u. A sketch of what that might look like follows below.
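A minimal sketch, assuming raid1 and md_mod are the module names your initramfs is missing (on most Debian/Ubuntu installs mdadm's own hooks pull these in automatically, so treat this as optional):
echo raid1 >> /etc/modules
echo md_mod >> /etc/modules
update-initramfs -u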
- Create two initially-degraded RAID1 devices (one for /boot, another for LVM):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb3 missing
- Store the configuration of your RAID1 in the mdadm.conf file (important! this is not done automatically!):
mdadm -Es >> /etc/mdadm/mdadm.conf
- Verify the contents of your mdadm.conf:
cat /etc/mdadm/mdadm.conf
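The appended lines should look roughly like this (the exact fields depend on your mdadm version and metadata format; the UUIDs below are placeholders, yours will differ):
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=11111111:11111111:11111111:11111111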
dpkg-reconfigure mdadm, and enable booting in degraded mode
- Copy your current /boot (/dev/sda2) to the new /boot partition on /dev/md0:
(one can use something like dd if=/dev/sda2 of=/dev/md0 here as well, but my attempt at dd failed writing the last byte of data, most likely because /dev/md0 is slightly smaller than /dev/sda2 once the RAID metadata is accounted for)
mkdir /mnt/md0
mkfs.ext4 /dev/md0 (one can also use other filesystems here, e.g. mkfs.ext3 or even mkfs.ext2)
mount /dev/md0 /mnt/md0
cp -a /boot/* /mnt/md0/
umount /dev/md0
rmdir /mnt/md0
- Now extend your existing volume group to include the newly-created /dev/md1:
pvcreate /dev/md1
vgextend vg0 /dev/md1
- Verify the list of logical volumes you currently have: enter the lvm shell and type lvs. Here's what I had:
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  home vg0  -wi-ao   1.70t
  logs vg0  -wi-ao   4.66g
  root vg0  -wi-ao  10.24g
  srv  vg0  -wc-ao 100.00g
  swap vg0  -wi-ao   1.86g
  tmp  vg0  -wi-ao   4.66g
- Now you can move all the logical volumes to the new physical volume in one command: pvmove /dev/sda3 /dev/md1. Personally, remembering the problem I had with dd from /dev/sda2 to /dev/md0, I decided to move the logical volumes one by one; as this takes time, you may consider joining these operations with ; or &&, and putting /tmp last (as the easiest one to re-create if it fails to move):
pvmove --name home /dev/sda3 /dev/md1
pvmove --name srv /dev/sda3 /dev/md1
pvmove --name logs /dev/sda3 /dev/md1
pvmove --name swap /dev/sda3 /dev/md1
pvmove --name root /dev/sda3 /dev/md1
pvmove --name tmp /dev/sda3 /dev/md1
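Once the moves finish, you can check that nothing is left allocated on /dev/sda3 before it is removed from the volume group a couple of steps below (a quick check, not from the original guide):
pvs -o pv_name,vg_name,pv_size,pv_used
(/dev/sda3 should now show 0 used, and /dev/md1 should hold all the data)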
- To be safer, I ran an FS check on a few volumes I could umount:
umount /dev/mapper/vg0-srv
fsck -f /dev/mapper/vg0-srv
mount /dev/mapper/vg0-srv
umount /dev/mapper/vg0-tmp
fsck -f /dev/mapper/vg0-tmp
mount /dev/mapper/vg0-tmp
- Remove /dev/sda3 from the physical space available to your volume group:
vgreduce vg0 /dev/sda3
- Install GRUB 2 to both drives, so that either drive remains bootable if the other fails:
grub-install '(hd0)'
grub-install '(hd1)'
- Edit /etc/fstab, pointing /boot to /dev/md0. You may use UUIDs here, but do not use the UUIDs from mdadm.conf: those are RAID array UUIDs, not filesystem UUIDs. Instead, run ls -l /dev/disk/by-uuid to find the filesystem UUID of /dev/md0. Personally, I had no problems just using /dev/md0.
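For illustration, the resulting /boot line might end up looking something like this (device-path variant; assuming the ext4 filesystem created above and standard dump/pass fields):
/dev/md0  /boot  ext4  defaults  0  2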
- Now is the time to add your original /dev/sda to the RAID1; be absolutely sure you have moved all the data off that drive, because these commands will destroy it:
mdadm --manage /dev/md0 --add /dev/sda2
mdadm --manage /dev/md1 --add /dev/sda3
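You can watch the rebuild progress while it runs (standard mdadm/proc interfaces, not specific to this guide):
cat /proc/mdstat
mdadm --detail /dev/md1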
Re-syncing the arrays will take some time.
- To be on the safe side, you may want to run update-initramfs -u and update-grub once more. I have also edited /etc/grub.d/40_custom, adding two more boot options there: /boot on /dev/sda2 and on /dev/sdb2. I have no idea whether those entries would actually work, but having more boot options didn't hurt.
- Reboot into your new system. At this point the reboot is only needed to verify that your system is bootable; you may delay it as long as you want to.
- Many tutorials also suggest testing your RAID1 by manually “degrading” it, trying to boot, and then rebuilding it back. I haven’t done that, but you may want to.
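If you do decide to test it, the usual fail/remove/re-add cycle looks roughly like this (a sketch only, shown for /dev/md1 and /dev/sdb3; make sure your backups are current before trying it):
mdadm --manage /dev/md1 --fail /dev/sdb3
mdadm --manage /dev/md1 --remove /dev/sdb3
(reboot and verify the system still comes up on the remaining drive, then re-add and let it re-sync)
mdadm --manage /dev/md1 --add /dev/sdb3
cat /proc/mdstat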
Improvement suggestions, criticism, and thank-yous are welcome in the comments.
July 18th, 2011 at 11:34
Hi,
Thanks for great How-to. I needed to do this locally but on a SS4000-E Nas with debian installed. No GUI, and no CD-ROM or other boot device for any sort of liveCD.
At your step 11, I had to use dd. I could not mount /dev/md0. After using the dd command which completed without issues, I could mount /dev/md0.
July 20th, 2011 at 22:27
I’m glad both this how-to and `dd` at step 11 worked for you.
Have you figured out if steps 4 and 5 (marked as (possibly optional)) are necessary at all? I’d remove them from the how-to for further simplicity, but I’m not sure if they are important in some way.
November 17th, 2011 at 16:45
Thanks lol…
The link mentioned below is very easy to understand:
http://www.redhatlinux.info/2010/11/lvm-logical-volume-manager.html
April 30th, 2012 at 12:58
Very impressed with what pvmove does. The whole LVM2 + MDRAID stack impressed me after I followed your guide.
Thanks!
February 15th, 2013 at 15:46
Thanks for the tutorial.
In step 11, if you can't mount /dev/md0, you first have to create a filesystem on /dev/md0 by running:
mkfs.ext3 /dev/md0
March 15th, 2014 at 1:55
Javier, thanks for pointing out the missing filesystem creation command in step 11. This is now fixed.