Replacing Failed Raid1 Drive (mdadm)

To identify whether a RAID Array has failed, look at the string containing [UU] in the output of /proc/mdstat. Each "U" represents a healthy member of the RAID Array. If you see [UU], the RAID Array is healthy. If a "U" is missing, as in [U_] or [_U], the RAID Array is degraded or faulty.

$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      2048192 blocks [2/2] [UU]

md3 : active raid1 sda5[0]
      2048192 blocks [2/1] [_U]

md4 : active raid1 sda6[0] sdb6[1]
      2048192 blocks [2/2] [UU]

md5 : active raid1 sda7[0] sdb7[1]
      960269184 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      10241344 blocks [2/2] [UU]

From the above output we can see that RAID Array "md3" is missing a "U" and is degraded or faulty.
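
You can also query the degraded RAID Array directly; mdadm --detail reports the array state and flags any member that is faulty or has been removed. For example, for md3 above:

# mdadm --detail /dev/md3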

Removing the failed partition(s) and disk:

Before we can physically remove the hard drive from the system, we must first "fail" the disk's partition(s) in every RAID Array they belong to. Even though only partition /dev/sdb5 of RAID Array md3 has failed, we must manually fail all of the other /dev/sdb# partitions that belong to RAID Arrays before we can remove the hard drive from the system.

To fail a partition, issue the following command:

# mdadm --manage /dev/md0 --fail /dev/sdb1

Repeat this command for each partition, changing /dev/md# and /dev/sdb# to match the output of "cat /proc/mdstat".

# mdadm --manage /dev/md1 --fail /dev/sdb2
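
If you prefer not to type each command by hand, a small shell loop can fail every /dev/sdb member in one pass. This is only a sketch: the md#/sdb# pairs below assume the layout shown in the /proc/mdstat output above and must be adjusted to match your own system.

for pair in md0:sdb1 md1:sdb2 md2:sdb3 md3:sdb5 md4:sdb6 md5:sdb7; do
    mdadm --manage /dev/${pair%%:*} --fail /dev/${pair##*:}   # mark the sdb member of each array as faulty
done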

Removing:

Now that all the partitions have been failed, we can remove them from the RAID Arrays.

# mdadm --manage /dev/md0 --remove /dev/sdb1

Repeat this command for each partition, changing /dev/md# and /dev/sdb# to match the output of "cat /proc/mdstat".

# mdadm --manage /dev/md1 --remove /dev/sdb2
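
Before powering off, it is worth confirming that no /dev/sdb partition is still attached to any RAID Array. This quick check should print nothing once every sdb member has been removed:

# grep sdb /proc/mdstat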

Power off the system and physically replace the hard drive:

# shutdown -h now

Adding the new disk to the RAID Array:

Now that the new hard drive has been physically installed, we can add it to the RAID Array.

In order to use the new drive we must create the exact same partition table structure that was on the old drive.

We can use the existing drive and mirror its partition table structure to the new drive. There is an easy command to do this:

# sfdisk -d /dev/sda | sfdisk /dev/sdb

* Note that when drives are removed and replaced, the drive's device name may sometimes change. Make sure the drive you replaced is listed as /dev/sdb and contains no partitions by issuing the command "fdisk -l /dev/sdb".

If sfdisk tells you "sfdisk: I don't like these partitions - nothing changed", it is because the installers of some modern distributions, including CentOS 6, create partitions that are exactly the size you specify and do not necessarily end on cylinder boundaries.

You can force sfdisk to proceed with the --force option:

# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
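
Once sfdisk has finished, you can verify that both drives now carry the same partition layout. One way to do this (a sketch; the sed substitution only strips the differing device names so the two dumps can be compared) is:

# diff <(sfdisk -d /dev/sda | sed 's/sda/sdX/') <(sfdisk -d /dev/sdb | sed 's/sdb/sdX/')

If the partition tables match, diff prints nothing.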


Add the partitions back into the RAID Arrays:

Now that the partitions are configured on the newly installed hard drive, we can add the partitions to the RAID Array.

# mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1

Repeat this command for each partition, changing /dev/md# and /dev/sdb# to match the output of "cat /proc/mdstat".

# mdadm --manage /dev/md1 --add /dev/sdb2
mdadm: added /dev/sdb2
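
As with failing and removing, the add step can be scripted. The loop below reuses the same md#/sdb# pairing assumed earlier; adjust it to match your own /proc/mdstat output.

for pair in md0:sdb1 md1:sdb2 md2:sdb3 md3:sdb5 md4:sdb6 md5:sdb7; do
    mdadm --manage /dev/${pair%%:*} --add /dev/${pair##*:}   # add the new sdb partition back into each array
done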

Now we can check that the partitions are being synchronized by issuing:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

md2 : active raid1 sda3[2] sdb3[1]
      2048192 blocks [2/1] [_U]
      	resync=DELAYED

md3 : active raid1 sda5[2] sdb5[1]
      2048192 blocks [2/1] [_U]
      	resync=DELAYED

md4 : active raid1 sda6[2] sdb6[1]
      2048192 blocks [2/1] [_U]
      	resync=DELAYED

md5 : active raid1 sda7[2] sdb7[1]
      960269184 blocks [2/1] [_U]
      [>....................]  recovery =  1.8% (17917184/960269184) finish=193.6min speed=81086K/sec

md1 : active raid1 sda2[0] sdb2[1]
      10241344 blocks [2/2] [UU]
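
To follow the rebuild without re-running the command by hand, watch refreshes the output every few seconds:

# watch -n 5 cat /proc/mdstat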

Once all drives have synchronized, your RAID Array will be back to normal again.

Install GRUB on the new hard drive's MBR:

We need to install GRUB on the MBR of the newly installed hard drive, so that if the other drive fails, the new drive will still be able to boot the OS.

Enter the Grub command line:

# grub

Locate grub setup files:

grub> find /grub/stage1

On a RAID 1 array with both drives present, you should expect to see:

(hd0,0)
(hd1,0)

Install grub on the MBR:

grub> device (hd0) /dev/sdb (or /dev/hdb for IDE drives)
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

We mapped the second drive, /dev/sdb, to device (hd0) because installing GRUB this way writes a bootable MBR onto the second drive, so that when the first drive is missing the second drive will still boot.

This ensures that if the first drive in the RAID Array fails, or has already failed, you can still boot the operating system from the second drive.

Or simply try grub-install /dev/sdb
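
As a rough sanity check, you can dump the first sector of the new drive and look for the GRUB signature; GRUB legacy's stage1 embeds the string "GRUB" in the MBR:

# dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep GRUB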


Speed up a sync (after drive replacement)

cat /proc/sys/dev/raid/speed_limit_max

200000

cat /proc/sys/dev/raid/speed_limit_min

1000

This means the resync is throttled to a minimum of 1,000 KB/sec/disk and a maximum of 200,000 KB/sec/disk. To speed it up:

echo 50000 >/proc/sys/dev/raid/speed_limit_min

which sets the minimum to 50,000 KB/sec/disk (i.e. 50 times greater). Expect the rest of the system to feel a lot slower while the CPU and disks are busy resyncing!
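
The same throttle is exposed through sysctl; assuming your kernel provides the /proc/sys/dev/raid/ keys (any mdraid-capable kernel does), this form is equivalent. Like the echo above, it does not persist across reboots unless added to /etc/sysctl.conf:

sysctl -w dev.raid.speed_limit_min=50000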
