
How to rebuild degraded RAID1 mdadm software RAID array


For this example, let's assume /dev/sda is the good/working member, and /dev/sdb is the failed (or newly inserted) drive:

 

1. "cat /proc/mdstat" to verify the condition of the array (which "md" members are present, which drives are good/missing/etc. for sanity):

 

 

2. Mirror the partition table from /dev/sda over to /dev/sdb using the following command:

  • If available, the following command will mirror the partitions to the new disk: "sfdisk -d /dev/sda | sfdisk /dev/sdb" (NOTE: "sfdisk" is not installed on all OSes, and older versions do not handle GPT disks; if this fails, see the alternative below).

 ELSE

  • If the above command is not available/successful, list the layout with "fdisk -l /dev/sda" and manually recreate matching partitions on /dev/sdb with "fdisk /dev/sdb", setting the partition type to "fd" (Linux raid autodetect) where the source drive uses it. A sketch of the copy step, including a GPT variant, follows this list.
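As a sketch, assuming /dev/sda is the surviving member and /dev/sdb is the blank replacement (and that the "gdisk" package is available for the GPT case), the copy step might look like:

    # MBR (or GPT with a recent util-linux): dump sda's table and write it to sdb
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # GPT alternative, using sgdisk from the gdisk package:
    sgdisk --replicate=/dev/sdb /dev/sda   # copy the partition table from sda to sdb
    sgdisk -G /dev/sdb                     # randomize GUIDs so the two disks don't collide

    # Verify the two disks now match:
    fdisk -l /dev/sda /dev/sdb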

 

 

3.  The new drive should now have a partition scheme matching the existing drive ("fdisk -l" can validate).  Re-integrate the RAID1 partitions back into the array, using the information from "cat /proc/mdstat" to match the partitions to the md members:

  • "mdadm --manage /dev/md[x] --add /dev/sdb[x]" (for each partition in the array; see the example below).

 

 

4.  Apply the boot-loader to the Master Boot Record of the new drive, to ensure the system can boot from either RAID member, using:

  • "grub2-install --recheck /dev/sdb" (if you are in a scenario where you don't know which drive is which, this should be safe to run against all RAID'd drives).

 

5.  The current rebuild status should be visible with "cat /proc/mdstat".  Assuming nothing went wrong with the above, you are done; the RAID will finish rebuilding on its own from here.
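To watch the rebuild progress refresh automatically, something like the following works (the recovery line shown is illustrative):

    # Re-run the status check every two seconds until the rebuild completes
    watch cat /proc/mdstat

    # While syncing, each rebuilding array shows a recovery line such as:
    #   [=>...................]  recovery =  8.7% (8523520/976030720) finish=81.1min speed=198744K/sec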

