Thursday, 24 October 2019

mdadm - upgrading from 5 x 4TB to 5 x 10TB - pt 1

This assumes you want to keep the existing data while migrating from 20TB raw to 50TB raw, growing from 5 x 4TB drives to 5 x 10TB drives.

Wiping and partitioning the 10TB drives (Toshiba N300)

Use lsblk to identify the drives, then:

# parted /dev/sda
(parted) mklabel gpt
(parted) unit tb
(parted) mkpart primary 0tb 10tb 
(parted) set 1 raid on
(parted) align-check
alignment type(min/opt) [optimal]/minimal? optimal
Partition number? 1
1 aligned

(parted) print
Model: DAS TerraMaster (scsi)
Disk /dev/sdb: 10.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      0.00TB  10.0TB  10.0TB               primary
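
If you're doing this to several drives, the same steps can be run non-interactively. A minimal sketch using parted's script mode; /dev/sdX is a placeholder for the new drive, so double-check it against lsblk first, since mklabel destroys the existing partition table:

# script-mode (-s) equivalent of the interactive session above
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX unit tb mkpart primary 0tb 10tb
parted -s /dev/sdX set 1 raid on
parted -s /dev/sdX align-check optimal 1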

Adding the drive

Pull out one of the existing drives.

Boot the NAS server; the array won't come up due to the missing drive.
Bring it up via:

mdadm --stop /dev/md127

then:

mdadm --assemble --scan
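
It should assemble in degraded mode with four of the five drives active. Worth a quick sanity check before adding the new drive (not from the original session, just a verification step):

cat /proc/mdstat
mdadm --detail /dev/md127 | grep -E 'State|Devices'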

Add the new drive:

mdadm /dev/md127 --add /dev/sdX1

Initially it was syncing at around 144MB/sec (with 4 x WD Red 4TB drives + 1 x Toshiba N300 10TB):

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sde1[7] sda1[6] sdf1[5] sdb1[3] sdd1[2]
      15627540480 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [U_UUU]
      [>....................]  recovery =  0.6% (23770712/3906885120) finish=448.7min speed=144206K/sec
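
To keep an eye on the rebuild, and optionally raise the kernel's resync speed ceiling. The sysctl values below are assumptions to tune for your own hardware, not values from the original session:

watch -n 60 cat /proc/mdstat

# optional: raise the md resync speed limits (KB/sec per device)
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000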
    
Details during the first rebuild:
root@hal:/# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Tue Mar 11 23:31:15 2014
        Raid Level : raid5
        Array Size : 15627540480 (14903.58 GiB 16002.60 GB)
     Used Dev Size : 3906885120 (3725.90 GiB 4000.65 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Thu Oct 24 17:57:47 2019
             State : clean, degraded, recovering
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 31% complete

              Name : HAL:HAL
              UUID : 6fc77ab1:cb9bdc74:2d2f3938:9ba4b4e7
            Events : 1714

    Number   Major   Minor   RaidDevice State
       6       8        1        0      spare rebuilding   /dev/sda1
       1       8       65        1      active sync   /dev/sde1
       2       8       49        2      active sync   /dev/sdd1
       3       8       17        3      active sync   /dev/sdb1
       5       8       81        4      active sync   /dev/sdf1

With the new drive:

mdadm --examine /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 6fc77ab1:cb9bdc74:2d2f3938:9ba4b4e7
           Name : HAL:HAL
  Creation Time : Tue Mar 11 23:31:15 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19532607488 (9313.87 GiB 10000.70 GB)
     Array Size : 15627540480 (14903.58 GiB 16002.60 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 0 sectors
   Unused Space : before=261864 sectors, after=11718837248 sectors
          State : clean
    Device UUID : 27c4f732:455e40e8:cb75bd34:12e0a46a

    Update Time : Fri Oct 25 07:19:29 2019
  Bad Block Log : 512 entries available at offset 264 sectors
       Checksum : f3835a55 - correct
         Events : 1808

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
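
To sanity-check the whole set at once, the same --examine can be looped over every member. A sketch assuming the member partitions are the ones shown in the --detail output above:

for d in /dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdf1; do
  echo "== $d =="
  mdadm --examine "$d" | grep -E 'Device Role|Events|State'
done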

Once synced, pull out another of the old drives, then add the next 10TB one; repeat until all five have been replaced.
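
After all five 10TB drives are in and synced, the array and filesystem still have to be grown to use the new capacity (that's pt 2). The rough shape, assuming ext4 sits directly on /dev/md127:

# grow the array to use the full size of the new member devices
mdadm --grow /dev/md127 --size=max

# then grow the filesystem to fill the array (ext4 can do this online)
resize2fs /dev/md127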
