Sunday, 27 October 2019

mdadm - upgrading from 5 x 4TB to 5 x 10TB - pt 2

Sync all the drives

After resyncing all the drives, you should see something like this when examining each of them:

root@hal:~# mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 6fc77ab1:cb9bdc74:2d2f3938:9ba4b4e7
           Name : HAL:HAL
  Creation Time : Tue Mar 11 23:31:15 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19532607488 (9313.87 GiB 10000.70 GB)
     Array Size : 15627540480 (14903.58 GiB 16002.60 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=261864 sectors, after=11718837248 sectors
          State : clean
    Device UUID : 78e8aac7:94c62194:39c4ce3b:f42b826d

    Update Time : Fri Oct 25 12:40:20 2019
  Bad Block Log : 512 entries available at offset 264 sectors
       Checksum : 3e06ccae - correct
         Events : 1872

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
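
To sanity-check all five members without scrolling through the full output for each, a quick loop like this works (device names as they appear on this box, per the mdstat output below):

# Show just the role and array state for each member partition
for d in /dev/sd{a,b,d,e,f}1; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Device Role|Array State'
done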


After removing the last WD Red 4TB drive, syncing with 5 x Toshiba N300s gave a rebuild speed of:

root@hal:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sdd1[9] sda1[6] sdf1[5] sdb1[8] sde1[7]
      15627540480 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UU_UU]
      [=>...................]  recovery =  7.0% (274570032/3906885120) finish=297.8min speed=203225K/sec
     
unused devices: <none>

Which is over twice as fast at rebuilding 50TB as it was for 20TB.
NICE!
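
For what it's worth, md resync speed is also bounded by a pair of kernel sysctls. The defaults were fine here, but if a rebuild looks throttled they can be raised; the values below are just illustrative:

# Resync speed limits, in KB/s
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=500000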

Expand the /dev/md127 array

So after rebuilding, I'll need to expand the array so that it's using the full 10TB per disk rather than the existing 4TB.

To do this I ran:

mdadm --grow /dev/md127 --bitmap none
mdadm --grow /dev/md127 --size max
The bitmap is removed first, as running with bitmaps whilst resizing can be catastrophic, so I've read.
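
Once the resize has finished, the write-intent bitmap can presumably be put back with:

mdadm --grow /dev/md127 --bitmap internal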

Running the above gave this:
root@hal:/etc/sysctl.d# mdadm --examine /dev/sda
/dev/sda:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
root@hal:/etc/sysctl.d# mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 6fc77ab1:cb9bdc74:2d2f3938:9ba4b4e7
           Name : HAL:HAL
  Creation Time : Tue Mar 11 23:31:15 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19532607488 (9313.87 GiB 10000.70 GB)
     Array Size : 39065214976 (37255.49 GiB 40002.78 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=261864 sectors, after=0 sectors
          State : active
    Device UUID : 78e8aac7:94c62194:39c4ce3b:f42b826d

    Update Time : Sun Oct 27 00:15:41 2019
  Bad Block Log : 512 entries available at offset 264 sectors
       Checksum : e00442d4 - correct
         Events : 2248

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)

along with an approximately 8-hour resync time.
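
The resync can be watched while it runs in the usual way, for example:

watch -n 60 cat /proc/mdstat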

After this, check that the file system is OK before expanding it:

xfs_repair -v /dev/md127
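
Note that xfs_repair refuses to run on a mounted filesystem, so if the array is still mounted it needs unmounting first (mount point as per the df output below):

umount /mnt/md0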

After any repairs, mount the array and run xfs_growfs on it:

root@hal:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.6G     0  7.6G   0% /dev
tmpfs           1.6G  2.6M  1.6G   1% /run
/dev/sdc1       222G  157G   54G  75% /
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/0
tmpfs           1.6G     0  1.6G   0% /run/user/1000
/dev/md127       15T   15T  292G  99% /mnt/md0
root@hal:~# xfs_growfs  /dev/md127
meta-data=/dev/md127             isize=256    agcount=32, agsize=122090240 blks
         =                       sectsz=512   attr=2, projid32bit=0
         =                       crc=0        finobt=0 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=3906885120, imaxpct=5
         =                       sunit=128    swidth=512 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 3906885120 to 9766303744
root@hal:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.6G     0  7.6G   0% /dev
tmpfs           1.6G  2.6M  1.6G   1% /run
/dev/sdc1       222G  157G   54G  75% /
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/0
tmpfs           1.6G     0  1.6G   0% /run/user/1000
/dev/md127       37T   15T   23T  40% /mnt/md0
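
As a final sanity check, the array state and new size can be confirmed with:

mdadm --detail /dev/md127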


