Sunday 27 October 2019

mdadm - upgrading from 5 x 4TB to 5 x 10TB - pt 2

Sync all the drives

After resyncing all the drives, you should see this when examining each one. Note that Avail Dev Size now shows the full 10TB, while Used Dev Size is still the old 4TB - the extra space isn't used until the array is grown.

root@hal:~# mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 6fc77ab1:cb9bdc74:2d2f3938:9ba4b4e7
           Name : HAL:HAL
  Creation Time : Tue Mar 11 23:31:15 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19532607488 (9313.87 GiB 10000.70 GB)
     Array Size : 15627540480 (14903.58 GiB 16002.60 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=261864 sectors, after=11718837248 sectors
          State : clean
    Device UUID : 78e8aac7:94c62194:39c4ce3b:f42b826d

    Update Time : Fri Oct 25 12:40:20 2019
  Bad Block Log : 512 entries available at offset 264 sectors
       Checksum : 3e06ccae - correct
         Events : 1872

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)


After removing the last WD Red 4TB drive, syncing with 5 x Toshiba N300s gave a rebuild speed of:

root@hal:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sdd1[9] sda1[6] sdf1[5] sdb1[8] sde1[7]
      15627540480 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UU_UU]
      [=>...................]  recovery =  7.0% (274570032/3906885120) finish=297.8min speed=203225K/sec
     
unused devices: <none>

Which is over 40% faster than the initial rebuild speed (203MB/sec vs 144MB/sec), despite the drives now totalling 50TB raw rather than 20TB.
NICE!
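As an aside, if a rebuild looks throttled, the kernel's md resync speed limits can be raised. The values are in KB/sec; the figures below are just examples, not what I used:

sysctl -w dev.raid.speed_limit_min=100000
sysctl -w dev.raid.speed_limit_max=500000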

Expand the /dev/md127 array

So after rebuilding, I'll need to expand the array so that it uses the full 10TB per disk rather than the existing 4TB.

To do this I ran:

mdadm --grow /dev/md127 --bitmap none
mdadm --grow /dev/md127 --size max
The bitmap is removed first, as running with bitmaps whilst resizing can be catastrophic, or so I've read.
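Once the resize and its resync have finished, the internal bitmap can be put back:

mdadm --grow /dev/md127 --bitmap internal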

Running the above gave this:
root@hal:/etc/sysctl.d# mdadm --examine /dev/sda
/dev/sda:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
root@hal:/etc/sysctl.d# mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 6fc77ab1:cb9bdc74:2d2f3938:9ba4b4e7
           Name : HAL:HAL
  Creation Time : Tue Mar 11 23:31:15 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19532607488 (9313.87 GiB 10000.70 GB)
     Array Size : 39065214976 (37255.49 GiB 40002.78 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=261864 sectors, after=0 sectors
          State : active
    Device UUID : 78e8aac7:94c62194:39c4ce3b:f42b826d

    Update Time : Sun Oct 27 00:15:41 2019
  Bad Block Log : 512 entries available at offset 264 sectors
       Checksum : e00442d4 - correct
         Events : 2248

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)

and approximately an 8 hour resync time.

After the resync, check the file system is OK before expanding it. xfs_repair won't run on a mounted filesystem, so unmount the array first.
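The array was mounted at /mnt/md0 on my box, so:

umount /mnt/md0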

xfs_repair -v /dev/md127

After any repairs, mount the array and grow the filesystem with xfs_growfs.
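Assuming the same mount point as before:

mount /dev/md127 /mnt/md0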

root@hal:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.6G     0  7.6G   0% /dev
tmpfs           1.6G  2.6M  1.6G   1% /run
/dev/sdc1       222G  157G   54G  75% /
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/0
tmpfs           1.6G     0  1.6G   0% /run/user/1000
/dev/md127       15T   15T  292G  99% /mnt/md0
root@hal:~# xfs_growfs  /dev/md127
meta-data=/dev/md127             isize=256    agcount=32, agsize=122090240 blks
         =                       sectsz=512   attr=2, projid32bit=0
         =                       crc=0        finobt=0 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=3906885120, imaxpct=5
         =                       sunit=128    swidth=512 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 3906885120 to 9766303744
root@hal:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            7.6G     0  7.6G   0% /dev
tmpfs           1.6G  2.6M  1.6G   1% /run
/dev/sdc1       222G  157G   54G  75% /
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
tmpfs           1.6G     0  1.6G   0% /run/user/0
tmpfs           1.6G     0  1.6G   0% /run/user/1000
/dev/md127       37T   15T   23T  40% /mnt/md0



Thursday 24 October 2019

mdadm - upgrading from 5 x 4TB to 5 x 10TB - pt 1

This assumes you want to keep the existing data, and migrate from 20TB raw to 50TB raw.
This grows from 5 x 4TB drives to 5 x 10TB drives.

Wiping the 10TB drives (Toshiba N300)

Use lsblk to identify the drives, then:
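Something like the following makes the new Toshibas easy to pick out by model and serial:

lsblk -o NAME,SIZE,MODEL,SERIAL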

# parted /dev/sda
(parted) mklabel gpt
(parted) unit tb
(parted) mkpart primary 0tb 10tb 
(parted) set 1 raid on
(parted) align-check
alignment type(min/opt) [optimal]/minimal? optimal
Partition number? 1
1 aligned

(parted) print
Model: DAS TerraMaster (scsi)
Disk /dev/sdb: 10.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags

1 0.00TB 10.0TB 10.0TB primary
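If you're prepping all five new drives in one go, the same partitioning can be scripted - the device names below are examples, so check them against lsblk first:

for d in sda sdb sdd sde sdf; do
    parted -s /dev/$d mklabel gpt mkpart primary 0% 100% set 1 raid on
done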

Adding the drive

Pull out one of the existing drives

Boot the NAS server; the array won't come up due to the missing drive.
Bring it up via

mdadm --stop /dev/md127
Then
mdadm --assemble --scan
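If it still refuses to start degraded, --run tells mdadm to start the array even though a member is missing:

mdadm --assemble --scan --run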

Add drive

mdadm /dev/md127 --add /dev/sdX1
Initially it was syncing at around 144MB/sec (with 4 x WD Red 4TB drives + 1 x Toshiba N300 10TB):

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sde1[7] sda1[6] sdf1[5] sdb1[3] sdd1[2]
      15627540480 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [U_UUU]
      [>....................]  recovery =  0.6% (23770712/3906885120) finish=448.7min speed=144206K/sec
    
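The rebuild can be followed live with:

watch -n 60 cat /proc/mdstat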
Details during the first rebuild:
root@hal:/# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Tue Mar 11 23:31:15 2014
        Raid Level : raid5
        Array Size : 15627540480 (14903.58 GiB 16002.60 GB)
     Used Dev Size : 3906885120 (3725.90 GiB 4000.65 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Thu Oct 24 17:57:47 2019
             State : clean, degraded, recovering
    Active Devices : 4
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 31% complete

              Name : HAL:HAL
              UUID : 6fc77ab1:cb9bdc74:2d2f3938:9ba4b4e7
            Events : 1714

    Number   Major   Minor   RaidDevice State
       6       8        1        0      spare rebuilding   /dev/sda1
       1       8       65        1      active sync   /dev/sde1
       2       8       49        2      active sync   /dev/sdd1
       3       8       17        3      active sync   /dev/sdb1
       5       8       81        4      active sync   /dev/sdf1

With the new drive:
 mdadm --examine /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x2
     Array UUID : 6fc77ab1:cb9bdc74:2d2f3938:9ba4b4e7
           Name : HAL:HAL
  Creation Time : Tue Mar 11 23:31:15 2014
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19532607488 (9313.87 GiB 10000.70 GB)
     Array Size : 15627540480 (14903.58 GiB 16002.60 GB)
  Used Dev Size : 7813770240 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
Recovery Offset : 0 sectors
   Unused Space : before=261864 sectors, after=11718837248 sectors
          State : clean
    Device UUID : 27c4f732:455e40e8:cb75bd34:12e0a46a

    Update Time : Fri Oct 25 07:19:29 2019
  Bad Block Log : 512 entries available at offset 264 sectors
       Checksum : f3835a55 - correct
         Events : 1808

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)

Once synced, pull out the next 4TB drive and add another 10TB one; repeat until all five have been replaced.
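Before each swap, make sure the rebuild has actually finished - /proc/mdstat should show [5/5] [UUUUU] with no recovery line:

cat /proc/mdstat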

Thursday 1 August 2019

Cloning on BMW X6 M50d (2016)

Key Cloning and coding on a BMW X6 (F16) 2016

Background

So a bit of background here: I ordered a BMW X6 M50d in June 2016, and it arrived at the end of October 2016. Two months later I had a dodgy guy sitting outside my house at 8pm, smoking a cigarette, waiting for *something*. The moment I used my car key fob to open the boot, he dumped his cigarette and drove off.

This led me to believe he had cloned my car key.

A quick trip to BMW the next day, and the service agent there assured me it shouldn't be possible unless they had access to the ECU/OBD2 port, and my car had the latest updates. But if they're determined, they'll steal the car anyway. Great!

Key Cloning

Key types

So there are different types of keys: Passive Keyless Entry (think comfort access, where you don't press a button to unlock the car doors), and your standard radio key, which opens the car when you press a button.
There are several attack vectors for each type of key:
PKE - radio amplification attack.
Radio - replay attack.

Most keys work on 868MHz, 433MHz or 315MHz, and blanks can be bought off the internet from Chinese websites or even eBay.

OBD

Cloning can be done via the OBD port, and a maximum of 10 keys can be programmed into the ECU; after that a new ECU is needed. This is one of the easiest attacks: smash a window, plug a laptop into the OBD port, and clone the key onto a blank.

Coding

ESys

This is the software that enables you to do coding: for example, modifying the car software so that it recognises a non-factory-fit item you've installed, like Bluetooth.

FDL Coding

This enables you to personalise the car. I will be disabling the start/stop feature, or at least telling it to remember the setting from before the car was turned off.

There are many tutorials for hacking BMWs, and plenty of ways to obtain the software.

Building a new VMware server

So buying HP etc. is expensive... time to build your own from generic parts!

I bought this: https://www.jetwaycomputer.com/NF795.html and coupled it with a Crucial 32GB memory kit: https://uk.crucial.com/gbr/en/bls2k16g4s240fsd

Lots of issues - from booting taking too long, to restarts taking way too long...


So long story short - TURN OFF UEFI IN THE BIOS - it majorly fucks things up.