mdadm: destroys primary GPT table & size mismatch
Barnoid
Tux's lil' helper


Joined: 30 Jul 2004
Posts: 103

PostPosted: Sun Oct 12, 2014 8:26 am    Post subject: mdadm: destroys primary GPT table & size mismatch

Hi all,

We are trying to set up software RAID10 on our development machine (stable amd64) with eight 4 TB drives.

We pretty much followed this guide:
http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
but used metadata version 1.2.

All 8 disks were partitioned with fdisk (new GPT label, one partition spanning the entire disk), resulting in:
Code:
$ fdisk -l /dev/sdb
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 51AA9576-A0C5-4747-ACE5-09357B1ECA24

Device    Start          End   Size Type
/dev/sdb1  2048   7814037134   3.7T Linux RAID
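
(For reproducibility: we partitioned interactively in fdisk; the same layout could presumably be scripted with sgdisk along these lines, untested here:)
Code:
$ for d in /dev/sd[b-i]; do
>   sgdisk --zap-all "$d"                         # wipe any old partition table
>   sgdisk --new=1:2048:0 --typecode=1:FD00 "$d"  # one Linux RAID partition spanning the disk
> done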


Then the array was created using
Code:
$ mdadm -v --create /dev/md0 --level=10 --raid-devices=8 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
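
(For the record, recent mdadm versions default to 1.2 superblocks anyway; spelled out explicitly, the command would be something like:)
Code:
$ mdadm -v --create /dev/md0 --metadata=1.2 --level=10 --raid-devices=8 /dev/sd[b-i]1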


mdadm --detail listed the expected 16 TB array:
Code:
$ mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Oct  3 04:42:30 2014
     Raid Level : raid10
     Array Size : 15627544576 (14903.59 GiB 16002.61 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Oct 12 02:20:34 2014
          State : active
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : livecd:0  (local to host livecd)
           UUID : 59c87d4a:41ece55f:896cb814:aae1531a
         Events : 14280

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1
       4       8       81        4      active sync set-A   /dev/sdf1
       5       8       97        5      active sync set-B   /dev/sdg1
       6       8      113        6      active sync set-A   /dev/sdh1
       7       8      129        7      active sync set-B   /dev/sdi1


In the kernel, RAID (0/1/5/6/10) support and the device mapper (including its RAID target) were compiled in, not as modules.
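From memory, that corresponds roughly to the following options (CONFIG names as of kernel 3.14):
Code:
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_RAID0=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID10=y
CONFIG_MD_RAID456=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_RAID=y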

The RAID is only used for /home; everything else is on a normal SSD at /dev/sda, so booting into the newly installed system is no problem. However, the RAID array does not get detected at boot, no matter what we try: kernel auto-assembly (which may not work because we use the v1.2 metadata format), the mdraid init script (gives the red star with an empty line, similar to what is described here), and adding the partitions/array to /etc/mdadm.conf (sketched below) all fail.
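
(For completeness, the mdadm.conf attempt looked roughly like this; the ARRAY line is what mdadm --detail --scan produces from the array shown above:)
Code:
$ mdadm --detail --scan >> /etc/mdadm.conf
$ cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=livecd:0 UUID=59c87d4a:41ece55f:896cb814:aae1531a
$ rc-update add mdraid boot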

Fine, so we re-created the array from within our newly installed system (kernel 3.14.14, 64-bit). This is where things get weird. First, after re-partitioning all disks as shown above and starting the RAID array creation (also the same as above), fdisk -l /dev/sdX shows the following (note the "The primary GPT table is corrupt, but the backup appears OK, so that will be used." line):
Code:
$ fdisk -l /dev/sdb
The primary GPT table is corrupt, but the backup appears OK, so that will be used.

Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 51AA9576-A0C5-4747-ACE5-09357B1ECA24

Device    Start          End   Size Type
/dev/sdb1  2048   7814037134   3.7T Linux RAID


Then, mdadm --detail /dev/md0 gives an array that is only 8 TB (instead of the expected 16 TB):
Code:
$ mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Oct 12 17:00:31 2014
     Raid Level : raid10
     Array Size : 8589408256 (8191.50 GiB 8795.55 GB)
  Used Dev Size : 2147352064 (2047.87 GiB 2198.89 GB)
   Raid Devices : 8
  Total Devices : 8
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Oct 12 17:07:21 2014
          State : active, resyncing
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

  Resync Status : 0% complete

           Name : gotthard:0  (local to host gotthard)
           UUID : bbb9715d:5ec87285:62d21389:f0b76c33
         Events : 78

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1
       4       8       81        4      active sync set-A   /dev/sdf1
       5       8       97        5      active sync set-B   /dev/sdg1
       6       8      113        6      active sync set-A   /dev/sdh1
       7       8      129        7      active sync set-B   /dev/sdi1


The culprit seems to be the "Used Dev Size", which is only 2 TB instead of the expected 4 TB.
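A few checks we could run to see what size the kernel and the md superblock actually record for a component device (sketch only, output not captured here):
Code:
$ blockdev --getsize64 /dev/sdb1                    # partition size in bytes
$ mdadm --examine /dev/sdb1 | grep -i 'dev size'    # sizes recorded in the md superblock
$ grep sdb /proc/partitions                         # sizes in 1 KiB blocks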

Any ideas? In particular, is it normal for the primary GPT table to get destroyed? And why does mdadm use only 2 TB of the available 4 TB per device?