Gentoo Forums
[Solved]Raid doesn't want to extend
pascuol
n00b


Joined: 03 Dec 2006
Posts: 29

PostPosted: Fri Jul 20, 2012 2:24 pm    Post subject: [Solved]Raid doesn't want to extend

Hi,
I have a little problem with my RAID.

I've got a nice RAID.

I wanted to extend its size, like I do each time I add a bigger hard drive :) but today it doesn't want to..

I don't see what the cause is; it never happened before. :)

So I'd like to know why mdadm doesn't pick up the new size of my RAID6:

Code:
mdadm -G -z max /dev/md0
mdadm: component size of /dev/md0 has been set to 566516736K


whereas it should be:
Code:
LV Size                1006.03 GiB


So, usually, I would mark the disks as failed one by one and rebuild each of them after growing its LVM volume, but that now takes 2 days per disk and it seems unnecessary. This time I've just grown each LV, and that should be good enough.
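For reference, the disk-at-a-time cycle described here can be sketched as follows. This only prints the commands instead of running them, since every step is destructive; the member name and the +466G figure are illustrative assumptions, not values from the thread.

```shell
#!/bin/sh
# Sketch of the grow-one-member-at-a-time cycle: fail, remove,
# grow the underlying LV, re-add, then wait for the resync.
# Printed rather than executed, because each step is destructive.
grow_cycle() {
    member="$1"
    echo "mdadm /dev/md0 --fail $member"
    echo "mdadm /dev/md0 --remove $member"
    echo "lvextend -L +466G $member"    # grow the underlying LV (size is an example)
    echo "mdadm /dev/md0 --add $member" # re-add and let it resync
}
grow_cycle /dev/RAID/RaidVol1
```

One full resync per member is exactly what makes this route take days on large arrays.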

Tech specs:

Don't ask why I did it like this.. I've got my own reasons :)
My RAID6 sits on top of LVM, with 4 logical volumes as members:
Code:
lvdisplay
  --- Logical volume ---
  LV Name                /dev/RAID/RaidVol1
  VG Name                RAID
  LV UUID                hSSEva-AW5H-dVr2-bESF-LpuB-NMSc-sPfwlp
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1006.03 GiB
  Current LE             257543
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Name                /dev/RAID/RaidVol2
  VG Name                RAID
  LV UUID                PJ8ckn-2j6p-cC2X-CH4e-RQqR-9j6j-VZNzI7
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1006.03 GiB
  Current LE             257543
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Name                /dev/RAID/RaidVol3
  VG Name                RAID
  LV UUID                IC2XLv-s00R-8eoC-shos-TBSb-BoRZ-xr3biS
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1006.03 GiB
  Current LE             257543
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Logical volume ---
  LV Name                /dev/RAID/RaidVol4
  VG Name                RAID
  LV UUID                kWp0Go-xDI4-f7Ve-Fe9U-PdK5-3MCi-NtGJnh
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1006.03 GiB
  Current LE             257543
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3


but mdadm doesn't want to know about it:
Code:
# mdadm -G -z max /dev/md0
mdadm: component size of /dev/md0 has been set to 566516736K

It should be double that, but...
Any idea?

8O :D


Last edited by pascuol on Sun Jul 22, 2012 11:22 am; edited 1 time in total
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2463
Location: Germany

PostPosted: Fri Jul 20, 2012 10:51 pm

/proc/mdstat?
blockdev --getsize64 /dev/RAID/RaidVol*?
pascuol
n00b


Joined: 03 Dec 2006
Posts: 29

PostPosted: Sat Jul 21, 2012 12:22 pm

frostschutz wrote:
/proc/mdstat?
blockdev --getsize64 /dev/RAID/RaidVol*?


Code:
cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 dm-0[4] dm-3[7] dm-2[6] dm-1[5]
      1133033472 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>


Code:
# blockdev --getsize64 /dev/RAID/RaidVol*
1080213635072
1080213635072
1080213635072
1080213635072
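Those two outputs are consistent with each other: blockdev reports bytes, while md superblocks count 512-byte sectors, and the v1.2 metadata in this array reserves a 2048-sector data offset (visible in the mdadm -E output further down). The usable per-member size works out like this:

```shell
#!/bin/sh
# Convert the blockdev byte count to md's view of the member:
# 512-byte sectors, minus the 2048-sector data offset.
bytes=1080213635072          # blockdev --getsize64 output above
sectors=$((bytes / 512))     # total sectors on the LV
avail=$((sectors - 2048))    # sectors usable as array data
echo "$avail"                # 2109790208
```

That 2109790208 is exactly the "Avail Dev Size" mdadm should be reporting for every member.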
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2463
Location: Germany

PostPosted: Sat Jul 21, 2012 2:52 pm

Hmmm, interesting. Not sure what the problem is.

mdadm -E /dev/RAID/RaidVol*?
pascuol
n00b


Joined: 03 Dec 2006
Posts: 29

PostPosted: Sat Jul 21, 2012 3:17 pm

frostschutz wrote:
Hmmm, interesting. Not sure what the problem is.

mdadm -E /dev/RAID/RaidVol*?

Code:
# mdadm -E /dev/RAID/RaidVol*
/dev/RAID/RaidVol1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9df9f556:ef8f3fb6:208753b0:486acf83
           Name : cameleon:0  (local to host cameleon)
  Creation Time : Fri Jan 13 11:48:23 2012
     Raid Level : raid6
   Raid Devices : 4

 Avail Dev Size : 1133033472 (540.27 GiB 580.11 GB)
     Array Size : 2266066944 (1080.54 GiB 1160.23 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : cb5e81cf:534d2c02:1da19216:394cec54

    Update Time : Sat Jul 21 15:58:02 2012
       Checksum : 21000f2b - correct
         Events : 30330

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing)
/dev/RAID/RaidVol2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9df9f556:ef8f3fb6:208753b0:486acf83
           Name : cameleon:0  (local to host cameleon)
  Creation Time : Fri Jan 13 11:48:23 2012
     Raid Level : raid6
   Raid Devices : 4

 Avail Dev Size : 1133033472 (540.27 GiB 580.11 GB)
     Array Size : 2266066944 (1080.54 GiB 1160.23 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 25a8b562:eee9cfac:c865a211:46ed91f2

    Update Time : Sat Jul 21 15:58:02 2012
       Checksum : f78d5ad9 - correct
         Events : 30330

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing)
/dev/RAID/RaidVol3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9df9f556:ef8f3fb6:208753b0:486acf83
           Name : cameleon:0  (local to host cameleon)
  Creation Time : Fri Jan 13 11:48:23 2012
     Raid Level : raid6
   Raid Devices : 4

 Avail Dev Size : 1133033472 (540.27 GiB 580.11 GB)
     Array Size : 2266066944 (1080.54 GiB 1160.23 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 45e96e83:26305401:f8d1bcbe:59b93147

    Update Time : Sat Jul 21 15:58:02 2012
       Checksum : 6e851a75 - correct
         Events : 30330

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing)
/dev/RAID/RaidVol4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 9df9f556:ef8f3fb6:208753b0:486acf83
           Name : cameleon:0  (local to host cameleon)
  Creation Time : Fri Jan 13 11:48:23 2012
     Raid Level : raid6
   Raid Devices : 4

 Avail Dev Size : 2109790208 (1006.03 GiB 1080.21 GB)
     Array Size : 2266066944 (1080.54 GiB 1160.23 GB)
  Used Dev Size : 1133033472 (540.27 GiB 580.11 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 33035e2f:5a0d2643:7acffbc8:1e03456c

    Update Time : Sat Jul 21 15:58:02 2012
       Checksum : c5d078df - correct
         Events : 30330

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing)
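A quick consistency check on those superblock numbers: RAID6 on 4 members keeps 4 − 2 = 2 members' worth of data, so "Array Size" should be exactly twice the per-member "Avail Dev Size" (both in sectors), which it is:

```shell
#!/bin/sh
# RAID6 usable capacity = (members - 2 parity) * per-member size.
avail_dev=1133033472     # Avail Dev Size of RaidVol1-3, in sectors
data_disks=$((4 - 2))    # 4 members, 2 of them parity
echo $((avail_dev * data_disks))   # 2266066944, the Array Size above
```

The odd one out is RaidVol4, whose Avail Dev Size already reflects the grown LV while Used Dev Size still matches the others.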
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2463
Location: Germany

PostPosted: Sat Jul 21, 2012 5:34 pm

Hm, that's odd.

Did you do anything special for RaidVol4? Because that's the only one where it shows
Code:
Avail Dev Size : 2109790208 (1006.03 GiB 1080.21 GB)


Whereas for the others it says
Code:
Avail Dev Size : 1133033472 (540.27 GiB 580.11 GB)


So for some reason it's not detecting the new capacity of RaidVol1-3, but it has done so for RaidVol4.

Maybe it works after stopping & restarting the RAID? If all else fails you might have to remove, re-add, and resync each drive individually -- that's what you usually do anyway when replacing physical disks.
pascuol
n00b


Joined: 03 Dec 2006
Posts: 29

PostPosted: Sat Jul 21, 2012 8:38 pm

Unmounting/remounting or restarting the RAID doesn't change anything.

RaidVol4 is the one I rebuilt; I mean: remove, re-add, and resync.

I could probably do that for each of them, but to my thinking I shouldn't have to, and since it takes 2 days per disk... I'd rather avoid it.

Is there any other way to tell the RAID that the volume size has changed?

I'll think about it tonight, and maybe try a remove/re-add without resyncing, i.e. re-adding with the assume-clean option (or something like that).

Thanks for your help anyway; let me know if you have any good ideas :)
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2463
Location: Germany

PostPosted: Sat Jul 21, 2012 11:20 pm

You could try stopping the RAID, and then assembling it with the --update=devicesize option. That might force it to re-evaluate the size of its member drives.

Another (but more dangerous) option would be to re-create the RAID with the --assume-clean option. You'd have to use exactly the same settings as your existing RAID on create. Get any setting wrong and your data is corrupt.
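As a sketch, those two options would look roughly like this. The script only prints the commands rather than running them: both act on a live array, and the --create flags shown are assumptions mirroring the settings in the mdadm -E output (level 6, 4 devices, 512K chunk, 1.2 metadata) that would all need to be verified, including device order, before ever trying the dangerous variant.

```shell
#!/bin/sh
# Print, don't run: the second command destroys data if any
# setting (level, chunk, metadata, device order) differs from
# the original array.
show_options() {
    md="$1"
    echo "mdadm --stop $md"
    echo "mdadm --assemble $md --update=devicesize /dev/RAID/RaidVol[1-4]"
    echo "mdadm --create $md --assume-clean --level=6 --raid-devices=4 --chunk=512 --metadata=1.2 /dev/RAID/RaidVol[1-4]"
}
show_options /dev/md0
```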
pascuol
n00b


Joined: 03 Dec 2006
Posts: 29

PostPosted: Sun Jul 22, 2012 11:21 am

OK, cool, that was the command I needed.. :)

Code:
mdadm -A /dev/md0 /dev/RAID/RaidVol[1-4] --update=devicesize
Size was 1133033472
Size is 2109790208
Size was 1133033472
Size is 2109790208
Size was 1133033472
Size is 2109790208
Size was 2109790208
Size is 2109790208
mdadm: /dev/md0 has been started with 4 drives.


and now it's resyncing:
Code:
cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 dm-0[4] dm-3[7] dm-2[6] dm-1[5]
      2109790208 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [==========>..........]  resync = 53.7% (566777192/1054895104) finish=1961.0min speed=4148K/sec

unused devices: <none>


It will "only" take 2000 minutes instead of much more :lol: but I'm already able to resize my file system :)
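That ~2000-minute figure can be re-derived from the mdstat line itself: the remaining KiB divided by the reported speed gives seconds, then divide by 60 for minutes:

```shell
#!/bin/sh
# Re-derive mdstat's "finish=1961.0min" from the same line.
total=1054895104     # KiB to resync (denominator in the progress line)
synced=566777192     # KiB already done (the 53.7% position)
speed=4148           # KiB/sec as reported
echo $(( (total - synced) / speed / 60 ))   # 1961 minutes
```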


So now I have two methods to resize my RAID: one is really slow but keeps the RAID up; with the other one I need a short downtime to stop it. It would be nice to be able to use --update=devicesize without stopping the RAID.