Gentoo Forums
RAID set not starting properly (solved maybe)
Gentoo Forums Forum Index » Kernel & Hardware
Bigun (Veteran)
Joined: 21 Sep 2003
Posts: 1954

Posted: Sun Jan 13, 2013 3:49 pm    Post subject: RAID set not starting properly (solved maybe)

I have a RAID5 array, md127, consisting of sda1, sdb1, and sdc1.

Upon boot, /proc/mdstat shows:

Code:
Personalities : [raid1] [raid6] [raid5] [raid4]
md123 : inactive sdc1[3](S) sdb1[1](S)
      2930269954 blocks super 1.2

md127 : inactive sda1[0](S)
      1465134977 blocks super 1.2

md124 : active raid1 sde1[1] sdd1[0]
      96256 blocks [2/2] [UU]

md125 : active raid1 sde2[1] sdd2[0]
      979840 blocks [2/2] [UU]

md126 : active raid1 sde3[1] sdd3[0]
      77074304 blocks [2/2] [UU]

unused devices: <none>


I then stop md123 and md127 and run mdadm --assemble --scan.

Code:
bigun # mdadm --assemble --scan
mdadm: /dev/md/127 has been started with 3 drives.


Then everything is fine.

Code:
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sda1[0] sdc1[3] sdb1[1]
      2930269184 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md124 : active raid1 sde1[1] sdd1[0]
      96256 blocks [2/2] [UU]

md125 : active raid1 sde2[1] sdd2[0]
      979840 blocks [2/2] [UU]

md126 : active raid1 sde3[1] sdd3[0]
      77074304 blocks [2/2] [UU]

unused devices: <none>


How do I fix this so that the array starts properly upon reboot?
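A likely fix, assuming mdadm's OpenRC mdraid service is installed (sys-fs/mdadm on Gentoo), is to record the arrays in /etc/mdadm.conf so they are assembled by UUID early in boot rather than left to kernel autodetect. A sketch, not verified on this box:

```shell
# Persist the array definitions (run as root); the mdraid init
# service reads /etc/mdadm.conf and assembles the arrays at boot.
mdadm --detail --scan >> /etc/mdadm.conf   # appends ARRAY lines with UUIDs
rc-update add mdraid boot                  # assemble from mdadm.conf in the boot runlevel
```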
_________________
BK - "Then, after months of loud-mouthed shit-talking and own-fart smelling, you lost the big bet, fair and square, and are being a baby about it."


Last edited by Bigun on Tue Jan 15, 2013 10:03 am; edited 1 time in total
slis (n00b)
Joined: 11 Oct 2010
Posts: 57
Location: Limanowa

Posted: Mon Jan 14, 2013 11:20 am

Can you show:
Code:

dmesg | grep md
Bigun (Veteran)
Joined: 21 Sep 2003
Posts: 1954

Posted: Mon Jan 14, 2013 11:59 pm

Code:
# dmesg | grep md
[    0.000000] Kernel command line: root=/dev/md126
[    0.489917] md: raid1 personality registered for level 1
[    0.489989] md: raid6 personality registered for level 6
[    0.490060] md: raid5 personality registered for level 5
[    0.490132] md: raid4 personality registered for level 4
[    4.925019] md: Waiting for all devices to be available before autodetect
[    4.925093] md: If you don't use raid, use raid=noautodetect
[    4.925333] md: Autodetecting RAID arrays.
[    4.944264] md: invalid raid superblock magic on sda1
[    4.944349] md: sda1 does not have a valid v0.90 superblock, not importing!
[    4.983419] md: invalid raid superblock magic on sdb1
[    4.983503] md: sdb1 does not have a valid v0.90 superblock, not importing!
[    5.011379] md: invalid raid superblock magic on sdc1
[    5.011463] md: sdc1 does not have a valid v0.90 superblock, not importing!
[    5.089528] md: Scanned 9 and added 6 devices.
[    5.089610] md: autorun ...
[    5.089677] md: considering sde3 ...
[    5.089751] md:  adding sde3 ...
[    5.089821] md: sde2 has different UUID to sde3
[    5.089891] md: sde1 has different UUID to sde3
[    5.089963] md:  adding sdd3 ...
[    5.090031] md: sdd2 has different UUID to sde3
[    5.090101] md: sdd1 has different UUID to sde3
[    5.090444] md: created md126
[    5.090514] md: bind<sdd3>
[    5.090589] md: bind<sde3>
[    5.090661] md: running: <sde3><sdd3>
[    5.091030] md/raid1:md126: active with 2 out of 2 mirrors
[    5.091114] md126: detected capacity change from 0 to 78924087296
[    5.091271] md: considering sde2 ...
[    5.091341] md:  adding sde2 ...
[    5.091410] md: sde1 has different UUID to sde2
[    5.091481] md:  adding sdd2 ...
[    5.091550] md: sdd1 has different UUID to sde2
[    5.091739] md: created md125
[    5.091806] md: bind<sdd2>
[    5.091879] md: bind<sde2>
[    5.091951] md: running: <sde2><sdd2>
[    5.092208] md/raid1:md125: active with 2 out of 2 mirrors
[    5.092289] md125: detected capacity change from 0 to 1003356160
[    5.092389] md: considering sde1 ...
[    5.092459] md:  adding sde1 ...
[    5.092528] md:  adding sdd1 ...
[    5.092701] md: created md124
[    5.092776] md: bind<sdd1>
[    5.092848] md: bind<sde1>
[    5.092921] md: running: <sde1><sdd1>
[    5.093146] md/raid1:md124: active with 2 out of 2 mirrors
[    5.093227] md124: detected capacity change from 0 to 98566144
[    5.093325] md: ... autorun DONE.
[    5.123642]  md126: unknown partition table
[    5.166284] UDF-fs: warning (device md126): udf_fill_super: No partition found (1)
[    5.204372] XFS (md126): Mounting Filesystem
[    5.384509] XFS (md126): Ending clean mount
[    8.336987]  md124: unknown partition table
[    8.500379]  md125: unknown partition table
[    8.546523] md: bind<sda1>
[    8.548516] md: bind<sdb1>
[    8.550144] md: bind<sdc1>
[   10.032886] Adding 979836k swap on /dev/md125.  Priority:-1 extents:1 across:979836k
[   10.054665] XFS (md127): SB buffer read failed
[  359.440817] md: md127 stopped.
[  359.440823] md: unbind<sda1>
[  359.442966] md: export_rdev(sda1)
[  361.525674] md: md123 stopped.
[  361.525681] md: unbind<sdc1>
[  361.531852] md: export_rdev(sdc1)
[  361.531900] md: unbind<sdb1>
[  361.536847] md: export_rdev(sdb1)
[  365.762656] md: md127 stopped.
[  365.763364] md: bind<sdb1>
[  365.763898] md: bind<sdc1>
[  365.764020] md: bind<sda1>
[  365.765414] md/raid:md127: device sda1 operational as raid disk 0
[  365.765417] md/raid:md127: device sdc1 operational as raid disk 2
[  365.765418] md/raid:md127: device sdb1 operational as raid disk 1
[  365.765656] md/raid:md127: allocated 3190kB
[  365.765839] md/raid:md127: raid level 5 active with 3 out of 3 devices, algorithm 2
[  365.765865] md127: detected capacity change from 0 to 3000595644416
[  365.766131]  md127: unknown partition table
[ 1234.385072] XFS (md127): Mounting Filesystem
[ 1234.588983] XFS (md127): Ending clean mount
[ 3672.304261] XFS (md127): Mounting Filesystem
[ 3672.446578] XFS (md127): Ending clean mount


I've also pastebin'd the full dmesg output.

It looks like udev is assembling that RAID.
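That matches the dmesg lines above: the kernel's built-in autodetect only understands v0.90 superblocks ("does not have a valid v0.90 superblock, not importing!"), while md127 was created with metadata 1.2 ("super 1.2" in mdstat), so it can only be assembled from userspace (udev/mdadm). A quick way to confirm the metadata version on the members (device names as in this thread):

```shell
# Each raid5 member should report metadata version 1.2, which the
# kernel's in-kernel raid autodetect cannot import (it only reads v0.90).
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 | grep -iE '^/dev/|version'
```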
s4e8 (Apprentice)
Joined: 29 Jul 2006
Posts: 191

Posted: Tue Jan 15, 2013 5:52 am

Off topic: you should select HIGHMEM64G, because you have 8 GB of RAM.
Bigun (Veteran)
Joined: 21 Sep 2003
Posts: 1954

Posted: Tue Jan 15, 2013 10:01 am

s4e8 wrote:
Off topic: you should select HIGHMEM64G, because you have 8 GB of RAM.


Thanks for the sweet tip! I thought that limit was a 32-bit thing, though; doesn't it take a 64-bit kernel to address memory that high?

(Last OT post I swear)
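For what it's worth, HIGHMEM64G enables PAE, which lets a 32-bit x86 kernel address more than 4 GB of physical RAM (each individual process is still limited to a 32-bit virtual address space). A quick check, assuming the kernel was built with CONFIG_IKCONFIG_PROC so /proc/config.gz exists:

```shell
grep -m1 -ow pae /proc/cpuinfo        # prints "pae" if the CPU supports it
zgrep HIGHMEM /proc/config.gz         # shows whether HIGHMEM64G is enabled
```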
Last edited by Bigun on Tue Jan 15, 2013 10:02 am; edited 1 time in total
Bigun (Veteran)
Joined: 21 Sep 2003
Posts: 1954

Posted: Tue Jan 15, 2013 10:02 am

Oddly enough, when I rebooted:

Code:
$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md127 : active raid5 sdc1[3] sdb1[1] sda1[0]
      2930269184 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

md124 : active raid1 sde1[1] sdd1[0]
      96256 blocks [2/2] [UU]

md125 : active raid1 sde2[1] sdd2[0]
      979840 blocks [2/2] [UU]

md126 : active raid1 sde3[1] sdd3[0]
      77074304 blocks [2/2] [UU]

unused devices: <none>


I'm going to mark the thread as solved until I see this behavior again.
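If the split assembly ever comes back, comparing the array UUID stored on each member can show whether a superblock has drifted; members whose UUIDs don't match get parked in separate inactive arrays, which is what the earlier md123/md127 split looked like. A sketch, device names as in this thread:

```shell
# All three members of one array should report the same Array UUID.
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 | grep -iE '^/dev/|array uuid'
```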