Vieri (l33t)
Joined: 18 Dec 2005    Posts: 870
Posted: Mon Sep 22, 2014 9:15 am Post subject: degraded RAID1: how to read /proc/mdstat
Hi,
I set up two hard disks with four partitions each, all of them in RAID1.
Both drives were working fine until recently; then I checked /proc/mdstat and saw that one had failed:
Code: |
# cat /proc/mdstat
Personalities : [raid10] [raid1] [raid6] [raid5] [raid4] [raid0] [linear] [multipath]
md127 : active raid1 sda1[0] sdb1[1]
      1984 blocks [2/2] [UU]

md2 : active raid1 sdb2[1] sda2[0]
      131008 blocks [2/2] [UU]

md3 : active raid1 sdb3[1] sda3[2](F)
      524224 blocks [2/1] [_U]

md4 : active raid1 sdb4[1] sda4[2](F)
      292274240 blocks [2/1] [_U]

unused devices: <none>
|
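(As an aside, the degraded arrays can be spotted mechanically: a degraded member map such as [_U] contains an underscore, while a healthy one ([UU]) does not. A small sketch, using the mdstat sample above saved to a file — on a live system you would grep /proc/mdstat directly:)

```shell
# Save a cut-down copy of the mdstat sample from the post to a file.
cat > /tmp/mdstat.sample <<'EOF'
md2 : active raid1 sdb2[1] sda2[0]
      131008 blocks [2/2] [UU]
md4 : active raid1 sdb4[1] sda4[2](F)
      292274240 blocks [2/1] [_U]
EOF

# List only degraded arrays: match a member map ([...]) that contains an
# underscore, and print the preceding line (the "mdN : ..." header) too.
grep -B1 '\[[U_]*_[U_]*\]' /tmp/mdstat.sample
```

This prints the md4 header and its status line and skips the healthy md2.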
Let's take /dev/md4 as an example:
Code: |
# mdadm --query --detail /dev/md4
/dev/md4:
Version : 0.90
Creation Time : Fri Jul 18 18:08:59 2014
Raid Level : raid1
Array Size : 292274240 (278.73 GiB 299.29 GB)
Used Dev Size : 292274240 (278.73 GiB 299.29 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 4
Persistence : Superblock is persistent
Update Time : Mon Sep 22 11:05:59 2014
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
UUID : f6414992:af74e556:cb201669:f728008a
Events : 0.771421
    Number   Major   Minor   RaidDevice State
       0       0       0        0      removed
       1       8      20        1      active sync   /dev/sdb4
       2       8       4        -      faulty   /dev/sda4
|
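(For reference, once the faulty member has been identified, the usual recovery sequence uses mdadm --remove and --add on the affected partition. The sketch below uses the device names from the post and only echoes the commands via a DRY_RUN switch — run them for real only after verifying the disk's health, e.g. with smartctl:)

```shell
# Hypothetical recovery for the degraded md4 from the post.
# DRY_RUN=1 prints each command instead of executing it.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run mdadm --manage /dev/md4 --remove /dev/sda4   # drop the faulty member
run mdadm --manage /dev/md4 --add /dev/sda4      # re-add it (or a replacement disk's partition)
run cat /proc/mdstat                             # watch the resync progress
```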
I'd like to know how to read the above outputs, especially /proc/mdstat. I understand that sda4 is failing, but what does sda4[2] mean? [2] seems to mean "device number 2". However, my system only has two hard disks: [0] and [1]. So why is /proc/mdstat reporting [2]?
If I take a look at /dev/md2 which is not degraded:
Code: |
# mdadm --query --detail /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Fri Jul 18 18:08:59 2014
Raid Level : raid1
Array Size : 131008 (127.96 MiB 134.15 MB)
Used Dev Size : 131008 (127.96 MiB 134.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Fri Jul 25 09:32:58 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 5e089a5f:7b824c9a:cb201669:f728008a
Events : 0.18
    Number   Major   Minor   RaidDevice State
       0       8       2        0      active sync   /dev/sda2
       1       8      18        1      active sync   /dev/sdb2
|
So for /dev/md2, sda2 is device 0 and sdb2 is device 1.
My guess is that when a device/partition fails (such as sda4), the device number it had (i.e. [0]) is freed, the device is marked faulty, and it is reassigned a number greater than the number of physical devices (here my real devices are [0] and [1], so it was assigned [2]).
Am I getting this right, or am I misinterpreting the output?
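(The member tokens in /proc/mdstat can be pulled apart mechanically; a small sketch, with the token copied from the post — the parsing is a plain-shell illustration of the format, not anything mdadm ships:)

```shell
# Split an mdstat member token like "sda4[2](F)" into its parts:
# device name, role number in brackets, and the optional (F) fault flag.
token='sda4[2](F)'
dev=${token%%\[*}                                        # strip from the first '['
role=$(printf '%s\n' "$token" | sed 's/.*\[\([0-9]*\)\].*/\1/')
case $token in *'(F)'*) state=faulty ;; *) state=active ;; esac
echo "$dev role=$role state=$state"
```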
Thanks,
Vieri
eccerr0r (Watchman)
Joined: 01 Jul 2004    Posts: 9645    Location: almost Mile High in the USA
Posted: Wed Oct 01, 2014 10:34 pm Post subject:
The only thing I can think of is that when you created the array you somehow got a phantom device that was never really used... so device 0 is not really there and everything got shifted by one.