[SOLVED] RAID not autodetected...
slackline (Veteran)
Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

Posted: Tue May 01, 2012 10:20 am    Post subject: [SOLVED] RAID not autodetected...

Hi,

I recently found, on a rare reboot, that my RAID arrays are no longer autodetected, meaning LVM fails and localmount then doesn't mount the partitions.

I can't work out why, as they are detected fine by the kernel...

Code:

$ dmesg | grep raid -i
[    0.923041] md: raid0 personality registered for level 0
[    0.923559] md: raid1 personality registered for level 1
[    3.666164] md: If you don't use raid, use raid=noautodetect
[    3.667044] md: Autodetecting RAID arrays.
[    3.695225] md: invalid raid superblock magic on sda1
[    3.715943] md: invalid raid superblock magic on sdb1
[    3.766760] md/raid1:md1: active with 2 out of 2 mirrors
[   23.421337] md/raid1:md2: active with 2 out of 2 mirrors


But the mdadm init script fails with...

Code:

mdadm: No arrays found in config file or automatically


The LVM init script then fails, and when it gets to the localmount init script that fails too.

Strangely, though, the LVM script is then run a second time and succeeds, but /etc/init.d/localmount restart still fails until I restart lvm once more, after which I can mount.
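To get everything mounted after a boot I currently have to do something like the following by hand (roughly from memory):

Code:

# /etc/init.d/lvm restart
# /etc/init.d/localmount restart    # still fails at this point
# /etc/init.d/lvm restart
# /etc/init.d/localmount restart    # now succeeds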

I notice that there is a problem reported with the superblock magic on sda1 and sdb1.

So my questions are...

How to resolve the superblock magic?

If this is resolved am I likely to then see mdadm autodetecting the RAID arrays?

Should I just write a config file for detecting the RAID arrays?
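If the answer to the last one is yes, I'm assuming it would just be a case of dumping the scan output into the config, something like this (untested):

Code:

# mdadm --detail --scan >> /etc/mdadm.conf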

Thanks in advance,

slack
_________________
"Science is what we understand well enough to explain to a computer.  Art is everything else we do." - Donald Knuth


Last edited by slackline on Wed May 23, 2012 9:03 am; edited 1 time in total
DaggyStyle (Watchman)
Joined: 22 Mar 2006
Posts: 5909

Posted: Tue May 01, 2012 12:35 pm

have you updated either the kernel or udev lately?
_________________
Only two things are infinite, the universe and human stupidity and I'm not sure about the former - Albert Einstein
slackline
Posted: Tue May 01, 2012 12:59 pm

I've updated the kernel regularly over the last month, and udev too...

Code:

$ genlop -s gentoo-sources | tail | grep Apr
     Thu Apr  5 07:59:39 2012 >>> sys-kernel/gentoo-sources-3.3.1
     Tue Apr 17 07:51:28 2012 >>> sys-kernel/gentoo-sources-3.3.2
     Mon Apr 23 08:49:13 2012 >>> sys-kernel/gentoo-sources-3.3.3
     Sun Apr 29 09:35:59 2012 >>> sys-kernel/gentoo-sources-3.3.4
$ genlop -s udev | tail | grep Apr
     Sun Apr  1 08:51:04 2012 >>> sys-fs/udev-182-r3


My usual path for minor kernel updates is to 'make silentoldconfig'.
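For reference, the full routine for a minor bump is roughly the following (a sketch from memory, so don't take it as gospel):

Code:

# cd /usr/src/linux
# make silentoldconfig
# make && make modules_install
# make install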
DaggyStyle
Posted: Tue May 01, 2012 2:03 pm

slackline wrote:
I've updated the kernel regularly over the last month, and udev too...

Code:

$ genlop -s gentoo-sources | tail | grep Apr
     Thu Apr  5 07:59:39 2012 >>> sys-kernel/gentoo-sources-3.3.1
     Tue Apr 17 07:51:28 2012 >>> sys-kernel/gentoo-sources-3.3.2
     Mon Apr 23 08:49:13 2012 >>> sys-kernel/gentoo-sources-3.3.3
     Sun Apr 29 09:35:59 2012 >>> sys-kernel/gentoo-sources-3.3.4
$ genlop -s udev | tail | grep Apr
     Sun Apr  1 08:51:04 2012 >>> sys-fs/udev-182-r3


My usual path for minor kernel updates is to 'make silentoldconfig'.

have you created an initramfs?
slackline
Posted: Tue May 01, 2012 4:48 pm

No, I've never had one in the past and the two RAID arrays were autodetected without any problem.

Would it help though? The kernel is detecting them before /etc/init.d/mdadm is run.
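If it would help, I guess genkernel could build one with mdadm and LVM support baked in, something like this (untested, and I'd still need to point the bootloader at it):

Code:

# emerge --ask sys-kernel/genkernel
# genkernel --lvm --mdadm --install initramfs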
DaggyStyle
Posted: Tue May 01, 2012 5:51 pm

check out the relevant threads.
slackline
Posted: Tue May 01, 2012 6:40 pm

Ok, off to do some searching & reading, thanks for the pointers :)
slackline
Posted: Thu May 17, 2012 7:13 am

I've been busy and not had time to sit down and address this properly, but after a reboot today I'm now being told...

Code:

# dmesg | grep md
[    0.846807] ata9: PATA max UDMA/100 cmd 0xb000 ctl 0xb100 bmdma 0xb400 irq 16
[    0.846820] ata10: PATA max UDMA/100 cmd 0xb200 ctl 0xb300 bmdma 0xb408 irq 16
[    0.884755] uhci_hcd 0000:00:1a.0: uhci_check_and_reset_hc: cmd = 0x0000
[    0.891033] uhci_hcd 0000:00:1a.1: uhci_check_and_reset_hc: cmd = 0x0000
[    0.897416] uhci_hcd 0000:00:1a.2: uhci_check_and_reset_hc: cmd = 0x0000
[    0.903888] uhci_hcd 0000:00:1d.0: uhci_check_and_reset_hc: cmd = 0x0000
[    0.910080] uhci_hcd 0000:00:1d.1: uhci_check_and_reset_hc: cmd = 0x0000
[    0.916060] uhci_hcd 0000:00:1d.2: uhci_check_and_reset_hc: cmd = 0x0000
[    0.929040] md: raid0 personality registered for level 0
[    0.929557] md: raid1 personality registered for level 1
[    3.678402] md: Waiting for all devices to be available before autodetect
[    3.679147] md: If you don't use raid, use raid=noautodetect
[    3.680027] md: Autodetecting RAID arrays.
[    3.693425] md: invalid raid superblock magic on sda1
[    3.694204] md: sda1 does not have a valid v0.90 superblock, not importing!
[    3.704419] md: invalid raid superblock magic on sdb1
[    3.705181] md: sdb1 does not have a valid v0.90 superblock, not importing!
[    3.755911] md: Scanned 4 and added 2 devices.
[    3.756655] md: autorun ...
[    3.757359] md: considering sde1 ...
[    3.758062] md:  adding sde1 ...
[    3.758756] md:  adding sdd1 ...
[    3.759576] md: created md1
[    3.760263] md: bind<sdd1>
[    3.760941] md: bind<sde1>
[    3.761615] md: running: <sde1><sdd1>
[    3.763043] md/raid1:md1: active with 2 out of 2 mirrors
[    3.763707] md1: detected capacity change from 0 to 1000202174464
[    3.764388] md: ... autorun DONE.
[    7.759684]  md1: unknown partition table


So I now can't mount three of my LVM partitions, which stretch across my two RAID arrays, because /dev/sda1 and /dev/sdb1 are no longer imported.

I'm wondering if I could solve this using...

Code:

mdadm --misc --zero-superblock /dev/sda1
mdadm --misc --zero-superblock /dev/sdb1


...but I'm wary, as it's not clear to me whether this would wipe the data on the drives.

Can anyone advise how to recover/repair the superblock magic on /dev/sda1 and /dev/sdb1?

EDIT : A bit of searching suggests I may be able to simply recreate the RAID array as mdadm will recognise the existing elements and rebuild in a 'non-destructive' way. Can anyone confirm this?
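Before zeroing anything, I suppose the non-destructive thing to try is assembling the array by hand with mdadm, which (unlike the kernel autodetect) should cope with newer superblocks. Something like:

Code:

# mdadm --assemble --verbose /dev/md2 /dev/sda1 /dev/sdb1
# cat /proc/mdstat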

Thanks in advance,

slack


Last edited by slackline on Thu May 17, 2012 8:23 am; edited 1 time in total
DaggyStyle
Posted: Thu May 17, 2012 8:20 am

what superblock version does the raid see, if any?
slackline
Posted: Thu May 17, 2012 8:30 am

Not sure how I'd get that reported, as the kernel says it's invalid but gives no additional information.

I can't see a debugging option under Device Drivers -> Multiple devices driver support (RAID and LVM) to enable more information, or is it something mdadm can report?
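Actually, maybe mdadm itself can tell me, something along these lines:

Code:

# mdadm --examine /dev/sda1 | grep -i version
# mdadm --examine /dev/sdd1 | grep -i version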



Quote:
A bit of searching suggests I may be able to simply recreate the RAID array as mdadm will recognise the existing elements and rebuild in a 'non-destructive' way.

If this is true, that might be the most sensible approach to resolving the current problem; I'll then get an initramfs sorted ("working" from home today!).

Cheers

slack
DaggyStyle
Posted: Thu May 17, 2012 8:47 am

There is a way via mdadm, I think. Anyway, good luck with that.
slackline
Posted: Thu May 17, 2012 10:55 am

Ok, I've been reading the RAID wiki and it appears that the superblock version for /dev/sda1 and /dev/sdb1 is 1.2, whilst for the other two drives I've got arrayed (/dev/sdd1 and /dev/sde1) the version is 0.90...


Code:

# mdadm --examine /dev/sd*
/dev/sda:
   MBR Magic : aa55
Partition[0] :   1953522992 sectors at           63 (type fd)
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a5da90a1:4b0961fd:ef4eb791:0db0e092
           Name : darwin:2  (local to host darwin)
  Creation Time : Fri Aug 12 14:05:48 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953520944 (931.51 GiB 1000.20 GB)
     Array Size : 976760336 (931.51 GiB 1000.20 GB)
  Used Dev Size : 1953520672 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 515e03da:3fdd399d:a0e2cbdc:dfff379f

    Update Time : Wed May 16 19:40:13 2012
       Checksum : 8fbefbb5 - correct
         Events : 19


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   1953522992 sectors at           63 (type fd)
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : a5da90a1:4b0961fd:ef4eb791:0db0e092
           Name : darwin:2  (local to host darwin)
  Creation Time : Fri Aug 12 14:05:48 2011
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1953520944 (931.51 GiB 1000.20 GB)
     Array Size : 976760336 (931.51 GiB 1000.20 GB)
  Used Dev Size : 1953520672 (931.51 GiB 1000.20 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 4ed06a89:ad7fed8d:e743906e:a94515ec

    Update Time : Wed May 16 19:40:13 2012
       Checksum : e7bb732 - correct
         Events : 19


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :        96327 sectors at           63 (type 83)
Partition[1] :     41961780 sectors at        96390 (type 83)
Partition[2] :     20980890 sectors at     42058170 (type 83)
Partition[3] :    873518310 sectors at     63039060 (type 05)
/dev/sdc1:
   MBR Magic : aa55
mdadm: No md superblock detected on /dev/sdc2.
mdadm: No md superblock detected on /dev/sdc3.
/dev/sdc4:
   MBR Magic : aa55
Partition[0] :     64693692 sectors at           63 (type 83)
Partition[1] :    808824555 sectors at     64693755 (type 05)
mdadm: No md superblock detected on /dev/sdc5.
mdadm: No md superblock detected on /dev/sdc6.
/dev/sdd:
   MBR Magic : aa55
Partition[0] :   1953520002 sectors at           63 (type fd)
/dev/sdd1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 1072c19c:6a50e9c5:188edc34:4aa1c004 (local to host darwin)
  Creation Time : Sun Nov  8 07:36:27 2009
     Raid Level : raid1
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1

    Update Time : Thu May 17 11:49:12 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 5c046f76 - correct
         Events : 44868


      Number   Major   Minor   RaidDevice State
this     0       8       49        0      active sync   /dev/sdd1

   0     0       8       49        0      active sync   /dev/sdd1
   1     1       8       65        1      active sync   /dev/sde1
/dev/sde:
   MBR Magic : aa55
Partition[0] :   1953520002 sectors at           63 (type fd)
/dev/sde1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 1072c19c:6a50e9c5:188edc34:4aa1c004 (local to host darwin)
  Creation Time : Sun Nov  8 07:36:27 2009
     Raid Level : raid1
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1

    Update Time : Thu May 17 11:49:12 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 5c046f88 - correct
         Events : 44868


      Number   Major   Minor   RaidDevice State
this     1       8       65        1      active sync   /dev/sde1

   0     0       8       49        0      active sync   /dev/sdd1
   1     1       8       65        1      active sync   /dev/sde1


This is probably why they aren't being autodetected, as the wiki indicates they need to be 0.90 for the kernel's autodetection.

So I guess the question now is how to change the version?

Reading the Debian RAID FAQ suggests that one approach might be to zero the superblocks on /dev/sda1 and /dev/sdb1 and then rebuild the array with version 0.90 metadata. Can I 'safely' do this without losing my data?
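The other option I can see, which avoids touching the superblocks at all, would be to list the 1.2 array explicitly in /etc/mdadm.conf (using the Array UUID from the --examine output above) and let the init script assemble it, if I've read the man page right:

Code:

# echo 'ARRAY /dev/md2 metadata=1.2 UUID=a5da90a1:4b0961fd:ef4eb791:0db0e092' >> /etc/mdadm.conf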
slackline
Posted: Fri May 18, 2012 6:27 am

Right, I went ahead and recreated the array using the following, and both RAID arrays are now detected...

Code:

# mdadm --create --verbose /dev/md2 --metadata 0.9 --level=mirror --raid-devices=2 /dev/sda1 /dev/sdb1
# reboot
# dmesg | grep md
[    0.846799] ata9: PATA max UDMA/100 cmd 0xb000 ctl 0xb100 bmdma 0xb400 irq 16
[    0.846811] ata10: PATA max UDMA/100 cmd 0xb200 ctl 0xb300 bmdma 0xb408 irq 16
[    0.884739] uhci_hcd 0000:00:1a.0: uhci_check_and_reset_hc: cmd = 0x0000
[    0.890993] uhci_hcd 0000:00:1a.1: uhci_check_and_reset_hc: cmd = 0x0000
[    0.897372] uhci_hcd 0000:00:1a.2: uhci_check_and_reset_hc: cmd = 0x0000
[    0.903843] uhci_hcd 0000:00:1d.0: uhci_check_and_reset_hc: cmd = 0x0000
[    0.910032] uhci_hcd 0000:00:1d.1: uhci_check_and_reset_hc: cmd = 0x0000
[    0.915998] uhci_hcd 0000:00:1d.2: uhci_check_and_reset_hc: cmd = 0x0000
[    0.929010] md: raid0 personality registered for level 0
[    0.929526] md: raid1 personality registered for level 1
[    3.676394] md: Waiting for all devices to be available before autodetect
[    3.677142] md: If you don't use raid, use raid=noautodetect
[    3.678030] md: Autodetecting RAID arrays.
[    3.822911] md: Scanned 4 and added 4 devices.
[    3.823690] md: autorun ...
[    3.824418] md: considering sde1 ...
[    3.825140] md:  adding sde1 ...
[    3.825846] md:  adding sdd1 ...
[    3.826532] md: sdb1 has different UUID to sde1
[    3.827216] md: sda1 has different UUID to sde1
[    3.828021] md: created md1
[    3.828696] md: bind<sdd1>
[    3.829373] md: bind<sde1>
[    3.830043] md: running: <sde1><sdd1>
[    3.831446] md/raid1:md1: active with 2 out of 2 mirrors
[    3.832119] md1: detected capacity change from 0 to 1000202174464
[    3.832795] md: considering sdb1 ...
[    3.833440] md:  adding sdb1 ...
[    3.834077] md:  adding sda1 ...
[    3.834822] md: created md2
[    3.835453] md: bind<sda1>
[    3.836082] md: bind<sdb1>
[    3.836703] md: running: <sdb1><sda1>
[    3.837411] md/raid1:md2: active with 2 out of 2 mirrors
[    3.838044] md2: detected capacity change from 0 to 1000203681792
[    3.838678] md: ... autorun DONE.
[    7.466947]  md2: unknown partition table
[    7.467345]  md1: unknown partition table
[    7.497945] mdadm: sending ioctl 1261 to a partition!
[    7.497947] mdadm: sending ioctl 1261 to a partition!
[    7.497950] mdadm: sending ioctl 1261 to a partition!
[    8.475958] mdadm: sending ioctl 800c0910 to a partition!
[    8.475961] mdadm: sending ioctl 800c0910 to a partition!
[    8.475966] mdadm: sending ioctl 1261 to a partition!
[    8.475968] mdadm: sending ioctl 1261 to a partition!
[    8.476148] mdadm: sending ioctl 1261 to a partition!
[    8.476150] mdadm: sending ioctl 1261 to a partition!
[    8.476169] mdadm: sending ioctl 800c0910 to a partition!
[   15.305109] mdadm: sending ioctl 1261 to a partition!
[   15.305111] mdadm: sending ioctl 1261 to a partition!
[   15.305542] mdadm: sending ioctl 1261 to a partition!
[   15.305544] mdadm: sending ioctl 1261 to a partition!
[   15.305676] mdadm: sending ioctl 1261 to a partition!
[   15.305678] mdadm: sending ioctl 1261 to a partition!
[   15.305810] mdadm: sending ioctl 1261 to a partition!
[   15.305812] mdadm: sending ioctl 1261 to a partition!
[   15.305945] mdadm: sending ioctl 1261 to a partition!
[   15.305947] mdadm: sending ioctl 1261 to a partition!


But I now get an "unknown partition table" message and LVM fails to start fully. Previously I had /dev/vg/work, /dev/vg/music, /dev/vg/pics and /dev/vg/video, but only /dev/vg/work is now listed in /dev/vg...

Code:

# ls -l /dev/vg/
total 0
lrwxrwxrwx 1 root root 7 May 18 06:53 work -> ../dm-0


LVM is reporting that it can't find a specific UUID which isn't even listed in /etc/mdadm.conf...

Code:

# /etc/init.d/mdadm restart
 * Setting up the Logical Volume Manager ...
  Couldn't find device with uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn.
  Couldn't find device with uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn.
  Couldn't find device with uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn.
  Refusing activation of partial LV pics. Use --partial to override.
  Refusing activation of partial LV video. Use --partial to override.
  Refusing activation of partial LV music. Use --partial to override.
 * Failed to setup the LVM                                                                                                                                                                                                                                               [ !! ]
 * ERROR: lvm failed to start
 * Starting mdadm monitor ...
mdadm: No mail address or alert command - not monitoring.                                                                                                                                                                                                                [ !! ]
 * ERROR: mdadm failed to start
# grep JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn /etc/mdadm.conf
#


The VG tools know that there is a volume group there, but the UUID it's looking for isn't the same as the one it finds...
Code:

 # vgscan
  Reading all physical volumes.  This may take a while...
  Couldn't find device with uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn.
  Found volume group "vg" using metadata type lvm2
# vgdisplay -v
    Finding all volume groups
    Finding volume group "vg"
  Couldn't find device with uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn.
    There are 1 physical volumes missing.
  --- Volume group ---
  VG Name               vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  14
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476932
  Alloc PE / Size       261120 / 1020.00 GiB
  Free  PE / Size       215812 / 843.02 GiB
  VG UUID               r7Ys3b-EvBQ-KcTa-gOqO-tR81-9toT-dk1b61
   
  --- Logical volume ---
  LV Path                /dev/vg/pics
  LV Name                pics
  VG Name                vg
  LV UUID                tK7jFv-hyMu-VayK-DGiA-M1oP-4FPA-LR3IvB
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                320.00 GiB
  Current LE             81920
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vg/video
  LV Name                video
  VG Name                vg
  LV UUID                6u82kr-14bK-809S-393K-tlk5-ffHd-wxdIGW
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                350.00 GiB
  Current LE             89600
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vg/music
  LV Name                music
  VG Name                vg
  LV UUID                P78exE-Jdz1-LpwC-LzfI-ys55-MSHO-NhKLU6
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                300.00 GiB
  Current LE             76800
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vg/work
  LV Name                work
  VG Name                vg
  LV UUID                uckSUt-6Lv5-LiUn-zmTZ-Myzj-43uZ-7hrUez
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
   
  --- Physical volumes ---
  PV Name               /dev/md1     
  PV UUID               QgN6eY-UJQn-VmC0-qqx6-MEfD-FCe0-wKoM3J
  PV Status             allocatable
  Total PE / Free PE    238466 / 0
   
  PV Name               unknown device     
  PV UUID               JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn
  PV Status             allocatable
  Total PE / Free PE    238466 / 215812
   





I suspect it's the recreated array that is the problem, as pvdisplay -v shows the UUID...

Code:

# pvdisplay -v
    Scanning for physical volume names
  Couldn't find device with uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn.
    There are 1 physical volumes missing.
    There are 1 physical volumes missing.
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               vg
  PV Size               931.51 GiB / not usable 3.12 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               0
  Allocated PE          238466
  PV UUID               QgN6eY-UJQn-VmC0-qqx6-MEfD-FCe0-wKoM3J
   
  --- Physical volume ---
  PV Name               unknown device
  VG Name               vg
  PV Size               931.51 GiB / not usable 3.52 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              238466
  Free PE               215812
  Allocated PE          22654
  PV UUID               JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn
   


Found an old thread with a similar problem here but no solution.


EDIT : Just found this thread which suggests commenting out the following section from /etc/lvm/backup/vg might do the trick...

Code:

      pv1 {
         id = "JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn"
         device = "/dev/md2"   # Hint only

         status = ["ALLOCATABLE"]
         flags = []
         dev_size = 1953520672   # 931.511 Gigabytes
         pe_start = 2048
         pe_count = 238466   # 931.508 Gigabytes
      }


...and without loss of data!

Any thoughts on whether this is a "good idea" to try?
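The other route that keeps coming up is recreating the missing PV in place with its old UUID and then restoring the VG metadata from the backup, roughly like this (untested, and presumably only safe because the data itself hasn't been touched):

Code:

# pvcreate --uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn --restorefile /etc/lvm/backup/vg /dev/md2
# vgcfgrestore vg
# vgchange -ay vg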

slack
slackline
Posted: Sat May 19, 2012 6:46 am

For reference, I solved this using the last thread I linked above and by reading the Novell Solutions page.

It wasn't quite as described/suggested by the articles, as...

Code:

# pvscan
  Couldn't find device with uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn.
  PV /dev/md1         VG vg   lvm2 [931.51 GiB / 0    free]
  PV unknown device   VG vg   lvm2 [931.51 GiB / 843.02 GiB free]
  Total: 2 [1.82 TiB] / in use: 2 [1.82 TiB] / in no VG: 0 [0   ]
# pvcreate --restorefile /etc/lvm/archive/vg_00021-2112048483.vg --uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn
  Can only set uuid on one volume at once
  Run `pvcreate --help' for more information.


But given I was being told what the missing UUID was, I figured I could simply do things without a restore file...

Code:

# pvcreate --uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn /dev/md2 --norestorefile
  Writing physical volume data to disk "/dev/md2"
  Physical volume "/dev/md2" successfully created


I can now mount my logical volumes without any problem, and there was no need to restore the volume group metadata, although if I did need to there are backups, and it could be done with any of the files listed by...

Code:

# vgcfgrestore --list /dev/vg
   
  File:      /etc/lvm/archive/vg_00013.vg
  VG name:       vg
  Description:   Created *before* executing 'lvcreate -L10G -nref vg'
  Backup Time:   Sun Nov  8 13:52:01 2009

   
  File:      /etc/lvm/archive/vg_00014-142589210.vg
  VG name:       vg
  Description:   Created *before* executing 'vgextend vg /dev/md2'
  Backup Time:   Fri Aug 12 14:20:40 2011

   
  File:      /etc/lvm/archive/vg_00015-1435666049.vg
  Couldn't find device with uuid JqPNBk-noWD-H6HZ-foaW-RrbJ-92Iu-GeEvPn.
  VG name:       vg
  Description:   Created *before* executing 'lvremove /dev/vg/ref'
  Backup Time:   Fri Aug 12 14:22:25 2011

   
  File:      /etc/lvm/archive/vg_00016-1163068530.vg
  VG name:       vg
  Description:   Created *before* executing 'lvextend -L+10G /dev/vg/pics'
  Backup Time:   Fri Aug 12 14:23:14 2011

   
  File:      /etc/lvm/archive/vg_00017-1993976830.vg
  VG name:       vg
  Description:   Created *before* executing 'lvextend -L+40G /dev/vg/pics'
  Backup Time:   Fri Aug 12 14:23:28 2011

   
  File:      /etc/lvm/archive/vg_00018-412113985.vg
  VG name:       vg
  Description:   Created *before* executing 'lvextend -L+50G /dev/vg/video'
  Backup Time:   Fri Aug 12 14:23:38 2011

   
  File:      /etc/lvm/archive/vg_00019-890081175.vg
  VG name:       vg
  Description:   Created *before* executing 'lvextend -L+50G /dev/vg/music'
  Backup Time:   Fri Aug 12 14:40:36 2011

   
  File:      /etc/lvm/archive/vg_00020-1643279337.vg
  VG name:       vg
  Description:   Created *before* executing 'lvextend -L+50G /dev/vg/video'
  Backup Time:   Sat Feb 18 16:20:39 2012

   
  File:      /etc/lvm/archive/vg_00021-2112048483.vg
  VG name:       vg
  Description:   Created *before* executing 'lvextend -L+20G /dev/vg/pics'
  Backup Time:   Tue May 15 07:33:39 2012

   
  File:      /etc/lvm/archive/vg_00022-1086443861.vg
  VG name:       vg
  Description:   Created *before* executing 'vgscan --mknodes'
  Backup Time:   Fri May 18 08:02:29 2012

   
  File:      /etc/lvm/backup/vg
  VG name:       vg
  Description:   Created *after* executing 'vgscan --mknodes'
  Backup Time:   Fri May 18 08:02:29 2012


...using the following:

Code:

# vgcfgrestore -f /etc/lvm/archive/vg_00021-2112048483.vg vg


Oh, and I downgraded lvm2 based on this bug report; everything is now autodetected by the kernel (3.3.5) and automounted, happy days!
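(For anyone else hitting the same bug: on Gentoo the downgrade is just a case of masking the newer version and re-emerging, roughly as below; the version number here is only a made-up example, use whatever the bug report points at.)

Code:

# echo '>sys-fs/lvm2-2.02.88' >> /etc/portage/package.mask    # example version only
# emerge --oneshot sys-fs/lvm2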

Thanks for the pointers, DaggyStyle.