El_Presidente_Pufferfish Veteran
Joined: 11 Jul 2002 Posts: 1179 Location: Seattle
Posted: Fri Oct 12, 2012 1:32 am Post subject: RAID/LVM setup got duplicated somehow
A long time ago I set up a simple RAID-1 array of two drives (sdb, sdc).
I then set up a single LVM volume on top of it (/dev/mapper/storage).
Recently, I noticed that the two halves have somehow separated.
Code: | # cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1[0]
1953512400 blocks super 1.2 [2/1] [U_]
md127 : active raid1 sdb1[2]
1953512400 blocks super 1.2 [2/1] [_U]
unused devices: <none> |
Code: | # pvdisplay
Found duplicate PV CY6OYnQqJiI1XBwv2sU5H5L03f3cd8Ej: using /dev/md127 not /dev/md1
--- Physical volume ---
PV Name /dev/md127
VG Name storage
PV Size 1.82 TiB / not usable 2.95 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 476931
Free PE 5018
Allocated PE 471913
PV UUID CY6OYn-QqJi-I1XB-wv2s-U5H5-L03f-3cd8Ej |
It looks like I have two separate, degraded RAID arrays, md1 and md127, with a duplicate copy of the LVM physical volume on top of each.
Any idea how I got into this situation?
Is there any way I can fix this and get my RAID going again without data loss?
frostschutz Advocate
Joined: 22 Feb 2005 Posts: 2977 Location: Germany
Posted: Sun Oct 14, 2012 11:40 am Post subject:
I've no idea how you ended up with a RAID split. Never seen that one before...
Boot a rescue system (such as SystemRescueCD).
From there, start one RAID array or the other, never both at the same time.
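Before touching anything, it helps to confirm what the kernel currently sees. Here is a throwaway helper (not from the post) that flags degraded arrays in /proc/mdstat-style output; for illustration it reads the listing quoted above instead of the live file:

```shell
# Hypothetical check, not part of the recovery itself: flag degraded
# md arrays. On a live system you would pipe in /proc/mdstat instead.
mdstat='md1 : active raid1 sdc1[0]
      1953512400 blocks super 1.2 [2/1] [U_]
md127 : active raid1 sdb1[2]
      1953512400 blocks super 1.2 [2/1] [_U]'

# An [expected/active] field with active < expected means members are missing.
degraded=$(printf '%s\n' "$mdstat" | awk '
  /^md/ { name = $1 }                      # remember the array name
  /\[[0-9]+\/[0-9]+\]/ {
      split($0, a, /[][\/]/)               # a[2]=expected, a[3]=active
      if (a[3] + 0 < a[2] + 0)
          print name " is degraded (" a[3] "/" a[2] " members)"
  }')
printf '%s\n' "$degraded"
```

Both arrays reporting [2/1] confirms the split: each side is running alone with its partner missing.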
As long as only one RAID array is running, LVM should not complain about a duplicate PV. You should then be able to run vgscan, vgchange -a y, and lvchange -a y (commands from memory; check the man pages) to make the volumes visible and active. Then you should be able to mount the volumes (preferably read-only!).
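A sketch of that inspection pass from the rescue shell. The device, VG and mount-point names are taken from the thread or are placeholders; these commands act on real arrays, so double-check every name against your own setup first:

```shell
# Assemble only ONE of the split arrays (md127 / sdb1 here), leaving
# the other stopped so LVM sees a single copy of the PV.
mdadm --assemble /dev/md127 /dev/sdb1

# Make the volume group visible and activate its logical volumes.
vgscan
vgchange -a y storage

# Mount read-only so nothing on this side changes while you inspect it.
# /mnt/inspect is a placeholder mount point.
mkdir -p /mnt/inspect
mount -o ro /dev/mapper/storage /mnt/inspect
```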
With a volume mounted, you should (best option) make a backup of all files; at the very least, check the timestamps of the files.
Then umount, lvchange -a n, vgchange -a n, stop that RAID, start the other one, mount read-only again, and check the same things on the other side. Find out which side has the newer, up-to-date files.
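Switching over to inspect the other side, continuing with the same hypothetical names, might look like:

```shell
# Tear down the first side completely...
umount /mnt/inspect
vgchange -a n storage
mdadm --stop /dev/md127

# ...then bring up the other array and mount it, again read-only.
mdadm --assemble /dev/md1 /dev/sdc1
vgchange -a y storage
mount -o ro /dev/mapper/storage /mnt/inspect
```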
From your output it sounds like /dev/md127 is the up-to-date side, but you can't just assume that. The system could just as well have used one side one day and the other side the next, leaving you with a jumble. In that case you'd have to rsync the two back together somehow, taking the newer files from each side, or simply pick one side to keep. So it's best to make a full backup of both sides in case something turns out to be missing afterwards.
When that's done, stop the unused RAID and add its disk back to the array you want to keep. That is: mdadm --stop the unused array; if the keeper still lists the disk, mdadm --fail and --remove it; then mdadm --zero-superblock the removed disk and mdadm --add it back.
It should then sync the disk back into a single RAID.
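Assuming, purely for illustration, that md127 turned out to hold the good data and sdc1 from md1 is being recycled, the rebuild step could look like this. Note that --zero-superblock discards the old copy of the data on that disk, so only run it once you are sure which side you're keeping:

```shell
# Stop the array being abandoned.
mdadm --stop /dev/md1

# Wipe the stale RAID metadata so the disk can join cleanly.
# WARNING: this discards the old copy of the data on sdc1.
mdadm --zero-superblock /dev/sdc1

# Add it as a fresh member; md127 then resyncs onto it.
mdadm --add /dev/md127 /dev/sdc1

# Watch the rebuild progress.
cat /proc/mdstat
```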