Gentoo Forums
RAID array broken, can't boot

Gentoo Forums Forum Index :: Kernel & Hardware
ExecutorElassus
Veteran


Joined: 11 Mar 2004
Posts: 1156
Location: Stuttgart, Germany

PostPosted: Mon Feb 11, 2019 3:58 pm    Post subject: Reply with quote

Hi Neddy,

from this
Code:
 cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb1[1]
      97536 blocks [3/1] [_U_]
     
md124 : active raid1 sdc3[0] sda3[2] sdd3[1]
      9765504 blocks [3/3] [UUU]
     
md126 : inactive sdb4[3](S)
      965920692 blocks super 1.2
       
md127 : inactive sda4[2](S) sdc4[4](S) sdd4[3](S)
      2897762076 blocks super 1.2
       
md125 : active raid1 sdc1[0] sda1[2] sdd1[1]
      97536 blocks [3/3] [UUU]
     
unused devices: <none>
it looks like sdcN has always been set as Active device 0 on the arrays, so I think it's probably the same for what is here given as md127 (where it's listed as Active device 4).

ddrescue has been running now for almost nine hours and has rescued 3 more kb. I think I'll probably call it quits with that in a bit.

What's the command to assemble the array? Do I need to name it? Given what /proc/mdstat says, what command do I use to assemble what's listed here as md127? Also, it's important to note that what's on the array is a bunch of logical partitions which themselves will have to be mounted. How can I do that from the liveCD?

thanks for the help,

EE
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 42163
Location: 56N 3W

PostPosted: Mon Feb 11, 2019 4:26 pm    Post subject: Reply with quote

ExecutorElassus,

Here's one of my raid sets ... just one drive.

Code:
# mdadm -E /dev/sd[abcd]5
/dev/sda5:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 5e3cadd4:cfd2665d:96901ac7:6d8f5a5d
  Creation Time : Sat Apr 11 20:30:16 2009
     Raid Level : raid5
  Used Dev Size : 5253120 (5.01 GiB 5.38 GB)
     Array Size : 15759360 (15.03 GiB 16.14 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 126

    Update Time : Sat Jun 16 17:20:52 2018
          State : clean
Internal Bitmap : present
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 80b12c93 - correct
         Events : 108

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        5        0      active sync   /dev/sda5

   0     0       8        5        0      active sync   /dev/sda5
   1     1       8       21        1      active sync   /dev/sdb5
   2     2       8       37        2      active sync   /dev/sdc5
   3     3       8       53        3      active sync   /dev/sdd5


So it appears that the role starts at 0. This page supports that: https://raid.wiki.kernel.org/index.php/Mdadm-faq

From the man page.

Code:
ASSEMBLE MODE
       Usage: mdadm --assemble md-device options-and-component-devices...


Slot 0 is missing so
Code:
mdadm --assemble /dev/md1 --readonly missing /dev/sdb4 /dev/sda4
should bring up /dev/sdb4 /dev/sda4 as a degraded raid set on /dev/md1
You may need to tell it to --run if it assembles but does not start.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
ExecutorElassus
PostPosted: Mon Feb 11, 2019 4:35 pm    Post subject: Reply with quote

Hi Neddy,
here's what I tried:
Code:
 % mdadm --assemble /dev/md1 --readonly missing /dev/sdb4 /dev/sda4
mdadm: cannot open device missing: No such file or directory
mdadm: missing has no superblock - assembly aborted
root@sysresccd /root % mdadm --assemble /dev/md1 --readonly /dev/sdb4 /dev/sda4
mdadm: /dev/sdb4 is busy - skipping
mdadm: /dev/sda4 is busy - skipping
Then I stopped the arrays that were assembled but inactive. Then:
Code:
 % mdadm --assemble /dev/md127 --readonly /dev/sdb4 /dev/sda4
mdadm: /dev/md127 assembled from 1 drive - not enough to start the array.

Now 'cat /proc/mdstat' shows:
Code:
cat /proc/mdstat                                     
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb4[3](S) sda4[2](S)
      1931841384 blocks super 1.2
       
md124 : active raid1 sdc3[0] sda3[2] sdd3[1]
      9765504 blocks [3/3] [UUU]
     
md125 : active raid1 sdc1[0] sda1[2] sdd1[1]
      97536 blocks [3/3] [UUU]
     
unused devices: <none>


What now?

thanks for the help,

EE
NeddySeagoon
PostPosted: Mon Feb 11, 2019 4:40 pm    Post subject: Reply with quote

ExecutorElassus,

Don't use /dev/md127. It may be in config files, and it's certainly in the raid metadata as a preferred minor number, so choose a new md number.
I hadn't thought to stop the already-running single-drive arrays first. That's correct.
ExecutorElassus
PostPosted: Mon Feb 11, 2019 4:47 pm    Post subject: Reply with quote

Hi Neddy,

So now:
Code:
 % mdadm --stop /dev/md127
mdadm: stopped /dev/md127
root@sysresccd /root % mdadm --assemble /dev/md2 --readonly /dev/sdb4 /dev/sda4
mdadm: /dev/md2 assembled from 1 drive - not enough to start the array.
root@sysresccd /root % cat /proc/mdstat                                       
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdb4[3](S) sda4[2](S)
      1931841384 blocks super 1.2
       
md124 : active raid1 sdc3[0] sda3[2] sdd3[1]
      9765504 blocks [3/3] [UUU]
     
md125 : active raid1 sdc1[0] sda1[2] sdd1[1]
      97536 blocks [3/3] [UUU]
     
unused devices: <none>
It still won't assemble, and mdadm is apparently renaming it back to its preferred minor, md127.

What should I try next?

thanks for the help,

EE
NeddySeagoon
PostPosted: Mon Feb 11, 2019 5:02 pm    Post subject: Reply with quote

ExecutorElassus,

Code:
md127 : inactive sdb4[3](S) sda4[2](S)
      1931841384 blocks super 1.2


Try adding in --force
ExecutorElassus
PostPosted: Mon Feb 11, 2019 5:09 pm    Post subject: Reply with quote

Hi Neddy,

now we're here:

Code:
% mdadm --assemble /dev/md2 --readonly --force /dev/sdb4 /dev/sda4
mdadm: forcing event count in /dev/sdb4(1) from 1343686 upto 1347561
mdadm: /dev/md2 has been started with 2 drives (out of 3).
root@sysresccd /root % cat /proc/mdstat                                               
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active (read-only) raid5 sdb4[3] sda4[2]
      1931840512 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
     
md124 : active raid1 sdc3[0] sda3[2] sdd3[1]
      9765504 blocks [3/3] [UUU]
     
md125 : active raid1 sdc1[0] sda1[2] sdd1[1]
      97536 blocks [3/3] [UUU]
     
unused devices: <none>

So, should I mount it and look around?

EDIT: right, as I said: this raid array is an LVM physical volume. So here's what I get:

Code:
 % mkdir raidtest
root@sysresccd /root % mount /dev/md2 raidtest/
mount: /root/raidtest: unknown filesystem type 'LVM2_member'.

Is there a way to start the LVM on a read-only array?

Thanks for the help,

EE
PS, here's the info on the array:
Code:
% mdadm -D /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Wed Apr 11 00:10:50 2012
        Raid Level : raid5
        Array Size : 1931840512 (1842.35 GiB 1978.20 GB)
     Used Dev Size : 965920256 (921.17 GiB 989.10 GB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Feb  9 10:49:30 2019
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : domo-kun:carrier
              UUID : d42e5336:b75b0144:a502f2a0:178afc11
            Events : 1347561

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       3       8       20        1      active sync   /dev/sdb4
       2       8        4        2      active sync   /dev/sda4
and /proc/mdstat:
Code:
 % cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active (read-only) raid5 sdb4[3] sda4[2]
      1931840512 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
     
md124 : active raid1 sdc3[0] sda3[2] sdd3[1]
      9765504 blocks [3/3] [UUU]
     
md125 : active raid1 sdc1[0] sda1[2] sdd1[1]
      97536 blocks [3/3] [UUU]
     
unused devices: <none>
NeddySeagoon
PostPosted: Mon Feb 11, 2019 5:29 pm    Post subject: Reply with quote

ExecutorElassus,

Looks promising, but it's early days. Those 4000 writes bother me.

Code:
vgchange -ay
should try to start all logical volumes. That can take a long time if lvmetad is not running, as it will search every block device in /dev for volume groups to start.
You are breaking new ground here. I've not tried to start a volume group on a read only raid.
ExecutorElassus
PostPosted: Mon Feb 11, 2019 5:31 pm    Post subject: Reply with quote

Hi Neddy,

Code:
 % vgchange -ay
  17 logical volume(s) in volume group "vg" now active

it completed immediately.

now?

EDIT: Here's what I see now:

Code:
% vgchange -ay
  17 logical volume(s) in volume group "vg" now active
root@sysresccd /root % ls /mnt
backup  custom  floppy  gentoo  windows
root@sysresccd /root % ls /mnt/gentoo
root@sysresccd /root % cd /dev/vg
root@sysresccd /dev/vg % ls
carrier1  carrier2  carrier3  carrier4  carrier5  carrier6  carrier7  carrier8  carrier9  distfiles  home  opt  portage  tmp  usr  var  vartmp

Those are all links to dm-N files, which I assume to be the logical volumes.

UPDATE:
I've tried mounting a few of these. Here's what I get:
Code:

 % mkdir vgroup
root@sysresccd /root % mount /dev/vg/carrier1 vgroup
 % umount vgroup
root@sysresccd /root % mount /dev/vg/usr vgroup     
mount: /root/vgroup: can't read superblock on /dev/mapper/vg-usr.
root@sysresccd /root % mount /dev/vg/var vgroup
mount: /root/vgroup: can't read superblock on /dev/mapper/vg-var.
root@sysresccd /root % mount /dev/vg/opt vgroup
mount: /root/vgroup: can't read superblock on /dev/mapper/vg-opt.
root@sysresccd /root % mount /dev/vg/portage vgroup
 % umount vgroup
root@sysresccd /root % mount /dev/vg/home vgroup   
mount: /root/vgroup: can't read superblock on /dev/mapper/vg-home.
root@sysresccd /root % mount /dev/vg/distfiles vgroup
root@sysresccd /root % umount vgroup                 
root@sysresccd /root % mount /dev/vg/portage vgroup 
root@sysresccd /root % umount vgroup               
root@sysresccd /root % mount /dev/vg/vartmp vgroup
root@sysresccd /root % umount vgroup             

So some of them it can't mount. Is there a way to fix that? The google says I could try using dumpe2fs to find backup superblocks, then run fsck on the partition, but that would require write access, yes?

What next?

thanks for the help,

EE
NeddySeagoon
PostPosted: Mon Feb 11, 2019 5:57 pm    Post subject: Reply with quote

ExecutorElassus,

Code:
/dev/vg % ls
carrier1  carrier2  carrier3  carrier4  carrier5  carrier6  carrier7  carrier8  carrier9  distfiles  home  opt  portage  tmp  usr  var  vartmp


Those are all filesystems. You don't care about tmp and vartmp. You may be emotionally attached to distfiles. I am: I have all my distfiles since April 2009, when this box was new. Still, distfiles is expendable.
opt should only be binaries, so that can be recreated too.

System Rescue CD comes with /mnt/gentoo.
Code:
cd /mnt/gentoo
mkdir carrier1  carrier2  carrier3  carrier4  carrier5  carrier6  carrier7  carrier8  carrier9  distfiles  home  opt  portage  tmp  usr  var  vartmp
mount -o ro /dev/vg/carrier1  ./carrier1
...

and look at files. Ignore expendable filesystems if you want.

We know that the LVM metadata is OK.
If all the mounts work, some of the filesystem metadata is OK too.

We can map logical volumes to the array and to the underlying HDDs too and see which filesystems are damaged.
Put the output of
Code:
/sbin/lvdisplay -am
onto a pastebin site.
Put your ddrescue log onto a pastebin site too.
It's not difficult to map the holes in the log to the allocated segments in your logical volumes.

From one of mine ...
Code:
 LV Path                /dev/vg/usr
...
  --- Segments ---
  Logical extents 0 to 5119:
    Type      linear
    Physical volume   /dev/md127
    Physical extents   0 to 5119
   
  Logical extents 5120 to 10239:
    Type      linear
    Physical volume   /dev/md127
    Physical extents   315136 to 320255
A physical extent is a 4 MiB block by default.
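To turn an extent number from lvdisplay into a byte offset, multiply by the extent size. A quick side calculation (mine, not from the thread), using home's last extent from the listing above and assuming the default 4 MiB extents:

```shell
# Convert a physical-extent index from lvdisplay into a byte offset,
# assuming the default 4 MiB extent size. 24831 is home's last extent.
PE_SIZE=$(( 4 * 1024 * 1024 ))            # 4 MiB in bytes
LAST_EXTENT=24831
END_BYTES=$(( (LAST_EXTENT + 1) * PE_SIZE ))
echo "$END_BYTES"                          # where home ends on the array
```

With true 4 MiB extents this comes to about 104 GB; rounding the extent to a decimal 4 MB gives the 99 GB figure used later in the thread. Either way the conclusion is the same.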
ExecutorElassus
PostPosted: Mon Feb 11, 2019 6:17 pm    Post subject: Reply with quote

Here's the pastebin for the attempted mounts and the lvdisplay: https://pastebin.com/MfzUpiKj
and here's the ddrescue.map file: https://pastebin.com/izG41dN5

Would running fsck on the volumes using a backup superblock allow them to be fixed and then mounted? So far, I don't see any glaring errors (but the stuff I care about is thousands of files in hundreds of subdirectories, so I doubt I'd ever find them all).

How does it look? Are we making progress? Should I, at some point, switch to using /dev/sdd4, as it is the non-broken drive?

thanks for the help,

EE
NeddySeagoon
PostPosted: Mon Feb 11, 2019 6:52 pm    Post subject: Reply with quote

ExecutorElassus,

Code:
root@sysresccd /mnt/gentoo % mount -o ro /dev/vg/home  ./home   
mount: /mnt/gentoo/home: can't read superblock on /dev/mapper/vg-home


Where we go from there depends on the filesystem. fsck is a last resort. In the face of missing metadata, it guesses.
It tries to make the metadata self-consistent without regard to user data, and often makes a bad situation worse.

What filesystem is home? extX keeps backup superblock copies which mount can use if you tell it to.
Also, how big is home? That will be in your pastebin.
Code:
 LV Path                /dev/vg/home
  Segments               1
  --- Segments ---
  Logical extents 0 to 15359:
    Type        linear
    Physical volume /dev/md2
    Physical extents    9472 to 24831
That's near the front of the raid set.

From the map file
Code:
#      pos        size  status
0x00000000  0xAAA4A18800  +
has been recovered.
That's 732,906,489,856 B, or 732 salesman GB. That means the first 1.4TB of the raid should be good.
home ends at 99,324,000,000 bytes, or 99G. That's 24831 * 4MB.
So home is in the recovered region.
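As a sanity check on those numbers (a side calculation of mine, not from the thread), the ddrescue map's recovered length converts and doubles like this:

```shell
# The ddrescue map shows 0xAAA4A18800 bytes recovered from the failing
# member. On a 3-disk RAID5, each stripe holds two data chunks per three
# member chunks, so the recovered stretch of one member covers roughly
# twice that much array data (assuming the other member reads cleanly).
RECOVERED=$(( 0xAAA4A18800 ))
echo "$RECOVERED"             # bytes recovered from the one member
echo $(( RECOVERED * 2 ))     # approximate good bytes of array data
```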

There are two potential issues.
1) those 4000 writes means sd[ab]4 are out of sync.
2) the old sdb is damaged in that region but ddrescue has recovered it on sdb-new4

Look around all the carrierX and see what you can see. Are the files good?
Its progress, we should be able to repeat this with sdd4 in place of sdb4 and maybe get more.

There are two approaches now.
Take the raid down, swap sdb4 for sdd4, and see if it looks better.
Try to mount home with an alternate superblock.

The first backup superblock is at 131072 on home.
Try
Code:
mount -o ro,sb=131072 /dev/vg/home  ./home
and read
Code:
man ext4


That's harmless if I've got the number wrong, so you can try adding sb=131072 to the other failed mounts too.
There are more backup superblocks, but that one is in the man page, so it's easy to find.
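Where the 131072 comes from (a side note, assuming the usual 4 KiB filesystem block size on a volume this big): mount's sb= option counts in 1 KiB units, while ext3 places its first backup superblock at filesystem block 32768.

```shell
# mount's sb= option is given in 1 KiB units; ext3 puts the first backup
# superblock at filesystem block 32768 when the block size is 4 KiB.
FS_BLOCK_KIB=4
FIRST_BACKUP_BLOCK=32768
SB_ARG=$(( FIRST_BACKUP_BLOCK * FS_BLOCK_KIB ))
echo "sb=$SB_ARG"
```

If the filesystem actually uses 1 KiB blocks, the first backup sits at 8193 instead; dumpe2fs on the volume reports the real locations.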
ExecutorElassus
PostPosted: Mon Feb 11, 2019 7:00 pm    Post subject: Reply with quote

I think at this point I'd like to start trying to work with sdd4, in case I either try to make writes, or if there's more data recovered there.

How do I shut down the active volume groups?
UPDATE: nvm I figured out how to use 'vgchange -an' to shut it down. Now I've stopped /dev/md2 and restarted it with /dev/sdd4, and activated the volume groups. I'll update in a sec when I try your last suggestions.

Thanks for the help,

EE
ExecutorElassus
PostPosted: Mon Feb 11, 2019 7:16 pm    Post subject: Reply with quote

Hi Neddy,

Using backup superblocks I managed to mount all the remaining partitions except /dev/vg/portage. For that one I found a different backup superblock using dumpe2fs | grep superblock, tried it, and it mounted as well.

So all the partitions mount, and a cursory look inside shows them all having the contents they should (it's worth noting, btw, that the signal for me a couple weeks ago that a drive was failing was that portage kept failing to emerge --sync due to permissions and other problems, so I think this was the partition where a lot of the bad blocks accumulated).

Code:
 % mount                                                           
udev on /dev type devtmpfs (rw,nosuid,relatime,size=10240k,nr_inodes=4098759,mode=755)
/dev/sr0 on /livemnt/boot type iso9660 (ro,relatime,nojoliet,check=s,map=n,blocksize=2048,fmode=644)
/dev/loop0 on /livemnt/squashfs type squashfs (ro,relatime)
tmpfs on /livemnt/memory type tmpfs (rw,relatime)
none on / type aufs (rw,noatime,si=1cd62804c614d2a2)
tmpfs on /livemnt/tftpmem type tmpfs (rw,relatime,size=524288k)
none on /tftpboot type aufs (rw,relatime,si=1cd62804c6818aa2)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run type tmpfs (rw,nodev,relatime,size=3283428k,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/sdg1 on /root/usb type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/mapper/vg-carrier1 on /mnt/gentoo/carrier1 type ext3 (ro,relatime,stripe=256,data=ordered)
/dev/mapper/vg-var on /mnt/gentoo/var type ext3 (ro,relatime,sb=131072,stripe=256,data=ordered)
/dev/mapper/vg-usr on /mnt/gentoo/usr type ext3 (ro,relatime,sb=131072,stripe=256,data=ordered)
/dev/mapper/vg-home on /mnt/gentoo/home type ext3 (ro,relatime,sb=131072,stripe=256,data=ordered)
/dev/mapper/vg-opt on /mnt/gentoo/opt type ext3 (ro,relatime,sb=131072,stripe=256,data=ordered)
/dev/mapper/vg-portage on /mnt/gentoo/portage type ext2 (ro,relatime,sb=24577,errors=continue,user_xattr,acl)
So portage is ext2 (not sure why, but this may be part of the problem; it might be worth reformatting to ext3), and it looks like all the rest are ext3.

What's the next step?

thanks for the help,

EE
NeddySeagoon
PostPosted: Mon Feb 11, 2019 9:41 pm    Post subject: Reply with quote

ExecutorElassus,

We *know* that sdd4 has holes in it. It's a question of where, and what is affected.
Just because things mount does not mean that the file contents are correct.

When you read a raid5, two out of the three (in your case) drives are used to decode the data;
you need both parts. With sdb4, you would get read errors when you hit a failed block.
With sdd4, it will silently return rubbish.
That rubbish may be file content, directory content, or (now unlikely) key filesystem metadata.

While you are using mdadm --readonly you won't do any damage to your data.

If you put the raid together with sd[ac] you may get a different subset of logical volumes that work.

You said that sdb-new is 2GB ?
If that's correct, it will hold all the data from the raid set. Hold that thought.
It may be that different pairings of drives in md2 give you correct access to different LVM filesystems. If that's true, then in place data recovery may not be possible, but you can copy all the files to sdb-new.

That's a bit simplistic. If sdd4 is made to fill the remaining space on sdd (it may be like that anyway), then it can be used to hold all the data from md2 while its other partitions are members of the other raid sets. There is a big difference between reading the data and reading correct data. The only way you can verify the data is correct is by examining it.
Like I've already said, some filesystems are expendable. Don't bother recovering them.
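One concrete way to do that examining (my sketch, not from the thread): checksum every file under two mounts of the same data, e.g. the array assembled with sd[ab]4 versus sd[ac]4, and diff the lists. Shown here on throwaway directories; substitute the real mount points.

```shell
# Checksum every file under two mounts and diff the sorted lists;
# identical lists mean both assemblies decode the files the same way.
# A and B are throwaway stand-ins for the two mount points.
A=$(mktemp -d); B=$(mktemp -d)
echo same > "$A/f"; echo same > "$B/f"        # stand-in data
( cd "$A" && find . -type f -exec md5sum {} + | sort -k2 ) > /tmp/a.sums
( cd "$B" && find . -type f -exec md5sum {} + | sort -k2 ) > /tmp/b.sums
diff /tmp/a.sums /tmp/b.sums && echo identical
```

Any file whose checksum differs between the two assemblies is suspect; files that match both ways are very likely intact.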

Looking through your logical volume to HDD map
Code:
 LV Path                /dev/vg/usr
  LV Size                20.00 GiB
  --- Segments ---
  Logical extents 0 to 5119:
    Type        linear
    Physical volume /dev/md2
    Physical extents    0 to 5119

That's correctly recovered to sdb-new, as it's all before the first read error at about 349477 physical extents into the volume group / raid.

Code:
  LV Path                /dev/vg/portage
  LV Size                3.00 GiB
  --- Segments ---
  Logical extents 0 to 511:
    Type        linear
    Physical volume /dev/md2
    Physical extents    5120 to 5631
That follows immediately after /dev/vg/usr in the physical space, so its copy is good too.
Don't spend any time on recovery. It's just a portage snapshot, only an emerge --sync away.

Code:
  LV Path                /dev/vg/distfiles
  LV Size                15.00 GiB
  --- Segments ---
  Logical extents 0 to 3839:
    Type        linear
    Physical volume /dev/md2
    Physical extents    5632 to 9471

The same rationale as /dev/vg/portage applies. We are only 38G down the raid, so the ddrescue copy is good.

Code:
  LV Path                /dev/vg/home
  LV Size                60.00 GiB
  --- Segments ---
  Logical extents 0 to 15359:
    Type        linear
    Physical volume /dev/md2
    Physical extents    9472 to 24831

That's 98G down the raid, but it wouldn't mount. There are no holes in the copy, so it must be the different event counts causing the mount issue.

Code:
  LV Path                /dev/vg/opt
  LV Size                4.00 GiB

  --- Segments ---
  Logical extents 0 to 1023:
    Type        linear
    Physical volume /dev/md2
    Physical extents    24832 to 25855

102G from the start. opt should only be binary installs. Don't bother recovering it. Look and see what's there and re-emerge those packages.

Code:
  LV Path                /dev/vg/var
  LV Size                4.00 GiB
  --- Segments ---
  Logical extents 0 to 1023:
    Type        linear
    Physical volume /dev/md2
    Physical extents    25856 to 26879

Only 106G of raid used so far.
You need some files here. It's how portage knows what's installed. /var/lib/portage/world is essential. /var/db/pkg/* is what tells portage exactly what is installed.

Code:
  LV Path                /dev/vg/tmp
  LV Size                4.00 GiB
  --- Segments ---
  Logical extents 0 to 1023:
    Type        linear
    Physical volume /dev/md2
    Physical extents    26880 to 27903

110G used. Throw this filesystem away. It's cleared every boot anyway.

Code:
  LV Path                /dev/vg/vartmp
  LV Size                15.00 GiB
  --- Segments ---
  Logical extents 0 to 2559:
    Type        linear
    Physical volume /dev/md2
    Physical extents    27904 to 30463
   
  Logical extents 2560 to 3839:
    Type        linear
    Physical volume /dev/md2
    Physical extents    465664 to 466943

This is the first logical volume that's been extended. That means we need to do the arithmetic instead of just adding up the sizes, and take account of the locations.
The first part is OK, as it's before 349477 physical extents into the volume group. The second part is harder.
If that's portage's build space, throw it away and make a new filesystem.

Code:
  LV Path                /dev/vg/carrier1
  LV Size                100.00 GiB
  --- Segments ---
  Logical extents 0 to 25599:
    Type        linear
    Physical volume /dev/md2
    Physical extents    30464 to 56063


The first sdb4 read error falls in here. Then there is a big rash of errors close together.

-- edit --

The data for carrier1 and beyond is damaged on sdb4, and therefore on sdd4 too. You can keep ddrescue going to try to recover more data, or try to bring up the raid with sd[ac]4 and look around.
The recovery and event count issues apply there too.

If you are really, really lucky, the sdb4 damage is in unused areas of the drive, so while ddrescue can't copy it, a filesystem-level read might work.
That means no recovery in place, though.
ExecutorElassus
PostPosted: Mon Feb 11, 2019 9:55 pm    Post subject: Reply with quote

Hi Neddy,

Yes, the new HDD is 2TB (not GB, as you typed in your last message). That means, theoretically, that I could copy all of one of the other drives onto the extra space if needed.

So there is a big block of bad data on vg-carrier1, and most everything that comes before it is expendable (I'd like to keep /usr, just because I have a lot of fonts and stuff under /usr/share, and a bunch of custom ebuilds in /usr/local, but those aren't priorities). vg-carrier1 is more concerning: that's where all of my work files are (documents, papers, articles, invoices, etc.), so I need to do a more thorough investigation of what's there.

Using the --force flag seems to reset the event count to the highest number, so I'm not sure if that is going to cause problems.

But what should I do next? I'm not sure how to look more deeply into carrier1 without being able to load the files in some sort of GUI, but is there some other way I can try to recover the data?

Thanks for the help,

EE
NeddySeagoon
PostPosted: Mon Feb 11, 2019 10:55 pm    Post subject: Reply with quote

ExecutorElassus,

Put the raid together with sd[ab]4, still --readonly.
Mount carrier1 somewhere and try to copy the files out; cp -a will do.
It will fail at the first read error. If you are lucky, there won't be a read error.

You can try with sd[ac]4 too but sdc4 was in the middle of a rebuild.

As long as you always use --readonly everywhere, the data on the raid will not change; it is what it is.
The metadata may change, but we have the info in this thread to run a --create. That won't sync your raids, though.

Warning: Even if the copy works, some files can be corrupt because of the difference in event counts.
The out of sync is not detectable.

Try not to use sdb-new as a destination, unless you make a new partition off the end of sdd4, so the raid image is preserved.
You need 100G or less, depending on how full carrier1 is.

Someone with some script-fu could write a script to recursively list all the files in carrier1, then copy them one at a time, listing the ones that failed.
That's beyond my bash skills though.
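A minimal sketch of such a script (my own sketch, untested on the real array): copy every regular file one at a time and log the ones that fail. SRC and DST are placeholders, demonstrated here on a throwaway tree; in real use SRC would be the mounted carrier1 and DST the recovery destination.

```shell
# Copy files one at a time, logging failures instead of aborting.
SRC=$(mktemp -d); DST=$(mktemp -d)       # stand-ins for the real paths
mkdir -p "$SRC/docs"
echo hello > "$SRC/docs/a.txt"           # stand-in data
cd "$SRC" || exit 1
find . -type f -print0 |
while IFS= read -r -d '' f; do
    # cp fails on an unreadable file; record its name and keep going
    cp --parents -a -- "$f" "$DST" 2>/dev/null \
        || printf '%s\n' "$f" >> "$DST/failed.txt"
done
cat "$DST/docs/a.txt"
```

--parents recreates the directory hierarchy under DST, and the failure log gives a worklist of files to retry from a different drive pairing.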

-- edit --

It's lots of small blocks not recovered, rather than one big block.

Code:
#      pos        size  status
0x00000000  0xAAA4A18800  +  recovered
0xAAA4A18800  0x00000400  -  2 sectors
0xAAA4A18C00  0x000BD200  +  recovered
0xAAA4AD5E00  0x00000200  -  1 sector
0xAAA4AD6000  0x00003400  +  recovered
0xAAA4AD9400  0x00000200  -  1 sector
...

Hu
Moderator


Joined: 06 Mar 2007
Posts: 13294

PostPosted: Tue Feb 12, 2019 2:56 am    Post subject: Reply with quote

The quick way to copy files out would be rsync. I have had some success syncing files off dying drives before.

If that doesn't work for you, you could try cd "$carrier_mount_point" && find . \( -type f -o -type d \) other-restrictions -print0 > "$TMPDIR/files-to-save.txt" to save a list of files. You will likely need to preserve directory structure when copying them, which makes the copy side a bit harder. You could try tar --no-recursion -C "$carrier_mount_point" --null -T "$TMPDIR/files-to-save.txt" -c -f - | tar -C "$recovery_directory" -x -f - -k to copy them out with tar. If that also fails (and it might, if read errors come back and tar aborts on error), fall back to cd "$carrier_mount_point" && while read -d '' filename; do cp --parents -a "$filename" "$recovery_directory"; done < "$TMPDIR/files-to-save.txt".

Note that this last method will preserve directory hierarchy, but not directory permissions / ownership. If you need that, you could try to pre-create the hierarchy: cd "$carrier_mount_point" && find . -type f other-restrictions -printf '%h\0' | sort -z | uniq -z > "$TMPDIR/dirs-to-save.txt" && tar -C "$carrier_mount_point" --null -T "$TMPDIR/dirs-to-save.txt" --no-recursion -c -f - | tar -C "$recovery_directory" -x -f -. If this step fails, you are unable to read back some of your directory entries. That would be very unfortunate, as it means some files may be unreachable, even if their contents are intact.

Where I wrote other-restrictions, you could plug in any find predicates to restrict saving files you don't want (old enough that you can restore them from backup; derived files you can recreate from other salvaged files; etc.). As much as practical, you want to minimize recovering files that you can get elsewhere.

Regarding /usr, I would say copy /usr/local, but plan on reinstalling the relevant packages to rebuild /usr/share. If you can save it after you've saved all your irreplaceable contents, go ahead and try. Just prioritize the things that are most difficult to replace.
ExecutorElassus
Veteran


Joined: 11 Mar 2004
Posts: 1156
Location: Stuttgart, Germany

PostPosted: Tue Feb 12, 2019 8:38 am    Post subject: Reply with quote

Hi Neddy,

conveniently, I have two SSD drives, 250GB each, that I was planning to use as a RAID1 and migrate all my system partitions (everything up to /home, but not the carrier partitions). I never got around to it, so I have some extra space.

But I have a meeting to attend today, so I'll have to get to this when I get back in the evening.

The only thing I did when sdc4 was rebuilding was start the WM. None of the carrier partitions was mounted. So hopefully being out of sync won't affect too much.

What's a good filesystem format for an SSD? My third one is formatted with f2fs, but this liveCD apparently doesn't have that. Hu, how do I use rsync to copy everything?

Also, conveniently, carrier1 is only about 30% full, so it's quite possible the bad sectors don't even have data on them.

It would be nice if there were a way to use the rescue mapfile and have /dev/sd[ac]4 used to rebuild *only those sectors*, on the off chance that those specific sectors might be intact on /dev/sd[ac]4.

Anyway, when I get back home I'll format the SSD and see about copying carrier1 from sd[ab]4.

thanks for the help,

EE
PS it turns out that carrier1 mounts without issue when mounted from sd[ab]4. I'm not sure why that is.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 42163
Location: 56N 3W

PostPosted: Tue Feb 12, 2019 10:28 am    Post subject: Reply with quote

ExecutorElassus,

You don't know the sync status of sd[ac]4, so attempting to recover individual blocks would be risky.
mdadm reads and writes whole chunks, 512k here, so a chunk is all or nothing. One missing or unreadable sector costs you a whole chunk.

On top of that, your filesystems use 4kB blocks, so 4kB is the least you can lose at the filesystem level, even if the drive has 512B sectors.

You have a filesystem with 4k blocks on top of a raid with 512k chunks on top of a HDD (which you are trying to rescue) with 512B sectors.
The holes in your recovered data are bigger than you think. On the bright side, one raid chunk can span several unreadable sectors, so clustered bad sectors cost fewer chunks than you might fear.

sdc4 has lost its active slot. I suspect we will need to run a --create to rewrite that before you can assemble it.

If you really think the raid is clean ... and we know it's not ... it's possible to bring it up with all three drives --readonly, then try file copies.
Reads to sdb4 will fail from time to time, but the data will be fetched from the other drives, so the copies should succeed.
Again, that does not mean the recovered data is what it's supposed to be.
With all three drives in the raid set, we don't have any control over, or knowledge of, which drives are being read.
That will be important: if sd[ab]4 gives you rubbish or fails, we can try the same files from sd[ac]4, which may give a different answer.
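That read-only bring-up could be sketched like this. The md number and the member partitions are placeholders (check cat /proc/mdstat and mdadm -E for the real ones), and every command is echoed rather than executed, so the sketch is safe to run as-is:

```shell
# placeholders: md device and members must match your own mdadm -E output
md=/dev/md127
members="/dev/sdX4 /dev/sdY4 /dev/sdZ4"

echo mdadm --stop "$md"                          # release the inactive array
echo mdadm --assemble --readonly "$md" $members  # bring it up, no writes
echo cat /proc/mdstat                            # confirm it is up read-only
# drop the 'echo' prefixes to actually run the commands, as root
```

Because of --readonly, mdadm will not start a resync or touch the member superblocks, which is the point of this exercise.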

That trick of reading the missing sectors from the 'good' drives is what mdadm --replace would have done.
It would have duplicated sdb4 using data from all three drives.
That's what you really wanted to do at the outset but hindsight gives everyone 20/20 vision.

-- edit --

See your PM too.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
ExecutorElassus
Veteran


Joined: 11 Mar 2004
Posts: 1156
Location: Stuttgart, Germany

PostPosted: Tue Feb 12, 2019 5:32 pm    Post subject: Reply with quote

Hi Neddy,

all right. I've reassembled sd[ab]4 and activated the VGs inside. Once I have the SSD formatted (I guess with ext3?) and I copy over all of carrier1, what should I do with it? Assuming there are no copy errors, what next? I can't check file integrity without the programs to open the files, but is there something else I should do?

What other steps should I take?

Thanks for the help,

EE
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 42163
Location: 56N 3W

PostPosted: Tue Feb 12, 2019 6:03 pm    Post subject: Reply with quote

ExecutorElassus,

That's it. Copy over the other things too.
Use rsync as Hu suggested rather than cp -a.

You could do a second copy based on sd[ac]4 and compare the copies; the differences are just that, differences.
You still have to look at the files that differ to see which is correct, if either.
Even where the files compare equal, you only know that they are the same, not that they are correct.

You can defer the checking. The data is what it is, you can go back to ddrescue and attempt to fill more holes and get back more data.

The drawback with using sdd4 is that the missing data will still read successfully; the drive will return whatever happens to be there.
If you thought that was useful, you can do it.
e.g. if sdb4 fails to read a directory, it will all be missing.
However, if sdd4 has part recovered that directory, you may be able to use the part recovered directory to salvage the files that are there.

It's OK to run ddrescue overnight too, if you want. Set up lots of retries and let it run.
Next night, do it again with the drive in a different position.
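An overnight pass might look like this; the device, image, and mapfile paths are placeholders and the command is echoed, not run. Reusing the same mapfile makes ddrescue revisit only the sectors it has not yet recovered:

```shell
# -d: direct disc access; -r8: retry each bad sector up to 8 times
cmd="ddrescue -d -r8 /dev/sdX4 /mnt/SSD/sdX4.img /mnt/SSD/sdX4.mapfile"
echo "$cmd"   # review the paths, then run it for real as root
```

Repeating the run later with the drive repositioned, as suggested above, just means invoking the same command again with the same mapfile.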

I wouldn't rule out a --create on all three drives as a last resort, to see what happens then, but I've given up trying to salvage the data in place.

Hold that for a moment ...

You could copy off
/dev/mapper/vg-var
/dev/mapper/vg-usr
/dev/mapper/vg-home
/dev/mapper/vg-opt
onto the SSD and make new filesystems there for tmp, vartmp, distfiles and portage.

It all fits with over 100G to spare. Fix /etc/fstab to point at the SSD and try to bring the box up normally (with the SSD in place of the raid).
That 100G will give you space for /dev/mapper/vg-carrier1 too.

You would then have a working Gentoo to use to recover the data from the raid.
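That copy could be written down as a reviewable plan first. The vg- LV names follow the list above, while the mountpoints are assumptions; nothing here touches a disk until the generated script is run by hand:

```shell
# build a plan file instead of running anything directly
plan=$(mktemp)
for lv in var usr home opt; do
  {
    echo "mkdir -p /mnt/gentoo/$lv /mnt/SSD/$lv"
    echo "mount -o ro /dev/mapper/vg-$lv /mnt/gentoo/$lv"
    echo "rsync -aHx /mnt/gentoo/$lv/ /mnt/SSD/$lv/"
  } >> "$plan"
done
cat "$plan"   # review, then: sh "$plan" (as root)
```

The read-only mounts keep the rsync one-directional even if a source and destination get swapped by mistake.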
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
ExecutorElassus
Veteran


Joined: 11 Mar 2004
Posts: 1156
Location: Stuttgart, Germany

PostPosted: Tue Feb 12, 2019 6:36 pm    Post subject: Reply with quote

Hi Neddy,

this last suggestion is actually the process I think I asked about over a year ago, when I first got the SSDs, but chickened out of doing.

So, here's what I think I would do. Please correct me if I'm wrong:

Format the SSD as one single partition. I can't use f2fs, apparently. What's a good format?
rsync all of sd[adc]3 (which holds / and all the rest of the system directories like /etc and is clean). How do I do this?
then, rsync each of the other system partitions that live on sd[ab]4 (that is, /usr, /portage, /distfiles, /opt, and /home, all of which I have reason to believe are clean) to their respective directories on the SSD. How do I do this?
edit /etc/fstab to point to this SSD as /. But I boot from an initramfs (yours, incidentally). How do I edit it so it boots from the SSD and not the RAID array? Is that something I just edit in grub.cfg?

As you say, this should get me a bootable system using only the SSD. Is there a way to turn this into a RAID1 array later while there's still data on it?

Can you please walk me through how to do this? This all looks risky and above my level of skill.

Thanks for the help,

EE
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 42163
Location: 56N 3W

PostPosted: Tue Feb 12, 2019 8:03 pm    Post subject: Reply with quote

ExecutorElassus,

Use ext4 on the SSD. You may want to feed it options to control the number of i-nodes and/or turn off the journal on expendable partitions.

For the portage tree, one inode per filesystem block is good or you will run out of i-nodes.
Due to having a senior moment, my portage tree is on a filesystem with 1k blocks on an SSD with 4k physical blocks. Don't do that.
Make portage one i-node per block. 4G should be enough.
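A quick check of that inode arithmetic, plus what the mkfs calls might look like. The vg- LV names are placeholders and the mkfs commands are echoed, not run:

```shell
# one inode per 4k block: mkfs.ext4 -i takes bytes-per-inode,
# so -i 4096 with -b 4096 gives exactly one inode per block.
block=4096
size=$((4 * 1024 * 1024 * 1024))   # an assumed 4G portage LV
inodes=$((size / block))
echo "$inodes inodes"              # 1048576

# the invocations themselves (placeholder LV names):
echo mkfs.ext4 -b 4096 -i 4096 /dev/mapper/vg-portage
echo mkfs.ext4 -O ^has_journal /dev/mapper/vg-tmp   # expendable: no journal
```

A million-odd inodes comfortably covers the portage tree's many small files.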

You are going to mount all these filesystems separately, like you do now.
Either use LVM, so you can move space around (growing is trivial, shrinking is hard), or make separate partitions.
As you have used LVM already, that will offer the best use of space.

Make two, possibly three partitions on the SSD: /boot, root, and everything else.
Combine /boot and root if that's what you do now.
Make everything else LVM, as that's what you have now.
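That partitioning could look roughly like this. The device /dev/sdX, the sizes, and the vgssd name are all assumptions, and every command is echoed for review before anything is written:

```shell
dev=/dev/sdX    # placeholder: the SSD, not a raid member
echo parted -s "$dev" mklabel gpt
echo parted -s "$dev" mkpart boot 1MiB 513MiB
echo parted -s "$dev" mkpart root 513MiB 20GiB
echo parted -s "$dev" mkpart lvm 20GiB 100%
echo pvcreate "${dev}3"
echo vgcreate vgssd "${dev}3"
echo lvcreate -L 20G -n usr vgssd   # repeat lvcreate per filesystem
```

Leaving free space in the VG is deliberate: growing an LV later is trivial, shrinking is hard.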

Boot with System Rescue CD and attach your old Gentoo at /mnt/gentoo, using the read-only mount option for all the old filesystems.
Bring up the sd[ab]4 raid as you have been doing and attach its filesystems in the right places under /mnt/gentoo. Don't forget the read-only option.

Make a new mountpoint /mnt/SSD

Attach all the empty SSD filesystems here. After you mount the SSD root, you will need to mkdir all the lower-level mount points.
Don't forget that /tmp will need its permissions adjusted.
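Putting the two trees together might look like this; device and LV names are placeholders and the commands are echoed. The last line computes the sticky, world-writable mode /tmp needs:

```shell
echo mount -o ro /dev/md124 /mnt/gentoo              # old root, read-only
echo mount -o ro /dev/mapper/vg-usr /mnt/gentoo/usr  # one line per old LV
echo mount /dev/mapper/vgssd-root /mnt/SSD           # new root, read/write
echo mkdir /mnt/SSD/usr /mnt/SSD/var /mnt/SSD/home /mnt/SSD/tmp
echo mount /dev/mapper/vgssd-usr /mnt/SSD/usr        # one line per new LV
echo chmod 1777 /mnt/SSD/tmp                         # /tmp: sticky + rwx for all

# the /tmp mode is octal 1777: all permission bits plus the sticky bit
printf '%o\n' $(( 01000 | 0777 ))   # 1777
```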

Time for a review before you do something you can't undo.

Your old Gentoo is attached at /mnt/gentoo and it's all read-only. Do check.
Your empty SSD has its filesystem tree attached at /mnt/SSD and it's read/write.
This read-only/read-write split stops you from destroying your existing install by messing up the rsync.

I'm not a habitual rsync user, so I can't give you the exact command; I'll refer you to Hu's post.

-- edit --

Once the copy completes, chroot into /mnt/SSD.
You will need /proc, /dev and /sys.
Fix /etc/fstab.
Fix /boot/grub/grub.cfg.
Reinstall grub, as it uses space outside any filesystem, so that's not been copied.
Reboot, but go into the BIOS and choose to boot from the SSD.
Boot. It should come up on the SSD only, with your good raid sets running but not mounted.
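Those steps, spelled out with placeholders; the grub-install target /dev/sdX is an assumption and should be the SSD itself, not a raid member. Commands are echoed for safety:

```shell
# bind-mount the pseudo filesystems the chroot needs
for fs in proc dev sys; do echo mount --bind /$fs /mnt/SSD/$fs; done
echo chroot /mnt/SSD /bin/bash

# then, inside the chroot:
echo nano /etc/fstab            # point /, /usr, etc. at the SSD devices
echo nano /boot/grub/grub.cfg   # fix the root= and boot device entries
echo grub-install /dev/sdX      # placeholder: the SSD, not a raid member
```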

-- edit --
You want to recursively rsync from /mnt/gentoo to /mnt/SSD

I use
Code:
rsync -avHtr /source/ /destination/
the trailing slashes are important. My command sets up an ssh tunnel to copy over; I think I removed that part here.
I don't recall exactly what the options do; they were arrived at by trial and error and reading the man page.
Don't just use that command until you've checked what it will do.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
ExecutorElassus
Veteran


Joined: 11 Mar 2004
Posts: 1156
Location: Stuttgart, Germany

PostPosted: Tue Feb 12, 2019 8:35 pm    Post subject: Reply with quote

Hi Neddy,

right now my drives have four partitions: /boot (raid1 on sd[adc]1), <swap> on sd[adc]2, / as raid1 on sd[adc]3, and the LVM holding /usr, /opt, /var, /tmp, /var/tmp, /var/portage, /var/portage/distfiles, and /home, along with the nine /carrierN partitions as degraded raid5 on sd[ab]4.

So, what I would do is create three partitions on the SSD (which is /dev/sde right now), holding /boot, /, and the LVM for /usr, /var, /tmp, /var/tmp, /var/portage, /var/portage/distfiles, and /home. I would leave the nine carrier partitions on the raid5 array and try to recover them once the rest of the system boots.

Right now, I have /dev/md124 (active with sd[adc]3) mounted at /mnt/gentoo. At this point, once I partition and format the SSD, I should be able to just rsync everything over, yes?

EDIT: I could, theoretically, plug the other SSD back in (I unplugged it to use the cables for sdb), and put RAID1 arrays on all three partitions from the start. Would that be a smart thing to do, since that was my original intent anyway?

Hu, can you walk me through how to do that?

Thanks for the help,

EE
Page 3 of 4

 