Gentoo Forums
Recovering resized partitions
Forum Index » Other Things Gentoo
slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Fri Dec 15, 2017 12:09 pm    Post subject: Recovering resized partitions

Hi,

I cocked up the other day when I went to resize an LVM partition on my RAID array. I meant to add 50GB to my /dev/vg0/music partition but stupidly unmounted /dev/vg0/pics and added 50GB to that instead. Without thinking I went ahead and tried to reduce it before using resize2fs; the series of commands used was...

Code:

lvextend +50G /dev/vg0/pics
lvextend -L+50G /dev/vg0/pics


Unsurprisingly this has cocked things up and I now cannot mount the partition, nor can I resize the filesystem or repair it...

Code:

# mount /dev/vg0/pics /mnt/pics
mount: /mnt/pics: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-pics, missing codepage or helper program, or other error.
# resize2fs /dev/vg0/pics
resize2fs 1.43.7 (16-Oct-2017)
Please run 'e2fsck -f /dev/vg0/pics' first.
# e2fsck -f /dev/vg0/pics
e2fsck 1.43.7 (16-Oct-2017)
The filesystem size (according to the superblock) is 52428800 blocks
The physical size of the device is 13107200 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
# dmesg | grep -i ext4
[  932.174317] EXT4-fs (dm-2): bad geometry: block count 52428800 exceeds size of device (13107200 blocks)


Reading around I came across an old thread where NeddySeagoon wrote...

NeddySeagoon wrote:

You can mount a filesystem without actually using the partition table ... but you need to know where it starts.
So we know that sdb8 starts at sector 683593760 and we know that each sector is 512 bytes
Then 683593760 * 512 is the start in bytes. You need to work that sum out then try

Code:

mount -o ro,offset=... /dev/sdb /some/mountpoint


Put whatever 683593760 * 512 works out as in place of the ...
Read man mount to see what offset does.
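(Worked out for the numbers in that quote, 683593760 * 512 = 350000005120, so the command would read:)

Code:

mount -o ro,offset=350000005120 /dev/sdb /some/mountpoint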


And I was hoping I might be able to do this to mount the partition, back everything up (I think I have a backup of close to 95% of what's on it anyway, but want to check and grab anything I've missed) and then reformat the partition. My partition currently looks like...

Code:

# dumpe2fs /dev/vg0/pics
dumpe2fs 1.43.7 (16-Oct-2017)
Filesystem volume name:   <none>
Last mounted on:          /mnt/pics
Filesystem UUID:          4ca2da94-df03-4835-bb53-af2c6ec4f5bd
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         not clean with errors
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              13107200
Block count:              52428800
Reserved block count:     0
Free blocks:              31958128
Free inodes:              13090549
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1011
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              32748
Flex block group size:    16
Filesystem created:       Sun Oct 12 17:34:33 2014
Last mount time:          Mon Feb 29 07:54:35 2016
Last write time:          Thu Dec 14 13:24:04 2017
Mount count:              0
Maximum mount count:      -1
Last checked:             Fri Mar  4 22:31:00 2016
Check interval:           0 (<none>)
Lifetime writes:          795 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:             256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      719988d1-21a9-4f9c-82d3-06da8bffbc07
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             128M
Journal length:           32768
Journal sequence:         0x00006010
Journal start:            0


Group 0: (Blocks 0-32767) csum 0xc5bb [ITABLE_ZEROED]
  Primary superblock at 0, Group descriptors at 1-13
  Reserved GDT blocks at 14-1024
  Block bitmap at 1025 (+1025)
  Inode bitmap at 1041 (+1041)
  Inode table at 1057-1568 (+1057)
  0 free blocks, 6048 free inodes, 176 directories
...


(There are a total of 1559 Group sections output, omitted for brevity).

But this doesn't show the block where the LVM partition /dev/vg0/pics starts, and parted doesn't reveal it either...

Code:

 parted -l
Model: ATA Samsung SSD 840 (scsi)
Disk /dev/sda: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system     Flags
 1      32.3kB  41.0GB  41.0GB  primary   ext4
 2      41.0GB  49.2GB  8192MB  primary   linux-swap(v1)
 3      49.2GB  49.3GB  98.7MB  primary   ext4            boot
 4      49.3GB  250GB   201GB   extended
 5      49.3GB  59.5GB  10.2GB  logical   ext4
 6      59.5GB  250GB   191GB   logical   ext4


Model: ATA SAMSUNG HD502HJ (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  21.0GB  21.0GB  primary  ext4
 3      21.0GB  105GB   83.9GB  primary  ext4
 2      105GB   500GB   395GB   primary  ext4


Error: /dev/sdc: unrecognised disk label
Model: ATA WDC WD30EFRX-68E (scsi)                                       
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Error: /dev/sdd: unrecognised disk label
Model: ATA WDC WD30EFRX-68E (scsi)                                       
Disk /dev/sdd: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Error: /dev/md127: unrecognised disk label
Model: Linux Software RAID Array (md)                                     
Disk /dev/md127: 3000GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:


Can anyone advise as to how I might go about working out the required offset to mount the partition, or offer any other advice on how to resolve this cock-up (rather than going straight to wiping it and starting anew)?

Thanks in advance,

slackline
_________________
"Science is what we understand well enough to explain to a computer.  Art is everything else we do." - Donald Knuth


eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

PostPosted: Fri Dec 15, 2017 7:15 pm

If I'm reading this correctly, I'd be readying the backup for restore... (BTW, the command list mentioned doesn't seem complete.)

I don't know how bad the fragmentation is on your LVM, but if there was fragmentation, reducing the LVM size will lose critical information about which blocks contain the extents that used to belong to the logical volume, and it would require some careful analysis to rebuild instead of just re-extending the volume. If you don't have fragmentation, then things are easier and re-extending will work.

Fortunately, if you just extended it and then chopped it back off by the same amount, you shouldn't lose any information.

It looks like your volume is still the smaller size compared to the ext4fs that was on that volume...
_________________
Intel Core i7 2700K/Radeon R7 250/24GB DDR3/256GB SSD
What am I supposed watching?
slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Fri Dec 15, 2017 7:43 pm

What am I missing from the command list that you expect to see? I stupidly didn't resize2fs after adding 50Gb before I tried to remove it.

From memory I think there might have been ~25% fragmentation, but since the partition hasn't been mounted I'd hope there hasn't been any read/write activity (although this could just demonstrate my ignorance of how LVM works).


Any idea how I might get an LVM partition mounted without recourse to the partition table (essentially forcing mount to ignore trying to read the partition table)? Various searches suggested it's possible, e.g. here and NeddySeagoon's post, but I'm unsure how to work out the sector on the LVM to specify as an offset option.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

PostPosted: Fri Dec 15, 2017 8:04 pm

From what I'm gathering, it should be perfectly fine to extend a partition by 50GB and then shrink it by 50GB, as long as the two amounts are exactly the same and you didn't resize2fs in between. But if the filesystem was ever larger than the volume size, then there's danger...

Or did you do this:

Code:

umount
lvextend -L +50G
resize2fs
lvreduce -L -50G

If this was done, it can be dangerous: you don't know which fragments were used for the 50G that you removed, and they can potentially be permanently lost, though I would hope resize2fs did not move anything into the space that was chopped off (or that lvextend will use the same allocation groups as before you reduced). You should lvextend the partition back to the full size and fsck it, then resize the filesystem back to its original size and then lvreduce it.
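Something like this, as a sketch with illustrative sizes (assuming a 200G volume that was extended by 50G and then wrongly reduced):

Code:

lvextend -L 250G /dev/vg0/pics   # back to the post-extend size (sizes assumed)
e2fsck -f /dev/vg0/pics          # check the filesystem at full volume size
resize2fs /dev/vg0/pics 200G     # a no-op if the filesystem was never grown
lvreduce -L 200G /dev/vg0/pics   # finally shrink the volume to match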

The only assumption is that the start of the volume didn't change. I don't know what happened if you somehow destroyed the beginning of the partition.
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3136

PostPosted: Fri Dec 15, 2017 8:50 pm

Quote:
Code:
lvextend +50G /dev/vg0/pics
lvextend -L+50G /dev/vg0/pics
Soo... does the first command here even work?

Anyway, LVM does automagic metadata backups every time you modify your volumes.
Go to /etc/lvm/archive/ and look at the most recent backups.
Once you've found one with a suitable timestamp, you can apply it with vgcfgrestore.

Hint: You can also use vgcfgrestore -l to list your backups in a more human-friendly format.
Bonus point: those backups are text files and can be edited manually should you be desperate enough. I once used this to remove an overflowed thin pool and recover the thick volumes, at the expense of discarding the data from the pool.
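A minimal sketch of that flow (the archive filename below is made up; take the real one from the listing):

Code:

vgcfgrestore -l vg0                                           # list archived metadata with timestamps
vgcfgrestore -f /etc/lvm/archive/vg0_00042-1234567890.vg vg0  # roll back to the chosen archive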
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2977
Location: Germany

PostPosted: Fri Dec 15, 2017 11:22 pm

LVM is not partitions. It's logical volumes. Space for any given LV might be allocated anywhere. If you reduce and then grow, the re-grown space might be located elsewhere. If it's an SSD and you have issue_discards = 1 in your lvm.conf, the data is already gone. On HDD, or with issue_discards = 0, you have to refer to /etc/lvm/{archive,backup} or the circular on-disk metadata for a backup from before your goofed change and go from there (tell lvextend specifically to use the old extents for allocation).
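For instance (a sketch only; the extent count and range here are hypothetical and would come from the archived metadata):

Code:

grep issue_discards /etc/lvm/lvm.conf
# re-extend onto an explicit physical extent range instead of letting LVM choose:
# +12800 extents = 50G at the 4 MiB extent size
lvextend -l +12800 /dev/vg0/pics /dev/md127:307200-319999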
slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Sat Dec 16, 2017 8:02 am

szatox wrote:
Quote:
Code:
lvextend +50G /dev/vg0/pics
lvextend -L+50G /dev/vg0/pics
Soo... does the first command here even work?


Yes, they did work; the mistake I made (I think) was omitting to run resize2fs /dev/vg0/pics before trying to reduce the size.

szatox wrote:

Anyway, LVM does automagic metadata backups every time you modify your volumes.
Go to /etc/lvm/archive/ and look at the most recent backups.
Once you've found one with a suitable timestamp, you can apply it with vgcfgrestore.

Hint: You can also use vgcfgrestore -l to list your backups in a more human-friendly format.
Bonus point: those backups are text files and can be edited manually should you be desperate enough. I once used this to remove an overflowed thin pool and recover the thick volumes, at the expense of discarding the data from the pool.



That's great, thanks for the pointers.

frostschutz wrote:
LVM is not partitions. It's logical volumes. Space for any given LV might be allocated anywhere. If you reduce and then grow, the re-grown space might be located elsewhere. If it's an SSD and you have issue_discards = 1 in your lvm.conf, the data is already gone. On HDD, or with issue_discards = 0, you have to refer to /etc/lvm/{archive,backup} or the circular on-disk metadata for a backup from before your goofed change and go from there (tell lvextend specifically to use the old extents for allocation).


Thanks, I thought I might be missing some knowledge of how LVM works, and your advice aligns with that of szatox.


Looking at the list of backups I've got...

Code:

# vgcfgrestore -l /dev/vg0
   
  File:      /etc/lvm/archive/vg0_00005-785991857.vg
  Couldn't find device with uuid JG2MGd-j8Am-OaHK-L1XJ-qpam-fXkx-Rqrrch.
  VG name:       vg0
  Description:   Created *before* executing 'lvrename /dev/vg0/films /dev/vg0/video'
  Backup Time:   Sun Oct 12 17:45:14 2014

   
  File:      /etc/lvm/archive/vg0_00006-518221508.vg
  VG name:       vg0
  Description:   Created *before* executing 'lvextend -L+100G /dev/vg0/music'
  Backup Time:   Tue Aug 11 11:01:47 2015

   
  File:      /etc/lvm/archive/vg0_00007-722062704.vg
  VG name:       vg0
  Description:   Created *before* executing 'lvextend -L+50G /dev/vg0/music'
  Backup Time:   Sun Dec 27 07:46:30 2015

   
  File:      /etc/lvm/archive/vg0_00008-1219225728.vg
  VG name:       vg0
  Description:   Created *before* executing 'lvreduce -L200G /dev/vg0/pics'
  Backup Time:   Fri Mar  4 22:36:58 2016

   
  File:      /etc/lvm/archive/vg0_00009-515552203.vg
  VG name:       vg0
  Description:   Created *before* executing 'lvextend -L+100G /dev/vg0/music'
  Backup Time:   Fri Mar  4 22:39:35 2016

   
  File:      /etc/lvm/archive/vg0_00010-1590449262.vg
  VG name:       vg0
  Description:   Created *before* executing 'lvextend -L+50G /dev/vg0/video'
  Backup Time:   Sat Sep  3 05:55:53 2016

   
  File:      /etc/lvm/archive/vg0_00011-1662999833.vg
  VG name:       vg0
  Description:   Created *before* executing 'lvextend -L+100G /dev/vg0/music'
  Backup Time:   Tue Feb 28 20:59:44 2017

   
  File:      /etc/lvm/archive/vg0_00012-632729900.vg
  VG name:       vg0
  Description:   Created *before* executing 'lvextend -L+150G /dev/vg0/video'
  Backup Time:   Tue May 16 10:39:04 2017

   
  File:      /etc/lvm/archive/vg0_00013-1217696769.vg  <<<<<<  This is the one where I started cocking up
  VG name:       vg0
  Description:   Created *before* executing 'lvextend -L+50G /dev/vg0/pics'
  Backup Time:   Thu Dec 14 08:08:23 2017

   
  File:      /etc/lvm/archive/vg0_00014-931104320.vg
  VG name:       vg0
  Description:   Created *before* executing 'lvreduce -L50G /dev/vg0/pics'
  Backup Time:   Thu Dec 14 08:09:09 2017

   
  File:      /etc/lvm/backup/vg0
  VG name:       vg0
  Description:   Created *after* executing 'lvreduce -L50G /dev/vg0/pics'
  Backup Time:   Thu Dec 14 08:09:09 2017


Restoring it seems to have run without error, but I'm still unable to mount...

Code:

# vgcfgrestore --file /etc/lvm/archive/vg0_00013-1217696769.vg /dev/vg0
  Restored volume group vg0

# mount /mnt/pics
mount: /mnt/pics: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-pics, missing codepage or helper program, or other error.

# resize2fs /dev/vg0/pics
resize2fs 1.43.7 (16-Oct-2017)
Please run 'e2fsck -f /dev/vg0/pics' first.

# e2fsck -f /dev/vg0/pics
e2fsck 1.43.7 (16-Oct-2017)
The filesystem size (according to the superblock) is 52428800 blocks
The physical size of the device is 13107200 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes


Thanks both for your assistance, any thoughts on what to try now?
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3136

PostPosted: Sat Dec 16, 2017 12:15 pm

Slackline, what puzzles me about the commands you pasted at the beginning is that you omitted -L in the first one (hence "does it even work") and that the second one you pasted increases the size of the volume when you said you reduced it.

Now, extending and then reducing a volume without expanding the filesystem located on it _should_ be perfectly safe. The only sane - as in "not totally useless" - way to implement it is to mimic a regular partition (you add space at the end, and you remove it from the end). The only real danger comes from rounding errors, if you happen to remove more space than you added and as a result truncate your filesystem.
Removing space from a volume and then adding it back, on the other hand, produces unpredictable results. LVM may allocate extents located somewhere else on a physical volume, or even extents from another physical volume, so (unlike regular partitions) this is not recoverable, unless you restore metadata from backup and your data hasn't been discarded/overwritten.

Finally,
Quote:

The filesystem size (according to the superblock) is 52428800 blocks
The physical size of the device is 13107200 blocks

WHAT? Those numbers don't add up. What is your block size there?
Assuming 4kB block size, your filesystem is 200GB and your LV is 50GB, which means you're 150GB short.
Assuming a really silly scenario where you extended the LV, then expanded the filesystem and then reduced the LV by 50G without shrinking the FS, your block size is 1.(3) kB. Hell, it's not even an integer value :!: (And it also means the original size of that FS was less than 17 GB...)
What do I get wrong here?
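In shell arithmetic, using the 4kiB block size from the dumpe2fs output above:

Code:

echo $((52428800 * 4096 / 1073741824))   # 200 GiB according to the superblock
echo $((13107200 * 4096 / 1073741824))   # 50 GiB of actual device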

What was the full sequence of commands so far? (extending, expanding, shrinking, reducing, creating new volumes, removing them, snapshots, fsck, etc... )
Something important seems to be missing.
slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Sun Dec 17, 2017 8:14 am

Thanks for your continued assistance szatox.

I'd grepped my history and copy-pasted the output (omitting the row numbers), which suggests I did indeed miss out the '-L' option in the first command (I'd been awake five minutes, and I have little time to do computing maintenance due to my daughter; probably not the best time to do something I should have been paying attention to!).

Code:

# history | grep lvextend -A10
  392  lvextend +50G /dev/vg0/pics
  393  lvextend -L+50G /dev/vg0/pics
  394  lvreduce -L50G /dev/vg0/pics
  395  mount /dev/vg0/pics /mnt/pics
  396  umount /mnt/music
  397  e2fsck -fy /dev/vg0/pics
  398  dumpe2fs /dev/vg0/pics
  399  e2fsck -f -b 4096 -y /dev/vg0/pics
  400  e2fsck -b 4096 -y /dev/vg0/pics
  401  e2fsck -b 4096 /dev/vg0/pics
  402  e2fsck -b 32768 /dev/vg0/pics


I no longer have the terminal output available and can't remember what it was for either command, but perhaps I didn't add 50Gb to the partition in the first instance and have added 50Gb by mistake (when trying to correct things).

I've not been able to mount the partition and don't think I've done any reads/writes on the other LVMs, so it might(?) be possible to remove the 50Gb I did inadvertently add on the second attempt with...

Code:

lvextend -L%0g /dev/vg0/pics


That probably won't resolve the difference between the filesystem size and the physical size though. To answer your question, I think the block size as reported by dumpe2fs /dev/vg0/pics is 4096 (see output several posts back), but I've no idea why it might be whack/out of sync, which I guess is the heart of the problem I'm trying to resolve (i.e. mounting the LVM but ignoring the partition table).

Since I do have a backup I'm not too worried, but I always see these things as an opportunity to learn a bit more. In this case my knowledge of how LVM works is severely lacking, so I've read the Wikipedia article to improve that; do you have any other recommended reading?
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

PostPosted: Sun Dec 17, 2017 8:28 am

Yes, this looks a lot more like the commands that could have done something. I was concerned that the commands you initially posted had an error, so we had to guess what actually happened.

These are the danger commands:
Code:
  393  lvextend -L+50G /dev/vg0/pics
  394  lvreduce -L50G /dev/vg0/pics


It looks like you did add 50G to pics, and then chopped your volume down to equal 50G... which is exactly 13107200 4kiB blocks.

You'll need to try to lvresize it back to the exact size it had before, which looks like it should be 200G - if you had no LVM fragmentation, lvextend -L 200G /dev/vg0/pics would work. But this is dangerous due to the unknown fragmentation. You should see if you can restore one of the metadata backups instead - but it does look like you tried that... and it apparently failed to bring things back to what they were prior to the edits?

What is the current size of the volume pics now via lvdisplay?
slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Sun Dec 17, 2017 2:06 pm

Thanks also for your assistance eccerr0r. Currently lvdisplay suggests /dev/vg0/pics is 200Gb...

Code:

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg0/music
  LV Name                music
  VG Name                vg0
  LV UUID                J2Brwt-zNCO-CSnv-HrtN-e8BL-GbwH-kAfC1g
  LV Write Access        read/write
  LV Creation host, time kimura, 2014-10-12 17:05:49 +0100
  LV Status              available
  # open                 1
  LV Size                850.00 GiB
  Current LE             217600
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
   
  --- Logical volume ---
  LV Path                /dev/vg0/video
  LV Name                video
  VG Name                vg0
  LV UUID                qUU56Y-I4eV-aVMi-mVq1-TsI5-Y0e2-IAw3xS
  LV Write Access        read/write
  LV Creation host, time kimura, 2014-10-12 17:05:56 +0100
  LV Status              available
  # open                 1
  LV Size                700.00 GiB
  Current LE             179200
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
   
  --- Logical volume ---
  LV Path                /dev/vg0/pics
  LV Name                pics
  VG Name                vg0
  LV UUID                LwfI3J-z75x-nEnl-9rRg-Bc6T-qStI-mM65kQ
  LV Write Access        read/write
  LV Creation host, time kimura, 2014-10-12 17:06:09 +0100
  LV Status              available
  # open                 0
  LV Size                200.00 GiB
  Current LE             51200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2
   
  --- Logical volume ---
  LV Path                /dev/vg0/work
  LV Name                work
  VG Name                vg0
  LV UUID                NweuCf-N8QS-BvZg-UP4R-lYba-x9cf-j3C1kG
  LV Write Access        read/write
  LV Creation host, time kimura, 2014-10-12 17:06:16 +0100
  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3


Restoring to just before I cocked things up hasn't helped. Originally /dev/vg0/pics was bigger, but I successfully reduced it to 200Gb on Fri Mar 4 2016 (according to the output of vgcfgrestore -l /dev/vg0 I posted further back; a dead handy feature that I was never aware of!).

I guess I could go back and try restoring that, but my concern is that between then and now I've added storage to /dev/vg0/music and /dev/vg0/video, and I'd expect that restoring to a point prior to those changes would lose everything done in the intervening period.


This has all been very useful learning for me but I'm starting to think my only (sensible) option will be to start anew with /dev/vg0/pics.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54237
Location: 56N 3W

PostPosted: Sun Dec 17, 2017 5:46 pm

slackline,

A logical volume is not a partition. A partition is a sequence of contiguous (to the user) blocks on a block device of sorts.
Usually a HDD. With bad sector remapping in rotating rust and wear levelling in SSDs, the physical blocks may not actually be contiguous.
The operating system can't tell; the HDD/SSD hides that level of detail.
A partition is fully described by the partition table.

A logical volume is a few layers of abstraction away from a partition.
The next layer up is the physical volume. It comprises at least one partition and may use several drives. The physical volume hides the joins between multiple partitions and even lets multiple partitions be arranged in different ways.
Think raid: with two partitions you can do raid1, raid0, or join them end to end. With more partitions, higher raid levels are possible. It's defined at pvcreate time.

Logical volumes themselves are the divisions of the physical volume. LVM hides all this from you; you have no idea where your data will go.
On a pristine physical volume, logical volumes are allocated in single segments.
On a fragmented physical volume, a new logical volume will fill in some of the gaps and may be several extents, even when it's new.
LVM hides all this from the filesystem on top of the logical volume.

Now, if you can unravel all these layers of abstraction, and your physical volume is simple and the logical volume is only a single extent, you can mount the filesystem it contains with the offset= option.
That's a lot of ifs and buts. Once you have a more complex structure, LVM needs to be able to join up the pieces of the filesystem before you can mount it.
Calculating offset= is non-trivial.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

PostPosted: Sun Dec 17, 2017 5:57 pm

This should be a recoverable situation, in my opinion. Unless you've run out of time, don't rush to recreate the volume.

It looks like e2fsck and lvdisplay have conflicting size reports... I'm not exactly sure why they differ, however.

What do other programs report as the size of the pics volume, like wc -c /dev/vg0/pics ?

Fragmentation of the logical volume within the volume group may preclude one from using offsets on the physical volume... hence the worry and the need for the restores. I wish there were a way to "defragment" LVM; there seem to be some programs out there to do that, but it's very risky... backups are very important.
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54237
Location: 56N 3W

PostPosted: Sun Dec 17, 2017 6:40 pm

slackline,

pvdisplay -m is useful.

Code:
# pvdisplay -m
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               vg
  PV Size               2.71 TiB / not usable 3.62 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              711140
  Free PE               223588
  Allocated PE          487552
  PV UUID               7b2KgY-NHef-kuNk-WBAp-VnLa-h03A-b4ehGy
   
  --- Physical Segments ---
  Physical extent 0 to 5119:
    Logical volume   /dev/vg/usr
    Logical extents   0 to 5119
  Physical extent 5120 to 5375:
    Logical volume   /dev/vg/local
    Logical extents   0 to 255
  Physical extent 5376 to 5887:
    Logical volume   /dev/vg/tmp
    Logical extents   0 to 511
  Physical extent 5888 to 7679:
    Logical volume   /dev/vg/var
    Logical extents   0 to 1791
  Physical extent 7680 to 23807:
    Logical volume   /dev/vg/vmware
    Logical extents   0 to 16127
  Physical extent 23808 to 26367:
    Logical volume   /dev/vg/opt
    Logical extents   0 to 2559
  Physical extent 26368 to 34047:
    Logical volume   /dev/vg/distfiles
    Logical extents   0 to 7679
  Physical extent 34048 to 41727:
    Logical volume   /dev/vg/packages
    Logical extents   0 to 7679
  Physical extent 41728 to 42239:
    Logical volume   /dev/vg/portage
    Logical extents   0 to 511
  Physical extent 42240 to 304383:
    Logical volume   /dev/vg/home
    Logical extents   0 to 262143
  Physical extent 304384 to 310015:
    Logical volume   /dev/vg/var
    Logical extents   1792 to 7423
  Physical extent 310016 to 315135:
    Logical volume   /dev/vg/vmware
    Logical extents   16128 to 21247
  Physical extent 315136 to 320255:
    Logical volume   /dev/vg/usr
    Logical extents   5120 to 10239
  Physical extent 320256 to 327935:
    Logical volume   /dev/vg/var
    Logical extents   7424 to 15103
  Physical extent 327936 to 335615:
    Logical volume   /dev/vg/distfiles
    Logical extents   7680 to 15359
  Physical extent 335616 to 343295:
    Logical volume   /dev/vg/packages
    Logical extents   7680 to 15359
  Physical extent 343296 to 343679:
    Logical volume   /dev/vg/tmp
    Logical extents   512 to 895
  Physical extent 343680 to 351359:
    Logical volume   /dev/vg/distfiles
    Logical extents   15360 to 23039
  Physical extent 351360 to 479359:
    Logical volume   /dev/vg/home
    Logical extents   262144 to 390143
  Physical extent 479360 to 479871:
    Logical volume   /dev/vg/portage
    Logical extents   512 to 1023
  Physical extent 479872 to 487551:
    Logical volume   /dev/vg/distfiles
    Logical extents   23040 to 30719
  Physical extent 487552 to 711139:
    FREE


This says that VG Name vg is located on PV Name /dev/md127.
It shows where all the extents are allocated.

So
Code:
  Physical extent 42240 to 304383:
    Logical volume   /dev/vg/home
    Logical extents   0 to 262143

and
Code:
  Physical extent 351360 to 479359:
    Logical volume   /dev/vg/home
    Logical extents   262144 to 390143

Both belong to home but there is a gap in the Physical extents allocated, even though the Logical extents are contiguous.
With that gap, there is no possibility of using mount with the offset= option.
You also need to calculate where Physical extent 42240 starts on the drive, or, in this example, the raid set.
That means you need to know something of the LVM data layout on the disk. It will certainly be documented in the code.
It might be documented elsewhere too.
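As a sketch of that calculation (pvs can report where the first physical extent sits; the 1 MiB pe_start below is only an assumption, check your own output):

Code:

pvs -o +pe_start /dev/md127    # offset of the first physical extent on the PV
# byte offset of physical extent 42240, assuming pe_start = 1 MiB and 4 MiB extents:
echo $((1024*1024 + 42240 * 4*1024*1024))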
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3136

PostPosted: Mon Dec 18, 2017 12:47 am

Quote:
These are the danger commands:
Code:
  393  lvextend -L+50G /dev/vg0/pics
  394  lvreduce -L50G /dev/vg0/pics
It looks like you did add 50G to pics, and then chopped your volume down to equal 50G... which is exactly 13107200 4kiB blocks.
Actually this bit perfectly matches my calculations. The FS was extended to 200 GB and then the volume was chopped down to exactly 50G (at the 4kB block size).
Quote:
Fragmentation of the logical volume within the volume group may preclude one from using offsets on the physical volume... hence the worry and the need for the restores. I wish there were a way to "defragment" LVM,
pvmove should be able to do that, although you can't request exactly that. It's more like LVM attempts to allocate contiguous space if it is available, fragments when necessary, and then doesn't bother fixing up the mess afterwards. If you migrate an LV to a new PV (one with contiguous free space) it should "accidentally" defragment it in the process.
Fortunately the automatic metadata backups used by vgcfgrestore do contain the fragmentation information, so a restore will allocate exactly the same space that was dropped.
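Something like this, perhaps (a sketch; the destination PV is hypothetical):

Code:

pvmove -n pics /dev/md127 /dev/sde1   # migrate only the pics LV onto another PV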

The bad news is, since it's an SSD and the LV has been reduced below its original size, and the reported device size of 50G happens to be the lowest value along the way... well, at this point it's very likely that LVM issued a trim command to the SSD and actually erased 150 GB of underlying flash.
Did you check this bit?
Quote:
If it's an SSD and you have issue_discards = 1 in your lvm.conf, the data is already gone.

If it's set, it's game over; go restore from backup. If it's 0, we may still have a chance to figure out some smart way to handle this.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 9679
Location: almost Mile High in the USA

PostPosted: Mon Dec 18, 2017 1:35 am

Except this is on a rust spinner...

Despite it seeming very plausible to be on the SSD, the chances of successfully extending a 200G partition to 250G in an LVM on a 250G SSD are slim, what with all the metadata and rounding inaccuracies. Or at least it's kind of strange to have a 250G PV on an SSD with all those mechanical HDDs lying around.

---

BTW, an LVM2 defragmenter, though not helpful for the OP... https://bisqwit.iki.fi/source/lvm2defrag.html -- if all volumes were contiguous on the physical volumes, and none spanned them, then mounting with an offset would work, hence the value in defragmentation.

If there is at least one free allocation extent, it seems like it should be possible to safely defragment LVMs, minus the time it takes to rewrite the metadata. Hmm... this may be a neat tool to write, but it's dangerous and of questionable value when a logical volume necessarily spans physical volumes.
slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Mon Dec 18, 2017 2:44 pm

Thanks NeddySeagoon, eccerr0r and szatox for your continued advice.

This is indeed an HDD volume rather than SSD, albeit two disks set up in RAID1. LVM seems nice and flexible to an end user such as myself (to date, rapidly learning!) but horrendously complicated under the hood, based on your initial description in this thread NeddySeagoon.


Taking things in turn, e2fsck still reports....

Code:


# e2fsck /dev/vg0/pics
e2fsck 1.43.7 (16-Oct-2017)
The filesystem size (according to the superblock) is 52428800 blocks
The physical size of the device is 13107200 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? cancelled!



Then lvdisplay...

Code:

# lvdisplay /dev/vg0
  --- Logical volume ---
  LV Path                /dev/vg0/music
  LV Name                music
  VG Name                vg0
  LV UUID                J2Brwt-zNCO-CSnv-HrtN-e8BL-GbwH-kAfC1g
  LV Write Access        read/write
  LV Creation host, time kimura, 2014-10-12 17:05:49 +0100
  LV Status              available
  # open                 1
  LV Size                850.00 GiB
  Current LE             217600
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0
   
  --- Logical volume ---
  LV Path                /dev/vg0/video
  LV Name                video
  VG Name                vg0
  LV UUID                qUU56Y-I4eV-aVMi-mVq1-TsI5-Y0e2-IAw3xS
  LV Write Access        read/write
  LV Creation host, time kimura, 2014-10-12 17:05:56 +0100
  LV Status              available
  # open                 1
  LV Size                700.00 GiB
  Current LE             179200
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
   
  --- Logical volume ---
  LV Path                /dev/vg0/pics
  LV Name                pics
  VG Name                vg0
  LV UUID                LwfI3J-z75x-nEnl-9rRg-Bc6T-qStI-mM65kQ
  LV Write Access        read/write
  LV Creation host, time kimura, 2014-10-12 17:06:09 +0100
  LV Status              available
  # open                 0
  LV Size                200.00 GiB
  Current LE             51200
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2
   
  --- Logical volume ---
  LV Path                /dev/vg0/work
  LV Name                work
  VG Name                vg0
  LV UUID                NweuCf-N8QS-BvZg-UP4R-lYba-x9cf-j3C1kG
  LV Write Access        read/write
  LV Creation host, time kimura, 2014-10-12 17:06:16 +0100
  LV Status              available
  # open                 1
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:3


Says /dev/vg0/pics is 200Gb here.

pvdisplay is yet another command I've not encountered before but it reports...

Code:

# pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/md127
  VG Name               vg0
  PV Size               2.73 TiB / not usable 4.44 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              715364
  Free PE               254564
  Allocated PE          460800
  PV UUID               JG2MGd-j8Am-OaHK-L1XJ-qpam-fXkx-Rqrrch
   
  --- Physical Segments ---
  Physical extent 0 to 127999:
    Logical volume   /dev/vg0/music
    Logical extents   0 to 127999
  Physical extent 128000 to 255999:
    Logical volume   /dev/vg0/video
    Logical extents   0 to 127999
  Physical extent 256000 to 307199:
    Logical volume   /dev/vg0/pics
    Logical extents   0 to 51199
  Physical extent 307200 to 383999:
    FREE
  Physical extent 384000 to 396799:
    Logical volume   /dev/vg0/work
    Logical extents   0 to 12799
  Physical extent 396800 to 460799:
    Logical volume   /dev/vg0/music
    Logical extents   128000 to 191999
  Physical extent 460800 to 473599:
    Logical volume   /dev/vg0/video
    Logical extents   128000 to 140799
  Physical extent 473600 to 499199:
    Logical volume   /dev/vg0/music
    Logical extents   192000 to 217599
  Physical extent 499200 to 537599:
    Logical volume   /dev/vg0/video
    Logical extents   140800 to 179199
  Physical extent 537600 to 715363:
    FREE
   


.../dev/vg0/pics only crops up once, which makes sense to me as, unlike the other logical volumes, I've never extended it (until this most recent cock-up, that is!). Finally, wc -c /dev/vg0/pics (I had no idea you could pass wc a device path and have it report the number of bytes; in this instance is it the bytes allocated to the volume or those that are used within it?)

Code:

# wc -c /dev/vg0/pics
53687091200 /dev/vg0/pics
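(That answers the parenthetical: wc -c reads the device end to end, so the count is the size of the device, and 53687091200 bytes is exactly 50GiB, not the bytes used within it. blockdev reports the same figure without the long read:)

Code:

blockdev --getsize64 /dev/vg0/pics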


In case it's of any use, here is my /etc/lvm/lvm.conf

Code:

config {
   checks = 1
   abort_on_errors = 0
   profile_dir = "/etc/lvm/profile"
}

devices {
   dir = "/dev"
   scan = [ "/dev" ]
   obtain_device_list_from_udev = 1
   external_device_info_source = "none"
   filter = [ "r|/dev/nbd.*|", "a/.*/" ]
   cache_dir = "/etc/lvm/cache"
   cache_file_prefix = ""
   write_cache_state = 1
   sysfs_scan = 1
   multipath_component_detection = 1
   md_component_detection = 1
   fw_raid_component_detection = 0
   md_chunk_alignment = 1
   data_alignment_detection = 1
   data_alignment = 0
   data_alignment_offset_detection = 1
   ignore_suspended_devices = 0
   ignore_lvm_mirrors = 1
   disable_after_error_count = 0
   require_restorefile_with_uuid = 1
   pv_min_size = 2048
   issue_discards = 0
   allow_changes_with_duplicate_pvs = 0
}

allocation {
   maximise_cling = 1
   use_blkid_wiping = 1
   wipe_signatures_when_zeroing_new_lvs = 1
   mirror_logs_require_separate_pvs = 0
   cache_pool_metadata_require_separate_pvs = 0
   thin_pool_metadata_require_separate_pvs = 0
}

log {
   verbose = 0
   silent = 0
   syslog = 1
   overwrite = 0
   level = 0
   indent = 1
   command_names = 0
   prefix = "  "
   activation = 0
   debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
}

backup {
   backup = 1
   backup_dir = "/etc/lvm/backup"
   archive = 1
   archive_dir = "/etc/lvm/archive"
   retain_min = 10
   retain_days = 30
}

shell {
   history_size = 100
}

global {
   umask = 077
   test = 0
   units = "h"
   si_unit_consistency = 1
   suffix = 1
   activation = 1
   fallback_to_lvm1 = 0
   proc = "/proc"
   etc = "/etc"
   locking_type = 1
   wait_for_locks = 1
   fallback_to_clustered_locking = 1
   fallback_to_local_locking = 1
   locking_dir = "/run/lock/lvm"
   prioritise_write_locks = 1
   abort_on_internal_errors = 0
   detect_internal_vg_cache_corruption = 0
   metadata_read_only = 0
   mirror_segtype_default = "raid1"
   raid10_segtype_default = "raid10"
   sparse_segtype_default = "thin"
   use_lvmetad = 1
   use_lvmlockd = 0
   system_id_source = "none"
   use_lvmpolld = 0
   notify_dbus = 1
}

activation {
   checks = 0
   udev_sync = 1
   udev_rules = 1
   verify_udev_operations = 0
   retry_deactivation = 1
   missing_stripe_filler = "error"
   use_linear_target = 1
   reserved_stack = 64
   reserved_memory = 8192
   process_priority = -18
   raid_region_size = 512
   readahead = "auto"
   raid_fault_policy = "warn"
   mirror_image_fault_policy = "remove"
   mirror_log_fault_policy = "allocate"
   snapshot_autoextend_threshold = 100
   snapshot_autoextend_percent = 20
   thin_pool_autoextend_threshold = 100
   thin_pool_autoextend_percent = 20
   use_mlockall = 0
   monitoring = 1
   polling_interval = 15
   activation_mode = "degraded"
}

metadata {
}

dmeventd {
   mirror_library = "libdevmapper-event-lvm2mirror.so"
   snapshot_library = "libdevmapper-event-lvm2snapshot.so"
   thin_library = "libdevmapper-event-lvm2thin.so"
}

NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54237
Location: 56N 3W

PostPosted: Mon Dec 18, 2017 3:43 pm

slackline,

/dev/vg0/pics appears to be all in one piece

Code:
  Physical extent 256000 to 307199:
    Logical volume   /dev/vg0/pics
    Logical extents   0 to 51199
  Physical extent 307200 to 383999:
    FREE


And it's 51200 extents of
Code:
PE Size               4.00 MiB
That's 200GiB.
We don't know that it's the right 200GiB ... yet.

That FREE bothers me. There has been something there sometime.
That's 76800 extents, or 300GiB.

That agrees with what lvdisplay /dev/vg0 says for /dev/vg0/pics too.
Code:
# e2fsck /dev/vg0/pics
e2fsck 1.43.7 (16-Oct-2017)
The filesystem size (according to the superblock) is 52428800 blocks
The physical size of the device is 13107200 blocks

That's decimal 4kiB blocks (filesystems usually use 4kiB blocks).
13107200 blocks is 0xC80000 blocks in hex; at 4kiB each, that's 0xC800 MiB or 50.0 GiB.

It appears that e2fsck is not seeing the current physical size.
Eww. Has the raid broken, so that one half has a 50G /dev/vg0/pics and the other half a 200GiB /dev/vg0/pics?
What does /proc/mdstat tell you?

Anyway, getting back to offset= ...
Rotating rust is good. The data is there until it's overwritten.
Raid 1 is good, it's two mirrored copies of the same thing.
What does mdadm -E /dev/... say about one of the partitions underlying the raid set?
Is it version 0.9 or 1.2?
0.9 has the raid metadata at the end.
1.2 has the raid metadata at the start.
It matters for calculating offsets.

You could cheat a little. Do this very carefully.
Run testdisk on one part of the raid array. It will report where it finds potential filesystem starts.
Do NOT let testdisk write a partition table like it will offer to. Make notes of the filesystem starts reported.
Turn them into offset= values.
Play with mount -o ro,offset=

For safety, you could make the whole device world readable (chmod 664), then run testdisk as your normal user.
The node will go back to 660 at the next reboot.
slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Mon Dec 18, 2017 4:05 pm

Hi NeddySeagoon,

NeddySeagoon wrote:


It appears that e2fsck is not seeing the current physical size.
Eww. Has the raid broken, so that one half has a 50G /dev/vg0/pics and the other half a 200GiB /dev/vg0/pics?
What does /proc/mdstat tell you?


I don't recall any problems with the raid, and /proc/mdstat doesn't appear to report anything awry...

Code:

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10] [linear]
md127 : active raid1 sdd[0] sdc[1]
      2930135488 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

unused devices: <none>



NeddySeagoon wrote:

Anyway, getting back to offset= ...
Rotating rust is good. The data is there until it's overwritten.
Raid 1 is good, it's two mirrored copies of the same thing.
What does mdadm -E /dev/... say about one of the partitions underlying the raid set?
Is it version 0.9 or 1.2?
0.9 has the raid metadata at the end.
1.2 has the raid metadata at the start.


It's 1.2 (the output is the same for both /dev/sdc and /dev/sdd, with the exception of the Device UUID, which is to be expected I think)...

Code:

 # mdadm -E /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 8680d04a:5f66741c:70a2cf2e:33e2ec3f
           Name : kimura:0  (local to host kimura)
  Creation Time : Sun Oct 12 07:48:18 2014
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5860271024 (2794.39 GiB 3000.46 GB)
     Array Size : 2930135488 (2794.39 GiB 3000.46 GB)
  Used Dev Size : 5860270976 (2794.39 GiB 3000.46 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=48 sectors
          State : clean
    Device UUID : 9e6ae556:b37508f4:2e81ef29:381e127d

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Dec 18 15:54:39 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 7fdc6c6a - correct
         Events : 5700


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
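(So with 1.2 metadata, the data area starts 262144 sectors into each member disk; if mounting from /dev/sdc directly rather than /dev/md127, that many bytes presumably need adding to any calculated offset:)

Code:

echo $((262144 * 512))   # 134217728 bytes before the raid data area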



NeddySeagoon wrote:

It matters for calculating offsets.

You could cheat a little. Do this very carefully.
Run testdisk on one part of the raid array. It will report where it finds potential filesystem starts.
Do NOT let testdisk write a partition table like it will offer to. Make notes of the filesystem starts reported.
Turn them into offset= values.
Play with mount -o ro,offset=

For safety, you could make the whole device world readable (chmod 664), then run testdisk as your normal user.
The node will go back to 660 at the next reboot.


Ok, I'll give that a go later; I'm currently at work and not fully focused, so I'll save it for a time when I can dedicate my full attention to it (wife is out tomorrow night so it might be then).

Thanks for yours and everyone else's assistance, I feel like I'm learning a lot here (but have yet to internalise it all and commit it to memory).

Cheers,

slackline

slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Wed Dec 20, 2017 12:44 pm

Found the time to sit down and run testdisk (what a comprehensive tool) and have pasted the output here.

I've redacted most of the repetitive lines to leave the following, which I think shows the start/finish of each of the partitions found on one of the disks of the RAID...

Code:

 P Linux md 1.x RAID        0   0  1 364784 254 22 5860270984 [kimura:0]
 P Linux LVM2              16  81  2 364801  80 15 5860270976
 P ext4                    16 113 32 110976 221 27 1782579200
 P ext4                    16 113 34 110976 221 29 1782579200
 P Sys=0C               10162 141 34 94809 202 31 1359857896
 P HFS                  16287  82 50 34115 123  8  286409362 [iCjh M-?\sM-f~QM-eM-~^E  ^N M-"M-9]
 P HFS                  23322  83 26 27369 114 61   65017044 [H~B6MZ^V^G^D~F~MMM-!"^D^F~E~F^C (~I̠P]
 P HFS                  27369 114 58 31416 146 30   65017044 [H~B6MZ^V^G^D~F~MMM-!"^D^F~E~F^C (~I̠P]
 P HFS                  29808 174 26 30183  17 17    6014476 [2=d~LM-4~R~RM-I"^L~EM-; ~XM-~ N ^UM-7
 P HFS                  30183  17 14 30557 115  5    6014476 [2=d~LM-4~R~RM-I"^L~EM-; ~XM-~ N ^UM-7
 P ext4                 30736 146  1 76426  40 28  734003200
 P ext4                 30736 146  3 76426  40 30  734003200
 P ext4                 32668  81 14 143628 189  9 1782579200
 P ext4                 32668 129 54 143628 237 49 1782579200
 P ext4                 32668 211  8 143629  64  3 1782579200
 P ext4                 32669   0 44 143629 108 39 1782579200
 P ext4                 32669  92 16 143629 200 11 1782579200
 P ext4                 32669 145 45 143629 253 40 1782579200
 P ext4                 32669 154 62 143630   7 57 1782579200
 P ext4                 32669 156 48 143630   9 43 1782579200
 P ext4                 32669 205 25 143630  58 20 1782579200
...
 P ext4                 32679 157 63 111004 159 11 1258291200
 P ext4                 32679 160 34 111004 161 45 1258291200
 P ext4                 32679 161 43 111004 162 54 1258291200
 P ext4                 32679 164  6 111004 165 17 1258291200
 P ext4                 32679 166 40 111004 167 51 1258291200
 P ext4                 32679 168 58 111004 170  6 1258291200
...
 P ext4                 65287  71 63 156666 115 55 1468006400
 P ext4                 65287  72  2 156666 115 57 1468006400
 P ext4                 97939   8 30 189318  52 22 1468006400
 P ext4                 97939  10  8 189318  53 63 1468006400
 P ext4                 97939  11 33 189318  55 25 1468006400
 P ext4                 97939  12 50 189318  56 42 1468006400
 P ext4                 97939  14 20 189318  58 12 1468006400
 P ext4                 97939  15 61 189318  59 53 1468006400
...
 P ext4                 97943 183 25 189322 227 17 1468006400
 P ext4                 97943 185  3 189322 228 58 1468006400
 P ext4                 97943 249  3 169741 228 55 1153433600
 P ext4                 97944 119  8 169742  98 60 1153433600
 P ext4                 97944 218 11 169742 197 63 1153433600
 P ext4                 97944 225  2 169742 204 54 1153433600
 P ext4                 97944 236 29 169742 216 18 1153433600
 P ext4                 97944 240 57 169742 220 46 1153433600
 P ext4                 97944 245  6 169742 224 58 1153433600
 P ext4                 97944 252 37 169742 232 26 1153433600
 P ext4                 97945  30 38 169743  10 27 1153433600
 P ext4                 97945  68 28 169743  48 17 1153433600
 P ext4                 97945 180 60 169743 160 49 1153433600
 P ext4                 97945 216  8 169743 195 60 1153433600
 P ext4                 97951  59 21 169749  39 10 1153433600
 P ext4                 97954  70 21 169752  50 10 1153433600
 P ext4                 130558  30 31 156666 115 55  419430400
 P ext4                 130558  30 31 195828 243 61 1048576000
 P ext4                 130558  30 33 156666 115 57  419430400
 P ext4                 130558  30 33 195828 243 63 1048576000
 P ext4                 130571 201 39 156680  31 63  419430400
 P ext4                 130572   0 21 156680  85 45  419430400
 P ext4                 130572   1 46 156680  87  7  419430400
 P ext4                 130572   4  1 156680  89 25  419430400
 P ext4                 130572   5 18 156680  90 42  419430400
 P ext4                 130572   6 59 156680  92 20  419430400
 P ext4                 130572   8 21 156680  93 45  419430400
...
 P ext4                 130639 152 50 195910 111 17 1048576000
 P ext4                 130639 155  5 195910 113 35 1048576000
 P ext4                 130668  31 20 195938 244 50 1048576000
 P ext4                 130668  36 17 195938 249 47 1048576000
 P ext4                 130668  38 27 195938 251 57 1048576000
 P ext4                 130668  45 58 195939   4 25 1048576000
 P ext4                 130668  48 61 195939   7 28 1048576000
 P ext4                 130668  51 40 195939  10  7 1048576000
 P ext4                 130668  54 43 195939  13 10 1048576000
 P ext4                 130668  58 15 195939  16 45 1048576000
 P ext4                 130668  60 49 195939  19 16 1048576000
 P ext4                 130668  63 36 195939  22  3 1048576000
...
 P ext4                 130750 214 17 196021 172 47 1048576000
 P ext4                 130750 218 29 196021 176 59 1048576000
 P ext4                 143612  73 12 254572 181  7 1782579200
 P ext4                 143612  73 14 254572 181  9 1782579200
 P ext4                 163209 232 16 228480 190 46 1048576000
 P ext4                 163209 252 36 228480 211  3 1048576000
...
 P ext4                 163225 152 24 228496 110 54 1048576000
 P ext4                 163225 155 43 228496 114 10 1048576000
 P ext4                 163225 159 55 228496 118 22 1048576000
 P ext4                 195828 243 62 202356  10 20  104857600
 P ext4                 195828 244  1 202356  10 22  104857600
 P FAT12                196304  91 32 196304 137 13       2880 [dban-1.0.7]
 P HFS                  198520   3 56 198522 146  6      41090
 P HFS                  198522 146  3 198525  33 16      41090
 P HFS                  198528  70 53 198529 209 32      24802
 P HFS                  198529 209 29 198531  93  8      24802
 P ext4                 198978  71 22 205505  92 43  104857600
 P ext4                 198978  74  9 205505  95 30  104857600
 P ext4                 198978  75 26 205505  96 47  104857600
...
 P ext4                 198994 146 33 205521 167 54  104857600
 P ext4                 198994 147 50 205521 169  8  104857600
 P ext4                 198994 149  4 205521 170 25  104857600
 P ext4                 198994 150 21 205521 171 42  104857600
 P HFS                  199361 101 17 199363 243 30      41090
 P HFS                  199363 243 27 199366 130 40      41090
 P HFS                  199369 168 14 199371  51 56      24802
 P HFS                  199371  51 53 199372 190 32      24802


There are some surprises: HFS and FAT12 partitions which I wasn't expecting, and it looks as though there are perhaps four ext4 partitions, albeit with some fragmentation, which is presumably down to the use of LVM and my previous expansion of the /dev/vg0/video and /dev/vg0/music partitions (detailed in a post further back, under the output of 'vgcfgrestore -l /dev/vg0').

I think I now need to work out what the start block is for the /dev/vg0/pics partition and calculate the necessary offset using the method in NeddySeagoon's post (linked in my initial post), but this raises some questions...

Is that the correct strategy to take?
It would presumably be safe to trial-and-error each offset, as partitions that are already mounted will refuse to be mounted a second time?

Thanks in advance,

slackline
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54237
Location: 56N 3W

PostPosted: Wed Dec 20, 2017 2:10 pm

slackline,

In theory, that's correct, but given all those false positives it's going to take a long time.
There are a few things to do to try to narrow it down.

We know that
Code:
  Physical extent 256000 to 307199:
    Logical volume   /dev/vg0/pics
    Logical extents   0 to 51199
  Physical extent 307200 to 383999:
    FREE


And
Code:
PE Size               4.00 MiB
so the start of /dev/vg0/pics is at least 256000*4MiB down the drive.
There is the partition table, raid metadata etc. pushing that a little further down the drive, so it can't be before that.

A disk "cylinder" is 512 bytes per sector, 63 sectors per track and 255 heads per cylinder.
That's 8225280 B per cylinder.

/dev/vg0/pics must be at least 1.073741824×10¹² B down the drive (256000*4*1024*1024).
Dividing one by the other, that's 130541.674447557 cylinders.

Any partition identified before cylinder 130541 is therefore not a possible location for /dev/vg0/pics.
That gets rid of a few :) ; line 1457 in your output is the first possible entry.

We can repeat the above on the End values, looking for a partition of the right size.
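As a quick shell check of that arithmetic:

Code:

echo $((256000 * 4 * 1024 * 1024))             # 1073741824000 bytes down the drive
echo $((256000 * 4 * 1024 * 1024 / 8225280))   # 130541 whole cylinders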

There is another way too but I'll need to write that up later.
slackline
Veteran


Joined: 01 Apr 2005
Posts: 1471
Location: /uk/sheffield

PostPosted: Sun Dec 24, 2017 7:59 am

Thanks yet again NeddySeagoon.

After a reboot, which I thought I'd already tried prior to starting this thread, the partition has automounted.

Your time and effort patiently answering questions is very much appreciated.


slackline
NeddySeagoon
Administrator


Joined: 05 Jul 2003
Posts: 54237
Location: 56N 3W

PostPosted: Sun Dec 24, 2017 10:02 am

slackline,

It's a logical volume. You will confuse your readers by referring to it as a partition.

I'm pleased it's fixed :)