Gentoo Forums
Fresh RAID/LVM install fails to boot [solved]
vesperto
n00b
Joined: 21 May 2015
Posts: 34

PostPosted: Sat May 23, 2015 9:49 pm    Post subject: Fresh RAID/LVM install fails to boot [solved]

Greetings,

I'm giving Gentoo a try and am struggling to get it going. I think the problem is either the partitioning, doing things in or out of the chroot, or the initramfs creation.

For the most part I followed the Handbook, but where it matters I followed the Gentoo Linux x86 with Software Raid and LVM2 Quick Install Guide and the wiki.

I'm trying to set up a RAID1 with 2 disks; for that I used whole-drive FD partitions, like this:
Code:
# fdisk -l /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL
  Disk /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes

# fdisk /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL
  n p 1 2048 1953525167
  t 1 fd
  a 1
  w

# fdisk -l /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL
  Disk /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes
  Disklabel type: dos
  Disk identifier: 0x2a99b723

  Device                                                 Boot Start        End    Sectors   Size Id Type
  /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL-part1 *     2048 1953525167 1953523120 931.5G fd Linux raid autodetect

# fdisk -l /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL
  Disk /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes

# fdisk /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL
  n p 1 2048 1953525167
  t 1 fd
  a 1
  w

# fdisk -l /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL
  Disk /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes
  Disklabel type: dos
  Disk identifier: 0x4dd0a3e2

  Device                                                 Boot Start        End    Sectors   Size Id Type
  /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL-part1 *     2048 1953525167 1953523120 931.5G fd Linux raid autodetect
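
As a quick sanity check on the fdisk numbers above (plain arithmetic, harmless to run anywhere):

```shell
# Cross-check fdisk's byte count: sectors x 512-byte logical sectors
sectors=1953525168
bytes=$(( sectors * 512 ))
gib=$(( bytes / 1024 / 1024 / 1024 ))     # truncate to whole GiB
echo "$bytes bytes, ~$gib GiB"            # -> 1000204886016 bytes, ~931 GiB
```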


For the md, I issued this:

Code:
# cat /proc/mdstat
Personalities : [raid10] [raid1] [raid6] [raid5] [raid4] [raid0] [linear] [multipath]
unused devices: <none>

# mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL-part1 /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL-part1
mdadm: /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL-part1 appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL-part1 appears to be part of a raid array:
       level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
Continue creating array? y
mdadm: array /dev/md0 started.

# cat /proc/mdstat
  Personalities : [raid10] [raid1] [raid6] [raid5] [raid4] [raid0] [linear] [multipath]
  md0 : active raid1 sdc1[1] sda1[0]
        976761472 blocks [2/2] [UU]
        [>....................]  resync =  0.2% (2532480/976761472) finish=319.4min speed=50830K/sec
        bitmap: 8/8 pages [32KB], 65536KB chunk

  unused devices: <none>
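
As an aside, the resync ETA is internally consistent: the remaining kibibytes over the reported speed give the quoted finish time.

```shell
# finish = (total - done) / speed / 60, from the mdstat line above
remaining=$(( 976761472 - 2532480 ))   # KiB still to resync
minutes=$(( remaining / 50830 / 60 ))  # at 50830K/sec
echo "~$minutes min"                   # -> ~319 min, matching finish=319.4min
```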


I didn't mknod per the Guide, as I discovered mdadm does that for me. I waited before continuing because LVM was acting up with stuff like
Code:
Device /dev/md0p1 not found (or ignored by filtering).
or
Code:
Found duplicate PV CHk0Fz1UYhVRh30To6RE2HKJgQUkYNCS: using /dev/sda1 not /dev/md0
Found duplicate PV CHk0Fz1UYhVRh30To6RE2HKJgQUkYNCS: using /dev/sdc1 not /dev/sda1
, etc. It doesn't seem to be RAID-related, and I used wipefs and dd extensively. I also noticed that /dev/md0 became /dev/md127 after a reboot:
Code:
# cat /proc/mdstat
  Personalities : [raid10] [raid1] [raid6] [raid5] [raid4] [raid0] [linear] [multipath]
  md127 : active (auto-read-only) raid1 sda1[0] sdc1[1]
      976761472 blocks [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

  unused devices: <none>
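
For reference, the "Found duplicate PV" noise typically means LVM is scanning both the md device and its raw member partitions. A filter in /etc/lvm/lvm.conf along these lines hides the members (an illustrative sketch only; adjust the patterns to your own devices):

```
# /etc/lvm/lvm.conf (sketch): accept md devices, reject the raw member disks
devices {
    global_filter = [ "a|^/dev/md|", "r|^/dev/sd|" ]
}
```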

I eventually created an LVM partition on the RAID device:
Code:
# fdisk -l /dev/md127

  Disk /dev/md127: 931.5 GiB, 1000203747328 bytes, 1953522944 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes

# fdisk /dev/md127
  n p 1 2048 1953522943
  t 8e
  a 1
  w

# fdisk -l /dev/md127
  Disk /dev/md127: 931.5 GiB, 1000203747328 bytes, 1953522944 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes
  Disklabel type: dos
  Disk identifier: 0x18d92136

  Device       Boot Start        End    Sectors   Size Id Type
  /dev/md127p1 *     2048 1953522943 1953520896 931.5G 8e Linux LVM

On it, I created the LVs I intend to use:
Code:
# pvcreate /dev/md/127_0p1
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!
Physical volume "/dev/md127p1" successfully created
# vgcreate MyVolGroup /dev/md127p1
Volume group "MyVolGroup" successfully created
# lvcreate -L 15G -n rootLV MyVolGroup
Logical volume "rootLV" created
# lvcreate -L 10G -n usrLV MyVolGroup
Logical volume "usrLV" created
# lvcreate -L 10G -n varLV MyVolGroup
Logical volume "varLV" created
# lvcreate -L 10G -n srvLV MyVolGroup
Logical volume "srvLV" created
# lvcreate -L 8G -n swapLV MyVolGroup
Logical volume "swapLV" created

# vgchange -a y

And formatted them. The RAID array chunk size, from /proc/mdstat, is 65536KB; the block size is 4096B to match the Advanced Format drives.
stride = chunk/block = (65536*1024)B / 4096B = 16384 blocks
stripe-width = stride * (# of data disks) = 16384*2 = 32768 blocks
So:
Code:
# mkfs.xfs -f -b size=4096 -d su=32768,sw=2 -s size=4096 -L rootxfs /dev/MyVolGroup/rootLV
# mkfs.ext4 -b 4096 -E stride=16384,stripe-width=32768 -L usrext4 -v /dev/MyVolGroup/usrLV
# mkfs.ext4 -b 4096 -E stride=16384,stripe-width=32768 -L varext4 -v /dev/MyVolGroup/varLV
# mkfs.ext4 -b 4096 -E stride=16384,stripe-width=32768 -L srvext4 -v /dev/MyVolGroup/srvLV
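
The stride/stripe arithmetic above, as a runnable snippet (one caveat: the 65536KB figure in /proc/mdstat is the write-intent bitmap chunk, and plain raid1 has no stripe geometry, so the su/sw and stride/stripe-width options are arguably moot here; this only reproduces the calculation):

```shell
chunk_kb=65536                               # taken from /proc/mdstat above
block_b=4096                                 # filesystem block size
data_disks=2
stride=$(( chunk_kb * 1024 / block_b ))      # -> 16384 blocks
stripe_width=$(( stride * data_disks ))      # -> 32768 blocks
echo "stride=$stride stripe-width=$stripe_width"
```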

Then mounted:
Code:
# mount /dev/MyVolGroup/rootLV /mnt/gentoo
# mkdir /mnt/gentoo/usr; mount /dev/MyVolGroup/usrLV /mnt/gentoo/usr
# mkdir /mnt/gentoo/var; mount /dev/MyVolGroup/varLV /mnt/gentoo/var
# mkdir /mnt/gentoo/srv; mount /dev/MyVolGroup/srvLV /mnt/gentoo/srv
# mount
  ...
  /dev/mapper/MyVolGroup-rootLV on /mnt/gentoo type xfs (rw)
  /dev/mapper/MyVolGroup-usrLV on /mnt/gentoo/usr type ext4 (rw)
  /dev/mapper/MyVolGroup-varLV on /mnt/gentoo/var type ext4 (rw)
  /dev/mapper/MyVolGroup-srvLV on /mnt/gentoo/srv type ext4 (rw)


Once I got the partitions right (I hope) I stopped rebooting; the livecd has stayed up since.

Then I did some other stuff for the base install and mounted the virtual filesystems:
Code:
# mount -t proc proc /mnt/gentoo/proc
# mount --rbind /sys /mnt/gentoo/sys
# mount --rbind /dev /mnt/gentoo/dev

And chrooted:
Code:
chroot /mnt/gentoo /bin/bash
From here on, whenever I came back to it, I'd ssh into the livecd and chroot into /mnt/gentoo.

I installed the kernel with menuconfig and created the initramfs with genkernel, then moved on to the bootloader.
Code:
# genkernel --lvm --mdadm --e2fsprogs --disklabel --busybox --install initramfs
  ...
  * WARNING... WARNING... WARNING...
  * Additional kernel cmdline arguments that *may* be required to boot properly...
  * add "dolvm" for lvm support
  * add "domdadm" for RAID support
  (created /boot/initramfs-genkernel-x86_64-3.18.12-gentoo)

# emerge --ask sys-boot/grub
  * To avoid automounting and auto(un)installing with /boot,
  * just export the DONT_MOUNT_BOOT variable.
  *
  *
  * Assuming you do not have a separate /boot partition.

# grub2-install /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL
  Installing for i386-pc platform.
  Installation finished. No error reported.
# grub2-install /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL
  Installing for i386-pc platform.
  Installation finished. No error reported.

# vim /etc/default/grub
  added GRUB_CMDLINE_LINUX="domdadm dolvm"

(from https://wiki.gentoo.org/wiki/GRUB2_Quick_Start)
# vim /etc/portage/make.conf
  added GRUB_PLATFORMS="pc"

# emerge --ask sys-boot/grub:2
  (caused a rebuild)

# grub2-install /dev/disk/by-id/ata-DISK01-MODEL-AND-SERIAL
  Installing for i386-pc platform.
  Installation finished. No error reported.
# grub2-install /dev/disk/by-id/ata-DISK02-MODEL-AND-SERIAL
  Installing for i386-pc platform.
  Installation finished. No error reported.

# grub2-mkconfig -o /boot/grub/grub.cfg
  Generating grub configuration file ...
  Found linux image: /boot/vmlinuz-3.18.12-gentoo
  Found initrd image: /boot/initramfs-genkernel-x86_64-3.18.12-gentoo
  done


And now it won't boot. On startup the GRUB menu shows up (which isn't surprising since, I assume, it's in the MBR), it unpacks the initramfs, goes on detecting my drives correctly (I'm only using 2 out of 4; the other 2 are unpartitioned at the moment) and fails on RAID.

Code:
>> Activating mdev
>> Loading modules
  ...
  :: Loading from dmraid:
  :: Loading from mdadm: dm-snapshot
  :: Loading from lvm: dm-snapshot dm-bufio
  ...
mdadm: WARNING /dev/sdc1 and /dev/sdc appear to have very similar superblocks.
  If they are really different, please --zero the superblock on one
  If they are the same or overlap, please remove one from the DEVICE list in mdadm.conf
mdadm: No arrays found in config file or automatically
Scanning for and activating Volume Groups
  No volume groups found
  No volume groups found
Determining root device
  Block device /dev/mapper/myVG-myRootLV is not a valid root device...
  Could not find the root block device in .


I followed this forum post, but I have no /etc/conf.d/rc file, nor does
Code:
grep RC_VOLUME_ORDER /etc/conf.d/*
return anything. I did find:
Code:
/etc/conf.d/device-mapper
    RC_AFTER="lvm"
and
Code:
/etc/conf.d/lvm
    RC_AFTER="mdraid"

which makes sense (to me). I found another post suggesting removing or reinstalling mdadm, but I'm not sure that would fix anything. It couldn't hurt, but I'd rather have some feedback from you guys first.

At the moment, if I boot the live CD, mount everything and chroot, everything is there. Otherwise it fails with those messages. Outside of the chroot, /boot is a symlink to /mnt/livecd/boot, but I guess that's livecd magic that won't persist across a reboot.

The disks are SATA, and I didn't go with GPT 'cos it's an old-ish board and, as far as partitions go, it's just one per disk anyway.

So... where did I go wrong? I'm guessing initramfs/bootloader. What say you?


Last edited by vesperto on Tue Aug 11, 2015 8:02 pm; edited 1 time in total
vesperto

PostPosted: Sun May 24, 2015 9:51 pm

So, replying to self: I noticed that /boot/grub/grub.cfg had erroneous entries in the first menu entry (the default one, I think; I didn't take notes): it was pointing at the /usr LV, so I fixed that. There were also repeated "insmod part_msdos" lines, which I removed, i.e. from
Code:
insmod part_msdos
insmod part_msdos
insmod part_msdos
to
Code:
insmod part_msdos


The other 2 entries, the regular and recovery ones, seemed fine, and I added some
Code:
insmod xfs
here and there. I don't think that'd be the immediate issue; in the GRUB menu, if I press "e", the entry is (roughly) this:
Code:
insmod gzio
insmod part_msdos
insmod diskfilter
insmod mdraid09
insmod lvm
insmod xfs
set root='lvm/myVG-rootLV'
search --no-floppy --fs-uuid --set=root --hint='lvm/myVG-rootLV' <rootUUID> # (plus another entry w/o the hint)
linux /boot/vmlinuz.... root=/dev/mapper/myVG-rootLV ro domdadm dolvm
initrd /boot/initram....

which seems about right.

I also double-checked that /etc/lvm/lvm.conf has
Code:
md_component_detection = 1


I was gonna follow this but .conf is the GRUB legacy extension, right? I'm using GRUB2.

Since detection should be done by the initramfs (btw, do I sense a general dislike for initramfs usage? If so, for any particular reason?), I went ahead and followed this tip from an ubuntu-er to a gentoo-er and
Code:
emerge --ask --unmerge mdadm
as well as removed /etc/mdadm.conf. I have RAID* and LVM compiled in anyway, and the partitions are 0xFD, so at least for boot it should be autodetected.
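
(For the record: in-kernel autodetection of 0xfd partitions only works with 0.90 metadata, which is what I created the array with, and with the raid personalities built in. An initramfs, by contrast, assembles from /etc/mdadm.conf, which wants a line like the following; the UUID here is a placeholder, not my real one.)

```
# /etc/mdadm.conf (sketch; placeholder UUID)
ARRAY /dev/md0 metadata=0.90 UUID=00000000:00000000:00000000:00000000
```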

But it's not; it still won't boot, and it fails in exactly the same way. Not a good first experience.
NeddySeagoon
Administrator

Joined: 05 Jul 2003
Posts: 43370
Location: 56N 3W

PostPosted: Mon May 25, 2015 7:45 pm

vesperto,

A few things.

Code:
Device /dev/md0p1 not found (or ignored by filtering)
is correct.
You partitioned two drives with one partition each, then donated those partitions to a raid1 set called /dev/md0.

/dev/md0 is a block device like any other block device. You can make partitions on it if you wish but you don't have to.
md0p1 does not exist because you have not made it yet.

You should donate all of /dev/md0 to a physical volume if you want to use LVM. There is no point in using partitions as well, unless you have a good reason.

The
Code:
# fdisk /dev/md127
has created a single partition on the raid set and set the partition type to Linux LVM.
That's not the same thing as creating a physical volume for LVM to use. For that you need pvcreate.
So far so good, but partitioning md127 has cost you some space, as it was not required.

Code:
  Block device /dev/mapper/myVG-myRootLV is not a valid root device...
  Could not find the root block device in .
is very telling for what it does not say.

After the "in" and before the "." the kernel has listed all the block devices that it can see. In your case, none at all.
This means that the kernel cannot see your hard drives.

Tell us how you made your kernel and post the output of lspci.
_________________
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
vesperto

PostPosted: Mon May 25, 2015 8:59 pm

Thank you for your reply.

NeddySeagoon wrote:

You partitioned two drives with one partition each, then donated those partitions to a raid1 set called /dev/md0.

Upon reboot it became /dev/md127, is this udev's doing?

NeddySeagoon wrote:

/dev/md0 is a block device like any other block device. You can make partitions on it if you wish but you don't have to.
md0p1 does not exist because you have not made it yet.

You should donate all of /dev/md0 to a physical volume if you want to use LVM. There is no point in using partitions as well, unless you have a good reason.

I think the first time I tried pvcreate /dev/md0|127 it complained about duplicate stuff, and I ended up creating an LVM partition.

NeddySeagoon wrote:

The
Code:
# fdisk /dev/md127
has created a single partition on the raid set and set the partition type to Linux LVM.
That's not the same thing as creating a physical volume for LVM to use. For that you need pvcreate.
So far so good, but partitioning md127 has cost you some space, as it was not required.

Code:
  Block device /dev/mapper/myVG-myRootLV is not a valid root device...
  Could not find the root block device in .
is very telling for what it does not say.

After the "in" and before the "." the kernel has listed all the block devices that it can see. In your case, none at all.
This means that the kernel cannot see your hard drives.

Tell us how you made your kernel and post the output of lspci.


I did find that omission odd. Compared to the defaults when I first ran make menuconfig, my (relevant) changes were:
Code:
Device Drivers -> Serial ATA and Parallel ATA drivers (libata) -> <*> NVIDIA SATA support
Device Drivers -> Serial ATA and Parallel ATA drivers (libata) -> <*> AMD/NVidia PATA support [no change]
Device Drivers -> Multiple devices driver support (RAID and LVM) -> <*> Linear (append) mode
Device Drivers -> Multiple devices driver support (RAID and LVM) -> <*> RAID-1 (mirroring) mode
Device Drivers -> Multiple devices driver support (RAID and LVM) -> <*> Crypt target support
Device Drivers -> Multiple devices driver support (RAID and LVM) -> <*> RAID 1/4/5/6/10 target
Device Drivers -> Multiple devices driver support (RAID and LVM) -> <M> Snapshot target
File systems -> <*> Second extended fs support
File systems -> <*> XFS filesystem support
File systems -> [*] XFS Quota support
File systems -> [*] XFS POSIX ACL support
File systems -> [*] XFS Realtime subvolume support
File systems -> [*] XFS Verbose Warnings
File systems -> CD-ROM/DVD Filesystems -> <M> UDF file system support
File systems -> DOS/FAT/NT Filesystems -> <M> NTFS file system support
File systems -> DOS/FAT/NT Filesystems -> [*] NTFS write support
File systems -> Miscellaneous filesystems -> <M> SquashFS 4.0 - Squashed file system support

I'm very happy I did this installation via ssh and took note of everything :) I pasted a full list of changes.

As for the hardware:
Code:
# lspci
00:00.0 RAM memory: NVIDIA Corporation C51 Host Bridge (rev a2)
00:00.1 RAM memory: NVIDIA Corporation C51 Memory Controller 0 (rev a2)
00:00.2 RAM memory: NVIDIA Corporation C51 Memory Controller 1 (rev a2)
00:00.3 RAM memory: NVIDIA Corporation C51 Memory Controller 5 (rev a2)
00:00.4 RAM memory: NVIDIA Corporation C51 Memory Controller 4 (rev a2)
00:00.5 RAM memory: NVIDIA Corporation C51 Host Bridge (rev a2)
00:00.6 RAM memory: NVIDIA Corporation C51 Memory Controller 3 (rev a2)
00:00.7 RAM memory: NVIDIA Corporation C51 Memory Controller 2 (rev a2)
00:04.0 PCI bridge: NVIDIA Corporation C51 PCI Express Bridge (rev a1)
00:09.0 RAM memory: NVIDIA Corporation MCP51 Host Bridge (rev a2)
00:0a.0 ISA bridge: NVIDIA Corporation MCP51 LPC Bridge (rev a3)
00:0a.1 SMBus: NVIDIA Corporation MCP51 SMBus (rev a3)
00:0a.2 RAM memory: NVIDIA Corporation MCP51 Memory Controller 0 (rev a3)
00:0b.0 USB controller: NVIDIA Corporation MCP51 USB Controller (rev a3)
00:0b.1 USB controller: NVIDIA Corporation MCP51 USB Controller (rev a3)
00:0d.0 IDE interface: NVIDIA Corporation MCP51 IDE (rev a1)
00:0e.0 IDE interface: NVIDIA Corporation MCP51 Serial ATA Controller (rev a1)
00:0f.0 IDE interface: NVIDIA Corporation MCP51 Serial ATA Controller (rev a1)
00:10.0 PCI bridge: NVIDIA Corporation MCP51 PCI Bridge (rev a2)
00:10.1 Audio device: NVIDIA Corporation MCP51 High Definition Audio (rev a2)
00:14.0 Bridge: NVIDIA Corporation MCP51 Ethernet Controller (rev a3)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor HyperTransport Configuration
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Address Map
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Miscellaneous Control
00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 10h Processor Link Control
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Park [Mobility Radeon HD 5430]
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cedar HDMI Audio [Radeon HD 5400/6300 Series]
02:09.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8169 PCI Gigabit Ethernet Controller (rev 10)


The southbridge is nVidia; I had trouble before for not enabling this. From previous notes, I seem to have once enabled
Code:
Device Drivers - > SCSI device support -> SCSI low level drivers -> Nvidia SATA
yet I see no low level drivers. Am I confusing something?

Thank you for your time.
NeddySeagoon

PostPosted: Mon May 25, 2015 9:57 pm

vesperto,

Your SATA needs

Code:
Device Drivers  --->
   < > ATA/ATAPI/MFM/RLL support (DEPRECATED)  ----   (must be off)
   SCSI device support  --->
      <*> SCSI disk support
      <*> SCSI CDROM support
      <*> SCSI generic support
   <*> Serial ATA and Parallel ATA drivers (libata)  --->
      [*]   ATA SFF support (for legacy IDE and PATA)
      [*]     ATA BMDMA support
      <*>       NVIDIA SATA support
      <*>       AMD/NVidia PATA support


All devices now appear as some form of SCSI, even if they do not use an electrical SCSI interface.
A few words of explanation:
< > ATA/ATAPI/MFM/RLL support (DEPRECATED) must be off, as there are conflicting obsolete drivers in this menu. You don't want them.

<*> SCSI disk support is the high level SCSI driver.
<*> SCSI CDROM support is for your optical drives; it's not needed to boot, but you probably want to read optical media.
<*> SCSI generic support is not needed to boot. It supports optical media writing and block devices on interfaces like USB. Needed for USB sticks and the like.

The other items are the low level drivers for
lspci:
00:0d.0 IDE interface: NVIDIA Corporation MCP51 IDE (rev a1)
00:0e.0 IDE interface: NVIDIA Corporation MCP51 Serial ATA Controller (rev a1)
00:0f.0 IDE interface: NVIDIA Corporation MCP51 Serial ATA Controller (rev a1)

Which you seem to have set correctly already.

You probably don't want
Code:
File systems -> DOS/FAT/NT Filesystems -> <M> NTFS file system support
File systems -> DOS/FAT/NT Filesystems -> [*] NTFS write support
as it won't do what you think it will.
Use FUSE and emerge ntfs-3g instead. You can have both.
Kernel NTFS write support is limited to changing the content of an existing file, provided the file size does not change.

Your BIOS and chipset support fakeraid. That needs to be off, or at least, fakeraid must not be in use.

The first sign of progress will be the kernel listing your drives and partitions, even if it doesn't boot.

Check your kernel for the above then rebuild and reinstall it if you change anything.
Barracuz
n00b

Joined: 17 Mar 2015
Posts: 17
Location: R.I., United States

PostPosted: Tue May 26, 2015 4:22 pm

A temporary fix, until you find the real problem, is to specify your root partition manually when the initramfs asks you to specify another device.
NeddySeagoon

PostPosted: Tue May 26, 2015 4:25 pm

Barracuz,

... but only after the kernel can see the hard drives.
vesperto

PostPosted: Tue May 26, 2015 10:45 pm

NeddySeagoon wrote:
Your SATA needs

Code:
Device Drivers  --->
   < > ATA/ATAPI/MFM/RLL support (DEPRECATED)  ----   (must be off)
   SCSI device support  --->
      <*> SCSI disk support
      <*> SCSI CDROM support
      <*> SCSI generic support
The deprecated ATA/IDE support is now disabled:
Code:
# CONFIG_IDE is not set


NeddySeagoon wrote:

<*> Serial ATA and Parallel ATA drivers (libata) --->
[*] ATA SFF support (for legacy IDE and PATA)
[*] ATA BMDMA support
<*> NVIDIA SATA support
<*> AMD/NVidia PATA support

All devices now appear as some form of SCSI, even if they do not use an electrical SCSI interface.
These were not altered:
Code:
CONFIG_RAID_ATTRS=y
CONFIG_BLK_DEV_SD=y
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=y

CONFIG_ATA_SFF=y
CONFIG_ATA_BMDMA=y
CONFIG_SATA_NV=y
CONFIG_PATA_AMD=y


NeddySeagoon wrote:

You probably don't want
Code:
File systems -> DOS/FAT/NT Filesystems -> <M> NTFS file system support
File systems -> DOS/FAT/NT Filesystems -> [*] NTFS write support
as it won't do what you think it will.
Correct, these were disabled:
Code:
# CONFIG_NTFS_FS is not set
For completeness, here's my .config

NeddySeagoon wrote:
Your BIOS and chipset support fakeraid. That needs to be off, or at least, fakeraid must not be in use.
It was on but not in use, in hopes of somehow activating JBOD or AHCI or something so I could try to debug an issue with other hard drives I had. It is now off. In a bout of frantically pausing the boot to catch any fast-scrolling message, I got lucky (Lucky Luke style) and saw this:
Code:
MediaShield ROM BIOS 6.50
Copyright 2005 Nvidia Corp
detecting ARRAY...
Even after I disabled the BIOS fakeraid.

I also saw this:
Code:
Welcome to GRUB!
error: file '/boot/grub/i386-pc/mdraid09,lvm.mod' not found.
This led me to immediately comment it out in /etc/default/grub:
Code:
#GRUB_PRELOAD_MODULES="mdraid09,lvm"
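
(Incidentally, that error is consistent with GRUB_PRELOAD_MODULES being space-separated: with a comma, grub2-mkconfig treats "mdraid09,lvm" as a single module name and looks for mdraid09,lvm.mod. If preloading were wanted at all, the form would presumably be:)

```
# /etc/default/grub (sketch)
GRUB_PRELOAD_MODULES="mdraid09 lvm"
```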


NeddySeagoon wrote:
The first sign of progress will be the kernel listing your drives and partitions, even if it doesn't boot.
Not there yet. My grub.cfg seems to be in order; I only commented out the extraneous
Code:
insmod part_msdos
The livecd detects the drives without issue (here's its dmesg), but that's not the kernel I'm interested in.

Before compiling the kernel, I changed its version:
Code:
EXTRAVERSION = -gentoo.2
and then compiled with
Code:
make -S -j 5; make -S modules -j 5; make modules_install; make install
Both RAID partitions are flagged as bootable and are 0xfd Linux raid autodetect, so I shouldn't even need an initramfs, provided the RAID and LVM drivers are compiled into the kernel rather than as modules, which I think they are.

I still did generate one with
Code:
genkernel --mdadm --mdadm-config=/etc/mdadm.conf --lvm --e2fsprogs --disklabel --busybox --install --bootloader=grub initramfs
and it promptly warns me that I should probably add dolvm and domdadm to the boot line, which I did. I generated
Code:
grub2-mkconfig -o /boot/grub/grub.cfg
and also
Code:
mdadm --detail --scan >> /etc/mdadm.conf
which I passed to genkernel.

Upon boot I double-checked that the kernel was the correct one (.2) with the "e" option. The only nitpick is a space between initrd and /boot/initram.... (it's not aligned with the lines above), but I don't think that matters and, again, the kernel should handle it.

Just for the hell of it I even tried single-user mode, but the message is still the same as in my first post. I'm considering zeroing out everything and going with GPT plus that handicap boot partition for GRUB2. Or maybe /boot outside of RAID+LVM... neither seems necessary, and the problem is probably a very stupid detail I overlooked elsewhere, but I'm getting tired of this and want the system up and running.

Thanks for the feedback.
vesperto

PostPosted: Wed May 27, 2015 11:09 pm

The kernel is loading properly, I think.
After GRUB I briefly see the
Code:
Loading Linux 3.18.12-gentoo.2 ...
Loading initial ramdisk ...
messages and then it's back to the same error.

But scrolling up in the dmesg I do see the sd* devices being loaded and partitions being found. I can go as far up as
Code:
<time> vgaarb: setting as boot device: PCI:000:01:00.0
and some ACPI lines. From then on a few highlights would be
Code:
SGI XFS with ACLs
radeon direct firmware load failed...
scsi host0 through 3: sata_nv
ata1 through 4: SATA max UDMA/133 ...
ata1,3: SATA link up 1.5 Gbps...
ata1.00,3.00: ATA-8: Toshiba disk model...
ata1.00,3.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32)
ata1.00,3.00: configured for UDMA/133
...
sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 0:0:0:0: [sda] 4096-byte physical blocks
sd 0:0:0:0: [sda] Attached SCSI disk
sda: sda1
...
md: linear personality registered for level -1
md: raid0 personality registered for level 0
md: raid1 personality registered for level 1
md: raid10 personality registered for level 10
md: raid6 personality registered for level 6
md: raid5 personality registered for level 5
md: raid4 personality registered for level 4
device-mapper: ioctl: 4.28.0-ioctl (2014-09-17) initialised
...
ata2: SATA link up 1.5 Gbps...
ata2.00: ATA-8: Toshiba disk model...
ata2.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32)
ata2.00: configured for UDMA/133
...
sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 2:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 2:0:0:0: [sdc] 4096-byte physical blocks
...
sdc: sdc1
sdb: unknown partition table
...
ata4: SATA link up 1.5 Gbps...
ata4.00: ATA-8: Toshiba disk model...
ata4.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32)
ata4.00: configured for UDMA/133
...
sd 3:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 3:0:0:0: [sdd] 4096-byte physical blocks
sd 3:0:0:0: [sdd] Attached SCSI disk
sdb: unknown partition table
Typos may apply. Btw, I'm only using sda and sdc (which I refer to by /dev/disk/by-id/... in all the commands I use).

Then these last lines are shown just before the output posted at the end of my first post:
Code:
[time] Write protecting the kernel read-only data: 16384k
[time] Freeing unused kernel memory: 2008K (ffff880001a0a000 - ffff880001c00000)
[time] Freeing unused kernel memory: 652K (ffff880001f5d000 - ffff880002000000)
[time] mount (1121) used greatest stack depth: 13992 bytes left
>> Activating mdev
...


So... it's not the kernel.
vesperto

PostPosted: Thu May 28, 2015 11:43 am

The "N bytes left" message is apparently from CONFIG_DEBUG_STACK_USAGE or similar.
What I found interesting in this output is that the lines following mount are fs-related.
I'll check whether that flag is enabled when I get home (not related to my problem, I know).
vesperto

PostPosted: Tue Aug 11, 2015 8:01 pm

    I reinstalled following this blogpost and now it works: (almost) full-disk RAID1+LVM with /boot, / and everything else inside LVM.

    Some notes:
    • I used GPT instead of MBR
    • created a BIOS boot partition (type EF02)
    • used the default RAID metadata 1.2
    • did not add raid to default boot
    • issued
      Code:
      echo "sys-boot/grub device-mapper">>/etc/portage/package.use/grub
    • added
      Code:
      GRUB_CMDLINE_LINUX_DEFAULT="dolvm domdadm"
      to /etc/default/grub
    • had to create /boot/grub before issuing
      Code:
      grub2-mkconfig -o /boot/grub/grub.cfg



There are some minor unrelated configuration issues I didn't expect (no network, wrong keymap), but it does boot.