mangueireiro n00b
Joined: 22 Aug 2014 Posts: 3
Posted: Fri Aug 22, 2014 2:40 pm Post subject: Raid device gets autoassembled but without partitions |
Hello fellows. I decided to install Gentoo on a RAID device inside a VirtualBox virtual machine, and I have the following to report:
First, I made a level-0 RAID device, /dev/md1, out of /dev/sdb and /dev/sdc.
Then I made a primary partition formatted as ext4 (/dev/md1p1).
I compiled the kernel and built the initramfs with /etc/mdadm.conf integrated.
If I don't list the member devices explicitly in that file, the RAID device doesn't get assembled, and I get an error saying that the superblock of /dev/sdb is equal to the superblock of /dev/sdb1. I never created a /dev/sdb1, but it somehow appears; I think it has to do with the RAID setup, /dev/sdb being the "first" part of the RAID device.
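For reference, a sketch of the commands this setup corresponds to (the exact invocation may have differed; the metadata version is taken from the mdadm.conf shown further down):

```shell
# Build a striped (level 0) array from the two whole disks:
mdadm --create /dev/md1 --metadata=0.90 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# Partition the array itself, then format the first partition:
fdisk /dev/md1        # create one primary partition -> /dev/md1p1
mkfs.ext4 /dev/md1p1
```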
Now I have the following /etc/mdadm.conf file:
Code: |
DEVICE partitions
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>
ARRAY /dev/md1 metadata=0.90 UUID=7513fe63:71b17520:cb201669:f728008a level=raid0 num-devices=2 devices=/dev/sdb,/dev/sdc
|
Now, with the kernel compiled with genkernel, /etc/mdadm.conf integrated, and "domdadm" on the GRUB kernel line with root=/dev/md1p1,
I get:
Code: |
mdadm: /dev/md1 has been started with two drives.
>> Determining root device...
!! Block device /dev/md1p1 is not a valid root device...
...
|
Now, I got into the rescue shell and confirmed that /dev/md1 exists but /dev/md1p1 doesn't.
But, if I do
Code: |
mdadm --detail /dev/md1
|
the partition /dev/md1p1 appears. By exiting the shell and entering /dev/md1p1 as the root device I can now boot into the system, and I do get a nice feeling from being in it.
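For reference, the full manual sequence from the rescue shell looks roughly like this (a sketch; whether blockdev is available inside a genkernel initramfs is an assumption):

```shell
# /dev/md1 exists, but /dev/md1p1 does not.
mdadm --detail /dev/md1      # opening the array makes the kernel scan its partition table
ls /dev/md1p1                # the partition node should now exist
# An explicit partition-table re-read should have the same effect:
blockdev --rereadpt /dev/md1
```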
That's it. What can I do so that /dev/md1p1 gets created automatically? |
Goverp Veteran
Joined: 07 Mar 2007 Posts: 1999
Posted: Sat Aug 23, 2014 8:58 am Post subject: |
Your analysis is correct; I have a similar setup but wrote my own initramfs script, so I could add Code: | mdadm --detail --scan | after the first call to mdadm. That makes the partition devices /dev/md1p1 etc. appear.
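A sketch of what that looks like inside the init script (the surrounding lines are illustrative, not the actual genkernel linuxrc, and whether genkernel uses --assemble --scan exactly is an assumption):

```shell
# ... existing initramfs code ...
mdadm --assemble --scan   # first call: assembles /dev/md1 from mdadm.conf
mdadm --detail --scan     # extra call: opening each array device prompts the
                          # kernel to create the partition nodes /dev/md1p1 etc.
# ... continue to root-device detection ...
```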
So you need to persuade the genkernel people to include such a call in the init script it builds, or edit that script and insert the call manually. _________________ Greybeard |
szatox Advocate
Joined: 27 Aug 2013 Posts: 3131
Posted: Sat Aug 23, 2014 9:21 am Post subject: |
I'm using LVM on top of RAID, but if you want to use partitions on the RAID device itself, you might also want to have a look at genkernel's options:
Quote: |
--[no-]mdadm
Includes or excludes mdadm/mdmon support. Without sys-fs/mdadm[static] installed, this will compile mdadm for you.
--mdadm-config=<file>
Use <file> as configfile for MDADM. By default the ramdisk will be built without an mdadm.conf and will auto-detect arrays during boot-up.
--[no-]dmraid
Includes or excludes DMRAID support. |
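Put together, an invocation using those options might look like this (a sketch; the config path is the one from this thread):

```shell
# Rebuild the initramfs with mdadm support and the thread's config baked in:
genkernel --mdadm --mdadm-config=/etc/mdadm.conf initramfs
```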
Roman_Gruber Advocate
Joined: 03 Oct 2006 Posts: 3846 Location: Austro Bavaria
Posted: Sat Aug 23, 2014 12:21 pm Post subject: Re: Raid device gets autoassembled but without partitions |
mangueireiro wrote: |
First, I made a level-0 RAID device, /dev/md1, out of /dev/sdb and /dev/sdc.
|
Well, if you use LVM anyway it is sufficient to just make the LVM container with these two hard drives. You do not need RAID for that; LVM handles it for you.
It looks more like a boot or init issue from my point of view.
Your init script fails to mount it properly.
Personally I think LVM is enough for you in this regard. I never touched that software RAID thing because LVM does these things for me anyway. If it is a new install, redo it with only an LVM container below. Then make your volume groups and, inside them, your file systems. For myself the genkernel initramfs works quite well with a custom-made kernel.
It could also be a kernel-related issue. You need kernel + userspace utils + config files working together. |
mangueireiro n00b
Joined: 22 Aug 2014 Posts: 3
Posted: Sat Aug 23, 2014 4:56 pm Post subject: |
Goverp wrote: | Your analysis is correct; I have a similar setup but wrote my own initramfs script, so I could add Code: | mdadm --detail --scan | after the first call to mdadm. That makes the partition devices /dev/md1p1 etc. appear.
So you need to persuade the genkernel people to include such a call in the init script it builds, or edit that script and insert the call manually. |
Ok, thanks.
szatox wrote: | I'm using lvm on top of raid, but if you want to use partitions on raid itself you might also want to have a look at genkernel's options
Quote: |
--[no-]mdadm
Includes or excludes mdadm/mdmon support. Without sys-fs/mdadm[static] installed, this will compile mdadm for you.
--mdadm-config=<file>
Use <file> as configfile for MDADM. By default the ramdisk will be built without an mdadm.conf and will auto-detect arrays during boot-up.
--[no-]dmraid
Includes or excludes DMRAID support. |
|
I have used the first two parameters, but not the last. I don't know if it helps in this case.
tw04l124 wrote: | mangueireiro wrote: |
First, I made a level-0 RAID device, /dev/md1, out of /dev/sdb and /dev/sdc.
|
Well, if you use LVM anyway it is sufficient to just make the LVM container with these two hard drives. You do not need RAID for that; LVM handles it for you.
It looks more like a boot or init issue from my point of view.
Your init script fails to mount it properly.
Personally I think LVM is enough for you in this regard. I never touched that software RAID thing because LVM does these things for me anyway. If it is a new install, redo it with only an LVM container below. Then make your volume groups and, inside them, your file systems. For myself the genkernel initramfs works quite well with a custom-made kernel.
It could also be a kernel-related issue. You need kernel + userspace utils + config files working together. |
But will it raid?
The virtual drives are backed by files on two different hard disks, so I want them to work in parallel; that's why I made them RAID level 0. |
Roman_Gruber Advocate
Joined: 03 Oct 2006 Posts: 3846 Location: Austro Bavaria
Posted: Sun Aug 24, 2014 7:47 am Post subject: |
Hi,
I think yes, according to this:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html#create-raid
Quote: | To create a RAID logical volume, you specify a raid type as the --type argument of the lvcreate command. Usually when you create a logical volume with the lvcreate command, the --type argument is implicit. For example, when you specify the -i stripes argument, the lvcreate command assumes the --type stripe option. When you specify the -m mirrors argument, the lvcreate command assumes the --type mirror option. When you create a RAID logical volume, however, you must explicitly specify the segment type you desire. The possible RAID segment types are described in Table 4.1, “RAID Segment Types”. |
As I understand it, with LVM you can specify how big the data chunks are and how they are laid out, whether they are stored in duplicate (mirrored) or otherwise. LVM is a very powerful tool; it has many different features. |
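A minimal sketch of the striped (RAID-0-like) case with lvcreate, assuming the two disks from this thread; the volume-group and logical-volume names are made up:

```shell
pvcreate /dev/sdb /dev/sdc        # make the disks LVM physical volumes
vgcreate vg0 /dev/sdb /dev/sdc    # group them into one volume group
# -i 2 stripes across both PVs, which implies --type striped:
lvcreate -i 2 -I 64 -L 10G -n lv_root vg0   # -I: stripe size in KiB
mkfs.ext4 /dev/vg0/lv_root
```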
mangueireiro n00b
Joined: 22 Aug 2014 Posts: 3
Posted: Sun Aug 24, 2014 11:03 pm Post subject: |
tw04l124 wrote: | Hi,
I think yes, according to this:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html#create-raid
Quote: | To create a RAID logical volume, you specify a raid type as the --type argument of the lvcreate command. Usually when you create a logical volume with the lvcreate command, the --type argument is implicit. For example, when you specify the -i stripes argument, the lvcreate command assumes the --type stripe option. When you specify the -m mirrors argument, the lvcreate command assumes the --type mirror option. When you create a RAID logical volume, however, you must explicitly specify the segment type you desire. The possible RAID segment types are described in Table 4.1, “RAID Segment Types”. |
As I understand it, with LVM you can specify how big the data chunks are and how they are laid out, whether they are stored in duplicate (mirrored) or otherwise. LVM is a very powerful tool; it has many different features. |
Ok, thanks. |