Recover broken Intel MatrixRAID array (with FreeBSD on it)?

Still need help with Gentoo, and your question doesn't fit in the above forums? Here is your last bastion of hope.
27 posts
jontheil
n00b
Posts: 23
Joined: Fri Nov 28, 2008 9:04 pm
Location: Copenhagen, Denmark


Post by jontheil » Fri Mar 02, 2012 5:45 pm

Hi list,

Some years ago, I "lost" a RAID 5 array containing 3 drives. Back then, some folks suggested that my only chance was to install Gentoo Linux on another disk and try to bring the disks back on-line (and maybe recover the metadata).
This project has been put on ice for a long time, but now I will give it a try.
I haven't even dared to connect the old drives to avoid messing things up even more.
My board is an Intel S975XBX2 and I have a separate drive with Gentoo Linux 3.2.1 on it.

Can I just plug the power cables into the old drives and see what happens?

Are there any tools in Portage I can use to analyse/recover the disk array?

Any other suggestions..?

Thanks in advance,
Jon Theil Nielsen
NeddySeagoon
Administrator
Posts: 56102
Joined: Sat Jul 05, 2003 9:37 am
Location: 56N 3W


Post by NeddySeagoon » Fri Mar 02, 2012 11:34 pm

jontheil,

Not so fast: you will only get one go at this unless you can image the drives.

You say you have an Intel MatrixRAID array - are you 100% sure the Intel part was being used?
If it was kernel raid (I suppose BSD has the equivalent?), the metadata layout on the drives is different.

If you are really using the Intel MatrixRAID, you will need to connect the drives to the same Intel MatrixRAID to recover the data.

What do you mean by 'lost'?
Tell us exactly what happened, as closely as you can remember.
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
jontheil

Post by jontheil » Sat Mar 03, 2012 6:14 am

NeddySeagoon,

Thank you for replying!

I am absolutely sure that my RAID 5 was created by the Intel setup through the RAID configuration utility.
Because so much time has passed, I am not so sure about the partitioning. I don't remember if I actually booted from the RAID or if I had another boot drive.
The array consisted of three WD Raptor disks.

What exactly happened was that I was working with a large number of rather big files (digital images). In the middle of transferring the files over the LAN, the array stopped working. I believe that some part of the transfer took so long that the controller "decided" that one of the disks was failing, and it degraded the array. When the same thing happened again, the next disk was set off-line. As far as I remember, I was left with two disks off-line and one on-line (and no useful content).

I guess I panicked and tried a number of things (switching cables, moving disks around, etc.) until I realized that the best I could do was to detach the three disks completely and install a working OS (Gentoo) on a separate disk.

The Intel RAID configuration utility is not very useful; I cannot force the drives back on-line. But I might be able to access the disks from Gentoo and dd the content to some safe location. I just have to be sure that the PC does not try to boot from the broken array and make the situation even worse.

I don't know if the above answers your questions. Hope so... :)

Best regards,
Jon Theil Nielsen
Dont Panic
Guru
Posts: 322
Joined: Wed Jun 20, 2007 4:36 pm
Location: SouthEast U.S.A.

Post by Dont Panic » Sat Mar 03, 2012 3:12 pm

I'm not sure what size these drives were, but if you have the space, you may want to consider creating a backup image of each drive using the 'dd' command.

For example, if each of the RAID drives was 80 GB, and you have a single 500GB or 1TB drive now, it would make sense to capture an initial image of each disk so you can restore in case of a problem.

Just one word of caution when using the 'dd' command. Make *SURE* you have your source disk and destination file syntax correct (be anal, check it 3 or 5 times). :twisted:
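One way to rehearse the command safely is to run the exact same dd invocation against scratch files first. A minimal sketch (the temp files are stand-ins; on the real system if= would be the raw device, e.g. /dev/sdb, and of= a file on the large drive):

```shell
# Dry run of the imaging step using scratch files instead of real devices.
# Real case would look like: dd if=/dev/sdb of=/mnt/backup/sdb.img bs=1M
src=$(mktemp)    # stands in for the source drive, e.g. /dev/sdb
img=$(mktemp)    # stands in for the backup image file

printf 'pretend drive contents' > "$src"

# Triple-check if= (read from) and of= (write to) before pressing Enter!
dd if="$src" of="$img" bs=1M 2>/dev/null

cmp -s "$src" "$img" && echo "image matches source"
```

Comparing checksums of drive and image afterwards (e.g. with md5sum) is a cheap extra confirmation before touching the originals again.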
jontheil

Post by jontheil » Sat Mar 03, 2012 5:00 pm

Thanks again!

I have the space needed for all of the disks. And I will be careful with the syntax of dd.

I have the data cables connected, but not the power cables. Can I harm anything by plugging in the power connectors while the system is running? The BIOS is kind of tricky, and I don't like risking the system trying to boot from the failed disks.

But okay, let's say that I somehow secure the content by dd, what do I do next? As I said, I don't think I can persuade the RAID configuration utility to bring the drives back on-line. Does Gentoo have tools for that?

Of course, I would prefer something with a nice GUI, but a command line tool is better than nothing.

Best regards,
Jon Theil Nielsen
frostschutz
Advocate
Posts: 2978
Joined: Tue Feb 22, 2005 11:23 am
Location: Germany

Post by frostschutz » Sat Mar 03, 2012 5:13 pm

jontheil wrote:And I will be careful with the syntax of dd.
If one (or more) of the source disks is (partially) defective, add conv=noerror,sync so dd will fill those damaged areas with zeroes in the image.

There's also ddrescue but I never had much luck with it.
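The effect of conv=noerror,sync can be demonstrated on scratch files without touching a real drive: noerror keeps dd going past read errors, and sync zero-pads every short block to the full block size, so byte offsets in the image stay aligned with the disk. A small runnable illustration (throwaway temp files, not real devices):

```shell
src=$(mktemp); img=$(mktemp)
printf '0123456789' > "$src"    # 10 bytes: two full 4-byte blocks plus 2 bytes

# sync pads the final short block with zeroes up to bs
dd if="$src" of="$img" bs=4 conv=noerror,sync 2>/dev/null

stat -c %s "$img"    # prints 12: padded to a whole number of blocks
```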
jontheil wrote:what do I do next?
Depends on what's on the disks. Output of fdisk -l and file -s /dev/yourdisk* might help, also read up on dmraid for Linux support of various bios softraid solutions.

If all else fails you'll have to figure out how data was stored on the disks yourself and write a small python script to put it back together. Structure is usually easy to figure out if there was a large enough plain text file or image stored on the disk somewhere.
jontheil wrote:I would prefer something with a nice GUI
I don't know of any.
jontheil

Post by jontheil » Sat Mar 03, 2012 6:56 pm

Hi again

As soon as I have everything set, I will attach the drives again and use dd (thanks for the extra safety options) to copy over to a safe place.

Then I will have a look at what e.g. dmraid can tell me.

I am not really afraid of programming, but I don't think I have the knowledge to make Python scripts. But I will report back again, and then some of you clever people may have some ideas. :)

Best regards,
Jon Theil Nielsen
NeddySeagoon

Post by NeddySeagoon » Sat Mar 03, 2012 7:05 pm

jontheil,

If the drives are SATA, then they are designed to be hot-pluggable at the hardware level; that is, nothing should get broken.
IDE drives are not hot-pluggable, and SCSI drives may be; it's optional.

Be warned that not all SATA controllers support hot plugging, and for some that do, the support is missing from the device driver.

You need to make the SCSI bus rescan after you connect your drives. Intel MatrixRAID arrays can be controlled by either mdadm or dmraid, but you need to make up your mind at raid creation time.
Looking at the partition table in LBA0 of one of the drives would be useful, as would looking at each partition with the file command and with mdadm -E /dev/sd.. to see what it thinks is there.

Looking is harmless, but it's safer to look at one of your image files if you can.
Regards,

NeddySeagoon

M
Guru
Posts: 432
Joined: Tue Dec 12, 2006 11:59 am

Post by M » Sat Mar 03, 2012 7:23 pm

Hi jontheil,

I recently had to recover a server with Intel Matrix RAID. I was told at first that it was hardware RAID, but when I saw that there were no options in the RAID BIOS, and that the single drives were visible after I booted SystemRescueCD, I realized it was not hardware RAID. CentOS was installed, and it seems the installation used dmraid by default, not md. Since I didn't have any experience with that, I quickly installed Windows on a separate hard drive I added (yes, kind of silly) and installed the Intel Matrix RAID tools for Windows. After a restart it immediately started to recover the RAID, and all was done in about an hour. Just sharing my experience.
jontheil

Post by jontheil » Sat Mar 03, 2012 7:48 pm

Hi Guru,

Thanks for the tip. Actually, just today I tried to install XP on another partition than the one my Gentoo runs from. It didn't succeed; I don't think XP likes my motherboard, but I may give it another try.
It's true that it is not really hardware RAID. I don't remember the right name for it right now.
Yes, I will try this solution again. And any other ideas are most welcome!
Of course, I feel a little stupid that I didn't have a backup of everything. But back then, I thought that RAID 5 would/should not fail. Next time, it will be software RAID; with modern processors, I should still have plenty of power left for the rest of the "work".

Best regards,
Jon Theil Nielsen
jontheil

Post by jontheil » Sat Mar 03, 2012 7:52 pm

And thanks to NeddySeagoon too.
I think this information is very useful. Can't wait to see what shows up..!

Best regards,
Jon Theil Nielsen
jontheil

Post by jontheil » Sun Mar 04, 2012 10:22 am

Hi again,

With all three disks turned on, I could see and test two of them. I used mdadm -E /dev/sd*.
The first one:

Code: Select all

mdadm -E /dev/sdb
/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.2.02
    Orig Family : 00000000
         Family : 226608eb
     Generation : 000000be
           UUID : 89dddf72:2d1472d1:731c54d7:d5685e55
       Checksum : 154e2da6 correct
    MPB Sectors : 2
          Disks : 3
   RAID Devices : 1

  Disk00 Serial : WD-WMANS1321860
          State : active
             Id : 00000100
    Usable Size : 145221598 (69.25 GiB 74.35 GB)

[Volume0]:
           UUID : 376f21d1:f8b44eed:9f1c794e:feefb9bb
     RAID Level : 5
        Members : 3
          Slots : [U_U]
      This Slot : 0
     Array Size : 290441088 (138.49 GiB 148.71 GB)
   Per Dev Size : 96813696 (46.16 GiB 49.57 GB)
  Sector Offset : 0
    Num Stripes : 756357
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : degraded
    Dirty State : clean

  Disk01 Serial : D-WMANS1321994:0
          State : active
             Id : ffffffff
    Usable Size : 145221598 (69.25 GiB 74.35 GB)

  Disk02 Serial : WD-WMAP41668380
          State : active
             Id : 00000000
    Usable Size : 293042254 (139.73 GiB 150.04 GB)
The second one:

Code: Select all

/dev/sdc:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.2.02
    Orig Family : 00000000
         Family : 226608eb
     Generation : 000000bc
           UUID : 89dddf72:2d1472d1:731c54d7:d5685e55
       Checksum : 081a3efa correct
    MPB Sectors : 2
          Disks : 3
   RAID Devices : 1

  Disk01 Serial : WD-WMANS1321994
          State : active
             Id : 00010000
    Usable Size : 145221598 (69.25 GiB 74.35 GB)

[Volume0]:
           UUID : 376f21d1:f8b44eed:9f1c794e:feefb9bb
     RAID Level : 5
        Members : 3
          Slots : [UUU]
      This Slot : 1
     Array Size : 290441088 (138.49 GiB 148.71 GB)
   Per Dev Size : 96813696 (46.16 GiB 49.57 GB)
  Sector Offset : 0
    Num Stripes : 756357
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk00 Serial : WD-WMANS1321860
          State : active
             Id : 00000100
    Usable Size : 145221598 (69.25 GiB 74.35 GB)

  Disk02 Serial : WD-WMAP41668380
          State : active
             Id : 00000000
    Usable Size : 293042254 (139.73 GiB 150.04 GB)
The good thing is that both disks are part of the three-disk array and share many identical parameters.
The strange thing is that the first disk reports the array as degraded, while the other one states it is "normal".
During the night, I used dd to create two image files on a network drive. Perhaps not surprisingly, they seem quite similar.
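"Quite similar" can be quantified without mounting anything: cmp -l lists every differing byte, and md5sum gives a quick fingerprint. A toy example with two scratch files (in the real case the arguments would be the sdb and sdc image files):

```shell
# Two images that differ in exactly one byte.
a=$(mktemp); b=$(mktemp)
printf 'AAAABAAA' > "$a"
printf 'AAAACAAA' > "$b"

cmp -l "$a" "$b" | wc -l    # prints 1: one differing byte
md5sum "$a" "$b"            # differing checksums confirm they are not identical
```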

Code: Select all

# ls -ls
total 145227776
72613888 -rwxr-xr-x 1 root root 74356621312 Mar  4 00:02 sdb
72613888 -rwxr-xr-x 1 root root 74356621312 Mar  4 03:30 sdc
I used fdisk to look at the partitions

Code: Select all

#fdisk -l sdb

Disk sdb: 74.4 GB, 74356621312 bytes
255 heads, 63 sectors/track, 9040 cylinders, total 145227776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x90909090

Device Boot      Start         End      Blocks   Id  System
  sdb1   *          63   290439134   145219536   a5  FreeBSD

Code: Select all

# fdisk -l sdc

Disk sdc: 74.4 GB, 74356621312 bytes
255 heads, 63 sectors/track, 9040 cylinders, total 145227776 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf366f367

Device Boot      Start         End      Blocks   Id  System
As mentioned, only two of the three disks are detected (both in dmesg and by looking at /dev/).

Code: Select all

# ls /dev | grep sd
sda
sda1
sda2
sda3
sda4
sdb
sdb1
sdb5
sdb6
sdc
sda is my working Gentoo disk. I don't know what I can learn from the partitioning of the two disks.
Since I am very concerned about restoring most of my data, I haven't tried to move the disks physically. But I may try to swap one of the working ones with the dead one; it could be a cable issue.
I don't know if any tools can try to bring the dead drive back on-line.

Please give me some more advice on whether I should just forget about the data, or whether I can at least get a degraded array back.

Best regards,
Jon Theil Nielsen
NeddySeagoon

Post by NeddySeagoon » Sun Mar 04, 2012 1:56 pm

jontheil,

We now know without a shadow of a doubt that your raid is Intel (Magic : Intel Raid ISM Cfg Sig), since kernel raid gives something like

Code: Select all

 $ sudo /sbin/mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
With a totally dead drive there is nothing to be done. You are on the right track trying replacement power and data cables before you give up on it.
If it never appears in /dev, nothing can see it.

The next step is to try to assemble your raid in degraded mode. With kernel raid, I would recommend practicing with the image files, but I don't know how to do that with Intel Matrix Storage.
Ask the Intel Matrix Storage controller to assemble your raid in degraded mode. It may have a --force option you can use as a last resort. Then mount the raid read-only.

Be aware that with one drive missing, the raid has no idea whether the data it's looking at is self-consistent or not; that needs the other drive.
With a second drive dropping out of the array, some data *will* be corrupt, as it's likely writes were completed on one drive but not on the other. The only way to tell how bad it is is to look at the data yourself after the raid assembles.
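The arithmetic behind this is worth spelling out: for each stripe, RAID 5 stores the data chunks plus their XOR parity, so any single missing chunk can be recomputed from the survivors, but once a second member is gone there is nothing left to cross-check against. A toy sketch, with single bytes standing in for whole chunks:

```shell
# Toy RAID 5 stripe: two data "chunks" (single bytes) plus XOR parity,
# as a controller would write them across three drives.
d0=65
d1=108
parity=$((d0 ^ d1))

# The drive holding d1 dies: XOR the survivors to rebuild it.
rebuilt=$((d0 ^ parity))
echo "$rebuilt"    # prints 108, i.e. d1 recovered
```

With two members missing there are not enough survivors left to XOR, which is exactly the failure described in this thread.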

Don't even think about in-situ data recovery: find your files, copy them out and assess the damage.
Regards,

NeddySeagoon

jontheil

Post by jontheil » Sun Mar 04, 2012 7:34 pm

Well, the third disk is definitely dead.
So I am left with the two others. I really, really hope I can recover at least some of the data.
The very last resort is to meddle with Intel's RAID configuration utility in the BIOS.
I would prefer to use dmraid to (try to) recreate the degraded array. And I would like to do my experiments on the image files and not the physical drives.
I don't really understand the function of the device mapper. But is there a way to link the images into /dev/? I can use

Code: Select all

mdadm --examine
on block devices, but apparently not on disk images.
I have searched a lot, but haven't found any references on how to recreate a degraded Intel Matrix RAID with dmraid. So I could really use a link, or an explanation here on the forum.

Best regards,
Jon Theil Nielsen
NeddySeagoon

Post by NeddySeagoon » Sun Mar 04, 2012 8:43 pm

jontheil,

You need to make the file appear as a block device; the loop device does that. You may even be able to associate files with loop devices, then assemble the loop devices into raid sets.
As you won't be mounting the files directly, you can't use mount -o loop,ro ...; you need to perform the association and the raid assembly as separate steps.

See the write-up and example on Wikipedia.

You can run losetup on your images; then mdadm may work on them.
You will need loop device support in your kernel.
Regards,

NeddySeagoon

jontheil

Post by jontheil » Sun Mar 04, 2012 9:10 pm

Hi NeddySeagoon ,

Thank you very much for the thorough explanation and the link.
Actually, I installed Gentoo as an instrument to repair my usual OS. And even though I still don't understand it fully, I have got used to it and like it.
What I also appreciate very much is this very active and friendly forum.

I think I had better wait until tomorrow before I move on with my experiments. Though I must admit that I am quite curious. :)

Best regards,
Jon Theil Nielsen
NeddySeagoon

Post by NeddySeagoon » Sun Mar 04, 2012 9:18 pm

jontheil,

Practice on a *.iso image. Any iso image will do.
e.g.

Code: Select all

mount -o loop,ro -t iso9660 /path/to/random.iso /mnt/floppy
Your kernel needs to support the iso9660 filesystem and the loop device.
If you don't have an iso download handy, get a SystemRescueCD iso. It's about 300 MB and very useful in its own right anyway.
Regards,

NeddySeagoon

jontheil

Post by jontheil » Wed Mar 07, 2012 4:42 pm

Hi again,

I have been busy with other matters the last few days.
But I am very close to giving up.

What I have tried:

Code: Select all

run mdadm --examine on the two physical disks (see output above)

Code: Select all

dd the two disks to image files (sdb and sdc)

Code: Select all

run losetup /dev/loop0 sdb && losetup /dev/loop1 sdc

Code: Select all

run mdadm --examine on loop0 and loop1:
mdadm: No md superblock detected on /dev/loop1

Code: Select all

mount -o loop,ro /dev/loop0:
mount: you must specify the filesystem type

Code: Select all

mount -o loop,ro /dev/loop1:
mount: you must specify the filesystem type
I can't figure out what is wrong with the image files, since I can't run mdadm on them.
I can't figure out how to mount the images (loops), as they represent whole disks and not partitions/filesystems.

Any more suggestions? Thanks in advance

Best regards,
Jon Theil Nielsen
NeddySeagoon

Post by NeddySeagoon » Wed Mar 07, 2012 6:28 pm

jontheil,

All this points to needing to use the Intel Matrix Raid Storage Controller and real drives.

You cannot mount any of the images individually, as they don't contain a filesystem. Indeed, the data is 'scrambled' in such a way that it can be spread over three drives yet still be read if any one drive is missing.
This 'scrambling' is done at a very low level, underneath the filesystem. To be able to use the mount command successfully, the raid must first be assembled; degraded mode is fine.

There is no mdadm-visible superblock for several reasons:
If the drive is partitioned (look with fdisk), the raid superblock will be written relative to the partition start, not the drive start. fdisk -l will tell you if there is a partition and where it starts.
Did you image the drives or the partition(s)?
In short, you may be looking in the wrong place.

The other reason is that the raid superblock may be defined by the BIOS code that supports the Intel Matrix Raid Storage Controller, rather than stored on the drives. dmraid works this way.
If this is the case, you can do nothing without the real drives attached to the real Intel Matrix Raid Storage Controller.

If you want me to look at the start of one of the images, use dd to make a file with the first 1 MB of a drive in it and email it to me as an attachment.
Regards,

NeddySeagoon

jontheil

Post by jontheil » Wed Mar 07, 2012 10:07 pm

NeddySeagoon,

It is a very kind offer, so I will provide the start of the image. The two drives are quite different, which makes me worry that the metadata was messed up a lot when I, without really knowing what I was doing, tried to save the data a long time ago.

Code: Select all

#fdisk -l (stripped):
Disk /dev/sdb: 74.4 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders, total 145226112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x90909090

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          63   290439134   145219536   a5  FreeBSD

Disk /dev/sdc: 74.4 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders, total 145226112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf366f367

   Device Boot      Start         End      Blocks   Id  System

The BIOS of the motherboard (Intel S975XBX2) is a bit complicated, since it controls both the Intel Matrix RAID and a Marvell controller. As far as I remember, I had difficulties preventing it from trying to boot from the RAID. And I think it stated that at least one of the drives was off-line, which I couldn't change. But perhaps I haven't tried to do it the right way.
If I mail you the image files, would it be better to include parts of the partition of the partitioned drive?

Best regards,
Jon Theil Nielsen
NeddySeagoon

Post by NeddySeagoon » Wed Mar 07, 2012 11:00 pm

jontheil,

For any sort of raid5 with a partitioned disk I would expect to find several copies of the partition table, probably one per drive.
The basis of that is that a raid5 set needs to operate with any one drive missing, so it still needs at least one copy of the partition table whichever drive fails.
Putting the partition table on two drives would be OK, since if two drives die, a raid5 is useless.

The partition table alone provides useful information.

The drive size is 74.4 GB (74,355,769,344 bytes), or 72,613,056 1k blocks, but the partition holds 145,219,536 1k blocks, about twice as many as the drive.
That's normal for BIOS raid. The drives are donated to the raid set, then the raid is partitioned. As a result, the partition table describes all of the space on the raid set.

From the start sector of 63, we know the filesystem begins 63 x 512 B = 32256 bytes from the start of the raid, so that's a good place to point mdadm -E at.
Try losetup -o 32256, so it starts at the partition, not at the beginning of your image files.
Don't be disappointed if it fails, as we have no idea of the data layout on the drives, which is where the BIOS comes in. If we are lucky, the raid set starts at sector 63 on all three drives.
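That offset can be sanity-checked against the image files without root: losetup -o needs root to attach a loop device, but dd with skip= exposes the same byte range as a plain file. A toy version with a fabricated 63-sector gap (in the real case the input would be one of the sdb/sdc images):

```shell
img=$(mktemp); part=$(mktemp)

# Fake whole-disk image: 63 zeroed "sectors", then the partition contents.
dd if=/dev/zero of="$img" bs=512 count=63 2>/dev/null
printf 'FILESYSTEM' >> "$img"

# File-level equivalent of `losetup -o 32256 /dev/loop0 img`:
dd if="$img" of="$part" bs=512 skip=63 2>/dev/null

echo $((63 * 512))    # prints 32256, the byte offset of sector 63
head -c 10 "$part"    # prints FILESYSTEM
```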

You can fix the boot order, if that's the only issue, by building the chipset driver for the boot drive into the kernel and making AHCI for the Intel Matrix RAID a loadable module.
If both controllers need AHCI, then boot from a USB stick. It's fairly easy to make the kernel changes needed to boot Gentoo from USB.
Anyway, we are not done with losetup and mdadm yet.

Email me the first MB or so from both drives.
Regards,

NeddySeagoon

jontheil

Post by jontheil » Wed Mar 14, 2012 6:47 am

Hi again,

I have been busy with work-related things, so I haven't tried all of the suggested solutions.
One of the things I might try is to use losetup with the offset option.
I tried activating the RAID setting in the BIOS to see what the RAID configuration utility offered me. There was one defined RAID 5 array, but no way to repair it. The only thing I could do was to define a new array. Of course, I didn't try that, since I only have two drives, and I am quite convinced that by doing so I would destroy any useful piece of metadata.

Best regards,
Jon Theil Nielsen
NeddySeagoon

Post by NeddySeagoon » Wed Mar 14, 2012 10:30 pm

jontheil,

This is a hobby for all of us. Like you, I'm here when I'm here.

Good luck.
Regards,

NeddySeagoon

veedar
n00b
Posts: 1
Joined: Tue Apr 10, 2012 4:12 pm

Post by veedar » Mon Apr 23, 2012 3:46 pm

I'm recovering a RAID0 from a laptop with an Intel raid controller and happened upon this thread.

What I've learned is that you need to set two environment variables to examine the loopback devices.

Code: Select all

% mdadm -E /dev/loop2
mdadm: No md superblock detected on /dev/loop2.

% export IMSM_NO_PLATFORM=1
% export IMSM_DEVNAME_AS_SERIAL=1

% mdadm -Eb /dev/loop2
ARRAY metadata=imsm UUID=25137636:2ff64e75:10ae5444:4e8f609b
ARRAY /dev/md/Volume0 container=25137636:2ff64e75:10ae5444:4e8f609b member=0 UUID=72acd5fa:35c5a901:995af79b:83a29de1
jontheil

Post by jontheil » Mon Apr 23, 2012 3:57 pm

It's not that I have forgotten this topic. I have been extremely busy.

But I will certainly try this approach soon.

Thanks for the suggestion,
Jon
Jon Theil Nielsen