As a Linux newbie I'm experiencing a problem that is driving me crazy, and I can't seem to find an answer to it. I've googled until my eyes burn and my head aches, but I just don't seem to be grasping things, so I would really appreciate some help here.
The problem I am having is with a RAID0 array set up on 2 identical 40 GB IDE disks attached to a Medley CMD680 controller. The array has 2 partitions: a 70 GB NTFS partition, and a 10 GB FAT32 partition for sharing files between Windows and Linux.
All the howtos and advice seem to be for SATA disks and for booting from or installing Linux onto them. I'm not looking to have the RAID array available at boot, or to install an operating system onto it; I just want to mount the partitions and access them in both Linux and Windows.
I am using kernel linux-2.6.14-gentoo-r6 and dmraid. Once my system has booted, I automatically run the following from local.start:
hdparm -qc1qu1qm16qd1 /dev/hde (this matches /dev/hdg, which already has these options set)
dmraid -ay -f sil
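For reference, this is roughly what the fragment in my local.start looks like (a sketch of my setup; device names and paths will differ on other boxes):

```shell
# /etc/conf.d/local.start (fragment), run at the end of boot.
# Bring hde's IDE settings in line with hdg's: 32-bit I/O (-c1),
# unmask IRQs (-u1), multcount 16 (-m16), DMA on (-d1), quietly (-q).
hdparm -qc1 -qu1 -qm16 -qd1 /dev/hde
# Activate (-ay) the Silicon Image (sil) metadata format sets.
dmraid -ay -f sil
```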
The drives are found and the volumes are created in /dev/mapper:
control sil_adbiebcfchag sil_adbiebcfchag1 sil_adbiebcfchag2
All is well: I can mount sil_adbiebcfchag1 (NTFS) and read all data with no problems, and I can mount sil_adbiebcfchag2 (FAT32) and read and write data with no problems.
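In case it helps anyone reproduce this, the mounts themselves are unremarkable; assuming mount points like /mnt/ntfs and /mnt/share (the names here are just examples, not my real paths):

```shell
# Example mount points; pick your own.
mkdir -p /mnt/ntfs /mnt/share
# NTFS mounted read-only (the in-kernel ntfs driver on 2.6.14 is
# safest read-only); FAT32 mounted read-write for the shared data.
mount -t ntfs -o ro /dev/mapper/sil_adbiebcfchag1 /mnt/ntfs
mount -t vfat /dev/mapper/sil_adbiebcfchag2 /mnt/share
```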
OK, here is the problem. When booting, the kernel detects the drives as 2 individual drives, hde and hdg. hdg is not a problem: it is probed and reported as having an invalid partition table, which is fine, as dmraid sorts that out. But when hde is probed I see scrolling error messages that go on for around a full 40 seconds. Here is an extract from my dmesg:
hde: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hde: task_in_intr: error=0x04 { DriveStatusError }
ide: failed opcode was: unknown
hde: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hde: task_in_intr: error=0x04 { DriveStatusError }
ide: failed opcode was: unknown
hde: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hde: task_in_intr: error=0x04 { DriveStatusError }
ide: failed opcode was: unknown
hde: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hde: task_in_intr: error=0x04 { DriveStatusError }
ide: failed opcode was: unknown
ide2: reset: success
hde: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hde: task_in_intr: error=0x04 { DriveStatusError }
ide: failed opcode was: unknown
hde: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hde: task_in_intr: error=0x04 { DriveStatusError }
ide: failed opcode was: unknown
hde: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hde: task_in_intr: error=0x04 { DriveStatusError }
ide: failed opcode was: unknown
hde: task_in_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hde: task_in_intr: error=0x04 { DriveStatusError }
ide: failed opcode was: unknown
ide2: reset: success

The partitions were made with PartitionMagic 8 in DOS mode, and it reports them as being OK. They work fine in WinXP and Linux, so why is the kernel probe having a problem?
Is there any way around this probing, to stop the errors being reported? I could live with seeing this error once or twice at boot, but not with watching it scroll by for around 40 seconds.
At the moment I have the drives disabled in Linux using hde=noprobe and hdg=noprobe in grub.conf.
I have tried hde=noprobe hde=xxxx,xx,xx (entering the CHS values), which gives me access to the drive in Linux without the error messages during boot, but then dmraid fails with an error seeking hde to xxxxxxxxxxxxxx (a sector number).
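For clarity, the kernel line I'm experimenting with looks something like this (a sketch only; the root= value and kernel path are placeholders for my real ones, and the real CHS values go where xxxx,xx,xx appears):

```shell
# grub.conf kernel line (sketch): suppress the IDE probe of hde/hdg
# and, in the second variant, hand the kernel hde's geometry manually.
kernel /boot/vmlinuz-2.6.14-gentoo-r6 root=/dev/hda3 hde=noprobe hdg=noprobe
# variant that restores access to hde but breaks dmraid for me:
# kernel /boot/vmlinuz-2.6.14-gentoo-r6 root=/dev/hda3 hde=noprobe hde=xxxx,xx,xx
```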
I would really like to use the RAID array in Linux, but I'm not prepared to put up with the scrolling error messages during boot, so if anyone can help with this problem I will be really grateful.
Thanks
PS: there's not much point in posting my full dmesg, as it is filled up with the hde: task_in_intr errors, which wipe out everything before them.
