Gentoo Forums
Partitioning scheme for RAID array

 
Gentoo Forums Forum Index » Gentoo on Sparc
biggyL
Tux's lil' helper
Joined: 31 Jan 2005
Posts: 120
Location: Israel

Posted: Tue May 02, 2006 2:35 pm    Post subject: Partitioning scheme for RAID array

Hello All,

I've configured RAID0 on my StorEdge array connected to my Sun Fire 280R (UltraSPARC-III+):

Code:

mdadm -C /dev/md0 --level=raid0 --raid-devices=12 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn


Now I need some advice here on the partitioning scheme:

Should I just run # mkfs.ext3 /dev/md0,
or should a special "whole disk" slice be configured with fdisk first?
ecosta
Guru
Joined: 09 May 2003
Posts: 477
Location: Brussels,BE

Posted: Tue May 02, 2006 2:48 pm

Hi biggyL,

The way you set this up, you will have one partition (md0) for your whole system, which isn't a recommended setup. You are also striping all your disks as one big disk, which means that if one disk fails, the whole RAID (md0) will fail and all data will be lost.

Doesn't your StorEdge have hardware RAID? If so, you don't need software RAID, but I guess you know that ;)

I would create several RAID0 devices, or even better RAID1 or RAID5 devices. This way you can separate / from /usr, /home, ...
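
For illustration, a minimal sketch of that idea, assuming the same twelve ~17G disks from your post, split into two smaller RAID5 arrays instead of one big stripe:

Code:

# two independent arrays; losing a disk only degrades one of them
mdadm -C /dev/md0 --level=raid5 --raid-devices=6 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
mdadm -C /dev/md1 --level=raid5 --raid-devices=6 /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn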

I'd read the Gentoo Handbook for partition settings and maybe have a look at LVM2 for managing large disk spaces. I wrote a quick howto in the Gentoo wiki on setting up a server with RAID and LVM2.

-Ed
_________________
Linux user #201331
A8N-SLI Delux / AMD64 X2 3800+ / 1024 MB RAM / 5 x 250 GB SATA RAID 1/5 / ATI Radeon X700 256MB.
sblaineyuk
n00b
Joined: 27 Oct 2005
Posts: 34

Posted: Wed May 03, 2006 10:37 am    Post subject: whole disk

The whole disk slice pertains to the physical disks (for /dev/sda the whole disk slice is /dev/sda3), not the logical RAID array (/dev/md0), so no, you don't need one. "mkfs.ext3 /dev/md0" will certainly work; then you can mount /dev/md0 wherever you want, e.g. "mount /dev/md0 /home".
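
For example, a sketch assuming /home as the mount point:

Code:

# mkfs.ext3 /dev/md0
# mount /dev/md0 /home
# echo "/dev/md0   /home   ext3   defaults   0 2" >> /etc/fstab   # optional: mount it at boot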

I would endorse the above poster's recommendation of using RAID5... it gives you redundancy without sacrificing too much disk space. I really don't like the idea of a situation where one disk failing causes the data on 12 disks to be lost.
Drunkula
Apprentice
Joined: 28 Jul 2003
Posts: 257
Location: Denton, TX - USA

Posted: Wed May 03, 2006 12:28 pm

I, too, must agree with the other posters. A single RAID0 array on 12 disks is playing with fire. If it were mine I'd set it up as RAID5 on 11 disks, with the last being an online spare.
_________________
Go away or I will replace you with a very small shell script.
biggyL
Tux's lil' helper
Joined: 31 Jan 2005
Posts: 120
Location: Israel

Posted: Thu May 04, 2006 7:54 am

Hello,

Thanks sblaineyuk for the exact answer I needed.

Thanks to all for your proposals.
1) Drunkula, your suggestion is interesting. In case I go with the following RAID5 configuration, could you say how much space I would lose besides the obvious 1 spare disk? Every disk is ~17GB (what's the exact formula?).
Code:
#mdadm -C /dev/md0 --level=raid5 --raid-devices=11 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm --spare-devices=1 /dev/sdn


2) Actually I started with the "Using Linux mdadm Multipathing with Sun StorEdge™ Systems" document, and indeed managed to configure it like this:
Code:
mdadm -C /dev/md0 -l multipath -n 2 /dev/sdc /dev/sdd ....


But then, running the command
Code:
mdadm -D /dev/md0
I could see that only one disk is "active sync" and all the rest (11 disks) are "spare". Maybe I don't fully understand the multipath array principle, but I also didn't find any howto concerning multipath-configured arrays. Maybe there is a way to make 11 disks "active sync" and only 1 of them "spare", but I don't know if this scheme is workable with a multipath configuration?
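
From what I can tell, the md multipath personality may expect its members to be different paths to the same physical disk rather than different disks, i.e. one small multipath array per disk, something like this sketch (/dev/sdo here is a hypothetical second path to the disk behind /dev/sdc; my system only has sda-sdn):

Code:

# group the two paths to ONE physical disk into one multipath device
mdadm -C /dev/md1 -l multipath -n 2 /dev/sdc /dev/sdo

If that's right, it would explain why only one member shows as "active sync" and the rest sit as standby paths, but I'm not sure.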

This "StorEdge" is actually data storage for my streaming server, so the storage volume is the main issue here.
I gonna do the backups of the data on this storage so in case of the failure I could always restore the data.

What are you saying about the multipath configuration?
Drunkula
Apprentice
Joined: 28 Jul 2003
Posts: 257
Location: Denton, TX - USA

Posted: Thu May 04, 2006 12:24 pm

In a RAID5 setup with 12 disks of ~17G, using 11 + 1 online spare, you'd lose ~34G: the obvious ~17G for the spare and another ~17G for parity. IMHO that is the beauty of RAID5 - the more disks you use, the more space-efficient it is.
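
As for the exact formula: an N-disk RAID5 array (spares not counted) stores one disk's worth of parity, so:

Code:

usable = (N - 1) x disk size
       = (11 - 1) x ~17G
       = ~170G usable, plus the ~17G spare held in reserve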

As a side point, the online spare is not strictly necessary. But it is nice to have in case one of the others in the array dies, since it should kick in and the array will rebuild the missing info automatically.
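
If you want to watch the spare kick in without waiting for a real failure, something like this should do it once the array is clean (a sketch, picking /dev/sdc as the victim):

Code:

# mdadm /dev/md0 --fail /dev/sdc     # mark the member as failed
# cat /proc/mdstat                   # the spare takes over and a rebuild starts
# mdadm /dev/md0 --remove /dev/sdc   # pull the "failed" disk out of the array
# mdadm /dev/md0 --add /dev/sdc      # re-add it; it becomes the new spare
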
_________________
Go away or I will replace you with a very small shell script.
biggyL
Tux's lil' helper
Joined: 31 Jan 2005
Posts: 120
Location: Israel

Posted: Thu May 04, 2006 1:27 pm

Thanks for the info :)

Another problem, this time with RAID5!!!

Creating RAID0 and checking it with # mdadm -D /dev/md0
produces no errors:
Code:

# mdadm -C /dev/md0 -l 0 -n 12 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn
mdadm: /dev/sdc appears to contain an ext2fs file system
    size=176891520K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdc appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdd appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sde appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdf appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdg appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdh appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdi appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdj appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdk appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdl appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdm appears to contain an ext2fs file system
    size=176891520K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdm appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
mdadm: /dev/sdn appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:46:58 2006
Continue creating array? y
mdadm: array /dev/md0 started.
livecd ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu May  4 12:49:59 2006
     Raid Level : raid0
     Array Size : 212269824 (202.44 GiB 217.36 GB)
   Raid Devices : 12
  Total Devices : 12
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu May  4 12:50:03 2006
          State : active
 Active Devices : 12
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : 3265c4ba:8c08cf18:a73a100b:ce575d71
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/scsi/host2/bus0/target0/lun0/disc
       1       8       48        1      active sync   /dev/scsi/host2/bus0/target1/lun0/disc
       2       8       64        2      active sync   /dev/scsi/host2/bus0/target2/lun0/disc
       3       8       80        3      active sync   /dev/scsi/host2/bus0/target3/lun0/disc
       4       8       96        4      active sync   /dev/scsi/host2/bus0/target4/lun0/disc
       5       8      112        5      active sync   /dev/scsi/host2/bus0/target5/lun0/disc
       6       8      128        6      active sync   /dev/scsi/host2/bus0/target8/lun0/disc
       7       8      144        7      active sync   /dev/scsi/host2/bus0/target9/lun0/disc
       8       8      160        8      active sync   /dev/scsi/host2/bus0/target10/lun0/disc
       9       8      176        9      active sync   /dev/scsi/host2/bus0/target11/lun0/disc
      10       8      192       10      active sync   /dev/scsi/host2/bus0/target12/lun0/disc
      11       8      208       11      active sync   /dev/scsi/host2/bus0/target13/lun0/disc


But if I create RAID5, it produces 1 faulty and 1 spare disk:
Code:

# mdadm -C /dev/md0 --level=raid5 --raid-devices=12 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn
mdadm: /dev/sdc appears to contain an ext2fs file system
    size=176891520K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdc appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdd appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sde appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdf appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdg appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdh appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdi appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdj appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdk appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdl appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdm appears to contain an ext2fs file system
    size=176891520K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdm appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
mdadm: /dev/sdn appears to be part of a raid array:
    level=0 devices=12 ctime=Thu May  4 12:49:59 2006
Continue creating array? y
mdadm: array /dev/md0 started.
livecd ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu May  4 12:55:28 2006
     Raid Level : raid5
     Array Size : 194580672 (185.57 GiB 199.25 GB)
    Device Size : 17689152 (16.87 GiB 18.11 GB)
   Raid Devices : 12
  Total Devices : 13
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu May  4 12:55:30 2006
          State : active, degraded, recovering
 Active Devices : 11
Working Devices : 12
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 5546b8d4:daccc91d:3f805bde:26aca8fe
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/scsi/host2/bus0/target0/lun0/disc
       1       8       48        1      active sync   /dev/scsi/host2/bus0/target1/lun0/disc
       2       8       64        2      active sync   /dev/scsi/host2/bus0/target2/lun0/disc
       3       8       80        3      active sync   /dev/scsi/host2/bus0/target3/lun0/disc
       4       8       96        4      active sync   /dev/scsi/host2/bus0/target4/lun0/disc
       5       8      112        5      active sync   /dev/scsi/host2/bus0/target5/lun0/disc
       6       8      128        6      active sync   /dev/scsi/host2/bus0/target8/lun0/disc
       7       8      144        7      active sync   /dev/scsi/host2/bus0/target9/lun0/disc
       8       8      160        8      active sync   /dev/scsi/host2/bus0/target10/lun0/disc
       9       8      176        9      active sync   /dev/scsi/host2/bus0/target11/lun0/disc
      10       8      192       10      active sync   /dev/scsi/host2/bus0/target12/lun0/disc
      11       0        0       11      faulty

      12       8      208       12      spare   /dev/scsi/host2/bus0/target13/lun0/disc



Using the --raid-devices=11 and --spare-devices=1 options, I get the following result (1 faulty and 2 spares):
Code:

# mdadm -C /dev/md0 --level=raid5 --raid-devices=11 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm --spare-devices=1 /dev/sdn
mdadm: /dev/sdc appears to contain an ext2fs file system
    size=176891520K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdc appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdd appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sde appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdf appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdg appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdh appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdi appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdj appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdk appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdl appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdm appears to contain an ext2fs file system
    size=176891520K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdm appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
mdadm: /dev/sdn appears to be part of a raid array:
    level=5 devices=12 ctime=Thu May  4 12:55:28 2006
Continue creating array? y
mdadm: array /dev/md0 started.
livecd ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu May  4 12:58:43 2006
     Raid Level : raid5
     Array Size : 176891520 (168.70 GiB 181.14 GB)
    Device Size : 17689152 (16.87 GiB 18.11 GB)
   Raid Devices : 11
  Total Devices : 13
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu May  4 12:58:45 2006
          State : active, degraded, recovering
 Active Devices : 10
Working Devices : 12
 Failed Devices : 1
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 41fded8f:742eefe2:4f1e1341:1cdaa844
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/scsi/host2/bus0/target0/lun0/disc
       1       8       48        1      active sync   /dev/scsi/host2/bus0/target1/lun0/disc
       2       8       64        2      active sync   /dev/scsi/host2/bus0/target2/lun0/disc
       3       8       80        3      active sync   /dev/scsi/host2/bus0/target3/lun0/disc
       4       8       96        4      active sync   /dev/scsi/host2/bus0/target4/lun0/disc
       5       8      112        5      active sync   /dev/scsi/host2/bus0/target5/lun0/disc
       6       8      128        6      active sync   /dev/scsi/host2/bus0/target8/lun0/disc
       7       8      144        7      active sync   /dev/scsi/host2/bus0/target9/lun0/disc
       8       8      160        8      active sync   /dev/scsi/host2/bus0/target10/lun0/disc
       9       8      176        9      active sync   /dev/scsi/host2/bus0/target11/lun0/disc
      10       0        0       10      faulty

      11       8      208       11      spare   /dev/scsi/host2/bus0/target13/lun0/disc
      12       8      192       12      spare   /dev/scsi/host2/bus0/target12/lun0/disc


Any help/advice, please?
sblaineyuk
n00b
Joined: 27 Oct 2005
Posts: 34

Posted: Thu May 04, 2006 2:12 pm    Post subject: mdadm

From the mdadm man page:

Quote:

When creating a RAID5 array, mdadm will automatically create a degraded
array with an extra spare drive. This is because building the spare
into a degraded array is in general faster than resyncing the parity on
a non-degraded, but not clean, array. This feature can be over-ridden
with the --force option.


So that may be why you are seeing the faulty drive... What does 'cat /proc/mdstat' show?

As for seeing one extra spare, it might have to do with the way the RAID algorithm works... I'm not quite sure; my array is 3x20G, which just gives me 3 active disks.
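
Incidentally, to follow the resync as it runs you can just re-poll mdstat, e.g.:

Code:

# watch -n 10 cat /proc/mdstat   # refresh the status every 10 seconds
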
biggyL
Tux's lil' helper
Joined: 31 Jan 2005
Posts: 120
Location: Israel

Posted: Thu May 04, 2006 2:27 pm

Hi,

Creating RAID5 without configuring a "spare" drive gives me the following:
Code:

# mdadm -C /dev/md0 --level=raid5 --raid-devices=12 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn
mdadm: /dev/sdc appears to contain an ext2fs file system
    size=176891520K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdc appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdd appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sde appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdf appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdg appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdh appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdi appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdj appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdk appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdl appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdm appears to contain an ext2fs file system
    size=176891520K  mtime=Thu Jan  1 00:00:00 1970
mdadm: /dev/sdm appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
mdadm: /dev/sdn appears to be part of a raid array:
    level=5 devices=11 ctime=Thu May  4 12:58:43 2006
Continue creating array? y
mdadm: array /dev/md0 started.


and mdadm -D /dev/md0 gives me:
Code:

# mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu May  4 13:51:36 2006
     Raid Level : raid5
     Array Size : 194580672 (185.57 GiB 199.25 GB)
    Device Size : 17689152 (16.87 GiB 18.11 GB)
   Raid Devices : 12
  Total Devices : 13
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu May  4 13:51:38 2006
          State : active, degraded, recovering
 Active Devices : 11
Working Devices : 12
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 91036631:315653b6:7f62b160:28bc9268
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/scsi/host2/bus0/target0/lun0/disc
       1       8       48        1      active sync   /dev/scsi/host2/bus0/target1/lun0/disc
       2       8       64        2      active sync   /dev/scsi/host2/bus0/target2/lun0/disc
       3       8       80        3      active sync   /dev/scsi/host2/bus0/target3/lun0/disc
       4       8       96        4      active sync   /dev/scsi/host2/bus0/target4/lun0/disc
       5       8      112        5      active sync   /dev/scsi/host2/bus0/target5/lun0/disc
       6       8      128        6      active sync   /dev/scsi/host2/bus0/target8/lun0/disc
       7       8      144        7      active sync   /dev/scsi/host2/bus0/target9/lun0/disc
       8       8      160        8      active sync   /dev/scsi/host2/bus0/target10/lun0/disc
       9       8      176        9      active sync   /dev/scsi/host2/bus0/target11/lun0/disc
      10       8      192       10      active sync   /dev/scsi/host2/bus0/target12/lun0/disc
      11       0        0       11      faulty

      12       8      208       12      spare   /dev/scsi/host2/bus0/target13/lun0/disc


This is what cat /proc/mdstat gives:
Code:

# cat /proc/mdstat
Personalities : [raid0] [raid5]
read_ahead 1024 sectors
md0 : active raid5 scsi/host2/bus0/target13/lun0/disc[12] scsi/host2/bus0/target12/lun0/disc[10] scsi/host2/bus0/target11/lun0/disc[9] scsi/host2/bus0/target10/lun0/disc[8] scsi/host2/bus0/target9/lun0/disc[7] scsi/host2/bus0/target8/lun0/disc[6] scsi/host2/bus0/target5/lun0/disc[5] scsi/host2/bus0/target4/lun0/disc[4] scsi/host2/bus0/target3/lun0/disc[3] scsi/host2/bus0/target2/lun0/disc[2] scsi/host2/bus0/target1/lun0/disc[1] scsi/host2/bus0/target0/lun0/disc[0]
      194580672 blocks level 5, 64k chunk, algorithm 2 [12/11] [UUUUUUUUUUU_]
      [>....................]  recovery =  2.9% (530520/17689152) finish=88.7min speed=3218K/sec
unused devices: <none>


and listing device names gives me:
Code:

# cat /var/log/messages | grep Attach
Attached scsi disk sda at scsi0, channel 0, id 1, lun 0
Attached scsi disk sdb at scsi0, channel 0, id 2, lun 0
Attached scsi disk sdc at scsi2, channel 0, id 0, lun 0
Attached scsi disk sdd at scsi2, channel 0, id 1, lun 0
Attached scsi disk sde at scsi2, channel 0, id 2, lun 0
Attached scsi disk sdf at scsi2, channel 0, id 3, lun 0
Attached scsi disk sdg at scsi2, channel 0, id 4, lun 0
Attached scsi disk sdh at scsi2, channel 0, id 5, lun 0
Attached scsi disk sdi at scsi2, channel 0, id 8, lun 0
Attached scsi disk sdj at scsi2, channel 0, id 9, lun 0
Attached scsi disk sdk at scsi2, channel 0, id 10, lun 0
Attached scsi disk sdl at scsi2, channel 0, id 11, lun 0
Attached scsi disk sdm at scsi2, channel 0, id 12, lun 0
Attached scsi disk sdn at scsi2, channel 0, id 13, lun 0
Attached scsi CD-ROM sr0 at scsi1, channel 0, id 6, lun 0
Attached scsi generic sg15 at scsi2, channel 0, id 15, lun 0,  type 3


So what the heck does this "Failed Device" mean?
Is it a problem I should fix somehow?

When I specifically say to create --spare-devices=1 and --raid-devices=11, it creates 2 spare devices and 1 faulty. Why?
sblaineyuk
n00b
Joined: 27 Oct 2005
Posts: 34

Posted: Thu May 04, 2006 2:52 pm    Post subject: Red Herring

I think it's a red herring... if you look at the list of raid devices, there are actually 13, when in fact you only have 12 disks.

/proc/mdstat seems to confirm this with:

[UUUUUUUUUUU_]

showing 11 devices up

I suspect that when the array finishes recovering you should see 12 U's...

I am pretty sure that the failed device is the raid array setting itself up as per the manual. Try using the --force option as suggested to see if it creates the array without the failed disk.
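
That would be your original command with --force added, something like this sketch:

Code:

# mdadm -C /dev/md0 --force --level=raid5 --raid-devices=12 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn
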
biggyL
Tux's lil' helper
Joined: 31 Jan 2005
Posts: 120
Location: Israel

Posted: Fri May 05, 2006 10:08 am

Hello sblaineyuk,

Seems to me you are right.
This is what I'm getting from "mdadm -D /dev/md0" and "cat /proc/mdstat" now (after several hours):
Code:

livecd ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Thu May  4 13:51:36 2006
     Raid Level : raid5
     Array Size : 194580672 (185.57 GiB 199.25 GB)
    Device Size : 17689152 (16.87 GiB 18.11 GB)
   Raid Devices : 12
  Total Devices : 13
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu May  4 15:25:17 2006
          State : active
 Active Devices : 12
Working Devices : 12
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 91036631:315653b6:7f62b160:28bc9268
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/scsi/host2/bus0/target0/lun0/disc
       1       8       48        1      active sync   /dev/scsi/host2/bus0/target1/lun0/disc
       2       8       64        2      active sync   /dev/scsi/host2/bus0/target2/lun0/disc
       3       8       80        3      active sync   /dev/scsi/host2/bus0/target3/lun0/disc
       4       8       96        4      active sync   /dev/scsi/host2/bus0/target4/lun0/disc
       5       8      112        5      active sync   /dev/scsi/host2/bus0/target5/lun0/disc
       6       8      128        6      active sync   /dev/scsi/host2/bus0/target8/lun0/disc
       7       8      144        7      active sync   /dev/scsi/host2/bus0/target9/lun0/disc
       8       8      160        8      active sync   /dev/scsi/host2/bus0/target10/lun0/disc
       9       8      176        9      active sync   /dev/scsi/host2/bus0/target11/lun0/disc
      10       8      192       10      active sync   /dev/scsi/host2/bus0/target12/lun0/disc
      11       8      208       11      active sync   /dev/scsi/host2/bus0/target13/lun0/disc
livecd ~ #
livecd ~ # cat /proc/mdstat
Personalities : [raid0] [raid5]
read_ahead 1024 sectors
md0 : active raid5 scsi/host2/bus0/target13/lun0/disc[11] scsi/host2/bus0/target12/lun0/disc[10] scsi/host2/bus0/target11/lun0/disc[9] scsi/host2/bus0/target10/lun0/disc[8] scsi/host2/bus0/target9/lun0/disc[7] scsi/host2/bus0/target8/lun0/disc[6] scsi/host2/bus0/target5/lun0/disc[5] scsi/host2/bus0/target4/lun0/disc[4] scsi/host2/bus0/target3/lun0/disc[3] scsi/host2/bus0/target2/lun0/disc[2] scsi/host2/bus0/target1/lun0/disc[1] scsi/host2/bus0/target0/lun0/disc[0]
      194580672 blocks level 5, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]

unused devices: <none>


I think I'll check the --force option and read some more about RAID5 defaults.

Maybe you could also point me to some good manual/howto regarding the above issues, because I didn't find much info on the faulty disks and RAID5 options.
sblaineyuk
n00b
Joined: 27 Oct 2005
Posts: 34

Posted: Fri May 05, 2006 11:08 am    Post subject: cool!

Glad you got it all sorted...

As I understand it, the --force option and just waiting for the array to "recover" yield the same results, with the potential exception that not using --force should let you use the array sooner after creating it.

Useful resources:

mdadm man page

Software RAID HowTo
biggyL
Tux's lil' helper
Joined: 31 Jan 2005
Posts: 120
Location: Israel

Posted: Sun May 07, 2006 9:29 am

Thanks sblaineyuk,

I've read the docs and I believe you're right.

This command worked perfectly:
Code:

# mdadm -C /dev/md0 --force --level=raid5 --raid-devices=11 /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm --spare-devices=1 /dev/sdn

and the output of the checks right after the creation:
Code:

livecd ~ # cat /proc/mdstat
Personalities : [raid0] [raid5]
read_ahead 1024 sectors
md0 : active raid5 scsi/host2/bus0/target13/lun0/disc[11] scsi/host2/bus0/target12/lun0/disc[10] scsi/host2/bus0/target11/lun0/disc[9] scsi/host2/bus0/target10/lun0/disc[8] scsi/host2/bus0/target9/lun0/disc[7] scsi/host2/bus0/target8/lun0/disc[6] scsi/host2/bus0/target5/lun0/disc[5] scsi/host2/bus0/target4/lun0/disc[4] scsi/host2/bus0/target3/lun0/disc[3] scsi/host2/bus0/target2/lun0/disc[2] scsi/host2/bus0/target1/lun0/disc[1] scsi/host2/bus0/target0/lun0/disc[0]
      176891520 blocks level 5, 64k chunk, algorithm 2 [11/11] [UUUUUUUUUUU]
      [>....................]  resync =  0.2% (42160/17689152) finish=83.6min speed=3513K/sec
unused devices: <none>
livecd ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Sat May  6 17:54:36 2006
     Raid Level : raid5
     Array Size : 176891520 (168.70 GiB 181.14 GB)
    Device Size : 17689152 (16.87 GiB 18.11 GB)
   Raid Devices : 11
  Total Devices : 12
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat May  6 17:54:38 2006
          State : active, resyncing
 Active Devices : 11
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : a0e6ab9b:5ebb96c4:9d7021f3:019a32a2
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/scsi/host2/bus0/target0/lun0/disc
       1       8       48        1      active sync   /dev/scsi/host2/bus0/target1/lun0/disc
       2       8       64        2      active sync   /dev/scsi/host2/bus0/target2/lun0/disc
       3       8       80        3      active sync   /dev/scsi/host2/bus0/target3/lun0/disc
       4       8       96        4      active sync   /dev/scsi/host2/bus0/target4/lun0/disc
       5       8      112        5      active sync   /dev/scsi/host2/bus0/target5/lun0/disc
       6       8      128        6      active sync   /dev/scsi/host2/bus0/target8/lun0/disc
       7       8      144        7      active sync   /dev/scsi/host2/bus0/target9/lun0/disc
       8       8      160        8      active sync   /dev/scsi/host2/bus0/target10/lun0/disc
       9       8      176        9      active sync   /dev/scsi/host2/bus0/target11/lun0/disc
      10       8      192       10      active sync   /dev/scsi/host2/bus0/target12/lun0/disc

      11       8      208       11      spare   /dev/scsi/host2/bus0/target13/lun0/disc


And after the "recovering":
Code:

livecd ~ # mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.00
  Creation Time : Sat May  6 17:54:36 2006
     Raid Level : raid5
     Array Size : 176891520 (168.70 GiB 181.14 GB)
    Device Size : 17689152 (16.87 GiB 18.11 GB)
   Raid Devices : 11
  Total Devices : 12
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat May  6 17:54:38 2006
          State : active
 Active Devices : 11
Working Devices : 12
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : a0e6ab9b:5ebb96c4:9d7021f3:019a32a2
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/scsi/host2/bus0/target0/lun0/disc
       1       8       48        1      active sync   /dev/scsi/host2/bus0/target1/lun0/disc
       2       8       64        2      active sync   /dev/scsi/host2/bus0/target2/lun0/disc
       3       8       80        3      active sync   /dev/scsi/host2/bus0/target3/lun0/disc
       4       8       96        4      active sync   /dev/scsi/host2/bus0/target4/lun0/disc
       5       8      112        5      active sync   /dev/scsi/host2/bus0/target5/lun0/disc
       6       8      128        6      active sync   /dev/scsi/host2/bus0/target8/lun0/disc
       7       8      144        7      active sync   /dev/scsi/host2/bus0/target9/lun0/disc
       8       8      160        8      active sync   /dev/scsi/host2/bus0/target10/lun0/disc
       9       8      176        9      active sync   /dev/scsi/host2/bus0/target11/lun0/disc
      10       8      192       10      active sync   /dev/scsi/host2/bus0/target12/lun0/disc

      11       8      208       11      spare   /dev/scsi/host2/bus0/target13/lun0/disc
livecd ~ # cat /proc/mdstat
Personalities : [raid0] [raid5]
read_ahead 1024 sectors
md0 : active raid5 scsi/host2/bus0/target13/lun0/disc[11] scsi/host2/bus0/target12/lun0/disc[10] scsi/host2/bus0/target11/lun0/disc[9] scsi/host2/bus0/target10/lun0/disc[8] scsi/host2/bus0/target9/lun0/disc[7] scsi/host2/bus0/target8/lun0/disc[6] scsi/host2/bus0/target5/lun0/disc[5] scsi/host2/bus0/target4/lun0/disc[4] scsi/host2/bus0/target3/lun0/disc[3] scsi/host2/bus0/target2/lun0/disc[2] scsi/host2/bus0/target1/lun0/disc[1] scsi/host2/bus0/target0/lun0/disc[0]
      176891520 blocks level 5, 64k chunk, algorithm 2 [11/11] [UUUUUUUUUUU]

unused devices: <none>
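
Next I'll put a filesystem on it and save the array definition so it assembles automatically; something like this should do (the mount point is just an example):

Code:

# mdadm --detail --scan >> /etc/mdadm.conf   # persist the array definition
# mkfs.ext3 /dev/md0
# mount /dev/md0 /mnt/storage                # example mount point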


Best Regards :)