Gentoo Forums
Is it possible to change the order of disks in raid1?
apurkrt
Tux's lil' helper


Joined: 26 Feb 2011
Posts: 116
Location: Czechia, Europe

Posted: Sat May 30, 2020 6:44 am    Post subject: Is it possible to change the order of disks in raid1?

Hi!
I have done this experiment (in virtual environment) with raid1:

Code:

vbox ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active raid1 sdc[0] sdd[1]
      1048512 blocks [2/2] [UU]
...     
vbox ~ # shasum /dev/md126
35b4e07c30f11e6ad81fc45705376bc8d661415d  /dev/md126
vbox ~ # dd if=/dev/urandom of=/dev/sdd bs=1M
dd: error writing '/dev/sdd': No space left on device
1025+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.33977 s, 169 MB/s
vbox ~ # shasum /dev/md126
35b4e07c30f11e6ad81fc45705376bc8d661415d  /dev/md126


I.e., I've overwritten one of the disks constituting the raid (the disk denoted as [1]) with random data.

Surprisingly, the outcome of reading /dev/md126 was still the same (same shasum).

So my conclusion is: raid1 writes to both drives, but when it comes to reading, it reads from only one of them, namely (in this case) the one shown as [0] in /proc/mdstat.

This is in line with what I observe on a real hw test machine, where I have a raid1 made from an old platter HDD and an SSD. When reading from the raid, I can hear the platter disk seek, and reads are rather slow; the conclusion is that the data are being read from the HDD (whose raid partition is the one denoted as [0]).
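For what it's worth, one way to confirm which member is actually serving the reads is to compare the per-device read counters in /proc/diskstats before and after reading the array. A sketch only; the sdc/sdd/md126 names match the virtual setup above:

```shell
# /proc/diskstats: field 3 is the device name, field 6 is sectors read.
awk '$3=="sdc" || $3=="sdd" {print $3, $6}' /proc/diskstats   # counters before
dd if=/dev/md126 of=/dev/null bs=1M                           # read the whole array once
awk '$3=="sdc" || $3=="sdd" {print $3, $6}' /proc/diskstats   # counters after
# The member whose sectors-read counter grew by roughly the array size
# is the one that served the reads.
```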

My question would be: Is it possible to somehow change this order? I.e. change which disk (partition) is [0] and which is [1]?

I've tried to search "man mdadm" for this, to no avail. Maybe I missed something?
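An aside on the HDD+SSD machine: mdadm has a --write-mostly flag that marks a member so the kernel avoids reading from it when another copy is available, which would steer reads to the SSD without touching the [0]/[1] roles. A sketch of how that could look (device names are assumptions, and the array is briefly degraded while re-adding, so have backups):

```shell
# Re-add the slow (HDD) member flagged write-mostly so reads prefer the SSD.
mdadm /dev/md126 --fail /dev/sdc --remove /dev/sdc
mdadm /dev/md126 --add --write-mostly /dev/sdc
# /proc/mdstat marks a write-mostly member with a (W) suffix:
cat /proc/mdstat
```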
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

Posted: Sat May 30, 2020 11:34 pm

I don't know, but I think you would have to disassemble the array and then assemble it again in reverse order.
_________________
patrix_neo wrote:
The human thought: I cannot win.
The ratbrain in me : I can only go forward and that's it.
apurkrt

Posted: Sun May 31, 2020 11:52 am

Bones McCracker wrote:
I don't know, but think you would have to disassemble the array then assemble it again in reverse order.


Reassembling in reverse order does not change anything:

Code:
vbox ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active raid1 sdc[0] sdd[1]
      1048512 blocks [2/2] [UU]
     
unused devices: <none>

vbox ~ # mdadm --stop /dev/md126
mdadm: stopped /dev/md126

vbox ~ # mdadm -A /dev/md126 /dev/sdd /dev/sdc
mdadm: /dev/md126 has been started with 2 drives.

vbox ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active raid1 sdc[0] sdd[1]
      1048512 blocks [2/2] [UU]
     
unused devices: <none>


Even starting the array with sdd alone (and only later adding sdc) does not force sdd to be marked as [0]:
Code:
vbox ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active raid1 sdc[0] sdd[1]
      1048512 blocks [2/2] [UU]
     
unused devices: <none>
     
vbox ~ # mdadm --stop /dev/md126
mdadm: stopped /dev/md126

vbox ~ # mdadm -A /dev/md126 --run /dev/sdd
mdadm: /dev/md126 has been started with 1 drive (out of 2).

vbox ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active raid1 sdd[1]
      1048512 blocks [2/1] [_U]

vbox ~ # mdadm /dev/md126 -a /dev/sdc
mdadm: hot added /dev/sdc

vbox ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active raid1 sdc[2] sdd[1]
      1048512 blocks [2/1] [_U]
      [===============>.....]  recovery = 76.9% (806976/1048512) finish=0.0min speed=201744K/sec

(and later)

vbox ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md126 : active raid1 sdc[0] sdd[1]
      1048512 blocks [2/2] [UU]
apurkrt

Posted: Sun May 31, 2020 12:01 pm

The number in square brackets [ ] seems to correspond to the "RaidDevice" number printed in the metadata:

Code:
vbox ~ # mdadm -E /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : ece21326:d9c89e34:eff5119f:08dda25b (local to host vbox)
  Creation Time : Thu May 28 09:37:13 2020
     Raid Level : raid1
  Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
     Array Size : 1048512 (1023.94 MiB 1073.68 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Sun May 31 13:50:26 2020
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 265bbadb - correct
         Events : 143


      Number   Major   Minor   RaidDevice State
this     0       8       32        0      active sync   /dev/sdc

   0     0       8       32        0      active sync   /dev/sdc
   1     1       8       48        1      active sync   /dev/sdd


vbox ~ # mdadm -E /dev/sdd
/dev/sdd:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : ece21326:d9c89e34:eff5119f:08dda25b (local to host vbox)
  Creation Time : Thu May 28 09:37:13 2020
     Raid Level : raid1
  Used Dev Size : 1048512 (1023.94 MiB 1073.68 MB)
     Array Size : 1048512 (1023.94 MiB 1073.68 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 126

    Update Time : Sun May 31 13:50:26 2020
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 265bbaed - correct
         Events : 143


      Number   Major   Minor   RaidDevice State
this     1       8       48        1      active sync   /dev/sdd

   0     0       8       32        0      active sync   /dev/sdc
   1     1       8       48        1      active sync   /dev/sdd
apurkrt

Posted: Sun May 31, 2020 12:17 pm

Basically, I'm looking for a way/tool to manipulate the "Device-Roles (Positions-in-Array)" area of the superblock:

https://raid.wiki.kernel.org/index.php/RAID_superblock_formats

But I guess this is rarely needed in real-world usage, which is presumably why such a tool (apparently) does not exist.
Bones McCracker

Posted: Sun May 31, 2020 2:35 pm

When I said "disassemble", I meant completely destroy the raid array and re-create it.
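A sketch of what that re-creation might look like, assuming the 0.90-metadata array and device names from the posts above. With --assume-clean the initial resync is skipped and the existing contents are kept, and the first device listed becomes RaidDevice 0. This is risky if the geometry or metadata version differs from the original, so have backups first:

```shell
mdadm --stop /dev/md126
# List the devices in the desired order: first listed => RaidDevice 0.
mdadm --create /dev/md126 --level=1 --raid-devices=2 \
      --metadata=0.90 --assume-clean /dev/sdd /dev/sdc
cat /proc/mdstat   # should now show sdd[0] sdc[1]
```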