Gentoo Forums
Gentoo Forums Forum Index :: Installing Gentoo
Vieri
l33t


Joined: 18 Dec 2005
Posts: 769

PostPosted: Mon Oct 19, 2020 7:10 am    Post subject: wipe out part containing a linux_raid_member file system Reply with quote

Hi,

When I clear out a hard disk to reinstall Gentoo from scratch I usually run:

Code:
dd if=/dev/zero of=${TARGET_DEVICE} bs=1M count=100


That usually works, but sometimes I get error messages such as this one:

Code:
/dev/sda3 contains a linux_raid_member file system
/dev/sda3 is apparently in use by the system; will not make a filesystem here!


I could run something like:

Code:
dd if=/dev/zero of=${TARGET_DEVICE} bs=1M status=progress


but it could take forever.

What is the "correct" amount for "count" in this case (to properly clear the Linux RAID information)?

Vieri
Goverp
Veteran


Joined: 07 Mar 2007
Posts: 1339

PostPosted: Mon Oct 19, 2020 9:17 am    Post subject: Reply with quote

Depends on the RAID version. Assuming it's an mdadm RAID: while v0.9 put the metadata at the start of the partition, at least one of the versions put it at the end.
_________________
Greybeard
Hu
Moderator


Joined: 06 Mar 2007
Posts: 18097

PostPosted: Mon Oct 19, 2020 4:12 pm    Post subject: Reply with quote

Have you tried using wipefs instead of using dd and guessing how much to wipe? According to its man page:
Code:
       wipefs  can erase filesystem, raid or partition-table signatures (magic
       strings) from the specified device to make the signatures invisible for
       libblkid.
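As a hedged illustration (not from the thread): wipefs can be exercised safely against a scratch file standing in for a partition. The ext2 magic bytes below are planted by hand purely so that wipefs has a signature to find; on a real disk you would point it at the partition instead.

```shell
# Scratch file standing in for a real partition such as /dev/sda3.
IMG=$(mktemp)
truncate -s 8M "$IMG"
# Plant the ext2 magic (0xEF53, little-endian) at superblock offset 0x438,
# so wipefs/libblkid has a signature to detect.
printf '\123\357' | dd of="$IMG" bs=1 seek=1080 conv=notrunc 2>/dev/null
BEFORE=$(wipefs "$IMG")         # reports the planted ext2 signature
wipefs --all "$IMG" >/dev/null  # erase every signature wipefs can see
AFTER=$(wipefs "$IMG")          # empty: nothing left to detect
rm -f "$IMG"
```

On a real disk the equivalent is `wipefs --all /dev/sdaN` (each partition, then the whole disk); it only zeroes the magic bytes, so it is much faster than dd-ing the whole device.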
sabayonino
l33t


Joined: 03 Jan 2012
Posts: 871

PostPosted: Mon Oct 19, 2020 6:56 pm    Post subject: Reply with quote

The RAID member was started at boot because RAID support is built into your kernel (or loaded as a module).

You need to install sys-fs/mdadm to manage RAID array(s).

Stop the array that you need to clean.

sda3 is part of a /dev/md[0-9] array, so you need to stop the md array that sda3 belongs to.

Check the arrays
Code:
cat /proc/mdstat


Code:
mdadm -S /dev/md[0-9]


Then clean up the partition/disk.

[edit] see: https://wiki.gentoo.org/wiki/User:SwifT/Complete_Handbook/Software_RAID
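A sketch of that stop-then-wipe sequence (the array and member names are only examples taken from later in the thread; run() echoes each command instead of executing it, so the sketch is harmless as written; drop the echo to do it for real):

```shell
# Echo instead of executing, so this sketch is safe to run as-is.
run() { echo "+ $*"; }

run cat /proc/mdstat                   # 1. see which arrays exist
run mdadm -S /dev/md3                  # 2. stop the array sda3 belongs to
run mdadm --zero-superblock /dev/sda3  # 3. erase the RAID superblock
```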
_________________
BOINC and distributed computing

My LRecoverySystem Repo
Vieri
l33t


Joined: 18 Dec 2005
Posts: 769

PostPosted: Mon Oct 19, 2020 8:26 pm    Post subject: Reply with quote

Code:
# wipefs /dev/sda*
DEVICE OFFSET       TYPE              UUID                                 LABEL
sda    0x200        gpt
sda    0x37e4895e00 gpt
sda    0x1fe        PMBR
sda1   0x1f0000     linux_raid_member ed72e609-09ba-520c-cb20-1669f728008a
sda2   0x52         vfat              4E62-A9D0
sda2   0x0          vfat              4E62-A9D0
sda2   0x1fe        vfat              4E62-A9D0
sda3   0x1fff0000   linux_raid_member d27fc13c-fe90-62e2-cb20-1669f728008a
sda3   0x438        ext2              c33a16ca-ecdb-4aeb-96d1-1372e1651fa3
sda4   0x8d29f0000  linux_raid_member fcc1edda-d3cb-9c48-cb20-1669f728008a
sda4   0xff6        swap              c4a25a0b-d075-4283-b063-4a1e8b48e398
sda5   0x2ecb680000 linux_raid_member ee737a47-704b-1e7b-cb20-1669f728008a
sda5   0x438        ext4              6a6a96cb-a221-4c0a-aeee-18aef33d1895

# mdadm --detail --scan
ARRAY /dev/md/1 metadata=0.90 UUID=ed72e609:09ba520c:cb201669:f728008a
ARRAY /dev/md/3 metadata=0.90 UUID=d27fc13c:fe9062e2:cb201669:f728008a
ARRAY /dev/md/4 metadata=0.90 UUID=fcc1edda:d3cb9c48:cb201669:f728008a
ARRAY /dev/md/5 metadata=0.90 UUID=ee737a47:704b1e7b:cb201669:f728008a

# fdisk /dev/sda

Welcome to fdisk (util-linux 2.35.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sda: 223.58 GiB, 240057409536 bytes, 468862128 sectors
Disk model: INTEL SSDSCKKB24
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 680354E4-774F-4262-BE1A-86CF0A0F9A5A

Device        Start       End   Sectors   Size Type
/dev/sda1      2048      6143      4096     2M BIOS boot
/dev/sda2      6144   1054719   1048576   512M EFI System
/dev/sda3   1054720   2103295   1048576   512M Linux RAID
/dev/sda4   2103296  76113919  74010624  35.3G Linux RAID
/dev/sda5  76113920 468655279 392541360 187.2G Linux RAID


I think I had a "device or resource busy" message when I tried to use wipefs to do the job, but I cannot reproduce the problem right now.
So I'll give it another shot asap.

Thanks
szatox
Advocate


Joined: 27 Aug 2013
Posts: 2154

PostPosted: Tue Oct 20, 2020 6:25 pm    Post subject: Reply with quote

Goverp wrote:
Depends on the RAID version, Assuming it's an mdadm RAID, while v0.9 put the metadata at the start of the partition, at least one of the versions put it at the end.
Actually, it's the other way around.
metadata version 0.9 puts its superblock at the end of the underlying device, which means that in the case of RAID 1 (mirror) you can access the data on a single device before assembling the RAID: a nice trick to know when you want to boot from RAID.
You may also find the backup copy of the GPT at the end of the device, which sometimes gets in the way.

metadata version 1.2 puts its superblock at the beginning, followed by the data space.

mdadm has a --zero-superblock option that wipes superblock signatures.
LVM, LUKS, and a bunch of other things put headers with magic sequences at the beginning too; that's why we often call them "headers" :lol:
Zeroing the first 4MB with dd tends to take care of those, though.
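To cover both layouts, one can zero the first and last few MiB of the device. A minimal sketch against a scratch file (an assumption, standing in for the real disk; on a real device you would get the size with blockdev --getsize64 instead of stat):

```shell
# 64 MiB scratch file filled with random data, standing in for the disk.
DEV=$(mktemp)
dd if=/dev/urandom of="$DEV" bs=1M count=64 2>/dev/null
SIZE=$(stat -c %s "$DEV")   # on a real device: blockdev --getsize64 /dev/sdX
# Zero the first 4 MiB (md 1.2, LVM, LUKS headers) ...
dd if=/dev/zero of="$DEV" bs=1M count=4 conv=notrunc 2>/dev/null
# ... and the last 4 MiB (md 0.90 superblock, GPT backup).
dd if=/dev/zero of="$DEV" bs=1M count=4 conv=notrunc \
   seek=$(( SIZE / 1048576 - 4 )) 2>/dev/null
# Count non-zero bytes left in head and tail (0 means fully wiped).
HEAD_LEFT=$(head -c 4194304 "$DEV" | tr -d '\0' | wc -c)
TAIL_LEFT=$(tail -c 4194304 "$DEV" | tr -d '\0' | wc -c)
rm -f "$DEV"
```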