Gentoo Forums
Yup! I'm doing it - reinstalling Gentoo *gasp*

 
Gentoo Forums Forum Index » Installing Gentoo
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1353
Location: KUUSANKOSKI, Finland

Posted: Sun Sep 02, 2018 11:34 am    Post subject: Yup! I'm doing it - reinstalling Gentoo *gasp*

Stop!
Before you comment - the story so far:
  • I was on my PC, just about to start working on some exercises (setting up some virtual machines on a headless server)...
  • ... when a power outage strikes!
  • "Oh no! But wait! The lights are still on, and so is everything else BUT my desktop PC."
  • *a cat peeps out from behind the PC* (yes, this literally happened like this)
  • So the culprit was a loose IEC C13/C14 power connector, which the cat had pushed until it disconnected.
  • I dug my cables out and replaced the power cord with one that fit more snugly into the PSU.
  • I booted the PC, but the boot hung at the initramfs stage, unable to mount the btrfs root.
  • None of the usual repair operations helped, so the filesystem is trash. I must have bad luck, since it's a six-disk btrfs-raid10 (internally every file has two copies, each striped across 1 to 3 disks).
  • As a last resort I attached my backup drive to the PC and used btrfs restore. After going through several btrfs tree roots I managed to back up ~everything - at least the most important things from /etc and /home.
  • Yay me! (?)
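For anyone who ends up in the same spot, the recovery steps above look roughly like this. This is a sketch, not the exact commands used here: /dev/sdX, /mnt/backup and the bytenr are placeholders, and btrfs restore only copies files out without modifying the broken filesystem.

Code:
```shell
# List candidate tree roots on the broken filesystem (newest generation first):
btrfs-find-root /dev/sdX

# Try restoring with the default tree root first; -v lists files as they are copied out:
btrfs restore -v /dev/sdX /mnt/backup

# If that fails, retry with an older tree root from btrfs-find-root's output
# (-t takes a bytenr; 123456789 here is purely illustrative):
btrfs restore -v -t 123456789 /dev/sdX /mnt/backup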


Now that there's a significant chance of corrupted data, I need to rebuild everything. This is why I thought I'd just boot from the latest Gentoo live media and reinstall my setup. I have my config files backed up, so there's not much to do besides glancing over the configs for corruption errors and then running long emerge sessions. BUT, while I'm at it, I thought I'd
  • switch to stable (may need to adjust configs)
  • simplify my disk layout --> /boot on a memstick maybe, and all SSDs go full btrfs
  • maybe UEFI boot?


The most complicated part is switching to stable. Lately I've noticed that I really don't need to run software that's so close to the edge. If I do need something from unstable, I can adjust package.accept_keywords (for packages that don't pull in dependencies from unstable), and if I change my mind, switching back to unstable isn't much of a hassle.
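For reference, the package.accept_keywords part is just a per-package keyword file; the package atoms below are hypothetical examples, not packages from my actual setup.

Code:
```
# /etc/portage/package.accept_keywords/custom
# keep the system on stable, but take these two packages from ~amd64
app-editors/neovim ~amd64
sys-fs/btrfs-progs ~amd64
```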

So for the most part I (think I) know what I'm doing here, but if you, fellow forum members, have any tips about potential pitfalls, please post them here. :)
The first tip would be the obvious one: back up more often. Yeah. I got lazy and put too much trust in btrfs-raid10 + snapshots (the latter for user errors). :\
_________________
..: Zucca :..

Code:
ERROR: '--failure' is not an option. Aborting...
P.Kosunen
Apprentice


Joined: 21 Nov 2005
Posts: 299
Location: Finland

Posted: Mon Sep 03, 2018 10:58 am

It was with Btrfs's internal RAID? I guess it's not ready for production use yet; it might be better to use the kernel's MD RAID 10 with Btrfs on top of it.
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1353
Location: KUUSANKOSKI, Finland

Posted: Mon Sep 03, 2018 1:09 pm

Only btrfs-raid5/6 is considered unstable.
That said, btrfs as a whole isn't considered production ready yet. :|
That said (2), this now-broken btrfs had survived other power outages just fine. Bad luck? Maybe.

The other option for my setup would be lvm2 and its internal RAID, with maybe ext4 on top. Since lvm2 uses mdraid code under the hood, it should be pretty stable. But I'm not sure how it handles different-sized disks, or swapping them out for bigger ones. I know btrfs can handle different combinations of disks seamlessly (automatic fs shrink, data rebalancing when a drive is removed, etc...). The keyword is "seamless". Btrfs is the software equivalent of a Drobo in the Linux world in terms of ease of use (imo).
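The lvm2 route would look something like this. Purely a sketch with made-up device names, volume group name and size; lvm2's raid10 segment type is backed by the kernel's MD code.

Code:
```shell
# Hypothetical example: a RAID10 logical volume across six PVs.
pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
vgcreate vg0 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# 3 stripes, each mirrored once (-m 1); size and LV name are placeholders
lvcreate --type raid10 -i 3 -m 1 -L 200G -n data vg0
mkfs.ext4 /dev/vg0/data
```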
Spargeltarzan
Apprentice


Joined: 23 Jul 2017
Posts: 266

Posted: Tue Sep 04, 2018 10:40 am

I would have a look at ZFS. For my data pool I use a 6-disk RAID-Z2. Don't take Z1 if you really care about your data; the statistical chance of data loss is reduced very significantly with two disks of parity. (Consider that while resilvering one disk, all the other disks are stressed more, which might cause a second disk to fail.)

You can also use ZFS for your root pool, but it will take some time to install: custom initramfs, etc. Have a look at fearedbliss' work; it creates an initramfs for you too.

I am also on stable and upgrade to ~amd64 for some packages only. If those packages pull in many ~amd64 dependencies, you might want to consider virtualising or chrooting your ~amd64 setup, because otherwise you would have too mixed a stable/unstable system, which is a recipe for blockers.
_________________
___________________
Regards

Spargeltarzan

Notebook: Lenovo YOGA 900-13ISK: Gentoo stable amd64, GNOME systemd, KVM/QEMU
Desktop-PC: Intel Core i7-4770K, 8GB Ram, AMD Radeon R9 280X, ZFS Storage, GNOME openrc, Dantrell, Xen
Tony0945
Advocate


Joined: 25 Jul 2006
Posts: 2607
Location: Illinois, USA

Posted: Tue Sep 04, 2018 1:53 pm

I hear cat tastes just like chicken. :!:
Ant P.
Watchman


Joined: 18 Apr 2009
Posts: 5225

Posted: Tue Sep 04, 2018 4:21 pm

Oof, sorry to hear that. I use Btrfs everywhere myself but it's only in single-disk configurations. I wouldn't trust it with a RAID (or ZFS for that matter), I'd always go with the old-fashioned solution.
P.Kosunen
Apprentice


Joined: 21 Nov 2005
Posts: 299
Location: Finland

Posted: Wed Sep 05, 2018 3:55 pm

Ant P. wrote:
I wouldn't trust it with a RAID (or ZFS for that matter), I'd always go with the old-fashioned solution.

ZFS should be reliable; it was already good over ten years ago, and it's used "everywhere". Though it does hog memory with big pools.
Ant P.
Watchman


Joined: 18 Apr 2009
Posts: 5225

Posted: Wed Sep 05, 2018 7:04 pm

I've heard two major, recurring complaints about ZFS: the memory consumption, and having important parts of the system reliant on an out-of-tree driver. If I had enough storage to warrant using ZFS I'd probably go with BSD.
Spargeltarzan
Apprentice


Joined: 23 Jul 2017
Posts: 266

Posted: Thu Sep 06, 2018 10:55 am

When using ZFS for a root pool, the memory consumption shouldn't be the same as for a big data pool; don't forget you can also configure ZFS to use less RAM, or more if you have plenty of it. I use it for a 12TB data pool (6x3TB RAID-Z2) and with default settings I am fine with the memory consumption. I'd actually like to allow it to use more RAM, because ZFS benefits from it; the developers chose quite a small default memory limit to calm users' fears about RAM usage. As long as you don't want deduplication, you should be fine with normal consumer hardware. I used it with 8GB RAM; now I have 24GB because I virtualise more, but even with 8GB I was happy, and the 24GB is the reason I want to allow ZFS more.

Agreed that with ZFS you rely on a package which is not included in the mainline kernel, but the package itself is incredibly reliable, with all of its end-to-end data checksumming, transactions, compression, caching, etc.
You need to invest some effort to set up the root pool, and you have to remember to recreate the initramfs when upgrading the kernel. fearedbliss' work does the job, and users can automate it in scripts. Scripts for upgrading Gentoo are a good idea anyway, so ZFS only adds a little on top.

And seeing what happened to a BTRFS user here... I would decide between ext4 and ZFS. Ext4 will boot without a doubt; with ZFS the user gains a lot, and the price is keeping an eye on the initramfs scripts.
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1353
Location: KUUSANKOSKI, Finland

Posted: Thu Sep 06, 2018 1:55 pm

How well does ZFS manage different-sized disks in the same pool nowadays?
My disks range from 120GB to 512GB. In my server I have everything from 640GB to 4TB.
I could consider ZFS for my server, but last time I compared it to btrfs, it was a little too complicated for the use case. Also, can you specify the redundancy level in ZFS (i.e. how many disks you can lose before you may start losing data)?
The reason I originally chose btrfs over ZFS was ease of use.
HenryW
n00b


Joined: 07 Sep 2018
Posts: 2

Posted: Fri Sep 07, 2018 1:25 pm

Power outages - are those normal in Finland?
How do you cope? A UPS is only good for a few minutes.
Spargeltarzan
Apprentice


Joined: 23 Jul 2017
Posts: 266

Posted: Fri Sep 07, 2018 1:52 pm

Zucca wrote:
How well does ZFS manage different-sized disks in the same pool nowadays?

For ZFS you should use disks of the same size. If you plan to mirror and use 1x1TB and 1x2TB, you will only get 1TB of disk space. If you set up a RAID, you should also use the same size for all disks: AFAIK, if you use a disk with more capacity in the RAID, only as much of it as the size of the other disks will be used.


Zucca wrote:

I could consider ZFS for my server, but last time I compared it to btrfs, it was a little too complicated for the use case. Also, can you specify the redundancy level in ZFS (i.e. how many disks you can lose before you may start losing data)?
The reason I originally chose btrfs over ZFS was ease of use.

Yes, you can specify the parity level: RAID-Z1 has 1 disk of parity, RAID-Z2 has 2, RAID-Z3 has 3. I would recommend RAID-Z2, because if one disk fails, resilvering the replacement stresses all the other disks, which increases the likelihood of a second disk failure, which in turn would mean total data loss. Additionally, I have calculated the MTTDL for many pool configurations, and Z2 reduces the likelihood of total data loss significantly compared to Z1.
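To make the Z1-vs-Z2 point concrete, here's a rough back-of-the-envelope illustration (not a full MTTDL model; the 2% per-disk failure chance during a resilver window is an invented, illustrative number):

Code:
```python
from math import comb

def p_loss_during_resilver(n_disks: int, parity: int, p_fail: float) -> float:
    """Chance the pool dies while resilvering a replacement disk.

    One disk has already failed; the degraded pool can absorb
    (parity - 1) further failures among the remaining disks.
    p_fail is each surviving disk's assumed failure probability
    during the resilver window.
    """
    remaining = n_disks - 1
    tolerated = parity - 1  # extra failures the degraded pool survives
    return sum(
        comb(remaining, k) * p_fail**k * (1 - p_fail)**(remaining - k)
        for k in range(tolerated + 1, remaining + 1)
    )

# 6-disk pool, assuming a 2% per-disk failure chance during the resilver:
print(p_loss_during_resilver(6, 1, 0.02))  # RAID-Z1: ~0.096
print(p_loss_during_resilver(6, 2, 0.02))  # RAID-Z2: ~0.0038
```

With these (made-up) numbers, Z2 is roughly 25 times less likely to lose the pool during a resilver than Z1.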

Setting up a data pool is straightforward. Be aware of the ashift, which depends on the sector size of your disks: double-check your disks' specifications and set it correctly, because it cannot be changed after the pool is created. I would recommend compression=lz4; that one can be changed later.

Plan how much your data will grow over the next 3 years minimum and oversize your pool, so you won't end up facing a full pool. Don't fill your pool excessively; performance might drop.

I would not recommend striped mirrors, because you spend many disks on overhead (double the disks), so you only get 50% of your pool's raw capacity, and if random disk failures hit both halves of one mirror, the pool is dead. Additionally, those extra disks (or some of them) could have been invested in a RAID-Z pool, offering more performance and capacity according to my benchmark results. Also, in a Z2 any two disks can fail, which is not the case for striped mirrors.

Check out the FreeBSD documentation; you can set up a data pool with a single command, specifying the pool type, ashift and compression. Ashift is in theory set automatically, but since it cannot be changed, and some disks report a wrong sector size, you should be 100% sure of your ashift.
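A sketch of that one command. The pool name, by-id paths and ashift=12 (which matches 4K-sector disks) are assumptions you'd adapt to your own hardware:

Code:
```shell
# Hypothetical 6-disk RAID-Z2 pool named "tank"; the by-id paths are placeholders.
zpool create -o ashift=12 -O compression=lz4 tank raidz2 \
    /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 /dev/disk/by-id/disk3 \
    /dev/disk/by-id/disk4 /dev/disk/by-id/disk5 /dev/disk/by-id/disk6
```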

Scrubbing is the process of checking, and where possible healing, errors in your data pool. I run it about once a month, so I can be sure my data is in good shape.

Despite all of ZFS's good features, back up your pool. There are other scenarios than disk failure: human error, fire, etc.
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1353
Location: KUUSANKOSKI, Finland

Posted: Fri Sep 07, 2018 2:03 pm

HenryW wrote:
Power outages - are those normal in Finland?
How do you cope? A UPS is only good for a few minutes.
Read the first post again. ;) Also, my UPS (currently not in use) has enough capacity to keep my server running for 30-45 minutes.

As for ZFS, it still seems too enterprisey for my use case. By that I mean: I want to throw a bunch of disks at it and have it arrange them as best it can.
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1353
Location: KUUSANKOSKI, Finland

Posted: Fri Sep 07, 2018 5:28 pm

Now I have a problem with grub-mkconfig. Instead of using rEFInd I'd like to use GRUB, for customisability reasons. I'd also like to finally "learn" grub2.
However... grub-mkconfig creates a faulty grub.cfg.
I think the problem is that I don't have any partitions on the disks that are part of the main btrfs pool. Only /boot (and ./efi/ inside it) is separate, to avoid a needlessly complicated setup.

I've inspected grub-mkconfig's output by piping it to less, for example, and whatever I do, this syntax error appears:
part of the faulty grub.cfg:
menuentry 'Gentoo GNU/Linux' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-62e4ab50-da1c-4185-9198-ad1d0de84f53' {
        load_video
        insmod gzio
        insmod part_gpt
        insmod ext2
        set root='hd6,gpt2'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd6,gpt2 --hint-efi=hd6,gpt2 --hint-baremetal=ahci6,gpt2  7fe61fb9-ca93-40c9-bfa8-ce0362e00a15
        else
          search --no-floppy --fs-uuid --set=root 7fe61fb9-ca93-40c9-bfa8-ce0362e00a15
        fi
        echo    'Loading Linux x86_64-4.14.65-gentoo-wren ...'
        linux   /kernel-genkernel-x86_64-4.14.65-gentoo-wren root=/dev/sda
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde
/dev/sdf ro libata.force=3Gbps rootfstype=btrfs dobtrfs
}


So now I'm planning to create my own grub.cfg, one that can scan /boot for kernels and create boot entries for them.
Since grub configs are basically shell scripts nowadays, it should be possible. Right?
If someone knows of pre-made grub configs that do this... please share. ;)
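An untested sketch of such a hand-written grub.cfg. For-loops and passing extra menuentry arguments (visible as $2, $3, ... in the entry body) are documented grub2 scripting features; the UUID, kernel naming pattern and root= value below are placeholders, and wildcard expansion depends on your GRUB build:

Code:
```
# Hand-written grub.cfg sketch: one menu entry per kernel image in /boot.
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set=root 0000-0000-placeholder

for k in ($root)/kernel-*; do
    menuentry "Gentoo ${k}" "${k}" {
        # the second menuentry argument (the kernel path) is $2 here
        linux $2 root=UUID=your-btrfs-uuid ro rootfstype=btrfs
    }
done
```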