Gentoo Forums
Gentoo on ZFS
Gentoo Forums Forum Index -> Gentoo Chat

kernelOfTruth
Watchman

Joined: 20 Dec 2005
Posts: 5701
Location: Vienna, Austria; Germany; hello world :)

PostPosted: Tue Jan 14, 2014 11:14 am    Post subject: Reply with quote

well, the hard drive seems to be fine


yet another scrub didn't reveal any issues;

I did a manual export of the pool before today's reboot and manually re-imported it afterwards:

Quote:
zpool status
  pool: WD20EARS
 state: ONLINE
  scan: scrub repaired 0 in 8h31m with 0 errors on Tue Jan 14 06:08:43 2014
config:

        NAME          STATE     READ WRITE CKSUM
        WD20EARS      ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            wd20ears  ONLINE       0     0     0
            hd203wi   ONLINE       0     0     0

errors: No known data errors



will see if the error message occurs again after the next reboot


any ideas how to solve this ?

is this a known issue ?
_________________
https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD
2.6.37.2_plus_v1: BFS, CFS,THP,compaction, zcache or TOI
Hardcore Linux user since 2004 :D
Yamakuzure
Veteran

Joined: 21 Jun 2006
Posts: 1389
Location: Bardowick, Germany

PostPosted: Tue Jan 14, 2014 11:38 am    Post subject: Reply with quote

It just said that there was something wrong in the past. But the current state listed everything as 'ONLINE', so nothing is wrong at all. Just use 'zpool clear <pool>' to get rid of the message.

Unless the state of a device or the pool itself changes away from 'ONLINE', everything is in fine working order. Otherwise the device would be listed as 'FAULTED'.

Edit: If one scrub, which checks the checksums of everything, can't find anything, there is no need for another scrub, really. This isn't chkdsk. ;)
_________________
I *do* know that I easily aggravate people due to my condensed writing. Rule of thumb: If I wrote anything that can be understood in two different ways, and one way offends you, then I meant the other! ;)
kernelOfTruth

PostPosted: Tue Jan 14, 2014 9:56 pm    Post subject: Reply with quote

Yamakuzure wrote:
It just said that there was something wrong in the past. But the current state listed everything as 'ONLINE', so nothing is wrong at all. Just use 'zpool clear <pool>' to get rid of the message.


ok

Yamakuzure wrote:

Unless the state of a device or the pool itself changes away from 'ONLINE', everything is in fine working order. Otherwise the device would be listed as 'FAULTED'.


great, so the data's fine after all =)

Yamakuzure wrote:

Edit: If one scrub, which checks the checksums of everything, can't find anything, there is no need for another scrub, really. This isn't chkdsk. ;)


haha, yeah - just wanted to be really sure, since *all* of my data is on there (including very valuable stuff: my study material, my thesis and more)

in that regard a little paranoia can never hurt :wink:





next practical problem - I get the following error when trying to delete a folder in /home/user:

http://askubuntu.com/questions/288513/cant-move-files-to-the-trash


this used to work before

just before that, a file got deleted without my being asked


any way to "fix" that ?


/home is a symbolic link to /WD20EARS/home

Quote:
lrwxrwxrwx 1 root root 15 Jan 13 10:55 home -> /WD20EARS/home/


.local/share/Trash

and ~/.Trash

exist and seem to have the correct permissions (worked before)


this is on XFCE + nemo / nautilus


edit:

thunar says:

Quote:
Error trashing file: Unable to trash file: Invalid cross-device link



is there a way to rename the ZFS pools & directories, i.e.

WD20EARS/home/ to just home/, keeping the existing subdirectories ?

I know that mount points are inherited in ZFS and can be easily renamed

but this is quite a different thing - isn't it ?!

not sure if this issue can be easily solved (if ZFS works that way; I don't want to wreak havoc on my data ^^ , will research more ...) :oops:


edit2:

will try what is documented in the below link:

http://docs.oracle.com/cd/E19253-01/819-5461/gamnq/index.html
kernelOfTruth

PostPosted: Tue Jan 14, 2014 10:41 pm    Post subject: Reply with quote

ok, following the mountpoint procedure worked nicely:

Quote:
zfs set mountpoint=/home WD20EARS/home


however nemo, nautilus and thunar still keep on complaining

at least ~/.local/share/Trash is being created - but it's still the same error message


seems like nemo & thunar regard the /home partition on the ZFS pool as a network device ? :o


edit:

is there a way to give ZFS the mount option uid=1000 ?

this seems to solve some issues in that regard :?

http://www.computerandyou.net/2011/06/how-to-solve-cannot-move-file-to-trash-do-you-want-to-delete-immediately/


this is a real showstopper - still looking on the web on how to resolve this :?


edit2:

in a nutshell:

Gnome and all its related stuff: they don't consider all the cases

https://bugzilla.redhat.com/show_bug.cgi?id=436160

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=698640



sorry for the wording but - that sucks

hope this doesn't also apply to KDE's dolphin / konqueror (i.e. that they handle it in a similar way)


edit3:

just tried it with Konqueror & Dolphin and it worked :D


Full ZFS migration - here I come (in baby steps) :mrgreen:
Yamakuzure

PostPosted: Wed Jan 15, 2014 12:14 pm    Post subject: Reply with quote

Wow, so much data!

Well, it is a good idea to start with the zpool without a mountpoint.

For instance, I have 6 small truecrypt containers bound together as a RAIDZ zpool, and then three datasets for the backup folders. (The truecrypt containers are backed up in three dropbox accounts.)

To move the data from the previous backup folders I had to first mount the new ones somewhere else:
Code:
zpool create -f -m none -o cachefile= -o ashift=9 -O compression=on bpool raidz /dev/mapper/truecrypt21 /dev/mapper/truecrypt22 /dev/mapper/truecrypt23 /dev/mapper/truecrypt24 /dev/mapper/truecrypt25 /dev/mapper/truecrypt26 -R /home/sed/backup_new
This means that the pool itself does not hold a mountpoint of its own ('-m none'), but all datasets mounted during this session appear under '-R /home/sed/backup_new' with their full paths. So once exported and re-imported (without '-R'), the datasets mount in their correct locations - which weren't empty before I moved the data.
Code:
zfs create -o compression=lz4 -p bpool/backups
zfs create -o mountpoint=/home/sed/pryde/PrydeWorX/Dropbox bpool/backups/PrydeWorX
zfs create -o mountpoint=/home/sed/Backup bpool/backups/Main
zfs create -o mountpoint=/home/sed/pryde/Dropbox bpool/backups/Personal
The first is just the container, enabling compression, which is then inherited by the other datasets. As the pool has no mountpoint, the container has none either, and the following datasets need the mountpoint option.

Then I used 'rsync -avhHAX --progress' to copy the data to the new folders, deleted the old folders, exported the pool, re-imported without further options, and everything is there.
Code:
 ~ # zpool status bpool
  pool: bpool
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Mon Jan 13 19:09:22 2014
config:

        NAME             STATE     READ WRITE CKSUM
        bpool            ONLINE       0     0     0
          raidz1-0       ONLINE       0     0     0
            truecrypt21  ONLINE       0     0     0
            truecrypt22  ONLINE       0     0     0
            truecrypt23  ONLINE       0     0     0
            truecrypt24  ONLINE       0     0     0
            truecrypt25  ONLINE       0     0     0
            truecrypt26  ONLINE       0     0     0

errors: No known data errors
 ~ # zfs list -o name,used,avail,compression,compressratio,mountpoint | head -n 6
NAME                      USED  AVAIL  COMPRESS  RATIO  MOUNTPOINT
bpool                     961M  3,94G        on  1.01x  none
bpool/backups             961M  3,94G       lz4  1.01x  none
bpool/backups/Main        429M  3,94G       lz4  1.03x  /home/sed/Backup
bpool/backups/Personal    462M  3,94G       lz4  1.00x  /home/sed/pryde/Dropbox
bpool/backups/PrydeWorX  69,6M  3,94G       lz4  1.00x  /home/sed/pryde/PrydeWorX/Dropbox
I did the same for my personal data, but there the six truecrypt containers are used as five raidz devices plus one spare:
Code:
 ~ # zpool status ppool
  pool: ppool
 state: ONLINE
  scan: none requested
config:

        NAME             STATE     READ WRITE CKSUM
        ppool            ONLINE       0     0     0
          raidz1-0       ONLINE       0     0     0
            truecrypt11  ONLINE       0     0     0
            truecrypt12  ONLINE       0     0     0
            truecrypt13  ONLINE       0     0     0
            truecrypt14  ONLINE       0     0     0
            truecrypt15  ONLINE       0     0     0
        spares
          truecrypt16    AVAIL   

errors: No known data errors
Those are regularly backed up to an external hard drive. And in case one of the containers becomes corrupted, zfs will kick in the spare.

Further, if the total space of the containers isn't enough, I can replace them, one-by-one, with bigger containers on-the-fly.

That is far more convenient than the previous ext4 partitions I had...
kernelOfTruth

PostPosted: Tue Jan 28, 2014 11:18 pm    Post subject: Reply with quote

yeah, just had to get rid of the initial issues ASAP :lol:

thanks for sharing how you do it

wasn't aware that the ZFS way is such a convenient piece of cake :wink:

this will make things easier during a migration or for future backups


meanwhile I'm still using nemo/nautilus on xfce4, but I figured the issue out:

glib/gtk+/gvfs (or whatever library lies behind this) and the gtk-style trash implementation create the .Trash-1000 folder (with a different number for other users) in the root directory of the particular volume

and since I created several zfs volumes for convenience (future backup jobs, snapshots, separate compression algorithms, checksums, etc. etc.)

this hurts the workflow a little, but files don't get deleted that often - so in case files or folders need to be deleted:

right-click on folder -> "Open in Terminal" -> then from there work can be done

otherwise: firing up dolphin and deleting stuff from there (in case more tasks need to be done in one session)



meanwhile I've had several hard locks (with xf86-video-ati, fglrx, and, after the switch to nvidia-drivers, when running games via wine) - no data loss or data errors reported so far ( *fingers crossed* )

whereas a btrfs backup volume reported 2 errors in the root inode without even a hard lock, simply from being mounted twice - not sure what to think of that :o
kernelOfTruth

PostPosted: Sun Feb 02, 2014 7:33 pm    Post subject: Reply with quote

not sure how specific to Gentoo this is, but:


currently I have ZFS on /home and on some backup drives/partitions

to simplify: e.g. WD20EARS01, WD20EARS02, ... (these are the pools)

each of those pools has the same basic folder structure, e.g.

WD20EARS01/home/user/

WD20EARS02/home/user/

but they differ in the subfolders, in that on larger hard drives some of those folders are split up into zfs sub-volumes with different settings (e.g. copies=2, dedup=on)

e.g.

WD20EARS01/home/user/bak/data

(here, bak and data are plain directories inside user)

whereas for

WD20EARS02/home/user

WD20EARS02/home/user/bak/       <-- zfs sub-volume of "user"

WD20EARS02/home/user/bak/data   <-- zfs sub-volume of "bak"

so in a nutshell: a nested sub-volume structure to make working with specific folders easier



is it possible in this case to create a snapshot of WD20EARS01/home/user/, send it to WD20EARS02/home/user/, and assume that everything gets backed up and set up accordingly ?

(referring to the specific copies=2, dedup=on, etc. settings on the backup-zpool which differs from the current in-use /home-drive)



that's probably an unusual and/or advanced configuration for /home users, but since ZFS offers these features

I'd like to take full advantage of them

and hope that this time, too, I won't be disappointed (meaning: it'll work out)


Many thanks for reading and many thanks in advance for your help


:)


Just saw that this is actually also a pretty convenient way to back up and set up the root partition in the future - so: related to Gentoo :wink:
Yamakuzure

PostPosted: Sun Feb 02, 2014 8:43 pm    Post subject: Reply with quote

As far as I understand the "snapshot" and "rollback" commands, snapshots are bound to their dataset. You cannot roll back a snapshot onto a different dataset.

Which makes sense, as a snapshot does not store anything unless a file is changed. So basically the snapshot has no knowledge of unchanged files; it cannot "copy" anything unchanged over to another dataset.

And, why not use rsync for that? Or have I misunderstood you completely?
kernelOfTruth

PostPosted: Mon Feb 03, 2014 3:19 pm    Post subject: Reply with quote

currently I'm using rsync, but it's taking ages (around 1 hour when syncing the data from one ZFS (/home) partition to the other). When I was still using ext4 & btrfs it only took 10-20 minutes, at worst 30 minutes - but I don't trust those anymore due to data loss and other stories.

probably the snapshot-concept isn't so clear to me yet :lol:


WD20EARS/home/user/ <== is actually the /home folder


WD30EFRX/home/user/ is the backup drive (I shouldn't have called them WD20EARS01 and WD20EARS02 in the example above)


so the folder structure is the same on both the /home drive and the backup drive, with the only difference that on the backup drive (WD30EFRX) some folders are replicated locally (copies=2) and/or have deduplication enabled


to speed things up, the snapshot feature shall be used to transfer only the incremental changes since the last backup

so the process would be as follows:


==> make a backup via rsync so that both drives *are* identical (or as close to identical as possible, since /home is in use while the backup runs)
*) /home
*) WD30EFRX

==> snapshot

work as usual


==> create a snapshot of /home; ZFS knows the incremental changes/differences between its two snapshots

send these differences to WD30EFRX
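That workflow can be sketched with 'zfs send/receive' (pool/dataset and snapshot names are taken from this thread; this is an untested sketch that needs root and both pools imported). One caveat: 'zfs send -R' replicates child datasets together with their properties, which can clash with properties set locally on the backup pool (copies=2, dedup=on, ...):

```shell
# Initial full replication: snapshot recursively, send the whole tree.
zfs snapshot -r WD20EARS/home@base
zfs send -R WD20EARS/home@base | zfs receive -Fu WD30EFRX/home

# ... work as usual ...

# Later: snapshot again and send only the differences since @base.
zfs snapshot -r WD20EARS/home@today
zfs send -R -i @base WD20EARS/home@today | zfs receive -u WD30EFRX/home
```

The '-u' keeps the received datasets from being mounted over the live ones on the backup pool.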





or is there a different approach where the files on both pools - /home (WD20EARS) and /bak (WD30EFRX) - can be made identical transparently, independent of the specific settings on the zfs sub-volumes (copies=2 for some folders, deduplication, different compression) ?


trying to find the most time-efficient and elegant way to do it


Thanks for reading :)
drescherjm
Advocate

Joined: 05 Jun 2004
Posts: 2767
Location: Pittsburgh, PA, USA

PostPosted: Sat Feb 08, 2014 2:39 pm    Post subject: Reply with quote

Aonoa wrote:
genkernel-next-35 gives me "invalid root device" after trying to boot with a initramfs made with it.

I've actually not been able to use anything later than 24-r1 (which is now gone from portage, but I kept a binary package of it), and I need genkernel-next because of the udev support it has.


I think I have this same problem on a server I installed yesterday at work with genkernel-next-50. I have /boot on ext2, and / plus everything else on a 9-drive raidz3 (the drives are ~6 years old and have a high failure rate). If I boot off a zfs-enabled sysrescue usb stick and export the pool, the next boot will work. However, if I just reboot the gentoo server with / on zfs, I get several zfs-related messages like "pool already imported", device IO error, and finally, I believe, the "invalid root device". I am trying to debug this, possibly by creating my own initramfs (which I have not done in probably a decade).


Edit:
I tried to reproduce this on a VirtualBox guest using the same kernel version (gentoo-sources-3.12.10), same grub and same zfs version (0.6.2-r3), but with a single drive instead of the raidz3, and the boot worked every time. Could the array be coming up early and degraded? I mean, there are 3 different SATA controllers used for the 9 drives: 4 ports + 3 ports + 2 ports. Originally I had omitted the extra 2-port controller on the motherboard, and that gave me a degraded boot (raid1 x9 each time it tried to boot).

Edit2:
Looks like I need to add some additional output to etc/initrd.d/00-zfs.sh inside the genkernel initramfs. I will have to work on that on Monday, since it serves no purpose to reboot a machine that is 20 miles away and has no iKVM, unlike my newest server, which has IPMI.

Edit3:
Solved. Although a modified initramfs that exported the zpool and re-imported it after the failed root mount worked, the real solution was to regenerate the zpool.cache - then the mount did not fail in the first place.
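For reference, a sketch of that fix (the pool name 'rpool' and the genkernel invocation are assumptions - adjust to your setup; genkernel bundles /etc/zfs/zpool.cache into the initramfs when it is rebuilt):

```shell
# Point the imported pool at the standard cachefile location,
# which regenerates /etc/zfs/zpool.cache with the current device paths.
zpool set cachefile=/etc/zfs/zpool.cache rpool

# Rebuild the initramfs so it picks up the fresh zpool.cache.
genkernel --zfs initramfs
```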
_________________
John

My gentoo overlay
Instructions for overlay
kernelOfTruth

PostPosted: Wed Feb 19, 2014 7:28 pm    Post subject: Reply with quote

is there a way to create a kernel patch from the current zfs, zfs-kmod and spl state over at github.com (preferably also including some pretty recent changes) ?

Btrfs is failing on me again (or dm, cryptsetup or this hard drive is) - it's showing checksum errors and tons of missing checksums


anyway: I would really like to use ZFS on /home, / (root) and /usr/portage on the laptop, in an up-to-date state ( https://github.com/zfsonlinux/zfs/pull/2110 & https://github.com/zfsonlinux/spl/pull/328 )


Thanks for reading & Many thanks in advance for your answer :)



edit:

there's an up-to-date SysRescueCD (4.0.0) available with ZFS support (0.6.2) from funtoo:

http://www.funtoo.org/Creating_System_Rescue_CD_ZFS_Modules#Download_kernel_and_patches (look at Using the premade iso )


edit2:

https://mthode.org/posts/2013/Sep/gentoo-hardened-zfs-rootfs-with-dm-cryptluks-062/

hm, ... interesting

so you're patching the current zfs/spl tree into the kernel source via ebuild commands



is the mount/import issue during bootup still there ?
drescherjm

PostPosted: Thu Feb 20, 2014 5:37 pm    Post subject: Reply with quote

Quote:
there's an up-to-date SysRescueCD (4.0.0) available with ZFS support (0.6.2) from funtoo


That is what I used (the premade one) on the 2 systems where I have / (root) on zfs. In both cases I still have a separate /boot, mainly because I am unsure of the limitations of grub2's zfs support. I mean, does it support booting from a raidz3 with compression enabled? Will I run into trouble in the future when zfsonlinux gets out of sync with the grub2 zfs patch?
kernelOfTruth

PostPosted: Wed Feb 26, 2014 9:24 pm    Post subject: Reply with quote

would be interesting to know yeah :?

besides grub2, it also seems to work with grub legacy (0.97*) ? I have read about that in a German howto

with most of the issues sorted out, I'd definitely start using ZFS on / - but for now I'm giving Btrfs a try on the system partition


I stripped down most of my data and only copied the most important parts to the SSD in this laptop

scrub speed is pretty fast (190 MB/s) but not insanely fast (hdparm reports 335 MB/s, and encryption speed almost reaches 300 MB/s)

so 40-45% is "consumed" by ZFS ?

looking forward to performance optimizations (but there are currently more important things: the ARC revamp (just recently merged), and pull requests - e.g. for efficiency with small files in the memory allocators)
Yamakuzure

PostPosted: Wed Feb 26, 2014 9:40 pm    Post subject: Reply with quote

Yes, zfs seems to make things slower. At least on my laptop and from a very subjective view. Hopefully things will speed up in later versions.
kernelOfTruth

PostPosted: Sun Mar 02, 2014 4:09 pm    Post subject: Reply with quote

just asking here before I (unintentionally destroy and thus) lose any data


creating a snapshot, not needing it anymore,

then destroying that snapshot will NOT lead to any loss of data written after the creation of the snapshot ?

no, right ? it will keep the new data and just remove the "diff" information the snapshot has accumulated until now

and thus take some time until it's re-integrated and merged into one dataset/pool state again


hope I got the concept of the snapshots right ... :?
drescherjm

PostPosted: Sun Mar 02, 2014 4:19 pm    Post subject: Reply with quote

You are correct. You do not lose any data from the dataset when you destroy a snapshot. To this point I have deleted dozens of snapshots with no data loss.
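A minimal illustration (pool/dataset/snapshot names here are made up): destroying a snapshot only frees the blocks that are unique to that snapshot - everything written to the live dataset since the snapshot stays intact.

```shell
zfs snapshot tank/data@before      # snapshot a hypothetical dataset
# ... write and change files in tank/data ...
zfs destroy tank/data@before       # frees only the blocks unique to @before;
                                   # the current contents of tank/data are untouched
zfs list -t snapshot               # @before no longer appears in the list
```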
kernelOfTruth

PostPosted: Thu Mar 06, 2014 3:51 pm    Post subject: Reply with quote

drescherjm wrote:
You are correct. You do not lose any data from the dataset when you destroy a snapshot. To this point I have deleted dozens of snapshots with no data loss.


thanks :)

can't believe how fast it went: 15 GB in only a few seconds

awesome ^^
kernelOfTruth

PostPosted: Wed May 14, 2014 7:23 pm    Post subject: Reply with quote

anyone else having the issue that ZFS volumes or pools aren't getting unmounted during shutdown because they're busy ?

normally it kills and/or shuts down the processes accessing the partition, then unmounts all of the subvolumes while continuing the shutdown process


now it simply continues the shutdown process while NOT unmounting the subvolumes

this results in checksum errors during the next import because stuff is not consistent, like:

Quote:
zpool status
  pool: WD30EFRX
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: resilvered 122K in 0h0m with 0 errors on Thu May 13 12:50:38 2014
config:

        NAME                 STATE     READ WRITE CKSUM
        WD20EFRX             ONLINE       0     0     0
          mirror-0           ONLINE       0     0     0
            wd20efrx         ONLINE       0     0     0
            wd20efrx_mirror  ONLINE       0     0     4

errors: No known data errors

Yamakuzure

PostPosted: Fri May 16, 2014 10:37 am    Post subject: Reply with quote

Well, yes, my root volume and /usr are on zfs and they never get unmounted - but they are caught by the remount-ro, of course. So I never see such a message.

The other volumes get unmounted normally.
kernelOfTruth

PostPosted: Mon May 19, 2014 1:22 pm    Post subject: Reply with quote

Yamakuzure wrote:
Well, yes, my root volume and /usr are on zfs and they never get unmounted, but caught by remount-ro of course. So I never have such a message.

The other volumes get unmounted normally.


what puzzles me is that this has only been happening for a few weeks - it worked fine before that

not really sure what changed, can't pinpoint it to any cause :?



How practical would an L2ARC device be for home users ?


/home is currently around 1.5 TB of data with around 2 GB of very frequently accessed data

using one part of the SSD for the system and the other part for the L2ARC should work, right ? (I've read that this seems to work on solaris and/or the BSDs, but haven't seen much related to Gentoo)
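Attaching an SSD partition as L2ARC works the same way on ZFS on Linux (device and pool names below are placeholders - and note that on these versions the L2ARC contents do not survive a reboot, so the cache has to warm up again each time):

```shell
# Add a spare SSD partition as a read cache (L2ARC) to an existing pool.
zpool add WD20EARS cache /dev/sda4

# The device now shows up under its own 'cache' section in the pool layout.
zpool iostat -v WD20EARS

# Cache devices can be removed again at any time without risk to the data.
zpool remove WD20EARS /dev/sda4
```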


does using an L2ARC with ZRAM still provide significant improvements ? (https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/fJBKbCNtfqE)

the mentioned ARC changes got merged in the meantime, and I'm also using https://github.com/zfsonlinux/zfs/pull/2250



Thanks in advance :)
Aonoa
Guru

Joined: 23 May 2002
Posts: 584
Location: Oslo, Norway

PostPosted: Wed Aug 27, 2014 10:30 am    Post subject: Reply with quote

I have the problem that during shutdown/reboot, my /home seems to be unmounted before pulseaudio and possibly other services write files into my user's $HOME. During the next startup sequence this means that /home (being a separate dataset) fails to mount, because of the non-empty /home directory in the / dataset. Has anyone else encountered this?

zfs is in the boot runlevel.
_________________
Dive into Gentoo Linux and emerge into a new world


Last edited by Aonoa on Wed Aug 27, 2014 2:39 pm; edited 2 times in total
mrbassie
Apprentice

Joined: 31 May 2013
Posts: 223

PostPosted: Wed Aug 27, 2014 12:06 pm    Post subject: Reply with quote

Aonoa wrote:
I have the problem that during shutdown/reboot, it seems that my /home is unmounted before pulseaudio and possibly other services writes files in my users $HOME. During the next startup sequence, this means that /home fails to mount because of a non-empty /home directory. Has anyone else encountered this?

zfs is in the boot runlevel.


Is /home a separate dataset?
Aonoa

PostPosted: Wed Aug 27, 2014 2:37 pm    Post subject: Reply with quote

mrbassie wrote:
Aonoa wrote:
I have the problem that during shutdown/reboot, it seems that my /home is unmounted before pulseaudio and possibly other services writes files in my users $HOME. During the next startup sequence, this means that /home fails to mount because of a non-empty /home directory. Has anyone else encountered this?

zfs is in the boot runlevel.


Is /home a separate dataset?


Yes, it is. I amended my original post a little.
mrbassie

PostPosted: Wed Aug 27, 2014 3:58 pm    Post subject: Reply with quote

Is the whole system on zfs or just /home?

Is there anything about /home in /etc/fstab? If so comment it out.

I'm thinking you're right and / contains a /home which gets mounted first. Now, assuming all of that: at boot, with your /home dataset unmounted, if you have stuff in the other /home, I would back up its contents with tar using the --preserve-permissions argument and nuke the directory, then zfs-mount the dataset (if you can) and extract the tar backup into it (again with --preserve-permissions).

If it won't mount, you could destroy the dataset, create a new one and extract the tar.
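Those steps could look roughly like this (the dataset name 'rpool/home' and the tar path are assumptions; run while the /home dataset is unmounted, so that /home is the stray directory on the / dataset):

```shell
# Stash the stray files with ownership and permissions intact.
tar --preserve-permissions -cf /root/home-stray.tar -C / home

# Nuke the contents of the stray /home directory on the / dataset.
rm -rf /home/*

# Mount the real dataset and restore the stashed files into it.
zfs mount rpool/home
tar --preserve-permissions -xf /root/home-stray.tar -C /
```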



I had the same problem with one dataset or another during my install but I can't remember exactly how I fixed it.

I've actually got a problem right now where my swap zvol won't come up at boot ("no such file or directory") on my laptop, whereas it works fine on my work computer, and they are set up identically. I can mount it after logging in, because the swap zvol does exist, of course.

There's another oddity where at shutdown every zfs dataset fails to be unmounted by zfs but then localmount unmounts them all just fine. :?
Aonoa

PostPosted: Fri Aug 29, 2014 12:18 pm    Post subject: Reply with quote

mrbassie wrote:
Is the whole system on zfs or just /home?

Is there anything about /home in /etc/fstab? If so comment it out.


The whole system is on ZFS. There is also nothing in fstab about /home.

The problem is that $HOME gets a few files written to it on every reboot. I can empty $HOME and manually mount the /home dataset, but during a reboot some config files will have been placed there again before it mounts the /home dataset. Thus the dataset fails to mount.

I can fix it with a little modification to the ZFS init script, which deletes the config files before the mounting is actually done, but I am looking for a better solution.
Page 3 of 4