kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Tue Jan 14, 2014 11:14 am
Well, the hard drive seems to be fine; yet another scrub didn't reveal any issues.
I did a manual export of the pool before today's reboot and manually re-imported it afterwards:
Quote: | zpool status
pool: WD20EARS
state: ONLINE
scan: scrub repaired 0 in 8h31m with 0 errors on Tue Jan 14 06:08:43 2014
config:
NAME STATE READ WRITE CKSUM
WD20EARS ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
wd20ears ONLINE 0 0 0
hd203wi ONLINE 0 0 0
errors: No known data errors
|
I'll see if the error message occurs again after the next reboot.
Any ideas how to solve this? Is this a known issue? _________________ https://github.com/kernelOfTruth/ZFS-for-SystemRescueCD/tree/ZFS-for-SysRescCD-4.9.0
https://github.com/kernelOfTruth/pulseaudio-equalizer-ladspa
Hardcore Gentoo Linux user since 2004 |
Yamakuzure Advocate
Joined: 21 Jun 2006 Posts: 2283 Location: Adendorf, Germany
Posted: Tue Jan 14, 2014 11:38 am
It just says that there was something wrong in the past. The current state lists everything as 'ONLINE', so nothing is wrong at all; just use 'zpool clear <pool>' to get rid of the message.
Unless the state of a device or the pool itself changes away from 'ONLINE', everything is in fine working order - otherwise the device would be listed as 'FAULTED'.
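For example (pool and device names taken from the status output above; these are the standard commands, run as root):

```shell
# Reset the logged read/write/checksum error counters for the pool.
# This touches no data, it only clears the status display.
zpool clear WD20EARS

# Or clear a single device instead of the whole pool:
zpool clear WD20EARS wd20ears
```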
Edit: If one scrub, which checks the checksums of everything, can't find anything, there is no need for another scrub, really. This isn't chkdsk. _________________ Important German:- "Aha" - German reaction to pretend that you are really interested while giving no f*ck.
- "Tja" - German reaction to the apocalypse, nuclear war, an alien invasion or no bread in the house.
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Tue Jan 14, 2014 9:56 pm
Yamakuzure wrote: | It just said that there was something wrong in the past. But the current state listed everything as 'ONLINE', so nothing is wrong at all. Just use 'zpool clear <pool>' to get rid of the message.
|
ok
Yamakuzure wrote: |
Unless the state of a device or the pool itself changes away from 'ONLINE', everything is in fine working order. Otherwise the device would be listed as 'FAULTED'.
|
great, so the data's fine after all =)
Yamakuzure wrote: |
Edit: If one scrub, which checks the checksums of everything, can't find anything, there is no need for another scrub, really. This isn't chkdsk. |
haha, yeah - just wanted to be really sure, since *all* of my data (including very valuable stuff: my study materials, thesis and more) lives on that pool.
In that regard a little paranoia can never hurt.
Next practical problem - I get the following error when trying to delete a folder under /home/user:
http://askubuntu.com/questions/288513/cant-move-files-to-the-trash
This used to work before.
Just before that, a file was deleted without asking.
Any way to "fix" that?
/home is a symbolic link to /WD20EARS/home
Quote: | lrwxrwxrwx 1 root root 15 Jan 13 10:55 home -> /WD20EARS/home/ |
.local/share/Trash
and ~/.Trash
exist and seem to have the correct permissions (they worked before).
This is on XFCE + nemo / nautilus.
edit:
thunar says:
Quote: | Error trashing file: Unable to trash file: Invalid cross-device link |
Is there a way to rename the ZFS pool & directories
WD20EARS/home/ to just home/, keeping the existing subdirectories?
I know that mount points are inherited in ZFS and can be easily renamed,
but this is a whole lot different - isn't it?!
Not sure if this issue can be easily solved (if ZFS works that way; I don't want to wreak havoc on my data ^^, will research more ...)
edit2:
will try what is documented in the below link:
http://docs.oracle.com/cd/E19253-01/819-5461/gamnq/index.html
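If I understood that documentation correctly, the relevant commands boil down to something like this (dataset/pool names from this thread; an untested sketch, not yet tried here):

```shell
# Rename a dataset within the same pool - child datasets move with it:
zfs rename WD20EARS/home/user/bak WD20EARS/home/bak

# The pool name itself can only be changed by exporting and
# re-importing under a new name:
zpool export WD20EARS
zpool import WD20EARS tank

# Alternatively, just override where a dataset mounts,
# without renaming anything:
zfs set mountpoint=/home WD20EARS/home
```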
Yamakuzure Advocate
Joined: 21 Jun 2006 Posts: 2283 Location: Adendorf, Germany
Posted: Wed Jan 15, 2014 12:14 pm
Wow, so much data!
Well, it is a good idea to start with the zpool without a mountpoint.
For instance, I have 6 small truecrypt containers bound together as a RAIDZ zpool, and then three datasets for the backup folders. (The truecrypt containers are backed up in three dropbox accounts.)
To move the data from the previous backup folders I had to first mount the new ones somewhere else: Code: | zpool create -f -m none -o cachefile= -o ashift=9 -O compression=on bpool raidz /dev/mapper/truecrypt21 /dev/mapper/truecrypt22 /dev/mapper/truecrypt23 /dev/mapper/truecrypt24 /dev/mapper/truecrypt25 /dev/mapper/truecrypt26 -R /home/sed/backup_new | This means that the pool itself does not hold a mountpoint on its own ('-m none'), but all datasets mounted for this session should be mounted under '-R /home/sed/backup_new'.
There the datasets will then appear with their full paths. So once exported/imported, the datasets will mount in the correct locations, which weren't empty before I moved the data.
Code: | zfs create -o compression=lz4 -p bpool/backups
zfs create -o mountpoint=/home/sed/pryde/PrydeWorX/Dropbox bpool/backups/PrydeWorX
zfs create -o mountpoint=/home/sed/Backup bpool/backups/Main
zfs create -o mountpoint=/home/sed/pryde/Dropbox bpool/backups/Personal | The first is just the container, enabling compression that is then inherited by the other datasets. As the pool has no mountpoint, the container has none, too, and the following datasets need the mountpoint option.
Then I used 'rsync -avhHAX --progress' to copy the data to the new folders, deleted the old folders, exported the pool, re-imported without further options, and everything is there. Code: | ~ # zpool status bpool
pool: bpool
state: ONLINE
scan: scrub repaired 0 in 0h1m with 0 errors on Mon Jan 13 19:09:22 2014
config:
NAME STATE READ WRITE CKSUM
bpool ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
truecrypt21 ONLINE 0 0 0
truecrypt22 ONLINE 0 0 0
truecrypt23 ONLINE 0 0 0
truecrypt24 ONLINE 0 0 0
truecrypt25 ONLINE 0 0 0
truecrypt26 ONLINE 0 0 0
errors: No known data errors
~ # zfs list -o name,used,avail,compression,compressratio,mountpoint | head -n 6
NAME USED AVAIL COMPRESS RATIO MOUNTPOINT
bpool 961M 3,94G on 1.01x none
bpool/backups 961M 3,94G lz4 1.01x none
bpool/backups/Main 429M 3,94G lz4 1.03x /home/sed/Backup
bpool/backups/Personal 462M 3,94G lz4 1.00x /home/sed/pryde/Dropbox
bpool/backups/PrydeWorX 69,6M 3,94G lz4 1.00x /home/sed/pryde/PrydeWorX/Dropbox | I did the same for my personal data, but there five of the six truecrypt containers are used as RAID devices and one as a spare: Code: | ~ # zpool status ppool
pool: ppool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
ppool ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
truecrypt11 ONLINE 0 0 0
truecrypt12 ONLINE 0 0 0
truecrypt13 ONLINE 0 0 0
truecrypt14 ONLINE 0 0 0
truecrypt15 ONLINE 0 0 0
spares
truecrypt16 AVAIL
errors: No known data errors | Those are regularly backed up on an external hard drive. And in the case of one of the containers becoming corrupted, zfs will kick in the spare.
Further, if the total space of the containers isn't enough, I can replace them, one-by-one, with bigger containers on-the-fly.
That is far more convenient than the previous ext4 partitions I had...
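The one-by-one replacement mentioned above would look roughly like this (device names are placeholders; a sketch rather than the exact commands I used):

```shell
# Allow the pool to grow once all members have been replaced:
zpool set autoexpand=on bpool

# Swap one container for a bigger one; zfs resilvers onto the new device:
zpool replace bpool /dev/mapper/truecrypt21 /dev/mapper/truecrypt21_big

# Wait until the resilver finishes before replacing the next one:
zpool status bpool
```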
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Tue Jan 28, 2014 11:18 pm
Yeah, I just had to get rid of the initial issues ASAP.
Thanks for sharing how you do it - I wasn't aware that the ZFS way is such a convenient piece of cake.
This will make things easier during a migration or for future backups.
Meanwhile I'm still using nemo/nautilus on xfce4, but figured the issue out:
glib/gtk+/gvfs (or whatever library lies behind this) and the gtk-style trash implementation create the .Trash-1000 folder (or another number for other users) in the root directory of the particular volume.
And since I created several zfs volumes for convenience (future backup jobs, snapshots, separate compression algorithms, checksums, etc.), this hurts the workflow a little. Files don't get deleted that often though, so in case files or folders need to be deleted:
right-click on the folder -> "Open in Terminal" -> then work from there;
otherwise: firing up dolphin and deleting stuff from there (in case more tasks need to be done in one session).
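One possible workaround (my assumption, based on the freedesktop.org trash specification - not something verified here): pre-create the per-user trash directory at the top of each separately mounted zfs volume, so gvfs has somewhere to move files without crossing devices. Paths below are placeholders for the actual dataset mountpoints:

```shell
# 1000 = your numeric uid; repeat for each separately mounted dataset:
for top in /WD20EARS/home/user/bak /WD20EARS/home/user/bak/data; do
    mkdir -p "$top/.Trash-1000/files" "$top/.Trash-1000/info"
    chown -R user:user "$top/.Trash-1000"
    chmod 700 "$top/.Trash-1000"
done
```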
I had several hardlocks meanwhile (with xf86-video-ati, fglrx, and after the switch to nvidia-drivers when running games via wine); no data lost and no data errors reported so far (*fingers crossed*).
Whereas a btrfs backup volume - without even a hardlock, simply from being mounted twice - reported 2 errors in the root inode; not sure what to think of those.
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Sun Feb 02, 2014 7:33 pm
Not sure how Gentoo-specific this is, but:
currently I have ZFS on /home and on some backup drives/partitions.
To simplify, call the pools WD20EARS01, WD20EARS02, ...
Each of those pools has the same basic folder structure, e.g.
WD20EARS01/home/user/
WD20EARS02/home/user/
but they differ in the subfolders, in that on larger hard drives some of those folders are split up into zfs sub-volumes with different settings (e.g. copies=2, dedup=on),
e.g.
WD20EARS01/home/user/bak/data (bak and data are plain directories under user)
whereas for
WD20EARS02/home/user
WD20EARS02/home/user/bak/ <--- zfs sub-volume of "user"
WD20EARS02/home/user/bak/data <--- zfs sub-volume of "bak"
So in a nutshell: a nested sub-volume structure, to make working with specific folders easier.
Is it possible in this case to create a snapshot of WD20EARS01/home/user/, send it to WD20EARS02/home/user/, and assume that everything gets backed up and set up accordingly?
(referring to the specific copies=2, dedup=on, etc. settings on the backup zpool, which differ from the in-use /home drive)
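Something like the following is what I have in mind (untested sketch; '-R' sends the source properties along with the datasets, and whether the backup pool's own copies=2/dedup settings survive that is exactly my question):

```shell
# Recursive snapshot of the whole tree, including nested sub-volumes:
zfs snapshot -r WD20EARS01/home/user@backup1

# Replicate the tree to the backup pool; -R carries descendants and
# their properties, -u avoids mounting on the receive side, and -F
# rolls an existing target back to match the incoming stream:
zfs send -R WD20EARS01/home/user@backup1 | \
    zfs receive -u -F WD20EARS02/home/user
```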
That's probably an unusual and/or advanced configuration for /home users, but since ZFS offers these features I'd like to take full advantage - and hope that this time, too, I won't be disappointed (meaning: it'll work out).
Many thanks for reading, and many thanks in advance for your help.
Just saw that this is actually also a pretty convenient way to back up and set up the root partition in the future - so: related to Gentoo.
|
Back to top |
|
|
Yamakuzure Advocate
Joined: 21 Jun 2006 Posts: 2283 Location: Adendorf, Germany
Posted: Sun Feb 02, 2014 8:43 pm
As far as I understand the "snapshot" and "rollback" commands, snapshots are bound to their dataset. You cannot roll back a snapshot onto a different dataset.
Which makes sense, as a snapshot does not save anything unless a file is changed. So basically the snapshot has no knowledge of unchanged files; it cannot "copy" anything unchanged over to another dataset.
And why not use rsync for that? Or have I misunderstood you completely?
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Mon Feb 03, 2014 3:19 pm
Currently I'm using rsync, but it's taking ages - around 1 hour when syncing the data from one ZFS (/home) partition to the other (when still using ext4 & btrfs it only took 10-20 minutes, at worst 30 minutes - but I don't trust those anymore due to data loss and other stories).
Probably the snapshot concept isn't so clear to me yet.
WD20EARS/home/user/ <== is actually the /home folder
WD30EFRX/home/user/ is the backup drive (I shouldn't have called them WD20EARS01 and WD20EARS02 in the earlier example)
So on both - the /home folder/drive and the backup drive - the folder structure is the same, with the difference that on the backup drive (WD30EFRX) some folders are replicated locally (copies=2) and/or have deduplication enabled.
To speed things up, the snapshot feature shall be used to transfer the incremental changes since the last backup.
So the process would be as follows:
==> make a backup via rsync so that both drives *are* identical (or as close to identical as possible, since /home is in use while backing up)
*) /home
*) WD30EFRX
==> snapshot both
work as usual
==> create a snapshot of /home; ZFS knows the incremental changes/differences between its two snapshots
==> send these differences to WD30EFRX
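That workflow, as a rough sketch (untested; pool/dataset names as above, snapshot names made up):

```shell
# Initial full sync (instead of rsync), keeping the snapshot as a base:
zfs snapshot WD20EARS/home/user@base
zfs send WD20EARS/home/user@base | zfs receive -F WD30EFRX/home/user

# ... work as usual ...

# Next backup: only the delta between the two snapshots travels.
# -F rolls the backup side back to @base first so the streams line up:
zfs snapshot WD20EARS/home/user@2014-02-03
zfs send -i @base WD20EARS/home/user@2014-02-03 | \
    zfs receive -F WD30EFRX/home/user
```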
Or is there a different approach whereby the files on both pools - /home (WD20EARS) and /bak (WD30EFRX) - can be kept identical, transparently and independently of the specific settings of the zfs sub-volumes (copies=2 for some folders, deduplication, different compression)?
Trying to find the most time-efficient and elegant way to do it.
Thanks for reading
drescherjm Advocate
Joined: 05 Jun 2004 Posts: 2790 Location: Pittsburgh, PA, USA
Posted: Sat Feb 08, 2014 2:39 pm
Aonoa wrote: | genkernel-next-35 gives me "invalid root device" after trying to boot with a initramfs made with it.
I've actually not been able to use anything later than 24-r1 (which is now gone from portage, but I kept a binary package of it), and I need genkernel-next because of the udev support it has. |
I think I have this same problem on a server I installed yesterday at work with genkernel-next-50. I have /boot on ext2, and the root filesystem and everything else on a 9-drive raidz3 (the drives are ~6 years old and have a high failure rate). If I boot off a zfs-enabled sysrescue usb stick and export the pool, the next boot will work. However, if I reboot the zfs-root gentoo server itself, I get several zfs-related messages like "pool already imported", a device I/O error, and finally, I believe, the invalid root device. I am trying to debug this, possibly by creating my own initramfs (which I have not done in probably a decade).
Edit:
I tried to reproduce this on a VirtualBox guest using the same kernel version (gentoo-sources-3.12.10), same grub, same zfs version (0.6.2-r3), but a single drive instead of the raidz3, and the boot worked every time. Could the array come up early and degraded? I mean, there are 3 different SATA controllers used for the 9 drives: 4 ports + 3 ports + 2 ports. Originally I had omitted the extra 2-port controller on the motherboard, and that caused me to have a degraded boot (raid1 x9 each time it tried to boot).
Edit2:
Looks like I need to add some additional output to etc/initrd.d/00-zfs.sh inside the genkernel initramfs. I will have to work on that on Monday, since it will serve no purpose to reboot a machine that is 20 miles away with no ikvm, unlike my newest server that has IPMI.
Edit3:
Solved. Although a modified initramfs that exported the zpool and then re-imported it after the failed root mount worked, the real solution was to regenerate the zpool.cache - then the mount did not fail in the first place. _________________ John
My gentoo overlay
Instructions for overlay |
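For reference, regenerating the cache looked roughly like this (a sketch from memory; the cache file path is the zfsonlinux default, and the pool name is a placeholder):

```shell
# Point the pool at the standard cache file; zfs rewrites it immediately:
zpool set cachefile=/etc/zfs/zpool.cache rpool

# Rebuild the initramfs so it picks up the fresh cache:
genkernel --zfs initramfs
```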
drescherjm Advocate
Joined: 05 Jun 2004 Posts: 2790 Location: Pittsburgh, PA, USA
Posted: Thu Feb 20, 2014 5:37 pm
Quote: | there's an up-to-date SysRescueCD (4.0.0) available with ZFS support (0.6.2) from funtoo |
That is what I used (the premade one) on the 2 systems where I have / on zfs. In both cases I still have a separate /boot, mainly because I am unsure of the limitations of grub2 as far as zfs support goes. I mean, does it support booting from raidz3 with compression enabled? Will I run into trouble in the future when zfsonlinux gets out of sync with the grub2 zfs patch?
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Wed Feb 26, 2014 9:24 pm
Would be interesting to know, yeah.
Besides grub2, it also seems to work with grub legacy (0.97*)? I read about that in a German howto.
With most of the issues sorted out, I'd definitely start using ZFS on / - but so far I'm giving Btrfs a try on the system partition.
I stripped most of my data and only copied the most important things to the SSD in this laptop.
Scrub speed is pretty fast (190 MB/s) but not insanely fast (335 MB/s from hdparm; encryption speed almost reaches 300 MB/s) -
so 40-45% are "consumed" by ZFS?
Looking forward to performance optimizations (but there are currently more important things: the ARC revamp (just recently merged) and pull requests - e.g. efficiency with small files in the memory allocators).
Yamakuzure Advocate
Joined: 21 Jun 2006 Posts: 2283 Location: Adendorf, Germany
Posted: Wed Feb 26, 2014 9:40 pm
Yes, zfs seems to make things slower - at least on my laptop, and from a very subjective view. Hopefully things will speed up in later versions.
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Sun Mar 02, 2014 4:09 pm
Just asking here before I (unintentionally destroy and thus) lose any data:
creating a snapshot, no longer needing it, and then destroying that snapshot will NOT lead to any data loss for data written after the creation of the snapshot?
No, right? It will keep the new data and just remove the snapshot's "diff" information, and thus take some time until everything is re-integrated and merged into one dataset/pool state again.
Hope I got the concept of snapshots right ...
drescherjm Advocate
Joined: 05 Jun 2004 Posts: 2790 Location: Pittsburgh, PA, USA
Posted: Sun Mar 02, 2014 4:19 pm
You are correct. You do not lose any data from the dataset when you destroy a snapshot. To this point I have deleted dozens of snapshots with no data loss.
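A quick illustration of the lifecycle (dataset name is just an example): destroying a snapshot only frees the blocks that were exclusively referenced by it; the live dataset is untouched.

```shell
zfs snapshot WD20EARS/home/user@before-cleanup

# ... change, add, delete files in the live dataset ...

# See how much space the snapshot pins (USED column):
zfs list -t snapshot -o name,used,referenced

# Removing the snapshot frees that space; current data is unaffected:
zfs destroy WD20EARS/home/user@before-cleanup
```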
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Wed May 14, 2014 7:23 pm
Is anyone else having issues with ZFS volumes or pools not getting unmounted during shutdown because they're busy?
Normally it kills and/or shuts down the processes accessing the partition, then unmounts all of the subvolumes while continuing the shutdown process.
Now it simply continues the shutdown process while NOT unmounting the subvolumes.
This results in checksum errors during the next import, because things are not consistent, like:
Quote: | zpool status
pool: WD30EFRX
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://zfsonlinux.org/msg/ZFS-8000-9P
scan: resilvered 122K in 0h0m with 0 errors on Thu May 13 12:50:38 2014
config:
NAME STATE READ WRITE CKSUM
WD20EFRX ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
wd20efrx ONLINE 0 0 0
wd20efrx_mirror ONLINE 0 0 4
errors: No known data errors |
Yamakuzure Advocate
Joined: 21 Jun 2006 Posts: 2283 Location: Adendorf, Germany
Posted: Fri May 16, 2014 10:37 am
Well, yes - my root volume and /usr are on zfs, and they never get unmounted, but they are caught by remount-ro of course. So I never see such a message.
The other volumes get unmounted normally.
kernelOfTruth Watchman
Joined: 20 Dec 2005 Posts: 6111 Location: Vienna, Austria; Germany; hello world :)
Posted: Mon May 19, 2014 1:22 pm
Yamakuzure wrote: | Well, yes, my root volume and /usr are on zfs and they never get unmounted, but caught by remount-ro of course. So I never have such a message.
The other volumes get unmounted normally. |
What puzzles me is that this has only been happening for a few weeks and worked before that.
I'm not really sure what changed; I can't pinpoint it to any cause.
How practical would an L2ARC device be for home users?
/home is currently around 1.5 TB of data, with around 2 GB of very frequently accessed data.
Using one part of the SSD for the system and the other for the L2ARC should work, right? (I read that this seems to work with Solaris and/or the BSD operating systems, but haven't seen anything/much related to Gentoo.)
Does using an L2ARC with ZRAM still provide significant improvements? (https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/fJBKbCNtfqE)
The mentioned ARC changes got merged meanwhile, and I'm also using https://github.com/zfsonlinux/zfs/pull/2250
Thanks in advance
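For reference, the attach/remove itself would just be (device path is a placeholder; cache devices can be added and removed at any time without risk to the pool's data):

```shell
# Attach a spare SSD partition as L2ARC to the pool:
zpool add WD20EARS cache /dev/disk/by-id/ata-SSD_XYZ-part4

# Check per-device usage and activity:
zpool iostat -v WD20EARS

# It can be removed again just as easily:
zpool remove WD20EARS /dev/disk/by-id/ata-SSD_XYZ-part4
```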
Aonoa Guru
Joined: 23 May 2002 Posts: 589
Posted: Wed Aug 27, 2014 10:30 am
I have the problem that during shutdown/reboot, my /home seems to be unmounted before pulseaudio (and possibly other services) writes files into my user's $HOME. During the next startup sequence this means that /home fails to mount (being a separate dataset), because of a non-empty /home directory in the / dataset. Has anyone else encountered this?
zfs is in the boot runlevel.
Last edited by Aonoa on Wed Aug 27, 2014 2:39 pm; edited 2 times in total |
mrbassie l33t
Joined: 31 May 2013 Posts: 772 Location: over here
Posted: Wed Aug 27, 2014 12:06 pm
Aonoa wrote: | I have the problem that during shutdown/reboot, it seems that my /home is unmounted before pulseaudio and possibly other services writes files in my users $HOME. During the next startup sequence, this means that /home fails to mount because of a non-empty /home directory. Has anyone else encountered this?
zfs is in the boot runlevel. |
Is /home a separate dataset? |
Aonoa Guru
Joined: 23 May 2002 Posts: 589
Posted: Wed Aug 27, 2014 2:37 pm
mrbassie wrote: | Aonoa wrote: | I have the problem that during shutdown/reboot, it seems that my /home is unmounted before pulseaudio and possibly other services writes files in my users $HOME. During the next startup sequence, this means that /home fails to mount because of a non-empty /home directory. Has anyone else encountered this?
zfs is in the boot runlevel. |
Is /home a seperate dataset? |
Yes, it is. I amended my original post a little. |
mrbassie l33t
Joined: 31 May 2013 Posts: 772 Location: over here
Posted: Wed Aug 27, 2014 3:58 pm
Is the whole system on zfs or just /home?
Is there anything about /home in /etc/fstab? If so, comment it out.
I think you're right that / contains a /home directory which gets mounted over first. Assuming all of that: at boot, with your /home dataset still unmounted, if there is stuff in the other /home, I would back up its contents with tar using the --preserve-permissions argument, nuke the directory, zfs mount the dataset (if you can), and extract the tar backup into it (again with --preserve-permissions).
If it won't mount, you could destroy the dataset, create a new one and extract the tar.
I had the same problem with one dataset or another during my install, but I can't remember exactly how I fixed it.
I've actually got a problem right now where my swap zvol won't mount at boot ("no such file or directory") on my laptop, whereas it works fine on my work computer, and they are set up identically. I can mount it after logging in, because the swap zvol does exist, of course.
There's another oddity where at shutdown every zfs dataset fails to be unmounted by zfs, but then localmount unmounts them all just fine. |
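The tar round trip, spelled out (scratch paths for illustration; in practice the source would be the stray /home and the target the freshly mounted dataset - the zfs steps themselves are omitted here):

```shell
# Example round trip in scratch directories:
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "stray config" > "$SRC/.pulse-cookie"
chmod 600 "$SRC/.pulse-cookie"

# Archive keeping owners and modes, then restore elsewhere:
tar --preserve-permissions -cpf /tmp/home-stray.tar -C "$SRC" .
tar --preserve-permissions -xpf /tmp/home-stray.tar -C "$DST"

ls -la "$DST"   # owners and modes carried over
```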
Aonoa Guru
Joined: 23 May 2002 Posts: 589
Posted: Fri Aug 29, 2014 12:18 pm
mrbassie wrote: | Is the whole system on zfs or just /home?
Is there anything about /home in /etc/fstab? If so comment it out. |
The whole system is on ZFS. There is also nothing in fstab about /home.
The problem is that $HOME gets a few files on every reboot. I can empty $HOME and manually mount the /home dataset, but during a reboot some config files will have been placed there again before the /home dataset is mounted; thus the dataset fails to mount.
I can fix it with a little modification to the ZFS init script, which deletes the config files before the mounting is actually done, but I am looking for a better solution. |
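The modification I mean looks roughly like this (a sketch of the shape only, not the actual init script; it assumes /home is the dataset's mountpoint as described above):

```shell
# Run before "zfs mount -a": clear leftover files that services wrote
# into the empty /home stub on the root dataset after the real /home
# was unmounted at the previous shutdown.
if ! mountpoint -q /home; then
    rm -rf /home/* /home/.[!.]* 2>/dev/null
fi
zfs mount -a
```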