Gentoo Forums
cron wrapper for rsnapshot (rsync-based backup)
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Tue Nov 06, 2007 7:55 am    Post subject: cron wrapper for rsnapshot (rsync-based backup) Reply with quote

Just a little hack to make it convenient to use rsnapshot (an rsync-based backup tool) with cron (and with removable media). There's an ebuild for rsnapshot itself.

So there are three "layers" to this:
1 - rsnapshot (and its config file)
2 - my little script (which I call "snapwrap")
3 - cron ("script-lets" of a line or two that trigger it with various options)

Note that in my case (examples below), I'm using an internal Zip drive to back up /etc and selected configuration files, synchronizing it hourly (takes < 1 minute) and rotating it daily, weekly, and monthly (all of which consumes a grand total of 12 MiB of space, thanks to rsync). This might wear out the Zip disk, but it's not good for much else anyway.

You can use anything, though, and back up as much as you want. The possibilities are endless. This is a good way, for example, to create backups on another machine over ssh, and so on (refer to the rsync and rsnapshot documentation).
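For example, a backup point over ssh might look like this in rsnapshot.conf (hypothetical user, host, and destination directory; remember the fields must be tab-separated):
Code:
cmd_ssh                 /usr/bin/ssh
backup                  backupuser@otherhost:/etc/      otherhost/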

1. So, with that background, for those not familiar with rsnapshot, here's my rsnapshot config file first. (Sorry to add this to the length of the post, but it serves as an example for anyone setting up rsnapshot, and some of my script might not make sense without it to refer to.) Note the comment about doing a regular snapshot once before turning on "sync_first":
Code:
# /etc/rsnapshot.conf
# Rev. 30 July 2007

# Refer to:
# /etc/rsnapshot.conf.default
# man rsnapshot(1)

# Notes:
# This file requires tabs between elements.
# Directories require a trailing slash:
#   right: /home/
#   wrong: /home

config_version          1.2
snapshot_root           /mnt/ark/
no_create_root          1             # appropriate if using an entire device (e.g., removable media) or an entire partition

### Paths to External Programs:
cmd_cp                  /bin/cp
cmd_rm                  /bin/rm
cmd_rsync               /usr/bin/rsync
cmd_logger              /usr/bin/logger
cmd_du                  /bin/du
cmd_rsnapshot_diff      /usr/bin/rsnapshot-diff

### Backup Intervals (must be in ascending order):

interval                daily           7
interval                weekly          4
interval                monthly         3

### Global Options:
## (note: before enabling 'sync_first', run "/usr/bin/rsnapshot daily" once)
loglevel                3
logfile                 /var/log/rsnapshot
lockfile                /var/run/rsnapshot.pid
sync_first              1

### Rsync include/exclude (refer to rsync documentation for syntax)
exclude                 /etc/gconf/***

### Backup Points:
backup                  /etc/                                           localhost/
backup                  /var/lib/portage/world                          localhost/
backup                  /proc/config.gz                                 localhost/
backup                  /home/redacted/.esmtprc                       localhost/
#backup                 /home/redacted/money/grisbi/redacted.gsb    localhost/


2. Here's my little "snapwrap" script (please forgive the vim code-folding marks: "#{{{" ):
Code:
#! /bin/sh

# /root/bin/snapwrap.sh
# Rev. 2.1: 28 October 2007

# {{{ PURPOSE:

# This script is a wrapper for rsnapshot:
# - useful for making backups to removable media
# - callable from multiple cron scripts at varying intervals
# - enables toggling of rsnapshot's sync-only and rotate-only modes
# - mounts media only for the duration of the backup
# - optionally checks and repairs the filesystem

# The idea is that you invoke this at several intervals
# (e.g., hourly, daily, and weekly).  It only needs to be in
# a sync mode (mode 0 or 1) at the most frequent of those
# intervals; at the other intervals, it can be rotate-only.

# To prevent snapshots being written to incorrect or unauthorized
# media, the fstab entry for the rsnapshot device should use a
# "UUID=foo" entry or "LABEL=foo" entry instead of "/dev/bar".
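# For example (hypothetical label and filesystem), a matching
# fstab entry might be:
#   LABEL=ark   /mnt/ark   ext2   noauto,rw   0 0
# ("noauto" keeps mounting under this script's control)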

# If you want to use the -c option (check/repair filesystem), you
# need an fsck-compatible filesystem checker for whatever
# filesystem you've got the snapshots stored on (like e2fsck,
# reiserfsck, etc.), or modify the line that initiates the filesystem
# check toward the bottom of this script.  When you set up the
# filesystem, disable automatic filesystem checking based on
# number of mounts and time since last mount, so you can
# control filesystem maintenance in an orderly fashion from cron.
# It's not mandatory to use the -c option.
# }}}

# {{{ CONFIGURATION:

# Enter your value for "$STORE" (mount point of rsnapshot volume)
STORE='/mnt/ark'

# IF using removable media, enter the device which may be probed
# for the presence of the unmounted rsnapshot root filesystem.
# E.g.,"/dev/hdd" not "/dev/hdd4".  (Only for removable media.)
DEVICE='/dev/hdd'


# }}}

# {{{ USAGE FUNCTION:

f_Usage() {
cat <<EOF
=== USAGE ========================================================
|  This script accepts up to three single-letter options         |
|  preceded by a single hyphen (e.g., backup -wrc ).             |
|                                                                |
|  You may indicate one mode option (default = sync then rotate) |
|  -s    sync-only (may not be indicated along with r)           |
|  -r    rotate-only (may not be indicated along with s)         |
|                                                                |
|  You may indicate one rotation interval (default = -d)         |
|  (rotation options are ignored in sync-only mode)              |
|  -h    do an hourly rsnapshot (exclusive of other intervals)   |
|  -d    do a daily rsnapshot (exclusive of other intervals)     |
|  -w    do a weekly rsnapshot (exclusive of other intervals)    |
|  -m    do a monthly rsnapshot (exclusive of other intervals)   |
|                                                                |
|  You may also indicate the following option (default = unset)  |
|  -c    initiate fsck (if unmounted) following backup           |
==================================================================
EOF
exit $E_PARAMS
}

# }}}

# {{{ ERROR HANDLING:

E_PARAMS=3              # invalid or wrong number of arguments
E_MEDIA=4               # can't access the storage destination
E_RSNAP=5               # rsnapshot error
E_FSCK=6                # fsck of storage media failed

# }}}

# {{{ OPTION PROCESSING:

while getopts ":hdwmsrc" OPTION; do
        case $OPTION in
                h ) [ -z "$INTERVAL" ]  && INTERVAL="hourly"    || f_Usage;;
                d ) [ -z "$INTERVAL" ]  && INTERVAL="daily"     || f_Usage;;
                w ) [ -z "$INTERVAL" ]  && INTERVAL="weekly"    || f_Usage;;
                m ) [ -z "$INTERVAL" ]  && INTERVAL="monthly"   || f_Usage;;
                s ) [ -z "$MODE" ]      && MODE="sync-only"     || f_Usage;;
                r ) [ -z "$MODE" ]      && MODE="rotate-only"   || f_Usage;;
                c ) [ -z "$CHECK" ]     && CHECK="y"            || f_Usage;;
                * ) f_Usage;;
        esac
done
shift $(($OPTIND - 1))

[ -z "$INTERVAL" ] && INTERVAL="daily"  # if no interval option, default to daily
[ -z "$MODE" ] && MODE="both"           # if no mode option, default to both

# }}}

# {{{ MAIN LOGIC:

# if storage media not mounted, mount (remember to unmount when finished)
if ! grep "$STORE" /proc/mounts >/dev/null 2>&1; then
        mount "$STORE" 2>/dev/null
        # if unable to mount, probe $DEVICE, update blkid.tab, and try again
        if [ $? -ne 0 ]; then
                if [ -n "$DEVICE" ]; then
                        /sbin/blkid $DEVICE
                        /sbin/blkid 1>/dev/null
                        mount $STORE || exit $E_MEDIA
                fi
        fi
        UNMOUNT=1
fi

# call rsnapshot to perform indicated sync and/or rotate functions

if [ "$MODE" != "rotate-only" ]; then
        /usr/bin/rsnapshot sync || exit $E_RSNAP
fi

if [ "$MODE" != "sync-only" ]; then
        /usr/bin/rsnapshot $INTERVAL || exit $E_RSNAP
fi

# return storage media to its original state (and check it if unmounted)
if [ "$UNMOUNT" ]; then
        umount "$STORE"
        if [ "$CHECK" ]; then
                fsck -fp "$STORE" >/dev/null 2>&1 || exit $E_FSCK
        fi
fi

exit 0

# }}}


3. And here are the little cron "scriptlets" (note: "loggit" is a little code block that I source at the end of all cron jobs - it just logs each job's success or failure to the cron log):

Hourly:
Code:
#! /bin/sh
#/etc/cron.hourly/znapwrap

# Purpose: keep today's snapshot no older than one hour (to be
# captured by the daily cron script as the daily backup).

# call my rsnapshot wrapper script with the following options:
#+      mode:                   sync-only

/root/bin/snapwrap.sh -s
. /root/bin/loggit


Daily:
Code:
#! /bin/sh
#/etc/cron.daily/znapwrap

# Purpose: daily backup rotation
# (hourly sync -> daily.0 -> daily.1 -> ... daily.6)

# call my rsnapshot wrapper script with the following options:
#+      mode:                   rotate-only
#+      interval:               daily

/root/bin/snapwrap.sh -rd
. /root/bin/loggit


Weekly:
Code:
#! /bin/sh

# Purpose: weekly backup rotation
# (daily.6 -> weekly.0 -> weekly.1 ... weekly.3)

# call my rsnapshot wrapper script with the following options:
#+      mode:                   rotate-only
#+      interval:               weekly
#+      check filesystem:       true

/root/bin/snapwrap.sh -rwc
. /root/bin/loggit


Monthly:
Code:
#! /bin/sh

# Purpose: monthly backup rotation
# (weekly.3 -> monthly.0 -> monthly.1 ... monthly.3)

# call my rsnapshot wrapper script with the following options:
#+      mode:                   rotate-only
#+      interval:               monthly

/root/bin/snapwrap.sh -rm
. /root/bin/loggit


In case you're curious - here's the "loggit" code:
Code:
# /root/bin/loggit
# Rev. 20 July 2007

# My standard code snippet to include at the bottom of cron scripts.
# The $? variable expands to the exit status of the previous command.
# To include, use ". /root/bin/loggit" or "source /root/bin/loggit".

EXIT_STATUS=$?

case $EXIT_STATUS in
        0) PRIORITY="notice" && MESSAGE="succeeded.";;
        *) PRIORITY="err" && MESSAGE="failed with exit status ${EXIT_STATUS}";;
esac

/usr/bin/logger -p cron.$PRIORITY "$0 $MESSAGE"


Last edited by Bones McCracker on Thu Dec 06, 2007 6:39 pm; edited 5 times in total
beatryder
Veteran


Joined: 08 Apr 2005
Posts: 1138

PostPosted: Wed Nov 07, 2007 5:42 pm    Post subject: Reply with quote

Thank you for these well written and well documented scripts. I am very impressed with your work.

I will be implementing your solutions into my server when I get home from work today. I may even implement them on our production server. (After a certain amount of testing)
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Thu Nov 08, 2007 2:35 am    Post subject: Reply with quote

Wow, thanks. :)
Let me know what I didn't do right too.
bludger
Guru


Joined: 09 Apr 2003
Posts: 389

PostPosted: Tue Nov 13, 2007 4:54 pm    Post subject: Reply with quote

How does rsnapshot compare with rdiff-backup? Both seem to offer incremental backups.
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Tue Nov 13, 2007 6:00 pm    Post subject: Reply with quote

I don't believe rdiff-backup makes use of hard links.
rsync's versioning leverages hard links (and rsnapshot is based on rsync).

See, here's the magic of using hard links as part of a backup strategy: each backup is a true full backup but they're created as quickly as an incremental backup.

When a new backup is made, if a given file exists already on the backup disk (and hasn't been changed) then a hard link is created instead of a new copy of the file. Because each backup version is a full copy, restoring is as simple as copying that version back to the target. No need to dick around with incrementals, diffs, etc.

So it's kind of the best of both worlds. The one drawback is that you can't physically separate the versions (they must remain within that backup device). If you need to (e.g. must keep a version off-site), you can simply copy off that version (as in scp -rp) to a separate location.

In reality, this method is faster than most incremental backups because there is no need to manage contextual data for the incremental backup (i.e., where to "restore" those particular files to).
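You can see the mechanics for yourself in a scratch directory (the filenames below are made up, and `stat -c` is GNU coreutils):

```shell
# Sketch: why each rsnapshot "full" backup is cheap (hypothetical files).
set -e
dir=$(mktemp -d)
cd "$dir"
echo payload > daily.1_file              # the "old" backup copy
ln daily.1_file daily.0_file             # the "new" backup is just a hard link
# Both names point at the same inode, so no extra space is used:
test "$(stat -c %i daily.0_file)" = "$(stat -c %i daily.1_file)" && echo same-inode
stat -c %h daily.0_file                  # link count is now 2
# Rotating the old version out does not touch the newer one:
rm daily.1_file
cat daily.0_file                         # still prints: payload
cd / && rm -rf "$dir"
```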
IQgryn
l33t


Joined: 05 Sep 2005
Posts: 764
Location: WI, USA

PostPosted: Mon Nov 26, 2007 7:33 pm    Post subject: Reply with quote

Seeing your scripts made me want to clean up mine and show them, too. I did notice one minor error in yours, though: If rsync encounters non-fatal errors during a sync, rsnapshot will return with exit code 2, which will cause your code to exit. I have a workaround for that in my script (see rsnapshot_sync in run-rsnapshot.sh). My scripts are designed to work with sync_first enabled, but they could probably be adapted to work without it.

Anyway, here they are:

This one is optional, but greatly simplifies my crontab. You'll have to modify it unless you have the same interval setup as me.
rsnapshot.sh:
#!/bin/bash

################################################################################
# Calls run-rsnapshot.sh with the appropriate intervals.  Currently handles    #
# hourly, daily, monthly, and yearly, but can easily be modified to suit       #
# almost anyone's interval setup.  Should be called every lowest interval by   #
# cron, like so:                                                               #
#     min     hour    day/mo  month   day/wk  command                          #
#     0       *       *       *       *       /etc/rsnapshot.sh                #
################################################################################

INTERVALS="hourly"

if [[ `date +%k` -eq 0 ]]
then
   INTERVALS="$INTERVALS daily"
   if [[ `date +%d` -eq 1 ]]
   then
      INTERVALS="$INTERVALS monthly"
   fi
   if [[ `date +%j` -eq 1 ]]
   then
      INTERVALS="$INTERVALS yearly"
   fi
fi

"`dirname "$0"`/run-rsnapshot.sh" $INTERVALS


This is the main script. You can just call this from cron (mostly) like you would BoneKracker's scripts (example at the bottom).
run-rsnapshot.sh:
#!/bin/bash

################################################################################
# Runs rsnapshot with the specified intervals.  If called without any          #
# arguments, runs rsnapshot sync.  Runs rsnapshot sync just before each run of #
# the lowest interval.                                                         #
################################################################################

# Change this to the location of your rsnapshot.conf
RSNAPSHOT_CONF="/etc/rsnapshot.conf"

# This is normally run from cron, so any output gets logged
# Change to log to a logfile if you want
function log_error {
   echo "$1 at `date`" 1>&2
}

function interval_failed {
   log_error "Rsnapshot $1 failed"
}

# Add any arguments to rsnapshot (like -v) here
# Also, add or change to an echo statement for testing/debugging
function run_rsnapshot {
   rsnapshot $1
}

function rsnapshot_sync {
   run_rsnapshot sync
   if [[ $? -ne 0 && $? -ne 2 ]]
   then
      interval_failed "sync"
      exit 1
   fi
}

# If run with no arguments, just run rsnapshot sync and exit
if [[ -z "$1" ]]
then
   rsnapshot_sync
   exit 0
fi

# Get the list of valid intervals from rsnapshot.conf
VALID_INTERVALS="`grep '^\(interval\|retain\)' "$RSNAPSHOT_CONF" |
sed -e 's/\(interval\|retain\)\t\+//' -e 's/\t.*//'`"

# Verify all the intervals passed are valid
for i in $*
do
   for j in $VALID_INTERVALS
   do
      if [[ $i == $j ]]
      then
         continue 2
      fi
   done
   log_error "Invalid interval $i"
   exit 1
done

LOWEST_INTERVAL="`echo "$VALID_INTERVALS" | head -n 1`"

while [[ ! -z "$1" ]]
do
   # Run rsnapshot sync before the lowest interval gets run
   if [[ "$1" == "$LOWEST_INTERVAL" ]]
   then
      rsnapshot_sync
   fi
   run_rsnapshot "$1"
   if [[ $? -ne 0 ]]
   then
      interval_failed "$1"
      exit 1
   fi
   shift
done


This is my current crontab (yes, I know that I'm actually running "hourly" every two hours--I didn't want to change the name everywhere).
crontab (mine):
# min   hour    day/mo  month   day/wk  command
0       */2     *       *       *       /etc/rsnapshot.sh


This is how your crontab would look if you had my setup but didn't want to use rsnapshot.sh
crontab (example):
# min   hour    day/mo  month   day/wk  command
0       2-23/2  *       *       *       /etc/run-rsnapshot.sh hourly
0       0       2-31    *       *       /etc/run-rsnapshot.sh hourly daily
0       0       1       2-11    *       /etc/run-rsnapshot.sh hourly daily monthly
0       0       1       1       *       /etc/run-rsnapshot.sh hourly daily monthly yearly


This is how your crontab would look if you run the largest intervals first and didn't want to use rsnapshot.sh
crontab (example 2):
# min   hour    day/mo  month   day/wk  command
0       2-23/2  *       *       *       /etc/run-rsnapshot.sh hourly
0       0       2-31    *       *       /etc/run-rsnapshot.sh daily hourly
0       0       1       2-11    *       /etc/run-rsnapshot.sh monthly daily hourly
0       0       1       1       *       /etc/run-rsnapshot.sh yearly monthly daily hourly


My rsnapshot.conf (most comments removed for conciseness)
/etc/rsnapshot.conf:
config_version   1.2
snapshot_root   /mnt/backup/backups/
no_create_root   1
cmd_cp      /bin/cp
cmd_rm      /bin/rm
cmd_rsync   /usr/bin/rsync
cmd_ssh      /usr/bin/ssh
cmd_logger   /usr/bin/logger

# Backup intervals
interval   hourly   25
interval   daily   32
interval   monthly   25
interval   yearly   3

verbose      2
loglevel   3
logfile   /var/log/rsnapshot
lockfile   /var/run/rsnapshot.pid

# Rsync args.  -H keeps hardlinks in the original together in the backups
rsync_short_args   -aH
rsync_long_args   --delete --numeric-ids --relative --delete-excluded

# More efficient transfer
ssh_args   -T -o BatchMode=yes

# All my excludes go here for ease of editing
exclude_file   /etc/rsnapshot.excludes

link_dest   1
sync_first   1
use_lazy_deletes   1

# Backup points
backup   <user>@<machine>:/   <machine>/      +rsync_long_args=--bwlimit=512
backup   /            localhost/


Note that I have patched my version of rsnapshot to use link_dest whether or not the .sync directory already exists. Otherwise, all backup points but the first would not link properly unless I disabled link_dest. The maintainer is considering this patch for future versions of rsnapshot, as well.

Finally, here's my excludes file, for those of you who are curious (comments and blank lines added for clarity).
/etc/rsnapshot.excludes:
/proc/**

# These two files are required even with udev; save them
+ /dev/console
+ /dev/null

/dev/**
/sys/**

# Don't back up removable media (unneeded if you use one_fs)
/mnt/**
/media/**

# Save my custom ebuilds
+ /usr/portage/local
+ /usr/portage/local/**

# The rest of the portage tree can be easily resynced
/usr/portage/**

/tmp/**
/var/tmp/**
.xauth*

# These directories are actually required for portage to work properly on restore, even though they're called "cache"
+ /usr/lib/portage/pym/cache
+ /usr/lib/portage/pym/cache/**

cache/**
temp/**
.thumbnails/
Cache/**
/usr/src/linux*/**
.ccache/**

# K9Copy's default working directory
.k9copy/dvd/**

# Allows me to exclude any file or folder just by sticking NOBACKUP at the beginning of the filename
NOBACKUP**


Sorry for the megapost. Let me know if you see anything I could be doing better, or have any questions.
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Tue Nov 27, 2007 3:11 am    Post subject: Reply with quote

Brilliant! Just looking it over quickly, there are several great ideas there I like. Thanks for posting this and taking the time to explain it.
  • I didn't know about the non-fatal exit with "2". I hadn't encountered one. I'll have to incorporate that.
  • I like how you have the wrapper script checking the rsnapshot.conf file for valid intervals. Mine would simply cough up a "rsnapshot error" if the user called it with an invalid interval. That's probably worth incorporating as well.
  • I'm happy with dropping a one-line scriptlet (command + options) into cron.hourly/daily/weekly, etc. to trigger the interval execution (rather than your very nice interval-detection mechanism and crontab entry at the interval of highest frequency). I suppose each of those approaches is good for different circumstances. As I can see, you are doing more of a full backup where I'm only backing up a few files, and your technique has the added benefit of isolating the job from the rest of any "daily/weekly, etc." batches, which would help to avoid job overlap.
Mainly though, I see that you've obviously got a much better grasp of rsync itself and have implemented a number of its features I don't yet understand (I'm an rsync novice, having never done anything with it but this). I need to re-read the documentation to see if I might better understand it with what I've learned since I first started dabbling with this. But, if you have the patience, I'd like to pick your brain on a couple of those points:

1. The link-dest idea is interesting. Could you please elaborate on that? I'm not smart enough to follow the rsync documentation on what link-dest does. Use small words and try to be gentle with terminology for my simple brain (e.g.: "original" - does that mean the "original" file being backed up, the "original" sync copy, the "original" backup interval copy, etc.). :)

2. Also, I could use a hand with this:
Code:
# Rsync args.  -H keeps hardlinks in the original together in the backups
rsync_short_args   -aH
rsync_long_args   --delete --numeric-ids --relative --delete-excluded
Does this mean:
a) if there's a hardlink in the "original" group of files being backed up, copy the hardlink itself and do not create an actual copy of the file
b) if there's one of the files being backed up is a hardlink to another file being backed up, don't create two full copies of the file; create one copy and a new hardlink to it
c) both a and b
d) something else (please elaborate)

Nothing jumps out at me as things that could be improved, although I've yet to read through it in depth.
IQgryn
l33t


Joined: 05 Sep 2005
Posts: 764
Location: WI, USA

PostPosted: Tue Nov 27, 2007 6:15 am    Post subject: Reply with quote

Thanks for the reply. The cron bit is really a matter of personal preference. I like having the jobs execute one after the other because sometimes my systems are in different states, which greatly affects the sync time vs. when they're on the same router.

BoneKracker wrote:
But, if you have the patience, I'd like to pick your brain on a couple of those points:

1. The link-dest idea is interesting. Could you please elaborate on that? I'm not smart enough to follow the rsync documentation on what link-dest does. Use small words and try to be gentle with terminology for my simple brain (e.g.: "original" - does that mean the "original" file being backed up, the "original" sync copy, the "original" backup interval copy, etc.). :)


I like to think I'm patient. :P
Link_dest is an option that newer versions of rsync provide (I think all the versions in portage have it, but don't quote me on that). It basically allows you to say "copy a to b, but if anything in a matches c, create a hard link to c in b instead." So, given this:
Code:
a+     c+
 |-d    |-d
 |-e    |-f
you'd end up with (link_dest)
Code:
a+     c+     b+
 |-d    |-d    |-d (link to c/d)
 |-e    |-f    |-e (copy of a/e)
instead of (no link_dest)
Code:
a+     c+     b+
 |-d    |-d    |-d (copy of a/d)
 |-e    |-f    |-e (copy of a/e)

This is assuming that, in my diagram, a filename matching means the contents match, too. If a/d and c/d had different contents, link_dest would have no effect in this example.

BoneKracker wrote:
2. Also, I could use a hand with this:
Code:
# Rsync args.  -H keeps hardlinks in the original together in the backups
rsync_short_args   -aH
rsync_long_args   --delete --numeric-ids --relative --delete-excluded
Does this mean:
a) if there's a hardlink in the "original" group of files being backed up, copy the hardlink itself and do not create an actual copy of the file
b) if there's one of the files being backed up is a hardlink to another file being backed up, don't create two full copies of the file; create one copy and a new hardlink to it
c) both a and b
d) something else (please elaborate)

I'm fairly certain it means b (I just discovered the option myself). You may want to test it before you use it. Also, it's supposed to increase rsync's memory usage on the machine that it's copying from, but I haven't noticed anything too large (my base backup is 37 gigs, with the largest partition at 26 gigs).
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Tue Nov 27, 2007 8:38 am    Post subject: Reply with quote

IQgryn wrote:
Link_dest is an option that newer versions of rsync provide (I think all the versions in portage have it, but don't quote me on that). It basically allows you to say "copy a to b, but if anything in a matches c, create a hard link to c in b instead." So, given this:
Code:
a+     c+
 |-d    |-d
 |-e    |-f
you'd end up with (link_dest)
Code:
a+     c+     b+
 |-d    |-d    |-d (link to c/d)
 |-e    |-f    |-e (copy of a/e)
instead of (no link_dest)
Code:
a+     c+     b+
 |-d    |-d    |-d (copy of a/d)
 |-e    |-f    |-e (copy of a/e)

This is assuming that, in my diagram, a filename matching means the contents match, too. If a/d and c/d had different contents, link_dest would have no effect in this example.

Thanks. Diagram helped. (I follow: file matching is determined by inode hashes or something.) Overall, what you're describing is kind of what I gathered from the man page blurb on link_dest. I'm thinking the files to be backed up are analogous to "a"; the snapshot is "b"; and the backup set as a whole is "c". But I am having a hard time relating that to rsnapshot (what I mean is, I don't see how that's different from what I thought rsnapshot was already doing).

I probably have a misconception that needs to be corrected before I'll understand. Here's my (probably incorrect) understanding of how rsnapshot works (again, I haven't looked at the docs in ages). This is without any additional rsync flags (i.e. without link_dest):
  • The highest-frequency interval-named backup is created (in my case "daily"). When a "snapshot" is performed (for my handful of files, I do it hourly against the daily backup), files to be backed up are compared to the backup set inodes: if the file exists (inode hashes match), a hardlink is placed in snapshot; if not, the changed (or new) file is copied from the "files-to-be-backed-up" to the snapshot.
  • When "daily" rotation is initiated, each period is moved one step (in my setup, this would be: daily.5 -> daily.6; daily.4 -> daily.5 .... daily.0 -> daily.1; and snapshot -> daily.0).
    (So now we have some hard links in daily.0 and daily.1 pointing to the same content-bearing inodes.)
  • Snapshot is again performed. "Files-to-be-backed up" are compared to new daily.0 (which now has some hard links to the same content pointed to by some hard links in daily.1). Again, if inode hashes match, the actual "file-to-be-backed up" is not copied; instead, a hard link is created.
  • When next-highest-frequency rotation occurs, the same thing as daily happens (weekly.3 -> weekly.4; weekly.2 -> weekly.3; ... weekly.0 -> weekly.1; and daily.6 -> weekly.0).

So, in my model, the backup set exists on a single filesystem, so there is only one set of content-bearing inodes, some with multiple hard links. When the lowest-frequency rotation occurs (in my case monthly; in yours, annually), hard links are deleted. If there is only the one hard link, the content-bearing inode is deleted; if hard links remain elsewhere in the backup set, the content-bearing inode is naturally retained (it just has one fewer hard link).

Now, as I said, that's my understanding how it works -- but I don't recall how much of that's based on fact and how much I made up to fill in gaps.

Ok. Based on that, could you first correct my flaws and then (the real question here) tell me how link_dest alters the normal functioning?

IQgryn wrote:
I'm fairly certain it means b (I just discovered the option myself). You may want to test it before you use it. Also, it's supposed to increase rsync's memory usage on the machine that it's copying from, but I haven't noticed anything too large (my base backup is 37 gigs, with the largest partition at 26 gigs).
If that's the case, this is definitely something that should be part of a backup that's intended for restoration to its original context. I've restored from backups before, but given that I'm just doing /etc/ and a few others, I probably haven't actually had to back up many hard links. I guess this could be fairly critical if one were backing up a CVS repository or a whole system. Thanks for mentioning it.

You know, I find it odd that there's not much discussion of backups in the forums. Maybe a lot of people just don't do it.
IQgryn
l33t


Joined: 05 Sep 2005
Posts: 764
Location: WI, USA

PostPosted: Tue Nov 27, 2007 9:31 am    Post subject: Reply with quote

BoneKracker wrote:
Thanks. Diagram helped. (I follow: file matching is determined by inode hashes or something.) Overall, what you're describing is kind of what I gathered from the man page blurb on link_dest. I'm thinking the files to be backed up are analogous to "a"; the snapshot is "b"; and the backup set as a whole is "c". But I am having a hard time relating that to rsnapshot (what I mean is, I don't see how that's different from what I thought rsnapshot was already doing).

In my example, a is the folder being backed up, b is the new .sync directory, and c is the <lowest interval>.0 directory. Rsnapshot will end up with the same result either way, but using link_dest allows rsync to make the hardlinks, instead of hardlinking every file, then unlinking and replacing those that have changed. It's more efficient, and possibly less error-prone, to use link_dest if you can.

BoneKracker wrote:
I probably have a misconception that needs to be corrected before I'll understand. Here's my (probably incorrect) understanding of how rsnapshot works (again, I haven't looked at the docs in ages). This is without any additional rsync flags (i.e. without link_dest):
  • The highest-frequency interval-named backup is created (in my case "daily"). When a "snapshot" is performed (for my handful of files, I do it hourly against the daily backup), files to be backed up are compared to its most recent period (daily.0): if the file exists in daily.0, a hardlink is placed in snapshot to the inode in daily.0; if not, the changed (or new) file is copied from the "files-to-be-backed-up" to the snapshot.
  • When "daily" rotation is initated, each period is moved one step (in my setup, this would be: daily.5 -> daily.6; daily.4 -> daily.5 .... daily.0 -> daily.1; and snapshot -> daily.0).
    (So now we have some hard links in daily.0 and daily.1 pointing to the same content-bearing inodes.)
  • Snapshot is again performed. "Files-to-be-backed up" are compared to new daily.0 (in which shares some hard links in common with daily.1). Again, if inode hashes match, the actual "file-to-be-backed up" is not copied; instead, a hard link is created.
  • When next-highest-frequency rotation occurs, the same thing as daily happens (weekly.3 -> weekly.4; weekly.2 -> weekly.3; ... weekly.0 -> weekly.1; and daily.6 -> weekly.0).

So, in my model, the backup set exists on a single filesystem, so there is only one set of content-bearing inodes. Some of these are "normal files" (i.e., there is only one hard link to them), while others have additional hard links to them. When the lowest-frequency rotation occurs (in my case monthly; in yours, annually), hard links are deleted. If there was only the one hard link, the content-bearing inode is deleted; if hard links remain elsewhere in the backup set, the content-bearing inode is naturally retained (it just has one fewer hard link).

Now, as I said, that's my understanding of how it works -- but I don't recall how much of that's based on fact and how much I made up to fill in gaps.

Ok. Based on that, could you first correct my flaws and then (the real question here) tell me how link_dest alters the normal functioning?

It sounds like you have it mostly right. I'm pretty sure inode hashes aren't what is used to determine if a file has changed, but the rest is pretty close. Since you have sync_first set to 1, where you say "snapshot", you mean sync, and the folder will be <backup root>/.sync, but that's all just semantics. The only other thing I see is that usually things are deleted from every level--the only hourly snapshots that get rotated into daily are the ones at midnight, the only daily snapshots that get rotated into monthly are the ones on the first of a month, etc. The others get deleted either before the rotation occurs (without use_lazy_deletes) or after the rotation is finished (with use_lazy_deletes, which I recommend--the folder gets renamed to <interval>.delete during the rotation, and is purged once everything else is done). FYI, the hardlink deletion handling is all done by the filesystem, not rsnapshot--a file's data is automatically freed once the last hard link to it is deleted.
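To make that filesystem behavior concrete, here's a quick shell sketch (run against a throwaway temp directory; the file names are made up) showing that removing one name only drops the link count, and the data survives until the last hard link goes:

```shell
# Throwaway demo in a temp directory; names are hypothetical.
dir=$(mktemp -d)
echo "snapshot data" > "$dir/daily.1-file"
ln "$dir/daily.1-file" "$dir/daily.0-file"   # second hard link to the same inode

stat -c %h "$dir/daily.0-file"   # prints 2: two names point at the inode

rm "$dir/daily.1-file"           # "rotate out" the older name
stat -c %h "$dir/daily.0-file"   # prints 1: the data is still fully intact
cat "$dir/daily.0-file"          # prints: snapshot data
rm -r "$dir"
```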

As far as what link_dest changes, it's really just whether you hardlink everything, then sync (which unlinks and replaces the changed files), or whether you let rsync hardlink the files that haven't changed and copy the new versions of the ones that have, all in one go. It may also handle device nodes and other special files better. Ideally, the only difference is speed, but I had far fewer problems with oddball files and canceled syncs once I started using link_dest.
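A rough sketch of the two approaches, with hypothetical paths; only the cp/stat lines are actually executed here, and the rsync invocations are left as comments since the exact flags come from your rsnapshot config:

```shell
# Hypothetical layout: src/ is the data, daily.0/ the previous snapshot.
root=$(mktemp -d)
mkdir "$root/src"
echo v1 > "$root/src/file"
cp -a "$root/src" "$root/daily.0"    # pretend this is the existing snapshot

# Without link_dest: hardlink-copy the whole snapshot first...
cp -al "$root/daily.0" "$root/.sync"
stat -c %h "$root/.sync/file"        # prints 2: shares daily.0's inode
# ...then rsync unlinks and replaces only what changed:
#   rsync -a --delete src/ .sync/

# With link_dest: one rsync pass hardlinks unchanged files and
# copies changed ones in a single go:
#   rsync -a --delete --link-dest=../daily.0 src/ .sync/
rm -r "$root"
```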

BoneKracker wrote:
IQgryn wrote:
I'm fairly certain it means b (I just discovered the option myself). You may want to test it before you use it. Also, it's supposed to increase rsync's memory usage on the machine that it's copying from, but I haven't noticed anything too large (my base backup is 37 gigs, with the largest partition at 26 gigs).
If that's the case, this is definitely something that should be part of a backup that's intended for restoration to its original context. I've restored from backups before, but given that I'm just doing /etc/ and a few others, I probably haven't actually had to back up many hard links. I guess this could be fairly critical if one were backing up a cvs repository or their whole system. Thanks for mentioning it.

You know, I find it odd that there's not much discussion of backups in the forums. Maybe a lot of people just don't do it.

I agree about -H. I was surprised to find that it wasn't the default. I think a lot of people make haphazard backups when they think about it; I know I used to, before I lost a (not really backed-up) drive one summer. One thing to be sure you test is a restore. I used VirtualBox to test a restore to a bare drive, and found the portage cache directories (mentioned above in my excludes file) that had to be backed up, even though they were supposed to be caches (which should always be deletable without ill effects). I've also chosen to exclude the portage tree and the kernel sources from the backup, so I have to re-sync the tree and then re-install gentoo-sources once I restore--but the machine is working before that. It's a really nice feeling to know that you can restore your machine from scratch in a few hours if need be.

Let me know if I missed any of your questions, or if I'm being cryptic again.
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Tue Nov 27, 2007 10:48 am    Post subject: Reply with quote

IQgryn wrote:
The only other thing I see is that usually things are deleted from every level--the only hourly snapshots that get rotated into daily are the ones at midnight, the only daily snapshots that get rotated into monthly are the ones on the first of a month, etc.
Actually, I do a "daily" snapshot, but I do it each hour. (In other words, I have no "hourly" interval in rsnapshot.conf. My hourly cron job runs a sync every hour, but it's at the "daily" interval.) It's a speed trade-off that probably only makes sense at the hourly level: on the one hand, each sync takes a bit longer because files that changed are getting copied again during each sync; on the other hand, this avoids performing 24 hourly rotations, which also consume time. I'm not sure which takes less time or causes less filesystem fragmentation. Should I be doing hourly snapshots too instead of my hourly sync of the daily snapshot?

IQgryn wrote:
The others get deleted either before the rotation occurs (without use_lazy_deletes) or after the rotation is finished (with use_lazy_backups, which I recommend--the folder gets renamed to <interval>.delete during the rotation, and is purged once everything else is done). FYI, the hardlink deletion handling is all done by the filesystem, not rsnapshot--hardlinks automatically get removed once the last reference to them is deleted.
Right. I was editing as you were responding, apparently. I had lapsed in my thinking and then said, "wait a minute, there's no 'actual file' being moved around, what am I talking about -- these are links". I do understand that all "files" as we see them are simply hard links to an inode. A "hard link" as we refer to it commonly is simply an additional link. Writing that stuff out was a good refresher. If you re-read my post it'll probably sound a bit more sane. :) I'll look into use_lazy_deletes again. I experimented with it once but never got back to it.

IQgryn wrote:
As far as what link_dest changes, it's really just whether you hardlink everything, then sync (which unlinks and replaces the changed files), or whether you let rsync hardlink the files that haven't changed and copy the new versions of the ones that have all in one go. It may also handle device nodes and other special files better. Ideally, the only difference is speed, but I had a lot less problems with oddball files and canceled syncs once I started using link_dest.
Okay, I'm starting to get the picture. I appreciate the explanation and I will follow up with some more reading.

IQgryn wrote:
One thing to be sure you test is a restore. I used VirtualBox to test a restore to a bare drive, and found the portage cache directories (mentioned above in my excludes file) that had to be backed up, even though they were supposed to be caches (which should always be deletable without ill effects). I've also chosen to exclude the portage tree and the kernel sources from the backup, so I have to re-sync the tree and then re-install gentoo-sources once I restore--but the machine is working before that. It's a really nice feeling to know that you can restore your machine from scratch in a few hours if need be.

Let me know if I missed any of your questions, or if I'm being cryptic again.

No, you weren't being cryptic -- it's just a somewhat difficult topic to communicate about verbally given all the "moving parts" one must refer to. I very much appreciate you taking the time to explain it. I know what you mean about testing. I had two experiences that drove that lesson home. One was as a project manager who got "Disaster Recovery" dumped in my lap (at a Fortune 500 client), and finding they had never done any kind of verification/testing -- not even of the backup systems, much less the fancy-shmancy hot-site stuff they were paying some vendor big money for. And the more painful lesson was when I set up "Windows Backup" for my father, who has 25 GB of digital photos (many painstakingly scanned in from decades-old photos). To make a long story short, it didn't fricking work. Fortunately I had a month-old "copy" of everything as a fall-back, but he did lose a couple weeks worth of photos.

Really, I started running this with the idea that I'd back up the config files for "rapid rebuild" as sort of a stability test. Then expand it to more of a full backup - doing it over ssh to another machine, maybe off-site. But I never got 'round to the "then". So I figured I'd post it here -- see if people had better ideas before moving on to that next level. Now I'm glad I did! Thanks again. :)
IQgryn
l33t


Joined: 05 Sep 2005
Posts: 764
Location: WI, USA

PostPosted: Tue Nov 27, 2007 11:25 am    Post subject: Reply with quote

BoneKracker wrote:
Should I be doing hourly snapshots too instead of my hourly sync of the daily snapshot?


It's really up to you. I like having the granularity of a snapshot every two hours available for the past two days if I need it, so I have a (bi-)hourly interval set up. If you aren't rotating the files, and you have --delete in your rsync options (it's there by default), you may as well only run the sync once a day--the rest of them are really just busy work. Also, the rotations themselves don't take much time--the deletion of the oldest hourly snapshot does, but that's only because I'm deleting/unlinking 37 gigs each time. Also, in the development version, there's support (with use_lazy_deletes enabled) to remove the lockfile just before the delete starts, since the delete won't interfere with the next run of rsnapshot.

I'd recommend either switching to one sync a day, just before your daily rotation, or adding an hourly rotation. Alternately, IF you use link_dest, you can remove --delete from your rsync options, which will gain you those files that were created and deleted during the day, but stuck around for at least an hour boundary. If you don't use link_dest, that will clog your intervals up pretty quickly (nothing will ever be deleted from daily.0, just new things added).

BoneKracker wrote:
If you re-read my post it'll probably sound a bit more sane. :) I'll look into the use_lazy_deletes again. I had experimented with them once but never got back to it.


It does sound better now. I don't know if use_lazy_deletes really makes anything faster now, but in the future, it could (see above). It seems like the best way to do it, too--keep everything around until you're sure everything else worked.

BoneKracker wrote:
No, you weren't being cryptic -- it's just a somewhat difficult topic to communicate about verbally given all the "moving parts" one must refer to. I very much appreciate you taking the time to explain it.

<snip>

Really, I started running this with the idea that I'd back up the config files for "rapid rebuild" as sort of a stability test. Then expand it to more of a full backup - doing it over ssh to another machine, maybe off-site. But I never got 'round to the "then". So I figured I'd post it here -- see if people had better ideas before moving on to that next level. Now I'm glad I did! Thanks again. :)


I'm glad I could help. Let me know if you think of anything else to pick my brain about. :)
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Tue Nov 27, 2007 12:34 pm    Post subject: Reply with quote

IQgryn wrote:
I'm glad I could help. Let me know if you think of anything else to pick my brain about. :)

Since you asked... if you've got time, and if the subject is of any interest to you, could you check this out? I'd be interested in any suggestions to improve it.
https://forums.gentoo.org/viewtopic-t-571162.html
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Tue Nov 27, 2007 3:16 pm    Post subject: Reply with quote

I updated the script (in original post) to hopefully handle removable media better.

I noticed that unless I have something set up to continuously poll the device for inserted media, then if the media is not present during boot-up, there is only a device special file for the hardware device itself, and none is created (of course) for any volumes on the media.

That sucks, so to overcome this, I added the following to the script:

In "configuration" section up front:
Code:
# If using removable media, enter the device which may be probed
# for the presence of the unmounted rsnapshot root filesystem.
# E.g.,"/dev/hdd" not "/dev/hdd4".
DEVICE='/dev/hdd'


In "main logic" (after initial mount command):
Code:
        # if unable to mount, probe $DEVICE, update blkid.tab, and try again
        if [ $? -ne 0 ]; then
                if [ -n "$DEVICE" ]; then
                        /sbin/blkid $DEVICE
                        /sbin/blkid 1>/dev/null
                        mount $STORE || exit $E_MEDIA
                fi
        fi



These two items don't seem to be documented (at least not in blkid man page):

Without other flags, blkid <device> seems to probe the media and cache the retrieved information that one would typically use other flags to parse and output. With nothing but the device as an argument, it probes the device but generates no output (and, notably, it does NOT update /etc/blkid.tab).

Running /sbin/blkid with no arguments whatsoever seems to update /etc/blkid.tab.

For some reason I'm hoping someone may explain to me, this works (it results in a properly updated blkid.tab, including UUIDs and LABELs for otherwise unknown volumes):
Code:
/sbin/blkid $DEVICE
/sbin/blkid

While this does not (blkid.tab is not updated):
Code:
/sbin/blkid $DEVICE && /sbin/blkid


At any rate, the upshot is that this is a way for scripts to make use of removable media that would otherwise require polling to recognize.
IQgryn
l33t


Joined: 05 Sep 2005
Posts: 764
Location: WI, USA

PostPosted: Tue Nov 27, 2007 5:42 pm    Post subject: Reply with quote

From the blkid man page:

RETURN CODE
If the specified token was found, or if any tags were shown from (speci-
fied) devices 0 is returned. If the specified token was not found, or no
(specified) devices could be identified, an exit code of 2 is returned.
For usage or other errors, an exit code of 4 is returned.

I get a return value of 2 when I run blkid <dev>, but 0 when I run blkid with no arguments.

I am curious as to why you need to check the device at all--shouldn't udev do this for you?
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Tue Nov 27, 2007 11:18 pm    Post subject: Reply with quote

IQgryn wrote:
From the blkid man page:

RETURN CODE
If the specified token was found, or if any tags were shown from (speci-
fied) devices 0 is returned. If the specified token was not found, or no
(specified) devices could be identified, an exit code of 2 is returned.
For usage or other errors, an exit code of 4 is returned.

I get a return value of 2 when I run blkid <dev>, but 0 when I run blkid with no arguments.
That was the first thing that occurred to me when I noticed that it didn't work as an AND-list. I swear I checked the return code and it was zero :P (which was why I was mystified about that). I guess I must have screwed up and checked "blkid <no args>" when I meant to check "blkid <dev>".
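For anyone else who hits this, the shell semantics are easy to demonstrate without touching blkid at all; the functions below are hypothetical stand-ins (per the man page, blkid <dev> exits with status 2 when it has no tags to print):

```shell
probe()   { return 2; }        # stand-in for /sbin/blkid "$DEVICE": exits 2, prints no tags
refresh() { echo refreshed; }  # stand-in for /sbin/blkid with no args: rewrites blkid.tab

first=$( probe && refresh ) || true
echo "AND-list output: '$first'"   # prints: AND-list output: ''
# the non-zero status from probe short-circuits &&, so refresh never runs

second=$( probe ; refresh )
echo "list output: '$second'"      # prints: list output: 'refreshed'
# a plain list (newline or ;) runs both commands regardless of status
```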
IQgryn wrote:
I am curious as to why you need to check the device at all--shouldn't udev do this for you?

I have yet to master udev, but apparently it does not -- at least not without additional automounting-type tools.

First of all, note that I do have dbus and hald running. I also have gnome, but I did not install gnome-volume-manager or its dependencies (which include its automounting functionality, I believe). I haven't been considering this an exception because I suspect many servers also lack such functionality. I know there are also standalone automounting tools, but I'd prefer to not require any of these (Gnome's, KDE's, or otherwise).

So, as I currently have this machine configured, here's what the behavior is:

When I insert media in a hardware device (pop a CD or a zip disk in), I have to mount it manually: either from the command line or by double-clicking the CD drive in Nautilus. On the other hand, if I connect a USB key or the like to a USB port, the new hardware device is processed and shows up by name, unmounted, in Nautilus (e.g., "San Disk U3 Cruzer Micro"). I am able to mount all of these -- with the notable exception of the Zip disk (read on).

When I boot, if media is present in the ide floppy (Zip) drive, udev (or udev + hal + dbus, or whatever combination of services is responsible) probes the device, finds the volume, and processes it normally. I know little about these services, but I assume that libblkid is called by udev, the results get written to blkid.tab, a device special file is assigned to the volume, and the appropriate symbolic links are created in /dev/disk/.

However, when I boot, if media is not present in the ide floppy drive, none of that happens (as my system is currently configured). There is a device special file for the hardware device itself (/dev/hdd in this case). Inserting media after booting, the drive light comes on, the media is apparently "read" by the drive, but nothing further happens. Nothing changes. Because there is no device special file for the storage volume itself (i.e. hdd4 on the Zip disk), no entry in blkid.tab, and nothing to symlink to from /dev/disks/*, any attempt to mount (by double-clicking the "Zip Disk" icon in Nautilus, or by command line: mount /dev/hdd4; mount UUID="foo"; or mount LABEL="bar") results in ye old mount exit status 32 "Special Device foobar does not exist."

Now, my thoughts are that this may be unique to removable media that have a partition table. One work-around might be to NOT create a partition table on the Zip disks, just create a filesystem right on the disk itself -- like a CD. I've done it before and it works. The only problem is that this is not the standard. The standard is a partition table and a partition.

I'm thinking as I write this that optical drives, tapes, external USB or Firewire disks, flash devices, and almost all other forms of "removable" media won't have this issue. The only things I can think of that would are ide-floppies (Zip/Jazz, etc.).

So maybe the best "workaround" is to format the ide-floppies used for backup without a partition table. If using a UUID= or LABEL= entry in fstab, the normal disks (ones not used for backup) would still work as expected.

So I guess this boils down to a simple question. How does one trigger a "device event" and cause hal/udev to re-process a hardware device that's already been handled at boot-time, without using a polling/automount daemon?
IQgryn
l33t


Joined: 05 Sep 2005
Posts: 764
Location: WI, USA

PostPosted: Wed Nov 28, 2007 4:44 am    Post subject: Reply with quote

I could see where zip drives would pose a problem. I'd recommend looking at udevtrigger, but I don't actually know if it does what you want. I'd agree with your UUID/LABEL workaround being the best way...but that really doesn't mean much--I'm out of my depth here. :P
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Wed Nov 28, 2007 5:26 am    Post subject: Reply with quote

IQgryn wrote:
I could see where zip drives would pose a problem. I'd recommend looking at udevtrigger, but I don't actually know if it does what you want. I'd agree with your UUID/LABEL workaround being the best way...but that really doesn't mean much--I'm out of my depth here. :P


Aha! Excellent. I think that might be the "right" way to do this (so udev rules actually get applied and not skipped). Then I could also use udev to selectively mount media to the rsnapshot root mount point instead of using an fstab entry.

Thanks again!
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Wed Nov 28, 2007 7:39 am    Post subject: Reply with quote

Okay, your suggestion does work - at least partially (so I thought I'd document it here in case I can build on it or it's of use in someone's similar script):

A possible generic command to mount any removables [Edit: correction -- not "mount" per se but "activate" (as in, "make nodes for") ]
Code:
# /sbin/udevtrigger --subsystem-match="block" --attr-match=removable="1"

More precise, yet user-configurable for the script
Code:
# /sbin/udevtrigger --KERNEL="hdd"

What happens:
- sysfs is scanned for removable block devices or for KERNEL semantic "hdd"
- the established device hdd matches
- the default udevtrigger --action="add" is performed

The result:
- block special file /dev/hdd4 is created
- but that's it (blkid.tab not updated, no /dev/disk symlinks)
So this is sufficient to mount the device via /dev/hdd4 but not by UUID or LABEL

I scanned /etc/udev/rules.d/ to see what it's supposed to do and found this in 60-persistent-storage.rules:
Code:
# never access non-cdrom removable ide devices, the drivers are causing event loops on open()
KERNEL=="hd*[!0-9]", ATTR{removable}=="1", DRIVERS=="ide-cs|ide-floppy", GOTO="persistent_storage_end"
KERNEL=="hd*[0-9]", ATTRS{removable}=="1", GOTO="persistent_storage_end"

Which apparently skips all the subsequent logic, most of which creates the /dev/disk/* symlinks (which it does not seem to use libuuid to do -- so that must be getting called higher up the food chain, like when the bus gets scanned). Anyway, that explains the different udev behavior for zip disks vs. other removables.

Nothing else specific to removable or ide-floppy etc. So I suppose the actual block special file (/dev node) is being created elsewhere by a catch-all rule for the ide bus or something.

Although this provided an incentive to finally learn about udev, the effort is exceeding the value since the blkid approach does work (and since nobody uses zip disks any more and I will abandon it when I expand my backup).


Last edited by Bones McCracker on Thu Nov 29, 2007 12:49 am; edited 1 time in total
steveL
Watchman


Joined: 13 Sep 2006
Posts: 5153
Location: The Peanut Gallery

PostPosted: Wed Nov 28, 2007 11:10 am    Post subject: Reply with quote

Wow this post is great!

It's really encouraging to read you two collaborate so effectively as well. Thanks to both of you for the great scripts, and the inspiration :-)
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Thu Nov 29, 2007 12:43 am    Post subject: Reply with quote

steveL wrote:
Wow this post is great!

It's really encouraging to read you two collaborate so effectively as well. Thanks to both of you for the great scripts, and the inspiration :-)

Very kind of you to say.
Yeah, but I get threatened with banning from the "Off the Wall" forum regularly. :P

This seems to also generate the uevent (i.e., it results in the creation of /dev node(s)) for partitions on the newly-inserted ide-floppy, but it likewise does not result in an update of /etc/blkid.tab or in symlinks to the newly-created node(s) in /dev/disk/*:
Code:
# /sbin/blockdev --rereadpt /dev/hdd


So I think I'll stick with the "blkid <device> && blkid" approach for now.

Interestingly, my Fedora 8 instance on spare disk of same machine keeps the UUID entry for the removable in blkid.tab when rebooting without the media inserted, while gentoo does not.
naturalmage
n00b


Joined: 17 Mar 2006
Posts: 9

PostPosted: Thu Dec 06, 2007 3:31 pm    Post subject: Reply with quote

This thread is awesome. All your hard work and genius is making my life incredibly easier. Thank you guys.

I am wondering if I can pick your brains further about a few things.

My goal is to create a backup system with rsnapshot that (1) easily lets me browse snapshots and restore individual files and (2) allows me to completely restore my system should I seriously damage it with my amateur tinkering.

My first concern: I am a bit unclear about what command I would use to do a full restore from backup, should the need arise. Obviously, my ignorance of this little tidbit would defeat the point of all of it. Rsnapshot doesn't seem to have any option to do so, so I assume I would use rsync myself. What options would I use?

Second, I want to make sure I am doing my excludes right. I think I can pretty much just copy IQgryn's excludes file, but I want to ask: I don't see /proc in there. I was under the impression that /proc does not reside on any disk but is dynamically created and constantly changed by the system, so shouldn't it be excluded? Wouldn't it actually bork your system right good if you tried restoring /proc from backup?

Should I exclude /var/lock, /var/run, and /var/state? These all seem like they might be unnecessary, and maybe even bad to restore from backup (i.e., I imagine nothing good happens if sshd finds an existing sshd.pid file while it's starting up.) Am I right? I also want to exclude "lost+found/" but I don't know if that's a truly good idea.

One more thing: I am a little confused about how rsync handles it when two rules conflict. Do earlier or later rules get precedence? Or is it more complex than that?
_________________
NaturalMage
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Thu Dec 06, 2007 6:20 pm    Post subject: Reply with quote

IQgryn's more qualified to answer those questions, so I'll defer to him.
_________________
patrix_neo wrote:
The human thought: I cannot win.
The ratbrain in me : I can only go forward and that's it.
naturalmage
n00b


Joined: 17 Mar 2006
Posts: 9

PostPosted: Fri Dec 07, 2007 9:15 am    Post subject: Reply with quote

Okay, I think I may have arrived at the answer to one of my questions on my own.

It is implied from the rsync man page, and from some of the examples I have seen, that former rules override later rules.

The important exception, which occurs with either the -r or -a option of rsync (and therefore with rsnapshot), is that if a directory is excluded, then any include rules for anything it contains will be ignored. It doesn't matter which comes first, the exclude rule for the directory or the include rule for its descendant. This fact, its reason, and the way to get around it are all detailed in the rsync man page.
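As a concrete (made-up) illustration of that exception: to keep /var/lib/portage while pruning the rest of /var under -r/-a, the chain of parent directories has to be included before the catch-all exclude, e.g. in a filter file:

```
# rsync include/exclude file: the first matching rule wins, top to bottom.
# Without the parent includes, "- /var/**" would hide the directories
# rsync must descend through, and the portage include would never match.
+ /var/
+ /var/lib/
+ /var/lib/portage/
+ /var/lib/portage/**
- /var/**
```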
_________________
NaturalMage
Bones McCracker
Veteran


Joined: 14 Mar 2006
Posts: 1611
Location: U.S.A.

PostPosted: Fri Dec 07, 2007 6:21 pm    Post subject: Reply with quote

naturalmage wrote:
Okay, I think I may have arrived at the answer to one of my questions on my own.

It is implied by the rsync man page, and by some of the examples I have seen, that earlier rules override later ones.

The important exception, which occurs with either the -r or -a option of rsync (and therefore with rsnapshot), is that if a directory is excluded, then any include rules for anything it contains will be ignored. It doesn't matter which comes first, the exclude rule for the directory or the include rule for its descendant. This fact, its reason, and the way to get around it are all detailed in the rsync man page.

Yes. Also important to understand the include/exclude syntax also explained there (e.g., use of **).

Also, on your question about what to include in a full backup: you're right about /proc. I think there is more, too. I honestly don't know, because I've never done a full backup. I would probably look to the documentation of other backup tools, and other scripts, to see what others are including/excluding. Areas of interest that come to mind are mounts of types proc, sysfs, tmpfs, devpts, and usbfs. Also, depending on your purposes, you might want to exclude /tmp or even /var/tmp. When you become wise about this, enlighten me. :)
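For whatever it's worth, a starting-point exclude file along the lines of what we've discussed might look like this. Treat it as a sketch to adapt, not a vetted list; the paths are the usual ones, but verify them against your own system before trusting a restore to it:

```
# rsnapshot/rsync exclude file (sketch): pseudo-filesystems and volatile state
- /proc/**
- /sys/**
- /dev/**
- /tmp/**
- /var/tmp/**
- /var/lock/**
- /var/run/**
- lost+found/
```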
_________________
patrix_neo wrote:
The human thought: I cannot win.
The ratbrain in me : I can only go forward and that's it.
Reply to topic    Gentoo Forums Forum Index Documentation, Tips & Tricks All times are GMT
Goto page 1, 2  Next
Page 1 of 2

 