What is the best or cleanest backup procedure?

Post by spork_kitty » Tue Sep 10, 2019 1:04 am

I was encouraged to start a new thread with this one. While discussing backing up / before installing a stage3, I asked: "What's the cleanest way to back up?"

There are different types of backups: the dumb-recursive "copy it all", partial/incremental, filtered; some are meant to be used across differing filesystems, some retain permissions and mtime, some don't, etc. "Clean" means different things to different people, so if you answer, please clarify what you mean.

I think a clean backup:
  • is easy to produce
  • is easy to automate
  • is easy to restore from
  • doesn't leave any tempfiles or extra dirs lying about
  • represents the state correctly, i.e. xattrs, permissions, timestamps; all of it
  • verifies itself at some point (checksums, etc)
So, what are your backups like? What's important for your backups? Have a script to share?
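For the self-verification point, the kind of thing I have in mind is a checksum manifest stored next to the backup tree, e.g. (paths hypothetical):

Code: Select all

# After each backup run: record checksums of everything in the mirror.
cd /mnt/backup && find . -type f -print0 | xargs -0 sha256sum > ../backup.sha256

# Later: verify the backup against the manifest.
cd /mnt/backup && sha256sum -c --quiet ../backup.sha256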
Post by CasperVector » Tue Sep 10, 2019 2:35 am

* Use a suitable directory as the backup mirror, here called $BAKROOT.
* Copy files that need backups (mainly those modified by the user) into the mirror, e.g. /etc/passwd -> $BAKROOT/etc/passwd.
* Record volatile directories in $BAKROOT/../rsync.filter (see "FILTER RULES" and "INCLUDE/EXCLUDE PATTERN RULES" in rsync(1)), e.g.:

Code: Select all

# First excludes.
- .keep*
- /etc/local.d/README

# Always update.
+ /etc/openvpn/***
+ /usr/local/portage/***

# Compressed archives.
- /usr/share/fonts/

# No longer installed.
- /etc/slim.conf
* And then on periodic backup (requires root; the slash in `./' is necessary due to the rsync(1) command line convention):

Code: Select all

cd $BAKROOT
# List everything currently in the mirror, stripping the leading `.'.
find | sed 's/^\.//' > ../include.lst
# Dry run first (-n): review the output before committing to anything.
rsync --filter 'merge ../rsync.filter' --include-from=../include.lst \
      --exclude='*' -navh --delete-after / ./ | less
# The real run.
rsync --filter 'merge ../rsync.filter' --include-from=../include.lst \
      --exclude='*' -avh --delete-after / ./
rm -f ../include.lst
cd -
* To restore from the backup, swap `/' and `./' in the commands above.
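Concretely, the restore is just the backup command with the two sides swapped; a sketch, run from $BAKROOT as above:

Code: Select all

cd $BAKROOT
find | sed 's/^\.//' > ../include.lst
rsync --filter 'merge ../rsync.filter' --include-from=../include.lst \
      --exclude='*' -avh --delete-after ./ /
rm -f ../include.lst
cd -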
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
Post by spork_kitty » Thu Sep 12, 2019 1:05 am

I like your approach with include.lst! I currently use a unified rsync filter with entries like this:

Code: Select all

# system config directories
+ /etc/
+ /etc/abcde/
+ /etc/catalyst/
+ /etc/conf.d/
+ /etc/crossdev/
+ /etc/cups/
+ /etc/ddclient/
+ /etc/default/
+ /etc/init.d/
+ /etc/lighttpd/
+ /etc/lvm/
+ /etc/portage/
+ /etc/postgresql-10/
+ /etc/postgresql-11/
+ /etc/ssh/
+ /etc/X11/
+ /etc/X11/xorg.conf
+ /etc/X11/xorg.conf.backup
+ /etc/udev/
+ /etc/udev/rules.d/

# explicit files
+ /etc/exports
+ /etc/fstab
+ /etc/genkernel.conf
+ /etc/hosts
+ /etc/issue
+ /etc/locale.gen
+ /etc/lynx.lss
+ /etc/minidlna.conf
+ /etc/mpd.conf
+ /etc/ntpd.conf
+ /etc/rc.conf
+ /etc/sysctl.conf
+ /etc/vdpau_wrapper.cfg
+ /etc/wgetpaste.conf
+ /etc/xattr.conf
+ /etc/xinetd.conf
+ /etc/profile.d/
+ /etc/profile.d/env.sh

# We need to be able to self-backup!
+ /.baklist

# Exclusions
- /*
- /etc/*
- /etc/profile.d/*
- /etc/udev/*
- /etc/X11/*
And I call it with:

Code: Select all

rsync -auHEm --delete --delete-excluded --no-D --progress --include-from="${BAK_LIST}" "${BAK_SOURCE%/}/" "${BAK_TARGET%/}/"
I'll need to revisit rsync's manpage; it seems I'm missing something that would make life easier.

If I'm not mistaken, the only difference between your two commands is the "-n" flag?
Post by CasperVector » Thu Sep 12, 2019 1:27 am

spork_kitty wrote:If I'm not mistaken, the only difference between your two commands is the "-n" flag?
Yes, the first rsync(1) invocation is a dry run, to help ensure nothing is obviously wrong.
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
Post by wjb » Thu Sep 12, 2019 6:56 pm

My backups are done with two objectives:
  • to be able to restore the odd file or directory where I've deleted or changed something I wish I hadn't.
  • to restore my data and settings on top of a new install, following a disaster or h/w upgrade.
The amount I back up has gone up a lot since I discovered borgbackup earlier this year; now I just skip temporary files, caches, virtual filesystems, mounts, ...

I do a daily backup with borg which currently takes about 20 min; I keep 7 daily, 4 weekly, and 6 monthly sets.

Things I like
  • 20 minutes for a backup (the very first one takes longer) means daily is a practical option
  • backup sets can be mounted, so extracting/comparing is about as easy as it gets
  • the 'prune' command automates purging of old backup sets
  • deduplication means the overall collection of backup sets is tiny
A "borg init" is needed to set things up first time, but the body of the script is

Code: Select all

export BORG_REPO=/path/to/repository
export BORG_PASSPHRASE='--whatever--'

borg create            \
     --verbose         \
     --filter AME      \
     --list            \
     --stats           \
     --show-rc         \
     --compression lz4 \
     --exclude-caches  \
     --exclude /proc         \
     --exclude /dev          \
     --pattern=-/dev         \
     --pattern=-/lost+found  \
     --pattern=-/proc        \
     --pattern=-/run         \
     --pattern=-/sys         \
     --pattern=-/tmp         \
     --pattern=-/usr/portage \
     --pattern=+/usr/src/linux-$(uname -r)     \
     --pattern=-/usr/src     \
     --pattern=-/var/tmp     \
     --pattern=-/var/db      \
     --pattern=-/var/cache   \
     --pattern=-/var/log     \
     $mountpoints            \
     $local_ignores          \
     ::'{hostname}-{now}'    \
     /

borg prune                          \
    --list                          \
    --prefix '{hostname}-'          \
    --show-rc                       \
    --keep-daily    7               \
    --keep-weekly   4               \
    --keep-monthly  6
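For reference, the one-time setup and the mounting mentioned above would look roughly like this; the repository path, mount point, and encryption mode are illustrative choices, not necessarily what wjb used:

Code: Select all

# One-time repository creation (pick an encryption mode).
borg init --encryption=repokey /path/to/repository

# Browse or compare any backup set by mounting it as a filesystem.
export BORG_REPO=/path/to/repository
borg mount ::puma-2019-09-11T19:00:13 /mnt/borg
ls /mnt/borg
borg umount /mnt/borg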

Stats:

Code: Select all

------------------------------------------------------------------------------
Archive name: puma-2019-09-11T19:00:13
Archive fingerprint: bbb97ad9d7541a5ec722929dffe25f531a7080ab419f3ef63387234c6ffaf323
Time (start): Wed, 2019-09-11 19:00:16
Time (end):   Wed, 2019-09-11 19:16:47
Duration: 16 minutes 30.46 seconds
Number of files: 1052830
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               70.39 GB             48.87 GB            280.71 MB
All archives:                1.26 TB            878.80 GB             60.46 GB

                       Unique chunks         Total chunks
Chunk index:                 1062155             18984341
------------------------------------------------------------------------------
terminating with success status, rc 0

Wed Sep 11 19:17:06 BST 2019 Pruning backups

Keeping archive: puma-2019-09-11T19:00:13             Wed, 2019-09-11 19:00:16 [bbb97ad9d7541a5ec722929dffe25f531a7080ab419f3ef63387234c6ffaf323]
Keeping archive: puma-2019-09-10T19:00:14             Tue, 2019-09-10 19:00:17 [5977c8b2cb9ef4156074ba49a9189f5dcaa036a7a5b752d05c481367d0ff331b]
Keeping archive: puma-2019-09-09T19:00:13             Mon, 2019-09-09 19:00:17 [378f5ac95d460d2915d1aeddf0770197ae8cd98c9177e3647ae8dd4bb71bfd59]
Keeping archive: puma-2019-09-08T19:00:13             Sun, 2019-09-08 19:00:17 [c58fca4de7ec1c486331c547e2966b08b621835c1d840035c242af64024adf96]
Keeping archive: puma-2019-09-06T19:00:14             Fri, 2019-09-06 19:00:17 [ad13ed80c65ae1b8a8a1a899118951dca7567c6688e1a7439033b38500f65732]
Keeping archive: puma-2019-09-05T19:00:10             Thu, 2019-09-05 19:00:13 [d96d2034b06bedb5ddecb5dc491b361572f7f6d4a2d0f769be2b549cb634e70a]
Keeping archive: puma-2019-09-04T19:00:13             Wed, 2019-09-04 19:00:17 [b8010cb696876315309fce0a27b3792db21376518eb9142e1da75edff2ab4bf2]
Pruning archive: puma-2019-09-03T19:00:11             Tue, 2019-09-03 19:00:14 [46384dfbabe12895d365563a33d521df447e4924208b57043a0df27e42e72940] (1/1)
Keeping archive: puma-2019-09-01T19:00:14             Sun, 2019-09-01 19:00:17 [b6a0521dd6752a2aa5b7dc8e352426a72fd95e54aed92a837479dd82d119493c]
Keeping archive: puma-2019-08-31T19:00:14             Sat, 2019-08-31 19:00:17 [a806ee87c17f916c4f5867841cb59c2627246ec4ef5f3861079d5df686b00fb6]
Keeping archive: puma-2019-08-25T19:00:13             Sun, 2019-08-25 19:00:16 [319564b7ebd583f74cd0f0fbff10db8a199b71e3a402c3e9c396bc3aa520be2d]
Keeping archive: puma-2019-08-16T19:00:14             Fri, 2019-08-16 19:00:17 [925ccd5d277b14984036d8e41526c6e2703d997b3df5930c6d9a9c269d64e734]
Keeping archive: puma-2019-08-11T19:00:13             Sun, 2019-08-11 19:00:17 [023a592f4b2d9aae2221ab71e8162a35a24049df1b1236c159d3fb866bfb7f0d]
Keeping archive: puma-2019-07-29T19:00:11             Mon, 2019-07-29 19:00:14 [ee25c515767da574ec85f73f1f22edee547bb71602bc8b0bc209356bebf98d77]
Keeping archive: puma-2019-06-30T19:00:13             Sun, 2019-06-30 19:00:16 [023ea3d802171368d155aeef64372b474923b24dffcc30100261ca20a8ff97b3]
Keeping archive: puma-2019-05-31T19:00:13             Fri, 2019-05-31 19:00:15 [9ec3b529da38d23075753777ae5f11031edadf8c5ceb5ac30c4799a1c50d5d55]
Keeping archive: puma-2019-04-30T19:00:13             Tue, 2019-04-30 19:00:15 [670ed0c9fa123cdc438bd62379776ef48ded4e50d3e63614126d59ab4461cb6f]
Keeping archive: puma-2019-03-31T19:00:07             Sun, 2019-03-31 19:00:09 [caab93a4b028f7de33442c085508d3752dcb2006641ee8dbd47537ac7d0059c3]
terminating with success status, rc 0
Post by alexander-n8hgeg5e » Mon Nov 11, 2019 9:53 am

For me, 'btrfs send'/receive turned out to be the best way to get fast, compressed, and somewhat deduplicated backups.
I have a btrfsbenc script on GitHub (outdated; read the code, it could eat the cat) that can find corresponding snapshot pairs for incremental backups.
I have some problems with my system quickly running out of memory and freezing during a backup, but it could be that my zstd compression is set too high: normally the max is 3, but I turned it up to 18 in the kernel code.

In terms of self-verification, btrfs should be suitable: you can scrub the disk, and it checksums everything.

I tried many backup tools with features like dedup, compression, and incremental backups. It seems it's hard to implement a fast one. I think zbackup was somewhat good. Plain rsync has none of the fancy features, but it runs fast.
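A minimal sketch of the send/receive flow being described, assuming read-only snapshots under /snapshots and a btrfs-formatted backup disk at /mnt/backup (all paths hypothetical):

Code: Select all

# First time: take a read-only snapshot and send it whole.
btrfs subvolume snapshot -r / /snapshots/root.0
btrfs send /snapshots/root.0 | btrfs receive /mnt/backup

# Later: snapshot again and send only the difference against the
# parent snapshot that already exists on both sides.
btrfs subvolume snapshot -r / /snapshots/root.1
btrfs send -p /snapshots/root.0 /snapshots/root.1 | btrfs receive /mnt/backup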
Post by technotorpedo » Wed Dec 11, 2019 5:32 am

Personally, another vote for rsync; there's no shortage of information on it, so I'll leave it there. One possible refinement worth noting, though: multicore de/compression tools. They've been around a LONG time but apparently aren't widely used, e.g. pigz instead of gzip. Integrating them could be good mojo; the how is for each end user to sort out and enjoy, and I'm sure there are already tutorials and how-tos aplenty online.
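For example, recent GNU tar can be pointed at a parallel compressor directly; the thread count and paths below are illustrative:

Code: Select all

# Create an archive compressing on all cores with pigz instead of gzip.
tar --use-compress-program='pigz -p 8' -cpf /mnt/backup/home.tar.gz home/

# Extract the same way (tar passes -d to the program automatically).
tar --use-compress-program=pigz -xpf /mnt/backup/home.tar.gz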
Post by poncho » Wed Dec 11, 2019 8:39 am

alexander-n8hgeg5e wrote:For me 'btrfs send'/receive turned out to be the best way to get
fast compressed and somewhat deduplicated backups.
Same here. I'm using app-backup/btrbk
Post by erm67 » Wed Dec 11, 2019 5:47 pm

I also use btrfs on the NAS, with 4 disks in raid1 (data and metadata), better than RAID5 IMHO... I will rebalance important data to raid1c3 soon, now that it is mainlined, so that I am protected against the failure of 2 disks (better than RAID6)....
And I use btrfs send for backups, and duplicati for the windoze machines.
Ok boomer
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
Post by alexander-n8hgeg5e » Tue Dec 17, 2019 5:07 pm

erm67 wrote:I also use btrfs on the NAS, with 4 disks in raid1 (data and metadata) ... and I use btrfs send for backups ...
It's really a fast way.
And clean: you get the whole tree, incrementally and fast. As good as a fresh backup and as fast as an incremental one.

I coded a little program to help me automatically relate the snapshots to each other and find the ones that btrfs can base its transmission on. It was a bit confusing, but I think it works. So I want to look at btrbk and see how they did it.

My btrfs send/receive setup needs some fixing, however. I think it could be related to turning the zstd compression level up to 18 in the kernel code... the system freezes if I use btrfs send. So I'm back on rsync for now...
Post by erm67 » Wed Dec 18, 2019 9:16 am

My NAS has lots of big disks but a slow CPU; I use btrfs compression only on iSCSI volumes ....
Ok boomer
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
Post by figueroa » Tue Jan 14, 2020 4:20 am

Tarball it. Multiple TARs, actually.

On alternating weeknights (3 am) I run bu2tar.scr or bu2tar2.scr, then on Saturday I run bu2tar3.scr. These each do a full backup of all my personal data (no system files -- that's another set of scripts, below) onto a secondary internal hard drive, alternating between partitions.

Each of /mnt/backup, /mnt/backup2, and /mnt/backup3 is a 128 G partition on my large secondary hard drive, which I use only for backups.

I suppose I waste a lot of energy, but I've been doing this for many years, I understand it, and the scripts run while I'm sleeping. Here is what one of them looks like:

Code: Select all

$ cat bin/bu2tar.scr
#!/bin/sh
mount /mnt/backup
cd /
tar cpzf /mnt/backup/data/bak.tgz scratch/bak/
tar cpzf /mnt/backup/data/bin.tgz scratch/bin/
#tar cpzf cdrecord.tgz /scratch/cdrecord/
tar cpzf /mnt/backup/data/Documents.tgz scratch/Documents/
#tar cpzf dosc.tgz /scratch/dosc/
#tar cpzf dosg.tgz /scratch/dosg/
tar cpzf /mnt/backup/data/Downloads.tgz scratch/Downloads/
#tar cpzf dso.tgz /scratch/dso/
#tar cpzf err.tgz /scratch/err/
#tar cpzf home.tgz /home/
tar cpzf /mnt/backup/data/home.tgz -X /scratch/bin/exclude.home home/
tar cpf /mnt/backup/data/graphics.tar scratch/graphics/
#tar cpzf Mail.tgz /scratch/Mail/
#tar cpf /mnt/backup/data/mntavwork.tar mnt/av/work/
tar cpzf /mnt/backup/data/mozilla.tgz scratch/.mozilla/
tar cpf /mnt/backup/data/mp3.tar scratch/mp3/
tar cpzf /mnt/backup/data/pdf.tgz scratch/pdf/
tar cpf /mnt/backup/data/photos.tar scratch/photos/
#tar cpzf pkg.tgz /scratch/pkg/
#tar cpzf text.tgz /scratch/text/
tar cpzf /mnt/backup/data/thunderbird.tgz scratch/.thunderbird/
#tar cpzf /mnt/backup/data/vbox.tgz -X /scratch/bin/exclude.vbox home/figueroa/.VirtualBox/
tar cpzf /mnt/backup/data/vbox.tgz home/figueroa/.VirtualBox/
tar cpzf /mnt/backup/data/vboxmnt.tgz mnt/vbox/VDI/
tar cpzf /mnt/backup/data/vboxvm.tgz "home/figueroa/VirtualBox VMs/"
tar cpzf /mnt/backup/data/wav.tgz scratch/wav/
tar cpzf /mnt/backup/data/webdev.tgz scratch/webdev/
tar cpzf /mnt/backup/data/www.tgz scratch/www/
#tar cpzf /mnt/backup3/data/janbak.tgz home/jan/
cd
umount /mnt/backup
I have a personal bin directory in /home/myusername/bin which I use for my large collection of scripts. (Actually, bin in my home directory is a symlink and the real home bin is at /scratch/bin.) Why do I use a personal /scratch partition? Because, eons ago, drives were small and therefore partitions were small, and I ran out of space in /home, so I made /scratch and kept doing it out of habit. I don't recommend it. :-)

Here is my personal crontab:

Code: Select all

#Sun, Tue, Thu
1	3	*	*	7,2,4	/home/figueroa/bin/bu2tar.scr
#Mon, Wed, Fri
1	3	*	*	1,3,5	/home/figueroa/bin/bu2tar2.scr
#Sat
1	3	*	*	6	/home/figueroa/bin/bu2tar3.scr
My personal data lives on either /home or /scratch, with some additional VirtualBox virtual machines on /mnt/vbox, all on drive /dev/sda; it's roughly 80 gig of data and takes roughly an hour on a fast machine with spinning hard drives. You'll see some of the rows are commented out: old archival stuff where I no longer make any changes. The collections that make sense to compress are compressed; things like mp3 files and photos are not.

Five times a month I back up my system files with another script that runs on specified days at 6 am:

Code: Select all

$ cat bin/gentoo2bak.scr
#!/bin/sh
#gentoo2bak.scr
mount /mnt/backup
cd /
tar cpzf /mnt/backup/sysbak/bin.tgz --xattrs --numeric-owner bin/
tar cpzf /mnt/backup/sysbak/boot.tgz --xattrs --numeric-owner boot/
tar cpzf /mnt/backup/sysbak/dev.tgz --xattrs --numeric-owner dev/
tar cpzf /mnt/backup/sysbak/etc.tgz --xattrs --numeric-owner etc/
tar cpf /mnt/backup/sysbak/home.tar --xattrs --numeric-owner --no-recursion -X /scratch/bin/exclude.home2 home/*
tar cpzf /mnt/backup/sysbak/lib.tgz --xattrs --numeric-owner --no-recursion lib/
tar cpzf /mnt/backup/sysbak/lib32.tgz --xattrs --numeric-owner lib32/
tar cpzf /mnt/backup/sysbak/lib64.tgz --xattrs --numeric-owner lib64/
tar cpf /mnt/backup/sysbak/media.tar --xattrs --numeric-owner --no-recursion media/*
tar cpf /mnt/backup/sysbak/mnt.tar --xattrs --numeric-owner --no-recursion mnt/*
tar cpzf /mnt/backup/sysbak/opt.tgz --xattrs --numeric-owner opt/
tar cpf /mnt/backup/sysbak/proc.tar --xattrs --numeric-owner --no-recursion proc/
tar cpzf /mnt/backup/sysbak/root.tgz --xattrs --numeric-owner root/
tar cpf /mnt/backup/sysbak/run.tar --xattrs --numeric-owner --no-recursion run/
tar cpzf /mnt/backup/sysbak/sbin.tgz --xattrs --numeric-owner sbin/
tar cpf /mnt/backup/sysbak/sys.tar --xattrs --numeric-owner  --no-recursion sys/
tar cpf /mnt/backup/sysbak/tmp.tar --xattrs --numeric-owner --no-recursion tmp/
tar cpzf /mnt/backup/sysbak/usr.tgz --xattrs --numeric-owner -X scratch/bin/exclude.usr usr/
tar cpzf /mnt/backup/sysbak/usrportage.tgz --xattrs --numeric-owner -X scratch/bin/exclude.distfiles usr/portage/
tar cpzf /mnt/backup/sysbak/var.tgz --xattrs --numeric-owner -X scratch/bin/exclude.var var/
tar cpf /mnt/backup/sysbak/scratch.tar --xattrs --numeric-owner --no-recursion -X scratch/bin/exclude.scratch scratch/
tar cpzf /mnt/backup/janbak/janbak.tgz --xattrs --numeric-owner home/jan/
cd
umount /mnt/backup
This is what my root crontab looks like for those:

Code: Select all

01      6       2,16	*       *	/home/figueroa/bin/gentoo2bak.scr
01      6       9,23	*       *	/home/figueroa/bin/gentoo2bak2.scr
01      6       30	*       *	/home/figueroa/bin/gentoo2bak3.scr
This is currently about 3.6 G, and takes about 18 minutes, more or less.

Once a week, normally on Saturday, I make a compressed, encrypted archive onto an external 128 G USB 3 flash drive (about 95% of it used) and keep these rotated through alternate locations. I currently use five different flash drives in rotation. These are for recovery from catastrophe; I expect never to need them. Running this takes 50-60 min. I do it manually, but I don't watch it, as I'm doing other things. The script looks like:

Code: Select all

$ cat bin/targpgflash1.scr
#!/bin/sh
#targpgflash1.scr
#Encrypt and store data and OS backups on external media: targpgfire.scr
#If backup partition(s) already mounted, comment out mount and umount commands.

mount /mnt/backup

cd /mnt/backup
date > /run/media/figueroa/MicroCenter128/date1.txt
tar cvf - sysbak/* | gpg -c --batch --yes --passphrase-file /scratch/bin/.passrc --compress-algo none -o /run/media/figueroa/MicroCenter128/sysbackup.tar.gpg
date >> /run/media/figueroa/MicroCenter128/date1.txt

date > /run/media/figueroa/MicroCenter128/date2.txt
tar cvf - janbak/* | gpg -c --batch --yes --passphrase-file /scratch/bin/.passrc --compress-algo none -o /run/media/figueroa/MicroCenter128/janbackup.tar.gpg
date >> /run/media/figueroa/MicroCenter128/date2.txt
cd /

umount /mnt/backup

mount /mnt/backup3
cd /mnt/backup3
date > /run/media/figueroa/MicroCenter128/date3.txt
tar cvf - data/* | gpg -c --batch --yes --passphrase-file /scratch/bin/.passrc --compress-algo none -o /run/media/figueroa/MicroCenter128/databackup.tar.gpg
date >> /run/media/figueroa/MicroCenter128/date3.txt

cd /
umount /mnt/backup3
I didn't mention that once a week I also make a tarball of my wife's entire home directory, which she uses occasionally, including a Windows virtual machine for genealogy work, about 16 G. That's the last line of my system backup script; it used to be in my personal data backup script.

I have used my collection of tarballs to recover from hard drive failure as well as to move my system from one computer to another. They work, and they use commonly installed software found on any distribution: just tar, gzip, and gnupg. I just happen to use Gentoo. Tweak to suit your needs and level of paranoia. Many people have told me not to do this, that I'm wasting space, etc., but it works, and hard drive space is cheap.

Logs are mailed to me nightly at the end of each backup run.
Andy Figueroa
hp pavilion hpe h8-1260t/2AB5; spinning rust x3
i7-2600 @ 3.40GHz; 16 gb; Radeon HD 7570
amd64/23.0/split-usr/desktop (stable), OpenRC, -systemd -pulseaudio -uefi -wayland
Post by Zucca » Tue Jan 21, 2020 10:48 am

I've been looking for a simple backup method for as long as I have kept backups.

Here's how I do my backups at the moment:
  • 1st line of defense: rely on btrfs data duplication and its read-only snapshots
    • protects against user errors (accidental file deletion) and, depending on the data layout, one or two disk failures
  • 2nd line of defense: rsync to an external hard drive
    • even better if the external backup drive has a filesystem with snapshotting functionality
  • 3rd line of defense (not implemented yet): use restic to copy the most important files to an off-site location (Amazon, Backblaze, ...)
    • ... Yes, restic does encrypt your data before sending it.
I consider the 3rd method less simple, as it requires more than the normal Linux utilities to restore from backups. But it's there in case a catastrophe were to happen.
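A rough sketch of that third line with restic; the repository string and paths are hypothetical, and the backend credentials plus RESTIC_PASSWORD would come from the environment:

Code: Select all

# One-time: create the encrypted repository at the off-site location.
restic -r b2:my-bucket:host1 init

# Periodic: back up the most important files (encrypted client-side).
restic -r b2:my-bucket:host1 backup /home/zucca/important /etc

# After a catastrophe: list snapshots and restore the latest one.
restic -r b2:my-bucket:host1 snapshots
restic -r b2:my-bucket:host1 restore latest --target /mnt/restore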
..: Zucca :..

Code: Select all

init=/sbin/openrc-init
-systemd -logind -elogind seatd
I am NaN! I am a man!
Post by alexander-n8hgeg5e » Sat Jan 25, 2020 11:08 pm

Zucca wrote:1st line of defense ...
These 3 lines are probably the ultimate strategy, leaving aside the details of implementing each line.
The snapshots are really good; I take one before making any dangerous changes. Sometimes I have to remove them to free disk space, but as long as there is free space, the more snapshots the better. I just type my "snap" shortcut to make one with a timestamp.
The other two lines then protect against system corruption and physical destruction of the local site. So there's nothing more to worry about.

add:
For line 2, the only way for me is the hard-drives-in-the-cupboard approach. I bought a USB 3 to SATA converter especially for this, but it turned out to be more reliable to use a native SATA port I had easily accessible, because I recently mounted a motherboard without a case on a monitor that only runs an X server for that display. So I use the disks as bare disks.
If someone has only old drives, an old motherboard, and an old power supply, this is a way to do it. I boot it over the network with a USB Ethernet adapter and a separate private network.
Post by dmpogo » Sat Jan 25, 2020 11:24 pm

For me the most important requirement for backup is that I should be able to forget about it running for months at a time. If it requires frequent manual intervention, I know myself: it will not get done.

From that requirement of automation stem some corollaries. For instance, the backup storage model should be fairly compact: backups should not overfill the disk in, say, half a year of daily runs, so I need proper scheduling of what is saved.

dmpogo wrote:For me the most important requirement for backup is ...
Btrfs snapshots would fulfill your requirement really well. You would need to somehow send the snapshots to a place away from your computer to get "line 3", and to a place outside your computer that, once written, your computer cannot delete, to get "line 2". So line 2 is the hardest one ...
The btrfs copy-on-write strategy is nearly optimal for space, I think. Renaming and moving files and all those things do not require extra space. You can do daily or hourly backups and only need space for newly created stuff. The only catch: if you delete something, all the snapshots that reference it have to be removed before the space is freed.
Post by dmpogo » Sun Jan 26, 2020 12:37 am

alexander-n8hgeg5e wrote:Btrfs snapshots would fulfill your requirement really well. ...
Honestly speaking, I would call 'backup' what starts from level 2 - i.e. what allows recovery from a hardware crash.
The 'first line' is an interesting problem, but a separate one.
Post by Tony0945 » Sun Jan 26, 2020 3:05 am

dmpogo wrote:For me the most important requirement for backup is that I should be able to forget about it running for months at a time. ...

Amen, brother!

I set up a cron job to back up my MsMoney file, appending the date. First I moved the file to the Linux computer; Windows reads it as a network neighborhood file.
For twenty years I had MsMoney set to make its own backup file. Then I discovered that there is only ever one such file! It always uses the same name. Now the cron job stamps each copy with the date, using date:

Code: Select all

#!/bin/bash

# Bail out if the data directory is missing.
cd /data/MNY || { echo "/data/MNY does not exist!"; exit 1; }

# Only copy if the file has changed since the last backup.
if [ MsMoney.mny -nt last-backup ]; then
	echo "creating backup"
	datestring=$(date +%Y-%m-%d)
	cp MsMoney.mny "MsMoney-${datestring}.mny"
	touch last-backup
fi
Another slothful problem: I now have 14G of daily backups stretching back to last August. Now I need a way to consolidate them monthly!
Post by alexander-n8hgeg5e » Sun Jan 26, 2020 3:23 am

dmpogo wrote:I would call 'backup' what starts from level 2 - i.e. what allows recovery from a hardware crash.
I too would only call things from level 2 up 'backup'.
The first line is like version management, pseudo-protecting the user from himself. But it's very effective.
It's like: don't put the fire near the flammable stuff.
Post by Hu » Sun Jan 26, 2020 4:54 am

Tony0945 wrote:Another slothful problem: I now have 14G of daily backups stretching back to last August. Now I need a way to consolidate them monthly!
What retention policy do you want? Always first day of the month? Always first Saturday of the month (or any other day of the week; point being that the day number migrates)? If the chosen day is missing, do you want to pick a different nearby day or just lose that month?
Post by dmpogo » Sun Jan 26, 2020 6:24 am

Tony0945 wrote:Amen, brother! ... Now I need a way to consolidate them monthly!

I was too lazy to script it myself; backuppc does the job for me. Once I let it run for 1.5 years, and then found all the backups of my home computers, my laptop and my wife's, in order, as expected.
Post by C5ace » Sun Jan 26, 2020 10:33 am

I use app-backup/rsnapshot for hourly full backups of my mail server to an external location. Works like a charm. Restores are very easy using rsync.
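For anyone curious, rsnapshot is mostly a config file plus cron entries. A minimal sketch, with made-up paths, host, and retention counts (note that rsnapshot.conf requires tabs between fields, and a remote source also needs cmd_ssh enabled in the real file):

Code: Select all

# /etc/rsnapshot.conf (fields are tab-separated)
snapshot_root	/mnt/external/snapshots/
retain	hourly	24
retain	daily	7
backup	root@mailserver:/	mailserver/

# crontab entries driving the rotation
0 * * * *	/usr/bin/rsnapshot hourly
30 3 * * *	/usr/bin/rsnapshot daily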
Observation after 30 years working with computers:
All software has known and unknown bugs and vulnerabilities. Especially software written in complex, unstable and object oriented languages such as perl, python, C++, C#, Rust and the likes.
Post by Tony0945 » Sun Jan 26, 2020 11:10 am

Hu wrote:What retention policy do you want? Always first day of the month? Always first Saturday of the month (or any other day of the week; point being that the day number migrates)? If the chosen day is missing, do you want to pick a different nearby day or just lose that month?
Not sure. I want a clean copy for sure. Twice recently I had to restore from backup because something was wrong in the db: it correctly listed my stock holdings but then wouldn't let me sell any shares, saying that it would make a future value negative. In the most recent instance I knew I had run Windows last (vs. VirtualBox Windows), so I restored the Windows .mbf backup and saved it with the .mny extension.

For many years I have wanted to write a Linux clone using an SQLite database. I can see how to do it; however, there would be a ton of grunt work to write the wxGTK code. MsMoney does a lot of things and is very versatile. It's abandonware. Microsoft actually made a deal with Intuit and gave them programs to convert from all versions of MsMoney to Quicken, advising users to migrate by having Intuit convert their files. I don't really like Quicken, and it's leaseware, which is a no-no for me: I'll buy software, but I won't rent it. Off-topic, I wonder when Microsoft will switch to the lease model, gouging say $100 every year from their users? Sounds far-fetched, but I caught my sister paying over $200 yearly to Norton for a subscription to Norton Defender or some such nonsense. I bought a faster drive, installed Gentoo, and transferred her Firefox and Thunderbird data. That was three years ago. Instead of updating, I'll ask my daughter (a Debian user) to e-mail me my sister's /var/lib/portage/world. I'll buy a nice 2.5 inch SATA SSD, build an updated version (I still have her old /home/<user> on my computer), and mail it back. My daughter can transfer the profiles. She still wants to rent Norton anti-virus. <primal scream>

I'm sure the program was not intended to hold twenty years of data. I'll have to do something.
Meanwhile, I'll try to remember to prune manually every month.
Post by C5ace » Sun Jan 26, 2020 3:25 pm

Try app-office/gnucash. It works fine for me for keeping bank and brokerage accounts.
https://www.gnucash.org/
Observation after 30 years working with computers:
All software has known and unknown bugs and vulnerabilities. Especially software written in complex, unstable and object oriented languages such as perl, python, C++, C#, Rust and the likes.
Post by Hu » Sun Jan 26, 2020 5:35 pm

One cheap way to handle aging out backups with declining frequency would be to use hard links to create aliases. For example:
  • Create daily-YYYYMMDD.tar every day.
  • At the same time, hard link that file (so almost no storage cost) to weekly-YYYYMMWW.tar (where WW comes from date +%W or similar, depending how you want to number weeks at the boundary between years). 6 days out of 7, this hard link will collide with a prior backup from the same week. It's your choice whether to ignore the collision or replace it. Either can work, as long as you pick a policy and stick to it.
  • At the same time, hard link the daily file to monthly-YYYYMM.tar. Again, most days this will collide.
Now you have dailies, but with selected dailies aliased to other names. Your cleanup job can be to delete files that start with daily- and are older than a certain date. The hard links to the selected files will keep their storage alive when their "daily" name expires. You could also delete files that start with weekly- that are older than a certain threshold. You would probably set the weekly threshold much higher than daily, say 60 days of dailies, and 6 months of weeklies.

If you miss creating a daily that would have been a long term backup, then the next daily you do create doesn't get a collision and becomes the long term backup for the week/month.

This scheme works regardless of whether you use Microsoft Money, GNUcash, or something else.
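A minimal cron-able sketch of that scheme, assuming the archives land in one directory and collisions are ignored (i.e. the first backup of each week/month wins); all paths and thresholds are hypothetical:

Code: Select all

#!/bin/sh
dir=/mnt/backup/money                 # hypothetical backup directory
day=$(date +%Y%m%d)
week=$(date +%Y%m%W)                  # %W numbers the week within the year
month=$(date +%Y%m)

tar cpf "$dir/daily-$day.tar" /data/MNY

# Alias today's daily under weekly/monthly names. ln fails if the name
# already exists, which implements the "ignore the collision" policy.
ln "$dir/daily-$day.tar" "$dir/weekly-$week.tar" 2>/dev/null
ln "$dir/daily-$day.tar" "$dir/monthly-$month.tar" 2>/dev/null

# Expire old names; hard links keep the selected archives' storage alive.
find "$dir" -name 'daily-*' -mtime +60 -delete
find "$dir" -name 'weekly-*' -mtime +180 -delete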