What is the best or cleanest backup procedure?
Gentoo Forums Forum Index Gentoo Chat
spork_kitty
Tux's lil' helper


Joined: 05 Jul 2019
Posts: 115

PostPosted: Tue Sep 10, 2019 1:04 am    Post subject: What is the best or cleanest backup procedure?

I was encouraged to start a new thread with this one. While discussing backing up / before installing a stage3, I asked "What's the cleanest way to backup?"

There are different types of backups, like the dumb-recursive "copy it all", partial-change, or filtered; some are meant to be used across differing filesystems, some retain permissions and mtime, some don't, etc. "Clean" means different things to different people, so if you answer, please clarify what you mean.

I think a clean backup:


  • is easy to produce
  • is easy to automate
  • is easy to restore from
  • doesn't leave any tempfiles or extra dirs lying about
  • represents the state correctly, i.e. xattrs, permissions, timestamps; all of it
  • verifies itself at some point (checksums, etc)


So, what are your backups like? What's important for your backups? Have a script to share?
CasperVector
Tux's lil' helper


Joined: 03 Apr 2012
Posts: 149

PostPosted: Tue Sep 10, 2019 2:35 am

* Use a suitable directory as the backup mirror, here called $BAKROOT.
* Copy files that need backups (mainly those modified by the user) to the mirror, eg. /etc/passwd -> $BAKROOT/etc/passwd.
* Record volatile directories in $BAKROOT/../rsync.filter (see "FILTER RULES" and "INCLUDE/EXCLUDE PATTERN RULES" in rsync(1)), eg.:
Code:
# First excludes.
- .keep*
- /etc/local.d/README

# Always update.
+ /etc/openvpn/***
+ /usr/local/portage/***

# Compressed archives.
- /usr/share/fonts/

# No longer installed.
- /etc/slim.conf
* And then on periodic backup (requires root; the slash in `./' is necessary due to the rsync(1) command line convention):
Code:
cd $BAKROOT
find | sed 's/^\.//' > ../include.lst
rsync --filter 'merge ../rsync.filter' --include-from=../include.lst \
      --exclude='*' -navh --delete-after / ./ | less
rsync --filter 'merge ../rsync.filter' --include-from=../include.lst \
      --exclude='*' -avh --delete-after / ./
rm -f ../include.lst
cd -
* To restore from the backup, swap `/' and `./' in the commands above.
_________________
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
spork_kitty
Tux's lil' helper


Joined: 05 Jul 2019
Posts: 115

PostPosted: Thu Sep 12, 2019 1:05 am

I like your approach with include.lst! I currently use a unified rsync filter with entries like this:

Code:

# system config directories
+ /etc/
+ /etc/abcde/
+ /etc/catalyst/
+ /etc/conf.d/
+ /etc/crossdev/
+ /etc/cups/
+ /etc/ddclient/
+ /etc/default/
+ /etc/init.d/
+ /etc/lighttpd/
+ /etc/lvm/
+ /etc/portage/
+ /etc/postgresql-10/
+ /etc/postgresql-11/
+ /etc/ssh/
+ /etc/X11/
+ /etc/X11/xorg.conf
+ /etc/X11/xorg.conf.backup
+ /etc/udev/
+ /etc/udev/rules.d/

# explicit files
+ /etc/exports
+ /etc/fstab
+ /etc/genkernel.conf
+ /etc/hosts
+ /etc/issue
+ /etc/locale.gen
+ /etc/lynx.lss
+ /etc/minidlna.conf
+ /etc/mpd.conf
+ /etc/ntpd.conf
+ /etc/rc.conf
+ /etc/sysctl.conf
+ /etc/vdpau_wrapper.cfg
+ /etc/wgetpaste.conf
+ /etc/xattr.conf
+ /etc/xinetd.conf
+ /etc/profile.d/
+ /etc/profile.d/env.sh

# We need to be able to self-backup!
+ /.baklist

# Exclusions
- /*
- /etc/*
- /etc/profile.d/*
- /etc/udev/*
- /etc/X11/*


And I call it with:

Code:

rsync -auHEm --delete --delete-excluded --no-D --progress --include-from="${BAK_LIST}" "${BAK_SOURCE%/}/" "${BAK_TARGET%/}/"


I'll need to revisit rsync's manpage; it seems I'm missing something that would make life easier.

If I'm not mistaken, is the only difference between your two commands the "-n" flag?
CasperVector
Tux's lil' helper


Joined: 03 Apr 2012
Posts: 149

PostPosted: Thu Sep 12, 2019 1:27 am

spork_kitty wrote:
If I'm not mistaken, is the only difference in your two commands the "-n" flag?

Yes, the first rsync(1) invocation is a dry run, to help ensure nothing is obviously wrong.
wjb
Guru


Joined: 10 Jul 2005
Posts: 408
Location: Fife, Scotland

PostPosted: Thu Sep 12, 2019 6:56 pm

My backups are done with two objectives
  • to be able to restore the odd file or directory where I've deleted or changed something I wish I hadn't.
  • to restore my data and settings on top of a new install, following a disaster or h/w upgrade.


The amount I back up has gone up a lot since I discovered borgbackup earlier this year. I now just skip temporary files, caches, virtual filesystems, mounts, ...

I do a daily backup with borg, which currently takes about 20 min; I keep 7 daily, 4 weekly, and 6 monthly sets.

Things I like
  • 20 minutes for a backup (the very first one takes longer) means daily is a practical option
  • backup sets can be mounted, so extracting/comparing is about as easy as it gets
  • the 'prune' command automates purging of old backup sets
  • deduplication means the overall collection of backup sets is tiny


A "borg init" is needed to set things up first time, but the body of the script is
Code:
export BORG_REPO=/path/to/repository
export BORG_PASSPHRASE='--whatever--'

borg create            \
     --verbose         \
     --filter AME      \
     --list            \
     --stats           \
     --show-rc         \
     --compression lz4 \
     --exclude-caches  \
     --exclude /proc         \
     --exclude /dev          \
     --pattern=-/dev         \
     --pattern=-/lost+found  \
     --pattern=-/proc        \
     --pattern=-/run         \
     --pattern=-/sys         \
     --pattern=-/tmp         \
     --pattern=-/usr/portage \
     --pattern=+/usr/src/linux-$(uname -r)     \
     --pattern=-/usr/src     \
     --pattern=-/var/tmp     \
     --pattern=-/var/db      \
     --pattern=-/var/cache   \
     --pattern=-/var/log     \
     $mountpoints            \
     $local_ignores          \
     ::'{hostname}-{now}'    \
     /

borg prune                          \
    --list                          \
    --prefix '{hostname}-'          \
    --show-rc                       \
    --keep-daily    7               \
    --keep-weekly   4               \
    --keep-monthly  6
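The one-time setup and the mount-based restore mentioned above could look roughly like this; the repository path, passphrase, and archive name are placeholders, not the actual values from the script, and the block bails out unless they are adjusted.

```shell
#!/bin/sh
# Sketch only: the repository path, passphrase, and archive name below
# are placeholders, not the actual values from the script above.
command -v borg >/dev/null 2>&1 || { echo "borg not installed; sketch only"; exit 0; }
export BORG_REPO=/path/to/repository
export BORG_PASSPHRASE='--whatever--'
[ -e "$BORG_REPO" ] || { echo "adjust BORG_REPO first; sketch only"; exit 0; }

borg init --encryption=repokey        # one-time repository creation
borg list                             # enumerate the backup sets
mkdir -p /mnt/borg
borg mount ::puma-2019-09-11T19:00:13 /mnt/borg   # browse one set read-only
# ...copy out or diff whatever is needed, then:
borg umount /mnt/borg
```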



Stats:
Code:
------------------------------------------------------------------------------
Archive name: puma-2019-09-11T19:00:13
Archive fingerprint: bbb97ad9d7541a5ec722929dffe25f531a7080ab419f3ef63387234c6ffaf323
Time (start): Wed, 2019-09-11 19:00:16
Time (end):   Wed, 2019-09-11 19:16:47
Duration: 16 minutes 30.46 seconds
Number of files: 1052830
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               70.39 GB             48.87 GB            280.71 MB
All archives:                1.26 TB            878.80 GB             60.46 GB

                       Unique chunks         Total chunks
Chunk index:                 1062155             18984341
------------------------------------------------------------------------------
terminating with success status, rc 0

Wed Sep 11 19:17:06 BST 2019 Pruning backups

Keeping archive: puma-2019-09-11T19:00:13             Wed, 2019-09-11 19:00:16 [bbb97ad9d7541a5ec722929dffe25f531a7080ab419f3ef63387234c6ffaf323]
Keeping archive: puma-2019-09-10T19:00:14             Tue, 2019-09-10 19:00:17 [5977c8b2cb9ef4156074ba49a9189f5dcaa036a7a5b752d05c481367d0ff331b]
Keeping archive: puma-2019-09-09T19:00:13             Mon, 2019-09-09 19:00:17 [378f5ac95d460d2915d1aeddf0770197ae8cd98c9177e3647ae8dd4bb71bfd59]
Keeping archive: puma-2019-09-08T19:00:13             Sun, 2019-09-08 19:00:17 [c58fca4de7ec1c486331c547e2966b08b621835c1d840035c242af64024adf96]
Keeping archive: puma-2019-09-06T19:00:14             Fri, 2019-09-06 19:00:17 [ad13ed80c65ae1b8a8a1a899118951dca7567c6688e1a7439033b38500f65732]
Keeping archive: puma-2019-09-05T19:00:10             Thu, 2019-09-05 19:00:13 [d96d2034b06bedb5ddecb5dc491b361572f7f6d4a2d0f769be2b549cb634e70a]
Keeping archive: puma-2019-09-04T19:00:13             Wed, 2019-09-04 19:00:17 [b8010cb696876315309fce0a27b3792db21376518eb9142e1da75edff2ab4bf2]
Pruning archive: puma-2019-09-03T19:00:11             Tue, 2019-09-03 19:00:14 [46384dfbabe12895d365563a33d521df447e4924208b57043a0df27e42e72940] (1/1)
Keeping archive: puma-2019-09-01T19:00:14             Sun, 2019-09-01 19:00:17 [b6a0521dd6752a2aa5b7dc8e352426a72fd95e54aed92a837479dd82d119493c]
Keeping archive: puma-2019-08-31T19:00:14             Sat, 2019-08-31 19:00:17 [a806ee87c17f916c4f5867841cb59c2627246ec4ef5f3861079d5df686b00fb6]
Keeping archive: puma-2019-08-25T19:00:13             Sun, 2019-08-25 19:00:16 [319564b7ebd583f74cd0f0fbff10db8a199b71e3a402c3e9c396bc3aa520be2d]
Keeping archive: puma-2019-08-16T19:00:14             Fri, 2019-08-16 19:00:17 [925ccd5d277b14984036d8e41526c6e2703d997b3df5930c6d9a9c269d64e734]
Keeping archive: puma-2019-08-11T19:00:13             Sun, 2019-08-11 19:00:17 [023a592f4b2d9aae2221ab71e8162a35a24049df1b1236c159d3fb866bfb7f0d]
Keeping archive: puma-2019-07-29T19:00:11             Mon, 2019-07-29 19:00:14 [ee25c515767da574ec85f73f1f22edee547bb71602bc8b0bc209356bebf98d77]
Keeping archive: puma-2019-06-30T19:00:13             Sun, 2019-06-30 19:00:16 [023ea3d802171368d155aeef64372b474923b24dffcc30100261ca20a8ff97b3]
Keeping archive: puma-2019-05-31T19:00:13             Fri, 2019-05-31 19:00:15 [9ec3b529da38d23075753777ae5f11031edadf8c5ceb5ac30c4799a1c50d5d55]
Keeping archive: puma-2019-04-30T19:00:13             Tue, 2019-04-30 19:00:15 [670ed0c9fa123cdc438bd62379776ef48ded4e50d3e63614126d59ab4461cb6f]
Keeping archive: puma-2019-03-31T19:00:07             Sun, 2019-03-31 19:00:09 [caab93a4b028f7de33442c085508d3752dcb2006641ee8dbd47537ac7d0059c3]
terminating with success status, rc 0
alexander-n8hgeg5e
n00b


Joined: 02 Nov 2019
Posts: 22

PostPosted: Mon Nov 11, 2019 9:53 am

For me, 'btrfs send'/receive turned out to be the best way to get fast, compressed, and somewhat deduplicated backups. I have a btrfsbenc script on GitHub (outdated; read the code, it could eat the cat) that can find corresponding snapshot pairs for incremental backups. I have some problems with my system quickly running out of memory and freezing during a backup, but it could be that my zstd compression is set too high: normally the max is 3, but I turned it up to 18 in the kernel code.

In terms of self-verification, btrfs should be suitable: you can scrub the disk, and it checksums everything.

I tried many backup tools with features like dedup, compression, and incremental operation. It seems hard to implement a fast one. I think zbackup (or some such) was somewhat good. Plain rsync lacks the fancy features, but it runs fast.
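The snapshot pairing btrfsbenc automates can be sketched with plain btrfs commands; the pool and backup paths and snapshot names below are hypothetical, and the commands need root plus a btrfs filesystem on both sides, so the block bails out otherwise.

```shell
#!/bin/sh
# Sketch of incremental send/receive; paths and snapshot names are
# hypothetical placeholders and the block exits unless they exist.
[ -d /mnt/pool/snap ] && [ "$(id -u)" -eq 0 ] || { echo "sketch only: adjust paths, run as root"; exit 0; }

# Take a new read-only snapshot of the live subvolume.
btrfs subvolume snapshot -r /mnt/pool/home "/mnt/pool/snap/home.$(date +%F)"
# First transfer: a full stream.
btrfs send "/mnt/pool/snap/home.$(date +%F)" | btrfs receive /mnt/backup/
# Later transfers: send only the delta against a parent snapshot that
# exists on BOTH sides -- the "corresponding pair" the script has to find.
btrfs send -p /mnt/pool/snap/home.old "/mnt/pool/snap/home.$(date +%F)" \
    | btrfs receive /mnt/backup/
```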
technotorpedo
n00b


Joined: 10 Dec 2019
Posts: 28

PostPosted: Wed Dec 11, 2019 5:32 am

Personally, another vote for rsync; there's no shortage of information on it, so I'll leave it there. One possible refinement worth noting, though: multicore de/compression tools. They've been around a LONG time, though apparently they aren't widely used, e.g. pigz instead of gzip. Integrating them could be good mojo; the how is for each end user to sort out and enjoy. I'm sure there are already tutorials and how-tos aplenty online anyway.
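As a small illustration of the idea (all paths here are throwaway temp dirs, not anyone's real layout): GNU tar's --use-compress-program option names the compressor, so pigz slots in where gzip would run, producing the same gzip file format. The sketch falls back to plain gzip when pigz isn't installed.

```shell
#!/bin/sh
# Hypothetical demo of swapping a multicore compressor into a tar backup.
set -e
src=$(mktemp -d); dst=$(mktemp -d)
echo 'some payload' > "$src/file.txt"

if command -v pigz >/dev/null 2>&1; then
    COMPRESS=pigz        # parallel gzip: uses all cores
else
    COMPRESS=gzip        # single-core fallback, same file format
fi

# Create and extract through the chosen compressor, then verify.
tar -C "$src" --use-compress-program="$COMPRESS" -cf "$dst/backup.tar.gz" .
mkdir "$dst/restore"
tar -C "$dst/restore" --use-compress-program="$COMPRESS" -xf "$dst/backup.tar.gz"
cmp "$src/file.txt" "$dst/restore/file.txt" && echo 'round trip ok'
```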
poncho
Tux's lil' helper


Joined: 06 Mar 2011
Posts: 89

PostPosted: Wed Dec 11, 2019 8:39 am

alexander-n8hgeg5e wrote:
For me 'btrfs send'/receive turned out to be the best way to get
fast compressed and somewhat deduplicated backups.


Same here. I'm using app-backup/btrbk.
erm67
Guru


Joined: 01 Nov 2005
Posts: 523
Location: EU

PostPosted: Wed Dec 11, 2019 5:47 pm

I also use btrfs on the nas with 4 disks in raid1 (data and metadata), better than RAID5 IMHO... I will rebalance important data to raid1c3 soon, now that it is mainlined, so that I am protected against the failure of two disks (better than RAID6)....
And I use btrfs send for backups, and duplicati for the windoze machines.
_________________
Ok boomer
True ignorance is not the absence of knowledge, but the refusal to acquire it.
Ab esse ad posse valet, a posse ad esse non valet consequentia

My fediverse account: @erm67@erm67.dynu.net
alexander-n8hgeg5e
n00b


Joined: 02 Nov 2019
Posts: 22

PostPosted: Tue Dec 17, 2019 5:07 pm

erm67 wrote:
I also use btrfs on the nas with 4 disks in raid1 (data and metadata), better than RAID5 IMHO... I will rebalance important data to raid1c3 soon, now that it is mainlined, so that I am protected against the failure of two disks (better than RAID6)....
And I use btrfs send for backups, and duplicati for the windoze machines.


It's really a fast way. And clean: you get the whole tree, incrementally and fast. As good as a fresh backup and as fast as an incremental one.

I coded a little program to help me automatically relate the snapshots to each other and find the ones that btrfs can base its transmission on. It was a bit confusing, but I think it works. So I want to look at btrbk and see how they did it.

My btrfs send/receive setup needs some fixing, however. I think it could be related to turning the zstd compression level up to 18 in the kernel code... the system freezes if I use btrfs send. So I'm back on rsync for now...
erm67
Guru


Joined: 01 Nov 2005
Posts: 523
Location: EU

PostPosted: Wed Dec 18, 2019 9:16 am

My nas has lots of big disks but a slow CPU, I use btrfs compression only on iscsi volumes ....
figueroa
Guru


Joined: 14 Aug 2005
Posts: 463
Location: GA-USA

PostPosted: Tue Jan 14, 2020 4:20 am

Tarball it. Multiple TARs, actually.

On alternating weeknights, (3 am) I run bu2tar.scr or bu2tar2.scr, then on Saturday I run bu2tar3.scr. These each do a full backup of all my personal data (no system files -- that's another set of scripts, below) onto a secondary internal hard drive using alternating different partitions.

Each of /mnt/backup, /mnt/backup2, and /mnt/backup3 are 128 G partitions on my large secondary hard drive which I only use for backups.

I suppose I waste a lot of energy, but I've been doing this for many years, I understand it, and they run while I'm sleeping. Here is what one of those scripts looks like:
Code:
$ cat bin/bu2tar.scr
#!/bin/sh
mount /mnt/backup
cd /
tar cpzf /mnt/backup/data/bak.tgz scratch/bak/
tar cpzf /mnt/backup/data/bin.tgz scratch/bin/
#tar cpzf cdrecord.tgz /scratch/cdrecord/
tar cpzf /mnt/backup/data/Documents.tgz scratch/Documents/
#tar cpzf dosc.tgz /scratch/dosc/
#tar cpzf dosg.tgz /scratch/dosg/
tar cpzf /mnt/backup/data/Downloads.tgz scratch/Downloads/
#tar cpzf dso.tgz /scratch/dso/
#tar cpzf err.tgz /scratch/err/
#tar cpzf home.tgz /home/
tar cpzf /mnt/backup/data/home.tgz -X /scratch/bin/exclude.home home/
tar cpf /mnt/backup/data/graphics.tar scratch/graphics/
#tar cpzf Mail.tgz /scratch/Mail/
#tar cpf /mnt/backup/data/mntavwork.tar mnt/av/work/
tar cpzf /mnt/backup/data/mozilla.tgz scratch/.mozilla/
tar cpf /mnt/backup/data/mp3.tar scratch/mp3/
tar cpzf /mnt/backup/data/pdf.tgz scratch/pdf/
tar cpf /mnt/backup/data/photos.tar scratch/photos/
#tar cpzf pkg.tgz /scratch/pkg/
#tar cpzf text.tgz /scratch/text/
tar cpzf /mnt/backup/data/thunderbird.tgz scratch/.thunderbird/
#tar cpzf /mnt/backup/data/vbox.tgz -X /scratch/bin/exclude.vbox home/figueroa/.VirtualBox/
tar cpzf /mnt/backup/data/vbox.tgz home/figueroa/.VirtualBox/
tar cpzf /mnt/backup/data/vboxmnt.tgz mnt/vbox/VDI/
tar cpzf /mnt/backup/data/vboxvm.tgz "home/figueroa/VirtualBox VMs/"
tar cpzf /mnt/backup/data/wav.tgz scratch/wav/
tar cpzf /mnt/backup/data/webdev.tgz scratch/webdev/
tar cpzf /mnt/backup/data/www.tgz scratch/www/
#tar cpzf /mnt/backup3/data/janbak.tgz home/jan/
cd
umount /mnt/backup


I have a personal bin directory at /home/myusername/bin which I use for my large collection of scripts. (Actually, bin in my home directory is a symlink and the real home bin is at /scratch/bin.) Why do I use a personal /scratch partition? Because, eons ago, drives were small and therefore partitions were small, and I ran out of space in /home, so I made /scratch and kept doing it as a habit. I don't recommend it. :-)

Here is my personal crontab:
Code:

#Sun, Tue, Thu
1   3   *   *   7,2,4   /home/figueroa/bin/bu2tar.scr
#Mon, Wed, Fri
1   3   *   *   1,3,5   /home/figueroa/bin/bu2tar2.scr
#Sat
1   3   *   *   6   /home/figueroa/bin/bu2tar3.scr


My personal data is found on either /home or /scratch, with some additional VirtualBox virtual machines on /mnt/vbox, all on drive /dev/sda, roughly 80 gig of data; a backup takes roughly an hour on a fast machine with spinning hard drives. You'll see some of the rows are commented out: old archival stuff where I no longer make any changes. Collections that make sense to compress are compressed, and things like mp3 files and photos are not.

Five times a month I back up my system files with another script that runs on specified days at 6 am:
Code:

$ cat bin/gentoo2bak.scr
#!/bin/sh
#gentoo2bak.scr
mount /mnt/backup
cd /
tar cpzf /mnt/backup/sysbak/bin.tgz --xattrs --numeric-owner bin/
tar cpzf /mnt/backup/sysbak/boot.tgz --xattrs --numeric-owner boot/
tar cpzf /mnt/backup/sysbak/dev.tgz --xattrs --numeric-owner dev/
tar cpzf /mnt/backup/sysbak/etc.tgz --xattrs --numeric-owner etc/
tar cpf /mnt/backup/sysbak/home.tar --xattrs --numeric-owner --no-recursion -X /scratch/bin/exclude.home2 home/*
tar cpzf /mnt/backup/sysbak/lib.tgz --xattrs --numeric-owner --no-recursion lib/
tar cpzf /mnt/backup/sysbak/lib32.tgz --xattrs --numeric-owner lib32/
tar cpzf /mnt/backup/sysbak/lib64.tgz --xattrs --numeric-owner lib64/
tar cpf /mnt/backup/sysbak/media.tar --xattrs --numeric-owner --no-recursion media/*
tar cpf /mnt/backup/sysbak/mnt.tar --xattrs --numeric-owner --no-recursion mnt/*
tar cpzf /mnt/backup/sysbak/opt.tgz --xattrs --numeric-owner opt/
tar cpf /mnt/backup/sysbak/proc.tar --xattrs --numeric-owner --no-recursion proc/
tar cpzf /mnt/backup/sysbak/root.tgz --xattrs --numeric-owner root/
tar cpf /mnt/backup/sysbak/run.tar --xattrs --numeric-owner --no-recursion run/
tar cpzf /mnt/backup/sysbak/sbin.tgz --xattrs --numeric-owner sbin/
tar cpf /mnt/backup/sysbak/sys.tar --xattrs --numeric-owner  --no-recursion sys/
tar cpf /mnt/backup/sysbak/tmp.tar --xattrs --numeric-owner --no-recursion tmp/
tar cpzf /mnt/backup/sysbak/usr.tgz --xattrs --numeric-owner -X scratch/bin/exclude.usr usr/
tar cpzf /mnt/backup/sysbak/usrportage.tgz --xattrs --numeric-owner -X scratch/bin/exclude.distfiles usr/portage/
tar cpzf /mnt/backup/sysbak/var.tgz --xattrs --numeric-owner -X scratch/bin/exclude.var var/
tar cpf /mnt/backup/sysbak/scratch.tar --xattrs --numeric-owner --no-recursion -X scratch/bin/exclude.scratch scratch/
tar cpzf /mnt/backup/janbak/janbak.tgz --xattrs --numeric-owner home/jan/
cd
umount /mnt/backup

This is what my root crontab looks like for those:
Code:

01      6       2,16   *       *   /home/figueroa/bin/gentoo2bak.scr
01      6       9,23   *       *   /home/figueroa/bin/gentoo2bak2.scr
01      6       30   *       *   /home/figueroa/bin/gentoo2bak3.scr


This is currently about 3.6 G, and takes 18 minutes, more or less.

Once a week, normally on Saturday, I make a compressed, encrypted archive onto an external 128 G USB 3 flash drive (about 95% of it used) and move these around to alternate locations. I use five different flash drives currently in rotation. These are for recovery from catastrophe; I expect never to need them. Running this takes 50-60 min. I do it manually, but I don't watch it, as I'm doing other things. The script looks like:
Code:
$ cat bin/targpgflash1.scr
#!/bin/sh
#targpgflash1.scr
#Encrypt and store data and OS backups on external media: targpgfire.scr
#If backup partition(s) already mounted, comment out mount and umount commands.

mount /mnt/backup

cd /mnt/backup
date > /run/media/figueroa/MicroCenter128/date1.txt
tar cvf - sysbak/* | gpg -c --batch --yes --passphrase-file /scratch/bin/.passrc --compress-algo none -o /run/media/figueroa/MicroCenter128/sysbackup.tar.gpg
date >> /run/media/figueroa/MicroCenter128/date1.txt

date > /run/media/figueroa/MicroCenter128/date2.txt
tar cvf - janbak/* | gpg -c --batch --yes --passphrase-file /scratch/bin/.passrc --compress-algo none -o /run/media/figueroa/MicroCenter128/janbackup.tar.gpg
date >> /run/media/figueroa/MicroCenter128/date2.txt
cd /

umount /mnt/backup

mount /mnt/backup3
cd /mnt/backup3
date > /run/media/figueroa/MicroCenter128/date3.txt
tar cvf - data/* | gpg -c --batch --yes --passphrase-file /scratch/bin/.passrc --compress-algo none -o /run/media/figueroa/MicroCenter128/databackup.tar.gpg
date >> /run/media/figueroa/MicroCenter128/date3.txt

cd /
umount /mnt/backup3


I didn't mention that once a week I also make a tarball of my wife's entire home directory, which she uses occasionally, including a Windows virtual machine for genealogy work, about 16 G. That's the last line of my system backup script; it used to be in my personal data backup script.

I have used my collection of tarballs to recover from hard drive failure as well as to move my system from one computer to another. They work, and they use commonly installed software found on any distribution: just tar, gzip, and gnupg. I just happen to use Gentoo. Tweak to suit your needs and level of paranoia. Many people have told me not to do this, that I'm wasting space, etc., but it works, and hard drive space is cheap.
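Since the opening post listed self-verification as a mark of a clean backup, a tarball scheme like this can at least check that each archive is readable and pin it with a checksum. A self-contained sketch on throwaway paths, not the actual layout above:

```shell
#!/bin/sh
# Make a tiny tarball in a temp dir, then verify it two ways.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
echo 'back me up' > "$tmp/data/file"
tar -C "$tmp" -cpzf "$tmp/data.tgz" data/

# Listing the archive fails loudly on a truncated or corrupt .tgz.
tar -tzf "$tmp/data.tgz" > /dev/null && echo 'archive readable'

# Record a checksum next to the archive so later bit-rot is detectable.
( cd "$tmp" && sha256sum data.tgz > data.tgz.sha256 && sha256sum -c data.tgz.sha256 )
```

Dropping a `tar -tzf ... > /dev/null` after each `tar cpzf` line would catch a failed write the same night the backup runs.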

Logs are mailed to me nightly at the end of each backup run.
_________________
Andy Figueroa
andy@andyfigueroa.net Working with Unix since 1983.
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1613
Location: KUUSANKOSKI, Finland

PostPosted: Tue Jan 21, 2020 10:48 am

I've been looking for a simple backup method for as long as I have kept backups.

Here's how I do my backups at the moment:

  • 1st line of defense: rely on btrfs data duplication and its read-only snapshots
    • protects from user errors (accidental file deletion) and, depending on the data layout, one or two disk failures

  • 2nd line of defense: rsync to an external hard drive
    • even better if external backup drive has a filesystem with snapshotting functionality

  • 3rd line of defense (haven't implemented it yet): use restic to copy the most important files to an off-site location (Amazon, Backblaze, ...)
    • ... Yes. restic does encrypt your data before sending it.

I consider the 3rd method not as simple, since it requires more than the normal Linux utilities to restore from backups. But it's there in case a catastrophe were to happen.
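The off-site step could be sketched like this with restic; the repository URL and passphrase are placeholders, restic encrypts client-side, and after a one-time `restic init` the routine is just backup/check. The block bails out unless the placeholders are adjusted.

```shell
#!/bin/sh
# Sketch only: repository URL and passphrase below are placeholders.
command -v restic >/dev/null 2>&1 || { echo "restic not installed; sketch only"; exit 0; }
export RESTIC_REPOSITORY=s3:s3.amazonaws.com/my-bucket
export RESTIC_PASSWORD='--whatever--'
restic cat config >/dev/null 2>&1 || { echo "repository not reachable; sketch only"; exit 0; }

restic backup /home/important                  # encrypted, deduplicated upload
restic snapshots                               # list stored backup sets
restic check                                   # verify repository integrity
restic restore latest --target /tmp/restore    # pull files back when needed
```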
_________________
..: Zucca :..

Code:
ERROR: '--failure' is not an option. Aborting...