NotQuiteSane Guru
Joined: 30 Jan 2005 Posts: 488 Location: Klamath Falls, Jefferson, USA, North America, Midgarth
Posted: Fri Mar 27, 2009 10:05 am
Is it possible to add an option to the script to compress with LZMA?
Thanks,
NQS
_________________ These opinions are mine, mine I say! Piss off and get your own.
As I see it -- An irregular blog, improved with new location
To delete French language packs from system use 'sudo rm -fr /'
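[For anyone who wants LZMA without waiting for a script option: GNU tar can filter the archive through any compressor. A minimal sketch, assuming GNU tar and xz (which implements LZMA2, the successor to lzma) are installed; all paths here are throwaway placeholders, not the script's real variables:]

```shell
# Build a tiny demo tree standing in for the filesystem to back up.
mkdir -p /tmp/lzma-demo/etc
echo "hostname=demo" > /tmp/lzma-demo/etc/conf

# -J tells GNU tar to pipe the archive through xz (LZMA2 compression).
tar -C /tmp -cJf /tmp/stage4-demo.tar.xz lzma-demo

# List the archive to confirm the file made it in.
tar -tJf /tmp/stage4-demo.tar.xz
```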
xnmrphr n00b
Joined: 28 Jul 2009 Posts: 1
Posted: Tue Jul 28, 2009 7:29 pm
Great tool/script, thanks!
El_Goretto Moderator
Joined: 29 May 2004 Posts: 3169 Location: Paris
Posted: Thu Sep 03, 2009 4:08 pm
This was the second time I used this script to restore or migrate an OS from one machine to another, after yeeeaaaars of using it for backups.
No problems at all, once again.
Still years of use to come.
And thanks to its author.
_________________ -TrueNAS & jails: µ-serv Gen8 E3-1260L, 16GB ECC + µ-serv N40L, 10GB ECC
-Network: APU2C4 (OpenWRT) + GS726Tv3 + 2x GS108Tv2 + Archer C5v1 (OpenWRT)
Gibbo_07 n00b
Joined: 15 Nov 2009 Posts: 38 Location: Brisbane, AU
Posted: Mon Nov 30, 2009 7:22 am
Another thanks for an old but still praiseworthy topic.
I have tuned the script to my needs and it works a treat; it has even saved my Gentoo install already. I do have a question/bug report, however, if the author (or a more gifted scripter than myself) can work out why. I had set:
Code:
# patterns which should not be backed up (like iso files).
# example: default_exclude_pattern="*.iso *.divx"
# These patterns count only for files NOT listed in the $custom_include_list.
default_exclude_pattern="*.iso *.pk4"
yet I still have .pk4 files being backed up in the archive. They are located in /opt, namely the Quake 4 data files, which I don't need backed up. I would, however, like the config files and binary backed up, which is why I am trying to get this function to work rather than adding the directory to the exclude list.
+1 to anyone who can figure this out.
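[tar itself does honour glob excludes, so the problem is more likely in how the script assembles its pattern list than in tar. A small sketch of roughly what the script presumably runs underneath, using made-up paths purely for illustration, showing that --exclude='*.pk4' skips the pak files while keeping the config:]

```shell
# Toy /opt layout mimicking the Quake 4 case (hypothetical paths).
mkdir -p /tmp/excl-demo/opt/quake4
touch /tmp/excl-demo/opt/quake4/pak001.pk4
touch /tmp/excl-demo/opt/quake4/q4config.cfg

# Archive everything except files matching *.pk4.
tar -C /tmp -cf /tmp/excl-demo.tar --exclude='*.pk4' excl-demo

# The listing should show the config but no pak file.
tar -tf /tmp/excl-demo.tar
```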
Gibbo_07 n00b
Joined: 15 Nov 2009 Posts: 38 Location: Brisbane, AU
Posted: Mon Nov 30, 2009 6:26 pm
I worked around the issue: I added /opt/quake4 to the custom include list and set the custom exclude pattern to match *.pk4. This works, so my thinking is that the global exclude is broken?
Thanks.
rb34 Guru
Joined: 03 Oct 2004 Posts: 361 Location: Rome, italy
Posted: Sun Dec 06, 2009 9:24 pm
Hi all!
I found that the line
Code: `find /mnt -mount -name .keep`
should be
Code: `find /mnt -mount -name .keep`
because if you don't add -mount, find descends the whole directory structure of any mounted partition!
That "bug" leads to two problems:
* if you, like me, have a partition with many directories, the old find command takes a lot of time!
* and, if you have a backup made with rsnapshot (with many .keep files) on a mounted partition, the final command sent to "sh -c" can exceed the argument size limit!
Adding "-mount" makes find remain on the same partition, and this is fine, as we only need to find the .keep files under /mnt itself and not on the mounted partitions.
_________________ rb
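[For reference, -mount is the GNU find synonym for -xdev: it prunes any directory that sits on a different filesystem from the starting point. A sketch on a throwaway directory; without root and a real mount it can't show an actual filesystem crossing being pruned, but it demonstrates the syntax and that -mount still finds everything on the starting filesystem:]

```shell
# Throwaway tree standing in for /mnt (no real mounts involved).
mkdir -p /tmp/mnt-demo/gentoo/usr /tmp/mnt-demo/backup
touch /tmp/mnt-demo/gentoo/usr/.keep /tmp/mnt-demo/backup/.keep

# -mount keeps find on the filesystem it started on; on a real system
# this stops it descending into partitions mounted below the start dir.
find /tmp/mnt-demo -mount -name .keep
```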
Gibbo_07 n00b
Joined: 15 Nov 2009 Posts: 38 Location: Brisbane, AU
Posted: Mon Dec 07, 2009 3:24 am
rb34 wrote:
Hi all!
I found that the line
Code: `find /mnt -mount -name .keep`
should be
Code: `find /mnt -mount -name .keep`
because if you don't add -mount, find descends the whole directory structure of any mounted partition!!
That "bug" leads to two problems:
* if you, as me, have a partition with many dirs, the old find command takes a lot of time!
* and, if you have a backup made with rsnapshot (with many .keep files) on a mounted partition, the final command sent to "sh -c" can exceed the argument size limit!
Adding "-mount" makes find remain on the same partition, and this is fine, as we need only to find the .keep under /mnt and not on the mounted partitions

Both lines you posted are identical.
Also, I can't find a line matching that?
edit: my bad, I have v3.5; I should have gotten it from the author's webpage the first time around.
rb34 Guru
Joined: 03 Oct 2004 Posts: 361 Location: Rome, italy
Posted: Mon Dec 07, 2009 10:56 am
I'm sorry: the line
Code: `find /mnt -name .keep`
should be replaced with
Code: `find /mnt -mount -name .keep`
_________________ rb
johabba n00b
Joined: 23 May 2004 Posts: 22
Posted: Tue Apr 20, 2010 6:09 pm
Gibbo_07 wrote:
edit: my bad I have v3.5 - should have gotten it from authors webpage first time around

How exactly do you get this script? I tried to copy/paste it from the author's website, but it does not copy the indentation cleanly. I also tried "wget 'http://blinkeye.ch/dokuwiki/doku.php/projects/mkstage4'", hoping the script was wrapped in "pre" HTML tags so I could delete the rest of the HTML.
Maybe the author could put "pre" tags around the script? They are already used around the section starting with "Backup script v3.5", and that section does work when wget'ing and removing the surrounding HTML.
Killerchronic Tux's lil' helper
Joined: 24 Apr 2007 Posts: 91 Location: UK
Posted: Wed Dec 22, 2010 12:53 pm
johabba wrote:
I tried to copy/paste from the author website but it does not copy the indents cleanly.

Not sure what you're doing differently, or if this has changed, but a simple copy and paste worked fine for me, indents and all.
Thanks for this; I've been hunting for a backup solution, and hopefully this will fill my need. I was considering Bacula, but for two computers that are now fully set up I think it's a bit much.
kwispy Tux's lil' helper
Joined: 10 Mar 2003 Posts: 82
Posted: Thu Oct 08, 2015 1:51 pm
I went searching on Google for the maintained mkstage4.sh, as the original download link is broken. I found it on GitHub, which seems to be the most up-to-date version, as it has been rewritten a few times. It also seems to be in the chymeric overlay.
FYI for anyone following in my frustrated tracks, and for continuity's sake.
Astronome Tux's lil' helper
Joined: 02 Jan 2016 Posts: 148
Posted: Tue Jan 26, 2016 5:57 am
kwispy wrote:
I went searching for the maintained mkstage4.sh on Google, as the original download link is broken. I found it on github, which seems to be the most up-to-date version, as it has been re-written a few times. It seems to be in the chymeric overlay.
FYI for anyone who is following in my frustrated tracks, and for continuity's sake.

Thanks! Works great, and runs much faster than it did in 2005.
pa1983 Tux's lil' helper
Joined: 09 Jan 2004 Posts: 101
Posted: Fri Oct 28, 2016 9:44 pm
I just noticed that tar 1.29 seems buggy; I found a bug report with the same problem I have.
Basically, -X and the exclude options don't seem to work, so in my case it started to archive my external hard drive, and eventually it would have tried to compress terabytes of data off my file server.
Anyway, I was also annoyed by the time it takes to use tar: it is single-threaded, so my 6-core CPU with 12 threads can't really do much work.
I installed pbzip2 and used that as the compressor instead. This is how I compressed my Gentoo install, and it was MUCH faster, with all 12 threads at 100%:
Code: `tar -I pbzip2 -cvf /mnt/sdc1/voyager-20161028-stage4.tar.bz2 / -X stage4.excl`
I have not unpacked it yet, since I'm waiting for my new SSD, but harnessing all the cores on a modern CPU should be a goal, and I think it should be added to the wiki.
I got the info from here, and am testing it out:
http://stackoverflow.com/questions/12313242/utilizing-multi-core-for-targzip-bzip-compression-decompression _________________ NAS: i3 4360 3.7GHz, 20GB RAM, 256GB SSD, 65TB HDD, NIC: Intel 2x1Gbit, Realtek 2.5Gbit
ROUTER: J1900 2GHz, 8GB RAM, 128GB SSD, NIC: 2x1Gbit, WIFI: Atheros AR9462 and AR5005G
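[The -I trick works with any parallel compressor, not just pbzip2. A sketch on a throwaway directory using xz -T0 instead (multithreaded xz, swapped in here because pbzip2 may not be installed everywhere); the real invocation above targets / with an exclude file:]

```shell
# Toy directory standing in for the real root filesystem.
mkdir -p /tmp/mt-demo
dd if=/dev/zero of=/tmp/mt-demo/blob bs=1024 count=512 2>/dev/null

# -I hands the archive stream to an arbitrary compressor; -T0 tells xz
# to use every available core, the same idea as pbzip2 for bzip2.
tar -C /tmp -I 'xz -T0' -cf /tmp/mt-demo.tar.xz mt-demo

# GNU tar auto-detects the compression when listing.
tar -tf /tmp/mt-demo.tar.xz
```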
cv01302 n00b
Joined: 08 Jan 2005 Posts: 19
Posted: Sat Nov 19, 2016 6:05 pm
It's true!
I emerged pbzip2 (I already had tar-1.29-r1 installed), modified the initial script to use pbzip2 instead of bzip2, and htop shows all cores being used at 100%.
On an i7-4771, 13 GB of data (root partition only) were compressed to 3.5 GB (best option) in under 7.5 minutes. The unedited script took 16 minutes, so that's about twice as fast (although I expected the difference to be even bigger).
Kudos to pa1983 for this nice tip!
davidbrooke Guru
Joined: 03 Jan 2015 Posts: 341
Posted: Sat Nov 19, 2016 7:16 pm
The pbzip2 tip is nice, but why not use something that actually works? The --exclude problem is a major roadblock.
I found fsarchiver
http://www.fsarchiver.org/Main_Page
via
https://wiki.gentoo.org/wiki/Backup
Fsarchiver has pbzip2 along with other compression options, as well as core/thread setup.
I tried both fsarchiver and mkstage4, and fsarchiver worked better for me.
cv01302 n00b
Joined: 08 Jan 2005 Posts: 19
Posted: Sun Nov 20, 2016 10:43 am
Well, with tar-1.29-r1 the exclusions were honoured, so I guess the bug that was stated to exist in 1.29 isn't there. My compressed file was 3.5 GB; if it had ignored the exclusions, my user home partition would be in there too, along with my other HDD partitions, and the file would be way, way larger than 3.5 GB.
I just checked the archive as well: indeed, no file from the exclusion list was included, so I guess it is good to go. The real test would be to restore the image and check that everything works perfectly, but that's for a later time.
I went ahead and tried fsarchiver (with the -A flag, in order to simulate the same conditions as before), using the exact same exclusions as with the script, along with -j 8 (all cores) and -z 9 (best compression; lzma when z > 7 -- however, no pbzip2 support as you stated, just lzo, gzip, bzip2 and lzma). The result was 3.7 GB in 11 minutes. Changing it to -z 7 (still lzma, but without the best compression), the result was 3.9 GB and an astonishing 3 minutes, so I guess it is a winner. Decompression will be much faster too, due to lzma, in comparison with pbzip2.
Fsarchiver looks very promising indeed: I liked the checksumming of the data, the ability to restore corrupt archives, and the ability to encrypt data with a password.
Nice tip, David, thanks! Fsarchiver is the winner for me.