Gentoo Forums
HOWTO backup to DVDs as if they were magnetic tapes
Gentoo Forums Forum Index -> Documentation, Tips & Tricks

ayqazi
Apprentice


Joined: 10 Apr 2005
Posts: 164

Posted: Mon Feb 27, 2006 11:24 am    Post subject: HOWTO backup to DVDs as if they were magnetic tapes

PLEASE NOTE THIS DOCUMENT IS WITHOUT WARRANTY! IT MAY DAMAGE YOUR SYSTEM, BLOW UP YOUR MONITOR OR GIVE YOUR CAT PILES! USE AT YOUR OWN RISK!

Hello,

I explored tar's tape backup features, and noted how you could back up an archive to multiple tapes. I don't have tapes, but I have DVDs. Couldn't it work?

Unfortunately, it seems not. You can't do a 'tar -cf /dev/dvdrw XXX' as you can a 'tar -cf /dev/st0 XXX' - the /dev/dvdrw device doesn't accept data written to it directly, unlike the tape device.

You need a burning utility to write to DVDs. There has to be a way of linking them together.....

We could always create a tar archive and then burn it onto a DVD, but if we create an iso image and store the tar archive in it, we run into a limitation: mkisofs unfortunately only accepts files up to 2GB in size. We want to use the full 4.3GB of a DVD+R(W)!

Code:
$ tar -c /lots/of/stuff -f stuff.tar
$ ls -sh stuff.tar
4GB stuff.tar
$ growisofs -r -Z /dev/dvd stuff.tar
mkisofs error: file too large for defined data type


So, we don't write an iso image to the disk. We just write the tar file directly to the disk!

Code:
$ tar -c /lots/of/stuff -f stuff.tar
$ ls -sh stuff.tar
4GB stuff.tar
$ growisofs -Z /dev/dvd=stuff.tar # NOTE THE EQUALS SIGN!
...
$


Then to read it back again:
Code:
$ tar -tf /dev/dvd


Fine! But what if we don't want to create an intermediate file? We just pipe tar's output into growisofs:

Code:
$ tar -c /lots/of/stuff | growisofs -Z /dev/dvd=/dev/stdin
...
$ tar -tf /dev/dvd


That's fine too! But what if we want to back up more than 4GB of data? We split up the files we wanna back up into portions of 4GB each, then....

nah, that's too fiddly. We want to treat our DVDs as tapes, so that when the space runs out on one tape, we just carry on our backup on another tape! tape, dvd whatever - same thing!

tar was designed for use with tapes (tar = Tape ARchive). It has a feature called 'multi-volume' archives. When an archive is too big for one tape, it prompts you for another tape - you swap tapes, press 'return', and the archive carries on on the next tape. Excerpt from the info file:

Tar info page wrote:
When you specify `--multi-volume' (`-M'), `tar' does not report an error when it comes to the end of an archive volume (when reading), or the end of the media (when writing). Instead, it prompts you to load a new storage volume. If the archive is on a magnetic tape, you should change tapes when you see the prompt; if the archive is on a floppy disk, you should change disks; etc.

You can read each individual volume of a multi-volume archive as if it were an archive by itself. For example, to list the contents of one volume, use `--list' (`-t'), without `--multi-volume' (`-M') specified. To extract an archive member from one volume (assuming it is described in that volume), use `--extract' (`--get', `-x'), again without `--multi-volume' (`-M').

If an archive member is split across volumes (i.e. its entry begins on one volume of the media and ends on another), you need to specify `--multi-volume' (`-M') to extract it successfully. In this case, you should load the volume where the archive member starts, and use `tar --extract --multi-volume'; `tar' will prompt for later volumes as it needs them. See the info section on extracting archives for more information.
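A quick way to see this multi-volume behaviour locally is to use ordinary files as stand-in 'tapes'. All paths here are made up for the demo, and giving tar one -f per volume makes it switch volumes without prompting:

```shell
# archive 30KB of data across two 20KB "tapes";
# -L (--tape-length) counts in units of 1024 bytes
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big" bs=1k count=30 2>/dev/null
tar -c -M -L 20 -f "$tmp/vol1" -f "$tmp/vol2" -C "$tmp" big
# listing the whole set needs -M again, with the volumes in order
tar -t -M -f "$tmp/vol1" -f "$tmp/vol2"
```

As the info excerpt says, a single volume can also be listed on its own with a plain 'tar -t', though tar will complain once it hits the member truncated at the volume boundary.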


The problem is, you can't write directly onto a DVD device like you can onto magnetic disk and tape devices; you have to use an intermediate program. Secondly, although tar can read from a DVD device, it cannot find out when it is at 'the end of the volume' (a quirk of the DVD device driver and DVD drives themselves).

So, we must provide some way of sorting out these problems.

Code:
mknod /tmp/dvd-bak.pipe p   # or equivalently: mkfifo /tmp/dvd-bak.pipe


This named pipe is the key: it will let tar think it's reading from and writing to a tape, while in fact we supply and consume the data it needs by other means. Other means? Read on...
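The blocking behaviour a named pipe gives us can be seen with a toy pipe (the path is made up for the demo):

```shell
pipe=/tmp/demo-$$.pipe
mkfifo "$pipe"
# the writer blocks until someone opens the other end for reading...
( echo "data from tar's side" > "$pipe" ) &
# ...and this read is what unblocks it, receiving the data
read line < "$pipe"
echo "$line"
wait
rm -f "$pipe"
```

In the backup below, tar plays the writer and growisofs the reader.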

Let's say we want to backup our whole Gentoo installation. We don't want to back up /home, because we'll back that up separately. There are also a few other places we don't want to back up. We don't have the space to store extra back up files, so we need to output straight onto DVD. Here, we use DVD+R or DVD+RWs. For DVD-R(W), see the end of these ramblings.

Open a terminal or start a terminal emulator, and, as root (or use sudo) run the following:

tar Terminal

Code:
tar --create / --verbose --one-file-system --preserve --multi-volume --tape-length=4580000 --exclude /tmp --exclude /var/tmp --exclude /var/run --exclude /usr/portage/distfiles --exclude /home -f /tmp/dvd-bak.pipe


OR

Code:
tar -cvpML 4580000 --one-file-system --exclude /tmp --exclude /var/tmp --exclude /var/run --exclude /usr/portage/distfiles --exclude /home -f /tmp/dvd-bak.pipe /
# (--exclude has no single-letter form; -X expects a file listing exclude patterns)


Note the files we '--exclude'? Backing them up wouldn't do much good - they are either regenerated at each boot, are temporary files that shouldn't be kept permanently, or can be downloaded again. As we said, /home will be backed up separately.

Also note the '--tape-length=' argument - we can't count on our DVD burning program to tell tar when it has reached the end of the DVD, so we must tell tar how long each 'tape' is. Here a DVD+R(W) is being used, so we supply 4580000 kilobytes, about 10MB less than its maximum capacity (I'm just paranoid like that). How do we discover a medium's capacity? See the end of these ramblings to find out.

Hmm... nothing happens. This is because the 'named pipe' (/tmp/dvd-bak.pipe) is 'blocking': until something starts reading from it, it stops the program from proceeding. Don't worry - this is normal, and is in fact a feature!

Insert a disc that you want to write to into the DVD drive, and keep another one or two handy depending on how much you want to back up (experiment with DVD+RWs first - I'm not liable for any coasters you produce with DVD+R discs!)

Now, on ANOTHER TERMINAL (i.e. switch to a new Linux virtual terminal, or open a new terminal window on X):

growisofs Terminal

Code:
growisofs -Z /dev/dvd=/tmp/dvd-bak.pipe


The DVD starts burning!

When growisofs has finished, switch back to the tar Terminal; you should see a prompt:
Code:
Prepare volume #2 for `/tmp/dvd-bak.pipe' and hit return:


So, insert another disc! Then hit return. Nothing happens. That's right - tar is blocked again, until something starts reading from the other end of the pipe!

Switch to the growisofs Terminal, and repeat the command:

Code:
growisofs -Z /dev/dvd=/tmp/dvd-bak.pipe


If you specified the '--verbose' or '-v' switch to tar, you can switch back to the tar Terminal and see the files it is backing up.

Keep on repeating this process, until all your files are backed up.

Hang on a moment - what about untarring the files on the discs?

As the tar info entry said, if a file is wholly on one tape (or disc :-) ) it can be obtained simply by treating it as a normal tar archive.
Code:
<put disc in drive>
tar -xvf /dev/dvdrw 'name of file'


But if we want to extract a file that is straddling two archives, or we want to extract the whole backup, then we must use the '--multi-volume' switch again (or just '-M').

TERMINAL 1
Code:
$ cd <where we want to extract>
$ tar -xvMpf /tmp/dvd-bak.pipe


TERMINAL 2:
Code:
<insert disc 1 of backup set>
$ dd if=/dev/dvd of=/tmp/dvd-bak.pipe bs=1k count=4580000


Then when tar requests the next archive on TERMINAL 1, switch to TERMINAL 2, insert the next disc, and repeat the 'dd' command, switch back to TERMINAL 1, and press enter at the tar prompt. Voila! I hope....

Why do we use 'dd' and not simply 'cat /dev/dvdrw > /tmp/dvd-bak.pipe'? Because the /dev/dvdrw device will return all the data on the disc, even data left over from an old write that happens to sit after the data you are trying to read. So we limit the amount of data that is read, to stop stray data coming back and confusing the tar command.
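The effect is easy to simulate with an ordinary file standing in for the disc (all names and contents here are made up):

```shell
disc=$(mktemp)
printf 'NEW-ARCHIVE' >  "$disc"    # the 11 bytes we actually burned
printf 'STALE-JUNK'  >> "$disc"    # older data still present after them
got_cat=$(cat "$disc")                              # returns everything, junk included
got_dd=$(dd if="$disc" bs=1 count=11 2>/dev/null)   # count= stops at the archive's end
echo "cat saw: $got_cat"
echo "dd  saw: $got_dd"
rm -f "$disc"
```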

Phew... tell me if this is too complicated please, but I think I've covered everything - enjoy.

Discovering a DVD medium's capacity

We use the dvd+rw-mediainfo tool
Code:
$ dvd+rw-mediainfo /dev/dvd
INQUIRY:                [LITE-ON ][DVDRW SOHW-1693S][KS0B]
GET [CURRENT] CONFIGURATION:
 Mounted Media:         1Ah, DVD+RW
 Media ID:              RICOHJPN/W11
 Current Write Speed:   4.0x1385=5540KB/s
 Write Speed #0:        4.0x1385=5540KB/s
GET [CURRENT] PERFORMANCE:
 Write Performance:     4.0x1385=5540KB/s@[0 -> 2295103]
 Speed Descriptor#0:    00/2295103 R@8.0x1385=11072KB/s W@4.0x1385=5540KB/s
READ DVD STRUCTURE[#0h]:
 Media Book Type:       92h, DVD+RW book [revision 2]
 Legacy lead-out at:    2295104*2KB=4700372992
READ DISC INFORMATION:
 Disc status:           complete
 Number of Sessions:    1
 State of Last Session: complete
 Number of Tracks:      1
 BG Format Status:      suspended
READ TRACK INFORMATION[#1]:
 Track State:           complete
 Track Start Address:   0*2KB
 Free Blocks:           0*2KB
 Track Size:            2295104*2KB
FABRICATED TOC:
 Track#1  :             14@0
 Track#AA :             14@2295104
 Multi-session Info:    #1@0
READ CAPACITY:          2295104*2048=4700372992


Note that this is a DVD+RW disc that has already been formatted by growisofs. However, I believe that the line reading 'Legacy lead-out at: 2295104*2KB=4700372992' gives the capacity of the disc regardless of whether it is empty, partly filled or full. Don't hold me to that. Note that 2295104 is the number of 2K blocks on the disc - the number of kilobytes is therefore 2295104*2 = 4590208. I supplied '4580000' to the '--tape-length=' argument of tar, i.e. about 10MB less than the full capacity.

If you use DVD-R(W)s, then you will have to use a different size. It'll probably be about 4100000KB but CHECK FIRST. I suggest using about 10MB (10240KB) less than the max capacity, just to be safe. You could use the full capacity if you wanted, I suppose. Just remember to supply the exact same size to both the initial 'tar' command when creating the archive as well as the 'dd' command when extracting.
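To save doing the sums by hand, the block count can be pulled out of the mediainfo output and converted. A sketch, run here against a saved line of the output above rather than a live drive (for real use, pipe dvd+rw-mediainfo /dev/dvd through the same sed); note it subtracts exactly 10MB (10240KB), so the result differs slightly from the hand-rounded 4580000 used above:

```shell
# in real use: line=$(dvd+rw-mediainfo /dev/dvd | grep 'Legacy lead-out')
line=' Legacy lead-out at:    2295104*2KB=4700372992'
blocks=$(printf '%s\n' "$line" | sed 's/.*: *\([0-9]*\)\*.*/\1/')
kb=$((blocks * 2))          # 2KB blocks -> kilobytes
safe=$((kb - 10240))        # knock ~10MB off for --tape-length=
echo "capacity: ${kb}KB, tape-length: ${safe}KB"
```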
fangorn
Veteran


Joined: 31 Jul 2004
Posts: 1886

Posted: Mon Feb 27, 2006 12:12 pm

Just a tip:
If you drop the iso9660 support and write the tar to a UDF-formatted disc, you can burn one big file to the disc until it's full. With UDF you will have error-correction capabilities at hand, which makes for more decent backup archives.
_________________
Video Encoding scripts collection | Project page
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2977
Location: Germany

Posted: Mon Feb 27, 2006 12:14 pm

This is a really interesting post, thanks for the information.

Actually, I'm already using growisofs together with pipes. I found it necessary to add a software buffer between mkisofs / whatever and growisofs when burning, because growisofs does not provide a buffer by itself and the burner's internal buffer is not big enough to cope with pauses in transmission that are longer than a quarter of a second. So burning on the fly while the system was under load was a problem. Of course, buffer underflows are not a problem anymore with modern burners, but still, I feel uncomfortable when the burner has to pause, and plus it is unnecessary delay for the burning program.

Now, after telling you a long and unnecessary story, my question to you: how performant is burning tar on the fly onto DVD? I take it you are not using compression, so I guess it's about the same as mkisofs, yes? Does your burner always get data continuously, or does it have to pause when you're working with your PC during the backup process? If it doesn't pause, what kind of system do you have?

I'm itching to try your approach, although I think I will have to add a buffer to it and wrap the whole thing in a script, since I find it too troublesome to do manually every time.
ayqazi
Apprentice


Joined: 10 Apr 2005
Posts: 164

Posted: Mon Feb 27, 2006 12:15 pm

Unfortunately, mkisofs doesn't support UDF properly, and I don't really want packet writing, just fast backing up. But please, if you know how to do it, show the way!
JeliJami
Veteran


Joined: 17 Jan 2006
Posts: 1086
Location: Belgium

Posted: Mon Feb 27, 2006 12:16 pm

great reading!

How does this procedure relate to dar or scdbackup, both mentioned in a previous post of yours?
Pros and cons?
_________________
Unanswered Post Initiative | Search | FAQ
Former username: davjel
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2977
Location: Germany

Posted: Mon Feb 27, 2006 12:24 pm

fangorn wrote:
If you drop the iso9660 support and write the tar to a UDF-formatted disc, you can burn one big file to the disc until it's full. With UDF you will have error-correction capabilities at hand, which makes for more decent backup archives.


Ummm, although I've read some documentation about UDF, it never mentioned anything about error correction capabilities. Could you explain / point to some docs that describe how this UDF error correction works and how much space it requires?
ayqazi
Apprentice


Joined: 10 Apr 2005
Posts: 164

Posted: Mon Feb 27, 2006 12:50 pm

frostschutz: the burner just slows down if it doesn't receive enough info - sometimes burning at 0.1x speed! But the data's OK. I too thought of the buffering approach, but couldn't find any tools to help me do that. Still, it does the job for me.


davjel: the problem with these tools is that they're not standard across distributions. If you need your data quickly, you can bet that tar and dd will be on the machine you're reading from! The same can't be said of scdbackup. That was my main motivation. scdbackup is more automated, has more options, and is definitely the more robust solution. It's just not standard.
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2977
Location: Germany

Posted: Mon Feb 27, 2006 1:31 pm

ayqazi wrote:
frostschutz: the burner just slows down if it doesn't receive enough info - sometimes burning at 0.1x speed! But the data's OK. I too thought of the buffering approach, but couldn't find any tools to help me do that. Still, it does the job for me.


Well, if you are interested in adding a buffer to your script, I'm using the 'bfr' program for that (it is the only one I could find that is also provided by Gentoo; besides, it's quite powerful). The (very simple & stupid) shell scripts I use for burning with this buffer are here:

http://www.metamorpher.de/files/burn-dvd-image.sh
http://www.metamorpher.de/files/burn-dvd-fly.sh

Please note that you have to edit them to fit your requirements (burner device, buffer size, etc.).

It should really work the same with your method as you are using pipes as well... you just have to add another pipe between your data source (tar) and the burning program (growisofs) which will then be occupied by the buffering program.

Of course, if it should turn out that tar is just too slow to provide data in general (when using compression, it most certainly will be), a buffer won't help here unless it's very very big.

On my own machine, I find that a 32MB buffer is enough for most applications for my 4x burner. Faster burners may need bigger buffers, as they get depleted much more quickly. I recently upgraded my machine's RAM to make work with the Gimp more performant (went from 512MB to 1GB of RAM now). However, when I'm not working with Gimp, the RAM is pretty much unused, so I use a whopping 128MB buffer for burning which proved to be sufficient for me even under extreme load.
ayqazi
Apprentice


Joined: 10 Apr 2005
Posts: 164

Posted: Mon Feb 27, 2006 5:54 pm

Actually, it's when a lot of small files are being read that the burner slows down. Backing up, say, /usr/lib is fine - but backing up, say, /usr/portage, with all those small files - 0.4x burn speed :-)
frostschutz
Advocate


Joined: 22 Feb 2005
Posts: 2977
Location: Germany

Posted: Mon Feb 27, 2006 7:25 pm

Right. I just did some tests, and tar on /usr/portage puts out only 250KB/s of data - and that's on my fastest hard disk, which does 50MB/s according to hdparm... :( So obviously, a buffer won't help you there at all, as even a 128MB buffer will go empty within seconds.

That horrible performance makes me want to benchmark filesystems though. Maybe reiserfs would perform better for /usr/portage, as we're talking about small files here. Currently I'm using xfs.

As for your method, I still think the idea is pretty cool, but as I don't like buffer underruns on the burner at all, I think I'll stick with creating the file first and burning afterwards.
ayqazi
Apprentice


Joined: 10 Apr 2005
Posts: 164

Posted: Tue Feb 28, 2006 12:26 am

Heh - maybe it's 'cos I'm using reiserfs that even my old crappy Pentium 3 with an ATA33 hard disk can just about supply enough data to the cd burning bitch to stop it buffer underrunning (although growisofs does sometimes report speeds as low as 0.1x :-)
dmartinsca
Guru


Joined: 09 Dec 2005
Posts: 303
Location: Ontario, Canada

Posted: Thu Apr 13, 2006 2:26 am

Interesting post! I have been toying with something like this for a while now but didn't know how to set it up - pipes had never crossed my mind, probably because I've never used them before ;)

Something that jumps out at me: when you are using dd to read back multiple discs, what happens on the final disc if it is not filled with the 4580000 kB you are telling dd to read? I would guess that tar just sees some bytes which indicate the end of the archive and quits. Can someone verify this? I just finished a backup using dar (Kdar actually), so all my dvd-rws are in use, otherwise I'd be testing this out! :)

I created archive files larger than 2GB and used k3b to burn them to DVDs. k3b displayed a dialog explaining that udf extensions would need to be turned on to make the disc readable due to the 2GB limit for iso9660 filesystems. The discs read fine, but the mkisofs man page explains that udf support is in the alpha stage and can only be used along with joliet, so probably not the best option when doing backups :oops:
yabbadabbadont
Advocate


Joined: 14 Mar 2003
Posts: 4791
Location: 2 exits past crazy

Posted: Thu Apr 13, 2006 4:09 am

If memory serves, tar archives are written in 512 byte blocks. The end of archive is indicated by two full blocks of zeros.

I wrote a DOS version of tar years ago that correctly handled archives that would span multiple floppies. I never found any other DOS tar program that could do that. There were some that could read a spanned archive, but none that could create them. Since there wasn't a World Wide Web at the time, I had to rely on an incomplete man page description (which was *almost* right) and hex editing sample tar archives. It was a lot of fun actually. :)

Update: I just took a look at a test tar file I created. It looks like maybe the block size is 256 bytes instead of 512. Also, GNU tar seems to append a shitload of zeros to the end of the archive. A simple 250 byte file resulted in a 10k tar file. I know that doesn't match up with the specs that were used in the *old* days. I guess the format has changed somewhat. The header blocks looked the same though.

Update2: I just looked at the source code. I should trust my memory more. :oops: It does use 512 byte blocks. It looks like the archive layout I used was the old V7 format. Apparently you can still specify that it be used with a command line option. I guess it's always good to know that tar archives created back in '89 can still be read. :lol:
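Both observations check out quickly - a sketch assuming GNU tar's defaults (blocking factor 20, i.e. 10240-byte records), which also explains the "10k tar file" above:

```shell
tmp=$(mktemp -d)
printf 'hello' > "$tmp/f"
tar -cf "$tmp/t.tar" -C "$tmp" f
wc -c < "$tmp/t.tar"     # one full record: 20 x 512-byte blocks = 10240
# the end-of-archive marker (two 512-byte zero blocks) plus zero padding
# means the tail of the file contains no non-zero bytes at all
tail -c 1024 "$tmp/t.tar" | tr -d '\0' | wc -c
```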
_________________
Bones McCracker wrote:
On the other hand, regex is popular with the ladies.
ayqazi
Apprentice


Joined: 10 Apr 2005
Posts: 164

Posted: Thu Apr 13, 2006 9:04 am

dmartinsca wrote:
Something that jumps out at me: when you are using dd to read back multiple discs, what happens on the final disc if it is not filled with the 4580000 kB you are telling dd to read? I would guess that tar just sees some bytes which indicate the end of the archive and quits. Can someone verify this?

I created archive files larger than 2GB and used k3b to burn them to DVDs. k3b displayed a dialog explaining that udf extensions would need to be turned on to make the disc readable due to the 2GB limit for iso9660 filesystems. The discs read fine, but the mkisofs man page explains that udf support is in the alpha stage and can only be used along with joliet, so probably not the best option when doing backups :oops:


I have tested this myself - when the tar archive comes to an end on the second DVD before the end of 4580000 kB, tar just stops and exits "normally".

As for writing to ISO files, I don't do that. I just do: growisofs -Z /dev/dvdrw=/bak/a-pipe

In other words, I'm not writing an iso or udf filesystem to the disc, I'm writing a tar archive to it! Then I don't have to worry about filesystem limitations!

If I wanted ease of use though, I'd just use scdbackup from scdbackup.sf.net - it seems much easier and 'user-friendly'.
Doogman
Apprentice


Joined: 24 Sep 2004
Posts: 242

Posted: Sun Apr 23, 2006 9:51 pm

Great topic!

For backups, I like to stick to tried and true, and I've been using tar for years - way back when I was using tape, of course. Lately, I've used tar to back up my small headless server box over the network. After some upgrades, the lil server ended up with a DVD burner, so I wondered if I could just burn a DVD this way for a backup. Your topic saved me some reading. Thanks!

Luckily for me, my backup can easily fit on one DVD, so I don't have to bother with swapping.

Here's how I do it:

I've always kept a file called "excludes" that lists directories I don't wish to back up.

doug@mothra ~ $ cat /etc/excludes
/tmp
/var/tmp
/proc
/sys
/mnt
/home/torrent
/usr/portage

And then to backup:

tar -c --totals -S --preserve -X /etc/excludes -V "`date`" / | buffer -m 20m | growisofs -Z /dev/dvd=/dev/stdin

The buffer program really helps with the throughput. However you look at it, this is WAY faster than those old floppy and IDE tape drives I used to screw with.

Just for grins, here's the line for network backup:

tar -c --totals -S --preserve -X /etc/excludes -V "`date`" / | buffer -m 20m | nc -w3 servername 4321

And at the destination:

nc -l -p 4321 > backup.tar

I've moved quite a few linux distros around with nc and tar.
phsdv
Guru


Joined: 13 Mar 2005
Posts: 372
Location: Europe

Posted: Sat Sep 02, 2006 8:32 am

ayqazi wrote:
the burner just slows down if it doesn't receive enough info - sometimes burning at 0.1x speed!
Try removing the --verbose flag (or -v). Especially on slow machines, the CPU spends more time showing the file names than transferring the data!
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

Posted: Sun Sep 03, 2006 4:43 pm

Can these steps be automated in a script, such that I only have to change the DVD and press enter (one terminal only)? What about using dual-layer discs? Does growisofs support them?
ayqazi
Apprentice


Joined: 10 Apr 2005
Posts: 164

Posted: Mon Sep 04, 2006 8:57 am

devsk wrote:
can these steps be automated in a script, such that I only have to change the dvd and press enter (1 terminal only)? what about using dual-layer disks? does growisofs support them?


1. Yes, they can. You'd have to do a bit of process management - running tasks concurrently, polling their exit values or something like that, or perhaps using signal handlers? I'm not sure what to do in bash; maybe someone else is. I'd just write a handling script in Ruby, quite frankly. Or even better, use scdbackup, which is already a script. This is just a quick-n-dirty solution I thought up 'cos I wanted fame and fortune.

2. I don't know - ask on the growisofs list? I assume it does, but I don't intend to use a 1 pound disc to experiment with :-)
devsk
Advocate


Joined: 24 Oct 2003
Posts: 2995
Location: Bay Area, CA

Posted: Mon Sep 04, 2006 5:41 pm

I figured that tar can safely be put into the background with '&' (recording $!), and we can run growisofs in the same terminal: check its return, read 'enter' from stdin (before invoking it again), echo 'enter' (so tar carries on), and continue. Exit only when the tar process with the recorded pid exits. I will post if I ever write it myself.
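Something along those lines does seem workable. A sketch (untested against a real burner): here growisofs is replaced by a plain cat-to-file so the plumbing can be tried safely, every path is made up, and GNU tar's -F/--new-volume-script hook (which implies -M) stands in for the interactive prompt - in real use -F would point at a small "swap the disc and wait" script. There is a tiny race between the reader noticing end-of-volume and tar reopening the pipe; the disc-swap pause covers it in practice.

```shell
#!/bin/bash
out=$(mktemp -d)
pipe="$out/data.pipe"
mkfifo "$pipe"

dd if=/dev/zero of="$out/big" bs=1k count=30 2>/dev/null   # toy payload
# real use: tar -cpM -L 4580000 -F ./swap-disc.sh -f "$pipe" --one-file-system /
( tar -c -M -L 20 -F /bin/true -f "$pipe" -C "$out" big
  touch "$out/.tar-done" ) &

vol=1
until [ -e "$out/.tar-done" ]; do
    # stand-in for: growisofs -Z /dev/dvd="$pipe"
    cat "$pipe" > "$out/vol$vol.tar"
    vol=$((vol + 1))
    sleep 1    # give tar time either to exit or to reopen the pipe
done
wait
ls "$out"
```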