

1clue wrote:
... I've been doing backups for decades. At one point I used magnetic tape, until such time as I actually had to use the tape. I found out then that, as was more common than you might imagine, the backups were not restorable. ...

What were you using? My experience has been the reverse, except for the egregiously awful Travan type drives that used the floppy controller. In particular and recently, I've had spectacularly good results with DLT.
1clue wrote:
Whatever solution you use, you need to test a restore. If you can't restore it, it's not a backup. It's a waste of time.

Ahh! A kindred spirit.
1clue wrote:
cp -prd /home /mnt/backups/2014-04-10/home

That is what I usually do (in one form or another).
John R. Graham wrote:
1clue wrote:
... I've been doing backups for decades. At one point I used magnetic tape, until such time as I actually had to use the tape. I found out then that, as was more common than you might imagine, the backups were not restorable. ...

What were you using? My experience has been the reverse, except for the egregiously awful Travan type drives that used the floppy controller. In particular and recently, I've had spectacularly good results with DLT.

I have no idea what the brand was anymore; I haven't used tape for probably 20 years. It was a DAT drive, and I think it was SCSI. That was the most recent, but I've been using tape since back in the 9-track (open reel) days. At least then I knew the data was good, because we used to swap tapes between one office and another.
I love tape. It's the only way to get a really deep backup history economically. Currently, I use app-backup/flexbackup to perform automatic nightly incrementals to tape.
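flexbackup's own configuration isn't shown in this thread, so as a rough illustration only: the level-based incrementals it performs can be sketched directly with GNU tar's --listed-incremental snapshot files (writing to ordinary files here; a tape device such as /dev/nst0 would take the archive file's place):

```shell
#!/bin/sh
# Sketch of level-based incrementals with GNU tar's snapshot file.
# Not flexbackup's actual config; just the underlying mechanism.
set -e
BASE=$(mktemp -d)                 # stand-in for a real data directory
mkdir -p "$BASE/data"
echo "monday" > "$BASE/data/a.txt"

# Level 0 (full): archives everything and records file state in state.snar.
tar --create --file="$BASE/full.tar" \
    --listed-incremental="$BASE/state.snar" -C "$BASE" data

# A later change...
echo "tuesday" > "$BASE/data/b.txt"

# Level 1 (incremental): copy the snapshot first (tar updates it in
# place), then archive only what changed since the level 0.
cp "$BASE/state.snar" "$BASE/state1.snar"
tar --create --file="$BASE/incr.tar" \
    --listed-incremental="$BASE/state1.snar" -C "$BASE" data
```

Restoring walks the chain in order: extract the full archive, then each incremental with --listed-incremental=/dev/null.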
lexflex wrote:
1clue wrote:
cp -prd /home /mnt/backups/2014-04-10/home

That is what I usually do (in one form or another).

First, I decide on a frequency of backups. I don't have a network backup plan; I back up host by host, except for the workstations, which use a shared drive that gets backed up.
However, for me, the important question is whether it is also possible to detect changes (i.e. how long should I keep old backups?).
So my question would be: do any of the proposed tools do that?
Disk crashes are relatively easy: you know your data is destroyed, so you restore your last backup before that moment and take your loss (a couple of days, weeks, or months).
I am mostly worried about the possibility that files get corrupted. Is there a way to create a backup (tar) file and check some kind of checksum?
(Not only for the changed files, which is obvious; I would like to be alerted if an 'old' file suddenly changed.)
So, preferably, I would like to make a backup, compare it with my last backup, and then be alerted about changes involving files that were already there in the old backup (the new files will obviously be 'new').
Any advice on that?
Or is the only way to keep at least one backup per year, or something like that?
Alex.
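One way to get the alert Alex asks about (a sketch; the paths are examples, and in practice you would store one manifest alongside each backup) is to keep a sha256 manifest per backup run and join consecutive manifests, flagging files that exist in both but whose checksums differ:

```shell
#!/bin/sh
# Sketch: detect silently changed 'old' files by comparing checksum
# manifests from two backup runs. Paths below are illustrative.
set -e
BASE=$(mktemp -d)
mkdir -p "$BASE/home"
echo "stable" > "$BASE/home/old.txt"

# Manifest at backup time N (sorted by path so join works).
( cd "$BASE" && find home -type f -exec sha256sum {} + | sort -k2 ) \
    > "$BASE/manifest-old.txt"

# ...later, old.txt is (perhaps silently) modified.
echo "corrupted?" > "$BASE/home/old.txt"
( cd "$BASE" && find home -type f -exec sha256sum {} + | sort -k2 ) \
    > "$BASE/manifest-new.txt"

# Join on the path (field 2); a file present in both manifests with
# different hashes is an 'old' file that changed and deserves an alert.
# New files appear only in the new manifest and are skipped by join.
join -j 2 "$BASE/manifest-old.txt" "$BASE/manifest-new.txt" \
    | awk '$2 != $3 { print $1 }' > "$BASE/changed.txt"
cat "$BASE/changed.txt"    # here: home/old.txt
```

Legitimate edits and silent corruption look the same to a checksum, so the alert list still needs a human (or mtime comparison) to tell them apart.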

1clue wrote:
... If you take into account the cost of the drive, tape can't possibly compete. For $100 you get more than a terabyte of SATA2 or SATA3, whatever your system can handle. You have a genuinely re-writable random access device which is absolutely no different from a normal hard drive.

Well, I don't know. My last purchase of media was nine 320 GiB SDLT tapes for $45. That's 2.8 TiB of storage for less than $50 (cheating a little bit, because there was shipping in addition to that, but you get the idea). Plus, when I drop a tape, it generally still works thereafter.
Missing parentheses around qw(...) statement in
/usr/lib/BackupPC/Storage/Text.pm, line 301
/usr/lib/BackupPC/Lib.pm, line 1420
Added to /etc/BackupPC/config.pl
338:$Conf{SockDir} = '/var/run/BackupPC';
Added to /usr/bin/BackupPC
92: my $SockDir = $bpc->SockDir();
Changed in /usr/bin/BackupPC
367: unlink("$SockDir/BackupPC.pid");
368: if ( open(PID, ">", "$SockDir/BackupPC.pid") ) {
371: chmod(0444, "$SockDir/BackupPC.pid");
1850: unlink("$SockDir/BackupPC.pid");
1889: my $sockFile = "$SockDir/BackupPC.sock";
1972: unlink("$SockDir/BackupPC.pid");
Added to /usr/lib/BackupPC/Lib.pm
119: SockDir => '/var/run/BackupPC',
128: SockDir => '/var/run/BackupPC',
192:sub SockDir
{
my($bpc) = @_;
return $bpc->{SockDir};
}
Changed in /usr/lib/BackupPC/Lib.pm
152: foreach my $dir ( qw(... SockDir) ) {
697: my $SockFile = "$bpc->{SockDir}/BackupPC.sock";
Changed in /usr/lib/BackupPC/CGI/Lib.pm
47: use vars qw(... $SockDir);
79: $Cgi %In ... $SockDir
Added to /usr/lib/BackupPC/CGI/Lib.pm
101: $SockDir = $bpc->SockDir();
111: $SockDir = $bpc->SockDir();
Added to /usr/lib/BackupPC/Config/Meta.pm
117: SockDir => "string",
ServerRoot "/usr/lib64/apache2"
HostnameLookups Off
ServerName 192.168.222.8
LoadModule actions_module modules/mod_actions.so
.
.
.
LoadModule vhost_alias_module modules/mod_vhost_alias.so
Include /etc/apache2/modules.d/*.conf
<IfDefine DEFAULT_CONF>
User apache
Group apache
LockFile /var/run/apache2/apache.lock
PidFile /var/run/apache2/apache.pid
ScoreBoardFile /var/run/apache2/apache.scoreboard
</IfDefine>
<IfDefine BACKUP_CONF>
User backuppc
Group backuppc
LockFile /var/run/BackupPC/apache.lock
PidFile /var/run/BackupPC/apache.pid
ScoreBoardFile /var/run/BackupPC/apache.scoreboard
</IfDefine>
Include /etc/apache2/vhosts.d/vhosts.conf
<IfDefine DEFAULT_CONF>
Listen 80
NameVirtualHost *:80
# The Default vhost
<IfDefine DEFAULT_VHOST>
<VirtualHost *:80>
ServerName localhost
Include /etc/apache2/vhosts.d/default_vhost.include
<IfModule mpm_peruser_module>
ServerEnvironment apache apache
</IfModule>
</VirtualHost>
</IfDefine>
# A vhost without secure connection
<VirtualHost *:80>
ServerName sysInfo.home.dkw
Include /etc/apache2/vhosts.d/sysInfo.include
<IfModule mpm_peruser_module>
ServerEnvironment apache apache
</IfModule>
</VirtualHost>
# A vhost with secure connection. The ssl.include file will rewrite.
<VirtualHost *:80>
ServerName courierAdmin.home.dkw
<IfDefine SSL>
<IfModule ssl_module>
Include /etc/apache2/vhosts.d/ssl.include
</IfModule>
</IfDefine>
Include /etc/apache2/vhosts.d/courierAdmin.include
<IfModule mpm_peruser_module>
ServerEnvironment apache apache
</IfModule>
</VirtualHost>
# The SSL portion
<IfDefine SSL>
<IfModule ssl_module>
Listen 443
NameVirtualHost *:443
<IfDefine SSL_DEFAULT_VHOST>
<VirtualHost *:443>
ServerName localhost
Include /etc/apache2/vhosts.d/ssl.include
Include /etc/apache2/vhosts.d/default_vhost.include
<IfModule mpm_peruser_module>
ServerEnvironment apache apache
</IfModule>
</VirtualHost>
</IfDefine>
<VirtualHost *:443>
ServerName courierAdmin.home.dkw
Include /etc/apache2/vhosts.d/ssl.include
Include /etc/apache2/vhosts.d/courierAdmin.include
<IfModule mpm_peruser_module>
ServerEnvironment apache apache
</IfModule>
</VirtualHost>
</IfModule>
</IfDefine>
</IfDefine>
<IfDefine BACKUP_CONF>
Listen 8080
NameVirtualHost *:8080
<IfDefine DEFAULT_VHOST>
<VirtualHost *:8080>
ServerName localhost
Include /etc/apache2/vhosts.d/default_vhost.include
<IfModule mpm_peruser_module>
ServerEnvironment apache apache
</IfModule>
</VirtualHost>
</IfDefine>
<VirtualHost *:8080>
ServerName Backup.home.dkw
<IfDefine SSL>
<IfModule ssl_module>
Include /etc/apache2/vhosts.d/ssl.include
</IfModule>
</IfDefine>
Include /etc/apache2/vhosts.d/backup.include
<IfModule mpm_peruser_module>
ServerEnvironment backuppc backuppc
</IfModule>
</VirtualHost>
<IfDefine SSL>
<IfModule ssl_module>
Listen 8043
NameVirtualHost *:8043
<IfDefine SSL_DEFAULT_VHOST>
<VirtualHost *:8043>
ServerName localhost
Include /etc/apache2/vhosts.d/ssl.include
Include /etc/apache2/vhosts.d/default_vhost.include
<IfModule mpm_peruser_module>
ServerEnvironment apache apache
</IfModule>
</VirtualHost>
</IfDefine>
<VirtualHost *:8043>
ServerName Backup.home.dkw
Include /etc/apache2/vhosts.d/ssl.include
Include /etc/apache2/vhosts.d/backup.include
<IfModule mpm_peruser_module>
ServerEnvironment backuppc backuppc
</IfModule>
</VirtualHost>
</IfModule>
</IfDefine>
</IfDefine>
ErrorLog /var/log/apache2/ssl_error_log
<IfModule log_config_module>
TransferLog /var/log/apache2/ssl_access_log
CustomLog /var/log/apache2/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
</IfModule>
# this makes an http url jump to https:
<IfDefine DEFAULT_CONF>
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/(.*) https://%{SERVER_NAME}/$1 [R,L]
</IfDefine>
<IfDefine BACKUP_CONF>
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^/(.*) https://%{SERVER_NAME}:8043/$1 [R,L]
</IfDefine>
SSLEngine on
SSLOptions +StrictRequire
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile /etc/ssl/apache2/server.crt
SSLCertificateKeyFile /etc/ssl/apache2/server.key
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<IfModule setenvif_module>
BrowserMatch ".*MSIE.*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
</IfModule>
DocumentRoot "/var/www/Backup.home.dkw/htdocs"
# This makes just the basic url jump to the BackupPC_Admin file
RedirectMatch "^/$" /BackupPC_Admin
<Directory "/var/www/Backup.home.dkw/htdocs">
Options Indexes FollowSymLinks
AllowOverride None
<IfDefine SSL>
<IfModule ssl_module>
SSLOptions +StdEnvVars
</IfModule>
</IfDefine>
SetHandler perl-script
PerlResponseHandler ModPerl::Registry
PerlOptions +ParseHeaders
Options +ExecCGI
Order allow,deny
Allow from all
# I use LDAP for authentication where possible; my users have the same credentials for basic login, mail and here.
# You can use any Apache authentication as long as it ultimately yields a username. BackupPC itself manages which
# users have access to what: $Conf{CgiAdminUsers} = 'backuppc hika'; in config.pl names the admin users and controls
# which configuration items the other users may manage, and in hosts.pl, where the backups are defined, each backup
# gets a list of users who can access it. You can switch the logged-in user via https://<username>@<hostname>:<port>
AuthType Basic
AuthName "Backup Admin"
AuthBasicProvider ldap
AuthLDAPURL "ldap://ldap.home.dkw:389/dc=home?uid?sub?(objectClass=*)"
AuthLDAPGroupAttribute memberUid
AuthLDAPGroupAttribute member
AuthLDAPGroupAttributeIsDN Off
# Not needed, since the user supplies the credentials
# AuthLDAPBindDN
# AuthLDAPBindPassword
# Not working for some as yet unknown reason
# Require ldap-group cn=backuppc,ou=Groups,dc=home
# Require ldap-group cn=smbadmins,ou=Groups,dc=home
Require valid-user
</Directory>
<IfModule alias_module>
Alias /image/ "/var/www/Backup.home.dkw/htdocs/image/"
</IfModule>
<Directory "/var/www/Backup.home.dkw/htdocs/image">
SetHandler None
Options Indexes FollowSymLinks
Order allow,deny
Allow from all
</Directory>
Quote:
Hell, maybe I should fork it and start maintaining it. It's hard to walk away from a piece of software that does exactly what you need it to do and does it so well.

You have my vote! Especially if you take on rdiffweb with it.
1clue wrote:
OK John, so how much does it cost for your office to be down?

That's exactly what business interruption insurance coverage is for. Especially in the case of catastrophic loss, execs are going to be too busy doing their job to interfere with letting system administrators work on data recovery.
Navar wrote:
@jrg, wow, DLTs have really gone down in cost in 15 years. I'm kind of surprised they still exist.

I can't believe you said that. Are you sure you don't want to rethink this from a business perspective? There are so many things wrong with this I don't even know where to start.
1clue wrote:
Navar wrote:
1clue wrote:
OK John, so how much does it cost for your office to be down?

That's exactly what business interruption insurance coverage is for. Especially in the case of catastrophic loss, execs are going to be too busy doing their job to interfere with letting system administrators work on data recovery.

I can't believe you said that. Are you sure you don't want to rethink this from a business perspective? There are so many things wrong with this I don't even know where to start.

I can. And any well-run business evaluates its risks as much as possible and hedges appropriately. There are always factors out of your control.
1clue wrote:
Wow again.

I'm trying to overlook the overt wow euphemisms.
You totally missed what I was wowing about above. I suppose that's my bad, considering how terse my post was.
Quote:
What surprises me is that you would rely on that insurance so you can tolerate a slow, possibly kludgy backup system when something far better exists for not much more money, and needs zero special software.

Budgets, which have been ever decreasing over the last 15 years. The automated after-hours backup system wasn't slow, unless you feel full checksum verification with logs is slow. A full restore didn't take that long. A good business retains capital to back downtime operating risks.
Quote:
Insurance aside (not my part of the business), the cost of downtime is always significant, insurance or not. Even with the best possible insurance, you have a loss: they won't pay you the full value of your time and anticipated income, and they also generally don't pay full replacement cost of your equipment.

Companies and policies vary. Proof of claims? The generality is something to the effect of: actual loss of business income you sustain due to the necessary suspension of your “operations” during the period of “restoration.”
Quote:
Most importantly, they don't pay you for lost customers.

Kinda reaching there? If you have a serious loss, you're hoping just to survive the aftermath to recover. Those without these hedges to support them generally are gone, sometimes sinking their owners financially with them.
Quote:
From my perspective, the ability to get data back rapidly outweighs anything a tape can give you. Random-access backups with no compression give you the ability to reduce your downtime hugely. Insurance or not, that's a big money saver when you need it.

Interesting perspective, for the sake of convenience. You have offline backup set up this way? And what are the clients that have this need?
Quote:
Insurance might pay 60% of your total loss, but if your backup system reduces that gross loss from $10,000 to $2,000 then you just paid for a lot of backup storage.

Interesting numbers. One would arguably ask why anyone would bother buying insurance, ever.
Quote:
If you read far enough back in this thread, I also used tapes of all sorts. I had a need to restore from a tape -- which failed -- and then on testing realized my tapes were essentially worthless.

Unless you had no rotation and infrequent verifies, I'm unsure what happened in your case, and why. Older hardware, dirty heads, someone waving tapes near a large transformer, etc. I've had similar faults with platters, particularly expensive IBM SCSI drives before they got out of the business. When those died in a sudden manner, there was no chance to attempt a raw forensic recovery read; they were fully bricked at firmware/drive spinup. Less than 6 months' duty.
Navar wrote:
Budgets, which have been ever decreasing over the last 15 years. Automated after-hours backup system wasn't slow, unless you feel full checksum verification with logs is slow. A full restore didn't take that long. A good business retains capital to back downtime operating risks.

Back then, there really was no option except tape. Fast forward to the topic of this thread, though: the OP wants to set up a system of backups right now. There's absolutely no way I would recommend tape for modern backups. That's why I'm being the way I'm being right now.
Tape use for me, in a professional setting, was between 1996 and 2000 as a system admin; please keep that in perspective. I have not used it since. During that time, the medium proved reliable. Given that time frame, what is this superior and cost-effective technology that you speak of to use instead?
Quote:
Companies and policies vary. Proof of claims? The generality is something to the effect of actual loss of business income you sustain due to the necessary suspension of your “operations” during the period of “restoration.”

Have you ever had an insurance claim, for anything at all? Not many policies promise full replacement value, and those cost significantly more than a partial claim. Again, better to have the insurance and not need it.
Quote:
Most importantly, they don't pay you for lost customers.

Quote:
Kinda reaching there? If you have serious loss you're hoping just to survive the aftermath to recover. Those without these hedges to support generally are gone, sometimes sinking their owners financially with them.

In every business where I've worked in IT, the best sales prospect is your existing customer. No amount of cold calling or advertising has the return of keeping the customer happy and getting them to buy more business. I'm sticking my head out on this one a bit, but I can't really think of any industry at all where this isn't true, except a case where the thing you buy is so infrequently bought as to be a one-time deal. Even then, word of mouth plays a big part.
Quote:
From my perspective, the ability to get data back rapidly outweighs anything a tape can give you. Random-access backups with no compression give you the ability to reduce your downtime hugely. Insurance or not, that's a big money saver when you need it.

Quote:
Interesting perspective, from the sake of convenience. You have offline backup setup this way? And what are the clients that have this need?

The clients don't know what my backup scheme is, nor do most of the people in my business.
Quote:
Business insurance protection was just one piece of the pie. Nowhere did I claim it was a sole reliance. Just like the rest of the overall business plan, such as data backup, which involves budget allocation towards hardware/support services. I was jumping in after your poke at jrg there for using cheap DLTs, simply to point out a hole in your argument. What's more important is frequency of data backups based on needs/ability, and reliability. When the chips fall, who cares if one was even 20 percent faster on full recovery than the other if the data is faulty? If the ongoing business concern is speed of a particular file recovery due to pebkac somewhere in the organization that is going to cause you to lose customers -- then there are bigger business problems afoot than data backup.

We don't really have anything to argue about here. Frequency of backups certainly counts, and IMO it's also important that it's not completely automatic. It needs to be in somebody's face enough that they're aware of the changing needs of the business with respect to those backups. Otherwise you wind up spending money on 'automatic backups' of data that is obsolete and hasn't changed for years, while some new project is not backed up at all.
Quote:
Unless you had no rotation and infrequent verifies, I'm unsure what happened in your case, and why. Older hardware, dirty heads, someone waving tapes near a large transformer, etc. I've had similar faults with platters, particularly expensive IBM SCSI drives before they got out of the business. When those died in a sudden manner, there was no chance to attempt a raw forensic recovery read; they were fully bricked at firmware/drive spinup. Less than 6 months' duty.

Woulda coulda shoulda. I spent quite a while looking for causes, but in the meantime I pretty much realized that I was hosed. Back then I went to CD burners, even though they were quite a bit smaller and only marginally more reliable.