
[rdiff-backup-users] What's the best thing to do?!!


From: Robert Yoon
Subject: [rdiff-backup-users] What's the best thing to do?!!
Date: 28 Apr 2004 10:50:46 -0700

I am currently trying to implement rdiff-backup in an environment with a
/home mount point that can vary in size from 80 GB up to 300 GB.  My
concern is that rdiff-backup is failing for some reason and erroring out
with a Python error similar to the one Mr. Hawkes is experiencing.  My
main concern is that after a failure, if the backup jobs are cron'd, the
next run will fail again, stating that the mirror data is corrupted.  I am
considering having a script work around that by deleting the
rdiff-backup-data directory whenever a run fails.  Is that the best
solution?  I am also considering running a small script that does
"ssh hostname ls /home" to gather the directories inside /home and then
runs rdiff-backup for each directory, so that if one backup fails, not all
of the directories are affected (a rough sketch follows below).  I think
that is the best solution.  But what is the best thing to do if a backup
fails and cron attempts to run the backups again?  That is my main concern.
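
For reference, here is a minimal sketch of that per-directory wrapper in
Python.  The host name and the source/destination paths are placeholders,
not taken from any real setup, and the script only splits the job up; it
does not delete rdiff-backup-data or take any other action after a failure.

#!/usr/bin/env python3
# Back up each top-level directory under /home separately, so that one
# failure only leaves that single directory's mirror in a suspect state.
# HOST, SRC_ROOT, and DEST_ROOT below are assumptions, not real paths.
import subprocess
import sys

HOST = "hostname"            # remote host holding /home (placeholder)
SRC_ROOT = "/home"           # source mount point on that host
DEST_ROOT = "/backup/home"   # local mirror root, one subdirectory per user

def list_home_dirs():
    """List the top-level entries under /home on the remote host."""
    out = subprocess.run(["ssh", HOST, "ls", SRC_ROOT],
                         capture_output=True, text=True, check=True)
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

def backup_one(name):
    """Run rdiff-backup for a single home directory; True on success."""
    src = "%s::%s/%s" % (HOST, SRC_ROOT, name)
    dest = "%s/%s" % (DEST_ROOT, name)
    return subprocess.run(["rdiff-backup", src, dest]).returncode == 0

def main():
    failed = [name for name in list_home_dirs() if not backup_one(name)]
    if failed:
        print("backups failed for:", ", ".join(failed), file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()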

Also, when that happens there will still be directories in the
destination, but I noticed that the --force option allows rdiff-backup to
write to a directory that already has data in it.  That being the case,
what will happen when rdiff-backup runs and there are already folders in
that directory?  Does it still run incrementals against what is in the
destination directory, or does it overwrite it?  I need to know what it
does.






