
Re: [rdiff-backup-users] Staged backups


From: Ian Jones
Subject: Re: [rdiff-backup-users] Staged backups
Date: Tue, 12 Aug 2008 15:13:53 +0200
User-agent: Thunderbird 2.0.0.16 (Windows/20080708)

Hello Michael, and thank you for your reply.

Michael Crider wrote:

I may not be the most qualified person to answer this, but since nobody
else has, I'll take a stab at it. There are (at least) two possible
approaches that you should look at, each with advantages and
disadvantages. One way would be to set up different backup jobs for
different directories. With this approach you could stagger backup times
and even run multiple backups at the same time, although bandwidth, disk
speed, and CPU speed will all be limiting factors there. This would also
give you separate rdiff-backup-data directories for each job, for better
or worse.
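The first approach Michael describes could be sketched roughly like this, assuming a hypothetical destination host "backuphost" and destination paths under /backups (adjust to your layout). Each job writes its own rdiff-backup-data directory on the destination:

```shell
# Sketch of the one-job-per-directory approach (hypothetical host
# "backuphost" and destination paths).  RDIFF_BACKUP can be overridden,
# e.g. with "echo", to preview the command without running it.
backup_dir() {
    # $1 = name of a top-level subdirectory under /dir
    ${RDIFF_BACKUP:-rdiff-backup} "/dir/$1" "backuphost::/backups/$1"
}

# Staggered jobs, e.g. from separate cron entries:
# backup_dir a
# backup_dir b
```

Staggering the cron entries (or running them in parallel, within bandwidth/CPU limits) is then just a matter of scheduling each `backup_dir` call separately.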

The problem with this approach is that I would have to manually keep track of the directory structure. While this should not be too difficult, the day I forget to add a new top-level directory is sure to be the very day that a recovery of that directory is required! I was hoping to find a way to have just one automated job, once everything had been backed up.

The second way would be to make a single job that points to /dir, then
use include statements to get /dir/a, /dir/b, etc., with --exclude '**'
after those to knock out anything else. I ran several backup jobs
(for several servers on a LAN) first with a single include statement,
then added an include statement on each run until I had everything I
wanted. From what I understand of the way rdiff-backup
works, when a new include statement shows up, it will copy those files
just like on the first run. Any include statements that were processed
previously will get a normal backup: librsync will check hash sums of
all files on both machines and only pass those that have changes, at
which point rdiff-backup will store the new file and create reverse
diffs against the old file.
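The staged include/exclude approach described above might look like this, assuming the same hypothetical "backuphost" destination. On the first run only the first --include line would be present; a further --include is added on each subsequent run, and the trailing --exclude '**' knocks out everything not yet included:

```shell
# Sketch of the single-job, staged-include approach (hypothetical
# destination).  Include statements are processed in order; the final
# --exclude '**' drops everything not matched by an earlier include.
staged_backup() {
    ${RDIFF_BACKUP:-rdiff-backup} \
        --include /dir/a \
        --include /dir/b \
        --exclude '**' \
        /dir backuphost::/backups/dir
}
```

On a later run, adding `--include /dir/c` above the exclude line would pull /dir/c into the same repository, copied in full the first time and diffed thereafter.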

Yes, I think this approach seems the best way.

Another idea I had was to initially use rdiff-backup to back up to a local, external disk, then move the disk to the remote server and continue to use rdiff-backup. Would rdiff-backup use the previously created rdiff-backup-data, or create a new one?
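The sequence being asked about would look something like this, with a hypothetical mount point /mnt/external and host "backuphost"; whether the second run reuses the existing rdiff-backup-data directory is exactly the open question:

```shell
# Sketch of the seed-locally-then-move idea (hypothetical paths/host).
# Step 1, run at the source site with the external disk attached:
seed_local() {
    ${RDIFF_BACKUP:-rdiff-backup} /dir /mnt/external/backup
}

# Step 2, run at the source site after the disk has been physically
# moved to and mounted on the backup server:
continue_remote() {
    ${RDIFF_BACKUP:-rdiff-backup} /dir backuphost::/mnt/external/backup
}
```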

Thanks,
Ian.

Ian Jones wrote:
Hello. I have about 20GB of data to back up from a remote site. Clearly, it's not practical to do this over the Internet in one go, so I would like to stage the backup over several sessions. So, my question is: what is the best way to do it? If I back up, say, /dir/a, then subsequently /dir/a and /dir/b, will /dir/a get copied a second time?

An alternative approach would be to make a preliminary backup on DVDs and copy the files to the backup machine. If I then use rdiff-backup to do incremental backups, how do I ensure that the files that are already there are not copied again, i.e. how do I add them to the archive?

Thanks,
Ian.



_______________________________________________
rdiff-backup-users mailing list at address@hidden
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki