Re: [Duplicity-talk] strategy for performing initial full backup to S3


From: Scott Classen
Subject: Re: [Duplicity-talk] strategy for performing initial full backup to S3
Date: Fri, 9 Oct 2015 14:14:11 -0700

Thanks Ken. I'll have to think about how to split things up. In the meantime, these are the current settings:

TMPDIR='/home-old/duplicity' PASSPHRASE=xxxxxx FTP_PASSWORD='xxxxxxxx/xxxxxxxx' \
  duplicity \
  --archive-dir '/home-old/duplicity/.duply-cache' \
  --name duply_home \
  --encrypt-key 1212121 \
  --sign-key 23232323 \
  --verbosity '4' \
  --s3-use-rrs \
  --asynchronous-upload \
  --exclude-filelist '/etc/duply/home/exclude' \
  '/home' 's3://address@hidden/xxxxxx-backup-home'

I'm using
duply v1.10.1
duplicity version 0.7.05
python 2.6.6
gpg 2.0.14


There are ~400 user directories in /home. It would be nice to find a way to split them based on size rather than, say, alphabetically (A-G, H-O, and P-Z). One rough idea is sketched below.
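
Something along these lines might do the size-based split (an untested sketch; the group count and output paths are arbitrary, and it assumes no whitespace in user directory names):

NGROUPS=4
du -s /home/*/ | sort -rn | awk -v n="$NGROUPS" '
{
    # assign each directory (largest first) to the group with the smallest running total
    min = 1
    for (i = 2; i <= n; i++) if (total[i] < total[min]) min = i
    total[min] += $1
    print $2 > ("/tmp/home-group-" min ".txt")
}'

Each /tmp/home-group-N.txt could then be fed to its own duplicity run via --include-filelist.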


Scott


On Oct 7, 2015, at 10:35 AM, Kenneth Loafman <address@hidden> wrote:

I would suggest not doing one large backup, but segmented backups at the top of the directory tree, split however makes sense to you.  This will speed up the backup process since each segment can be done in parallel, will make better use of the available cores and network, and will be much easier to use for recovery.
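
For example, something like this rough sketch (segment names, include filelists, keys, and the bucket URL are all placeholders; each segment gets its own --name so duplicity keeps a separate archive per segment):

for seg in a-g h-o p-z; do
    # one background duplicity job per segment; 'wait' blocks until all finish
    PASSPHRASE=xxxxxx FTP_PASSWORD='xxxxxxxx' duplicity full \
        --name "home_$seg" \
        --encrypt-key XXXXXXX --sign-key XXXXXXX \
        --asynchronous-upload \
        --include-filelist "/etc/duply/home/include-$seg" \
        --exclude '**' \
        /home "s3://bucket/backup-home-$seg" &
done
wait

Recovery then only has to touch the segment that holds the files you want back.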

What version of duplicity and what options are you using?

...Ken


On Wed, Oct 7, 2015 at 10:59 AM, Scott Classen <address@hidden> wrote:
I am attempting to perform the initial full backup of a fairly large volume (3.5 TB) to Amazon S3 and it is taking a long time: 8 days so far, and it looks like it might be about half done, although that is hard to tell because presumably the data is being compressed. I'm wondering why it takes so long. Our internet connection is capable of much faster rates, and the server running the backup has many more cores than are being used. I fear I may not have the magical combination of duply/duplicity options to get the best performance, or maybe 3.5 TB is beyond the capabilities of duplicity? At this rate I won't need to do (or won't have time to do) any incremental backups; I'll just do a full backup once every two weeks. Any advice would be greatly appreciated.

Scott



