On 07.02.2017 11:46, Fazekas László via Duplicity-talk wrote:
Hi!
I'm using duplicity to back up my webhosting server. My www directory is 17G and
it contains many, many small files. A full backup to Amazon S3 takes more than a
day. Is this a normal running time for a full backup?
What duplicity version are you running?
Try a test backup to a local file:// target and see how long that takes in
comparison.
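A minimal sketch of such a comparison run (the source and target paths here are placeholders, not your real setup; `--no-encryption` just keeps GPG out of the timing):

```shell
# Sketch: time a full backup to a local file:// target, to compare
# against the S3 run. SRC/DEST are example paths -- substitute your own.
SRC=${SRC:-$(mktemp -d)}          # stand-in for your /var/www here
DEST="file://$(mktemp -d)"        # local target, no network involved
if command -v duplicity >/dev/null 2>&1; then
    time duplicity full --no-encryption "$SRC" "$DEST"
else
    echo "duplicity not installed; the command above is the sketch"
fi
```

If the local run is fast, the bottleneck is the S3 transfer; if it is also slow, the per-file overhead on the source side is the problem.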
Creating a tar of the www folder takes 12 minutes. Is it a good idea to make a
tar before the backup?
Only if you are willing to untar manually on restore, of course.
Will it speed up the backup process?
If the small files are the issue, then probably yes.
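The tar-first approach could look like this; the directory layout and the commented duplicity invocation are illustrative assumptions, not your actual paths:

```shell
set -eu
# Demo: build a small tree standing in for the www directory, tar it,
# and show the single file duplicity would then see instead of many
# small ones. All names here are made up for the example.
SRC=$(mktemp -d)
mkdir -p "$SRC/site"
printf 'hello' > "$SRC/site/index.html"
TARBALL="$(mktemp -d)/www.tar"
tar -cf "$TARBALL" -C "$SRC" site
tar -tf "$TARBALL"                 # lists site/ and site/index.html
# The real backup would then be something like:
#   duplicity full /path/to/www.tar s3://bucket/prefix
```

Handing duplicity one large file removes the per-file overhead, at the cost of the manual untar on restore mentioned above.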
Will duplicity store only modifications of the tar?
Yes, but since it uses librsync it will look at the tar chunk by chunk, and if,
say, a file near the beginning grew, all content after it is offset and will be
regarded as changed.
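The offset effect is easy to see on a toy archive (file names and sizes are invented for the demo; the marker-offset arithmetic relies on tar's 512-byte block padding):

```shell
set -eu
# Toy demo: grow a file early in the archive and watch a later file's
# byte position shift inside the tar.
D=$(mktemp -d)
head -c 100 /dev/zero > "$D/aaa.txt"     # small file at the front
printf 'MARKER' > "$D/zzz.txt"           # file we will locate later
tar -cf "$D/v1.tar" -C "$D" aaa.txt zzz.txt
head -c 1124 /dev/zero > "$D/aaa.txt"    # grew past two 512-byte blocks
tar -cf "$D/v2.tar" -C "$D" aaa.txt zzz.txt
P1=$(grep -abo MARKER "$D/v1.tar" | cut -d: -f1)
P2=$(grep -abo MARKER "$D/v2.tar" | cut -d: -f1)
echo "marker at byte $P1 in v1, byte $P2 in v2"
```

Everything after the grown file sits at a different offset in the new tar, which is what the chunk-by-chunk comparison then sees as changed data.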
Or do you have any other ideas for speeding up the backup?
Primarily, use the latest duplicity from the website; there was an issue some
months ago that slowed down backups.
..ede/duply.net
_______________________________________________
Duplicity-talk mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/duplicity-talk