Subject: Re: [Duplicity-talk] Much Larger Duplicity backups when compared to Source
From: Kenneth Loafman
Date: Wed, 10 Mar 2010 06:26:23 -0600
User-agent: Thunderbird 2.0.0.23 (X11/20090817)
william pink wrote:
> On Mon, Mar 8, 2010 at 10:09 PM, Jacob Godserv <address@hidden> wrote:
>
> On Mon, Mar 8, 2010 at 06:53, william pink <address@hidden> wrote:
> > Any help most appreciated,
> > Will
> > Any help most appreciated,
> > Will
>
> I can't claim any professional experience with MySQL, but I can try to
> help regardless. I need some more information, first. Do you back up
> the entire /var/lib/mysql/ (or wherever the raw databases are stored)
> or do you back up a dump? What does 'duplicity collection-status' say?
>
>
> Hi Jacob,
>
> Sorry for the delayed response. I run mysqldump each day and compress
> the dumps with tar and gzip; with duplicity I then do a full backup
> initially and incrementals after that.
This mechanism guarantees that each incremental ends up about the same
size as the original. Duplicity saves space by sending only the changed
parts of each file in an incremental. Gzip compression defeats that
comparison: a small change to the input can alter nearly the whole
compressed file, so every file looks completely changed to duplicity,
hence the large backups.
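You can see this effect for yourself with a quick sketch (filenames here are made up for illustration): change one line early in a large file, compress both versions, and count how many bytes of the two archives differ at the same offset.

```shell
# Simulate two daily dumps that differ by a single line.
seq 1 100000 > dump.sql
gzip -c dump.sql > dump1.sql.gz
sed 's/^5$/5b/' dump.sql > dump2.sql
gzip -c dump2.sql > dump2.sql.gz

# Count bytes that differ at the same offset in the two archives.
# The compressed streams diverge right after the changed line, so
# nearly every byte differs -- there is almost nothing for a
# delta algorithm like duplicity's librsync to match.
cmp -l dump1.sql.gz dump2.sql.gz | wc -l
```

The uncompressed dumps differ by a couple of bytes, but the gzipped versions differ almost everywhere, which is exactly what duplicity's delta comparison sees.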
To make the best use of duplicity, run mysqldump straight to text files
and use those as duplicity's input. Duplicity will compress and tar
them before sending to the remote system, and you will see much smaller
incremental backups.
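A minimal sketch of that workflow; the database name, dump directory, and remote URL (mydb, /var/backups/mysql, the sftp target) are placeholders, not values from this thread.

```shell
# Dump to a plain-text .sql file -- no gzip, so duplicity's delta
# comparison can match the unchanged parts against yesterday's dump.
mkdir -p /var/backups/mysql
mysqldump --single-transaction mydb > /var/backups/mysql/mydb.sql

# Duplicity gzips its volumes itself before upload; after the first
# full backup, later runs send only the changed parts of the dump.
duplicity /var/backups/mysql sftp://user@backuphost/mysql-backups
```

Run unchanged each day: duplicity does a full backup the first time and incrementals after, so the daily upload shrinks to roughly the size of the rows that actually changed.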
...Ken