duplicity-talk

Re: [Duplicity-talk] Running out of disk space -- new full backup?


From: Grant
Subject: Re: [Duplicity-talk] Running out of disk space -- new full backup?
Date: Mon, 15 Feb 2016 18:09:28 -0800

>>>>>> If you do full backups regularly, and keep at least 2 around, you can
>>>>>> always opt to keep fewer full backups too. See remove-all-but-n-full and
>>>>>> remove-all-inc-of-but-n-full for some options.
>>>>>
>>>>>
>>>>> So far I've only done 1 full backup and all incrementals after that.
>>>>> Should I re-think this strategy?  Is the point of running periodic
>>>>> full backups to save disk space as per above?
>>>>>
>>>>> - Grant
>>>>>
>>>>>
>>>>>>> One of the systems I send my backups to is running out of space.  Is
>>>>>>> the solution to delete all of the backups and run a new full backup?
>>>>
>>>>
>>>> Can anyone help me figure this out?  I've been using duplicity-0.6.26
>>>> happily for quite a while but I'm finally up against disk space.  One
>>>> of my systems has 14GB in /root/.cache/duplicity/ compared to 38GB in
>>>> the backup target.  That seems crazy so I deleted the cache but the
>>>> next duplicity run brought it right back in full 14GB glory.
>>>>
>>>> Will running another full backup and using remove-all-but-n-full and
>>>> remove-all-inc-of-but-n-full reduce disk space usage at the backup
>>>> target and in the cache?
>>>>
>>>> Is there another way to reduce the disk space used as cache?
>>>>
>>>> - Grant
>>> Well, a couple of things.  If you have a very long backup chain of
>>> incrementals, it will increase the cache size a lot.  It is also less
>>> reliable, since if one of the incrementals is corrupted, all of the
>>> subsequent incrementals become unusable.  That is why an infinite
>>> chain of incrementals is not a good idea.
>>>
>>> Periodic full backups are the answer, and that is why I have a script
>>> that decides whether to run a full backup or an incremental, forcing a
>>> full every 30 +/- a few days, on several directories.  The random +/-
>>> keeps the machines on the network out of sync, so they don't all
>>> require a full backup of all directories on the same day.
>>
>>
>> Thanks Scott.  How best to transition from a single full backup and
>> infinite incrementals to running a full backup every 30 days?
>>
>> Should I just use full-if-older-than, remove-all-but-n-full, and
>> remove-all-inc-of-but-n-full from now on?  Any other cleanup necessary
>> (cache or otherwise)?
>
> My shortened commands are:
>
> nice -n19 /sw/bin/duplicity --full-if-older-than 40D --num-retries 5 \
>   --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload
> nice -n19 /sw/bin/duplicity remove-all-but-n-full 1 --num-retries 5 \
>   --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload
> nice -n19 /sw/bin/duplicity cleanup --force --num-retries 5 \
>   --tempdir /var/tmp/duplicity --volsize 250 --asynchronous-upload
>
> This means that the full has to complete before removing the older one.


I ran the above (with remove-all-but-n-full 3) and .cache/duplicity
grew in size significantly.  As a reminder, that was my second full
backup, every other daily run has been incremental.  Do I need to run
remove-all-inc-of-but-n-full in order to make .cache/duplicity
smaller?

- Grant
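
The trimming sequence discussed in this thread can be strung together as below. This is a hedged sketch, not anyone's actual invocation: the TARGET URL is a placeholder, and the script only prints the plan rather than deleting anything. remove-all-but-n-full trims old full chains, remove-all-inc-of-but-n-full drops incrementals belonging to older fulls (which should also let the backend and the local cache shrink), and cleanup removes leftover files:

```shell
#!/bin/sh
# Sketch of the retention sequence discussed in the thread.  TARGET is a
# placeholder URL; the commands are only collected and printed, not executed.
TARGET="scp://user@backuphost//backups/host1"

PLAN=""
for cmd in "remove-all-but-n-full 3" "remove-all-inc-of-but-n-full 1" "cleanup"; do
    # --force is required for the remove-* commands to actually delete anything.
    PLAN="${PLAN}duplicity $cmd --force $TARGET
"
done

printf '%s' "$PLAN"
```

Once the printed plan looks right, the commands can be run directly (still with --force) to prune the backend; .cache/duplicity should then shrink on the next run, since the cached manifests and signature files for the deleted chains are no longer needed.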


