Re: [Duplicity-talk] Out of space error while restoring a file
From: Laurynas Biveinis
Subject: Re: [Duplicity-talk] Out of space error while restoring a file
Date: Sun, 4 Nov 2012 20:01:17 +0200
2012/11/4 <address@hidden>
>
> On 30.10.2012 09:59, Laurynas Biveinis wrote:
> > 2012/10/8 <address@hidden>:
> >> On 08.10.2012 12:12, Laurynas Biveinis wrote:
> >>>> If that is the case, download both files directly from the release code
> >>>> at
> >>>> http://bazaar.launchpad.net/~duplicity-team/duplicity/0.6-series/files/head:/duplicity/.
> >>>> There is a green arrow at the far right for download. Copy these over
> >>>> the old files and remove their .pyc files for safety. If you go back
> >>>> through the process again, you will see 2-3 gpg processes at a time,
> >>>> but not a long list.
> >>>>
> >>>> Let us know how it goes, and thanks go to you and edso for hanging in
> >>>> there. We'll get this fixed.
> >>> How do I proceed in light of the above?
> >>
> >> do what Ken suggested:
> >> - download the two files GnuPGInterface.py and gpg.py from the link above
> >> - you'll actually need tempdir.py as well, since yesterday's change
> >> (gpg tmp files)
> >> - place them into /usr/lib/python<your_version>/dist-packages/duplicity/,
> >> overwriting the old gpg.py and tempdir.py
> >> - remove the corresponding *.pyc files
> >>
> >> run duplicity and it should be faster than before, using fewer resources.
> >>
> >> you could also wait for the next release, where we hope to clear this
> >> up once and for all.
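The replacement steps above can be sketched as a small shell function. This is my own illustration, not part of the thread: the function name and directory arguments are mine, and it assumes the three files have already been downloaded (via the green download arrow on the Launchpad page) into a local directory.

```shell
# Hypothetical sketch of edso's steps: copy the downloaded modules over
# the installed ones and drop their stale .pyc bytecode for safety.
replace_duplicity_modules() {
    local src=$1 pydir=$2   # e.g. ~/Downloads and the dist-packages dir
    for f in GnuPGInterface.py gpg.py tempdir.py; do
        cp "$src/$f" "$pydir/$f"        # overwrite the old module
        rm -f "$pydir/${f%.py}.pyc"     # remove the corresponding .pyc
    done
}
# e.g.: replace_duplicity_modules ~/Downloads \
#           /usr/lib/python2.7/dist-packages/duplicity
```

Adjust the python version in the path to match your system, and run with sufficient privileges to write into dist-packages.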
> >
> > I tested the new release. It is faster: the restore took well under
> > 24h, around 10h. Regarding the previous issue of unexplained free space
> > requirements, I have attached the restore log with df interspersed.
> >
>
> did you have a look at the output yourself? fs usage climbs from 584G to
> 648G. up to 616G it creates the oldest full version; then it takes this
> one plus volume-wise rsync patching to create the next step in time. the
> older version is then deleted, fs usage drops by ca. 32G, and the rsync
> patching is done again on the formerly second file. this repeats through
> all incremental states until your requested date's state is recreated.
>
> seems to me like it takes exactly two times the size of the file to
> restore, plus some overhead... wouldn't you agree?
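The "restore log with df interspersed" mentioned above can be produced with a simple watcher loop. This is a hedged sketch of my own, not a command from the thread; the function name, mount point, and interval are illustrative.

```shell
# Run a long command (e.g. a duplicity restore) in the background and
# snapshot free space on a given mount point until it finishes.
watch_df_during() {
    local mount=$1 interval=$2; shift 2
    "$@" &                          # start the long-running command
    local pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        df -h "$mount"              # interleave a disk-usage snapshot
        sleep "$interval"
    done
    wait "$pid"                     # propagate the command's exit status
}
# e.g.: watch_df_during / 60 duplicity restore file:///mnt/backup /tmp/restored
```

Redirecting the whole thing to a file gives a log like the one attached, showing the climb and the ~32G drops as each intermediate version is deleted.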
Yes. So I have no idea why the restore originally failed for a 30GB
file with 80GB of free space under the previous release. Could it be
related to the GPG fixes?
--
Laurynas