From: Kenneth Loafman
Subject: Re: [Duplicity-talk] Create full backup from incremental
Date: Sun, 19 Apr 2015 10:17:05 -0500
Eric,
On 17.04.2015 18:42, Eric O'Connor wrote:
> On 04/17/2015 06:40 AM, Scott Hannahs wrote:
>> I am still not clear how this scheme could be implemented without
>> the remote machine having all the files and lengths etc. But this
>> meta data is not supposed to be in the clear on the remote machine
>> ever. Thus if it is local then all the incremental files would need
>> to be transferred back to the local machine for combining with the
>> full. Not saving bandwidth which I believe is the original intent.
>
> The remote machine (say, S3) doesn't have any use for files and lengths
> -- it's just a dumb bucket of bits. Anyway, Duplicity already stores a
> bunch of metadata locally, such as a rolling checksum for every file
> that's backed up. Unless that local metadata became corrupted or lost,
> why would it need to be repeatedly transferred back?
Obviously a misunderstanding. He means recreating a synthetic full from an existing remote chain (full + incrementals). To do that without using the local data, you would have to recreate the latest file states locally, which is essentially a local restore, which in turn means transferring the complete chain (volumes are not cached locally). Be aware that the metadata alone is not sufficient to recreate the data; it exists so that it does not have to be downloaded and decrypted for every backup, and it mainly describes the latest state so incrementals can decide what is new.
Taking that into account, it is much easier to simply do a new full backup locally.
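To make the bandwidth argument concrete, here is a minimal back-of-the-envelope sketch. The function names and sizes are illustrative assumptions, not Duplicity internals; it only models the transfer volumes described above: a remote synthetic full requires downloading the whole chain (volumes are not cached locally) plus uploading the consolidated full, while a plain new full is a single upload.

```python
# Hypothetical cost model -- all names and numbers are illustrative
# assumptions, not part of Duplicity's API.

def synthetic_full_transfer(full_size, incr_sizes):
    """Bytes moved to build a synthetic full from the remote chain:
    download the full plus every incremental to recreate the latest
    state locally, then upload the consolidated full (assumed to be
    roughly the size of a full backup)."""
    downloaded = full_size + sum(incr_sizes)
    uploaded = full_size
    return downloaded + uploaded

def new_full_transfer(full_size):
    """Bytes moved for a plain new full backup: one upload, no downloads."""
    return full_size

# Example with assumed sizes in GB: a 100 GB full and four 5 GB incrementals.
full, incrs = 100, [5, 5, 5, 5]
print(synthetic_full_transfer(full, incrs))  # 220
print(new_full_transfer(full))               # 100
```

Under these assumptions the synthetic full moves more than twice the data of a fresh full, which is why doing the new full locally is the cheaper option.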
> Anyway, it sounds like this isn't wanted, so I'll be on my way. Cheers :)
Maybe to you. To me it sounds like an idea that developed after revisiting the initial requirement to shorten the chain.
..ede/duply.net
_______________________________________________
Duplicity-talk mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/duplicity-talk