Re: [Duplicity-talk] Re: Re: Duplicity best practices?


From: Peter Schuller
Subject: Re: [Duplicity-talk] Re: Re: Duplicity best practices?
Date: Sat, 15 Mar 2008 17:13:29 +0100
User-agent: Mutt/1.5.17 (2007-11-01)

> Agreed.  I have been running into exactly that issue, where I need to make a
> snapshot of the filesystem prior to running duplicity on it.  However, unless
> I misunderstand how to use duplicity, I am having a lot of trouble getting
> duplicity to work with the paths in my snapshot.  The problem is that once I
> create my snapshot, I need to "mount" it somewhere to get access to it.  So,
> for instance, if I put my snapshot in /mount/snapshot, then I am not sure how
> I would run duplicity.
>  # duplicity --include /etc /mount/snapshot file:///duplicity
> 
> doesn't seem to work; it tells me that /etc cannot be found.  Is there a way
> to indicate to duplicity that all paths specified in the --include /
> --include-filelist parameters are relative to the source (i.e.
> /mount/snapshot) and not to / itself?  Or is that how it is supposed to be
> working and I'm just doing something wrong on my end?

I don't know offhand what the intended behavior is with regard to
relative inclusion, as I have not needed that feature yet. Hopefully
someone else will chime in, or I can have a look at the code.
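
That said, if I am reading the manual's file-selection section
correctly, include patterns are matched against the full path of each
file, so they have to live *under* the source directory. Something
along these lines might work (an untested sketch, reusing the paths
from your example):

 # duplicity --include /mount/snapshot/etc --exclude '**' \
       /mount/snapshot file:///duplicity

The trailing --exclude '**' matters because anything not matched by an
include is still backed up by default.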

> Yes, they are at the moment.  The question is rather what to do if the
> verify fails legitimately.  Is there a way to delete only the backup just
> made?  I.e. if the last duplicity execution was incremental, delete the
> incremental backup; if full, delete the full backup?  From what I read in
> the duplicity docs, I can't figure out how to do that from within duplicity
> itself.

That's a good point. You basically cannot delete the last backup. You
can delete backups *older* than some limit, but not backups *newer*
than one.

I have filed a bug for this on Savannah for now.
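
For completeness, the pruning that does exist looks like this (syntax
from the versions I have used; without --force, remove-older-than only
lists the sets it would delete):

 # duplicity remove-older-than 2M --force file:///duplicity

That removes backup sets older than two months, but there is no
corresponding way to drop just the most recent, possibly broken, set.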

> Why?  Wouldn't the same total amount of free space be required?  Even if I
> were to split a 2G backup into 100 files, I would still need 2G of total
> space in tmp, wouldn't I?

No, duplicity only stores a single volume at a time (or is it two
copies? I don't remember off the top of my head when the encryption
happens). Each volume is filled locally until complete, and then
uploaded.

I am working on some changes that will allow concurrency (preparing
the next volume while uploading the previous one), but even then you
will never need space for the entire backup in your temp directory.
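
So the temp space needed is roughly one volume (or a small constant
number of them), and the volume size is tunable. A sketch, assuming
--volsize takes megabytes and that duplicity honors TMPDIR like most
Python tools:

 # TMPDIR=/var/tmp duplicity --volsize 250 /mount/snapshot file:///duplicity

With 250 MB volumes you need on the order of 250 MB free in the temp
directory, not the full 2G.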

> Agreed.  I think it is largely a question of backend storage.  If, indeed,
> it is a remote upload to S3, or a WebDAV folder, etc., then it might be
> more efficient to use smaller files.  But my gut tells me that the greater
> the number of files, the greater the chance of something going wrong: a
> file not copied properly, an error while reassembling the files, etc.  But
> using 5G backup files tends to be exaggerated as well.  I would think that
> somewhere in the realm of 250MB-500MB would probably be a decent
> compromise... but that is based on nothing but pure gut feeling.

250 MB is exactly what I have been using for large backups.

Large numbers of files are just generally annoying; you start running
into filesystems that handle them badly (listing files becomes slow),
getopt() bailing out because the argument limit is exceeded, etc.

In the end it is going to be very dependent on the situation, I think.

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <address@hidden>'
Key retrieval: Send an E-Mail to address@hidden
E-Mail: address@hidden Web: http://www.scode.org


