Re: [Duplicity-talk] Duplicity - Resource Unavailable Errors


From: Colin Ryan
Subject: Re: [Duplicity-talk] Duplicity - Resource Unavailable Errors
Date: Mon, 23 Nov 2009 13:36:25 -0500
User-agent: Thunderbird 2.0.0.23 (Windows/20090812)

Hrm... strange... especially considering that Ubuntu has Debian parentage, does it not?

I'm not a Python coder, but if there's anything I can do to help, let me know.

C


Kenneth Loafman wrote:
Colin,

I know why this is happening: the fix in GnuPGInterface.py is not
working on those systems, yet it does work on Ubuntu.  This is very
strange and puzzling.

I'm going to have to set up one of them and trace through the problem
again.  As to reducing the footprint, the only thing that will help at
this point is to reduce the number of incremental backups done between
full backups.  What's happening is that duplicity is not closing the
incremental backups until the iteration is done, and the iteration is
not done until the restore is complete, kind of a Catch-22 of coding.
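The pattern can be illustrated with a small sketch (not duplicity's actual code, just the shape of the problem): a lazy iterator that opens a resource per volume and cannot release any of them until the whole iteration finishes.

```python
import tempfile

def patch_iter(volumes):
    """Sketch of a lazy iterator that keeps every volume's handle
    open until the entire iteration completes (the Catch-22 above)."""
    handles = []
    for name in volumes:
        fp = tempfile.TemporaryFile()  # stands in for a per-volume gpg pipe
        handles.append(fp)             # kept alive for later patching steps
        yield name, fp
    for fp in handles:                 # only closed after the last volume
        fp.close()

peak = 0
handles_seen = []
for name, fp in patch_iter(['full', 'inc1', 'inc2']):
    handles_seen.append(fp)
    # every handle opened so far is still open at this point
    peak = max(peak, sum(1 for h in handles_seen if not h.closed))
print(peak)  # 3: all three stay open until the iteration ends
```

This is why reducing the number of incrementals between fulls is the only lever: fewer volumes in the chain means fewer handles held open at once.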

...Ken

Colin Ryan wrote:
Update:

Same behavior on a Fedora 8 system: for this backup set at least,
duplicity peaks at 39 gpg sub-processes and holds steady around 23,
sucking up ~250-310 MB of RAM.

So the question becomes: is this normal, and if so, is there some way
to limit the number of sub-processes so this memory overhang can be
controlled on systems not blessed with a lot of RAM?
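For anyone wanting to reproduce the measurement, here is a small helper (the function name is mine, not duplicity's; Linux procps `ps` assumed) that counts live gpg children of a given pid, which you could poll against duplicity's pid during a restore:

```python
import os
import subprocess

def count_gpg_children(parent_pid):
    """Count running gpg processes whose parent is parent_pid."""
    try:
        out = subprocess.check_output(
            ['ps', '-o', 'comm=', '--ppid', str(parent_pid)])
    except subprocess.CalledProcessError:
        return 0  # ps exits non-zero when there are no matching children
    return sum(1 for line in out.splitlines() if line.strip() == b'gpg')

# e.g. poll this in a loop against duplicity's pid while restoring
print(count_gpg_children(os.getpid()))
```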

Cheers

Colin Ryan wrote:
Ok Ken here's some more meat for you to chew on.

I fired up an Amazon instance of a stock Debian 4.0 (Etch) system
(supposedly stock) and installed duplicity 0.5.20 on it.

Ran my restore.

The restore completed; however, during it I effectively saw the same
thing.

The number of gpg processes grew steadily to 39, then generally
floated around there, sometimes dropping as low as 24, and
correspondingly duplicity's virtual memory footprint was up around
250-300 MB.

So it seemed that it completed only because there was enough RAM on it.

Curiously (though it may have something to do with the fact that I
have been using the same backup set for testing, ~2 GB with default
volume sizes), it peaked out at 39 processes, just as it had on my
own systems.

Cheers ;-)



C

Kenneth Loafman wrote:
Sorry, my misunderstanding...

In the duplicity/GnuPGInterface.py file you should find references to
and the definition of threaded_waitpid().  What this module does is set
a thread to wait for the process to terminate.  When the process finally
terminates, the waitpid call harvests its resources.
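A minimal sketch of that technique, assuming only the behaviour described here (not the actual duplicity source):

```python
import os
import threading

def threaded_waitpid(pid):
    """Reap child `pid` from a background thread, so its resources are
    harvested as soon as it exits rather than lingering as a zombie."""
    def reap():
        try:
            os.waitpid(pid, 0)
        except OSError:
            pass  # child already reaped elsewhere
    t = threading.Thread(target=reap)
    t.daemon = True  # don't block interpreter exit
    t.start()
    return t

# Example: fork a short-lived child and let the thread reap it.
pid = os.fork()
if pid == 0:
    os._exit(0)          # child exits immediately
reaper = threaded_waitpid(pid)
reaper.join()            # in real use the main thread keeps working
```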

You really don't have much option with respect to where this module
loads from.  The import is done via
    "from duplicity import GnuPGInterface"
which quite specifically says to get it from the duplicity directory.
Unless you have another duplicity directory on your PYTHONPATH, you
should get the correct one.
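One direct way to confirm which file an import resolves to is to ask the module itself. The demo below uses a stdlib module so it runs anywhere; the same two-line check against `duplicity.GnuPGInterface`, testing for the `threaded_waitpid` addition described above (an assumption that only the patched copy defines it), would settle the question on an affected system:

```python
import importlib

def which_module(name, marker_attr):
    """Report where an import resolves from, and whether it carries a
    marker attribute that only the expected version defines."""
    mod = importlib.import_module(name)
    return getattr(mod, '__file__', '<built-in>'), hasattr(mod, marker_attr)

# On an affected system you would run:
#   which_module('duplicity.GnuPGInterface', 'threaded_waitpid')
path, has_marker = which_module('json', 'dumps')  # stdlib demonstration
print(path, has_marker)
```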

...Ken

Colin Ryan wrote:
Ken,

I think you're misinterpreting what I was saying (and I forgot to ask
a question)... I had picked up on the discussions regarding
GnuPGInterface and your customizations, so I symlinked from the .py
file in the duplicity site-packages

/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py


to the other system locations, to see if my Python environment was
picking up a standard version, assuming that the above (as delivered
by a "python setup dist" package build of the duplicity source) is
your custom instance of it.

My question was how can I confirm absolutely that I've got your custom
module and that it's the one being used.

As well, please see my previous note: while I admit I did this
testing in my stripped-down Debian environment, I also did in fact
try a stock Debian kernel and still had the problem.

C

Kenneth Loafman wrote:
The current distribution of GnuPGInterface.py is broken in a couple of
ways and does not have the additions I put in to the duplicity version
to harvest the child processes, so this would not have worked.

Something is keeping duplicity from harvesting the completed child
processes.  I don't know enough about the kernel processing to tell
you
where to look in your custom kernel.  Perhaps a note to the Linux
kernel
mailing list?

...Ken

Colin Ryan wrote:
I've now gone so far as to manually symlink all system copies of
GnuPGInterface.py to the one found in

/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py

with the same results: climbs to about 39-40 gpg processes and dies.



# find / -name "GnuPGInterface.py" -exec ls -l {} \;
lrwxrwxrwx 1 root root 60 2009-11-17 15:26
/var/lib/python-support/python2.4/GnuPGInterface.py ->
/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py

lrwxrwxrwx 1 root root 60 2009-11-17 15:26
/usr/share/python-support/python-gnupginterface/GnuPGInterface.py ->
/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py

-rw-r--r-- 1 root root 22880 2009-10-29 16:51
/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py


Colin Ryan wrote:
Kenneth,

Just a follow-up. While still running in my customized, stripped
environments, I did try this with a stock Debian Etch (my error, I
stated Lenny before; I'm actually based on Etch) 2.6.18-6-486 kernel
right off their package repository... same issue...

Note I'm running Python 2.4; could that be an issue?

C

Kenneth Loafman wrote:
Yeah, this seems to be happening more on Debian than Ubuntu.  I guess
I need to bring up a Debian VM and check it out.


address@hidden wrote:
the bugs.debian.org post seems to describe 0.6.05 keeping open all
signature file gpg processes, at least thats what I understood..
also I
understand that the lesser processes the better, meaning - if
duplicity
really keeps 40 gpg processes open that's a bad thing .. ede;
duply.net


On 13.11.2009 19:45, Kenneth Loafman wrote:
Better yet would be a Python implementation of the gpg library.  I
looked at one a year or more ago.  Maybe it's grown up now.

...Ken

address@hidden wrote:
Smells like the gpg interface should be rewritten to use files
instead of pipes. Processes would then die promptly at the end of
gpg's actions.

.. ede
duply.net

On 13.11.2009 18:56, Colin Ryan wrote:
Bump..

Anyone have any ideas on the below...it's odd that this happens
only on
restore. I make the backup itself from the same environment(s).

C

Colin Ryan wrote:
Ken,

No problem, you're a busy man ;-).

Summary/Update
----------

Well, I tried this on the source system and I get something
different, but the same in a way.
It appears that I'm running into the following type of issue:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=531786

or

http://www.mail-archive.com/duplicity-tracker@nongnu.org/msg00632.html
i.e. see below: when it's running the restore, it hangs at 39 gpg
sub-processes.

My environment is:

* Linux rdrive 2.6.22-univ-423 #1 PREEMPT Wed Jul 2 13:04:26 EDT 2008
i686 GNU/Linux

Custom Kernel on Stripped down version of Debian Lenny

* rdrive:/# gpg --version
gpg (GnuPG) 1.4.6
Copyright (C) 2006 Free Software Foundation, Inc.
This program comes with ABSOLUTELY NO WARRANTY.
This is free software, and you are welcome to redistribute it
under certain conditions. See the file COPYING for details.

Home: ~/.gnupg
Supported algorithms:
Pubkey: RSA, RSA-E, RSA-S, ELG-E, DSA
Cipher: 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH
Hash: MD5, SHA1, RIPEMD160, SHA256, SHA384, SHA512, SHA224
Compression: Uncompressed, ZIP, ZLIB, BZIP2

* rdrive:/# duplicity --version
duplicity 0.6.05

(Hand-compiled to a distribution tar, then unpacked onto this system)

Full Details Below:
-------------------

The following command:

/usr/bin/duplicity restore -v9 --encrypt-key=xxxxxx --sign-key=xxxxxxx
--gpg-options=--default-key=xxxxxx
--archive-dir=/opt/aa/service/nas/restores
--tempdir=/opt/aa/service/nas/restores
s3+http://<amazon url>/office-demo.accessanywhere.net
/opt/aa/service/nas/restores/office

Runs fine without errors, as it's the same UID/GID etc., but
eventually it now barfs out with:

<code>
tores/duplicity-sQKaRZ-tempdir/mktemp-kgeAuN-75
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-3Pq-mM-26
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-3Pq-mM-26
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-T0z4xe-67
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-T0z4xe-67
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-t8culd-72
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-t8culd-72
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-eBy5OU-87
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-eBy5OU-87
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp--H-_I3-29
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp--H-_I3-29
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-NSeApT-82
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-NSeApT-82
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-_m-wf1-91
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-_m-wf1-91
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-r1Z3CO-83
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-r1Z3CO-83
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-RLNUrL-68
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-sQKaRZ-tempdir/mktemp-RLNUrL-68

Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1241, in ?
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1234, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1188, in main
    restore(col_stats)
  File "/usr/bin/duplicity", line 542, in restore
    restore_get_patched_rop_iter(col_stats)):
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 519, in Write_ROPaths
    for ropath in rop_iter:
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 492, in integrate_patch_iters
    final_ropath = patch_seq2ropath(normalize_ps(patch_seq))
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 472, in patch_seq2ropath
    misc.copyfileobj(current_file, tempfp)
  File "/usr/lib/python2.4/site-packages/duplicity/misc.py", line 170, in copyfileobj
    outfp.write(buf)
IOError: [Errno 28] No space left on device

ERROR:duplicity:Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1241, in ?
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1234, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1188, in main
    restore(col_stats)
  File "/usr/bin/duplicity", line 542, in restore
    restore_get_patched_rop_iter(col_stats)):
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 519, in Write_ROPaths
    for ropath in rop_iter:
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 492, in integrate_patch_iters
    final_ropath = patch_seq2ropath(normalize_ps(patch_seq))
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 472, in patch_seq2ropath
    misc.copyfileobj(current_file, tempfp)
  File "/usr/lib/python2.4/site-packages/duplicity/misc.py", line 170, in copyfileobj
    outfp.write(buf)
IOError: [Errno 28] No space left on device
</code>

This despite the fact that the /opt/aa/service/nas path is a 400 MB
volume that is totally empty.

More interesting is that at the time of this error (duplicity just
hangs, it doesn't crash) the virtual memory footprint is 310 MB with
12 MB resident.

And all the GPG processes look like:

nas 2302 2250 0 16:39 pts/1 00:00:00 gpg --status-fd 33 --passphrase-fd 37 --logger-fd 7 --batch --no-tty --default-key xxxxx --no-secmem-warning --default-key=xxxxx --decrypt
nas 2308 2250 0 16:39 pts/1 00:00:00 gpg --status-fd 39 --passphrase-fd 43 --logger-fd 34 --batch --no-tty --default-key xxxxxx --no-secmem-warning --default-key=xxxxx --decrypt
nas 2312 2250 0 16:39 pts/1 00:00:00 gpg --status-fd 44 --passphrase-fd 48 --logger-fd 40 --batch --no-tty --default-key xxxxx --no-secmem-warning --default-key=xxxxx --decrypt
nas 2314 2250 0 16:39 pts/1 00:00:00 gpg --status-fd 48 --passphrase-fd 52 --logger-fd 45 --batch --no-tty --default-key xxxxxx --no-secmem-warning --default-key=xxxxx --decrypt



Colin Ryan


Kenneth Loafman wrote:
Sorry for the slow response. This seems to be a ulimit issue, but
that's not normal for only 45 incrementals. Please run 'ulimit -n'
and post the response here. Try increasing the ulimit with
'ulimit -n 4096' or higher and retry. That could help.
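The same check and bump can be done from Python with the stdlib resource module; the shell's `ulimit -n` manipulates the same RLIMIT_NOFILE limit that the gpg pipes count against:

```python
import resource

# The soft limit is what open pipes/files count against; the hard
# limit is the ceiling an unprivileged process may raise it to.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft=%d hard=%d" % (soft, hard))

# Equivalent in spirit to `ulimit -n 4096`, capped at the hard limit.
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```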

I'm not sure what's happening, but it's possible that the permission
failures are somehow eating up file slots. During the run you could
try 'lsof -p <duplicity's pid>' just to see if any of those files
are open.

...Thanks,
...Ken

Colin Ryan wrote:

Update:

Tried this same test again, and this failure does not always leave
the rest of the system unable to fork other processes such as "ls"
or "top": sometimes yes, sometimes no. In any event, duplicity does
continue to grow in excess of 200 MB, but I do have physical RAM
reported as free when this occurs.

Any thoughts?

Colin Ryan wrote:
I should know the answer to this but can't seem to confirm. Clearly
such errors are generally flimit/ulimit issues or RAM.

But I'm testing a restore of a not-huge archive (~8 GB, only 1 full
and 45 days of incrementals) and get the following from the command
line below.

/duplicity restore -v9 --encrypt-key=xxxxxxx
--sign-key=xxxxxx
--gpg-options='--default-key=xxxxxxx'
--archive-dir="/opt/aa/service/nas/restores"
--tempdir="/opt/aa/service/nas/restores" --ignore-errors
s3+http://URL/<backup set name>
"/opt/aa/service/nas/restores/office"

I'm using --ignore-errors as I'm unpacking as a non-root user, other
than the one that did the backups, so it was throwing "no permission"
errors on what I expect were the chmod/chown stages (I say this
because the files themselves were extracted)... Anyhow...

Version is 0.6.05, hand-rolled on Debian Lenny.

When it ceases, the system is unable to fork any other processes
until I kill duplicity.

An already-running top in another terminal shows that duplicity at
this point is ~200 MB virtual and 11 MB resident, but that I have
~200 MB RAM free (this is a fairly small-footprint system).

I believe it's out of memory for forking, but is this normal?

<code>
BEGNj3-tempdir/mktemp-QYdP4e-85
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-BEGNj3-tempdir/mktemp-QYdP4e-85
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-BEGNj3-tempdir/mktemp-aDi69N-61
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-BEGNj3-tempdir/mktemp-aDi69N-61
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-BEGNj3-tempdir/mktemp-2YCFpS-81
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-BEGNj3-tempdir/mktemp-2YCFpS-81
Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-BEGNj3-tempdir/mktemp-CXX67i-73
DEBUG:duplicity:Removing still remembered temporary file /opt/aa/service/nas/restores/duplicity-BEGNj3-tempdir/mktemp-CXX67i-73

Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1241, in ?
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1234, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1188, in main
    restore(col_stats)
  File "/usr/bin/duplicity", line 542, in restore
    restore_get_patched_rop_iter(col_stats)):
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 519, in Write_ROPaths
    for ropath in rop_iter:
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 491, in integrate_patch_iters
    for patch_seq in collated:
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 376, in yield_tuples
    setrorps(overflow, elems)
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 365, in setrorps
    elems[i] = iter_list[i].next()
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 110, in difftar2path_iter
    tarinfo_list = [tar_iter.next()]
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 326, in next
    self.set_tarfile()
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 320, in set_tarfile
    self.current_fp = self.fileobj_iter.next()
  File "/usr/bin/duplicity", line 579, in get_fileobj_iter
    manifest.volume_info_dict[vol_num])
  File "/usr/bin/duplicity", line 603, in restore_get_enc_fileobj
    fileobj = tdp.filtered_open_with_delete("rb")
  File "/usr/lib/python2.4/site-packages/duplicity/dup_temp.py", line 114, in filtered_open_with_delete
    fh = FileobjHooked(path.DupPath.filtered_open(self, mode))
  File "/usr/lib/python2.4/site-packages/duplicity/path.py", line 724, in filtered_open
    return gpg.GPGFile(False, self, gpg_profile)
  File "/usr/lib/python2.4/site-packages/duplicity/gpg.py", line 135, in __init__
    attach_fhs={'stdin': encrypt_path.open("rb"),
  File "/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py", line 365, in run
    create_fhs, attach_fhs)
  File "/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py", line 407, in _attach_fork_exec
    process.pid = os.fork()
OSError: [Errno 11] Resource temporarily unavailable

ERROR:duplicity:Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1241, in ?
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1234, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1188, in main
    restore(col_stats)
  File "/usr/bin/duplicity", line 542, in restore
    restore_get_patched_rop_iter(col_stats)):
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 519, in Write_ROPaths
    for ropath in rop_iter:
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 491, in integrate_patch_iters
    for patch_seq in collated:
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 376, in yield_tuples
    setrorps(overflow, elems)
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 365, in setrorps
    elems[i] = iter_list[i].next()
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 110, in difftar2path_iter
    tarinfo_list = [tar_iter.next()]
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 326, in next
    self.set_tarfile()
  File "/usr/lib/python2.4/site-packages/duplicity/patchdir.py", line 320, in set_tarfile
    self.current_fp = self.fileobj_iter.next()
  File "/usr/bin/duplicity", line 579, in get_fileobj_iter
    manifest.volume_info_dict[vol_num])
  File "/usr/bin/duplicity", line 603, in restore_get_enc_fileobj
    fileobj = tdp.filtered_open_with_delete("rb")
  File "/usr/lib/python2.4/site-packages/duplicity/dup_temp.py", line 114, in filtered_open_with_delete
    fh = FileobjHooked(path.DupPath.filtered_open(self, mode))
  File "/usr/lib/python2.4/site-packages/duplicity/path.py", line 724, in filtered_open
    return gpg.GPGFile(False, self, gpg_profile)
  File "/usr/lib/python2.4/site-packages/duplicity/gpg.py", line 135, in __init__
    attach_fhs={'stdin': encrypt_path.open("rb"),
  File "/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py", line 365, in run
    create_fhs, attach_fhs)
  File "/usr/lib/python2.4/site-packages/duplicity/GnuPGInterface.py", line 407, in _attach_fork_exec
    process.pid = os.fork()
OSError: [Errno 11] Resource temporarily unavailable

Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "threading.py", line 634, in __exitfunc
    t.join()
  File "threading.py", line 540, in join
    self.__block.wait()
  File "threading.py", line 203, in wait
    waiter.acquire()
</code>

Cheers and Thanks

Colin


_______________________________________________
Duplicity-talk mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/duplicity-talk