duplicity-talk

Re: [Duplicity-talk] Errors during test phase of installation.....


From: Kenneth Loafman
Subject: Re: [Duplicity-talk] Errors during test phase of installation.....
Date: Mon, 16 Jan 2017 10:45:00 -0600

Hmm, those errors should have been fixed in 0.7.11, in testing/test_selection.py.TestLockedFoldersNotError.  Mac Sierra made some funky changes that keep you from deleting a directory with perms of 0o0000, causing test cleanup of those two tests to fail and, in general, messing up the entire test cycle.

There should be two lines before each test def with:
    @unittest.skipUnless(platform.platform().startswith('Linux'),
                         'Skip on non-Linux systems')
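
For reference, this is roughly how that decorator sits on a test method (the class name matches the one above, but the test body here is a placeholder, not the real selection test):

```python
import platform
import unittest

class TestLockedFoldersNotError(unittest.TestCase):
    # Placeholder body; the real tests in testing/test_selection.py
    # create folders with perms 0o0000, which Sierra refuses to delete.
    @unittest.skipUnless(platform.platform().startswith('Linux'),
                         'Skip on non-Linux systems')
    def test_locked_dir(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(
    TestLockedFoldersNotError)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

On Linux the test runs; on macOS it is reported as skipped rather than erroring during cleanup.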

Plus, if you ran the tests already, then the directory testing/testfiles may still be around to screw up the next test run.
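
If the leftover tree itself has 0o0000 dirs in it, a plain rm/rmtree may fail too.  A hypothetical helper (not part of duplicity; `force_remove` is just my name for it) would restore permissions top-down before deleting:

```python
import os
import shutil

def force_remove(path):
    """Re-open directory perms top-down, then delete the whole tree.

    Needed because dirs chmod'ed to 0o0000 by the tests cannot be
    listed or removed directly.
    """
    if not os.path.isdir(path):
        return
    os.chmod(path, 0o700)
    # os.walk is top-down by default, so each subdir is chmod'ed
    # before the walk descends into it.
    for root, dirs, _files in os.walk(path):
        for d in dirs:
            os.chmod(os.path.join(root, d), 0o700)
    shutil.rmtree(path)
```

e.g. `force_remove('testing/testfiles')` before the next run.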

...Ken


On Mon, Jan 16, 2017 at 10:30 AM, Scott Hannahs <address@hidden> wrote:
Ok, but the standard error output has 581k lines:
6700 lines end with “ERROR”
1000 lines end with “FAIL”
23K lines end with “ok”
37K lines start with “Traceback”

This seems disturbing, especially the Traceback failures in Python.  It is possible that I have not installed some necessary component.

But as I said, my backup script is working every night (with the S3 backend).  It just may not be working for a variety of other backends.

There are a lot of “directory not empty” failures (about 5K), and the number of errors in the whole file is somewhat daunting to me at the moment.  If it is normal, I will move on and publish this installer package.

The whole error log is 36 MB and took 14,000 seconds to run.
It can be downloaded at http://www.p-hall.net/files/duplicity-error.txt

For the record, here are the installed components (and of course their defined dependencies):

duplicity        0.7.11-3
boto-py27        2.36.0-1
gnupg-unified    1.4.21-1
lftp             4.6.5-1
librsync         0.9.7-1006
lockfile-py27    0.12.2-1
paramiko-py27    1.7.6-1
pycryptopp-py27  0.7.1-1
python27         1:2.7.13-1
MacOSX           10.11.6

Here are the first two Traceback reports:
======================================================================
ERROR: test_cleanup_after_partial (testing.functional.test_cleanup.CleanupTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/sw/src/fink.build/duplicity-0.7.11-1/duplicity-0.7.11/testing/functional/test_cleanup.py", line 38, in test_cleanup_after_partial
    good_files = self.backup("full", "testfiles/largefiles")
  File "/sw/src/fink.build/duplicity-0.7.11-1/duplicity-0.7.11/testing/functional/__init__.py", line 143, in backup
    result = self.run_duplicity(options=options, **kwargs)
  File "/sw/src/fink.build/duplicity-0.7.11-1/duplicity-0.7.11/testing/functional/__init__.py", line 130, in run_duplicity
    raise CmdError(return_val)
CmdError: 31

======================================================================
ERROR: test_remove_all_but_n (testing.functional.test_cleanup.CleanupTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/sw/src/fink.build/duplicity-0.7.11-1/duplicity-0.7.11/testing/functional/test_cleanup.py", line 56, in test_remove_all_but_n
    full1_files = self.backup("full", "testfiles/empty_dir")
  File "/sw/src/fink.build/duplicity-0.7.11-1/duplicity-0.7.11/testing/functional/__init__.py", line 143, in backup
    result = self.run_duplicity(options=options, **kwargs)
  File "/sw/src/fink.build/duplicity-0.7.11-1/duplicity-0.7.11/testing/functional/__init__.py", line 130, in run_duplicity
    raise CmdError(return_val)
CmdError: 31

======================================================================


-Scott


> On Jan 16, 2017, at 10:41 AM, Kenneth Loafman <address@hidden> wrote:
>
> Those look like the normal errors we force for testing.  Unless the tests end with ERROR or FAIL, all is good.
>
> ...Thanks,
> ...Ken
>
>


