Re: Avocado notes from KVM forum 2019

From: Cleber Rosa
Subject: Re: Avocado notes from KVM forum 2019
Date: Mon, 25 Nov 2019 13:08:18 -0500
User-agent: Mutt/1.12.1 (2019-06-15)
On Mon, Nov 25, 2019 at 10:58:02AM -0300, Eduardo Habkost wrote:
> Thank you, Philippe, those are great ideas. I have copied them
> to the Avocado+QEMU Trello board so we don't forget about them:
> https://trello.com/b/6Qi1pxVn/avocado-qemu
>
> Additional comments below:
>
> On Mon, Nov 25, 2019 at 01:35:13PM +0100, Philippe Mathieu-Daudé wrote:
> > Hi Cleber,
> >
> > Here are my notes from talking about Avocado with various people during the
> > KVM forum in Lyon last month.
> >
> > All comments are QEMU oriented.
> >
> >
> > 1) Working offline
> >
> > Various people complained that Avocado requires online access; they
> > would like to use it offline.
> >
> > An example maintainer workflow is:
> >
> > - run avocado
> > - hack QEMU, build
> > - git pull
> > - build
> > - hack QEMU
> > (go offline)
> > - hack QEMU
> > - build
> > - run avocado <- FAILS
> >
>
> Ouch. This shouldn't happen even with no explicit --offline
> option. Failure to download artifacts shouldn't make tests
> report failure.
>
>
Agreed. There are a number of work items already to cover this. One
is a more generic test metadata collection system:
https://trello.com/c/lumR8u8Y/1526-rfc-nrunner-extended-metadata
We already have code that can find the required assets, and with that,
we can let the job (not the test) attempt to fulfill those
requirements, skipping the tests if they cannot be fulfilled.
Until that is available, we can wrap the "fetch_asset()" calls and
cancel the test if the download fails.
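To make the wrapping idea concrete, here is a minimal sketch of the
pattern. The Avocado APIs are stubbed out so the sketch is
self-contained; in a real test, fetch_asset() and cancel() come from
the avocado.Test class.

```python
# Sketch of the "cancel instead of fail" pattern.  The real
# implementation would live in an avocado.Test subclass, where
# self.fetch_asset() downloads/caches an artifact and self.cancel()
# marks the test CANCELLED; both are stubbed here for illustration.

class TestCancelled(Exception):
    """Stand-in for the exception Avocado raises on self.cancel()."""

def fetch_asset(url):
    """Stand-in for avocado.Test.fetch_asset(); raises when offline."""
    raise OSError("network unreachable")

def fetch_asset_or_cancel(url):
    """Fetch an asset, turning a download failure into a cancellation."""
    try:
        return fetch_asset(url)
    except OSError as exc:
        raise TestCancelled("could not fetch %s: %s" % (url, exc))
```

With a wrapper like this, an offline run reports the test as
CANCELLED rather than FAILED, which matches the expectation in item 1.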
> > The failure is because mainstream added new tests, which got pulled
> > in, and the user only notices when running avocado again, but offline.
> > An example is boot_linux_console.py, which gained various tests from
> > other subsystems, so the maintainer has to disable the new tests
> > manually to be able to run their previous tests.
> >
> > Expected solution: skip tests when an artifact is not available,
> > possibly only when an --offline option is used
> >
> >
> > 2) Add artifacts manually to the cache
> >
> > Not all artifacts are easily downloadable; some are public but require
> > the user to accept an End User License Agreement.
> > Users would like to share their tests along with documentation about
> > where/how to download the requisite files (accepting the EULA) to run
> > the tests.
> >
> >
> > 2b) Add reference to artifact to the cache
> >
> > Groups of users might share sets of files (e.g. on NFS storage) and
> > would like to use their remote read-only files directly, instead of
> > copying them to their home directories.
>
> This sounds nice and useful, but I don't know how to make the
> interface for this usable.
>
>
I guess this would require an Avocado installation-wide configuration
entry listing the available cache directories. IMO, once that
configuration is applied, the tests should transparently find assets
in the configured locations.
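For illustration, such an entry could look like this in avocado.conf
(the section and key names here are an assumption; check the settings
reference of the Avocado version in use):

```ini
[datadir.paths]
# Local writable cache first, then a shared read-only NFS location
# (both paths are examples)
cache_dirs = ['/var/lib/avocado/cache', '/mnt/nfs/avocado-cache']
```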
> >
> >
> > 3) Provide qemu/avocado-qemu Python packages
> >
> > The mainstream project uses Avocado to test QEMU. Other projects use
> > QEMU to test their code, and would like to automate that using
> > Avocado. They usually do not rebuild QEMU but use a stable binary from
> > distributions. The Python classes are not available separately, so
> > they have to clone QEMU to use Avocado (I guess they only need 5
> > Python files).
> > When running in Continuous Integration, this is overkill, because when
> > you clone QEMU you also clone various other submodules.
>
> I only have one concern, here: I don't think we have the
> bandwidth to start maintaining a stable external Python API.
> Users of those packages will need to be aware that future
> versions of the modules might have incompatible APIs.
>
My understanding is that we would publish those files as a Python
module with versions matching QEMU. No stability would be promised.
Users can always require a specific version of the Python module that
matches the QEMU version they expect/want to use.
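For example, a consumer could then pin the module in their
requirements file (the package name "qemu" below is hypothetical, and
its version would track the QEMU release; avocado-framework is the
existing PyPI package):

```
# requirements.txt (hypothetical "qemu" package name)
qemu==4.2.0
avocado-framework==72.0
```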
> >
> >
> > 4) Warn the user when Avocado is too old for the tests
> >
> > Some users tried Avocado following the examples on the mailing list
> > and the ones in some commit descriptions, where we simply show
> > "avocado run ...".
>
> Oops.
>
> > They installed the distribution's Avocado package, tried it, and it
> > failed for a few of them with no obvious reason (the .log file is
> > hard to read when you are not accustomed to it). IIUC their
> > distribution provides an older Avocado (69?) while we use recent
> > features (72).
> >
> > We never noticed it because we use 'make check-venv' and do not test
> > the distribution's Avocado. While we cannot test all distributions,
> > we could add a version check: if the Avocado version is too old,
> > display a friendly message on the console (not in the logfile).
>
> Sounds like a good idea.
>
A simpler (complementary?) solution, or maybe just a good practice, is
to use this form in the examples:
"./tests/venv/bin/avocado run ..."
Do you think this would be enough? It would of course not cover the
examples in previous commit messages.
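A version check along the lines of item 4 could be as simple as this
sketch (the minimum version, the message wording, and where the guard
would be hooked in are all assumptions, not existing Avocado or QEMU
code):

```python
# Sketch of a friendly version guard.  REQUIRED is an example; the
# real minimum would track whatever features the tests actually use.
REQUIRED = (72, 0)

def avocado_is_new_enough(version_string, required=REQUIRED):
    """Compare a version string such as '69.0' against 'required'."""
    parts = tuple(int(p) for p in version_string.split(".")[:2])
    parts += (0,) * (2 - len(parts))   # pad "72" to (72, 0)
    return parts >= required

if not avocado_is_new_enough("69.0"):
    print("Avocado >= %d.%d is required to run these tests; "
          "please use 'make check-venv' or upgrade." % REQUIRED)
```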
Thanks!
- Cleber.
> >
> >
> > That's it for my notes.
> >
> > Eduardo/Wainer, are there other topics I forgot?
>
> I don't remember anything specific right now. Thanks again!
>
> --
> Eduardo