https://review.opendev.org/#/c/742971 changed the default
ara_report_path to an absolute path but missed the corresponding
change to the zuul artifacts. The URL there needs to be relative to
the log dir to be browsable.
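A minimal sketch of the fixed artifact entry, assuming the report
lands in an "ara" directory directly under the job's log root:
# Keep the artifact URL relative so the log browser can resolve it.
- zuul_return:
    data:
      zuul:
        artifacts:
          - name: ARA report
            url: "ara"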
Change-Id: I53511709227ea890eb417778be89e49e03771ff2
Currently a new cmd shell is opened instead of running the supplied
command when adding the build key for ssh-enabled Windows nodes.
Fix this by adding the '/c' flag.
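A hypothetical sketch (the task name and command are purely
illustrative) of what the flag changes: without '/c', cmd just opens a
new interactive shell, while with '/c' it carries out the supplied
string and then exits.
# Illustrative only: /c makes cmd.exe run the given string and exit.
- name: Run a command through cmd.exe on a Windows node
  shell: cmd /c "echo build key installed"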
Change-Id: I582febfe91d5dc229cc6a1959477a461a237336b
We are still trying to get incomplete subunit files into our logs. This
is now failing because we assumed a single dict with a files entry to
iterate over, but in reality we get a list of dicts, each with its own
files entry. Use with_subelements to iterate over that content and copy
the found files.
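A minimal sketch with hypothetical variable names: with_subelements
pairs each outer dict (item.0) with every entry of its files list
(item.1).
# Variable names here are illustrative.
- name: Copy the subunit files that were found
  copy:
    src: "{{ item.1.path }}"
    dest: "{{ log_dir }}/"
    remote_src: true
  with_subelements:
    - "{{ subunit_find.results }}"
    - files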
Change-Id: I160ac5caeec6fd7f5a35e805b7d4811b20405d14
zypper wants to autodetect these values, and it appears that upstream
has changed repo types because we are now seeing errors. Dropping this
config manually on a held node seems to fix things.
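A hypothetical sketch of the manual workaround (the repo file path is
illustrative): remove the hard-coded type so zypper autodetects it.
# Illustrative path; drop the explicit type= entry from the repo file.
- name: Let zypper autodetect the repo type
  become: true
  lineinfile:
    path: /etc/zypp/repos.d/repo-oss.repo
    regexp: '^type='
    state: absent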
Change-Id: I8ad28da7164d9a0955f43d4864ba24c14f0bd4a3
In a previous change I attempted to log in-flight subunit streams that
may be left behind if a job times out. This unfortunately didn't work
because in the cases where we leave those files behind we also assume
that we don't have a valid .stestr state based on the contents of that
directory.
Address this by modifying the find-stestr.sh script to indicate via its
return code whether .stestr or .testrepository is present, separate from
which commands are valid to run. Then in Ansible we can run the steps
that always apply when (s)testr was used (like copying partial logs) and
separately do subunit parsing only when the contents of the (s)testr db
directories are valid.
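A sketch of the consuming side with hypothetical task and variable
names: the return code says whether a .stestr/.testrepository
directory exists at all, while the script output still says which
command is usable.
# Names are illustrative; only the rc/stdout split is the point here.
- name: Check whether (s)testr was used
  script: find-stestr.sh
  register: stestr_check
  failed_when: false
- name: Collect partial subunit logs whenever (s)testr was used
  include_tasks: collect-partial-subunit.yaml
  when: stestr_check.rc == 0
- name: Parse the subunit stream only when the db contents are valid
  include_tasks: process-subunit.yaml
  when:
    - stestr_check.rc == 0
    - stestr_check.stdout | trim | length > 0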
Change-Id: I1c8f2405d74484631f633065baf9764dbd0209ee
This error occurs when variables are null and
DEFAULT_JINJA2_NATIVE is enabled:
  object of type 'NoneType' has no len()
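A minimal sketch with a hypothetical variable: the boolean second
argument to default() also replaces null values, so len() is never
applied to None under jinja2_native.
# Illustrative variable name; default('', true) catches null as well.
- name: Guard a possibly-null variable before using its length
  debug:
    msg: "{{ (maybe_null_var | default('', true)) | length }}"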
Change-Id: I7c69be60b846c03e30a0a6e8893bf2f3a3d5e5b8
This role had some assumptions that it would run only on the
zuul-executor. If we remove those assumptions, it can run on any
worker node, which allows it to be used in untrusted playbooks.
Note that this is backwards compatible only if the user did not
provide a non-default value for ara_report_path. If a user overrode
that (e.g. to put the report in a directory other than "ara" under
the executor log dir), this will break.
Change-Id: I3a28368a049fa583398ece8a66f90783f53098f4
If (s)testr is killed during a test run (possibly due to a timeout) it
will "leak" a tmp file containing the subunit stream up to that point.
This file is useful for debugging, so collect it. We don't process it
further, as the file is a partial stream and humans may need to look at
it to make sense of it.
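A hypothetical sketch (paths and variable names are illustrative) of
collecting the leaked stream without processing it.
# Illustrative path/pattern; copy whatever was left behind verbatim.
- name: Find leaked subunit tmp files
  find:
    paths: "{{ zuul_work_dir }}"
    patterns: 'tmp*'
  register: leaked_subunit
- name: Copy leaked subunit streams into the log dir
  copy:
    src: "{{ item.path }}"
    dest: "{{ log_dir }}/"
    remote_src: true
  loop: "{{ leaked_subunit.files }}"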
Change-Id: I6f0e61ec78f4dd9e2c4e52bc968b4f57586cb22e
The workspace setup role requires local code execution on the executor.
This is not allowed from an untrusted context so we disable it.
The previous assertions about the zuul-cloner setup depend on the
workspace setup running successfully. Disable those extra assertions.
The subunit fetching role's assertions grepped the HTML file on
localhost to check its validity. Disable these as well.
Change-Id: I7449749f50b6e4a34c4615b00836a7148e01c768
This uses a new method of calculating the cutoff time which does not
require executing code on the executor.
Change-Id: I92e71727e24281a31cb7caf0b78dd17a5aaae129
Co-Authored-By: Clark Boylan <clark.boylan@gmail.com>
The job was relying on a security hole to execute untrusted commands
on the executor (localhost). This change prevents failure after the
hole is fixed.
Change-Id: I910fb6e6a2659e77991111c8ce5a6b763556aeec
This was relying on a bug in the executor. It's not critical
functionality, so just avoid it for now.
Change-Id: I6069150324d0b921cdb8cbc6902e74ec2a6036cc
These tasks should be run with become, like Debian.yaml, otherwise
errors like this will occur:
  [Errno 13] Permission denied: '/etc/yum.repos.d/test.repo'
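A minimal sketch (the repository content is illustrative): writing
under /etc/yum.repos.d needs root, so the task has to escalate just
like its Debian.yaml counterpart.
# Illustrative repo content; become gives the task the needed rights.
- name: Add the test repository
  become: true
  copy:
    dest: /etc/yum.repos.d/test.repo
    content: "{{ test_repo_content }}"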
Change-Id: I9c6f0132536500d36ba0098b635b1858e05d7ccf
We do this because it better approximates a production environment and
it seems that devstack is better tested when tls-proxy is enabled.
Change-Id: If607caf301211181b4f37a2c7012f875de3d285c
We subclass the GCS Credentials class. We can't completely test
it without either hitting the live Google cloud or substantial
mocking, but we should be able to exercise most of the functionality
we typically use. Do that by asking it to load a token from disk and
asserting that it adds that token to a fake request headers
dictionary.
This also corrects a "problem" detected by the test. The current
super() call uses the python3 form, which is fine in that all current
uses of this code are using python3, but we still run python27 tests
on this repo, so we'll use the python2/python3 compat syntax.
Change-Id: Ifa4209617f4be52008b6294ebd10f0deb9bd6a51
Setting tox_envlist to venv by default is unintuitive for many users.
Remove this behaviour and let the default be the same as running tox
on the command line.
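A hypothetical job-vars sketch: jobs that still want the old behaviour
now have to request the venv environment explicitly.
# Illustrative job name; opt back in to the old default explicitly.
- job:
    name: my-tox-venv-job
    parent: tox
    vars:
      tox_envlist: venv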
Change-Id: I1b6d59ee4ebb7f6b3adcf4bd35d7148e83389008
The google-auth Credentials class added some new variables in its
constructor; make sure we call it in order to avoid reference errors
later.
Change-Id: I338a3368b90a6612801ba587982d3053bbd3ba78
The upload-afs-synchronize role was added with
I493d8829e3fd98e84f1b8f0e776e7ba41abf16c2 and used as part of the
artifact promote process for tarballs and other uploads. However, the
role is useful as a generic way to upload results to AFS while avoiding
some commonly hit caveats, such as getting permissions flags wrong.
Expand the README to explain things a little more.
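A hypothetical usage sketch; the variable names are illustrative, and
the role's README documents the real interface.
# Illustrative playbook; source/target values are placeholders.
- hosts: localhost
  roles:
    - role: upload-afs-synchronize
      vars:
        afs_source: "{{ zuul.executor.work_root }}/artifacts/"
        afs_target: /afs/openstack.org/project/example/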
Change-Id: Ia643b02b503560d1057697fc5a7d143fd728b177
The ensure-pip role has an option to provide multiple interpreters to
use. If not specified, we default to ansible_python.executable. This
default includes the full path to the executable. Those manually
providing interpreters will not know the full path, only the versions
they would like to use. To make things consistent, strip off the path
so we just have the version (python, python3, python3.8, etc.).
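A minimal sketch with hypothetical variable names: stripping the
directory leaves just the interpreter name, matching what users would
type by hand.
# Variable name is illustrative; basename drops the leading path.
- name: Normalise the default interpreter to just its version
  set_fact:
    pip_interpreters:
      - "{{ ansible_python.executable | basename }}"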
Change-Id: I339afc08393e9c6b1d26a05cf13b6fdc151f46d5
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
The ensure-pip tasks will install python3-venv, but multiple
interpreters may be specified. This adds handling to make sure the venv
package is installed for all requested interpreters.
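A hypothetical sketch (the variable name is illustrative and the
pythonX.Y-venv package naming is Debian-style): install a venv package
for every requested interpreter, not just the default python3.
# Illustrative names; loop over whatever interpreters were requested.
- name: Install a venv package for each requested interpreter
  become: true
  package:
    name: "{{ item }}-venv"
    state: present
  loop: "{{ requested_interpreters }}"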
Change-Id: I670d5815bfc902f9c50a98df715ef60b61bab594
Signed-off-by: Sean McGinnis <sean.mcginnis@gmail.com>
We're seeing cases where the shell command fails (for whatever reason)
and, since it uses failed_when: false, it's the subsequent tasks that
actually fail. We can treat an undefined stdout as a failure condition,
because that means the module itself failed and we should not continue
with the next tasks, since they depend on stdout being defined.
Additionally we can add some retries to try again when possible.
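A sketch of the pattern with a hypothetical command: treat a missing
stdout as the real failure and retry a few times before giving up.
# Command and names are illustrative; only the stdout check matters.
- name: Run the probe command
  shell: some-probe-command
  register: probe
  failed_when: probe.stdout is not defined
  retries: 3
  delay: 5
  until: probe.stdout is defined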
Change-Id: Iabc1619009220c6f1c91b84f81d5262c4d9d8430
Related-Bug: #1885697
When you're running a multi-node job, it would be nice to see the
inventory hostname in this list, so if you want to, say, monitor one
of the nodes, you don't have to guess which is which.
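A hypothetical sketch of what including the hostname could look like;
the task and printed fields are illustrative.
# Illustrative output task; inventory_hostname identifies the node.
- name: Show which node this entry belongs to
  debug:
    msg: "{{ inventory_hostname }} ({{ ansible_host | default('unknown') }})"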
Change-Id: Ibf822d51711c74d963a7287eaeaa9a40ff074e79