Adds a role to let users define a manifest of artifacts located
in zuul-output/artifacts that should be uploaded to pre-defined
Artifactory instances.
Change-Id: I00dc0302e85ce59b3808f6e62e2bcdadf2e41fde
When building normally we tag the images in docker, which lets the
upload playbook push them. But in the buildx case, we tag them for the
buildset registry but they never end up with change-specific
tags on the docker host itself, so they can't be found by
the upload playbook.
Change-Id: I4f51df3ac67602fd2d48f66639bb0715e7b06cd3
When building multi-arch it's done in parallel which can result
in the same layer being pushed at the same time, which is bad
for the registries.
Instead, build everything in parallel, then push each arch independently,
then push all the arches again to cause the manifest to be written
and pushed properly.
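A minimal sketch of the resulting ordering (the docker buildx
invocations are real, but the task and variable names here are
illustrative, not the role's actual implementation):

  # Assumed vars: image_arches, image_repository, image_tag, build_dir.
  # The layers are already built and cached, so these runs mostly push.
  - name: Push each architecture on its own so layers upload sequentially
    command: >-
      docker buildx build {{ build_dir }} --platform={{ zj_arch }}
      --tag={{ image_repository }}:{{ image_tag }} --push
    loop: "{{ image_arches }}"
    loop_control:
      loop_var: zj_arch

  - name: Push all architectures together so the manifest list is written
    command: >-
      docker buildx build {{ build_dir }}
      --platform={{ image_arches | join(',') }}
      --tag={{ image_repository }}:{{ image_tag }} --push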
Change-Id: I8036a9b4d4c515c20a05994741540b999e7cbcae
We seem to have flaky issues with buildx builds (because of course
we do). Try retrying the build to see if it's an eventually
consistent utility.
Change-Id: I6bd625ad7ffcf0c629c85017b1d5d3727e27b9d9
This currently installs tox with pip --user, which causes problems if
you try to run this version of tox as another user. This is done in
system-config, for example, where we run tox with "become: yes" to run
testinfra.
By installing tox into a venv, we can call it as another user and it
just works because it's all encapsulated in the venv. We use the
virtualenv commands exported by ensure-pip to create this.
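For illustration, a minimal sketch of the venv-based install, assuming
ensure-pip exports ensure_pip_virtualenv_command (the venv path used
here is only an example):

  - name: Create a venv to hold tox
    command: "{{ ensure_pip_virtualenv_command }} {{ ansible_user_dir }}/.local/tox-venv"
    args:
      creates: "{{ ansible_user_dir }}/.local/tox-venv"

  - name: Install tox into the venv
    command: "{{ ansible_user_dir }}/.local/tox-venv/bin/pip install tox"

  - name: Export the resulting executable
    set_fact:
      tox_executable: "{{ ansible_user_dir }}/.local/tox-venv/bin/tox"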
I think the original motivation for installing tox like this was to
ensure it is done without sudo permissions. This also doesn't require
permissions, but ensures the resulting tox_executable is able to be
executed in more contexts.
Needed-By: https://review.opendev.org/712819
Change-Id: Iebee8cb72cce7944c537fdb91b6c98ed51878661
This is a fix for Id8347b6b09735659a7ed9bbe7f9d2798fbec9620 which did
not specify the full path in ensure_pip_virtualenv_command for Xenial.
This slipped by testing because we only check that
ensure_pip_virtualenv_command runs under shell:, but not when it is
called as an argument to the pip: module (which exec's it differently
and requires the full path). Update testing to do that too.
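Something along these lines in the test playbook exercises that path
(the package installed and venv path are just examples):

  - name: Verify the virtualenv command works when exec'd by the pip module
    pip:
      name: six
      virtualenv: /tmp/ensure-pip-venv-test
      virtualenv_command: "{{ ensure_pip_virtualenv_command }}"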
Change-Id: I65ff5ce913917079ab2fc1d88c56d1c0a24ea83e
The buildx patch unfortunately changed the logic associated with
siblings to set up siblings in a loop one time, rather than to
do a loop of "set up siblings, build, clean up siblings". This causes
builds that use siblings to fail with an error about the siblings
dir not having been cleaned up.
Change-Id: I3c45bfa77ec9f2609689e04044c18f066adc9741
This adds more of the plain nodes (without pip-and-virtualenv) to our
testing environment to facilitate the eventual switch in opendev
infra.
Change-Id: I091ab667c5f071ba5844ea567464b1c29052638d
It turns out the extant comment, removed here, is correct in
identifying the problem, but incorrect about the solution.
As noted, the v8 pip included with Xenial doesn't fall back to PyPI
correctly when nodes are configured with mirrors. However, the note
about virtualenv upgrading pip is incorrect. This was not tested on
our "plain" nodes (this will be added by a follow-on
https://review.opendev.org/724776 when it can pass) so virtualenv was
picking up the pip installed by the pip-and-virtualenv element.
Installing pip from source doesn't really help; in fact it makes
things even more confusing because "python3 -m venv" still uses the
inbuilt pip from the python-pip-whl package [1]. e.g.
root@ubuntu-xenial-plain:~# pip --version
pip 20.1 from /usr/local/lib/python3.5/dist-packages/pip (python 3.5)
^ this is installed by get-pip.py
root@ubuntu-xenial-plain:~# python3 -m venv test
root@ubuntu-xenial-plain:~# ./test/bin/pip --version
pip 8.1.1 from /root/test/lib/python3.5/site-packages (python 3.5)
^ it still deploys pip in the venv from the whl
and thus will *not* pick up the source pip install. This is a problem
on our extant Xenial hosts, so clearly nobody is using it. However,
as part of this work we want to standardise other tools we are
installing in zuul-jobs to use "python3 -m venv". Thus we need all
our platforms to support a working venv out of the box.
The solution proposed here is to install a backport of Bionic's pip 9
into Xenial when using this element. This way, we are still shipping
packaged pip on the host and keeping our images as close to plain
vanilla upstream as possible, but with as small a change as we can
manage to actually work in our environment. Given Xenial's sunsetting
lifespan, this should require no further maintenance until we are no
longer interested in the distro.
Because we skip the install phase on nodes with pre-installed pip, we
put in a work-around to set "ensure_pip_virtualenv_command" to
virtualenv on extant nodes that have been configured with
pip-and-virtualenv. We can remove this when we have only "plain"
nodes (i.e. no pip-and-virtualenv element) and then we will
consistently be using venvs.
[1] https://packages.ubuntu.com/xenial/python-pip-whl
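The work-around is roughly of this shape (the condition shown is only
illustrative of "node was built with pip-and-virtualenv"; the real
check may differ):

  - name: Use plain virtualenv on nodes built with pip-and-virtualenv
    set_fact:
      ensure_pip_virtualenv_command: virtualenv
    when: pip_preinstalled | bool   # hypothetical flag for such nodes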
Change-Id: Id8347b6b09735659a7ed9bbe7f9d2798fbec9620
It's a generally useful file for people using buildkitd. It's also
more appropriate to write it in use-buildset-registry and then
just have build-docker-image copy it.
Do the same thing with writing the cert - we don't need to know
which path on the host use-buildset-registry wrote the file to; we
can just write the content from the dir into the container,
and then it's just a consistent command to apply it.
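Conceptually, the copy-and-apply step looks something like this
(container name and paths are illustrative, not the role's exact
values):

  - name: Copy the buildset registry CA into the buildkitd container
    command: >-
      docker cp {{ ca_dir }}/buildset-registry.crt
      buildx_buildkit_builder0:/usr/local/share/ca-certificates/

  - name: Refresh the trust store inside the container
    command: docker exec buildx_buildkit_builder0 update-ca-certificates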
Change-Id: Iaa485c2e8628900dccbed1f4b0773b6d1b5f7983
We need ca_dir to copy the certs in, but when we run in multi-node
cases the use-buildset-registry invocation is in a separate place,
so the var doesn't leak in.
This will get deleted in the next patch, but that patch is not
working and we need this to unbreak the multi-arch functionality
for nodepool.
Change-Id: I4f92f0415fb471f304fdd0a1e576812c8d67ab24
Move the setting of the containerfile into the loop (thus in build.yaml)
to follow the semantics from roles/build-container-image/common.rst for
container_filename and container_images.container_filename.
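The per-image resolution inside the loop then follows the documented
precedence, roughly like this (the loop variable name and the
empty-string default are assumptions here):

  - name: Resolve the containerfile for this image
    set_fact:
      containerfile: >-
        {{ zj_image.container_filename
           | default(container_filename)
           | default('') }}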
Co-Authored-By: Andreas Jaeger <aj@suse.com>
Change-Id: Id39f13cc666e9f99557447073cc0d362991f3506
Python 3.8 is the current version; replace py37 so that we test
3.5 and 3.8 as lowest/highest.
This follows I4b382bb89847d3ae2b20292d923c3c2211dfb254 for zuul.
Change-Id: I35a1492ff0e34c7951c8ecf216992044fdb3065f
This change ensures that modify_registries_conf doesn't update the
registry configuration when it is not necessary.
Change-Id: Iae824315ec5e0a0a362a81158777ebae7143ab57
When Ansible runs under Python 2, such as on SuSE 15, we need
python2-pip for pip: to work. Update the default to install Python 2
deps when Ansible is running under v2.
Additionally for SuSE, as described inline we can have pip but not
setuptools. Put in a work-around to always pull in the setuptools
package. Pull this out into a separate file; history has shown that
we may need more :/
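Sketch of the intent (package names vary per distro and are shown
only as examples):

  - name: Install pip for Python 2 when Ansible itself runs under Python 2
    package:
      name: python2-pip
    become: yes
    when: ansible_python.version.major == 2

  - name: Work around SuSE shipping pip without setuptools
    package:
      name: python2-setuptools
    become: yes
    when:
      - ansible_os_family == 'Suse'
      - ansible_python.version.major == 2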
Change-Id: I450171aad5e31d2925239ab3d0641cd23f6815a2
We currently are pulling from br/repo/tag and then re-tagging to
just repo/tag. But we have the buildset registry in the mirrors list
for docker, so we should just be able to pull directly from repo/tag
to prime the local image cache and have everything just work.
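That is, a single pull through the mirror primes the cache; roughly
(the tag format shown is illustrative):

  - name: Pull the change-specific tag via the buildset registry mirror
    command: >-
      docker pull {{ image_repository }}:change_{{ zuul.change }}_latest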
Change-Id: I4d73f10acfc84d94772b13e3be16790e661c7047
Docker has experimental support for building multi-arch
container images with a buildx command. Currently it only
supports pushing to a registry after running and the images
don't end up in the local docker images list. To work around
that, push to the buildset registry then pull back. This
is the inverse of the normal case where we build, then
retag, then push. The end result should be the same.
Change-Id: I6a4c4f9e262add909d2d5c2efa33ec69b9d9364a
We can attach some metadata to images we build indicating where
the image was built from. We can also allow users to add additional
labels, should they choose, although honestly for users doing it in
the Dockerfile makes more sense.
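For example, labels of roughly this shape can be attached at build
time (the label keys here are illustrative, not necessarily the exact
ones the role sets):

  - name: Build with provenance labels
    command: >-
      docker build {{ build_dir }}
      --label "org.zuul-ci.change={{ zuul.change }}"
      --label "org.zuul-ci.change_url={{ zuul.change_url }}"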
Change-Id: I01de15279a69026a09633eb488dac62910b324f5
Adds yamllint to the linters with a minimal configuration; some
rules are disabled to allow us to fix them in follow-ups, if we
agree on them.
Fixes an invalid YAML file containing characters inside a block.
Fixes a few minor linting issues.
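A minimal .yamllint along these lines (which rules end up disabled is
still open for discussion):

  ---
  extends: default
  rules:
    document-start: disable
    line-length: disable
    truthy: disable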
Change-Id: I936fe2c997597972d884c5fc62655d28e8aaf8c5
I previously added a check that makes the docs job fail in post if
there's no html output. This now leads to a POST_FAILURE when the
normal build fails, so a FAILURE is turned into a POST_FAILURE; see
https://zuul.opendev.org/t/zuul/build/afbc8c39929a46039aa4d987971cdb4f .
Update the check in the fetch-sphinx-tarball role so that it only
fails if the playbooks passed.
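Roughly, the check gets guarded on the overall job result, e.g.
(zuul_success is available in post playbooks; the task shape here is
illustrative):

  - name: Find sphinx html output
    find:
      paths: "{{ zuul.project.src_dir }}/doc/build/html"
    register: html_output

  - name: Fail only if the build succeeded but produced no html
    fail:
      msg: No sphinx html output found
    when:
      - zuul_success | bool
      - html_output.matched == 0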
Change-Id: Iebd0bffcc5caaa6df97d432a712d83b64db8e780