Move the pull tasks to their own files and add a playbook that
invokes them, so that we can stop/pull/start independently.
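A rough sketch of the idea (the hosts, role and file names are
illustrative, not the actual repo layout):
    # playbooks/pull.yaml
    - hosts: docker-hosts
      tasks:
        - name: Run only the service role's image pull tasks
          include_role:
            name: some-service
            tasks_from: pull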
Change-Id: I4f351c1d28e5e4606e0a778e545a3a805525ac71
This change adds comments to the python-base and python-builder
Dockerfiles to force a rebuild of these images. We do this periodically
to pull in updates.
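For illustration, the comment is just a throwaway line whose only
purpose is to change the file contents so the image build jobs
trigger, something like:
    # Bump this comment to force a rebuild that picks up package updates.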
Change-Id: I109a46603a74a376cc36fdfbd35734f6dc835abe
Add the two new borg hosts to cacti. Also remove the old bup server
which was still lurking there.
Change-Id: I2bf9e401f93b59ecef162db7020f97ba1498e027
This includes a fix for I216528a76307189d8d87bd2fcfeff95c6ceb53cc.
Now that it's released, we can be a bit more explicit about why we
added the workaround.
Change-Id: Ibaf1850549b5e7ec3622418b650bc5e59a289ab6
All hosts are now running their backups via borg to servers in
vexxhost and rax.ord.
For reference, the servers being backed up at this time are:
borg-ask01
borg-ethercalc02
borg-etherpad01
borg-gitea01
borg-lists
borg-review-dev01
borg-review01
borg-storyboard01
borg-translate01
borg-wiki-update-test
borg-zuul01
This removes the old bup backup hosts, the no-longer-used ansible
roles for the bup backup server and client, and any remaining
bup-related configuration.
For simplicity, we will remove any remaining bup cron jobs on the
above servers manually after this merges.
Change-Id: I32554ca857a81ae8a250ce082421a7ede460ea3c
This sets a global BORG_UNDER_CRON=1 environment variable for
production hosts and makes the borg-backup script send an email if any
part of the backup job appears to fail (this avoids spamming ourselves
if we're testing backups, etc).
We should ideally never get this email, but if we do it's something we
want to investigate quickly. There's nothing worse than thinking
backups are working when they aren't.
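A minimal sketch of the cron side, assuming Ansible's cron module and
an illustrative script path (not the exact role contents):
    - name: Mark cron-driven borg runs so failures send mail
      cron:
        name: BORG_UNDER_CRON
        env: yes
        value: "1"
    - name: Run the borg backup daily
      cron:
        name: borg-backup
        special_time: daily
        job: /usr/local/bin/borg-backup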
Change-Id: Ibb63f19817782c25a5929781b0f6342fe4c82cf0
This is a private list for contacting administrators, suitable for
raising security concerns. I've stolen the wording from Ian's
similar I886f67d875abd09753511f6c33312cfc5eb62933 docs change.
Change-Id: I7eb094d9fc75494ab33e0b9133fb451724f96dad
Co-Authored-By: Ian Wienand <iwienand@redhat.com>
Ifa5f251fdfb8de737ad2ed96491d45294ce23a0c renamed the afs and afsdb
groups to afs-file-server and afs-db-server, but didn't update the
group files.
Previously the firewall rules were duplicated in the afs and afsdb
groups, but now all afs servers are in the afs-server-common group.
Rename afs.yaml->afs-server-common.yaml and remove the now
unnecessary afsdb.yaml.
Remove one of the old group vars files and rename the other to
afs-server-common so we can restore the UDP ports they open in our
firewall rules.
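For reference, the rules in question open the standard OpenAFS server
UDP ports; a sketch with an assumed variable name:
    # inventory/group_vars/afs-server-common.yaml (variable name assumed)
    iptables_extra_public_udp_ports:
      - 7000  # fileserver
      - 7002  # ptserver
      - 7003  # vlserver
      - 7005  # volserver
      - 7007  # bosserver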
Change-Id: I17dd0596660addf061ade31b4450bf040c01ffe8
Ib7e7d7313e0827a40009df840119444611d74ca2 did not match the promote
job file filter, so the image wasn't promoted. Bump to create a new
image build.
Change-Id: I31e8368ec309e62a9cf3607acfd5157aa5b6c61e
Both the fileservers and db servers have common key material deployed
by the openafs-server-config role. Put both types of server in a new
group "afs-server-common" so we can define this key material in just
one group file on bridge.
Then separate out the two into afs-<file|db>-server groups for
consistent naming.
Rename afs-admin for consistent naming.
The service file is updated to reflect the new groups.
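Sketched as a YAML inventory (hostnames are placeholders):
    all:
      children:
        afs-server-common:        # key material defined once for this group
          children:
            afs-file-server:
              hosts:
                afs01.example.org:
            afs-db-server:
              hosts:
                afsdb01.example.org: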
Change-Id: Ifa5f251fdfb8de737ad2ed96491d45294ce23a0c
Currently this variable sets several URLs used in the config to
internal http links (port 8000). This bubbles through to the UI, which
then can't talk to the API. Empirically, changing these values in the
container config and restarting it makes things work. Update this
variable to make it talk to external https.
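Illustratively (the variable name and hosts here are placeholders for
whatever the config uses):
    # before: the UI is handed links it cannot reach from outside
    api_url: http://localhost:8000/api
    # after: the externally reachable https endpoint
    api_url: https://service.example.org/api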
Change-Id: If61ec1e0383b98d34d092c55ca0095588487902a
This checks the backup archives and alerts us if anything seems wrong.
This will take a few hours, so we run it once a week.
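Roughly, this is a weekly cron that runs borg's consistency check
(borg check) via a wrapper script; the path and job name here are
illustrative:
    - name: Verify the backup archives weekly
      cron:
        name: borg-verify
        special_time: weekly
        job: /usr/local/bin/borg-backup-verify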
Change-Id: I832c0d29a37df94d4bf2704c59bb3f8d855c3cc8
We have set up rsyslogd/logrotate to persist anything with
docker-prefixed tags to disk in /var/log/containers. Set this up here
so we keep track of the mariadb and refstack logs.
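In docker-compose terms that just means giving each container a
docker-<name> syslog tag; a sketch with an assumed compose layout:
    services:
      mariadb:
        logging:
          driver: syslog
          options:
            tag: docker-mariadb  # rsyslogd routes docker-* tags to /var/log/containers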
Change-Id: I760cfeb7226f79986fbf9d7dbc5f899fc87a0cd1
We need to depend on the buildset registry as we are building this
image in a separate job. We also don't need to depend on the build job
in gate; we only need the upload job.
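Sketched as the Zuul project-pipeline config (the image job names are
illustrative):
    # gate: the run job needs the registry and the upload job, not build
    gate:
      jobs:
        - opendev-buildset-registry
        - system-config-run-example:
            dependencies:
              - opendev-buildset-registry
              - system-config-upload-image-example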
Change-Id: Ie7c2ed29c028f8c23d67ad38edbe04b12e22d026
This change splits our existing system-config-run-review job into two
jobs, one for Gerrit 3.2 and another for 3.3. The biggest change is
that we use a var called zuul_test_gerrit_version to select which
version we want; that ends up in the fake group file written out by
Zuul for the nested ansible run. The nested ansible run then populates
the docker-compose file with the appropriate version for us.
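For illustration, the version flows roughly like this (file paths and
the image name are assumptions):
    # group file written by Zuul for the nested ansible run
    zuul_test_gerrit_version: "3.3"

    # docker-compose.yaml.j2 fragment consuming it
    services:
      gerrit:
        image: docker.io/opendevorg/gerrit:{{ zuul_test_gerrit_version }}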
Change-Id: I00b52c0f4aa8df3ecface964007fcf5724887e5e
Gerrit 3.3 has been released. Let's start building images for it so
that we can do testing when we're ready to start that.
We also add testing files to the list of things that trigger the 3.3
builds. Strictly this isn't necessary, since the tests will continue
to use 3.2 images until we upgrade to 3.3, but it helps us avoid
forgetting to do this when we do upgrade. A few extra jobs running
today ensure we continue to run the right jobs tomorrow.
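A sketch of the trigger configuration (the job name and file matchers
are illustrative):
    - job:
        name: system-config-build-image-gerrit-3.3
        files:
          - docker/gerrit/.*
          - testinfra/test_gerrit.py  # testing files now trigger builds too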
Change-Id: Ib7e7d7313e0827a40009df840119444611d74ca2
The channel for PTG discussion was moved to #openinfra-events, and
the bot was renamed "openinfraptg" to match the extended scope
of the event.
Change-Id: I52718358ddb4a199d24fc6da6e71f81c646da1f2