19078 Commits

Tony Breeds
cca0c34e7b Switch meetpad test servers to jammy
Also, while we're there, use 99 as the host index to make it slightly
harder to confuse testing with production.

Change-Id: I62193418feb9401fc06da39bd100553aef3dc52d
2024-05-02 13:46:03 -05:00
Tony Breeds
5f0e06963a Use group var to define letsencrypt_certs
Change-Id: Ibcc11bd4fb120a046b428fe3c6eb9ba8505495d0
2024-05-02 13:42:56 -05:00
Tony Breeds
ba511c4d1c Allow overriding of meetpad PUBLIC_URL
In testing jammy updates we discovered that this value needs to match
so that websockets are directed to the correct host.

Change-Id: Id44bf92edff411389f05a652dad2ae78607e4d55
2024-05-02 13:41:47 -05:00
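
For reference, a minimal sketch of the kind of override this enables; the
service and variable names are illustrative, not taken from the change
itself. Jitsi's web container reads PUBLIC_URL from its environment, so a
test job can point websocket traffic at the test host by overriding it:

  services:
    web:
      image: jitsi/web:stable
      environment:
        # Production keeps the default value; the jammy test job overrides
        # it so websocket traffic lands on the test node instead.
        - PUBLIC_URL={{ meetpad_public_url | default('https://meetpad.opendev.org/') }}
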
Zuul
f6a131ebc0 Merge "Update etherpad to v2.0.3" 2024-05-01 16:08:38 +00:00
Zuul
2653c922cc Merge "roles/openafs-client: small overhaul of rpm builds" 2024-04-30 18:21:55 +00:00
Clark Boylan
b2607979ab Update etherpad to v2.0.3
This update changes how Etherpad is built and how authentication is
managed for API requests. That ends up changing a lot of our tooling
around etherpad, but etherpad itself (other than the auth changes)
doesn't seem to change much. In response, this updates our admin docs
on common API tasks to use the new process, and updates our testinfra
testing as well to cover that and ensure it all continues to work
properly after this change.

Note the Dockerfile updates are all adapted from upstream. I'm actually
not fond of the decisions they have made in this image build, but being
in sync is probably more important than fixing the multistage builds and
being different.

This change jumps us from v1.9.7 to v2.0.3 (covering the 2.0.0, 2.0.1,
and 2.0.2 releases too). A changelog can be found here:

  https://github.com/ether/etherpad-lite/blob/v2.0.3/CHANGELOG.md

Change-Id: Ia7c4f26d893b4fc4a178262e1a6b9f3fa80d2a5c
2024-04-29 14:17:55 -07:00
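
As a hedged sketch of what the new token-based API auth looks like from
tooling such as ours: Etherpad 2.x drops the old APIKEY.txt scheme in
favor of OAuth-style tokens. The endpoint paths, host, and credential
names below are assumptions, not details from this change:

  - name: Fetch a bearer token from Etherpad's OAuth endpoint
    ansible.builtin.uri:
      url: https://etherpad.example.org/oidc/token
      method: POST
      body_format: form-urlencoded
      body:
        grant_type: client_credentials
        client_id: admin
        client_secret: "{{ etherpad_admin_password }}"
      return_content: true
    register: etherpad_token

  - name: Call the HTTP API with the token instead of the old apikey
    ansible.builtin.uri:
      url: https://etherpad.example.org/api/1.2.15/listAllPads
      headers:
        Authorization: "Bearer {{ etherpad_token.json.access_token }}"
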
Zuul
095ac2ae8a Merge "Ubuntu 22.04: Add ceph quincy and reef to mirror" 2024-04-29 16:58:19 +00:00
Zuul
69da8432f9 Merge "Upgrade Gitea's backend DB to MariaDB 10.11" 2024-04-25 18:33:07 +00:00
Jeremy Stanley
17276d0d31 Override upstream ADMINS address for mailman
Deliveries to root@localhost don't work, so use the same address as the
From address instead.

Change-Id: Id509f652be3b4b6f77edc85161aed7200f672571
2024-04-24 18:02:00 +00:00
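
A minimal sketch of the sort of override involved, assuming the setting
lives in mailman-web's Django settings; the file path and wording are
illustrative rather than taken from the change:

  - name: Point Django's ADMINS at a deliverable address
    ansible.builtin.blockinfile:
      path: /var/lib/mailman/web-data/settings_local.py   # illustrative path
      block: |
        # Upstream defaults ADMINS to root@localhost, which bounces;
        # reuse the address we already send mail from.
        ADMINS = [("Mailman admin", DEFAULT_FROM_EMAIL)]
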
Zuul
a713d1ded5 Merge "Add robots.txt to Zuul web" 2024-04-24 00:09:28 +00:00
Clark Boylan
691903f1c1 Upgrade Gitea's backend DB to MariaDB 10.11
After upgrading MariaDB behind several services from 10.4 to 10.11, it is
now Gitea's turn. Use the auto upgrade method as we've done elsewhere
and update Gitea's DB too.

Change-Id: I01aab3c55f1af4f883efe05bf8862ccd50a07631
2024-04-23 15:16:41 -07:00
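
The "auto upgrade method" referred to here is the official image's
MARIADB_AUTO_UPGRADE switch; a hedged compose sketch, with service name,
volume path, and secret variable purely illustrative:

  services:
    mariadb:
      image: mariadb:10.11
      environment:
        MARIADB_ROOT_PASSWORD: "{{ gitea_db_root_password }}"
        # The image's entrypoint runs mariadb-upgrade automatically when
        # the data directory was created by an older server version.
        MARIADB_AUTO_UPGRADE: "1"
      volumes:
        - /var/mariadb/db:/var/lib/mysql
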
Zuul
4683a2342e Merge "Apply apache ua filter to Zuul" 2024-04-23 19:27:34 +00:00
Zuul
1a7ad511da Merge "Rebuild our base python images" 2024-04-23 18:19:05 +00:00
Zuul
f57a5aa61b Merge "Use UA filter in mailman apache vhost" 2024-04-23 17:42:14 +00:00
Zuul
196817156d Merge "Add robots.txt to our list servers" 2024-04-23 17:25:21 +00:00
Clark Boylan
6c37d8f202 Rebuild our base python images
This is our semi-regular python base image rebuild. This ensures we're
running relatively up-to-date python builds as well as base system
packages (though many of our image builds update the base system too).

Change-Id: Ice918219a64bd5845de9dc3330bf292261c6a80e
2024-04-23 09:01:43 -07:00
Clark Boylan
e3cf34414b Use UA filter in mailman apache vhost
We're noticing that mailman's uwsgi queue is filling up. Before we try
to extend the queue, apply our user agent filter to apache to reduce the
number of requests that hit the queue in the first place.

Change-Id: Ib821a7fffa6239a9affcc4c6698eef2dc9378cd1
2024-04-23 08:51:53 -07:00
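
As a rough illustration of the mechanism only; the real filter is shared
across our vhosts, and the agent pattern and paths below are placeholders:

  - name: Reject requests from filtered user agents before they reach uwsgi
    ansible.builtin.blockinfile:
      path: /etc/apache2/sites-enabled/mailman3.conf   # illustrative path
      block: |
        SetEnvIfNoCase User-Agent "ExampleBadBot" blocked_ua
        <Location "/">
          <RequireAll>
            Require all granted
            Require not env blocked_ua
          </RequireAll>
        </Location>
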
Clark Boylan
c499b57e16 Add robots.txt to our list servers
We've noticed that our uwsgi queues are filling up and a lot of requests
are being made to robots.txt, which end up erroring with 500/503. Add a
robots.txt file that allows crawling of our lists and archives with a
delay value, in hopes this will cause bots to cache results and not fill
up the queue with repetitive requests.

Change-Id: I660d8d43f6b2d96663212d93ec48e67d86e9e761
2024-04-23 08:51:50 -07:00
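
A minimal sketch of the kind of robots.txt being added; the install path
and exact delay value are illustrative:

  - name: Install a robots.txt that asks crawlers to slow down
    ansible.builtin.copy:
      dest: /var/www/mailman/robots.txt   # illustrative path
      content: |
        User-agent: *
        Allow: /
        # Non-standard but widely honored; spaces out repeat visits.
        Crawl-delay: 2
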
Zuul
1d1f8bee20 Merge "reprepro: mirror Ubuntu UCA Caracal for Ubuntu Jammy" 2024-04-23 06:16:24 +00:00
Takashi Kajinami
0b0010dbb2 Ubuntu 22.04: Add ceph quincy and reef to mirror
Ceph quincy and reef packages for Ubuntu Jammy are required by Puppet
CI jobs.

Change-Id: I701e61c9e30515b140c3c2e8131f0a72938b9552
2024-04-21 02:40:37 +09:00
Takashi Kajinami
e9428a7dc3 reprepro: mirror Ubuntu UCA Caracal for Ubuntu Jammy
Change-Id: Ib57086a526afb995bdbca43080f4ed16688038ed
2024-04-20 01:02:25 +09:00
Dr. Jens Harbott
be45290054 gitea: move robots.txt to public directory
Gitea wants us to move the robots.txt file to a new location. It
currently logs a warning about it:

2024/04/17 19:30:56 cmd/web.go:191:serveInstalled() [E] Found legacy public
      asset "robots.txt" in CustomPath. Please move it to
      /custom/public/robots.txt

Change-Id: Ic4a7f3bbe4633972e0409b37b511fdb03f968442
2024-04-19 14:17:41 +02:00
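
In Ansible terms the fix amounts to installing the file under the
container's public asset directory instead of the CustomPath root; the
host-side path below is an assumption, only the in-container location
comes from the warning above:

  - name: Install robots.txt where modern Gitea expects public assets
    ansible.builtin.copy:
      src: gitea/robots.txt
      # Bind-mounted into the container as /custom/public/robots.txt
      dest: /var/gitea/data/gitea/public/robots.txt
      owner: root
      group: root
      mode: "0644"
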
Zuul
481f128257 Merge "Move gerrit replication waiting queue aside during project renames" 2024-04-17 22:48:39 +00:00
Zuul
a89fab4a40 Merge "Upgrade Etherpad's MariaDB to 10.11" 2024-04-17 20:23:23 +00:00
Zuul
f56968c92e Merge "Upgrade gitea to v1.21.11" 2024-04-17 19:23:50 +00:00
Zuul
6bb2a585e3 Merge "Add more User Agent filters" 2024-04-17 19:15:19 +00:00
Zuul
62e62af380 Merge "Switch Gitea archive cleanup from weekly to daily" 2024-04-17 18:55:14 +00:00
Zuul
4661dad3b5 Merge "Cleanup lingering Mailman 2 playbook" 2024-04-17 17:54:54 +00:00
Zuul
d9d15d9a1c Merge "Upgrade Mailman's MariaDB to 10.11" 2024-04-17 17:04:59 +00:00
Clark Boylan
3e5a2968ff Add more User Agent filters
These filters match Windows user agents with old Chrome versions that
appear to be doing inappropriate crawling.

Change-Id: Ia83c5596cc82b28f265af55240e52f364baef728
2024-04-17 09:35:20 -07:00
Jeremy Stanley
6466fa8419 Switch Gitea archive cleanup from weekly to daily
Crawlers that ignore our robots.txt are triggering archive creation
so rapidly that our rootfs fills up between weekly purges, so doing it
once a day should hopefully mitigate further problems.

Change-Id: Ib4e56fbd666f7bf93c017739697d8443d527b8c7
2024-04-17 14:15:49 +00:00
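
The change itself is just a cron cadence tweak; a hedged sketch using
Ansible's cron module, where the cleanup command is a hypothetical
placeholder rather than the real job:

  - name: Purge generated repository archives once a day
    ansible.builtin.cron:
      name: gitea-archive-cleanup
      user: root
      special_time: daily      # previously weekly
      job: /usr/local/bin/clean-gitea-archives.sh   # hypothetical helper
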
Clark Boylan
3a44e0cf64 Upgrade gitea to v1.21.11
This is a bugfix upgrade from v1.21.10 to v1.21.11. None of the
templates we override have been changed between these two versions
according to git diff.

A full changelog can be found here:

  https://github.com/go-gitea/gitea/blob/v1.21.11/CHANGELOG.md

Change-Id: I4d3648e311fe87b275856f2d73aca4a79c2c5507
2024-04-16 08:32:22 -07:00
Zuul
998493a057 Merge "Update Gerrit 3.8 to 3.8.5 and 3.9 to 3.9.4" 2024-04-15 18:07:38 +00:00
Clark Boylan
8049407fa7 Update Gerrit 3.8 to 3.8.5 and 3.9 to 3.9.4
This updates our production 3.8 image to the latest bugfix release and
our future upgrade image to 3.9.4. Importantly this addresses problems
with reindexing in 3.9 which will allow us to start upgrade testing in
earnest.

Change-Id: I21f692121665d5630b68c17901b80fc3102ea280
2024-04-14 17:32:07 -07:00
Zuul
4222c01643 Merge "Add more LE debugging info to our Ansible role" 2024-04-10 16:06:09 +00:00
Zuul
ffb76931d7 Merge "More completely disable ansible galaxy proxy testing" 2024-04-10 15:17:12 +00:00
Clark Boylan
2c42e57510 Add more LE debugging info to our Ansible role
Adding the information about which host we were checking for certcheck
did help in debugging. It pointed out that a specific host was at fault
(nb02 in this case, though it could change over time), and further
investigation of that host showed acme.sh was not running there at all
due to earlier failures. Rather than the playbook ending at that point,
it continued to run until it built the certcheck list and then hit a
fatal error, leading to the confusion.

Add a breadcrumb comment to the Ansible role to help point this behavior
out in the future.

Change-Id: Ib607665d75eb666d19c8508346eb217783b98eb5
2024-04-05 13:40:14 -07:00
Clark Boylan
2641e8e6ac More completely disable ansible galaxy proxy testing
Ansible Galaxy appears to be served behind Cloudflare and is currently
throwing 504 errors waiting for the backend to respond on /api/
requests. Since we know this is already not working and it is
preventing other changes from landing, let's just go ahead and
completely disable the testing of this proxy.

We can always add the test back if and when effort is made to proxy the
new version of galaxy's api.

Change-Id: Iad35fc19ce4f8bb8ec3c57eb995b44fc5e61a06c
2024-04-05 13:40:14 -07:00
Jeremy Stanley
04ab5864da Cleanup lingering Mailman 2 playbook
We don't need the Mailman 2 service deployment playbook, as we're no
longer running it. This was simply overlooked in the earlier mass
cleanup change, and even refers to a role that no longer exists.

Change-Id: I7e65fdf9e81858f780bef8dce15ef88823345be8
2024-04-05 20:09:06 +00:00
Jeremy Stanley
6f433da61f Upgrade Mailman's MariaDB to 10.11
We are currently running MariaDB 10.6 for Mailman. We use the
MARIADB_AUTO_UPGRADE flag to automatically upgrade the mariadb
install to 10.11 when switching the image version over to 10.11.
This was successfully performed against several other services
already.

Change-Id: I675753df142d635eca60c15728ece2870b406134
2024-04-05 20:05:18 +00:00
Zuul
49093ba65d Merge "Update gitea to v1.21.10" 2024-04-05 19:29:38 +00:00
Zuul
bf7d283140 Merge "Rotate clarkbs ssh key" 2024-04-05 15:45:16 +00:00
Zuul
963413c214 Merge "Restrict permissions on mariadb compose file" 2024-04-05 03:17:11 +00:00
Zuul
11ee0652f9 Merge "Add zuul-db01 to cacti" 2024-04-05 01:56:35 +00:00
James E. Blair
7e2770f670 Restrict permissions on mariadb compose file
This file has passwords templated in, so make it root-only.

Change-Id: I7fa438acfa91be9794d505dc7bf0d88af6bb25c9
2024-04-04 17:22:09 -07:00
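
A hedged sketch of the tightened deployment task; the template and
destination paths are illustrative:

  - name: Write the mariadb compose file readable by root only
    ansible.builtin.template:
      src: docker-compose.yaml.j2
      dest: /etc/mariadb-compose/docker-compose.yaml   # illustrative path
      owner: root
      group: root
      mode: "0600"   # credentials are templated into this file
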
James E. Blair
f6445cf650 Add zuul-db01 to cacti
Change-Id: I81f533044064b26c27ec668f206d476e9fb6f9cc
2024-04-04 17:15:50 -07:00
James E. Blair
81eca4c17c Mariadb: listen on all IP addresses
Otherwise, MariaDB is bound to localhost only.

Change-Id: I66046ea1e264a58cb05881fc985ffe18917fe490
2024-04-04 15:57:16 -07:00
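
A rough sketch of what this looks like as a server config fragment; the
file name and path are illustrative:

  - name: Let mariadbd accept connections on all interfaces
    ansible.builtin.copy:
      dest: /etc/mysql/mariadb.conf.d/99-bind-all.cnf   # illustrative path
      content: |
        [mysqld]
        # Without this the server binds to 127.0.0.1 only, which would
        # keep remote Zuul services from reaching this standalone DB.
        bind-address = 0.0.0.0
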
James E. Blair
1a2e341fed Set standalone mariadb innodb buffer pool to 4G
This increases the innodb buffer pool size from the default of
128M to 4G.  Some increase is necessary for creating large indexes,
but probably not this much.  Having a large pool allows for
significant performance improvement.  To that end, allocate half of
our RAM to this.

https://mariadb.com/kb/en/innodb-buffer-pool/#innodb_buffer_pool_size

Change-Id: I0a20cb2e11edc88dac6a55191a05637e7634773f
2024-04-04 12:29:17 -07:00
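
Expressed as a compose-level override, a hedged sketch with an
illustrative service name; the official image passes extra command
arguments straight through to mariadbd:

  services:
    mariadb:
      image: mariadb:10.11
      # Default innodb_buffer_pool_size is 128M; 4G is roughly half of
      # this host's RAM and comfortably covers large index builds.
      command: --innodb-buffer-pool-size=4G
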
James E. Blair
81bcc41f70 Add zuul-db01
Change-Id: I0292540594214d0a1d4a8327c0f5d3d84c294abf
2024-04-04 12:29:15 -07:00
James E. Blair
99342db824 Add a standalone zuul db server
Change-Id: Ibb260f820dbc1d9d6ca523ff3903134612cb003e
2024-04-04 12:25:23 -07:00