Also, while we're there, use 99 as the host index to make it slightly
harder to confuse testing with production.
Change-Id: I62193418feb9401fc06da39bd100553aef3dc52d
In testing jammy updates we discovered that this value needs to match
so that websockets are directed to the correct host.
Change-Id: Id44bf92edff411389f05a652dad2ae78607e4d55
This update changes how Etherpad is built and how authentication is
managed for API requests. That ends up changing a lot of our tooling
around Etherpad, but Etherpad itself (other than the auth changes)
doesn't seem to change much. In response, I've updated our admin docs
on common API tasks to use the new process, and updated our testinfra
testing as well to cover it and ensure it all continues to work
properly after this change.
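For reference, a rough sketch of what an authenticated API call now
looks like follows; it assumes the OAuth2 client-credentials flow that
Etherpad 2.x uses with a token endpoint at /oidc/token, and the URL,
client id/secret, and API path below are placeholders rather than our
real configuration:

  # Sketch only: exchange client credentials for a bearer token, then
  # make a read-only API call. All values below are placeholders.
  import requests

  ETHERPAD_URL = "https://etherpad.example.org"   # placeholder
  CLIENT_ID = "example-client-id"                 # placeholder
  CLIENT_SECRET = "example-client-secret"         # placeholder

  token = requests.post(
      f"{ETHERPAD_URL}/oidc/token",
      data={
          "grant_type": "client_credentials",
          "client_id": CLIENT_ID,
          "client_secret": CLIENT_SECRET,
      },
      timeout=30,
  )
  token.raise_for_status()
  access_token = token.json()["access_token"]

  pads = requests.get(
      f"{ETHERPAD_URL}/api/1.2.15/listAllPads",
      headers={"Authorization": f"Bearer {access_token}"},
      timeout=30,
  )
  pads.raise_for_status()
  print(pads.json())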
Note the Dockerfile updates are all adapted from upstream. I'm actually
not fond of the decisions they have made in this image build, but being
in sync is probably more important than fixing the multistage builds and
being different.
This change jumps us from v1.9.7 to v2.0.3 (covering releases 2.0.0,
2.0.1, and 2.0.2 as well). A changelog can be found here:
https://github.com/ether/etherpad-lite/blob/v2.0.3/CHANGELOG.md
Change-Id: Ia7c4f26d893b4fc4a178262e1a6b9f3fa80d2a5c
After upgrading MariaDB behind several services from 10.4 to 10.11, it
is now Gitea's turn. Use the auto-upgrade method as we've done elsewhere
and update Gitea's DB too.
Change-Id: I01aab3c55f1af4f883efe05bf8862ccd50a07631
This is our semi-regular Python base image rebuild. It ensures we're
running relatively up-to-date Python builds as well as base system
packages (though many of our image builds update the base system too).
Change-Id: Ice918219a64bd5845de9dc3330bf292261c6a80e
We're noticing that Mailman's uwsgi queue is filling up. Before we try
to extend the queue, apply our user agent filter to Apache to reduce the
number of requests that hit the queue in the first place.
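As an aside, a rough check of the filter's effect could look something
like the sketch below; the hostname, user agent string, and the 403
status are illustrative assumptions rather than the actual filter
configuration:

  # Sketch only: a filtered user agent should be rejected while a
  # normal browser agent still gets through. Values are placeholders.
  import requests

  LIST_URL = "https://lists.example.org/"  # placeholder hostname

  blocked = requests.get(
      LIST_URL,
      headers={"User-Agent": "BadBot/1.0"},  # hypothetical filtered UA
      timeout=30,
  )
  allowed = requests.get(
      LIST_URL,
      headers={"User-Agent": "Mozilla/5.0"},
      timeout=30,
  )

  assert blocked.status_code == 403, blocked.status_code
  assert allowed.status_code == 200, allowed.status_code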
Change-Id: Ib821a7fffa6239a9affcc4c6698eef2dc9378cd1
We've noticed that our uwsgi queues are filling up and a lot of requests
are being made to robots.txt, which end up returning 500/503 errors. Add
a robots.txt file which allows crawling of our lists and archives with a
delay value, in the hope that this will cause bots to cache results and
not fill up the queue with repetitive requests.
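A quick sketch of how the served file could be verified once this lands
(the hostname and the exact directives asserted here are placeholders,
not the final contents):

  # Sketch only: confirm robots.txt is served and advertises a crawl
  # delay. Hostname and expected directives are placeholders.
  import requests

  resp = requests.get("https://lists.example.org/robots.txt", timeout=30)
  resp.raise_for_status()

  body = resp.text.lower()
  assert "user-agent:" in body
  assert "crawl-delay:" in body  # the delay we hope bots will honor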
Change-Id: I660d8d43f6b2d96663212d93ec48e67d86e9e761
Gitea wants us to move the robots.txt file to a new location. It
currently logs a warning about it:
  2024/04/17 19:30:56 cmd/web.go:191:serveInstalled() [E] Found legacy
  public asset "robots.txt" in CustomPath. Please move it to
  /custom/public/robots.txt
Change-Id: Ic4a7f3bbe4633972e0409b37b511fdb03f968442
Crawlers that ignore our robots.txt are triggering archive creation so
rapidly that our rootfs fills up between weekly purges. Purging once a
day instead should hopefully mitigate further problems.
Change-Id: Ib4e56fbd666f7bf93c017739697d8443d527b8c7
This is a bugfix upgrade from v1.21.10 to v1.21.11. None of the
templates we override have changed between these two versions according
to git diff.
A full changelog can be found here:
https://github.com/go-gitea/gitea/blob/v1.21.11/CHANGELOG.md
Change-Id: I4d3648e311fe87b275856f2d73aca4a79c2c5507
This updates our production 3.8 image to the latest bugfix release and
our future upgrade image to 3.9.4. Importantly, this addresses problems
with reindexing in 3.9, which will allow us to start upgrade testing in
earnest.
Change-Id: I21f692121665d5630b68c17901b80fc3102ea280
Adding the information about which host certcheck was checking did help
in debugging. It pointed out that a specific host was at fault (nb02 in
this case, though it could change over time), and further investigation
of that host showed acme.sh was not running there at all due to earlier
failures. Rather than the playbook ending at that point, it continued to
run until it built the certcheck list and then hit a fatal error, which
led to the confusion.
Add a breadcrumb comment to the Ansible role to help point this behavior
out in the future.
Change-Id: Ib607665d75eb666d19c8508346eb217783b98eb5
Ansible Galaxy appears to be served behind Cloudflare and is currently
throwing 504 errors waiting for the backend to respond to /api/
requests. Since we know this is already not working and it is preventing
other changes from landing, let's just go ahead and completely disable
the testing of this proxy.
We can always add the test back if and when effort is made to proxy the
new version of Galaxy's API.
Change-Id: Iad35fc19ce4f8bb8ec3c57eb995b44fc5e61a06c
We don't need the Mailman 2 service deployment playbook, as we're no
longer running it. It was simply overlooked in the earlier mass cleanup
change, and it even refers to a role that no longer exists.
Change-Id: I7e65fdf9e81858f780bef8dce15ef88823345be8
We are currently running MariaDB 10.6 for Mailman. We use the
MARIADB_AUTO_UPGRADE flag to automatically upgrade the MariaDB
install to 10.11 when switching the image version over to 10.11.
This has already been performed successfully against several other
services.
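A rough post-upgrade sanity check might look like the sketch below; the
container name is a placeholder for whatever docker-compose names the
mariadb service on the host:

  # Sketch only: confirm the running MariaDB reports 10.11 after the
  # MARIADB_AUTO_UPGRADE flag has done its work.
  import subprocess

  out = subprocess.run(
      ["docker", "exec", "mariadb", "mariadb", "--version"],
      capture_output=True, text=True, check=True,
  ).stdout
  assert "10.11" in out, out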
Change-Id: I675753df142d635eca60c15728ece2870b406134
This increases the innodb buffer pool size from the default of
128M to 4G. Some increase is necessary for creating large indexes,
but probably not this much. Having a large pool allows for
significant performance improvement. To that end, allocate half of
our RAM to this.
https://mariadb.com/kb/en/innodb-buffer-pool/#innodb_buffer_pool_size
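A rough sketch of confirming the new value took effect (the connection
parameters are placeholders, and the variable is reported in bytes):

  # Sketch only: check that innodb_buffer_pool_size is now 4G.
  import pymysql  # assumes the PyMySQL client library is available

  conn = pymysql.connect(host="localhost", user="root", password="secret")
  with conn.cursor() as cur:
      cur.execute("SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'")
      name, value = cur.fetchone()
  conn.close()

  # 4G == 4 * 1024**3 bytes
  assert int(value) == 4 * 1024**3, value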
Change-Id: I0a20cb2e11edc88dac6a55191a05637e7634773f