These were caused by the recent spam bots.
I made some changes today. We now have 4 containers for the UI (we only had 1 before) and 4 for the backend (we only had 2 before).
It seems that when you delete a user and tell Lemmy to also remove their content (the spam), it tells the database to mark all of that content as deleted.
Kbin.social had about 30 users who had posted 20-30 posts each, which I told Lemmy to delete.
This only marks the content as deleted for Reddthat users until the mods mark the posts as deleted and that removal federates out.
The Problem
The UPDATE in the database (marking the spam content as deleted) takes a while, and the backend waits(?) for the database to finish.
Even though the backend has 20 different connections to the database, it uses 1 connection for the UPDATE and then waits/gets stuck.
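I haven't dug into the exact queries Lemmy issues, but conceptually it's something like the sketch below. The table and column names (post, comment, removed, creator_id) are my assumptions for illustration, not pulled from Lemmy's code; the shape of the problem is the same either way: one long-running UPDATE touching every row the spammer created.

```sql
-- Hypothetical sketch of the kind of statement a "remove user + content" action issues.
-- 12345 stands in for the spammer's user id.
BEGIN;
UPDATE post    SET removed = true WHERE creator_id = 12345;
UPDATE comment SET removed = true WHERE creator_id = 12345;
COMMIT;
```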
This is what is causing the outages, unfortunately, and it's really pissing me off to be honest. I can't remove content or action reports without someone seeing an error.
I don't see anything in the 0.18.3 release notes that would solve this.
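If you want to check this kind of thing yourself, a generic query against Postgres's pg_stat_activity view (nothing Lemmy-specific) shows which statements are currently running and for how long, which is how you can spot a removal UPDATE hogging a connection:

```sql
-- Show non-idle backends, what they're running, and how long they've been at it.
SELECT pid,
       state,
       wait_event_type,
       now() - query_start AS running_for,
       left(query, 80)     AS query
FROM   pg_stat_activity
WHERE  state <> 'idle'
ORDER  BY running_for DESC;
```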
Temp Solution
So to combat this a little, I've increased our backend processes from 2 to 4 and our front-end from 1 to 4.
My idea is that if 1 of the backend processes gets "locked up" while performing tasks, the other 3 processes should take care of it.
Unfortunately, this is an assumption: if the "removal" performs an UPDATE on the database and the /other/ backend processes are aware of it and wait as well, that would effectively lock up the database, and it won't matter how many processes I scale out to; the applications will lock up and cause us downtime.
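For context, the scaling change itself is just running more copies of the same two containers behind the proxy. A minimal sketch of what that looks like in a docker-compose style setup (service names, image tags, and ports here are illustrative assumptions, not our actual config):

```yaml
# Illustrative sketch only: names, images and ports are assumptions,
# not Reddthat's real compose file.
services:
  lemmy:            # backend API
    image: dessalines/lemmy:0.18.2
    deploy:
      replicas: 4   # was 2
  lemmy-ui:         # front-end
    image: dessalines/lemmy-ui:0.18.2
    deploy:
      replicas: 4   # was 1
  proxy:
    image: nginx:1.25
    ports:
      - "80:80"
```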
Next Steps
- Upgrade to 0.18.3, as it apparently has some database fixes.
- Look at the Lemmy API and see if there is a way I can push certain API commands (user removal) off to their own container.
- Figure out how to make the nginx proxy container detect when a "backend container" is down and try the other ones instead.
Note: we are kinda doing point #3 already; nginx does a round-robin (tries each backend sequentially). But from what I've seen in the logs, it can't differentiate between a backend that is down and one that is up. (From the nginx documentation, that feature is a paid one.)
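That said, open-source nginx does have passive checks (max_fails/fail_timeout on the upstream servers), and routing specific API paths to a dedicated backend is also doable at the proxy level, which would cover the second point above too. A rough sketch of both ideas, with made-up container names and an assumed URL prefix for the heavy admin calls (I haven't confirmed the exact Lemmy endpoint):

```nginx
# Sketch only: container names (lemmy_1..lemmy_4, lemmy_admin) and the
# /api/v3/admin/ prefix are assumptions, not our actual setup.

upstream lemmy_backend {
    # Passive health checks: after 3 failures within 30s, nginx skips that
    # server for 30s and moves on to the next one in the round-robin.
    server lemmy_1:8536 max_fails=3 fail_timeout=30s;
    server lemmy_2:8536 max_fails=3 fail_timeout=30s;
    server lemmy_3:8536 max_fails=3 fail_timeout=30s;
    server lemmy_4:8536 max_fails=3 fail_timeout=30s;
}

upstream lemmy_admin {
    # Dedicated container for the slow admin/removal calls so they can't
    # tie up the containers serving normal traffic.
    server lemmy_admin:8536;
}

server {
    listen 80;

    # Hypothetical: send heavy admin endpoints to their own backend.
    location /api/v3/admin/ {
        proxy_pass http://lemmy_admin;
    }

    location / {
        proxy_pass http://lemmy_backend;
        # If one backend errors out or times out, retry the next one.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```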
Cheers, Tiff