this post was submitted on 05 Aug 2023
44 points (100.0% liked)


I've been noticing reddthat going down for a short time (a few minutes or so) more than before - is everything okay?

https://lestat.org/status/lemmy shows our uptime at 88.82% right now - not horrible, but not great either. We of course don't need to be up 99.9999% of the time, but still.

Is there anything we can help with besides donating? I like it here, so I want to make sure this place stays up for a long time 👍

[–] [email protected] 28 points 1 year ago (8 children)

These were because of recent spam bots.

I made some changes today: we now have 4 containers for the UI (we only had 1 before) and 4 for the backend (we only had 2).

It seems that when you delete a user, and you tell lemmy to also remove the content (the spam) it tells the database to mark all of the content as deleted.

Kbin.social had about 30 users who posted 20-30 posts each, which I told Lemmy to delete.
This only marks the content as deleted for Reddthat users until the mods mark the posts as deleted and that federates out.

The problem

The UPDATE in the database (marking the spam content as deleted) takes a while and the backend waits(?) for the database to finish.

Even though the backend has 20 different connections to the database it uses 1 connection for the UPDATE, and then waits/gets stuck.

This is what is causing the outages unfortunately and it's really pissing me off to be honest. I can't remove content / action reports without someone seeing an error.

I don't see any solutions on the 0.18.3 release notes that would solve this.
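In principle, a long-running bulk UPDATE like this can be broken into small batches so the connection is released between chunks and other queries can interleave. This is only a hypothetical sketch (using sqlite3 in place of Lemmy's actual Postgres schema; the table and column names are invented), not what Lemmy does internally:

```python
import sqlite3

# Toy stand-in for the real database: a "comment" table with a deleted flag.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE comment (id INTEGER PRIMARY KEY, creator_id INTEGER, "
    "deleted INTEGER DEFAULT 0)"
)
# ~30 spam accounts with ~25 posts each, roughly matching the incident.
conn.executemany(
    "INSERT INTO comment (creator_id) VALUES (?)",
    [(user,) for user in range(30) for _ in range(25)],
)
conn.commit()

def remove_user_content(conn, creator_id, batch_size=100):
    """Mark one user's content deleted in small batches, so each
    transaction is short and the connection is freed in between
    instead of being held for one giant UPDATE."""
    while True:
        cur = conn.execute(
            "UPDATE comment SET deleted = 1 WHERE id IN ("
            "  SELECT id FROM comment WHERE creator_id = ? AND deleted = 0"
            "  LIMIT ?)",
            (creator_id, batch_size),
        )
        conn.commit()
        if cur.rowcount == 0:  # nothing left to mark
            break

for spammer in range(30):
    remove_user_content(conn, spammer)

remaining = conn.execute(
    "SELECT COUNT(*) FROM comment WHERE deleted = 0"
).fetchone()[0]
print(remaining)
```

The trade-off is that deletion is no longer atomic, but each individual transaction finishes quickly, so the connection never blocks for long.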

Temp Solution

So to combat this a little I've increased our backend processes from 2 to 4 and our front-end from 1 to 4.

My idea is that if 1 of the backend processes gets "locked" up while performing tasks, the other 3 processes should take care of it.

This unfortunately is an assumption, because if the "removal" performs an UPDATE on the database and the /other/ backend processes are aware of this and wait as well... that would count as "locking up" the database, and it won't matter how many processes I scale out to; the applications will lock up and cause us downtime.
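Assuming a docker-compose style deployment (the service names and image tags below are made up for illustration, not Reddthat's actual config), the scale-out described above could look roughly like:

```yaml
# docker-compose.yml fragment (hypothetical service names)
services:
  lemmy:             # backend API
    image: dessalines/lemmy:0.18.2
    deploy:
      replicas: 4    # was 2
  lemmy-ui:          # front-end
    image: dessalines/lemmy-ui:0.18.2
    deploy:
      replicas: 4    # was 1
```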

Next Steps

  • Upgrade to 0.18.3, as it apparently has some database fixes.
  • Look at the Lemmy API and see if there is a way I can push certain API commands (user removal) off to their own container.
  • Fix up/figure out how to make the nginx proxy container know if a "backend container" is down, and try the other ones instead.

Note: we are kinda doing point #3 already; nginx does a round-robin (tries each sequentially). But from what I've seen in the logs it can't differentiate between one that is down and one that is up. (From the nginx documentation, that feature is a paid one.)
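For what it's worth, open-source nginx does support *passive* health checks via `max_fails`/`fail_timeout` on upstream servers (it's the active health checks that are the paid NGINX Plus feature). A sketch, assuming made-up container hostnames and Lemmy's default backend port:

```nginx
upstream lemmy_backend {
    # Round-robin across the 4 backend containers. After 2 failed
    # attempts, a server is skipped for 30s (passive health check,
    # available in open-source nginx).
    server lemmy_1:8536 max_fails=2 fail_timeout=30s;
    server lemmy_2:8536 max_fails=2 fail_timeout=30s;
    server lemmy_3:8536 max_fails=2 fail_timeout=30s;
    server lemmy_4:8536 max_fails=2 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://lemmy_backend;
        # Retry the next upstream on errors/timeouts instead of
        # surfacing them to the user.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```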

Cheers, Tiff

[–] [email protected] 10 points 1 year ago (3 children)

Wow, that limitation in the Lemmy design sucks. Thanks for working so hard to figure it out!

[–] [email protected] 11 points 1 year ago (1 children)

Yeah, I don't remember it happening in 0.17, but that was back when we had websockets! So everything was inherently more synchronous.

0.18.3 has "database optimisations" which apparently also results in 80% space savings . (Like wtf, how could we save 80%!!!!).

Anyway I'll be testing that on the dev server tonight and then I'll plan a maintenance window.

[–] [email protected] 2 points 1 year ago (1 children)

Wait they removed websocket support?!?! Why?

How were things more synchronous with websockets?

[–] [email protected] 6 points 1 year ago

I think the websocket support was "clunky", and it resulted in weird things happening.

Because all the clients were on a websocket everything was sent immediately to the clients from the server, and you didn't need to "wait" for the long running queries.

But that only really affects admins/mods. The current system is a lot better! And since 0.18.3 the db optimisation has helped! Removing a user and all their content doesn't take as long now, and the changes I made for extra horizontal scaling really helped.
