maltfield

joined 1 year ago
[–] [email protected] 4 points 1 year ago (1 children)

Yeah, that's exactly why I'm asking this question. All the effort seems to be going into the DB -- but you can have a horribly shitty DB and backend and still have a massively performant webserver by just caching the reads in RAM.

I didn't see any tickets about this on GitHub, which is why I'm asking around to see if there's actually some very low-hanging fruit for improving all the instances with a frontend RAM cache.

[–] [email protected] 5 points 1 year ago (9 children)

In my experience, the biggest wins from caching come from caching in front of the backend, in RAM, so the request never even reaches those services at all. I've used Varnish for this (it's also what the big CDN providers use). In Lemmy, I imagine that would be the nginx proxy that sits in front of the backend.
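
Roughly like this -- an untested VCL sketch, where the backend port and the pass-through rules for logged-in users are assumptions about a typical Lemmy deployment, not anything Lemmy ships:

```
vcl 4.1;

# Assumed values -- Lemmy's backend usually listens on 8536,
# but adjust to your deployment.
backend lemmy {
    .host = "127.0.0.1";
    .port = "8536";
}

sub vcl_recv {
    # Never cache writes, and don't serve cached responses to
    # logged-in users (their responses are personalized).
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        return (pass);
    }
}

sub vcl_backend_response {
    # Hold anonymous reads in RAM for 60s so repeated requests
    # never touch the backend or the DB.
    set beresp.ttl = 60s;
}
```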

[–] [email protected] 8 points 1 year ago (11 children)

Right, but if you don't have a cache set up, then the DB gets taxed. At a certain point a cache loses its benefit, but an enormous amount of savings can be had (on backend DB calls, for example) by just caching all API reads for ~60 seconds.
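
For example, at the nginx proxy it could look something like this -- an untested sketch, where the backend address and the `jwt` cookie name are assumptions about a typical Lemmy deployment:

```
# Goes in the http{} block of nginx.conf; the keys_zone index lives in RAM.
proxy_cache_path /var/cache/nginx/lemmy keys_zone=lemmy_api:10m
                 max_size=256m inactive=5m;

server {
    listen 80;
    server_name lemmy.example.com;  # placeholder

    location /api/ {
        proxy_pass http://127.0.0.1:8536;

        proxy_cache lemmy_api;
        proxy_cache_methods GET HEAD;   # only cache reads, never writes
        proxy_cache_valid 200 60s;      # serve cached API reads for ~60s

        # Skip the cache for logged-in users so nobody gets served
        # someone else's personalized response.
        proxy_cache_bypass $http_authorization $cookie_jwt;
        proxy_no_cache $http_authorization $cookie_jwt;

        # Handy for checking HIT/MISS while tuning.
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```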

[–] [email protected] 5 points 1 year ago (6 children)

I wouldn't be surprised if it has more to do with caching than throwing hardware at it.

[–] [email protected] 77 points 1 year ago (4 children)

Yes. And I'm asking him to share his tweaks here with the community so that other instance admins can shore up their servers :)


At the time of writing, Lemmyworld has the second-highest number of active users of all Lemmy instances.

Also at the time of writing, Lemmyworld has >99% uptime.

By comparison, other Lemmy instances with as many users as Lemmyworld keep going down.

What optimizations has Lemmyworld made to its hosting configuration that have made it more resilient than other instances'?

See also Does Lemmy cache the frontpage by default (read-only)? on [email protected]