this post was submitted on 11 Apr 2024
57 points (98.3% liked)


/c/[email protected] is the second biggest community on Lemmy.World, and yet on /0 there is nothing newer than two days old.

/c/[email protected] has two posts from today, but based on the vote count, I think it's only showing votes from this instance.

[–] [email protected] 21 points 9 months ago (10 children)

I think the real problem is that lemmy.world is getting too big. They are by far the largest instance by users, content, and post volume. The federation bandwidth requirements (and front-end serving requirements) have got to be insane with 7,200 people actively posting and thousands more federating in. It exposes all the cracks that Lemmy inevitably has in its underlying data handling.

[–] [email protected] 24 points 9 months ago* (last edited 9 months ago) (9 children)

It's more to do with how Lemmy itself handles federation than with bandwidth. Basically, there is only one channel between any two instances, it's serial, and each step requires multiple handshakes to complete. Add in geographic distance making those handshakes take a significant fraction of a second, and you end up with a single channel that gets flooded.

Blahaj.zone is 1.3 million activities behind lemmy.world, for example, but we're not behind on any other instance, because those channels don't hit capacity. Now, if we could use multiple channels at once to talk to lemmy.world, we wouldn't have a problem, but Lemmy isn't built for that at the moment.
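To put rough numbers on that (these are illustrative assumptions, not measurements from either instance): if each activity costs around 0.3 seconds of round trips and only one can be in flight at a time, the channel tops out at under 300,000 activities a day, and a 1.3 million activity backlog takes weeks to drain even when there is spare capacity.

```python
# Illustrative arithmetic only; latency and volume figures are assumptions,
# not measurements from lemmy.world or blahaj.zone.
seconds_per_activity = 0.3                        # assumed round-trip cost per activity
ceiling_per_day = 86_400 / seconds_per_activity   # ~288,000 activities/day on one serial channel

backlog = 1_300_000        # figure quoted in the comment above
new_per_day = 250_000      # assumed daily volume of new lemmy.world activities

spare = ceiling_per_day - new_per_day
if spare <= 0:
    print("the queue can never catch up at this latency")
else:
    print(f"ceiling ~{ceiling_per_day:,.0f}/day, catch-up ~{backlog / spare:.0f} days")
```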

[–] OnlyTakesLs 0 points 9 months ago (1 children)

Couldn't they batch them up? I'm not a technical person, but this seems solvable.

[–] [email protected] 7 points 9 months ago

Yep. Batching and/or multiple parallel channels per remote instance would solve it.
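As a sketch of the parallel-channel idea (hypothetical names, not Lemmy's actual federation code, which would also need request signing, retries, and ordering guarantees), draining the per-instance outgoing queue with several concurrent workers means one slow round trip no longer blocks everything queued behind it:

```python
# Sketch only: made-up names, not Lemmy's real delivery code.
import asyncio

WORKERS_PER_INSTANCE = 8   # assumed degree of parallelism per remote instance

async def deliver(activity: dict) -> None:
    # Stand-in for the signed HTTP POST to the remote instance's inbox.
    await asyncio.sleep(0.3)   # assumed ~0.3 s of network round trips

async def worker(queue: asyncio.Queue) -> None:
    while True:
        activity = await queue.get()
        await deliver(activity)
        queue.task_done()

async def send_backlog(activities: list[dict]) -> None:
    queue: asyncio.Queue = asyncio.Queue()
    for a in activities:
        queue.put_nowait(a)
    workers = [asyncio.create_task(worker(queue)) for _ in range(WORKERS_PER_INSTANCE)]
    await queue.join()       # wait until every queued activity has been delivered
    for w in workers:
        w.cancel()           # shut the workers down once the queue is empty

# With 8 workers, the same 0.3 s per activity gives roughly 8x the throughput,
# at the cost of delivering activities to that instance out of order.
asyncio.run(send_backlog([{"id": i} for i in range(100)]))
```

Batching (several activities per request) attacks the same per-round-trip overhead from the other direction, and the two approaches can be combined.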

load more comments (7 replies)