The main feature of a federated system is that there's no single "owner" of the entire network; each admin runs their own server under their own domain name. There's nothing stopping a server admin from using a .com domain name...
Basic TOTP 2FA has been implemented: https://github.com/LemmyNet/lemmy/issues/2363 but it hasn't shipped in a stable release yet (it's only available in the latest server beta). "Next-gen" 2FA (WebAuthn/FIDO) is coming later: https://github.com/LemmyNet/lemmy/issues/3059
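For the curious, here's a minimal sketch of how TOTP works under the hood (RFC 6238), using only the Python standard library. This is purely illustrative, not Lemmy's actual implementation, and the example secret is a made-up placeholder:

```python
# Illustrative TOTP (RFC 6238) built on HOTP (RFC 4226).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Derive the current TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole time steps elapsed since the Unix epoch.
    counter = int(time.time()) // period
    # HMAC-SHA1 over the counter as a big-endian 8-byte integer.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical secret; an authenticator app would hold the same one.
print(totp("JBSWY3DPEHPK3PXP"))
```

The server and the authenticator app share the secret once (usually via a QR code) and then independently derive the same 6-digit code every 30 seconds.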
You should get to know your server admin, then. You have the freedom to pick any server you like :)
Have you subscribed to many communities yet? You can browse other Lemmy servers to find interesting communities, and subscribe to them on your instance.
On a similar note, would it be possible to have something like "sharding", where one server has multiple synced copies on several people's machines? Let's say one machine goes down for whatever reason; the others could still serve content. This could also help with distributing load across multiple machines so it's less stressful on any single one, and we'd avoid situations like what's going on with lemmy.ml now, where so many users are joining that it's frequently down and subscription statuses are stuck on pending.
This is usually referred to as "high availability": you'd have a hot failover to swap to in case the main server goes down. It's typically implemented with a load balancer that checks whether the upstream server is alive before sending requests to it; if the upstream isn't responding, the load balancer switches to the other one.
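To make the health-check idea concrete, here's a rough Python sketch. The server URLs and the `/healthz` endpoint are hypothetical; a real deployment would use a dedicated load balancer (HAProxy, nginx, etc.) rather than hand-rolled code:

```python
# Rough sketch of hot-failover upstream selection.
import urllib.error
import urllib.request

UPSTREAMS = [
    "https://primary.example.com",   # preferred server (hypothetical)
    "https://failover.example.com",  # hot standby (hypothetical)
]

def pick_upstream(timeout: float = 2.0) -> str:
    """Return the first upstream that answers its health check."""
    for base in UPSTREAMS:
        try:
            with urllib.request.urlopen(base + "/healthz", timeout=timeout) as resp:
                if resp.status == 200:
                    return base
        except (urllib.error.URLError, TimeoutError):
            continue  # dead or unreachable: try the next upstream
    raise RuntimeError("no healthy upstream available")
```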
A load balancer could also spread the load evenly across multiple machines, at least for reads. Generally there are far more reads than writes, and reads are easier to scale since you can add database replicas that only need to sync in one direction.
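As a conceptual sketch of that read/write split (hostnames hypothetical, connection logic stubbed out for brevity):

```python
# Route writes to the primary database; round-robin reads over replicas.
import itertools

PRIMARY = "db-primary.example.com"
REPLICAS = itertools.cycle([
    "db-replica-1.example.com",
    "db-replica-2.example.com",
])

def host_for(query: str) -> str:
    """Pick a database host based on whether the query mutates data."""
    is_write = query.lstrip().split(" ", 1)[0].upper() in {"INSERT", "UPDATE", "DELETE"}
    return PRIMARY if is_write else next(REPLICAS)

print(host_for("SELECT * FROM post"))           # goes to a replica
print(host_for("INSERT INTO post VALUES (1)"))  # goes to the primary
```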
I don't think Lemmy supports any of this yet though.
The other approach is to split large instances into multiple smaller ones. For Fediverse stuff, I don't know which approach is considered "better".