The Agora
In the spirit of the Ancient Greek Agora, we invite you to join our vibrant community - a contemporary meeting place for the exchange of ideas, inspired by the practices of old. Just as the Agora served as the heart of public life in Ancient Athens, our platform is designed to be the epicenter of meaningful discussion and thought-provoking dialogue.
Here, you are encouraged to speak your mind, share your insights, and engage in stimulating discussions. This is your opportunity to shape and influence our collective journey, just like the free citizens of Athens who gathered at the Agora to make significant decisions that impacted their society.
You're not alone in your quest for knowledge and understanding. In this community, you'll find support from like-minded individuals who, like you, are eager to explore new perspectives, challenge their preconceptions, and grow intellectually.
Remember, every voice matters and your contribution can make a difference. We believe that through open dialogue, mutual respect, and a shared commitment to discovery, we can foster a community that embodies the democratic spirit of the Agora in our modern world.
Community guidelines
New posts should begin with one of the following:
- [Question]
- [Discussion]
- [Poll]
Only moderators may create a [Vote] post.
I've expressed concerns about the potential effects of a bot-swarm before, and have had a few mildly constructive conversations about it. Here is a thread where I lay out a few of my concerns on the matter, but I'll copy the relevant text here for easier discovery.
Me:
That is just one of the more insidious possibilities a bot-swarm could be used for. Spamming, scamming, brigading, and poisoning discussions en masse are all possible with even a moderately sized swarm in the hands of someone with the technical ability to put it to use on a platform of this size.
I've also seen announcement posts, and the resulting post in The Agora, covering one tool (The Lemmy Overseer) that can help automate the de/refederation of likely bot-infested instances. While I don't think the tool will deter particularly motivated actors, it should take care of the "low-hanging fruit": the tens of thousands of suspected bot accounts that have had no engagement on the platform since account creation. Instance owners take on a lot of responsibility when federating with others, and one of those responsibilities is securing their instance against automated signups. Once they take care of their bot problem, they can be refederated automatically.
TLDR: I think we should defederate botted instances preemptively. Automatic refederation is possible, and a Matrix channel for instance operators exists for discussing refederation as a fallback measure.
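To make the heuristic above a bit more concrete, here is a minimal Python sketch of an "activity suspicion" check: flag instances whose registered-account count dwarfs their active-account count. The field names, threshold, and example numbers are all assumptions for illustration, not the Lemmy Overseer's actual logic or API.

```python
# Hypothetical sketch of an "activity suspicion" check, loosely inspired by the
# idea above. Threshold, data, and field names are assumptions, not the
# Lemmy Overseer's real implementation.
from dataclasses import dataclass


@dataclass
class InstanceStats:
    domain: str
    total_users: int    # all registered accounts
    active_month: int   # accounts with any activity in the last month


def suspicion_ratio(stats: InstanceStats) -> float:
    """Registered accounts per active account; higher means more likely botted."""
    return stats.total_users / max(stats.active_month, 1)


def defederation_candidates(instances: list[InstanceStats],
                            threshold: float = 20.0) -> list[str]:
    """Return domains whose user base looks overwhelmingly inactive (suspected bots)."""
    return [s.domain for s in instances if suspicion_ratio(s) > threshold]


if __name__ == "__main__":
    # Entirely made-up example numbers.
    sample = [
        InstanceStats("honest.example", total_users=1_200, active_month=300),
        InstanceStats("botted.example", total_users=50_000, active_month=40),
    ]
    print(defederation_candidates(sample))  # ['botted.example']
```

An admin could feed a list like this into whatever blocklist mechanism their instance uses, and re-run it periodically so instances that clean up their accounts drop off the candidate list again.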
Thank you for your input. You've obviously thought a lot about this and are bringing a lot to the table.
Personally, I think priority number one is removing the low-hanging fruit. Once we've done that, we can think about how to defend ourselves against more sophisticated bots. We need to start here though, and soon.
Of course, and thank you. I agree completely. I think that, going forward, instance admins utilizing a defense-in-depth strategy with tools like the Lemmy Overseer, automated account-creation hurdles, and other emergent tools (one example) will be the most effective at keeping this part of the federation largely free of the bot-swarm.
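As a rough illustration of what "defense in depth" at signup could look like, here is a hypothetical Python sketch that layers several independent hurdles. Each check is a placeholder for whatever a real instance wires in (captcha service, email verification, manual application review); none of this is actual Lemmy configuration or code.

```python
# Hypothetical illustration of layered signup hurdles ("defense in depth").
# Each check is a placeholder; a real instance would wire these to a captcha
# service, email verification, manual application review, etc.
from typing import Callable


def passes_captcha(signup: dict) -> bool:
    return signup.get("captcha_solved", False)


def passes_email_verification(signup: dict) -> bool:
    return signup.get("email_verified", False)


def passes_application_review(signup: dict) -> bool:
    # e.g. a human moderator approved the registration application
    return signup.get("application_approved", False)


CHECKS: list[Callable[[dict], bool]] = [
    passes_captcha,
    passes_email_verification,
    passes_application_review,
]


def allow_signup(signup: dict) -> bool:
    """A signup must clear every hurdle; a bot that beats one layer still fails the next."""
    return all(check(signup) for check in CHECKS)


if __name__ == "__main__":
    bot_attempt = {"captcha_solved": True}  # solved the captcha, nothing else
    print(allow_signup(bot_attempt))        # False
```

The point of layering is that defeating any single hurdle is cheap to automate, but defeating all of them at scale raises the cost enough to filter out most of the swarm.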