There's no karma to farm. There's no algorithm to game. The best they can do is spam.
And spamming can be a serious problem for forums such as Lemmy communities.
No. They can be used in influence campaigns. They can upvote the posts and comments the controllers want you to see and downvote those they don't.
Spam's obvious and can be dealt with. Bots altering what shows up in your feed are impossible to combat as an end user.
In some ways, this shows Lemmy is winning. It means Lemmy's important enough to be worth trying to influence. It also means we're about to go through some interesting times.
There has been a big surge in bot accounts.
Basically, many of the newer instances allowed sign-ups with no bot protection.
This is why we can't have nice things and have to deal with CAPTCHAs, email verification, etc.
The only bots I've seen are posting news, TIL, etc. No annoying bots so far.
I have a crosspost bot. I'm mainly testing it; its purpose is to use Reddit as a link aggregator to help small communities get some content going.
It only posts external links, never OC or links back to Reddit.
Also, in my case the bot fetches one post (no older than 5 hours) every hour, to prevent flooding with posts.
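For illustration, a rough sketch of what that loop might look like (PRAW on the Reddit side; `post_to_lemmy` is a hypothetical stand-in for whatever Lemmy client you use, and the credentials are placeholders, this is not the actual bot's code):

```python
import time
from datetime import datetime, timezone, timedelta

import praw  # third-party Reddit API wrapper: pip install praw

MAX_AGE = timedelta(hours=5)  # skip anything older than 5 hours

def post_to_lemmy(title: str, url: str) -> None:
    """Hypothetical stand-in for whatever Lemmy client/API call you use."""
    print(f"would post: {title} -> {url}")

reddit = praw.Reddit(
    client_id="...", client_secret="...", user_agent="crosspost-bot"  # placeholders
)

while True:
    for submission in reddit.subreddit("example").new(limit=25):
        age = datetime.now(timezone.utc) - datetime.fromtimestamp(
            submission.created_utc, tz=timezone.utc
        )
        # Only external links: skip self posts, reddit links, and stale posts.
        if submission.is_self or "reddit.com" in submission.url or age > MAX_AGE:
            continue
        post_to_lemmy(submission.title, submission.url)
        break  # at most one crosspost per cycle, to avoid flooding
    time.sleep(3600)  # wait an hour before the next fetch
```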
I think some of it comes down to admins who left their (small) instances open (no captcha, no application, no email validation) not knowing how bad an idea that currently is, given the maturity level of Lemmy and the (very recent) influx of bots. I am reaching out to the admins of the fastest-growing servers according to FediDB when the growth looks suspicious (based on the growth rate, the participation rate of their users, and the content those users are posting). In many of these cases we are talking thousands of new accounts in the past few days on instances that have single-digit daily/weekly active users.
So far the responses I have gotten have been appreciative and the admins are taking action, but not everyone has responded. Also the tooling to find and delete such accounts is pretty lacking as far as I can tell.
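As a hedged sketch of that kind of smell test (the thresholds are invented for illustration; the inputs are the per-instance totals you'd pull from FediDB, however you fetch them):

```python
def looks_suspicious(total_users: int, new_users_week: int, active_week: int) -> bool:
    """Flag instances whose signups vastly outpace actual participation.

    Thresholds are made-up illustrations, not tuned values.
    """
    if total_users < 100:
        return False  # too small to judge either way
    if active_week == 0:
        return new_users_week > 50  # growth with zero participation
    # e.g. thousands of new accounts against single-digit active users
    return new_users_week / active_week > 100

# Example: 3,000 signups this week on an instance with 8 active users.
print(looks_suspicious(total_users=3200, new_users_week=3000, active_week=8))  # True
```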
What kind of tooling do you envision for finding bot users?
I'm not sure. At a user level, perhaps some sort of tracking of logins, posting frequency, that sort of stuff. If a user signs up and immediately starts making hundreds of posts, something is probably up and an admin should be made aware somehow. If a dormant account wakes up and starts posting a lot, maybe an admin should take a casual look. Also, as much as people seem to hate it, track some IP addresses, at least temporarily. If 100+ accounts all sign up from one IP in the space of an hour, they are probably less than legitimate.
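A toy version of those two checks, assuming you can export (IP, signup time) pairs and per-account post counts from somewhere; all thresholds are made up:

```python
from datetime import datetime, timedelta

# Illustrative thresholds only; real values would need tuning per instance.
SIGNUP_BURST = 100               # signups from one IP inside the window
SIGNUP_WINDOW = timedelta(hours=1)
POST_BURST = 100                 # posts from a brand-new account

def flag_signup_bursts(signups: list[tuple[str, datetime]]) -> set[str]:
    """signups: (ip_address, signup_time) pairs. Returns IPs with a burst."""
    by_ip: dict[str, list[datetime]] = {}
    for ip, when in signups:
        by_ip.setdefault(ip, []).append(when)
    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        # Sliding window: did any SIGNUP_BURST signups land inside SIGNUP_WINDOW?
        for i in range(len(times) - SIGNUP_BURST + 1):
            if times[i + SIGNUP_BURST - 1] - times[i] <= SIGNUP_WINDOW:
                flagged.add(ip)
                break
    return flagged

def flag_new_account_flood(account_age: timedelta, post_count: int) -> bool:
    """A day-old account with hundreds of posts is probably not legitimate."""
    return account_age < timedelta(days=1) and post_count >= POST_BURST
```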
Assuming the problem is posts and comments by bots, there could be something that looks for known spam copypasta or previously moderated/admin'd content; simple keyword matching might be enough on a small instance. Going further, perhaps something that reads the posts from users of your instance, has them classified based on previous admin actions (and probably some manual work to flag things as "known good"), and trains some sort of classifier (Bayes/Markov/ML/whatever). Such tools already exist and are in wide use for email spam filtering and the like. They aren't perfect, but they would make an OK first line of defense that raises things to the attention of the admin.
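To make that concrete, a toy version of the classifier idea with scikit-learn (the training examples are invented; real labels would come from past mod/admin removals):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Training data would come from previous admin/mod actions.
train_texts = [
    "buy cheap followers now",      # previously removed
    "click here for free crypto",   # previously removed
    "what is your favorite book",   # flagged as known good
    "how do i set up federation",   # flagged as known good
]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# Anything the model calls spam gets raised to an admin, not auto-removed.
for post in ["free crypto giveaway click now", "favorite hiking trails?"]:
    print(post, "->", model.predict([post])[0])
```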
I am sure you could go further down the automation side, but I would imagine all of these are "human in the loop" sort of things: once a user/post/whatever gets flagged, it generates some sort of report for an admin to take a look at. I don't know how much of this stuff AutoModerator or mod bots did on Reddit, but a decent amount of it would probably be transferable, however it was done.
Perhaps some or all of this doesn't get put into Lemmy itself but interacts through admin APIs and/or the database. I would start with just the basic things in Lemmy itself, as at the moment there is hardly any admin interface to Lemmy at all. If I just want a list of the users on my instance, I have to query the database. Make deleting/purging users easier (I have heard from some admins having bot trouble that actually deleting the accounts was the hard part). Properly split out the modlog per community, show all the details of each action, and show whether something was a mod or an admin action.
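For example, pulling the list of local users straight out of Postgres might look something like this (a sketch assuming the `person` table used by recent Lemmy versions; older releases had a `user_` table instead, so check your actual schema first):

```python
import psycopg2  # pip install psycopg2-binary

# Connection parameters are placeholders; use your instance's settings.
conn = psycopg2.connect(dbname="lemmy", user="lemmy", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT name, published
        FROM person
        WHERE local = TRUE       -- only accounts registered on this instance
        ORDER BY published DESC  -- newest signups first
        """
    )
    for name, published in cur.fetchall():
        print(published, name)
```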
What are the bot accounts being used for? I haven't noticed any posts made by bots (unless you guys are all ChatGPT and I'm the only human here)
As an AI language model, I'm not able to confirm whether or not I'm a bot.
I am not a language model of a modern major general.
Did you miss the news about Reddit pissing off all their users, many of whom are now looking for alternatives?
I wonder what will happen to the users who already registered without an email (like me). Maybe I should add an email to my profile, just to be sure.
People seem to be aware and are taking steps to deal with it: e.g. https://kbin.social/m/[email protected]/t/65767/Introducing-The-Lemmy-Overseer.