this post was submitted on 08 Jun 2025
28 points (100.0% liked)

LocalLLaMA

3205 readers

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.


founded 2 years ago

Hello. Our community, c/localllama, has always been, and continues to be, a safe haven for those who wish to learn about creating and locally running 'artificial intelligence' machine learning models to enrich their daily lives and enjoy a fun hobby to dabble in. We come together to apply this new computational technology in ways that protect our privacy, and to build a collective understanding of how an open-source technology stack like this can help humanity.

Unfortunately, we have recently been receiving an uptick in negative interactions from those outside our community. This is largely due to the current political tensions caused by our association with the popular and powerful tech companies who pioneered modern machine learning models for business and profit, as well as with unsavory techbro individuals who care more about money than ethics. These users continue to create animosity toward the entire field of machine learning and everyone associated with it, through their illegal scraping of private data to train base models and very real threats to disrupt the economy by destroying jobs through automation.

There are legitimate criticisms to be had: the cost of creating models, the way the art they produce can feel devoid of the soulful touch of human creativity, and how corporations attempt to disrupt lives for profit instead of enriching them.

I did not want to be heavy-handed with censorship or mod actions prior to this post, because I believe that echo chambers are bad and genuine understanding requires discussion between multiple conflicting perspectives.

However, many of the negative comments we receive lately aren't made in good faith, with valid criticisms of the corporations or technologies grounded in an intimate understanding of them. No, instead it's base-level mudslinging by people with emotionally charged vendettas making nasty comments of no substance. Common examples are comparing models to NFTs, name-calling our community members as blind zealots for thinking models could ever be used to help people, and spreading misinformation with cherry-picked, unreliable sources to manipulatively exaggerate environmental impact and resource consumption.

While I am against echo chambers, I am also against our community being harassed and dragged down by bad actors who just don't understand what we do or how this works. You guys shouldn't have to be subjected to the same brain-rot antagonism with every post made here.

So I'm updating the guidelines by adding some rules I intend to enforce. I'm still debating whether or not to retroactively remove infringing comments from previous posts, but be assured that any new posts and comments will be moderated based on the following guidelines.

RULES:

Rule 1: No harassment or personal character attacks against community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.

Reason: More or less self-explanatory; personal character attacks and childish mudslinging against community members are toxic.

Rule 2: No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Reason: This is a piss-poor whataboutism argument. It claims something that is blatantly untrue while attempting to discredit the entire field by stapling the animosity everyone has toward crypto/NFTs onto ML. Models already do more than cryptocurrency ever has. Models can generate text, pictures, and audio. Models can view/read/hear text, pictures, and audio. Models may simulate aspects of cognitive thought patterns to speculate about or reason through a given problem. Once trained, a model can be copied and locally hosted indefinitely, which factors into the equation of initial training energy cost versus power consumed over time.

Rule 3: No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."

Reason: There are grains of truth to the reductionist statement that LLMs rely on mathematical statistics and probability for their outputs. The same can be said of humans, of the statistical patterns in our own language, and of how our neurons come together to predict the next word in the sentence we type out. It's the intricate complexity of the process, and the way information is processed, that makes all the difference. ML models draw on college courses' worth of advanced mathematics and STEM concepts: high-dimensional matrices that plot the relationships between pieces of information, and intricate hidden layers of perceptrons connecting billions of parameters into vast abstraction mappings. There were also major innovations and discoveries made in the 2000s and 2010s that made modern model training possible, which we didn't have in the early days of computing. All of that is a little more complicated than what your phone's autocorrect does, and the people who make the lazy reductionist comparison just don't care about the nuances.
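To make the contrast concrete, here is a toy sketch (my own illustration, not from the post) of roughly what classic keyboard-style next-word prediction amounts to: count which word most often follows the previous one, then suggest it. An LLM instead conditions on the entire preceding context through stacked attention layers over billions of learned parameters, which is a categorically different computation.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count, for each word, which words follow it and how often."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Tiny corpus: "the" is followed by "cat" twice, "mat" and "fish" once each.
model = train_bigram("the cat sat on the mat and the cat ate the fish")
print(predict_next(model, "the"))  # → cat
```

Note that a bigram model only ever sees one previous word, no matter how much text you train it on; that limitation is exactly what the reductionist comparison glosses over.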

Rule 4: No implying that models are devoid of purpose or potential for enriching people's lives.

Reason: Models are tools with great potential for helping people, through the creation of accessibility software for the disabled and by enabling doctors to better heal the sick through advanced medical diagnostic techniques. The perceived harms models are capable of causing, such as job displacement, are rooted in our flawed late-stage capitalist society's pressure for increased profit margins at the expense of everyone and everything.

If you have any proposals for rule additions or wording changes I will hear you out in the comments. Thank you for choosing to browse and contribute to this space.

top 9 comments
[–] [email protected] 4 points 5 days ago* (last edited 5 days ago)

Agreed, agreed, agreed. Thanks

Some may seem arbitrary, but things like the NFT/crypto comparison are so politically charged and ripe for abuse that it's good to nip that in the bud.

The only one I have mixed feelings on is:

Rule: No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.E statements such as "llms are basically just simple text predictions like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>." Reason: There are grains of truth to the reductionist statement that llms rely on mathematical statistics and probability for their outputs.

The reasoning is true. I agree. But it does feel a bit uninclusive to outsiders who, to be frank, know nothing about LLMs. Commenters shouldn't drive by and drop reductionist hate, but that's also kinda the nature of Lemmy, heh.

So... maybe be a little lax with that rule, I guess? Like give people a chance to be corrected unless they're outright abusive.

[–] mindbleach 6 points 6 days ago (2 children)

Rule: No comparing artificial intelligence/machine learning to simple text prediction algorithms.

That's an overstep. "Spicy autocorrect" is not a joke exclusive to trolls. LLMs genuinely are simpler than they have any right to be, and it's ridiculous they work anywhere near this well.

Then again, rigidly defining bad behavior is a poor move anyway, when you're trying to say "don't be a tedious asshole." Tedious assholes will gladly slip around whatever specific problems you name, and bait other people into unwitting violations. The general version of this is enforced civility, i.e. "Rule 1: Be nice! >:(", and that becomes a duck-blind for infuriating liars. Sometimes "fuck off" is a perfectly reasonable response.

Just write "don't be a tedious asshole." Hash out what that means amongst the mod team. Do not be afraid to give people a week-long time-out for things you did not pre-emptively wag a finger about. If they mewl 'but it didn't say!,' tell them, it doesn't have to. I think everyone is happier when they can trust moderators to make a judgement call on who's being a dick. And so long as the stakes are temporary, don't be afraid to get it wrong sometimes.

[–] [email protected] 3 points 6 days ago* (last edited 6 days ago)

Just write "don't be a tedious asshole."

I believe that's what ani.social moderators do and I really appreciate them for it.

This is why

  1. it has a healthy community IMO
  2. I moved over to ani.social
[–] ThreeJawedChuck 2 points 6 days ago

can trust moderators to make a judgement call on who’s being a dick

I think there's something to that. I've only been here a short time, but it looks to me (so far) like the mods are doing a good job, or maybe the community is better behaved than certain others, or both. Whatever problems are here, it's nothing like the toxicity of some online spaces. Thus, I am content to trust the mod team to make judgment calls, with guidelines to nudge people toward good behaviours and set the tone they want the group to have. A balance between rules and flexibility, if you will.

Social media seems to drive everyone toward polarization. XYZ is either the best thing that has ever happened, or the worst. The big sites weaponize that using algorithms. But even without the algorithms, it's human nature and happens on its own to a lesser degree. I think it should be part of social media literacy to be cognizant of that, and try to guard ourselves against it. Personally I'm really enjoying experimenting with LLMs so far, and I think they can enrich people's lives in nice ways. At the same time, I also believe there are major social risks to this technology, and it's worth thinking about those. Both can be true at once.

[–] [email protected] 8 points 1 week ago (1 children)

It would be great if people could just talk online, discuss things... without turning every topic into a fight. I wish these rules weren't necessary and they weren't there. Thanks to the vast majority of nice, respectful and constructive people here, and to the moderators.

[–] mindbleach 4 points 6 days ago

Evaporative cooling is a bitch. You have some community about the problems with X, and there's a range of opinions about how bad X is. Anyone mildly affected won't post much or stick around. People with intense opinions exaggerate. Whether it's for comedy or rhetoric, 'X will be the death of us all!' chases out even more mild users. Now you have a vicious circle of X haters.

If that's popular enough to form a meaningful audience, you see careers made, serving that conclusion. Shockingly few of them are grifters. They just posted something honestly critical that the haters really enjoyed, and the like-minded engagement made the author's brain do the happy chemicals, so now they're the weekly go-to for obsessively complaining about the evils of X. Still naming actual problems with X, on par with their original independent criticism... but in the new fire-breathing style that makes even half-true non-issues sound like the worst event in recorded history.

The same can happen for positive attitudes, but the result is less circlejerk, and more... cult. Like that DRSyourGME instance. Or Qanon. When sensible people start toward the exits, that doesn't mean the party's over.

[–] [email protected] 8 points 1 week ago

stapling the animosity everyone has with crypto/NFT onto ML

Feels bad for cryptography and blockchain researchers who did genuine work before the cryptocurrency and NFT hype.

[–] [email protected] 7 points 1 week ago

I haven't frequented this community in a while, but wow, I had no idea things had gotten that bad. I know there are some rabid AI haters out there. I see AI posts getting downvoted with no real discussion or explanation.

Thanks for taking these steps.

[–] [email protected] 7 points 1 week ago

Thank you for the work you do maintaining this space. Antagonistic users who don't know what they're talking about have myriad other places to complain, it's not necessary to endure them here.