this post was submitted on 03 Jun 2025
9 points (62.9% liked)

LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members. I.e., no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e., no comparing the usefulness of models to that of NFTs, no claiming the resources required to train a model are anything close to those needed to maintain a blockchain or mine crypto, and no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e., no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/localllama
 

Sorry team, flipped the URLs around to prevent overflow from lemmy.world users

https://fly.io/blog/youre-all-nuts/

all 16 comments
[–] atzanteol 9 points 1 week ago (2 children)

This is just brilliant. Every ridiculous argument addressed perfectly.

but you have no idea what the code is

Are you a vibe coding Youtuber? Can you not read code? If so: astute point. Otherwise: what the fuck is wrong with you?

You’ve always been responsible for what you merge to main. You were five years ago. And you are tomorrow, whether or not you use an LLM.

I want to scream every time somebody brings up "but it writes code that doesn't work" and all I can think of is "what the fuck is wrong with you that you're merging code that doesn't work?" LLMs do not remove your responsibility as a developer to create a working product.

[–] [email protected] 1 points 1 day ago

Apart from the arguments that

  • yes, vibe coders exist and they will be cheaper to employ, creating huge long-term problems, with a generational gap in senior programmers, who are the ones maintaining open-source projects.
  • heinous environmental impact, and I mean heinous. This is honestly my biggest problem.
  • you're betting that LLMs will improve faster than programmers forget "the craft". LLMs are wide, not deep, and the less programmers care about boilerplate and how things actually work, the less material there is for the LLMs -> feedback loop -> worse LLMs, etc.

I use LLMs; hell, I designed a workshop for my employer on how programmers can use LLMs, Cursor, etc. But I don't think we're quite aware of how we are screwing ourselves long term.

[–] [email protected] 6 points 1 week ago (1 children)

I've played with QwenCoder2.5, Qwen3, and Devstral.

Holy shit are they bad. Seriously, consistently bad at coding. Initializing variables that are never used, importing and calling functions/methods that don't exist; it's fucking pathetic.

[–] atzanteol 4 points 1 week ago (2 children)

I don't know what to tell ya - GPT 4o does a really good job. Feel free to simply blame "ai slop" for everything though.

[–] [email protected] 1 points 5 days ago* (last edited 4 days ago)

Kinda late to the party but based on my day-to-day usage of ChatGPT, 4o is rubbish when it comes to coding.

Now o4-mini-high on the other hand - that's the good stuff (most of the time).

[–] [email protected] 1 points 1 week ago

yep I've been told Gemini is the new hot shit, really hoping local models can catch up

[–] [email protected] 6 points 1 week ago (1 children)

So should I try the Zed editor? I've tried AI-assisted coding, but never with a fully "immersive" experience. And I have a ton of little woes: the code is riddled with small annoyances and bugs, and I end up rephrasing and doing several tries until I arrive at something I still need to refactor for an hour or so... So does this apply to people who need to uphold some level of quality, and to people who can't just change the programming language of an entire existing project so it works better with AI?

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (2 children)

Local models are not capable of coding yet, despite what benchmarks say. Even if they get what you're trying to do they spew out so many syntax errors and tool calling problems that it's a complete waste of time. But if you're using an API, then I don't see why one editor over another matters much. They'll be different in implementation but generally pull off the same things.
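
To illustrate: under the hood most of these editors and plugins just send the same OpenAI-compatible chat request to whatever backend you point them at, so the frontend mostly changes the UX, not the results. A minimal sketch (the endpoint URL and model name are placeholders, swap in your own provider or local server):

```python
import requests

# Any OpenAI-compatible endpoint works here: a hosted API or a local server
# (llama.cpp, an inference proxy, etc.). URL and model name are placeholders.
API_URL = "http://localhost:8080/v1/chat/completions"
MODEL = "your-model-name"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
    "temperature": 0.2,  # lower temperature tends to behave better for code
    "max_tokens": 512,
}

resp = requests.post(API_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```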

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago)

Local models are not capable of coding yet, despite what benchmarks say. Even if they get what you’re trying to do they spew out so many syntax errors and tool calling problems that it’s a complete waste of time.

I disagree with this. Qwen Coder 32B and on have been fantastic for niches with the right settings.

If you apply a grammar template and/or start/fill in their response, drop the temperature a ton, and keep the actual outputs short, it's like night and day vs 'regular' chatbot usage.
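
For example, against a local llama.cpp server that looks roughly like the sketch below. The GBNF grammar, model, and settings are toy assumptions, just to show the mechanism of constraining the output format and keeping generations short:

```python
import requests

# Assumes a local llama.cpp server (e.g. started with
# `llama-server -m qwen2.5-coder-32b-instruct-q4_k_m.gguf --port 8080`).
# /completion and its fields are llama.cpp's own HTTP API.
URL = "http://localhost:8080/completion"

# Toy GBNF grammar: force the reply to open with a Python "def" and contain
# only raw lines of text, no chatty preamble. Illustrative, not canonical.
GRAMMAR = r'''
root ::= "def " line+
line ::= [^\n]* "\n"
'''

# With a raw completion endpoint, "starting/filling in the response" just means
# appending the first tokens of the desired answer to the prompt; here the
# grammar already pins the opening "def ", so the prompt stays plain.
prompt = (
    "Write a Python function that parses an ISO 8601 date string. "
    "Reply with code only.\n\n"
)

resp = requests.post(URL, json={
    "prompt": prompt,
    "grammar": GRAMMAR,
    "temperature": 0.1,  # dropped way down from chatbot defaults
    "n_predict": 200,    # keep the output short
})
resp.raise_for_status()
print(resp.json()["content"])
```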

TBH one of the biggest problems with LLMs is that they're treated as chatbot genies with all sorts of performance-degrading workarounds, not as tools to fill in little bits of text (which is what language models were originally conceived for).
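
That "fill in little bits of text" use is exactly what the fill-in-the-middle (FIM) format on coder models is for. A rough sketch against the same kind of local server; the special tokens below are the ones Qwen2.5-Coder documents, and other models use different ones, so treat them as an assumption:

```python
import requests

URL = "http://localhost:8080/completion"  # same local llama.cpp server as above

# Fill-in-the-middle: give the model the code before and after the gap and ask
# it to produce only the missing piece. These FIM tokens are the ones
# Qwen2.5-Coder documents; other coder models use different special tokens.
prefix = "def fahrenheit_to_celsius(f):\n    return "
suffix = "\n\nprint(fahrenheit_to_celsius(212))\n"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

resp = requests.post(URL, json={
    "prompt": prompt,
    "temperature": 0.0,  # a completion like this wants determinism, not creativity
    "n_predict": 32,     # only a tiny snippet is needed
})
resp.raise_for_status()
print(prefix + resp.json()["content"] + suffix)
```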

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

Alright. I mean, I haven't used local models for coding. What I tried was ChatGPT, AI Studio, and Grok. I can't try Claude, since they want my phone number and I'm not going to provide that to them. I feel DeepSeek and a few other local models should be able to get somewhere in the realm of commercial services, though. At least judging by the coding benchmarks, we have some open-weight competition there.

[–] [email protected] 5 points 1 week ago (1 children)

+1, though all this is a very unpopular opinion on most of the internet.

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (2 children)

it’s weird because the post has a massive amount of downvotes in an AI-friendly sub; even the Hacker News RSS bot is being downvoted!

I think the “cross-posted to” feature is being abused by anti-AI zealots

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (1 children)

It's not bots, it's just how local ML posts go on the internet.

I got banned from a Reddit fandom sub for the mere suggestion that a certain fan 'remaster' be updated with newer diffusion/GAN models. Apparently they weren't aware the original was made with Waifu2x... But unfortunately, anything tangential to tech bro AI is radioactive.

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (1 children)

sorry i mean the rss bot:

https://lemmy.bestiver.se/post/419818

this was posted on [email protected], who are super anti-AI, so it's no surprise they're downvoting everything

Next time I might do an archive link and put the real article in the body

edit: just swapped the links around, will see how this goes

[–] [email protected] 3 points 1 week ago

Eh, I still bet it was really people browsing /new who downvoted it.

Honestly I get it, with how enshittified corporate portals and usage already are, but still.