
LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks of community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming that the resources required to train a model are anything close to those needed to maintain a blockchain or mine crypto, and no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago
submitted 1 week ago* (last edited 1 week ago) by [email protected] to c/localllama
 

Sorry team, flipped the URLs around to prevent overflow from lemmy.world users.

https://fly.io/blog/youre-all-nuts/

[–] [email protected] 6 points 1 week ago (1 children)

So should I try the Zed editor? I've tried AI-assisted coding, but never with a fully "immersive" experience. And I have a ton of small woes: the code is riddled with little annoyances and bugs, and I end up rephrasing and doing several tries until I arrive at something that I still need to refactor for an hour or so... So does this apply to people who need to uphold some level of quality, and to people who can't just change the programming language of an entire existing project so it works better with AI?

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (2 children)

Local models are not capable of coding yet, despite what benchmarks say. Even if they get what you're trying to do they spew out so many syntax errors and tool calling problems that it's a complete waste of time. But if you're using an API, I don't see why you'd pick one editor over another; they'll differ in implementation but generally pull off the same things.
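Underneath, whatever editor you pick is mostly wrapping the same completion call. As a rough sketch (the endpoint URL and model name below are placeholders, not any particular provider or editor config), any OpenAI-compatible API looks about like this:

```python
# Rough sketch: the request an AI-assisted editor makes under the hood.
# base_url and model are placeholders for whichever OpenAI-compatible
# endpoint (hosted or local) you point it at.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key="sk-...",                            # your key for that provider
)

resp = client.chat.completions.create(
    model="some-coding-model",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Refactor this function to remove the off-by-one bug: ..."},
    ],
)
print(resp.choices[0].message.content)
```

The editors differ in how they gather context and apply the diff, but the round trip to the model is essentially this.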

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago)

Local models are not capable of coding yet, despite what benchmarks say. Even if they get what you’re trying to do they spew out so many syntax errors and tool calling problems that it’s a complete waste of time.

I disagree with this. Qwen Coder 32B and onward have been fantastic in their niches with the right settings.

If you apply a grammar template and/or pre-fill the start of their response, drop the temperature a ton, and keep the actual outputs short, it's like night and day vs 'regular' chatbot usage.

TBH one of the biggest problems with LLMs is that they're treated as chatbot genies with all sorts of performance-degrading workarounds, rather than as tools for filling in little bits of text (which is what language models were originally conceived for).
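As a toy sketch of what I mean, assuming llama-cpp-python and a local Qwen Coder GGUF (the model path, grammar, and prompt here are illustrative placeholders, not exact settings):

```python
# Minimal sketch of the "constrain and prefill" idea, assuming llama-cpp-python
# and a local Qwen2.5-Coder GGUF. Model path and grammar are placeholders.
from llama_cpp import Llama, LlamaGrammar

llm = Llama(model_path="qwen2.5-coder-32b-instruct-q4_k_m.gguf", n_ctx=8192)

# GBNF grammar for the *generated* part only: code characters (no backticks)
# followed by a closing fence, so the model can't ramble outside the code block.
grammar = LlamaGrammar.from_string(r'''
root ::= [^`]+ "```"
''')

# ChatML-style prompt (the format Qwen uses), with the assistant turn
# pre-filled so the reply starts inside a Python code block.
prompt = (
    "<|im_start|>user\n"
    "Write a function that parses an ISO-8601 date string.<|im_end|>\n"
    "<|im_start|>assistant\n"
    "```python\n"  # prefill: the answer is forced to start as code
)

out = llm(
    prompt,
    grammar=grammar,
    temperature=0.1,  # drop the temperature a ton
    max_tokens=256,   # keep the actual output short
)
print("```python\n" + out["choices"][0]["text"])
```

The grammar keeps the model inside a single code block and the prefill skips the chatty preamble entirely; other runtimes (e.g. the llama.cpp server) expose the same knobs.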

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

Alright. I mean, I haven't used local models for coding; what I tried was ChatGPT, AI Studio, and Grok. I can't try Claude, since they want my phone number and I'm not going to provide that to them. I feel DeepSeek and a few other open models should be able to get somewhere in the realm of the commercial services, though. At least judging by the coding benchmarks, we have some open-weight competition there.