LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive constructive way.
Rules:
Rule 1 - No harassment or personal character attacks on community members. I.e., no name-calling, no generalizing about entire groups of people who make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e., no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e., no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
I'm not entirely sure how I need to effectively use these models, I guess. I tried some basic coding prompts, and the results were very bad. Using R1 Distill Qwen 32B, 4-bit quant.
The first answer had incorrect, non-runnable syntax. I was able to get it to fix that after multiple follow-up prompts, but I was NOT able to get it to fix the bugs. It took several minutes of thinking time for each prompt, and gave me worse answers than the stock Qwen model.
For comparison, GPT 4o and Claude Sonnet 3.5 gave me code that would at least run on the first shot. 4o's was even functional in one shot (Sonnet's was close but had bugs). And that took just a few seconds instead of 10+ minutes.
Looking over its chain of thought, it seems to get caught in circles, just stating the same points again and again.
Not sure exactly what the use case is for this. For coding, it seems worse than useless.
The lower-quant coding models in the 14b-32b range I've tried just can't cook functioning code easily in general. Maybe if they distilled the coder version of Qwen 14b it might be a little better, but I doubt it. I think a really high-quant 70b model is more in range of cooking functioning code off the bat. It's not really fair to compare a low-quant local model to o1 or Claude in the cloud; they are much bigger.
Some people are into this not to have a productive tool but just because they think neural networks are rad. The study of how brains process and organize information is a cool thing to think about.
So I 'use' it by asking questions to poke around at its domain knowledge. I try to find holes, see how it handles not knowing things, and how it reasons about what information might imply in an open-ended question or how it relates to something else. If I feel it's strong enough with general knowledge and real-world reasoning problems, I consider trusting it as a rubber duck to bounce ideas off and request suggestions from.
Deepseek feels to me like it's aimed as a general experimental model that peels back how LLMs 'think'. It examines how altering or extending an LLM's 'thought process' changes its ability to figure out logic problems, and enables similar comparative examinations.
I've gotten good tests by asking a very domain-specific question and a follow-up:
How are strange attractors and Julia sets related?
Are they the same underlying process occurring in the two different mediums of physical reality and logical abstraction?
What is the Collatz conjecture?
How does it apply to negative numbers?
How do determinists arguments against human free will based on the predictability of human neurons firing relate to AI statements about lacking the ability to generate consciousness or have experiences?
What is Barnsley's collage conjecture? Explain it in an easy-to-understand way.
Does it imply every fractal structure has a related IFS equation?
What is Gödel's incompleteness theorem?
What does it imply about scientific theories of everything?
Can fractal structures contain other fractals? Is the universe structured as a super-fractal that contains all other fractals?
These kinds of questions really grill an LLM's exact knowledge level of scientific, mathematical, and philosophical concepts, as well as its ability to piece these concepts together into coherent context. Do human-like monologues and interjections of doubt actually add something to its ability to piece together coherent simulations of understanding, or is it just extra fluff? That's an interesting question worth exploring.
That's a good point. I got mixed up and thought it was distilled from qwen2.5-coder, which I was using for comparison at the same size and quant. qwen2.5-coder-34b@4bit gave me better (but not entirely correct) responses, without spending several minutes on CoT.
I think I need to play around with this more to see if CoT is really useful for coding. I should probably also compare 32b@4bit to 14b@8bit to see which is better, since those both can run within my memory constraints.
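For a rough sense of which of those fits, weight memory can be estimated as parameters times bits per weight divided by 8. This is only a back-of-envelope sketch with assumed numbers; real usage runs higher once the KV cache, activations, and runtime overhead are counted, and it is not how LM Studio actually accounts for memory:

```python
# Back-of-envelope estimate of quantized weight memory.
# Assumption: memory ~= params * bits_per_weight / 8 bytes,
# ignoring KV cache and runtime overhead (which add more).

def weight_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for name, params, bits in [("32b @ 4-bit", 32, 4),
                           ("14b @ 8-bit", 14, 8)]:
    print(f"{name}: ~{weight_gib(params, bits):.1f} GiB of weights")
```

By this estimate the two options land in a similar ballpark (roughly 15 GiB vs 13 GiB of weights), so the comparison is mostly about quantization loss versus parameter count rather than memory.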
That's interesting; in gpt4all they have the Qwen reasoner v1, and it will run the code in a sandbox (for JavaScript, anyway) and if it errors it will fix itself.
Sounds cool. I'm using LM Studio, and I don't think it has that built in. I should reevaluate the others.
https://www.nomic.ai/blog/posts/gpt4all-scaling-test-time-compute
This release introduces the GPT4All Javascript Sandbox, a secure and isolated environment for executing code tool calls. When using Reasoning models equipped with Code Interpreter capabilities, all code runs safely in this sandbox, ensuring user security and multi-platform compatibility.
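The generate-run-fix loop that kind of sandbox enables can be sketched roughly like this. This is a toy illustration, not GPT4All's actual implementation: it runs Python in a subprocess instead of JavaScript in an isolated sandbox, and `ask_model` is a hypothetical stand-in for whatever LLM call you have available:

```python
import os
import subprocess
import sys
import tempfile

def run_code(code: str, timeout: int = 10):
    """Run generated code in a subprocess; return (ok, combined output).

    NOTE: a subprocess is NOT a real sandbox; this is only a sketch of
    the control flow, not a secure execution environment.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout)
        return proc.returncode == 0, proc.stdout + proc.stderr
    finally:
        os.unlink(path)

def generate_and_fix(ask_model, prompt: str, max_rounds: int = 3):
    """Ask the model for code, execute it, and feed errors back until it runs.

    `ask_model` is an assumed callable: prompt string in, code string out.
    """
    code = ask_model(prompt)
    ok, output = run_code(code)
    for _ in range(max_rounds):
        if ok:
            break
        code = ask_model(f"{prompt}\n\nYour code failed with:\n{output}\nFix it.")
        ok, output = run_code(code)
    return code, ok, output
```

The interesting design choice is that the error output becomes part of the next prompt, so the model gets concrete feedback instead of the user manually pasting tracebacks back in.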
I use LM Studio as well, but between this and LM Studio's bug where LLMs larger than 8b won't load, I've gone back to gpt4all.