LocalLLaMA

3226 readers
2 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about the groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resources required to train a model are anything close to those needed to maintain a blockchain or mine crypto, and no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text-prediction algorithms, i.e. no statements such as "LLMs are basically just the simple text prediction your phone keyboard's autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago

One might question why an RX 9070 card would need so much memory, but increased capacity can serve purposes beyond gaming, such as Large Language Model (LLM) support for AI workloads. Additionally, it’s worth noting that RX 9070 cards will use 20 Gbps memory, much slower than the RTX 50 series, which features 28-30 Gbps GDDR7 variants. So, while capacity may increase, bandwidth likely won’t.
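As a back-of-envelope check, effective GDDR bandwidth is pin speed times bus width. The 256-bit bus below is an assumption for illustration, not a confirmed spec:

```shell
# Effective bandwidth in GB/s = (Gbps per pin * bus width in bits) / 8
# A 256-bit bus is assumed here purely for illustration
pin_speed_gbps=20
bus_width_bits=256
echo "$(( pin_speed_gbps * bus_width_bits / 8 )) GB/s"
```

At 28 Gbps on the same assumed bus width that comes out to 896 GB/s, which is roughly the bandwidth gap described above.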


Closing session, speech by Modi, JD Vance, Ursula von der Leyen


Sorry I keep posting about Mistral but if you check: https://chat.mistral.ai/chat

I don't know how they do it, but some of these answers are lightning fast:

Fast inference dramatically improves the user experience for chat and code generation – two of the most popular use-cases today. In the example above, Mistral Le Chat completes a coding prompt instantly while other popular AI assistants take up to 50 seconds to finish.

For this initial release, Cerebras will focus on serving text-based queries for the Mistral Large 2 model. When using Cerebras Inference, Le Chat will display a “Flash Answer ⚡” icon on the bottom left of the chat interface.


Example of it working in action: https://streamable.com/ueh3sj

Paper: https://arxiv.org/abs/2502.03382

Samples: https://hf.co/spaces/kyutai/hibiki-samples

Inference code: https://github.com/kyutai-labs/hibiki

Models: https://huggingface.co/kyutai

From kyutai on X: Meet Hibiki, our simultaneous speech-to-speech translation model, currently supporting FR to EN.

Hibiki produces spoken and text translations of the input speech in real-time, while preserving the speaker’s voice and optimally adapting its pace based on the semantic content of the source speech.

Based on objective and human evaluations, Hibiki outperforms previous systems for quality, naturalness and speaker similarity and approaches human interpreters.

https://x.com/kyutai_labs/status/1887495488997404732

Neil Zeghidour on X: https://x.com/neilzegh/status/1887498102455869775

35
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/localllama
 
 

Hello, everyone! I wanted to share my experience of successfully running LLaMA on an Android device. The model that performed the best for me was llama3.2:1b on a mid-range phone with around 8 GB of RAM. I was also able to get it up and running on a lower-end phone with 4 GB RAM. However, I also tested several other models that worked quite well, including qwen2.5:0.5b , qwen2.5:1.5b , qwen2.5:3b , smallthinker , tinyllama , deepseek-r1:1.5b , and gemma2:2b. I hope this helps anyone looking to experiment with these models on mobile devices!


Step 1: Install Termux

  1. Download and install Termux from the Google Play Store or F-Droid

Step 2: Set Up proot-distro and Install Debian

  1. Open Termux and update the package list:

    pkg update && pkg upgrade
    
  2. Install proot-distro

    pkg install proot-distro
    
  3. Install Debian using proot-distro:

    proot-distro install debian
    
  4. Log in to the Debian environment:

    proot-distro login debian
    

You will need to log in to Debian every time you want to run Ollama, and you will need to repeat the steps below every time you want to run a model (excluding Step 3 and the first half of Step 4).


Step 3: Install Dependencies

  1. Update the package list in Debian:

    apt update && apt upgrade
    
  2. Install curl:

    apt install curl
    

Step 4: Install Ollama

  1. Run the following command to download and install Ollama:

    curl -fsSL https://ollama.com/install.sh | sh
    
  2. Start the Ollama server:

    ollama serve &
    

After you run this command, press Ctrl + C; the server will continue to run in the background.


Step 5: Download and run the Llama3.2:1B Model

  1. Use the following command to download the Llama3.2:1B model:
    ollama run llama3.2:1b
    
    This step fetches and runs the lightweight 1-billion-parameter version of the Llama 3.2 model.
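For reference, here are Steps 1–5 condensed into a single dry-run script. This is my own condensation of the guide, not part of it; it only prints the commands so you can review the sequence, and you would run them for real inside Termux:

```shell
#!/bin/sh
# Dry-run sketch of the whole setup: prints each command instead of running it.
# To execute for real (inside Termux), change the function body to: eval "$*"
run() { echo "+ $*"; }

run 'pkg update && pkg upgrade'
run 'pkg install proot-distro'
run 'proot-distro install debian'
run 'proot-distro login debian'
# The remaining commands run inside the Debian environment:
run 'apt update && apt upgrade'
run 'apt install curl'
run 'curl -fsSL https://ollama.com/install.sh | sh'
run 'ollama serve &'
run 'ollama run llama3.2:1b'
```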

Running LLaMA and other similar models on Android devices is definitely achievable, even with mid-range hardware. The performance varies depending on the model size and your device's specifications, but with some experimentation, you can find a setup that works well for your needs. I’ll make sure to keep this post updated if there are any new developments or additional tips that could help improve the experience. If you have any questions or suggestions, feel free to share them below!

– llama


Should be good at conversation and creative writing; it'll be for worldbuilding.

Best if uncensored, as I prefer that over censorship kicking in when I least want it.

I'm fine with those roleplaying models as long as they can actually give me ideas and talk to me logically.


Changed title because there's no need for YouTube clickbait here.


Someone asked how LLMs can be so good at math operations. My response comment kind of turned into a five-paragraph essay, as they tend to do sometimes. I thought I would offer it here and add some references. Maybe spark some discussion?

What do language models do?

LLMs are trained to recognize, process, and construct patterns in language data as high-dimensional manifolds.

Meaning its job is to structure and compartmentalize the patterns of language into a map where each word and its particular meaning live as points on a geometric surface. Each point is placed near closely related points, connected by related concepts or properties of the word.

You can explore such a map for vision models here!

Then they use that map to statistically navigate through the sea of ways words can be associated into sentences to find coherent paths.

What does language really mean?

Language data isn't just words and syntax; it's the underlying abstract concepts, the context, and how humans choose to compartmentalize or represent universal ideas given our subjective reference point.

Language data extends to everything humans can construct thoughts about, including mathematics, philosophy, science, storytelling, music theory, programming, etc.

Language is universal because it's a fundamental way we construct and organize concepts. The first important cognitive milestone for babies is associating concepts with words and constructing sentences with them.

Even the universe speaks its own language. Physical reality and logical abstractions speak the same underlying universal patterns hidden in formalized truths and dynamical operation. Information and matter are two sides of a coin; their structure is intrinsically connected.

Math and conceptual vectors

Math is a symbolic representation of combinatoric logic. Logic is generally a formalized language used to represent ideas related to truth as well as how truth can be built on through axioms.

Numbers and math are cleanly structured, formalized patterns of language data. Math is rigorously described and its axioms well defined. So it's relatively easy to train a model to recognize and internalize the patterns inherent to basic arithmetic and linear algebra, and how they manipulate or process the data points representing numbers.

You can imagine the LLM's data manifold having a section for math and logic processing. The concept of one lives somewhere as a point of data on the manifold. Move that point along a vector direction that represents the process of 'addition by one' and you find the data point representing two.
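A toy sketch of that idea, with made-up 3-dimensional "embeddings" (real models use thousands of dimensions, and these numbers are purely illustrative):

```shell
# Hypothetical toy vectors: adding a 'plus one' direction to 'one' lands near 'two'
awk 'BEGIN {
  split("0.1 0.9 0.0", one, " ")       # made-up embedding for the concept "one"
  split("0.2 0.1 0.3", plus_one, " ")  # made-up direction for "addition by one"
  printf "%.1f %.1f %.1f\n",
    one[1] + plus_one[1], one[2] + plus_one[2], one[3] + plus_one[3]
}'
```

This is the same arithmetic behind the famous word2vec-style analogies (king - man + woman ≈ queen), just done by hand.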

Not a calculator though

However, an LLM can never be a true calculator due to the statistical nature of token sampling. It always has a chance of giving the wrong answer: out of the multitude of possible tokens, it can pick any number of wrong ones. We can drive the statistical chance of failure down, though.

It's interesting how LLMs can still give accurate answers for arithmetic despite having no built-in calculation function. Through training alone they learn how to apply simple arithmetic.
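The failure odds compound per token. A toy calculation of this (the 0.999 per-digit accuracy is an invented number, purely for illustration):

```shell
# If each digit token is sampled correctly with probability p, a d-digit answer
# is fully correct with probability p^d. p = 0.999 is an invented illustration.
awk 'BEGIN { p = 0.999; d = 10; printf "%.4f\n", p ^ d }'
```

So even a model that is right 99.9% of the time per digit gets a 10-digit answer fully right only about 99% of the time, and the gap widens as answers get longer.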

Hidden structures of information

There are hidden or intrinsic patterns in most structures of information. Usually you can find the fractal hyperstructures the patterns are geometrically baked into once you plot out their phase-space/holomorphic parameter maps in higher dimensions. We can partly visualize these fractals with vision-model activation parameter maps; Welch Labs on YouTube has a great video about it.

Modern language models have so many parameters, spanning so many dimensions, that the manifold they expand into is impossible to visualize. So they are basically mystery black boxes that somehow internalize these crazy fractal structures of complex information and navigate the topological manifolds language data creates.

Conclusion

This is my understanding of how LLMs do their thing. I hope you enjoyed reading! Secretly I just wanted to show you the cool chart :)

26
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/localllama
 
 

I've been playing around with the DeepSeek R1 distills, the Qwen 14b and 32b ones specifically.

So far it's very cool to see models really going after the current CoT meta by mimicking internal thinking monologues. Seeing a model go "but wait...", "Hold on, let me check again...", "Aha! So..." kind of makes it feel more natural in its eventual conclusions.

I don't like how it can get caught in looping thought processes, and I'm not sure how much all the extra tokens spent really contribute to a "better" answer/solution.

What really needs to be ironed out is the reading comprehension, which seems lower than average: it misses small details in tricky questions and makes assumptions about what you're trying to ask, like wanting a recipe for coconut-oil cookies but it only sees "coconut" and gives a coconut cookie recipe with regular butter.

It's exciting to see models operate in a kind of new way.


I was experimenting with oobabooga, trying to run this model, but due to its size it wasn't going to fit in RAM, so I tried to quantize it using llama.cpp. That worked, but due to the GGUF format it was only running on the CPU. Searching for ways to quantize the model while keeping it in safetensors returned nothing, so is there any way to do that?

I'm sorry if this is a stupid question; I still know almost nothing about this field.


Do I need industry-grade GPUs, or can I scrape by getting decent TPS with a consumer-level GPU?


I am excited to see how this performs when it drops around May.

13
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/localllama
 
 

It seems Meta has been doing some research lately on replacing current tokenizers with new/different representations:


Absolutely humongous model. Mixture of 256 experts with 8 activated each time.

Aider leaderboard: The only model above 🐋 v3 here is ~~Open~~AI o1. DeepSeek is known to make amazing models and Aider rotates their benchmark over time, so it is unlikely that this is a train-on-benchmark situation.

Some more benchmarks: on Reddit.

12
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/localllama
 
 

I want to fine-tune an LLM to "steer" it in the right direction. I have plenty of training examples in which I stop the generation early, correct the output to go in the right direction, and then resume generation.

Basically, for my dataset doing 100 "steers" on a single task is much cheaper than having to correct 100 full generations completely, and I think each of these "steer" operations has value and could be used for training.

So maybe I'm looking for some kind of localized DPO. Does anyone know if something like this exists?


Howdy!

(moved this comment from the noob question thread because no replies)

I'm not a total noob when it comes to general compute and AI. I've been using online models for some time, but I've never tried to run one locally.

I'm thinking about buying a new computer for gaming and for running/testing/developing LLMs (not training, only inference and in-context learning). My understanding is that ROCm is becoming decent (and I also hate Nvidia), so I'm thinking a Radeon RX 7900 XTX might be a good start. If I buy the right motherboard, I should be able to put another XTX in there later as well, if I use water cooling.

So first, what do you think about this? Are the 24 gigs of VRAM worth the extra bucks? Or should I just go for a mid-range GPU like the Arc B580?

I'm also curious about experimenting with a no-GPU setup, i.e. CPU + lots of RAM. What kind of models do you think I'll be able to run with decent performance if I have something like a Ryzen 7 9800X3D and 128/256 GB of DDR5? How does that compare to the Radeon RX 7900 XTX? Is it possible to utilize both CPU and GPU when running inference with a single model, or is it either/or?
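As a rough sizing aid (my own back-of-envelope, not a benchmark): a quantized model needs roughly params × bits-per-weight / 8 of memory, ignoring the context/KV cache:

```shell
# Approximate GB needed to hold an N-billion-parameter model at b bits per weight.
# ~5 bits/weight roughly matches a Q4_K_M-style quant; KV cache is not included.
params_billion=70
bits_per_weight=5
echo "$(( params_billion * bits_per_weight / 8 )) GB (approx)"
```

By that estimate a 70B quant fits comfortably in 128 GB of system RAM, while 24 GB of VRAM caps you around 30B-class quants. And it isn't strictly either/or: llama.cpp can split a model, offloading some layers to VRAM (the `-ngl` flag) while the rest run on the CPU from system RAM.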

Also, isn't it better if noobs post questions in the main thread? Questions would probably reach more people there. It's not like there is super much activity.

12
Fixed it (sh.itjust.works)
submitted 6 months ago* (last edited 6 months ago) by HumanPerson to c/localllama
 
 

Seriously though, does anyone know how to use openwebui with the new version?

Edit: if you go into the ollama container using sudo docker exec -it <container name> bash, you can then pull models, with ollama pull llama3.1:8b for example, and have them available.


People are talking about the new Llama 3.3 70b release, which has generally better performance than Llama 3.1 (approaching 3.1's 405b performance): https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_3

However, something to note:

Llama 3.3 70B is provided only as an instruction-tuned model; a pretrained version is not available.

Is this the end of open-weight pretrained models from Meta, or is Llama 3.3 70b instruct just a better-instruction-tuned version of a 3.1 pretrained model?

Comparing the model cards: 3.1: https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md 3.3: https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md

The same knowledge cutoff, same amount of training data, and same training time give me hope that it's just a better finetune of maybe Llama 3.1 405b.

15
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/localllama
 
 

I've been working on keeping the OSM tool up to date with OpenWebUI's rapid development pace. And now I've added better-looking citations, with fancy styling. Just a small announcement post!

Update: when this was originally posted, the tool was on 1.3. Now it's updated to 2.1.0, with a navigation feature (beta) and more fixes for robustness.

17
Qwen2.5-Coder-7B (self.localllama)
submitted 6 months ago by lynx to c/localllama
 
 

I've been using Qwen 2.5 Coder (bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF) for some time now, and it has shown significant improvements compared to previous open weights models.

Notably, this is the first model that can be used with Aider. Moreover, Qwen 2.5 Coder has made notable strides in editing files without requiring frequent retries to generate in the proper format.

One area where most models struggle, including this one, is when the prompt exceeds a certain length. In this case, it appears that the model becomes unable to remember the system prompt when the prompt length is above ~2000 tokens.
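A crude way to check whether a prompt is approaching that range is the common ~4-characters-per-token rule of thumb (the exact ratio varies by tokenizer, so treat this as a ballpark):

```shell
# Estimate the token count of a prompt at ~4 chars/token (tokenizer-dependent).
# An 8000-character stand-in prompt is generated here for demonstration.
printf 'x%.0s' $(seq 1 8000) > /tmp/prompt.txt
chars=$(wc -c < /tmp/prompt.txt)
echo "approx tokens: $(( chars / 4 ))"
```

Point it at your real prompt file instead of the stand-in to see whether you are near the ~2000-token mark where the model starts dropping the system prompt.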
