this post was submitted on 04 Jan 2024
25 points (100.0% liked)

LocalLLaMA

2292 readers

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 2 years ago

Based on DeepSeek Coder, the current SOTA 33B model; it allegedly has GPT-3.5 levels of performance. I'll be excited to test it once I've made exllamav2 quants, and I'll try to update with my findings on using it as a copilot model.

[–] [email protected] 5 points 11 months ago* (last edited 11 months ago) (6 children)

I'm starting to think I should run local models as completion engines, but I don't have a GPU I can use. What's the best option for accelerating these models? Are there PCIe cards that give better bang for the buck for running models?

[–] noneabove1182 2 points 11 months ago (2 children)

The 3060 is a nice cheap one for running okay-sized models, but if you can find a way to stretch to a 3090 or a 7900 XTX, you'll be able to run these 33B models at decent quant levels.

[–] [email protected] 3 points 11 months ago (1 children)

I was hoping to avoid Nvidia's binary drivers, although I don't know what the driver/support status of dedicated AI accelerators is like on Linux.

[–] noneabove1182 3 points 11 months ago

I run my Nvidia stuff in containers so I don't have to deal with all the stupid driver shenanigans.
