this post was submitted on 04 Jan 2024

LocalLLaMA


Based on DeepSeek Coder, the current SOTA 33B model. It allegedly has GPT-3.5 levels of performance. I'm excited to test it once I've made exllamav2 quants, and I'll try to update with my findings on using it as a copilot model.

[–] noneabove1182 2 points 7 months ago* (last edited 7 months ago) (1 children)

Btw, I know this is old and you may have already figured out your hardware and setup, but P40s and P100s go for super cheap on eBay.

The P40 is an amazing $/GB deal; the only issue is that its FP16 performance is abysmal, so you'll want to either run full FP32 models or use llama.cpp, which can upcast to FP32.
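
Not from the original comment, but for anyone curious, here's a rough sketch of what running a GGUF quant on a card like the P40 looks like through the llama-cpp-python bindings. The model filename and prompt are just placeholders:

```python
from llama_cpp import Llama

# Rough sketch: load a GGUF quant and offload all layers to the GPU.
# Model path and quant name are placeholders, not from the thread.
llm = Llama(
    model_path="deepseek-coder-33b-instruct.Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload every layer; a 33B Q4 quant should fit in the P40's 24 GB
    n_ctx=4096,
)

out = llm(
    "### Instruction:\nWrite a Python function that reverses a string.\n### Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```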

The P100 has less VRAM but really good FP16 performance, which makes it ideal for exllamav2. I picked up one of each recently; the P40 failed to deliver and the P100 arrived while I was away, but once I have both in hand I'll probably post a comparison against my 3090 for interest's sake.
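
And for the exllamav2 side, a minimal sketch along the lines of the project's example scripts (the EXL2 model directory and sampler settings here are placeholders, not from the comment):

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Load an EXL2 quant; the directory name is a placeholder.
config = ExLlamaV2Config()
config.model_dir = "deepseek-coder-33b-exl2-4.0bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)          # split layers across the available GPU(s)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

print(generator.generate_simple("Write a quicksort in Python:", settings, 128))
```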

Also I run all my stuff on Linux (Ubuntu 22.04) with no issues

[–] [email protected] 2 points 7 months ago (1 children)

I've generally tried to avoid Nvidia cards because binary-blob drivers are a pain (especially since, as a FLOSS developer, I occasionally need to build newer kernels). I believe the recent firmware changes mean the nouveau driver can now control clocking, but I've no idea what the status is for CUDA, which I assume you need to run the models.

They do look pretty affordable though 😀

[–] noneabove1182 1 point 7 months ago

If you go for it and need any help, lemme know; I've had good results with Linux and Nvidia lately :)
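
E.g., once the proprietary driver and CUDA are installed, a quick sanity check from Python (assuming you have PyTorch installed) will tell you whether the card is actually visible:

```python
import torch

# Confirms the NVIDIA driver + CUDA runtime are working and the card is visible.
print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(f"  device {i}: {torch.cuda.get_device_name(i)}")
```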