this post was submitted on 07 Jul 2023
6 points (87.5% liked)
LocalLLaMA
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
you are viewing a single comment's thread
I've got a K80 and it's... underwhelming.
It does run 30B models, though. And it is cheap.
So I am looking to get a K80, P40, or 3060. Regarding future CUDA support: I see that it is possible to use an old GPU without the current CUDA version, even if a program requires it? Or is it already unusable in some programs today? Compiling from scratch isn't a problem, and drivers are something I can probably handle too, but are there any bigger problems for future-proofing?
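For context, one way to check whether a given framework build still ships kernels for an older card is to compare the GPU's compute capability against the build's arch list. Here is a minimal sketch assuming a PyTorch install (the K80 is compute capability 3.7 / sm_37, the P40 is 6.1 / sm_61, the 3060 is 8.6 / sm_86); if your card's arch is missing from the list, you'd likely be back to compiling from source.

```python
# Minimal sketch (assumes a PyTorch build with CUDA support is installed):
# checks whether this build was compiled with kernels for each detected GPU.
import torch

if not torch.cuda.is_available():
    print("No usable CUDA device or driver found.")
else:
    build_archs = torch.cuda.get_arch_list()  # e.g. ['sm_50', 'sm_60', 'sm_86', ...]
    for idx in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(idx)
        arch = f"sm_{major}{minor}"
        supported = arch in build_archs
        print(f"GPU {idx}: {torch.cuda.get_device_name(idx)} ({arch}) "
              f"- kernels in this build: {supported}")
```

Recent prebuilt wheels have dropped Kepler (sm_37), so on a K80 this would typically print False, whereas the P40 and 3060 are still covered.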