this post was submitted on 20 Jul 2023
6 points (100.0% liked)

LocalLLaMA

2293 readers
1 user here now

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 2 years ago

(Deleted; no longer relevant)

top 2 comments
[–] noneabove1182 1 points 1 year ago

I guess it depends on what you mean by usable. I think people have had success with ROCm; it's not as solid as CUDA, of course, but it's been more than usable.

[–] [email protected] 1 points 1 year ago

Before buying a GPU for this, evaluate models using your CPU and system memory first. The only real difference is speed: CPUs can be acceptably fast, and the responses themselves are the same.
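For reference, a minimal sketch of CPU-only evaluation using llama.cpp — the model filename, thread count, and prompt below are placeholder assumptions, not from the thread:

```shell
# Hypothetical llama.cpp invocation (model path and flag values are placeholders):
# -m   : quantized GGUF model, loaded into system RAM
# -t   : number of CPU threads (roughly match your physical cores)
# -ngl : GPU layers to offload; 0 keeps inference entirely on the CPU
./llama-cli -m ./models/llama-2-7b.Q4_K_M.gguf -t 8 -ngl 0 \
  -p "Explain quantization in one sentence."
```

If the tokens-per-second rate you see here is tolerable for your use case, you may not need a GPU at all.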