this post was submitted on 02 Jul 2023
10 points (100.0% liked)

LocalLLaMA

2952 readers
35 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive constructive way.

founded 2 years ago

So what is currently the best and easiest way to use an AMD GPU? For reference, I own an RX 6700 XT and want to run a 13B model (maybe SuperHOT), but I'm not sure if my VRAM is enough for that. Until now I've always stuck with llama.cpp, since it's quite easy to set up. Does anyone have any suggestions?
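For the VRAM question, a rough back-of-envelope estimate helps: at 4-bit quantization (as in llama.cpp's Q4 formats), a 13B-parameter model needs about half a byte per weight, plus some headroom for the KV cache and buffers. The overhead figure below is a hypothetical allowance, not a measured number:

```python
# Rough VRAM estimate for a 4-bit quantized 13B model (illustrative only).
params = 13e9
bytes_per_param = 0.5                          # ~4 bits per weight under Q4 quantization
weights_gb = params * bytes_per_param / 1e9    # ≈ 6.5 GB for the weights alone
overhead_gb = 2.0                              # hypothetical allowance for KV cache + buffers
total_gb = weights_gb + overhead_gb

print(f"~{total_gb:.1f} GB needed; the RX 6700 XT has 12 GB")  # → ~8.5 GB needed
```

By this estimate a Q4 13B model should fit in the 6700 XT's 12 GB, though longer contexts (e.g. SuperHOT's extended context) grow the KV cache and eat into that margin.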

[–] [email protected] 0 points 2 years ago

Just pay Nvidia their ill-earned pound of flesh. I say this as a strong AMD advocate.

It's clear that AMD isn't serious about the AI market. They had years to provide a proper competitor to CUDA, or at the very least a 1:1 compatibility layer. Instead of doing either of those things, AMD kept messing with half-assed projects like ROCm and the other one, whose name I don't care to look up. AMD has the resources to build a CUDA-compatible API in under six months, but for some reason they don't. I don't know why, and at this point I don't really care.

Buy an AMD GPU for AI at your own risk.