this post was submitted on 06 Sep 2023
26 points (93.3% liked)

LocalLLaMA

2274 readers

Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 2 years ago
How usable are AMD GPUs? (lemmy.dbzer0.com)
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/localllama
 

Heyho, I'm currently on an RTX 3070 but want to upgrade to an RX 7900 XT.

I see that AMD installers exist, but is it all smooth sailing? How do AMD cards compare to NVidia in terms of performance?

I'd mainly use oobabooga but would also love to try some other backends.

Anyone here with one of the newer AMD cards that could talk about their experience?

EDIT: To clear things up a little bit: I am on Linux, and I'd say I'm quite experienced with it. I know how to handle a card swap and where to get my drivers. I also know about the gaming performance difference between NVidia and AMD; those are the main reasons I want to switch to AMD. Now I just want to hear from someone who ALSO runs Linux + AMD what their experience with Oobabooga and Automatic1111 is when using ROCm, for example.

[–] [email protected] 1 points 1 year ago

What models are you using, and how many iterations/s do you get on average with them?

Do you also use Stable Diffusion (Auto1111)? If yes, same question as above for that^^

[–] [email protected] 1 points 1 year ago

I use a ton of different ones. I can test specific models if you like.

[–] [email protected] 2 points 1 year ago

The good ol' Anything V3 and DPM++ 2M Karras.

That would give me a good baseline. Thanks! :)

[–] [email protected] 1 points 1 year ago

Does the resolution, step count, or anything else matter?

[–] [email protected] 1 points 1 year ago

512x512 and 1024x1024 would be interesting,

and 50 steps.

That'd be awesome!

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

I ran these last night, but didn't have the correct VAE, so I'm not sure if that affects anything. 512x512 was about 7.5 it/s. 1024x1024 was about 1.3 s/it (iirc). I used somebody else's prompt which used loras and embeddings, so I'm not sure how that affects things either. I'm not a professional benchmarker, so consider these numbers anecdotal at best. Hope that helps.
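A small aside on units: Auto1111 reports fast runs in it/s and slow runs in s/it, so the two figures above aren't directly comparable until one is inverted. A quick sketch of the conversion (the 1.3 s/it figure is taken from the comment above):

```shell
# Invert s/it to get it/s so both resolutions use the same unit
awk 'BEGIN { printf "1024x1024: %.2f it/s\n", 1 / 1.3 }'
```

So 1.3 s/it works out to roughly 0.77 it/s.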

Edit: formatting

[–] [email protected] 2 points 1 year ago

7.5 it/s for 512x512 is what I was looking for! That's on par with NVidia (actually even faster than my 3070)!

Thank you very much! And how/what exactly did you use to install?

[–] [email protected] 1 points 1 year ago

The install wasn't too hard. I mean, it wasn't like just running a batch file on Windows, but if you have even a tiny bit of experience with the Linux shell and installing Python apps, you'll be fine. You mostly just need to make sure you're using the correct (ROCm) build of PyTorch. Happy to help any time (best on evenings and weekends, EST). Please DM.

[–] [email protected] 1 points 1 year ago

I'm quite familiar with Linux and installing stuff, so no compiling special versions of weird packages and manually putting them into a venv or something, I assume 😄

Thanks again!

[–] [email protected] 2 points 1 year ago

Also you’re welcome!

[–] [email protected] 2 points 1 year ago

No special compiling. You just need to download the ROCm drivers from AMD and the ROCm build of PyTorch.
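For reference, the two steps described above looked roughly like this at the time. This is a sketch, not an exact recipe: the ROCm version in the index URL and the `HSA_OVERRIDE_GFX_VERSION` workaround are assumptions that depend on your card and driver release.

```shell
# 1. Install AMD's ROCm stack via their installer script
#    (from the amdgpu-install package on AMD's site)
sudo amdgpu-install --usecase=rocm

# 2. Inside the webui's venv, install the ROCm build of PyTorch instead of
#    the default CUDA one. The rocm5.6 index URL is an example; match it to
#    the ROCm version you actually installed.
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6

# Some RDNA3 cards needed this override before official gfx1100 support
# landed (assumption; check whether your ROCm release supports it natively)
export HSA_OVERRIDE_GFX_VERSION=11.0.0

# Sanity check: a ROCm build reports a HIP version rather than CUDA
python -c 'import torch; print(torch.version.hip, torch.cuda.is_available())'
```

On ROCm, `torch.cuda.is_available()` returns True because PyTorch exposes HIP devices through the same `cuda` API.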