this post was submitted on 06 Sep 2023
26 points (93.3% liked)
LocalLLaMA
Community to discuss LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
You probably want to use the AMD driver your Linux distro ships out of the box plus ROCm, instead of whatever AMD offers as a driver download on their landing page.
Gaming-wise the AMD card would win in rasterization performance, but PyTorch is built for CUDA (Nvidia only) first rather than OpenCL/HIP (which AMD uses).
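For what it's worth, a rough sketch of what that looks like in practice (the wheel index URL is just an example, match it to whatever ROCm version you actually installed) - the ROCm build of PyTorch keeps the torch.cuda API and just backs it with HIP:

```python
# Assumes the in-kernel amdgpu driver plus a ROCm userspace are already set up.
# The ROCm build of PyTorch comes from its own wheel index, for example:
#   pip install torch --index-url https://download.pytorch.org/whl/rocm5.6
import torch

print(torch.__version__)          # ROCm builds carry a "+rocm" suffix
print(torch.version.hip)          # HIP version string, None on CUDA builds
print(torch.cuda.is_available())  # torch.cuda is the HIP backend here

# No code changes needed elsewhere: "cuda" is the device name even on AMD cards.
x = torch.randn(1024, 1024, device="cuda")
print(x.sum().item())
```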
I couldn't get my AMD card to run reliably in half precision (fp16), which hurts performance A LOT compared to no-half or fp32.
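If you want to test that on your own card, the pattern is basically "try fp16, fall back to fp32 when it misbehaves" - a minimal sketch with a placeholder model (swap in whatever pipeline you actually run):

```python
import torch
import torch.nn as nn

# Placeholder model/input just to exercise the GPU; replace with your real pipeline.
model = nn.Linear(4096, 4096).to("cuda")
x = torch.randn(8, 4096, device="cuda")

try:
    # Half precision first: roughly half the VRAM and much faster where it works.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        y = model(x)
    if not torch.isfinite(y).all():
        raise RuntimeError("fp16 produced NaN/Inf")
    print("fp16 looks usable on this card")
except RuntimeError as err:
    # "no-half" fallback: full fp32, slower and hungrier on VRAM, but it avoids
    # the half-precision issues some ROCm setups show.
    print(f"falling back to fp32: {err}")
    y = model(x)
```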
Interestingly enough, setting up AMD cards with ROCm is actually easier on Linux than on Windows.
Anyway, my experience is mostly Stable Diffusion and some early gpt4all stuff, but oobabooga uses PyTorch too, so it's probably similar.
I've had AMD cards my whole life and only switched to NVidia 3 years ago, when this whole local LLM and image AI thing wasn't even on the table... now I'm just pissed that NVidia gives us so little VRAM to play with unless you pay the price of a used car -.-
AMD drivers ship with the kernel, so yeah, I won't be downloading any AMD drivers on Linux^^
Oobabooga and Automatic1111 are my main questions - I could actually live with a downgrade in performance if I could at least run the bigger models thanks to having way more VRAM. I can't even run 17b models on my current 8GB VRAM card... and I can't make 1024x1024 images in Auto1111 without running into issues either. If I can do those things, just a bit slower, that's fine with me^^
What sort of issues are you getting trying to generate 1024x1024 images in Stable Diffusion? I've generated up to 1536x1024 without issue on a 1070 (although it takes a few minutes) and could probably go even larger (this was in img2img mode which uses more VRAM as well - although at that size you usually won't get good results with txt2img anyway). What model are you using?
That's outside the scope of this post and not the goal of it.
I don't want to start troubleshooting my NVidia Stable Diffusion setup in an LLM post about AMD :D Thanks for trying to help, but this isn't the right place for that.
Fair enough, but if your baseline for comparison is wrong, then you can't make a good assessment of the capabilities of different GPUs. And it's possible that you don't actually need a new GPU/more VRAM anyway, if your goal is to generate 1024x1024 in Stable Diffusion and run a 13B LLM - both of which I can do with 8 GB of VRAM.
This is correct, yes. But I want a new GPU because I want to get away from NVidia...
I CAN use 13b models and I can create 1024x1024 images, but not without issues: only by making sure nothing else uses VRAM, and I still run out of memory quite often.
I want to make it more stable, and open the door to using bigger models or making bigger images.
Yes, that makes more sense. I was initially concerned that you were looking to buy a new GPU with more VRAM solely because you couldn't do something you should already be able to do, that this would be an unnecessary spend of money and/or not actually fix the problem, and that you would be somewhat mad at yourself if you found out afterwards that "oh, I just needed to change this setting".
Thanks for the concern, but no worries - I did my fair share of optimizing my config and I believe I got everything out of it... I will 100% switch to AMD, so my question basically boils down to: can I sell my 3070, or do I have to keep it and put it into a "server" on which I can run Stable Diffusion and oobabooga, because AMD is still too wonky for that...
That's all. My decision doesn't depend on whether this AI stuff works; it just speeds things up if AMD can run it, because then I can sell my old card and get the money sooner.
I've only ever used 7b large language models on my RX 6950 XT, but PyTorch had (or still has) some nasty AMD VRAM bugs that kept it from fully utilizing all of my VRAM (more like only a quarter of it).
It seems the sad truth is that high-performance inference/training of models just isn't in a good place on AMD cards as of now.
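If anyone wants to check whether their card/setup is affected by that, here's a quick diagnostic sketch - it just compares what the driver reports with how much PyTorch will actually let you allocate (needs a reasonably recent PyTorch for mem_get_info and OutOfMemoryError):

```python
import torch

free, total = torch.cuda.mem_get_info(0)  # bytes as reported by the driver
name = torch.cuda.get_device_properties(0).name
print(f"{name}: {total / 2**30:.1f} GiB total, {free / 2**30:.1f} GiB free")

# Grab 1 GiB blocks until allocation fails, to see how much VRAM
# PyTorch can actually use on this setup.
blocks, gib = [], 0
try:
    while True:
        blocks.append(torch.empty(1024**3, dtype=torch.uint8, device="cuda"))
        gib += 1
except torch.cuda.OutOfMemoryError:
    pass
print(f"could allocate about {gib} GiB before running out")

del blocks
torch.cuda.empty_cache()
```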
Interesting
Do you only use LLMs, or also Stable Diffusion?