this post was submitted on 06 Sep 2023
26 points (93.3% liked)
LocalLLaMA
2269 readers
Community to discuss about LLaMA, the large language model created by Meta AI.
This is intended to be a replacement for r/LocalLLaMA on Reddit.
founded 1 year ago
you are viewing a single comment's thread
I am on Linux, but I can live with a painful install. I wanted to hear whether it performs on par with NVIDIA.
Again, apologies for the confusion. I had thought my initial comment was on a gaming community. Here are Puget Systems' benchmarks, and they don't look great: https://www.pugetsystems.com/labs/articles/stable-diffusion-performance-nvidia-geforce-vs-amd-radeon/#Automatic_1111
"Although this is our first look at Stable Diffusion performance, what is most striking is the disparity in performance between various implementations of Stable Diffusion: up to 11 times the iterations per second for some GPUs. NVIDIA offered the highest performance on Automatic 1111, while AMD had the best results on SHARK, and the highest-end GPU on their respective implementations had relatively similar performance."
Sorry, not trying to come at you, just providing a bit of fact checking. In this link they tested on Windows, which means they would have been using DirectML, which is super slow. Did Linus Tech Tips do this? Anyway, the cool kids use ROCm on Linux. Much, much faster.
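For anyone following along, here's a minimal sanity check (a sketch, assuming you've installed a ROCm build of PyTorch, e.g. with `pip install torch --index-url https://download.pytorch.org/whl/rocm5.6`; the ROCm version in that index URL is just an example, check the PyTorch site for the current one). It shows how to confirm PyTorch actually sees the AMD GPU through ROCm rather than silently falling back to CPU:

```python
import torch

# On ROCm builds of PyTorch, torch.version.hip is set (it's None on CUDA-only builds)
# and the familiar torch.cuda.* API surface is mapped onto HIP under the hood.
print("HIP version:", torch.version.hip)
print("GPU available:", torch.cuda.is_available())  # True if the AMD GPU is visible
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))  # should name your Radeon card
```

If `is_available()` comes back False on a supported card, the usual suspects are a plain CPU wheel being installed, or driver permissions (on most distros your user needs to be in the `render`/`video` groups).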
Haha, you're not; I definitely stumbled into this. These guys mainly build edit systems for post companies, so they stick to Windows. Good to know about ROCm, got something to read up on.
Yeah, that was what I was worried about after reading the article; I've heard about the different backends...
Do you have AMD + Linux + Auto1111 / Oobabooga? Can you give me some real-life feedback? :D
No worries
Interesting article. I'd never heard of SHARK before; seems worth a look then.