[–] [email protected] 1 points 11 months ago (1 children)

I know that the XTX kept up with the 4090 in Stable Diffusion before the TensorRT update, so there might be some places where the XTX can be a replacement, if you build your software from the ground up and are willing to lose some performance for the benefit of fewer eyes and less hassle on AMD products.

[–] [email protected] 1 points 11 months ago (1 children)

Got a source for that keeping up?

[–] [email protected] 1 points 11 months ago (2 children)

That is, unfortunately, sorely outdated, particularly with the advent of TensorRT. Best case vs. best case, the 4080 is about twice as fast today:

https://www.tomshardware.com/pc-components/gpus/stable-diffusion-benchmarks#section-stable-diffusion-512x512-performance

[–] [email protected] 1 points 11 months ago

The gap will get even larger if, or to be precise WHEN, FP8 and/or sparsity are used on the Ada Lovelace cards.
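
For the curious, here is a minimal sketch of what enabling those features looks like in TensorRT's Python API. It assumes the `tensorrt` package and skips the actual network definition and engine build, so it's illustrative rather than a working pipeline:

```python
import tensorrt as trt

# Illustrative only: create a builder config and opt in to the precision and
# sparsity features discussed above; the engine build itself is omitted.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

config.set_flag(trt.BuilderFlag.FP16)            # the common fast path today
config.set_flag(trt.BuilderFlag.SPARSE_WEIGHTS)  # 2:4 structured sparsity
# FP8 on Ada Lovelace would be enabled the same way once the toolchain and
# models support it; the exact flag depends on the TensorRT version.
```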

[–] [email protected] 1 points 11 months ago (1 children)

Of note, TensorRT doesn't support SDXL yet.

[–] [email protected] 1 points 11 months ago (1 children)

This is no longer true.
If you use NV's TensorRT plugin with the A1111 development branch, TensorRT works very well with SDXL (it's actually much less painful to use than SD1.5 TensorRT was initially).

The big constraint is VRAM capacity. I can use it for 1024x1024 (and similar-total-pixel-count) SDXL generations on my 4090, but can't go much beyond that without tiling (though that is generally what you do anyway for larger resolutions).

Just like for SD1.5, TensorRT speeds up generation by almost a factor of 2 for SDXL (compared to an "optimized" baseline using SDP).
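
To make the comparison concrete, here's a rough sketch of how the SDP baseline could be timed (SDP being PyTorch's scaled dot product attention, the default attention path in diffusers on PyTorch 2.x). The model name, prompt, and step count are illustrative; the TensorRT side would be timed the same way with the engine-backed UNet swapped in:

```python
import time

import torch
from diffusers import StableDiffusionXLPipeline

# SDP baseline: on PyTorch 2.x, diffusers routes attention through
# torch.nn.functional.scaled_dot_product_attention by default.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photo of an astronaut riding a horse"
pipe(prompt, num_inference_steps=5)  # warmup: CUDA init, kernel selection

start = time.perf_counter()
pipe(prompt, num_inference_steps=30, height=1024, width=1024)
print(f"SDP baseline: {time.perf_counter() - start:.2f} s for 30 steps")
# Re-run with a TensorRT-accelerated UNet (e.g. via the A1111 extension) to
# reproduce the roughly 2x comparison described above.
```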

[–] [email protected] 1 points 11 months ago

Alright thanks. This stuff is moving very fast, and I was only looking at the master branch.

[–] [email protected] 1 points 11 months ago (2 children)

You can't compare using two different implementations. You compare only on A1111 or only on SHARK.

SHARK doesn't even seem to be taking any advantage of the 4090; it comes out significantly slower than the 7900 XTX there.

The recent A1111 Olive branch brought its performance almost up to the SHARK model's. A1111 also fully utilizes the 4090.

The new results on the same A1111 implementation are here:

https://www.pugetsystems.com/labs/articles/amd-microsoft-olive-optimizations-for-stable-diffusion-performance-analysis/

You can divide the 4090's performance in half if you want the no-TensorRT number, which gives 35. That's still significantly higher than the 7900 XTX's 23 (roughly a 1.5x lead).
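
For context on what the Olive path actually is: Olive optimizes models into ONNX form, which ONNX Runtime then executes; on AMD under Windows that typically means the DirectML provider. A minimal sketch of loading such a model, with a hypothetical file name:

```python
import onnxruntime as ort

# Hypothetical file: a UNet that an Olive workflow has already optimized.
session = ort.InferenceSession(
    "unet_optimized.onnx",
    providers=["DmlExecutionProvider"],  # DirectML runs on AMD (and other) GPUs
)

# Confirm DirectML was actually selected rather than the CPU fallback.
print(session.get_providers())
```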

[–] [email protected] 1 points 11 months ago

"You compare only on A1111 or only on SHARK"

That seems like an arbitrary handicap. You should use whichever solution runs best on the respective hardware.

[–] [email protected] 1 points 11 months ago

It mentions Olive. I don't know what that is, but the article suggests it could let AMD catch back up. Is that true? Or is it more likely to get them an extra 10% of performance rather than the extra 110% they need to catch up?