That is, unfortunately, sorely outdated, particularly with the advent of TensorRT. Best case vs. best case, the 4080 is about twice as fast today:
https://www.tomshardware.com/pc-components/gpus/stable-diffusion-benchmarks#section-stable-diffusion-512x512-performance
The gap will be even larger if, or more precisely when, FP8 and/or sparsity are used on the Ada Lovelace cards.
Of note, TensorRT doesn't support SDXL yet.
This is no longer true.
If you use NV's TensorRT plugin with the A1111 development branch, TensorRT works very well with SDXL (it's actually much less painful to use than SD1.5 TensorRT was initially).
The big constraint is VRAM capacity. I can use it for 1024x1024 (and similar-total-pixel-count) SDXL generations on my 4090, but can't go much beyond that without tiling (though that is generally what you do anyway for larger resolutions).
Just like for SD1.5, TensorRT speeds up SDXL generation by almost a factor of 2 (compared to an "optimized" baseline using SDP, i.e. scaled dot-product attention).
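For concreteness, here is a minimal sketch of how one might time that SDP baseline with the diffusers library. The model ID, prompt, and step count are illustrative assumptions, not the setup described above, and the TensorRT path itself (which runs through NVIDIA's A1111 plugin) is not shown:

```python
# Rough baseline timing for SDXL 1024x1024 with PyTorch SDP attention.
# The checkpoint, prompt, and step count below are placeholders, not the
# commenter's exact configuration.
import time

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed base SDXL checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# On recent PyTorch, diffusers uses scaled-dot-product attention (SDP) by
# default, which is the "optimized" baseline referred to above.
prompt = "a photo of an astronaut riding a horse"  # placeholder prompt

# Warm-up run so one-time CUDA/model initialization doesn't skew the timing.
pipe(prompt, num_inference_steps=30, height=1024, width=1024)

torch.cuda.synchronize()
start = time.perf_counter()
image = pipe(prompt, num_inference_steps=30, height=1024, width=1024).images[0]
torch.cuda.synchronize()
print(f"SDP baseline: {time.perf_counter() - start:.2f} s per image")
```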
Alright, thanks. This stuff is moving very fast, and I was only looking at the master branch.