this post was submitted on 24 Nov 2023
Hardware
They can do what they want; using 'gamer' GPUs for AI is not a new thing. The theory behind Nvidia's low VRAM is that GTX 1080 Tis were being used for AI training, Nvidia saw the lost revenue, and locked down VRAM.
And mining. Ethereum mining is very memory intensive, so they had to limit memory bandwidth and find other ways to make up the performance for games. That's why you don't see 384 or 512-bit memory buses anymore; they're all as low as you can go. A 128-bit bus isn't uncommon, sadly.
The reason for the shrinking memory buses is the poor scaling of IO with newer processes. The memory controllers on AD102 have basically the same footprint as those on GA102, in spite of a gigantic increase in overall transistor density.
Ethereum mining hasn't been a thing for a year now btw
2080 Ti, 3090, 4090.
We haven't seen them since we moved to GDDR6, simply because the signal integrity and power requirements make it quite unreasonable.
Lack of DRAM scaling is the reason why we are where we are. Computational power has grown much faster than bandwidth.
Nvidia has had around a generation's advantage over AMD in bandwidth efficiency/utilization since Maxwell. Surprise surprise, one generation after AMD, they too have had to resort to larger caches to substitute for bandwidth.
A 512-bit GDDR6 bus (which isn't realistic to begin with) would not have given the 4090 enough of a bandwidth increase over the 3090 to keep up with the growth in computational power.
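To put rough numbers on that, here's a back-of-the-envelope sketch. The 3090/4090 figures are published spec-sheet values; the 18 Gbps 512-bit GDDR6 configuration is an assumed value for illustration.

```python
# Peak bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps) -> GB/s
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

rtx_3090 = bandwidth_gbs(384, 19.5)            # ~936 GB/s, 384-bit GDDR6X
rtx_4090 = bandwidth_gbs(384, 21.0)            # ~1008 GB/s, 384-bit GDDR6X
hypo_512_g6 = bandwidth_gbs(512, 18.0)         # ~1152 GB/s, assumed 18 Gbps GDDR6

print(f"3090:              {rtx_3090:.0f} GB/s")
print(f"4090:              {rtx_4090:.0f} GB/s (+{rtx_4090 / rtx_3090 - 1:.0%})")
print(f"512-bit GDDR6:     {hypo_512_g6:.0f} GB/s (+{hypo_512_g6 / rtx_3090 - 1:.0%})")

# FP32 throughput went roughly 35.6 -> 82.6 TFLOPS (~2.3x) from 3090 to 4090,
# while even a 512-bit GDDR6 bus would only add ~23% raw bandwidth --
# hence the huge L2 cache on AD102 instead of a wider bus.
```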
Exactly, the entire planet is conspiring to deprive gamers of high-end GPUs.