this post was submitted on 24 Nov 2023

Hardware


A place for quality hardware news, reviews, and intelligent discussion.


Here is the link.

I can see people buying GPUs, sending them to Mexico, doing the same thing there, and then exporting the chips to China. Since China is already offshoring production to Mexico, the logistics chain is already in place.

https://www.techpowerup.com/316066/special-chinese-factories-are-dismantling-nvidia-geforce-rtx-4090-graphics-cards-and-turning-them-into-ai-friendly-gpu-shape

[–] [email protected] 1 points 10 months ago (4 children)

They can do what they want; 'gamer' GPUs for AI is not a new thing. The theory goes that Nvidia's stingy VRAM stems from GTX 1080 Tis being used for AI training: Nvidia saw the money it was losing and locked down VRAM.

[–] [email protected] 1 points 10 months ago (3 children)

And mining. Ethereum mining is very memory intensive, so they had to limit memory bandwidth and find other ways to make up the performance for games. That's why you don't see 384- or 512-bit memory buses anymore; they're all as low as you can go. A 128-bit bus isn't uncommon, sadly.

[–] [email protected] 1 points 9 months ago

That’s why you don’t see 384

2080 Ti, 3090, 4090.

or 512-bit memory bus anymore

We haven't seen them since the move to GDDR6, simply because the signal-integrity and power requirements make them quite unreasonable.

find other ways to make up the performance for games

Lack of DRAM scaling is the reason why we are where we are. Computational power has grown much faster than bandwidth.

Nvidia has had around a generation of advantage over AMD in bandwidth efficiency/utilization since Maxwell. Surprise surprise: one generation after AMD, they too have had to resort to larger caches to substitute for bandwidth.

A 512-bit GDDR6 bus (which isn't realistic to begin with) would not have given the 4090 enough of a bandwidth increase over the 3090 to keep up with the growth in computational power.
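A quick sanity check of that claim. Peak memory bandwidth is just bus width (in bytes) times the per-pin data rate, so we can compare the real 3090 and 4090 against a hypothetical 512-bit GDDR6 configuration (the data rates below are the public spec figures; the 512-bit card is an assumption for illustration):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "RTX 3090 (384-bit GDDR6X @ 19.5 Gbps)": bandwidth_gbs(384, 19.5),  # 936 GB/s
    "RTX 4090 (384-bit GDDR6X @ 21 Gbps)": bandwidth_gbs(384, 21.0),    # 1008 GB/s
    "Hypothetical 512-bit GDDR6 @ 18 Gbps": bandwidth_gbs(512, 18.0),   # 1152 GB/s
}
for name, bw in cards.items():
    print(f"{name}: {bw:.0f} GB/s")
```

Even the hypothetical 512-bit bus only yields about 23% more bandwidth than the 3090's 936 GB/s, while the 4090's shader throughput is over twice the 3090's, which is exactly why Ada leans on a much larger L2 cache instead of a wider bus.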
