this post was submitted on 15 Nov 2023
1 points (100.0% liked)
Hardware
They'd have to actually spend chip real estate on DL/RT.
So far they've talked about using DL for gameplay rather than graphics, so no dedicated tensor units.
And their RT has mostly been there to keep up with NV feature-wise. They apparently enhanced it somewhat in RDNA3, but NV isn't standing still either.
Second-gen AMD HW ray tracing still has a worse performance impact than Intel's first-gen HW ray tracing. No need to discuss Nvidia here, as they are miles ahead. Either AMD isn't willing to spend more resources on RT, or they aren't able to improve performance.
AMD's RT hardware is intrinsically tied to the texture unit, which was probably a reasonable decision at the start, since Nvidia kinda caught them with their pants down and they needed something fast to implement (especially with consoles looming overhead; you wouldn't want the entire generation to lack any form of RT).
Now, though, I think it's giving them a lot of problems because it's really not a scalable design. I hope they eventually implement a proper dedicated unit like Nvidia and Intel have.
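To get some intuition for why sharing the texture unit hurts scaling, here's a toy model (all numbers made up, purely illustrative, not real GPU timings): if BVH intersection tests issue through the same hardware as texture fetches, their costs roughly add, whereas a dedicated RT unit lets the two kinds of work overlap.

```python
# Toy model of shared vs. dedicated RT hardware.
# Numbers are invented "unit-cycles" for illustration only.
texture_work = 100  # per-frame texture-fetch cost
rt_work = 60        # per-frame BVH intersection cost

# Shared unit: RT intersection tests and texture fetches contend
# for the same issue slots, so their costs serialize and add.
shared_cycles = texture_work + rt_work

# Dedicated RT unit: the two workloads can run concurrently,
# so the frame is bound by whichever takes longer.
dedicated_cycles = max(texture_work, rt_work)

print(shared_cycles)     # 160
print(dedicated_cycles)  # 100
```

Obviously real GPUs are far messier than this, but it shows why a design that steals texture-unit throughput gets harder to scale as RT workloads grow.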
That's what RDNA4 will introduce.