this post was submitted on 15 Nov 2023

Hardware

A place for quality hardware news, reviews, and intelligent discussion.

Just wondering: what would AMD need to do to at least MATCH Nvidia's offerings in AI/DLSS/ray tracing tech?

[–] [email protected] 1 points 11 months ago

AMD needs an answer to DLSS.

[–] [email protected] 1 points 11 months ago (1 children)

Of course, they could. Their hardware isn't that bad; they are closer than anybody else. Their software stack is another story. AMD has been promising to do a better job at that for more than a decade. I don't really trust their commitment to their software stack anymore. Actually, Intel might overtake them in that regard.

[–] [email protected] 1 points 11 months ago

Yes, because what Nvidia is doing isn't super special. Of course AMD will have an equivalent or better solution eventually, so the real question is "how many years behind" AMD will be.

They closed the gap significantly in raster performance. Power efficiency is pretty close, and so is area efficiency. AI is mostly a software problem, and AMD isn't blind to this; they are very clearly investing a ton more into software to close that gap. (They just bought Nod.ai and absorbed all its talent.)

The hardware is arguably better in many respects. MI200 and MI250 are HPC monsters, and MI300 is a chiplet packaging masterpiece that has HPC performance on lockdown.

There's a reason that no new HPC supercomputers are being announced with Nvidia GPUs.

Nvidia has the lead in AI; AMD has the lead in HPC. Nvidia has the lead in area efficiency; AMD has the lead in packaging expertise (which means they can throw a ton more area at the problem at the same cost as Nvidia).

[–] [email protected] 1 points 1 year ago (2 children)

Do people think they can't catch up? Remember Ryzen?

They can if they start taking their GPU division seriously, in terms of R&D and units produced, which they are not.

[–] [email protected] 1 points 1 year ago (1 children)

Ryzen caught up first and foremost because Intel stalled, and Intel's stall was a process problem relative to TSMC more than anything AMD did.

[–] [email protected] 1 points 11 months ago

The opposite is also true. In terms of CPU design itself, that is, what AMD actually does, they have been able to match Intel's offerings since Ryzen launched. It could've been a flop if TSMC hadn't delivered, but the Ryzen architecture (which is what we're talking about in this thread: design) was up to the level of their competitor after lagging behind for something like half a decade.

So, I insist: with enough R&D they'd be able to do something similar on the GPU side of things.

[–] [email protected] 1 points 1 year ago

Yes, I remember Ryzen. Tell me more about how it took, what, four generations to beat Intel's Skylake++++++++ in gaming?

[–] [email protected] 1 points 1 year ago

Nvidia would have to fumble pretty hard.

[–] [email protected] 1 points 1 year ago

AMD is decades away from catching Nvidia right now, and Nvidia is unstoppable. On the software ecosystem side the gap is even wider.

[–] [email protected] 1 points 11 months ago

No, because Nvidia has software companies on its side, and that proprietary software is simply more developed than what exists for AMD.

[–] [email protected] 1 points 1 year ago (2 children)

I'm an expert in this area; I won't reveal more than that. My understanding, though I could be wrong, is that AMD's biggest issue with ray tracing is that they don't do workload reordering (what Nvidia calls Shader Execution Reordering), which Nvidia reports gave a 25% performance uplift on its own, and which Intel has had from the get-go.

Basically, the RT cores determine where a ray hit and which material shader to use, but they don't actually execute material shaders; they just figure out which one to call for that bounce. That work then has to be fed back to the normal compute cores, but a group of compute cores needs to share the same instruction to execute in parallel; otherwise each distinct instruction in the "subgroup" has to be executed serially, one after another. So what Nvidia and Intel do is reorder the work before handing it off to compute subgroups, which increases performance.
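
To make the divergence point concrete, here's a toy CPU-side sketch of the idea (my own illustration, not anyone's actual GPU code). It models 32-lane subgroups that can only run one material shader per pass, and compares hits processed in ray order against hits sorted by material first. The subgroup width, material count, and hash are all made-up parameters.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <set>
#include <vector>

struct Hit { int material; float t; };  // which shader to run, and hit distance

// One 32-lane "subgroup" batch: count how many serialized shader passes it
// needs, i.e. one pass per distinct material present in the batch.
static int passesPerBatch(const std::vector<Hit>& hits, std::size_t begin) {
    std::set<int> materials;
    std::size_t end = std::min(begin + 32, hits.size());
    for (std::size_t i = begin; i < end; ++i) materials.insert(hits[i].material);
    return static_cast<int>(materials.size());
}

static int totalPasses(const std::vector<Hit>& hits) {
    int passes = 0;
    for (std::size_t i = 0; i < hits.size(); i += 32) passes += passesPerBatch(hits, i);
    return passes;
}

int main() {
    // 4096 hits bouncing among 8 materials, in a scrambled (ray) order.
    std::vector<Hit> hits(4096);
    for (std::size_t i = 0; i < hits.size(); ++i)
        hits[i] = Hit{static_cast<int>((i * 2654435761u) % 8), 1.0f};

    std::printf("passes, ray order:    %d\n", totalPasses(hits));

    // The "reordering" step: group hits by material before shading.
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.material < b.material; });
    std::printf("passes, sorted order: %d\n", totalPasses(hits));
}
```

With these numbers, the scrambled ray order needs 8 serialized shader passes per 32-lane batch, while the sorted order needs only 1. That gap is the whole motivation for reordering hardware.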

I'm not sure why AMD didn't bother with this, but in recent history they have had hardware/driver bugs that caused them to scrap entire features on their GPUs.

Now, the upscaling and AI tech thing is a different issue. While AMD isn't doing well power-efficiency-wise right now anyway, adding tensor cores, the primary driver of Nvidia's ML capabilities, means sacrificing power efficiency and die space. What I believe AMD wants to do instead is focus on generalized fp16 performance. That is actually useful in non-ML workloads too, like HDR and other low-precision applications, or with sparse neural networks, where tensor cores aren't. (Tensor cores IIRC can't be used at the same time as the CUDA cores, whereas on Nvidia at least, fp16 and fp32 can execute at the same time within the same CUDA core/warp/subgroup.)
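
The "packed" flavor of generalized fp16 (the Rapid Packed Math idea AMD has shipped since Vega) is basically two 16-bit values per 32-bit lane, so one instruction retires two operations. Here's a rough stand-in using 16-bit integer lanes (the SWAR trick) rather than real fp16, since portable fp16 in C++ is still spotty; this is my illustration of dual-rate math, not AMD's actual implementation.

```cpp
// SWAR ("SIMD within a register") stand-in for packed math: one 32-bit word
// holds two 16-bit lanes, and a single add updates both lanes at once. This
// is the shape of the dual-rate fp16 argument: same registers, same issue
// slots, twice the elements per instruction.
#include <cstdint>
#include <cstdio>

// Lane-wise 16-bit addition inside a 32-bit word; the masking keeps a carry
// in the low lane from spilling into the high lane (standard SWAR trick).
std::uint32_t addPacked16(std::uint32_t a, std::uint32_t b) {
    const std::uint32_t high = 0x80008000u;        // top bit of each 16-bit lane
    std::uint32_t sum = (a & ~high) + (b & ~high); // add without cross-lane carry
    return sum ^ ((a ^ b) & high);                 // fix up the lanes' top bits
}

int main() {
    std::uint32_t a = (1000u << 16) | 3000u; // lanes: {1000, 3000}
    std::uint32_t b = (25u << 16) | 17u;     // lanes: {  25,   17}
    std::uint32_t c = addPacked16(a, b);
    std::printf("high lane = %u, low lane = %u\n", c >> 16, c & 0xFFFFu);
    // One 32-bit operation produced both results: 1025 and 3017.
}
```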

We can see the power issues on the low end especially: Jetson Orins (Ampere) don't beat, or barely beat, Jetson TX2s (8-year-old Pascal hardware) at the same power draw, and they more than doubled the "standard performance" power draw.

In addition to the power draw, and tensor cores being dead weight for non-ML work, fully dedicated ASICs are the future for AI, not ML acceleration duct-taped onto GPUs, which can already accelerate it without specialized hardware. See the recent Microsoft news, and Google, Amazon, Apple, and even AMD looking to put ML acceleration on CPUs instead as a side thing (like integrated graphics).

AMD probably doesn't want to go down that route, since they would inevitably stop shipping it with their GPUs in the future.

Finally, DLSS-2.0-quality upscaling should now be possible at acceptable speeds using AMD's fp16 capabilities. GPUs are so fast now that the fixed cost of DLSS-style upscaling is small enough to be carried out by the compute cores. AMD's solution thus far has been pretty lacking given their own capabilities. Getting the training data for this is very capital-intensive, and it's likely AMD still doesn't want to spend the effort to make a better version of FSR n.0, despite it essentially being a software problem on the 7000 series.
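
To put a rough number on "the fixed cost is now small enough", here's a back-of-envelope sketch. The peak-throughput figure, the sustained-efficiency guess, and the ~3,000 fp16 FLOPs per output pixel for a small upscaling network are all my assumptions for illustration, not anything AMD or Nvidia has published.

```cpp
// Back-of-envelope cost of running a DLSS-2-style upscaling network on
// general fp16 compute. Every constant is an assumption for illustration,
// not a measured DLSS or FSR number.
#include <cstdio>

int main() {
    const double fp16_tflops  = 123.0;           // assumed peak packed fp16, 7900 XTX class
    const double efficiency   = 0.35;            // assumed sustained fraction of peak
    const double out_pixels   = 3840.0 * 2160.0; // 4K output resolution
    const double flops_per_px = 3000.0;          // assumed cost of a small conv net per pixel

    const double total_flops = out_pixels * flops_per_px;
    const double seconds = total_flops / (fp16_tflops * 1e12 * efficiency);
    std::printf("~%.2f ms per frame for the upscaling pass\n", seconds * 1e3);
    // At these assumptions the pass costs under a millisecond, small next
    // to a 16.7 ms (60 fps) frame budget.
}
```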

[–] [email protected] 1 points 1 year ago (1 children)

The same could be said of AMD vs. Intel in 2015, and yet here we are.

[–] [email protected] 1 points 1 year ago

They can't. Nvidia is too big; if AMD spent as much as Nvidia does and it didn't pay off within a few years, they would go bankrupt.

[–] [email protected] 1 points 1 year ago

Don't need to. The latest leak is that AMD is focusing on the mobile segment to push out Intel, so high-end RDNA 4 isn't needed for that effort. The latest Anti-Lag+ debacle showed it is not worth it for AMD to invest in software unless they can compete fully against Nvidia.

[–] [email protected] 1 points 1 year ago

No. At 4K, FSR Quality maybe looks only as good as DLSS Performance, and that makes a 7900 XTX effectively only as fast as a 3080.

[–] [email protected] 1 points 1 year ago

People really underestimate how big Nvidia is.
