this post was submitted on 15 Nov 2023
1 points (100.0% liked)

Hardware


A place for quality hardware news, reviews, and intelligent discussion.


Just wondering, what would AMD need to do to at least MATCH Nvidia's offerings in AI/DLSS/ray-tracing tech?

you are viewing a single comment's thread
[–] [email protected] 0 points 10 months ago (30 children)

Just wondering, what would AMD need to do...

They'd have to actually spend chip real estate on DL/RT.

So far they've been talking about using DL for gameplay instead of graphics. So no dedicated tensor units.

And their RT has mostly just been there to keep up with NV feature-wise. They apparently did enhance it somewhat in RDNA3, but NV isn't waiting for them either.

[–] [email protected] 0 points 10 months ago (5 children)

Can they make dedicated tensor units, or is that patented?

[–] [email protected] 1 points 10 months ago (1 children)

AMD got a lot of AI-related IP when they acquired Xilinx. It's just a matter of them dedicating the die space to it.

[–] [email protected] 1 points 10 months ago

The die space is only one part of the puzzle. The other - AMD's Achilles' heel, no less - is software support. I mean, Phoenix has XDNA already, but from everything I've read it's a PITA to actually use and rather limited by its currently available driver API, and as a consequence it has barely any ML library/framework support as of now.

[–] [email protected] 1 points 10 months ago

"Tensor Units" are just low-precision matrix multiplication units.

[–] [email protected] 0 points 10 months ago (2 children)

They don't even need to make dedicated tensor units, since programmable shaders already have the necessary ALU functionality.

The main issue for AMD is their software, not their hardware per se.
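
To make that concrete, here's a rough sketch (NumPy again, illustrative only) of the same D = A·B + C tile written as explicit scalar multiply-adds, which is the kind of ALU work programmable shaders already execute. The math is identical to what a dedicated matrix unit computes; the difference is how many of those FMAs finish per clock.

```python
import numpy as np

# Illustrative only: the same D = A @ B + C tile expressed as explicit
# scalar multiply-adds, i.e. the kind of ALU work programmable shaders
# already handle. The math matches a dedicated matrix unit; what differs
# is throughput (one FMA per lane per step vs. a whole tile per op).
def tile_matmul_fma(A, B, C):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    D = C.astype(np.float32).copy()
    for i in range(M):
        for j in range(N):
            for k in range(K):
                # one fused multiply-add, accumulated in FP32
                D[i, j] += np.float32(A[i, k]) * np.float32(B[k, j])
    return D

A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

print(tile_matmul_fma(A, B, C))
```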

[–] [email protected] 1 points 10 months ago (1 children)

Nah, the throughput of tensor cores is far too high to compete against.
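
For a rough sense of scale, a back-of-the-envelope sketch. The per-unit figures are assumptions based on the commonly cited Volta-era design (one 4x4x4 FP16 multiply-accumulate per tensor core per clock), not a spec for any current GPU, and whole-chip ratios also depend on how many of each unit a chip ships and at what precision.

```python
# Back-of-the-envelope only. Assumes the commonly cited Volta-era figure
# of one 4x4x4 FP16 matrix multiply-accumulate per tensor core per clock;
# not a spec for any particular GPU.
fmas_per_mma = 4 * 4 * 4                    # 64 fused multiply-adds per MMA
flops_per_tensor_core = 2 * fmas_per_mma    # mul + add -> 128 FLOPs/clock

flops_per_shader_lane = 2                   # one scalar FMA per lane per clock

# Per unit, per clock. Whole-chip ratios are smaller, since GPUs ship far
# more shader lanes than tensor cores, and they depend on precision.
print(flops_per_tensor_core / flops_per_shader_lane)  # 64.0
```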

[–] [email protected] 1 points 10 months ago

Well, sure, application-specific IP is always going to be more performant. But in a pinch, shader ALUs can do tensor processing just fine. Without a proper software stack, though, the presence of tensor cores is irrelevant ;-)

[–] [email protected] 1 points 10 months ago (1 children)

This. AMD struggles to make drivers that don't crash or get you VAC banned. They're going to have to clear that bar before they can really start competing.

[–] [email protected] 1 points 10 months ago (2 children)

Those VAC bans really kind of sum up AMD's lack of ability on the software side. AMD can't ship fluid frames without literally getting you banned.

Stop for a moment and think about this: AMD can't even catch up to Nvidia/Intel, much less be at the forefront.

Really, AMD only exists so Nvidia doesn't charge $2k for a 4090… so, uh, thanks, AMD, for being a joke of a competitor but saving me $400.

[–] [email protected] 1 points 10 months ago

WTF are you talking about? What Intel GPU is better than AMD's? No one is buying Intel's trash video cards. Also, the 7800X3D is the fastest gaming chip.

[–] [email protected] 1 points 10 months ago (1 children)

AMD is better than Intel on both the GPU and CPU front, lol. Not sure what you're on.

Indeed, I think AMD has had solid GPU products over the last decade. I've had several AMD GPUs, just as I've had Nvidia ones. Just because Nvidia has been ahead for the last three years doesn't invalidate AMD. It's competition, and as long as they offer decent performance for the price, people will buy it. RDNA2/3 were definitely not bad architectures - the main gap at the moment is upscalers and frame generation, but that's also reflected in the prices Nvidia sells at.

[–] [email protected] 1 points 10 months ago

Lol, this sub has had it out for RTG for a few months now. The most ridiculous takes get upvoted.

[–] [email protected] 0 points 10 months ago (1 children)

Of course they could. Intel does on its graphics cards; Apple does on its latest silicon.

The question is whether they have the people who could develop this, whether they can and want to spend the money on it, and whether they can and want to spend the money on the software side of this as well.

Currently, it seems like they looked at it, did the math, and decided to try to get by without the effort. And to a degree that's doable: FSR2 isn't as good as DLSS, but it saves them the effort of having AI cores on the chip. Now they've done the same with frame generation. Generally, they seem to manage being only slightly worse for a lot less R&D budget.

Of course, they will never leave nVidia's shadow this way, and should Intel or nVidia ever manage to offer Microsoft and Sony an APU with more features to power the next generation of consoles, their graphics division might be well and truly fucked.

[–] [email protected] 1 points 10 months ago

Keep in mind, too, that if they didn't already make the decision to innovate and invest 4+ years ago, then any solution they come up with is still years away. Chip development is a 5+ year cycle from concept to implementation.

[–] [email protected] 0 points 10 months ago (1 children)

They have their own equivalent in the CDNA line of compute products.

They absolutely could bring matrix multiplication units to their consumer cards; they just refuse to do so.

[–] [email protected] 1 points 10 months ago

Just like they refuse to support consumer cards officially.

load more comments (24 replies)