No, at 4K, FSR Quality maybe looks only as good as DLSS Performance. That makes a 7900 XTX effectively only as fast as a 3080.
No because Nvidia has software companies on their side and that proprietary software is simply more developed than that for AMD.
Yes because what Nvidia is doing isn’t super special. Of course AMD will have an equivalent or better solution, so the question really should be “how many years behind” will AMD be.
They closed the gap significantly in raster perf. Power efficiency is pretty close, and so is area efficiency. AI is mostly a software problem and AMD aren't blind to this; they are very clearly investing a ton more into software to close this gap. (They just bought Nod.ai and absorbed all their talent.)
The hardware is arguably better in many aspects. MI200 and MI250 are HPC monsters, and MI300 is a chiplet packaging masterpiece that has HPC performance on lockdown.
There’s a reason that no new HPC supercomputers are being announced with Nvidia GPUs.
Nvidia has the lead in AI, AMD has the lead in HPC. Nvidia has the lead in area efficiency, AMD has the lead in packaging expertise (which means they can throw a ton more area at the problem at the same cost as Nvidia).
Of course, they could. Their hardware isn't that bad; they are closer than anybody else. Their software stack is another story. AMD has been promising to do a better job at that for more than a decade. I don't really trust their commitment to their software stack anymore. Actually, Intel might overtake them in that regard.
Intel is actually closer than AMD. Apparently so is Apple
And they need an answer to DLSS.
A company that focuses entirely on GPUs vs. a company that works on CPUs, GPUs, and lots of other stuff. Can't say this comparison is even fair.
I do expect AMD's RT/AI to be behind NVIDIA's. That's pretty reasonable.
Also, Intel Arc is much worse off right now; they still don't have a high-end model that can even compete with the 7900 XTX or 4070 Ti.
FSR 3!…they need to try hard to get ray tracing into their GPUs.
It will take a while, but ray tracing will become as low-cost and standard as anti-aliasing; probably by around 2040 both will have indistinguishable ray tracing performance. By that time games will be completely AI designed and generated using things like Gaussian splatting, and AMD will completely miss out on Gaussian splatting development because it's a generalist company whose graphics department is subservient and less well managed.
Can they? Of course they can.
Will they? Unlikely.
Unless there comes a moment where development stalls, I'd expect Nvidia to remain a step ahead.
It would be crazy to take the foot off the gas on the CPU/SoC side while Intel is imploding; AMD inherits the x86 market by default. NVIDIA is on top of their game in a product segment that AMD has neglected to invest in for a long time. Yeah, do whatever the consoles want, ship the MVP of that as a dGPU, and stick it in your iGPUs.
The first thing is that AMD doesn't build chips as big as Nvidia's. In raster, RDNA3 is extremely good against Ada shader-count-wise. As for RT, you can also think of them as comparatively better at raster instead. Double the N31 GCD and you'd have a chip that is best in raster and sits between the 4080 and 4090 in RT.
The bigger problem is software, and I doubt AMD will match Nvidia there. The current AAA PT games are done with Nvidia support, and while it's not Nvidia-locked, it'd be great if Intel/AMD optimized for it or got their own versions out.
The path tracing updates to Portal and Cyberpunk put up quite poor numbers on AMD and also on Intel. The Arc A770 goes from being ~50% faster than a 2060 to the 2060 being 25% faster when you change from RT Ultra to Overdrive. This despite the Intel cards' RT hardware, which is said to be much better than AMD's, if not at Nvidia's level.
The later path tracing updates to the classic Serious Sam and Doom games had the 6900 XT close to 3070 performance. Earlier this year I benched a 6800 XT vs a 4090 in the old PT-updated games and in heavy RT games like the updated Witcher 3 and Cyberpunk, and the 4090 was close to 3.5x the 6800 XT. The 7900 XTX should then be about half of the 4090's performance in PT, as it is in RT-heavy games.
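To make the relative swings in those numbers explicit, here's a quick back-of-the-envelope calculation in Python using only the figures quoted above (these are the comment's ratios, not new benchmarks):

```python
# A770 vs 2060: ~50% faster at RT Ultra, but the 2060 is 25% faster at Overdrive.
rt_ultra_ratio = 1.50        # A770 / 2060 at RT Ultra
overdrive_ratio = 1 / 1.25   # A770 / 2060 at Overdrive
print(rt_ultra_ratio / overdrive_ratio)  # ~1.9x relative swing toward the 2060

# 4090 is ~3.5x a 6800 XT in heavy RT/PT; if a 7900 XTX is ~half a 4090,
# that would put it around 1.75x a 6800 XT in the same titles.
print(3.5 * 0.5)  # 1.75
```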
Shader-count-wise it's really not a win. They stopped counting the physically present dual-issue (ILP) shaders because of their poor scaling. Shaders don't exist in a vacuum though; the smallest execution unit for both AMD and Nvidia is the CU/SM, and AMD currently needs more CUs to match the equivalent Nvidia SMs, which was not the case last gen.
Ray tracing, probably. AI, probably not. What AMD can provide is better value. They just don't have the manpower or money to beat Nvidia at this point. Nvidia could probably release new products twice as fast, which is kind of what they're doing now with how many people they have.
Can they? Yes, will they? Nvidia hopes not, so they will do whatever they can so it doesn't happen. Which is a good thing.
Oh of course, silly me, we need Nvidia to sit still so AMD can blow past them and be the best CPU and GPU combo in the market. Great for customers!
AMD has to catch up to Intel first. ARC has better hardware support for this stuff
Similar questions were probably asked about AMD vs Intel when it came to CPUs not long ago. Still a big market share gap there, though.
It would take Nvidia really stumbling, or something happening to Jensen. I think with the way Nvidia is run, someone like Jensen is a necessity / the kingpin in the machine (he has 40+ direct reports).
Intel stumbled really hard, which gave AMD a golden opportunity and let them build up a bit of a war chest they are only now able to reap the benefits of.
I have a 4080 and never use DLSS or ray tracing; those features are not worth it and will die soon, I'm telling you. At the end of the day there's nothing better than playing games in their most original form.
Just wondering, what would AMD need to do..
They'd have to actually spend chip real estate on DL/RT.
So far they've been talking about using DL for gameplay instead of graphics. So no dedicated tensor units.
And their RT has mostly just been there to keep up with NV feature-wise. They did enhance it somewhat in RDNA3, apparently. But NV isn't waiting for them either.
Can they make dedicated tensor units, or is that patented?
"Tensor Units" are just low-precision matrix multiplication units.
They have their own equivalent in the CDNA line of compute products.
They absolutely could bring matrix multiplication units to their consumer cards, they just refuse to do so.
Just like they refuse to support consumer cards officially
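For illustration, here is a minimal Python/NumPy sketch of what such a unit computes: a small matrix multiply on low-precision (FP16) inputs accumulated at higher precision (FP32). The 16x16 tile size is just an example, not any specific hardware's spec, and real hardware does this as one fused operation.

```python
import numpy as np

# Two low-precision input tiles and a higher-precision accumulator tile.
a = np.random.rand(16, 16).astype(np.float16)
b = np.random.rand(16, 16).astype(np.float16)
c = np.zeros((16, 16), dtype=np.float32)

# The multiply-accumulate step a matrix unit performs in hardware,
# spelled out in software: FP16 x FP16 products summed into FP32.
c += a.astype(np.float32) @ b.astype(np.float32)
print(c.dtype, c.shape)  # float32 (16, 16)
```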
Ray tracing and DLSS 2/3, most probably yes. They are behind in ray tracing simply because they use compute shaders for it. Once they get full-blown ray tracing cores, I think they will perform similarly.
Same with DLSS 2/3. There is precedent for this: XeSS was able to get pretty close to DLSS within a year. All AMD has to do is use machine learning for upscaling.
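As a rough illustration of what "use machine learning for upscaling" means, here is a minimal PyTorch sketch of a learned upscaler. The TinyUpscaler name and layer sizes are made up for this example and say nothing about DLSS/XeSS/FSR internals; real temporal upscalers also consume motion vectors and previous frames.

```python
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Toy super-resolution net: extract features, then sub-pixel shuffle."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # channels -> higher resolution

    def forward(self, low_res_frame):
        return self.shuffle(self.body(low_res_frame))

frame = torch.rand(1, 3, 540, 960)    # a 960x540 input frame
print(TinyUpscaler()(frame).shape)    # torch.Size([1, 3, 1080, 1920])
```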
The biggest issue for AMD is not these features, but the fact that NVIDIA keeps introducing new ones. So every time AMD catches up on one, Nvidia will probably have introduced two more.
And much of this is because AMD has been a follower, not a leader despite having the market share in consoles.
Nvidia has gone from "cool features but not many use them" to "gamers want these extra features" in only a few years. That doesn't happen if you are happy to just follow.