this post was submitted on 25 Nov 2023

AMD

For all things AMD; come talk about Ryzen, Radeon, Threadripper, EPYC, rumors, reviews, news and more.

founded 11 months ago

I've been wondering when the RDNA4 cards will come out.

As MLID mentioned, RDNA 4 will come out around Q3 2024. I don't think there will be RDNA 3.5 or RDNA 3 refresh cards out next year.

I think RDNA4 will be very similar to RDNA3, apart from small architectural improvements and an updated ray tracing core.

There were rumors in 2022 that AMD had issues with TSMC's 3nm node and would be using 4nm instead. Current rumors don't say which node RDNA4 will use, but seeing that TSMC's N3E 3nm node is only now entering production and others will be using it, it makes sense that AMD has to use the 4nm node in 2024. This could let AMD release RDNA 4 before Nvidia launches its new series. So I'm thinking RDNA4 could come out at the end of May and be available in June 2024.

Further, I was looking at how big the RDNA 4 flagship chip might be in mm² and what its performance could be. Take Navi 31, which is built on the 5nm and 6nm nodes with a combined size of 530mm². The best RDNA4 part would be a chip of around 370-450mm² with 90-96 CUs like the RX 7900 series, but with a 256-bit bus, faster memory (since it will use GDDR7), and a rated TDP below 280W. I base this on the fact that TSMC's 4nm node offers only a very small improvement in transistor density, just 6% for N4 (N4X, or Nvidia's custom 4N, might be a bit more).

Looking at the 4nm node and doing the math, it's no wonder AMD can't produce a high-end GPU next year: by my math, a GPU 20-30% more performant than the RX 7900 XTX would have to be bigger than 680mm² and have a TDP of 410W. That's what the 4nm node gets you.
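The napkin math above can be sketched roughly like this (all figures are my own rough estimates, not official specs, and the linear performance-per-transistor scaling is a simplifying assumption):

```python
# Rough sketch of the scaling argument above. All figures are rough
# estimates from this post, not official specs, and performance is
# naively assumed to scale linearly with transistor count.

N31_AREA_MM2 = 530      # Navi 31 total die area (5nm GCD + 6nm MCDs)
N4_DENSITY_GAIN = 0.06  # ~6% density improvement going N5 -> N4
PERF_TARGET = 1.25      # midpoint of "20-30% faster than a 7900 XTX"
XTX_TDP_W = 355         # RX 7900 XTX board power

# More performance needs proportionally more transistors; the slightly
# denser N4 node claws back only ~6% of that extra area.
needed_area_mm2 = N31_AREA_MM2 * PERF_TARGET / (1 + N4_DENSITY_GAIN)

# With little perf/W improvement on N4, power scales with performance too.
needed_tdp_w = XTX_TDP_W * PERF_TARGET

print(f"~{needed_area_mm2:.0f} mm^2 at ~{needed_tdp_w:.0f} W")
```

This simple version lands around 625mm² and 440W; once you account for the parts of the die that don't shrink or scale (I/O, analog), you end up in the 680mm²/410W ballpark.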

But here are all the good things: the GPU, let's say it's called the RX 8800 XT, comes out in the middle of next year with 16GB of VRAM for $600, identical raster performance to the RX 7900 XTX, and somewhat better ray tracing performance.

There are two AMD patents on ray tracing that I read a few months back. The first one, published a year before the first RDNA 2 GPU, describes a ray tracing core. The second one was published in June this year and describes a GPU whose ray tracing core adds a dedicated hardware traversal engine and a specific BVH memory cache. Without going into details, from what I understand the first patent describes the ray tracing core in RDNA 2 and 3, and the second describes a similar but much improved way of doing ray tracing. I'm hopeful we will see this in RDNA4 (the patent appeared this June, and next June we should have RDNA 4 cards, so it matches the schedule prior to RDNA2). (https://www.freepatentsonline.com/20230206543.pdf)

top 50 comments
[–] [email protected] 1 points 9 months ago (1 children)

I love the smell of copium in the morning!

... no, like seriously, that is how I get by...

[–] [email protected] 1 points 9 months ago (1 children)

It's beginning to become a problem

[–] [email protected] 1 points 9 months ago

just like the state of the video games industry

[–] [email protected] 1 points 9 months ago (6 children)

There's no way we'd be getting a June release of RDNA 4 without some sort of announcement or more serious leaks. People be rumoring the RTX 5000 cards as a 2025 release, and yet we hear nothing about AMD and their next release is only 7 months out?

Your analysis about performance / size is sensible, but there's absolutely no way we would see a release so soon.

[–] [email protected] 1 points 9 months ago (2 children)

That's fair to say; I'm making a prediction here and it's pure speculation.
But the RX 5700 XT released on Jul 7, 2019, the RX 6800 XT on Nov 18, 2020, and the RX 7900 XTX on Dec 13, 2022. So it would be about a year and a half from the previous series.

[–] [email protected] 1 points 9 months ago (1 children)

It was 1.5 years for the 5000 and 6000, then 2 years from 6000 to 7000. 2 years from 7000 would be Dec 2024 or later. The announcements also come a few months before; if we were getting a card in June 2024 we'd be hearing about it by now, at least rumors if not official announcements.

An early release would be smart, especially if they don't plan to compete at the high end in the next series of cards; get the folks who'd want to upgrade now, before Nvidia releases their cards in Q1 2025.

[–] [email protected] 1 points 9 months ago

Sure, more often than not it was closer to 2 years between series releases.
I agree with you that an early release would be smart if they are not competing in the high end with RDNA4.

[–] [email protected] 1 points 9 months ago (1 children)

GFX12 was added this week. For comparison:

For RDNA3, GFX11 was added on April 29, 2022 and the cards launched Dec 13, 2022. RDNA2 (GFX1030) was added on Jun 16, 2020 and released November 18, 2020.

[–] [email protected] 1 points 9 months ago

Not even close - even a December '24 "paper" launch is wishful thinking...

Some "Super" cards or "refresh" would be possible - but 5% chance at best...

[–] [email protected] 1 points 9 months ago

There is a pretty big difference in the timeline of credible leaks for Nvidia versus AMD over the past few major releases - AMD leaks start significantly closer to the launch date. So the fact that there are no leaks for RDNA4 at this time does not mean a summer release is too soon.

Obviously for the same reason release date is pure speculation at this point.

[–] [email protected] 1 points 9 months ago

We have gotten the first "changes" in Linux drivers for RDNA4

[–] [email protected] 1 points 9 months ago

Last gen, Lovelace was rumored as late and RDNA3 was rumored early.

Let’s just say I am skeptical

[–] [email protected] 1 points 9 months ago (2 children)

Upping the CUs from 96 to 128 (and the ROPs similarly) would increase the GCD size from ~305 to ~372 mm², based on the die image (and leaving some blank space at the side), and the total to 596 mm². Whether performance would increase enough depends on the RAM bottleneck.
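Roughly, that estimate works out like this (the 305 mm² GCD figure comes from the die shot, and the fraction of the GCD that scales with CU count is a guess, not a spec):

```python
# Back-of-envelope version of the estimate above. The GCD area and the
# fraction of it that scales with CU count are guesses, not specs.

GCD_96CU_MM2 = 305           # estimated Navi 31 GCD area
MCD_PLUS_REST_MM2 = 225      # six MCDs etc., ~530mm2 total minus the GCD
SCALING_FRACTION = 0.66      # guess: share of GCD area tied to CUs/ROPs

# Shader arrays grow with the CU count; front end, display and PCIe don't.
scaled_gcd = GCD_96CU_MM2 * (SCALING_FRACTION * 128 / 96
                             + (1 - SCALING_FRACTION))
total = scaled_gcd + MCD_PLUS_REST_MM2

print(f"128-CU GCD ~{scaled_gcd:.0f} mm^2, total ~{total:.0f} mm^2")
```

With those assumptions you land at roughly 372 mm² for the GCD and just under 600 mm² total.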

It's also worth noting that RDNA 3 apparently didn't reach its expected clocks, and if AMD managed to solve that problem it would be possible to get extra performance without much (or any) extra die space.

In general, if you're just aiming to reduce chip size by removing two MCDs, that's not really much of a cost saving. It won't make the chip the mid-range chip that is rumoured.

[–] [email protected] 1 points 9 months ago (2 children)

In my opinion, RDNA3 not reaching its expected core clocks is mostly down to poor performance of the TSMC 5nm node relative to what AMD wanted to achieve. Their target might have been RTX 4090-level flagship performance, but on 5nm plus 6nm at a 355W TDP that resulted in high power consumption, and to keep power down, core clocks had to suffer.

The RX 7900 XTX Taichi model has roughly 250MHz more boost clock and roughly 50W higher power consumption than the AMD reference model. (https://www.techpowerup.com/review/asrock-radeon-rx-7900-xtx-taichi/38.html)

[–] [email protected] 1 points 9 months ago (1 children)

Thing is, people can't seem to get much clock boost that translates into performance out of the 7900 series. 3200MHz overclocking tests from AIB cards don't gain much more than 10-15%, and consume 500W+.

If you look at Time Spy results, going from a 2800-3000MHz game clock to a power-unlimited 3500MHz (which consumes over 500W), you gain a few thousand points.

Compare a random "stockish" Time Spy result with the 7900 XTX overclock from here: most of the gains came from the 6.0GHz overclock on the CPU, with the graphics score only being about 12% higher with this overclock.

Unless I've missed something, it seems like the 7900 series is limited by more than power.
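To put a number on it (the clocks and the ~12% score gain are the figures quoted above, not a benchmark I ran myself):

```python
# How much of the extra clock actually shows up as score, using the
# figures quoted above (not my own benchmark runs).

stock_clock_mhz = 2900   # midpoint of the 2800-3000 MHz game clock
oc_clock_mhz = 3500      # power-unlimited overclock, 500W+
score_gain = 0.12        # ~12% higher graphics score reported

clock_gain = oc_clock_mhz / stock_clock_mhz - 1    # ~+21% clock
realized = score_gain / clock_gain                 # fraction realized

print(f"+{clock_gain:.0%} clock -> +{score_gain:.0%} score "
      f"({realized:.0%} of the clock gain realized)")
```

Only a bit over half of the extra clock shows up as score, which is what points at a bottleneck other than power.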

[–] [email protected] 1 points 9 months ago

I don't see GPUs having standard clocks above 3GHz anytime soon. High core frequencies give a small chip much better performance, so it's cost-efficient for AMD to build them that way, but high core clocks increase power consumption. AMD is trading off chip size against frequency to get the performance.

[–] [email protected] 1 points 9 months ago

I have a Taichi on water, 3.1GHz clock - what more do you need? ~425W consumption, 2.95GHz stable game clock...

All that is before I update the BIOS to the Aqua monster... 550W+ without OC...

[–] [email protected] 1 points 9 months ago (1 children)

They could make a much larger die, if it's really just 300 mm².

[–] [email protected] 1 points 9 months ago (1 children)

AMD memes a lot but I don't think they'd just pull their pants down and shit right in everyone's faces who bought a 7900xtx that soon after.

[–] [email protected] 1 points 9 months ago (1 children)

AMD is known for screwing launch buyers by dropping prices hard when the competition doesn't allow them to keep their high prices. Look how hard and fast Zen 4 pricing fell after 13th gen launched. I know someone will think "it's good they reacted to competition", and that's true, but AMD knew a $300 6-core non-X3D wouldn't be competitive, and they didn't care and let early adopters overpay.

But I don't think AMD is going to start pricing at $600; that's too low a starting price for their "flagship". I could see $750-$800, but definitely not as low as $600.

[–] [email protected] 1 points 9 months ago

I'm honestly pretty sad there won't be an 8900 XTX or whatever. I'm not excited to see Nvidia just go apeshit with their prices.

[–] [email protected] 1 points 9 months ago

I don’t really care about a high end model being released. But I would appreciate a small generational gain with a large price cut.

Wishful thinking but we need to get those prices in check

[–] [email protected] 1 points 9 months ago (1 children)

The dust is yet to settle around RDNA3 - what's the rush with RDNA4? Also, I just bought an RDNA2 card and am having tons of fun with it.

[–] [email protected] 1 points 9 months ago (1 children)

All cards for rdna3 are out, or at least the main ones are. Where else to look but forward?

[–] [email protected] 1 points 9 months ago

Totally off topic, but maybe actually getting the features RDNA3 was supposed to get, like Anti-Lag+ or FSR3, which has been printed on boxes for months but only exists in two glorified tech demos right now.

[–] [email protected] 1 points 9 months ago

I love threads like these where people talk about hardware specifics. Idk wtf these magic words are but if they help my frames I'm all for it

[–] [email protected] 1 points 9 months ago

Well, whenever it does drop, that means I can upgrade to RDNA 3.

[–] [email protected] 1 points 9 months ago

The one thing I'm banking on is vendors will prioritize expensive high-margin AI chips over consumer GPU chips in terms of their fab allocation. Expect delays or only high-end/high-margin releases in the first wave.

[–] [email protected] 1 points 9 months ago (2 children)

I see no one coming to the conclusion that there are only leaks about N44/48 because AMD doesn't need a larger die for a multi-GCD (GPU chiplet/MCM approach) SKU.

Put two N44s together and you've got something like 80% of the equivalent of 2x 7900 XTX... quite capable of competing with a 5090, in theory.

[–] [email protected] 1 points 9 months ago

yeah I think that's clearly going to be what they do whenever they decide to do chiplets again

[–] [email protected] 1 points 9 months ago (2 children)

I see no one coming to the conclusion that there are only leaks about N44/48 because AMD doesn't need a larger die for a multi-GCD (GPU chiplet/MCM approach) SKU.

Bcuz it sounds like copium lol

Put two N44s together and you've got something like 80% of the equivalent of 2x 7900 XTX... quite capable of competing with a 5090, in theory.

Except scaling never works like that, especially not with the added problems of chiplets.

[–] [email protected] 1 points 9 months ago (1 children)

It was a spitball % guess.. CPU chiplets scale excellently (up to about 64 cores; it drops off steeply after that).. whether GPUs scale like that remains to be seen (doubtful).. but if you think AMD has gone to all this trouble to break away from monolithic designs with MCDs and GCDs and won't iterate with a multi-GCD design.. then I dunno what to tell you bro...🤷

[–] [email protected] 1 points 9 months ago (1 children)

..but if you think AMD has gone to all this trouble to break away from monolithic designs with MCDs and GCDs and won't iterate with a multi-GCD design.. then I dunno what to tell you bro...🤷

It's not as impressive as you make it out to be. Splitting the MCDs and GCDs is certainly nice, but both Intel and AMD have shown better and more advanced packaging capabilities in their GPUs - with MI300 and PVC - and the only reason those haven't come to consumers yet is chiefly cost and complexity.

However, if AMD was using something MI300-esque with RDNA 4... and it failed, then yes, it stands to reason that only the monolithic SKUs would remain.

Alternatively, the base RDNA 4 arch could just be so cooked that they thought it wasn't worth the effort of developing the more expensive and complicated chiplet SKUs.

Or who knows, maybe it's a combination of the two, or something else.

Also, the idea that AMD has N44/N48 and can just glue the two together to act as their flagship is wrong. There has to be additional interconnect logic, among other things, added to the two dies. If the chiplet dies are canned, then they would have to do expensive and time-consuming respins of their existing planned RDNA 4 dies (N44/48) for them to be usable in chiplet designs.

[–] [email protected] 1 points 9 months ago

There's nothing to say that isn't the case (as you suggested in the last part of your message).. who knows? All these leakers throw so much shit at the wall, and when tiny bits of it stick they point it out, jump up and down, and say "I told you so!", even though there's been a tonne of false leaks and misdirection that everyone has had to sift through along the way.

Take, for instance, the recent leak of Linux driver updates mentioning gfx1200: it could be a larger "N41" die, or it could be a nothing-burger.

[–] [email protected] 1 points 9 months ago

Really, no RDNA 3 refresh?

[–] [email protected] 1 points 9 months ago (2 children)

Please don't watch MLID; all his videos turn out to be inaccurate, pure copium before the actual product launches.

[–] [email protected] 1 points 9 months ago

this is not true

[–] [email protected] 1 points 9 months ago (1 children)

No, we won't see an AMD flagship, sadly.

As you already know, AMD won't compete with Nvidia's high-end 5000-series Blackwell. Sure, AMD could still release a high-end flagship GPU, like a 6950 XT or 7900 XTX, but I have high doubts tbh; I just don't see it based on the information we already have.

If anything, they would likely try to go for 3nm to incorporate AI chips like Nvidia, perhaps. But if they had the means, they wouldn't avoid competing with Nvidia's high-end 5000 series. More realistically, we will likely see a 4-5nm architecture, or at the very least a more efficient 5-6nm on an already solid 530mm².

Safe to assume though that we will get new GPUs by 2024 Q3 or Q4 regardless, the reason being that GDDR7 will be released in mid-2024, as officially announced by Micron. Some GPUs have also seen over 2 years of use since their release, so timing-wise it's ideal to start a new GPU generation. I'm pretty sure they won't bother releasing anything before the GDDR7 release, however; consumers will hold out until then.

[–] [email protected] 1 points 9 months ago

AMD won't compete with Nvidia's high-end 5000 Blackwell series

Not with RDNA4, but that doesn't mean they won't compete with the 5000 series at all; the RDNA5 high end is apparently coming out in 2025.

[–] [email protected] 1 points 9 months ago

MLID is a bullshit mill that spits out a bunch of self-contradictory guesses based on absolutely no evidence, in the hopes one of them will be close to correct.

[–] [email protected] 1 points 9 months ago (3 children)

One of the nice benefits of the Navi 31 design: AMD just needs to adapt the main Navi 31 die to the newer fabrication node, plus minor updates and polish. The I/O dies hold the memory controllers, so adapting to GDDR7 is just a matter of redesigning the I/O dies. If the RDNA4 I/O dies support GDDR7 and are compatible with Navi 31, making a Navi 31 refresh support GDDR7 won't require too much effort.

[–] [email protected] 1 points 9 months ago

That would be a good solution if the card was massively ahead of the XTX - 4090 levels at least. And it would still likely be behind in RT.

[–] [email protected] 1 points 9 months ago

As MLID mentioned, RDNA 4 will come out around Q3 2024.

Stopped reading right there

[–] [email protected] 1 points 9 months ago

If RDNA4 is just a sidegrade, anyone who has a 7900 series card won't need to care. It's expected to be a 5700 XT situation again. Good RT would be nice though, but I doubt AMD will run any RT game at 4K120 any time soon.

[–] [email protected] 1 points 9 months ago

RDNA 4 - probably -

Q4 2024 - Paper "soft" launch

Q1 2025 - General availability..

No high end for now, but that is still questionable... I really hope that AMD will use this time to build REAL RT cores that are massively better. The rest will be - nah... Blackwell will destroy AMD this time...

[–] [email protected] 1 points 9 months ago

Stopped reading at "MLID"; the guy just throws whatever info/misinfo he has at the wall and sometimes it turns out to be true.

[–] [email protected] 1 points 9 months ago

RedGamingTech says that for RDNA4, only Navi 44 and Navi 48 are rumored, maxing out at 192 bits and 64 CUs. The 7900 XTX will still be the most powerful GPU.

[–] [email protected] 1 points 9 months ago

RDNA4 will be N4-based, targeting a launch in H2 2024. We could probably see CDNA4 and RDNA4 launching in the same year, and we'll get some hints about what RDNA5 will look like from CDNA4's packaging.
