this post was submitted on 23 Nov 2023

AMD


For all things AMD; come talk about Ryzen, Radeon, Threadripper, EPYC, rumors, reviews, news and more.

founded 11 months ago
[–] [email protected] 1 points 10 months ago (6 children)

No 256-bit? If they really don't care about the high end, I will likely have to buy a 4090 when Blackwell launches, because it would be the only upgrade from my 7900 XTX. Buying Blackwell is unlikely to be an option because it will be horribly overpriced, especially if AMD starts to suck again. The XTX feels like AMD finally has a hold on the high end again. It doesn't need to outperform NVIDIA everywhere; it needs to beat NVIDIA where they don't compete: pricing.

[–] [email protected] 1 points 10 months ago (1 children)

Probably not needed with the faster gddr7

[–] [email protected] 1 points 10 months ago (1 children)

Yes, a 192-bit bus will be non-competitive even with GDDR7.

[–] [email protected] 1 points 10 months ago

GDDR7 should be 50% faster, but yes, it's likely reaching similar bandwidths. Unfortunately it's a trend now to reduce bus width. 256-bit GDDR7 is likely almost 1 TB/s.

[–] [email protected] 1 points 10 months ago (3 children)

From rumours there will be a high-end N31 refresh as the RX 8800 that should be around 20% faster than the 7900 XTX. As a refresh that does sound impressive; as a competitive graphics card it's depressing. Let's hope Nvidia won't go crazy with prices.

[–] [email protected] 1 points 10 months ago

Literally haven't heard a single thing about RDNA 3 refreshes, and with RDNA 4 probably launching end of next year, I really don't see the point.

[–] [email protected] 1 points 10 months ago

Those were the rumors around launch, saying the flagship dies had issues and that they were going to be refreshed later for big performance gains, but I haven't heard that rumor ever since. If AMD really had that much performance left on the table, I think they would've pushed to have the cards out this holiday season, and not wait until 2024, but I don't think those rumors are true.

[–] [email protected] 1 points 10 months ago

We can't say until we know when the cards launch. Right now, the low end has barely been updated since 2021. Navi 33/7600 is a very small update of Navi 23/6600 XT. It moved to 6nm, but there are no other relevant changes, because AMD did not increase the CU count and has utterly failed to make use of the dual-issue shaders in RDNA 3. That tier is thus far more important to update right now. Say that AMD does that in early to mid 2024 and updates the high end in 2025. There is a new "7600 XT" or "8600" with 48 CUs to fill the gap between the 7700 XT and 7600, meeting the 4060 Ti more closely - that's great. When Blackwell launches in early 2025, AMD can be ready to update the high end at the same time. They have done that before - after all, RDNA 2 was the only recent time that they did the entire line in one generation (RDNA 3 has only done 3 cards; Navi 24 is a holdover).

Not sure I think that that is particularly likely, but please remember the timeline when people talk about RDNA 4. Without release dates, those predictions are worthless.

[–] [email protected] 1 points 10 months ago (1 children)

You don't have to buy anything next gen if the performance of the 7900 XTX is enough. Unless you want the best RT performance, but if you wanted that you wouldn't have bought AMD to begin with. They can keep raising prices because of consumerism: if you always go buy the next thing, they can keep increasing prices because you'll just keep buying.

[–] [email protected] 1 points 10 months ago

More performance is always good. RT is more like a bonus, I would like it, but I won't pay double to get it. I'm running 4K120.

[–] [email protected] 1 points 10 months ago

First of all, those are rumors, and given that the leaker doesn't even know if it's 128-bit or 192-bit this late in the game - when the chip has been in development for 3 years and is 10 months from release - the leaks are pretty much completely made up. RedGamingTech has a pretty bad leak accuracy record.

That being said, if it's targeting 7900 XT to 7900 XTX performance and it's using GDDR7, then 192-bit makes sense. It works out to about 7900 XT memory bandwidth in total if you do the math at 34 Gbps. Currently GDDR7 is aiming for 32 to 36 Gbps.
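The bus-width math in this thread is easy to check: peak bandwidth is the bus width in bytes times the per-pin data rate. A quick sketch (the GDDR7 data rates are the rumored figures from this thread, not confirmed specs):

```python
def bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# 7900 XT today: 320-bit GDDR6 at 20 Gbps
print(bandwidth_gbps(320, 20))  # 800.0 GB/s

# Rumored 192-bit GDDR7 at 34 Gbps lands right at 7900 XT level
print(bandwidth_gbps(192, 34))  # 816.0 GB/s

# 256-bit GDDR7 at 32 Gbps: the "almost 1 TB/s" figure mentioned earlier
print(bandwidth_gbps(256, 32))  # 1024.0 GB/s
```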

[–] [email protected] 1 points 10 months ago (1 children)

Why tf are you trying to upgrade when you've got an XTX?

[–] [email protected] 1 points 10 months ago

I would upgrade if the expense were reasonable, but there's only one game where I could use more performance, and that is Darktide. Otherwise I wish FSR3 would become a thing, but just like DLSS it still lacks a solution for updating older games.

[–] [email protected] 1 points 10 months ago (4 children)

So the leakers were correct.

Means the top-end Blackwell offerings will cost an arm and a leg. F for the consumers. That's if Nvidia even puts out a 102 die as the 5090, given the run on AI and just how insane the margins are there. If they are limited by fab capacity, they might just pull another 4070 and sell us a 103 die as the 5090 and force it down our throats.

[–] [email protected] 1 points 10 months ago (3 children)

I'm fine with it. Consumers reap what they sow, basically. AMD is likely gonna drop high end GPUs in general, if not dedicated GPUs completely.

[–] [email protected] 1 points 10 months ago (3 children)

Which is hilarious because 7900 XT(X) is the highest selling high end GPU they've ever made afaik

[–] [email protected] 1 points 10 months ago (3 children)

From the last few gens? Not even close to best. Certainly not more than Radeon 9000 series, HD4000 series, HD5000 series, HD7000 series, and R9 200 series.

7900 XTX, one year later is at 0.19% on Steam.

A year after 6900 XT released, 6900 series was at 1.19%.

Given that their last high end before the 6900 was the 390X, for which there is no Steam hardware survey data, but the 7970 was ahead... yeah, not even close.

[–] [email protected] 1 points 10 months ago (1 children)

Navi 21 wasn't high end, GA102 was just weak. Also, the 6900 XT was not at 1.19% in late 2021, that's 🧢

[–] [email protected] 1 points 10 months ago

A 520 mm² die on 7nm is pretty high end

[–] [email protected] 1 points 10 months ago (2 children)

Their last "high end" before the RX 6900 XT was the Vega VII and before that Vega 64, yeah they only were able to compete with the RTX 2080 and GTX 1080 respectively but so does the RX 7900 XTX that can only compete with the RTX 4080...

PS: Also, the high-end AMD GPU before Vega 64 was the R9 Fury X (the R9 390X was a 290X refresh that launched in the same period). It was quite competitive with the GTX 980 Ti, but its 4 GB of HBM and the necessity of water cooling limited its sales...

[–] [email protected] 1 points 10 months ago (2 children)

forgot about vega's existence tbh

[–] [email protected] 1 points 10 months ago (2 children)

Sadly they weren't that impactful besides the Vega 56 - it competed very well with the GTX 1070, and Nvidia launched the GTX 1070 Ti because of it - but they consumed too much power at stock because of overvolting, and they launched way too late...

[–] [email protected] 1 points 10 months ago

Yep, they were honestly decent cards, but wrong time to launch. Same issue nVidia had with 400 series, without the whole "overpriced to fuck, trying to scam customers" kinda deal they had with the benchmarking requirements for reviewers.

[–] [email protected] 1 points 10 months ago (1 children)

loved my Vega 56, it performed well, and a little undervolting fixed the power issue big time... didn't feel like I was missing out for "not buying Nvidia"

[–] [email protected] 1 points 10 months ago

It really was a good GPU; sadly the first impression Vega gave wasn't good, stacked with being a year late, overvolted, and barely able to reach the GTX 1080 at launch...

I would've gotten one if they weren't so uncommon in my country; even Navi was a lot easier to find on the used market, so I ended up getting an RX 5700 that's serving me very well!

[–] [email protected] 1 points 10 months ago

Looks sadly at radeon VII by my feet

[–] [email protected] 1 points 10 months ago (1 children)

quite competitive with the GTX 980 Ti

The Fury X was an instant no-buy for high-end 4K gamers, due to the measly 4GB of VRAM.

Just as the RTX 4080 should be a no-buy for high-end 4K gamers, due to the measly 16GB of VRAM. In a year's time, AAA RT-enabled games will suck up >16GB at 4K.

[–] [email protected] 1 points 10 months ago (1 children)

Again, the Steam numbers aren't accurate, as the data is gathered from a pool of people who opt in to the survey. That pool could be 500 or 5,000 people; we wouldn't know.

[–] [email protected] 1 points 10 months ago

Steam is also heavily biased towards Nvidia users. I'd like to see stats which discount China, which is flooded with Nvidia GPUs, especially in their internet cafes. The other issue is that Steam seems to count the same cafe PC twice, if two survey opted-in gamers log onto that same PC.

[–] [email protected] 1 points 10 months ago (1 children)

That’s not backed by AMD financials or marketshare

[–] [email protected] 1 points 10 months ago (1 children)

To be fair "highest selling high end GPU ever" for AMD is still not a lot compared to NV or their own midrange stuff.

[–] [email protected] 1 points 10 months ago

Sorry, I misread. I thought you said highest selling GPU which is what I have also read elsewhere.

Seems to me 7800XT is their best performer but not sure

[–] [email protected] 1 points 10 months ago (2 children)

I mean, we can debate "high-end". By RDNA 5, we should have 4K @ 120 fps as a baseline for all dedicated GPUs. Where do you go after that in consumer GPUs?

While there will always be a small enthusiast market for super-high-end GPUs, I'm not sure the mainstream will be interested in pushing 240 FPS. Maybe Nvidia sees the writing on the wall, which is why they're pivoting away from consumer-focused GPUs.

And if AMD continues to serve us solid $300-600 dGPUs until then, I think that's still a win. I don't think the market for >$1000 dGPUs is that large anyway.

[–] [email protected] 1 points 10 months ago

Yes, but also this is why NVidia pushes raytracing.

[–] [email protected] 1 points 10 months ago

I mean, all of that assumes requirements won't keep increasing. Raytracing just artificially increases the performance requirements once you start getting to the top of what's possible. The same will be done once RT is getting capped out.

[–] [email protected] 1 points 10 months ago

I doubt this. They just might drop TSMC for them and go with Samsung. TSMC is way too expensive for big dies given the reluctance people have to pay similar prices to Nvidia for similar performance from AMD. At the end of the day, AMD doesn't have a significant edge from a cost perspective. Chiplet benefits are cool, but AMD needs an interposer so the cost advantage might not be as impressive and the 6nm dies might not be as cheap given that they're still manufactured on TSMC.

So TL;DR: I think they might shift their focus away from the higher-end market, but I doubt they will abandon it entirely. They might just take a break from it.


[–] [email protected] 1 points 10 months ago (1 children)

So the leakers were correct: No top end RDNA4 cards (at least on launch).

Where did you read that in the article?

[–] [email protected] 1 points 10 months ago (1 children)

It's not confirmed verbatim, but it does match the leaks to the degree that, instead of a halo Navi 41/N4C die for a top-end 8900 XTX (though they might still give another chip that name), they will launch two different monolithic dies, but quicker and closer to each other time-wise.

Since this is such a stark change of release cadence compared to N21 and N31, this points to the leaked release strategy being correct.

[–] [email protected] 1 points 10 months ago

So the leakers were correct: No top end RDNA4 cards (at least on launch).

Remember the claims from MLID/RGT/etc. when AMD releases a halo desktop RDNA4 GPU... that's unless they delete those particular videos.

[–] [email protected] 1 points 10 months ago (2 children)

Well, maybe the mid-range is where they're gonna strike gold, so it's smart to focus your resources and scale back the product stack. AMD always operates on value above all else, so making a product exclusively designed for that could be good.

[–] [email protected] 1 points 10 months ago

Makes sense with their focus on mobile, with stuff like Strix incoming.

[–] [email protected] 1 points 10 months ago (1 children)

What is “mid-end”? 7800XT replacement?

[–] [email protected] 1 points 10 months ago (3 children)

There were the same leaks before RDNA 2, showing that AMD wouldn't do better than a 2080ti and we still had a 6900xt... So I'm waiting for the official release to get an idea.

[–] [email protected] 1 points 10 months ago (2 children)

The only hint people had for that was the bus width, which was indeed 256-bit. It's just that people couldn't fathom that AMD had a fast GPU with so little memory bandwidth.

I'm not sure this leak shares anything that would tell us what performance target this GPU could belong to.

[–] [email protected] 1 points 10 months ago (2 children)

It wasn't about bus width, it was about nVidia's fictitious CUDA core counts with Ampere.

At the last minute, the 4352 CUDA cores of the 3080 (same as the 2080 Ti) were changed to 8704 "CUDA cores", because the INT32 ALU was replaced with a dual-function INT32/FP32 ALU. People who didn't understand that (i.e., basically everyone who didn't call out nVidia's dishonest marketing of those figures) thought, from the leaks, that it'd be 8704 shaders against 4608 shaders. It wasn't. It was more like ~5200-5400 effective shaders, depending on resolution, against 4608, with the latter running at a substantially higher clock speed.

Ironically, the reverse happened with RDNA 3, as the values leaked were incorrect - they said 12,288 ALU's for Navi 31, without mentioning that it was really 6144 FP32 ALU's with 6144 INT32/FP32 ALU's that could be partially used. So people thought it was 12,288 on the side of Navi 31 versus 16,384 on the side of the 4090, with those numbers meaning the same as they did with 5120 for the 6900 XT versus 10,496 for the 3090. But they didn't mean the same thing at all. It was ~7400 effective shaders for Navi 31 versus ~10,240 effective shaders for the 4090. With no real clock speed advantage.

As it turns out, the 4090 scales pretty poorly though, so it's not as far ahead of the 7900 XTX as it should be based on raw compute.
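The "effective shaders" accounting above can be sketched with a toy model: count the dedicated FP32 ALUs fully, and count the dual-function INT32/FP32 ALUs only for the fraction of time they can actually issue FP32 work. The fractions below are illustrative guesses picked to land near the estimates in this comment, not measured values:

```python
def effective_shaders(dedicated_fp32: int, flexible: int, fp32_fraction: float) -> float:
    """Toy model of effective FP32 shader count: flexible INT32/FP32 ALUs
    only contribute for the fraction of time they issue FP32 work."""
    return dedicated_fp32 + flexible * fp32_fraction

# RTX 3080 (Ampere): 4352 dedicated FP32 + 4352 flexible ALUs, marketed as 8704.
# Assuming ~25% of the flexible ALUs' time is FP32-useful (illustrative guess):
print(effective_shaders(4352, 4352, 0.25))  # 5440.0, near the ~5200-5400 estimate

# Navi 31 (RDNA 3): 6144 FP32 ALUs + 6144 dual-issue slots, marketed as 12288.
# Assuming ~20% dual-issue utilization (illustrative guess):
print(effective_shaders(6144, 6144, 0.20))  # 7372.8, near the ~7400 estimate
```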

[–] [email protected] 1 points 10 months ago

There were also posts about Big Navi and Nvidia running scared, and the 6800 XT outperforming the 3090 (AMD really fed that idea before launch).

[–] [email protected] 1 points 10 months ago (1 children)

There were also rumors claiming RDNA 3 would be when AMD surpassed Nvidia. Unfortunately it was a regression in competitiveness.

Leaks about the structure of a GPU or CPU, the cores, memory type, architecture used, are typically more reliable far ahead of release, but the leaks about performance are almost never reliable until a couple months before release.

[–] [email protected] 1 points 10 months ago

Performance leaks more than a few months prior to launch will at best be targeted performance and often it could be in a very simple metric like TFlops.

The 2.5x 6900 XT performance claim did not hold up in terms of FPS uplift, but it did in terms of TFLOP uplift. Even among people who did understand it as a TFLOP increase, the expected FPS improvement from such a huge jump was higher than what we got.
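For reference, the on-paper TFLOP math behind a ~2.5x claim, using public boost clocks and counting RDNA 3's dual-issue as doubled ops per clock (the generous, marketing-style assumption):

```python
def tflops_fp32(alus: int, ops_per_clock: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS: ALU count * FP32 ops per ALU per clock * clock (GHz)."""
    return alus * ops_per_clock * clock_ghz / 1000

rdna2 = tflops_fp32(5120, 2, 2.25)  # 6900 XT: FMA = 2 ops/clock, ~23.0 TFLOPS
rdna3 = tflops_fp32(6144, 4, 2.5)   # 7900 XTX counting dual-issue: ~61.4 TFLOPS
print(rdna3 / rdna2)                # ~2.67x on paper, not reflected in FPS
```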
