DevAnalyzeOperate

joined 11 months ago
[–] [email protected] 1 points 10 months ago

I mean, it just seems sort of insane to spend 3nm silicon, the same node companies like Nvidia are desperate to use for GPUs costing tens of thousands of dollars at margins well over 500%, on… selling a smart TV box? Saving a few dimes on bandwidth for the minority of users who are Apple TV+ subscribers? Helping push games that most Apple TV users don't play?

I wouldn't be shocked if the last-gen chip could just brute-force AV1 decode in software either. It just seems hilariously uneconomical.

[–] [email protected] 1 points 10 months ago (3 children)

You would think Apple would be pushing AV1 to reduce its bandwidth spend for Apple TV+, but the A17 Pro seems like an absurdly expensive chip to put in an Apple TV?
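Just to put rough numbers on the bandwidth angle, here's some napkin math. Every figure below is my own assumption for illustration, not anything Apple has published:

```python
# Back-of-envelope estimate of what AV1 could save on streaming delivery.
# All bitrates and viewing hours are assumed, not measured.

HEVC_4K_MBPS = 25.0   # assumed average 4K HEVC streaming bitrate
AV1_SAVINGS = 0.30    # assumed ~30% bitrate reduction at similar quality
HOURS_WATCHED = 2.0   # assumed hours streamed per subscriber per day

def gb_per_day(bitrate_mbps: float, hours: float) -> float:
    """Convert a streaming bitrate into gigabytes delivered per day."""
    return bitrate_mbps / 8 * 3600 * hours / 1000

hevc = gb_per_day(HEVC_4K_MBPS, HOURS_WATCHED)
av1 = gb_per_day(HEVC_4K_MBPS * (1 - AV1_SAVINGS), HOURS_WATCHED)
print(f"HEVC: {hevc:.1f} GB/day, AV1: {av1:.1f} GB/day, "
      f"saved: {hevc - av1:.1f} GB/day per heavy viewer")
```

Even under those generous assumptions it's a handful of GB per heavy viewer per day, which is real money at Apple's scale but still a strange thing to burn a flagship phone SoC on.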

I've always been surprised at how often Apple iterates this product and changes nothing of substance.

[–] [email protected] 0 points 10 months ago (3 children)

Intel's accelerator strategy and focus on memory bandwidth are paying off hugely.

It's the first time in a while I've seen Intel execute something well and catch AMD with their pants down, despite Sapphire Rapids being a lemon in most respects.

[–] [email protected] 1 points 10 months ago

If you're doing small to moderate transfers, why do you need 8TB of capacity? Why wouldn't you use, totally serious here, a 1TB USB stick, which is going to go just as fast but cost a lot less and be smaller?

I just don't see the product-market fit here. I don't know why this product exists, other than an engineer at Samsung deciding they could do it, so damn it, they were going to do it. It seems to either be outperformed at the same price or have an equivalent that gets the same job done for a lower price.
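Rough napkin math on why the capacity and the interface speed have to match up. The sustained write speeds below are assumed ballpark figures, not benchmarks of any specific drive:

```python
# How long it takes to actually fill different portable drives.
# Speeds are assumed round numbers for illustration only.

def hours_to_fill(capacity_tb: float, write_mb_s: float) -> float:
    """Time in hours to write the full capacity at a sustained speed."""
    return capacity_tb * 1_000_000 / write_mb_s / 3600

drives = {
    "1TB USB stick (~400 MB/s assumed)": (1, 400),
    "8TB portable SSD (~1000 MB/s assumed)": (8, 1000),
    "8TB USB hard drive (~180 MB/s assumed)": (8, 180),
}

for name, (tb, speed) in drives.items():
    print(f"{name}: {hours_to_fill(tb, speed):.1f} h to fill")
```

If you're only ever writing a fraction of the capacity, the cheap small drive wins; if you're regularly writing anywhere near 8TB, you're already looking at hours of transfer time and probably want a faster interface anyway.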

[–] [email protected] 1 points 10 months ago (1 children)

I GUESS just because affordable 8TB QLC M.2 SSDs aren't really a thing, but it still sounds mad stupid to me compared to using a 4TB QLC M.2 SSD like the Crucial P3, which is fuck cheap. Or even stepping up to a 4TB Teamgroup MP34.

How many fucking console games do people own? If you have enough to fill up 4TB, I'd say you should have bought a PC a while ago.

[–] [email protected] 1 points 10 months ago (6 children)

I genuinely have zero idea what the market is for giant portable drives that can't read/write quickly but are more expensive than spinning rust. The nature of these portable drives is that either you're writing just a little data to them, so you don't need much storage, or you're writing a ton of data to them and probably want to run at TB3 speeds or better.

[–] [email protected] 1 points 10 months ago

36GB of VRAM on the 384-bit bus would be fantastic, yet I'm somehow sceptical when Nvidia sells the 48GB A6000 at a $6,800 MSRP. Even without benefits like NVLink, a 36GB card ought to cannibalise Nvidia's productivity cards quite a lot. I don't think Nvidia would actually be TOTALLY opposed to this if they could produce enough 5090s to not sell out of them, since it would help entrench Nvidia's CUDA moat, but I don't think Nvidia is going to be capable of pulling that off.

It's not impossible we see a 36GB 5090 and a 72GB A7000 or whatever. I'm just not holding my breath, especially when AMD doesn't seem to have much in the pipeline to even compete with a 24GB model.
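For anyone wondering where numbers like 24/36/72GB come from, it's just bus width times memory die density. The 3GB GDDR7 dies below are an assumption about what becomes available, not a confirmed spec:

```python
# VRAM capacity falls out of bus width and GDDR die density:
# each GDDR die occupies a 32-bit channel on the memory bus.
BITS_PER_DIE = 32

def vram_gb(bus_width_bits: int, die_gb: int, clamshell: bool = False) -> int:
    """Total VRAM given bus width, per-die capacity, and clamshell mounting."""
    dies = bus_width_bits // BITS_PER_DIE
    return dies * die_gb * (2 if clamshell else 1)

print(vram_gb(384, 2))                  # 24 GB: 384-bit bus with today's 2GB dies
print(vram_gb(384, 3))                  # 36 GB: same bus with assumed 3GB dies
print(vram_gb(384, 3, clamshell=True))  # 72 GB: dies on both sides of the PCB
```

Clamshell mounting (dies on both sides of the board) is how the workstation cards double capacity on the same bus, which is exactly why a 36GB consumer card would step on their toes.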

[–] [email protected] 1 points 10 months ago (16 children)

I honestly don't know how well a 24GB 5090 will move, no matter how fast it is. I feel like gamers will go for stuff like the 4080 Super, 4070 Ti Super, or next-gen AMD, and for productivity users there's the 3090, 4090, and A6000.

Maybe I'm wrong and the card doesn't need to be very good to sell, because GPUs are in such hot demand right now.

[–] [email protected] 1 points 10 months ago

They can technically make the emulator, and they have. It is hard to think of a company more qualified to do so than Microsoft; frankly, they're more equipped than Apple is.

The broader problem Microsoft has is that Apple has set the expectation that they don't do legacy support, that they will change things and their customers will pay the cost. So Apple can just straight up say "in two years, we won't sell computers that use x86 anymore, transition now," and everybody does it, and they only see higher sales.

Microsoft is a company people use because they have outstanding legacy support and save their customers money by supporting ten-year-old line-of-business applications at their own expense. If they move off x86 the same way Apple did, they will bleed customers to Linux/ChromeOS/macOS/Android/iPadOS etc., so they're essentially forced to support ARM and x86 concurrently. That results in every developer going "Well, more people are using x86 and far fewer are using ARM, so I'll just develop for x86 only and ARM users can emulate." This leaves the ARM experience shit, but there's nothing Microsoft can do about it, even though not transitioning more forcefully will kill Windows market share in the long term. It's just not worth it to force things, especially since Windows is doomed to die in slow motion regardless.

[–] [email protected] 1 points 10 months ago (1 children)

Pushing hard with ROCm?

There are millions of devs who develop for CUDA. Nvidia, I believe, has north of a thousand people (can't remember if it's one or two thousand) working on CUDA, and CUDA is 17 years old. There is SO MUCH work already done in CUDA; Nvidia is legit SO far ahead, and I think people really underestimate this.

If AMD hired, say, 2,000 engineers to work on ROCm, they would still take maybe five years to get to where Nvidia is now, and then still be five years behind Nvidia. Let's not even get into the orders of magnitude more CUDA GPUs floating around out there compared to ROCm GPUs, since CUDA GPUs started being made earlier and at higher volumes, and even really old ones are still usable for learning or a home lab. As far as I know, AMD is hiring far fewer than that; they just open-sourced ROCm and are hoping they can convince enough other companies to write stuff for it.

I don't mean to diminish AMD's efforts here; Nvidia is certainly scared of ROCm, and I expect ROCm to make strides in the consumer market in particular as hobbyists try to get their cheaper AMD chips to work with diffusion models and whatever. When it comes to more enterprise-facing stuff, though, CUDA is very, very far ahead, the lead is WIDENING, and the only real threat to that status quo is that there literally are not enough Nvidia GPUs to go around.
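To be fair to AMD on the hobbyist side, the ROCm builds of PyTorch deliberately reuse the torch.cuda namespace, so a lot of existing code runs unchanged. A minimal sketch, assuming a ROCm-enabled PyTorch install:

```python
# Minimal device-agnostic PyTorch snippet. On ROCm builds of PyTorch,
# torch.cuda.is_available() reports True for supported AMD GPUs too,
# which is exactly the compatibility story AMD is betting on.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = x @ x  # the matmul runs on the GPU, whether it's CUDA or ROCm underneath
print(device, y.shape)
```

That covers the diffusion-model hobbyist nicely; it's the long tail of hand-tuned CUDA kernels and libraries in enterprise code where the gap really shows.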

[–] [email protected] 1 points 10 months ago

Nvidia is all hands on deck, pedal to the metal, trying to stop AMD from gaining any market share right now. They're worried about becoming a victim of their own success, with the AI boom allowing AMD to gain a foothold in AI through its open-source strategy. They're also worried about Intel and Google for similar reasons.

Nvidia is quite the formidable foe, especially compared to Intel, and they have a massive head start with a LOT of advantages beyond merely having the best hardware on the market, but I'm still a bit bullish on AMD's chances here.

[–] [email protected] 1 points 10 months ago

Their software is behind in the consumer market and WAY behind in the enterprise market.

Nvidia hitting production bottlenecks should help AMD move a few GPUs in enterprise, though, so long as they don't throw away their opportunity here.
