this post was submitted on 09 Nov 2023
4 points (100.0% liked)
Hardware
47 readers
1 user here now
A place for quality hardware news, reviews, and intelligent discussion.
founded 1 year ago
I don't think there's the same level of focus there. Even when it comes to AI, AMD is pushing hard with ROCm, and progress there has accelerated rapidly; they're landing deals for their upcoming data center products. In comparison, how much revenue can gaming be expected to bring in? Especially since every gaming GPU they make sacrifices capacity that could go to a more profitable CPU or HPC/AI accelerator. And when you look at the size difference between AMD and Nvidia (Nvidia's profits are greater than AMD's entire revenue), you can see why the current situation has occurred. There's no way AMD can keep pace with Nvidia given its smaller, divided resources. Not an excuse, just the reality.
Pushing hard with ROCm?
There are millions of devs who develop for CUDA. Nvidia, I believe, has north of a thousand (can't remember if it's one or two thousand) people working on CUDA, and CUDA is 17 years old. There is SO MUCH work already done in CUDA; Nvidia is legit SO far ahead, and I think people really underestimate this.
If AMD hired, say, 2,000 engineers to work on ROCm, it would still take them maybe five years to get to where Nvidia is now, which would leave them five years behind Nvidia. Let's not even get into the magnitudes more CUDA GPUs floating around out there compared to ROCm GPUs: CUDA GPUs started being made earlier and at higher volumes, and even really old hardware is still usable for learning or a home lab. As far as I know, AMD is hiring a lot fewer people than that; they just open-sourced ROCm and are hoping they can convince enough other companies to write software for it.
I don't mean to diminish AMD's efforts here. Nvidia is certainly scared of ROCm, and I expect ROCm to make strides in the consumer market in particular as hobbyists try to get their cheaper AMD chips working with diffusion models and the like. When it comes to enterprise-facing work, though, CUDA is very, very far ahead, the lead is WIDENING, and the only real threat to that status quo is that there literally are not enough Nvidia GPUs to go around.
CUDA's moat is being undone by things like OpenAI's Triton. Soon most ML code will be written against interfaces that let any supported hardware vendor run it. AMD doesn't have to replicate all of Nvidia's work, especially when the industry has multiple giants all working on undoing Nvidia's software moat.
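To make the Triton point concrete, here's a minimal sketch of what "vendor-agnostic" looks like in practice: a vector-add kernel (the standard Triton tutorial example) written once in Python, which Triton's compiler lowers to whatever GPU backend is present, Nvidia or (with recent ROCm support) AMD. This is a sketch, not production code; it assumes `triton` and `torch` are installed and a supported GPU is available, and `BLOCK_SIZE=1024` is just an illustrative choice.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against reading past the end
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    # One program instance per block of 1024 elements.
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Notice there's nothing CUDA-specific in the source: no `__global__`, no warp intrinsics, no vendor toolchain. That's the moat-erosion argument in miniature — the kernel author targets Triton's abstractions, and backend support becomes the hardware vendor's problem rather than the ML developer's.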
Nvidia's dominance won't last forever. They have the advantage, but one day all this AI hardware and software will be commoditized.