[–] [email protected] 1 points 9 months ago (3 children)

Apparently AMD significantly outperforms Nvidia in the specific calculations used by nuclear weapons simulation software.

[–] [email protected] 1 points 9 months ago (1 children)

AMD is better at FP32 and FP64.

Around 2017, Nvidia and AMD bet on different priorities with their data centre cards.

AMD went all in on compute, with FP32 and FP64.

Nvidia went all in on AI, with Tensor Cores and FP16 performance.

AMD ended up faster than Nvidia in some tasks, but Nvidia's bet on AI is the clear winner.
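
To make the precision point concrete: simulation codes accumulate huge numbers of small contributions, and low precision silently drops them. A quick sketch (plain C++ with toy values I picked for illustration; compiles with g++ or nvcc):

```cpp
// Toy demo: accumulating many small values shows why HPC simulation
// leans on FP64 while AI training tolerates FP32/FP16.
#include <cstdio>

int main() {
    const long n = 100000000;   // 1e8 additions of 1e-4, true sum = 10000
    float  sum32 = 0.0f;
    double sum64 = 0.0;
    for (long i = 0; i < n; ++i) {
        sum32 += 1e-4f;         // FP32 eventually stops absorbing the addend
        sum64 += 1e-4;          // FP64 keeps nearly the full sum
    }
    printf("FP32 sum: %f (expected 10000)\n", sum32);
    printf("FP64 sum: %f\n", sum64);
    return 0;
}
```

FP32 stalls around 2048 here, because past that point the addend is smaller than half a ulp of the running sum and rounds away to nothing; FP64 lands on 10000 almost exactly. AI training shrugs off that kind of error, physics simulation generally can't.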

[–] [email protected] 1 points 9 months ago (1 children)
[–] [email protected] 1 points 9 months ago

Nobody knows the actual FLOPS of the MI300.

The MI250X had 95.7 TFLOPS of FP32 due to the matrix cores.

https://www.amd.com/en/products/server-accelerators/instinct-mi250x

That's more than even the H100.
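
Rough sanity check on where the 95.7 comes from, using the published specs on that page (14,080 stream processors across both dies, ~1.7 GHz peak clock, FMA counted as 2 FLOPs; the matrix cores doubling FP32 throughput is my reading of the vector vs matrix figures):

```cpp
// Back-of-the-envelope reconstruction of the MI250X FP32 numbers.
#include <cstdio>

int main() {
    const double sps      = 14080;  // stream processors, both dies
    const double clock_hz = 1.7e9;  // peak engine clock
    const double fma      = 2.0;    // fused multiply-add = 2 FLOPs/cycle
    double vector_fp32 = sps * clock_hz * fma;  // ~47.9 TFLOPS
    double matrix_fp32 = vector_fp32 * 2.0;     // ~95.7 TFLOPS
    printf("vector FP32: %.1f TFLOPS\n", vector_fp32 / 1e12);
    printf("matrix FP32: %.1f TFLOPS\n", matrix_fp32 / 1e12);
    return 0;
}
```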

[–] [email protected] 1 points 9 months ago

Most people don't realize this is the original justification for the export bans.

[–] [email protected] 1 points 9 months ago

AMD has traditionally had very competitive FLOPS with their shaders. The issue is that their software stack, for lack of a better word, is shit.

Specific customers, like national labs or research institutions, can afford to pay a bunch of poor bastards to develop some of the compute kernels using those shitty tools, because at the end of the day most of their expenses are electricity and hardware; salaries aren't the critical cost for some of these projects. I.e. grad students are cheap!

However, when it comes to industry, things are a bit different. First off, nobody is going to take a risk with a platform that has little momentum behind it. They also need access to a talent pool that can develop the applications and get them up and running as soon as possible. Under those scenarios, salaries (i.e. the people developing the tools) tend to be almost as important a consideration as the hardware. So you go with the vendor that gives you the biggest bang for your buck in terms of performance and time to market, and that is where CUDA wins hands down.
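
To be fair, the kernel code itself isn't the moat. A minimal CUDA vector add looks like this, and the HIP version for AMD is nearly line-for-line identical (hipify will even translate it mechanically); the gap is the libraries, profilers, debuggers, and the pool of people who already know the stack:

```cuda
// Minimal CUDA vector add; HIP equivalents (hipMallocManaged, etc.)
// are nearly identical, which is exactly the point.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory, host + device visible
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```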

At this point AMD is just too far behind, at least to get significant traction in industry.