Jonny_H

joined 11 months ago
[–] [email protected] 1 points 10 months ago (1 children)

Yeah, if there's a market for older CPUs, they're going to sell them.

I mean Intel still manufactured new original 386 chips until 2007 or so, more than 20 years after release. And who knows how long they held stock after that; if you asked them today with enough $$$ they might still find some in a warehouse somewhere.

[–] [email protected] 1 points 10 months ago

There's not really anything intrinsically /wrong/ with tying it to the same latency-hiding mechanisms as the texture unit (nothing in the ISA /requires/ it to be implemented in the texture unit; more likely that's already the biggest read-bandwidth connection to the memory bus, so you may as well piggyback off it). I honestly wouldn't be surprised if the Nvidia units were implemented in a similar place, as they need to be heavily integrated with the shader units while also having a direct fast path to memory reads.

One big difference is that Nvidia's unit can do a whole tree traversal with no shader interaction, while the AMD one just does a single node test and expansion, then needs the shader to queue the next level. This makes AMD's implementation great for hardware simplicity, and if there's always a shader scheduled that's doing a good mix of RT and non-RT instructions, it's not really much slower.
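To make that difference concrete, here's a toy sketch of the two traversal styles. This is purely illustrative pseudocode-in-Python, assuming a binary BVH of nested tuples; none of the names correspond to any real ISA or driver interface:

```python
# Toy BVH: inner node = ("node", left, right), leaf = ("leaf", id).
# Illustrative only - not a real RT-unit interface.

def hw_full_traversal(node, hits):
    """Nvidia-style: the whole tree walk happens inside the RT unit,
    with no shader interaction until the results come back."""
    if node[0] == "leaf":
        hits.append(node[1])
    else:
        hw_full_traversal(node[1], hits)
        hw_full_traversal(node[2], hits)

def shader_driven_traversal(root):
    """AMD-style: each hardware instruction tests one node and returns
    its children; the shader loops, queueing the next level itself."""
    hits, stack, round_trips = [], [root], 0
    while stack:
        node = stack.pop()
        round_trips += 1  # one shader <-> RT-unit round trip per node
        if node[0] == "leaf":
            hits.append(node[1])
        else:
            stack.extend((node[1], node[2]))
    return hits, round_trips

tree = ("node", ("node", ("leaf", 0), ("leaf", 1)), ("leaf", 2))
hits, trips = shader_driven_traversal(tree)
print(sorted(hits), trips)  # same hits either way, but 5 round trips
```

Both versions visit the same leaves; the point is that the shader-driven loop pays a shader round trip per node visited, which is exactly the work the scheduler has to find something else to hide.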

But that doesn't really happen in the real world - the BVH lookups are normally concentrated in an RT pass rather than spread over all the shaders in the frame. And that batch tends not to have enough other work to fill the pipeline while waiting on the BVH lookups. If you're just sitting in a tight loop of BVH lookups, the pass back to the shader just to submit the next lookup is a break in any pipelining or prefetching you might otherwise be able to do.
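A back-of-envelope model of why that hurts: if every per-level round trip costs L cycles and there's no other work to overlap with it, a depth-d walk exposes roughly d * L cycles, versus paying the latency roughly once for a fire-and-forget hardware traversal. The numbers below are made up for illustration, not measured from any GPU:

```python
# Assumed, illustrative numbers - not benchmarks.
ROUND_TRIP_CYCLES = 200   # latency of one shader <-> memory/RT-unit trip
DEPTH = 20                # plausible BVH depth for a large scene

# Shader-driven stepping: each level waits on the previous one,
# so with nothing else scheduled the latencies add up.
shader_driven_exposed = DEPTH * ROUND_TRIP_CYCLES

# Full hardware traversal: the shader waits once while the RT unit
# pipelines the per-level fetches internally.
hw_traversal_exposed = ROUND_TRIP_CYCLES

print(shader_driven_exposed, hw_traversal_exposed)  # 4000 vs 200
```

Which is exactly why the gap closes when there *is* enough non-RT work in flight: the d * L term gets hidden behind other warps, and closer-to-real-world RT passes rarely have that work available.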

But it might also be more flexible - anything that looks a bit like a BVH might be able to do fun things with the BVH-lookup and triangle-ray-intersection instructions, not just raytracing - but there simply doesn't seem to be a glut of use cases for that as-is. And unused flexibility is just inefficiency, after all.

[–] [email protected] 1 points 10 months ago (1 children)

That's where the focus comes in - AMD have CPUs, FPGAs, networking, consoles etc. all split between roughly the same number of engineers. Nvidia have GPUs, and a bit of Tegra SoCs on the side?

And having more money means you can pay other people to do stuff, be it contractors (so not in the "employee count") or other companies, like getting first dibs at TSMC - though it seems Apple are the ones paying for #1 there right now. Nvidia outspend AMD in total R&D even then. If they wanted to (and didn't get slapped down by antitrust laws pretty quickly) they could probably sell GPUs at a loss and just starve out AMD - hell, people here would likely celebrate that, as they'd get cheaper GPUs from those loss leaders and miss the long-term ramifications.

And the moat is bigger than just internal engineering - if you're a gamedev and can choose between a technique that works better on Nvidia and one that works better on AMD, you'll choose the 90% of your market every time.

When Microsoft are asking around for things to put in the next generation of DirectX, who do you think they'll listen to more?

[–] [email protected] 1 points 10 months ago (8 children)

Nvidia is nearly 10x the size of AMD, and more focused on GPUs. That's a lot of R&D, and if they keep outselling AMD 10:1 in GPUs that's a big amortized development cost advantage.

That's a big hill to climb, and (unlike Intel) they don't seem to be sitting on their thumbs.