soggybiscuit93

joined 11 months ago
[–] [email protected] 1 points 9 months ago

It totally depends on what chips are outsourced. Intel has been a TSMC customer for many years.

Intel 4 and Intel 20A are not library complete nodes - they're optimized specifically for x86 compute tiles. Intel 3 and 18A are the refined, library complete versions of these nodes.

Lunar Lake combines NPU, iGPU, and Compute on a single tile, so Intel 4 and 20A are not viable nodes. Arrow Lake has Compute on its own tile, so it's using 20A. Lunar Lake could either be delayed 6 - 9 months to wait for 18A, or it could launch on TSMC N3. N3 was likely a better choice than Intel 3 - either due to capacity constraints (Granite Rapids and Sierra Forest will be launching on Intel 3 in the same year), or due to performance (N3 could just be better suited for GPU - or it would be too costly to port the Arc iGPU to Intel 3 just for Lunar Lake).

Intel's business structure has changed. Their node and design teams aren't working as tightly together as in the past, when Intel nodes were highly optimized for their own designs and those designs were not portable to other foundries. Intel's fab side now designs standardized nodes that compete for external customers, and Intel's design side has more flexibility in which fabs it chooses for its now-portable designs. (One recent change is that Intel design teams have to pay for foundry steppings from their own budget, rather than Foundry eating the cost.)

[–] [email protected] 1 points 9 months ago

I don't know of any Intel nodes called "5nm", but Intel 4-based client chips are launching in laptops on Dec. 14th, and 20A-based desktop chips are launching sometime next year, likely in the typical October - November timeframe when Intel usually launches desktop chips.

[–] [email protected] 1 points 9 months ago (3 children)

Intel's competitors to CUDA are oneAPI and SYCL. Intel poses no threat to Nvidia GPUs in the datacenter in the near term, but that doesn't mean Intel won't still secure contracts.
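
For a rough flavor of what that looks like on the programming side, here's a minimal SYCL 2020 vector-add sketch (the kind of code oneAPI's DPC++ compiler targets); this is just an illustrative example, with device selection and buffer setup kept as simple as possible:

```cpp
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N);

    // Default device selection: the runtime picks a CPU, Intel GPU, etc.
    sycl::queue q;
    {
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // The kernel itself: one work-item per element, CUDA-thread style.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // Buffers go out of scope here; results are copied back to the host vectors.

    return c[0] == 3.0f ? 0 : 1;
}
```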

Intel's biggest threat to Nvidia is in Nvidia's laptop dGPU volume segment. Arc offers synergies with Intel CPUs: a single vendor for both CPU and GPU for OEMs, and likely bundled discounts for them as well. A renewed focus on improving iGPUs also threatens some of Nvidia's low-end laptop dGPUs - customers no longer have to choose between a very poor-performing iGPU and stepping up to a dGPU, and iGPUs will start to become good enough that some customers will simply opt not to buy a low-end mobile dGPU in the coming years.

[–] [email protected] 1 points 9 months ago (7 children)

Arc is realistically a bigger threat to AMD than it is to Nvidia. The second half of the 2020s will be AMD and Intel competing over second place for desktop dGPUs.

For mobile, Arc iGPUs, while obviously not matching dedicated GPUs, can realistically offer good enough performance for people who only want to do light gaming; stepping up to a low-end dGPU just to make sure Minecraft, Fortnite, etc. can at least run may not be worth the extra cost.

Either way, I think Intel's heavy focus on putting Arc in all of their Core Ultra CPUs and on improving iGPUs could be a bigger disruptor than their desktop dGPUs, at least in the near term.

[–] [email protected] 1 points 10 months ago (1 children)

I think what's most exciting, even if MTL only ends up matching or being slightly behind the 780M, is that this performance tier will be available on most laptops. I won't have to do research to find which few models have the 780M and base my decision off that. I can pick a make/model and be assured that the iGPU is good enough that my games will at least run. I don't need 1440P 100fps high. I just want to know that if I take a business trip, my thin and light can at least run some games when I'm in the hotel.

[–] [email protected] 1 points 10 months ago

Just replace "i" with "Ultra", and reset the gen back to 1st

[–] [email protected] 1 points 10 months ago

20A is essentially an early, incomplete release of 18A that can only be used for compute tiles. LNL has the iGPU, NPU, and x86 cores all on the same tile, so 20A can't be used.

Intel 3 and 4 weren't designed with GPUs in mind either.

[–] [email protected] 1 points 10 months ago

One of the ways these low-power chips save power and die space is by having less I/O capacity.

[–] [email protected] 1 points 10 months ago (6 children)

My two best guesses for why TSMC N3B is used for the compute tile:

  1. The Compute Tile has the x86 cores, NPU, and iGPU. Because of this, 20A couldn't be used. The options were Intel 3, N3B, or waiting for 18A; N3B was likely the best choice.

  2. ARL and LNL were developed in tandem, one using internal foundries and the other external, as a post-10nm risk mitigation. Development for these CPUs likely began somewhere around 2020.

[–] [email protected] 1 points 10 months ago

36GB is certainly a possibility. VRAM demand is high across multiple markets. Currently you can get a 24GB 4090 or 48GB A6000 Ada. There's certainly a possibility of seeing a 36GB 5090 and a 72GB A6000 Blackwell (B6000?).
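
The math behind those tiers is straightforward; here's a quick sketch assuming a 384-bit bus (12 x 32-bit channels) and 2GB vs. 3GB memory modules, with clamshell mode doubling capacity on the workstation cards. Bus widths and module densities for unreleased parts are assumptions, not confirmed specs:

```cpp
#include <cstdio>

// Illustrative only: capacity = (bus_width / 32) channels * GB per module,
// doubled in clamshell mode (two modules per channel, A6000-style).
int main() {
    const int bus_bits = 384;            // assumed 384-bit bus
    const int channels = bus_bits / 32;  // 12 memory channels
    const int densities_gb[] = {2, 3};   // 16Gbit and 24Gbit modules

    for (int gb : densities_gb) {
        std::printf("%dGB modules: %2d GB single-sided, %2d GB clamshell\n",
                    gb, channels * gb, channels * gb * 2);
    }
    // 2GB modules -> 24 GB / 48 GB (4090 / A6000 Ada today)
    // 3GB modules -> 36 GB / 72 GB (the hypothetical 5090 / B6000 tiers above)
    return 0;
}
```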

[–] [email protected] 1 points 10 months ago

> For productivity users, there's... A6000.

The A6000 is a lot of money. For productivity users doing, say, Blender work, you can get the same 48GB of VRAM and more compute for a lower cost if you go with dual 4090s.

[–] [email protected] 1 points 10 months ago (5 children)

Having Intel devs do manual, per-game scheduling optimization seems unsustainable in the long term.
I wonder if the long-term plan is to try to automate this, or to use the NPU in upcoming generations to assist with scheduling.
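
Just to illustrate the general idea of automated hybrid-core scheduling (this is a toy heuristic I made up, not Intel Thread Director's actual logic; the metrics and thresholds are invented):

```cpp
#include <cstdio>

enum class CoreType { PCore, ECore };

struct ThreadSample {
    double ipc;          // instructions per cycle over the last sample window
    double utilization;  // fraction of the window the thread was runnable
    bool   foreground;   // does the thread belong to the focused app?
};

// Suggest a core type from simple runtime metrics. A real scheduler uses
// hardware feedback and far more state; this only sketches the concept.
CoreType suggest_core(const ThreadSample& s) {
    if (!s.foreground || s.utilization < 0.25)
        return CoreType::ECore;              // background / mostly-idle work
    if (s.ipc > 1.5 && s.utilization > 0.60)
        return CoreType::PCore;              // busy, high-IPC foreground work
    return CoreType::ECore;                  // default; later samples can promote it
}

int main() {
    ThreadSample game_render{2.1, 0.95, true};
    ThreadSample updater{0.4, 0.10, false};
    std::printf("render thread  -> %s\n",
                suggest_core(game_render) == CoreType::PCore ? "P-core" : "E-core");
    std::printf("updater thread -> %s\n",
                suggest_core(updater) == CoreType::PCore ? "P-core" : "E-core");
    return 0;
}
```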
