The thing about codec support is that you essentially have to add dedicated circuits used purely for encoding and decoding video with that particular codec. Each addition takes up transistors and increases the complexity of the chip.
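To make that concrete: on Linux you can see exactly which codecs the iGPU's fixed-function media engine exposes through VA-API. A minimal sketch, assuming the `vainfo` tool from libva-utils is installed; the parsing logic is illustrative, not from any official example:

```python
# Sketch: list which codec profiles the GPU's fixed-function media block
# exposes, by parsing `vainfo` output (libva-utils, Linux). Assumes vainfo
# is on PATH; the parsing below is a rough illustration of its line format.
import subprocess

def supported_codec_entrypoints():
    """Return a dict mapping VA-API profile names to their entrypoints."""
    out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout
    support = {}
    for line in out.splitlines():
        line = line.strip()
        # Profile lines look like: "VAProfileAV1Profile0 : VAEntrypointVLD"
        if line.startswith("VAProfile") and ":" in line:
            profile, entrypoint = (part.strip() for part in line.split(":", 1))
            support.setdefault(profile, []).append(entrypoint)
    return support

if __name__ == "__main__":
    # VLD = fixed-function decode; EncSlice/EncSliceLP = fixed-function encode
    for profile, entrypoints in sorted(supported_codec_entrypoints().items()):
        print(f"{profile}: {', '.join(entrypoints)}")
```

Every profile you see there corresponds to silicon that exists only to handle that codec, which is exactly the transistor cost being described.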
XMX cores are mostly used for XeSS and other AI inferencing tasks, as far as I understand. While it might be feasible to build an AI model that encodes video to very small file sizes, it would likely consume a lot of power in the process. For video encoding at relatively high bitrates, a dedicated ASIC will almost certainly consume far less power.
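As a rough illustration of why the fixed-function path wins on power: energy per encoded frame is just power draw divided by encode throughput. The numbers below are made-up placeholders purely to show the arithmetic, not measurements of any real media engine or hypothetical XMX-based encoder:

```python
# Back-of-envelope energy-per-frame comparison. All power and throughput
# figures are illustrative assumptions, not measured data.
def joules_per_frame(power_watts: float, frames_per_second: float) -> float:
    """Energy spent per encoded frame = power draw / encode throughput."""
    return power_watts / frames_per_second

# Assumed numbers for illustration only:
asic_j = joules_per_frame(power_watts=2.0, frames_per_second=120.0)  # fixed-function encoder
xmx_j = joules_per_frame(power_watts=40.0, frames_per_second=60.0)   # hypothetical AI encoder on XMX

print(f"ASIC: {asic_j:.3f} J/frame, AI on XMX: {xmx_j:.3f} J/frame")
print(f"AI path uses roughly {xmx_j / asic_j:.0f}x the energy per frame")
```

Even if an AI encoder hit better compression, the per-frame energy gap is the reason the dedicated block stays in the design.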
XeSS is already a worthy answer to DLSS (in contrast to AMD's FSR 2), so adding XMX cores to accelerate XeSS alone can be worth it. I also suspect Intel GPUs use the XMX cores for ray-tracing denoising.
Ah, got it. I'm guessing this is why Intel leaves those types of circuits to the GPU. Then I'm looking forward to seeing what Battlemage brings, and how these innovations trickle down to the iGPU.