Intel
There are already hardware encoders and decoders for H.264/H.265/VP9/AV1 on Intel GPUs, and these are codec-specific. The article this post links to points to Intel increasing the GPU's capabilities, which usually comes with an increase in encoding/decoding performance and efficiency.
Right. So what I'd like to see is more codecs added alongside the improved performance and efficiency: ProRes, BRAW, Avid DNx, etc. I'm not sure if this could happen, though. Intel doesn't have to beat Apple at ProRes speed, but something close would be great, and the same for the others. Of course, this might be me wishing for pie in the sky lol. And with ray tracing and render engines, any ideas on that? I know it's trending a bit off topic.
The thing about codec support is that you essentially have to add dedicated circuits that are used purely for encoding and decoding video in that specific codec. Each addition takes up transistors and increases the complexity of the chip.
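To make the tradeoff concrete, here's a toy sketch of why each codec baked into silicon adds cost. All the area figures are invented for illustration; they are not real Intel numbers, just a way to show that fixed-function support scales with the number of formats:

```python
# Toy model of a fixed-function media engine: every codec supported in
# silicon needs its own encode/decode block, so die area and design
# complexity grow with each format added. Costs below are made up.
codec_block_cost = {
    "H.264": 100,  # arbitrary units of die area
    "H.265": 150,
    "VP9":   120,
    "AV1":   200,
}

def media_engine_cost(supported_codecs):
    """Total fixed-function area for a chosen set of codecs."""
    return sum(codec_block_cost[c] for c in supported_codecs)

print(media_engine_cost(["H.264", "H.265"]))  # 250
print(media_engine_cost(codec_block_cost))    # 570 (all four codecs)
```

Adding ProRes, BRAW, or DNx support would mean adding more entries like these, each paid for in transistors whether or not a given user ever touches that format.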
XMX cores are mostly used for XeSS and other AI inference tasks, as far as I understand. While it could be feasible to create an AI model that encodes video to very small file sizes, it would likely consume a lot of power in the process. For video encoding at relatively high bitrates, a fixed-function ASIC would most likely consume far less power.
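A back-of-envelope sketch of that power argument, with entirely hypothetical wattage figures (not measurements of any real hardware), just to show how energy per frame compares for a real-time stream:

```python
# Hypothetical comparison: energy per encoded frame for a fixed-function
# media engine vs. an AI-model encoder running on matrix cores.
# Both power numbers are invented placeholders for illustration.
FPS = 60  # real-time 60 fps stream

def joules_per_frame(watts, fps=FPS):
    # Energy = power * time; each frame takes 1/fps seconds
    return watts / fps

asic_watts = 2    # assumed draw of a dedicated encode block
ai_watts = 40     # assumed draw of an AI encoder on XMX-style cores

print(joules_per_frame(asic_watts))  # ~0.033 J per frame
print(joules_per_frame(ai_watts))    # ~0.667 J per frame
```

Even if the AI model squeezed out smaller files, the per-frame energy gap under these assumptions shows why dedicated silicon tends to win for bread-and-butter encoding.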
XeSS is already a worthy competitor/answer to DLSS (in contrast to AMD's FSR2), so adding XMX cores to accelerate XeSS alone can be worth it. I also suspect Intel GPUs use the XMX cores for raytracing denoising.
Ah, got it. I'm guessing this is why Intel leaves those types of circuits to the GPU. Then I'm looking forward to seeing what Battlemage brings, and how these innovations trickle down to the iGPU.