this post was submitted on 24 Nov 2023

Intel

[–] [email protected] 1 points 10 months ago (4 children)

From my perspective as someone who works with media a lot, having a powerful iGPU paired with a CPU that has great decoders/encoders speeds up lots of workflows and reduces the need for a workstation with multiple GPUs (which, of course, some might still need regardless). Ideally Nvidia, AMD, and Intel would all improve the engines in their GPUs, but so far I haven't seen any signs of that, especially from Nvidia, who wants you to buy a Quadro-type card for that. At the moment Apple's decoders/encoders do so much heavy lifting that the CPU and GPU on their SoC can accelerate other operations. I'd like to see this on the PC side. We used to buy add-in cards for that, say for Avid systems, Media100 systems, RED footage, etc. If anyone has expertise on this, feel free to chime in and educate me some more. So maybe not too applicable to heavy gaming, but for content creation and heavy media work it would be a great thing.

[–] [email protected] 1 points 9 months ago (1 children)

There are already hardware encoders and decoders for H.264/H.265/VP9/AV1 on Intel GPUs; these are codec-specific. The article this post links to points to Intel increasing the capabilities of the GPU, which is usually accompanied by an increase in encoding/decoding performance and efficiency.
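As an illustration, those fixed-function blocks are exposed through Intel Quick Sync Video, which ffmpeg can drive directly. A minimal sketch, assuming an ffmpeg build compiled with QSV support and an Intel GPU present (`input.mp4` and the bitrate are placeholder values):

```shell
# List the Quick Sync (QSV) encoders available in this ffmpeg build
ffmpeg -hide_banner -encoders | grep qsv

# Decode H.264 and re-encode to HEVC entirely on the iGPU's
# fixed-function engines, keeping the CPU free for other work
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
       -c:v hevc_qsv -b:v 8M output.mp4
```

This is exactly the "CPU stays free for other operations" effect described above: both the decode and the encode run on dedicated silicon rather than on general-purpose cores.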

[–] [email protected] 1 points 9 months ago (1 children)

Right. So adding more codecs is what I'd like to see, with improved performance and efficiency: ProRes, BRAW, Avid DNx, etc. I'm not sure if this could happen, though. Intel doesn't have to beat Apple on ProRes speed, but something close, and the same with the others; of course, this might be me wishing for pie in the sky lol. And with ray tracing and render engines, any ideas on that? I know it's trending a bit off topic.

[–] [email protected] 1 points 9 months ago (1 children)

The thing about codec support is that you essentially have to add specific circuits that are used purely for decoding and encoding video using that specific codec. Each addition takes up transistors and increases the complexity of the chip.

XMX cores are mostly used for XeSS and other AI inference tasks, as far as I understand. While it could be feasible to create an AI model that encodes video to very small file sizes, it would likely consume a lot of power in the process. For video encoding at relatively high bitrates, an ASIC would likely consume far less power.
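To make the power argument concrete, here's a back-of-envelope sketch. All per-frame energy figures are made-up placeholders (not from any datasheet); the point is only how the fixed-function vs. AI-inference gap compounds over a full clip:

```python
# Back-of-envelope energy comparison: fixed-function ASIC encoder vs. a
# hypothetical AI-model encoder running on GPU matrix cores.
# ALL energy figures are illustrative placeholders, not measured values.

ASIC_MJ_PER_FRAME = 2.0       # hypothetical: dedicated encode block
AI_MODEL_MJ_PER_FRAME = 80.0  # hypothetical: neural-codec inference

def encode_energy_joules(mj_per_frame: float, fps: int, seconds: int) -> float:
    """Total energy to encode a clip, in joules."""
    return mj_per_frame * fps * seconds / 1000.0

# One hour of 60 fps video
asic = encode_energy_joules(ASIC_MJ_PER_FRAME, 60, 3600)
ai = encode_energy_joules(AI_MODEL_MJ_PER_FRAME, 60, 3600)

print(f"ASIC:     {asic:,.0f} J")                    # 432 J
print(f"AI model: {ai:,.0f} J ({ai / asic:.0f}x)")   # 17,280 J (40x)
```

Whatever the real numbers are, a constant per-frame gap multiplies across every frame of a long encode, which is why an ASIC tends to win for bread-and-butter codecs.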

XeSS is already a worthy competitor/answer to DLSS (in contrast to AMD's FSR2), so adding XMX cores to accelerate XeSS alone can be worth it. I also suspect Intel GPUs use the XMX cores for raytracing denoising.

[–] [email protected] 1 points 9 months ago

Ah, got it. I'm guessing this is why Intel leaves those types of circuits to the GPU. Then I'm looking forward to seeing what Battlemage brings, and how these innovations trickle down to the iGPU.
