hi_im_bored13

joined 11 months ago
[–] [email protected] 1 points 9 months ago

NVLink is no longer supported on the Ada Lovelace GPU architecture that powers Nvidia's flagship RTX 4090 graphics. Replacing NVLink is the PCIe Gen 5 standard. Nvidia will use the freed up space from the removal of NVLink to cram in more AI processing capabilities.

https://www.windowscentral.com/hardware/computers-desktops/nvidia-kills-off-nvlink-on-rtx-4090#:~:text=NVLink%20is%20no%20longer%20supported,in%20more%20AI%20processing%20capabilities.

The RTX 6000 Ada also drops NVLink. No idea why the downvotes

[–] [email protected] 1 points 9 months ago (2 children)

NVLink died because PCIe Gen 5 can handle the interconnect now, no external hardware needed.
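
For what it's worth, you can check whether your cards actually see each other for direct peer-to-peer transfers (which is what now rides over PCIe instead of the bridge). Rough sketch, assuming a PyTorch install and at least two GPUs visible:

```python
import torch

# Report whether each pair of visible GPUs can do direct peer-to-peer
# transfers (over NVLink where it exists, otherwise over PCIe).
if torch.cuda.is_available():
    n = torch.cuda.device_count()
    for i in range(n):
        for j in range(n):
            if i != j:
                ok = torch.cuda.can_device_access_peer(i, j)
                print(f"GPU {i} -> GPU {j}: peer access {'yes' if ok else 'no'}")
else:
    print("No CUDA devices visible")
```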

[–] [email protected] 1 points 9 months ago (3 children)

Yeah, and he notes why pretty shortly into the video: the PC realm copies Apple. The video seemed to be complimenting Apple if anything, and the level of engineering seems pretty commendable.

Regarding the tech itself, I was looking into it for my personal PC a while back, and it seems like each module tops out around 10W since it has to act as both the heatsink and the fan. It performs better than passive cooling of course, but (as evident in the video) it still falls short of a conventional fan in a larger chassis.

[–] [email protected] 1 points 9 months ago (2 children)

I'm not the biggest fan of the main LTT channel, but Mac Address puts out consistently high-quality content.

Can't speak for the sponsored Asus router they showcased, but I'm liking my Google/Nest WiFi setup.

[–] [email protected] 1 points 10 months ago

I believe the M2 Max GPU is at the level of a 4070 Ti at best, but the larger issue is that not many tools support Metal for compute. On the other hand, all memory is shared on the Mac, so you (theoretically) get up to 192 GB of video memory, along with the Neural Engine for basic inferencing and matrix extensions on the CPU.

Essentially, the M2 Max is simultaneously excellent and falling behind, depending on the industry, but you won't know until you optimize and test your software for the specific use case, and after that you're beholden to whatever the hardware puts out.

CUDA is well supported and Nvidia/AMD scale well across different applications; unless Apple picks up their software, I don't think the hardware matters much.
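
To make the software point concrete: most frameworks treat Metal as a fallback rather than a first-class target, so code tends to end up with a device pick like the one below. A minimal sketch, assuming PyTorch with the MPS backend available; the pick_device helper is just illustrative:

```python
import torch

def pick_device() -> torch.device:
    # Prefer CUDA where available (broadest tool support),
    # fall back to Apple's Metal backend (MPS), then CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x  # runs on whichever device was chosen above
print(device, y.shape)
```

Whether the "mps" path is actually competitive still depends on each op being implemented for Metal, which is the gap I mean.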

[–] [email protected] 1 points 10 months ago

I need an apple x google fanfic

[–] [email protected] 1 points 10 months ago

I found this article from last month

That drops support from AMDVLK, which is largely deprecated in favor of RADV anyway, and RADV still supports Polaris/Vega just fine.
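
If anyone wants to verify which driver they're actually on, you can point the Vulkan loader at a specific ICD manifest and check what it reports. Rough sketch; the .json paths are examples only and vary by distro:

```python
import os
import subprocess

# Example manifest paths -- the actual locations vary by distro.
RADV_ICD = "/usr/share/vulkan/icd.d/radeon_icd.x86_64.json"   # Mesa RADV
AMDVLK_ICD = "/usr/share/vulkan/icd.d/amd_icd64.json"         # AMDVLK

# Force the loader to RADV and print which driver it actually selects.
env = dict(os.environ, VK_ICD_FILENAMES=RADV_ICD)
subprocess.run(["vulkaninfo", "--summary"], env=env, check=False)
```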

[–] [email protected] 1 points 10 months ago

x90 should have 8 slots in the future