this post was submitted on 31 Oct 2023

Hardware

[–] [email protected] 1 points 10 months ago (1 children)

On another note...

  • Memory bandwidth is down. M2 Pro had 200GB/s, M3 Pro only has 150GB/s. M3 Max only has 400GB/s on the higher binned part.

  • The low-spec M3 14" has one fewer Thunderbolt port, and, like the base M1 and M2 before it, the M3 doesn't officially support Thunderbolt 4

  • The M3 Pro loses the option for an 8TB SSD, likely because it was a low-volume option at that spec tier.

  • The M3 Pro actually has more E-cores than the M3 Max (6 vs. 4). Interesting to see the higher-specced part lose cores; Intel generally wouldn't do this.

[–] [email protected] 1 points 10 months ago (2 children)

> Memory bandwidth is down. M2 Pro had 200GB/s, M3 Pro only has 150GB/s. M3 Max only has 400GB/s on the higher binned part.

This really puzzles me. One of the impressive things about the M2 Max and Ultra was how good they were at running local LLMs and other AI models (for a component not made by Nvidia and only costing a few grand). Mostly because of their high memory bandwidth, since that tends to be the limiting factor for LLMs over raw GPU TFLOPS. So for LLM use, this is *really* shooting themselves in the foot. Guess I better buy an M2 Ultra Mac Studio before they get around to downgrading it to the M3 Ultra.
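The "bandwidth is the limiting factor" point can be sketched with a rough back-of-envelope calculation. The assumption (mine, not the commenter's): during single-stream decoding, every generated token requires reading roughly the full set of model weights from memory, so tokens/sec is bounded above by bandwidth divided by model size. The model sizes below are hypothetical examples; real-world throughput is lower than this ceiling.

```python
# Rough upper bound on LLM decode speed for a memory-bandwidth-limited chip.
# Assumes each token reads ~all model weights once; ignores KV cache,
# compute time, and overlap, so treat the result as a ceiling, not a forecast.

def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound tokens/sec: bytes moved per second / bytes read per token."""
    return bandwidth_gb_s / model_size_gb

if __name__ == "__main__":
    # A hypothetical ~70B-parameter model quantized to ~4 bits is ~35 GB.
    model_gb = 35.0
    for chip, bw in [("M2 Max, 400 GB/s", 400.0), ("M3 Pro, 150 GB/s", 150.0)]:
        ceiling = est_tokens_per_sec(bw, model_gb)
        print(f"{chip}: ~{ceiling:.1f} tokens/s ceiling")
```

By this estimate, dropping from 200GB/s to 150GB/s shaves roughly a quarter off the best-case decode rate for any model large enough to be bandwidth-bound, which is why the spec cut stings for local LLM use.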

[–] [email protected] 1 points 10 months ago

What locally hosted models were people running?

[–] [email protected] 1 points 10 months ago

That is not true. The SoC does not have enough Neural Engine cores to run AI training on its own, and AI inference isn't I/O-bound.