this post was submitted on 31 Oct 2023

Hardware

A place for quality hardware news, reviews, and intelligent discussion.

[–] [email protected] 1 points 10 months ago (4 children)

Comparisons:

M3 base:

  • CPU: 20% faster than M2 base

  • GPU: 20% faster than M2 base

M3 Pro:

  • CPU: Undisclosed performance vs M2 Pro, 20% faster than M1 Pro

  • GPU: 10% faster than M2 Pro

M3 Max:

  • CPU: 50% faster than M2 Max

  • GPU: 20% faster than M2 Max


It seems the biggest improvements are on the M3 Max. Across the M3 family, all models get upgraded screen brightness (from 500 nits to 600), hardware-accelerated ray tracing, and hardware-accelerated mesh shading.

[–] [email protected] 1 points 10 months ago (1 children)

On another note...

  • Memory bandwidth is down: the M2 Pro had 200GB/s, while the M3 Pro only has 150GB/s, and the M3 Max only reaches 400GB/s on the higher-binned part.

  • The low-spec M3 14" has one fewer Thunderbolt port, and it also doesn't officially support Thunderbolt 4 (just like the base M1/M2 before it).

  • The M3 Pro loses the option for an 8TB SSD, likely because it was a low-volume configuration.

  • The M3 Pro actually has more E-cores than the M3 Max (6 vs 4). Interesting to see them take this away on a higher-specced part; it seems like something Intel wouldn't do.

[–] [email protected] 1 points 10 months ago (2 children)

> Memory bandwidth is down: the M2 Pro had 200GB/s, while the M3 Pro only has 150GB/s, and the M3 Max only reaches 400GB/s on the higher-binned part.

This really puzzles me. One of the impressive things about the M2 Max and Ultra was how good they were at running local LLMs and other AI models (for a component not made by Nvidia and costing only a few grand), mostly because of their high memory bandwidth, since that tends to be the limiting factor for LLMs rather than raw GPU TFLOPS. So for LLM use, this is *really* shooting themselves in the foot. I guess I'd better buy an M2 Ultra Mac Studio before they get around to downgrading it to the M3 Ultra.
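
To put rough numbers on why bandwidth is the limiting factor, here's a minimal back-of-envelope sketch. The 70B model size and ~4.5-bit quantization are my own illustrative assumptions (not from this thread), and real-world throughput will be lower once compute, KV-cache reads, and overhead are counted:

```python
# Back-of-envelope: during token generation every weight is streamed from memory
# once per token, so decode speed is roughly capped at bandwidth / weight bytes.

def decode_ceiling_tokens_per_s(params_billion: float, bytes_per_param: float,
                                bandwidth_gb_s: float) -> float:
    """Upper bound on tokens/sec when memory bandwidth is the bottleneck."""
    weights_gb = params_billion * bytes_per_param  # GB streamed per generated token
    return bandwidth_gb_s / weights_gb

# Hypothetical ~70B-parameter model at ~4.5 bits/param (~0.56 bytes/param)
for label, bw in [("M3 Pro (150 GB/s)", 150),
                  ("M2 Max (400 GB/s)", 400),
                  ("M2 Ultra (800 GB/s)", 800)]:
    print(f"{label}: ~{decode_ceiling_tokens_per_s(70, 0.56, bw):.0f} tokens/s ceiling")
```

Even as a crude ceiling, that's why cutting bandwidth hurts LLM decode speed far more than losing a few TFLOPS would.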

[–] [email protected] 1 points 10 months ago

What locally hosted models were people running?

[–] [email protected] 1 points 10 months ago

That is not true. The SoC does not have enough Neural Engine cores to run AI training on its own, and AI inference is not I/O-centric.

[–] [email protected] 1 points 10 months ago (1 children)

I'm not incredibly knowledgeable when it comes to hardware, but a 50% CPU and 20% GPU increase does not seem insignificant for a product upgrade (M2 Max --> M3 Max) in less than a year.

[–] [email protected] 1 points 10 months ago (1 children)

Only the highest-end M3 Max (the top bin, with 12 P-cores + 4 E-cores), while the M2 Max has 8 P-cores + 4 E-cores. But you have to pay a lot more to get those M3 Max laptops. I would wait for independent benchmarks across applications to see the actual improvements.
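
As a quick sanity check on where that headline number could come from, using the core counts above (this ignores clocks, IPC, and E-core changes, so it's only illustrative):

```python
# How much multicore uplift could come purely from the extra P-cores on the
# top-bin M3 Max (12P) versus the M2 Max (8P)? Ignores clocks and IPC entirely.
m2_max_p_cores = 8
m3_max_p_cores = 12
uplift_from_core_count = m3_max_p_cores / m2_max_p_cores - 1
print(f"~{uplift_from_core_count:.0%} of uplift from core count alone")  # ~50%
```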

[–] [email protected] 1 points 10 months ago

People would kill if Intel increased their HEDT or i9 performance by 50%.

[–] [email protected] 1 points 10 months ago (1 children)

> It seems the biggest improvements are on the M3 Max.

/u/uzzi38, any indication whether the E-cores are any less useless, with the renewed focus on gaming and not just pure background work?

What are the overall cache structure changes, especially in the GPU? Enough to compensate for the bandwidth reduction? Things like cache structure or delta compression can definitely make a difference; we have seen effective memory performance ratios soar since Kepler. But it definitely seems more tiered than M1/M2.

Obviously this all exists in the shadow of the N3 trainwreck... N3B vs N3E and the like. Any overall picture of the core structure changes here?

It's all just so much less interesting than an ARM HEDT workstation would be right now.

[–] [email protected] 1 points 10 months ago

Apple's E-cores were never useless; they're easily best in class. They've got the best perf/W in the industry by a country mile, the things sip power, and while they're not the fastest little cores, they are still extremely potent. I don't see them changing away from that core structure any time soon.

As for the GPU, idk off the top of my head, but the IP is likely similar to the A17's. I wouldn't expect much; the main advantage is the addition of hardware RT support, but from what we've seen the RT capabilities aren't hugely improved over doing it in shaders. It's definitely going to be a more modest improvement than prior iterations here.

[–] [email protected] 1 points 10 months ago

Didn't they say there was a big leap in the GPU? 20% is tiny.
I am surprised.
Is this data really accurate?