this post was submitted on 05 Jan 2025
1039 points (98.2% liked)
Greentext
We've reached the physical limits of silicon transistors. Speed is determined by transistor size (to a first approximation), and we just can't make transistors any smaller without running into problems that physics makes essentially unsolvable. The next time computers get faster, it will involve some fundamental change in materials or architecture. We've actually made fundamental changes to chip design a couple of times already, but they were "hidden" because they slotted into the smooth improvement in speed/power/efficiency happening at the time.
My 4-year-old work laptop had a quad-core CPU. The replacement laptop issued to me this year has a 20-core CPU. The architecture change has already happened.
I'm not sure that's really the sort of architectural change that was intended. It doesn't fundamentally alter the chips in a way that makes them more powerful; it just packs more of them into the system to raise its overall capability. It's like claiming you'd found a new way to make a bulletproof vest twice as effective by doubling the thickness of the material, when I think the original comment is talking about something more like finding a new base material, or altering the weave or physical construction, so the vest weighs less while providing the same stopping power, which is quite a different challenge.
Except the 20-core laptop I have draws the same wattage as the previous one. So, to go back to your bulletproof vest analogy, it's like doubling the stopping power by adding more plates, except all the new plates together weigh the same and take up the same space as all the old plates.
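Worth noting that more cores still isn't a drop-in substitute for a faster core: Amdahl's law caps the speedup by whatever fraction of the work is stuck being serial. A quick sketch (the 60%/95% parallel fractions below are illustrative, not measured from any real workload):

```python
# Amdahl's law: speedup with n cores when a fraction p of the
# work can run in parallel (the remaining 1-p stays serial).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel workload gets only ~10x out of 20 cores;
# a 60%-parallel one barely benefits past a handful of cores.
for p in (0.60, 0.95):
    print(f"p={p}: 4 cores -> {amdahl_speedup(p, 4):.2f}x, "
          f"20 cores -> {amdahl_speedup(p, 20):.2f}x")
```

So the 4-core-to-20-core jump helps a lot for embarrassingly parallel stuff (compilation, video encoding) and much less for anything dominated by a single serial thread.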
A lot of the efficiency gains in the last few years come from better chip design, in the sense of improving the on-chip algorithms and improving how the CPU decides to cut power to various components. The easy example is how much more power efficient an ARM-based processor is compared to an equivalent x86-based processor. The fundamental set of processes designed into the chip is based on those instruction set standards (ARM vs x86), and that in and of itself contributes to power efficiency. I believe RISC-V is also supposed to be a more efficient instruction set.
Since the speed of the processor is limited by how far the electrons have to travel, miniaturization is really the key to single-core processor speed. There has been some recent success in miniaturizing the chip's physical components, but not much. The current generation of CPUs has to deal with errors caused by quantum tunneling, and the smaller you make them, the worse it gets. It's been a while since I learned about chip design, but I do know that we'll have to make a fundamental chip "construction" change if we want faster single-core speeds. E.g., at one point, power was delivered to the chip components on the same plane as the chip itself, but that was running into density and power (thermal?) limits, so someone invented backside power delivery and chips kept on getting smaller. These days, the smallest features on a chip are maybe 4 dozen atoms wide.
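The "how far electrons have to travel" limit is easy to ballpark: even at the speed of light in vacuum, a signal covers only a few centimeters in one clock cycle at modern frequencies, and real on-chip signals propagate well below that. A rough back-of-envelope (an upper bound, not a model of actual wire delay):

```python
# Back-of-envelope: the farthest a signal could possibly travel
# in one clock cycle, using the vacuum speed of light as a bound.
C = 299_792_458  # speed of light in vacuum, m/s

for ghz in (1, 5):
    period_s = 1 / (ghz * 1e9)    # seconds per clock cycle
    dist_cm = C * period_s * 100  # distance in cm (upper bound)
    print(f"{ghz} GHz: at most {dist_cm:.1f} cm per cycle")
```

At 5 GHz that bound is about 6 cm per cycle, and signals in actual interconnect are slower still, which is part of why chips have to stay physically small and why clock distribution is such a headache.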
I should also say, there's not the same kind of pressure to push single-core speeds higher and higher like there used to be. These days, pretty much any chip runs fast enough to handle most users' needs without issue. There are only so many operations per second needed to run a web browser.