Intel optimizes slimmed-down X86S instruction set — revision 1.2 eliminates 16-bit and 32-bit features
(www.tomshardware.com)
Not sure whether to be excited or concerned.
I think this is a necessary and inevitable consequence of the surge of competitive ARM CPUs unencumbered by legacy ISAs (mainly the ones from Apple; I'm not sure Samsung's is even in the conversation). If the Apple Silicon chips are any indication, the performance benefits could be massive.
But if Intel cuts AMD out, or price-gouges them for the "honor" of developing compatible CPUs, that hurts the whole industry, even Intel, in the long run. And I don't trust the bean counters at Intel to take the long view over next fiscal quarter's earnings.
Itanium 2: electric boogaloo.
I guess it's time to get rid of legacy stuff in modern chips, if nothing else to make them cheaper to produce. Older instruction sets can still be emulated, just as Apple did. That said, I doubt there will be massive performance gains from this alone, since it's still the same architecture, but let's hope this actually sees the light of day.
If those instruction sets occupy a fixed stage in the processing pipeline, eliminating them could be a real performance boost. Additionally, removing them would shrink the chip's die, which could result in shorter signal paths.
I don't have enough hardware knowledge to dispute that, but I have a feeling it's not that easy to gain a massive performance boost. If I recall correctly, the biggest advantage of Apple's ARM CPUs has to do with fixed instruction length, whereas x86 is variable. Fixed length gives you a decoding advantage because you know exactly how many instructions are in the cache and where each one starts; something along those lines. But let's hope for the better.
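As a rough illustration of that decoding difference, here's a toy sketch in Python. The encodings are made up for the example, not real ARM or x86 formats: the point is only that fixed-width instruction streams have statically known start offsets, while variable-width streams must be walked sequentially to find them.

```python
# Toy sketch: fixed-width vs variable-width instruction decoding.
# Encodings here are invented for illustration, not real ISA formats.

def decode_fixed(stream: bytes, width: int = 4) -> list:
    """Every instruction starts at a known offset (i * width), so all
    start positions are known up front and decoders can run in parallel."""
    return [stream[i:i + width] for i in range(0, len(stream), width)]

def decode_variable(stream: bytes, length_of) -> list:
    """Each instruction's start depends on the length of the previous one,
    so start offsets must be discovered one at a time."""
    insns, i = [], 0
    while i < len(stream):
        n = length_of(stream[i])  # length only known after inspecting bytes
        insns.append(stream[i:i + n])
        i += n
    return insns
```

In the fixed-width case a wide decoder can attack N instruction slots at once; in the variable-width case the hardware has to predict or pre-compute instruction boundaries before it can decode in parallel, which is extra machinery x86 front ends carry.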