this post was submitted on 28 Oct 2023
306 points (98.4% liked)
Technology
It will take at least another 10 years to get a majority of the market off of x86 with the 20+ years of legacy software bound to it. Not to mention all of the current gen x86 CPUs that will still be usable 10 years from now.
Honestly, we just need some sort of compatibility layer. Direct porting isn't completely required yet.
So, like Rosetta? 🙄
You don't really need the majority of the market to have moved before things start to get tricky for Intel. They're very much a non-diversified company; the entire house is bet on x86. They've only just started dabbling in discrete GPUs, despite having made integrated-GPU SoCs for years. Other than a bit of contract fabbing, almost every penny they make is from x86.
If ARM starts to make inroads into the laptop/desktop space and RISC-V starts to take a chunk of the server market, the bottom could fall out of Intel's business model fast.
if i overclock a 486 into the 10ghz range it will be stronger and warmer
They make a shit ton of Wi-Fi modems.
I think it's safe to say Apple has proved that wrong three times.
When they switched from Motorola to Power, then from Power to Intel, and most recently from Intel to ARM.
If necessary software will be quickly modified, or it will run well enough on compatibility layers.
The switch can happen very fast for new hardware. The old systems may stay around for a while, but the previous CPU architecture can be phased out very quickly in new systems. Apple has proven that.
Apple's advantage is that it controls the whole stack, from silicon to App Store. That's a problem for all sorts of reasons, but here they can use that power to implement the shift in a way that minimally impacts users.
Keep in mind that the M1 isn't just an ARM CPU. It's an ARM CPU with specially designed features that make Intel compatibility fast. Rosetta 2 is a marvel of technology, but it would run like crap on anything that doesn't have hardware emulation of Intel's memory model.
If you are in a position where you don't control the whole stack the way Apple does, things are looking quite a bit less rosy for you.
Not the case for the Motorola-to-Power or Power-to-x86 switches. Very similar software infrastructure to what PCs have, with lots and lots of third-party software vendors.
Apple has a petit vertical monopoly.
Intel tried pushing to 64-bit via Itanium, and it completely bombed.
Microsoft tried extending Windows to ARM, and it went poorly.
I'm not sure about that. If, for example, the EU says "for the environment, you may not use chips that draw more than X watts per GHz" or something, x86 might be out of the game pretty quickly. Also, market leadership is decided by new hardware sales, not by the old hardware already out there. I bet by 2030 the majority of chipsets sold will be either ARM or RISC-V. AMD did make an ARM rival with the 7840U, but with their entry into ARM in 2025, it's not preposterous to believe the ARM ecosystem will pick up steam.
Also, recompiling open-source stuff for ARM is probably not going to be a huge issue. clang and gcc already support ARM as a compilation target, and unless there's x86-specific code in the Python or Ruby interpreters or in UI frameworks like Qt and GTK, they should compile without much issue. If proprietary code can't or won't keep up, the most likely outcome is x86 emulation: money dumped into QEMU, or something like Rosetta for Windows.
Anyway, I'm talking out of my ass here as I don't write C/C++ and don't have to deal with cross-compilation, nor do I have any experience in hardware. It's all just a feeling.
FYI, ARM can already handle most open-source software with no problem as far as compiling it is concerned. In particular, Qt and GTK do work, and cross-compiling is very easy too. Not that it's strictly necessary (aside from probably faster compilation, unless you have a really good ARM CPU): QEMU has qemu-user (if you didn't know), which is basically Rosetta for Linux, though with a sizable performance hit when testing cross-compiled code.
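For anyone who hasn't tried it, the cross-compile-plus-qemu-user workflow mentioned above looks roughly like this. A sketch assuming Debian/Ubuntu package names and a `hello.c` source file; adjust for your distro:

```shell
# Install an aarch64 cross toolchain and the user-mode QEMU binaries
# (Debian/Ubuntu package names; other distros differ).
sudo apt install gcc-aarch64-linux-gnu qemu-user

# Cross-compile a C program for 64-bit ARM on an x86 host.
# -static avoids needing the target's shared libraries at run time.
aarch64-linux-gnu-gcc -static -O2 hello.c -o hello-arm64

# Run the ARM binary directly on the x86 host via qemu-user.
qemu-aarch64 ./hello-arm64

# Dynamically linked binaries need the target's library tree instead:
# qemu-aarch64 -L /usr/aarch64-linux-gnu ./hello-arm64
```

This is per-process emulation (like Rosetta on Linux), not a full-system VM, which is why it's convenient for quickly testing cross-compiled code.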
Edit: In my opinion, what will switch to non-x86 fastest on a large scale (for computers, so not counting phones, tablets, and microcontrollers, which I don't use anyway) are servers. A lot of them run standard open-source software, so switching might be pretty easy as long as the package manager abstracts the architecture (like all of the ones I know do).
I mean, certain cloud providers are already starting to rent out such servers (not to mention all the hackers hosting servers on Raspberry Pis, and those running standard Linux on mobile phones).
Thank you, that was informative. Also, didn't know about qemu-user!
Aren't Box64 and FEX faster though?