this post was submitted on 04 Dec 2023
1 points (100.0% liked)

Hardware

47 readers
1 users here now

A place for quality hardware news, reviews, and intelligent discussion.

founded 1 year ago

x86 came out in 1978,

21 years after that, x64 came out in 1999.

We are three years overdue for a shift, and I don't mean ARM. Is there just no point to it? 128-bit computing is a thing and has been talked about since 1976, according to Wikipedia. Why hasn't it been widely adopted by now?

top 42 comments
[–] [email protected] 2 points 11 months ago

There have been a number of 128bit systems over the years.
As it is, 64bit should be good for the life of x86

[–] [email protected] 2 points 11 months ago

What are you gonna do with 128 that you can't do with 64

[–] [email protected] 2 points 11 months ago

I think what you need to know, in layman's terms, is that 128-bit is not double 64-bit. 65-bit is double 64-bit.

128-bit is an absurdly huge amount. And 64 bits is so much that even I, as a radar engineer, don't have to worry about it for a second.

[–] [email protected] 2 points 11 months ago

Lots of good responses regarding why 128-bit isn't a thing, but I'd like to talk about something else.

Extrapolating from two data points is folly. It simply can't work. You can't take two events, calculate the time between them, and then assume the next event will happen after the same interval.

Besides, your points are wrong. (Edit: That also has been mentioned in another response.)

x86 (8086) came out in 1978 as a 16-bit CPU. 32-bit came with the 386 in 1985. x64, although described in 1999, was released in 2003.

So now you have three data points: 1978 for 16-bit, 1985 for 32-bit and 2003 for 64-bit. Differences are 7 years and 18 years.

Not that extrapolating from 3 points is good practice, but at least it's more meaningful. You could, for example, note that it took about 2.5 times longer to move from 32-bit to 64-bit than it did from 16-bit to 32-bit. Multiply 18 years by 2.5 and you get 45 years, so the move from 64-bit to 128-bit would be expected in 2003 + 45 = 2048.

This is nonsense, of course, but at least it's a calculation backed by some data (which is still rather meaningless data).
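For what it's worth, the arithmetic above checks out in a few lines of Python (the premise is shaky, as noted, but the numbers are as given):

```python
# Naive extrapolation from the three data points quoted above.
gap_16_to_32 = 1985 - 1978          # 7 years
gap_32_to_64 = 2003 - 1985          # 18 years
ratio = 2.5                         # ~ gap_32_to_64 / gap_16_to_32, rounded
predicted_gap = round(gap_32_to_64 * ratio)
print(2003 + predicted_gap)         # 2048
```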

[–] [email protected] 2 points 11 months ago

This is a bit pedantic, but 64-bit computing long predates 1999: DEC's Alpha was a 64-bit architecture from 1992. 64-bit x86 (x86-64, or amd64) wasn't purchasable until 2003, although it was announced in 2000.

There were several additional shifts between 1978 and 2003:

  • 8088 / 8086 use segmented 16-bit addressing, which gives 1 MB, or 2^20 bytes
  • 80286 has physical support for 16 megs, or 2^24 bytes
  • 80386 has physical support for 4 gigs, or 2^32 bytes
  • Pentium Pro has PAE support for 64 gigs, or 2^36 bytes
  • the AMD Opteron from 2003 supports 1024 gigs, or 1 terabyte, or 2^40 bytes
  • current AMD and Intel CPUs support anywhere between 2^48 and 2^57 bytes of address space (256 terabytes to 128 petabytes)

But let's just use three points of data: 8086 / 8088, 80386, and the first AMD Opteron, counting the Opteron as a full 64 bits:

  • 8086 / 8088, 1978, 20 bits
  • 80386, 1985, 32 bits
  • AMD Opteron, 2003, 64 bits

1978 to 1985 is 7 years, with a change in addressing of 12 bits, or about 0.58 years per bit.

1985 to 2003 is 18 years, with a change in addressing of 32 bits, or about 0.56 years per bit. So far, pretty consistent.

How long would it take to go from 64 bits to 128 bits? At around 0.56 years per bit, those extra 64 bits would take about 36 years, and we've had twenty so far.

Check back in 16 years.
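A quick Python sketch of this extrapolation (note the units: the rate works out to years per additional address bit):

```python
# Years-per-additional-address-bit, using the data points quoted above.
rate_16_to_32 = (1985 - 1978) / (32 - 20)   # ~0.58 years per bit
rate_32_to_64 = (2003 - 1985) / (64 - 32)   # ~0.56 years per bit
eta_128 = 2003 + round((128 - 64) * rate_32_to_64)
print(round(rate_16_to_32, 2), round(rate_32_to_64, 2), eta_128)  # 0.58 0.56 2039
```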

[–] [email protected] 2 points 11 months ago

Because there is no need from an address space or compute standpoint.

To understand how large a 128-bit memory space really is: 2^128 bytes is about 3.4 × 10^38 bytes, vastly more memory than could ever physically be built.

In the rare cases where you need to deal with a 128-bit integer or floating-point value, you can do it in software with modest overhead by chaining registers and operations. There hasn't been enough pressure in terms of use cases needing 128-bit int/fp precision for manufacturers to invest the die area in direct HW support for it.

FWIW there have been 64bit computers since the 60s/70s.
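For illustration, the software emulation mentioned above can be sketched as a toy model in Python: adding two 128-bit integers held as pairs of 64-bit limbs with an explicit carry (a real compiler emits roughly the equivalent add/add-with-carry instruction pair):

```python
MASK64 = (1 << 64) - 1  # a 64-bit "register" boundary

def add128(a_hi, a_lo, b_hi, b_lo):
    """Add two 128-bit values held as (hi, lo) pairs of 64-bit limbs."""
    total_lo = a_lo + b_lo
    lo = total_lo & MASK64
    carry = total_lo >> 64               # carry out of the low limb (0 or 1)
    hi = (a_hi + b_hi + carry) & MASK64
    return hi, lo

# Cross-check against Python's native big integers:
a, b = 2**100 + 12345, 2**90 + 67890
hi, lo = add128(a >> 64, a & MASK64, b >> 64, b & MASK64)
assert (hi << 64) | lo == a + b
```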

[–] [email protected] 1 points 11 months ago

Did OP just look at the pattern in two numbers, check the date difference, and whip up this conclusion?

[–] [email protected] 1 points 11 months ago

What is this? The console wars of the 90s all over again?

[–] [email protected] 1 points 11 months ago

Would a third leg help you walk faster? Not the way we currently walk.

[–] [email protected] 1 points 11 months ago

Because it was, in a sense: vector registers have crazy sizes today, 512 bits and up.

[–] [email protected] 1 points 11 months ago

How much memory can you address with 64 bits versus 32 bits? Are we approaching devices with that capacity yet?

[–] [email protected] 1 points 11 months ago

Let me put it this way: computing is evolving in a way where SMALLER registers are actually more important for new algorithmic needs. AI/ML is a great example: you program in specialty frameworks such as CUDA or TensorFlow, which want registers as small as 8 bits so that things are done faster, in the GPU or in L1/L2 cache. GPU hardware, for instance, is built with 8- and 16-bit logical processing units in mind.

Larger registers only really help a portion of computing, while you can emulate the odd large register you may need without affecting performance THAT much with a combination of smaller registers.

[–] [email protected] 1 points 11 months ago (1 children)

If it took 21 years to go from 32-bit to 64-bit, imagine it will take about 21^2 = 441 years to go from 64-bit to 128-bit. This is because 2^128 / 2^64 is the square of 2^64 / 2^32.

[–] [email protected] 1 points 11 months ago

Technology moves at an exponential pace. The time it took to go from 8 bit to 16 bit to 32 bit to 64 bit got shorter and shorter.

[–] [email protected] 1 points 11 months ago

What would even be the point? How would you justify the expense of making modern CPUs 128-bit (transistors and R&D aren't free)? We aren't anywhere near the limit of 64-bit addressing and won't reach it for decades; modern "64-bit" consumer CPUs don't even bother implementing the full 64-bit address space, since it wouldn't be of use anyway. For arithmetic, 128-bit is easily doable without any architectural changes.

[–] [email protected] 1 points 11 months ago

The world's biggest supercomputer, Frontier, has 9.2 PB of RAM. It's not available to one CPU, so there's no need to address it all in one address space, but let's say there were. That still leaves room to build around 1,000 times more RAM into that theoretical CPU before 64-bit addressing runs out at roughly 16,000 PB. I'm not sure we could build such a computer today, let alone one that needs more than that, which is where 128 bits would start to matter.

Sure, RAM isn't the only reason for bigger address space, but there are also other ways to handle data beyond one address space. For the consumer, we are far from there.
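As a rough sanity check of that headroom (Frontier's RAM figure taken from the comment above):

```python
frontier_pib = 9.2                # Frontier's RAM in petabytes, per the comment
limit_64_pib = 2**64 / 2**50      # 64-bit address space in PiB: 16,384
print(round(limit_64_pib / frontier_pib))  # ~1781x headroom before 64 bits run out
```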

[–] [email protected] 1 points 11 months ago

The 32-bit limit was a real constraint; 64-bit is not. Also, modern architectures already process 128 bits of data in parallel (say, 4 × 32-bit SIMD lanes), so it would just be a matter of representing that data in a 128-bit way. Any actual need for 128 bits can be emulated, and you likely don't need to process such data at the limits of a 2023-tier processor anyway. If anything, machine learning is heading in the other direction, preferring faster hardware at half precision (https://en.wikipedia.org/wiki/Half-precision_floating-point_format)

[–] [email protected] 1 points 11 months ago

There is very little need for 128 bit computing. And it can be emulated when necessary.

[–] [email protected] 1 points 11 months ago

What's been said, no need, no advantage at present.

[–] [email protected] 1 points 11 months ago

Because 2 to the power of 64 is a stupidly big number.

It is 2^32 times more than 2 to the power of 32, because you've gone ahead and doubled it another 32 times to get to 64 bits.

[–] [email protected] 1 points 11 months ago

It'll be introduced whenever we reach RAM values of 16 exabytes or something being commonplace

[–] [email protected] 1 points 11 months ago

tl;dr: we could but what for?

Practically all comments here are wrong, although a few do mention why: the address space need not match the bitness of the CPU.

Now, let's review what's what.

Let's say you want to get the word "GRADIENT" from memory into the CPU. Using an 8-bit instruction set you need to loop eight instructions. A 16-bit instruction set needs four: GR, AD, IE, NT. A 32-bit CPU needs only two, and a 64-bit instruction can read it in a single step. Most of the time the actual CPU facilities will match the instruction set; in the early days, the Motorola 68000, for example, had a 16-bit internal data bus and a 16-bit ALU but a 32-bit instruction set. This was fixed in the 68020. It "merely" meant the 68000 internally needed twice as much time as the 68020 to do anything.
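The word-size example above can be sketched in Python (a toy model of load counts, not of an actual CPU):

```python
word = b"GRADIENT"  # 8 bytes to move from "memory" into the "CPU"
for width in (1, 2, 4, 8):  # bytes per load: 8-, 16-, 32-, 64-bit chunks
    chunks = [word[i:i + width] for i in range(0, len(word), width)]
    print(f"{width * 8:2d}-bit: {len(chunks)} loads -> {chunks}")
```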

Now, in the past the amount of addressable memory has often been larger than what a single register could address. For example, the famous 8086/8088 CPUs had a 20-bit address space while they were 16-bit CPUs. The Pentium Pro was a 32-bit CPU with a 36-bit address bus. Such tricks keep recurring; as the RISC-V instruction set manual drily notes:

History suggests that whenever it becomes clear that more than 64 bits of address space is needed, architects will repeat intensive debates about alternatives to extending the address space, including segmentation, 96-bit address spaces, and software workarounds, until, finally, flat 128-bit address spaces will be adopted as the simplest and best solution.

That manual thinks we might need more than a 64-bit address space before 2030. And to be fair, going to 128 bits has not been a big engineering challenge for a long time now; after all, as early as 1999 even desktop Intel CPUs included some 128-bit registers, although for vector processing only. (A computer with 128-bit general-purpose registers existed in the 70s.)

Let's review why we needed 64 bits! Say you want to number the records in a database: if you do that with a 32-bit register, you can have four billion records and then it's game over. Sure, you can store the number in two machine words, but that's slower. There are more than four billion humans, so this was a very real, down-to-earth limit we needed to move past. Also, as per the note above, one big flat address space is much nicer than all the tricks, which were running out fast: PAE made 64 GB addressable while even run-of-the-mill servers were reaching 16 GB. 64 bits can address 16 billion billion records or bytes of memory, which seems fine for now. Notably, current CPUs can only address 52 bits' worth of physical memory (57 bits of virtual address space), so a hundredfold increase over currently existing machines is still possible.

Going 128-bit would require defining a whole new instruction set, or at least extending an existing one. RISC-V has a draft for RV128I, but even they haven't bothered fully fleshing it out yet. Widening every register, internal bus, and processing unit to 128 bits would consume significant silicon area. Memory usage would also grow, since every pointer would double in size (note Apple was still selling 8 GB laptops at top dollar in 2023). So there are significant drawbacks, and so far we have been fine delegating 128-bit computing to the vector units in CPUs and GPUs.

So:

  1. Addressing has tricks aplenty should a future system need addressing more than 16 exabytes.
  2. General purpose computing works fine with 64 bit for now.
[–] [email protected] 1 points 11 months ago

No need. 32 bit became a hindrance because a 32 bit address bus can only address up to 4GB of RAM.

A 64 bit address bus can address up to 16 EB of RAM. Exabytes. To get there we would need to pass Gigabytes, Terabytes, and Petabytes of RAM capacity before we get to Exabytes.

That is a limit that we simply will never hit, at least not with silicon based computers.

[–] [email protected] 1 points 11 months ago

You are implying that all versions of x86 were 32-bit. 1978 was when the 8086, Intel's first 16-bit CPU, was released. Intel's first 32-bit CPU was the 386, released in 1985.

[–] [email protected] 1 points 11 months ago

Because we haven't hit a memory limit yet.

[–] [email protected] 1 points 11 months ago

You could argue that SSE introduced 128bit back in 1999 for x86. It's not 128bit as a base word size, but it is 128 bit computing.

[–] [email protected] 1 points 11 months ago (1 children)

Not needed yet. 64-bit was a must back then, since 32-bit can only handle 4 GB; 64-bit can handle about 16 exabytes.

[–] [email protected] 1 points 11 months ago

I need that much memory to run chrome with 3 tabs.

[–] [email protected] 1 points 11 months ago

64-bit is quite literally 2^32 times larger than 32-bit.

There isn't a need to go to 128 bit yet

[–] [email protected] 1 points 11 months ago

Other people have addressed why 64-bit is still fine, but I just want to say that "x86" and "x64" are not two different architectures the way that you're presenting them. We still use the x86 architecture, it's just that x86-64, or AMD64, or whatever you want to call it, is a 64-bit extension of that architecture.

And this isn't the first time that happened; the original 8086 was a 16-bit processor, as was the 286. The 386, however, was a 32-bit processor with backward compatibility for the 16-bit software built for the 16-bit x86 CPUs.

The 386 came out in 1985, so there's actually a 14-year gap (really an 18-year gap, because a 64-bit x86 processor didn't hit the market until 2003). And before that there was a 7-year gap between 16- and 32-bit x86.

But ultimately as other people have said the answer is that we don't need to go beyond 64-bit right now, and the reason there was such a short gap between 16 and 32-bit processors was because the limitations of a 16-bit architecture became practical obstacles to progress faster than they did for 32-bit, and it's going to be much longer than that for 64-bit because the address space has grown exponentially, not linearly.

[–] [email protected] 1 points 11 months ago

Modern CPUs have 512bit registers, and don't need bigger memory addresses. Not sure what the issue is.

[–] [email protected] 1 points 11 months ago

Requirements aside, you'd need an insane amount of page-walk caching to deal with the page table hierarchy. In most architectures you add a new level of page tables for every 9 bits of address space.
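A minimal sketch of that growth, assuming 4 KiB pages (12 offset bits) and 9 translated bits per level, as on x86-64:

```python
import math

def pt_levels(va_bits, offset_bits=12, bits_per_level=9):
    """Page-table levels needed to translate va_bits of virtual address."""
    return math.ceil((va_bits - offset_bits) / bits_per_level)

print(pt_levels(48))    # 4 levels: classic x86-64
print(pt_levels(57))    # 5 levels: x86-64 with LA57
print(pt_levels(128))   # 13 levels for a flat 128-bit space
```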

[–] [email protected] 1 points 11 months ago

Is address space the only reason we moved away from 32-bit for high-performance computers though? Does 64-bit have any performance advantages over 32-bit apart from that? What about SIMD performance?

[–] [email protected] 1 points 11 months ago

More importantly, why hasn't anyone invented a 3-way flip-flopper/capacitor/resistor/transistor so that we can finally use a trinary system instead of the outdated binary system?

You could store 81 different values in 4 ternary digits ("trits"), rather than a measly 16 values in 4 bits of binary.
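The counting works out like this (n trits hold 3^n values versus 2^n for n bits):

```python
for n in (1, 4, 8):
    print(f"{n} digits: ternary holds {3**n} values, binary holds {2**n}")
```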

[–] [email protected] 1 points 11 months ago

32-bit can only address 4 gigabytes, which is easily saturated by a single-socket computer, while 64-bit allows about 16 million terabytes of addressable memory. You will never see that number used in a single-socket machine in the history of humanity.

[–] [email protected] 1 points 11 months ago

More memory overhead, literally 0 benefit

[–] kakes 1 points 11 months ago (2 children)

I'll answer your question with a question: What are you doing that requires 128-bit computations?

After that, a follow up question: Is it so important you're willing to cut your effective RAM in half to do it?

[–] brian 2 points 11 months ago (1 children)

Why would it be cutting your effective RAM in half? I know very little about hardware/software architecture and all that.

[–] kakes 1 points 11 months ago

Imagine we have an 8 bit (1 byte) architecture, so data is stored/processed in 8-bit chunks.

If our RAM holds 256 bits, we can store 32 pieces of data in that RAM (256/8).

If we change to a 16-bit architecture, that same physical RAM now only has the capacity to hold 16 values (256/16). The values can be significantly bigger, but we get fewer of them.

Bits don't appear out of nothing, they do take physical space, and there is a cost to creating them. We have a tradeoff of the number of values to store vs the size of each value.

For reference, per chunk (or "word") of data:
With 8 bits, we can hold 256 values.
With 64 bits, we can hold 18,446,744,073,709,551,616 (about 1.8 × 10^19) values.
With 128 bits, we can hold about 3.4 × 10^38 values.
(For X bits, it's 2^X.)

Maybe one day we'll get there, but for now, 64 bits seems to be enough for at least consumer-grade computations.
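Those per-word value counts are just powers of two, easy to verify:

```python
for bits in (8, 64, 128):
    print(f"{bits:3d} bits -> {2**bits:,} distinct values")
```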

[–] kakes 1 points 11 months ago

Oh for fuck sake, I replied to a bot.

To the dev that's spamming Lemmy with this garbage: You aren't making Lemmy better. You're actively making it a worse experience.

[–] [email protected] 1 points 11 months ago

Your calculations are off. No one expects constant time to double the address size, certainly not for physical RAM; what is approximately true (but slowing down) is constant time to need each additional bit of physical address space:

  • 1974 8080, 16 bits

  • 1978 8086, 20 bits, 1.0 years/bit

  • 1985 80386, 32 bits, 0.6 years/bit

  • 1995 Pentium Pro, 36 bits, 2.5 years/bit

  • 2003 Athlon64, 40 bits, 2.0 years/bit

  • 2006 Core 2, 36 bits, n/a (going backwards!)

  • 2014 Haswell-E [1], 46 bits, 1.9 years/bit (since Pentium Pro)

  • 2019 Ice Lake, 52 bits, 0.8 years/bit

The overall average is 36 extra address bits in 45 years or 1.25 years/bit.

At this rate, we're going to need more than 64 physical address bits around 2035. The need for more than 64 virtual address bits is probably about 5 years earlier, in 2030.

You could make similar lists of virtual address space on the one hand, or actual maximum RAM supported on the other hand. Those would give different rates, but I think the trend would be the same.

[1] not 100% sure this was the first
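The overall trend above can be sketched numerically (years and bit counts as listed in the comment):

```python
years_per_bit = (2019 - 1974) / (52 - 16)          # 45 years / 36 bits = 1.25
eta_64_physical = 2019 + round((64 - 52) * years_per_bit)
print(years_per_bit, eta_64_physical)              # 1.25 2034
```

which lands close to the "around 2035" estimate.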
