this post was submitted on 04 Dec 2023
1 points (100.0% liked)

Hardware


A place for quality hardware news, reviews, and intelligent discussion.

founded 1 year ago

x86 came out in 1978.

21 years later, x64 came out in 1999.

We are three years overdue for a shift, and I don't mean to ARM. Is there just no point to it? 128-bit computing is a thing and has been talked about since 1976, according to Wikipedia. Why hasn't it been widely adopted by now?

[–] kakes 1 points 1 year ago (3 children)

I'll answer your question with a question: What are you doing that requires 128-bit computations?

After that, a follow up question: Is it so important you're willing to cut your effective RAM in half to do it?

[–] brian 2 points 1 year ago (1 children)

Why would it be cutting your effective RAM in half? I know very little about hardware/software architecture and all that.

[–] kakes 1 points 1 year ago

Imagine we have an 8 bit (1 byte) architecture, so data is stored/processed in 8-bit chunks.

If our RAM holds 256 bits, we can store 32 pieces of data in that RAM (256/8).

If we change to a 16-bit architecture, that same physical RAM now only has the capacity to hold 16 values (256/16). The values can be significantly bigger, but we get fewer of them.

Bits don't appear out of nowhere; they take physical space, and there is a cost to creating them. So we have a tradeoff between the number of values we can store and the size of each value.
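To make that concrete, here's a small Python sketch of the tradeoff, using the same hypothetical 256-bit RAM as above (the numbers are illustrative, not a real machine):

```python
RAM_BITS = 256  # the tiny hypothetical RAM from the example above

for word_bits in (8, 16):
    words_that_fit = RAM_BITS // word_bits  # how many values fit in RAM
    max_value = 2 ** word_bits - 1          # largest value one word can hold
    print(f"{word_bits}-bit words: {words_that_fit} values, each up to {max_value}")
```

Doubling the word size halves the number of values the same RAM can hold: 32 values of up to 255 each, versus 16 values of up to 65,535 each.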

For reference, per chunk (or "word") of data:
With 8 bits, we can hold 256 values.
With 64 bits, we can hold 18,446,744,073,709,551,616 values.
With 128 bits, we can hold 340,282,366,920,938,463,463,374,607,431,768,211,456 values.
(For X bits, it's 2^X)
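You can check these counts yourself, since Python's integers are arbitrary precision:

```python
# 2**X distinct values for an X-bit word
for bits in (8, 64, 128):
    print(f"{bits} bits -> {2 ** bits:,} distinct values")
```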

Maybe one day we'll get there, but for now, 64 bits seems to be enough for at least consumer-grade computations.
