I was recently reading Tracy Kidder's excellent book Soul of a New Machine.

The author pointed out what a big deal the transition to 32-Bit computing was.

However, in the last 20 years, I don't really remember a big fuss being made of most computers going to 64-bit as a de facto standard. Why is this?

[–] [email protected] 1 points 11 months ago (2 children)

And to think that processors started out at 4 bits with the Intel 4004, the first general-purpose microprocessor. That didn't last long: it quickly transitioned to 8 bits so the chip could go from being a calculator to a text processor, since conventional text is stored as an 8-bit byte even though only 7 bits are technically required for ASCII.
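
A quick way to see the 7-bit point (a minimal C sketch, nothing processor-specific):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *text = "Hello, 4004!";
    /* Every conventional ASCII character fits in 7 bits (values 0-127),
       but each one still occupies a full 8-bit byte in memory. */
    for (size_t i = 0; i < strlen(text); i++) {
        unsigned char c = (unsigned char)text[i];
        printf("'%c' = 0x%02X, fits in 7 bits: %s\n",
               c, c, (c < 128) ? "yes" : "no");
    }
    printf("storage per character: %zu bits\n", sizeof(char) * 8);
    return 0;
}
```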

After that, the processor word length depended more upon addressing limitations, as it went from 16 bits to 32 bits to, finally, 64 bits. Then it stopped. However, GPUs have taken it to 128 bits and multiples thereof (no longer strictly powers of 2), such as 384 bits for the GeForce RTX 4090, just for sheer data bandwidth.
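
To put those addressing limits in perspective, here's a rough sketch of how much memory each word length can address directly (assuming a flat, byte-addressable address space):

```c
#include <stdio.h>

int main(void) {
    /* Directly addressable memory for a flat, byte-addressable address space. */
    printf("16-bit: %llu bytes (64 KiB)\n", (unsigned long long)1 << 16);
    printf("32-bit: %llu bytes (4 GiB)\n",  (unsigned long long)1 << 32);
    /* 2^64 won't fit in a 64-bit integer, so just state it: */
    printf("64-bit: 2^64 bytes (16 EiB)\n");
    return 0;
}
```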

I'm not a processor developer, so maybe I got some things wrong.

[–] [email protected] 1 points 11 months ago

384 bits for the GeForce RTX 4090

384-bit is the memory bus width. AMD's Hawaii (R9 290X) had a 512-bit bus back in 2013. Not to be confused with the data types used for calculations.

[–] [email protected] 1 points 11 months ago

Memory bus width != CPU Bitness

Those two numbers describe totally different things.

For GPUs, the bitness number usually describes the width of the memory bus, that is, how much data can be moved to and from memory concurrently.
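
As a back-of-the-envelope illustration of why that width matters for bandwidth (the 384-bit bus and ~21 Gbps per pin are the commonly quoted RTX 4090 figures, used here as assumptions):

```c
#include <stdio.h>

int main(void) {
    /* Rough GPU memory bandwidth estimate:
       bandwidth = bus width (bits) * effective data rate per pin (Gbit/s) / 8.
       384-bit bus and ~21 Gbps GDDR6X are the commonly quoted RTX 4090 numbers. */
    const double bus_width_bits = 384.0;
    const double data_rate_gbps = 21.0;  /* per pin, effective */
    double bandwidth_gb_per_s = bus_width_bits * data_rate_gbps / 8.0;
    printf("peak memory bandwidth: ~%.0f GB/s\n", bandwidth_gb_per_s); /* ~1008 GB/s */
    return 0;
}
```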

For CPUs, the bitness describes the width of the general-purpose registers and memory addresses, i.e. the size of the data the core can process at any given time. With AVX-512, CPUs can additionally handle vectors that are up to 512 bits long.
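
For comparison, a tiny sketch of what "64-bit CPU" versus "512-bit AVX vector" means in practice (assuming an x86-64 compiler whose immintrin.h declares the AVX-512 types):

```c
#include <stdio.h>
#include <stdint.h>
#include <immintrin.h>

int main(void) {
    /* "64-bit CPU" refers to the width of pointers and general-purpose registers. */
    printf("pointer width:        %zu bits\n", sizeof(void *) * 8);   /* 64 on x86-64 */
    printf("general-purpose word: %zu bits\n", sizeof(uint64_t) * 8); /* 64 */
    /* AVX-512 adds separate 512-bit vector registers that hold
       eight 64-bit doubles at once. */
    printf("AVX-512 vector:       %zu bits (%zu doubles)\n",
           sizeof(__m512d) * 8, sizeof(__m512d) / sizeof(double));
    return 0;
}
```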

GPUs are in fact 64-bit processing units: the largest data type they are designed to handle is the 64-bit double-precision floating-point number.