The 32-bit limit was a real constraint; 64-bit is not. Also, modern architectures already compute on 128-bit data in parallel via SIMD (e.g. as 4x32-bit lanes), so it'd mostly be a matter of presenting that data as a single 128-bit value. Any genuine need for 128-bit arithmetic can be emulated, and it's unlikely you need to process such data at the limit of a 2023-tier processor anyway. If anything, machine learning is pushing the other way, preferring faster hardware at half precision (https://en.wikipedia.org/wiki/Half-precision_floating-point_format).
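On the emulation point, here's a minimal sketch in C (the type and function names are my own illustration) of how a 128-bit unsigned add can be built from two 64-bit halves with manual carry propagation; GCC and Clang on 64-bit targets also expose a built-in `__int128` that does essentially this for you.

```c
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* A 128-bit unsigned integer emulated as two 64-bit halves. */
typedef struct {
    uint64_t lo;
    uint64_t hi;
} u128;

/* Add the low halves, then propagate the carry into the high halves. */
static u128 u128_add(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low word */
    return r;
}

int main(void) {
    u128 a = { UINT64_MAX, 0 };  /* 2^64 - 1 */
    u128 b = { 1, 0 };           /* adding 1 overflows the low word */
    u128 s = u128_add(a, b);
    printf("hi=%" PRIu64 " lo=%" PRIu64 "\n", s.hi, s.lo);  /* hi=1 lo=0 */
    return 0;
}
```

A couple of extra instructions per operation like this is exactly the kind of overhead that's negligible unless you're doing 128-bit math in a hot loop, which is the point above.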