As I wrote before, Thunderbolt is now essentially a certification program for certain USB4 devices, and for PCs there is currently no difference in practice.
With USB4 version 2.0, the program will be called Thunderbolt 5, but the way I read it, Intel is planning to restrict the certification further on lighter workstations. Read this page:
Laptop charging: Thunderbolt™ 4 technology for thin and light notebooks that require up to 100W to charge. Thunderbolt™ 5 technology for laptops that require up to 140W to charge. 140W‒240W is available on some devices.
Seems like a small change, doesn't it? Wrong. This is a very big change, one which tests Intel's clout against the will of Lenovo/Dell/HP. Let me explain. For nearly two decades now, all business laptops have charged at 20V. From 2014 to 2019, the USB C specification only allowed up to 100W, using 20V at 5A. This didn't faze the big three much, and they kept their proprietary 20V 6.5A (or so) docks. Lenovo even created such a charger last year, when PD 3.1 had already been out for some time, for the ThinkPad Z16 -- and the Z16 Gen 2 this fall still shipped with it (meanwhile the consumer Legion line switched over: the C135 was proprietary last year, the C140 is PD 3.1 this year). At higher wattages they use proprietary power plugs and combo cables, which lets their customers dock by plugging in a single cable and charge at basically any wattage up to something like 230W. This means the incentive to adopt PD 3.1 is not really that big.
Now, in 2019 the USB-IF raised the wattage, but since the connector didn't change, the amperage needed to stay put at 5A, and so they raised the voltage. This is the big change. If I am reading it correctly and Intel will deny certification unless the manufacturer uses PD 3.1, then the big three need to upgrade their laptops and docks to support 28V. But depending on how strict Intel gets, TB5 certification might require downright abandoning their proprietary schemes, because the USB C specification doesn't allow proprietary charging protocols over the C connector (yes, all your phone chargers which support Qualcomm QC over C are not spec compliant).
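The arithmetic behind all this is simple enough to sketch. The voltage/current pairs below are the fixed PD levels; the grouping and labels are mine:

```python
# A sketch of the relevant USB PD power levels. The 5 A current cap on the
# connector never moves -- every step up in wattage comes purely from
# raising the voltage.
profiles = {
    "USB PD 3.0 max (SPR)": (20, 5),  # 100 W -- the old ceiling
    "PD 3.1 EPR, 28 V": (28, 5),      # 140 W -- the TB5 charging floor
    "PD 3.1 EPR, 36 V": (36, 5),      # 180 W
    "PD 3.1 EPR, 48 V": (48, 5),      # 240 W -- the connector's ceiling
}
for name, (volts, amps) in profiles.items():
    print(f"{name}: {volts * amps} W")
```

Note how 28V at the same 5A lands exactly on the 140W that the Intel page quotes.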
Will they care? MacBooks with plain (not Pro/Max) chips ship as USB4 only, because they don't meet the TB4 requirement of driving dual displays, and it doesn't seem like this made a dent in sales: we are now three generations in and Apple hasn't changed the capabilities of its lowest-tier chip. On the PC side, AMD models only ship with USB4 too, and who cares?
Does Intel have the clout in 2024 to force laptop manufacturers onto the new standard, or will they shrug and say they don't need a Thunderbolt 5 sticker on those laptops? Stay tuned, this will be interesting.
tl;dr: we could but what for?
Practically all comments here are wrong, although a few do mention why they are wrong: the address space has nothing to do with the bitness of the CPU.
Now, let's review what's what.
Let's say you want to get the word "GRADIENT" from memory into the CPU. Using an 8-bit instruction set you need to loop eight instructions. A 16-bit instruction set needs four: GR, AD, IE, NT. A 32-bit CPU needs only two, and a 64-bit instruction can read it in a single step. Most of the time the actual CPU facilities will match the instruction set -- in the early days, the Motorola 68000 for example had a 16-bit internal data bus and a 16-bit ALU but a 32-bit instruction set. This was fixed in the 68020. It "merely" meant the 68000 internally needed twice as much time as the 68020 to do anything.
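The counting above is just ceiling division of the data size by the word size. A toy sketch (the function name is mine; real CPUs do this in hardware, of course):

```python
def loads_needed(data: bytes, word_bytes: int) -> int:
    """Number of word-sized memory loads needed to fetch `data`."""
    return -(-len(data) // word_bytes)  # ceiling division

word = b"GRADIENT"  # 8 bytes
for bits in (8, 16, 32, 64):
    print(f"{bits:2}-bit word size: {loads_needed(word, bits // 8)} loads")
```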
Now, in the past the amount of addressable memory has often been larger than what a single register could address. For example, the famous 8086/8088 CPUs had a 20-bit address space while being 16-bit CPUs. The Pentium Pro was a 32-bit CPU with a 36-bit address bus. These tricks have a long history, as the RISC-V instruction set manual drily notes.
That manual thinks we might need more than a 64-bit address space before 2030. And to be fair, going to 128 bits has not been a big engineering challenge for a long time now; as early as 1999 even desktop Intel CPUs included some 128-bit registers, although for vector processing only. (A computer with 128-bit general-purpose processor registers existed in the 70s.)
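For the curious, the 8086 trick mentioned above fits in a couple of lines: two 16-bit registers combine into one 20-bit physical address. A quick illustrative model, not a full emulation:

```python
def real_mode_address(segment: int, offset: int) -> int:
    """8086 real mode: two 16-bit registers make one 20-bit address."""
    return ((segment << 4) + offset) & 0xFFFFF  # segment * 16 + offset

# 16-bit values in, a 20-bit physical address out:
print(hex(real_mode_address(0xF000, 0xFFF0)))  # prints 0xffff0
```

Neither input fits more than 16 bits, yet the result needs 20 -- exactly the kind of trick the manual is complaining about.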
Let's review why we needed 64 bits! Say you want to number your records in a database: if you do that with a 32-bit register then you can have four billion records and game over. Sure, you can store your number in two machine words, but it'll be slower. As an example, there are more than four billion humans, so this was a very real, down-to-earth limit which we needed to move past. Also, as per the note above, it's much nicer to have one big address space than all the tricks -- and those were running out fast: 64GB was addressable and even run-of-the-mill servers were able to reach 16GB. 64 bits can address about 18 billion billion records, or 16 exabytes of memory; this seems to be fine for now. Notably, current CPUs only implement 57 bits' worth of address space, so a hundredfold increase is still possible compared to currently existing machines.
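The "four billion records and game over" point is easy to demonstrate with a hypothetical record counter kept in an unsigned 32-bit field:

```python
# A hypothetical record counter stored as an unsigned 32-bit field.
counter = 2**32 - 1              # 4294967295: the last assignable ID
wrapped = (counter + 1) % 2**32  # the very next record wraps to 0
print(counter, "->", wrapped)

# With 64 bits the same counter runs to 2**64 - 1:
print(2**64 - 1)  # 18446744073709551615 -- plenty for one ID per human
```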
Going 128-bit would require defining a whole new instruction set, or at least an extension of an existing one. RISC-V has a draft for RV128I, but even they haven't bothered to fully flesh it out yet. Widening every register, internal bus, and processing unit to 128 bits would consume significant silicon area. The memory usage of everything would at least double (note Apple was still selling 8GB laptops at top dollar in 2023). So there are significant drawbacks, and so far we have been fine delegating 128-bit computing to the vector processing units in CPUs and GPUs.
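The "two machine words, but it'll be slower" cost mentioned earlier is easy to see: emulating even a single 128-bit add on 64-bit halves takes multiple operations plus carry handling. A sketch, not how any particular compiler lowers it:

```python
MASK64 = (1 << 64) - 1  # one 64-bit machine word

def add128(a_hi, a_lo, b_hi, b_lo):
    """Add two 128-bit numbers held as (high, low) 64-bit halves.

    One hardware-native add becomes two adds plus a carry check --
    the software tax that a missing 128-bit ALU imposes.
    """
    lo = (a_lo + b_lo) & MASK64
    carry = 1 if lo < a_lo else 0  # the low half wrapped around
    hi = (a_hi + b_hi + carry) & MASK64
    return hi, lo
```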
So: we could, but what for?