Exactly the same as 64-bit computing, except pointers now take up twice as much RAM, so you need more baseline memory throughput and more cache, for pretty much no practical benefit, because we aren’t close to fully using up a 64-bit address space.
Our modern 64-bit processors do use 128 bits for certain vector operations though, don’t they? So there is another aspect apart from address space.
Yes, up to 512 bits since Skylake. But there are very few real-world tasks that can make use of such wide data paths. One example is media processing, where a 512-bit register can pack eight 64-bit operands and act on all of them simultaneously, because there is usually a steady stream of data to be processed using similar operations. In other tasks, where processing patterns can’t make use of such batched approaches, the extra bits would essentially be wasted.
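To make the batching idea concrete, here’s a rough sketch in plain Python (not actual SIMD; the lane width and values are just illustrative):

```python
# A 512-bit register can pack 8 independent 64-bit operands ("lanes").
lanes = 512 // 64
a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10, 20, 30, 40, 50, 60, 70, 80]

# One SIMD instruction applies the same operation to every lane at once;
# here we just model that with an element-wise add over the packed values.
packed_sum = [x + y for x, y in zip(a, b)]
print(lanes, packed_sum)  # 8 [11, 22, 33, 44, 55, 66, 77, 88]
```

Real hardware would do this in a single instruction rather than a loop, which is why it only pays off when the data arrives in regular, same-shaped batches.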
It wouldn’t be much different. Was it noticeably different when you went from a 32-bit to a 64-bit computer?
For me it was, actually. Maybe because I was late to the party, so people had stopped developing shit for 32 bits, and when I did the transition it was like “Finally, I can install shit.” Also, my computer was newer and the OS worked better.
So your PC was old (thus the new one felt faster) and its hardware was no longer supported by some software developers (because it was outdated and not enough users were on it anymore). The same can hold true if you have a 5-year-old PC now. You didn’t notice this due to going 64-bit; you noticed it due to moving away from a heavily outdated system.
The big shortcoming of 32-bit hardware was that it limits the amount of RAM in the computer to 4 GB. 64-bit is not inherently faster (for most things), but it enables up to 16 exabytes of RAM, an incomprehensible amount. Going to 128-bit would only be helpful if 16 exabytes weren’t enough.
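The address-space jump is easy to verify with a bit of arithmetic, since the pointer width in bits determines how many bytes can be addressed:

```python
GiB = 2**30  # gibibyte
EiB = 2**60  # exbibyte

print(2**32 // GiB)  # 4  -> 32-bit pointers address 4 GiB
print(2**64 // EiB)  # 16 -> 64-bit pointers address 16 EiB
```

Every extra pointer bit doubles the addressable memory, which is why the gap between 32 and 64 bits is a factor of four billion, not two.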
Slightly off topic, but the number of bits doesn’t necessarily describe the size of memory. For example, most eight-bit processors had 8-bit data buses but 16-bit address buses and address registers.
Some processors that were 32 bits internally had 24-bit memory addressing.
We have 128-bit stuff in some places where it’s advantageous, but in most cases there’s not really a need. 64 bits already provides a maximum signed integer value of ±9,223,372,036,854,775,807. Double it if you don’t need negatives and drop the sign. There’s little need in most cases for a bigger number, and the cases that do need one either get 128-bit hardware or can be handled by big-number libraries.
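Those figures follow directly from the bit width (a quick Python sketch; Python’s own ints happen to be arbitrary-precision, which is essentially how big-number libraries get past the hardware limit):

```python
signed_max = 2**63 - 1    # largest signed 64-bit value (one bit spent on sign)
unsigned_max = 2**64 - 1  # drop the sign: roughly double

print(f"{signed_max:,}")    # 9,223,372,036,854,775,807
print(f"{unsigned_max:,}")  # 18,446,744,073,709,551,615
```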
The question is what it would look like, not whether it is practical.
Similar to a modern 64-bit computer. My computer actually has a 512-bit-wide ALU for SIMD; basically, it lets you do the same operation on multiple numbers simultaneously.
It’s hard to picture “128bit computing” in a general sense as ever being a thing. It’s just so far beyond anything we can realistically use now, plus would be inefficient/wasteful on most ordinary tasks.
Put this together with the physical limits to Moore’s law and current approaches to at least mobile computing…
I picture more use of multi-core, specialty cores, and systems on a chip. Some loads, like video, benefit from wide lanes, huge bandwidth, and addressing many things at once, and we have video cores with architectures more suited for that. Most loads can be done with a standard compute core, and it is unnecessary, maybe counterproductive, to move up to 128-bit. If we want efficiency cores, like some mobile chips already have, 128-bit is wrong/bad/inefficient. We’ll certainly have more AI cores, but I have no idea what they need.
If you can forgive the Apple-ness and take this as a general trend, I can see this, only more so
— https://www.apple.com/newsroom/2023/06/apple-introduces-m2-ultra/
not even an apple thing, isn’t this just how SoCs work in general? definitely something intel and amd should be doing though (if they aren’t already, i don’t honestly know), especially with hardware decoders and ML cores and whatnot
Yes, this is how SoCs can work. I think it is a great description of one specific company emphasizing a balance of different cores doing different jobs, rather than trying to make many general cores that attempt to do everything. However, don’t get distracted by all the marketing language, or by the fact that this is a company people love to hate.
I came here to leave a snarky comment but then I read this thread. Now I feel sad and really confused.
Contrary to some misconceptions, these SIMD capabilities did not amount to the processor being “128-bit”, as neither the memory addresses nor the integers themselves were 128-bit; only the shared SIMD/integer registers were. For comparison, 128-bit-wide registers and SIMD instructions had been present in the 32-bit x86 architecture since 1999, with the introduction of SSE. However, the internal data paths were 128 bits wide, and its processors were capable of operating on 4×32-bit quantities in parallel in single registers.
What?
I would guess they think a PS2 is an example of 128 bit computing.
The PS2 had a full 128-bit DMA bus and full 128-bit registers. IIRC the Dreamcast did too.
In fact, your computer is already capable of processing more than 64 bits at once using SIMD instructions. Many applications, including ones you wouldn’t suspect, already use them, games included.