Let’s just say, hypothetically, that this was possible, that the laws of silicon were not a thing, that there was market demand for it for some asinine reason, and that every OS process scheduler was made to optionally work with this. How would this work?
Personally, I’d say there would be a smol, lower-clocked CPU on the board that would kick on and take over if you were to pull the big boy CPU out and swap it while the system is running.
Computer engineer here. I think hot-swappable CPUs could be physically doable. You would have something like an SD card slot and just be able to click the CPU in and out.
The main problem is the speed of electricity. Electrical signals move at a sizeable fraction of the speed of light, but there’s a catch. Meet the light nanosecond, the distance light travels in a nanosecond:
It’s about 5mm less than a foot. If you have a processor running at 4GHz, the clock pulse is going to be low for an 8th of a foot and high for an 8th of a foot. If the chip is too big, you run into issues where the clock signal is low on one side of the chip and high on the other. And that’s saying nothing of signals that need to get from one side of the chip and back again before the clock cycle is up.
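To put rough numbers on the light-nanosecond argument, here’s a quick back-of-the-envelope sketch (the 4GHz figure is the one from the example above):

```python
# Back-of-the-envelope: how far light travels per clock cycle.
C = 299_792_458          # speed of light in a vacuum, m/s
FOOT = 0.3048            # one foot in meters

light_ns = C * 1e-9      # distance light covers in one nanosecond, in meters
print(f"light nanosecond: {light_ns * 100:.2f} cm")   # ~29.98 cm, ~5 mm short of a foot

clock_hz = 4e9                       # the 4 GHz CPU from the example
period_s = 1 / clock_hz              # one full clock period: 0.25 ns
half_period_dist = C * period_s / 2  # how far light gets while the clock is high (or low)
print(f"half-period travel: {half_period_dist * 100:.2f} cm "
      f"({half_period_dist / FOOT:.3f} ft)")          # ~3.75 cm, about 1/8 of a foot
```

Real signals on copper are slower than c, so the actual budget is even tighter than this.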
When you increase the distance between the CPU and the things it’s communicating with, you run into problems where the whole wire isn’t at the same voltage at the same moment. You can work around that by running the link slower and splitting it across more wires in parallel, but then you get crosstalk problems.
Anyway, I’m rambling. The main problem with multiple CPUs not on the same chip: by far the fastest things in a computer are the CPU and the GPU. Everything else is super slow, and the CPU/GPU has to wait for it. Having multiple CPUs try to talk to the same RAM/hard drive would result in a lot of waiting for their turn.
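You can see the “waiting for their turn” problem with a toy simulation of CPUs sharing one memory bus (purely illustrative; the cycle counts are made up, not real hardware numbers):

```python
from collections import deque

def run_simulation(num_cpus: int, requests_per_cpu: int, cycles_per_request: int = 4) -> int:
    """Toy model: CPUs share one memory bus that serves a single request at a time.
    Returns total cycles until every CPU's requests are done."""
    # Each CPU queues up its memory requests; the bus round-robins between them.
    queues = [deque(range(requests_per_cpu)) for _ in range(num_cpus)]
    cycles = 0
    while any(queues):
        for q in queues:
            if q:
                q.popleft()
                cycles += cycles_per_request  # bus is busy; every other CPU stalls
    return cycles

print(run_simulation(1, 8))   # 32 cycles: one CPU has the bus to itself
print(run_simulation(2, 8))   # 64 cycles: each CPU spends half its time waiting
```

The bus does the same work per request either way; adding a second CPU just means each one spends more of its life stalled.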
It’s cheaper and a better design to put 24 cores in one CPU rather than have 12 cores in two CPUs.
Most things are still programmed as if they were single-threaded, and most things we want computers to do are sequential and not very multi-threadable.
Very interesting take for what I’d consider to be a semi-shitpost. Yeah, more programs are multithreaded than years back, but thread safety still poses a big challenge: when parts of a program execute in parallel and one part finishes sooner than another, you can get a plethora of race conditions.
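The classic thread-safety fix for that kind of race is to serialize the shared read-modify-write with a lock. A minimal sketch (the counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n: int) -> None:
    """Read-modify-write protected by a lock, so increments can't interleave."""
    global counter
    for _ in range(n):
        with lock:          # only one thread inside at a time
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 400000 with the lock; without it, the same code can lose updates
```

Without the lock, two threads can both read the old value, both add one, and both write back, silently dropping an increment.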
For multi-CPU systems, there’s NUMA, which tries to use the memory closest to each processor first before going all the way across the motherboard to fetch data from a different set of memory. That’s why server boards have a set of DIMMs next to each processor. Though this is coming from a hardware layman, so I’m pretty sure I’m not being entirely accurate here. Low-level stuff makes my brain hurt sometimes.
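The NUMA idea can be sketched as a toy model: prefer memory on your own node, and only reach across the board when you have to (the latency numbers here are made up for illustration, not measurements):

```python
# Toy NUMA model: each CPU belongs to a node, and each memory node has a cost.
LOCAL_NS, REMOTE_NS = 100, 300   # assumed latencies, purely illustrative

def access_latency(cpu_node: int, memory_node: int) -> int:
    """Accessing memory attached to your own node is much cheaper."""
    return LOCAL_NS if cpu_node == memory_node else REMOTE_NS

def pick_node(cpu_node: int, free_pages: dict[int, int]) -> int:
    """NUMA-aware allocation: prefer the local node while it has free pages."""
    if free_pages.get(cpu_node, 0) > 0:
        return cpu_node
    # local node exhausted -> fall back to the remote node with the most free pages
    return max(free_pages, key=free_pages.get)

free = {0: 2, 1: 1000}
print(pick_node(0, free))                 # 0: local node still has pages
free[0] = 0
print(pick_node(0, free))                 # 1: local exhausted, go across the board
print(access_latency(0, 0), access_latency(0, 1))
```

Real kernels do roughly this (plus migration and interleaving policies), which is why keeping a process and its memory on the same node matters.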
One of my systems has 2 CPUs if you ever want me to run benchmarks c:
I played around with a dual-CPU system. But it just used too much power and it was way too overpowered for my needs. Don’t remember if I ran a benchmark on it, though…
I see. I only have mine because more CPUs==more PCIe slots.
Oooh. How many PCIe slots do you get and what kind?
With the motherboard I am using (Supermicro X9DRH-7F) I get six x8 PCIe slots and one x16 PCIe slot, for a total of seven slots. All of them communicate directly with the CPUs, as opposed to some boards where the slots go through the chipset. There are motherboards with even more, but they are more expensive. I got this one because I will need the bandwidth for model parallelism across multiple GPUs.
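Quick lane math for that layout (assuming the slots are electrically full-width; E5-2600-series Xeons provide 40 PCIe 3.0 lanes per socket):

```python
# Lane budget for the slot layout described above.
slots = [8] * 6 + [16]          # six x8 slots plus one x16 slot
lanes_needed = sum(slots)
print(lanes_needed)             # 64

# Two Xeon E5-2600-series sockets at 40 PCIe 3.0 lanes each.
lanes_available = 2 * 40
print(lanes_available >= lanes_needed)  # True: all slots can hang off the CPUs directly
```

That headroom is why dual-socket boards can route every slot to a CPU instead of through the chipset.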
Also, the least power-hungry CPUs you can get for this board are the Xeon E5-2630L, which each consume 60W under full load.
Here’s my slot layout
Specs of my server: https://burggit.moe/comment/59898
Oh nice, you could probably fit a few GPUs in there if your case has room for it.