
> In particular, one I’d love to hear more about is why so many deployments are still choosing 2-socket servers by default

Bit of a guess, but connectivity has come at a stupidly high premium for too long. There are sweet blade chassis with price-optimized, less-than-full-power single-socket (1P) designs, but 10 GbE is still kind of novel there. Bigger form factors are starting to see 25 GbE at not-astronomical prices, and switches are, in some rare cases, reasonable too.

Power supplies, storage, networking... a computer has a lot of not-entirely-ancillary needs. Having multiple chips share those peripherals should make sense and should be cheap. It's not, though: the SMP tax is huge.

Thing is, we don't need SMP; we just need multi-host peripherals. We need NICs that, like the one on the Group Hug OCP board, can support four separate hosts via PCIe SR-IOV: a NIC that can present multiple virtual functions that different hosts can use (a rough sketch follows). NVMe similarly could be multi-port, and once was. Power supplies are already shared in OCP designs, with big bus rails, some at 48 V.
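
To make the virtual-function idea concrete, here's a minimal sketch of the OS-side single-host analogue: Linux exposes SR-IOV through sysfs, so splitting a NIC into virtual functions is just a couple of file writes. Caveat: a true multi-host NIC partitions itself across separate PCIe links to different hosts in hardware, not from any one OS like this, and "eth0" is a placeholder for whatever your NIC is actually called.

    #!/usr/bin/env python3
    # Minimal sketch: enabling SR-IOV virtual functions on a Linux NIC
    # via the standard kernel sysfs interface. Requires root and an
    # SR-IOV-capable NIC; "eth0" is a placeholder interface name.
    from pathlib import Path

    IFACE = "eth0"  # placeholder interface name
    DEV = Path(f"/sys/class/net/{IFACE}/device")

    def enable_vfs(num_vfs: int) -> None:
        total = int((DEV / "sriov_totalvfs").read_text())
        if num_vfs > total:
            raise ValueError(f"{IFACE} supports at most {total} VFs")
        # The kernel requires writing 0 before changing a nonzero VF count.
        (DEV / "sriov_numvfs").write_text("0")
        (DEV / "sriov_numvfs").write_text(str(num_vfs))

    enable_vfs(4)  # e.g. one VF per node in a four-node Group Hug-style split
    print((DEV / "sriov_numvfs").read_text().strip(), "VFs enabled")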

I'm no expert, but it sure seems dead obvious to me that the future of multi-socket is non-coherency: build a big board with a couple of isolated computers on it, connected via a shared NIC or NICs to the top-of-rack switch. We get close with the three-per-rack-width Open Compute systems, but each of those needs to be self-contained, and there's an obvious leap in efficiency to be had by merging those three separate computers onto a single motherboard while sharing some network and maybe storage devices. Also throw in some gratis PCIe NTB (non-transparent bridging) for a medium-speed (~32 GB/s on PCIe 4.0 x16; arithmetic below) direct server-to-server interconnect. Ideally add another NTB unit on most chips so we can make a little medium-speed, nearly-free ring, or other topology.
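
Quick back-of-envelope arithmetic for that interconnect figure, assuming PCIe 4.0's 16 GT/s per lane and 128b/130b line encoding:

    # Back-of-envelope bandwidth for a PCIe 4.0 x16 NTB link.
    GT_PER_S = 16.0       # PCIe 4.0 transfer rate per lane (GT/s)
    ENCODING = 128 / 130  # 128b/130b line-encoding overhead
    LANES = 16

    gb_per_s = GT_PER_S * ENCODING * LANES / 8  # bits -> bytes
    print(f"~{gb_per_s:.1f} GB/s per direction")  # ~31.5 GB/s

So the raw link is roughly 32 GB/s (about 256 Gbps) each way, before protocol overhead, which is why it's gigabytes rather than gigabits above.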

Choose single sockets, but choose many of them, each sharing some common peripherals.


