
That being said, presumably if you’re running a huge farm of GPUs, you could put each expert onto its own slice of GPUs and orchestrate the data flow between them as needed. I have no idea how you’d do this…


Ideally those many GPUs could be on different hosts connected with a commodity interconnect like 10GbE.

If MoE models do well, it could be great for distributed inference approaches built on commodity hardware.
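To make the idea concrete, here is a minimal sketch of expert-parallel dispatch (all names are illustrative, not from any specific framework): a top-1 gate assigns each token to an expert, tokens are grouped per expert (the grouping that an all-to-all transfer over the interconnect would carry between hosts), each expert processes its bucket, and results are scattered back into the original order.

```python
# Hypothetical sketch of expert-parallel MoE dispatch. Each expert is
# imagined to live on its own host/GPU; the network transfer is simulated
# by plain Python dicts.

def route(tokens, gate):
    """Top-1 gating: assign each token to one expert id."""
    return [gate(t) for t in tokens]

def dispatch(tokens, assignments, num_experts):
    """Group tokens by expert, remembering original positions.
    This per-expert grouping is what would be shipped over 10GbE."""
    buckets = {e: [] for e in range(num_experts)}
    for pos, (tok, e) in enumerate(zip(tokens, assignments)):
        buckets[e].append((pos, tok))
    return buckets

def combine(buckets, experts, n):
    """Run each expert on its bucket, then scatter results back in order."""
    out = [None] * n
    for e, items in buckets.items():
        for pos, tok in items:
            out[pos] = experts[e](tok)
    return out

# Toy example: two "experts" standing in for models on separate hosts.
experts = {0: lambda x: x * 2, 1: lambda x: x + 100}
tokens = [1, 2, 3, 4]
assign = route(tokens, lambda t: t % 2)        # [1, 0, 1, 0]
buckets = dispatch(tokens, assign, 2)
result = combine(buckets, experts, len(tokens))
# result == [101, 4, 103, 8]
```

In a real system the dispatch/combine steps would be all-to-all collectives, and the gate would be a learned softmax over experts, but the data movement pattern is the same.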



