ttul on Dec 8, 2023 | on: Mistral "Mixtral" 8x7B 32k model [magnet]
This being said, presumably if you’re running a huge farm of GPUs, you could put each expert onto its own slice of GPUs and orchestrate data to flow between GPUs as needed. I have no idea how you’d do this…
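For what it's worth, here is a rough sketch of what that orchestration could look like: one process per GPU, each rank owning a single expert, with tokens exchanged via all_to_all (the usual "expert parallelism" pattern). This is not Mixtral's actual code; the function names (moe_layer, router, my_expert), the top-1 routing, and the assumption that a torch.distributed process group is already initialized are all illustrative placeholders.

    import torch
    import torch.distributed as dist

    def moe_layer(x, router, my_expert):
        # x: [tokens, hidden] held by this rank; router scores experts per token.
        world = dist.get_world_size()
        expert_ids = router(x).argmax(dim=-1)  # top-1 routing for brevity

        # Bucket local tokens by destination expert (one expert per rank here).
        send = [x[expert_ids == e] for e in range(world)]
        counts = torch.tensor([s.shape[0] for s in send], device=x.device)

        # Tell every rank how many tokens to expect, then exchange the tokens.
        recv_counts = torch.empty_like(counts)
        dist.all_to_all_single(recv_counts, counts)
        recv = [x.new_empty(int(n), x.shape[-1]) for n in recv_counts]
        dist.all_to_all(recv, send)

        # Each rank runs only the expert it owns, on whatever tokens arrived.
        out = [my_expert(r) for r in recv]

        # Send the expert outputs back to the ranks the tokens came from.
        back = [torch.empty_like(s) for s in send]
        dist.all_to_all(back, out)

        # Restore the original token order.
        y = torch.empty_like(x)
        for e in range(world):
            y[expert_ids == e] = back[e]
        return y

Production expert-parallel implementations add capacity limits and load balancing on top of this, but the two all_to_all exchanges per MoE layer are the core of the data flow.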
alchemist1e9 on Dec 8, 2023
Ideally those many GPUs could be on different hosts connected with a commodity interconnect like 10GbE.
If MoE models do well, it could be great for commodity-hardware-based distributed inference approaches.
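A back-of-envelope check (assumed Mixtral-like figures: hidden size 4096, 32 MoE layers, top-2 routing, fp16 activations; treat all of them as approximations) suggests raw 10GbE bandwidth is not obviously the bottleneck:

    # Rough estimate, not a benchmark: activation traffic per generated token
    # when experts live on remote hosts and each routed token is shipped out
    # and its expert output shipped back, at every MoE layer.
    hidden, layers, experts_per_token, bytes_per_val = 4096, 32, 2, 2

    per_token = hidden * bytes_per_val * experts_per_token * layers * 2  # out + back
    link_bytes_per_s = 10e9 / 8                                          # 10GbE ~ 1.25 GB/s

    print(f"activation traffic per token: {per_token / 1024:.0f} KiB")
    print(f"raw transfer time per token over 10GbE: {per_token / link_bytes_per_s * 1e3:.2f} ms")

That works out to roughly 1 MiB per token and under a millisecond of raw transfer time; the bigger concern at small batch sizes would likely be the dozens of network round-trips per token (two per MoE layer) rather than bandwidth.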