Hacker News

Maybe, but even that fourth-order metric misses key performance variables such as context length and model size/sparsity.
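To make the point concrete, here is a hypothetical sketch of what folding those missing variables into a throughput metric might look like. The function name, the linear weighting, and the example numbers are all illustrative assumptions, not an established benchmark or the metric the thread refers to:

```python
def normalized_throughput(tokens_per_sec: float,
                          context_len: int,
                          total_params_b: float,
                          active_params_b: float) -> float:
    """Illustrative (not standard) metric: scale raw tokens/sec by
    context length and by the *active* parameter count, so a sparse
    MoE model is not credited as if every parameter did work per token."""
    sparsity_factor = active_params_b / total_params_b  # 1.0 for dense models
    # Longer contexts make each generated token more expensive; a simple
    # linear weight is assumed here purely for illustration.
    return tokens_per_sec * context_len * total_params_b * sparsity_factor

# Example comparison (made-up numbers): a dense 8B model vs. a
# sparse 100B-total / 12B-active MoE model, both at 4k context.
dense = normalized_throughput(50.0, 4096, 8.0, 8.0)
moe = normalized_throughput(30.0, 4096, 100.0, 12.0)
```

The dense model's score reduces to tokens/sec × context × 8B, while the MoE score reduces to tokens/sec × context × 12B active, which is the kind of distinction a raw tokens/sec figure hides.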

The bigger takeaway (IMO) is that local hardware will never really scale the way Claude or ChatGPT do. I love local AI, but it pushes against the fundamental limits of on-device compute.




