Hacker News

> This is totally misleading to anyone less familiar with how LLMs work. They are only programs inasmuch as they perform inference from a fixed, stored statistical model. It turns out that treating them theoretically in the same way as other computer programs gives a poor representation of their behaviour.
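For what it's worth, the point in the quote — that inference is an ordinary program looping over fixed, stored statistics, while the output is sampled and therefore behaves statistically — can be sketched with a toy model. Everything below (the bigram table, its probabilities, the function names) is invented for illustration and is nothing like a real LLM's scale:

```python
import random

# Fixed, stored "model": a toy bigram table of next-token probabilities.
# These numbers are made up for illustration.
BIGRAM = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"</s>": 1.0},
    "ran": {"</s>": 1.0},
}

def sample_next(token, rng):
    # Inference step: look up the stored distribution and sample from it.
    dist = BIGRAM[token]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

def generate(rng, max_len=10):
    # Autoregressive loop: feed each sampled token back in as context.
    out, tok = [], "<s>"
    for _ in range(max_len):
        tok = sample_next(tok, rng)
        if tok == "</s>":
            break
        out.append(tok)
    return " ".join(out)

print(generate(random.Random(0)))
```

The loop itself is a perfectly ordinary program; the "statistical" character comes entirely from the sampling step, which is why two runs with identical code and identical weights can still produce different outputs.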

Can you share any reading on this?




