Hacker News

Scaling laws are only empirical curve fitting and extrapolation, and none of them predicts a discontinuous "jackpot effect."
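A minimal sketch of the point: a scaling law is just a power-law fit to observed losses, extrapolated forward. The parameter counts and losses below are illustrative numbers, not from any real model family, and the functional form L(N) = a·N^(−b) is the assumed shape.

```python
import math

# Hypothetical (illustrative) loss measurements at increasing parameter counts N.
data = [(1e6, 4.0), (1e7, 3.2), (1e8, 2.56), (1e9, 2.048)]

# Fit L(N) = a * N**(-b) by linear least squares in log-log space:
# log L = log a - b * log N.
xs = [math.log(n) for n, _ in data]
ys = [math.log(l) for _, l in data]
k = len(data)
mx, my = sum(xs) / k, sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = -slope
a = math.exp(my + b * mx)

# Extrapolate to a 10x larger model: the prediction is a smooth continuation
# of the curve, with no mechanism for a discontinuous jump in capability.
pred = a * 1e10 ** (-b)
```

The fit can only ever predict a smooth continuation of the trend it was fit to, which is exactly why a discontinuous "jackpot" lies outside what such a law can forecast.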


AFAIK, certain abilities, such as doing arithmetic, manifest at discrete scale points even though the underlying capability builds up continuously. There is also the more remote possibility of a discrete scale point at which an AI takes over its own training, or at least starts to contribute to it substantially. A lot of real-world leverage and arbitrage depends on such discrete surprises, which may not be visible during continuous, incremental evolution. I think this principle holds computationally as much as it does biologically.
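One common toy explanation of this effect can be sketched directly: if per-digit accuracy improves smoothly with scale, but an arithmetic problem is only scored correct when every digit is right, the exact-match metric stays near zero for a long time and then jumps. The logistic shape and the task size below are assumptions for illustration, not a claim about any specific model.

```python
import math

# Toy model of "emergence": latent per-digit accuracy p rises smoothly with
# log(parameter count), but a K-digit problem counts as correct only if all
# K digits are right, so the observed metric is p**K -- near zero, then a jump.
K = 8  # digits per problem (hypothetical task size)

def per_digit_accuracy(log_params: float) -> float:
    # Smooth, continuous improvement with scale (assumed logistic curve).
    return 1 / (1 + math.exp(-(log_params - 8)))

# Observed exact-match accuracy across nine orders of magnitude of scale.
curve = [(lp, per_digit_accuracy(lp) ** K) for lp in range(4, 13)]
```

The latent skill here never jumps; only the thresholded metric does, which is one way a "discrete surprise" can sit on top of a perfectly continuous buildup of potential.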



