Hacker News

You're getting at a deep point of disagreement: should we expect a modern or near-future LLM to be limited by the intelligence of the people who generated its training data? I don't think anyone claims to have a provably correct answer. There's one intuition that says no (why should it be impossible to derive new insights from data collected by people who didn't have those insights?) and another that says yes (how can a statistical average of N people's most likely responses be smarter than any of those N?)


I think you are reading a little too much into my comment, but I understand where you are coming from. My point is that even if you agree with this:

  There’s just not that much distance between a chatbot that can manage a vending machine poorly and a chatbot that can manage it well.
it is a huge leap to conclude this:

  There’s not much distance between a chatbot that is as intelligent as a human and a chatbot that is more intelligent than a human.
But that seems to be what Anthropic is assuming.



