
> What is the entropy per word of random yet grammatical text?

That is what those 5-11 bit estimates are about. They correspond to a choice out of 32 to 2048 options per word (2^5 to 2^11), which is far fewer than there are words in English (the active vocabulary of a native speaker should be somewhere around 10,000).
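
To make those numbers concrete, a quick back-of-the-envelope check (the 5 and 11 bit figures and the ~10,000-word vocabulary are just the estimates quoted above):

    import math

    # n bits of entropy per word = a uniform choice among 2^n words
    for bits in (5, 11):
        print(f"{bits} bits/word -> {2**bits} choices per word")
    # 5 bits/word -> 32 choices per word
    # 11 bits/word -> 2048 choices per word

    # Bits needed to pick uniformly from a ~10,000-word active vocabulary
    print(f"log2(10000) = {math.log2(10_000):.1f} bits")  # ~13.3 bits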

Just consider Thing Explainer (by the XKCD author), which limits itself to a 1,000-word vocabulary and is very obviously not idiomatic.

If you want your big "if" to produce credible output, there is simply no way around the entropy bounds on the input and the desired output, and those bounds render the concept absolutely infeasible even for I/O lengths of just a few sentences.
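
A rough sketch of why the bound bites (the 5 bits/word and the ~45-word exchange length are illustrative assumptions, not measured figures):

    # Assume a conservative 5 bits of entropy per word and an exchange of
    # ~45 words (about three short sentences) -- illustrative numbers only.
    bits_per_word = 5
    words = 45

    distinct_inputs = 2 ** (bits_per_word * words)
    print(f"2^{bits_per_word * words} = {distinct_inputs:.2e} distinct inputs")
    # 2^225 = 5.39e+67 -- one branch per input is hopeless at any scale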

Eliza is not comparable to GPT because it does not hold up to even very superficial scrutiny; it's not really capable of even pretending to intelligently exchange information with the user, it just relies on some psychological tricks to keep a "conversation" going, more or less...
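
For a sense of how little machinery those tricks need, here is a minimal Eliza-style sketch (the patterns are invented for illustration, not Weizenbaum's original script):

    import re

    # Pronoun reflection: turn the user's statement back at them
    REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

    # (pattern, response template) pairs -- illustrative rules only
    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"(.*) mother(.*)", "Tell me more about your family."),
        (r"(.*)", "Please go on."),  # catch-all keeps the "conversation" going
    ]

    def respond(text):
        text = text.lower().strip(".!?")
        for pattern, template in RULES:
            m = re.match(pattern, text)
            if m:
                groups = [" ".join(REFLECT.get(w, w) for w in g.split())
                          for g in m.groups()]
                return template.format(*groups)

    print(respond("I need a vacation"))   # Why do you need a vacation?
    print(respond("I am tired of this"))  # How long have you been tired of this?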



> Eliza is not comparable to GPT because it does not hold up to even very superficial scrutiny; it's not really capable of even pretending to intelligently exchange information with the user, it just relies on some psychological tricks to keep a "conversation" going, more or less...

That's kinda the point I was making: tricks can get you a long way.

The comparison with GPT is not "and therefore GPT is bad" but rather "it's not necessarily as smart as it feels".

Perhaps I should've gone for "Clever Hans" or "why do horoscopes convince people"?



