
Since when has science been reasonable? :)

In situations like this, I tend to speak in theoretical absolutes. A computer that "could - like humans - learn to produce infinite, novel, contextual, and meaningful grammatical utterances" isn't even on the timeline right now, but it's the theoretical goal in showing that we understand language acquisition (ontogenetic development), evolution (phylogenetic development), and production.

Just because that goal seems unattainable doesn't, to me, mean that we need to aim any lower. Now, this is premised on my belief that mimicking phenomena with statistical learning is not as intellectually satisfying as understanding the underlying cognitive systems, but not everyone shares that belief.



I agree that learning to produce and interpret varied utterances is a worthy goal, but the fact is that (distant as that goal still is today) lowly statistical methods have gotten us closer to it than the other, Chomskyan approach. It could be a situation where aiming lower lets you shoot higher.
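
To make "lowly statistical methods" concrete: even a toy bigram model, trained on nothing but raw word counts, will generate utterances it never saw verbatim. A minimal sketch in Python (the corpus and function names here are illustrative, not from any particular system):

    import random
    from collections import defaultdict

    def train_bigrams(corpus):
        # Record which words follow which, with repetition as frequency.
        successors = defaultdict(list)
        for sentence in corpus:
            words = ["<s>"] + sentence.split() + ["</s>"]
            for prev, cur in zip(words, words[1:]):
                successors[prev].append(cur)
        return successors

    def generate(successors):
        # Walk the chain, sampling each next word in proportion
        # to how often it followed the current one in training.
        word, out = "<s>", []
        while True:
            word = random.choice(successors[word])
            if word == "</s>":
                return " ".join(out)
            out.append(word)

    corpus = ["the dog chased the cat",
              "the cat saw the dog"]
    model = train_bigrams(corpus)
    print(generate(model))  # may print "the dog saw the cat" -- a sentence not in the corpus

No grammar, no cognitive model, just counts; which is exactly why the grandparent finds the approach unsatisfying, and also why it scales so cheaply.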


This is a fundamental misunderstanding of what modern generative linguistics is all about (to be fair, it is extremely widespread). The aim of this branch of science is expressly not to "learn to produce and interpret varied utterances" (called E-language in the jargon), but to understand the cognitive processes behind the production and interpretation of utterances (called I-language). Now you may agree or disagree with the methods and assumptions used in the pursuit of this goal, but it is patently unfair to accuse the field of failing to do something it never set out to do.



