Hacker News

You talk about this as if a human-created neural network is at the same level as quantum physics, where there are limits to our understanding. We know very well how large language models work, even if the capabilities of this technology are still being actively explored.

You, along with others here, are far overstating the unknowns we have within the context of AI. Whether this is the result of a misinformation campaign aimed at boosting the value of this tech, or of pop-sci takes that have simply become too prevalent, is unclear to me.



For the definition of "understand" that most people use, humans don't understand things that are highly complex. We don't really understand the weather; we can't predict it; it's too complex. But you can break it down into matter, forces, and energy, simulate it, and get pretty darn good predictions. We can now throw it into a deep learning model and get good predictions. But to suggest we "understand" it doesn't gel with most people's definition of "understand".


wyager is correct. There are very serious limits to our understanding of what's going on inside these networks. Even how an LLM answers simple factual questions like "The capital of France is ..." is only now coming into view, and the moment things get more complex than that, interpretability is lost again.




