
...doing far more harm than good...

Odd turn of phrase. Thinking that LLMs work like brains may be holding back an advance to full AGI, but is that a harm or a good? I'm not against powerful models per se, but "build and deploy this stuff as fast as possible with minimal consideration of the consequences" definitely seems like a harm to me. Perhaps the Sam Altmans of the world should keep believing LLMs are "brains".



I guess it would depend on how you view AGI. I personally do not believe AGI is possible under current or near-future technology, so it is not really a concern to me. Even the definition of "AGI" is a little murky: we can't even definitively nail down what "g" is in humans, so how would we do that for a machine?

Anyway, that aside, yes, your general understanding of my comment is correct: if you do believe in AGI, this kind of framing is harmful. If, like me, you don't believe in AGI, you will think it is harmful because we're inevitably headed into another AI winter once the bubble bursts. There are very useful things that can actually be done with ML technology, and I'd prefer we keep investing resources in that without all this nonsensical hype that could bring it crashing down at any moment.

An additional concern of mine is that continuing to make comparisons this way makes the broader populace much more willing to trust and accept these machines implicitly, rather than understanding that they are inherently unreliable. However, that ship has probably already sailed.


> "I personally do not believe AGI is possible under current or near-future technology"

Calculators do arithmetic faster and more reliably than human brains, and do so using many fewer transistors than we have neurons. Wheels and tarmac are simpler, more efficient and faster at forwards motion than jointed human legs. Boston Dynamics' robots can cross rough terrain with legs, without needing bones and flesh and nerves and blood and skin and hair and toes with toenails.

What if language, reasoning, logic, and intelligence are similar - what if they could be done on simpler hardware, by not doing them the way the human brain does, if only we knew how?

I don't suppose this can be answered either way until someone builds an AGI or understands how the brain works, but is there a strong reason you think this reasoning doesn't or cannot apply to thinking, other than "the brain is the only thing we know of which does this"?



