It's one of the biggest answer-analysis issues with LLMs, yeah. Non-experts can't spot when they're being blatantly lied to, because the output is almost always plausible. That's what these models do: produce plausible continuations of what came before.
