
> They need balanced examples of verbally expressed knowledge modesty to do this better.

This is generally pretty easy, with ChatGPT at least, in my experience.

Langchain uses this prompt: "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."
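
IIRC that string is the first part of ConversationChain's default template. A minimal sketch of wiring it up with the classic LangChain API, just as an illustration (imports and module paths have moved around in newer releases):

    # Sketch only: classic LangChain API; newer releases lay this out differently.
    from langchain.chains import ConversationChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate

    template = (
        "The following is a friendly conversation between a human and an AI. "
        "The AI is talkative and provides lots of specific details from its "
        "context. If the AI does not know the answer to a question, it "
        "truthfully says it does not know.\n\n"
        "Current conversation:\n{history}\nHuman: {input}\nAI:"
    )
    prompt = PromptTemplate(input_variables=["history", "input"], template=template)

    chain = ConversationChain(llm=OpenAI(temperature=0), prompt=prompt)
    print(chain.predict(input="Can you book me a table for two tonight?"))
    # With that instruction in place, the model is more likely to answer that
    # it can't actually book anything, rather than pretend it can.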

Also, just don't ask it for things it couldn't possibly know without making something up, such as medical citations.

This is more for cases where you want your "AI" assistant to say it does not know how to book a reservation, rather than pretend that it can.



> Langchain uses this prompt: "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."

There seem to be a lot of cases where people claim the models can't do something, but with minimal direction, they can.

Like avoiding bias. Or doing multi-digit arithmetic carefully instead of guessing, etc.
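
E.g., for the arithmetic case, something like this (hypothetical prompt wording, only to show how little direction is needed):

    # Hypothetical prompts, just to illustrate "minimal direction".
    bare_prompt = "What is 48729 * 36584?"

    directed_prompt = (
        "Compute 48729 * 36584 the way you would on paper: multiply digit by "
        "digit, write out each partial product and carry, then add the "
        "partial products. State the final answer only after checking each step."
    )
    # The bare prompt often gets a single guessed number; the directed one
    # tends to produce a worked derivation that is far more likely to be right.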

I don't know a human who doesn't need any feedback either.

Obviously, many of these things would be better handled in the training stage, so that bias avoidance, careful serial reasoning, etc., become part of its baseline thinking.

When that happens, I would expect fewer mistakes, but also an overall increase in the quality of responses and the ability to handle greater complexity. Clear, careful thinking reduces the error at each step, making longer chains of reasoning more viable.



