The question I have is: what's a prompt that reliably hallucinates but still produces the correct answer some of the time?
I know it gets some Python functions "wrong", but I think they were actually "right" in the version it was trained on, so software seems out.
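A concrete instance of that version drift, as a sketch: `from collections import Iterable` was valid through Python 3.9 but was removed in Python 3.10, where the import must come from `collections.abc`. A model trained mostly on older code could emit the pre-3.10 form and look "wrong" on a current interpreter.

```python
# Code that was "right" in older Python but "wrong" now:
# `collections.Iterable` was deprecated in 3.3 and removed in 3.10.
try:
    from collections import Iterable  # worked on Python < 3.10
except ImportError:
    from collections.abc import Iterable  # required on Python >= 3.10

# Either way we end up with the same ABC, usable for isinstance checks.
print(isinstance([1, 2, 3], Iterable))
```

On any supported interpreter this prints `True`; the point is only that which import line succeeds depends on the Python version, not on the code being wrong per se.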