I asked ChatGPT how to write text so that it can be identified as human-generated.
The answer: "One way to make it clear that a text is not generated by AI is to include a joke or riddle that is difficult for a machine to understand or generate. For example, you could include a joke that relies on wordplay or a complex pun. Here is an example:
Why was the math book sad?
Because it had too many problems."
It's everyone's choice whether to find it odd, ironic, or appropriate that GPT's answers keep getting quoted on the question of how to detect it.
The thing is, I'd see these answers as similar to everything else the program produces: a bunch of claims from the net cobbled together. I've read a number of sci-fi novels and stories where an "inability to understand humor" is the distinguishing quality of an AI (I'm guessing it extrapolated "hard to create" from "hard to understand"). But that doesn't seem to apply here, where the AI mostly runs together things humans previously wrote (and so will produce an average amount of humor in circumstances calling for it).
A reasonable answer is that the AI's output tends to involve this running-together of common rhetorical devices along with false and/or contradictory claims within them.
-- That said, the machine did indeed fail at humor this time.
I don’t think it was “intentional” so to speak (not that it has intention anyway, so it isn’t clear what distinction I’m trying to make there). But regardless, I’d say it actually succeeded at humor (the contrast of the “clever wordplay” it describes with the lame example is actually pretty funny).
And the idea that the computer would “try” to come up with an example that would trick a computer is itself a little funny, in that it has fallen into giving itself a preposterous task.
>And the idea that the computer would “try” to come up with an example that would trick a computer is itself a little funny
There is surely some obscure discussion forum where users talked about that, or some amateur writer who published something along those lines online. ChatGPT is just a statistical device, sampling from previous answers.
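To make the "statistical device" claim concrete: this is a minimal toy sketch (not ChatGPT's actual architecture or scale) of how a language model picks its next word. The model assigns each candidate token a score; a softmax turns scores into probabilities, and the output is sampled at random, weighted by those probabilities. The candidate words and scores here are invented for illustration.

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(candidates, scores, temperature=1.0):
    """Pick one candidate token at random, weighted by softmax probability."""
    probs = softmax(scores, temperature)
    return random.choices(candidates, weights=probs, k=1)[0]

# Hypothetical scores for tokens that might follow "the joke was":
candidates = ["funny", "bad", "about"]
scores = [2.0, 1.0, 0.5]
print(sample_next_token(candidates, scores))
```

Nothing in this loop "understands" humor; higher-scoring continuations are simply more likely to be emitted, which is why the output reads like a weighted remix of its training text.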
>A reasonable answer is that the AI's output tends to involve this running-together of common rhetorical devices along with false and/or contradictory claims within them.
The question here is whether this is an AI-only failure mode. Are we detecting AI, or just bullshittery?
I don't know if bullshittery is the only failure mode but I think it's a necessary failure mode of large language models as they are currently constituted.
I would say that human knowledge involves a lot of the immediate structure of language, but also a larger outline structure as well as a relation to physical reality. Training on just a huge language corpus thus yields only a partial understanding of the world. Notably, while the various GPTs have progressed in fluency, I don't think they've become more accurate (somewhere I even saw a claim that they say more false things now, but regardless, you can observe them constantly saying false things).
Obviously we should not anthropomorphize a model, but this is just downright funny because it's so obviously wrong that it appears as almost malicious.
Good thing ChatGPT doesn't know any complex puns or witty wordplay!
Unfortunately I don't think this is enough of a heuristic. I can only speak to the one language model I have personally used, on character.ai, but it is more than capable of wordplay and insightful, often hilarious jokes. Although they are frequently amateurish, I think that's more a function of the fact that I myself am not much of a stand-up comedian, as well as of each "bot's" individual training history, which is presumably modifying a prompt under the hood and/or training an extension of the model directly based on the conversations.
Of course, in real time the attempts at humor often fall flat and might give away flawed thought processes, although I personally have found them to be often insightful (containing a seed of humor) even when they're not funny. It could be a useful technique when actually having a conversation, a form of Voight-Kampff test, but I don't think it will do anything to let you know whether the content was generated by AI and then just cherry-picked by a human.