GPT-4 gets things wrong as well, especially as soon as you step off the well-beaten path. I tried writing code with my brain off and GPT-4 on: the Terraform code was mostly right but didn't work; Python imports for recent libraries (llama-cpp-smth) were a complete fabrication, even when I gave the AI the documentation beforehand; and we went in circles around a problem for which it kept giving me the same solution, which produced the same error (around Python multiprocessing, which is very picky about nested parallelism and how worker functions are imported).
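For context, the multiprocessing pitfall being described is most likely the classic one: worker functions must be importable at module top level (so child processes can pickle and re-import them), and spawn-based platforms re-execute the main module, so pool creation has to live behind a main guard. A minimal sketch of the correct shape (the `square` function is just an illustration, not the commenter's actual code):

```python
import multiprocessing as mp

def square(x):
    # Must be defined at module top level: spawn-based start methods
    # pickle a reference to this function and re-import the module in
    # each child. Lambdas and nested functions fail here.
    return x * x

def main():
    with mp.Pool(processes=2) as pool:
        results = pool.map(square, [1, 2, 3, 4])
    print(results)  # [1, 4, 9, 16]

if __name__ == "__main__":
    # Without this guard, platforms that default to "spawn"
    # (Windows, recent macOS) re-run the whole module in every
    # child, recursing into Pool creation.
    main()
```

Nested parallelism is the other trap: regular pool workers are daemonic and are not allowed to create their own child processes, so a `Pool` inside a `Pool` raises rather than fanning out.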
Well. You would definitely have to very carefully select a very, VERY narrow slice of society to get a piece where QAnon supporters make up a significant percentage of people.
But hey, if you are really looking to convince yourself of something, I have no doubt that it can be done.
The point is that people keep repeating it because it's true. Why are you being so obnoxious about it? Anyone who has used GPT-3.5 vs GPT-4 knows this and it's just ridiculous to claim otherwise.
My guess is most people into AI don't even remember that they're paying $20/month for this.
We do a lot of experiments involving GPT-3.5, GPT-4, claude-v2, titan-large, and palm2, and for what it's worth, GPT-4 shines on our real production workloads. We can make palm2 produce decent results with a lot of extra effort, and claude-v2 is passable, but GPT-4 does not disappoint. This is low-grade knowledge-management stuff, and we are not using it as an information-retrieval system, but for basic 'cognitive' tasks where all the information needed is provided in the prompt. I'd not rely on it for info-retrieval tasks such as the examples quoted above; its knowledge base is highly compressed, after all.
If you don't pay for ChatGPT, you get GPT-3.5. You can also get access to GPT-4 if you use the playground.