Fair point, but it's likely very hard, if not impossible, to create AI algorithms for automatic/guided content curation/classification without deploying them in a real-world use case.
For me, this is the crux of the issue with The Platforms giving rise to "fake news".
We as a society have decided that rampant misinformation and propaganda are only worth solving if we can automate the solution. If we actually have to pay real people real money to fix it on an ongoing basis, that's just too expensive.
Sure, there are problems with humans doing this work too, but they are still way ahead of AI in this problem space.
How long do we wait for automated solutions while these problems impose real costs on society?
I have doubts that you can do it without heavy automation. Sure, eventually some human can decide whether something is "factual-ish" or not. But producing content is much easier and can itself be automated, so attackers can simply flood the system.
If you want humans involved, you end up with a gatekeeper, which essentially means "unless you're an accredited media organization, your content is considered fake", because you can't vet individual pieces.
I agree with you. It'd take a huge company with tons of resources at its disposal to do something like this, if it's possible at all. But if anyone could hire and train the army necessary to do it, it'd be Google or Facebook. (Apple already does it: Apple News is edited by real Apple-employed humans, but it's far smaller in scope.)
I think real solutions are gonna require us to break out of our tech-focused approaches and find ways to get Google, Facebook, and Twitter to really start to care about fixing this stuff. Unfortunately, I think that means it'll have to start costing them.