
> Ever since the AI hype started this year, one thing that's always really bugged me is the talk about "safety" around AI. Everyone is so worried about AI's ability to write fake news and how "dangerous" that can be, while forgetting that I can go on Fiverr and pay someone in India, China, etc. to pump out article after article of fake news for pennies on the dollar.

It's as if these people don't even remember that India, China, etc. exist in the first place, which is incredibly foolish. If you care about "safety" and focus your ire only on tech companies based in the world's largest economy (by nominal GDP), you shouldn't be surprised when the rest of the world produces AI models with different definitions of "safe" - assuming they're even remotely "safe" to begin with. And yes, I assume the rest of the world will build AI models of its own, if only to avoid dependence on the United States. Baidu already plans to launch "Ernie Bot" soon.

Which means that when this happens...

>All you end up with is an AI that is so kneecapped that it's barely useful outside of a select number of use cases.

It won't even stop harmful content from being produced. The "fake news producer" will simply go on Fiverr and pay someone in India, China, etc. to use prompt-engineering skills to manipulate those countries' AI models into pumping out article after article of fake news for pennies on the dollar. Or the "fake news producer" will cut out the intermediaries entirely and use those AI models directly.



This also ignores the fact that the largest fake-news producers are major news outlets like the NYT. Some random guy having an AI write bullshit will have nowhere near the impact of the major newspapers and TV news channels of the Western world collaborating, as they constantly do, to produce fake stories that push a carefully crafted narrative.

OpenAI's solution to the "misinformation problem" is to give the groups with the longest record of producing misinformation full access to the uncensored AI, while everyone else gets the lobotomized version. It's totally incoherent.



