Not only does social media know enough about us to build powerful psychological models of both individuals and groups, it is also increasingly in control of our information diet. It has access to a set of extremely effective psychological exploits to manipulate what we believe, how we feel, and what we do.
A sufficiently advanced AI algorithm with access to both perception of our mental state, and action over our mental state, in a continuous loop, can be used to effectively hijack our beliefs and behavior.
Using AI as our interface to information isn’t the problem per se. Such AI interfaces, if well-designed, have the potential to be tremendously beneficial and empowering for all of us. The key factor: the user should stay fully in control of the algorithm’s objectives, using it as a tool to pursue their own goals (in the same way that you would use a search engine).
As technologists, we have a responsibility to push back against products that take away control, and dedicate our efforts to building information interfaces that place the user in charge. Don’t use AI as a tool to manipulate your users; instead, give AI to your users as a tool to gain greater agency over their circumstances.
That'd be interesting if I could tell Facebook (or whatever social media) I want to be closer to my family (so it surfaces family posts) or local community (so it surfaces geographically close posts) or that I want to lose weight (so it surfaces motivational/dieting posts) or I am training for a marathon (so it surfaces running/training posts) or I went through a bad breakup recently (so it surfaces pictures of puppies and my single friends living fun bachelor lifestyles), or that I want to learn programming (so it surfaces programming things). Or like a button for "Only show me positive, happy stuff".
I mean, I guess you can achieve all of that with a carefully curated newsfeed / friends list, but that's different from having the dials for what you want to feel or accomplish (how you want to be psychologically manipulated), and the AI could periodically check in ("Am I making you happier?" or "Do you feel inspired to run more often?") and adjust how the content is being surfaced.
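Something like this, as a minimal sketch (everything here is made up for illustration, not any real platform's API): the user sets the goal dials directly, posts are ranked against those goals, and the periodic check-in nudges a dial based on the user's own answer.

```python
# Hypothetical sketch: a feed ranker whose objectives are set by the user,
# with a periodic check-in that adjusts the dials from the user's feedback.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    tags: frozenset  # e.g. {"family", "running", "positive"}

@dataclass
class UserGoals:
    # Goal -> weight, set directly by the user (the "dials").
    weights: dict = field(default_factory=dict)

    def score(self, post: Post) -> float:
        # A post scores by how well its tags match the user's stated goals.
        return sum(w for goal, w in self.weights.items() if goal in post.tags)

    def check_in(self, goal: str, helped: bool, step: float = 0.1):
        # "Am I making you happier?" -> the answer moves the dial,
        # but the dial never leaves the user's hands.
        current = self.weights.get(goal, 0.0)
        self.weights[goal] = max(0.0, current + (step if helped else -step))

def rank_feed(posts, goals: UserGoals):
    return sorted(posts, key=goals.score, reverse=True)

if __name__ == "__main__":
    goals = UserGoals(weights={"family": 1.0, "running": 0.5, "positive": 0.8})
    feed = [
        Post("Cousin's wedding photos", frozenset({"family", "positive"})),
        Post("Marathon training plan", frozenset({"running"})),
        Post("Outrage-bait headline", frozenset({"negative"})),
    ]
    for post in rank_feed(feed, goals):
        print(f"{goals.score(post):.1f}  {post.text}")
    # Check-in: "Do you feel inspired to run more often?" -> yes
    goals.check_in("running", helped=True)
```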
That’s the thing though, right? How do you optimize for two contradictory objectives? The “simple” answer is that you align incentives and avoid the problem by redefining it.
You mean distinct objectives, and the answer is with weights (sliders, usually in the UI). Align what with what? And how is taking these choices out of the hands of the principal and handing them to the agent avoiding the principal-agent problem?!? But in any case, how does this relate to your previous comment?
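The weights-as-sliders idea in a few lines (objective names and numbers are invented for illustration): distinct objectives collapse into one score via user-set weights, and two slider settings make any tension between them explicit.

```python
# Hypothetical sketch: weighted combination of distinct objectives,
# where the user owns the slider weights.
def combined_score(objective_scores: dict, slider_weights: dict) -> float:
    """Weighted sum of per-objective scores."""
    return sum(slider_weights.get(name, 0.0) * value
               for name, value in objective_scores.items())

post = {"keeps_you_scrolling": 0.9, "matches_your_goals": 0.2}

# The same post ranks high under engagement-first sliders,
# low under goal-first sliders.
print(combined_score(post, {"keeps_you_scrolling": 1.0, "matches_your_goals": 0.0}))  # 0.9
print(combined_score(post, {"keeps_you_scrolling": 0.0, "matches_your_goals": 1.0}))  # 0.2
```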
The issue is that they see an opportunity: they can give you that control... or give it to someone who will pay them more for it than you can.
I've been a bit surprised that none of the news aggregator sites (including HN) realized that in most cases what 'most people' want is not what any individual user wants. And we have the technology for actual personalization of newsfeeds. But then that puts the control in the hands of the user... and I suppose they simply don't find that alluring.
Instead, I'm terrified of the step before that, which does not yet have a name, in which sophisticated bots generate a news bubble (whether real or fake may not even matter that much) that leads us to destroy ourselves.
Think about it: the riots shortly after the election, the cries of fake news (on both sides of the political spectrum), the growing gap between the left and right, etc.
That’s enough to get us off the rails in the US. Now imagine what that would do in a country with a much more fragile democracy, or a split religious population.
Truly, that is more frightening. If you’ve ever seen what one person can do to another under strained circumstances, you know what I mean.
Medium's "Pardon the interruption" pop up is starting to pissing me off.
This is the last article I will read on Medium until I hear they've removed it.