That's actually always a good guidance: What's the business model/the aim of a company? Does it align with my own aims and wishes?
It's the basic difference between Google/Facebook and Apple: Google and Facebook sell ads, Apple sells hardware. Google and Facebook need to know as much as possible about you to sell more ads. Apple does not.
If that's true, why did Apple ads start showing up in iOS? There's now a promotion for Apple's services in the iOS 14 settings, we're getting notifications about Apple Arcade things, and the Apple ads tracking setting is ON by default and split from the 3rd-party ad tracking settings...
> If that's true, why did Apple ads start showing in iOS?
Just because Apple does things besides sell hardware, doesn't mean those other things are their primary focus. "Services" make up 'only' 22% of their revenue:
I think there's an argument you can never fully ban it and there are ways to subtly inject and influence politics.
But my point was that Facebook could survive and thrive in a hypothetical situation wherein engagement over political content/disinformation was removed.
I don't want to start a meta flamewar, but I don't think HN has ever officially banned political discussion tout court. There's always been a fair amount of it on the site for the past decade, as far as I remember.
Yes indeed. That's the policy on stories, though. I think the policy on political discussion in comments (which often occurs in threads for "non-political" stories) has always been fairly unrestrictive.
There was a period a few years ago where a total ban was attempted, then rolled back after a couple of weeks - users protested by flagging everything and writing a comment saying how X was in fact political.
Value measures bereft of an encompassing political framework tend to degrade very quickly into a question of who can enact more existential violence on the other, and even that, given enough time for adaptation, can stabilize into an equilibrium indistinguishable from a polity.
This is the driving force behind symbiosis, family rearing, community building, business, justice systems (and the blind spots law enforcement needs in order to apply them universally), and the "taming" of the criminal element as a much-needed escape valve for development away from dysfunctional political dead ends. Quite to the contrary, all life tends toward the political, and politics as it happens makes all other value measures besides existence possible.
So quite to the contrary: from the system-centric point of view, in which a eukaryotic organism can be considered a polity of living tissues, extant fauna, and a set of evolved mechanisms for keeping everyone alive and in check (the stable political equilibrium from which all other activity inherent to prolonged existence can take place), the political is life.
I think what people mean when they say everything is political is that the actions of our politics shape just about every aspect of our lives. You might think that driving to a restaurant to buy a meal isn’t political, but everything about it is the result of decisions by voters, politicians, and bureaucrats. How smooth the drive is depends on how much tax money your city allocated for road maintenance and how city planners decided to lay out your area. The safety laws and regulations, and the competence of their enforcement, affect your chance of getting sick. Tax burdens, labor laws, minimum wage laws, trade agreements, etc. factor into the price of your meal. Your level of disposable income to go buy meals is affected by those labor laws, wage laws, the cost of rent or a mortgage (which is itself affected by city planning of what types of space can be built where), and on and on.
The point of “everything is political” is to snap people out of the mistaken idea that politics is just abortion, voting, gun rights, and tax loopholes.
Partly true. On the other hand I think it is a common mistake to attribute economic success to politics. Sure, they lay the framework, but that doesn't mean they are responsible for success or failure. Of course politicians like to claim it anyway.
> The point of “everything is political” is to snap people out [...]
Oh, wow. That can make sense, but I had a completely different interpretation. I understood it as a prompt to reflect on everything you do and examine whether it furthers partisan goals.
Yes - I think it's absolutely necessary to distinguish between "political", and "partisan". Not everything has to be partisan, and jamming everything into the two-party framework has a destructive Procrustean effect that leads to people making all kinds of increasingly stupid claims.
> On the other hand I think it is a common mistake to attribute economic success to politics. Sure, they lay the framework, but that doesn't mean they are responsible for success or failure.
I agree that it is a common mistake. It is difficult to draw direct lines from individual ups and downs to policies, and trying to do so is usually a tool for swinging voters more than an account of what is actually happening. But I think the effect political decisions have, in the aggregate, on shaping the economic situation can’t be overstated. Internally, look at the Federal-Aid Highway Act of 1956 and the resulting construction of roughly 50,000 miles of highway over the next two decades. While it is a bit harder to draw a direct line from all that to the rapid growth of Walmart and its newfound ability to transport large amounts of goods faster and to more places than rail, it is easy to say that that political decision had a profound effect on how the country operated over the last 60 years. The country would look very different in almost every facet had that not happened.
Externally, look at the trade agreements and foreign policy decisions of the post-WWII Cold War era. If the US had not decided to embrace Japan as a trading partner, had not imposed strict controls on military spending in its post-WWII government, and had not provided money from the Marshall Plan, would the Japanese government’s decision to rebuild the economy through the central planning of MITI have turned Japan into the technological powerhouse it became? Maybe, but I can’t imagine it would have looked anything like what they did end up with.
> I had a completely different interpretation. I understood it as a prompt to reflect on everything you do and examine whether it furthers partisan goals.
I think it is both. To snap you out of the idea that politics is just those big talking point issues that politicians and voters focus on (or that politicians convince voters to focus on). That it is a lot of everyday things that you may rarely think about being connected to “politics” and partisan actions.
> When a Utah county accidentally sent out 13,000 absentee ballots without a signature line, the NewsGuard Red-rated site LawEnforcementToday.com called this a "cheat-by-mail scheme."
This isn't a Facebook problem as such; it's just sensationalistic journalism. It's only that Facebook as a platform helps amplify it. Just my 2 cents.
And Facebook has changed the landscape of journalism to favor drive-by clickbait journalism. Does the riverbed guide the river or did the river carve out the riverbed?
I like your analogy. Maybe it's because I'm older than last time I cared about something this much, but this time I really don't have a solution that's consistent with any ideology that I have. For most of my adult life, I've leaned towards libertarian. I don't see the free market solving this one very well. I'm also skeptical of a government-sponsored ministry of truth, though. I'm at a loss for how to "fix" the effects of social media without causing far more damage with censorship. The closest stop-gap sort of damage control I can think of is to do everything possible to remove any barriers in the way of enabling private citizens to sue for damages when social media companies publish harmful material.
It sometimes really scares me to think about. The power of algorithms combined with big data is showing itself to be scarily good at influencing behavior. It is basically exploiting quirks of our brains to get us to do certain things more (namely stay on a particular platform and engage with it). It is like those optical illusions that are made so that even if you are told what you are really seeing, you are incapable of getting your brain to “fix” the illusion. But what if almost everything around us was an AI-designed optical illusion, constantly tweaked and improved until you are incapable of having an accurate visual understanding of the world around you? That is what our digital spaces are starting to feel like. But instead of just our visual centers, where we can at least say “I know my eyes are playing tricks on me”, it targets a much deeper level of the brain: our reward center, our values, our information processing.
And in the big scheme of things, algorithms aren’t even all that good yet. They are only going to get better at manipulating us and making us want and need it. Like the study with the mice that would compulsively push a lever to deliver drugs until they starved to death, except using digital stimulus to elicit the addictive biochemical reaction in our brains instead of a drug’s more direct chemical delivery method. Or maybe I’m just paranoid.
You're onto something. I try to clamp down every unnecessary email, marketing message, notification, etc. that I receive and I still feel like I'm in a fog of "engagement spam." Instagram lets me know that some random person I might know just got on Instagram. Photos reminds me of some random dinner I had this day five years ago with an ex. Reddit says some random topic I don't care about is "trending." Facebook seems to have moved half the news feed into a parallel feed in the "notifications" area. "Someone you barely know just posted for the first time in a while, check it out!"
I'm borderline anal about managing which notifications I allow, if a particular service lets you adjust them at all, but I feel like I'm constantly turning off some new one that they must have added and opted me into by default. I know I can turn them off for an app entirely, but sometimes I legitimately want a "ding" if a friend sends me a message. Yet I still feel like I'm in a constant haze of pointless "notifications" that are not about any event I need or even want to know about. It's just attention theft, where a hostile computer program acts of its own accord to cause my brain to have thoughts I didn't intend to have and waste my precious attention. I didn't ask it to remind me of something, or to let me know when a specific thing happens. This isn't even a "notification." It's just spam funneled into a channel that spammers know we haven't completely tuned out yet. They'll strip mine it until nobody bothers to pay attention to notifications anymore, and then they'll move to the next channel that still has any attention left in it.
If I could request any feature from Apple right now, it would be to treat "attention grabbing" as strictly as it treats access to the device's camera or photos. Instead of a blanket "allow notifications," make developers register exactly which notifications they want to send, with a predefined message template. Let me easily switch each of those templates on or off in the general Settings app instead of having to hunt through the settings pages of the app itself (if it offers the option to tweak notifications at all).
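As a rough sketch of what that registration model could look like (this is hypothetical, not a real Apple API; every name here is invented for the example):

```python
# Hypothetical per-template notification permission model: apps declare every
# notification type up front, new types default to off, and the user toggles
# each one independently in a central settings list.

class NotificationTemplates:
    def __init__(self):
        self.templates = {}  # template id -> {"message": ..., "enabled": ...}

    def register(self, template_id, message):
        # Apps must declare each notification type; newly added types are
        # disabled until the user explicitly opts in.
        self.templates.setdefault(template_id, {"message": message, "enabled": False})

    def set_enabled(self, template_id, enabled):
        # The OS-level settings toggle, outside the app's control.
        self.templates[template_id]["enabled"] = enabled

    def send(self, template_id, **fields):
        entry = self.templates.get(template_id)
        if entry is None or not entry["enabled"]:
            return None  # unregistered or disabled templates never fire
        return entry["message"].format(**fields)

prefs = NotificationTemplates()
prefs.register("friend_message", "{friend} sent you a message")
prefs.register("engagement_spam", "{friend} posted for the first time in a while!")
prefs.set_enabled("friend_message", True)

print(prefs.send("friend_message", friend="Alice"))  # delivered
print(prefs.send("engagement_spam", friend="Bob"))   # suppressed
```

The point of the design is that the "legitimate ding from a friend" and the engagement spam are separate, individually revocable permissions rather than one blanket switch.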
They could be more open to solutions like the visual highlighting that Twitter has implemented; even if it just says "caution", it could be a big game changer.
> it's not true that it's designed to propagate misinformation
That was hyperbole on OP’s part.
The algorithm is designed to incentivise the creation and dissemination of attention-grabbing content without considering truthfulness. That simply and directly incentivises misinformation propagation. The fact that politically-disinterested troll groups routinely get incentivised to produce misinformation for clicks is Exhibit A to how deeply Facebook promotes these mechanisms.
Facebook and its employees turn a blind eye to these second-order effects because they are massively profitable. That’s as close to “designed to” as one can get without literally coding for it.
What Facebook's system does by design is to propagate and amplify simplistic, emotionally potent narratives. These tend to be fictional i.e. misinformation because reality is nuanced and boring. Further, this has been used by bad actors to seed social discontent and affect democratic elections. Facebook know exactly what is happening on their system and by whom.
The algorithm is driven by engagement, which is driven by misinformation, because that’s what people click on more. So then a lot more misinformation is written to feed this need. Clearly a lot of money is involved.
I think it's possible. I know for certain there are techniques to identify "click-baity" text snippets. My partner is working on a data model right now and she has already had some reasonable success. Imagine what a company that hires tens or even hundreds of data scientists can achieve.
From there on, it's just a matter of highlighting these successful matches constantly in your feeds.
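As an illustration of how far even crude, form-based techniques can get you, here is a toy scorer built from surface features of the kind a real data model would learn automatically (the feature weights and the phrase list are invented for the example):

```python
# Toy clickbait scorer based purely on the form of a headline:
# exclamation points, shouty all-caps words, and "curiosity gap" phrases.
# A production system would train a classifier on labeled data instead.

CURIOSITY_PHRASES = ["you won't believe", "what happened next", "doctors hate"]

def clickbait_score(title: str) -> float:
    t = title.lower()
    score = 0.0
    score += 0.2 * title.count("!")                                   # excitement
    score += 0.3 * sum(1 for w in title.split()
                       if len(w) > 2 and w.isupper())                 # SHOUTING
    score += 0.5 * sum(1 for p in CURIOSITY_PHRASES if p in t)        # curiosity gap
    return min(score, 1.0)

print(clickbait_score("Local council approves road budget"))       # low
print(clickbait_score("You won't believe what happened next!!!"))  # high
```

From there on, it's just a matter of down-ranking (or highlighting) the matches in the feed, which is the easy part once you have a score per item.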
If the approach were content-neutral, i.e. relying simply on the form of the information (e.g. misspellings in the title, many exclamation points, and such) to distinguish what is more likely to be fake, then there would be an arms race in which the misinformation purveyors learn and subvert the algorithm, followed by the misinformation identifier incorporating the new forms of misinformation, and so on. In the meantime, purveyors of true information would also need to be aware of this algorithm so as not to be falsely labeled. Think of the SEO race for an example.
If the misinformation identifier uses the content of the information to label misinformation, then the identifier itself is as open to bias and opinion as anyone else.
It's symptomatic of all platforms that rely on UGC at scale, and the business model rests on the assumption that it can't be moderated. I mean, it goes without saying how much copyrighted content sits on YouTube, Facebook et al.
The fact the platform(s) generates revenue from this situation doesn't sit well with me.
I'd agree it's not a problem exclusive to Facebook, but a handful of platforms.
This is the perfect description. Facebook amplifies your communication, and I find that a perfect purpose for Facebook. No one should blame Facebook for what someone posts there, or for "allowing" it to be said. Facebook has had live-streamed murders, and that actually helped catch a killer. When Google bans a website doing illegal things, all it does is keep that website from being noticed; the activity continues. Maybe you think it limits the spread of something bad, but no, it just hides it. People who want to do bad things still do them. The only people who need to be protected from content are children; adults have the ability to think critically about a topic. Amplification of a message is only good. It is an argument about transparency. The phrase "sunlight is the best disinfectant" is apt because once a claim reaches a wide enough audience, the people who can dispute it will appear, rather than just an echo chamber.
I agree about sunlight being the best disinfectant, but I think social media like Facebook is a little more insidious than just "increased exposure." Facebook is designed for targeted messaging to people who are likely to engage with it. We've known about filter bubbles for at least a decade now. Amplifying disinformation specifically to an audience that is vulnerable to it is different than airing it on TV and increasing awareness.
That's exactly what algorithmically curated content feeds like Facebook and YouTube recommendations do. They amplify messages to certain people who are susceptible, and there's little awareness until somebody shoots up a mosque and then suddenly we realize there's a whole subculture of people that have semi-willingly brainwashed themselves with conspiracies floating around about white people being "replaced" and think Shariah law is being implemented in Western countries.
I don't think censorship of the content is the right answer. Short of calling for violence, people should obviously be free to propagate conspiracies, alternative history, whatever. The component that I am wary of is the algorithmic curation. It seems to tend towards "you like that? How about this slightly more extreme piece of related content?"
I disagree that people are vulnerable to it. I honestly can’t think of a single Facebook ad that has ever changed my mind on a topic. Adults are not vulnerable. The point is to identify and catch the people who are posting conspiracies and getting radicalized before they do something. You can’t do that if you can’t see it. I doubt some random reader gets radicalized simply from reading things and never engaging in conversations. With those conversations you are now able to identify people and actually prevent more radicalization. For example, Facebook knows if you make a death threat, via its algorithms as well as flagging by users. Not only that, but its algorithms can identify threats in private messages and report them to law enforcement. This may shock people, but if these people don’t post on Facebook, they then create their own websites, post in chat rooms, and recruit their friends. Then you are left with a completely uncontrolled website or app where these people get stuck in an echo chamber.
I'm not gravely worried about the ads themselves. They definitely bother me, but I'm more worried by "organic" content. It's really the algorithmically curated feeds that I see being harmful. My dad has always been kind of wing-nutty, but it was localized. He was suspicious about things like Ruby Ridge and the Waco siege when I was a kid. Now he is constantly repeating some garbage meme he encountered on FB. He is completely steeped in disinfo, and actively seeks out more of it by joining basically any FB group named something like "Real Trump Patriot Boat Lovers" that entices him with a meme about locking up the Clintons. If I watch him use it on his iPad, his feed is just a torrent of crap, most of it endorsed by his friends and self-selected bubble. That's the harm: he was susceptible to disinfo to begin with, but now, with the social proof that all his friends agree, they are escalating to more and more radical shit. At this point they're all saying how they won't stand for another inch of corona rules and the governor is about to have an uprising on their hands. We don't even have any corona restrictions in my state, but he's radicalized against them regardless.
I know this is late but maybe your Facebook feed is different from mine. Mine only shows people I follow and advertisements. I don’t get any other content. Twitter on the other hand constantly recommends related people to those I follow and shows their tweets.
One thing I still don’t quite understand is how a system like this one can develop when the people developing it almost certainly are completely opposed to these ideas on a personal political level.
I almost feel like it was an excess of idealism and belief in “the free market of ideas”, as well as the idea that Facebook should remain a neutral party at all costs, that led to overcompensating in the wrong direction, and to the current situation.
The result is actually an emergent property of "increase attention/engagement to the max".
This is actually an algorithmic result that was just one of many possibilities.
So, I don't blame the developers 100%. Forest/trees problem.
What I do blame is Zuckerberg's and the board's decision not to do a 180 on the algorithm and lower engagement now that we know this is the result: money and power.
Let's be honest here: Facebook is just a political tool for Zuckerberg now. Earlier, he wanted to run for President, which is obviously in severe conflict with the FB board's stated vision. Now he has realized that he can just twist any nation into his image. Money and power, at the end of the day.
Not really. On the contrary, he's succeeding quite well: being on good terms with Facebook is now essential for any government seeking reelection, or seeking to stay in power (in the case of dictatorships). Zuckerberg outwardly leans Democrat, but inside it's all about maintaining money and power. Just trace his progression: from laughing at the idiocy of his Harvard peers for sharing their personal details out in the open, all the way to very recently trying to launch his own crypto coin, which would have enabled his "alliance" to gather data on its users' spending habits.
They know they can't lower engagement; these are positive feedback loops, and attention is a zero-sum economy. If they give up even a tiny bit, they'll lose out to competitors.
> how a system like this one can develop when the people developing it almost certainly are completely opposed to these ideas on a personal political level
> “the free market of ideas”, as well as the idea that Facebook should remain a neutral party at all costs
were abandoned by facebook the moment non-customizable feed algorithms replaced chronology, not after the resulting system lost whatever 'objectivity' remained.
this, and not the resultant 'gaming' of the implicitly-clickbait-biased platform, is the root cause 'that led to overcompensating in the wrong direction, and the current situation.'
> One thing I still don’t quite understand is how a system like this one can develop when the people developing it almost certainly are completely opposed to these ideas on a personal political level.
It is, as they say, giving people what they want. People get genuine pleasure from the feeling of outrage and, better still, righteous anger. And creators of content get paid by the ad industry more from outraging people than drily informing them of the fact of the matter.
I don't think so. We didn't have the free market of ideas. Instead we had engagement optimized content. These approaches are completely different. Most other platforms and forums had a free market of ideas and it didn't lead to problems.
> One thing I still don’t quite understand is how a system like this one can develop when the people developing it almost certainly are completely opposed to these ideas on a personal political level.
The people at the top believe in these ideas, and our ability to organise and exercise our collective power is so withered and atrophied that people feel incapable of doing anything except going along with it.
Because of the kind of content that drives engagement, optimizing for engagement is not being a neutral party. And censoring content costs money. So Facebook has at least two financial motives.
Most people's political beliefs and ideas are simply something they wear, like a piece of clothing. If you asked them to change into different clothes for a specific purpose they will usually gladly do so without a moment's hesitation, if the purpose makes sense to them. The daily context of our lives and what we need to be getting on with is vastly more powerful than anything else.
Frustrating. At the same time, I all too often see claims of "fake news" or "disinformation" aimed at what is simply embarrassing or inconvenient for one party or the other. I'm not sure there is a way to solve this problem that will not cause worse problems, like censorship of good but unpopular notions, or notions that are true but annoy the powerful.
It's not fake news if it's true but inconvenient, so there's a pretty firm line there. There are plenty of recent examples of legitimately fake and (intentionally) misleading content that has been propagated on Facebook; I assume anyone with any right-wing friends has been inundated with it. But in addition to that, the simple protection against censoring true notions that annoy the powerful is transparency. Complete transparency about when and why and how things are blocked would be a start. Blocking clearly fake content, banning people who (repeatedly) post it, and being transparent about the reasons seems like fair progress, with a fairly clean line between it and the proposed dystopian outcomes.
"It's not fake news if it's true but inconvenient, so there's a pretty firm line there."
I agree that truth is a pretty firm line, but discerning that truth and agreeing on it is a very, very squishy one. Discerning what's true is arguably the most difficult task we can face, philosophically speaking.
No matter how many well-meaning people agree that some assertion A is false, if they are wrong and have the authority to suppress it, then they have committed the injustice of censorship.
Even with the scientific method, which I believe is the best method ever invented for discerning truth, there have been many, many times that the scientific consensus has been, not only wrong, but egregiously, arrogantly wrong. Any time throughout history that authorities have been allowed to suppress what they believed to be false - and they often had excellent reasons for believing so - they invariably and inevitably suppress ideas that turn out to be true.
Throwing AI at the problem is just adding another layer of misdirection. It's not going to turn out better this time, in spite of all the other times, just because an AI decides it.
No, personally speaking, I think we will have a much, much healthier society if we can just learn to live with the fact that people believe stupid things and they will always believe stupid things even when you try to hide the stupid things from them. I know that I believe things that are both wrong and stupid, I just don't know which things. If everyone could acknowledge that they probably hold at least some wrong and stupid ideas, we would all be a little bit more humble about banishing other peoples' wrong and stupid ideas.
And don't think it stops at the US border, this is a segment from the Dutch TV: https://www.youtube.com/watch?v=FLoR2Spftwg
Funny thing, they just copy/paste the US conspiracies onto Dutch politicians.
If we are going to fix this, we need real suggestions instead of just complaining about it. For example:
Policy proposal:
Cap the number of people that can see a given post / news article. People would have to resubmit, in substantially their own words, any information they glean that should be passed on beyond that limit. E.g. someone posts a news article that reaches 10,000 people. The 10,001st gets an error saying it has already reached maximum views.
One of the initial 10,000 couldn’t just copy-paste what they saw, but would have to generate unique content to reinitiate an information node that would also have a 10,000-view maximum.
This would not be great for national news sites, but it would reinvigorate local news, because that would be the only way for the information to spread.
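The mechanics of the proposal can be sketched in a few lines. This is a toy model, assuming "substantially their own words" can be approximated by rejecting near-verbatim copies (here, an exact match on case- and whitespace-normalized text; real deduplication would need to be much fuzzier):

```python
# Toy model of the proposed view cap: each unique piece of content gets a
# fixed view budget, and verbatim reposts of existing content are rejected.
import hashlib

VIEW_CAP = 10_000

class CappedFeed:
    def __init__(self):
        self.views = {}  # content hash -> view count

    @staticmethod
    def _key(text: str) -> str:
        # Normalize case and whitespace so trivial edits don't evade the cap.
        normalized = " ".join(text.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def post(self, text: str) -> bool:
        key = self._key(text)
        if key in self.views:
            return False  # copy-paste: must be rewritten in your own words
        self.views[key] = 0
        return True

    def view(self, text: str) -> bool:
        key = self._key(text)
        if self.views.get(key, VIEW_CAP) >= VIEW_CAP:
            return False  # the 10,001st viewer gets an error
        self.views[key] += 1
        return True

feed = CappedFeed()
feed.post("County sends 13,000 ballots without a signature line")
# A near-verbatim repost is rejected; the reposter must write new content,
# which starts its own independent 10,000-view budget.
feed.post("County sends 13,000 ballots WITHOUT a signature line")
```

The interesting open question is the dedup step: hash matching only catches lazy copies, so the real burden of "your own words" would fall on fuzzier similarity measures.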
I don't doubt that facebook's algorithm results in disinformation through their quest for engagement, but does anyone know a good resource that talks about the specifics of what ideas are most spread in this way with some numbers?
My issue is that I read about this often (and care about these issues), but I have not used Facebook since 2010, so I have no experience with what everyone's talking about. I feel like in order to understand, I need some data to work with that is more specific than 'flagged false', because Facebook is just not part of my experience right now and I don't know what people are actually saying on the platform.
Not only Facebook but the whole internet is a superspreader of misinformation. There is so much misinformation going around that people don't even take the time to check whether it's true or not.
People spread misinformation on Facebook. Arguably, it's not Facebook's responsibility to fact-check anything.
Facebook is not a publishing house (at least not the Facebook social media platform as such) and mandating that they fact check and curate their ads and content is tricky business. It requires a legal definition of truth, which is a matter complex enough to justify the existence of the whole judiciary system.
Expecting that a condensation of such a system be applied to the publishing of information on any platform is wishful thinking at best.
Facebook chooses to promote disinformation over your friend’s cat pictures because it drives more engagement and thus more money.
They are totally responsible for that content, even if this is an emergent property of optimizing for money algorithmically and no explicit editorializing takes place.
Their users may be dumb f&cks[0] but Facebook is not a dumb pipe.
I also do not like the fact that Facebook "optimizes" the content shown to users, and would rather just have no Algorithm at all, but that seems to me like a different (although possibly related) problem.
Social media is a sufficiently different form of media that it will require its own form of regulation. Pointing to the publisher-editor delineation is anachronistic. Publishers never had the reach, insidiousness and information velocity that Facebook does.
What concrete problem did Facebook cause on that matter?
I don't use Facebook myself, only passively through contacts, and that is very limited. There are opinions that are wrong, sure. But where did it lead to a real problem?
In the "old" days, people had to actively look for or click through to misinformation on platforms. Facebook's algorithm now shoves posts into users' feeds, optimized for engagement. We're all seeing now where that has led, and it's not going to get any better if nothing changes.
Facebook keeps hiding behind the claim that they are a neutral platform when their own man made algorithm optimizes user feeds for ad/content delivery to maximize profit. Whether the developers meant to or not, that algorithm has actively spread and amplified misinformation, even polarizing entire nations.
If Facebook cannot tame (i.e., identify "truth" in) what they have made (their algorithmic feed), they should kill it. It does not matter whether this taming is possible or not, because they do have the ability to terminate the algorithm.
People spread viruses at bars. The bars themselves don't spread viruses. Yet they're shut in quite a lot of countries, to their severe financial detriment, because that's where the spread happens.
Yes, it's asymptotically hard to assign truth values to every single thing that crosses our feeds, but there are degrees to this stuff, and at least two conservative "news" organizations have made court depositions to the effect that nobody should mistake their entertainment for facts.
> Facebook is not a publishing house (at least not the Facebook social media platform as such) and mandating that they fact check and curate their ads and content is tricky business.
This is not by force of nature; this is the result of laws and policies that people created and that can be changed. The safe harbor provisions that exempt internet service providers from liability for user-generated content created the circumstances that led to the creation and growth of huge companies like Facebook. But it doesn't have to be that way: in another world, where all publishers (internet or otherwise) were liable for all content on their platforms, companies like Facebook would never have existed; we'd probably have many smaller and more local publishers, economically less efficient but with much less of the current disinformation problem.
Ads are different from the things people post on social media. Facebook can’t moderate and fact-check ads? Why not? Because it would cost them too much of their 17 billion dollar revenue? Boohoo.
again, I do not think Facebook is a publisher, but rather a platform, so I do not see how the parallel applies.
At any rate, if you are asking why censorship, I guess there are angles and perspectives from which it is not, but that risks becoming a path to various dystopian scenarios (freedom of expression/speech negatively correlates with authoritarianism of governments, for example)
If you want to publish something and some publisher does not want to publish it for you, chances are you can find a different publisher that is willing to, without necessarily negatively impacting the spread of your content and ideas.
For platforms such as Facebook that is fundamentally different: A- they gain their power by offering a service, not by spreading ideology... users come to them to access content produced by others, so their influence over the content users are, or are not, able to access is "unfair" and should in my opinion be unlawful; B- exclusion from them pretty much disconnects you from a large number of fellow humans in a significant way.
Particularly given that most of the examples cited by the article are approximations, the sort that mainstream newspapers routinely use to push a narrative.
So... I shouldn't talk to family overseas during this time or find another way to do it?
Or should everyone not living in the US give up Facebook before an American election?
It would seem to be more prudent to disallow political ads the week before. Or make it a crime to, say, spread wrong information about voting. And so on.
You are Facebook's product. It's OK to say "Facebook cannot sell any products for a few days" for certain reasons. Lockdowns slowed covid. Facebook limits will possibly slow riots.
Ask for your money back if you are inconvenienced.
Facebook wouldn't be so powerful if switching means of communication were that easy.
It is possible, but email doesn't offer real-time communication, phone calls can be expensive, and not everyone knows how, or wants, to use Zoom. Getting the family, grandma and kids included, to switch is hard ("why isn't it working like it did before?").
And BTW, to Facebook, Google, and everyone else: stop messing with the fucking UI. For most of us, keeping up with redesigns is just a minor annoyance. But for older non-technical people, it is awful.
> I shouldn't talk to family overseas during this time or find another way to do it?
A speed bump on posts (not messages or comments) isn’t a bad idea.
One month before an election, there is a fifteen-minute cooldown on posts. One week before, a one-hour cooldown. One day before, a three-hour one. This lets people keep talking, but it slows the reproduction rate and gives people a shot, even if a long shot, at fact-checking.
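The escalating schedule above could be sketched as a tiny policy function. This is purely a hypothetical illustration of the proposal, not anything Facebook actually implements; the function name and thresholds are made up:

```python
from datetime import date

def post_cooldown_minutes(today: date, election_day: date) -> int:
    """Hypothetical per-user posting cooldown, in minutes,
    escalating as an election approaches."""
    days_left = (election_day - today).days
    if days_left < 0 or days_left > 30:
        return 0            # no throttling outside the election window
    if days_left <= 1:
        return 3 * 60       # one day out: three-hour cooldown
    if days_left <= 7:
        return 60           # one week out: one-hour cooldown
    return 15               # one month out: fifteen-minute cooldown

# Example: two days before the election, the one-hour cooldown applies.
print(post_cooldown_minutes(date(2020, 11, 1), date(2020, 11, 3)))  # prints 60
```

The point of the step function is that ordinary conversation stays possible while the viral "reproduction rate" of any single post drops as the stakes rise.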
In my country, a tiny part of Europe, there is an election every month. There are over 100 countries with a larger population, and some of the largest are so big that they would undoubtedly have elections running pretty much constantly. How are you going to decide which country's elections should be throttled by Facebook?
Everything is a Super Spreader of Election Misinformation.
I remember when Trump won in 2016, CNN immediately turned its theme black and ran gloomy headlines that continue to this very day.
Likewise, other news sources went into spin mode, seemingly faster than they did pre-Trump. Actors and athletes became media stars not for their area of expertise but for their political opinions (which should be worth exactly the same as the bartender's but the bartender doesn't make the news for some reason.)
I agree that Facebook is a huge platform for politics. The difference is that you can post your own views and you can indicate if you like or dislike someone else's opinions.
Mercer/Bannon/Trump and co. are super-spreaders of misinformation. They happen to also use Facebook, but I don't remember any issues with "fake news" in elections pre-2016/Brexit. So no, it's not inherent to the platform, only to the bad actors on it.
No mention of social media, Facebook, or any other platform. If the argument is that there was political misinformation before 2016, then of course; it has existed for centuries. I am referring to the spread of misinformation through social media, at scale, which is the theme of the article. That was not seen at scale before Cambridge Analytica/Brexit/Trump/troll factories. Yet the platform existed for ~8 years before it started.
Obama was probably the first candidate to win thanks to social media and Facebook groups.
Partisan Facebook groups post a lot, and it's 50% your party's interpretation of reality, 45% outrageous claims, and 5% pure fake news.
I don't even think the fake-news spreading is done maliciously; the groups are probably maintained by radicalised individuals who believe whatever news fits their narrative.
Trump (who often retweets fake news and once even retweeted a site similar to The Onion) equally calls "fake news" on opposing publications.
Still, I think fake news is a vastly overblown problem.
Fake news is merely the scapegoat left-wing media use to explain why Trump/Brexit won, even though the issue is represented so poorly in the media.
You can call it populism or far-right extremism or Nazism, but outside the rich tech-community bubble and outside universities, poor people want less government, not more.
The socialist message is lost on them; that's why Trump and Brexit won, despite the media coverage.
If we look at similar cases, like Berlusconi in Italy 30 years ago, it's just a matter of time before Trump realizes fixing the government is hard, if not impossible, and gets caught in some corruption case himself.
The parasitical nature of the state just allows it to grow larger until it destroys its host.
Spending in the USA is still at record highs.
They made $22 billion in revenue this past quarter helping destroy the country. Every Facebook employee is laughing their asses off at you personally right now. "LOL! Look, HN is complaining about Facebook again! Hysterical!" They're emailing the link above with laughing emojis and dollar signs, while they customize their new Tesla order in another window.
The fact is, they will all lose money if Trump isn't reelected. Every single FB employee knows this, and does their part to keep the rage machine working. No conspiracy theory needed here, it's all part of the job.
If this wasn't the case, we would have a simple setting on the home feed to exclude shared links and news. One click and no more Fox News hatred from your crazy uncle. But we don't have that option. Why? All I want from Facebook is to see updates of distant relatives on vacation, or graduation photos, etc. But they don't make money if there's no anger, so oh well.
> The fact is, they will all lose money if Trump isn't reelected. Every single FB employee knows this, and does their part to keep the rage machine working.
I don't understand what you mean here. Why does Trump need to be reelected? There is always outrage to be generated for clicks. There was no shortage of outrage during Obama's term. Even things that were objective non-issues, like the tan suit, were whipped into a viral attention storm: "Obama's tan suit shocks social media." It seems like social media is just designed to promote outrage, since outrage produces virality, not that it needs Trump specifically. Is there something I'm not getting relating specifically to Trump and Facebook?
People spread this stuff. Organised, funded groups, from Fox News down to your local corporation or office, push it. Facebook is just where they do it. At worst, Facebook encourages people to stay on Facebook, but it doesn't care whether they're sharing Trump memes or kittens.
This is like blaming parks because people deal drugs in parks sometimes.
"The people most responsible for destruction of the internet, as well those most blind about the destruction they’re wreaking are the journalists and activists who spent years begging Silicon Valley to regulate discourse."
It's amazing how links I post to other HN subs always wind up with -1 karma; never positive, never zero, never -2. Always -1. I'm sure that's totally and completely organic.
Facebook needs to be nationalized and regulated as a public utility. It's outrageous that this hasn't already happened.
How much more election meddling do they need to get away with before everyone understands that this is necessary?
At this stage, all remaining Facebook investors are deeply immoral individuals. All the reasonable moral individuals have already cashed out.
The government should just screw over the remaining shareholders. They deserve it.
The BBC, our nationalised TV producer, has now been explicitly politicised by the government. It's very hard to have a state-owned media enterprise that doesn't end up propping up the status quo.
I would say the BBC is biased towards people holding long-term political office, i.e. the bureaucratic class, not the incumbent party. They're pretty good at bringing the hammer down on representatives.
They were explicitly anti-Brexit, and arguably anti-Corbyn. They're fairly anti-Boris. The common factor is bias against threats to the bureaucracy. It always puzzled me that they could get away with criticizing the people who held power over them, but I think it's just that MPs don't hold much power over them. They're more worried about the branches of government that don't change between elections.
It is way better that way. As someone who lived outside of the UK, I have been watching BBC news, documentaries, series and films for over a decade online. The BBC is probably the most wholesome source of mainstream media that I can imagine.
To point to the BBC as an argument against nationalization makes a very weak case.
Very few media organizations have the kind of international reach as the BBC has. Frankly, that says something about its quality and wholesomeness.
I'm not a US citizen but the only reason I ever watch Fox news and CNN is to get a sense of how bad things are going in the US. They're more like reality shows than news.
Importantly, the BBC isn't nationalised, at least not in the typical sense. The government is prohibited from direct intervention[0], and the safeguards (surprisingly) seem to work. It's much less state-controlled than, e.g., Russia Today.
I agree with you. I don't really like media organizations but as media goes the BBC is great.
Railways and telecoms worked out just fine.
Sure, in many countries train tickets have become overly expensive under government control, but high costs are not such a huge concern in the grand scheme of things. On the other hand, the break-up of AT&T in the US had huge benefits for consumers in terms of lowering costs.
If Facebook is nationalized, at least they will not manipulate people to serve corporate interests (that is the main threat).
Sure, in the worst case, under government ownership, social media might be used to manipulate elections to favour the incumbent party, but that is infinitely better than manipulating elections based on the financial interests of a tiny minority and allowing that to drive an increase in surveillance and restrictions on public freedoms. Eventually, even the smaller members of that tiny minority of shareholders will end up being harmed for the benefit of the bigger ones. It's not going to stop at non-shareholders.
> Sure, in the worst case, under government ownership, social media might be used to manipulate elections to give preference to the incumbent party over the other, but that is infinitely better than manipulating elections based on the financial interests of a tiny minority
I totally disagree. They look like the same thing today, but this is merely a consequence of the fact that everyone is under Facebook's thumb and Facebook works in the interest of corporations/big money who pay for their ads.
If you shift the control to government, the dynamics change fundamentally. The power will once again start at the government (like it should) and not at the corporations. Money will still influence things, but not nearly as much as it does today.
We need to take steps to decouple money from political power, decouple state from economics.
Like guns, Facebook is not inherently evil. It mostly depends on who is holding it.
Government intervention is unavoidable. It's just a matter of choosing the right kind of intervention.
Facebook wouldn't exist without government support and expansionary monetary policy.
We can go to an extreme and point out that to properly decouple state from economics, we'd have to abolish corporate personhood first.
We would need to abolish 'limited liability' legal entities; their real purpose is to allow individuals to repudiate responsibility for their actions by deflecting blame onto these abstract legal entities. This doesn't make any sense aside from facilitating criminal activity.
Whenever a corporation commits a crime and isn't shut down (which is 100% of the time), that is government interference of the worst kind.
> Railways and telecoms worked out just fine. Sure, in many countries, train tickets have been getting overly expensive under government control but high costs are not such a huge concern in the grand scheme of things. On the other hand, the break up of AT&T in the US had huge benefits for consumers in terms of lowering costs.
If Facebook is nationalized, at least they will not manipulate people to serve corporate interests (that is the main threat).
You should read up on:
1. The history of railroads and regulation in the United States, including their collapse and resurgence in the 20th Century.
2. The history of Bell as a government-sanctioned monopoly, their breakup and subsequent industry consolidation.
3. The disastrous environmental damage caused by state-owned energy companies.
> Facebook needs to be nationalized and regulated as a public utility.
That is the wrong solution. The threat to _democracy_ because of _advertising_ is a US phenomenon.
It says something about social values if journalism and public education/discourse, which are essential to democracy, depend for their survival on _advertising_.
It can’t change. Facebook couldn’t afford for it to change.