I'd like to hear an informed take from anybody who thinks that Facebook's fact-checkers were a better product feature than Community Notes.
All of the articles I'm seeing about this online are ideological, but this feels like the kind of decision that should have been in the works for multiple quarters now, given how effective Notes have been, and how comically ineffective and off-putting fact-checkers have been. The user experience of fact-checkers (forget about people pushing bogus facts, I just mean for ordinary people who primarily consume content rather than producing it) is roughly that of a PSA ad spot series saying "this platform is full of junk, be on your guard".
> As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
IMO the concerning part is hidden at the bottom. They want to go back to shoveling politics in front of users. They say it is based on viewing habits, but just because I stop my car to watch a train wreck doesn't mean I want to see more train wrecks. I just can't look away. FB makes their actions sound noble or correct, but this is self-serving engagement optimization.
Social media sites should give users an explicit lever to see political content or not. Maybe I'll turn it on for election season and off the rest of the year. Some political junkies will always have it set to "maximum". IMO that is better than FB always making that decision for me.
What I think I just read is that content moderation is complicated, error-prone, and expensive. So Meta is going to do a lot less of it. They'll let you self-moderate via a new community notes system, similar to what X does. I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore. This is another big win for Meta, because minimizing their investment in content moderation and simplifying their product will reduce operating expenses.
Perhaps, given the situation with Twitter, now "X", more web and mobile app users will come to understand that despite its size, Facebook is someone's personal website. Like "X", one person has control. Zuckerberg controls over 51% of the company's voting shares. Meta is not a news organization. It has no responsibility to uphold journalistic standards. It does not produce news; in fact, it produces no content at all. It is a leech, a parasite, an unnecessary intermediary that is wholly reliant on news content produced by someone else being requested through its servers.
> When we launched our independent fact checking program in 2016, we were very clear that we didn’t want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time, which was to hand that responsibility over to independent fact checking organizations... That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how.
Alex Stamos pushed this initiative pretty hard outside of Facebook in 2019+, seemingly because he wasn't able to do it inside Facebook back in 2016-2018. But I haven't dug into his motivations.
I don't use Twitter so I hadn't seen it in action, but the interview convinced me that this is a good approach. I think this approach makes sense for Facebook as well.
The median news article has something wrong in it.
Often I live through events and read about them in the daily paper, then in The Economist, then in a few more accounts. 5-25 years later a good, well-researched history of the event comes out and it is entirely different from what I remember reading at the time. Some of that is my memory, but a lot of it is that the first draft of history is wrong.
When someone signed their name "Dan Cooper" and hijacked a plane, a newspaper garbled that to "D B Cooper"; the FBI thought it sounded cool so they picked it up. Journalists garble things like that more often than not.
https://en.wikipedia.org/wiki/The_Armies_of_the_Night
shows (but doesn't tell) that a novelized account of events can be more true than a conventional newspaper account, and similar criticisms come up throughout the work of Joan Didion
https://en.wikipedia.org/wiki/Joan_Didion
If anything really makes me angry about news and how people consume it, it is this. In the age of clickbait, everyone who works for The New York Times has one if not two eyes on their stats at all times. Those stats show that readers have a lot more interest in people like David Brooks and Ezra Klein blowing it out their ass, and couldn't care less about difficult journalism that takes integrity, elbow grease, and occasionally puts you in danger, done by younger people who are paid a lot less, if they are paid at all. The conservative press was slow on the draw when it came to 'Cancel Culture'; it was a big issue with the NYT editorial page because those sorts of people get paid $20k to give a college commencement address and they'd hate to have the gravy train stop.
Seen that way the problem with 'fake news' is not that it is 'fake' but that it is 'news'.
I'm certain it will make parts of the user experience worse, but at least for the Threads app this seems a little necessary - if you're aiming to be the "new" twitter, or to fill whatever social need twitter was fulfilling, you need to break free of the shackles of IG/Meta moderation, which is very unforgiving and brutal in subtle ways that aren't always easy to figure out. But basically, I find platforms like Threads/Twitter are probably unusable for a lot of people unless you can say "hey, you're an asshole" every now and then without Meta slapping you on the wrist or suppressing your content.
One of the only visible actions Meta has taken on my account was once when a cousin commented on a musical opinion I had posted to facebook, I jokingly replied "I'll fight you" and I caught an instant 2 week posting ban and a flag on my account for "violence." Couldn't even really appeal it, or the hoops were so ridiculous that I didn't try. The hilarious thing is these bans will still let you consume the sites' content (gotta get those clicks), you just are unable to interact with it. This kind of moderation is pointless as users will always get around it anyway - leading to stuff like "unalive" to replace killing/suicide references, or "acoustic" to refer to an autistic person, etc. Just silliness, as you'll always be able to find a way to creatively convey your point such that auto-moderators don't catch it.
I'm sure it's a win for Meta (less responsibility, less expense, potentially less criticism, potentially more ad dollars), but certainly a loss for users. More glad than ever that I deleted my FB account 10 years ago, and Twitter once it went X.
As a leftist, while this is concerning, it's also important to remember that Meta censors left content as much as it does right content.
So, while this announcement certainly seems to be in bad faith (what could Mark mean by "gender" other than transphobic discussion?), this should be a boon both for far-right and left discussion.
Does that mean increased polarization and political violence? Surely, surely.
From bad to worse. Meta is probably one of the single largest funders of fact checking. Now that appears to be coming to an end. Third parties will no longer be able to flag misinfo on FB, Instagram or Threads in the US.
Leaving Facebook, Instagram and Twitter a few years ago (and never joining TikTok) has been the single best decision for my mental health. I wish everyone, and society as a whole, would make the same decision.
They've also said there will be more harmful (but legal) content on there as they'll no longer automatically look for it, but require it to be reported before taking action.
As someone who worked on harmful content, specifically suicide and self-injury, this is just nuts - they were raked over the coals in the UK by an inquest into the suicide of a teenage user who rabbit-holed on this harmful content, and also by the parents of teenagers who took their lives, to whom Zuck turned around and apologised at his latest Senate hearing.
There is research that shows exposure to suicide and self injury content increases suicidal ideation.
I'm hoping that there is some nuance that has been missed from the article, but if not, this would seem like a slam dunk for both the UK and EU regulators to take them to task on.
I am concerned about the community notes model they're moving towards.
Community notes has worked well on Twitter/X, but looking at the design it seems super easy to game.
Many notes get marked 'helpful' (ie. shown) with just 6 or so ratings.
That means, if you are a bad actor, you can get a note shown (or hidden!) with just 6 sockpuppet accounts. You just need to get those accounts on opposite sides of the political spectrum (ie. 3 act like a democrat, 3 act like a republican), and then when the note that you care about comes up, you have all 6 agree to note/unnote it.
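To make that concrete, here is a toy sketch of the failure mode being described - a naive "diverse raters" threshold rule, with made-up names and thresholds, not X's actual scorer (which fits a matrix-factorization model over each rater's full history):

    # Toy model of a naive "diverse raters" rule for showing a note.
    # NOT the real Community Notes scorer; thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class Rating:
        rater_lean: str  # "left" or "right", inferred from rating history
        helpful: bool

    def naive_note_status(ratings, min_ratings=6, min_each_side=3):
        helpful = [r for r in ratings if r.helpful]
        left = sum(1 for r in helpful if r.rater_lean == "left")
        right = sum(1 for r in helpful if r.rater_lean == "right")
        if len(helpful) >= min_ratings and min(left, right) >= min_each_side:
            return "shown"
        return "needs more ratings"

    # Six sockpuppets: three groomed to look left-leaning, three right-leaning.
    sockpuppets = [Rating("left", True)] * 3 + [Rating("right", True)] * 3
    print(naive_note_status(sockpuppets))  # -> shown

The production algorithm raises the bar by modeling each rater's entire rating history rather than a per-note tally, but the attack shape is the same: manufacture raters that appear to disagree, then have them agree on the target note.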
Companies like Facebook pretending they are not publishers, people posting content believing they should be able to publish anything without consequences, and professional weather makers (PR/comms/lobbyists, etc.) using this confusion to get around traditional controls on their dark arts.
In the end I think the only solution that works in the long term is to have everything tied back to an individual - and that person is responsible for what they do.
You know - like in the 'real' world.
That does mean giving up the charade of pseudo-anonymity - but if we don't want online discourse dominated by bots controlled by people with no conscience - then it's probably the grown-up thing to do.
Zuck claims "Europe has an ever-increasing number of laws institutionalizing censorship and making it difficult to build something innovative"
Ouch. As a European, I feel very wary of such a sentence and the implications.
Time for Europe to wake up?
(edit: fix typos)
I use Instagram and Threads specifically because of the relative lack of political content on them. If they also start to become cultural war grounds like everything else then RIP.
I speculated what Zuckerberg wanted and what he'd do when he visited Mar-a-lago[0]:
* Push to ban Tiktok
* Drop antitrust lawsuits against Meta
* Meta will relax "conservative" posts on its platforms
* Zuckerberg will donate to Trump's cause
So far, Zuckerberg has already donated to Trump's cause. Now he has relaxed "conservative" posts on Meta's platforms, directly or indirectly.
When Trump comes into power, he'll likely ask the FTC to drop its antitrust lawsuit against Meta under the guise of being pro-business.
My last speculation is the push to ban Tiktok. I'm sure it was discussed. Trump has donors who wanted him to reverse the Tiktok ban. Zuckerberg clearly wants Tiktok banned. Trump will have to decide who to appease when he comes into office.
[0] https://news.ycombinator.com/item?id=42262573#42262975
The discussion here is painful to read. The 'neutral' discussions of product features and of how Austin, TX is more liberal than the rest of Texas are grotesque.
Zuckerberg says Facebook is going to be more "like X" and "work with Trump". It has changed its content policy to allow discussions that should horrify anyone.
"In a notable shift, the company now says it allows “allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
"In other words, Meta now appears to permit users to accuse transgender or gay people of being mentally ill because of their gender expression and sexual orientation. The company did not respond to requests for clarification on the policy."
But Zuck himself says that they are also dialing their algorithms back in favor of allowing more bad content. It's not right.
https://www.wired.com/story/meta-immigration-gender-policies...
https://news.ycombinator.com/item?id=42622082
Off topic but related to holding communities to account: I wish there were a way to metamoderate subs on Reddit. The Texas subreddit has been co-opted by a moderator that bans anyone who criticizes their editorial decisions or notices antagonism trolls taking over the sub.
Mark has looked at what has happened to Twitter since Musk took over, a notable decline in activity and value… and decided he wants a piece of that? Musk is begging people on Twitter to post more positive content, as it devolves into 4chan-lite.
If Musk’s ideological experiment with Twitter had proven the idea that you can have a pleasant to use website without any moderation then Mark’s philosophical 180 would at least make sense, but this doesn’t, at all. What’s to gain? Musk has done everyone a favor by demonstrating that moderation driven by a fear of government intervention was actually a good thing.
Great news. It's further evidence that the zeitgeist has shifted against the idea that platforms have a "responsibility" to do "good" and make the world "better" through censorship. Tech companies like Meta have done incalculable damage to the public by arrogating the power to determine what's true, good, and beautiful.
Across the industry, tech companies are rejecting this framework. Only epistemic and moral humility can lead to good outcomes for society. It's going to take a long time to rebuild public trust.
As far as I can tell they gave up moderation a few years ago, at least every time I report someone spamming about "Elon Musk giving away a million dollars if you click this shady link" or the like I invariably get told it meets their "community standards" and won't be removed. I guess technically I haven't seen a female nipple there though so, job well done?
In summary, FB was pressured in 2016 to act on the “foreign influence” hysteria parroted by the press, politicians and leaders. FB bowed to the pressure. Now that the press has lost all validity, as the X purchase showed, it can no longer persuade Meta to “fact check.” FB is in a better spot to follow the X model of moderation. People arguing this is a bad move are ignoring the fact that FB was a censorship hotbed for the last four years.
"There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'"
Isaac Asimov - hitting the high notes from the pulpit, even after 30 years.
Mark doing what Mark needs to do to keep that Meta stock elevated.
Will this totally end content moderation? That could be a small silver lining, as content moderation for FB appears to be extremely hazardous to one's mental health:
https://www.cnn.com/2024/12/22/business/facebook-content-mod...
So let's take one of the most expensive, labor-intensive parts of our business and replace it with crowdsourced notes.
As of 2022, Meta employed 15,000 content moderators. Expected cost of 70K to 150K per person (salary + benefits, plus consulting premiums), so let's assume 110K.
This implies $1.65B in workforce costs for content moderation.
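For what it's worth, the arithmetic holds under those assumptions - a one-liner using the figures assumed above, not anything from Meta's actual books:

    # Back-of-envelope only; headcount and cost are the assumptions above.
    moderators = 15_000
    fully_loaded_cost = 110_000  # midpoint of the assumed 70K-150K range
    print(f"${moderators * fully_loaded_cost / 1e9:.2f}B per year")  # -> $1.65B per year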
Meta is more likely to make their earnings....
Though I wonder if they will redeploy these people to be labelers for LLMs?
Don't worry, there will be community notes and some form of EU/US/state notes.
The paradigm has changed; moderation has to be separated from censorship, and it has to be transparent.
I would love to hear/read Audrey Tang's take on this, as the CCP has been heavily involved in manipulating Chinese public opinion.
It’s funny to see these tech moguls bend the knee to the new king. All their values, their so-called care for the community, everything they say, everything… is just one big play in an effort to make as much money as they can. It sickens me to watch this stuff unfold.
while it's obviously fair to be very very wary of everything FB does, especially moderation, the other side of this is a worldwide campaign by the worst people alive to use these platforms to shape public opinion and poison our (ie at least the West's) culture to death.
Community notes and enforcement might help Meta in the long run, as a step toward more organically managed content that can scale better than simple moderation.
I have my serious gripes with how Instagram currently manages reports. I've recently reported a clearly racist post promoted to me on Instagram that did not get removed or acted on. They seem to go the route of "block it so you cannot see the user anymore but let everyone else see it".
So as far as I can tell, the only things Instagram actually moderates at the moment are gore and nudity, regardless of context. So barely dressed sexualised thirst traps are ok, black-and-white blurred nipples are not, everything else is a-ok.
The Metaverse and WFH bets Zuck made were controversial, but at least they were rooted in tech trends, population habits, and vision, without any political poop attached.
This one is pure political poop to please Orange Man.
Also I believe that fact-checking needed to be slowly sunsetted after the COVID emergency was over, but the timing of this announcement and the binary nature of the decision mean that it was done with the intention of getting into the good graces of the new administration.
If these tech executives become the American equivalent of Russian oligarchs, I hope the states would go after their wealth based on their residence, even using ADS-B private jet trackers, if they were to move to, say, Wyoming while partying every weekend in Los Angeles/NYC etc.
Corporate censorship should have never happened. It is a huge corruption of public discourse and the political process. These platforms have hundreds of millions of users, or more, and are as influential as governments. They should be regulated like public utilities so they cannot ban users or censor content, especially political speech. Personally I don’t trust Zuck and his sudden shift on this and other topics. It doesn’t come with a strong enough rejection of Meta/Facebook’s past, and how they acted in the previous election cycle, during COVID, during BLM, etc. But I guess some change is still good.
Regardless of what you think about this step I find it disconcerting that we can now disagree on facts.
For example:
- whether crime is up or down
- whether the earth is warming or not
- how many people live in poverty
- what the rate of inflation is
- how much social security or healthcare costs
- etc
These are all verifiable, measurable facts, and yet, we somehow manage to disagree.
We have always disagreed, and that is healthy; it keeps us from missing something. But in the past we could agree on some basic facts and then have a discussion.
Now we just end a discussion with an easy: "Your facts are wrong." And that leads to a total inability to have any discussion at all.
Fact checking is not censorship. Imagine math if we'd question the basic axioms.
Zuck's video claims Europe has been imposing a lot of censorship lately, which is a nicer way for him to say "we have done a crappy job at stopping misinformation and abusive material, got fined A LOT by countries who actually care about it, and that's somehow not our fault".
Community notes is good news, and something I was expecting to disappear from Twitter since Elon bought it a couple years ago, especially since they have called out his lies more than once. Hearing Facebook/Instagram/Threads are getting them is great.
Then he claims "foreign governments are pushing against American companies" like we aren't all subject to the same laws. And actually, it wasn't the EU who prohibited a specific app alleging "security risks" because actually they can't control what's said there; it was the US, censoring TikTok.
Perhaps we Europeans should push for a ban of US platforms like Twitter, especially when its owner has actually pledged to weaponise the platform to favour far-right parties like AfD (Germany) or Reform UK. And definitely push for bigger fines for monopolistic companies like Meta.
The moderation tools were themselves offensive and abusive. I use FB to read what my friends and relatives have to say. I don't want FB to interfere with their posts under any normal circumstance, but somehow, they felt like they should do this.
But the real reason I can't use FB much any more is that the feed is stuffed full of crap I didn't ask for, like Far Side cartoons etc.
TBH I had assumed FB was just penalizing all political content or that people just tried like hell to avoid it because all I see on FB anymore is either stuff related to the few FB groups that keep me on the platform or endless reposts of basically pirated Reddit content for engagement.
It seems like a much smaller fraction of comments were flagged and/or killed here compared to the recent Trudeau thread, which surprises me because the level of discourse I observed was at least as bad.
I assume the data is showing that conservative users are growing either in raw numbers or in aggregate interaction on Facebook, and thus, will now be catered to.
Meta, as a company, doesn't have values beyond growth.
The fact-checking that Meta is ending, which put "misinformation" disclaimers on posts, is NOT the same as content moderation, which will continue.
A lot of comments in this thread reflect a conflation of these two, with stuff like "great! no more censorship!" or "I was once banned because I made a joke on my IG post", which don't relate to fact-checking.
It's funny how facebook got so political all the normies left, then they downranked political content so much that the political people left too. Facebook is a ghost town now.
this is good. the automated systems were getting increasingly byzantine, with layers of rules trying to patch edge cases, which just created more edge cases.
I know there has been a lot of ink spilled trying to persuade that Technology can't solve our deeper problems and Technologists are too optimistic about having real-world impact etc. etc.
But I think community notes (the model, not necessarily the specific implementation of one company or another) is one of those algorithms that truly solve a messy messy sticky problem.
Fact-checking and other "Big J Journalist" attempts to suppress "misinformation" are a very broken model.
1) It leads to less and less trust in the fact checkers which drives more people to fringe outlets etc.
2) They also are in fact quite biased (as are all humans, but it's less important if your electrician has socialist/MAGA/Libertarian biases)
3) The absolute firehose of online content means fact checkers/media etc. can't actually fact check everything and end up fact checking old fake news while the new stuff is spreading
The community notes model is inherently more democratic, decentralized and actually fair. And, this is the big one, it works! Unlike many of the other "tech will save us" ideas (e.g. web3), it is extremely effective and even-handed.
I recommend reading the Birdwatch paper [0], it's quite heartening and I'm happy more tech companies are moving in that direction
[0] https://github.com/twitter/communitynotes/blob/main/birdwatc...
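For anyone who won't click through: the paper's core "bridging" move is to fit each rating as a global mean plus a rater intercept, a note intercept, and a dot product of latent viewpoint factors, then rank notes by the note intercept alone. A minimal sketch of that idea - the hyperparameters, the tiny dataset, and the function name are mine, illustrative rather than X's production values:

    # Bridging sketch: rating ~ mu + b_user + b_note + f_user . f_note.
    # A note only earns a high intercept b_note if raters call it helpful
    # beyond what viewpoint agreement (the factor term) already explains.
    import numpy as np

    def note_scores(ratings, n_users, n_notes, dim=1, lam=0.03, lr=0.05, epochs=400):
        """ratings: list of (user, note, value), value 1.0 = helpful, 0.0 = not."""
        rng = np.random.default_rng(0)
        mu, bu, bn = 0.0, np.zeros(n_users), np.zeros(n_notes)
        fu = rng.normal(0, 0.1, (n_users, dim))
        fn = rng.normal(0, 0.1, (n_notes, dim))
        for _ in range(epochs):
            for u, n, v in ratings:
                err = v - (mu + bu[u] + bn[n] + fu[u] @ fn[n])
                mu += lr * err
                bu[u] += lr * (err - lam * bu[u])
                bn[n] += lr * (err - lam * bn[n])
                fu[u], fn[n] = (fu[u] + lr * (err * fn[n] - lam * fu[u]),
                                fn[n] + lr * (err * fu[u] - lam * fn[n]))
        return bn  # higher intercept ~ helpful across viewpoints

    # Note 0 is praised only by faction A (users 0-1); note 1 by everyone.
    ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 0.0), (3, 0, 0.0),
               (0, 1, 1.0), (1, 1, 1.0), (2, 1, 1.0), (3, 1, 1.0)]
    print(note_scores(ratings, n_users=4, n_notes=2))  # note 1 outranks note 0

That intercept-vs-factor split is why a partisan pile-on doesn't translate directly into a shown note: agreement the factors can explain away earns the note nothing.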
I am sure Mark wanted to do this for quite some time. Community Notes have been a way to let the audience do the fact checking for them, in exchange for a "sense of pride and accomplishment", instead of paying millions or billions to vendors and alienating half of the audience. The timing is obviously to placate Trump.
The effectiveness of Community Notes is up for debate. I have personally seen some really brutally honest or hilarious fact checks (check out Community Notes Violations on Twitter), but I still feel it can be brigaded by trolls to say the inverse is the truth. I have an anecdotal example from recent memory: a post from someone commenting on the new Superman trailer, with a shot of Corenswet as Clark Kent, gushing about how much he looked like Superman. I saw a humorous community note on that post claiming the person in the image is not Superman but Clark Kent, and that they are separate people.
To me this raises the question: couldn't Community Notes potentially be overwhelmed by trolls to claim a falsehood as the truth for more nefarious reasons? (This may have happened already, though I have not seen it yet.)
Zuckerberg knows which way the winds are blowing in the US capital and is ensuring he is aligned with them so as to avoid political blowback on his company.
I suspect the changes to the fact checking / free speech will align with Trump's political whims. Thus fact checking will be gone on topics like vaccines, trans people, threats from immigrants, etc.
While the well documented political censorship at Meta affecting Palestine will remain because it does align with Trump's political whims...
- https://www.hrw.org/news/2023/12/20/meta-systemic-censorship...
- https://www.theguardian.com/technology/article/2024/may/29/m...
It was evident that Mark Zuckerberg / Meta would have to once again "adapt" to another Trump presidency, but this is much more explicit than I expected, wow.
hopefully I stop getting in trouble for reposting things verifiable in the public record that other people spoke about in 2018, and stop being banned for supporting capital punishment, a thing legal in the US, the native state of the brand.
Wow, so many warnings for the future... They didn't intend to, but FB now has some responsibility for what's generated on it, as one of the most massive sources of info on the planet...
Community notes is maybe the only good thing to happen to the microstructure of social media in years so I'm vaguely in favour of this.
The official fact checking stuff is far too easily captured, it was like the old blue checks — a handy indicator of what the ancien regime types think.
I’m less concerned by the change from fact checking to community notes, because Meta had often neutered the ability of their fact checkers anyway.
What I am concerned about is their allowance of political content again.
Between genocides and misinformation campaigns, meta has shown that the idea of the town square does not scale. Not with their complete abandonment of any kind of responsibility to the social construct of its actual users.
Meta are an incredibly poor steward of human mental health. Their algorithms have been shown to create intense feedback loops that have resulted in many deaths, yet they continue down the march of bleeding as much out of people as possible.
I was recently browsing FB for the first time in months, and didn't see a peep from fact checkers, despite all the garbage-tier content FB is forcing into my feed including things like "see how this inventors new car makes fossil fuels and batteries obsolete". I spent most of my time on the site clicking "hide all from X", where X is some suggested page I never expressed interest in. The "shorts" on the site are always clickbaity boob-featuring things that I have no interest in either. The site is disgusting and distracting from any practical use, i.e. keeping in touch with friends, which is what I used to use it for.
It's a welcome move as this "fact checkers" thing was doomed to fail, mostly because "who decides what the truth is, and who fact checks the fact checkers?".
Sad thing is, this move isn't motivated by Mark Zuckerberg having a eureka moment and now trying to seek out the truth to build a better product for humankind.
This move is motivated by Mark realizing he is now on the wrong side of American politics, being left behind by the Trump/Musk duo.
It's just cheaper. That's the most important thing for corporations. It's also harder to accuse them of bias. Personally, I'm a little dubious about the effectiveness of fact checkers on people's opinions. If someone is a dullard who is willing to believe the most absurd propaganda or every conspiracy theory that exists, a fact checker won't solve the problem; they are used to being told that they are wrong. Of course they could just shadowban this content, but in the end they profit from it.
>Ending Third Party Fact Checking Program, Moving to Community Notes
CNotes were extremely successful on X.
The problem with censorship, and why Digg and Reddit died as platforms, is that you end up with second-order consequences. The anti-free-speech people will always deeply analyze their opponents' speech to find a violation of the rules.
They try to make rules that sound reasonable but go beyond Section 230. No being anti-LGBT, for example. But then every joke, miscommunication, etc. leads to bans. You also ban entire cultures with this rule. I've had bans because I meant to add NOT to my one sentence, but failed to do so.
Then, when it comes to politics, you've banned entire swaths of people/viewpoints. There's no actual meaningful conversation happening on Reddit.
Reddit temporarily influenced politics in this way. In a recent election a politician built a platform that mirrored the subreddit. There were polls, and if you were to go by Reddit... the liberals were about to take at least a minority government, if not a majority.
What actually happened? The platform was bizarre and very out of touch with the province. They got blasted in the election. The incumbent majority got stronger.
> Starting in the US, we are ending our third party fact-checking program and moving to a Community Notes model.
The Community Notes model works great on X at dealing with misinformation. More broadly, this is a vindication of the principle that putatively neutral "expert" institutions cannot be trusted unless they're subject to democratic checks and balances.
I know some of those fact checkers. They are career journalists and the bar to tag a post as disinformation is extremely high.
To tag a post, they need to produce several pages of evidence, taking several days of work to research and document. The burden of proof is in every way on the fact checkers, not the random Facebook poster.
Generalizing this work as politically biased is a purposeful lie.
Removing the politics from this is rather impossible, because it was so deliberately timed and explicitly positioned as political. But as a PM addressing the pure product question, I’d say it’s an unnecessarily risky product move. You’ve basically forgone the option to use humans professionally incentivized to follow guidelines, and decided to 100% crowdsource your moderation to volunteers (for amplification control, not just labeling, btw). Every platform is different, but the record of such efforts in other very high volume contexts is mixed at best, particularly in responding to well financed amplification attacks driven by state actors. Ultimately this is not a decision most any experienced PM would make, exactly because the risk is huge and the upside low. X’s experience with crapification would get any normal PM swift and permanent retirement (user base down roughly 60%, valuation down $30B - how’s that look on your resume?). So I go back to the beginning - this is plutocrats at play, and not even remotely in the domain of a carefully considered product decision.
It would have been a perfect opportunity to -add- community notes, study which worked side by side, and choose the better of the two; instead, evidently, Musk and Trump pulled Zuck aside and told him to shape up and join the billionaire oligarchs club or face the consequences of a partisan DoJ and SEC.
The litmus test of this is whether they roll it out globally. If they do, Meta truly has seen the light; if they don't, this is just a cynical attempt to butter up Trump in case he regulates them into oblivion (as one could argue they deserve).
> drugs, terrorism, child exploitation
https://en.wikipedia.org/wiki/Rohingya_genocide doesn't fit neatly into any of those, but it does fit more into immigration, at least in the rhetoric (the Rohingya have long been present in Myanmar but have been denied citizenship by the government):
> We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.
...and in Facebook's warped views, the ones that permitted the Facebook-Cambridge Analytica data scandal, that means ready to be boosted by the algorithm.
My door to Meta is closed and will never reopen, no matter what.
Facebook has cost me all my friends.
WhatsApp sells my phone number.
Threads banned me for commenting too much without giving it my phone number.
Facebook keeps or kept censoring my posts.
Fuck Meta forever.
Both the far-right and far-left live off misinformation, but right now the far-right is experiencing a renaissance, and tech moguls are bending the knee to be on good terms with the leaders.
MAGA and European far-right politicians have been moaning for ages that fact checking is "politically biased". The Biden laptop controversy was the catalyst for this.
If you think this move exists in a vacuum or is actually about "getting back to their roots with free speech", you're wrong. Alongside Dana White joining the board[0], it's clear that this is solely about currying favor with the incoming administration.
Free expression my ass. Freedom of speech is not about protecting speech you agree with.
[0] https://www.npr.org/2025/01/06/nx-s1-5250310/meta-dana-white...
> We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
My mom and my wife’s mom both have remarked in the last year that they’re upset with speech policing. My mom can’t say things about immigration that she thinks as an immigrant, and my mother in law is censored on gender issues despite having been married to a transgender person in the 1990s. They’re not ideological “free speech” people. Neither are political, though both historically voted left of center “by default.”
The acceptable range of discourse on these issues in the social circles inhabited by Facebook moderators (and university staff) is too narrow, and imposing that narrow window on normal people has produced a backlash among the very people who are key users of Facebook these days (normie middle age to older people). This is a smart move by Zuckerberg.
If you want automated fact checking you need to create a god. (... and creating a human team that does the same is playing God)
If you want to identify contagious emotionally negative content you need ModernBERT + RNN + 10,000 training examples. The first two are a student project in a data science class; creating the third would wreck my mental health if I didn't load up on Paxil for a month.
The latter is bad for people whether or not it is true. If you suppressed it by a large factor (say 75%) in a network it would be like adding boron to the water in a nuclear reactor. It would reduce the negativity in your feed immediately, would reduce it further because it would stop it from spreading, and soon people would learn not to post it to begin with because it wouldn't be getting a rise out of people. (This paper https://shorturl.at/VE2fU notably finds that conspiracy theories are spread over longer chains than other posts and could be suppressed by suppressing shares after the Nth hop)
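The boron intuition is easy to simulate: treat a share cascade as a branching process with reproduction number R and damp shares past the Nth hop so the effective R drops below 1. A toy sketch with invented numbers:

    # Toy branching-process model of hop-limited share suppression.
    # R and the damping factor are invented; the point is that
    # pushing effective R below 1 makes cascades die out.
    import random

    def cascade_size(r0=1.4, max_hops=30, damp_after=None, damp=0.25, seed=1):
        rng = random.Random(seed)
        total = frontier = 1  # start from one outrage post
        for hop in range(1, max_hops + 1):
            r = r0 * damp if damp_after is not None and hop > damp_after else r0
            children = 0
            for _ in range(frontier):
                # each post spawns int(r) children, plus one more with prob frac(r)
                children += int(r) + (rng.random() < r - int(r))
            total, frontier = total + children, children
            if frontier == 0:
                break
        return total

    print(cascade_size())               # R = 1.4 throughout: cascade keeps growing
    print(cascade_size(damp_after=3))   # 75% damping after hop 3: it fizzles out

Nothing about a post's first few hops changes, so ordinary sharing feels untouched; only the long chains that the conspiracy-theory paper flags get starved.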
My measurements show Bluesky is doing this quietly, I think people are more aware that Threads does this; most people there seem to believe "Bluesky doesn't have an algorithm" but they're wrong. Some people come to Bluesky from Twitter and after a week start to confess that they have no idea what to post because they're not getting steeped in continuous outrage and provocation.
I'm convinced it is an emotional and spiritual problem. In Terry Pratchett's Hogfather the assassination of the Hogfather (like Santa Claus but he comes on Dec 32 and has his sleigh pulled by pigs) leads to the appearance of the Hair Loss Fairy and the God of Hangovers (the "Oh God") because of a conservation of belief.
Because people aren't getting their spiritual needs met you get pseudo-religions such as "evangelicals who don't go to church" (some of the most avid Trump voters) as well as transgenderists who see egg hatching (their word!) as a holy mission, both of whom deserve each other (but neither of whom I want in my feed.)
Zuck still dreams of his despotic dictatorial empire where he can enslave millions and make them all trans via Police enforcement. This move is just to stop bleeding users to X.
During the Biden administration they were expected to shift their moderation policies to fit the political ideology in the White House.
Now it's been normalized and the other party is doing it. But the news outlets have waited until now to start crying wolf?
This frustration with fact-checkers seems genuine. Mark alluded to it in https://techcrunch.com/2024/09/11/mark-zuckerberg-says-hes-d... which squares with how the Government used fact-checkers to coerce Facebook into censoring non-egregious speech (switchboarding) https://news.ycombinator.com/item?id=41370516
The obvious context is that either Meta gets out of the content moderation game voluntarily, or the incoming admin goes to war with them.
> focusing our enforcement on illegal and high-severity violations.
I imagine this will in practice determine how far they can go in the EU. Community notes, sure. No moderation? Maybe not.
This is not good imho.
The result will be even more poisonous to users.
Just like cigarette companies using chemicals in the papers so that they burn slower. Does it improve the product? Maybe, along one dimension.
We're entering a dangerous period, and it's not for anything as noble as the virtues of absolute free speech.
https://finance.yahoo.com/news/trump-warns-mark-zuckerberg-c...
Trump himself confirmed this today:
https://bsky.app/profile/atrupar.com/post/3lf66oltlvs2l
I cannot believe anyone would actually be okay with this situation.
Sure "Meta" won't, but I wouldn't be surprised if a bunch of "contributing users" end up being facebook's AI accounts
# Meta eliminating fact-checking to combat "censorship"
https://www.axios.com/2025/01/07/meta-ends-fact-checking-zuc...
Bots and gov-psyop trolls are certainly (hopefully) like 95% of the gross misinformation, right?
I'd give some reasonably trustworthy platform my passport and identity to speak only to other people who have done the same.
Also I wonder if they will be federating with truth social and gab.
This is insane and clearly a political move. Maybe we just don't require social media as a species. That might be nice.
News story about other CEOs sucking up to Trump.[2]
News story about Bezos sucking up to Trump.[3]
"The Führer is always right" [4]
[1] https://www.cnn.com/2024/12/04/business/zuckerberg-trump-mus...
[2] https://www.foxbusiness.com/media/kevin-oleary-explains-why-...
[3] https://newrepublic.com/article/188170/jeff-bezoss-shocking-...
[4] https://en.wikipedia.org/wiki/F%C3%BChrerprinzip
They also appointed Dana White, a prominent Trump supporter, to their board this week.
Their content moderation team is moving from California to Texas.
If you think all this is Meta going "neutral", you are delusional.
Zuck is making the right noises. Time will tell.
This is to please the incoming president.
They’re doing everything they can to suck up to the incoming administration.