Meta’s platforms are built to be places where people can express themselves freely. That can be messy. On platforms where billions of people can have a voice, all the good, bad, and ugly is on display. But that’s free expression.
In his 2019 speech at Georgetown University, Mark Zuckerberg argued that free expression has been the driving force behind progress in American society and around the world, and that inhibiting speech, however well-intentioned the reasons, often reinforces existing institutions and power structures instead of empowering people. “Some people believe giving more people a voice is driving division rather than bringing us together,” he said. “More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous.”
In recent years, partly in response to societal and political pressure to moderate content, we have developed increasingly complex systems to manage content across our platforms. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users, and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when that happens.
We want to fix that and return to our fundamental commitment to free expression. We’re now making some changes to stay true to that ideal.
Ending our third-party fact-checking program and moving to Community Notes
When we launched our independent fact-checking program in 2016, we were very clear that we didn’t want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time: to hand that responsibility over to independent fact-checking organizations. The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral misinformation, so they could judge for themselves what they saw and read.
That’s not how things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact-check and how. Over time, too much content was being fact-checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.
We are now changing this approach. We will end the current third-party fact-checking program in the United States and instead begin moving to a Community Notes program. We’ve seen this approach work on X, where the community decides when posts are potentially misleading and need more context, and people across a range of perspectives decide what sort of context is helpful for others to see. We think this could be a better, less biased way of achieving our original goal of providing people with information about what they’re seeing.
Once the program is up and running, Meta won’t write Community Notes or decide which ones show up. They will be written and rated by contributing users.
Just as on X, Community Notes will require agreement between people with a range of perspectives to help prevent biased ratings.
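To make the idea concrete, here is a minimal sketch, not Meta’s or X’s actual algorithm, of what “agreement across viewpoints” can mean in practice: a note only qualifies when raters from at least two different viewpoint clusters independently find it helpful. All function names, thresholds, and cluster labels below are illustrative assumptions.

```python
# Minimal sketch (not Meta's or X's actual algorithm) of the core idea behind
# "agreement between different viewpoints": a note only qualifies when raters
# from at least two different viewpoint clusters independently find it helpful.
# All names, thresholds, and cluster labels are illustrative assumptions.

from collections import defaultdict

def note_is_helpful(ratings, rater_viewpoint, min_per_group=3, min_helpful_share=0.7):
    """
    ratings: list of (rater_id, is_helpful) pairs for one note.
    rater_viewpoint: dict mapping rater_id -> viewpoint cluster label
                     (e.g. inferred from past rating behavior).
    """
    by_group = defaultdict(list)
    for rater_id, is_helpful in ratings:
        by_group[rater_viewpoint.get(rater_id, "unknown")].append(is_helpful)

    # Agreement must span viewpoints, not just exist within one group.
    if len(by_group) < 2:
        return False

    # Every participating cluster needs enough raters, and each cluster must
    # independently rate the note as mostly helpful.
    for votes in by_group.values():
        if len(votes) < min_per_group:
            return False
        if sum(votes) / len(votes) < min_helpful_share:
            return False
    return True

# Example: raters from two different clusters both rate the note helpful.
ratings = [("a1", True), ("a2", True), ("a3", True),
           ("b1", True), ("b2", True), ("b3", True)]
clusters = {"a1": "cluster_A", "a2": "cluster_A", "a3": "cluster_A",
            "b1": "cluster_B", "b2": "cluster_B", "b3": "cluster_B"}
print(note_is_helpful(ratings, clusters))  # True
```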
We intend to be transparent about how different perspectives affect the notes you see in the app, and we’re working on the right way to share this information.
You can sign up today (on Facebook, Instagram, and Threads) for the chance to be among the first contributors when the program becomes available.
We plan to phase in Community Notes in the U.S. over the next few months and will continue to improve it over the course of the year. As we make the transition, we will get rid of our fact-checking controls, stop demoting fact-checked content, and, instead of overlaying full-screen interstitial warnings you have to click through before you can even see a post, use a much less obtrusive label indicating that there is additional information for those who want to see it.
Allowing more speech
Over time, we have developed complex systems to manage content across our platforms, which have become increasingly complicated to enforce. As a result, we have been over-enforcing our rules, limiting legitimate political debate, censoring too much trivial content, and subjecting too many people to frustrating enforcement actions.
For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of the content produced each day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies). This does not account for the actions we take to tackle large-scale adversarial spam attacks. We plan to expand our transparency reporting and regularly share numbers about our mistakes so people can track our progress. As part of that, we’ll also include more detail on the mistakes we make when enforcing our spam policies.
We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We are removing a number of restrictions on topics such as immigration, gender identity, and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or on the floor of Congress, but not on our platforms. These policy changes may take a few weeks to fully take effect.
We’re also changing how we enforce our policies to reduce the kinds of mistakes that account for the vast majority of censorship on our platforms. Until now, we have used automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn’t have been. Going forward, we will focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud, and scams. For less severe policy violations, we will rely on someone reporting an issue before we take any action. We also demote too much content that our systems predict might violate our standards. We are in the process of getting rid of most of these demotions and requiring greater confidence that the content violates our policies for the rest. And we’re going to tune our systems to require a much higher degree of confidence before a piece of content is taken down. As part of these changes, we are moving the trust and safety teams that write our content policies and review content out of California to Texas and other U.S. locations.
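As a rough sketch of the two enforcement changes described above (proactive automated action only for high-severity categories, and a much higher confidence bar before removal), consider the following illustration. This is not Meta’s production logic; the category names, thresholds, and data structures are assumptions made for the example.

```python
# Illustrative sketch (not Meta's production logic) of the two changes described
# above: automated classifiers act proactively only on high-severity categories,
# and a takedown requires a much higher confidence score than before.
# Category names, thresholds, and the dataclass are assumptions for illustration.

from dataclasses import dataclass

HIGH_SEVERITY = {"terrorism", "child_exploitation", "drugs", "fraud", "scams"}

@dataclass
class Prediction:
    category: str       # predicted policy area
    confidence: float   # classifier confidence, 0.0 - 1.0

def enforcement_action(pred: Prediction, user_reported: bool,
                       takedown_threshold: float = 0.95) -> str:
    # Low-severity areas are no longer scanned proactively:
    # nothing happens unless a person reports the content.
    if pred.category not in HIGH_SEVERITY and not user_reported:
        return "no_action"

    # Even when eligible for enforcement, removal requires high confidence.
    if pred.confidence >= takedown_threshold:
        return "remove"

    # Below the takedown bar, send to human review instead of auto-removing.
    return "human_review"

print(enforcement_action(Prediction("scams", 0.97), user_reported=False))  # remove
print(enforcement_action(Prediction("spam", 0.80), user_reported=False))   # no_action
print(enforcement_action(Prediction("spam", 0.97), user_reported=True))    # remove
```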
People are often given the chance to appeal our enforcement decisions and ask us to take another look, but the process can be frustratingly slow and doesn’t always get to the right outcome. We’re adding extra staff to this work, and in more cases we are now requiring multiple reviewers to reach a determination before something is taken down. We are working on ways to make account recovery more straightforward, testing facial recognition technology, and we’ve started using AI large language models (LLMs) to provide a second opinion on some content before we take enforcement actions.
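The “second opinion” step can be pictured as a simple agreement requirement: content is only removed when the initial reviewer or classifier and an independent check, such as an LLM-based review, both conclude it violates policy. The sketch below is illustrative only; the post does not describe which model, prompt, or policies are involved, and llm_second_opinion is a hypothetical stand-in.

```python
# Hedged sketch of requiring agreement before a takedown: the initial decision
# must be confirmed by a second, independent opinion (a second human reviewer
# or an LLM-based check). `llm_second_opinion` is a stand-in; the post does
# not say which model or prompt is used.

from typing import Callable

def llm_second_opinion(content: str, policy: str) -> bool:
    # Stand-in for a real model call; a trivial keyword check is used here so
    # the sketch runs end to end. A real system would query an LLM instead.
    return "buy followers now" in content.lower() if policy == "spam" else False

def should_remove(content: str, policy: str,
                  first_opinion: bool,
                  second_opinion: Callable[[str, str], bool] = llm_second_opinion) -> bool:
    """Remove only when the first reviewer/classifier AND the second opinion agree."""
    return first_opinion and second_opinion(content, policy)

print(should_remove("Buy followers now!!!", "spam", first_opinion=True))       # True
print(should_remove("Here is my vacation photo", "spam", first_opinion=True))  # False
```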
A personalized approach to political content
Since 2021, we’ve made changes to reduce the amount of civic content people see (posts about elections, politics, and social issues) based on feedback from users who said they wanted to see less of this content. But this was a pretty blunt approach. We are going to start phasing this back into Facebook, Instagram, and Threads with a more personalized approach, so that people who want to see more political content in their feeds can.
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from the people and Pages you follow on Facebook more like any other content in your feed, ranking and showing it based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and give people more options to control how much of this content they see.
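One way to picture how explicit and implicit signals could feed into a personalized amount of civic content is the small sketch below. It is not Meta’s ranking system; every weight, threshold, and field name is an assumption for illustration, including the idea of blending the score with an explicit user setting.

```python
# Simplified sketch (not Meta's ranking system) of combining explicit signals
# (e.g. liking civic posts) with implicit ones (e.g. time spent viewing them)
# into a single per-person score, plus a user-controlled setting for how much
# civic content to show. All weights and field names are illustrative.

from dataclasses import dataclass

@dataclass
class CivicEngagement:
    likes: int               # explicit signal: civic posts the person liked
    views: int               # implicit signal: civic posts viewed in feed
    avg_view_seconds: float  # implicit signal: average dwell time on them

def civic_interest_score(e: CivicEngagement) -> float:
    """Blend explicit and implicit signals into a 0-1 interest estimate."""
    explicit = min(e.likes / 20.0, 1.0)                            # saturate at 20 likes
    implicit = min((e.views * e.avg_view_seconds) / 600.0, 1.0)    # saturate at 10 minutes
    return 0.6 * explicit + 0.4 * implicit                         # weights are assumptions

def civic_share_of_feed(score: float, user_setting: str = "default") -> float:
    """Translate interest plus an explicit user control into a feed share."""
    base = {"less": 0.02, "default": 0.05, "more": 0.15}[user_setting]
    return base + 0.10 * score

engaged = CivicEngagement(likes=15, views=40, avg_view_seconds=12.0)
print(round(civic_interest_score(engaged), 2))                         # 0.77
print(round(civic_share_of_feed(civic_interest_score(engaged), "more"), 2))  # 0.23
```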
These changes are an attempt to return to the commitment to free expression that Mark Zuckerberg set out in his Georgetown speech. That means being vigilant about the impact our policies and systems are having on people’s ability to make their voices heard, and having the humility to change our approach when we know we’re getting things wrong.