Karachi Chronicle
  • Home
  • AI
  • Business
  • Entertainment
  • Fashion
  • Politics
  • Sports
  • Tech
  • World

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe to my newsletter for new posts and tips. Let's stay updated!

Tech

More speeches and fewer mistakes

By Adnan Mahar · January 7, 2025 · 8 Mins Read


Meta’s platforms are built to be places where people can express themselves freely. That can be messy. On platforms where billions of people can have a voice, all the good, bad and ugly is on display. But that’s free expression.

In a 2019 speech at Georgetown University, Mark Zuckerberg argued that free expression has been the driving force behind progress in American society and around the world, and that suppressing speech, however well-intentioned, often ends up reinforcing existing institutions and power structures instead of empowering people. He said: “Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous.”

In recent years, partly in response to societal and political pressure to moderate content, we have developed increasingly complex systems for managing content across our platforms. This approach has gone too far. However well-intentioned many of these efforts were, they expanded over time to the point where we were making too many mistakes, frustrating users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and when they do, we are often too slow to respond.

We want to fix that and return to our fundamental commitment to free expression. We are now making some changes to stay true to that ideal.

Ending the third-party fact-checking program and moving to Community Notes

When we launched our independent fact-checking program in 2016, we were very clear that we did not want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time: handing that responsibility over to independent fact-checking organizations. The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral misinformation, so they could judge for themselves what they saw and read.

Things didn’t work out that way, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact-check and how. Over time, we ended up with too much content being fact-checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. All too often, a program intended to inform became a tool to censor.

We are now changing this approach. We will end the current third-party fact-checking program in the United States and instead begin transitioning to a Community Notes program. We have seen this approach work on X, where the community decides when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for others to see. We think this could be a better, less biased way of achieving our original goal of providing people with information about what they are seeing.

  • Once the program is up and running, Meta won’t write Community Notes or decide which ones show up. They will be written and rated by contributing users.
  • Just like on X, Community Notes will require agreement between people with a range of perspectives to help prevent biased ratings.
  • We intend to be transparent about how different viewpoints inform the Notes displayed in our apps, and we are working on the right way to share this information.
  • You can sign up today (Facebook, Instagram, Threads) for the opportunity to be among the first contributors to this program as it becomes available.
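The cross-perspective agreement requirement described above can be illustrated with a toy rating rule. This is not Meta’s or X’s actual algorithm; the perspective groups, vote counts and threshold below are invented purely for the sketch.

```python
# Toy illustration of cross-perspective agreement for Community Notes.
# NOT the real algorithm: groups and thresholds are invented.

def note_is_helpful(ratings, min_per_group=2):
    """A note qualifies only if raters from at least two different
    perspective groups each supply enough 'helpful' votes, so
    one-sided support is not enough."""
    groups = {}
    for rater_group, helpful in ratings:
        groups.setdefault(rater_group, []).append(helpful)
    return len(groups) >= 2 and all(
        sum(votes) >= min_per_group for votes in groups.values()
    )

# A note backed by only one group does not qualify...
one_sided = [("A", True), ("A", True), ("A", True)]
# ...but agreement across groups does.
bridged = [("A", True), ("A", True), ("B", True), ("B", True)]
print(note_is_helpful(one_sided))  # False
print(note_is_helpful(bridged))    # True
```

The key design idea is that raw vote counts alone never surface a note; agreement must bridge groups that usually disagree.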

We plan to phase Community Notes in gradually in the US over the next couple of months and will continue to improve it over the course of the year. As we make the transition, we will get rid of our fact-checking controls, stop demoting fact-checked content and, instead of overlaying full-screen interstitial warnings you have to click through before you can even see a post, use a much less obtrusive label indicating that there is additional information for those who want to see it.

Allowing more speech

Over time, we have developed complex systems for managing content on our platforms, and they have become increasingly complicated to enforce. As a result, we have been over-enforcing our rules, limiting legitimate political debate, censoring too much trivial content and subjecting too many people to frustrating enforcement actions.

For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of the content produced each day, we think one to two out of every ten of them may have been mistakes (i.e., the content may not have actually violated our policies). This does not account for the actions we take to tackle large-scale adversarial spam attacks. We plan to expand our transparency reporting and regularly share numbers on our mistakes so that people can track our progress. As part of that, we will also include more detail on the mistakes we make when enforcing our spam policies.
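To put those figures in perspective, here is a quick back-of-the-envelope calculation. The daily removal count is an assumption for illustration (the text only says “millions”); the error rates come from the paragraph above.

```python
# Rough scale of the error rates described above. The 3 million daily
# removals figure is assumed for illustration only.
daily_removals = 3_000_000
low, high = 0.10, 0.20  # "one to two out of every ten" actions may be mistakes

print(f"{int(daily_removals * low):,} to {int(daily_removals * high):,} "
      f"possible mistaken removals per day")
```

Even at the low end of the stated error rate, an assumed three million daily removals would imply hundreds of thousands of mistaken actions per day, which is why the error rate matters more than the sub-1% share of total content.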

We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We are getting rid of a number of restrictions on topics like immigration, gender identity and gender that are frequent subjects of political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms. These policy changes may take a few weeks to fully take effect.

We’re also changing how we enforce our policies to reduce the kinds of mistakes that account for the vast majority of censorship on our platforms. Until now, we have used automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn’t have been. So we will focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams. For less severe policy violations, we will rely on someone reporting an issue before we take any action. We also demote content when our systems predict it may violate our standards. We are in the process of getting rid of most of these demotions and requiring greater confidence that the remaining content is in violation. And we plan to tune our systems to require a much higher degree of confidence before content is removed. As part of these changes, we are moving the trust and safety teams that write our content policies and review content out of California to Texas and other US locations.
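The tiered enforcement described above can be sketched as a simple routing function. This is a hypothetical illustration, not Meta’s implementation: the category labels and the confidence threshold are invented.

```python
# Hypothetical sketch of tiered enforcement: proactive automated
# scanning only for high-severity categories, user reports required for
# everything else, and a high confidence bar before removal.

HIGH_SEVERITY = {"terrorism", "child_exploitation", "drugs", "fraud", "scams"}
REMOVAL_CONFIDENCE = 0.95  # assumed threshold, not a published figure

def enforcement_action(category, model_confidence, user_reported):
    if category in HIGH_SEVERITY:
        # High-severity violations are still scanned for proactively.
        return "remove" if model_confidence >= REMOVAL_CONFIDENCE else "review"
    # Less severe violations wait for a user report before any action.
    if not user_reported:
        return "no_action"
    return "remove" if model_confidence >= REMOVAL_CONFIDENCE else "review"

print(enforcement_action("spam", 0.99, user_reported=False))       # no_action
print(enforcement_action("terrorism", 0.99, user_reported=False))  # remove
```

The design trade-off the post describes maps directly onto the two branches: a confident automated model alone is no longer enough to act on low-severity content.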

People often have the opportunity to appeal our enforcement decisions and ask us to take another look, but the process can be frustratingly slow and doesn’t always reach the right outcome. We are adding staff to this work, and in more cases we now require multiple reviewers to reach a determination before something is taken down. We are working on ways to make account recovery more straightforward, testing facial recognition technology, and we have started using AI large language models (LLMs) to provide a second opinion on some content before we take enforcement actions.

A personalized approach to political content

Since 2021, we’ve made changes to reduce the amount of civic content people see (posts about elections, politics and social issues), based on feedback from users who told us they wanted to see less of it. But this was a pretty blunt approach. We are going to start phasing this back into Facebook, Instagram and Threads with a more personalized approach, so that people who want to see more political content in their feeds can.

We’re continually testing how we deliver personalized experiences and have recently conducted tests around civic content. As a result, we are going to start treating civic content from the people and Pages you follow on Facebook more like any other content in your feed, ranking it based on explicit signals (for example, liking a piece of content) and implicit signals (such as how long people view a post) that help us predict what is meaningful to each person. We will also recommend more political content based on these personalized signals, and we are expanding the options people have to control how much of this content they see.
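As a toy illustration of ranking from explicit and implicit signals, the snippet below scores posts using invented signal names and weights; the post does not disclose Meta’s actual features, weights or model.

```python
# Toy civic-content ranking from explicit and implicit signals.
# Signal names, weights and the user-level control are all invented
# for illustration.

def civic_score(post, prefs):
    explicit = post["likes_from_user"]       # e.g. liking the content
    implicit = post["view_seconds"] / 30.0   # e.g. how long it was viewed
    # A per-user control scales how much civic content surfaces at all.
    return prefs["civic_interest"] * (2.0 * explicit + implicit)

posts = [
    {"id": "a", "likes_from_user": 1, "view_seconds": 45},
    {"id": "b", "likes_from_user": 0, "view_seconds": 5},
]
prefs = {"civic_interest": 0.8}
ranked = sorted(posts, key=lambda p: civic_score(p, prefs), reverse=True)
print([p["id"] for p in ranked])  # ['a', 'b']
```

Setting the hypothetical `civic_interest` control to zero would suppress civic posts entirely, which mirrors the per-user dial the paragraph describes.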

These changes are an attempt to return to the commitment to free expression that Mark Zuckerberg set out in his Georgetown speech. That means being vigilant about the impact our policies and systems have on people’s ability to make their voices heard, and having the humility to change our approach when we know we’re getting things wrong.



Source link

Adnan Mahar

Adnan is a passionate doctor from Pakistan with a keen interest in exploring the world of politics, sports, and international affairs. As an avid reader and lifelong learner, he is deeply committed to sharing insights, perspectives, and thought-provoking ideas. His journey combines a love for knowledge with an analytical approach to current events, aiming to inspire meaningful conversations and broaden understanding across a wide range of topics.


© 2025 karachichronicle. Designed by karachichronicle.