Karachi Chronicle
AI

Meta says it may stop development of AI systems it deems too risky

By Adnan Mahar | February 3, 2025 | 3 Mins Read


Meta CEO Mark Zuckerberg has pledged to build artificial general intelligence (AGI), roughly defined as AI that can accomplish any task a human can. But a new policy document suggests there are certain scenarios in which Meta would not release a highly capable AI system it had developed internally.

The document, which Meta calls its Frontier AI Framework, identifies two types of AI systems the company considers too risky to release: "high-risk" and "critical-risk" systems.

As Meta defines them, both "high-risk" and "critical-risk" systems are capable of aiding cybersecurity, chemical, and biological attacks. The difference is that critical-risk systems could result in a "catastrophic outcome" that cannot be mitigated in the proposed deployment context, whereas high-risk systems might make an attack easier to carry out, but not as reliably or dependably as a critical-risk system would.

What sort of attacks are we talking about here? Meta gives a few examples, such as the "automated end-to-end compromise of a best-practice-protected corporate-scale environment" and the "proliferation of high-impact biological weapons." The company acknowledges that the list of possible catastrophes in the document is far from exhaustive, but says it includes those Meta believes to be "the most urgent" and most plausible to arise as a direct result of releasing a powerful AI system.

According to the document, Meta classifies a system's risk not by any single empirical test but based on input from internal and external researchers, subject to review by "senior-level decision-makers." Why? Meta says it does not believe the science of evaluation is "sufficiently robust as to provide definitive quantitative metrics" for deciding a system's riskiness.

If Meta determines a system is high-risk, the company says it will limit access to the system internally and will not release it until it implements mitigations to "reduce risk to moderate levels." If, on the other hand, a system is deemed critical-risk, Meta says it will implement unspecified security protections to prevent the system from being exfiltrated and will halt development until the system can be made less dangerous.

Meta's Frontier AI Framework, which the company says will evolve with the changing AI landscape and which it had earlier committed to publishing ahead of the France AI Action Summit this month, appears to be a response to criticism of the company's "open" approach to system development. Meta has embraced a strategy of making its AI technology openly available (though not open source by the commonly understood definition), in contrast to companies like OpenAI that opt to gate their systems behind an API.

For Meta, the open-release approach has proven to be both a blessing and a curse. The company's family of AI models, called Llama, has racked up hundreds of millions of downloads. But Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.

In publishing its Frontier AI Framework, Meta may also be aiming to contrast its open AI strategy with that of Chinese AI firm DeepSeek. DeepSeek likewise makes its systems openly available, but its AI has few safeguards and can easily be steered to generate toxic and harmful outputs.

"[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI," Meta writes in the document, "it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk."




Source link

Adnan Mahar

Adnan is a passionate doctor from Pakistan with a keen interest in exploring the world of politics, sports, and international affairs. As an avid reader and lifelong learner, he is deeply committed to sharing insights, perspectives, and thought-provoking ideas. His journey combines a love for knowledge with an analytical approach to current events, aiming to inspire meaningful conversations and broaden understanding across a wide range of topics.


© 2025 karachichronicle. Designed by karachichronicle.