Karachi Chronicle
AI

ChatGPT-4o "Time Bandit" jailbreak vulnerability can be abused to create malware

By Adnan Mahar · January 31, 2025 · 3 Mins Read


ChatGPT-4o jailbreak vulnerability

A new jailbreak vulnerability in OpenAI's ChatGPT-4o, dubbed "Time Bandit," has been abused to bypass the chatbot's built-in safety functions.

Using this vulnerability, attackers can manipulate the chatbot into producing illegal or dangerous content, such as malware, phishing scams, and material supporting other malicious activities.

The jailbreak has raised alarm in the cybersecurity community, since threat actors could exploit it at scale for malicious purposes.



Mechanism of “Time Bandit”

The "Time Bandit" jailbreak, disclosed by researcher David Kuszmar, confuses the AI by anchoring its responses to a specific historical period. Attackers can exploit the vulnerability in two main ways: through direct interaction with the AI, or through the search function integrated into ChatGPT.


Direct interaction: Here the attacker starts a session by prompting the AI with questions about historical events, periods, or contexts. For example, the chatbot might be asked to simulate assisting with a task from the 1800s. Once a historical frame is established in the conversation, the attacker can gradually pivot the discussion toward illicit topics. By maintaining the historical context, the attacker exploits ambiguity in the chatbot's response procedures and leads it to violate its safety guidelines.

Exploiting the search function: The attacker can also manipulate ChatGPT's search function, which retrieves information from the web. The attacker instructs the AI to search for topics tied to a specific historical era, then uses follow-up searches and carefully crafted prompts to steer it toward illicit subject matter. This approach exploits confusion about the timeline to trick the AI into providing prohibited content.

The bug was first reported to the CERT Coordination Center (CERT/CC) by cybersecurity researcher David Kuszmar. During controlled testing, researchers were able to reproduce the jailbreak multiple times. Once initiated, ChatGPT would at times continue generating illicit content even after detecting and deleting a specific prompt that violated its usage policy.

Most notably, the jailbreak proved more effective when historical time frames from the 1800s and 1900s were used.

Exploiting the vulnerability through prompts did not require user authentication, although a logged-in account was needed to use the search function. This dual method of exploitation demonstrates the versatility of the "Time Bandit" vulnerability.

The implications of this vulnerability are extensive. By bypassing OpenAI's strict safety guidelines, an attacker could use ChatGPT to generate step-by-step instructions for creating weapons, drugs, or malware.

It could also be used for phishing scams, social-engineering scripts, or the mass production of other harmful content.

Abusing a legitimate and widely trusted tool such as ChatGPT can obscure malicious activity, making it harder to detect and prevent.

Experts have warned that in the hands of organized cybercriminals, "Time Bandit" could facilitate large-scale malicious operations and pose a serious threat to cybersecurity and public safety.

OpenAI is already acting to address the vulnerability. In a statement, an OpenAI spokesperson emphasized the company's commitment to safety: "It is very important to us that we develop our models safely. We don't want our models to be used for malicious purposes. We appreciate you for disclosing your findings. We are constantly working to make our models safer and more robust against exploits, including jailbreaks, while also maintaining the models' usefulness and task performance."

In a related recent incident, the DeepSeek R1 model was jailbroken to generate ransomware development scripts. "DeepSeek R1 provided detailed instructions and generated malicious scripts designed to extract credit card data from specific browsers and send it to a remote server."





Adnan Mahar

Adnan is a passionate doctor from Pakistan with a keen interest in exploring the world of politics, sports, and international affairs. As an avid reader and lifelong learner, he is deeply committed to sharing insights, perspectives, and thought-provoking ideas. His journey combines a love for knowledge with an analytical approach to current events, aiming to inspire meaningful conversations and broaden understanding across a wide range of topics.
