DeepSeek's cost-effective large language model (LLM) swept the internet and sent shockwaves through the tech industry. However, cybersecurity researchers have raised concerns that the Chinese startup's AI chatbot service is being exploited by threat actors to generate "malicious content."
According to a report from cybersecurity company Check Point, threat actors have abused DeepSeek's AI technology through advanced jailbreaking techniques to develop infostealers, bypass banking security protections, and run mass spam distribution campaigns.
The Qwen series of AI models, developed by Chinese technology giant Alibaba, is also showing signs of misuse, with minimal restrictions in place, the report says.
Since DeepSeek's meteoric rise, experts have expressed concerns that safety and risk mitigation are taking a back seat in the high-stakes race for AI supremacy. Last October, OpenAI confirmed that ChatGPT, its popular AI chatbot, had been used by threat actors to create new malware and make existing malware more efficient.
Key findings
Presenting blurred screenshots as evidence, the report highlighted the following ways in which AI models developed by DeepSeek and Alibaba are being used for malicious purposes:
Infostealer development: "Threat actors have reportedly created infostealers using Qwen, focusing on capturing sensitive information from unsuspecting users."
Bypassing banking protections: "Multiple discussions and shared techniques for bypassing banking systems using DeepSeek were discovered, indicating the potential for serious financial theft."
Mass spam distribution: "Cybercriminals are using the three AI models together to troubleshoot and optimise scripts for mass spam distribution."
However, the Check Point report did not specify the research methods used to detect these incidents, nor did it disclose the scale of the operations or other details.
After finding ways to manipulate the DeepSeek and Qwen models into producing uncensored content, threat actors shared this information online with others, according to the report.
This information included jailbreak prompts such as "Do Anything Now" and "Plane Crash Survivors," used to manipulate responses from DeepSeek's AI model.
While jailbreaking is an umbrella term, in this context it refers to a variety of techniques by which users manipulate AI models into generating uncensored or unrestricted content. "This tactic has become a favoured technique among cybercriminals, allowing them to leverage AI capabilities for malicious intent," the report states.
It pointed out that the new AI models are attracting interest from attackers at various skill levels, especially lower-skilled attackers who can leverage existing scripts and tools without a deep understanding of AI. The report did not mention the identities of the threat actors or their countries of origin.
We have reached out to DeepSeek, OpenAI, and Alibaba for comment. This report will be updated if they respond.