The Google AI logo displayed on a smartphone with Gemini in the background. NurPhoto via Getty Images
AI assistants like Google’s Gemini and OpenAI’s ChatGPT deliver enormous benefits, but they are also being exploited by cybercriminals, including state-sponsored hackers, to bolster their attacks.
Google’s latest report reveals that Advanced Persistent Threat (APT) groups from several countries, including Iran, China, North Korea and Russia, are experimenting with Gemini to streamline cyber operations. From reconnaissance on potential targets to researching vulnerabilities and writing malicious scripts, AI is making these attacks more sophisticated.
This revelation is not isolated. OpenAI reported similar findings in October 2024, confirming that state-affiliated actors are actively trying to use generative AI tools for malicious purposes.
Compounding the problem, alternative AI models that lack robust security controls are emerging, giving cybercriminals powerful, unrestricted tools for hacking, phishing and malware development.
This trend is a major concern for consumers, too: smaller cybercriminals and fraudsters are using AI to make phishing attacks more persuasive, automate scams and break through personal security defenses. Understanding these risks and adopting proactive defense strategies is essential to staying safe in the age of AI.
How hackers are exploiting AI for cyber attacks
AI-powered assistants provide a wealth of knowledge and automation capabilities that, in the wrong hands, can accelerate cyber threats in several ways.
Faster reconnaissance of targets
Hackers use AI to gather intelligence on individuals and businesses, analyzing social media profiles, public records and leaked databases to craft highly personalized attacks.
AI-assisted phishing and social engineering
AI can generate sophisticated phishing emails, text messages and even deepfake voice calls that are nearly indistinguishable from legitimate communications. Attackers can bypass traditional spam filters and craft persuasive messages that deceive even cautious users.
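To illustrate why traditional filters struggle, consider the hypothetical, heavily simplified rule-based check below. It catches only crude signals such as urgency keywords and links whose display name doesn’t match the destination domain; AI-written messages are polished precisely to avoid tripping rules like these. All names and the scoring thresholds here are invented for illustration, not taken from any real spam filter.

```python
import re

# Crude urgency keywords — a classic (and easily evaded) phishing signal.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}

def crude_phishing_score(text: str) -> int:
    """Count simple phishing signals in a message (higher = more suspicious).

    This is an illustrative sketch only: real mail filters use many more
    signals, and AI-generated phishing is written to evade keyword rules.
    """
    lower = text.lower()
    score = sum(1 for word in URGENCY_WORDS if word in lower)
    # Markdown-style links where the display name doesn't appear in the
    # linked domain are another crude tell.
    for display, domain in re.findall(r"\[([^\]]+)\]\(https?://([^/)\s]+)", text):
        if display.lower().replace(" ", "") not in domain.lower():
            score += 2
    return score

print(crude_phishing_score(
    "URGENT: your account is suspended. Verify immediately at "
    "[Your Bank](http://secure-login.example.net)"
))  # prints 6: four urgency words plus a mismatched link
```

A message written in calm, fluent prose with a plausible-looking link scores zero here, which is exactly the gap AI-generated phishing exploits.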
Automating malicious code development
Threat actors use AI tools for coding assistance, refining malware and creating attack scripts more efficiently. Even when AI assistants have safeguards in place, cybercriminals experiment with jailbreaks or turn to alternative models that lack such restrictions.
Identify security gaps in public infrastructure
Hackers prompt AI assistants for technical insights into software vulnerabilities, security bypasses and exploitation strategies.
Bypassing AI safeguards and jailbreaking models
Researchers and cybersecurity companies have already demonstrated how easily AI security restrictions can be bypassed. Some AI models, such as DeepSeek, have weaker safeguards and have become attractive tools for cybercriminals.
How to protect yourself from AI-driven cyber threats
While large-scale cyberattacks are often aimed at governments and businesses, consumers are not immune to AI-enhanced fraud or security breaches. Here’s how you can protect yourself from these evolving AI-driven threats:
1. Be wary of phishing and AI-generated scams
Scams generated by AI are becoming increasingly persuasive, so be cautious with unexpected emails, messages or calls. Always verify requests for personal information by contacting the organization directly.
2. Monitor your digital footprint
Hackers use AI for reconnaissance, so limit the personal information you share online. Review your social media privacy settings regularly and avoid oversharing details that could be used to craft targeted attacks.
3. Update your software and security tools
AI-driven attacks often exploit known vulnerabilities. Regularly update operating systems, browsers and applications to patch security flaws that attackers can exploit.
4. Protect your email and online accounts
Use strong, unique passwords for each account, ideally generated and stored by a reputable password manager. Enable multi-factor authentication (MFA) wherever possible, turn on alerts for suspicious login attempts and review your account activity regularly.
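To make “strong and unique” concrete, the sketch below uses Python’s standard `secrets` module (designed for cryptographically secure randomness) to generate a random password; a password manager does the equivalent for every account automatically. The 20-character length is an illustrative choice, not a requirement from the article.

```python
import secrets
import string

# Letters, digits and punctuation: 94 possible characters per position.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password.

    20 characters drawn from a 94-character alphabet gives roughly
    130 bits of entropy — far beyond what guessing attacks can search.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different on every run
```

Because each password is generated independently, a breach of one site never exposes the credentials for another, which is the whole point of uniqueness.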
5. Stay informed about AI and cybersecurity trends
Cybercriminals constantly evolve their tactics, so staying informed is essential. Follow cybersecurity news, subscribe to security alerts and educate yourself about emerging AI-related threats so you can recognize risks early.