highlights
OpenAI’s new ChatGPT search tool may be vulnerable to manipulation, a recent investigation has revealed.
ChatGPT can be susceptible to hidden content on web pages, a tactic known as “prompt injection.”
This hidden content can include instructions or large amounts of text designed to change the AI’s response.
OpenAI’s new ChatGPT search tool available to paying customers may be vulnerable to manipulation, a recent investigation has revealed. The search feature, which OpenAI is promoting as the default tool for users, has raised concerns about security risks that could lead to the spread of false or misleading information.
An investigation by the Guardian found that ChatGPT can be affected by hidden content on web pages, a tactic known as “prompt injection.” This hidden content can include instructions or large amounts of text designed to change the AI’s response. For example, a website may contain hidden text that prompts ChatGPT to give an overly positive review of a product, even though the actual content on the page is negative.
In one test, a fake product page for a camera was created. When the hidden text told ChatGPT to give a positive review, the AI consistently returned positive feedback, even when the page contained negative reviews.
Also read: ChatGPT Search rolls out for free to all users: Here’s what you need to know
Jacob Larsen, a cybersecurity researcher at CyberCX, warned that if this issue is not resolved, the search tool could create websites designed to deceive users. did. He also noted that OpenAI’s security team is likely working to address these vulnerabilities, as the search feature is still in its infancy and only available to premium users.
Larsen also pointed out the broader risks associated with combining search tools with large-scale language models (LLMs) like ChatGPT. Users should be careful when trusting AI-generated responses. A similar issue was highlighted recently when ChatGPT provided malicious code to crypto enthusiasts, resulting in a loss of $2,500.
Karsten Nohl, chief scientist at cybersecurity firm SR Labs, advised that AI tools should be viewed as “co-pilots” rather than completely trusted sources of information. He explained that while LLM is powerful, it lacks the judgment necessary to assess the reliability of information.
OpenAI provides a disclaimer at the bottom of every ChatGPT page, warning users that the AI may make mistakes and advising them to review important information.