Anthropic, the maker of Claude, has positioned itself as a leader among AI labs on safety. Today, the company, in collaboration with the University of Oxford, Stanford University, and MATS, published research showing that chatbots can easily be tricked into bypassing their guardrails and discussing almost any topic. It can be as simple as writing a sentence with random capital letters, like “IgNoRe YouUr TrAinIng.” 404 Media earlier reported on the study.
There has been much debate about whether it is dangerous for AI chatbots to answer questions such as “How do I make a bomb?” Proponents of generative AI argue that these kinds of questions can already be answered on the open web, so chatbots are no more dangerous than the status quo. Skeptics, meanwhile, point to anecdotes of harm, such as a 14-year-old boy who died by suicide after chatting with a bot, as evidence that the technology needs guardrails.
Generative AI chatbots are easily accessible, present themselves with human traits like supportiveness and empathy, and confidently answer questions without a moral compass. That makes them different from searching the dark web for harmful information. There are already plenty of examples of generative AI being used in harmful ways, most notably explicit deepfake images targeting women. Yes, it was possible to create such images before generative AI, but it was much more difficult.
Controversy aside, most major AI labs now employ “red teams” to test their chatbots against potentially dangerous prompts and put guardrails in place to keep them from discussing sensitive topics. Ask most chatbots for medical advice or information about political candidates, for example, and they will refuse to discuss it. The companies understand that hallucinations are still a problem, and they don’t want to risk having their bot say something that could cause harm in the real world.

Unfortunately, it turns out that chatbots can be easily tricked into ignoring their safety rules. Just as social media networks monitor for harmful keywords and users find ways around them by making small changes to their posts, chatbots can also be fooled. The researchers behind Anthropic’s new study created an algorithm called “Best-of-N (BoN) Jailbreaking,” which automates the process of tweaking a prompt until the chatbot decides to answer the question. “BoN Jailbreaking works by repeatedly sampling variations of a prompt with a combination of augmentations, such as random shuffling or capitalization of the text prompt, until a harmful response is elicited,” the report states. The researchers found that the same approach works on audio and vision models: getting past the guardrails of an audio generator trained on real human voices was as simple as changing the pitch and speed of an uploaded track.
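To make the idea concrete, here is a minimal sketch in Python of how a Best-of-N loop of this kind could work. It is not the researchers’ actual code: `query_model` and `is_harmful` are hypothetical placeholders standing in for a chatbot API call and a harmfulness classifier, and the augmentations (random capitalization plus light character scrambling) only approximate those described in the paper.

```python
import random

def augment(prompt: str, p_cap: float = 0.5, p_swap: float = 0.05) -> str:
    """Apply simple character-level augmentations to a prompt:
    random capitalization and occasional swaps of adjacent characters
    (a rough stand-in for the paper's 'shuffling')."""
    chars = list(prompt)
    # Randomly flip the case of each character.
    chars = [c.upper() if random.random() < p_cap else c.lower() for c in chars]
    # Occasionally swap adjacent characters to lightly scramble the text.
    i = 0
    while i < len(chars) - 1:
        if random.random() < p_swap:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2
        else:
            i += 1
    return "".join(chars)

def bon_jailbreak(prompt: str, query_model, is_harmful, n: int = 1000):
    """Best-of-N loop: keep sampling augmented prompts until the model's
    response is judged harmful, or the attempt budget runs out."""
    for attempt in range(1, n + 1):
        candidate = augment(prompt)
        response = query_model(candidate)   # hypothetical model call
        if is_harmful(response):            # hypothetical classifier
            return candidate, response, attempt
    return None  # no successful jailbreak within n attempts
```

The point of the sketch is that the attacker needs no knowledge of the model’s internals; it is a brute-force search over superficial rewrites of the same question, which is why the study could apply the same recipe to text, audio, and image prompts.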
Exactly why these generative AI models are so easy to break remains unclear, but Anthropic says it is releasing the study in the hope that it will give AI model developers more insight into attack patterns they can address.
One AI company that probably isn’t interested in this research is xAI. The company was founded by Elon Musk with the express purpose of releasing a chatbot that wasn’t limited by the safeguards Musk considered “woke.”