Google is reportedly using Anthropic’s AI model “Claude” to improve the performance of its own AI system “Gemini”. TechCrunch reports that contractors hired by Google were tasked with comparing the responses Gemini and Claude generated to the same user prompts. Their job was to evaluate the quality of each output based on factors such as truthfulness, clarity, and verbosity.
The process works like this: contractors see responses from both Gemini and Claude to a given prompt, then spend up to 30 minutes evaluating each response to determine how well it meets the criteria. This feedback helps Google identify areas where Gemini needs improvement. Interestingly, contractors began noticing unusual patterns during this evaluation process: at times, Gemini’s output included statements like “I am Claude, created by Anthropic,” raising questions about how closely connected the two systems are.
One thing that stood out in the comparison was Claude’s rigorous approach to safety. Contractors reportedly observed that Claude tended to refuse prompts it deemed unsafe and to adhere to high ethical standards, while Gemini sometimes answered those same prompts with responses the contractors judged more detailed but less stringent. In one case involving a prompt about nudity and bondage, Claude refused to engage, while Gemini’s response was flagged as a serious safety violation.
Google has built an internal platform for these comparisons that makes it easy for contractors to test and review multiple AI models side by side. However, using Claude for this purpose raises some questions: Anthropic’s terms of service explicitly prohibit accessing Claude to build or train competing AI products without authorization. Those rules clearly apply to rival companies using the model, but it’s unclear whether they also bind investors like Google, which financially backs Anthropic.
In response to this speculation, Google DeepMind spokesperson Shira McNamara clarified the situation. She emphasized that comparing model outputs is standard practice in the industry and essential to improving performance. McNamara categorically denied claims that Google used Anthropic’s Claude to train Gemini, calling such suggestions inaccurate.
For now, the episode highlights how competitive the AI industry has become, with major companies like Google combining internal development and external benchmarking to enhance their models. It also raises questions about ethical boundaries and how companies balance collaboration with competition.