
Artificial intelligence company Anthropic has reached a settlement with Universal Music and other music publishers over the use of copyrighted lyrics by its AI chatbot, Claude. The agreement, approved Thursday by U.S. District Judge Eumi Lee, resolves part of an ongoing lawsuit the publishers filed last year.
The lawsuit, which is being heard in a California federal court, accuses Anthropic of improperly using lyrics from hundreds of songs by artists such as Beyoncé and the Rolling Stones to train its AI model, Claude, according to Reuters. The music publishers had previously called for “guardrails” to prevent the chatbot from generating copyrighted lyrics, and that request is reflected in the new settlement. Anthropic, however, denied the accusations and maintained that such safeguards were already in place before the legal dispute began.
The publishers had also sought a broader court order to stop Anthropic from allegedly using their lyrics in its AI training process. Although the guardrail agreement addresses part of the dispute, Judge Lee is still considering the publishers’ request for a preliminary injunction that would bar Anthropic from continuing to use the lyrics.
Anthropic confirmed in a statement that it will maintain the existing guardrails, apply them to future versions of Claude, and allow the court to resolve any future disputes over their implementation. “Claude was not designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement,” an Anthropic spokesperson said Friday. “Our decision to enter into this provision is consistent with these priorities.”
Despite the agreement, music publishers including UMG, ABKCO and Concord emphasized that the lawsuit is far from over. “While Anthropic’s provisions are a positive step forward, this litigation remains ongoing,” they said in a statement Friday. The suit accuses Anthropic of illegally copying lyrics to train Claude and alleges that the chatbot unlawfully reproduced those lyrics in response to user prompts.
As reported by Reuters, the case is part of a broader wave of litigation against AI companies accused of using copyrighted material for training without permission. Similar legal battles are unfolding involving authors, news organizations, and visual artists, all of whom claim their work was used to train AI systems without compensation or consent. Defendants, including Anthropic, argue that the use of such material constitutes “fair use.”
Source: Reuters