
Some of India’s biggest news organizations are seeking to join a lawsuit against OpenAI, the US startup behind ChatGPT, over the alleged misuse of their content.
They include some of India’s oldest publications, such as The Indian Express and The Hindu, as well as the India Today Group and NDTV, which is owned by billionaire Gautam Adani.
OpenAI has denied the allegations, telling the BBC that it uses “publicly available data” in line with “widely accepted legal precedents.”
On Wednesday, OpenAI CEO Sam Altman was in Delhi, where he discussed India’s plans for a low-cost AI ecosystem with IT Minister Ashwini Vaishnaw.
He said India should be “one of the leaders of the AI revolution,” and that earlier comments he made in 2023, in which he suggested Indian companies would struggle to compete, had been taken out of context.
“India is a very important market for AI in general and OpenAI in particular,” local media quoted him as saying at the event.
The first lawsuit against OpenAI in India was filed in November by Asian News International (ANI), the country’s largest news agency.
ANI accused ChatGPT of illegally using its copyrighted material – an allegation OpenAI denies – and is seeking 20m rupees ($230,000; £185,000) in damages.
The case holds significance for OpenAI given its plans to expand in the country. According to research, India already has the largest number of ChatGPT users.
Chatbots like ChatGPT are trained on massive datasets collected by crawling the internet, and the content created by India’s roughly 450 news channels and 17,000 newspapers is a huge potential resource.
But it is not clear what material can be legally collected and used for this purpose.
OpenAI faces at least a dozen lawsuits worldwide filed by publishers, artists and news organizations.
The most prominent of these was filed by The New York Times in December 2023, with the newspaper seeking “billions of dollars” in damages from OpenAI and its backer Microsoft.
“The court’s decision holds persuasive value for other similar cases around the world,” says Vibhav Mithal, a lawyer specializing in artificial intelligence at the Indian law firm Anand and Anand.
Mithal said the verdict in the lawsuit filed by ANI could “define how these AI models will work in the future,” as it would determine whether copyrighted news content can be used to train generative AI models such as ChatGPT.
A ruling in ANI’s favor could not only trigger further legal cases but also open up the possibility of AI companies entering into licensing agreements with content creators.
A ruling in OpenAI’s favor, however, would give AI companies more freedom to use copyright-protected data to train their models, he said.

What is ANI’s case?
ANI provides news to paying subscribers and owns exclusive copyright over a vast archive of text, images and videos.
In the lawsuit filed in the Delhi High Court, ANI says OpenAI used its content without permission to train ChatGPT, claiming this improved the chatbot and benefited OpenAI.
Before filing the lawsuit, the news agency says it informed OpenAI that its content was being used illegally and offered to grant the company a license to use its data.
ANI says OpenAI turned down the offer but placed the news agency on an internal block list so that its data would no longer be collected. OpenAI also asked ANI to disable certain web crawlers so that its content would not be picked up by ChatGPT.
The news agency says that despite these measures, ChatGPT still serves its content sourced from its subscribers’ websites. This, it says, has unjustly enriched OpenAI.
ANI also states in the suit that the chatbot reproduces its content verbatim in response to certain prompts. In some cases, ANI said, ChatGPT attributed statements to the news agency that misrepresented it, harming its credibility and misleading the public.
Apart from seeking damages, ANI has asked the court to direct OpenAI to stop storing and using its work.
In its response, OpenAI has opposed the lawsuit being heard in India, arguing that the company and its servers are not located in the country and that the chatbot was not trained there.
Why news organizations are seeking to join the lawsuit
In December, the Federation of Indian Publishers, which says it represents 80% of Indian publishers, including the Indian offices of Penguin Random House and Oxford University Press, filed an application in court saying it was “directly affected” by the case and asking to be allowed to present its arguments.
A month later, the Digital News Publishers Association (DNPA), which represents major Indian news outlets, and three other media outlets filed similar applications. They argued that OpenAI had signed licensing agreements with international news publishers such as the Associated Press and the Financial Times, but had not followed a similar model in India.
The DNPA told the court that the case would affect the livelihoods of journalists and the country’s entire news industry. OpenAI, however, argues that its chatbot is not an “alternative” to news subscriptions and is not used for such purposes.
The court has not yet admitted these applications from publishers, and OpenAI has argued that it should not hear them.
However, the judges have made it clear that even if these associations are allowed to make arguments, the court will confine itself to ANI’s claims, as the other parties have not filed cases of their own.
Meanwhile, OpenAI told the BBC it is engaged in “constructive partnerships and conversations” to “work collaboratively” with news organizations around the world, including in India.

Where India’s AI regulations stand
Analysts say the lawsuits filed against OpenAI around the world could bring into focus aspects of the chatbot that have so far escaped scrutiny.
Dr. Sivaramakrishnan R Guruvayur, whose research focuses on the responsible use of artificial intelligence, said the data used to train chatbots is one such aspect.
The ANI-OpenAI case, he said, will lead to a court assessment of how such chatbots are trained.
Governments around the world are working out how to regulate AI. In 2023, Italy blocked ChatGPT, saying the chatbot’s mass collection and storage of personal data raised privacy concerns.
The European Union approved a law regulating AI last year.
The Indian government has also indicated plans to regulate AI. Before the 2024 general election, it issued an advisory saying that “under-tested” or “unreliable” AI tools should obtain government permission before being launched.
It also asked AI tools not to generate responses that are illegal in India or that “threaten the integrity of the electoral process.”