Suchir Balaji, a former OpenAI engineer and whistleblower who helped train the artificial intelligence systems behind ChatGPT and later said he believed those practices violated copyright law, has died, his parents and San Francisco officials said. He was 26.
Balaji worked at OpenAI for nearly four years before quitting in August. He was well regarded by colleagues at the San Francisco company, where a co-founder this week called him one of OpenAI’s strongest contributors whose work was essential to developing some of its products.
“We are devastated to learn of this incredibly sad news and our hearts go out to Suchir’s loved ones during this difficult time,” OpenAI said in a statement.
Balaji was found dead in his San Francisco apartment on November 26 in what police said appeared to be a suicide. An initial investigation found no evidence of foul play. The city’s Chief Medical Examiner’s Office confirmed the manner of death to be suicide.
His parents, Poornima Rama Rao and Balaji Ramamurthy, said they were still seeking answers, describing their son as a “happy, smart and brave young man” who loved hiking and had recently returned from a trip with friends.
Balaji grew up in the San Francisco Bay Area and was studying computer science at the University of California, Berkeley, when he first came to the fledgling AI research lab for a summer internship in 2018. He returned a few years later to work at OpenAI, where one of his first projects, called WebGPT, helped pave the way for ChatGPT.
“Suchir’s contributions to this project were essential, and it wouldn’t have succeeded without him,” OpenAI co-founder John Schulman said in a social media post memorializing Balaji. Schulman, who recruited Balaji to his team, said what made him such an exceptional engineer and scientist was his attention to detail and his ability to spot subtle bugs and logical errors.
“He had a knack for finding simple solutions and writing elegant code that worked,” Schulman wrote. “He would think through the details of things carefully and rigorously.”
Balaji later moved on to organizing the huge datasets of online writings and other media used to train GPT-4, the fourth generation of OpenAI’s flagship large language model and a basis for the company’s famous chatbot. It was that work that eventually led Balaji to question the technology he helped build, especially after newspapers, novelists and others began suing OpenAI and other AI companies for copyright infringement.
He first expressed his concerns to the New York Times, which reported on them in an October profile of Balaji.
He later told The Associated Press he would “try to testify” in the strongest copyright infringement cases, and that he considered the lawsuit filed by the New York Times last year to be the “most serious.” In a Nov. 18 court filing, lawyers for the Times named him as someone who might have “unique and relevant documents” supporting allegations of willful copyright infringement by OpenAI.
His records were also sought by lawyers in a separate lawsuit brought by book authors, including the comedian Sarah Silverman, according to court filings.
“It doesn’t feel right to be training on people’s data and then competing with them in the marketplace,” Balaji told the AP in late October. “I don’t think you should be able to do that. I don’t think you are able to do that legally.”
He told the AP he had grown increasingly disillusioned with OpenAI, especially after the internal turmoil last year in which its board of directors fired and then rehired CEO Sam Altman. Balaji said he had broad concerns about how the company’s commercial products were being rolled out, including their tendency to spout false information, known as hallucinations.
But of the “bunch of problems” that concerned him, he said, he focused on copyright as the one that “actually something could be done about.”
He acknowledged that his was an unpopular opinion within the AI research community, which is accustomed to pulling data from the internet, but said “they will have to change and it’s a matter of time.”
He had not yet been deposed, and it is unclear to what extent his disclosures will be admissible as evidence in the litigation after his death. He also published a personal blog post with his views on the topic.
Schulman, who resigned from OpenAI in August, said he and Balaji happened to leave on the same day and celebrated that night with dinner and drinks at a San Francisco bar with fellow colleagues. Another of Balaji’s mentors, co-founder and chief scientist Ilya Sutskever, had left OpenAI several months earlier, which Balaji saw as another nudge to depart.
Schulman said Balaji had told him earlier this year of his plans to leave OpenAI, and that Balaji did not believe, as others at the company seemed to, that better-than-human AI known as artificial general intelligence was right around the corner. Schulman said the younger engineer was interested in pursuing a doctorate and exploring some more off-the-beaten-path ideas about how to build intelligence.
Balaji’s family said a memorial is planned for later this month at the India Community Center in Milpitas, California, not far from his hometown of Cupertino.
In the United States, you can contact a crisis counselor by calling or texting the National Suicide Prevention Lifeline at 988, chatting at 988lifeline.org, or texting HOME to 741741. In the UK and Ireland, Samaritans can be contacted on freephone 116 123 or email jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.
The Associated Press and OpenAI have a license and technology agreement that gives OpenAI access to some of the AP’s text archives.