At the 2024 Grammy Awards, one name aims to strike a different kind of chord: Watson.
On what is known as Music’s Biggest Night, the Recording Academy and IBM will use generative AI to create content for the Grammy Awards’ social channels, giving music fans a way to engage with AI-generated content, both companies told Digiday.
The new ‘AI Stories with IBM watsonx’ tool creates text, images, animations and videos based on a variety of sources and real-time news before and after the awards. In addition to the Recording Academy’s editorial team using AI Stories, fans can use AI widgets on the Grammy Awards website to generate text and integrate it with existing visual assets. The site will also host livestreamed coverage of the February 4 ceremony and the days around it.
The plan is also to use AI Stories to create content that shares insights about more than 100 Grammy-nominated and winning artists. Training data was drawn from a variety of publicly available sources, including the Recording Academy’s own editorial content and historical data, artist pages, Wikipedia profiles, and articles about music and the Grammys.
“The reality is we have millions of news articles stored in our content systems,” said Ray Starck, vice president of digital strategy at the Recording Academy. “It’s one thing to run a search and see the results, but it’s another to look at that content and take advantage of the opportunity when something might be a hot topic right now within your industry, and to be able to do it quickly.”
In an interview with Digiday, Starck said his goal is to experiment with AI and create more real-time content during the 2024 Grammy Awards. Even before discussing the use of Watson with IBM, the Academy was already developing product ideas that used AI for content creation while protecting intellectual property. The Recording Academy and IBM will also have personnel on hand during the ceremony to ensure accuracy and update information as news breaks.
Starck sees generative AI as a complement to the way editorial teams research topics and create content. Rather than letting users enter arbitrary prompts, the Academy also developed pre-generated prompts to help limit the risks associated with model output and intellectual property concerns.
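The article’s description suggests a familiar guardrail pattern: users pick from a fixed menu of vetted templates instead of typing free-form text. Here is a minimal Python sketch of that idea; the template names and wording are hypothetical, not the Academy’s actual prompts.

```python
# Hypothetical sketch of a pre-generated prompt menu: users select a vetted
# template by ID instead of submitting arbitrary text, which narrows what
# the model can be asked to produce.
PROMPT_TEMPLATES = {
    "nominee_history": "Summarize the Grammy history of {artist}, using only the provided context.",
    "fun_fact": "Share one fact about {artist} drawn from the provided context.",
}

def build_user_prompt(template_id: str, artist: str) -> str:
    if template_id not in PROMPT_TEMPLATES:
        raise ValueError("Only pre-approved prompt templates are allowed.")
    # Only the artist name is interpolated; the instruction text stays fixed.
    return PROMPT_TEMPLATES[template_id].format(artist=artist)

print(build_user_prompt("fun_fact", "Example Artist"))
```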
“Our core content strategy was to look at all the great content that we have in our content management system and look back at our history, our records (and) our award data,” Starck said.
The experiment comes as the music industry grapples with the potential impact of generative AI on artists and record labels. Last year, Universal Music and two other music publishers sued Anthropic, accusing the AI startup of violating copyright law by training an AI model on copyrighted lyrics and distributing them through its Claude chatbot.
The use of generative AI in Grammy Awards content comes less than a year after the Recording Academy announced new rules regarding AI-generated music. Last summer, Recording Academy CEO Harvey Mason Jr. said that songs made with AI tools for elements like vocals and instrumentals could still qualify for nomination if artists can show that humans are “contributing creatively in the appropriate categories.”
This AI effort is the latest evolution in a seven-year partnership between the Recording Academy and IBM, which plans to use the event to promote its various watsonx products. The partnership also marks the Academy’s first use of large language models to create AI-generated content. IBM did not disclose the terms of the deal, but Noah Syken, IBM’s vice president of sports and entertainment, said the partnership includes a financial investment “that goes both ways.”
“The key is how do we understand the engagement we’re trying to build with a 50-year-old like me or an 18-year-old,” Syken told Digiday. “What language resonates with them? And how can we train our models to understand the context of where the information is being delivered?”
AI Stories was built on IBM’s watsonx platform and Meta’s open-source Llama 2 large language model, using a process called retrieval-augmented generation (RAG) that helps steer AI models toward data from music-focused sources. IBM also used a technique called few-shot learning, which is useful for adapting AI models with small amounts of data. As part of training the model to provide accurate information, IBM also trained it to generate the correct pronouns for each artist based on how AI Stories identifies that artist.
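As a rough illustration of how RAG and few-shot prompting fit together, here is a minimal, self-contained Python sketch: a toy retriever ranks music-focused snippets by keyword overlap, and the prompt is assembled from few-shot examples plus retrieved context before being sent to a model such as Llama 2. The corpus, examples and helper names are hypothetical; this is not IBM’s actual pipeline.

```python
# Toy retrieval-augmented generation (RAG) sketch. A real system would use
# vector embeddings and watsonx tooling; keyword overlap stands in here.
CORPUS = [
    {"source": "grammy.com", "text": "Example Artist won Best New Artist at the 2020 Grammy Awards."},
    {"source": "wikipedia", "text": "Example Artist is a singer-songwriter who debuted in 2018."},
]

def retrieve(question: str, corpus, k: int = 2):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

# Few-shot examples show the model the expected answer style and, per the
# article, can also demonstrate using each artist's correct pronouns.
FEW_SHOT = (
    "Q: How many Grammys has Sample Artist won?\n"
    "A: Sample Artist has won 3 Grammys; they first won in 2015.\n"
)

def build_prompt(question: str) -> str:
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in retrieve(question, CORPUS))
    return f"{FEW_SHOT}\nContext:\n{context}\n\nQ: {question}\nA:"

# The assembled prompt would then be sent to the LLM (e.g., Llama 2).
print(build_prompt("Who won Best New Artist at the 2020 Grammy Awards?"))
```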
The challenge in creating a tool for the Grammy Awards was how to combine Llama 2’s base knowledge with music-specific information into a feature that is creative and free-form, yet accurate. IBM engineer and inventor Aaron Baughman offered a liquid analogy to explain how the RAG approach prioritizes data sources depending on the type of content the AI model is asked to generate.
“Think of it like having multiple buckets of water. We try to fill the buckets with factual data first,” Baughman told Digiday. “If there’s still room left, we’ll pour in more information from Wikipedia or something. And if we still have tokens left for context, we’ll pour in more water.”
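Baughman’s bucket analogy maps naturally onto a fixed context window filled in priority order. The sketch below, with hypothetical source names and a crude word-count tokenizer, shows one way to pour in trusted factual data first and fall back to secondary sources only while token budget remains.

```python
# Sketch of the "buckets of water" idea: fill a fixed token budget with the
# most trusted sources first. Sources and counts here are hypothetical; a
# real system would measure tokens with the model's own tokenizer.
def count_tokens(text: str) -> int:
    return len(text.split())  # Crude stand-in for a real tokenizer.

def fill_context(buckets, budget: int = 4096):
    """buckets: list of (source, snippets), ordered most to least trusted."""
    context, used = [], 0
    for source, snippets in buckets:
        for snippet in snippets:
            cost = count_tokens(snippet)
            if used + cost > budget:
                return context  # Budget spent: stop pouring.
            context.append((source, snippet))
            used += cost
    return context

buckets = [
    ("award records", ["Example Artist has four career Grammy wins."]),
    ("wikipedia", ["Example Artist was born in 1990 and debuted in 2018."]),
]
# With a tiny budget, only the highest-priority bucket makes it in.
print(fill_context(buckets, budget=12))
```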