Problem acknowledged after many complaints
Apple has announced an update to a new AI feature that has been generating false news alerts on the latest iPhones, though the update will not make the feature any more accurate.
Apple acknowledged the concerns for the first time yesterday, saying it was working on a software update to “further clarify” when a notification is a summary generated by Apple Intelligence.
The company had been criticized for failing to address numerous complaints about the feature, which groups and summarizes notifications so that users can quickly pick out the important details. According to Apple, it helps iPhone users concentrate.
However, the feature has generated some wildly inaccurate alerts.
The BBC complained last month after an AI-generated summary of headlines incorrectly told some readers that Luigi Mangione, the man accused of murdering UnitedHealthcare CEO Brian Thompson, had shot himself. Last week, the feature told users that Luke Littler had won the PDC World Darts Championship before the final had even started, and that Rafael Nadal had come out as gay.
The BBC is particularly concerned because the notifications appear to come from the broadcaster itself.
The BBC said on Monday that “AI-powered summaries by Apple do not reflect, and in some cases outright contradict, the original BBC content.”
“It’s important that Apple addresses these issues quickly because news accuracy is essential to maintaining trust.”
Apple said in a statement to the BBC:
“Apple Intelligence features are in beta and we are continually improving them based on user feedback.”
“A software update in the coming weeks will make it even clearer when the text you see is a summary provided by Apple Intelligence.”
A cautionary tale
Meanwhile, some users of the online book club app Fable noticed bigoted and racist language used to describe their reading choices in its “2024 Wrapped” feature.
One user was advised to “pay attention to white authors once in a while,” while another was asked, “Have you ever wanted to hear the perspective of a straight cis white person?”
Another said their penchant for romantic comedies “set the bar for my irritation meter.”
In an Instagram post this week, Fable’s head of product Chris Gallello addressed the issue with the app’s AI-generated summaries, saying that Fable had begun receiving complaints about “very bigoted and racist language”, which he said had come as a shock to the company.
“As a company, we underestimated the amount of work these models would require to work in a responsible and safe manner.”
In a follow-up video, Gallello confirmed that Fable would be removing three major features that rely on AI, including the wrapped summaries.
“It is unacceptable to include features that cause any harm to the community,” he said, acknowledging that more work is needed to ensure that AI models operate responsibly and safely.
Computing says:
Both of these stories came about because Apple and Fable pushed out AI-driven features before they were ready, and both should serve as a warning to companies rushing to launch generative AI capabilities for commercial reasons without properly testing them.
The data on which Fable’s model was trained clearly carried significant bias. Certain social media sites and some deeply unpleasant corners of the internet consist almost exclusively of “straight cis white perspectives”, and it is not difficult to see what happened: Fable underestimated the risk of biased training data distorting its models, and the impact of that prejudice.
Apple’s story is the more worrying because the misinformation appears to come from the BBC itself. Apple’s haste in making the feature available, and its slowness in addressing complaints about it, seem irresponsible at a time when public trust in traditional news sources like the BBC is already extremely low.
The fact that Apple’s response focuses on clearer attribution rather than on making the feature more accurate does not exactly inspire confidence in the company’s commitment to building features on responsible and ethical generative AI.