Earlier this week, LinkedIn’s premium customers reportedly filed a proposed class action lawsuit against the platform. The customers alleged that LinkedIn shared their private messages with third parties without their permission and that the disclosed details were used to train generative artificial intelligence (Gen AI) models.
The class action lawsuit alleges that LinkedIn secretly introduced a privacy setting last August that allowed users to enable or disable the sharing of their personal data. The customers also alleged that the platform then quietly updated its privacy policy in September to say that shared data could be used to train AI models, Reuters reported.
Additionally, a "Frequently Asked Questions" hyperlink reportedly stated that opting out would not affect AI training that had already taken place.
The complaint reportedly states that LinkedIn's attempt to quietly roll out its AI training settings shows the company was aware it was violating customers' privacy and its promise to use personal information only to support and improve the platform.
The lawsuit was reportedly filed in a California federal court on behalf of premium LinkedIn customers who used InMail messages and whose personal data was shared with third parties for AI training before September 18 last year.
The lawsuit seeks unspecified damages for breach of contract and violation of California's Unfair Competition Law, as well as US$1,000 per customer for violation of the federal Stored Communications Act.
In a conversation with MARKETING-INTERACTIVE, a LinkedIn spokesperson said, “These are false claims without any basis.”
The case follows concerns raised last October by Hong Kong's privacy watchdog over LinkedIn's privacy policy, which by default allows users' data and content to be used to train Gen AI models.
The Office of the Privacy Commissioner for Personal Data (PCPD) said LinkedIn's privacy policy update had raised concerns among data protection authorities in other jurisdictions. The PCPD was also concerned about whether LinkedIn's default opt-in setting for using users' personal data to train generative AI models accurately reflected users' choices, and therefore wrote to LinkedIn to investigate the matter.
In a conversation with MARKETING-INTERACTIVE at the time, the PCPD said it had received seven complaints related to LinkedIn data privacy between October 2023 and October 7, 2024. The complainants were concerned that their personal data would be disclosed without their consent or that they would be impersonated by fake accounts.
Privacy Commissioner for Personal Data Ada Chung Lai-ling said the adjusted policy would help LinkedIn users make informed decisions about whether to allow their personal data to be used for AI training, and urged users to remain vigilant.
In response, LinkedIn told the South China Morning Post that it had notified users about the change through multiple channels, citing an earlier blog post written by LinkedIn's senior vice president and general counsel, Blake Lawit.
The blog post said the updated privacy policy covers the training of AI models used to generate content ("generative AI"), as well as security and safety measures.