LinkedIn has paused the use of data from UK users in training its artificial intelligence (AI) models, following concerns raised by the Information Commissioner’s Office (ICO).
The career-oriented social networking platform, owned by Microsoft, had by default opted users worldwide into having their data used to train its AI systems. On Friday, however, the ICO said it was satisfied after LinkedIn confirmed it had halted the practice for UK users.
“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users,” said Stephen Almond, the ICO’s executive director.
In response, LinkedIn stated that it welcomes further discussions with the ICO on the issue.
The use of user-generated content as training data for AI models is becoming increasingly common among tech giants, including LinkedIn. Generative AI tools, such as chatbots like OpenAI’s ChatGPT and image generators like Midjourney, rely on vast amounts of text and image data to improve their capabilities.
However, a LinkedIn spokesperson told the BBC that the company believes users should have control over how their data is utilised. Consequently, UK users have been provided with the option to opt out of having their data used for AI training.
“We’ve always used some form of automation in LinkedIn products, and we’ve always been clear that users have the choice about how their data is used,” the spokesperson added.
Social media platforms, where users share personal and professional updates, offer a wealth of data that can make AI systems sound more natural. LinkedIn noted that its generative AI services can help users by assisting with tasks such as drafting résumés or crafting messages to recruiters.
“The reality of where we’re at today is a lot of people are looking for help to get that first draft of that résumé… to help craft messages to recruiters to get that next career opportunity,” LinkedIn’s spokesperson explained. “At the end of the day, people want that edge in their careers, and what our gen-AI services do is help give them that assist.”
LinkedIn’s global privacy policy acknowledges that user data is used to improve its AI services, and a help article further clarifies that data is processed when users engage with features such as post-writing suggestions. However, this data processing no longer applies to users in the UK, the European Union (EU), the European Economic Area (EEA), and Switzerland.
Several other major tech platforms, including Meta and X (formerly Twitter), are similarly seeking to use user-generated content for AI development. They too have faced regulatory challenges, particularly in the UK and EU, where stringent privacy rules limit the collection and use of personal data.
In June, Meta suspended plans to use the public posts, comments, and images of UK users to train its AI systems, following criticisms and concerns raised by the ICO. After further engagement with the regulator, Meta has since re-notified users of Facebook and Instagram in the UK about its AI plans, and clarified the opt-out process.
LinkedIn will likely face a similar regulatory path before it can resume its AI training activities with UK user data.
“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset,” said the ICO’s Stephen Almond. He added that the regulator would “continue to monitor” developers, including Microsoft and LinkedIn, to ensure UK users’ data rights are upheld.