You might have used LinkedIn to hunt for a new job, or keep in touch with colleagues from the early days of your career. But LinkedIn has been using you, too.
Last week, the professional network added a new data privacy setting that caught many by surprise. By default, it granted itself permission to use information shared on the service to train its artificial intelligence. Unless you toggle this new setting to off, LinkedIn considers everything fair game --- your posts, articles, even your videos.
To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select "Data privacy," and turn off the option under "Data for generative AI improvement."
Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren't retroactive. LinkedIn says it has already begun training its AI models with user content, and that there's no way to undo it.
Spokesman Greg Snapper said LinkedIn uses people's data to train AI to "help people all over the world create economic opportunity" by fleshing out tools to help them find new jobs and learn new skills. "If we get this right, we can help a lot of people at scale," he said.
LinkedIn, then, would clearly love it if its AI features landed you a job where you were fairly valued for the quality of your work. But it's hard not to flip the question around: Is LinkedIn fairly valuing the work you've contributed to improving its AI? Work you were not directly compensated for, and may not have known the company was using?
For some, that will seem like a fair trade-off. Others are unsettled by how LinkedIn handled the situation.
"Hard-to-find opt-out tools are almost never an effective way to allow users to exercise their privacy rights," said F. Mario Trujillo, a staff attorney at the Electronic Frontier Foundation. "If companies really want to give users a choice, they should present users with a clear 'yes' or 'no' consent choice."
LinkedIn isn't alone in turning public user data into AI training material. Your chats with OpenAI's ChatGPT and Google's Gemini are used to improve those chatbots' performance over time, and similarly require you to opt out rather than in. And during a recent hearing in the Australian Parliament, Meta's director of privacy policy, Melinda Claybaugh, confirmed that the company had been scraping public photos and text on Facebook and Instagram to train its AI models for years longer than expected.
LinkedIn, which is owned by Microsoft, says it has been notifying users about its AI data policy through emails, text messages and banners on its website. But it still caught many users off guard --- and the move appears to give users less time to respond than even its parent company has offered.
In August, Microsoft announced that it would begin training its AI Copilot tool based on the interactions people had with it, along with data collected from usage of its Bing search engine and its Microsoft Start news feed.
But unlike LinkedIn, Microsoft said it would inform consumers of the option to opt out of data collection in October, and would begin training its AI models only 15 days after that option became available.
Why didn't LinkedIn do a similarly informed rollout? Snapper wouldn't say whether one had been considered. "As a company, we're really just focused on 'How can we do this better next time?'" he said.