09/27/2024 / By Ava Grace
Professional networking platform LinkedIn is reportedly using its users’ data to train its artificial intelligence (AI) models without their consent.
The Daily Expose reported that, according to LinkedIn’s own website, the company collects a range of user data. This includes users’ posts and articles, how frequently they use the platform, their language preferences and any feedback they have sent to the company, which Microsoft acquired in December 2016.
But some users took issue with the revelation, as well as with LinkedIn’s decision to automatically enroll them in the scheme. Rachel Tobac, chairwoman of Women In Security and Privacy (WISP), was among them.
“LinkedIn is now using everyone’s content to train their AI tool. They just auto-opted everyone in,” she wrote in a post on X. “I recommend opting out now and that [organizations like LinkedIn] put an end to auto opt-in, its not cool.”
Tobac argued in a series of posts on X that social media users “shouldn’t have to take a bunch of steps to undo a choice that a company made for all of us.” The WISP chairwoman encouraged members to demand that organizations give them the option to choose whether they opt into programs beforehand.
Her posts also included a clip of LinkedIn Chief Privacy Officer Kalinda Raina explaining the rationale behind the data collection. Raina said in the video that LinkedIn uses personal data so the company and its affiliates can “improve both security and our products in the generative AI [gen-AI] space and beyond.” (Related: AI chatbot admits artificial intelligence can cause the downfall of humanity.)
Greg Snapper, a spokesman for LinkedIn, told USA Today that the company started notifying users about data being used to train its gen-AI model.
“The reality of where we’re at today is a lot of people are looking for help to get that first draft of that resume, to help write the summary on their LinkedIn profile, to help craft messages to recruiters to get that next career opportunity. At the end of the day, people want that edge in their careers and what our gen-AI services do is help give them that assist,” he said.
According to Snapper, users have choices when it comes to how their data is used – noting that the company has always been upfront about it. “We’ve always been clear in our terms of service,” he remarked. The LinkedIn spokesman also mentioned that the platform has always used some form of automation in its products, commenting that “gen-AI is the newest phase of how companies everywhere are using AI.”
While “the company claims to employ privacy-enhancing technologies to anonymize or redact personal data from its AI training sets,” users have no way to verify that LinkedIn is keeping this promise. The Daily Expose therefore shared how users can opt out of the scheme that scrapes their data for AI.
“To prevent further use of your data for AI training, navigate to your account settings, then click on Data Privacy and toggle off the Data for Generative AI Improvement option,” it said. “Note that this will not reverse the use of information already processed for AI training purposes.”
Head over to Robots.news for more stories about AI.
Watch a discussion on the possibilities of AI and how it takes our privacy away.
This video is from the ThriveTime Show channel on Brighteon.com.
Related:
Russian researchers unveil AI model that adapts to new tasks without human input.
Meta admits to training its AI models with public info from Aussie users posted SINCE 2007.
New York Times sues Microsoft, OpenAI, claiming artificial intelligence copyright infringement.