Incident 174: Fake LinkedIn Profiles Created Using GAN Photos

Description: Researchers at Stanford notified LinkedIn's staff of more than a thousand inauthentic LinkedIn profiles using allegedly GAN-generated photos, many of which were subsequently removed for violating rules against creating fake profiles and falsifying information.
Alleged: Unknown developed and deployed an AI system, which harmed LinkedIn users.

Suggested citation format

Lam, Khoa. (2022-02-28) Incident Number 174. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
174
Report Count
4
Incident Date
2022-02-28
Editors
Sean McGregor, Khoa Lam

Incident Reports

At first glance, Renée DiResta thought the LinkedIn message seemed normal enough.

The sender, Keenan Ramsey, mentioned that they both belonged to a LinkedIn group for entrepreneurs. She punctuated her greeting with a grinning emoji before pivoting to a pitch for software.

"Quick question — have you ever considered or looked into a unified approach to message, video, and phone on any device, anywhere?"

DiResta wasn't interested and would have ignored the message entirely, but then she looked closer at Ramsey's profile picture. Little things seemed off in what should have been a typical corporate headshot. Ramsey was wearing only one earring. Bits of her hair disappeared and then reappeared. Her eyes were aligned right in the middle of the image.

"The face jumped out at me as being fake," said DiResta, a veteran researcher who has studied Russian disinformation campaigns and anti-vaccine conspiracies. To her trained eye, these anomalies were red flags that Ramsey's photo had likely been created by artificial intelligence.

That chance message launched DiResta and her colleague Josh Goldstein at the Stanford Internet Observatory on an investigation that uncovered more than 1,000 LinkedIn profiles using what appear to be faces created by artificial intelligence.

Social media accounts using computer-generated faces have pushed Chinese disinformation; harassed activists; and masqueraded as Americans supporting former President Donald Trump and independent news outlets spreading pro-Kremlin propaganda.

NPR found that many of the LinkedIn profiles seem to have a far more mundane purpose: drumming up sales for companies big and small. Accounts like Keenan Ramsey's send messages to potential customers. Anyone who takes the bait gets connected to a real salesperson who tries to close the deal. Think telemarketing for the digital age.

By using fake profiles, companies can cast a wide net online without beefing up their own sales staff or hitting LinkedIn's limits on messages. Demand for online sales leads exploded during the pandemic as it became hard for sales teams to pitch their products in person.

More than 70 businesses were listed as employers on these fake profiles. Several told NPR they had hired outside marketers to help with sales. They said they hadn't authorized any use of computer-generated images, however, and many were surprised to learn about them when NPR asked.

NPR has not independently verified who created the profiles or images, or found anyone who authorized them to be used. Nor has NPR found any illegal activity.

But these computer-generated LinkedIn profile photos illustrate how a technology that has been used to propagate misinformation and harassment online has made its way to the corporate world.

"It looks like somebody we know"

From a business perspective, making social media accounts with computer-generated faces has its advantages: It's cheaper than hiring multiple people to create real accounts, and the images are convincing.

A recent study found faces made by AI have become "indistinguishable" from real faces. People have just a 50% chance of guessing correctly whether a face was created by a computer — no better than flipping a coin.

"If you ask the average person on the internet, 'Is this a real person or synthetically generated?' they are essentially at chance," said Hany Farid, an expert in digital media forensics at the University of California, Berkeley, who co-authored the study with Sophie J. Nightingale of Lancaster University.

Their study also found people consider computer-made faces slightly more trustworthy than real ones. Farid suspects that's because the AI sticks to the most average features when creating a face.

"That face tends to look trustworthy, because it's familiar, right? It looks like somebody we know," he said. He worries that the proliferation of AI-generated content could augur a new era of online deception, using not just still images, but also audio and video "deepfakes."

After the Stanford researchers alerted LinkedIn about the profiles, LinkedIn said it investigated and removed those that broke its policies, including rules against creating fake profiles or falsifying information. LinkedIn did not give details about how it conducted its investigation.

"Our policies make it clear that every LinkedIn profile must represent a real person. We are constantly updating our technical defenses to better identify fake profiles and remove them from our community, as we have in this case," LinkedIn spokesperson Leonna Spilman said in a statement. "At the end of the day it's all about making sure our members can connect with real people, and we're focused on ensuring they have a safe environment to do just that."

Searching for any evidence Keenan Ramsey is who she claims to be

At first glance, Ramsey's profile looks like many others on LinkedIn: the bland headshot with a slightly stiff smile; a boilerplate description of RingCentral, the software company where she says she works; and a brief job history. She claims to have an undergraduate business degree from New York University and gives a generic list of interests: CNN, Unilever, Amazon, philanthropist Melinda French Gates.

But there were oddities in the photo: the single earring and strange hair, the placement of her eyes, the blurry background. Alone, any of these clues might be explained away, but together, they aroused DiResta's suspicions.

"The positioning of the features in the face is something where if you've seen these enough times, you just become familiar with it," DiResta said.

The technology most likely used to create Ramsey's photo, known as a generative adversarial network, or GAN, has been around only since 2014, but in that time has rapidly become better at creating lifelike faces by training on large datasets of real people's photos. Today, websites allow anyone to download computer-generated faces for free.
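
For readers curious about the mechanics, a GAN trains two networks in opposition: a generator maps random noise to an image, while a discriminator learns to tell generated images from real ones, and each improves by trying to beat the other. The PyTorch sketch below is a minimal toy version of that loop, not the pipeline behind these profile photos; every size and hyperparameter, and the random stand-in "dataset," are assumptions, and real face generators such as StyleGAN use far larger convolutional networks trained on millions of photos.

```python
# Minimal GAN training loop, illustrative only. Requires PyTorch.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random "noise" input to the generator
IMG_DIM = 32 * 32  # toy flattened image size (real GANs use conv nets)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),  # outputs a fake "image" in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                   # real-vs-fake logit
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in for a batch of real photos; a real pipeline loads a dataset.
    real = torch.rand(16, IMG_DIM) * 2 - 1

    # 1) Train the discriminator to tell real images from generated ones.
    fake = generator(torch.randn(16, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(16, 1)) +
              loss_fn(discriminator(fake), torch.zeros(16, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(16, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(16, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```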

"In the course of my work, I look at a lot of these things, mostly in the context of political influence operations," DiResta said. "But all of a sudden, here was a fake person in my inbox reaching out to me."

To confirm whether Ramsey was indeed a "fake person," NPR dug into the background described on her LinkedIn profile. RingCentral doesn't have any record of an employee named Keenan Ramsey. Neither does Language I/O, one of the previous employers she listed. And "NYU's records do not reflect anyone named Keenan Ramsey receiving an undergraduate degree of any type," university spokesperson John Beckman told NPR.

DiResta initially thought Ramsey's message might be a phishing attempt — trying to trick her into revealing sensitive information. She grew even more suspicious when she received an identical LinkedIn message — including the same emojis — from someone else claiming to be a RingCentral employee and whose profile photo also looked computer-generated.

Then she got an email from a third RingCentral employee, referencing Ramsey's LinkedIn message. But when she looked up this one's name, it appeared to belong to a real person who worked at the company.

Intrigued, DiResta and Goldstein, a postdoctoral fellow at Stanford, started scouring LinkedIn for profiles like Ramsey's.

"In the span of a few weeks, we found more than a thousand accounts that appear to be fake accounts with GAN-generated images," Goldstein said. "And when we searched for these personas on the internet, we didn't find any evidence of them in other places, which is rare."

The profiles they spotted had other patterns in common too. Many described their jobs with variations on titles like business development manager, sales development executive, growth manager, and demand generation specialist.

They often had a brief list of two or three former employers, sometimes well-known names like Amazon and Salesforce, with no details about those experiences. When NPR reached out to some of the companies listed as former employers, none had records of any of the supposed employees working there.

A lot of the profiles also sported strikingly similar educational credentials. For example, some claimed to have received bachelor's degrees in business administration — including from schools, such as Columbia University, that don't offer an undergraduate business degree.

NPR contacted 28 universities about 57 of the profiles. Of the 21 schools that responded, none had records of any of the supposed graduates.

"This is not how we do business," RingCentral says

Of course, people do pad their resumes, and there's no guarantee that just because someone's LinkedIn profile says they work at a company, they really do. But the emergence of apparently false personas using computer-generated photos takes the deception to new heights on a professional social network like LinkedIn, where people frequently send messages to people they don't know when they're looking for work, recruiting job candidates or just networking.

"The expectation when you're on social media platforms is that you're dealing with other humans," said Bonnie Patten, executive director of the nonprofit watchdog Truth in Advertising. "And that you're not dealing with an AI-generated persona that's being manipulated by someone behind the curtain."

Of the profiles the Stanford researchers identified, 60 claimed to be employees of RingCentral. But the company says none of them has ever worked there. The person who emailed DiResta following up on Ramsey's LinkedIn message was a real RingCentral employee but left the company in February (this person did not respond to NPR's attempts to contact them).

RingCentral said it had hired other companies to reach out to potential customers and set up meetings with RingCentral's in-house salespeople — what's known in the business as "lead generation."

And RingCentral says one of these outside vendors created fake profiles, although it declined to name the vendor. NPR has not been able to confirm the identity of the vendor or who created the profiles.

Heather Hinton, RingCentral's chief information security officer, said she was not aware that anyone was making fictitious LinkedIn profiles on RingCentral's behalf and did not approve of the practice.

"This is not how we do business," she told NPR in an interview. "This was for us a reminder that technology is changing faster than even those of us who are watching it can keep up with. And we just have to be more and more vigilant as to what we do and what our vendors are going to do on our behalf."

RingCentral spokesperson Mariana Leventis said in a statement: "While this may have been an industry accepted practice in the past, going forward we do not think this is an acceptable practice, and is counter to our commitment to our customers. We are taking specific steps to update our approach to lead generation and to educate our people on what is and is not acceptable in terms of how we communicate with customers and partners."

One CEO says, "I thought they were real people"

Several of the other companies listed as current employers on the seemingly fake profiles told NPR the same thing: They used outside vendors to pitch potential customers on LinkedIn.

Bob Balderas, CEO of Bob's Containers in Austin, Texas, told NPR he had hired a firm named airSales to drum up business for his small startup, which repurposes shipping containers for homes and offices.

Balderas says he knew airSales was creating LinkedIn profiles for people who described themselves as business development representatives for Bob's Containers. But, he said, "I thought they were real people who worked for airSales."

Balderas said he was not comfortable with any use of AI-generated photos. "We are consumer focused. This doesn't create trust," he said. He said Bob's Containers stopped working with airSales before NPR inquired about the profiles.

AirSales CEO Jeremy Camilloni confirmed that Bob's Containers was a client. He said airSales hires independent contractors to provide marketing services and has "always been clear" with its clients about that.

Camilloni said these contractors may create LinkedIn profiles "at their own discretion," but the company doesn't require it or get involved.

And he said he points contractors to LinkedIn's terms of service. "To my knowledge, there are no specific rules for profile pictures or the use of avatars," he said, asserting "this is actually common among tech users on LinkedIn."

He added, "If this changes, we'll advise our contractors accordingly."

LinkedIn says any inauthentic profiles, including those using pictures that do not represent a real user, go against its rules. "Do not use an image of someone else, or any other image that is not your likeness, for your profile photo," its Professional Community Policies page states.

Selling LinkedIn "avatars" for $1,300 a month

Fake profiles are not a new phenomenon on LinkedIn. Like other social networks, it has battled against bots and people misrepresenting themselves. But the growing availability and quality of AI-generated photos creates new challenges for online platforms.

LinkedIn removed more than 15 million fake accounts in the first six months of 2021, according to its most recent transparency report. It says the vast majority were detected during signup, and most of the rest were found by its automatic systems, before any LinkedIn member reported them.

Spilman, the LinkedIn spokesperson, says the company is "constantly working to improve our models to ensure we are catching and removing profiles that use computer-generated images."

These days, many more companies are looking for ways to find customers online.

"Traditional business-to-business sales has been meet in person: I meet you at a conference, you wine and dine them, you try to develop a personal relationship," said Hee Gun Eom, co-founder and CEO of Salezilla, a company that specializes in automated email marketing.

But that all changed during the pandemic. "A lot of prospecting and new business development has gone digital — on social media, LinkedIn, email," he said. "We just saw a huge boost in people trying to send emails or wanting to create new businesses through virtual means." (Salezilla does not offer LinkedIn campaigns and says it does not use AI-generated images.)

NPR attempted to contact more than a dozen companies that offer LinkedIn marketing services to other businesses and were listed as employers on profiles identified by the Stanford researchers.

One of those companies, Renova Digital, advertised on its website a "ProHunter" package that includes two bots, or "fully branded avatar profiles," and unlimited messages for customers willing to pay $1,300 a month. The company removed the description of its services and pricing from its website after NPR asked about them.

Renova Digital founder Philip Foti told NPR in an email that he tested AI-generated photos in the past but has stopped doing so. "We decided that it wasn't coherent with our values and not worth the marketing gains," he wrote.

In addition to taking down most of the profiles identified by the Stanford researchers, LinkedIn also removed the pages of two lead-generation companies listed on many of those profiles: LIA, based in Delhi, India, and San Francisco-based Vendisys.

For $300 a month, LIA customers can pick one "AI-generated avatar" from hundreds that are "ready-to-use," according to LIA's website, which was recently scrubbed of all information except its logo. LIA did not respond to multiple requests for comment. Vendisys CEO Erik Paulson declined to comment.

As prosaic as it is to use computer-generated profiles to sell things, the spread of the technology worries digital forensics expert Farid. As artificial intelligence advances, he and other researchers expect it to become harder to detect computer-created images with the naked eye — not to mention fake audio and video, like the heavily manipulated video that circulated on social media recently purporting to show Ukrainian President Volodymyr Zelenskyy calling on his soldiers to surrender.

Computer-generated faces are "the canary in the coal mine," Farid said. "It's the beginning of what is coming next, which is full blown audio-video deepfakes targeted to a specific person."

That smiling LinkedIn profile face might be a computer-generated fake

It has been reported that the profiles were used for marketing and sales purposes.

An investigation by researchers at Stanford Internet Observatory uncovered more than 1,000 LinkedIn profiles using facial images that appear to have been created using artificial intelligence.

According to NPR, researcher Renée DiResta was contacted by someone on LinkedIn, with little things appearing off about the profile image. For DiResta, the face “jumped out at me as being fake” for reasons including the central positioning of the eyes in the image and vague backgrounds.

This prompted her to begin an investigation with her colleague Josh Goldstein on the number of computer-generated – or deepfake – images on LinkedIn profiles.

Deepfakes use a form of artificial intelligence to combine and superimpose existing images and videos to make fake images of people or make it look like a person has said or done something they have not.

While there have been examples of deepfakes used as a source of humour, there have also been fears that this technology could be used to discredit individuals or as a tool to interfere in elections.

Now it appears the technology has entered the corporate world. NPR found that many of these profiles with AI-generated images appear to be for marketing and sales purposes. When someone connects with the fake profile, they’ll end up speaking to a real salesperson.

NPR suggested this tactic could allow companies to “cast a wide net online” without having to employ more staff.

Several of the companies listed as employers on the profiles with AI-generated images told NPR that they used outside vendors to pitch potential customers on LinkedIn.

One of these vendors is AirSales, which said it hires independent contractors to provide marketing services and that these contractors may make LinkedIn profiles “at their own discretion”.

“To my knowledge, there are no specific rules for profile pictures or the use of avatars,” AirSales CEO Jeremy Camilloni told NPR. “This is actually common among tech users on LinkedIn.”

LinkedIn’s professional community policies state that it does not allow fake profiles or entities on its platform. This includes using images of someone else or “any other image that is not your likeness” for profile photos.

In general, the company says that users should not post deepfake images or videos or “otherwise post content that has been manipulated to deceive”.

NPR technology correspondent Shannon Bond said on Twitter that LinkedIn has removed most of the profiles found during the investigation and is updating its defences to catch fake accounts.

A community report on LinkedIn’s transparency page said it removed more than 15m fake accounts in the first half of 2021, with most of these stopped by automated defences.

However, it can be difficult for people to spot a computer-generated image. A study released last month saw participants examine similar facial images to decide which one was real and which was a deepfake. The participants had an average accuracy of 48.2pc, slightly lower than if the results were based on chance.

Last year, Cork teen Greg Tarr was named the overall winner in 2021’s BT Young Scientist and Technology Exhibition for his work in developing an improved method to detect deepfakes.

Researchers find 1,000 LinkedIn profiles using computer-generated faces

Have you ever ignored a seemingly random LinkedIn solicitor and been left with a weird feeling that something about the profile just seemed…off? Well, it turns out, in some cases, those sales reps hounding you might not actually be human beings at all. Yes, AI-generated deepfakes have come for LinkedIn and they’d like to connect.

That’s according to research by Renée DiResta of the Stanford Internet Observatory, detailed in a recent NPR report. DiResta, who made a name for herself trudging through torrents of Russian disinformation content in the wake of the 2016 election, said she became aware of a seeming phenomenon of fake, computer-generated LinkedIn profile images after one particularly strange-looking account tried to connect with her. The user, who reportedly tried to pitch DiResta on some unimportant piece of software, used an image with strange incongruities that stood out to her as odd for a corporate photo. Most notably, DiResta says she noticed the figure’s eyes were aligned perfectly in the middle of the image, a tell-tale sign of AI-generated images. Always look at the eyes, fellow humans.

“The face jumped out at me as being fake,” DiResta told NPR.

From there, DiResta and her Stanford colleague Josh Goldstein conducted an investigation that turned up over 1,000 LinkedIn accounts using images that they say appear to have been created by a computer. Though much of the public conversation around deepfakes has warned of the technology’s dangerous potential for political misinformation, DiResta said the images, in this case, seem overwhelmingly designed to function more like sales and scam lackeys. Companies reportedly use the fake images to game LinkedIn’s system, creating alternate accounts to send out sales pitches without running up against LinkedIn’s limits on messages, NPR notes.

“It’s not a story of mis- or disinfo, but rather the intersection of a fairly mundane business use case w/AI technology, and resulting questions of ethics & expectations,” DiResta wrote in a tweet. “What are our assumptions when we encounter others on social networks? What actions cross the line to manipulation?”

In a statement sent to Gizmodo, LinkedIn said it had investigated and removed accounts that violated its policies around using fake images.

“Our policies make it clear that every LinkedIn profile must represent a real person. We are constantly updating our technical defenses to better identify fake profiles and remove them from our community, as we have in this case,” a LinkedIn spokesperson said. “At the end of the day it’s all about making sure our members can connect with real people, and we’re focused on ensuring they have a safe environment to do just that.”

Deepfake Creators: Where’s The Misinformation Hellscape We Were Promised?

Misinformation experts and political commentators forewarned a type of deepfake dystopia for years, but the real-world results have, for now at least, been less impressive. The internet was briefly enraptured last year with a fake TikTok video featuring someone pretending to be Tom Cruise, though many users were able to spot the non-humanness of it right away. This, and other popular deepfakes (like one supposedly starring Jim Carrey in The Shining, or one depicting an office full of Michael Scott clones) feature clearly satirical and relatively innocuous content that doesn’t quite sound the “Danger to Democracy” alarm.

Other recent cases however have tried to delve into the political morass. Previous videos, for example, have demonstrated how creators were able to manipulate a video of former President Barack Obama to say sentences he never actually uttered. Then, earlier this month, a fake video pretending to show Ukrainian president Volodymyr Zelenskyy surrendering made its rounds through social media. Again though, it’s worth pointing out this one looked like shit. See for yourself.

Deepfakes, even of the political bent, are definitely here, but the society-upending consequences some feared have not yet come to pass, an apparent bummer leaving some post-U.S. election commentators to ask, “Where Are the Deepfakes in This Presidential Election?”

Humans Are Getting Worse At Spotting Deepfake Images

Still, there’s a good reason to believe all that could change…eventually. A recent study published in the Proceedings of the National Academy of Sciences found computer-generated (or “synthesized”) faces were actually deemed more trustworthy than headshots of real people. For the study, researchers gathered 400 real faces and generated another 400 extremely lifelike headshots using neural networks. They used 128 of these images to test a group of participants on whether they could tell the difference between a real image and a fake. A separate group of respondents was asked to judge how trustworthy they viewed the faces, without any hint that some of the images were not of humans at all.

The results don’t bode well for Team Human. In the first test, participants were only able to correctly identify whether an image was real or computer-generated 48.2% of the time. The group rating trustworthiness, meanwhile, gave the AI faces a higher average trustworthiness score (4.82) than the human faces (4.48).
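
As a quick back-of-the-envelope check (not the paper’s own analysis), you can ask whether 48.2% accuracy is statistically distinguishable from coin-flipping by running an exact binomial test. The Python sketch below does this; the trial count is a made-up stand-in, since the study’s real design pairs specific participants with specific image sets.

```python
# Rough check, not the study's analysis: is 48.2% accuracy distinguishable
# from chance? Model each judgment as an independent fair coin flip and run
# an exact two-sided binomial test. The trial count is hypothetical.
from math import comb

def binom_two_sided_p(successes, trials, p=0.5):
    """Exact two-sided binomial p-value against chance accuracy p."""
    mean = trials * p
    dev = abs(successes - mean)
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(trials + 1)
               if abs(k - mean) >= dev)

trials = 1000                      # hypothetical number of judgments
successes = round(0.482 * trials)  # 48.2% correct, as reported
print(f"p-value vs. chance: {binom_two_sided_p(successes, trials):.3f}")
# Prints roughly 0.27: at this sample size, 48.2% is consistent with guessing.
```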

“Easy access to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns,” the researchers wrote. “We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits.”

Those results are worth taking seriously. They raise the possibility of meaningful public uncertainty around deepfakes, one that risks opening a Pandora’s box of complicated new questions around authenticity, copyright, political misinformation, and big “T” truth in the years and decades to come.

In the near term though, the most significant sources of politically problematic content may not necessarily come from highly advanced, AI-driven deepfakes at all, but rather from simpler so-called “cheap fakes” that manipulate media with far less sophisticated software, or none at all. Examples include a 2019 viral video showing a supposedly hammered Nancy Pelosi slurring her words (that video was actually just slowed down by 25%) and one of a would-be bumbling Joe Biden trying to sell Americans car insurance. That case was actually just a man poorly impersonating the president’s voice dubbed over the actual video. While those are wildly less sexy than some deepfake of the Trump pee tape, they both gained massive amounts of attention online.

Move Over Global Disinformation Campaigns, Deepfakes Have a New Role: Corporate Spamming

LinkedIn reportedly says it has investigated and removed those profiles that broke its policies.

Deepfake technology involves using artificial intelligence (AI) to generate convincing images or videos of made-up or real people. It is surprisingly accessible and has been put to various uses, including in entertainment, misinformation, harassment, propaganda and pornography.

Researchers Renée DiResta and Josh Goldstein of the Stanford Internet Observatory found that this technology is now being used to boost sales for companies on LinkedIn, according to an NPR report.

The researchers found over a thousand profiles with AI-generated profile pictures on LinkedIn. The technology likely used is a generative adversarial network (GAN), which pits two neural networks against each other.

LinkedIn says it has since investigated and removed those profiles that broke its policies, including rules against creating fake profiles or falsifying information, the publication reported.

The investigation began when DiResta received a message from a seemingly fake account. The eyes were aligned perfectly in the middle of the image, the background was vague, and one earring was missing.

Delhi Based Firm Offers AI Generated Profiles

One of the lead-generation companies that LinkedIn reportedly removed after Stanford Internet Observatory's research was Delhi-based LIA.

The company offered hundreds of "ready-to-use" AI-generated avatars for $300 a month each, according to LIA's website, from which all information was recently removed, the report mentioned.

“It’s not a story of mis or disinfo, but rather the intersection of a fairly mundane business use case with AI technology, and resulting questions of ethics and expectations,” DiResta wrote in a Tweet.

"What are our assumptions when we encounter others on social networks? What actions cross line to manipulation?" she added.

Recently, a deepfaked video made an appearance on social media, in which Ukrainian president Volodymyr Zelenskyy appeared to ask Ukrainian troops to lay down their arms.

It's not very convincing, but deepfakes are getting better and people seem to be getting worse at identifying them.

A recent study published in the Proceedings of the National Academy of Sciences found that people have just a 50 percent chance of guessing correctly whether a face was generated by artificial intelligence. AI-synthesised faces were found to be indistinguishable from real faces and, somehow, more trustworthy.

Deepfakes Invade LinkedIn: Delhi Firm Offers ‘Ready to Use’ AI Made Profiles
