An image provided by Pindrop Security shows a fake job candidate the company dubbed “Ivan X,” a scammer using deepfake AI technology to mask his face, according to Pindrop CEO Vijay Balasubramaniyan.
Courtesy: Pindrop Security
When voice authentication startup Pindrop Security posted a recent job opening, one candidate stood out from hundreds of others.
The applicant, a Russian coder named Ivan, seemed to have all the right qualifications for the senior engineering role. When he was interviewed over video last month, however, Pindrop’s recruiter noticed that Ivan’s facial expressions were slightly out of sync with his words.
That’s because the candidate, whom the firm has since dubbed “Ivan X,” was a scammer using deepfake software and other generative AI tools in a bid to get hired by the tech company, said Pindrop CEO and co-founder Vijay Balasubramaniyan.
“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”
Companies have long fought off attacks from hackers hoping to exploit vulnerabilities in their software, employees or vendors. Now, another threat has emerged: Job candidates who aren’t who they say they are, wielding AI tools to fabricate photo IDs, generate employment histories and provide answers during interviews.
The rise of AI-generated profiles means that, by 2028, 1 in 4 job candidates globally will be fake, according to research and advisory firm Gartner.
The risk to a company from bringing on a fake job seeker can vary, depending on the person’s intentions. Once hired, the impostor can install malware to demand ransom from a company, or steal its customer data, trade secrets or funds, according to Balasubramaniyan. In many cases, the deceitful employees are simply collecting a salary that they wouldn’t otherwise be able to earn, he said.
‘Massive’ increase
Cybersecurity and cryptocurrency firms have seen a recent surge in fake job seekers, industry experts told CNBC. Because these companies often hire for remote roles, they present valuable targets for bad actors, these people said.
Ben Sesser, the CEO of BrightHire, said he first heard of the issue a year ago and that the number of fraudulent job candidates has “ramped up massively” this year. His company helps more than 300 corporate clients in finance, tech and health care assess prospective employees in video interviews.
“Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved,” Sesser said. “It’s become a weak point that folks are trying to expose.”
But the issue isn’t confined to the tech industry. More than 300 U.S. firms inadvertently hired impostors with ties to North Korea for IT work, including a major national television network, a defense manufacturer, an automaker, and other Fortune 500 companies, the Justice Department alleged in May.
The workers used stolen American identities to apply for remote jobs and deployed remote networks and other techniques to mask their true locations, the DOJ said. They ultimately sent millions of dollars in wages to North Korea to help fund the nation’s weapons program, the Justice Department alleged.
That case, involving a ring of alleged enablers including an American citizen, exposed a small part of what U.S. authorities have said is a sprawling overseas network of thousands of IT workers with North Korean ties. The DOJ has since filed more cases involving North Korean IT workers.
A growth industry
Fake job seekers aren’t letting up, if the experience of Lili Infante, founder and chief executive of CAT Labs, is any indication. Her Florida-based startup sits at the intersection of cybersecurity and cryptocurrency, making it especially alluring to bad actors.
“Every time we list a job posting, we get 100 North Korean spies applying to it,” Infante said. “When you look at their resumes, they look amazing; they use all the keywords for what we’re looking for.”
Infante said her firm leans on an identity-verification company to weed out fake candidates, part of an emerging sector that includes firms such as iDenfy, Jumio and Socure.
An FBI wanted poster shows suspects the agency said are IT workers from North Korea, officially called the Democratic People’s Republic of Korea.
Source: FBI
The fake employee industry has broadened beyond North Koreans in recent years to include criminal groups located in Russia, China, Malaysia and South Korea, according to Roger Grimes, a veteran computer security consultant.
Ironically, some of these fraudulent workers would be considered top performers at most companies, he said.
“Sometimes they’ll do the role poorly, and then sometimes they perform it so well that I’ve actually had a few people tell me they were sorry they had to let them go,” Grimes said.
His employer, the cybersecurity firm KnowBe4, said in October that it inadvertently hired a North Korean software engineer.
The worker used AI to alter a stock photo and paired it with a valid but stolen U.S. identity, getting through background checks, including four video interviews, the firm said. He was discovered only after the company found suspicious activity coming from his account.
Fighting deepfakes
Despite the DOJ case and a few other publicized incidents, hiring managers at most companies are generally unaware of the risks of fake job candidates, according to BrightHire’s Sesser.
“They’re responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them,” he said. “Folks think they’re not experiencing it, but I think it’s probably more likely that they’re just not realizing that it’s going on.”
As the quality of deepfake technology improves, the issue will be harder to avoid, Sesser said.
As for “Ivan X,” Pindrop’s Balasubramaniyan said the startup used a new video authentication program it developed to confirm the candidate was a deepfake.
While Ivan claimed to be located in western Ukraine, his IP address placed him thousands of miles to the east, possibly at a Russian military facility near the North Korean border, the company said.
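A back-of-the-envelope version of that kind of IP check is straightforward to sketch. The Python snippet below is illustrative only, not Pindrop’s actual method: it geolocates a candidate’s IP address via the free ip-api.com lookup service (a commercial IP-intelligence provider would slot in the same way) and flags a mismatch when the result lands far from the claimed location. The 500 km threshold and function names are assumptions made for the example.

```python
import math
import requests  # third-party 'requests' package


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def location_mismatch(candidate_ip, claimed_lat, claimed_lon, threshold_km=500):
    """Flag a candidate whose IP geolocates far from their claimed location.

    Returns True on a mismatch, False if consistent, None if the lookup fails.
    The threshold is an illustrative assumption, not an industry standard.
    """
    resp = requests.get(f"http://ip-api.com/json/{candidate_ip}", timeout=5)
    data = resp.json()
    if data.get("status") != "success":
        return None  # lookup failed; treat as inconclusive
    distance = haversine_km(claimed_lat, claimed_lon, data["lat"], data["lon"])
    return distance > threshold_km


# Example: a candidate claiming to be in Lviv, Ukraine (49.84 N, 24.03 E)
# whose IP geolocates thousands of kilometers east would return True.
```

A real screening pipeline would layer in VPN and proxy detection, since the North Korean workers in the DOJ cases used exactly those tools to mask their locations.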
Pindrop, backed by Andreessen Horowitz and Citi Ventures, was founded more than a decade ago to detect fraud in voice interactions, and may soon expand into video authentication. Its clients include some of the biggest U.S. banks, insurers and health companies.
“We are no longer able to trust our eyes and ears,” Balasubramaniyan said. “Without technology, you’re worse off than a monkey with a random coin toss.”