Even amid widespread layoffs in the tech sector, cybersecurity professionals remain in high demand. While job seekers have long been targeted by scammers, companies must now be more vigilant than ever in their hiring practices. A new and insidious threat has emerged: hackers and sanctioned foreign nationals are posing as job applicants, often leveraging advanced artificial intelligence (AI) to bypass traditional hiring filters. Human resource professionals, hiring managers, and executives face a new set of challenges, and there are several critical steps they should take to safeguard against these sophisticated threats.
The Threat from Within: Hackers as Job Applicants
North Korean IT Workers Posing as US Citizens
The cybersecurity company KnowBe4 recently shared how a fake North Korean IT worker fooled it into hiring him. After conducting interviews, background checks, and verifying references, the company hired the candidate. However, when it shipped the new employee a workstation, the machine immediately began loading malware. The subsequent investigation revealed he was from North Korea. Although “no illegal access was gained, and no data was lost, compromised, or exfiltrated on any KnowBe4 systems,” the incident leaves many wondering how this could happen to a security firm trusted by hundreds of companies to train their workers to spot social engineering.
U.S. officials recently warned about a workforce of thousands of North Korean IT workers in low-level jobs worldwide. These workers help the North Korean government evade international sanctions and generate billions of dollars through computer fraud and hacking. North Korea allegedly sent thousands of skilled IT workers to China and Russia, where they posed as US citizens using proxy computers located in the United States. The Justice Department alleged that over 300 U.S. companies had unknowingly hired foreign nationals with ties to North Korea for remote IT work. Per the indictment, the scheme raised at least $6.8 million in wages for North Korea, likely in support of its nuclear program.
To help companies avoid hiring bogus workers, who can introduce insider threats and potential sanctions violations, the US and South Korea released an alert with hiring screening tips.
The Challenge of AI-Enhanced Impostors
As AI technology grows more sophisticated, spotting impostors becomes significantly harder. Chatbots such as ChatGPT are now used by both legitimate and fake job applicants to craft resumes and cover letters tailored to a specific position, making it difficult to distinguish genuine candidates from frauds. Additionally, AI-generated deepfakes have emerged as a potent tool for mimicking real individuals in video and voice communications, increasing the threat of sophisticated impersonation attacks and cyber breaches. As a result, traditional methods of vetting applicants have become less reliable.
At The Wall Street Journal’s Tech Live, Infante even shared, “I always ask them to show their ID on video. That’s it. It has to match your face.” However, even these measures can be circumvented with AI-enhanced techniques.
AI-driven tools pose a profound risk to security, as bad actors exploit these technologies to perpetrate sophisticated cyberattacks. Deepfakes are AI-generated videos, audio, and images meticulously crafted to appear genuine, blurring the line between reality and deception. Deepfake technology lets criminals convincingly impersonate legitimate employees or executives, granting them unauthorized access to sensitive information and critical systems. Furthermore, the widespread availability of AI-powered chatbots amplifies the scale and scope of such fraud, enabling bad actors to craft deceptive narratives and manipulate hiring processes undetected.
To effectively combat these AI-enhanced impostor threats, companies should adopt proactive measures to enhance their cybersecurity posture. These practices should include robust identity verification protocols, leveraging advanced AI for anomaly detection, and comprehensive training for employees responsible for candidate screening. By prioritizing vigilance and investing in cybersecurity tools, companies can help mitigate the risks associated with AI-driven impersonation attacks and protect their sensitive assets from exploitation.
The Implications for Hiring Managers and Executives
Hiring managers face a dual challenge: they must find qualified professionals while also ensuring those candidates are not threats in disguise. Meeting it requires a multi-layered approach to hiring that combines traditional vetting techniques with new, advanced cybersecurity measures.
Recognizing Red Flags in Applications
- Too Good to Be True Resumes: Be cautious of resumes that appear too perfect. Many of these list education in foreign countries where credentials are difficult to verify, while most of the work experience is in the United States.
- VoIP Phone Numbers: Candidates using voice over internet protocol (VoIP) phone numbers, which don’t require a traditional cellular provider, may be attempting to obscure their true identity.
- Lack of Online Presence: A candidate with little to no online presence can be suspicious. This signal is admittedly weaker for cybersecurity professionals, who tend to be more careful about sharing personal information online; even so, most professionals have a footprint in at least some professional networks, such as LinkedIn.
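As an illustration, the red flags above can be encoded as a simple pre-screening checklist. The sketch below is hypothetical: the `Candidate` fields (education countries, a phone line type as reported by a carrier-lookup service, professional profiles) are assumptions about what your applicant-tracking data might contain, and the output is a list of items for human review, not an automatic rejection.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    education_countries: list   # countries where degrees were earned
    work_countries: list        # countries in the work history
    phone_line_type: str        # e.g. "mobile", "landline", "voip" (from a carrier-lookup service)
    professional_profiles: list # e.g. ["linkedin", "github"]

def red_flags(c: Candidate) -> list:
    """Return red-flag labels for manual review; an empty list means no flags."""
    flags = []
    # Education abroad but work history entirely in the US is a pattern noted in DPRK cases.
    if c.education_countries and "US" not in c.education_countries and set(c.work_countries) == {"US"}:
        flags.append("foreign-education-us-only-work")
    # VoIP numbers need no cellular provider and can mask location and identity.
    if c.phone_line_type == "voip":
        flags.append("voip-phone-number")
    # Little or no professional footprint online.
    if not c.professional_profiles:
        flags.append("no-online-presence")
    return flags
```

Any single flag is weak on its own; the value of a checklist like this is routing candidates who accumulate several flags to a closer manual review.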
Rigorous Interview Processes
- Live Video Interviews: Conduct live video interviews to verify the applicant’s identity and assess their responses in real-time. While deepfake technologies are even being used in live video, there are some things you can watch for, so you’re not duped by a deepfake.
- Technical Aptitude Tests: Implement technical gates to check for skills aptitude, where senior-level candidates must pass specific technical tests. To make these harder to fake, make them time-bound or even monitor the work as it is being done.
- Behavioral and Situational Questions: Use detailed behavioral and situational questions to gauge the candidate’s genuine experience and problem-solving abilities.
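To illustrate the time-bound testing idea above, here is a minimal harness sketch that records how long a candidate's solution takes and whether it beat the limit. The `solver` and `problem` arguments are placeholders for whatever exercise format your team actually uses.

```python
import time

def run_timed_challenge(solver, problem, time_limit_s=300):
    """Run a candidate's solver against a problem and record whether it finished in time.

    solver: callable implementing the candidate's solution (placeholder).
    problem: the exercise input (placeholder).
    """
    start = time.monotonic()
    answer = solver(problem)
    elapsed = time.monotonic() - start
    return {
        "answer": answer,
        "elapsed_s": elapsed,
        "within_limit": elapsed <= time_limit_s,
    }
```

A hard time limit makes it costlier to outsource the exercise to a chatbot or a third party mid-test, especially when combined with proctoring or screen recording.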
Enhanced Screening and Verification Processes
- Deep Background Checks: Conduct thorough background checks that go beyond the surface level. This includes verifying educational credentials, employment history, and checking for any red flags that might indicate a fraudulent background.
- Automated Identity Verification: Implement automated identity verification systems that require candidates to show their ID on video, matching it with their face to prevent deepfake impersonations.
- Educational Credential Verification: Thoroughly verify educational credentials, especially when the education is listed in countries such as Malaysia or Singapore while the work experience is entirely in the U.S.
- Behavioral Analysis: Implement behavioral analysis during the interview process to assess the integrity and reliability of candidates. This can include questions designed to reveal inconsistencies in their story or a deeper understanding of their motivations for entering the cybersecurity field.
- AI and Machine Learning: Use AI and machine learning tools to analyze candidate data for anomalies that might indicate fraudulent behavior. This includes scanning resumes for unusual patterns and cross-referencing the information shared with public databases.
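As a minimal sketch of anomaly detection on candidate data, the snippet below computes z-scores over a single numeric screening signal (for example, the age in days of a candidate's email account, a hypothetical choice of feature) and flags candidates who deviate sharply from the applicant pool. A production system would use richer features and a proper model; this only illustrates the statistical idea.

```python
from statistics import mean, stdev

def anomaly_scores(values):
    """Z-scores for one numeric screening signal across the applicant pool."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

def flag_outliers(values, threshold=2.0):
    """Indices of candidates whose signal deviates more than `threshold` standard deviations."""
    return [i for i, z in enumerate(anomaly_scores(values)) if abs(z) > threshold]
```

Flagged indices are candidates for extra scrutiny, not automatic rejection; the threshold should be tuned so reviewers are not flooded with false positives.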
Continuous Monitoring and Training
- Regular HR Training: Regularly train HR, hiring managers, and recruitment teams to recognize red flags in applications and resumes.
- Stay Updated on AI Threats: Keep abreast of the latest developments in AI and its potential misuse in the hiring process.
- Foster a Security Culture: Promote a culture of security awareness across the company, encouraging employees to be vigilant and report any suspicious activities.
Case Studies: Learning from Real-World Examples
The Minh Phuong Vong Case
In Maryland, Minh Phuong Vong was arrested for participating in a scheme to help overseas IT workers secure employment at U.S. companies by using his identity. Despite thorough verification processes, including a video call where Vong presented a U.S. passport and driver’s license, the scheme succeeded until his arrest. This case underscores the need for continuous vigilance and multi-layered verification.
Fraudulent DPRK IT Work Website Seizures
In another significant operation, the Eastern District of Missouri seized 12 website domains used by Democratic People’s Republic of Korea (DPRK) IT workers to mimic legitimate IT services firms. These websites, designed to appear credible, included misleading claims and disjointed phrases that should have raised suspicion. This case highlights the importance of scrutinizing online job applicants and the websites they use to establish their credentials.
Hiring Managers Beware
Both the job market and the cybersecurity landscape are evolving rapidly, with AI-enhanced threats posing new challenges for hiring managers and executives. By adopting comprehensive identity verification processes, leveraging AI for enhanced threat detection, and fostering a culture of security awareness, companies can better protect themselves from sophisticated fraudsters posing as job applicants. In the digital and AI age, vigilance, collaboration, and proactive measures can help protect your company from cyber threats.