By Domingo Guerra
Today, more and more companies are conducting the hiring process remotely, interviewing and hiring people without ever meeting them in person. Advanced technologies like deepfakes make it easier than ever to game this system.
Fraudsters have no shortage of methods for scamming employers. Common techniques include fake resumes and LinkedIn profiles, with AI-generated profile pictures and text lifted from real LinkedIn accounts. Some groups report being flooded with thousands of fake profiles.
Even a video or in-person interview isn’t a guarantee against fraud. At one company, an individual showed up for a video interview, performed impressively and got the job. But on his start date, a different person showed up. It took a few days for those involved in the hiring process to share their suspicions and consult the legal team. As soon as he got a call questioning his performance, he said, “I quit,” hung up and presumably moved on to his next scam.
There have always been opportunists, but new technologies have made it even easier to create synthetic identities. Some estimate that candidate fraud has increased by 92% since early 2020. Deepfake technology is rapidly growing in sophistication, making this type of fraud easier than ever. The seriousness of job fraud is underscored by the FBI’s attention to it: the agency warns that deepfakes can “include a video, an image, or recording convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
While candidate fraud isn’t a new phenomenon, the rise of synthetic identities and the increasing ease of creating them is. That means companies need a new approach. Here are three best practices to follow.
- Liveness detection. To defeat spoofing and fraud, organizations can employ liveness detection technology to determine if the user is a real, live person instead of a photo or video. Checks for liveness can be active or passive. Active liveness tests require the person to make a sequence of facial or eye movements, like blinking or smiling, in front of the camera. Active liveness checks can add time and difficulty to the onboarding process. In contrast, passive liveness tests use spoof-detecting technology to analyze, for instance, skin texture or motion. This method provides security against fraud without sacrificing the user experience.
- Identity document verification. To ensure that documents are authentic and current, organizations can run a number of checks, including facial recognition and tamper detection, and confirm that the printed information on the identity document matches the information encoded in its barcode.
- Comparison to systems of record. Organizations can also verify that the identity document’s information matches central records by comparing it to data held in government and non-government databases. Advanced capabilities can even compare the applicant’s biometric information to that in the official government record. A simplified sketch of how these three checks fit together follows this list.
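To make the flow concrete, here is a minimal, hypothetical sketch in Python of how the three checks might be chained during remote onboarding. The function names, data fields and return values are illustrative assumptions, not a reference to any particular vendor's product or API; a real deployment would rely on trained anti-spoofing models and external registry lookups.

```python
from dataclasses import dataclass


@dataclass
class Applicant:
    """Data captured during remote onboarding (illustrative fields only)."""
    selfie_frames: list          # short burst of selfie frames from the webcam
    id_document_image: bytes     # photo of the government-issued ID
    claimed_name: str
    claimed_dob: str


def passive_liveness_check(frames: list) -> bool:
    """Hypothetical passive liveness test: analyze skin texture and motion
    across frames to detect photos, replays or deepfake overlays."""
    # Placeholder logic; a real implementation would use an anti-spoofing model.
    return len(frames) > 0


def verify_document(document_image: bytes) -> dict:
    """Hypothetical document check: confirm the printed data matches the
    barcode and look for signs of tampering."""
    # Placeholder values; a real check would OCR and inspect the document.
    return {"authentic": True, "name": "Jane Doe", "dob": "1990-01-01"}


def matches_system_of_record(name: str, dob: str) -> bool:
    """Hypothetical lookup against a government or third-party record source."""
    # Placeholder; a real lookup would query an external database.
    return True


def onboard(applicant: Applicant) -> bool:
    """Run the three best-practice checks in order, failing closed on any miss."""
    if not passive_liveness_check(applicant.selfie_frames):
        return False
    doc = verify_document(applicant.id_document_image)
    if not doc["authentic"]:
        return False
    if doc["name"] != applicant.claimed_name or doc["dob"] != applicant.claimed_dob:
        return False
    return matches_system_of_record(doc["name"], doc["dob"])


if __name__ == "__main__":
    candidate = Applicant(
        selfie_frames=["frame1", "frame2"],
        id_document_image=b"placeholder",
        claimed_name="Jane Doe",
        claimed_dob="1990-01-01",
    )
    print("Cleared to onboard:", onboard(candidate))
```

The key design choice in a flow like this is to fail closed: a candidate is only cleared when every check passes, so a convincing interview performance alone is never enough to grant access.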
Due to the proliferation of remote work, threat actors have discovered a new way to sneak into organizations: as deepfake job candidates. Securing remote hiring against such candidates is paramount. As synthetic identities surge, companies must revamp their approach to onboarding and authentication by adopting best practices like liveness detection, document verification and comparison to systems of record. Failure to enhance deepfake detection risks not only time and money but also the integrity of critical assets. Adapting to these challenges ensures a reliable workforce and a safer enterprise network.
Domingo Guerra is EVP of trust and GM of North America for Incode Technologies.