
Deepfake face swap attacks on ID verification systems up 704% in 2023

Threat actors are increasingly using “face swap” deepfakes, virtual cameras and emulators in attempts to bypass remote identity verification systems.

Deepfake attacks using “face swap” technology to attempt to bypass remote identity verification increased by 704% in 2023, according to a report published Wednesday.

Free and low-cost face swap tools, virtual cameras and mobile emulators are accelerating the efforts of a growing number of deepfake-focused threat actors, identity verification company iProov found in its 2024 Threat Intelligence Report titled “The Impact of Generative AI on Remote Identity Verification.”

“Generative AI has provided a huge boost to threat actors’ productivity levels: these tools are relatively low cost, easily accessed, and can be used to create highly convincing synthesized media such as face swaps or other forms of deepfakes that can easily fool the human eye as well as less advanced biometric solutions,” iProov Chief Scientific Officer Andrew Newell said in a public statement.

In addition to identifying face swaps as the “deepfake of choice among persistent threat actors,” iProov’s Security Operations Centre (iSOC) found that injection attacks targeting mobile identity verification platforms increased by 255%, while use of emulators in these attacks rose by 353% between the first and second halves of 2023.

Furthermore, the number of threat groups exchanging information online about attacks on biometric and video identification systems nearly doubled between 2022 and 2023, with 47% of these groups surfacing within the last year.

Generative AI supplies deepfake threat actors with better, cheaper toolkits

Free and freemium face swap apps that can be easily downloaded on one’s phone or computer have evolved from a fun novelty to a powerful tool for deception.

Unlike Snapchat filters that let users trade faces with friends for a laugh, deepfake apps such as SwapFace, DeepFaceLive and Swapstream are now the most common tools leveraged in attacks against remote ID verification systems, iProov researchers found.

Advanced attackers can master these AI apps to create realistic live-motion videos, using stolen selfies like masks or puppets to fool video authentication systems.

Deepfake videos are most commonly combined with digital injection attacks that use a virtual camera feed to replace the webcam or other device camera feed that would normally be used to display one’s face for verification. For example, OBS Studio, a legitimate open-source streaming tool, includes a virtual camera feature that could potentially be used to display deepfake video.

Digital injection attacks are more technically advanced than presentation attacks, in which a mask or a video on a screen is held up to the camera. While many facial biometric systems are equipped with presentation attack detection (PAD), injection attacks are more difficult to detect and doubled in frequency in 2023, according to Gartner.

Emulators, such as the Android emulator available in the free, official Android Studio software development kit, can allow threat actors to conceal the use of a virtual camera and target mobile verification systems more effectively, according to iProov.

For example, a sophisticated attacker could generate a deepfake and set up a virtual camera on a PC while using an emulator to access a mobile verification app and appear as though they are using their phone camera normally.
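This layering matters for defenders: client-reported signals such as the camera device name are under the attacker’s control, which is part of why injection attacks are harder to detect than presentation attacks. As a rough, hypothetical illustration only (not a method described in the iProov report), the Python sketch below flags Linux V4L2 video devices whose reported names suggest a virtual camera; the sysfs path and keyword list are assumptions, and a capable attacker can spoof or suppress this metadata entirely, which is why serious defenses rely on server-side liveness analysis instead.

# Naive, easily defeated heuristic: flag video devices whose V4L2 name
# looks like a virtual camera (illustrative assumption, Linux-only).
from pathlib import Path

SUSPICIOUS_KEYWORDS = ("virtual", "obs", "dummy", "loopback")

def list_suspicious_cameras() -> list[str]:
    # /sys/class/video4linux/<dev>/name holds each device's reported name.
    flagged = []
    for name_file in Path("/sys/class/video4linux").glob("*/name"):
        device_name = name_file.read_text().strip()
        if any(keyword in device_name.lower() for keyword in SUSPICIOUS_KEYWORDS):
            flagged.append(device_name)
    return flagged

if __name__ == "__main__":
    print(list_suspicious_cameras())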

iProov threat analysts currently monitor more than 110 different face swap tools and repositories that could be used to generate malicious deepfakes.

Deepfake attackers see humans as the weakest link in ID verification

Deepfake threat actor groups frequently target manual or hybrid identity verification systems where a human operator has the last say, according to iProov. These groups consider humans to be easier to fool using deepfake injection attacks compared with computerized facial recognition systems, the report stated.

In fact, iProov analysts have observed threat actors providing instructions on how to purposely fail biometric verification in order to be forwarded to a human operator, according to the report.

Research has shown humans have a limited ability to detect deepfakes, with one study published in the Journal of Cybersecurity finding participants identified deepfake images of human faces with 62% accuracy overall.

Another study, performed at the Idiap Research Institute and presented at the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), found human subjects correctly identified high-quality deepfake videos only 24.5% of the time. However, the same study also found humans outperformed deepfake detection algorithms overall.

Interestingly, videos that were seen as “obviously fake” by humans were usually not detected by the algorithms, while some of the algorithms performed better in detecting the videos that were most difficult for humans, Idiap researchers found.

Previous successful deepfake schemes have shown both humans and digital biometric systems to be potentially vulnerable.

Last week, Hong Kong police announced that a finance worker was convinced to send the equivalent of $25 million to scammers after attending a conference call with deepfakes of multiple colleagues. And in 2021, fraudsters in China stole the equivalent of $75 million via fake tax invoices after using deepfakes to trick government-run facial recognition systems.

iProov concluded its report with recommendations to be aware of the risk of deepfakes to human-only or human-led remote identity verification systems, and to ensure that biometric verification technologies are independently red-team tested against digital injection attacks. The company also recommends cloud-based, multi-frame liveness biometric solutions over on-premises and single-frame liveness-based systems.
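To give an intuition for the multi-frame recommendation, the toy Python sketch below (not iProov’s technique, and not production-grade) measures how much consecutive webcam frames differ: a static photo or single injected still produces almost no frame-to-frame change, while a live subject shows natural motion. It assumes OpenCV and NumPy are installed, and any threshold would be arbitrary; real multi-frame liveness systems rely on far richer signals.

# Toy multi-frame check: mean absolute difference between consecutive
# grayscale frames from the default camera (assumes OpenCV and NumPy).
import cv2
import numpy as np

def mean_frame_difference(num_frames: int = 30) -> float:
    capture = cv2.VideoCapture(0)
    previous = None
    diffs = []
    for _ in range(num_frames):
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if previous is not None:
            diffs.append(float(np.mean(cv2.absdiff(gray, previous))))
        previous = gray
    capture.release()
    return float(np.mean(diffs)) if diffs else 0.0

if __name__ == "__main__":
    # A score near zero suggests a static image rather than a live subject.
    print(f"mean inter-frame difference: {mean_frame_difference():.2f}")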

“Automated systems, when combined with the correct expert oversight, can leverage the breakthroughs in AI to produce effective systems to stay ahead in the arms race,” Newell told SC Media in an email. “However, we expect manual systems, such as video interviews, to come under increasing pressure as it becomes impossible to detect advanced deepfakes by eye.”
