13 Feb: iProov’s report warns biometric industry to be wary of generative AI
iProov’s latest intelligence report shone a harsh light on the growing threat of generative AI.
Focusing on remote identity verification and biometric ID, the company examined the tools and techniques that threat actors use to launch digital injection attacks.
The report read: “In the last 24 months the threat landscape has undergone significant changes. Organizations considering incorporating facial biometrics into their remote identity platforms need to understand the benefits and drawbacks of the various technologies available and the pros and cons of different deployment methods.”
It went on to explain that biometric solutions that looked secure two years ago may not be as resilient as previously thought.
For example, face swap injection attacks increased by 720 per cent in 2023.
It said: “Face swaps are now firmly established as the deepfake of choice among persistent threat actors.”
“Organizations leveraging biometric verification technology are in a stronger position to detect and defend against these attacks than those relying solely on manual operation,” noted the report.
“Human operator-led systems can no longer consistently and correctly detect synthetic media such as deepfakes.
“In order to detect synthetic media created using generative AI, verification technologies that leverage AI are essential.”
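The report does not describe a specific detection pipeline, but the idea of AI-assisted verification can be illustrated with a minimal sketch: score each frame of a verification session with a detector model and reject the session if any frame looks synthetic. The `synthetic_score` function and threshold below are hypothetical placeholders, not iProov's actual system.

```python
# Illustrative sketch only: an automated check that scores frames from a
# remote verification session and flags likely synthetic media.
# The scoring model and threshold are placeholders for a trained detector.

def synthetic_score(frame: bytes) -> float:
    """Placeholder for an AI model returning the probability that a
    frame is synthetic (e.g. a face swap or other deepfake)."""
    # A real system would run a trained classifier here; this stub uses
    # a trivial marker so the sketch stays runnable and self-contained.
    return 0.9 if b"faceswap" in frame else 0.1

def verify_session(frames: list[bytes], threshold: float = 0.5) -> bool:
    """Pass the session only if every frame scores below the threshold."""
    return all(synthetic_score(f) < threshold for f in frames)

genuine = [b"camera-frame-1", b"camera-frame-2"]
injected = [b"camera-frame-1", b"faceswap-frame"]

print(verify_session(genuine))   # session passes
print(verify_session(injected))  # session flagged as synthetic
```

The point of the sketch is the architecture the report argues for: detection is delegated to a model that can be retrained as generative tools evolve, rather than to a human operator reviewing frames manually.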
iProov called upon the biometrics industry to create tougher certification requirements, stating: “Threat actors are exploiting processes that rely on lower-cost technology as well as those that leverage human intervention.
“Current tools are outpacing defenses in both availability and sophistication. As a result, these new threat vectors are evading many current remote identity verification techniques faster than organizations can detect or adapt their security measures.
“A proactive approach leveraging science is needed to identify, mitigate, and prevent potential threats before they become serious.”