How Biometric Technologies Can Fail: Bias, Spoofing, and Data Poisoning
Biometric technologies have a number of vulnerabilities that underscore ethical concerns over their use and could cause the technology to fail to perform as anticipated.
Algorithmic Bias
Researchers have repeatedly found that AI-trained facial recognition programs misidentify women and people of color at disproportionately high rates, due to flaws in both the models and the data on which the programs were trained. If unaddressed, these challenges could result in system failures that lead to violations of civil liberties or international humanitarian law.
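One reason such disparities go unnoticed is that systems are often evaluated only on aggregate accuracy. The sketch below, using entirely hypothetical numbers, shows how an error rate that looks acceptable overall can conceal a much higher rate for one demographic group:

```python
# Sketch: why aggregate accuracy can hide demographic bias.
# All counts and group names here are hypothetical, for illustration only.

# Each record: (demographic_group, was_correctly_matched)
results = (
    [("group_a", True)] * 950 + [("group_a", False)] * 50 +  # 5% error
    [("group_b", True)] * 170 + [("group_b", False)] * 30    # 15% error
)

def error_rate(records):
    """Fraction of records where the system failed to match correctly."""
    errors = sum(1 for _, ok in records if not ok)
    return errors / len(records)

overall = error_rate(results)
by_group = {
    g: error_rate([r for r in results if r[0] == g])
    for g in {g for g, _ in results}
}

print(f"overall error rate: {overall:.1%}")  # looks acceptable in aggregate
for g, rate in sorted(by_group.items()):
    print(f"{g} error rate: {rate:.1%}")     # disaggregation reveals the gap
```

Because the minority group contributes fewer samples, its 15% error rate barely moves the overall figure, which is why auditors increasingly insist on disaggregated evaluation.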
Data Poisoning
Data poisoning—in which an adversary or bad actor seeks to surreptitiously mis-train an opponent’s AI—could present additional challenges for AI-trained biometric technologies. This attack vector is particularly difficult to detect and could compromise the reliability of biometric systems at scale.
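The mechanics of poisoning can be illustrated with a deliberately toy model. In this sketch, an adversary flips labels on a handful of training samples for a one-dimensional nearest-centroid classifier, dragging the "authorized" class centroid toward the impostor region; real biometric models and attacks are far more complex, and all data here is invented:

```python
# Sketch: label-flipping data poisoning against a toy 1-D
# nearest-centroid classifier. Purely illustrative.

def train_centroids(samples):
    """Return the per-class mean of 1-D feature values."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: abs(x - centroids[label]))

# Clean training data: "authorized" clusters near 1.0,
# "impostor" clusters near 5.0.
clean = [(0.8, "authorized"), (1.0, "authorized"), (1.2, "authorized"),
         (4.8, "impostor"), (5.0, "impostor"), (5.2, "impostor")]

# The adversary surreptitiously injects impostor-like samples
# mislabeled as "authorized", shifting that class's centroid.
poisoned = clean + [(4.9, "authorized"), (5.1, "authorized"),
                    (5.0, "authorized"), (4.8, "authorized")]

clean_model = train_centroids(clean)
poisoned_model = train_centroids(poisoned)

probe = 4.0  # an impostor-like probe sample
print(classify(probe, clean_model))     # the clean model rejects it
print(classify(probe, poisoned_model))  # the poisoned model accepts it
```

The attack succeeds without touching the model itself, which is what makes poisoning hard to detect: each poisoned sample looks like ordinary training data in isolation.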
Presentation Attacks and Spoofing
Biometric technologies are also vulnerable to presentation attacks (or spoofing), in which a targeted individual uses makeup, prosthetics, or other measures to prevent a biometric system from accurately capturing their biometric identifiers or adjudicating their identity. This could enable individuals such as terrorists or foreign intelligence operatives to thwart biometric security systems.
Some U.S. defense agencies are seeking to develop biometric presentation attack detection technologies. For example, the Intelligence Advanced Research Projects Activity (IARPA) program Odin seeks to provide an automated means of both detecting known presentation attacks and identifying previously unknown attack vectors.
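Presentation attack detection is typically layered on top of identity matching, so that a sample must both match an enrolled identity and appear genuine. The sketch below shows this two-stage decision logic in the abstract; the thresholds, scores, and function names are hypothetical and do not describe any actual system such as Odin:

```python
# Sketch of a two-stage biometric decision combining identity matching
# with presentation attack detection (PAD). All thresholds and scores
# are hypothetical, for illustration only.

MATCH_THRESHOLD = 0.80     # minimum similarity to accept an identity claim
LIVENESS_THRESHOLD = 0.60  # minimum PAD score to treat a sample as genuine

def adjudicate(match_score, liveness_score):
    """Accept only if the sample both appears genuine AND matches."""
    if liveness_score < LIVENESS_THRESHOLD:
        return "reject: suspected presentation attack"
    if match_score < MATCH_THRESHOLD:
        return "reject: identity not matched"
    return "accept"

# A high-quality spoof may match the enrolled template well
# yet still fail the liveness check.
print(adjudicate(match_score=0.95, liveness_score=0.30))
print(adjudicate(match_score=0.95, liveness_score=0.90))
print(adjudicate(match_score=0.50, liveness_score=0.90))
```

The design point is that the two checks fail independently: a prosthetic or photograph can defeat the matcher without defeating the liveness gate, and vice versa, so attackers must beat both.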
The gap between the promise of biometric technology and its real-world performance is a recurring concern for policymakers, civil libertarians, and military planners alike. As these systems are deployed in higher-stakes environments, the cost of failure—whether through bias, spoofing, or adversarial manipulation—rises considerably.