<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>civil liberties on Media Presser</title>
    <link>https://mediapresser.com/tags/civil-liberties/</link>
    <description>Recent content in civil liberties on Media Presser</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Fri, 17 Apr 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://mediapresser.com/tags/civil-liberties/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Biometric Technologies and Congress: Recent Legislation and Open Questions</title>
      <link>https://mediapresser.com/2026/04/17/biometric-technologies-and-congress-recent-legislation-and-open-questions/</link>
      <pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://mediapresser.com/2026/04/17/biometric-technologies-and-congress-recent-legislation-and-open-questions/</guid>
      <description>Congress has considered the implications of biometric technologies—specifically facial recognition—in a number of recent legislative provisions.
Key Legislative Provisions: Section 5104 of the FY2021 NDAA (P.L. 116-283) tasks the National AI Advisory Committee with advising the President on whether the use of facial recognition technology by government authorities takes ethical considerations into account and whether such use should be subject to additional oversight, controls, and limitations.
Section 5708 of the FY2020 NDAA (P.</description>
    </item>
    
    <item>
      <title>How Biometric Technologies Can Fail: Bias, Spoofing, and Data Poisoning</title>
      <link>https://mediapresser.com/2026/04/17/how-biometric-technologies-can-fail-bias-spoofing-and-data-poisoning/</link>
      <pubDate>Fri, 17 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>https://mediapresser.com/2026/04/17/how-biometric-technologies-can-fail-bias-spoofing-and-data-poisoning/</guid>
      <description>Biometric technologies have a number of vulnerabilities that underscore the ethical concerns over their use and could cause the technology to fail to perform as anticipated.
Algorithmic Bias: Researchers have repeatedly found that AI-trained facial recognition programs fail disproportionately when used for women and people of color, due to both the models and the data on which the programs were trained. If unaddressed, these challenges could result in system failure, potentially leading to violations of civil liberties or international humanitarian law.</description>
    </item>
    
  </channel>
</rss>
