
Congressional Hearing, Government Report Examine Safe Harbors for AI Users

Congress may eventually consider creating safe harbors based on best practices to limit the liability of artificial intelligence users, a potentially promising development as AI begins to play a role in hiring decisions. A hearing of the House Oversight and Government Reform Subcommittee on Information Technology featured top experts in the field from organizations such as OpenAI, Partnership on AI, the Consumer Technology Association, and the Harvard Kennedy School. During questioning, Rep. Darrell Issa (R-CA) noted, "Safe harbors should exist if we are to promote the advancement and use of data and artificial intelligence," a statement that met with unanimous agreement from the witnesses.

When a company uses AI in hiring, the practice carries the potential for lawsuits alleging a disparate impact on women and minorities. A recent U.S. Government Accountability Office report on artificial intelligence addressed the safe harbor concept: "In implementing AI... one participant said that it should be a requirement that AI developers test for disparate impact before deploying their technology. This participant noted that such a requirement would be better complied with if the developer was not held liable for the impact. Rather, creating 'safe harbors' in conjunction with testing would allow developers an opportunity to seek out input from others to address disparate impacts."

An upcoming report by the Association's Recruiting Software Initiative, covered in a separate story, will examine the concept in more detail.
