President Biden signed the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” aimed at ensuring the United States takes the lead in harnessing the potential and managing the risks of AI technology. The expansive Order addresses potential workplace uses of AI technology that could impact employers as it is implemented.
HR Policy issued a press release in response, which you can read here.
What does it mean for companies? The Executive Order alone does not establish any new regulations for private sector companies. Federal agencies are tasked with creating standards, principles, and guidance; whether agencies such as the DOL will eventually promulgate binding regulations that impact employers remains unclear in the short term, although it is expected in the long term.
Key highlights relevant to employers include:
- New Standards for AI Safety and Security: These include requiring developers of AI systems that pose serious risks to national security, including economic and public health security and safety, to share safety test results with the government and to develop rigorous safety standards for AI to mitigate potential harm.
- Supporting Workers: The Executive Order requires measures to address potential job displacement, labor standards, workplace equity, and workforce training, including a Council of Economic Advisers report on AI's labor-market impacts and expanded training for Labor Department and Justice Department investigators.
- Workplace Tracking: The Department of Labor is charged with issuing guidance to employers reiterating that AI cannot be used to track/surveil workers or their productivity in ways that violate their federal labor rights. This builds on recent initiatives by NLRB General Counsel Jennifer Abruzzo.
- Federal Contractors: Within 365 days, the Acting Secretary of Labor is directed to issue guidance for federal contractors on nondiscrimination in recruitment processes that involve AI and other technology-based hiring tools.
- Immigration: The Order aims to ease employment-based immigration (including H-1Bs) for AI experts, streamline visa procedures, promote the U.S. as a destination for foreign tech talent, and assess employer demand for skilled immigrants.
Legislation is not likely this Congress. The Executive Order calls upon Congress to take action. Specifically, it urges federal lawmakers to pass bipartisan data privacy legislation, a goal that has been in the works for several years with no immediate progress. Senate Majority Leader Chuck Schumer (D-NY) applauded the President's Executive Order but stated the "only real answer" on AI is congressional action, even as he acknowledged it will be "months not weeks" before legislation is introduced.
HR Policy Association advocacy. The Association has engaged with Congress and the administration and submitted several comments on the topic including:
- Comments to the White House Office of Science and Technology Policy and the Department of Commerce to inform forthcoming AI policies.
- A letter to the Senate Health, Education, Labor and Pensions Committee in response to a request for information on AI from Senator Bill Cassidy (R-LA).
What’s next? The agencies tasked with implementing the Executive Order will begin to take action within their jurisdictions. Depending on the mandate, agencies will have up to a year to act, with some deadlines falling within 30 days. This includes the creation of new government offices and task forces, a requirement that each federal agency appoint a Chief AI Officer, and participation in a new White House AI Council. Once agency representatives and White House AI Council members are named, the Association will engage directly to serve as a resource as they pursue principles and recommendations to govern AI use in the workplace.
OMB provides guidance on Executive Order. On Wednesday, the White House’s Office of Management and Budget (OMB) provided additional information on the implementation of the AI Executive Order within federal agencies that could serve as a private sector mandate in the future. The OMB memo is directed at agency and executive department leaders, stating that agencies have until Aug. 1, 2024, to implement minimum practices for “safety-impacting or rights-impacting AI, or else stop using any AI that is not compliant with the minimum practices.” The OMB guidance would establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI.
First signs of global cooperation. Officials from G7 countries have agreed to create an international code of conduct for AI to regulate advanced technologies, including generative AI, with a focus on preventing societal harm, strengthening cybersecurity, and curbing misuse. Separately, to initiate global discussion on comprehensive rules for AI safety, the UK AI Safety Summit was held this week. The agenda included discussions on various threats posed by AI, including its potential for weaponization by hackers or terrorists, as well as concerns about AI’s growing popularity and uses.
Join us on November 14 as HRPA and Deloitte partner for The Implications of AI in Productivity and Government Oversight. This webinar will discuss how regulators in both the U.S. and Europe are working to quickly govern the use of AI and other digital technologies impacting the employee-employer relationship. Hear how companies are preparing to engage with policymakers to ensure the ability to use valuable tools without generating excessive oversight and regulatory restrictions.