HR Policy Global

BEERG Newsletter - EU: AI Liability Directive – new leverage for unions and works councils?

Will unions and works councils be able to use the proposed new EU AI Liability Directive as leverage in negotiations over the introduction of new technologies and human resource systems? If experience with the GDPR is anything to go by, the answer is almost certainly yes. The proposed Directive, published on 28 September 2022, joins the Artificial Intelligence Act as the second leg of the EU’s strategy to set framework rules for the use of machine learning and artificial intelligence. The Act identifies the use of AI in human resource decision-making as “high risk”, meaning it must be subject to human oversight.

The proposed Directive applies to non-contractual civil law claims for damages caused by an AI system, where such claims are brought under fault-based liability regimes. Existing national liability rules based on fault are not seen as appropriate for handling liability claims for damage caused by AI-enabled products and services. This is because victims generally need to prove a wrongful act by a person in order to succeed in a claim, and the complexity, autonomy and lack of transparency of AI, combined with the number of parties involved in the design, development, deployment and operation of an AI system, may make it difficult or prohibitively expensive for victims to identify the liable person.

The Directive proposes, in certain cases, a rebuttable presumption of a causal link between the fault of the defendant and the output that gave rise to the damage, where all of the following conditions are met:

  • The claimant has demonstrated fault on the part of the AI provider, in the form of non-compliance with an obligation of EU or national law designed to protect against such damage (e.g., certain requirements under the AI Act);
  • It can be considered reasonably likely, based on the circumstances of the case, that the fault demonstrated under the first condition influenced the output produced by the AI system, or the failure of the AI system to produce an output; and
  • The claimant has demonstrated that such output, or failure to produce an output, gave rise to the damage.

The proposed Directive establishes a right for claimants to request from a court an order for a defendant to disclose relevant evidence about a high-risk AI system (as defined in the AI Act). However, courts are only permitted to order disclosure of evidence where such evidence is necessary and proportionate for supporting the claim, and only if the claimant has made all proportionate attempts at gathering the relevant evidence themselves. It is easy to anticipate unions and works councils being active in this space.

The AI Liability Directive will now be examined and discussed in both the European Council and the European Parliament. It could be well into 2024 before a final text is agreed. Member States will then have two years to implement the requirements into national law.

BEERG/HR Policy Global is running a newly created workshop on Artificial Intelligence, Human Resource Management and Employee Information and Consultation next February. Details of the program can be found HERE.

Authors: Tom Hayes
