
AI in HR: The Current Legal Landscape

Artificial intelligence, and its growing use in the world of work, has increasingly become the subject of legislation across the United States, the European Union, and the United Kingdom.

By Meredith Gregston, Daniel J. Butler, and Sarah Pearce

Artificial intelligence (AI) is here to stay, and its growing use has caught the attention of legislators and regulators across the United States, the European Union, and the United Kingdom. All HR professionals should take stock of the AI-enabled systems they use, recognize the risks, and understand the existing, though evolving, legal frameworks used to regulate AI in the workplace.

An Elusive Concept 

To start, it’s important to define AI. The flood of blogs, articles, and discussions about AI often fails to address this threshold question, which is essential to assessing legal compliance.

The new AI-specific laws emerging around the world include definitions of the term. The EU AI Act, for example, defines AI systems as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” The Act also defines general-purpose AI models or systems. 

Certain employment-related laws include their own definitions of AI, which, given its inherent complexity, is not easy to define. Though every law has critical nuances, most generally define AI as a machine-based system that makes decisions, recommendations, or predictions about employment events (e.g., hiring, promotions, terminations) based on human-defined objectives. A common example in the HR space is a resume-screening tool that rejects some candidates and recommends others for hire. Such tools generally qualify as AI under most laws. In contrast, a sales analytics tool used to gather insight into employee efficiency may not qualify under some laws if it is not used as part of employment decisions.
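To give a sense of what such a tool looks like in practice, below is a deliberately trivial, hypothetical resume-screening sketch. It makes a recommendation about an employment event (hiring) based on human-defined objectives, which is the pattern many of these statutory definitions capture. Whether logic this simple actually qualifies depends on the statute: broader definitions of the kind described above may cover it, while narrower definitions (such as the EU AI Act’s requirement that the system infer how to generate outputs) may exclude purely rule-based systems.

```python
# Hypothetical, deliberately simple resume screener (illustrative only).
# When a tool's output drives an employment decision such as advancing
# or rejecting candidates, it may fall within some statutory definitions
# of AI even if the underlying logic is basic.

REQUIRED_SKILLS = {"python", "sql"}   # human-defined objective
MIN_YEARS_EXPERIENCE = 3              # human-defined objective

def screen(candidate: dict) -> str:
    """Return a recommendation ('advance' or 'reject') for a candidate."""
    has_skills = REQUIRED_SKILLS <= set(candidate["skills"])
    experienced = candidate["years_experience"] >= MIN_YEARS_EXPERIENCE
    return "advance" if has_skills and experienced else "reject"

print(screen({"skills": ["python", "sql", "excel"], "years_experience": 5}))  # advance
print(screen({"skills": ["excel"], "years_experience": 10}))                  # reject
```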

In short, if a machine-based system is involved in a decision-making process that affects tangible employment events such as hiring, compensation, bonuses, or terminations, it likely falls within the scope of regulated AI, whether under AI-specific laws or AI-specific provisions in employment laws.

Existing Legal Landscape  

At the federal level in the U.S., there are no AI-specific laws that govern workplace conduct. But existing anti-discrimination laws, including Title VII of the Civil Rights Act of 1964 (“Title VII”), the Americans with Disabilities Act (“ADA”), and the Age Discrimination in Employment Act (“ADEA”), apply to AI just as they do to all other employment actions. “The machine did it” will generally not work as a defense.

Because AI is often lauded as a way to increase workforce diversity, it is unlikely to feature prominently in cases alleging intentional discrimination. But there have already been disparate impact discrimination claims based on a company’s or vendor’s alleged use of AI. Broadly, a disparate impact claim alleges that a neutral policy or practice (e.g., a resume-screening tool) is unlawful because it has a disparate impact on a protected group and cannot otherwise be justified by “business necessity.” While AI is new, disparate impact litigation is not, having existed since the 1970s. Courts are therefore likely to apply existing disparate impact case law to new disputes involving AI. Thus, the lack of AI-specific laws at the federal level does not mean that AI tools are outside the scope of federal regulation.

To the contrary, a U.S. district court in California recently held that an AI vendor could be considered an agent of an employer and thus subject to potential liability under Title VII to applicants whom the employer rejected through the use of the vendor’s tool.

Most AI-specific legislation in the U.S. has occurred at the state level. Some laws restrict the use of specific types of AI, such as Illinois’ Artificial Intelligence Video Interview Act, while others require employers to perform periodic “bias audits” and publish the results of those audits before using such tools. Currently, bias audit laws are on the books in New York City (the first jurisdiction to enact one) and Colorado, with similar bills proposed in California, Illinois, New Jersey, and New York. A third category of AI law seeks to expand on the protections afforded under federal disparate impact discrimination law. Both Utah and Illinois recently passed laws prohibiting the use of AI tools that have a disparate impact on groups protected under those states’ respective anti-discrimination laws.
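To make the disparate impact and bias audit concepts concrete, below is a minimal sketch, using hypothetical applicant counts, of the kind of selection-rate math these analyses involve. It applies the EEOC’s long-standing “four-fifths” rule of thumb (29 C.F.R. § 1607.4(D)), under which a group’s selection rate below 80 percent of the highest group’s rate is generally regarded as evidence of adverse impact; New York City’s bias audit rules center on similar impact ratios.

```python
# Illustrative only: a simplified disparate-impact check based on the
# EEOC "four-fifths rule." A selection rate for any group below 80% of
# the highest group's rate is generally treated as evidence of adverse
# impact. All applicant counts below are hypothetical.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its impact ratio: its selection rate divided by
    the selection rate of the most-selected group."""
    rates = {g: selected / applied for g, (selected, applied) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical resume-screening results: group -> (advanced, applied).
screening = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in impact_ratios(screening).items():
    status = "possible adverse impact" if ratio < 0.8 else "within four-fifths threshold"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

The four-fifths rule is a screening heuristic, not a legal conclusion, and required audit methodologies vary by jurisdiction; the point is simply that these laws turn on measurable selection-rate disparities of this kind.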

It is quite a different story across the Atlantic. The EU AI Act, now in force across the EU, introduces a risk-based legal framework that imposes different requirements depending on the level and type of risk associated with the AI system concerned. The Act establishes the following categories of AI systems:

  • prohibited AI systems; 
  • high-risk AI systems; 
  • systems with transparency requirements; and 
  • general-purpose AI models. 

For each of these categories, the Act introduces obligations that require preparation, implementation, and ongoing oversight.

Notably, under Article 6(2) and Annex III, high-risk AI systems expressly include:  

  • AI systems intended to be used for the recruitment or selection of individuals, in particular, to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates; and 
  • AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics, or to monitor and evaluate the performance and behaviors of individuals in such relationships. 

There are, of course, other scenarios where the use of AI in the workplace could trigger certain obligations, but these are the most obvious and those that will be of most relevance to employers based on current use cases. 

Employers who deploy such high-risk AI systems in their HR activities must comply with a number of obligations, including those related to transparency, data management, impact assessments, and human oversight. Importantly, employers must also inform workers’ representatives and the affected workers before deploying a high-risk system in the workplace. 

Over in the U.K., the newly elected government has announced its proposed approach to AI. While no specific legislation has yet been published or proposed, the government has stated that it will seek to introduce legislation to regulate the development of AI. This departs from the approach of the previous government, which favored a non-binding, principles-based approach to regulating the development and use of AI.

It is worth noting that AI is, by its very nature, reliant on data to operate. The use of AI tools in the workplace often requires processing data about employees, including personal data, with the result that the use of AI in the workplace often raises data privacy issues. Additionally, anti-discrimination and other employment-specific legislation exists in the U.K. and EU and will continue to apply to AI just as it does to all other employment actions.

Steps to Mitigate Risk 

There are several steps companies can take to evaluate the risk profile of contemplated AI tools. First, companies should demand indemnification from AI vendors or, at minimum, obtain assurances and representations from vendors that their tools have been designed and audited to avoid disparate impact and other bias.

Second, companies should audit vendors and the tools themselves before implementation and routinely (at least annually) thereafter. Indeed, such audits are now required in some jurisdictions, and that list will only grow over time. When conducting such audits, consider performing them in coordination with legal counsel to preserve the attorney-client and/or work-product privileges. Although asserting privilege generally precludes the company from wielding the audit as a sword when defending litigation, it ensures organizations can have open discussions and internal debate about the best ways to improve and/or use the tool without fear that those communications will be discoverable in subsequent litigation.

Finally, companies need to stay abreast of developments in the law. As noted, a host of laws are already in place, with more on the way. Taking proactive compliance measures now will position the company to leverage AI’s benefits and react strategically to the compliance challenges the future is likely to bring.

Meredith Gregston is a senior attorney in Hunton Andrews Kurth’s Labor & Employment group in the firm’s Austin office. Daniel J. Butler is an associate in Hunton Andrews Kurth’s Labor & Employment group in the firm’s Miami office. Sarah Pearce is a partner in Hunton Andrews Kurth’s Global Technology, Outsourcing & Privacy group in the firm’s London office.
