Under recently enacted EU legislation, AI-enabled hiring is considered high-risk. Here’s how TA professionals can stay compliant.
By Maggie Mancini
As organisations integrate artificial intelligence (AI) into their daily operations, with everything from recruitment to employee engagement assisted by these tools, government leaders around the world are tasked with regulating the use and deployment of AI. In one of the largest-scale efforts to place guardrails on the technology, the European Union has implemented the AI Act, a foundational piece of legislation that provides a legal framework for organisations seeking to adopt AI systems.
The AI Act may require significant changes for organisations using AI tools that are deemed high-risk, says Guru Sethupathy, CEO and co-founder of FairNow, an AI governance platform. By investing the time and energy to build a robust AI governance programme, business leaders can support the entire organisation and avoid operational, financial, and reputational risks down the line.
The legislation takes a risk-based approach to the ethical implications of AI tools: AI-enabled spam filters are considered minimal risk, while chatbots are limited risk and must be transparent with users about their use of AI.
For organisations looking to determine whether they will be affected by the EU’s AI Act, the first step is to establish whether they use a “high-risk AI system” as defined in the law. High-risk systems must comply with stricter requirements because of their impact on human lives.
These high-risk systems include all AI systems used in employment, such as AI talent sourcing or hiring software, Sethupathy says. If an organisation’s AI programme is deemed high-risk, the next step involves determining whether the company is a deployer or provider of AI technology, as each group has its own regulations under the new law.
Providers of AI technology will have stringent requirements around the development and design aspects of the AI, including adherence to guidelines, human oversight, and monitoring, Sethupathy explains. As these organisations are the creators of the technology, they are responsible for ensuring that the AI system is designed and tested to meet regulatory standards.
Deployers of AI technology, however, are the organisations utilising AI-enabled hiring programmes. These organisations should focus on the appropriate deployment of the technology, adhering to regulatory guidelines and providing human oversight. Most organisations are deployers of AI technology, Sethupathy says.
“It’s also important to note that in some instances, organisations may find themselves acting as both the provider and deployer of AI, especially when developing in-house AI solutions,” he says. “This dual role means adhering to both sets of requirements.”
AI-enabled recruitment software is considered high-risk under the AI Act because of its direct influence on hiring decisions and employment, and therefore on people’s livelihoods. With this in mind, HR leaders should ensure that the AI tools they’re using comply with the new regulations, as many will require additional oversight and more involvement from legal and compliance teams, Sethupathy explains.
“Those directly responsible for managing vendor technology will want to increase their literacy around the risks associated with vendor AI technology, particularly in the areas of ensuring fairness and evaluating models for bias,” he says.
HR leaders tasked with evaluating third-party AI vendors should screen for AI governance or invest in an AI governance platform to train employees and meet compliance requirements. Ideally, Sethupathy says, an AI governance task force would include representation from legal and compliance, IT and technology, HR, leadership, and relevant business units.
AI governance programmes are essential for staying compliant with evolving regulations and for using AI technologies responsibly and effectively. Sethupathy explains that an AI governance programme should include the following:
- a centralised inventory of all AI tools with their purpose, data usage, risks, and compliance status;
- routine risk assessments to identify and mitigate issues;
- clear policies around transparency, remediation, documentation, and AI ethics;
- human oversight with defined roles and accountability, particularly in recruitment;
- a system tasked with tracking compliance with emerging regulations;
- a process to test and monitor AI systems for biases (a simple illustration follows this list);
- detailed records of AI decision-making processes and compliance efforts; and
- a process for tracking and monitoring vendor-provided AI technology.
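To illustrate the bias-testing item above, the sketch below shows one common type of check: comparing selection rates across demographic groups and flagging any group whose adverse-impact ratio falls below the widely cited four-fifths (0.8) threshold. This is a minimal, hypothetical example only; the group names, data, and threshold are illustrative, and it does not represent FairNow’s methodology or a procedure prescribed by the AI Act.

```python
from collections import defaultdict

# Hypothetical screening outcomes from an AI-enabled hiring tool:
# (demographic_group, advanced_to_interview)
outcomes = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of candidates in each group who advanced to the next stage."""
    advanced, totals = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        advanced[group] += int(selected)
    return {group: advanced[group] / totals[group] for group in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate against the highest-rate group."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

rates = selection_rates(outcomes)
ratios = adverse_impact_ratios(rates)

# Flag groups whose ratio falls below the four-fifths (0.8) threshold for review.
for group, ratio in sorted(ratios.items()):
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

In practice, checks like this would run on an ongoing basis rather than as a one-off test, with flagged results feeding the documentation, remediation, and human-oversight processes described in the list above.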
When deployed thoughtfully, AI can be used to maintain the values of fairness and transparency in hiring, Sethupathy says. This, in turn, creates more equitable opportunities for job applicants and a better candidate experience, he says. In fact, research has suggested that female job seekers may prefer AI-enabled screening to human evaluations, believing it will lead to less bias in the recruitment process.
“Legal and ethical guidelines drive bias evaluations and standardized monitoring of AI models, ensuring they operate fairly and do not perpetuate bias,” Sethupathy says. “As AI becomes more integrated into our society, these guidelines and regulations are essential for maintaining fairness and accountability. These guidelines can also encourage transparency in AI usage, which is critical for fostering trust among job candidates and employees.”