
Regulating AI Risks

As the use of AI increases, organizations need to take steps to ensure its accuracy and fairness.  

By Maggie Mancini

Even as artificial intelligence continues to redefine talent acquisition and worker retention for the organizations that embrace it, there is not much regulation surrounding its use in organizations’ hiring practices. As research from Instant Impact shows, 71% of HR teams are already using AI and say it is making their function significantly more effective; the key to success is to integrate responsible technology across the process and remain aware of the risks.  

If the decisions made by the AI algorithm are based on data that is impacted by unconscious bias, is incomplete, or is inaccurate, it could mean “disastrous results” for organizations and their potential candidates, warns Felix Mitchell, co-founder of Instant Impact. Clean data is critical to getting the most from AI. 

“Using AI can support better decision-making and significantly reduce hiring bias, resulting in a naturally diverse workforce,” Mitchell explains. “The quality of AI depends on the strength of accurate data; however, it shouldn’t be used for decisions in candidate selection.”  

Prior hiring patterns can set limitations for AI. For example, Mitchell says if most AI engineers at a company are white and male, the data that they choose to include—and the logic behind the algorithms themselves—may skew the results based on unconscious biases. AI algorithms are limited by their original coding, which could potentially overlook equality and diversity efforts in hiring.  

“Companies should regularly audit, test and train AI algorithms to ensure they are free from discriminatory elements, and secure and protect the data used,” he recommends. “Companies should also have a trained human in the loop to monitor AI outputs.” 

As organizations build and refine their AI algorithms, there are three key principles to consider when working to eliminate bias in talent acquisition.  

  1. It’s essential to be transparent about when and how AI is being used. When a candidate is selected, a company must be able to explain its decision. Tools built on deep learning models often struggle to return an understandable explanation for why recommendations have been made, which can leave companies open to ethical and legal risks. “Companies should ensure they are using algorithms that can clearly explain why decisions have been made in order to ensure that candidates are treated equally and tell candidates how and where AI has been used,” Mitchell says. 
  2. Be sure to consider privacy when using AI. Leveraging AI-driven technology for social media, such as programmatic advertising, allows for online data collection in order to place job ads in front of potential candidates based on an algorithm’s understanding of user preferences. While it can help expand the talent pool, the practice is not without its controversy. Cambridge Analytica’s use of unauthorized personal data to influence behavior during the 2016 U.S. presidential election and the Brexit referendum has led to increased scrutiny. 
  3. To reduce human bias, companies need to make sure that the data used to refine AI algorithms is representative. Historical data must be treated with caution to ensure fairness. Diverse teams, testing, and bias audits are all critical bias mitigation tools. Those creating the software must also stay neutral and not allow their own unconscious biases to “unintentionally creep in,” Mitchell says.
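One concrete shape the bias audits described above can take is the “four-fifths rule” check drawn from U.S. employee-selection guidance: compare each group’s selection rate against the highest group’s rate and flag any ratio below 0.8 as potential adverse impact. The sketch below is illustrative only; the function name and sample counts are assumptions, not something from the article or Instant Impact’s tooling.

```python
# Minimal adverse-impact ("four-fifths rule") audit sketch.
# Applicant and selection counts per group are illustrative sample data.

def adverse_impact(selected, applicants):
    """Return each group's selection rate as a ratio of the top group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 180}
selected = {"group_a": 60, "group_b": 36}

ratios = adverse_impact(selected, applicants)
for group, ratio in ratios.items():
    # Ratios under 0.8 warrant a closer look under the four-fifths rule.
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

Here group_a is selected at a 30% rate and group_b at 20%, giving group_b an impact ratio of about 0.67, below the 0.8 threshold. A real audit would run such checks regularly on the model’s actual selection outcomes, alongside the human-in-the-loop monitoring Mitchell recommends.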

As companies implement AI programs to boost recruitment efforts, it’s important to do so responsibly—regularly auditing, testing, and training algorithms to ensure they are free from discriminatory elements. 
