What the Mobley v. Workday Case Means for the Future of AI and Fair Hiring

CHROs will lead the charge in selecting and implementing AI tools and in ensuring they work as intended, free from bias and aligned with ethical and legal standards.

By Sarah Smart

HR leaders, take note: The Mobley v. Workday lawsuit, filed in California, was granted preliminary collective action certification on May 16, 2025. This case has significant implications for employment practices and the use of artificial intelligence (AI) in hiring.

The Lawsuit at a Glance 

Plaintiff Derek Mobley, along with four other plaintiffs over the age of 40, alleges that Workday’s AI-based applicant recommendation system discriminates against job applicants based on race, age, and disability. 

With collective action certification granted, others who believe they’ve experienced age discrimination through Workday’s AI system can now join the lawsuit, potentially expanding its reach and impact.

Why This Matters for HR Leaders 

At the outset of the AI-in-HR revolution, many leaders embraced AI tools for their promise of efficiency and scalability, often without fully understanding the broader implications. Now, as legal and ethical concerns emerge, it’s critical to pause, audit those tools, and ensure they are compliant, unbiased, and aligned with best practices.

While this lawsuit is directed at Workday, it raises important questions for all companies using AI-driven hiring tools. For talent acquisition (TA) leaders at ERP-enabled organizations, the case is a wake-up call to proactively assess their TA technology stacks and mitigate current and future risks. There are several proactive steps to take.

Auditing the HR Tech Stack 

Start by mapping the candidate journey to identify every point where applicants interact with AI-driven tools, then focus on the following key areas.

  1. Resume parsing tools. These tools streamline the candidate screening process by converting unstructured resume data into structured formats for applicant tracking systems (ATS) and recruiters. Often powered by AI and natural language processing (NLP), they extract, analyze, and organize resume details like skills, experience, and qualifications.

Key consideration: Ensure these tools are accurate and do not inadvertently exclude candidates due to formatting issues or lack of specific keywords. 
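
To make that risk concrete, here is a minimal, illustrative sketch in Python (the skill vocabulary is hypothetical; real parsers use large taxonomies and trained NLP models). The failure mode it demonstrates is the one that matters: content trapped in tables, multi-column layouts, or images never reaches the extractor, so a qualified candidate can come back with an empty skills list.

```python
import re

# Hypothetical skill vocabulary; real parsers use far larger taxonomies
# and trained NLP models.
SKILL_VOCABULARY = {"python", "sql", "project management", "recruiting", "payroll"}

def parse_resume(raw_text: str) -> dict:
    """Convert unstructured resume text into a structured record.

    Keyword matching is the simplest case, but it shares a failure mode
    with fancier parsers: text locked in tables, multi-column layouts,
    or images may never reach this function at all.
    """
    lowered = raw_text.lower()
    skills = sorted(s for s in SKILL_VOCABULARY if s in lowered)
    years = re.findall(r"(\d+)\+?\s*years", lowered)
    return {
        "skills": skills,
        "max_years_claimed": max((int(y) for y in years), default=None),
    }

record = parse_resume("Senior recruiter, 12 years experience. Tools: SQL, payroll systems.")
print(record)  # {'skills': ['payroll', 'sql'], 'max_years_claimed': 12}

# An empty result should trigger human review, not automatic rejection:
# the gap may be a formatting problem, not a missing qualification.
```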

  2. Pre-qualifying questions. Are pre-qualifying questions aligned with job descriptions, free from bias, and truly essential? Is the question bank up to date, reducing the risk that a recruiter selects the wrong question? One organization, for example, had more than 45,000 separate questions in a question bank that hadn’t been reviewed in years, creating substantial risk and inefficiency.

Best practice: Auto-disqualification (auto-disposition) of applicants should be limited to this stage of the hiring process and must be based solely on objective, job-related criteria and the applicant’s answers to the pre-qualifying questions.
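
As a sketch of what that best practice can look like in code (the question IDs and rules here are hypothetical), every disqualification is driven solely by the applicant’s answers checked against objective rules, each traceable to the posted job description, producing an audit trail for every auto-disposition:

```python
from dataclasses import dataclass

@dataclass
class PrequalRule:
    """One objective, job-related requirement (all names here are hypothetical)."""
    question_id: str
    required_answer: bool
    source: str  # where in the job description this requirement is posted

# Every rule maps to a posted requirement -- nothing inferred, nothing demographic.
RULES = [
    PrequalRule("work_authorization", True, "JD section: Eligibility"),
    PrequalRule("forklift_certified", True, "JD section: Required certifications"),
]

def auto_disposition(answers: dict[str, bool]) -> tuple[str, list[str]]:
    """Return a decision plus an audit trail of which rules drove it."""
    failed = [r.source for r in RULES
              if answers.get(r.question_id) is not r.required_answer]
    return ("disqualified" if failed else "advance", failed)

decision, trail = auto_disposition({"work_authorization": True, "forklift_certified": False})
print(decision, trail)  # disqualified ['JD section: Required certifications']
```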

  3. Candidate filtering or matching tools. These tools sort and categorize candidates based on keywords or other data points in resumes, comparing them to job descriptions.

Risk: At the crux of the Mobley v. Workday lawsuit is the allegation that Workday’s AI-enabled tools, which (per Workday’s website) score, rank, or screen applicants, may unintentionally favor or disadvantage particular groups. Workday is not the only ERP with a candidate matching tool.

Action step: Conduct a full bias review of the applicant funnel. Work with the legal team to determine the threshold for unintentional bias in hiring. If the review shows that the candidate filtering or matching tool crosses that red line, shut down the tool and work with the vendor to ensure its redesign and redeployment meet organizational standards.
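
One commonly referenced benchmark for that threshold is the four-fifths rule from the EEOC’s Uniform Guidelines: if any group’s selection rate falls below 80 percent of the highest group’s rate, that stage warrants scrutiny for adverse impact. The sketch below, using hypothetical funnel counts, shows the basic arithmetic; a real audit would add statistical significance testing and legal review.

```python
def adverse_impact_ratios(selected: dict[str, int], applied: dict[str, int]) -> dict[str, float]:
    """Compute each group's selection rate relative to the highest-rate group.

    Implements the four-fifths (80%) rule of thumb from the EEOC's Uniform
    Guidelines: ratios below 0.80 warrant investigation. A sketch only --
    real audits also need significance testing and legal review.
    """
    rates = {g: selected[g] / applied[g] for g in applied if applied[g] > 0}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical funnel counts for one stage of the applicant pipeline.
applied = {"under_40": 1000, "40_and_over": 800}
selected = {"under_40": 300, "40_and_over": 180}

for group, ratio in adverse_impact_ratios(selected, applied).items():
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# under_40: impact ratio 1.00 (ok)
# 40_and_over: impact ratio 0.75 (REVIEW)
```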

  4. Job recommendation tools. These tools analyze a candidate’s profile, resume, or past behavior (e.g., jobs applied to or viewed) to recommend roles based on keywords, skills, or data points.

Concern: Like candidate matching tools, job recommendation systems may perpetuate bias if their algorithms learn to favor specific patterns in historical data.

Key action: Understand the data driving these recommendations and implement routine audits to ensure fairness and inclusivity. 

Building a Cross-Functional Audit Team 

A cross-functional team is critical to auditing an HR tech stack. Members should include the following.  

  • TA leaders, who can provide context on hiring goals and practices.
  • HR technology specialists, who understand the tools and systems in use and how they were implemented.
  • Workforce or people analytics teams, to analyze data and outcomes.
  • Data science experts, to evaluate algorithms and technical functionality.
  • Assessment experts, to ensure hiring assessments are valid and unbiased.
  • Legal and compliance teams, to ensure alignment with anti-discrimination laws and data privacy regulations.
  • Financial experts (optional, but often helpful), to reassess and re-establish baseline efficiency metrics if an AI tool isn’t delivering the expected results from a business case perspective.

It’s important to establish consistent routines and invest in the necessary resources to support them. Implement a regular audit schedule and maintain the flexibility to conduct spot reviews whenever new AI-driven features are introduced. This approach will help minimize risks and ensure your systems remain compliant, fair, and effective. 

Key Areas to Investigate 

Here are some questions to consider to ensure an equitable process. 

  • Data inputs: What data fuels AI systems? Is the data representative of a diverse candidate pool? Who owns and manages the data? 
  • Bias audits: Does the AI provider conduct regular bias audits? Are the audits reviewed by independent third-party organizations or solely in-house teams? How often are these audits performed, and what actions are taken based on findings? 
  • Data usage: Are the algorithms trained on organizational data, aggregated data from all clients, or a mix of both? Does this data introduce any potential risks of bias or unfair outcomes? 
  • Performance metrics: How does the AI provider measure the success of its tools? Are these metrics aligned with organizational expectations around best hiring practices? Are the tools improving hiring outcomes or introducing inefficiencies and risks?

The Bottom Line

As AI continues to reshape hiring, HR leaders must become AI experts. The CHROs of the future will lead the charge in selecting and implementing AI tools and in ensuring they work as intended, free from bias and aligned with ethical and legal standards. The Mobley v. Workday lawsuit underscores the importance of evaluating and refining the HR tech stack to safeguard the organization and candidates alike.

Sarah Smart is co-founder of HorizonHuman. 
