AI-driven recruitment raises critical concerns about bias, fairness and legal accountability in modern hiring practices. There is mounting foreign case law that provides both employers and Artificial Intelligence (AI) developers with significant food for thought.
South Africa ushered in the National Artificial Intelligence Policy Framework 2024 (Policy Framework) in October 2024.
AI bias in recruitment
The ongoing United States (US) case of Derek Mobley v Workday warrants attention from South African employers using AI platforms. It also holds relevance for AI software developers.
Derek Mobley, an African American man over the age of 40 who suffers from anxiety and depression, has made serious allegations against Workday. He claims that Workday’s AI screening tools discriminate against job applicants based on race, age and disability. Despite being qualified, Mobley was rejected for over 100 positions by companies using those tools.
Mobley successfully established a prima facie case of “unfair discrimination” by showing a causal link between the practice and its impact. He argued that Workday’s algorithmic tools likely screen out applicants with mental health disorders or cognitive impairments, and that this bias stems from the use of biased training data and personality tests. The court recognised a common discriminatory component, evidenced by Mobley’s numerous rejections.
Mobley also demonstrated that he was rejected by over 100 employers, all using the Workday platform, despite being qualified. He asserted that this suggested potential bias based on race, age and disability. Additionally, Mobley showed that he received rejections within an hour of applying, indicating a lack of human oversight in the AI decision-making process. Software developers and AI vendors worldwide should take note of this unfolding case.
AI recruitment legal precedents
Courts may begin recognising third-party service providers as agents liable for actions typically performed by employers. Because the employers delegated their hiring and recruitment functions to Workday, the court recognised Workday as their agent. If this case unfolds in Mobley’s favour, it could set a precedent for holding third-party providers liable for how their AI systems are used by customers, including customers who fail to follow best practices.
South African employers should also pay attention to this case. The legal principles on unfair discrimination are relevant to South Africa and its employment practices. This case highlights the potential for AI software to infer demographic information from inputs like zip codes and educational history. Such inferences could lead to unfair discrimination if proper human oversight is not applied.
Employers cannot escape liability by delegating duties to third parties. Even without codified AI legislation in South Africa, employers face risks under the existing employment legislative framework. The Employment Equity Act applies to all employers, employees, and job applicants. It prohibits unfair discrimination in the workplace. AI usage can also impact other workplace areas, including retrenchments, promotions, demotions, opportunities, and bonuses. If AI software unintentionally shows bias in these areas, employers may face liability under the Labour Relations Act.
Unfair discrimination in recruitment
An unfair discrimination dispute between a third-party service provider and a job applicant falls outside traditional employment law, but this does not leave affected parties without recourse. If a similar situation arose in South Africa, alternative legal avenues could be pursued.
For example, an employee or job applicant may bring an unfair discrimination dispute against a third-party provider before the Equality Court under the Promotion of Equality and Prevention of Unfair Discrimination Act. An employee or job applicant may also hold the service provider or developer liable under the Protection of Personal Information Act where automated decision-making contravenes the relevant provisions of that Act, which include allowing an affected person to make representations about the decision and requiring that they be given sufficient information about it.
An employer may also hold the third-party provider liable based on the agreed terms and conditions of their contract. Employers with an annual turnover of less than ZAR 2 million may, in addition, claim damages from a third-party provider under section 61 of the Consumer Protection Act, which imposes liability for harm arising from defective goods.
Employers must exercise caution when using AI in the workplace. Software vendors must also remain aware of the risks associated with AI systems. These systems, used as decision-making tools, may impact equal access to employment opportunities.
The need for an AI recruitment accountability framework
Our previous insights highlighted the human-centred AI Principles adopted by the Organisation for Economic Co-operation and Development in May 2019. These principles serve as a guide for South African employers implementing AI in the workplace.
As of October 2024, the Department of Communications and Digital Technologies introduced the National Artificial Intelligence Policy Framework 2024. Although this is not codified law, it represents progressive steps to safeguard South African citizens’ interests.
The Policy Framework outlines 12 strategic pillars for best practices in AI usage. The three most prominent pillars are transparency, fairness, and maintaining human oversight. On transparency, the Policy Framework aims to hold developers and organisations accountable for the actions and outcomes of AI systems.
In conclusion
Employers should educate themselves and their employees on how AI decisions are made. Understanding AI decision-making can help detect and mitigate biases by identifying and correcting skewed data.
To ensure fairness, employers should train AI systems on diverse datasets that include all demographic groups. Most importantly, human oversight should always be applied to AI decision-making. Emerging case law shows that AI might miss contextual nuances and produce outcomes devoid of ethical consideration. This may perpetuate biases present in its training data.
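To make the idea of detecting skewed outcomes more concrete, the short Python sketch below (not part of the original commentary) illustrates one common heuristic: comparing selection rates across demographic groups and flagging any group whose rate falls below 80% of the highest rate, the US “four-fifths rule”. The group labels, data and function names are hypothetical and purely illustrative; this is not a legally prescribed test in South Africa, and a real audit would require far more rigorous statistical and legal analysis.

```python
# Illustrative sketch only: a simple adverse-impact check on hiring outcomes.
# The four-fifths (80%) rule used here is a common US heuristic, not a
# requirement of the article or of South African law.

from collections import defaultdict

def selection_rates(applicants):
    """applicants: list of (group, selected) tuples, e.g. ("40+", False)."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in applicants:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the highest rate."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes, purely for illustration.
    outcomes = [("under 40", True)] * 30 + [("under 40", False)] * 70 \
             + [("40+", True)] * 12 + [("40+", False)] * 88
    rates = selection_rates(outcomes)
    print(rates)                         # {'under 40': 0.3, '40+': 0.12}
    print(adverse_impact_flags(rates))   # {'under 40': False, '40+': True}
```

A flagged group does not by itself prove unfair discrimination, but it is the kind of signal that human oversight should investigate before AI-driven screening decisions are relied upon.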
Considering South Africa’s comprehensive employment law framework, employers should adopt a proactive approach to AI implementation. They should align with the best practices and pillars envisioned by the new Policy Framework. Software developers designing AI systems for employers must ensure these pillars are reflected in their software.
Mehnaaz Bux | Partner
Karl Blom | Partner
Caitlin Leahy | Candidate Attorney
Daniel Philipps | Candidate Attorney
Webber Wentzel