Steve Seeberg

03-15-2023

CTOs & Recruitment Managers
AI in Recruiting: Regulation and Litigation

“Black Box” Explainability Alert

Whether you are a seller of AI applications for recruiting, a customer of one of these emerging AI vendors, or considering deploying AI for hiring in your business, this article is a must-read: it can help you avoid substantial litigation expense, costly penalties, and damage to your brand’s reputation.

While there is potential to profit from the considerable time-savings and increased productivity afforded by AI, pitfalls exist.

Artificial Intelligence (AI) is now actively being used in the recruitment process globally, streamlining hiring by analyzing millions of data points from thousands of potential job candidates in seconds.

AI typically refers to models that combine computer science techniques with large data sets to solve problems, make predictions, or provide recommendations. Most AI models are considered black boxes, since their decision-making processes are often opaque and difficult, if not impossible, to understand.

Application developers like Workday, Eightfold, and Beamery provide AI technology allowing their customers to scale quickly by using AI not only for matching candidates to available jobs, but also for predicting success and eliminating bias in the recruiting process.

However, this lack of transparency, or “explainability,” poses challenges in understanding and validating the decisions AI models make, making it difficult, if not impossible, to verify the absence of bias and compliance with data privacy laws in the hiring process.

This article provides an overview of the existing and proposed regulatory landscape as well as litigation developing to address this issue. The article also outlines best practices when using AI platforms to ensure they satisfy existing and proposed regulations, as well as avoid costly lawsuits.

Regulation and Litigation

Given AI’s potential for injecting bias in the hiring process as well as posing threats to data security, this technology has become the new subject of large-scale regulation by the US Federal Government, US states and cities, as well as governments around the world.

More importantly, as the Workday case below shows, aggrieved applicants are not waiting for AI-specific regulations to sue application providers and their customers for bias in AI algorithms. The litigant in the Workday case is testing the waters under Title VII of the Civil Rights Act of 1964 and associated legislation.

Regulation

New York City recently enacted Local Law 144. The law prohibits employers from using automated employment decision tools unless the organization subjects those tools to specific bias audits and makes the resulting data publicly available. The New York City law could be a catalyst for other states to adopt similar legislation. Penalties for violations range from $500 to $1,500 per violation per day, in addition to the potential for civil suits.
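
To make the audit requirement concrete: a Local Law 144 bias audit centers on computing each demographic category’s selection rate and its “impact ratio,” meaning that category’s rate divided by the rate of the most-selected category. The sketch below, in Python with pandas, illustrates the arithmetic on invented data with hypothetical column names; an actual audit must be performed by an independent auditor against the law’s specific category definitions.

    import pandas as pd

    # Hypothetical screening log (column names are illustrative):
    # one row per applicant, with a demographic category and whether
    # the automated tool selected (advanced) that applicant.
    df = pd.DataFrame({
        "category": ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
        "selected": [1, 1, 0, 1, 0, 0, 0, 1, 1, 1],
    })

    # Selection rate per category: the fraction of applicants advanced.
    rates = df.groupby("category")["selected"].mean()

    # Impact ratio: each category's selection rate divided by the
    # rate of the most-selected category (1.0 means parity).
    impact_ratios = rates / rates.max()
    print(impact_ratios.round(2))  # A 0.67, B 0.25, C 1.00

Under the EEOC’s long-standing “four-fifths” rule of thumb, impact ratios below 0.8 are a red flag warranting scrutiny.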

States including California, Illinois, Maryland, Connecticut, Virginia, Colorado, Texas, Tennessee, Indiana, and Montana have enacted regulations covering AI learning tools as of this writing.

The US Congress is considering the federal Algorithmic Accountability Act, which, if passed, would require AI vendors and employers that use their platforms to perform impact assessments of any automated decision-making system that has a significant effect on individuals’ access to, terms of, or availability of employment. The Act provides for enforcement by the FTC, with the potential for significant administrative fines and exposure to civil liability.

Several states, in addition to those with existing legislation noted above, have proposed regulations addressing the use of AI generally, which would cover its use in hiring.

In addition, the US Equal Employment Opportunity Commission (EEOC) and National Telecommunications and Information Administration (NTIA) recently announced that they intend to increase oversight and scrutiny of AI tools used to screen and hire workers.

In Europe, in addition to protecting personal data privacy under the General Data Protection Regulation (GDPR), the European Union recently passed the draft EU AI Act, which aims to regulate AI hiring platforms developed or used by employers or agencies in the European Union. Proposed administrative liability is draconian, with fines of up to roughly $11 million to $33 million per violation, in addition to possible civil claims by applicants. Final enactment of the EU AI Act is anticipated shortly, with full enforcement expected in 2026.

Litigation

While AI tools for hiring have caught the attention of the EEOC and state and local legislatures, there has yet to be a proliferation of litigation in this area. That may soon change, however. On February 21, 2023, a class action lawsuit, Mobley v. Workday, Inc., was filed in the U.S. District Court for the Northern District of California under Title VII of the Civil Rights Act of 1964 and associated statutes, alleging that Workday engaged in illegal race, age, and disability discrimination by offering its customers applicant-screening tools that rely on biased AI algorithms.

The court initially dismissed the lawsuit in January 2024, citing insufficient evidence to classify Workday as an "employment agency" under anti-discrimination laws. The plaintiff filed an amended complaint in February 2024, providing more details about the alleged discriminatory practices of Workday's algorithm-based screening tools.

The U.S. Equal Employment Opportunity Commission (EEOC) has argued in an amicus brief that Workday should face the lawsuit, contending that the company’s software potentially caters to the discriminatory preferences of employers.

The case is ongoing, with the court yet to rule on the amended complaint and the EEOC's arguments.

This lawsuit has garnered significant attention due to its potential implications for the use of AI in employment practices. If successful, it could set a precedent for holding software vendors accountable for discriminatory outcomes resulting from their products.

It is important to note that this action was brought under existing US anti-discrimination employment laws. AI vendors and their customers should therefore consider taking the actions suggested here, and recommended by most frameworks, immediately, rather than waiting for the implementation of regulations targeted specifically at AI tools.

Emerging Requirements

Given the current legislative proposals, it is safe to assume that both vendors and their customers may face liability to governmental entities for violations, as well as lawsuits by affected individuals. Both should therefore consider the following actions when developing or implementing AI applications.

Although the enacted and proposed regulations, as well as best-practice recommendations, vary in approach and in the aspects of the hiring process they cover, a common theme emerges from them, and from the Workday case, as to the actions both vendors and their customers should take to avoid liability. These actions involve providing:

  • Periodic AI audits or impact assessments: Proof, via periodic internal or (preferably) third-party audits, that AI algorithms are explainable and free of bias and that user data is secure.
  • Disclosure of audit or assessment results: The public is made aware of audit results on the vendors’, employers’, or agencies’ websites or by other means.
  • AI Explainability Statements: Job applicants receive information on how AI is being used in the hiring process and how it affects them.

Key Takeaways

  • Given AI’s potential for injecting bias in the hiring process as well as posing threats to data security, this technology has become the new subject of large-scale regulation by the US Federal Government, US states and cities, as well as governments around the world.
  • Aggrieved applicants are not waiting for AI-specific regulations to sue application providers and their customers for bias in AI algorithms.
  • Both vendors and their customers should consider the following to avoid liability: periodic AI audits or impact assessments, disclosure of audit or assessment results, and AI explainability statements to applicants regarding the use of AI in hiring.

Options for Application Developers & Their Customers

Depending on the regulation involved, internal AI audits and impact assessments may or may not be sufficient. Performing these tasks in-house can require significant time and an experienced data science team to implement the available open-source analytical tools, depending on the use case and model complexity. They are nonetheless explored here as a first line of defense.

Frameworks

Frameworks are detailed guides or assessment procedures for developing and implementing trustworthy AI tools, satisfying the explainability, anti-bias, data security, and outcome-confidence requirements of evolving regulatory schemes.

The most comprehensive framework to date is CapAI, created to address the requirements of the draft EU AI Act; the Act complements the EU’s stringent GDPR data security regulations with respect to AI.

Additional frameworks have been proposed by the National Institute of Standards and Technology (NIST), the Institute of Electrical and Electronics Engineers (IEEE), the Information Systems Audit and Control Association (ISACA), and the Organisation for Economic Co-operation and Development (OECD), among others.

Except for the OECD’s, however, these frameworks do not identify the specific tools needed to perform the audits or impact assessments that their protocols require.

Open-source tools

Fortunately, many open-source tools are available to accomplish these tasks, although all require programming skills and most have steep learning curves. Some of the more popular are AI Fairness 360 (developed by IBM), the What-If Tool (Google), Aequitas, Fairlearn, LIME, FairTest, and FairML, among others.
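
As a brief illustration of what these tools involve, here is a minimal sketch using Fairlearn’s metrics module on invented screening data; the labels, predictions, and grouping variable are all hypothetical.

    from fairlearn.metrics import (MetricFrame, selection_rate,
                                   demographic_parity_difference)

    # Hypothetical audit inputs: actual outcomes, the model's
    # decisions, and a protected attribute for each applicant.
    y_true = [1, 0, 1, 1, 0, 1, 1, 0]
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
    group = ["F", "F", "F", "F", "M", "M", "M", "M"]

    # Per-group selection rates: the fraction of each group advanced.
    frame = MetricFrame(metrics=selection_rate, y_true=y_true,
                        y_pred=y_pred, sensitive_features=group)
    print(frame.by_group)  # F: 0.75, M: 0.25

    # Largest gap in selection rates across groups (0 = perfect parity).
    gap = demographic_parity_difference(y_true, y_pred,
                                        sensitive_features=group)
    print(f"demographic parity difference: {gap:.2f}")  # 0.50

Comparable metrics are available in AI Fairness 360 and Aequitas; the choice among them is largely a matter of ecosystem fit.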

Paid applications

Stepping in to address this problem for organizations that lack the technical capabilities or time to devote to internal AI audits or impact assessments are third-party vendors with paid and freemium solutions.

Except for a few enterprise financial-auditing and tech companies, most of these applications come from a growing cottage industry that has evolved over the past several years to meet anticipated demand, especially from small and medium-sized businesses.

The largest companies offering these applications and services include Accenture AI Services (negotiated pricing), IBM Watson OpenScale (“Lite” free, “Standard” $261/model/month), PwC’s Responsible AI Toolkit (negotiated pricing), Ernst & Young AI Consulting Services (negotiated pricing), and KPMG Lighthouse (negotiated pricing).

Smaller companies with similar offerings include Weights & Biases (freemium, from $50 per user per month), Truera (negotiated pricing), Aporia (freemium, quote), Fiddler (free trial, quote), Arthur (negotiated pricing), and Arize (freemium, from $100/mo).

Active Engagement Essential

The use of AI has ushered in a new era of efficiency in hiring, but it is not without risks. The absence of explainability, and the consequent lack of trust in the fairness of outcomes, the security of applicant data, and the reliability of generated results, deserves careful consideration if you deploy AI in hiring.

These challenges along with existing regulations, proposed legislation, and the ramifications of the Workday case demand both developers and users of this technology be more than passive observers.

They must consider actively engaging with these issues: keeping a close eye on regulatory changes, adopting appropriate frameworks, investing in the audit and impact-assessment technology discussed above, and providing AI explainability statements to those affected.

By taking these actions, AI vendors and their customers can leverage the substantial benefits of AI for hiring, while mitigating costly administrative penalties, potential civil liability, and reputation damage.

About Globalpros.ai

If you haven’t had a chance to visit our website, we’re a private marketplace, powered by machine learning, whose AI Talent Sync Community contains tens of thousands of the world’s top, deeply pre-vetted developers seeking full-time positions.

Our Community consists of app and web developers as well as high-demand, hard-to-fill roles such as AI/ML engineer, data engineer, certified Salesforce developer, and the fast-growing role of generative-AI-for-coding and AI Co-Pilot expert.

Our AI/ML technology instantly matches and ranks candidates from the Community to your job description, reducing your time to fill by up to 90%.

Contract or hire directly onshore, or save up to 40% in salary and HR expense by hiring nearshore or offshore.

There’s no learning curve. Copy and paste a full job description, or let AI generate one for you, and see deeply vetted, ranked matches in seconds, ready to shortlist and interview.

Globalpros.ai is free (no subscription fee) to match, rank, shortlist, and interview.

Use Globalpros.ai as your go-to sourcing tool and avoid expensive portal ads, resume searches, and the time-consuming evaluation of hundreds of CVs from those ads, searches, and career pages.

https://www.globalpros.ai/client-lp1

Want to learn more?

https://globalpros-ai.wistia.com/medias/tqec87t6px