When Algorithms Become Defendants: The AI Employment Litigation Landscape Every Employer Needs to Understand

Brandon Sneider | March 2026


Executive Summary

  • Three landmark cases are reshaping AI hiring liability. Mobley v. Workday (N.D. Cal., collective action conditionally certified May 2025) established that AI vendors can be directly liable as employer “agents” for discriminatory screening — covering 1.1 billion rejected applications. Kistler v. Eightfold AI (filed January 2026) attacks the secrecy of algorithmic scoring under the Fair Credit Reporting Act, alleging hidden “match scores” on over one billion worker profiles at employers including Microsoft, Morgan Stanley, and PayPal. Harper v. Sirius XM (E.D. Mich., filed August 2025) targets race discrimination in AI applicant tracking systems under Title VII.
  • Federal enforcement has retreated, but litigation risk has increased. The EEOC withdrew its AI hiring guidance in January 2025 and closed pending disparate impact charges by September 2025. The CFPB rescinded its 2024 guidance on algorithmic employment scoring. The result: private litigation is now the primary enforcement mechanism, and plaintiff attorneys are filling the vacuum.
  • Five state laws take effect in 2026, creating a compliance patchwork that affects every multi-state employer. Illinois (January 1, 2026), California (October 1, 2025), Colorado (June 30, 2026), New York City (enforcement overhaul underway after December 2025 Comptroller audit), and New Jersey (December 15, 2025) each impose distinct obligations — from annual bias audits to pre-decision notice requirements.
  • The “pincer movement” eliminates the safe harbor. Mobley attacks discriminatory outcomes (disparate impact). Eightfold attacks opaque processes (FCRA violations). Together, they mean that AI hiring tools must produce fair results AND operate transparently — neither alone is sufficient.
  • Mid-market employers face disproportionate exposure. Vendor contracts typically cap liability at subscription fees while employers bear the legal risk for algorithmic outcomes they cannot audit. The compliance cost of doing nothing now exceeds the cost of a governance program.

The Three Cases That Changed the Calculus

Mobley v. Workday: The Vendor as Agent

Derek Mobley, a Black applicant over age 40, applied to more than 100 jobs through employers using Workday’s AI-powered screening tools. He received no offers. Filed in the Northern District of California, the case initially tested whether an AI vendor could be held liable for employment discrimination at all — a question no court had answered.

Judge Rita Lin’s May 2025 ruling answered definitively: yes. The court held that Workday acted as an “agent” of the employers using its screening features, performing a function traditionally handled by human hiring managers. The court dismissed claims that Workday operated as an “employment agency” but allowed the more powerful agent-liability theory to proceed.

The numbers give the case its weight. The court granted conditional certification as a nationwide collective action under the Age Discrimination in Employment Act (ADEA), covering all individuals aged 40 and over who applied through Workday’s platform since September 2020 and were denied employment recommendations. Workday disclosed in filings that approximately 1.1 billion applications were rejected through its system during the relevant period. The opt-in deadline was March 7, 2026.

In August 2025, Judge Lin expanded the scope further, ordering Workday to produce a list of customers using its HiredScore AI features — acquired after Mobley’s original complaint — rejecting Workday’s argument that the acquisition created a separate product outside the case’s reach.

Why this matters for mid-market employers: The liability follows the deployer, not just the vendor. Every company using Workday’s AI screening features during the covered period is implicated in the fact pattern. And Workday’s contract terms, like most AI vendor agreements, cap the vendor’s liability at subscription fees — leaving the employer exposed.

Kistler v. Eightfold AI: The FCRA End-Run

Filed on January 20, 2026, in Contra Costa County Superior Court by Outten & Golden LLP and Towards Justice, this case deploys a different legal weapon entirely. The plaintiffs — Erin Kistler and Sruti Bhaumik, both California residents with STEM backgrounds — do not allege the algorithm was biased. They allege it existed in secret.

Eightfold AI’s platform serves employers including Microsoft, Morgan Stanley, Starbucks, BNY, PayPal, Chevron, and Bayer. The complaint alleges that Eightfold scraped “vast amounts of personal data” — social media profiles, location data, internet activity, tracking data — far beyond what applicants voluntarily submitted, compiling profiles on over one billion workers. The system generated “Match Scores” ranking applicants on a zero-to-five scale, filtering lower-ranked candidates before any human review.

The legal theory: these AI-generated evaluations constitute “consumer reports” under the Fair Credit Reporting Act (15 U.S.C. § 1681 et seq.), triggering mandatory disclosure, access, and dispute procedures that have governed background checks since 1970. FCRA statutory damages range from $100 to $1,000 per willful violation. Applied to a billion-profile database through a class action, the exposure is staggering.
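The scale of that exposure is easy to sketch. The arithmetic below is illustrative only: it uses the FCRA statutory range cited above and round, hypothetical class sizes. Actual exposure would turn on class certification, willfulness findings, and which profiles qualify as consumer reports.

```python
# Illustrative FCRA statutory-damages arithmetic (not a damages model).
# Willful-violation range per 15 U.S.C. § 1681n: $100 to $1,000 per violation.
# Class sizes below are hypothetical round numbers for illustration.
STATUTORY_MIN, STATUTORY_MAX = 100, 1_000

def exposure_range(class_size: int) -> tuple[int, int]:
    """Return the (low, high) statutory-damages exposure for a class."""
    return class_size * STATUTORY_MIN, class_size * STATUTORY_MAX

for size in (100_000, 1_000_000, 10_000_000):
    low, high = exposure_range(size)
    print(f"{size:>12,} class members: ${low:,} to ${high:,}")
```

Even at the smallest hypothetical class size, the low end of the range reaches eight figures, which is why the complaint's billion-profile allegation matters more than any individual plaintiff's damages.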

The genius of the FCRA theory is its low evidentiary bar. Disparate impact claims require statistical proof of discrimination — expert witnesses, regression analyses, demographic data. FCRA claims require only procedural proof: did you disclose? Did you provide access? Did you allow disputes? The answers here appear to be no, no, and no.

The regulatory vacuum amplifies the theory. The CFPB’s 2024 Circular 2024-06 explicitly stated that algorithmic employment scores constitute FCRA-covered reports. That guidance was rescinded in 2025 under the new administration. But rescinding guidance does not change the statute — it simply shifts enforcement from regulators to private plaintiffs. Former EEOC Chair Jenny R. Yang now represents the Eightfold plaintiffs, carrying institutional knowledge of the theory the government abandoned.

Harper v. Sirius XM: Race Discrimination in Applicant Tracking

Filed August 4, 2025, in the Eastern District of Michigan, Arshon Harper — a Black IT professional with over a decade of experience — alleges that Sirius XM’s iCIMS Applicant Tracking System rejected him for 149 of 150 applications submitted since November 2023. Harper claims the system analyzed application materials and assigned scores based on data points functioning as proxies for race: educational institutions, home ZIP codes, and employment history patterns that historically disadvantage African-American applicants.

The case brings both Title VII and Section 1981 claims and seeks class action status for similarly situated applicants. While earlier in its procedural life than Mobley or Eightfold, Harper tests the race-discrimination theory that the University of Washington’s research supports: AI models preferred resumes with white-associated names in 85% of cases versus 9% for Black-associated names (Quinn Emanuel, citing UW study).

The Federal Retreat — and Why It Makes Things Worse

The Trump administration’s January 2025 Executive Order 14179 (“Removing Barriers to American Leadership in Artificial Intelligence”) triggered a systematic withdrawal of federal AI employment guidance:

| Action | Date | Impact |
| --- | --- | --- |
| EEOC removes AI hiring guidance from website | January 27, 2025 | Eliminated employer compliance roadmap |
| CFPB rescinds Circular 2024-06 on algorithmic scoring | 2025 | Removed FCRA theory’s regulatory backing |
| EEOC closes pending disparate impact charges | September 2025 | Issues right-to-sue letters, shifting cases to courts |
| EEOC AI and Algorithmic Fairness Initiative paused | 2025 | Federal investigation pipeline closed |

The instinct is to read this as reduced risk. The opposite is true.

When the EEOC closes charges and issues right-to-sue letters, it does not eliminate the claims — it transfers them to federal court, where damages are uncapped and discovery is broader. When guidance is rescinded, the statute remains unchanged, but employers lose the compliance safe harbor that following the guidance provided. When regulatory agencies step back, the plaintiffs’ bar steps forward — as demonstrated by the Eightfold complaint filed within months of the CFPB rescission, advancing the exact FCRA theory the CFPB had endorsed.

The pattern is familiar from financial services regulation: deregulation shifts enforcement from administrative proceedings (where fines are capped and precedent is limited) to private litigation (where class-action damages are uncapped and precedent is binding). Every employment attorney tracking this space expects 2026 to produce more AI hiring lawsuits than all prior years combined.

The State Patchwork: Five Laws, Five Compliance Models

For any company operating across state lines — which is every mid-market employer hiring remotely — the state regulatory landscape creates overlapping and occasionally contradictory obligations.

| Jurisdiction | Effective Date | Key Requirements | Enforcement | Penalties |
| --- | --- | --- | --- | --- |
| NYC Local Law 144 | July 2023 (enforcement overhauled 2026) | Annual bias audit, public disclosure, advance notice to candidates | DCWP investigations | $500-$1,500/violation/day |
| Illinois HB 3773 | January 1, 2026 | Prohibits discriminatory AI in hiring/promotion/termination; employee notification; ZIP code proxy ban | IL Dept. of Human Rights + private right of action | Uncapped compensatory damages, back pay, attorneys’ fees |
| California ADS Regulations | October 1, 2025 | Independent bias testing, pre/post-deployment notice, meaningful human oversight, 4-year record retention | CA Civil Rights Dept. | Administrative penalties + private right of action |
| Colorado AI Act (SB 24-205) | June 30, 2026 (delayed from Feb 1) | “Reasonable care” standard, impact assessments, annual evaluations, risk management program, consumer notices | Attorney General (deceptive trade practices) | $20,000/violation |
| New Jersey N.J.A.C. 13:16 | December 15, 2025 | Disparate impact regulations for automated decisions | NJ Division on Civil Rights | Administrative + civil penalties |

NYC Local Law 144: The Enforcement Wake-Up Call

The December 2025 audit by New York State Comptroller Thomas DiNapoli exposed Local Law 144’s enforcement as “ineffective.” The findings were specific and damning:

  • DCWP reviewed 32 employers’ websites and found one instance of non-compliance. The Comptroller’s auditors reviewed the same companies and found 17 — a 17x detection gap.
  • Of 12 test calls to NYC’s 311 hotline for AEDT complaints, only 3 (25%) reached DCWP. Eight were misdirected to the NYS Department of Labor. One was directed back to the employer itself.
  • DCWP conducted no public education efforts after May 2023.
  • DCWP failed to use technical support procedures created by the NYC Office of Technology and Innovation.

The Comptroller issued 13 recommendations. DCWP agreed to implement the majority, including strengthened complaint routing, cross-divisional staff training, written enforcement policies, and enhanced investigative methods including interviews and tool demonstrations.

The practical implication: employers who assumed Local Law 144 was paper-only are now facing an agency retooling for aggressive enforcement. Compliance was cheap when nobody was checking. It becomes expensive when enforcement catches up to the law.
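For employers now preparing for real enforcement, the core of the LL144 bias audit is an impact-ratio calculation: each demographic category's selection rate divided by the selection rate of the most-selected category. A minimal sketch, using invented selection counts rather than audit data:

```python
# Minimal LL144-style impact-ratio sketch. All numbers are hypothetical,
# invented for illustration; a real bias audit uses actual applicant data
# broken out by the categories the DCWP rules require.
applicants = {"Group A": 1_000, "Group B": 800, "Group C": 500}
selected   = {"Group A": 300,   "Group B": 180, "Group C": 90}

# Selection rate per category, then each rate relative to the highest rate.
rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(rates.values())
impact_ratios = {g: rates[g] / top_rate for g in rates}

for group, ratio in impact_ratios.items():
    print(f"{group}: selection rate {rates[group]:.2%}, impact ratio {ratio:.2f}")
```

In this hypothetical, Group C's impact ratio of 0.60 falls well below the four-fifths benchmark regulators traditionally use as a disparate impact screen, which is exactly the kind of figure an annual audit must surface and the employer must publish.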

Illinois: The Uncapped Damage Threat

Illinois HB 3773 is the most plaintiff-friendly state AI employment law. Both the Illinois Department of Human Rights and the Human Rights Commission enforce it, and individuals who exhaust administrative remedies can pursue private lawsuits seeking uncapped compensatory damages, back pay, reinstatement, lost benefits, emotional distress damages, and attorneys’ fees. The law also specifically prohibits using ZIP codes as proxies for protected classes — directly targeting the algorithmic pattern alleged in Harper v. Sirius XM.

Colorado: The Governance Mandate

Despite the five-month implementation delay (SB 25B-004, signed August 28, 2025), the Colorado AI Act’s substance is unchanged. It requires employers to maintain a documented AI governance and risk management program explaining how the organization identifies, monitors, and mitigates algorithmic discrimination. The “reasonable care” standard creates an affirmative defense — employers who can demonstrate governance compliance have legal protection. Those who cannot face $20,000-per-violation penalties enforced by the Attorney General as deceptive trade practices.

The Additional Front: AI Video Interviews and Disability Discrimination

The ACLU’s March 2025 complaint against Intuit and HireVue opened a disability discrimination front. An Indigenous and deaf Intuit employee used HireVue’s AI video interview platform for a promotion and received feedback to “practice active listening.” The ACLU alleged that HireVue’s automated speech recognition performs “ten times worse” for deaf and hard-of-hearing individuals, with approximately every other word transcribed incorrectly.

The complaint was filed with both the EEOC and the Colorado Civil Rights Division, alleging violations of the ADA, Title VII, and the Colorado Anti-Discrimination Act. HireVue’s CEO disputed the claims, stating Intuit did not use an “AI-based assessment.” The case tests whether AI-powered interview tools must meet ADA accessibility standards — a question that extends to every employer using video interview technology with algorithmic evaluation.

Key Data Points

| Metric | Data |
| --- | --- |
| Applications rejected through Workday’s system (Mobley class period) | 1.1 billion |
| Worker profiles in Eightfold’s database | 1+ billion |
| Job applications Harper submitted to Sirius XM | 150 (149 rejected) |
| iTutorGroup EEOC settlement (first AI hiring discrimination case) | $365,000 for 200+ rejected applicants |
| NYC LL144 compliance detection gap (DCWP vs. Comptroller audit) | 1 vs. 17 violations found in same 32 companies |
| NYC 311 AEDT complaint routing accuracy | 25% (3 of 12 test calls reached DCWP) |
| Employers using AI in some capacity for hiring (2025 surveys) | 62-87% depending on definition |
| AI resume screening adoption among AI-using companies | 82% |
| Colorado AI Act penalty per violation | $20,000 |
| Illinois HB 3773 damages | Uncapped compensatory + attorneys’ fees |
| State AI employment laws taking effect in 2026 | 5 jurisdictions |
| EEOC AI guidance rescission | January 27, 2025 |

What This Means for Your Organization

The litigation landscape has shifted from theoretical to operational. The question is no longer whether AI hiring tools create legal exposure — three active cases and five state laws have answered that definitively. The question is whether your organization has the documentation, governance, and vendor agreements to survive a challenge.

Three actions matter now. First, inventory every AI tool touching employment decisions — not just recruiting software, but performance management, scheduling, and workforce planning systems. The Eightfold case demonstrates that FCRA liability attaches to any algorithmic scoring used in employment contexts, regardless of whether the tool was marketed as a “hiring” product. Second, audit your vendor contracts. If your AI vendor’s liability is capped at subscription fees while Mobley establishes that the deploying employer bears discrimination liability, your risk transfer is illusory. Require bias audit rights, FCRA compliance warranties, and indemnification that survives regulatory changes. Third, build the documentation trail that Colorado’s “reasonable care” standard rewards. A documented governance program that includes bias testing, human oversight protocols, and adverse action procedures is not just compliance — it is a legal defense.

The companies that will navigate this well are those that treat AI governance as litigation insurance, not bureaucratic overhead. The ones that wait will discover that the cost of defending a class action dwarfs the cost of the governance program they declined to build. If this raised questions specific to your organization’s exposure, I’d welcome the conversation — brandon@brandonsneider.com.

Sources

  1. Mobley v. Workday, Inc. (N.D. Cal., Case No. 3:23-cv-00770). Judge Rita Lin, conditional collective action certification granted May 16, 2025. Discovery order expanding to HiredScore customers, August 2025. Primary source — court filings. Highest credibility. Holland & Knight analysis, May 2025; Norton Rose Fulbright analysis, June 2025; Seyfarth Shaw analysis; HR Dive, customer list order, August 2025.

  2. Kistler v. Eightfold AI Inc. (Contra Costa County Superior Court, filed January 20, 2026). Outten & Golden LLP and Towards Justice. Independent plaintiff counsel analysis. High credibility. Outten & Golden press release, January 2026; National Law Review analysis, 2026; Fortune reporting, January 26, 2026; Norton Rose Fulbright FCRA analysis, March 2026.

  3. Harper v. Sirius XM Radio, LLC (E.D. Mich., filed August 4, 2025). Pro se plaintiff. Early stage — lower credibility for legal theory development, high credibility as litigation trend indicator. Epstein Becker Green analysis, 2025; Fisher Phillips analysis, 2025.

  4. EEOC v. iTutorGroup, Inc. (E.D.N.Y., settled August 9, 2023). First AI employment discrimination settlement. $365,000 for 200+ rejected applicants. Primary source — EEOC enforcement record. Highest credibility. EEOC press release, August 2023.

  5. New York State Comptroller Audit of NYC Local Law 144 Enforcement (Report 2025-N-2, December 2, 2025). NY State Comptroller Thomas DiNapoli, audit period July 2023-June 2025. Independent government audit. Highest credibility. Comptroller audit report, December 2025; Comptroller press release, December 2025; DLA Piper employer risk analysis, January 2026.

  6. ACLU v. Intuit/HireVue complaint (filed with EEOC and Colorado Civil Rights Division, March 19, 2025). Disability and race discrimination in AI video interviews. HR Dive, March 2025; Public Justice, March 2025.

  7. State law analysis: Seyfarth Shaw AI legal roundup (Colorado/California/Illinois comparison), 2025; Hinshaw & Culbertson (Illinois HB 3773 enforcement mechanisms), 2025; Fisher Phillips (Colorado AI Act amendments), 2025; K&L Gates (EEOC guidance withdrawal), January 2025.

  8. Employer AI adoption statistics: Multiple surveys with varying methodologies. Insight Global 2025 AI in Hiring Survey; DemandSage AI Recruitment Statistics 2026. Moderate credibility — survey definitions of “AI” vary significantly.

  9. Quinn Emanuel, “When Machines Discriminate: The Rise of AI Bias Lawsuits,” citing University of Washington research on resume name bias (85% preference for white-associated names). Independent law firm analysis. High credibility for legal analysis; study credibility depends on underlying UW methodology.


Brandon Sneider | brandon@brandonsneider.com | March 2026