Artificial Intelligence and Bias: A Growing Employment Risk for U.S. Employers
From resume screening to employee evaluations, artificial intelligence (AI) is quickly becoming a mainstay in the modern workplace. But as employers increasingly rely on algorithmic tools to streamline hiring, promotion, and performance management, they may be walking a legal tightrope. The Equal Employment Opportunity Commission (EEOC) and several state agencies are intensifying scrutiny of how these tools are used and how they may unintentionally discriminate.
Why It Matters: Bias In, Bias Out
AI tools are only as objective as the data on which they’re trained. If that data reflects historical inequities, the resulting algorithms can perpetuate bias based on race, gender, age, disability, or other protected characteristics. Employers that adopt these technologies without proper oversight leave themselves vulnerable to claims under Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and comparable state laws.
In other words, just because a decision was made by software doesn’t mean it’s shielded from legal scrutiny.
EEOC Guidance: What Employers Need to Know
In late 2024, the EEOC released updated guidance clarifying how federal anti-discrimination laws apply to automated employment decision tools. Key points include:
Employers Are Responsible for the Outcomes of Third-Party Tools
Whether you build it in-house or license it from a vendor, you remain liable for discriminatory outcomes.
Disparate Impact Still Applies
Even if your AI system treats everyone “equally,” it may still result in a disparate impact on protected groups, which is unlawful unless justified by business necessity.
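Disparate impact is a quantitative concept: regulators have long used the "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures as a rough screening benchmark, under which a selection rate for a protected group below 80% of the highest group's rate may indicate adverse impact. The following Python sketch illustrates the arithmetic only; the function names and applicant numbers are hypothetical, and the four-fifths rule is a rule of thumb, not a legal safe harbor:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def meets_four_fifths(protected_rate: float, comparison_rate: float) -> bool:
    """True if the protected group's rate is at least 4/5 (80%) of the comparison rate."""
    return (protected_rate / comparison_rate) >= 0.8

# Hypothetical screening results: 48 of 120 applicants over 40 advanced,
# versus 90 of 150 applicants under 40.
older_rate = selection_rate(48, 120)    # 0.40
younger_rate = selection_rate(90, 150)  # 0.60

ratio = older_rate / younger_rate
print(f"Impact ratio: {ratio:.2f}")  # ~0.67, below the 0.8 benchmark
print(f"Meets four-fifths benchmark: {meets_four_fifths(older_rate, younger_rate)}")
```

A ratio below 0.8, as in this hypothetical, does not itself prove a violation, but it is the kind of statistical disparity that invites scrutiny and a business-necessity defense.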
Accessibility Concerns Under the ADA
AI that screens out candidates based on traits associated with a disability, such as facial expressions, speech patterns, or the time it takes to complete a task, can violate the ADA if reasonable accommodations are not provided.
This builds on the EEOC’s earlier technical assistance documents from 2022 and its joint initiative with the Department of Justice on AI and disability rights.
State-Level Action Is Accelerating
States are not waiting for federal regulation to catch up. A few notable examples:
New York City’s Local Law 144 requires employers to conduct bias audits of automated employment decision tools and provide candidates with advance notice. Enforcement began in 2023, and fines can reach $1,500 per violation.
Illinois’ Artificial Intelligence Video Interview Act mandates notice, consent, and data retention limitations when using AI to analyze video interviews.
California and Washington are exploring legislation that would impose transparency and impact assessment obligations for algorithmic hiring tools.
These state laws often impose stricter requirements than federal law and are being closely watched by employers nationwide.
Hypothetical: When the Algorithm Crosses the Line
Imagine a retail company using an AI tool to screen applicants. The system is trained on past hiring decisions, many of which favored younger, tech-savvy candidates. Over time, the AI begins to screen out older applicants at a significantly higher rate.
An older applicant who meets all the job qualifications is rejected without an interview. They file a claim under the Age Discrimination in Employment Act (ADEA), citing a pattern of biased rejections. The employer argues that “the system made the decision,” but the EEOC sees it differently.
In this case, the employer could be held liable for using a tool that caused a disparate impact on the basis of age—even if the bias was unintended.
What Employers Should Do Now
To reduce legal risk, employers should proactively evaluate how AI is used in their employment processes. Consider taking the following steps:
Audit Your Tools
Conduct regular bias audits, either internally or through a qualified third party, and document your findings.
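One way to structure the core calculation in such an audit is to compare each group's selection rate against the highest-scoring group, the "impact ratio" approach used in bias audits under NYC Local Law 144. A minimal Python sketch follows; the group labels and counts are hypothetical, and a real audit should be scoped with counsel and a qualified auditor:

```python
from typing import NamedTuple

class GroupStats(NamedTuple):
    selected: int
    applicants: int

def impact_ratios(groups: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: s.selected / s.applicants for g, s in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit data for one screening tool.
audit = {
    "group_a": GroupStats(selected=80, applicants=200),  # rate 0.40
    "group_b": GroupStats(selected=45, applicants=150),  # rate 0.30
    "group_c": GroupStats(selected=50, applicants=100),  # rate 0.50
}

for group, ratio in impact_ratios(audit).items():
    flag = "" if ratio >= 0.8 else "  <-- flag for review"
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```

In this hypothetical, group_b's ratio falls below the 0.8 benchmark and would be flagged for closer review; documenting both the calculation and the follow-up is what turns an audit into usable evidence of diligence.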
Demand Transparency From Vendors
Ask for information on how tools are trained, what data is used, and whether bias testing has been conducted.
Provide Accommodations
Ensure that any AI tool used in assessments complies with the ADA and includes options for candidates who require modifications.
Train Your Team
HR professionals and hiring managers should understand the risks and limitations of algorithmic tools.
Looking Forward: Reducing Risk in the Age of Algorithmic Hiring
AI promises efficiency, but without oversight, it can introduce a new class of compliance challenges. As regulatory guidance evolves, employers must stay informed and take a proactive role in ensuring that their use of technology aligns with anti-discrimination laws.
If your organization is considering—or already using—AI in employment decisions, now is the time to review your policies and practices. Our firm can help you assess risk, navigate compliance, and build a responsible approach to modern hiring.