From Crush to Commitment: Maturing Your AI Compliance Program
Employers and human resources departments nationwide are increasingly entering committed relationships with artificial intelligence (AI) tools. But as federal guidance shifts and state regulations multiply, employers must navigate a complex landscape to ensure they are exercising the caution and transparency required by law. This article provides an overview of key federal and state regulations, and outlines best practices employers should implement when using these tools.
The Federal Foundation: Anti-Discrimination Laws Still Apply
Although the Equal Employment Opportunity Commission (EEOC) and U.S. Department of Labor have withdrawn or stepped back from certain AI-specific guidance, federal employment laws continue to apply fully to AI-assisted hiring and workplace decision-making. Title VII of the Civil Rights Act (Title VII), the Americans with Disabilities Act (ADA), and other federal anti-discrimination laws govern AI whenever it is used as a “selection procedure” or otherwise influences employment decisions involving protected characteristics. As a result, employers remain exposed to liability in three critical areas:
Disparate impact discrimination. AI tools that disproportionately exclude or disadvantage protected groups may violate Title VII, even where the outcomes are unintended or algorithmically driven. Facially “neutral” tools can still produce statistically significant adverse effects, and employers can be held responsible for results, not just intent, especially when systems rank, filter, or score candidates.
Disability discrimination. AI systems that screen out individuals based on disability-related characteristics may trigger ADA liability. Three recurring traps plague algorithmic screening: failure to provide reasonable accommodations on AI-driven assessments, screening out qualified individuals due to disability-linked signals, and conducting disability-related inquiries or medical exams before extending a conditional offer.
Third-party vendor risk. Using AI tools developed or administered by vendors, such as Automated Employment Decision Tools (AEDTs), does not shield employers from liability. Employers remain responsible for discriminatory outcomes produced by vendor-provided systems used in employment decisions.
State and Local Regulations: A Growing Commitment to Oversight
While federal agencies may be taking a step back from certain kinds of oversight (at least for now), state and local governments are moving forward with AI regulations. These jurisdictions are building comprehensive frameworks that extend federal anti-discrimination principles into affirmative governance, transparency, and risk-management obligations.
Colorado’s AI Act: “Reasonable Care”
Colorado’s SB 24-205, set to become effective on June 30, 2026, represents the most comprehensive state approach to “consequential decisions” in employment. The statute goes beyond simple disclosure requirements, imposing a statutory duty of “reasonable care” on both developers and deployers of high-risk AI systems to prevent algorithmic discrimination.
The law defines algorithmic discrimination broadly to include differential treatment or impact that disadvantages protected groups. Covered employers must implement risk-management policies, conduct impact assessments, notify individuals when AI influences consequential decisions, offer opportunities to correct data and seek human review, and publicly disclose their use of such systems. The Colorado Attorney General enforces the law, and violations may trigger civil penalties.
The Regulatory Patchwork: Other State and Local Laws
Illinois. Amendments to the Illinois Human Rights Act (effective January 1, 2026) make it unlawful for employers to use AI that produces discriminatory effects based on protected characteristics. Critically, the law also requires notice to applicants and employees about AI use, extending transparency obligations across the employment lifecycle.
California. Finalized FEHA regulations (effective October 1, 2025) treat automated decision systems as subject to existing anti-discrimination duties. The regulations clarify that employers may not use AI tools in ways that result in unlawful bias, a principles-based approach that applies California’s robust employment protections to algorithmic systems.
New York City. Local Law 144 remains the most established local AI-in-hiring regime. It requires independent bias audits, public posting of audit results, and notice to candidates before an automated employment decision tool is used. The city’s transparency-driven approach predates the state laws discussed above and has served as a model for subsequent regulations.
Practical Steps: Nurturing a Healthy AI-Compliance Relationship
Employers should proactively manage and mitigate AI-related discrimination risks. Like any healthy partnership, this requires ongoing attention, honest assessment, and willingness to make changes when problems arise.
Inventory and assess your AI ecosystem. Map AI use cases across the employment lifecycle: recruiting, screening, interviews, promotions, performance assessment, compensation determinations, and terminations. You can’t manage risks you haven’t identified. Understanding where algorithms touch employment decisions is the foundation for compliance.
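As one way to make this mapping concrete, the minimal sketch below shows an inventory structured as code. The record fields, tool names, and entries are hypothetical, and a well-maintained spreadsheet serves the same purpose; what matters is capturing each tool, its lifecycle stage, how much influence it has over decisions, the jurisdictions it touches, and when it was last validated.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in an inventory of AI touchpoints across the employment lifecycle."""
    system_name: str          # tool or product name (hypothetical examples below)
    vendor: str | None        # None if built in-house
    lifecycle_stage: str      # e.g., "recruiting", "screening", "promotion"
    decision_influence: str   # e.g., "advisory", "ranking", "automated rejection"
    jurisdictions: list[str] = field(default_factory=list)  # where the tool is used
    last_validation: str | None = None                      # ISO date of last bias testing

# Hypothetical entries, for illustration only.
inventory = [
    AIUseCase("ResumeRanker", "ExampleVendor Inc.", "screening",
              "ranking", ["NYC", "IL", "CO"], "2025-06-01"),
    AIUseCase("PerformanceInsights", None, "performance assessment",
              "advisory", ["CA"]),
    AIUseCase("VideoScreen", "ExampleVendor Inc.", "interviews",
              "scoring", ["IL"]),
]

# Flag higher-risk entries: tools that do more than advise and have no
# validation on record.
for uc in inventory:
    if uc.decision_influence != "advisory" and uc.last_validation is None:
        print(f"Review needed: {uc.system_name} ({uc.lifecycle_stage})")
```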
Implement human review and escalation paths. Ensure meaningful human oversight around adverse decisions. “Meaningful” means the reviewer understands how the system works, has authority to override the algorithm, and can evaluate whether outcomes align with your organization’s values and legal obligations. Rubber-stamping algorithmic recommendations doesn’t satisfy this requirement.
Validate tools for adverse impact. Conduct regular testing to identify whether AI systems disproportionately affect protected groups. Keep current documentation of validation efforts, including the methodology used, results obtained, and any corrective actions taken. This documentation serves both compliance and defense functions.
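One common first-pass screen is the four-fifths (80%) rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than four-fifths of the rate for the highest-selected group is generally regarded as evidence of adverse impact, and New York City’s Local Law 144 bias audits use a similar impact-ratio metric. The minimal sketch below, using hypothetical applicant counts, shows the basic arithmetic; it is a screening heuristic, not a legal safe harbor, and small samples call for formal statistical testing.

```python
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Selection rate of each group relative to the highest-selected group.

    outcomes maps group name -> (number selected, number of applicants).
    """
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Hypothetical screening results, for illustration only:
# Group A: 48 of 80 applicants advanced (60%); Group B: 18 of 60 (30%).
results = {"Group A": (48, 80), "Group B": (18, 60)}

for group, ratio in impact_ratios(results).items():
    status = "below the four-fifths threshold" if ratio < 0.8 else "within threshold"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

Here Group B’s 30% selection rate is half of Group A’s 60%, an impact ratio of 0.50, well below the 0.80 benchmark. A result like this should prompt deeper statistical review, and the documentation of methodology, results, and corrective actions described above.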
Refine ADA accommodation processes for AI assessments. Build clear pathways for candidates and employees to request accommodations for AI-driven evaluations. This may include alternate assessment formats, the ability to bypass certain algorithmic screening, or guaranteed human review.
Plan for regulatory fragmentation. Multi-state employers should anticipate an increasingly complex compliance landscape. What works in one jurisdiction may fall short in another. Consider whether to adopt the most stringent standard across your operations or maintain jurisdiction-specific protocols. Both approaches carry trade-offs in complexity and risk.
Scrutinize vendor contracts. Review vendor agreements to ensure transparency regarding algorithmic function, compliance with anti-discrimination laws, and appropriate risk allocation. Key provisions should address data usage and retention, audit rights, indemnification for discriminatory outcomes, data privacy, compliance with applicable laws, and vendor obligations to provide information necessary for your compliance efforts.
Don’t accept vendor assurances of “fairness” or “compliance” at face value. Request documentation of validation studies, information about the data used to train models, and explanations of how the system reaches decisions.
The Path Forward: Commitment With Eyes Open
AI offers genuine benefits in efficiency, consistency, and scalability across the hiring process. But, like any powerful tool, it carries risks that require active management. The regulatory landscape is evolving rapidly, with states taking the most active role at this time.
For employers, the strategy is clear: Embrace AI’s potential while building robust compliance frameworks that mitigate risks associated with discrimination claims. This means going beyond good intentions to implement real safeguards, maintain transparency, and ensure human judgment remains central to consequential employment decisions.
Questions about how current regulations apply to your company’s use of AI in hiring? Reach out to Deja Davis or another member of LP’s Employment & Executive Compensation Group.