AI Governance: Mitigating Risk and Achieving Compliance
Artificial intelligence (AI) is now embedded in operations across every major industry. This rapid proliferation brings extraordinary opportunity, but it also introduces a widening surface of security threats, legal exposure, and regulatory risk that many organizations are not yet equipped to manage. For compliance officers and executives, the imperative is clear: governance must catch up with adoption, and quickly.
The Regulatory Landscape Is Moving Fast
States
The legal environment surrounding AI is fragmenting rapidly at the state level, even as federal preemption remains limited. Several states have already enacted or significantly amended AI-related laws that carry real compliance obligations for businesses operating in those jurisdictions.
Illinois: Illinois amended its Human Rights Act effective January 1, 2026, prohibiting AI use that discriminates in hiring, firing, and discipline, and requiring employers to notify employees when AI is used in certain workplace decisions.
Colorado: Colorado enacted the first comprehensive U.S. state AI law, imposing rules for high-risk AI systems used in consequential decisions. The law is designed to protect against algorithmic discrimination and requires both developers and deployers to conduct impact assessments, disclose AI system use, and mitigate risks. It takes effect June 30, 2026.
Other developing regulations:
- California has enacted multiple AI-related laws addressing deepfakes, automated decision-making, and certain transparency obligations, many of which are already in effect.
- Texas has advanced the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which includes consumer protections and anti-discrimination provisions and, notably, provides a liability safe harbor for organizations that demonstrate compliance with the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF).
- New York City Local Law 144-21 requires independent bias audits of hiring algorithms.
- Protections for minors: Many states are introducing AI-specific protections for minors, including limits on advertising and profiling; heightened transparency standards for automated systems affecting minors; child-focused risk assessments; restrictions on algorithmic curation paired with high-privacy default requirements; safety guardrails for recommender systems; and risk-mitigation measures around mental health, exploitation, and addictive design.
Federal
At the federal level, Executive Order 14365, signed December 11, 2025, signals the current administration’s intent to establish a “minimally burdensome national policy framework for AI.” The order creates an AI Litigation Task Force within the Department of Justice (DOJ) to challenge state laws inconsistent with this approach, instructs the Secretary of Commerce to evaluate “onerous” state laws within 90 days (by roughly March 11), and directs the Federal Trade Commission (FTC) to issue guidance on unfair or deceptive AI practices. Critically, however, the executive order does not preempt existing state AI laws; congressional action or judicial interpretation would be required for preemption, and until then those laws remain in full effect. Given this reality, executives cannot wait for federal clarity before designing and building AI compliance programs.
The Role of Litigation
As the regulatory framework comes into focus, litigation around AI is already escalating. Regulators and plaintiffs’ attorneys are increasingly using existing consumer protection laws (such as the FTC Act) to pursue AI-related claims. Chatbot litigation is on the rise, including a Texas Attorney General investigation into chatbots used for therapeutic purposes and private lawsuits following chatbot-related suicides. AI claims are frequently paired with privacy claims under statutes like the California Invasion of Privacy Act. Because the risk of litigation is so broad, AI governance has become a genuine legal-risk management activity that goes far beyond the IT department.
Key Regulatory Takeaways:
- Begin implementing AI governance measures now, even before a full AI governance framework is in place.
- Prioritize transparency and risk assessments in AI use.
- Implement child-specific safeguards where relevant.
- Expect rapid regulatory expansion in 2026.
AI Vendor Management Considerations
To meet existing and contemplated regulatory compliance requirements, organizations also need robust vendor management programs for AI tools. Vendor-provided AI tools are still software and should meet minimum security expectations before purchase and implementation. Vendor contracts deserve particular scrutiny. Organizations are responsible for the AI systems they deploy, including those built on third-party foundation models. Contracts with AI vendors should explicitly address:
- Scope of use and purpose limitations
- Data ownership and intellectual property rights
- Confidentiality protections
- Audit and assessment rights
- Incident and breach notification timelines
- Subprocessor disclosures
- Termination rights, including data return or deletion
Human Oversight Is a Control, Not a Checkbox
As AI systems become involved in consequential decisions, defining where human judgment must intervene is both an ethical and a legal imperative. High-impact decisions in areas such as hiring, lending, insurance, and medical diagnosis require documented human review. The same applies to edge cases, model exceptions, and situations where the model’s confidence falls below defined thresholds. A robust AI governance program will address the following (see the sketch after this list):
- Document escalation paths and override authority for the AI model.
- Train users on when not to trust the model.
- Define reviewer qualifications and training requirements.
- Assign accountability: designate who owns the system, which types of decisions they are responsible for, and who answers when something goes wrong.
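Where governance requirements like these meet engineering, the escalation logic itself can be encoded and audited. The sketch below is a minimal illustration, assuming a hypothetical confidence threshold, decision categories, and reviewer identifiers (none of which come from the laws discussed above): decisions that are high-impact or fall below the threshold are routed to a qualified human reviewer, and every escalation is logged so override authority remains traceable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy values -- in practice these come from your governance
# program and applicable law, not from this example.
CONFIDENCE_THRESHOLD = 0.85
HIGH_IMPACT_CATEGORIES = {"hiring", "lending", "insurance", "medical_diagnosis"}

@dataclass
class Decision:
    category: str              # e.g., "hiring"
    model_output: str          # the model's recommended action
    confidence: float          # model-reported confidence, 0.0 to 1.0
    audit_log: list = field(default_factory=list)

def needs_human_review(d: Decision) -> bool:
    """High-impact decisions and low-confidence outputs require
    documented human review rather than automated action."""
    return (d.category in HIGH_IMPACT_CATEGORIES
            or d.confidence < CONFIDENCE_THRESHOLD)

def route(d: Decision, reviewer_id: str | None = None) -> str:
    """Route a model output, logging every escalation for audit purposes."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if needs_human_review(d):
        # Escalate: the model output becomes a recommendation, not a decision.
        d.audit_log.append({
            "time": timestamp,
            "event": "escalated_to_human",
            "reviewer": reviewer_id,   # reviewer must meet documented qualifications
            "model_output": d.model_output,
            "confidence": d.confidence,
        })
        return "pending_human_review"
    d.audit_log.append({
        "time": timestamp,
        "event": "auto_approved",
        "confidence": d.confidence,
    })
    return d.model_output

# Example: a low-confidence lending decision is escalated, not auto-approved.
loan = Decision(category="lending", model_output="deny", confidence=0.62)
print(route(loan, reviewer_id="analyst-042"))  # -> pending_human_review
```

A production system would persist the audit log, verify reviewer qualifications, and record the final human determination alongside the model’s recommendation; the point of the sketch is simply that human oversight can be an enforced, logged control rather than a policy statement.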
The Bottom Line
Organizations that establish AI governance programs now will be better positioned on every dimension that matters: regulatory compliance, legal protection, risk management, stakeholder trust, and competitive differentiation. Those that defer will face mounting legal exposure, operational disruptions from ungoverned AI deployments, and the costly exercise of retrofitting controls onto systems where risk was already locked in at design. The organizations that act now will mitigate their AI risk on their own terms, rather than under regulatory, litigation, or crisis-driven pressure.
To view Kathryn’s recent webinar on this topic, presented with Kelsey Cunningham of Echelon Risk + Cyber, click here.
Questions around AI compliance? Reach out to Kathryn or another member of LP’s AI & Technology Team.