
Navigating the AI Regulatory Landscape: US Contemplates Regulations and the EU Moves Ahead with the AI Act

June 7, 2023

On May 16, 2023, the U.S. Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law convened a critical hearing to address the burgeoning field of artificial intelligence (AI). The session aimed to probe the potential opportunities and challenges of AI, drawing comparisons to the transformative effects of past innovations like the printing press. 

The hearing’s witnesses represented key players in the AI industry, including Samuel Altman, CEO of OpenAI; Christina Montgomery, Chief Privacy & Trust Officer at IBM; and Gary Marcus, Professor Emeritus at New York University. The conversation focused on the risks associated with AI, such as the dissemination of false information, behavioral manipulation, child safety, and potential utilization in warfare. 

The Subcommittee recognized the necessity of taking proactive steps in AI regulation, learning from prior regulatory missteps in social media and the ongoing absence of a federal privacy law. Overall, the participants’ statements reflected a discernible consensus on the need for the U.S. to lead in crafting regulations for AI, particularly in light of the European Union’s (EU) advancement with its own AI Act. 

One key takeaway from the hearing was the shared belief that AI regulation is inevitable. As Altman argued, people need to know if they’re talking to an AI system or looking at content generated by a chatbot. The question now is how this regulation should be implemented.  

Altman underscored that Section 230 of the Communications Decency Act, which offers legal protection to online platforms for user-posted content, does not extend to AI companies. He proposed the creation of a new agency to oversee AI, complete with a licensing system based on the AI’s capabilities and associated risk levels. 

Meanwhile, Montgomery advocated for “precision regulation” that focuses on specific AI use cases rather than the technology as a whole. Her proposal includes clear guidelines for different risk levels, transparency requirements, and ongoing governance of AI models. 

Drawing from the medical industry’s regulatory practices, Marcus suggested a monitoring agency similar to the FDA, capable of conducting both pre- and post-review of AI systems. He further emphasized the need for increased funding dedicated to AI safety research. He also noted that although the AI “genie is out of the bottle,” there are other, more powerful AI genies still in bottles. For example, Artificial General Intelligence (AGI), with self-improvement and self-awareness capabilities, could be on the horizon. Such AGI systems could possess a broader and more advanced range of analytical capabilities than conventional AI, enabling them to provide information that was not directly encountered during their design phase. 

The EU, for its part, continues to move ahead with adopting its AI Act. The Act encompasses various provisions for regulating AI. It offers a broad definition of AI, including machine learning, deep learning, knowledge- and logic-based approaches, and statistical approaches, and it classifies AI into four risk categories: unacceptable, high, limited, and minimal. Technologies falling under the unacceptable risk category, such as real-time facial and biometric identification systems in public spaces and systems that exploit vulnerabilities of specific individuals, are prohibited. 

High-risk AI systems – those operating in a sector with potential for significant risks and whose intended purpose could lead to substantial risks within that sector – will be required to undergo strict conformity assessments before they can be brought to market. These assessments analyze data sets, biases, user interactions, system design, and output monitoring. Additionally, high-risk systems must be transparent, explainable, provide human oversight, and offer clear information to users. Post-market monitoring will be mandatory, with a focus on tracking performance data and ensuring continuous compliance, particularly as AI algorithms evolve over time. AI systems in the limited and minimal risk categories, by contrast, have fewer requirements but must still adhere to transparency obligations.  

One common criticism of the EU’s AI Act is that it does not adequately capture the unique risks and challenges posed by general-purpose AI systems like ChatGPT. These systems have the ability to generate human-like text and can be utilized for a wide range of applications, including content creation, customer service, and information retrieval. They may raise concerns related to misinformation, bias, privacy, and potential malicious use. Critics argue that the AI Act should have specific provisions addressing the risks and requirements associated with general-purpose AI systems to ensure they are developed and used responsibly. 

However, after testifying to the need for regulation, Sam Altman recently revealed that OpenAI may have to stop operations in the EU due to difficulties in complying with upcoming AI legislation in the region. Although OpenAI plans to comply with the new rules, Altman expressed concerns over the AI Act’s classification of “high-risk” systems, which, as it stands, could apply to large AI models like ChatGPT. Altman argued that general-purpose systems are not inherently high-risk, and that the firm would cease operations in the EU if it could not meet the requirements. “Either we’ll be able to solve those requirements or not,” Altman said during a panel discussion at University College London. “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.” Days after making this statement, however, Altman dialed back his position and instead stated that OpenAI wants “to make sure it is able to comply.” 

EU lawmakers have previously amended proposed provisions in order to foster AI innovation and boost the economy. For example, an earlier draft would have banned the use of copyrighted material to train generative AI like ChatGPT; that provision was instead changed to a transparency obligation requiring operators of AI platforms to disclose the copyrighted materials used to train their AI systems. Nonetheless, EU lawmakers seem determined to regulate AI, regardless of Altman’s recent remarks about potential difficulties in complying with “high-risk system” requirements. Much as the General Data Protection Regulation (GDPR), the EU’s privacy law, has resulted in record fines, the AI Act may leave tech giants struggling to meet its regulatory requirements. 

Thus, as the U.S. contemplates AI regulation and the EU moves forward with the AI Act, it remains crucial to strike a balance between fostering innovation and safeguarding the well-being and rights of individuals in the evolving AI landscape. 

If you have questions about legal developments related to AI or IP issues, please reach out to a member of LP’s Intellectual Property Group.


Filed under: Intellectual Property
