

The AI Legal Pulse: June 2023 Legal and Tech Updates on Disruptive Technologies

June 28, 2023


The AI landscape is rapidly evolving. To help you stay abreast of the various developments, we recap the latest legal and tech updates related to artificial intelligence (AI) and other emerging technologies.

The EU’s AI Act moves to the final stages before its expected effective date in 2025.

The European Parliament passed a draft of the AI Act in June 2023, and the Council of the European Union is expected to review and approve it at the end of this year. The law is expected to take effect in 2025. The AI Act implements a “risk-based approach,” placing restrictions on the use of AI based on the perceived danger of the particular application. High-risk applications may face new usage limitations and increased transparency requirements, while certain uses deemed to pose “unacceptable” risks may be banned outright. The draft introduces several significant measures, such as a ban on facial recognition in public places and new regulations on generative AI, including a ban on using copyrighted material in the training sets of large language models and a requirement that AI-generated content be labeled.

The AI Act categorizes AI into four levels of risk – unacceptable, high, limited, and minimal – each with its own regulations. The “unacceptable” category prohibits technologies like real-time facial recognition in public spaces and systems that exploit individuals’ vulnerabilities. The “high” risk category calls for stringent assessments of data sets, biases, user interactions, and system design before market entry, with additional requirements for transparency, human oversight, and user information. High-risk systems are also subject to post-market monitoring focused on tracking performance data and ensuring continued compliance as AI algorithms evolve. Conversely, AI systems in the “limited” and “minimal” risk categories face fewer stipulations, although they must still meet certain transparency obligations.

Federal judges impose AI certification mandates and sanctions.

Two federal judges have recently issued orders in their respective courts in Texas and Illinois requiring attorneys to attest either that they did not use AI to draft court documents or that a human verified the accuracy of any AI-generated output. U.S. District Judge Brantley Starr of the Northern District of Texas issued an order on May 30, 2023, requiring attorneys to certify that they did not use generative AI to draft court documents or, if they did, that the output was manually checked for accuracy. Following this order, on May 31, U.S. Magistrate Judge Gabriel Fuentes of the Northern District of Illinois issued a Standing Order with a similar requirement, mandating disclosure of any AI usage to conduct legal research or to draft documents for filing with the Court. Notably, such disclosure is not necessarily limited to generative AI.

These developments come after two attorneys who used the AI tool ChatGPT to conduct legal research and prepare a legal brief were sanctioned for citing nonexistent case law generated by the AI. The attorneys relied solely on ChatGPT without understanding that it is not a search tool or how it works – namely, that it generates text based on statistical analysis, which can cause it to draw incorrect inferences (or “hallucinate”) and generate false information. U.S. District Judge P. Kevin Castel called out the attorneys for submitting a brief that cited nonexistent case law, finding the lawyers “abandoned their responsibilities” to check their work. The incident led to sanctions against the attorneys, including fines and court-ordered remedial measures such as training at their firm. Judge Castel highlighted the professional responsibility of attorneys to ensure the accuracy of their filings, regardless of the technology used, marking a crucial milestone in the intersection of AI and law.

Bipartisan bills introduced in the U.S. Senate propose AI transparency in public interactions and a new Office of Global Competition Analysis.

U.S. senators introduced two bipartisan bills on June 8, 2023, reflecting the growing interest in addressing issues related to AI. The first bill, introduced by Senators Gary Peters, Mike Braun, and James Lankford, would require government agencies to disclose when they use AI in public interactions and to provide a mechanism for people to appeal AI-made decisions. The second bill, put forth by Senators Michael Bennet, Mark Warner, and Todd Young, proposes the creation of an Office of Global Competition Analysis to ensure the U.S. maintains its competitiveness in AI and other strategic technologies.

U.S. and EU collaborate on a voluntary code of conduct for AI.

The U.S. and the EU are developing a voluntary code of conduct for AI, according to European Commission Vice President Margrethe Vestager, as the rapidly evolving technology raises concerns about its potential risks and increases demand for regulation. This temporary measure aims to bridge the gap while the EU finalizes its comprehensive AI rules. The two governments aim to incorporate industry feedback into the final proposal for the voluntary code.

U.S. Senate hearings address AI regulation and AI-driven inventorship.

On May 16, 2023, the U.S. Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing on the implications of AI. Representatives from academia and major AI industry players, including OpenAI and IBM, participated, discussing AI’s potential opportunities, challenges, and possible regulation. The consensus indicated a need for proactive regulation of AI, with the U.S. leading in the creation of such rules, especially given the EU’s progress on its AI Act. Witnesses presented various proposals for regulation, such as a new oversight agency suggested by OpenAI’s CEO, Samuel Altman, and a “precision regulation” model advocated by IBM’s Chief Privacy & Trust Officer, Christina Montgomery. Gary Marcus, Professor Emeritus at NYU, proposed a regulatory body akin to the FDA, along with increased funding for AI safety research. The importance of regulation was underscored in light of potential future developments like Artificial General Intelligence (AGI), a hypothesized form of AI with self-improvement and self-awareness capabilities.

On June 7, 2023, the U.S. Senate Committee on the Judiciary’s Subcommittee on Intellectual Property held a hearing titled “Artificial Intelligence and Intellectual Property – Part I: Patents, Innovation, and Competition.” Chaired by Senator Christopher Coons (D-DE), the hearing focused on the impact of AI on innovation and patent law. Witnesses from academia, technology companies, and the intellectual property bar offered diverse perspectives on AI’s influence on intellectual property law. Major points of discussion included the patentability of AI-driven inventions, the need to adapt or amend the Patent Act to accommodate AI, the establishment of AI regulatory bodies, and the U.S.’s relationship with China regarding AI development and regulation.

Google’s EU launch of AI chatbot Bard postponed amid GDPR compliance concerns.

Google’s launch of its AI chatbot Bard in the EU was postponed due to privacy concerns raised by the Irish Data Protection Commission. The regulator claims Google has provided insufficient information about how the tool complies with the EU’s data privacy laws, specifically the General Data Protection Regulation (GDPR). The GDPR is a comprehensive data protection law that requires businesses and organizations to safeguard the personal data and privacy of individuals in the EU, giving them significant control over their personal data, including rights of access, correction, and deletion. The matter is under ongoing examination, and the launch of Bard will not proceed until detailed assessments and further documentation have been submitted to and approved by the Commission.

SCOTUS’ Warhol decision on fair use may challenge AI companies’ reliance on copyright law’s fair use defense for model training.

In a landmark decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith, the U.S. Supreme Court upheld photographer Lynn Goldsmith’s claim that the Andy Warhol Foundation infringed her copyright. Justice Sonia Sotomayor, writing for a majority of seven, ruled that the Foundation’s licensing of an image derived from Goldsmith’s photograph of Prince for a Condé Nast cover was not a fair use of Goldsmith’s copyrighted work. Although Warhol’s piece had been considered transformative – a quality that, until now, has often weighed heavily in favor of fair use – the Court held that transformativeness alone is not dispositive. Of the four statutory factors considered in determining fair use – (i) the purpose and character of the later use, (ii) the nature of the work, (iii) the substantiality of the copying, and (iv) the effect of the later use on the market for the original work – the Court focused on the first, concluding that it weighed in favor of Goldsmith because the licensed image served substantially the same commercial purpose as her photograph. Sotomayor’s opinion underscored the balancing act between creativity and availability in copyright law, and that the commercial nature of Warhol’s secondary use did not shield it under fair use.

For generative AI companies that use copyrighted works to train their AI systems, the Warhol opinion suggests that such training – and the resulting AI-generated output – may not automatically be protected under fair use, particularly when the underlying AI models are commercialized or used in contexts similar to the original works.

The Grammys align with the U.S. Copyright Office and federal courts on AI-generated works, requiring a human author.

Taking a stance similar to that of the U.S. Copyright Office and the federal courts, the Recording Academy, which runs the Grammy Awards, has announced new eligibility criteria permitting songs incorporating AI elements to be considered for awards, provided there is “meaningful” human authorship. Fully AI-generated songs, however, will not be eligible, and only “human creators” can be nominated or win. This comes amid advances in generative AI that have made AI-assisted songs increasingly popular, with some going viral online.

The U.S. Copyright Office issued a statement clarifying its practices for examining and registering works containing AI-generated content, emphasizing that copyright protection extends only to human-created material. Applicants for copyright registration must disclose and describe any AI-generated content and can claim copyright only in the human-authored portions; non-compliant pending applications or issued registrations require corrective action, and failure to comply could result in the loss of registration benefits. The Office’s position echoes a recent case, Thaler v. Vidal, in which Dr. Stephen Thaler attempted to patent two inventions produced by his AI system, “DABUS.” The United States Patent and Trademark Office (USPTO) denied Thaler’s patent applications, and both the U.S. District Court and the Court of Appeals affirmed that decision, holding that the Patent Act requires a human inventor. This human-centric approach to intellectual property rights is shaping up to be the norm, as demonstrated by the rulings of the USPTO and the courts and the stance of the Copyright Office.

If you have questions about legal developments related to AI, please reach out to a member of LP’s Intellectual Property Group.

Additional Information:

AI Legal Pulse: May 2023

