How EU Privacy Laws Impact Unregulated Generative AI
October 4, 2023
As we explained in the first of our two-part series on privacy laws’ impact on Generative AI, although there are currently no AI-specific regulatory laws in the U.S., privacy laws and copyright laws govern how data can be used. In that article, we looked at how U.S. privacy laws govern Generative AI systems; here, we discuss the impact of European Union (EU) privacy laws on Generative AI.
EU Privacy Law
Unlike the U.S., the EU has the General Data Protection Regulation (GDPR) — often referred to as the “gold standard” for privacy law. The GDPR requires businesses and organizations to protect the personal data and privacy of individuals in the EU for transactions that occur within EU member states, and it provides individuals with significant control over their personal data, including rights of access, correction, and deletion. The GDPR applies both to companies located within the EU and to companies located outside of the EU that offer goods or services to, or monitor the behavior of, individuals in the EU.
Although not specifically designed to recall or retrieve personal information, AI systems trained on extensive datasets might unintentionally reproduce sensitive or personal details absorbed during training. Because the GDPR requires a lawful basis — often explicit consent — before an individual’s data is processed, any inadvertent processing of personal data by a large language model (LLM) may lack the required consent.
Additionally, the GDPR requires that data subjects be informed of how their data is being used, a transparency obligation that may be difficult to satisfy for AI models like GPT-4 because of their complex algorithms and large datasets. Lastly, the GDPR prioritizes “data minimization” — collecting only what is necessary for a stated purpose. The broad datasets used to train LLMs may lead to excessive data collection and generation, potentially violating this requirement.
Generative AI has already come under scrutiny for possible violations of the GDPR. OpenAI was recently required to update its privacy policy to comply with the GDPR, more clearly outlining how personal data is used in the development of its language models. The policy is now more prominently displayed during the signup process and has been supplemented with an age-confirmation mechanism. OpenAI also provides detailed information on user data controls, including the ability to export and delete ChatGPT data, and more insight into how user data enhances model performance. Users who prefer not to have their personal data used for training can now opt out via a new Chat History & Training setting.
Additionally, the launch of Google’s AI chatbot Bard in the EU has been delayed due to privacy concerns expressed by the Irish Data Protection Commission. The regulator argued that the information Google provided about Bard’s compliance with the GDPR was inadequate, which has spurred an ongoing examination and the suspension of Bard’s EU launch until comprehensive assessments and supplementary documentation have been submitted and approved by the Commission.
Conclusion
As Generative AI systems like ChatGPT and Bard continue to permeate various industries, their transformative potential is clear. Their success, however, hinges on navigating an increasingly complex legal landscape, marked by a lack of AI-specific regulations but stringent privacy laws. While AI continues to push the boundaries of technology, it must equally respect and adapt to the boundaries set by privacy laws.
If you have questions about legal developments related to AI or Privacy, please reach out to the head of LP’s Privacy Group.