The European Union has released the first draft of its guidelines for General-Purpose AI (GPAI) models. This 36-page document, which lays the groundwork for compliance under the EU AI Act, addresses transparency, risk management, and governance, with a focus on the ethical and legal challenges posed by powerful AI technologies.
Core Areas of Focus in the Guidelines
The draft focuses on four critical areas: transparency, copyright compliance, risk assessment, and technical/governance risk mitigation. Companies such as OpenAI, Google, Meta, and Anthropic, all of which develop GPAI models, are expected to comply with these guidelines. The draft underscores the importance of transparency, requiring AI developers to disclose the web crawlers used to gather training data, a response to growing concerns from copyright holders.
Additionally, the draft calls for robust risk assessments aimed at minimizing potential harm from cyber offenses, AI errors, and widespread discrimination. It recommends Safety and Security Frameworks (SSFs) to keep AI systems under control and to ensure risk-mitigation measures are continually reassessed for effectiveness.
Ensuring Accountability and Compliance
The guidelines also stress governance, urging companies to establish internal structures for ongoing risk assessment and to involve external experts when necessary. Companies that fail to comply with the AI Act can face significant fines of up to €35 million or 7% of their global annual revenue, whichever is higher.
Stakeholders have until November 28 to submit feedback on the draft through the Futurium platform, with the final version expected by May 1, 2025.
For more details, read the full article on Engadget.