From August 2, 2025, providers of general-purpose artificial intelligence (GPAI) models in the European Union must begin complying with certain provisions of the EU AI Act. Requirements include maintaining up-to-date technical documentation and summaries of training data.
The AI Act outlines EU-wide measures designed to ensure that AI is used safely and ethically. It establishes a risk-based approach to regulation that categorises AI systems based on their perceived level of risk to and impact on citizens.
While specific regulatory obligations for GPAI model providers begin to apply on August 2, 2025, a one-year grace period gives them until August 2, 2026, to come into compliance before facing any risk of penalties.
TechRepublic has prepared a simplified guide to what GPAI model providers should know for the upcoming deadline. This guide is not comprehensive and has not been reviewed by a legal or EU regulatory expert; providers should consult official sources or seek legal counsel to ensure full compliance.
What rules come into effect on August 2?
There are five sets of rules that providers of GPAI models must be aware of and follow as of this date:
Notified bodies
Providers of high-risk GPAI models must prepare to engage with notified bodies for conformity assessments and understand the regulatory structure that supports those evaluations.
High-risk AI systems are those that pose a significant threat to health, safety, or fundamental rights. They are either: 1. used as safety components of products governed by EU product safety laws, or 2. deployed in a sensitive use case, including:
- Biometric identification
- Critical infrastructure management
- Education
- Employment and HR
- Law enforcement
GPAI models
GPAI models can serve multiple purposes. These models pose “systemic risk” if the cumulative compute used to train them exceeds 10²⁵ floating-point operations (FLOPs), or if they are designated as such by the EU AI Office. OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini fit these criteria.
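The Act does not itself prescribe how to estimate training compute. As a rough illustration only, the sketch below applies the widely used 6 × parameters × training-tokens approximation for transformer training FLOPs (a common heuristic, not part of the Act) to check a hypothetical model against the 10²⁵ threshold; the parameter and token counts are made up for the example:

```python
# Rough check against the AI Act's 10^25 FLOP systemic-risk presumption.
# The "6 * N * D" rule and the example figures are illustrative assumptions,
# not anything the Act specifies.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute with the 6ND heuristic."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")   # ~6.30e24
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```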
All providers of GPAI models must have technical documentation, a training data summary, a copyright compliance policy, guidance for downstream deployers, and transparency measures regarding capabilities, limitations, and intended use.
Providers of GPAI models that pose systemic risk must also conduct model evaluations, report incidents, implement risk mitigation strategies and cybersecurity safeguards, disclose energy usage, and carry out post-market monitoring.
Governance
This set of rules defines the governance and enforcement architecture at both the EU and national levels. Providers of GPAI models will need to cooperate with the EU AI Office, European AI Board, Scientific Panel, and National Authorities in fulfilling their compliance obligations, responding to oversight requests, and participating in risk monitoring and incident reporting processes.
Confidentiality
All data requests made to GPAI model providers by authorities will be legally justified, securely handled, and subject to confidentiality protections, especially for IP, trade secrets, and source code.
Penalties
Providers of GPAI models will be subject to penalties of up to €35,000,000 or 7% of their total worldwide annual turnover, whichever is higher, for engaging in AI practices prohibited under Article 5, such as:
- Manipulating human behaviour
- Social scoring
- Facial recognition data scraping
- Real-time biometric identification in public
Other breaches of regulatory obligations, such as transparency, risk management, or deployment responsibilities, may result in fines of up to €15,000,000 or 3% of turnover.
Supplying misleading or incomplete information to authorities can lead to fines of up to €7,500,000 or 1% of turnover.
For SMEs and startups, the lower of the fixed amount or percentage applies. Penalties will consider the severity of the breach, its impact, whether the provider cooperated, and whether the violation was intentional or negligent.
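To illustrate how these ceilings interact, here is a minimal sketch of the higher-of/lower-of logic described above. The tier values come from the Act; the turnover figures are purely illustrative:

```python
# Fine ceilings under the AI Act: standard providers face the HIGHER of the
# fixed cap or a percentage of worldwide annual turnover; SMEs and startups
# face the LOWER of the two. Turnover figures below are illustrative.

def fine_ceiling(turnover_eur: float, fixed_cap: float, pct: float, is_sme: bool = False) -> float:
    """Return the maximum possible fine for a given penalty tier."""
    pct_cap = pct * turnover_eur
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

# Article 5 breach (prohibited practices): up to EUR 35m or 7% of turnover.
print(fine_ceiling(1_000_000_000, 35_000_000, 0.07))            # 70,000,000.0
# Same tier for an SME with EUR 10m turnover: the lower figure applies.
print(fine_ceiling(10_000_000, 35_000_000, 0.07, is_sme=True))  # 700,000.0
```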
How can GPAI providers determine whether they need to comply, and how can they ensure that they do?
The European Commission recently published the General-Purpose AI Code of Practice, a voluntary framework that tech companies can sign to demonstrate compliance with the AI Act. Google, OpenAI, and Anthropic have committed to it, while Meta has publicly refused to sign, in protest of the legislation in its current form.
The Commission plans to publish supplementary guidelines alongside the AI Code of Practice before August 2, clarifying which companies qualify as providers of general-purpose AI models and of general-purpose AI models with systemic risk.
When does the rest of the EU AI Act come into force?
The EU AI Act was published in the EU’s Official Journal on July 12, 2024, and took effect on August 1, 2024; however, various provisions are applied in phases.
- February 2, 2025: Certain AI systems deemed to pose unacceptable risk (e.g., social scoring, real-time biometric surveillance in public) were banned. Companies that develop or use AI must ensure their staff have a sufficient level of AI literacy.
- August 2, 2026: GPAI models placed on the market after August 2, 2025 must be compliant by this date.
Rules for certain listed high-risk AI systems also begin to apply to: 1. those placed on the market after this date, and 2. those placed on the market before this date that have undergone substantial modification since.
- August 2, 2027: GPAI models placed on the market before August 2, 2025, must be brought into full compliance. High-risk systems used as safety components of products governed by EU product safety laws must also comply with stricter obligations from this date.
- August 2, 2030: AI systems used by public sector organisations that fall under the high-risk category must be fully compliant by this date.
- December 31, 2030: AI systems that are components of specific large-scale EU IT systems and were placed on the market before August 2, 2027, must be brought into compliance by this final deadline.
A group representing Apple, Google, Meta, and other companies urged regulators to postpone the Act’s implementation by at least two years, but the EU rejected this request.