The European Union’s Artificial Intelligence Act began its phased rollout on August 1, 2024, aiming to regulate AI use across the bloc’s 27 member states. It affects AI developers and users both inside and outside the EU, setting risk-based rules for AI systems sold or used in the bloc.
The first major deadline hit on February 2, 2025, banning certain AI practices outright, such as untargeted scraping of facial images from the internet or CCTV footage to build facial-recognition databases. Most remaining rules take effect on August 2, 2026.
On August 2, 2025, the Act began applying to general-purpose AI (GPAI) models, the kind built for a wide range of tasks. The most capable of these are deemed to pose “systemic risks,” such as easing the development of weapons or the loss of control over autonomous AI systems.
Providers such as Anthropic, Google, Meta, and OpenAI received official compliance guidelines, but those whose models were already on the market before August 2, 2025, have until August 2, 2027, to fully comply. Models released after that date must comply from launch.
The law carries heavy penalties. Violations involving banned AI practices can cost up to €35 million or 7% of global annual turnover, whichever is higher. GPAI providers risk fines of up to €15 million or 3% of turnover, again whichever is higher.
Some tech giants are pushing back. Meta refused to sign the voluntary GPAI code of practice, calling the EU’s approach “overreach” and warning of legal uncertainty.
Google signed the code but expressed concern over its potential impact on Europe’s AI progress.
Meta’s Joel Kaplan posted on LinkedIn that “Europe is heading down the wrong path on AI,” arguing that the code of practice “introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
European AI companies are uneasy too. Mistral AI CEO Arthur Mensch joined other executives in urging Brussels to “stop the clock” on AI Act obligations for two years.
In July 2025, the EU rejected those calls for a delay, confirming it would stick to its schedule.
This phased, risk-based framework aims to balance innovation with protections for privacy, safety, and fundamental rights, but the next two years will test how tech companies adapt under its weight.