Meta is refusing to sign the EU’s new AI code of practice just weeks before the bloc’s AI Act rules go live.
The voluntary code, published this month, sets rules for companies that develop general-purpose AI models. It requires them to publish and regularly update documentation about their AI tools, bans training on pirated content, and obliges them to honor content owners’ requests to opt their works out of training data.
Meta’s chief global affairs officer, Joel Kaplan, called the EU approach “over-reach” in a LinkedIn post.
Kaplan wrote:
“Europe is heading down the wrong path on AI.”
“We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
“…the law will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them.”
The AI Act bans “unacceptable risk” uses such as social scoring and cognitive behavioral manipulation. It also defines “high-risk” uses, including biometrics and facial recognition, as well as AI in education and employment. Providers of high-risk systems must register them and meet risk- and quality-management requirements.
Tech giants including Alphabet, Microsoft, and Mistral AI have pushed back against the EU rules, calling for delays. The European Commission isn’t budging.
On Friday, the EU published guidelines for AI model providers ahead of the August 2 enforcement date. Companies with general-purpose AI models already on the market, such as OpenAI and Meta, must be in full compliance by August 2, 2027.
The fight over Europe’s AI future is heating up fast.