French court halts AI rollout over missing worker consultation
A company in France was ordered by a court to halt its deployment of AI tools in the workplace. The First Instance Court of Nanterre ruled on February 14, 2025, that the AI rollout had begun before the legally required consultation of the Works Council.
The firm claimed the AI applications were merely in a “pilot phase.” The court disagreed, finding this amounted to actual implementation, not mere testing.
The ruling underlines that employee representation rights can’t be ignored in workplace digital shifts. The court said the premature AI deployment caused a “manifestly unlawful disturbance” to the Works Council’s powers.
This sets a clear precedent for AI in French workplaces: companies must complete all required employee consultations before launching new technology.
The decision also connects to the new EU AI Act. That legislation sorts AI systems into four categories: prohibited AI practices, high-risk AI systems (HRAIS), general-purpose AI models (GPAIM), and low-risk AI.
Providers of high-risk AI systems must:
- Implement risk management systems
- Ensure data governance
- Keep technical documentation
- Guarantee transparency
- Enable human oversight
- Meet accuracy, robustness, and cybersecurity standards
- Run conformity assessments
- Cooperate with regulators
Providers of general-purpose AI models must maintain technical documentation, comply with EU copyright law, and publish summaries of the data used for training. Models posing systemic risks must additionally undergo model evaluations, implement risk mitigation measures, and report serious incidents.
Workers’ councils and strict EU rules mean AI deployments in France are far from plug-and-play. Companies rushing AI launches without proper procedures now face legal roadblocks.
Claude-Étienne Armingaud and Josefine Beil provide full legal details at Predictice.
> “The premature implementation of the AI tools constituted a ‘manifestly unlawful disturbance’ of the Works Council prerogatives.”
>
> French court ruling, February 14, 2025