New frontier AI regulation proposal targets companies, not models or uses
A new approach to AI regulation targets the corporate entities developing the most powerful AI systems rather than individual models or their uses. The aim is to fix flaws that have surfaced in recent US and EU regulatory efforts.
The debate traces back to California’s SB 1047, a model-based bill that used a training-compute threshold (10^26 FLOPs) to define regulated "frontier" models. Critics argued the threshold was a blunt proxy that quickly became outdated, especially as newer techniques rely less on training compute and more on inference-time compute and reinforcement learning, factors SB 1047 could not track well.
Soon after SB 1047’s veto in September 2024, OpenAI released its o1 model, which leans heavily on inference-time compute and outperforms many larger models, exposing the weakness of compute-based triggers.
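A minimal sketch of the problem, assuming an SB 1047-style trigger (the FLOP figures for the two example models are invented for illustration, not taken from the bill): a fixed training-compute check can miss a reasoning-style model that gains its capability through inference-time compute.

```python
# Hypothetical illustration of a fixed training-compute trigger.
# The two example models below are invented for illustration.

TRAINING_FLOP_THRESHOLD = 1e26  # SB 1047's proposed compute trigger

def is_covered(training_flops: float) -> bool:
    """A model counts as 'frontier' only if its training compute crosses the line."""
    return training_flops >= TRAINING_FLOP_THRESHOLD

print(is_covered(2e26))  # True: a large conventional training run trips the trigger
print(is_covered(5e25))  # False: a reasoning model that spends its compute at
                         # inference time can match larger models yet stay uncovered
```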
Use-based regulation also falls short. Texas’ HB 1709 targeted AI users across many sectors with extensive reporting requirements. The law’s broad definitions risked saddling countless benign AI uses with heavy compliance costs, which could stifle innovation, centralize AI decision-making, and slow adoption.
The proposal builds on a detailed analysis arguing that entity-based regulation, triggered by company characteristics such as annual AI R&D spending, can better address AI risks. High spending thresholds (e.g., $1 billion per year) would focus regulation on a small group of leading firms while sparing startups and smaller companies from red tape.
Entity-based regulation would cover broad organizational practices, such as how companies manage safe training, insider threats, and the protection of algorithmic secrets, areas that model- or use-based rules miss. It recognizes that risks often arise from the overall business environment, not just from single models or isolated applications.
Potential triggers might combine R&D spend with model compute or cost, but the core idea is to oversee the risky practices of frontier AI developers as a whole rather than chasing ever-shifting model definitions.
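As a rough sketch of how such a combined trigger could work (the $1 billion figure comes from the proposal; the OR logic, field names, and example company are assumptions for illustration):

```python
from dataclasses import dataclass

RND_SPEND_THRESHOLD_USD = 1_000_000_000  # $1 billion annual AI R&D spend
MODEL_FLOP_THRESHOLD = 1e26              # optional secondary model-level check

@dataclass
class Developer:
    name: str
    annual_ai_rnd_usd: float
    largest_training_run_flops: float

def is_frontier_entity(dev: Developer) -> bool:
    """Entity-based trigger: the company qualifies as a whole, on R&D
    spending alone or via a large training run, so no single model
    definition has to keep pace with the technology."""
    return (dev.annual_ai_rnd_usd >= RND_SPEND_THRESHOLD_USD
            or dev.largest_training_run_flops >= MODEL_FLOP_THRESHOLD)

lab = Developer("ExampleLab", annual_ai_rnd_usd=2.5e9,
                largest_training_run_flops=8e25)
print(is_frontier_entity(lab))  # True: R&D spend alone crosses the entity threshold
```

The design point is that the entity-level criterion does the main work; the model-level check is at most a supplement, not the definition regulators must keep revising.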
The paper notes that entity-based rules could range from transparency requirements to intensive oversight akin to nuclear regulation. It also warns that while firms might try to evade thresholds through shell companies, legal tools such as corporate veil piercing can address this.
This strategy does not discard model- or use-based regulation but deemphasizes them in favor of targeting the business entities at AI’s cutting edge. The approach aims to help policymakers understand and manage risks before clear signs of harm emerge.
The authors conclude:
"We believe that the legal framework for frontier AI development should generally treat the characteristics of entities (or related entities acting in concert), rather than characteristics of models or uses, as its principal regulatory trigger."
Further research is needed on the precise mix of entity-, model-, and use-based rules, but entity-focused regulation is positioned as a key piece of safe frontier AI governance.
Source: Brookings Institution – Entity-based Frontier AI Regulation