Anthropic launched new AI models aimed at U.S. national security agencies. The “Claude Gov” models were built from direct government feedback to handle classified tasks like strategic planning, operational support, and intelligence analysis.
The company says these models are already in use by top-level national security agencies. Access is tightly restricted to classified environments, and the models underwent the same safety testing as Anthropic's public Claude versions.
Anthropic says the models handle classified information better, refusing less often to engage with sensitive data. They also offer improved comprehension of intelligence and defense documents, greater proficiency in languages critical to national security, and stronger analysis of complex cybersecurity data.
“These models are already deployed by agencies at the highest level of U.S. national security, and access to these models is limited to those who operate in such classified environments,” Anthropic wrote.
“[They] underwent the same rigorous safety testing as all of our Claude models.”
This comes after Anthropic’s partnership with Palantir and AWS to offer AI to defense customers, announced last November. The move puts Anthropic in the growing club of AI labs chasing government defense contracts.
OpenAI is also courting the Defense Department, Meta has made its Llama models available to defense partners, Google is tuning Gemini to run in classified settings, and Cohere has teamed with Palantir for defense AI deployments as well.
Anthropic is betting that dependable government revenue can fuel growth in a shifting AI landscape.