Confident Security just launched CONFSEC, a new encryption tool aimed at locking down AI data privacy for enterprises.
The San Francisco startup wants to be “the Signal for AI,” preventing user prompts and metadata from being stored or used for AI training, even by the AI provider itself.
The problem: AI giants like OpenAI and Google quietly retain user data to improve models or monitor for abuse, scaring off highly regulated sectors like healthcare and finance. CONFSEC promises to end that.
CONFSEC wraps AI models in end-to-end encryption modeled on Apple’s Private Cloud Compute system, routing traffic through Cloudflare or Fastly so the model provider never sees who sent a request. It then enforces strict rules on when data can be decrypted: no logging, no training, no third-party access. The AI inference software is also open for public review so those guarantees can be verified.
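To make that architecture concrete, here is a minimal sketch of the general pattern: client-side encryption plus an anonymizing relay. It is an illustration only, not Confident Security’s actual protocol; the use of PyNaCl, the `relay_forward` helper, and the key setup are all assumptions made for the example.

```python
# Illustration of the pattern described above, NOT Confident Security's
# real protocol. Requires PyNaCl: pip install pynacl
from nacl.public import PrivateKey, SealedBox

# The provider publishes a public key bound to attested, publicly
# auditable inference software (the part CONFSEC opens for review).
provider_key = PrivateKey.generate()

def client_encrypt(prompt: str) -> bytes:
    """Encrypt so only the attested inference node can read the prompt.
    A SealedBox also hides the sender's identity from the recipient."""
    return SealedBox(provider_key.public_key).encrypt(prompt.encode())

def relay_forward(ciphertext: bytes) -> bytes:
    """Hypothetical stand-in for the CDN relay (e.g. Cloudflare or
    Fastly): it sees the client's network address but only opaque
    ciphertext, so no single party sees both identity and content."""
    return ciphertext

def provider_decrypt(ciphertext: bytes) -> str:
    """Decryption would happen only inside the audited environment
    that enforces the policy: no logging, no training, no
    third-party access."""
    return SealedBox(provider_key).decrypt(ciphertext).decode()

ct = relay_forward(client_encrypt("Summarize this patient record..."))
print(provider_decrypt(ct))
```

The design choice mirrored here is the same one Apple’s Private Cloud Compute relies on: splitting who-you-are from what-you-asked, so neither the relay nor the provider can reconstruct both.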
CONFSEC came out of stealth with $4.2 million in seed funding from Decibel, South Park Commons, Ex Ante, and Swyx. The company aims to sit between AI vendors and clients like governments, hyperscalers, and enterprises.
CEO Jonathan Mortensen told TechCrunch: “The second that you give up your data to someone else, you’ve essentially reduced your privacy. And our product’s goal is to remove that trade-off.”
Mortensen says AI companies might also use CONFSEC to win enterprise business by reassuring customers their data stays private. New AI browsers like Perplexity’s Comet could benefit as well.
“Confident Security is ahead of the curve in recognizing that the future of AI depends on trust built into the infrastructure itself,” said Decibel partner Jess Leão.
CONFSEC is production-ready and externally audited, and the company is already in talks with banks, browsers, and search engines about embedding the tech.
Mortensen’s final pitch:
“You bring the AI, we bring the privacy.”