At a Glance
- Witness AI raised $58 million to build a “confidence layer” that lets companies deploy AI without leaking data or breaking compliance rules.
- The startup tackles prompt-injection attacks and rogue AI-to-AI chatter as agents proliferate inside corporations.
- Ballistic Ventures co-founder Barmak Meftah and Witness AI CEO Rick Caccia say AI security could become an $800 billion to $1.2 trillion market by 2031.
- Why it matters: Enterprises want AI productivity gains but fear sensitive data walking out the door through a chatbot prompt.
Companies racing to embed chatbots, copilots, and autonomous agents across workflows face a new headache: how to unleash powerful models without exposing customer data, tripping regulatory wires, or letting malicious prompts hijack internal systems. Witness AI aims to solve that exact problem and just banked $58 million to scale its platform.
The Confidence Layer
Witness AI brands its product as the “confidence layer for enterprise AI.” The software sits between employees, AI agents, and company data, inspecting every prompt and response for leaks, compliance violations, or injection attempts. The goal is to give security teams real-time visibility and policy enforcement without slowing down legitimate AI use.
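The article describes this layer only at that level of detail. As a rough sketch of where such a component sits, the snippet below wraps every model call in a screening hook that runs before the prompt goes out and again on the answer that comes back. The ConfidenceLayer class, the screen callback, and the stage labels are invented for illustration; they are not Witness AI's actual API.

```python
# Hypothetical sketch of a mediation layer between users/agents and a model.
# Names are invented for illustration; this is not Witness AI's real interface.
from typing import Callable, Optional

class ConfidenceLayer:
    def __init__(self,
                 call_model: Callable[[str], str],
                 screen: Callable[[str, str], Optional[str]]):
        self.call_model = call_model  # any chat-completion client
        self.screen = screen          # policy check: (text, stage) -> text, or None to block

    def complete(self, prompt: str) -> Optional[str]:
        safe_prompt = self.screen(prompt, "prompt")
        if safe_prompt is None:
            return None                            # blocked before the model ever sees it
        response = self.call_model(safe_prompt)
        return self.screen(response, "response")   # outbound answers screened too
```

Keeping the screening hook injectable is the point of such a design: the same gateway can enforce customer-defined guardrails regardless of which model sits behind it.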
Market Size and Timing
During a segment on News Of Philadelphia's Equity podcast, Rick Caccia, Witness AI's CEO, framed the opportunity: "Every C-suite wants AI yesterday, but CISOs are saying 'not so fast.'" Ballistic Ventures co-founder Barmak Meftah added that analyst models project the AI-security sector swelling to $800 billion to $1.2 trillion by 2031, driven by regulatory pressure and the sheer volume of agent-to-agent transactions happening without human eyes on them.
What Enterprises Fear
- Data leakage – an employee pastes customer records into a chatbot that retains and later regurgitates them.
- Compliance drift – regional AI laws evolve faster than internal policies, exposing firms to fines.
- Prompt injection – attackers hide malicious instructions inside everyday prompts to exfiltrate data or trigger unauthorized actions.
- Agent anarchy – autonomous agents negotiate and act on behalf of the company without audit trails.
How Witness AI Works
The platform deploys as a cloud-native gateway. All AI traffic routes through it, where natural-language models classify content against customer-defined guardrails. If a prompt violates policy (say, it contains personally identifiable information), the request is blocked or redacted before reaching the target model. Outbound answers are screened the same way. Continuous logging feeds compliance dashboards and supports forensic investigations.
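As a minimal illustration of that screening step, the sketch below uses regular expressions as stand-ins for PII detection and prompt-injection phrases, with block, redact, and log outcomes. The patterns, guardrail list, and logger are invented for the example; a real deployment would rely on LLM classifiers and customer-defined policies rather than hard-coded regexes.

```python
# Toy screening function matching the flow described above: check guardrails,
# block or redact offending text, and log every decision for audit purposes.
# All patterns and names here are illustrative, not Witness AI's implementation.
import logging
import re
from typing import Optional

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("confidence_layer")

# (stage it applies to, compiled pattern, action) -- stand-in for customer-defined guardrails
GUARDRAILS = [
    ("both",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "redact"),                   # SSN-like PII
    ("both",   re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "redact"),                # email addresses
    ("prompt", re.compile(r"(?i)ignore (all )?previous instructions"), "block"),  # injection phrase
]

def screen(text: str, stage: str) -> Optional[str]:
    """Return the (possibly redacted) text, or None if policy requires a block."""
    for applies_to, pattern, action in GUARDRAILS:
        if applies_to not in (stage, "both") or not pattern.search(text):
            continue
        if action == "block":
            audit_log.warning("%s blocked by rule %s", stage, pattern.pattern)
            return None
        text = pattern.sub("[REDACTED]", text)     # redact and keep evaluating other rules
        audit_log.info("%s redacted by rule %s", stage, pattern.pattern)
    return text
```

Plugged into the proxy sketched earlier, ConfidenceLayer(call_model, screen).complete(prompt) would drop injection attempts outright, forward redacted prompts, and leave the audit trail that feeds the compliance dashboards described above.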
Funding Round Details
The $58 million round was led by Ballistic Ventures, with participation from existing investors. The cash will accelerate engineering hires, expand go-to-market teams, and fund certifications for SOC 2, ISO 27001, and FedRAMP so Witness AI can sell into highly regulated industries.
Podcast Highlights
Rebecca Bellan, hosting the Equity episode, pressed the guests on whether enterprises will accept yet another middleware layer. Meftah responded that early adopters, mainly financial-services and health-care giants, already treat the capability as "non-negotiable infrastructure," not a nice-to-have. Caccia noted that Witness AI's early deployments show 30-40 percent reductions in security-team alert volume because policy violations are caught before they reach production models.
Competitive Landscape
While the segment did not name direct rivals, Caccia emphasized that generic API gateways or data-loss-prevention tools lack semantic understanding of AI interactions. Witness AI’s specialized large-language-model classifiers, he argued, give it an edge in accuracy and low-latency enforcement.
Customer Traction
The company declined to reveal exact customer counts but said “several Fortune 50” companies are in paid pilots. Average annual contract values exceed seven figures, indicating enterprises are willing to pay premium prices for specialized AI security rather than bolt on legacy tools.
Regulatory Tailwinds
With the EU AI Act phasing in through 2026 and U.S. agencies issuing rolling guidance, Meftah believes compliance budgets will keep ballooning. He pointed to language in the SEC’s proposed cybersecurity rules that could require firms to disclose AI-related incidents, further incentivizing proactive controls.
Key Takeaways
- $58 million in fresh capital positions Witness AI to ride the AI-security wave as enterprise adoption accelerates.
- The sector's projected trillion-dollar scale by 2031 underscores how critical data-security and compliance risks have become.
- Early customers report measurable drops in security alerts, hinting that specialized AI-governance tools deliver faster ROI than horizontal security stacks.
- As agents start negotiating with other agents sans humans, platforms like Witness AI could become as standard inside enterprises as firewalls once were.