As companies race to deploy AI-powered chatbots, agents, and copilots, a fresh risk is surfacing: how to let employees and AI agents tap powerful AI tools without leaking sensitive data, violating compliance rules, or falling prey to prompt-based attacks.
At a Glance
- Witness AI closed a $58 million round to create what it calls “the confidence layer for enterprise AI.”
- The startup focuses on stopping data leaks, compliance breaches, and injection hacks as agent-to-agent traffic scales.
- Analysts project the AI security market could reach $800 billion to $1.2 trillion by 2031.
Why it matters: Enterprises need guardrails before autonomous AI systems start talking to one another without human oversight.
Witness AI aims to fill that gap. On the latest episode of News Of Philadelphia’s Equity podcast, Rick Caccia, the company’s CEO, told host Rebecca Bellan and Barmak Meftah, co-founder and partner at Ballistic Ventures, why the new funding is only the beginning.
The $58 Million Bet
Witness AI’s seed round, led by Ballistic Ventures, gives the startup runway to build controls that sit between large language models and corporate data. The platform logs, filters, and redacts prompts and responses in real time, letting security teams set policies on who, or what, can see sensitive information.
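The article doesn't describe how such a filtering layer is implemented, but the idea of redacting sensitive spans from prompts before they reach a model can be sketched in a few lines. The pattern names and placeholders below are illustrative assumptions, not Witness AI's actual rules:

```python
import re

# Hypothetical redaction rules; real products use context-aware detection,
# not just regexes, but the data flow is the same.
REDACTION_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace sensitive spans with placeholders; return the cleaned text
    plus the list of rule names that fired, for the audit log."""
    hits = []
    for name, pattern in REDACTION_PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED:{name}]", text)
    return text, hits

clean, hits = redact("My SSN is 123-45-6789, card 4111 1111 1111 1111.")
```

A prompt passing through this layer reaches the model with placeholders instead of raw values, while the rule names that fired land in the security team's log.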
Meftah, whose firm backed the round, said the need became obvious during customer interviews. “Every CISO we spoke with had the same three fears,” he noted on the show:
- Data exfiltration through clever prompts
- Compliance drift when agents store chat history
- Injection attacks that trick models into bypassing safeguards
Caccia added that traditional data-loss-prevention tools were never designed for generative AI. “They look for credit-card patterns or Social Security numbers,” he said. “They don’t understand context, intent, or the way LLMs can be coaxed into revealing source code or customer lists.”
Why the Market Could Top $1 Trillion
Forecasts cited in Witness AI’s pitch deck place the AI security sector between $800 billion and $1.2 trillion within seven years. The estimate counts spending on:
| Category | 2023 Spend | 2031 Projection |
|---|---|---|
| Cloud security for AI workloads | $45 B | $290 B |
| Model monitoring & guardrails | $12 B | $180 B |
| Identity & access controls | $30 B | $220 B |
| Compliance automation | $18 B | $160 B |
| Threat detection for ML pipelines | $9 B | $120 B |
| Managed security services | $26 B | $230 B |
Those totals assume AI workloads grow at a 52 percent compound annual rate and that regulators impose stricter audit rules on high-risk systems.
Caccia acknowledged the numbers sound lofty but argued they’re grounded in bill-of-materials conversations with Fortune 500 tech buyers. “When you price per-seat, per-workload, and per-model, you get to big figures fast,” he said.
Agents Talking to Agents
A core worry for Witness AI customers is the moment AI agents start collaborating without human sign-off. Picture one support bot querying a finance bot for refund data, then sharing results with a shipping bot to create a return label, all in milliseconds.
“Each hop is a potential leak,” Caccia warned. “If bot A can convince bot B to disable logging, you’ve lost the audit trail.”
The startup’s response is a policy engine that treats every agent-to-agent call like an API transaction. Requests are scanned for:
- Overly broad data scopes
- Prompts that request system-level instructions
- Attempts to override safety filters
- Tokens that match known secrets or PII patterns
Violations are blocked and escalated to human reviewers, preserving a decision log for later audits.
Early Customers & Roadmap
Witness AI is already running pilots with three unnamed Fortune 100 companies and ten mid-market firms, according to Caccia. Use cases include:
- Blocking source-code uploads to public Copilot sessions
- Masking patient data before it reaches medical summarization bots
- Enforcing region-based data residency for European staff
The $58 million will fund:
- Doubling the 27-person engineering team by year-end
- Expanding compliance coverage to SOC 2, ISO 27001, HIPAA, and FedRAMP
- Building partner integrations with AWS Bedrock, Azure OpenAI, and Google Vertex
- Launching a self-serve tier for smaller security teams
Meftah said Ballistic Ventures expects the product to reach general availability in Q1 2025, with pricing tied to monthly active users and the volume of tokens scanned.
Competitive Landscape
Witness AI enters a crowded field. Incumbents like Microsoft, Google, and AWS have rolled out basic model-use policies, while startups such as HiddenLayer, CalypsoAI, and Robust Intelligence offer overlapping features. Caccia argued Witness AI’s edge is context-aware filtering that works across multiple models and cloud providers without forcing customers to rewrite prompts.
“We don’t ask developers to change a single line of code,” he said. “Point your API calls at our proxy, and policies apply instantly.”
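Caccia's claim amounts to swapping one endpoint for another: traffic that would have gone straight to the model provider is routed through the proxy, which applies policy and forwards the call. The environment variable and proxy URL below are hypothetical placeholders, not documented Witness AI configuration:

```python
import os

# Placeholder endpoints for illustration only.
MODEL_ENDPOINT = "https://api.openai.com/v1/chat/completions"
PROXY_ENDPOINT = "https://ai-proxy.example.internal/v1/chat/completions"

def resolve_endpoint() -> str:
    """Route traffic through the policy proxy when one is configured;
    the rest of the application code stays unchanged."""
    return os.environ.get("AI_POLICY_PROXY", MODEL_ENDPOINT)
```

Because the change lives in configuration rather than application logic, this is what "without forcing customers to rewrite prompts" would look like in practice.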

Key Takeaways
- $58 million in fresh capital signals investor confidence that AI security will become a standard line item in enterprise budgets.
- The projected $800 billion to $1.2 trillion market hinges on rapid AI adoption and tougher compliance rules.
- Witness AI’s pitch centers on stopping data leaks and prompt attacks before they happen, especially when AI agents interact autonomously.
- Early traction with Fortune 100 pilots gives the startup room to compete against tech-giant incumbents, but execution and pricing will determine whether the projected market size translates to revenue.

