Cybersecurity

The Next Generation of Securing AI: Why We Invested in Lumia

December 4, 2025
Amir Zilberstein

Managing Partner

Aviad Harell

Managing Partner

There’s a quiet but profound shift happening inside enterprises.

AI is no longer a single “chatbot project” on the side. Employees are using AI to draft contracts, summarize deals, analyze data, write code, and trigger actions across critical systems. At the same time, companies are beginning to rely on AI agents that don’t just suggest next steps, but actually carry them out on behalf of humans.

The productivity upside is enormous. So is the risk.

Most of the security and governance tools in large organizations were built for people and applications, not for AI systems that can read everything, remember everything, and act across multiple tools at once. 

That gap creates very concrete problems: sensitive data flowing into opaque AI services, agents initiating actions with little oversight, and organizations struggling to answer basic questions about where AI is used, what it sees, and what it does.

CISOs now live with a daily tension: they cannot afford to slow down the biggest productivity wave of this century, but they also cannot accept blind, uncontrolled AI usage touching their most sensitive data and systems.

Lumia was created to resolve that tension.

Lumia is a Team8 venture-creation company focused on AI Usage Control for employees and their agents. It is emerging from stealth with an $18M seed round backed by Team8 and a significant endorsement: Admiral Mike Rogers, former Director of the NSA, is joining as Chairman of the Advisory Board. The goal is straightforward—make broad AI adoption possible without losing visibility or control.

From AI experiments to an AI workforce

The first phase of enterprise AI was experimental. A small team tried a chat interface, a few developers tested code assistants, a pilot group used AI to summarize documents.

We are now in a different phase. AI is embedded in day-to-day work. Salespeople use AI add-ons inside their CRM. Lawyers draft language in browser-based assistants. Data and product teams rely on AI features inside analytics and collaboration tools. Developers split their time between traditional IDEs and AI-native environments. Many of these capabilities arrive as part of product updates, not as discrete, approved “AI tools.”

In parallel, agents are starting to act on behalf of users: creating tickets, sending messages, changing records, and interacting with internal and external APIs. From a risk perspective, they look less like tools and more like powerful identities operating at scale.

In this environment, simple questions suddenly become hard to answer: Which AI tools are being used? What data is being shared? What are agents actually doing, and under which permissions? Traditional SaaS discovery, network monitoring, and basic DLP were not designed for this world. Blocking everything is unrealistic; allowing everything blindly is not an option.

Lumia’s view is that this moment requires a new layer in the stack: a control plane that understands how AI is used, by people and by agents, and enforces the organization’s risk appetite without undermining the value AI creates.

Lumia’s approach: governing AI usage, not just AI tools

Lumia’s platform starts from the premise that AI usage must be understood in terms of content, context, and intent, not just point integrations.

From day one, Lumia is designed to work across applications, modalities, devices, and agents. Rather than chasing individual tools, it focuses on what actually happens: what data is sent, what task is being performed, what the AI system returns, and what action follows. The platform is already capable of deeply analyzing more than 5,000 AI-powered applications, interpreting those interactions to build a real picture of AI risk exposure.

On top of that visibility, Lumia introduces governance for both human employees and their AI agents. It understands actions, intent, and impact: which agent is doing what, with which permissions, across which systems; which employees are using which AI capabilities and for what purposes. That understanding is then used to enforce AI usage policies that align with business priorities, security standards, and privacy regulations.
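To make the idea of usage-centric policy concrete, here is a minimal, purely illustrative sketch of how such a rule might be expressed. Every name in it (the event fields, the verdicts, the rules themselves) is hypothetical and invented for this example; it does not reflect Lumia’s actual product or API, only the general shape of evaluating who is doing what, with which data, under which permissions.

```python
from dataclasses import dataclass

# Hypothetical, illustrative model only -- not Lumia's API.
# An "event" is one observed AI interaction by a person or an agent.

APPROVED_APPS = {"approved-assistant"}

@dataclass
class AIEvent:
    actor: str        # employee or agent identity
    actor_type: str   # "human" or "agent"
    app: str          # AI application involved
    data_labels: set  # sensitivity labels detected in the content
    action: str       # e.g. "prompt", "ticket.create", "record.update"

def evaluate(event: AIEvent) -> str:
    """Return "allow", "redact", or "block" for an observed AI interaction."""
    # Rule 1: agents may not take write actions involving regulated data.
    if (event.actor_type == "agent"
            and "regulated" in event.data_labels
            and event.action != "prompt"):
        return "block"
    # Rule 2: confidential content sent to unapproved apps is redacted,
    # not blocked outright, so work can continue.
    if "confidential" in event.data_labels and event.app not in APPROVED_APPS:
        return "redact"
    return "allow"
```

The point of the sketch is the shape of the decision, not the rules themselves: verdicts depend jointly on who is acting (human or agent), what data is involved, which tool is in play, and what action follows, rather than on a per-tool allow/block list.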

A key design choice is how Lumia is deployed. The platform integrates at the network and application edges to provide agentless coverage, augmenting existing IT and security infrastructure without requiring endpoint changes. In large organizations with diverse devices and complex environments, this practicality is essential.

The end result is not another “block/allow” gate in front of a single AI product. It’s a usage-centric control layer that lets CISOs say: this is how we want AI to be used in our company, and then have those rules applied consistently, even as tools, devices, and agents evolve.

As Omri Iluz, Lumia’s Co-Founder & CEO, puts it:

“The pressure on CISOs is huge – they cannot afford to be the ones pulling back the business on the greatest productivity boost in this century. However, AI introduces risks that the business just cannot afford. Lumia allows enterprises to adopt AI securely and responsibly. Allowing broad usage while putting seamless controls in place.”

AppleStorm: seeing the invisible

Lumia is not launching with theory alone; it is launching with a concrete example of the kind of risk it is built to address.

The company’s AppleStorm research, already widely cited, uncovered critical privacy flaws in Apple’s AI ecosystem. The team showed how Siri transmits sensitive user data and metadata—such as WhatsApp messages and location coordinates—to Apple servers without transparency or user control, despite strong claims of on-device processing.

AppleStorm triggered a necessary debate: what AI systems are actually doing with data, how much control users really have, and what this means when consumer devices and embedded AI features are used in enterprise environments.

For us, AppleStorm matters because it highlights how hard it is for organizations to understand the real behavior of AI systems they did not build, and because it showcases the kind of protocol-level investigation and traffic analysis Lumia is set up to do. Serious AI security requires this depth, not just policy wrappers around vendor promises.

How Lumia came together

Lumia’s story starts inside Team8, with a thesis and with a relationship.

For several years, Bobi Gilburd, former CTO of Unit 8200, led Team8’s thinking on AI security as its Chief Innovation Officer. His background spans elite signals intelligence, large-scale cyber operations, and national-level innovation, and he is a recipient of the Israel Defense Prize for his contributions to cybersecurity. He had a clear picture of how AI would change the security problem: from single models to distributed agents; from contained experiments to workforce-wide usage; from perimeter control to governance of intent and action.

In parallel, Omri Iluz was coming off his journey as Co-Founder and CEO of PerimeterX, a company that reached $40M in ARR, merged with HUMAN Security, and went on to become a unicorn. PerimeterX was built around understanding and defending against automated traffic at internet scale. That experience, translating a new class of risk into a scalable platform and a category-defining company, is rare and highly relevant to AI.

When Omri came to Team8 with the idea that became Lumia, it clicked quickly. He wanted to work on securing AI in a way that matched how organizations are truly adopting it. Bobi wanted to build a company that reflected a holistic view of AI security, from employees to agents, rather than a narrow feature or bolt-on.

Lumia is the result of that match: a venture-creation company where the thesis, the founding team, and the problem are tightly aligned from day one.

Looking ahead

Lumia’s $18M seed round will be used to build engineering and research, deepen integrations with leading AI ecosystems and enterprise infrastructure, and scale go-to-market programs with design-partner customers in financial services, technology, and other data-sensitive industries.

But the more important story is what Lumia represents: one of the very first companies built from the ground up to secure AI usage across both the workforce and the agents that increasingly work alongside it.

Agentic AI is where the world is headed. The organizations that succeed with it will be those that can say “yes” to AI, confidently, at scale, and with guardrails that reflect how they want their business to run.

We believe Lumia is building the platform that will make that possible.

Amir Zilberstein

Managing Partner

Amir Zilberstein is a Managing Partner at Team8, where he builds and invests in Cyber and Software Infrastructure businesses.

Aviad Harell

Managing Partner

Aviad Harell is a Managing Partner at Team8. He builds and invests in Cyber and Software Infrastructure companies.
