How Manifold is Securing AI Agents on Endpoints

  • Writer: Karan Bhatia
  • 17 hours ago
  • 2 min read

Manifold, a company founded by Oleksandr Yaremchuk, Neal Swaelens, and Michael McKenna to build runtime security for AI agents on endpoints, has announced the close of an $8 million seed funding round. The round was led by Costanoa Ventures, with participation from Cherry Ventures, Rain Capital, and Modern Technical Fund, as well as notable angel investors including former Uber CSO Joe Sullivan and former Google DeepMind CISO Vijay Bolina.


The funding will accelerate development of Manifold’s agentic AI Detection and Response (AIDR) platform, built to protect enterprises from risks associated with expanding autonomous AI usage, while enabling employees to securely adopt agentic AI at scale.


The financing comes amid rapid AI adoption: 85% of developers already use coding agents such as GitHub Copilot, Claude Code, and Cursor, a trend now expanding into broader knowledge work.


Engineers represent a key blind spot: their typical activities (accessing codebases, running commands, and making API calls) often resemble malicious behavior, which leads organizations to relax endpoint controls for them. Autonomous coding agents now replicate these same actions with extensive access to production systems and pipelines, but with limited visibility or oversight.


As agent adoption spreads across roles, the attack surface expands, creating a growing need for enterprise visibility into what agents are deployed and how they operate.


“Developers today run coding agents with deep access to source code, production systems, and CI/CD pipelines, connected to a growing ecosystem of tools that lack oversight,” said Neal Swaelens, CEO and Co-founder of Manifold. “With agents like Claude Cowork and OpenClaw expanding to all knowledge workers, the risk is scaling rapidly. These agents don’t just assist; they act, and existing AI security tools weren’t built for this challenge.”


Manifold was founded by Neal Swaelens, Oleksandr Yaremchuk, and Michael McKenna, who bring deep expertise in AI security. Swaelens and Yaremchuk previously co-founded Laiyer AI, where they built LLM Guard, one of the most widely adopted open-source LLM firewalls.


The team came together after Laiyer AI’s acquisition by Protect AI (itself later acquired by Palo Alto Networks), having identified a key gap: first-generation AI security tools have not scaled to address autonomous AI agents that act, rather than merely generate responses.


The industry has responded to agentic AI by scaling up guardrails, gateways, and classifiers originally built for chatbots. But these tools focus on prompts and outputs at inference time, leaving activity beyond that boundary largely invisible.


Natural language classification also introduces limitations, generating false positives while failing to detect the real risks posed by autonomous agent actions.


Manifold provides real-time visibility into agent activity, tracking tools, system access, and actions while instantly flagging anomalies. It deploys quickly on existing infrastructure, with no need for new architecture, gateways, or proxies.


“There’s a limited window to define agentic security as a category,” said John Cowgill. “Endpoint agent security is emerging as a core layer of enterprise infrastructure, and this team’s experience building and scaling foundational AI security positions them to lead what comes next.”