Nomadic Emerges with $8.4M to Build Physical AI Training Platform for Robotics and AV Teams
- Karan Bhatia
- 3 hours ago
- 2 min read

Nomadic, a startup building a spatial intelligence layer for physical AI, founded by Mustafa Bal and Varun Krishnan, has announced $8.4M in funding led by TQ Ventures, with participation from Pear VC, Jeff Dean, and angel investors and executives from OpenAI and Google DeepMind.
Nomadic serves as a visual data engine for robotics and autonomous vehicle teams, enabling continuous monitoring of real-world systems while generating production-ready training data and edge-case libraries. The platform is already used by companies including Zoox, Mitsubishi Electric (Automotive America), and Zendar.
Physical AI’s Bottleneck: Video That Doesn’t Turn into Learning Fast Enough
Robotics and autonomy teams capture vast amounts of real-world video, but much of it remains underutilized: manually reviewed, inconsistently labeled, or stored in static archives.
As deployment scales, the bottleneck shifts from models and compute to understanding fleet behavior in real time and converting that insight into continuous learning.
Teams face two core challenges:
Monitoring: Understanding real-time fleet activity, including failures, regressions, and safety-critical events.
Training: Identifying the most valuable data, especially rare edge cases, hidden within massive video datasets.
“Most fleet data goes unreviewed because no human team can process it all, yet the rare edge cases are what matter most,” said Antonio Puglielli, VP of Engineering at Zendar. “Nomadic makes the full dataset usable, turning weeks of manual review into minutes so engineers can focus on improving models.”
Nomadic’s Solution: Monitoring + Structured Training Data
Nomadic transforms raw robotics and AV footage into training-ready data. Its platform analyzes thousands of videos simultaneously, automatically surfacing failures and key events, then converting them into structured, searchable datasets.
Instead of sitting in static files, video becomes a living dataset, enabling teams to validate perception, improve data quality, and prioritize training. The result is a faster path from real-world data to production models, without relying on large manual labeling teams.
Key Features
Automated event detection: Identifies critical moments without manual review.
Compliance analysis: Flags safety risks and operational violations.
AI-powered insights: Generates analysis and recommendations for events.
Video search: Finds similar events and patterns across datasets.
Natural-language analysis: Detects custom scenarios via simple queries.
Multi-sensor support: Ingests camera feeds, LiDAR, and logs in a single run.
“Teams are sitting on a goldmine of video and sensor data, but most of it never becomes a usable training signal,” said Mustafa Bal, Co-founder and CEO of Nomadic. “Nomadic brings world-class perception workflows to every robotics team, not just the largest AV labs.”
“Physical AI will be won by teams that learn fastest from real-world data,” added Andrew Marks, Co-founding Partner at TQ Ventures. “Nomadic provides the most actionable way to understand data and rapidly improve systems.”
Founding Team
Nomadic was founded by Harvard CS graduates Mustafa Bal (CEO) and Varun Krishnan (CTO), combining expertise in large-scale distributed systems to solve the data bottleneck in physical AI. Bal previously worked on AI/ML at Snowflake and Microsoft, contributing to DeepSpeed, while Krishnan is a U.S. Chess Master and former INFORMS Wagner Prize finalist.
The team includes engineers from Amazon, Snowflake, and IBM Research, with deep expertise in computer vision, large-scale AI optimization, and production machine learning.