
Origin Lab Announces $8M in Seed Funding Led by Lightspeed

  • Writer: Karan Bhatia
  • 2 hours ago
  • 2 min read

Origin Lab, a platform and technology company that captures, creates, and delivers premium, rights-cleared multimodal content for AI training, has raised an $8M seed round led by Lightspeed Venture Partners, with participation from Eniac Ventures, FPV Ventures, Seven Stars, and SV Angel. Led by Anne-Margot Rodde, Antoine Gargot, and Colin Carrier, the company also drew angel investors from robotics, AI, gaming, and Google, including Twitch co-founder Kevin Lin and Cruise founder Kyle Vogt.


For decades, game studios have built highly detailed interactive environments with physics, persistent state, and both scripted and emergent behavior, where actions carry consequences.


Originally designed for players, these worlds also contain the kind of structured data that can help the next generation of AI systems learn how complex environments behave.


The funding is focused on building the pipeline between game studios and AI labs training next-generation world models, video generation systems, and embodied agents.


Where AI Is, and What It Needs


Frontier AI is moving beyond language and static images toward systems that understand motion, physics, spatial structure, and cause-and-effect, enabling world models, video generation, robotics, and embodied agents. All of these are constrained by the same bottleneck: the quality, structure, and provenance of training data.


Scraped video provides pixels, but not the underlying state, actions, or physics, and often lacks usable licensing.


The focus is now shifting to a harder question: how to source rich, structured, legally usable data at scale for training the next generation of models.


The Answer Was Hiding in Game Worlds


Game engines already produce the kind of data world models need by design. Each frame carries structure, actions map to inputs, collisions reflect physics state, and scenes include rich metadata. Player sessions effectively capture decision-making in consequence-driven environments. None of this exists in scraped video, but all of it exists in a running engine.


The gap was never supply. It was the lack of a technical and commercial bridge to turn game engines into usable, auditable datasets for AI training.


What We Built


Origin Lab builds a pipeline connecting game engines to frontier AI systems.


It works with publishers to license content at the source and capture engine-level data (inputs, video, physics, camera paths, and scene state), fully structured and time-aligned through a single API.
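To make "structured and time-aligned" concrete, here is a minimal sketch of what a per-frame engine capture record could look like. All field names and the schema itself are hypothetical illustrations; Origin Lab's actual API and data format are not public.

```python
from dataclasses import dataclass, field

@dataclass
class FrameSample:
    """One hypothetical capture record: every modality keyed to one engine clock."""
    t: float                    # engine time in seconds
    inputs: dict                # controller/keyboard state at this frame
    camera_pose: tuple          # e.g. (x, y, z, yaw, pitch, roll)
    physics: dict               # e.g. per-entity velocities, contact events
    scene_meta: dict = field(default_factory=dict)  # object IDs, materials, etc.

def align(samples: list[FrameSample]) -> list[FrameSample]:
    """Sort samples by engine time so inputs, video, and physics share one timeline."""
    return sorted(samples, key=lambda s: s.t)

frames = align([
    FrameSample(t=0.033, inputs={"jump": True},
                camera_pose=(0, 1, 0, 90, 0, 0), physics={"vel": (0, 3, 0)}),
    FrameSample(t=0.0, inputs={},
                camera_pose=(0, 1, 0, 90, 0, 0), physics={"vel": (0, 0, 0)}),
])
print([f.t for f in frames])  # [0.0, 0.033]
```

The point of the sketch is the design constraint, not the specific fields: because every modality is stamped against the same engine clock, a downstream model can line up an input, the resulting physics state, and the rendered pixels for the same instant, which scraped video cannot provide.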


With 20+ publishers across 50+ titles and a frontier AI lab partnership, it is building “Artificial World Intelligence®”: structured datasets for models that simulate and interact with complex environments.


Research


Origin Lab is also building an applied research layer across generative world-building, synthetic player modeling, scene intelligence, world value scoring, and automated data quality.


In collaboration with academic and industry partners, including researchers at the University of Oxford, the company is developing joint work in parallel with its platform.


The research and infrastructure are tightly coupled, each reinforcing the other.


How We Built It


Origin Lab is designed around alignment between game studios and AI labs.


Game studios participate in the value they create, while AI researchers get licensed, auditable, structured data they can trust.


Consent, attribution, revenue sharing, and usage tracking are built into the pipeline from the start, embedded as the system’s core operating model rather than added as a legal layer.

Menlo Times is a global media platform covering AI, Deeptech, Venture Capital, Fintech, Robotics, and Security through news, analysis, and insights from founders and operators.
© 2026 Menlo Times. All rights reserved.