
Memories AI Launches World's First Large Visual Memory Model, Raises $8 Million in Seed Funding

  • Writer: Menlo Times
  • Jul 25
  • 1 min read

Memories AI, developer of the world's first Large Visual Memory Model, which remembers visuals the way humans do, announced its launch along with an $8 million seed round led by Susa Ventures, with participation from Crane Venture Partners, Samsung Next, Fusion Fund, Seedcamp, and Creator Ventures. The company is led by Shawn Shen and Ben Zhou.

Memories AI introduces a breakthrough Large Visual Memory Model (LVMM) that stores and retrieves persistent visual memories, enabling advanced reasoning across vast video datasets. Unlike traditional systems limited to short video segments, LVMM compresses and indexes content into searchable memory structures—delivering instant answers from decades of footage or millions of social clips. This Google-like approach to video unlocks state-of-the-art performance and real-world applications in sports, commerce, and media intelligence.
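The compress-index-search pattern described above can be illustrated with a minimal sketch. Note that LVMM's actual internals are not public; the class, clip IDs, and bag-of-words "embeddings" below are all hypothetical stand-ins, chosen only to show how indexing clip descriptions once makes later queries fast and searchable.

```python
from collections import Counter
import math

def vectorize(text):
    # Hypothetical stand-in: lowercased word counts in place of a
    # learned visual embedding produced by a model like LVMM.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VisualMemoryIndex:
    """Toy searchable memory store: index once, query many times."""

    def __init__(self):
        self.entries = []  # list of (clip_id, vector)

    def add(self, clip_id, description):
        self.entries.append((clip_id, vectorize(description)))

    def search(self, query):
        # Return the clip whose stored vector best matches the query.
        qv = vectorize(query)
        return max(self.entries, key=lambda e: cosine(qv, e[1]))[0]

# Hypothetical clip IDs and descriptions for illustration only.
index = VisualMemoryIndex()
index.add("cam3_0412", "person leaves unattended bag on platform")
index.add("cam7_0930", "crowd exits stadium after match")
print(index.search("unattended bag"))  # prints cam3_0412
```

A production system would replace the word counts with dense visual embeddings and an approximate-nearest-neighbor index, but the shape of the workflow, ingest once into a compact searchable structure and then answer queries without rescanning the raw video, is the same.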


Memories AI's Large Visual Memory Model is delivering real-world impact across industries. In security, it enables instant search through months of surveillance footage for incidents like unattended bags. In media, studios rapidly locate specific scenes across vast archives. For brands, it analyzes millions of social videos to uncover trends and influencer activity. Work is also underway on AI hardware to embed persistent visual memory into next-gen devices. Across use cases, video archives are turning into powerful, searchable intelligence platforms.
