Edward Miller
@TweetEdMiller
World models & contextual AI at @Meta Reality Labs Research. Previously co-founder & CEO @Scape, acquired by @Facebook.
After 5½ years, yesterday marked my last day at Meta. For those who may not know, I joined via the acquisition of my previous company, where we set out to build a continuously-updating 3D model of the world, enabling devices to understand where they are and what’s around them.…

An incredible use case for augmented wearables, one that provides undeniable value - giving people the freedom to navigate and explore the world, and to retain their independence. Hats off to our partner @Envision for helping make it happen.
We’re diving into some cutting-edge experiments with @Meta’s Aria Gen 2 glasses! Using on-device SLAM, spatial audio, and our new personal assistant @allydotme, we’re exploring how Aria Gen 2 could help people who are blind or have low vision navigate indoor spaces.
I imagine there will be several valuable tricks to be discovered with pre-computing the results to prompts that a user is predicted most likely to ask, similar to how Spotify pre-loaded tracks as users typed into the search bar to decrease latency.
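The Spotify-style trick above can be sketched as a small prefetcher: as the user types, kick off background computation for the prompts they are predicted most likely to finish with, so the final answer is already waiting. A minimal sketch, assuming a `predict` function (partial input → likely full prompts) and an expensive `answer` function (prompt → response); both names are hypothetical stand-ins, not any real API.

```python
from concurrent.futures import ThreadPoolExecutor


class PromptPrefetcher:
    """Toy sketch of latency-hiding by precomputing predicted prompts."""

    def __init__(self, predict, answer, top_k=3):
        self.predict = predict      # partial input -> ranked likely prompts
        self.answer = answer        # prompt -> response (the expensive call)
        self.top_k = top_k
        self.cache = {}             # prompt -> Future holding its response
        self.pool = ThreadPoolExecutor(max_workers=4)

    def on_keystroke(self, partial):
        # Start computing answers for the top predicted prompts in the
        # background; duplicates are skipped so typing more refines, not redoes.
        for prompt in self.predict(partial)[: self.top_k]:
            if prompt not in self.cache:
                self.cache[prompt] = self.pool.submit(self.answer, prompt)

    def respond(self, prompt):
        # If the prediction was right, the result is already (nearly) done;
        # otherwise fall back to computing on demand.
        fut = self.cache.get(prompt)
        return fut.result() if fut is not None else self.answer(prompt)
```

The design choice is the same one Spotify made: spend speculative compute on the few most probable completions to shave perceived latency, and accept that mispredictions simply fall through to the normal slow path.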
in the past week at @cluely, we've been kicking off our most ambitious project ever. the models of today are great at answering questions. the models at @cluely will be really good at predicting which questions you have. this is a fundamentally different user experience than…
Back in 2019, I remember @alexgkendall telling me Wayve wouldn't be the first to deploy self-driving in a city, but that it would be the first to deploy to 100 cities. Incredible to see what clarity of purpose and sharp execution can achieve over six years. Bravo @wayve_ai! 👏
90 cities. 90 days. 1 Wayve AI Driver 🌏 We’ve just finished the first legs of our global AI-500 roadshow - our most ambitious real-world test yet. 🚀 The goal: take a single AI driving model to 500 cities by the end of 2025. No retraining. No region-specific coding. Just one…
In 'The Computer for the Twenty-First Century', Mark Weiser wrote that the most profound technologies are those that disappear & weave themselves into the fabric of everyday life. 35 years later, the inverse is unfolding: *our lives* are being woven into the fabric of…
Starting today, memory in ChatGPT can now reference all of your past chats to provide more personalized responses, drawing on your preferences and interests to make it even more helpful for writing, getting advice, learning, and beyond.
Quite the journey from Light Blue Labs back in 2017! Huge congratulations to the entire team over at @wayve_ai 🏄‍♀️
Big news! @NissanMotor will launch its next-gen ProPILOT autonomous driving technology in FY2027, powered by Wayve AI Driver. Together, we’re excited to set a new standard in autonomous driving with embodied AI and advanced collision avoidance. The road ahead is AI-driven. 🚘🤖🚀…
Tank, I need a pilot training program for a B-212 helicopter. Hurry.
So we launched a thing. zapier.com/mcp
SceneScript encoder spotted in the wild 😍
SpatialLM just dropped on Hugging Face: a Large Language Model for Spatial Understanding
SceneScript has to be one of the most fascinating projects to have worked on. This new extension to the original paper lets an AI refine its 3D understanding of a space with just a glance. Basically, look at an area, say "fix this", and the model figures out what’s…
Check out our extension of SceneScript to human-in-the-loop local corrections! Our method leverages infilling techniques from NLP to refine a 3D scene in a "one-click fix" workflow, enabling more accurate modeling of complex layouts. 📰arxiv.org/abs/2503.11806…
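The infilling workflow described above can be pictured as a two-step data flow: the commands the user clicked on are replaced with a mask token, and a model proposes replacement commands conditioned on the surrounding sequence. A toy sketch of that flow only, assuming a SceneScript-style scene expressed as a list of command strings; `propose` is a hypothetical stand-in for the trained infilling model, not the actual method.

```python
MASK = "<mask>"


def mask_selected(commands, selected_ids):
    """Replace the scene commands the user flagged ("fix this") with a mask."""
    return [MASK if i in selected_ids else c for i, c in enumerate(commands)]


def infill(masked_commands, propose):
    """One infilling pass: each mask is replaced by the commands the model
    proposes, conditioned on the sequence reconstructed so far."""
    result = []
    for cmd in masked_commands:
        if cmd == MASK:
            result.extend(propose(result))  # model fills the gap in context
        else:
            result.append(cmd)
    return result
```

The appeal of the "one-click fix" framing is that the user never edits geometry directly: they only mark what is wrong, and the infilling model, seeing the intact commands on either side, regenerates a locally consistent replacement.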
MCP is a tiny concrete example of what people meant when they talked about the metaverse. Less about virtual worlds, more about the essential (boring?) infrastructure that quietly stitches the entire web into a single unified & intelligent interface.
Perplexity is taking the hard route—going full stack across hardware and software. The benefits are clear: deep integration of simulated interaction models (like OpenAI’s Operator) and full digital context (if the user opts in) for a more personal assistant. Remains to be seen…
Excited to be partnering with @deutschetelekom for a native Perplexity Assistant on their new AI Phone!
The first generation of Aria glasses has made a big impact in the research community; can't wait to see all the new possibilities these will unlock meta.com/blog/project-a…
Our team is proud to announce Aria Gen 2, the most advanced wearable sensor platform in the world. Following Orion's announcement last year, and Aria Gen 1 in 2020, Aria Gen 2 will help us solve challenges on the path to full AR glasses and contextual AI 😎
I’ve been finding Cursor dangerously addictive—especially in the early hours. I generally tell myself I’ll head to bed right after landing this feature… then spend the next hour muttering, “No, not like that.” 💀
🛑📢 HD-EPIC: A Highly-Detailed Egocentric Video Dataset hd-epic.github.io arxiv.org/abs/2502.04144 Newly collected videos with 263 annotations/min: recipe, nutrition, actions, sounds, 3D object movement & fixture associations, masks. A 26K VQA benchmark to challenge current VLMs 1/N
This is how the world ends isn't it? 🫤
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper…