Aditya Grover
@adityagrover_
Co-founder @InceptionAILabs. AI Prof @UCLA. Denoising intelligence.
A few months ago, we started Inception Labs, a new generative AI startup with a rockstar founding team. At Inception, we are challenging the status quo for language generation. Our first results bring blazing fast speeds at 1000+ tokens/sec while matching the quality of leading…
Happy to share the tech report for our launch model! arxiv.org/abs/2506.17298 Stay tuned for more releases ;)
Since our launch earlier this year, we are thrilled to witness the growing community around dLLMs. The Mercury tech report from @InceptionAILabs is now on @arxiv with more extensive evaluations: arxiv.org/abs/2506.17298 New model updates dropping later this week!
Very cool and useful @MishaLaskin! Would be excited to try out the new agent.
Engineers spend 70% of their time understanding code, not writing it. That’s why we built Asimov at @reflection_ai. The best-in-class code research agent, built for teams and organizations.
Somewhat dissatisfying that the predominant usage of "world model" is restricted to videos. Beyond just photons, a true foundation model of the world should model arbitrary physics. Very excited to introduce PhysiX, co-led by my students @tungnd_13, @ArshKoneru, @li78658171.
🚀 Introducing PhysiX: One of the first large-scale foundation models for physics simulations! PhysiX is a 4.5B parameter model that unifies a wide range of physical systems, from fluid dynamics to reaction-diffusion, outperforming specialized, state-of-the-art models.
(1/6) Our work Reflect-DiT was accepted to #ICCV2025! Reflect-DiT allows the model to reflect on its past generations and textual feedback to self-correct and improve, extending reasoning to text-to-image generation.
Inception Labs has just launched the first diffusion language model publicly released for general chat. Mercury is a generalist language model with similar intelligence to OpenAI’s GPT-4.1 Nano that runs >7x faster than GPT-4.1 Nano on GPU hardware. This follows @InceptionAILabs'…
Diffusion has entered the chat. Since we launched Mercury Coder, one of the most frequent requests was to expand support for more applications. Our latest Mercury model from @InceptionAILabs brings the blazing-fast speeds & high quality of dLLMs to YOUR favorite application!
We’re excited to launch Mercury, the first commercial-scale diffusion LLM tailored for chat applications! Ultra-fast and efficient, Mercury brings real-time responsiveness to conversations, just like Mercury Coder did for code.
🥳 Excited to share that VideoPhy-2 has been awarded 🏆 Best Paper at the World Models Workshop (physical-world-modeling.github.io) #ICML2025! Looking forward to presenting it as a contributed talk at the workshop! 😃 w/ @clarkipeng @YonatanBitton Roman @adityagrover_ @kaiwei_chang…
Video generative models hold the promise of being general-purpose simulators of the physical world 🤖 How far are we from this goal❓ 📢Excited to announce VideoPhy-2, the next edition in the series to test the physical plausibility of generated videos for real-world actions. 🧵
We're presenting OmniFlow at CVPR 2025. Check out our work at Poster #241 (ExHall D) on Jun 14, 8-10am. Additionally, my advisor @adityagrover_ will give a talk about our recent works on multi-modal diffusion language models at the WorldModelBench workshop on June 12.
Introducing OmniFlow, a unified multi-modal foundational model for image, audio and text generation. It extends the MMDiT architecture of SD3 to new modalities using a novel multi-modal rectified flow formulation, achieving any-to-any generation. arxiv.org/abs/2412.01169 (1/n)
Announcing OpenThinker3-7B, the new SOTA open-data 7B reasoning model: improving over DeepSeek-R1-Distill-Qwen-7B by 33% on average over code, science, and math evals. We also release our dataset, OpenThoughts3-1.2M, which is the best open reasoning dataset across all data…
Thank you for the honor — truly an acknowledgment of the tireless efforts of all my students, mentors, collaborators, friends and family over the years!
Announcing the 2025 IJCAI Computers and Thought Award winner ✨Aditya Grover @adityagrover_, @InceptionAILabs @UCLA. Dr. Grover is honored for uniting deep generative models, representation learning & RL to advance scientific reasoning. Congratulations! ijcai.org/awards
Accelerating Diffusion LLMs via Adaptive Parallel Decoding "We therefore introduce adaptive parallel decoding (APD), a novel method that dynamically adjusts the number of tokens sampled in parallel." "Notably, Dream with APD surpasses the speed of autoregressive Qwen 7B and…
What a fun Mercury demo combining two of the most latency-sensitive applications: voice + code! A preview into truly unique experiences that will become viable with ultra-fast diffusion language models.
This is the fastest coding model in the world. You need to watch this 1 minute video to really experience what's possible. Speak to your computer and get working code in TWO seconds.