Yulia Rubanova
@YuliaRubanova
Research Scientist on the Veo team at DeepMind. Veo Ingredients-to-Video (I/O 2025). Controllable video generation, learning physics in 3D, world models
Video, meet audio. 🎥🤝🔊 With Veo 3, our new state-of-the-art generative video model, you can add soundtracks to clips you make. Create talking characters, include sound effects, and more while developing videos in a range of cinematic styles. 🧵
Get into the zone with Flow. 🎬 It combines the best of our most advanced models Veo, Imagen and Gemini into 1️⃣ master filmmaking tool - helping you weave cinematic clips, dynamic scenes, and compelling narratives into stories with consistent results.
The Veo team is hiring!
Want to be part of a team redefining SOTA for generative video models? Excited about building models that can reach billions of users? The Veo team is hiring! We are looking for amazing researchers and engineers, in North America and Europe. Details below:
Greenfield (@GDMGreenfield), our in-house creative team, is now on X! Follow them to learn about the latest tips and tricks for Veo 3 and our other generative models.
Greenfield prompt tip of the day: Add excitement to your shots with Veo 3 camera controls! You can use multiple camera controls within a single prompt by stringing together a sequence of camera movements. For example, you can try combining effects over time: "The camera position…
TLDR: Given 1-3 reference images and a text prompt, you can use Veo to compose them in a video. Here's me in a few weeks from now in Greece:
Ingredients to Video is live, and eggs is one of them! 🍳 Try it out at labs.google/flow/about So proud of this amazing work from the team @tkipf @sserenazz @nikoskolot @YuliaRubanova @philipphenzler Jieru @Roni_Paiss @ShiranZada @inbar_mosseri
We've been cooking! Super excited to share our latest work at @GoogleDeepMind. We launched Ingredients to Video with an amazing team: @tkipf @sserenazz @YuliaRubanova @acoadmarmon @philipphenzler Jieru @Roni_Paiss @ShiranZada @inbar_mosseri Available on labs.google/flow
Can’t wait for everyone to give this a try. 🔥This has been long in the making by a fantastic team. Bringing multiple characters, objects, scenes or literally anything you want into your generations with Veo’s stunning visual quality is a big unlock for visual storytelling,…
Reference-powered video You can now give Veo images of a scene, outfit, or object, and it will generate a full video aligned with your creative direction. You give the look, Veo gives you motion. 👇
Brush🖌️ is now a competitive 3D Gaussian Splatting engine for real-world data and supports dynamic scenes too! Check out the release notes here: github.com/ArthurBrussee/…
Do video models really understand physics? Not yet. It is important to probe video models for physical understanding before we can use them as world models. A shout-out to our 🇨🇦 Toronto team 🇨🇦 for great work!
Do generative video models learn physical principles from watching videos? Very excited to introduce the Physics-IQ benchmark, a challenging dataset of real-world videos designed to test physical understanding of video models. Webpage: physics-iq.github.io
[1/4] Ever wondered what it would be like to use images—rather than text—to generate object and background compositions? We introduce VisualComposer, a method for compositional image generation with object-level visual prompts.
Happy holidays all! Here's some #Veo2 inspired hydraulic press physics...
Impressive work on generating 3D scenes with such a sharp level of detail, all by leveraging a video model. Crazy to think how far video models have come, to the point that we can use them to get coherent 3D worlds
Wonderland: Navigating 3D Scenes from a Single Image Contributions: • First, we introduce a representation for controllable 3D generation by leveraging the generative priors from camera-guided video diffusion models. Unlike image models, video diffusion models are trained on…