Nikos Kolotouros
@nikoskolot
Research Scientist @GoogleDeepMind working on Veo. Veo Ingredients (I/O 2025). CS PhD from @Penn.
We've been cooking! Super excited to share our latest work at @GoogleDeepMind. We launched Ingredients to Video with an amazing team: @tkipf @sserenazz @YuliaRubanova @acoadmarmon @philipphenzler Jieru @Roni_Paiss @ShiranZada @inbar_mosseri Available on labs.google/flow
There are so many things one can do with Veo Ingredients. Like for example, a 3D animation version of me holding this cute little G Wagon. It even has my name on the license plate 😁
Reference-powered video 🖼️ ➡️ 📹 Upload your own favorite assets, and generate videos that precisely match your creative aspirations.
To celebrate almost 1 year at Google, here's me wearing a Noogler hat! You've probably seen tons of #veo results, but it's pretty mind-blowing to see how far we've come 🔥
Amazing new capabilities 🚀 Super excited to be part of the reference and character control efforts!
Since launching Veo 2, we’ve built new capabilities and addressed a few pain points to help filmmakers and creatives. 📽️✨ Here’s a quick rundown. 🧵
We cooked up something exciting for you! 🧑🍳 Your vision, brought to life: transform any reference image(s) into videos exactly as you envision them, and even star in them yourself. This has been so much fun to work on with an amazing team: @tkipf, @sserenazz, @YuliaRubanova,…
Ingredients to Video is live, and eggs are one of them! 🍳 Try it out at labs.google/flow/about So proud of this amazing work from the team @tkipf @sserenazz @nikoskolot @YuliaRubanova @philipphenzler jieru@ @Roni_Paiss @ShiranZada @inbar_mosseri
Reference-powered video aka ingredients to video is now available at labs.google/flow/about! With Veo, you can always be a kid at heart❤️ Proud of the incredible team, it's been a blast :) @tkipf @YuliaRubanova @nikoskolot @philipphenzler @acoadmarmon jieru@ @Roni_Paiss…
Can’t wait for everyone to give this a try. 🔥This has been long in the making by a fantastic team. Bringing multiple characters, objects, scenes or literally anything you want into your generations with Veo’s stunning visual quality is a big unlock for visual storytelling,…
Reference-powered video You can now give Veo images of a scene, outfit, or object, and it will generate a full video aligned with your creative direction. You give the look, Veo gives you motion. 👇
Today, we’re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. 🎥 We’re also releasing an improved version of our text-to-image model, Imagen 3 - available to use in ImageFX through…
Happy to share the outcome of @AkashSengupta97's intern project done here at Google Research🇨🇭: DiffHuman: Probabilistic Photorealistic 3D Reconstruction of Humans 💻 akashsengupta1997.github.io/diffhuman/ 📄 arxiv.org/abs/2404.00485 📺 youtube.com/watch?v=C6PeP0… Accepted to #CVPR2024. 1/4
Google presents VLOGGER Multimodal Diffusion for Embodied Avatar Synthesis We propose VLOGGER, a method for audio-driven human video generation from a single input image of a person, which builds on the success of recent generative diffusion models. Our method consists of
⚠️ Applications for Research Intern 2024 positions at Google Research in Europe are now open: google.com/about/careers/… 🇨🇭 Come to Zurich!🇨🇭
Super excited to share that I will be starting as an Assistant Professor at UT Austin @UTCompSci in January 2024! 🥳🥳 I'm extremely grateful to my amazing mentors and colleagues for their unwavering support every step of the way! Looking forward to this exciting new chapter!
What can 3D and tracking offer to action recognition? Come to our #CVPR2023 poster this morning to find out and chat with us! people.eecs.berkeley.edu/~jathushan/LAR… Joint work with @brjathu, @akanazawa, @cfeichtenhofer and @JitendraMalikCV.