Ruoshi Liu
@ruoshi_liu
Building better 👁️ and 🧠 for 🤖 @Meta
We present🌊AquaBot🤖: a fully autonomous underwater manipulation system powered by visuomotor policies that can continue to improve through self-learning to perform tasks including object grasping, garbage sorting, and rescue retrieval. aquabot.cs.columbia.edu more details👇
🥳
Check out this fantastic video highlighting my recent work on minimal sensing for orienting a solar panel...thank you so much @jbhuang0604 for such an excellent summary with top-notch animations!
Check out my friend @jklotz_’s new paper and @jbhuang0604’s amazing explanation video! You need solar panels to provide energy for AI training, so its contribution to AI is probably bigger than 99% of NeurIPS papers😜
In an era of billion-parameter models everywhere, it's incredibly refreshing to see how a fundamental question can be formulated and solved with simple, beautiful math. - How should we orient a solar panel ☀️🔋? - Zero AI! If you enjoy math, you'll love this!
Check out our CoRL workshop on world modeling and consider submitting your paper!
🤖🌎 We are organizing a workshop on Robotics World Modeling at @corl_conf 2025! We have an excellent group of speakers and panelists, and are inviting you to submit your papers with a July 13 deadline. Website: robot-world-modeling.github.io
In the words of Nelson Mandela: it always seems impossible until it’s done. My friends, it is done. And you are the ones who did it. I am honored to be your Democratic nominee for the Mayor of New York City.
Please join our RSS workshop on Multimodal Robotics with Multi-sensory Capabilities tomorrow!
How can we equip robots with superhuman sensory capabilities? Come join us at the RSS 2025 workshop on Multimodal Robotics with Multisensory Capabilities, June 21, to learn more. Featuring speakers: @JitendraMalikCV, Katherine J. Kuchenbecker, Kristen Grauman, @YunzhuLiYZ, @Boyiliee
Why am I loving this 🙂
I’m sorry, but I just can’t stand it anymore. This massive, outrageous, pork-filled Congressional spending bill is a disgusting abomination. Shame on those who voted for it: you know you did wrong. You know it.
I'm excited to share that I’ll be joining @UofMaryland as an Assistant Professor in Computer Science, where I’ll be launching the Resilient AI and Grounded Sensing Lab. The RAGS Lab will build AI that works in chaotic environments. If you would like to partner, please DM me!
Dear MAGA friends, I have been worrying about STEM in the US a lot, because right now the Senate is writing new laws that cut 75% of the STEM budget in the US. Sorry for the long post, but the issue is really important, and I want to share what I know about it. The entire…
Diffusion-based image editing for interpretability, scientific discovery, and more!
How do you edit images when words fail? Whether you know exactly what you want but can't describe it, or you're not even sure what changes to make yet? 🔬✨ Introducing DIFFusion — edit images with images to reveal insights in species, black holes, design, and medicine.
How can we train and apply world models that step towards modeling the physical world? Come join us at ICML 2025 workshop on Building Physically Plausible World Models to learn more from the top experts and share your own research and insights! physical-world-modeling.github.io
Join us at our ICML 2025 workshop on building physical world models!
We're organizing a workshop at ICML 2025 on building physically plausible world models! Come join us and our awesome speakers in exploring this exciting research direction, with applications to video generation, robotics, 3D reconstruction and more... 1/2
Tariffs are not the solution to the problem of US re-industrialization. Robots are. Invest in robot learning now.
Let's normalize the idea that academic labs do cool work as well. Here, 5 of 6 authors are from Purdue, yet somehow academic labs get zero credit. (This seems to be a universal issue, not specific to this particular post.)
NVIDIA has found a way to add camera physics to diffusion models, making it possible to generate consistent images with a different aperture, focal length, shutter speed, or color temperature.
If you drop your Valentine’s Day roses in water, you know how to pick them up😉
Happy Valentine's Day! 🌹 Enjoy a special Valentine's Day themed policy (sound on!) from the AquaBot team 👬❤️🦾 Visit aquabot.cs.columbia.edu to learn more about our recent ICRA publication!
Looking forward to hosting @ruoshi_liu tomorrow for a seminar on "Generative Computer Vision for the Physical World": 📌 FEB. 13, 2025 @ 10:30 am cse.engin.umich.edu/event/generati…
Please submit your papers to the CVPR 4D Vision workshop!
Really excited to put together this @CVPR workshop on "4D Vision: Modeling the Dynamic World" -- one of the most fascinating areas in computer vision today! We've invited incredible researchers who are leading fantastic work in various related fields. 4dvisionworkshop.github.io