Shaowei Liu
@stevenpg8
CS PhD @IllinoisCDS | MSCS @ucsd_cse | BSEE @Tsinghua_uni
Glad to introduce our #ECCV2024 work: PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation Website: stevenlsw.github.io/physgen/ Paper: arxiv.org/abs/2409.18964 Code: github.com/stevenlsw/phys…… Poster: Oct. 2 10:30, #217 @RenZhongzheng @_saurabhg @ShenlongWang (1/5)
We are excited to share Cosmos-Drive-Dreams 🚀 A bold new synthetic data generation (SDG) pipeline powered by world foundation models, designed to synthesize rich, challenging driving scenarios at scale. Models, code, dataset, and toolkit are released. Website:…
🚀Excited to introduce GEN3C #CVPR2025, a generative video model with an explicit 3D cache for precise camera control. 🎥It applies to multiple use cases, including single-view and sparse-view NVS🖼️ and challenging settings like monocular dynamic NVS and driving simulation🚗.…
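The "explicit 3D cache" can be pictured as a point cloud lifted from the observed frames and re-rendered under whatever camera the user requests, so the video model is guided by consistent geometry. Below is a minimal NumPy sketch of that idea; it is not the GEN3C code, `unproject`/`reproject` are illustrative names, and z-buffering is skipped for brevity.

```python
# A minimal sketch (not the GEN3C code) of an explicit 3D cache:
# lift a frame's pixels into a 3D point cloud once, then reproject that
# cache under a new camera pose to get a geometry-consistent guide image.
import numpy as np

def unproject(depth, K):
    """Lift a depth map (H, W) into camera-space 3D points (H*W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T        # pixel -> camera-space ray
    return rays * depth.reshape(-1, 1)     # scale each ray by its depth

def reproject(points, colors, K, R, t, H, W):
    """Splat cached 3D points into a novel view with rotation R, translation t."""
    cam = points @ R.T + t                 # move points into the new camera frame
    keep = cam[:, 2] > 1e-6                # drop points behind the camera
    cam, colors = cam[keep], colors[keep]
    pix = cam @ K.T
    pix = (pix[:, :2] / pix[:, 2:3]).astype(int)
    img = np.zeros((H, W, 3), dtype=colors.dtype)
    inb = (0 <= pix[:, 0]) & (pix[:, 0] < W) & (0 <= pix[:, 1]) & (pix[:, 1] < H)
    img[pix[inb, 1], pix[inb, 0]] = colors[inb]  # nearest-pixel splat, no z-buffer
    return img
```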
Reward models that help real robots learn new tasks—no new demos needed! ReWiND uses language-guided rewards to train bimanual arms on OOD tasks in 1 hour! Offline-to-online, lang-conditioned, visual RL on action-chunked transformers. 🧵
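As a rough picture of how a language-guided reward can supervise RL without new demonstrations, the sketch below scores each observation by its similarity to the task instruction in a shared embedding space and uses that score as a dense reward. This is a hedged illustration, not the ReWiND model: `embed_text` and `embed_frame` are hypothetical stand-ins for pretrained encoders.

```python
# Hedged sketch of a language-conditioned dense reward (not the ReWiND code):
# reward each frame by its cosine similarity to the task instruction.
import numpy as np

def language_reward(frames, instruction, embed_text, embed_frame):
    """Return one reward per frame from instruction-frame embedding similarity."""
    goal = embed_text(instruction)          # e.g. "open the drawer"
    goal = goal / np.linalg.norm(goal)      # unit-normalize the goal embedding
    rewards = []
    for frame in frames:
        z = embed_frame(frame)
        rewards.append(float(z @ goal / np.linalg.norm(z)))
    return rewards
```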
Check out our PhysGen3D, which extends PhysGen to 3D. Try the deflate demo below 👇👇👇 Achieved by our amazing intern @boyuanchen21 and collaborators @jiang_hanxiao, Saurabh, @YunzhuLiYZ, Prof. Zhao, and @ShenlongWang
🚀 Introducing PhysGen3D – turn a single image into an interactive 3D world 🌍 From image ➡️ amodal 3D ➡️ physically grounded video You can control initial speed, mass, friction... and "imagine" what happens next 📄 Project page: by-luckk.github.io/PhysGen3D #3DVision #CVPR2025
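To make the "control initial speed, mass, friction" step concrete, here is a minimal sketch using PyBullet (an assumption for illustration; PhysGen3D's actual simulator may differ) that sets those parameters on an object and rolls the simulation forward to get the pose trajectory a renderer would animate.

```python
# Minimal PyBullet sketch (not the PhysGen3D code) of user-controllable physics:
# set mass, friction, and initial speed, then simulate to "imagine" what happens.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                                   # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")
box = p.loadURDF("cube_small.urdf", basePosition=[0, 0, 0.5])

# The user-controllable physical parameters named in the post.
p.changeDynamics(box, -1, mass=2.0, lateralFriction=0.4)
p.resetBaseVelocity(box, linearVelocity=[1.5, 0, 0])  # initial speed

trajectory = []
for _ in range(240):                                  # 1 s at the default 240 Hz
    p.stepSimulation()
    pos, orn = p.getBasePositionAndOrientation(box)
    trajectory.append((pos, orn))                     # poses to drive rendering
```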
Stop by our poster #217 tomorrow at 10:30 if you are at #ECCV2024; Prof. @ShenlongWang and Prof. @_saurabhg will present. This is how Shenlong did toy experiments at home🤣
The paper presents a novel image-to-video generation method called PhysGen that can convert a single image into a realistic, physically plausible, and temporally consistent video. The key idea is to integrate a model-based physical simulation with a data-driven video generation…
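To make that key idea concrete, a toy version of the model-based half might look like the Pymunk sketch below (my illustration, not the authors' code): a 2D rigid-body simulator turns an input force and torque into a physically plausible pose trajectory, which the data-driven stage would then render into video frames.

```python
# Hedged Pymunk sketch of the model-based simulation half (not PhysGen's code):
# simulate a rigid body under a user-given force and torque, record its poses.
import pymunk

space = pymunk.Space()
space.gravity = (0.0, -981.0)                       # downward gravity

mass, size = 1.0, (40, 40)
body = pymunk.Body(mass, pymunk.moment_for_box(mass, size))
body.position = (100, 200)
shape = pymunk.Poly.create_box(body, size)
shape.friction = 0.6
space.add(body, shape)

poses = []
for step in range(60):                              # 1 s of motion at 60 Hz
    if step < 10:                                   # brief push: Pymunk clears
        body.apply_force_at_local_point((500.0, 0.0), (0, 0))  # forces each step
        body.torque = 200.0                         # the paper's input condition
    space.step(1 / 60)
    poses.append((tuple(body.position), body.angle))  # trajectory for rendering
```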
Thank you AK @_akhaliq for featuring our work. Come visit stevenlsw.github.io/physgen/ to play with the interactive demos! Don't miss our Wednesday morning poster session at #217 if you are at #ECCV2024
PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation
We present PhysGen, a novel image-to-video generation method that converts a single image and an input condition (e.g., force and torque applied to an object in the image) to produce a realistic, physically…
Introducing: Opening Cabinets and Drawers in the Real World using a Commodity Mobile Manipulator We develop a system to open unseen cabinets and drawers *zero-shot* in novel environments using the Stretch RE2: arjung128.github.io/opening-cabine…