Bolei Zhou
@zhoubolei
Associate Professor at the Computer Science Department @UCLAComSci @UCLAengineering @UCLA
I will move to UCLA Computer Science early next year to continue my academic journey. I am grateful for the enormous support from many smart and energetic people as collaborators, colleagues, and peers. Looking forward to the exciting new chapter with @UCLAengineering @CS_UCLA
I think there should be an official satellite location in China, given that a large share of NeurIPS papers come from China and so many great Chinese researchers can't attend the conference due to US/Canada visa issues or the long travel distance.
For years, people have suggested holding NeurIPS at alternate locations outside the US. 2025 is the first time it will be done, with satellite locations in Copenhagen and Mexico City. I wonder if this will be a one-off thing, or whether the trend will continue.
Code is released!
🚀URBAN-SIM is released! A large-scale robot learning platform for urban spaces, built on NVIDIA Omniverse. Train robots at scale in rich, interactive city environments. 🔗 github.com/metadriverse/u… Key Features: ⚡️ High Efficiency: Thousands of FPS on a single GPU -- enabling…
I've officially become an Associate Professor with tenure at @UCLA @UCLAengineering as we kick off the new academic year on July 1! Deepest gratitude to my mentors, my amazing students, and wonderful collaborators. Incredible journey so far—more exciting research ahead! 🚀

#ICCV2025 Introducing X-Fusion: Introducing New Modality to Frozen Large Language Models. X-Fusion is a novel framework that adapts pretrained LLMs (e.g., LLaMA) to new modalities (e.g., vision) while retaining their language capabilities and world knowledge! (1/n) Project Page:…
Thank you for the invitation! It was a fun workshop.
And now, the magnificent @zhoubolei talks about Learning Safety-aware Agents from Human Feedback and Adversary Environments. #uncv2025 #cvpr2025
🚦 Excited to introduce Urban-Sim — our new simulator presented at #CVPR2025 as a highlight paper! ⚡️ Fast training with IsaacSim backend 🏙️ Diverse 3D assets for rich urban scenes 🤖 Towards generalizable robots in dynamic urban environments. Webpage: metadriverse.github.io/urban-sim/
🚀Join our #CVPR2025 2nd POETs Workshop --Embodied "Humans": Symbiotic Intelligence Between Virtual Humans and Humanoid Robots. We have super cool live demo sessions and an awesome lineup of speakers @UnitreeRobotics @GerardPonsMoll1, @pathak2206, Karen Liu, @chelseabfinn @psyth91…
📢 The first X-Sense Workshop: Ego-Exo Sensing for Smart Mobility at #ICCV2025! 🎤 We’re honored to host an outstanding speaker lineup, featuring Manmohan Chandraker, @BharathHarihar3, @wucathy, Holger Caesar, @zhoubolei, @Boyiliee, Katie Luo x-sense-ego-exo.github.io
#ICLR2025 My lab members, Zhizheng and Joe, will present our work on learning to generate diverse pedestrians by watching web videos this week in Singapore. Let's fill the ghost-town houses in simulations with people! Webpage: genforce.github.io/PedGen/ Code: github.com/genforce/PedGen
Our recent @CVPR 25 paper develops Vid2Sim, a method that turns a video captured by a mobile phone into an interactive environment represented by Gaussian Splatting, to train RL agents for urban navigation. The incredible @ZiyangXie_ led the project. Webpage: metadriverse.github.io/vid2sim/
Oh great, it is accepted by #ICLR2025 as Spotlight paper!
We release a new urban simulator, MetaUrban, to support research on AI agents for micromobility. The work will be presented at #ICLR2025, and the demo code can run on any laptop. Webpage: metadriverse.github.io/metaurban/ Code: github.com/metadriverse/m… Paper: arxiv.org/pdf/2407.08725
Grateful for the kind support from @USNavyResearch ! Amazing to work with the talented students in the lab to pioneer vision and autonomy.
Congrats to @UCLA Asst. Profs. Yuzhang Li of @UclaCBE & Bolei Zhou of @UCLAComSci on each receiving a Young Investigator Program Award from @USNavyResearch to support their work on next-gen batteries & computer vision, respectively. samueli.ucla.edu/yuzhang-li-and…
Amazing work done by Jordan and Sicheng. Jordan is still just a rising senior undergrad at UCLA! Always proud to work with talented students.
Ctrl-X was accepted to #NeurIPS2024! We present a guidance-free structure and appearance control method for any pre-trained diffusion model. Paper, code, and results: genforce.github.io/ctrl-x It was awesome collaborating with @sicheng_mo @BenKlingher Fangzhou Mu @zhoubolei :D
After the #NeurIPS2024 result came out yesterday, my poster repo started getting more stars😂. You are welcome to take my previous posters and pptx sources as references to create your awesome posters! github.com/zhoubolei/bole…

Nice chatting with Marco and Peter
Peter Ludwig talked with Marco Pavone (@stanford, @nvidia) and Bolei Zhou (@ucla) about the importance of advanced simulation and scenario generation tools for the development and validation of AD systems. #intersect24
youtube.com/watch?v=QKy2ZD… My talk recording at CVPR'24 WAD summarizes our effort to build the open-source platform MetaDriverse for AI research and mobility. The slides are available here too: drive.google.com/file/d/1YirXiL…
The official GraphCG @TmlrPub implementation is out! It enables semantic understanding and shape editing in #graph #GenerativeAI, like molecular graphs (functional group editing & lead optimization) and point clouds. paper: tinyurl.com/2rnpn349 code: tinyurl.com/dpkw4ykt
🔍 Next up at the 5th Workshop on Robot Visual Perception in Human Crowded Environments at #CVPR2024 is Bolei Zhou! Don’t miss his talk on "UrbanSim: Generating and Simulating Diverse Urban Spaces for Embodied AI Research" at 4:15 PM PST on June 18th. #AI #Robotics @CVPR