OpenDriveLab
@OpenDriveLab
Official account for OpenDriveLab @hkuniversity and beyond. We do cutting-edge research in Robotics and Autonomous Driving. Email: [email protected]
Congratulations to our team on winning #CVPR2023 @CVPR Best Paper! We are humbled to be recognized by the community! This is a great incentive for us to keep on delivering good work!


📹Our #CVPR2025 workshop and tutorial recordings are now online! Big thanks to our incredible speakers! Watch all the sessions here 🔗 Workshop: youtube.com/playlist?list=… 🔗 Tutorial: youtube.com/playlist?list=… 🏟️But we’re not done yet - our workshop continues at #ICCV2025! And the…
🚀 New Video Alert! 🚀 📹 Video: youtu.be/Ah-xYnST0yw "FreeTacMan: Robot-free Visuo-Tactile Data Collection System for Contact-rich Manipulation" We are pleased to introduce FreeTacMan, a human-centric and robot-free visuo-tactile data collection system for high-quality…
#ICCV2025 DetAny3D: Detect Anything 3D in the Wild Can your 3D detector handle novel objects & unseen cameras from just a single image? DetAny3D can. 👁️ Monocular Input 🗂️ Box / Point / Text Prompts 🎯 Zero-shot Generalization 🌐 Works even on mobile snapshots & YouTube…
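For readers curious what a prompt-conditioned monocular 3D detection call could look like, here is a minimal sketch. All names and signatures below are hypothetical illustrations, not the actual DetAny3D API:

```python
# Illustrative sketch only: the wrapper and prompt types below are
# hypothetical and do not reflect the real DetAny3D codebase.
from dataclasses import dataclass

@dataclass
class BoxPrompt:      # 2D box hint in pixel coords (x1, y1, x2, y2)
    box: tuple

@dataclass
class PointPrompt:    # single pixel hint (u, v)
    point: tuple

@dataclass
class TextPrompt:     # open-vocabulary category name
    text: str

def detect_3d(image, prompt, intrinsics=None):
    """Hypothetical prompt-conditioned monocular 3D detection call.

    One RGB image plus one prompt (box / point / text) yields 3D boxes;
    intrinsics are optional, reflecting the claimed generalization to
    unseen cameras (e.g. mobile snapshots).
    """
    raise NotImplementedError("stand-in for the real model forward pass")

# Usage sketch: zero-shot query for a novel category on one image.
# boxes_3d = detect_3d(img, TextPrompt("electric scooter"))
```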

🚀The AgiBot World Challenge @ IROS2025 starts now! More details on opendrivelab.com/challenge2025/… Two Tracks 🤖Manipulation (online & onsite): Train models to tackle complex real-world tasks in diverse environments, such as microwave operation and supermarket packaging. The test…

🤔 How to reliably simulate future driving scenarios under a wide range of ego behaviors, especially for rare and non-expert ones? 😭 Challenge of data shortage: Non-expert data with hazardous actions are scarce and unsafe to gather in the physical world. Without such data in…
🚀 HERE WE GO! Join us at CVPR 2025 for a full-day tutorial: “Robotics 101: An Odyssey from a Vision Perspective” 🗓️ June 12 • 📍 Room 202B, Nashville Meet our incredible lineup of speakers: @shahdhruv_ @GuanyaShi @davsca1 @iamborisi @pathak2206 @akanazawa @du_yilun and I…
The IEEE / CVF Computer Vision and Pattern Recognition Conference @CVPR is being held soon at the Music City Center, Nashville TN, USA. Many members of the MMLab team at HKU @HKUniversity @hkudatascience will attend CVPR in person. Meet us on-site - we'd love to connect, chat,…

🚀 New Paper Alert! 🚀 FreeTacMan: Robot-free Visuo-Tactile Data Collection System for Contact-rich Manipulation We are pleased to introduce FreeTacMan, a human-centric and robot-free visuo-tactile data collection system for high-quality and efficient robot manipulation! 🤖✨…
🎬 Missed the live introduction of the AgiBot World Challenge? We've got the replay! Check youtube.com/watch?v=rFkeOA… #AutonomousGrandChallenge #IROS2025 #AgiBotWorld
🚀 #MTGS is now open-sourced! It leverages multi-traversal data for scene reconstruction with improved geometry, built on the nuPlan dataset and its extensive multi-traversal coverage. 📷 github.com/OpenDriveLab/M…
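As a rough illustration of the preprocessing a multi-traversal pipeline needs, here is a minimal sketch of grouping drive logs into multi-traversal sets by a shared location key. The data layout and field names are assumptions, not the MTGS or nuPlan devkit code:

```python
# Minimal sketch (not the MTGS code): collect drive logs that cover
# the same road block, keeping only blocks traversed more than once.
from collections import defaultdict

def group_traversals(logs):
    """logs: iterable of dicts with hypothetical keys 'log_id' and
    'block_id' (a road-block / tile identifier).
    Returns {block_id: [log_id, ...]} for multi-traversal blocks."""
    by_block = defaultdict(list)
    for log in logs:
        by_block[log["block_id"]].append(log["log_id"])
    return {b: ids for b, ids in by_block.items() if len(ids) > 1}

# Example: three logs, two of which cover the same block.
logs = [
    {"log_id": "a", "block_id": "blk-07"},
    {"log_id": "b", "block_id": "blk-07"},
    {"log_id": "c", "block_id": "blk-12"},
]
print(group_traversals(logs))  # {'blk-07': ['a', 'b']}
```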
🚀 Ready for the #IROS2025 challenge? We've got you covered! This briefing session includes everything you need: task, data, baseline, metrics & more. 🌍 Two identical sessions will run for different time zones. Don't miss it! [Asia/Europe] 28 May 2025, 17:00 (UTC+8)…
👏👏
【Recent Feature🥰】Prof Hongyang Li from HKU IDS (@OpenDriveLab) was interviewed by HKU Bulletin about the developments in autonomous driving led by his research team. While computers take the wheel, IDS researchers take the lead. Read more👉bulletin.hku.hk/cover-story-th…
🚗The technology behind embodied AI and autonomous cars has recently made huge strides. But how do computers take the wheel? Check out our newest article at bulletin.hku.hk/cover-story-th… Thanks for sharing! @HKUniversity @hkudatascience
💥 Forget slow autoregression and skip rigid full-sequence denoising! Nexus is a next-gen predictive pipeline for realistic, safety-critical driving scene generation. What’s new? ✅ Decoupled diffusion → fast updates, goal-driven control ✅ Noise-masking training → inject…
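A minimal sketch of the noise-masking idea, under the assumption that goal/anchor tokens are kept clean while the remaining scene tokens are corrupted and denoised. This is illustrative only, not the Nexus implementation:

```python
# Illustrative sketch (not the Nexus code): corrupt only the tokens
# selected by a mask, keep goal/context tokens clean, and train the
# denoiser to recover the corrupted part.
import torch

def noise_masked_step(model, tokens, mask, t, noise_schedule):
    """tokens: (B, N, D) scene tokens; mask: (B, N) bool, True = corrupt.
    t: (B,) diffusion timesteps; noise_schedule: per-step alpha_bar (T,)."""
    alpha_bar = noise_schedule[t].view(-1, 1, 1)          # (B, 1, 1)
    eps = torch.randn_like(tokens)
    noisy = alpha_bar.sqrt() * tokens + (1 - alpha_bar).sqrt() * eps
    # Decoupling: goal/anchor tokens stay clean, the rest are noised.
    x_in = torch.where(mask.unsqueeze(-1), noisy, tokens)
    eps_hat = model(x_in, t)                              # predict noise
    # Loss only on the corrupted positions.
    return ((eps_hat - eps)[mask]).pow(2).mean()
```

Keeping the goal tokens clean at every step is what allows goal-driven control: the denoiser always conditions on uncorrupted anchors while updating only the masked part of the scene.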
🏙Proud to support the advancement of autonomous driving in #Shanghai. As part of a collaborative initiative, we are honored to contribute to the city's innovation ecosystem alongside key stakeholders. #ShanghaiInnovation #SmartCity #AutonomousDriving
