Alejandro Fontán
@AFontanVillcmp
Research Fellow at QUT
Lead Research Engineer (SLAM / state estimation). Building SLAM & spatial AI deployed in real environments. 🌟 We're hiring! 🌟 You'll design localisation & mapping algorithms, deploy real-world robotics solutions, and work with a top-tier team. 👉 Strong CV/SLAM/C++ skills 🏠…
Happy to be at ICVSS in Sicily this week. The theme this year is #SpatialAI!
Great talk by Andrew Davison on the journey from SLAM to Spatial AI at #ICVSS2025! A compelling argument for the power of dynamic, updatable 3D representations and their direct impact on the future of robotics and wearable devices. And nice insights for spatial AI researchers!
We’re open-sourcing 352GB of coral reef pics (13 sites, 90k pics) from Indonesia under CC-BY-4.0 🌏🪸 3D photogrammetry data to accelerate research/conservation, no strings attached 🤗 🔵 Why? Coral reefs are precious, beautiful, incredibly complex, and threatened ecosystems.…
Out now: VSLAM-LAB - A Unified Framework for Visual SLAM Benchmarking! Led by @AFontanVillcmp: linkedin.com/posts/michaelj… @jcivera @TobiasRobotics #SLAM #localization #opensource #benchmarking #research #robotics #computervision #VSLAMLAB @QUTRobotics
📢Plan to attend #RSS2025 in Los Angeles, California this June? Check out the list of exciting workshops! roboticsconference.org/program/worksh…
Part 2 of the SLAM handbook is out for public comments! Let us know what you think :-) The issue tracker on GitHub awaits! Link in comments.
🔍Looking for a multi-view depth method that just works? We're excited to share MVSAnywhere, which we will present at #CVPR2025. MVSAnywhere produces sharp depths, generalizes and is robust to all kinds of scenes, and is scale-agnostic. More info: nianticlabs.github.io/mvsanywhere/
MASt3R-SLAM code release! github.com/rmurai0610/MAS… Try it out on videos or with a live camera. Work with @eric_dexheimer*, @AjdDavison (*Equal Contribution)
Introducing MASt3R-SLAM, the first real-time monocular dense SLAM with MASt3R as a foundation. Easy to use like DUSt3R/MASt3R, from an uncalibrated RGB video it recovers accurate, globally consistent poses & a dense map. With @eric_dexheimer*, @AjdDavison (*Equal Contribution)
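For readers new to dense SLAM, here is a minimal, heavily simplified sketch of the idea the post describes: per-pair relative poses are chained into a globally consistent trajectory and per-frame points are fused into one world-frame map. This is NOT the MASt3R-SLAM codebase or its API; the network that predicts dense pointmaps and the matching-based pose estimation are replaced by synthetic stand-ins (`se3`, random point sets), so only the chaining/fusion step is illustrated. See github.com/rmurai0610/MAS… for the real system.

```python
# Toy sketch of pose chaining + map fusion (plain NumPy, synthetic data).
# In MASt3R-SLAM the relative poses and per-frame points come from MASt3R
# pointmap predictions; here they are made up so the example is runnable.

import numpy as np

def se3(rotation_z_rad, translation):
    """Build a 4x4 rigid transform: rotation about z plus a translation."""
    c, s = np.cos(rotation_z_rad), np.sin(rotation_z_rad)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = translation
    return T

def fuse_trajectory(relative_poses, local_pointmaps):
    """Chain relative poses and move each frame's points into the world frame."""
    poses = [np.eye(4)]                        # world-from-camera of the first frame
    world_points = [local_pointmaps[0]]
    for T_prev_curr, pts in zip(relative_poses, local_pointmaps[1:]):
        poses.append(poses[-1] @ T_prev_curr)  # accumulate the new frame's pose
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        world_points.append((poses[-1] @ pts_h.T).T[:, :3])
    return poses, np.vstack(world_points)

# Synthetic stand-ins for what the real system would estimate from images:
rng = np.random.default_rng(0)
rel_poses = [se3(0.05, [0.1, 0.0, 0.0]) for _ in range(4)]   # small forward motions
pointmaps = [rng.normal(size=(100, 3)) for _ in range(5)]    # per-frame "dense" points

trajectory, dense_map = fuse_trajectory(rel_poses, pointmaps)
print(len(trajectory), dense_map.shape)   # 5 poses, (500, 3) fused map
```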
Open-source code now available for MASt3R-SLAM: the best dense visual SLAM system I've ever seen. Real-time and monocular, and easy to run with a live camera or on videos without needing to know the camera calibration. Brilliant work from Eric and Riku.
Great to see Dorian Tsai on BBC News talking all things coral reef babies, restoration and robotics : linkedin.com/posts/michaelj… @QUTRobotics
- Is this the performing arts office? - ... - Is David there? Put him on.
#CVPR2025 papers done!!! What an epic effort by @AFontanVillcmp, Connor, @TobiasRobotics and @Somayeh_HS 👏👏👏 linkedin.com/posts/michaelj…
Upcoming special section on Visual SLAM in IEEE T-RO, with a great group of editors. The last Visual SLAM special issue was edited by @jneirap, @jleonardmit and me in 2008. It's worth reading our Guest Editorial to remember the state of research back then! doc.ic.ac.uk/~ajd/Publicati…
T-RO is accepting submissions for the Visual #SLAM special collection until December 15. Thank you to @jcivera, @giov_cioffi, @davsca1, @StefanLeuteneg1, Abhinav Valada, Teresa Vidal-Calleja, and @ghuangud for handling this special collection. ieee-ras.org/publications/t… #IEEERAS