Dylan Campbell
@dylanjcampbell_
Lecturer @ANUComputing @ANUCECC working on computer vision and machine learning. Previously, postdoc @Oxford_VGG and PhD @CSIRO/@ourANU/NICTA.
Single-view scene reconstruction in a flash! Augment your favourite monocular depth estimator by predicting multiple Gaussians per pixel, revealing the structure of unseen regions. Thanks to @StanSzymanowicz, @EldarIsTyping, @ChuanxiaZ, Joao Henriques, @chrirupp & Andrea Vedaldi
Feed-forward 3D Gaussians from @Oxford_VGG strike again! Flash3D has now been accepted to 3DV 2025: it is a method for feed-forward single-view 3D scene reconstruction. Project page: robots.ox.ac.uk/~vgg/research/… Code: github.com/eldar/flash3d Arxiv: arxiv.org/pdf/2406.04343 A 🧵👇
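For intuition, here is a minimal sketch of the idea in the tweet above: a lightweight head on top of a monocular depth backbone that predicts several Gaussians per pixel, with later Gaussians placed behind the first so the model can hypothesise occluded structure. The module name, channel sizes, and parameterisation are my own illustrative assumptions, not the Flash3D implementation.

```python
# Minimal sketch (not the authors' code): a per-pixel head that turns
# monocular-depth features into K Gaussians per pixel. All names and
# channel sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class MultiGaussianHead(nn.Module):
    def __init__(self, feat_dim=64, k=2):
        super().__init__()
        self.k = k
        # Per Gaussian: depth offset (1), opacity (1), scale (3),
        # rotation quaternion (4), colour (3) = 12 channels.
        self.head = nn.Conv2d(feat_dim, k * 12, kernel_size=1)

    def forward(self, feats, base_depth):
        # feats: (B, C, H, W) features from a frozen depth network
        # base_depth: (B, 1, H, W) predicted monocular depth
        params = self.head(feats).unflatten(1, (self.k, 12))
        d_off, opacity, scale, rot, colour = params.split([1, 1, 3, 4, 3], dim=2)
        # Push later Gaussians behind the first, letting the model
        # represent structure occluded in the input view.
        depth = base_depth.unsqueeze(1) + torch.cumsum(torch.relu(d_off), dim=1)
        return (depth, opacity.sigmoid(), scale.exp(),
                nn.functional.normalize(rot, dim=2), colour.sigmoid())
```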
Call for papers: Australasian Joint Conference on Artificial Intelligence (#AJCAI2025)
Dates: 1-5 December 2025
Location: Canberra, Australia
Paper Submission: 15 July 2025 AoE
Website: ajcai2025.org
A good opportunity to visit Australia!
Call for papers: The 26th International Conference on Digital Image Computing: Techniques and Applications (@dicta2025)
Dates: 3-5 December 2025
Location: Adelaide Convention Centre, Adelaide, Australia
Paper Submission: 15 July 2025 AoE
Website: dicta2025.dictaconference.org
Congratulations @jianyuan_wang and the VGGT team for winning the #CVPR2025 best paper award - fantastic work!
Many Congratulations to @jianyuan_wang, @MinghaoChen23, @n_karaev, Andrea Vedaldi, Christian Rupprecht and @davnov134 for winning the Best Paper Award @CVPR for "VGGT: Visual Geometry Grounded Transformer" 🥇🎉 🙌🙌 #CVPR2025!!!!!!
The Binocular Egocentric 360° workshop is coming to #ICCV2025! We invite you to participate in the Classification and Temporal Action Localization Kaggle challenges on this unique and uniquely human data (panoramic + binocular egocentric video + audio + binaural delay).
🚀 Exciting news! The BinEgo‑360 Workshop & Challenge is coming to #ICCV2025 @ICCVConference! 🌍🎥 We invite you to:
📜 Present your work
🏆 Participate in the Challenge (win a 🌎360 camera!)
💡 360°, ego, multi-modal
🗓️ Challenge DDL: 6th July
🔗 Details: x360dataset.github.io/BinEgo-360/
A fantastic opportunity with the best supervisors:
Joao Henriques (joao.science) and I are hiring a fully funded PhD student (UK/international) for the FAIR-Oxford program. The student will spend 50% of their time @UniofOxford and 50% @AIatMeta (FAIR), while completing a DPhil (Oxford PhD). Deadline: 2nd of Dec AOE!!
Very cool demo from Stan: single image to 3D (Gaussian representation), extremely fast.
Announcing The Splatter Image demo🤗 It works on *any* object and is *super* fast - around ~2s per object in Gradio! Trained on 2 GPUs in 3.5 days 🚀 huggingface.co/spaces/szymano… See 🧵👇 for more results and updates since the original release! Project: szymanowiczs.github.io/splatter-image
SCENES: finetuning correspondence estimators (e.g., LoFTR/Matchformer) with pose-only (or no!) supervision @3DVconf
SCENES: Subpixel Correspondence Estimation With Epipolar Supervision @DominikKloepfer, João Henriques, @dylanjcampbell_ tl;dr: a classification + regression epipolar loss for finetuning LoFTR and friends. robots.ox.ac.uk/~vgg/publicati…
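For readers wondering how pose-only supervision can work: a predicted match can be scored by its distance to the epipolar line induced by the ground-truth relative pose, so no pixel-level correspondence labels are needed. The snippet below is a generic symmetric epipolar-distance term under assumed inputs, not the exact SCENES loss (which combines classification and regression variants).

```python
# Illustrative sketch of pose-only epipolar supervision (not the exact
# SCENES loss): penalise the distance from each predicted match to the
# epipolar line induced by the ground-truth fundamental matrix.
import torch

def epipolar_distance(x1, x2, F):
    """x1, x2: (N, 2) matched points in the two images; F: (3, 3) fundamental matrix.
    Returns the symmetric point-to-epipolar-line distance per match."""
    ones = torch.ones(x1.shape[0], 1, device=x1.device)
    p1 = torch.cat([x1, ones], dim=1)           # (N, 3) homogeneous points, image 1
    p2 = torch.cat([x2, ones], dim=1)           # (N, 3) homogeneous points, image 2
    Fp1 = p1 @ F.T                              # epipolar lines in image 2
    Ftp2 = p2 @ F                               # epipolar lines in image 1
    num = (p2 * Fp1).sum(dim=1).abs()           # |p2^T F p1|
    d2 = num / Fp1[:, :2].norm(dim=1).clamp(min=1e-8)   # distance of x2 to its line
    d1 = num / Ftp2[:, :2].norm(dim=1).clamp(min=1e-8)  # distance of x1 to its line
    return d1 + d2
```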
IMPUS 😈 will be heading to #ICLR2024! Morph between any two images smoothly, directly, and "realistically". Code coming soon, paper at: arxiv.org/abs/2311.06792

I’ll be presenting our paper in person for the first time! If you are interested in human-object interaction detection, come check out our poster (#147) in Room Nord this morning from 10:30 to 12:30. #ICCV2023 @dylanjcampbell_ @sgould_au
DINO features of the same object vary with viewpoint, making them hard to use for retrieval and scene-level instance segmentation. Visit our poster on Friday morning to learn how to make them view-consistent! #ICCV2023
📢#ICCV23 is here! I’ll be presenting our paper “LoCUS: Learning Multiscale 3D-Consistent Features from Posed Images” this Friday, 10:30 AM - 12:30 PM in Room Nord. Swing by, say hi, and let’s chat about cool research! Details: robots.ox.ac.uk/~vgg/research/… 🚀 #ComputerVision
If you aren't able to get to London, AZ's Royal Society lecture will be livestreamed (at 3:30am AEST) and available at: youtube.com/c/royalsociety. Don't miss it!
Join us at the Royal Society this May to hear Professor Andrew Zisserman @Oxford_VGG, winner of the Bakerian Medal and Lecture 2023, deliver his prize lecture on the topic of #ComputerVision. Register now: ow.ly/IzQo50NvQa4
I definitely recommend this opportunity - Dima and Bristol Uni are both delightful (and the latter beats Oxford in the elevation stakes, which is important for a university imo).
[Pls RT 📢 - DL 23/04] Fantastic 💡 **postdoc** opportunity to join my group @BristolUni working on multimodal video understanding. Project in collab w Andrew Zisserman & others @Oxford_VGG. 26 months w student co-supervision opportunities. Email for queries or apply directly.
Very nice single-view neural (density) field prediction work from @felixwimbauer and coauthors. A bonus: soothing videos from the perspective of Superman trapped in a KITTI world.
🎬 Behind the Scenes: Density Fields for Single-View Reconstruction (#CVPR2023) ✅ Generalizes to challenging scenes ✅ Meaningful density even in occluded regions ✅ Strong depth prediction / NVS Code: github.com/Brummi/BehindT… Project Page: fwmb.github.io/bts/ 1/n
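For context, single-view density-field methods render novel views by volume rendering the predicted density along each camera ray. The sketch below is only the standard NeRF-style compositing step, written with assumed inputs; it is not the authors' code.

```python
# Generic volume-rendering sketch (standard alpha compositing, not the
# authors' code): given per-sample densities and colours along one ray,
# the rendered colour follows from transmittance-weighted compositing.
import torch

def composite(densities, colours, deltas):
    """densities: (N,) non-negative; colours: (N, 3); deltas: (N,) sample spacings."""
    alpha = 1.0 - torch.exp(-densities * deltas)                        # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)  # transmittance
    weights = trans * alpha                                             # contribution per sample
    rgb = (weights[:, None] * colours).sum(dim=0)                       # rendered colour
    return rgb, weights
```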
Congratulations Richard! Your curiosity and determination to work things out for yourself are an inspiration.
What if you actually could build Rome in a day? Prof Richard Hartley is one of the founders of multiple view geometry, which underpins the construction of 3D models from sets of images or videos. He receives the Hannan Medal for his pioneering work.
Visit us at poster 31, 11am-1:30pm, to find out! #ECCV2022
🚗🚗 You drive past a parked car. What does the side you didn't see look like? Let SNeS (Symmetric Neural Surfaces) show you at #ECCV2022! 📰 Blog: robots.ox.ac.uk/~vgg/blog/snes… 🚩 Project Page: robots.ox.ac.uk/~vgg/research/… 🪄 Code: github.com/eldar/snes 📝 Paper: arxiv.org/abs/2206.06340
Very impressive work! The argument is that surface normals can be estimated locally from images without worrying about scale (unlike depth), and so can serve as a valuable cue for refining monocular depth estimates.
Introducing IronDepth, a framework that uses surface normals and their uncertainty to iteratively refine the predicted depth map (to appear in #bmvc2022). Visit baegwangbin.github.io/IronDepth/ for more detail. Joint work with @IgnasBud and @robertocipolla.
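The reason normals help: a depth-plus-normal estimate at one pixel defines a local plane, which gives a depth candidate at a neighbouring pixel by intersecting that pixel's camera ray with the plane. The toy sketch below shows that geometric step under a pinhole model; the function and variable names are my own, not the IronDepth code.

```python
# Toy sketch of the intuition (not IronDepth itself): pixel j's depth d_j
# and unit normal n_j define a local plane; intersecting neighbour i's
# back-projected ray with that plane yields a depth candidate for i.
import numpy as np

def propagate_depth(d_j, n_j, px_j, px_i, K):
    """d_j: depth at pixel j; n_j: (3,) unit normal at j;
    px_j, px_i: (2,) pixel coordinates; K: (3, 3) camera intrinsics."""
    K_inv = np.linalg.inv(K)
    ray_j = K_inv @ np.array([px_j[0], px_j[1], 1.0])   # back-projected ray at j (z = 1)
    ray_i = K_inv @ np.array([px_i[0], px_i[1], 1.0])   # back-projected ray at i (z = 1)
    X_j = d_j * ray_j                                    # 3D point at j
    # Plane through X_j with normal n_j: n_j . X = n_j . X_j
    # Intersect i's ray (X = t * ray_i) with this plane; t is the z-depth at i.
    t = (n_j @ X_j) / (n_j @ ray_i)                      # assumes the ray is not parallel to the plane
    return t
```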
I can't recommend Yicong highly enough - he's a brilliant, curiosity-driven researcher with a deep understanding of vision and language tasks. A sure bet!
Looking for research intern opportunities in Seattle this winter (Start in late Dec or Jan) 😊 Embodied vision and language / multimodal. Personal Webpage: yiconghong.me, GitHub: github.com/YicongHong, Linkedin: linkedin.com/in/yicong-hong