Andrii Zadaianchuk 🇺🇦
@ZadaianchukML
Postdoc @UvA_Amsterdam, PhD @ETH Zürich and @MPI_IS, intern at @AmazonScience. Structured representation learning for and by autonomous agents. 🦋 @zadaianchuk
How to represent dynamic real-world data both consistently and efficiently, while reflecting the compositional object-centric structure of the world? Contrast your slots! ...with our new SlotContrast method (🚀#CVPR2025 Oral🚀)! 🌐website: slotcontrast.github.io 🧵🧵🧵 1/n
Intelligence isn't a collection of skills. It's the efficiency with which you acquire and deploy new skills. It's an efficiency ratio. And that's why benchmark scores can be very misleading about the actual intelligence of AI systems.
@svlevine was just presenting at the Exploration in AI workshop @ #ICML2025 and argued that exploration needs to be grounded, and that VLMs are a good source of grounding ;-) Check out our paper below 👇
✨Introducing SENSEI✨ We bring semantically meaningful exploration to model-based RL using VLMs. With intrinsic rewards for novel yet useful behaviors, SENSEI showcases strong exploration in MiniHack, Pokémon Red & Robodesk. Accepted at ICML 2025🎉 Joint work with @cgumbsch 🧵
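To make the "novel yet useful" idea concrete: a minimal sketch of blending an epistemic-novelty signal with a VLM-derived interestingness score into one intrinsic reward. The function name, the min-max normalization, and the convex weighting `alpha` are all illustrative assumptions for this thread, not the actual SENSEI formulation — see the paper for the real method.

```python
import numpy as np

def sensei_style_intrinsic_reward(novelty, vlm_interest, alpha=0.5):
    """Illustrative blend of two per-state exploration signals.

    novelty:      epistemic-novelty scores (e.g., ensemble disagreement)
    vlm_interest: VLM-derived 'interestingness' scores for the same states
    alpha:        hypothetical trade-off weight between the two signals
    """
    novelty = np.asarray(novelty, dtype=float)
    vlm_interest = np.asarray(vlm_interest, dtype=float)

    # Min-max normalize each signal to [0, 1] so neither
    # dominates purely by scale (an assumption of this sketch).
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return alpha * norm(novelty) + (1 - alpha) * norm(vlm_interest)

# States that are both novel and interesting to the VLM score highest.
rewards = sensei_style_intrinsic_reward([0.1, 0.9, 0.5], [0.2, 0.8, 0.9])
```

The point of the sketch is only the shape of the objective: reward states that are unfamiliar to the world model *and* semantically meaningful according to the VLM, rather than novelty alone.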
When multiple tasks need improvements, fine-tuning a generalist policy becomes tricky. How do we allocate a demonstration budget across a set of tasks of varied difficulty and familiarity? We are presenting a possible solution at ICML on Wednesday! (1/3)
Zero-shot imitation from just a single sparse demonstration is hard. Goal-conditioned methods tend to "greedily" move from one state to the next and lose the big picture. We're presenting an alternative approach on Tuesday at #ICML2025. (1/3)
🌍🤖 What is the best way to explore the world to learn a robust world model from high-dimensional data? 🤖🌍 SENSEI learns to explore from humans by reusing the semantic structure discovered by VLMs and exploring around the states the VLM finds most interesting. #ICML2025
Introducing ArticuBot🤖at #RSS2025, in which we learn a single policy for manipulating diverse articulated objects across 3 robot embodiments in different labs, kitchens & lounges, achieved via large-scale simulation and hierarchical imitation learning. articubot.github.io 🧵
I believe successful neural network training represents cases of "near convexity": the optimization landscape, while technically non-convex, behaves enough like a convex problem that standard convex optimization is often applicable. At the same time, *in general* neural nets…
The neural network objective function is a very complicated objective function. It's very non-convex, and there are no mathematical guarantees whatsoever about its success. And so if you were to speak to somebody who studies optimization from a theoretical point of view, they…
In case there is any ambiguity: DINOv2 is 100% a product of dumb hill-climbing on ImageNet-1k k-NN accuracy (and linear probing too). Overfitting an eval can be bad. But sometimes the reward signal is reliable, and leads to truly good models. It's about finding a balance.
Oh, I am a big fan of self-supervised learning. Also, SSL has never been benchmark-maxing on ImageNet, afaik. I am mainly complaining about the supervised-classification ImageNet hill climb.
We will present this work in the afternoon poster session today at #CVPR2025, poster #322, Exhibition Hall D, 4-6pm. Do stop by if you are interested in learning how to extract visual features for *specific* concepts specified by language queries!
Check out our latest work (published at CVPR 2025) on learning language-controllable visual representations.
New PhD position at @AmlabUva on learning concepts with theoretical guarantees using #causality and #RL with me, Frans Oliehoek (TU Delft) and @herkevanhoof 💥 Deadline: 15 June werkenbij.uva.nl/en/vacancies/p…
Today you can listen to @annamanasyan4's talk at #CVPR2025 or chat with Anna during the poster session! 🗣️ Oral Session 2C: Today at 13:15 P.S. Due to almost 2 years of administrative processing of my US visa, I cannot attend CVPR a second time, so I would be happy to chat here!