Erik Bekkers
@erikjbekkers
Associate Prof @AmlabUva @UvA_Amsterdam @Ellis_Amsterdam | @ELLISforEurope Scholar | Geometric and Group Equivariant Deep Learning
Dear GDL friends! Here's a 🧵 on our mini-course ✨Group Equivariant Deep Learning✨. See uvagedl.github.io for the YT playlist (21 videos), colabs, slides, and lecture notes. Topics: 1⃣ regular & 2⃣ steerable g-convs, 3⃣ equivariant graph NNs, 4⃣ geometric latent space models. 1/14
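To make topic 1⃣ concrete, here is a minimal sketch (my illustration, not the course's code) of a regular group convolution: a lifting layer for the rotation group C4, correlating the input with all four 90°-rotated copies of one kernel so the output carries an explicit group axis.

```python
import torch
import torch.nn.functional as F

def c4_lifting_conv(x, weight, bias=None):
    """Regular group convolution (lifting) for the rotation group C4.

    Correlates the input with all four 90-degree rotations of the same
    kernel, so the output gains an explicit group axis.
    x: (B, C_in, H, W), weight: (C_out, C_in, k, k) with odd k.
    Returns (B, C_out, 4, H, W): one response map per rotation.
    """
    pad = weight.shape[-1] // 2
    responses = [
        F.conv2d(x, torch.rot90(weight, r, dims=(-2, -1)), bias, padding=pad)
        for r in range(4)  # the four elements of C4
    ]
    return torch.stack(responses, dim=2)

# Rotating the input rotates the response maps and cyclically shifts the
# group axis: the regular-representation action that makes this layer
# equivariant.
x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
print(c4_lifting_conv(x, w).shape)  # torch.Size([1, 8, 4, 32, 32])
```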
🎉Happy to be in 🇨🇦Vancouver this summer for ✨ICML 2025! Ping me if you want to chat about symmetries, GDL, geometric representations + AI4Science, or want to look for the best ramen in town🍜! 🥁Excited to present several new works at the main conference and workshops!…
Why do video models handle motion so poorly? It might be a lack of motion equivariance. Very excited to introduce Flow Equivariant RNNs (FERNNs), the first sequence models to respect symmetries over time. Paper: arxiv.org/abs/2507.14793 Blog: kempnerinstitute.harvard.edu/research/deepe… 1/🧵
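To unpack "symmetries over time", here is a small, hypothetical diagnostic (my illustration, not the FERNN code): check whether a sequence model commutes with a constant-velocity translation flow applied to its input frames, which is the property the tweet says typical video models lack.

```python
import torch

def apply_flow(frames, velocity):
    """Translate frame t by t * velocity pixels (circular shift).
    frames: (T, C, H, W); velocity: integer (vy, vx) in pixels/frame."""
    vy, vx = velocity
    return torch.stack([
        torch.roll(f, shifts=(t * vy, t * vx), dims=(-2, -1))
        for t, f in enumerate(frames)
    ])

def flow_equivariance_gap(model, frames, velocity):
    """|| model(flow(x)) - flow(model(x)) ||: zero iff the model commutes
    with the constant-velocity flow (outputs assumed on the input grid)."""
    return (model(apply_flow(frames, velocity))
            - apply_flow(model(frames), velocity)).norm()

# A per-frame convolution with circular padding passes the test:
conv = torch.nn.Conv2d(1, 1, 3, padding=1, padding_mode="circular")
frames = torch.randn(8, 1, 16, 16)
with torch.no_grad():
    print(flow_equivariance_gap(conv, frames, (0, 1)))  # ~0
```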
Happening today at @genbio_workshop
3. 💪 Rapidash: Scalable Molecular Modeling Through Controlled Equivariance Breaking. Presenting the 💪🚀Rapidash🚀 architecture: a flexible design that allows different symmetry- and equivariance-breaking modes through a group convolutional architecture at…
SDE Matching is truly something exceptional: the first algorithm capable of learning partially observed diffusions (latent SDEs) from data without resorting to simulation or discretisation of the SDE! #SDE #Diffusion #Flow #GenerativeAI
📢Presenting SDE Matching🔥🔥🔥 🚀We extend diffusion models to construct a simulation-free framework for training Latent SDEs. It enables sampling from the exact posterior process marginals without any numerical simulations. 📜: arxiv.org/abs/2502.02472 🧵1/8
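For intuition on "simulation-free", here is the diffusion-model ingredient the tweet alludes to, as a minimal sketch (my illustration; SDE Matching generalizes this to latent SDEs and differs in detail): for a linear SDE the marginal q(z_t | z_0) is Gaussian in closed form, so training points can be sampled directly, with no SDE solver anywhere in the loop.

```python
import torch

# For the VP/OU process dz = -0.5*beta*z dt + sqrt(beta) dW, the marginal
# given z0 is Gaussian in closed form, so no SDE solver is ever run:
#   z_t ~ N(exp(-0.5*beta*t) * z0, (1 - exp(-beta*t)) * I)

def sample_marginal(z0, t, beta=1.0):
    mean = torch.exp(-0.5 * beta * t) * z0
    std = torch.sqrt(1.0 - torch.exp(-beta * t))
    return mean + std * torch.randn_like(z0), mean, std

def score_matching_loss(score_net, z0, beta=1.0):
    """Regress the exact Gaussian score of q(z_t | z_0); the whole loss
    is simulation-free, the property SDE Matching carries to latent SDEs."""
    t = torch.rand(z0.shape[0], 1)            # t ~ U(0, 1)
    z_t, mean, std = sample_marginal(z0, t, beta)
    target = -(z_t - mean) / std**2           # score of N(mean, std^2 I)
    return ((score_net(z_t, t) - target) ** 2 * std**2).mean()

# toy usage with a hypothetical 2-d score network
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.SiLU(),
                          torch.nn.Linear(64, 2))
loss = score_matching_loss(lambda z, t: net(torch.cat([z, t], -1)),
                           torch.randn(128, 2))
```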
Also presenting, on behalf of @artemmoskalev, work on an efficient geometric deep learning architecture: ⏩⏩ Geometric Hyena Networks for Large-scale Equivariant Learning Paper: openreview.net/forum?id=jJRkk… Thread: x.com/artemmoskalev/…
ICML Spotlight 🚨 Equivariance is too slow and expensive, especially when you need global context. It makes us wonder whether it is even worth the cost, particularly in high-dimensional problems. We present Geometric Hyena Networks — a simple equivariant model orders of magnitude more…
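For context, a minimal sketch of the long-convolution primitive that Hyena-style models build on (my illustration, not the paper's geometric architecture): a global circular convolution computed with FFTs in O(N log N), which is how such models get global context without quadratic attention.

```python
import torch

def fft_long_conv(x, kernel):
    """Global circular convolution in O(N log N) via FFT.
    x: (B, N, D) sequence, kernel: (N, D), one filter per channel.
    Every output token sees the whole sequence, with no N^2 attention."""
    N = x.shape[1]
    X = torch.fft.rfft(x, n=N, dim=1)
    K = torch.fft.rfft(kernel, n=N, dim=0)
    return torch.fft.irfft(X * K.unsqueeze(0), n=N, dim=1)

x = torch.randn(2, 4096, 32)
k = torch.randn(4096, 32)
print(fft_long_conv(x, k).shape)  # torch.Size([2, 4096, 32])
```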
2. 🌊Controlled Generation with Equivariant Variational Flow Matching Paper: openreview.net/forum?id=YSVSM… Thread: x.com/FEijkelboom/st…
Flow Matching (FM) is one of the hottest ideas in generative AI - and it's everywhere at #ICML2025. But what is it? And why is it so elegant? 🤔 This thread is an animated, intuitive intro to (Variational) Flow Matching - no dense math required. Let's dive in! 🧵👇
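For readers who want the code behind the intuition, here is a minimal sketch of the standard conditional flow matching objective with a linear path (my illustration, not the thread's animations): sample noise x0 and data x1, interpolate, and regress the network onto the path velocity x1 - x0.

```python
import torch

def flow_matching_loss(v_net, x1):
    """Conditional flow matching with the linear (rectified-flow) path:
    regress v_net onto the constant velocity of the noise-to-data line."""
    x0 = torch.randn_like(x1)           # source sample: standard Gaussian
    t = torch.rand(x1.shape[0], 1)      # t ~ U(0, 1)
    x_t = (1 - t) * x0 + t * x1         # point on the straight path
    return ((v_net(x_t, t) - (x1 - x0)) ** 2).mean()

def sample(v_net, n, dim, steps=100):
    """Generate data by integrating dx/dt = v_net(x, t) from t=0 to 1."""
    x = torch.randn(n, dim)
    for i in range(steps):              # simple Euler integration
        t = torch.full((n, 1), i / steps)
        x = x + v_net(x, t) / steps
    return x
```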
1. 🧠 On the Importance of Embedding Norms in Self-Supervised Learning. We show that 🔍🔍 embedding norms play a key role in self-supervised learning (SSL) by:
- Governing convergence rates during training.
- Encoding network confidence: smaller norms correspond to more surprising or…
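A minimal sketch of how one might use this observation in practice (my illustration; the names are hypothetical and this is not the paper's method): treat per-sample embedding norms as a confidence score and flag the smallest-norm inputs as the most surprising.

```python
import torch

def surprise_scores(encoder, x):
    """Per-sample embedding norms as a confidence proxy: per the thread,
    smaller norms should mark more surprising or harder inputs."""
    with torch.no_grad():
        z = encoder(x)                  # (B, D) embeddings
    return z.norm(dim=-1)               # (B,), larger = more confident

# e.g. flag the 10 most surprising samples in a batch (hypothetical names):
# idx = surprise_scores(model.backbone, batch).topk(10, largest=False).indices
```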
Can an AI model predict perfectly and still have a terrible world model? What would that even mean? Our new ICML paper formalizes these questions. One result tells the story: a transformer trained on 10M solar systems nails planetary orbits, but it botches gravitational laws 🧵
"On the Ability of Deep Networks to Learn Symmetries from Data: A Neural Kernel Theory" now accepted at JMLR! 🥳 🔗arxiv.org/abs/2412.11521 We thank the reviewers for expert suggestions which allowed us to substantially improve the work and writing. See ⬇️ for more info and 🧵
Now accepted at JMLR, and with an extension to general finite groups (including non-abelian groups)! Updated version of our (w/ @StphTphsn1) work: arxiv.org/abs/2412.11521
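As a hands-on companion (my illustration, not the paper's neural kernel analysis): a generic probe for whether a trained network has actually learned a symmetry, for any finite group given as a list of transforms, here the non-abelian dihedral group D4 acting on images.

```python
import torch

def d4_transforms():
    """The 8 elements of the non-abelian dihedral group D4 acting on
    images: four rotations, each with and without a horizontal flip."""
    ops = []
    for r in range(4):
        ops.append(lambda x, r=r: torch.rot90(x, r, dims=(-2, -1)))
        ops.append(lambda x, r=r: torch.rot90(x.flip(-1), r, dims=(-2, -1)))
    return ops

def invariance_gap(f, x, group=None):
    """Mean || f(g.x) - f(x) || over the group: close to zero only if f
    has (approximately) learned the symmetry, e.g. for class logits."""
    group = group if group is not None else d4_transforms()
    fx = f(x)
    gaps = [(f(g(x)) - fx).norm(dim=-1).mean() for g in group]
    return torch.stack(gaps).mean()
```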
We release AB-UPT, a novel method to scale neural surrogates to CFD meshes beyond 100 million mesh cells. AB-UPT is extensively tested on the largest publicly available datasets. 📄 arxiv.org/abs/2502.09692 🤗 huggingface.co/EmmiAI/AB-UPT 💻 github.com/Emmi-AI/AB-UPT
🤹 New blog post! I write about our recent work on using hierarchical trees to enable sparse attention over irregular data (point clouds, meshes): the Erwin Transformer. blog: maxxxzdn.github.io/blog/erwin/ paper: arxiv.org/abs/2502.17019 Compressed version in the thread below:
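To make the core idea concrete, a compressed and hypothetical sketch (my illustration, not the Erwin implementation, which uses ball trees and a hierarchical encoder-decoder): recursively median-split the point cloud into balanced leaves and run full attention only within each leaf, so cost scales near-linearly in the number of points.

```python
import torch

def median_split_groups(points, leaf_size=64):
    """Recursively split a point cloud along its widest axis until each
    group holds <= leaf_size points; returns one index tensor per leaf."""
    def split(idx):
        if idx.numel() <= leaf_size:
            return [idx]
        p = points[idx]
        axis = (p.max(0).values - p.min(0).values).argmax().item()
        order = p[:, axis].argsort()
        mid = idx.numel() // 2
        return split(idx[order[:mid]]) + split(idx[order[mid:]])
    return split(torch.arange(points.shape[0]))

def tree_local_attention(attn, feats, points, leaf_size=64):
    """Full attention only inside each leaf: O(N * leaf_size), not O(N^2)."""
    out = torch.empty_like(feats)
    for idx in median_split_groups(points, leaf_size):
        group = feats[idx].unsqueeze(0)             # (1, n, D)
        out[idx] = attn(group, group, group)[0][0]  # in-leaf self-attention
    return out

attn = torch.nn.MultiheadAttention(32, 4, batch_first=True)
pts, f = torch.randn(1000, 3), torch.randn(1000, 32)
print(tree_local_attention(attn, f, pts).shape)  # torch.Size([1000, 32])
```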
Thrilled to announce the first TAG-DS: TAG…We're It! event, Dec 1-2, 2025 in San Diego (right before NeurIPS)! Please join us for this 2-day workshop featuring keynotes, submitted work, associated proceedings, and collaboration activities! More at tagds.com/events/tag-ds-…!
Super proud of @algarciacast's first work in his PhD at @AmlabUva. ❤️ Beautiful new theoretical results + an incredibly practical method: a grid-free Eikonal solver (for geodesic and distance computations) on arbitrary domains, made scalable through conditional neural fields!
🌍 From earthquake prediction to robot navigation - what connects them? Eikonal equations! We developed E-NES: a neural network that leverages geometric symmetries to solve entire families of velocity fields through group transformations. Grid-free and scalable! 🧵👇
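For background on the PDE itself, a minimal PINN-style sketch (my illustration; E-NES additionally conditions on the velocity field and exploits group symmetries, which this omits): train a travel-time field T(x) so its gradient satisfies the Eikonal equation |∇T(x)| = 1 / v(x).

```python
import torch

def eikonal_residual(T_net, x, v):
    """Pointwise Eikonal residual |grad T(x)| * v(x) - 1.
    T_net: candidate travel-time field, v: wave speed at the points x.
    A full solver also needs a boundary term pinning T(source) = 0."""
    x = x.requires_grad_(True)
    T = T_net(x)
    grad_T, = torch.autograd.grad(T.sum(), x, create_graph=True)
    return grad_T.norm(dim=-1) * v - 1.0

# training step on random collocation points (speed_fn is hypothetical):
# x = torch.rand(1024, 2)
# loss = eikonal_residual(T_net, x, speed_fn(x)).pow(2).mean()
```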