Eike Eberhard
@ESEberhard
PhD student focused on ML methods for quantum chemistry
While Nicholas is switching gears, I am switching fields from Physics to Machine Learning. This started out as a curiosity-driven, open-ended project, but it turns out that GNNs might be able to advance DFT, too. I can't wait to find out where this line of research might lead us.
Switching gears from QMC to DFT for this one. I'm excited to share our newest work, where we learn the non-local exchange-correlation functional in KS-DFT with equivariant graph neural networks! Joint work w/@ESEberhard, @guennemann 📝 arxiv.org/abs/2410.07972
Super excited to be at #ICML2025 in Vancouver this week! 🇨🇦 Thrilled to be presenting new work and soaking in all the amazing research. This also marks my first conference trip since joining @GoogleResearch a couple of months ago! #LLMs #AIResearch
Real data is noisy, but HiPPO assumes it's clean. Our UnHiPPO initialization resists noise with implicit Kalman filtering and makes SSMs robust without architecture changes. #ICML poster: Thu 11am E-2409 Paper: openreview.net/forum?id=U8GUm… Code: github.com/martenlienen/u… w/ @guennemann
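For readers who haven't met HiPPO before: a minimal sketch of the standard, noise-free HiPPO-LegS construction that initializations like this build on. The noise-aware UnHiPPO modification itself is in the paper; nothing below is specific to it.

```python
import numpy as np

def hippo_legs(N: int) -> tuple[np.ndarray, np.ndarray]:
    """Standard HiPPO-LegS state matrix A and input vector B (Gu et al., 2020).

    In SSMs such as S4, (the negation of) A is used to initialize the state
    matrix; UnHiPPO revisits this derivation for noisy inputs (see the paper).
    """
    A = np.zeros((N, N))
    for n in range(N):
        for k in range(N):
            if n > k:
                A[n, k] = np.sqrt(2 * n + 1) * np.sqrt(2 * k + 1)
            elif n == k:
                A[n, k] = n + 1
    B = np.sqrt(2 * np.arange(N) + 1.0)
    return A, B

A, B = hippo_legs(64)  # 64 is just a typical toy state size
```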
How private is DP-SGD for self-supervised training on sequences? Our #ICML2025 spotlight shows that it can be very private—if you parameterize it right! 📜arxiv.org/abs/2502.02410 #icml Joint work w/ M. Dalirrooyfard, J. Guzelkabaagac, A. Schneider, Y. Nevmyvaka, @guennemann 1/6
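For context, a minimal sketch of a single DP-SGD step (per-sample gradient clipping plus Gaussian noise) on a toy linear model. The clip norm and noise multiplier below are placeholder values, not the paper's settings, and the paper's parameterization findings are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step for a toy linear model with squared loss.

    Each per-sample gradient is clipped to `clip_norm`, the clipped gradients
    are summed, Gaussian noise of std `noise_multiplier * clip_norm` is added,
    and the result is averaged over the batch.
    """
    grads = 2.0 * (X @ w - y)[:, None] * X              # per-sample gradients, (B, d)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)  # clip each sample's gradient
    noisy_sum = grads.sum(0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape
    )
    return w - lr * noisy_sum / len(X)

w = np.zeros(4)
X, y = rng.normal(size=(32, 4)), rng.normal(size=32)
w = dp_sgd_step(w, X, y)
```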
🚀 After two+ years of intense research, we’re thrilled to introduce Skala — a scalable deep learning density functional that hits chemical accuracy on atomization energies and matches hybrid-level accuracy on main group chemistry — all at the cost of semi-local DFT. ⚛️🔥🧪🧬
How do LLMs navigate refusal? Our new @ICMLConf paper introduces a gradient-based approach & Representational Independence to map this complex internal geometry. 🚨 New Research Thread! 🚨 The Geometry of Refusal in Large Language Models By @guennemann's lab & @GoogleAI. 🧵👇
3️⃣ Key Question ❓ Is refusal behavior governed by a single vector, or do multiple independent mechanisms exist? We introduce a novel gradient-based method to extract refusal-mediating directions more effectively! 🎯
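This is not the paper's method, only a toy illustration of the gradient-based idea: differentiate a (hypothetical) refusal score with respect to a hidden activation and compare the resulting directions across prompts. A single mediating vector would make the directions nearly parallel.

```python
import torch

torch.manual_seed(0)
d = 512                                        # hidden size (toy value)
# Hypothetical nonlinear "refusal scorer" standing in for the model's refusal logit.
W1, w2 = torch.randn(d, 128) / d**0.5, torch.randn(128)

def refusal_direction(hidden: torch.Tensor) -> torch.Tensor:
    """Gradient of a scalar refusal score w.r.t. one hidden activation, normalized."""
    hidden = hidden.clone().requires_grad_(True)
    score = torch.relu(hidden @ W1) @ w2       # stand-in for the refusal score
    score.backward()
    return hidden.grad / hidden.grad.norm()

# If refusal were governed by a single vector, directions extracted from
# different prompts would have cosine similarity near 1; lower values hint
# at multiple independent mechanisms.
h1, h2 = torch.randn(d), torch.randn(d)        # activations of two (fake) prompts
print(torch.dot(refusal_direction(h1), refusal_direction(h2)).item())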
This ICLR is the best conference ever. Attendees are extremely friendly and cuddly. …What do you mean this is the wrong hall?
If you are attending #ICLR2025 and are interested in electronic structure modelling / quantum chemistry, come by our poster on learnable non-local XC functionals to discuss with @n_gao96 and me. 🗓️ Today | 3:00–5:30 pm 📍 Hall 3 | Poster #3

Thrilled to announce that we just presented "MAGNet: Motif-Agnostic Generation of Molecules from Scaffolds" at #ICLR2025 🧲 @j_m_sommer @Pseudomanifold @fabian_theis @guennemann For those who couldn't make it to our spotlight: openreview.net/forum?id=5FXKg…
Excited to announce our #ICLR2025 spotlight work deriving the first exact certificates for neural networks against label poisoning 🎉. Joint work w/ @maha81193, @guennemann & Debarghya. For more details, check out the thread below 👇 or read our paper arxiv.org/abs/2412.00537.
🎉Excited to announce our #ICLR2025 Spotlight! 🚀@lukgosch and I will be presenting our paper on the first exact certificate against label poisoning for neural nets and graph neural nets. Joint work with @guennemann and Debarghya 👇[1/6]
Using EDM as the latent diffusion model, we achieve state-of-the-art performance on standard molecular graph generation benchmarks, outperforming discrete diffusion models by a large margin. 7/
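Not the paper's implementation, just a generic DDPM-style sampling loop over latent Euclidean coordinates with a placeholder denoiser, to illustrate what "latent diffusion over lifted molecules" amounts to. EDM's exact equivariant parameterization and the graph encoder/decoder are in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # generic linear noise schedule (placeholder)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x, t):
    """Placeholder for the learned equivariant noise predictor eps_theta(x, t)."""
    return np.zeros_like(x)

def sample_latents(n_points=16, dim=3):
    """DDPM-style ancestral sampling over latent Euclidean point coordinates."""
    x = rng.normal(size=(n_points, dim))
    for t in reversed(range(T)):
        eps = denoiser(x, t)
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.normal(size=x.shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x  # in the paper's pipeline, these latents are decoded back into a molecular graph

points = sample_latents()
```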
Happy to share that our paper "Lift Your Molecules: Molecular Graph Generation in Latent Euclidean Space" got accepted to #ICLR2025, and we will be presenting it this week in Singapore! Joint work with @n_gao96, @TomWollschlager, @j_m_sommer, and @guennemann. 🧵 1/
With a short cutoff of 5 Bohr radii, our approach matches the 'gold standard' CCSD(T)/CBS on non-covalent binding energies, while an even shorter cutoff of 3 Bohr radii yields results closer to experiment for well-bonded biochemical compounds.
Our results show that physical intuitions like the nearsightedness of electronic matter may be useful inductive biases for designing efficient yet accurate machine learning-based electronic structure methods.
The paper from my final "pure" physics project just got published in @NanoLetters: doi.org/10.1021/acs.na… Joint work with Ludwig Burger, @ceslopast, Hamid Seyed-Allaei, @GiovanniGiunt20, and Ulrich Gerland

Another moment to be proud of an excellent team: Today the @PrunaAI team has open sourced parts of their AI model optimization framework: github.com/PrunaAI/pruna/ If you care about more efficient and less costly models and want minimal development overhead, check it out!
Safe and reliable machine learning has never been more relevant. At the same time, LLMs have made robustness research **more complex, less reproducible, and harder to evaluate**. How can we enable research progress despite these challenges? 🧵 @guennemann @gauthier_gidel
Super happy & honored that our work on certifying NNs against poisoning won the Best Paper Award @AdvMLFrontiers at #NeurIPS2024. Come by our poster 10:40 am–12 pm & 4–5 pm (or the talk) tomorrow :) Joint work w/ @maha81193, Debarghya Ghoshdastidar & @guennemann Paper: arxiv.org/pdf/2407.10867
It’s rainy in Vancouver and the poster hall is closed, but we are ready 🙌 👉 Come and talk to us and learn about UniGuide at Poster #2600 (East). UniGuide is a new framework for molecular diffusion models that enables flexible geometric conditioning across tasks—no retraining required
This week, we will present our recent #NeurIPS2024 paper. 📎 Paper: openreview.net/forum?id=HeoRs… 📆 Make sure to visit our poster #2600 on Fri, 13 December at 11 am! Joint work with my amazing mentors @leon_het @j_m_sommer @fabian_theis @guennemann
By weighting multiple eigenspaces, we find that S²GNNs combine important properties of hierarchical message-passing schemes, graph rewiring, and pooling. The depicted propagation schemes provide further examples of corresponding graph-adaptive hierarchies.
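To make the eigenspace-weighting idea concrete, here is a toy spatio-spectral layer (not the actual S²GNN parameterization): filter node features through per-eigenspace weights on the lowest Laplacian eigenvectors for global, low-frequency propagation, and add a local mean-aggregation step for spatial message passing. Random weights stand in for learned parameters.

```python
import numpy as np

def spectral_spatial_layer(adj: np.ndarray, x: np.ndarray, k: int = 8,
                           rng=np.random.default_rng(0)) -> np.ndarray:
    """Toy layer: weight the k lowest Laplacian eigenspaces and add
    one step of mean-neighbor aggregation."""
    deg = adj.sum(1)
    lap = np.diag(deg) - adj                            # combinatorial graph Laplacian
    _, eigvecs = np.linalg.eigh(lap)
    U = eigvecs[:, :k]                                  # k lowest-frequency eigenvectors
    w = rng.normal(size=k)                              # per-eigenspace weights (learned in practice)
    spectral = U @ (w[:, None] * (U.T @ x))             # U diag(w) U^T x
    spatial = (adj @ x) / np.maximum(deg, 1)[:, None]   # mean over neighbors
    return spectral + spatial

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)             # path graph on 4 nodes
x = np.eye(4)                                           # one-hot node features
out = spectral_spatial_layer(adj, x, k=2)
```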