Johannes Brandstetter
@jo_brandstetter
Founder and Chief Scientist @ Emmi AI, Ass. Prof / Group Lead @jkulinz. Former @MSFTResearch, @UvA_Amsterdam, @CERN, @tu_wien
Today marks a big milestone for us at Emmi AI. We’ve raised a €15M seed round, backed by 3VC, Speedinvest, Serena, and PUSH. Let’s build the future of Physics AI together!

From GPT to MoE: I reviewed & compared the main LLMs of 2025 in terms of their architectural design, from DeepSeek-V3 to Kimi K2. Multi-head Latent Attention, sliding window attention, new Post- & Pre-Norm placements, NoPE, shared-expert MoEs, and more... magazine.sebastianraschka.com/p/the-big-llm-…
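The shared-expert MoE design mentioned above can be sketched in a few lines: a router picks the top-k experts per token, and a shared expert is applied unconditionally on top. This is a toy NumPy illustration of the general routing idea, not the implementation of any specific model from the review; all names (`moe_forward`, `router`, `shared_expert`) and sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2

# Toy experts: each expert is a single linear map (d -> d).
experts = rng.normal(size=(n_experts, d, d)) / np.sqrt(d)
shared_expert = rng.normal(size=(d, d)) / np.sqrt(d)  # always active
router = rng.normal(size=(d, n_experts)) / np.sqrt(d)

def moe_forward(x):
    """Route one token x through its top-k experts plus the shared expert."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]            # indices of the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                     # softmax over the selected experts only
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out + x @ shared_expert               # shared expert added unconditionally

y = moe_forward(rng.normal(size=d))
print(y.shape)  # (8,)
```

The point of the shared expert is that common knowledge need not be duplicated across routed experts; only the top-k routed maps are specialized.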
Aurora is fully open! 🥳 The air pollution model 🌬️, the ocean wave model 🌊, and the TC tracker 🌀 are now available. And that's not all: all model weights (pretrained and fine-tuned) are now released under the MIT license. 😎 GitHub: github.com/microsoft/auro… #AIforGood
Very excited to announce our new tabular foundation model, Mitra!
Introducing Mitra: a foundation model from Amazon researchers that outperforms traditional methods for tabular data by learning from diverse synthetic priors. Mitra uses in-context learning to adapt to new tasks without separate models for each dataset: amazon.science/blog/mitra-mix…
Bit of a late update, but I finally had my PhD viva! It was an honour and a privilege to have my thesis read and examined by @PetarV_93 and @jo_brandstetter, two giants in Geometric Deep Learning. It will soon be online after corrections :)
Just in case someone was wondering why many Chinese products are becoming better and cheaper than European products, and it’s not only cars. Europe prefers to spend money on welfare and green projects instead. Good luck.
Michael Jordan gave a short, excellent, and provocative talk recently in Paris. Here are a few key ideas:
- It's all just machine learning (ML); the AI moniker is hype.
- The late Dave Rumelhart should've received a Nobel Prize for his early ideas on making backprop work. 1/n
Flow Matching (FM) is one of the hottest ideas in generative AI - and it’s everywhere at #ICML2025. But what is it? And why is it so elegant? 🤔 This thread is an animated, intuitive intro to (Variational) Flow Matching - no dense math required. Let's dive in! 🧵👇
I converted one of my favorite talks I've given over the past year into a blog post. "On the Tradeoffs of SSMs and Transformers" (or: tokens are bullshit) In a few days, we'll release what I believe is the next major advance for architectures.
1/10 ML can solve PDEs – but precision🔬is still a challenge. Towards high-precision methods for scientific problems, we introduce BWLer 🎳, a new architecture for physics-informed learning achieving (near-)machine-precision (up to 10⁻¹² RMSE) on benchmark PDEs. 🧵How it works:
🌶️spicy paper alert🌶️ We present TABASCO, a new model for small molecule generation that achieves state-of-the-art PoseBusters validity while also being ~10x faster This is all achieved despite ✖️no equivariance ✖️no self-conditioning ✖️no bond modelling what we found🧵(1/n)