Michael Galkin
@michael_galkin
Senior Research Scientist @GoogleAI. Prev: @Intel, Postdoc @Mila_Quebec & McGill. Graph ML, Geometric DL. Grandmaster of 80's music (according to Spotify)
📣 Our spicy ICML 2025 position paper: “Graph Learning Will Lose Relevance Due To Poor Benchmarks”. Graph learning is less trendy in the ML world than it was in 2020-2022. We believe the problem lies in poor benchmarks that hold the field back - and we suggest ways to fix it! 🧵1/10

Hey, we built a Graph Foundation Model at Google and it's showing some very promising results! Read more in the blog post and also catch me and @phanein at the ICML Expo Talk next Monday. Happy to carry the Graph Learning flag ⛳️
Today on the blog we share our recent progress in developing graph foundation models that excel on interconnected relational tables and at the same time generalize to arbitrary sets of tables, features, and tasks without additional training. Learn more → goo.gle/4lLPNVe
📢 New paper: Distributed computing 🤝 agents in AgentsNet! AgentsNet transforms classical distributed computing problems into a benchmark for evaluating how LLM agents can coordinate when organized in a network. Led by Florian Grötschla, @luis_pupuis, @jonshoff w/ @phanein 🧵1/9
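Not the AgentsNet harness itself, just a minimal Python sketch of the kind of task it poses: distributed graph coloring, where each agent sees only its neighbors' current choices. The greedy `pick_color` rule below stands in for what an LLM agent would decide from the same local view; the graph and names are illustrative.

```python
# Minimal sketch (not the AgentsNet API): classical distributed graph
# coloring, where each "agent" sees only its neighbors' messages.
# In an AgentsNet-style evaluation, the color choice below would be
# made by an LLM agent given the same local view as text.

GRAPH = {  # adjacency list: each node is one agent
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

colors = {node: None for node in GRAPH}

def pick_color(node: str, neighbor_colors: list[int | None]) -> int:
    """Greedy rule: smallest color not used by any neighbor."""
    taken = {c for c in neighbor_colors if c is not None}
    color = 0
    while color in taken:
        color += 1
    return color

# Synchronous rounds: agents act in turn, seeing neighbors' latest choices.
for node in GRAPH:
    colors[node] = pick_color(node, [colors[n] for n in GRAPH[node]])

# Verify coordination succeeded: no edge connects same-colored nodes.
assert all(colors[u] != colors[v] for u in GRAPH for v in GRAPH[u])
print(colors)  # e.g. {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```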
Graph neural networks are becoming increasingly common across a variety of real-world applications. Stop by the #ICML2025 Google booth today at 12:30pm, when Michael Galkin & Bryan Perozzi will host a Q&A to discuss novel approaches to generalization for graph models.
So many ppl came to hear the expo on Graph Foundation Models at #ICML2025 by @michael_galkin @phanein . This makes me so happy! 🥹 As a contrast to the paper we will present on Thursday, we really should make sure Graph Learning WILL NOT lose relevance😉
i will not be going to @icmlconf #icml2025 this year but my colleagues will be presenting four of our papers throughout the week -- please feel free to stop by for a chat if you're in vancouver! details in thread: expect * chess ♟️ * graphs 🕸️ * softmax 🌡️ * algorithms 🧮
6. Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks 📍 East Exhibition Hall A-B #E-604, Thu 17 Jul 11 a.m. PDT @mayabechlerspei @benfinkelshtein @ffabffrasca @phanein @michael_galkin @Mniepert @chrsmrrs et al.
At ICML 🇨🇦 presenting the spicy 🌶️ Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks 📍 East Hall A-B #E-604, Thu Also, @antvas98 will be presenting "Covered Forest" — glad to have played a part in this one! 📍 #E-2908, Thu DM to chat graph(+foundation models)
Check out two recent blog posts from our team: 1) Graph Foundation Models, and how they help achieve 3-40x gains in precision: research.google/blog/graph-fou… 2) Enabling efficient multi-vector retrieval via MUVERA: research.google/blog/muvera-ma… (based on our NeurIPS'24 paper).
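For readers new to multi-vector retrieval: the score being accelerated is the ColBERT-style Chamfer/MaxSim similarity, which MUVERA approximates with one fixed-dimensional vector per side so off-the-shelf MIPS indexes apply. A minimal NumPy sketch of that score (shapes and data are illustrative, not the paper's code):

```python
import numpy as np

def chamfer_similarity(Q: np.ndarray, D: np.ndarray) -> float:
    """MaxSim / Chamfer score used in multi-vector retrieval:
    for each query token vector, take its best-matching document
    token vector, then sum. Q: (num_q_tokens, dim), D: (num_d_tokens, dim).
    """
    sims = Q @ D.T                       # (num_q_tokens, num_d_tokens)
    return float(sims.max(axis=1).sum()) # best doc match per query token

# Toy example with random token embeddings (illustrative only).
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))    # 4 query token vectors, dim 8
D = rng.normal(size=(10, 8))   # 10 document token vectors
print(chamfer_similarity(Q, D))
# MUVERA's contribution is compressing Q and D into single fixed-size
# vectors whose inner product approximates this score.
```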
We'll be presenting MOTIF with @hxyscott on Wed 4:30pm - Xingyue prepared a great poster talk! If you want to chat with less useful people, I'll be there too 🌚
🚨 Excited to announce that "How Expressive are Knowledge Graph Foundation Models?" is coming to ICML 2025! 🎉 📅 Wednesday, July 16th 🕟 4:30 PM 📍 Booth #E-3011 Come by to chat about motifs, expressiveness, and the future of graph foundation models! 🔍📊🔗
New advancement in Graph Foundation Models (GFMs) for relational data. Like leading foundation models, GFMs learn transferable representations that let them generalize to new, unseen graphs and data. Initial results show significant performance gains. Details in our new blog:…
Loss landscapes were fascinating objects back in the day - inspired by them, we came up with Landscape of Thoughts, where you can observe the convergence of the LLM reasoning process and even derive a simple verifier to steer it in the right direction! Thread by @zhankezhou 👇
Tired of debugging LLMs by reading the extremely long chain of thoughts? We built Landscape of Thoughts (LoT) to transform complex thoughts into intuitive visual maps to help you understand model behaviors. Paper and findings in 🧵 1/10 youtu.be/Zb8CfYxSvik?si… via @YouTube
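A minimal sketch of the visualization idea, not the LoT codebase: embed each intermediate reasoning step and project the trajectory to 2D so convergence (or wandering) becomes visible. TF-IDF stands in for a real sentence embedder here, and the example steps are made up.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical intermediate steps from one chain of thought.
steps = [
    "The problem asks for the sum of the first 10 odd numbers.",
    "The first odd numbers are 1, 3, 5, ...",
    "The sum of the first n odd numbers is n squared.",
    "So the answer is 10 squared, which is 100.",
]

# TF-IDF as a stand-in embedder; LoT would use a stronger representation.
X = TfidfVectorizer().fit_transform(steps).toarray()

# Project the reasoning trajectory to 2D and draw it as a path.
xy = PCA(n_components=2).fit_transform(X)
plt.plot(xy[:, 0], xy[:, 1], "-o")
for i, (x, y) in enumerate(xy):
    plt.annotate(f"step {i}", (x, y))
plt.title("Reasoning trajectory (sketch of the Landscape-of-Thoughts idea)")
plt.show()
```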