Christopher Morris @ ICML
@chrsmrrs
@RWTH. Previously, @Mila_Quebec, @mcgillu, @polymtl, and @TU_Dortmund. Working on learning with graphs and ML for combinatorial optimization.
Want to know about the current understanding of the generalization abilities of GNNs? Please have a look at our survey paper arxiv.org/abs/2503.15650. Joint work with Antonis Vasileiou, Stefanie Jegelka, and @levie_ron.

🚨 Reviewer Call — LoG 2025! Passionate about graph ML or GNNs? Help shape the future of learning on graphs by reviewing for the LoG 2025 conference! Sign up: forms.gle/Ms21k7oE8kF1Pd… RT & share! #GraphML #GNN #ML #AI #CallForReviewers
If you are in the Bay Area, consider attending our workshop, "Graph Learning Meets TCS," at the Simons Institute (simons.berkeley.edu/workshops/grap…).
I tracked the percentage of people who came by our "Position: Graph Learning Will Lose Relevance Due to Poor Benchmarks" poster at ICML 2025 and agreed with the message - 100%! 😄 A recurring question was: But what should we do? Is the field doomed? Definitely not! I see this…
Fun times at ICML. Graph learning dinner, position poster gang, theory, and graph learning hike. :)
Come talk to us tomorrow.
At ICML 🇨🇦 presenting the spicy 🌶️ Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks 📍 East Hall A-B #E-604, Thu Also, @antvas98 will be presenting "Covered Forest" — glad to have played a part in this one! 📍 #E-2908, Thu DM to chat graph(+foundation models)
Pass by to chat with us about the future of Graph Learning and how we should improve as a research community!!! 😍
6. Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks East Exhibition Hall A-B #E-604 Thu 17 Jul 11 a.m. PDT @mayabechlerspei @benfinkelshtein @ffabffrasca @phanein @michael_galkin @Mniepert @chrsmrrs et al.
Relational foundation models are a new frontier for graph learning — especially with applications on relational databases.
Hey, we built a Graph Foundation Model at Google and it's showing some very promising results! Read more in the blogpost and also catch me and @phanein at the ICML Expo Talk next Monday. Happy to carry the Graph Learning flag ⛳️
We’re thrilled to share that the first in-person LoG conference is officially happening December 10–12, 2025 at Arizona State University logconference.org Important Deadlines: Abstract: Aug 22 Submission: Aug 29 Reviews: Sept 3–27 Rebuttal: Oct 1–15 Notifications: Oct 20
Nice read! Also, have a look at our two ICML position papers: arxiv.org/abs/2502.14546 (Practice) and arxiv.org/abs/2402.02287 (Theory).
Check out this blogpost from @ffabffrasca and the GLOW reading group on the future of graph learning! I've also contributed, and my main take is: it's actually working, and it's an exciting moment to work on applications!
📢22nd International Workshop on Mining and Learning with Graphs: deadline extended to June 19th! 📄Submit your work now at mlg-europe.github.io!
𝐎𝐧𝐥𝐲 𝟒 𝐝𝐚𝐲𝐬 𝐥𝐞𝐟𝐭 to submit your work to the Mining and Learning with Graphs (MLG) workshop @ECMLPKDD 📄🗓️ Deadline is 14th of June! More info at mlg-europe.github.io
𝐔𝐩𝐝𝐚𝐭𝐞: The 22nd Mining and Learning with Graphs workshop @ECMLPKDD will be on 𝐒𝐞𝐩𝐭𝐞𝐦𝐛𝐞𝐫 𝟏𝟓𝐭𝐡. We will have exciting keynotes by Rebekka Burkholz @BurkholzRebekka and Matthias Fey @rusty1s! Submit your work until June 14th at mlg-europe.github.io!
Computational Capability and Efficiency of Neural Networks: A Repository of Papers
I compiled a list of theoretical papers related to the computational capabilities of Transformers, recurrent networks, feedforward networks, and graph neural networks. Link:…
Great discussion, @chaitjo! We also explored this with extensive experiments in our recent paper: arxiv.org/abs/2501.01999. We find, among other things, that equivariant models in a sense scale even better than non-equivariant ones. Going more or less completely against the vibes from your post 😅 1/5
After a long hiatus, I've started blogging again! My first post was a difficult one to write, because I don't want to keep repeating what's already in papers. I tried to give some nuanced and (hopefully) fresh takes on equivariance and geometry in molecular modelling.
Check out our recent work on the universality classes of equivariant networks (w/ Marco Pacini @MarcoPacini4, Gabriele Santin, Bruno Lepri). arxiv.org/abs/2506.02293
So why is the @icmlconf camera-ready submission so painful (you have to sign a form, use a paper checker, ...) while @NeurIPSConf and @iclr_conf make it so easy?
Interested in the cutting edge of graph theory and topology for GDL? Join us this Wednesday at 17:00 IDT (16:00 CET) for our seminar with Haggai Maron's lab at the Technion to learn how we can push the limits of expressivity in graph and topological deep learning!
GLOW is coming back this Wednesday! 🌟 We will hear from – and interact with – @ChristianKoke (incorporating scale in GNNs) and @YSbrdlwb (sparse geometric MPNNs and their expressive power). 🗓️When May 28th, 5pm CEST on Zoom. 🌐 Details & sign-up: sites.google.com/view/graph-lea….