Jianan Zhao
@AndyJiananZhao
Third-year Ph.D. student at Mila & UdeM. Working on graph and natural language processing.
Can one model generalize to unseen graphs with arbitrary feature and label spaces? 📢 Introducing GraphAny #ICLR25: A fully-inductive model for node classification. 🚀 Trained on one graph, GraphAny generalizes to any unseen graph. 📎 Paper: openreview.net/pdf?id=1Qpt43c… 1/🧵

This thread is just ~30% of the full post, which you can find in Tivadar's book. You won't find better explanations anywhere else: tivadardanka.com/books/mathemat… Trust me on this one. This is the book you want to read.
🚨 The 8th annual Graph Signal Processing Workshop is back this May 14-16! Held in Montreal, CA, at @Mila_Quebec, we’re covering all things graphs, signals, learning, & applications 🕸️〰️ 🔗: gspworkshop.org 👉🏻Abstract submission opens Feb 1 👉🏻 Registration opens Mar 20
I'm so surprised and offended to see the professor mention "Chinese students are dishonest" during one of the invited talks. This is against the code of conduct of #NeurIPS2024. The Chair Committee should look into this, and we want to make sure this won't happen again.
Mila's annual supervision request process opens on October 15 to receive MSc and PhD applications for Fall 2025 admission! Join our community! More information here mila.quebec/en/prospective…
Cats are invariant under SO(3) transformations! 😼
📣 Foundation models for graph reasoning become even stronger - in our new #NeurIPS2024 work we introduce UltraQuery, going beyond simple one-hop link prediction to answering more complex queries on any graph in a zero-shot fashion, better than trainable SOTA! 🧵1/7, arxiv, code
📢 In our new blogpost w/ @mmbronstein @haitao_mao_ @AndyJiananZhao @zhu_zhaocheng we discuss foundation models in Graph & Geometric DL: from the core theoretical and data challenges to the most recent models that you can try already today! towardsdatascience.com/foundation-mod…
How to stay calm (without any hacks). This is all you need.
This is really a 'WOW' paper. 🤯 It claims that MatMul operations can be completely eliminated from LLMs while maintaining strong performance at billion-parameter scales, and that by utilizing an optimized kernel during inference, their model's memory consumption can be reduced by more…
LLM bullshit knife, to cut through bs
RAG -> Provide relevant context
Agentic -> Function calls that work
CoT -> Prompt model to think/plan
FewShot -> Add examples
PromptEng -> Someone w/ good written comm skills.
Prompt Optimizer -> For…
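As a minimal sketch of the "FewShot -> Add examples" idea above: few-shot prompting is just string assembly, prepending labeled example pairs to the new query. All names here are hypothetical; no specific LLM API is assumed.

```python
# Sketch of few-shot prompt construction (hypothetical helper, no LLM API assumed):
# format (input, output) example pairs, then append the unanswered query.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled shots followed by the new query with an open Output slot."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("The movie was wonderful", "positive"),
    ("I hated every minute", "negative"),
]
prompt = build_few_shot_prompt(examples, "A delightful surprise")
print(prompt)
```

The resulting string is what gets sent to the model; the trailing open `Output:` invites it to continue the established pattern.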
Virtually nobody is pricing in what's coming in AI. I wrote an essay series on the AGI strategic picture: from the trendlines in deep learning and counting the OOMs, to the international situation and The Project. SITUATIONAL AWARENESS: The Decade Ahead
"Think lightly of yourself and deeply of the world." - Miyamoto Musashi