Ali Shakiba
@ali_shakiba_cs
Dr. Ali Shakiba is a research fellow at UNSW and was an assistant professor of Computer Science at VRU from 2016 to 2023. More at https://research.unsw.edu.au/people/
Today on the blog we share our recent progress in developing graph foundation models that excel on interconnected relational tables and at the same time generalize to arbitrary sets of tables, features, and tasks without additional training. Learn more → goo.gle/4lLPNVe
“The books of the future won’t just be static: some will talk to you, some will evolve with you, and some will exist in forms we can’t imagine now.” - @nxthompson Check out the “How to Build A Life” notebook, built in collaboration w/ @TheAtlantic based on @arthurbrooks columns:…
Really? It seems diffusion models learn logic. I can't wait to see the results.
Can diffusion models solve visual Sudoku? If you are at #ICML2025, come to our poster in the Wednesday morning poster session (Poster Session 3 East, Poster 3412) and find out! @ChrisWewer @bartek_pog Bernt Schiele @janericlenssen
An interesting piece of work ...
For evolving unknown PDEs, ML models are trained on next-state prediction. But do they actually learn the time dynamics: the "physics"? Check out our poster (W-107) at #ICML2025 this Wed, Jul 16. Our "DISCO" model learns the physics while staying SOTA on next-state prediction!
Great point - maybe we should apply FAIR data principles here too: making results Findable, Accessible, Interoperable, and Reusable. If we want AI to use our original knowledge, we need to speak its language.
As AI advances, our contribution is more and more original knowledge - meaning something that can't be inferred by reasoning from what already exists digitally. Something like the result of an experiment. Maybe it should be written more natively for AIs instead of people, eg PDF…
~400 people joined us on Sunday at the @Cohere_Labs Open Science Community ML Summer School. @TimDarcet, as always, delivered a super amazing talk on Scaling Self-Supervised Learning (SSL, DINOv2, Masked Image Modeling, CAPI). Super interesting session.