Joey Bose
@bose_joey
Incoming Assistant Professor @imperialcollege and @Mila_Quebec Affiliate member. Into Geometry ∩ Generative Models and AI4Science. PhD @Mila_Quebec / McGill.
🎉Personal update: I'm thrilled to announce that I'm joining Imperial College London @imperialcollege as an Assistant Professor of Computing @ICComputing starting January 2026. My future lab and I will continue to work on building better Generative Models 🤖, the hardest…
What a pleasant way to end #ICML2025 — winning the Best Paper Award at @genbio_workshop with the dream team for our paper FORT: Forward-Only Regression Training of Normalizing Flows 🌊
Wrapping up #ICML2025 on a high note — thrilled (and pleasantly surprised!) to win the Best Paper Award at @genbio_workshop 🎉 Big shoutout to the team that made this happen! Paper: Forward-Only Regression Training of Normalizing Flows (arxiv.org/abs/2506.01158) @Mila_Quebec
Thrilled to be co-organizing FPI at #NeurIPS2025! I'm particularly excited about our new 'Call for Open Problems' track. If you have a tough, cross-disciplinary challenge, we want you to share it and inspire new collaborations. A unique opportunity! Learn more below.
1/ Where do Probabilistic Models, Sampling, Deep Learning, and Natural Sciences meet? 🤔 The workshop we’re organizing at #NeurIPS2025! 📢 FPI@NeurIPS 2025: Frontiers in Probabilistic Inference – Learning meets Sampling Learn more and submit → fpiworkshop.org…
🚨 Our workshop on Frontiers of Probabilistic Inference: Learning meets Sampling got accepted to #NeurIPS2025!! After the incredible success of the first edition, the second edition aims to be bolder, bigger, and more ambitious in outlining key challenges in the natural…
📢We are excited to share SynCoGen—the first generative model that co-generates 🔷building-block graphs,🔷reaction edges and 🔷full 3-D coordinates, so every molecule comes with both a synthesis plan and a physically plausible shape.
we’re not kfc but come watch us cook with our feynman-kac correctors, 4:30 pm today (july 16) at @icmlconf poster session — east exhibition hall #3109 @k_neklyudov @AlexanderTong7 @tara_aksa @OhanesianViktor
Come check out SBG happening now! W-115 11-1:30 with @charliebtan @bose_joey Chen Lin @leonklein26 @mmbronstein
📢📢 "La-Proteina: Atomistic Protein Generation via Partially Latent Flow Matching" Fully atomistic. Partially latent. Structurally precise. Entirely generative. w/ @tomasgeffner*, @DidiKieran*, et al. 📜 Project page & paper: research.nvidia.com/labs/genair/la… 🧵 Thread below... (1/n)
The Programmable Biology Group is en route to Vancouver for #ICML2025!! 🇨🇦🗻 Please come by my students' posters -- they would love your support! 😇 📜: PepTune (openreview.net/forum?id=FQoy1…) -- we're handing out stickers!! 🌟 📍: Main Conference Poster Session 4, W-316 👩💻:…
🚨 ICML 2025 Paper 🚨 "On Measuring Long-Range Interactions in Graph Neural Networks" We formalize the long-range problem in GNNs: 💡Derive a principled range measure 🔧 Tools to assess models & benchmarks 🔬Critically assess LRGB 🧵 Thread below 👇 #ICML2025
If you are into generative models and interested in their applications to AI4Science - @bose_joey is an amazing person to work with!!!
🔎Do better expert models always lead to better model merging & MoErging? And how does expert training (duration) affect model upcycling? We tackle these questions in our recent work: “Less is More: Undertraining Experts Improves Model Upcycling” 🧵1/N
Started a new podcast with @tvergarabrowne ! Behind the Research of AI: We look behind the scenes, beyond the polished papers 🧐🧪 If this sounds fun, check out our first "official" episode with the awesome @gauthier_gidel from @Mila_Quebec: open.spotify.com/episode/7oTcqr…
A bit of backstory on PITA: the project started with a key goal—to fix the inherent bias in prior diffusion samplers (like iDEM!). PITA leverages importance sampling to guarantee correctness. This commitment to unbiasedness is what gives PITA its power. See thread for details👇
(1/n) Sampling from the Boltzmann density better than Molecular Dynamics (MD)? It is possible with PITA 🫓 Progressive Inference Time Annealing! A spotlight @genbio_workshop of @icmlconf 2025! PITA learns from "hot," easy-to-explore molecular states 🔥 and then cleverly "cools"…
iDEM arxiv.org/abs/2402.06121 introduced a very effective but biased training scheme for diffusion-based samplers. This was great, but we were never able to scale it to real molecules.
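The bias-correction idea in this thread can be illustrated with a generic self-normalized importance-sampling sketch. This is not PITA's actual algorithm — the double-well energy, the Gaussian "hot" proposal, and the temperatures here are toy assumptions — but it shows how reweighting samples from an easy-to-explore distribution recovers asymptotically correct expectations under a "cold" Boltzmann target:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_boltzmann(x, beta):
    # Toy double-well energy U(x) = (x^2 - 1)^2; target density ∝ exp(-beta * U(x))
    return -beta * (x**2 - 1.0) ** 2

# Broad "hot" proposal N(0, 2^2): cheap to sample, covers both wells
sigma = 2.0
x = rng.normal(0.0, sigma, size=100_000)
log_q = -0.5 * (x / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

# Importance weights re-target the samples to the cold (beta = 4) Boltzmann density
beta_cold = 4.0
log_w = log_boltzmann(x, beta_cold) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()  # self-normalization cancels the unknown partition function

# Consistent estimate of E[x^2] under the cold density (true value is ~1.03,
# since the mass concentrates around the wells at x = ±1)
est = np.sum(w * x**2)
print(f"weighted E[x^2] ≈ {est:.3f}")
```

Self-normalizing the weights sidesteps the intractable partition function at the cost of a small finite-sample bias that vanishes as the sample count grows; the practical difficulty is keeping the hot-to-cold gap small enough that the weights stay well-conditioned, which is presumably where PITA's progressive annealing comes in.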