Lazar Atanackovic
@lazar_atan
Ph.D. Candidate, @UofT, @Vectorinst. AI4Science, flows and diffusion, ML for cell dynamics. Prev @Mila_Quebec @valence_ai. 🦋http://lazaratan.bsky.social
🚀Introducing — Meta Flow Matching (MFM) 🚀 Imagine predicting patient-specific treatment responses for unseen cases or building generative models that adapt across different measures. MFM makes this a reality. 📰Paper: arxiv.org/abs/2408.14608 💻Code: github.com/lazaratan/meta…
Wrapping up #ICML2025 on a high note — thrilled (and pleasantly surprised!) to win the Best Paper Award at @genbio_workshop 🎉 Big shoutout to the team that made this happen! Paper: Forward-Only Regression Training of Normalizing Flows (arxiv.org/abs/2506.01158) @Mila_Quebec
1/ Where do Probabilistic Models, Sampling, Deep Learning, and Natural Sciences meet? 🤔 The workshop we’re organizing at #NeurIPS2025! 📢 FPI@NeurIPS 2025: Frontiers in Probabilistic Inference – Learning meets Sampling Learn more and submit → fpiworkshop.org…
AI4Mat is back for NeurIPS! Time to crystallize those ideas and make a solid-state submission by August 22, 2025 💪 New this year: opt in your work for our Research Learning from Speaker Feedback program -- a new structured discussion format where spotlight presenters receive…
A bit late, but excited to announce our new paper, "Defining and benchmarking open problems in single-cell analysis," a joint effort led by our team (@scottgigante, @DBBurkhardt) at @Yale and by @fabian_theis and @MDLuecken at Helmholtz! 🚀 **Open Problems** is a living,…
Reprogramming cells with transcription factors is our most expressive tool for engineering cell state. Traditionally, we found TFs by ~guesswork. At @icmlconf we're sharing @newlimit's SOTA AI models that can design reprogramming payloads by building on molecular foundation models.
📢Presenting SDE Matching🔥🔥🔥 🚀We extend diffusion models to construct a simulation-free framework for training Latent SDEs. It enables sampling from the exact posterior process marginals without any numerical simulations. 📜: arxiv.org/abs/2502.02472 🧵1/8
Excited to introduce flow Q-learning (FQL)! Flow Q-learning is a *simple* and scalable data-driven RL method that trains an expressive policy with flow matching. Paper: arxiv.org/abs/2502.02538 Project page: seohong.me/projects/fql/ Thread ↓
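Since the FQL policy is a flow-matching model, drawing an action means integrating a learned velocity field from noise to an action, conditioned on the state. A minimal sketch of that sampling step only (the network, step count, and conditioning here are placeholders, not FQL's actual architecture or training objective):

```python
# Sketch of sampling from a flow-matching policy (placeholder design, not FQL's).
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, action, t):
        # predict the velocity of the action sample at flow time t
        return self.net(torch.cat([state, action, t], dim=-1))

@torch.no_grad()
def sample_action(vf, state, action_dim, n_steps=10):
    a = torch.randn(state.shape[0], action_dim)       # start from Gaussian noise
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((state.shape[0], 1), i * dt)
        a = a + dt * vf(state, a, t)                   # Euler step along the flow
    return a

vf = VelocityField(state_dim=17, action_dim=6)
actions = sample_action(vf, torch.randn(4, 17), action_dim=6)
```

The appeal is that the policy class is expressive (multimodal action distributions) while sampling stays cheap: a handful of Euler steps per action.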
Soaking up the sun (surprisingly) at #ICML2025. Come check out our work on generative modeling on the space of distributions, with cool applications to spatial genomics. With @brandondamos, Aram Pooladian, & @dana_peer. 🗓️ Today, 4:30-7pm 📍 East Exhibition Hall, #3301
Come check out SBG, happening now! W-115, 11-1:30, with @charliebtan @bose_joey Chen Lin @leonklein26 @mmbronstein
We're not KFC, but come watch us cook with our Feynman-Kac correctors, 4:30 pm today (July 16) at the @icmlconf poster session, East Exhibition Hall #3109 @k_neklyudov @AlexanderTong7 @tara_aksa @OhanesianViktor
Really excited to share that we're in the @ycombinator summer batch! I'm even more excited to be teaming up with @phil_fradkin and @ianshi3 on this next chapter. At @blankbio_, we're building the next generation of foundation models for RNA.
We're excited to introduce our new work on mature mRNA property prediction, co-first authored with the amazing @ianshi3 and @Taykhoom_Dalal. We introduce mRNABench to standardize evaluation and present a study on building more efficient RNA foundation models. 🧵
We're excited to release 𝐦𝐑𝐍𝐀𝐁𝐞𝐧𝐜𝐡, a new benchmark suite for mRNA biology containing 10 diverse datasets with 59 prediction tasks, evaluating 18 foundation model families. Paper: biorxiv.org/content/10.110… GitHub: github.com/morrislab/mRNA… Blog: blank.bio/post/mrnabench
Online Now! Bacterial ADP-heptose triggers stem cell regeneration in the intestinal epithelium following injury dlvr.it/TLsYzs #stemcells
🎉Personal update: I'm thrilled to announce that I'm joining Imperial College London @imperialcollege as an Assistant Professor of Computing @ICComputing starting January 2026. My future lab and I will continue to work on building better Generative Models 🤖, the hardest…
Announcing Ambient Protein Diffusion, a state-of-the-art 17M-parameter generative model for protein structures. Diversity improves by 91% and designability by 26% over the previous 200M-parameter SOTA model for long proteins. The trick? Treat low-pLDDT AlphaFold predictions as low-quality data.
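My rough reading of the low-quality-data trick, in the spirit of ambient-diffusion-style training (the threshold and pLDDT-to-noise mapping below are assumptions for illustration, not the paper's values): low-confidence structures only contribute at noisier diffusion times, where their errors are largely washed out by the added noise.

```python
# Sketch: restrict low-pLDDT structures to high-noise diffusion times (hypothetical rule).
import numpy as np

rng = np.random.default_rng(0)

def min_noise_level(mean_plddt, t_min_low=0.4):
    """Smallest diffusion time a structure may be trained at, given its mean pLDDT."""
    # High-confidence structures can be used at any noise level; lower-confidence
    # ones are only used at t >= t_min_low (placeholder threshold).
    return 0.0 if mean_plddt >= 90.0 else t_min_low

def sample_training_time(mean_plddt):
    t_min = min_noise_level(mean_plddt)
    return rng.uniform(t_min, 1.0)    # diffusion time in [t_min, 1]

# A confident prediction trains at all noise levels; a low-pLDDT one only when noisy.
print(sample_training_time(95.0), sample_training_time(70.0))
```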
New OpenProblems paper out! 📝 Led by Malte Lücken with Smita Krishnaswamy, we present openproblems.bio – a community-driven platform benchmarking single-cell analysis methods. Excited about transparent, evolving best practices for the field! 🔗 nature.com/articles/s4158…
We red-teamed modern LLMs with practicing clinicians using real clinical scenarios. The LLMs: ✅ Made up lab test scores ✅ Gave bad surgical advice ✅ Claimed two identical X-rays looked different Here’s what this means for LLMs in healthcare. 📄 arxiv.org/abs/2505.00467 🧵 (1/)
Paper reviewing can be easy if we follow these simple rules: 1. Summary: A good rule of thumb is to write enough detail that the summary couldn't apply to any other paper, highlighting the work's key claims, details, and novelty. 1/3
(1/n) Sampling from the Boltzmann density better than Molecular Dynamics (MD)? It is possible with PITA 🫓 Progressive Inference Time Annealing! A spotlight at the @genbio_workshop at @icmlconf 2025! PITA learns from "hot," easy-to-explore molecular states 🔥 and then cleverly "cools"…
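For intuition only, a toy illustration of the hot-to-cold annealing idea (not PITA's actual training or inference procedure): Langevin dynamics on a double-well energy, started at a high temperature where both modes are easy to reach, then cooled along a schedule toward the target temperature.

```python
# Toy annealed Langevin dynamics on U(x) = (x^2 - 1)^2; schedule values are arbitrary.
import numpy as np

def energy_grad(x):
    # gradient of the double-well energy U(x) = (x^2 - 1)^2
    return 4.0 * x * (x**2 - 1.0)

rng = np.random.default_rng(0)
x = rng.normal(size=1024)                  # particles
step = 1e-3
temps = np.geomspace(5.0, 0.5, num=2000)   # hypothetical cooling schedule, hot -> cold

for T in temps:
    noise = rng.normal(size=x.shape)
    # overdamped Langevin step targeting exp(-U(x)/T)
    x = x - step * energy_grad(x) + np.sqrt(2.0 * step * T) * noise

# At the final temperature both wells should be roughly equally populated.
print("fraction of samples in the right well:", (x > 0).mean())
```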
biorxiv.org/content/10.110… A nice benchmark of single-cell "foundation models" (scGPT, scFoundation) and GEARS (a GNN model), further hyped as "virtual cell models," against linear baselines on perturbation prediction. Long story short: they can't beat the linear baselines. 1/
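For context, the linear baselines in these comparisons can be very simple. A toy sketch (synthetic data and a hypothetical gene-embedding feature, not the benchmark's exact protocol): ridge regression from an embedding of the perturbed gene to the mean expression shift it induces, evaluated on held-out perturbations.

```python
# Sketch of a linear perturbation-prediction baseline on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_perts, n_genes, d_embed = 200, 1000, 64          # hypothetical sizes

G = rng.normal(size=(n_perts, d_embed))             # embedding of each perturbed gene (assumed given)
W = rng.normal(size=(d_embed, n_genes)) * 0.1
Y = G @ W + rng.normal(scale=0.5, size=(n_perts, n_genes))  # mean expression shift per perturbation (toy)

train, test = np.arange(150), np.arange(150, 200)   # unseen perturbations held out

ridge = Ridge(alpha=10.0).fit(G[train], Y[train])
print("held-out R^2:", r2_score(Y[test], ridge.predict(G[test])))
```

The point of such baselines is that they generalize to unseen perturbations through the gene features alone, which sets a surprisingly strong bar for the larger models.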