Vahid Balazadeh
@vahidbalazadeh
PhD Student at @UofT and @VectorInst. Prev. Research Intern @Autodesk
1/10 🧵 LLMs can translate knowledge into informative prior distributions for predictive tasks. In our #ICML2025 paper, we introduce AutoElicit, a method for using LLMs to elicit expert priors for probabilistic models and evaluate the approach on healthcare tasks.
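The idea can be sketched in miniature. This is not the paper's implementation: the elicited prior below is hard-coded as a stand-in for what prompting an LLM would return, and the model is plain conjugate Bayesian linear regression on toy data. The feature names and numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical elicited prior: in an AutoElicit-style pipeline, the (mean, std)
# for each coefficient would come from prompting an LLM about the task;
# here we hard-code plausible values as a stand-in.
elicited_prior = {"age": (0.8, 0.2), "bmi": (0.5, 0.3)}

# Toy healthcare-style data: outcome driven mostly by the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                      # columns: age, bmi (standardized)
y = X @ np.array([0.9, 0.4]) + rng.normal(scale=0.5, size=20)

def posterior_mean(X, y, prior_mean, prior_std, noise_std=0.5):
    """Conjugate Bayesian linear regression: Gaussian prior x Gaussian likelihood."""
    prior_prec = np.diag(1.0 / np.asarray(prior_std) ** 2)
    post_cov = np.linalg.inv(prior_prec + X.T @ X / noise_std**2)
    return post_cov @ (prior_prec @ np.asarray(prior_mean) + X.T @ y / noise_std**2)

m = np.array([v[0] for v in elicited_prior.values()])
s = np.array([v[1] for v in elicited_prior.values()])
informative = posterior_mean(X, y, m, s)          # LLM-elicited prior
vague = posterior_mean(X, y, np.zeros(2), np.full(2, 10.0))  # uninformative prior
print(informative, vague)
```

With few observations (common in healthcare), the informative prior regularizes the posterior toward the elicited values, which is where the approach pays off.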
The twelve-day war has ended — or at least, for now it seems so. Given the level of extremism on both sides, it’s an optimistic hope that it won’t repeat. When the war started, I was overwhelmed with strange and heavy emotions. One piece of bad news after another, one worry…
Unfortunately, once again, politics is triumphing over #humanity, and leaders on both sides are dragging people into #war. I have #Iranian, #Israeli, #Muslim, and #Jewish friends and colleagues who may be feeling frustrated, heartbroken, disappointed, deeply stressed, and…
1/7 🚀 Thrilled to announce that our paper ExOSITO: Explainable Off-Policy Learning with Side Information for ICU Lab Test Orders has been accepted to #CHIL2025! Please feel free to come by my poster session this Thursday to chat. #MedAI #HealthcareAI
We red-teamed modern LLMs with practicing clinicians using real clinical scenarios. The LLMs: ✅ Made up lab test scores ✅ Gave bad surgical advice ✅ Claimed two identical X-rays looked different Here’s what this means for LLMs in healthcare. 📄 arxiv.org/abs/2505.00467 🧵 (1/)
(1/5) 👑 New Discrete Diffusion Model — MDM-Prime Why restrict tokens to just masked or unmasked in masked diffusion models (MDM)? We introduce MDM-Prime, a generalized MDM framework that enables partially unmasked tokens during sampling. ✅ Fine-grained denoising ✅ Better…
CausalPFN: Amortized Causal Effect Estimation via In-Context Learning: arxiv.org/abs/2506.07918
finally, the wind is changing direction: causal inference becomes easier if we give up on hand-designing estimation algorithms ourselves (i don't think we've evolved to do that well). let learning find one for you!
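A toy stand-in for the amortized idea (not CausalPFN's transformer-based method): simulate many observational datasets with known effects, learn a map from dataset summaries to the effect, then estimate a new dataset's effect in one forward pass. The simulator, summary features, and linear fit are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(tau, n=500):
    """Observational data with a confounder u affecting both treatment and outcome."""
    u = rng.normal(size=n)
    t = (rng.random(n) < 1 / (1 + np.exp(-2 * u))).astype(float)
    y = tau * t + u + rng.normal(scale=0.5, size=n)
    return t, y

def summarize(t, y):
    """Dataset-level summary statistics fed to the amortized estimator."""
    naive = y[t == 1].mean() - y[t == 0].mean()   # confounded difference in means
    return np.array([naive, t.mean(), np.cov(t, y)[0, 1], y.std(), 1.0])

# "Training": simulate datasets with known effects, fit summaries -> tau.
taus = rng.uniform(-2, 2, size=300)
feats = np.array([summarize(*simulate(tau)) for tau in taus])
w, *_ = np.linalg.lstsq(feats, taus, rcond=None)

# "Inference": a single forward pass on a fresh dataset; no estimator was
# hand-designed -- the simulator plus learning implicitly found one.
t, y = simulate(1.0)
naive = y[t == 1].mean() - y[t == 0].mean()
amortized = summarize(t, y) @ w
print(naive, amortized)   # naive is biased upward by confounding
```

The learned map effectively discovers how to debias the naive estimate from simulations alone, which is the spirit of letting learning find the estimator.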
Phil (@phil_fradkin) and I will be presenting Orthrus (biorxiv.org/content/10.110…) as a spotlight poster at the Workshop on AI for New Drug Modalities at #NeurIPS2024! Our poster will be up starting 11:40AM in West Meeting Room 109, 110. Excited to be sharing some new results!
Current LLM personalization methods can be costly and require multiple models. We introduce the Preference Pretrained Transformer, using in-context learning for scalable personalization without retraining. @NeurIPSConf 📅 Sat 14 Dec 4:30 - 5:30 pm PST 📍West Exhibition Hall A

If you have a post-training setup for your LLM, you're most likely missing accurate credit assignment without knowing it. Come to our poster to see how we bring inference-time compute into training time to fix this and get better reasoners.
Starting from OpenAI’s PPO, people have been simplifying it by removing its mechanisms, especially credit assignment, without losing performance. This contradicts the deep RL belief that credit assignment is crucial. Find out how we address this contradiction at the MATHAI workshop at 11AM & 4PM.
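The contradiction can be made concrete with a toy numeric sketch (not the paper's method): PPO-style Generalized Advantage Estimation assigns each token its own advantage, while simplified variants broadcast one trajectory-level signal to every token. The rewards and value estimates below are made-up numbers.

```python
import numpy as np

# Toy 5-token episode: sparse terminal reward, as in LLM reasoning tasks.
rewards = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
values  = np.array([0.2, 0.3, 0.5, 0.7, 0.9])   # critic's value estimates

def gae(rewards, values, gamma=1.0, lam=0.95):
    """Generalized Advantage Estimation: per-token credit assignment (PPO-style)."""
    adv = np.zeros_like(rewards)
    next_value = 0.0   # value after the terminal step
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]
        running = delta + gamma * lam * running
        adv[t] = running
        next_value = values[t]
    return adv

per_token = gae(rewards, values)
# Critic-free simplifications give every token the same advantage
# (in practice minus a group baseline, which is constant per trajectory):
trajectory_level = np.full_like(rewards, rewards.sum())
print(per_token, trajectory_level)
```

The per-token advantages differ across positions while the trajectory-level signal is uniform, which is exactly the credit-assignment information the simplified variants discard.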
Please attend our #NeurIPS2024 spotlight poster later today (11am-2pm) at the east exhibit hall #2809. Looking forward to meeting both new and familiar faces and having engaging conversations!