Michael Cooper @ ICML
@coopermj_aiml
PhD Student @UofTCompSci and @UHN. ML for fair, efficient liver transplant prioritization. LLM exploration @AbridgeHQ. Likes ≠ Endorsement
💲😢 Work on predictive problems where samples are scarce and labels are expensive? Check out AutoElicit! 🔢 Use an LLM to extract prior distributions over the parameters of a predictive model. ⏳ Save ~6 months of labelling effort on real outcomes in dementia care.
1/10 🧵 LLMs can translate knowledge into informative prior distributions for predictive tasks. In our #ICML2025 paper, we introduce AutoElicit, a method for using LLMs to elicit expert priors for probabilistic models, and we evaluate the approach on healthcare tasks.
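A minimal sketch of the elicitation idea, assuming an OpenAI-compatible client; the prompt, feature names, and JSON schema below are illustrative, not the paper's exact pipeline:

```python
# Sketch of LLM prior elicitation in the spirit of AutoElicit (illustrative, not the authors' code).
import json
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

features = ["age", "mmse_score", "years_education"]  # hypothetical dementia-care predictors

prompt = (
    "You are a clinical expert. For a logistic regression predicting dementia progression "
    f"from standardized features {features}, give a Gaussian prior (mean, std) for each "
    "coefficient, returned as JSON of the form {\"feature\": [mean, std], ...} with no extra text."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Assumes the model returns bare JSON; in practice you would validate and retry.
priors = json.loads(resp.choices[0].message.content)

# The elicited Gaussians then serve as priors over coefficients in a probabilistic model
# (e.g. Bayesian logistic regression), replacing months of label collection with prior knowledge.
for name, (mu, sigma) in priors.items():
    print(f"{name}: Normal(mean={mu}, std={sigma})")
```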
🚀 Open-source + open dataset!! Going to be a fun weekend.
We just released the best 3B model, 100% open-source: open dataset, architecture details, exact data mixtures, and the full training recipe including pre-training, mid-training, post-training, and synthetic data generation, so everyone can train their own. Let's go open-source AI!
How do we reimagine healthcare systems with AI that is approximately correct but rapidly improving? Last August, Toronto hosted #MLHC24. ✨ We had clinicians & engineers work together to find errors in LLMs without assuming bad intent. See @coopermj_aiml's highlights 👇 🧵1/5
We red-teamed modern LLMs with practicing clinicians using real clinical scenarios. The LLMs: ✅ Made up lab test scores ✅ Gave bad surgical advice ✅ Claimed two identical X-rays looked different Here’s what this means for LLMs in healthcare. 📄 arxiv.org/abs/2505.00467 🧵 (1/)
1/7 🚀 Thrilled to announce that our paper ExOSITO: Explainable Off-Policy Learning with Side Information for ICU Lab Test Orders has been accepted to #CHIL2025! Please feel free to come by my poster session this Thursday to chat. #MedAI #HealthcareAI
🚨 This is the future of causal inference. 🚨👇 CausalPFN is a foundation model trained on simulated causal worlds—it estimates heterogeneous treatment effects in-context from observational data. No retraining. Just inference. A 𝘮𝘢𝘴𝘴𝘪𝘷𝘦 leap forward for the field. 🚀
Can neural networks learn to map from observational datasets directly onto causal effects? YES! Introducing CausalPFN, a foundation model trained on simulated data that learns to do in-context heterogeneous causal effect estimation, based on prior-fitted networks (PFNs). Joint…
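A conceptual sketch of the in-context interface such a model exposes, with a simple T-learner standing in for CausalPFN's pretrained transformer; the function and variable names are illustrative:

```python
# The "context" is an observational dataset (X, T, Y); the model returns heterogeneous
# treatment-effect (CATE) estimates for query covariates with no retraining.
# A T-learner stands in here for the single forward pass of a prior-fitted network.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def estimate_cate_in_context(X_ctx, T_ctx, Y_ctx, X_query):
    """Estimate CATE for X_query, conditioning only on the observational context."""
    m1 = GradientBoostingRegressor().fit(X_ctx[T_ctx == 1], Y_ctx[T_ctx == 1])  # treated outcomes
    m0 = GradientBoostingRegressor().fit(X_ctx[T_ctx == 0], Y_ctx[T_ctx == 0])  # control outcomes
    return m1.predict(X_query) - m0.predict(X_query)

# Toy observational data with effect heterogeneity tau(x) = 2 * x[:, 0]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
T = rng.binomial(1, 0.5, size=500)
Y = X[:, 0] + T * (2 * X[:, 0]) + rng.normal(scale=0.1, size=500)

print(estimate_cate_in_context(X, T, Y, X[:5]))  # per-individual effect estimates
```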