Sarath Chandar
@apsarathchandar
Associate Professor @polymtl and @Mila_Quebec; Canada CIFAR AI Chair; Machine Learning Researcher. Pro-bono office hours: https://t.co/tK69DKRf9N?amp=1
Always a computer science nerd at heart — grateful to put it to good use helping people and making a real difference in medicine. Thank you to the Fondation de l’Institut de Cardiologie de Montréal @ICMtl for the feature and support. fondationicm.org/en/blog/ai-imp…
Looking forward to seeing folks at @CoLLAs_Conf in Philly in a couple of weeks!
📢 About 2 weeks to go until #CoLLAs2025! Here’s what you need to make the most of it. 🗓 Program: lifelong-ml.cc/Conferences/20… ✅ Accepted Papers: lifelong-ml.cc/Conferences/20… 📍 Venue & Local Info: lifelong-ml.cc/Conferences/20… 🔗 Registration & conference details:…
A new paper accepted at @COLM_conf 2025! I led a group of 3 brilliant students to dive deep into the problem of discrimination in language models. We discovered that models that make racist decisions don’t always have biased thoughts!
I’ll be talking about the past, present, and future of neural networks as the first lecture of @CIFAR_News DLRL Summer School 2025 in an hour! Looking forward to participating in my favourite summer school! @AmiiThinks @Mila_Quebec @VectorInst

🔥 Our Nature paper just dropped! We built EchoNext—an AI model that reads ECGs to detect hidden structural heart disease (SHD) 💡 🔬 Trained on 1.2M ECG-echo pairs 🏥 Validated across 6 health systems 📈 AUROC 0.85 👀 Outperforms cardiologists 🚨 Finds undiagnosed SHD in 73%…
🧵1/ Today, we published a key milestone towards AI-based cardiac screening in Nature. doi.org/10.1038/s41586… EchoNext outperformed cardiologists and found thousands of high-risk patients missed in routine care. We also made a version available to the world.
Transformers pre-trained on raw bytes (no tokenization) are SOTA lossless compressors (better than gzip, etc) on multiple data modalities (audio, images, text) With @HeurtelDepeiges @JoelVeness65957 Tim Genewein 📅 Tue 15 July ⏰ 16:30 – 19:00 📍East Exhibition Hall A-B #E-3410
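For context on why byte-level prediction implies compression: an arithmetic coder driven by a next-byte model spends roughly -log2 p(byte) bits per byte, so a better predictor means a smaller file. Below is a minimal sketch of that link, with a toy adaptive frequency model standing in for the pretrained transformer; the helper name and toy data are illustrative, not from the paper.

```python
# Sketch of the "model as lossless compressor" idea: an arithmetic coder
# driven by a next-byte model costs about -log2 p(byte) bits per byte.
# The adaptive frequency model here is only a stand-in for a byte-level
# transformer; a stronger predictor pushes the bit count down further.
import gzip
import math
from collections import Counter

def ideal_code_length_bits(data: bytes) -> float:
    """Bits an arithmetic coder would need when driven by a simple
    adaptive (Laplace-smoothed) next-byte frequency model."""
    counts = Counter()
    total_bits = 0.0
    seen = 0
    for b in data:
        p = (counts[b] + 1) / (seen + 256)   # Laplace smoothing over 256 byte values
        total_bits += -math.log2(p)          # cost of coding this byte
        counts[b] += 1
        seen += 1
    return total_bits

if __name__ == "__main__":
    data = ("the quick brown fox jumps over the lazy dog " * 200).encode()
    print("gzip bits:          ", len(gzip.compress(data)) * 8)
    print("adaptive-model bits:", round(ideal_code_length_bits(data)))
```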
Honored to get the outstanding position paper award at @icmlconf :) Come attend my talk and poster tomorrow on human-centered considerations for a safer and better future of work. I will be recruiting PhD students at @stonybrooku @sbucompsc this coming fall. Please get in touch.
Very excited for a new #ICML2025 position paper accepted as oral w @mbodhisattwa & @TuhinChakr! 😎 What are the longitudinal harms of AI development? We use economic theories to highlight AI’s intertemporal impacts on livelihoods & its role in deepening labor-market inequality.
Our new paper got #ACL2025 oral! 🎉 If you're interested in LLM training dynamics, its phases, and how scaling affects them — check it out! @Mila_Quebec x.com/mirandrom/stat…
Step 1: Understand how scaling improves LLMs. Step 2: Directly target underlying mechanism. Step 3: Improve LLMs independent of scale. Profit. In our ACL 2025 paper we look at Step 1 in terms of training dynamics. Project: mirandrom.github.io/zsl Paper: arxiv.org/pdf/2506.05447
Wrote my first blog post! I wanted to share a powerful yet under-recognized way to develop emotional maturity as a researcher: making it a habit to read about the ✨past✨ and learn from it to make sense of the present.
I am happy to share that TAPNext got accepted at #ICCV2025! TAPNext is a new Point Tracking model that is SOTA in both tracking quality and speed. It is drastically different from all previous point tracking methods and is based on a plain ViT.
We're very excited to introduce TAPNext: a model that sets a new state of the art for Tracking Any Point in videos, by formulating the task as Next Token Prediction. For more, see: tap-next.github.io 🧵
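A toy sketch of the general recipe of casting point tracking as next-token prediction (not TAPNext's actual architecture or tokenizer; the bin count and helpers below are assumptions for illustration): quantize each (x, y) position into discrete coordinate tokens, so a track becomes a sequence a standard autoregressive decoder can model.

```python
# Toy illustration (not TAPNext itself) of tracking-as-next-token-prediction:
# quantize each (x, y) position into discrete coordinate tokens, so a point
# track becomes a token sequence for an autoregressive decoder.
from typing import List, Tuple

NUM_BINS = 256  # quantization resolution per axis (assumption)

def point_to_tokens(x: float, y: float, width: int, height: int) -> Tuple[int, int]:
    """Map a continuous (x, y) image position to a pair of coordinate tokens."""
    tx = min(int(x / width * NUM_BINS), NUM_BINS - 1)
    ty = min(int(y / height * NUM_BINS), NUM_BINS - 1)
    return tx, NUM_BINS + ty          # offset y-tokens into a separate vocab range

def track_to_sequence(track: List[Tuple[float, float]], width: int, height: int) -> List[int]:
    """Flatten a per-frame point track into one token sequence; frame t+1's
    tokens would be predicted from the tokens of frames <= t."""
    seq: List[int] = []
    for x, y in track:
        seq.extend(point_to_tokens(x, y, width, height))
    return seq

# Example: a point drifting right across a 640x480 video
print(track_to_sequence([(100, 200), (110, 200), (120, 201)], 640, 480))
```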
I’ve been meaning to say it for a while, but we should get rid of this NeurIPS paper checklist. A well-intentioned experiment that should never have continued and has predictably grown out of control (it can only get longer!).
The NeurIPS paper checklist corroborates the bureaucratic theory of statistics. argmin.net/p/standard-err…
📢 Present your recent work at #CoLLAs2025! Our Journal/Sister Conference Track invites papers published in top-tier venues (e.g., @JmlrOrg @TmlrOrg @NeurIPSConf @iclr_conf etc) on lifelong learning, continual learning, curriculum, meta & federated learning, adaptation in LLMs,…
In the beginning, there was BERT. Eventually BERT gave rise to RoBERTa. Then, DeBERTa. Later, ModernBERT. And now, NeoBERT. The new state-of-the-art small-sized encoder:
🧠 Working on a radiology or cardiology AI model? Bring YOUR model to PACS-AI. We’re building an open-source platform to run CV models directly in clinical imaging workflows. 🚀 Featured in CIFAR Reach → cifar.ca/wp-content/upl… 📬 DM me for Slack access 🔓 Open release this…
We’ve received several requests from authors, and in response, the submission deadline for the RL4RS Workshop has been extended to June 2nd (AoE). Check out the Call for Papers and submission link below. We look forward to your contributions! rl4rs.github.io/RL4RS/
We actually showed almost the same thing a year ago. You don’t even need a diffusion formulation for scalable modeling: just predict 3D structures as text with a Transformer to outperform equivariant diffusion. bindgpt.github.io
It is trivial to explain why an LLM can never ever be conscious or intelligent. Utterly trivial. It goes like this: LLMs have zero causal power. Zero agency. Zero internal monologue. Zero abstracting ability. Zero understanding of the world. They are tools for conscious beings.
⏳ 1 week left to submit to the CoLLAs 2025 Workshop Track! Perfect for early-stage ideas, interdisciplinary work, or papers under review. This year’s spotlight: 🧠 Lifelong Learning in Cognitive Science 📅 Deadline: May 22, 2025 (AoE) 📍 Posters at CoLLAs 2025 🔗…