Jeff Nirschl
@jnirsch
M.D.–Ph.D. interested in computational image analysis, digital pathology, and neuropathology. Personal account: all opinions are my own.
🧬 What if we could build a virtual cell to predict how cells respond to drugs or genetic perturbations? Super excited to introduce CellFlux at #ICML2025 — an image generative model that simulates cellular morphological changes from microscopy images. yuhui-zh15.github.io/CellFlux/ 💡…
"Dr. Dirk Keene, a professor and the director of neuropathology at UW Medicine who leads the brain bank, said if federal funding dries up, he’ll go to almost any end to 'honor the gift' of people's donation." nbcnews.com/health/health-…
adrc.wisc.edu/news/nirschl-h…
Today, the Wisconsin Brain Donor Program and Wisconsin ADRC join the national observance of #BrainDonorAwarenessDay to recognize and honor the individuals and families who have made the extraordinary decision to donate their brains to science. Learn more at…
My close collaborator Prof. Jeff Nirschl @jnirsch has started his research lab at the University of Wisconsin!!🎉 Dr. Nirschl has a rare combination of deep expertise in both medicine/pathology and AI/ML. Please check out the two open positions in his lab: Scientific software…
Introducing MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research #CVPR2025 ✅ 1k multimodal reasoning VQAs testing MLLMs for science 🧑🔬 Biology researchers manually created the questions 🤖 RefineBot: a method for fixing QA language shortcuts 🧵
🚨Large video-language models like LLaVA-Video can do single-video tasks. But can they compare videos? Imagine you’re learning a sports skill like kicking: can an AI tell how your kick differs from an expert video? 🚀 Introducing "Video Action Differencing" (VidDiff), ICLR 2025 🧵
🚀 Introducing Temporal Preference Optimization (TPO) – a video-centric post-training framework that enhances temporal grounding in long-form videos for Video-LMMs! 🎥✨ 🔍 Key Highlights: ✅ Self-improvement via preference learning – Models learn to differentiate well-grounded…
Biomedical datasets are often confined to specific domains, missing valuable insights from adjacent fields. To bridge this gap, we present BIOMEDICA: an open-source framework to extract and serialize PMC-OA. 📄Paper: lnkd.in/dUUgA6rR 🌐Website: lnkd.in/dnqZZW4M
1. When you hit your 40s, you’ll see two types of people: those who took care of themselves and those who did not. 2. Lifting weights is investing in your future self. Start now to be functional well into your 80s while increasing your attractiveness.
How can we build an AI Virtual Cell 🔮🧬 that simulates all functions and interactions of a cell? How will it transform research and drive breakthroughs in programmable biology, drug discovery and personalized medicine? 🚀 Take a look at our Perspective! arxiv.org/pdf/2409.11654
📢 Check out our ECCV paper, “Viewpoint Textual Inversion” (ViewNeTI), where we show that text-to-image diffusion models have 3D view control in their text input space jmhb0.github.io/view_neti/
🚀 Can self-training improve general LVLM performance? 🏎️ How can you adapt your LVLMs to new and diverse applications? 📢 Happy to announce Video-STaR, a self-training approach to utilize any supervision for video instruction tuning! 🧵👇
It’s time to revisit the active learning loop. Check out joint work @TmlrOrg with Sanket and @jnirsch introducing DropQuery: a simple, effective AL strategy designed to leverage the robust representations of vision foundation models. openreview.net/pdf?id=u8K83M9…
Excited to introduce μ-Bench, a new comprehensive benchmark for microscopy image understanding including 22 perception tasks. It spans scales (from pathology images to electron microscopy), modalities, disciplines, and organisms, and is available for public use on HF. We also…
Microscopy is a cornerstone of biomedical research. Vision-language models (VLMs) offer a promising solution for large-scale biomedical image analysis; however, standardized and diverse benchmarks have been lacking until now. Introducing µ-Bench ale9806.github.io/uBench-website/ [1/7]
Check out our VLM benchmark led by @Ale9806_ and @jnirsch! Future AI could help biologists make new discoveries with images, but first, we need a basic understanding of image content. Our work shows there's still a long way to go.
Thanks for the nomination @VentureBeat. It's an honor to be included in this glowing list of outstanding women in AI. @uwsmph @UWMadison_BME @UWiscRadiology @IDiAlab
We're thrilled to unveil our 2024 nominees for the prestigious VentureBeat Women in AI Awards! Join us in celebrating groundbreaking women leaders in the exciting world of AI only at #VBTransform on July 10th. View the full list of nominees here: bit.ly/3zg2vZo