Martin Hebart
@martin_hebart
Proud dad, Prof. of Computational Cognitive Neuroscience, author of The Decoding Toolbox, founder of http://things-initiative.org @martinhebart.bsky.social
I'm thrilled to announce the THINGS initiative: researchers around the world collecting and sharing large-scale behavioral and neuroscience data on object recognition and understanding, all using the same image dataset. things-initiative.org
I'm thrilled to see this preprint out! Lenny compellingly demonstrates, in a data-driven way, a coding scheme unifying distributed dimensions & category selectivity. ➡️ Higher visual cortex comprises many partially overlapping tuning maps that include, but go beyond, category tuning!
How is high-level visual cortex organized? In a new preprint with @martin_hebart & @KathaDobs, we show that category-selective areas encode a rich, multidimensional feature space 🌈 biorxiv.org/content/10.110… 🧵 1/n
We have an open PhD position in an exciting DFG-AEI project to further develop continuous psychophysics, in collaboration with Joan López-Moliner. More info: linkedin.com/posts/constant…
Our new study in @NatComputSci, led by Haibao Wang, presents a neural code converter aligning brain activity across individuals & scanners without shared stimuli by minimizing content loss, paving the way for scalable decoding and cross-site data analysis. nature.com/articles/s4358…
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions. Work with @lu_zejin @martisamuser and Radoslaw Cichy arxiv.org/abs/2507.03168
I can highly recommend working with Rosanne!
Curious about the visual human brain, computation, and pursuing a PhD in a vibrant and collaborative lab located in the heart of Europe? My lab is offering a 3-year PhD position! More details: rademakerlab.com/job-add
Big year for our lab at #OHBM2025! Thrilled to present an exciting mix of posters, talks, and lots of brainy fun 🧠🤓 Come check us out! We’d love to connect! @OHBM #COGNIZELab #OHBM #OHBM2025 #Neuroimaging
Very happy to announce that our paper comparing dimensions in human and DNN representations is now out in @NatMachIntell nature.com/articles/s4225…
What makes humans similar or different to AI? In a new study, led by @florianmahner and @lukas_mut and w/ @umuguc, we took a deep look at the factors underlying their representational alignment, with surprising results. arxiv.org/abs/2406.19087 🧵
Does AI perceive and understand the world the same way #humans do? @florianmahner, @lukas_mut and @martin_hebart investigated whether #AI recognizes objects similarly to humans & published the results in @NatMachIntell: tinyurl.com/477dumpz
Does #AI perceive and make sense of the world the same way humans do? @florianmahner, @lukas_mut & @martin_hebart @jlugiessen investigated whether AI recognizes objects similarly to humans and published their findings @NatMachIntell: tinyurl.com/2krukhzf
Our paper is now accepted at Neural Networks! This work builds on our previous threads, updated with deeper analyses. We revisit brain-to-image reconstruction using NSD + diffusion models—and ask: do they really reconstruct what we perceive? Paper: doi.org/10.1016/j.neun… 🧵1/12
Recent studies have shown photorealistic reconstructions from fMRI data using CLIP/diffusion models and the NSD dataset (Left). We evaluated the methods on the Deeprecon dataset (Shen+ 2017/19) with added annotations, but found the results not so impressive (Right).
Great work by Changde Du from Huiguang He's lab at the Chinese Academy of Sciences. How similar are visual and conceptual representations in (multimodal) large language models to those found in humans? It turns out quite similar! nature.com/articles/s4225…
Asking GPT-4o for a random choice is an *easy* way to reveal its bias 🙃 Choose a random digit? ➡️ 7 (70% of the time❗️) Biden vs. Trump? ➡️ Biden (100%❗️) Male vs. Female? ➡️ Female (84%❗️) Same story for many LLMs. Choice orders are randomized. 1/6 #icml2025
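The bias check described above boils down to: randomize the option order, query the model many times, and tally how often each option comes back. A minimal sketch of that tally logic (the `query_model` stub is hypothetical and just simulates a digit-biased model; swap in a real LLM API call to reproduce the experiment):

```python
import random
from collections import Counter

def query_model(prompt):
    # Hypothetical stand-in for a real LLM API call.
    # Simulates a model biased toward "7" about 70% of the time.
    if random.random() < 0.7:
        return "7"
    return random.choice([d for d in "0123456789" if d != "7"])

def measure_bias(options, n_trials=1000):
    """Tally how often each option is chosen, randomizing order per trial."""
    counts = Counter()
    for _ in range(n_trials):
        shuffled = options[:]
        random.shuffle(shuffled)  # rule out position bias, as in the thread
        prompt = "Pick one at random: " + ", ".join(shuffled)
        counts[query_model(prompt)] += 1
    return {opt: counts[opt] / n_trials for opt in options}

random.seed(0)
freqs = measure_bias([str(d) for d in range(10)])
print(max(freqs, key=freqs.get))  # the simulated model's favored digit
```

With a real model client plugged in, the same loop exposes the kind of skew reported above (e.g. "7" dominating a supposedly random digit choice).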
What are the organizing dimensions of language processing? We show that voxel responses are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals.
#VSS2025 Tomorrow in Talk Room 2 at 9:15 AM. @mayukh091 will be presenting our work on TopoNets! You can read more about this work at toponets.github.io.