Tal Golan
@TalGolanNeuro
Assistant professor @ Ben-Gurion University. Studies and tweets about human and machine vision.
The largest 'in silico electrophysiology' study I've been part of, revealing new insights into cortical representation of faces via experiments on CNNs. Led by @farzmahdi, with @KriegeskorteLab, Wilbert Zarco, and Winrich Freiwald.
🎉Excited to share our new @eLife paper! We offer a simple explanation for the peculiar mirror-symmetric viewpoint tuning found in brains and artificial neural networks. Check out the thread for more details! 🧵 doi.org/10.7554/eLife.…
NSD has profoundly shaped vision science. Inspired by it, we combined fMRI and Neuropixels to build a macaque dataset with 1,000 natural images. We’re excited to share Triple-N and welcome feedback from the community!
(1/6) Thrilled to share our triple-N dataset (Non-human Primate Neural Responses to Natural Scenes)! It captures thousands of high-level visual neuron responses in macaques to natural scenes using #Neuropixels. Link: biorxiv.org/content/10.110…
Our new study in @NatComputSci, led by Haibao Wang, presents a neural code converter aligning brain activity across individuals & scanners without shared stimuli by minimizing content loss, paving the way for scalable decoding and cross-site data analysis. nature.com/articles/s4358…
Our latest CS336 Language Modeling from Scratch lectures are now available! View the entire playlist here: youtube.com/playlist?list=…
Fellowship opportunity for American PhDs: Fulbright Postdoctoral Fellowships at Ben-Gurion University. DM me if you're interested in working together towards an application. fulbright.org.il/program/2/843#…
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A wonderful case where brain inspiration massively improved AI solutions. Work with @lu_zejin @martisamuser and Radoslaw Cichy arxiv.org/abs/2507.03168
If you are at @ASSC2025 and study eye movements, come see Jonathan Nir’s poster P390, presented NOW instead of Wednesday. He presents a thorough comparison of eye-movement classification algorithms!
Thrilled to see our TinyRNN paper in @nature! We show how tiny RNNs predict choices of individual subjects accurately while staying fully interpretable. This approach can transform how we model cognitive processes in both healthy and disordered decisions. doi.org/10.1038/s41586…
Still time to apply!
New postdoc position - come work with us in France! 🧠🥖🍷🇫🇷 Two-Year Postdoc Position on the Role of Temporal Integration in Visual Attention Using Human Intracerebral Recordings | EURAXESS euraxess.ec.europa.eu/jobs/351721
Taking a break from this silly war to discuss real existential issues. Academics who teach DL or similar: what's your solution for providing undergrads in large-ish classes with GPU resources for assignments, etc.?
Great work by Changde Du from Huiguang He's lab at the Chinese Academy of Sciences. How similar are visual and conceptual representations in (multimodal) large language models to those found in humans? It turns out quite similar! nature.com/articles/s4225…
Can we precisely and noninvasively modulate deep brain activity just by riding the natural visual feed? 👁️🧠 In our new preprint, we use brain models to craft subtle image changes that steer deep neural populations in primate IT cortex. Just pixels. 📝arxiv.org/abs/2506.05633
Absolute honor to be awarded the David Marr Medal by the Applied Vision Association Looking forward to my talk “Five Illusions Challenge Our Understanding of Visual Experience” Thanks so much to NOMIS Foundation, @ItalianAcademy, @columbiacss, @KriegeskorteLab, @ZuckermanBrain
Introducing All-TNNs: Topographic deep neural networks that exhibit ventral-stream-like feature tuning and a better match to human behaviour than the gold standard. Now out in Nature Human Behaviour. 👇
Now out in Nature Human Behaviour @NatureHumBehav: “End-to-end topographic networks as models of cortical map formation and human visual behaviour”. Please check our NHB link: nature.com/articles/s4156…
Five! osf.io/preprints/psya…
How many ideas should a scientific talk be about?
Announcement: Workshop at #CCN2025 🧠 Modeling the Physical Brain: Spatial Organization & Biophysical Constraints 🗓️ Monday, Aug 11 | 🕦 11:30–18:00 CET |📍 Room A2.07 🔗 Register: tinyurl.com/CCN-physical-b… #NeuroAI @CogCompNeuro
VSS Demo Night!! “FIVE ILLUSIONS CHALLENGE OUR UNDERSTANDING OF VISUAL EXPERIENCE” #VSS2025 @VSSMtg Thread below 🧵👇
Check out our new paper! Vision models often struggle with learning both transformation-invariant and -equivariant representations at the same time. @hafezghm shows that self-supervised prediction with proper inductive biases achieves both simultaneously.
🚨 Preprint Alert 🚀 📄 seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models arxiv.org/abs/2505.03176 Can we simultaneously learn both transformation-invariant and transformation-equivariant representations with self-supervised learning (SSL)?…
We are recruiting postdoctoral researchers to join an exciting research program focused on large-scale behavioral experiments. Come build and explore your own experimental social networks with us! jobs.sciencecareers.org/job/672009/pos…
We are recruiting two postdoctoral scholars for a research project in human collective intelligence and creativity at UC Davis and Cornell. Joint project with @enfascination,@norijacoby,@daltonconley, & Ofer Tchernichovski. Please forward this thread to relevant people. 1/n