Rosanne Rademaker
@RL_Rademaker
Max Planck group leader at ESI Frankfurt | human cognition, fMRI, MEG, computation | find me here: [email protected]
Curious about the visual human brain, computation, and pursuing a PhD in a vibrant and collaborative lab located in the heart of Europe? My lab is offering a 3-year PhD position! More details: rademakerlab.com/job-add
Can we draw conclusions about orientation tuning from EEG data? And… exactly how inhomogeneous is decoding across the visual field? What started as a “quick analysis” of some EEG data is now out in #naturecommunications. See the tweeprint below or check rdcu.be/eugi0.
Who doesn’t like a good model of the brain? Yet, from simple regression to artificial networks, some limitations keep popping up (eg, overfitting). @mijowolff & I saw some cool but puzzling data, ran a quick analysis, and found 1 such limitation: model mimicry. Tweeprint 🚨 1/N
We have an open PhD position in an exciting DFG-AEI project to further develop continuous psychophysics in collaboration with Joan López-Moliner. More info: linkedin.com/posts/constant…
Now out in NatComms: Mice and monkeys spontaneously shift through comparable cognitive states - and it's written all over their faces! (1/7) nature.com/articles/s4146…
In Frankfurt this fall 🥳
📢 Register for the #BernsteinConference 2025 now! 🗓️ Take advantage of the early-bird registration fee before July 30. All info here 👉 bernstein-network.de/bernstein-conf…
Can't make it to Amsterdam for CCN2025? Join a local meetup! Watch the livestream with colleagues at institutions worldwide. On our website we host a map with existing meetups near you & you can also register to host your own! 📍 View meetups & register: 2025.ccneuro.org/local-meetups/
💥Emotions are central to the human experience 😠😳😃☺️😔😕😮🙂 Our Human Neural Circuitry team just took a step towards understanding how they arise–using brain-wide electrical ⚡recordings in humans and mice @ScienceMagazine Read on for more… 1/n
From a bear catching a fish to a tennis player hitting a ball - extrapolating the trajectory of an object is critical to knowing its future location. Giuliana will talk about the mechanisms that underlie such motion extrapolation at #VSS2025 in the next session in Talk Room 2!!
Welcome #VSS2025! For those attending the sunny beaches and science at this year's Florida conference, make sure not to miss the awesome talks and posters from our lab!
Ever wonder why V1, a primary sensory area, is recruited when images are merely held in mind? Find out in an hour from now in Talk Room 1, #VSS2025

Wanna know about sensory & memory representations in visual cortex? After integrating the wisdom from several reviewers (and clever comments from 2 more reviewers at #elife still to go), this paper is now officially out: elifesciences.org/reviewed-prepr… (tweeprint below)
Are memories noisier versions of what we perceive? Fundamentally different? Seriously, think about it… Early visual cortex processes what we see around us, but also has information about images briefly held in mind. The two must be different… but how? TWEEPRINT ALERT! 🚨 🧵1/n
Very happy that this work is now published in eLife! We find that humans are unique in the way they encode pairs of stimuli, in the context of symbol-object relations. If you've learned that A->B, you spontaneously generalize it to B->A! The full story: elifesciences.org/articles/87380
How can we define symbols, and are humans unique in their ability to learn symbolic representations? In a recent study, we investigated this, using fMRI to directly compare how humans and macaque monkeys encode associations between stimuli biorxiv.org/content/10.110…
Here's the official advertisement for a PhD position in our lab: uni-giessen.de/de/ueber-uns/k… Please see below for context. If you're interested in applying, feel free to get in touch beforehand - happy to informally answer any questions you may have. Feel free to forward, too!
Job alert! We are now looking for a PhD student starting Sept 2025, for a project on visual relations between people and objects, using behavior, EEG, fMRI, and ANNs. Bonus: The project involves a collaboration with the brilliant @ljuba_pi. Please forward or get in touch! 🧠🎓
Eye movement folks: It is happening! Applications are open for the 2025 GRC and GRS Eye Movements. Check out our amazing program and apply here: grc.org/eye-movements-…
Was just going to advertise another awesome summer course for people starting their labs: safelabs.info -- related to the one we are organizing: compneurosci.com/Neuro4Pros/ind… If you have a new lab, this is a good year.
Our new paper with @chrismlangdon is just out in @NatNeuro! We show that high-dimensional RNNs use low-dimensional circuit mechanisms for cognitive tasks and identify a latent inhibitory mechanism for context-dependent decisions in PFC data. nature.com/articles/s4159…
Excited to share my work with @EngelTatiana, out now in @NatNeuro! We show that RNNs use low-dimensional latent circuit mechanisms for cognitive tasks. We find that context-dependent decisions in both RNNs and PFC arise from latent inhibitory mechanisms. nature.com/articles/s4159…
For CCN2025 in Amsterdam, we will, for the first time, have conference proceedings that include both full length peer-reviewed papers and the traditional 2-page track! Key dates for authors: 17th February for full papers & 10th April for 2 pagers.
We’re hiring for a computational/systems/cognitive neuro editor! Get in touch if you’d like to chat about the role 🙂
We're hiring! We're recruiting an editor with expertise in computational, systems, or cognitive neuroscience. Must have a PhD and be able to work in the US, Berlin, or Shanghai. Applications due Jan. 6. springernature.wd3.myworkdayjobs.com/SpringerNature…
Imagine you are crossing the street and see three cars approaching from your left. You store these cars in memory. When looking over to the right, the blue and red cars pass your view. Only the black car remains relevant for your goal.