Olivier Codol
@OlivierCodol
Postdoc at @Mila_Quebec @UMontreal, studying motor learning and control.
Long time coming. A very cool project that showcases the advantages of single neuron adaptation in RNNs. #PLOSCompBio: Neural networks with optimized single-neuron adaptation uncover biologically plausible regulari ... dx.plos.org/10.1371/journa… Props to @vgeadah and co-authors
Excited to share my latest work with @DiedrichsenJorn and @andpru. In this work, we ask whether motor sequence learning is motoric at all! Check out the 🧵version of the abstract: biorxiv.org/content/10.110… 1/n
Fantastic work by @mehrdadkashefi -- a deep investigation into the nature of learning motor sequences.
📢📢📢 Big paper out from the lab today! We show how motor circuits across cortex and thalamus do sensory planning and how this improves reaching.
Can the motor system use sensory expectations to prepare for unexpected events? Excited to share my latest work with @andpru – where we establish that sensory expectations shape neural population dynamics in motor circuits! 🧵 and paper below 1/
I’ll be attending #NeurIPS2024 in Vancouver this week. Excited to meet new people and chat about comp neuro, NeuroAI, and foundation models for neuroscience. Also keen to attend the NeuroAI, @unireps and @neur_reps workshops!
It’s been a long time coming, but I’m thrilled to share my first research paper with @arna_ghosh , @ckaplanis1, @tyrell_turing, and Doina! Just accepted to NeurIPS 2024 (see u in Vancouver! 🇨🇦). This will be a longer thread—thanks for following along! arxiv.org/abs/2410.22133 1/11
Check out Jonny's latest work on a bio-plausible synaptic learning rule that yields interesting properties in artificial neural networks! It was very exciting and fun to contribute to this work.
Why does #compneuro need new learning methods? ANN models are usually trained with Gradient Descent (GD), which violates biological realities like Dale’s law and log-normal weights. Here we describe a superior learning algorithm for comp neuro: Exponentiated Gradients (EG)! 1/12
For real, everyone in neuroscience and AI: Get off this site. Elon is now using this platform to mess with American democracy. No one should be here anymore. NeuroAI Bluesky is getting livelier every day. Please come join us.
Thank you to the @wusmsl for providing a great platform to exchange ideas and discuss science in-depth! Always a pleasure to interact with the folks down there, I warmly recommend
No video this time around but @wusmsl was lucky to have lab alum @OlivierCodol present his recent work: "Brain-like neural dynamics for behavioral control develop through reinforcement learning". Check it out, definitely some food for thought! doi.org/10.1101/2024.1…
There is no silver bullet
To me, these results emphasize that scores are tools that need to be used alongside domain expertise. In that view, model analyses, score-based or not, uncover to-be-tested hypotheses about how a model may capture an important brain-like mechanism. This is not a one-dimensional assessment.
I’ll be recruiting 1-2 graduate students for Fall 2025 to work on visual learning generalization in humans and artificial neural networks. If you’re interested, apply! Check out our lab website (snailab.ca) and reach out if you need more info. RT please.
📷 Meet our student community! Interested in joining Mila? Our annual supervision request process for admission in the fall of 2025 is starting on October 15, 2024. More information here mila.quebec/en/prospective…
Investigating the experimentally verifiable impact of different credit assignment mechanisms for learning in the brain is a crucial endeavor for computational neuroscience. Here: our take for motor learning and the RL/SL question when looking at neural representations in cortex.
Check out our new paper led by @OlivierCodol ! We use RNNs to explore possible learning rules that lead to the dynamics we see in brains during behavior. biorxiv.org/content/10.110…
Cool project on a topic that needs more investigation: the effect of the learning algo. on network activity & performance. Looking forward to more projects in that space -so many related questions to ask (effect of algorithm type, explor./noise, see also nature.com/articles/s4146…)!
Here’s our latest work at @g_lajoie_ and @mattperich's labs! Excited to see this out. We used a combination of neural recordings & modelling to show that RL yields neural dynamics closer to biology, with useful continual learning properties. biorxiv.org/content/10.110…
This is a fantastic paper. I still can’t believe that there wasn’t a way to make SL fit the neural data like RL, but @OlivierCodol tried everything!