Weinan Sun
@sunw37
Neuroscience, Artificial Intelligence, and Beyond. Assistant professor, Neurobiology and Behavior @CornellNBB
1/12 How do animals build an internal map of the world? In our new paper, we tracked thousands of neurons in mouse CA1 over days/weeks as they learned a VR navigation task. @nspruston @HHMIJanelia, w/ co-1st author @JohanWinn Video summary: youtube.com/watch?v=yw_4uV… Paper:…
A must-read from @antferrui and team!
Excited to share our latest story! We found disentangled memory representations in the hippocampus that generalized across time and environments, despite the seemingly random drift and remapping of single cells. This code enabled the transfer of prior knowledge to solve new tasks.
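The gist, as a minimal simulation (illustrative only, not the paper's analysis; all dimensions and names below are made up): if the population subspace carrying a latent variable is preserved across days, a decoder trained on day 1 keeps working on day 2 even though each neuron's day-to-day activity changes.

```python
# Illustrative toy, not the paper's analysis: single-neuron activity
# changes across days, but the population subspace carrying the latent
# variable is preserved, so a day-1 decoder transfers to day 2.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_latent, T = 100, 2, 500

# Fixed low-dimensional subspace that encodes the latent (e.g., position).
U, _ = np.linalg.qr(rng.normal(size=(n_neurons, n_latent)))

def record_day(rng):
    z = rng.normal(size=(T, n_latent))            # latent variable
    d = rng.normal(size=(T, n_neurons)) * 0.5     # day-specific activity
    d -= d @ U @ U.T                              # keep it off the z-subspace
    return z, z @ U.T + d                         # population activity

z1, r1 = record_day(rng)
z2, r2 = record_day(rng)

W = np.linalg.lstsq(r1, z1, rcond=None)[0]        # linear decoder, day 1
mse = np.mean((r2 @ W - z2) ** 2)
print(f"day-2 decoding MSE: {mse:.4f}")           # small: the code transfers
```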
Exciting opportunity for viral engineering wizards to come join our mission to map the mouse brain!
🚀 Join Our Team! We're seeking a Scientist and Research Associate to drive In Vivo Circuit Mapping and push the boundaries of whole-brain connectomics. If you're passionate about moonshot neuroscience and skilled in viral labeling, apply now! #hiring
Excited to share new work @icmlconf by Loek van Rossem exploring the development of computational algorithms in recurrent neural networks. Hear it live tomorrow, Oral 1D, Tues 14 Jul, West Exhibition Hall C: icml.cc/virtual/2025/p… Paper: openreview.net/forum?id=3go0l… (1/11)
"A transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws".
Can an AI model predict perfectly and still have a terrible world model? What would that even mean? Our new ICML paper formalizes these questions. One result tells the story: a transformer trained on 10M solar systems nails planetary orbits, but it botches gravitational laws 🧵
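A toy illustration of that dissociation (my sketch, not the paper's transformer setup): a flexible curve-fitter can predict an orbit coordinate almost to the noise floor while the physics it implies, the second derivative of its fit, is badly wrong near the edge of the data.

```python
# Toy illustration (not the paper's transformer setup): near-perfect
# trajectory prediction coexisting with a badly wrong implied "law".
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
omega = 2 * np.pi
t = np.linspace(0.0, 1.0, 60)
x = np.cos(omega * t) + 1e-3 * rng.normal(size=t.size)  # orbit coordinate

model = Polynomial.fit(t, x, deg=25)     # unconstrained curve-fit "model"

# Held-out prediction inside the training range: error near the noise floor.
t_test = np.linspace(0.05, 0.95, 500)
pred_err = np.max(np.abs(model(t_test) - np.cos(omega * t_test)))

# Probe the implied physics: the fit's second derivative (acceleration)
# near the data's edge, vs. the true Newtonian value -omega**2 * x.
accel = model.deriv(2)
t_edge = np.linspace(0.0, 0.03, 100)
law_err = np.max(np.abs(accel(t_edge) + omega**2 * np.cos(omega * t_edge)))

print(f"prediction error: {pred_err:.1e}")   # tiny
print(f"implied-law error: {law_err:.1e}")   # typically orders larger
```

Predictive accuracy alone doesn't certify the world model; you have to probe the quantities the model never had to get right.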
New paper: World models + Program synthesis by @topwasu
1. World modeling on-the-fly by synthesizing programs w/ 4000+ lines of code
2. Learns new environments from minutes of experience
3. Positive score on Montezuma's Revenge
4. Compositional generalization to new environments…
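The core loop, as a tiny sketch (the real system synthesizes thousands of lines of code with an LLM; this toy just searches three handwritten candidates): keep whichever transition program is consistent with observed experience, then plan with it.

```python
# Minimal sketch of "world model = synthesized program" (illustrative
# only): pick the candidate transition function that is consistent with
# a few observed (state, action, next_state) triples.

CANDIDATES = {
    "move_adds_action":      lambda s, a: s + a,
    "move_doubles_action":   lambda s, a: s + 2 * a,
    "walls_clip_at_0_and_9": lambda s, a: min(max(s + a, 0), 9),
}

# A few minutes of "experience" in a 1-D gridworld with walls at 0 and 9.
experience = [(0, -1, 0), (3, 1, 4), (9, 1, 9), (5, -1, 4)]

def consistent(program, data):
    return all(program(s, a) == s2 for s, a, s2 in data)

world_model = next(f for name, f in CANDIDATES.items()
                   if consistent(f, experience))

# The surviving program can now be used for planning in unseen states.
print(world_model(7, 1))  # -> 8
```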
Transformers employ different strategies through training to minimize loss, but how do these trade off, and why? Excited to share our newest work, where we show remarkably rich competitive and cooperative interactions (termed "coopetition") as a transformer learns. Read on 🔎⏬
How does in-context learning emerge in attention models during gradient descent training? Sharing our new Spotlight paper @icmlconf: Training Dynamics of In-Context Learning in Linear Attention arxiv.org/abs/2501.16265 Led by Yedi Zhang with @Aaditya6284 and Peter Latham
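For reference, linear attention drops the softmax, so the forward pass is just matrix products; that linearity is what makes the training dynamics tractable. A minimal sketch below (the dimensions and the in-context regression task are illustrative assumptions, not the paper's exact construction):

```python
# Minimal linear self-attention layer (no softmax); setup is illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 16                      # token dim, context length

def linear_attention(X, W_q, W_k, W_v):
    """Attn(X) = (X W_q)(X W_k)^T (X W_v) / n  -- no softmax."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    return (Q @ K.T) @ V / n

# In-context linear regression: context tokens are (x_i, y_i) pairs with
# y = w*x; the model must infer w from the context, not from its weights.
w_true = rng.normal(size=(d - 1,))
xs = rng.normal(size=(n, d - 1))
ys = xs @ w_true
X = np.concatenate([xs, ys[:, None]], axis=1)   # each token = [x_i, y_i]

W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = linear_attention(X, W_q, W_k, W_v)
print(out.shape)                  # (16, 4): one output per context token
```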
Cool work from @HHMIJanelia: "cognitive graphs of latent structure"… Looks like even more evidence for CSCG-like representations and schemas. (science.org/doi/10.1126/sc…, arxiv.org/abs/2302.07350) biorxiv.org/content/10.110…
This preprint is now published in @Nature. With current and former DeepMinders @yuvaltassa, Josh Merel, Matt Botvinick, and my @HHMIJanelia colleagues @vaxenburg, Igor Siwanowicz, @KristinMBranson, @MichaelBReiser, Gwyneth Card and more
🪰By infusing a virtual fruit fly with #AI, Janelia & @GoogleDeepMind scientists created a computerized insect that can walk & fly just like the real thing➡️ hhmi.news/3Rwop0w 🤖Read more about this work, first published in a #preprint in 2024➡️ hhmi.news/4cGAVUW
(Plz repost) I’ve been receiving some good news lately and will be hiring at all levels to expand the lab. Please get in contact if you are interested in reinforcement learning, neural plasticity, circuit dynamics, and/or hearing rehabilitation. pierre.apostolides @ umich .edu
From our team at @GoogleDeepMind: we ask, as an LLM continues to learn, how do new facts pollute existing knowledge? (and can we control it) We investigate such hallucinations in our new paper, to be presented as Spotlight at @iclr_conf next week.
New preprint! Intelligent creatures can solve truly novel problems (not just variations of previous problems), zero-shot! Why? They can "think" before acting, i.e. mentally simulate possible behaviors and evaluate likely outcomes. How can we build agents with this ability?
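One classic answer, as a minimal sketch (toy 1-D dynamics and scoring are my assumptions, not the preprint's method): roll out candidate action sequences inside an internal model, score the imagined outcomes, and execute only the first action of the best plan.

```python
# Minimal sketch of "thinking before acting": evaluate candidate action
# sequences in an internal model, then act. Toy dynamics, illustrative.
import itertools

GOAL = 5

def model(state, action):          # internal simulator, not the real world
    return state + action          # toy 1-D dynamics

def plan(state, horizon=3, actions=(-1, 0, 1)):
    best, best_score = None, float("-inf")
    for seq in itertools.product(actions, repeat=horizon):
        s = state
        for a in seq:              # mental rollout
            s = model(s, a)
        score = -abs(GOAL - s)     # evaluate the imagined outcome
        if score > best_score:
            best, best_score = seq, score
    return best[0]                 # execute only the first action

print(plan(0))  # -> 1: moves toward the goal with zero trial-and-error
```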
Seize the chance to work with the best!
Thrilled and grateful to be part of Astera's inaugural residency cohort! We're also hiring—check out the details inside!
Our latest study identifies a specific cell type and receptor essential for psilocybin’s long-lasting neural and behavioral effects 🍄🔬🧠🐁 Led by Ling-Xiao Shao and @ItsClaraLiao Funded by @NIH @NIMHgov 📄Read in @Nature - nature.com/articles/s4158… 1/12
Exciting opportunity, come join us!
And! We're simultaneously launching applications for our Fall 2025 (October) cohort. You can apply now through May 2: astera.org/residency
.@ChongxiLai works at the intersection of neuroscience, AI, and brain-machine interfaces. His research focuses on building brain-like models in a simulated environment to test whether cognition can be enhanced through novel AI-assisted BMI closed-loop stimulation algorithms.
Thrilled to announce I've joined @AsteraInstitute's first residency cohort! Excited to collaborate with this amazing team to build technology for a brighter future! I will focus on building and testing brain-like models in large-scale simulations and using AI to enhance them!
We’re excited to welcome our first residency cohort! This exceptional group of scientists, engineers, and entrepreneurs embodies our mission of creating public goods through open science and technology.
At #Cosyne2025? Come by my poster today (47) to hear how sequential predictive learning produces a continuous neural manifold that can generate replay during sleep, and spatial representations that "sweep" ahead to future positions. All from sensory information + action!
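A stripped-down version of the idea (illustrative, not the poster's model): learn a next-observation predictor from sensory input plus action, then iterate it; the predicted state runs ahead of the animal's true position, a forward "sweep".

```python
# Minimal sketch (not the poster's model): a learned next-observation
# predictor, iterated, sweeps ahead to future positions on a ring.
import numpy as np

N = 8                                   # positions on a circular track
obs = np.eye(N)                         # one-hot sensory observation

# Experience with a fixed "move clockwise" action: see position t, then t+1.
X = obs                                 # current observation
Y = obs[(np.arange(N) + 1) % N]         # next observation

# Least-squares predictive map M: obs_t -> obs_{t+1}
M = np.linalg.lstsq(X, Y, rcond=None)[0]

# Iterating the predictor runs ahead of the animal's true position (0).
state = obs[0]
for step in range(3):
    state = state @ M
    print(f"step {step + 1}: predicted position {np.argmax(state)}")
# -> 1, 2, 3: the representation sweeps to future locations
```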
Want to procedurally generate large-scale relational reasoning experiments in natural language, to study human psychology 🧠 or eval LLMs 🤖? We have a tool for that! github.com/google-deepmin… Check out @Kenneth_Marino's thread for some stuff you can do:
Excited to announce our latest #ICLR work on long-context/relational reasoning evaluation for LLMs ReCogLab! openreview.net/pdf?id=yORSk4Y… github.com/google-deepmin… Work with Andrew Liu, @priorupdates @gargi_balasu @neuro_kim and others at @GoogleDeepMind
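In the spirit of the tool (a toy sketch, not the ReCogLab API): procedurally generate a transitive-inference problem in natural language with a ground-truth answer, scalable to arbitrary chain lengths and entity sets.

```python
# Toy generator of relational-reasoning problems (illustrative only):
# a shuffled transitive chain of premises plus a yes/no query.
import random

def make_problem(names, rng):
    order = rng.sample(names, len(names))       # hidden total order
    premises = [f"{a} is taller than {b}."
                for a, b in zip(order, order[1:])]
    rng.shuffle(premises)                       # hide the chain structure
    a, b = rng.sample(order, 2)
    answer = "yes" if order.index(a) < order.index(b) else "no"
    return " ".join(premises), f"Is {a} taller than {b}?", answer

rng = random.Random(0)
context, question, answer = make_problem(
    ["Alice", "Bob", "Carol", "Dave", "Erin"], rng)
print(context)
print(question, "->", answer)
```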
Check out replay making new place cells (likely from compositional elements!).
Happy to share the latest version of our work on compositional cognitive maps in hippocampus, with Jo Warren, @jrcwhittington, @behrenstimb. We propose hippocampus constructs maps from cortical building blocks in replay – now with empirical support! nature.com/articles/s4159… 1/9