Scott Linderman
@scott_linderman
Assistant Professor @Stanford Statistics and @StanfordBrain. Computational Neuroscience, Machine Learning, Bayesian Statistics. Tweets are my own.
Can synapses in the brain switch sign between excitatory and inhibitory during learning 🚦? That is, can they act more like the weights of an artificial neural network, whose signs can flip with experience 🔃? Excited to share my thesis work in the @blsabatini lab! 🧵 ⬇️ (1/13)
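To make the contrast concrete, here is a toy sketch (my illustration, not the thesis's model): a Dale's-law synapse is clipped at zero so it can never change sign, while an unconstrained ANN-style weight flips freely. All numbers are illustrative.

```python
# Toy contrast (not the thesis's model): Dale's law fixes each synapse's
# excitatory/inhibitory sign, while ANN-style weights may flip freely.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # synaptic weights
sign = np.sign(w)             # fixed E/I identity under Dale's law
grad = rng.normal(size=8)     # a hypothetical loss gradient
lr = 0.5

w_free = w - lr * grad                                    # signs may flip
w_dale = sign * np.maximum(sign * (w - lr * grad), 0.0)   # clipped at zero

print("sign flips (ANN-style):", int(np.sum(np.sign(w_free) != sign)))
print("sign flips (Dale's law):", int(np.sum(np.sign(w_dale) * sign < 0)))
```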
There is an opening for a University Assistant Professor in Machine Learning, based at @CambridgeMLG: jobs.cam.ac.uk/job/49361/ Apply!
Can an AI model predict perfectly and still have a terrible world model? What would that even mean? Our new ICML paper formalizes these questions. One result tells the story: a transformer trained on 10M solar systems nails planetary orbits, but it botches the underlying gravitational laws 🧵
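One way to see what "botching the law" means, sketched under my own assumptions rather than the paper's actual probe: finite-difference a rolled-out trajectory to get accelerations, then regress log|a| on log r. A genuinely Newtonian rollout gives a slope near -2; a model that merely imitates orbits can pass the trajectory test yet fail this one.

```python
# Hedged illustration (not the paper's actual probe): estimate the
# force-law exponent implied by a trajectory of positions.
import numpy as np

def force_law_exponent(xy, dt):
    acc = (xy[2:] - 2 * xy[1:-1] + xy[:-2]) / dt**2   # central differences
    r = np.linalg.norm(xy[1:-1], axis=1)
    a = np.linalg.norm(acc, axis=1)
    slope, _ = np.polyfit(np.log(r), np.log(a), 1)
    return slope

# Ground-truth eccentric two-body orbit (GM = 1) via symplectic Euler.
dt, steps = 1e-3, 20000
pos, vel, traj = np.array([1.0, 0.0]), np.array([0.0, 0.8]), []
for _ in range(steps):
    traj.append(pos.copy())
    vel += dt * (-pos / np.linalg.norm(pos) ** 3)
    pos += dt * vel
print("recovered exponent:", force_law_exponent(np.array(traj), dt))  # ~ -2
```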
How can we close the generation-verification gap when LLMs produce correct answers but fail to select them? 🧵 Introducing Weaver: a framework that combines multiple weak verifiers (reward models + LM judges) to achieve o3-mini-level accuracy with much cheaper non-reasoning…
LLMs can generate 100 answers, but which one is right? Check out our latest work closing the generation-verification gap by aggregating weak verifiers and distilling them into a compact 400M model. If this direction is exciting to you, we’d love to connect.
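The core aggregation idea can be sketched in a few lines (a simplification, not Weaver's actual estimator; the verifiers and weights below are hypothetical stand-ins): each weak verifier scores each candidate answer, the scores are combined with per-verifier weights, and the top-scoring candidate is selected.

```python
# Simplified weak-verifier aggregation (not Weaver's actual estimator).
# The verifiers and weights below are hypothetical stand-ins.
import numpy as np

def select_answer(candidates, verifiers, weights):
    # scores[i, j] = verifier j's score for candidate i
    scores = np.array([[v(c) for v in verifiers] for c in candidates])
    combined = scores @ weights        # weighted aggregate per candidate
    return candidates[int(np.argmax(combined))]

reward_model = lambda ans: 0.9 if "42" in ans else 0.4       # stub verifier
lm_judge     = lambda ans: 0.7 if ans.endswith(".") else 0.5  # stub verifier
weights = np.array([0.6, 0.4])   # e.g. fit on a small labeled set

print(select_answer(["It is 41", "It is 42."],
                    [reward_model, lm_judge], weights))
```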
What is the probability of an image? What do the highest- and lowest-probability images look like? Do natural images lie on a low-dimensional manifold? In a new preprint with @ZKadkhodaie @EeroSimoncelli, we develop a new energy-based model to answer these questions: 🧵
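For background, an energy-based model defines p(x) ∝ exp(-E(x)), so energies alone suffice to compare image probabilities, and approximate samples follow from Langevin dynamics. A generic sketch with a stand-in quadratic energy (not the paper's learned model):

```python
# Generic energy-based-model sketch (not the paper's learned model):
# p(x) ∝ exp(-E(x)), so energies compare probabilities directly, and
# unadjusted Langevin dynamics draws approximate samples.
import numpy as np

def energy(x):
    return 0.5 * np.sum(x ** 2)   # stand-in quadratic energy

def grad_energy(x):
    return x                      # gradient of the quadratic energy

def langevin_sample(x, step=1e-2, n_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        x = x - step * grad_energy(x) + np.sqrt(2 * step) * noise
    return x

x = langevin_sample(np.zeros(16))   # a 16-pixel "image" as a vector
print("energy of the sample:", energy(x))
```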
Local LLMs *privately* collaborating with smarter cloud LLMs, as if you never left your laptop. Pure joy to work with @ollama.
3 months ago, Stanford's Hazy Research lab introduced Minions, a project that connects Ollama to frontier cloud models to reduce cloud costs by 5-30x while achieving 98% of frontier model accuracy. Secure Minion turns an H100 into a secure enclave, where all memory and…
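A heavily simplified sketch of the local/cloud split (not the actual Minions protocol): the local model digests the long private context and only a short summary crosses the network. It assumes a running Ollama server with a pulled model; `cloud_chat` is a hypothetical stand-in for a frontier model call.

```python
# Heavily simplified sketch of the Minions local/cloud split, not the
# actual protocol. Assumes a running Ollama server with "llama3.2"
# pulled; `cloud_chat` is a hypothetical stand-in for a frontier model.
import ollama

def cloud_chat(prompt: str) -> str:
    return f"[frontier-model answer to: {prompt[:40]}...]"  # stub

def answer(question: str, private_context: str) -> str:
    # The local model reads the long private context...
    local = ollama.chat(
        model="llama3.2",
        messages=[{
            "role": "user",
            "content": f"Extract what is relevant to: {question}\n\n{private_context}",
        }],
    )
    summary = local["message"]["content"]
    # ...and only this short summary ever leaves the laptop.
    return cloud_chat(f"Question: {question}\nRelevant notes: {summary}")
```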
I wrote an Op-Ed about the government's attacks on Harvard. Please share it with anybody who thinks these attacks help anyone. prosyn.org/PhWi74r?h=KyJ1…
Open call for Next Generation Leaders! Our NGL program recognizes early career scientists with fresh and innovative perspectives. For 3 years, NGLs contribute to ongoing research and initiatives at the Allen Institute. Apply by June 3. alleninstitute.org/about/people/n…
Great opportunity for young investigators to get up close to, and influence, the amazing work underway at the @AllenInstitute!
We secure all communications with a cloud-hosted LLM running on an H100 in confidential mode. The latency overhead vanishes once models exceed roughly 10B parameters. This is our first foray into applied cryptography; help us refine our ideas.
can you chat privately with a cloud llm—*without* sacrificing speed? excited to release minions secure chat: an open-source protocol for end-to-end encrypted llm chat with <1% latency overhead (even @ 30B+ params!). cloud providers can’t peek—messages decrypt only inside a…
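The encryption layer by itself is standard authenticated encryption; here is a sketch using AES-GCM. Key exchange and GPU attestation, the hard parts of the real protocol, are omitted, and the key is generated locally purely for illustration.

```python
# Sketch of the end-to-end encryption layer only: messages are sealed
# with AES-GCM so the relay sees ciphertext; key exchange and GPU
# attestation (the hard parts of the real protocol) are omitted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # agreed out-of-band in practice
aead = AESGCM(key)

def seal(plaintext: str) -> tuple[bytes, bytes]:
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    return nonce, aead.encrypt(nonce, plaintext.encode(), None)

def unseal(nonce: bytes, ciphertext: bytes) -> str:
    return aead.decrypt(nonce, ciphertext, None).decode()

nonce, ct = seal("a prompt the cloud provider must never read")
assert unseal(nonce, ct) == "a prompt the cloud provider must never read"
```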
Two (out of two!) accepted papers to #ICML2025 from my lab! 1. SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations 2. Controlled Generation with Equivariant Variational Flow Matching #GenAI #SDE #Bayes #ML #Diffusion #Flows #AI
Super excited to share Chipmunk 🐿️: training-free acceleration of diffusion transformers (video, image generation) with dynamic attention & MLP sparsity! Led by @austinsilveria and @SohamGovande: 3.7x faster video gen, 1.6x faster image gen. Kernels written in TK ⚡️🐱 1/
Training-free acceleration of Diffusion Transformers with dynamic sparsity and cross-step attention/MLP deltas, a collaboration with @SohamGovande and @realDanFu! ⚡️ 3.7x faster video and 1.6x faster image generation while preserving quality! 🧵 Open-source code & CUDA kernels!
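The cross-step-delta idea, sketched in NumPy under my own simplifications (the real system uses fused CUDA kernels): activations change little between adjacent diffusion steps, so recompute a block only on the rows whose inputs moved the most and reuse cached outputs elsewhere.

```python
# Cross-step-delta sketch in NumPy (the real kernels are fused CUDA):
# between adjacent diffusion steps, recompute `f` only on the rows of
# x that changed the most, and reuse cached outputs for the rest.
import numpy as np

def sparse_step(x_new, x_cached, y_cached, f, keep_frac=0.1):
    delta = np.linalg.norm(x_new - x_cached, axis=-1)  # per-row change
    k = max(1, int(keep_frac * len(delta)))
    idx = np.argsort(delta)[-k:]                       # most-changed rows
    y = y_cached.copy()
    y[idx] = f(x_new[idx])                             # partial recompute
    return y

f = lambda x: np.tanh(x @ np.full((4, 4), 0.3))        # stand-in MLP block
rng = np.random.default_rng(0)
x0 = rng.normal(size=(256, 4)); y0 = f(x0)             # step t (full pass)
x1 = x0 + 0.01 * rng.normal(size=x0.shape)             # step t+1 input
y1 = sparse_step(x1, x0, y0, f)                        # ~10% recomputed
print("max error vs full recompute:", np.abs(y1 - f(x1)).max())
```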
Our brains adjust dopamine levels to help us learn natural behaviors such as walking and talking, suggests a new @Nature study of zebra finches from @VikramGadagkar and @alfairhall: simonsfoundation.org/2025/04/17/sin… #science #neuroscience
Our scientists with over 150 collaborators have released the largest functional map and wiring diagram of the brain to date – seven years in the making. Today’s 1.6 petabyte release marks a historic day for neuroscience. This moonshot milestone will further efforts towards…
How does the brain work? Scientists are closer to the answer with the largest wiring diagram and functional map of a mammalian brain to date. 🧵
Excited to share our JOSS paper (joss.theoj.org/papers/10.2110…) and code (github.com/probml/dynamax)! Special thanks to our reviewers — Dynamax v1.0 is much improved thanks to their feedback!
I am pleased to announce that the paper on Dynamax, our JAX library for state space models (SSMs), is now available at joss.theoj.org/papers/10.2110…. Code is at github.com/probml/dynamax/. Joint work with @scott_linderman @grrddm @petergchang @karalleyna @GilesHD @mxinglongli
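A quick-start sketch patterned on the Dynamax README (worth checking the docs for the current API): build a linear Gaussian SSM, simulate data, and run smoothing.

```python
# Quick-start sketch patterned on the Dynamax README; see the docs
# for the current API and parameter names.
import jax.random as jr
from dynamax.linear_gaussian_ssm import LinearGaussianSSM

model = LinearGaussianSSM(state_dim=2, emission_dim=5)
params, props = model.initialize(jr.PRNGKey(0))

# Simulate data from the model, then evaluate and smooth it.
states, emissions = model.sample(params, jr.PRNGKey(1), num_timesteps=100)
print("marginal log prob:", model.marginal_log_prob(params, emissions))
posterior = model.smoother(params, emissions)
print("smoothed means shape:", posterior.smoothed_means.shape)
```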
Meet Aditi Jha (@aditi_jd), a 2025 Wu Tsai Neuro Interdisciplinary Postdoctoral Scholar! She develops machine learning models in the Linderman (@scott_linderman) Lab to analyze how internal goals shape decision-making and behavior in naturalistic settings.
1/ 🧠 How do dynamic processes shape behavior in a changing world? Join our #cosyne2025 workshop, The Dynamic Brain: Modeling Time-Varying Computations Underlying Natural and Innate Behaviors! Co-organized with @neuronair @BaselessPursuit @scott_linderman and David Anderson