Eli Sennesh
@EliSennesh
NHP ephys @VanderbiltU. Predictive coding, affective science. Abolish the value function! Thou shalt not make a machine to counterfeit a human mind.
PREPRINT THREAD🧵! Ever wanted to know how to ABOLISH THE VALUE FUNCTION, and why IT'S ALL TAXIS NAVIGATION? @mjdramstead and I wrote up some notes for y'all. arxiv.org/abs/2505.17024
Brain rhythms in cognition -- controversies and future directions arxiv.org/abs/2507.15639
She looks better without the mask
i still can't believe this is your virtual girlfriend
The lesson from the VAE is not "a VAE is just an AE with a dumb penalty" the lesson is "dumb penalties have extremely profound effects and induce incredibly sophisticated structures in deep models".
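The "dumb penalty" in question is the KL term of the negative ELBO. A minimal sketch (my illustration, not from the tweet) of how little machinery it takes, using numpy with a Gaussian posterior and a standard-normal prior:

```python
import numpy as np

def kl_standard_normal(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dims.

    This is the "dumb penalty": it only pulls each latent coordinate
    toward a unit Gaussian, yet it is what pressures the encoder into
    a smooth, information-limited latent code.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """Negative ELBO: reconstruction error plus the KL penalty.

    Setting beta=0 recovers a plain autoencoder objective; beta > 0
    is the entire difference between an AE and a VAE.
    """
    recon = np.sum((x - x_recon) ** 2, axis=-1)  # Gaussian log-lik, up to consts.
    return recon + beta * kl_standard_normal(mu, logvar)
```

With `mu = 0` and `logvar = 0` the penalty vanishes, which is why the encoder can only deviate from the prior where reconstruction genuinely pays for it.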
What is the right curry-howard correspondence for ad-hoc polymorphism / type-classes?
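One standard (though not the only) answer is the dictionary-passing elaboration of Wadler and Blott: a class constraint corresponds to an extra evidence argument, a record of the class's operations. A hedged sketch of that reading, transliterated into Python (names here are illustrative, not from any library):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Eq:
    """A type class as a record ("dictionary") of operations.

    Under dictionary passing, a constraint like `Eq a => ...`
    elaborates to an explicit argument of this type -- the evidence
    (proof term) that the constraint holds.
    """
    eq: Callable[[Any, Any], bool]

# An "instance" is just a concrete dictionary.
eq_int = Eq(eq=lambda x, y: x == y)

def elem(d: Eq, x, xs) -> bool:
    """`elem :: Eq a => a -> [a] -> Bool`, with the constraint
    made explicit as the first parameter."""
    return any(d.eq(x, y) for y in xs)

# Instance resolution then amounts to choosing which dictionary to pass.
elem(eq_int, 3, [1, 2, 3])
```

On this view the ad-hoc-ness lives entirely in the compiler's instance search; the elaborated program is ordinary parametric code over records.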
My funding is going away in a few months, so I'm outsourcing my job search. Looking for mathy roles (Twin Cities or remote). Years of experience in applied math/math-bio, Ph.D. in biology, pretty good at Mathematica, ok at Julia, willing to learn R/re-learn Python, whatever.
So the DGX Spark isn't really releasing this month (July), is it?
3/ That is, it needs to reflect the uncertainty inherent in perception. If one grants these assumptions about the nature of perception, then something akin to Active Inference appears as an attractive (probabilistic) extension of PCT.
Now published open access: Perceptual Control Theory and the Free Energy Principle: a comparison sciencedirect.com/science/articl… @Kihbernetics @AnnaCiaunica @mjdramstead @KordingLab @HenryYin19 @CyberneticsOrg @WiringTheBrain @drmichaellevin @micblackau @leafs_s @Neuro_Skeptic
Octopuses are reported to fall for the "rubber hand" illusion. Very interesting new paper in @CurrentBiology by Sumire Kawashima and Yuzuru Ikeda. Links below. 1/
📜Out in Nat. Commun! We propose a novel framework called “predictive alignment”, which trains the chaotic RNN via a biologically plausible rule. Collaborated with @ClopathLab nature.com/articles/s4146…
1️⃣ What is a TNN? TNNs are neural networks with local recurrence or feedback connections that process inputs across time. Unlike in standard RNNs, each time step in a TNN corresponds to a single feedforward layer's computation, mimicking biological processing. Of course, you can also…
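The "one layer per time step" idea can be sketched as follows. This is my illustrative toy (random weights, made-up sizes), not the thread's actual model: at each tick, every layer reads its lower neighbor's *previous*-step output plus its own recurrent state, so activity climbs the hierarchy one layer per step.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy 3-layer "temporal" net; dimensions are arbitrary illustrations.
dims = [8, 16, 16, 4]
W_ff = [rng.normal(scale=0.1, size=(dims[i + 1], dims[i])) for i in range(3)]
W_rec = [rng.normal(scale=0.1, size=(d, d)) for d in dims[1:]]  # local recurrence

def run(x, n_steps=6):
    state = [np.zeros(d) for d in dims[1:]]
    for _ in range(n_steps):
        prev = [s.copy() for s in state]
        # Layer l sees layer l-1's output from the previous step,
        # so one time step = one feedforward layer's worth of computation.
        inputs = [x] + prev[:-1]
        state = [relu(W_ff[l] @ inputs[l] + W_rec[l] @ prev[l])
                 for l in range(3)]
    return state[-1]
```

Note that a constant input needs at least three steps to influence the top layer here, which is the biologically motivated latency a TNN makes explicit and a standard RNN hides.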
𝗧𝗼𝗽-𝗱𝗼𝘄𝗻 𝗮𝗻𝗱 𝗯𝗼𝘁𝘁𝗼𝗺-𝘂𝗽 𝗻𝗲𝘂𝗿𝗼𝘀𝗰𝗶𝗲𝗻𝗰𝗲: 𝗼𝘃𝗲𝗿𝗰𝗼𝗺𝗶𝗻𝗴 𝘁𝗵𝗲 𝗰𝗹𝗮𝘀𝗵 𝗼𝗳 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗰𝘂𝗹𝘁𝘂𝗿𝗲𝘀 nature.com/articles/s4158… A small contribution to a piece by @_fernando_rosas and colleagues on why we need both types of research culture
Quick post highlighting my rolling issues with "circuits": circuits are way more than the regions they live in. We need to think more about what "brain circuits" are *conserving*. virati.github.io/blog/posts/cir…
🧠 New research finding: Astrocytes support neural information processing by maintaining ambient levels of the neurotransmitter GABA picower.mit.edu/news/study-fin… #neuroscience #brain @mitbrainandcog @ScienceMIT
🚨 Fully Funded PhD positions Gonda Brain Institute, Sharp Lab We will explore how people build & deploy world models efficiently for planning & deciding. We will also investigate how world model construal & use is biased in anxiety. Deadline: 1 Sept 2025. Please share!