Anand Gopalakrishnan
@agopal42
PhD student at The Swiss AI Lab (IDSIA) with @SchmidhuberAI. Previously: Apple MLR, Amazon AWS AI Lab. 7. Same handle on 🦋
Excited to present "Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery" at #NeurIPS2024! TL;DR: Our model, SynCx, greatly simplifies the inductive biases and training procedures of current state-of-the-art synchrony models. Thread 👇 1/x.

Meet the recipients of the 2024 ACM A.M. Turing Award, Andrew G. Barto and Richard S. Sutton! They are recognized for developing the conceptual and algorithmic foundations of reinforcement learning. Please join us in congratulating the two recipients! bit.ly/4hpdsbD
Come visit our poster at East Exhibit Hall A-C #3707, today (Thursday) between 4:30-7:30pm, to learn about how complex-valued NNs perform perceptual grouping. #NeurIPS2024
Why do video models handle motion so poorly? It might be lack of motion equivariance. Very excited to introduce: Flow Equivariant RNNs (FERNNs), the first sequence models to respect symmetries over time. Paper: arxiv.org/abs/2507.14793 Blog: kempnerinstitute.harvard.edu/research/deepe… 1/🧵
Excited to share our new ICML paper, with co-authors @robert_csordas and @SchmidhuberAI! How can we tell if an LLM is actually "thinking" versus just spitting out memorized or trivial text? Can we detect when a model is doing anything interesting? (Thread below👇)
Your language model is wasting half of its layers to just refine probability distributions rather than doing interesting computations. In our paper, we found that the second half of the layers of the Llama 3 models have minimal effect on future computations. 1/6
In the physical world, almost all information is transmitted through traveling waves -- why should it be any different in your neural network? Super excited to share recent work with the brilliant @mozesjacobs: "Traveling Waves Integrate Spatial Information Through Time" 1/14
Congratulations to @RichardSSutton and Andy Barto on their Turing award!
BREAKING: Amii Chief Scientific Advisor, Richard S. Sutton, has been awarded the A.M. Turing Award, the highest honour in computer science, alongside Andrew Barto! Read the official @TheOfficialACM announcement: hubs.la/Q039nZXM0 #TuringAward #AI #ReinforcementLearning
Brains, Minds and Machines Summer Course 2025. Application deadline: Mar 24, 2025 mbl.edu/education/adva… See more information here: cbmm.mit.edu/summer-school/…
Interested in JEPA/visual representation learning for diverse downstream tasks like planning and reasoning? Check out "Enhancing JEPAs with Spatial Conditioning: Robust and Efficient Representation Learning" at the @NeurIPSConf SSL Workshop on 12/14. Led by @EtaiLittwin (1/n)
Please check out a dozen 2024 conference papers with my awesome students, postdocs, and collaborators: 3 papers at NeurIPS, 5 at ICML, others at CVPR, ICLR, ICRA: 288. R. Csordas, P. Piekos, K. Irie, J. Schmidhuber. SwitchHead: Accelerating Transformers with Mixture-of-Experts…