Sath Thavabalasingam
@sathesant
Data Scientist @Tangerinebank and Cognitive neuroscience PhD @UofT
Excited to share our new paper in @PNASNews exploring how information about duration is incorporated into human hippocampal (including CA1) long-term sequence representations: pnas.org/content/early/…
Memory is a crucial feature of intelligence. Our new blog post overviews the use of memory in deep learning, and how modelling language may be an ideal task for developing better memory architectures: deepmind.com/blog/article/A…
Application portal is now live! Please share with those who may be interested. 🧠👩🏽💻🎓 …ycrest-hospital-openhire.silkroad.com/epostings/inde…
.@Brad_Buchsbaum & I are recruiting a postdoc to study memory using naturalistic, dynamic materials and functional neuroimaging. Research questions could also address cognitive aging. Experience in fMRI/EEG/MEG analysis preferred. Application link available soon. @rotmanresearch
Latest by György Buzsáki's lab in @ScienceMagazine: Gamma rhythm communication between entorhinal cortex and dentate gyrus neuronal assemblies science.sciencemag.org/content/372/65…
📢New paper out from @RIKEN_CBS Shige Fujisawa lab. "Scalable representation of time in the hippocampus" advances.sciencemag.org/content/7/6/ea…
In this (first) post I talk about implementing "Tweedie Loss" and how it can be useful when you are modelling zero-inflated datasets (e.g. an e-commerce website where many users don't make a purchase) #datascience #machinelearning #stats Check it out! link.medium.com/7ppKGBy0jdb
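A minimal sketch of the idea behind a Tweedie loss (not the post's exact implementation): for variance power 1 < p < 2, the Tweedie negative log-likelihood (up to a constant) interpolates between Poisson and Gamma, which makes it a natural fit for targets that are often exactly zero but continuous and positive otherwise. The function name and defaults below are illustrative assumptions.

```python
import numpy as np

def tweedie_loss(y_true, y_pred, p=1.5):
    """Mean Tweedie negative log-likelihood (up to a constant), for 1 < p < 2.

    Predictions must be strictly positive. p=1.5 is a common default for
    zero-inflated, continuous-positive targets such as per-user spend.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(
        -y_true * np.power(y_pred, 1 - p) / (1 - p)  # data-fit term
        + np.power(y_pred, 2 - p) / (2 - p)          # normalization term
    )

# Per-sample loss is minimized when the prediction equals the target,
# so accurate predictions score lower than a constant guess:
many_zeros = [0, 0, 5]
print(tweedie_loss(many_zeros, [0.1, 0.1, 5]))  # lower
print(tweedie_loss(many_zeros, [2, 2, 2]))      # higher
```

Frameworks such as scikit-learn and LightGBM expose equivalent objectives (e.g. a Tweedie deviance with a variance-power parameter), so in practice you would often reach for those rather than hand-rolling the loss.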
Great start to the new year! Our latest pub early online: "Perirhinal Cortex is Involved in the Resolution of Learned Approach–Avoidance Conflict Associated with Discrete Objects" academic.oup.com/cercor/advance…
Crucial role for CA2 inputs in the sequential organization of CA1 time cells supporting memory. Latest by MacDonald & Tonegawa in @PNASNews pnas.org/content/118/3/…
Delighted to share our review of hippocampal pattern similarity studies, written with the brilliant @jess_robin_ @rosanna_olsen @morganbarense and Morris Moscovitch. For a quick walk-through, thread below: sciencedirect.com/science/articl…
Really enjoyed the presentation at #mlops2020 by Hamza Tahir (@maiotees) explaining why ML in Production is (Still) broken. Great read to accompany the talk: blog.maiot.io/technical_debt/
Our review on sequence #memory is now out in the Journal of Cognitive Neuroscience! @jacobbellmund & @nachopolti summarize lots of great work on how we remember event sequences doi.org/10.1162/jocn_a… @MPI_CBS @KISNeuro
How well do lab memory tests generalize to real life? There is evidence of neural differences when remembering personal experiences vs. lab stimuli, but little behavioral evidence. What do these differences mean? We explored these questions here: psyarxiv.com/ye4ac/ 1/8
Welcome to the Levine Lab Preprintapalooza (aka Autobio Shock and Awe)! Over the next 2 weeks we will share 8 works on individual differences, staged events, memory accuracy, eye tracking, imagery, aging, and neurodegeneration. @NickBDiamond @RenoultLouis @carinalfan @michael_armson R Petrican
Delighted to share our new paper! "Exploring the interaction between approach-avoidance conflict and memory processing" tandfonline.com/doi/full/10.10…
Humans perform “mental time travel” across memories for goal-directed decisions. Our new algorithm, also based on episodic memory retrieval, enables AI agents to perform long-term credit assignment. Paper: nature.com/articles/s4146… Code: github.com/deepmind/tvt
How can we learn a sequence of tasks without forgetting, without class labels and with unknown or ambiguous task boundaries? Continual Unsupervised Representation Learning: Paper: arxiv.org/abs/1910.14481 Code: github.com/deepmind/deepm…
Happy to see this out! An absolute pleasure to collaborate with these wonderful people! Hopefully not our last team up @DanielaJPalombo :)
My “Shape Wheel” with @MorganBarense, Jackson Liang, & Andy Lee is now out in JEP: General! We created the first perceptually uniform shape space whereby angular distance along a 2D circle is a proxy for visual similarity, comparable to the commonly used "color wheel". (1/3)
Neural networks in NLP are vulnerable to adversarially crafted inputs. We show that they can be trained to become certifiably robust against input perturbations such as typos and synonym substitution in text classification: arxiv.org/abs/1909.01492
When did that happen? Our new paper shows that the entorhinal cortex maps temporal relationships of events. Work led by @jacobbellmund with @lorenadeuker now out in @eLife doi.org/10.7554/eLife.… #memory