Qiong Zhang
@qionng
Human memory, cognitive modeling, machine learning. Assistant professor in Psychology & Computer Science @rutgersU directing the Memory Optimization Lab.
New preprint: osf.io/preprints/psya… Been thinking a lot lately about how to build a generalized theory of memory that integrates what we know from both traditional memory experiments (e.g., random word lists) and naturalistic ones (e.g., movies, narratives). Feedback is very welcome!
Now out in Psychological Review! We are grateful to the reviewers for helping us to improve the paper. Article link: psycnet.apa.org/doiLanding?doi… with @ptoncompmemlab and @cocosci_lab
Excited to announce our new preprint: “Optimal Policies for Free Recall” with @ptoncompmemlab and @cocosci_lab! Preprint link: psyarxiv.com/sgepb (thread 1/8)
I will be recruiting a PhD student this upcoming cycle, so if you have motivated students interested in modeling memory and the brain, please send them my way (email: [email protected])! Memory Optimization Lab information: sites.rutgers.edu/memory-optimiz…
New preprint! We present an optimal model of how metacognitive monitoring (the feeling of knowing an answer) can dynamically inform metacognitive control of memory (how to direct retrieval efforts). See the thread below for more details.
You know that feeling when you can't remember a citation, but you can feel it hiding in your brain somewhere, taunting you? In this paper, we propose a computational model of how these "feelings of knowing" help us rationally allocate memory resources. psyarxiv.com/haf79/
Super excited to announce that our work (with @ptoncompmemlab @hassonlab) is officially out in eLife: a neural network model of when to retrieve and encode episodic memories to predict upcoming events. Paper: elifesciences.org/articles/74445 Code: bit.ly/3gU5TvD (1/n)