Zejin Lu | 陆泽金
@lu_zejin
PhD student @FU_Berlin, co-supervised by Prof. Radoslaw M. Cichy and Prof. Tim Kietzmann (@TimKietzmann). Interested in machine learning and cognitive science.
Now out in Nature Human Behaviour @NatureHumBehav: “End-to-end topographic networks as models of cortical map formation and human visual behaviour”. Read the paper here: nature.com/articles/s4156…
🚨 Preprint alert! Excited to share my second PhD project: “Adopting a human developmental visual diet yields robust, shape-based AI vision” -- a nice case showing that biology, neuroscience, and psychology can still help AI :)! arxiv.org/abs/2507.03168
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision” -- a wonderful case of brain inspiration substantially improving AI. Work with @lu_zejin, @martisamuser, and Radoslaw Cichy: arxiv.org/abs/2507.03168
In this study, Lu et al. introduce All-Topographic Neural Networks (All-TNN) as a parsimonious model of the human visual cortex. nature.com/articles/s4156…
Preprint alert 🚨 I am excited about our new paper titled “The representational nature of spatio-temporal recurrent processing in visual object recognition.” 🥳🌟 biorxiv.org/cgi/content/sh…
Excited to announce the second iteration of NEAT (Neuro-AI-Talks), which will take place September 2nd-3rd 2024 in Osnabrück. kietzmannlab.org/neat2024 Never heard of it? Let me tell you what this is about 🧵
🚀 Check out this preprint on 'ReAlnet' 🧠 - a model that aligns AI with human brain activity for object recognition. Discover how ReAlnet not only mimics human vision more closely but also offers personalized models and robustness against adversarial attacks.
New preprint out 😉! w/ @yilewangwayne @juliedgolomb Can we use human brain activity to align ANNs on object recognition and achieve more human brain-like vision models? Yes!!! We present 'Re(presentational)Al(ignment)net' (ReAlnet), a vision model aligned with human brain activity!
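For context, the general recipe behind this kind of brain alignment can be sketched in a few lines. The snippet below is an illustrative assumption, not ReAlnet's actual architecture or training procedure: it simply adds an auxiliary representational-similarity loss that pulls a standard vision model's representational geometry toward measured brain responses. The shapes, the 0.5 weighting, the use of logits as the aligned layer, and the random placeholder "brain" data are all hypothetical.

```python
# Illustrative sketch only: the general idea of representational alignment,
# NOT ReAlnet's actual architecture or training procedure. Shapes, the 0.5
# weighting, the use of logits as the aligned layer, and the random "brain"
# data are placeholder assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

def rdm(acts: torch.Tensor) -> torch.Tensor:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activation patterns evoked by each pair of images."""
    acts = acts.flatten(1)                        # (n_images, n_features)
    acts = acts - acts.mean(dim=1, keepdim=True)  # center per image
    acts = F.normalize(acts, dim=1)               # unit length per image
    return 1.0 - acts @ acts.T                    # (n_images, n_images)

def alignment_loss(model_acts: torch.Tensor, brain_acts: torch.Tensor) -> torch.Tensor:
    """Mismatch between the model's and the brain's representational
    geometry (upper triangle of the RDMs, diagonal excluded)."""
    m, b = rdm(model_acts), rdm(brain_acts)
    idx = torch.triu_indices(m.size(0), m.size(0), offset=1)
    return F.mse_loss(m[idx[0], idx[1]], b[idx[0], idx[1]])

model = models.resnet18(weights="IMAGENET1K_V1")
images = torch.randn(16, 3, 224, 224)   # a batch of stimuli (placeholder)
brain = torch.randn(16, 500)            # brain responses to the same stimuli (placeholder)
labels = torch.randint(0, 1000, (16,))

logits = model(images)
# Joint objective: keep recognising objects while nudging the model's
# representational geometry toward the measured brain responses.
loss = F.cross_entropy(logits, labels) + 0.5 * alignment_loss(logits, brain)
loss.backward()
```

In a real setting one would typically align intermediate-layer activations rather than logits and use actual EEG/fMRI recordings; the point of the sketch is only how a brain-alignment term can sit alongside the ordinary classification objective.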
🚨 CCN paper w/ @adriendoerig and @TimKietzmann - arxiv.org/abs/2308.12435 The brain is a recurrent processor, and RNNs mirror this, performing well at object recognition. In such RNNs, what representations and dynamics underlie the time it takes to classify different images? 🧵 1/10