Ajay Subramanian
@ajaysub110
PhD student @nyuniversity. Research intern Meta @RealityLabs. Working on human and machine vision 👨💻🧠 | Runner and tennis player 🏃🎾
Excited to present SimbaV2 at ICML 2025 (Spotlight)! We’ll be sharing how a simple change in network architecture can significantly improve sample efficiency in RL. Come visit our poster from 4:30 to 7:00 p.m. on Tuesday (7/15)!
Check out our new paper “Visual adaptation stronger at the horizontal than the vertical meridian: Linking performance with V1 cortical surface area” in PNAS! @carrasco_lab pnas.org/doi/10.1073/pn…
Excited to start my research internship this summer + fall at @RealityLabs, where I'll be working on vision-language models for robotics! If you're in the Seattle area and would like to meet up, let me know!

Out now in @ChemicalScience! Give it a read if you're interested in molecular design, synthesizability, and/or organic electronics! Co-authors: James Damewood, @junonam_ , @kevinpgreenman , @Avni_Singhal , and @RGBLabMIT. pubs.rsc.org/en/Content/Art…
📢New preprint out! We constrain the molecular generation space to follow the "symmetry" of patented molecules that are likely to be synthesizable. Achieved with "symmetry-aware" fragment decomposition and a constrained Monte Carlo Tree Search generator. arxiv.org/abs/2410.08833
Is it just me or has the quality of @cursor_ai Tab's autocomplete suggestions significantly dropped over the past few weeks? It predicts too many lines at once, and many of them are incorrect or not what I want. Sometimes I have to turn the feature off entirely.
This is now out in @NatureComms !!! nature.com/articles/s4146…
First preprint of my PhD! Thanks to Clayton Curtis for amazing supervision, and to all the cool people in the Clayspace lab and everyone at NYU CBI, without whom this work would not have been possible. biorxiv.org/content/10.110…
Thanks @lexfridman for the engaging chat with @narendramodi. As PM Modi mentioned, India is a country with many languages. Making the podcast available in more languages will widen its reach. 🧵 with snippets in 9 languages. Happy to share full versions with you. Built with love by @SarvamAI
Very proud to share our @iclr_conf paper: TopoNets! High-performing vision and language models with brain-like topography! Expertly led by grad student @mayukh091 and @MainakDeb19! A brief thread..
Thrilled to share that I recently defended my PhD dissertation exploring whether and how resilient asymmetries in processing around the visual field are modulated by microsaccades (tiny fixation eye movements), covert spatial attention, and rapid perceptual learning.
Denis G. Pelli et al. @NYUPsych compare the speed–accuracy tradeoff in object recognition by humans and neural networks. doi.org/10.1167/jov.25…
Out now in Journal of Vision! Measuring inference-time compute capability in neural networks (and humans) before it was cool 😎 doi.org/10.1167/jov.25…
Excited to share my first project as a PhD student! To bring 'time' into model-human comparison, we present a large dataset of timed object recognition and test the ability of dynamic NNs to display a human-like speed-accuracy tradeoff. arxiv.org/abs/2206.08427 Summary: (1/5)
Our paper is out in iScience! In the study, we used TMS to reveal the interaction between visual adaptation and exogenous attention in the early visual cortex. @carrasco_lab cell.com/iscience/fullt…
BREAKING NEWS The Royal Swedish Academy of Sciences has decided to award the 2024 #NobelPrize in Physics to John J. Hopfield and Geoffrey E. Hinton “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”
Just had this crazy mental-association rabbit hole while coding. Saw a variable called rmses (for root mean squared errors), and then went rmses -> Ramesses -> Ozymandias -> Breaking Bad. Is this what it feels like to be an LLM?
Fantastic paper!
✨🎨🏰Super excited to share our new paper Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness Inspired by biology we 1) get adversarial robustness + interpretability for free, 2) turn classifiers into generators & 3) design attacks on vLLMs 1/12
arXiv -> alphaXiv Students at Stanford have built alphaXiv, an open discussion forum for arXiv papers. @askalphaxiv You can post questions and comments directly on top of any arXiv paper by changing arXiv to alphaXiv in any URL!
Peer review at ML conferences is broken. For one of the papers in my batch, my review (which is not abnormally long) is as long as all the other reviewers' reviews combined. If you don't want to put in the slightest effort, tell the AC and don't review!
It was fun digging into the effect of familiarity on face areas in the Anterior Temporal Lobe (amazing work by @bmhdeen) and connecting it to the border literature. Our commentary about it is out in @PNASNews, with @apurvaratan & @nikolasmcneal pnas.org/doi/10.1073/pn…
Check out our @PNAS Commentary on the exciting new findings on face familiarity by @bmhdeen and colleagues. It was super fun to write this with my awesome grad students @alishdipani and @nikolasmcneal. pnas.org/doi/10.1073/pn…