Shreyas Padhy
@shreyaspadhy
PhD student at the University of Cambridge. Ex @GoogleAI Resident, @jhubme and @iitdelhi. I like the math of machine learning & neuroscience. Also DnD.
I will be at my first in-person NeurIPS, presenting 3 posters at the main conference (🧵)! Please get in touch to chat about:
- Diffusion models, sampling and conditional generation
- BayesOpt, GPs and BNNs
P.S. I'll be on the job market early next year, please reach out!
Check out this paper with some really interesting insights, led by the excellent @JiajunHe614 and @YuanqiD. TL;DR: neural density samplers really need guidance, imposed through Langevin annealing, to work well.
Working on sampling and seeking a neural-network ansatz? Longing for simulation-free* training approaches? We review neural samplers and present a "failed" attempt towards it, with pitfalls and promises! Joint work with @JiajunHe614 (co-lead), Francisco Vargas … 🧵1/n
Thanks for the kind words @ArnaudDoucet1 ! I wanted to shout-out some other great work in the same vein as us - arxiv.org/abs/2501.06148 (@julberner, @lorenz_richter, @MarcinSendera et al) arxiv.org/abs/2410.02711 (@msalbergo et al) arxiv.org/abs/2412.07081 (@junhua_c et al)
Tweeting again about sampling: my favourite 2024 Monte Carlo paper is arxiv.org/abs/2307.01050 by F. Vargas, @shreyaspadhy, D. Blessing & N. Nüsken. They propose a "simple" loss to learn the drift you need to add to Langevin to follow a fixed probability path.
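To make the idea concrete, here is a minimal, hypothetical sketch of annealed Langevin dynamics on a toy 1-D problem: we follow a fixed probability path from a broad Gaussian prior to a standard Gaussian target by interpolating their scores. This is an illustration of the general annealing mechanism, not the paper's learned-drift method; all names and the schedule are assumptions.

```python
import numpy as np

def score_target(x):
    # Score (gradient of log density) of the target N(0, 1).
    return -x

def score_prior(x):
    # Score of a broad prior N(0, 10^2).
    return -x / 100.0

def annealed_langevin(n_samples=2000, n_steps=500, step=0.05, seed=0):
    """Run Langevin dynamics along a linearly annealed score path."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 10.0, size=n_samples)  # initialize from the prior
    for k in range(n_steps):
        beta = (k + 1) / n_steps  # annealing schedule, 0 -> 1
        # Interpolated score between prior and target along the path.
        s = (1 - beta) * score_prior(x) + beta * score_target(x)
        # Euler-Maruyama Langevin step: drift + Gaussian noise.
        x = x + step * s + np.sqrt(2 * step) * rng.normal(size=n_samples)
    return x

samples = annealed_langevin()
```

The learned-drift approach in the paper replaces this hand-picked interpolation with a neural correction term trained so the samples actually track each intermediate distribution on the path.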
Come chat with folks from our group!
We're excited to be at #NeurIPS Vancouver! See the papers we'll be presenting at the main conference below:
Atinary @ #NeurIPS in Vancouver this week🍁 Connect with our #AI #ML researchers @VictorSabanza & @shreyaspadhy. Our research paper on Multi-Fidelity Bayesian Optimization (MFBO) will be @ AIDrugX workshop on Dec 15! Full article: arxiv.org/abs/2410.00544 @AtinaryTech #SDLabs
I'll be at NeurIPS next week, presenting our work "A Generative Model of Symmetry Transformations." In it, we propose a symmetry-aware generative model that discovers which (approximate) symmetries are present in a dataset, and can be leveraged to improve data efficiency. 🧵⬇️