Jiajun He
@JiajunHe614
PhD student @CambridgeMLG | probabilistic inference; diffusion and generative models | Join our reading group @MolSS_Group
The SPIGM Workshop is back at @NeurIPSConf with an exciting new theme at the intersection of probabilistic inference and modern AI models! We welcome submissions on all topics related to probabilistic methods and generative models---looking forward to your contributions!
🌞🌞🌞 The third Structured Probabilistic Inference and Generative Modeling (SPIGM) workshop is **back** this year with @NeurIPSConf in San Diego! In the era of foundation models, we focus on a natural question: is probabilistic inference still relevant? #NeurIPS2025
@NeurIPSConf, why take away authors' option to provide figures during the rebuttal period? Grounding the discussion in hard evidence (like plots) makes resolving disagreements much easier for both authors and reviewers. Left: NeurIPS…
How do people reason so flexibly about new problems, bringing to bear globally relevant knowledge while staying locally consistent? Can we engineer a system that synthesizes bespoke world models (expressed as probabilistic programs) on the fly?
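For concreteness, here is a miniature of what "a world model expressed as a probabilistic program" can mean: a generative story you can simulate forward and also condition on evidence. The rain/sprinkler scenario and the rejection-sampling inference below are illustrative only, not tied to any specific system.

```python
import random

# A tiny probabilistic program: a generative story of the world that we
# can both simulate and condition on observations. Purely illustrative.

def world():
    rain = random.random() < 0.2
    sprinkler = random.random() < (0.01 if rain else 0.4)
    wet = rain or sprinkler
    return rain, wet

# Condition on the observation "grass is wet" and query P(rain | wet)
# by rejection sampling: keep only runs consistent with the evidence.
samples = [world() for _ in range(100_000)]
accepted = [rain for rain, wet in samples if wet]
print(sum(accepted) / len(accepted))   # ~0.385 analytically
```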
Interested in foundational aspects? Waiting on or unhappy with your NeurIPS reviews? Please consider the NeurIPS workshop DynaFront: Dynamics at the Frontiers of Optimization, Sampling, and Games sites.google.com/view/dynafront… @yuejiec @Andrea__M @btreetaiji @T_Chavdarova ++ Sponsors appreciated!
📢Presenting SDE Matching🔥🔥🔥 🚀We extend diffusion models to construct a simulation-free framework for training latent SDEs. It enables sampling from the exact posterior process marginals without any numerical simulation. 📜: arxiv.org/abs/2502.02472 🧵1/8
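For intuition, here is a minimal sketch of the simulation-free principle the paper builds on, shown in its simplest diffusion (VP/OU) form: because the marginal p_t(z_t | z_0) is Gaussian in closed form, training samples (t, z_t) directly and never calls an SDE solver. The network and shapes are placeholders; this is not the paper's exact SDE Matching objective.

```python
import torch, torch.nn as nn

# Simulation-free training in its simplest form: for the OU (VP) SDE
# dz = -0.5 z dt + dW, the marginal p_t(z_t | z_0) is Gaussian with
# mean e^{-t/2} z_0 and variance 1 - e^{-t}, so we sample z_t in closed
# form instead of integrating the SDE. `denoiser` is a stub network.

denoiser = nn.Sequential(nn.Linear(3, 128), nn.SiLU(), nn.Linear(128, 2))

def simulation_free_step(z0):                   # z0: (batch, 2) data
    t = torch.rand(z0.shape[0], 1)              # uniform time in (0, 1)
    alpha = torch.exp(-0.5 * t)                 # OU mean decay e^{-t/2}
    sigma = torch.sqrt(1.0 - alpha**2)          # matching OU std
    eps = torch.randn_like(z0)
    zt = alpha * z0 + sigma * eps               # closed-form marginal sample
    pred = denoiser(torch.cat([zt, t], dim=-1)) # predict the injected noise
    return ((pred - eps) ** 2).mean()           # denoising score matching

loss = simulation_free_step(torch.randn(64, 2))
loss.backward()
```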
When sampling from multimodal distributions, we rely on multiple temperatures to balance exploration and exploitation. Can we bring this idea into the world of diffusion-based neural samplers? 👉Check out our ICML paper to see how this idea can lead to significant improvements!
Excited to share our new paper accepted at ICML 2025 👉 “PTSD: Progressive Tempering Sampler with Diffusion”, which aims to make sampling from unnormalised densities more efficient than state-of-the-art methods like parallel tempering. Check out the thread below 👇
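For readers new to tempering, here is a bare-bones parallel tempering loop, the classical baseline PTSD aims to outperform: chains run at a ladder of temperatures, and adjacent chains occasionally swap states with a Metropolis correction. The 1-D bimodal target is purely illustrative.

```python
import numpy as np

# Bare-bones parallel tempering on a 1-D bimodal target. Each chain
# targets pi_beta(x) ∝ exp(beta * logp(x)); hot chains (small beta)
# explore, the beta = 1 chain yields the samples we actually want.

def logp(x):                                   # unnormalised bimodal density
    return np.logaddexp(-0.5 * (x - 3)**2, -0.5 * (x + 3)**2)

betas = np.array([1.0, 0.5, 0.25, 0.1])        # temperature ladder
x = np.zeros(len(betas))                       # one chain per temperature
rng = np.random.default_rng(0)

for step in range(10_000):
    # Random-walk Metropolis within each chain.
    prop = x + rng.normal(scale=1.0, size=x.shape)
    accept = np.log(rng.random(x.shape)) < betas * (logp(prop) - logp(x))
    x = np.where(accept, prop, x)
    # Propose swapping an adjacent pair of temperatures.
    i = rng.integers(len(betas) - 1)
    log_ratio = (betas[i] - betas[i + 1]) * (logp(x[i + 1]) - logp(x[i]))
    if np.log(rng.random()) < log_ratio:
        x[i], x[i + 1] = x[i + 1], x[i]
```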
Join us tomorrow afternoon at 4pm (UK time) if you are interested in scalable, simulation-free methods for neural SDE training 🔥
This coming Tuesday (July 1st), @GrigoryBartosh will talk about “SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations” (arxiv.org/abs/2502.02472) 🚀 from 4pm to 5pm (UK time). Join us via Zoom 🔥 us05web.zoom.us/j/7780256206?p…
Looking forward to hearing from @GrigoryBartosh! Please join us if you are also working on time series or sequence data!
Generative modeling of data with multiple modalities (e.g. continuous, discrete, manifold-valued, constrained)? People often tokenize everything into one modality and use an AR transformer. Want an encoder-free, natively multimodal diffusion model? Our #ICML2025 paper offers a general approach: arxiv.org/abs/2506.07903
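A generic sketch of what "native" multimodal diffusion can look like (not the paper's architecture): each modality keeps its own corruption process, Gaussian noising for continuous features and token masking for discrete ones, and a single denoiser is trained on both jointly, with no shared tokenizer. Shapes, vocab size, and the MASK id below are all made up.

```python
import torch, torch.nn as nn, torch.nn.functional as F

# Each modality is corrupted in its native space; one stub denoiser
# predicts both clean targets jointly. Hypothetical shapes throughout.

VOCAB, MASK = 100, 100                            # mask id sits past the vocab
denoiser = nn.Linear(4 + 1, 4 + VOCAB + 1)        # stub joint denoiser

x_cont = torch.randn(32, 4)                       # continuous modality
x_disc = torch.randint(0, VOCAB, (32, 1))         # one discrete token
t = torch.rand(32, 1)

eps = torch.randn_like(x_cont)
xt_cont = (1 - t) * x_cont + t * eps              # continuous: blend toward noise
mask = torch.rand(32, 1) < t                      # discrete: mask with prob t
xt_disc = torch.where(mask, torch.full_like(x_disc, MASK), x_disc)

h = denoiser(torch.cat([xt_cont, xt_disc.float()], -1))
loss_cont = F.mse_loss(h[:, :4], eps)             # regress the injected noise
loss_disc = F.cross_entropy(h[:, 4:], x_disc.squeeze(1))  # recover the token
# (a real objective would restrict the discrete loss to masked positions)
loss = loss_cont + loss_disc
```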
Why do we keep sampling from the same distribution the model was trained on? We rethink this old paradigm by introducing Feynman-Kac Correctors (FKCs) – a flexible framework for controlling the distribution of samples at inference time in diffusion models! Without re-training…
🧵(1/6) Delighted to share our @icmlconf 2025 spotlight paper: the Feynman-Kac Correctors (FKCs) in Diffusion Picture this: it’s inference time and we want to generate new samples from our diffusion model. But we don’t want to just copy the training data – we may want to sample…
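A minimal sketch of the generic Feynman-Kac / sequential Monte Carlo pattern that FKCs instantiate: propagate a particle population with the base sampler, accumulate weights from a potential encoding the desired inference-time tilt, and resample when the effective sample size drops. `base_step` and `potential` here are stubs; the paper derives the exact weights so that the weighted particles target the tilted distribution.

```python
import torch

# Generic Feynman-Kac / SMC pattern: particles follow the base sampler,
# a potential G_t reweights them toward the tilted target, and low
# effective sample size triggers resampling. All functions are stubs.

def base_step(x, t):                 # one reverse-diffusion step (stub)
    return x + 0.01 * torch.randn_like(x)

def potential(x, t):                 # e.g. exp(reward) or an annealing term
    return torch.exp(-0.5 * (x - 2.0).pow(2).sum(-1))

x = torch.randn(512, 2)              # particles from the prior
logw = torch.zeros(512)
for step in range(100):
    t = 1.0 - step / 100
    x = base_step(x, t)                       # propagate with the base model
    logw = logw + potential(x, t).log()       # accumulate FK weights
    ess = logw.softmax(0).pow(2).sum().reciprocal()
    if ess < 256:                             # resample when ESS drops
        idx = torch.multinomial(logw.softmax(0), 512, replacement=True)
        x, logw = x[idx], torch.zeros(512)
```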
(1/n)🚨You can train a model that solves DFT for any geometry, almost without training data!🚨 Introducing Self-Refining Training for Amortized Density Functional Theory — a variational framework for learning a DFT solver that predicts the ground-state solutions for different…
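A hypothetical skeleton of a self-refining loop in the spirit of the paper: the model proposes solutions for sampled geometries, each proposal is refined by directly minimising a variational energy (so the physics, not labels, supplies the training signal), and the refined solutions are regressed back into the model. Every name below is a placeholder.

```python
import torch

# Self-refining loop, all components stubbed: propose -> refine by
# minimising a variational energy -> use refined solutions as targets.

model = torch.nn.Linear(16, 8)                 # geometry -> solution (stub)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def energy(solution, geometry):                # variational energy (stub)
    return (solution.pow(2).sum(-1) - geometry.sum(-1)).pow(2).mean()

for it in range(100):
    geometry = torch.randn(32, 16)             # sample fresh geometries
    proposal = model(geometry).detach().requires_grad_(True)
    inner = torch.optim.Adam([proposal], lr=1e-2)
    for _ in range(20):                        # refine by minimising energy
        inner.zero_grad()
        e = energy(proposal, geometry)
        e.backward()
        inner.step()
    target = proposal.detach()                 # refined pseudo-labels
    loss = (model(geometry) - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```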
Really interesting and inspiring work! Congrats!