Dimitri Meunier, Antoine Moulin, Jakub Wornbard, Vladimir R. Kostic, Arthur Gretton. [stat.ML]. Demystifying Spectral Feature Learning for Instrumental Variable Regression. arxiv.org/abs/2506.10899…
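For orientation, here is a minimal sketch of the classical two-stage least squares (2SLS) baseline that nonparametric IV methods such as this one generalise; the simulated confounded data-generating process and all variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Illustrative confounded setup (assumed, not from the paper):
# U confounds treatment X and outcome Y; instrument Z affects Y only through X.
U = rng.normal(size=n)
Z = rng.normal(size=n)
X = Z + U + 0.1 * rng.normal(size=n)
Y = 2.0 * X - 3.0 * U + 0.1 * rng.normal(size=n)  # true causal effect: 2.0

# Stage 1: regress X on Z to isolate the exogenous variation in X.
Z1 = np.column_stack([np.ones(n), Z])
X_hat = Z1 @ np.linalg.lstsq(Z1, X, rcond=None)[0]

# Stage 2: regress Y on the fitted values X_hat.
X1 = np.column_stack([np.ones(n), X_hat])
beta = np.linalg.lstsq(X1, Y, rcond=None)[0]
print(beta[1])  # close to 2.0, whereas naive OLS of Y on X is biased by U
```

Spectral and kernel IV methods replace both linear stages with learned feature spaces; the paper's contribution concerns which features such two-stage procedures actually learn.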
Very much looking forward to this! 🙌 Stellar line-up.
Announcing: The 2nd International Summer School on Mathematical Aspects of Data Science, EPFL, Sept 1–5, 2025.
Speakers: Bach (@BachFrancis), Bandeira, Mallat, Montanari (@Andrea__M), Peyré (@gabrielpeyre).
For PhD students & early-career researchers.
Application deadline: May 15.
New paper on Stationary MMD points 📣 arxiv.org/pdf/2505.20754 1️⃣ Samples generated by MMD flow exhibit 'super-convergence' 2️⃣ A discrete-time finite-particle convergence result for MMD flow Joint work with Toni Karvonen, Heishiro Kanagawa, @fx_briol, Chris J. Oates
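A minimal sketch of the MMD gradient flow particle update the tweet refers to, assuming a Gaussian kernel and an explicit Euler discretisation; the step size, bandwidth, and toy target are illustrative, and the paper's super-convergence and finite-particle results are not reproduced here.

```python
import numpy as np

def gaussian_k(a, b, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd_flow_step(x, y, eta=0.5, sigma=1.0):
    """One explicit Euler step of the MMD gradient flow:
    each particle x_i moves along -grad_{x_i} MMD^2(x, y)."""
    n, m = len(x), len(y)
    Kxx = gaussian_k(x, x, sigma)            # (n, n)
    Kxy = gaussian_k(x, y, sigma)            # (n, m)
    diff_xx = x[:, None, :] - x[None, :, :]  # x_i - x_j
    diff_xy = x[:, None, :] - y[None, :, :]  # x_i - y_j
    # For the Gaussian kernel, grad_a k(a, b) = -(a - b) / sigma^2 * k(a, b).
    g_attract = (Kxy[..., None] * diff_xy).sum(1) / (n * m)
    g_repulse = (Kxx[..., None] * diff_xx).sum(1) / (n * n)
    grad = 2 * (g_attract - g_repulse) / sigma**2
    return x - eta * grad

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2)) + 2.0  # particles, offset from the target
y = rng.normal(size=(100, 2))        # samples from the target
for _ in range(300):
    x = mmd_flow_step(x, y)
print(np.round(x.mean(0), 2))  # drifts toward the target mean (0, 0)
```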
New preprint out on arXiv: "Self-Supervised Evolution Operator Learning for High-Dimensional Dynamical Systems"! Read it here: pietronvll.github.io/encoderops/
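For context, here is a toy EDMD-style sketch of evolution operator learning with a fixed feature dictionary, fitted by least squares; the paper's self-supervised method learns the encoder instead, and the noisy-rotation dynamics below are an assumed example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamical system (assumed for illustration): a noisy planar rotation.
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.zeros((2000, 2))
x[0] = [1.0, 0.0]
for t in range(1999):
    x[t + 1] = x[t] @ A.T + 0.01 * rng.normal(size=2)

def features(x):
    # Fixed dictionary; self-supervised operator learning trains this map.
    return np.column_stack([x, x**2, x[:, :1] * x[:, 1:]])

Phi0, Phi1 = features(x[:-1]), features(x[1:])
# Least-squares estimate of the evolution (Koopman) operator on the features:
K = np.linalg.lstsq(Phi0, Phi1, rcond=None)[0]
print(np.sort(np.abs(np.linalg.eigvals(K))))  # leading moduli near 1 (rotation)
```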
wanna know how to do inverse Q-learning right? read this paper then!! joint work with the best team of students ever ♥️
new preprint with the amazing @LucaViano4 and @neu_rips on offline imitation learning! when the expert is hard to represent but the environment is simple, estimating a Q-value rather than the expert directly may be beneficial. there are many open questions left though!
Square loss, heavy-tailed noise... not as bad as you think!
New preprint! "Regularized least squares learning with heavy-tailed noise is minimax optimal" joint work with @moskitos_bite, @DimitriMeunier1 and @ArthurGretton
Check out our new result on regression with heavy-tailed noise! Thanks to @gaussianmeasure, @DimitriMeunier1, @ArthurGretton arxiv.org/abs/2505.14214
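A minimal sketch of the setting, using kernel ridge regression (one standard form of regularized least squares) with Student-t noise as one heavy-tailed example; the bandwidth, regularisation, and toy problem are illustrative assumptions, and the paper's minimax analysis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_k(a, b, sigma=0.5):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma**2))

# Illustrative regression problem (assumed, not from the paper):
n = 400
x = rng.uniform(-3, 3, size=n)
y = np.sin(2 * x) + rng.standard_t(df=2.5, size=n)  # heavy-tailed noise

# Kernel ridge regression: alpha = (K + n * lam * I)^{-1} y
lam = 1e-3
K = gaussian_k(x, x)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)

xt = np.linspace(-3, 3, 200)
f_hat = gaussian_k(xt, x) @ alpha
print(np.sqrt(np.mean((f_hat - np.sin(2 * xt)) ** 2)))  # moderate RMSE
# despite frequent large outliers from the Student-t noise
```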
🧠 How do we compare uncertainties that are themselves imprecisely specified? 💡Meet IIPM (Integral IMPRECISE probability metrics) and MMI (Maximum Mean IMPRECISION): frameworks to compare and quantify Epistemic Uncertainty! With the amazing @mic_caprio and @krikamol 🚀
In this work, we introduce the Integral Imprecise Probability Metric (IIPM) framework, a Choquet integral-based generalisation of the classical Integral Probability Metric (IPM) to the setting of capacities. arxiv.org/abs/2505.16156
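A minimal sketch of the discrete Choquet integral that this construction builds on, with a lower probability (pointwise minimum of two measures) as an assumed example of a capacity; everything here is illustrative, not the paper's definition of IIPM itself.

```python
import numpy as np

def choquet_integral(f, capacity):
    """Choquet integral of a nonnegative f on {0,...,n-1} w.r.t. a monotone
    set function `capacity`, evaluated on index arrays (capacity([]) = 0)."""
    order = np.argsort(f)[::-1]              # indices sorted by f, descending
    f_sorted = np.append(f[order], 0.0)
    # Sum of level-set layers: (f_(i) - f_(i+1)) * capacity(top-i indices).
    return sum((f_sorted[i] - f_sorted[i + 1]) * capacity(order[: i + 1])
               for i in range(len(f)))

# Capacity: pointwise minimum of two probability measures, a lower probability.
p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.2, 0.3, 0.5])
lower = lambda A: min(p1[A].sum(), p2[A].sum())

f = np.array([3.0, 1.0, 2.0])
print(choquet_integral(f, lower))  # 1.9 = min(p1 @ f, p2 @ f) here
# Sanity check: with a single measure it reduces to the ordinary expectation.
print(choquet_integral(f, lambda A: p1[A].sum()), p1 @ f)  # both 2.2
```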
Check out our new result on regression with heavy-tailed noise! I learned a lot from this project; thanks to @gaussianmeasure for leading it. @moskitos_bite @ArthurGretton
📢 New Paper on Process Reward Modelling 📢 Ever wondered about the pathologies of existing PRMs and how they could be remedied? In our latest paper, we investigate this through the lens of information theory! #icml2025 Here's a 🧵 on how it works 👇 arxiv.org/abs/2411.11984
New work on nested expectations with kernel quadrature 📣 arxiv.org/pdf/2502.18284 Our method is the most efficient (it uses fewer samples and function evaluations) to reach a given error Δ. 🔍✨
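For contrast, a sketch of the plain nested Monte Carlo baseline that kernel quadrature improves on; the example integrand and its closed-form reference value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nested expectation (illustrative, not from the paper):
#   I = E_X[ g( E_{Y|X}[ f(X, Y) ] ) ],  X ~ N(0,1),  Y | X ~ N(X, 1)
f = lambda x, y: np.sin(x + y)
g = lambda z: z**2
# Closed form for this example: I = exp(-1) * (1 - exp(-8)) / 2 ≈ 0.184

# Plain nested Monte Carlo: N outer times M inner samples, i.e. N*M
# evaluations of f. Kernel quadrature replaces the inner average with a
# weighted sum over far fewer, better-placed points.
N, M = 1000, 1000
X = rng.normal(size=N)
Y = X[:, None] + rng.normal(size=(N, M))
inner = f(X[:, None], Y).mean(axis=1)  # inner estimate of E_{Y|X}[f] per X
I_hat = g(inner).mean()
print(I_hat)  # close to 0.184
```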
Oral #AISTATS25: Robust Kernel Hypothesis Testing under Data Corruption
- Robustify any permutation test to be immune to data corruption of up to X samples
- MMD & HSIC: robust minimax optimality
Monday 5 May: Oral Session 7 (Robust Learning), Poster Session 3
Presented by Ilmun Kim
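A minimal sketch of the standard (non-robust) MMD permutation test that the paper robustifies; the robust threshold adjustment itself is not reproduced here, and the kernel bandwidth, sample sizes, and permutation count are illustrative.

```python
import numpy as np

def gaussian_k(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased (V-statistic) estimate of squared MMD.
    return (gaussian_k(x, x, sigma).mean()
            - 2 * gaussian_k(x, y, sigma).mean()
            + gaussian_k(y, y, sigma).mean())

def mmd_permutation_test(x, y, n_perm=200, alpha=0.05, seed=0):
    """Standard MMD permutation test: reject H0 (same distribution) when the
    observed MMD^2 exceeds the (1 - alpha) quantile of permuted statistics."""
    rng = np.random.default_rng(seed)
    n = len(x)
    z = np.concatenate([x, y])
    stat = mmd2(x, y)
    perm_stats = []
    for _ in range(n_perm):
        idx = rng.permutation(len(z))
        perm_stats.append(mmd2(z[idx[:n]], z[idx[n:]]))
    return stat > np.quantile(perm_stats, 1 - alpha)

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 1))
y = rng.normal(loc=0.7, size=(100, 1))
print(mmd_permutation_test(x, y))  # True: the distributions differ in mean
```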