Eugène Berta
@Eugene_Berta
PhD student in Machine Learning with @BachFrancis and Michael I. Jordan, working on uncertainty quantification. Also on 🦋.
I’ll be presenting our paper at COLT in Lyon this Monday at the Predictions and Uncertainty workshop — come say hi if you're around! 👋 Check out @DHolzmueller's thread below 👇 #COLT2025
For good probability predictions, you should use post-hoc calibration. With @Eugene_Berta, Michael Jordan, and @BachFrancis we argue that early stopping and tuning should account for this! Using the loss after post-hoc calibration often avoids premature stopping. 🧵1/
*Rethinking Early Stopping: Refine, Then Calibrate* by @Eugene_Berta @LChoshen @DHolzmueller @BachFrancis Doing early stopping on the "refinement loss" (the original loss minus the calibration loss) is beneficial for both accuracy and calibration. arxiv.org/abs/2501.19195
What if we have been doing early stopping wrong all along? When you break the validation loss into two terms, calibration and refinement, the simplest (and efficient) trick lets you stop training at a smarter point. @Eugene_Berta @DHolzmueller Michael Jordan @BachFrancis
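For intuition, here is a minimal sketch of the idea described in these threads (my own illustration, not the authors' code): evaluate the validation loss after a post-hoc calibration step, with temperature scaling standing in for that step, and use the calibrated loss as the early-stopping criterion so a model that is merely miscalibrated is not stopped prematurely.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax

def calibrated_val_loss(val_logits, val_labels):
    """Cross-entropy on the validation set after temperature scaling;
    this approximates the 'refinement' part of the loss decomposition."""
    def nll(temperature):
        log_probs = log_softmax(val_logits / temperature, axis=1)
        return -log_probs[np.arange(len(val_labels)), val_labels].mean()
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").fun

# Toy check: overconfident logits have a poor raw loss but a much better
# loss after calibration, so they would not trigger early stopping as fast.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=500)
logits = 5.0 * (np.eye(3)[labels] + 0.8 * rng.normal(size=(500, 3)))  # overconfident
raw_loss = -log_softmax(logits, axis=1)[np.arange(500), labels].mean()
print(f"raw val loss: {raw_loss:.3f}  calibrated val loss: {calibrated_val_loss(logits, labels):.3f}")
```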
What if AI isn’t about building solo geniuses, but designing social systems? Michael Jordan advocates blending ML, economics, and uncertainty management to prioritize social welfare over mere prediction. A must-read rethink. arxiv.org/abs/2507.06268…
The COLT Workshop on Predictions and Uncertainty was a banger! I was lucky to present our paper "Minimum Volume Conformal Sets for Multivariate Regression", alongside my colleague @Eugene_Berta and his awesome work on calibration. Big thanks to the organizers! #ConformalPrediction
A great talk about a great paper, check it out 👇
Happy to have our recent papers on conformal prediction with e-values presented at COLT by my advisor @BachFrancis! Full details here: 📚arxiv.org/abs/2503.13050 📚arxiv.org/abs/2505.13732 #COLT2025
Big thanks to the COLT 2025 organizers for an awesome event in Lyon! Here are the slides from my keynote this morning in case you’re curious about the references I mentioned: di.ens.fr/~fbach/fbach_o…
Backward conformal prediction: instead of fixing the desired coverage level α, you fix a constraint rule that (for instance) dictates the prediction set size, and the coverage level α is then adapted to the setup. arxiv.org/abs/2505.13732
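As a rough illustration of the flipped viewpoint (my own sketch, not the paper's algorithm), one can grid-search the miscoverage level α until a split-conformal classifier's average prediction-set size fits a given budget, and report the α that was actually used:

```python
import numpy as np

def backward_conformal_alpha(cal_scores, test_scores, max_avg_set_size):
    """Adapt alpha so the average prediction-set size stays within a budget.
    cal_scores:  (n_cal,) nonconformity score of the true label per calibration point.
    test_scores: (n_test, n_classes) nonconformity score of each class per test point.
    Lower score = more conforming. Returns (alpha, threshold)."""
    n = len(cal_scores)
    for alpha in np.linspace(0.01, 0.5, 50):          # candidate miscoverage levels
        level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
        q = np.quantile(cal_scores, level, method="higher")
        avg_size = (test_scores <= q).sum(axis=1).mean()
        if avg_size <= max_avg_set_size:
            return alpha, q                           # smallest alpha meeting the budget
    return alpha, q                                   # loosest level tried

# Toy usage with random scores.
rng = np.random.default_rng(0)
alpha, q = backward_conformal_alpha(rng.uniform(size=200),
                                    rng.uniform(size=(100, 10)), 3.0)
print(f"adapted alpha = {alpha:.2f}, threshold = {q:.3f}")
```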
Talk today at @InriaStatify by Eugène Berta from @Sierra_ML_Lab. Lots of nice stuff on calibration, isotonic regression, and beautiful ternary plots :-) Always good to see former students do so well in research! Paper: proceedings.mlr.press/v238/berta24a.…
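For anyone curious about the isotonic-regression side mentioned here, a tiny self-contained sketch (synthetic data and variable names are mine) of post-hoc calibration of binary scores with sklearn's IsotonicRegression:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
true_p = rng.uniform(size=2000)                         # ground-truth probabilities
labels = (rng.uniform(size=2000) < true_p).astype(int)
scores = 1 / (1 + np.exp(-4 * (true_p - 0.5)))          # overconfident model scores

# Fit a monotone, nonparametric map from scores to probabilities on a
# held-out split, then apply it to the remaining scores.
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(scores[:1000], labels[:1000])
calibrated = iso.predict(scores[1000:])

print("mean |score - true p| before:", round(float(np.abs(scores[1000:] - true_p[1000:]).mean()), 3))
print("mean |score - true p| after: ", round(float(np.abs(calibrated - true_p[1000:]).mean()), 3))
```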