Andrea Santilli
@teelinsan
Research Scientist in NLP & LLMs | Prev: @Apple, @NousResearch, @BigscienceW, @picampusschool #NLProc
Uncertainty quantification (UQ) is key for safe, reliable LLMs... but are we evaluating it correctly? 🚨 Our ACL2025 paper finds a hidden flaw: if both UQ methods and correctness metrics are biased by the same factor (e.g., response length), evaluations get systematically skewed.
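A toy illustration of the kind of skew the tweet describes (not the paper's actual experiment): if a UQ score and a correctness metric are independent given response length, but both penalize longer responses, their naive correlation looks impressive until the shared length factor is partialled out. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical responses: length is a shared confounder.
length = rng.uniform(10, 200, n)

# A UQ score and a correctness metric that are independent
# given length, but both decrease with response length.
uq_score = -0.01 * length + rng.normal(0, 0.5, n)
correct = -0.01 * length + rng.normal(0, 0.5, n)

# Naive evaluation: it looks like the UQ method "predicts"
# correctness...
naive = np.corrcoef(uq_score, correct)[0, 1]

# ...but the partial correlation controlling for length
# removes almost all of the apparent effect.
r_ul = np.corrcoef(uq_score, length)[0, 1]
r_cl = np.corrcoef(correct, length)[0, 1]
partial = (naive - r_ul * r_cl) / np.sqrt((1 - r_ul**2) * (1 - r_cl**2))

print(f"naive r = {naive:.2f}, length-partialled r = {partial:.2f}")
```

With these synthetic settings the naive correlation is around 0.5 while the length-partialled one is near zero, which is exactly the failure mode a biased evaluation would miss.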
If you are at ICML, don’t miss our latest work on model merging!
Want to merge multiple LLMs into a new SOTA model, using just a desktop GPU? 🧬 Meet MERGE3: an evolutionary merging framework that slashes fitness evaluation costs by 50×! A quick dive into our #ICML25 paper ⤵️
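For readers new to evolutionary merging, here is a minimal sketch of the general idea (not MERGE3's actual algorithm, which is about making the fitness evaluations cheap): evolve a mixing coefficient between two models' weights, scoring each candidate with a fitness function. The toy fitness, weight vectors, and hyperparameters below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for two fine-tuned models' flattened weights.
theta_a = rng.normal(0, 1, 256)
theta_b = rng.normal(0, 1, 256)

# Hypothetical fitness: distance of the merged weights to a
# "secret" target that blends both parents 60/40. In practice
# this would be benchmark accuracy, which is the expensive part.
target = 0.6 * theta_a + 0.4 * theta_b

def fitness(alpha):
    merged = alpha * theta_a + (1 - alpha) * theta_b
    return -np.linalg.norm(merged - target)

# A simple (mu + lambda)-style loop over the mixing weight alpha.
pop = rng.uniform(0, 1, 8)
for _ in range(30):
    children = np.clip(pop + rng.normal(0, 0.05, pop.size), 0, 1)
    both = np.concatenate([pop, children])
    pop = both[np.argsort([-fitness(a) for a in both])][:8]

best = pop[0]
print(f"best alpha ≈ {best:.2f}")  # converges toward 0.6
```

Real merging searches over many more parameters (per-layer weights, multiple parents), so cutting the cost of each fitness call is what makes the search practical on a desktop GPU.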
Are you interested in the intersection of Mathematics and NLP? Consider submitting your paper to #MathNLP 2025: The 3rd Workshop on Mathematical NLP. #EMNLP2025. Submissions will open on June 25! Take a look here for more details: sites.google.com/view/mathnlp20…
Controlling text generation and structure remains a difficult problem to solve. Our newest blog post and release from Researcher in Residence @yaboilyrical explores how this problem becomes solvable using Sequential Monte Carlo approximation. nousresearch.com/steering-the-s…
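To give a flavor of how SMC applies to constrained generation (a generic particle-filtering sketch, not the method in the linked post): keep a population of partial sequences, extend each one token at a time from a proposal, weight by how well the constraint is satisfied, and resample. The toy "language model" and the no-repeated-token constraint below are illustrative assumptions.

```python
import random

random.seed(0)
VOCAB = ["a", "b", "c", "!"]

def lm_step(prefix):
    # Toy stand-in for a language model: uniform over the vocab.
    return random.choice(VOCAB)

def potential(seq):
    # Hypothetical hard constraint: no token may repeat.
    return 1.0 if len(set(seq)) == len(seq) else 0.0

# Sequential Monte Carlo: propose one token per particle,
# reweight by the constraint, resample in proportion to weight.
n_particles, steps = 64, 3
particles = [[] for _ in range(n_particles)]
for _ in range(steps):
    particles = [p + [lm_step(p)] for p in particles]
    weights = [potential(p) for p in particles]
    particles = random.choices(particles, weights=weights, k=n_particles)

# Every surviving particle satisfies the constraint.
assert all(potential(p) == 1.0 for p in particles)
print(particles[0])
```

The appeal over rejection sampling is that invalid prefixes are pruned and replaced at every step, so computation concentrates on sequences that can still satisfy the constraint.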
*Mergenetic: a Simple Evolutionary Model Merging Library* by @teelinsan @DonatoCrisosto1 @EmanueleRodola Cool library to combine state-of-the-art merging techniques for LLMs with evolutionary algorithms. 🙃 arxiv.org/abs/2505.11427