FedericoRanaldi
@FedeRanaldi
PhD Candidate in Data Science at @unitorvergata | NLP Researcher at @HumanCentricArt
I will be at #ACL2025 with my group presenting 3 conference papers. At the #L2M2 workshop, we will introduce the concept of #protoknowledge as a framework for jointly analyzing the #memorization and #generalization capabilities of LLMs. Non-archival link: lnkd.in/deDqJAxM
Privacy, memorization, multimodal reasoning, and the surge of protoknowledge (non-archival at the L2M2 Workshop)! This is our contribution to #ACL2025NLP to better understand #LLMs. We want to know your POV! See you in Vienna! We are hiring.
More info at: humancentricart.github.io
Are you interested in the intersection of Mathematics and NLP? Consider submitting your paper to #MathNLP 2025: The 3rd Workshop on Mathematical NLP. #EMNLP2025. Submissions will open on June 25! Take a look here for more details sites.google.com/view/mathnlp20…
BlackboxNLP is back! 💥 Happy to be part of the organizing team for this year, and super excited for our new shared task using the excellent MIB Benchmark for circuit/causal variable localization in LMs, check it out! blackboxnlp.github.io/2025/task/
BlackboxNLP will be co-located with #EMNLP2025 in Suzhou this November! This edition will feature a new shared task on circuit/causal variable localization in LMs, details: blackboxnlp.github.io/2025/task If you're into mech interp and care about evaluation, please submit!
For the first time, researchers analyzed data contamination in LLM-aided RTL generation using established methods for contamination detection, showing that data contamination is a critical concern. arxiv.org/pdf/2503.13572
🎉 Excited to announce that our survey paper, "Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions," is now officially published in @TmlrOrg ! 📚 🔗 [Read it here](openreview.net/forum?id=Ss9MT…)
Indeed! It will always be impossible to fairly evaluate closed-source LLMs
The truth, the whole truth and nothing but the truth is critical to the evaluation of any AI model
Thank you for citing our work in your presentation! @esruzzetti @l__ranaldi @Ranaldinho99 @dariutso @Comment98 @HumanCentricArt We did a lot more! Do you want to know more?
On the occasion of #CLiCit2024, I presented Termite (#TexttoSQL Repository Made Invisible To Engines), the first Italian Text-to-SQL dataset. Thanks to my group @HumanCentricArt and to the organisers @AILC_NLP Link: github.com/nexus126/CALAM…

"When the order Matters: Analysis of the Role of Sequence Composition on Language Model Pre-Training" by @l__ranaldi , @Giuli12P2 and @znz8 .
Today @unitorvergata ! A masterful lecture by Avi Wigderson
Very honored to have Turing Award laureate Avi Wigderson here at the University of Rome Tor Vergata. #AviWigderson #unitorvergata

It seems that @Scopus has changed its policy: citations to preprints are no longer counted toward the corresponding accepted papers appearing in Scopus. Is this the case? Please, @Scopus, let us know the reason for this insane policy change!
#UnboxingTransformer If you are at #ACL2024NLP #ACL2024, we are sure: 1) you can try to solve our challenge uniroma2-my.sharepoint.com/:b:/g/personal… 2) and then you can apply for the OPEN POSITION in our lab. Competitive salary in an astonishing location. #ML #NLProc pica.cineca.it/uniroma2/f4-20…
Lab Life ... just after submitting @CLiC_it_conf @AILC_NLP #NLProc
Italian #NLProc I'm honored to know you all (mostly) @AlexLenci1966 @MalvinaNissim @ejezek @RSprugnoli @CLiC_it_conf @AILC_NLP
Back to work after an excellent talk by Professor @Floridi! As a computer scientist, I'm convinced that approaching certain topics through philosophy helps us conduct increasingly human-centered research. #humancenteredAI #infosphere

Hey! #Challenge #UnboxingTransformers #NLProc #ArtificialIntelligence #LLM #MachineLearning #JobOpening Can you force a single-layer transformer with attention to emit a token given an input sequence? You are the right person! Read here: tinyurl.com/FMZJobPost or reshare!